Advances in geo-spatial intelligence at LIESMARS

The enhancement of computing power, the maturity of learning algorithms, and the richness of application scenarios make Artificial Intelligence (AI) solutions increasingly attractive for solving Geo-spatial Information Science (GSIS) problems, including image matching, image target detection, change detection, image retrieval, and the generation of data models of various types. This paper discusses the connection and synthesis between AI and GSIS in block adjustment, image search and discovery in big databases, automatic change detection, and detection of abnormalities, demonstrating that AI can be integrated with GSIS. Finally, the concepts of the Earth Observation Brain (EOB) and Smart Geo-spatial Service (SGSS) are introduced, which are expected to broaden the applications of GSIS.


Introduction
Artificial Intelligence (AI) (Russell and Norvig 2002) is increasingly applied in various fields given the development and enhancement of computing power, the maturity of learning algorithms, and the richness of application scenarios. AI is a comprehensive technical science that focuses on the development of theories, methods, techniques, and application systems for simulating and extending human intelligence. Researchers are attempting to understand the essence of intelligence with AI, and to produce a new kind of intelligent machine that can respond in ways similar to human intelligence. The term "Artificial Intelligence" was proposed in 1956 at the Dartmouth Conference (Hamet and Tremblay 2017), which is considered the official birth of the discipline. Decades later, IBM's "Deep Blue" computer defeated the world chess champion, a striking demonstration of AI technology. An increasing number of AI applications target big data, computing power, the Internet of Things, object detection, abnormality and change detection, image interpretation, and robotic mapping.
The significance of AI has risen to the level of national strategy in many countries. The United States issued a series of policies to promote the development of AI (Krishnan 2016). In April 2013, the United States announced a new brain research program to promote innovative neurotechnology, and in January 2014 an NIH working group developed a detailed plan for the next 10 years. The DARPA "Future Technology Forum" was held in October 2015, and CSIS published the National Defense 2045 plan pertaining to AI in November 2015.
In addition, DARPA supports the third "offset strategy" proposed by the United States in February 2016, and the White House established an artificial intelligence committee in May 2016. In Japan, an AI comprehensive development plan was implemented, including the New Robot Project initiated in January 2015. Linked to this plan, Japan established a Research Center of Artificial Intelligence in June 2015 and created the Advanced Integrated Intelligent Platforms Program in early 2016. The Republic of Korea also considers AI one of its five key areas for development (Zhang 2016): it released the Exobrain Plan and a second master plan for intelligent robots, and launched an AI star lab, emphasizing natural language dialogue systems, robot technologies, and AI integration.
China also issued a series of policies related to AI development. In 2015, China announced the "Made in China 2025" plan, which includes a focus on AI (Liu 2016); key documents include "The State Council's Guiding Opinions on Actively Promoting the 'Internet +' Action" and the "Outline of the Thirteenth Five-Year Plan for National Economic and Social Development (Draft)". Also among these directives is the "Three-year Action Plan" for the implementation of the "Internet + Artificial Intelligence" program, issued between 2015 and 2016 (Fan 2006). The rapid development of AI will likely lead the fourth industrial revolution, centered on intelligence, as illustrated in Figure 1.
The development of AI enhances Geo-spatial Information Science (GSIS) (Chen et al. 2009; Li 2012a), especially when combined with advances in big data analysis (Weng et al. 2009; Li et al. 2015; Yang et al. 2017; Zhang et al. 2013), deep learning, and other artificial intelligence techniques. Neural networks were reborn in the 1980s, and AI and GSIS converged as artificial intelligence solved problems such as image matching and map generation in the GSIS field (Voženílek 2009). AI provides sophisticated techniques for GSIS projects and, at the same time, GSIS offers AI vast data sets and a wide scope of applications. Since 2006, a new generation of information technologies, such as the Internet of Things and cloud computing, has been launched, realizing the comprehensive integration of industrialization and informatization (Li and Shao 2009; Li, Shao, and Yang 2011; Li et al. 2014).
The networked world is linked to the real world through the ubiquitous sensor network, forming a new kind of cyber-physical space that can automatically perceive, in real time, the various states and changes of people and things in the real world. Cloud-computing centers can process massive and complex computational problems, and control and generate intelligent feedback. In 2009, countries around the world proposed to build a Smart Earth collaboratively (Mei 2009); however, a Smart Earth cannot be realized without AI in GSIS for the supervision and management of decision-making support.
With the development of AI, more and more GSIS researchers at LIESMARS have integrated AI methods into their architectures (Xin, Li, and Sun 2003; Zhan, Zhang, and Li 2008; Shao et al. 2019; Xiao et al. 2017). In the following sections, we describe applications developed at LIESMARS that combine AI methods with GSIS:
• Large-scale block adjustment
• Image search in big databases
• Automatic change detection in images
• Abnormal target and event detection
• Earth Observation Brain (EOB) and Smart Geo-spatial Service (SGSS)
• An Internet + Spaceborne Information Real-time Service System (PNTRC)
These applications all rely on AI to achieve high performance and make progress toward breakthroughs that integrate various fields. GSIS, together with AI, is crucial for thoroughly understanding changes in time and space; it has solved problems that were difficult in the past and will likely expand possibilities in the future.

Large-scale block adjustment
Block adjustment refers to the simultaneous refinement of the 3D coordinates describing scene geometry, the relative motion parameters, and the optical characteristics of the cameras that acquire images. This technology has long attracted the attention of GSIS researchers. AI technology now permits large-scale storage and block adjustment to tackle large optimization problems; large-scale block adjustment demands fewer computational resources when using parallel processing, such as GPU together with CPU. Large-scale block adjustment of systems onboard the ZY-3 satellite is now solved through AI. The ZY-3 (Li 2012b), launched in 2012, obtains stereo image pairs and produces high-precision digital orthographic maps (DOM) and digital surface models (DSM) (Wang et al. 2017). This satellite provides high-precision spatial references for surveying, land management, and defense. The images from the ZY-3 satellite are used for the "Global Automatic Mapping Major Projects" covering Central Asia, Thailand, Burma, and Germany. Figure 2 shows information about these projects.
AI allowed the use of an entirely uncontrolled regional network for block adjustment of 8810 scenes in the national ZY-3 satellite database (Wang et al. 2013, 2014a; Zhang et al. 2014b). A gross error detection method based on weighted iteration posterior variance estimation was used to automatically select three million strong connection points from two billion matching points. The accuracy of autonomous positioning of remote sensing images was improved from 15 m to better than 5 m, meeting global mapping needs. The results of block adjustment without ground control points (GCPs) for Shandong Province are shown in Figure 3.
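The idea of gross error detection by weighted iteration with posterior variance estimation can be illustrated with a minimal, self-contained sketch. The function name `iterative_reweighting`, the toy linear model, and the specific down-weighting rule below are illustrative assumptions, not the actual ZY-3 adjustment code:

```python
import numpy as np

def iterative_reweighting(A, l, k=2.0, max_iter=50):
    """Toy gross-error detection by iterative re-weighting: after each
    weighted least-squares solve, the posterior unit-weight standard
    deviation (sigma0) is estimated from the residuals, and observations
    whose residuals exceed k*sigma0 are down-weighted."""
    m, n = A.shape
    w = np.ones(m)
    for _ in range(max_iter):
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ l)  # weighted LS solve
        v = A @ x - l                                  # residuals
        dof = max(w.sum() - n, 1.0)
        sigma0 = np.sqrt((w * v**2).sum() / dof)       # posterior variance estimate
        w_new = np.where(np.abs(v) <= k * sigma0, 1.0,
                         (k * sigma0 / np.abs(v))**2)  # down-weight suspected blunders
        if np.allclose(w_new, w, atol=1e-9):
            break
        w = w_new
    return x, w  # parameter estimate and final weights (tiny weight = gross error)

# Toy example: fit a line to noisy observations containing one blunder.
rng = np.random.default_rng(0)
t = np.arange(10, dtype=float)
A = np.column_stack([t, np.ones_like(t)])
l = 2.0 * t + 1.0 + rng.normal(0.0, 0.1, size=10)
l[4] += 50.0                                           # inject a gross error
x, w = iterative_reweighting(A, l)
print(np.round(x, 2), int(np.argmin(w)))               # blunder at index 4 gets the smallest weight
```

As the iterations proceed, the blunder's weight shrinks toward zero, so the final estimate is driven almost entirely by the clean observations.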
Combined block adjustment for ZY-3 images and laser altimetry data also successfully uses AI. ICESat-1 (Li et al. 2016a) was successfully launched in 2003, with a footprint of 70 m in diameter, a planimetric accuracy of 10 m, and an elevation accuracy of 15 cm, enabling combined block adjustment. The elevation accuracy with laser-data-assisted adjustment is significantly higher than without it (Li et al. 2016b). Combined block adjustment can further improve the elevation accuracy of ZY-3 without GCPs to better than 3 m. Figure 4 illustrates the combined block adjustment for ZY-3 images and ICESat-1 data.

Image retrieval from big databases
Driven by demand from both military and civilian fields in GSIS, automatic image retrieval from big remote sensing databases has become an increasingly urgent need and has attracted growing research interest because of its broad applications. Image retrieval methods can be divided into two categories according to how images are described: Text-Based Image Retrieval (TBIR) and Content-Based Image Retrieval (CBIR). TBIR methods were common in early remote sensing image retrieval systems (Shao, Li, and Zhu 2011; Shao et al. 2015). These methods rely on manual annotations and keywords extracted in terms of sensor types, waveband information, and the geographical locations of remote sensing images. This approach requires manual intervention in the labeling process, which makes TBIR time consuming and prohibitive, especially as the volume of remote sensing images increases constantly. Advances in satellite technologies mean that the volume of remote sensing image databases has grown accordingly. TBIR cannot easily cope with large volumes of image data, high feature dimensionality, and short response-time requirements; as a consequence, CBIR has become more and more widely applied in remote-sensing-related fields to keep up with the growing need for automation.
Traditional CBIR algorithms generally contain two steps: feature extraction and image matching. The first step extracts high-dimensional features that represent the whole image, and the second step retrieves the corresponding or relevant images from the image dataset by matching against the query image. Since image retrieval aims at image search and discovery in large-scale tiled remote sensing image databases (Shao et al. 2014), and manual information extraction from remote sensing big data is time consuming and prohibitive, applying AI to image retrieval problems in GSIS brings the power of deep learning and high-performance online retrieval engines to bear on large-scale databases. Many efficient image retrieval algorithms have thus been proposed based on AI and high-performance computing (Liang et al. 2016). A deep-learning-based high-performance online search engine over a large-scale tiled remote sensing image database of 10 million tiles has been developed at LIESMARS, which combines object-level, land-cover-level, and scene-level image retrieval. The retrieval system interface, which integrates keyword retrieval, information lists, and a map engine, is shown in Figure 5.
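The two CBIR steps can be sketched in a few lines. Here the 128-dimensional "deep features" are simulated with random vectors (in a real system they would come from a trained CNN), and the function names `build_index` and `retrieve` are illustrative:

```python
import numpy as np

def build_index(features):
    """L2-normalize database feature vectors so that cosine similarity
    reduces to a plain dot product at query time."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    return features / np.clip(norms, 1e-12, None)

def retrieve(index, query, k=5):
    """Return indices and similarities of the k most similar database images."""
    q = query / max(np.linalg.norm(query), 1e-12)
    sims = index @ q                      # cosine similarity to every tile
    top = np.argsort(-sims)[:k]           # k best matches, most similar first
    return top, sims[top]

rng = np.random.default_rng(42)
db = rng.normal(size=(1000, 128))         # simulated deep features for 1000 tiles
query = db[17] + 0.05 * rng.normal(size=128)   # a slightly perturbed copy of tile 17
idx, scores = retrieve(build_index(db), query, k=3)
print(int(idx[0]))                        # prints 17
```

For databases of millions of tiles, the exhaustive dot product would be replaced by an approximate nearest-neighbor index, but the normalize-then-match structure stays the same.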

Automatic change detection in images
Change detection (Hussain et al. 2013; Radke et al. 2005; Jiang et al. 2007) and analysis is one of the major topics in remote sensing. It is defined by Singh (1989) (Mahmoodzadeh 2007) as "the process of identifying differences in the state of an object or phenomenon by observing it at different times". In GSIS, change detection is concerned with surface component alterations occurring at varying rates, and is used as an evaluation index in many applications, including forestry, damage assessment, disaster monitoring, urban planning, and land management. Remote sensing technology makes change detection fast, automatic, and accurate. Automatic change detection in GSIS can be divided into two categories: 2D change detection and 3D change detection.
(1) 2D change detection
Traditional 2D change detection methods are typically based on classifiers deploying ensemble learning. The ensemble learning method is applied to exploit the advantages of multiple supervised classifiers with multiple object contextual features, and to obtain stable and highly accurate change detection results in urban areas from high-resolution remote sensing images. The image pairs used in change detection, however, may represent the same location differently (Alberga 2009). The different modalities and properties of multi-resolution data thus make ensemble learning methods inappropriate for some change detection problems.
Methods combining AI with other approaches for 2D change detection yield a more robust evaluation of a given pixel. To that end, a change detection algorithm based on unsupervised feature learning was developed, incorporating deep-architecture-based unsupervised feature learning and mapping-based feature change analysis. The learned mapping function can bridge the different representations and highlight changes. Three fully convolutional neural network architectures were developed to perform change detection using a pair of co-registered images (Daudt, Saux, and Boulch 2018); the best-performing network contained extensions of two integrated fully convolutional networks. Figure 6 shows an overview of 2D change detection at LIESMARS based on fully convolutional networks, which goes beyond pixel-based change detection to discover semantic information and create new knowledge from changed objects. Figure 8 shows an overview of change detection based on multi-source SAR sensors at LIESMARS. Together, these results illustrate that AI-based methods can provide high performance for 2D change detection.
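For contrast with the learned approaches above, the classical pixel-based baseline they improve upon (simple image differencing with a global threshold) can be sketched as follows; this is purely illustrative, not the LIESMARS FCN method:

```python
import numpy as np

def change_map(img_t1, img_t2, k=2.0):
    """Pixel-wise change detection by image differencing: mark pixels whose
    difference deviates from the mean difference by more than k standard
    deviations of the difference image."""
    d = img_t2.astype(float) - img_t1.astype(float)
    return np.abs(d - d.mean()) > k * d.std()

rng = np.random.default_rng(0)
t1 = rng.normal(100.0, 5.0, size=(64, 64))      # epoch-1 image
t2 = t1 + rng.normal(0.0, 1.0, size=(64, 64))   # epoch-2: sensor noise only
t2[20:30, 20:30] += 40.0                        # a genuinely changed patch
cm = change_map(t1, t2)
print(cm[20:30, 20:30].all(), cm[:15, :15].any())   # prints True False
```

This baseline works only because the change is a large radiometric shift; differing modalities, registration errors, or subtle semantic changes break it, which is exactly the gap the deep-learning methods address.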
(2) 3D change detection (Qin, Tian, and Reinartz 2016)
Digital Elevation Models (DEM) and 3D city models have become more accessible than ever before. Unprecedented technological developments in 3D data acquisition and generation from space-borne, airborne, and close-range platforms have made image-based and Light Detection and Ranging (LiDAR)-based point clouds widely available. Moreover, 3D change detection has attracted attention for its high measurement accuracy and dynamic visualization suited to decision support (Taneja, Ballan, and Pollefeys 2013; Yang, Fang, and Li 2013). Traditional methods for 3D change detection use separate bundle adjustment processes and need manual intervention, leading to spatial co-registration errors and high false alarm rates.
The integration of AI enables an automatic 3D change detection system with high accuracy and robustness across multi-source sensors. Li et al. (2017b) propose a novel bundle adjustment strategy called united bundle adjustment (UBA) for multi-temporal UAV image co-registration. It automatically achieves high co-registration accuracy, extending the capabilities of consumer-level UAVs to meet the growing need for automatic building change detection and dynamic monitoring using only RGB-band images. Meanwhile, deep-learning-based multi-stereo matching and multi-view matching algorithms are used for change detection in 3D point clouds. Figure 8 shows an overview of 3D change detection at LIESMARS.

Abnormal target and event detection
Abnormal target and abnormal event detection is vital in various applications such as earthquake warning, fire detection, and urban traffic supervision. In the field of GSIS, numerous publications focus on such tasks (Liu 2000; Solheim, Hogda, and Tommervik 1995; Mercier and Girard-Ardhuin 2006). Traditional anomaly detection algorithms can be divided into three categories: model-based techniques, proximity-based techniques (distance metrics), and density-based techniques.
Model-based techniques establish a data model and train a set of model parameters on known samples; in subsequent predictions, the so-called abnormal points are those that cannot be fitted well by the model. Proximity-based techniques define anomalies as objects that are far away from most other objects; when the data can be rendered in a two-dimensional or three-dimensional scatter plot, distance-based outliers can even be detected visually. Density-based techniques compute density relatively directly, especially when there is a measure of proximity between objects: objects in low-density areas are relatively far from their neighbors and may be considered anomalies.
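A minimal proximity/density-style detector scores each point by its mean distance to the k nearest neighbors; points in sparse regions get high scores. This is an illustrative sketch of the category, not a specific published algorithm:

```python
import numpy as np

def knn_anomaly_scores(X, k=5):
    """Proximity-based anomaly score: mean distance to the k nearest
    neighbors; isolated points in low-density regions score high."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    np.fill_diagonal(d, np.inf)                                 # ignore self-distance
    knn = np.sort(d, axis=1)[:, :k]                             # k smallest per row
    return knn.mean(axis=1)

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(200, 2))   # a dense cluster of normal points
X[0] = [8.0, 8.0]                         # inject one isolated anomaly
scores = knn_anomaly_scores(X)
print(int(np.argmax(scores)))             # prints 0
```

The O(n²) distance matrix is fine for small samples; large remote sensing datasets would use a spatial index, but the scoring principle is the same.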
Since the emergence of AI, many researchers (Ravanbakhsh et al. 2018; Wei et al. 2018) have proposed deep-learning-based algorithms for abnormal target and abnormal event detection. In the field of GSIS, high-resolution remote sensing images and videos facilitate numerous disaster reconstruction and rescue operations based on abnormal target and event detection. The results discussed in publications (Deng et al. 2017; Liang et al. 2018), as well as various projects, demonstrate that the combination of AI and GSIS can provide abnormality detection for military and civilian users. Figure 9 shows a result of post-earthquake collapsed house extraction based on multi-feature and multi-kernel learning at LIESMARS, which provides vital information for earthquake reconstruction.
One of the national key research and development plans is the "On-orbit Intelligent Processing Technology" project led by Wuhan University. The team developed a satellite on-orbit processing system based on deep learning and GSIS that can automatically identify, search for, and locate forest fire points and surface vessels from orbit. The system integrates Beidou short message transmission, providing real-time processing and transmission capabilities anywhere in the world. On 21 March 2019, the on-orbit processing system independently identified a forest fire in the Mekong River Basin and extracted the fire information in orbit in only 2.02 s; it took only 13 s from the camera to a short message received at the ground terminal. Figure 10 shows an overview of the forest fire emergency rescue system based on satellite on-orbit processing, which uses infrared remote sensing data to find fires and pass this information to the ground through the Beidou satellite network.
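The thresholding idea behind infrared fire detection can be illustrated with a toy hotspot detector: flag pixels whose brightness temperature sits far above a robustly estimated scene background. This is a drastic simplification; the threshold constant and the median/MAD background estimate below are illustrative assumptions, not the on-orbit algorithm:

```python
import numpy as np

def detect_hotspots(bt, k=5.0):
    """Flag pixels whose brightness temperature is far above the scene
    background; the background level and spread are estimated robustly
    with the median and the median absolute deviation (MAD)."""
    bg = np.median(bt)
    mad = 1.4826 * np.median(np.abs(bt - bg))   # robust std estimate
    return bt > bg + k * mad

rng = np.random.default_rng(3)
bt = rng.normal(300.0, 2.0, size=(128, 128))    # background around 300 K
bt[60:63, 60:63] = 340.0                        # a small 3x3 fire front
mask = detect_hotspots(bt)
print(int(mask.sum()))                          # the 9 fire pixels (plus any rare false alarms)
```

Robust statistics keep the background estimate stable even with fire pixels present, which is why the median and MAD are preferred over the mean and standard deviation here.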

Earth Observation Brain (EOB) and Smart Geo-spatial Service (SGSS)
Artificial intelligence is associated with brain science, cognitive science, psychology, statistics, and computer science. The progress of brain science and cognitive science has a great effect on the development of artificial intelligence, and more researchers across disciplines strive to build machines similar to the human brain. In the field of GSIS, the combination of theoretical knowledge and cognitive science will raise the intelligence level of the entire geo-spatial information network. With brain science and SGSS, remote sensing can automate three processes: massive geo-spatial data acquisition; intelligent geo-spatial data processing and mining; and quick geo-spatial data-driven responses.
Satellite sensors and intelligent onboard processing systems can be regarded as an Earth Observation Brain (Li et al. 2017a). Images are acquired and processed quickly; useful information is automatically extracted and sent directly to the end-users. The EOB concept is shown in Figure 11.
This figure illustrates how a human brain obtains information from the surrounding environment through vision, hearing, and other senses. The information is then transmitted along neurons to the left and right hemispheres, which analyze the surrounding environment and guide behavior. Like the human brain, the EOB can achieve on-board sensing, cognition, and transmission. In addition, a satellite-ground collaborative processing system for task-driven remote sensing imagery can be realized with the EOB concept. Figure 12 shows an overview of this satellite-ground collaborative processing system.
There are three objectives for the EOB to realize:
(1) The EOB is an intelligent earth observation system that simulates the brain's cognitive process. By integrating geo-spatial information science, computer science, data science, and brain cognition science, the EOB will deliver real-time sensing, object extraction, target cognition, change detection, and information transmission for quick responses.
(2) The EOB is a space-air-ground integrated information network linking remote sensing and navigation satellites, communication airships, and aircraft. It will process and analyze information with on-board image processing and satellite-ground collaborative cloud computing to automatically obtain useful information and knowledge for real-time users.
(3) The EOB is a smart service system for geo-spatial information retrieval that will send the right data, information, and knowledge to the right person at the right time and place.
An Internet + Spaceborne Information Real-time Service System (PNTRC)
An Internet + Spaceborne Information Real-time Service System is proposed. It has the following features:
(1) The communication, navigation, and remote sensing satellites, consisting of about 500 high and low earth orbit satellites, will form a space-borne information network with on-board image processing.
(2) The intelligent service index of remote sensing information can reach 0.5 m spatial resolution with a revisit cycle better than 5 min. With PNTRC, any end-user can use his or her smartphone to get real-time responses and services. The Luojia 1-3 satellites have been under test in recent years toward this goal.
In recent years, China has also prioritized the development of brain and cognitive science. In the "National Long-term Scientific and Technological Development Plan (2006-2020)", brain science and cognitive science were recognized as one of the eight frontier scientific fields. In 2012, the Chinese Academy of Sciences launched the strategic pilot science special project (B) "Brain Function Link Map", which lays the basis for a future "Brain Science Plan". The "Brain Science Plan" will promote the development of brain science in China, as well as brain disease prevention and control and artificial intelligence development.

Conclusions
Many applications in photogrammetry and remote sensing can be solved automatically using AI, such as image matching, image target detection, change detection, image retrieval, and Digital Orthophoto Quad (DOQ) and Digital Surface Model (DSM) generation. Yet many applications in photogrammetry and remote sensing remain difficult to solve with AI, such as Digital Elevation Model (DEM) generation from DSM, Digital Line Graphic (DLG) generation from DOQ and Multispectral Scanner (MSS) images, and 3D topological relationship generation for house models.
In the future, spatial cognition systems, such as Earth observation systems, smart cities, smartphone brains, and the Internet + Spaceborne Information Real-time Service, will have broad applications on the Smart Earth. Additionally, through the integration of Global Navigation Satellite Systems (GNSS), remote sensing, and communication, a smart air- and space-borne real-time service system can deliver geo-spatial information to end-user smartphones.

Disclosure statement
No potential conflict of interest was reported by the authors.

Funding
This work was supported in part by the National key R and D plan on strategic international scientific and technological innovation cooperation special project [

Notes on contributors
Deren Li is a professor in the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University. He was elected a member of the Chinese Academy of Sciences in 1991 and a member of the Chinese Academy of Engineering in 1994. He received his bachelor's and master's degrees from Wuhan Technical University of Surveying and Mapping in 1963 and 1981, respectively. In 1985, he received his doctoral degree from the University of Stuttgart, Germany. He was awarded an honorary doctorate by ETH Zürich, Switzerland in 2008.
Zhenfeng Shao is a professor in the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University. He received his bachelor's and master's degrees from Wuhan Technical University of Surveying and Mapping in 1998 and 2001, respectively, and his PhD from Wuhan University in 2004. His research mainly focuses on urban remote sensing applications, including high-resolution remote sensing image processing and analysis, and key technologies and applications from digital cities to smart cities and sponge cities.
Ruiqian Zhang is a PhD student in the School of Remote Sensing and Information Engineering at Wuhan University. She received her bachelor's degree in remote sensing science and technology from Wuhan University, Wuhan, China in 2015, and is currently working toward a PhD in photogrammetry and remote sensing at the same school. Her research interests include image/video processing and object detection.