The future of geospatial intelligence

Abstract For centuries, humans' capacity to capture and depict physical space has played a central role in industrial and societal development. However, the digital revolution and the emergence of networked devices and services are accelerating geospatial capture, coordination, and intelligence in unprecedented ways. Underlying the digital transformation of industry and society is the fusion of the physical and digital worlds – 'perceptality' – where geospatial perception and reality merge. This paper analyzes the myriad forces driving perceptality and the future of geospatial intelligence and presents real-world implications and examples of its industrial application. Applications of sensors, robotics, cameras, machine learning, encryption, cloud computing, and other software and hardware intelligence are converging, enabling new ways for organizations and their equipment to perceive and capture reality. Meanwhile, demands for performance, reliability, and security are pushing compute 'to the edge', where real-time processing and coordination are vital. Big data places new constraints on economics, as pressures abound to actually use these data, both in real time and for longer-term strategic analysis and decision-making. These challenges require orchestration between information technology (IT) and operational technology (OT) and synchronization of diverse systems, data-sets, devices, environments, workflows, and people.

Since the dawn of human history, our ability to make informed decisions about the world around us has been driven by perception - perception of our position and importance relative to our surroundings. Since the dawn of computing, the digital world has remained, well, digital: data in boxes, hard drives, and servers, rarely integrated or analyzed within any larger context. For years, the physical world remained largely separated from the digital world, technology from business, information technology (IT) from operational technology (OT). However, the pace of technological advancement is finally unifying these worlds.
The discipline inherent in capturing the physical dimension of this intersection is the field of geospatial intelligence. This includes the perception, cognition, computation, control, reaction, and understanding of physical features and geographically referenced activities. As technology has evolved alongside this field, capabilities in these six areas have transformed how we use tools to shape change.
'Perceptality', a term we have coined at Hexagon, is the convergence of perception and reality. It is the merging of the digital and physical worlds; the inevitable fusion of real life, objects, and environments with their cyber manifestations; it underlies the digital transformation of industry and society.
Of course, long before the digital age, humans were using technology to capture 'raw' data and information in the field, 'at the edge.' Centuries ago, surveyors traveled to remote locations and recorded angles and ranges on papyrus to depict topography; later they used paper, and later still ticker tape. Extracting information directly from and about the physical world has been central to industrial and societal development. Figure 1 depicts a few examples of geospatial technologies over the centuries.
Hexagon has been a pioneer of capturing data from the edge since long before networked devices and connectivity. Early development of communications equipment and antennae helped us shape the trajectory of mobile phones and radios. Our repertoire of metrology technologies, like laser scanners, portable measuring arms, calipers, theodolites, and tomography, allowed us to redefine precision and quality assurance for industrial manufacturing. From micro-precision to geospatial dimensioning, our software and hardware innovations in the early 2000s pioneered intelligent mapping, spatial awareness, structural monitoring, and industrial plant control and management.
As Moore's Law has forced down computation costs and size constraints, we have continued to accelerate our technological capabilities to capture physical reality with even greater precision. For years now, we have deployed LiDARs, inertial navigation systems, multi-laser systems, precision 3D scanners, integrations with Global Navigation Satellite Systems (GNSS), and many other measurement technologies. With advancements in machine learning, we now use 3D model generators, robotic total stations, multi-imaging sensors, and all manner of computer vision techniques. These capabilities have helped us deliver unprecedented perception and cognition applications across agriculture, mining, construction, government, transportation, and security by creating digital replicas of physical realities.
When organizations can mirror physical realities in the digital world, the capacity for agile and intelligent decision-making is redefined. Situational awareness sharpens and expands, and change detection accelerates. Precision and accuracy become more granular than ever before, which reduces error and waste, mitigates risk and uncertainty, and enables greater speed, reliability, productivity, safety, and security.
Although critical sectors like engineering, manufacturing, agriculture, and others have been increasing their adoption of sensor technology and software systems, many applications remain siloed, disconnected from other data-sets or stakeholders, and generally lagging in returns on investment. In order to fully realize this potential, current models for computing must shift.
Enabling perceptality is not just about the transition from digitized endpoints to fully digitalized workflows and interactions; it is about seamlessly capturing reality and empowering these interactions at the far edges of the network.

Shift to the edge
When we reflect upon the development and evolution of the Internet, what quickly emerges is the ever-evolving direction of network topology and computing architecture. The Internet was born of the mainframe era, a centralized architecture in which a large high-speed processing and memory unit supported multiple workstations. With the rise of personal computers (PCs) and the need for distributed workstations, business logic, simple data, and interfaces to operate as one 'networked system', the client-server model emerged. For more than two decades, this model prevailed, until the massive transformation of user interfaces and computational power enabled the age of mobile.
Centralized cloud computing marked another shift back toward centralized network topology, and the profound scale it enabled. Indeed, cloud computing architectures, software-as-a-service (SaaS) products, and innumerable apps have transformed the way billions of people live, navigate, work, bank, communicate, and interact in society. To enable mobile functionality, flexibility, efficiency, and ubiquitous adoption, cloud computing has emerged as the de facto centralized architecture supporting mobile devices. However, the pace of technological innovation is, yet again, inspiring a pivot. Figure 2 depicts the shifts between centralized and distributed compute over the past few decades.
With each new era, the total addressable market - both users and machine 'nodes' - expands exponentially. The number of users at the height of the mainframe age was around 10 million; this number swelled to 2 billion with the advent of the PC; today there are roughly 4 billion mobile devices (Statista 2017). When sensors pervade any and every object in the physical world, the number of connected endpoints will once again grow exponentially.
The gravity of so much data generated by so many endpoints has rendered centralized computing topologies inadequate. With the push from cloud services and the pull from a rapidly expanding number of connected endpoints, the so-called 'edge' of the network must transform from pure data generation to intelligence generation. Figure 3 offers an overview of where computation and analytics take place depending on application and power requirements.

Practical forces driving connected industries to 'live on the edge'
Just as economic and industrial forces have been awakening to 'digital' disruption, where cloud computing, mobile, and social are redefining operations and business models, another, far greater wave of disruption is emerging. What are today centralized structures - organizational, computational, communications - are quietly undergoing a seismic shift toward decentralized and distributed systems. A number of forces are driving the shift from cloud to edge.

From big data to colossal data
First, it continues to be true that we have created more data in the last two years than in all of human history combined. In 1992, global internet traffic amounted to 100 GB per day (Cisco 2017); by 2015, that number hit 15 billion GB per day. The digital universe is doubling in size every 12 months (Cloud Times 2014). By 2020, it is expected to reach some 44 zettabytes - by some estimates, more bytes in the digital universe than there are stars in the physical universe (IDC and EMC2 2014).

Security and reliability requirements
Enabling perceptality in industry must prioritize security and reliability: securing assets, infrastructure, and people, and ensuring reliability in workflows, to the greatest extent possible. Doing so is foundational to delivering quality of service, supporting certain economic channels, and, most importantly, instilling safety and confidence in the systems themselves.
Edge computing also impacts security and, in some applications, privacy. For one, the decentralized nature of edge networks reduces emphasis on the cloud or central premises as a core 'centralized' computing environment. In many instances, such 'hub and spoke' models are more vulnerable to bottlenecks or failure than are distributed ones, where no single node will necessarily take down the entire network. Second, when data encrypted on the device move closer to the core, security points, firewalls, or other checkpoints can identify tampering more quickly. Finally, in some instances, such as in a smart city context, keeping sensitive data on devices altogether reduces the likelihood that malicious actors will access them, since it is generally far easier to penetrate centralized enterprise IT systems (via malware or phishing, for example) than edge nodes scattered throughout an environment.
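A minimal sketch of the tamper-detection idea above: the device attaches a keyed hash before data leave the edge, and a checkpoint nearer the core verifies it. The key, payload fields, and sensor name are all hypothetical, and a production system would use per-device provisioned keys and full encryption rather than this bare illustration.

```python
import hmac
import hashlib

DEVICE_KEY = b"per-device secret provisioned at manufacture"  # hypothetical key

def sign_payload(payload: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """On the device: attach an HMAC tag before data leave the edge."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_payload(payload: bytes, tag: bytes, key: bytes = DEVICE_KEY) -> bool:
    """At a checkpoint nearer the core: detect any tampering in transit."""
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

reading = b'{"sensor": "S-17", "temp_c": 41.3}'   # illustrative payload
tag = sign_payload(reading)
assert verify_payload(reading, tag)                     # intact in transit
assert not verify_payload(reading + b" tampered", tag)  # altered en route
```

Because verification needs only the shared key and the payload, any firewall or security point along the path can check integrity without consulting the central cloud.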
Even when assets face minimal security threat, reliability is key. Certain remote environments will inevitably have poor connectivity, others highly uncertain conditions, and still others performance with life-or-death consequences. Risking latency to the cloud is not an option. In geospatial applications, it is imperative not only to capture data from the edge, but also to extract value from those data - to function with precision, to monitor safety in spatial layouts, or to alert users of hazards.

From data collection to intelligence and decision-making
As the digitization of society and industry generates unfathomable amounts of data, pressures abound to actually use these data. The integration of digital and physical is not only for accelerating and automating real-time applications, but for decision-making and improvement over time. Despite so much data, IDC and others estimate that some 80-90% of enterprise data is 'dark' data - i.e. data that organizations collect, process, and store, but never actually use (Technopedia 2017). The push to capitalize on what is today (mostly) underleveraged data is one of the biggest reasons applications are demanding intelligence at the edge. After all, investments made to digitalize processes and equipment require business justification.

Performance and energy constraints
The problem with so much data is that existing infrastructure simply cannot handle the rates or the volumes. The proliferating devices collect vast amounts of data, and these data need to be processed in real time - a feat hardly achievable with centralized networks, limited bandwidth, and cloud infrastructure. To the extent high-volume data and content are processed in the cloud today, this places tremendous cost pressures and constraints in the form of bandwidth, latency, storage, energy, and raw computational power. In distributed environments, so-called 'peer-to-peer' networks are utilized to lessen the load on core networks and share data locally (Shi et al. 2016). This is a key enabler for digitalizing energy-constrained environments.
Many Internet of Things (IoT) edge sensors, particularly in industrial settings, must be equipped to operate in regions of low connectivity, often for years on the same battery. Even when energy harvesting is possible, power budgets for these devices are a function of processing capacity. In remote environments, when nodes require high-energy currents both to stay active (i.e. continue sensing, measuring, interpreting) and to transmit data, enterprises face a trade-off between power and performance efficiencies. For data-intensive devices, like video cameras or audio feeds, capabilities to fully harness these data have historically been extremely limited. Although sensing technology itself is advancing rapidly, firmware and CPU designs typically determine power consumption, sleep currents, performance, peripheral functionality, and processing speed.
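The power-versus-performance trade-off can be made concrete with a back-of-the-envelope duty-cycle calculation. All currents, capacities, and wake times below are illustrative, not taken from any particular device:

```python
def battery_life_days(capacity_mah: float, active_ma: float,
                      sleep_ma: float, active_s_per_hour: float) -> float:
    """Estimate battery life for a duty-cycled edge node.

    The node wakes for active_s_per_hour seconds each hour (to sense,
    process, and transmit) and sleeps the rest; the time-weighted
    average current determines how long the battery lasts.
    """
    duty = active_s_per_hour / 3600.0
    avg_ma = active_ma * duty + sleep_ma * (1.0 - duty)
    return capacity_mah / avg_ma / 24.0

# A 2400 mAh cell, 30 mA when active for 36 s per hour, 0.01 mA asleep:
print(round(battery_life_days(2400, 30.0, 0.01, 36.0), 1))
```

The same node kept continuously active would drain the cell in a few days, which is why pushing more processing into short wake windows, rather than streaming raw data, is what makes multi-year battery life plausible.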
In industrial and mission-critical environments especially, the inherent latency in connecting to the cloud renders such a centralized model inadequate, even unsafe. Adjacent technologies in peer-to-peer energy transmission, storage, data compression, and potentially distributed ledger architectures will influence performance by enabling more seamless integration between physical and digital events such as transactions, energy distribution, and authentication.
This level of data volume and performance demands sophisticated data management techniques at every part of the stack, even within 'edge' devices. Geospatial applications, for example, are no stranger to the demands physical conditions place on computing. In technology originally developed in collaboration with NASA, Hexagon offers a single photon LiDAR product that allows 10 times more efficient data processing (more than 1 TB per hour) for airborne applications over any sort of terrain, day or night (Hexagon Geosystems 2017a). Additional onboard airborne sensors support faster compute by compressing giant data-sets during flight so that performance is unfazed, and data are quickly transferred to the cloud once grounded.

Application of sensors on any and every 'thing' -giving our world a digital nervous system
IoT automates operations by automatically gathering information about physical assets such as devices, machines, equipment, vehicles, infrastructure, facilities, and so on. Visibility into status and behaviors enables optimization of control, processes, and resources. Sensors enable devices to capture the physical reality of things through a wide range of functions. Commonly used sensors, often simultaneously applied to the same object, are depicted in Figure 5.
The modern smartphone, for example, has between 8 and 11 sensors, capturing everything from where devices and their users go (GPS), to when the device is held to the ear (proximity sensor), to identifiable biometrics (fingerprint), to how fast the phone is moving (accelerometer). However, these sensors are not so new. Hexagon's geospatial applications have used sensors in conjunction with technologies like LiDAR, radar, GNSS, inertial navigation (INS), and simultaneous localization and mapping (SLAM) systems for years. What's new is the application of these sensors and systems to enable new commercial categories like self-driving vehicles or robotics.
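Fusing two of the sensors mentioned above is a classic on-device task: a gyroscope integrates rotation smoothly but drifts, while an accelerometer gives a noisy but drift-free tilt estimate from gravity. A minimal complementary-filter sketch (all sample values and the blend factor are illustrative):

```python
import math

def complementary_filter(angle_prev: float, gyro_rate: float,
                         accel_angle: float, dt: float,
                         alpha: float = 0.98) -> float:
    """Blend integrated gyro rate (smooth, drifting) with the
    accelerometer's tilt estimate (noisy, drift-free)."""
    return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * accel_angle

def accel_tilt_deg(ax: float, az: float) -> float:
    """Tilt angle implied by gravity components on two accelerometer axes."""
    return math.degrees(math.atan2(ax, az))

# Simulate 1 s of 100 Hz samples while the device tips over:
angle = 0.0
for _ in range(100):
    angle = complementary_filter(angle,
                                 gyro_rate=10.0,                      # deg/s
                                 accel_angle=accel_tilt_deg(0.17, 0.98),
                                 dt=0.01)
print(round(angle, 1))
```

The blend factor `alpha` decides how much the gyro is trusted between accelerometer corrections; this kind of lightweight fusion is exactly the sort of computation that runs on the device itself rather than in the cloud.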
At present, there are an estimated 17.6 billion connected devices online (IHS 2017). In 2016 alone, some 4 billion more connected devices came online. That number is forecast to reach between 20 and 30 billion within just four years (IEEE Spectrum 2017). Manufacturers are adding sensing technology to everything from toys to turbines, cows to coffee makers, and all manner of machinery, appliances, wearables, and far beyond. We are laying the foundation for ubiquitous sensing networks: the interoperability and crowdsourcing of vast networks of professional and non-professional sensors, sensing all manner of dimensions, any place and any time.
The rise of sensing technology is significant not only for the visibility, reality capture, and new services it enables, but also for the massive amounts of data sensors will generate. Increasing performance and analytics at the edge, instead of constantly using resources to communicate data back to the cloud, has a number of subsequent benefits, as depicted in Figure 4.
This confluence of massive volumes and variety of data, the imperative for real-time, agile, and sustainable processing, and the deep need to actually leverage these data signals yet another twist in the evolution of network topology: real-time data processing and service execution will reside at the 'edge' - that is, on the device - while advanced machine intelligence, learning, and longer-term service innovation will develop in the cloud.
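This division of labor can be sketched as device-side reduction: raw samples are summarized locally, and only compact results (plus exceptional events) travel to the cloud. A minimal illustration, with hypothetical field names and thresholds:

```python
import statistics

def summarize_window(readings: list[float], threshold: float) -> dict:
    """Reduce a window of raw sensor readings to a compact summary.

    Only this handful of numbers travels to the cloud; the raw samples
    stay on the device. Threshold crossings are counted so the cloud
    still learns about exceptional events.
    """
    return {
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "max": max(readings),
        "min": min(readings),
        "alerts": sum(1 for r in readings if r > threshold),
    }

# One second of 100 Hz vibration samples reduced to five numbers:
window = [0.1, 0.2, 0.15, 0.9, 0.12] * 20   # 100 raw samples
print(summarize_window(window, threshold=0.8))
```

A hundred raw samples become five numbers, cutting transmission by more than an order of magnitude while preserving what the cloud-side learning actually needs.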

Myriad technological advancements accelerate shift to 'capture reality'
It is not just broader trends around data volumes, processing and utilization that are fostering the sea change in network topology; the diverse and rapid pace of technological innovation is accelerating the shift toward edge processing as well. Indeed, more than any single technology, it is the inevitable confluence of the following wherein lies the greatest prospect for disruption.

Capturing things through sensors and IoT
When we add sensors to something, we grant that 'thing' - object, vehicle, machine, infrastructure, any 'thing' - the ability to communicate about itself, and very often about its patterns of use or its users. For years, organizations have been placing sensors on heavy machinery and vehicles, but within the last five years, sharp declines in cost and significant improvements in connectivity have ushered in a new era of pervasive sensor application. Ubiquitous sensors, connectivity, and networked services, often coined 'IoT', are redefining business and society's visibility into, and therefore understanding of, the physical world.

Put simply - machines can now perceive spatial reality on their own
What are essentially advanced algorithms able to detect patterns, learn about them, and recommend outcomes are responsible for hundreds of new use cases. What follows is a list of applications that are transforming computational capabilities for perceiving the physical world in diverse ways.
• Satellite imagery for geoanalytics
• Object detection, navigation, and search
• Localization and mapping
• Motion detection
• Weather forecasting
• Sensor data fusion in machinery
• Collaborative robots

Hundreds of new use cases emerge when advanced algorithms are trained to detect, classify, and navigate objects, features, and patterns. For many applications, rich data generation requires intelligence at the edge. Through its work in industrial environments, Hexagon has led the development of advanced edge-enabled equipment, not just for onboard data processing, but for learning and adaptation as well. Consider one example from one connected object in one industry: a single autonomous car will generate 1 GB of data per second; an estimated 2 petabytes of data per car per year (Datafloq 2017). The future of geospatial intelligence is about leveraging these data to perceive, compute, analyze, collaborate, learn from, and shape real change.
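As a sanity check on the autonomous-car figures: at 1 GB per second, 2 PB per year corresponds to roughly an hour and a half of actual driving per day, since the sensors generate data only while the vehicle operates.

```python
GB_PER_SECOND = 1         # generation rate cited in the text
PETABYTE_GB = 1_000_000   # 1 PB = 10^6 GB (decimal units)

annual_total_gb = 2 * PETABYTE_GB                 # 2 PB per car per year
seconds_of_driving = annual_total_gb / GB_PER_SECOND
hours_of_driving = seconds_of_driving / 3600
print(round(hours_of_driving))   # roughly 556 hours, i.e. ~1.5 h per day
```

The point of the arithmetic is that even a modest daily drive overwhelms any scheme that ships raw data to the cloud, which is why onboard reduction and inference are unavoidable.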

Capturing perception through machine learning and computer vision
Thanks to recent breakthroughs in hardware speed and significant improvements in algorithms, machine learning and artificial intelligence - fields that have existed (primarily in academia) for decades, but were handicapped by inadequate compute power and oversold expectations - are suddenly undergoing a rapid resurgence.
Artificial intelligence is an umbrella term for a range of algorithmically trained perception-capable computing models, including machine learning, computer vision, natural language processing, deep learning, robotics, planning, and beyond. Advancements in augmented and virtual realities, wherein media can be contextually overlaid on physical or virtual spaces, will also accelerate demand for automated geospatial intelligence in real time. While plenty of hardware and equipment - cameras, LiDARs, radars, satellites, and countless other instruments used for spatial measurement - have been around for years, it is the advent of artificial intelligence (AI) and algorithmically trained learning software that shifts understanding of the physical world from humans only to machines. Machine learning is the catalyst for harnessing the current and oncoming onslaught of data from the digitization of machines and the physical world. Put simply, if machine learning did not exist, we would have to create it.
Sophisticated modeling techniques such as sensor data fusion, situational forecasting, behavior and scenario simulation, and autonomous agent-based decision-making are just a few examples of how constructs like deep learning and neural network architectures are helping enterprises:
• Harness their 'dark' data
• Process unstructured data
• Learn from their data
• Better analyze their data in conjunction with other sources (e.g. third-party, disparate sensors)
• Predict anomalies, malfunction, corruption, even security threats
• Simulate outcomes without risk
• Detect patterns, interdependencies, and relationships beyond human mental bandwidth (or bias)

When combined with high-accuracy sensing for positioning intelligence, dynamic situational awareness, and multi-data-set contextual insights, unprecedented mobility solutions emerge. Hexagon's work in industries like mining and agriculture has led to the development of edge intelligence capable of autonomously grading terrain and integrating this information into real-time workflows as well as planning and resource allocation. Such machine-level vision is foundational to fully autonomous mining or farming operations.
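A minimal example of putting otherwise 'dark' sensor data to work is a simple z-score anomaly test, cheap enough to run on an edge device. The readings, the pump scenario, and the threshold are all illustrative; production systems would use learned models rather than this fixed statistical rule.

```python
import statistics

def flag_anomalies(values: list[float], z_threshold: float = 2.5) -> list[int]:
    """Return indices of readings that deviate strongly from the
    series' own baseline (a simple z-score test)."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []   # perfectly steady signal: nothing to flag
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > z_threshold]

# Steady pump-vibration readings with one malfunction spike at index 6:
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 9.0, 1.0, 1.1, 0.9]
print(flag_anomalies(readings))
```

Run locally, a check like this turns a stream that would otherwise be stored and forgotten into an immediate maintenance alert, with only the flagged events forwarded for deeper cloud-side analysis.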
The gravitational pull of data processing at the edge means the edge takes on a share of intelligence all its own. While the cloud serves as the training center, handling deeper, ongoing learning, simulation, and recommendation, the edge point of access (the device) becomes the object of improvement and update. These updates could come in the form of software updates or patches delivering better sensing, smarter data curation, more accurate inferences, and more automated actions and decision-making.
For instance, its 'self-learning' total stations (shown in Figure 6) can be used in even the harshest environments, automatically adapting to local conditions by separating relevant system reflectors from other reflectors on the job site. The robotic total station automatically searches for, aims at, and follows targets, collecting measurements and staking out areas of hundreds or thousands of points. The software on these devices turns complex data into workable 3D models right in the field, while working in conjunction with cloud-based software for deeper data mining and modeling back in the office.
Hexagon's IMAGINE Photogrammetry product is used in numerous geospatial applications for real-time object recognition and machine vision. For instance, simple applications like scanning floorplans help quickly delineate boundaries with extreme precision. The same distributed processing and onboard machine vision is also used in more complex applications, like filtering moving objects in a street-view scene for navigation, safety, and autonomous decision-making. When machines themselves are able to autonomously perceive the world around them, the costs, latency, distance, and unreliability of cloud connectivity no longer suffice. Referring back to our example of a self-driving vehicle: driving, navigation, and object detection data must be processed locally, as even the tiniest amount of latency can be a matter of life or death. This reiterates another important driver of edge computing: mission-criticality.

Capturing intelligence through big data learning and synchronization
Although capturing physical objects through sensing technology and machine perception is demanding computational agility and reliability at the edge, a far greater driver of this technological decentralization is making sense of data at scale. Not only does the volume, velocity, and variety of data demand more real-time and agile processing; so too does the need to learn from these data. Consider the diverse endpoints in a mining environment:
• Inside mine and outside mine
• Assets like stockpiles, instruments, and other equipment
• Mobile devices, tablets, workstations, etc.
• Vehicles such as trucks and tractors
• Satellites, cameras, and antennae
• Workers on-site and off
• Conditions (e.g. roads, weather)
• All workflows, communications, connectivity, analytics, interoperability, etc.

Challenge: lack of integration and connectivity stifles digging deep
Extractive industries such as mining generate enormous amounts of geospatial data, but mines are often remote, and limited access to these data can hinder analysis and action. The challenge in mining is thus one of managing complex engineering information and constantly changing, sometimes unpredictable landscapes, and of coordinating information across disparate stakeholders and environments. Given the diversity of mining operations, the industry has traditionally had to rely on an array of point solutions as 'patches' to disparate problems. When there is one solution for blast management, another for fleet management, and another for environmental monitoring, not only is the circulation of information fragmented, but the hidden relationships and insights these data could reveal are lost when data and capabilities are not integrated.

Solution: mining for insights keeping data on the surface
The solution to more productive, safe, and intelligent mining is not just about integration, but about powering nodes 'at the edge' to reliably and efficiently transmit these data. It is critical that stakeholders have the power and mobility to search and analyze these data from any application, even in a disconnected mode. Our experience in this industry finds that what mines really need is an integrated solution uniting surveying, design, fleet management, production optimization, and collision avoidance in a life-of-mine solution that connects people and processes and augments safety, productivity, and decision-making.

As depicted in Figure 7, the feedback loop between cloud and device accelerates performance, reliability, and more efficient data processing over time. Through sensors that capture the world as-is or as-built and software that interprets the captured data, organizations are able to better manage real conditions and take immediate action. With accurate and up-to-date digital depictions of what's going on in the real world, they are able to derive insight, ask relevant questions, and manage extensive and complex enterprise-wide information. Reducing the time between information extraction and data-informed action is the foundation for shaping smart change.

Advanced industrial applications illustrate imperative and opportunity
The accelerated and intelligent convergence of digital and real-world reality capture - perceptality - will be empowered by distributed computing at the 'edge' and value-added services in the cloud. For many industrial contexts, edge computing enables reliability, risk mitigation, situational awareness, and synchronicity across the production chain. Hexagon's legacy in geospatial awareness, metrology, and real-world data management has afforded us the expertise and opportunity to orchestrate connectivity, data fusion, and reliable process automation across incredibly complex industrial environments. What follows are three case examples of environments in which edge networking is transforming capabilities for Hexagon customers.

Smart digital mine
Mining is an ancient and essential industry, extracting minerals and other geological materials and resources from the earth to sustain populations and infrastructure. In an environment handling precious natural resources, it has never been more important to channel (big and small) data in a way that maximizes their usefulness in real time, when and where they are needed.

Smart digital construction project
Construction is the process of building infrastructure. While construction companies are accelerating their adoption of information technologies to help manage the complexities inherent in their work, many tools have remained in disparate silos and efforts have proven to be more disruptive than helpful or cost-effective.

Challenge: laying the groundwork for coordinated construction
Mid-size and large-scale construction and infrastructure projects are some of the most complex endeavors to coordinate. The largest projects often require years of planning and execution and involve thousands of people, tens of thousands of interdependent tasks, and millions, if not billions, of dollars of investment. Regardless of size, every construction project is unique and demands extensive orchestration, beginning with strategic design, preparing the construction plan, budgeting, communicating detailed instructions, tracking variances, and coordinating teams and workflows. Plans for successful outcomes are often compromised by lack of clear scope, incomplete designs, and data entry errors in estimating and scheduling - never mind the realities of unexpected events or limitations. These projects are almost always subject to delays and cost overruns that erode both profitability and reputation.

Solution: clarity, connectivity, and simplicity from the ground up
Any modern construction site and its output must be designed to function as a large-scale information system. As such, edge computation is a requirement for agile connection, real-time service, data optimization, and application intelligence, as well as security and privacy protection. Today, intelligent construction is about coordinating endpoints from the ground up, from day one.
Numerous Hexagon clients leverage our expertise and SMARTBuild solution to connect all relevant project information - from CAD drawings, 3D models, specifications, schedules, materials, workflows, and instructions, to devices, machinery, and so on - to the thousands of people employed on the project and the millions of tasks to manage. Advanced geospatial techniques sometimes offer new solutions to old problems: in one customer example, construction engineers used high-accuracy camera data to position drilling machines on both sides of a mountain so that the tunnels met in the middle with centimeter accuracy. By feeding construction models and layout points to robotic total stations, builders can streamline the process from planning to execution, quickly and accurately locating the building elements they need. Communication 'at the edge' between devices, machines, people, and processes is essential for identifying issues or anomalies in order to take corrective action before costlier problems arise.
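At its core, feeding a layout point to a robotic total station means converting a design coordinate into polar setting-out values: a direction, a horizontal distance, and an elevation change from the instrument. A simplified sketch, with illustrative coordinates; real instruments apply many further corrections (scale, curvature, refraction) that are omitted here:

```python
import math

def stakeout(station: tuple, target: tuple) -> tuple:
    """Convert a design coordinate into the polar setting-out values
    a robotic total station drives to when staking a layout point.

    station and target are (easting, northing, elevation) in metres.
    Returns (azimuth from north in degrees, horizontal distance in
    metres, elevation difference in metres).
    """
    de = target[0] - station[0]
    dn = target[1] - station[1]
    dz = target[2] - station[2]
    azimuth_deg = math.degrees(math.atan2(de, dn)) % 360.0
    distance_m = math.hypot(de, dn)
    return azimuth_deg, distance_m, dz

# Station at a control point, staking a column base from the model:
az, dist, dz = stakeout(station=(1000.0, 2000.0, 50.0),
                        target=(1030.0, 2040.0, 50.5))
print(round(az, 2), round(dist, 2), round(dz, 2))
```

Note that survey azimuths are measured clockwise from north, hence `atan2(de, dn)` with easting first, the reverse of the mathematical convention.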
Forward-thinking mining companies are using powerful positioning intelligence technologies such as GNSS, LiDAR, antennae, satellites, GIS data, image detection, and navigation software to capture and map real-time environmental dynamics. Remote sensing monitors and communicates about difficult- or dangerous-to-reach areas, determining surface features, vegetation variation, and changes in infrastructure, even pinpointing the location of mineral outcroppings or suspect disturbances. Our fatigue monitoring system is an operator-friendly, unobtrusive monitoring and alert system that uses onboard algorithms to assess current and impending driver fatigue levels to improve driver safety, prevent vehicle incidents, and improve mining productivity (Hexagon Geosystems 2017b). In mining applications relying on satellite data, GNSS systems are preconfigured to constantly select the most appropriate positioning methods depending on which satellite and communication constellations are most readily available in the area of operation.
Generating actionable reports from these data is also central. Through integration, advanced modeling, and powerful 3D visualization tools, these same technologies help miners 'see' across large areas. For example, through repeated intervals of multi-spectral imagery, miners can measure changes to entire pits, quantify and monitor stockpiles, track the number of hectares disturbed, even manage scheduled contracts. Combining image, sensor, and mapping data, these systems create land use or land cover maps, help ensure compliance with governmental standards, and help lower disturbance fees. They aid operational and environmental safety and enable more agile coordination across teams, ensuring mine planning and operations stakeholders have access to the latest 'full-picture' data integrated across endpoints and workflows.
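The change measurement described above, comparing repeated surveys of the same area, can be reduced to a toy example: estimating stockpile volume change from two elevation grids. The grid values and cell size are invented; a production pipeline would work on georeferenced rasters or point clouds.

```python
# Toy stockpile-volume estimate from two survey epochs, assuming each survey
# is a regular elevation grid (meters) over the same extent.
CELL_AREA_M2 = 4.0  # 2 m x 2 m grid cells (assumed)

def volume_change(grid_before, grid_after):
    """Sum per-cell elevation differences times cell area (cubic meters)."""
    total = 0.0
    for row_b, row_a in zip(grid_before, grid_after):
        for z_b, z_a in zip(row_b, row_a):
            total += (z_a - z_b) * CELL_AREA_M2
    return total

before = [[10.0, 10.0], [10.0, 10.0]]
after  = [[10.5, 10.5], [10.0, 10.0]]
# Two cells gained 0.5 m of elevation: (0.5 + 0.5) * 4 m^2 = 4 m^3 added
```

Repeating this differencing at each survey interval yields the trend lines (material moved, hectares disturbed) that feed the reports discussed above.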

Impact: unearthing data's potential
Broadly speaking, these technologies impact miners by:
• Integrating and communicating what change has occurred and where
• Producing maps and reports for collaboration as well as compliance
• Integrating data into advanced modeling systems for mine planning and operations
• Determining any long-term effects on the environment
• Optimizing costs and efficiency through workflow optimization, risk mitigation, and maximum output
• Simplifying the creation and management of workflows suitable for any project environment, including BIM-compliant projects
• Avoiding errors in the field and mitigating costly rework through integrated work plans that deliver detailed directions on work execution

The smart digital plant
Plants are often legacy infrastructure, requiring extensive re-engineering and rehabilitation and carrying high financial and safety risks to maintain. The smart digital plant is not just about digitizing every element of plant assets and facilities, but about fusing these digital and physical realities to accelerate efficiency and ongoing optimization.

Challenge: disorganized legacy information hinders digital transformation
Before addressing the challenges associated with connectivity and coordination across assets, many plant operators must address core issues of legacy information management. Data and documents may have been created across decades of the facility's life cycle, sourced from various contractors using different design and data management tools and standards. Some documents may only exist in hardcopy. And there may be dozens of versions (or even multiple copies of the same version) of any given document, drawing, model, list, or datasheet in various locations, making it unclear which version accurately represents the current configuration. With limited engineering, administrative, and IT personnel on a brownfield site, organizing and keeping track of this legacy information is a significant challenge, especially when the operational asset is subject to continual updates, revamps, shutdowns, and maintenance changes. As a result, information is difficult to find when it is needed most, such as for shutdown planning, project evaluation, incident investigation, modifications, revamps, compliance audits, and facility start-up. Unstructured, unreliable information undermines all of these activities.

Consider the diverse endpoints in a construction environment, some of which are depicted in Figure 8.
• On-site and outside-of-site
• Assets like building materials, metrology instruments, and other equipment
• Mobile devices, tablets, workstations, etc.
• Vehicles such as trucks, cranes, loaders, and bulldozers
• Scanners, robotic total stations, satellites, cameras, and antennae
• Workers on-site and off
• Conditions (e.g., ground, roads, and weather)
• All workflows, communications, connectivity, analytics, interoperability, etc.
Capturing and intuitively visualizing these data are key for empowering role-based, BIM-compliant information-sharing to estimate, model, and track actual costs and specs against RFIs and changes in order to mitigate risks. Project engineers can rely on connectivity and readily access data collected from every part of the job site; stakeholders and executives benefit from a centralized repository of designs, models, documents, and other materials, making it easier to manage projects in progress, avoid errors, and improve safety, efficiency, and profit margins. Ultimately, what begins as a digital construction project, enabled by agile information flow, lays the foundation for its output: a digital asset, an intelligent building.
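The cost-tracking side of this, comparing actuals against estimates per work package, is conceptually simple, as the sketch below shows. The package names and figures are invented for the example; a real system would roll these up from the centralized project repository.

```python
# Illustrative roll-up of actual costs against budget per work package;
# package names and amounts are invented for the example.
def cost_variances(budget, actuals):
    """Return {package: actual - budget}; positive means over budget."""
    return {pkg: actuals.get(pkg, 0.0) - planned
            for pkg, planned in budget.items()}

budget  = {"foundation": 120_000.0, "steel": 300_000.0}
actuals = {"foundation": 131_500.0, "steel": 284_000.0}
variance = cost_variances(budget, actuals)
# foundation is 11,500 over budget; steel is 16,000 under
```

Surfacing such variances in real time, rather than at the next reporting cycle, is what allows changes and RFIs to be priced and mitigated before they compound.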

Impact: smart building enables smarter buildings
Our experience finds that a solution to power this level of coordination augments efficiency and improves profit margins in the following ways:
• A single solution for real-time tracking, managing, and reporting of time, machinery, materials, workers, performance, progress, forecasts, budgets, etc.
• Less or no need for complicated and expensive software and plug-ins, thanks to an end-to-end construction management solution incorporating all endpoints, edge devices, and workflows
• Increased safety and regulatory compliance through timely access to information
• Off-site access and team collaboration, avoiding the travel costs and hazards associated with on-facility work
• The fastest way to establish a single point of access to all engineering information

To meet market, productivity, and safety requirements, organizations are using new tools to monitor and detect change and anomaly across just about every aspect of industrial environments, infrastructure, and operations. Environmental and infrastructure awareness is not just about capturing sensor data and imagery, but about feeding these data into programs and workflows for a comprehensive view of what happened and what is happening, where, when, and why, and triggering or automating actions and reports for optimization.
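One minimal form of the anomaly detection mentioned above is flagging sensor readings that deviate sharply from their recent history. The window size, threshold, and sample values below are assumptions for the sketch, not parameters of any deployed system.

```python
import statistics

# Minimal sketch of edge anomaly detection: flag readings that deviate more
# than k standard deviations from a trailing window (parameters assumed).
def detect_anomalies(readings, window=5, k=3.0):
    alerts = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mean = statistics.fmean(history)
        sd = statistics.pstdev(history)
        if sd > 0 and abs(readings[i] - mean) > k * sd:
            alerts.append(i)  # index of the anomalous reading
    return alerts

# Invented vibration readings; the spike at index 5 should be flagged.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 5.0, 1.0]
```

An edge deployment would run such a check on-device and emit only the alerts, which is precisely what keeps bandwidth and response latency manageable at scale.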

Conclusions
The singularity of perception and reality relies on distributed computing to capture reality and shape intelligent change. A true industry pioneer in the world's leading geospatial and metrology technologies and concepts, Hexagon supports the perception, cognition, computation, control, reaction, and, most importantly, learning from diverse digital perceptions of physical realities: 'perceptality'.
Shaping change in industrial environments is about enabling digital and workflow connectivity, integrating tools, automating workflows, coordinating diverse nodes and needs through usable data visualization, and, most importantly, transforming organizational currency from raw data to true geospatial intelligence.