Harnessing the power of immersive virtual reality - visualization and analysis of 3D earth science data sets

The availability and quantity of remotely sensed and terrestrial geospatial data sets are on the rise. Historically, these data sets have been analyzed and queried on 2D desktop computers; however, immersive technologies, and specifically immersive virtual reality (iVR), allow for the integration, visualization, analysis, and exploration of these 3D geospatial data sets. iVR can deliver remote and large-scale geospatial data sets to the laboratory, providing embodied experiences of field sites across the earth and beyond. We describe a workflow for the ingestion of geospatial data sets and the development of an iVR workbench, and present the application of these for an experience of Iceland's Thrihnukar volcano where we: (1) combined satellite imagery with terrain elevation data to create a basic reconstruction of the physical site; (2) used terrestrial LiDAR data to provide a geo-referenced point cloud model of the magmatic-volcanic system, as well as the LiDAR intensity values for the identification of rock types; and (3) used Structure-from-Motion (SfM) to construct a photorealistic point cloud of the inside of the volcano. The workbench provides tools for the direct manipulation of the georeferenced data sets, including scaling, rotation, and translation, and a suite of geometric measurement tools, including length, area, and volume. Future developments will be inspired by an ongoing user study that formally evaluates the workbench's mature components in the context of fieldwork and analysis activities.


Introduction
Over the last two decades there has been a dramatic increase in the collection, archiving, and open access of remotely sensed and ground-based geospatial data sets (Whitmeyer, Nicoletti, and de Paor 2010; Pavlis and Mason 2017). For example, the Shuttle Radar Topography Mission (SRTM) provides 30 m digital elevation models (DEMs) for most of the globe (56°S to 60°N), and airborne and terrestrial Light Detection and Ranging (LiDAR) and Structure-from-Motion point clouds and derived products are being collected and utilized across the earth sciences 1 . These data sets allow researchers to study everything from processes related to plate tectonics and active volcanism to the structure of past societies. In the case of remotely sensed data, regions that may have once been inaccessible are now available for scientific inquiry. However, geospatial data, which are predominantly three-dimensional and/or time varying, are often manipulated, integrated, visualized, and/or analyzed on two-dimensional displays (e.g. on computer screens), and researchers interrogate the data statically. Although these methods deliver results, researchers may not be utilizing their geospatial data sets to the fullest. Immersive Virtual Reality (iVR) refers to systems using external tracking sensors to enable motion tracking of 3D glasses or head-mounted displays (HMDs). Through iVR the virtual world is projected onto the floor and walls or rendered directly in HMDs by obtaining the user's head orientation and position in real time. Leveraging tracking sensors, locomotion is translated into the virtual world by physically walking and turning around, and bodily sensations arise from the coupling of visual changes and body actions. iVR systems allow for the visualization, integration, manipulation, and querying of geospatial data through embodied experiences.
iVR provides researchers the ability to visit regions on earth and throughout the solar system, and to explore the full 3D characteristics of these rapidly expanding data sets.
Earth scientists have introduced iVR technologies to address issues present in conventional earth science visualizations and analyses (e.g. Kreylos et al. 2006). iVR renders geospatial data as 3D models and/or stereoscopic imagery within the context of the physical world. These digital representations preserve, or partially preserve, the visual and spatial characteristics of the actual location, and users have the freedom to navigate from an egocentric perspective in ways similar to what they experience in the actual field (Granshaw and Duggan-Haas 2012). For example, Head et al. (2005) developed a system called Advanced Visualization in Solar System Exploration and Research (ADVISER) for visualizing planetary geospatial data. This system creates a basic reconstruction of a planetary region, in this case Mars, by combining satellite imagery with high-resolution digital terrain models (DTMs) and projecting the computer-generated representation onto the floor and walls of a room-sized cube (i.e. cave automatic virtual environment, or CAVE 2 ). Users wearing 3D glasses are able to look around and see the terrain information they would expect from being on the surface of Mars. Additionally, ADVISER offers users a field kit and virtual field instruments for measuring, probing, or performing field observations and measurements on Mars. The field kit is analogous to the tools commonly used by earth scientists in the field to measure the geometry of geologic units and structures (e.g. a Brunton compass to measure strike and dip, and an altimeter to measure the elevation of any point chosen in the data set and the relative elevation of any pair of points of interest). The virtual field instruments are analogous to the additional tools that earth scientists carry in the field, such as a Personal Data Assistant (PDA) with built-in Global Positioning System (GPS) and cameras for instant navigation, geologic mapping, and automated data display and recording.
Multiple data views are organized as a workbench, with each view providing its own 3D interface for interpreting an earth science data set. Such workbenches motivated various researchers to incorporate their own data sets into a well-established data processing standard for 3D visualization and analyses. Also, the ability to navigate through the data space with a 3D input device allows users to inspect the data at scale, as well as see the environment from a novel perspective; this capability can reveal more than viewing a small model on a desktop computer. Kreylos et al. (2006) developed an immersive point cloud visualization tool called LidarViewer, which integrates display configuration with 3D interaction devices on iVR platforms (CAVE or HMD 3 ). With the LiDAR data captured from airborne or terrestrial sensors loaded and visible in a 3D immersive environment, users can select a subset of data points, determine the distance between points or planes, and perform real-time 3D navigation through the data set. Users visualize the data as though actually present in the location where the data were collected. By showing multiple temporal or animated views, users are provided with insights into a particular phenomenon while the surrounding context is largely preserved. Users can therefore, for example, experience a seasonal change of wooded areas (Sherman et al. 2014) or a land mass change due to earthquakes or landslides (Kreylos et al. 2006; Glenn et al. 2006; Jianping and Huanzhou 2012).
In summary, it is desirable to create visualizations and measurement tools that users can operate via gestures and intuitive user interfaces to interact with multiple data sets for a single environment (e.g. modular digital earth applications as described by Martinez-Rubi et al. 2016). With hand-held input devices, users in an iVR experience can transform digital models (i.e. position, orientation, and scale). This enables the perception of large-scale objects from a single viewpoint and the observation of fine structures from multiple perspectives. Leveraging iVR tracking systems, users are able to use their body-sensor cues to perceive the size change of modeled entities in the virtual environment. This embodied experience allows users to keep track of the external environment scale in terms of their internal body scale and is therefore expected to remedy the effects of scale disorder (Dede 2009; Shipley et al. 2013).
In this article, we detail the design of an iVR workbench using consumer grade immersive technologies that allows researchers using geospatial data to visualize, manipulate, and make quantitative observations inside the virtual environment. We extend previous approaches such as ADVISER (Head et al. 2005) and LidarViewer (Kreylos et al. 2006; Kreylos, Bawden, and Kellogg 2008) by streamlining the data ingestion, visualization, and quantitative exploration for heterogeneous and multi-source data sets. The goals of our approach are to develop: (1) workflows for the ingestion of geospatial data sets; (2) a workbench that allows users to integrate, visualize, and manipulate 3D data sets; and (3) a workbench that allows users to make quantitative geometric measurements. Leveraging high-resolution geospatial data, our iVR workbench is able to construct a high-fidelity environment in which users can apply visual and quantitative approaches to detect features in geospatial data (e.g. the offset across an active fault or the geometry of archaeological ruins). In the remainder of this paper, we describe our approach to visualizing high-resolution geospatial data and detail a general workflow for importing geospatial data from published, open-access, or self-collected data sets into iVR environments. We then describe the combination of virtual tools and the logic behind the interaction design. We then present an immersive workbench applied to a geospatial data set of the Thrihnukar volcano, Iceland (LaFemina et al. 2015). We finally discuss future applications of our iVR workbench and challenges in the present work.

Data import workflows
We selected Iceland's Thrihnukar volcano as an example for an iVR experience using our own geospatial and geological data, as well as published data. Thrihnukar is a small volume, monogenetic volcano located in the Thrihnukagigur system, a group of three monogenetic volcanoes that are part of the Brennisteinsfjöll fissure swarm in southwestern Iceland (Figure 1, top). Thrihnukar formed during a fissure eruption ~3,500 years before present (ybp) (Saemundsson 2006, 2008). During the eruption, the rising magma assimilated parts of an older cinder cone at ~120 m depth, forming a cave beneath the volcanic cone (Hudak 2016). At the end of the eruption, lava flowed back into the system, leaving behind an open, upper magmatic conduit and the cave (Figure 1, bottom left). This open system provides a unique opportunity to investigate the internal plumbing system of a monogenetic volcano and quantify parameters (e.g. conduit radius and feeder dike width) that are often fixed in numerical magma transport models. In 2012, we mapped the interior of the cave using classic geological mapping and sampling techniques, combined with terrestrial laser scanning (i.e. LiDAR) and photogrammetry (LaFemina et al. 2015). The last two methods are described in more detail below. Here, we discuss the reconstruction of the Thrihnukagigur system by combining geospatial data acquired from open data repositories with our own data.

Geospatial data
We combine topographic data and satellite optical imagery to develop a regional model of the Reykjanes Peninsula, Iceland (Figure 2). We extracted a digital elevation model (DEM) of the Reykjanes Peninsula from the ArcticDEM 4 , an online, open-access archive providing high-resolution digital surface models of the Arctic generated from the panchromatic bands of the WorldView-1, WorldView-2, and WorldView-3 satellites 5 . By combining elevation data with satellite imagery data 6 , we are able to create a basic reconstruction of the natural environment (see Figure 2 [top] for the workflow). We use the reconstruction of the Reykjanes Peninsula as an entry point to the Thrihnukar volcano iVR experience. The iVR experience allows users to: (1) fly through the Reykjanes Peninsula and individual fissure swarms, including the Brennisteinsfjöll fissure swarm that hosts Thrihnukar; (2) find and select the volcano, whose location is marked by a radiated halo with text; and (3) "jump into" the volcano to access details of the volcano model (see Figure 2).
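The DEM-plus-imagery reconstruction step amounts to building a terrain mesh whose texture coordinates reference the co-registered satellite image. The following is a minimal illustrative sketch, assuming the DEM grid and imagery are already co-registered; the function name and grid layout are our own, not part of the published workflow:

```python
import numpy as np

def dem_to_mesh(dem, cell_size):
    """Convert a DEM grid (rows x cols of elevations, meters) into
    vertex positions, UV coordinates for draping satellite imagery,
    and a triangle index list."""
    rows, cols = dem.shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    # One vertex per DEM cell: (east, elevation, north) in meters.
    vertices = np.stack([xs * cell_size, dem, ys * cell_size],
                        axis=-1).reshape(-1, 3)
    # UVs map each vertex into the co-registered image in [0, 1].
    uvs = np.stack([xs / (cols - 1), ys / (rows - 1)],
                   axis=-1).reshape(-1, 2)
    # Two triangles per grid cell.
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            tris.append([i, i + cols, i + 1])
            tris.append([i + 1, i + cols, i + cols + 1])
    return vertices, uvs, np.array(tris)
```

A game engine such as Unity3D can then render this mesh with the satellite image as its texture, producing the draped terrain shown in Figure 2.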

LiDAR data
Light Detection and Ranging (LiDAR) data of the volcano's interior and exterior were collected to create a 3D model of the Thrihnukar volcano to study the magmatic system and the formation of the cave. We used a tripod-mounted Leica C10 laser scanner (LaFemina et al. 2015), collecting ~1 billion points in total. The cave and magmatic system are accessed via an elevator. The point cloud data from the interior of the volcano-cave system were merged with the point cloud data collected outside the volcano by aligning common points on the girder system that ran across the top of the conduit for the elevator. The merged data set has an accuracy of ~4 cm. We utilized ground control points (GCPs) collected with GPS stations during the data collection of the exterior of the volcano in order to georeference the point cloud. The resulting data set has four parameters: latitude, longitude, elevation, and intensity of the returned laser signal. Intensity values map well to the different rock types exposed within and outside the volcano; that is, the intensity values represented by colors in Figure 1 (bottom) are geologically meaningful.
We imported the LiDAR data into a game engine (i.e. Unity3D®; Unity® 2017) to create an iVR experience that gives users the ability to view the Thrihnukar volcano in its regional context and interact with the 3D volcano data sets. Figure 3 (top) summarizes the workflow of importing the LiDAR data into Unity3D. To import the LiDAR data into Unity3D, we used an extension called Point Cloud Viewer & Tools 7 that accepts various input formats of point cloud files (e.g. LAS, XYZ and TXT formats), and is capable of reading up to 75 million points. We decimated the LiDAR data set to ~500,000 points. This extension provides a pair of data conversion and preprocessing utilities. The preprocessor comes with dozens of optional directives and converts a standard LiDAR data format into Unity3D meshes suitable for fast rendering. Menus within the preprocessor enable the user to control shading and other rendering options. Shading options include calculating normals for each of the points in the cloud for improved lighting effects, creating multiple levels of detail, and using true RGB values that may have been assigned to the points to make it easier to see surface features such as roughness. Points can also be scaled up to optimize the rendering effect. Additionally, the number and size of the points can be controlled in order to balance between rendering quality and rendering speed. We have extended Point Cloud Viewer & Tools to assign specific colors to each point based on one of its attributes (e.g. x, y, z coordinates, intensity, or distance values to the scanner provided in the LAS file). This allows the user to classify, for example, intensity to highlight geological materials and formations. In Figure 1 (bottom right) the LiDAR data are visualized in Unity3D using a rainbow color spectrum applied to the point intensity values. Gradual color changes are the result of adjusting the RGB values continuously, distinguishing five different classes.
From low to high intensity, the five classes transition through aqua, aqua to yellow, yellow to green, green to red, and red to blue. This allows the user to distinguish different geological formations (e.g. Matasci et al. 2017). In this case, the yellow-to-green class corresponds to older lava flows exposed in the cave walls that have a thin alteration rind on them. The aqua regions represent unaltered basalt exposed in the eruptive conduit that formed during the eruption and formation of Thrihnukar, and the lavas and tephra that were erupted at the surface to form the cone.
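The decimation and attribute-based coloring steps can be approximated outside Unity3D as random subsampling plus piecewise-linear interpolation across the five class colors. The sketch below is illustrative only (the function names, the random decimation strategy, and the exact color stops are our assumptions; intensity is assumed normalized to [0, 1]):

```python
import numpy as np

# Class colors from low to high intensity: aqua, yellow, green, red, blue.
STOPS = np.array([
    [0.0, 1.0, 1.0],  # aqua
    [1.0, 1.0, 0.0],  # yellow
    [0.0, 1.0, 0.0],  # green
    [1.0, 0.0, 0.0],  # red
    [0.0, 0.0, 1.0],  # blue
])

def decimate(points, target=500_000, rng=None):
    """Randomly subsample a point cloud down to `target` points."""
    if rng is None:
        rng = np.random.default_rng(0)
    if len(points) <= target:
        return points
    idx = rng.choice(len(points), size=target, replace=False)
    return points[idx]

def intensity_to_rgb(intensity):
    """Map normalized intensity values in [0, 1] to RGB colors by
    linear interpolation between the five class colors."""
    t = np.clip(np.asarray(intensity, float), 0.0, 1.0) * (len(STOPS) - 1)
    lo = np.minimum(t.astype(int), len(STOPS) - 2)
    frac = (t - lo)[:, None]
    return STOPS[lo] * (1.0 - frac) + STOPS[lo + 1] * frac
```

In the actual workbench, the equivalent per-point coloring is done in the extended Point Cloud Viewer & Tools preprocessor before the points are packed into Unity3D meshes.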

Photorealistic 3D point clouds using structure-from-motion
Structure-from-Motion (SfM) is a technique that allows for the construction of photorealistic point clouds using photographic images and photogrammetric techniques (Snavely, Seitz, and Szeliski 2006; James and Robson 2012; Yoshimura et al. 2016). SfM offers much greater versatility and usability for non-experts than classical photogrammetric workflows (Abellan, Derron, and Jaboyedoff 2016), which makes the creation of iVR experiences easier and more straightforward. We use Agisoft PhotoScan Pro® (Agisoft LLC 2017) in this project, a rapid SfM software package that stitches together photos to form 3D point clouds.
We processed our collection of 280 photos taken at the Thrihnukar volcano to generate a dense point cloud (~50 million points) for the interior structure of the volcano (Figure 4, top). We did not have enough photos of the upper part of the volcanic system (i.e. the magmatic conduit, see Figure 1, bottom left) to produce a good point cloud. However, we did take videos of the 8-minute ride from the surface to the inside of the volcano, 120 m below. We extracted 221 individual frames from these videos and aligned them in PhotoScan to produce a 3D point cloud of the magmatic conduit, completing the 3D point cloud model (Figure 4, bottom left).
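Choosing which video frames to export for alignment is a simple sampling problem; evenly spaced frames keep the overlap between consecutive images roughly constant, which aids feature matching. A sketch of such a selection helper (our own illustration, not part of PhotoScan):

```python
import numpy as np

def sample_frame_indices(n_frames_total, n_samples):
    """Return evenly spaced frame indices to extract from a video
    for SfM alignment, from the first frame to the last."""
    return np.linspace(0, n_frames_total - 1, n_samples).round().astype(int)
```

For example, an 8-minute video at 30 frames per second has 14,400 frames, and `sample_frame_indices(14400, 221)` selects 221 of them; the chosen frames can then be exported with any video tool (e.g. ffmpeg) before alignment.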
We used the Point Cloud Viewer & Tools extension to ingest the photorealistic point cloud into Unity3D (Figure 3, bottom). This tool can preserve original point colors by reading RGB values from the LAS file and using the built-in PointCloudColorsMesh as the mesh material. True color information is important because it contributes to the realism of the photorealistic point cloud model, which is important for the user's perception of the physical environment (Figure 4, bottom right).

Immersive workbench and interactions in the virtual environment
One of our essential goals is to create an immersive visualization environment for earth science data in which earth scientists are able to apply real-world scientific workflows (i.e. common observations carried out in the field) and hyper-real scientific workflows (e.g. using virtual tools that are hard to create in the real world) to interpret geospatial data sets. We developed an immersive workbench for the SteamVR system 8 , providing users with an interactive, immersive experience of the volcano and allowing researchers to perform quantitative investigations while being immersed. Our workbench is an integration of visualization and geometric measurement tools. These tools are arranged into different user interface layers and hierarchically organized by functionality category (Figure 5). Users wear an HTC Vive HMD to view the virtual content and use the hand controllers to interact with the data. A virtual pen model is assigned to the right controller as a selecting tool, while the left controller hosts the different tools of the workbench (Figure 6). The point-and-select operation of these tools, an ergonomic simulation of handheld tools in practice, is inspired by Kreylos et al.'s software tools, with which users are able to "manipulate data at their fingertips" (Kreylos et al. 2006).

Volcano visualization: transformation and information display
The Visualization category offers several visualization and filtering options for managing earth science data in multiple dataset views. This category has two main tools: Transformation and Information display. The first tool, Transformation, contains three sliders, allowing the user to transform the volcano point cloud via rotation, scaling, and vertical displacement by pointing the pen tip at one of the sliders and then dragging it to adjust the corresponding value. When earth scientists approach the actual volcano, the volcanic system is so large relative to human body scale that they cannot see it in its entirety from a single viewpoint. The transformation sliders give users the flexibility to manipulate the volcano model and observe it from various angles, positions, and magnifications. From a single viewpoint, users can adapt the model to either perceive the volcano as a whole through rescaling, or to grasp its detailed structures through vertical displacement and rotation.
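As an illustration of how a slider value might map to a model scale factor (our own sketch; the paper does not specify the workbench's actual mapping), a logarithmic mapping gives an equal perceived size change per unit of slider motion:

```python
import math

def slider_to_scale(t, min_scale=0.001, max_scale=1.0):
    """Map a slider value t in [0, 1] to a model scale factor on a
    logarithmic scale, so each slider increment multiplies the model
    size by a constant factor (min_scale and max_scale are
    illustrative bounds)."""
    return min_scale * (max_scale / min_scale) ** t
```

With these bounds, t = 0 shows the volcano at 1:1000 (the whole system in view) and t = 1 at 1:1 body scale, with smooth exponential growth in between.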
The second tool of the Visualization category is Information display. This tool contains three subtools: (a) Components, which toggles labels for the different components of the volcano on or off; (b) Switch of model style, which switches between the different data sets (i.e. the LiDAR or the photorealistic SfM point clouds) (Figure 6, left); and (c) Documentation, which provides access to a collection of documentation materials about the volcano in various media formats (e.g. hand specimen photographs, photomicrographs, tables and videos for geochemical analyses, and an electronic field notebook that will record relevant observations). More detailed explanations of the latter two subtools of Information display are provided below.
In the subtool Switch of model style, the user can choose among two LiDAR display styles and the photorealistic point cloud. The visualizations or styles include: (1) an intensity-based display of the LiDAR data (see Section 2.2); (2) an elevation-based display in which the z-value (i.e. distance above an assigned ground plane) of the LiDAR data is mapped to different colors; and (3) the photorealistic SfM model described above (see Section 2.3). In addition, the subtool Documentation currently contains a 56-page slide presentation with details about the volcano; the content is predefined in this case. Figure 6 (right) illustrates how the Documentation subtool is used to display a PDF presentation in the virtual space. One future extension of the Documentation subtool will be the introduction of an electronic field notebook that will record measurement results and screenshots in a downloadable integrated data set.

Geometric measurements: distance, area, and volume
Quantitative observations and analyses of geospatial data are critical for investigating geologic processes. In addition to the visualization tools described above, we developed a toolbox that allows users to make geometric measurements of distance, area, and volume within the iVR environment. The geometric measurements are converted from virtual to real-world scale so that users acquire precise geometric information about the actual magmatic system. Although these parameters are geometrically basic, they are important for quantifying geologic processes. For example, quantifying the lateral offset (distance) of streams allows for estimates of long-term fault displacements and a better understanding of earthquake hazards (e.g. Kellogg et al. 2008). In the case of Thrihnukar volcano, we are able to easily quantify the volume of the cave formed during or after the eruption, and critical parameters for modeling conduit flow such as the conduit radius. Because of the physical expanse of the magmatic-cave system, it was not feasible to make a complete set of measurements of the dike that fed the eruption, nor of the magmatic conduit. These observations could be made within a point-cloud viewer on a 2D monitor; however, the ability to re-scale point clouds while using the body as a position reference within the iVR environment (i.e. being immersed in the data) provides users with a new perspective that cannot be accessed through a conventional desktop display, allowing them to make connections between the different data sets.
In the distance measurement tool, the user places points by pressing the trigger of the right controller. These points are connected by straight line segments for distance measurement (Figure 7). The real-world distance value may differ from what the embodied user intuitively expects if the model scale is not 1:1. Users have three options for how to measure distance depending on their needs. These options are available as a submenu from the middle button, and their functionality and usage are as follows: (1) Free draw allows for adding points without any restrictions or automatic procedures. Users can explore freely and measure small distances. (2) Level draw snaps points to the volcano surface and restricts added points to the same vertical level to improve the precision of the level measurement. The line segments will be parallel to the horizontal plane such that users are able to acquire horizontal lengths (e.g. volcano conduit circumference or diameter). (3) Curve draw also snaps points to the volcano surface and further captures the concave and convex extent of the volcano surface. When the user adds two points (referred to as user points), connection nodes will be automatically generated and distributed between the user points and horizontally snapped to the volcano surface. Those connection nodes are then connected by straight lines. The number of connection nodes is proportional to the distance between a pair of neighboring user points. Curve draw improves the measuring accuracy and efficiency as the volcano shape is considered automatically. For example, when measuring the perimeter of the magmatic conduit, the user only needs to add a few points to precisely capture the surface undulations.
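The distance computation behind Free draw and Level draw can be sketched as follows. This is a simplified illustration: the scale convention (model units per real-world meter) and the y-up vertical axis are our assumptions, not specifications from the workbench:

```python
import numpy as np

def polyline_length(points, model_scale=1.0, level=False):
    """Total length of a measured polyline, converted from model
    units to real-world meters by dividing by the model scale.
    With level=True, points are first projected to the height of
    the first point, as in the Level draw mode (y is vertical)."""
    pts = np.asarray(points, float).copy()
    if level:
        pts[:, 1] = pts[0, 1]  # force all points to one vertical level
    segments = np.diff(pts, axis=0)
    return float(np.linalg.norm(segments, axis=1).sum()) / model_scale
```

For example, with a model at 1:2 scale (`model_scale=0.5`), a 5-unit segment in the virtual space reports a 10 m real-world distance.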
To increase the precision of distance measurements, user points should lie exactly on the volcano surface, which can be difficult for the user to achieve. For this reason, we adopt a point snapping approach for Level draw and Curve draw. With this approach, user points are automatically snapped to the volcano surface by identifying the point in the point cloud closest to the user's input. Because the volcano point clouds (i.e. LiDAR and SfM) are made up of a very high number of points (491,675 points in the LiDAR and 49,745,679 points in the SfM), linear search (Kanevski et al. 2004) is too time-consuming for calculating the shortest distance. To speed up the search process, we employ a k-dimensional tree (k-d tree) as the data structure for a nearest-neighbor search over points (Bentley 1975). Unlike the linear search, which goes through all points in the list for each iteration, the k-d tree hierarchically divides a space into several equal sub-cubes. For each sub-cube, the distance between points inside and outside the cube is computed, which only takes points on the cube's boundary into account but disregards points inside the cube. The cube is then either expanded to assimilate neighboring points or shrunk to omit marginal points, based on a comparison between the calculated distance value and the predefined threshold. Once cube boundaries become stable, points inside each cube are treated as a unit and only the distance between the user point and the center of each cube needs to be computed in order to find the nearest cube to the user point. After finding the nearest cube, a linear search is applied to the points inside it in order to determine the overall closest point in the volcano point cloud (Samet 1990; Yianilos 1993; Garcia, Debreuve, and Barlaud 2008). In this way, significantly fewer points in the point cloud list need to be considered compared to the linear search.
By applying the k-d tree approach, latency caused by the search for the closest point is reduced to 0.2 seconds on average (from 5 seconds using the linear search). A threshold value of 0.014 Unity3D distance units (one Unity3D distance unit equals one meter at body scale) is used as the snap distance. The input point will be snapped to its nearest vertex on the volcano only if its distance to some point in the point cloud is smaller than this value. This gives users the ability to also measure distances freely in 3D space using either Level draw or Curve draw. In other words, Level draw and Curve draw turn into Free draw if the user point is 0.014 units or more away from the volcano surface.
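The snapping logic can be sketched with an off-the-shelf k-d tree; here we substitute SciPy's `cKDTree` for the custom Unity3D implementation described above, so this is an illustrative equivalent rather than the workbench's actual code:

```python
import numpy as np
from scipy.spatial import cKDTree

SNAP_DISTANCE = 0.014  # Unity3D units; one unit = one meter at body scale

def snap_to_surface(cloud, user_point, snap=SNAP_DISTANCE):
    """Snap a user point to its nearest point-cloud vertex when it lies
    within the snap threshold; otherwise return the point unchanged
    (the fallback from Level/Curve draw to Free draw described above)."""
    tree = cKDTree(cloud)  # in practice, build once per cloud and reuse
    dist, idx = tree.query(user_point)
    return cloud[idx] if dist < snap else np.asarray(user_point, float)
```

Building the tree is O(n log n) and each query is roughly O(log n), which is consistent with the reported drop from about 5 seconds (linear search) to about 0.2 seconds per snap.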
Figure 7. The process of using the Free draw option to draw points and line segments for distance measurement or outlining geologic features. Distance information and volcano scale are displayed on a mini pad attached to the left controller. By pressing the Clear button, points and line segments will be removed to reset the total distance value. The user can click on the Free Draw button to switch to the other distance measurement modes (i.e. Level draw or Curve draw).

For the area measurement, we adopt a square plane as the area detector. Users are able to change the vertical position of the plane by operating a slider attached to the left controller (Figure 8, top left). Once the user selects the Acquire button, the area detector will be horizontally adjusted to the volcano surface based on a snapping procedure of eight or more preset nodes whose locations are originally distributed along the four sides of the area detector plane. In other words, we apply the k-d tree approach to these nodes to find the horizontally nearest points on the volcano surface and then build a new mesh based on those nearest points. As a result, the surface area of the new mesh is calculated and displayed to the user. Figure 8 (top right) illustrates the mesh construction process. Moreover, users are able to select the number of nodes being snapped to, adjusting the precision of the area measurement to their needs. The precision can be set to five grades: the 1st grade uses one node along each side of the area detector in addition to the four corner vertices (eight nodes in total), while the 5th grade uses five nodes along each side (24 nodes in total, including the four corner vertices). In other words, each higher grade adds one additional node to each side to better capture the concave-convex nature of surfaces. The bottom two images of Figure 8 show how the area measuring mesh is constructed to capture the volcano shape under different precision settings.
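The triangle-fan construction in Figure 8 (top right) amounts to summing triangle areas around the center pivot. A minimal sketch (the function name is our own; each triangle's area is half the magnitude of the cross product of its two edge vectors):

```python
import numpy as np

def fan_area(center, ring):
    """Surface area of a triangle fan: a center pivot P0 plus a closed
    ring of snapped nodes ordered around the perimeter, as in the area
    detector's mesh construction."""
    center = np.asarray(center, float)
    ring = np.asarray(ring, float)
    total = 0.0
    for a, b in zip(ring, np.roll(ring, -1, axis=0)):
        # Triangle (center, a, b); area = |cross product| / 2.
        total += np.linalg.norm(np.cross(a - center, b - center)) / 2.0
    return total
```

Because each snapped node may have a different elevation, the fan's triangles tilt to follow the surface, so higher precision grades (more nodes per side) track the concave-convex surface more closely.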
Similar to our area measurement approach, our volume measurement approach also applies the k-d tree to find the horizontally nearest points and uses the triangulated mesh construction to calculate volumes. Instead of a 2D plane, however, the volume detector is a 3D cube. Users can change its thickness via two arrow buttons attached to the left controller (Figure 9). The measured volume is defined by the cross-sections between the volcano surface and the top and bottom faces of the volume detector. The four side faces of the volume detector are snapped to the volcano surface to capture the volcano's shape. The two cross-sections along with the four attached side faces constitute a polyhedral mesh for volume measurement. In the volume detector, each side face contains one center pivot and eight side nodes (cf. Figure 8, top right). These side nodes are distributed along the four sides of each side face. The top and bottom faces of the volume detector do not contain center pivots. In total, twenty side nodes along with four center pivots construct 48 triangles, which together form the polyhedral mesh used for the volume measurement. Figure 9 (right) illustrates how the volcano volume is measured in the virtual space.

Figure 8. The inner area of the cross-section between the volcano surface and the area detector (purple square plane) is measured (top-left). A schematic diagram of the triangulated mesh construction used in Unity3D (top-right). Nodes on the area detector (N0-N7) are snapped to the volcano surface and the mesh (red polygon) is built from the resulting nodes (N0'-N7'). Each pair of them along with the center pivot (P0) forms a vector triangle in clockwise direction (e.g. T0: N0'-P0-N1'). Eight triangles in total (T0-T7) constitute a mesh whose surface area is the result of the area measurement. Area measurement using the lowest precision (1st grade, bottom-left) and the highest precision (5th grade, bottom-right) for the same region of the model.
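The volume of a closed polyhedral mesh such as the 48-triangle construction above can be computed by summing signed tetrahedron volumes (the divergence theorem). A sketch, assuming the triangles are consistently oriented; the vertex and triangle arrays would come from the snapped detector mesh:

```python
import numpy as np

def mesh_volume(vertices, triangles):
    """Volume enclosed by a closed, consistently oriented triangulated
    mesh, computed as the sum of signed tetrahedron volumes with apex
    at the origin (divergence theorem)."""
    v = np.asarray(vertices, float)
    total = 0.0
    for i, j, k in triangles:
        # Signed volume of tetrahedron (origin, v[i], v[j], v[k]).
        total += np.dot(v[i], np.cross(v[j], v[k])) / 6.0
    return abs(total)
```

As with distances and areas, the result would then be rescaled from model units to real-world cubic meters using the current model scale.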

Discussion
The number and type of earth sensing satellites, as well as missions to other planets, are rapidly expanding, providing unprecedented observations of planetary processes. Additionally, high-spatial-resolution, terrestrial-based observations are also being collected. Integrating remotely sensed data sets with terrestrial, georeferenced data allows for increased synoptic studies of these processes over greater spatial and temporal domains. 3D visualization and query of these data sets are just now becoming feasible with consumer-grade VR headsets. In the past this was reserved for high-end laboratories (e.g. Kreylos et al. 2006; Kreylos, Bawden, and Kellogg 2008), allowing only a minority of researchers to take full advantage of the 3D and often higher dimensionality of the data. Immersive virtual reality (iVR) allows for integration, visualization, and qualitative and quantitative observation of remotely sensed and terrestrial data sets, increasing their utility in earth science research. We have developed an iVR workbench using consumer-grade immersive technologies that allows researchers to investigate geologic processes by integrating, visualizing, and making qualitative and quantitative observations of georeferenced data sets. These are powerful tools because they expand the overall usefulness of these data sets. For example, the intuitiveness and effectiveness of our workbench allow users to formulate and test scientific hypotheses and draw conclusions by naturally interacting with their data sets from an embodied, egocentric perspective (Keim et al. 2003). Moreover, data sets and field sites can be continuously observed, long after the data are collected, and new observations can be added to an existing experience. We describe some benefits of iVR below and return to our example of the Thrihnukar volcano system, Iceland.
Some general benefits of immersive technologies stem from the properties of an iVR system; for example, strong computing power from high-end processing engines and graphics cards, high-resolution displays, and a large field of view in conjunction with a 360-degree field of regard (Ragan et al. 2013). There is a growing demand for enhanced analysis tools capable of handling and interpreting large and complex data sets (Helbig et al. 2017). We expect that advanced visualization approaches and quantitative exploration powered by iVR systems will strongly influence the understanding of increasingly large and complex data sets in the earth sciences.
Another benefit of iVR is that virtual fieldwork can help to advance actual fieldwork, that is, fieldwork advancement (Kreylos et al. 2006; Kreylos, Bawden, and Kellogg 2008; Deng et al. 2016). Fieldwork advancement is expected to help earth scientists overcome information inaccessibility by providing access to implicit information behind geological entities (e.g. qualitative and quantitative observations that are physically impossible in the actual field). Earth scientists, for example, are able to use fieldwork advancement as an information system that integrates multiple data sets and the workflow of geological field surveys, otherwise dispersed in space and time, into a unified mediated environment to accelerate their research. For example, Lin et al. (2011) created an iVR application to enable noninvasive virtual archaeological excavation through the digital reconstruction of geophysical survey data of an archaeological site in Northern Mongolia. The general idea was to display all the data in a virtual reconstruction of the site. The visualization tool integrated a total of 12 different data types, including photographs, SfM-derived terrain models, 3D models created manually for structures that no longer exist, and Ground-Penetrating Radar (GPR) and Electrical Resistivity Tomography (ERT) data. About 20 million GPR and 400,000 ERT data points were rendered as translucent spheres to create a very densely packed grid for the detection of the properties of subsurface material and the layers of subsurface structures. The VR menu, displayed as a floating window, allowed for toggling the display of the respective visual elements and setting their parameters to highlight the data of interest. This is valuable because understanding spatial correlations between different data types is key to making new discoveries in both archaeology and earth science research. Consequently, the aim of building fieldwork advancement is to develop a visual-analytic environment that allows users to use body-sensor cues and embodied gestures to interact with geospatial data (Sgambati et al. 2011; Lercari et al. 2017).

Figure 9. The volume detector is an amaranthine colored cube whose thickness and vertical position can be adjusted by the user (left). A menu for measuring volcano volume and a deep-green mesh inside the volcano surface (right). The measurement result is displayed on a mini pad attached to the left controller.
We have presented an example of the visualization of two geospatially referenced data sets for the Thrihnukar volcano. Thrihnukar provides a unique opportunity for volcanologists to study the plumbing system of a monogenetic rift volcano. The scale of the Thrihnukar volcano made it an excellent target for the collection of terrestrial LiDAR data and photorealistic reconstruction through SfM. However, since the bottom of the cave is 120 m below the crater rim and the walls are near vertical, the internal geometry and scale of this system make direct observations of key magmatic features (e.g. the magmatic dikes that fed the eruption) difficult. Our workbench allows for the visualization of both LiDAR and SfM point clouds and for direct geometric observations important for studying the dynamics of the eruption. One of the key questions we had when starting our study of the Thrihnukar system was, how did the cave form? Visualization of the data in the immersive environment allowed for improved mapping of an older scoria cone that was assimilated by the magmatic dike. Our workbench allowed for quantification of the cave volume, which can then be used as input for models of magmatic assimilation (Hudak 2016). Additionally, the flux of magma through volcanoes during volcanic eruptions is an important quantity to estimate; however, the most influential parameter, conduit radius, is difficult to measure. We are able to directly measure this as a function of elevation in the system, allowing for accurate estimates of the flux of magma through the system during the paleoeruption 3500 years ago.
One problem we are facing in the present project is the difficulty of precisely measuring geometric properties, because of data occlusion in the LiDAR and SfM point clouds. The nearest neighbor search, which only recognizes the nearest points, poses challenges for measuring complex 3D shapes. As can be seen in Figure 1 (bottom), the volcano cave extends upwards into two independent magmatic conduits forming a fork-like shape. When conducting area or volume measurements for this part of the system, the area or volume detectors are not able to exclude the gap between the two magmatic conduits and thus yield inaccurate measurement results. It is therefore desirable to apply a more adaptive approach that intelligently recognizes the integral configuration of point clouds for more precise measurements and interactive data visualizations (e.g. Kanevski et al. 2004; Zhang and Yan 2007).
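The single-point snapping at the heart of this limitation can be sketched as follows. This is a brute-force pure-Python stand-in for the k-d tree query used at runtime (function and variable names are hypothetical): each detector node is snapped to the horizontally nearest cloud point, so a node hovering over the gap between the two conduits simply snaps to whichever wall is closer, bridging the gap rather than excluding it.

```python
def snap_to_surface(node, cloud):
    """Snap a detector node to the horizontally nearest point in the cloud.
    Distance is measured in the horizontal (x, z) plane only, so the node
    inherits the surface point's elevation. Because only the single nearest
    point is returned, a node positioned over a void (e.g. the gap between
    two magmatic conduits) still snaps to the nearest wall, which is why
    the detectors cannot exclude such gaps from the measured mesh."""
    def horiz_dist2(p):
        return (p[0] - node[0])**2 + (p[2] - node[2])**2
    return min(cloud, key=horiz_dist2)
```

A k-d tree replaces the linear scan with an O(log n) query over millions of points, but it does not change the behavior shown here; excluding voids would require reasoning about the point cloud's overall configuration rather than individual nearest neighbors.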
The volcano point cloud model is rendered as a set of data points in the virtual space. Currently, the Unity3D-based rendering approach does not recognize the geometry of the data points for shadow mapping. Given the importance of shadow mapping in depth perception (Mamassian, Knill, and Kersten 1998), it is possible that users may not accurately perceive distance within the point cloud. This problem was reported by some of our users in a recent informal user study: they had difficulties drawing points and lines on the volcano surface for distance measurements, as they tended to underestimate the distance to the target.
Additionally, several elements have been identified from users' feedback that will be improved in an attempt to make the immersive workbench more successful, including:
• Visualization quality. (1) "I could not find the magmatic dikes of the volcano"; (2) "There are some points of noise existing in the point cloud".
• Ease of use. (1) "I expect that users without VR experiences would spend twice as long as I do to learn how to use the workbench"; (2) "It is not easy to understand some of the functions".
• Interactivity. (1) "I have to 'click' [i.e. press the trigger of the right controller to select a button] a lot of times to reach a specific function"; (2) "The controller would sometimes physically collide with my hand when I dragged a slider on the virtual panel".
In the future, we plan to: (1) increase the resolution of the LiDAR data; (2) apply a finer-grained color spectrum varied with intensity values to highlight, for example, the dike structure; (3) provide an eraser tool to remove noise and outliers from point clouds; (4) better integrate annotations/instructions with the immersive workbench; and (5) optimize the layout of user interfaces to support fast tool search. Additionally, we are in the process of conducting a more formal user study to assess the effectiveness of the visualization and measurement tools in earth science research. Users will be instructed to use the immersive workbench to perform a series of tasks to estimate the dimensions of magmatic features in the context of fieldwork activities (e.g. measuring the magmatic flux rate and the thickness of dikes). Before and after the virtual fieldwork, users will be asked self-report and open-ended questions about the usability of the immersive workbench, as well as their attitudes and opinions toward the fieldwork experience. We hope that their answers can shed light on the iterative design of the different tools from a user's perspective.

Conclusion
We developed an immersive workbench as an iVR research platform delivering virtual fieldwork experiences of Iceland's Thrihnukar volcano. We imported and visualized real-world earth science data in the virtual environment. The iVR workbench enables interactive visualization and quantitative observation of earth science data through immersive interfaces. After iterative design, we summarize the core components of earth science virtual fieldwork as follows: (1) environmental fidelity, consisting of 3D visualization, expert modeling, and spatial context rendering for geological entities and their surroundings; (2) degree of agency, i.e. the flexible switching of the user's frame of reference or transition of viewpoints to support embodied interpretation; (3) information display, that is, the integration of different sources of data into a single representation; (4) geometric measurements, i.e. the quantitative observation of geometric information during runtime; and (5) contextualization, or the integration of scaffolding and documentation/multimedia resources.
Our immersive workbench offers earth scientists the ability to visit sites of interest virtually, on a recurring basis, including sites that are expensive or physically impractical to visit. Earth scientists are then able to conduct both qualitative and quantitative observations of the geological sites. Although some measurements can be obtained directly in the actual field, immersion in the data makes these measurements far easier and reasonably accurate, i.e. through the ability to fly through or rescale the world and automatically compute geometric parameters of, for example, the cave and magma conduit systems. Consequently, our immersive workbench is not just for visiting or revisiting geological sites; it can also accelerate research by allowing earth scientists to explore a site faster, take more measurements, integrate different data sets, and leave behind annotations of new discoveries.