Immersive interaction design based on perception of vector field climate data

Enhancing the perception of climate data plays an important role in meteorological data analysis. However, the presentation and human–computer interaction of most existing climate data display systems are still limited to two-dimensional screen space, which hampers the user's acquisition, analysis and understanding of data. To help users obtain more information and to improve the user experience, this paper designs a vector climate data perception system based on virtual reality and gesture interaction techniques, combining data visualization with dynamic gesture interaction. Experiments show that the proposed intuitive visual analysis system for 3D spatial data provides users with a highly immersive data acquisition platform and helps them understand data more effectively.


Introduction
Data visualization technology is an important means of information processing. It can turn complex and obscure raw data into intuitive 3D visualization results. As an important application of data visualization, the visualization of climate data can help meteorologists analyse meteorological data and make accurate judgments in a short time. However, how to present and transmit these data effectively remains a very challenging issue in the field of meteorological data.
Currently, there are many meteorological data visualization systems (Bell et al., 2007; Jia & Qian, 1998). However, they have several limitations. (1) Single display mode: Conventional methods for weather information display are generally limited to two-dimensional (2D) images or plain text. Understanding data visualized by such simple representations requires professional knowledge, and the results remain complex and abstract for ordinary people. Although 3D visual representation of weather data has been developed for a long time, current 3D visualization systems and their human–computer interaction are still confined to 2D screen space. Projecting a three-dimensional visualization into a two-dimensional display space usually leads to problems such as visual clutter and occlusion. Recent commercial advances have made VR available to the average consumer, making the time right for its use in research.
(2) Traditional interaction mode: Many meteorological data visualization systems still rely on traditional mouse and keyboard interaction. With the wide application of virtual reality technology, however, mouse–keyboard interaction is no longer suitable once VR is introduced, because the head-mounted display (HMD) occludes the user's view of the input devices. To provide a better experience in meteorological data analysis, we propose a system in which vector field climate data is visualized in a 3D environment and combined with a virtual reality setting. The user interacts with the system through gestures: we apply gesture tracking so that a real-world agent can manipulate virtual objects directly, which supports further analysis of the meteorological data.

Related work
With the development of computer technology, software and development tools for analysing meteorological data appeared as early as the 1970s. After the 1990s, as the number of meteorological satellites grew, the amount of meteorological data also increased sharply, and meteorological geoinformatics developed rapidly. The China Meteorological Administration began to analyse and process data using its self-developed Micaps system (Jia & Qian, 1998), which provides meteorological workers with great convenience. In meteorology, visualization can be applied to atmospheric prediction and the analysis of phenomena. Meteorological visualization has evolved from static 2D maps toward dynamic and intuitive 3D representations, so three-dimensional visualization of meteorological data has developed rapidly in recent years. NASA's open-source WorldWind (Bell et al., 2007) works well in this area, tracking recent events around the world and providing animated demonstrations of various meteorological activities. A main task of visualizing meteorological data is to integrate the visualization results in a 3D environment with geographic information. Lu and Yuan filtered geospatial data from traffic, meteorology and other fields to analyse underlying motion laws (Wen & Xiao-ru, 2016). Hui-ning, Miao, Li-jun, and Lei (2006) combined OpenGL with visualization algorithms to visualize multi-dimensional weather field data; the visual effect is good, but the expression of multi-dimensional dynamic characteristics is lacking. The texture-based visualization method proposed by Jobard, Erlebacher, and Hussaini (2001) is limited by the accuracy of the vector field data and therefore produces suboptimal results when the data are sparse.
Jing, Cheng, Wu-meng, and Bo-yang (2016) integrated 3D meteorological field data into a 3D virtual earth and presented a volume-element data model. In addition, some researchers have tried to visualize meteorological data on different platforms. Wang et al. (2012) proposed a meteorological data visualization algorithm based on the WeChat platform (Zhen-fei, Li-ling, Dan, & Xiao-wei, 2016). Based on volume rendering, Mei et al. (2016) proposed a cross-platform system for meteorological visualization using hybrid rendering, which displays multivariate, complex meteorological data in the browser and improves efficiency and user experience. Even so, browser-based display still limits, to a certain extent, the user's access to the data and further analysis.
To make it easier for users to extract information from data, intuitive visualization is essential. In recent years, as virtual reality (VR) technology has become widely accepted, its intuitive interaction has given users greater freedom of movement (Marchesi & Riccò, 2016), providing a new experience for data visualization. Research by Shaffer and Reed (2000), Acevedo et al. and Demesticha et al. has confirmed that immersive visualization tools can effectively improve data readability. VR technology not only makes data easier to understand, remember and reference, but also provides highly immersive human–computer interaction and data rendering, greatly enhancing users' ability to perceive data. Katsouri et al. (2015) developed an immersive 3D visualization application using a VR CAVE, designed to enable researchers to exploit the wealth of information provided by an ancient shipwreck. The CAVE virtual reality display system enhances immersion by expanding the user's field of view through stereoscopic projection; however, it not only occupies a lot of space but is also very expensive. With the recent popularity of consumer-grade head-mounted displays (HMDs), such devices have quickly become the most popular solution for virtual reality. Kwon et al. (2016) studied the layout, rendering and interaction of immersive environments based on 3D visualization with head-mounted VR devices; in terms of interaction, however, they still used the traditional mouse-based method of moving a pointer. The method proposed by Rong and Ming (2012), based on acceleration sensors, achieved good real-time gesture recognition and interaction. Ying et al. (2017) used the Kinect to perform interactive operations in real-time simulation of large-scale ocean scenes, but their system was still limited to a traditional 2D view.
To provide an immersive data visualization experience, this paper proposes a vector field climate data visualization system that combines the latest virtual reality technology with gesture interaction. This system not only supports multiple data types but also has a good immersive effect.

Our approach
Our system needs to provide meteorological data visualization, gesture interaction and VR display. These requirements translate into three module functions: vector field data visualization, dynamic gesture recognition and the VR task scene.

System design
As shown in Figure 1, our system is composed of three sub-modules: (i) a VR Module, which collects data about the real-world agent (e.g. head pose, hand pose) and displays the virtual environment to the agent; (ii) a Visualization Module, which renders the meteorological data in the virtual scene. Data visualization includes preprocessing the meteorological data and visualizing the vector field based on texture mapping; and (iii) an Interaction Module, which consolidates all the data, allowing the real-world agent to interact with the virtual environment through gestures. Gesture interaction includes the definition of gesture semantics and the user's dynamic gesture recognition module.

Vector field data visualization based on texture mapping
Meteorological data can generally be divided into scalar field data (one-dimensional), vector field data (two-dimensional) and tensor field data (n-dimensional). This paper focuses on meteorological vector field data, primarily visualizing wind field and ocean current data. The processing method for wind and ocean currents is roughly the same; we explain the process for wind field data below, and the same steps hold for ocean current data.
Volume rendering is a popular technique in the visualization of meteorological data, but it requires a large amount of computational resources. Considering that a virtual reality system has high real-time requirements, and that the wind vector field data lies at a fixed altitude, there is no need to observe the earth at all levels in perspective. Therefore, a fast vector field visualization method based on texture mapping is used. The idea is to project the latitude and longitude of each grid point onto the texture coordinates of a two-dimensional map, draw the vector wind field on that planar map, and then map it onto the three-dimensional earth model as a texture. Given the latitude ϕ and the longitude λ (where λ0 is the prime meridian), the conversion from latitude and longitude to texture coordinates (u, v) is:

u = (λ − λ0 + 180°) / 360°,  v = (90° − ϕ) / 180°.  (1)

In this experiment, we simulate the surface weather of the Earth. Because the surface of the Earth can be approximated as a sphere, the two-dimensional texture can be mapped onto a simple three-dimensional spherical surface with little computation and at high speed. The geographic wind field is represented in a coordinate system with components U and V: the +U component represents wind blowing toward the east (a 'west wind') and +V represents wind blowing toward the north (a 'south wind'). The vector sum of the zonal and meridional winds at an observation point gives the wind vector: the length of a particle describes the wind speed and its direction the wind direction. Figure 2(a) shows the initial visualization of the wind field. Here the differences in particle length reveal the wind speed at each point, but the result lacks a sense of the overall wind speed, especially of local differences, so wind speeds in different regions cannot be compared.
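As a concrete sketch, the projection from geographic coordinates to texture coordinates can be written as follows (assuming a standard equirectangular mapping with v = 0 at the north pole; the function name and the λ0 default are illustrative, not taken from the paper):

```python
def latlon_to_uv(lat_deg, lon_deg, lon0=0.0):
    """Equirectangular mapping of (latitude, longitude) in degrees to
    texture coordinates (u, v) in [0, 1].  lon0 is the reference
    meridian (the prime meridian in the paper); v = 0 at the north pole.
    """
    u = ((lon_deg - lon0 + 180.0) % 360.0) / 360.0
    v = (90.0 - lat_deg) / 180.0
    return u, v
```

Each grid point of the wind field is pushed through this mapping once, after which the flat texture is wrapped onto the sphere by the renderer.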
Since meteorological data analysis focuses precisely on regional weather conditions, we add a visual design for the overall wind speed. As shown in Figure 2(b), particles are placed at equal intervals, and a particle motion curve with an arrow is generated by a particle tracking algorithm until the ratio of the curve length to the velocity at the starting position equals a preset constant. Because the curve vector diagram displays the motion path of the particle, the velocity can be read accurately from the length of the line. Different colour patches represent wind speed differences between regions.
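A minimal sketch of such a tracking step is given below. The paper does not specify the integrator, so a simple Euler step is assumed, and `sample_uv` is a hypothetical callback that returns the local (U, V) wind components at a point:

```python
import math

def trace_particle(x, y, sample_uv, ratio=1.0, step=0.0625, max_steps=200):
    """Trace a particle through the wind field starting at (x, y).

    sample_uv(x, y) -> (U, V) gives the local wind components.  Tracing
    stops once curve_length / initial_speed reaches `ratio`, so faster
    starting points yield proportionally longer curves, as in the
    visual design described above.  Returns the polyline of the path.
    """
    u0, v0 = sample_uv(x, y)
    initial_speed = math.hypot(u0, v0)
    if initial_speed == 0.0:
        return [(x, y)]            # calm point: no curve to draw
    path, length = [(x, y)], 0.0
    for _ in range(max_steps):
        u, v = sample_uv(x, y)
        speed = math.hypot(u, v)
        if speed == 0.0:
            break
        # Euler step of fixed arc length along the local wind direction
        x, y = x + step * u / speed, y + step * v / speed
        length += step
        path.append((x, y))
        if length / initial_speed >= ratio:
            break
    return path
```

For a uniform eastward field, the traced curve is a straight segment whose length equals `ratio` times the starting speed.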

Gesture interaction design
Because the user observes the visualized data in an immersive virtual environment while wearing a helmet, traditional keyboard and mouse interaction is not suitable. We therefore propose an interaction design based on gesture recognition, which enables users to manipulate virtual objects with their hands.
To achieve interaction between the user and the system, we first need gesture recognition. Gesture recognition involves two concepts: hand shape and gesture. A hand shape is a specific, static posture; a gesture is a continuous action of the hand over a period of time, emphasizing dynamic posture. Because static gestures offer a poor experience in immersive interaction and are more difficult and error-prone for continuous interaction, all interactions in this experiment are designed as dynamic gestures. In interaction design, linear gestures are usually preferable to circular gestures, one-handed operation offers more freedom than two-handed operation, and the trajectory has a great influence on the difficulty of the operation (Accot & Zhai; Nancel et al., 2011). Based on these considerations, this paper gives priority to linear gestures and keeps the trajectories as simple as possible. In addition, the experiments of Fitts (1954) showed the relationship between speed, amplitude and tolerance in aimed physical movement, and concluded that the average movement time is a linear function of task difficulty, so we also control the trajectory movement time in our design. We use Leap Motion, a touch-free, non-wearable gesture capture device that gives users a more intuitive operating experience in a virtual environment, freeing their hands from traditional input restrictions. Leap Motion's binocular camera captures left and right images of the operator's gesture. After stereoscopic calibration, stereo image pairs are acquired and 13 feature points are stereo-matched to obtain disparity images. Using the camera's intrinsic and extrinsic parameters, triangulation yields a depth image, and an inverse kinematics (IK) algorithm then derives the full hand model.
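The movement-time relationship referred to above is Fitts's law, MT = a + b · log2(2A/W), where A is the movement amplitude and W the target tolerance. A small sketch (the coefficients a and b are illustrative; in practice they are fitted empirically per device):

```python
import math

def fitts_movement_time(amplitude, width, a=0.1, b=0.15):
    """Predicted average movement time (seconds) under Fitts's law:
    MT = a + b * log2(2A / W).  a and b are device-dependent constants
    obtained by regression; the defaults here are placeholders."""
    index_of_difficulty = math.log2(2.0 * amplitude / width)
    return a + b * index_of_difficulty
```

Keeping gesture trajectories short and tolerances generous lowers the index of difficulty and hence the expected movement time, which motivates the simple linear trajectories chosen in this design.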
Finally, by tracking the change of the position of the gesture, the system identifies the semantics represented by the gesture.
Through the API provided by the Leap Motion SDK, we obtain several kinds of frame data, such as palms, fingers, tools, direction vectors and gestures. When motion tracking data is available, the required frame data is read from the Leap Motion, the key data is passed to the recognition module, and the interaction with the object is completed according to the preset gesture semantics. The experiment uses the palm to control the Earth: the palm position is treated as the centre of the sphere, so that the sphere changes according to the movement of the palm. Because finger positions are unstable, using the palm position makes the control more stable. Let Z denote the palm position, D the palm direction vector (pointing from the palm toward the fingers) and T the palm normal vector (perpendicular to the palm plane, pointing out of the palm), and let T1 and T2 denote the palm normal at the start and end of the gesture, respectively. The rotation angle θ of the gesture, used to rotate the sphere, is then given by Equation (2):

θ = arccos( (T1 · T2) / (|T1| |T2|) ).  (2)
The Euclidean distance d between the palm positions Z1(x1, y1, z1) (before the change) and Z2(x2, y2, z2) (after the change), used to translate the sphere, is given by Equation (3):

d = √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²).  (3)
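These two mappings from tracked palm data to sphere transforms can be sketched as follows (vector names follow the notation above; the clamp is a defensive addition against floating-point drift, not something the paper specifies):

```python
import math

def rotation_angle(t1, t2):
    """Angle (radians) between the start and end palm-normal vectors
    T1 and T2: theta = arccos(T1.T2 / (|T1||T2|))."""
    dot = sum(a * b for a, b in zip(t1, t2))
    n1 = math.sqrt(sum(a * a for a in t1))
    n2 = math.sqrt(sum(b * b for b in t2))
    # Clamp the cosine into [-1, 1] before acos to avoid domain errors
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

def palm_displacement(z1, z2):
    """Euclidean distance between palm positions Z1 and Z2, used to
    translate the sphere."""
    return math.dist(z1, z2)
```

Rotating the palm by a quarter turn thus rotates the sphere by π/2, and a palm displacement of d units translates the sphere by the same amount.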
As shown in Figure 3(a), the algorithm locates the palm according to the palm direction and the normal direction, and judges the vertical movement of the palm by tracking the change of the palm position. Figure 3(b) illustrates the corresponding algorithm.

Experimental design and environment
To verify the visualization and interaction performance of this method in an immersive virtual environment, we conduct a confirmatory experiment. Because the platform involves several pieces of hardware and software, compatibility is the first challenge. After testing several platforms, we chose to implement the system in Processing: it supports many Java-language frameworks, performs well in data visualization, and provides interfaces for both VR devices and the Leap Motion. The VR device selected for the experiment is the Oculus Rift DK2, which includes the headset, a position tracking camera, and various patch cords and adapters. It has two eyepieces, each with a resolution of 640 × 800, giving a combined binocular resolution of 1280 × 800. The gesture capture device is a Leap Motion, placed horizontally on the table (see Figure 4); its effective range is sufficient for this experiment. The experiment uses a simple scene consisting of an 'Earth' with the universe as background. After the weather data is visualized on the Earth, the user can use gestures to observe the visualized weather conditions in the immersive virtual environment. The experiment takes place at an indoor desk, and the user experiences the three-dimensional virtual scene in a sitting posture. Figure 4 shows a user operating the proposed visualization system.
We download map data from Natural Earth and obtain meteorological data from the Global Forecast System (GFS) operated by the National Weather Service. The meteorological files are in the GRIB format proposed by the World Meteorological Organization. The format divides the world into 8 regions, each with a total of 3447 grid points, and records the data for each grid point. The data is observed four times a day, at 6:00, 12:00, 18:00 and 24:00. In the experiment, we use the vector wind as the visualization object and extract it at each grid point from the weather data file. The vector wind at an observation point is represented by a combination of zonal and meridional wind data (the same holds for ocean current data). Since the GFS model divides the vertical direction into 64 layers, we select the wind field data of one particular layer as the actual data for the system visualization.
Because the GRIB format does not come with a data-access API, GRIB files cannot be read directly. The data therefore needs to be preprocessed before visualization; that is, the zonal and meridional wind components are extracted.
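A minimal sketch of this preprocessing step, assuming the U and V component grids have already been decoded from the GRIB file (for example with a third-party GRIB reader; the function and variable names here are illustrative):

```python
import math

def preprocess_wind(u_grid, v_grid):
    """Turn decoded zonal (U) and meridional (V) component grids into
    per-grid-point (speed, direction) pairs ready for visualization.
    Direction is in degrees clockwise from north, i.e. the bearing the
    wind blows toward."""
    field = []
    for u_row, v_row in zip(u_grid, v_grid):
        row = []
        for u, v in zip(u_row, v_row):
            speed = math.hypot(u, v)                       # vector sum
            direction = math.degrees(math.atan2(u, v)) % 360.0
            row.append((speed, direction))
        field.append(row)
    return field
```

The resulting (speed, direction) grid is what the texture-mapping stage consumes: speed sets the particle length and colour, direction sets its orientation.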

Experimental analysis
In terms of visualization effect, Figure 5(a) shows that the paths of the vector wind particles are finely rendered on the earth. Both the overall trend of movement at this altitude and the motion of individual winds in a given area are well displayed, and observation yields a global impression and judgment. Figure 5(b) is a close-up of the rectangular area selected in (a); from the speed of the wind particles, the wind speed at each observation point can be compared. (The computer screen displays a stereoscopic image synchronized with the VR goggles.) Figure 5(c) compares the movement of the wind field at different times in the same region, giving a clear picture of the wind movement in that area over a period of time. The visual design of wind speed helps convey an overall perception of wind speed, and comparing the wind speeds of different regions increases the salience of local differences. The darker the colour, the faster the wind. Although the background colour is similar to that of the wind particles, the particles are not confused with the background because they are in motion.
The comparison between our method and the methods proposed in Kurakin, Zhang, and Liu (2012) and Wang et al. (2012) is shown in Table 2. On the MSRGesture3D dataset, our method has higher recognition accuracy than the other two methods.
Our method designs five gestures for interaction: panning left and right (G1), panning vertically (G2), enlarging (G3), shrinking (G4) and rotating (G5). We used the Leap Motion to collect dynamic gesture depth data at a sampling frequency of 60 Hz, with 10 testers helping to build the test set. Each dynamic gesture was collected three times, so the data set contains 150 depth-data frame sequences. The experimental results are shown in Table 3, which lists the recognition accuracy of each gesture. The proposed gesture recognition has high accuracy, which ensures good interaction. With this accuracy, the system offers good operation in the immersive virtual environment: users can not only change their position to adjust the direction and angle of the observed object, but can also use gestures to meet their own operating requirements. To examine a region closely, the user rotates the sphere with the palm until the surface of interest faces them, enlarges the sphere by opening the fist, and changes the distance between the sphere and the observer by translating the palm. The experiment vividly simulates the changes of the global wind vector field, increasing the perception of the animated vectors and the data, reducing the difficulty of obtaining information from the data, and displaying higher-level detail at the same time. Immersive virtual reality is a good environment in which to experience the weather. We find stereo vision useful in distinguishing near and distant features, and, as shown in Figure 6, a wider field of view allows users to observe the data vividly. In an intuitive spatial data visualization environment, users 'wander' wherever they want, recognizing an area of interest from afar and then simply gesturing toward it for further viewing and analysis, scaling the area or adjusting the perspective to their satisfaction, which is a great convenience for the observer. Table 4 compares several commonly used visualization software packages with the performance of the system in this paper.
It can be seen that the system supports roaming through meteorological data in an immersive environment combined with gesture interaction, and supports both 2D and 3D meteorological information visualization. Among the supported data types, the system offers multi-dimensional data visualization and animated display of time-series data, and can provide a suitable visual display method for various types of meteorological data using the commonly used visualization techniques. Compared with other existing software, the system explores new forms of meteorological data display while retaining complete basic functions. The display format and the corresponding interaction mode make full use of the GPU, which enhances the experience of meteorological data and provides a new direction for the subsequent development of meteorological data visualization software.

User experience
The experiment in this paper aims to improve the user's experience of acquiring information, so it is necessary to evaluate the usability of the proposed method and test its practicality. The evaluation covers subjective functionality and satisfaction. User research is conducted by questionnaire; questionnaires are used as a quantitative method to obtain feedback, which lends certainty to the results. The questionnaire consists of 8 questions: Q1–Q3 survey the usability of the three types of gestures, Q4–Q6 survey comfort, and Q7–Q8 evaluate the visual effect.
The interactivity test mainly evaluates the practicality of all gestures in terms of sensitivity and fatigue. It is defined on a five-level scale, where level 4 represents natural gestures that require no practice and level 0 indicates an inability to interact. Similarly, comfort is rated on three levels, where level 2 represents a naturally comfortable experience and level 0 a physically demanding one. The visual test of the VR version evaluates the visualization on five levels, as shown in Table 5.
The survey invited 14 college students, with an average age of 24.8. Ten of the testers experienced VR for the first time; the remaining four had previous VR experience. Before the test, we put the device on each student, allowed 2 minutes of free experience, explained the relevant operations, and then let the tester practise for another 2 minutes; the whole process takes about 8 minutes. The questionnaire was completed afterwards.
According to the questionnaire feedback, 71% of first-time users scored slightly higher than or equal to the experienced testers, indicating that the experimental design really does bring a new experience and attraction for new users. Users who had experienced VR before, however, responded to the tests more quickly and adapted faster, as shown in Figure 7. The statistics also show that the mean usability scores of the three types of gestures are all greater than 3; that is, all of the gesture operations match the user's intuition in the designed scene and can be used easily without additional guidance or prompts. Comfort feedback is also good because the gestures conform to everyday movement habits. Testers were very satisfied with the panning gestures, judging by the comfort and usability feedback; the usability of the zooming gestures improved further with practice; and the rotation gesture scored slightly lower, limited by the accuracy of the device. The proportion of testers choosing 'easy to fatigue' reached 29%, and the average comfort score of this gesture is close to 1 point, although the gesture still satisfies its interactive function. The reason may be that the Leap Motion must be placed on a horizontal desktop: the heights of the tabletop and seat force the tester's wrist to hover over the Leap Motion for a long time, keeping the upper-limb muscles tense, so the rotation and scaling gestures cause some fatigue. At the same time, the visualization results show that the 'VR version' score is greater than 3, confirming that the VR environment can indeed bring a better immersive experience. A few testers reported neutral or dissatisfied responses; most of them were experiencing VR for the first time, and wearing the VR device for a long time made them dizzy.
In general, the experimental design can meet the demand of human-computer interaction in immersive virtual reality and enhance the immersive experience of users.

Conclusion and future work
This paper presents a meteorological data visualization system based on a head-mounted virtual reality device. The system provides a human–computer interaction method based on dynamic gesture recognition, which conforms to the user's daily movement habits and offers a good user experience. Some future work remains. Firstly, because the virtual reality experience requires high fluency, the proposed method sacrifices some precision in visualization to satisfy this real-time requirement: the GFS model has 64 layers in the vertical direction, and they are not fully rendered in the current implementation. In the future we plan to improve the algorithm to further accelerate data visualization, and to expand the system to integrate multiple data sources and multiple levels. Secondly, complex gesture interaction requires high sensitivity from the hand motion capture device; when switching between different gestures, the interaction semantics can become ambiguous and cause incorrect operations. We will further optimize the gesture recognition algorithm to improve the user interaction experience.