Digital twin and virtual reality: a co-simulation environment for design and assessment of industrial workstations

ABSTRACT In the context of Industry 4.0 and of the digital factory, digital twin and virtual reality represent key technologies to design, simulate and optimize cyber-physical production systems and to interact with them remotely or in a collaborative way. Moreover, these technologies open up new possibilities for the co-design and ergonomics studies of workstations based on Industry 4.0 components such as cobots. In order to satisfy these needs and to create a dynamic and immersive virtual environment, it is therefore necessary to combine the capacity of the digital twin to simulate the production system with the capacities of the immersive virtual environment in terms of interactions. This paper proposes a co-simulation and communication architecture between a digital twin and virtual reality software, then presents a use case on the design and assessment of a human-robot collaborative workplace.


Introduction
The integration of Industry 4.0 components and the development of digital technologies can lead to more efficient and more flexible processes, able to manufacture a variety of high-quality products with reduced cost and time, giving a significant competitive advantage (Rüßmann et al., 2015). When gathered, flexible manufacturing systems, industrial internet of things, simulation, big data analysis, and cloud manufacturing lead to a Cyber-Physical Production System (CPPS) (Monostori, 2014). Several authors demonstrate the benefits of CPPS deployment within Industry 4.0 thanks to more relevant analysis and management (Colombo, Karnouskos, & Bangemann, 2014; Lee, Bagheri, & Kao, 2015; Wang, Orban, Cunningham, & Lang, 2004). In order to make optimal decisions, a CPPS needs to include a digital representation of the real system: the digital twin, which exchanges data (real-time and offline) with the production system. For Boschert and Rosen (2016), the digital twin is the linked collection of the digital artefacts of components and systems, and it evolves with the real system. Boschert and Rosen (2016) also indicate that the digital twin is connected to existing IT systems to exploit the available digital information. The digital twin is used to run simulations and acts as a process-monitoring tool to search for potential incidents, retrieve performance metrics, prevent failures, and thereby optimize the real system (Ojstersek & Buchmeister, 2017). Such approaches, which combine the real system behaviour with its virtual representation, allow dealing with many optimization problems, such as the lack of control over data fragmented between different parts of the system, while providing a realistic representation of the system. In that sense, the digital twin is an evolution of the web-based monitoring, diagnostics, and optimization tools developed before the introduction of Industry 4.0 (Moore et al., 2008; Wang et al., 2004).
In (Moore et al., 2008), a 3D virtual engineering approach (VIR-ENG) is proposed to access data, through a web service or a socket connection, in order to perform Internet-based monitoring integrated into virtual engineering tools.
The digital twin is also usable for design. Tao et al. (2018) propose a new method for product design, product manufacturing, and product service driven by the digital twin, in order to preserve the unicity and centralization of data. Nowadays, in addition to controlling information, digital systems tend to be more and more autonomous and become relevant decision-makers, able to assist humans in real time in the most efficient way. Rosen, von Wichert, Lo, and Bettenhausen (2015) deal with the challenges of smart industry by focusing on the digital twin to ensure the modularity, connectivity, and autonomy of industrial systems.
A CPPS including a digital twin is a very powerful tool to aim for the best possible performance of a production system, but it can hardly be fully autonomous. Human interventions are frequently needed to control and run the production system and also to design or redesign the process. For that reason, there is a strong need for human-to-CPPS interfaces.

Virtual reality
Virtual reality (VR), thanks to its natural acting capabilities, is a well-adapted means to help a human interact with a CPPS. Indeed, virtual reality offers immersive scale-one 3D visualization, realistic rendering, natural gesture interactions, collaboration functionalities, and quick navigation tools in wide areas. It therefore allows users to easily focus on every component of the system, from the smallest part to the whole factory. It is moreover increasingly easy to implement and use thanks to unceasing progress in dedicated hardware and software. Virtual reality is used in several industrial activities: product or process design (Berg & Vance, 2017; Smparounis et al., 2009), facility layout (Menck et al., 2012), training (Marzano, Friel, Erkoyuncu, & Court, 2015), or remote collaboration (Galambos et al., 2015). Virtual reality helps to get the design right the first time and involves stakeholders in the decision process (Gong, Berglund, Saluäär, & Johansson, 2017). For example, Abidi, Lyonnet, Chevaillier, and Toscano (2016) show the positive contribution that virtual reality can have on a lean production system to minimize production times and eliminate wasted time.
Moreover, Bougaa, Bornhofen, Kadima, and Rivière (2015) show that virtual reality allows involving and engaging all stakeholders in the decision process. For instance, operators' ergonomics can be assessed in a virtual environment in order to increase comfort at work (Bäckstrand, Hogberg, De Vin, Case, & Piamonte, 2007; Högberg et al., 2007). Recent works deal with assessing ergonomics in a real situation thanks to motion capture (Bortolini, Faccio, Gamberi, & Pilati, 2018) or within a simulated situation in virtual reality (Rizzuto, Sonne, Vignais, & Keir, 2019).
Virtual reality is also an efficient and safe tool for training, as explained in (Berg & Vance, 2017). As an example, Sousa, Ribeiro Filho, Nunes, and Da Costa Lopes (2010) present a virtual reality environment for Hydroelectric Power Unit (HPU) servicing and maintenance. Each module contains three levels of difficulty (watch only, guided, autonomous) for progressive learning. In the same way, Saunier, Barange, Blandin, and Querrec (2016) propose two training virtual reality environments about wind turbines: one to learn the principles and components of energy conversion, and another dedicated to teaching maintenance operators the safety procedures for interventions in wind turbines. Another use of virtual reality is assembly training. Marzano et al. (2015) use their VR_MATE framework to produce a virtual reality environment for assessing and training operators to assemble a locomotive or to disassemble a plane. These scenarios allow assessing the feasibility of the task and evaluating the operator's body posture during the scenario. Interestingly, Ganier, Hoareau, and Tisseau (2014) show that virtual reality training on a maintenance procedure transfers as well as training on the real object. Ordaz, Romero, Gorecky, and Siller (2015) also show that gaming experience has no impact on the learning transfer. As a last example, Matsas and Vosniakos (2017) present a virtual reality tool where a human and a robotic arm collaborate. Although the robotic arm does not fully behave as in reality, the virtual reality training allows the operator to understand how to behave safely during the collaboration.

Digital twin and virtual reality
In order to train people in virtual reality with systems that behave realistically, there is an interesting potential in combining virtual reality and digital twin technologies. Turner, Hutabarat, Oyekan, and Tiwari (2016) notably demonstrate the benefits of this association by reviewing different case studies where discrete event simulation is combined with virtual reality. For now, simulation software usually lacks virtual reality functionalities. Digital factory CAD tools like (Dassault Systems, 2018) have recently introduced some virtual reality visualization capabilities, but with restricted interaction and collaboration features. That is why it would be interesting and wise to develop the connectivity between simulation and virtual reality software, to take advantage of their respective strengths and to avoid redundant development between applications. The possibility of co-simulation through real-time data exchange between those two types of software could allow training or design sessions in dynamic virtual environments.
The Functional Mock-up Interface (FMI) is a software-independent standard supporting both model exchange and co-simulation of dynamic models, using a combination of XML files and compiled C code (FMI, 2018). It is a particularly appropriate standard for developing CPPS and data exchange between simulation and virtual reality applications. For example, Waurich and Weber (2017) propose a Functional Mock-up Unit (FMU), developed in C++, that allows the visualisation in virtual reality software (Unity) of an excavator model driven by simulation software (OpenModelica). Unity is used to provide a graphically satisfying view of the model within a realistic environment, while the use of OpenModelica ensures a relevant behaviour of the system model. This combination is developed to compensate for the lack of contextualisation and visualization possibilities of simulation software, thus giving a better understanding and analysis of the system. Finally, we can see that several tools are involved in developing the digital twin. Each has its own advantages, drawbacks, and features. As a result, this article, which is an extended version of our work presented in (Lacomblez, Jeanne, Havard, & Baudry, 2018), aims to propose a generic and reusable architecture for a co-simulation application between a digital twin and a virtual reality environment in the context of Industry 4.0, based on ZMQ socket machine-to-machine communication (Meng, Wu, Muvianto, & Gray, 2017).
The rest of the paper is organized as follows. First, we position the digital twin associated with the virtual reality tool in the process of designing or redesigning an element of a production system. Then, the proposed technical architecture allowing co-simulation between the digital twin and the virtual reality environment is presented. This architecture is illustrated by the co-simulation of a UR10 6-DOF cobotic arm, where motion simulations are performed within a digital factory suite (Dassault Systems, 2018) and virtual reality interactions and simulations within a dedicated software tool based on Unity. Then, the proposed co-simulation architecture and tools are studied as an assessment tool for the operator's safety and ergonomics during the design of industrial workstations. Perspectives on exploiting this co-simulation environment for virtual reality operator training are also discussed.

Using digital twin associated to virtual reality as an engaging decision tool
The proposed usage is illustrated on the LINEACT CESI CPPS, a flexible manufacturing system (FMS) involving several robots with different capabilities in a shop-floor layout context. This use case is composed of an automatic production system and a set of manual workstations where the operator can be assisted by cobotic arms for assembly tasks. Mobile robots with different capacities and capabilities ensure transportation tasks. Pick-and-place operations can be performed by the human operator or by robotic arms. Several types of product can be manufactured, and augmented reality devices can be used to help the operator in production or maintenance tasks. A digital twin associated with human-machine interfaces (HMI) based on virtual or augmented reality is also present (see Figure 1). In the next sections, the benefits of coupling digital twin and virtual reality in that CPPS are discussed.
As presented in the introduction, the digital twin is a relevant tool for validating a new factory configuration, either from a global point of view through flow simulations or for a specific element by defining a new design. However, the digital twin lacks realism for the human-centred design of manufacturing processes and workstations. In that sense, virtual reality is a key tool, as it can represent the system at scale one and allow intuitive interactions. As an example, if a new configuration is needed, engineers of the production methods department can design it with the digital twin (indicated with (1) in Figure 2). When this is done, the digital twin is used to simulate the new configuration. The simulated data are then shared in real time with virtual reality through the data server (see (2) in Figure 2). Thanks to that approach, end-user operators, experiencing a realistic behaviour of the factory, can interact with it and finally give their feedback about that configuration (see (3) in Figure 2). As any requested change is only made in the digital world, the cost is much lower than a change on the physical system. Lastly, once the new configuration is validated, the new parameters and software programs can be deployed to the physical system thanks to the digital twin. The human-centred tools, such as augmented reality, are also updated with the new configuration to support maintenance operations (see (4) in Figure 2). In order to show the feasibility of this approach, the next section describes a more specific case involving a robotic arm and using a specific digital twin, data server and virtual reality environment.

Co-simulation workflow and architecture between digital twin and a virtual reality environment
This section describes the proposed architecture and workflow allowing the execution of a co-simulation between the main tools (see Figure 3). To illustrate the co-simulation architecture, we have applied it to a digital twin representation of a UR10 robotic arm. This workflow is composed of a digital factory suite (Dassault Systems, 2018) including CAD (Computer-Aided Design), CAM (Computer-Aided Manufacturing) and simulation tools. It allows the conception and simulation of the manufacturing system and its components during the design or redesign stage (see Digital twin in Figure 3). On the other hand, a visualization and interaction tool in virtual reality, based on the 3D engine Unity, is proposed. Virtual reality and IT engineers develop the 3D engine functionalities, such as interactions with objects, movement in the virtual environment, and integration of the environment (see Virtual reality environment in Figure 3). In order to avoid re-developing the system behaviour in the virtual reality environment, it is worth using the simulation data provided by the digital twin inside the virtual reality tool. This allows a realistic behaviour of the system inside the virtual environment. Conversely, it is worth using virtual reality features (real-scale visualization, interactions) and taking them into account inside the simulation. In order to make both tools able to communicate, a data server is needed for exchanging real-time data (see Data server in Figure 3). The next section details how each component of the architecture exchanges data.
The proposed architecture is divided into three distinct blocks, with continuous communication kept between them (see Figure 4). It manages real-time data exchange between the digital twin and the virtual reality environment through a client-server pattern.

Digital twin
As presented in the introduction, a digital factory suite (Dassault Systems, 2018) is used for the digital twin, and more particularly two modules: CATIA functional and logical design, and Modelica, to model the system and the associated constraints (indicated with (1.a) in Figure 4). The logical model of the robotic arm consists of: the physical constraints, which represent the gravity and mass of each part of the system (see (1.a) Physical in Figure 4); the kinematic chain constraints, which represent the connections between parts and their associated parameters (friction, limits); and the engine constraints, which represent the engine forces and torques applied (see (1.a) Kinematic chain in Figure 4). Once those parameters are set, the digital twin version of the system behaves in a realistic way. From that point, it is necessary to send data about the simulation to the virtual reality environment. As presented in the introduction, a co-simulation architecture based on the Functional Mock-up Interface is retained and an FMU is developed (see (1.b) in Figure 4). It is composed of three major components:
• a ModelDescription.xml file, which defines the input/output data managed by the FMU. Each of these variables transmits the state, in real time, of a component of the system and is declared within a <scalarVariables> tag in the XML file. The naming convention is {Name of the part}_{type of action}_{element concerned} (example: Axis1_rotate_x);
• a ModelDescription.xslt file, which automatically parses the ModelDescription.xml file to transform it into a C++ header file (modelDescription.h). C++ variables can thus be used in the FMU Core C++ program;

Figure 3. Co-simulation workflow between the digital twin and the virtual reality environment.
• an FMU Core, a compiled C++ binary plugin (see (1.b) in Figure 4). This program is developed to transmit all the scalar variables, generated by the simulation run in the digital twin, to the data server through a socket.
Once compiled, the FMU becomes a new component in the Modelica software, usable by anyone familiar with this tool. As a result, the FMU makes it possible to transmit the logical behaviour of a system to another tool in real time. In the FMU UR10 presented in this paper, the position and rotation of each part of the robot arm are sent in JSON format to the data server. The next section describes this mechanism.
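The mechanism above can be sketched in a few lines: the FMU declares its output variables in ModelDescription.xml following the paper's naming convention, and serializes each simulation step as a JSON message for the data server. This is a minimal illustrative sketch; the XML fragment and the exact JSON schema (field names such as `SimulationID` and `variables`) are assumptions, not the paper's actual file formats.

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical ModelDescription.xml fragment following the paper's naming
# convention {part}_{action}_{axis}; the attribute names are assumptions.
MODEL_DESCRIPTION = """
<fmiModelDescription modelName="UR10">
  <scalarVariables>
    <ScalarVariable name="Axis1_rotate_x" causality="output"/>
    <ScalarVariable name="Axis2_rotate_z" causality="output"/>
  </scalarVariables>
</fmiModelDescription>
"""

def output_variables(xml_text):
    """Collect the names of the FMU output scalar variables."""
    root = ET.fromstring(xml_text)
    return [sv.get("name")
            for sv in root.iter("ScalarVariable")
            if sv.get("causality") == "output"]

def build_message(simulation_id, values):
    """Serialize one simulation step as a JSON payload for the data server."""
    return json.dumps({"SimulationID": simulation_id, "variables": values})

names = output_variables(MODEL_DESCRIPTION)
msg = build_message("UR10-sim-01", {n: 0.0 for n in names})
```

In the real architecture this serialization happens inside the compiled FMU Core C++ plugin; the sketch only shows the data shape being exchanged.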

Data server: coordinator of the data transmission
The role of the data server is to handle the data and transmit it to the right virtual reality environment in real time. The data server acts as a coordinator, while the digital twin and the virtual reality environment represent the clients. In order to manage several clients simulating different parts or behaviours of the system, a SimulationID is used. Hence, only clients using the same SimulationID will share their data.
To communicate in real time, socket technology is chosen. As each digital twin may use a different language, we have chosen the ZeroMQ framework, since it has been developed to connect any code to any code. Moreover, client-server communication with ZMQ is asynchronous, that is, processing is not blocked while another operation is in progress. Thus, it supports multiple connections by many clients. ZMQ also offers a set of intuitive communication design patterns to suit many use cases. In this architecture, the dealer, router, and publisher-subscriber patterns have been used (Hintjens, 2013). The dealer, available in the FMU for the digital twin and in the SocketManager of the virtual reality environment, sends data to the router in the data server (see Figure 5). Then, the data server forwards them through the publisher pattern. Lastly, the information is dispatched to all the clients that have subscribed to it. As a result, the digital twin can send data to the virtual reality environment and conversely.
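The coordinator's dispatching rule can be illustrated with a small in-memory sketch: clients register under a SimulationID, and a published message reaches only the clients sharing that ID. This deliberately omits the actual ZeroMQ dealer/router/pub-sub sockets; the class and method names are hypothetical, chosen only to mirror the roles described above.

```python
from collections import defaultdict

class DataServer:
    """In-memory sketch of the coordinator role: clients subscribe to a
    SimulationID and only receive messages published under that ID.
    (The real data server wires this logic over ZeroMQ dealer/router
    and publisher-subscriber sockets.)"""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, simulation_id, callback):
        """Register a client callback for one simulation."""
        self._subscribers[simulation_id].append(callback)

    def publish(self, simulation_id, payload):
        """Dispatch only to clients sharing the same SimulationID."""
        for callback in self._subscribers[simulation_id]:
            callback(payload)

server = DataServer()
received = []
server.subscribe("UR10-sim-01", received.append)   # virtual reality client
server.subscribe("other-sim", lambda p: None)      # unrelated client
server.publish("UR10-sim-01", {"Axis1_rotate_x": 0.42})  # digital twin update
```

After the publish call, only the VR client subscribed to "UR10-sim-01" has received the update; the unrelated client sees nothing, which is exactly the isolation the SimulationID provides.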

Virtual reality environment: a visualization and interaction tool in VR for the digital twin
The virtual reality environment is developed in Unity, a powerful C# 3D rendering engine, used as a visualization and interaction tool. For that purpose, it receives data thanks to the subscriber pattern. The data received are the scalar variables representing the pose of each part of the robot arm, generated by the digital twin. Thanks to the naming convention defined in the digital twin section, the InterpretorManager class identifies the part to update, the axis of the part to update, and the value to apply to it. On each message received, a list of ScalarVariable, called partsToUpdate, is filled in the FMUConnect class (see (3a) in Figure 4). Then the FMUBehaviorManager converts the partsToUpdate list (see (3b) in Figure 4) into a dictionary of Unity objects, objToTransform, which maps each game object to update to the transformation to apply to it (see (3b) in Figure 4). The application then browses the objToTransform dictionary and applies the transformations. The virtual reality environment finally displays the robot arm simulated through the digital twin.
Figure 4. Technical architecture to exchange data between the digital twin and the virtual reality environment.
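The receiving side can be sketched in the same way: split each scalar-variable name back into part, action, and axis (the InterpretorManager role), then fold the resulting updates into a pose table (the role of the objToTransform dictionary). This is a language-neutral sketch of the logic, not the actual Unity C# classes; the function names and the pose-table shape are assumptions.

```python
def interpret(variable_name, value):
    """Split a scalar-variable name, following the paper's convention
    {part}_{action}_{axis}, into an update instruction (the equivalent
    of the InterpretorManager role)."""
    part, action, axis = variable_name.split("_")
    return {"part": part, "action": action, "axis": axis, "value": value}

def apply_updates(poses, updates):
    """Fold interpreted updates into a pose table, mimicking the
    objToTransform dictionary browsed by the application."""
    for u in updates:
        poses.setdefault(u["part"], {})[(u["action"], u["axis"])] = u["value"]
    return poses

# One incoming message worth of scalar variables (values are illustrative).
parts_to_update = [interpret("Axis1_rotate_x", 30.0),
                   interpret("Axis2_rotate_z", -15.0)]
poses = apply_updates({}, parts_to_update)
```

In Unity, the final step would set each GameObject's Transform from this table on every frame, so the rendered arm tracks the digital twin simulation.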

Case study and design steps workflow
The case study is based on the presented CPPS and particularly focuses on the manual workstation where an operator works in collaboration with a robotic arm (UR10) to assemble a subassembly of a children's bike (see Figure 6). The assembly process at this workstation consists of 28 operations divided into seven assembly instruction sheets. The human-robot interaction of this hybrid cell belongs to the shared workspace and task category (Tsarouchi, Matthaiakis, Makris, & Chryssolouris, 2017). The robotic arm picks up and presents the element being assembled in different orientations and positions in order to help the operator during the assembly steps (see positions (1) to (8) in Figure 7). The final subassembly is presented in Figure 7. Moreover, for ergonomic purposes, the position of the element being assembled and carried by the robotic arm should change according to the operator's size. Figure 8 presents the proposed workflow that leads to the design or redesign of the manual workstation with the robotic arm. The first steps are to create an initial design of the workstation, to select and program the robotic arm, and to create the assembly instructions. At the end of the design step, a digital prototype is available in the digital twin. In the second step, ergonomics, safety, and robot behaviour assessments are carried out on this digital prototype in the virtual reality environment. Several loops between the design and prototype stages may be performed. The next section focuses on the digital prototype assessment in virtual reality.
Once all stakeholders validate the design, the workstation is implemented in production and the co-simulation environment is used to train operators in virtual reality. As cobotics represents a new way of collaboration between human and machine, stakeholders need to train themselves to understand how they should behave. By using the ontological model presented in previous research work (Havard, Jeanne, Savatier, & Baudry, 2017), the assembly instructions can be converted to create a complete virtual reality training system (see Training in VR in Figure 8). This assembly scenario available in virtual reality is also used during the digital prototype step by the operators to assess the configuration, safety, and ergonomics of the workstation (see Digital prototype step in Figure 8).

Safety & robot behaviour assessments in VR
The digital twin can simulate the robotic arm behaviour. Engineers can program and set the different positions the robot must take at each operation step. In this case study, nine positions of the robotic arm must be defined. The design of the kinematics to reach these different positions is done through the robot simulation module of the digital twin. Then, through the co-simulation framework, the digital twin provides the robotic arm movement to the virtual reality environment. As a result, the robotic arm behaves realistically, and the engineers who conceive the program can check the robot behaviour in virtual reality. Moreover, the tuning of the robotic arm can be done without using the real equipment, thus avoiding the interruption of a running production process. Virtual reality also allows validating the robotic arm behaviour with a real operator involved in the scene. As human and robot share tasks and workspace, it is mandatory to safely test and check that the robotic arm will not collide with any part of the workstation or hurt the operator (see Figure 9). What is fundamentally different between virtual reality and simulation through the digital twin is that, in virtual reality, the operators are real persons acting and taking risks, as they would do on the real workstation. As an example, in Figure 9, the same operator performed the same assembly task several times inside the virtual reality environment. Becoming accustomed to the task, the operator approached the robotic arm, which collided with him. When a collision occurs, the virtual reality engine detects it and can report it, by displaying the robotic arm in red (see (b) in Figure 9). In such a case, the configuration of the manual station or the program of the robotic arm must be reconsidered.
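The detect-and-report principle behind the red highlighting can be illustrated with a naive proximity test between sample points on the robot and on the operator. A VR engine such as Unity uses proper collider geometry and physics callbacks rather than this brute-force check; the point sets, threshold, and function name below are purely illustrative assumptions.

```python
import math

def collides(robot_points, operator_points, threshold=0.05):
    """Naive collision test: flag a collision whenever any robot sample
    point comes within `threshold` metres of any operator sample point.
    (A sketch of the detection principle only; real engines use
    collider meshes and broad-phase culling.)"""
    for r in robot_points:
        for o in operator_points:
            if math.dist(r, o) < threshold:
                return True
    return False

# Illustrative 3D sample points (metres).
robot = [(0.0, 1.0, 0.5), (0.2, 1.1, 0.5)]
hand_far = [(1.0, 1.0, 0.5)]     # operator hand well clear of the arm
hand_close = [(0.21, 1.1, 0.5)]  # operator hand nearly touching a link
```

When the check returns true, the environment can react exactly as described above: tint the arm red, log the incident, and flag the station configuration for review.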

Ergonomics assessment in VR
As virtual reality engages the operator in the process, configuration and ergonomics assessments of the workstation can be addressed during the process. This has been conducted in the virtual reality environment presented in Figure 9. This co-simulation environment reduces delays and costs, as modifications of the design in the digital twin are tested on the digital prototype and not a physical one. On the basis of this co-simulation architecture and the digital prototype available in virtual reality, we studied the possibility of carrying out ergonomics studies according to the operator's characteristics. For instance, the operator's comfort and ergonomics can be taken into account. To illustrate the study, we select an assembly step where the robot can be set to two different positions: a high and a low one (see (a) and (b) in Figure 10). In order to choose the best robot position according to the operator, we chose to have a tall and a small operator perform the operation in VR to evaluate both robot configurations. The operator is first equipped with a motion-capture suit to get the position of his skeleton in real time; in this case study, a Perception Neuron Pro suit was used. Thanks to the sensor suit, while the operation is carried out, the RULA score is computed at 120 Hz (McAtamney & Corlett, 1993). The proposed approach is relevant for three reasons. First, computing the RULA score during the whole operation allows assessing that the entire operation step is ergonomically acceptable, contrary to the current industrial approach, which bases the analysis only on a few key steps of the procedure. Second, using virtual reality for assessing the workstation allows testing several configurations without occupying a real workstation. Finally, a virtual design assessment allows 'doing it right the first time' and reduces the cost of a redesign in case of any ergonomics misconception.
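To give an idea of how a per-frame score is derived from the captured posture, here is a simplified fragment of the RULA grid (McAtamney & Corlett, 1993) scoring the upper-arm flexion angle alone. The full method combines several body-segment scores through lookup tables and adjustments; this sketch assumes non-negative flexion angles and omits extension, shoulder raising, and all other segments.

```python
def upper_arm_score(flexion_deg):
    """Score the upper-arm flexion angle using the RULA ranges
    (simplified: flexion only, no adjustments). The full RULA method
    combines this with scores for the forearm, wrist, neck, trunk and
    legs through lookup tables."""
    a = abs(flexion_deg)
    if a <= 20:
        return 1
    if a <= 45:
        return 2
    if a <= 90:
        return 3
    return 4

# Score a stream of angles as sampled at 120 Hz by the motion-capture suit
# (the angle values below are illustrative, not measured data).
frames = [10, 30, 50, 100]
scores = [upper_arm_score(a) for a in frames]
```

Applied at 120 Hz over the whole operation, such per-frame scoring yields the continuous RULA time series analysed in the next section, rather than scores at a few hand-picked key postures.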
An example of the proposed setup is shown in (c) and (d) in Figure 10 for the small operator, and in (e) and (f) for the tall operator. In this setup, each operator performs the first operation step, which requires assembling four parts and lasts around 1 min.

The RULA scores computed during the virtual reality sessions are shown in Figures 11 and 12. In these figures, the time is normalized between the start and the end of the assembly operation to compare both virtual simulations on the same duration scale. In order to find the best robot position according to the operator's characteristics, a Wilcoxon test is performed between the high and low positions for each operator. For the tall operator, the results show that there is a significant difference between the high position and the low position (W = 2492267, p-value = 1.963e-10). Therefore, it is ergonomically better, for the tall operator, to use the high position of the robot.
As far as the small operator is concerned, the Wilcoxon test also shows that there is a significant difference between the high position and the low position (W = 8458029, p-value < 2.2e-16). Therefore, it is also ergonomically better, for the small operator, to use the high position of the robot.
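For readers unfamiliar with the W statistic reported above, here is a minimal pure-Python implementation of the rank-sum form of the Wilcoxon statistic: the two score series are pooled and jointly ranked (ties receiving the average rank), and W is the sum of the ranks of one sample. In practice a statistics package would be used, and the reported p-values come from the statistic's null distribution, which this sketch omits; the sample data are hypothetical.

```python
def rank_sum_W(sample_a, sample_b):
    """Wilcoxon rank-sum statistic: sum of the ranks of sample_a within
    the pooled, jointly ranked data, with ties assigned the average
    rank. (The associated p-value is not computed here.)"""
    pooled = sorted((v, i) for i, v in
                    enumerate(list(sample_a) + list(sample_b)))
    ranks = {}
    k = 0
    while k < len(pooled):
        # Find the run of tied values starting at position k.
        j = k
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[k][0]:
            j += 1
        avg_rank = (k + j) / 2 + 1  # ranks are 1-based
        for _, idx in pooled[k:j + 1]:
            ranks[idx] = avg_rank
        k = j + 1
    return sum(ranks[i] for i in range(len(sample_a)))

# Hypothetical per-frame RULA scores for the two robot configurations.
high = [3, 2, 2, 3]
low = [4, 4, 3, 5]
W = rank_sum_W(high, low)
```

On real data the series contain thousands of frames (120 Hz over about a minute), which is why the W values reported in the paper run into the millions.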
This study shows that a virtual reality session allows evaluating the ergonomics of manual workstations assisted by a cobot during an operation step. As a result, it is possible to choose one of the robot position configurations. At first thought, we could expect that the small operator would get a better ergonomics score with the low configuration of the robot. However, the results obtained with the proposed method show the contrary.
Nevertheless, there remain limitations to the use of virtual reality. For instance, operators do not grab parts in exactly the same way in the physical world with their hands as in the virtual reality environment with the controller. However, the operator performs each operation twice, once with the high position of the robot and once with the low position. Thus, any bias due to the part-handling discrepancy is limited, making the method suitable as a comparative method.
Figure 9. Safety checking in virtual reality thanks to the collision detection ((a) and (b)).

Conclusion and future work
In this article, a real-time co-simulation architecture between a digital twin and a virtual reality environment has been proposed. This real-time co-simulation ensures both a realistic behaviour of the actual system, thanks to the simulation provided by the digital twin, and a natural interaction interface with its 3D representation, thanks to virtual reality. The proposed solution is composed of a client-server architecture, and the real-time machine-to-machine communication uses ZMQ sockets to exchange data. This architecture has then been employed on a concrete use case focusing on a manual station where a UR10 cobotic arm is added to help the operator in his assembly task. The co-simulation provides a realistic behaviour of the robotic arm, and the association with virtual reality allows digital prototyping, operator safety assessment, and studying the layout and ergonomics of the workstation. The proposed workflow allows an intermediate level of simulation, involving the human in the digital prototype step, compared with virtual ergonomics solutions based on digital human models or simulations realized on the physical system. This architecture and communication method have been validated with the UR10 co-simulation. Thanks to the use of standard tools and a scalable architecture, our future research work will be, first, to co-simulate more than one system at the same time and, secondly, to use heterogeneous simulation tools, such as Gazebo to simulate mobile robots, and gather their results inside the same virtual reality environment.
Figure 11. RULA score computed during the VR assembly operation for a tall operator, with robot in high position (in blue) and low position (in orange).
Other research perspectives focus on the evaluation of this co-simulation architecture in the context of a dynamic and collaborative (multi-operators) virtual reality training environment.