
Available online at www.sciencedirect.com

ScienceDirect

Procedia Manufacturing 51 (2020) 54–60

www.elsevier.com/locate/procedia

https://doi.org/10.1016/j.promfg.2020.10.009

2351-9789 © 2020 The Authors. Published by Elsevier Ltd.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/).
Peer-review under responsibility of the scientific committee of the FAIM 2021.

30th International Conference on Flexible Automation and Intelligent Manufacturing (FAIM 2021), 15–18 June 2021, Athens, Greece

Concept for Virtual Safety Training System for Human-Robot Collaboration

Morteza Dianatfar¹, Jyrki Latokartano¹, Minna Lanz¹*

¹ Automation Technology and Mechanical Engineering Unit, Tampere University, Tampere, Finland

* Corresponding author. Tel.: +358 40 849 0278; E-mail address: minna.lanz@tuni.fi

Abstract

The emergence of Human-Robot Collaboration (HRC) creates the opportunity for operators and robots to work in a shared workspace. Adopting this concept in the manufacturing industry aims to increase the flexibility of factory lines and optimize the use of resources.

However, employing industrial robots that move at high speed and exert large forces introduces several challenges. The major one is operator safety, which demands detailed investigation. Companies have developed products intended to enhance safety in a shared workspace, but these solutions require know-how to be used correctly. Traditionally, operators undergo training to learn the assembly process. Training operators reduces injury rates and product cycle time while raising performance and quality. A virtual environment allows operators to experience immersive scenarios that closely resemble real-world situations, and training in such environments helps operators recognize safety concerns and follow safety instructions. The objective of this study is to propose a concept for virtual safety training in Human-Robot Collaboration.


Keywords: Human-Robot Collaboration, Virtual Reality, Virtual Environment, Human Safety

1. Introduction

Nowadays, agile and reconfigurable automation systems are an attractive trend among scholars and industries. Such systems encompass Human-Robot Collaboration (HRC) solutions, in which human dexterity and flexibility are merged with semi- or fully automated processes. In the future, a rise in quality and productivity is expected from the collaboration and cooperation of human and robot resources [1], [2]. However, utilizing industrial robots that move at high speed and exert significant force raises safety concerns for the operator in the shared workspace. The emergence of collaborative robots with inherently safe designs aids the achievement of a safe environment for an operator in a closely shared workspace. For instance, collaborative robots can act as assistants to an operator in complex assembly tasks. Meanwhile, the demand for HRC solutions with industrial robots in manufacturing industries such as automotive and aerospace is increasing.

Hence, the International Organization for Standardization (ISO) provides safety standards such as ISO 10218 Part 1 [3] and the technical specification ISO/TS 15066 [4] for manufacturing industries that utilize HRC solutions. ISO 10218-1 categorizes collaborative operation into four modes: safety-rated monitored stop, hand guiding, speed and separation monitoring, and power and force limiting.


Furthermore, safety devices such as laser scanners, laser curtains, and vision-based safety systems are manufactured by companies as tools for implementing HRC operation requirements. These devices increase safety levels in addition to traditional safety systems such as physical barriers. The operator needs to be aware of the robot's paths and intentions to avoid emotional distress in the form of shock, fear, and surprise [5].

On the one hand, awareness of the robot's status, location, and intention helps the operator reduce the chance of physical accidents. On the other hand, data collected dynamically from safety devices helps the operator avoid entering danger zones. The operator's awareness can therefore be established by audio and visual signals, or through the robot's actions and motion paths [6]. Indeed, to become familiar with the HRC scenario and with how to handle safety matters, the operator requires training before performing tasks in the HRC work cell. Traditionally, in automated and semi-automated systems, safety training was performed on the factory shop floor with manual documentation, warning signs, and face-to-face instructions. This type of training requires shutting down the whole line whenever the system changes or a new operator is assigned to the line.

Training with this method also takes more time, as the operator must mentally cope with the robot's presence and large movements.

Utilizing the interactive and immersive features of a Virtual Reality (VR) system could increase the user's perception and awareness and prepare them for real-world operation. A Virtual Reality Training System (VRTS) helps the operator prepare mentally by creating a virtual environment of the system and providing an opportunity for interaction through visual, audio, and haptic feedback. The core of a VRTS consists of three main components: input sensors for capturing the user's movement, a VR engine that generates the data, interaction, and graphical presentation, and finally the output device, which can be a VR Head-Mounted Display (HMD). These features make VR systems well suited for training purposes in HRC systems. It is worth mentioning that advances in game engines produce more realistic simulations and help advance VR HMD technology.
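As a rough illustration of this three-component structure (input sensors, VR engine, output HMD), the Python sketch below shows the per-frame loop such a system conceptually runs. It is not tied to any specific VR toolkit; all function and type names are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Pose:
    position: tuple     # (x, y, z) of the user's head
    orientation: tuple  # quaternion (x, y, z, w)


def read_tracking_sensors() -> Pose:
    """Stand-in for the HMD's input sensors capturing the user's movement."""
    return Pose((0.0, 1.7, 0.0), (0.0, 0.0, 0.0, 1.0))


def update_scene(pose: Pose) -> dict:
    """Stand-in for the VR engine: compute interaction and graphics state."""
    return {"camera_pose": pose, "warnings": []}


def render_to_hmd(frame: dict) -> None:
    """Stand-in for the output device (the VR HMD)."""
    print("rendering frame at", frame["camera_pose"].position)


def run_frame() -> None:
    # One iteration of the sensor -> engine -> display loop.
    pose = read_tracking_sensors()
    frame = update_scene(pose)
    render_to_hmd(frame)


if __name__ == "__main__":
    run_frame()
```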

This paper proposes a concept for a virtual safety training system. The second section addresses related work on the utilization of virtual environments in the field of HRC. The following section presents the proposed concept, its components, its structural design, and the safety approaches. Finally, the discussion and potential future work are presented.

2. Related work

Hongyi Liu [7] explores the application of Augmented Reality (AR) in Human-Robot Interaction (HRI) by developing instructions that are overlaid on real objects. The architecture of the system consists of an AR-based instruction system, a task sequence planning and re-planning system, a worker monitoring system, and an industrial robot control system. Palmarini [8] designed an AR interface to evaluate human trust in an HRC system. In this study, the robot arm movements are simulated by a virtual robot to inform the user of the robot's intention before actual execution. In [9], the authors outline the utilization of simulation methods for HRC with the support of VR. The user gains experience and programs the robot by interacting with VR controllers. Chong [10] investigates robot programming where users can define free space and a collision-free volume in the path planning of the robot. This AR environment assists the user in creating the robot's path using a beam of light, which is driven by a search algorithm interacting with the collision-free volume. Another VR application in HRI studied the collaboration between two operators and a robot [11]. In this solution, a tracking sensor collects the head position, pose, and eye gaze of the operator. The operator monitors the environment through a VR HMD and sends commands to the other operator by pointing out the object with the HMD's red dot pointer.

Moreover, Choi [12] collected and analyzed 154 articles relevant to virtual reality applications in manufacturing industries, where VR technology supports visualization for the product development process, consolidation of information, and decision-making. Regarding operators' stress levels while cooperating with a robot, Arai [13] investigates the measurement of strain caused by an industrial robot in a shared workspace. The user experiences high mental strain when the robot operates close to the operator and its speed exceeds 500 mm/s.

The operator's acceptability of presence in HRC is studied in [14], which shows a significant increase in physiological measures while the operator physically performs tasks close to the robot. The results from questionnaires imply that VR could be used to assess human acceptability in assembly scenarios before the physical experiment. Zaeh and Roesel [15] present a safe HRC system that collects data from multiple sensor modalities and communicates with an industrial robot for the assembly of workpieces. Gaze control, speech control, and hand detection can be employed to control and program the industrial robot. Additionally, Vogel [16], [17] pursues a safe HRC system with a projector- and camera-based setup. The safe workspace is dynamically updated and projected onto a table, where the projection assists the user in avoiding violation of the danger zone.

Similarly, Hietanen [18], [19] exploited a depth sensor to monitor operator movements, communicate with a robot controller via ROS commands, and execute safety instructions on the platform. Safety instructions were shown either by projection on the platform or through the AR HMD Microsoft HoloLens. Bambušek [20] studied the integration of an AR HMD for programming a pick-and-place task. The results show that programming with AR can avoid stopping time on the assembly line. Moreover, jogging and manipulating the joint angles of an industrial robot through a VR HMD is investigated by Gammieri [21]. The forward and inverse kinematics of the robot are modeled, which enables the operator to jog the robot similarly to using the teach pendant. Additionally, collisions are indicated by a red color in the scene. This implementation aims to produce a clear and precise workflow as well as remove unnecessary code translations.


An immersive simulation system is developed by Kausch [22], which allows the user to see augmented tools in the operator's hands in AR mode. Alternatively, the authors developed a VR mode that visualizes objects artificially. Furthermore, a study carried out with 40 subjects shows that users' execution times and accuracy are approximately equal in AR and VR, although the users personally preferred the AR system over VR. Wang [23] investigates a method to execute a welding process by integrating a VR interface that updates the status of the welding arc, arc length, and welding current, and controls the welding path through the HMD controller. Ultimately, Matsas [24] implemented a highly interactive and immersive VRTS for an HRC task with industrial robots. He aimed to enhance user experience and behavior in the VR simulation while addressing safety issues such as contacts and collisions. Safety information and instructions were presented through visual stimuli and sound alarms. The robot workspace and safety zones were visualized statically.

3. Concept for Virtual Safety Training System

Firstly, in this section, the system structure and main components are explained. Secondly, the factors required for implementing the simulation of safety actions are discussed. Finally, the developed test sequence and an example of this concept are shown.

3.1. Overall Concept

The Virtual Safety Training System (VSTS) consists of three main components: a user, an HMD, and a PC. In this design, the virtual environment and its objects are modeled in CAD software such as 3ds Max or Autodesk Maya and exported to the Unity game engine in Filmbox (FBX) format.

Additionally, the forward and inverse kinematics of the robot are simulated and scripted to visualize robot tasks and movements. To further refine the experiment toward a more realistic training system, collisions between the system's components (robot, user, parts) can be simulated. Collisions are defined with the Unity feature called a collider. A collider is made of simple 3D shapes, such as a box or a cylinder, placed around a component. A collision is registered when the defined shapes contact each other, which invokes the scripted signal that triggers specific actions for the operator.

Meanwhile, the VR HMD's tracking sensors track the operator's movement. The collected data are transferred to the Unity engine, and the operator's location and navigation are computed to display a dynamic representation of the virtual environment on the interface.

The operator's interaction with and exploration of the environment and its components within the virtual work cell are achieved via the HMD controllers. A simple representation of the system is depicted in Figure 1.
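To illustrate the collider idea described above, the following is a minimal Python sketch of axis-aligned bounding-box collision detection with a callback. In the actual concept this role is played by Unity's built-in collider components; the code below is only an analogy, and all names and dimensions are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class BoxCollider:
    """Axis-aligned box around a component (robot, user, part)."""
    name: str
    center: tuple  # (x, y, z)
    size: tuple    # (width, height, depth)

    def intersects(self, other: "BoxCollider") -> bool:
        # Two axis-aligned boxes overlap if they overlap on every axis.
        return all(
            abs(self.center[i] - other.center[i]) * 2 < (self.size[i] + other.size[i])
            for i in range(3)
        )


def on_collision(a: BoxCollider, b: BoxCollider) -> None:
    # Placeholder for the scripted signal that triggers an action for the operator.
    print(f"Collision detected between {a.name} and {b.name}: show warning to operator")


def check_collisions(colliders: List[BoxCollider], callback: Callable) -> None:
    # Pairwise check of all colliders in the scene.
    for i, a in enumerate(colliders):
        for b in colliders[i + 1:]:
            if a.intersects(b):
                callback(a, b)


if __name__ == "__main__":
    robot = BoxCollider("robot arm", (0.0, 0.0, 1.0), (0.6, 0.6, 1.2))
    user = BoxCollider("operator hand", (0.2, 0.1, 1.1), (0.2, 0.2, 0.2))
    check_collisions([robot, user], on_collision)
```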

3.2. Approach to Safety Actions

In order to implement the HRC work cell, it is necessary to establish and maintain the safety requirements of the system. Thus, both the human and the robot should follow specific actions to fulfill the safety requirements. In this framework, the actions of both resources are considered in the creation of the virtual training application.

3.2.1. Robot's Actions

As mentioned before, guidelines and safety instructions related to the robot system are discussed in ISO 10218-2 and ISO/TS 15066. Following these standards and guidelines is mandatory in order to provide safety inside the shared workspace. As a result, the safety actions on the robot side can be divided into four different modes of collaborative operation:

a) Safety-rated monitored stop: In this operation, the operator and the robot perform their respective tasks in separate workspaces. A collaborative task between the human and the robot can be, for example, the manual assembly of a part onto a component held by the robot. Whenever the operator enters the collaborative workspace, the robot must enter a protective stop state. The robot keeps this state until the operator exits the collaborative workspace.

b) Hand guiding: The purpose of this operation is to let the operator reach the robot and control it manually to complete the assembly task. Here, when the operator is outside the collaborative area, the robot can move at full speed. If the robot finishes its task and needs to wait for the collaborative task, it should move into the safety-rated monitored stop state. Likewise, if the operator walks past the collaborative area in the middle of the operation, the robot should enter a safety-rated monitored stop. When the operator wants to guide the robot, s/he needs to hold an enabling device to continue the task, and the robot then leaves the safety-rated monitored stop state. Moreover, if the enabling device is disengaged, the safety-rated monitored stop is activated again, and the robot remains in that state until the operator exits the collaborative area.

Figure 1. Structure of VSTS


c) Speed and separation monitoring: The robot reacts according to the distance between the operator and itself. If the operator approaches the boundary of the protective separation distance, the speed of the robot should be reduced. If the distance between the operator and the hazardous part of the robot falls below the protective separation distance, the robot should perform a protective stop and the safety-related function should be initiated (a minimal sketch of this logic is given after this list).

d) Power and force limiting: This operation is limited to collaborative robots in which a force sensor is integrated into the design of the robot. There are two types of contact between the operator and the robot: transient contact and quasi-static contact. In transient contact, the robot collides with the operator, but the operator can move freely after the impact. In this case, if the impact on the operator is large, the robot should perform a protective stop; if the impact is small, the robot can continue the process. In quasi-static contact, the operator is trapped between the robot and an object, and the robot should immediately perform a protective stop before it causes serious injury.
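To make the speed and separation monitoring rule concrete, the sketch below expresses it as a simple distance check in Python. It only illustrates the behavior described in item (c); the zone widths and function names are hypothetical, and in a real cell the protective separation distance is calculated per ISO/TS 15066 from robot and human speeds, stopping time, and sensor uncertainty.

```python
from enum import Enum


class RobotState(Enum):
    FULL_SPEED = "full speed"
    REDUCED_SPEED = "reduced speed"
    PROTECTIVE_STOP = "protective stop"


def speed_separation_state(distance_m: float,
                           protective_distance_m: float = 0.5,
                           reduced_speed_zone_m: float = 1.5) -> RobotState:
    """Return the robot state for a given operator-robot distance (example values)."""
    if distance_m < protective_distance_m:
        # Operator closer than the protective separation distance:
        # trigger a protective stop and the safety-related function.
        return RobotState.PROTECTIVE_STOP
    if distance_m < reduced_speed_zone_m:
        # Operator approaching the boundary: reduce robot speed.
        return RobotState.REDUCED_SPEED
    # Operator outside the monitored zone: robot may run at full speed.
    return RobotState.FULL_SPEED


if __name__ == "__main__":
    for d in (2.0, 1.0, 0.3):
        print(f"distance {d:.1f} m -> {speed_separation_state(d).value}")
```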

3.2.2. Operator's Actions

In contrast to the robot system's safety requirements, the operator should follow procedures to avoid collision with the robot and to act safely when violating the danger zone. Performing the process safely with proper actions requires different kinds of acknowledgment. The list below describes the information sent from the virtual system to the user:

a) Acknowledgment of robot intentions: Data that demonstrate the robot's path and trajectory help the operator follow the robot's presence dynamically in the work cell.

b) Acknowledgment of workspaces: By understanding the working areas, such as the operator, robot, and collaborative workspaces, the operator can improve the performance of the assembly process and reduce injuries and process stopping time.

c) Acknowledgment of task sequence: The operator requires proper instruction and guidance about the next task of the assembly process. This reduces confusion about the order of operations, which helps avoid harm to the user.

d) Acknowledgment of visual and audio warnings: The environment is equipped with audio and visual tools to notify users about upcoming events and the status of the whole operation.

The mentioned information is utilized in the VSTS application to visualize the status of the system and to instruct the operator in different scenarios, such as how to react to an incident in the collaborative area, how to avoid stopping the system by demonstrating the required distances to the different zones, and how to react to a robot system malfunction.
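As one possible reading of the four acknowledgment types above, the following Python sketch models the information items pushed from the virtual system to the user as simple message objects. The structure, field names, and example messages are illustrative assumptions, not part of the described VSTS implementation.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class InfoKind(Enum):
    ROBOT_INTENTION = "robot intention"  # planned path / trajectory
    WORKSPACE = "workspace"              # operator, robot, collaborative zone
    TASK_SEQUENCE = "task sequence"      # next step of the assembly process
    WARNING = "warning"                  # audio / visual alert


@dataclass
class OperatorNotification:
    """One piece of information sent from the virtual system to the user."""
    kind: InfoKind
    message: str
    audio_alert: bool = False            # whether a sound alarm accompanies the message
    highlight_zone: Optional[str] = None # e.g. "collaborative" to color a workspace


# Example notifications corresponding to the four acknowledgment types above.
notifications = [
    OperatorNotification(InfoKind.ROBOT_INTENTION,
                         "Robot will move to the fixture next."),
    OperatorNotification(InfoKind.WORKSPACE,
                         "You are entering the collaborative workspace.",
                         highlight_zone="collaborative"),
    OperatorNotification(InfoKind.TASK_SEQUENCE,
                         "Next task: insert part A into the housing."),
    OperatorNotification(InfoKind.WARNING,
                         "Robot is resuming motion.", audio_alert=True),
]

for n in notifications:
    print(f"[{n.kind.value}] {n.message}")
```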

3.3. Proposed test sequence

The user should receive visual guidance when violating, or about to violate, the safety zones. For example, warning signs about the system status should be illuminated, and the user should receive proper instructions on the safe use of the system. For this purpose, we propose two types of safety training: a safety zones test and a simulator experiment.

In the safety zones test, the users are able to see and test the safety zones via visual elements, which should include color-coded workspaces. In addition, the visual elements could include information about the robot status as well as audio and visual warnings. The user interface should follow established UI/UX guidelines. After the user becomes familiar with the interface components, s/he is ready to participate in the simulator experiment.

Similarly, the simulator experiment should give the user the opportunity to explore the full work cell and interact with objects. Users should be able to follow assembly scenarios and investigate the different statuses of the system while navigating the area. Additionally, the user should interact with the user interface of the application, acknowledge the safety action instructions, and act accordingly to avoid violating hazard zones.

The concept of the virtual safety training system is shown in Figure 2, and Figure 3 illustrates the parts of the simulator test.
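Under the assumption that the activity diagram in Figure 2 reads roughly as "choose a mode, navigate, handle safety violations, then continue or exit", the Python sketch below paraphrases that training sequence. The flow and all function names are illustrative, not taken verbatim from the diagram.

```python
import random


def safety_violated() -> bool:
    """Placeholder check; in the VSTS this would come from the Unity colliders
    and the HMD tracking data."""
    return random.random() < 0.3


def run_robot_safety_action() -> None:
    print("Robot: protective stop, robot workspace shown in red.")


def run_operator_safety_action() -> None:
    print("Operator: follow on-screen instructions to leave the danger zone.")


def run_session(mode: str, steps: int = 5) -> None:
    """Very simplified reading of the Figure 2 sequence: navigate, and whenever
    safety is violated run both safety actions until the situation is safe again."""
    print(f"Running {mode}")
    for step in range(steps):
        print(f"  navigate (step {step + 1})")
        while safety_violated():
            run_robot_safety_action()
            run_operator_safety_action()
    print("  exit simulator")


if __name__ == "__main__":
    run_session("safety zones test")
    run_session("simulator experiment")
```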

Figure 2. Activity diagram of the virtual safety training sequence

[The diagram contains the states: Run Experiment, Run Simulator, Run Safety Zones Test, Navigate, Safety is not violated, Safety is violated, Run Robot's safety action, Run Operator's safety action, Not safe, Safe, Continue, Exit.]


Figure 3. Simulator test illustration

Figure 4. VSTS example in hand guiding

Figure 5. Virtual safety training in the hand guiding demo


3.4. Example of Hand Guiding Procedure

The simulator experiment should be specific to the type of collaborative operation. Figure 4 shows the procedure for the hand guiding operation. If the user approaches the collaborative workspace while the robot is not in hand guiding mode, the system runs its safety functions. The robot movements stop, and the robot workspace is shown on the screen in red. A visual warning notifies the user to leave the collaborative space, as the current task is not meant for hand guiding. Moreover, an audio warning notifies the user that s/he has entered the collaborative workspace.

In addition to the audio and visual warnings, safety instructions assist the user in leaving the violated area, for example by instructing the user to move back until reaching a safe space.

However, if the task requires a hand guiding operation, the user needs to confirm the initiation of the hand guiding operation. A simple demo of such an example is shown in Figure 5.

Afterward, the operator can continue the assembly by holding the enabling device; in this case, the user needs to press and hold both trigger buttons of the HMD controller at the same time. Consequently, the robot deactivates its safety function, and the red indication of the robot zone is removed. Meanwhile, the assembly sequence instructions are shown on the UI. When the user disengages the enabling device, the safety function runs again; once the user leaves the collaborative area, s/he can decide to continue the experiment or exit the simulator.
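As a companion to the description above, the following Python sketch models the hand guiding procedure as a small state machine. The state names and conditions are paraphrased from the text; the function signature and defaults are illustrative assumptions rather than the authors' implementation.

```python
from enum import Enum, auto


class RobotState(Enum):
    FULL_SPEED = auto()
    SAFETY_RATED_MONITORED_STOP = auto()
    HAND_GUIDING = auto()


def next_state(operator_in_collab_area: bool,
               task_requires_hand_guiding: bool,
               enabling_device_held: bool) -> RobotState:
    """One decision step of the hand guiding procedure described in Section 3.4."""
    if not operator_in_collab_area:
        # Operator outside the collaborative area: robot may run at full speed.
        return RobotState.FULL_SPEED
    if not task_requires_hand_guiding:
        # Operator entered during a non-collaborative task:
        # stop the robot and warn the user (red zone, audio warning).
        return RobotState.SAFETY_RATED_MONITORED_STOP
    if enabling_device_held:
        # Both trigger buttons of the HMD controller are held:
        # safety function deactivated, hand guiding allowed.
        return RobotState.HAND_GUIDING
    # Enabling device released inside the collaborative area.
    return RobotState.SAFETY_RATED_MONITORED_STOP


if __name__ == "__main__":
    state = next_state(operator_in_collab_area=True,
                       task_requires_hand_guiding=True,
                       enabling_device_held=True)
    print(state)  # RobotState.HAND_GUIDING
```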

4. Discussion and Future Work

In this paper, a concept for a training system was proposed that integrates the safety guidelines of the robot system. The visual and graphical elements in the virtual environment assist the user in perceiving the assembly sequence, the types of workspaces, and the robot's intentions. Such a virtual training environment enables operators with or without prior experience to receive comprehensive training and prepares them to understand the safety system processes.

Additionally, virtual training can increase human trust while working close to the robot and help the operator handle fear if an accident occurs.

For future work, a diesel engine assembly use case in HRC will be implemented. A VSTS will be designed and customized specifically for this system and used with a VR HMD.

Furthermore, a study will be conducted with participants to evaluate the benefits of the proposed system. In addition, different UIs will be tested, and the best choice could help create guidelines for further research.

Acknowledgements

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 825196.

References

[1] A. Bauer, D. Wollherr, and M. Buss, "Human–robot collaboration: a survey," Int. J. Humanoid Robot., vol. 5, no. 1, pp. 47–66, 2008.

[2] P. Tsarouchi, S. Makris, and G. Chryssolouris, "Human–robot interaction review and challenges on task planning and programming," Int. J. Comput. Integr. Manuf., vol. 29, no. 8, pp. 916–931, 2016.

[3] Suomen Standardisoimisliitto SFS, "SFS-EN ISO 10218-1: Robots and robotic devices. Safety requirements for industrial robots. Part 1: Robots," Helsinki, 2018.

[4] ISO/TS 15066:2016, "Robots and robotic devices — Collaborative robots," Technical Specification, 2016.

[5] R. Alami et al., "Safe and dependable physical human-robot interaction in anthropic domains: State of the art and challenges," 2006.

[6] V. V. Unhelkar, H. C. Siu, and J. A. Shah, "Comparative performance of human and mobile robotic assistants in collaborative fetch-and-deliver tasks," in ACM/IEEE International Conference on Human-Robot Interaction, 2014, pp. 82–89.

[7] H. Liu and L. Wang, "An AR-based worker support system for human-robot collaboration," Procedia Manuf., vol. 11, pp. 22–30, 2017.

[8] R. Palmarini et al., "Designing an AR interface to improve trust in Human-Robots collaboration," Procedia CIRP, vol. 70, pp. 350–355, 2018.

[9] A. de Giorgio, M. Romero, M. Onori, and L. Wang, "Human-machine collaboration in virtual reality for adaptive production engineering," Procedia Manuf., vol. 11, pp. 1279–1287, 2017.

[10] J. W. S. Chong, S. K. Ong, A. Y. C. Nee, and K. Youcef-Youmi, "Robot programming using augmented reality: An interactive method for planning collision-free paths," Robot. Comput. Integr. Manuf., vol. 25, no. 3, pp. 689–701, Jun. 2009.

[11] M. M. Moniri, F. A. E. Valcarcel, D. Merkel, and D. Sonntag, "Human gaze and focus-of-attention in dual reality human-robot collaboration," in Proceedings of the 12th International Conference on Intelligent Environments (IE 2016), 2016, pp. 238–241.

[12] S. Choi, K. Jung, and S. Do Noh, "Virtual reality applications in manufacturing industries: Past research, present findings, and future directions," Concurr. Eng. Res. Appl., vol. 23, no. 1, pp. 40–63, Mar. 2015.

[13] T. Arai, R. Kato, and M. Fujita, "Assessment of operator stress induced by robot collaboration in assembly," CIRP Ann. Manuf. Technol., vol. 59, no. 1, pp. 5–8, 2010.

[14] V. Weistroffer et al., "Assessing the acceptability of human-robot co-presence on assembly lines: A comparison between actual situations and their virtual reality counterparts," in IEEE RO-MAN 2014 - 23rd IEEE International Symposium on Robot and Human Interactive Communication, 2014, pp. 377–384.

[15] M. Zaeh and W. Roesel, "Safety aspects in a human-robot interaction scenario: A human worker is co-operating with an industrial robot," Commun. Comput. Inf. Sci., vol. 44 CCIS, pp. 53–62, 2009.

[16] C. Vogel, C. Walter, and N. Elkmann, "Safeguarding and supporting future human-robot cooperative manufacturing processes by a projection- and camera-based technology," Procedia Manuf., vol. 11, pp. 39–46, 2017.

[17] C. Vogel, C. Walter, and N. Elkmann, "A projection-based sensor system for safe physical human-robot collaboration," in IEEE International Conference on Intelligent Robots and Systems, 2013, pp. 5359–5364.

[18] A. Hietanen, J. Latokartano, R. Pieters, M. Lanz, and J.-K. Kämäräinen, "AR-based interaction for safe human-robot collaborative manufacturing."

[19] A. Hietanen, R.-J. Halme, J. Latokartano, R. Pieters, M. Lanz, and J.-K. Kämäräinen, "Depth-sensor-projector safety model for human-robot collaboration."

[20] D. Bambušek and M. Kapinus, "Combining interactive spatial augmented reality with head-mounted display for end-user collaborative robot programming."

[21] L. Gammieri, M. Schumann, L. Pelliccia, G. Di Gironimo, and P. Klimant, "Coupling of a redundant manipulator with a virtual reality environment to enhance human-robot cooperation," Procedia CIRP, vol. 62, pp. 618–623, 2017.

[22] B. Kausch, C. M. Schlick, and J. A. Neuhoefer, "Embedded augmented reality training system for dynamic human-robot cooperation."

[23] Q. Wang, Y. Cheng, W. Jiao, M. T. Johnson, and Y. M. Zhang, "Virtual reality human-robot collaborative welding: A case study of weaving gas tungsten arc welding," J. Manuf. Process., vol. 48, pp. 210–217, Dec. 2019.

[24] E. Matsas and G.-C. Vosniakos, "Design of a virtual reality training system for human-robot collaboration in manufacturing tasks," Int. J. Interact. Des. Manuf., vol. 11, pp. 139–153, 2017.

Viittaukset

LIITTYVÄT TIEDOSTOT

This section contains a safety checklist, stable safety map and good practices to support human health and horse welfare and to prevent injuries in horse-related activities.. Reviews

Tiivistelmä SecNet-hankkeen tavoitteena oli tukea turvallisuusalan yritysten kansainvälisten verkosto- jen muodostumista neljällä liiketoiminta-alueella: turvallisuus ja

Homekasvua havaittiin lähinnä vain puupurua sisältävissä sarjoissa RH 98–100, RH 95–97 ja jonkin verran RH 88–90 % kosteusoloissa.. Muissa materiaalikerroksissa olennaista

Hä- tähinaukseen kykenevien alusten ja niiden sijoituspaikkojen selvittämi- seksi tulee keskustella myös Itäme- ren ympärysvaltioiden merenkulku- viranomaisten kanssa.. ■

Automaatiojärjestelmän kulkuaukon valvontaan tai ihmisen luvattoman alueelle pääsyn rajoittamiseen käytettyjä menetelmiä esitetään taulukossa 4. Useimmissa tapauksissa

Sovittimen voi toteuttaa myös integroituna C++-luokkana CORBA-komponentteihin, kuten kuten Laite- tai Hissikone-luokkaan. Se edellyttää käytettävän protokollan toteuttavan

Solmuvalvonta voidaan tehdä siten, että jokin solmuista (esim. verkonhallintaisäntä) voidaan määrätä kiertoky- selijäksi tai solmut voivat kysellä läsnäoloa solmuilta, jotka

EU:n ulkopuolisten tekijöiden merkitystä voisi myös analysoida tarkemmin. Voidaan perustellusti ajatella, että EU:n kehitykseen vaikuttavat myös monet ulkopuoliset toimijat,