
(1)

Rustem Sadykov

DEVELOPMENT OF A REAL-TIME PLATFORM FOR MAPPING

AND TRANSFER OF VIDEO, SIMULATION AND MOVEMENT BETWEEN A HUMANIZED ROBOTIC HEAD AND A SET OF VR GLASSES

Examiners: Professor Heikki Handroos, D.Sc. (Tech.) Hamid Roozbahani

(2)

LUT School of Energy Systems
LUT Mechanical Engineering
Rustem Sadykov

Development of a Real-Time Platform for Mapping and Transfer of Video, Simulation and Movement Between a Humanized Robotic Head and a Set of VR Glasses

Master’s thesis 2019

63 pages, 41 figures, 7 tables and 1 appendix

Examiners: Professor Heikki Handroos, D.Sc. (Tech.) Hamid Roozbahani

Keywords: VR, immersion, PID, teleoperation, FPV, biomimetic, telepresence, HMI.

The development of new technologies opens new possibilities in teleoperation. In recent years, new affordable VR systems have appeared. Although intended for entertainment and gaming, they can also be utilized in other areas.

This work focuses on the development of a new real-time platform and a robotic head connected to a VR system. The developed system offers a method of teleoperator control with a VR system and a biomimetic robotic head. The designed platform is powered by a computer. In order to control the robotic head, a C++ program was written.

The new teleoperation method with VR offers teleoperation with immersion. Teleoperation control with immersion allows the human operator to focus better, makes control more intuitive, enhances situational awareness and decreases cognitive load.

(3)

I would like to thank my supervisors: Doctor Hamid Roozbahani for his operational approach to work and patience, and for his desire and help with obtaining the equipment, and Professor Heikki Handroos for giving me the opportunity to work on this project.

I would also like to thank Juha Koivisto for providing all the necessary materials and tools, and for his help and assistance in this project.

Finally, I would like to thank my parents for their invaluable support.

Rustem Sadykov

Lappeenranta 20.12.2019

(4)

TABLE OF CONTENTS

ABSTRACT ... 2

ACKNOWLEDGEMENTS ... 3

TABLE OF CONTENTS ... 4

LIST OF SYMBOLS AND ABBREVIATIONS ... 6

1 INTRODUCTION ... 7

1.1 Background ... 7

1.2 Research Problem ... 14

1.3 Contribution ... 15

2 METHODS AND EQUIPMENT ... 16

2.1 Design of the robotic head ... 16

2.2 Hardware ... 20

2.2.1 HTC Vive Pro ... 20

2.2.2 Cameras ... 23

2.2.3 Video grabber ... 26

2.2.4 Wi-Fi ... 29

2.2.5 Servomotors ... 32

2.2.6 Microphone ... 36

2.2.7 Sound card ... 39

2.2.8 Plate ... 40

2.2.9 PC ... 41

2.3 Software ... 43

2.3.1 OpenVR SDK ... 43

2.3.2 Dynamixel SDK ... 43

2.3.3 Microsoft Visual Studio 2019 ... 43

(5)

2.3.4 SteamVR ... 43

2.3.5 OBS ... 43

3 DEVELOPMENT OF THE REAL-TIME PLATFORM FOR TRANSFER OF VIDEO, AUDIO AND MOVEMENTS BETWEEN THE ROBOTIC HEAD AND THE VR SYSTEM ... 44

3.1 Video connection ... 44

3.2 Audio connection ... 50

4 RESULTS ... 54

5 CONCLUSION ... 58

LIST OF REFERENCES ... 59

APPENDIX

APPENDIX I: Designed program algorithm

(6)

LIST OF SYMBOLS AND ABBREVIATIONS

2D Two-dimensional

3D Three-dimensional

AERcam Autonomous Extravehicular Robotic Camera
API Application Programming Interface
DHCP Dynamic Host Configuration Protocol
FOV Field-of-View
FPV First Person View
GUI Graphical User Interface
HMD Head-mounted Display
HMI Human-machine Interface
IDE Integrated Development Environment
IEEE Institute of Electrical and Electronics Engineers
IMU Inertial Measurement Unit
IT Information Technology
LED Light-emitting Diode
NAT Network Address Translation
Navcam Navigation Camera
OS Operating System
Pancam Panoramic Camera
PC Personal Computer
ROS Robot Operating System
SDK Software Development Kit
TCP Transmission Control Protocol
UDP User Datagram Protocol
USB Universal Serial Bus
VR Virtual Reality
Wi-Fi Wireless Fidelity

WPA2 Wi-Fi Protected Access II

(7)

1 INTRODUCTION

This chapter introduces the basic concepts of the work and the current situation and issues in teleoperation.

At the beginning, this chapter introduces the theoretical background of teleoperation and virtual reality systems and their practical implementations. It also provides the basic and conventional definitions used in this field of research. The second part explains the research problem of this work and establishes its aims. At the end, the positive effects of the work are explained.

1.1 Background

Teleoperation is a method of control by a human at a distance. It allows the human operator to explore an area or objects and perform work remotely. (Mihelj, Podobnik, 2012, p. 161)

Despite the development and integration of automated systems and the reduction of human intervention in production processes, teleoperation is still in demand. Moreover, new areas where teleoperation can be applied keep being discovered. Teleoperation allows the situation to be assessed by a human; at the current level of technology, only humans are able to make non-programmable quick decisions. Sometimes teleoperation is the only way to manage and solve a problem.

Teleoperation is used to control mobile robots. Besides mobile robots, it is also possible to use teleoperation in production, in places with a harmful effect on humans.

A teleoperator can be used in dangerous places where it is not safe for a human to be, and in places where it is not possible for a human to get in.

In some areas it is difficult or unprofitable to use automatic systems, and sometimes developing an automatic system is not the best solution: for example, in space (Figure 1), in ocean exploration or in areas with a high radiation level. (Aracil et al., 2007, p. 1) Another case is when a non-repetitive task must be performed; developing an automatic system for such a task may require too much time and money.

(8)

There are also extraordinary tasks where unpredictable factors and problems make it impossible to use an automatic system, and only a human can cope with them. However, using a person for work in certain harsh conditions can be not only dangerous but also very costly. In such cases it is cheaper to use teleoperation instead of direct human presence.

Figure 1. Practical implementation of the teleoperated AERcam free-flying robot in space. (Fredrickson, 2002, p. 1)

The development of mechatronics and robotics gives new opportunities in the field of teleoperation. Mechatronics is applied, for example, in the drives of capturing devices.

The development of robotics created a new discipline, telerobotics, in which robots reproduce operator actions at a distance. In this interaction the robot is a slave device controlled by a human through a master device. (Figure 2)

(9)

Figure 2. An example of master and slave devices. (Mihelj, Podobnik, 2012, p. 162)

The development of teleoperation has also been advanced by the cost reduction of television equipment and the increase in the bandwidth of data transmission channels.

A great leap has taken place in the field of virtual reality (VR) systems. Previous attempts to create such systems faced the problem of high development and component costs. The reason is that virtual reality systems must meet high requirements:

• The display in a VR system should have a high pixel density (DPI) and a high resolution.

• The display refresh rate and frame rate should be high, and delays should be small.

• Tracking of a person's position should be fast and accurate. The picture on the screen should change quickly and correspond to the change of position in space.

If any of these requirements is not fulfilled, or is insufficiently implemented due to the insufficient power of auxiliary equipment (for example, an underperforming computer), then there is a danger that a person using the system will experience so-called motion sickness.

Motion sickness occurs when there is a mismatch between visually perceived movement and the sense of the vestibular system. Motion sickness is accompanied by unpleasant symptoms such as dizziness, vertigo, fatigue and nausea. With these symptoms it becomes difficult or impossible to work and use the virtual reality system. Therefore, one of the tasks in developing a VR system is to minimize the likelihood of motion sickness.

(10)

The usage of VR systems is considered in many projects. There are numerous NASA projects where VR systems are used, for example the project with teleoperation of the AERcam free-flying robot. (Figure 3)

Figure 3. Utilization of the VR interface for remote exploration in space with teleoperation of the AERcam free-flying robot.

A big advantage of the new VR systems is their price. Developed for the consumer segment, in a competitive environment and in order to conquer the market, the price of these systems has decreased significantly.

New virtual reality systems also open up new, non-standard applications. Some parts of virtual reality systems can be used as input devices: such a system can provide kinematics-based control. VR systems also have a good video output: the display in their HMD part, where each eye gets a separate picture. This method is one of the best ways to create a 3D picture, and it also creates a good immersion effect.

(11)

Robots have different designs for different tasks; a robot's design is chosen to best suit the exact task. Sometimes it is a good idea to use solutions made by nature through thousands of years of evolution and natural selection. This type of robot design is called biomimetic.

Biomimetic means being similar to something biological; in this case the design of the robot copies some part of a human. With such a robot it is possible to make a telepresence device, a robotic avatar which copies human movements. This system has the following advantages: it can be controlled intuitively, it increases speed and provides real-time operability, it helps operators learn to work with such systems faster, it lowers the entry threshold, and it decreases the cost of training.

With two or more cameras it is possible to create a 3D image. With a biomimetic installation of the cameras it is easier to provide a stereo image directly to a human operator. Devices called head-mounted displays (HMD) have a separate display for each human eye. The mechanism of how stereo displays work is presented in Figure 4.

In conjunction with an HMD, using stereo video from two cameras in a biomimetic installation, it is possible to enhance the effect of immersion. With a stereo image a human can fully use his binocular vision, and it becomes possible to evaluate the depth and the distance between objects.
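As a side note, the geometric relation behind this depth perception can be stated explicitly (standard stereo geometry, not specific to this work): for two parallel cameras with focal length f and baseline b, a point imaged with horizontal disparity d lies at a distance of approximately Z = f·b / d, so nearer objects produce larger disparities between the left and right images.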

(12)

Figure 4. The principle of forming a 3D image on stereo screens. (Patterson, 2015, p. 4)

Besides, a 3D image can also be very helpful in other spheres for better image processing, for example in machine vision. Robots with several cameras can obtain more valuable data. For example, NASA used several camera pairs, the Panoramic Camera (Pancam) and the Navigation Camera (Navcam) (Figure 5), on the Martian rovers of the Mars Exploration Rover mission, Spirit and Opportunity (Figure 6).


(13)

Figure 5. Pancam (two cameras mounted near the edges) and Navcam (two cameras mounted closer to the center) mounted on the mast. (The Panoramic Camera (Pancam) - NASA Mars, 2019)

(14)

Figure 6. Mars Exploration Rover Opportunity with installed Pancam and Navcam. (Mars Exploration Rovers – NASA Mars, 2019)

1.2 Research Problem

In this work, it is proposed to use virtual reality systems for teleoperator control of a special robotic head. The VR system is proposed to serve as the operator's input-output device.

In order to reach this goal, a real-time platform for mapping and transfer of video, simulation and movement between a humanized robotic head and a set of VR glasses should be created.

(15)

This platform will connect the VR system and the robotic head in both directions. On the side of the robotic head, this means recording and transmission of stereo images as well as recording and transmission of stereo sound.

The VR system should read the position of the operator’s head in space. The robot head must repeat the positions and movements of the operator’s head.

1.3 Contribution

Firstly, thanks to the biomimetic design of the robotic head, its movements are adapted to human movements: it is able to move in the two directions most important for humans.

Utilization of this system frees the operator's hands. A movement of the head, which occurs on an intuitive level and does not require distraction, replaces a separate operation of turning the cameras. This in turn carries several advantages:

• It increases the reaction rate of the operator to an unexpected event.

• The entry threshold is reduced. Learning time is reduced.

• Work in the new system is intuitive.

• The 3D stereo image allows a person's binocular vision, which is designed for accurate depth assessment, to be used fully, and therefore the work can be performed and the situation assessed more accurately.

• In general, the system creates an immersion effect, which also allows the operator to concentrate better on the work.

• Work where the operator moves his head also reduces the probability of neck diseases.

(16)

2 METHODS AND EQUIPMENT

This chapter presents the methods and equipment necessary to make the real-time platform. At the beginning, the chapter establishes requirements for the design of the robotic head. These requirements are based on the physical parameters and neck movement capabilities of an average person. The second part focuses on the selection of hardware; the chosen equipment must comply with the requirements set in the first part.

2.1 Design of the robotic head

The robotic head should have a biomimetic design. In combination with control by recording the operator's head kinematics, the robotic head can be used as an avatar of the operator's head, a device which copies human movements. This design makes control of the head much easier, frees the hands from the task of controlling it, and decreases the cognitive load on the human operator. (Park, Jung & Bae, 2018)

Hardware for this robotic head was taken from another LUT project – TIERA. The design was modified in order to make it more biomimetic.

There is another reason to place the cameras in a biomimetic arrangement. Even with only one functional open eye, the human brain is still able to estimate the depth of a picture and the positions of objects. This happens partially because the brain reveals the position of objects while the viewpoint is moving.

Axes can be used to describe the rotations of an object: one rotation can be described as several rotations around several axes. Three axes are enough to express any rotation of any object in three-dimensional space. (Arcoverde et al., 2014)

By a rotation of the head, people usually mean rotation only around the vertical axis; in other cases they use the word "tilt", for example "tilt forward". For further work this is not convenient. In scientific papers, the yaw, pitch and roll types of rotation are usually used to describe head rotations. (Figure 7)

(17)

Figure 7. Human head motions (Alfayad et al., 2016)

The roll axis is aligned with the main direction of the human eye. The roll axis is the only one that simply rotates the image obtained by the human eyes: the roll rotation almost does not change the field of view and does not increase the amount of data. That is why, among the pitch, yaw and roll rotations, the roll rotation has the least importance.
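To make the yaw and pitch conventions concrete, the following sketch shows one common way to extract these two angles from a tracked head pose. It assumes the OpenVR row-major 3×4 pose matrix convention (+X right, +Y up, -Z forward) and is only an illustration of the geometry, not the exact code of the thesis program.

#include <cmath>
#include <openvr.h>

// Extract head yaw and pitch (in radians) from an OpenVR pose matrix.
// The third column of the rotation part is the device +Z axis;
// the viewing direction is its negation.
static void PoseToYawPitch(const vr::HmdMatrix34_t& m, float& yaw, float& pitch)
{
    const float fx = -m.m[0][2];
    const float fy = -m.m[1][2];
    const float fz = -m.m[2][2];

    yaw   = std::atan2(-fx, -fz);  // rotation about the vertical (+Y) axis
    pitch = std::asin(fy);         // elevation of the viewing direction
}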

The initial position of the developed robotic head should be close to the usual position of a human head. In this case it is more likely that, after being turned on, the robotic head will not make unnecessary movements.

An average person can rotate his head in yaw up to 70 degrees in each direction from the usual position. In pitch, the rotation is approximately 65 degrees up and 35 degrees down. (Gilman, Dirks & Hunt, 1979)

The robotic head should be able to rotate at least as far as a human can (Figure 8), but there is no reason to artificially reduce the rotation limits of the robotic head. Moreover, there can be a situation where the human operator rotates his head by rotating the whole body, not only the neck. Therefore, the rotations of the robotic head should follow the rotations of the human head in space, not the turns of the neck.
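As an illustration of how such a head angle can be turned into a servo command, the sketch below converts an angle into a Dynamixel goal position, assuming the MX-106 resolution of 4096 ticks per revolution (Table 4); the center and limit tick values are placeholders, not necessarily those finally used (Table 7).

#include <algorithm>
#include <cmath>

// Convert a head angle (radians, 0 = straight ahead) to a Dynamixel goal
// position in encoder ticks (4096 ticks per 360 degrees) around a given
// center tick, clamped so the servo is never commanded outside its limits.
static int AngleToGoalTicks(float angleRad, int centerTicks, int minTicks, int maxTicks)
{
    const float ticksPerRad = 4096.0f / (2.0f * 3.14159265f);
    const int goal = centerTicks + static_cast<int>(std::lround(angleRad * ticksPerRad));
    return std::clamp(goal, minTicks, maxTicks);
}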

(18)

Figure 8. Limits of head rotation (Gilman, Dirks & Hunt, 1979)

At least the two biggest companies on the world VR market, Oculus and HTC, advise developers to use the axes and directions of rotation presented in Figure 9.

(19)

Figure 9. Types and directions of head rotations in VR systems (Oculus Rift PC SDK Developer Guide, 2019)

(20)

2.2 Hardware

This section focuses on the hardware needed to make the real-time platform.

2.2.1 HTC Vive Pro

One of the best VR sets on the market is the HTC Vive Pro. It uses the Lighthouse tracking system. (Figure 10)

The Lighthouse system provides fast and precise tracking: the average positional error of the Vive is 2.63 mm and the average rotational error is 0.45°. (Groves, 2019)

But it also has its own drawbacks. Before utilization, the Lighthouse system should be set up. Once manually tuned, the base stations should not be moved from their place, or it will cause distortions in positioning.

Figure 10. Lighthouse position tracking explanations (Deyle, 2015)

(21)

The principles of the system are shown in Figure 11. In the beginning, the Lighthouse stations should be installed and configured. Inside these stations there are stationary infrared LEDs and two rotating infrared LEDs, one sweeping in pitch and one in yaw.

Figure 11. Calculating the angle between a device with a photodiode sensor and a station in the Lighthouse positioning system. (Deyle, 2015)

Photodiodes are installed on the system's devices. They capture flashes from the stations and thus determine the vertical and horizontal angles to each of the stations. Each rotating LED determines a plane containing the device in space. Two stations are more than enough to determine the exact location of the device in space.
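In simplified form, each of these angles follows directly from the sweep timing; the sketch below only illustrates the principle of Figure 11 with an assumed rotor period and is not the actual SteamVR implementation.

// Angle of a photodiode relative to a base station, from sweep timing.
// tSync  : time of the omnidirectional sync flash (start of a sweep)
// tHit   : time at which the rotating beam crosses the photodiode
// period : rotation period of the sweeping emitter
// Returns the angle (in degrees) within the swept plane.
static double SweepAngleDeg(double tSync, double tHit, double period)
{
    return 360.0 * (tHit - tSync) / period;
}

The two sweep angles measured from one station define the direction from that station to the photodiode; combining the directions from two stations then fixes the position by triangulation.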

The HMD has an OLED display with a resolution of 2880×1600, a pixel density of 615 ppi (pixels per inch) and a 90 Hz refresh rate.

In order to reduce the size and weight of the HMD, the HTC Vive Pro has lenses with a special design called Fresnel lenses. (Figure 12)

(22)

Figure 12. Differences between a Fresnel lens and a common one.

It is also possible to use the Vive Wireless Adapter to make the connection wireless, which can increase the usability of the system. However, this device is not included in the standard set.

Besides the Lighthouse, the HTC Vive uses positioning sensors embedded in the HMD: a G-sensor and a gyroscope.

(23)

Figure 13. Installation of photodiodes under the front cover of the HTC Vive Pro HMD. (Dempsey, 2018, p. 82-83)

2.2.2 Cameras

One of the goals of the robotic head is to capture and transfer a video signal to the HMD. In order to do that, two video cameras are required.

Vision is the main human sense: it is commonly estimated that about 90% of all information a human receives comes through the eyes. That is why the main task in teleoperation is to provide a good video signal.

Humans are able to estimate the depth of an image because human vision is binocular, and VR systems provide stereo imaging. Depth assessment not only increases immersion but also increases the amount of information transmitted. It allows the positioning of objects to be better understood, the situation to be assessed, and the accuracy of the work to be increased.

The cameras used in this work are Marshall CV-500 cameras (Figure 14). It is not necessary to replace this camera model with a newer one: the main characteristics of both models are the same. For example, the closest analogue models for the CV500-M2 are the CV502 and CV503, with a maximal resolution of 1920×1080 at 60 frames per second with progressive scanning. (Marshall Electronics CV503, 2019)

(24)

But a video stream with 60 frames per second alone is not enough to get a good quality picture. One more thing is important: the exposure time (shutter speed) of one frame. A video with a high frame rate may consist of many blurry frames, and increasing the number of frames will not solve this problem. Moreover, when the number of frames increases, the maximum possible exposure time for one frame decreases.
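For example, at 60 frames per second a single frame can be exposed for at most 1/60 s ≈ 16.7 ms, and at 120 frames per second for at most about 8.3 ms, so raising the frame rate only tightens this upper bound.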

The exposure time depends on illumination. With the CV-500-M2 it is possible to use a black-and-white mode if the luminosity is not sufficient. In this mode the required exposure time is reduced, which gives more distinct frames.

Figure 14. Marshall CV-500 MB (Marshall Electronics CV500 MB, 2019)

(25)

Table 1. Specifications of Marshall CV-500 MB (Marshall Electronics CV500 MB, 2019)

Imaging Sensor | 2.2 Megapixel 1/3" CMOS
S/N Ratio | More than 50
Video Output Mode | CV500-MB-2: 1080p59.94, 1080i59.94, 720p59.94; CV500-M-2: 1080p60/50/30/25, 1080i60/50, 720p60/50
Video Output Level | HD-SDI / 3G-SDI (BNC): 1920x1080, 1280x720; Composite-CVBS (BNC): 700TVL NTSC/PAL
Min. Illumination | 1.0 Lux (Color), 0.5 Lux (B/W), 0.02 Lux (Sens-up x30)
Electronic Shutter Speed | Auto / Flk
AGC | 1 ~ 15 Step
Sens-up | Auto / Off (x2 ~ x30)
D-WDR | On / Off
DEFOG | On / Off
Backlight Compensation | WDR / BLC / HSBLC / Off
Day & Night | Auto / B/W / Color / EXT (Selectable)
DNR | 2DNR / 3DNR (SMART NR)
DIS | On / Off (A-30/25P only)
Privacy Masking | On / Off (8 zones selectable)
Language | English / Japanese / Chinese / Korean / German / French / Italian / Spanish / Polish / Russian / Portuguese / Dutch / Turkish / Hebrew / Arab
Pixel Correction | LIVE DPC / STATIC DPC / BLACK DPC
Protocol | Pelco-D/P
Adjust | SHARPNESS / MONITOR / LSC / NTSC/PAL
Supplied Voltage | DC 12 V ±10%
Body Dimensions | 36(W) × 36(H) × 35(D) mm
Weight | 50 g

(26)

2.2.3 Video grabber

The cameras output a 3G-SDI signal. In order to transmit this signal to the PC (Figure 15), a video grabber should be chosen.

Figure 15. Scheme of frame grabber’s operation (Epiphan Video, 2019)

AV.io SDI (Figure 16) is a fast USB device with almost zero delay (Epiphan AV.io SDI Technical Specifications, 2019). It supports video up to FullHD format with 60 progressive uncompressed frames per second.

(27)

Figure 16. AV.io video grabber.

Other specifications of the AV.io video grabber are provided in the table below. (Table 2)

Table 2. AV.io SDI Technical Specifications (Epiphan AV.io SDI, 2019)

Interface | USB 3.0, USB 2.0
OS drivers | UVC and UAC device
Dimensions | 3.54″ × 2.36″ × 0.91″ (90 mm × 60 mm × 23 mm)
Connectors | SDI (BNC-style), USB 3.0 B-Type connector
Input | 3G-SDI, HD-SDI, SD-SDI (including embedded audio for 3G-SDI and HD-SDI*)
Input resolution | Up to 1920×1080
Output frame rate (note that the third-party software you're using sets the frame size and bitrate):
  640×360 | 15, 23.97, 24, 25, 29.97, 30, 50, 59.94, 60 fps
  640×480 | 15, 23.97, 24, 25, 29.97, 30, 50, 59.94, 60 fps
  960×540 | 15, 23.97, 24, 25, 29.97, 30, 50, 59.94, 60 fps
  1280×720 | 15, 23.97, 24, 25, 29.97, 30, 50, 59.94, 60 fps
  1920×1080 | 15, 23.97, 24, 25, 29.97, 30, 50, 59.94, 60 fps
Output color space | YUV 4:2:2
Capture latency | Near-zero; however, third-party applications may contribute to capture delay
Audio (input) | 16-bit and 24-bit PCM encoded audio, 48 kHz (audio capture supported from 3G-SDI and HD-SDI*)
Audio output | 16-bit 48 kHz stereo audio
LED | One LED to indicate the status of the AV.io SDI (power, readiness, and operation in progress)
OS Support | Windows 7, Windows 8.1, Windows 10, Mac OS X 10.10 and up, Linux distributions with kernel 3.5.0 or higher
Country of Origin | Made in North America (Canada)

(29)

2.2.4 Wi-Fi

In order to increase the mobility of the robotic head, it is possible to use Wi-Fi to transmit the video stream. In this work a SWIT S-4914 set of receivers (Figure 17) and transmitters (Figure 18) is used.

Figure 17. SWIT S-4914R (receivers)

A set consists of a receiver and a transmitter and provides a video transmission range of up to 700 meters. The latency of the video transmission is less than 1 ms. (Table 3)

The transmitters (Figure 18) should be installed near the source. They also require a 12 V power supply. (Table 3)

(30)

Figure 18. SWIT S-4914T (Transmitters)

All technical specifications are presented in the next table. (Table 3)

(31)

Table 3. Technical specifications of SWIT S-4914 set. (SWIT S-4914 T/R SDI/HDMI, 2019)

Model | Transmitter S-4914T | Receiver S-4914R
Input | 3G/HD/SD-SDI ×1, HDMI ×1 | /
Output | / | 3G/HD/SD-SDI ×1, HDMI ×1
Wireless frequency | 5.1 ~ 5.9 GHz
Radio modules | OFDM 16QAM
Max transmission distance | Approx. 700 m (line of sight)
Latency | ≤1 millisecond
Power consumption | ≤6.5 W | ≤6.5 W
Radio power | Max 63 mW | /
Input voltage | DC / Battery: 6.5 ~ 17 V | DC / Battery: 6.5 ~ 17 V
Working environment | Temperature: 0°C ~ +40°C
Dimensions | 71×120×37 mm | 130×188×47 mm
Net weight (with antenna) | 300 g | 797 g
SDI format | SMPTE-425M 1080p (60/59.94/50); SMPTE-274M 1080i (60/59.94/50), 1080p (30/29.97/25/24/23.98); SMPTE-RP211 1080psf (30/29.97/25/24/23.98); SMPTE-296M 720p (60/59.94/50); SMPTE-125M 480i (59.94); ITU-R BT.656 576i (50)

(32)

2.2.5 Servomotors

In order to provide fast and accurate movement, Dynamixel MX-106 servomotors were chosen. (Figure 19)

Figure 19. Dynamixel MX-106 servomotor. (Robotis emanual. MX-106 R/T, 2019)

Each device provides one rotational degree of freedom. It is also possible to connect up to 100 servomotors in series and assemble them in different combinations. (Figure 20, Figure 21)

Figure 20. MX-106T in dual mode (Robotis emanual. MX-106 R/T, 2019).

(33)

Figure 21. Possible assembly combinations with two MX-106 servomotors. (Robotis emanual. MX-106 R/T, 2019)

In order to provide rotation around two axes, the motors are assembled in one of the combinations presented in Figure 22. In this connection the Dynamixel servomotors provide yaw and pitch rotation similar to human head rotation: the yaw rotation rotates the axis of the pitch rotation.

(34)

Figure 22. Two Dynamixel MX-106 servomotors in the assembly.

The yaw rotation is limited by the length of the connection wires between the two motors. It is possible to use a longer wire, but the current length is enough to make one full revolution.

The full specifications of Dynamixel MX-106 servomotors are presented in Table 4.

Table 4. Specifications of Dynamixel MX-106 servomotors (ROBOTIS, 2019).

Item | Specifications
MCU | ARM Cortex-M3 (72 MHz, 32-bit)
Position Sensor | Contactless absolute encoder (12-bit, 360°); Maker: ams (www.ams.com), Part No: AS5045
Motor | Coreless (Maxon)
Baud Rate | 8,000 bps ~ 4.5 Mbps
Control Algorithm | PID control
Resolution | 4096 pulses/rev
Backlash | 20 arcmin (0.33°)
Operating Mode | Current Control Mode; Velocity Control Mode; Position Control Mode (0 ~ 360°); Extended Position Control Mode (multi-turn); Current-based Position Control Mode; PWM Control Mode (Voltage Control Mode)
Weight | 165 g
Dimensions (W × H × D) | 40.2 × 65.1 × 46 mm
Gear Ratio | 225:1
Stall Torque | 8.0 Nm (at 11.1 V, 4.8 A); 8.4 Nm (at 12 V, 5.2 A); 10.0 Nm (at 14.8 V, 6.3 A)
No Load Speed | 41 rev/min (at 11.1 V); 45 rev/min (at 12 V); 55 rev/min (at 14.8 V)
Radial Load | 40 N (10 mm away from the horn)
Axial Load | 20 N
Operating Temperature | -5 ~ +80 °C
Input Voltage | 10.0 ~ 14.8 V (recommended: 12.0 V)
Command Signal | Digital packet
Protocol Type | TTL half duplex asynchronous serial communication (8 bit, 1 stop, no parity); RS485 asynchronous serial communication (8 bit, 1 stop, no parity)
Physical Connection | RS485 / TTL multidrop bus
ID | 253 IDs (0 ~ 252)
Feedback | Position, Velocity, Current, Realtime tick, Trajectory, Temperature, Input Voltage, etc.
Material | Full metal gear; engineering plastic (front, middle, back), metal (front)
Standby Current | 100 mA

2.2.6 Microphone

As a sound-capturing device, a binaural microphone is used, in this work a 3Dio Free Space binaural microphone. (Figures 23-24)

Figure 23. Appearance of the 3Dio Free Space Binaural Microphone (3Dio Free Space Binaural Microphone, 2019)

(37)

Figure 24. Side view of a 3Dio Free Space Binaural Microphone (3Dio Free Space Binaural Microphone, 2019)

Binaural microphones capture sound from two sides. The capsule microphones are placed inside prosthetic silicone ear-shaped reflectors. This shape of reflector helps to record sound in a way similar to human ears. Thanks to this form of recording, the human brain can process and determine the direction and even the location of the sound source.

Utilization of this binaural microphone increases the level of telepresence. It can also increase operability and improve the immersion effect.

With proper operation, this device can provide high quality sound recording, but it requires an external power supply; it also uses a 9 V battery.

The specifications of the 3Dio Free Space Binaural Microphone, as provided by the manufacturer, are presented in the next table. (Table 5)

(38)

Table 5. Specifications of 3Dio Free Space Binaural Microphone (3Dio Free Space Binaural Microphone, 2019).

Directional Pattern | Omnidirectional
Frequency Range | 100 Hz - 10 kHz
Sensitivity | -28 ±3 dB at 1 kHz (0 dB = 1 V/Pa), RL = 3.9 kΩ, Vcc = 5 V
S/N Ratio | 80 dB at 1 kHz
Max. SPL (peak before clipping) | 122 dB SPL (typ.) at 1 kHz
Distortion Level | 3% max
Output Impedance | 2.4 kΩ ± 30% at 1 kHz (RL = 3.9 kΩ)
Operating Voltage | 5 V (3 V ~ 10 V)
Microphone Diameter | 10 mm (0.39 in)

(39)

2.2.7 Sound card

In case the PC does not have a line input, it is possible to use an external sound card, which usually connects to the PC via a USB 2.0 port.

In this work a Terratec Aureon 5.1 USB MKII external sound card is used. It has a line input with a stereo jack plug and a 16-bit ADC, and it supports a sample rate of up to 48 kHz.

Figure 25. External sound card (Terratec - AUREON 5.1 USB MKII, 2012)

(40)

2.2.8 Plate

In order to provide a biomimetic design for the robotic head, the two chosen cameras must be installed in a special way: the cameras must point in the same direction and be installed next to each other at the same height, approximately like human eyes. With this installation, the images from the cameras can be processed into a 3D image or a stereo image, and the video streams from these cameras can likewise be processed into a 3D or stereo video.

The plate (Figure 26) is shaped for the Marshall CV-500-MB2 cameras, and its purpose is to fix these cameras on the case of the 3Dio binaural microphone. This work was done in previous LUT research in the LUT-TIERA project, and the plate was made by 3D printing.

Figure 26. Design of the fixing Marshall CV-500-MB2 cameras plate.

The assembled robotic head is presented in the following photo. (Figure 27)

(41)

Figure 27. Assembled robotic head with installed cameras on the plate.

2.2.9 PC

The developed system is based on an x86-64 computer with the Windows 10 operating system.

Most VR systems do not have their own computing power to process the complex three-dimensional scenes regularly used in games.

VR systems require powerful computers with high-performance graphics. For proper operation of the HTC Vive Pro, the manufacturer advises using a computer that meets the system requirements given in Table 6.

(42)

Table 6. System requirements to a PC for HTC Vive Pro. (VIVE Pro HMD User guide, 2018)

Component | Recommended system requirements | Minimum system requirements
Processor | Intel Core i5-4590 / AMD FX 8350 equivalent or better | Intel Core i5-4590 / AMD FX 8350 equivalent or better
GPU | NVIDIA GeForce GTX 1060 / AMD Radeon RX 480 equivalent or better | NVIDIA GeForce GTX 970 / AMD Radeon R9 290 equivalent or better
Memory | 4 GB RAM or more | 4 GB RAM or more
Video output | HDMI 1.4, DisplayPort 1.2 or newer | HDMI 1.4, DisplayPort 1.2 or newer
USB port | 1× USB 2.0 or newer | 1× USB 2.0 or newer
Operating system | Windows 7 SP1, Windows 8.1 or later, Windows 10 | Windows 7 SP1, Windows 8.1 or later, Windows 10

These requirements are caused by the fact that the main application is computer games with complex 3D graphics, where the frame rate should be higher than the refresh rate of the HMD display (90 Hz). First of all, a high-performance graphics card is required.

The developed platform, however, also works with less powerful systems: since its tasks do not require rendering 3D scenes, it is possible to use less powerful graphics cards, and even laptops of an average price category. However, it is necessary to make sure that the required expansion ports are present (in our case, a docking station with a graphics card).

Since the use of a PC for this platform is mandatory, it was decided to create a program to run on a computer using OpenVR.

(43)

2.3 Software

2.3.1 OpenVR SDK

In order to work with the VR system, the OpenVR API (application programming interface) is used.

Like other APIs, it contains a set of libraries and commands required for application development.
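As a minimal usage sketch (assuming the standard OpenVR C++ header; error handling is shortened), initializing the API and reading the HMD pose looks roughly as follows:

#include <openvr.h>

int main()
{
    vr::EVRInitError err = vr::VRInitError_None;
    vr::IVRSystem* hmd = vr::VR_Init(&err, vr::VRApplication_Background);
    if (err != vr::VRInitError_None)
        return 1;

    vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
    hmd->GetDeviceToAbsoluteTrackingPose(vr::TrackingUniverseStanding, 0.0f,
                                         poses, vr::k_unMaxTrackedDeviceCount);

    if (poses[vr::k_unTrackedDeviceIndex_Hmd].bPoseIsValid)
    {
        // 3x4 matrix holding the rotation (head orientation) and position of the HMD
        const vr::HmdMatrix34_t& m =
            poses[vr::k_unTrackedDeviceIndex_Hmd].mDeviceToAbsoluteTracking;
        // ... extract yaw and pitch from m and send them to the servomotors
    }

    vr::VR_Shutdown();
    return 0;
}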

2.3.2 Dynamixel SDK

SDK stands for software development kit; it is a package for software development.

It includes a set of ready-made commands and libraries for developing applications working with Dynamixel servomotors.
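A minimal usage sketch of the Dynamixel SDK C++ API with Protocol 1.0 and the MX control table addresses also used in the appendix program (the port name, servo ID and goal value here are placeholders):

#include <dynamixel_sdk.h>

int main()
{
    // Open the serial port and select Protocol 1.0
    dynamixel::PortHandler* port = dynamixel::PortHandler::getPortHandler("COM6");
    dynamixel::PacketHandler* packet = dynamixel::PacketHandler::getPacketHandler(1.0);

    if (!port->openPort() || !port->setBaudRate(1000000))
        return 1;

    uint8_t dxl_error = 0;

    // Enable torque (control table address 24) and write a goal position (address 30)
    packet->write1ByteTxRx(port, 1, 24, 1, &dxl_error);
    packet->write2ByteTxRx(port, 1, 30, 2048, &dxl_error);

    port->closePort();
    return 0;
}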

2.3.3 Microsoft Visual Studio 2019

Microsoft Visual Studio 2019 is an IDE (integrated development environment). This software is used to increase the programmer's productivity.

A C++ program was written in it using OpenVR SDK and Dynamixel SDK.

2.3.4 SteamVR

SteamVR is developed by Valve Corporation, one of the biggest game studios. Nowadays this company is focused on another product, the video game digital distribution service Steam.

SteamVR is the base software required to work with the HTC Vive. This is because the main purpose of the consumer segment of VR systems is gaming.

2.3.5 OBS

For capturing, compositing and projecting the video content, the open-source software OBS Studio (Open Broadcaster Software) was used.

(44)

3 DEVELOPMENT OF THE REAL-TIME PLATFORM FOR TRANSFER OF VIDEO, AUDIO AND MOVEMENTS BETWEEN THE ROBOTIC HEAD AND THE VR SYSTEM

This chapter briefly explains the connections between the chosen devices and their setup. The first section focuses on providing video transmission from the cameras to the HMD, and the second on providing audio transmission from the binaural microphone to the headphones.

3.1 Video connection

This section explains the method of video transfer. The cameras record and transmit two FullHD 60 FPS video signals over 3G-SDI. Two AV.io video grabbers receive the 3G-SDI video signals and transmit uncompressed video to the PC via USB 3.0. These two video signals are combined on the PC in OBS into a stereoscopic side-by-side video, which is then transmitted to the HMD.

In order to transfer the video signal to the HMD, OBS needs to be set up (Figure 28).

Figure 28. OBS program interface.

(45)

At the beginning it is necessary to set the video parameters for the HMD in the Video tab of the OBS settings. (Figure 29) To avoid rescaling of the video signal, the resolution should be the same as that of the display: the base and output resolutions are chosen according to the HMD, 2880×1600. The signal frame rate is chosen according to the video sources: 60 FPS.

Figure 29. Parameters of the video in OBS.

In the next step, the video grabbers should be connected before setting up the input video signals. In order to use them in OBS, these video devices need to be added as new sources of a video signal: in the Sources section, click on the plus button and select "Video Capture Device". (Figure 30)

(46)

Figure 30. Adding a new video source.

Then a window with video properties appears, in which the required video capture device should be selected. In this work both devices are named "AV.io SDI Video", and at this step it is not important to know which camera a video signal comes from. (Figure 31)

Positioning and transforming settings of each video signal are demonstrated in the next step.

(47)

Figure 31. Properties for a new video source.

(48)

Although the resolution of the HTC Vive Pro is 2880×1600, the FOV (field of view) depends on the distance between the eyes and the display. This parameter is adjustable; restriction of the FOV is used to reduce VR sickness. (Al Zayer et al., 2019) Even in the closest position it is not possible to see the very edge of the display.

In this step the positioning of each video signal is shown. As noted in the previous step, the number of pixels in height (1600) is greater than the number of pixels in FullHD (1080), so it is not necessary to rescale the source video. Due to this, there is no loss in quality, which would occur when rescaling a raster image, and no additional load is put on the processor.

In OBS it is possible to transform each video source. (Figure 32)

Figure 32. Positioning and transformation of a video signal.

The resulting transform parameters for the video stream from the left camera are shown in Figure 33.

(49)

Figure 33. Transform parameters of the video from the left camera.

The resulting transform parameters for the video stream from the right camera are shown in Figure 34.

Figure 34. Transform parameters for the right video stream

In order to start the video transmission to the chosen display, a Fullscreen Projector of the scene should be selected. (Figure 35)

(50)

Figure 35. Choosing the HMD for fullscreen projection of the processed stereo video.

3.2 Audio connection

This section explains the method of the audio connection. The 3Dio Free Space binaural microphone records sound in a way similar to how a human hears it. The audio signals are transmitted via an AUX cable with mini-jack connectors to a sound card.

For better results it is preferable to use a line input instead of a common microphone input. The difference is in the sensitivity and electrical resistance (impedance). A microphone input is usually intended for cheap capsule microphones without their own power supply, while a line input does not provide any. A line input is also always stereo, while a microphone input in most cases is mono.

Nowadays many PCs do not have their own line input. Usually they have only a mini-jack for stereo output, for example to connect headphones, and a microphone input; sometimes a PC combines both, headphones and one mono microphone channel, in one mini-jack.

It is possible to use an external sound card with all the necessary audio ports. In this work the Aureon 5.1 USB MKII external sound card is used. (Figure 36)

Figure 36. Connection to Aureon 5.1 USB MKII external sound card

After connecting the AUX cable to the sound card, a new connection appears in the Recording tab of the Windows 10 sound settings. (Figure 37)

(52)

Figure 37. Choosing a default record device.

After that it is possible to redirect the audio signal to any device. In our case the HTC Vive Pro has its own headphones, and it is better to use them.

It is necessary to open the properties of the Line input, open the Listen tab, tick "Listen to this device" and choose the desired playback device. (Figure 38)

(53)

Figure 38. Choosing a playback device.

After that all sounds captured by a binaural microphone are played in HTC Vive Pro headphones.

(54)

4 RESULTS

During the research, a real-time platform for mapping and transfer of video, simulation and movement between a humanized robotic head and a set of VR glasses was developed. The platform was implemented for an existing robotic head, part of the LUT mobile assembly robot TIERA. In order to make the robotic head more human-like, it was modified and the connection of the servomotors was changed. To create the real-time platform, a computer program was written. It allows the humanized robotic head to be used as a teleoperated device based on the HTC Vive VR system and the newly developed computer program. The operator sees and hears the video and audio captured by the humanized robotic head, and the head follows the operator's head movements.

The scheme in Figure 39 shows the interaction between different parts of the system.

Figure 39. Connection scheme between system components.

The image presented in Figure 40 is a screenshot of the real-time video from the robotic head's cameras.

(55)

Figure 40. Example of a stereoimage received from a robotic head.

In order to create an accurate system, it is necessary to regulate the position precisely and quickly. The accuracy of the system is very important, since it affects the likelihood of motion sickness occurring. That is why the development of a fast system is one of the priorities of this work. The most common and widespread method of control is a PID regulator.
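In its standard form such a regulator computes the control signal from the position error e(t) as u(t) = Kp·e(t) + Ki·∫e(t)dt + Kd·de(t)/dt, where the three gains weight the present error, its accumulated history and its rate of change.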

The Dynamixel MX-106 servomotor has its own embedded PID position regulator. It is also possible to change the values of the maximum torque and maximum speed. These values are placed in the RAM sector of its memory, so they can be changed dynamically in order to improve control performance.
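As an illustration of such dynamic tuning, the sketch below writes new gains and a torque limit into a servo's RAM control table with the Dynamixel SDK; the addresses follow the MX-106 control table (as also defined in the appendix program), while the values passed in are placeholders rather than the final values of Table 7.

#include <dynamixel_sdk.h>

// Write new PID gains and a torque limit into a servo's RAM control table.
// Control table addresses: D gain 26, I gain 27, P gain 28, torque limit 34.
static void SetServoGains(dynamixel::PortHandler* port, dynamixel::PacketHandler* packet,
                          uint8_t id, uint8_t p, uint8_t i, uint8_t d, uint16_t torqueLimit)
{
    uint8_t err = 0;
    packet->write1ByteTxRx(port, id, 28, p, &err);
    packet->write1ByteTxRx(port, id, 27, i, &err);
    packet->write1ByteTxRx(port, id, 26, d, &err);
    packet->write2ByteTxRx(port, id, 34, torqueLimit, &err);
}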

Suitable parameters and PID coefficients for the servomotors' controllers were found. The final parameters are shown below in Table 7.

Table 7. Adjusted Control Table of both yaw and pitch Dynamixel servomotors

Address | Name | Yaw | Pitch
0 | Model Number | 320 | 320
2 | Version of Firmware | 33 | 40
3 | ID | 1 | 2
4 | Baud Rate | 1 | 1
5 | Return Delay Time | 250 | 250
6 | CW Angle Limit (Joint / Wheel Mode) | 0 | 1694
8 | CCW Angle Limit (Joint / Wheel Mode) | 4095 | 3413
10 | Drive Mode | 0 | 0
11 | The Highest Limit Temperature | 80 | 80
12 | The Lowest Limit Voltage | 60 | 60
13 | The Highest Limit Voltage | 160 | 160
14 | Max Torque | 1023 | 1023
16 | Status Return Level | 2 | 1
17 | Alarm LED | 36 | 36
18 | Alarm Shutdown | 36 | 0
20 | Multi Turn Offset | 28 | 1
22 | Resolution Divider | 212 | 0
24 | Torque Enable | 0 | 0
25 | LED | 0 | 0
26 | D Gain | 0 | 0
27 | I Gain | 0 | 0
28 | P Gain | 18 | 22
30 | Goal Position | 2257 | 2565
32 | Moving Speed | 0 | 0
34 | Torque Limit | 1023 | 1023
36 | Present Position | 2183 | 2565
38 | Present Speed | 0 | 0
40 | Present Load | 0 | 0
42 | Present Voltage | 116 | 117
43 | Present Temperature | 32 | 32
44 | Registered Instruction | 0 | 0
46 | Moving | 0 | 0
47 | Lock | 0 | 0
48 | Punch | 0 | 0
50 | Realtime Tick | 0 | 7361
68 | Sensed Current | 2048 | 2048
70 | Torque Control Mode | 0 | 0
71 | Goal Torque | 0 | 0
73 | Goal Acceleration | 0 | 0

An example of using the fully assembled system is presented in the next picture. (Figure 41)

Figure 41. Demonstration of the developed system.

(58)

5 CONCLUSION

The general aim of this research was to develop and implement a real-time platform for mapping and transfer of video, simulation and movement between a humanized robotic head and a set of VR glasses.

During the research, the construction of the TIERA robotic head developed by previous LUT researchers was studied and modified into a separate telepresence robot with a biomimetic design. The HTC Vive Pro VR system was integrated into this platform. A C++ program has been written which determines the position of the human operator's head and controls the movements of the humanized robotic head so that the robot's movements correspond to the human movements. Three-dimensional video and stereophonic sound transmission from the robot's cameras and binaural microphone to the VR system's head-mounted display and headphones is provided.

The developed platform is applicable to mobile robots or vehicles. It is also possible to install it separately in a hazardous environment.

Future work can be carried out in several directions. First, to make remote control of this platform possible over the Internet.

Second, to add MR/AR elements and a VR GUI which will help the human operator to assess the situation and increase awareness.

Third, to use other devices tracked by the Lighthouse tracking system, for instance the HTC Vive controllers included in the standard HTC Vive Pro kit.

Fourth, to add more servomotors and increase the number of degrees of freedom.

(59)

LIST OF REFERENCES

3Dio Free Space Binaural Microphone, 2019a, [3Dio webpage]. [Referred 14.08.2019].

Available: https://3diosound.com/products/free-space-binaural-microphone.

3Dio User Guide, 2019a, [3Dio webpage]. [Referred 15.08.2019]. Available:

https://3diosound.com/pages/user-guide-manual-3dio-binaural-microphone.

Al Zayer, M., Adhanom, I.B., MacNeilage, P. & Folmer, E. 2019, "The Effect of Field-of- View Restriction on Sex Bias in VR Sickness and Spatial Navigation Performance", Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems ACM, New York, NY, USA, pp. 354.

Alfayad, S., El Asswad, M., Abdellatif, A., Ouezdou, F.B., Blanchard, A., Beaussé, N. &

Gaussier, P. 2016, "HYDROïD Humanoid Robot Head with Perception and Emotion Capabilities: Modeling, Design, and Experimental Results", Frontiers in Robotics and AI, vol. 3, art. 15. pp. 1-5.

Aracil, R., Buss, M., Cobos, S., Ferre, M., Hirche, S., Kuschel, M. & Peer, A. 2007, "The Human Role in Telerobotics" in Advances in Telerobotics, eds. M. Ferre, M. Buss, R. Aracil, C. Melchiorri & C. Balaguer, Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 11-24.

Arcoverde, E., Duarte, R., Barreto, R., Magalhaes, J., Bastos, C., Ing Ren, T. & Cavalcanti, G. 2014, "Enhanced real-time head pose estimation system for mobile device", Integrated Computer Aided Engineering, vol. 21, pp. 281-293.

Bahat, H.S., Sprecher, E., Sela, I. & Treleaven, J. 2016, "Neck motion kinematics: an inter- tester reliability study using an interactive neck VR assessment in asymptomatic individuals", European Spine Journal, vol. 25, no. 7, pp. 2139-2148.

Borrego, A., Latorre, J., Alcañiz, M. & Llorens, R. 2018, "Comparison of Oculus Rift and HTC Vive: feasibility for virtual reality-based exploration, navigation, exergaming, and rehabilitation", Games for health journal, vol. 7, no. 3, pp. 151-156.

Buys, K., De Laet, T., Smits, R. & Bruyninckx, H. 2010, "Blender for Robotics: Integration into the Leuven Paradigm for Robot Task Specification and Human Motion Estimation", Simulation, eds. N. Ando, S. Balakirsky, T. Hemker, M. Reggiani & O. von Stryk, Springer Berlin Heidelberg, Berlin, Heidelberg, Modeling, and Programming for Autonomous Robots, pp. 15-25.

Caserman, P., Garcia-Agundez, A., Konrad, R., Göbel, S. & Steinmetz, R. 2019, "Real-time body tracking in virtual reality using a Vive tracker", Virtual Reality, vol. 23, no. 2, pp. 155-168.

Clary, P. & Kellar, K. Aug 11, 2017, Vive Setup Guide. [GitHub webpage]

[Referred 12.08.2019] Available: https://github.com/osudrl/CassieVrControls/wiki/Vive- Setup-Guide.

(60)

Dempsey, P. 2018, "The Teardown: HTC Vive Pro", Engineering & Technology, vol. 13, no. 5, pp. 82-83.

Deyle, T. 2015, Valve's "Lighthouse" Tracking System May Be Big News for Robotics.

[Hizook webpage] [Referred 28.07.2019] Available:

http://www.hizook.com/blog/2015/05/17/valves-lighthouse-tracking-system-may-be-big- news-robotics.

Epiphan AV.io SDI Technical Specifications, 2019b, [Epiphan webpage]. [Referred 15.08.2019]. Available: https://www.epiphan.com/products/avio-sdi/tech-specs/.

Fredrickson, S.E. [web document]. Feb 29, 2012. [Referred 18.09.2019]., "Mini AERCam for In-Space Inspection", NASA/Langley Research Center, Hampton. p. 9. Available in PDF-file: https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20120002583.pdf

Galambos, P., Csapó, Á, Zentay, P., Fülöp, I.M., Haidegger, T., Baranyi, P. & Rudas, I.J.

2015, "Design, programming and orchestration of heterogeneous manufacturing systems through VR-powered remote collaboration", Robotics and Computer-Integrated Manufacturing, vol. 33, pp. 68-77.

Gilerson, A. 2018, Oct 12, RVIZ Plugin Instructions. [GitHub webpage]. [Referred 02.08.2019]. Available: https://github.com/AndreGilerson/rviz_vive_plugin.

Gilman, S., Dirks, D. & Hunt, S. 1979, "Measurement of head movement during auditory localization", The Journal of the Acoustical Society of America, vol. 11, pp. 37-41.

Groves, L.A., Carnahan, P., Allen, D.R., Adam, R., Peters, T.M. & Chen, E.C.S. 2019,

"Accuracy assessment for the co-registration between optical and VIVE head-mounted display tracking", International Journal of Computer Assisted Radiology and Surgery, vol.

14, no. 7, pp. 1207-1215.

Guzman, R., Navarro, R., Beneto, M. & Carbonell, D. 2016, "Robotnik—Professional Service Robotics Applications with ROS" in Robot Operating System (ROS): The Complete Reference (Volume 1), ed. A. Koubaa, Springer International Publishing, Cham, pp. 253-288.

Inamura, T. & Mizuchi, Y. 2018, "Competition Design to Evaluate Cognitive Functions in Human-Robot Interaction Based on Immersive VR", RoboCup 2017: Robot World Cup XXI, eds. H. Akiyama, O. Obst, C. Sammut & F. Tonidandel, Springer International Publishing, Cham, pp. 84-94.

Joseph, L. 2018, "Programming with ROS" in Robot Operating System (ROS) for Absolute Beginners: Robotics Programming Made Easy, ed. L. Joseph, Apress, Berkeley, CA, pp. 171-236.

Kellar, K. 2017a, Aug 16, OpenVR Quick Start. [GitHub webpage]. [Referred 01.07.2019].

Available: https://github.com/osudrl/CassieVrControls/wiki/OpenVR-Quick-Start.

(61)

Kellar, K. 2017b, "Tracking: OpenVR SDK", [GitHub webpage], [Referred 12.08.2019].

Available:https://github.com/osudrl/CassieVrControls/wiki/Tracking:-OpenVR-SDK.

Lager, M. & Topp, E.A. 2019, Remote Supervision of an Autonomous Surface Vehicle using Virtual Reality. 10th IFAC Symposium on Intelligent Autonomous Vehicles IAV 2019. Gdansk, Poland, 3–5.07.2019. pp. 387-392.

Lubetzky, A.V., Wang, Z. & Krasovsky, T. 2019, Mar, Head mounted displays for capturing head kinematics in postural tasks. Journal of Biomechanics, vol. 86, pp. 175-182.

Ludwig, J., Selan, J. & Leiby, A. Jun 11, 2019, OpenVR SDK Wiki. [GitHub webpage].

[Referred 02.08.2019]. Available: https://github.com/ValveSoftware/openvr/wiki.

Mars Exploration Rovers – NASA Mars, 2019d, [Nasa Mars webpage]. [Referred 21.09.2019]. Available: https://mars.jpl.nasa.gov/mer/.

Marshall Electronics CV500-MB2. 2019a. [Marshall Electronics webpage]. [Referred 15.07.2019]. Available: http://www.marshall-usa.com/discontinued/cameras/CV500-MB- 2.php.

Marshall Electronics CV503, 2019c, [Marshall Electronics webpage]. [Referred 22.08.2019]

Available: http://www.marshall-usa.com/cameras/CV503/.

Mihelj, M. & Podobnik, J. 2012, "Teleoperation" in Haptics for Virtual Reality and Teleoperation, eds. M. Mihelj & J. Podobnik, Springer Netherlands, Dordrecht, pp. 161-178.

Niehorster, D.C., Li, L. & Lappe, M. [web document] 2017, [Referred 12.08.2019]. "The accuracy and precision of position and orientation tracking in the HTC vive virtual reality system for scientific research", i-Perception, vol. 8, no. 3, pp. 1-23. Available in PDF-file:

https://www.unimuenster.de/imperia/md/content/psyifp/ae_lappe/freie_dokumente/niehorst er__li__lappe_2017.pdf

Oculus Rift PC SDK Developer Guide, 2019e, [Oculus Rift webpage]. [Referred 18.08.2019].

Available: https://developer.oculus.com/documentation/pcsdk/latest/concepts/dg-sensor/.

Park, S., Jung, Y. & Bae, J. 2018, "An interactive and intuitive control interface for a tele- operated robot (AVATAR) system", Mechatronics, vol. 55, pp. 54-62.

Patterson, R.E. 2015, "Basics of Human Binocular Vision" in Human Factors of Stereoscopic 3D Displays, ed. P.D. Patterson Robert Earl, Springer London, London, pp. 9-21.

Pfeiffer, S. Mar 25, 2018, Teleoperating on HTC Vive. [GitHub webpage] [Referred 12.08.2019]. Available: https://github.com/uts-magic-lab/htc_vive_teleop_stuff.

R. Codd-Downey, P. M. Forooshani, A. Speers, H. Wang & M. Jenkin. 2014, From ROS to unity: Leveraging robot and virtual environment middleware for immersive teleoperation.

(62)

ROBOTIS MX-106T/R e-Manual, 2019g, [Robotis webpage]. [Referred 17.07.2019].

Available: http://emanual.robotis.com/docs/en/dxl/mx/mx-106-2/.

Roldán, J.J., Peña-Tapia, E., Garzón-Ramos, D., de León, J., Garzón, M., del Cerro, J. &

Barrientos, A. 2019, "Multi-robot Systems, Virtual Reality and ROS: Developing a New Generation of Operator Interfaces" in Robot Operating System (ROS): The Complete Reference (Volume 3), ed. A. Koubaa, Springer International Publishing, Cham, pp. 29-64.

Shen, X., Chong, Z.J., Pendleton, S., James Fu, G.M., Qin, B., Frazzoli, E. & Ang, M.H.

2016a, "Teleoperation of On-Road Vehicles via Immersive Telepresence Using Off-the- shelf Components", Intelligent Autonomous Systems 13, eds. E. Menegatti, N. Michael, K.

Berns & H. Yamaguchi, Springer International Publishing, Cham, pp. 1419-1433.

Shtuchkin, A., Nov 28, 2016, HTC Vive's Lighthouse DIY Position Tracking Wiki. [GitHub webpage] [Referred 02.08.2019] Available: https://github.com/ashtuchkin/vive-diy- position-sensor/wiki.

Somrak, A., Humar, I., Hossain, M.S., Alhamid, M.F., Hossain, M.A. & Guna, J. Dec 2019, Estimating VR Sickness and user experience using different HMD technologies: An evaluation study. Future Generation Computer Systems, vol. 94. pp. 302-316.

Streppel, B., Pantförder, D. & Vogel-Heuser, B. 2018, "Interaction in Virtual Environments - How to Control the Environment by Using VR-Glasses in the Most Immersive Way", Virtual, Augmented and Mixed Reality: Interaction, Navigation, Visualization, eds. J.Y.C.

Chen & G. Fragomeni, Springer International Publishing, Cham, Embodiment, and Simulation, pp. 183-201.

SWIT S-4914 T/R SDI/HDMI 700m Wireless Transmission System, 2019h, [SWIT webpage]. [Referred 05.09.2019]. Available: http://swit.cc/productshow.aspx?id=193.

The Panoramic Camera (Pancam) - NASA Mars, 2019f, [Nasa Mars webpage]. [Referred 20.09.2019]. Available: https://mars.jpl.nasa.gov/mer/mission/instruments/pancam/.

Varandas de Sousa, F. & J. Dias, T. May 2019, ROS package for publishing HTC VIVE device locations. [GitHub webpage]. [Referred 03.08.2019]. Available:

https://github.com/robosavvy/vive_ros.

Vikne, H., Bakke, E.S., Liestøl, K., Engen, S.R. & Vøllestad, N. 2013, "Muscle activity and head kinematics in unconstrained movements in subjects with chronic neck pain; cervical motor dysfunction or low exertion motor output?", BMC musculoskeletal disorders, vol. 14, no. 1, pp. 314.

VivePort SDK User Guide, [VivePort webpage]. [Referred 15.08.2019]. Available:

https://developer.viveport.com/documents/sdk/en/viveportsdk.html?_ga=2.74502738.1113 751930.1560256737-1170058082.1560256737.

Xiong, J. Jun 9, 2017, Vive Single Base Station Tracking. [GitHub webpage] [Referred 10.08.2019]. Available: https://github.com/JamesBear/ViveSingleBaseStationTracking.

(63)

Xu, X., Chen, K.B., Lin, J. & Radwin, R.G. 2015, "The accuracy of the Oculus Rift virtual reality head-mounted display during cervical spine mobility measurement", Journal of Biomechanics, vol. 48, no. 4, pp. 721-724.

(64)

#include <SDL.h>

#include <GL/glew.h>

#include <SDL_opengl.h>

#if defined( OSX )

#include <Foundation/Foundation.h>

#include <AppKit/AppKit.h>

#include <OpenGL/glu.h>

// Apple's version of glut.h #undef's APIENTRY, redefine it

#define APIENTRY

#else

#include <GL/glu.h>

#endif

#include <stdio.h>

#include <string>

#include <cstdlib>

#include <openvr.h>

#include "shared/lodepng.h"

#include "shared/Matrices.h"

#include "shared/pathtools.h"

//#include <vector>

//dynamixel SDK library

#include <dynamixel_sdk.h>

#include <conio.h>

// Control table address

#define ADDR_MX_TORQUE_ENABLE 24 // Control table address is different in Dynamixel model but it worked with RX-28 that I tested the code with

#define ADDR_MX_GOAL_POSITION 30

#define ADDR_MX_PRESENT_POSITION 36

#define ADDR_MX_P_GAIN 28

#define ADDR_MX_I_GAIN 27

#define ADDR_MX_D_GAIN 26

#define ADDR_MX_TORQUE_LIMIT 34

// Protocol version

#define PROTOCOL_VERSION 1.0 // Default setting

#define DXL_ID 2 // Dynamixel ID: 1

#define BAUDRATE 1000000

#define DEVICENAME "COM6" // Check which port is being used on your controller

// ex) Windows: "COM1" Linux: "/dev/ttyUSB0"

#define TORQUE_ENABLE 1 // Value for enabling the torque

// ... Dynamixel will rotate between this value
#define DXL_MAXIMUM_POSITION_VALUE 3724 // and this value; check manual for range. This is the range of RX-28

#define DXL_MOVING_STATUS_THRESHOLD 10 // Dynamixel moving status threshold note:if it is set too small the dynamixel will never get to its target point...

#define DXL_P_GAIN 18 // R: P gain

#define DXL_I_GAIN 3

#define DXL_D_GAIN 0

#define DXL_TORQUE_LIMIT 600

#if defined(POSIX)

#include "unistd.h"

#endif

#ifndef _WIN32

#define APIENTRY

#endif

#ifndef _countof

#define _countof(x) (sizeof(x)/sizeof((x)[0]))

#endif

void ThreadSleep(unsigned long nMilliseconds)
{
#if defined(_WIN32)
	::Sleep(nMilliseconds);
#elif defined(POSIX)
	usleep(nMilliseconds * 1000);
#endif
}

class CGLRenderModel { public:

CGLRenderModel(const std::string& sRenderModelName);

~CGLRenderModel();

bool BInit(const vr::RenderModel_t& vrModel, const vr::RenderModel_TextureMap_t& vrDiffuseTexture);

void Cleanup();

void Draw();

const std::string& GetName() const { return m_sModelName; } private:

GLuint m_glVertBuffer;

GLuint m_glIndexBuffer;

GLuint m_glVertArray;

GLuint m_glTexture;

GLsizei m_unVertexCount;

std::string m_sModelName;

};

static bool g_bPrintf = true;

//-----------------------------------------------------------------------------
// Purpose: Main application class: initializes OpenVR and OpenGL, processes
// HMD and controller input, and renders the stereo scene.
//-----------------------------------------------------------------------------
class CMainApplication
{
public:

CMainApplication(int argc, char* argv[]);

virtual ~CMainApplication();

bool BInit();

bool BInitGL();

bool BInitCompositor();

void Shutdown();

void RunMainLoop();

bool HandleInput();

void ProcessVREvent(const vr::VREvent_t& event);

void RenderFrame();

bool SetupTexturemaps();

void SetupScene();

void AddCubeToScene(Matrix4 mat, std::vector<float>& vertdata);

void AddCubeVertex(float fl0, float fl1, float fl2, float fl3, float fl4, std::vector<float>& vertdata);

void RenderControllerAxes();

bool SetupStereoRenderTargets();

void SetupCompanionWindow();

void SetupCameras();

void RenderStereoTargets();

void RenderCompanionWindow();

void RenderScene(vr::Hmd_Eye nEye);

Matrix4 GetHMDMatrixProjectionEye(vr::Hmd_Eye nEye);

Matrix4 GetHMDMatrixPoseEye(vr::Hmd_Eye nEye);

Matrix4 GetCurrentViewProjectionMatrix(vr::Hmd_Eye nEye);

void UpdateHMDMatrixPose();

Matrix4 ConvertSteamVRMatrixToMatrix4(const vr::HmdMatrix34_t& matPose);

GLuint CompileGLShader(const char* pchShaderName, const char* pchVertexShader, const char* pchFragmentShader);

bool CreateAllShaders();

CGLRenderModel* FindOrLoadRenderModel(const char* pchRenderModelName);

private:

bool m_bDebugOpenGL;

bool m_bVerbose;

bool m_bPerf;

bool m_bVblank;

bool m_bGlFinishHack;

vr::IVRSystem* m_pHMD;
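// Interface handle to the OpenVR runtime, used to query HMD poses, events and projection matrices.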

std::string m_strDriver;

std::string m_strDisplay;

vr::TrackedDevicePose_t

m_rTrackedDevicePose[vr::k_unMaxTrackedDeviceCount];

Matrix4 m_rmat4DevicePose[vr::k_unMaxTrackedDeviceCount];
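// Per-controller bookkeeping: input handles, latest pose and the render model used to draw it.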

struct ControllerInfo_t

{
vr::VRInputValueHandle_t m_source = vr::k_ulInvalidInputValueHandle;

vr::VRActionHandle_t m_actionPose = vr::k_ulInvalidActionHandle;

vr::VRActionHandle_t m_actionHaptic = vr::k_ulInvalidActionHandle;

Matrix4 m_rmat4Pose;

CGLRenderModel* m_pRenderModel = nullptr;

std::string m_sRenderModelName;

bool m_bShowController;

};

enum EHand
{
Left = 0,
Right = 1,
};

ControllerInfo_t m_rHand[2];

private: // SDL bookkeeping

SDL_Window* m_pCompanionWindow;

uint32_t m_nCompanionWindowWidth;

uint32_t m_nCompanionWindowHeight;

SDL_GLContext m_pContext;

private: // OpenGL bookkeeping

int m_iTrackedControllerCount;

int m_iTrackedControllerCount_Last;

int m_iValidPoseCount;

int m_iValidPoseCount_Last;

bool m_bShowCubes;

Vector2 m_vAnalogValue;

std::string m_strPoseClasses; // what classes we saw poses for this frame

char m_rDevClassChar[vr::k_unMaxTrackedDeviceCount]; // for each device, a character representing its class

int m_iSceneVolumeWidth;

int m_iSceneVolumeHeight;

int m_iSceneVolumeDepth;

float m_fScaleSpacing;

float m_fScale;

int m_iSceneVolumeInit; // if you want something other than the default 20x20x20

float m_fNearClip;

float m_fFarClip;

GLuint m_iTexture;

unsigned int m_uiVertcount;

GLuint m_glSceneVertBuffer;

GLuint m_unSceneVAO;

GLuint m_unCompanionWindowVAO;

GLuint m_glCompanionWindowIDVertBuffer;

GLuint m_glCompanionWindowIDIndexBuffer;

unsigned int m_uiCompanionWindowIndexSize;

GLuint m_glControllerVertBuffer;

GLuint m_unControllerVAO;

unsigned int m_uiControllerVertcount;

Matrix4 m_mat4HMDPose;

Matrix4 m_mat4eyePosLeft;

Matrix4 m_mat4eyePosRight;

Matrix4 m_mat4ProjectionCenter;

Matrix4 m_mat4ProjectionLeft;

Matrix4 m_mat4ProjectionRight;

struct VertexDataScene
{
Vector3 position;
Vector2 texCoord;
};

struct VertexDataWindow
{
Vector2 position;

Vector2 texCoord;

VertexDataWindow(const Vector2& pos, const Vector2 tex) : position(pos), texCoord(tex) { }

};

GLuint m_unSceneProgramID;

GLuint m_unCompanionWindowProgramID;

GLuint m_unControllerTransformProgramID;

GLuint m_unRenderModelProgramID;

GLint m_nSceneMatrixLocation;

GLint m_nControllerMatrixLocation;

GLint m_nRenderModelMatrixLocation;
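// Each eye renders into a multisampled framebuffer that is resolved to a texture and submitted to the VR compositor.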

struct FramebufferDesc
{
GLuint m_nDepthBufferId;

GLuint m_nRenderTextureId;

GLuint m_nRenderFramebufferId;

GLuint m_nResolveTextureId;

GLuint m_nResolveFramebufferId;

};

FramebufferDesc leftEyeDesc;

FramebufferDesc rightEyeDesc;

bool CreateFrameBuffer(int nWidth, int nHeight, FramebufferDesc& framebufferDesc);

uint32_t m_nRenderWidth;

uint32_t m_nRenderHeight;

std::vector< CGLRenderModel* > m_vecRenderModels;
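// Handles for the SteamVR input actions and action set resolved from the application's action manifest.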

vr::VRActionHandle_t m_actionHideCubes = vr::k_ulInvalidActionHandle;

vr::VRActionHandle_t m_actionHideThisController = vr::k_ulInvalidActionHandle;

vr::VRActionHandle_t m_actionTriggerHaptic = vr::k_ulInvalidActionHandle;

vr::VRActionHandle_t m_actionAnalongInput = vr::k_ulInvalidActionHandle;

vr::VRActionSetHandle_t m_actionsetDemo = vr::k_ulInvalidActionSetHandle;

};


//-----------------------------------------------------------------------------
// Purpose: Returns true if the action is active and had a rising edge
//-----------------------------------------------------------------------------

bool GetDigitalActionRisingEdge(vr::VRActionHandle_t action, vr::VRInputValueHandle_t* pDevicePath = nullptr)

{ vr::InputDigitalActionData_t actionData;

vr::VRInput()->GetDigitalActionData(action, &actionData, sizeof(actionData), vr::k_ulInvalidInputValueHandle);

if (pDevicePath)

{ *pDevicePath = vr::k_ulInvalidInputValueHandle;

if (actionData.bActive)

{ vr::InputOriginInfo_t originInfo;

if (vr::VRInputError_None == vr::VRInput()->GetOriginTrackedDeviceInfo(actionData.activeOrigin, &originInfo, sizeof(originInfo)))
{
*pDevicePath = originInfo.devicePath;
}
}
}
return actionData.bActive && actionData.bChanged && actionData.bState;

}

//-----------------------------------------------------------------------------
// Purpose: Returns true if the action is active and had a falling edge
//-----------------------------------------------------------------------------

bool GetDigitalActionFallingEdge(vr::VRActionHandle_t action, vr::VRInputValueHandle_t* pDevicePath = nullptr)

{ vr::InputDigitalActionData_t actionData;

vr::VRInput()->GetDigitalActionData(action, &actionData, sizeof(actionData), vr::k_ulInvalidInputValueHandle);

if (pDevicePath)

{ *pDevicePath = vr::k_ulInvalidInputValueHandle;

if (actionData.bActive)

{ vr::InputOriginInfo_t originInfo;

if (vr::VRInputError_None == vr::VRInput()->GetOriginTrackedDeviceInfo(actionData.activeOrigin, &originInfo, sizeof(originInfo)))
{
*pDevicePath = originInfo.devicePath;
}
}
}
return actionData.bActive && actionData.bChanged && !actionData.bState;

}

//-----------------------------------------------------------------------------
// Purpose: Returns true if the action is active and its state is true
//-----------------------------------------------------------------------------

bool GetDigitalActionState(vr::VRActionHandle_t action, vr::VRInputValueHandle_t* pDevicePath = nullptr)

{ vr::InputDigitalActionData_t actionData;

vr::VRInput()->GetDigitalActionData(action, &actionData, sizeof(actionData), vr::k_ulInvalidInputValueHandle);

if (pDevicePath)

{ *pDevicePath = vr::k_ulInvalidInputValueHandle;

if (actionData.bActive)

{ vr::InputOriginInfo_t originInfo;

if (vr::VRInputError_None == vr::VRInput()->GetOriginTrackedDeviceInfo(actionData.activeOrigin, &originInfo, sizeof(originInfo)))
{
*pDevicePath = originInfo.devicePath;
}
}
}
return actionData.bActive && actionData.bState;

}

//-----------------------------------------------------------------------------
// Purpose: Outputs a set of optional arguments to debugging output, using
// the printf format setting specified in fmt*.
//-----------------------------------------------------------------------------
void dprintf(const char* fmt, ...)

{ va_list args;

char buffer[2048];

va_start(args, fmt);

vsprintf_s(buffer, fmt, args);

va_end(args);

if (g_bPrintf)
	printf("%s", buffer);
}
