
Ehab Aboudaya

Mobile Teleoperation of a Mobile Robot

Examiners: Professor Ville Kyrki

Professor Heikki Kälviäinen

Supervisor: Professor Ville Kyrki


Lappeenranta University of Technology
Faculty of Technology Management
Department of Information Technology
Ehab Aboudaya

Mobile Teleoperation of a Mobile Robot

Thesis for the Degree of Master of Science in Technology 2010

109 pages, 67 figures and 8 tables.

Examiners: Professor Ville Kyrki

Professor Heikki Kälviäinen

Keywords: Mobile Teleoperation, Mobile Devices, Nokia N770, Maemo 2.2, ARIA, Graphical User Interface.

This thesis describes the design and implementation of a graphical application on a mobile device to teleoperate a mobile robot. The Department of Information Technology at Lappeenranta University conducts research in robotics, and the main motivation was to extend the available teleoperation applications to mobile devices.

The challenge was to port an existing robot software library onto an embedded device platform and then develop a suitable user interface application that provides sufficient functionality to perform teleoperation tasks over a wireless communication network.

This thesis involved investigating previous teleoperation applications and conducting similar experiments to test and evaluate the designed application's functionality and to measure its performance on a mobile device; the identified goals were achieved.

The implemented solution offered good results for navigation purposes, particularly for teleoperating a visible robot, and suggests solutions for exploration when no environment map is available to the operator.


Tampere, September 1st, 2010

Ehab Aboudaya


Contents

1 INTRODUCTION
1.1 Background
1.2 Objective and Scope
1.3 Structure of the thesis

2 FUNDAMENTAL CONCEPTS
2.1 Mobile Robots
2.2 Teleoperation
2.3 Mobile Teleoperation
2.4 Mobile Devices
2.5 Data Communication
2.6 Graphical User Interfaces

3 RELATED WORK
3.1 Stationary Teleoperation
3.1.1 Desktop Client Interfaces
3.1.2 Web-Based Interfaces
3.2 Mobile Teleoperation
3.3 Conclusions

4 REQUIREMENTS AND SPECIFICATION
4.1 Overall Description
4.2 Functional Requirements
4.3 Hardware
4.5 Software
4.6 Teleoperating Environment
4.7 Dependencies and Scope
4.8 Design and Implementation Constraints
4.9 Use Cases
4.9.1 Connection Settings
4.9.2 Connect To ARIA Server
4.9.3 Disconnect from ARIA Server
4.9.4 Command
4.9.5 Stop
4.9.6 Get Image
4.9.7 Move
4.9.8 Get Robot Status
4.9.9 Get Environment Map
4.10 Non-Functional Requirements
4.10.1 Performance and Quality Requirements
4.10.2 Safety Requirements

5 IMPLEMENTATION
5.1 UI Wire Frames
5.2 Deployment Diagram
5.3 Architectural Design
5.4 Class Descriptions
5.5 Sequence Diagrams
5.6 Teleoperation Command Buttons
5.7 Development Tools
5.8 Interface Screen Shots
5.8.1 RTMU Startup and Idle State
5.8.2 RTMU Menu And Settings Dialog
5.8.3 Teleoperation With Environment Map
5.8.4 Teleoperation With No Map
5.8.5 RTMU Hidden Toolbar
5.8.6 RTMU Ratio Increase / Decrease Indicator
5.8.7 RTMU Robot Image Feedback
5.9 Installation and Help system

6 TESTS AND RESULTS
6.1 Functional Testing
6.4 Discussion

7 CONCLUSIONS AND FUTURE DEVELOPMENT

REFERENCES


List of Figures

1 Pioneer 3-DX Mobile Robot
2 Typical ARIA application structure
3 Human operator for mobile robots
4 Teleoperation system architecture sample
5 Remote driving with PdaDriver
6 Maemo N770, 2005 Nokia
7 Maemo platform Key Components
8 Minimum WLAN setup for intended teleoperation communication
9 Sample of WLAN connectivity configuration on the N770
10 MSC for client/server communication
11 Different dialog GUI but same functionality
12 MaemoPad GUI application
13 MVC Pattern
14 Novel UI for 3D map Exploration
15 Virtual joystick gestures
16 Fixed Location Operator Teleoperation
17 RobotUI Desktop client UI
18 RobotUI, GUI Architecture
19 The Advantage of Mobility - Desktop GUI design
25 SWATS - Web Services Teleoperation
26 Java Based Teleoperation System
27 Java Based Teleoperation Web UI
28 Finger gesture interface
29 PdaDriver
30 PdaDriver System Architecture
31 PdaDriver Screen modes
32 PdaDriver Vision Screen
33 PdaDriver Sensor Modes
34 PdaDriver Combined Screen Mode
35 The Advantage of Mobility - PDA Interface
36 The Advantage of Mobility - Simulated map teleoperation
37 The Advantage of Mobility - Visibility of Robot
38 The Advantage of Mobility - PDA and Desktop covered area
39 The Advantage of Mobility - Mean and Standard Deviation Visibility Times
40 Teleoperation Common Tasks and UI elements
41 RTMU overall system entities
42 P3DX with laptop
43 Teleoperating Environment Map
44 Screen shot of MobileSim robot simulator
45 Operator manages robot hardware and ARIA server software
46 RTMU application use cases
47 RTMU Command Use Case Sequence Diagram
48 Hildon Application Views
49 Single Toolbar, 360 or 420 pixels
50 RTMU Menu Items
51 RTMU Settings Dialog
52 RTMU Auxiliary Popups
53 RTMU Deployment Diagram
54 RTMU Class View
55 Connect SD
56 Move or Stop Command
57 Request Robot Image
58 RTMU Startup and Idle State
59 Menu And Settings Dialog
65 Teleoperation Test Arena
66 Robot Home Position
67 Robot Goal Position


List of Tables

1 PDA-Based Human-Robotic trial and task times
2 PDA-Based Human-Robotic goal achievement
3 PDA-Based Human-Robotic Best Performance
4 Advantages and Disadvantages of teleoperation using mobile devices
5 Recommendations for Mobile Teleoperation and GUI Components
6 RTMU Class Descriptions
7 RTMU physical Nokia N770 command buttons
8 Functional Testing Results


List of Abbreviations

GPS Global Positioning System
GUI Graphical User Interface
HTML HyperText Markup Language
HW Hardware
IO Input Output
IT Information Technology
JPEG Joint Photographic Experts Group
LAN Local Area Network
LUT Lappeenranta University of Technology
MSC Message Sequence Chart
MVC Model View Controller
OS Operating System
P3-DX Pioneer 3-DX
PC Personal Computer
PDA Personal Digital Assistant
RTMU Robot Teleoperation Maemo User Interface
SA Situational Awareness
SD Sequence Diagram
SDK Software Development Kit
SW Software
TCP Transmission Control Protocol
UI User Interface
USB Universal Serial Bus
WLAN Wireless Local Area Network


another plant.

The interface is expected to provide sufficient controls and enough feedback information for the operator to complete the assigned task efficiently. Adding autonomous features to the remote machine, such as a mechanism to avoid obstacles or the use of solar power when present, will enhance the teleoperation experience, but the main research topic remains the human operator interface.

With current advances in wireless communication and mobile devices, it is beneficial to research the possibility of utilizing those components for teleoperation applications. In addition to the importance of the topic, robotics research at the Department of Information Technology of Lappeenranta University of Technology is active and encourages development in this area.

1.2 Objective and Scope

The objective of this thesis is to design and implement a graphical application on a mobile device to wirelessly teleoperate a mobile robot. The application interface must be able to perform the following tasks in real time:

• Connect and disconnect to/from the remote robot.

• Display the robot's environment map and plot its location on it.

• Move the robot in four directions: forward, backward, left and right.

• Display robot feedback information.

• Request and display an image from the robot's mounted camera.


The application will be developed on the Nokia N770 mobile device and use the Pioneer P3-DX as the robot hardware; both can be requested and borrowed from the IT department's laboratory. More application-related details are specified in the next chapter and in Chapter 4.

1.3 Structure of the thesis

This thesis contains 8 chapters:

Chapter 2 presents fundamental concepts by briefly discussing teleoperation, mobile device platforms, WLAN (Wireless Local Area Network) and software implementation practices, and concludes with a section on GUI interfaces for mobile devices. Chapter 3 discusses previous studies on similar topics and concludes with a general list of desired functionalities that will be considered in the thesis project work.

In Chapter 4 the requirements are identified and the use cases are described; this information is used as a basis for the project development phase. Chapter 5 explains the architecture and implementation design, including the GUI design proposal. In Chapter 6 the implementation is tested with functional and teleoperation experiments, and the remaining sections show the results and discuss the findings. Chapter 7 concludes the study with some experiences and finally suggests possible further work. Chapter 8 is reserved for references.


2 FUNDAMENTAL CONCEPTS

2.1 Mobile Robots

Mobile robots may be classified according to the following features [1]:

• Environment of usage: land, aerial or underwater robots.

• Locomotion: devices with legs, tracks or wheels for movement.

A popular mobile robot produced by MobileRobots Inc., shown in Fig. 1, is used for academic purposes; this particular model is useful for research fields that involve mapping, teleoperation and robot localization studies [2].

Figure 1. Pioneer 3-DX Mobile Robot [3].


The hardware specification can be summarized in the following list [2]:

• Weight 9 kg, payload 25 kg; dimensions: length 45 cm, width 40 cm, height 24 cm.

• Run time of 3-4 hrs with an onboard PC, using 12 V batteries with a recharge time of 6 hrs.

• Mobility: 2 foam-filled drive wheels and 1 rear caster for balance.

• Maximum speed of 1.6 meters per second: translation up to 1,400 mm/sec and rotation up to 300 deg/sec.

• Sensors on front and back using ultrasonic sonars, plus 500-tick wheel encoders.

• Controls and ports: serial, auxiliary, motor and microcontroller connections, and joydrive.

The mobile robot hardware for this thesis is based on the P3-DX (Pioneer 3-DX) model, and extensive use is made of the ARIA (Advanced Robot Interface for Applications) software library. ARIA comes with every MobileRobots product; it is an open-source software development kit based on the C++ programming language.

Developing a robot application with ARIA is a straightforward practice due to its client/server methodology; the main class infrastructure is illustrated in Fig. 2. ARIA also ships with a supported simulator that matches the functional behaviour of the real robot hardware.
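To make this methodology concrete, the following minimal sketch (an illustration only, assuming the ARIA 2.x C++ API of this period; it is not code from the thesis project) connects to a robot, or to the MobileSim simulator through the same code path, and drives it briefly:

    #include "Aria.h"

    int main(int argc, char **argv)
    {
      // Initialize the ARIA library before any other ARIA call.
      Aria::init();

      ArArgumentParser parser(&argc, argv);
      ArSimpleConnector connector(&parser);
      ArRobot robot;

      // Connect to the real P3-DX or to the simulator.
      if (!connector.parseArgs() || !connector.connectRobot(&robot))
        Aria::exit(1);

      // Run the robot processing cycle in a background thread.
      robot.runAsync(true);

      // Motion commands are issued while holding the robot lock.
      robot.lock();
      robot.enableMotors();
      robot.setVel(200);     // translate forward at 200 mm/s
      robot.setRotVel(0);    // no rotation
      robot.unlock();

      ArUtil::sleep(2000);   // drive for two seconds

      robot.lock();
      robot.stop();
      robot.unlock();

      Aria::exit(0);
    }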

The IT department of LUT (Lappeenranta University of Technology) provides the robot hardware and the robot server software used for this thesis project; the focus and objectives therefore relate to the client end of teleoperation.


Figure 2. Typical ARIA application structure [4].


2.2 Teleoperation

Teleoperation indicates operation of a machine at a distance [5], and it is mostly associated with mobile robots performing a certain task. If such a device has the ability to perform autonomous work, it is called a telerobot [6].

Teleoperation systems can be considered a master/slave relationship where a human operator executes a task in a remote environment, as simplified in Fig. 3.

Figure 3. Human operator for mobile robots [7].

Teleoperation systems can be composed of a basic textual input/output terminal connected with a joystick for robot navigation. Their success depends highly on the control system; additional environment feedback information enhances the operator's decision making, and therefore interface research is necessary.

Safe navigation can be realized using haptic interfaces. A study [8] measuring force feedback information from a special haptic joystick resulted in fewer obstacle collisions. The joystick was used to control the velocity of a minirover by applying force and pressure. In combination with other sensory calculations, it demonstrated better navigation and improved remote control performance.

Another example of a haptic device is a touch screen, where tactile feedback technology creates the perception of pressing physical buttons [9] on the screen surface.

In 2005 Nokia Corporation released the N770 internet tablet using a similar technology called Haptikos [10]. This particular model is an ideal candidate for mobile teleoperation and was selected as the mobile operator device for this thesis. While robots can operate autonomously, an operator will require an intelligent interface less for control and more for monitoring and diagnosis [11].

Teleoperation systems share a common architecture design, depicted in Fig. 4. Similarly, most operator application interfaces are found to offer robot movement controls, execution of defined functions (e.g. picking up a box), and feedback sent back to the operator.


Figure 4. Teleoperation system architecture sample [12].

2.3 Mobile Teleoperation

Teleoperation is inherently remote, and while fixed-location control centers can serve the main purpose, developing a mobile system increases the operator's SA (situational awareness), offers easy deployment, eliminates installation costs and increases the operation communication area; in addition, an intuitive interface design reduces training time significantly.

To address the need for mobility, handheld devices are used for the following advantages:

• Portable, lightweight and affordable.

• Easy to operate and extend.

• Availability of haptic screens and hardware key designs.

• Wireless communication ready.

• Offer extra communication services such as GPS (Global Positioning System).

• Variety of operating systems and multitasking support.

• Wide range of development platforms available.


At the same time, teleoperation on handheld devices may not be suitable for tasks that require extensive computing, a wide screen display or large memory usage, in addition to prolonged power usage.

In a study of mobile teleoperation [13], a PDA (Personal Digital Assistant) was used for remotely driving a vehicle. The system, called PdaDriver, uses multiple control modes to make vehicle teleoperation fast and efficient (Fig. 5).

Figure 5. Compaq iPAQ PocketPC and remote driving with PdaDriver [13].

Mobile handhelds are the most attractive devices for mobile teleoperation, and in the next section we introduce the Nokia N770 internet tablet.

2.4 Mobile Devices

Most mobile devices are closely related to wireless technology; cell phones are a good example. In this thesis we aim to teleoperate a mobile robot from a mobile device, which should essentially meet the following criteria:

• The teleoperating hardware is a mobile device or a smartphone.

• The mobile device can run the P3-DX robot software library.

• A wireless communication network must be used between the mobile device and the robot.


• The mobile device can be borrowed from the IT department lab.

• Power saving modes can be switched on or off on the mobile device.

The Nokia N770 internet tablet in Fig. 6 has been selected as the mobile device for this thesis because it meets all of the above conditions. The Nokia N770 is an electronic mobile device that can access the internet through WLAN/Wi-Fi or Bluetooth connections [14], and its main specification can be described as follows:

• Price: Starting from $250 US dollars [15].

• Physical Properties: weight 230 g, dimensions 141 x 79 x 19 mm [16].

• Display: High-resolution (800x480), 4.13-inch touch screen with up to 65,536 colours.

• Processor & Memory: TI OMAP, 244.1 MHz, 64 MB, RS-MMC flash memory.

• Operating System: Internet Tablet 2006 edition, Maemo OS (Operating System) 2.2.

• Networking: WLAN (IEEE 802.11b/g).

• Battery Life: 1500mAh, 7 days standby time, 3 hours browsing time.

• Other: support for JPEG, USB device mode for PC connectivity, bundled with multiple useful applications; Bluetooth is also available [17].

What makes the N770 attractive is its haptic on-screen keyboard and its programmable hardware keys, such as the directional scroll keys, zoom key, escape key and menu key.


Figure 6. The Nokia N770 Maemo device, released in 2005 [18].

One can think of the scroll keys as a 2D virtual joystick; the scroll keys make a clicking sound when the operator keeps pressing, giving the impression of pressure feedback.
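As an illustration of this idea, the sketch below (hypothetical, assuming a GTK+ 2 application and the conventional GDK keysyms that the N770 hardware keys generate; sendCommand is a placeholder helper, not part of any library) maps key-press events to directional robot commands:

    #include <gtk/gtk.h>
    #include <gdk/gdkkeysyms.h>

    // Placeholder: would forward a named command to the robot server.
    static void sendCommand(const char *command)
    {
      g_print("command: %s\n", command);
    }

    // Maps the scroll (arrow) keys to robot motion commands.
    static gboolean onKeyPress(GtkWidget *widget, GdkEventKey *event,
                               gpointer data)
    {
      switch (event->keyval)
      {
        case GDK_Up:     sendCommand("forward");  return TRUE;
        case GDK_Down:   sendCommand("backward"); return TRUE;
        case GDK_Left:   sendCommand("left");     return TRUE;
        case GDK_Right:  sendCommand("right");    return TRUE;
        case GDK_Return: sendCommand("stop");     return TRUE;
        default:         return FALSE;  // let other handlers run
      }
    }

    int main(int argc, char **argv)
    {
      gtk_init(&argc, &argv);

      GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
      g_signal_connect(window, "key-press-event",
                       G_CALLBACK(onKeyPress), NULL);
      g_signal_connect(window, "destroy",
                       G_CALLBACK(gtk_main_quit), NULL);

      gtk_widget_show_all(window);
      gtk_main();
      return 0;
    }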

The N770 offers Maemo as its operating system; Maemo is mostly based on open source software. At the kernel level it is a Debian-based Linux distribution, which is suitable not only for network development but also for rebuilding available libraries from scratch. Fig. 7 illustrates the key architectural components of the N770 Maemo platform.

Figure 7. Key components of the Maemo platform [19].

A Linux-based device was another key factor for selecting the N770. The following items describe the roles of the structural layers in Fig. 7 [19]:

• Linux kernel: the core and heart of the Maemo platform, which controls and facilitates the usage of hardware resources such as memory, processes, networking, files and other devices.


• Maemo Launcher: specialized in launching all Maemo applications; it provides initializations and startup data.

• GTK+: a framework of C libraries responsible for graphical interface designs; it is based on callback events that trigger the execution of specific functions.

• Hildon UI Framework: provides the user interface on top of the X Window System, using the GTK+ UI (User Interface) framework developed in the GNOME project. Hildon adds more components such as the control panel, status bar and home applets.

• Maemo SDK: the SDK (Software Development Kit) contains the tools needed for application development; C++ can be used with a cross-compilation environment called Scratchbox. Scratchbox is used to solve host and target environment compilation problems by isolating them from each other.

2.5 Data Communication

Communication between the operator and the robot will be established over WLAN (Wireless Local Area Network). The N770 comes equipped with an embedded WLAN/Wi-Fi adapter; the device only needs to be connected to a network access point for operation. This task is a straightforward procedure and very transparent to the end operator.

WLAN is a standard compliant with the IEEE 802.11b and 802.11g specifications. The data exchange rate can reach up to 54 Mbit/s, large enough to stream continuous video and carry heavy network communication. An access point is usually a router used as a bridge [20] to a wired system, or it can provide a standalone private network. The minimum data communication setup needed is illustrated in Fig. 8.


Figure 8. Minimum WLAN setup for intended teleoperation communication.

Indoor connectivity coverage can extend to a few tens of meters, and extension is possible by adding extra WLAN access points. This gives the operators more mobility to move around and remain connected to the network within a broad coverage area [21].

Network security can be established using encryption systems such as WPA (Wi-Fi Protected Access): when joining a secure wireless network a key is expected to be entered, and upon verification the router grants the device a distinct IP address on the network. For this thesis work the mobile device and the robot are joined in a wireless private network with security turned off. Fig. 9 shows the interface for connecting the N770 device to an available WLAN access point.

Data communication for teleoperation follows the same protocol states as any client/server application, depicted in the sample MSC (Message Sequence Chart) in Fig. 10. This chart captures a scenario in which a user (U) sends a request to an interface (I) to gain access to a resource R. The interface in turn sends a request to the resource and receives "grant" as a response, after which it sends "yes" to U [22].

ArNetworking (MobileRobots Advanced Robotics Networking Infrastructure) is used to add networking services to a robot control program. The client requests a service from the server in the form of a command, or the client can request information feedback at a specified rate [4]. The command is executed on the robot hardware to accomplish its defined task, and the outcome can then be sent back to the requesting client.


Figure 9. Sample of WLAN connectivity configuration on the N770.

The ArNetworking interface library is the main building block for communication between the client application used by the operator and the robot's server. More details will be identified in the implementation chapter of this thesis.
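A hedged sketch of this request model is shown below, assuming the ArNetworking client API that is bundled with ARIA (ArClientBase and its request calls). The data name "update" and the packet layout read in the handler are assumptions made for illustration; the actual names and formats depend on the services the ARIA server registers.

    #include <cstdio>
    #include "Aria.h"
    #include "ArNetworking.h"

    // Handler invoked whenever the server sends a feedback packet.
    // The field layout below is assumed for illustration only.
    void handleUpdate(ArNetPacket *packet)
    {
      int x       = packet->bufToByte4();   // robot x position (mm)
      int y       = packet->bufToByte4();   // robot y position (mm)
      int battery = packet->bufToByte2();   // battery level
      printf("pose (%d, %d), battery %d\n", x, y, battery);
    }

    ArGlobalFunctor1<ArNetPacket *> updateCB(&handleUpdate);

    int main(int argc, char **argv)
    {
      Aria::init();
      ArClientBase client;

      // Connect to the ARIA server (7272 is the usual ArNetworking port).
      if (!client.blockingConnect("192.168.0.10", 7272))
        Aria::exit(1);
      client.runAsync();

      // Ask for the assumed "update" data every 100 ms.
      client.addHandler("update", &updateCB);
      client.request("update", 100);

      // A one-shot command request, e.g. stopping the robot.
      client.requestOnce("stop");

      ArUtil::sleep(5000);
      client.disconnect();
      Aria::exit(0);
    }

This mirrors the command/feedback split described above: requestOnce corresponds to a single command, while request with an interval corresponds to feedback at a specified rate.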

Figure 10. Typical MSC chart for client/server communication [22].


2.6 Graphical User Interfaces

Software applications represent information in different forms, ranging from simple text outputs to sophisticated 3D graphs; this representation is encapsulated within a form called an interface. GUIs (Graphical User Interfaces) are the most common feature of today's applications running on common computing devices.

Software applications are expected not only to meet functional requirements but also to offer visual coherency, with command layouts simple enough for an average user to get accustomed to. Major software vendors start a GUI trend which, picked up and used by other developers, becomes very common, and eventually some of its features become standards for the next generations of GUI applications.

Fig. 11 shows two different GUI designs with the same functionality. The difference is in how they are arranged: the right window groups properties, whereas in the left window properties are laid out horizontally, which is harder to use and increasingly complex to maintain. Thus, GUI design should aim at simplicity, grouping, usability and consistency.

Figure 11. Same functional dialog but different GUI design [23].

Designing interfaces on mobile devices is effort-demanding due to the limited display size, communication costs and power consumption. User interaction must also be kept to a minimum, and for this reason some mobile GUI designs, such as MaemoPad in Fig. 12, let the keyboard be popped up or hidden when not needed.

Successful software development can be practiced using design patterns such as the MVC (Model View Controller) pattern, simplified in Fig. 13.


Figure 12. Sample of a Maemo application with virtual keyboard [19].

The controller is the middle layer, where user actions received from the view are passed to the model for processing; the model outcome is then read by the controller via callbacks and finally reflected onto the view. This type of pattern is useful for different types of platforms, and because the details are separated, the MVC pattern offers an efficient method of development. Most modern mobile platforms provide a standard interface development framework, but it is also necessary to investigate whether it includes the required components.

Figure 13. MVC Pattern
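To make the pattern concrete, the following minimal C++ sketch (illustrative only, with hypothetical class names) separates the three roles just described: the view only displays, the model owns the state and logic, and the controller mediates between them:

    #include <iostream>

    // Model: owns the application state and the processing logic.
    class RobotModel
    {
    public:
      RobotModel() : speed_(0) {}
      void setSpeed(int mmPerSec) { speed_ = mmPerSec; }
      int speed() const { return speed_; }
    private:
      int speed_;
    };

    // View: only presents state; it knows nothing about the model.
    class RobotView
    {
    public:
      void showSpeed(int mmPerSec)
      {
        std::cout << "speed: " << mmPerSec << " mm/s" << std::endl;
      }
    };

    // Controller: passes user actions from the view to the model,
    // reads the outcome back and reflects it onto the view.
    class RobotController
    {
    public:
      RobotController(RobotModel &model, RobotView &view)
        : model_(model), view_(view) {}

      void onSpeedButtonPressed(int requested)
      {
        model_.setSpeed(requested);       // action goes to the model
        view_.showSpeed(model_.speed());  // outcome updates the view
      }
    private:
      RobotModel &model_;
      RobotView &view_;
    };

    int main()
    {
      RobotModel model;
      RobotView view;
      RobotController controller(model, view);
      controller.onSpeedButtonPressed(200);  // simulated user action
      return 0;
    }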

On the Nokia N770 Maemo platform the desktop interface is based on the Hildon application framework built using GTK+. Hildon GTK+ is more suitable for embedded device interface design, as in Fig. 12, and it is binary-compatible with normal GTK+ applications without the native Maemo look [24].
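As a sketch of how a GTK+ application is given the Maemo look (an assumption-laden illustration: it presumes the Maemo 2.x hildon-libs API, in which HildonApp and HildonAppView provide the top-level window and view; header paths and function names differ in later Maemo versions):

    #include <gtk/gtk.h>
    #include <hildon-widgets/hildon-app.h>
    #include <hildon-widgets/hildon-appview.h>

    int main(int argc, char **argv)
    {
      gtk_init(&argc, &argv);

      // HildonApp is the top-level container; HildonAppView is a view in it.
      GtkWidget *app  = hildon_app_new();
      GtkWidget *view = hildon_appview_new("Main");
      hildon_app_set_title(HILDON_APP(app), "RTMU");
      hildon_app_set_appview(HILDON_APP(app), HILDON_APPVIEW(view));

      // Ordinary GTK+ widgets are packed into the view as usual.
      GtkWidget *label = gtk_label_new("Robot Teleoperation Maemo UI");
      gtk_container_add(GTK_CONTAINER(view), label);

      gtk_widget_show_all(app);
      gtk_main();
      return 0;
    }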

GTK+ is a multi-platform toolkit for creating graphical user interfaces. It offers a very complete set of widgets to use when building applications; GTK+ was initially developed for and used by the GIMP, the GNU Image Manipulation Program.

Today GTK+ is used by a large number of applications and is the toolkit used by the GNU project's GNOME desktop [25].

In the next chapter we shall review previous studies related to teleoperation GUI designs; Hildon GTK+ is selected as the development platform for the thesis project.


3 RELATED WORK

The P3-DX robot used for this thesis is basically a vehicle robot. Fong et al. identified the following types of teleoperation interfaces [26]:

• Direct manual interfaces: the user is presented with direct feedback from the camera and uses a joystick for robot locomotion.

• Multimodal/multi-sensor interfaces: the user is presented with multiple feedback sources in a single view.

• Supervisory control interfaces: the user issues high level commands and mon- itors the remote environment.

• Novel interfaces: the user can teleoperate using haptic gestures or a novel state-of-the-art future technology.

An interesting interface category is the novel interface designs, because they can be mixed with new concepts. For example, Fig. 14, adopted from [26], [27], illustrates teleoperation in a virtual 3D world that demonstrated improved results compared to a 2D interface.

Figure 14. 3D map exploration [27], 2006, a novel interface example.


In Fig. 15 the operator is using gestures to drive a robot, acting much like a virtual joystick: the left hand is used to activate the gesture system and the right hand to specify direction and magnitude [28]. Due to the complexity of such systems and the limited system resources on mobile devices, direct and sensor interfaces are considered for this project's GUI design.

Figure 15. Visual gesturing for vehicle teleoperation [28], 2000.


1. Identify the architectural designs of existing teleoperation systems.

2. Recognize the types of GUI elements used to successfully represent feedback information.

3. Tests and experiments conducted to evaluate the designed systems.

4. Identify possible shortcomings when porting existing systems onto mobile device platforms.

3.1 Stationary Teleoperation

Figure 16. Standard and Wireless teleoperation systems [29].


3.1.1 Desktop Client Interfaces

Graphical Teleoperation Interface For A Mobile Robot, Jukka Turunen 2006 [30], developed a GUI application, shown in Fig. 17, to teleoperate a P3-DX robot. The interface was modeled on proprietary software called MobileEyes by MobileRobots Inc. The implementation's architectural design followed the MVC pattern, presenting the robot sonar and map using the GTK and OpenGL libraries on a Linux-based operating system.

Figure 17. RobotUI [30].

The interface is composed of a rich set of feedback information: a map drawing of the remote environment, video images, and specific robot data such as velocity and battery level. The user can move the robot using directional buttons and rotate the camera angles. A sample of the user interface architecture can be seen in Fig. 18.

Among the other included features are the ability to send text to the robot server and a pre-installed synthesizer called ArFestival-speech that offers text-to-speech conversion. The communication between RobotUI and the robot was done over LAN (Local Area Network) or WLAN networks.


The application functionality performed very well, passing 11 out of 15 tests; the failed or untestable cases were the results of video technical limitations, safe-driving requests to the ARIA library not taking effect, and sonar measurements that had to be estimated for plotting on the map.

The study concluded with successful teleoperation matching the quality of the MobileEyes application, in addition to the usability of the synthesizer and of communication over the WLAN network. The author also mentions that the MVC pattern architecture was beneficial and aids further development.

A final recommendation was to update the MVC interface for better observation, and a map interface improvement was suggested [30]. This particular study served as the base reference for implementing this thesis project, because the same ARIA robot server from the LUT IT department laboratory will be used.


The Advantage of Mobility: Mobile Tele-operation for Mobile Robots [31], 2001. This study is presented in two parts: the desktop interface is discussed in this section, while the latter part is covered in the mobile teleoperation section. The interesting aspect of this study is that it compares two interfaces, PDA and desktop, for teleoperation effectiveness. The study also considered the optimal task distribution between a hand-held and a stationary desktop operation [31].

The authors suggest that using a PDA interface can boost teleoperation effectiveness in situations that cannot be fixed to a particular location. Another advantage of using mobile devices is that they offer on-field information and a partial view of the actual robot environment not retrievable by the robot sensors.

The desktop interface in Fig. 19 was designed for tasks related to exploration, navigation and mapping issues; it can also support a team of robots through the offered switching mechanism.

Figure 19. The Advantage of Mobility - Desktop GUI design [31].


operation; having safe mode on can prevent colliding into obstacles. Shared control ("operator clicks robot destination on map") and autonomy ("wander mode") are each available with different velocities (slow, normal and speedy) that have different heuristics to explore the environment.

• The robot tools panel is for robot kinematics readouts. There is a speedometer, a chronometer to keep track of the mission length, and a gyroscope directly embedded in the robot within the 3D view (yellow directional arrow).

• The settings panel is divided into two views, consisting of the interface settings and the robot settings.

Several experiments were conducted on exploration and navigation using different variations of locations and situations. The study observed interface effectiveness and operation time measurements as variables.

The results demonstrated that it is more useful to drive the robot in narrow spaces using laser view navigation than using the constructed map alone. Additional data analysis and results are described in the mobile teleoperation section of the same study.


3.1.2 Web-Based Interfaces

Web-based teleoperation started back in 1994 [32], [33], and in 1997 Olivier Michel et al. introduced the KhepOnTheWeb [34] web-based teleoperation system. The study was motivated by the recognized benefit of a single application utilized by multiple remote operators, in addition to the similar client/server architectural design.

Fig. 20 shows the KhepOnTheWeb interface, which operates in this manner:

• Robot state is represented in a static HTML (HyperText Markup Language) document with a still image and numerical data.

• The operator enters and sends unsupervised commands.

• The robot executes the commands without user supervision or additional interaction.

• The command outcome is sent back with the generated result after some delay.

Figure 20. KhepOnTheWeb Web Interface[34].

The results indicated that it was difficult to introduce complex systems, and web latency caused delays in the completion of teleoperation tasks. On the positive side, web interactivity was successful, and Java technology was introduced to handle responses.


Figure 21. WebDriver system architecture [11].

• User Interface: required browser support for Java applets; the user can issue commands and monitor continuous feedback from the robot's sensors.

• Base Station is responsible for user communication with the interface, image processing, and high-level robot controls.

• Robot: a teleoperated vehicle equipped with on-board sensors (including a pan-tilt camera) and a motion controller. It is connected to the base station via a radio modem and an analog video transmitter [11].

The user interface, shown in Fig. 22, served two primary tools: the dynamic map and the image manager, which allowed the user to generate commands and to receive feedback. The interface controls can manipulate the display and the pan angle of the robot's on-board camera. An added proximity light indicated the distance between the robot and nearby obstacles [11].

Figure 22. WebDriver user interface [11]

The dynamic map shown in Fig. 23 is constructed using ultrasonic sonar and the robot position. The sensor data is filtered, stored and then displayed as colored points: gray for sensed obstacles and red for the robot position. The point colors reflect a confidence value, where darker tones indicate higher confidence.

Figure 23. WebDriver map sample [11]

The design displayed status and supported touch commands; images were stored on the map (as blue circles) by the image manager [11] and can be recalled later.


Figure 24. WebDriver ImageManager [11].

The base station compresses data and exchanges communication with the user interface. The image server returns images in JPEG format via TCP (Transmission Control Protocol). High-level robot control was established by the Saphira platform, built on top of the ARIA SDK by MobileRobots Inc.

The study indicates that the architecture was reliable and robust, particularly using the ARIA safe driving mode; data communication was compressed and some information was pre-processed, which reduced network usage. Operators explored different rooms with ease, and camera images helped navigation along the corridors.

The limitations were related to map plotting, because obstacle detection was restricted to a small number of proximity sensors (ultrasonic sonar); even adding more ultrasonic sonar sensors may not solve this limitation, due to added noise and imprecise readouts.

The study mentions that future work could add a higher level of autonomy, increase the number of proximity sensors to avoid obstacles, and use radio beacons or carrier-phase GPS for better map positioning. The conclusion stated that successful and robust teleoperation results were obtained and that the feedback GUI design was plausible and efficient.


Automated Teleoperation of Web-Based Devices Using Semantic Web Services, SWATS [35], 2005, is a design proposal that employs Semantic Web Services technology and AI (Artificial Intelligence) planning techniques to achieve task-oriented automated teleoperation of Web-based devices.

The implementation's addition to typical teleoperation architectures was a semantic layer, shown in Fig. 25; encoding useful tasks into web ontology languages is expected to provide the operator agents with an automatic plan for process execution, thus reducing laborious work and increasing automation. The solution presented was a step forward for adding local intelligence on remote ends.

Figure 25. Architecture of SWATS Web services Teleoperation [35].

Internet-based Robotic Systems for Teleoperation, [36] 2001, aimed to design an intuitive user interface for inexperienced users and to carry out fundamental research on multi-agent/multi-robot cooperation. The study's motivation listed multiple commercial benefits gained from multi-agent/multi-robot cooperation in fields such as tele-manufacturing, tele-training, tele-repairing, remote surveillance, and distributed service robots for offices, hospitals and elderly care.

The architecture is presented in Fig. 26, and the interface in Fig. 27 exhibits heavy use of Java web interface components. The experiments and results summarized several observations that can be considered for further improvement: increasing image quality, adding more autonomy for complex environments and reducing frequent network transmission delays.


Figure 26. System Structure for Java Based Teleoperation [36].

Figure 27. Interface sample for Java web based teleoperation [36].


The following list summarizes characteristics of web based interfaces:

• Internet web applications must operate with a high expectancy of communication delay; this limitation prompts the use of minimal and effective packet exchange. Data loss is also expected, and thus recovery methods are required [36].

• Web GUI elements are somewhat limited compared to the desktop's vast range of available widgets, which calls for more UI design consideration.

• The nature of the web is stateless and servers are responsible for maintaining session information; localization and other autonomous algorithms should not impact performance.

• Web based applications are platform independent and can be exposed to a wider set of operators in multiple locations.

• The advantage of web ubiquity [37] can help develop teleoperation research by using effective methods for multiple operators ("agents") on multiple robots [36].

• The interface design architecture can be used on mobile devices, with some limitations, if this is considered in advance; in any case, accessing web applications on mobile devices is possible.

• Web based teleoperation is not suitable if collaboration between the user and the robot is required [32].


3.2 Mobile Teleoperation

The situations in which teleoperation on mobile devices is preferred can be summarized as:

• Mobile devices can serve as a possible stationary teleoperation backup.

• When fixed locations are not possible due to high costs, technical limitations or other preventive constraints.

• When human/robot collaboration is needed, or if some feedback is not present on the remote robots.

Figure 28. Finger gesture interface, Walky 2009 [38].


PdaDriver: A Handheld System for Remote Driving, [13] 2003, is a design for a PDA device to teleoperate a vehicle, as shown in Fig. 29. The main project features include practical deployment, low network bandwidth consumption, and an interface that is easy to use with little training.

Multiple control modes, sensor fusion displays, and safeguarded teleoperation are offered to make remote driving fast and efficient. PdaDriver is intended to enable any operator (novice and expert alike) to teleoperate a mobile robot from any location and at any time [13].

Figure 29. PdaDriver: user interface (left), remote driving a mobile robot (right) [13].

The PdaDriver system architecture is divided into three main components, displayed in Fig. 30 and described as follows:

Figure 30. PdaDriver system architecture [13].


3. The PDA GUI was primarily designed to increase situational awareness and aimed for ease of use. It was implemented with Personal Java from Sun Microsystems, Inc. and runs on a Casio Cassiopeia E-105 Palm-size PC. The interface screens can be seen in Fig. 31, and the following descriptions and results have been adopted from [13]:

• Video mode displays images from the robot-mounted camera; horizontal lines overlaid on the image indicate the projected horizon line and the robot width at different depths. The user is able to position (pan and tilt) the camera by clicking in the lower-left control area.

The user drives the robot by clicking a series of waypoints on the image and then pressing the go button. As the robot moves from point to point, the motion status bar displays the robot's progress. This image-based waypoint driving method was inspired by STRIPE [39].

• Map mode displays a map (a histogram-based occupancy grid constructed with sonar range data) registered to either robot (local) or global (world) coordinates. As in video mode, the user drives the robot by clicking a series of waypoints and then pressing the go button. As the robot moves, the motion status bar displays the robot's progress.

• Command mode provides direct control (relative position or rate) of robot translation and rotation. The user commands translation by clicking on the vertical axis and, similarly, rotation by clicking on the horizontal axis. A scale bar (located to the right of the pose button) is used to change the command magnitude. The centered circle indicates the size of the robot and is scaled appropriately.

• Sensors mode provides direct control of the robot's on-board sensors. The user is able to directly command the robot's camera (pan, tilt, zoom), enable/disable the sonars, and activate movement detection triggers.


Figure 31. PdaDriver Screen modes for video, map, command, sensors [13].

Experiments and tests were conducted in different field environments, indoors and outdoors (paved roads, benign off-road terrain, uncluttered indoor spaces), suggesting that the PdaDriver has high usability, robustness, and performance [40].

The results state ease of interface use by novice and expert users; safe mode driving was enabled throughout testing, which allowed users to concentrate on exploring using the different screen modes. The screen stylus was used less because it proved to be difficult to use while walking or running.

The study concluded with suggestions for further work: using sensor fusion displays and adding collaboration control dialogs between the operator and the robot hardware, so that the operator can express intent to the robot when context information is made available.


Figure 32. PdaDriver Vision screen mode [40].

• Sensor screen provides a 180-degree laser view and an ultrasonic range finder. The rectangles in Fig. 33 represent objects detected by the ultrasonic sonar, and the connected lines represent objects detected by the laser range finder.

Figure 33. PdaDriver Sensor screen mode [40].


• Vision with sensory overlay screen provides the image and the sensory information in concert (Fig. 34).

Figure 34. PdaDriver Vision and Sensor screen mode [40].

Operator evaluations were performed to determine which interface screen was the most understandable and best facilitated decision-making. The evaluation collected objective information regarding the task completion times, the number of precautions, the ability to reach the goal location, as well as the number and location of screen touches. The participants also had to complete a post-task questionnaire after each task and a post-trial questionnaire after each trial.

Trial round 1: The vision-only screen tests required a lower workload than the vision with sensory overlay. In the questionnaire results, participants rated the vision-only screen easier to use than the other screens.

Trial round 2: The vision-only screen was significantly easier to use than the vision with sensory overlay screen, and the sensor-only screen was ranked as the easiest to use. The results across screens during trial two indicate that no significant relationship existed.

The fastest task completion times, displayed in Table 1, were obtained using the sensor screen, because it involved the shortest path and the least visual processing.

Table 2 provides the average goal achievement accuracy for all tasks across both trials. The significant increase across trials for the vision task may be attributed to learning the interface and how to control the robot. Screen processing for all layers caused a delay, which resulted in 50% of goals unfinished within the allotted time.

Table 2. Accuracy percentage of goal achievement by trial and task [40].

                   Vision Task   Sensor Task   Vision & Sensory Task
Trial One
  Reached               12            14                  3
  Almost Reached         9             6                  4
  Passed                 2             2                  0
  Not Reached            7             8                 23
Trial Two
  Reached               23            26                  7
  Almost Reached         4             1                  3
  Passed                 1             2                  2
  Not Reached            2             1                 18

The study did not anticipate the poor performance, with the longest times and the lowest goal achievement, when using all screen layers overlaid; this is attributed to the PDA processing delay. On the contrary, when participants were allowed to see the robot directly, task completion was the fastest using only the sensor screen; this screen provided precautions while goal achievement was the highest.

In conclusion, the study presented three different interfaces, and the experiment measurements were processed using objective data analysis. The results indicate that the delay caused by PDA screen processing reduces performance. The best results were obtained using the sensor screen and a direct view of the robot. The authors also suggest that extended data normalization can be utilized for different purposes.


The Advantage of Mobility: Mobile Tele-operation for Mobile Robots [31], 2001. Part one of this study was described in the previous stationary desktop teleoperation section. In this section the same application is considered and presented for a PDA mobile device.

Only the map and laser interfaces were implemented on the PDA, due to its size limitation and processing capability; the settings interface merged the autonomy level and the agent mode panels together. In Fig. 35 the laser view identifies obstacles in the remote environment, the map view allows directing the robot by point and click, and the autonomy level settings control the robot modes.

(a) Laser View (b) Map View (c) Autonomy Settings View

Figure 35. PDA Interface [31].

Three experiments were conducted to study the main differences between the desktop and PDA teleoperation interfaces, one in a simulated and two in real environments, described next:

First experiment: For twenty minutes, using a Player/Stage simulator [41], participants had to explore an environment shown in Fig. 36 without colliding with the wall or other objects. For the data calculation, the independent variable was defined as Interface Type ∈ {PDA interface, desktop interface}, and the dependent variable was the area covered by the robot in square meters.

Second experiment: In a real indoor environment, operators tried to remotely navigate a P2AT robot to a target point. The test environment was a 15-meter path composed of cluttered corridors and narrow spaces. The dependent variable was the navigation time measured in seconds, and the independent variable was the interface type.

Third experiment: The test environment was outdoors, divided into three different zones, all realized using reclining panels and cartons, as follows:

1. Maze: a single entrance and exit.

2. Narrow spaces: a limited area and directions where the robot must pass through them.

3. Cluttered areas: the robot must navigate through several obstacles but can choose different directions.

Participants were allowed into the environment but not within the arena; the robot was only visible from a window, as seen in Fig. 37, and could be completely hidden for half of the path.

Participants had to complete the path, with completion measured in seconds for both variable types. The independent variables used were the path variation, Space Type ∈ {Maze, Narrow Spaces, Cluttered Areas}, and the visibility variable, Operator View Degree ∈ {Total Visibility, Partial Visibility}. The dependent variable was set for the interface type, and other factors related to the robot configuration and the wireless signal strength remained constant to guarantee replicability.


Figure 37. Robot partly visible for PDA operator [31].

Preliminary hypothesis and data analysis: The operators used the laser and map views for path planning and survey knowledge. Path planning reduces obstacle collisions but still depends on the operator's spatial awareness, so an egocentric reference system was used to decide the direction of movement.

Survey knowledge for way finding depends on the operator's location awareness, which is generally considered an integrated form of representation with fast and route-independent access to selected locations; this was achieved by using an allocentric coordinate system [31].

Surveying and navigating operations depend highly on the information provided by the interface; thus, lower performance is expected on the PDA due to switching between views, whereas on the desktop interface the views are always available. It was therefore hypothesized beforehand that performance would be better on, and operators would favour, the desktop interface.

In the third experiment the PDA was expected to perform better than the desktop, because the operator was allowed to see the robot at some stage; this required less information access and might counterbalance the results.

For each experiment the collected data was analyzed using ANOVA (Analysis of Variance) statistical methods; the description can be referred to in the study [31]. The results for the first experiment are shown in Fig. 38, where the area is mostly covered by using the desktop interface, which performs considerably better than the PDA for exploration.

In the second experiment the navigation times for driving showed no significant difference between the two interfaces. The results for the third experiment are shown in Fig. 39, indicating faster times using the PDA interface with partial and full visibility.


Figure 38. Covered area in square meters by the operator using the PDA (bottom curve) and the operator using the desktop interface [31].

More ANOVA tests were applied to the full visibility data due to the space variable, which also revealed that the PDA was faster and visibility-independent in comparison with the desktop interface (Fig. 39a). For partial visibility, the results shown in Fig. 39b indicate faster navigation with the desktop interface than with the PDA for the maze space type.

(a) Full visibility (b) Partial visibility

Figure 39. Mean and Standard Deviation Times for Visibility experiment three [31].

Discussion: The experiment results confirm that different performance effectiveness can be gained from different interfaces, as displayed in Table 3. The exploration task corresponds to how operators estimate the state of an unknown environment, while navigation applies to following a certain path. In the case of the maze environment, both exploration and navigation were used, as was the case while driving in the cluttered environment.


Table 3. Best performance interface based on task and visibility [31].

                      Exploration     Expl. / Nav.     Navigation
Total Visibility          PDA
Partial Visibility      Desktop      Desktop / PDA    Desktop / PDA
No Visibility           Desktop      Desktop / PDA

Under the same visibility conditions the two interfaces are practically identical in the navigation task (second experiment), whereas a highly relevant distinction between them could be stated for the exploration task (first experiment). The results show that at just 1.5 minutes of exploration the desktop exploration area was already greater than the PDA one; moreover, this difference gradually increased with time [31].

Data analysis for the third experiment revealed better navigation times using the PDA interface regardless of the space type. The interface simplicity and having the operator at the robot site provided more situational awareness, and thus the task was completed effectively.

However, for the maze-like space, the desktop interface brings faster navigation times with partial visibility than the PDA interface does.

Conclusion and further work: The results demonstrated that when the operator only follows a path, both interfaces were feasible enough to drive the robot with the same performance [31]. The interface differences showed the advantages of the desktop design, but the PDA interface limitations can be counterbalanced if the operator is permitted inside the operating area. This implies that mobile device teleoperation can be most effective on field missions. The authors suggested enhancing the survey knowledge (location awareness) through the PDA interface, which would diminish the PDA processing delay.


3.3 Conclusions

Table 4. Advantages and disadvantages of teleoperation using mobile devices.

Advantage: Small, lightweight, easy to use and transport [40].
Disadvantage: Battery recharging is necessary for prolonged usage.

Advantage: Cost effective; serves as a mobile backup [29].
Disadvantage: Small screen display size with limited computing hardware resources.

Advantage: Provides solutions when human/robot collaboration is needed [32].
Disadvantage: Cannot support complex teleoperation tasks such as multiple user/robot collaboration [42].

Advantage: Highly recommended for field work when a stationary setup is not possible.
Disadvantage: Susceptible to frequent jams and communication losses.

Advantage: Platform and model independent.
Disadvantage: Each model contains features that may not support teleoperation requirements, e.g. power saving modes cannot be controlled.

Advantage: Performs well for certain tasks, e.g. navigation, observing or emergency intervention only [31].
Disadvantage: In some operations, e.g. exploration, it can be limited or not usable.

Advantage: Novice operator training can be straightforward, with only basic knowledge required.
Disadvantage: Limited interface controls and components.

Advantage: Haptics and gesture-based interactions can enhance the practice but must not be a decision factor.
Disadvantage: Development time takes longer than on desktop platforms, and some simple operations, such as quick display updates, are more challenging; innovative GUI designs requiring fewer interactive steps to command the robot are needed.


There were also challenges related to communication, processing latency and the limited environment feedback information. General teleoperation improvements can be achieved by using additional autonomy tools or by adding a remote back-end intelligent system. The experiments and tests conducted can be reproduced on other designs. The presented interfaces suggest that simplicity would overcome device limitations and should provide the maximal viewing of information.

The flow chart displayed in Fig. 40 shows common elements used for different teleoperation tasks.

Figure 40. Teleoperation Common Tasks and UI elements.


using (2D map, sonar and video or images) and in combination.


Table 5. Recommendations for Mobile Teleoperation and GUI Components.

Robot Software Platform: Open source distributions such as ARIA or any vendor libraries that can support porting to different mobile device platforms.

Effective Practice: Best suited for full or partial robot visibility. Collaboration can be achieved when the operator is in close environment proximity.

Necessary Controls: Robot directional movement, speed control, and stop and reset commands, preferably as physical device buttons; in addition, a safe driving mode to reduce obstacle collisions.

Feedback Information:

1. Robot status information, battery power and available communication signal.

2. Sonar sensor readouts, when displayed, help reduce obstacle collisions.

3. 2D environment map for exploration and navigation.

4. Environment video or images for visual feedback.

Mobile Device HW: The screen display should be large enough to show a good portion of the 2D environment map. Strong wireless communication support; power management that can be controlled, with long-lasting battery life; feasible costs; several programmable physical and ergonomic buttons.

Mobile Device SW: The OS should be responsive and preferably support threading. Development tools are available, and a robot simulator is a must-have tool. The device should support common graphical drawing libraries for faster implementation times.

Supporting HW & SW: GPS for precise location reading; haptic devices for sensitive operations, e.g. robotic surgery; situation autonomy tools and speed acceleration control, e.g. for mining robots.

Notes and Issues: Communication latency can occur, especially if using video streaming, and data packets can be lost; thus, recovery methods are necessary in the implementation design.


4 REQUIREMENTS AND SPECIFICATION

4.1 Overall Description

RTMU can be defined as a GUI application running on the mobile Nokia N770 device, with a sufficient set of commands and a GUI layout to teleoperate the P3-DX mobile robot via WLAN. The visual system entities are depicted in Fig. 41, and each part will be explained in more detail.

Figure 41. RTMU overall system entities.

4.2 Functional Requirements

Functional requirements define the expected system features, list the main objectives and serve as a reference for identifying design specifications. The following lists the required application functionality:

• RTMU is a client-oriented application that will connect to and use the services offered by the robot server for teleoperation tasks.

• RTMU shall use GUI components to achieve teleoperation tasks.


• The necessary teleoperation tasks involve moving the robot in four directions, displaying an image from the remote environment and presenting robot status feedback information.

• The required robot feedback status information consists of the position coordinates, battery level, rotation angle, velocity and WLAN signal strength.

• The GUI is expected to display a 2D map. The map can be requested from the robot server.

• Hardware buttons are used for the directional commands (forward, backward, right and left) that move the robot accordingly; a stop command is also required. These HW buttons must be easily usable on the mobile device.

• It must be possible to request and display the robot's environment image and, if possible, overlay the image under the robot's 2D map.

• The user can save connection settings for later sessions, without the need to enter them again.
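As an illustration of the hardware button requirement above, the following sketch shows one plausible way to map the N770 keys to the directional and stop commands in GTK+. It is only a sketch: send_drive_command is a hypothetical helper, and the assumption that the d-pad center key arrives as GDK_Return should be verified on the actual device.

#include <gtk/gtk.h>
#include <gdk/gdkkeysyms.h>

/* Hypothetical helper that would forward a command to the robot connection. */
static void send_drive_command(const char *cmd)
{
    g_print("command: %s\n", cmd);
}

/* Map hardware key presses to teleoperation commands. */
static gboolean on_key_press(GtkWidget *widget, GdkEventKey *event, gpointer data)
{
    switch (event->keyval)
    {
    case GDK_Up:     send_drive_command("forward");  return TRUE;
    case GDK_Down:   send_drive_command("backward"); return TRUE;
    case GDK_Left:   send_drive_command("left");     return TRUE;
    case GDK_Right:  send_drive_command("right");    return TRUE;
    case GDK_Return: send_drive_command("stop");     return TRUE; /* d-pad center, assumed */
    }
    return FALSE; /* let GTK+ handle all other keys */
}

/* Registered once on the main window:
   g_signal_connect(window, "key-press-event", G_CALLBACK(on_key_press), NULL); */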

4.3 Hardware

Mobile Robot: The P3-DX shown in Fig. 42 will be used as the robot hardware.

It is available from the IT department. Onboard there is a mounted camera and a laptop. The laptop is connected to the robot hardware via USB (Universal Serial Bus) or an RS-232 connection and runs a server that can offer teleoperation services when started and connected to a WLAN network.

Figure 42. P3-DX robot with camera and ARIA server laptop [3]


The wireless connection between the mobile device and the robot laptop uses the IT department WLAN and must be established prior to any teleoperation tasks.

It is also possible to use a single wireless router for improved connectivity and network bandwidth.

4.5 Software

ARIA Server: The laptop onboard the P3-DX, in operational mode, connected to WLAN and running a special application called the ARIA server, is responsible for communicating with ARIA clients and with the robot hardware. Using a special JPEG image frame grabber, the ARIA server can offer images to clients requesting current environment images. The frame grabber is a separate application that needs to be running and identified to the ARIA server.
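As a rough sketch of this client-server interaction, the following fragment shows how an ARIA client could open and close a connection using the ArNetworking ArClientBase class shipped with ARIA. The IP address and port number are placeholder values, not the department's actual configuration.

#include "Aria.h"
#include "ArNetworking.h"

int main(int argc, char **argv)
{
    Aria::init();                      // initialize the ARIA library

    ArClientBase client;
    // Blocking connection attempt to the ARIA server on the robot laptop.
    if (!client.blockingConnect("192.168.0.10", 7272))
    {
        ArLog::log(ArLog::Terse, "Could not connect to the ARIA server.");
        Aria::exit(1);
    }

    client.runAsync();                 // handle network traffic in a background thread

    // ... register handlers and send teleoperation requests here ...

    client.disconnect();
    Aria::exit(0);
    return 0;
}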

Nokia N770: The Nokia N770 operates on the Maemo OS, and its SDK environment will be used for implementing and running the future RTMU application. RTMU is essentially an ARIA client, and the major development tools taken into use will be the Maemo 2.2 SDK, Scratchbox and the ARIA client library. Other helper applications include, but are not limited to, SlickEdit, VirtualBox, Xming, Total Commander, VncViewer, RapidSVN and Sparx Enterprise Architect. RTMU GUI wire frames will be presented in the implementation chapter.

4.6 Teleoperating Environment

The robot will be teleoperated within the IT department's corridors as shown in the map in Fig. 43. The RTMU operator can follow the robot or be located in a fixed position. The environment can contain obstacles and forbidden areas/lines, in addition to home and goal location points.

Figure 43. Map of the Lappeenranta University of Technology phase 6, Floor 5 [43]

4.7 Dependencies and Scope

• Communication between the mobile device and the robot is established via WLAN TCP/IP, which is available at the teleoperating site.

• The N770 and the P3-DX are preferably connected to the same network segment.

• The robot is configured and the ARIA server is running and listening for clients. This includes starting the JPEG image frame grabber so that ARIA is able to read and send images.

• The ARIA server is accessible to RTMU by an IP address and port number, which are given to the RTMU operator together with any user credentials required for connectivity.

• Any additional ARIA server commands or special features are made available prior to RTMU development.

• ARIA API documentation and code examples are available, and the examples can be recompiled within Scratchbox.


Figure 44. Screen shot of MobileSim robot simulator.


4.8 Design and Implementation Constraints

• The application implementation should follow the MVC design pattern.

• Linux is the base operating system for both the server laptop and the N770 device; thus, the GNU GCC C/C++ development language and compiler will be used.

• Development on the N770 requires the Maemo 2.2 SDK; the package and sources can be downloaded from the Nokia web site. The SDK offers the Hildon and GTK+ libraries for GUI design and application development, which are used accordingly (a minimal application skeleton is sketched after this list).

• The ARIA package includes source code released under the GNU public license that can be recompiled with GCC 3.4 or later on a Linux-based OS. This recompilation is needed within Scratchbox for the Maemo 2.2 ARMEL target platform. Porting the source code may be required.

• The actual robot can be simulated using the MobileSim [44] application shown in Fig. 44, which can read the department's map. The ARIA examples work with the simulator, and development should follow the same logic flow as the connectivity and control example code.

• The C/C++ coding convention shall follow the IT department's recommended style.

• For documentation purposes Doxygen [45] will be used to produce HTML files that contain usage instructions and implementation documentation.
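To make the GUI constraint concrete, the following minimal skeleton shows the kind of GTK+ entry point the implementation starts from. It is a sketch only; on the device the plain GtkWindow would normally be replaced by the corresponding Hildon window type.

#include <gtk/gtk.h>

int main(int argc, char **argv)
{
    gtk_init(&argc, &argv);            /* initialize GTK+ */

    GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_window_set_title(GTK_WINDOW(window), "RTMU");

    /* Quit the main loop when the window is closed. */
    g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);

    gtk_widget_show_all(window);
    gtk_main();                        /* enter the GTK+ event loop */
    return 0;
}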


hardware and retrieve its status.

Figure 45. Operator manages robot hardware and ARIA server software.

The RTMU application will connect to the ARIA server using the laptop's IP address and the listening port number. These can be read from the laptop's display or obtained in advance from the robot operator. Without this information no teleoperation can be accomplished. The robot operator is presumed to be the IT department lab assistant.


RTMU: The identified overall use cases are grouped in Fig. 46 and are explained individually in the upcoming subsections. It is assumed that the robot hardware and the ARIA server are running, and that the RTMU application has been launched on the N770 device and is ready to perform the use case presented.

Figure 46. RTMU application use cases.


1. Actor selects the settings dialog.

2. The application settings dialog is displayed, consisting of the current IP, Port, User Name and Password fields.

3. Actor modifies values by typing into the corresponding entry fields.

4. Actor saves or cancels the changes.

5. If saved, the connection settings are retained for later sessions.

Alternative Flows: None

Exceptions: None

Priority: High

Frequency of Use: First time or when such field values need to be updated.

Notes and Issues: None
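One plausible way to retain the saved settings between sessions is GLib's GKeyFile facility, which is available alongside GTK+. The sketch below is illustrative only; the file path, group and key names are assumptions rather than RTMU's actual storage format.

#include <glib.h>
#include <stdio.h>

/* Write the connection settings to a simple key file. */
static void save_settings(const char *path, const char *ip, int port,
                          const char *user, const char *password)
{
    GKeyFile *kf = g_key_file_new();
    g_key_file_set_string (kf, "connection", "ip", ip);
    g_key_file_set_integer(kf, "connection", "port", port);
    g_key_file_set_string (kf, "connection", "user", user);
    g_key_file_set_string (kf, "connection", "password", password);

    gsize len = 0;
    gchar *data = g_key_file_to_data(kf, &len, NULL);

    FILE *f = fopen(path, "w");
    if (f != NULL)
    {
        fwrite(data, 1, len, f);       /* persist the settings for later sessions */
        fclose(f);
    }
    g_free(data);
    g_key_file_free(kf);
}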


4.9.2 Connect To ARIA Server

Actors: RTMU operator, P3-DX Robot

Summary: This use case allows the RTMU operator to connect to the ARIA server; it is the initial step, and the connection must remain open during the teleoperation session.

Preconditions: The N770 device is connected to a WLAN network routable to the ARIA server. The RTMU application is running and the connection settings are valid. No connection to the ARIA server is already active.

Postconditions: A successful connection initializes the UI and, if possible, the robot map. Otherwise the RTMU operator is notified of the failure.

Flow Description:

1. RTMU operator selects the connect command.

2. A notification of attempting to connect is displayed.

3. If the connection is established, the previous notification is hidden and the interface is updated by populating the robot status values and attempting to display the robot's environment map.

4. Otherwise, if the connection attempt has failed, a notification dialog with a connection failed message is displayed and the use case exits.

5. The RTMU application waits for the RTMU operator's command.

Alternative Flows: If the RTMU operator selects the connect command but no connection settings have been set, a notification to edit the settings fields is displayed and the use case exits.

Exceptions: Only one connection is allowed from the RTMU application to the ARIA server. The operator is expected to disconnect after each established connection when it is no longer needed.

Priority: High

Frequency of Use: For every teleoperation session a connection must be established.

Notes and Issues: None


command for further requests to connect again.

Flow Description:

1. RTMU operator selects the disconnect command, which closes the connection.

2. The interface is cleared of robot status values and the environment map.

3. Interface enables the connect command.

4. RTMU application waits only for exit or connect use case.

Alternative Flows: The RTMU operator can select the exit command, which implicitly closes the connection if one is open.

Exceptions: None.

Priority: Medium

Frequency of Use: After each connection the RTMU operator should close the connection.

Notes and Issues: If RTMU crashes, the ARIA server is expected to close the connection; if the crash does not terminate RTMU, it is necessary to kill the process using the OS software manager or a similar tool.


4.9.4 Command

This is an abstract, generalized use case; its main purpose is to provide the possibility of extending the system commands and to act as a pattern for implementation. Most commands expect a response from the server, implemented as an output callback, which is described separately.

Actors: RTMU operator, P3-DX Robot

Summary: The RTMU operator selects a specific command from the interface, and the command is sent to the ARIA server. If the command is supported, it is executed and the outcome is sent back to the RTMU application. Responses are typically handled within output callback implementations.

Preconditions: RTMU is running and connected to ARIA server.

Postconditions: Command output callback is executed.

Flow Description:

1. RTMU operator selects a specific command from the interface.

2. Interface is updated or a notification is displayed.

3. The RTMU application waits for the operator's next command or exits if requested.

Alternative Flows: The RTMU operator can trigger the same command through the UI or with a physical button on the N770; e.g., moving the robot forward can be accomplished by clicking/tapping the UI up arrow or pressing the scroll-up key.

Exceptions: In some cases a single command can perform several tasks, e.g. move forward and update the display.

Priority: High - Medium

Frequency of Use: Most commands are handled this way.

Notes and Issues: Timeout and recovery can be challenging to implement. It is good practice to check which commands the ARIA server supports and to reflect this in the interface.

Command interaction is depicted in the sequence diagram shown in Fig. 47, which identifies the main entity roles and objects and how they communicate over a period of time.


Figure 47. RTMU Command Use Case Sequence Diagram.
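The pattern of Fig. 47 can be sketched with ARIA's functor-based callbacks as shown below. The request name "getRobotStatus" and the packet layout are assumptions made for illustration; the real names and contents depend on what the ARIA server registers.

#include "Aria.h"
#include "ArNetworking.h"

class StatusHandler
{
public:
    // Output callback, invoked when a reply packet arrives from the server.
    void handleStatus(ArNetPacket *packet)
    {
        double battery = packet->bufToDouble();   // assumed packet layout
        ArLog::log(ArLog::Normal, "Battery level: %.1f", battery);
    }
};

int main(int argc, char **argv)
{
    Aria::init();
    ArClientBase client;
    if (!client.blockingConnect("192.168.0.10", 7272))
        Aria::exit(1);

    StatusHandler handler;
    ArFunctor1C<StatusHandler, ArNetPacket *>
        statusCB(&handler, &StatusHandler::handleStatus);

    client.addHandler("getRobotStatus", &statusCB);   // register the output callback
    client.runAsync();
    client.requestOnce("getRobotStatus");             // send the command once

    ArUtil::sleep(1000);               // crude wait for the reply in this sketch
    client.disconnect();
    Aria::exit(0);
    return 0;
}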


4.9.5 Stop

Actors: RTMU operator, P3-DX Robot

Summary: The RTMU operator can select the stop command at any time which will request the ARIA server to halt and terminate any active robot activity.

Preconditions: RTMU is running and connected to the ARIA server; the robot can be idle or busy executing an action.

Postconditions: The robot responds by stopping the current activity and is put in a ready state for handling the next commands.

Flow Description:

1. RTMU operator presses the physical stop button.

2. If there is no activity running on the robot, nothing changes; otherwise the robot immediately stops its current activity, e.g. stops moving forward, and is put in a ready state.

3. RTMU application waits for operator next command.

Alternative Flows: None.

Exceptions: None

Priority: High

Frequency of Use: A stop button is necessary to set the robot back to a ready state and to resume other actions.

Notes and Issues: None.


The requested image is displayed in the appropriate image interface container.

Flow Description:

1. RTMU operator requests the get image command by pressing a physical button.

2. While the image is being requested, a notification to wait is displayed.

3. If the image is ready, the previous message is hidden and the image is displayed; otherwise a failure notification is shown.

4. RTMU application waits for other commands.

Alternative Flows: None.

Exceptions: If the ARIA server does not support image requests, the RTMU operator is notified with an "images are not supported" message.

Priority: High, Medium

Frequency of Use: Environment feedback is essential, and an image is necessary if the teleoperated robot is not directly visible to the operator.

Notes and Issues: Requesting images increases network bandwidth usage, so images should be requested only when necessary or as few times as possible. The images are expected to be in JPEG format.
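On the client side, a received JPEG byte buffer can be decoded for display with GTK+'s incremental GdkPixbufLoader, roughly as sketched below. How the bytes are extracted from the server's reply packet is omitted, since it depends on the server's packet format.

#include <gtk/gtk.h>
#include <gdk-pixbuf/gdk-pixbuf-loader.h>

/* Decode a raw JPEG buffer into a GdkPixbuf for display in a GtkImage. */
static GdkPixbuf *pixbuf_from_jpeg(const guchar *data, gsize len)
{
    GdkPixbufLoader *loader = gdk_pixbuf_loader_new();
    GdkPixbuf *pixbuf = NULL;

    gboolean ok = gdk_pixbuf_loader_write(loader, data, len, NULL);
    ok = gdk_pixbuf_loader_close(loader, NULL) && ok;   /* always close the loader */
    if (ok)
    {
        pixbuf = gdk_pixbuf_loader_get_pixbuf(loader);
        if (pixbuf != NULL)
            g_object_ref(pixbuf);      /* keep the image alive past the loader */
    }
    g_object_unref(loader);
    return pixbuf;                     /* NULL on decode failure */
}

/* Usage: gtk_image_set_from_pixbuf(GTK_IMAGE(image_widget), pixbuf); */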
