
Md Abdur Rahman Emad

ADDRESSING USABILITY ISSUES: REDESIGNING THE USER INTERFACE OF COLLABORATIVE ROBOT FRANKA PANDA

Master’s Thesis

Faculty of Information Technology and Communication Sciences

Examiner: Dr. Aino Ahtinen

Examiner: Dr. Kirsikka Kaipainen

May 2021


ABSTRACT

Md Abdur Rahman Emad: Addressing Usability Issues: Redesigning the User Interface of Collaborative Robot Franka Panda

Master's Thesis
Tampere University
Master's Degree Education in Information Technology
May 2021

The use of collaborative robots (cobots) is soaring in different industries, especially for manufacturing and logistics purposes. Cobots help their human co-workers with repetitive, complex, dangerous, and laborious tasks. Before using these cobots, industry workers need adequate training, since handling such complex machines demands synchronization between their hardware and the robotic user interface.

In this thesis, the cobot we used is the research version of Franka Panda. The primary users of this particular version of the cobot are students, teachers and researchers who work in the field of robotics. It has been observed that the users of this cobot struggle to operate it because they start using it after only a short introduction. Most importantly, the root cause of their struggle is the robotic user interface of Franka Panda, which is not intuitive enough and does not provide instructions that users can follow to proceed with their tasks.

In this research, we followed a mixed approach of two similar methods, design thinking and human-centered design, to extract the best out of them. Using observation, our study begins with a search for the usability issues users face when using the robotic user interface of Franka Panda. Afterwards, the issues an advanced user of this robot faces are revealed in individual expert review sessions. In total, seven users (including the writer of this thesis) took part in the observation and individual expert review sessions; we call this stage of our research the pre-study. The report prepared from the pre-study compiles the usability issues related to Franka Panda's robotic user interface. Along with determining the frequency of unique usability problems in the pre-study, we applied severity ratings (introduced by Nielsen) and human-robot interaction heuristics (developed by Clarkson and Arkin) to categorize and rank the usability issues.

Next, in the redesign stage, we worked out solutions to the usability issues of Franka Panda's old robotic user interface and prepared some new features for implementation. The tool we used for preparing the wireframes and designing the high-fidelity prototype was Figma. Before putting the solutions and features into the new robotic user interface of Franka Panda, we made sure that they comply with the design guidelines we prepared in the literature review part of this research.

Finally, the new interface of Franka Panda was evaluated by ten users in a simulated environment. Every participant highly praised the new interface after trying both the new and the old one. On the user experience questionnaire scales, the system usability scale, and the time users took to evaluate the user interfaces, the new one outperforms the old one in terms of attractiveness and usability. The users' feedback and suggestions on the new robotic user interface of Franka Panda from the evaluation sessions provide many clues that can help enhance our design in future iterations.

Keywords: Human-robot Interaction, Robotic User Interface, Web-based User Interface, User Interface Redesign, Design Thinking, Human-centered Design, Usability

The originality of this thesis has been checked using the Turnitin OriginalityCheck service.


PREFACE

My mom sometimes reminisces that her first kid used to break toy cars searching for the driver sitting inside them. Since my early childhood, I have always been fascinated by any machine or device that can drive or manage itself autonomously.

Time flew, and I started my master's degree at the Tampere University of Technology (currently Tampere University) and was introduced to a dedicated lab for robotics here. It brought me immense pleasure to find out that I could also work in this lab if I chose a thesis topic related to robotics. Eventually, I took the course 'User Experience in Robotics' and discovered how a good user experience could lessen the gap between human and robot and make the interaction better. While taking this course, I approached the course teacher Dr. Aino Ahtinen seeking her supervision for my master's thesis. Fortunately, she accepted me as her thesis student and proposed prospective topics to me. Dear Aino Ahtinen, I can't thank you enough for your supervision, guidance, patience, unconditional support and for being so kind-hearted to me in this long and arduous journey.

There are a couple of people who also supported me in many ways. Aparajita Chowdhury, I would like to thank you for your time, advice and help throughout my thesis work.

Tons of thanks to Emran Hussain Emon for his kind assistance in the designing phase.

From him, I learnt how to implement a solution in the design and how to use design tools and components. I thank Aman Khan for his excellent introduction to the tools and basics of writing a thesis; this helped me break the ice at the beginning of the writing. Thanks to Saiful Islam Sunny and my MacBook Air; I would not have been able to finish my write-up peacefully without this machine.

Finally, a million thanks to my friends and family who have always supported me with love and compassion and stood beside me through thick and thin. Whenever I lost interest in my work, you people kept me motivated. My mom deserves a particular note of thanks: the courage she gave and the love she provided have, as always, helped me to start again.

I hope you enjoy your reading.

Helsinki, May 4, 2021
Md Abdur Rahman Emad


CONTENTS

1. INTRODUCTION ... 1
   1.1 Background and Motivation ... 1
   1.2 Research Objective and Research Questions ... 4
   1.3 Structure of the Thesis ... 4
2. LITERATURE REVIEW ... 6
   2.1 Usability and Web UI Design Guidelines ... 6
   2.2 Web Application Redesign and Evaluation Cases ... 7
   2.3 Human-robot Interaction and Robotic User Interface ... 10
   2.4 Summary ... 18
3. RESEARCH METHODOLOGY ... 20
   3.1 Research Approach and Process ... 20
   3.2 Research Methods and Platform ... 23
      3.2.1 Data Collection Methods ... 23
      3.2.2 Data Analysis Methods ... 24
      3.2.3 Research Platform ... 24
4. PRE-STUDY ... 26
   4.1 Methodology ... 26
      4.1.1 Data Collection and Analysis Methods ... 27
   4.2 Pre-Study Part 1: Observation ... 31
      4.2.1 Purpose and Procedure ... 31
      4.2.2 Participants ... 32
   4.3 Pre-Study Part 2: Individual Expert Review ... 33
      4.3.1 Purpose and Procedure ... 33
   4.4 Findings of Pre-study ... 33
      4.4.1 Login ... 34
      4.4.2 Home – Sidebar ... 34
      4.4.3 Home – Timeline Area ... 35
      4.4.4 Home – Tasks Area ... 36
      4.4.5 Home – Apps Area ... 36
      4.4.6 Application – Motion Applications ... 37
      4.4.7 Application – Gripper Applications ... 38
      4.4.8 Application – Repeat Application ... 38
      4.4.9 Page – Settings ... 39
      4.4.10 Homepage and Common Things ... 40
      4.4.11 Quantitative Analysis of Pre-Study ... 41
5. REDESIGNING THE RUI ... 44
   5.1 Low-fidelity Prototype: Sketch and Wireframe ... 44
   5.2 High-fidelity Prototype: Mock-up ... 44
      5.2.1 Login and Homepage ... 45
      5.2.2 Home – Sidebar ... 46
      5.2.3 Home – Timeline Area ... 48
      5.2.4 Home – Tasks Area ... 49
      5.2.5 Home – Apps Area ... 49
      5.2.6 Applications – Motion Apps, Gripper Apps ... 50
      5.2.7 Application – Repeat Application ... 51
      5.2.8 Page – Settings ... 52
      5.2.9 Summary ... 52
6. EVALUATION AND PROSPECTIVE CORRECTIONS IN NEW RUI ... 54
   6.1 Methodology ... 54
      6.1.1 Data Collection ... 55
      6.1.2 Data Analysis ... 55
      6.1.3 Participants ... 57
   6.2 Evaluation Procedure and Design of the Prototype ... 57
   6.3 Findings ... 58
   6.4 Users' Feedback and Prospective Corrections in Design ... 60
7. DISCUSSION AND CONCLUSION ... 63
   7.1 Discussion of Research Questions ... 63
   7.2 Limitations ... 65
   7.3 Ethical Considerations ... 66
   7.4 Conclusion ... 67
8. REFERENCES ... 69
9. APPENDICES ... 76
   9.1.1 Appendix A: Compilation of Usability and UI Design Guidelines ... 76
   9.1.2 Appendix B: Documents of Pre-Study ... 79
   9.1.3 Appendix C: Sketches and Wireframe ... 81
   9.1.4 Appendix D: Mockup ... 90
   9.1.5 Appendix E: Evaluation of New RUI ... 93
   9.1.6 Appendix F: Final Report of Pre-study ... 95


LIST OF FIGURES

Figure 1: GUI of SBS [9] ... 12

Figure 2: GUI of Robotic Hand on AGV [64] ... 15

Figure 3: GUI of Servosila Engineer Robot [67] ... 17

Figure 4: GUI of Vineyard Sprayer [68] ... 17

Figure 5: GUI of ROBCO [66] ... 18

Figure 6: Phases of Design Thinking [80] ... 21

Figure 7: Human-Centered Design Process [72] ... 22

Figure 8: Combination of DT and HCD [72]... 22

Figure 9: Timeline of the Research Process ... 23

Figure 10: Franka Panda ... 25

Figure 11: Reports of Pre-Study ... 28

Figure 12: Information of Participants ... 32

Figure 13: Login Page of Franka Panda’s Web RUI (version: 4.0.1) ... 34

Figure 14: Right Sidebar of Franka Panda’s Web RUI (version: 4.0.1) ... 35

Figure 15: Blank Timeline Area of Franka Panda’s Web RUI (version: 4.0.1) ... 35

Figure 16: Timeline Area Before and After Tasks Are Added (Franka Panda’s Web RUI, version: 4.0.1) ... 36

Figure 17: Tasks Area of Franka Panda’s Web RUI (version: 4.0.1) ... 36

Figure 18: Apps Area of Franka Panda’s Web RUI (version: 4.0.1) ... 37

Figure 19: Motion Application of Franka Panda’s Web RUI (version: 4.0.1) ... 37

Figure 20: Gray Buttons (black in this picture) on Both Side of End-effector ... 38

Figure 21: First Window of Gripper App in Franka Panda’s Web RUI (version: 4.0.1) ... 38

Figure 22: Repeat App in Timeline of Franka Panda’s Web RUI (version: 4.0.1) ... 39

Figure 23: Inside of Repeat App (Franka Panda’s Web RUI, version: 4.0.1) ... 39

Figure 24: Settings Page in Franka Panda’s Web RUI (version: 4.0.1) ... 40

Figure 25: Controller of Franka Panda ... 40

Figure 26: Relation Between Severity Rating Versus Frequency of Cases ... 42

Figure 27: Relation Between HRI Heuristics Versus Other Parameters (frequency of usability problems, total number of usability problems and mean severity rating) ... 42

Figure 28: Login Page ... 45

Figure 29: Homepage of New RUI ... 46

Figure 30: Contextual Instruction in Homepage ... 46

Figure 31: Left Sidebar of Homepage ... 47

Figure 32: Shutdown Pop-up ... 47

Figure 33: Documentation & Dark theme ... 48

Figure 34: Timeline Area ... 48

Figure 35: Topbar & Task Area ... 49

Figure 36: Apps Area ... 50

Figure 37: Motion Application ... 50

Figure 38: Gripper Application ... 51

Figure 39: Repeat Application in Timeline ... 51

Figure 40: Settings of Repeat Application ... 51

Figure 41: Settings Page ... 52

Figure 42: Six Aspects of UEQ [126] ... 56

Figure 43: Information of Participants of Final Evaluation ... 57

Figure 44: Time Taken for Evaluation of Old and New RUI ... 58

Figure 45: UEQ Benchmark of Both RUIs [136] ... 59

Figure 46: Comparison of Different Qualities of RUIs Using UEQ ... 59

Figure 47: Result of SUS ... 60


LIST OF TABLES

Table 1: Usability Guidelines for Industrial Robots and Robot Manipulators [58] ... 11

Table 2: HRI Heuristics by Clarkson and Arkin [108] ... 30

Table 3: Severity Rating of Usability Problems by Nielsen [112] ... 30

Table 4: Number of the Use of Apps in the Tests ... 41

Table 5: Score of System Usability Scale (SUS) [134] ... 56


LIST OF SYMBOLS AND ABBREVIATIONS

UI User Interface

UX User Experience

RUI Robotic User Interface

USAR Urban Search and Rescue Robot

DT Design Thinking

HCD Human-Centered Design

HRI Human-Robot Interaction

UEQ User Experience Questionnaire

SUS System Usability Scale

HCI Human-Computer Interaction

HRC Human-Robot Collaboration

SUISQ-R Speech User Interface Service Quality – Reduced

QUIS Questionnaire for User Interface Satisfaction

SBS Skill Based System

MER Mobile Exploratory Robots

DOF Degree of Freedom

AGV Automated Guided Vehicle

SRS Speech-Recognition System

USE Usefulness, Satisfaction and Ease to Use

CTA Concurrent Think Aloud

RTA Retrospective Think Aloud

SEQ Single Easy Question

CSUQ Computer System Usability Questionnaire

SAM Self-Assessment Manikin

UCD User-Centered Design

RP Research Process

RQ Research Question

CAD Computer-aided Design

URL Uniform Resource Locator

CSS Cascading Style Sheets


1. INTRODUCTION

This chapter presents the focus of this research, which is redesigning a collaborative robot manipulator's web-based user interface. It also briefs the motives behind choosing this topic and discusses the motivation, research objective and research questions of this thesis.

1.1 Background and Motivation

Robots are nowadays used to perform various roles in our society, including in manufacturing, at home, and even as first responders in catastrophic accidents. Especially in industry, robots are now widely accepted and used for manufacturing and logistics [1]. Robots used in industry for manufacturing purposes have very little intelligence and can perform specific repetitive tasks, but with a high level of precision [2]. Intelligence in robots is an artificial capability to sense a situation and maximize the chance of success by taking proper action. Those machines of limited intelligence are now getting more complex to meet the skyrocketing demand for faster production while maintaining top-notch quality [3].

A type of robot known as the collaborative robot (cobot) is now increasingly being adopted in industry for manufacturing and logistics [1]. Cobots minimize production costs and human workers' load, and maximize production by increasing working efficiency [4].

Cesta et al. define a cobot as any operating robot that works with humans side by side without any fence [5]. Gil-Vilda et al., on the other hand, define a cobot as "a robot designed to assist human beings as a guide or assistor in a constrained motion" [6, p. 110]. Some of the top use cases of cobots are pick-and-place, machine tending, packaging and palletizing, processing, finishing and quality inspection [7].

Among the hundreds of collaborative robots on the market, some of the industry leaders are YuMi from ABB, AURA from Comau, the CR series from Fanuc, PROB 2R from F&P Personal Robotics, Emika from Franka, LBR IIWA from KUKA and the UR series from Universal Robots [8]. All these cobots have versatile rotating arms and are equipped with different types of hardware such as grippers, suction cups, force-torque sensors, external soft skin, cameras and controllers. Since cobots work closely with humans, a simple and intuitive user interface (UI) is essential for efficiency and safety reasons. It is also reasonable to assume that operators can easily alter, build, and customize cobot systems both offline and online when the UI of the cobot is easy to use and intuitive. A robot is said to be offline when it is being programmed or is in idle mode; online means the robot is in operation mode.

Zaatari et al. categorized the robotic user interfaces (RUI) of cobots into four types: teaching pendant, icon-based programming, CAD-based (computer-aided design) programming and task-based programming [4]. Some cobots support task-based programming by taking input from human demonstration. The cobot in the focus of this research, Franka Panda, works according to the method 'programming by demonstration' and has a web-based RUI for task-based programming. In task-based programming, different kinds of 'skills' are presented in the form of building blocks, so operators can perform different tasks by parametrizing those blocks with continuous evaluation [9]. The type of robot programming named 'programming by demonstration', in turn, combines three activities: teach-in, guiding and play-back [10].

Cobot user interfaces require interactions with both software and hardware and may vary depending on the intent of the interaction [11]. Because of such a complicated interaction system, inexperienced robot users demand RUIs with higher intuitiveness and usability [12]. Therefore, when designing a RUI for a wide range of users, non-expert users must be kept in mind [13]. Supporting this, Dong et al. report that beginners and even engineers find it difficult to operate a cobot efficiently [14].

On the contrary, most of the time robotic user interfaces are designed and developed by the same people who build the robot [15], although design and engineering development are different areas of expertise. A designer is in charge of a product's aesthetic appearance and feel and makes it visually appealing to the user using different graphical elements. In contrast, developers put programming logic behind the graphical elements and turn a design or prototype into a final product for end-users [16]. So there is a clear connection between the unintuitive RUIs of cobots and the difficulties users face when operating them [17].

The operation of Franka Panda requires a combination of handling the robot manipulator itself, commanding it from the RUI by setting up different applications, and pressing or twisting two controllers when necessary. Because of such complexity in operation, Schmidbauer et al. discovered a poor usability value while conducting a study on the RUI of Franka Panda [11]. It is worth mentioning that, when operating this robot, it is hard to guess the correct sequence of working principles since there are insufficient clues in the RUI. Thus, it gets much more complex to work with the research version of Franka Panda, predominantly when its userbase comprises new students and researchers with little or almost no experience of interacting with an industrial cobot.

The design and redesign of RUIs has been addressed for different kinds of robots before, such as for an urban search and rescue robot (USAR) to increase efficiency in rescue operations [18], for a robotic wheelchair system to make its RUI more efficient and straightforward [19], and for a mobile shopping robot to obtain a more intuitive RUI [20]. The user-centered design method was followed to design the shopping robot mentioned above. Another method, iterative interface design, was used in the research conducted by Asif et al. [21]. The design processes of RUIs are described in the studies conducted by Mayer and Panek, Keatas et al., and Schou et al. for an assistive robot, an interactive robot and a collaborative robot, respectively [9], [22], [23]. Redesigning is discussed in Fricke et al.'s research on a web-based robot platform and in Kraft and Rickert's research on the RUI of an industrial robot [24], [25]. A couple of these cases are described in the literature review (chapter 2) of this thesis, but to the best of our knowledge, examples of web-based RUIs are few and redesigning such an interface for a cobot has never been studied before. Therefore, the challenge in our study lies in redesigning a web-based RUI following standard usability guidelines for web applications without compromising the best handling experience of the robot hardware.

Since the web-based RUI of Franka Panda is a ready-made solution, we did not start our research work from an empty table. To find flaws in the old RUI, we conducted usability studies. Later, parts of two popular design methodologies, design thinking (DT) and human-centered design (HCD), were adopted while designing the new interface. DT helped to define the root causes of the issues raised in the usability studies, understand users' needs from different perspectives and find innovative solutions [26]. HCD, on the other hand, helped to attain maximum user satisfaction by implementing new features in the new RUI of Franka Panda [27]. Moreover, HCD helped to fix the new RUI's functional requirements and usability criteria [28].

Conducting user tests is a standard method for evaluating usability [29]. Observation was used while conducting user testing on the old RUI of Franka Panda to understand users' expectations and desires. Additionally, another method, individual expert review, added more insights in the pre-study. Human-robot interaction (HRI) heuristics and severity ratings were used as standards to derive the expected information from the pre-study data. User tests were then carried out to evaluate the new RUI using the user experience questionnaire (UEQ), the system usability scale (SUS), and semi-structured interviews; these methods helped establish positive differences in the new RUI compared to the old one. All the methods mentioned above are discussed in their respective chapters.

1.2 Research Objective and Research Questions

The main objective of this study is to find the usability-related issues in the old RUI of Franka Panda and to design a new RUI for it that addresses viable solutions to those issues. Moreover, our target is to reduce the interaction steps in Franka Panda's RUI to as few as possible, in order to increase intelligibility and minimize the training time of new users [30]. Our final aim is to show that the solutions we propose and the new ideas we implement eradicate the issues of the old RUI, enhance usability and bring a positive experience to the new RUI. With this aim and objective, we decided to investigate the following research questions:

1. What usability issues are users currently facing in the robotic user interface (RUI) when operating the collaborative robot Franka Panda?

2. What kind of robotic user interface (RUI) solutions can be redesigned to solve the addressed usability issues?

1.3 Structure of the Thesis

Chapter 2 of this thesis covers the literature review of the different relevant topics. It begins with a brief introduction to usability and a summary of the guidelines for web interface design. In the following subchapter, a couple of web interface redesign cases are compiled to demonstrate the process of interface renovation. Later, the interaction between humans and robots, and how researchers have implemented RUIs in different kinds of robots, are discussed. Finally, a summary of chapter 2 connects the topics of usability, web UI and RUI.

Chapter 3 addresses how different approaches, processes and methods were utilized to design the new interface in the different stages of this research. The robot platform of this thesis is also briefly described in that chapter. Later, in chapter 4, two methods, observation and individual expert review, are described, along with how they were implemented in the pre-study of this thesis to find the usability issues associated with the old interface of Franka Panda. Afterwards, the findings from the sessions of those methods are fused to eliminate repeated issues. At the end of that chapter, the relation between the findings and the theoretical frameworks is established in the results section.

In chapter 5, the gradual steps in developing the design of the new RUI, from sketches up to the high-fidelity prototype, are explained in detail; how the new design solves the problems of the old RUI is also covered there. In chapter 6, the theoretical aspects and the results of the evaluation of the new RUI are discussed. Later, the new design is corrected based on users' feedback from the final evaluation. Chapter 7 is allocated to the discussion and conclusion; the research questions are answered there in light of chapter 2 (literature review). Moreover, the validity and reliability of the findings from the empirical studies are discussed there, along with the limitations of this study.


2. LITERATURE REVIEW

This chapter presents the literature review of the three main topics related to this thesis. The first subchapter, 2.1, focuses on usability and different guidelines for web UIs. The next subchapter addresses different methods of web interface design, including some design and redesign cases. In subchapter 2.3, HRI, a couple of RUI guidelines and some cases of RUI design are discussed. Finally, the formulation of the design guidelines for the new RUI of Franka Panda is highlighted in the last subchapter, 2.4.

2.1 Usability and Web UI Design Guidelines

Usability is a quality metric used to test the convenience of using a user interface in the design process. In ISO 9241-11:2018, usability is defined as "the extent to which a system, product or service can be used by specified users to achieve specific goals with effectiveness, efficiency and satisfaction in a specified context of use" [31]. According to Jakob Nielsen, five quality components characterize usability: learnability, efficiency, memorability, low error rate and satisfaction [32]. Usability is a must-have criterion for keeping users engaged in a web application, since in many fields users have plenty of options to move to another service given the many competitors in the market [33]. To avoid such losses and design rework, usability issues need to be considered during the early phases of software development [34]. Moreover, addressing usability in the early stage of design is cost-effective [35].

Islam et al. mentioned that usability in human-computer interaction (HCI) mainly focuses on the layout, navigation, information architecture and content of the application [36].

They also mentioned that any sign in a UI should express accurate meaning and be easy and intuitive to understand. According to their research, some studies indicate that designing intuitive interface signs is critical for maintaining user satisfaction, enhancing a system's learnability, ensuring task completion and providing efficient communication [36].

An essential practice for improving usability is conducting user testing by recruiting representative users, assigning them some tasks and observing what they do. Tekmen and Tanriover extracted the seventeen highest-rated (by the U.S. Department of Health and Human Services) usability guidelines for web applications out of five hundred references from the book 'Usability Guidelines' [37]. Alonso-Virgós et al. compiled extensive usability guidelines from various sources and divided them into five groups. In their research, they focused on one of those groups, which contains recommendations for attracting users' attention to the important parts of a website and minimizing the noise generated by unimportant parts. They tested these recommendations on twenty web developers by asking them to design some websites. From those websites, they figured out which guidelines developers forget most often and which of them the developers considered the most important [38]. In the evaluation results, all the recommendations scored more than 60%, which implies that they are useful, at least in theory.

Usability testing is a powerful tool in designing a product since it allows designers to investigate, identify and solve design problems [39]. Marenkov et al. developed an automated web-based tool named 'Guideliner' to test the usability of applications. To develop that tool, they prepared a list, after extensive sorting and filtering, that covers most UI elements [40]. On the other hand, Camargo et al. proposed a visual design checklist for the evaluation of a GUI [41]. Additionally, Sajedi et al. compiled a set of usability guidelines for designing a user interface [42].

Avoiding repetition, we compiled seventy-seven guidelines from the five sources (Appendix A) mentioned in this subchapter and divided them into seven categories: colour, typography, shapes, layout, patterns, general composition and others. These give us a concrete idea of the proper use of colour and typography in a web-based application. Moreover, the guidelines help in choosing relevant shapes and icons for the design as well as proper locations for placing content. The knowledge regarding the layout of a web application in these guidelines helps to maintain consistency and better space utilisation. The remaining categories help to attain other user-interface-related qualities and conveniences.

2.2 Web Application Redesign and Evaluation Cases

The robot we are studying in this research comes with a web-based RUI. So, reviewing the literature related to web applications helps us to understand how designers conduct the redesign process for different web-based applications and evaluate them.

Regarding the latest card-like space allocation technique for placing different contents in mobile and web applications, Apple Inc. mentions six design principles in their human interface guidelines: aesthetic integrity, consistency, direct manipulation, feedback, metaphors and user control [43]. Additionally, a detailed description of the dark version of an application is also given in those guidelines. Following this dark version and card-shaped content representation trend, Facebook has also done a major redesign of their existing UI to prioritise these two characteristics. The use of black colour and the technique of differentiating contents from each other increase visibility and clarity [44].

Mass Rapid Transit (MRT) Jakarta redesigned its mobile application since many users found the existing application challenging to use. At the beginning of the process, they conducted qualitative and quantitative usability testing and collected users' think-aloud comments, responses and suggestions. Based on the insights from the usability tests, the new interface was redesigned using Google Material Design, since it provides a design standard that aligns with the ten usability heuristics for UI design cited by Nielsen [45]. The new design of the application was tested with performance measurement, which includes assigning users tasks and tracking the time, and measuring the number of errors, task success and task completion. They also used the usefulness, satisfaction, and ease of use (USE) questionnaire to evaluate the final design. On all the indicators, the new interface showed better usability than the previous version.

Rumah Belajar is a web-based supplemental learning application in Indonesia. To increase user satisfaction with that platform, Ahsin et al. conducted a usability test to understand users' needs and the flaws in the application [46]. To establish requirements for the new interface, they conducted usability testing with the concurrent think aloud (CTA) technique on the old interface, used the system usability scale (SUS) questionnaire, and carried out a contextual interview. Based on the data found in the test, they utilized the UCD method to propose a new design. After developing low- and high-fidelity (interactive) prototypes, they conducted an evaluation as they had done on the old UI, but this time they added three scenarios, broken down into four tasks. In the evaluation, the redesigned application performed better than the old one, scoring 83 on the SUS and earning an A grade.
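For reference, a SUS score such as the 83 reported above is obtained from ten five-point Likert items using Brooke's standard scoring rule: odd items contribute the response minus one, even items contribute five minus the response, and the sum is multiplied by 2.5. The sketch below illustrates that calculation only; the function and variable names are our own and are not taken from the cited studies.

def sus_score(responses):
    """Standard SUS scoring: ten 1-5 Likert responses -> a 0-100 score."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 sum to 0-100

# Example: one participant's answers to items 1-10
print(sus_score([5, 2, 4, 1, 5, 2, 4, 2, 5, 1]))  # 87.5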

On that same application, Septiandi and Suzianti took a similar initiative of redesigning the interface [47]. They conducted usability testing on different parts of the application and applied different methods, such as the system usability scale (SUS), the questionnaire for user interface satisfaction (QUIS) and retrospective thinking aloud (RTA), to get insights from the users. While designing, they mainly focused on the most problematic features of the application: the navigation bar, the dashboard, registration and three other pages. The new interface adopts Google's Material Design and Nielsen's ten heuristics for user interface design. The new design proved more effective since it showed a significant difference in performance measurement. Notably, performance measurement is calculated from four indicators: task success (TS), error (E), time on task (ToT) and completion rate (CR).
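The cited studies report these indicators without publishing their exact formulas, so the following is only an illustrative sketch of how such session metrics could be aggregated from logged task attempts; the class and field names are our own assumptions.

from dataclasses import dataclass

@dataclass
class TaskAttempt:
    success_level: float  # 1.0 = full success, 0.5 = partial, 0.0 = failure
    completed: bool       # finished without giving up or timing out
    errors: int           # errors observed during the attempt
    seconds: float        # time on task

def performance_summary(attempts):
    """Aggregate TS, E, ToT and CR over logged attempts (illustrative only)."""
    n = len(attempts)
    finished = [a for a in attempts if a.completed]
    return {
        "task_success": sum(a.success_level for a in attempts) / n,
        "errors_total": sum(a.errors for a in attempts),
        "mean_time_on_task": sum(a.seconds for a in finished) / max(len(finished), 1),
        "completion_rate": len(finished) / n,
    }

print(performance_summary([TaskAttempt(1.0, True, 1, 42.0),
                           TaskAttempt(0.5, False, 3, 90.0)]))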

To check the suitability of the design, enhance the user experience, and determine the needs and difficulties in arsitag.com, Puspitasari and Tarigan performed an analysis targeting the UI and UX usability of the mobile version of that website [48]. To support design decisions in this research, they conducted a heuristic evaluation to avoid a repetitive design process, SUS testing to determine usability, and the single ease question (SEQ) to implement task-based testing. The SUS evaluation result of the newly designed interface improved from 61 to 88.5, which means the display of the content was very good. The SEQ results of their test indicate that the 'professional search' page does not entirely address the users' needs, since accessing it seems complicated. Subsequently, the heuristic evaluation addresses the users' difficulty in finding the FAQ button in the mobile version of the website, but overall a good improvement is observed.

Adinda and Suzianti redesigned the user interface of an e-government application named Kota Bekasi [49]. First, they invited thirty citizens aged 19-60 and used RTA, SUS, and QUIS to analyse usability. Along with the data extracted using the methods mentioned above, they measured the parameters task success, error and time on task, and discovered a low usability level for that application. Later, they started redesigning the application and set two things as the foundation of the process: formal rules (the ten usability heuristics) and best practices (Material Design). Taking lessons from the RTA, they fixed the margins of the application in the layout section, kept the font sizes for headings and content at 24 pt and 16 pt respectively, and placed the elements in the menu bar maintaining a hierarchy. They did the same in the navigation, implementing a notification system and an option for two languages. The time on task and the results of the SUS questionnaire were recorded when ten participants took part in the verification test of the redesigned app. Finally, the application scored 82.2 on the SUS, compared to 43.438 for the old version.

To achieve better performance in an e-government application, Chang and Huang redesigned it using the UCD evaluation approach [50]. To find the pain points in the old application, they conducted user surveys and user experiments. They divided the whole process into four steps: collecting users' opinions and expectations from an online questionnaire survey, recording users' behaviours and performance data from observation and interviews, reorganizing the information architecture in a low-fidelity wireframe, and refining the high-fidelity prototype through subsequent validations. They identified design pain points in the layout, colour, icons and search system of the previous UI. Repeated functions and a lack of separation and prioritization between contents were also discovered in the layout. Issues with colour were observed in the background, theme and buttons of the UI. The design style, colour and size of the icons were not consistent. Search icons that were too small, insufficient search bars and issues with search results were found in the search system.

Later, while generating the new design, they simplified the functions, eradicated the redundancy, and reorganized the position of the services as well as the information architecture. Furthermore, they adjusted the size of the visual elements, changed the theme colour to a suitable one to bring consistency to the theme, designed easily recognizable icons, and improved the search bar and search results. The final design validation of the new application was done in six gradual parts: providing a consent form, giving pre-experiment questionnaires, assigning tasks, allowing free exploration, another questionnaire (SUS) session after the experiment, and finally taking open comments about the application. The experiment, based on completion time, errors, interface path and operation nodes, was conducted on five participants. A better user experience was recorded as the outcome of the redesign of the application.

Forte and Darin evaluated the user experience (UX) of a bike-sharing platform named Bicicletar in order to redesign its UI [51]. Their redesign process is divided into three stages: 1) UX evaluation in the actual context and analysis, 2) development of a high-fidelity prototype of the new design using the tool POP 2.0, and 3) prototype validation. They used the DECIDE framework to plan this research, and for further evaluation they recorded both webcam video and the computer screen. In the first stage, they explored the application, conducted usability inspections both in the lab and in real-world contexts, and collected data from online questionnaires (CSUQ, QUIS and SAM). After analysing the qualitative data using the 'content analysis' technique, they moved on to designing the new application using the UCD process, first conceptualizing the application and then making the prototype. Later, fifteen participants evaluated the high-fidelity prototype, and the researchers discovered a positive change in the UX of the new UI.

2.3 Human-robot Interaction and Robotic User Interface

According to Goodrich and Schultz, "human-robot interaction (HRI) is a field of study dedicated to understanding, designing, and evaluating robotic systems for use by or with humans" [52, p. 204]. Schmidtler et al. define HRI as a general term for all kinds of interaction between humans and robots [53]. So, HRI can be defined as any situation where human(s) and robot(s) react to or communicate with each other. Kanda reports that humans can have two kinds of interaction with robots: remote and proximate [54]. In remote interaction, the robot and the human are not located in the same place; in proximate interaction, they share the same place. Hentout et al. report that collaborative robots engage in proximate interaction through direct or indirect communication [55]. Direct communication occurs when the human collaborators directly instruct the cobot through a robotic user interface (RUI) or modalities like speech or gestures. Indirect communication, on the other hand, occurs through the robot's understanding of the users' eye gaze and facial expressions.

The robotic user interface (RUI) is crucial for controlling robots while ensuring protection in a complex environment. A RUI can be defined as a user interface that augments virtual access so that human operators can communicate with robots in different environments, such as in situ maintenance or rescue missions [15]. To assist operators by reducing their cognitive workload, the RUI should be well configured and properly interfaced [56]. RUIs that come with an unintuitive and unusual design are challenging to understand because they create cognitive handling issues [17]. Research on interface design and evaluation can enhance the performance of a RUI, and especially the collaboration between human and robot [57].

Higher levels of usability can boost a collaborative robot's performance, human well-being, and level of acceptance [58]. Campana and Quaresma searched for better usability guidelines for developing RUIs, mainly for industrial robots and robot manipulators. The guidelines they reported are as follows (Table 1):

1) Implicitly Switching Interfaces and Autonomy Modes
2) Use Natural Human Cues
3) Directly Manipulating the World
4) Manipulate the Robot-World Relationship
5) Information is Meant to be Manipulated
6) Externalize Memory
7) Support Attention Management

Table 1: Usability Guidelines for Industrial Robots and Robot Manipulators [58]

On the other hand, Villani et al. mentioned three main modules for an intelligent RUI in a cobot: 1) the UI can measure human capabilities, 2) it can adapt the interface according to those capabilities, and 3) it can teach and train unskilled users [3]. Additionally, Niculescu et al. specified the system requirements for their industrial robot according to their needs and followed three principles while designing its RUI: 1) simplicity of the UI, 2) structured division and chronology, and 3) feedback regarding actions [59]. For the qualitative evaluation, they assigned participants tasks and conducted interviews. Alongside, two standard questionnaires, the questionnaire for user interface satisfaction (QUIS) and the speech user interface service quality - reduced (SUISQ-R), were also used in their research to get quantitative data.

Design cases of robotic programming frameworks and tools:

Task-level programming is a method that provides an easy and quick way to program robots. Schou et al. developed a software tool named the skill based system (SBS) that allows novice collaborative robot users to program industrial tasks [9]. For people of various backgrounds, SBS offers an intuitive GUI (Figure 1) and a system built on the concepts of robot skills with different parameters and task-related actions. The GUI of SBS provides menus for creating new tasks (Figure 1 A), managing and running existing tasks (Figure 1 B), maintaining the setup (Figure 1 C), and calibrating other hardware and managing the autonomous navigation of the cobot (Figure 1 D).

Figure 1: GUI of SBS [9]

From the skill parameterization window of the GUI, users can set and edit parameters offline, use existing tasks and sequences, and go for online teaching. Throughout the process, the GUI, connected to a robot arm, provides users with instructions and feedback from the system. In the usability study of the system, the researchers observed a significantly reduced performance gap between novice and expert robot users. Furthermore, they found that a short description in skill-based programming helps novice users easily program robot tasks. Worth mentioning, at the request of the users who took part in several rounds of usability tests, they redesigned the GUI several times to better present graphical instructions and the robot's status.


Intending to develop a framework for task-level programming, Steinmetz et al. proposed two approaches: 'creating skills from experts' and providing those skills to the system so that shop-floor workers can 'create executable robot tasks' in an intuitive RUI [12]. In the framework RAZER, they merged both approaches to optimise their benefits and mitigate their disadvantages. The principle of RAZER is a class hierarchy where the major classes connected to the robot are task, entity, step, service and skill. Through HRI and the RUI, users can create a new task or proceed from a previously made one. Clicking on a task opens a window with a vertical sequence of 'drag and drop' skills and several buttons for setting the values of different parameters. At any time, users can change, delete, edit or rearrange things in a task. After completing the steps mentioned above, users press the play button to begin the execution. In evaluating this framework, two kinds of user studies were conducted: a cognitive walkthrough and thinking aloud. The first study was conducted to gather qualitative feedback from two robot experts, assigning them four step-by-step tasks. The outcomes of this session were the discovery of some significant inconsistencies in the design, missing descriptions in dialogues, issues with labels and choices of colours, and some desired features. The ISONORM 9241/110 questionnaire was provided to collect quantitative data and qualitative feedback in the other study. The three main issues identified in this study were modifying parameter values, teaching trajectories, and defining a selected pattern. Based on the feedback, the framework was improved further, and its average rating rose from 1.4 to 2.0. It was also found that users of any experience level can now command the robot by creating a program in this framework within 3 minutes.
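To make the building-block idea shared by SBS, RAZER and Franka Panda's own task-based RUI concrete, the sketch below models a task as an ordered sequence of parameterized skills. It is purely illustrative: the class and field names are our own and are not taken from any of the cited frameworks.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Skill:
    """One reusable building block, e.g. a motion or a gripper action."""
    name: str
    parameters: Dict[str, float] = field(default_factory=dict)

    def execute(self) -> None:
        # A real framework would forward this to the robot controller.
        print(f"executing {self.name} with {self.parameters}")

@dataclass
class Task:
    """An ordered sequence of parameterized skills, as shown on a timeline."""
    name: str
    skills: List[Skill] = field(default_factory=list)

    def run(self) -> None:
        for skill in self.skills:  # the 'play button' behaviour
            skill.execute()

# Example: a pick-and-place task composed from two parameterized skills
task = Task("pick_and_place", [Skill("motion", {"x": 0.4, "y": 0.1, "z": 0.2}),
                               Skill("gripper", {"width": 0.02, "force": 20.0})])
task.run()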

Ramírez-Benavides et al. developed two tools, TITIBOTS and TITIBOTS Colab, which allow kids to program robots in their early childhood [60]. The tools consist of icon-based interfaces for visual programming, a robot, and a mobile device; using these, kids can create programs and run them on robots. The methodology they used was UCD, alongside participatory design, experience prototyping and usability testing. Twelve experts in different areas, eleven researchers, fifteen preschool teachers and approximately one hundred children were involved in the design process. The UCD process sequentially specified the context of use, determined the requirements, created design solutions, and finally evaluated them. In total, they conducted seven iterations to complete the project. The first iteration focused on the technical feasibility of programming for the first standalone tool. In the next step, the generated design solutions were tested on that tool through a paper prototype validation. In the fourth iteration, they implemented the solution, and they improved it in the next step. In the sixth and last iterations, they focused on the collaborative version of the prototype and then conducted another round of evaluation. The phases they maintained in all seven iterations were: analysis, conceptualization, design solutions, test, and refine.

Uii is a robot interfacing framework consisting of Uii Desktop and Uii Mobile [61]. The desktop version is a GUI-based programming approach that works as a front-end interface between the system and the components of the robot. Its RUI has mainly two tabs: the welcome tab and the robot tab. The 'welcome tab' is the first one, where users find a quick response (QR) code and set configurations by specifying the type of manipulator and gripper and the internet protocol (IP) addresses of the devices in order to connect to specific robots. After a successful connection, the 'robot tab' appears, where users can set the orientation and position of the robot's manipulator and gripper. Uii Mobile, on the other hand, acts as a mobile front-end of the Uii framework once it handshakes with Uii Desktop by scanning the QR code. Any robot connected to Uii Desktop can be controlled from the mobile app. Uii Mobile comes with mainly five screens. The first screen is the 'splash screen', which notifies users that the application is launching. Then the 'welcome screen' appears, where users find an option to scan the QR code from Uii Desktop. Later, in the 'task pick screen', users can load new task files or continue working with preloaded old ones. Afterwards, the 'jogging screen' appears, which has four main areas. The first and second areas have arrows for controlling the position and orientation of the robot. The third area is dedicated to gripper actions and the free drive option, and the fourth area acts as the main part of the 'jogging screen'. Finally, the 'programming screen' shows an overview of the actions and tasks, and the options to delete, edit, execute, test and stop actions.

Design cases of RUIs:

One of the necessary conditions for a cobot run by everyday factory workers is an intuitive and simple programming interface [62]. Nair et al. presented a proof of concept for the mobile collaborative robot KUKA LBR iiwa to teach people who are not programming experts different production scenarios of a factory [63]. They used pre-programmed task blocks in the graphical interface when modelling work sequences for their robot. As an outcome, they found that those blocks enhance human-robot collaboration by providing easy accessibility to users when creating applications for that robot.

Naveed et al. designed and developed a single-operator RUI for the autonomous rescue robot GETbot. Using that RUI, users can control and navigate multiple mobile exploratory robots (MER) [15]. In their case, data streams were coming from multiple sensors on multiple robots, so they concluded that the operational complexity would increase for an operator. Thus, for proper manoeuvring and precise operation of MERs, they decided to design an optimised interface from the perspective of usability. At the beginning of the research, they gathered design principles used by others, finalised an approach with some new metrics, evaluated the existing RUI of GETbot, identified its weaknesses, and combined HRI and HCI principles for the design improvements. In the refreshed design, they then combined everything into a menu and three tabs. In the first tab, all sensor data is fused into one place, combining the map view, status information of the system, and the robot's mode of operation. Thus, the operator's eye movement and manual control were reduced, which resulted in less cognitive load. In the second tab, an enlarged view of the maps displays detailed information about the selected robot. In the last tab, detailed sensor readings are shown, with highlighted text colour if a victim is identified in the rescue operation. As the menu at the top of the RUI presents all the necessary options, the operator does not need to memorise items or functionality; hence the menu reduces the operator's mental load. In the evaluation, the new RUI presented more features, showed better performance in the objective evaluation, and optimised the screen percentage for the major interface elements.

Figure 2: GUI of Robotic Hand on AGV [64]

To perform order-picking tasks in industry, D'Souza et al. proposed the solution of installing a robotic hand (cobot) on an automated guided vehicle (AGV) [64]. The AGV they chose was ActiveOne, a vehicle produced by the Portuguese company Active Space Automation, and the robotic hand they selected was the KR 810, a seven-degree-of-freedom (DOF) robot manipulator developed by a Danish company named Kassow Robotics. After setting up the cobot on top of the AGV, they programmed the robot's teach pendant (an Android tablet) and created a GUI (Figure 2). Both the AGV and the cobot communicate through that GUI; hence an operator can control both machines from a single interface. The main features of that GUI were: commanding the position of the AGV and the cobot (Figure 2 A), opening and closing the gripper (Figure 2 B), grasping and dropping objects (Figure 2 C), and showing the status of both machines (Figure 2 D). The design of the GUI presents some icons in a column on the left of the screen. Those icons offer different services, such as sending the cobot to the home position, terminating a task, sending the AGV to the next position, creating a task list, and moving the AGV to a point. Additionally, the GUI is compatible with a speech-recognition system (SRS) in English and Portuguese and can work according to preset commands.

Under the French project QuoVADis, Pino et al. developed a GUI for Kompaï, a socially assistive mobile robot for adults suffering from cognitive impairment [65]. Kompaï can take input from a touch screen and a microphone, engage in social interaction, and provide different kinds of support. In this research, they developed a touchscreen-based web GUI using the UCD approach. The home screen of the GUI is kept simple to avoid the accessibility and usability issues seen in older people. There, a set of button icons represented by nine different images expresses the different applications. Once a user opens an application, the relevant fields and navigation windows pop up. While evaluating the GUI, they assigned participants a set of tasks using the elements main menu, shopping list and agenda. In the evaluation results, most of the participants understood the meaning of the button icons but expected to see text labels as well. Users found control elements such as navigation and numeric up/down controls lying inside the icons difficult because of their limited computer literacy.

ROBCO 18 is a home assistant service robot developed to support elderly and disabled persons in their essential household work [66]. This robot can be controlled by a joystick, gestures, speech and a web interface. The web interface of this robot is presented on a tablet attached to it and is developed on top of Django (a Python-based web framework), NodeJS and a ROS server. Since this RUI is a web interface, users can control the robot from any device. The menu in the RUI of ROBCO 18 presents mainly four options: voice functions, settings, and manual and autonomous control modes. In manual mode, the user can handle the robot via a virtual joystick, see the data from the sensors and watch the video stream from the robot's camera. In autonomous mode, on the other hand, the robot can move independently with the help of a laser scanner, a Kinect and sensors; hence no human guidance is needed. Similarly, it can be operated through voice commands, as it can recognise and synthesise users' voices. The settings page allows users to set basic parameters of the robot's control package and sensors. Finally, the robot's functionality for tasks like remembering when to take medicine, serving food or drinks, turning electronic devices on or off, setting reminders and calling an emergency ambulance was tested on fifteen elderly persons. The participants liked the idea of having ROBCO 18 as their personal assistant but wanted it to look like a human with a female voice. They preferred the virtual joystick, voice commands, and remote control via the web interface when operating this robot.

Figure 3: GUI of Servosila Engineer Robot [67]

Mavrin et al. developed a mobile robot named Servosila Engineer. The GUI (Figure 3) of that robot comprises two windows: the velocity window and the maximal velocity window [67]. The maximal velocity window (Figure 3 C) consists of text boxes and sliders so that the operator can set maximum values for different parts of the robot. The velocity window (Figure 3 A), on the other hand, consists of two tabs: speed and position. Users can fix the settings as well as control the tools and gripper of the robot from the speed tab. From the position tab, parameters related to the elbow, neck, shoulder and waist of the robot can be set to determine (Figure 3 B) the robot's position.

Figure 4: GUI of Vineyard Sprayer [68]


For a vineyard sprayer cooperative robot, Bernstein et al. designed a GUI (Figure 4) that consists of five windows [68]. The security window (Figure 4 A) is the first one, where a user must enter login credentials to get into the system. Then, from the welcome window (Figure 4 B), the user can create a new job or carry on with an old job made earlier. Later, the user settings window (Figure 4 C) allows the user to set different values to fix the field and supervision characteristics. Using navigation arrow buttons, users can spray the chemicals with the help of a virtual joystick and a spray button. Finally, the main window acts as a bridge between the operator and the robot; using it, an operator can help the robot mark grape clusters. In this same window, there is an emergency stop button for tackling any unwanted situation.

Chivarov and Chikurtev wrote about the web interface of a robot manipulator named ROBCO, developed for upper limb rehabilitation [66]. The interface (Figure 5) is a single-page representation with five sub-menus. The first submenu has two icons/buttons, connection and emergency stop; a tick box in this section shows whether the manipulator is connected to the interface or not. Next, in the 'procedure settings' submenu, users can set the cycle, speed, date and time. The third and fourth submenus allow setting the angles of the joints and the power input, respectively. A selected power button in the fourth submenu turns green to show that it is activated.

Figure 5: GUI of ROBCO [66]

Finally, in the last submenu, users find the buttons: pause, stop and start. A bar indicator in this row shows the progress of a procedure. Notably, the sub-menus of this GUI are not divided into different sections; hence it is hard to follow which section does what.

2.4 Summary

In a nutshell, there are numerous guidelines for achieving good usability in a web UI. Our compiled guidelines (Appendix A), mentioned in chapter 2.1, are for inspecting the old RUI and designing the frames and components of the new RUI. Since the work of this thesis is redesigning a RUI, we aim to observe and note the sequences and functionalities of how the robot works, and then rectify the ideas of the old RUI and implement new ideas in the redesigned RUI. Thus, in allocating everything related to robot operation on a homepage, the guidelines of chapter 2.1 help us to solve problems and implement ideas and features.

On the other hand, the RUI cases we reviewed in chapter 2.2 mention only a handful of guidelines, mostly UCD. These alone are not adequate to establish a standalone set of design guidelines for redesigning a RUI. Regarding RUI evaluation, we found that the methods QUIS, SUISQ-R, cognitive walkthrough and think aloud were used in the described cases. In most of the literature, two questions are not well explained: how the RUIs were designed and why different features were placed in particular RUI frames. Such insufficiency suggests that those RUIs were designed and developed for maximum convenience while overlooking usability as a quality. This again reminds us that usability is often ignored in the development phase of RUIs, since the robot developers themselves try to do the design work.

Compared to RUIs, the guidelines and methods for developing and evaluating a web-based UI are much more abundant. We aim to combine the guidelines and methods from both the RUI and web-based UI arenas and apply them in the different development phases of the new RUI for Franka Panda.


3. RESEARCH METHODOLOGY

This chapter first explains how the different research approaches were fused to extract the maximum benefit from them, and outlines the research stages of this thesis. It then discusses the data collection and analysis methods, along with a brief introduction to the research platform of this thesis.

3.1 Research Approach and Process

Two similar approaches, design thinking (DT) and human-centered design (HCD), were combined and applied to generate a new RUI for Franka Panda from the existing one. According to DiMeo, design thinking focuses on creating an item or process from scratch, whereas human-centered design aims to make an existing product or process even better for users [69]. Bethany describes DT as a process, since it looks at the bigger picture and focuses on innovation and the development of products or services [70]. HCD, on the other hand, is a mindset that looks at the details and works to improve the usability and user experience of a specific product or service. Despite the differences, Filiz thinks that both approaches have many common features and can therefore be reconciled [71]. Hoover defines DT as a process with which designers create a solution that people will adopt, and explains HCD as a mindset that works in combination with DT to ensure that products or services are relevant and beneficial to the people who will use them in the long run [72].

Altman et al. define DT as a structured, iterative problem-solving approach that goes through several rounds of ideation, prototyping and testing to develop novel products, services or business standards [73]. According to Hehn et al., DT focuses on exploring human needs, non-technical prototyping, iterative problem reframing and interdisciplinary teamwork [74]. In a frequently cited definition, Brown refers to DT as a means of integrating technologies and defines it as a designer's action plan that addresses people's needs and leads to human-centered innovation [75].

Design thinking has received much coverage lately as a powerful tool for solving socially relevant design problems as well as for handling the uncertainty of open challenges [76], [77]. DT is mainly intended to be used in projects that are volatile in nature and whose requirements are unidentified or hidden [78]. The Hasso-Plattner Institute of Design at Stanford suggests five phases/stages of design thinking (Figure 6): empathise, define, ideate, prototype and test [79].


Figure 6: Phases of Design Thinking [80]

In the empathise and define stages, researchers/designers look at things from the users' point of view. They try to uncover the users' experiences, understandings and motivations by observing, engaging and immersing themselves in the environment.

To accumulate data for these stages, interaction, interviews, documenting users, gathering stories and reviewing the literature are common practices [81]. Researchers/designers then define the core and major problems by analysing the obtained data.

In the ideate stage, researchers/designers look at the problems from outside the box, search for alternative ways to inspect the problems and find ingenious solutions [82].

Afterwards, based on the best solutions of the earlier stage, researchers/designers create cheap, scaled-down versions of the product, known as prototypes [83]. Finally, in the test stage, the prototype is evaluated with real users, and the iteration continues until the solutions fulfil the pre-set requirements.

The second approach considered in this thesis is human-centered design (HCD). According to the ISO 9241-210:2019 standard, “human-centred design is an approach to interactive systems development that aims to make systems usable and useful by focusing on the users, their needs and requirements, and by applying human factors/ergonomics, and usability knowledge and techniques” [31]. This standard also describes HCD as an approach that enhances effectiveness and efficiency, improves human well-being, user satisfaction, accessibility and sustainability.

UCD is a subset of HCD and a similar design approach that focuses mainly on end-users' needs [73]. HCD, on the other hand, creates universal products, environments and services for all users, regardless of their physical and cognitive abilities [27]. HCD brings a wide range of stakeholders and developers together to co-create products and services. Additionally, it delivers strategies that meet the users' needs and capabilities and identifies, prioritizes and addresses barriers to usability [84].


Figure 7: Human-Centered Design Process [72]

IDEO, one of the most creative companies in the technology world, made HCD famous by proving its effectiveness in a wide variety of applications. According to them, HCD has six phases: observation, ideation, rapid prototyping, user feedback, iteration and implementation [85], [86]. Some sources combine closely related phases of HCD (Figure 7) into three main phases: inspiration, ideation and implementation [87], [88].

To combine the benefits of both approaches, Hoover allocated the phases of DT to the different phases of HCD (Figure 8) [72]. This combination offers a process as well as a mindset for creating self-sustaining solutions. From Figure 8, we find that DT's first phase, empathise, falls into the inspiration phase of HCD. As this phase is divergent in nature, it reminds us not to rush into executing anything. In this phase, we conducted the pre-study sessions of our research and reviewed the relevant literature. The timeline of this phase lies in stages 1 and 2 of the research process (RP) (Figure 9). Participant recruitment, planning the sessions, creating the necessary documents, drafting test reports, and finding relevant journal papers, blogs and articles were done in these stages. The pre-study part of this research started before the literature search, but later the pre-study and the literature review proceeded in parallel.

Figure 8: Combination of DT and HCD [72]


The define phase of DT (Figure 8) lies in stage three of the RP (Figure 9). Cleaning unnecessary information out of the session reports and compiling a final report (Appendix F) for stage 1 of the RP helped define research question (RQ) 1. Later, in the ideate phase of DT (Figure 8), we started seeking feasible solutions to the usability problems of the Franka Panda RUI. The accumulated solutions were then narrowed down to the most viable ones. Documenting the design ideas, sketching, and wireframing the new RUI were also done in DT's ideate phase. Notably, both the define and ideate phases of DT belong to the ideation phase of HCD (Figure 8), and they all fall within stage 3 of the RP (Figure 9).

Afterwards, we entered DT's prototype phase (Figure 8), which is also the implementation phase of HCD. In this phase (stage 4 of the RP), we developed the high-fidelity prototype and implemented the interactivity between the sections and components of the new RUI (Figure 9).

Figure 9: Timeline of the Research Process

Later, in the test phase of DT (stage 5 of the RP and the implementation phase of HCD), the evaluation of the new RUI began. This process concluded with compiling the feedback, shortcomings and improvement ideas from all the individual evaluation sessions into a final report. In stage 6 of the RP (Figure 9), we present how the solution to RQ 2 drastically changes the user experience and usability of the new RUI. It is worth mentioning that this test phase remains iterative until the new RUI scores well in subsequent evaluations.

3.2 Research Methods and Platform

3.2.1 Data Collection Methods

In two stages of this research (the pre-study and the evaluation of the new RUI), a mixed-method approach was followed to collect qualitative and quantitative data from seventeen users [89]. The qualitative data gathered in this thesis helped put individuals' experiences, interactions, feedback and insights into words [90]. Furthermore, the qualitative data helped find answers to the why and how questions in different research scenarios by providing different viewpoints [90].

The methods utilised to collect qualitative data in different parts of this thesis are observation, think aloud, individual expert review and semi-structured interview. The specific observation methods used for collecting qualitative data in the pre-study are ‘in situ’ and ‘controlled observation’. The same methods were used in the evaluation of the new RUI to collect both qualitative and quantitative data. Observation and think aloud made it possible to collect insights from the users' actions, expressions and behaviour. Afterwards, the individual expert reviews filled in the aspects missed in the earlier observation sessions. Later, in the evaluation of the new RUI, observation, think aloud and semi-structured interviews extracted the users' thoughts about the different events of the sessions as well as the things they liked, the challenges they faced, alternative solutions, and their preferences and advice towards both the new and the old RUI.

On the other hand, two quantitative data collection methods, the user experience questionnaire (UEQ) and the system usability scale (SUS), were used to evaluate the new RUI. These easy-to-administer methods helped us understand the users' attitudes, feelings, opinions and insights, and gave an accurate reflection of their views of and experiences with the new RUI [91]. Based on the obtained results and in-depth data, we could identify the areas where things can be improved in future updates or implementations.

3.2.2 Data Analysis Methods

Insights from the analysis can reframe a situation and relate different elements to each other [92]. To analyse and gain insights from the data obtained with all the qualitative methods, this research followed a process called heavyweight data analysis [93]. This process was chosen in order to get a standard report with a uniform and robust structure. The goal of analysing the observation and interview data was to search for common patterns [94]. The data were then categorised using those common patterns. Later, all the categories from all the reports were compiled into a final report (Appendix F).

On the other hand, the analysis of the quantitative UEQ and SUS data was done in Microsoft Excel, where the mean and median of the data were calculated. All the data collection and analysis methods stated above are explained in detail in the respective chapters.
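To illustrate the quantitative analysis, the standard SUS scoring rule (odd items contribute the score minus 1, even items contribute 5 minus the score, and the sum is multiplied by 2.5 to yield a 0–100 score) together with the mean and median can be computed as in the sketch below. The response values are hypothetical example data; in this thesis the calculations themselves were done in Excel.

```python
# Minimal sketch: standard SUS scoring plus mean/median, mirroring the
# Excel analysis. The participant responses below are hypothetical.
from statistics import mean, median

def sus_score(responses):
    """responses: ten 1-5 Likert answers in questionnaire order."""
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items contribute (score - 1); even items contribute (5 - score).
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 raw sum to 0-100

participants = [
    [4, 2, 5, 1, 4, 2, 5, 2, 4, 1],
    [5, 1, 4, 2, 5, 1, 4, 1, 5, 2],
]
scores = [sus_score(p) for p in participants]
print("SUS scores:", scores)
print("mean:", mean(scores), "median:", median(scores))
```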

3.2.3 Research Platform

The robot we picked as the platform for our research is Franka Panda (Figure 10), a cobot developed by the German company FRANKA EMIKA GmbH [95]. The Panda is an increasingly popular, lightweight (~17.8 kg) robot that can carry a payload of 3 kg with its seven axes and seven degrees of freedom (DOF) [96]. The sophisticated sensors and control algorithms of Franka Panda enable the robot to react within a millisecond to avoid unwanted collisions. The machine achieves high-quality human-machine interaction because its flexible torque-controlled joints behave much like human muscles contracting or relaxing to adapt to a task or an environment.

Figure 10: Franka Panda

A web-based RUI named ‘Desk’, rich with different kinds of parameterised drag-and-drop applications, comes bundled with this robot. These applications support users in workflow-based programming, in setting various parameters according to need, and in planning sequential tasks by deploying apps for grasping, plugging, insertion and screwing [97].

Moreover, Franka Panda offers a community platform named ‘Franka World’ that connects owners, customers, partners, developers and robots. Through this platform, one can manage a robot fleet, purchase software and hardware extensions, update the system and apps, access support and community-provided materials, and develop apps and share them with others [98].
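Beyond the Desk interface, the research version of the Panda can also be accessed programmatically, for example through ROS. The sketch below is only an illustration under that assumption: it presumes a running franka_ros stack publishing standard sensor_msgs/JointState messages, and the topic name shown is an assumption that depends on the installed configuration.

```python
# Minimal sketch: listening to Franka Panda joint states via ROS.
# Assumes a running franka_ros setup; the topic name below is an assumption
# and may differ depending on the launch configuration.
import rospy
from sensor_msgs.msg import JointState

def on_joint_state(msg):
    # msg.name and msg.position describe the seven joints of the arm.
    joints = dict(zip(msg.name, msg.position))
    rospy.loginfo("Current joint positions: %s", joints)

if __name__ == "__main__":
    rospy.init_node("panda_state_listener")
    rospy.Subscriber("/franka_state_controller/joint_states", JointState, on_joint_state)
    rospy.spin()  # keep the node alive until it is shut down
```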
