There is a visible trend in technology toward making user interfaces that resemble living creatures.

It is believed that by providing interfaces with natural human communication channels, such as speech or touch, we can improve the quality of interaction and make technology more accessible to ordinary users. Moreover, giving technology a human or animal appearance could make it easier for users to relate to than the text-based interfaces of the past. The two most prominent examples of this trend are Embodied Conversational Agents (ECAs) and robots. The aim of introducing these autonomous agents is not only to support people in different tasks, but also to make them capable of acting without human supervision. One of the domains where robots or ECAs could bring great improvements is education. Since they can be personal and do not share the limitations of human teachers, who can be present in only one place at a time, they could monitor students' progress continuously.

The objective of this study was to compare the distinctive qualities of ECAs and robots and to examine their impact on users' task performance. An experiment was conducted in which participants solved modular arithmetic problems and received feedback on their performance from either a rabbit-like robot or an ECA. Sixteen participants took part in the study. In addition to task performance, answers to the post-test questionnaire were analyzed.

Statistical analysis was carried out using independent-samples and matched-pairs t-tests and the Mann-Whitney U test. The most interesting results are summarized below. Due to the small samples used in this experiment, the results were only marginally statistically significant or not significant. Nevertheless, a few interesting trends were observed.
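To make the analysis concrete, the following minimal sketch shows how the three tests named above could be run in Python with SciPy. The library choice, the variable names and all numbers are invented for illustration; they are not the study's data or its actual analysis code.

import scipy.stats as stats

# Hypothetical mean solution times (seconds per problem), 8 participants
# per between-subjects condition, mirroring the study's sample of 16.
robot_times = [14.2, 18.5, 11.9, 16.0, 13.3, 17.1, 15.4, 12.8]
eca_times = [7.9, 9.4, 8.6, 10.2, 7.1, 9.9, 8.3, 9.0]

# Independent-samples t-test: robot condition vs. ECA condition.
t, p = stats.ttest_ind(robot_times, eca_times)
print(f"independent-samples t-test: t = {t:.2f}, p = {p:.3f}")

# Matched-pairs t-test: e.g. each participant's first half vs. second
# half of the session, to test for a speed-up during the experiment.
first_half = [16.3, 20.1, 13.5, 18.2, 15.0, 19.4, 17.2, 14.6]
second_half = [12.1, 16.9, 10.3, 13.8, 11.6, 14.8, 13.6, 11.0]
t, p = stats.ttest_rel(first_half, second_half)
print(f"matched-pairs t-test: t = {t:.2f}, p = {p:.3f}")

# Mann-Whitney U test: non-parametric comparison of ordinal
# questionnaire ratings between the two conditions.
robot_ratings = [4, 5, 3, 4, 5, 4, 4, 3]
eca_ratings = [3, 4, 2, 3, 4, 3, 2, 3]
u, p = stats.mannwhitneyu(robot_ratings, eca_ratings)
print(f"Mann-Whitney U test: U = {u:.1f}, p = {p:.3f}")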

No difference was found in the number of problems solved by participants in the two conditions. However, participants interacting with the robot needed on average almost twice as much time to solve one modular arithmetic problem as those interacting with the ECA. The difference was especially visible at the beginning and vanished towards the latter part of the experiment. Moreover, it was also only in the first part of the experiment that Nabaztag's feedback had an impact on the speed of solving the task. There are two potential explanations for these results. Powers et al. [2007] suggested that people interacting with robots focus more on the robot, and less on the main task, than people interacting with ECAs. It is also possible that the social facilitation effect is responsible for these differences [Bartneck, 2003]. That would also mean that robots evoke a stronger social facilitation effect than ECAs.

While, unexpectedly, participants rated the task as relatively difficult, which reversed the direction of H1, they performed it very well and hardly made any mistakes. Moreover, and very importantly for the use of robots and ECAs in the educational domain, in both conditions users significantly decreased the time required to solve one modular arithmetic problem over the course of the experiment. In other words, they learned to solve the problems faster after receiving supportive feedback from Nabaztag.

Unfortunately, due to the lack of a control group it is impossible to say how big an impact Nabaztag's feedback had and how much was simply the result of learning better techniques for solving the task, which would also have occurred in the absence of Nabaztag. Nevertheless, participants declared that the robot/agent helped them focus on the main task.

These findings have important practical consequences. Both ECAs and robots seem to be well suited for the educational domain, since subjects improved their speed of solving modular arithmetic problems. However, a choice between these two technologies should be based on how well practiced the material being processed is. If robots induce a stronger social facilitation effect, they should be used in situations where users perform well-trained tasks, as a robot's presence will then result in better task performance than an ECA's. On the other hand, if a task is difficult or not well trained, a robot's presence would impair the user's task performance. Similarly, a choice between utilizing a robot or an ECA at work should be based on the same criteria.

Moreover, in both conditions participants enjoyed the interaction with Nabaztag. This is also very important, as it confirms the high value of these technologies in the entertainment domain. Furthermore, it is positive news for their potential use in education, since pupils who enjoy interacting with a robot or an ECA will also have a more positive perception of their study experience and spend more time on it. By seeing it as fun, they will probably also process the material better than if they were just reading it.

Finally, the last important finding of this experiment with practical implications is that people tend to be more forgiving of robots' imperfections than of ECAs'. In both conditions the same repetitive feedback was used. However, only in the agent's condition was it perceived as irritating. While this difference was not statistically significant, there was a strong trend in favor of the robot. Considering that robot technology is still in its infancy, it is possible that people will be more understanding when such a system fails in some situations. This would also be very important, as it could accelerate the social acceptance of robots and foster their development. Moreover, it is another aspect to consider when choosing between robots and ECAs in education, entertainment or any other sector, as users require flawless and sophisticated capabilities from agents.

On the other hand, there were some methodological flaws in the design of the presented experiment. Any future research should include bigger samples. The small samples used in this experiment had less statistical power, and the results were only marginally statistically significant or non-significant trends. Moreover, small samples increased the probability of individual differences influencing the results. It is hard to draw solid conclusions from such results, and future research should answer the question of whether real differences exist.

Furthermore, a boredom effect could have affected subjects' answers to the post-test questionnaire. Since there was a very limited variety of feedback provided by Nabaztag, it is possible that participants got tired of it and, in both conditions, rated the impact of the robot/agent lower than they would have if the question had been asked earlier. The lack of negative feedback, due to participants' superior task performance, also made it impossible to see how they would have responded to negative messages. It would be interesting to see whether negative feedback provokes different reactions between the conditions.

Moreover, only one robot and one agent were used in the experiment. There is strong evidence that even seemingly small differences in the shape of a robot's head can affect participants' perception of and attributions to a robot [Powers and Kiesler, 2006]. It is possible that if another robot had been used in the experiment, the results would have been different. Further questions could also be asked about the use of different types of agents. A researcher must choose between 2D and 3D, or animated and cartoon-like, ECAs. It is currently not known how such differences would affect users. Both the robot and the ECA used in this experiment were static, with a light signaling that they were about to speak.

However, the biggest general advantage of robots and agents is that they are capable of moving themselves or parts of their bodies. Such a robot/ECA would definitely be more entertaining for users and could have influenced the results. In addition, the ECA used was based on the robot's appearance, but lacked a key capability of computer agents: a mouth that moves when speaking. Therefore, while making the ECA mimic the robot helped to ensure equal conditions, it made the comparison unfair to computer agent technology.

Future research should also focus more specifically on a few areas reported in this paper. There is some evidence in a previous study [Bartneck, 2003] and the current one that robots induce a stronger social facilitation effect than ECAs. However, systematic research comparing their impact on users' performance on easy and difficult tasks is required. If these suggestions are confirmed, it could also shed new light on the potentially stronger anthropomorphization of robots compared with ECAs [Kiesler et al., 2008].

Finally, this research was unable to answer the question of whether receiving feedback from a robot or an agent can improve a user's task performance. It is necessary to include a control group in future research. Only then will we be able to know to what degree the increase in problem-solving speed is a result of learning new solution strategies and to what degree it results from a confidence boost after a robot's or an ECA's feedback.

References

[Aiello and Douthitt, 2001] Aiello, J. and Douthitt, E. Social facilitation from Triplett to electronic performance monitoring. Group Dynamics: Theory, Research, and Practice 5, 3 (2001), 163-180.

[Barley, 1988] Barley, S. The social construction of a machine: Ritual, superstition, magical thinking and other pragmatic responses to running a CT scanner. In M.M. Lock and D.R. Gordon, Biomedicine Examined. Reidel, Hingham, MA, USA, 1988, 497-540.

[Baron, 1986] Baron, R. Distraction-conflict theory: Progress and problems. In L. Berkowitz, Advances in Experimental Social Psychology. Academic Press, Orlando, FL, 1986, 1-40.

[Bartneck, 2003] Bartneck, C. Interacting with an embodied emotional character. Proceedings of the 2003 International Conference on Designing Pleasurable Products and Interfaces - DPPI '03, ACM Press (2003), 55-60.

[Beilock et al., 2004] Beilock, S., Kulp, C., Holt, L., and Carr, T. More on the fragility of performance: Choking under pressure in mathematical problem solving. Journal of Experimental Psychology: General 133, 4 (2004), 584-600.

[Bickmore and Picard, 2005] Bickmore, T. and Picard, R. Establishing and maintaining long-term human-computer relationships. ACM Transactions on Computer-Human Interaction (TOCHI) 12, 2 (2005), 293-327.

[Breazeal and Scassellati, 2000] Breazeal, C. and Scassellati, B. Infant-like social interactions between a robot and a human caregiver. Adaptive Behavior 8, 1 (2000), 49-74.

[Burgard et al., 1999] Burgard, W., Cremers, A.B., Fox, D., et al. Experiences with an interactive museum tour-guide robot. Artificial Intelligence 114, 1-2 (1999), 3-55.

[Catrambone et al., 2004] Catrambone, R., Stasko, J., and Xiao, J. ECA user interface paradigm: Experimental findings within a framework for research. In Z. Ruttkay and C. Pelachaud, From brows to trust: Evaluating Embodied Conversational Agents. Kluwer Academic Publishers, Dordrecht, the Netherlands, 2004, 239-267.

[Dehn and van Mulken, 2000] Dehn, D. and van Mulken, S. The impact of animated interface agents: A review of empirical research. International Journal of Human-Computer Studies 52, (2000), 1-22.

[Dennett, 1987] Dennett, D.C. The Intentional Stance. MIT Press, Cambridge, MA, 1987.

[Fogg and Nass, 1997] Fogg, B. and Nass, C. How users reciprocate to computers: An experiment that demonstrates behavior change. Conference on Human Factors in Computing Systems, ACM (1997), 331-332.

[Friedman et al., 2003] Friedman, B., Kahn Jr, P., and Hagman, J. Hardware companions?: What online AIBO discussion forums reveal about the human-robotic relationship. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM (2003), 273-280.

[Hall and Henningsen, 2008] Hall, B. and Henningsen, D. Social facilitation and human–computer interaction. Computers in Human Behavior 24, 6 (2008), 2965-2971.

[Huang et al., 2002] Huang, W., Olson, J.S., and Olson, G.M. Camera angle affects dominance in video-mediated communication. CHI '02 extended abstracts on Human factors in computing systems - CHI '02, ACM Press (2002), 716-717.

[Judge and Cable, 2004] Judge, T.A. and Cable, D.M. The effect of physical height on workplace success and income: Preliminary test of a theoretical model. The Journal of Applied Psychology 89, 3 (2004), 428-441.

[Kawamichi et al., 2005] Kawamichi, H., Kikuchi, Y., and Ueno, S. Magnetoencephalographic measurement during two types of mental rotations of three-dimensional objects. IEEE Transactions on Magnetics 41, 10 (2005), 4200-4202.

[Khan, 1998] Khan, Z. Attitudes towards intelligent service robots. Royal Institute of Technology, IPLab, NADA, Report TRITA-NA-E98421 - IPLab-154, August 1998, 1-30. Also available as http://scholar.google.fi/scholar?hl=en&q=Attitudes+towards+intelligent+service+robots&btnG=Search&as_sdt=2000&as_ylo=&as_vis=0#0.

[Kiesler and Hinds, 2004] Kiesler, S. and Hinds, P. Introduction to this special issue on human-robot interaction. Human-Computer Interaction 19, 1 (2004), 1-8.

[Kiesler et al., 2008] Kiesler, S., Powers, A., Fussell, S.R., and Torrey, C. Anthropomorphic interactions with a robot and robot-like agent. Social Cognition 26, 2 (2008), 169-181.

[King and Ohya, 1996] King, W. and Ohya, J. The representation of agents: Anthropomorphism, agency, and intelligence. Conference Companion on Human Factors in Computing Systems: Common Ground, ACM (1996), 289-290.

[Klein et al., 2002] Klein, J., Moon, Y., and Picard, R. This computer responds to user frustration: Theory, design, and results. Interacting with Computers 14, 2 (2002), 119-140.

[Koda and Maes, 1996] Koda, T. and Maes, P. Agents with faces: The effect of personification. Proceedings 5th IEEE International Workshop on Robot and Human Communication. RO-MAN'96 TSUKUBA, IEEE (1996), 189-194.

[Laurel, 1997] Laurel, B. Interface agents: Metaphors with character. In B. Friedman, Human Values and the Design of Computer Technology. Center for the Study of Language and Information, Stanford, CA, USA, 1997, 207-219.

[Lee et al., 2000] Lee, E., Nass, C., and Brave, S. Can computer-generated speech have gender?: An experimental test of gender stereotype. CHI '00 Extended Abstracts on Human Factors in Computing Systems, ACM (2000), 289-290.

[Lee and Nass, 2003] Lee, K. and Nass, C. Designing social presence of social actors in human computer interaction. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM (2003), 289-296.

[Lester et al., 1997] Lester, J., Converse, S., Kahler, S., Barlow, S., Stone, B., and Bhogal, R. The persona effect: Affective impact of animated pedagogical agents. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM (1997), 359-366.

[Nass and Lee, 2000] Nass, C. and Lee, K. Does computer-generated speech manifest personality? An experimental test of similarity-attraction. Conference on Human Factors in Computing Systems, ACM (2000), 329-336.

[Nass et al., 2000] Nass, C., Isbister, K., and Lee, E. Truth is beauty: Researching embodied conversational agents. In J. Cassell, J. Sullivan, S. Prevost and E. Churchill, Embodied Conversational Agents. MIT Press, Cambridge, MA, 2000, 374-402.

[Nass et al., 1997] Nass, C., Moon, Y., Morkes, J., and Kim, E. Computers are social actors: A review of current research. In B. Friedman, Human Values and the Design of Computer Technology. Cambridge University Press, Stanford, CA, USA, 1997, 137-162.

[Norman, 1994] Norman, D. How might people interact with agents. Communications of the ACM 37, 7 (1994), 68-71.

[Nowak and Biocca, 2003] Nowak, K.L. and Biocca, F. The Effect of the agency and anthropomorphism on users' sense of telepresence, copresence, and social presence in virtual environments. Presence 12, 5 (2003), 481-494.

[Park and Catrambone, 2007] Park, S. and Catrambone, R. Social facilitation effects of virtual humans. Human Factors 49, (2007), 1054–1060.

[Powers and Kiesler, 2006] Powers, A. and Kiesler, S. The advisor robot: Tracing people's mental model from a robot's physical attributes. Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction - HRI '06, ACM Press (2006), 218-225.

[Powers et al., 2007] Powers, A., Kiesler, S., Fussell, S., and Torrey, C. Comparing a computer agent with a humanoid robot. Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction - HRI '07, ACM Press (2007), 145-152.

[Rajaniemi, 2007] Rajaniemi, J. Java framework for WiFi-based Nabaztag device. University of Tampere, Dept. of Computer Science, Series of Publications D-2007-11, September 2007, 44-63. Also available as http://www.cs.uta.fi/reports/dsarja/D-2007-11.pdf.

[Rickel and Johnson, 2000] Rickel, J. and Johnson, W. Task-oriented collaboration with embodied agents in virtual worlds. In J. Cassell, J. Sullivan, S. Prevost and E. Churchill, Embodied Conversational Agents. MIT Press, Cambridge, MA, 2000, 95-122.

[Robins et al., 2005] Robins, B., Dautenhahn, K., Boekhorst, R., and Billard, A. Robotic assistants in therapy and education of children with autism: Can a small humanoid robot help encourage social interaction skills? Universal Access in the Information Society 4, 2 (2005), 105-120.

[Schmitt et al., 1986] Schmitt, B., Gilovich, T., Goore, N., and Joseph, L. Mere presence and social facilitation: One more time. Journal of Experimental Social Psychology 22, 3 (1986), 242-248.

[Scholl and Tremoulet, 2000] Scholl, B. and Tremoulet, P. Perceptual causality and animacy. Trends in Cognitive Sciences 4, 8 (2000), 299-309.

[Searle, 1980] Searle, J.R. Minds, brains, and programs. The Behavioral and Brain Sciences 3, 3 (1980), 417-457.

[Shechtman and Horowitz, 2003] Shechtman, N. and Horowitz, L. Media inequality in conversation: how people behave differently when interacting with computers and people. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM (2003), 281-288.

[Shiomi et al., 2006] Shiomi, M., Kanda, T., Ishiguro, H., and Hagita, N. Interactive humanoid robots for a science museum. Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction - HRI '06, ACM Press (2006), 305-312.

[Shneiderman and Maes, 1997] Shneiderman, B. and Maes, P. Direct manipulation vs. interface agents. Interactions 4, 6 (1997), 42-61.

[Sproull et al., 1997] Sproull, L., Subramani, M., Kiesler, S., Walker, J., and Waters, K. When the interface is a face. In B. Friedman, Human Values and the Design of Computer Technology. Cambridge University Press, Stanford, CA, 1997, 162-190.

[Takeuchi and Naito, 1995] Takeuchi, A. and Naito, T. Situated facial displays: Towards social interaction. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM Press/Addison-Wesley Publishing Co. (1995), 450-455.

[Turkle, 1984] Turkle, S. The second self: Computers and the human spirit. Simon & Schuster, New York, NY, 1984.

[van Mulken et al., 1998] van Mulken, S., André, E., and Muller, J. The persona effect: How substantial is it? People and Computers: Proceedings of HCI'98, Springer (1998), 53-66.

[Walker et al., 1994] Walker, J., Sproull, L., and Subramani, R. Using a human face in an interface. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Celebrating Interdependence, ACM Press (1994), 85-91.

[Weiss et al., 2009] Weiss, A., Bernhaupt, R., Lankes, M., and Tscheligi, M. The USUS evaluation framework for human-robot interaction. AISB2009: Proceedings of the Symposium on New Frontiers in Human-Robot Interaction, (2009).

[Wilson, 1997] Wilson, M. Metaphor to personality: the role of animation in intelligent interface agents. Proceedings of the IJCAI-97 Workshop on Animated Interface Agents: Making them Intelligent, Citeseer (1997), 1-10.

[Winograd and Flores, 1987] Winograd, T. and Flores, F. Understanding computers and cognition: A new foundation for design. Addison-Wesley, Reading, MA, USA, 1987.

[Wright et al., 1999] Wright, P., Milroy, R., and Lickorish, A. Static and animated graphics in learning from interactive texts. European Journal of Psychology of Education 14, 2 (1999), 203-224.

[Yamato et al., 2001] Yamato, J., Shinozawa, K., Naya, F., and Kogure, K. Evaluation of communication with robot and agent: Are robots better social actors than agents? Proc. of the 8th IFIP TC 13 (2001), 690-691.

[Yerkes and Dodson, 1908] Yerkes, R. and Dodson, J. The relation of strength of stimulus to rapidity of habit-formation. Journal of Comparative Neurology and Psychology 18, 5 (1908), 459-482.

[Zajonc, 1965] Zajonc, R. Social Facilitation. Science 149, (1965), 269-274.

[Zlotowski, 2010] Zlotowski, J. Social human-robot interaction: Review of existing literature. University of Tampere, Dept. of Computer Sciences, Series of Publications D-2010-1, January 2010, 266-284. Also available as http://www.cs.uta.fi/reports/dsarja/D-2010-1.pdf.

[Zuboff, 1988] Zuboff, S. In the age of the smart machine: The future of work and power. Basic Books, New York, NY, 1988.

Appendix A

Possible Nabaztag feedback for subjects’ task performance

Negative Nabaztag statements:

• This is a wrong answer. Next time it will go better.

• Unfortunately this is an incorrect answer. Try next equation.

• Incorrect answer.

Positive Nabaztag statements:

• Well done! Keep on good work.

• Correct answer.

• Good! Please continue the task!
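The text does not specify how a particular statement was chosen from these lists during the experiment. The following minimal sketch in Python shows one plausible selection scheme (random choice), purely for illustration; the function name and the random policy are assumptions, not the experiment's actual implementation.

import random

# Pre-scripted Nabaztag statements, copied from the lists above.
NEGATIVE_STATEMENTS = [
    "This is a wrong answer. Next time it will go better.",
    "Unfortunately this is an incorrect answer. Try next equation.",
    "Incorrect answer.",
]
POSITIVE_STATEMENTS = [
    "Well done! Keep on good work.",
    "Correct answer.",
    "Good! Please continue the task!",
]

def pick_feedback(answer_correct):
    # Pick one of the pre-scripted statements for Nabaztag to speak,
    # depending on whether the participant's answer was correct.
    pool = POSITIVE_STATEMENTS if answer_correct else NEGATIVE_STATEMENTS
    return random.choice(pool)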

Appendix B

Post-test questionnaire statements (on a 5-point Likert scale)

1. Nabaztag encouraged me to focus on the task.

2. It was entertaining to do the task with Nabaztag.

3. I felt comfortable with Nabaztag performing the task with me.

4. Nabaztag's feedback made me more confident doing the task.

5. The task was easy.

6. I liked Nabaztag.

7. I felt Nabaztag's presence.

8. Nabaztag's feedback was irritating.

9. The task was boring.

10. It was more fun to do the task with Nabaztag.

Appendix C

Instructions given to subjects before the experiment

You are asked to perform a series of mathematical tasks on a computer. Please do them as fast and as accurately as you can. During the process, Nabaztag (a robot rabbit), placed above the computer screen, will regularly give you some feedback on your performance.

You will be shown a series of modular arithmetic statements. Your goal is to judge whether the problem statements are true or false. You will do this by pressing the “True” or “False” button displayed under each statement. The task will last for 10 minutes, after which you will be asked to fill in the questionnaire. However, you can stop the mathematical task at any time by pressing the “End the task” button, which will lead you straight to the questionnaire.

Example of a modular arithmetic statement: 50 ≡ 38 (mod 4)

To solve the problem you need to subtract the middle number (i.e. 38, the number between 50 and 4) from the first number (i.e. 50 − 38) and then divide the result (i.e. 12) by the last number (i.e. 12/4). If the quotient is a whole number (as here, 3), then the statement is true. On the contrary, if the quotient is a decimal number, then the statement is false.
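As an illustration, a minimal sketch of this check in Python follows. It is not part of the original experiment software; the function name is invented.

# Verify a modular arithmetic statement a ≡ b (mod n) by the procedure
# described above: subtract the middle number from the first and test
# whether the difference divides evenly by the last number.
def statement_is_true(a, b, n):
    return (a - b) % n == 0

print(statement_is_true(50, 38, 4))  # True:  (50 - 38) / 4 = 3
print(statement_is_true(50, 37, 4))  # False: (50 - 37) / 4 = 3.25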