
(VII), the findings were first communicated internally and informally within the project team in order to move to the next phase, and only later prepared for academic dissemination, which resulted in several publications. Thus, the target audience, and the message to be communicated to it, are extremely important.

To sum up, when reporting the results of a user experience evaluation, one should try to answer the following questions regardless of the dissemination forum: What was done in the evaluation? What are the results? What do the results mean? What will be done next, or at least what should be done, based on the results?

4.4 Summary

In summary, the proposed process model for evaluating the user experience of interactive systems comprises three main phases: before the evaluation, during the evaluation, and after the evaluation. The phase before the actual evaluation is vital for the whole process. Four main steps need to be carefully considered and performed to design the evaluation itself properly. These steps and their key action points are as follows:

Study background: defining and understanding the purpose and aims of the study. Familiarize yourself with the purpose, aims, and environment of the project and the evaluation. Utilize the expertise and knowledge available among the project partners.

Make sure the whole project group has a common understanding of the study: require everyone to communicate even small matters that might seem self-evident, and share your own knowledge as well.

Circumstances: acknowledging the possibilities, challenges, and limitations in the evaluation. Regarding the system under evaluation, consider its fundamental purpose, its unique or especially important functionalities, its novel characteristics, such as new interaction techniques, and, in general, how it differs from other systems meant for the same purpose. Consider the aspects existing within or raised by the evaluation context: the physical environment of the evaluation, the social aspects of the context, domain-related matters, and principle-level restrictions imposed by norms or laws affect not only the evaluation design but also the interpretation of the results.

Remember to take into account users’ characteristics, such as age, expertise regarding the subject under evaluation, technical knowledge, and possible disabilities or other special characteristics.

Furthermore, keep in mind that a system and an evaluation may have more than one user group, and design the evaluation accordingly.


Data collection: designing the data collection and producing the material for it. Concentrate on having at least a questionnaire with a set of quantitative user experience items and at least a few open questions to gain some understanding of the possible reasons for the experiences. If possible, broaden the data collection to include user expectations in order to find out participants’ attitudes before the usage and to compare these with the actual experiences. Remember that, to enable this comparison, the items asked about have to be the same both before and after the usage. To further deepen the results and their interpretation, interviews, observation, and log data, for example, can be used. In any case, it is advisable to include basic background information, such as age, gender, and previous experience with similar interaction techniques or systems. The actual content of the data collection is case specific, but do include general user experience items, such as pleasantness and willingness for future use, as well as items corresponding to the special characteristics of the case, such as statements or questions about the system’s interaction techniques. In questionnaire design, pay attention to clarity as well as to the characteristics of the user group(s) and the context.
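As a rough illustration of this step, the sketch below shows one possible way to structure such a questionnaire in code so that the same items can be administered both before the usage (expectations) and after it (experiences). All item keys and wordings are hypothetical examples, not material from the evaluated cases.

```python
# A minimal questionnaire sketch, assuming 7-point Likert items
# (1 = fully disagree, 7 = fully agree). All keys and wordings below
# are hypothetical examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class LikertItem:
    key: str        # identifier used later in the analysis
    statement: str  # wording shown to the participant

# General user experience items plus case-specific ones. The same items
# are used before the usage (expectations) and after it (experiences),
# which keeps the two measurements directly comparable.
UX_ITEMS = [
    LikertItem("pleasantness", "Using the system is pleasant."),
    LikertItem("future_use", "I would like to use the system in the future."),
    LikertItem("interaction", "The system's interaction technique feels natural."),
]

# A few open questions to probe the reasons behind the numerical ratings.
OPEN_QUESTIONS = [
    "What did you like about the system, and why?",
    "What would you change in the system, and why?",
]
```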

Recruiting participants. If participants are recruited beforehand, the recruitment should be started early enough. One’s own recruitment channels may not suffice if, for example, the target user group or the purpose of the system is very specific. Therefore, also utilize stakeholders’ contacts, or contact companies or associations to find suitable recruitment channels. Aim to get optimal participants who truly represent the target user group. If there is no advance recruitment and the evaluation instead relies on spontaneous participation at the evaluation scene, plan ways to attract participants if necessary.

When all the evaluation material is produced, possible participants recruited, the system ready for evaluation, the evaluation scene prepared, and the personnel involved instructed, it is time to conduct the evaluation. The evaluation situation itself can be divided into three stages, which are as follows:

Before the usage. The system under evaluation is introduced to the participant in a pre-defined manner and at a pre-defined level of detail. Remember to keep the information provided as objective as possible and similar among participants. As a general rule, it may be best to provide only the information necessary for the participants to be able to use the system. If gathering user expectations is part of the evaluation design, it is advisable to collect them as early as possible to prevent, for instance, the system introduction from affecting participants’ attitudes. Before the usage, the necessary information about the evaluation procedure or content may be communicated to the participant. Try to avoid giving out information that may affect participants’ expectations and experiences.

During the usage. Instruct the participant about what he or she needs to do, be it using the system freely or performing pre-defined tasks. Plan beforehand how to react to participants and how to answer possible questions: sometimes, additional information is not given after a certain point in the evaluation, both to examine the intuitiveness of the system and to keep the provided information similar among participants. Perhaps the most important actions during the usage are related to gathering supportive and objective data: collect log data, video or audio recordings, or observational data during the usage as designed.
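For the log data mentioned above, one possible form of timestamped event logging is sketched below. The UsageLogger class, the event names, and the CSV format are hypothetical choices for illustration, not part of the evaluated systems.

```python
# A minimal sketch of timestamped usage logging to support the subjective
# data; event names and the CSV format are hypothetical choices.
import csv
import time

class UsageLogger:
    def __init__(self, path: str, participant_id: str):
        self.file = open(path, "a", newline="")
        self.writer = csv.writer(self.file)
        self.participant_id = participant_id

    def log(self, event: str, detail: str = "") -> None:
        # One row per event: who, when, what. Kept deliberately simple so the
        # log can later be aligned with observations and recordings.
        self.writer.writerow([self.participant_id, time.time(), event, detail])
        self.file.flush()

    def close(self) -> None:
        self.file.close()

# Example usage during a session:
logger = UsageLogger("session_log.csv", "P01")
logger.log("task_start", "task 1")
logger.log("task_end", "task 1")
logger.close()
```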

After the usage. Gather the user experience data with questionnaires, interviews, or a combination of these. Remember that this is the most crucial moment and action in a user experience evaluation. If not done before, gather the basic background information at this point.

When all evaluation sessions are finished and data collected, it is time to investigate the outcome. The steps after the evaluation are as follows:

Analysis and conclusions: analyzing the data and interpreting the results. If the data is not already in electronic form, prepare it for analysis by transcribing it into electronic form. Analyze the data with methods suitable for the type and scale of the data, the sample size, a possible normal distribution, and so forth. At least calculate medians from the numerical data, and reflect these against the qualitative data, i.e., the answers to open and interview questions. Furthermore, reflect the results from the subjective data against the objective data, such as observation or log data. Conclude what the results together mean or indicate.
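A minimal analysis sketch is given below, assuming paired 7-point Likert ratings collected before (expectations) and after (experiences) the usage. The ratings shown are fabricated placeholders, and SciPy’s Wilcoxon signed-rank test is one reasonable choice for small, paired, ordinal samples that cannot be assumed to be normally distributed.

```python
# A minimal analysis sketch for paired Likert ratings; the data values are
# fabricated placeholders for illustration only.
import statistics
from scipy.stats import wilcoxon  # nonparametric test for paired samples

expectations = {"pleasantness": [5, 6, 4, 5, 6, 5, 4, 6]}
experiences  = {"pleasantness": [6, 7, 5, 6, 7, 5, 5, 6]}

for key in expectations:
    pre, post = expectations[key], experiences[key]
    # Medians suit ordinal Likert data better than means.
    print(f"{key}: median before = {statistics.median(pre)}, "
          f"after = {statistics.median(post)}")
    # Wilcoxon signed-rank test: does the paired difference differ from zero?
    stat, p = wilcoxon(pre, post)
    print(f"  Wilcoxon statistic = {stat}, p = {p:.3f}")
```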

Dissemination: reporting the results. Report the results clearly and in a manner appropriate for the purpose of the evaluation and, especially, for the target forum and audience. The reporting should explain what the results mean in practice and what, if anything, should be done next.

The process model presented here inevitably extends beyond user experience per se, but practical issues, such as participant recruitment or available resources, have to be considered and resolved to run a successful evaluation, whether the core aim is studying user experience or its specific aspects, interaction patterns, or technical functionality, for instance.

User evaluations are complex wholes in which many things are tightly interlinked, and these interconnections need to be taken into account to design and conduct proper user experience evaluations with valuable results.


It should be highlighted that the process model and the examples are based on the eight user experience evaluation cases presented in this dissertation. Thus, the model may not be exhaustive with regard to all kinds of evaluations. For example, the evaluations have mainly considered short-term user experiences evoked by rather short usage periods of the systems. Exceptions to this are the Dictator case (VI), where the evaluation lasted three months, and Evaluation II of the LightGame case (VII), which lasted three weeks and included three usage sessions by the participants. The proposed model is suitable for these evaluations, but if monitoring longer-term user experience and its variations were the focus of an evaluation, the process model might need to be modified accordingly. Furthermore, given that the research has been conducted in an academic environment, the discussion here inevitably concentrates on some issues that might be irrelevant for industry, and may omit issues that would be relevant for user evaluations outside of academia.
