
Analyzing case studies is difficult (Rowley, 2002) because there are no clear guidelines on how to do it. The analysis was inductive and driven by the data, although the first part of Study 2 was driven by theory. In Study 1, I analyzed the data inductively using a method I developed for this particular purpose, which I call “pathway analysis.” In Studies 2 and 3, I followed Braun and Clarke’s (2006) approach to thematic analysis.

I transcribed the participants’ interviews and the group discussions during peer assessments and compiled each student’s data into a set that contained the discussions, the interviews they had participated in, the artefacts they had produced and assessed, and the feedback they had received and provided. Thus, each data set contained broad information about the student’s journey through the intervention.

In Studies 2 and 3, I continued the analysis by exploring the data and attaching descriptive labels to essential data chunks (Miles & Huberman, 1994).

The data came in several forms, and in many cases, labelling required interpreting extracts alongside other documents. I created the categories through an iterative process that required multiple rounds of labelling: I examined and adjusted the labels and then coded the data again with the revised labels. When the labels appeared to form categories, I went through the data extracts of each category to examine their coherence and wrote descriptions of the categories.

The process continued until recoding no longer produced essential changes in the categories or their descriptions (Braun & Clarke, 2006). To avoid biased interpretations, I discussed the findings with other researchers (Yin, 2009). As a final step, I named the categories. I used Atlas.ti for coding and data management. The program was useful for transcribing the audio files, creating the descriptive list of codes, and modifying the list during coding. Additionally, it allowed me to retrieve all data extracts for each label for examination.
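The bookkeeping behind this kind of coding, attaching descriptive labels to extracts and then retrieving every extract for a given label, can be illustrated with a minimal Python sketch. The extracts and labels below are hypothetical placeholders, not data or codes from the studies, and the sketch is not the Atlas.ti workflow itself.

```python
from collections import defaultdict

# Each coded segment pairs a data extract with one or more descriptive labels.
# The extracts and labels are invented for illustration only.
coded_segments = [
    ("I changed the ending after reading the comments", ["making revisions"]),
    ("Your argument is missing a source", ["providing critique"]),
    ("I didn't really look at the feedback", ["ignoring feedback"]),
]

# Group extracts by label so all extracts for a label can be reviewed together,
# mirroring the retrieval step used to examine the coherence of each category.
extracts_by_label = defaultdict(list)
for extract, labels in coded_segments:
    for label in labels:
        extracts_by_label[label].append(extract)

for label, extracts in extracts_by_label.items():
    print(f"{label}: {len(extracts)} extract(s)")
```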

Each study’s analysis had its own characteristics. As I was broadly interested in peer assessment, the analytical methods were not fixed. All analyses required a certain level of creativity and a tolerance for trial and error. The characteristics of each study’s analysis are described below.


Study 1. The analysis of students’ pathways through peer assessment demanded eliciting information from different sequences of the peer assessment process: working with the original task, providing peer feedback, receiving feedback, revising work, and experiencing other benefits. The information from the different sequences came from different data forms, and therefore, the analysis required different levels of interpretation. After creating the sequences and their values through inductive coding, I constructed a pathway for each student through all the sequences (Figure 6).

FIGURE 6: An example of a student’s pathway through peer assessment. This student lacked effort in the original work, did not provide constructive feedback, received only constructive critique of his work, did not improve his work, and experienced benefits other than improving his work.
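For illustration only, the pathway in Figure 6 could be recorded as a simple mapping from each sequence to its coded value. The value labels below are taken from the figure caption, but the data structure itself is a hypothetical sketch, not the coding scheme actually used in the study.

```python
# A hypothetical sketch of one student's pathway: each sequence of the peer
# assessment process is mapped to the value coded for this student (Figure 6).
pathway = {
    "original task": "lacked effort",
    "providing feedback": "not constructive",
    "receiving feedback": "constructive critique only",
    "revising work": "no improvement",
    "other benefits": "experienced other benefits",
}

# Grouping pathways by experienced benefits, as done after coding, could then
# amount to collecting students whose pathways share the same benefit values.
def benefit_profile(p: dict) -> tuple:
    return (p["revising work"], p["other benefits"])

print(benefit_profile(pathway))
```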
The reliability of the coding was tested with peer coding. Four of the five sequences of peer assessment were peer coded, and the agreement between me and two other researchers was approximately 80%, which appeared good for qualitative coding that required a high level of interpretation. Differences in coding were resolved through negotiation. One sequence was based on criteria-based observations of students’ efforts in a specific task and therefore could not be peer coded. However, such observations are relevant, as teachers’ ratings of students’ efforts correlate positively with students’ own reports (Zhu & Urhahne, 2014).

After coding, I grouped the students’ pathways according to the benefits of peer assessment that the students experienced (both improvement of work and other benefits). I compared individual students’ experiences across the groups to examine and describe the factors that influenced the benefits of peer assessment.
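As a minimal illustration of the approximately 80% agreement reported above, percentage agreement between two coders can be computed as the share of identically coded items. The codes below are hypothetical placeholders, not the actual study codes, and the sketch ignores chance-corrected measures such as Cohen’s kappa.

```python
# Hypothetical codes assigned by two coders to the same ten data extracts;
# percentage agreement is the share of extracts coded identically.
coder_a = ["A", "B", "B", "C", "A", "A", "C", "B", "A", "C"]
coder_b = ["A", "B", "C", "C", "A", "A", "C", "B", "B", "C"]

matches = sum(a == b for a, b in zip(coder_a, coder_b))
agreement = matches / len(coder_a)
print(f"Percentage agreement: {agreement:.0%}")  # 80% for this invented example
```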

Study 2. The analysis comprised three parts. The aim of the first, theory-driven part was to identify the features of feedback literacy that appeared in the data and to adjust the features of Carless and Boud’s (2018) framework to the context of formative peer assessment; in this part, I discerned three categories of feedback literacy skills that students showed during peer assessment. The aim of the second part was to examine students’ skills more closely.

During the first part of the analysis, I noticed that students had varying feedback literacy skills, and I defined and described these skills via an iterative process. The analysis was driven by the data, but its scope was limited to the previously identified categories. Within each category, I formed case groups with similar skill levels, described them, and, with sensitivity to the theory, organized the groups from the most basic to the most advanced, creating a criteria-based rubric for feedback literacy skills. A simplified example of the criteria is shown in Table 4.

In the third part of the analysis, I used the criteria to evaluate students’ feedback literacy midway through the seventh grade and midway through the eighth grade, and I thus traced the development of students’ feedback literacy over one year. I then engaged in peer debriefing (Onwuegbuzie & Leech, 2007) with two other researchers to test the levels and categories. I prepared the full data sets of five students for discussion; the two other researchers explained their views on my coding, and we carefully discussed any discrepancies.

TABLE 4 A simplified criteria-based rubric for one category of feedback literacy. If a student, for example, did not make any changes to their work after peer assessment in the seventh grade but made light changes in the eighth grade, they moved from Level 1 to Level 2 in this category of feedback literacy.

Engagement in making revisions
  Level 1: No interest in feedback
  Level 2: Reading feedback
  Level 3: Active interpretation of feedback
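To make the use of the rubric concrete, the example in the caption of Table 4 could be expressed as the following illustrative sketch. The level assignments are hypothetical and simplified, and the sketch is not the evaluation procedure actually used in the study.

```python
# Simplified levels for the "Engagement in making revisions" category (Table 4).
LEVELS = {
    1: "No interest in feedback",
    2: "Reading feedback",
    3: "Active interpretation of feedback",
}

# Hypothetical levels assigned to one student midway through grades 7 and 8.
grade7_level = 1  # made no changes to their work after peer assessment
grade8_level = 2  # made light changes after peer assessment

change = grade8_level - grade7_level
if change > 0:
    print(f"Developed from Level {grade7_level} to Level {grade8_level} in this category.")
elif change == 0:
    print(f"Remained at Level {grade7_level} in this category.")
else:
    print(f"Dropped from Level {grade7_level} to Level {grade8_level} in this category.")
```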

Study 3. This analysis was the most straightforward and followed the procedure described at the beginning of this subsection. I examined the data, looking for and labelling extracts that related to forms of students’ agency during peer assessment. I used Gresalfi et al.’s (2009) definition of agency to identify the extracts: “An individual’s agency refers to the way in which he or she acts, or refrains from acting, and the way in which her or his action contributes to the joint action of the group in which he or she is participating” (p. 53). Again, peer debriefing with the two other researchers was used during the analysis to test and discuss the categories. The relationships between the categories were elaborated with a thematic map (see Braun & Clarke, 2006), and the identified forms of agency were related to the positions of assessor, assessee, and group member. In the last phase, we therefore examined and compared the forms of agency in each position.

5.1 Study 1: Pathways through peer assessment: Implementing