As Flick (2007) notes, no universal set of criteria exists for assessing the quality of a given study, given the vast array of existing approaches, methodological procedures, and contexts. Nevertheless, certain intertwined criteria of what could be called quality science, such as reliability, objectivity, and validity, are commonly observed across research traditions, as they are deemed crucial for determining the rigor of a study. Whilst there is debate on the applicability and relevance of such criteria to qualitative studies, there is also a general consensus that they should not be discarded but rather reformulated (Tracy, 2013; Flick, 2007). Following such recommendations, this section now addresses these three criteria to frame the discussion of the present study’s quality.

Reliability, here referring to the researcher’s stability and consistency (Tracy, 2013), was primarily attended to in two ways, namely multiple coding and transparency. Although similar in procedure to its quantitative equivalent, inter-rater reliability, multiple coding does not aim at the complete replication of results, but rather at the cross-checking and refinement of coding strategies. In that sense, the technique is useful in strengthening the rigor of data analysis in qualitative studies. As has been discussed, a second coder coded 20% of the data set at the review stage of analysis. Whilst no statistical tests such as Cohen’s kappa were run, percentage agreement calculations were made and, on the basis of those, discussions on how to improve the coding took place, with a focus on the content of disagreements, which, according to Barbour (2001), is just as valuable as the degree of agreement itself.
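For illustration, percentage agreement of this kind is simply the proportion of coded units to which both coders assigned the same code. The short Python sketch below shows this arithmetic; the code labels and utterances in it are invented for illustration and are not drawn from the actual data set or coding scheme.

    # Minimal sketch of percentage agreement between two coders.
    # All code labels below are hypothetical, for illustration only.
    def percentage_agreement(coder_a, coder_b):
        if len(coder_a) != len(coder_b):
            raise ValueError("Both coders must code the same set of units.")
        matches = sum(a == b for a, b in zip(coder_a, coder_b))
        return 100 * matches / len(coder_a)

    # Codes assigned to the same five utterances by each coder.
    coder_a = ["mediation", "scaffolding", "mediation", "regulation", "mediation"]
    coder_b = ["mediation", "scaffolding", "regulation", "regulation", "mediation"]

    print(f"{percentage_agreement(coder_a, coder_b):.1f}% agreement")  # 80.0% agreement

In such a comparison, the disagreements themselves (here, the third utterance) would then form the basis for the kind of discussion on refining the coding described above.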

Although valuable, multiple coding is by no means a complete measure to increase reliability in qualitative research. Thus, a concern with transparency was also deemed essential. As Elman and Kapiszewski (2014, p. 43) affirm, “scholars cannot just assert their conclusions, but must also share their evidentiary basis and explain how they were reached.” As such, it becomes essential to clearly state the philosophical and epistemological assumptions underlying the research process, as well as to outline the data collection and analysis processes in detail (Braun and Clarke, 2006), so as to achieve analytic and production transparency (Moravcsik, 2014). Both steps have been taken in this study. Furthermore, to achieve data transparency, data extracts were included verbatim in the findings section to support claims and to illustrate the content present across the data set – which is particularly critical given the impossibility of making the data set available to the larger audience due to legal and ethical constraints. It should be noted that, in an attempt to increase transparency and, in turn, reliability, full respondent contributions (i.e. data items) were included in the findings chapter in place of single utterances, despite the utterance being the unit of analysis – with only a few exceptions made for those cases in which including the entire data item would break the

As regards objectivity, this criterion was not a primary focus of the study, given that qualitative studies require active interpretation, which is, by nature, a subjective undertaking. Although some aspects of the study were more objective, especially as concerns the employment of SCT-DA techniques or the level at which data was analyzed, data had to be interpreted at all levels of analysis, even if to a minimal extent. As Braun and Clarke (2006) put it, even in TA studies which identify themes at a semantic level, there is some degree of interpretation, especially as one moves from codes to themes. It is important to acknowledge that, in making choices on how to interpret data, biases based on one’s beliefs, values, and previous experience potentially operate at a subjective level.

Having acknowledged the inherent subjectivity of the research process in qualitative studies, it should be stated that both measures taken to increase reliability also serve to improve the objectivity of the present study. Multiple coding serves this purpose in that it aims at responding to “the charge of subjectivity sometimes levelled at the process of qualitative data analysis” (Barbour, 2001, p. 1116), whilst research transparency helps readers and assessors identify possible subjective biases, or misinterpretations of data that may result from them.

Finally, it is vital to discuss validity, understood in this study as the accuracy of the findings. Despite rejecting a positivist paradigm, and therefore assuming no commitment to the replication or generalization of findings (Elman and Kapiszewski, 2014), a specific measure was taken to improve the accuracy of the findings. Given the prominent role language played in the analysis, both because it operated at the semantic level and because techniques from SCT-DA were used, special attention was given to linguistic detail (Gee, 2005), i.e. the manner in which words and linguistic structures were used consistently by participants. For example, naming first-cycle codes after words or expressions employed by the participants reflected a concern with establishing a level of validity. Such a concern can also be seen in the observation of cohesive devices as a stable criterion for identifying and coding utterances.

All in all, despite rejecting a positivist epistemology and understanding that the three criteria discussed come from a quantitative tradition, measures have been taken to increase reliability, objectivity, and validity. In addition to the three criteria above, other measures suggested by Lincoln and Guba (1985), which are especially relevant for qualitative research, were taken to improve quality. For example, prolonged engagement with the data took place. Not only was data analyzed at different levels, multiple times, but the entire analysis process lasted about five months, during which the researcher would analyze data, take breaks, and then return to it with a fresh perspective to double-check and improve the analysis. Furthermore, both ‘member checks’ and ‘peer debriefing’ took place. Throughout the research process, meetings with both the thesis supervisor and other MA students were held to discuss the present study. As a result of these, blind spots were identified and working results and hypotheses were assessed, allowing the researcher to take a more reflexive stance and reconsider, for instance, how consistent the process had been or how subjective biases had operated in decision-making.