
Data Management, Analyses and Interpretation

The analysis of the interview data began in the field. After each interview, the recording was automatically saved to the recorder, which also functioned as a USB storage device. The recordings were copied to the computer and backed up to another external storage device. Each of the day's recordings was played back to ascertain the quality and content of the interviews. Listening to the recorded interviews helped the researcher, in some instances, to revise some of the questions as well as to include other interesting angles emerging from the participants' views.

All the interviews were manually transcribed verbatim using the F4 Transcription software the researcher had purchased for the purpose. The software was particularly useful since it allowed pausing, rewinding, fast-forwarding and reducing or increasing the playback speed. These controls are all available by pressing either the F4 or F5 key on the computer's keyboard and therefore did not require taking the hands off the keyboard. It also offered the option of saving the transcript with timestamps, which was extremely useful since it made going back to replay specific portions of the interviews easier. The transcripts were saved in RTF format, which is compatible with analysis software such as Atlas.ti and NVivo. These features made the F4 Transcription software preferable to, for example, the conventional Windows Media Player.

With the exception of fillers such as 'eh' and 'umm', which were excluded, other crutch words were included in the transcription. In the few instances where the participant's voice was inaudible, a question mark was inserted in brackets, and where the researcher heard a certain word but was not completely sure about it, the word was put in brackets and a question mark was added [?]. Those instances were very minor and did not in any way impact the overall quality of the recordings. Sentences that were started but could not be completed before another was begun were indicated with a stroke symbol (/). The symbol (inc.–low voice) was used in situations where the statement was incomprehensible and/or the voice was too low. The symbols used followed the conventions suggested by the F4 Transcription manual. A total of 180 pages of text were generated from the transcription. The quantitative data were processed using Microsoft Excel, and percentages, frequency tables and charts were generated to support the qualitative data from the transcripts as and when necessary.
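To illustrate the kind of tabulation involved, the sketch below computes a frequency table with percentages for a single categorical variable. It is a minimal sketch in Python rather than Excel, and the records and categories shown are hypothetical rather than the study's actual data.

```python
from collections import Counter

# Hypothetical records; the tabulation in the study was done in Microsoft Excel.
records = [
    {"gender": "female", "discipline": "Engineering"},
    {"gender": "male", "discipline": "Business"},
    {"gender": "female", "discipline": "Business"},
    {"gender": "male", "discipline": "Engineering"},
    {"gender": "male", "discipline": "Business"},
]

# Frequency table for one variable, with percentages of the total.
gender_counts = Counter(r["gender"] for r in records)
total = sum(gender_counts.values())

print(f"{'Category':<12}{'Frequency':>10}{'Percent':>10}")
for category, count in gender_counts.most_common():
    print(f"{category:<12}{count:>10}{count / total:>9.1%}")
```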

Following the completion of the transcription, the transcripts were reviewed, after which followed the reduction, sense-making and thematic analysis of the data. Dawson (2002) believes that "qualitative data analysis is a very personal process", adding that the analysis "can be viewed as forming a continuum from highly qualitative methods to almost quantitative methods, which involve an element of counting" (p. 128). Others hold that the process should be "governed both by fitness for purpose and legitimacy…" (Cohen et al., 2005: 82).

The transcripts from the interviews were manually coded, categorised and labelled using the 'cut and paste' function of a word processor. Thus, the views of all the participants on a particular theme, for example affirmative action, were grouped together, and the patterns, variations etc. were identified, aggregated and reported. Patton (2002) argues that although software programmes afford various tools and formats for coding, "the principles of the analytical process are the same whether doing it manually or with the assistance of a computer program" (p. 120). In fulfilment of the researcher's pledge of anonymity to the participants, all the quotations were anonymised as follows:

• SP – Student participant

• GP – Graduate participant

• HOP – HEI official participant

• GOP – Government official participant

The students and graduates had their gender and discipline added to their comments to reflect the importance of the gender variable in particular, and to ensure a fair representation and diversity of opinions – a multi-perspective reportage. The students' levels in their various programmes were in most cases omitted to strengthen anonymity.
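Purely as an illustration (the labelling in the study was applied manually), the following minimal sketch shows how such anonymised quotation labels might be constructed; the function name and role keys are hypothetical.

```python
# Hypothetical sketch of the anonymisation convention described above;
# the actual labels in the study were applied by hand, not generated by code.
ROLE_CODES = {
    "student": "SP",
    "graduate": "GP",
    "hei_official": "HOP",
    "government_official": "GOP",
}

def quotation_label(role, gender=None, discipline=None):
    """Build a label such as 'SP, female, Engineering'.

    Gender and discipline are appended only for students and graduates,
    mirroring the convention described in the text.
    """
    label = ROLE_CODES[role]
    if role in ("student", "graduate"):
        extras = [part for part in (gender, discipline) if part]
        if extras:
            label += ", " + ", ".join(extras)
    return label

print(quotation_label("student", "female", "Engineering"))  # SP, female, Engineering
print(quotation_label("hei_official"))                      # HOP
```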

The researcher experimented with Atlas.ti for the coding but abandoned it at some point due to the tendency of the software to fragment the data. Such an approach would have been out of step with the researcher's goal of giving the participants, especially the disadvantaged, a voice, which necessitated long verbatim quotations and the story-telling (narrative) mode of reporting the results in some instances, and by so doing, identifying the 'trees' taking shade in the 'forest'.

Apart from giving the participants a voice, the verbatim quotes also served the purposes of evidence and illustration (see Bryman, 2012). "What people actually say and the descriptions of events observed remain the essence of qualitative inquiry...Indeed, the skilled analyst is able to get out of the way of the data to let the data tell their own story" (Patton, 2002: 457, emphasis in original). The coding was both concept- and data-driven (Gibbs, 2007), or what Patton (2002: 454, 456) calls 'sensitizing concepts' (categories that oriented the fieldwork, usually from the literature review) and 'indigenous concepts' (emerging from the data and often from the participants) – it was a mix of both induction and deduction, using both the themes arising out of the literature review and those emerging from the data. Qualitative analysis is deemed to begin from an inductive phase, where themes, patterns and categories are identified from the data, and to progress to a deductive stage, where the authenticity and appropriateness of the patterns, themes and categories identified in the inductive phase, including deviant cases, are established and confirmed (ibid: 453, 454). The qualitative data, the quantitative data processed from the records archived by the relevant institutions, as well as the official documents were all integrated in the reporting of the results. The results were interpreted using the literature reviewed, the conceptual frameworks, as well as the researcher's common-sense knowledge and experience of the context. As far as practicable, multiple perspectives (students, graduates, HEI and government officials) were reported and negative cases were equally stated on each of the themes for a deeper understanding and a better appreciation of the phenomenon under consideration. This was necessary to avoid researcher bias and 'cherry-picking' only confirming evidence.

In the presentation of results from the qualitative data, the frequency of participant views was sometimes counted, a practice referred to as 'quasi-quantification' (Bryman, 2012; Kuckartz, 2014), and some information was presented in tabular form. Bryman (2012) argues that qualitative researchers sometimes quantify a limited amount of their data. Such quasi-quantification counters the oft-cited critique of the qualitative presentation of data being too anecdotal; he notes that "such simple counting conveys a clear sense of their relative prevalence" and further allows the researcher to inject "greater precision into estimates of frequency", as opposed to the use of terms such as 'many', 'frequently' and 'some' (pp. 624-625).
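As a minimal sketch of the counting behind such quasi-quantification (the tallying in the study itself was not automated), the example below counts how many distinct participants expressed a view on each theme; the coded segments and theme names are hypothetical.

```python
from collections import defaultdict

# Hypothetical coded segments: (participant label, theme) pairs
# drawn from the kind of coded transcripts described above.
coded_segments = [
    ("SP, female, Engineering", "affirmative action"),
    ("SP, male, Business", "affirmative action"),
    ("GP, female, Nursing", "affirmative action"),
    ("SP, male, Business", "funding"),
    ("HOP", "funding"),
]

# Count distinct participants per theme so repeated remarks by the
# same person are not double-counted.
participants_per_theme = defaultdict(set)
for participant, theme in coded_segments:
    participants_per_theme[theme].add(participant)

for theme, participants in sorted(participants_per_theme.items()):
    print(f"{theme}: {len(participants)} participant(s)")
```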