
1 RESEARCH BACKGROUND

1.2 Research purpose, task, strategy, questions, and stakeholders

My practical training in development cooperation issues began in 1991, when I worked for five years in the VET project, MHCC, in Tanzania. It was my direct experience during those years that first prompted me to carry out this research. The local participants’ need for an evaluation of this development project, chiefly funded by the MFA of Finland and channelled through NGOs, was voiced in Tanzania for the first time by the current Archbishop of the Free Pentecostal Church of Tanzania (later FPCT)100, Mr Batenzi. As the committee chairman of MHCC, he especially emphasised the need for an evaluation of the centre’s VET impacts: specifically, regarding MHCC’s former students within the Tanzanian labour market and, more broadly, the impacts of VET on these students’ lives. In 1997, this research received additional impetus from Mr Karanko, the Director of the Evaluation Unit of the Finnish MFA at the time, who indicated to me his strong support for the suggestion made in Tanzania that an evaluation study of MHCC’s VET impacts was needed. (Figure 1.)

In the late 1990s, I moved back to Finland. I became deeply interested and involved in studying the quality of development interventions and their evaluations while working on a project developing self-evaluation practices within the Finnish NGO Lähetyksen Kehitysapu (henceforth LKA), [the Development Aid of the Mission], today known as Fida International ry (later Fida).101 Following that, I became absorbed in reading and analysing several development evaluation reports, researching multiple stakeholders’ involvement in evaluation processes and their opportunities to utilise the published evaluation results.

Based on mapping out these evaluation reports, I concluded that the majority of evaluations, excluding mid-term reviews, had been conducted by external evaluators after the termination of the development intervention or during its latter part. After reading another evaluation of MHCC, conducted on the initiative of the Finnish MFA, I became more mystified as to why evaluation results were seldom fed back102

100 FPCT 2009b

101 Fida 2014b

102 Feedback evaluation looks only backwards by summing up knowledge, whereas the term ‘feed forward’ expresses a requirement that evaluation activities also be future-oriented, building the future and looking forwards as well (Linnakylä & Atjonen 2008a, 60; 2008b, 80). Thus, I suggest that evaluation should produce not only ‘feedback’ from the past but also knowledge (i.e., feed forward) for the future, so that future activities can be planned and developed (see e.g., Weeden, Winter & Broadfoot 2002).

to the “recipients”, that is, to the local Southern organisations and their stakeholders. This meant lost opportunities for them to further develop their activities and capacities, and those of their organisations, and simply to learn during the evaluation, by means of the evaluation itself. Thus, I formulated the initial idea for the research purpose of my doctoral thesis as follows: to strengthen self-evaluative capacities among NGOs. (Figure 1.)

Figure 1. The research process of this thesis

My own evaluation experiences, combined with the needs of the Tanzanian decision makers and of the Finnish officials, as well as the results of evaluation reports, began to sow seeds in my mind. With power intertwined in evaluation and reflected in its utilisation, power relations affect evaluative processes in several ways, through such issues as what the target of an evaluation is, what methodological choices are used, by whom the evaluation is conducted and used, and with whose values, as well as the question of who has the power to decide over these issues, as Guijt and Roche emphasised.103 I realised that every evaluation initiator, commissioner and evaluator holds considerable power and plays multiple roles in evaluation, for they can influence evaluation use and impacts, especially through its evaluation factor, as Alkin and Taut’s findings confirmed.104 These key evaluation agents inarguably consider the standpoint(s) of those for whom the results of a given activity are produced and whose learning is aimed at through evaluation use.

In my research, the case selected was a development intervention, the VET programme at MHCC, channelled through NGOs in Tanzania, for case study-based evaluations typically focus on the evaluation of certain programmes, projects or interventions,105 as was true in my case. This evaluation experiment consisted of two components. In the evaluation section, the socio-economic impacts of VET at the Tanzanian VET centre were studied, while in the research-on-evaluation portion the evaluation processes and their influences on the participants and on VET services were examined. The micro and local perspective taken in the evaluation experiment was intended to contribute to evaluative learning, to produce action-oriented knowledge and to assist stakeholders in learning about themselves. The issues were then examined from the viewpoints of the individuals and groups directly working with the intervention at the VET centre or affected by it. To this end, the centre’s future implementation could be improved by focussing on its internal structures and issues. In this way, evaluation would become a never-ending, cyclical learning and social change process, provided that the utilisation of evaluation processes as sources of learning did not become neglected.106

This research on evaluation was meant to test for impacts among the local multi-stakeholders of the evaluation. Thus, it supported the statement made by Ofir, who demanded that an “evaluation for development” approach be used more intensively in development evaluation, rather than the prevailing “evaluation of development” approach.107 This research was conducted “for impacts,” not solely “of impacts,” and was based not only on evaluation findings but also on evaluation processes, what is simply called the process use of evaluation.108 This means that the active, local participation, training and learning of the local multi-stakeholders in evaluation processes would strengthen local evaluation impacts, through evaluation utilisation,

103 Guijt & Roche 2014, 51

104 Alkin & Taut 2003

105 Yin 2009a, 19; 2012, xix, 171

106 See Armytage 2011, 273; Rebien 1997, 453.

107 Ofir 2013, 584

108 Baptiste 2010, 58

as well as contribute towards the improvement of development practices.109 In this research, the group of evaluation and research learners, as well as their users, was intended to be widened beyond the typical evaluation commissioners: the policy-makers, the funders or donors. Thus, three learning groups, as named by Suzuki, played a key role. They comprised two organisational groups inside MHCC, the development practitioners (the staff) and the development leadership (the committee); the beneficiaries (the former students of MHCC and their extended family members); and a third group outside the organisation (the employers of MHCC’s former students and VET officers).110

In this evaluation experiment, three analytical tiers were identified. Figure 2, below, displays the various interest groups, stakeholders (typical of the action research strategy as well as of the evaluation research approaches used in this research) and service users of the evaluation experiment, the VET case at MHCC.111 The VET case at MHCC included three levels: the micro level (former students of MHCC, their extended families and communities), the meso level (the VET programme at MHCC and its multi-stakeholders, such as the trainers, the management group and committee members of MHCC), and the macro level. At the third, macro level, the multi-stakeholders represented the national and international partners of MHCC from both the development cooperation and the VET fields, including the Finnish NGO (i.e., Fida) and the Tanzanian one (i.e., FPCT), as well as the Tanzanian VET authority (VETA112) and the Finnish development policy actor and donor agency, the MFA of Finland. The representatives of Tanzanian private VET providers and employers were also placed in this third category. Moreover, topics such as VET utilisation were examined at these three levels, and evaluation impacts were examined by means of the process use (i.e., at the individual, interpersonal and collective levels), in relation to the conscious standpoint taken and the paradigm chosen in development evaluation. When referring to the meta-analytic discussion based on the evaluation literature, the key players on the donor side were found among the policy-level representatives of foreign aid and its evaluation, as the designers, funders and decision-makers of these actions, as shown in Figures 1 and 2.

109 Linnakylä & Välijärvi 2005, 22; Patton 1997, 121; Pickford & Brown 2006; Saunders 2012

110 Carlsson 2000, 121–122; Suzuki 2000, 93

111 Kuusela 2005, 59–64

112 VETA 2014a


Figure 2. Positions of the researcher among multi-stakeholders of this evaluation experiment, the VET case at MHCC

The research process of this evaluation experiment was carried out by emphasising the recipient hegemonic paradigm and the standpoint of the locals.113 Thus, the working hypothesis set for this research runs as follows: the conscious standpoint taken by the evaluator (referring primarily to one of Alkin and Taut’s three factors having an impact on evaluation use, viz. the evaluation component114) could generate stronger evaluation impacts at the local level of the intervention. Therefore, in this research, stress was laid on those evaluation elements and procedures over which we, I together with the local evaluation participants, could exercise power and through which we could influence stronger evaluation impacts and utilisation. These parts of the evaluation factor, following Alkin and Taut, Saunders, as well as Pickford and Brown,115 covered the evaluator’s position, the users’ location, the evaluation goal,

113 Collins 2000; 2013

114 Alkin & Taut 2003, 4

115 Alkin & Taut 2003; Pickford & Brown 2006; Saunders 2012

the evaluation design and methodology, as well as its time-line. What is more, from the viewpoint of utilising both evaluation processes and findings, it was essential that I, as the researcher and evaluator, revealed the evaluation logic, knowledge, skills and evaluation standards, while the non-evaluator stakeholders brought their knowledge of the evaluand and the evaluation context, and the evaluation was then carried out in cooperation.116

Figure 3. The research strategy used for the evaluation experiment, the VET case at MHCC

Many elements from the first idea paper have proven relevant and useful. However, more steps were gradually taken towards the research on evaluation use and impacts by utilising the evaluative action research strategy over the course of my research

116 Cousins, Goh, Clark & Lee 2004, 106–107, 124; Fetterman 2003, 49; Harnar & Preskill 2007; King 2007, 46; Preskill & Boyle 2008a. See Levin-Rozalis, Rosenstein & Cousins 2009, 204.


process (Figure 3). To carry out the interventionist research process, informed by the thinking of Waterman, Tillen, Dickson and de Koning, I used one of the action research orientations: empowerment evaluation.117 This meant, as Figures 1 and 3 demonstrate, that the scientific procedures and professional learning processes of the evaluation experiment were linked to the development of certain everyday-life questions at MHCC.

I opted for a qualitative approach as the research methodology because of the focus of my research task. This focus was on strengthening the evaluation impacts derived from the process use of evaluation and from an evaluation paradigm emphasising the local recipients’ involvement and learning in the evaluation process.

The standpoint of resisting asymmetric power relationships is again stressed here. The mode and standpoint chosen allowed me to cut across various disciplines, fields and subject matters.118 The boundaries I needed to cross were typically those of disciplines (Education Sciences and Development Studies); contexts, spheres and levels (foreign aid: donor [government] and recipients [NGOs, local stakeholders]); cultures (Tanzanian and Finnish); and subjects (VET and evaluation).

In this evaluation experiment at MHCC, the democratisation of knowledge through local learning and active local participation was emphasised. Knowledge was not to be produced for knowledge’s sake, nor evaluation for evaluation’s sake, but for use, so that the local participants could become social actors in the VET programme. The aim was to improve the lives of the people involved and to maximise the impacts of the evaluation and of the VET activities at MHCC in Tanzania through their experiences and through participatory, evolving and mobilising processes.119 In addition, reflection on and adaptation of the VET concentrated on the locals’ evaluative thinking, involvement and the skills needed for an on-going evaluation. This type of evaluation experiment not only gave voice to the stakeholders engaged in it, but also preserved their multiple realities, experiences and interpretations by focussing on the participants’ perspectives in their cultural context.120

The research methodology used in this research, understood as a branch of philosophy or logic, combined the formulation of the questions with data generation and analysis by means of certain research methods. To an extent, it followed in the footsteps of

117 Waterman, Tillen, Dickson & de Koning 2001

118 Denzin & Lincoln 2013, 5

119 see e.g., Gaventa & Cornwall 2001, 76; 2006, 126–127

120 see Ronkainen, Pehkonen, Lindblom-Ylänne & Paavilainen 2011, 81; Wandersman, Snell-Johns, Lentz, Fetterman, Keener, Livet, Imm & Flaspohler 2005, 28

Rwegoshora and Silverman.121 Regarding this research, the methodology selected, being predominantly qualitative, was holistic (e.g., contextual, case-oriented, resistant to reductionism and elementalism, relatively non-comparative); empirical (e.g., field-oriented, privileging natural language descriptions, underscoring observables, including those noted by stakeholders); interpretive (e.g., stressing researcher-subject interaction); and empathic (e.g., design responsive). These characteristics of qualitative research were adopted from Stake.122

The formulation of the research questions had a strong influence on my research design. They gave shape and focus to this research, and they also helped me to choose the appropriate methods and means of analysis; simply put, they kept me, as the researcher, on track.123 These research questions solidified the theoretical presuppositions underlying the questions themselves, as well as the ontological and epistemological standpoints taken in this research. Thus, to contribute to stronger evaluation use and impacts at the local level of the case, value was placed on the process use of evaluation with empowerment evaluation and on the utilisation of the social relationship between the researcher and the researched. Further, the emphasis was put on the processes, meanings and qualities of entities instead of on the measurement or analysis of causal relationships between experimentally124 measured or examined variables. This focus on local actors in the process did not conform to the politics of strengthening donor hegemony and the methods of positivism used in the majority of development evaluations.125

It is often said that determining the research questions is the most significant part of the research process, and I agree. Indeed, I realised that good research questions shaped the study and caused me to focus on those essential issues that

121 Rwegoshora 2006, 95; Silverman 2006, 15

122 Stake 1995, 12, 47–48; Stake & Abma 2005, 376–380; see also Mathison 2005, 396–397; Ronkainen, Pehkonen, Lindblom-Ylänne & Paavilainen 2011, 81–83; Rossman & Rallis 2003, 8, 11 in Marshall & Rossman 2011, 2–3

123 Flick 2006, 137; Laine, Bamberg & Jokinen 2007, 47; Ronkainen, Pehkonen, Lindblom-Ylänne & Paavilainen 2011, 42–45; Simons 2009, 31–32

124 In a classic experimental design with randomisation, two groups, a treated and an untreated one, are typically measured before and after the treatment of the former. Comparing the changes in these groups enables one to evaluate cause and effect, as well as the impact of the programme and its effectiveness, on the grounds of the theory of causation. Evaluation designs without randomisation, but involving pre- and post-tests and comparing groups, are called quasi-experiments. Generally, experimental and quasi-experimental evaluation represents methodologically “hard” approaches and uses quantitative methods. (Campbell 1969 in Pawson & Tilley 2000, 4–5.)

125 see Denzin & Lincoln 1994, 4


could be answered.126 Thus, within this evaluation research, two typically separate actions,127 namely the evaluation and the research on it, were carried out. While the evaluation part covered the action- and change-oriented evaluation of the VET impacts, as an evaluation characteristically does, the research part focussed on the utilisation and impacts of development evaluation. In addition, it was the evaluation questions which determined the use of the most appropriate methodology in the evaluation context, as Ginsburg and Rhett made clear.128

The VET experiment concentrated on the evaluation factor of evaluation use, recognising that the context factor is most commonly valued in evaluation use and evaluation research, even though it has also been seen to cause inefficient evaluation use and impacts. Typical of this situation are institutional evaluation systems with their power over development evaluations. The research results in the evaluation literature referenced earlier revealed that the evaluation paradigm and standpoint chosen had an impact on evaluation utilisation as well as on learning in evaluation, with the understanding that decisions made on the epistemological, ontological and methodological stances valued in evaluation had crucial effects later on evaluation utilisation and, finally, on evaluation impacts.129 Thus, in the end, the major research question took its final form as follows:

How did the evaluation factor (through the conscious standpoint taken in evaluation) and the evaluation paradigm chosen impact the utilisation of evaluation among multiple, local stakeholders of a development cooperation intervention?

To be able to answer this key question, the following specific sub-questions were asked, each of which touched on the evaluation experiment carried out on the VET case in Tanzania.

1. What were the key evaluation impacts of the use of the “recipient hegemonic” standpoint and paradigm in development evaluation utilisation on the evaluation experiment?

1.1 How did the evaluation process proceed?

1.2 What were the evaluation findings on VET utilisation?

1.3 What kind of process use of evaluation took place in the VET case? With what results?

126 Laine, Bamberg & Jokinen 2007, 47; Ronkainen, Pehkonen, Lindblom-Ylänne & Paavilainen 2011, 42; Simons 2009, 31–32; Yin 2009a, 13–14

127 Botcheva, Shih & Huffman 2009, 178

128 Ginsburg & Rhett 2003, 497

129 see e.g., Heikkinen 2004

1.4 How was the evaluation used? How were the impacts of the evaluation experiment manifested at the personal, interpersonal and collective levels of the VET case? What changed?