
The new strategy of the University of Helsinki for the years 2013–2016 sets the strategic objective of counting the University among the 50 leading universities in the world by 2020. The goal requires that the University remain at the cutting edge of research in as many fields as possible and that it enhance its reputation as a high-quality learning environment with the ability to resolve global issues. The mission of the University is to be the most comprehensive research institution of higher education, edification and intellectual regeneration in Finland. It is a pioneer and builder of the future. To make the right strategic choices in pursuing these goals, the University regularly carries out international evaluations of its research and teaching. Previous research evaluations, including doctoral training, were carried out in 1999 and 2005. The present international evaluation of research and doctoral training took place between 2010 and 2012, and the material under evaluation covered the period from 1 January 2005 to 31 December 2010.

The University’s performance depends on its personnel and students. The evaluation findings are expected to offer the University useful information for identifying areas of strength as well as areas in need of development in research and doctoral training.

The evaluation will also make it easier to recognise structures related to research and doctoral training that extend over faculty and departmental boundaries. For the Researcher Communities, the evaluation offers an opportunity to receive feedback from external experts on the quality of their research and doctoral training in an international context.

1.2 STEERING GROUP AND ITS MANDATE

For the planning of the evaluation, the Rector of the University appointed a steering group on 27 January 2010 (Decision No 101/01/2010). The steering group was chaired by Vice-Rector Johanna Björkroth, with Professor Marja Airaksinen serving as Vice-Chair. The other group members were Chief Information Specialist Maria Forsman, Professor Arto Mustajoki, University Lecturer Kirsi Pyhältö, Director of Strategic Planning and Development Ossi Tuomi and doctoral candidate Jussi Vauhkonen.

The steering group prepared the evaluation plan, its tasks, aims, objectives and methods, the timetable for implementation and the bases for financial or other recognition resulting from the evaluation. The basic material used in the preparation consisted of the University’s strategic documents. In the planning process, the steering group thoroughly discussed the options for how the evaluation would be implemented. A particular focus of the discussion was how to ensure continuity with the previous evaluations and what means could be used to support the choice of research focus areas, societal impact, new innovative research openings, the composition of research communities and their interdisciplinarity, multidisciplinarity or transdisciplinarity, and international visibility. It was noted in the discussions that the new structures in the faculties did not support continuity with the previous evaluations. The steering group argued for voluntary participation and for Researcher Communities (RCs) that could cross department and faculty borders. An RC refers to a group of researchers who registered together to participate in the evaluation of their research and doctoral training. It was assumed that this voluntary aspect would support a bottom-up approach to forming Researcher Communities. Furthermore, incentives were also needed, such as benefits for the research groups, departments or faculties of the University. The main impetus for adopting the new method was the strategic aim of the University: “to the top and out to society”, or “excellence for society”.

The key topic in the steering group’s discussions was how to recognise the diversity of research and its conditions and preconditions in a multidisciplinary university.

After the planning period, the steering group continued its work, following the implementation of the evaluation and making decisions of principle when necessary.

During the planning stage, the steering group discussed the implementation of the evaluation with the academic community in several forums.

1.3 EVALUATION METHOD

Background

The structure of the faculties and departments has changed since the year 2010. The steering group stated that there was no longer a good reason to repeat the previous evaluation model and to compare the research performance of the old and new structures. In practice, implementing a new model meant taking a risk: there were many unknown factors, and the University had no earlier experience of how to manage the selected model successfully.

The concept of a Researcher Community was unclear to both the academic community and the panellists. The distinctions between the participation categories were also not clear, and the categories were not mutually exclusive. The RCs had several participation categories to choose from, and it was not possible to foresee or speculate on the optimal choice of category.

Bottom-up approach

The participants in the evaluation were Researcher Communities (RCs). The conditions for forming an RC were given in the Guidelines for the Participating Researcher Communities.

The RCs defined whether the composition of their communities should be considered, for example, as well-established or new.

The challenge for this evaluation was to recognise and do justice to the diversity of research practices and publication traditions. Traditional Research Assessment Exercises (RAE) do not necessarily value high-quality research if its volume is low or if it is distinct from mainstream research. It is always challenging to expose the diversity of research to fair comparison. Understanding the divergent starting points of the RCs demanded sensitivity from the panellists.

The evaluation’s emphasis on a researcher-oriented approach was already apparent during the registration procedure, when the RCs were formed. The platform for the evaluation made it possible to consider a variety of researcher community compositions. Because the publications covered the years 2005–2010, it was assumed that the RCs would have certain aspects in common during the years under evaluation, e.g. common themes in their publications, some common form of cooperation, and common plans for the future.

The evaluation can be considered enhancement-led. Instead of ranking, the main aim was to provide useful information for the enhancement of the research and doctoral training of the participating RCs. The comparison was to take into account each field of science and acknowledge its special characteristics, and to produce information for identifying the present status of each RC and the factors that have led to success. Moreover, challenges in operations and outcomes were to be recognised.

The evaluation approach was designed to recognise the significance and specific nature of researcher communities and research areas in a multidisciplinary, top-level university. Furthermore, one of the aims of the evaluation was to bring to light evaluation principles that differ from the prevalent ones, so that the views of various fields of research could be described and research arising from various starting points better understood. The evaluation of doctoral training was integrated into the evaluation as a natural component related to research. The operational processes of doctoral training were examined in the evaluation questions and in a separate doctoral survey used for background information.

Five main stages of the evaluation method

• Registration

• Self-evaluation

• TUHAT8 compilations on publications and other scientific activities9

• External evaluation

• Public reporting

The external part of the evaluation – the peer evaluation – took place in panels comprising distinguished national and international experts who based their evaluation on the materials submitted by the participating Researcher Communities and the data stored in the University’s research information system TUHAT.

The previous evaluations of research, in the years 1998 and 2005, covered all the institutions in the university. The previous model was traditional in the sense that a distinguished collection of research reports formed the main evaluation material in addition to the evaluation questions. The external panels evaluated the publications and scored their level of performance.

8 TUHAT, acronym of the Research Information System (RIS) of the University of Helsinki

9 E.g. editorial work, memberships, public appearances, peer reviews, supervision or co-supervision of doctoral theses

It is essential to emphasise that the present evaluation combined both meta-evaluation and traditional research assessment, and that its focus was both on research outcomes and on the procedures associated with research and doctoral training. The approach, in which self-evaluation constituted the main source of information, can be considered enhancement-led. The answers to the evaluation questions, together with the information on publications, the bibliometric analyses and the lists of other scientific activities, formed an entity that was to be reviewed as a whole.

1.4 MONETARY REWARDS OF THE EVALUATION

The financial consequence of the first research assessment in the year 1998 took the form of monetary rewards to successful departments. Successful faculties were rewarded as well.

In the year 2005, the departments and faculties with high scores and whose performance had improved compared to the previous evaluation were rewarded. The rewards were assigned for 3 or 6 years.

The Rector will decide on the amount and allocation criteria of the resources to be distributed on the basis of the present evaluation results. High-quality performance as well as the extent of participation in the evaluation will be considered in the allocation of resources in the planning of the next strategy period (2013–2016) and in the preparation of the University’s research policy.

1.5 AIMS AND OBJECTIVES IN THE EVALUATION

The aims of the evaluation were stated as follows:

• to improve the level of research and doctoral training at the University of Helsinki and to raise their international profile in accordance with the University’s strategic policies.

The improvement of doctoral training should be compared to the University’s policy,10

• to enhance the research conducted at the University by taking into account its diversity, originality, multidisciplinary nature, success and field-specificity,

• to recognize the conditions and prerequisites under which excellent, original and high-impact research is carried out,

• to offer the academic community the opportunity to receive topical and versatile international peer feedback,

• to better recognize the University’s research potential,

• to exploit the University’s TUHAT research information system to enable transparency in publishing activities and the production of reliable, comparable data.

10 Policies on doctoral degrees and other postgraduate degrees at the University of Helsinki.

1.6 CONDITIONS TO FORM A RESEARCHER COMMUNITY FOR THE EVALUATION

The evaluation was targeted at researcher communities that were formed on the basis of collaboration in research and doctoral training. The researcher communities had to include Principal Investigators (PIs) and doctoral candidates. In addition, a researcher community typically also included academics at the other levels of the four-level hierarchy of research positions. As the purpose of the evaluation was to recognise the conditions and requirements for producing cutting-edge, high-quality research results and doctoral training, the University encouraged researcher communities with established collaboration between their members to participate. The practical motivation (e.g. research, doctoral training) for forming the researcher community was to be demonstrated in the evaluation materials.

In addition to meeting the above requirements, researcher communities had to fulfil the following conditions (a–c) to participate in the evaluation:

a) The researcher community consists of 20–120 members of the research and teaching staff who are or have been affiliated with the University of Helsinki between 1 January 2005 and 31 October 2010. On 31 October 2010, at least three members of the group must act as Principal Investigators appointed by the University of Helsinki.

b) During the period under evaluation (from 1 January 2005 to 31 December 2010), some members of the researcher community have served as supervisors of doctoral dissertations, appointed for the task by a University of Helsinki faculty.

c) Data on publications and other scientific activities of the researcher community members from 1 January 2005 to 31 December 2010 is updated in the TUHAT database by 31 January 2011.

Moreover, the following conditions were applied to the participating researcher communities:

• Participation is voluntary.

• The participating researcher community may include researchers across department and faculty boundaries.

• The participating researcher communities do not need the approval of their faculty or independent institute, even though it is recommended that faculties and independent institutes encourage their researchers to participate.

• The minimum number of members in a researcher community may be lower, for a well-grounded reason, in categories three (research distinct from mainstream research) and four (an innovative opening). However, the members of such a researcher community must include at least two acting Principal Investigators appointed by the University as well as doctoral candidates under their supervision.

• Only Principal Investigators (PIs) can participate in the evaluation as members of two researcher communities (A and B). In such cases, the prerequisites are:

• there is only one PI in common between the researcher communities A and B, and

• the researcher communities A and B do not fall below the minimum size set for researcher communities, and

• the researcher communities A and B participate in the evaluation in different categories.

• The participating researcher communities may also include researchers from outside the University of Helsinki. Such researchers will not, however, be included in the number of researchers in the participating researcher community.

• The publications and other scientific activities of a researcher will be evaluated only for the period during which he or she has been affiliated with the University of Helsinki.

• The researcher community must register for the evaluation and submit the required evaluation materials within the set deadline.

The Evaluation Steering Group reserved the right to reject researcher communities that did not fulfil the required conditions.
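Because the eligibility rules above combine several numeric thresholds with an exception for two categories, the following minimal sketch restates them in one place purely as a reading aid. It is not part of the evaluation documentation: the class, field and function names are hypothetical, and the sketch omits, for instance, the shared-PI rule for membership in two communities.

```python
from __future__ import annotations
from dataclasses import dataclass

# Illustrative sketch only; names are hypothetical. It encodes conditions (a)-(c)
# and the reduced-size exception for categories three and four described above.

@dataclass
class Member:
    is_pi: bool                     # acting Principal Investigator on 31 Oct 2010
    is_doctoral_candidate: bool
    supervised_dissertations: bool  # appointed supervisor of doctoral dissertations, 2005-2010

@dataclass
class ResearcherCommunity:
    members: list[Member]           # members affiliated with the University of Helsinki
    category: int                   # participation category 1-5
    tuhat_updated: bool             # publications and activities updated in TUHAT by 31 Jan 2011

def meets_conditions(rc: ResearcherCommunity) -> bool:
    pis = sum(m.is_pi for m in rc.members)
    has_candidates = any(m.is_doctoral_candidate for m in rc.members)
    has_supervisors = any(m.supervised_dissertations for m in rc.members)

    # Categories three and four may fall below 20 members for a well-grounded
    # reason but still need at least two acting PIs; otherwise the community
    # must have 20-120 members and at least three acting PIs.
    if rc.category in (3, 4) and len(rc.members) < 20:
        size_ok = pis >= 2
    else:
        size_ok = 20 <= len(rc.members) <= 120 and pis >= 3

    return size_ok and has_candidates and has_supervisors and rc.tuhat_updated
```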

At the first stage, in February 2010, 141 communities registered. After discussions with the persons responsible for the RCs, some members were excluded, and some RCs were excluded owing to a conflict with the regulations of the evaluation. One community felt that participation was too demanding and withdrew its participation.

In the final count, 136 RCs participated in the evaluation, prepared all the material and reviewed their publications and other scientific activities.

Critical aspects in forming the Researcher Communities for the evaluation

The procedure for forming a Researcher Community was not supervised by the Evaluation Office. Many PIs were asked by several communities to become involved. Sometimes the regulations were not known, and not everyone, especially among doctoral candidates, was sure which community he or she was affiliated with. In some cases the PhD candidates were not aware of the restrictions in the evaluation. The tight schedule of just one month to form a community was challenging and sometimes caused unintended negative side effects.

Some participants in the humanities voiced criticism of the minimum size of a Researcher Community. Some fields in the humanities felt that a minimum of 20 members per RC was too large and challenging to reach.

However, the conditions allowed a smaller number of participants in categories three and four. The restriction in such cases was that the PIs were allowed to participate in only one RC.

One practice that worked well was the meetings organised by some faculties and departments together with the PIs, in which ideas for forming communities were shared. The willingness to participate was often strengthened in these meetings. What can be learned from this experience? One way of making the process more efficient could be an open platform (e.g. a wiki) where all potential and interested members could be viewed.

1.7 PARTICIPATION CATEGORIES

The Researcher Communities had to choose one of the following participation categories:

1. The research of the participating community represents the international cutting edge in its field.

2. The research of the participating community is of high quality, but the community in its present composition has yet to achieve strong international recognition or a clear breakthrough.

3. The research of the participating community is distinct from mainstream research, and the special features of the research tradition in the field must be considered in the evaluation. The research is of high quality and has great significance and impact in its field. However, the generally used research evaluation methods do not necessarily shed sufficient light on the merits of the research.

4. The research of the participating community represents an innovative opening. A new opening can be an innovative combination of research fields, or it can be proven to have a special social, national or international demand or other significance. Even if the researcher community in its present composition has yet to obtain proof of international success, its members can produce convincing evidence of the high level of their previous research.

5. The research of the participating community has a highly significant societal impact. The participating researcher community is able to justify the high social significance of its research. The research may relate to national legislation, media visibility or participation in social debate, or other activities promoting social development and human welfare. In addition to having societal impact, the research must be of a high standard.

1.8 EVALUATION MATERIAL

The primary material in the evaluation was the RCs’ self-evaluations, which were qualitative in character and allowed the RCs to choose what was important to mention or emphasise and what was left unmentioned. The present evaluation was exceptional, at least in the Finnish context, because it was based on the evaluation documentation (answers to the self-evaluation questions, publications and other scientific activities), bibliometric reports and a doctoral survey. All the documents were delivered to the panellists for examination.

The evaluation questions formed a substantial part of the evaluation material. In addition, in spring 2011 the University carried out bibliometric analyses based on the data updated in the TUHAT system before 12 April 2011. The bibliometric analyses were carried out by professionals from CWTS (Centre for Science and Technology Studies) in Leiden and by the Helsinki University Library. The analyses conducted by CWTS were based on a standard method using indicators that have been tested and widely approved.

Traditional bibliometrics can reasonably be applied, for example when using the Web of Science database, in fields such as medicine, the biosciences and the natural sciences.

The bibliometrics provided by the CWTS/Leiden covered only the publications that included a WoS identification number (UT) in the TUHAT research information system.

The analyses used publication data stored in the University’s own research information system. One reason for using TUHAT as a data source was to support the new information system and to accelerate its implementation. The total number of publications stored in the research information system for the years 2005–2010 was 67,465. Of these, approximately 16,000 had been assigned a WoS identification number. About 3,500 identification numbers were added afterwards, in cooperation with the CWTS/Leiden, to publications that were in the TUHAT system but had no WoS identification number. In total, therefore, roughly 19,500 publications, or just under 30 per cent of those stored, carried a WoS identification number.
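As a purely hypothetical illustration of the division of labour described above and below, the sketch that follows splits publication records according to whether a WoS identification number (UT) is present: records with a UT go to the CWTS/Leiden analyses, the rest to the Helsinki University Library’s analyses. The record fields and function name are assumptions, not the actual TUHAT tooling.

```python
from typing import Iterable

def split_for_bibliometrics(records: Iterable[dict]) -> tuple[list[dict], list[dict]]:
    """Hypothetical helper: divide TUHAT publication records into those covered
    by the CWTS/Leiden analyses (WoS UT number present) and those left for the
    Helsinki University Library's analyses. Each record is assumed to be a dict
    with a "ut" key holding the WoS identifier, or None when no UT exists."""
    cwts_batch: list[dict] = []
    library_batch: list[dict] = []
    for rec in records:
        (cwts_batch if rec.get("ut") else library_batch).append(rec)
    return cwts_batch, library_batch

# With the figures reported above (roughly 19,500 UT numbers among 67,465 stored
# publications), under a third of the records would fall into the CWTS batch.
```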

Traditional bibliometrics are seldom relevant in the humanities and social sciences because internationally comparable databases do not include every type of high-quality research publication, such as books, monographs and scientific journals in languages other than English. The Helsinki University Library carried out analyses for the RCs if their publications were not well represented in the Web of Science database, the RC’s
