Dissertations in Forestry and Natural Sciences

DISSERTATIONS | ROSALINA BABO | IMPROVING INDIVIDUAL AND COLLABORATIVE E-ASSESSMENT THROUGH ... | NO 425

ROSALINA BABO

Improving individual and collaborative E-assessment through multiple-choice questions and

WebAVALIA

A new e-assessment strategy implemented at a Portuguese university

PUBLICATIONS OF THE UNIVERSITY OF EASTERN FINLAND


Rosalina Babo

Improving individual and collaborative E-assessment through multiple-choice

questions and WebAVALIA

A new e-assessment strategy implemented at a Portuguese university

Publications of the University of Eastern Finland
Dissertations in Forestry and Natural Sciences

No 425

University of Eastern Finland
Joensuu

2021


PunaMusta Oy
Joensuu, 2021

Editors: Pertti Pasanen, Matti Tedre, Jukka Tuomela, and Matti Vornanen
Sales: University of Eastern Finland Library

ISBN: 978-952-61-3830-5 (print)
ISBN: 978-952-61-4255-5 (PDF)

ISSN-L: 1798-5668
ISSN: 1798-5668
ISSN: 1798-5676 (PDF)


Author’s address: Rosalina Babo
Porto Accounting and Business School
Rua Jaime Lopes Amorim, s/n
4465-004 S. Mamede de Infesta
PORTO, PORTUGAL
email: babo@iscap.ipp.pt

Supervisors: Professor Markku Tukiainen, PhD
University of Eastern Finland
School of Computing
P.O. Box 111
80101 JOENSUU, FINLAND
email: markku.tukiainen@uef.fi

Dr Jarkko Suhonen, PhD, Docent
University of Eastern Finland
School of Computing
P.O. Box 111
80101 JOENSUU, FINLAND
email: jarkko.suhonen@uef.fi

Reviewers: Professor Jo Coldwell-Neilson, PhD
Deakin University, Australia
Faculty of Sci Eng & Built Env
Locked Bag 20000
Geelong VIC 3220, AUSTRALIA
email: jo.coldwell@deakin.edu.au

Professor Ismar Frango Silveira, PhD
Mackenzie Presbyterian University, Brazil
Faculdade de Computação e Informática
R. Pio XI, 1500 - Alto da Lapa
CEP 05468-901 - São Paulo/SP - BRASIL
email: ismarfrango@gmail.com


Opponent: Professor Valentina Dagienė, PhD
Vilnius University
Institute of Data Science and Digital Technologies
3 Universiteto St.
LT-01513 VILNIUS, LITHUANIA
email: valentina.dagiene@mif.vu.lt


Babo, Rosalina
Improving individual and collaborative E-assessment through multiple-choice questions and WebAVALIA. A new e-assessment strategy implemented at a Portuguese university.
Joensuu: University of Eastern Finland, 2021
Publications of the University of Eastern Finland
Dissertations in Forestry and Natural Sciences 2021; 425
ISBN: 978-952-61-3830-5 (print)
ISSN-L: 1798-5668
ISSN: 1798-5668
ISBN: 978-952-61-4255-5 (PDF)
ISSN: 1798-5676 (PDF)

ABSTRACT

The increasing number of students in Portuguese higher education institutions (HEI) has not been matched by an equivalent increase in the number of teachers. Although the ideal teacher:student ratio is around 1:20 to 1:30, it is common to have crowded classrooms with around 100 students per class. This situation compromises the quality of the learning process whilst negatively impacting assessment activities, most notably in practical courses. In this type of class, students are expected to develop, experiment with, and practice acquired knowledge. The assessment moments of these practical topics with many students become laborious and time-consuming. Consequently, the quality of the whole learning process declines.

Considering these facts, it was clearly urgent to find alternatives to the assessment method, especially for practical courses with crowded classrooms. The belief was that it would be possible to design and implement such an alternative method, grounded in a continuous e-assessment setting, that would deliver reliable and fair evaluations whilst not compromising the learning outcomes established for the courses. The investigation led to the development of a new assessment method based


on Multiple-Choice Questions (MCQ) quizzes supported by a learning management system (LMS), Moodle. Nonetheless, the research concluded that MCQ quizzes could not assess all the skills and competencies expected from the students. Accordingly, it was necessary to complement them with other assessment approaches, notably problem-based learning (PBL).

However, the implementation of a workgroup project to assist the students in achieving the learning goals and attaining the desired skills and competencies raised a new challenge for the teachers when evaluating the students. It was perceived that it would not be fair or truthful to assess the project outcomes as a whole and simply assign the same mark to all the group members. In fact, not every group member contributes in the same way to the work’s development. There was a need to guarantee that everyone would be assessed according to their performance, thus providing fair and accurate assessments of individual contributions to the work developed. Therefore, to distinguish each workgroup member, self- and peer-assessment practices were added, but these practices also increased the teacher’s administrative tasks. The solution to alleviate the assessment management and the increased workload arising from these tasks was to support them with specific online tools.

Upon understanding the need for a new self- and peer-e-assessment software tool, WebAVALIA was developed. WebAVALIA provides the evaluator with an environment where it is possible to draw a distinction between group members, allowing a fair assessment. WebAVALIA uses workgroup members’ perceptions about their own and their peers’ performance and contribution to the work’s development to achieve fair and unbiased individual results, while considering the project grade. By allowing a distinction between individuals, WebAVALIA can return to the students the feeling that the assessment was fair and that their efforts have been adequately rewarded.
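The voting mechanism just described can be sketched as a toy formula. This is an illustrative assumption, not WebAVALIA’s actual formulation (its formulas A, C, and D are analysed in Article VII): the `individual_marks` helper, the 100-point vote split, and the 0–20 cap are all invented for the example.

```python
# Hypothetical sketch of a self- and peer-assessment weighting scheme in
# the spirit of WebAVALIA: each member's mark is the group's project grade
# scaled by how the whole group (self vote included) rated that member.
# The vote scale, the scaling rule, and the 0-20 cap are illustrative
# assumptions, not the tool's actual formulas.

def individual_marks(project_grade, votes):
    """votes[i][j] = points member i assigns to member j (i == j is the
    self vote); each voter distributes 100 points. Returns one mark per
    member, proportional to the votes that member received."""
    n = len(votes)
    received = [sum(votes[i][j] for i in range(n)) for j in range(n)]
    mean = sum(received) / n
    # Scale each member's mark around the project grade, capped at 20
    # (the Portuguese 0-20 grading scale).
    return [min(20.0, project_grade * r / mean) for r in received]

# Three members, project graded 16/20; member 2 is rated highest.
marks = individual_marks(
    16.0,
    [[40, 30, 30],
     [30, 40, 30],
     [20, 30, 50]],
)
```

Under this sketch the individual marks average back to the project grade (14.4, 16.0, and 17.6 for the example votes), so the distinction rewards contribution without inflating the group’s total.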

This innovative work led to the publication of seven papers and followed a pragmatic research philosophy, using mixed-methods research by combining both qualitative and quantitative approaches. More specifically, it used the methodologies of action research and design science research (DSR). The data collection techniques chosen assisted in the achievement


of in-depth understanding about the problems or gathered objective data to perform statistical analyses. Other methodologies related to the development of software or assisted in the problem-solving process.

The work conducted toward the effective design and implementation of the eMCQ assessment method adopted an action research methodology.

The process comprised three action cycles of five phases each (diagnosis, action plan, action taking, evaluation, and specifying learning). From these cycles, it was possible to implement a successful strategy suitable for shifting the assessment method. Subsequently, a DSR methodology was used to research a framework to achieve a fair and unbiased assessment of workgroups, so students also gained the feeling that the assessment was fair and that their efforts had been adequately rewarded. Thus, a theoretical framework was designed, and background knowledge for the framework development was sought. The design and development of WebAVALIA was explained with the support of DSR, which also allowed an explanation of the motivation behind its phased development, offering the means to identify the features and problems of each version, successfully leading to the tool’s final version.

Universal Decimal Classification: 004.415.2, 378.4, 37.018.43:004, 37.091.279.7; 37.091.3

Keywords: e-assessment; higher education; self and peer assessment; multiple-choice questions; Moodle; Moodle quiz; WebAVALIA; technology enhanced assessment; learning management systems; usability; workgroup; collaborative work; summative assessment; continuous assessment; e-learning; software tools; evaluation tools; problem-based learning

Yleinen suomalainen ontologia (Finnish General Ontology): higher education; student assessment; self-assessment; peer assessment (assessment methods); computer-assisted instruction; educational software (computer programs); learning platforms; problem-based learning


Acknowledgements

It is with immense gratitude that I acknowledge the support and counseling of my advisors, Professor Markku Tukiainen and Dr Jarkko Suhonen. Their knowledge and experience have encouraged me during my PhD studies. A special acknowledgement to Dr Jarkko Suhonen for his outstanding scientific review and restructuring of all my research.

I would like to convey my heartfelt thanks to all those who taught me so much and have always supported my academic career—Professor João Álvaro Carvalho and Professor Jorge Reis Lima. Many thanks to all my colleagues and friends at the universities where I have spent many years of my life, specifically Universidade Portucalense, University of Minho, and Polytechnic of Porto–ISCAP.

I share the credit of my work with all my co-authors, especially those whose articles are part of this dissertation and with whom I have learnt a lot. In particular, I extend my appreciation to the students Joana Rocha, Ricardo Fitas, and Verónica Fraga, and to MEng Vitor Silva (a P. Porto software developer) for their support of and enthusiasm for my work, of which a little part belongs to each of them.

I cannot find words to express my gratitude to Professor Teresa Andrade, my friend and fellow swimmer, who did a great job with many excellent suggestions that greatly improved the organization and readability of the thesis.

This thesis would not have been possible without my family. My mother invested all her efforts in my and my sisters’ education. My late father, as many parents do, believed I was the best among the best. I regret not having him here anymore to be silently proud of me.

I owe my deepest gratitude to Pedro, who has been present and supportive throughout all my undergraduate, graduate, and PhD studies. Above all, he is the father of my children and the one who supports me in the main moments of my life. I am who I am also because he has been nearby since I was 17 years old.


Furthermore, I would like to acknowledge my wonderful children, my son André and daughters Leonor and Carolina for their persistent wonder.

They grew up hearing the term PhD, which nowadays is a funny anecdote among our family. At least today (hopefully), they will stop joking about this subject.

Joensuu, June 28, 2021

Rosalina Babo

LIST OF ORIGINAL PUBLICATIONS

ARTICLE I
Babo, R., & Azevedo, A. (2013). Planning and implementing a new assessment strategy using an e-learning platform. In M. Ciussi & M. Augier (Eds.), 12th European Conference on e-Learning (ECEL 2013), 1–9. Academic Conferences and Publishing International, Reading, UK. http://recipp.ipp.pt/handle/10400.22/2726

ARTICLE II
Babo, R., Azevedo, A., & Suhonen, J. (2015). Students’ perceptions about assessment using an e-learning platform. 2015 IEEE 15th International Conference on Advanced Learning Technologies, 244–246. https://doi.org/10.1109/ICALT.2015.73

ARTICLE III
Babo, R., & Suhonen, J. (2018). E-assessment with multiple choice questions: A qualitative study of teachers’ opinions and experience regarding the new assessment strategy. International Journal of Learning Technology, 13(3), 220–248. https://doi.org/10.1504/ijlt.2018.095964

ARTICLE IV
Babo, R., Babo, L., Suhonen, J., & Tukiainen, M. (2020). E-assessment with multiple-choice questions: A 5-year study of students’ opinions and experience. Journal of Information Technology Education: Innovations in Practice, 19, 1–29. https://doi.org/10.28945/4491

ARTICLE V
Babo, R., Rocha, J., Fitas, R., Suhonen, J., & Tukiainen, M. (2021). Self and peer e-assessment: A study on software usability. International Journal of Information and Communication Technology Education (IJICTE), 17(3), 68–85. https://doi.org/10.4018/IJICTE.20210701.oa5

ARTICLE VI
Babo, R., Suhonen, J., & Tukiainen, M. (2020). Improving workgroup assessment with WebAVALIA: The concept, framework and first results. Journal of Information Technology Education: Innovations in Practice (JITE:IIP), 19, 157–184. https://doi.org/10.28945/4627

ARTICLE VII
Babo, R., Fitas, R., Suhonen, J., & Tukiainen, M. (2020). Analysing and improving mathematical formulation of WebAVALIA: A self and peer assessment tool. IEEE Global Engineering Education Conference (EDUCON), 1325–1332. https://doi.org/10.1109/EDUCON45650.2020.9125101


AUTHOR’S CONTRIBUTION

The contributions of the author in each publication are detailed below.

I) The author was responsible for project management: defining and coordinating the teams, elaborating the timeline of activities with its phases and deadlines, chairing meetings, designing and implementing the project’s question databank, writing the article, and presenting it at the ECEL 2013 conference. This article was co-authored by Prof. A. Azevedo, who assisted with the coordination of the project and all its tasks, from the timeline rules to the division of categories for the question databank. Moreover, to perform this research project, all colleagues of the IS department of ISCAP had to be involved in its development, providing valuable contributions.

II) The author formulated the research questions and designed, validated, and distributed the questionnaire. The statistical treatment of the data was performed by both the author and Prof. A. Azevedo, who also assisted with the article writing. The author performed the literature review, wrote the paper, and presented it at the ICALT 2015 conference. Dr. J. Suhonen advised on the paper writing.

III) The author was responsible for designing the interview guide, preparing the focus group environment, and conducting the focus group. The interview transcription and translation were performed with the assistance of undergraduate interns. The data from the interviews was then treated by the author. The author performed the literature review and wrote the article. Dr. J. Suhonen provided a great contribution to the article’s content organization and revision.

IV) The author designed the surveys and implemented them in the LimeSurvey platform. They were then distributed to the students involved in the new assessment method. The author was also responsible for writing the students’ interview guide and interviewing the students. The survey questions and open-ended answers, as well as the interviews, were transcribed and translated with the assistance of undergraduate interns. Prof. L. Babo collaborated in the statistical treatment of the interviews’ data and in the article writing. Dr. J. Suhonen and Prof. M. Tukiainen advised on the writing of the paper.

V) The author was responsible for the research on the existing assessment software tools and helped with writing the article. Miss J. Rocha performed the literature review of the paper and assisted with its writing. Mr. R. Fitas tested all the suggested software tools, developing a table with their main characteristics and differences. Dr. J. Suhonen and Prof. M. Tukiainen advised on the writing of the paper.

VI) The author was responsible for the development, coding, and design of the tool in its several phases of implementation throughout the years, with the assistance of Eng. V. Silva. The author also used the tool with student groups to ascertain its usability. The author designed the surveys, implemented them in the LimeSurvey platform, and was responsible for their distribution. The survey questions and open-ended answers, as well as the interviews, were translated with the assistance of undergraduate interns. The data treatment was performed by the author, as well as the literature review and article writing. Dr. J. Suhonen and Prof. M. Tukiainen advised on the writing of the paper.

VII) The author has been working for several years on the mathematical formulation of the WebAVALIA tool discussed in this article. The author was also responsible for the literature review, writing of the paper, and its presentation at the EDUCON 2020 conference. Mr. R. Fitas collaborated in the development of algorithm D, which was implemented in the tool, and he assisted with the writing process of the article. Dr. J. Suhonen and Prof. M. Tukiainen advised on the composition of the paper.


Table of contents

1. Introduction ... 23

1.1 Research background and motivation ... 23

1.2 Problem statement and research questions ... 27

1.3 Research process... 32

1.4 Dissertation structure ... 35

2. Literature review ... 37

2.1 Literature review introduction ... 37

2.2 Education overview ... 43

2.3 Assessment overview ... 44

2.4 Collaborative work ... 47

2.4.1 Collaborative learning approaches ... 47

2.4.2 Problem-based learning ... 48

2.5 Technology-enhanced assessment ... 49

2.5.1 E-assessment tools ... 49

2.5.2 Learning management systems ... 50

2.5.3 E-multiple-choice questions ... 52

2.5.4 Self and peer e-assessment ... 54

2.5.5 E-assessment software usability... 55

2.6 Literature review summary ... 56

3. Research design ... 59

3.1 Research methodologies ... 59

3.1.1 Mixed-methods research ... 60

3.1.2 Action research methodology ... 61

3.1.3 Design science research ... 62

3.2 Dissertation research design ... 63

3.3 Research ethics ... 69

4. Research findings ... 73

4.1 Introduction ... 73

4.2 E-assessment implementation process ... 75

4.3 Stakeholders’ opinions and perceptions on MCQ Moodle e-assessment ... 77

4.4 Self and peer e-assessment framework development—WebAVALIA ... 84

4.4.1 Self and peer e-assessment tools ... 85

(20)

18

4.4.2 Design and development of WebAVALIA ... 93

5. Discussion and conclusion ... 113

5.1 Summary of findings ... 113

5.2 Study Reflection ... 119

5.3 Recommendations to practitioners/lessons learned ... 124

5.4 Research quality and limitations ... 131

5.5 Future research ... 136

Bibliography ... 139

Appendices ... 159

LIST OF TABLES

Table 1. Authors cited for each publication topic 41
Table 2. Research methodologies and techniques used in each publication 64
Table 3. Phases of two action research cycles (based on Article I) 75
Table 4. Question bank categories 76
Table 5. Lecturers’ focus group results concerning MCQ quizzes’ advantages and disadvantages (Article III) 79
Table 6. Survey questions regarding students’ opinion about MCQ e-assessment (Article IV) 80
Table 7. Students’ opinion about the eMCQ e-assessment (Article IV) 81
Table 8. Search and selection criteria for e-assessment tools (based on Article V) 85
Table 9. List of research articles by the respective authors and parameters (Article V) 86
Table 10. Comparison of assessment tools (Article V) 91
Table 11. Variables used in the formulation of WebAVALIA 102
Table 12. List of parameters used in the formulation of WebAVALIA 103
Table 13. Students’ self- and peer-assessment marks—assessment moment 1 108
Table 14. Students’ self- and peer-assessment marks—assessment moment 2 108
Table 15. Students’ self- and peer-assessment marks—assessment moment 3 108
Table 16. Students’ self- and peer-assessment marks—final voting mark 109
Table 17. Students’ final marks, according to formulas A, C, and D 109

LIST OF FIGURES

Figure 1. Information systems course assessment at ISCAP 24
Figure 2. Example of part of a CCAA test in a DBMS file (Article III) 25
Figure 3. ISCAP’s Information Systems Department’s traditional assessment process 26
Figure 4. The number of students and teachers along the years in Portuguese higher education institutions (PORDATA––Estatísticas, Gráficos e Indicadores de Municípios, Portugal e Europa, n.d.) 28
Figure 5. Group assessment without individuals’ distinctions 30
Figure 6. New students’ e-assessment approach 31
Figure 7. eMCQ design, implementation, and analysis research timeline 33
Figure 8. Overview of the research development process 34
Figure 9. Google Scholar keywords search (Retrieved on 2020/06/22) 37
Figure 11. Design science research cycles (Hevner et al., 2004) 63
Figure 12. Action research cycles 66
Figure 13. Design science research cycles of WebAVALIA (Article VI) 69
Figure 14. Research development process grouped by findings’ categories 74
Figure 15. Students’ opinions about the multiple-choice question summative e-assessment (based on Article II) 78
Figure 16. Students’ perceptions about the eMCQ (mean scores; Article IV) 82
Figure 17. Students’ opinion about the time to answer the test (mean scores), as discussed in Article IV 82
Figure 18. Points’ distribution by question (%) 83
Figure 19. Theoretical framework for an unbiased workgroup assessment (based on Article VI) 93
Figure 20. WebAVALIA’s first version development and feature implementation (Article VI) 96
Figure 21. User types in WebAVALIA 97
Figure 22. WebAVALIA parameters’ configuration (Article VII) 98
Figure 23. WebAVALIA student page 98
Figure 24. WebAVALIA voting board (Article VII) 99
Figure 25. Results page on WebAVALIA (Article VI) 100
Figure 26. Use case diagram of WebAVALIA (Article VI) 101
Figure 27. Graphic representing formulas A, C, and D 110
Figure 28. Consistency of marks with everyone’s performance 111
Figure 29. Stakeholders on the assessment shifting process (Adapted from Folden, 2012) 125

LIST OF ACRONYMS

CAA Computer-Assisted Assessment
CCAA Classical Computer-Assisted Assessment
DBMS Database Management Systems
DSR Design Science Research
HEI Higher Education Institutions
ICT Information and Communication Technologies
IS Information Systems
ISCAP Instituto Superior de Contabilidade e Administração do Porto (Porto Accounting and Business School)
ISO International Organization for Standardization
LMS Learning Management System
MCQ Multiple-Choice Questions
Moodle Modular Object-Oriented Dynamic Learning Environment
PBL Problem-Based Learning
P. Porto Polytechnic of Porto
RCR Responsible Conduct of Research
RQ Research Question
TEA Technology-Enhanced Assessment


1. Introduction

1.1 Research background and motivation

ISCAP (Porto Accounting and Business School) is a higher education institution (HEI) of the Polytechnic of Porto (P. Porto) that includes around 270 teachers and 4,300 students, who attend a variety of undergraduate and graduate programs. Until 2006, the traditional method used in a continuous assessment setting in information and communication technologies (ICT) courses was the classical computer-assisted assessment (CCAA).

The CCAA method, adopted many years ago at ISCAP, comprises formative and summative assignments (Figure 1). A formative assessment consists of regular homework that encompasses the accomplishment of small tasks on a computer in an unsupervised environment. Summative assessments, on the other hand, include the completion of several tasks in a supervised computer-assisted test environment; they comprise individual tests, typically performed at specific times throughout the semester, and the development of a workgroup project.
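As a rough illustration of how the components named above (individual tests plus a workgroup project) could combine into one continuous-assessment grade, consider the sketch below. The `course_grade` helper, the 60/40 weighting, and the two test moments are invented for the example; the chapter does not state ISCAP’s actual weights.

```python
# Illustrative sketch only: blending individual test marks with a group
# project mark into a continuous-assessment grade. The 60/40 weighting
# and the number of test moments are assumptions for the example, not
# ISCAP's actual parameters.
def course_grade(test_marks, project_mark, test_weight=0.6):
    """All marks on the Portuguese 0-20 scale."""
    tests_avg = sum(test_marks) / len(test_marks)
    return test_weight * tests_avg + (1 - test_weight) * project_mark

grade = course_grade([14.0, 16.0], 18.0)  # two test moments, one project
```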


Figure 1. Information systems course assessment at ISCAP

The workgroup project was held during the whole semester, in which students were asked to solve an analytic problem. Accordingly, students engaged in the resolution of an enterprise’s real problem, offering them the possibility not only of demonstrating their knowledge but also of developing skills by designing and implementing a software product. The results of such work were presented at the end of the semester to a real audience. The proposed workgroup aimed to achieve learning outcomes and skills such as “self-directed learning, project management, collaboration, communication, and collaborative knowledge construction (…)” (Edström & Kolmos, 2014, p. 541) plus analytical and presentation skills (Tiwari, Arya, & Bansal, 2017).

In individual tests, the students were asked to answer a series of questions by performing the respective tasks directly on a Database Management System (DBMS) in MS Access or a spreadsheet in MS Excel, depending on the course (Figure 2). To accomplish the test, the students had to download from Moodle the test file with the set of questions and requests and a base file to be transformed according to the given requests (a semi-completed file in which the students had to accomplish the requests). Then, upon performing the tasks and questions directly in the base file, the students would upload the resulting file to Moodle.

Figure 2. Example of part of a CCAA test in a DBMS file (Article III)

In each assessment moment, which could be held two or three times during the semester, each student generated and submitted at least one file for evaluation. Figure 3 represents one assessment process using CCAA.


Figure 3. ISCAP’s Information Systems Department’s traditional assessment process

This individual assessment procedure required the simultaneous use of one computer per student. However, due to the large number of students attending each course at ISCAP, the institution did not have sufficient computers available. Accordingly, to enable all students to undergo this assessment procedure, it was necessary to implement several test shifts. Consequently, devising as many test versions as there were shifts proved difficult, especially in ensuring their homogeneity. Presenting a different test per shift aimed to avoid the possibility of cheating, which had a direct impact on the complexity of the whole assessment process. Furthermore, since there were several test versions to grade, the amount of time lecturers spent on the grading process was significant.
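The version-per-shift burden described above is exactly what a categorized question bank can remove. The sketch below is a minimal illustration (the category names, question ids, and one-question-per-category rule are invented), loosely mirroring how random-question quizzes keep versions different between shifts yet homogeneous in topic coverage:

```python
import random

def build_versions(bank, n_versions, seed=0):
    """bank maps category -> candidate question ids. Returns n_versions
    test papers, each drawing one random question per category, so every
    version covers the same topics with comparable difficulty."""
    rng = random.Random(seed)  # fixed seed keeps the draw reproducible
    return [{cat: rng.choice(qs) for cat, qs in bank.items()}
            for _ in range(n_versions)]

# Hypothetical bank: three categories with interchangeable questions.
bank = {"queries": ["q1", "q2", "q3"],
        "forms": ["f1", "f2"],
        "reports": ["r1", "r2"]}
shifts = build_versions(bank, 4)  # one homogeneous version per shift
```

Because every version draws from the same categories, grading effort per version stays comparable while no two shifts need see the same paper.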

The need to implement a new assessment approach arose from this reality, establishing as a hypothesis that shifting from the CCAA method to another e-assessment method would decrease the overall complexity whilst ensuring quality. The premises advanced were that the new method would allow the development of as many test versions as needed in a simple way, whilst assuring their homogeneity and assessment rigor. It would reduce the time spent in preparing each assessment moment, as well as in the grading process,

simultaneously providing greater accuracy and fairness across the assigned marks.

This necessity is also highlighted in other studies (Barber et al., 2015; Botički & Milašinović, 2008; Crisp, 2011; Pachler et al., 2010; Stödberg, 2012). According to Stödberg (2012), there has been a trend to standardize academic assessment, which, along with the increased adoption of ICT, has led to a growing interest in e-assessment activities. Barber et al. (2015, p. 66) also agree that there is the need to “embrace all that the digital world has to offer.”

In an environment where teachers are expected to present a variety of learning and assessment experiences to their students and support an active learning approach, enhancing assessment organization and technology is crucial. E-assessment can enable these experiences as long as it is used accurately. Pachler et al. (2010) express the potential of e-assessment to significantly change the learning processes in higher education institutions (Botički & Milašinović, 2008; Crisp, 2011; Pachler et al., 2010).

Stödberg (2012, p. 602) performed a review of e-assessment and affirms that “computer marking can be as accurate as human marking and sometimes even more consistent.” This is extremely relevant, since accurate marking is a prerequisite for this form of assessment to provide students with correct and suitable feedback. In this study, Stödberg also notes that “e-assessments can be used to save the assessor’s time.”

1.2 Problem statement and research questions

The increasing number of students in HEI has not been matched by an equivalent increase in the number of teachers, as can be seen in Figure 4, which reflects the HEI teacher:student ratio in Portugal. Indeed, “while an ideal teacher:student ratio is around 1:20 to 1:30,” it is very common in HEIs to have classes of “around 100 students,” which is not beneficial for learning quality (Chebrolu et al., 2017).


Figure 4. The number of students and teachers along the years in Portuguese higher education institutions (PORDATA––Estatísticas, Gráficos e Indicadores de Municípios, Portugal e Europa, n.d.)

This situation has a strong negative impact on assessment activities, particularly in courses with practical subjects, where individuals must practice and test their knowledge. In fact, with many students, the assessment becomes more laborious, since these subjects are more difficult to evaluate and are time-consuming, often requiring that the teacher be fully dedicated to one student at a time. The assessment procedure in use at ISCAP, illustrated in Figure 3, imposes a considerable burden on the teacher, as it requires producing and evaluating several versions of the tests and organizing the students into shifts, whilst being susceptible to failure and unfairness.

The hypothesis is that it is possible to use e-assessment in more innovative ways so these problems may be overcome. In particular, the author argues that it is possible to establish a continuous e-assessment setting, particularly useful for large courses with crowded classrooms, which delivers reliable and fair evaluation whilst not compromising the learning outcomes established for the course.

With this goal in view, two main research questions were defined to act as drivers for the research work of this dissertation study:

RQ1: How can higher education teachers use continuous assessment of practical Information Systems topics in crowded classrooms, without compromising the learning process and learning outcomes of these students?

RQ1 translates into the conceptual aim of this work, which is understanding how to implement continuous e-assessment of practical subjects in crowded classrooms efficiently, without compromising the learning process. The investigation led to the development of a new assessment method based on multiple-choice question (MCQ) quizzes supported by a learning management system (LMS).

The choice of an LMS platform derived from previous research (Babo et al., 2012; Babo & Azevedo, 2012; 2009), which provided a broader view of the use of such platforms in HEI. The selection of Moodle as the LMS to use in this work was grounded in the fact that it is one of the most popular LMSs within the higher education landscape. Additionally, considering ISCAP was already using MCQ supported by Moodle to assess theoretical topics, it seemed a natural step also to use it for practical topics.

However, the research to answer RQ1 concluded that MCQ quizzes could not assess all the skills and competencies expected of the students. It was necessary to complement them with other assessment approaches, notably problem-based learning (PBL). Douglas et al. (2012) also point to this problem, recommending combining these quizzes with other assessment approaches. Therefore, it was decided to use PBL by gathering students in groups to solve a proposed problem. It was expected that this method could assist with assessing the skill and competency acquisition that MCQ quizzes lacked.

While the use of PBL on a group basis solved the flaws of MCQ quizzes in answering RQ1, it also led to the identification of another problem. Given that most of the students’ work is developed during non-contact hours, that is, outside classes and, thus, without the consistent supervision of teachers, it is not possible to have a good perception of the individual performance of each group member. This posed great difficulty in differentiating students according to their commitment and skills, leading to assigning the same grade to all group members and, thus, to (re-)introducing unfairness in the evaluation process (Figure 5).

Figure 5. Group assessment without individuals’ distinctions

Considering this situation, the second research question was formulated.

RQ2: How can teachers assess individuals working in groups and guarantee that they are assessed according to their performance and contribution to the work developed? How can a software tool assist evaluators with their task of assessing everyone in a workgroup?

The second goal is a natural extension of the first, intending to answer how the assessment of workgroups can be performed in a fair and simple manner. In 2009, Luxton-Reilly expressed the need for the development of peer assessment tools that can be used by various institutions. That study also refers to the need for more usability studies on available tools (Luxton-Reilly, 2009).

Since this dissertation is contextualized in Portugal, namely at a public higher education institution, ISCAP, it is important to note the need for a freeware tool. Tuition fees in Portuguese universities and polytechnics are low in comparison to other countries: as reported on the DGES website, the maximum value fixed for tuition fees is €697 (Propinas, n.d.). There is usually hardly any financial availability to acquire new software. Therefore, to innovate and bring different technologies to classrooms, lecturers often depend on their own tools, which in some cases means using freeware.

The work conducted in the context of this dissertation over several years made it possible to answer these research questions. The outcomes of this work have been presented in the original publications (articles I–VII) and thus validated by the relevant research community. Figure 6 illustrates the new e-assessment approach that resulted from the research developed.

Figure 6. New students’ e-assessment approach

To answer the research questions, research work was conducted in successive phases, and the outcomes were presented in articles I–VII.

Articles I to IV result from the research work developed to answer RQ1, whereas articles V to VII present the work developed to answer RQ2.


1.3 Research process

Previous research (Babo et al., 2012; Babo & Azevedo, 2012; 2009) was essential to raise awareness of the importance of LMSs in HEIs and led to the development of RQ1. This work made it possible to understand how LMSs were being used and to identify the most used systems. It also assisted in characterizing internet usage and its role in education by profiling Portuguese students and by gathering the most suitable digital pedagogy processes and tools.

The answer to RQ1 was fully addressed in articles I to IV, where the planning of the new assessment method was designed and the opinions and perceptions of students and teachers on the method were gathered.

For these publications, the methodology used was action research (further explained in Chapter 3).

Article I, “Planning and implementing a new assessment strategy using an e-learning platform,” describes the project for implementing a new assessment strategy to evaluate practical topics in undergraduate degree programs. It also presents two cycles of the action research methodology used to conduct the research.

Article II, “Students' perceptions about assessment using an e-learning platform,” presents an analysis and discussion of questionnaires and interviews administered to the students who underwent the new assessment method, in order to understand their perceptions.

Article III, “E-assessment with multiple choice questions: A qualitative study of teachers' opinions and experience regarding the new assessment strategy,” analyzed, through a qualitative study, the perceptions of the senior lecturers who used MCQ quizzes in their courses. These perceptions were collected from a focus group interview and resulted in the identification of advantages and disadvantages, improvements to the strategy, and considerations of whether this assessment method develops and evaluates the same knowledge and skills as CCAA.

Article IV, “E-assessment with multiple choice questions: A 5-year study of students’ opinions and experience,” presents a qualitative and quantitative study on the students' perceptions and experiences of the new assessment method, gathered through surveys and interviews over five academic years (Figure 7).

Figure 7. eMCQ design, implementation, and analysis research timeline

Articles I to IV also contributed to the development of RQ2, since these articles provided the advantages and disadvantages of MCQ quizzes and the opinions of the stakeholders. RQ2 is then answered in articles V to VII, supported by design science research methodology (further explained in Chapter 3).

First, existing tools to assess groups of students had to be reviewed and their usability analyzed. Once the need for a new freeware tool for self- and peer-assessment in workgroups was ascertained, the research process to design and develop WebAVALIA began. Later, the mathematical formulations for self- and peer-assessment in workgroups, and how to improve them, were discussed using WebAVALIA as an example.

Article V, “Self and peer e-assessment: A study on software usability,” presents seven e-assessment software tools, their features and functionalities, and an evaluation and comparison of parameters between tools, based on usability and user experience definitions.

Article VI, “Improving workgroup assessment with WebAVALIA: The concept, framework, and first results,” presents a tool that allows its users to perform self- and peer-assessment according to their perspective of the performance of each group member in the work development. The design and development of WebAVALIA are presented with the support of the design science research (DSR) methodology. WebAVALIA was developed to support teachers in distinguishing individual performance in workgroups while providing fair and unbiased assessments.

Article VII, “Analysing and improving mathematical formulation of WEBAVALIA: A self and peer assessment tool,” introduces formulation problems concerning mark weighting in self and peer evaluation. It presents the advantages and disadvantages of candidate solutions for the formulation and suggests how to adapt them to different tools, feedback surveys, and teachers' evaluation methods. It also presents the implementation of the formulation in the software tool WebAVALIA.
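To make the mark-weighting issue concrete, one common family of schemes distributes the group mark according to the scores each member receives from the others. The sketch below is a hypothetical illustration under assumed rules (every member rates every member, including themselves, on a 0–100 scale); it is not WebAVALIA's published formulation.

```python
# Hypothetical peer-weighting sketch; NOT WebAVALIA's actual formulation.
# ratings[i][j] = score member i gave member j (self-rating included), 0-100.

def individual_marks(group_mark, ratings):
    n = len(ratings)
    # Total score each member received from all assessors.
    received = [sum(ratings[i][j] for i in range(n)) for j in range(n)]
    mean_received = sum(received) / n
    # Each member's weighting factor: received score relative to the group mean.
    factors = [r / mean_received for r in received]
    return [round(group_mark * f, 1) for f in factors]

individual_marks(15.0, [[80, 90, 70],
                        [85, 90, 65],
                        [80, 95, 60]])  # → [15.4, 17.3, 12.3]
```

Schemes of this kind raise exactly the questions Article VII discusses, for example whether self-ratings should count and whether an individual mark may exceed the scale maximum.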

The reasoning behind each research development is explained in Figure 8. The wave shapes represent the problems and motivation, and the green boxes express each research development towards a solution.

Figure 8. Overview of the research development process

Despite the good results obtained with the research developed, as described in the referred articles, the work is not considered closed. In fact, the constant evolution of these fields equally foresees the possibility of continually innovating e-assessments. Accordingly, fostered by the good results obtained thus far, the research will continue in a constant search for improvements.

1.4 Dissertation structure

This dissertation comprises five chapters and seven original articles (I to VII), which are provided at the end of the dissertation. The first chapter presented the research background and motivation for the study, the problem statement and research questions that led to the research development, and the research process.

The second chapter provides a wider view of the literature underlying the research process by presenting the most critical topics to better understand the problem and strategies to improve the assessment of practical topics in crowded classrooms. Therefore, the literature search focused on e-assessment and collaborative work.

Subsequently, the third chapter presents the research methodologies used in the dissertation study. Mixed-methods research was implemented by applying action research methodology to answer RQ1 and design science research methodology to answer RQ2. These topics are followed by an explanation of the specific research methodologies and data collection techniques applied throughout the research development. This chapter concludes with research ethics.

The fourth chapter provides the findings of the research process, grouped in the following categories: (1) the e-assessment implementation process; (2) stakeholders' opinions and perceptions on the MCQ Moodle e-assessment; and (3) the development of the self- and peer e-assessment framework, that is, WebAVALIA.

Finally, Chapter 5 presents the discussion and conclusion. This chapter contains a summary of findings and a study reflection, followed by recommendations to practitioners based on the lessons learned and a reflection on the research quality, strengths, and limitations. It ends by outlining plans for future research.


2. Literature review

2.1 Literature review introduction

Having clearly stated the research questions and goals, it became crucial to conduct a thorough review of the state of the art on the pertinent scientific and technological topics. The results of this study were presented in the dissertation articles, providing evidence of the relevance of the work described there. The approach adopted for conducting the study relied on the identification of topics and representative keywords.

From these keywords, the search for articles was performed in several databases, namely SCOPUS, Google Scholar, ScienceDirect, EBSCO Discovery, and ERIC, among others. This search, however, led to an overwhelming number of results, which would make the study unfeasible within a reasonable period. Therefore, to narrow them down and obtain more specific and meaningful articles that could sustain the research in view, the search was repeated by combining two or more keywords. Nonetheless, this search with combined keywords still led to a significant number of results (Figure 9), even if these were more targeted.

Figure 9. Google Scholar keywords search (Retrieved on 2020/06/22)

Despite having fewer results with the combined keyword search, these could clearly still be narrowed. Correspondingly, it was decided to establish a three-year period to obtain recent articles, limiting the results to journal articles ranked in JUFO and rejecting conference papers as well as journals not ranked in JUFO. JUFO, created by the Finnish scientific community, is a publication forum that rates and classifies scientific publication channels to “support the quality assessment of academic research” (Julkaisufoorumi, n.d.). Imposing such restrictions considerably reduced the list of returned results. In fact, in some cases, the list was so reduced that it did not contain any useful results. In those situations, the time range was extended until appropriate articles were found. Likewise, the restrictions concerning the articles' sources were dropped: it proved extremely useful to include in the results relevant articles appearing as citations but published before the established time range or by other sources.
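The narrowing-and-widening procedure described above can be sketched as a simple filtering loop. The record fields and the JUFO-rank flag below are illustrative assumptions, not an actual database schema.

```python
# Illustrative sketch of the literature-search filtering described above.
# Record fields ("year", "jufo_ranked", "keywords") are hypothetical.

def select_articles(records, keywords, latest_year=2020, window=3, max_window=15):
    """Keep JUFO-ranked articles matching all keywords; widen the year
    window whenever the filtered list comes back empty."""
    while window <= max_window:
        hits = [r for r in records
                if r["jufo_ranked"]
                and r["year"] > latest_year - window
                and all(k in r["keywords"] for k in keywords)]
        if hits:
            return hits
        window += 3  # no useful results: extend the time range
    return []
```

In practice the evaluation was, of course, manual; the sketch only mirrors the decision rule (restrict first, then relax the time range when nothing useful remains).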

The search results had to be evaluated to find the most relevant sources, which implied reading several studies to determine whether they were relevant to the topics at hand. The evaluation was mainly done by reading the studies' titles, abstracts, introductions, and conclusions to understand their relevance. These results led to the discussion of several important topics described in the dissertation articles. The literature findings were essential to support the research activities.

First, it was necessary to understand the challenges in the current higher education context. Then, given the importance of assessment practices in this context and that assessment procedures are the main goal of this dissertation work, it was imperative to understand them. Additionally, the ICT readiness of teachers and students, along with the adoption of ICT in HEIs, led to a broader implementation of e-assessment.

Simultaneously, some literature gaps were found. Previous research had determined that the use of MCQ in an LMS would be a feasible way to assess the increasing number of students per class. Meanwhile, an LMS platform—Moodle—was already being used at ISCAP as a support for learning and assessment. Therefore, ISCAP’s lecturers were already experts in its use. Thus, the choice to use it to assess practical topics, through eMCQ quizzes, arose naturally. Nonetheless, the different LMS platforms, their characteristics, and functionality had to be explored to find the most suitable for this purpose.

Meanwhile, the literature on the use of eMCQ also had to be explored. Through this search, it was ascertained that, at that time, despite the existence of several publications on the use of eMCQ to assess students, these studies usually addressed its use to assess theoretical topics. Considering that the issue was the assessment of practical topics, the motivation to pursue this subject became clearer. Therefore, one goal of this dissertation study is to present the strategy planning and implementation of a new assessment method capable of assessing practical IS topics in crowded classrooms while maintaining the same learning outcomes and not compromising students' learning process.

Consequently, a research team composed of ISCAP lecturers from the IS department was gathered to develop and manage a project to shift the assessment process. The team decided that e-assessment could be a solution to assess practical topics in crowded classrooms. Thus, a project was delineated, in which the research team members had to combine their individual efforts to achieve the goal. In this situation, all the stakeholders involved must be considered, since they play a vital part in the success of the project. During the literature collection, there was also a search for publications that addressed the stakeholders' opinions, which proved difficult to find.

Despite several publications that gathered and analyzed the students' opinions about these quizzes' usability and satisfaction, the same could not be said about the lecturers. Most papers in the literature expressed the lecturers' experiences and observations when implementing such an assessment method. However, they did not gather their formal opinions in a scientific discussion. Therefore, the research development tried to fill this gap.

From the research process to develop the dissertation study, the author determined, using the stakeholders' opinions, that eMCQ could not efficiently assess all the expected skills and competencies in practical topics. Therefore, it was necessary to explore complementary assessment approaches, such as collaborative learning, to understand which methods could support the assessment. Nonetheless, since the stakeholders were in favor of maintaining the already-in-use problem-based learning method, the decision was made to increase the weight of this group project in the students' assessment. The combination of these projects with eMCQ quizzes was considered an effective strategy to assess students and ensure all the expected learning outcomes would be achieved.
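In grading terms, combining the two components amounts to a weighted average. The sketch below is only an illustration: the 50/50 weights and the 0–20 grading scale are assumptions, as the text does not fix specific values here.

```python
# Hypothetical combination of continuous eMCQ quiz scores with the PBL
# group-project mark; the weights and the 0-20 scale are assumptions.

def final_grade(quiz_scores, project_mark, quiz_weight=0.5, project_weight=0.5):
    avg_quiz = sum(quiz_scores) / len(quiz_scores)
    return round(quiz_weight * avg_quiz + project_weight * project_mark, 1)

final_grade([14.0, 16.0, 15.0], 17.0)  # → 16.0
```

Increasing the weight of the group project, as the stakeholders favored, corresponds simply to raising project_weight at the expense of quiz_weight.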

However, a new problem arising from this decision is the lecturers' difficulty in distinguishing an individual's performance and contribution in a workgroup. Consequently, following an iterative approach, it was necessary to search the literature for frameworks capable of supporting self and peer assessments in workgroups and of delineating a distinction between group members. The frameworks presented in the literature were compared to others provided by software companies and used in different environments. This comparison revealed the need for a freeware tool capable of performing both self and peer assessments and available to all communities.

The previous subchapters 1.2 and 1.3 explained how the research process was developed, and Figure 8 presented an overview of that process. The current chapter provides a wider view of the literature underlying the research process. Therefore, the following subchapters present the state of the art concerning the most relevant topics to better understand the dissertation study and substantiate the pertinence of the developments made.

Table 1 below lists the literature that was searched and read as the basis of the research process and development. The table presents the main topics (and subtopics) that compose the publications' literature review and the authors cited for each topic.

Table 1. Authors cited for each publication topic

Article I
Topics and subtopics: Assessment using learning management systems (LMS): information and communication technologies (ICT); higher education (HE); formative and summative e-assessment; multiple-choice question tests (MCQ).
Cited authors: Llamas-Nistal et al. (2013); Dascalu & Bodea (2010); Boticki & Milasinovic (2008); Burrow et al. (2005); Salas-Morera et al. (2012); Prakash & Saini (2012); Ventouras et al. (2010); Stödberg (2012); Triantis & Ventouras (2012).
Total of 31 references.

Article II
Topics and subtopics: E-assessment using LMS: ICT; HE; LMS stakeholders; MCQ; Moodle quizzes; students' perceptions.
Cited authors: Sorensen (2013); Burrow, Evdorides, Hallam, & Freer-Hewish (2005); Eccles et al. (2012); Hodgson & Pang (2012); Jawaid et al. (2014); Walker et al. (2008); Salas-Morera et al. (2012); Folden (2012).
Total of 9 references.

Article III
Topics and subtopics: E-assessment with MCQ in Moodle LMS: HE; ICT; teacher and student perceptions; advantages of MCQ assessment; negative aspects of MCQ tests; MCQ in non-theoretical learning; computer-assisted tests; problem-based group work; competences and skills development.
Cited authors: Dermo (2009); Stödberg (2012); Sorensen (2013); Sim et al. (2004); Öz (2014); Babo & Azevedo (2015); Babo & Azevedo (2013); Ferrão (2010); Jordan & Mitchell (2009); Llamas-Nistal et al. (2013); Ellaway & Masters (2008); Nicol (2007); Maier et al. (2016); Miguel et al. (2015); Ventouras et al. (2010); Douglas et al. (2012); Hodgson & Pang (2012); Fernández-Sanz et al. (2017); Kantane et al. (2015); Youssef et al. (2015); Crisp (2009); Yonker (2011).
Total of 36 references.

Article IV
Topics and subtopics: Assessment and e-assessment: assessment models; continuous assessment; ICT; MCQ; students' perceptions.
Cited authors: Pereira, Flores, & Niklasson (2015); Buzzetto-More & Alade (2006); Kanwar (2012); William (2018); Coryn, Noakes, Westine, & Schröter (2011); Anh (2018); Tyler (1949); Stake (2011); Youker & Ingraham (2014); Stufflebeam (2003); Aziz, Mahmood, & Rehman (2018); López-Pastor & Sicilia-Camacho (2017); Ferrão (2010); Myers (2013); Torres, Lopes, Babo, & Azevedo (2011); Dixson & Worrell (2016); Day, van Blankenstein, Westenberg, & Admiraal (2018); Tuunilaa & Pulkkinen (2015); Bahar & Asil (2018); Okada et al. (2019); Ripley (2017); Ranganath, Rajalaksmi, & Simon (2017); Holmes (2015); Alsadoon (2017).
Topics and subtopics: Learning management systems and multiple-choice questions: Moodle; MCQ quizzes; advantages and concerns of MCQ quizzes; CCAA; students' gain of skills and competencies through MCQ testing.
Cited authors: Fathema, Shannon, & Ross (2015); Koneru (2017); Holmes (2015); Smith & Karpicke (2014); Babo & Suhonen (2018); Llamas-Nistal, Fernández-Iglesias, González-Tato, & Mikic-Fonte (2013); Sorensen (2013); Nicol (2007); Babo & Azevedo (2013); Babo et al. (2015); Douglas, Wilson, & Ennis (2012); Maier, Wolf, & Randler (2016); Jordan & Mitchell (2009); Ferrão (2010); Ellaway & Masters (2008); Miguel, Caballé, Xhafa, & Prieto (2015); Ventouras, Triantis, Tsiakas, & Stergiopoulos (2010); Triantis & Ventouras (2012); Paechter, Maier, & Macher (2010); Fitó-Bertran et al. (2015); Zheng, Ward, & Stanulis (2019); Scouller (1998); Elmas, Bodner, Aydogdu, & Saban (2018); Johnstone & Ambusaidi (2000); Cerutti et al. (2019); Kangasniemi (2016).
Topics and subtopics: Problem-based learning: collaborative work; competencies and skills development.
Cited authors: Alias, Masek, & Salleh (2015); Tiwari, Arya, & Bansal (2017); Khoiriyah & Husamah (2018); Loyens, Jones, Mikkers, & van Gog (2015); Savery (2015); Frank & Barzilai (2004).
Total of 75 references.

Article V
Topics and subtopics: Problem-based learning, the importance of self and peer assessment: assessment; self and peer assessment; acquisition of skills.
Cited authors: Daba, Ejersa, & Aliyi (2017); Frank & Barzilai (2004); Khoiriyah & Husamah (2018); Kızkapan & Bektaş (2017); Hall & Buzwell (2013); Dochy, Segers, & Sluijsmans (1999).
Topics and subtopics: Software tools, usability and user experience.
Cited authors: Đorđević (2017); Shackel (2009); Haaksma, de Jong, & Karreman (2018); Corrao, Robinson, Swiernik, & Naeim (2010); Costabile et al. (2005); Enriquez, Brito, & Orellana (2017).
Topics and subtopics: Assessment tool parameters.
Cited authors: Abelló Gamazo et al. (1992); Đorđević (2017); Nielsen (1993); Muqtadiroh, Astuti, Darmaningrat, & Aprilian (2017); Rolstad, Adler, & Rydén (2011); Smits & Vorst (2007); Van Selm & Jankowski (2006); Edwards, Roberts, Sandercock, & Frost (2004); Mackison, Wrieden, & Anderson (2010); Kokil (2018).
Total of 71 references.

Article VI
Topics and subtopics: Collaborative learning: collaborative learning approaches; PBL; computer-supported collaborative learning (CSCL).
Cited authors: Eshuis et al. (2019); Babo et al. (2020); Chen & Kuo (2019); Hakkarainen et al. (2013); Ruiz-Gallardo & Reavey (2019); Chan & Pow (2020); Hmelo-Silver (2004); Khoiriyah & Husamah (2018); Järvelä et al. (2019); Ludvigsen & Steier (2019); Bell (2010); Kizkapan & Bektas (2017); Loyens et al. (2015); West (2018).
Topics and subtopics: Importance of workgroups to develop skills and competencies: higher education; collaborative practices.
Cited authors: Pai et al. (2015); Vicente et al. (2018); Daba et al. (2017); Wen (2017); Chen & Kuo (2019); Othman et al. (2012); Lavy (2017).
Topics and subtopics: Self and peer evaluation: workgroup; PBL.
Cited authors: McNamara & O'Hara (2008); Farrell & Rushby (2015); Vanhoof et al. (2009); Chang et al. (2020); Alias et al. (2015); Kolmos & Holgaard (2007); Luxton-Reilly (2009); Bell (2010); Cowan (1988); Stefani (1994); Tan & Keat (2005).
Topics and subtopics: Technology-enhanced assessment.
Cited authors: Rodríguez-Triana et al. (2019); West (2018); Hettiarachchi et al. (2014); Cook & Jenkins (2010); Tan & Keat (2005); Gray & Roads (2016); Pereira, Flores, & Niklasson (2016).
Total of 68 references.

Article VII
Topics and subtopics: Assessment and assessment practices; PBL (acquisition of skills and competencies with PBL); workgroups; self and peer assessment; software tools (self- and peer-assessment software tools; evaluation tool characteristics and functionalities).
Cited authors: Bahar & Asil (2018); Buzzetto-More & Alade (2006); Kanwar (2012); Okada et al. (2019); Pereira et al. (2015); William (2018); Hmelo-Silver (2004); Daba et al. (2017); Wen (2017); Borg & Edmett (2019); Dochy et al. (1999); Li (2017); Reinholz (2016); Topping (2009).
Total of 38 references.

Upcoming subchapters will review these concepts, starting with a presentation of the challenges in the current higher education context.

2.2 Education overview

The rapid expansion of knowledge and the impact of globalization have influenced governments regarding the importance of a higher education degree for achieving global competitiveness (Mok, 2016, p. 52). These circumstances led to an increase in students in higher education, consequently bringing new challenges and different perspectives to the operations of these institutions. Some of these challenges entailed varied considerations of teaching and learning quality standards (Giannakis & Bullivant, 2016; Mok, 2016; Schwarz & Westerheijden, 2004).

According to Dlouhá and Burandt (2014), the manner in which students perceive learning affects the efficiency of their learning process and influences their final performance. The learning process is then regarded as a “complex process,” which depends on “students’ individual preferences” and “perceptions of the learning environment and motivations” (Dlouhá & Burandt, 2014, p. 248).

To improve the “learning experience,” the practice of e-learning processes has become more regular in many HEIs (González, 2013, p. 81). These practices are regarded as an improvement in learning quality, since they provide access to education in a cost-effective manner, as well as the means to achieve and improve learning outcomes and performance. E-learning has evolved much in recent years, mainly due to the constant development of information and communication technologies (ICT; Govindasamy, 2001; Kahiigi Kigozi et al., 2008; Poulova et al., 2019; Sorensen, 2013).

Poulova et al. (2019, p. 297) define e-learning as the “use of electronic material and didactic means” to achieve learning outcomes in an effective manner. Since it is based on ICT, it enables a pedagogical model for distance learning in a “flexible learner-centered education” (Azeiteiro et al., 2015, pp. 308–309).

The teaching of IT courses is a practice-oriented activity that aims to endow students with the skills and competencies necessary to thrive in their professional future. The courses are intended to be multidimensional and to provide students with essential business resources. Therefore, since they are strongly oriented toward practice and performance, assessment also needs to be carefully structured. IT assessment must be capable of measuring skills, competencies, and the required theoretical knowledge (Decker et al., 2018; Martins et al., 2019; Tedre et al., 2018).

2.3 Assessment overview

Assessment is considered a “core activity” of the learning process in higher education (Stödberg, 2012, p. 591). It is a way to provide feedback to the students, which also promotes their learning. The assessment is based on the students’ acquisition of learning goals, which besides the theoretical knowledge, also involves the understanding of the learning situations and the development of other skills and competencies. The determination of all these aspects will ensure that the learners achieve the learning outcomes (Dlouhá & Burandt, 2014; Stödberg, 2012).

Assessment models guide how evaluators do, or should do, their evaluation practices. These models establish the “evaluation purposes,” strategies, activities, the people that participate in the assessment process, and the “method choices, and roles and responsibilities of the evaluator” (Coryn et al., 2011). According to Anh (2018), several evaluation models were created in the ‘40s, ‘50s, and ‘60s. For educational program evaluation purposes, the following models were developed: “Tyler’s objective model, Stake’s responsive model, Scriven’s goal-free model, and Stufflebeam’s CIPP model” (Anh, 2018, p. 140). Tyler’s objective model conceptualized evaluation as a comparison between intended and actual outcomes. This model considers “curriculum as a means of aiming toward an educational object.” Tyler’s model is best used when the evaluator needs to identify if the learning outcomes of the programs were met (Anh, 2018, p. 142; Article IV; Tyler, 1949).

Stake’s responsive model “sacrifices some precision in measurement, hopefully to increase the usefulness of findings to persons in and around the program” (Stake, 2011, p. 184). This model is based on “what people do naturally to evaluate things” (Stake, 2011, p. 185) and assumes that “there may be many valid interpretations of the same events, based on a person’s point of view, interest, and beliefs” (Anh, 2018, p. 143). Therefore, the evaluator must consider all these interpretations (Anh, 2018; Stake, 2011).

The goal-free model was developed by Scriven in 1972. This model focuses the evaluation on educational outcomes (Anh, 2018). It is an evaluation where the “evaluator conducts the evaluation without particular knowledge of or reference to stated or predetermined goals and objectives” (Article IV; Youker & Ingraham, 2014, p. 51).

The CIPP (Context, Input, Process, and Product) model is defined as a “comprehensive framework for guiding formative and summative evaluations” (Stufflebeam, 2003, p. 2). This model is based on “learning by doing” (Anh, 2018), since it emphasizes “the evaluation of teaching learning and development process” (Aziz et al., 2018, p. 192) and provides a “view of every element by evaluating (…) from each and every angle” (Aziz et al., 2018, p. 192). The CIPP model also intends to “identify and correct mistakes made in evaluation practice” (Anh, 2018, p. 146) and, therefore, to implement new and innovative practices (Article IV).

Regarding the CIPP model, the assessment can be either formative or summative. According to López-Pastor and Sicilia-Camacho (2017), formative assessment is the method in which, during the learning process, teachers provide information to students to modify their understanding. It can also assist the teacher in adjusting his or her teaching approach (Ferrão, 2010; Myers, 2013; Torres et al., 2011). Summative assessments “intend to capture what a student has learned, or the quality of the learning, and judge performance against some standards” (National
