
4.4.2 Design and development of WebAVALIA

The analysis previously conducted on the use of e-MCQ quizzes to assess practical topics revealed the need to combine that assessment with other assessment approaches. Accordingly, it was decided that this other approach would be the implementation of workgroups and that, to make the assessments fair, self- and peer-assessment would be implemented.

Therefore, there was a need to have in-depth knowledge about collaborative work and about the contribution and importance of the workgroup for students’ learning and skills development.

Consequently, once this knowledge was acquired and the importance of collaborative work was confirmed, the development of WebAVALIA began, as it was concluded that the existing tools for workgroup assessment did not offer all the key features presented previously. WebAVALIA was designed and developed considering the theoretical framework illustrated in Figure 19 and presented in Article VI.

Figure 19. Theoretical framework for an unbiased workgroup assessment (based on Article VI)


WebAVALIA’s design and development was supported by DSR methodology (presented in Figure 13, Chapter 3). It aims to assist evaluators in assessing practical work conducted by a group of students. Indeed, it is often the case that the evaluator lacks a good perception of who are the most dedicated or committed students in a group and, therefore, cannot distinguish individual performances and assign fair grades.

The first version of this tool was implemented in 2012 and was called AVALIA (which means “Evaluate” in Portuguese). It was implemented in a higher education setting, where the lecturers used it to assess students. This version was basic, comprising an MS Access database, and was the first approximation of WebAVALIA.

Considering the key features previously identified in 4.4.1, WebAVALIA presents the functionality to assign students to groups, the capacity to restrict voting scores, the possibility to adjust assessment weights, and the capacity to notify students every time an assessment moment occurs. It also allows the students to quickly assign a score to themselves and their peers on their performance and contribution to the workgroup.

The development of software goes through several iterations. In each cycle, new features are developed and implemented, and these need to be evaluated to attain usability and user satisfaction. Additionally, the design cycle of DSR expresses the need to evaluate an artefact and identify its weaknesses in order to improve it and achieve a final, satisfactory design. In WebAVALIA’s development, expert users’ evaluations and the direct observation technique were utilized. During the first iterations of its development, the software evaluation was conducted by gathering the opinions of real users, whether teachers or students, during an assessment moment. While using the tool, the users would express their concerns or problems in a thinking-aloud manner; these were listed to ensure the continuous improvement of the tool. Then, in the next cycle, new features were developed to address these issues.

Besides these methods, which allow the developers to improve the tool itself, the software evaluations were also complemented by surveys. At the end of each academic year, surveys were distributed to every student who had used the software to perform self- and peer-assessment of workgroups during the development of an assignment or a project. These surveys included both open-answer questions, used to gather the advantages and disadvantages of the tool, and five-point Likert scales, important for understanding a respondent’s opinions, experience, and perceived usability when using WebAVALIA. The Likert statements, divided into five categories, are presented in Appendix 4.

As explained before, this self- and peer-assessment framework evolved from a basic version, called AVALIA. AVALIA’s first version was implemented in 2012 in an MS Excel spreadsheet. Then, the second version of AVALIA was implemented in mid-2012. It comprised a relational database with a simple template, designed and developed in MS Access. In 2013, the third version of AVALIA’s database was placed on a network path, which solved one of the problems of the previous versions. Appendix 5 compiles the features and problems of each AVALIA version described by the users (Article VI).

AVALIA could perform self- and peer-assessments of students, and the teacher could distinguish members of workgroups, achieving individual results. However, the process was very laborious and lacked usability. It also did not allow the framework to be used by other institutions outside P. Porto (Article VI).

For this reason, a new version was designed: WebAVALIA. Since it is a web version of the software, it allows access from anywhere. This version was developed and implemented on the Visual Studio 2012 platform (C# language), and an IIS server was configured. Some features implemented in the first version of WebAVALIA are presented in Figure 20.


Figure 20. WebAVALIA’s first version development and feature implementation (Article VI)

The main goals of WebAVALIA are to provide an easy, quick, anonymous, and fair assessment. Easier and faster procedures allow group members to share their opinions about each other’s performance in a more authentic way, leading to unbiased assessments. Anonymity assures the individuals that they can answer without the fear of reprisal. Another important goal is to ensure the assessments’ fairness, so each member achieves a score that reflects his or her actual performance and contribution (Article VI).

There are two main types of users in this tool: the evaluator and the workgroup member (Figure 21). The evaluator is responsible for the assessment (lecturer), and the workgroup member is being assessed (student). The evaluator has access to most features of the tool, being able to create editions in which the evaluation will occur. An edition is related to a specific assignment or project and can be parameterized according to personal preferences for evaluation settings. Conversely, the workgroup member only has access to the voting board and his or her individual profile (Article VI).

Figure 21. User types in WebAVALIA

In this dissertation work, WebAVALIA was applied in a higher education setting; therefore, the evaluator is the lecturer, that is, the person responsible for the assessment, and the workgroup member is the student, the one being assessed. Nonetheless, in this context of self- and peer-assessment, terms such as “evaluate,” “assess,” and “vote” also apply to the students, since they vote and attribute a score to their workgroup colleagues. The lecturer is the mega-evaluator, while the student also acts as an evaluator when voting on his or her own and colleagues’ performance at a certain moment.

As Figure 22 presents, the configurable parameters are the number of assessment moments, the weight of each moment, the relative weight of the self- and peer-evaluations, and the dates of the assessment moments. The number of assessment moments, which can vary between 1 and 3, determines when the voting process occurs. This number and the relative weight of each moment are decided by the evaluator according to the type of assignment or project to be evaluated. Additionally, depending on the course, there can be different assessment approaches, and their importance can vary. Therefore, by being able to decide these parameters, the evaluator has the flexibility to adapt the assessment to the course.


Figure 22. WebAVALIA parameters’ configuration (Article VII)

The evaluator can also choose which moments are more relevant to the assessment and attribute to these a higher or lower weight. Then, the relative self- and peer-evaluation weight can be configured in a range of 1 to 5, where 1 denotes low importance to the final score and 5 high importance. The evaluator, according to his or her understanding of fairness, might choose to assign different values to the self- and to the peer-assessment, as sketched below.
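As a minimal illustration of these configurable settings (the parameters shown in Figure 22), the sketch below models an edition’s parameters and their valid ranges in C#, the language WebAVALIA was implemented in. All type and member names are hypothetical and not taken from WebAVALIA’s source; only the value ranges come from the text.

```csharp
using System;

// Hypothetical model of an edition's assessment parameters.
// Only the ranges (1-3 moments, weights 1-5) come from the thesis text.
public class EditionParameters
{
    public int NumberOfMoments { get; }    // NA: between 1 and 3
    public int[] MomentWeights { get; }    // Wj: 1-5, one weight per moment
    public int SelfWeight { get; }         // S: 1-5
    public int PeerWeight { get; }         // P: 1-5
    public DateTime[] MomentDates { get; } // one voting date per moment

    public EditionParameters(int moments, int[] weights, int self, int peer, DateTime[] dates)
    {
        if (moments < 1 || moments > 3)
            throw new ArgumentOutOfRangeException(nameof(moments), "1 to 3 assessment moments.");
        if (weights.Length != moments || Array.Exists(weights, w => w < 1 || w > 5))
            throw new ArgumentException("Each moment weight must be an integer from 1 to 5.");
        if (self < 1 || self > 5 || peer < 1 || peer > 5)
            throw new ArgumentException("Self and peer weights must be integers from 1 to 5.");
        if (dates.Length != moments)
            throw new ArgumentException("Exactly one voting date per assessment moment.");
        NumberOfMoments = moments;
        MomentWeights = weights;
        SelfWeight = self;
        PeerWeight = peer;
        MomentDates = dates;
    }
}
```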

Another feature that the evaluator can access is the students’ page. On this page, it is possible to view the list of students enrolled in the edition, distribute students to workgroups by assigning a group number to each student, and remove members of groups, if necessary (Figure 23).

Figure 23. WebAVALIA student page

WebAVALIA is designed to offer the members of a group an easy and simple way to evaluate themselves and their peers. On the voting date, each workgroup member need only attribute a score to each group member, including himself or herself, on the voting board, such that the points total 100 (Figure 24). After submitting the votes, the process is complete, and it is repeated on the next voting date. The voting process takes about one minute to complete, and it can occur up to three times.

Figure 24. WebAVALIA voting board (Article VII)
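The voting-board rule just described can be expressed compactly in code. The following is a minimal sketch, assuming a ballot consists of a self-score plus one score per peer; the class and method names are illustrative, not WebAVALIA’s actual code:

```csharp
using System.Linq;

public static class VotingBoard
{
    // A ballot is valid when every score lies within 0-100 and the
    // self-score plus all peer scores sum to exactly 100.
    public static bool IsValidBallot(int selfScore, int[] peerScores)
    {
        bool inRange = selfScore >= 0 && selfScore <= 100
                       && peerScores.All(s => s >= 0 && s <= 100);
        return inRange && selfScore + peerScores.Sum() == 100;
    }
}
```

For example, in a group of four, an equal split would be 25 points per member, so IsValidBallot(25, new[] { 25, 25, 25 }) returns true.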

At the end of the project, the evaluator must autonomously grade each group project. Then, after providing the grade on the results page, the final results can be calculated (Figure 25). These results are computed from the voting scores and can be exported to various formats (csv, xlsx, pdf, xml, etc.) and printed.


Figure 25. Results page on WebAVALIA (Article VI)

To better understand the order of actions in WebAVALIA, Appendix 6 presents a figure with the process of evaluation. Each step must take place in a certain order; therefore, the figure explains the order of each action needed to achieve the expected results. Additionally, the use case diagram of the WebAVALIA system, presented in Figure 26, illustrates the major tasks the user can perform with the system. The first task is registration by the lecturer, who then must wait for profile confirmation by the system administrator. Upon profile confirmation, the lecturer can log into the system. Subsequently, the lecturer can create an edition and configure its parameters. An edition corresponds to a project or assignment that must be previously outlined and delivered to the students. The edition’s characteristics depend on the lecturer’s preferences: it can be, for example, a workgroup project with a longer timespan and many milestones, in which the lecturer attributes a grade to the overall quality of the project, or a workgroup assignment that occurs once or several times in a course. Upon the edition’s creation, the students can register in the corresponding edition.

Figure 26. Use case diagram of WebAVALIA (Article VI)

Afterwards, the teacher needs to allocate the available students to workgroups. Close to the voting date, the teacher must activate the assessment moment, allowing the workgroup members to vote. Before the next voting period, the previous assessment moment must be deactivated. This process must be repeated for every assessment moment, of which there can be at most three.

Subsequently, the teacher must grade the project or assignment autonomously, outside the platform. Then, these project or assignment grades must be registered per workgroup on the results page (Figure 25), which enables the calculation of the final scores for each member. Finally, the reports can be created and then printed or exported.

To reach those results, WebAVALIA uses several mathematical formulations (A, C, and D) that consider the configured parameters, the gathered students’ feedback, and the project’s grade defined by the teacher. As Figure 22 displays, WebAVALIA has configurable parameters, which are listed in Table 12. The variables used in each parameter are described in Table 11. The abbreviations (last column of Table 12) that classify the type of configuration of each parameter are as follows:

● L: configurable by the lecturer;

● S: variables according to students’ opinions;

● N: non-configurable, calculated by WebAVALIA.

Table 11. Variables used in the formulation of WebAVALIA

Variable | Variable description | Range of possible values
g | Workgroup number | 1 to 20
ng | Number of students in a workgroup | 1 to 20
k | Workgroup student | 1 to ng
i | Student in the same workgroup as k | 1 to ng
j | Assessment moment | 1 to 3

The range of possible values in Table 12 varies depending on the parameter. When the parameter is a weight, the value varies from 1 to 5, like a Likert scale. When the parameter concerns a voting score, the value varies from 0 to 100. Finally, the range of values for a grade is 0 to 20, since this is the standard Portuguese range of possible assessment grades.

To better understand the following presentation, the correspondence between the terminology of the WebAVALIA formulas and their equivalent mathematical expressions is detailed below:

● Formula A corresponds to equation (4);

● Formula C corresponds to equation (5);

● Formula D corresponds to equation (6).


Table 12. List of parameters used in the formulation of WebAVALIA

Parameter | Parameter description | Range of possible values | Class.
S | Weight of the self-evaluation | 1-5, integer | L
P | Weight of the peer evaluation | 1-5, integer | L
NA | Number of moments of self and peer assessment | 1-3, integer | L
Wj | Weight of the assessment moment j | 1-5, integer | L
Mgkj | Voting score of student k of workgroup g at assessment moment j | 0-100 | N
Mgk | Final voting score of student k | 0-100 | N
Mg,max | Maximum voting score in the set of voting scores | 0-100 | N
Xgkj | Vote of student k in workgroup g for himself or herself, only at assessment moment j | 0-100 | S
Ygkij | Vote of student k in workgroup g regarding another student i, only at assessment moment j | 0-100 | S
R | Maximum possible grade for the project | 20 | N
Rg | Project’s grade of workgroup g | 0-20 | L
Rgk | Final grade of student k of workgroup g | 0-20 | N

All the students in the workgroup assign the respective voting scores for self- and peer-evaluation. Let us assume $X_{gkj}$ (between 0 and 100) is the voting score of the self-evaluation of student k, and $Y_{gkij}$, $\forall i \neq k \wedge i \leq n_g$, are the peer-evaluation voting scores. The lecturer can control the weights S and P to adjust the relative contributions of the self- and peer-evaluations. Equation (1) presents the average voting score of student k of workgroup g at assessment moment j:

$$M_{gkj} = \left\lfloor \frac{S \cdot X_{gkj} + P \cdot \sum_{i=1,\, i \neq k}^{n_g} Y_{gkij}}{S + P\,(n_g - 1)} \right\rfloor \tag{1}$$
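As a sketch of how equation (1) could be computed, assuming the peer votes concerning student k have already been collected into an array (indexing follows the thesis notation; the class and method names are hypothetical, not WebAVALIA’s code):

```csharp
using System;
using System.Linq;

public static class Scoring
{
    // Equation (1): floored weighted average of the self-vote Xgkj
    // (weight S) and the peer votes Ygkij (weight P) for student k.
    public static int MomentScore(int s, int p, int selfVote, int[] peerVotes)
    {
        int ng = peerVotes.Length + 1;                   // workgroup size
        double numerator = s * selfVote + p * peerVotes.Sum();
        double denominator = s + p * (ng - 1);
        return (int)Math.Floor(numerator / denominator); // Mgkj
    }
}
```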

First, it is important to establish that a condition prevents students from assigning a full score in both the self- and the peer-evaluation. This condition prevents the students from assigning the same score to every member of their workgroup, therefore imposing a comparison between the members of a group. This restriction is important since it allows capturing disparate perceptions and opinions on the performance of the workgroup members. Thus, it allows the lecturer to have a real perspective on the actual performance of the students, prevent biased assessments, and better distinguish students. The condition implemented in WebAVALIA for the X and Y values is:

$$X_{gkj} + \sum_{i=1,\, i \neq k}^{n_g} Y_{gkij} = 100, \quad \forall k \in \{1, \dots, n_g\},\ \forall j, g \tag{2}$$

Condition (2) means that, instead of each student evaluating himself or herself and each colleague with up to 100 points each, the 100 points are distributed among the $n_g$ members, so an equal contribution corresponds to $100/n_g$ points per member. Ideally, all members would perform the work equally, since all are expected to complete the same share of the tasks.

After the students have performed all the assessment moments, a weighted average of all assessment moments for each student is calculated, as suggested by (3):

$$M_{gk} = \left\lfloor \frac{\sum_{j=1}^{N_A} W_j\, M_{gkj}}{\sum_{j=1}^{N_A} W_j} \right\rfloor \tag{3}$$
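Equation (3) admits an equally small sketch: a floored weighted average over the moment scores. As before, the names are illustrative and not taken from WebAVALIA’s implementation:

```csharp
using System;

public static class MomentAggregation
{
    // Equation (3): moment scores Mgkj weighted by the moment weights Wj.
    public static int FinalVotingScore(int[] momentScores, int[] momentWeights)
    {
        double weightedSum = 0, weightSum = 0;
        for (int j = 0; j < momentScores.Length; j++)
        {
            weightedSum += momentWeights[j] * momentScores[j];
            weightSum += momentWeights[j];
        }
        return (int)Math.Floor(weightedSum / weightSum); // Mgk
    }
}
```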

With all marks $M_{g1}, \dots, M_{g n_g}$ calculated, let us assume $M_{g,max}$ is equal to the maximum value of this set, and continue with $M_{gk}$ (3) as the voting score of the self- and peer-evaluation of each student in a workgroup. Thus far, this average mark has been computed based only on the marks assigned by the students. However, it is also important to consider the appreciation of the lecturer concerning the overall performance of the group and the quality of the obtained results. The variable $R_g$ is set by the lecturer and is the project’s grade.

The first formula, A (4), assumes a linear relationship between the final mark and the mark $M_{gk}$ (3). This also guarantees that the students with $M_{gk}=0$ will have a final mark $R_{gk}$ equal to 0 and those with $M_{gk}=M_{g,max}$ will have a final mark of $R_g$. Hence, the formula given by (4):

$$R_{gk} = \frac{M_{gk} \cdot R_g}{M_{g,max}} \tag{4}$$

When tested, this formula produced final grades with a broad range, meaning it caused large differences between the grades of students in the same workgroup. Considering that grades vary from 0 to 20, when this formula is applied in a course where the project is the sole assessment method, a student is only successful in the course by achieving a grade between 10 and 20; consequently, grades in the lower half of the range (0 to 10) mean failing the course. Additionally, if there are various assessment moments, formula A (4) will correctly benefit the students who work the most in each moment. However, it can simultaneously penalize, more than expected, the students whose voting scores are lower than the average $100/n_g$. This distinction among the students’ marks might be broader than the real difference in the students’ performance.

WebAVALIA considers that, after the first assessment moment, the workgroup members must remain the same. Another case where formula A (4) can entail a bigger gap between grades is when the difference between the $M_{gk}$ values is not substantial. For example, assuming $R_g=20$, if one student achieves $M_{g1}=24$ and another $M_{g2}=25$, the difference in $R_{gk}$ will be one grade point. Accordingly, even though there is a minimal distinction between the voting scores, their final grades will differ considerably, between 19 and 20, which might not be fair.
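Working this example through equation (4), and assuming final grades are rounded down to whole values (as the 19-versus-20 outcome suggests):

$$R_{g1} = \left\lfloor \frac{24 \cdot 20}{25} \right\rfloor = \lfloor 19.2 \rfloor = 19, \qquad R_{g2} = \frac{25 \cdot 20}{25} = 20$$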

Following the implementation of formula A (4), formula B was designed. This formula was based on formula A (4) and aimed to narrow the grade range, allowing the less-helpful students to be less penalized.

Nevertheless, during formulation testing, formula B could, in some extreme cases, deliver unfeasible results. For example, when two students had a small disparity between their voting scores, it could result in a considerable and incorrect distinction between their final marks. Therefore, it was decided not to implement this formula in WebAVALIA.

Formula C (5) was developed to address the inflexibility of formula A (4) by shortening the range of grades and diminishing the excessive gap between the lowest and the highest mark in a workgroup. This formula does so by introducing a new parameter $T \in \{1, 1.25, 1.5, 1.75, 2\}$ (see “Step Res. C” in Figure 25). The value of T allows a broader or narrower range between the students’ grades: when T=1, formula C provides a broader range between the grades, while when T=2, the range is narrower. The choice of T is made according to the lecturer’s preferences. Therefore, equation (5) also varies as a function of T.

$$R_{gk} = R_g - \left\lfloor \frac{M_{g,max} - M_{gk}}{M_{g,max}} \cdot \frac{R_g}{T_g} \right\rfloor \tag{5}$$
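As an illustrative check with assumed values $R_g = 20$, $M_{g,max} = 25$, and $M_{gk} = 15$, equation (5) gives:

$$T_g = 1: \; R_{gk} = 20 - \left\lfloor \frac{25 - 15}{25} \cdot \frac{20}{1} \right\rfloor = 20 - 8 = 12, \qquad T_g = 2: \; R_{gk} = 20 - \left\lfloor \frac{25 - 15}{25} \cdot \frac{20}{2} \right\rfloor = 20 - 4 = 16$$

Doubling $T_g$ thus halves the deduction from the group grade, narrowing the range of marks.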

In fact, it is possible to change the value of T if the main objective is to adjust the range of values as a function of the dispersion of the students’ self- and peer-evaluation marks. Using the variable T, formula C allows increasing or decreasing the range of marks, leading to fairer results. The main objective of the implementation of formula C (5) is to offer the lecturer the possibility to modulate the students’ marks in an easy manner: by manipulating the value of a single parameter, the lecturer may allow a higher or lower range of marks. Finally, to better value the highest scores while bringing the marks of the remaining students closer to that of the best student, formula D (6) can be used:

$$R_{gk} = R_g \cdot \sqrt{\frac{M_{gk}}{M_{g,max}}} \tag{6}$$
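To see the three alternatives side by side, the sketch below implements equations (4), (5), and (6). Names and signatures are illustrative; only the arithmetic comes from the equations above:

```csharp
using System;

public static class FinalGrade
{
    // Formula A, equation (4): linear scaling against the group maximum.
    public static double FormulaA(double mgk, double mgMax, double rg)
        => mgk * rg / mgMax;

    // Formula C, equation (5): deduction from the group grade, divided by T.
    public static double FormulaC(double mgk, double mgMax, double rg, double t)
        => rg - Math.Floor((mgMax - mgk) / mgMax * rg / t);

    // Formula D, equation (6): the square root compresses marks towards Rg.
    public static double FormulaD(double mgk, double mgMax, double rg)
        => rg * Math.Sqrt(mgk / mgMax);
}
```

With $R_g = 20$, $M_{g,max} = 25$, and $M_{gk} = 15$, formula A yields 12, formula C with $T = 2$ yields 16, and formula D yields about 15.5; both C and D reduce the gap to the best student’s grade compared with formula A.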

The framework has evolved over time. Some of the implemented alternatives relate to improving the results provided by the tool; therefore, different mathematical algorithms were designed to obtain better and fairer results. Nowadays, WebAVALIA presents three formulas: A, C, and D.

In summary, formula A provides a marked dispersion between grades: in cases where the students achieve close voting scores, their final grades can still differ considerably. Formula C, by introducing parameter T, allows the lecturer to choose the dispersion of the grades’ range, resulting in either a broader or a narrower disparity between workgroup members’ grades.