Computer-based assessment of collaborative problem-solving skills
Research | 20 October 2020 | 7 minute read

In education systems around the world, teachers are being tasked with monitoring and improving students’ collaboration skills. One of the major challenges in that endeavour is identifying exactly what collaboration looks like in the classroom, and how student proficiency in it can be described.
Collaboration is much more complex than simply working with others. The literature has shifted from a simple definition of collaboration as working in groups to one in which two or more learners pool knowledge, resources and expertise from different sources in order to reach a common goal.
Collaboration is a complex skill, and measuring it requires not only identifying evidence from directly observable behaviours, but also making inferences about student ability from those demonstrated behaviours. While technology can help identify associated behaviours, it also presents challenges to the measurement of this innately human construct.
Earlier this year, ACER published a Skill Development Framework for Collaboration that takes into consideration major assessments of collaboration. Perhaps the most commonly known of these is the 2015 OECD PISA assessment of collaborative problem solving (CPS). About 125,000 15-year-old students in 52 countries and economies participated in the computer-based PISA-CPS assessment.
To measure the ability of individuals to work in collaborative settings, PISA-CPS had students interact with computer agents rather than other humans across various computer-simulated assessment tasks. Each task involved a scenario with multiple individual items that students had to work through. To communicate with other group members (i.e. the computer agents), students selected a response from a list of pre-defined messages displayed in the task space. Actions such as clicking or dragging and dropping were also implemented in the task space. Each of these behaviours was anticipated to reflect a specific CPS skill.
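To make this concrete, the sketch below shows one way logged task-space events could be tallied against CPS skills. It is a minimal illustration only: the event names, skill labels and the mapping between them are invented here, not the actual PISA-CPS coding scheme.

```python
from collections import Counter

# Hypothetical mapping from logged task-space events to CPS skills.
# The real PISA-CPS coding scheme is far richer; these labels are
# illustrative only.
EVENT_TO_SKILL = {
    "select_message_share_info": "establishing shared understanding",
    "select_message_assign_role": "establishing group organisation",
    "drag_resource_to_agent": "taking appropriate action",
    "click_monitor_progress": "monitoring shared understanding",
}

def summarise_skills(event_log):
    """Count how often each CPS skill was evidenced in a student's log."""
    skills = Counter()
    for event in event_log:
        skill = EVENT_TO_SKILL.get(event)
        if skill:
            skills[skill] += 1
    return skills

# Example log for one student on one task (invented data).
log = ["select_message_share_info", "drag_resource_to_agent",
       "click_monitor_progress", "select_message_share_info"]
print(summarise_skills(log))
```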
One of the challenges in measuring collaboration skills is the impact of group composition. Research shows individuals may perform differently depending on the group to which they are assigned. Group composition therefore has the potential to either enhance or suppress an individual’s ability to show their own skills.
The use of computer agents in PISA-CPS had the advantage of controlling for group dynamics, but limited the generalisability of the results. In addition, constraining the actions students could take to a small set of pre-defined choices posed challenges in allowing for and eliciting students’ creativity, ability to introduce new ideas, and behaviours reflecting negotiation.
Subsequent assessments of collaboration, such as ACER’s assessment of general capabilities, have shown that it is possible to measure negotiation skills using human-human collaboration in computer-based assessment. This assessment uses a combination of HTML pages and software from Google to host and deliver the test content to students via a web browser. Google Docs contain the activity instructions, along with tables in which students enter information as their responses. Google Hangouts is used to host chats between group members, where students collaborate on the activities.
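As a rough illustration of this style of delivery, the sketch below serves an HTML page that embeds a Google Doc published to the web. The Flask app, document ID and page layout are all assumptions made for illustration; the actual ACER platform is not public.

```python
from flask import Flask

app = Flask(__name__)

# Hypothetical ID of a Google Doc published to the web; the real
# assessment's documents and layout are not public.
DOC_ID = "YOUR_PUBLISHED_DOC_ID"

PAGE = f"""
<!doctype html>
<title>Collaboration activity</title>
<!-- Activity instructions and response tables, embedded from Google Docs -->
<iframe src="https://docs.google.com/document/d/{DOC_ID}/pub?embedded=true"
        width="70%" height="600"></iframe>
<!-- A chat panel (e.g. a Hangouts embed) would sit alongside this -->
"""

@app.route("/")
def activity():
    return PAGE

if __name__ == "__main__":
    app.run()
```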
When assessing something that is collaborative, an immediate question is whether one should assess each individual within a group, the group as a whole or both. While PISA-CPS assessed individuals only, ACER’s assessment of general capabilities contained some activities that assessed individuals, and others in which a group score was considered. Each approach carries its own limitations.
A related but distinct scoring issue is whether, given the interactive nature of collaboration, one should assess the final group product, parts of the collaborative process, or both. Whether the solution is correct can be a useful criterion, depending on how it is interpreted, but it is not stand-alone evidence of students’ collaborative ability. Instead, sets of indicators can help identify the steps each student went through to reach that outcome. Teachers would benefit from this information when determining how best to improve their students’ collaborative ability.
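As an illustration of combining product and process evidence, the sketch below scores a hypothetical student on both. The indicator names, weights and scoring rules are invented for this example; they do not reflect ACER’s or PISA’s actual rubrics.

```python
# Hypothetical scoring sketch: combine a final-product score with
# process indicators drawn from chat and activity logs.
def score_student(product_correct, indicators):
    # Invented process indicators and weights, for illustration only.
    process_weights = {
        "shared_information": 1.0,
        "negotiated_roles": 1.0,
        "responded_to_peers": 0.5,
    }
    process_score = sum(weight
                        for name, weight in process_weights.items()
                        if indicators.get(name))
    product_score = 2.0 if product_correct else 0.0
    return {"process": process_score, "product": product_score}

# One student's (invented) evidence: a correct solution, plus two of
# the three process indicators observed in the logs.
print(score_student(
    product_correct=True,
    indicators={"shared_information": True,
                "negotiated_roles": False,
                "responded_to_peers": True},
))
```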
Overall, the differences between the two assessments highlight the different purposes they serve: PISA-CPS is large scale, comparative and policy orientated, while ACER’s assessment of general capabilities is classroom based and formatively orientated. Ultimately, decisions about the design of any assessment of collaboration should be made with its purpose, and the nature of the evidence it aims to collect, firmly in mind, with the knowledge that such decisions may influence the aspects of the construct the assessment can elicit. The benefit of having multiple and varied assessments of collaboration is an increased understanding of the skill – both how it can be elicited through such assessments, and how the behaviours students demonstrate can be associated with it.■
Read the full article:
‘Comparative analysis of student performance in collaborative problem solving: What does it tell us?’ by Claire Scoular, Sofia Eleftheriadou, Dara Ramalingam and Dan Cloney, Australian Journal of Education (October 2020).
Watch the webinar:
On Wednesday 4 November 2020, the Australian Journal of Education hosted a panel discussion with several of the authors of the special issue, including Dr Claire Scoular. Catch up, free and on demand, on '20 Years of PISA in Australia: an AJE special issue'.
Find out more:
Dr Claire Scoular was a presenter at our fully online Research Conference 2021. Watch recordings of all events in the main program through our Research Conference On Demand package, available for a limited time.