Reimagining writing assessment
Research | 20 Dec 2022

At Research Conference 2022, delegates took a closer look at the benefits of different models of assessing writing.
Innovative online formative writing assessment
Dr Sandy Heldsinger and Associate Professor Stephen Humphry from the University of Western Australia shared their research on writing assessment. Their work led to the development of an innovative assessment process that provides the advantages of rubrics, comparative judgements, and automated marking, with few of the disadvantages.
Obtaining reliable and consistent teacher assessments of students’ work is complex. Low teacher-to-teacher reliability and bias in assessment of essays may result in a reluctance to use teachers’ professional judgement to its full potential. Low reliability of scoring is a pressing issue where the data are used for summative purposes, but it is also an issue for formative assessment.
‘Inaccurate assessment necessarily impedes the effectiveness of any follow-up activity, and hence the effectiveness of formative assessment,’ Heldsinger and Humphry said.
Their method enables teachers to make reliable judgements of writing by producing and using a scale that describes learning progressions, and then using assisted marking with the Brightpath software. The automated marking assistant predicts the location of a piece of writing on the scale as a starting point, helping teachers to quickly focus on the right zone of the scale, in much the same way that a search suggestion helps users find the information that is most relevant.
‘This process is designed to help teachers to concentrate on features of writing that are best judged by humans,’ Heldsinger and Humphry said.
‘The use of assisted marking further reduces assessment time by enabling teachers to focus on what they are best placed to assess in performances…teachers assess their own students and provide formative feedback based on their own assessments and familiarity with the students’ work.’
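The presentation does not describe Brightpath's internals, but the assisted-marking idea itself can be sketched: a predictor proposes a starting location on the calibrated scale, and the teacher is shown only the nearby band of exemplars to compare against, rather than the whole scale. The function name, the scale values, and the band width below are all hypothetical illustrations, not Brightpath's actual API.

```python
# Hypothetical sketch of assisted marking: an automated predictor proposes a
# starting location on the writing scale, and the teacher compares the script
# against only the nearby exemplars. The scale and names are illustrative.

SCALE = list(range(0, 101, 10))  # calibrated exemplar scores on the scale


def suggest_band(predicted_score: int, scale=SCALE, width: int = 1):
    """Return the exemplar scores within `width` steps of the predicted
    location, so the teacher judges within a narrow zone of the scale."""
    nearest = min(range(len(scale)), key=lambda i: abs(scale[i] - predicted_score))
    lo = max(0, nearest - width)
    hi = min(len(scale), nearest + width + 1)
    return scale[lo:hi]


# A predicted location of 47 narrows the comparison to three nearby
# exemplars instead of all eleven on the scale.
print(suggest_band(47))  # → [40, 50, 60]
```

The key design point is that the prediction is only a starting point: the human judgement against exemplars remains the assessment, with the automation reducing the search space.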
ACER models of writing assessment
ACER designs and delivers many writing assessments for different contexts, from high-stakes selection to formative use by classroom teachers. In their presentation, Juliette Mendelovits and Dr Judy Nixon, Research Director and Assistant Research Director in ACER’s Assessment and Reporting program, explained three main models of marking.
Holistic scoring captures an on-balance judgement about a piece of writing, and is useful for crystallising writing proficiency in a single score.
Partial analytical scoring gives two kinds of reporting on writing: a summative assessment that can be used to map progress over time or to compare with other results at the same year level, both within and across schools; and formative information that helps teachers focus their own teaching strategies.
Customised criterion scoring has been pioneered by ACER in recent years to accurately assess what a student knows, understands and can do as a writer across a range of contexts. Each student is administered several writing tasks, giving them the opportunity to demonstrate proficiency across a range of writing skills and text types. Each task is marked on a different set of criteria.
‘Each form of writing assessment and scoring has its virtues and deficiencies. A consideration of the purpose and context of the assessment will determine which type of writing instrument is most fit-for-purpose,’ Mendelovits and Nixon said.
Watch Research Conference on demand
Stream over 16 hours of content from Research Conference 2022 to watch at any time that suits you. Featuring leading names in education research and practice from around the globe, including keynotes from Professor Geoff Masters, Associate Professor Lenore Adie and Dr Diane DeBacker, ACER's Research Conference 2022 was held online in August on the theme ‘Reimagining Assessment’.
Tickets to access Research Conference on Demand will be available until 1 June 2023. Find out more at https://www.researchconference.com.au