Reliable assessment of open tasks: why is it so hard?
Assessing complex skills like writing traditionally requires a trade-off between reliability and validity. More closed tasks like multiple-choice questions deliver better reliability, but there are legitimate worries about their impact on classroom practice and about the validity of the inferences commonly drawn from them when they are used in high-stakes contexts. More open tasks like essays are more authentic, but persistent difficulties with assessing them reliably mean that their apparent validity is undermined in practice. In this talk, I’ll assess two possible alternatives: more precise rubrics, and comparative judgement.
Daisy Christodoulou is the Director of Education at No More Marking, a provider of online comparative judgement. She works closely with schools on developing new approaches to assessment. Before that, she was Head of Assessment at Ark Schools, a network of 35 academy schools. She has taught English in two London comprehensives and has served on government commissions on the future of teacher training and assessment. Daisy is the author of Seven Myths about Education and Making Good Progress? The Future of Assessment for Learning, and writes the influential blog https://thewingtoheaven.wordpress.com
National exams and schools’ internal assessment systems have a big impact on what gets taught in the classroom, and often lead to unintended and damaging consequences. How can we change assessment so that it helps to improve education rather than distorting it?
Seven myths about education: What are they and why do they matter?
National Conference 2016
A valid and reliable timesaver? Comparative judgement of Year 6 writing, with Chris Wheadon
Comparative judgement (CJ) is an innovative way of marking complex, open tasks like essays. In this study, we used CJ to assess 300 pieces of Year 6 writing from six schools in England to see whether it could replace traditional moderation processes.