Adaptive comparative judgment: a 100-year-old novel take on peer assessment

Tuesday, February 23, 2016
Simon Bates (CTLT/PHAS)
Ido Roll (CTLT)
James Charbonneau (PHAS)
Mark MacLean (Math)
Tiffany Potter (English)
Evaluating a peer's work, and the reflection it can prompt on one's own thinking, is a valuable activity to support student learning. What if a system could help students judge the quality of their peers' submissions while also reflecting on their own responses? Using only the input of peers in a cohort, could we produce a ranked list of submissions? And would those judged "best" by students match those chosen by faculty?

We have built a prototype online tool that does exactly this, using an adaptive comparative judgment (ACJ) algorithm. Unlike other tools that support student peer review, ACJ tasks students with reviewing pairs of answers submitted by other students. Once they have submitted their own (text) answer to a problem, students are presented with a pair of answers authored by two of their peers and asked to make a comparative judgment as to which is better (something humans do quickly and, given a shared understanding of what constitutes "good", generally very reliably). The system is adaptive in the sense that it can present students with pairs of answers that have not yet been compared against each other, rather than relying on brute-force random pairings.

This session will present an overview of the tool, which is open source and available online (https://github.com/ubc), along with details of a pilot implementation in English and Science courses.
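To make the ranking idea concrete, here is a minimal illustrative sketch of how pairwise "which is better" judgments can be aggregated into a ranked list, using a simple Bradley-Terry-style score update together with a pair selector that prefers pairs not yet compared. All names, parameters, and the scoring model are assumptions made for illustration; they are not taken from the actual tool at https://github.com/ubc.

```python
# Illustrative sketch only: aggregate pairwise "A beats B" judgments into a
# ranking via Bradley-Terry-style score updates, and pick the next pair to
# judge by preferring pairs that have not yet been compared. All names and
# parameters are hypothetical, not the pilot tool's implementation.
import itertools
import math
import random

def next_pair(submissions, compared):
    """Prefer a pair that has not yet been judged; fall back to random."""
    all_pairs = list(itertools.combinations(submissions, 2))
    unseen = [p for p in all_pairs if frozenset(p) not in compared]
    return random.choice(unseen) if unseen else random.choice(all_pairs)

def rank(submissions, judgments, iterations=100, lr=0.1):
    """Estimate a quality score per submission from (winner, loser) pairs.

    Repeated small gradient steps on the Bradley-Terry log-likelihood:
    the predicted win probability is a logistic function of the score gap.
    """
    score = {s: 0.0 for s in submissions}
    for _ in range(iterations):
        for winner, loser in judgments:
            # Predicted probability that `winner` beats `loser`.
            p = 1.0 / (1.0 + math.exp(score[loser] - score[winner]))
            # Nudge both scores toward the observed outcome.
            score[winner] += lr * (1.0 - p)
            score[loser] -= lr * (1.0 - p)
    return sorted(submissions, key=score.get, reverse=True)

# Toy run with four submissions and a hidden "true" quality that decides
# each simulated judgment.
subs = ["ans_A", "ans_B", "ans_C", "ans_D"]
true_quality = {"ans_A": 3, "ans_B": 2, "ans_C": 1, "ans_D": 0}
compared, judgments = set(), []
for _ in range(12):
    a, b = next_pair(subs, compared)
    compared.add(frozenset((a, b)))
    winner, loser = (a, b) if true_quality[a] >= true_quality[b] else (b, a)
    judgments.append((winner, loser))
print(rank(subs, judgments))  # likely ['ans_A', 'ans_B', 'ans_C', 'ans_D']
```

The sketch implements the "not yet compared" criterion described above; in the ACJ literature, adaptivity more often means choosing the most informative pairing (for example, between submissions with similar current scores), and a selector could be extended in that direction.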
supperseries@science.ubc.ca