Peer assessment: quality feedback at scale
I'm often asked by clients in both academic and commercial environments how we can balance scale with quality.
Scale, of course, is one of the great benefits of e-learning: we can reach a number of learners far greater than could be reached face-to-face.
But can this be done at the same time as creating a learning experience that feels personalised to each learner's individual needs?
It's a conundrum, because scale and personal attention are by definition in conflict...
...or are they?
Of course there's no simple answer to reconciling these apparently conflicting imperatives. But I'm always interested in ideas that can help us scale the ability to provide valuable, personalised feedback to a written assignment (such as a short-format essay or similar). How can this be done?
First off: not through AI, because we're a long way from having artificially intelligent machines that can do this. There's an emotional component to giving good feedback to a student and machines can't feel.
(Full disclosure: I'm with the camp that holds that machines will never develop human-like emotions.)
But finding ways for students to grade each other—"peer assessment"—can be a great way to achieve the quality we're looking for even in a massive course.
Done properly, peer assessment is carefully calibrated: students must "earn" the right to grade other students by demonstrating that they can apply a standard competency rubric. I like the idea of making this part of a meaningful "gamification" system, too: earning the status of "qualified peer grader" can be a badge of success.
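To make the calibration idea concrete, here is a minimal sketch of how a course platform might decide whether a student has earned grader status. The function names, the 0-5 rubric scale, and the tolerance threshold are all my own illustrative assumptions, not part of any particular platform: the student grades a few essays the instructor has already scored, and qualifies only if their scores agree closely enough with the instructor's.

```python
# Hypothetical calibration check for peer grading (illustrative only).
# A student scores a few "calibration" essays that the instructor has
# already graded against the rubric; the student qualifies as a peer
# grader only if their scores stay close to the instructor's on average.

def mean_absolute_error(student_scores, reference_scores):
    """Average absolute gap between the student's and instructor's scores."""
    pairs = list(zip(student_scores, reference_scores))
    return sum(abs(s - r) for s, r in pairs) / len(pairs)

def is_qualified_grader(student_scores, reference_scores, tolerance=0.5):
    """True if the student's grading agrees closely enough with the reference."""
    return mean_absolute_error(student_scores, reference_scores) <= tolerance

# Instructor-scored calibration essays (rubric scale 0-5, made-up numbers)
reference = [4.0, 2.0, 5.0]

print(is_qualified_grader([4.0, 2.5, 4.5], reference))  # close agreement: True
print(is_qualified_grader([1.0, 5.0, 2.0], reference))  # far off: False
```

Real systems (including those Klemmer's group has studied) use more sophisticated agreement measures and may re-check graders periodically, but the core loop is the same: compare against known-good grades, then grant or withhold the badge.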
For many years I've followed the work of Scott Klemmer, an expert in human-computer interaction at UC San Diego who's done a lot of work in the MOOC space. The video below is from a few years ago, but it gives a very clear explanation of peer assessment and how to set it up. Recommended.