"Response shift bias"—and how to avoid it
There are many challenges with conventional “pre/post” testing as a means of assessing the impact and success of a learning intervention. (By “conventional pre/post testing” I mean, broadly speaking, any approach where learners are asked what they know before taking a course, are asked the same questions again after the course, and any change (the delta) is then measured.)
Perhaps the biggest challenge of all, especially with soft skills or behaviour-based learning, is that a typical learner has an incomplete understanding of what they’re being asked. They’re dealing with “unknown unknowns”, in the famous Donald Rumsfeld formulation.
To illustrate with an example from our own work, we recently designed a course on teamwork for early-career technology professionals, on the verge of making the transition from being a “doer” (in this case a software engineer) to being a leader (leading a team of software engineers, and being accountable for team output not merely their own individual results).
In this example, we were dealing with a user group who were not low on self-esteem (understatement!). They’d been hired into the highly competitive technology division of a leading name in the financial services sector. And, four years or so into their careers, they’d been identified as “high-potential”: future leaders of the company.
Perhaps unsurprisingly, many of this group see themselves already as leaders. After all, they’re the first to come up with ideas and solutions; they work the longest hours and lead the way in terms of commitment and success. If asked to rate their qualities as a leader, a large number of this group would, without false modesty, award themselves an honest “A”.
But do the characteristics listed above reflect the inclusive leadership needed by leading organizations in 2019? Are these individuals showing the behaviours they’ll need to get the most out of a team of people, many of whom will be very different (in terms of social background, professional experience and temperament) from them?
The answer is “almost certainly not”, and this group, following the course we’ve designed, are the first to see it. This means that their pre-course self-assessment is all but worthless as a baseline: the very standard they were rating themselves against has shifted. This is response shift bias.
A potential answer here—not perfect but certainly workable—is the “retrospective pre-post” test. With this approach, learners are asked to self-assess only after the course. The question is framed along the lines of: “Think back to before you took this course. In terms of what you now know about leadership, how do you rate your former self?” The answer provides a baseline against which improvements can be measured.
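To make the mechanics concrete, here is a minimal sketch of how the resulting data might be scored. Everything in it (the learner names, the 1–5 rating scale, the survey structure) is an illustrative assumption, not part of any specific survey tool: the only point is that both ratings come from the same post-course questionnaire, so the delta is computed against the retrospective baseline rather than a naive pre-course one.

```python
# Sketch: scoring a retrospective pre-post test.
# Assumed (hypothetical) data: after the course, each learner gives two
# ratings on a 1-5 scale -- how they'd rate their *former* self, knowing
# what they know now ("retro_before"), and how they rate themselves today
# ("after").

def retrospective_deltas(responses):
    """Return each learner's improvement, measured against the
    retrospective baseline gathered after the course."""
    return {
        name: after - retro_before
        for name, (retro_before, after) in responses.items()
    }

# Hypothetical post-course survey responses: (retro_before, after)
responses = {
    "learner_1": (2, 4),  # rated their former self 2/5 once they knew more
    "learner_2": (3, 5),
    "learner_3": (1, 4),
}

print(retrospective_deltas(responses))
# {'learner_1': 2, 'learner_2': 2, 'learner_3': 3}
```

The contrast with conventional pre/post testing is that the baseline here already reflects the learner’s post-course understanding, so the delta isn’t distorted by the “unknown unknowns” problem described above.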
It’s a very useful and (once you get past the rather off-putting and clunky name) quite straightforward instrument. Highly recommended!
There are plenty of good articles on the web; for convenience I’ve pasted a couple of links below.
A decent “primer” can be found at Area365.org
A more scholarly paper on the topic is here.