Breadth and depth

Yesterday I had a very thought-provoking interaction with @powley_r, who kindly took the time to have a look at the prototype and provided some interesting feedback.

I mulled this over for a while. Christodoulou’s post on MCQs does feature some great examples (as does Joe Kirby’s), but I think questions like these serve a slightly different purpose to the certainty-based approach I’m advocating.

Domain sampling = Breadth

A traditional MCQ (or in fact any closed question) is generally used as a sample of a much larger domain of knowledge. By asking a series of closed questions, sampling knowledge from across a domain, you can make a fairly accurate estimate of the proportion of the domain that the pupil knows. This is a useful exercise, but there are some limitations.

  • The marking is binary (right/wrong) and does not distinguish mastery from guessing or insecure knowledge – so ‘mastery’ ends up being defined purely in terms of breadth of knowledge and the ability to comprehend questions.
  • Guessing probability undermines the assessment of both easier and harder concepts:
    • Direct questions about easy concepts tend to be avoided because they are guessable – which distorts the sampling exercise.
    • Even with 4 options there is a 25% chance of a successful guess, and if the pupil can eliminate two implausible options that rises to 50% (the sketch after this list makes the numbers concrete).
  • Devising complex MCQs that are both fair and testing is very difficult (I should know – it’s a large part of my day job!)
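
To make the guessing problem concrete, here is a minimal sketch of the marks a pure guesser can expect under binary marking (the quiz length and option counts below are illustrative, not taken from any real assessment):

```python
# A minimal sketch of the guessing problem under binary (right/wrong)
# marking. The quiz length and option counts are illustrative only.

def expected_guess_score(num_questions: int, options_left: int) -> float:
    """Expected mark from pure guessing when `options_left` choices
    remain after eliminating the obviously wrong distractors."""
    return num_questions * (1 / options_left)

if __name__ == "__main__":
    # A 20-question quiz with 4 options each: a guesser expects 5 marks.
    print(expected_guess_score(20, 4))  # 5.0
    # If each question can be narrowed to 2 plausible options,
    # guessing alone yields half marks.
    print(expected_guess_score(20, 2))  # 10.0
```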

Certainty = Depth

The certainty-based scoring approach controls against guessing. It can be applied to any closed question and works across a much broader difficulty spectrum than a simple right/wrong score. However, its real strength is the insight it can provide on core concepts – the ones you need every pupil to know in order to progress.
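
This post doesn’t commit to a particular scoring scheme, so as an illustration here is a minimal sketch using Gardner-Medwin’s well-known certainty-based marking (CBM) scale: pupils declare a certainty of 1, 2 or 3 alongside each answer; a correct answer scores 1, 2 or 3 marks respectively, while a wrong one scores 0, -2 or -6.

```python
# A minimal sketch of certainty-based marking (CBM), assuming the
# Gardner-Medwin scale: certainty levels 1-3, marks 1/2/3 if correct,
# 0/-2/-6 if wrong. This is one possible scheme, not necessarily the
# one used in the prototype discussed in the post.

CBM_MARKS = {
    1: (1, 0),   # low certainty: small reward, no penalty
    2: (2, -2),  # medium certainty
    3: (3, -6),  # high certainty: big reward, heavy penalty
}

def cbm_mark(correct: bool, certainty: int) -> int:
    """Mark a single answer given the declared certainty (1, 2 or 3)."""
    reward, penalty = CBM_MARKS[certainty]
    return reward if correct else penalty

def expected_mark(p_correct: float, certainty: int) -> float:
    """Expected mark for a pupil whose true chance of being right is p."""
    reward, penalty = CBM_MARKS[certainty]
    return p_correct * reward + (1 - p_correct) * penalty

if __name__ == "__main__":
    # The penalties are tuned so that declaring the certainty level
    # that honestly reflects your chance of being right maximises
    # your expected mark; high-certainty guessing doesn't pay.
    for p in (0.4, 0.7, 0.9):
        best = max((1, 2, 3), key=lambda c: expected_mark(p, c))
        print(f"p={p}: best certainty {best}, "
              f"expected mark {expected_mark(p, best):.2f}")
```

Under this scale the break-even points fall at roughly two-thirds and four-fifths: below a two-in-three chance of being right it pays to declare low certainty, which is exactly the anti-guessing property described above.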

Scoring for certainty gives much more insight into the ‘depth’ of pupil knowledge. This is the type of thing we are often trying to test via open questions, i.e. do they understand these concepts well enough to apply them to this problem or creative task?

Simple closed questions treat knowledge as a binary (know/don’t know) and this leads to inaccuracy. By measuring certainty, we get a much more accurate and reliable result (as consistently demonstrated in controlled studies).

When to use

I think there is probably a place for both approaches, not least because assessing and feeding back about certainty can be emotionally intense. Binary-marked closed questioning is a quick and easy way to measure domain coverage and produces easy-to-read data; assessing certainty gives more insight into levels of mastery, but on larger assessments some of that insight may be overwhelming. Both are underused (in English schools) to the detriment of teacher workload.
