Distractor
A distractor is an incorrect option in a multiple-choice item. In a standard four-option item, there is one correct answer (the key) and three distractors. The quality of a multiple-choice test depends as much on its distractors as on its keys.
How Distractors Work
A well-designed distractor is plausible but wrong. It should attract test-takers who lack the target knowledge or skill, while being clearly rejected by test-takers who possess it. The distractor must create a genuine choice — not trick the learner, but test whether they know the material.
Hughes (2003) notes that the key requirement is that distractors be definitely wrong but not obviously wrong. An obviously wrong option does not contribute to measurement — it effectively reduces the item to fewer choices, inflating the chance of guessing correctly.
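The inflation of guessing odds is simple arithmetic: with k options a test-taker genuinely has to choose between, blind guessing succeeds with probability 1/k. A minimal sketch (the function name is illustrative):

```python
# Chance of answering correctly by blind guessing, as a function of the
# number of options a test-taker actually has to choose between.
def chance_score(effective_options: int) -> float:
    return 1 / effective_options

# A four-option item with one obviously wrong distractor behaves like a
# three-option item for anyone who spots it.
print(f"4 effective options: {chance_score(4):.0%}")  # 25%
print(f"3 effective options: {chance_score(3):.0%}")  # 33%
print(f"2 effective options: {chance_score(2):.0%}")  # 50%
```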
Distractor Quality
Effective Distractors
- Are plausible to learners who lack the target knowledge
- Address common misconceptions or typical errors
- Are grammatically parallel to the correct answer
- Are similar in length and complexity to the correct answer
- Cannot be eliminated through test-wise strategies alone
Ineffective Distractors
| Problem | Example | Why it fails |
|---|---|---|
| Obviously wrong | In a reading comprehension item, an option that contradicts basic common sense | Everyone eliminates it; item becomes effectively 3-option |
| Too similar to key | Two options that are both arguably correct | Introduces construct-irrelevant difficulty — the item tests nitpicking, not comprehension |
| Grammatically inconsistent | Distractors that do not fit the sentence frame | Can be eliminated by grammar alone, without understanding the content |
| Length mismatch | The correct answer is significantly longer/shorter than distractors | Test-wise learners learn that the longest (or most qualified) option is often correct |
| Absurd | Clearly humorous or impossible options | Reduces effective choice; wastes test space |
| "All of the above" / "None of the above" | Formulaic options used to fill space | Often signals guessable patterns; limited diagnostic value |
Distractor Analysis
Distractor analysis is a component of item analysis that examines how each distractor performed. Key questions:
Does each distractor attract responses? A distractor chosen by no one is non-functional: it adds nothing and should be revised or replaced. Haladyna & Downing (1993) found that many MC tests contain non-functional distractors, effectively reducing their items to two- or three-option items.
Do distractors attract low-ability test-takers? An effective distractor has a negative discrimination index — it is chosen more by weaker learners than stronger ones. If a distractor attracts strong learners, it may be ambiguous or the key may be wrong.
Is the response distribution reasonable? In a well-functioning four-option item, the key should attract the most responses, and the three distractors should share the remainder reasonably evenly. If 90% choose the key and the other 10% split among three distractors, the item is too easy to discriminate well.
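The checks above can be sketched as a small screening routine. Note the assumptions: `check_distribution` is a hypothetical helper, and the 5% cutoff for a non-functional distractor and the 90% too-easy cutoff are common conventions, not fixed rules.

```python
# Screen a single item's response distribution for the problems described
# above. `shares` maps each option to the proportion of test-takers who
# chose it; `key` names the correct option.
def check_distribution(shares: dict[str, float], key: str,
                       min_share: float = 0.05) -> list[str]:
    warnings = []
    # The key should attract the most responses.
    if shares[key] < max(v for k, v in shares.items() if k != key):
        warnings.append("key does not attract the most responses")
    # A near-unanimous key suggests the item is too easy.
    if shares[key] > 0.90:
        warnings.append("item may be too easy (key chosen by > 90%)")
    # Distractors drawing almost no responses are non-functional.
    for opt, share in shares.items():
        if opt != key and share < min_share:
            warnings.append(f"distractor {opt} is non-functional ({share:.0%})")
    return warnings

# Example: one distractor draws almost no responses.
print(check_distribution({"A": 0.02, "B": 0.78, "C": 0.12, "D": 0.08},
                         key="B"))
# → ['distractor A is non-functional (2%)']
```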
Example Distractor Analysis
| Option | % choosing | High-ability group | Low-ability group | Interpretation |
|---|---|---|---|---|
| A | 12% | 5% | 22% | Good distractor — attracts weaker learners |
| B* (key) | 65% | 88% | 35% | Functions well — discriminates by ability |
| C | 18% | 5% | 30% | Good distractor |
| D | 5% | 2% | 13% | Marginally functional — could be improved |
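The interpretations in the table can be reproduced with an option-level discrimination index, here taken as the high-group proportion minus the low-group proportion. A sketch using the table's figures:

```python
# Option-level discrimination: proportion choosing the option in the
# high-ability group minus the proportion in the low-ability group.
# Figures come from the example table above.
options = {
    "A": (0.05, 0.22),  # (high-group, low-group) proportions
    "B": (0.88, 0.35),  # key
    "C": (0.05, 0.30),
    "D": (0.02, 0.13),
}

for opt, (high, low) in options.items():
    d = high - low
    # The key should discriminate positively; distractors negatively.
    role = "key" if opt == "B" else "distractor"
    print(f"{opt} ({role}): D = {d:+.2f}")
# → A (distractor): D = -0.17
#   B (key): D = +0.53
#   C (distractor): D = -0.25
#   D (distractor): D = -0.11
```

The key's positive index and every distractor's negative index are what the table's interpretation column describes: each wrong option pulls weaker learners, while stronger learners converge on the key.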
Writing Good Distractors
Base distractors on real learner errors. The best distractors come from error analysis of actual learner responses. If learners commonly confuse "affect" and "effect," that confusion makes an excellent distractor for a vocabulary item.
For reading comprehension: Distractors should reflect common misreadings — taking information from the wrong part of the text, confusing details with main ideas, misinterpreting reference words, falling for surface-level word matching.
For grammar: Distractors should reflect typical interlanguage errors at the target level — L1 transfer errors, overgeneralisation, developmental errors.
For vocabulary: Distractors should be semantically related to the key (same field, similar form, or common confusions), not random unrelated words.
Avoid "none of the above." Hughes (2003) argues against this option because it does not require the test-taker to know the right answer — only to know that the listed options are wrong.
Why It Matters
Distractor quality directly affects test quality. Poor distractors:
- Reduce reliability by introducing noise (test-takers eliminate transparent distractors and guess among the rest)
- Undermine construct validity by testing test-wiseness rather than the target skill
- Create negative washback when learners learn to game items rather than develop language ability
- Waste testing time on items that do not discriminate
For EH test development, systematic attention to distractor quality — using error analysis data, piloting items, and running distractor analysis — is one of the most efficient ways to improve test quality.
Key References
- Hughes, A. (2003). Testing for Language Teachers (2nd ed.). Cambridge University Press.
- Brown, H. D., & Abeywickrama, P. (2010). Language Assessment: Principles and Classroom Practices (2nd ed.). Pearson.
- Haladyna, T. M., & Downing, S. M. (1993). How many options is enough for a multiple-choice test item? Educational and Psychological Measurement, 53(4), 999-1010.
- Heaton, J. B. (1988). Writing English Language Tests. Longman.
- Alderson, J. C., Clapham, C., & Wall, D. (1995). Language Test Construction and Evaluation. Cambridge University Press.