Standard setting anchor statements: a double cross-over trial of two different methods [version 1]

Steven Burr, Theresa Martin, James Edwards, Colin Ferguson, Kerry Gilbert, Christian Gray, Adele Hill, Joanne Hosking, Karen Johnstone, Jolanta Kisielewska, Chloe Milsom, Siobhan Moyes, Ann Rigby-Jones, Iain Robinson, Nick Toms, Helen Watson, Daniel Zahra

Research output: Working paper



Context: We challenge the philosophical acceptability of the Angoff method and propose an alternative method of standard setting based on how important it is for candidates to know the material each test item assesses, rather than how difficult it is for a subgroup of candidates to answer each item.

Methods: The practicalities of an alternative method of standard setting are evaluated here, for the first time, in direct comparison with an Angoff method. To negate bias from any leading effects, a prospective cross-over design was adopted involving two groups of judges (n=7 and n=8), both of which set the standards for the same two 100-item multiple choice question tests by the two different methods.
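The cut-score logic underlying an Angoff-style panel can be sketched as follows. This is an illustrative sketch only: the function names and toy ratings are invented, not taken from the study. In the classic Angoff procedure, each judge estimates, per item, the probability that a borderline candidate would answer correctly; a judge's cut-score is the mean of those ratings, and the panel cut-score is the mean across judges.

```python
# Illustrative Angoff-style cut-score calculation (hypothetical sketch;
# data and function names are invented, not from the paper).
from statistics import mean, stdev

def judge_cut_score(item_ratings):
    """Percentage cut-score for one judge: mean of per-item
    borderline-candidate probabilities, scaled to 0-100."""
    return 100 * mean(item_ratings)

def panel_cut_score(all_ratings):
    """Panel cut-score: mean of the judges' individual cut-scores."""
    return mean(judge_cut_score(r) for r in all_ratings)

# Toy panel: 3 judges rating a 4-item test
# (the tests in the study had 100 items each).
ratings = [
    [0.6, 0.7, 0.5, 0.8],
    [0.5, 0.6, 0.6, 0.7],
    [0.7, 0.8, 0.6, 0.9],
]

scores = [judge_cut_score(r) for r in ratings]
print(panel_cut_score(ratings))  # panel cut-score (%)
print(stdev(scores))             # between-judge variability (%)
```

The standard deviation of the judges' individual cut-scores is one simple way to quantify the between-judge variability that the Results compare across the two methods.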

Results: Overall, we found that the two methods took a similar amount of time to complete. The alternative method produced a higher cut-score (by 12-14%) and showed greater variability between judges' cut-scores (by 5%). When using the alternative method, judges reported a small but statistically significant increase (3%) in their confidence that they had set the standard accurately.

Conclusion: This is a new approach to standard setting in which the quantitative differences are slight, but there are clear qualitative advantages associated with use of the alternative method.
Original language: English
Publication status: Published - 3 Feb 2021


  • Angoff
  • standard setting
  • assessment


