Maintaining Content Validity in Computerized Adaptive Testing

Authors
  • Luecht, Richard M.1
  • de Champlain, André1
  • Nungester, Ronald J.1
  • 1 National Board of Medical Examiners, 3750 Market St., Philadelphia, PA 19104, U.S.A.
Type
Published Article
Journal
Advances in Health Sciences Education
Publisher
Springer Netherlands
Publication Date
Jan 01, 1998
Volume
3
Issue
1
Pages
29–41
Identifiers
DOI: 10.1023/A:1009789314011
Source
Springer Nature

Abstract

A major advantage of computerized adaptive testing (CAT) is improved measurement efficiency: better score reliability or mastery decisions can result from targeting item selections to the abilities of examinees. However, this type of engineering solution can result in different content being administered to examinees at different levels of ability. This paper empirically demonstrates some of the trade-offs that can occur when content balancing is imposed on CAT forms or, conversely, when it is ignored. That is, the content validity of a CAT form can actually change across the score scale when content balancing is ignored. On the other hand, efficiency and score precision can be severely reduced by over-specifying content restrictions in a CAT form. The results from two simulation studies are presented as a means of highlighting some of the trade-offs that can occur between content and statistical considerations in CAT form assembly.
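The trade-off the abstract describes can be made concrete with a small simulation. The sketch below is illustrative only, not the authors' simulation design: it uses a two-parameter logistic (2PL) IRT model, maximum-information item selection, a deliberately crude fixed-step ability update, and a simple content-balancing heuristic (restrict selection to the content area currently furthest below its target proportion). All item-pool values, area names, and targets are invented for the example.

```python
import math
import random

def prob_correct(theta, a, b):
    # 2PL IRT model: probability of a correct response at ability theta
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    # Fisher information of a 2PL item at ability theta
    p = prob_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def select_item(theta, pool, administered, targets=None, counts=None):
    """Pick the most informative unused item. If content `targets` are
    given, first restrict the choice to the content area currently
    furthest below its target share (a simple constrained-CAT heuristic)."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    if targets:
        n = sum(counts.values()) or 1
        deficit = {c: targets[c] - counts.get(c, 0) / n for c in targets}
        need = max(deficit, key=deficit.get)
        constrained = [i for i in candidates if pool[i]["area"] == need]
        if constrained:
            candidates = constrained
    return max(candidates,
               key=lambda i: item_information(theta, pool[i]["a"], pool[i]["b"]))

def run_cat(true_theta, pool, length, targets=None, seed=0):
    # Administer a fixed-length CAT; return which items were used and
    # how many came from each content area.
    rng = random.Random(seed)
    theta, administered, counts = 0.0, set(), {}
    for _ in range(length):
        i = select_item(theta, pool, administered, targets, counts)
        administered.add(i)
        counts[pool[i]["area"]] = counts.get(pool[i]["area"], 0) + 1
        correct = rng.random() < prob_correct(true_theta, pool[i]["a"], pool[i]["b"])
        theta += 0.5 if correct else -0.5  # crude step update, not an MLE
    return administered, counts

# Hypothetical pool: area "A" items are highly discriminating near theta = 0,
# area "B" items much less so.
pool = ([{"area": "A", "a": 2.0, "b": 0.0} for _ in range(10)]
        + [{"area": "B", "a": 0.5, "b": 0.0} for _ in range(10)])

_, free_counts = run_cat(0.0, pool, 6)                              # no balancing
_, bal_counts = run_cat(0.0, pool, 6, targets={"A": 0.5, "B": 0.5})  # balanced
```

With no balancing, the information criterion draws every item from area "A", so area "B" content never appears for this examinee; imposing the 50/50 targets restores the content mix but forces the algorithm to use less informative items, mirroring the efficiency loss the paper quantifies.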
