Publication search with "automated item generation" as keyword
Maertens, Rakoen; Götz, Friedrich M; Golino, Hudson F; Roozenbeek, Jon; Schneider, Claudia R; Kyrychenko, Yara; Kerr, John R; Stieger, Stefan; McClanahan, William P; Drabot, Karly
...
Published in
Behavior Research Methods
Interest in the psychology of misinformation has exploded in recent years. Despite ample research, to date there is no validated framework to measure misinformation susceptibility. Therefore, we introduce Verification done, a nuanced interpretation schema and assessment tool that simultaneously considers Veracity discernment, and its distinct, meas...
Westacott, R; Badger, K; Kluth, D; Gurnell, M; Reed, MWR; Sam, AH
BACKGROUND: Automated Item Generation (AIG) uses computer software to create multiple items from a single question model. There is currently a lack of data looking at whether item variants to a single question result in differences in student performance or human-derived standard setting. The purpose of this study was to use 50 Multiple Choice Ques...
Pugh, Debra; De Champlain, André; Gierl, Mark; Lai, Hollis; Touchie, Claire
Published in
Research and Practice in Technology Enhanced Learning
The purpose of this study was to compare the quality of multiple choice questions (MCQs) developed using automated item generation (AIG) versus traditional methods, as judged by a panel of experts. The quality of MCQs developed using two methods (i.e., AIG or traditional) was evaluated by a panel of content experts in a blinded study. Participants ...