Can machine scoring deal with broad and open writing tests as well as human readers?

Authors:
Journal: Assessing Writing
ISSN: 1075-2935
Publisher: Elsevier
Publication Date:
Volume: 15
Issue: 2
DOI: 10.1016/j.asw.2010.04.002
Keywords
  • Automated Essay Scoring
  • Machine Scoring of Writing
  • Computer Scoring of Writing
  • Validity of Writing Tests
  • Writing Test Design

Abstract

This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. Such claims about the reliability of machine scoring of writing are usually based on specific and constrained writing tasks, and there is reason to ask whether machine scoring of writing requires specific and constrained tasks to produce results that mimic human judgements. The conclusion of a National Assessment of Educational Progress (NAEP) report on the online assessment of writing, that ‘the automated scoring of essay responses did not agree with the scores awarded by human readers’, is discussed. The article presents the results of a trial in which two software programs for scoring writing test responses were compared with the results of human scoring of a broad and open writing test. The trial showed that ‘automated essay scoring’ (AES) did not grade the broad and open writing task responses as reliably as human markers.
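
The abstract's central question is whether human–machine agreement matches human–human agreement on a broad, open writing task. As a rough illustration of how such agreement is commonly quantified in scoring studies (this is not the article's own analysis, and the scores below are made up for the example), the Python sketch computes exact agreement, adjacent agreement, and quadratic weighted kappa for a human–human pair and a human–machine pair.

    from collections import Counter

    def exact_agreement(a, b):
        """Proportion of responses given identical scores by the two raters."""
        return sum(x == y for x, y in zip(a, b)) / len(a)

    def adjacent_agreement(a, b, tolerance=1):
        """Proportion of responses scored within `tolerance` points of each other."""
        return sum(abs(x - y) <= tolerance for x, y in zip(a, b)) / len(a)

    def quadratic_weighted_kappa(a, b, min_score, max_score):
        """Quadratic weighted kappa between two raters on an integer score scale."""
        scores = list(range(min_score, max_score + 1))
        n = len(a)
        k = len(scores)
        observed = Counter(zip(a, b))   # observed joint score pairs
        marg_a = Counter(a)             # marginals for the chance-expected distribution
        marg_b = Counter(b)
        num = 0.0
        den = 0.0
        for i, si in enumerate(scores):
            for j, sj in enumerate(scores):
                weight = (i - j) ** 2 / (k - 1) ** 2  # quadratic disagreement weight
                obs = observed.get((si, sj), 0) / n
                exp = (marg_a.get(si, 0) / n) * (marg_b.get(sj, 0) / n)
                num += weight * obs
                den += weight * exp
        return 1.0 - num / den

    if __name__ == "__main__":
        # Hypothetical scores on a 1-6 scale; not data from the article.
        human_1 = [4, 3, 5, 2, 4, 6, 3, 4, 5, 2]
        human_2 = [4, 3, 4, 2, 5, 6, 3, 3, 5, 1]
        machine = [4, 4, 4, 3, 4, 5, 4, 4, 4, 3]

        print("human-human exact agreement:  ", exact_agreement(human_1, human_2))
        print("human-machine exact agreement:", exact_agreement(human_1, machine))
        print("human-human QWK:  ", round(quadratic_weighted_kappa(human_1, human_2, 1, 6), 3))
        print("human-machine QWK:", round(quadratic_weighted_kappa(human_1, machine, 1, 6), 3))

Comparing the two kappas (or agreement rates) is one conventional way to test the claim that a machine scorer agrees with a human as well as a second human does; the article's trial reports that, for the broad and open task studied, it did not.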
