To address overlapping needs in clinical neuropsychology and the epidemiology of aging, we report an in-depth analysis of interrater scoring for a test commonly used to assess visual memory in older persons. Benton Visual Retention Test protocols (Form C, Administration A) from 277 community-dwelling male participants (mean age = 65.2 years) in two ongoing cardiovascular epidemiologic studies were scored independently by two trained raters. Interrater reliabilities, calculated as intraclass correlations, were .963 for total number of correct reproductions and .974 for total number of errors. Interrater agreement on categorical determinations of the presence or absence of 6 error codes on each of the 10 designs was evaluated using the kappa statistic. Kappa values for the 10 designs ranged from .780 to .930. Kappa values for the error codes ranged from a high of .976 for omissions to a moderate .737 for size errors. Examining kappa by error code within each design revealed particular problem areas: misplacements on design 9 and size errors on design 10 (kappas of .440 and .480, respectively). Suggestions for improving the accuracy of scoring are presented.
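The categorical agreement statistic reported above is Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. As an illustrative sketch only (the hypothetical rating vectors below are not drawn from the study data), the computation for two raters' presence/absence judgments can be written as:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical judgments.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is chance agreement from the raters'
    marginal category frequencies.
    """
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("ratings must be equal-length, non-empty sequences")
    n = len(rater_a)
    # Observed proportion of protocols on which the raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of marginal proportions, summed over categories.
    categories = set(rater_a) | set(rater_b)
    p_e = sum((list(rater_a).count(c) / n) * (list(rater_b).count(c) / n)
              for c in categories)
    if p_e == 1.0:
        # Degenerate case: both raters used a single category throughout.
        return 1.0
    return (p_o - p_e) / (1 - p_e)


# Hypothetical example: presence (1) / absence (0) of one error code
# on eight protocols, judged by two raters.
rater_a = [1, 1, 0, 1, 0, 0, 1, 1]
rater_b = [1, 1, 0, 0, 0, 0, 1, 1]
print(round(cohens_kappa(rater_a, rater_b), 3))  # → 0.75
```

Because kappa discounts chance agreement, it is a more conservative index than raw percent agreement for binary judgments such as these, which is why the error-code agreements in the abstract are reported as kappas rather than simple agreement rates.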