Although the Rorschach test has undergone significant refinements in reliability, validity, and statistical power as a result of the procedural standardization and scoring innovations introduced by Exner's Comprehensive System, the issue of Rorschach interrater reliability remains underexplored. This article examines the psychometric foundations of Rorschach interrater reliability and applies concepts from applied behavior analysis to the treatment of Rorschach data. We empirically compare 3 methods of quantifying interrater agreement, assessing their accuracy in estimating agreement and their efficiency in reducing error in Rorschach research. Results indicate substantial differences among the methods in both the agreement estimates they yield and the associated reductions in error. We propose a standard method for quantifying interrater agreement in Rorschach research.
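The abstract does not name the 3 methods compared. As a hedged illustration only, two methods commonly contrasted in interrater-agreement work are simple percentage agreement and Cohen's kappa, which corrects agreement for chance. The sketch below computes both for two hypothetical raters' codings of the same 10 responses; the data and category labels are invented for illustration and are not from the study.

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of responses both raters scored identically."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Chance-corrected agreement: (p_o - p_e) / (1 - p_e),
    where p_e is expected agreement from each rater's marginal rates."""
    n = len(r1)
    p_o = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum(c1[c] * c2[c] for c in set(r1) | set(r2)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of 10 responses by two raters (invented labels)
rater1 = ["F", "M", "F", "C", "F", "M", "F", "F", "C", "F"]
rater2 = ["F", "M", "C", "C", "F", "F", "F", "F", "C", "F"]

print(round(percent_agreement(rater1, rater2), 2))  # 0.8
print(round(cohens_kappa(rater1, rater2), 2))       # 0.64
```

The gap between the two figures (0.80 vs. 0.64 here) shows why the choice of agreement statistic matters: percentage agreement is inflated by chance concordance, which kappa removes.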