Clinical skills assessments have traditionally been scored via experts' ratings of examinee performance. However, this approach to scoring may be impractical in a large-scale context because of logistical and cost considerations, as well as the increased probability of rater error. The purpose of this investigation was therefore to identify, using discriminant analysis, weighted score-based models that maximize the accuracy with which mastery level can be estimated for examinees taking a nationally administered standardized patient test. The accuracy with which the resulting classification functions predict mastery level for a cross-validation sample of examinees was also examined. Results suggest that an automated scoring procedure could be implemented cost-effectively while still retaining the important facets of the decision-making process of expert raters. The cost-benefit, test development, and psychometric implications of these results are discussed in the full paper.
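As a rough illustration only (not the study's actual procedure, data, or weights), the general approach the abstract describes, deriving weighted score-based discriminant functions from a calibration sample and then checking classification accuracy on a cross-validation sample, can be sketched as follows. Everything here is invented for the example: the five-station structure, the simulated score distributions, the diagonal-covariance simplification of the discriminant, and the group sizes.

```python
import random

random.seed(0)

def simulate(n, shift):
    """Simulate n examinees, each with 5 station scores in [0, 1].
    `shift` moves the group mean up (masters) or down (nonmasters)."""
    return [[min(1.0, max(0.0, random.gauss(0.5 + shift, 0.15)))
             for _ in range(5)] for _ in range(n)]

# Calibration sample with known mastery status (hypothetical sizes)
masters = simulate(100, 0.2)
nonmasters = simulate(100, -0.2)

def column_means(data):
    return [sum(row[j] for row in data) / len(data)
            for j in range(len(data[0]))]

m1 = column_means(masters)     # master group centroid
m0 = column_means(nonmasters)  # nonmaster group centroid

def pooled_var(j):
    """Pooled within-group variance for station j."""
    def var(data, mu):
        return sum((row[j] - mu) ** 2 for row in data) / (len(data) - 1)
    return (var(masters, m1[j]) + var(nonmasters, m0[j])) / 2

# Discriminant weights under a diagonal-covariance simplification:
# each station is weighted by its standardized group-mean difference.
weights = [(m1[j] - m0[j]) / pooled_var(j) for j in range(5)]

def score(row):
    """Weighted composite score for one examinee."""
    return sum(w * x for w, x in zip(weights, row))

# Classify relative to the midpoint between the two group centroids.
threshold = (score(m1) + score(m0)) / 2

def classify(row):
    return 1 if score(row) > threshold else 0  # 1 = master

# Apply the fixed classification function to a cross-validation sample.
cv_masters = simulate(50, 0.2)
cv_nonmasters = simulate(50, -0.2)
correct = (sum(classify(r) == 1 for r in cv_masters)
           + sum(classify(r) == 0 for r in cv_nonmasters))
accuracy = correct / (len(cv_masters) + len(cv_nonmasters))
```

The point of the sketch is the workflow, estimate weights once from expert-classified examinees, then score new examinees automatically; in practice a full discriminant analysis would use the complete pooled covariance matrix rather than the per-station variances assumed here.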