The Assess By Computer (ABC) project (Sargeant et al. 2004) follows a Human-Computer Collaborative (HCC) approach to assessment. We focus on constructed answers, such as text and diagrams, rather than answers requiring mere selection between alternatives. The HCC assessment process is an active collaboration between humans and a software system, in which the software does the routine work and the humans make the important judgements. Similar approaches in Artificial Intelligence research are developed in Engelbart 1962, Grosz 2004, and Potter et al. 2004, among others. Our students, through their answers to questions, also implicitly collaborate in the development of resources. We can develop marking support tools which handle the nature and range of variation found in real exam data, and we can adapt marking judgements and feedback - even, in the longer term, our teaching material - in the light of what students really say. In this paper we focus on the reality of student text answers. We present student data from on-line examinations showing a remarkably wide range of acceptable answers to even the most straightforward of questions. We show how the analysis of these examples is supported by the ABC tools, especially the Keyword Manager and the answer clustering options.
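To make the idea of keyword-driven answer clustering concrete, the following is a minimal sketch of how free-text answers might be grouped by the marker-defined keywords they contain. It is not a description of the ABC tools' internals; the function name, keyword lists, and example answers are illustrative assumptions only.

```python
from collections import defaultdict

def cluster_by_keywords(answers, keywords):
    """Group free-text answers by the set of marker-defined keywords they contain.

    Answers sharing the same keyword 'signature' land in one cluster, so a
    human marker can judge each cluster once rather than every answer.
    (Illustrative sketch only; not the ABC implementation.)
    """
    clusters = defaultdict(list)
    for answer in answers:
        text = answer.lower()
        signature = frozenset(kw for kw in keywords if kw in text)
        clusters[signature].append(answer)
    return dict(clusters)

# Hypothetical data: short answers to "What does a compiler do?"
answers = [
    "Translates source code into machine code",
    "It converts source code to machine code",
    "Checks the program for syntax errors",
]
keywords = ["source code", "machine code", "syntax"]

clusters = cluster_by_keywords(answers, keywords)
# The first two answers share the signature {source code, machine code}
# and form one cluster; the third answer forms its own.
```

Clustering of this kind illustrates the HCC division of labour described above: the software performs the routine grouping, while the human marker makes the assessment judgement once per cluster.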