The evaluation of teaching and learning has become an important activity in tertiary education institutions. Student surveys provide information about students' perceptions and judgments of a particular subject; however, as is widely recognised, the appropriate interpretation of these data is problematic. There is a large literature, mainly from the US, on the use and usefulness of student subject evaluations. This literature has highlighted a number of ‘mitigating factors’, such as subject difficulty and discipline area, that should be taken into account when interpreting the results of these questionnaires. In this paper we examine eight years of QOT responses from an Economics Department in an Australian university, covering more than 79,000 student subject enrolments across 565 subjects. The purpose of this analysis is to establish how the information contained in these data can be used to interpret the responses. In particular, we determine the extent to which factors other than the instructor in charge of the subject affect the raw average student evaluation scores. We find that the following characteristics of the students and subjects influenced the average QOT score: year level, enrolment size, the quantitative nature of the subject, the country of origin of the students, the proportion of female students, the Honours status of the students, the difference between their mark in the subject and their previous marks, the quality of the workbook, the quality of the textbook, and the subject's QOT score relative to other subjects taken at the same time. However, a number of other factors proposed in the literature as important influences were found not to be. These include students' fee-paying status, whether they attended a public, private or Catholic secondary school, which other faculty within the University they came from, and whether the subject was taught in multiple sessions.