The use of speech recognition in automotive environments has received increasing attention in recent years. Unfortunately, evaluations of algorithms designed to improve recognition performance in this environment have been performed on differing data collections, making results difficult to compare. To help address this, the University of Illinois released AVICAR ("audio-visual speech in a car"), a large in-car audio and visual data collection. The AVICAR database is freely available, but to date no uniform evaluation protocol on which to perform experiments has been reported. This paper introduces a speaker-independent, continuous speech recognition evaluation protocol for the audio data of the AVICAR database. The protocol is designed to allow for model adaptation, evaluation and testing using native English speakers. Baseline recognition results obtained using this protocol are also presented.