Information theory provides a theoretical framework for addressing fundamental questions concerning the nature of neural codes. Harnessing its power is not straightforward because of the differences between mathematical abstractions and laboratory reality. We describe an approach to the analysis of neural codes that seeks to identify the informative features of neural responses, rather than to estimate the information content of neural responses per se. Our analysis, applied to neurons in primary visual cortex (V1), demonstrates that the informative precision of spike times varies with the stimulus modality being represented. Contrast is represented by spike times on the shortest time scale, and different kinds of pattern information are represented on longer time scales. The interspike interval distribution has a structure that is unanticipated from the firing rate. The significance of this structure is not that it contains additional information, but rather that it may provide a means for simple synaptic mechanisms to decode the information that is multiplexed within a spike train. Extensions of this analysis to the simultaneous responses of pairs of neurons indicate that neighboring neurons convey largely independent information, provided that the decoding process is sensitive to the neuron of origin and not just the average firing rate. In summary, stimulus-related information is encoded into the precise times of spikes fired by V1 neurons. Much of this information would be obscured if individual spikes were merely taken to be estimators of the firing rate. Additional information would be lost by averaging across the responses of neurons in a local population. We propose that synaptic mechanisms sensitive to interspike intervals and dendritic processing beyond simple summation exist at least in part to enable the brain to take advantage of this extra information.
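The notion that an interspike interval (ISI) distribution can carry structure "unanticipated from the firing rate" can be made concrete with a simple baseline comparison: a Poisson process matched to the observed mean rate predicts exponentially distributed ISIs, so deviations from that exponential (for example, a refractory dip or an excess of short intervals) are exactly the structure the mean rate alone cannot explain. The sketch below is not the analysis used in the paper; it is a minimal illustration of this baseline, with an invented rate and duration, using a simulated Poisson spike train in place of recorded data.

```python
import numpy as np

rng = np.random.default_rng(0)

def isi_density(spike_times, bins):
    """Histogram of interspike intervals, normalized to a probability density."""
    isis = np.diff(np.sort(spike_times))
    density, edges = np.histogram(isis, bins=bins, density=True)
    return density, edges

# Illustrative stand-in for a recorded spike train: a homogeneous Poisson
# process with the same mean rate. Real V1 data would replace poisson_spikes.
rate = 20.0                       # spikes per second (assumed, for illustration)
duration = 100.0                  # seconds of simulated recording
n_spikes = rng.poisson(rate * duration)
poisson_spikes = np.sort(rng.uniform(0.0, duration, n_spikes))

bins = np.linspace(0.0, 0.3, 31)  # ISIs up to 300 ms
density, edges = isi_density(poisson_spikes, bins)

# The rate-only prediction: ISIs drawn from rate * exp(-rate * t).
# Structure in the empirical density beyond this curve is what a firing
# rate alone does not anticipate.
centers = 0.5 * (edges[:-1] + edges[1:])
rate_only_prediction = rate * np.exp(-rate * centers)
```

For simulated Poisson spikes the empirical density tracks the exponential prediction; applied to a real spike train, the interesting content is precisely where the two curves disagree.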