Encoding and decoding of information through efficient neural representations
- Publication Date: Dec 19, 2022
- Source: Hal-Diderot
- Language: English
- License: Unknown
Abstract
In this thesis, we investigate the principles that underlie optimal information coding in neural systems by combining models from information theory and machine learning with experimental data analysis. Much of the theoretical work on efficient coding has focused on neurons whose mean response as a function of stimulus features, the neuron’s tuning curve, can be described by a simple function. Real neurons, however, often exhibit more complex tuning curves: in grid cells, for example, the periodicity of the responses imparts the population code with high accuracy. It is unclear whether this high accuracy results from the fine periodic structure of the responses or arises more generally in neurons with complex tuning curves. In a first project, we address this question using a benchmark model: a shallow neural network in which complex and irregular tuning curves emerge in the second-layer neurons from random synaptic weights. Irregularity enhances the local resolution of the code but gives rise to catastrophic, global errors. When these two errors are balanced, the resulting code achieves accuracy that grows exponentially with population size, and the network compresses information from a high-dimensional to a low-dimensional representation. By analyzing recordings from monkey motor cortex, we provide an example of such a ‘compressed’ code. Our results show that efficient codes might not require a finely tuned design but can emerge robustly from randomness and irregularity.

In the first chapter, the population coding properties are derived under the assumption of an ‘ideal’ decoder, which has access to the details of the encoding process. In the second chapter, we ask how the optimality criteria of such a neural code are affected when the system performing the decoding operation is non-ideal. We consider decoders parametrized as neural networks, trained in a supervised setting on a dataset of pairs of stimuli and noisy responses. Because of the noise in the training set, the decoder is biased towards learning smooth and regular functions. This yields a performance gap relative to the ideal decoder, which achieves a lower error by exploiting the irregularities of the tuning curves. The gap is reduced when the complexity of the decoding architecture is increased, revealing a trade-off between the ideal performance of a coding scheme and the ease of the decoding process.

In a third project, we consider the neural representations that emerge in an unsupervised learning setting. An encoder, which maps stimuli to neural responses, and a decoder, whose task is to maintain an internal generative model of the environment, are optimized jointly in a variational autoencoder framework. Optimality is achieved when the encoder is set so as to maximize a bound on the mutual information between stimuli and neural responses, as postulated by the efficient coding hypothesis, subject to a metabolic constraint that penalizes the difference between stimulus-evoked and spontaneous neural activity. Within this framework, we derive optimal neural responses in a conventional model of population coding with simple tuning curves. By varying the constraint, we obtain a family of solutions that yield equally satisfying generative models but qualitatively different neural representations. Our work illustrates how the interaction between the encoding and decoding processes shapes neural representations of the external world.
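To give a concrete picture of the benchmark model described in the first project, the sketch below builds a shallow two-layer network in which irregular tuning curves in the second layer arise from random synaptic weights. This is an illustrative toy, not the thesis’s model or code: the layer sizes, the Gaussian first-layer tuning, and the rectifying nonlinearity are all placeholder assumptions.

```python
import numpy as np

# Illustrative sketch only (not the thesis code): a shallow two-layer network in
# which complex, irregular tuning curves emerge in the second layer from random
# synaptic weights. All sizes and tuning parameters are arbitrary choices.
rng = np.random.default_rng(0)

n_first, n_second = 50, 200            # population sizes of the two layers
stimuli = np.linspace(0.0, 1.0, 500)   # one-dimensional stimulus variable

# First layer: regular, bell-shaped tuning curves tiling the stimulus space.
centers = np.linspace(0.0, 1.0, n_first)
width = 0.05
first_layer = np.exp(-(stimuli[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

# Second layer: random synaptic weights mix the regular responses into
# irregular tuning curves; rectification keeps firing rates non-negative.
weights = rng.standard_normal((n_first, n_second)) / np.sqrt(n_first)
second_layer = np.maximum(first_layer @ weights, 0.0)

# Each column of `second_layer` is one irregular tuning curve over the stimulus.
print(second_layer.shape)  # (500, 200)
```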
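The second-chapter setting, a decoder parametrized as a neural network and trained in a supervised way on pairs of stimuli and noisy responses, can be sketched as follows. The encoding model, noise level, and decoder architectures here are placeholder assumptions; the sketch only illustrates the idea that increasing decoder capacity can reduce the gap to an ideal decoder by fitting the irregularities of the code.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Illustrative sketch only: supervised training of network decoders of
# increasing capacity on (stimulus, noisy response) pairs.
rng = np.random.default_rng(1)

n_neurons, n_samples = 100, 5000
stimuli = rng.uniform(0.0, 1.0, n_samples)

# Irregular encoding: random mixture of narrow Gaussian bumps, plus response noise.
centers = rng.uniform(0.0, 1.0, n_neurons)
mixing = rng.standard_normal((n_neurons, n_neurons)) / np.sqrt(n_neurons)
clean = np.exp(-(stimuli[:, None] - centers[None, :]) ** 2 / (2 * 0.02 ** 2)) @ mixing
responses = clean + 0.1 * rng.standard_normal(clean.shape)

r_train, r_test, s_train, s_test = train_test_split(responses, stimuli, random_state=0)

# A small and a larger decoder: more capacity lets the decoder exploit
# irregular tuning instead of being limited to smooth, regular functions.
for hidden in [(16,), (256, 256)]:
    decoder = MLPRegressor(hidden_layer_sizes=hidden, max_iter=1000, random_state=0)
    decoder.fit(r_train, s_train)
    mse = np.mean((decoder.predict(r_test) - s_test) ** 2)
    print(hidden, f"test MSE = {mse:.5f}")
```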
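For the third project, one common way to write down such an objective, assuming a beta-VAE-style formulation in which q(r|s) is the stimulus-evoked response distribution (the encoder), p(r) the distribution of spontaneous activity, and p(s|r) the decoder’s internal generative model, is the expression below; the exact functional form used in the thesis may differ.

```latex
% Hedged sketch of a beta-VAE-style objective consistent with the description above.
% q(r|s): stimulus-evoked responses (encoder);  p(r): spontaneous activity;
% p(s|r): the decoder's internal generative model of the environment.
\mathcal{L} \;=\;
  \mathbb{E}_{s}\,\mathbb{E}_{q(r \mid s)}\big[\log p(s \mid r)\big]
  \;-\; \beta \,\mathbb{E}_{s}\Big[ D_{\mathrm{KL}}\big(q(r \mid s)\,\big\|\,p(r)\big) \Big]

% The first term is a variational lower bound on the mutual information I(s; r)
% (up to the stimulus entropy H(s)); the second term penalizes deviations of
% stimulus-evoked from spontaneous activity, with beta setting the strength of
% the metabolic constraint.
```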