A learning algorithm for a model binocular cell was derived from an information-maximization principle under a low signal-to-noise-ratio approximation. The algorithm updates the cell's synaptic weights so that the information conveyed by the cell's output increases. Model binocular cells were trained with this algorithm, using computer-generated stereo images as training data. As a result, cells tuned to various disparities emerged. Moreover, the generated synaptic weight patterns resembled Gabor wavelets and the receptive fields of simple cells in the visual cortex: the cells were selective for orientation and spatial frequency as well as for disparity. Gabor functions were fitted to the generated weight patterns, and the fits indicated that the generated cells encode disparity in terms of phase disparity and/or position disparity. This result agrees with the experimental findings of Anzai et al. [J Neurophysiol 82 (1999) 874] and is consistent with ICA-based theoretical results [Network: Comput Neural Syst 11 (2000) 191].
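The abstract does not give the learning rule itself, so the following is only an illustrative sketch of the general idea: in the low signal-to-noise-ratio limit, information maximization for a single linear unit reduces to maximizing output variance under a weight-norm constraint, which Oja's Hebbian rule implements. The stereo-patch generator, the choice of Oja's rule, and all parameter values below are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def stereo_patches(n, size=16, max_shift=3):
    """Toy 1-D stereo pairs: the right patch is a horizontally shifted
    copy of the left patch (a crude model of binocular disparity).
    This generator is a hypothetical stand-in for the paper's
    computer-generated stereo images."""
    base = rng.standard_normal((n, size + 2 * max_shift))
    shifts = rng.integers(-max_shift, max_shift + 1, size=n)
    left = base[:, max_shift:max_shift + size]
    right = np.stack([np.roll(row, s)[max_shift:max_shift + size]
                      for row, s in zip(base, shifts)])
    # Binocular input vector: left and right patches concatenated.
    return np.hstack([left, right])

def train_binocular_cell(X, eta=0.005, epochs=30):
    """Oja-rule training: maximizes output variance while keeping the
    weight vector near unit norm -- an illustrative surrogate for
    low-SNR infomax learning, not the paper's derived update."""
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += eta * y * (x - y * w)  # Oja update (self-normalizing)
    return w

X = stereo_patches(500)
w = train_binocular_cell(X)
```

Because the left and right halves of each input are correlated through the disparity shift, the learned weight vector develops matched left-eye and right-eye profiles; in the paper, fitting Gabor functions to such profiles is what reveals phase- and position-disparity coding.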