Abstract It is shown that by introducing lateral inhibition in Boltzmann machines (BMs), hybrid architectures involving different computational principles, such as feedforward mapping, unsupervised learning, and associative memory, can be modeled and analysed. This is of great advantage for gaining a better understanding of the capabilities of the Boltzmann machine and for the study of hybrid architectures in the context of neurobiology as well as in engineering. Analytic learning rules can be derived for these networks that allow for fast simulation on sequential machines. As a result, time-consuming Glauber dynamics need not be invoked to calculate the learning rule. Two examples of how lateral inhibition in the BM leads to fast learning rules are considered in detail: Boltzmann perceptrons (BPs) and radial basis Boltzmann machines (RBBMs). BPs are shown to be universal classifiers. The main differences between BPs and multilayer perceptrons (MLPs) are indicated. For RBBMs, it is shown that noise in the system controls an interesting symmetry-breaking pattern that leads to specialization of hidden units.