Abstract

Analog computers are inherently inaccurate due to imperfections in fabrication and fluctuations in operating temperature. The classical solution to this problem uses extra hardware to enforce discrete behaviour. However, the brain appears to compute reliably with inaccurate components without necessarily resorting to discrete techniques. The continuous neural network is a computational model based upon certain observed features of the brain. Experimental evidence has shown continuous neural networks to be extremely fault-tolerant; in particular, their performance does not appear to be significantly impaired when precision is limited. Continuous neurons with limited precision essentially compute k-ary weighted multilinear threshold functions, which divide R^n into k regions with k-1 hyperplanes. The behaviour of k-ary neural networks is investigated. There is no canonical set of threshold values for k > 3, although such sets exist for binary and ternary neural networks. The weights can be made integers of only O((z+k) log(z+k)) bits, where z is the number of processors, without increasing hardware or running time. The weights can be made ±1 while increasing running time by a constant multiple and hardware by a small polynomial in z and k. Binary neurons can be used if the running time is allowed to increase by a larger constant multiple and the hardware is allowed to increase by a slightly larger polynomial in z and k. Any symmetric k-ary function can be computed in constant depth and size O(n^(k-1)/(k-2)!), and any k-ary function can be computed in constant depth and size O(nk^n). The alternating neural networks of Olafsson and Abu-Mostafa, and the quantized neural networks of Fleisher, are closely related to this model.
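The notion of a k-ary weighted threshold function described above can be illustrated concretely: a single weight vector w and an ordered list of k-1 thresholds partition R^n into k regions by parallel hyperplanes w·x = t_i, and the neuron outputs the index of the region containing x. The following is a minimal sketch of that idea; the function name and the choice of "output = number of thresholds met or exceeded" are illustrative assumptions, not a definition taken from the text.

```python
import numpy as np

def k_ary_threshold(x, w, thresholds):
    """Illustrative k-ary weighted threshold neuron (assumed formulation).

    Given weights w and thresholds t_1 < ... < t_{k-1}, the k-1 parallel
    hyperplanes w.x = t_i divide R^n into k regions. The output is the
    number of thresholds met or exceeded by the weighted sum, a value
    in {0, ..., k-1} identifying the region containing x.
    """
    s = np.dot(w, x)
    return int(np.sum(s >= np.asarray(thresholds)))

# Example: n = 2, k = 3 (a ternary neuron with thresholds 0 and 2).
w = np.array([1.0, 1.0])
print(k_ary_threshold(np.array([-1.0, 0.5]), w, [0.0, 2.0]))  # sum = -0.5 -> region 0
print(k_ary_threshold(np.array([0.5, 0.5]), w, [0.0, 2.0]))   # sum = 1.0  -> region 1
print(k_ary_threshold(np.array([2.0, 1.0]), w, [0.0, 2.0]))   # sum = 3.0  -> region 2
```

With k = 2 this reduces to the ordinary binary threshold neuron, which is one way to see why binary and ternary networks are the special cases singled out in the abstract.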