This thesis reports on work applying some of the concepts and architectures of biological computation to computer algorithms. Biology has long inspired computer technology at the level of processing elements. This thesis explores the application of biologically inspired algorithms at a higher level: that of the functional structures of the nervous system. The first chapter gives background on the attentional/awareness model of the brain, its importance to biology, and the advantages in real-time performance and in learning facilitation that we expect from applying it to computer algorithms. The second chapter examines the application of this model to a canonical computer science problem: the bin packing problem. When approaching this NP-complete problem under limited computational resources and time constraints, algorithms which throw away large amounts of information about the problem perform better than those which attempt to consider everything. The existence of an optimum in the size of the working memory needed to find the best solution under time pressure is shown. The transition between the regime of strict time constraints and that of more forgiving time constraints is quite sudden. Chapter 3 presents an analytical model for better understanding the performance of various bin packing algorithms. Chapter 4 examines the application of the attentional model to a real-time computer game testbed. This testbed is explained, and results are shown which illustrate that, in a complex, unpredictable environment with tight time and resource constraints, an algorithm which examines only the information falling within a relatively small part of the playing area can win against a player which addresses all of it. Chapter 5 turns to an examination of the role of reduced informational representations in learning. A learning system solves various logical-kinetic puzzles with a simulated segmented arm.
A logical supervisory subsystem uses attentional/awareness methods to train neural networks, adaptive resonance theory networks, and declarative computer memory, and to pass control of the different control levels of the articulated arm over to them. Finally, chapter 6 presents an overview and evaluation of the work.
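The working-memory trade-off discussed for chapter 2 can be sketched in code. The following is a minimal illustration, not the thesis's actual algorithm: a first-fit bin packer whose "attention" is restricted to the most recent `window` open bins, so that larger windows consider more information per item at greater cost. The function name and parameters are hypothetical.

```python
def first_fit_limited(items, capacity, window):
    """First-fit bin packing that only 'attends' to the last `window`
    open bins; earlier bins are dropped from working memory.
    (Illustrative sketch only; `window` must be >= 1.)"""
    bins = []  # each entry records the remaining free space in one bin
    for item in items:
        # examine only the `window` most recently opened bins
        for b in bins[-window:]:
            if b["free"] >= item:
                b["free"] -= item
                break
        else:
            # no attended bin fits: open a new bin
            bins.append({"free": capacity - item})
    return len(bins)

# A tiny window forgets half-full bins and opens more of them:
print(first_fit_limited([0.5, 0.5, 0.5, 0.5], 1.0, window=1))  # → 2
```

Under time pressure the cost of scanning a large window per item competes with the packing quality it buys, which is the kind of optimum-window effect the chapter investigates.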