Different levels of analysis provide different insights into behavior: computational-level analyses characterize the problem an organism must solve, while algorithmic-level analyses identify the mechanisms that drive behavior. However, many attempts to model behavior are pitched at a single level of analysis. Research into human and animal learning provides a prime example, with some researchers using computational-level models to understand the sensitivity organisms display to environmental statistics and others using algorithmic-level models to understand organisms’ trial order effects, including effects of primacy and recency. Recently, attempts have been made to bridge these two levels of analysis. Locally Bayesian Learning (LBL) creates a bridge by taking a view inspired by evolutionary psychology: our minds are composed of modules that are each individually Bayesian but communicate via restricted messages. A different inspiration comes from computer science and statistics: our brains implement the algorithms developed for approximating complex probability distributions. We show that these different inspirations for how to bridge levels of analysis are not necessarily in conflict by developing a computational justification for LBL. We demonstrate that a scheme that maximizes computational fidelity while using a restricted, factorized representation produces the trial order effects that motivated the development of LBL. This scheme shares the modular motivation of LBL, passing messages about the attended cues between modules, but does not use the rapid shifts of attention considered key to the LBL approximation. This work illustrates a new way of tying together psychological and computational constraints.