We use a limited information environment to mimic the state of confusion in an experimental, repeated public goods game. The results show that reinforcement learning leads to dynamics similar to those observed in standard public goods games. Closer inspection, however, reveals that reinforcement learning cannot fully explain the individual decay of contributions in standard public goods games: according to our estimates, learning accounts for only 41 percent of this decay. The contribution dynamics of subjects identified as conditional cooperators differ strongly from the learning dynamics, whereas a learning model estimated from the limited information treatment tracks the behavior of subjects who cannot be classified as conditional cooperators reasonably well.
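The reinforcement learning dynamics referred to above can be illustrated with a minimal Roth–Erev-style simulation, a standard learning model in this literature. All parameter values below (number of contribution levels, marginal per-capita return, initial propensities) are illustrative assumptions, not the paper's estimates:

```python
import random

def simulate_roth_erev(n_actions=21, rounds=60, payoff=None, seed=0):
    """Simulate one agent choosing a contribution level (0..n_actions-1)
    via Roth-Erev reinforcement learning: the propensity of the chosen
    action grows with the received payoff, and choice probabilities are
    proportional to propensities."""
    rng = random.Random(seed)
    # Illustrative payoff: each token kept pays 1, and the public good
    # returns 0.4 per token contributed (an assumed marginal per-capita
    # return below 1, so free riding is privately optimal).
    if payoff is None:
        payoff = lambda c: (n_actions - 1 - c) + 0.4 * c
    propensities = [1.0] * n_actions  # uniform initial propensities
    choices = []
    for _ in range(rounds):
        # Draw an action with probability proportional to its propensity.
        total = sum(propensities)
        r, acc, choice = rng.random() * total, 0.0, n_actions - 1
        for a, p in enumerate(propensities):
            acc += p
            if r <= acc:
                choice = a
                break
        propensities[choice] += payoff(choice)  # reinforce chosen action
        choices.append(choice)
    return choices

contributions = simulate_roth_erev()
# Because low contributions earn higher payoffs, they are reinforced
# more strongly, so average contributions tend to decay over rounds.
```

Because such a learner responds only to its own payoffs, not to others' contributions, it produces decay without any conditional cooperation, which is why it can serve as a benchmark against the behavior of classified subjects.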