Learning from user behavior during image segmentation, so as to replicate the innate human ability to adapt shape delineation to contextually specific local information, is an important problem in image understanding. Existing segmentation approaches typically rely on either generic image features or task-specific prior knowledge, which limits their applicability across different contextual settings. In this paper, a general segmentation framework based on reinforcement learning is proposed, demonstrating how user-specific behavior can be assimilated in-situ for effective model adaptation and learning. The framework incorporates a two-layer reinforcement learning algorithm that constructs the model from experience accumulated during user interaction. Because the algorithm learns 'pervasively' whilst the user performs manual segmentation, no additional training step is required, allowing the method to adapt and improve its accuracy as experience is acquired. Detailed validation on in-vivo magnetic resonance (MR) data demonstrates the practical value of the technique in significantly reducing the level of user interaction required, whilst maintaining overall segmentation accuracy.
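The abstract does not specify the two-layer algorithm itself; as a purely illustrative sketch, the following hypothetical tabular learning loop shows how a segmentation parameter could be adapted online from accept/correct user feedback, with no separate training step. The state descriptor, action set, reward scheme, and all names here are assumptions, not the paper's design.

```python
import random

# Hypothetical sketch: online tabular Q-learning that adapts a segmentation
# parameter (e.g. an edge-strength threshold) from implicit user feedback.
# States, actions and rewards are illustrative, not taken from the paper.

STATES = ["weak_edges", "strong_edges"]   # coarse local-context descriptor
ACTIONS = [0.7, 0.5, 0.3]                 # candidate threshold values

class PervasiveLearner:
    def __init__(self, alpha=0.5, epsilon=0.1):
        self.q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
        self.alpha = alpha        # learning rate
        self.epsilon = epsilon    # exploration rate

    def choose(self, state, rng=random):
        # epsilon-greedy: mostly exploit the best-known threshold
        if rng.random() < self.epsilon:
            return rng.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward):
        # One-step (bandit-style) value update from a single interaction:
        # reward = +1 if the user accepted the proposed contour,
        #          -1 if the user manually corrected it.
        old = self.q[(state, action)]
        self.q[(state, action)] = old + self.alpha * (reward - old)

if __name__ == "__main__":
    learner = PervasiveLearner(epsilon=0.0)  # greedy, for a deterministic demo
    # Simulated sessions: in weak-edge regions the user only accepts
    # contours produced with a low threshold (0.3).
    for _ in range(20):
        a = learner.choose("weak_edges")
        learner.update("weak_edges", a, 1.0 if a == 0.3 else -1.0)
    print(learner.choose("weak_edges"))
```

In this toy setting the learner converges on the threshold the user consistently accepts, mirroring the idea of accumulating experience during routine manual interaction rather than in a dedicated training phase.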