Where Does Value Come From?

Authors
  • Juechems, Keno (1)
  • Summerfield, Christopher (2)
  • (1) Department of Experimental Psychology, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford, UK. Electronic address: [email protected]
  • (2) Department of Experimental Psychology, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford, UK.
Type
Published Article
Journal
Trends in Cognitive Sciences
Publication Date
Oct 01, 2019
Volume
23
Issue
10
Pages
836–850
Identifiers
DOI: 10.1016/j.tics.2019.07.012
PMID: 31494042
Source
Medline
Keywords
Language
English
License
Unknown

Abstract

The computational framework of reinforcement learning (RL) has allowed us to both understand biological brains and build successful artificial agents. However, in this opinion article, we highlight open challenges for RL as a model of animal behaviour in natural environments. We ask how the external reward function is designed for biological systems, and how we can account for the context sensitivity of valuation. We summarise both old and new theories proposing that animals track current and desired internal states and seek to minimise the distance to a goal across multiple value dimensions. We suggest that this framework readily accounts for canonical phenomena observed in the fields of psychology, behavioural ecology, and economics, and for recent findings from brain-imaging studies of value-guided decision-making. Copyright © 2019 Elsevier Ltd. All rights reserved.
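To make the core idea concrete, the sketch below is a minimal, hypothetical Python illustration (not the authors' model) of a setpoint-based reward: the agent carries a multidimensional internal state and a desired setpoint, and reward is defined as the reduction in distance to that setpoint, in the spirit of the abstract's "minimise the distance to a goal across multiple value dimensions". All names and numbers are illustrative assumptions.

  import numpy as np

  def setpoint_reward(state, next_state, setpoint, ord=2):
      """Reward as the reduction in distance between the internal state
      and a desired setpoint, computed across all value dimensions."""
      d_before = np.linalg.norm(setpoint - state, ord=ord)
      d_after = np.linalg.norm(setpoint - next_state, ord=ord)
      return d_before - d_after  # positive when the agent moves toward the goal

  # Illustrative usage: two internal dimensions (e.g., energy and hydration).
  setpoint = np.array([1.0, 1.0])     # desired internal state (assumed)
  state = np.array([0.4, 0.9])        # current internal state
  next_state = np.array([0.6, 0.9])   # state after, say, consuming food
  print(setpoint_reward(state, next_state, setpoint))  # > 0: closer to the goal

Under this reading, reward is generated internally from the gap between current and desired states rather than handed to the agent by an external reward function, which is the contrast the abstract draws with standard RL.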
