Title: Computational Mechanisms of Selective Attention during Reinforcement Learning
Authors: Radulescu, Angela
Advisors: Niv, Yael
Contributors: Psychology Department
Keywords: computational modeling; human behavior; particle filtering; reinforcement learning; reward prediction; selective attention
Subjects: Cognitive psychology; Neurosciences
Issue Date: 2020
Publisher: Princeton, NJ : Princeton University
Abstract: The multidimensional nature of our environment raises a fundamental question in the study of learning and decision-making: how do we know which dimensions are relevant for reward, and which can be ignored? For instance, an action as simple as crossing the street might benefit from selective attention to the speed of incoming cars, but not the color or make of each vehicle. This thesis proposes a role for selective attention in restricting representations of the environment to relevant dimensions. It further argues that such representations should be guided by the inferred structure of the environment. Based on data from a paradigm designed to assess the dynamic interaction between learning and attention, the thesis introduces a novel sequential sampling mechanism for how such inference could be realized. The first chapter discusses selective attention in the context of Partially Observable Markov Decision Processes. Viewed through this lens, selective attention provides a mapping from perceptual observations to state representations that support behavior. Chapter 2 provides evidence for the role of selective attention in learning such representations. In the ‘Dimensions Task,’ human participants must learn from trial and error which of several features is most predictive of reward. A model-based analysis of choice data reveals that humans selectively focus on a subset of task features. Age-related differences in the breadth of attention are shown to modulate the speed with which humans learn the correct representation. Chapter 3 introduces a method for directly measuring the dynamics of attention allocation during multidimensional reinforcement learning: fMRI decoding and eye-tracking are combined to compute a trial-by-trial index of attention. A model-based analysis reveals a bidirectional interaction between attention and learning: attention constrains learning, and learning, in turn, guides attention to predictive dimensions. Finally, Chapter 4 draws on statistical theory to explore a novel mechanism for selective attention based on particle filtering. The particle filter keeps track of a single hypothesis about the task structure and updates it in light of incoming evidence. To offset the sparsity of the representation suggested by gaze data, the particle filter is augmented with working memory for recent observations. Gaze dynamics are shown to be more consistent with the particle filter than with gradual trial-and-error learning. This chapter offers a novel account of the interaction between working memory and selective attention in service of representation learning, grounded in normative inference.
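
A toy illustration of the kind of mechanism described in the final chapter: the Python sketch below implements a single-hypothesis ("one particle") filter over which stimulus dimension and feature predict reward, augmented with a small working-memory buffer of recent trials. All class names, parameter values, and the likelihood-based rejection/resampling rule are illustrative assumptions for a simplified Bernoulli-reward task, not the model developed in the dissertation.

```python
import random

# Single-particle filter over "which dimension/feature predicts reward",
# with a working-memory buffer of recent trials (illustrative sketch only).

N_DIMENSIONS = 3        # e.g., color, shape, texture (assumed task size)
N_FEATURES = 3          # features per dimension (assumed)
P_REWARD_TARGET = 0.75  # assumed reward probability when the target feature is chosen
WM_CAPACITY = 5         # assumed number of recent trials held in working memory


def product(xs):
    """Multiply an iterable of probabilities together."""
    result = 1.0
    for x in xs:
        result *= x
    return result


class SingleParticleFilter:
    def __init__(self):
        # The single hypothesis ("particle"): which dimension is relevant,
        # and which feature within that dimension is the rewarded target.
        self.hypothesis = (random.randrange(N_DIMENSIONS), random.randrange(N_FEATURES))
        self.memory = []  # working memory of recent (chosen_stimulus, reward) pairs

    def likelihood(self, hypothesis, stimulus, reward):
        """Probability of the observed reward if `hypothesis` were true."""
        dim, feat = hypothesis
        p = P_REWARD_TARGET if stimulus[dim] == feat else 1.0 - P_REWARD_TARGET
        return p if reward else 1.0 - p

    def update(self, stimulus, reward):
        """Keep or replace the hypothesis after observing one trial's outcome."""
        # Add the new observation, discarding the oldest if memory is full.
        self.memory = (self.memory + [(stimulus, reward)])[-WM_CAPACITY:]

        # Stochastically reject the current hypothesis when it explains the
        # new observation poorly, then resample a replacement weighted by its
        # likelihood over everything still held in working memory.
        if random.random() > self.likelihood(self.hypothesis, stimulus, reward):
            candidates = [(d, f) for d in range(N_DIMENSIONS) for f in range(N_FEATURES)]
            weights = [product(self.likelihood(h, s, r) for s, r in self.memory)
                       for h in candidates]
            self.hypothesis = random.choices(candidates, weights=weights, k=1)[0]
```

A simulated trial would pass the chosen stimulus as a tuple of one feature index per dimension, e.g. `filter.update((0, 2, 1), reward=True)`; because only one hypothesis is carried forward, the model's "attention" is sparse at any moment, and the working-memory buffer is what allows recent evidence to redirect it.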
URI: http://arks.princeton.edu/ark:/88435/dsp01s4655k528
Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: catalog.princeton.edu
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Psychology

Files in This Item:
Radulescu_princeton_0181D_13351.pdf (5.57 MB, Adobe PDF)

