Keywords

face perception, adaptation, aftereffects, decoding strategies

Abstract

The aftereffects of adaptation to faces have been studied widely, in part to characterize the coding schemes for representing different facial attributes. Often these aftereffects have been interpreted in terms of two alternative models of face processing: 1) a norm-based or opponent code, in which the facial dimension is represented by the relative activity in a pair of broadly tuned mechanisms with opposing sensitivities; or 2) an exemplar code, in which the dimension is sampled by multiple channels narrowly tuned to different levels of the stimulus. Evidence for or against these alternatives is based on the different patterns of aftereffects they predict (e.g., whether there is adaptation to the norm, and how adaptation increases with stimulus strength). However, these models make many, often implicit, assumptions about the channels themselves and how their outputs are combined. We re-evaluated these models to explore how their output depends on factors such as the number and selectivity of the channels and the strategy used to decode them, to clarify the fundamental differences between these coding schemes and the adaptation effects most diagnostic for discriminating between them. We show that the distinction between norm and exemplar codes has less to do with the number of channels and more to do with how the channel outputs are decoded to represent the stimulus. We also compare how these models depend on assumptions about the stimulus (e.g., broadband vs. punctate) and on the impact of noise. These analyses point to the fundamental distinctions between different coding strategies and the patterns of visual aftereffects that are best for revealing them.
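To make the contrast between the two coding schemes concrete, they can be sketched as a small simulation. This is an illustrative sketch, not the authors' implementation: the Gaussian tuning curves, the fatigue-style gain reduction, the number of channels, and all parameter values (`sigma`, `k`, the channel centers) are assumptions chosen only to show how each code's read-out produces a repulsive aftereffect.

```python
import numpy as np

# Stimulus axis: a single facial dimension, with 0 as the norm (average face).

def opponent_responses(x, gain=(1.0, 1.0)):
    """Norm-based/opponent code: two broadly tuned channels with
    opposing monotonic sensitivities across the dimension."""
    return np.array([gain[0] * (1 + x) / 2,   # prefers positive values
                     gain[1] * (1 - x) / 2])  # prefers negative values

def opponent_decode(r):
    """Norm-based read-out: the stimulus is the relative activity of
    the opposing pair; equal activity decodes to the norm."""
    return r[0] - r[1]

def exemplar_responses(x, centers, sigma=0.3, gain=None):
    """Exemplar code: many channels, each narrowly tuned (Gaussian here)
    to a different level of the dimension."""
    gain = np.ones(len(centers)) if gain is None else gain
    return gain * np.exp(-(x - centers) ** 2 / (2 * sigma ** 2))

def exemplar_decode(r, centers):
    """Population-vector read-out: response-weighted average of the
    channels' preferred levels."""
    return np.sum(r * centers) / np.sum(r)

def adapted_gain(resp_to_adaptor, k=0.5):
    """Toy fatigue model: a channel's gain falls in proportion to its
    response to the adapting stimulus."""
    return 1.0 / (1.0 + k * resp_to_adaptor)

adaptor, test = 0.6, 0.0
centers = np.linspace(-1, 1, 9)

# Opponent model: adapt both channels to the adaptor, then decode the test.
g_opp = adapted_gain(opponent_responses(adaptor))
shift_opp = opponent_decode(opponent_responses(test, g_opp))

# Exemplar model: same procedure with the multichannel code.
g_ex = adapted_gain(exemplar_responses(adaptor, centers))
shift_ex = exemplar_decode(exemplar_responses(test, centers, gain=g_ex), centers)

# Under these assumptions both read-outs shift the test face *away* from
# the adaptor (a repulsive aftereffect): shift_opp < 0 and shift_ex < 0.
```

The point of the sketch is that the two models differ mainly in the decode step: the opponent code reads out a difference between two broad channels, while the exemplar code reads out a weighted average over many narrow ones, even though the adaptation rule applied to the channels is identical.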

Start Date

16-5-2018 5:05 PM

End Date

16-5-2018 5:30 PM


Inferring the neural representation of faces from adaptation aftereffects