A Two-Layer Model Explains Higher-Order Feature Selectivity of V2 Neurons

New York University

Keywords
V2, model, natural, texture
Abstract
Neurons in cortical area V2 respond selectively to higher-order visual features, such as the quasi-periodic structure of natural texture. However, a functional account of how V2 neurons build selectivity for complex natural image features from their inputs – V1 neurons locally tuned for orientation and spatial frequency – remains elusive.
We made single-unit recordings in area V2 of two fixating rhesus macaques while presenting stimuli composed of multiple superimposed grating patches that localize contrast energy in space, orientation, and scale. We modeled V2 activity with a two-layer linear-nonlinear network, optimized to account for the observed responses using a sparse combination of V1-like outputs.
Analysis of model fits reveals that V2 neurons are well matched to natural images, with units combining V1 afferent tuning dimensions in ways that effectively capture natural scene variation. Remarkably, although the models are trained on responses to synthetic stimuli, they predict responses to a novel image class, naturalistic texture, reproducing single-unit selectivity for higher-order image statistics. Thus, we demonstrate state-of-the-art performance in modeling V2 selectivity and provide a mechanistic account of single-unit tuning for higher-order natural features.
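As a rough illustration of the kind of two-layer linear-nonlinear cascade described above, the Python sketch below builds a fixed bank of V1-like Gabor energy units and fits a sparse linear-nonlinear readout to spike counts. The Gabor front end, softplus output nonlinearity, Poisson loss, L1 penalty, and all parameter values are illustrative assumptions made for this sketch, not the fitting procedure used in the study.

# Minimal sketch of a two-layer linear-nonlinear (LN-LN) model of a V2 neuron.
# The fixed V1-like front end (Gabor energy units), the softplus output
# nonlinearity, the Poisson loss, and the L1 sparsity penalty are illustrative
# assumptions, not the authors' exact formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def gabor_bank(size=32, n_orient=8, n_freq=3):
    """Build a fixed bank of quadrature Gabor filter pairs (V1-like front end)."""
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, size), torch.linspace(-1, 1, size), indexing="ij"
    )
    filters = []
    for k in range(n_orient):
        theta = torch.tensor(k * torch.pi / n_orient)
        u = xs * torch.cos(theta) + ys * torch.sin(theta)
        for f in (2.0, 4.0, 8.0)[:n_freq]:
            env = torch.exp(-(xs**2 + ys**2) / (2 * 0.3**2))
            filters.append(env * torch.cos(2 * torch.pi * f * u))  # even phase
            filters.append(env * torch.sin(2 * torch.pi * f * u))  # odd phase
    return torch.stack(filters)  # (2 * n_orient * n_freq, size, size)


class TwoLayerLN(nn.Module):
    """Fixed V1 energy layer followed by a learned sparse linear-nonlinear readout."""

    def __init__(self, filters):
        super().__init__()
        self.register_buffer("filters", filters.flatten(1))  # (n_filt, size*size)
        n_units = filters.shape[0] // 2
        self.w = nn.Parameter(torch.zeros(n_units))  # readout weights (sparse)
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, images):
        # Layer 1: linear filtering + quadrature energy nonlinearity (V1-like).
        lin = images.flatten(1) @ self.filters.T           # (batch, n_filt)
        energy = lin[:, 0::2] ** 2 + lin[:, 1::2] ** 2     # (batch, n_units)
        # Layer 2: sparse linear combination + rectifying output nonlinearity.
        return F.softplus(energy @ self.w + self.b)        # predicted firing rate


def fit(model, images, spike_counts, l1=1e-3, steps=2000, lr=1e-2):
    """Fit the readout by L1-penalized Poisson regression (illustrative)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        rate = model(images)
        loss = F.poisson_nll_loss(rate, spike_counts, log_input=False)
        loss = loss + l1 * model.w.abs().sum()             # encourage sparsity
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model


if __name__ == "__main__":
    # Synthetic demo: random "stimuli" and spike counts, just to exercise the fitting loop.
    model = TwoLayerLN(gabor_bank())
    images = torch.randn(256, 32, 32)
    counts = torch.poisson(torch.full((256,), 3.0))
    fit(model, images, counts)

In this sketch, sparsity in the second-layer weights plays the role of the sparse combination of V1-like outputs described in the abstract: most readout weights are driven toward zero, so the fitted V2 unit is explained by a small set of V1 afferent tuning dimensions.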
Start Date
12-5-2022 3:45 PM
End Date
12-5-2022 4:10 PM
Location
New York University
Included in
Applied Statistics Commons, Computational Neuroscience Commons, Systems Neuroscience Commons