Keywords

relevance, saliency, V1, cortex, top-down, model

Abstract

The human visual system processes information that defines what is visually conspicuous (saliency) to our perception, guiding eye movements towards certain objects depending on scene context and their feature characteristics. However, attention is also known to be biased by top-down influences (relevance), which drive voluntary eye movements shaped by goal-directed behavior and memory. We propose a unified model of the visual cortex able to predict, among other effects, top-down visual attention and saccadic eye movements. First, we simulate the activations of early mechanisms of the visual system (RGC/LGN) by processing distinct chromatic opponencies of the image with Gabor-like filters. Second, we use a cortical magnification function to reproduce foveation according to V1 retinotopy. Third, we feed these signals into an excitatory-inhibitory neurodynamic model of lateral interactions in V1, which acts as a saliency mechanism. Fourth, projections towards the SC (modeled as WTA-like computations) determine the targets of fixations and saccade sequences. Fifth and last, we integrate a top-down inhibition process by simulating the retrieval of visual representations as goal-directed selection processes (DLPFC/FEF), later projected towards V1/SC. These top-down representations modulate the prediction of visual relevance during visual search tasks, where their weights (orientation, scale and opponency) are mapped as cortical signals from early visual areas for each exemplar/category. Our results show that our model's predictions of eye movements improve when the aforementioned top-down computations are included. In addition, our model has previously been shown to simultaneously reproduce visual discomfort, brightness and chromatic induction effects.
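Two of the stages above can be illustrated with a minimal sketch: Gabor-like filtering of a chromatic-opponent channel (the RGC/LGN front end feeding V1) and winner-take-all fixation selection with inhibition of return (the SC stage). This is not the authors' implementation; all parameter values (kernel size, wavelength, inhibition radius) are hypothetical placeholders chosen for illustration.

```python
# Hypothetical sketch of two pipeline stages from the abstract:
# (1) Gabor filtering of an opponent channel, (2) WTA saccade selection.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=15, sigma=3.0, theta=0.0, wavelength=6.0):
    """Oriented Gabor filter, a standard model of V1 simple-cell tuning."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def wta_scanpath(saliency, n_fixations=3, ior_radius=5):
    """Winner-take-all fixation selection with inhibition of return."""
    s = saliency.astype(float).copy()
    fixations = []
    for _ in range(n_fixations):
        i, j = np.unravel_index(np.argmax(s), s.shape)
        fixations.append((i, j))
        # Suppress a disc around the winner so the next saccade
        # targets a different location (inhibition of return).
        yy, xx = np.ogrid[:s.shape[0], :s.shape[1]]
        s[(yy - i)**2 + (xx - j)**2 <= ior_radius**2] = -np.inf
    return fixations

# Stand-in opponent channel (random noise in place of a real image).
rng = np.random.default_rng(0)
channel = rng.standard_normal((64, 64))
response = np.abs(convolve2d(channel, gabor_kernel(), mode="same"))
print(wta_scanpath(response))
```

In the full model, the rectified filter responses would pass through the excitatory-inhibitory V1 dynamics (and the top-down DLPFC/FEF modulation) before reaching this WTA stage; here they feed it directly for brevity.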

Start Date

16-5-2019 2:00 PM

End Date

16-5-2019 2:30 PM

Location

Barcelona, Spain


Computations of top-down attention by modulating V1 dynamics
