A Computational Model to Account for Dynamics of Spatial Updating of Remembered Visual Targets across Slow and Rapid Eye Movements
Keywords
Spatial Updating, Eye Movements, Saccades, Smooth Pursuit, Remapping
Abstract
Despite the ever-changing retinal image produced by each eye movement, our perception of the visual world is stable and unified. It is generally believed that this space constancy relies on the brain's ability to update spatial information across eye movements. Although many efforts have been made to uncover the mechanism underlying spatial updating, many questions about its neuronal basis remain unanswered.
We developed a state-space model for updating gaze-centered spatial information. To explore spatial updating, we considered two kinds of eye movements: saccades and smooth pursuit. The inputs to the proposed model are a corollary discharge signal, an eye-position signal, and 2D topographic maps of the visual stimuli. The state space is represented by a radial basis function neural network, whose hidden layer provides a topographic map of the remembered visual target; the output of the model is the decoded location of that target. We trained the model on double-step saccade-saccade and pursuit-saccade tasks. Training revealed that the receptive fields of the state-space units are remapped predictively during saccades and updated continuously during smooth pursuit. Moreover, receptive fields also expanded during saccades (to our knowledge, this predicted expansion has not yet been reported in the published literature). We believe this model can shed light on the neural mechanism underlying trans-saccadic perception.
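To make the updating scheme concrete, the sketch below (Python/NumPy) illustrates the kind of gaze-centered updating the model performs: a target is encoded as activity over a grid of radial basis functions, the map is shifted discretely by a corollary discharge signal at a saccade or continuously by eye velocity during pursuit, and the remembered location is decoded from the map. The grid size, RBF width, roll-based shift, and population-vector readout are simplifying assumptions for illustration, not the trained network described above.

# Minimal, illustrative sketch of a gaze-centered state-space updater with an
# RBF hidden layer. Grid, widths, and update rules are assumed, not the
# authors' published implementation.
import numpy as np

GRID = np.arange(-40, 41, 2.0)          # 1D axis of the 2D gaze-centered map (deg)
XX, YY = np.meshgrid(GRID, GRID)        # RBF centres tile the visual field
SIGMA = 4.0                             # RBF width (deg), assumed

def rbf_map(target_xy):
    """Topographic hidden-layer activity for a target at gaze-centered (x, y)."""
    dx, dy = XX - target_xy[0], YY - target_xy[1]
    return np.exp(-(dx**2 + dy**2) / (2 * SIGMA**2))

def update_state(activity, corollary_discharge=None, eye_velocity=None, dt=0.01):
    """Shift the remembered map: discretely for a saccade (corollary discharge
    gives the planned eye displacement), continuously for pursuit (integrate
    eye velocity over one time step)."""
    if corollary_discharge is not None:              # predictive remapping
        shift = -np.asarray(corollary_discharge)
    elif eye_velocity is not None:                   # continuous updating
        shift = -np.asarray(eye_velocity) * dt
    else:
        return activity
    cells = shift / (GRID[1] - GRID[0])              # degrees -> grid cells
    return np.roll(activity, (int(round(cells[1])), int(round(cells[0]))), axis=(0, 1))

def decode(activity):
    """Population-vector readout of the remembered target location (deg)."""
    w = activity / activity.sum()
    return np.array([(w * XX).sum(), (w * YY).sum()])

# Example: remember a flash at (10, 5) deg, then make a 20-deg rightward saccade.
memory = rbf_map((10.0, 5.0))
memory = update_state(memory, corollary_discharge=(20.0, 0.0))
print(decode(memory))   # ~(-10, 5): target re-expressed relative to the new gaze

In the model described in the abstract, this updating behavior is not hard-coded as above but emerges from training the radial basis function network on the double-step tasks.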
Start Date
13-5-2015 11:30 AM
End Date
13-5-2015 11:55 AM
Session Number
01
Session Title
Motion, Attention, and Eye Movements