Task-driven influences on fixational eye movements

Keywords

active vision, fixational eye movements, ocular drift

Abstract

There is now compelling evidence that the spatiotemporal remapping carried out by fixational eye movements (FEMs) is an essential step in visual processing. Moreover, the overall Brownian-like statistics of FEMs are calibrated to map fine spatial detail into the temporal frequency range to which retinal circuitry is tuned. Here, we tested the hypothesis that the detailed spatial characteristics of FEMs can be adjusted to task demands via cognitive influences that operate even in the absence of a visual stimulus. We examined FEMs in a task that required subjects (N=6) to report which of two letters was displayed. Trials were blocked; in each block, the letter pair was known in advance: H vs. N or E vs. F. The task was demanding: letters subtended 1.5 deg, were embedded in 1/f noise, and had a contrast that yielded ~75% correct performance. Note that the HN discrimination could be accomplished by identification of either a horizontal or oblique contour, but the EF discrimination required identification of a horizontal contour. Thus, in the EF blocks, only a vertical ocular drift would be expected to maximize the neural signal. For each condition, FEM velocity statistics, which were approximately Gaussian, were characterized by their covariance. As predicted, the ratio of velocity variance in the vertical vs. oblique direction was greater in EF trials than in HN trials. This difference was greater when no stimulus was present (20% of trials in each block), indicating open-loop control. We also found that a simple decoder applied to single-trial drift trajectories could identify the task (HN vs. EF) at above-chance levels in most subjects. While the observed covariance patterns showed substantial inter-subject variability, we found that a single transformation, applied with subject-specific strengths, could largely account for all subjects’ findings. Critically, this shared transformation acts holistically on the plane, rather than individually on horizontal and vertical axes. In sum, we find that knowledge of the specific requirements of a visual task exerts fine-tuned open-loop control over ocular drifts, and we characterize the nature of this control.
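To illustrate the directional-variance measure described above, the sketch below (not the authors' code; the 45-deg choice of oblique axis, the array layout, and the simulated covariances are assumptions) computes the ratio of drift-velocity variance along the vertical vs. an oblique axis from the 2x2 velocity covariance:

```python
# Minimal sketch of the velocity-covariance analysis, assuming drift velocity
# samples are an (N, 2) array of horizontal/vertical components in deg/s.
import numpy as np

def directional_variance(cov, theta):
    """Variance of the velocity distribution along the unit vector at angle theta (radians)."""
    u = np.array([np.cos(theta), np.sin(theta)])
    return u @ cov @ u

def vertical_to_oblique_ratio(velocities):
    """Ratio of velocity variance along the vertical axis vs. a 45-deg oblique axis."""
    cov = np.cov(velocities, rowvar=False)                 # 2x2 covariance of (horizontal, vertical)
    var_vertical = directional_variance(cov, np.pi / 2)    # 90 deg: vertical
    var_oblique  = directional_variance(cov, np.pi / 4)    # 45 deg: oblique (illustrative choice)
    return var_vertical / var_oblique

# Hypothetical usage: compare the ratio between simulated EF-like and HN-like blocks.
rng = np.random.default_rng(0)
v_ef = rng.multivariate_normal([0, 0], [[1.0, 0.2], [0.2, 1.6]], size=5000)
v_hn = rng.multivariate_normal([0, 0], [[1.2, 0.4], [0.4, 1.2]], size=5000)
print(vertical_to_oblique_ratio(v_ef), vertical_to_oblique_ratio(v_hn))
```

Similarly, a minimal sketch of what a "simple decoder" of task identity from single-trial drift might look like, assuming per-trial velocity-covariance entries as features and logistic regression as the classifier (both illustrative choices, not the study's method):

```python
# Hypothetical decoder sketch: classify task (HN vs. EF) from single-trial drift statistics.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def trial_features(trial_velocities):
    """Summarize one trial's (N, 2) velocity samples by its covariance entries."""
    c = np.cov(trial_velocities, rowvar=False)
    return np.array([c[0, 0], c[1, 1], c[0, 1]])

# Simulated trials: 0 = HN block, 1 = EF block.
rng = np.random.default_rng(1)
trials = [rng.multivariate_normal([0, 0], [[1.2, 0.4], [0.4, 1.2]], size=200) for _ in range(100)] + \
         [rng.multivariate_normal([0, 0], [[1.0, 0.2], [0.2, 1.6]], size=200) for _ in range(100)]
labels = np.array([0] * 100 + [1] * 100)

X = np.array([trial_features(t) for t in trials])
scores = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5)
print("cross-validated accuracy:", scores.mean())   # above 0.5 indicates decodable task identity
```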

Location

ModVis 2023, St. Petersburg, FL
