A Binocular Model for Motion Integration in MT Neurons

Keywords

V1, MT, direction selectivity, pattern motion, motion in depth

Abstract

Processing of visual motion by neurons in MT has long been an active area of study; however, circuit models detailing the computations underlying binocular integration of motion signals remain elusive. Such models are important for studying the visual perception of motion in depth (MID), which involves both frontoparallel (FP) visual motion and binocular signal integration. Recent studies (Czuba et al., 2014; Sanada and DeAngelis, 2014) have shown that many MT neurons are MID sensitive, contrary to the prevailing view (Maunsell and van Essen, 1983). These novel data are ideal for constraining models of binocular motion integration in MT. We have built binocular models of MT neurons to show how MID sensitivity can arise via inter-ocular velocity differences (IOVDs). Our modeling framework encompasses features common to established monocular MT models and extends the model of Rust et al. (2006) to be image-computable. Within this framework we built binocular versions of pattern and component model units. We reproduced the previously unexplained results of Tailby et al. (2010) showing a striking loss of pattern motion sensitivity with dichoptic plaid presentation. We found that monocular motion-opponent suppression creates the decrease in pattern index seen in both pattern and component cells. We also found that the characteristic differences between pattern and component computations make different predictions for MID tuning: FP motion-tuned neurons were better represented by the component cell model, whereas MID-tuned cells could only be reconciled with a 3D motion-tuned pattern cell receiving binocularly imbalanced input. By implementing binocular mixing in V1, we were able to generate robust IOVD-based MID tuning even without strictly monocular signals. Interestingly, our 3D-tuned models predict a characteristic change in direction tuning with dichoptic plaids: the preferred direction shifts by 90° under dichoptic presentation. Overall, our models integrate motion and binocular processing to explain recent novel findings, make several testable predictions relating different forms of motion sensitivity, and provide a foundation for building a unified binocular disparity-motion model of the dorsal stream.
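
The abstract names a concrete processing pipeline: monocular V1 direction channels, monocular motion-opponent suppression (after Rust et al., 2006), and binocular pooling by pattern-style or component-style MT units, with IOVD-based MID tuning arising when the two eyes' pooled directions differ. The Python/NumPy sketch below is a minimal illustration of those stages under our own simplifying assumptions; it is not the authors' image-computable implementation, and every function name, parameter value, and weight profile in it is hypothetical.

```python
import numpy as np

# Illustrative sketch (not the authors' implementation) of the stages named
# in the abstract: monocular V1 direction channels -> monocular motion-
# opponent suppression -> eye-specific MT pooling, pattern-style (broad)
# or component-style (narrow). All parameter values are assumptions.

N_DIR = 12
DIRS = np.arange(N_DIR) * 360.0 / N_DIR   # channel preferred directions (deg)

def v1_responses(stim_dirs, bw_deg=45.0):
    """Toy V1 direction-channel responses to drifting-grating components
    shown to one eye; stim_dirs lists their drift directions in degrees."""
    k = 1.0 / np.deg2rad(bw_deg) ** 2
    r = np.zeros(N_DIR)
    for d in stim_dirs:
        r += np.exp(k * (np.cos(np.deg2rad(DIRS - d)) - 1.0))
    return r

def opponent(r):
    """Monocular opponent suppression: each channel is suppressed by the
    channel tuned 180 deg away (cf. Rust et al., 2006)."""
    return np.maximum(r - np.roll(r, N_DIR // 2), 0.0)

def pool_weights(pref_deg, pattern):
    """Direction-pooling weights: broad for pattern units, narrow for
    component units, with a weak inhibitory surround."""
    width = 1.0 if pattern else 0.15
    return np.exp((np.cos(np.deg2rad(DIRS - pref_deg)) - 1.0) / width) - 0.3

def mt_unit(left_dirs, right_dirs, pref_left=0.0, pref_right=0.0,
            pattern=True, left_weight=0.5):
    """Binocular MT unit: opponent monocular signals pooled with eye-specific
    weights. Equal eye preferences give a frontoparallel (FP) unit; opposite
    preferences in the two eyes give an IOVD-based motion-in-depth unit.
    left_weight != 0.5 models binocularly imbalanced input."""
    r_left = opponent(v1_responses(left_dirs))
    r_right = opponent(v1_responses(right_dirs))
    drive = (left_weight * pool_weights(pref_left, pattern) @ r_left
             + (1.0 - left_weight) * pool_weights(pref_right, pattern) @ r_right)
    return max(float(drive), 0.0)

# Conventional vs dichoptic 120-deg plaid, pattern direction swept over DIRS:
# conventional shows both components to both eyes; dichoptic splits them.
conventional = [mt_unit((p - 60, p + 60), (p - 60, p + 60)) for p in DIRS]
dichoptic = [mt_unit((p - 60,), (p + 60,)) for p in DIRS]

# IOVD probe: opposite FP directions in the two eyes drive a motion-in-depth
# unit whose pref_left and pref_right are 180 deg apart.
mid = mt_unit((180.0,), (0.0,), pref_left=180.0, pref_right=0.0)
print(np.round(conventional, 2), np.round(dichoptic, 2), round(mid, 2))
```

In this toy version, setting pref_left and pref_right 180° apart is what converts an FP-tuned unit into an IOVD-driven MID unit, and because opponency acts before binocular combination, conventional and dichoptic plaids drive the pooling stage differently. The binocular mixing in V1 described in the abstract could likewise be sketched by letting v1_responses see a weighted blend of the two eyes' stimuli rather than one eye's alone.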

Start Date

14 May 2015, 9:50 AM

End Date

14 May 2015, 10:15 AM

Session Number

03

Session Title

Binocular Vision and Stereo
