Virtual V1sion: A Collaborative Coding Project

Keywords

V1, data repository, Python code, web-based

Abstract

Virtual V1sion is a new idea for fostering modeling collaborations and data sharing. While the project is still in its infancy, the ultimate goal is a website that hosts repositories for (1) interchangeable model elements, (2) datasets that can be fit or predicted by those models, and (3) educational modules that explain the background for both the models and the datasets. The scope of the modeling is limited to predictions of V1 responses, although not all computations represented by model elements in Virtual V1sion are required to be V1-intrinsic: a goal of the project is to provide a framework in which predictions for modulation by cortico-cortical projections (i.e., feedback) can be tested. The basic framework is an array of channels (neurons), mapped either to image space or to cortex space (via a log-polar transform), with user-specified tuning (e.g., oriented vs. non-oriented, color-selective or not, simple or complex) and user-specified interactions (e.g., normalization or facilitation by selective or non-selective local pooling or by long-range signals). Thus, the model can predict behavioral responses (sensitivity/selectivity of the channel most sensitive to the stimulus), electrophysiology data (diverse responses across a local population), or neuroimaging data (a pattern of aggregate response, with user-selected weighting and blurring, mapped to cortical hemodynamic responses or scalp potentials).
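To make the channel-array framework concrete, the sketch below is a minimal, hypothetical Python example (Python being the language named in the keywords) of the kinds of interchangeable elements described above: an oriented Gabor channel bank, a log-polar image-to-cortex mapping, and divisive normalization by a non-selective local pool. The function names, parameter values, and the NumPy/SciPy dependencies are illustrative assumptions, not part of the actual Virtual V1sion code.

```python
import numpy as np
from scipy.signal import fftconvolve


def log_polar_map(x_deg, y_deg, a=0.7, k=15.0):
    """Hypothetical image-to-cortex mapping: the standard monopole
    log-polar transform w = k * log(z + a), with z in degrees of
    visual angle and w in cortical millimeters."""
    z = x_deg + 1j * y_deg
    w = k * np.log(z + a)
    return w.real, w.imag


def gabor(size, sf, theta, phase=0.0, sigma=6.0):
    """One oriented channel: a Gabor receptive field (spatial frequency
    sf in cycles/pixel, orientation theta in radians, isotropic
    Gaussian envelope)."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    envelope = np.exp(-(xs**2 + ys**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * sf * xr + phase)


def channel_responses(image, n_orientations=8, sf=0.1, size=31, sigma_n=1.0):
    """Array of oriented channels at every image location: linear
    filtering, squaring (energy), then divisive normalization by a
    non-selective local pool, as in standard normalization models."""
    thetas = np.linspace(0.0, np.pi, n_orientations, endpoint=False)
    linear = np.stack([fftconvolve(image, gabor(size, sf, th), mode="same")
                       for th in thetas])
    energy = linear ** 2                          # complex-cell-like energy
    pool = energy.sum(axis=0, keepdims=True)      # non-selective local pool
    return energy / (pool + sigma_n ** 2)         # divisive normalization


# Behavioral readout sketch: take the response of the channel most
# sensitive to the stimulus as the decision variable.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.standard_normal((128, 128))       # placeholder stimulus
    resp = channel_responses(image)
    print("max channel response:", resp.max())
```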

Start Date

13-5-2016 11:30 AM

End Date

13-5-2016 11:55 AM

