Modeling distribution learning in visual search

Keywords

feature distributions, visual search, orientation, attention

Abstract

Chetverikov, Campana, and Kristjansson (2017) used visual search to demonstrate that human observers are able to extract statistical distributions of visual features. Observers searched for an odd-one-out target among distractors randomly drawn from the same distribution over the course of several “prime” trials. Then, on test trials, the parameters of the target and distractors changed, and response times (RT) were analyzed as a function of the distance in feature space between the test target and the mean of the distractor features on the preceding prime trials. The resulting RT curves followed the probability density of the prime distractor distributions. This approach provides a detailed estimate of observers’ probabilistic representations. However, the several transformations involved in mapping physical feature distributions to response times add noise. Moreover, observers do not know the target and distractor features in advance and must learn and re-learn them during the task, further complicating the matter. An accurate model of the process is necessary to gain further insights.
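
To make the analysis concrete, here is a minimal sketch in Python. The binning scheme, bin width, and all names and numbers are my own illustrative assumptions, and the data are synthetic, not from the experiment; the point is only the shape of the computation: bin test-trial RTs by the distance between the test target and the prime distractor mean, then compare the binned curve to the prime distribution's probability density.

    import numpy as np

    def rt_by_distance(target_dist, rt, bin_width=10):
        """Mean RT in bins of test-target distance from the prime distractor mean (deg)."""
        edges = np.arange(-90, 91, bin_width)
        idx = np.digitize(target_dist, edges)
        centers = edges[:-1] + bin_width / 2
        means = np.array([rt[idx == i].mean() if np.any(idx == i) else np.nan
                          for i in range(1, len(edges))])
        return centers, means

    # Synthetic illustration only: RTs constructed so that the binned curve
    # mirrors a Gaussian prime distractor distribution (SD = 15 deg).
    rng = np.random.default_rng(1)
    dist = rng.uniform(-90, 90, 2000)              # test-target distances (deg)
    rt = 600 + 200 * np.exp(-dist**2 / (2 * 15**2)) + rng.normal(0, 50, 2000)
    centers, curve = rt_by_distance(dist, rt)      # curve peaks near zero distance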

Here I report the first, naïve attempts to construct a model of distribution encoding using data from the orientation domain. The model includes a column of feature detectors with equally spaced tuning curves at each stimulus location. Their spike rates are modeled with a simple Poisson generator and fed into second-level neurons that compute spatial and temporal surprise (Itti & Baldi, 2009). This model already provides rough estimates of the distributions via population codes, and its surprise maps can guide search. However, the correlation with RT is weak (r = 0.13). I plan to improve the model to obtain more precise probability coding and to incorporate a decision-making module (Chen & Perona, 2015) to increase the accuracy of the RT predictions.
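
A rough sketch of this pipeline is given below. It assumes von Mises-style tuning curves and the Gamma-Poisson belief update with a forgetting factor that Itti and Baldi (2009) use for temporal surprise; all parameter values, names, and the number of channels are placeholder assumptions of mine, not fitted quantities from the model.

    import numpy as np
    from scipy.special import digamma, gammaln

    N_CHANNELS = 18                                 # equally spaced preferred orientations
    PREFS = np.linspace(0, np.pi, N_CHANNELS, endpoint=False)
    KAPPA, GAIN, BASE = 2.0, 20.0, 1.0              # assumed tuning width, gain, baseline

    def tuning(theta):
        """Mean Poisson rates of the detector column for orientation theta (radians).
        Orientation is pi-periodic, hence the factor of 2 inside the cosine."""
        return BASE + GAIN * np.exp(KAPPA * (np.cos(2 * (theta - PREFS)) - 1))

    def gamma_kl(a1, b1, a2, b2):
        """KL divergence KL(Gamma(a1, rate=b1) || Gamma(a2, rate=b2))."""
        return ((a1 - a2) * digamma(a1) - gammaln(a1) + gammaln(a2)
                + a2 * np.log(b1 / b2) + a1 * (b2 - b1) / b1)

    rng = np.random.default_rng(0)
    alpha = np.ones(N_CHANNELS)                     # Gamma belief over each channel's rate
    beta = np.ones(N_CHANNELS)
    ZETA = 0.7                                      # assumed forgetting factor

    for trial in range(5):                          # a short streak of "prime" trials
        # Distractor orientations drawn from the prime distribution (mean 90 deg)
        thetas = rng.normal(np.pi / 2, 0.2, size=36)
        counts = rng.poisson(tuning(thetas[:, None])).mean(axis=0)  # count per channel
        a_post, b_post = ZETA * alpha + counts, ZETA * beta + 1.0   # decayed update
        surprise = gamma_kl(a_post, b_post, alpha, beta).sum()      # temporal surprise
        alpha, beta = a_post, b_post
        print(f"trial {trial}: surprise = {surprise:.2f}")

Here the population of counts across channels is the model's estimate of the distractor distribution, and the per-channel surprise (summed here for brevity) is what would feed the spatial surprise map that guides search.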

Start Date

17-5-2017 4:38 PM

End Date

17-5-2017 5:00 PM
