Abstract
It is common to compare visual information processing in artificial neural networks with that in the primate visual system.
Remarkable similarities have been observed between the responses of neurons in IT cortex and units in higher layers of CNNs. Here I show that the latent representations formed by the weights of convolutional layers do not necessarily reflect the visual domain; instead, they depend strongly on the choice of training set and cost function.
The most striking example is an individual unit that is highly selective for some members of a category yet is inhibited by visually similar objects of the same category.
This surprising selectivity profile cannot be attributed to incidental differences in low-level statistics.
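The kind of single-unit probing described here can be sketched as follows. This is a minimal illustration, not the author's actual procedure: the toy network, the probed layer, the unit index, and the random "stimuli" are all placeholders standing in for a trained CNN and real category exemplars.

```python
import torch
import torch.nn as nn

# Toy CNN standing in for a trained model (illustrative only).
torch.manual_seed(0)
net = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
)

activations = {}

def hook(module, inputs, output):
    # Record the full feature map of the probed convolutional layer.
    activations["conv"] = output.detach()

net[0].register_forward_hook(hook)

# Two hypothetical stimuli meant to represent visually similar
# members of the same category.
stim_a = torch.rand(1, 3, 32, 32)
stim_b = torch.rand(1, 3, 32, 32)

unit = 5  # index of the probed unit (arbitrary choice)
net(stim_a)
resp_a = activations["conv"][0, unit].mean().item()
net(stim_b)
resp_b = activations["conv"][0, unit].mean().item()

# A strong response to one stimulus alongside suppression for the
# other, visually similar one would mirror the within-category
# selectivity differences reported in the abstract.
print(resp_a, resp_b)
```

Comparing such spatially pooled responses across many exemplars of one category is enough to reveal whether a unit's tuning tracks visual similarity or something imposed by the training set and cost function.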
Keywords
Neural networks, latent representation, information processing
Location
St. Petersburg, Russia
Start Date
May 16, 2018, 3:20 PM
End Date
May 16, 2018, 3:45 PM
Recommended Citation
Malakhova, Katerina, "Why Latent Representations in Convolutional Neural Networks Fall Outside Visual Space" (2018). MODVIS Workshop. 3.
https://docs.lib.purdue.edu/modvis/2018/session02/3
Included in
Cognitive Neuroscience Commons, Computational Engineering Commons, Computational Neuroscience Commons