Why Latent Representations in Convolutional Neural Networks Fall Outside Visual Space

Keywords

Neural networks, latent representation, information processing

Abstract

It is common to compare the properties of visual information processing in artificial neural networks and in the primate visual system.

Remarkable similarities have been observed between the responses of neurons in inferotemporal (IT) cortex and units in the higher layers of convolutional neural networks (CNNs). Here I show that the latent representations formed by the weights of convolutional layers do not necessarily reflect the visual domain; instead, they depend strongly on the choice of training set and cost function.

The most striking example is an individual unit that is highly selective for some members of a category yet is inhibited by visually similar objects from the same category.

This surprising selectivity profile cannot be attributed to incidental differences in low-level statistics.
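To make the kind of measurement described above concrete, below is a minimal sketch (not the code used in this work) of how one might record the response of a single convolutional unit to different exemplars of a category. It assumes a pretrained torchvision VGG-16; the layer index, channel number, and image file names are hypothetical placeholders.

```python
# A minimal sketch, not the author's method: probing the selectivity of a
# single unit in a higher convolutional layer of a pretrained CNN.
# Assumptions: torchvision's VGG-16 with ImageNet weights; the layer index,
# channel number, and image file names below are hypothetical placeholders.

import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # standard ImageNet
                         std=[0.229, 0.224, 0.225]),   # normalization
])

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

activations = {}

def hook(module, inputs, output):
    # Keep the pre-ReLU feature map, so negative values (inhibition)
    # remain visible rather than being clipped to zero.
    activations["feat"] = output.detach()

# features[28] is the last convolutional layer of VGG-16.
model.features[28].register_forward_hook(hook)

def unit_response(image_path, channel):
    """Spatially averaged pre-ReLU response of one channel to one image."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        model(img)
    return activations["feat"][0, channel].mean().item()

# Hypothetical usage: compare one unit's responses to visually similar
# exemplars of the same category (file names are placeholders).
# for path in ["cat_01.jpg", "cat_02.jpg", "cat_03.jpg"]:
#     print(path, unit_response(path, channel=42))
```

A unit whose latent representation lay inside visual space should respond comparably to visually similar exemplars; strong positive responses to some exemplars alongside negative (inhibited) pre-ReLU responses to others would be the kind of selectivity profile described above.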

Start Date

May 16, 2018, 3:20 PM

End Date

May 16, 2018, 3:45 PM

Location

St. Petersburg, Russia
