Keywords
Hebbian learning, intrinsic plasticity, object recognition, visual cortex
Abstract
Much effort has been spent developing biologically plausible models of different aspects of processing in the visual cortex. However, most of these models have not been investigated with respect to the functionality of their neural code for object recognition, in a manner comparable to the deep learning framework of the machine learning community.
We developed a model of V1 and V2 based on anatomical evidence of the layered architecture, using excitatory and inhibitory neurons whose incoming connectivity is learned in parallel. We address learning through three different mechanisms of plasticity: intrinsic plasticity, Hebbian learning with homeostatic regulation, and structural plasticity.
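As a minimal sketch of how two of these mechanisms can interact (not the authors' implementation; learning rates, the target rate, and the rate-based neuron model are illustrative assumptions), the following combines a Hebbian weight update with Oja-style homeostatic regulation and an intrinsic-plasticity rule that adapts each neuron's threshold toward a target firing rate:

```python
# Hypothetical sketch: Hebbian learning with homeostatic regulation plus
# intrinsic plasticity, for simple rate-based rectified-linear neurons.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 16, 4
W = rng.normal(0.0, 0.1, size=(n_out, n_in))   # feedforward weights
theta = np.zeros(n_out)                         # adaptive firing thresholds

eta_w, eta_ip, target_rate = 0.01, 0.005, 0.1   # assumed learning parameters

def step(x, W, theta):
    """Apply one plasticity step for a single input vector x."""
    y = np.maximum(W @ x - theta, 0.0)          # rectified activation
    # Hebbian term plus Oja-style decay: keeps weight norms bounded
    # (a simple stand-in for homeostatic regulation).
    W = W + eta_w * (np.outer(y, x) - (y**2)[:, None] * W)
    # Intrinsic plasticity: raise the threshold of neurons firing above
    # the target rate, lower it for those firing below.
    theta = theta + eta_ip * (y - target_rate)
    return y, W, theta

for _ in range(200):
    x = np.abs(rng.normal(size=n_in))           # toy non-negative input
    y, W, theta = step(x, W, theta)
```

In such a scheme the homeostatic decay prevents runaway weight growth from pure Hebbian correlation, while the threshold adaptation distributes activity across the population, both of which matter when many neurons learn in parallel.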
For evaluation, we trained the model on natural scenes and analyzed the resulting receptive fields, i.e. their invariance properties, tuning characteristics, and robustness to input variations, which are of essential interest as they reveal the basic principles of feature extraction and invariant representation for object recognition. Furthermore, we report the recognition accuracy of the model on the COIL-100 dataset for all V1 and V2 layers to demonstrate the object recognition capability of each model layer.
Start Date
13-5-2015 4:25 PM
End Date
13-5-2015 4:50 PM
Session Number
02
Session Title
Shape and Form
A Recurrent Multilayer Model with Hebbian Learning and Intrinsic Plasticity Leads to Invariant Object Recognition and Biologically Plausible Receptive Fields