Keywords
structural plasticity, experience-dependent plasticity, neural network, Hebbian learning, V1, V2
Abstract
One challenge in creating neural models of the visual system is the appropriate definition of the connectivity: the modeler constrains the results with this definition. Unfortunately, sufficient information about connection sizes is often unavailable, e.g. for deeper layers or for particular neuron types such as interneurons. Hence, a mechanism that refines the connection structure based on what is learned would be desirable.
Such a mechanism can be found in the human brain in the form of structural plasticity, that is, the formation and removal of synapses. For our model, we exploit the observations that synaptic connections are likely to form in the proximity of existing synapses and that synapse removal is related to the volume of the spine forming the synapse. We implement these mechanisms as probabilistic processes: the probability of synapse formation is determined by the strength of neighboring synapses, and the probability of synapse removal depends on the synaptic weight.
We demonstrate the approach in a model of the visual areas V1 and V2. The model learns biologically plausible receptive fields while developing connection matrices that closely fit the learned receptive fields. We show that connections grow and retract during learning and that receptive fields are therefore not restricted to their initial boundaries. Nevertheless, the initial retinotopic organization of the neurons is preserved. Testing the ability to overcome the modeler's bias by varying the size of the initial connection matrix shows that all versions develop similar receptive field sizes. Hence, we suggest structural plasticity as a suitable mechanism for learning diverse receptive field structures while overcoming the modeler's bias.
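The probabilistic growth/removal rule described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 4-neighborhood definition, the probabilities `p_form` and `p_remove`, and the initial weight of newly formed synapses are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def neighbor_strength(w, mask):
    """Summed weight of the existing synapses in each site's 4-neighborhood."""
    s = np.zeros_like(w)
    wm = w * mask
    s[1:, :] += wm[:-1, :]
    s[:-1, :] += wm[1:, :]
    s[:, 1:] += wm[:, :-1]
    s[:, :-1] += wm[:, 1:]
    return s

def structural_step(w, mask, p_form=0.1, p_remove=0.05, rng=rng):
    """One probabilistic structural-plasticity step on a 2D receptive field.

    w    : weight matrix of the receptive field
    mask : binary matrix, 1 where a synapse currently exists

    Formation is biased toward empty sites next to strong synapses;
    removal preferentially targets weak synapses. Parameter values
    are illustrative assumptions.
    """
    strength = neighbor_strength(w, mask)
    # Formation: only at empty sites, probability scaled by neighbor strength.
    form_prob = p_form * strength / (strength.max() + 1e-12)
    form = (rng.random(w.shape) < form_prob) & (mask == 0)
    # Removal: only at occupied sites; the weaker the weight, the likelier.
    rem_prob = p_remove * (1.0 - w / (w.max() + 1e-12))
    remove = (rng.random(w.shape) < rem_prob) & (mask == 1)
    new_mask = mask.copy()
    new_mask[form] = 1
    new_mask[remove] = 0
    # Newly formed synapses start with a small (assumed) weight.
    new_w = w * new_mask
    new_w[form] = 0.01
    return new_w, new_mask
```

Iterating this step alongside a Hebbian weight update lets the connection mask grow into regions where strong synapses develop and retract where weights decay, which is the behavior the abstract describes.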
Start Date
11-5-2016 2:50 PM
End Date
11-5-2016 3:15 PM
Included in
Spatial Synaptic Growth and Removal for Learning Individual Receptive Field Structures