Development of algorithms for generalization, convergence, and parallelization in neural networks

Jamshid Nazari, Purdue University

Abstract

The major goals of our research are to improve neural network classification accuracy (generalization), to achieve faster convergence, and to develop better implementation strategies, either in small modules or on parallel machines. A probabilistic input representation is proposed and shown to have better generalization properties than some other representations. Parallelization of the neural network algorithms is investigated, and the backpropagation algorithm is shown to be easily parallelizable. Redundant output representations are studied, together with post-processing analogous to bit correction at the output of a communication channel; it is shown that error control coding techniques can be applied to perform error correction at the output of the neural network. Powerful C++ class libraries are designed to provide tools that facilitate further research in this area. Several new output representations based on unitary transforms are introduced; these representations are shown to provide faster convergence and better generalization, and to require smaller networks than conventional output representations to solve a given problem. It is also shown that training with conventional output representations can get stuck in local minima, whereas the proposed output representations find direct paths to the desired solutions.
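The abstract's claim that backpropagation is easily parallelizable can be read through the fact that a batch gradient is a sum of per-example gradients, so the examples can be partitioned across workers and the partial gradients summed. The following is a minimal sketch of that data-parallel reading, not the dissertation's actual parallel scheme; the one-parameter model and the names (gradOne, nThreads) are illustrative assumptions.

    // Minimal sketch (an assumption, not the dissertation's parallel scheme):
    // the batch gradient is a sum of per-example gradients, so examples are
    // split across threads and the partial gradients are summed at the end.
    #include <cstddef>
    #include <iostream>
    #include <thread>
    #include <vector>

    // Per-example gradient of squared error for a 1-parameter linear model
    // y = w * x; stands in for one backpropagation pass through a network.
    double gradOne(double w, double x, double t) {
        return 2.0 * (w * x - t) * x;
    }

    int main() {
        const double w = 0.5;
        const std::vector<double> xs{1, 2, 3, 4, 5, 6, 7, 8};
        const std::vector<double> ts{2, 4, 6, 8, 10, 12, 14, 16};

        const std::size_t nThreads = 4;
        std::vector<double> partial(nThreads, 0.0);
        std::vector<std::thread> workers;
        const std::size_t chunk = xs.size() / nThreads;

        for (std::size_t tI = 0; tI < nThreads; ++tI)
            workers.emplace_back([&, tI] {
                // Each worker accumulates into its own slot: no locking needed.
                for (std::size_t i = tI * chunk; i < (tI + 1) * chunk; ++i)
                    partial[tI] += gradOne(w, xs[i], ts[i]);
            });
        for (auto& th : workers) th.join();

        double grad = 0.0;                 // reduce the partial sums serially
        for (double p : partial) grad += p;
        std::cout << "batch gradient: " << grad << '\n';
    }

Only the final reduction is serial; the per-example passes, which dominate the cost, run independently, which is what makes the algorithm easy to parallelize.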
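As a concrete illustration of the output-representation and error-correction ideas, here is a minimal sketch, not code from the dissertation's C++ libraries: class targets are encoded as rows of a Hadamard matrix (a unitary transform whose rows also form an error-correcting code), and a noisy network output is decoded to the best-correlating row, which performs the bit-correction post-processing described above. The names (hadamard, decode, the length N) are illustrative assumptions.

    // Minimal sketch (illustrative only, not the dissertation's library code).
    // Targets are rows of an 8x8 Sylvester Hadamard matrix; decoding picks the
    // row with maximum correlation, correcting isolated output-bit errors.
    #include <array>
    #include <cstddef>
    #include <iostream>

    constexpr std::size_t N = 8;  // codeword length; must be a power of two

    // Build an N x N Hadamard matrix with +1/-1 entries (Sylvester construction).
    std::array<std::array<int, N>, N> hadamard() {
        std::array<std::array<int, N>, N> H{};
        H[0][0] = 1;
        for (std::size_t k = 1; k < N; k <<= 1)
            for (std::size_t i = 0; i < k; ++i)
                for (std::size_t j = 0; j < k; ++j) {
                    H[i][j + k]     =  H[i][j];
                    H[i + k][j]     =  H[i][j];
                    H[i + k][j + k] = -H[i][j];
                }
        return H;
    }

    // Decode a real-valued network output to the class whose Hadamard row
    // correlates best with it (nearest-codeword decoding).
    std::size_t decode(const std::array<double, N>& y,
                       const std::array<std::array<int, N>, N>& H) {
        std::size_t best = 0;
        double bestCorr = -1e300;
        for (std::size_t c = 0; c < N; ++c) {
            double corr = 0.0;
            for (std::size_t j = 0; j < N; ++j) corr += y[j] * H[c][j];
            if (corr > bestCorr) { bestCorr = corr; best = c; }
        }
        return best;
    }

    int main() {
        const auto H = hadamard();
        // Simulate a network output for class 3 with one "bit" flipped; the
        // length-8 Hadamard code has minimum distance 4, so one error is fixed.
        std::array<double, N> y{};
        for (std::size_t j = 0; j < N; ++j) y[j] = H[3][j];
        y[5] = -y[5];
        std::cout << "decoded class: " << decode(y, H) << '\n';  // prints 3
    }

Correlation decoding works here because the Hadamard rows are mutually orthogonal, giving the codebook a minimum distance of N/2.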

Degree

Ph.D.

Advisors

Ersoy, Purdue University.

Subject Area

Electrical engineering
