Parallel, self-organizing, hierarchical neural networks with fuzzy input signal representation, competitive learning and safe rejection schemes

Seongwon Cho, Purdue University

Abstract

In this thesis we present parallel, self-organizing, hierarchical neural networks with fuzzy input signal representation, competitive learning, and safe rejection schemes (FCSNN). A computational scheme for the partial degree of match (DM) in fuzzy expert classification systems and a method for automatically deriving the membership functions of fuzzy sets are proposed. The derived fuzzy sets and the DM computation are used together for the fuzzy representation of input information in neural networks, both to improve classification accuracy and to classify objects whose attribute values lack clear boundaries. The original input is converted into multidimensional values using the fuzzy input signal representation scheme. A new learning algorithm combining competitive learning with multiple safe rejection schemes is proposed and used as the learning algorithm of the parallel, self-organizing, hierarchical neural network (PSHNN), avoiding the disadvantages of both supervised and competitive learning algorithms. After reference vectors are computed by competitive learning in a stage neural network, the safe rejection schemes are constructed. The basic idea of a safe rejection scheme is to reject uncertain vectors so that no training vectors are misclassified. Two kinds of safe rejection schemes, RADPN and RAD, are developed and used together. The next stage neural network is trained with nonlinearly transformed values of only those training vectors rejected by the previous stage. Experimental results comparing the proposed learning algorithm with the backpropagation network and with the PSHNN using the delta rule are discussed; the proposed algorithm produced higher classification accuracy and much faster learning.
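The abstract only sketches the stagewise train-and-reject idea. A minimal illustrative reading, with all function names and details assumed (in particular, RAD is read here as a per-reference-vector acceptance radius chosen so that no accepted training vector is misclassified; the thesis's actual RAD/RADPN definitions and learning rule may differ):

```python
import numpy as np

def competitive_learning(X, n_protos, lr=0.1, epochs=20, seed=0):
    """Simple winner-take-all competitive learning (illustrative stand-in
    for the thesis's learning rule)."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_protos, replace=False)].astype(float)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            j = np.argmin(np.linalg.norm(W - x, axis=1))  # winning prototype
            W[j] += lr * (x - W[j])                       # move it toward the input
    return W

def safe_radii(W, labels_W, X, y):
    """Hypothetical 'safe' acceptance radius per reference vector: the
    distance to the nearest training vector of a DIFFERENT class, so that
    any vector accepted inside the radius cannot be a misclassified
    training vector."""
    radii = np.empty(len(W))
    for i, w in enumerate(W):
        d = np.linalg.norm(X - w, axis=1)
        other = d[y != labels_W[i]]
        radii[i] = other.min() if len(other) else np.inf
    return radii

def classify_or_reject(x, W, labels_W, radii):
    """Accept if x falls inside the safe radius of its nearest reference
    vector; otherwise reject (None), i.e., pass x on to the next stage."""
    d = np.linalg.norm(W - x, axis=1)
    i = np.argmin(d)
    return labels_W[i] if d[i] < radii[i] else None
```

Rejected vectors would then be nonlinearly transformed and used to train the next stage network, as the abstract describes.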
Reference vectors were learned by two methods, and their classification accuracies were compared. When the reference vectors are computed separately for each class (Method II), higher classification accuracy is obtained than when they are computed together for all the classes (Method I). This result is related to the rejection of hard vectors and is the opposite of what is normally expected. Method II also has the advantage of parallelism: the reference vectors for all the classes can be computed simultaneously. Experiments comparing the fuzzy input representation with the representation of the original decimal values indicated the superiority of the transformed inputs obtained with the computational scheme of the partial degree of match.
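The Method I / Method II distinction can be sketched as follows. This is an assumed reading, not the thesis's implementation: Method I learns prototypes jointly over all classes and labels them afterwards, while Method II runs an independent learning pass within each class, which is what makes the per-class runs parallelizable:

```python
import numpy as np

def _wta(X, k, lr=0.1, epochs=20, seed=0):
    """Minimal winner-take-all competitive learning (stand-in helper;
    the thesis's actual learning rule is not specified in the abstract)."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            j = np.argmin(np.linalg.norm(W - x, axis=1))
            W[j] += lr * (x - W[j])
    return W

def method1(X, y, k, **kw):
    """Method I: prototypes learned jointly over all classes, then each
    prototype is labeled by the majority class of the samples it wins."""
    W = _wta(X, k, **kw)
    win = np.argmin(np.linalg.norm(X[:, None] - W[None], axis=2), axis=1)
    lab = np.array([np.bincount(y[win == j], minlength=y.max() + 1).argmax()
                    for j in range(k)])
    return W, lab

def method2(X, y, k_per_class, **kw):
    """Method II: prototypes learned separately per class; the per-class
    runs are independent and could execute in parallel."""
    Ws, labs = [], []
    for c in np.unique(y):
        Wc = _wta(X[y == c], k_per_class, **kw)
        Ws.append(Wc)
        labs.extend([c] * len(Wc))
    return np.vstack(Ws), np.array(labs)
```

In Method II each prototype's class label is fixed by construction, whereas in Method I a prototype can sit between classes, which interacts with how hard (boundary) vectors are rejected.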

Degree

Ph.D.

Advisors

Ersoy, Purdue University.

Subject Area

Electrical engineering|Computer science|Artificial intelligence
