Structure and parameter learning algorithms for fuzzy neural network systems

Chin-Teng Lin, Purdue University

Abstract

A general neural-network-based connectionist model, called the Fuzzy Neural Network (FNN), is proposed for realizing fuzzy logic control and decision systems. The proposed FNN is a feedforward multi-layered network that integrates the basic elements and functions of a traditional fuzzy logic controller into a connectionist structure with distributed learning abilities. First, a two-phase hybrid learning algorithm is proposed that combines unsupervised and supervised learning procedures to build the rule nodes and train the membership functions. This hybrid algorithm performs well when sets of training data are available off-line, but not in a real-time environment where training data arrive on-line; moreover, it cannot add nodes or change the network structure dynamically. Thus, an on-line supervised structure/parameter learning algorithm is proposed for constructing FNNs efficiently and dynamically. This algorithm uses a similarity measure of fuzzy sets for structure learning and the back-propagation scheme for parameter learning: based on the similarity measure, new output membership functions may be added and the rule-node connections changed appropriately, after which back-propagation tunes the parameters. A Reinforcement Fuzzy Neural Network (RFNN) is further proposed, constructed by integrating two FNNs, one functioning as a fuzzy predictor and the other as a fuzzy controller. By combining the on-line supervised structure/parameter learning technique, the temporal difference prediction method, and a stochastic exploratory algorithm, a reinforcement learning algorithm is proposed that can construct an RFNN automatically and dynamically from a reward-penalty signal (i.e., a "good" or "bad" signal) or from very simple fuzzy information feedback such as "high," "too high," "low," and "too low." Computer simulation examples are presented to illustrate the performance and applicability of the proposed FNN, RFNN, and their associated learning algorithms in various applications. (Abstract shortened with permission of author.)
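The similarity-driven structure-learning step described above can be illustrated with a minimal sketch. The sketch below assumes Gaussian membership functions and a Jaccard-style similarity |A ∩ B| / |A ∪ B| evaluated numerically over a discretized universe of discourse; the function names, the 0.6 threshold, and the add-or-reuse decision rule are illustrative assumptions, not the dissertation's exact formulation.

```python
import numpy as np

def gaussian_mf(x, center, width):
    """Gaussian membership function mu(x) = exp(-((x - c) / w)^2)."""
    return np.exp(-((x - center) / width) ** 2)

def fuzzy_similarity(mf_a, mf_b, universe):
    """Jaccard-style similarity |A ∩ B| / |A ∪ B|, computed numerically
    on a discretized universe (min for intersection, max for union)."""
    a, b = mf_a(universe), mf_b(universe)
    return np.trapz(np.minimum(a, b), universe) / np.trapz(np.maximum(a, b), universe)

def maybe_add_output_mf(candidate, existing, universe, threshold=0.6):
    """Structure-learning step (illustrative): if the candidate output fuzzy
    set is not sufficiently similar to any existing output membership
    function, add it as a new term node; otherwise connect the rule node
    to the most similar existing term."""
    if not existing:
        existing.append(candidate)
        return len(existing) - 1              # index of the newly added term
    sims = [fuzzy_similarity(candidate, mf, universe) for mf in existing]
    best = int(np.argmax(sims))
    if sims[best] < threshold:
        existing.append(candidate)            # new output membership function
        return len(existing) - 1
    return best                               # reuse the closest existing term

# Illustrative usage: two existing output terms, one new candidate
universe = np.linspace(0.0, 10.0, 1001)
terms = [lambda x: gaussian_mf(x, 2.0, 1.0), lambda x: gaussian_mf(x, 7.0, 1.0)]
idx = maybe_add_output_mf(lambda x: gaussian_mf(x, 4.5, 1.0), terms, universe)
print(f"rule consequent connected to output term #{idx}; total terms = {len(terms)}")
```

After a structure change of this kind, the back-propagation phase would tune the centers and widths of the membership functions, as described in the abstract.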

Degree

Ph.D.

Advisors

Lee, Purdue University.

Subject Area

Electrical engineering
