A new neural network architecture called the parallel, self-organizing, hierarchical neural network (PSHNN) is discussed. The PSHNN consists of a number of stages, each of which can be a particular neural network (SNN). At the end of each SNN, error detection is carried out, and a number of input vectors are rejected. Between two SNNs, the input vectors rejected by the first SNN undergo a nonlinear transformation. The PSHNN has many desirable properties, such as optimized system complexity (in the sense of a minimized, self-organizing number of stages), high classification accuracy, minimized learning and recall times, and a truly parallel architecture in which all SNNs operate simultaneously without waiting for data from one another during testing. Experiments comparing the PSHNN to multilayered networks with backpropagation training indicated the superiority of the PSHNN.
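The staged reject-and-transform scheme described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the perceptron used for each SNN, the acceptance threshold `tau`, and the pairwise-product transform between stages are all placeholders chosen for the sketch, and recall is shown sequentially for clarity even though in the PSHNN all SNNs operate in parallel.

```python
import numpy as np

def train_stage(X, y, epochs=100, lr=0.1):
    """One stage (SNN), sketched here as a simple perceptron; labels are +/-1."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:  # misclassified: delta-rule update
                w = w + lr * yi * xi
                b = b + lr * yi
    return w, b

def nonlinear_transform(X):
    """Hypothetical nonlinear map between stages (the paper's exact transform
    may differ): append all pairwise products so the next SNN sees the
    rejected vectors in a richer input space."""
    n = X.shape[1]
    cross = np.hstack([X[:, i:i + 1] * X[:, j:j + 1]
                       for i in range(n) for j in range(i, n)])
    return np.hstack([X, cross])

def train_pshnn(X, y, n_stages=3):
    """Self-organizing training: add stages until no training vector is rejected."""
    stages = []
    Xs, ys = X, y
    for k in range(n_stages):
        w, b = train_stage(Xs, ys)
        stages.append((w, b))
        wrong = np.sign(Xs @ w + b) != ys   # error detection on the training set
        if not wrong.any() or k == n_stages - 1:
            break
        # reject the misclassified vectors and transform them for the next SNN
        Xs, ys = nonlinear_transform(Xs[wrong]), ys[wrong]
    return stages

def pshnn_predict(stages, x, tau=0.5):
    """Recall: each SNN either accepts the input (confident output, |score| > tau,
    an assumed error-detection rule) or rejects it, handing the nonlinearly
    transformed vector to the next stage; the last stage always decides."""
    for i, (w, b) in enumerate(stages):
        s = x @ w + b
        if abs(s) > tau or i == len(stages) - 1:
            return 1 if s > 0 else -1
        x = nonlinear_transform(x[None, :])[0]
```

On linearly separable data the first SNN converges and no further stages are created; on a problem like XOR the first perceptron stage must reject some vectors, so the network self-organizes additional stages.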