For hyperspectral data classification, a key problem is avoiding singular covariance estimates, or excessive estimation error from near-singularity, caused by limited training data. This study addresses that problem via regularized covariance estimators and feature extraction algorithms. A second purpose is to build a classification procedure that retains the advantages of the algorithms proposed in this study while being robust in the sense of not requiring extensive analyst operator skill. A pair of covariance estimators, called Mixed-LOOCs, is proposed for avoiding excessive covariance estimation error. Mixed-LOOC2 has advantages over LOOC and BLOOC and requires less computation than either. Based on Mixed-LOOC2, new DAFE and mixture classifier algorithms are proposed. Current feature extraction algorithms, while effective in some circumstances, have significant limitations. Discriminant analysis feature extraction (DAFE) is fast but does not perform well with classes whose mean values are similar, and it produces only N-1 reliable features, where N is the number of classes. Decision boundary feature extraction (DBFE) does not have these limitations but does not perform well when training sets are small. A new nonparametric weighted feature extraction method (NWFE) is developed to solve the problems of DAFE and DBFE. NWFE takes advantage of the desirable characteristics of DAFE and DBFE while avoiding their shortcomings. Finally, experimental results show that applying NWFE features to a mixture classifier based on the Mixed-LOOC2 covariance estimator gives the best performance and is a robust procedure for classifying hyperspectral data.
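The core idea behind LOOC-style regularized estimators is to mix the class sample covariance with a better-conditioned target (e.g. its diagonal, or pooled terms) and to select the mixing weight by leave-one-out likelihood. The sketch below illustrates only that selection idea under simplified assumptions; it is not the Mixed-LOOC2 estimator itself (which mixes additional pooled-covariance terms), and the function name and α grid are illustrative choices.

```python
import numpy as np

def shrunk_covariance(X, alphas=np.linspace(0.0, 1.0, 11)):
    """Sketch of a LOOC-style regularized covariance estimate.

    Mixes the sample covariance S with its diagonal,
        Sigma(alpha) = (1 - alpha) * S + alpha * diag(S),
    and picks alpha by the leave-one-out Gaussian log-likelihood.
    (The actual LOOC / Mixed-LOOC estimators also mix in pooled
    covariance terms; this only illustrates the selection idea.)
    """
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    best_alpha, best_ll = None, -np.inf
    for alpha in alphas:
        ll = 0.0
        for i in range(n):
            Xi = np.delete(X, i, axis=0)              # leave sample i out
            mu = Xi.mean(axis=0)
            S = np.cov(Xi, rowvar=False, bias=True)
            Sigma = (1 - alpha) * S + alpha * np.diag(np.diag(S))
            sign, logdet = np.linalg.slogdet(Sigma)
            if sign <= 0:                             # singular: reject alpha
                ll = -np.inf
                break
            diff = X[i] - mu
            ll += -0.5 * (logdet + diff @ np.linalg.solve(Sigma, diff))
        if ll > best_ll:
            best_alpha, best_ll = alpha, ll
    # Final estimate on all samples with the selected mixing weight.
    S = np.cov(X, rowvar=False, bias=True)
    return (1 - best_alpha) * S + best_alpha * np.diag(np.diag(S)), best_alpha
```

With small training sets the leave-one-out criterion tends to favor larger α, pushing the estimate toward the well-conditioned diagonal target and keeping the quadratic classifier's inverse covariance stable.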

Date of this Version

June 2001