Classification of remotely sensed multispectral images involves assigning each pixel to a class whose characteristics match known land cover. This is an important step in remote sensing for extracting information about the Earth's surface. Statistical methods and computational intelligence algorithms such as neural networks are commonly used for classification; however, no single classifier performs well on all kinds of multispectral images. To obtain consistent and improved results, consensual and hierarchical approaches are applied. The proposed method consists of nonlinear image filtering, multiple classifiers based on statistical methods or hierarchical neural networks with rejection schemes, and a combining scheme that integrates the results of the multiple classifiers by a consensus rule. Nonlinear image filtering reduces the variance of homogeneous regions and improves spectral separability. Most classification errors occur with data that lie close to boundaries between classes. To handle such data more effectively, a hierarchical structure is applied to classification with neural networks: successive classifiers, each tuned to reduce the remaining error, increase classification performance. This structure includes detection schemes that decide, for each input, whether successive classifiers should be utilized, and rules are developed to determine automatically how many successive classifiers are needed. To obtain a more reliable classification result for a given input pattern, the multiple classification results for that pattern are combined by a consensus rule. Optimal weights for combining the multiple classification results are computed in the least-squares sense from the trained outputs of the single classifiers to be combined; these weights are used to derive a consensus of the multiple classification results.
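The least-squares weighting of multiple classifier outputs can be sketched as follows. This is a hypothetical illustration, not the paper's code: the array names, shapes, and the use of class-membership outputs stacked into a design matrix are all assumptions, and synthetic random data stands in for trained classifier outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C, K = 200, 3, 2                      # training pixels, classes, classifiers

labels = rng.integers(0, C, size=N)      # synthetic ground-truth labels
targets = np.eye(C)[labels]              # one-hot targets, shape (N, C)

# Synthetic per-classifier class-membership outputs, shape (K, N, C);
# in the paper's setting these would be the trained results of the
# single classifiers to be combined.
outputs = rng.random((K, N, C))
outputs /= outputs.sum(axis=2, keepdims=True)

# Stack classifier outputs side by side: design matrix of shape (N, K*C).
X = outputs.transpose(1, 0, 2).reshape(N, K * C)

# Combining weights computed in the least-squares sense against the targets.
W, *_ = np.linalg.lstsq(X, targets, rcond=None)

# Consensus for each pixel: weighted combination of all results, then argmax.
consensus = X @ W                        # shape (N, C)
decisions = consensus.argmax(axis=1)
```

With real classifier outputs in place of the random arrays, `W` weights each classifier's per-class output so that the combination best reproduces the training targets in the least-squares sense.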
If the classifier is based on neural networks, a single learning algorithm can generate multiple different results by preprocessing the input data and varying the learning parameters. Since the same algorithm trained in these different ways produces results that differ from one another, with diverse errors, much as classification with multiple different types of classifiers does, combining these results likewise increases classification performance. Experimental results with the proposed methods are discussed.
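A minimal sketch of this idea, under assumed details not taken from the paper: a simple softmax classifier trained by gradient descent stands in for the neural network, and the two runs differ in input preprocessing (raw vs. standardized) and learning rate before their outputs are combined.

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, C = 300, 4, 3
X = rng.normal(size=(N, D))
true_W = rng.normal(size=(D, C))
y = (X @ true_W).argmax(axis=1)          # synthetic labels
T = np.eye(C)[y]                         # one-hot targets

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(Xp, lr, steps=200):
    """Train one softmax classifier by batch gradient descent;
    returns its class-membership outputs on the training data."""
    W = np.zeros((Xp.shape[1], C))
    for _ in range(steps):
        P = softmax(Xp @ W)
        W -= lr * Xp.T @ (P - T) / N
    return softmax(Xp @ W)

# Same algorithm, two different trainings: raw inputs with one learning
# rate, standardized inputs with another -- their errors differ.
P1 = train(X, lr=0.1)
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
P2 = train(X_std, lr=0.5)

# Combine the two classification results (equal-weight consensus here;
# the paper instead computes least-squares combining weights).
combined = (P1 + P2) / 2
acc = (combined.argmax(axis=1) == y).mean()
```

The two runs realize the same learning algorithm under different preprocessing and parameters, so their outputs, and hence their errors, differ; the combination plays the role of the consensus over multiple results.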

Date of this Version

April 2006