This report discusses wide and deep neural networks for multispectral and hyperspectral image classification. The trade-off between wide and deep networks has long been a topic of intense interest. Deep networks have a large number of layers stacked in the depth (horizontal) direction, whereas wide networks grow in the vertical direction; wide and deep networks therefore grow in both the vertical and horizontal directions. Several approaches for achieving such networks are described. We first review a methodology called Parallel, Self-Organizing, Hierarchical Neural Networks (PSHNN's), whose stages grow in the vertical direction; each stage can itself be a deep network, and, in turn, each layer of a deep network can be a PSHNN. The second approach makes each layer of a deep network wide, as has been explored especially with deep residual networks. The third approach is wide and deep residual neural networks, which grow in both the horizontal and vertical directions and incorporate residual learning principles to improve learning. The fourth approach places wide and deep networks in parallel: the two networks form parallel branches, the wide branch specializing in memorization and the deep branch in generalization. Leading up to these methods, we also review various types of PSHNN's and deep neural networks, including convolutional neural networks, autoencoders, and residual learning. Partly because current multispectral and hyperspectral image datasets are of moderate size, the design and implementation of wide and deep neural networks hold the potential to yield the most effective solutions. These conclusions are expected to be valid in other areas with similar data structures as well.
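To make the fourth methodology concrete, the following is a minimal NumPy sketch, not the report's actual model, of a parallel wide-and-deep forward pass: a single linear map on the raw spectral features (the wide, memorization branch) runs alongside a small MLP (the deep, generalization branch), and their outputs are summed into per-pixel class scores. All names and sizes (`wide_deep_forward`, 8 bands, 16 hidden units, 3 classes) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def wide_deep_forward(x, wide_W, deep_W1, deep_W2):
    """Combine a wide (linear) branch and a deep (MLP) branch in parallel."""
    # Wide branch: one linear map on the raw features (memorization).
    wide_out = x @ wide_W
    # Deep branch: a two-layer MLP on the same features (generalization).
    h = relu(x @ deep_W1)
    deep_out = h @ deep_W2
    # Sum the two branches and convert to class probabilities (softmax).
    logits = wide_out + deep_out
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical sizes: 8 spectral bands, 16 hidden units, 3 land-cover classes.
n_bands, n_hidden, n_classes = 8, 16, 3
wide_W = rng.normal(size=(n_bands, n_classes))
deep_W1 = rng.normal(size=(n_bands, n_hidden))
deep_W2 = rng.normal(size=(n_hidden, n_classes))

# Five random "pixels", each an 8-band spectral vector.
probs = wide_deep_forward(rng.normal(size=(5, n_bands)), wide_W, deep_W1, deep_W2)
print(probs.shape)  # (5, 3): one probability vector per pixel
```

In a trained system both branches would be fit jointly so that the wide branch memorizes specific feature patterns while the deep branch generalizes across them; here the weights are random and only the data flow is shown.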
Keywords: wide and deep neural networks, remote sensing, multispectral, hyperspectral, classification