Neurocomputing on distributed memory machines

Hyeran Byun, Purdue University

Abstract

The "optimal" partitioning of geometric data structures (e.g., meshes, grids) underlies the formulation and implementation of the so-called domain decomposition methodology for the numerical solution of partial differential equations on distributed memory machines. This partitioning problem can be described by an unconstrained optimization model. In this thesis we formulate and study several neural-network-based and stochastic optimizers that solve the optimization problems corresponding to partitioning finite element meshes into balanced sub-meshes with minimum interface length. These optimizers tend to be computation- and memory-bound for large meshes, so their parallel implementation is well justified. We develop a portable parallelization strategy for neurocomputations in general, which we apply to the considered optimizers and to some other commonly used neurocomputations. The strategy is based on a set of computationally intensive mathematical functions, suitable for expressing neurocomputations, which we have identified and implemented on the nCUBE 2, Intel iPSC/i860, and Intel DELTA machines using the PICL message-passing interface. Moreover, we have implemented a set of well-known neurocomputations in terms of these functions and executed their implicitly parallelized code on the above parallel machines. The timing data collected indicate the effectiveness of this approach for the selected benchmark neurocomputations.
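
The unconstrained optimization model mentioned above can be illustrated with a minimal sketch: one common way to express mesh partitioning is as a cost that sums the interface length (edges cut between sub-meshes) and a penalty for load imbalance. The function name, the quadratic penalty form, and the weight below are illustrative assumptions, not the thesis's actual formulation.

```python
def partition_cost(edges, assignment, num_parts, penalty=1.0):
    """Hypothetical partitioning objective: cut size plus a
    quadratic load-imbalance penalty (illustrative only)."""
    # Interface length: edges whose endpoints lie in different sub-meshes.
    cut = sum(1 for u, v in edges if assignment[u] != assignment[v])
    # Load imbalance: squared deviation of each part's size from the ideal.
    sizes = [0] * num_parts
    for part in assignment.values():
        sizes[part] += 1
    ideal = len(assignment) / num_parts
    imbalance = sum((s - ideal) ** 2 for s in sizes)
    return cut + penalty * imbalance

# Tiny 4-node mesh graph split into two balanced halves.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
assign = {0: 0, 1: 0, 2: 1, 3: 1}
print(partition_cost(edges, assign, 2))  # 2 cut edges, zero imbalance -> 2.0
```

A neural or stochastic optimizer would then search over assignments to minimize such a cost, rather than enforcing balance as a hard constraint.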

Degree

Ph.D.

Advisors

Houstis, Purdue University.

Subject Area

Computer science
