New methods for computation of zeros of smooth functions and eigen-decomposition of symmetric matrices

Wei He, Purdue University

Abstract

This thesis presents new algorithms for two of the fundamental problems that form the bedrock of nonlinear optimization: (i) computation of zeros of smooth functions and (ii) computation of eigenvalues of symmetric matrices. Computing a zero of a smooth function is an old and extensively researched problem in numerical computation. While a large body of results and algorithms on this problem has been reported in the literature, to the extent we are aware, the published literature does not contain a globally convergent algorithm for finding a zero of an arbitrary smooth function. We present the first globally convergent algorithm for computing a zero (if one exists) of a general smooth function. Besides the globally convergent algorithm, we also present a second algorithm, called the quartic method, for one-dimensional optimization. The quartic method is the third and final member of a family of algorithms, called the Taylor Approximation Methods, which includes Newton's method and Euler's method. Theoretical considerations and preliminary numerical results suggest that the quartic method could emerge as a serious candidate for practical use.

In the context of eigen-computation, we first show that every n-dimensional orthogonal matrix can be factored into O(n²) Jacobi rotations. It is well known that the Jacobi method is capable of computing eigenvalues, particularly tiny ones, to high relative accuracy. The above decomposition shows that the infinite-precision, nondeterministic Jacobi method can construct the eigen-decomposition with O(n²) Jacobi rotations. Speeding up the Jacobi algorithm while retaining its excellent numerical properties would therefore be of considerable interest. In the second part of our discussion on eigen-computation, we present a new vector field algorithm whose performance, in preliminary tests, exceeds that of the QR method, currently the fastest eigenvalue algorithm for small matrices. Specifically, we construct a family of eigenvalue algorithms, called VFM2, which compute the integral curves of a two-dimensional vector field. In the preliminary computational tests that we present, MCGA1 and MCGA2, two members of the VFM2 family, were found to outperform the QR method on small matrices (of size less than 200). It may be possible to improve the efficiency of MCGA1 and MCGA2 even further.
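For context on problem (i), the sketch below shows the classical Newton iteration for finding a zero of a smooth function: at each step the function is replaced by its first-order Taylor model and the model's zero becomes the next iterate. This is the standard local method against which the thesis's results are positioned, not the globally convergent algorithm itself (whose construction the abstract does not describe); the function names and tolerances are illustrative assumptions.

    import math

    def newton_zero(f, fprime, x0, tol=1e-12, max_iter=100):
        # Classical Newton iteration: replace f by its first-order Taylor
        # model at the current point and take the model's zero as the next
        # iterate.  Convergence is only local in general.
        x = x0
        for _ in range(max_iter):
            fx = f(x)
            if abs(fx) < tol:
                return x
            x = x - fx / fprime(x)
        return x

    # Example: the zero of f(x) = cos(x) - x, starting from x0 = 1.
    root = newton_zero(lambda x: math.cos(x) - x,
                       lambda x: -math.sin(x) - 1.0, 1.0)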
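To illustrate the Jacobi rotations referred to above, the following is a minimal sketch of the classical cyclic Jacobi eigenvalue method for a symmetric matrix. It is not the O(n²) factorization result or the nondeterministic variant discussed in the thesis; the function name, tolerance, and sweep limit are assumptions made for illustration. Each plane rotation annihilates one off-diagonal entry, and accumulating the rotations yields an orthogonal matrix of eigenvectors.

    import numpy as np

    def jacobi_eigen(A, tol=1e-12, max_sweeps=50):
        # Cyclic Jacobi method: drive the off-diagonal part of a symmetric
        # matrix to zero with a sequence of plane (Jacobi) rotations.
        A = np.array(A, dtype=float)       # work on a copy
        n = A.shape[0]
        V = np.eye(n)                      # accumulated product of rotations
        for _ in range(max_sweeps):
            off = np.sqrt((A ** 2).sum() - (np.diag(A) ** 2).sum())
            if off < tol:
                break
            for p in range(n - 1):
                for q in range(p + 1, n):
                    if abs(A[p, q]) < tol:
                        continue
                    # Rotation angle chosen so the transform zeros A[p, q].
                    theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
                    c, s = np.cos(theta), np.sin(theta)
                    J = np.eye(n)
                    J[p, p] = J[q, q] = c
                    J[p, q], J[q, p] = s, -s
                    A = J.T @ A @ J        # similarity transform
                    V = V @ J
        return np.diag(A), V               # eigenvalues, eigenvectors as columns

On a well-scaled symmetric test matrix, the diagonal returned by this sketch should agree with np.linalg.eigh up to the ordering of the eigenvalues.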

Degree

Ph.D.

Advisors

Prabhu, Purdue University.

Subject Area

Industrial engineering
