## Date of Award

5-2018

## Degree Type

Dissertation

## Degree Name

Doctor of Philosophy (PhD)

## Department

Mathematics

## Committee Chair

Jianlin Xia

## Committee Member 1

Venkataramanan R. Balakrishnan

## Committee Member 2

Maarten V. de Hoop

## Committee Member 3

Jie Shen

## Abstract

This dissertation presents fast direct solvers and efficient preconditioners, mainly for sparse matrices. Hierarchical low-rank approximations, such as the hierarchically semiseparable (HSS) representation, play a vital role in the development of these methods. As many pioneering works have shown, hierarchical low-rank approximations can reduce the computational cost and storage requirement of many matrix operations while preserving the desired accuracy. Such techniques lead to fast algorithms for both dense and sparse matrix computations.

One significant contribution of this dissertation is a set of novel preconditioners for both dense and sparse symmetric positive definite matrices. In the literature, many preconditioners are developed heuristically, and rigorous analysis of their effectiveness and robustness is often lacking. For preconditioners constructed via incomplete factorization or approximate inversion, it is often unclear how to choose the threshold for dropping entries or the accuracy for low-rank compression. In this thesis work, we construct effective algebraic preconditioners using rank-structured incomplete factorizations and a novel scaling-and-compression strategy. We rigorously bound the eigenvalues of the preconditioned matrix in terms of the accuracy used in the low-rank compression, so the effectiveness of the preconditioners can be controlled by adjusting the accuracy of the low-rank approximation in the construction procedure. We also analyze the robustness of the new preconditioners for the prototype case and for generalized cases, which is valuable both in theory and in practice. Constructing the preconditioners costs roughly O(n^2) for general dense matrices and O(n log^2 n) for sparse matrices, where n is the matrix size; applying them costs O(n log n), and the storage is O(n log n).
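To illustrate the scaling-and-compression idea in its simplest form, the following is a minimal dense two-block sketch, not the dissertation's actual rank-structured algorithm: the two-block partition, the truncation rule, and all function names here are assumptions made for illustration. The off-diagonal block is scaled by the Cholesky factors of the diagonal blocks and then compressed by a truncated SVD with accuracy `tol`.

```python
import numpy as np

def scale_and_compress_precond(A, m, tol):
    """Two-block sketch: scale the off-diagonal block of SPD A by the
    Cholesky factors of the diagonal blocks, then compress it with a
    truncated SVD at accuracy tol."""
    n = A.shape[0]
    L1 = np.linalg.cholesky(A[:m, :m])
    L2 = np.linalg.cholesky(A[m:, m:])
    # Scaled off-diagonal block C = L1^{-1} A12 L2^{-T}; positive
    # definiteness of A guarantees all singular values of C are below 1.
    C = np.linalg.solve(L1, A[:m, m:])
    C = np.linalg.solve(L2, C.T).T
    # Compression: keep only singular values above tol.
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    k = int(np.sum(s > tol))
    Ck = (U[:, :k] * s[:k]) @ Vt[:k]
    # Preconditioner M = L [[I, Ck], [Ck^T, I]] L^T, L = blkdiag(L1, L2).
    mid = np.block([[np.eye(m), Ck], [Ck.T, np.eye(n - m)]])
    L = np.block([[L1, np.zeros((m, n - m))],
                  [np.zeros((n - m, m)), L2]])
    return L @ mid @ L.T
```

The point of the eigenvalue analysis is visible even in this toy setting: the eigenvalues of M^{-1}A lie in an interval around 1 whose width is governed by tol (and by the largest singular value of the scaled block), so the compression accuracy directly controls how well-conditioned the preconditioned system is.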
In addition, for some model problems, such as discretized 2D/3D Poisson equations, the new preconditioners have even more appealing properties, which indicates that they may work exceptionally well for particular problems.

Another major contribution of this dissertation is the parallel implementation of a structured fast direct solver for large-scale sparse problems. The solver combines the multifrontal method, the HSS representation, and randomized techniques. The multifrontal method is a popular sparse direct solver. For discretized elliptic PDEs, the dense intermediate blocks in the multifrontal procedure can be approximated by hierarchical low-rank representations (we use the HSS representation), so that the factorization cost and the storage are greatly reduced. However, the extend-add operations in the multifrontal procedure are very complicated for HSS matrices. Some earlier work therefore compromises by keeping dense Schur complements of the frontal matrices for the extend-add operations, but this tends to become a bottleneck of the solver for large-scale problems. Randomized techniques are thus essential to avoid the HSS extend-add operations and the storage of dense Schur complements. Moreover, the randomized construction of the HSS representation has lower complexity than the standard construction. For discretized elliptic PDEs, if a certain rank pattern exists, the complexity of the randomized solver is about O(n) for 2D problems and O(n) to O(n^(4/3)) for 3D problems, while earlier work without randomized techniques has a complexity of at least O(n^(4/3)) for 3D problems. For large-scale problems, a sequential implementation of the solver still has severe limitations in memory and speed, so a distributed-memory implementation is important.
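The benefit of randomization can be seen in the basic randomized range finder, sketched below. This is a generic illustration in the spirit of randomized HSS construction, not the dissertation's implementation: the function name, the oversampling parameter `p`, and the symmetric-operator assumption are all choices made for this sketch, and a real HSS build applies the idea blockwise.

```python
import numpy as np

def randomized_lowrank(matvec, n, k, p=10, rng=None):
    """Approximate a symmetric n x n operator of numerical rank k,
    accessing it only through matrix-vector products (matvec)."""
    rng = rng or np.random.default_rng(0)
    # Probe the range with k + p random vectors.
    Omega = rng.standard_normal((n, k + p))
    Y = matvec(Omega)            # Y = A @ Omega spans (most of) range(A)
    Q, _ = np.linalg.qr(Y)       # orthonormal basis for that range
    B = matvec(Q).T              # B = (A Q)^T = Q^T A, using A = A^T
    return Q, B                  # low-rank approximation A ~= Q @ B
```

Because the operator is touched only through products A·Ω, the Schur complement never has to be formed or stored as a dense matrix, which is exactly what lets the randomized solver avoid the HSS extend-add operations and the dense Schur-complement storage.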
We propose novel parallel algorithms for the components of the solver and integrate them into a scalable algebraic solver. The communication costs are analyzed and shown to be superior to those of earlier work. Since the parallelization of rank-structured algorithms is an emerging research topic, our work may serve as a valuable reference for similar efforts. Lastly, for linear systems discretized from certain PDEs, we discuss how to solve them efficiently with the help of a recently proposed superfast structured eigensolver.

## Recommended Citation

Xin, Zixing, "Fast Direct Solvers and Effective Preconditioners for Large-Scale Sparse Matrices" (2018). *Open Access Dissertations*. 1847.

https://docs.lib.purdue.edu/open_access_dissertations/1847