Best rank-1 approximations without orthogonal invariance for the 1-norm

Varun A Vasudevan, Purdue University

Abstract

Data measured in the real world is often composed of both a true signal, such as an image or an experimental response, and a perturbation, such as noise or weak secondary effects. Low-rank matrix approximation is a commonly used technique for extracting the true signal from the data. Given a matrix representation of the data, this method seeks the nearest low-rank matrix, where distance is measured by a matrix norm. The classic Eckart-Young-Mirsky theorem tells us how to use the Singular Value Decomposition (SVD) to compute a best low-rank approximation of a matrix in any orthogonally invariant norm. This leaves open the question of how to compute a best low-rank approximation for norms that are not orthogonally invariant, such as the 1-norm. In this thesis, we show how to compute best rank-1 approximations of 2-by-n and n-by-2 matrices in the 1-norm. We consider both the operator-induced 1-norm (the maximum column 1-norm) and the Frobenius 1-norm (the sum of the absolute values of the entries). We also present some thoughts on how to extend the arguments to larger matrices.
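As background for the SVD-based construction the abstract refers to, the following sketch (my own illustration, not code from the thesis) shows the truncated-SVD rank-1 approximation that the Eckart-Young-Mirsky theorem guarantees is optimal for any orthogonally invariant norm such as the Frobenius or spectral norm; the thesis concerns norms, like the 1-norm, for which this construction is not guaranteed to be optimal.

```python
import numpy as np

def rank1_svd(A):
    # Truncated SVD: keep only the leading singular triple.
    # By Eckart-Young-Mirsky, this is a best rank-1 approximation
    # in any orthogonally invariant norm (but not, in general,
    # in the 1-norms studied in the thesis).
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return s[0] * np.outer(U[:, 0], Vt[0, :])

# Small n-by-2 example.
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])
A1 = rank1_svd(A)

# The Frobenius-norm error equals the second singular value.
s = np.linalg.svd(A, compute_uv=False)
print(np.linalg.matrix_rank(A1))                 # 1
print(np.isclose(np.linalg.norm(A - A1, 'fro'), s[1]))  # True
```

Note that the residual's Frobenius norm equals the discarded singular value, the hallmark of the Eckart-Young-Mirsky optimality result; for the 1-norms considered in the thesis, the minimizer can differ from this SVD truncation.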

Degree

M.S.E.C.E.

Advisors

Gleich, Purdue University.

Subject Area

Applied Mathematics|Mathematics|Electrical engineering

