## Randomized matrix compression algorithms for fast scientific computations

- Nov. 22, 2019
- 3:30 p.m.
- LeConte 317R

## Abstract

In modern scientific computing, such as large-scale integro-differential equations, optimization, or machine learning algorithms, we face challenges from both "massive" and "complicated" data. When the number of variables N is large, iterative solvers, solution-updating processes, and data storage become extremely expensive. Further, generating the matrix entries themselves can be time consuming or subject to information loss and experimental uncertainty. Our research is motivated by two types of problems: the hypersingular volume integral equation in layered materials, and manifold-based optimization for predicting 3D structures of chromosomes from experimental data. We introduce algorithms based on randomized numerical linear algebra (RNLA) to compute low-rank approximations of the given data, so that subsequent matrix operations are highly efficient. Our algorithms (1) use only partial information from the original matrix entries, and (2) make the compression process itself cheaper than a direct matrix-vector multiplication. Accuracy and statistical robustness are analyzed with respect to sampling strategies. Numerical results will also be presented for the two applications.
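To make the low-rank compression idea concrete, here is a minimal sketch of a standard randomized SVD (in the style of Halko, Martinsson, and Tropp), which is representative of RNLA compression in general but is not necessarily the speaker's specific algorithm. The matrix dimensions, oversampling parameter, and target rank below are illustrative choices, not values from the talk.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, seed=0):
    """Minimal randomized SVD sketch.

    Forms a sketch Y = A @ Omega with a Gaussian test matrix,
    orthonormalizes Y to approximate the range of A, then takes
    an exact SVD of the small projected matrix B = Q.T @ A.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(rank + oversample, n)
    Omega = rng.standard_normal((n, k))   # random test matrix
    Y = A @ Omega                         # sketch: k matrix-vector products
    Q, _ = np.linalg.qr(Y)                # orthonormal basis for range(Y)
    B = Q.T @ A                           # small (k x n) projection
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank, :]

# Example: compress a matrix of exact rank 5
m, n, r = 500, 400, 5
rng = np.random.default_rng(1)
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
U, s, Vt = randomized_svd(A, rank=r)
err = np.linalg.norm(A - U @ (s[:, None] * Vt)) / np.linalg.norm(A)
print(f"relative error: {err:.2e}")
```

Once the factors U, s, Vt are stored, a matrix-vector product A @ x costs O((m + n) k) instead of O(m n), which is the efficiency gain the abstract refers to.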