## Approximating eigenvalues of large stochastic matrices

- Oct. 21, 2013
- 3:15 p.m.
- LeConte 312

## Abstract

Bounds on the rate at which a stationary Markov chain converges typically involve the subdominant eigenvalue of the chain's underlying transition probability matrix. However, in some applications the transition matrix is so large that even a single column vector cannot be stored in fast memory, and traditional algorithms for approximating eigenvalues become intractable.
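To make the first claim concrete, here is a minimal numerical sketch (not from the talk; the matrix and all numbers are illustrative) showing that the subdominant eigenvalue governs how fast a chain forgets its initial distribution:

```python
import numpy as np

# Toy 3-state transition matrix (rows sum to 1); purely illustrative.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# The largest eigenvalue of a stochastic matrix is 1; the subdominant
# eigenvalue (second-largest in modulus) controls the convergence rate.
moduli = sorted(np.abs(np.linalg.eigvals(P)), reverse=True)
lam2 = moduli[1]

# Distance to stationarity shrinks roughly like lam2**t.
mu = np.array([1.0, 0.0, 0.0])             # start in state 0
pi = mu @ np.linalg.matrix_power(P, 200)   # numerically stationary
dist = lambda t: np.abs(mu @ np.linalg.matrix_power(P, t) - pi).sum()
```

For this matrix `lam2` is 0.3, and the ratio `dist(t+1)/dist(t)` settles near that value as `t` grows, which is exactly the geometric convergence the abstract refers to.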

In this talk I will show that if a Markov chain is reversible, and if we know something about the structure of the chain or the underlying stochastic process, then the coefficients of the standard Lanczos algorithm can be obtained without storing a single column vector. This is done by considering variational properties of observables on the state space. As background, I will review the connection between the Lanczos coefficients and the eigenvalues of the Markov chain.
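The Lanczos connection mentioned above can be sketched as follows. This is the standard in-memory iteration, not the matrix-free variant the talk describes; the chain and all parameter values are made up for illustration. For a reversible chain with stationary distribution $\pi$, the matrix $S = D^{1/2} P D^{-1/2}$ with $D = \mathrm{diag}(\pi)$ is symmetric and shares $P$'s eigenvalues, so the eigenvalues of the Lanczos tridiagonal matrix $T_k$ approximate the eigenvalues of the chain:

```python
import numpy as np

def lanczos_eigvals(matvec, v0, k):
    """Run k steps of the (unrestarted) Lanczos iteration and return the
    eigenvalues of the tridiagonal matrix T_k built from the alpha
    (diagonal) and beta (off-diagonal) coefficients."""
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros_like(v)
    alpha, beta = [], []
    b = 0.0
    for _ in range(k):
        w = matvec(v) - b * v_prev
        a = float(v @ w)
        w -= a * v
        b = float(np.linalg.norm(w))
        alpha.append(a)
        beta.append(b)
        if b < 1e-12:        # invariant subspace found; stop early
            break
        v_prev, v = v, w / b
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return np.linalg.eigvalsh(T)

# Small reversible birth-death chain (illustrative numbers).
P = np.array([[0.7, 0.3, 0.0, 0.0],
              [0.2, 0.5, 0.3, 0.0],
              [0.0, 0.2, 0.5, 0.3],
              [0.0, 0.0, 0.2, 0.8]])

# Detailed balance gives pi_i proportional to (0.3/0.2)**i.
pi = 1.5 ** np.arange(4)
pi /= pi.sum()

# Symmetrize: S = D^(1/2) P D^(-1/2) is symmetric and shares P's spectrum.
d = np.sqrt(pi)
S = (d[:, None] * P) / d[None, :]

theta = lanczos_eigvals(lambda x: S @ x, np.arange(1.0, 5.0), k=4)
```

Here the iteration touches `S` only through matrix-vector products; the talk's point is that for structured chains even the vectors `v` need not be stored, with the coefficients recovered instead from variational properties of observables.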