Parametric PDEs: Sparse or low-rank approximations?
- Feb. 23, 2017
- 4:15 p.m.
- LeConte 312
We consider a class of parametric operator equations in which the parameters may be either deterministic or stochastic in nature. In both cases we focus on scenarios involving a large, possibly infinite, number of parameters. Typical strategies for addressing the challenges posed by high dimensionality use low-rank approximations of solutions based on a separation of spatial and parametric variables. One such strategy performs sparse best n-term approximations of the solution map in an a priori chosen system of tensor product form in the parametric variables. This approach has been extensively analyzed in the case of tensor product Legendre polynomial bases, for which approximation rates have been established. A first theme of this talk is to investigate what can be gained by exploiting further low-rank structure, in particular by using adapted systems of basis functions obtained by singular value decomposition techniques. Moreover, we discuss under which circumstances such adaptive low-rank expansions bring significant improvement over sparse polynomial expansions, and under which they bring none. A second major issue concerns a unified adaptive algorithm that can be specialized to sparse Legendre expansions as well as to low-rank approximations, driven in either case by a posteriori residual-based error estimators. Moreover, it can be shown to exhibit optimal complexity for certain benchmark classes which are argued to be representative of the problems under consideration.
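The two strategies contrasted in the abstract can be illustrated on a toy problem. The sketch below (my own illustration, not from the talk) takes a hypothetical smooth "solution" u(x, y) of one spatial and one parametric variable, expands it in a tensor product of L2-orthonormalized Legendre polynomials, and compares a sparse best n-term approximation (keep the n largest coefficients) against a low-rank approximation (truncated SVD of the coefficient matrix, whose left and right singular vectors play the role of adapted basis functions). The function, degrees, and truncation levels are all assumptions chosen for illustration.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legvander

# Hypothetical toy "solution map": one spatial variable x and one
# parametric variable y, both on [-1, 1].
def u(x, y):
    return 1.0 / (2.0 + x * y)

m, deg = 60, 20                      # quadrature points, polynomial degrees
nodes, weights = leggauss(m)         # Gauss-Legendre nodes and weights

# Vandermonde matrix of Legendre polynomials, rescaled so each column
# (degree j) has unit L2 norm on [-1, 1]: multiply P_j by sqrt((2j+1)/2).
V = legvander(nodes, deg - 1) * np.sqrt((2 * np.arange(deg) + 1) / 2)

# Coefficient matrix C[j, k] of u in the tensor-product orthonormal basis,
# computed by Gauss-Legendre quadrature in both variables.
U = u(nodes[:, None], nodes[None, :])
W = V * weights[:, None]
C = W.T @ U @ W

total = np.linalg.norm(C)            # ~ L2 norm of u (basis is orthonormal)

# Sparse best n-term approximation: keep the n largest coefficients.
# By orthonormality, the squared L2 error is the sum of squared dropped terms.
n = 10
flat = np.sort(np.abs(C).ravel())[::-1]
sparse_err = np.sqrt(max(total**2 - np.sum(flat[:n] ** 2), 0.0)) / total

# Low-rank approximation: truncated SVD of C. A rank-r truncation uses r
# adapted (SVD-derived) basis function pairs instead of fixed Legendre terms.
s = np.linalg.svd(C, compute_uv=False)
r = 2
lowrank_err = np.sqrt(max(total**2 - np.sum(s[:r] ** 2), 0.0)) / total

print(f"relative L2 error, best {n}-term sparse: {sparse_err:.2e}")
print(f"relative L2 error, rank-{r} SVD:         {lowrank_err:.2e}")
```

For this particular separable-looking toy function the singular values decay rapidly, so a low rank suffices; which strategy wins in general, and by how much, is precisely the question the talk addresses.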