Doctoral Dissertations

Date of Award


Degree Type


Degree Name

Doctor of Philosophy



Major Professor

Clayton G. Webster

Committee Members

Jack Dongarra, Ohannes Karakashian, Abner J. Salgado-Gonzalez, Steven M. Wise


This work studies sparse reconstruction techniques for approximating solutions of high-dimensional parametric PDEs. Such problems are relevant to mathematical modeling in engineering and the sciences, and are inherently difficult to solve due to the exponential relationship between dimension and the computational cost of approximation, a phenomenon referred to as the curse of dimensionality. Nonetheless, for many practical applications the solutions possess favorable properties, e.g., regularity and compressibility, enabling efficient numerical methods of solution.

For the past few decades, polynomial approximations, e.g., stochastic Galerkin (SG) methods [124], have remained popular for solving parameterized PDEs due to their ability to exploit these properties, achieving fast (sub-exponential) convergence rates. In practice, however, the complexity of SG methods depends on the type of parametric dependence in the underlying PDE. Previous analyses of SG complexity neglected this fact, focusing only on the affine case. To resolve this, we prove sharp bounds on the cost of solving, to a prescribed tolerance, the SG systems that result from discretizing elliptic PDEs with arbitrary parametric dependence. We also verify our theoretical results with detailed numerical experiments over a variety of parameterizations.

Since the mid-2000s, increasing interest in sparse signal reconstruction through compressed sensing (CS) [20, 21, 47] has led to a wide array of applications, including sparse polynomial approximation. To improve the convergence of CS polynomial approximations, we introduce a novel weighting scheme based on lower sets for smooth function approximation, which is also relevant to parameterized PDEs. Standard CS, however, is only capable of approximating functionals of solutions to parameterized PDEs. We therefore introduce a novel energy-norm-based reformulation of basis pursuit denoising, which enables sparse recovery over tensor products of Hilbert spaces.
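For concreteness, a lower (downward-closed) multi-index set is one that, for every index it contains, also contains all componentwise-smaller indices. The Python sketch below is only an illustration of that structure, not the dissertation's actual weighting scheme: it checks downward closure and builds a standard total-degree index set, the classic example of a lower set.

```python
from itertools import product

def is_lower(S):
    """Return True if the multi-index set S is lower (downward closed):
    decreasing any single component of any member by one must yield
    another member. Checking immediate predecessors suffices, by
    induction, for full downward closure."""
    S = set(S)
    for nu in S:
        for i, n in enumerate(nu):
            if n > 0:
                mu = nu[:i] + (n - 1,) + nu[i + 1:]
                if mu not in S:
                    return False
    return True

def total_degree(d, k):
    """All multi-indices nu in N^d with nu_1 + ... + nu_d <= k."""
    return [nu for nu in product(range(k + 1), repeat=d) if sum(nu) <= k]

S = total_degree(3, 2)          # C(3 + 2, 2) = 10 indices; a lower set
S_bad = [(0, 0), (2, 0)]        # not lower: (1, 0) is missing
```

Weights attached to such index sets are what bias an ℓ1-based recovery toward lower-set supports; the specific weights used in the dissertation are defined there, not here.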
We also derive the forward-backward iterations for solving this reformulation, proving strong convergence in the infinite-dimensional setting. To improve the efficiency of our solver, we combine Bregman iterations and fixed-point continuation, derived for this setting, with the forward-backward iterations. We call our approach sparse Hilbert-valued vector (HV) recovery, and prove that it achieves sub-exponential convergence rates. We also conduct detailed numerical experiments demonstrating the minimal sample-complexity requirements of our approach compared with several alternative methods of solution.
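In the finite-dimensional, scalar-valued case, the forward-backward iteration reduces to the familiar proximal-gradient (ISTA) scheme for the ℓ1-regularized least-squares surrogate of basis pursuit denoising. The sketch below, with an assumed Gaussian sensing matrix and a hand-picked regularization weight, illustrates only that basic mechanism; it is not the dissertation's energy-norm, Hilbert-valued solver.

```python
import numpy as np

def forward_backward(A, b, lam, step, n_iter):
    """Forward-backward iteration for  min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Forward step: explicit gradient descent on the smooth data-fit term.
    Backward step: proximal map of lam*||.||_1, i.e. soft-thresholding."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - b))                        # forward step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward step
    return x

# Recover a 2-sparse vector from 30 Gaussian measurements in R^100.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100)) / np.sqrt(30)
x_true = np.zeros(100)
x_true[[7, 42]] = [1.5, -2.0]
b = A @ x_true
step = 0.9 / np.linalg.norm(A, 2) ** 2    # step below 1/L ensures convergence
x_hat = forward_backward(A, b, lam=0.01, step=step, n_iter=3000)
```

The soft-thresholding step is exactly the proximal operator of the ℓ1 norm, which is what makes the backward half of the iteration cheap despite the nonsmooth penalty.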


Portions of this document were previously published in the journals Computers & Mathematics with Applications and Mathematics of Computation, and are currently available on arXiv.
