Optimization for statistical learning with low dimensional structure: regularity and conditioning
January 24, 2023
Location: ESB 4133 (PIMS Lounge)
Live presentation only
Many statistical machine learning problems, in which one aims to recover an underlying low-dimensional signal, are based on optimization. Existing work has often either overlooked the computational complexity of solving the optimization problem or required case-specific algorithms and analyses -- especially for nonconvex problems. This talk addresses these two issues from a unified perspective of conditioning. In particular, we show that once the sample size exceeds the intrinsic dimension, (1) a broad class of convex and nonsmooth nonconvex problems is well-conditioned, and (2) well-conditioning, in turn, ensures the efficiency of off-the-shelf optimization methods and inspires new algorithms. Lastly, we show that a conditioning notion called flatness leads to accurate recovery in overparametrized models.