Lijun Ding
January 24, 2023


Time: 1:00-2:00pm

Location: ESB 4133 (PIMS Lounge)

Live presentation only


Many statistical machine learning problems, where one aims to recover an underlying low-dimensional signal, are based on optimization. Existing work often either overlooks the computational complexity of solving the optimization problem or requires case-specific algorithms and analyses, especially for nonconvex problems. This talk addresses these two issues from a unified perspective of conditioning. In particular, we show that once the sample size exceeds the intrinsic dimension, (1) a broad class of convex and nonsmooth nonconvex problems are well-conditioned, and (2) well-conditioning, in turn, ensures the efficiency of off-the-shelf optimization methods and inspires new algorithms. Lastly, we show that a conditioning notion called flatness leads to accurate recovery in overparametrized models.
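The link between conditioning and algorithmic efficiency can be illustrated on a toy problem (this sketch is not from the talk; the function, step size, and stopping rule below are illustrative assumptions). For the quadratic f(x, y) = 0.5(a x² + b y²) with a ≤ b, the condition number is κ = b/a, and gradient descent with step size 1/b needs on the order of κ log(1/ε) iterations to reach accuracy ε — so a well-conditioned instance is cheap to solve with an off-the-shelf method, while an ill-conditioned one is slow:

```python
def gd_iters(a, b, tol=1e-6, x0=(1.0, 1.0), max_iter=100000):
    """Gradient descent on f(x, y) = 0.5*(a*x^2 + b*y^2) with step 1/L,
    where L = max(a, b); returns the number of iterations to reach tol."""
    x, y = x0
    step = 1.0 / max(a, b)
    for k in range(max_iter):
        if (a * x * x + b * y * y) ** 0.5 < tol:  # weighted-norm stopping rule
            return k
        x -= step * a * x  # partial derivative of f in x is a*x
        y -= step * b * y  # partial derivative of f in y is b*y
    return max_iter

well = gd_iters(1.0, 2.0)    # condition number kappa = 2
ill = gd_iters(1.0, 200.0)   # condition number kappa = 200
print(well, ill)             # the ill-conditioned run needs far more iterations
```

The iteration counts grow roughly in proportion to κ, which is the kind of dependence that makes "the sample size exceeds the intrinsic dimension, hence the problem is well-conditioned" a statement about computational cost, not just statistical accuracy.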
