COLLOQUIUM
3:00 p.m., Friday (October 10, 2008)
MATX 1100
Ozgur Yilmaz
UBC
Sparse recovery via nonconvex optimization
Abstract: The sparse recovery problem has received considerable attention, partly because of its role in
transform coding with redundant dictionaries, and more importantly because it has inspired
"compressed sensing". The main questions are:
Suppose we have a "sparse" vector x in R^n (i.e., x has only a few nonzero entries). Given b=Ax where
A is a short matrix (m-by-n with m<n), can we recover x? Under what conditions on A? In a
computationally tractable way? If there is noise? If x is not sparse, but can be well-approximated
by a sparse vector (i.e., x is "compressible")?
It has recently been shown by Candes-Romberg-Tao and independently by Donoho that one can recover
estimates of sparse and compressible signals from an "incomplete" set of noisy measurements via
one-norm minimization methods under certain conditions on the "measurement matrix" A. These conditions
are satisfied (asymptotically optimally) when the matrix A is a random matrix whose entries are drawn
i.i.d. from a Gaussian distribution.
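As a minimal illustration of this setup (not code from the talk; the dimensions, sparsity level, and random seed below are arbitrary choices for the sketch), one-norm minimization subject to Ax=b can be recast as a linear program by splitting x into its positive and negative parts:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative sketch: recover a k-sparse x from b = Ax, with A a short
# Gaussian matrix, by solving  min ||x||_1  s.t.  Ax = b  as an LP.
# Write x = u - v with u, v >= 0, so that ||x||_1 = sum(u + v).
rng = np.random.default_rng(0)
m, n, k = 25, 50, 3                      # illustrative dimensions, m < n
A = rng.standard_normal((m, n))          # i.i.d. Gaussian measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

c = np.ones(2 * n)                       # objective: sum(u) + sum(v)
A_eq = np.hstack([A, -A])                # equality constraint: A(u - v) = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
print(np.linalg.norm(x_hat - x_true))    # small when recovery succeeds
```

For a Gaussian A with m sufficiently large relative to k, such an LP typically recovers x exactly up to solver tolerance, consistent with the conditions alluded to above.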
In this talk, we present the theoretical recovery guarantees obtained when decoding by
p-quasinorm minimization with 0<p<1 in the setting described above, and we prove
that the corresponding guarantees are significantly better than those one can obtain in the
case of one-norm minimization. In particular, we show that decoders based on p-quasinorm minimization
are (l^2, l^p) instance optimal. Moreover, these decoders are (l^2, l^2) instance optimal in probability
(this latter result relates to a result on Banach-Mazur distances of p-convex bodies to their convex hulls).
Finally, we comment on algorithmic issues.
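Although the p-quasinorm objective is nonconvex, one standard heuristic from the literature for approaching min ||x||_p^p subject to Ax=b is iteratively reweighted least squares (IRLS). The sketch below is a hedged illustration of that generic heuristic, not necessarily the algorithm discussed in the talk; the annealing schedule for the smoothing parameter eps and the test dimensions are assumptions of this sketch.

```python
import numpy as np

def irls_p(A, b, p=0.5, iters=50, eps=1.0):
    """IRLS sketch for min ||x||_p^p s.t. Ax = b, 0 < p < 1 (a common
    heuristic; nonconvexity means no global-optimality guarantee)."""
    # Start from the minimum-l2-norm solution of Ax = b.
    x = A.T @ np.linalg.solve(A @ A.T, b)
    for _ in range(iters):
        # Weighted least-norm step: minimize sum_i x_i^2 / d_i s.t. Ax = b,
        # with d_i = (x_i^2 + eps)^(1 - p/2), i.e. weights ~ |x_i|^(p-2).
        d = (x**2 + eps) ** (1 - p / 2)
        AD = A * d                                  # A @ diag(d)
        x = d * (A.T @ np.linalg.solve(AD @ A.T, b))
        eps = max(eps / 10, 1e-12)                  # anneal the smoothing
    return x

# Illustrative run on synthetic data (arbitrary dimensions and seed).
rng = np.random.default_rng(1)
m, n, k = 25, 50, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true
x_hat = irls_p(A, b)
```

Each IRLS iterate satisfies Ax=b exactly by construction, so the heuristic searches within the feasible set; in practice, for small enough sparsity levels, it tends to concentrate the solution on a few coordinates.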
This is joint work with Rayan Saab.
Refreshments will be served at 2:45 p.m. (Math Lounge, MATX 1115).
