Applications of Random Matrices in Spectral Computations and Machine Learning

Abstract: We will show how carefully crafted random matrices can be used to enhance a wide variety of computational tasks, including dimensionality reduction, spectral computations, and kernel methods in machine learning. Two concrete examples are:

Imagine that we want to compute the top few eigenvectors of a matrix A, hoping to extract the "important" features in A. We will prove that either this is not worth doing, or we can begin by randomly throwing away a large fraction of A's entries.
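
To make the second alternative concrete, here is a minimal numpy sketch of the sparsify-then-compute idea, under standard assumptions not spelled out above: each entry of A is kept independently with probability p and the survivors are rescaled by 1/p, so the sparsified matrix is an unbiased estimate of A. The test matrix, with its planted rank-one "feature", and the value p = 0.1 are illustrative choices, not values from the abstract.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500

    # A symmetric matrix with one strong "important" feature:
    # a planted rank-one component plus small noise.
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)
    noise = rng.standard_normal((n, n))
    A = 20.0 * np.outer(u, u) + 0.05 * (noise + noise.T)

    # Randomly throw away ~90% of A's entries; rescale survivors by 1/p
    # so that E[A_hat] = A.
    p = 0.1
    mask = rng.random((n, n)) < p
    A_hat = np.where(mask, A / p, 0.0)
    A_hat = (A_hat + A_hat.T) / 2    # re-symmetrize the sample

    # The top eigenvector of the sparsified matrix still lines up with u.
    vals, vecs = np.linalg.eigh(A_hat)   # eigenvalues in ascending order
    print("alignment |<v, u>|:", abs(vecs[:, -1] @ u))  # close to 1
    print("top eigenvalue:    ", vals[-1])              # close to 20

The rescaling by 1/p is what makes the sparsified matrix an unbiased estimate of A, so the error it introduces behaves like zero-mean random noise.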

A famous result of Johnson and Lindenstrauss asserts that any set of n points in d-dimensional Euclidean space can be embedded in k-dimensional Euclidean space, where k = O((log n)/eps^2), such that all pairwise distances are preserved to within a factor of 1 ± eps, for any eps > 0. We prove that to construct such an embedding it suffices to multiply the n x d matrix of points by a random d x k matrix whose entries are set to ±1 independently, with equal probability, scaled by 1/sqrt(k).
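
As a self-contained illustration of this construction, the sketch below projects random data with a ±1 sign matrix and checks the pairwise distances; the concrete constants (eps = 0.25 and the factor 8 in k) are standard choices for this construction rather than details stated above.

    import numpy as np

    rng = np.random.default_rng(1)
    n, d = 300, 10_000
    X = rng.standard_normal((n, d))            # n points in R^d, as rows

    eps = 0.25
    k = int(np.ceil(8 * np.log(n) / eps**2))   # k = O(log n) for fixed eps

    R = rng.choice([-1.0, 1.0], size=(d, k))   # independent ±1 entries
    Y = (X @ R) / np.sqrt(k)                   # embedded points, n x k

    def sq_dists(P):
        # Squared Euclidean distances between all rows of P.
        g = P @ P.T
        s = np.diag(g)
        return s[:, None] + s[None, :] - 2.0 * g

    i, j = np.triu_indices(n, 1)               # all pairs of points
    ratios = sq_dists(Y)[i, j] / sq_dists(X)[i, j]
    print("k =", k)
    print("squared-distance ratios in [%.3f, %.3f]"
          % (ratios.min(), ratios.max()))      # concentrated near 1

With these parameters the ratios typically all lie within roughly 1 ± eps; the scaling by 1/sqrt(k) is what makes each embedded squared distance an unbiased estimate of the original one.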

The emphasis of the talk will be on Euclidean intuition. In particular, no prior knowledge of random matrices will be assumed.