Speeding up Metropolis using Theorems

Markov chain Monte Carlo (MCMC) algorithms, such as the Metropolis algorithm, are designed to converge to complicated high-dimensional target distributions in order to facilitate sampling from them. The speed of this convergence is essential for practical use. In this talk, we will present several theoretical probability results that can help improve the Metropolis algorithm's convergence speed. Specific topics will include: diffusion limits, optimal scaling, optimal proposal shape, tempering, adaptive MCMC, the Containment property, and the notion of adversarial Markov chains. The ideas will be illustrated using the simple graphical example available at probability.ca/met. No particular background knowledge will be assumed.
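For those who would like to experiment before the talk, here is a minimal sketch of a random-walk Metropolis sampler in Python. The standard-normal target, the dimension, and the proposal scale `sigma` are illustrative assumptions rather than anything from the talk itself; the proposal scale is, however, exactly the quantity that the optimal-scaling results mentioned above concern.

```python
import numpy as np

def random_walk_metropolis(log_target, x0, sigma, n_steps, rng=None):
    """Random-walk Metropolis sampler (a sketch for illustration).

    log_target: function returning the log of the (unnormalized) target density.
    x0:         starting point (1-D array).
    sigma:      proposal standard deviation -- the "scaling" that the
                optimal-scaling theory tells us how to tune.
    n_steps:    number of iterations.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    log_p = log_target(x)
    samples = np.empty((n_steps, x.size))
    accepted = 0
    for i in range(n_steps):
        # Propose a symmetric Gaussian move around the current state.
        y = x + sigma * rng.standard_normal(x.size)
        log_q = log_target(y)
        # Accept with probability min(1, pi(y)/pi(x)).
        if np.log(rng.random()) < log_q - log_p:
            x, log_p = y, log_q
            accepted += 1
        samples[i] = x
    return samples, accepted / n_steps

# Toy example (an assumption for illustration): sample a 10-dimensional
# standard normal.  The classic optimal-scaling results suggest a proposal
# scale of about 2.38/sqrt(d), giving roughly a 23% acceptance rate in
# high dimensions.
if __name__ == "__main__":
    d = 10
    log_target = lambda x: -0.5 * np.sum(x ** 2)
    samples, acc_rate = random_walk_metropolis(
        log_target, np.zeros(d), sigma=2.38 / np.sqrt(d), n_steps=5000
    )
    print(f"acceptance rate: {acc_rate:.2f}")
```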
P.S. There will be a reception at 10:30 in the PIMS lounge!