UBC Mathematics Department
http://www.math.ubc.ca
The first talk explained that (computational) hardness can be effectively turned into (computational) randomness. In this talk we will be stingy and ask exactly how hard (and how easy) a function must be in order to achieve complete or partial derandomization of all efficient probabilistic algorithms. Successively weakening the hardness assumptions should be viewed as part of a program leading (hopefully) to UNCONDITIONAL limits on the power of randomness, such as BPP != EXP. I will talk mostly about one of the following two recent works with Russell Impagliazzo.
The first work deals with non-uniform hardness assumptions. Here we achieve a tight trade-off: an exponential circuit lower bound for some problem in E suffices to prove BPP = P. The improvement over previous trade-offs comes from a solution to another problem of independent interest: deterministic amplification of hardness. This construction turns a hard-on-average Boolean function on n bits into a function on 100n bits that is nearly unpredictable.
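As a rough formalization (the notation and the exponents below are my illustrative choices, not taken from the abstract), the trade-off and the amplification step can be written as:

    % Non-uniform trade-off (Impagliazzo-Wigderson style); delta > 0 is an assumed constant.
    \[ \exists f \in \mathrm{E},\ \exists \delta > 0:\quad
       \mathrm{SIZE}(f_n) \ge 2^{\delta n} \ \text{for all large } n
       \;\Longrightarrow\; \mathrm{BPP} = \mathrm{P}. \]

    % Hardness amplification: the prediction-advantage bound 2^{-\Omega(n)} is an
    % assumption standing in for "nearly unpredictable".
    \[ f : \{0,1\}^{n} \to \{0,1\} \ \text{hard on average}
       \;\longmapsto\; g : \{0,1\}^{100n} \to \{0,1\} \ \text{with}\
       \Pr_{x}\bigl[C(x) = g(x)\bigr] \le \tfrac{1}{2} + 2^{-\Omega(n)}
       \ \text{for every small circuit } C. \]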
The second work deals with uniform hardness assumptions. Derandomization under such assumptions was previously known only from one-way functions. We show that, at least for derandomizing algorithms in AvRP (namely probabilistic algorithms that work well on random inputs), it suffices to assume BPP != EXP. The obvious difficulty is that converting a distinguisher of the Nisan-Wigderson generator (the only known tool that can use hard functions that are not one-way) into an efficient algorithm for the hard function it is based on looks inherently non-uniform. We overcome this difficulty using a novel bootstrapping strategy.
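For context, here is the standard form of the Nisan-Wigderson generator (a textbook definition, sketched in my notation rather than quoted from the work): the seed z is fed to the hard function f on overlapping blocks prescribed by a combinatorial design.

    % Nisan-Wigderson generator; S_1, ..., S_m are subsets of [d] of size n with
    % small pairwise intersections (the bound log m is the usual design parameter).
    \[ \mathrm{NW}^{f}(z) \;=\; f(z|_{S_1})\, f(z|_{S_2}) \cdots f(z|_{S_m}),
       \qquad z \in \{0,1\}^{d},\ \ |S_i \cap S_j| \le \log m \ \text{for } i \ne j. \]

A distinguisher for this generator can be turned into a small circuit predicting f on one block, but the standard conversion hardwires non-uniform advice; making that step uniform is the difficulty the bootstrapping strategy is designed to overcome.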