Abstract: I will present a rigorous framework and new algorithmic tools for handling cryptographic keys that are subject to errors. Such errors arise, for example, when a user wants to use biometric measurements -- fingerprints, iris scans, etc. -- as secret keys for authentication or encryption. Traditional cryptographic protocols become unusable in the presence of this type of noise.

The talk will focus on the roles of two combinatorial concepts in solving this problem: error-correcting codes and randomness extractors. Both have been important in cryptography and theoretical computer science. This work provides new applications of both concepts, as well as a new connection between them.

We begin by defining "fuzzy extractors", which combine aspects of both error-correction and randomness extraction. I will explain how these allow one to use noisy secret keys under minimal assumptions, and give algorithmically efficient, near-optimal constructions for three types of errors: Hamming errors, set difference, and binary string edits.
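To make the idea concrete, here is a toy sketch of the classic "code-offset" construction for Hamming errors, assuming a 3-repetition code and using SHA-256 as a stand-in for a seeded randomness extractor. This is an illustrative simplification, not the near-optimal constructions of the talk:

```python
import hashlib
import secrets

# Toy code-offset fuzzy extractor for Hamming errors.
# A 3-repetition code corrects one bit-flip per 3-bit block;
# real constructions use far better codes.

def encode(bits):
    # Repetition-3 encoder: each message bit appears three times.
    return [b for b in bits for _ in range(3)]

def decode(bits):
    # Majority-vote decoder over each 3-bit block.
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def gen(w):
    """w: biometric reading as a bit list (length divisible by 3).
    Returns (key, public_sketch)."""
    k = len(w) // 3
    msg = [secrets.randbelow(2) for _ in range(k)]
    c = encode(msg)
    sketch = xor(w, c)  # public helper data: w masked by a random codeword
    # Stand-in for a randomness extractor applied to w.
    key = hashlib.sha256(bytes(w)).digest()
    return key, sketch

def rep(w_noisy, sketch):
    """Recover the same key from a nearby (noisy) reading."""
    c_noisy = xor(w_noisy, sketch)     # = codeword + the reading's errors
    c = encode(decode(c_noisy))        # correct the errors via the code
    w = xor(c, sketch)                 # recover the original reading
    return hashlib.sha256(bytes(w)).digest()
```

A later reading that differs from the enrollment reading by at most one bit per block yields the same key, while the sketch alone reveals only `w` masked by a random codeword.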

I will then explain how randomness extraction is useful in ensuring a strong notion of secrecy even in situations where some information about a secret must be leaked. Returning to the problem of noisy secret keys, we show that there exist algorithmically efficient fuzzy extractors which satisfy this stronger notion of secrecy. This result may be interpreted as follows: although correcting errors requires leaking a lot of information (i.e. losing a lot of entropy in the secret), it need not leak anything useful to the adversary.
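The entropy accounting behind this statement can be illustrated with the code-offset construction above (a standard back-of-the-envelope bound, not the talk's specific parameters). For $n$-bit readings and an $[n, k]$ code, publishing the sketch $s = w \oplus C(x)$ loses at most $n - k$ bits of min-entropy:

\[
\widetilde{H}_\infty(W \mid S) \;\ge\; H_\infty(W) - (n - k),
\]

so a randomness extractor can still distill roughly $H_\infty(W) - (n - k)$ nearly uniform key bits. Correcting more errors forces a larger $n - k$ and hence a larger entropy loss, yet the extracted key remains nearly uniform from the adversary's point of view.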