So Alice and Bob want to flip a coin… – STOC 2014 Recaps (Part 10)

A major use case for coin flipping (with actual coins) is when you’re with friends, and you have to decide where to eat. This agonizing decision process can be elegantly avoided when randomness is used. But who’s doing the coin flipping? How do you know your friend isn’t secretly choosing which restaurant to go to? Cryptography offers a solution to this, and Sunoo will tell us about how this solution is actually equivalent to one-way functions!


Sunoo Park on Coin Flipping of Any Constant Bias Implies One-Way Functions, by Itay Berman, Iftach Haitner, and Aris Tentes

In this post, we look at a fundamental problem that has plagued humankind since long before theoretical computer science: if I don’t trust you and you don’t trust me, how can we make a fair random choice? This problem was once solved in the old-fashioned way of flipping a coin, but theoretical computer science has made quite some advances since.

What are the implications of this? In their recent STOC paper, Berman, Haitner, and Tentes show that the ability for two parties to flip a (reasonably) fair coin means that one-way functions exist. This, in turn, has far-reaching cryptographic implications.

A function f is one-way if it is “easy” to compute f(x) given any input x, and it is “hard”, given the image y of a random input, to find a preimage x' such that f(x')=y. The existence of one-way functions implies a wide range of fundamental cryptographic primitives, including pseudorandom generators, pseudorandom functions, symmetric-key encryption, bit commitments, and digital signatures – and vice versa: the seminal work of Impagliazzo and Luby [1] showed that the existence of cryptography based on complexity-theoretic hardness assumptions – encompassing all of the aforementioned primitives – implies that one-way functions exist.
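As a concrete, heuristic illustration: a cryptographic hash such as SHA-256 is conjectured to behave one-way in practice – an assumption, since no function has been proven one-way (that would imply P ≠ NP). Evaluating the forward direction is a single hash computation, while the best generic inversion attack is brute force over all inputs:

```python
import hashlib
from itertools import product

def f(x: bytes) -> bytes:
    """Easy direction: evaluating f is a single hash computation."""
    return hashlib.sha256(x).digest()

def invert_by_brute_force(y: bytes, n: int) -> bytes:
    """Hard direction: absent any structure to exploit, a generic attack
    must try all 256^n inputs of length n bytes -- exponential in n."""
    for candidate in product(range(256), repeat=n):
        x = bytes(candidate)
        if f(x) == y:
            return x
    raise ValueError("no preimage of that length")

y = f(b"ab")
assert invert_by_brute_force(y, 2) == b"ab"  # feasible only because n is tiny
```

The asymmetry is the whole point: the forward evaluation above takes microseconds for any input length, while the brute-force loop becomes infeasible as soon as the input is more than a few bytes long.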

For coin-flipping protocols, however, only more restricted results were known. Coin-flipping has long been a subject of interest in cryptography, since the early work of Blum [2], which described the following problem:

“Alice and Bob want to flip a coin by telephone. (They have just divorced, live in different cities, want to decide who gets the car.)”

More formally, a coin-flipping protocol is a protocol in which two players interact by exchanging messages and, upon completion, output a single bit interpreted as the outcome of a coin flip. The aim is that the coin should be (close to) unbiased, even if one of the players “cheats” and tries to bias the outcome towards a certain value. We say that a protocol has bias \delta if no cheating player can make either outcome occur with probability greater than \frac{1}{2}+\delta; the protocol has constant bias if \delta is a constant (in the security parameter).
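To get a feel for how such a protocol works, here is a toy version of Blum’s idea in Python, using a salted-hash commitment as a stand-in (the hiding and binding properties here rest on informal assumptions about SHA-256; this is a sketch, not a vetted scheme):

```python
import hashlib, secrets

def commit(bit: int):
    """Commit to a bit by hashing it together with a fresh random nonce.
    The commitment hides the bit; the nonce later proves what was committed."""
    nonce = secrets.token_bytes(32)
    return hashlib.sha256(nonce + bytes([bit])).digest(), nonce

def check_opening(commitment: bytes, nonce: bytes, bit: int) -> bool:
    """Verify that (nonce, bit) is a valid opening of the commitment."""
    return hashlib.sha256(nonce + bytes([bit])).digest() == commitment

# Alice commits to a random bit a; only then does Bob send his bit b in the
# clear; finally Alice opens her commitment, and the coin is a XOR b.
a = secrets.randbelow(2)
com, nonce = commit(a)               # message 1: Alice -> Bob
b = secrets.randbelow(2)             # message 2: Bob -> Alice
assert check_opening(com, nonce, a)  # message 3: Alice opens; Bob verifies
coin = a ^ b
```

Intuitively, Bob cannot see a before choosing b (hiding), and Alice cannot change a after seeing b (binding), so neither party alone controls the XOR. Note, though, that one party can still abort after seeing the outcome, which is exactly why bias is measured rather than eliminated.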

Impagliazzo and Luby’s original paper showed that coin-flipping protocols achieving negligible bias (that is, they are very close to perfect!) imply one-way functions. Subsequently, Maji, Prabhakaran and Sahai [3] proved that coin-flipping protocols with a constant number of rounds (and any non-trivial bias, i.e. 1/2-o(1)) also imply one-way functions. Yet more recently, Haitner and Omri [4] showed that the same holds for any coin-flipping protocol with a constant bias (namely, a bias of \frac{\sqrt{2}-1}{2}-o(1)). Finally, Berman, Haitner and Tentes proved that coin-flipping of any constant bias implies one-way functions. The remainder of this post will give a flavor of the main ideas behind their proof.

The high-level structure of the argument is as follows: given any coin-flipping protocol \Pi=(\mathsf{A},\mathsf{B}) between players \mathsf{A} and \mathsf{B}, we first define a (sort of) one-way function, then show that an adversary capable of efficiently inverting that function must be able to achieve a significant bias in \Pi. The one-way function used is the transcript function which maps the players’ random coinflips to a protocol transcript. The two main neat insights are these:

  • Consider the properties of a coin-flipping protocol when interpreted as a zero-sum game between two players: Alice wins if the outcome is 1, and Bob wins otherwise. If the players play optimally, who wins? It turns out that from the winner, we can deduce that there is a set of protocol transcripts where the outcome is bound to be the winning outcome, no matter what the losing player does: that is, transcripts that are “immune” to the losing player.
  • The paper introduces a recursive variant of the biasing attack of Haitner and Omri [4]. The new attack can be employed by the adversary to generate a transcript that lies in the “immune” set with probability close to 1 – so this adversary (who has access to an inverter for the transcript function) can bias the protocol outcome with high probability.

The analysis is rather involved, and there are some fiddly details to resolve, such as how a bounded adversary can simulate the recursive attack efficiently, and how to ensure that the inverter will work for the particular query distribution of the adversary. Without going into all those details, the last part of this post describes the optimal recursive attack.

Let \mathsf{Inv} be the function that takes as input a pair (\mathbf{t},b), where \mathbf{t} is a random transcript of a partial (incomplete) honest execution of \Pi and b\in\{0,1\}, and outputs a random pair of coin flips (r_1,r_2) for the players, satisfying the following two conditions: (1) they are consistent with \mathbf{t}, that is, the coin flips could be plausible next coin flips given the partial transcript \mathbf{t}; and (2) there exists a continuation of the protocol after \mathbf{t} and the next coin flips (r_1,r_2) that leads to the protocol outcome b.

It seems that an effective way for the adversary to use \mathsf{Inv} might be to call \mathsf{Inv}(\mathbf{t},1) for each partial transcript \mathbf{t} at which the relevant player has to make a move, and then to behave according to the outputted coins. We call this strategy the biased-continuation attack, which is the crucial attack underlying the result of [4].
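To make the biased-continuation attack concrete, here is a toy model (entirely hypothetical, not from the paper): a 3-round protocol in which each round contributes one message bit, honest players choose their bits uniformly, and the outcome is the majority of the three bits. The attacker moves in rounds 1 and 3 and, on each of its turns, samples its next bit in proportion to the probability that an all-honest continuation ends in outcome 1 – exactly the kind of sampling that access to \mathsf{Inv} enables. We can compute the resulting bias exactly:

```python
DEPTH = 3  # players alternate; each round is one message bit

def outcome(t):
    """Outcome of a completed toy transcript: majority of the bits."""
    return 1 if sum(t) >= 2 else 0

def prob_one_honest(t):
    """Probability that an all-honest continuation of partial transcript t ends in 1."""
    if len(t) == DEPTH:
        return outcome(t)
    return 0.5 * (prob_one_honest(t + [0]) + prob_one_honest(t + [1]))

def prob_one_attacked(t, attacker_turn):
    """Exact outcome-1 probability when the attacker, on its turns, samples the
    next bit in proportion to the honest chance of ending in outcome 1."""
    if len(t) == DEPTH:
        return outcome(t)
    p0 = prob_one_attacked(t + [0], not attacker_turn)
    p1 = prob_one_attacked(t + [1], not attacker_turn)
    if attacker_turn:
        w0, w1 = prob_one_honest(t + [0]), prob_one_honest(t + [1])
        if w0 + w1 == 0:
            return 0.0  # every leaf below t is already outcome 0
        return (w0 * p0 + w1 * p1) / (w0 + w1)
    return 0.5 * (p0 + p1)  # the honest player still flips a fair coin

print(prob_one_honest([]))          # 0.5   -- the toy protocol is fair when honest
print(prob_one_attacked([], True))  # 0.875 -- biased continuation skews it to 7/8
```

In this toy example the honest protocol is perfectly fair, yet the single-level biased-continuation attacker already pushes the outcome to 1 with probability 7/8.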

The new paper proposes a recursive biased-continuation attack that adds an additional layer of sophistication. Let \mathsf{A}^{(0)}=\mathsf{A} be the honest first player’s strategy. Now, define \mathsf{A}^{(i)} to be the attacker which, rather than sampling a random 1-continuation among all the possible honest continuations of the protocol (\mathsf{A},\mathsf{B}), instead samples a random 1-continuation among all continuations of (\mathsf{A}^{(i-1)},\mathsf{B}). Note that \mathsf{A}^{(1)} is the biased-continuation attacker described above! It turns out that as the number of recursions grows, the probability that the resulting transcript will land in the “immune” set approaches 1 – meaning a successful attack! Naïvely, this attack may require exponential time due to the many recursions required; however, the paper circumvents this by analyzing the probability that the transcript will land in a set which is “almost immune”, and finding that this probability approaches 1 significantly faster.

[1] One-Way Functions Are Essential for Complexity Based Cryptography. Impagliazzo, Luby (FOCS 1989).
[2] Coin Flipping by Telephone: A Protocol for Solving Impossible Problems. Blum (CRYPTO 1981).
[3] On the Computational Complexity of Coin Flipping. Maji, Prabhakaran, Sahai (FOCS 2010).
[4] Coin Flipping with Constant Bias Implies One-Way Functions. Haitner, Omri (FOCS 2011).


Spaghetti code crypto – STOC 2014 Recaps (Part 6)

We return to our STOC recap programming with Justin Holmgren, who will tell us about an exciting new result in program obfuscation. (However, he doesn’t tell us how to say “indistinguishability obfuscation” 10 times fast).


Justin Holmgren on How to Use Indistinguishability Obfuscation: Deniable Encryption, and More, by Amit Sahai and Brent Waters

One very recent topic of study in cryptography is program obfuscation – that is, how to make a program maximally unintelligible while still preserving its functionality. We will represent programs as circuits, which for most purposes is equivalent to any other representation. At some level, obfuscation seems like it should be an easy task – any programmer knows that it’s easy to end up with unreadable spaghetti code – but a theoretical treatment of the issue was lacking, and finding a construction with provable security properties has proven quite difficult.

In 2001, Barak et al. [1] precisely formulated black-box obfuscation, which says that if you can compute something given the obfuscated program, you could also compute it given black-box access to the program. This is a natural and very strong notion of obfuscation. If we could obfuscate programs according to this definition, we could easily solve many difficult problems in cryptography. For example, a public-key fully homomorphic encryption scheme could be constructed from an ordinary symmetric-key encryption scheme by obfuscating a program which decrypts two input ciphertexts, and outputs the encryption of their NAND. Unfortunately, [1] also showed that black-box obfuscation is impossible to achieve.

One weaker definition of obfuscation that [1] did not rule out is called indistinguishability obfuscation (iO). It requires that if two circuits C_1 and C_2 are functionally equivalent (for all x, C_1(x) = C_2(x)), then their obfuscations are indistinguishable. In 2013, Garg et al. [2] gave the first candidate construction for iO. This construction is not proven secure under standard assumptions (e.g. the hardness of factoring), but no attacks on the construction are currently known, and it has been proven that certain types of generic attacks cannot possibly work.

It isn’t immediately clear whether iO is useful. After all, what information is the obfuscation really hiding if it only guarantees indistinguishability from functionally equivalent programs? “How to use Indistinguishability Obfuscation: Deniable Encryption and More”, by Amit Sahai and Brent Waters [3], shows several applications of iO to building cryptographic primitives, and proposes iO as a “hub of cryptography”, meaning that constructions which use iO as a building block would be able to benefit from improvements in the efficiency or underlying computational assumptions of iO constructions.

As a first example of how to use iO, they show how to construct an IND-CPA public-key cryptosystem based on one-way functions and iO. This uses a few other basic and well-studied cryptographic primitives (it is known that all of them exist if one-way functions exist). First, a pseudorandom generator is a deterministic function which takes as input a random string s, and computes a longer string r which is indistinguishable from truly random. The length of r can be polynomially larger than the length of s, but we will let r be twice the length of s. A pseudorandom function family is a family of functions f_k, indexed by a secret key k, mapping inputs x to outputs of some fixed length. Oracle access to f_k(\cdot) for a random unknown k is computationally indistinguishable from oracle access to a truly random function. The main special property that [3] requires is a puncturable pseudorandom function family. That is, for a secret key k and an input x, a “punctured” key k\{x\} can be computed. k\{x\} allows one to compute f_k(y) for all y \neq x, but essentially reveals no information about f_k(x).

The power of iO comes from combining its indistinguishability guarantees with those of other cryptographic primitives, in what is known as the hybrid method. We will start with the security game played with a real scheme, and give a sequence of other games, each indistinguishable from the previous one to the adversary, such that the adversary obviously has no advantage in the last game. This will prove that the scheme is secure.

For example, a simple public key encryption scheme for one-bit messages (let G be a PRG, and F be a puncturable pseudorandom function family) would be the following.

  • The secret key SK is just a puncturable PRF key k.
  • The public key PK is the obfuscation of an Encrypt program. Encrypt takes a message m and randomness r, and outputs (G(r), F_k(G(r)) \oplus m).
  • To decrypt a ciphertext (c_1, c_2) given SK, one just computes F_k(c_1)  \oplus c_2.
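As a toy illustration of this scheme’s functionality (only the functionality: no practical obfuscator exists, so the “public key” below is the program in the clear, and the SHA-256-based G and F are stand-ins of my own choosing – in particular, this F is not actually puncturable):

```python
import hashlib, secrets

SEED_LEN = 16  # r is a 16-byte seed; G doubles its length

def G(s: bytes) -> bytes:
    """Stand-in length-doubling PRG built from SHA-256 (a heuristic choice)."""
    return (hashlib.sha256(b"prg0" + s).digest()[:SEED_LEN] +
            hashlib.sha256(b"prg1" + s).digest()[:SEED_LEN])

def F(k: bytes, x: bytes) -> int:
    """Stand-in PRF with one-bit output (the real scheme needs it puncturable)."""
    return hashlib.sha256(k + x).digest()[0] & 1

def make_public_key(k: bytes):
    # In the real scheme this closure would be published *obfuscated*;
    # here it is in the clear, purely to show the functionality.
    def Encrypt(m: int, r: bytes):
        c1 = G(r)
        return (c1, F(k, c1) ^ m)
    return Encrypt

def decrypt(k: bytes, ciphertext):
    c1, c2 = ciphertext
    return F(k, c1) ^ c2

SK = secrets.token_bytes(16)
PK = make_public_key(SK)
for m in (0, 1):
    assert decrypt(SK, PK(m, secrets.token_bytes(SEED_LEN))) == m
```

Note how security hinges entirely on the obfuscation: published in the clear as above, the closure exposes k, and anyone can decrypt.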

In the IND-CPA security game, there is a challenger and an adversary. The challenger generates the public and secret key, and gives the public key to the adversary. The challenger also generates an encryption of a random bit b, and gives this to the adversary. The adversary wins if he can guess the bit that the challenger chose.

In the first hybrid, the challenger encrypts by choosing a truly random \tilde{r}, and generating the ciphertext (\tilde{r}, F_k(\tilde{r}) \oplus b). This is indistinguishable from the real game by the security property of the pseudorandom generator.

In the next hybrid, the program Encrypt which is obfuscated and given to the adversary uses a punctured key k\{\tilde{r}\} instead of the full PRF key k. This is with high probability functionally equivalent, because \tilde{r} is most likely not in the range of G. So by the security of iO, this hybrid is indistinguishable from the previous one.

Next, the ciphertext is generated by choosing a truly random bit p, and generating the ciphertext (\tilde{r}, p \oplus b). This is also indistinguishable to the adversary, because the punctured key reveals nothing about F_k(\tilde{r}). That is, if these two hybrids were distinguishable, then given only k\{\tilde{r}\}, one could distinguish between a truly random bit p and the true value F_k(\tilde{r}).

In this hybrid, it is clear (by analogy to a one-time pad) that no adversary can distinguish a ciphertext encrypting 0 from a ciphertext encrypting 1. So the original scheme is also secure.

Sahai and Waters achieve more than just public-key encryption – they also solved the long-standing open problem of publicly deniable encryption. This is an encryption system for which if Encrypt(m ; r) = c, one can also compute a “fake” randomness r' for any other message m' such that Encrypt(m'; r') is also equal to c, and r' is indistinguishable from r. The motivation is that after a message is encrypted, coercing the encrypter into revealing his message is futile – there is no way of verifying whether what he says is the truth. Note that of course the decrypter could be coerced to reveal his secret key, but there’s nothing that can be done about that.

The basic idea of the deniable encryption scheme is that the encryption public key is again an obfuscated program – this program takes a message and randomness string. If the randomness string has some rare special structure (it is a hidden sparse trigger), it is interpreted as encoding a particular ciphertext, and the program will output that ciphertext. Otherwise, the program encrypts normally. There is also a public obfuscated Explain program, which allows anybody to sample a random hidden sparse trigger, but not test whether or not a string is a hidden sparse trigger.

At present, iO obfuscation is known to imply some very powerful primitives, including for example multi-input functional encryption. It is still open to construct fully homomorphic encryption from iO, or even collision-resistant hash functions from iO, or to show that no such reduction is possible.

[1] On the (Im)possibility of Obfuscating Programs, Barak, Goldreich, Impagliazzo, Rudich, Sahai, Vadhan, Yang, 2001.
[2] Candidate Indistinguishability Obfuscation, Garg, Gentry, Halevi, Raykova, Sahai, Waters, 2013.
[3] How to Use Indistinguishability Obfuscation: Deniable Encryption, and More, Sahai, Waters, 2014.

Can you tell if a bit is random?

Avi Wigderson has this wonderful talk about the power and limitations of randomness in computation, surveying the beautiful area of randomized algorithms, and the surprising connection between pseudorandomness and computational intractability. If you haven’t seen it already, I highly recommend it!

There is a part of the talk when Avi fearlessly engages the age-old question of the meaning of randomness. What does it mean for something to be random? We theoretical computer scientists often speak of “coin tosses”: many of our algorithms toss coins to decide what to do, and Alice and Bob toss coins to foil their mortal enemy Eve. Avi asks, “Is a coin toss random?”

Imagine that I hold in my hand a (fair) coin, and a second after I toss it high in the air, you, as you are watching me, are supposed to guess the outcome when it lands on the floor. What is the probability that you will guess correctly? 50-50 you say? I agree!

(Reproduced from the essay version of his talk). But now, Avi says, imagine that the coin toss was repeated, but this time you had some help: you were equipped with a powerful computing device, and accurate sensors, like an array of high speed video recorders. The coin suddenly starts to look a lot less random now! If your computer and sensors were good enough, you’d likely be able to predict the outcome of the coin toss nearly perfectly. In both cases the coin toss was the same, but the observer changed, and so did the amount of randomness in the situation. Thus, he contends, randomness is in the eye of the beholder.

From here, Avi launches into an excellent discussion of pseudorandomness and other topics. However, I’m going to forge onwards — foolishly, perhaps — with the original theme: what is randomness? As he pointed out, the randomness in the coin toss situation depended on the capabilities of the observer. Thus, one could argue that a coin toss is not truly random, because in principle you could predict the outcome. Is there a situation that is intrinsically random, no matter how powerful or omniscient the observer was? Put another way: is there a process by which I can generate a bit that  no external super-being could possibly predict? Can you tell if a bit is random?

Enter the physics

Do you believe that the universe is governed by quantum mechanics? If so, then it seems that the matter is settled: quantum mechanics guarantees the existence of intrinsically random events. For those who know basic quantum, generating a truly random bit is easy: simply prepare a qubit state \frac{1}{\sqrt{2}}(|0\rangle + |1 \rangle) and measure in the computational basis — done! In fact, there are hardware random number generators that operate on this principle.

But how do you actually prepare such a qubit, and perform the measurement? If you were a skilled experimentalist with access to perfectly manufactured mirrors and laser sources and detectors, then maybe. But then, how would you know these mirrors and lasers and detectors were built correctly?

For the rest of us down on Earth who don’t have time to learn experimental quantum physics and personally guarantee that all our laboratory equipment was built to specification, we’d like to be able to take some apparatus like a hardware random number generator and test whether its outputs were genuinely random — without having to open up the device and inspect its insides. Can we test randomness in a black box fashion?

It is easy to see that, if you only had input/output access to a black box device, there’s no test you can perform to reliably tell whether the device’s outputs are random. Intuitively, this is because for every test, you can construct a black box device that deterministically produces outputs that pass the test! Here’s a proof-by-Dilbert-cartoon:

What about statistical tests for randomness? Pseudorandom number generators are usually subject to an extensive battery of tests that check for suspicious patterns or regularities. Again, these are not fool-proof: one can design a fixed string that “diagonalizes” against all these heuristics and passes them all!

The power of non-locality

Black-box randomness testing seems like a hopeless task. But it turns out that, with a tiny extra assumption, it becomes possible! The added ingredient is non-locality.

Imagine that we had a hardware random number generator (HRNG) that we wanted to test, and suppose that the HRNG consisted of two separate pieces — call them Piece A and Piece B. Prevent A and B from communicating, either by surrounding them with Faraday cages or putting them on opposite sides of the galaxy. Now play the following game with them: generate two random bits x and y, and give x to A and y to B. Collect output bits a from A and b from B, and check if a ⊕ b = x Λ y (where ⊕ denotes the parity (XOR) of the two output bits, and Λ denotes AND). If so, the devices win this curious little game.

Suppose A and B were completely deterministic devices: their outputs a and b were fixed deterministic functions of their inputs x and y, respectively. Then it is not hard to see that, over the random choices of x and y, they can win this game with probability at most 3/4.
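This 3/4 bound is easy to verify by brute force, since a deterministic device is just a function from its one-bit input to a one-bit output, and there are only four such functions per side (a quick sketch, names my own):

```python
from itertools import product

# Each deterministic strategy for one device is a pair (value on input 0,
# value on input 1); there are 4 such one-bit functions per device.
bit_functions = [(0, 0), (0, 1), (1, 0), (1, 1)]

best = 0.0
for f, g in product(bit_functions, repeat=2):
    # Count how many of the 4 equally likely input pairs (x, y) satisfy
    # the winning condition a XOR b == x AND y.
    wins = sum(1 for x, y in product((0, 1), repeat=2) if f[x] ^ g[y] == (x & y))
    best = max(best, wins / 4)

print(best)  # 0.75: no deterministic pair wins more than 3/4 of the time
```

Randomized classical devices do no better, since a randomized strategy is just a probability distribution over these 16 deterministic pairs, and the maximum of an average is never more than the maximum term.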

What if you somehow knew that the devices were winning with probability greater than 3/4? What must you conclude? These devices — and hence their output bits — must be acting randomly. The killer fact is, it’s actually possible to win this game with probability strictly greater than 3/4! This can happen if part A and part B, though isolated, utilize something called quantum entanglement — a phenomenon that Einstein derisively termed “spooky action at a distance”.
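We can check this claim numerically with the standard optimal measurement strategy for this game (the rotation angles below are the textbook choice; the variable names are my own):

```python
import numpy as np

# Shared entangled state |phi+> = (|00> + |11>) / sqrt(2)
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)

def projectors(theta):
    """Rank-1 projectors for outcomes 0 and 1 of a measurement in the
    real basis rotated by angle theta."""
    v0 = np.array([np.cos(theta), np.sin(theta)])
    v1 = np.array([-np.sin(theta), np.cos(theta)])
    return [np.outer(v0, v0), np.outer(v1, v1)]

# Standard optimal angles: Alice measures at 0 or pi/4 depending on her
# input x; Bob measures at pi/8 or -pi/8 depending on his input y.
alice = {0: projectors(0), 1: projectors(np.pi / 4)}
bob = {0: projectors(np.pi / 8), 1: projectors(-np.pi / 8)}

win = 0.0
for x in (0, 1):
    for y in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                if a ^ b == (x & y):  # winning condition
                    P = np.kron(alice[x][a], bob[y][b])
                    win += 0.25 * (phi @ P @ phi)  # inputs x, y are uniform

print(win)  # ~0.8536 = cos^2(pi/8), beating every classical strategy's 3/4
```

The winning probability comes out to cos²(π/8) ≈ 0.854, which is in fact the best any quantum strategy can do (Tsirelson’s bound).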

We’ve made some headway on randomness testing, it seems. First, this game I’ve described does not require you to peer inside the HRNG. Second, if the HRNG wins the game with high enough probability, you’ve certified that its outputs must contain some randomness. Third, quantum mechanics says it is possible to build a HRNG to win the game. Finally, as a bonus: you don’t need to believe in quantum mechanics in order to trust the conclusion that “If the devices win with probability greater than 3/4, they must be producing randomness”!

What I’ve described to you is a distillation of the famous Bell’s Theorem from physics, which says that quantum mechanics is inconsistent with any so-called “hidden variable” theory of nature that respects locality (i.e. you can separate systems by distances so they cannot communicate or influence each other). This is why people say that quantum mechanics exhibits “non-local” behavior, because games like the one above can be won with probability greater than 3/4 (which has been demonstrated experimentally). Bell’s Theorem has been historically important in establishing a dividing line between classical and quantum theories of nature; but today we have a new interpretation: one can view Bell’s Theorem — published half a century ago — as saying that we can meaningfully perform randomness testing.

Randomness expansion — to infinity and beyond!

“Wait a minute,” you say. “While you’ve presented a test for randomness, this isn’t so useful — to even run the test you need 2 bits of randomness to generate x and y! Why test a HRNG if you already have randomness to begin with?”

True. A HRNG test isn’t very useful when it requires more randomness than is produced. It’s like the current state of fusion reactors: they produce less energy than is required to run them. But researchers in quantum cryptography realized that you could get a randomness-efficient test, using less randomness than the amount produced by the HRNG. For example, one can start with 100 random bits, run one of these randomness-efficient tests, and end up with, say, 200 random bits, a net gain of 100! In a sense, the amount of randomness you now possess has been expanded – hence these tests are called randomness expansion protocols.

Roger Colbeck, in his Ph.D. thesis from 2006, came up with a protocol that uses m bits of initial seed randomness, and certifies cm bits of output randomness using a HRNG that consists of 3 isolated parts, where c is some constant (approximately 3/2). Later in 2010, Pironio et al. gave a protocol that expands m bits of seed randomness to m^2 certified bits. In 2011, two simultaneous works — one by Vazirani and Vidick, and one by Fehr, et al. — demonstrated a protocol that attained exponential expansion: starting with m bits, one can certify that a 2-part HRNG produces a whopping 2^m certified bits of randomness!

While 2^100 random bits starting from a mere 100 is probably more than you’d ever need, the tantalizing question still stands: is there a limit to how much randomness you can certify, starting with a finite amount of seed randomness? In the upcoming Quantum Information Processing conference, Matt Coudron and I will show that there is no upper limit, and that infinite randomness expansion is possible. We give a protocol where, starting with say 1000 random bits, you can command a finite number of devices (which is eight in our protocol) to produce an unbounded amount of randomness. Furthermore, this protocol is immune to any secret backdoors installed in your HRNG (e.g. by a certain security agency) — the output of your HRNG is guaranteed to be secure against any external eavesdropper.

Classical control of quantum systems

Let’s take a step back from this frenetic expansion of randomness, and appreciate the remarkable nature of these protocols. Note that, this entire time, we know almost nothing about the HRNG being tested. It could have been specially designed by some malicious adversary to try to fool your test as much as possible. It could have at its disposal enormous computational power, like access to the Halting Problem oracle. It could even use quantum entanglement in some yet-unimagined way to try to defeat your test. Yet, by isolating parts of the HRNG from each other, and employing a simple check on their input and outputs, a limited classical being can force the HRNG to produce a long stream of random bits.

In a time when random number generators can be stealthily compromised by altering the dopants in a microchip, or by the intentional weakening of cryptographic standards by government agencies, the ability to rigorously and soundly test randomness has outgrown its philosophical origins in the nature of physical law, and become something of operational significance.

To return to our original question, though: can we tell if a bit is random? We can, but only by employing some of the deepest ideas drawn from theoretical computer science and quantum physics. More broadly, this paradigm of classically controlling untrusted quantum devices has given rise to what’s called device independent quantum information processing. Not only can we classically manipulate quantum devices to produce genuine randomness, we can do a whole lot more: today we know how to securely exchange secret keys and even perform general quantum computations in a device independent manner! The scope and potential of device independent QIP is enormous, and I think we’ve only gotten started.

Henry Yuen