Indistinguishability and Pseudo-Randomness
Recall the perfectly secure encryption, OTP. That is, we bitwise-XOR our message with a uniform random string.
OTP is inefficient because the long random string must be shared between Alice and Bob in advance. Suppose that we have a (mathematical, deterministic) function that extends a short truly random string to a long "random-looking" string. We can use the seemingly random string to encrypt messages as in OTP, yet it is efficient.
Is that possible? How to formally define “random-looking”?
Let $g : \{0,1\}^n \to \{0,1\}^{\ell(n)}$ with $\ell(n) > n$ be the above function with short input $s$ and long output $g(s)$. We want Alice and Bob to share the same $g(s)$ to decrypt correctly, so $g$ must be deterministic. Mathematically, we have the statistical distance $\Delta(X, Y) := \frac{1}{2} \sum_z \big|\Pr[X = z] - \Pr[Y = z]\big|$ for the distance between two probability distributions $X$ and $Y$. However, for any such deterministic $g$, the output distribution $g(U_n)$ is statistically far from the uniform distribution $U_{\ell(n)}$. The point is that $g(s)$ is "random-looking" at best.
We will introduce computational indistinguishability, and then define pseudo-random generator (PRG) and pseudo-random function (PRF). Prior to that, we need to define efficient algorithms (to model honest procedures) and adversaries.
Efficient Computation and Efficient Adversaries
We want to provide “efficient computation” to the honest users.
Definition: Deterministic Algorithm
An algorithm $\mathcal{A}$ is defined to be a deterministic Turing machine whose input and output are bit strings in $\{0,1\}^*$.
Recall that a Turing Machine is by definition uniform, that is,
- The description length of $\mathcal{A}$ is constant for all inputs (no matter how long the input is).
Also notice that the definition of an algorithm here is different from some algorithm textbooks; e.g., we do not require an algorithm to halt in finite time (unless explicitly specified, see next).
Definition: Polynomial Running-Time
$\mathcal{A}$ runs in time $T(n)$ if for all $x \in \{0,1\}^*$, $\mathcal{A}(x)$ halts within $T(|x|)$ steps. $\mathcal{A}$ runs in polynomial time if there exists a constant $c$ such that $\mathcal{A}$ runs in time $T(n) = n^c$.
Definition: Efficient Computation
We say an algorithm is efficient if it runs in polynomial time.
Justification of poly time:
- Independent of computation models (TM, circuit, ...)
- Closed under compositions of algorithms
- From human experience: many poly-time algorithms are later improved to, say, cubic time, but for many super-polynomial-time algorithms/problems it is unclear whether we can solve them in polynomial time.
As discussed in perfect secrecy, we need randomized algorithms to construct encryption (and more generally crypto).
Definition: Randomized (or Probabilistic) Algorithm
A randomized algorithm , also called a probabilistic polynomial-time Turing Machine and abbreviated as PPT, is a deterministic algorithm equipped with an extra random tape. Each bit of the random tape is uniformly and independently chosen.
We denote the computation by $\mathcal{A}(x; r)$, where $r$ is the random tape, but we sometimes omit $r$. When we say a randomized algorithm runs in time $T(n)$, the running time shall be at most $T(|x|)$ for all inputs $x$ and all random tapes $r$.
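As a toy illustration of the random-tape model (all names below are illustrative and not from the notes), here is a small Python sketch of a randomized algorithm $\mathcal{A}(x; r)$ that reads its randomness from an explicit tape of uniform, independent bits:

```python
# A minimal sketch (illustrative names): a randomized algorithm is a
# deterministic algorithm plus a random tape of uniform, independent bits.
import secrets
from typing import Iterator

def random_tape() -> Iterator[int]:
    """The random tape: each bit is uniform and independent."""
    while True:
        yield secrets.randbits(1)

def estimate_fraction_of_ones(x: str, r: Iterator[int], samples: int = 200) -> float:
    """A toy A(x; r): estimate the fraction of 1s in x by sampling random
    positions, reading all randomness from the explicit tape r."""
    n = len(x)
    hits = 0
    for _ in range(samples):
        idx = 0
        for _ in range(n.bit_length()):   # read enough tape bits to index into x
            idx = (idx << 1) | next(r)
        hits += int(x[idx % n] == "1")
    return hits / samples

print(estimate_fraction_of_ones("1011001110", random_tape()))
```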
As an example, we define efficient and correct encryption.
Example: Efficient Private-key Encryption.
$(\mathsf{Gen}, \mathsf{Enc}, \mathsf{Dec})$ is called an efficient private-key encryption scheme w.r.t. message space $\{\mathcal{M}_n\}_n$ if:
- $\mathsf{Gen}$ is PPT s.t. for every $n \in \mathbb{N}$, it samples a key $k \leftarrow \mathsf{Gen}(1^n)$.
- $\mathsf{Enc}$ is PPT s.t. for every key $k$ and every message $m \in \mathcal{M}_n$, it outputs a ciphertext $c \leftarrow \mathsf{Enc}_k(m)$.
- $\mathsf{Dec}$ is PPT s.t. for every key $k$ and every ciphertext $c$, it outputs a message $\mathsf{Dec}_k(c)$.
- Correctness: for all $n \in \mathbb{N}$ and $m \in \mathcal{M}_n$, $\Pr[k \leftarrow \mathsf{Gen}(1^n) : \mathsf{Dec}_k(\mathsf{Enc}_k(m)) = m] = 1$.
Note: the notation $1^n$ means the string of $n$ copies of 1; $n$ is called the security parameter. The purpose is to instantiate the "security" of the scheme, e.g., an $n$-bit key. We write $n$ in unary (not in binary) because "poly-time" is defined in terms of the input length.
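To make the syntax concrete, here is a minimal Python sketch of the $(\mathsf{Gen}, \mathsf{Enc}, \mathsf{Dec})$ interface, instantiated with the one-time pad over $n$-bit messages (illustrative names; the unary security parameter is passed as an integer for convenience):

```python
# A minimal sketch of the (Gen, Enc, Dec) syntax, instantiated with the
# one-time pad over n-bit messages; names are illustrative.
import secrets

def gen(n: int) -> bytes:
    """Gen(1^n): sample an n-bit key (n is the security parameter)."""
    return secrets.token_bytes((n + 7) // 8)

def enc(k: bytes, m: bytes) -> bytes:
    """Enc_k(m): bitwise XOR (here the message must be as long as the key)."""
    assert len(m) == len(k)
    return bytes(a ^ b for a, b in zip(k, m))

def dec(k: bytes, c: bytes) -> bytes:
    """Dec_k(c): XOR again; correctness Dec_k(Enc_k(m)) = m holds for all m."""
    return enc(k, c)

n = 128
k = gen(n)
m = b"sixteen byte msg"   # 128 bits
assert dec(k, enc(k, m)) == m
```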
Adversaries: Non-Uniform
Next we want to model adversaries with a stronger capability than the honest parties.
Definition: Non-Uniform PPT
A non-uniform PPT machine (abbreviated nuPPT) $\mathcal{A}$ is a sequence of algorithms $\mathcal{A}_1, \mathcal{A}_2, \ldots$ s.t.:
- $\mathcal{A}_n$ computes on inputs of length $n$, and
- there exists a polynomial $p(\cdot)$ s.t. the description size $|\mathcal{A}_n| \le p(n)$ and the running time of $\mathcal{A}_n$ is also less than $p(n)$. We write $\mathcal{A}(x)$ to denote the computation $\mathcal{A}_{|x|}(x)$.
Alternatively, a nuPPT algorithm can be defined as a uniform PPT that takes an additional advice string of polynomial length for each input length $n$.
Purpose: non-uniformity gives the adversary extra power and models many real scenarios, e.g., Eve may have a list of known (plaintext, ciphertext) pairs.
Discuss: the advice may not be computable in polynomial time (if it were, then there would be no need to take the advice as input). Non-uniformity is closed under reduction.
Computational Indistinguishability
Key Idea: If we have no way to show the difference, then we are satisfied. We call it indistinguishability.
Example: Turing test, when a machine and a human are indistinguishable under every human's prompts, we call it AI.
Observation: they are not the same, not even close in any sense; however, the distinguisher "another human" can not tell the difference due to limited power. Notice that it is hard to prove that a machine satisfies this definition because we can not put every human through the test. The point is the falsifiability: if a machine does not satisfy it, we just need one human distinguisher.
Concept: we say a distribution is pseudorandom if for every efficient algorithm, it can not be distinguished from a (truly) uniform distribution.
We will formalize the concept asymptotically.
Definition: Ensembles of Probability Distributions
A sequence $\{X_n\}_{n \in \mathbb{N}}$ is called an ensemble if for each $n \in \mathbb{N}$, $X_n$ is a probability distribution over $\{0,1\}^*$. (We often write $\{X_n\}_n$ when the context is clear.)
E.g., supposing $X_n$ is a distribution over $n$-bit strings for all $n \in \mathbb{N}$, $\{X_n\}_n$ is an ensemble.
Definition: Computational Indistinguishability
Let $\{X_n\}_n$ and $\{Y_n\}_n$ be ensembles where $X_n, Y_n$ are distributions over $\{0,1\}^{\ell(n)}$ for some polynomial $\ell(\cdot)$. We say that $\{X_n\}_n$ and $\{Y_n\}_n$ are computationally indistinguishable (denoted by $\{X_n\}_n \approx \{Y_n\}_n$) if for all NUPPT $D$ (called the "distinguisher"), there exists a negligible function $\epsilon(\cdot)$ such that for all $n \in \mathbb{N}$,
$$\big| \Pr[t \leftarrow X_n : D(1^n, t) = 1] - \Pr[t \leftarrow Y_n : D(1^n, t) = 1] \big| \le \epsilon(n).$$
Note:
- "$=1$" is a convention in the literature
- the absolute value is not necessary due to the "for all $D$" quantifier (one can negate $D$)
This definition requires the two distributions to pass all efficient statistical tests, which include the following.
- Roughly as many 0 as 1.
- Roughly as many 01 as 10.
- Each sequence of bits occurs with roughly the same probability.
- Given any prefix, guessing the next bit correctly happens with roughly the same probability.
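For instance, the first test in the list can be written as a tiny distinguisher; any candidate pseudorandom distribution must fool it (an illustrative sketch, not part of the definition):

```python
# A toy "efficient statistical test": count 0s vs 1s and output 1 iff they are
# roughly balanced. Names and the slack threshold are illustrative.
import random

def balanced_test(t: str, slack: float = 0.02) -> int:
    """Distinguisher D(t): output 1 if the fraction of 1s is within `slack` of 1/2."""
    frac_ones = t.count("1") / len(t)
    return int(abs(frac_ones - 0.5) <= slack)

# A distribution that always outputs the all-zero string fails this test,
# while the uniform distribution passes it with overwhelming probability.
print(balanced_test("0" * 1000))                                          # 0
print(balanced_test("".join(random.choice("01") for _ in range(1000))))   # 1 w.h.p.
```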
Lemma: Closure Under Efficient Operations
If the pair of ensembles satisfies $\{X_n\}_n \approx \{Y_n\}_n$, then for any NUPPT machine $M$, $\{M(X_n)\}_n \approx \{M(Y_n)\}_n$.
(By standard reduction)
Examples:
- $M$ ignores its input. Clearly, $M(X_n)$ and $M(Y_n)$ are identically distributed for all $n$.
- $M$ is the identity, i.e., its output is exactly the input. Then $\{M(X_n)\}_n \approx \{M(Y_n)\}_n$ is just $\{X_n\}_n \approx \{Y_n\}_n$.
- $M$ outputs the first half of the input, i.e., $M(x) := x_1 x_2 \cdots x_{|x|/2}$.
Lemma: Hybrid Lemma
Let $X^1, X^2, \ldots, X^m$ be a sequence of probability distributions. Assume that the machine $D$ distinguishes $X^1$ and $X^m$ with probability $\epsilon$. Then there exists some $i \in \{1, \ldots, m-1\}$ s.t. $D$ distinguishes $X^i$ and $X^{i+1}$ with probability at least $\epsilon / (m-1)$.
By the triangle inequality. Namely, fixing any $D$, for any $i$, define
$$g_i := \Pr[t \leftarrow X^i : D(t) = 1].$$
Then, we have
$$|g_1 - g_m| \le \sum_{i=1}^{m-1} |g_i - g_{i+1}|.$$
Hence, given that $|g_1 - g_m| \ge \epsilon$, we have that $|g_i - g_{i+1}| \ge \epsilon/(m-1)$ for some $i$.
Notice that this lemma applies to distributions, not ensembles. Fortunately, it implies the following.
Corollary:
For any ensembles $\{X_n\}_n, \{Y_n\}_n, \{Z_n\}_n$, if $\{X_n\}_n \approx \{Y_n\}_n$ and $\{Y_n\}_n \approx \{Z_n\}_n$, then it follows that $\{X_n\}_n \approx \{Z_n\}_n$.
(left as exercise)
Discuss If the number of hybrid distributions (between two ensembles) depends on the size $n$ (of the distributions in the ensembles), the above corollary is tricky. Consider two ensembles $\{X_n\}_n, \{Y_n\}_n$, and suppose that the machine $D$ distinguishes $X_n, Y_n$ w.p. $\epsilon(n)$ (that depends on $n$), and then suppose that the sequence between $X_n$ and $Y_n$ consists of $m(n)$ distributions such that $m(n)$ depends on $n$. Then, we can not define ensembles between $\{X_n\}$ and $\{Y_n\}$ due to the dependence (i.e., the length of the sequence depends on $n$). This is indeed the case when we have many hybrids, e.g., going from a PRG with 1-bit expansion to a PRG with $\ell(n)$-bit expansion. There are two ways to treat this case, the formal one and the more popular one. In the formal way, we assume for contradiction that there exist $D$ and a polynomial $p$ s.t. for infinitely many $n$, $D$ distinguishes $X_n, Y_n$ w.p. at least $1/p(n)$; we then construct a reduction $R$ such that $R$ guesses an index $i$ uniformly, hoping that $i = i^*$, where $i^*$ is the index given by the hybrid lemma, so that $R$ runs $D$ to distinguish and solve the challenge specified by the $i$-th hybrid. The popular way is less rigorous but more intuitive: we just claim that the two neighboring distributions are "indistinguishable" for each $i$, and thus $X_n$ and $Y_n$ are "indistinguishable"; this is informal because fixing any $n$ means that $i$ is also fixed and the neighboring hybrids are two distributions (not ensembles), but indistinguishability is defined asymptotically on ensembles.
Example: Let be ensembles. Suppose that and . Does ?
Lemma: Prediction Lemma
Let $\{X_n^0\}_n$ and $\{X_n^1\}_n$ be two ensembles where $X_n^0, X_n^1$ are probability distributions over $\{0,1\}^{\ell(n)}$ for some polynomial $\ell(\cdot)$, and let $D$ be a NUPPT machine that distinguishes between $\{X_n^0\}_n$ and $\{X_n^1\}_n$ with probability $\epsilon(n)$ for infinitely many $n$. Then there exists a NUPPT $A$ such that
$$\Pr\big[b \leftarrow \{0,1\};\; t \leftarrow X_n^b : A(t) = b\big] \ge \frac{1}{2} + \frac{\epsilon(n)}{2}$$
for infinitely many $n$.
Remove the absolute value in the definition of comp. ind. by negating the distinguisher $D$ if necessary, and then apply standard probability manipulation.
Note: the converse is easier to prove. Hence, prediction and distinguishing are essentially equivalent.
Pseudo-Random Generator
Definition: Pseudo-random Ensembles.
The probability ensemble $\{X_n\}_n$, where $X_n$ is a probability distribution over $\{0,1\}^{\ell(n)}$ for some polynomial $\ell(\cdot)$, is said to be pseudorandom if $\{X_n\}_n \approx \{U_{\ell(n)}\}_n$, where $U_{\ell(n)}$ is the uniform distribution over $\{0,1\}^{\ell(n)}$.
Note:
- this definition says that a pseudorandom distribution must pass all efficiently computable tests that the uniform distribution would have passed.
- it is hard to check or prove if a distribution is pseudorandom (due to the “for all” quantifier from comp. ind.)
Definition: Pseudo-random Generators.
A function $g : \{0,1\}^* \to \{0,1\}^*$ is a Pseudo-random Generator (PRG) if the following holds.
- (efficiency): $g$ can be computed in (deterministic) polynomial time.
- (expansion): $|g(x)| > |x|$ for all $x$.
- The ensemble $\{x \leftarrow U_n : g(x)\}_n$ is pseudorandom.
We sometimes say that the expansion of a PRG is $\ell(n)$ if $|g(x)| = \ell(|x|)$ for all $x$.
Example: if $g$, with $|g(x)| > |x|$ for all $x$, is a PRG, then $g$ is a OWF. (Proof left as exercise; why is expansion necessary?)
Example: Existence of PRG implies $\mathsf{P} \neq \mathsf{NP}$
Suppose that $g$ is a PRG. Then, the language $L := \{ g(x) : x \in \{0,1\}^* \}$ satisfies $L \in \mathsf{NP}$ and $L \notin \mathsf{P}$.
It is direct to see $L \in \mathsf{NP}$ (the seed $x$ is a witness). To see $L \notin \mathsf{P}$, suppose for contradiction that there exists a polynomial-time algorithm that decides $L$; then that algorithm can easily break the pseudorandomness of $g$ (it always accepts $g(U_n)$ but accepts a uniform string with probability at most $1/2$), a contradiction.
This is indeed the case for all cryptographic objects: $\mathsf{P} \neq \mathsf{NP}$ is necessary for the objects to exist. Since $\mathsf{P}$ vs. $\mathsf{NP}$ is long open, cryptography is built on assumptions. Ideally, even if we can not prove or disprove $\mathsf{P} \neq \mathsf{NP}$, we want the "win-win" scenario:
If $\mathsf{P} = \mathsf{NP}$, then we can solve all $\mathsf{NP}$ problems efficiently in polynomial time. This is a world called Algorithmica, but in it we do not have cryptography.
Otherwise $\mathsf{P} \neq \mathsf{NP}$, and then there exist some hard problems in $\mathsf{NP}$ that can not be solved in polynomial time. Ideally, we want to utilize the hard problems to build cryptographic objects (so that any polynomial-time adversary can not break them).
This explains why cryptography needs assumptions, but what is a good assumption? Ideally, the minimal assumption would be $\mathsf{NP} \not\subseteq \mathsf{P}$, or equivalently, $\mathsf{P} \neq \mathsf{NP}$. Unfortunately, we do not know how to build crypto assuming only $\mathsf{P} \neq \mathsf{NP}$. So far, cryptography is built on factoring, RSA, ...; we will discuss more of them. Notice that we still have the win-win w.r.t. these assumptions, e.g., either we can solve factoring in polynomial time (which would solve a century-old problem), or we have cryptography.
Here, PRG is the assumption that gives us encryption. It is currently an abstract object, but later we will show how PRG connects cryptography and other assumptions.
It also partly explains why we can still have crypto even when we can use quantum computers to break factoring, RSA, and other number-theoretic assumptions: there are other problems in $\mathsf{NP}$ that still resist quantum algorithms. It is interesting to observe that the thousand-year quest for secure encryption is deeply rooted in complexity (e.g., DES, MD5, Enigma, ... were broken).
Also notice again that the discussion is asymptotic, over all problem sizes, and that some real-world cryptographic constructions are not asymptotic and thus do not fit in this framework (e.g., AES and SHA are defined only for sizes 128, 256, and 512 bits).
Expansion of PRGs
The above PRG definition does not specify the output length except that it is longer than the input. Letting the input size be $n$, we denote the output length by $\ell(n)$. We say $\ell(n)$ is the expansion of the PRG; the larger, the better.
(For some function $g$, the output length $|g(x)|$ may differ from $|g(x')|$ even when $|x| = |x'|$. In that case, if we know the shortest output length for each $n$, then we can truncate all outputs to that length. Otherwise, we can simply truncate the output length to $n+1$. Either way, the truncation is still a PRG.)
Lemma: Expansion of a PRG
Let $g : \{0,1\}^n \to \{0,1\}^{n+1}$ be a PRG for all $n \in \mathbb{N}$. For any polynomial $\ell(n) > n$, define $g' : \{0,1\}^n \to \{0,1\}^{\ell(n)}$ as follows:
$$g'(s) := b_1 b_2 \cdots b_{\ell(n)},$$
where $X_0 := s$, and for $i = 1, \ldots, \ell(n)$, $X_i \in \{0,1\}^n$ and $b_i \in \{0,1\}$ are the $n$-bit prefix and the last bit of $g(X_{i-1})$, i.e., $X_i \| b_i := g(X_{i-1})$. Then $g'$ is a PRG.
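A minimal Python sketch of this construction, assuming a one-bit-expansion PRG $g$; the stand-in `g` below is built from SHA-256 purely to make the sketch runnable and carries no pseudorandomness claim:

```python
# Iterate a one-bit-expansion PRG g : {0,1}^n -> {0,1}^{n+1}: keep the n-bit
# prefix X_i as the next seed and output the last bit b_i each round.
import hashlib

def g(x: str) -> str:
    """Placeholder stand-in for a PRG mapping n bits to n+1 bits (illustration only)."""
    n = len(x)
    digest = hashlib.sha256(x.encode()).digest()
    bits = "".join(f"{byte:08b}" for byte in digest)
    return (bits * ((n + 1) // len(bits) + 1))[: n + 1]

def expand(seed: str, ell: int) -> str:
    """g'(s): run the construction for ell rounds, emitting one bit per round."""
    x, out = seed, []
    for _ in range(ell):
        y = g(x)               # n+1 bits
        x, b = y[:-1], y[-1]   # prefix X_i becomes the next seed; last bit b_i is output
        out.append(b)
    return "".join(out)

print(expand("1011001110101100", 40))  # 40 output bits from a 16-bit seed
```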
Proof, warmup:
Suppose that $\ell = 2$, i.e., $g'(s) = b_1 b_2$; there is no expansion, but we still want to show pseudorandomness. Define distributions $H^i_n$ for $i = 0, 1, 2$: $H^0_n := \{s \leftarrow U_n : b_1 b_2\}$ (the real output), $H^1_n := \{b_1 \leftarrow U_1, X_1 \leftarrow U_n : b_1 b_2\}$ (the first application of $g$ replaced by uniform, and $b_2$ computed from the fresh seed $X_1$), and define $H^2_n := U_2$. Since $X_1 \| b_1 = g(s)$, by the pseudorandomness of $g$ and closure, we have $\{H^0_n\} \approx \{H^1_n\}$. Because $g$ is pseudorandom and by closure, we also have $\{H^1_n\} \approx \{H^2_n\}$, which implies both neighboring pairs are indistinguishable. By the corollary of the hybrid lemma, we have $\{H^0_n\} \approx \{H^2_n\}$.
Proof of PRG Expansion
It is slightly tricky when the number of hybrids $\ell(n)$ depends on $n$. Define the $n$-bit prefix and the last bit of iterating $g$ as:
$$X_i \| b_i := g(X_{i-1}) \ \text{ for } i \ge 1, \qquad \text{and} \qquad X_0 := s.$$
We have $g'(s) = b_1 b_2 \cdots b_{\ell(n)}$, and we want to prove its pseudorandomness through the Hybrid Lemma. Given $n$, define hybrid distributions $H^0_n := \{s \leftarrow U_n : g'(s)\}$, $H^{\ell(n)}_n := U_{\ell(n)}$, and define $H^i_n$ for $0 < i < \ell(n)$ as
$$H^i_n := \{ b_1 \cdots b_i \leftarrow U_i,\; X_i \leftarrow U_n : b_1 \cdots b_i\, b_{i+1} \cdots b_{\ell(n)} \},$$
where $b_{i+1}, \ldots, b_{\ell(n)}$ are computed by iterating $g$ from $X_i$, and $U_m$ denotes sampling an $m$-bit string uniformly at random. Observe that for each $i$, $H^i_n$ and $H^{i+1}_n$ differ by one application of $g$, that is,
$$H^i_n = \{ b_1 \cdots b_i \leftarrow U_i,\; X_{i+1} \| b_{i+1} := g(U_n) : b_1 \cdots b_{\ell(n)} \} \quad \text{and} \quad H^{i+1}_n = \{ b_1 \cdots b_i \leftarrow U_i,\; X_{i+1} \| b_{i+1} \leftarrow U_{n+1} : b_1 \cdots b_{\ell(n)} \}$$
for all $i = 0, 1, \ldots, \ell(n) - 1$.
Assume for contradiction (AC) that there exist a NUPPT $D$ and a polynomial $p$ s.t. for infinitely many $n$, $D$ distinguishes $H^0_n$ and $H^{\ell(n)}_n$ w.p. at least $1/p(n)$. The intuition is to apply the Hybrid Lemma so that there exists $i$ such that $H^i_n, H^{i+1}_n$ are distinguishable, and thus by the Closure Lemma $g(U_n)$ is distinguishable from uniform.
We prove it formally by constructing $D'$ that aims to distinguish $g(U_n)$ from $U_{n+1}$. Given input $y \in \{0,1\}^{n+1}$, $D'$ performs:
- Sample $i \leftarrow \{0, 1, \ldots, \ell-1\}$ and $b_1 \cdots b_i \leftarrow U_i$ (where $\ell := \ell(n)$)
- Parse $X_{i+1} \| b_{i+1} := y$, and compute $b_{i+2}, \ldots, b_\ell$ by iterating $g$ from $X_{i+1}$
- Output $D(b_1 b_2 \cdots b_\ell)$
To show that $D'$ succeeds with non-negligible probability, we partition the event over the choice of $i$ as follows:
$$\Pr[y \leftarrow g(U_n) : D'(y) = 1] = \frac{1}{\ell} \sum_{i=0}^{\ell-1} \Pr[y \leftarrow g(U_n) : D(b_1 \cdots b_\ell) = 1],$$
where the random variables $b_1, \ldots, b_\ell$ are sampled exactly the same as in $D'$. Notice that conditioned on any fixed $i$, the distribution of $b_1 \cdots b_\ell$ (given to $D$) is identical to $H^i_n$ when $y \leftarrow g(U_n)$, and identical to $H^{i+1}_n$ when $y \leftarrow U_{n+1}$. That implies
$$\Pr[y \leftarrow g(U_n) : D'(y) = 1] - \Pr[y \leftarrow U_{n+1} : D'(y) = 1] = \frac{1}{\ell} \sum_{i=0}^{\ell-1} \Big( \Pr[t \leftarrow H^i_n : D(t) = 1] - \Pr[t \leftarrow H^{i+1}_n : D(t) = 1] \Big).$$
We thus have the summation telescoping, the intermediate terms cancelling out,
$$= \frac{1}{\ell} \Big( \Pr[t \leftarrow H^0_n : D(t) = 1] - \Pr[t \leftarrow H^{\ell}_n : D(t) = 1] \Big) \ge \frac{1}{\ell \cdot p(n)},$$
where the last inequality follows by (AC) (negating $D$ if needed to remove the absolute value). That is, $D'$ distinguishes $g(U_n)$ from $U_{n+1}$ w.p. at least $1/(\ell(n) \cdot p(n))$, contradicting that $g$ is a PRG.
Discuss In the above, we proved it formally and preserved uniformity (if $D$ is a uniform TM, then $D'$ is also uniform). We did not apply the Hybrid Lemma (and no triangle inequality), nor did we use the Closure Lemma. Alternatively, after (AC), one may apply the Hybrid Lemma, which claims that there exists $i^*$ s.t. $H^{i^*}_n$ is distinguishable from $H^{i^*+1}_n$ w.p. at least $1/(\ell(n) p(n))$, and then hardwire $i^*$ into $D'$ in order to distinguish $g(U_n)$ from $U_{n+1}$. This would make $D'$ non-uniform because $i^*$ may depend on each $n$ and we would not have an efficient way to find it.
We proved in the above that a PRG with 1-bit expansion is sufficient to build a PRG with any polynomially long expansion. We have not yet given any candidate construction of a PRG (even with 1-bit expansion), but it is useful to first see what we can achieve using PRGs.
Example: Now suppose that we have a PRG $g$ with expansion $\ell(n)$ for any polynomial $\ell$. We can construct a computationally secure encryption scheme by sampling the key $k$ as an $n$-bit string and then bitwise XORing $g(k)$ with the message. That encrypts one message. We can encrypt more messages by attaching to each message a sequence number and using a disjoint portion of $g(k)$ for each, such as the first portion for message 1, the next portion for message 2, and so on.
What’s the downside of the above multi-message encryption?
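As a concrete illustration of the single-message encryption in the example above, here is a minimal Python sketch assuming a PRG with long expansion; the stand-in `prg` below (SHA-256 in counter mode) is only for illustration, not a proven PRG:

```python
# Key = short seed; ciphertext = message XOR g(k), where g is a PRG with
# expansion at least the message length.
import hashlib

def prg(seed: bytes, out_len: int) -> bytes:
    """Stand-in PRG expanding `seed` to `out_len` bytes (illustration only)."""
    out = b""
    counter = 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:out_len]

def enc(seed: bytes, msg: bytes) -> bytes:
    pad = prg(seed, len(msg))
    return bytes(a ^ b for a, b in zip(pad, msg))

dec = enc  # XOR with the same pad is its own inverse

k = b"short-seed-16byt"   # the shared short key
m = b"a much longer message than the key itself"
assert dec(k, enc(k, m)) == m
```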
Pseudo-Random Functions
In order to apply PRGs more efficiently, we construct a tree structure and call the abstraction a pseudo-random function (PRF). We begin with defining a (truly) random function.
Definition: Random Functions
A random function $F : \{0,1\}^n \to \{0,1\}^n$ is a random variable sampled uniformly from the set $\mathsf{RF}_n := \{ F \mid F : \{0,1\}^n \to \{0,1\}^n \}$ of all such functions.
We can view a random function in two ways. In the combinatorial view, any function $F \in \mathsf{RF}_n$ is described by a table of $2^n$ entries; each entry is the $n$-bit string $F(x)$ indexed by $x \in \{0,1\}^n$.
In the computational view, a random function is a data structure that, on any input $x \in \{0,1\}^n$, performs the following:
- initialize a map $M$ to empty before any query
- if $M[x]$ is undefined, then sample $y \leftarrow \{0,1\}^n$ and set $M[x] := y$
- output $M[x]$
In both views, a random function needs $n \cdot 2^n$ bits to describe, and thus there are $2^{n \cdot 2^n}$ random functions in $\mathsf{RF}_n$.
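The computational (lazy-sampling) view above translates directly into code; a minimal Python sketch with illustrative names:

```python
# A lazily sampled random function from n-bit strings to n-bit strings,
# backed by a map M that starts empty before any query.
import secrets

class LazyRandomFunction:
    """On each new input x, sample M[x] uniformly; reuse it on repeated queries."""
    def __init__(self, n: int):
        self.n = n
        self.M = {}   # the map, initialized to empty

    def query(self, x: bytes) -> bytes:
        assert len(x) * 8 == self.n
        if x not in self.M:
            self.M[x] = secrets.token_bytes(self.n // 8)
        return self.M[x]

F = LazyRandomFunction(n=128)
x = bytes(16)
assert F.query(x) == F.query(x)   # consistent answers, as a fixed function would give
```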
Note: the random function is also known as random oracle in the literature.
Intuitively, a pseudo-random function (PRF) shall look similar to a random function. That is, indistinguishable by any NUPPT Turing machine that is capable of interacting with the function.
Definition: Oracle Indistinguishability
Let $\{F_n\}_n$ and $\{G_n\}_n$ be ensembles where $F_n, G_n$ are probability distributions over functions. We say that $\{F_n\}_n$ and $\{G_n\}_n$ are computationally indistinguishable if for all NUPPT machines $D$ that are given oracle access to a function, there exists a negligible function $\epsilon(\cdot)$ such that for all $n \in \mathbb{N}$,
$$\big| \Pr[F \leftarrow F_n : D^{F}(1^n) = 1] - \Pr[G \leftarrow G_n : D^{G}(1^n) = 1] \big| \le \epsilon(n).$$
Note: $D^F$ denotes that the TM $D$ may interact with the function $F$ through black-box input and output; each input-output pair takes time to read/write, but computing $F$ itself takes zero time.
It is easy to verify that oracle indistinguishability satisfies “closure under efficient operations”, the Hybrid Lemma, and the Prediction Lemma.
Also notice that we can transform a distribution of oracle functions to a distribution of strings using an efficient oracle operation, and in that case, the oracle indistinguishability translates into the comp. indistinguishability of strings (see CPA-secure encryption below).
Definition: Pseudo-random Functions (PRFs)
A family of functions $\{ f_s : \{0,1\}^{|s|} \to \{0,1\}^{|s|} \}_{s \in \{0,1\}^*}$ is pseudo-random if
- (Easy to compute): $f_s(x)$ can be computed by a PPT algorithm that is given input $s$ and $x$.
- (Pseudorandom): $\{ s \leftarrow U_n : f_s \}_n \approx \{ F \leftarrow \mathsf{RF}_n : F \}_n$.
Note: similar to a PRG, the seed $s$ is not revealed to the distinguisher (otherwise it is trivial to distinguish).
Remark The above definition requires both the input and output length to be $n$, the seed length. However, the input and output lengths of a PRF can be parameterized. For any functions $m(\cdot), \ell(\cdot)$, consider the family of functions
$$\{ f_s : \{0,1\}^{m(|s|)} \to \{0,1\}^{\ell(|s|)} \}_{s \in \{0,1\}^*}.$$
We also call this a family of PRFs if it is easy to compute (in time polynomial in $|s|$) and pseudorandom. PRFs of various input/output lengths can be obtained from any length-$n$ PRF. It is an exercise to define such PRFs and to construct them from standard (length-$n$) PRFs.
Discuss
- Suppose $g$ is a PRG. Is $g$ a PRF?
- Why must a PRF be a keyed function?
- The AES encryption is a deterministic algorithm that takes a 256-bit key $k$ and an arbitrary-length message $m$. (We omit the initial vector and the block modes and just use the default.) Is $\mathsf{AES}_k(\cdot)$ a PRF?
Secure Encryption Scheme
Perfect secrecy considers that the adversary gets the ciphertext only (but nothing else). However, there are other natural adversarial models in practical scenarios.
- Known plaintext attack: The adversary may get to see pairs of the form $(m, \mathsf{Enc}_k(m))$.
- Chosen plaintext attack, CPA: The adversary gets access to an encryption oracle before and after selecting the challenge messages.
- Chosen ciphertext attack, CCA1: The adversary has access to an encryption oracle and to a decryption oracle before selecting the messages. [the "lunch-time attack", Naor and Yung]
- Chosen ciphertext attack, CCA2: This is just like a CCA1 attack except that the adversary also has access to the decryption oracle after selecting the messages; however, it is not allowed to decrypt the challenge ciphertext. See Rackoff and Simon, Crypto 1991.
We formalize CPA-security next (but leave CCA1/CCA2 later in authentication).
Definition: Chosen-Plaintext-Attack Encryption (CPA)
Let $\Pi = (\mathsf{Gen}, \mathsf{Enc}, \mathsf{Dec})$ be an encryption scheme. For any NUPPT adversary $A$, for any $n \in \mathbb{N}$ and $b \in \{0,1\}$, define the experiment $\mathsf{EXP}^b(\Pi, A, n)$ to be:
Experiment $\mathsf{EXP}^b(\Pi, A, n)$:
- $k \leftarrow \mathsf{Gen}(1^n)$
- $(m_0, m_1, \mathsf{state}) \leftarrow A^{\mathsf{Enc}_k(\cdot)}(1^n)$
- $c \leftarrow \mathsf{Enc}_k(m_b)$
- Output $A^{\mathsf{Enc}_k(\cdot)}(c, \mathsf{state})$
Then we say $\Pi$ is CPA secure if for all NUPPT $A$,
$$\{ \mathsf{EXP}^0(\Pi, A, n) \}_n \approx \{ \mathsf{EXP}^1(\Pi, A, n) \}_n.$$
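A minimal Python sketch of the experiment $\mathsf{EXP}^b$ as a game between a challenger and an adversary; the adversary interface (`choose`, `guess`) and the toy scheme used to exercise it are illustrative assumptions, not part of the definition:

```python
# The experiment EXP^b: the challenger runs Gen/Enc, the adversary gets the Enc oracle.
def exp_b(gen, enc, adversary, n, b):
    k = gen(n)
    enc_oracle = lambda m: enc(k, m)
    m0, m1, state = adversary.choose(enc_oracle, n)   # A picks the challenge pair
    c = enc_oracle(m0 if b == 0 else m1)              # challenger encrypts m_b
    return adversary.guess(enc_oracle, c, state)      # A outputs a bit

# A trivial adversary and an (insecure) identity "scheme", only to exercise the game.
class GuessByLooking:
    def choose(self, oracle, n):
        return bytes(n // 8), bytes([0xFF]) * (n // 8), None
    def guess(self, oracle, c, state):
        return int(c != bytes(len(c)))   # distinguishes perfectly when Enc is the identity

print(exp_b(lambda n: b"", lambda k, m: m, GuessByLooking(), 128, 0))  # 0
print(exp_b(lambda n: b"", lambda k, m: m, GuessByLooking(), 128, 1))  # 1
```

CPA security asks exactly that, for every efficient adversary, the outputs of the two experiments ($b = 0$ and $b = 1$) are computationally indistinguishable; the identity scheme above clearly fails this.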
Note: the experiment is often equivalently described as "the adversary $A$ interacts with a challenger $C$, where $C$ performs all the steps that do not belong to $A$ (such as $\mathsf{Gen}$, computing $\mathsf{Enc}_k(m_b)$, and answering $A$'s queries to the oracle $\mathsf{Enc}_k(\cdot)$)".
Compared to Shannon/perfect secrecy, what are the differences?
- the adversary is computationally bounded
- oracle access to $\mathsf{Enc}_k(\cdot)$ before choosing $(m_0, m_1)$
- oracle access after receiving the challenge ciphertext
- the adversary chooses the two challenge messages $(m_0, m_1)$
Suppose that we have a secure encryption scheme even without the CPA oracle, but the key is shorter than the message. Can we get a PRG/PRF? Can we get a OWF?
Discuss In the above experiment $\mathsf{EXP}^b$, the adversary chooses only one pair of challenge messages (and then has to distinguish the ciphertext). But there are other alternative definitions.
- We can define the experiment so that $A$ chooses many pairs $(m_0^1, m_1^1), \ldots, (m_0^q, m_1^q)$ and then has to distinguish the sequence of ciphertexts $(\mathsf{Enc}_k(m_b^1), \ldots, \mathsf{Enc}_k(m_b^q))$.
- Moreover, we can also define the experiment to allow $A$ to query the oracle between the pairs so that each pair is chosen adaptively.
What’s the difference between the definitions?
Theorem: CPA-Secure Encryption from PRF
Let $\{f_s\}_{s \in \{0,1\}^*}$ be a family of PRFs. Then the following is a CPA-secure encryption scheme (for $n$-bit messages).
- $\mathsf{Gen}(1^n)$: sample $k \leftarrow \{0,1\}^n$ and output $k$.
- $\mathsf{Enc}_k(m)$: given input $m \in \{0,1\}^n$, sample $r \leftarrow \{0,1\}^n$, and then output $c := (r,\; f_k(r) \oplus m)$.
- $\mathsf{Dec}_k(c)$: given input $c = (r, c')$, output $m := f_k(r) \oplus c'$.
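A minimal Python sketch of this construction; HMAC-SHA256 is used only as a stand-in keyed function for $f_k$ (an assumption for illustration), while the theorem assumes an arbitrary PRF family:

```python
# Enc_k(m) = (r, f_k(r) XOR m) for a fresh random r; Dec recomputes f_k(r).
import hmac, hashlib, secrets

def f(k: bytes, r: bytes) -> bytes:
    """Stand-in for the PRF f_k(r) with 32-byte output (illustration only)."""
    return hmac.new(k, r, hashlib.sha256).digest()

def gen(n_bytes: int = 32) -> bytes:
    return secrets.token_bytes(n_bytes)

def enc(k: bytes, m: bytes) -> tuple[bytes, bytes]:
    assert len(m) == 32                 # one block; longer messages need more blocks
    r = secrets.token_bytes(32)         # fresh randomness for each encryption
    pad = f(k, r)
    return r, bytes(a ^ b for a, b in zip(pad, m))

def dec(k: bytes, c: tuple[bytes, bytes]) -> bytes:
    r, body = c
    pad = f(k, r)
    return bytes(a ^ b for a, b in zip(pad, body))

k = gen()
m = b"thirty-two bytes of plaintext!!!"   # 32 bytes
assert dec(k, enc(k, m)) == m
```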
The correctness and efficiency of the construction follows from PRF directly. It remains to prove CPA security.
To show $\{\mathsf{EXP}^0\}_n$ and $\{\mathsf{EXP}^1\}_n$ are comp. ind., we define hybrid experiments as follows.
Hybrid $H^b(\Pi, A, n)$:
Sample a truly random function $F \leftarrow \mathsf{RF}_n$, and then let $\mathsf{Enc}^F$ be the following oracle:
$$\mathsf{Enc}^F(m) := (r,\; F(r) \oplus m),$$
where $r \leftarrow \{0,1\}^n$ is sampled uniformly (fresh for each call). Run the experiment as in $\mathsf{EXP}^b$ but with $\mathsf{Enc}^F$ in place of $\mathsf{Enc}_k$:
- Output $A^{\mathsf{Enc}^F(\cdot)}(c, \mathsf{state})$, where $(m_0, m_1, \mathsf{state}) \leftarrow A^{\mathsf{Enc}^F(\cdot)}(1^n)$ and $c \leftarrow \mathsf{Enc}^F(m_b)$
By oracle indistinguishability of $f_s$ (for uniform $s$) and $F \leftarrow \mathsf{RF}_n$ and closure under efficient operations, we have $\{\mathsf{EXP}^0\}_n \approx \{H^0\}_n$ and $\{\mathsf{EXP}^1\}_n \approx \{H^1\}_n$. (Notice that $f_s$ and $F$ are oracle ind., but $\mathsf{EXP}^b$ and $H^b$ are comp. ind. as distributions over strings.)
Hence, it suffices to prove that the ensembles $\{H^0\}_n$ and $\{H^1\}_n$ are ind. They seem to be identically distributed (as in OTP). However, there is a difference: $A$ gets oracle access to $\mathsf{Enc}^F$ (before and after choosing $m_0, m_1$), and the oracle could sample the same $r$ in the challenge ciphertext and in another oracle access. Fortunately, hitting the same $r$ twice in polynomial time happens with negligible probability.
We formally prove it next. Define $R$ to be the set
$$R := \{ r : r \text{ was sampled by the oracle } \mathsf{Enc}^F \text{ when answering } A\text{'s queries} \},$$
and let $r^*$ be the random variable sampled for the challenge ciphertext $c$. We want to show that $\Pr[r^* \in R]$ is negligible for all NUPPT $A$. Let $\mathsf{Repeat}$ be the event $r^* \in R$ for short.
$$\Pr[H^b = 1] = \Pr[H^b = 1 \wedge \mathsf{Repeat}] + \Pr[H^b = 1 \wedge \neg\mathsf{Repeat}],$$
where $\Pr[H^b = 1 \wedge \mathsf{Repeat}] \le \Pr[\mathsf{Repeat}] \le |R| / 2^n$. We also have that, conditioned on $\neg\mathsf{Repeat}$, the challenge $F(r^*) \oplus m_b$ is a fresh one-time pad and thus $H^0$ and $H^1$ are identically distributed; thus
$$\big| \Pr[H^0 = 1] - \Pr[H^1 = 1] \big| \le 2 \Pr[\mathsf{Repeat}] \le 2|R| / 2^n.$$
Given that $|R|$ is polynomial in $n$ for any NUPPT $A$, it follows that the difference is negligible in $n$, which concludes the proof.
Notice that we could have constructed an efficient CPA-secure encryption from PRG, but using a PRF significantly simplified the construction and the proof.
GGM Construction of PRF
Theorem: Construct PRF from PRG [Goldreich-Goldwasser-Micali 84]
If a pseudorandom generator exists, then pseudorandom functions exist.
We have shown that a PRG with 1-bit expansion implies a PRG with any polynomial expansion. So, let $g : \{0,1\}^n \to \{0,1\}^{2n}$ be a length-doubling PRG, i.e., $|g(s)| = 2|s|$. Also, define $g_0(s)$ and $g_1(s)$ to be
$$g_0(s) := \text{the first } n \text{ bits of } g(s), \qquad g_1(s) := \text{the last } n \text{ bits of } g(s),$$
where $n = |s|$ is the input length.
We define $f_s$ as follows to be a PRF:
$$f_s(x_1 x_2 \cdots x_n) := g_{x_n}\big( \cdots g_{x_2}\big( g_{x_1}(s) \big) \cdots \big).$$
That is, we evaluate $g$ on $s$, but keep only one side of the output depending on $x_1$, then keep applying $g$ on the kept side and choose the side by $x_2$, and so on.
This constructs a binary tree of depth $n$ whose root is labeled by $s$. The intuition is from expanding the 1-bit PRG, but now we want that any sub-string of the expansion can be efficiently computed. (We CS people love binary trees.) Clearly, $f_s$ is easy to compute, and we want to prove it is pseudorandom.
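A minimal Python sketch of the GGM tree evaluation; the stand-in length-doubling `g` (built from SHA-256) is only to make it runnable, not a proven PRG:

```python
# GGM evaluation: walk down the binary tree, at each level applying the
# length-doubling PRG g and keeping the half selected by the next input bit.
import hashlib

def g(s: bytes) -> bytes:
    """Stand-in length-doubling PRG: |g(s)| = 2|s| (illustration only)."""
    assert len(s) == 16
    return hashlib.sha256(b"L" + s).digest()[:16] + hashlib.sha256(b"R" + s).digest()[:16]

def f(s: bytes, x: str) -> bytes:
    """GGM PRF f_s(x): x is a bit string; choose g_0 / g_1 according to each bit."""
    v = s
    for bit in x:
        y = g(v)
        v = y[:16] if bit == "0" else y[16:]   # keep g_0(v) or g_1(v)
    return v

seed = bytes(16)
print(f(seed, "10110011").hex())   # the leaf reached by the path 10110011
```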
Discuss Why not use 2, 4, 8, or more seeds on 2, 4, 8, or more trees?
There are $2^n$ leaves in the tree, too many for us to use the "swap one more PRG output to uniform in each hybrid" technique as in expanding a PRG. The trick is that the distinguisher $D$ can only query at most polynomially many times since $D$ is poly-time. Each query corresponds to a path in the binary tree, and there are at most polynomially many nodes in all queries. Hence, we will swap the evaluations of $g$ from the root to the leaves of the tree and from the first query to the last query.
Note: swapping each instance of $g$ (for each visited node) is a reduction that runs $D$ to distinguish one instance of $g(U_n)$ from $U_{2n}$; therefore, we swap exactly one instance of $g$ in each hybrid.
More formally, assume for contradiction (AC) that there exist a NUPPT $D$ and a polynomial $p$ s.t. for infinitely many $n$, $D$ distinguishes $\{s \leftarrow U_n : f_s\}$ from RF w.p. at least $1/p(n)$ (in the oracle interaction). We want to construct $D'$ that distinguishes $g(U_n)$ from $U_{2n}$. We define hybrid oracles $H^i_n$ for $i = 0, 1, \ldots, n$ as follows; on each query $x = x_1 x_2 \cdots x_n$:
- the map $M$ is initialized to empty (before any query)
- if the prefix $x_1 \cdots x_i \notin M$, then sample $y \leftarrow \{0,1\}^n$ and set $M[x_1 \cdots x_i] := y$
- output $g_{x_n}\big( \cdots g_{x_{i+2}}\big( g_{x_{i+1}}\big( M[x_1 \cdots x_i] \big) \big) \cdots \big)$
Notice that $H^i_n$ is a function defined using the computational view.
Let $H^i$ be the distribution of $H^i_n$ for short. We have $H^0 = \{s \leftarrow U_n : f_s\}$ and $H^n = \mathsf{RF}_n$, but there are still too many swaps of $g$ between $H^i$ and $H^{i+1}$ (up to $2^i$ nodes at level $i$). The key observation is that, given $D$ is PPT, we know a polynomial $t(n)$ that is the running time of $D$ on $1^n$, and then we just need to swap at most $t(n)$ instances of $g$. That is, define sub-hybrids $H^{i,j}_n$ for $j = 0, 1, \ldots, t(n)$:
- the map $M$ is initialized to empty
- if the prefix $x_1 \cdots x_{i+1} \notin M$, then depending on the "number of queries" that have been made to $M$ at level $i+1$ so far, including the current query, do the following: if this number is at most $j$, sample $y_0, y_1 \leftarrow \{0,1\}^n$ independently, set
$$M[x_1 \cdots x_i 0] := y_0,$$
and set
$$M[x_1 \cdots x_i 1] := y_1$$
(as in $H^{i+1}_n$); otherwise, sample $y \leftarrow \{0,1\}^n$ and set $M[x_1 \cdots x_i 0] := g_0(y)$ and $M[x_1 \cdots x_i 1] := g_1(y)$ (as in $H^i_n$)
- output $g_{x_n}\big( \cdots g_{x_{i+2}}\big( M[x_1 \cdots x_{i+1}] \big) \cdots \big)$
We have $H^{i,0}_n = H^i_n$. Moreover, for any $D$ that runs in time $t(n)$, $H^{i,t(n)}_n$ behaves exactly as $H^{i+1}_n$ in all of $D$'s queries (their combinatorial views differ, but their computational views are identical for $t(n)$ queries). Now we have $n \cdot t(n)$ hybrids, so we can construct $D'$; given input $y \in \{0,1\}^{2n}$, $D'$ performs:
- sample $i \leftarrow \{0, \ldots, n-1\}$ and $j \leftarrow \{1, \ldots, t(n)\}$ uniformly at random
- define the oracle $\tilde{H}$ such that $\tilde{H}$ is similar to $H^{i,j-1}_n$ but "injects" $y$ into the map $M$ in the $j$-th query at level $i+1$ if the prefix $x_1 \cdots x_{i+1} \notin M$: parse $y = y_0 \| y_1$ and set $M[x_1 \cdots x_i 0] := y_0$ and $M[x_1 \cdots x_i 1] := y_1$. (This oracle is constructable and computable only lazily in the next step when queries come from $D$.)
- run $D^{\tilde{H}}(1^n)$ and output its result, that is, running $D$ on input $1^n$ while providing $D$ with oracle queries answered by $\tilde{H}$
It remains to calculate the probabilities, namely, given (AC), $D'$ distinguishes $g(U_n)$ from a uniformly sampled $2n$-bit string w.p. at least $1/(n \cdot t(n) \cdot p(n))$, a contradiction. The calculation is almost identical to the proof of PRG expansion and is left as an exercise.