Construct PRG from OWF
We have shown that given a PRG, we can construct a CPA-secure encryption. We also showed that PRG, PRF, and CPA-secure encryption each imply OWF, so OWFs are necessary if we want cryptography. We also have a universal OWF that is as strong as any candidate OWF. It remains to construct a PRG from any OWF. That construction implies that OWF is necessary and sufficient for PRG, PRF, and encryption, and that we have an encryption (as well as a PRG and a PRF) that is as strong as any candidate (via the universal OWF).
Hard-Core Bits from any OWF
So far we do not yet have a candidate construction of a PRG (even one with 1-bit expansion). We will next construct a PRG from one-way permutations (which is easier than from general OWFs).
The construction of a PRG relies on two properties of a OWF $f$:
The output $f(x)$ must be sufficiently random (given that the input $x$ is uniform).
(If the random variable $f(x)$ is constant or supported on a small set for most $x$, then we can invert $f$ with good probability.)
Since $f$ is hard to invert, there must be some bits of $x$ that are hard to guess even when $f(x)$ is given.
A sufficiently random $f(x)$ can still be easy to invert (such as the identity function). How many bits of $x$ are hard to guess for any polynomial-time adversary? There must be $\omega(\log n)$ of them (otherwise, they can all be guessed correctly w.p. $1/\mathrm{poly}(n)$).
Suppose that $f$ is a OWP; then $f(x)$ is “fully random” when $x$ is uniform. That is good in terms of the first property.
Definition: One-way Permutations
An OWF $f : \{0,1\}^n \to \{0,1\}^n$ (for all $n \in \mathbb{N}$) is called a one-way permutation if $f$ is a bijection.
To utilize the second property, we want to extract some of the “hard bits” from the input $x$. If we can get one extra hard bit, we have a construction of PRG by concatenating the output of the OWP with the extra hard bit. The hard bit must be determined by the output mathematically (for a permutation, $x$ and hence the bit is fixed by $f(x)$), but it shall be hard to compute even given the output. The hard bit is formalized below.
Definition: Hard-core bits
A predicate $h : \{0,1\}^* \to \{0,1\}$ is a hard-core predicate for $f$ if $h$ is efficiently computable given $x$, and for any NUPPT adversary $\mathcal{A}$, there exists a negligible function $\epsilon$ so that for all $n \in \mathbb{N}$,
$$\Pr[x \leftarrow \{0,1\}^n : \mathcal{A}(1^n, f(x)) = h(x)] \le \frac{1}{2} + \epsilon(n).$$
Notice that here we are not restricting $f$ to be a OWP or even a OWF. (If $f$ never reads some bit of its input, then that bit is hard-core. However, such an $f$ is not a permutation and is more challenging for us. We will focus on OWPs in this section.)
That is indeed the case for some OWPs, such as RSA. If we construct a OWP from the RSA assumption, then the least significant bit of $x$ is such a “hard to guess” bit, and then we can obtain a PRG from the RSA assumption.
Theorem: PRG from OWP and hard-core predicate
Suppose that $f : \{0,1\}^n \to \{0,1\}^n$ is a OWP and $h$ is a hard-core predicate for $f$ (for all $n \in \mathbb{N}$). Then, $g : \{0,1\}^n \to \{0,1\}^{n+1}$ defined below is a PRG:
$$g(x) := f(x) \| h(x).$$
(The proof is a standard reduction: if there exists a NUPPT distinguisher $D$ against $g$, then we can build a NUPPT adversary that predicts $h(x)$ given $f(x)$ by running $D$, contradicting that $h$ is hard-core.)
However, we want to obtain a PRG from any OWP or any OWF (without depending on specific assumptions), and it is unfortunately unclear whether every OWP has such a hard-core predicate.
Fortunately, Goldreich and Levin showed that for any OWF (or OWP) $f$, we can obtain another OWF (or OWP) $g$ for which we know a hard-core predicate. The intuition is: given that $f(x)$ is hard to invert, the preimage $x$ must contain at least $\omega(\log n)$ bits that are hard to guess (otherwise, a poly-time adversary can invert by enumeration). The hard-core predicate formalizes those bits. Even though we do not know which bits are hard, we can take the parity of a random subset of bits and hope to include at least one of them.
Theorem: Goldreich-Levin, Hard-Core Lemma
Let $f : \{0,1\}^n \to \{0,1\}^n$ (for all $n \in \mathbb{N}$) be a OWF. Define the functions $g$ and $h$ to be the following:
$$g(x, r) := (f(x), r) \quad\text{and}\quad h(x, r) := \langle x, r \rangle \quad\text{for } x, r \in \{0,1\}^n,$$
where $\langle x, r \rangle$ denotes the inner product modulo 2, i.e., $\langle x, r \rangle := \sum_i x_i r_i \bmod 2$ for any $x, r \in \{0,1\}^n$. Then, $g$ is a OWF and $h$ is a hard-core predicate for $g$.
Note: in the above definition of $g$ and $h$, viewing $r$ as the indicator vector of a random subset $S \subseteq [n]$, the theorem says that “even when we are given the subset $S$ and $f(x)$, because $f$ is hard to invert, we still do not know the parity of $x$ over $S$.” Since the subset $S$ is chosen uniformly, even though we do not know where the hard bits are, $S$ hits some “hard bits” with overwhelming probability. This is indeed consistent with the earlier intuition.
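To make the construction concrete, here is a minimal runnable sketch. All names are illustrative assumptions, not part of the lemma: `toy_f` is a stand-in placeholder (a bit rotation, certainly not one-way) used only so the example executes.

```python
# Toy demonstration of the Goldreich-Levin construction g(x, r) = (f(x), r)
# and h(x, r) = <x, r> mod 2. The "OWF" f here is a stand-in (NOT one-way),
# chosen only to make the example runnable; any length-preserving f fits.
import random

def inner_product_mod2(x, r):
    """<x, r> mod 2 for bit-lists x, r of equal length."""
    return sum(xi & ri for xi, ri in zip(x, r)) % 2

def toy_f(x):
    """Placeholder for a OWF: rotate the bits left by one (not one-way!)."""
    return x[1:] + x[:1]

def g(x, r):
    return (toy_f(x), r)

def h(x, r):
    return inner_product_mod2(x, r)

n = 8
x = [random.randrange(2) for _ in range(n)]
r = [random.randrange(2) for _ in range(n)]
y, r_out = g(x, r)
b = h(x, r)
assert b in (0, 1) and r_out == r and len(y) == n
```

The point of the sketch is only the shape of $g$ and $h$: the output of $g$ reveals $r$ in the clear, and the single bit $h(x, r)$ is what the lemma claims is unpredictable.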
Clearly $g$ is a OWF, and $h$ is easy to compute. The main challenge is to prove that $h$ is hard-core. We assume for contradiction that $h$ is not hard-core (the Full Assumption below), and then, to reach a contradiction, we construct another adversary $\mathcal{B}$ that inverts $f$.
Full Assumption:
There exist a NUPPT $\mathcal{A}$ and a polynomial $p$ such that for infinitely many $n \in \mathbb{N}$,
$$\Pr[x, r \leftarrow \{0,1\}^n : \mathcal{A}(1^n, f(x), r) = \langle x, r \rangle] \ge \frac{1}{2} + \frac{1}{p(n)}.$$
The construction and analysis of $\mathcal{B}$ are involved, so we will start from a couple of warmups.
Warmup Assumption 1:
There exists a NUPPT $\mathcal{A}$ such that for infinitely many $n \in \mathbb{N}$ and for all $x \in \{0,1\}^n$,
$$\Pr[r \leftarrow \{0,1\}^n : \mathcal{A}(f(x), r) = \langle x, r \rangle] = 1.$$
To invert $y = f(x)$, the construction of $\mathcal{B}$ is simple:
- For $i = 1, \dots, n$, do the following
  - Let $e_i$ be the $n$-bit string whose $i$-th bit is 1 (0 otherwise)
  - Run $x'_i \leftarrow \mathcal{A}(y, e_i)$
- Output $x' := (x'_1, \dots, x'_n)$

To see why $\mathcal{B}$ inverts $y$, observe that $\langle x, e_i \rangle = x_i$, where $x_i$ denotes the $i$-th bit of $x$. Hence, $\mathcal{B}$ succeeds w.p. 1, a contradiction.
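This warmup reduction can be sketched directly. The oracle below is a perfect predictor that “cheats” by closing over $x$ (in the real reduction it would be the assumed adversary, which only sees $y = f(x)$); the names are illustrative.

```python
# Sketch of the Warmup-1 inversion: with a perfect oracle A(y, r) = <x, r>,
# querying the unit vectors e_i reads off each bit x_i directly, since
# <x, e_i> = x_i. The oracle cheats by knowing x; in the real reduction it
# is the hypothetical adversary that only sees y = f(x).
import random

def make_perfect_oracle(x):
    def A(y, r):  # y is ignored: a perfect predictor of <x, r> mod 2
        return sum(xi & ri for xi, ri in zip(x, r)) % 2
    return A

def invert_warmup1(A, y, n):
    x_prime = []
    for i in range(n):
        e_i = [1 if j == i else 0 for j in range(n)]
        x_prime.append(A(y, e_i))
    return x_prime

n = 8
x = [random.randrange(2) for _ in range(n)]
A = make_perfect_oracle(x)
assert invert_warmup1(A, None, n) == x  # recovers x with probability 1
```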
Note: the above assumed “for all $x$” and “w.p. $1$”, both of which are much stronger than what the Full Assumption gives.
Warmup Assumption 2:
There exist a NUPPT $\mathcal{A}$ and a polynomial $p$ such that for infinitely many $n \in \mathbb{N}$,
$$\Pr[x, r \leftarrow \{0,1\}^n : \mathcal{A}(f(x), r) = \langle x, r \rangle] \ge \frac{3}{4} + \frac{1}{p(n)}.$$
We would like to use $e_i$ as before, but now $\mathcal{A}$ may always fail whenever the suffix of its input is $e_i$. Hence, we randomize $e_i$ into the pair $(r, r \oplus e_i)$ and then recover the inner product (this is also called “self correction”).
Fact
For all $i \in [n]$ and any strings $x, r \in \{0,1\}^n$, it holds that $\langle x, r \rangle \oplus \langle x, r \oplus e_i \rangle = x_i$.
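A quick mechanical check of this fact (names illustrative): the two inner products differ exactly in the term contributed by position $i$.

```python
# Self-correction identity: for any x, r and unit vector e_i,
# <x, r> XOR <x, r XOR e_i> = x_i, since flipping bit i of r toggles
# exactly the term x_i in the parity.
import random

def ip2(x, r):
    return sum(a & b for a, b in zip(x, r)) % 2

n = 6
x = [random.randrange(2) for _ in range(n)]
r = [random.randrange(2) for _ in range(n)]
for i in range(n):
    r_flip = r[:i] + [r[i] ^ 1] + r[i + 1:]   # r XOR e_i
    assert ip2(x, r) ^ ip2(x, r_flip) == x[i]
```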
To invert $y = f(x)$, the construction of $\mathcal{B}$ is below:
- For each $i = 1, \dots, n$, do
  - For $j = 1$ to $m$, do
    - Sample $r_j \leftarrow \{0,1\}^n$
    - Run $z_j \leftarrow \mathcal{A}(y, r_j) \oplus \mathcal{A}(y, r_j \oplus e_i)$
  - Let $x'_i$ be the majority of $\{z_j\}_{j \in [m]}$
- Output $x' := (x'_1, \dots, x'_n)$
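The majority-vote inversion above can be exercised end-to-end. The sketch assumes a cheating oracle that answers $\langle x, r \rangle$ correctly with probability $0.85 > 3/4$; the error rate and trial count are illustrative parameters, not part of the proof.

```python
# Sketch of the Warmup-2 inversion with self-correction: the oracle answers
# <x, r> correctly only w.p. ~0.85 (> 3/4), so each bit x_i is recovered as
# the majority over many trials of A(y, r) XOR A(y, r XOR e_i). The oracle
# cheats by knowing x; parameters are illustrative only.
import random

def noisy_oracle(x, err=0.15):
    def A(y, r):
        b = sum(xi & ri for xi, ri in zip(x, r)) % 2
        return b ^ 1 if random.random() < err else b
    return A

def invert_warmup2(A, y, n, m=200):
    x_prime = []
    for i in range(n):
        votes = 0
        for _ in range(m):
            r = [random.randrange(2) for _ in range(n)]
            r_flip = r[:i] + [r[i] ^ 1] + r[i + 1:]   # r XOR e_i
            votes += A(y, r) ^ A(y, r_flip)           # equals x_i when both calls are right
        x_prime.append(1 if 2 * votes > m else 0)     # majority vote
    return x_prime

random.seed(1)
n = 8
x = [random.randrange(2) for _ in range(n)]
assert invert_warmup2(noisy_oracle(x), None, n) == x
```

Each vote is correct w.p. $0.85^2 + 0.15^2 \approx 0.745 > 1/2$, so with $m = 200$ trials the majority is wrong only with astronomically small probability per bit.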
To prove that $\mathcal{B}$ succeeds with high probability, we first prove that there are many good $x$’s.
Good instances are plenty.
Define $G$ to be the set of good instances,
$$G := \left\{ x \in \{0,1\}^n : \Pr[r \leftarrow \{0,1\}^n : \mathcal{A}(f(x), r) = \langle x, r \rangle] \ge \frac{3}{4} + \frac{\epsilon(n)}{2} \right\},$$
where $\epsilon(n) := 1/p(n)$.
If the Warmup Assumption 2 holds, then $|G| \ge \frac{\epsilon(n)}{2} \cdot 2^n$. This is a standard averaging argument (essentially Markov's inequality). Suppose not, i.e., $|G| < \frac{\epsilon(n)}{2} \cdot 2^n$. Then,
$$\Pr_{x, r}\left[\mathcal{A}(f(x), r) = \langle x, r \rangle\right] < \Pr[x \in G] \cdot 1 + \Pr[x \notin G] \cdot \left(\frac{3}{4} + \frac{\epsilon}{2}\right) < \frac{\epsilon}{2} + \frac{3}{4} + \frac{\epsilon}{2} = \frac{3}{4} + \epsilon,$$
which contradicts Warmup Assumption 2.
Now, suppose that $x \in G$. By a union bound, $\mathcal{A}$ fails on $(y, r_j)$ or $(y, r_j \oplus e_i)$ w.p. at most $2 \cdot (\frac{1}{4} - \frac{\epsilon}{2}) = \frac{1}{2} - \epsilon$. So, for any fixed $i$, $z_j = x_i$ w.p. at least $\frac{1}{2} + \epsilon$ for each $j$ independently. By the Chernoff bound, the majority of $\{z_j\}$ is $x_i$ w.p. at least $1 - e^{-\Omega(\epsilon^2 m)}$. Choosing $m := n/\epsilon^2$, the probability is at least $1 - 2^{-\Omega(n)}$. By a union bound over all $i$, $\mathcal{B}$ recovers $x$ except w.p. $n \cdot 2^{-\Omega(n)}$.
Finally, even if $\mathcal{B}$ fails for all $x \notin G$, $\mathcal{B}$ succeeds w.p. at least $\frac{\epsilon}{2} \cdot (1 - n \cdot 2^{-\Omega(n)}) \ge \frac{\epsilon}{4}$ over a uniformly sampled $x$, a contradiction.
To complete the full proof, we want to lower the assumed success probability from $\frac{3}{4} + \epsilon$ to $\frac{1}{2} + \epsilon$. The “good set” argument still holds when $\frac{3}{4}$ is replaced by $\frac{1}{2}$ (since it is a simple averaging). The main challenge compared to the previous proof is:
- It is too weak to take the union bound over inverting both $(y, r_j)$ and $(y, r_j \oplus e_i)$. For $x \in G$, the probability that both answers are correct is lower bounded only by $\epsilon$, and that is too low for the subsequent majority vote and Chernoff bound.
The first idea is to guess the inner product $\langle x, r_j \rangle$ uniformly at random, which is a correct guess w.p. $\frac{1}{2}$. Supposing that $m$ is a constant, we can guess all $m$ inner products; all guesses are correct w.p. $2^{-m}$; then, conditioned on correct guesses, each $z_j := g_j \oplus \mathcal{A}(y, r_j \oplus e_i)$ is correct w.p. at least $\frac{1}{2} + \frac{\epsilon}{2}$ (when $x$ is good), and then we can continue with the Chernoff bound and finish the proof. For large $m$, however, the guesses are too many, and thus the success probability $2^{-m}$ is negligible in $n$.
The second idea is to use pairwise independent guesses. In particular, we have Chebyshev's inequality for the measure concentration of pairwise independent random variables (instead of the Chernoff bound, which requires full independence).
Theorem: Chebyshev's inequality
Let $X_1, \dots, X_m$ be pairwise independent random variables such that for all $i$, $0 \le X_i \le 1$. Then,
$$\Pr\left[\left| X - \mu \right| \ge \delta\right] \le \frac{\mathrm{Var}[X]}{\delta^2} \le \frac{m}{\delta^2},$$
where $X := \sum_i X_i$ and $\mu := \mathbb{E}[X]$.
[Ps, p189]
We can then reduce the number of guessed bits from $m$ to $\log m$ (so that all guesses are correct w.p. $\frac{1}{m}$ instead of $2^{-m}$).
Fact: Sampling pairwise independent random strings
For any $m \in \mathbb{N}$, let $b_1, \dots, b_{\log m}$ be $n$-bit strings independently sampled uniformly at random (we abuse notation and round $\log m$ up to the next integer). Define the strings $r_I$ for each nonempty subset $I \subseteq [\log m]$ to be
$$r_I := \bigoplus_{i \in I} b_i.$$
The random variables $r_1, \dots, r_{m-1}$ are pairwise independent, where $r_j$ denotes $r_I$ such that $I$ is the $j$-th nonempty subset of $[\log m]$.
(The proof is left as exercise.)
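A minimal sketch of the subset-XOR expansion, together with an exhaustive check that the derived bits behave pairwise independently: the variance of their sum equals the sum of the individual variances (all covariances vanish), which is exactly the property Chebyshev's inequality needs. Sizes are toy parameters.

```python
# Expand log(m) fully random seeds into m-1 pairwise independent strings,
# one per nonempty subset I, via r_I = XOR of the seeds indexed by I.
import itertools
import random

def expand_pairwise(seeds):
    """XOR-combinations over all nonempty subsets of the seeds."""
    n = len(seeds[0])
    out = []
    for size in range(1, len(seeds) + 1):
        for I in itertools.combinations(range(len(seeds)), size):
            r = [0] * n
            for i in I:
                r = [a ^ b for a, b in zip(r, seeds[i])]
            out.append(r)
    return out

def ip2(x, r):
    return sum(a & b for a, b in zip(x, r)) % 2

# k seeds yield m - 1 = 2^k - 1 strings.
k, n = 3, 4
seeds = [[random.randrange(2) for _ in range(n)] for _ in range(k)]
assert len(expand_pairwise(seeds)) == 2 ** k - 1

# Variance check: for the bits X_I = <x, r_I>, pairwise independence gives
# Var(sum) = sum of variances = 3 * 1/4. Exhausting all seed tuples for
# n = 3, k = 2 verifies this exactly (note the X_I are NOT fully independent:
# the third bit is the XOR of the first two).
n, k, x = 3, 2, [1, 0, 1]
sums = []
for seed_tuple in itertools.product(itertools.product((0, 1), repeat=n), repeat=k):
    rs = expand_pairwise([list(s) for s in seed_tuple])
    sums.append(sum(ip2(x, r) for r in rs))
mean = sum(sums) / len(sums)
var = sum((s - mean) ** 2 for s in sums) / len(sums)
assert abs(var - 3 / 4) < 1e-9
```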
Now we are ready to prove the full theorem.
Proof of Hard-Core Lemma (Goldreich-Levin, 1989)
Given the NUPPT $\mathcal{A}$ in the Full Assumption, we construct a NUPPT $\mathcal{B}$ that inverts $y := f(x)$ as follows.
Algorithm $\mathcal{B}(y)$:
- Let $m$ be a polynomial in $n$ and $1/\epsilon$ to be chosen later.
- Let $b_1, \dots, b_{\log m}$ be fully independent and $(r_1, \dots, r_{m-1})$ be pairwise independent $n$-bit random strings, as in the fact of sampling pairwise independent strings.
- Let $g_1, \dots, g_{\log m}$ be fully independent uniform bits (the guesses of $\langle x, b_1 \rangle, \dots, \langle x, b_{\log m} \rangle$), and let $(g'_1, \dots, g'_{m-1})$ be the pairwise independent bits derived from them (symmetrically as in the previous step, so that $g'_j$ is the guess of $\langle x, r_j \rangle$).
- For each $i \in [n]$,
  - For each $j \in [m-1]$,
    - Run $z_{i,j} \leftarrow g'_j \oplus \mathcal{A}(y, r_j \oplus e_i)$.
  - Let $x'_i$ be the majority of $\{z_{i,j}\}_{j \in [m-1]}$
- Output $x' := (x'_1, \dots, x'_n)$
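The whole algorithm can be exercised end-to-end on a toy example. The sketch below is illustrative only: `toy_f` is an invented placeholder permutation (not one-way), the oracle cheats by knowing $x$ and answers correctly w.p. $0.8$, and instead of getting lucky w.p. $1/m$, the demo simply tries all $2^{\log m}$ guess vectors and keeps a candidate iff it maps to $y$.

```python
# Runnable sketch of the full Goldreich-Levin reduction B: pairwise
# independent queries r_I are expanded from log(m) seeds; the log(m) inner
# products <x, b_i> are guessed (here: all guesses are enumerated, and a
# candidate is kept iff f(candidate) == y). Everything is toy-sized.
import itertools
import random

def ip2(x, r):
    return sum(a & b for a, b in zip(x, r)) % 2

def toy_f(x):                      # placeholder permutation, NOT one-way
    return x[1:] + x[:1]

def noisy_oracle(x, err=0.2):      # answers <x, r> correctly w.p. 0.8
    def A(y, r):
        b = ip2(x, r)
        return b ^ 1 if random.random() < err else b
    return A

def gl_invert(A, y, n, k=7):
    seeds = [[random.randrange(2) for _ in range(n)] for _ in range(k)]
    subsets = [I for s in range(1, k + 1)
               for I in itertools.combinations(range(k), s)]
    rs = []                        # m - 1 pairwise independent query strings
    for I in subsets:
        r = [0] * n
        for i in I:
            r = [a ^ b for a, b in zip(r, seeds[i])]
        rs.append(r)
    for guess in itertools.product((0, 1), repeat=k):  # guess each <x, b_i>
        gps = [sum(guess[i] for i in I) % 2 for I in subsets]
        cand = []
        for i in range(n):
            votes = 0
            for r, gp in zip(rs, gps):
                r_flip = r[:i] + [r[i] ^ 1] + r[i + 1:]
                votes += gp ^ A(y, r_flip)             # guess XOR A(y, r XOR e_i)
            cand.append(1 if 2 * votes > len(rs) else 0)
        if toy_f(cand) == y:       # verify the candidate preimage
            return cand
    return None

random.seed(7)
n = 6
x = [random.randrange(2) for _ in range(n)]
assert gl_invert(noisy_oracle(x), toy_f(x), n) == x
```

On the branch where every guess is correct, each vote equals $x_i$ w.p. $0.8$, and the majority over $2^7 - 1 = 127$ votes fails only with tiny probability; wrong-guess branches are filtered out by the final check $f(x') = y$.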
We begin by bounding the number of good instances $x$.
Good instances are plenty.
Define $G$ to be the set of good instances,
$$G := \left\{ x \in \{0,1\}^n : \Pr[r \leftarrow \{0,1\}^n : \mathcal{A}(f(x), r) = \langle x, r \rangle] \ge \frac{1}{2} + \frac{\epsilon(n)}{2} \right\},$$
where $\epsilon(n) := 1/p(n)$. If the Full Assumption holds, then $|G| \ge \frac{\epsilon(n)}{2} \cdot 2^n$.
(The proof is almost the same and omitted.)
We condition on the good event that $x \in G$. Next, we condition on the “lucky event” that for all $i \in [\log m]$, the guess $g_i$ equals $\langle x, b_i \rangle$, which happens w.p. $\frac{1}{m}$. That implies that for all $j \in [m-1]$, we have the correct guess $g'_j = \langle x, r_j \rangle$; that is, for any $i \in [n]$, letting $z_{i,j} := g'_j \oplus \mathcal{A}(y, r_j \oplus e_i)$, we have that
$$z_{i,j} = x_i \quad \text{whenever } \mathcal{A}(y, r_j \oplus e_i) = \langle x, r_j \oplus e_i \rangle.$$
With the conditioning, for any fixed $i$, the strings $r_j \oplus e_i$ are still pairwise independent, and similarly so are the bits $z_{i,j}$. Therefore, by Chebyshev's inequality, the majority of $\{z_{i,j}\}_j$ equals $x_i$ w.p. at least
$$1 - \frac{4}{\epsilon^2 m},$$
where we used that each $z_{i,j} = x_i$ w.p. at least $\frac{1}{2} + \frac{\epsilon}{2}$ (since $x \in G$ and $r_j \oplus e_i$ is uniform). Choosing a sufficiently large polynomial $m = \Theta(n/\epsilon^2)$, we have that each $x'_i$ is wrong w.p. at most $\frac{1}{2n}$.
The above shows that each bit $x_i$ is correctly recovered with a good marginal probability. We want all bits $x'_i$ to be correct with a good joint probability, but the different $x'_i$'s come from the same randomness $b_i$'s and $g_i$'s. (It is necessary to reuse the same $b_i$'s because we want the number of guessed bits to be strictly logarithmic in $n$.)
Fortunately, the union bound works even for dependent events. Taking the union bound over all $i \in [n]$, conditioned on $x \in G$ and all guesses $g_i$ being correct, $\mathcal{B}$ recovers every bit of $x$ except w.p. $n \cdot \frac{1}{2n} = \frac{1}{2}$. Removing the conditioned events* costs factors $\frac{\epsilon}{2}$ and $\frac{1}{m}$, but $\mathcal{B}$ still inverts $f$ w.p. at least $\frac{\epsilon}{2} \cdot \frac{1}{m} \cdot \frac{1}{2}$, which is noticeable, contradicting that $f$ is a OWF.
* For any events $A, B, C$, we have $\Pr[A] \ge \Pr[A \mid B \cap C] \cdot \Pr[B \cap C]$. The above applied this with $B$ being the event $x \in G$, $C$ being the lucky event that all guesses are correct, and $A$ being the event that $x'_i$ is correct for all $i$.
Discuss The number of bits we guessed is $\log m = O(\log \frac{n}{\epsilon})$, where $\epsilon$ depends on the (hypothetical) NUPPT $\mathcal{A}$. Since the guessed bits carry information about $x$, the proof formally implies that for any OWF, there must be $\omega(\log n)$ bits that are hard to guess (going from $f(x)$ back to $x$). Still, having an efficient and uniform attack is non-trivial. Put another way: the output of the adversary $\mathcal{A}$ gives a probabilistic bit conveying information about $x$, and the reduction is to learn $x$ by repeatedly querying $\mathcal{A}$ with carefully chosen inputs, so that the probability of finding the correct $x$ is high while the time and the number of queries are small. Hence, at a high level, the reduction is related to learning.
Discuss How far does the Hard-core Lemma extend? Suppose $f$ is a OWF, and suppose $h$ is a hard-core predicate for $f$.
- Is a OWF?
- Let , and let . Is a OWF? If so, is a hard-core predicate for ?
- Let , and let . Is a OWF? If so, is a hard-core predicate for ?
The questions are highly relevant when we want to construct PRG from any one-way function.
Min-Entropy and Leftover Hash Lemma
We will use pairwise independent hash functions. The following facts are borrowed from the book of Salil Vadhan, Pseudorandomness, cited as [V].
Definition: statistical difference
For random variables $X$ and $Y$ taking values in a universe $\mathcal{U}$, their statistical difference (also known as variation distance) is $\Delta(X, Y) := \max_{T \subseteq \mathcal{U}} \left| \Pr[X \in T] - \Pr[Y \in T] \right|$. We say that $X$ and $Y$ are $\epsilon$-close if $\Delta(X, Y) \le \epsilon$.
[V, Definition 6.2, p169]
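For small finite domains the statistical difference can be computed directly: the maximum over $T$ is attained by $T = \{u : p_X(u) > p_Y(u)\}$ and equals half the $\ell_1$ distance between the probability mass functions. The distributions below are made-up examples.

```python
# Statistical difference Delta(X, Y) = max_T |Pr[X in T] - Pr[Y in T]|
# computed as half the L1 distance between the two pmfs.
def statistical_difference(px, py):
    """px, py: dicts mapping outcomes to probabilities."""
    support = set(px) | set(py)
    return sum(abs(px.get(u, 0.0) - py.get(u, 0.0)) for u in support) / 2

uniform2 = {"00": 0.25, "01": 0.25, "10": 0.25, "11": 0.25}
skewed   = {"00": 0.50, "01": 0.25, "10": 0.25, "11": 0.00}
assert abs(statistical_difference(uniform2, skewed) - 0.25) < 1e-12
```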
Fact: (Statistically close to uniform, warmup)
Let $\epsilon \in [0, 1]$, $n \in \mathbb{N}$, and let $X$ be a random variable over $\{0,1\}^n$. If $X$ is $\epsilon$-close to $U_n$, then
$$|\mathrm{Supp}(X)| \ge (1 - \epsilon) \cdot 2^n,$$
where $\mathrm{Supp}(X) := \{x : \Pr[X = x] > 0\}$ denotes the support of $X$.
Definition: Pairwise independent hash family.
A family of functions $\mathcal{H} = \{h : \{0,1\}^n \to \{0,1\}^m\}$ is pairwise independent if the following two conditions hold when $h \leftarrow \mathcal{H}$ is a function chosen uniformly at random from $\mathcal{H}$:
- For all $x \in \{0,1\}^n$, the random variable $h(x)$ is uniform in $\{0,1\}^m$.
- For all distinct $x_1 \ne x_2 \in \{0,1\}^n$, the random variables $h(x_1)$ and $h(x_2)$ are independent.
[V, Definition 3.21, p64]
Lemma: Pairwise independent hash from linear mapping.
For any finite field $\mathbb{F}$, define $\mathcal{H}$ to be the following set:
$$\mathcal{H} := \{ h_{a,b}(x) := ax + b \mid a, b \in \mathbb{F} \}.$$
$\mathcal{H}$ is a pairwise independent hash family. We often abuse notation, denoting $(a, b)$ to be the seed and $h_{a,b}(x)$ to be the evaluation of the hash function.
[V, Construction 3.23, p65]
If $m = n$, choosing the field to be $\mathrm{GF}(2^n)$ gives a construction in which each function takes $2n$ bits to describe. If $m < n$, choosing $\mathrm{GF}(2^n)$ and chopping the output to $m$ bits is still pairwise independent.
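As a sanity check of the lemma, the sketch below uses the toy prime field $\mathbb{Z}_5$ instead of $\mathrm{GF}(2^n)$ for simplicity (any finite field works) and exhaustively verifies pairwise independence of $h_{a,b}(x) = ax + b$.

```python
# Exhaustive pairwise-independence check for the linear-map hash family
# h_{a,b}(x) = a*x + b over the toy field Z_5. For x1 != x2, the map
# (a, b) -> (h(x1), h(x2)) is a bijection, so every output pair appears
# for exactly one seed: the pair is uniform, hence pairwise independent.
import itertools

P = 5  # a toy prime, so Z_P is a field

def h(a, b, x):
    return (a * x + b) % P

for x1, x2 in itertools.permutations(range(P), 2):
    counts = {}
    for a in range(P):
        for b in range(P):
            pair = (h(a, b, x1), h(a, b, x2))
            counts[pair] = counts.get(pair, 0) + 1
    # P^2 seeds, P^2 possible pairs, each hit exactly once.
    assert len(counts) == P * P and all(c == 1 for c in counts.values())
```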
Corollary:
For any $n, m \in \mathbb{N}$, there exists a pairwise independent hash family $\mathcal{H}_{n,m} = \{h : \{0,1\}^n \to \{0,1\}^m\}$ such that each $h \in \mathcal{H}_{n,m}$ is described by $2\max(n, m)$ bits.
[V, Theorem 3.26, p66]
Definition: Min-entropy
Let $X$ be a random variable. Then the min-entropy of $X$ is:
$$H_\infty(X) := \min_x \log \frac{1}{\Pr[X = x]},$$
where all logs are base 2.
[V, Definition 6.7, p171]
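As a quick numerical companion to the definition, the sketch below computes min-entropy directly from a probability mass function; the distributions are made-up examples.

```python
# Min-entropy of a distribution given as a pmf:
# H_inf(X) = min_x log2(1 / Pr[X = x]) = -log2(max_x Pr[X = x]).
import math

def min_entropy(pmf):
    return -math.log2(max(pmf.values()))

# A uniform n-bit string has min-entropy n; a biased one has less.
uniform3 = {format(i, "03b"): 1 / 8 for i in range(8)}
assert abs(min_entropy(uniform3) - 3.0) < 1e-12

biased = {"0": 0.75, "1": 0.25}
assert abs(min_entropy(biased) - math.log2(4 / 3)) < 1e-12
```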
Example:
We say a function $f : \{0,1\}^n \to \{0,1\}^n$ is $r$-regular if it is $2^r$-to-one, i.e., for all $x$, it holds that $|f^{-1}(f(x))| = 2^r$. Let $X \leftarrow \{0,1\}^n$, and let $Y := f(X)$ (so that $H_\infty(Y) = n - r$).
Suppose someone secretly sampled $x \leftarrow \{0,1\}^n$ and computed $y := f(x)$. We have the following:
- Given nothing, we can only guess $x$ correctly w.p. $2^{-n}$.
- Given nothing, we can only guess $y$ correctly w.p. $2^{-(n-r)}$, by the min-entropy of $Y$.
- Given $y$, we can guess $x$ correctly w.p. $2^{-r}$. Thus, $r$ is viewed as the min-entropy of $x$ given $y$ (we avoid the formalization of conditional min-entropy).
Theorem: Leftover Hash Lemma
Let $n, m, k \in \mathbb{N}$ and $\epsilon > 0$. Suppose that $\mathcal{H} = \{h : \{0,1\}^n \to \{0,1\}^m\}$ is a pairwise independent family of hash functions and that $X$ is a random variable over $\{0,1\}^n$. If $H_\infty(X) \ge k$ and $m \le k - 2\log(1/\epsilon)$, then $(h, h(X))$ is $\epsilon$-close to $(h, U_m)$, where $h \leftarrow \mathcal{H}$ and $U_m$ is a uniformly random string of $m$ bits.
[V, Theorem 6.18, p179]
Corollary: Guessing the hash value
Let be a pairwise independent hash family, from -bit strings to -bit strings. For any random variable such that ,
where denotes the prefix bits of .
Another look at Goldreich-Levin.
Theorem: Leftover Hash Lemma [Mazor-Pass 2023, Lemma 3.4]
Let , and let be a random variable over . Let and a random matrix, and let
Then,
where denotes the matrix multiplication modulo 2, denotes the first bits of , and denotes the uniform distribution over .
(See [Mazor-Pass 2023] for a simple proof from Goldreich-Levin’s hard-core lemma.)
Example:
Consider a distribution such that for some . Let be the matrix sampled as in LHL, and let be defined w.r.t. as in LHL. Sample uniformly at random. Then, for any , we have that
where denotes the probability mass function of .
Example:
Consider the distribution , the random matrix , the parameter defined as the above. Then, for any , we have that the following two distributions
- and
- are -close, where and are two independent and uniformly random -bit strings.
(The proof is a simple hybrid.)
Notice that in the above example, and did not need to be the same distribution, and they didn’t even need to be independent.
Weak Pseudo-Entropy Generator (PEG)
The first step, a gap in Shannon entropy and pseudo-entropy.
Definition: weak PEG
A function is called a weak pseudo-entropy generator (PEG), if there exists a such that
- .
- There exists such that and . (This is called pseudo Shannon entropy.)
Discuss Is a weak PEG also a weak OWF?
Throughout the construction, it is easier to think that the given OWF is a $2^r$-to-one mapping for some known $r$.
Definition: $r$-regular OWFs
Let $r \in \mathbb{N}$. A OWF $f$ is called $r$-regular iff for all $x$, it holds that $|f^{-1}(f(x))| = 2^r$.
Theorem: Weak PEG from OWF
Let for all be a OWF, let be a pairwise independent hash family that for each , and . Define function to be the following:
where is abused to denote the description of , denotes the -bit prefix of . Then, is a weak PEG.
Intuition for the construct:
- is GL hardcore lemma, the starting point to obtain a pseudo-random bit (i.e., pseudo-entropy). However, gives no extra pseudo-entropy for many-to-one .
- is attempting to obtain PE by “identifying different” through . However, by randomizing , any may map to (like OTP), bad identification.
- is giving , and we get good identification. However, too good to be easy to invert since solving is easy.
- is cutting short to make inverting hard. For proper choice of , this works, but is hard to compute.
- We end up with guessing random . It works, the proof follows below.
For each , let . Let to be and to be for short. Let random variables be the following
Claim: Low Shannon entropy
We have for all (proof left as exercise). Hence, it suffices to show that . Conditioned on , we have . We want to show that when conditioned on , , which happens w.p. . It remains to show that . We will show that given , w.h.p. it holds that is uniquely determined, and thus is also determined (and gives 0 Shannon entropy).
For any and , define the pre-image set . Notice that . By is pairwise independent, for any ,
To see is determined over the random ,
by union bound and then by is small.
(The calculation of conditional entropy is left as exercise.)
Claim: High pseudo-entropy
Proof Sketch.
Because the two differ only when , assume for contradiction that there exist NUPPT , polynomial , such that for infinitely many ,
where .
We want to construct that inverts . We have a similar claim of good ’s: let to be the set
Then, . We can next fix similarly: for each , let to be the set
Then, , where .
Now, we can condition on and . Namely, given , tries all and samples uniformly, and we have that and w.p. . It remains to find the correct so that can run repeatedly using pairwise independent ’s.
Suppose that is fixed and is sampled uniformly and independently from . Given , the min-entropy of is because each can be mapped to . By the corollary of Leftover Hash Lemma, “guessing the hash value”, the first bits of is -close to uniform. This implies that we can hit the first bits of w.p. by sampling them uniformly at random.
However, to apply , we also conditioned on (instead of uniform ). Hence, we need to take union bound:
- w.p.
- the guess is not the first bits of for all w.p. .
Thus, choosing such that , we will sample a “good” input to w.p. (only conditioned on ). With the above, we can try all remaining bits and then check if the outcome satisfies .
Algorithm :
- For each ,
- For each ,
- Let .
- Run .
- Output if .
The subroutine performs almost identically to the standard Goldreich-Levin reduction, and the only difference is that it takes an additional input .
Algorithm
- Let , be fully independent and be pairwise independent -bit random strings.
- For each , sample guess bit uniformly. For each , compute the bit from in the same way as (so that for any , and for all ).
- For each ,
- For each ,
- Run .
- Let be the majority of
- Output
The parameter is chosen according to the success probability of conditioned on and and are consistent. Notice that conditioned on , the events and hits are independent, w.p. and . Also, runs over all possible and . Hence, the overall success probability is .
Discuss Is a OWF? No, consider the case , which happens w.p. , and then w.h.p. over , we can solve from . However, the above claim also showed that is hard to invert when , i.e., is a weak OWF.
PRG from any OWF
We assume that the OWF . This is w.l.o.g.: if input is shorter, then we pad the input with unused random bits; if the output is shorter, then we pad the output with fixed bits. The same applies to the new notions in this section, namely weak PEG and PEG.
Historically, the first construction of PRG from any OWF was given by HILL'99, initiated by IL'89 and ILL'89. The construction here is presented following a lecture of Barak at Princeton, which in turn followed the paper of Holenstein'06. Later, HRV'13 and VZ'12 improved the efficiency. Interestingly, the constructions of PRG introduced several novel tools that proved useful later, e.g., the Leftover Hash Lemma, due to [ILL'89].
Even though the result is impactful, the full construction is often skipped in textbooks and graduate-level cryptography courses. Many books and courses cover the Goldreich-Levin hard-core lemma [Ps, KL], but only a few of them go beyond it (such as the lecture notes of Bellare, 1999). The book of Goldreich, Section 3.5, goes much deeper and constructs a PRG from any “regular” OWF, where regular means that for the same input length $n$, all pre-image sets have the same cardinality. Still, the full construction
… is even more complex and is not suitable for a book of this nature.
– Goldreich, Section 3.5
The only teaching material we found is the lecture of Barak.
In this subsection, we describe the construction of Mazor and Pass [MP23].
Construction: Mazor-Pass PRG
Let be a function for all . Define the following functions.
- , where , and is a matrix that will be given later. Looking forward, we will compute for different ’s but the same .
- , which simply applies repeatedly on independent strings and concatenates the outputs.
The function is defined by the following (deterministic) algorithm, where
- are parameters to be determined later,
- is also a parameter,
- , , , and are all inputs ( is represented in a -bit binary string).
Function :
- For all , compute . This is called a “row,” which consists of bits.
- For each , remove from the prefix bits and suffix bits; call the resulting -bit string as .
- Define for any -bit .
- For each , define . That is, is the concatenation of the -th bits of all , and thus each is -bit.
- Output .
We claim that if is OWF, then the function defined as the above is a PRG. Clearly, is easy to compute if is. For expansion, the input and output sizes of are:
- Input: , for , ’s, ’s, and .
- Output: . Since , we have that
Choosing sufficiently large , we obtain the expansion as the difference between output and input size is
It remains to show pseudorandomness.