Foundations of Coding: Compression, Encryption, Error Correction (2015)
Chapter 1. Foundations of Coding
1.3 Block Ciphers, Algebra, and Arithmetic
In this section, we introduce most of the mathematical background used in coding theory and its applications. It contains the bases of algebra and efficient ways to compute in algebraic structures. Some arithmetic is also useful to construct large finite structures. All these notions are widely used in this book and become necessary as soon as block coding methods are envisaged.
Today, the Vernam cipher is the only symmetric cryptographic algorithm that has been proved unconditionally secure. Thus, all other known systems are theoretically breakable.
For these systems, we use almost secure encryption schemes: the knowledge of the ciphertext (or of some plaintext/ciphertext pairs) does not enable one to recover the secret key or the plaintext in humanly reasonable time (see the orders of magnitude and computing limits in Section 1.1.7).
For instance, we can decide to choose a unique key and to reuse it in order to avoid too frequent key exchange protocols. This implies that we have to split the source messages into blocks of some size, depending on the size of the key. Block coding is also a standard technique, widely used in error detection and correction.
This is also the principle of one of the most famous codes – the ASCII code (“American Standard Code for Information Interchange”) – which is a numerical representation of the letters and signs of the Latin alphabet. In the ASCII code, the source alphabet is the Latin alphabet, and the code alphabet is V = {0, 1}. The set of codewords is the set of all the words of size 8 over V.
Each one of the characters (uppercases, lowercases, special characters, and control characters) is represented with a word of size 8 over V according to an encoding function. The following Table 1.3 gives an extract of this function.
Table 1.3 Extract of the ASCII Code
A  01000001    J  01001010    S  01010011
B  01000010    K  01001011    T  01010100
C  01000011    L  01001100    U  01010101
D  01000100    M  01001101    V  01010110
E  01000101    N  01001110    W  01010111
F  01000110    O  01001111    X  01011000
G  01000111    P  01010000    Y  01011001
H  01001000    Q  01010001    Z  01011010
I  01001001    R  01010010    Space  00100000
For example, the ASCII code of the message: A KEY, is the string: 0100000100100000010010110100010101011001.
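This encoding can be checked directly, since Python's `ord` returns the ASCII code point of a character (the helper name `ascii_encode` is illustrative, not the book's notation):

```python
def ascii_encode(message: str) -> str:
    """Concatenate the 8-bit ASCII codewords of each character."""
    return "".join(format(ord(c), "08b") for c in message)

print(ascii_encode("A KEY"))
# 0100000100100000010010110100010101011001
```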
1.3.1 Blocks and Chaining Modes from CBC to CTR
It is possible to encode each block of a message independently with the same algorithm. This is called the Electronic Code Book (ECB) encryption mode. More generally, independence of encryption between blocks is not required, and the various ways of combining the blocks are called encryption modes.
ECB mode: Electronic Code Book In this mode, the message M is split into blocks of constant size. Each block is encrypted separately with a function E_k (the key k being a parameter of the function), as illustrated in Figure 1.3.
c_i = E_k(m_i)    (1.3)
Figure 1.3 Block ciphers: ECB mode
Thus, a given block of the message will always be encrypted in the same way. This encryption mode is the easiest one but does not provide any security and is normally never used in cryptography.
CBC mode: Cipher Block Chaining The CBC mode was introduced to avoid encrypting a block in the same way in two different messages. We add an initial value c_0, possibly generated randomly. Each block m_i is combined by an XOR operation with the previous ciphertext block c_{i−1} before being encrypted. Figure 1.4 illustrates this mode.
Figure 1.4 Block ciphers: CBC mode
c_i = E_k(m_i ⊕ c_{i−1})    (1.4)
This is the most widely used encryption mode. Decryption uses the inverse E_k^{−1} of the encoding function: m_i = E_k^{−1}(c_i) ⊕ c_{i−1}.
CFB mode: Cipher FeedBack To avoid using the inverse function for decryption, it is possible to perform an XOR after the encryption. This is the principle of the CFB mode, which is illustrated in Figure 1.5.
Figure 1.5 Block ciphers: CFB mode
c_i = m_i ⊕ E_k(c_{i−1})    (1.5)
The benefit of this mode is that decryption does not require implementing the function E_k^{−1}: m_i = c_i ⊕ E_k(c_{i−1}). However, this mode is less secure than the CBC mode; it is used in network encryption, for example.
OFB mode: Output FeedBack (OFB) is a variant of the previous mode and it provides symmetric encryption and decryption. Figure 1.6 illustrates this scheme.
Figure 1.6 Block ciphers: OFB mode
z_0 = c_0 ;  z_i = E_k(z_{i−1}) ;  c_i = m_i ⊕ z_i    (1.6)
Decryption is performed by m_i = c_i ⊕ z_i, where z_i = E_k(z_{i−1}) is the same keystream as in encryption. This mode is useful when one needs to minimize the number of embedded circuits, especially for communications in spacecraft.
CTR mode: Counter Mode Encryption The CTR mode is also completely symmetric, and encryption can be performed in parallel. It uses the encryption of a counter of initial value T (Figure 1.7):
Figure 1.7 Block ciphers: CTR mode
c_i = m_i ⊕ E_k(T + i)    (1.7)
Decryption is performed in the same way: m_i = c_i ⊕ E_k(T + i). The advantage of such a mode is that the computations are independent, as in the ECB mode, but a given block is also, a priori, never encrypted twice in the same way.
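The difference between the modes can be illustrated with a deliberately toy "block cipher" (a single XOR with an 8-bit key — this models only the chaining structure, not a real cipher; the key value and function names are illustrative):

```python
KEY = 0b10110101

def E(block):            # toy "encryption" of one 8-bit block
    return block ^ KEY

def ecb(blocks):         # c_i = E_k(m_i)
    return [E(m) for m in blocks]

def cbc(blocks, iv):     # c_i = E_k(m_i XOR c_{i-1}), starting from c_0 = iv
    out, prev = [], iv
    for m in blocks:
        prev = E(m ^ prev)
        out.append(prev)
    return out

def ctr(blocks, t):      # c_i = m_i XOR E_k(T + i)
    return [m ^ E((t + i) % 256) for i, m in enumerate(blocks)]

msg = [0b00000001, 0b00000001]      # two identical plaintext blocks
print(ecb(msg))                      # identical ciphertext blocks
print(cbc(msg, iv=0b00000000))       # distinct ciphertext blocks
print(ctr(msg, t=0))                 # distinct ciphertext blocks
```

Even in this toy setting, ECB encrypts equal blocks identically, while chaining (CBC) or a counter (CTR) makes the two ciphertext blocks differ; CTR decryption is the same function applied again.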
Exercise 1.12
A message M is split into n blocks m_1, …, m_n that are encrypted into c_1, …, c_n using an encryption mode. Bob receives the blocks c_1, …, c_n, but unfortunately, he does not know that one (and only one) of the blocks was incorrectly transmitted (for example, some 0s became 1s and some 1s became 0s during the transmission of block c_i). Show that the number of incorrectly decrypted blocks is equal to 1 if the ECB, OFB, or CTR mode was used and equal to 2 if the CBC or CFB mode was used.
Solution on page 283.
1.3.2 Algebraic Structure of Codewords
Developing block ciphers implies that we have to be able to perform operations and computations over blocks. For example, the XOR operation ⊕ over a block of bits is a bitwise addition modulo 2 of two vectors. Furthermore, as encoding functions have to be reversible, we need structures in which we can easily compute the inverse of a block. In order to perform these computations on solid algebraic bases, let us recall some fundamental structures. Throughout the book, ℤ denotes the set of integers, and ℕ denotes the set of nonnegative integers.
1.3.2.1 Groups
A group (G, *) is a set G equipped with an internal binary operator * satisfying the following properties:
1. * is associative: a * (b * c) = (a * b) * c for all a, b, c ∈ G.
2. There exists a neutral element e ∈ G, such that for all a ∈ G, one has a * e = e * a = a.
3. Each element has an inverse: for all a ∈ G, there exists a^{−1} ∈ G such that a * a^{−1} = a^{−1} * a = e.
Moreover, if the law * is commutative (a * b = b * a for all a, b ∈ G), G is called abelian.
A subset H of G is called a subgroup of G if the operations of G restricted to H give a group structure to H. For an element a of a group G, we denote by a^n the repetition of the law *, performed on n terms equal to a, for all n ∈ ℕ \ {0}. Moreover, one has a^0 = e and a^{−n} = (a^{−1})^n for all n ∈ ℕ.
If an element g ∈ G is such that for all a ∈ G there exists i ∈ ℕ such that a = g^i, then g is a generator of the group, or a primitive root. A group is called cyclic if it has a generator.
For example, for a given fixed n, the set of integers ℤ/nℤ = {0, 1, …, n − 1}, equipped with the law of addition modulo n, is a cyclic group generated by 1; if n = 7 and if we choose the multiplication modulo 7 as the law of composition, the set {1, …, 6} is a cyclic group generated by 3. Indeed, modulo 7, one has 3^1 = 3, 3^2 = 2, 3^3 = 6, 3^4 = 4, 3^5 = 5, and 3^6 = 1.
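That 3 generates the whole multiplicative group modulo 7 can be checked by listing its successive powers:

```python
# The powers 3^1, ..., 3^6 (mod 7) must hit every element of {1, ..., 6}.
powers = [pow(3, i, 7) for i in range(1, 7)]
print(powers)            # [3, 2, 6, 4, 5, 1]
assert sorted(powers) == [1, 2, 3, 4, 5, 6]
```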
Let (G, *) be a group and a ∈ G. The set {a^i : i ∈ ℤ} is a subgroup of G, denoted by ⟨a⟩. If this subgroup is finite, its cardinal number is the order of a.
Property 5 (Lagrange)
If G is a finite group, the cardinal number of any subgroup of G divides the cardinal number of G. In particular, the order of any element divides the cardinal number of G.
Proof
Let H be a subgroup of a finite group G, let |H| be the cardinal number of H, and consider the sets aH = {a * h : h ∈ H} for any a ∈ G. First of all, all the sets aH have the same cardinal number |H|: if a * h_1 = a * h_2, then, since a is invertible, h_1 = h_2, so that |aH| = |H|. Then these sets form a partition of G: indeed, take aH and bH with a, b ∈ G, and suppose that there exists an element x in their intersection, that is, x = a * h_1 = b * h_2. Then, for any element a * h of aH, a * h = b * (h_2 * h_1^{−1} * h). But as H is a subgroup, h_2 * h_1^{−1} * h ∈ H, and thus a * h ∈ bH. This proves that aH is included in bH. With the reverse argument, one can prove also that bH is included in aH. Therefore, two sets aH and bH are either equal or disjoint. Finally, any element x in G is in xH. Now, as the sets aH form a partition of G and they are all of cardinal number |H|, the cardinal number of G is a multiple of |H|.
Theorem 1 (Lagrange)
In a finite abelian group (G, ×) of cardinal number n, one has a^n = 1 for all a ∈ G.
Proof
Let a be any element in G. The set aG = {a × x : x ∈ G} of the multiples of a is equal to G. Indeed, as a is invertible, for all y ∈ G, we can define x = a^{−1} × y, so that y = a × x ∈ aG. Reciprocally, if a and x are elements of the group G, then so is their product a × x. Hence, the two sets being equal, the products of all their respective elements are also equal:
∏_{x ∈ G} (a × x) = ∏_{x ∈ G} x.
Yet, as multiplication is commutative in an abelian group, we can extract a from the product. Moreover, there are n elements in G, and we thus obtain the following formula:
a^n × ∏_{x ∈ G} x = ∏_{x ∈ G} x.
In order to conclude, we use the fact that – all elements in G being invertible – so is their product, and we can simplify the previous formula: a^n = 1.
1.3.2.2 Rings
A ring (A, +, ×) is a set A equipped with two internal binary operators + and × satisfying the following properties:
1. (A, +) is an abelian group.
2. × is associative: a × (b × c) = (a × b) × c for all a, b, c ∈ A.
3. × is distributive over +: a × (b + c) = (a × b) + (a × c) and (b + c) × a = (b × a) + (c × a) for all a, b, c ∈ A.
Moreover, if there exists a neutral element for × in A, A is called unitary. This neutral element is noted 1_A, or simply 1 if it is not ambiguous from the context. If × is commutative, A is called commutative. All elements in A have an opposite, namely their inverse for the law +. However, they do not necessarily have an inverse for the law ×. The set of invertible elements for the law × is denoted by A*.
For an element a in a ring A, one denotes by n · a (or more simply na) the sum of n terms equal to a, for all n ∈ ℕ \ {0}.
If the set {n ∈ ℕ \ {0} : n · 1 = 0} is not empty, its smallest element is called the characteristic of the ring. Otherwise, the ring is said to be of characteristic 0. For example, (ℤ, +, ×) is a unitary, commutative ring of characteristic 0.
Two rings (A, +, ×) and (B, ⊕, ⊗) are isomorphic if there exists a bijection f : A → B satisfying, for all x and y in A:
f(x + y) = f(x) ⊕ f(y)  and  f(x × y) = f(x) ⊗ f(y).    (1.8)
If E is any set and (A, +, ×) is a ring such that there exists a bijection f from E to A, then E can be equipped with a ring structure by setting, for all x and y in E:
x ⊕ y = f^{−1}(f(x) + f(y))  and  x ⊗ y = f^{−1}(f(x) × f(y)).    (1.9)
The ring defined as such is obviously isomorphic to A. If two rings are isomorphic, one ring can be identified with the other.
An ideal I is a subgroup of a ring A for the law + that is “absorbing” for the law ×: for g ∈ I, the product a × g remains in I for any element a of the ring A. For all x ∈ A, the set Ax = {a × x : a ∈ A} is an ideal of A, which is generated by x. An ideal I of A is called principal if it has a generator x (such that I = Ax). A ring A is principal if every ideal of A is principal.
A Euclidean function is a mapping ν of all nonzero elements of a unitary ring to ℕ. A unitary ring with a Euclidean function is Euclidean if it has the property that, for every couple of elements a and b ≠ 0 of the ring, there exist q and r such that a = b × q + r with r = 0 or ν(r) < ν(b). This operation is the Euclidean division, and the numbers q and r are, respectively, called the Euclidean quotient and the remainder; they are denoted by q = a div b and r = a mod b (for a modulo b). Any Euclidean ring is principal. This implies that there exists a Greatest Common Divisor (GCD) for every couple of elements (a, b). Any generator of the ideal generated by a and b is a gcd.
For example, the ring ℤ, with the absolute value as Euclidean function, is a Euclidean ring. The following famous theorem of Bézout extends the properties of Euclidean rings. It is true for any Euclidean ring, and its proof in ℤ will follow from the Euclidean algorithm on page 37.
Theorem 2 (Bézout)
Let a and b be two nonzero elements of a Euclidean ring A, and let d be their GCD. There exist two elements x and y, called the Bézout numbers, satisfying a × x + b × y = d.
The modulo operation allows one to define a ring on ℤ/nℤ = {0, 1, …, n − 1}, the set of nonnegative integers strictly inferior to n, for n ≥ 2. The set ℤ/nℤ equipped with the addition and the multiplication modulo n [that is, (a + b) mod n and (a × b) mod n] is a (finite) ring. It is widely used in coding.
Exercise 1.13
Bézout's theorem is very useful to prove properties in number theory. In particular, use it to prove the famous Gauss's lemma stated as follows:
Lemma 2 (Gauss)
If an integer a divides the product of two integers b and c, and if a and b are coprime, then a divides c.
Solution on page 284.
1.3.2.3 Fields
A field (V, +, ×) is a set V equipped with two internal binary operators satisfying the following properties:
1. (V, +, ×) is a unitary ring.
2. (V \ {0}, ×) is a group.
If × is commutative, the field is commutative; the inverse (or opposite) of x with regard to the law + is denoted by −x; the inverse of x with regard to the law × is denoted by x^{−1}. The characteristic of a field is its characteristic when considered as a ring.
For instance, ℚ is a commutative field of characteristic 0.
As all the rings and fields we are dealing with are commutative, thereafter the word ring (respectively field) will refer to a unitary commutative ring (respectively a commutative field).
Two fields are isomorphic if they are isomorphic when considered as rings.
A subset W of a field V is called a subfield of V if the operations of V restricted to W give a field structure to W.
The standard notation is used for classical fields in this book: ℚ denotes the field of rational numbers, and ℝ denotes the field of real numbers.
If p is a prime number, the ring ℤ/pℤ is a field of characteristic p.
Indeed, by Bézout's theorem (see page 32), for every couple of integers a and b, there exist two integers x and y such that a·x + b·y = gcd(a, b).
If p is prime and a is a nonzero element of ℤ/pℤ, this identity applied to a and p gives a·x + p·y = 1, hence a·x ≡ 1 mod p. Thus, a is invertible, and x is its inverse. This field is denoted by F_p.
The field ℚ of rational numbers and the fields F_p are called prime fields.
1.3.2.4 Vector Spaces and Linear Algebra
Rudiments of linear algebra are necessary for reading the major part of this book. Without any explicative pretension, here we define useful concepts and introduce the main notation. A set E is a vector space over a field V if it has an internal law of composition + and an external law of composition “·”, such that
1. (E, +) is a commutative group.
2. For all u ∈ E, 1 · u = u.
3. For all λ, μ ∈ V and all u ∈ E, λ · (μ · u) = (λμ) · u.
4. For all λ ∈ V and all u, v ∈ E, λ · (u + v) = λ · u + λ · v.
5. For all λ, μ ∈ V and all u ∈ E, (λ + μ) · u = λ · u + μ · u.
An element of a vector space is called a vector, and the elements of the field V are called scalars. The set {0, 1} equipped with the addition modulo 2 and the multiplication is a commutative field denoted by F_2. Thus, the set of bit tables of size n can be equipped with a vector space structure. The set of codewords of length n is then F_2^n.
A set of vectors {x_1, …, x_k} is an independent set if, for all scalars λ_1, …, λ_k, the relation λ_1 x_1 + ⋯ + λ_k x_k = 0 implies λ_1 = ⋯ = λ_k = 0.
The dimension of a vector space V, denoted by dim(V), is the cardinal number of the greatest set of independent vectors of V.
For example, if V is a field, V^n is a vector space of dimension n because the vectors e_1 = (1, 0, …, 0), …, e_n = (0, …, 0, 1) are independent.
A linear mapping is an application from one vector space to another that preserves the laws of composition. Namely, if A and B are two vector spaces, an application f : A → B is said to be linear if f(u + v) = f(u) + f(v) for all u, v in A, and f(λ · u) = λ · f(u) for all u ∈ A and all scalars λ.
The image of a linear mapping f from a vector space E to a vector space F, denoted by Im(f), is the set of vectors y ∈ F such that there exists x ∈ E with y = f(x).
The kernel of a linear mapping f from a vector space E to a vector space F, denoted by Ker(f), is the set of vectors x ∈ E such that f(x) = 0.
It is easy to verify that Im(f) and Ker(f) are vector spaces.
If the dimension of Im(f) is finite, this quantity is called the rank of the linear mapping and is denoted by Rk(f). Moreover, if the dimension of E is also finite, then we have the following result: dim(Ker(f)) + dim(Im(f)) = dim(E).
A matrix M of size (m, n) is an element of the vector space of m × n tables over V, represented by a table of m horizontal lines (rows) of size n, or n vertical lines (columns) of size m. The element that lies in the ith row and the jth column of the matrix is written M_{i,j}. Multiplication of a vector x of size n by such a matrix gives a vector y of size m satisfying y_i = Σ_{j=1}^{n} M_{i,j} · x_j for all i from 1 to m; this multiplication is written y = M·x.
Each matrix M is associated with a linear application f from V^n to V^m, defined by f(x) = M·x. Reciprocally, any linear application can be written using a matrix. The coding processes studied throughout this book – mainly in Chapter 4 – often use linear applications, which enable us to illustrate the properties of these functions.
Depending on the chosen structure, codewords can be manipulated with additions and multiplications over integers or vectors. All these structures are very general and common in algebra. Codes are particular as they are finite sets. Finite groups and finite fields have some additional properties, which will be widely used throughout this work.
1.3.3 Bijective Encoding of a Block
Now that we have some structures, we are able to perform additions, multiplications, and Euclidean divisions on blocks. We can also compute their inverses. Here, we give some fundamental examples of computations that can be performed on sets with good algebraic structure. As blocks are of finite size, we will manipulate finite sets in this section.
1.3.3.1 Modular Inverse: Euclidean Algorithm
Bézout's theorem (see page 32) guarantees the existence of the Bézout numbers, and thus the existence of the inverse of a nonzero number modulo a prime number p in ℤ/pℤ. The Euclidean algorithm makes it possible to compute these coefficients efficiently.
In its fundamental version, the Euclidean algorithm computes the Greatest Common Divisor (GCD) of two integers according to the following principle: assuming that a ≥ b,
gcd(a, b) = gcd(b, a mod b),
where a mod b is the remainder of the Euclidean division of a by b. Indeed, if a and b have a common divisor d, then b and a mod b = a − qb are also divisible by d.
The recursive principle appears: one iterates until the remainder is zero, and the last nonzero remainder is the gcd.
Example 1.3 (The GCD of 522 and 453)
We compute
gcd(522, 453) = gcd(453, 69) = gcd(69, 39) = gcd(39, 30) = gcd(30, 9) = gcd(9, 3) = gcd(3, 0) = 3.
The GCD of 522 and 453 is equal to 3.
Extended Euclidean algorithm The “extended” version of the Euclidean algorithm – the one we will use most of the time in this book – enables us to compute the GCD of two numbers together with a pair of Bézout numbers.
This algorithm is also extended in the sense that it is more general: it can be applied not only to number sets but also to any Euclidean ring. This is the case for polynomials, as we will see in the following sections.
The principle of the algorithm is to iterate with the following function G:
Example 1.4
We wish to find x and y such that 522·x + 453·y = gcd(522, 453). We write the matrices corresponding to the iterations with the function G, starting from the initial matrix.
One first computes with and and one gets the matrix
Then, one iterates the algorithm with Thus, at the end, one gets
Hence, we have 522 × 46 − 453 × 53 = 3. Thus, gcd(522, 453) = 3, and the Bézout numbers are x = 46 and y = −53.
Here is a version of the extended Euclidean algorithm that performs this computation while storing only the first line of G. It modifies the variables x, y, and d in order to verify, at the end of each iteration, that a·x + b·y = d.
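The iteration can be sketched as follows (this variant keeps two rows of coefficients rather than a single line, but maintains the same invariant a·x + b·y = d on each row; the function name is illustrative):

```python
def extended_gcd(a, b):
    """Return (d, x, y) with a*x + b*y = d = gcd(a, b)."""
    x0, y0, d0 = 1, 0, a      # invariant: a*x0 + b*y0 = d0
    x1, y1, d1 = 0, 1, b      # invariant: a*x1 + b*y1 = d1
    while d1 != 0:
        q = d0 // d1          # Euclidean quotient
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
        d0, d1 = d1, d0 - q * d1
    return d0, x0, y0

d, x, y = extended_gcd(522, 453)
print(d, x, y)                       # 3 46 -53
assert 522 * x + 453 * y == d == 3
```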
In order to solve it “by hand,” we can calculate recursively the following equations (we stop when the remainder is zero):
The following exercise gives examples of resolution using this method.
Exercise 1.14 (Extended Euclidean Algorithm)
Find pairs of Bézout numbers for the following integers:
·
·
·
Solution on page 284.
Now let us prove that this algorithm is correct when the Euclidean ring is the set ℤ of integers. This will also provide a constructive proof of Bézout's theorem (see page 32) in ℤ.
Theorem 3
The extended Euclidean algorithm is correct in ℤ.
Proof
First of all, let us show that every common divisor of a and b divides all the remainders of the sequence: recursively, if d divides r_{i−1} and r_i, then, since r_{i+1} = r_{i−1} − q_{i+1} r_i, d also divides r_{i+1}. Moreover, the sequence of positive remainders is monotonically decreasing and is bounded below by 0. Hence, the algorithm ends.
Besides, after a finite number of steps, the remainder becomes zero: there exists an index i such that r_{i+1} = 0, that is, r_{i−1} = q_{i+1} r_i. In that case, gcd(r_{i−1}, r_i) = r_i, and the previous remark indicates that r_i is the gcd we are looking for.
Finally, we need to prove that the Bézout numbers are correct. Let us do it recursively. Obviously, the initial case is correct and so is the algorithm. Then, let us denote and . Hence . Recursively, and using the notation introduced in Algorithm 1.4, we have with and . This relation implies that with and . Thus, the algorithm is correct.
Exercise 1.15 (Modular Computation)
The extended Euclidean algorithm also enables one to solve linear modular equations in ℤ/nℤ. Give a method to solve the following equations:
1.
2.
3.
Solution on page 284.
Complexity of the Euclidean algorithm At each step, the greatest number is at least divided by two; hence, its size decreases by at least 1 bit. Thus, the number of steps is bounded by O(n) if n is the size of the data. At each step, we also compute the remainder of a Euclidean division. The algorithms for Euclidean division we learned in elementary school have a cost O(n²). Finally, the overall cost is bounded by O(n³). However, a closer study of the algorithm makes this complexity more precise. Indeed, the cost of the Euclidean algorithm is rather of the order of O(n²)!
The proof is technical, but the idea is rather simple: either there are actually about n steps, in which case each quotient is very small and each division and multiplication can be performed with O(n) operations, or the quotients are large and each division and multiplication has to be performed with O(n²) operations (but then the number of steps is small).
Exercise 1.16
Implement the extended Euclidean algorithm on the set of integer numbers with your favorite programming language (solution coded in C++).
Solution on page 285.
1.3.3.2 Euler's Totient Function and Fermat's theorem
Let n ≥ 2 be an integer. We denote by (ℤ/nℤ)* the set of positive integers lower than n and coprime with n:
(ℤ/nℤ)* = {x ∈ {1, …, n − 1} : gcd(x, n) = 1}.
The cardinal number of (ℤ/nℤ)* is denoted by φ(n). The function φ is called Euler's totient function. For example, φ(8) = 4. Moreover, if p is prime, φ(p) = p − 1. Exercise 1.19 focuses on a more general formula.
Each element x in (ℤ/nℤ)* has an inverse: indeed, as x is coprime with n, Bézout's identity guarantees the existence of two integers u and v, of opposite sign, such that
u·x + v·n = 1.
Thus, u·x = 1 − v·n, that is, u·x ≡ 1 mod n. The integer u, taken modulo n, is nonzero (since u·x ≡ 1 mod n) and coprime with n; so it is an element of (ℤ/nℤ)* and is called the inverse of x modulo n. Computation of u is performed with the extended Euclidean algorithm.
Theorem 4 (Euler)
Let a be any element in (ℤ/nℤ)*. One has a^{φ(n)} ≡ 1 mod n.
Proof
(ℤ/nℤ)* is a finite multiplicative abelian group, with neutral element 1 and of cardinal number φ(n). Therefore, Lagrange's Theorem 1, page 31, applies directly to give a^{φ(n)} ≡ 1 mod n.
Fermat's theorem can be immediately deduced from Euler's theorem when n is prime.
Theorem 5 (Fermat)
If p is prime, then any a ∈ ℤ/pℤ satisfies the following result: a^p ≡ a mod p.
Proof
If a is invertible, then Euler's theorem gives a^{p−1} ≡ 1 mod p. We multiply this equation by a in order to obtain the relation. The only noninvertible element in ℤ/pℤ (if p is prime) is 0. In that case, we obviously have 0^p ≡ 0 mod p.
The Chinese Remainder Theorem – which was first formulated by the Chinese mathematician Qin Jiushao in the thirteenth century – enables one to combine several congruences modulo pairwise coprime numbers in order to obtain a congruence modulo the product of these numbers.
Theorem 6 (Chinese Remainder Theorem)
Let n_1, …, n_k be positive pairwise coprime numbers and N = n_1 ⋯ n_k. Then, for every set of integers a_1, …, a_k, there exists a unique solution x modulo N to the modular equation system x ≡ a_i mod n_i, for i = 1, …, k. If we call N_i = N / n_i, this unique solution is given by
x = Σ_{i=1}^{k} a_i · N_i · y_i  mod N,
where y_i is the inverse of N_i modulo n_i.
Proof
Let us proceed in two steps: first, we prove the existence of x, and then we prove the uniqueness of x. As the n_i are pairwise coprime, N_i and n_i are coprime. Bézout's theorem gives us the existence of the inverse of N_i modulo n_i, which is written y_i. Let x = Σ_i a_i N_i y_i mod N. It is easy to check that x is a solution of the system of congruences! Indeed, for all i, as n_i divides all the N_j (with j ≠ i), x ≡ a_i N_i y_i mod n_i. According to the definition of y_i, we have x ≡ a_i mod n_i.
Now let us prove the uniqueness of x. Suppose that there exist two solutions x_1 and x_2. Then, for each i, x_1 ≡ a_i mod n_i and x_2 ≡ a_i mod n_i; thus, x_1 − x_2 ≡ 0 mod n_i. Hence, each n_i divides x_1 − x_2. Yet, the n_i are pairwise coprime; processing recursively, as n_k is coprime with the product n_1 ⋯ n_{k−1}, we deduce that N = n_1 ⋯ n_k divides x_1 − x_2, that is, x_1 ≡ x_2 mod N, which gives us the uniqueness of the solution modulo N.
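The constructive formula of the theorem can be sketched directly (the helper name `crt` is illustrative; `pow(N_i, -1, n_i)` computes the modular inverse in Python 3.8+):

```python
from math import prod

def crt(residues, moduli):
    """x = sum_i a_i * N_i * y_i mod N, with N_i = N/n_i, y_i = N_i^-1 mod n_i."""
    N = prod(moduli)
    x = 0
    for a_i, n_i in zip(residues, moduli):
        N_i = N // n_i
        y_i = pow(N_i, -1, n_i)    # inverse of N_i modulo n_i
        x += a_i * N_i * y_i
    return x % N

# x = 2 (mod 3), x = 3 (mod 5), x = 2 (mod 7)  ->  x = 23 (mod 105)
print(crt([2, 3, 2], [3, 5, 7]))   # 23
```

The example is Sun Zi's classical instance of the problem; each congruence can be checked by reduction of 23 modulo 3, 5, and 7.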
Exercise 1.17
Find all integers x such that and . Deduce the inverse of 49 modulo 55.
Solution on page 285.
Exercise 1.18
Find all integers x whose remainders after division by 2, 3, 4, 5, and 6 are, respectively, 1, 2, 3, 4, and 5.
Solution on page 285.
Exercise 1.19 (A formula for Euler's Totient Function)
1. Let p be a prime number and k ∈ ℕ \ {0}. Compute φ(p^k).
2. Show that φ is multiplicative, that is, if m and n are coprime, then φ(mn) = φ(m)·φ(n).
3. Deduce a general formula for Euler's totient function, using the prime factor decomposition.
Solution on page 286.
Exercise 1.20 (The Chinese Remainder Theorem)
Let n_1, …, n_k be pairwise coprime integers and N = n_1 ⋯ n_k. We consider the following application:
Ψ : ℤ/Nℤ → ℤ/n_1ℤ × ⋯ × ℤ/n_kℤ,  x ↦ (x mod n_1, …, x mod n_k).
1. Prove that Ψ is a ring isomorphism.
2. Characterize the function
Hint: Use and notice that .
3. Give the unique solution modulo N of the system:
4.
Solution on page 286.
Exercise 1.21 (Application: The Chinese Remainder Problem)
This exercise is a typical application of the Chinese Remainder Theorem. A group of 17 pirates has laid hands on a booty composed of golden coins of equal value. They decide to share it equally and to give the rest to the Chinese cook. The latter would receive three coins. However, the pirates quarrel, and six of them are killed. The cook would then receive four coins. Unfortunately, the boat sinks, and only six pirates, the treasure, and the cook are saved. The sharing would now give five golden coins to the cook. What is the minimal fortune the latter can expect if he decides to poison the rest of the pirates?
Note: one may use the following equalities:
· and
· and
·
Solution on page 287.
1.3.3.3 Modular Exponentiation and Discrete Logarithm
Modular exponentiation is a form of coding widely used in modern encryption methods. Let a be a generator of (ℤ/nℤ)*. Consider the function
E_a : b ↦ a^b mod n.
It is associated with a decoding function. As a is a generator, every element c in (ℤ/nℤ)* may be written as a power of a. The lowest nonnegative integer b such that a^b ≡ c mod n is called the discrete logarithm (or the index) of c in base a modulo n. We denote b = log_a c.
The coding function is easy to compute. The method is called exponentiation by squaring (or binary exponentiation, or even square-and-multiply). It consists in decomposing the exponentiation into successive squarings.
For example, a^{11} = ((a²)² · a)² · a.
With this principle, the computation of a^{11} only requires five multiplications: three squarings and two multiplications.
More generally, the complexity bound of Algorithm 1.5 is O(log b) multiplications modulo n.
Indeed, at each call, the exponent b is divided by 2. Hence, there are at most O(log b) recursive calls. At each call, we perform at most two multiplications: a squaring and possibly a multiplication by a. These operations are performed modulo n, that is to say, on numbers of O(log n) bits. Even using the naive multiplication algorithm (the one we have seen in elementary school), the cost of such a multiplication is O(log² n).
Thus, the overall cost of the algorithm is O(log b · log² n). This cost is reasonable with regard to O(log n), which is the time required to read a or write the result.
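Algorithm 1.5 is not reproduced here, but a recursive sketch consistent with the description above (exponent halved at each call, at most two multiplications per call) is:

```python
def power_mod(a, b, n):
    """Compute a^b mod n by square-and-multiply."""
    if b == 0:
        return 1 % n
    half = power_mod(a, b // 2, n)   # recursive call on b/2
    square = (half * half) % n       # one squaring
    return square if b % 2 == 0 else (square * a) % n  # possibly one more multiplication

print(power_mod(2, 10, 1000))        # 24, since 2^10 = 1024
```

Python's built-in `pow(a, b, n)` performs the same computation.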
Exercise 1.22 (Computations of the Inverse)
1. Propose an algorithm for the computation of the inverse in (ℤ/nℤ)*, whenever it exists, based on Euler's theorem.
Application: compute (quickly) and . One can use the following results: .
2. Give three different algorithms for the computation of the inverse of y modulo , with distinct prime integers.
Solution on page 287.
The Discrete Logarithm Problem (DLP) deals with the computation of the inverse of the modular power. We have seen that modular exponentiation can be computed in reasonable time. However, this is not the case for the discrete logarithm. This asymmetry is a fundamental principle in cryptography.
The following result is called the discrete logarithm theorem. Recall that a generator of the set (ℤ/nℤ)* is a number g such that {g^i mod n : i ∈ ℕ} = (ℤ/nℤ)*.
Theorem 7 (Discrete Logarithm)
If g is a generator of (ℤ/nℤ)*, then for all x, y ∈ ℤ:
g^x ≡ g^y mod n  if and only if  x ≡ y mod φ(n).
Proof
(⇐) If x ≡ y mod φ(n), one has x = y + k·φ(n) for some k ∈ ℤ. Yet, g^{φ(n)} ≡ 1 mod n, hence g^x ≡ g^y · (g^{φ(n)})^k ≡ g^y mod n.
(⇒) As the sequence of the powers of g is periodic of period φ(n), g^x ≡ g^y mod n implies x ≡ y mod φ(n).
However, this does not enable one to compute the discrete logarithm with reasonable complexity. Given y, it is difficult to compute x such that g^x ≡ y mod n. The only simple method consists in enumerating exhaustively all possible x, and it takes a time O(n). No polynomial time algorithm in log n (the size of the input) is known for this problem.
Thereby, for n of several hundred digits, modular exponentiation takes less than a second on a PC. On the contrary, exhaustive enumeration for computing the discrete logarithm requires a number of operations of the order of n, which is unimaginable in reasonable time according to what we have seen before!
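The contrast can be made concrete on a toy modulus (the helper name `discrete_log` is illustrative): exhaustive search costs up to n multiplications, against O(log b) for the forward direction.

```python
def discrete_log(g, y, n):
    """Smallest b >= 0 with g^b = y (mod n), found by exhaustive enumeration."""
    acc = 1 % n
    for b in range(n):
        if acc == y:
            return b
        acc = (acc * g) % n
    raise ValueError("no discrete logarithm found")

# 3 generates (Z/7Z)*: log_3(6) = 3, since 3^3 = 27 = 6 (mod 7)
print(discrete_log(3, 6, 7))   # 3
```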
In practice, one can apply principles similar to those used in factorization algorithms in order to attempt to solve the discrete logarithm problem.
This kind of function – for which one way can be easily computed but not the other one – is crucial in coding, especially for public key cryptography.
1.3.3.4 One-Way Functions
In cryptosystems called public key cryptosystems, the “encoding system” has to be known while the decoding system has to remain unknown. In this example of encoding with modular exponentiation and decoding with the discrete logarithm, the point of having the encoding function E known and the decoding function D unknown seems contradictory: if one knows E, one inevitably knows D, as D = E^{−1}.
Actually, replacing “unknown” by “extremely difficult to compute on a computer” (i.e., taking several years, for instance), the functions E and D of a public key cryptosystem must satisfy:
· D(E(M)) = M, in order to insure correct decoding;
· it is easy (i.e., it can be done quickly) to compute E(M) from M; and
· it is difficult (i.e., it takes a very long time) to recover M from E(M).
In other words, the problem is to find an encryption function E that is fast to compute but long to invert. Such a function is called a one-way function (also known as OWF). This notion is absolutely crucial in cryptography, and all modern codes are based on it. The principle is illustrated in Figure 1.8.
Figure 1.8 Principle of a one-way function
Adding a key to the functions will make decoding easier if one has the key and will make it more difficult if one does not have the key.
Good OWFs are functions f such that recovering x given f(x) is a putatively difficult mathematical problem.
There are two advantages to calculating in modular arithmetic. First of all, computations “modulo n” are quite fast: their cost is O(log² n) using the most naive algorithms. Moreover, iterations of a function F – even a simple one – computed using arithmetic modulo n tend to have some random behavior. We will see in Section 1.3.7 that this kind of computation is used in the major part of pseudo-random generators. Knowing F and a large n, it seems difficult to solve the equation “find x such that F(x) = y mod n,” hence to invert the function F.
1.3.4 Construction of Prime Fields and Finite Fields
We have mentioned that we try to give field structures to our codes when possible in order to make operations easier. Now we have a first method of generating good codes: prime fields. It is sufficient to choose a prime number p and to equip the set {0, …, p − 1} with the addition and the multiplication modulo p. However, “finding a prime number” is not easy. It is a full-fledged field of research, of which we will give a survey in order to leave no algorithmic void in our coding methods.
1.3.4.1 Primality Tests and Prime Number Generation
Even if one does not know a polynomial time algorithm for factoring an integer n (polynomial with respect to the size of n), it is still possible to quickly generate a prime number p. In coding, it is very useful to be able to generate prime numbers, both for building structured codes such as fields – which can easily be manipulated for error correction – and for setting up secure cryptosystems. For this, one uses primality tests, namely algorithms that determine whether a given number is prime or not. Taking a large odd number n and applying the test on it, if n is “composite,” one can restart the algorithm with n + 2 until one finds a prime number. The number of prime numbers less than n is asymptotically n / ln n. One deduces that – starting from n odd – on average O(ln n) iterations are sufficient to find a prime number by adding 2 to n at each step.
The most widely used primality test was proposed by Miller and was made efficient in practice by Rabin. The Miller–Rabin test is an algorithm that determines whether a given number is probably prime or not. Therefore, the response given by the computation is only probabilistic, and it might be erroneous. Nevertheless, if one repeats the test a sufficient number of times and if the latter constantly gives the same response, the error probability becomes smaller and smaller and eventually negligible.
Miller–Rabin test Let n be odd and let s and t be such that n − 1 = 2^s·t with t odd. For any integer a, 1 ≤ a ≤ n − 1, one has a^{n−1} = (a^t)^{2^s}.
If n is prime, according to Fermat's theorem, a^{n−1} ≡ 1 mod n; therefore
· Either a^t ≡ 1 mod n;
· Or a^{2^i·t} ≡ −1 mod n for some i, 0 ≤ i ≤ s − 1.
The Miller–Rabin composition test is based on this property.
One says that a has succeeded in the Miller–Rabin composition test for n if a^t ≢ 1 mod n and a^{2^i·t} ≢ −1 mod n for all i, 0 ≤ i ≤ s − 1.
If n is odd and composite, there are fewer than n/4 integers a that fail the Miller–Rabin composition test. Therefore, by choosing a randomly in {2, …, n − 1}, the error probability is lower than 1/4.
This test can be efficiently used in order to build a probable prime number with an error probability lower than (1/4)^k. One proceeds as follows:
1. Choose randomly an odd integer n.
2. Choose randomly k distinct numbers a_1, …, a_k, with 1 < a_i < n. Apply the Miller–Rabin composition test for each integer a_i.
3. If no a_i succeeds in the composition test, we deduce that n is prime; the error probability is lower than (1/4)^k;
4. Otherwise, repeat the loop using n + 2 instead of n.
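A minimal Python sketch of this generation loop (the function names are ours; Python's three-argument pow performs the modular exponentiation):

```python
import random

def miller_rabin_witness(a, n):
    """Return True if a succeeds in the composition test (proves n composite)."""
    s, t = 0, n - 1
    while t % 2 == 0:              # write n - 1 = 2^s * t with t odd
        s, t = s + 1, t // 2
    x = pow(a, t, n)
    if x == 1 or x == n - 1:
        return False               # a does not witness compositeness
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return False
    return True                    # a proves that n is composite

def probable_prime(n, k=20):
    """Miller-Rabin with k random witnesses; error probability < (1/4)^k."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    return not any(miller_rabin_witness(random.randrange(2, n - 1), n)
                   for _ in range(k))

def next_probable_prime(n):
    n |= 1                         # start from an odd number
    while not probable_prime(n):
        n += 2                     # otherwise, repeat with n + 2
    return n
```

For example, `next_probable_prime(10**6)` returns 1000003.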
The complexity bound of the Miller–Rabin test is similar to that of modular exponentiation, namely O(log³ n) per witness; the overall cost is O(k·log³ n) if one wants an error probability lower than (1/4)^k.
Thus, the average arithmetic cost of the generation of prime numbers is bounded by O(log n) times the cost of one test, that is, O(k·log⁴ n). Indeed, as there are about n/ln(n) prime numbers less than n, it would take an average of O(log n) tries to find a prime number less than n.
In practice, using this test, it is easy to generate a 1000 digit prime number with an error probability arbitrarily low.
Besides, it is possible to make the Miller–Rabin algorithm deterministic by testing a sufficient number of integers a. For example, Burgess proved that testing all integers a lower than n^{1/(4√e)+ε} was enough to obtain a prime number with certainty. However, the test would then become exponential in the size of n.
Finally, in 1990, a theorem proved that, assuming the generalized Riemann hypothesis, one of the most famous conjectures in mathematics, it is enough to test the first 2·ln²(n) integers. Thus, theoretical studies show that this test is efficient and reliable.
Agrawal–Kayal–Saxena (AKS) primality test In order to have an overall survey of this topic, let us mention a new primality test – the AKS test – which was built by Agrawal, Kayal, and Saxena. In 2002, they proved the existence of a polynomial time deterministic algorithm that determines whether a number is prime or not without using the Riemann hypothesis. So far, despite this important theoretical result, in practice, one prefers probabilistic algorithms because of their efficiency.
The idea is close to Miller–Rabin's and uses the formalism of polynomials: if n is prime, then for all a,

(X + a)^n ≡ X^n + a mod n.     (1.10)
The AKS algorithm checks this equality for some values of a, by explicitly expanding (X + a)^n. For this test to have a polynomial algorithmic complexity bound, one needs to reduce the size of the polynomials (this is done by performing the test modulo X^r − 1, with r satisfying some properties^{3}) and to use a sufficient number of witnesses a, but only of the order of a polynomial in log(n), that is, a polynomial in the size of n.
We only give a rough idea of the justification of the AKS test, all the more as we have not introduced polynomials yet. This is what we are going to do now, because they constitute a necessary formalism to construct finite fields and codes.
We know how to build large prime fields by just computing large prime numbers. However, these are not the only existing finite fields. In order to build finite fields of any size (even if this size is not a prime number) – and provided that there exist such fields – we now have to introduce the ring of polynomials over any field.
1.3.4.2 Arithmetic of Polynomials
Let V be a field. We call a sequence a mapping from ℕ to V. The image of i ∈ ℕ under a sequence a is denoted by a_i. The support of a sequence a is the set of indices i such that a_i is nonzero. A polynomial P over V is a sequence with finite support. The elements a_i are called the coefficients of P. The degree of a nonzero polynomial P is the highest i such that a_i is nonzero, and it is denoted by deg(P). The coefficient a_{deg(P)} is then called the leading coefficient. A polynomial is monic if its leading coefficient is equal to the neutral element for the multiplication in V, namely 1.
The addition of two polynomials P and Q with coefficients p_i and q_i results in the polynomial P + Q with coefficients p_i + q_i for all i ∈ ℕ. The multiplication of the two polynomials P and Q results in the polynomial R = P·Q, with coefficients r_i = Σ_{k=0}^{i} p_k·q_{i−k} for all i ∈ ℕ.
Let X be the polynomial such that X_1 = 1 and X_i = 0 for all i ≠ 1. Any polynomial P of degree d may be written as P = p_0 + p_1·X + … + p_d·X^d.
The utility of such a notation is, among others, to define a function associated to any polynomial P: to each element x of V, it maps P(x) = p_0 + p_1·x + … + p_d·x^d. To efficiently evaluate the latter expression for an element x, one uses the Horner scheme of Algorithm 1.7.
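Algorithm 1.7 is not reproduced here; the following sketch of the Horner scheme evaluates P(x) with only d multiplications and d additions (the names are ours):

```python
def horner(coeffs, x):
    """Evaluate p_0 + p_1*x + ... + p_d*x^d; coefficients are given
    from p_0 (constant term) up to p_d (leading coefficient)."""
    result = 0
    for c in reversed(coeffs):   # ((p_d*x + p_{d-1})*x + ...)*x + p_0
        result = result * x + c
    return result

# P(X) = 2 + 3X + X^3 evaluated at x = 2: 2 + 6 + 8 = 16
print(horner([2, 3, 0, 1], 2))  # 16
```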
The set of all polynomials with these operations is a ring, denoted by V[X]. The null element is the all-zero sequence, and the neutral element is the polynomial 1, with coefficients 1_0 = 1 and 1_i = 0 for all i > 0. It is a principal and Euclidean ring, with the degree as a Euclidean function. Indeed, one can define a Euclidean division: for all nonnull polynomials A and B, there exist two unique polynomials Q and R with deg(R) < deg(B) such that A = B·Q + R.
The polynomial Q is the quotient in the Euclidean division of A by B; the remainder R is denoted by A mod B.
The notion of Greatest Common Divisor (GCD) is then defined; the extended Euclidean algorithm can be applied to two nonzero polynomials A and B and provides the polynomial of maximum degree (unique if required to be monic) that divides both A and B. Besides, Bézout's identity is valid. In other words, if A and B are two polynomials in V[X] and G = gcd(A, B) is their GCD, there exist two polynomials S and T in V[X] such that A·S + B·T = G.
If A and B are different from the polynomial 1 (the neutral element), the extended Euclidean algorithm enables one to compute two such polynomials S and T whose respective degrees are strictly lower than those of B/gcd(A, B) and A/gcd(A, B).
Two polynomials A and B are said to be coprime if their GCD is equal to the polynomial 1; in other words, A and B have no common factor of nonzero degree. One says that the nonconstant polynomial P of V[X] is an irreducible polynomial over V if P is coprime with all nonconstant polynomials of degree lower than deg(P).
As for the prime factor decomposition of any integer, any polynomial of nonzero degree has a unique factorization into powers of irreducible monic factors over V (up to a constant); one says that V[X] is a unique factorization domain. In other words, it is possible to decompose any polynomial A of nonzero degree of V[X] in the form
A = c·P_1^{e_1}·…·P_k^{e_k},
where c is a constant, the e_i are nonzero integers, and the polynomials P_i are irreducible over V. If A is monic, the factors P_i can be chosen monic and c = 1: the decomposition is then unique, up to a permutation of the indices.
An element α of V is a root of A ∈ V[X] if A(α) = 0, where A(α) is the value of the function associated to the polynomial A evaluated at α.
If α is a root of A, then X − α divides A. Let B be the polynomial such that A = (X − α)·B. One says that α is a simple root of A if α is not a root of B, that is, B(α) ≠ 0. Otherwise, if B(α) = 0, one says that α is a multiple root of A.
Example 1.5
In F_2[X], the polynomial X⁴ + X³ + X + 1 can be factorized into (X + 1)²·(X² + X + 1). One can easily check that X² + X + 1 is irreducible (the only irreducible polynomials in F_2[X] of nonzero degree lower than 2 are X and X + 1; and neither 0 nor 1 is a root of X² + X + 1).
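A factorization of this kind can be checked by multiplying the factors back over F_2; polynomials are represented here as coefficient lists, lowest degree first, and the helper name is ours:

```python
def polymul_mod2(p, q):
    """Product of two polynomials over F_2 (coefficient lists, low degree first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] ^= pi & qj        # addition mod 2 is XOR
    return r

x_plus_1 = [1, 1]                      # X + 1
quad = [1, 1, 1]                       # X^2 + X + 1
product = polymul_mod2(polymul_mod2(x_plus_1, x_plus_1), quad)
print(product)                         # [1, 1, 0, 1, 1], i.e. 1 + X + X^3 + X^4
```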
1.3.4.3 The Ring V[X]/P and Finite Fields
Let V be a field and let P be a polynomial of degree d ≥ 1. One denotes by V[X]/P the set of polynomials of degree strictly lower than d equipped with the addition and multiplication modulo P. Namely, for all polynomials A and B in V[X]/P, the sum is (A + B) mod P and the product is (A·B) mod P.
This is a commutative unitary ring with neutral elements 0 and 1 with regard to the laws + and ×. This ring is called the quotient ring of V[X] modulo P.
If P is an irreducible polynomial, then V[X]/P is a field. Indeed, if Q is a nonzero polynomial of degree lower than deg(P), then Q and P are coprime, and with Bézout's identity Q·S + P·T = 1, one obtains Q·S ≡ 1 mod P. In other words, Q is invertible in the quotient ring V[X]/P.
Example 1.6 (Over the Field F_2)
If P(X) = X² (a nonirreducible polynomial), the ring F_2[X]/P is such that:
X · X = X² = 0 mod P. This ring is not a field because X · X = 0 proves that X is not invertible. On the other hand, if one considers P(X) = X² + X + 1 (an irreducible polynomial), the ring F_2[X]/P is such that X·(X + 1) = X² + X = 1 mod P. This ring is a field, as 1 is its own inverse and X and X + 1 are inverses of each other.
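Both computations can be replayed mechanically by multiplying and then reducing modulo P over F_2 (the coefficient-list representation and the helper names are ours):

```python
def polymul_mod2(p, q):
    """Product of two polynomials over F_2 (coefficient lists, low degree first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] ^= pi & qj
    return r

def polymod2(a, p):
    """Remainder of a modulo p over F_2 (p monic)."""
    a = a[:]
    while len(a) >= len(p) and any(a):
        if a[-1]:                  # cancel the leading term with a shift of p
            shift = len(a) - len(p)
            for i, c in enumerate(p):
                a[shift + i] ^= c
        a.pop()
    return a

# In F_2[X]/(X^2): X * X = X^2 = 0, so X is not invertible.
print(polymod2(polymul_mod2([0, 1], [0, 1]), [0, 0, 1]))      # [0, 0], i.e. 0
# In F_2[X]/(X^2+X+1): X * (X+1) = X^2+X = 1, so X and X+1 are inverses.
print(polymod2(polymul_mod2([0, 1], [1, 1]), [1, 1, 1]))      # [1, 0], i.e. 1
```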
Therefore, we now have finite fields that are more general than prime fields. Indeed, our last example provided us with a field of four elements, which is not a prime field.
Finite fields are called Galois fields. They are denoted by F_q, where q is the cardinal number of the field. The next property enables us to handle all finite fields by this construction principle and explains the notation F_q for “the” finite field of cardinal number q.
Property 6
Two finite fields of same cardinal number are isomorphic.
One can use an irreducible polynomial in order to build a finite field. As for prime number generation, looking for irreducible polynomials is a fully fledged domain of which we will give a survey.
1.3.4.4 Irreducible Polynomials
In order to build finite fields, we need some irreducible polynomials, as we needed prime numbers in order to build prime fields.
In the same way as we have seen primality tests for numbers in Section 1.3.4.1, we begin by giving a test that enables one to recognize irreducible polynomials.
The first test, which is easy to perform, is to make sure that the polynomial is squarefree, namely that it is not divisible by the square of another polynomial. This can be done over a finite field, as over any other field, by considering the derivative of P. For P = Σ a_i·X^i, we denote by P′ = Σ i·a_i·X^{i−1} its derivative polynomial.
Property 7
A polynomial P is squarefree if and only if gcd(P, P′) = 1.
Proof
If P is divisible by a square, then P = g²·h for some polynomials h and g. It follows that P′ = 2·g·g′·h + g²·h′ = g·(2·g′·h + g·h′), and thus g divides the GCD of P and P′. Reciprocally, if gcd(P, P′) ≠ 1, let us consider an irreducible factor φ of this GCD, of degree at least 1. Then P = φ·f and P′ = φ·u, with f and u polynomials. By differentiating P, one obtains P′ = φ′·f + φ·f′, or φ·(u − f′) = φ′·f. The polynomial φ being irreducible and φ′ being of degree strictly lower than the degree of φ, φ and φ′ are coprime. By Gauss's lemma (see page 33), φ necessarily divides f, and φ² divides P.
The principle of the irreducibility test is given by the following property.
Proposition 1
For p prime and d ≥ 1, in F_p[X], the polynomial X^{p^d} − X is the product of all the irreducible monic polynomials whose degree divides d.
In order to prove this proposition, we will need the following lemma:
Lemma 3
The integer p^r − 1 divides p^d − 1 if and only if r divides d.
Proof
If r divides d, then p^d − 1 = (p^r − 1)·(p^{d−r} + p^{d−2r} + … + p^r + 1). Reciprocally, write d = q·r + s, with 0 ≤ s < r. Then, one has p^d − 1 = p^s·(p^{q·r} − 1) + (p^s − 1). Yet, p^r − 1 divides p^{q·r} − 1; thus, if p^r − 1 divides p^d − 1, one obtains that p^r − 1 divides p^s − 1 over the integers, which implies that s = 0, as 0 ≤ p^s − 1 < p^r − 1. Thus, r divides d.
Proof. [of Proposition 1]
Let P be irreducible of degree r, such that r divides d. Then V = F_p[X]/P is a field. In V, the order of any nonzero element divides the cardinal number of the group of its invertible elements, namely p^r − 1 (Section 1.3.2.1). One applies this property to X ∈ V, so that X^{p^r − 1} ≡ 1 mod P. According to the lemma, p^r − 1 divides p^d − 1; hence X^{p^d − 1} ≡ 1 mod P, and thus P divides X^{p^d} − X.
Reciprocally, let P be an irreducible divisor of X^{p^d} − X, of degree r. Then, one has X^{p^d} ≡ X mod P. Now, let G be an element of maximum order in the group of invertible elements of the field F_p[X]/P (there always exists at least one such element, see page 56). One then applies Equation (1.10) d times in order to obtain G(X)^{p^d} ≡ G(X^{p^d}) mod P. Now, X^{p^d} ≡ X mod P, and thus G^{p^d} ≡ G mod P, or G^{p^d − 1} ≡ 1 mod P. Hence, p^d − 1 is necessarily a multiple of p^r − 1, the order of G. The lemma enables one to conclude that r actually divides d.
One then just needs to show that no square divides X^{p^d} − X. Indeed, its derivative polynomial is p^d·X^{p^d − 1} − 1 = −1 in F_p[X], and the constant polynomial −1 is coprime with any other polynomial.
Thus, the factors of X^{p^d} − X are all the irreducible monic polynomials whose degree divides d. If a polynomial of degree d has no common factor with X^{p^i} − X for i = 1, …, ⌊d/2⌋, it is irreducible. From this property, we can build a test called Ben-Or's irreducibility test (Algorithm 1.8).
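A sketch of Ben-Or's test for small inputs, with naive polynomial arithmetic over F_p (coefficient lists, lowest degree first; all names are ours):

```python
def trim(a):
    """Drop zero leading coefficients (lists are low degree first)."""
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def polyrem(a, b, p):
    """Remainder of a modulo b over F_p; b must be nonzero."""
    a, b = a[:], trim(b[:])
    inv = pow(b[-1], -1, p)
    while len(a) >= len(b):
        if a[-1]:
            c = a[-1] * inv % p
            shift = len(a) - len(b)
            for i, bi in enumerate(b):
                a[shift + i] = (a[shift + i] - c * bi) % p
        a.pop()
    return trim(a) if a else [0]

def polymulmod(a, b, f, p):
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    return polyrem(r, f, p)

def polysub(a, b, p):
    n = max(len(a), len(b))
    return trim([((a[i] if i < len(a) else 0)
                  - (b[i] if i < len(b) else 0)) % p for i in range(n)])

def polygcd(a, b, p):
    a, b = trim(a[:]), trim(b[:])
    while b != [0]:
        a, b = b, polyrem(a, b, p)
    return a

def is_irreducible(f, p):
    """Ben-Or: f (degree d) is irreducible over F_p iff
    gcd(f, X^(p^i) - X) is constant for i = 1 .. d // 2."""
    d, g = len(f) - 1, [0, 1]          # g starts as the polynomial X
    for _ in range(d // 2):
        h, base, e = [1], g, p          # raise g to the power p modulo f
        while e:
            if e & 1:
                h = polymulmod(h, base, f, p)
            base = polymulmod(base, base, f, p)
            e >>= 1
        g = h                           # g is now X^(p^i) mod f
        if len(polygcd(f, polysub(g, [0, 1], p), p)) > 1:
            return False
    return True
```

For instance, `is_irreducible([1, 0, 0, 1, 1], 2)` confirms that X⁴ + X³ + 1 is irreducible over F_2, while X⁴ + X² + 1 = (X² + X + 1)² is rejected.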
Therefore, we may generate random polynomials and, using this test, check whether they are irreducible. One denotes by m_r(p) the number of irreducible monic polynomials of degree r in F_p[X]. As X^{p^r} − X is the product of all irreducible monic polynomials whose degree divides r, one obtains

(p^r − 2·p^{r/2})/r ≤ m_r(p) ≤ p^r/r.     (1.11)

Indeed, p^r is the degree of X^{p^r} − X; thus, p^r = Σ_{d|r} d·m_d(p). Hence, r·m_r(p) ≤ p^r. On the other hand, d·m_d(p) ≤ p^d implies that r·m_r(p) ≥ p^r − Σ_{d=1}^{⌊r/2⌋} p^d. The latter sum is a geometric series whose value is (p^{⌊r/2⌋+1} − p)/(p − 1) ≤ 2·p^{r/2}. Finally, m_r(p) ≥ (p^r − 2·p^{r/2})/r.
This statement shows that among all monic polynomials of degree r, about one in r is irreducible. One wishes to build an irreducible polynomial. At first sight, it is possible to choose a polynomial randomly, test its irreducibility, and restart the process until one chances on an irreducible polynomial. On average, one needs r draws to find an appropriate polynomial. However, in order to make computations with polynomials easier, it is interesting to obtain sparse polynomials, namely polynomials with very few nonzero coefficients. In this case, an exhaustive search might turn out to be more efficient in practice.
We propose the following hybrid Algorithm 1.9, which produces an irreducible polynomial – preferably a sparse one. It is based on the idea of trying polynomials of the form X^r + g(X), with g a random polynomial of degree significantly lower than r.
1.3.4.5 Construction of Finite Fields
Now, we have all the elements necessary to build a finite field of size p^d, with p a prime number and d a positive integer. The method for building finite fields is contained in the proof of the following result:
Theorem 8
For every prime number p and every integer d ≥ 1, there exists a field K of p^d elements. This field is unique up to isomorphism.
Proof
Let p be a prime number and let F_p[X] be the ring of polynomials with coefficients in F_p = ℤ/pℤ. For d = 1, ℤ/pℤ is a field with p elements. For d = 2, Proposition 1 guarantees the existence of at least one irreducible polynomial of degree 2: there are only p irreducible monic polynomials of degree 1, and p², the degree of X^{p²} − X, satisfies p² > p. For larger d, Equation (1.11) shows that the number of irreducible monic polynomials of degree d is positive, and thus there exists at least one irreducible polynomial P of degree d in F_p[X]. Then, the quotient ring V = F_p[X]/P is a field.
As P is of degree d and the coefficients are in F_p, there are p^d possible remainders modulo P. Thus, V has exactly p^d elements.
According to Property 6 on page 48, any field of cardinal number p^d is isomorphic to V.
Remark 1
The isomorphism between the vector space (ℤ/pℤ)^d and F_p[X]/P equips (ℤ/pℤ)^d with a field structure.
Indeed, to any vector u = [u_0, …, u_{d−1}] in the vector space (ℤ/pℤ)^d, one can associate in a bijective way the polynomial ψ(u) = u_0 + u_1·X + … + u_{d−1}·X^{d−1}. Moreover, one has the following property: ψ(u + v) = ψ(u) + ψ(v).
Hence, ψ is an isomorphism between the additive groups ((ℤ/pℤ)^d, +) and (F_p[X]/P, +). This equips (ℤ/pℤ)^d with a field structure in which multiplication is defined by u × v = ψ^{−1}(ψ(u)·ψ(v) mod P).
Hence, one can use a field structure with the vectors of (ℤ/pℤ)^d.
Exercise 1.23
Let K be a finite field of cardinal number q. Using the map φ from ℕ to K, defined by φ(n) = 1_K + … + 1_K (n times, with 1_K the neutral element for the multiplication in K),
prove that there exists a unique prime number p (called the characteristic of K) such that p·x = 0 for all x ∈ K.
Solution on page 288.
Exercise 1.24
(Sequel of the previous exercise)
Deduce from the previous exercise that the cardinal number of K is a power of p, using the fact that K is a vector space over its subfields. Hint: one may obtain a subfield of K isomorphic to ℤ/pℤ.
Solution on page 289.
Exercise 1.25 (Construction of the Field F_4)
1. Give a necessary and sufficient condition for a polynomial in F_2[X] of degree 2 or 3 to be irreducible. From this condition, deduce all irreducible polynomials of degrees 2 and 3.
2. Deduce all irreducible polynomials of degree 4.
3. Set F_4 = {0, 1, a, b}, with 0 the neutral element for the addition and 1 the neutral element for the multiplication. Using the first question, write the operation tables (+, ×, inverse) in F_4.
Solution on page 289.
1.3.5 Implementation of Finite Fields
1.3.5.1 Operations on Polynomials
A typical construction of arithmetic in a finite field is – for a given prime number p – to look for some irreducible polynomial P in F_p[X] of degree d, then to write the elements of F_{p^d} as polynomials, or as vectors, and finally to implement the arithmetic operations modulo P and p.
Example 1.7 (Construction of the Field )
There exists a field with 16 elements as 16 = 2⁴. In order to build the field F_16, one first looks for some monic irreducible polynomial P of degree 4 in F_2[X]. Then one establishes the rules of calculation in F_2[X]/P.
· Finding P.
The irreducible polynomial P is written as P = X⁴ + a·X³ + b·X² + c·X + 1, with a, b, and c in F_2 (the constant coefficient must be nonzero, otherwise X would divide P). In order to determine P, let us examine all possible values for the triplet (a, b, c). One cannot have a + b + c = 0, as in all these cases, 1 is a root of P. Thus, the triplet is to be searched for in the set {(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)}. The only irreducible polynomials over F_2 of degree at most 2 are X, X + 1, and X² + X + 1. To see whether P is irreducible, it is then sufficient to compute the GCD of P and X² + X + 1. The calculation (with the Euclidean algorithm, for example) of these GCDs shows that the only values of (a, b, c) for which P is irreducible are (1, 0, 0), (0, 0, 1), and (1, 1, 1). Thus, P = X⁴ + X³ + 1 is a possible choice. Let us make this choice.
· Operations on polynomials.
Thus, the elements of the field are the 16 polynomials of degree at most 3 over F_2. Therefore, the operations are performed modulo P. For example, X³ · X² = X⁵ mod P = X³ + X + 1.
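The search for P can be replayed exhaustively, packing polynomials over F_2 into integers where bit i holds the coefficient of X^i (the helper names are ours):

```python
def gf2_mod(a, b):
    """Remainder of a modulo b, polynomials over F_2 packed into integers
    (bit i is the coefficient of X^i)."""
    db = b.bit_length()
    while a.bit_length() >= db:
        a ^= b << (a.bit_length() - db)
    return a

def gf2_irreducible(f):
    """Naive test: f has no divisor of degree 1 .. deg(f) // 2."""
    half = (f.bit_length() - 1) // 2
    return all(gf2_mod(f, g) != 0 for g in range(2, 1 << (half + 1)))

# P = X^4 + a*X^3 + b*X^2 + c*X + 1, encoded as the integer 0b1abc1
candidates = [(a, b, c)
              for a in (0, 1) for b in (0, 1) for c in (0, 1)
              if gf2_irreducible(0b10001 | a << 3 | b << 2 | c << 1)]
print(candidates)  # [(0, 0, 1), (1, 0, 0), (1, 1, 1)]
```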
1.3.5.2 Use of Generators
There exist other ways to implement finite fields in which the multiplication will be performed much more quickly.
The idea is to use the property of finite fields according to which the multiplicative group of invertible elements of a finite field is cyclic. Namely, there exists at least one generator and the nonzero elements of the field are generated by this element. Hence, if g is a generator of the multiplicative group of a finite field F_q, all invertible elements can be written as g^i for 0 ≤ i ≤ q − 2.
One can choose to represent each invertible element simply by using its index i and represent zero by a special index. This construction – in which one represents the elements using their logarithm – is called exponential representation or cyclic representation, or Zech's construction. Then, typical arithmetic operations are greatly simplified using the following proposition:
Proposition 2
Let F_q be a finite field and let g be a generator of its group of invertible elements. Then g^{q−1} = 1. In addition, if the characteristic of F_q is odd, then g^{(q−1)/2} = −1. Otherwise, the characteristic is 2 and −1 = 1 = g⁰.
Proof
Clearly, one has g^{q−1} = 1. If the field is of characteristic 2, then, as 1 = −1 in F_2, one has −1 = 1 = g⁰. Otherwise, (g^{(q−1)/2})² = 1, and thus g^{(q−1)/2} = ±1. Yet, as we consider a field, the equation x² = 1 has at most two roots, 1 and −1. As g is a generator, the order of g is q − 1 rather than (q − 1)/2. The only remaining possibility is g^{(q−1)/2} = −1.
This statement gives the following encoding for an element x of F_q, if the group of invertible elements is generated by g: x is encoded by its index i such that x = g^i, and zero is encoded by a special index.
In particular, in our encoding scheme, the index of −1 is useful; it is equal to (q − 1)/2 if the characteristic of F_q is odd and equal to 0 otherwise. This enables one to write in a simple way all arithmetic operations.
· Multiplication and division of invertible elements are, respectively, implemented as an addition and a subtraction of indexes modulo q − 1.
· Therefore, negation (taking the opposite) is simply the identity in characteristic 2, or an addition with (q − 1)/2 modulo q − 1 if the characteristic is odd.
· Addition is the most complex operation. One must implement it using the other operations. For example, it is possible to do so in the following way: if g^i and g^j (with i ≥ j) are two nonzero elements in a finite field, then g^i + g^j = g^j·(g^{i−j} + 1). This requires knowing the index of g^k + 1 for all k. This is done by precomputing a table of size q containing the index of the successor z + 1 of each element z in the field. Eventually, addition is implemented with one subtraction of indexes (i − j), one access to the table (the index of g^{i−j} + 1), and one addition of indexes (modulo q − 1).
Table 1.4 Zech's Construction on Invertible Elements in Odd Characteristic

Operation        Average cost: +/−    Tests    Table access
Multiplication   1.5                  1        0
Division         1.5                  1        0
Negation         1.5                  1        0
Addition         3                    2        1
Subtraction      3.75                 2.75     1
In Table 1.4, we study the cost of these operations over the indices, assuming the existence of a single “table of successors” of size q. Here, we focus on the complexity of the calculation using the least amount of memory possible, considering random elements. We indicate the cost of the computations as the mean number of additions and subtractions of indices (+/−), the number of tests, and the number of accesses to the table.
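A sketch of this exponential representation for F_16, built here with the irreducible polynomial P = X⁴ + X³ + 1 and the generator g = X (both choices are ours, for illustration; elements are packed as 4-bit integers, bit i holding the coefficient of X^i):

```python
P = 0b11001          # X^4 + X^3 + 1, irreducible over F_2
Q = 16               # field size

# Build the exp/log tables from the generator g = X.
exp_table, log_table = [0] * (Q - 1), {}
x = 1
for i in range(Q - 1):
    exp_table[i] = x
    log_table[x] = i
    x <<= 1                       # multiply by X
    if x & Q:                     # degree reached 4: reduce modulo P
        x ^= P

def mul(u, v):
    """Multiplication via index addition modulo q - 1."""
    if u == 0 or v == 0:
        return 0
    return exp_table[(log_table[u] + log_table[v]) % (Q - 1)]

# X^3 * X^2 = X^5 = X^3 + X + 1
print(mul(0b1000, 0b0100))  # 11, i.e. 0b1011
```

That every nonzero element appears in the log table confirms in passing that X is indeed a generator for this choice of P.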
Exercise 1.26
Check that the polynomial X is a generator of the field F_16, constructed with the irreducible polynomial X⁴ + X³ + 1. Then, for two nonzero polynomials P and Q of F_16, perform the product P × Q and the sum P + Q using the operations on indexes described in Table 1.4.
Solution on page 289.
1.3.5.3 Primitive Roots
In order to put this implementation into practice, we need to find a way of producing generators of finite fields in the same way as we needed a way of producing prime numbers in order to build prime fields or irreducible polynomials to build nonprime fields.
Generators of prime fields A generator of the group of invertible elements of ℤ/nℤ is called a primitive root of n. We are also interested in the least primitive root of n, that is, the smallest integer that is a primitive root of n.
If p is a prime number, then ℤ/pℤ always has exactly φ(p − 1) primitive roots. Indeed, by Lagrange's theorem (Proposition 5, page 5), the order of an element of a group divides the number of elements in the group. This means that the order of any nonzero element of ℤ/pℤ divides p − 1. Now suppose that there exists one primitive root g, that is, g generates the group of invertible elements of ℤ/pℤ. Then, for any nonzero element x, there exists an index j such that x = g^j. The order of x is then (p − 1)/gcd(j, p − 1); that is, x is also a primitive root if and only if its index j is coprime with p − 1, and there are φ(p − 1) such indexes. Now, one has to compute at least one of these primitive roots. For this, one uses the following test, which checks whether the order of an element taken at random is p − 1 or not. The main difficulty is to factorize p − 1, at least partially, and we see how to do this in Section 1.4.3.5.
Theorem 9
Algorithm Test Primitive Root is correct.
Proof
One uses the result of Section 1.3.2.1: if a is an integer of order k modulo p, then a^m ≡ 1 mod p if and only if k divides m.
One deduces that if the order of a is lower than p − 1, as it divides p − 1, then necessarily one of the values (p − 1)/q, for q a prime factor of p − 1, will be a multiple of the order of a, so that a^{(p−1)/q} ≡ 1 mod p. Otherwise, the only possible value for the order of a is p − 1.
Therefore, a first method of finding a primitive root is to test, one after the other, all integers lower than p that are not equal to 1, nor to −1, nor to a power of another integer; in this way, one is able to find the least primitive root of p. Numerous theoretical results exist, proving that it does not take a great number of attempts to find this first primitive root: the least primitive root is generally small, of the order of a polynomial in r, with r the number of distinct prime factors of p − 1. Another method is to draw random integers lower than p and to test whether they are primitive roots or not. As there are φ(p − 1) primitive roots, the probability of success is φ(p − 1)/(p − 1); thus, the expected number of draws before finding a primitive root is (p − 1)/φ(p − 1), which is of the order of O(ln(ln(p))). This gives us a better chance than the brute-force method (trying all possibilities).
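A sketch of the primitive-root test and of the random search; the trial-division factorization below is a naive stand-in for the partial factorization discussed in Section 1.4.3.5, and all names are ours:

```python
import random

def prime_factors(n):
    """Distinct prime factors of n, by trial division (fine for small n)."""
    f, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            f.add(d)
            n //= d
        d += 1
    if n > 1:
        f.add(n)
    return f

def is_primitive_root(a, p):
    """a generates (Z/pZ)* iff a^((p-1)/q) != 1 for every prime q | p - 1."""
    return all(pow(a, (p - 1) // q, p) != 1 for q in prime_factors(p - 1))

def random_primitive_root(p):
    while True:
        a = random.randrange(2, p)
        if is_primitive_root(a, p):
            return a

print(is_primitive_root(3, 7))   # True: 3 generates {3, 2, 6, 4, 5, 1}
print(is_primitive_root(2, 7))   # False: 2 has order 3 modulo 7
```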
Generators of finite fields Now we know how to find a generator for a prime field. Let us consider the finite fields F_{p^k} with k > 1. In order to build them, let us recall that one has first to build ℤ/pℤ and then to find an irreducible polynomial of degree k over this field. The question is how to find a generator of the group of invertible elements of this field, in order to encode elements by their index rather than using polynomials. Encoding and arithmetic operations are then the same as those of prime fields.
Once again, we use a probabilistic approach. First of all, let us consider an algorithm testing whether a polynomial is a generator in . This algorithm is similar to the one we have seen for primitive roots in .
Therefore, once the field is built, an algorithm looking randomly for a generator is easy to implement. Besides, one can start the search in the set of polynomials of small degree. However, in order to manipulate sparse polynomials, it is useful to find an irreducible polynomial for which X itself is a generator. Such a polynomial is called X-irreducible, or primitive, and in general can be quickly found: in practice, for finite fields of common sizes, a significant proportion of the irreducible polynomials are X-irreducible. Therefore, an algorithm looking randomly for an X-irreducible polynomial requires only a few attempts on average. Thus, an algorithm for finding an irreducible polynomial having X as generator is a simple modification of Algorithm 1.9: if Algorithm 1.11 returns that X is not a generator, one does not select the irreducible polynomial found by Algorithm 1.9 and restarts the search.
Example 1.8
Let us return to the example of the field F_16, which we built with the irreducible polynomial P = X⁴ + X³ + 1.
Algorithm 1.11 performed on X returns that X is a generator. Therefore, one can perform computations using the powers of X (Exercise 1.26).
Recall the identification of the elements in F_16 and the operation rules: X⁴ = X³ + 1; X⁵ = X³ + X + 1; X⁶ = X³ + X² + X + 1; X⁷ = X² + X + 1; X⁸ = X³ + X² + X; X⁹ = X² + 1; X¹⁰ = X³ + X; X¹¹ = X³ + X² + 1; X¹² = X + 1; X¹³ = X² + X; X¹⁴ = X³ + X²; and X¹⁵ = 1.
With the invertible elements written in the form 1, X, X², …, X¹⁴, multiplication and inverse calculation in F_16 are performed more easily. Addition is also much easier considering that X^k + X^t = X^k·(1 + X^{t−k}) for all k and t in {0, …, 14} such that t ≥ k.
1.3.6 Curves Over Finite Fields
The exponentiation over finite fields is a good example of a one-way function, and we now have almost all the tools to construct and efficiently compute in those fields. On one hand, the field structure provides many tools for the construction of codes. On the other hand, this structure itself allows more possibilities for code breakers in cryptography. It is possible to define this type of one-way function in a more general structure, a group, so that cryptanalysis is even more difficult. An example of such a group structure is the set of points of a curve defined over a finite field.
In a generic group, we denote by + the group law (which is the multiplication in the group of invertible elements of a finite field, for example). Then, the multiplication by an integer (i.e., that integer number of calls to the group law, which corresponds to the exponentiation by an integer in a finite field) can be used as a one-way function. The discrete logarithm problem, in this general formulation, is to find the number of times one has to add a given generator of the group in order to obtain a given element of the group. We denote this multiplication as Q = n·P, for some scalar (integer) n and an element P of a group (Table 1.5).
Table 1.5 Discrete Logarithm and Exponentiation in Finite Fields and Generic Groups

Group F_q*: exponentiation is computing g^n, with g a generator of F_q* and n an integer; the DLP is, given y, find n such that g^n = y.
Group G (denoted additively): “exponentiation” is computing n·P = P + P + … + P, n times; the DLP is, given Q, find n such that n·P = Q.
1.3.6.1 Weierstrass Model
Let F_q be a finite field of characteristic p ≥ 5, and let a, b ∈ F_q be such that the polynomial X³ + aX + b does not have multiple roots, and consider the equation

Y²·Z = X³ + a·X·Z² + b·Z³.     (1.12)
If (x, y, z) is a solution of (1.12), then any multiple (λ·x, λ·y, λ·z), with λ ≠ 0, is also a solution. Two solutions are called equivalent if they are equal up to a nonzero constant multiplicative factor. This defines an equivalence relation. The elliptic curve E(F_q) is the set of equivalence classes of solutions of (1.12), which are called points of the curve. For one equivalence class, noted (x : y : z), if z ≠ 0, there exists a representative of the class of the form (x′, y′, 1). Indeed, just set x′ = x/z and y′ = y/z. On the other hand, if z = 0, then x must also be zero, and there is exactly one equivalence class of solutions with this form. It is denoted by O, the point at infinity, and its chosen representative is usually (0 : 1 : 0). In summary, the set of points is entirely defined by the cases z = 1 and z = 0; therefore, the definition of an elliptic curve can be simplified to

E(F_q) = {(x, y) : y² = x³ + a·x + b} ∪ {O}.     (1.13)
In fact, the general form of the equation of an elliptic curve is y² + a₁·x·y + a₃·y = x³ + a₂·x² + a₄·x + a₆. Now, if the characteristic of the field is neither 2 nor 3, then the change of variable y → y − (a₁·x + a₃)/2 yields an isomorphic curve of equation y² = x³ + a₂′·x² + a₄′·x + a₆′, and then a second change of variable x → x − a₂′/3 enables one to simplify the equation to (1.13). This can be generalized for fields of characteristics 2 and 3:
1. If p = 2 and a₁ ≠ 0, suitable changes of variables on x and y lead to an equation of the form y² + x·y = x³ + a·x² + b.
2. Else, p = 2 and a₁ = 0, and one obtains y² + c·y = x³ + a·x + b.
3. If p = 3 and a₂′ ≠ 0, a change of variable on x leads to y² = x³ + a·x² + b.
4. Else, p = 3 and a₂′ = 0, and one obtains y² = x³ + a·x + b.
Exercise 1.27
Verify that the given variable changes are correct.
Solution on page 289.
To make an exhaustive search for a discrete logarithm impossible in practice, the group of points of an elliptic curve has to be large enough. The following theorem states that the number of points is of the order of the size of the involved finite field.
Theorem 10 (Hasse)
For any prime power q, let N be the number of points of E(F_q); then |N − (q + 1)| ≤ 2·√q.
1.3.6.2 The Group of Points of an Elliptic Curve
Now that we have defined the points in an elliptic curve, we need to provide a group law. We first give the abstract definition.
Theorem 11
Let K be a field of characteristic at least 5 and E(K) an elliptic curve of equation y² = x³ + a·x + b. Then E(K), with the following rules for addition ⊕, is an abelian group:
· O is the neutral element for ⊕.
· For P = (x_P, y_P), the point −P = (x_P, −y_P) is the opposite of P for ⊕.
· For P = (x_P, y_P) and Q = (x_Q, y_Q), then:
· if Q = −P, then P ⊕ Q = O and
· else, P ⊕ Q = (x_R, y_R), with x_R = λ² − x_P − x_Q and y_R = λ·(x_P − x_R) − y_P, where:
§ if P = Q, then λ = (3·x_P² + a)/(2·y_P),
§ else λ = (y_Q − y_P)/(x_Q − x_P).
The rules of addition derive from the representation of elliptic curves over the real field: if P and Q are two different points on the curve, then P ⊕ Q is the symmetric (with respect to the x-axis) of the third intersection of the curve with the line PQ. In the same manner, P ⊕ P is the symmetric (with respect to the x-axis) of the second intersection of the tangent at P with the curve, as shown in Figure 1.9. In both cases, λ is the slope of the line, and the y-intercept can naturally be recovered as, for example, y_P − λ·x_P.
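A sketch of these rules over a prime field; the small curve y² = x³ + 2x + 2 modulo 17 and the base point (5, 1) are our illustrative choices:

```python
# Affine group law on y^2 = x^3 + a*x + b over Z/pZ, characteristic > 3.
# The point at infinity O is represented by None.
p, a, b = 17, 2, 2

def ec_add(P, Q):
    if P is None:
        return Q
    if Q is None:
        return P
    (xp, yp), (xq, yq) = P, Q
    if xp == xq and (yp + yq) % p == 0:
        return None                                        # Q = -P
    if P == Q:
        lam = (3 * xp * xp + a) * pow(2 * yp, -1, p) % p   # tangent slope
    else:
        lam = (yq - yp) * pow(xq - xp, -1, p) % p          # chord slope
    xr = (lam * lam - xp - xq) % p
    yr = (lam * (xp - xr) - yp) % p
    return (xr, yr)

def on_curve(P):
    if P is None:
        return True
    x, y = P
    return (y * y - x ** 3 - a * x - b) % p == 0

P0 = (5, 1)
print(ec_add(P0, P0))              # (6, 3)
print(ec_add(ec_add(P0, P0), P0))  # (10, 6)
```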
Exercise 1.28
Let E be an elliptic curve over a prime field and let P and Q be two points on it.
1. Check that P belongs to the curve.
2. Check that Q belongs to the curve.
3. Compute P ⊕ Q and check that it belongs to the curve.
4. Compute P ⊕ P, the doubling of P, and check that it belongs to the curve.
5. Compute Q ⊕ Q, the doubling of Q, and check that it belongs to the curve.
Solution on page 290.
Figure 1.9 (a) Elliptic curve addition and (b) doubling, on a curve over the reals
Once again, this law of addition can be generalized in characteristics 2 and 3 as given in Table 1.6.
Table 1.6 Group Laws in Characteristics 2 and 3, with P = (x_P, y_P), Q = (x_Q, y_Q) and P ⊕ Q = (x_R, y_R)

p = 2, curve y² + x·y = x³ + a·x² + b; opposite: −P = (x_P, x_P + y_P).
Addition (Q ≠ ±P): λ = (y_P + y_Q)/(x_P + x_Q); x_R = λ² + λ + x_P + x_Q + a; y_R = λ·(x_P + x_R) + x_R + y_P.
Doubling: λ = x_P + y_P/x_P; x_R = λ² + λ + a; y_R = x_P² + (λ + 1)·x_R.

p = 2, curve y² + c·y = x³ + a·x + b; opposite: −P = (x_P, y_P + c).
Addition (Q ≠ ±P): λ = (y_P + y_Q)/(x_P + x_Q); x_R = λ² + x_P + x_Q; y_R = λ·(x_P + x_R) + y_P + c.
Doubling: λ = (x_P² + a)/c; x_R = λ²; y_R = λ·(x_P + x_R) + y_P + c.

p = 3, curve y² = x³ + a·x² + b; opposite: −P = (x_P, −y_P).
Addition (Q ≠ ±P): λ = (y_Q − y_P)/(x_Q − x_P); x_R = λ² − a − x_P − x_Q; y_R = λ·(x_P − x_R) − y_P.
Doubling: λ = a·x_P/y_P; x_R = λ² − a + x_P; y_R = λ·(x_P − x_R) − y_P.

p = 3, curve y² = x³ + a·x + b; opposite: −P = (x_P, −y_P).
Addition (Q ≠ ±P): λ = (y_Q − y_P)/(x_Q − x_P); x_R = λ² − x_P − x_Q; y_R = λ·(x_P − x_R) − y_P.
Doubling: λ = 2·a/y_P; x_R = λ² + x_P; y_R = λ·(x_P − x_R) − y_P.
Moreover, note that using any of these addition laws, the algorithm for multiplication by an integer remains almost exactly Algorithm 1.5, page 41, where multiplications are replaced by ⊕ and squarings are replaced by doublings. In this setting, exponentiation by squaring (or square-and-multiply) is often called double-and-add.
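The double-and-add structure itself does not depend on the group; here is a sketch over a generic additive group given by an add function and a neutral element (the names are ours, and the toy group is integer addition modulo 101):

```python
def double_and_add(n, P, add, zero):
    """Compute n*P = P + ... + P in a generic additive group."""
    result, base = zero, P
    while n:
        if n & 1:
            result = add(result, base)   # "add" step
        base = add(base, base)           # "doubling" step
        n >>= 1
    return result

# Toy group: integers modulo 101 under addition; n*P is then n*P mod 101.
add_mod = lambda u, v: (u + v) % 101
print(double_and_add(13, 7, add_mod, 0))  # 91, i.e. 13 * 7 mod 101
```

Replacing `add_mod` by an elliptic curve addition (with `zero = O`) gives the scalar multiplication used in elliptic curve cryptography.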
Exercise 1.29
Let E be an elliptic curve and P a point on it; compute 8·P with only three operations.
Solution on page 290.
Note that there exist many other coordinate systems, such as projective and Jacobian coordinates, which differ in the number of multiplications, squarings, additions, or inversions in the base field for the same group law. The choice of system depends on the respective speeds of these operations on the target architecture. Note also that for certain subsets of curves, such as Edwards curves, the coordinate system can be simplified, often leading to practical enhancements.
The US National Institute of Standards and Technology (NIST) recommends some elliptic curves^{4}, which contain a large number of points, with sizes ranging from 163 to 571 bits. Many other databases exist; let us mention Bernstein and Lange's Explicit-Formulas database^{5} and the Acrypta database^{6}, which contains some Edwards curves.
1.3.7 PseudoRandom Number Generators (PRNG)
Generation of random numbers is widely used in all the methods we have just seen and will be often used in the sequel. In particular, generating numbers randomly is a condition for the perfection of Vernam's OTP scheme (see page 15). Now, it is time to look more deeply at this problem which needs some development.
The definition of randomness is crucial in coding. Indeed, any message presenting some kind of organization (organization is supposed to be the opposite of randomness) is an angle of attack for compression and code breaking. Therefore, one should rely on a solid theory concerning randomness in order to build secure and efficient codes.
Producing a truly random event is unsolvable by computers – which by definition only respond to determined and predictable processes. In order to obtain values produced by “ true” randomness (even if this notion is not completely absolute and refers to what one can observe and predict), one has to call upon assumed unpredictable physical phenomena, such as thermal noise or the description of the Brownian motion of electrons in a resistor.
However, this production of numbers may be called randomness only because we are not able – given our current knowledge in these areas – to explain their mechanisms and because only probabilistic theories enable us to grasp them. With computers, we wish to proceed in the same way in order to generate random numbers. We apply some procedures that make the result unpredictable in practice. This is what we call pseudorandom generation.
Production of random numbers is a very complicated task that has attracted the attention of both machine designers (“ hardware” components, such as thermal noise) and software designers (“ software” products, see examples in the next section) for some time. One must pay attention to this main issue because there exist some efficient methods (we will study some of them) that enable one to detect nonrandom features in a sequence, which is supposed to be random, to recover the method that produced it, and then to break a randomly generated key. If the machine that generates the winning lotto combination were not based on some good random generation process, one would be able to predict the next winning combination.
One often generates a sequence of pseudorandom numbers by computing each number from the previous one (which obviously makes the process completely deterministic and eliminates all randomness), in such a significantly complicated way that – examining the sequence without knowing the method – one could believe that it was truly generated randomly.
A generator must satisfy certain properties to be called a pseudorandom generator. The generated numbers have to be independent from each other. Moreover, they must have high entropy, and ideally no rule should be recoverable from the sequence of generated numbers. There are several ways to determine whether a generator is acceptable. First of all, one makes it pass statistical tests in order to check that the distribution it produces does not significantly differ from the one expected from a theoretical model of randomness. Besides, one can also use algorithmic complexity principles: that is, show that no algorithm can predict the mechanisms of the generator in reasonable time.
For example, one can build a generator based on the model of Fibonacci's sequence, by producing the numbers $u_n = (u_{n-1} + u_{n-2}) \bmod m$, m being the maximum integer that can be produced. The main advantages of this method are the following: it is very easy to implement, very fast in execution, and the “modulo” operation gives the generator some hard-to-predict behavior. However, this generator – like most typical and simple generators – has drawbacks, and it is possible to recover its behavior through statistical analysis.
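As an illustration, here is a minimal sketch of such a Fibonacci-style generator (the function name and parameters are ours, chosen for the example):

```python
def fibonacci_generator(u0, u1, m, count):
    """Additive Fibonacci-style generator: u_n = (u_{n-1} + u_{n-2}) mod m."""
    a, b = u0, u1
    out = []
    for _ in range(count):
        a, b = b, (a + b) % m  # the modulo makes large values wrap around
        out.append(b)
    return out
```

Starting from $u_0 = u_1 = 1$ with $m = 100$, the first values are 2, 3, 5, 8, 13: easy to compute, but each term is determined by its two predecessors, a regularity that statistical analysis detects quickly.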
The requirements for a pseudorandom generator are very similar to the properties one expects from a ciphertext. Indeed, it must be impossible, when receiving a message or a number, to find out the way it was produced without knowing the key. That is why some methods for random number generation look like cryptographic methods or use them.
1.3.7.1 Congruential Generators
One says that a generator is a Linear Congruential Generator (LCG) if it satisfies the following principle: if $(x_n)$ is the sequence of generated random numbers, one calculates $x_n$ from its predecessor: $x_n = a x_{n-1} + b \bmod m$, with m a large number and $0 \le x_n < m$. The generator is called multiplicative if $b = 0$. Such a sequence is always periodic; thus, one will have to choose a, b, and m such that the period is significantly high. For example, for $m = 16$, $a = 5$, $b = 0$, and $x_0 = 1$, the generated sequence is $5, 9, 13, 1, 5, \dots$ and the period is 4. Hence, this generator is not satisfactory at all.
The maximum period is obviously m. There exists a result describing all generators of period m:
Theorem 12
The Linear Congruential Generator defined by $x_n = a x_{n-1} + b \bmod m$ is of period m if and only if b is coprime with m and a is a primitive root of m.
One usually chooses m as the greatest prime number that can be handled by the machine (we have seen how to generate such a number on page 44). Obviously, a large period is not a sufficient criterion for the production of random generators (consider the choice $a = b = 1$, which gives a large period but a completely predictable sequence). Exercise 1.42, on page 85, is an approach to methods of attacking LCGs.
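The following sketch (our own illustration, not the book's code; the function names are assumptions) implements an LCG and measures the period reached from a given seed:

```python
def lcg(x0, a, b, m, count):
    """Generate count values of the LCG x_n = (a*x_{n-1} + b) mod m."""
    out, x = [], x0
    for _ in range(count):
        x = (a * x + b) % m
        out.append(x)
    return out

def period(x0, a, b, m):
    """Length of the cycle eventually reached from x0.
    Since there are only m possible values, a repeat must occur."""
    seen, x = {}, x0
    for i in range(m + 1):
        if x in seen:
            return i - seen[x]
        seen[x] = i
        x = (a * x + b) % m
```

For instance, the multiplicative generator with $m = 16$, $a = 5$ cycles through 5, 9, 13, 1 from $x_0 = 1$, a period of only 4, whereas $a = b = 1$ reaches the full period m but is trivially predictable.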
1.3.7.2 Linear Feedback Shift Register (LFSR)
One can generalize Linear Congruential Generators using not only the previous value to build the next element in the sequence but also several previous values, namely $x_n$ is computed from a linear combination of $x_{n-1}, \dots, x_{n-k}$. In other words:
$$x_n = a_1 x_{n-1} + a_2 x_{n-2} + \dots + a_k x_{n-k} \bmod m.$$
These generators are particularly interesting if m is a prime number because their maximum period is then $m^k - 1$, and this maximum is reached, see Theorem 13. Hence, it is possible, even with a small modulo, to have very large periods.
For example, in order to generate random sequences of bits, one chooses $m = 2$. In this case, the operations can be performed very quickly on a machine, with “eXclusive ORs” (xor) for the addition modulo 2 and with shifts on the bits to generate the next bits. There even exist specialized chips performing the necessary operations. One then talks about a Linear Feedback Shift Register (LFSR). Figure 1.10 summarizes their mechanism.
Figure 1.10 Functional diagram of an LFSR
For some computations, it is interesting to write an LFSR in a polynomial form: set $\Pi(X) = X^k - (a_1 X^{k-1} + \dots + a_{k-1} X + a_k)$.
Hence, $\mathrm{LFSR}_\Pi(x_0, \dots, x_{k-1})$ refers to the infinite sequence of bits linearly generated by the polynomial $\Pi$, having the first k initial values set to $x_0, \dots, x_{k-1}$.
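A minimal software simulation of such a binary linear recurrence (names and conventions are ours, for illustration only):

```python
def lfsr_sequence(coeffs, init, count):
    """First count bits of x_n = a_1 x_{n-1} xor ... xor a_k x_{n-k} (mod 2),
    with coeffs = [a_1, ..., a_k] and init = [x_0, ..., x_{k-1}]."""
    bits = list(init)
    while len(bits) < count:
        new = 0
        for j, a in enumerate(coeffs, start=1):
            new ^= a & bits[-j]  # term a_j * x_{n-j}, addition modulo 2
        bits.append(new)
    return bits[:count]
```

With $k = 4$ and the recurrence $x_n = x_{n-1} \oplus x_{n-4}$ (associated with the primitive polynomial $X^4 + X^3 + 1$), any nonzero initial state yields the maximal period $2^4 - 1 = 15$.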
Exercise 1.30
Write the first eight values generated by the following shift register:
Solution on page 290.
Finally, we have the equivalent of Theorem 12 for LCGs, with the primitive root replaced by a primitive polynomial.
Theorem 13
For a polynomial $\Pi$ of degree k, the sequence $\mathrm{LFSR}_\Pi$ modulo a prime number p is of maximum period $p^k - 1$ if and only if $\Pi$ is a primitive polynomial in $\mathbb{F}_p[X]$.
These generators are quite fast. Besides, they also have a very large period. However, we will see in Section 1.4.3.2 that the Berlekamp–Massey algorithm enables one to predict the following bits without knowing the generator polynomial, provided that $2k$ successive values have been intercepted.
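To illustrate the attack, here is a standard Berlekamp–Massey sketch over $\mathbb{F}_2$ (our own illustrative implementation, not the book's algorithm of Section 1.4.3.2): given a stretch of output bits, it recovers the length and the connection polynomial of the shortest LFSR producing them.

```python
def berlekamp_massey(bits):
    """Return (L, c): L is the length of the shortest binary LFSR generating
    bits, and c = [1, c_1, ..., c_L] the connection polynomial coefficients,
    i.e. the recurrence x_n = c_1 x_{n-1} xor ... xor c_L x_{n-L}."""
    n = len(bits)
    c, b = [0] * n, [0] * n  # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # discrepancy between the observed bit and the LFSR's prediction
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:  # prediction failed: correct the connection polynomial
            t = c[:]
            for j in range(n - (i - m)):
                c[i - m + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L, c[:L + 1]
```

Feeding it the first eight bits 1, 0, 0, 0, 1, 1, 1, 1 of the register $x_n = x_{n-1} \oplus x_{n-4}$ recovers $L = 4$ and the recurrence, after which every subsequent bit can be predicted.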
These generators are used in practice to generate quickly some bits with good statistical properties but they have to be combined with other generators to be cryptographically secure.
Example 1.9 (Securing the Bluetooth Protocol)
Bluetooth is a short-range wireless technology whose aim is to simplify the connections between electronic equipment. It was designed to replace the wires between computers and their devices such as printers, scanners, mice, cellphones, pocket PCs, or even digital cameras.
In order to make this protocol safe, one uses some kind of Vernam encryption scheme (Section 1.2.1), but with a pseudorandom generator based on LFSRs: the encryption algorithm uses four LFSRs of respective lengths 25, 31, 33, and 39, for an overall $25 + 31 + 33 + 39 = 128$ bits. The 128 bits of the initial value represent the secret key of Bluetooth encryption. Figure 1.11 shows the functional diagram of this encryption scheme.
The four feedback polynomials that are used are primitive polynomials in $\mathbb{F}_2[X]$ of respective degrees 25, 31, 33, and 39, so that each register reaches its maximal period.
The 4 bits produced by these four LFSRs at each step are then combined using a nonlinear discrete function f, which produces the next bit of the output from its initial state and the successive values of the LFSRs, according to the following algorithm:
1. (operations over );
2. (operations over );
3. and are the 2 bits encoding ;
4. (operations over ); and
5. (operations over ).
Figure 1.11 Bluetooth encryption
1.3.7.3 Cryptographically Secure Generators
One can use the principle of a one-way function, namely a function easy to compute but difficult to invert (computation in unreasonable time), to determine the quality of a generator.
The formal definition of a good quality generator is as follows. Given a generator and a finite sequence of bits it has generated, if it is possible, without knowing the method, to predict with good probability and in reasonable time the next bit of the sequence, then the generator cannot be considered as a random generator. Here, a good probability means significantly greater than $1/2$, the probability achieved by a blind guess on a sequence of bits.
If one is able to prove that there exists no efficient algorithm performing this prediction, then the generator is called cryptographic or cryptographically secure.
For example, the Blum–Micali generator proceeds in the following way:
· Generate a large prime number p.
· Let g be a primitive root of p (a generator of the group of invertible elements in $\mathbb{Z}/p\mathbb{Z}$).
· Let f be the modular exponentiation function $f(x) = g^x \bmod p$.
· Let B be the function with values in $\{0, 1\}$ defined by:
· $B(x) = 1$ if $x \le (p-1)/2$;
· $B(x) = 0$ if $x > (p-1)/2$.
The pseudorandom sequence of bits is then computed from the sequence $x_{i+1} = f(x_i)$ for $i \ge 0$, with $x_0$ any nonzero element in $\mathbb{Z}/p\mathbb{Z}$. One sets $b_i = B(x_i)$.
The function B is easy to compute: we have seen in the previous subsections how to generate a prime number, how to find a primitive root, and how to carry out modular exponentiation. Finally, when computing $B(x_i)$, one has the value of $x_i$, which has just been calculated from $x_{i-1}$. However, without the sequence $(x_i)$, one can prove that finding $b_i$ is as difficult as computing the discrete logarithm. As we do not know any efficient algorithm solving the discrete logarithm problem, it is a difficult problem. Therefore, the generator is cryptographically secure.
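A toy-sized sketch of this construction (our own illustration: the function name is assumed, and the prime is deliberately tiny for readability, whereas a real generator requires a large prime):

```python
def blum_micali(p, g, x0, count):
    """Blum-Micali sketch: iterate x_{i+1} = g^x_i mod p and output the bit
    1 if the current state is at most (p-1)/2, else 0."""
    bits, x = [], x0
    for _ in range(count):
        x = pow(g, x, p)  # modular exponentiation f(x) = g^x mod p
        bits.append(1 if x <= (p - 1) // 2 else 0)
    return bits
```

With $p = 23$ and the primitive root $g = 5$, the seed $x_0 = 3$ runs through the internal states 10, 9, 11, 22, 1, 5, 20, hence the output bits 1, 1, 1, 0, 1, 1, 0.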
1.3.7.4 Several Statistical Tests
The previous methods are based on the wellknown algorithmic difficulty of some problems, which makes it impossible to predict the behavior of generators. In order to measure the quality of a generator, one can also examine the sequences generated, and test whether they diverge from what we expect from a truly random generator. This is a difficult task as the criteria are numerous and are not necessarily trivial.
Statistics provide us with an adequate tool for these tests. For example, the $\chi^2$ test enables one to measure the deviance with respect to an expected uniform discrete law.
For each character i of the alphabet V, one has the expected probability $p_i$ and the number of occurrences $n_i$ in the generated sequence of size n. The expected frequencies are never exactly equal to the empirical frequencies. Therefore, one has to set the threshold of divergence beyond which the generator is no longer considered as random.
One has to keep in mind that, when considering a random generator, all sequences are possible a priori, even those whose distribution is completely different from the expected one because the generator is actually random. These sequences are only very unlikely to appear. If one observes such sequences in the output of a generator, then the generator is probably not so good (even if these sequences can theoretically appear in the output of a good generator). Here is how the test works.
One measures the gap between the expected distribution and the observed distribution using the parameter:
$$K = \sum_{i=1}^{|V|} \frac{(n_i - n\,p_i)^2}{n\,p_i}.$$
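Computing K is straightforward; a minimal sketch (the function name is ours):

```python
def chi_squared_statistic(counts, probs, n):
    """K = sum over the alphabet of (n_i - n*p_i)^2 / (n*p_i),
    where counts[i] = n_i (observed) and probs[i] = p_i (expected)."""
    return sum((ni - n * pi) ** 2 / (n * pi) for ni, pi in zip(counts, probs))
```

For 100 tosses of a supposedly fair coin observed as 45 heads and 55 tails, $K = 25/50 + 25/50 = 1.0$, a value to be compared with the thresholds given by the $\chi^2$ tables.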
Now, it remains to determine the acceptable values for the parameter K. They are given by the tables of the $\chi^2$ distribution, of which we give an extract in Table 1.7.
Table 1.7 $\chi^2$ Table (Extract)

Degrees of freedom   P = 0.75   P = 0.95   P = 0.99
 9                    11.39      16.92      21.67
10                    12.55      18.31      23.21
11                    13.70      19.68      24.72
12                    14.85      21.03      26.22
15                    18.25      25.00      30.58
20                    23.83      31.41      37.57
30                    34.80      43.77      50.89
In this table, the first column gives the number of “degrees of freedom.” One sets this number to the value $|V| - 1$; namely, for an alphabet of size 21, one reads the line numbered 20. The first line gives the probability P of having the value K lower than the value in the table. For example, the probability of having K greater than 24.72 for an alphabet of size 12 is $1 - 0.99 = 0.01$.
Exercise 1.31 ($\chi^2$ test)
A pseudorandom generator that is supposed to generate numbers between 0 and 10 according to a uniform law gives the sequence: 0 0 5 2 3 6 4 2 0 2 3 8 9 5 1 2 2 3 4 1 2. Perform the $\chi^2$ test on this sequence. What do you think of this generator? What do you think of the test?
Solution on page 290.
Obviously, such a test – although it can be useful and sufficient to reject a generator – is not sufficient to accept a generator. For example, it will not be able to reject a perfectly regular sequence in which every symbol appears equally often, whereas even a not-so-trained eye will notice the regularity in it (although one should distrust the impression of regularity one can have looking at a sequence, as it can be biased by false intuitions about what true randomness actually looks like).
One can, for example, strengthen this test by applying it to the extensions of the source induced by the message (Section 1.2.3.4). There exist numerous statistical tests reinforcing the trust one can have in a generator. It is important to notice that each test enables one to reject a generator, but only the set of all tests enables one to accept it (and even then, without mathematical certainty). There is no guarantee that – after succeeding in x tests – the generator will not reveal itself to be weak under the $(x+1)$-th test.