Primality testing

The Mastery of Computer Programming: Primary Algorithms - Sykalo Eugene 2023

Arithmetic Algorithms

Introduction to Primality Testing

Definition of prime numbers

A prime number is a positive integer greater than 1 whose only positive divisors are 1 and itself. For example, 2, 3, 5, and 7 are prime, while 4 = 2 × 2 and 9 = 3 × 3 are composite.

Importance of primality testing

Prime numbers play a crucial role in many areas of mathematics and computer science. For example, they are used in cryptography to generate secure encryption keys, in hashing algorithms to distribute data evenly across a hash table, and in number theory to prove important theorems.

Primality testing is the process of determining whether a given number is prime or composite. It is an important problem in computer science and has applications in many areas such as cryptography, number theory, and algorithms. There are many algorithms for primality testing, ranging from simple trial division to more complex probabilistic and deterministic algorithms.

In this chapter, we will cover the fundamental algorithms for primality testing, including the trial division method, the sieve of Eratosthenes, the Miller-Rabin test, and the AKS primality test. By the end of this chapter, readers will have a deep understanding of these algorithms and their applications in various fields.

Trial Division Method

The trial division method is the most basic and straightforward algorithm for primality testing. It works by dividing the given number n by all integers from 2 to n-1, and checking whether any of these integers divide n evenly. If none of these integers divide n evenly, then n is prime; otherwise, it is composite.

The time complexity of the trial division method is O(n) divisions, which is exponential in the number of digits of n and therefore very slow for large values of n. However, it is still useful for small values of n or for quickly testing whether a number is divisible by a small prime.

In practice, the trial division method can be optimized by first checking divisibility by 2 and then trying only odd divisors, since if n is not divisible by 2 it cannot be divisible by any larger even number. Additionally, the search range can be reduced to divisors up to the square root of n: if n = a * b with a > √n, then b < √n, so any composite n must have a divisor no larger than √n. With both optimizations, the running time drops to O(√n).
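The optimized method described above can be sketched as follows (a minimal Python sketch; the function name is mine):

```python
from math import isqrt

def is_prime_trial_division(n):
    """Trial division with both optimizations: check 2 first,
    then try only odd divisors up to floor(sqrt(n))."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2          # 2 is the only even prime
    for d in range(3, isqrt(n) + 1, 2):
        if n % d == 0:
            return False       # found a nontrivial divisor
    return True
```
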

Despite its simplicity, the trial division method is not efficient for large values of n and is not used in practice for primality testing. Instead, more complex algorithms such as the Miller-Rabin test and the AKS primality test are used, which have much better time complexity and are more reliable in practice.

Sieve of Eratosthenes

The sieve of Eratosthenes is an algorithm for finding all prime numbers up to a given limit. It works by iteratively marking as composite (i.e., not prime) the multiples of each prime, starting with the multiples of 2. The algorithm consists of the following steps:

  1. Create a list of consecutive integers from 2 through n: (2, 3, 4, ..., n).
  2. Initially, let p equal 2, the smallest prime number.
  3. Enumerate the multiples of p by counting in increments of p starting from p*p, and mark them in the list (these will be p^2, p^2 + p, p^2 + 2p, ...; smaller multiples of p have already been marked as multiples of smaller primes, and p itself is not marked).
  4. Find the smallest number in the list greater than p that is not marked. If there is no such number, stop. Otherwise, let p now equal this new number (which is the next prime), and repeat from step 3.
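The steps above can be sketched in a few lines (a minimal Python sketch; the function name is mine):

```python
def sieve_of_eratosthenes(n):
    """Return a list of all primes <= n."""
    if n < 2:
        return []
    is_prime = [True] * (n + 1)        # index i records whether i is still a candidate
    is_prime[0] = is_prime[1] = False
    p = 2
    while p * p <= n:                  # multiples of p > sqrt(n) are marked by smaller primes
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):   # start at p*p (step 3)
                is_prime[multiple] = False
        p += 1
    return [i for i, flag in enumerate(is_prime) if flag]
```
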

The time complexity of the sieve of Eratosthenes is O(n log log n), which is much faster than the trial division method for large values of n. However, it requires a large amount of memory to store the list of integers, which can be a disadvantage for very large values of n.

Despite its limitations, the sieve of Eratosthenes is a useful algorithm for finding prime numbers up to a given limit and is widely used in practice.

Miller-Rabin Test

The Miller-Rabin test is a probabilistic algorithm for primality testing based on congruence conditions that every prime number satisfies. A strong pseudoprime to base a is a composite number that nevertheless satisfies these conditions for that base, and so passes the test. Given an integer n and a base a, the Miller-Rabin test checks whether n satisfies the conditions for base a; if it does, n is either prime or a strong pseudoprime to that base.

The basic idea of the Miller-Rabin test is as follows:

  1. Write n-1 as 2^r * d, where d is odd.
  2. Choose a random integer a such that 1 < a < n-1.
  3. Compute a^d mod n. If this value is 1 or n-1, then n passes the Miller-Rabin test for base a and we conclude that n is probably prime.
  4. Otherwise, square repeatedly: for each i from 1 to r-1, compute a^(2^i * d) mod n. If any of these values is n-1, then n passes the Miller-Rabin test for base a and we conclude that n is probably prime.
  5. If the value computed in step 3 is neither 1 nor n-1, and none of the values computed in step 4 is n-1, then n is composite, and a is called a witness to its compositeness.
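The procedure above can be sketched as follows (a minimal Python sketch; the function name and the small-prime pre-filter are mine):

```python
import random

def miller_rabin(n, k=20):
    """Probabilistic primality test: a composite n is declared
    'probably prime' with probability at most 4**(-k)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):             # dispose of small cases directly
        if n % p == 0:
            return n == p
    # write n - 1 = 2**r * d with d odd (step 1)
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    for _ in range(k):
        a = random.randrange(2, n - 1)  # random base (step 2)
        x = pow(a, d, n)                # a^d mod n (step 3)
        if x == 1 or x == n - 1:
            continue                    # n passes for this base
        for _ in range(r - 1):          # repeated squaring (step 4)
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                # a witnesses compositeness (step 5)
    return True
```
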

The probability that the Miller-Rabin test incorrectly identifies a composite number as prime is at most 1/4 for each randomly chosen base a. By repeating the test with k independently chosen bases, the error probability drops to at most 4^(-k), so it can be made arbitrarily small.

The time complexity of the Miller-Rabin test is O(k log^3 n), where k is the number of iterations and n is the number being tested. The value of k required to achieve a desired level of accuracy depends on the size of the number being tested.

The Miller-Rabin test is widely used in practice for primality testing, especially for large values of n. It is far faster than the trial division method and the sieve of Eratosthenes for large n, and it is simpler and more efficient than other probabilistic algorithms such as the Solovay-Strassen test. However, it is not guaranteed to give the correct answer, so deterministic algorithms such as the AKS primality test are necessary for applications where certainty is required.

Deterministic Primality Tests

Deterministic primality tests are algorithms that determine whether a given number is prime or composite with certainty, rather than using probabilistic methods like the Miller-Rabin test. These algorithms are generally more complex and slower than probabilistic methods, but have the advantage of always giving the correct answer.

One of the most well-known deterministic primality tests is the AKS primality test, published in 2002 by Agrawal, Kayal, and Saxena. The AKS test is based on a polynomial generalization of Fermat's little theorem: n is prime if and only if the congruence (x+a)^n ≡ x^n + a (mod n) holds as an identity between polynomials, for a coprime to n. Checking this identity directly would require handling n coefficients, so the AKS test instead verifies it modulo the polynomial x^r - 1 for a suitably chosen small r.
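The polynomial congruence (x+a)^n ≡ x^n + a (mod n) can be checked directly for small n in the simplest case a = 1: expanding (x+1)^n by the binomial theorem, the identity holds exactly when every middle binomial coefficient C(n, k), for 0 < k < n, is divisible by n. This is only an illustration of the identity, not the AKS algorithm itself (the function name is mine):

```python
from math import comb

def fermat_poly_identity_holds(n):
    """Check the polynomial identity (x + 1)^n == x^n + 1 (mod n).

    By the binomial theorem, the identity holds exactly when every
    middle coefficient C(n, k), 0 < k < n, vanishes modulo n --
    which is the case if and only if n is prime."""
    return all(comb(n, k) % n == 0 for k in range(1, n))
```
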

The basic idea of the AKS primality test is as follows:

  1. Check whether n is a perfect power, that is, n = a^b for some integers a >= 2 and b >= 2. If so, n is composite.
  2. Find the smallest r such that the multiplicative order of n modulo r is greater than log^2 n.
  3. If 1 < gcd(a, n) < n for some a <= r, then n is composite.
  4. If n <= r, then n is prime.
  5. For each a from 1 to floor(sqrt(phi(r)) * log n), check whether (x+a)^n is congruent to x^n + a (mod x^r - 1, n). If any of these congruences fails, n is composite.
  6. Otherwise, n is prime.
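A full AKS implementation requires modular polynomial arithmetic, but the perfect-power check in step 1 is easy to sketch on its own (a minimal Python sketch; the function name is mine):

```python
def is_perfect_power(n):
    """Return True if n = a**b for some integers a >= 2, b >= 2."""
    if n < 4:
        return False
    b = 2
    while (1 << b) <= n:          # smallest possible base is 2, so b <= log2(n)
        # binary-search for an integer b-th root of n
        lo, hi = 2, n
        while lo <= hi:
            mid = (lo + hi) // 2
            p = mid ** b
            if p == n:
                return True       # found n = mid**b
            if p < n:
                lo = mid + 1
            else:
                hi = mid - 1
        b += 1
    return False
```
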

The time complexity of the AKS primality test is polynomial in the number of digits of n: the original analysis gave roughly O(log^10.5 n) up to logarithmic factors, and improved variants run in about O(log^6 n). This is still much slower in practice than probabilistic methods like the Miller-Rabin test, but the AKS test has the advantage of always giving the correct answer and is therefore useful in applications where certainty is required.

Deterministic primality tests like the AKS primality test are not used as widely as probabilistic methods in practice, due to their slower time complexity and higher implementation complexity. However, they are still important for certain applications where high reliability is required, such as in cryptography and number theory.