We've established that determining the prime factorization of a large, randomly chosen integer is not as easy as just performing trial division by all potential prime factors, or even using more sophisticated factorization algorithms such as the general number field sieve. The reason is that none of these algorithms scales well enough as the size of the number grows. As a consequence, we can always choose numbers large enough that these techniques are infeasible to carry out in a meaningful time frame. In fact, for cryptographic purposes, that is precisely the goal we want to achieve. But in situations where we aren't interested in finding the actual factors, just whether or not non-trivial factors exist, we have some powerful tools at our disposal. In this lesson, we will look at one of the simplest: Fermat's primality test. This test is probabilistic, meaning that when we perform it, it will not definitively tell us whether the number is prime. In fact, it isn't really a primality test so much as a compositeness test, in that if it tells us that a number is composite, then we know for a fact that it is, and hence not prime. But the alternate outcome is only that it fails to establish that the number is composite, which implies only that the number might be prime, not that it must be. To get a sense of this, imagine an even simpler test, Bob's primality test. This test says that if you choose a random number greater than two and the number is even, then it is composite, without a doubt. If the number is odd, however, it only tells you that the number might not be composite, or equivalently, that it might be prime. Fermat's primality test is based on Fermat's Little Theorem, which states that if p is a prime number, then any number a satisfies the relation that a to the pth power is congruent to a (mod p).
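Fermat's Little Theorem is easy to check numerically. The sketch below (the prime 13, the composite 12, and the range of bases are arbitrary choices for illustration) confirms the relation for a prime modulus and shows it failing for a composite one:

```python
# Fermat's Little Theorem: for a prime p, a^p ≡ a (mod p) for every integer a.
# The prime 13 and the range of bases are arbitrary illustrative choices.
p = 13
for a in range(0, 2 * p):
    assert pow(a, p, p) == a % p  # holds for every base a when p is prime

# For a composite modulus, the relation generally fails:
print(pow(2, 12, 12))  # 4096 mod 12 = 4, not 2
```

Python's three-argument `pow` performs modular exponentiation efficiently, which is what makes this test practical even for very large numbers.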
If a and p are relatively prime, then a has a multiplicative inverse (mod p), and the relation can then be rewritten as a raised to the p-1 power is congruent to 1 (mod p). This should look familiar, since it is a special case of Euler's Totient Theorem. So how do we construct a primality test from this? After choosing a random candidate for p, we pick a value for a that is less than p. We can't choose 0, since 0 is not relatively prime to any number, including p. We could choose 1, but this trivially satisfies the relation and so tells us nothing. Similarly, since p is going to be an odd number (two is the only even prime, and two would be a rather poor choice for a cryptographically useful prime), p-1 will be even, and hence choosing a = -1, which is congruent to p-1 (mod p), will also trivially satisfy the relation. So we pick a value for a such that it lies strictly between 1 and p-1. We evaluate a raised to the p-1 power (mod p) and see if the result is congruent to 1. If it is not, then we have established that p is a composite number, and the value a is known as a Fermat witness to that fact. But if the result is congruent to 1, then we have two possibilities: either p really is prime, which is what we're hoping for, or p is actually composite and this particular choice of base a failed to detect it. When this happens, p is known as a Fermat pseudoprime to the base a, or just a pseudoprime to the base a, and the value a is known as a Fermat liar for this value of p. For a concrete example, let's say that we randomly pick p = 15, and for the moment, let's assume that this value is too large for us to even attempt to factor, but we want to see if it is prime. We randomly choose a value of a equal to 4 and calculate 4 to the 14th power (mod 15), which is congruent to 1, so 15 might be prime. But since we know that it might not be, we hedge our bets, pick another value of a, and choose 2. 2 to the 14th power (mod 15) is congruent to 4.
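The single-base test just described can be sketched in a few lines; the function name here is an arbitrary choice, and the example reproduces the p = 15 calculations from the text:

```python
def fermat_test(p, a):
    """Return False if a is a Fermat witness that p is composite;
    True if p might be prime (a could still be a Fermat liar)."""
    return pow(a, p - 1, p) == 1

# Reproducing the example from the text with p = 15:
print(fermat_test(15, 4))  # True  -> 4^14 ≡ 1 (mod 15); 4 is a Fermat liar
print(fermat_test(15, 2))  # False -> 2^14 ≡ 4 (mod 15); 2 is a Fermat witness
```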
So this proves that 15 is composite and that 2 is a Fermat witness for 15. But since 15 passed the test for the base 4, we call 15 a Fermat pseudoprime to the base 4, and 4 is a Fermat liar for 15. In general, the more bases we choose to test, the more likely we are to find a Fermat witness should p not be prime. But this isn't guaranteed, and in fact it's possible to find a composite number that is a pseudoprime to every base that is relatively prime to it. For these numbers, we could exhaustively test every such base and conclude that the number almost has to be prime. These numbers are called Carmichael numbers, and the smallest one is 561. While there are infinitely many Carmichael numbers, they're actually quite rare in comparison to the non-Carmichael pseudoprimes. Hence the odds are well in our favor that if we test a number against a lot of bases and do not find a Fermat witness, the number is prime. Because the Fermat test is simple, it is often the first in a suite of tests. The idea is to quickly rule out most selections as being not prime and then carry out better, and usually slower, tests on the survivors to further increase our confidence in the primality of the number. But this isn't always the case, and there are some mainstream cryptographic protocols that rely on Fermat's primality test alone to choose prime numbers. Even then, however, the Fermat test usually isn't the first screening that is performed. Recall that the expected density of prime numbers around the value n is about 1 over the natural log of n. So for a 500-bit number, we have about a 1 in 350 chance that a randomly selected number is prime. But if we pick a random number, discard it if it is even, and keep doing this until we have an odd number, then we have about a 1 in 175 chance of the number being prime.
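We can verify the claim about 561 directly by brute force, which is feasible only because 561 is small (561 = 3 × 11 × 17). Every base relatively prime to 561 turns out to be a Fermat liar:

```python
from math import gcd

n = 561  # the smallest Carmichael number: 561 = 3 * 11 * 17

# Every base a that is relatively prime to 561 is a Fermat liar:
all_liars = all(pow(a, n - 1, n) == 1
                for a in range(2, n) if gcd(a, n) == 1)
print(all_liars)  # True
```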
If we then check to see if the number is divisible by 3, we eliminate a third of the remaining numbers, leaving us with two-thirds of the prior half of the numbers we might choose, which gets us down to a chance of about 1 in 120. We can keep screening our numbers using successively larger primes and improve our odds at each step, but fairly quickly we reach a point of diminishing returns. Since performing trial division by a small handful of prime numbers is relatively cheap, we might choose to check the first 15 primes, which are the prime numbers less than 50. This will improve our chances to about 1 in 50. It probably isn't worth going much further than this. For instance, if we screen against the 95 prime numbers less than 500, we only further improve our odds to about 1 in 30. One technique that is sometimes used to improve our overall performance is to use a number sieve on a segment of the number line, starting with the random number that we chose. Imagine picking a random 500-bit number and then setting up an array having, say, 35,000 entries. Each entry only needs to hold a single bit that tells us whether the corresponding number is composite or not. The first element corresponds to our chosen number, the next element to the number after it, and so on. We expect there to be about 100 prime numbers within this range. We then walk through the range marking off the multiples of each of the first k primes, where perhaps k is several hundred, being sure to start at the appropriate offset based on the residue of our chosen number modulo each prime. This is a very fast and efficient way of eliminating a significant fraction of the potential values. After we have sieved out the multiples of those first k primes, we then perform a preset number of Fermat primality tests on each unmarked entry until we find one that fails to produce any Fermat witnesses for that preset number of tests. We don't need to pick our bases randomly, either.
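The sieve-then-test approach might look like the following sketch. It uses a small window, a short list of sieving primes, and fixed Fermat bases (all illustrative choices, far smaller than the 500-bit, 35,000-entry, several-hundred-prime setup described above), and it only tracks odd candidates; it also assumes the starting number is much larger than the sieving primes:

```python
def find_probable_prime(start, window=1000,
                        small_primes=(3, 5, 7, 11, 13, 17, 19),
                        fermat_bases=(2, 3, 5, 7)):
    """Sieve a window of odd candidates at or above `start` using a few
    small primes, then run Fermat tests on the survivors. The window
    size, sieving primes, and bases are illustrative choices."""
    if start % 2 == 0:
        start += 1                      # candidates are start, start+2, ...
    composite = [False] * window        # one flag per odd candidate
    for p in small_primes:
        # Solve start + 2*i ≡ 0 (mod p) for the first index i to mark:
        i = (-start * pow(2, -1, p)) % p
        while i < window:
            composite[i] = True         # candidate is a multiple of p
            i += p
    for i in range(window):
        if composite[i]:
            continue
        n = start + 2 * i
        # A survivor passes only if no base is a Fermat witness:
        if all(pow(a, n - 1, n) == 1 for a in fermat_bases if a < n):
            return n
    return None

print(find_probable_prime(10**6))  # 1000003, the first prime above 10**6
```

Note how 1000001 = 101 × 9901 slips through the sieve (both factors exceed the sieving primes) but is then caught by the base-2 Fermat test.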
We can, but it's not uncommon to just walk up through the prime numbers up to some maximum and use those as the bases. Frequently, we only test half a dozen or so before we reach a point where we're comfortable saying we think this number is prime. If it weren't for the existence of the Carmichael numbers, it's highly likely that nearly every protocol would rely on Fermat's primality test to choose large prime numbers. Unfortunately, they do exist, and this prevents us from making strong statements about the probability that our random number is prime based on how thoroughly we tested it. To achieve this, we perform further testing using different algorithms that allow us to place strong probabilistic bounds on the likelihood that a composite number will slip through. The first of these tests is the Miller-Rabin test, which we'll talk about in the next lesson.
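Testing against the first few primes as bases, as just described, can be sketched as follows (the particular six bases and the function name are illustrative choices). Interestingly, a Carmichael number like 561 is still caught here: the base 3 shares a factor with 561, so it cannot satisfy the relation, which is why the Carmichael property only covers bases relatively prime to the number:

```python
def fermat_probable_prime(n, bases=(2, 3, 5, 7, 11, 13)):
    """Run the Fermat test against a fixed list of small prime bases.
    Returns True if no base is a Fermat witness for n."""
    return all(pow(a, n - 1, n) == 1 for a in bases if a < n)

print(fermat_probable_prime(104729))  # True  (104729 is prime)
print(fermat_probable_prime(561))     # False (the base 3 divides 561)
```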