The Lehmann primality test.
Fast, accurate, easy: pick three!
The world of primality tests is fraught with its own version of the Project Triangle. When you are trying to determine whether a number n is prime, you have three basic options:
- The trial-division method, wherein you try each number up to √n and see if it’s a factor. This is accurate (no number will ever fool this test, because it depends on the very definition of primality) and easy to implement, but gets slow quickly.
- Fermat’s primality test. This will be familiar to readers of SICP. It’s easy (once you know the simple theorem behind it) and fast, but not particularly accurate: the pesky Carmichael numbers fool it every time, making it report composite numbers as prime. Despite Abelson and Sussman’s assurance that “the chance of stumbling upon a value that fools the Fermat test is less than the chance that cosmic radiation will cause the computer to make an error,” it’s bad cryptography if an attacker can use a known set of pseudoprimes.
- The Solovay–Strassen and Miller–Rabin tests, which are fast and accurate, but not easy to implement: the former requires understanding the Legendre and Jacobi symbols (arcane mathematical concepts), and the latter requires decomposing n − 1 into an odd number times a power of two and reasoning about a chain of square roots.
I decided that Plan’s standard library should have a primality test. Not wanting to use the Fermat test because of its inaccuracy, I seemed to be stuck with either the slow trial division or the complicated Miller–Rabin. (I’m unwilling to clutter the source code of the standard library for the sake of a simple prime number checker.) Fortunately, I happened upon an obscure algorithm that seemed to be perfect. It’s called Lehmann’s test.
Here’s the Lehmann algorithm, in four easy steps, for testing a number n:
1. Pick a random integer a, 1 ≤ a < n.
2. Let x be a^{(n−1)/2} mod n.
3. If x is either 1 or (−1 mod n), then n might be prime. Keep trying different random values of a to be sure.
4. Otherwise, n is definitely composite: no fooling.
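The steps above can be sketched in a few lines of Python (my own illustrative helper, not part of any library; Python’s built-in three-argument pow does the modular exponentiation):

```python
import random

def lehmann_round(n):
    """One round of Lehmann's test on an odd n > 2.

    Returns False if n is definitely composite,
    True if this round says "might be prime"."""
    a = random.randrange(1, n)       # step 1: random a with 1 <= a < n
    x = pow(a, (n - 1) // 2, n)      # step 2: a^((n-1)/2) mod n
    return x == 1 or x == n - 1      # step 3: pass iff x is ±1 (mod n)
```

For a prime n, every round passes (by Euler’s criterion), so the test only becomes probabilistic for composites: each round that passes roughly halves the chance that n is secretly composite.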
When I understood this test, I was struck by its simplicity and elegance. It works so similarly to the Fermat test, yet (as far as I can tell) is far more accurate.
How do we go about implementing this test, then? Of course, I originally wrote it in Plan, but I’ll present it here in Scheme. We’ll use the same expmod presented in SICP:
(define (square x) (* x x))

(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (square (expmod base (/ exp 2) m)) m))
        (else
         (remainder (* base (expmod base (- exp 1) m)) m))))
(That’s 1996-style Scheme, of course. You can probably make a few modernisations if you want.)
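For comparison, here’s the same successive-squaring idea in Python (my own sketch; in practice Python’s built-in pow(base, exp, m) already does this for you):

```python
def expmod(base, exp, m):
    # Fast modular exponentiation by successive squaring,
    # mirroring SICP's expmod.
    if exp == 0:
        return 1
    if exp % 2 == 0:
        half = expmod(base, exp // 2, m)
        return (half * half) % m
    return (base * expmod(base, exp - 1, m)) % m
```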
We also need a random number procedure that’s guaranteed never to give 0, which will cause our test to fail:
(define (random>0 n) (+ 1 (random (- n 1))))
Here’s the test expressed simply in Scheme, where k is our value of certainty (the number of times to run the test) and n is the number we’ll test:
(define (prime? k n)
  (define (lehmann-test tries a)
    (let ((x (expmod a (/ (- n 1) 2) n)))
      (cond ((= tries 0) #t)
            ((or (= x 1) (= x (modulo -1 n)))
             (lehmann-test (- tries 1) (random>0 n)))
            (else #f))))
  (lehmann-test k (random>0 n)))
If you try this out in your Scheme interpreter, you’ll notice that it consistently gives errors when you try to find the primality of 1, or of an even number (including 2!). It doesn’t take much effort to fix that, though:
(define (prime? k n)
  (define (lehmann-test tries a)
    (let ((x (expmod a (/ (- n 1) 2) n)))
      (cond ((= tries 0) #t)
            ((or (= x 1) (= x (modulo -1 n)))
             (lehmann-test (- tries 1) (random>0 n)))
            (else #f))))
  (cond ((< n 2) #f)
        ((= n 2) #t)
        ((even? n) #f)
        (else (lehmann-test k (random>0 n)))))
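If you’d rather experiment outside a Scheme interpreter, here’s a Python translation of that fixed version (the function name is mine; same structure, same edge cases):

```python
import random

def prime_p(n, k=25):
    """Lehmann primality test: True means "probably prime"."""
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False
    for _ in range(k):
        a = random.randrange(1, n)   # never 0, like random>0
        x = pow(a, (n - 1) // 2, n)  # a^((n-1)/2) mod n
        if x != 1 and x != n - 1:    # x is not ±1 (mod n)
            return False             # definitely composite
    return True                      # probably prime
```

With the edge cases handled up front, prime_p(1), prime_p(2), and any even input all return sensible answers instead of blowing up inside the exponentiation.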
According to this page, Lehmann’s algorithm has an overall certainty of 1 − 2^{−k} (though that page also gives a bogus definition of the test, so I’d take it with a pinch of salt). If that figure is right, it means:
- If you run it 10 times, it will be correct with 99.9% certainty: it’ll be wrong about once in 1,024 runs.
- If you run it 25 times, it will be correct with 99.99999% certainty: a 1 in 33,554,432 chance of being wrong.
- If you run it 50 times, it will have a 99.9999999999999% (fifteen nines) level of reliability: on average, you’ll have to run it 1,125,899,906,842,624 (a thousand billion)^{1} times before it’s wrong.
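You can check those figures yourself; they’re nothing more than 2^k and the matching 1 − 2^{−k} (plain arithmetic, independent of the test’s code):

```python
# Error odds and certainty after k rounds, assuming a per-round
# error probability of 1/2 (the bound claimed for Lehmann's test).
for k in (10, 25, 50):
    odds = 2 ** k                      # a 1-in-odds chance of being wrong
    certainty = (1 - 2.0 ** -k) * 100  # percent certainty after k rounds
    print(f"k={k}: 1 in {odds:,}, {certainty:.13f}% certain")
```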
Don’t forget just how fast this test is. On my machine, testing the prime number 27,644,437 took about 1.2 milliseconds with a k value of 50 (far higher than you’ll probably need for your applications).
And it’s easy: you could write it on an index card, hand that to an eleven-year-old, and they could probably implement it for you.
As far as I know, there’s no equivalent of the Carmichael numbers for this test. It’s very hard to tell, though, because there’s a dearth of information on this test out there.^{2} Still, nobody seems to mention any known set of numbers for which it reports false positives.
Is this test too good to be true? I’d appreciate feedback about its correctness. If it is a perfect prime test (in that, except in cases attributable to its probabilistic nature, it will always produce the correct result for a given input number), I find myself wondering why it hasn’t become more popular.
Updates.
Colin Percival reports that the test is broken for values of n which, as I understand it, are relatively prime to a Carmichael number. I showed, and he explained, that large Carmichael numbers require large values of k for this test to be accurate. So the test isn’t the magic bullet it once seemed, but there seem to be far fewer values for which it gives answers that are outright wrong, compared to Fermat’s test.

1. Long-scale, or British billion. In short-scale (American) style, it’s a trillion. But still a very nice probability. ↩

2. Hence my writing about it. In fact, there’s so little information that it’s not clear whether it should be called the Lehmann test or the Lehman test, but the original paper (paywall’d) appears to spell its author’s name with two n’s. ↩