08 August 2002

Apparently some mathematicians/computer scientists in India have discovered a polynomial-time algorithm for primality testing. This was reported on Slashdot a few days ago, but I usually doubt Slashdot when it comes to math reporting. Anyway, the New York Times picked up the story today and quotes an MIT professor who read the paper, so it's likely to actually be correct. The link for the paper is at this site, but I haven't read it yet.

So what does this algorithm actually do and why does it matter? Well, it checks whether a given number n is prime or not. This in and of itself is not surprising; the simplest way to go about it would be to take every number from 2 up to the square root of n and check whether any of them divides n, as in the sketch below. Methods like this are way too slow. The method given in this paper works in "polynomial time," meaning the amount of time it takes is bounded by some polynomial function of log n (roughly the number of digits of n).
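Here's a minimal sketch of that naive trial-division idea, just to make the comparison concrete (the function name and the Python phrasing are mine, not from the paper). Its running time grows like the square root of n, which is exponential in the number of digits, and that's exactly why it's hopeless for cryptographic-sized numbers.

```python
import math

def is_prime_trial_division(n: int) -> bool:
    """Naive primality test: try every divisor from 2 up to sqrt(n)."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

print(is_prime_trial_division(104729))  # True -- 104729 is prime
```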

Why does this matter? Well, in one sense it doesn't. We already have many polynomial-time algorithms which can come up with a number that is virtually guaranteed to be prime (with, say, one-in-a-trillion odds of being wrong). For all practical purposes this is good enough. Thus this result is mainly of theoretical interest. One such probabilistic test is sketched below.
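The post doesn't name a specific probabilistic test, but the Miller-Rabin test is one standard example of the kind described here, so here's a hedged sketch of it. Each random round that a composite number survives happens with probability at most 1/4, so 20 rounds already pushes the error odds well below one in a trillion.

```python
import random

def miller_rabin(n: int, rounds: int = 20) -> bool:
    """Return True if n is probably prime, False if it is definitely composite."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 as 2^r * d with d odd.
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True
```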

What practical purposes are there for primality testing? The encryption schemes that, say, internet purchases are based on require a number which is the product of two large primes. In order to find such large primes one needs a good primality test; a sketch of how that search typically goes follows.
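This is a sketch of how a primality test actually gets used: pick random numbers of the desired size and test them until one passes. It assumes the miller_rabin function from the sketch above is in scope, and the bit size and names are illustrative choices of mine, not anything from the paper. By the prime number theorem, roughly one candidate in a few hundred of this size is prime, so the loop finishes quickly.

```python
import random

def random_prime(bits: int = 512) -> int:
    """Generate a random prime by testing random odd candidates of the given size."""
    while True:
        # Force the top bit (so the number really has `bits` bits) and the low bit (odd).
        candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if miller_rabin(candidate):
            return candidate

p = random_prime()
q = random_prime()
n = p * q  # the kind of modulus an RSA-style scheme is built on
```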

The real holy grail in this area would be a way of factoring numbers in polynomial time. A primality test only tells you whether the number is prime or composite; it doesn't give an actual factorization. In order to break RSA encryption one needs to factor a number which is the product of two large primes. If one could do this in polynomial time it would suddenly allow many of the codes which e-commerce depends on to be easily broken (and thus would probably break the DMCA).
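To see why factoring is the whole ballgame, here's a toy illustration (with tiny numbers, nothing like real key sizes): once you know the factors p and q of the public modulus, the RSA private exponent can be computed directly from the public one.

```python
p, q = 61, 53          # the secret primes (toy values)
n = p * q              # public modulus
e = 17                 # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)    # private exponent -- recoverable only if you can factor n

message = 42
ciphertext = pow(message, e, n)   # encrypt with the public key
recovered = pow(ciphertext, d, n) # decrypt with the recovered private key
assert recovered == message
```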
