Computes an infinite sequence of primes using simple generators. A Python dictionary is used to mark multiples of the generated primes, according to the Sieve of Eratosthenes.

```
# Sieve of Eratosthenes
# David Eppstein, UC Irvine, 28 Feb 2002

from __future__ import generators

def eratosthenes():
    '''Yields the sequence of prime numbers via the Sieve of Eratosthenes.'''
    D = {}  # map composite integers to primes witnessing their compositeness
    q = 2   # first integer to test for primality
    while 1:
        if q not in D:
            yield q         # not marked composite, must be prime
            D[q*q] = [q]    # first multiple of q not already marked
        else:
            for p in D[q]:  # move each witness to its next multiple
                D.setdefault(p + q, []).append(p)
            del D[q]        # no longer need D[q], free memory
        q += 1
```

As Wim Stolker suggested, I have modified this to use setdefault, 1 Mar 2002.

Tags: algorithms

Shortcut. Nice, short and clear recipe. However, instead of:

I would use:

See: Add an entry to a dictionary, unless the entry is already there.

http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/66516
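The two snippets this comment contrasted were lost; presumably the suggestion was to replace an explicit test-and-branch with a single setdefault call, roughly like this (a reconstruction, not the original code):

```python
# Hypothetical reconstruction of the suggested shortcut: append p to the
# list of witnesses for x, creating the list if x is not yet in the dict.
D = {}
x, p = 6, 2

# instead of an explicit membership test:
if x in D:
    D[x].append(p)
else:
    D[x] = [p]

# ... a single setdefault call does the same thing:
D2 = {}
D2.setdefault(x, []).append(p)

assert D == D2 == {6: [2]}
```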

Slightly faster by keeping a dict of single numbers instead of one of lists. Neat indeed, but we can be marginally faster (about 40% or more on my machine for primes up to 1,000,000, for example) by keeping in the dict a single value for each composite number: the first prime factor that revealed to us that the number is a composite. Here's the Python 2.3 code for the enhanced main loop. This uses the new-in-2.3 "fetch and remove" method .pop of dictionaries, but that's marginal (I put just the same small enhancement in the list-using version; I posted both to comp.lang.python just now). The small speedup in this variant comes from not having to build so many small lists.

Alex
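Alex's enhanced loop was not preserved with the comment; a sketch of the single-value, .pop-based variant he describes (function name assumed) might look like:

```python
def erat_singles():
    # one value per composite: the first prime factor that exposed it
    D = {}
    q = 2
    while True:
        p = D.pop(q, None)     # "fetch and remove" in one step
        if p is None:
            yield q            # q is prime
            D[q*q] = q         # first multiple of q not already marked
        else:
            # move the witness p to its next multiple not already claimed
            x = q + p
            while x in D:
                x += p
            D[x] = p
        q += 1
```

No small lists are built or appended to, which is where the speedup comes from.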

I happen to like Haskell's. Haskell's canonical version translates roughly to Python as:

Unfortunately it exceeds the recursion limit after about 3000 on my machine.
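The posted translation is missing. The canonical Haskell sieve is `sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]`; a rough Python rendering (a sketch in modern generator syntax, not the 2003-era original) might be:

```python
from itertools import count

def sieve(s):
    # take the head of the stream as prime, then filter its multiples
    # out of the tail and recurse on what remains
    p = next(s)
    yield p
    yield from sieve(x for x in s if x % p)

primes = sieve(count(2))
```

Each prime adds another nested generator layer, so every next() call recurses one level deeper per prime found, which is why it hits the recursion limit fairly quickly.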

Slightly faster yet by skipping even numbers. It's too bad itertools.count doesn't take a step argument! This would look cleaner, and likely be marginally faster, with

rather than the current while True cruft.
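For illustration, itertools.count did eventually grow a step argument (in Python 3.1), so the replacement the commenter wanted is now spellable directly:

```python
from itertools import count, islice

# count() with a step argument produces the odd candidates directly:
odds = count(3, 2)   # replaces: q = 3; while True: ...; q += 2
first_five = list(islice(odds, 5))   # -> [3, 5, 7, 9, 11]
```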

Another recursive version. This uses ifilter to generate the recursive sequence. It's much less vulnerable to stack overflow than the version that recurses on sieve, but otherwise it's pretty similar.

I managed to generate all primes up to 909,691 before it bombed the Python interpreter mysteriously, but it takes a while!
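The posted code is gone; an ifilter-style version (shown here with Python 3's built-in filter, which replaced itertools.ifilter) that rebinds the stream rather than recursing on the function might look like:

```python
from itertools import count

def primes_ifilter():
    seq = count(2)
    while True:
        p = next(seq)
        yield p
        # wrap the remaining stream in one more filter layer per prime;
        # binding p as a default argument freezes its current value
        seq = filter(lambda x, p=p: x % p, seq)
```

There is no Python-level recursion at generator-creation time, though each next() still traverses one filter layer per prime found so far.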

Faster generation of primes up to n. I can get a roughly 2.5x speedup when generating primes up to a given number. By keeping the whole filter in memory, it's possible to do list lookup instead of dictionary lookup.
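The comment's code wasn't preserved; a sketch of a bounded, list-based sieve of the kind described (names assumed):

```python
def primes_upto(n):
    # list lookup instead of dict lookup: index m is True once m is composite
    is_composite = [False] * (n + 1)
    out = []
    for q in range(2, n + 1):
        if not is_composite[q]:
            out.append(q)
            for m in range(q * q, n + 1, q):
                is_composite[m] = True
    return out
```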

I managed to create a Python one-liner based on #3.

It is not terribly efficient memory-wise, but it is a Sieve.

Tell me if you can do it smaller:
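The posted one-liner is lost; one way to write a genuine (if memory-hungry) sieve in a single line, in the same spirit:

```python
# set difference of all candidates minus all marked multiples, in one line
sieve = lambda n: sorted(set(range(2, n)) - {m for p in range(2, int(n ** 0.5) + 1) for m in range(p * p, n, p)})

print(sieve(30))   # -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```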

I'm pitching in my implementation:

def GetPrimes(limit):
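Only the signature survived; a plausible body in the same spirit (this reconstruction is an assumption, not the commenter's code):

```python
def GetPrimes(limit):
    # hypothetical body: bytearray-based bounded sieve
    flags = bytearray([1]) * (limit + 1)
    flags[0] = flags[1] = 0
    for q in range(2, int(limit ** 0.5) + 1):
        if flags[q]:
            # clear all multiples of q in one slice assignment
            flags[q * q::q] = bytearray(len(range(q * q, limit + 1, q)))
    return [i for i, f in enumerate(flags) if f]
```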

I noticed that the dict method applied to odd numbers only (see #5) has to check for even numbers while searching for the next multiple (e.g. q=9, p=3 leads to D[12] being set unnecessarily). So someone suggested:

But, if q and p are odd,

q + 2*p will always be odd. This shaves off another 30% of the runtime of this already slick algorithm. Additionally, the 'while' loop is replaced by the itertools.count() iterator, as has been suggested before.
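Putting the two observations together (odd candidates from itertools.count, witness stepping by 2*p), the loop described might be sketched as:

```python
from itertools import count

def erat_odd():
    yield 2
    D = {}
    for q in count(3, 2):       # odd candidates only
        p = D.pop(q, None)
        if p is None:
            D[q * q] = q        # q*q is odd when q is odd
            yield q
        else:
            x = q + 2 * p       # odd + even = odd: never lands on evens
            while x in D:
                x += 2 * p
            D[x] = p
```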

Clean and clear code, but it does not scale well for big numbers. What do you guys think about my version here: http://rahulrajpl.wordpress.com/2010/05/15/sieve_of_eratosthenes-with-efficient-scaling/

def iprime_generator(n):

It seems fairly efficient for numbers less than a million. I'm a noob, so I'm still trying to figure out how to optimize it. Please post feedback, thanks!
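The body of iprime_generator was not preserved; a bounded generator with that signature might look like this (a hypothetical reconstruction):

```python
def iprime_generator(n):
    # hypothetical body: dict-based sieve, lazily yielding primes up to n
    D = {}
    for q in range(2, n + 1):
        p = D.pop(q, None)
        if p is None:
            yield q
            if q * q <= n:       # no point marking beyond the bound
                D[q * q] = q
        else:
            x = q + p
            while x in D:
                x += p
            if x <= n:
                D[x] = p
```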

The code of Wolfgang Beneicke can still be improved a little. The following code is about 10% faster.
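The snippet itself is missing; one micro-optimization consistent with the description is to store the doubled step in the dict so the inner loop never recomputes 2*p (a guess at the idea, not the original code):

```python
from itertools import count

def erat_faster():
    yield 2
    D = {}
    for q in count(3, 2):
        s = D.pop(q, None)
        if s is None:
            D[q * q] = 2 * q   # store the ready-made step, not the prime
            yield q
        else:
            x = q + s
            while x in D:
                x += s
            D[x] = s
```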

The space complexity is needlessly O(n). O(sqrt(n)) is enough indeed, achieved by postponing the addition of stepping information for a prime into the dict until that prime's square is seen amongst the candidates, and not a moment sooner. With a much smaller dict, performance and empirical time complexity improve too. See http://ideone.com/WFv4f.

Here's the code for the postponed sieve mentioned above:
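The linked paste may not survive; a sketch of a postponed sieve along the lines described (recursive inner primes supply, additions triggered by each prime's square) might be:

```python
from itertools import count

def postponed_sieve():
    yield 2; yield 3; yield 5; yield 7
    D = {}
    ps = postponed_sieve()   # inner primes supply, lags far behind
    next(ps)                 # discard 2
    p = next(ps)             # p = 3
    q = p * p                # postpone p's multiples until q = p*p appears
    for c in count(9, 2):
        if c in D:
            s = D.pop(c)
        elif c < q:
            yield c          # below the next square and unmarked: prime
            continue
        else:                # c == q: time to start marking p's multiples
            s = 2 * p
            p = next(ps)
            q = p * p
        x = c + s
        while x in D:
            x += s
        D[x] = s
```

The dict only ever holds one entry per prime below sqrt(c), which is where the O(sqrt(n)) space bound comes from.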

Well, to complement the ones above, here is a "set"-based implementation.

Total runtime: 15.5625 for the first 104730 numbers.
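The original set-based code was not preserved; one way such an implementation might look (a sketch, names assumed):

```python
def set_sieve(n):
    # composites accumulate in a set; anything not in it when reached is prime
    composites = set()
    primes = []
    for q in range(2, n + 1):
        if q not in composites:
            primes.append(q)
            composites.update(range(q * q, n + 1, q))
    return primes
```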

There was a bug in my 7-year-old implementation (a missing '+ 1'); thanks to @xn0 who spotted it.

Here's the corrected version:
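The corrected code itself is not shown; the classic place a missing '+ 1' hides in a bounded sieve is the square-root loop bound, illustrated here (an example of the bug class, not the commenter's actual code):

```python
import math

def sieve_upto(n):
    # the '+ 1' matters: without it, range(2, int(math.sqrt(n))) misses
    # q = sqrt(n), so perfect squares like 49 = 7*7 survive as "primes"
    flags = [True] * (n + 1)
    flags[0] = flags[1] = False
    for q in range(2, int(math.sqrt(n)) + 1):
        if flags[q]:
            for m in range(q * q, n + 1, q):
                flags[m] = False
    return [i for i, ok in enumerate(flags) if ok]
```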