FIFO means "First In, First Out". It is a kind of container which only allows element insertion and removal, and where the first element inserted is the first element removed.
class Fifo:
    def __init__(self):
        self.first = ()                # the empty tuple marks an empty queue
    def append(self, data):
        node = [data, ()]              # a node is [value, next]
        if self.first:
            self.last[1] = node        # link the new node after the current tail
        else:
            self.first = node          # queue was empty: the new node is also the head
        self.last = node
    def pop(self, n=-1):
        node = self.first
        self.first = node[1]           # advance the head to the next node
        return node[0]

a = Fifo()
a.append(10)
a.append(20)
print a.pop(0)    # prints 10
print a.pop(0)    # prints 20
a.append(5)
print a.pop(0)    # prints 5
I would like to thank Raymond Hettinger, who has suggested many great improvements to both the code and the discussion. He has also provided an interesting alternate implementation in the comments.
Python's built-in lists offer a pop(0) method which can be used to implement a FIFO. Unfortunately, pop(0) has to shift the whole remaining content of the list, which is quite inefficient if you need to store a large number of items.
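For example, the list-based approach amounts to something like this (a trivial sketch, just to make the cost of pop(0) concrete):

# Using a plain list as a FIFO: append at the back, pop from the front.
# Every pop(0) shifts all remaining elements one slot to the left, so
# draining a queue of N items costs O(N**2) element moves overall.
queue = []
queue.append(10)
queue.append(20)
assert queue.pop(0) == 10
assert queue.pop(0) == 20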
The linked list data structure offers a way to perform the pop operation in constant time. In fact, I wrote the code above after reading an academic paper which claimed that doubly linked lists were the 'natural' way to implement this kind of container (in contrast to stacks). I wanted to convince myself that it was possible to build a FIFO with a singly linked list.
I've done some quick and dirty benchmarking. Here are some numerical results showing the relative time needed to push N items into a FIFO and pop them all (the operation was repeated 10 times for each N):
N        Python list    Simple linked-list FIFO
50       0.01           0.02
100      0.02           0.03
500      0.12           0.15
1000     0.3            0.31
5000     3.55           1.69
10000    12.19          3.48
These results show that with the singly linked list, the duration of pop and append does not depend on the number of items in the container. The built-in list solution becomes inefficient once the FIFO has to hold more than about 1000 items.
Note: Paul Moore's code (in comment #1) is very interesting and efficient; unfortunately it is a stack implementation, not a FIFO.
You can do this more efficiently using nested tuples. If you steal Lisp's concept of "cons cells", you can achieve the same sort of result using nested tuples. Effectively, you implement the FIFO [1, 2, 3] (where 1 is the first item you'll pop) as (1, (2, (3, ()))).
Here's how:
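Something along these lines (a sketch of the cons-cell idea, not necessarily the code originally posted with this comment):

# Container built from nested tuples ("cons cells"), Lisp-style.
# Appending conses a new cell onto the head, and popping unpacks the head,
# so (as pointed out below) this actually behaves as a LIFO stack.
class ConsStack:
    def __init__(self):
        self.data = ()                    # the empty tuple is the empty container
    def append(self, value):
        self.data = (value, self.data)    # cons a new cell onto the head
    def pop(self, n=-1):
        value, self.data = self.data      # unpack head value and tail in one step
        return value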
This is more compact than your list representation (tuples take up less space than lists) as well as being faster (tuple unpacking is a fast operation). Of course, it's also more obscure unless you're a Lisp hacker :-)
Tighter version. The author doesn't give himself enough credit. The built-in list operations are faster only for short lists; his code runs in O(n) time, which always wins in the long run.
Here is a tighter version of the above code.
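A sketch of what such a tighter version might look like (my reconstruction using a sentinel tail node, not necessarily the code originally posted here):

# Singly linked list FIFO with a sentinel tail node, so append needs no
# special case for the empty queue.  A node is [value, next].
class Fifo:
    def __init__(self):
        self.first = self.last = [None, None]   # head and tail both start at the sentinel
    def append(self, data):
        self.last[0] = data                      # fill in the sentinel...
        self.last[1] = [None, None]              # ...and hang a fresh sentinel after it
        self.last = self.last[1]
    def pop(self, n=-1):
        data, self.first = self.first            # detach the head node
        return data

Note that this sketch returns None instead of raising an exception when popping an empty queue.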
Fastest FIFO. There was a newsgroup discussion on the fastest FIFO implementation:
Two O(n) approaches were found: the nested-list approach shown above and a dictionary-based approach from Alex Martelli. The dictionary approach beat the nested list by about 30%. Space consumption was not measured -- Sébastien Keim's nested-list approach may use less memory per queue element. Both approaches kept their O(n) characteristics in tests up to 300,000 queued elements. Here is the winning source code:
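The dictionary-based technique looks roughly like this (a sketch of the idea; details may differ from the code that actually won the comparison):

# Dictionary-based FIFO: items live under increasing integer keys, with one
# counter for the next slot to write and one for the next slot to read.
class DictFifo:
    def __init__(self):
        self.data = {}
        self.nextin = 0
        self.nextout = 0
    def append(self, value):
        self.data[self.nextin] = value
        self.nextin += 1
    def pop(self, n=-1):
        value = self.data[self.nextout]
        del self.data[self.nextout]    # release the slot so memory use stays bounded
        self.nextout += 1
        return value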
Three slower approaches were also found. Each had performance that degraded to O(n^2) or worse for large test sets. The approaches were prepending to lists, appending to lists, and using array.array('i'), which only works for storing integers. Not only were these approaches slower, but they all took a major performance hit when there were more than 65,536 items in the queue (apparently having long integers as indices more than doubles the lookup times).
For Python 2.2 users, variations were tried which subclassed the built-in types. Revising Alex's code to inherit from a dictionary slowed it by about 30%. Revising the list-based approaches to inherit from built-in lists resulted in a 2% performance gain.
This is a stack, not a FIFO queue.
No exception raised. This is not quite equivalent to the original class: it does not raise an exception if there are more pops than appends.
An even faster FIFO queue. Here's a version that's (by my tests) even faster than the dict method. It uses a design well-known in the functional programming world for making FIFO queues with singly linked lists.
With this method, keeping a FIFO queue requires two lists. The basic idea is that one list is the front of the queue and the other is the back. Enqueuing an element simply appends that element to the back list. Dequeuing an element pops from the end of the front list. If the front list is empty, then the front list is set to the reversed back list and the back list is reset to be empty. Thus the queue runs in amortized O(1) time per operation, since one O(N) operation is required per N operations in total.
Here's the code. Instead of maintaining two separate lists, it subclasses list and thus itself is the front list; the back list is kept as an attribute. It's slightly faster this way than it is keeping separate lists.
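A sketch of this design as described above (my reconstruction of the technique, not necessarily the exact code from this comment):

# FIFO as a list subclass: the instance itself is the front list, stored in
# reverse order so pop() from the end yields the oldest element; the back
# list is kept as an attribute.
class Fifo(list):
    def __init__(self):
        self.back = []
        self.append = self.back.append   # enqueue goes straight onto the back list
    def pop(self):
        if not self:                     # front list exhausted:
            self.back.reverse()          #   oldest element moves to the end,
            self[:] = self.back          #   the back becomes the new front,
            del self.back[:]             #   and the back list is emptied
        return list.pop(self)            # dequeue from the end of the front list

q = Fifo()
q.append(10)
q.append(20)
assert q.pop() == 10
q.append(30)
assert q.pop() == 20
assert q.pop() == 30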
Very nice. I find your solution very pythonic. I have explored an implementation which should be a little faster since it avoids one copy:
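Presumably something along these lines, where the two lists are swapped in place instead of one being copied into the other (a sketch; the original code may have differed in detail):

# Front and back lists are kept together in self.data; when the front runs
# out, d.reverse() swaps the roles of the two lists instead of copying.
class Fifo:
    def __init__(self):
        self.data = [[], []]         # [front (in reverse order), back]
    def append(self, value):
        self.data[1].append(value)   # enqueue onto the back list
    def pop(self):
        d = self.data
        if not d[0]:                 # front list exhausted:
            d.reverse()              #   the old back list becomes the front...
            d[0].reverse()           #   ...reversed so the oldest item is last
        return d[0].pop()            # dequeue from the end of the front list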
Interesting. A quick benchmark shows that Jeremy's code is still faster for me (at least on Python 2.5.1). That said, avoiding the subclassing does make things more readable. Using named attributes for front and back also helps with clarity (I assume both you and Jeremy originally wrote versions that looked like this, but I'll post it for future readers).
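For the record, such a version might look like this (my own sketch; the names front and back are illustrative):

# The same two-list technique, with named attributes instead of a list
# subclass or a data array.
class Fifo(object):
    def __init__(self):
        self.front = []              # items ready to pop, in reverse order
        self.back = []               # newly appended items
    def append(self, value):
        self.back.append(value)
    def pop(self):
        if not self.front:           # front exhausted: move the back over
            self.back.reverse()
            self.front = self.back
            self.back = []
        return self.front.pop()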
Also, this probably varies by the benchmark, but my test shows using separate attributes to be slightly faster than the data array. I like the d.reverse() trick, though!
FIFO is now in Python. Since Python 2.4, a deque type has been added to the collections module, which can be used as a FIFO or queue.
See http://docs.python.org/lib/deque-objects.html
I haven't done any timing measurements, but I trust the Python maintainers to have chosen an optimal solution.
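For reference, standard-library usage looks like this:

from collections import deque

# deque offers O(1) appends and pops at both ends, so it works as a FIFO
# out of the box: append on the right, popleft from the left.
q = deque()
q.append(10)
q.append(20)
assert q.popleft() == 10
assert q.popleft() == 20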