This is an implementation of the binary search algorithm in (almost) one line. Given a number 'n' and a sorted list 'L', the function returns the index of the number in the list, or -1 if it is not present.
def search(n, L):
    # integer division written as // so the code also runs under Python 3
    f = lambda n, L, o: len(L) and (
        (n == L[len(L)//2]) * (len(L)//2 + o) or
        (n < L[len(L)//2] and f(n, L[:len(L)//2], o)) or
        (n > L[len(L)//2] and f(n, L[len(L)//2 + 1:], o + len(L)//2 + 1)))
    return f(n, L, 1) - 1
I wrote this function for a series of programming katas (http://pragprog.com/pragdave/Practices/Kata/KataTwo.rdoc). The goal of the second kata is to come up with five unique approaches to a binary chop, one per day. I was curious whether I could do it in one line, and here's the result.
I ran into a problem while developing it: 0 is a valid index, but it's also a boolean false. That's why I add an offset of 1 when calling the function and subtract 1 afterwards.
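A minimal sketch of the pitfall:

```python
# A match at index 0 is falsy, so an `or` chain would report it
# as "not found":
hit = 0            # valid index produced by the recursion
print(hit or -1)   # -1 -- wrong, the match at index 0 is lost

# Adding an offset of 1 keeps every valid index truthy; subtract it
# again at the end:
print((hit + 1) or -1)   # 1, which becomes index 0 after subtracting 1
```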
The function is only a little slower than other recursive or iterative implementations that I've tried.
One-liners vs. real-world apps. >> The function is only a little slower than other recursive or iterative implementations that I've tried.
Really? I'd like to see the benchmarking code.
Below are my results. The benchmark compares four algorithms:
The output might look like this (doing it 100 times for an array of 10,000 elements):
What it prints is the number of seconds spent doing the same thing four different ways on two types of data: array-of-ints and array-of-strings. Notes:
1) for real-world apps, if an array is huge (1,000,000?), bisect will beat the hash;
2) don't try it at home.
// How do I attach a file.py here?..
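Since the file couldn't be attached, here is a reconstruction of the benchmark. The four approaches compared -- the recipe's lambda one-liner, bisect, a hash (dict) lookup, and a linear scan -- are my guesses based on the notes above, not the original file.py:

```python
import bisect
import time

def search(n, L):
    # the recipe's one-liner (integer division as // for Python 3)
    f = lambda n, L, o: len(L) and (
        (n == L[len(L)//2]) * (len(L)//2 + o) or
        (n < L[len(L)//2] and f(n, L[:len(L)//2], o)) or
        (n > L[len(L)//2] and f(n, L[len(L)//2 + 1:], o + len(L)//2 + 1)))
    return f(n, L, 1) - 1

def via_bisect(n, L):
    # standard-library binary search; check the candidate slot
    i = bisect.bisect_left(L, n)
    return i if i < len(L) and L[i] == n else -1

def via_linear(n, L):
    # brute-force scan, for comparison
    for i, x in enumerate(L):
        if x == n:
            return i
    return -1

def bench(L, targets, repeats=100):
    index = {x: i for i, x in enumerate(L)}   # the "hash" variant
    via_hash = lambda n, L: index.get(n, -1)
    timings = {}
    for name, fn in [("lambda", search), ("bisect", via_bisect),
                     ("hash", via_hash), ("linear", via_linear)]:
        start = time.perf_counter()
        for _ in range(repeats):
            for n in targets:
                fn(n, L)
        timings[name] = time.perf_counter() - start
    return timings

ints = list(range(10000))
strings = ["%08d" % i for i in ints]   # zero-padded, so sorted as strings too
for data in (ints, strings):
    print(bench(data, data[::1000]))
```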
What I love about Python is: it doesn't insist, doesn't even encourage me to write:
total = reduce(lambda x,y: x+y, map(int, line.rstrip().split()))
It's OK in Python to write three lines instead of one:
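The three-line version didn't survive in this copy; a plausible reconstruction (`line` here is example input I made up):

```python
line = "10 20 30\n"   # example input; originally this came from a file

total = 0
for field in line.rstrip().split():
    total += int(field)

print(total)   # 60
```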
Oops... I forgot to mention why I tried brute-force dumb C. It will actually beat the lambda on considerably large arrays:
If you're patient enough to try, that is.