Popular recipes tagged "parallel" but not "demo"
ActiveState Code Recipes
http://code.activestate.com/recipes/tags/parallel-demo/
Updated: 2010-04-02T00:14:38-07:00

Parallelize your shell commands (Bash)
2010-04-02T00:14:38-07:00 | Kevin L. Sitze (http://code.activestate.com/recipes/users/4173535/) | http://code.activestate.com/recipes/577171-parallelize-your-shell-commands/
Bash, recipe 577171, by Kevin L. Sitze. Tags: fifo, multiprocessing, parallel. Revision 2.
This script processes a batch of commands in parallel. It dispatches new commands up to a user-specified limit, each generated from a template given on the command line using arguments read from STDIN. The combining semantics are similar to those of "xargs -1", with added support for multiple arguments and for running commands in parallel.
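As an illustration only (the recipe itself is a Bash script whose options are not reproduced here), the same idea can be sketched in Python: read arguments from STDIN, substitute each one into a command template, and keep at most a fixed number of commands running at once. The "{}" placeholder and the MAX_JOBS constant are assumptions of this sketch, not the recipe's interface.

    #!/usr/bin/env python3
    """Sketch: run templated shell commands in parallel, args from STDIN."""
    import subprocess
    import sys
    from concurrent.futures import ThreadPoolExecutor

    MAX_JOBS = 4                                # user-chosen concurrency limit

    def run(arg):
        # Substitute the argument into the command template and run it.
        cmd = TEMPLATE.replace("{}", arg)
        return subprocess.call(cmd, shell=True)

    if __name__ == "__main__":
        TEMPLATE = sys.argv[1] if len(sys.argv) > 1 else "echo {}"
        args = [line.strip() for line in sys.stdin if line.strip()]
        with ThreadPoolExecutor(max_workers=MAX_JOBS) as pool:
            statuses = list(pool.map(run, args))
        sys.exit(max(statuses, default=0))

For example, "ls *.log | python3 psketch.py 'gzip {}'" (file name hypothetical) would compress the listed files four at a time.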
Parallel map (Python)
2009-04-02T19:41:10-07:00 | Miki Tebeka (http://code.activestate.com/recipes/users/4086267/) | http://code.activestate.com/recipes/576709-parallel-map/
Python, recipe 576709, by Miki Tebeka. Tags: algorithm, map, parallel. Revision 3.
An implementation of parallel map using processes. This is a Unix-only version.
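Since the recipe is Unix-only and uses processes, the mechanism is presumably fork-based. A minimal sketch of that approach (not the recipe's actual code) forks one child per item, sends each pickled result back over a pipe, and collects the results in order:

    import os
    import pickle

    def parallel_map(func, items):
        """Apply func to each item in a forked child process (Unix only).

        One child per item; results come back pickled over pipes. Large
        results could fill the pipe buffer, so this sketch is only
        suitable for small return values.
        """
        jobs = []
        for item in items:
            r, w = os.pipe()
            pid = os.fork()
            if pid == 0:                        # child: compute, report, exit
                os.close(r)
                with os.fdopen(w, "wb") as out:
                    pickle.dump(func(item), out)
                os._exit(0)
            os.close(w)                         # parent keeps the read end
            jobs.append((pid, r))

        results = []
        for pid, r in jobs:                     # collect in submission order
            with os.fdopen(r, "rb") as inp:
                results.append(pickle.load(inp))
            os.waitpid(pid, 0)
        return results

    if __name__ == "__main__":
        print(parallel_map(lambda x: x * x, range(8)))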
processing.Pool variation which allows multiple threads to send the same requests without incurring duplicate processing (Python)
2008-09-17T17:01:21-07:00 | david decotigny (http://code.activestate.com/recipes/users/4129454/) | http://code.activestate.com/recipes/576462-processingpool-variation-which-allows-multiple-thr/
Python, recipe 576462, by david decotigny. Tags: map, parallel, pool, processing, threads. Revision 3.
processing.Pool (http://pypi.python.org/pypi/processing) is a nice tool to "parallelize" map() across multiple CPUs. However, imagine that X threads send the same request, Pool.map(getNthPrimeNumber, [100000, 10000000, 10000]), at (almost) the same time. You don't want to compute getNthPrimeNumber X times for 100000, 10000000, and 10000 unless you have 3*X processors available. Instead, one thread should submit the three requests, and the other X-1 threads should notice that those requests are already in flight and simply wait for the results.
That is what this code provides: a kind of "transient memoize" for processing.Pool::imap().
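A minimal sketch of the idea, using the standard library's multiprocessing (the successor of the processing package) and apply_async rather than the recipe's imap wrapper; the class and attribute names are illustrative, not the recipe's API. A lock-protected table of in-flight jobs lets the first thread submit a request to the pool, while later threads asking for the same work block on the existing job. Each entry is removed once its result is available, hence "transient".

    import threading
    from multiprocessing import Pool

    class MemoizedPool:
        """Transient memoization around a process pool (illustrative)."""

        def __init__(self, processes=None):
            self._pool = Pool(processes)
            self._lock = threading.Lock()
            self._pending = {}                  # (func, arg) -> AsyncResult

        def map(self, func, args):
            # Submit every argument (or join an already pending job) first...
            jobs = []
            with self._lock:
                for arg in args:
                    key = (func, arg)
                    job = self._pending.get(key)
                    if job is None:             # first caller submits the job
                        job = self._pool.apply_async(func, (arg,))
                        self._pending[key] = job
                    jobs.append((key, job))
            # ...then wait; duplicate callers block on the same AsyncResult.
            results = []
            for key, job in jobs:
                results.append(job.get())
                with self._lock:
                    self._pending.pop(key, None)   # entry is transient
            return results

With such a wrapper, X threads calling map(getNthPrimeNumber, [100000, 10000000, 10000]) at nearly the same time submit at most three jobs between them; the remaining threads wait on the shared in-flight results instead of recomputing them.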