Popular recipes tagged "processing" but not "multiprocessing"http://code.activestate.com/recipes/tags/processing-multiprocessing/2012-07-10T12:37:37-07:00ActiveState Code RecipesZero (Batch) Programs (Python) 2012-07-10T12:37:37-07:00Stephen Chappellhttp://code.activestate.com/recipes/users/2608421/http://code.activestate.com/recipes/578205-zero-batch-programs/ <p style="color: grey"> Python recipe 578205 by <a href="/recipes/users/2608421/">Stephen Chappell</a> (<a href="/recipes/tags/archive/">archive</a>, <a href="/recipes/tags/batch/">batch</a>, <a href="/recipes/tags/old/">old</a>, <a href="/recipes/tags/processing/">processing</a>, <a href="/recipes/tags/utility/">utility</a>). Revision 2. </p> <p>Having written many programs that work with groups of files, the zero programs were written based on a simple batch engine in the <code>zero</code> utility. All of the other programs import the first program to take advantage of its batch processor while supplementing there own functionality in place of zeroing out file data. This is committed for archival to be run under Python 2.5 or later versions.</p> processing.Pool variation which allows multiple threads to send the same requests without incurring duplicate processing (Python) 2008-09-17T17:01:21-07:00david decotignyhttp://code.activestate.com/recipes/users/4129454/http://code.activestate.com/recipes/576462-processingpool-variation-which-allows-multiple-thr/ <p style="color: grey"> Python recipe 576462 by <a href="/recipes/users/4129454/">david decotigny</a> (<a href="/recipes/tags/map/">map</a>, <a href="/recipes/tags/parallel/">parallel</a>, <a href="/recipes/tags/pool/">pool</a>, <a href="/recipes/tags/processing/">processing</a>, <a href="/recipes/tags/threads/">threads</a>). Revision 3. </p> <p>processing.Pool (<a href="http://pypi.python.org/pypi/processing" rel="nofollow">http://pypi.python.org/pypi/processing</a>) is a nice tool to "parallelize" map() on multiple CPUs. However, imagine you have X threads which send the same request Pool.map(getNthPrimeNumber, [100000, 10000000, 10000]) at (almost) the same time. Obviously, you don't want to compute X times getNthPrimeNumber for 100000, 10000000, 10000... unless you have 3.X processors available. You would like one thread to submit the 3 requests, and then the X-1 others would notice that the requests have already been submitted and will then just wait for the result. This is what this code is about: a kind of "trensient memoize" for processing.Pool::imap().</p>