How to implement LFU cache in Python using threads? (Python)
Python recipe 579003 by test (http://code.activestate.com/recipes/579003-how-to-implement-lfu-cache-in-python-using-threads/)

I need to design and implement an LFU scheduler abiding by the following core principles. In my implementation the scheduler will be a thread (rather than a process), and the processes in this simulation will be represented by objects of a custom class called Process (they are not real threads or OS processes). There are N processes, with unique IDs 1, 2, ..., N, belonging to a hypothetical operating system, and they are to be arranged in cache memory for scheduling purposes. To make this precise, let T denote the number of times that the scheduler has scheduled any process since the start of its execution. For a process p currently present in the cache, let Tp denote the number of times that p was scheduled to run since the start of the scheduler's execution. Then fp = Tp/T is the frequency with which process p was scheduled. An LFU scheduler favours the process p with the least fp and schedules it to run next. In case of ties, the process with the smallest ID gets to run.

Write P for the set of all processes in the system. We distinguish between two subsets of P: the set of processes in the cache, denoted Pc, and the set of processes outside of the cache, denoted Po. In particular, Pc U Po = P. Below I describe the core principles for the management of these two sets and the migration of processes between them.

The cache has M available slots for processes; that is, |Pc| <= M. M is an arbitrary integer, set at the time the cache is initialised, and it may satisfy any of M < N, M = N, or M > N. The M slots of the cache are to be filled with the M processes whose associated frequencies are the lowest amongst all members of P. In the case of ties, processes with lesser ID numbers are favoured for entrance into the cache. The scheduler may only pick a process to run from those present in the cache. In the case that M < N, processes may need to be evacuated from the cache in favour of others with lower frequency. Evacuation does not necessarily mean that the process has concluded its task (details below). The members of Po still have their frequencies updated; indeed, the parameter T increases while, for each process p in Po, the parameter Tp stays the same, causing fp to decay.

The class Process:
(a) Supports a run() method that simply prints: "Process ran for the time."
(b) Supports a constructor whose first argument is an integer name specifying the process's ID number (recall that IDs are unique and drawn from the interval [1, N]). The second argument is an integer times specifying the number of times that the process has to run (thus, if a process is forced out of the cache due to lack of space, it can only be "discarded" once it has run times times).
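To make that spec concrete, here is a minimal sketch of what such a class could look like. Only name, times and run() are taken from the description above; the run_count counter, the is_finished() helper and the exact wording of the printed message are my own assumptions.

    class Process(object):
        """One simulated process (not a real OS process or thread)."""

        def __init__(self, name, times):
            self.name = name        # unique integer ID in [1, N]
            self.times = times      # how many times this process must run in total
            self.run_count = 0      # how many times it has run so far (this is Tp)

        def run(self):
            # The exact message format is an assumption.
            self.run_count += 1
            print("Process %d ran for the %d-th time" % (self.name, self.run_count))

        def is_finished(self):
            # Helper (my addition): True once the process has run 'times' times
            # and is therefore eligible to be discarded.
            return self.run_count >= self.times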
The evacuation policy for the cache is as follows:
(a) Processes that have reached their times parameter are to be evacuated and are eligible to be discarded entirely.
(b) Assume that |Pc| = M. If there exist a process q in Po and a process p in Pc with fq < fp, then evacuate p from the cache and add q. For control purposes (of the grading process of the assignment), a single process is to be evacuated each time. This does not exclude the possibility that evacuations occur in succession, for instance if several members of Po have sufficiently low frequencies.
(c) Evacuation is to occur only when there exist a q in Po and a p in Pc satisfying fq < fp and the cache is full.

A process scheduled to run is allowed to apply its run() method once, and then a re-evaluation of which process runs next is to take place. In particular, a process is not allowed to run in succession unless the scheduling policy mandates it.

I'm a little lost on this question! How do I implement it?
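For what it's worth, here is one possible sketch of the scheduler as a threading.Thread subclass, built on the Process sketch above. The class name LFUScheduler, the frequency()/refill_cache() helpers and the tie-break used when picking a victim for evacuation are my own assumptions; the selection rule (lowest fp, smallest ID on ties), the one-evacuation-per-step control rule and the run-once-then-re-evaluate loop follow the description above.

    import threading

    class LFUScheduler(threading.Thread):
        """Sketch of an LFU scheduler: repeatedly runs the cached process with
        the lowest frequency fp = Tp / T, breaking ties by smallest ID."""

        def __init__(self, processes, cache_size):
            threading.Thread.__init__(self)
            self.all_procs = list(processes)   # P: every live process
            self.cache_size = cache_size       # M: number of cache slots
            self.cache = []                    # Pc: processes currently cached
            self.total_runs = 0                # T: schedulings since the start

        def frequency(self, p):
            # fp = Tp / T; before anything has run, treat every frequency as 0.
            return 0.0 if self.total_runs == 0 else p.run_count / float(self.total_runs)

        def refill_cache(self):
            # The M lowest-frequency processes (ties -> smaller ID) belong in
            # the cache; at most one evacuation is performed per scheduling step.
            wanted = sorted(self.all_procs,
                            key=lambda p: (self.frequency(p), p.name))[:self.cache_size]
            for p in wanted:
                if p in self.cache:
                    continue
                if len(self.cache) < self.cache_size:
                    self.cache.append(p)       # free slot: just admit p
                else:
                    # Evacuate the cached process with the highest frequency
                    # (tie-break by larger ID is an assumption).
                    victim = max(self.cache, key=lambda q: (self.frequency(q), q.name))
                    self.cache.remove(victim)
                    self.cache.append(p)
                    break                      # single evacuation per step

        def run(self):
            while True:
                # Discard processes that have reached their 'times' quota.
                self.all_procs = [p for p in self.all_procs if not p.is_finished()]
                self.cache = [p for p in self.cache if not p.is_finished()]
                if not self.all_procs:
                    break
                self.refill_cache()
                # Run the cached process with the lowest fp (ties -> lowest ID),
                # then re-evaluate on the next loop iteration.
                chosen = min(self.cache, key=lambda p: (self.frequency(p), p.name))
                chosen.run()
                self.total_runs += 1

    if __name__ == "__main__":
        # Assumes the Process class sketched earlier is in scope.
        procs = [Process(i, times=3) for i in range(1, 5)]   # N = 4, each runs 3 times
        scheduler = LFUScheduler(procs, cache_size=2)        # M = 2 cache slots
        scheduler.start()
        scheduler.join()

Running the scheduler on its own thread and joining it, as in the usage example, keeps the main thread free; whether you need additional locking depends on whether anything else touches the Process objects concurrently.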