.. meta::
  :description: SimPy Python Simulation Language
  :keywords: simulation python stochastic

=========================
|simpylogo| SimPy Manual
=========================

:Authors: - Tony Vignaux <Vignaux@users.sourceforge.net>
         - Klaus Muller <Muller@users.sourceforge.net>
:SimPy version: 1.7.1
:Web-site: http://simpy.sourceforge.net/
:Python-Version: 2.2, 2.3, 2.4
:Revision: $Revision: 1.1.1.38 $
:Date: $Date: 2006/06/06 16:19:46 $


.. contents:: Contents
  :depth: 2

.. .. sectnum::
..    :depth: 2

 
SimPy is an efficient, process-based, open-source, discrete-event
simulation language using Python as a base. The facilities it offers
include Processes_ to provide the time-active elements in the model,
Resources_, Levels_ and Stores_ to represent congestion
points, and Monitors_ to record simple statistics.

This document describes version 1.7.1 of *SimPy*.


Changes from version 1.7
----------------------------------------

SimPy 1.7.1 is a maintenance release which repairs a small number of bugs
found in SimPy 1.7.
The SimPy 1.7.1 API is unchanged from that of 1.7.

[Return to Top_ ]

Introduction
-------------------


*SimPy* is a Python-based discrete-event simulation system. It uses
parallel processes to model active components such as messages,
customers, trucks, and planes. It provides a number of tools for the
simulation programmer including *Processes*, three kinds of service
facilities (*Resources*, *Levels*, and *Stores*) and ways of
recording results by using *Monitors* and *Tallys*.

The basic active elements of a *SimPy* model are process objects
(i.e., objects of a process class -- see Processes_).  These may be
delayed for fixed or random times, queued at service facilities, and
they may be interrupted by or interact in other ways with other
processes and components. For example, a simulation of a gas station
could treat automobiles as process objects which may have to queue for
a pump to become available.

A *SimPy* script contains the declaration of one or more *Process*
classes and the creation of process objects from them.  Each Process
class contains a Process Execution Method (referred to later as a PEM_)
that directs the behaviour of the process objects created from that
class. Each PEM runs in parallel with (and may interact with) the
PEMs of other process objects.

Service facilities model congestion points where process objects may
have to wait for a free service unit. For example, a car  may
have to wait for a free pump at a gas station. Treating cars as
process objects and the station as a *Resource* type of service
facility having pumps as its resource units, SimPy automatically puts
waiting cars in a queue until a pump is available. Each car retains
its pump while refuelling and then must release it for use by other cars.

Levels_ model the production and consumption of a homogeneous,
undifferentiated 'material'. Thus, the currently-available amount of
material in a *Level* service facility can be fully described by a
scalar (real or integer). Process objects may increase or decrease the
currently-available amount of material in a *Level* facility. For
example, a gas station stores gas (petrol) in large tanks. Tankers
increase, and each refueled car reduces, the amount of gas available.

Stores_ model the production and consumption of individual objects.
Process objects can add to or subtract from the list of available
objects.  For example, surgical procedures (treated as process
objects) require specific lists of personnel and equipment that may be
treated as the items in a *Store* facility such as a clinic or
hospital. The items held in a *Store* can be of any Python type. In
particular they can be process objects, and this may be exploited when
using Master/Slave modeling techniques.

Monitors_ and Tallys_ are used to record the values of variables such as
waiting times and queue lengths as a function of time. These
statistics consist of simple averages and variances, time-weighted
averages, or histograms. They can be gathered on the queues
associated with *Resources*, *Levels* and *Stores*. For example we may
collect data on the average number of cars waiting at the gas station and the
distribution of their waiting times. *Monitors* preserve complete
time-series records that may later be used for more advanced
post-simulation analysis. *Tallys* report current averages and
variances as the simulation progresses, but do not preserve complete
time-series records.

Before attempting to use SimPy, you should be able to write Python
code. In particular, you should be able to define and use classes and their
objects. Python is free and available on most machine types.  We do
not expound it here. You can find out more about it and download it
from the Python_ web-site (http://www.Python.org).

*SimPy* requires *Python* 2.2 or later [#]_.

.. [#] If Python 2.2 is used, the command: ``from __future__ import
  generators`` must be placed at the top of all *SimPy* scripts. The
  following examples do not include this line.

[Return to Top_ ]

Simulation with *SimPy*
-------------------------

All discrete-event simulation programs automatically maintain the
current simulation time in a software clock. In *SimPy* the current
simulation time is returned by the **now( )** function. The software
clock is set to 0.0 at the start of the simulation. The user cannot
change the software clock directly.

While a simulation program runs, current simulation time steps forward
from one *event* to the next. An event occurs whenever the state of
the simulated system changes. For example, an arrival of a customer is
an event. So is a departure.
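
The jump-to-next-event mechanism can be sketched in ordinary Python.
This is only an illustration of the idea, not SimPy's scheduler code;
the function ``run_events`` and its ``(time, label)`` event pairs are
invented for the sketch::

   import heapq

   def run_events(events, until):
       # Drain a time-ordered event list: the clock jumps from one
       # event time to the next; it never advances in fixed steps.
       heap = list(events)             # (time, label) pairs
       heapq.heapify(heap)
       now, trace = 0.0, []
       while heap and heap[0][0] <= until:
           now, label = heapq.heappop(heap)
           trace.append((now, label))
       return now, trace

   # Only the event times 4.0 and 9.5 are ever visited:
   final, trace = run_events([(9.5, 'departure'), (4.0, 'arrival')],
                             until=100.0)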

To use the *SimPy* simulation system we must import its Simulation module:

   ``from SimPy.Simulation import *``

Before any *SimPy* process objects are activated, the following
statement must appear in the script:

   **initialize( )**

This is followed by some SimPy statements creating and activating
objects. Execution of the simulation itself starts when the following
statement appears in the script:

   **simulate(until=endtime)**

The simulation then starts, and SimPy seeks and executes the first
scheduled event. Having executed that event, the simulation seeks and
executes the next event, and so on. This continues until one of the
following occurs:

    * there are no more events to execute (*now( )* == the time
      of the last event)

    * the simulation time reaches the *endtime* (*now( ) == endtime*)

    * the *stopSimulation( )* command is executed (*now( )* == the
      simulation time when *stopSimulation( )* was called).


Typically a simulation is terminated using the *until* argument of the
*simulate* statement but it can be stopped at any time by the
command:

   **stopSimulation( )**


Additional statements can still be executed after exit from
*simulate*. This is useful for saving or displaying results such as
average delays or lengths of queues.


The following fragment shows only the *main* block in a simulation
program. (Complete, runnable examples are shown in Example1_ and
Example2_.) ``Message`` is a (previously defined) process class and
``m`` is defined as an object of that class. Activating ``m`` has the
effect of scheduling at least one event by starting ``m``'s *Process
Execution Method* (here called ``go``).  The
``simulate(until=1000.0)`` statement starts the simulation itself, which
immediately jumps to the first scheduled event. It will continue
until it runs out of events to execute or the simulation time reaches
1000.0. When the simulation stops the (previously written) ``Report``
function is called to display the results::

  initialize( )
  m = Message( )
  activate(m,m.go( ),at=0.0)
  simulate(until=1000.0)

  Report( )  #  report results when the simulation finishes

[Return to Top_ ]

.. ==================================================================

Processes
-------------------

The active objects for discrete-event simulation in *SimPy* are
instances of some class that inherits from *SimPy*'s *Process* class.

For example, if we are simulating a computing network we might model
each message as an object of the class *Message*.  When message
objects arrive at the computing network they make transitions between
nodes, wait for service at each one, and eventually leave the
system. The *Message* class specifies these actions in its *Process
Execution Method (PEM)*.  Individual message objects are created as
the simulation runs and they go through the evolutions specified in
the *Message* class's *PEM*.

 
Defining a process
~~~~~~~~~~~~~~~~~~~~

Each process class sub-classes (inherits from) the super-class
*Process*. For example here is the header of the definition of a
new *Message* process class::

   class Message(Process):


The user must define one (and only one) *Process Execution Method*
(PEM_) in each process class. Other methods may also be defined. Such
other methods may include an *__init__* method.


.. _PEM:

* **A process execution method (PEM)** prescribes the actions to be
  performed by its process objects. Each *PEM* must contain at least one
  of the *yield*
  statements, described later, that make it a Python generator
  function. This means it has resumable execution: it can be restarted
  after the *yield* statement without losing its current state.
  A PEM can be called
  *execute( )* or *run( )*, but any name may be chosen. It can have
  arguments.

  A process object's PEM starts running as soon as the object is activated
  and the *simulate(until = ...)* statement has been called.

  In the next example the process execution method, *go( )*, for the preceding
  *Message* class, prints out the current time, the message object's
  identification number and the word 'Starting'. After a simulated
  delay of 100.0 time units (in the *yield hold, ...* statement) it
  announces that this message object has 'Arrived'::

       def go(self):
           print now( ), self.i, 'Starting'
           yield hold,self,100.0
           print now( ), self.i, 'Arrived'

* **__init__(self, ...)**, where *...* indicates method arguments. This
  function initialises the process object, setting values for any
  attributes.  The first line of this method must be a call to the
  *Process* class's *__init__( )* in the form::

      Process.__init__(self,name='a_process')

  Then other commands can be used to initialize attributes of the message
  objects. The *__init__( )* method is called automatically when a new
  message object is created.

  In the following example of an  *__init__( )* method for a *Message*
  class we provide for
  each new message object to have an integer identification number, *i*, and
  message length, *len* as instance variables::

       def __init__(self,i,len):
           Process.__init__(self,name='Message'+str(i))
           self.i = i
           self.len = len

  If you do not wish to provide for any attributes other than a *name*, the
  *__init__* method may be dispensed with.
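
The resumable-execution property that a PEM relies on is that of an
ordinary Python generator. The following plain-Python sketch
(``pem_sketch`` is an invented name; no SimPy API is used) shows local
state surviving each suspension::

   def pem_sketch():
       # Execution suspends at each yield and resumes later with
       # all local state ('total' and 'i') intact.
       total = 0
       for i in range(3):
           total += i
           yield total        # suspend here; 'total' survives

   g = pem_sketch()
   values = [next(g) for _ in range(3)]   # resumes three times

SimPy drives each PEM's generator in just this way, resuming it
whenever its next scheduled event comes due.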


Starting a process
~~~~~~~~~~~~~~~~~~~~
A Process object must be *activated* in order to start it operating (see
`Starting and stopping SimPy Process Objects`_)


..     An example of a SimPy script
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    Following is a complete, runnable, SimPy script. We declare a
    *Message* class and define *__init__( )* and *go( )* methods for it.
    Two *messages*, *p1* and *p2* are created. We do not actually use the
    *len* attribute in this example. *p1* and *p2* are activated to start
    at simulation times 0.0 and 6.0, respectively. Nothing happens until
    the *simulate(until=200)* statement. When both Messages have finished
    (at time 6.0+100.0=106.0) there will be no more events so the
    simulation will stop at that time::

        from SimPy.Simulation import *

        class Message(Process):
           ''' a simple Process '''
           def __init__(self,i,len):
               Process.__init__(self,name='Message'+str(i))
               self.i = i
               self.len = len

           def go(self):
               print now( ), self.i, 'Starting'
               yield hold,self,100.0
               print now( ), self.i, 'Arrived'

        initialize( )
        p1  = Message(1,203)
        activate(p1,p1.go( ))
        p2  = Message(2,33)
        activate(p2,p2.go( ),at=6.0)
        simulate(until=200)
        print now( ) # will print 106.0



Elapsing time in a Process
~~~~~~~~~~~~~~~~~~~~~~~~~~

A PEM_ uses the *yield hold*
command to temporarily delay a process object's operations:

* **yield hold,self,t** causes the object to wait for a delay of *t*
  time units (unless it is further delayed by being interrupted_).
  After the delay, it continues with the operation specified by
  the next statement in its PEM.
  During the hold the object's operations are suspended.

* **yield passivate,self** suspends the process object's operations
  until reactivated by explicit command (which must be issued by a different
  process object).


.. _Example1:

The following example's *Customer* class illustrates that the
PEM_ method (*buy*) can have arguments which may be used in the
activation. All processes can have a *name* attribute which can be set,
as here, when an object is created.  Here the *yield hold* is executed
four times for each customer object, with delays of 5.0 time units::

    from SimPy.Simulation import *

    class Customer(Process):
       def buy(self,budget=0):
          print 'Here I am at the shops ',self.name
          t = 5.0
          for i in range(4):
              yield hold,self,t
              print 'I just bought something ',self.name
              budget -= 10.00
          print   'All I have left is ', budget,\
                  ' I am going home ',self.name,

    initialize( )
    C = Customer(name='Evelyn')           # create a customer named 'Evelyn',
    activate(C,C.buy(budget=100),at=10.0) # and activate her
    simulate(until=100.0)




Starting and stopping SimPy Process Objects
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A process object is 'passive' when first created, i.e., it has no
scheduled events. It must be *activated* to start its process execution method:

* **activate(p,p.PEM(args),at=t,delay=period,prior=False)**
  will activate the execution method *p.PEM( )* of process object
  *p* with arguments *args*.

  The default action is to activate at the current time; otherwise one
  of the optional timing clauses, *at=t* or *delay=period*, operates.

  *prior* is normally *False*. If it is *True*, the process object will be
  activated before any others that are to be activated at the specified
  activation time.

Process objects can be suspended and reactivated:

* **yield passivate,self** suspends the process object itself. It becomes 'passive'.

* **reactivate(p,at=t, delay=period, prior=False)** reactivates the
  passive process object, *p*: it becomes 'active'. The optional timing
  clauses
  work as for *activate*. A process object cannot reactivate itself.
  To temporarily suspend a process object, use *yield hold,self,t* instead.

* **self.cancel(p)** deletes all scheduled future events for process object
  *p*.
  Only 'active' process objects can be cancelled.
  A process cannot cancel itself.
  If that is required, use *yield passivate,self* instead.
  *Note:* This new format replaces the *p.cancel( )* form of earlier SimPy
  versions.


When all statements in its process execution method have been completed,
a process object becomes 'terminated'. If the object is still referenced, it
becomes just a data container. Otherwise, it is automatically destroyed.

Even activated process objects will not start operating until the
**simulate(until=T)** statement is executed. This starts the
simulation going and it will continue until time *T* (unless it runs
out of events to execute or the command *stopSimulation( )* is
executed).


.. _Example2:

A complete *SimPy*  script
~~~~~~~~~~~~~~~~~~~~~~~~~~

Before introducing the more complicated process capabilities let us
look at a complete runnable SimPy script. This simulates a firework
with a time fuse.  I have put in a few extra *yield hold* commands for
added suspense::

   from SimPy.Simulation import *

   class Firework(Process):

      def execute(self):
          print now( ), ' firework launched'
          yield hold,self, 10.0                # wait 10.0 time units
          for i in range(10):
              yield hold,self,1.0
              print now( ),  ' tick'
          yield hold,self,10.0                 # wait another 10.0 time units
          print now( ), ' Boom!!'

   initialize( )
   f = Firework( )             # create a Firework object, and
   activate(f,f.execute( ),at=0.0)             # activate it.
   simulate(until=100)

Here is the output from this Example. No formatting was attempted so
it looks a bit ragged::

   0.0  firework launched
   11.0  tick
   12.0  tick
   13.0  tick
   14.0  tick
   15.0  tick
   16.0  tick
   17.0  tick
   18.0  tick
   19.0  tick
   20.0  tick
   30.0  Boom!!



A source fragment
~~~~~~~~~~~~~~~~~~~

One useful program pattern is the *source*. This is a process object
with an execution method that sequentially activates other process
objects -- it is a source of other process objects. Random arrivals
can be modelled using random intervals between activations.

In the following example a source creates and activates a series of
customers who arrive at regular intervals of 10.0 units of time. This
continues until the simulation time exceeds the specified *finishTime*
of 33.0.  (Of course, to model customers with 'random' interarrival
times the *yield hold* method could use an *exponential* random
variate (*expovariate( )*) instead of the constant 10.0 interarrival
time used here.)  The example assumes that the *Customer* class has
previously been defined with a PEM_ called *run*::

    class Source(Process):

       def execute(self, finish):
          while now( ) < finish:
             c = Customer( )          # create a new customer object, and
             activate(c,c.run( ))     # activate it 'now'
             print now( ), ' customer'
             yield hold,self,10.0

    initialize( )
    g = Source( )                              # create a Source object, g
    activate(g,g.execute(33.0),at=0.0)   # start the source object, g
    simulate(until=100)

.. ------example-------------



Asynchronous interruptions
~~~~~~~~~~~~~~~~~~~~~~~~~~

An active process can be interrupted by another but cannot interrupt
itself. The *interrupter* process will use the following statement to
interrupt the *victim* process object.

* **self.interrupt(victim)**

The interrupt is just a signal. After this statement, the
*interrupter* process object continues its PEM.

For the interrupt to have an immediate effect, the *victim* process
object must be *active*, that is, it must have an event scheduled for
it (it is 'executing' a *yield hold,self,t*). If the *victim*
is not active (that is, it is either *passive* or *terminated*) the
interrupt has no effect on it. In particular, process objects queuing
for service cannot be interrupted because they are
*passive*. Processes which have acquired a resource are *active* and
can be interrupted.

If interrupted, the *victim* returns from its *yield hold*
prematurely. It should then check if it has been interrupted by calling

* **self.interrupted( )** which returns *True* if it has been
  interrupted. It can then either continue in the current activity or
  switch to an alternative, making sure it tidies up the current
  state, such as releasing any resources it owns. When
  *self.interrupted( ) == True*:

 * **self.interruptCause** is a reference to the *interrupter* object.

 * **self.interruptLeft** gives the time remaining in the interrupted
   *yield hold*.

The interruption is reset (that is, 'turned off') at the *victim's*
next call to a *yield hold*. It can also be reset by calling

* **self.interruptReset( )**

Here is an example of a simulation with interrupts. A bus is subject
to breakdowns which are modelled as interruptions caused by a
``Breakdown`` Process. Notice that during the first *yield hold*,
interrupts may occur, so a reaction to any interrupts (that is,
triggering a delay for repairs) has been programmed by testing
``self.interrupted( )``.  In this example the ``Bus`` process class
does not require an ``__init__( )`` method::

    from SimPy.Simulation import *

    class Bus(Process):

      def operate(self,repairduration,triplength):    # PEM
         tripleft = triplength             # time needed to finish trip
         while tripleft > 0:
            yield hold,self,tripleft         # try to finish the trip
            if self.interrupted( ):          # if another breakdown occurs
                  print self.interruptCause.name, 'at %s' %now( )
                  tripleft=self.interruptLeft    # time to finish the trip
                  self.interruptReset( )              # end interrupt state
                  reactivate(br,delay=repairduration) # restart breakdown br
                  yield hold,self,repairduration        # delay for repairs
                  print 'Bus repaired at %s' %now( )
            else:
                  break             # no more breakdowns, bus finished trip
         print 'Bus has arrived at %s' %now( )

    class Breakdown(Process):
       def __init__(self,myBus):
           Process.__init__(self,name='Breakdown '+myBus.name)
           self.bus=myBus

       def breakBus(self,interval):           # process execution method
           while True:
              yield hold,self,interval      # breakdown interarrivals
              if self.bus.terminated( ): break
              self.interrupt(self.bus)      # breakdown to myBus

    initialize( )
    b=Bus('Bus')                                       # create a bus object
    activate(b,b.operate(repairduration=20,triplength=1000))
    br=Breakdown(b)                      # create breakdown br to bus b
    activate(br,br.breakBus(300))
    simulate(until=4000)
    print 'SimPy: No more events at time %s' %now( )


The output from this example::

    Breakdown Bus at 300
    Bus repaired at 320
    Breakdown Bus at 620
    Bus repaired at 640
    Breakdown Bus at 940
    Bus repaired at 960
    Bus has arrived at 1060
    SimPy: No more events at time 1260

Where interrupts can occur, the victim of interrupts must test for
interrupt occurrence after every appropriate *yield hold* and react
appropriately to it. A victim holding a service facility when it gets
interrupted continues to hold it, unless the facility is explicitly
released.


Advanced synchronisation/scheduling capabilities
------------------------------------------------

All scheduling constructs discussed so far are either time-based,
i.e., they make processes wait until a certain time has passed, or use
direct reactivation of processes. For a wide range of models, these
constructs are totally satisfactory and sufficient.

In some modelling situations, the *SimPy* scheduling constructs are
too rich or too generic and could be replaced by simpler, safer
constructs. *SimPy*'s synchronisation by event signalling is one
such construct.

On the other hand, there are models which require
synchronisation/scheduling by other than time-related wait
conditions. *SimPy* has a general `wait until`_ construct to support clean
implementation of such models.

Event_ signalling is particularly useful in situations where processes
must wait for completion of activities of unknown duration. This
situation is often encountered, e.g. when modelling real time systems
or operating systems.

.. _Event:
.. _SimEvent:

Defining a SimEvent
~~~~~~~~~~~~~~~~~~~

Events in *SimPy* are objects of class **SimEvent** [#]_.

.. [#] This name was chosen because the term 'event' is already being
   used in Python for e.g. tkinter events or in Python's standard
   library module *signal -- Set handlers for asynchronous events*.

A *SimEvent*, ``sE``, is established by the following statement::

   sE = SimEvent(name='a_SimEvent')

A SimEvent, ``sE``, has the following attributes:


   - ``sE.occurred`` (boolean, initially ``False``) to indicate
     whether an event has happened (has been signalled)

   - ``sE.waits`` a list of processes waiting for the event

   - ``sE.queues`` a FIFO queue of processes queueing for the event

   - ``sE.signalparam`` a possible payload from the *signal* method


Waiting or Queueing for a SimEvent
++++++++++++++++++++++++++++++++++

A process can *wait* for events by issuing::

   yield waitevent,self,<events part>

where *<events part>* can be:

     - an event variable, e.g. ``myEvent``

     - a tuple of events, e.g. ``(myEvent,myOtherEvent,TimeOut)``, or

     - a list of events, e.g. ``[myEvent,myOtherEvent,TimeOut]``

If one of the events in the *<events part>* has already happened, the
process continues.  The ``occurred`` flag of the event(s) is reset to
``False``.

If none of the events in the *<events part>* has happened, the process
is passivated after joining the set of processes waiting for all the
events.

Processes can *queue* for events by issuing::

   yield queueevent,self,<events part>

where the  <events part> is as defined above.

If one of the events in the *<events part>* has already happened, the
process continues.  The ``occurred`` flag of the event(s) is reset to
``False``.

If none of the events in the *<events part>* has happened, the process
is passivated after joining the FIFO queue of processes queuing for
all the events.

Signalling a SimEvent
+++++++++++++++++++++

To signal a *SimEvent*, ``sE``, a process must call::

   sE.signal(<payload parameter>)

The *payload parameter* is optional. It can be of any Python
type. It can be read by the process(es) triggered by the signal as the
SimEvent attribute ``sE.signalparam``, like ``message =
sE.signalparam``.


When this is called, the flag ``sE.occurred`` is set to ``True``
if the waiting set and the queue are both empty. Otherwise, all
processes in the ``sE.waits`` list are reactivated at the current
time, as well as the *first* process in the ``sE.queues`` FIFO queue.

An Example using SimEvent
+++++++++++++++++++++++++

Here is a small, complete *SimPy* script illustrating these constructs::

   from SimPy.Simulation import *

   class Waiter(Process):
       def waiting(self,myEvent):
           yield waitevent,self,myEvent
           print '%s: after waiting, event %s has happened'%(now( ),myEvent.name)

   class Queuer(Process):
       def queueing(self,myEvent):
           yield queueevent,self,myEvent
           print '%s: after queueing, event %s has happened'%(now( ),myEvent.name)
           print '   just checking: event(s) %s fired'%([x.name for x in self.eventsFired])

   class Signaller(Process):
       def sendSignals(self):
           yield hold,self,1
           event1.signal( )
           yield hold,self,1
           event2.signal( )
           yield hold,self,1
           event1.signal( )
           event2.signal( )

   initialize( )
   event1=SimEvent('event1'); event2=SimEvent('event2')
   s=Signaller( ); activate(s,s.sendSignals( ))
   w0=Waiter( ); activate(w0,w0.waiting(event1))
   w1=Waiter( ); activate(w1,w1.waiting(event1))
   w2=Waiter( ); activate(w2,w2.waiting(event2))
   q1=Queuer( ); activate(q1,q1.queueing(event1))
   q2=Queuer( ); activate(q2,q2.queueing(event1))
   simulate(until=10)

When run, this produces::

   1: after waiting, event event1 has happened
   1: after waiting, event event1 has happened
   1: after queueing, event event1 has happened
      just checking: event(s) ['event1'] fired
   2: after waiting, event event2 has happened
   3: after queueing, event event1 has happened
      just checking: event(s) ['event1'] fired

When *event1* fired at time 1, two processes (*w0* and *w1*) were
waiting for it and both got reactivated. Two processes were queueing
for it (*q1* and *q2*), but only one got reactivated. The second
queueing process got reactivated when event1 fired again.  The 'just
checking' line reflects the content of the process' *self.eventsFired*
attribute.

.. _`wait until`:

'wait until' synchronisation -- waiting for any condition
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Simulation models, where progress of a process depends on a general
condition involving non-time-related state-variables (such as
``goodWeather OR (nrCustomers>50 AND price<22.50)``), are difficult to
implement with *SimPy* constructs prior to version 1.5. They require
*interrogative* scheduling, while all other *SimPy* synchronisation
constructs are *imperative*: after every *SimPy* event, the condition
must be tested until it becomes True.  Effectively, a new (hidden,
system) process has to interrogate the value of the
condition. Clearly, this is not as efficient as the event-list
scheduling used for the other *SimPy* constructs. The *SimPy*
implementation therefore only activates that interrogation process
when there is a process waiting for a condition. When this is not the
case, the runtime overhead is minimal (about 1 percent extra runtime).
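
The interrogation idea can be sketched in plain Python: after every
executed event, the waited-for condition is re-tested. This is an
invented illustration of the mechanism (the function name and the
``damage`` state are made up), not SimPy's implementation::

   import heapq

   def run_with_waituntil(events, cond):
       # After every executed event, re-test the waited-for
       # condition -- interrogative, not event-list, scheduling.
       state = {'damage': 0}
       heap = list(events)
       heapq.heapify(heap)
       fired_at = None
       while heap:
           t, hit = heapq.heappop(heap)
           state['damage'] += hit                # execute the event
           if fired_at is None and cond(state):  # interrogate
               fired_at = t
       return fired_at

   # The condition 'damage > 5' first holds after the event at 3.0:
   when = run_with_waituntil([(1.0, 2), (2.0, 2), (3.0, 3)],
                             cond=lambda s: s['damage'] > 5)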

The new construct takes the form:

   **yield waituntil, self, <cond>**

*<cond>* is a reference to a function without parameters which returns
the state of condition to be waited for as a boolean value.

Here is a simple program using the *yield waituntil* construct. The
condition to be waited for is defined in the function ``killed``,
defined in the ``life`` PEM of the ``Player`` process::

   from SimPy.Simulation import *
   import random
   class Player(Process):

       def __init__(self,lives=1):
           Process.__init__(self)
           self.lives=lives
           self.damage=0

       def life(self):
           self.message='I survived alien attack!'

           def killed( ):                  # test condition
               return self.damage>5

           while True:
               yield waituntil,self,killed
               self.lives-=1; self.damage=0
               if self.lives==0:
                   self.message= 'Wiped out by alien at time %s!'%now( )
                   stopSimulation( )

   class Alien(Process):

       def fight(self):
           while True:
               if random.randint(0,10)<2: #simulate firing
                   target.damage+=1       #hit target
               yield hold,self,1

   initialize( )
   gameOver=100
   target=Player(lives=3); activate(target,target.life( ))
   shooter=Alien( ); activate(shooter,shooter.fight( ))
   simulate(until=gameOver)
   print target.message

In summary, the ``wait until`` construct is the most powerful
synchronisation construct.  It effectively generalises all other SimPy
synchronisation constructs, i.e., it could replace all of them (but at
a runtime cost).

[Return to Top_ ]

.. ==================================================================


Resources
-------------------

A *Resource* models a congestion point where there may be
queueing. For example, in a manufacturing plant, a *Task* (modelled as
a *process*) needs work done at a *Machine* (modelled as a
*resource*). If a *Machine* unit is not available, the *Task* will
have to wait until one becomes free. The *Task* will then have the use
of it as long as it needs it. It is not available for other *Tasks*
until *released*. These actions are all automatically taken care of by
the *SimPy Resource*.

A resource can have a number of identical *units*. For example, a
number of identical ``Machine`` units. A process gets service by
*requesting* a unit of the resource and, when it is finished,
*releasing* it. A resource maintains a queue of waiting processes and
another list of processes using it.  These are defined and updated
automatically.

Defining a Resource
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A Resource, ``r``,  is established by the following statement::

 r=Resource(capacity=1,
            name='a_resource',
            unitName='units',
            qType=FIFO,  preemptable=False,
            monitored=False,
            monitorType=Monitor)

where

  - *capacity* is the number of identical units of the resource
    available.
  - *name* is the name by which the resource is known (eg
    ``'gasStation'``).
  - *unitName* is the name of a unit of the resource (eg ``'pump'``).
  - *qType* is either ``FIFO`` or ``PriorityQ``. It specifies the
    queue discipline of the waiting queue of processes; typically
    this is ``FIFO`` (First-In, First-Out), which is the default
    value.
  - *preemptable* is a boolean (``False`` or ``True``). If it is
    ``True``, a process being put into the queue may also
    pre-empt a lower-priority process already using a unit of the
    resource.  This only has an effect when ``qType == PriorityQ``.
  - *monitored* is a boolean (``False`` or ``True``) that indicates
    whether the sizes of the ``waitQ`` and ``activeQ`` queues are to
    be monitored (see Monitors_, below).
  - *monitorType* is either ``Monitor`` or ``Tally`` and is the
    variety of monitor to be used. (see Monitors_, below)

A Resource, ``r``,  has the following attributes:

  - ``r.n`` The number of units that are currently free.
  - ``r.waitQ`` A waiting queue (list) of processes (FIFO by
    default). ``len(r.waitQ)`` is the
    number of Processes held in the waiting queue at any time.
  - ``r.activeQ`` A queue (list) of processes holding units.
    ``len(r.activeQ)`` is the  number of Processes held in the
    active queue at any time.
  - ``r.waitMon`` A Monitor_ automatically recording the activity
    in ``r.waitQ`` (if *monitored* is ``True``).
  - ``r.actMon``  A Monitor_ automatically recording the activity
    in ``r.activeQ`` (if *monitored* is ``True``).


Requesting a unit of a Resource
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A process can request and later release a unit of resource, *r*, in a
Process Execution Method using the following yield commands:


* **yield request, self, r** requests one unit of resource,
  *r*. The process may be temporarily queued and suspended until
  one is available.

  If, or when, a unit is free, the requesting process will take one and continue
  its execution. The resource will record that the process is using a
  unit (that is, the process will be listed in *r.activeQ*).

  If none is free, the process will be automatically placed in
  the resource's waiting queue, *r.waitQ*, and suspended.  When a unit
  eventually becomes available, the first process in the waiting
  queue, taking account of the priority order, will be allowed to take
  it. That process is then reactivated.

  If the resource has been defined with *qType==PriorityQ* and
  *preemptable==True*, the requesting process can pre-empt a
  lower-priority process already using a unit (see `Requesting a unit
  of a Resource with preemptive priority`_, below).

* **yield release,self,r** releases the unit of *r*. This may
  have the side-effect of allocating the released unit to the next
  process in the Resource's waiting queue.

  In this example, the current Process requests and, if necessary,
  waits for a unit of a Resource, ``r``.  On acquisition it holds the
  unit while it pauses for a random time (exponentially distributed,
  mean 20.0) and then releases it again::

     yield request,self,r
     yield hold,self,expovariate(1.0/20.0)
     yield release,self,r



Requesting a unit of a Resource with priority
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If a Resource, ``r``, is defined with *priority* queueing (that is,
*qType==PriorityQ*), a request can be made for a unit by:

* **yield request,self,r,P** requests a unit with priority *P*, which
  is a real or an integer.  Larger values of *P* represent higher
  priorities, and such requests go to the head of the *r.waitQ* if
  there are not enough units immediately available.

Here is an example of a complete script where priorities are
used. Four clients with different priorities request a resource unit
from a server at the same time. They get the resource in the order set
by their relative priorities::

   from SimPy.Simulation import *
   class Client(Process):
       inClients=[]
       outClients=[]

       def __init__(self,name):
          Process.__init__(self,name)

       def getserved(self,servtime,priority,myServer):
           Client.inClients.append(self.name)
           print self.name, 'requests 1 unit at t=',now( )
           yield request, self, myServer, priority
           yield hold, self, servtime
           yield release, self,myServer
           print self.name,'done at t=',now( )
           Client.outClients.append(self.name)

   initialize( )
   server=Resource(capacity=1,qType=PriorityQ)
   c1=Client(name='c1') ; c2=Client(name='c2')
   c3=Client(name='c3') ; c4=Client(name='c4')
   activate(c1,c1.getserved(servtime=100,priority=1,myServer=server))
   activate(c2,c2.getserved(servtime=100,priority=2,myServer=server))
   activate(c3,c3.getserved(servtime=100,priority=3,myServer=server))
   activate(c4,c4.getserved(servtime=100,priority=4,myServer=server))
   simulate(until=500)

   print 'Request order: ',Client.inClients
   print 'Service order: ',Client.outClients


This program results in the following output::

   c1 requests 1 unit at t= 0
   c2 requests 1 unit at t= 0
   c3 requests 1 unit at t= 0
   c4 requests 1 unit at t= 0
   c1 done at t= 100
   c4 done at t= 200
   c3 done at t= 300
   c2 done at t= 400
   Request order:  ['c1', 'c2', 'c3', 'c4']
   Service order:  ['c1', 'c4', 'c3', 'c2']

.. ------example-------------

Although *c1* has the lowest priority, it requests and gets the
resource unit first.  When it completes, *c4* has the highest
priority of all waiting processes and gets the resource next, etc.
Note that there is no preemption of processes being served.


Requesting a unit of a Resource with preemptive priority
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In some models, higher-priority processes can preempt lower-priority
processes when all resource units have been allocated. A resource with
preemption can be created by setting the arguments *qType==PriorityQ*
and *preemptable==True* (any true value, such as ``1``, will do).

When a process requests a unit of resource and all units are in use it
can preempt a lower priority process holding a resource unit. If there
are several processes already active (that is, in the *activeQ*), the
one with the lowest priority is suspended, put at the front of the
*waitQ* and the preempting process gets its resource unit and is
put into the *activeQ*. The preempted process is the next one to get a
resource unit (unless another preemption occurs).  The time for which
the preempted process had the resource unit is taken into account when
the process gets into the *activeQ* again. Thus, the total hold time
is always the same, regardless of whether or not a process gets
preempted.
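The hold-time bookkeeping can be checked with plain arithmetic (the
numbers match the two-client example that follows; none of this is
SimPy code):

```python
# c1 starts service at t=0; c2 (higher priority) arrives at t=50.
servtime = 100           # total hold time each client needs
preempt_at = 50          # when c2 preempts c1
held_before = preempt_at # c1 had already held the unit this long
remaining = servtime - held_before  # time c1 still owes

c2_done = preempt_at + servtime     # c2 runs to completion first
c1_done = c2_done + remaining       # c1 resumes with what was left
# c2_done == 150, c1_done == 200: c1's total hold time is still 100.
```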

Here is a complete script in which two clients of different priority
compete for the same resource unit::

     from SimPy.Simulation import *
     class Client(Process):
         def __init__(self,name):
            Process.__init__(self,name)

         def getserved(self,servtime,priority,myServer):
             print self.name, 'requests 1 unit at t=',now( )
             yield request, self, myServer, priority
             yield hold, self, servtime
             yield release, self,myServer
             print self.name,'done at t=',now( )

     initialize( )
     server=Resource(capacity=1,qType=PriorityQ,preemptable=1)
     c1=Client(name='c1')
     c2=Client(name='c2')
     activate(c1,c1.getserved(servtime=100,priority=1,myServer=server),at=0)
     activate(c2,c2.getserved(servtime=100,priority=9,myServer=server),at=50)
     simulate(until=500)


The output from this program is::

   c1 requests 1 unit at t= 0
   c2 requests 1 unit at t= 50
   c2 done at t= 150
   c1 done at t= 200

Here, *c2*  preempted *c1* at *t=50*. At that time, *c1* had held the
resource for 50 of the total of 100 time units. *c1* got the resource
back when *c2* completed at *t=150*.


.. ---------------------------------------------------------------------

Reneging -- leaving a queue before acquiring a resource
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In most real world situations, processes do not wait for a requested
resource forever, but leave the queue (*renege*) after a certain time
or when some other condition has arisen.

SimPy provides an extended (compound) *yield request* statement form
to model reneging. If the resource has been defined with
*qType==PriorityQ*, the request may also carry a priority *P*:

**yield (request,self,resource[,P]),(<reneging clause>)**.

The structure of a SimPy model with
reneging is::

 yield (request,self,resource),(<reneging clause>)
 if self.acquired(resource):
    ## process got resource and did not renege
    . . . .
    yield release,self,resource
 else:
    ## process reneged before acquiring resource
    . . . . .

A call to the method **acquired(resource)** is mandatory after a
compound *yield request* statement.  It is not only a predicate
indicating whether or not the process has acquired the resource; it
also removes a reneging process from the resource's *waitQ*.

There are two reneging clauses, one for reneging after a certain time
and one for reneging when an event has happened.

Reneging after a time limit
+++++++++++++++++++++++++++

To make a process renege after a certain time, the reneging clause
used is identical to the parameters of a *yield hold* statement,
namely **hold, self, waittime**:

* **yield (request,self,res[,P]),(hold,self,waittime)**
  process *self* requests one unit of resource *res*, with relative
  priority *P* if it is given. This blocks the process from continuing
  execution if no resource unit is available. If the process has not
  acquired a resource unit after a time period *waittime*, it leaves
  the queue and its execution continues.

An example code snippet::

    ## Queuing for a parking space in a parking lot
    . . . .
    parking_lot=Resource(capacity=10)
    patience=5   # time units
    park_time=60 # time units
    . . . .
    # wait no longer than 'patience' time units for a parking space
    yield (request,self,parking_lot),(hold,self,patience)
    if self.acquired(parking_lot):
       # park the car
       yield hold,self,park_time
       yield release,self,parking_lot
    else:
       # give up
       print 'I have had enough, I am going home'


Reneging when an event has happened
+++++++++++++++++++++++++++++++++++

To make a process renege at the occurrence of an event, the reneging
clause used is identical to the parameters of a 'yield waitevent'
statement, namely **waitevent,self,events**:

* **yield (request,self,res[,P]),(waitevent,self,events)** process
  *self* requests one unit of resource *res*, with priority *P* if it
  is given. (*events* can be either a single SimEvent or a list or
  tuple of SimEvents.) This blocks the process from continuing
  execution if no resource unit is available. If the process has not
  acquired a resource unit when one of the events in *events* is
  signalled, it leaves the queue and its execution continues.

An example code snippet::

 ## Queuing for movie tickets
 . . . .
 tickets=Resource(capacity=100)
 sold_out=SimEvent( ) # signals 'out of tickets'
 too_late=SimEvent( ) # signals 'too late for this show'
 . . . .
 # Leave the ticket-counter queue when the movie is sold out or it's too late for the show
 yield (request,self,tickets),(waitevent,self,[sold_out,too_late])
 if self.acquired(tickets):
    # watch the movie
    yield hold,self,120
    yield release,self,tickets
 else:
    # did not get a ticket
    print 'Who needs to see this silly movie anyhow?'


Monitoring a resource
~~~~~~~~~~~~~~~~~~~~~

The section `Recording Simulation Results`_ describes the use of Monitors in
general.

If the argument *monitored* is set to *True* for a resource, *r*, the
length of the waiting queue, *len(r.waitQ)*, and of the active queue,
*len(r.activeQ)*, are both monitored automatically (see Monitors_,
below). This is particularly useful for the waiting queue, which
cannot be monitored from outside the resource. The monitors are
called *r.waitMon* and *r.actMon*, respectively.

The argument *monitorType* indicates which variety of monitor is to be
used, either Monitor_ or Tally_. The default is *Monitor*. If this is
chosen, complete time series for both queue lengths are maintained so
that graphs can be plotted and statistics, such as the time average,
can be found at any time. If *Tally* is chosen, statistics are
accumulated continuously and time averages can be reported but, to
save memory, no complete time series is kept. Histograms can be
generated, though.
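The time average these monitors report is the time-weighted mean: the
area under the (stepwise-constant) queue-length curve divided by the
elapsed time. A stand-alone sketch of that calculation (illustrative
only, not the SimPy implementation):

```python
def time_average(series, now):
    """Time-weighted mean of a step function recorded as
    (time, value) observations, evaluated at time `now`."""
    total = 0.0
    # pair each observation with the start of the next one
    for (t, y), (t_next, _) in zip(series, series[1:] + [(now, None)]):
        total += y * (t_next - t)   # value y was held over [t, t_next)
    return total / (now - series[0][0])

# queue empty for 5 time units, then length 2 for 5 units -> average 1.0
assert time_average([(0, 0), (5, 2)], now=10) == 1.0
```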

In this example, the resource ``server`` is monitored, using the
*Tally* variety of *Monitor*. The time average of the length of each
queue is calculated::

  from SimPy.Simulation import *
  class Client(Process):
       inClients=[]
       outClients=[]

       def __init__(self,name):
          Process.__init__(self,name)

       def getserved(self,servtime,myServer):
           print self.name, 'requests 1 unit at t=',now( )
           yield request, self, myServer
           yield hold, self, servtime
           yield release, self,myServer
           print self.name,'done at t=',now( )

  initialize( )
  server=Resource(capacity=1,monitored=True,monitorType=Tally)
  c1=Client(name='c1') ; c2=Client(name='c2')
  c3=Client(name='c3') ; c4=Client(name='c4')
  activate(c1,c1.getserved(servtime=100,myServer=server))
  activate(c2,c2.getserved(servtime=100,myServer=server))
  activate(c3,c3.getserved(servtime=100,myServer=server))
  activate(c4,c4.getserved(servtime=100,myServer=server))
  simulate(until=500)

  print 'Average waiting',server.waitMon.timeAverage( )
  print 'Average in service',server.actMon.timeAverage( )


The output from this program is::

  c1 requests 1 unit at t= 0
  c2 requests 1 unit at t= 0
  c3 requests 1 unit at t= 0
  c4 requests 1 unit at t= 0
  c1 done at t= 100
  c2 done at t= 200
  c3 done at t= 300
  c4 done at t= 400
  Average waiting 1.5
  Average in service 1.0

[Return to Top_ ]

.. ==========================================================================

..
 Containers
 --------------

 There are two buffer types, class Level_ and class Store_ which
 model the asynchronous production of items by a process and their
 consumption (possibly by another process). Both are
 capacity-constrained in the number of items they can buffer but can
 be unbounded. By a *yield put* command, the producer puts an amount
 or one or more objects into the buffer, and by a *yield get*
 command, a process takes an amount or one or more objects out. A
 process issuing the *put* is blocked if the items to be added to the
 buffer would exceed its capacity, and a process issuing a *get*
 command blocks when the number of items requested is larger than the
 number of items in the buffer.



Levels
-----------

A *Level* is used to buffer quantities (integer or real). Processes
can add amounts to and take amounts from the buffer and are
automatically queued if limits are reached. In contrast, Stores_ model
the buffering of distinguishable items by maintaining queues of item
instances.


Defining a Level
~~~~~~~~~~~~~~~~~~

A *Level* holds a scalar (*real* or *integer*) level and is
established by the following statement::

    cB = Level(name='a_level', unitName='units',
                     capacity='unbounded',
                     initialBuffered=0,
                     putQType=FIFO, getQType=FIFO,
                     monitored=False, monitorType=Monitor)

where

 - *name* (string type) is the name by which the buffer is known (eg
   ``'inventory'``).
 - *unitName* (string type) is the name of the unit of the buffer (eg
   ``'widgets'``).
 - *capacity* (positive real or integer) is the capacity of the
   buffer. The default value is set to 'unbounded' which translates as ``sys.maxint``.

 - *initialBuffered* is the initial content of the buffer.
 - *putQType* (``FIFO`` or ``PriorityQ``) is the (producer) queue
   discipline.
 - *getQType* (``FIFO`` or ``PriorityQ``) is the (consumer) queue discipline.
 - *monitored* (boolean) sets the monitoring of the queues and the buffer.
 - *monitorType* (``Monitor`` or ``Tally``) sets the type of Monitor_
   to be used.

A Level, ``cB``, has the following additional attributes:

 - ``cB.amount`` is the amount currently held in the Level. (**Note**: in a
   printout, this attribute will be shown as ``nrBuffered``.)
 - ``cB.putQ`` is a queue of processes waiting to add amounts to the
   buffer. ``len(cB.putQ)`` is the number of processes waiting to add amounts.
 - ``cB.getQ`` is a queue of processes waiting to get amounts from
   the buffer. ``len(cB.getQ)`` is the number of processes waiting to
   get amounts.
 - ``cB.monitored`` is ``True`` if the queues are to be monitored. In
   this case  ``cB.putQMon``, ``cB.getQMon``, and  ``cB.bufferMon`` exist.
 - ``cB.putQMon`` is a Monitor_ observing ``cB.putQ``.
 - ``cB.getQMon`` is a Monitor_ observing ``cB.getQ``.
 - ``cB.bufferMon``  is a Monitor_ observing ``cB.amount``.


Getting amounts  from and putting amounts into a Level
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Processes can extract amounts from the buffer and other processes can
replenish it.

A process, the *consumer*, can extract an amount ``q`` from a
*Level*, ``cB``, by the *yield get* statement::

    yield get,self,cB,q

Here ``q`` can be a positive real or integer. If the buffer does not
hold enough (that is ``q > cB.amount``) the requesting process will be
passivated and queued (in ``cB.getQ``). It will be reactivated when
there is enough.

A process, the *producer*, which is usually (but not necessarily)
different from the *consumer*, can add an amount ``r`` to the
*Level* by a *yield put* statement::

     yield put,self,cB,r

Here ``r`` can be a positive real or integer. If this statement would lead to an
overflow (that is, ``cB.amount + r > cB.capacity``) the putting
process is passivated and queued (in ``cB.putQ``) until there is
sufficient room.
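The blocking rules for a Level can thus be summarised as two
conditions (a minimal sketch; the function names are illustrative, not
part of the SimPy API):

```python
def get_blocks(amount, q):
    """A consumer's `yield get` blocks while the buffer holds
    less than the amount q it asks for."""
    return q > amount

def put_blocks(amount, r, capacity):
    """A producer's `yield put` blocks while adding r would
    overflow the capacity."""
    return amount + r > capacity
```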



Getting amounts from and putting amounts into a Level with priority
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If ``getQType==PriorityQ`` a priority parameter may be used to control
the order in which the consumer process is queued (a higher value
means higher priority). Thus the statement::

    yield get,self,cB,q,P

where ``P`` is a real or integer value, will extract an amount ``q``
from the buffer with priority ``P``.


If ``putQType==PriorityQ`` a priority parameter may be used to control
the order in which the producer process is queued (a higher value
means higher priority). Thus the statement::

    yield put,self,cB,r,P

where ``P`` is a real or integer value, will add an amount ``r``
to the buffer with priority ``P``. Such queueing will only take
place when the ``capacity`` would be exceeded.



An inventory example
~~~~~~~~~~~~~~~~~~~~

Random demands of material (Normal distribution, mean 1.2 units) from
an inventory occur each day.  A ``stock`` (an object of the *Level*
class) is refilled by 10 units at fixed intervals of 10 days. There
are no back-orders but a count of the total stockouts is maintained. A
trace is printed out each day and whenever there is a stockout::

 from SimPy.Simulation import *
 from random import normalvariate

 class Deliver(Process):
     tot=0.0
     def deliver(self):
         while True:
             lead = 10.0
             delivery=10.0
             yield put,self,stock,delivery
             print '%7.4f  deliver: %7.4f stock : %6.4f'%\
                   (now( ),delivery,stock.amount)
             yield hold,self,lead

 class Demand(Process):
     stockout=0.0
     def demand(self):
         day=1.0
         while True:
             yield hold,self,day
             dd=normalvariate(1.2,0.2)
             ds= dd-stock.amount
             if dd>stock.amount:
                 yield get,self,stock,stock.amount
                 self.stockout+=ds
                 print '%7.4f  stockout: %7.4f'%(now( ),-ds)
             else:
                 yield get,self,stock,dd
             print '%7.4f  demand: %7.4f buffer: %6.4f'%\
                   (now( ),dd,stock.amount)

 stock=Level(monitored=True)

 initialize( )
 d = Deliver( )
 activate(d,d.deliver( ))
 dem = Demand( )
 activate(dem,dem.demand( ))
 simulate(until=50)

 result=(now( ),stock.bufferMon.mean( ),dem.stockout)
 print '%7.4f ave stock: %7.4f total stockout: %7.4f'%result


[Return to Top_ ]

.. =================================================================

Stores
-----------

A *Store* buffers a number of *distinguishable* objects (of any
type, including processes). Objects are put into the *Store* by
processes and taken out by others. In contrast, Levels_ model the
buffering of single quantities.


Defining a Store
~~~~~~~~~~~~~~~~~~~

A *Store* is established by the following statement::

 sB = Store(name='a_store', unitName='units',
                 capacity='unbounded',
                 initialBuffered=None,
                 putQType=FIFO, getQType=FIFO,
                 monitored=False, monitorType=Monitor)

where

 - *name* (string type) is the name by which the buffer is known (eg
   ``'Inventory'``).
 - *unitName* (string type) is the name of the unit of the buffer (eg
   ``'widgets'``).
 - *capacity* (positive real or integer) is the capacity of the buffer.
 - *initialBuffered* (a list) is the initial content of the buffer.
 - *putQType* (``FIFO`` or ``PriorityQ``) is the (producer) queue
   discipline.
 - *getQType* (``FIFO`` or ``PriorityQ``) is the (consumer) queue discipline.
 - *monitored* (boolean) sets the monitoring of the queues and the buffer.
 - *monitorType* (``Monitor`` or ``Tally``) sets the type of monitor
   to be used.

A *Store* has the following additional attributes:

 - ``sB.theBuffer`` is a queue (list) containing the buffered objects,
   in FIFO order unless the user is storing them in a particular
   order (see `Storing objects in an order`_, below). This is
   read-only and cannot be changed by the user. (**Note**: in a printout
   of a *Store* object, this attribute is shown as ``buffered``.)

 - ``sB.nrBuffered`` is the number of objects currently
   buffered. This is read-only and cannot be changed by the user.
 - ``sB.putQ`` is a queue of processes waiting to add objects to the
   buffer. ``len(sB.putQ)`` is the number of processes waiting to add objects.
 - ``sB.getQ`` is a queue of processes waiting to get objects from
   the buffer. ``len(sB.getQ)`` is the number of processes waiting to
   get objects.

 - ``sB.monitored`` is set to ``True`` when the buffer is created if
   the queues are to be monitored. In this case ``sB.putQMon``,
   ``sB.getQMon``, and ``sB.bufferMon`` exist.

 - ``sB.putQMon`` is a monitor observing ``sB.putQ``.
 - ``sB.getQMon`` is a monitor observing ``sB.getQ``.
 - ``sB.bufferMon``  is a monitor observing ``sB.nrBuffered``.


Getting objects from and putting objects into a Store
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Processes can extract (*get*) objects from the buffer and other processes can
add objects to it (using *put*).

A process can get the first ``n`` objects from a *Store*, ``sB``, by the
*yield get* statement::

    yield get,self,sB,n

If the buffer does not hold enough objects the requesting process will
be passivated and queued (in ``sB.getQ``). It will be reactivated when
the request can be satisfied.

The retrieved objects are returned in the list attribute ``got`` of
the requesting process.

A process (another or possibly the same one) can add a list of
objects to the *Store* by a *yield put* statement::

     yield put,self,sB,[S]

Here ``[S]`` is a list of any objects. If this statement would lead to
an overflow (that is, ``sB.nrBuffered + len([S]) > sB.capacity``) the
putting process is passivated and queued (in ``sB.putQ``) until there
is sufficient room.

The objects are stored in the form of a queue (``sB.theBuffer``) which
is in FIFO order unless the user has arranged to sort them into a
particular order (see `Storing objects in an order`_, below).

Getting objects from and putting objects into a Store with priority
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If ``getQType==PriorityQ`` a priority parameter may be used to control
the order in which the consumer process is queued (higher value has
higher priority). Thus the statement::

    yield get,self,sB,n,P

where ``P`` is a real or integer value, will extract a list of ``n``
objects from the buffer, ``sB``, with priority ``P``. The
retrieved objects are returned in the list attribute ``got`` of the
requesting process.

If ``putQType==PriorityQ`` a priority parameter may be used to control
the order in which the producer process is queued (higher value has
higher priority). Thus the statement::

    yield put,self,sB,S,P

where ``P`` is a real or integer value, will add the *list* of objects,
``S``, to the buffer with priority ``P``. Queueing will only take
place if the buffer ``capacity`` would be exceeded.


An example
~~~~~~~~~~~~

In this model of distinguishable objects, producer processes put pairs
of ``Widget`` objects into a *Store* and consumer processes take them
out three at a time::

   from SimPy.Simulation import *
   class ProducerD(Process):
       def produce(self):
           while True:
               yield put,self,buf,[Widget(9),Widget(7)]
               yield hold,self,10

   class ConsumerD(Process):
       def __init__(self):
           Process.__init__(self)
       def consume(self):
           while True:
               toGet=3
               yield get,self,buf,toGet
               assert len(self.got)==toGet
               print now( ),'Received widget weights',[x.weight for x in self.got]
               yield hold,self,11
   class Widget(Lister):
       def __init__(self,weight=0):
           self.weight=weight

   widgbuf=[]
   for i in range(10):
       widgbuf.append(Widget(5))

   initialize( )
   buf=Store(capacity=11,initialBuffered=widgbuf,monitored=True)
   for i in range(3):
       p=ProducerD( )
       activate(p,p.produce( ))
   for i in range(3):
       c=ConsumerD( )
       activate(c,c.consume( ))
   simulate(until=50)
   print 'Buffer:',buf.bufferMon
   print 'getQ:',buf.getQMon
   print 'putQ',buf.putQMon

[Return to Top_ ]


Storing objects in an order
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The buffer of a Store instance is a queue (FIFO, or First In, First
Out) by default.  Alternatively, it can be kept in a user-defined
order. For this, the user must define a function for reordering the
buffer and add it to the Store instance whose buffer order is to be
changed. Subsequently, the SimPy system will automatically call that
function after any addition (*put*) to the buffer.

An example::

   class Parcel:
           def __init__(self,weight):
                   self.weight=weight

   lightFirst=Store( )

   def getLightFirst(self,par):
           '''Lighter parcels to front of queue'''
           tmplist=[(x.weight,x) for x in par]
           tmplist.sort( )
           return [x for (key,x) in tmplist]

   lightFirst.addSort(getLightFirst)

Now any *yield get* will retrieve the lightest parcels in the buffer
of ``lightFirst``.

Note that this only changes the sorting order of this Store instance, NOT of
the class Store.
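The reordering can be exercised in isolation with plain objects. This
sketch uses a sort key instead of the decorate-sort-undecorate idiom
above, which gives the same lighter-first order; the free-standing
function here is illustrative and drops the ``self`` parameter that
``addSort`` supplies:

```python
class Parcel:
    def __init__(self, weight):
        self.weight = weight

def lighter_first(buffer):
    """Return the buffer reordered so lighter parcels come first."""
    # sorting on the weight alone avoids comparing Parcel objects
    return sorted(buffer, key=lambda parcel: parcel.weight)

parcels = [Parcel(9), Parcel(5), Parcel(7)]
assert [p.weight for p in lighter_first(parcels)] == [5, 7, 9]
```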



Master/Slave modelling with a Store
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The items in a *Store* can be of any class. A useful case for
modelling is when these objects are SimPy processes. We can then model
a Master/Slave situation, an asymmetrical cooperation between two or
more processes, with one process (the Master) being in charge of the
cooperation.

The Master requests one or more Slaves that have been put into the
buffer by the Producer (which may be the same process as the Slave).
For Master/Slave cooperation, the Slave has to be passivated (by a
*yield passivate* or *yield waitevent* statement) after it is *put*,
and reactivated when it has been retrieved and finished with. As this
is not done automatically by the *Store*, the Master has to signal the
end of the cooperation.

An example
~~~~~~~~~~~~~~~~~~~~

Cars arrive randomly at a Car Wash and add themselves to the
``waitingCars`` buffer. They wait (passively) for a ``doneSignal``.
There are two ``Carwash`` washers. These ``get`` a car, if one is
available, wash it, and then send the ``doneSignal`` to reactivate
it.

In this version of the model the ``Carwash`` is the Master and the
``Cars`` are the Slaves. Several cars are created first and make up
the initial set of cars waiting for service. Further cars are
generated randomly by the ``CarGenerator`` process. Each car *yield
puts* itself onto the ``waitingCars`` *Store* and immediately
passivates itself by waiting for a ``doneSignal`` from a car washer.

The car washers cycle round, *getting* the next car in the buffer,
washing it, and then sending it a ``doneSignal`` when finished.

It is also possible to restructure this model with the cars as
Masters and the car washers as Slaves::

   from SimPy.Simulation import *

   class Carwash(Process):
       '''Carwash is master'''
       def __init__(self,name):
           Process.__init__(self,name)

       def lifecycle(self):
           while True:
               yield get,self,waitingCars,1
               carBeingWashed=self.got[0]
               yield hold,self,washtime
               carBeingWashed.doneSignal.signal(self.name)

   class Car(Process):
       '''Car is slave'''
       def __init__(self,name):
           Process.__init__(self,name)
           self.doneSignal=SimEvent( )
       def lifecycle(self):
           yield put,self,waitingCars,[self]
           yield waitevent,self,self.doneSignal
           whichWash=self.doneSignal.signalparam
           print '%s car %s done by %s'%(now( ),self.name,whichWash)

   class CarGenerator(Process):
       def generate(self):
           i=0
           while True:
               yield hold,self,2
               c=Car(i)
               activate(c,c.lifecycle( ))
               i+=1

   washtime=5
   initialize( )
   waiting=[]
   for j in range(1,5):
       c=Car(name=-j)
       activate(c,c.lifecycle( ))
   waitingCars=Store(capacity=40,initialBuffered=waiting)
   for i in range(2):
       cw=Carwash('Carwash %s'%`i`)
       activate(cw,cw.lifecycle( ))
   cg=CarGenerator( )
   activate(cg,cg.generate( ))
   simulate(until=100)
   print 'waitingCars',[x.name for x in waitingCars.theBuffer]


[Return to Top_ ]

.. ==========================================================================

Random Number Generation
-------------------------

Simulation usually needs pseudo-random numbers. *SimPy* does not have
generators of its own but uses the standard `Python random module`_. A
good range of distributions is available. The module's documentation
should be consulted for details.

This module can be used in two ways: you can import the methods
directly or you can import the *Random* class and make your own random
objects. This gives multiple random streams, as in Simscript and
ModSim. Each object gives a different pseudo-random sequence.

Here the first, simpler, method is described. A single pseudo-random
sequence is used for all calls.

You *import* the methods you need from the *random* module. For example::

 from random import seed, random, expovariate, normalvariate

In simulation it is good practice to set the initial seed for the
pseudo-random sequence at the start of each run.


* ``seed(x)`` sets the initial seed for the pseudo-random sequence to
  the integer, *x*.

* ``random( )`` returns the next random floating point number in the
  range [0.0, 1.0).

* ``expovariate(lambd)`` returns a sample from the exponential
  distribution. *lambd* is *1.0/m*, where *m* is the mean value. (The
  parameter would be called *lambda*, but that is a reserved word in
  Python.) Returned values range from 0 to positive infinity. **Note**
  that, unfortunately, the parameter is *not* the mean of the
  distribution.


* ``normalvariate(mu,sigma)`` returns a sample from the normal
  distribution. *mu* is the mean, and *sigma* is the standard
  deviation. Returned values range from minus to plus infinity.


This example shows how the simple method is used. We set the initial
seed to 333555.  *X* and *Y* are pseudo-random variates from the
two distributions. Both distributions have the same mean::

   from random import seed, expovariate, normalvariate

   seed(333555)
   X = expovariate(0.1)
   Y = normalvariate(10.0, 1.0)


[Return to Top_ ]

.. ============================================================================

Recording Simulation Results
-----------------------------

Recorders are used to observe variables of interest and to return a
simple data summary either during or at the completion of a simulation
run. SimPy simulations often use either a **Tally** or a **Monitor**
object for this purpose. Each of these recorders observes one
variable. For example, we might use one monitor object to record the
waiting times for a sequence of customers and another to record the
total number of customers in the shop. In a discrete-event system the
number of customers changes only at arrival or departure events, and
it is at those events that the waiting times and the number in the
shop are observed. Although Monitors and Tallys provide elementary
statistics rather than sophisticated statistical analysis, they have
proved useful in many simulations.

The simpler class, **Tally**, records enough information (sums and
sums of squares) to return simple data summaries while the simulation
runs. It has the advantage of speed and low memory use. It can collect
data to produce a histogram. However, it does not preserve a
time-series usable in post-simulation statistical analysis.
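The sums-and-sums-of-squares idea can be sketched without SimPy as a
hypothetical *MiniTally* (an illustration only, not the SimPy class
itself)::

   class MiniTally:
       # Keeps only the count, sum, and sum of squares; the
       # individual observations are not stored.
       def __init__(self):
           self.n = 0
           self.sum = 0.0
           self.sumsq = 0.0

       def observe(self, y):
           self.n += 1
           self.sum += y
           self.sumsq += y * y

       def mean(self):
           return self.sum / self.n

       def var(self):
           # variance of the observations about their mean
           m = self.mean()
           return self.sumsq / self.n - m * m

   t = MiniTally()
   for y in [2.0, 4.0, 6.0]:
       t.observe(y)

Here *t.mean( )* is 4.0, computed from the accumulated sums alone;
this is why a Tally is fast and memory-cheap but cannot reproduce the
observed series afterwards.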

The more complicated class, **Monitor**, does preserve a complete
time-series of observed data values, *y*, and associated times,
*t*. It calculates the data summaries using these series only when
they are needed. It is slower and uses more memory than *Tally*. In
long simulations its memory demands may be a disadvantage.

Both varieties of recorder use the same *observe* method to record data
on the variable.


Defining Tallys and Monitors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To define a new **Tally** object:

* **m=Tally(name=<string>, ylab=<string>, tlab=<string>)**

  - *name* is the name of the tally object (default='a_Tally').
  - *ylab* and *tlab* are provided as labels for plotting graphs from
    the recorded data using facilities in the `SimPlot`_ package
    (defaults='y' and 't', respectively). (If a histogram_ is required,
    the method *setHistogram* must be called before observing starts.)

To define a new **Monitor** object:

* **m=Monitor(name=<string>, ylab=<string>, tlab=<string>)**

  - *name* is the name of the Monitor object (default='a_monitor').
  - *ylab* and *tlab* are provided as labels for plotting graphs from
    the data held in the monitor using facilities in the `SimPlot`_
    package (default='y' and 't', respectively). (If a histogram_ is
    required this can be requested at any time).

.. _histogram: Histograms_

Observing data
~~~~~~~~~~~~~~~~~

Both *Tallys* and *Monitors* use the *observe* method to record data.
In the following description, *r* is either a Tally or a Monitor object:

* **r.observe(y [,t])** records the current value of the variable, *y*
  and time *t* (the current time, *now( )*, if *t* is missing). A
  *Monitor* retains the two values as a sublist *[t,y]*. A *Tally*
  uses them to update the accumulated statistics.

  To ensure that time averages are calculated correctly, *observe*
  should be called immediately *after* a change in the variable. For
  example, if we are monitoring the number of jobs in a system, *N*,
  using monitor *r*, the correct sequence of commands on an arrival
  is::

     N = N+1      # FIRST, increment the number of jobs
     r.observe(N) # THEN observe the new value of N using r


The recording of data can be *reset* to start at any time in the
simulation:

* **r.reset([t])** resets the observations. The recorded data is
  re-initialised, and the starting time set to *t* or, if it is
  missing in the call, to the current simulation time, *now( )*.

Data summaries
~~~~~~~~~~~~~~~~~

The following simple data summaries can be obtained from either
Monitors or Tallys at any time during or after the simulation run:

* **r.count( )** the current number of observations. (In the case of a
  *Monitor*, *M*, this is the same as *len(M)*).

* **r.total( )** the sum of the *y* values.

.. figure:: images/Mon004.png
  :alt: Standard mean value
  :align: right

* **r.mean( )** the simple average of the observed *y* values, ignoring
  the times at which they were made.  This is *r.total( )/r.count( )*.

  If there are no observations, the message:
  'SimPy: No observations for mean' is printed.

* **r.var( )** the sample variance of the observations, ignoring the
  times at which they were made. This should be multiplied by
  *n/(n-1)*, where *n = r.count( )* if an estimate of the *population*
  variance is desired. The standard deviation is, of course, the
  square-root of the variance.

  If there are no observations, the message: 'SimPy: No observations
  for sample variance' is printed.

.. figure:: images/Mon005.png
  :alt: Time Average
  :align:  right

* **r.timeAverage([t])** the average of the time-weighted *y* graph,
  calculated from time 0 (or the last time *r.reset([t])* was called)
  to time *t* (the current simulation time, *now( )*, if *t* is
  missing).  This is determined from the area under the graph shown in
  the figure, divided by the total time of observation.  *y* is
  assumed to be continuous in time but changes in steps when
  *observe(y)* is called.

  If there are no observations, the message 'SimPy: No observations
  for timeAverage' is printed. If no time has elapsed, the message
  'SimPy: No elapsed time for timeAverage' is printed.
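The area-under-the-graph calculation can be sketched as a plain
function (a sketch, not the SimPy method itself); *y* is treated as a
step function that is 0 until the first observation::

   def time_average(series, t_end):
       # series: [t, y] pairs in time order, as a Monitor stores them
       area = 0.0
       last_t, last_y = 0.0, 0.0
       for t, y in series:
           area += last_y * (t - last_t)   # rectangle up to this event
           last_t, last_y = t, y
       area += last_y * (t_end - last_t)   # final segment out to t_end
       return area / t_end

   # y steps to 2 at t=0 and to 4 at t=5; the average over [0, 10] is 3.0
   avg = time_average([[0.0, 2.0], [5.0, 4.0]], 10.0)

This also shows why *observe* must be called after the variable
changes: each rectangle uses the value that held *before* the event.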


* **r.__str__( )** returns a string that briefly describes the current
  state of the monitor. This can be used in a print statement.


Special methods for Monitor
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The *Monitor* variety is a sub-class of *list* and has a few extra methods:

* **m[i]** holds the **i**-th observation as a list, *[ti, yi]*
* **m.yseries( )** a list of the recorded data values, *yi*
* **m.tseries( )** a list of the recorded times, *ti*
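Because a *Monitor* holds *[ti, yi]* sublists, the two series are
simply its columns; a plain list stands in for a Monitor here::

   m = [[0.0, 2.0], [5.0, 4.0], [8.0, 1.0]]   # stand-in for a Monitor
   tseries = [t for t, y in m]                # the recorded times
   yseries = [y for t, y in m]                # the recorded values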



Histograms
~~~~~~~~~~~~~~~~~

A histogram is an object that counts the observations that fall into a
number of specified ranges, called bins.

.. figure:: images/Mon006.png
  :align: right
  :alt: Histogram

* **h = histogram(low=<float>,high=<float>,nbins=<integer>)** is a *histogram* object
  (a derived class of *list*) which contains the number of *y* values
  in each of its bins. It is calculated from the monitored *y*
  values. A *histogram* can be graphed using the *plotHistogram*
  method in the `SimPlot`_ package.

  - *low* is the lowest value of the histogram (default=0.0)
  - *high* is the highest value of the histogram (default=100.0)
  - *nbins* is the number of bins between *low* and *high* into which
    the histogram is to be divided (default=10). The number of *y*
    values in each of the divisions is counted into the appropriate
    bin. Two additional bins are constructed to count (i) the number
    of *y* values *under* the *low* value and (ii) the number *over*
    the *high* value. Thus, the histogram actually consists of
    *nbins + 2* bins altogether.
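The binning rule, including the *under* and *over* bins, can be
sketched in plain Python (a sketch, not the SimPy implementation)::

   def bin_counts(ys, low=0.0, high=100.0, nbins=10):
       counts = [0] * (nbins + 2)      # [under, bin 1 .. bin nbins, over]
       width = (high - low) / float(nbins)
       for y in ys:
           if y < low:
               counts[0] += 1          # below the histogram range
           elif y >= high:
               counts[-1] += 1         # at or above the top of the range
           else:
               counts[1 + int((y - low) / width)] += 1
       return counts

   counts = bin_counts([-1.0, 0.5, 1.5, 5.0, 12.0],
                       low=0.0, high=10.0, nbins=5)

Whether a value exactly equal to *high* lands in the last interior bin
or the *over* bin is a boundary convention; this sketch puts it in the
*over* bin.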



Although both Tallys and Monitors can return a histogram of the data, they
handle histogram data in different ways.  

- The *Tally* object accumulates bin counts in a histogram as each
value is observed in the course of the simulation run. Since the
individual values are not preserved, the *setHistogram* method must be
called to provide a histogram object to hold the accumulated bin
counts before any values are actually observed.

- The *Monitor* object stores its data, so the accumulated bin counts
can be computed whenever they are desired. Thus, the histogram need
not be set up until it is needed and this can be done after the data
has been gathered.


Setting up a Histogram for a *Tally* object
++++++++++++++++++++++++++++++++++++++++++++

To establish a Histogram for a *Tally* object, *r*, we call the
*setHistogram* method with appropriate arguments before we observe any
data, e.g.

* **r.setHistogram(name = '',low=0.0,high=100.0,nbins=10)**

Then, after *observing* the data we need:

* **h = r.getHistogram( )** returns a completed histogram using the
  histogram parameters as set up.

In the following example we establish a *Tally* object to observe
values of an exponential random variate. A histogram with 30 bins
(plus an *under* and an *over* count) is used::

   from SimPy.Simulation import *
   from random import expovariate

   r = Tally('Tally')                          # define a tally object, r
   r.setHistogram(low=0.0,high=20.0,nbins=30)  # done before observations

   for i in range(1000):
      y = expovariate(0.1)
      r.observe(y)

   h = r.getHistogram( )

Setting up a Histogram for a *Monitor* object
++++++++++++++++++++++++++++++++++++++++++++++

For *Monitor* objects, a histogram can be both set up and constructed in
a single call, e.g.,

* **h = r.histogram(low=0.0,high=100.0,nbins=10)**

This call is equivalent to the following pair:

* **r.setHistogram(name = '',low=0.0,high=100.0,nbins=10)**
* **h = r.getHistogram( )**, which returns the completed histogram.

In the following  example we establish a *Monitor* to observe values of an
exponential random variate.  A histogram with 30 bins (plus an *under*
and an *over* count) is used. ::

   from SimPy.Simulation import *
   from random import expovariate

   m = Monitor( )

   for i in range(1000):
      y = expovariate(0.1)
      m.observe(y)

   h = m.histogram(low=0.0, high=20, nbins=30)

.. -------------------------------------------------------------------------

..  Note: The following methods of the *Monitor* class are
   retained for backwards compatibility
   but are not recommended. They may be removed in future releases of
   SimPy.

   * **r.tally(y)** records the current value of *y* and the current
     time, *now( )*. (DO NOT USE)
   * **r.accum(y [,t])** records the current value of *y* and time *t*
     (the current time, *now( )*, if *t* is missing). (DO NOT USE)  

[Return to Top_ ]

.. -------------------------------------------------------------------------

Other Links
-------------------

Several `SimPy models`_ are included with the SimPy code distribution.

Klaus Muller and Tony Vignaux, *SimPy: Simulating Systems in Python*,
O'Reilly ONLamp.com, 2003-Feb-27,  http://www.onlamp.com/pub/a/python/2003/02/27/simpy.html

Norman Matloff, *Introduction to the SimPy Discrete-Event Simulation
Package*, U Cal: Davis, 2003,
http://heather.cs.ucdavis.edu/~matloff/simpy.html

David Mertz, *Charming Python: SimPy simplifies complex models*, IBM
Developer Works, Dec 2002,
http://www-106.ibm.com/developerworks/linux/library/l-simpy.html

[Return to Top_ ]

Acknowledgements
-------------------

Bob Helmbold made improvements to the text of this Manual.  We
will be grateful for any further corrections or suggestions that will
improve it.

[Return to Top_ ]


.. ===================================================================

Appendices
-------------



A1. SimPy Error Messages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Advisory messages
+++++++++++++++++

These messages are returned by *simulate( )*, as in
*message=simulate(until=123)*.

Upon a normal end of a simulation, *simulate( )* returns the message:

- **SimPy: Normal exit**. This means that no errors have occurred and
  the simulation has run to the time specified by the *until* parameter.

The following messages, returned by *simulate( )*, are produced at a premature
termination of the simulation but allow continuation of the program.

- **SimPy: No more events at time x**. All processes were completed prior
  to the *endtime* given in *simulate(until=endtime)*.

- **SimPy: No activities scheduled**. No activities were scheduled
  when *simulate( )* was called.

Fatal error messages
++++++++++++++++++++

These messages are generated when SimPy-related fatal exceptions occur.
They end the SimPy program. Fatal SimPy error messages are written to
standard output.

- **Fatal SimPy error: activating function which is not a generator (contains no 'yield')**.
  A process tried to (re)activate a function which is not a
  SimPy process (=Python generator). SimPy processes must contain
  at least one *yield . . .* statement.

- **Fatal SimPy error: Simulation not initialized**. The SimPy program
  called *simulate( )* before calling *initialize( )*.

- **SimPy: Attempt to schedule event in the past**: A *yield hold* statement
  has a negative delay time parameter.

- **SimPy: initialBuffered exceeds capacity**: Attempt to initialize a Store
  or Level with more units in the buffer than its capacity allows.

- **SimPy: initialBuffered param of Level negative: x**: Attempt to
  initialize a Level with a negative amount x in the buffer.

- **SimPy: Level: wrong type of initialBuffered (parameter=x)**: Attempt to
  initialize a buffer with a non-numerical initial buffer content x.

- **SimPy: Level: put parameter not a number**: Attempt to add a
  non-numerical amount to a Level's buffer.

- **SimPy: Level: put parameter not positive number**: Attempt to add
  a negative number to a Level's amount.

- **SimPy: Level: get parameter not positive number: x**: Attempt to
  get a negative amount x from a Level.

- **SimPy: Store: initialBuffered not a list**: Attempt to initialize
  a Store with other than a list of items in the buffer.

- **SimPy: Item to put missing in yield put stmt**: A *yield put* was
  malformed by not having a parameter for the item(s) to put into the
  Store.

- **SimPy: put parameter is not a list**: *yield put* for a Store must
  have a parameter which is a list of items to put into the buffer.

- **SimPy: Store: get parameter not positive number: x**: A *yield
  get* for a Store had a negative value for the number to get from the
  buffer.

- **SimPy: Fatal error: illegal command: yield x**: A *yield*
  statement with an undefined command code (first parameter) x was
  executed.


Monitor error messages
++++++++++++++++++++++

- **SimPy: No observations for mean**. No observations were made by the
  monitor before attempting to calculate the mean.

- **SimPy: No observations for sample variance**. No observations were made by the
  monitor before attempting to calculate the sample variance.

- **SimPy: No observations for timeAverage**. No observations
  were made by the monitor before attempting to calculate the time-average.

- **SimPy: No elapsed time for timeAverage**. No simulation
  time has elapsed before attempting to calculate the time-average.



A2. SimPy Process States
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From the point of view of the model builder, a SimPy process, *p*,
can be in one of the following states at any time:

- **Active**: Waiting for a scheduled event. This state simulates an
  activity in the model. Simulated time passes in this
  state. *p.active( )* returns *True*.

- **Passive**: Not active or terminated. Awaiting *(re-)activation* by
  another process.  This state simulates a real world process which
  has not finished and is waiting for some trigger to continue. Does
  not change simulation time.  *p.passive( )* returns *True*.

- **Terminated**: The process has executed all its action statements
  and continues as a data instance, if referenced. *p.terminated( )*
  returns *True*.

Initially (upon creation of the Process instance), a process is *passive*.

In addition, a SimPy process, *p*,  can be in the following (sub)states:

- **Interrupted**: Active process has been interrupted by another
  process. It can immediately respond to the interrupt. This
  simulates an interruption of a simulated activity before its
  scheduled completion time.  *p.interrupted( )* returns *True*.

- **Queuing**: Active process has requested a busy resource and is
  waiting (passive) to be reactivated upon resource
  availability. *p.queuing(a_resource)* returns *True*.


.. -------------------------------------------------------------------------


A3. SimPlot, the SimPy plotting utility
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

SimPlot_ provides an easy way to graph the results of simulation runs.

.. _`SimPlot`: SimPlotManual/ManualPlotting.html


A4. SimGUI, the SimPy Graphical User Interface
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

SimGUI_ provides a way for users to interact with a SimPy program,
changing its parameters and examining the output.

.. _`SimGUI`: SimGUIManual/SimGUImanual.html



A5. SimulationTrace, the SimPy tracing utility
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

`SimulationTrace`_ has been developed to give users insight into the
dynamics of the execution of SimPy simulation programs. It can help
developers with testing and users with explaining SimPy models to themselves
and others (e.g. for documentation or teaching purposes).

.. _`SimulationTrace`: Tracing.html


A6. SimulationStep, the SimPy event stepping utility
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

`SimulationStep`_ can assist with debugging models, interacting with them on
an event-by-event basis, getting event-by-event output from a model (e.g.
for plotting purposes), etc.

It caters for:

   - running a simulation model, with calling a user-defined procedure after every event,
   - running a simulation model one event at a time by repeated calls,
   - starting and stopping the event stepping mode under program control.

.. _`SimulationStep`: SimStepManual/SimStepManual.html

A7. SimulationRT, a real-time synchronizing utility
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

`SimulationRT`_ allows synchronising simulation time and real (wall-clock) time.
This capability can be used to implement e.g. interactive game applications or
to demonstrate a model's execution in real time.

.. _`SimulationRT`: SimRTManual.html

[Return to Top_ ]

.. ----------------------------------------------------------------------------

.. some useful stuff used above


.. |simpylogo| image:: images/sm_SimPy_Logo.png
.. _`simpydownload`: http://sourceforge.net/projects/simpy/

.. _`SimPy models`: LISTOFMODELS.html


.. _Top: Contents_
.. _Monitor: `Recording Simulation Results`_
.. _Monitors: `Recording Simulation Results`_
.. _Tally: `Defining Tallys and Monitors`_
.. _Tallys: `Defining Tallys and Monitors`_
.. _reneging: `Reneging -- leaving a queue before acquiring a resource`_
.. _interrupted: `Asynchronous interruptions`_
.. _`Python random module`: http://www.python.org/doc/current/lib/module-random.html

.. _Python: http://www.Python.org


..
 .. image:: http://sourceforge.net/sflogo.php?group_id=62366&type=4
    :width: 125
    :height: 37
    :alt:  SourceForge Logo





..
  Local Variables:
  mode: rst
  indent-tabs-mode: nil
  sentence-end-double-space: t
  fill-column: 70
  End:
