castironpi
To whoever has a serious interest in multi-threading:
What advanced thread techniques does Python support?
1) @synchronized

@synchronized
def function( arg ):
    behavior()
@synchronized prevents use by more than one caller at a time: look up
the function in a hash, acquire its lock, and call it.
from _thread import allocate_lock

_synchlock= allocate_lock()   # guards the function -> lock table
_synchhash= {}

def synchronized( func ):
    def presynch( *ar, **kwar ):
        with _synchlock:
            lock= _synchhash.setdefault( func, allocate_lock() )
        with lock:
            return func( *ar, **kwar )
    return presynch
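As a self-contained check that the decorator actually serializes callers, here is a runnable version with a racy-counter demonstration; the counter, thread count, and iteration count are my additions, not from the original post.

```python
from _thread import allocate_lock
import threading

_synchlock = allocate_lock()   # guards the function -> lock table
_synchhash = {}

def synchronized(func):
    def presynch(*ar, **kwar):
        with _synchlock:
            # one lock per decorated function, created lazily
            lock = _synchhash.setdefault(func, allocate_lock())
        with lock:
            return func(*ar, **kwar)
    return presynch

counter = 0

@synchronized
def bump():
    # read-modify-write that could race without the lock
    global counter
    value = counter
    counter = value + 1

threads = [threading.Thread(target=lambda: [bump() for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is exactly 4000: every bump() ran under the function's lock
```

Note the per-function lock is acquired outside the `with _synchlock:` block, so the global table lock is held only for the lookup, not for the whole call.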
2) trylock

trylock.acquire() returns False if the thread that currently owns the
lock is itself blocking on a lock that the calling thread owns; the
`with trylock:` form raises an exception instead. Implementation
pending. If a timeout is specified, acquire() returns one of three
values: Success, Failure, or Deadlock.
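The implementation is pending in the post; the following is a minimal sketch of what such a trylock could look like, walking a wait-for chain for the cycle check that footnote 2 mentions. Every name here (TryLock, _owner, _waiting, Deadlock) is an assumption, not an existing API.

```python
import threading
import time

class Deadlock(Exception):
    pass

class TryLock:
    # Shared bookkeeping, guarded by _registry.
    _registry = threading.Lock()
    _owner = {}      # TryLock -> thread that holds it
    _waiting = {}    # thread -> TryLock it is blocked on

    def __init__(self):
        self._lock = threading.Lock()

    def _would_deadlock(self, me):
        # Follow owner -> lock-waited-on -> owner ... ; if the chain
        # leads back to the calling thread, acquiring would deadlock.
        lock = self
        while True:
            owner = TryLock._owner.get(lock)
            if owner is None:
                return False
            if owner is me:
                return True
            lock = TryLock._waiting.get(owner)
            if lock is None:
                return False

    def acquire(self, timeout=None):
        me = threading.current_thread()
        with TryLock._registry:
            if self._would_deadlock(me):
                return 'Deadlock'
            TryLock._waiting[me] = self
        ok = self._lock.acquire(timeout=-1 if timeout is None else timeout)
        with TryLock._registry:
            del TryLock._waiting[me]
            if ok:
                TryLock._owner[self] = me
        return 'Success' if ok else 'Failure'

    def release(self):
        with TryLock._registry:
            del TryLock._owner[self]
        self._lock.release()

    def __enter__(self):
        # `with trylock:` raises instead of returning Deadlock
        if self.acquire() == 'Deadlock':
            raise Deadlock()
        return self

    def __exit__(self, *exc):
        self.release()

# Demonstration: main holds b; a worker takes a, then blocks on b.
# Main asking for a would close the cycle, and acquire() says so.
a, b = TryLock(), TryLock()
b.acquire()

def worker():
    a.acquire()
    b.acquire()        # blocks until main releases b
    b.release()
    a.release()

t = threading.Thread(target=worker)
t.start()
while TryLock._waiting.get(t) is not b:
    time.sleep(0.01)   # wait until the worker is blocked on b
result = a.acquire()   # reports 'Deadlock' rather than hanging
b.release()
t.join()
```

A chain walk suffices here because each thread blocks on at most one lock at a time; a general lock graph, as footnote 2 suggests, would need a full cycle search.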
3) upon_acquiring( lockA, lockB )( function, *ar, **kwar )

upon_acquiring spawns a new thread once locks A and B have both been
acquired. An optional UponAcquirer( *locks ) instance can guarantee
they are always acquired in the same order, similar to the strategy of
acquiring locks in order of ID, but without relying on the
implementation detail of having IDs: just acquire them in the order in
which the instance was initialized.
The similar construction:

while 1:
    lockA.acquire()
    lockB.acquire()

is likewise efficient (non-polling), except in the corner case of a
small number of locks and a large number of release-acquire pairs,
such as with a large number of lock clients, specifically threads.
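A sketch of how upon_acquiring and UponAcquirer might look on top of today's `threading` module; the two names come from the post, everything else is an assumption.

```python
import threading

class UponAcquirer:
    # The acquisition order is fixed when the instance is created, and
    # every call acquires in that same order, so calls through the same
    # instance cannot deadlock by lock-order inversion.
    def __init__(self, *locks):
        self._order = list(locks)

    def __call__(self, func, *ar, **kwar):
        def runner():
            for lock in self._order:
                lock.acquire()
            try:
                func(*ar, **kwar)        # runs with all locks held
            finally:
                for lock in reversed(self._order):
                    lock.release()
        t = threading.Thread(target=runner)
        t.start()
        return t

def upon_acquiring(*locks):
    # One-shot form: upon_acquiring(lockA, lockB)(function, *ar, **kwar)
    return UponAcquirer(*locks)

lockA, lockB = threading.Lock(), threading.Lock()
out = []
t = upon_acquiring(lockA, lockB)(out.append, 'ran with both locks held')
t.join()
```

This blocks a worker thread on the locks rather than polling them, matching the "non-polling" claim above.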
4) @with_lockarg

with_lockarg wraps an acquisition call, as in 2 or 3, and passes a
lock group to the function as its first parameter: yes, even preceding
the object instance parameter.

def function( locks, self, *ar, **kwar ):
    behavior_in_lock()
    locks.release()
    more_behavior()

function is called with the locks already held, so, sadly, the
`with locks:` idiom is not applicable.
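A sketch of what @with_lockarg could look like; the LockGroup class and the release-once behavior are assumptions built on the description above.

```python
import threading

class LockGroup:
    # Holds the acquired locks; release() frees them all exactly once,
    # so the wrapper's cleanup is a no-op after an early release.
    def __init__(self, locks):
        self._locks = list(locks)
        self._held = True

    def release(self):
        if self._held:
            self._held = False
            for lock in reversed(self._locks):
                lock.release()

def with_lockarg(*locks):
    # Acquire the locks, then call the function with the lock group
    # prepended to its arguments -- before self, as described above.
    def deco(func):
        def wrapper(*ar, **kwar):
            for lock in locks:
                lock.acquire()
            group = LockGroup(locks)
            try:
                return func(group, *ar, **kwar)
            finally:
                group.release()   # harmless if released mid-function
        return wrapper
    return deco

lock = threading.Lock()

@with_lockarg(lock)
def function(locks, x):
    held_inside = lock.locked()   # called with the lock already held
    locks.release()               # drop the lock partway through
    return held_inside, lock.locked(), x

result = function(42)   # (True, False, 42)
```

Because the function receives the locks already held and may release them itself, the wrapper cannot use `with locks:` internally either; hence the try/finally with an idempotent release.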
5) groupjoin

for j in range( len( strats ) ):
    for k in range( j+ 1, len( strats ) ):
        branch:
            i.matches[ j,k ]= Match( strats[j], strats[k] )
            #Match instances may not be initialized until...
joinallthese()

This ideal may be implemented in current syntax as follows:

thg= ThreadGroup()
for j in range( len( strats ) ):
    for k in range( j+ 1, len( strats ) ):
        @thg.branch( freeze( j ), freeze( k ) )
        def anonfunc( j, k ):
            i.matches[ j,k ]= Match( strats[j], strats[k] )
            #Match instances may not be initialized until...
thg.groupjoin()
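The "current syntax" version can be made concrete: here is a sketch of a ThreadGroup whose branch() decorator starts the decorated function immediately in a new thread, with the loop variables frozen in as Thread arguments (which plays the role of freeze). The pairwise demo uses a plain dict and string concatenation standing in for i.matches and Match.

```python
import threading

class ThreadGroup:
    def __init__(self):
        self._threads = []

    def branch(self, *ar, **kwar):
        # Decorator factory: run the decorated function at once in a
        # new thread, with *ar/**kwar bound as its arguments.
        def deco(func):
            t = threading.Thread(target=func, args=ar, kwargs=kwar)
            self._threads.append(t)
            t.start()
            return func
        return deco

    def groupjoin(self):
        # Join every thread the group has branched.
        for t in self._threads:
            t.join()

strats = ['a', 'b', 'c']
matches = {}
thg = ThreadGroup()
for j in range(len(strats)):
    for k in range(j + 1, len(strats)):
        @thg.branch(j, k)            # j, k frozen per iteration
        def anonfunc(j, k):
            matches[j, k] = strats[j] + strats[k]
thg.groupjoin()
# matches == {(0, 1): 'ab', (0, 2): 'ac', (1, 2): 'bc'}
```

Passing j and k through branch() is essential: a bare closure over the loop variables would see their final values by the time the threads run.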
Footnotes:
2: trylock actually checks a graph for cyclicity, not merely whether
the individual callee is already waiting for the caller.
3: with upon_acquiring, as usual, a parameter can be passed to tell
the framework to preserve calling order, rather than allowing
`with lockC` to run before a series of threads which only use lockA
and lockB.
4: x87 hardware supports memory block pairs and cache pairs, which set
a reverse-bus bit upon truth of rudimentary comparisons, alleviating
the instruction stack of checking them every time through a loop;
merely jump to address when match completes. Fortunately, the blender
doubles as a circuit-board printer after hours, so production can
begin at once.