Felix
Hi,
The documentation for multiprocessing.Array says:
multiprocessing.Array(typecode_or_type, size_or_initializer, *, lock=True)
....
If lock is False then access to the returned object will not be
automatically protected by a lock, so it will not necessarily be
“process-safe”.
....
However:
In [48]: mp.Array('i',1,lock=False)
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)

/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/multiprocessing/__init__.pyc in Array(typecode_or_type, size_or_initializer, **kwds)
    252     '''
    253     from multiprocessing.sharedctypes import Array
--> 254     return Array(typecode_or_type, size_or_initializer, **kwds)
    255
    256 #

/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/multiprocessing/sharedctypes.pyc in Array(typecode_or_type, size_or_initializer, **kwds)
     85     if lock is None:
     86         lock = RLock()
---> 87     assert hasattr(lock, 'acquire')
     88     return synchronized(obj, lock)
     89

AssertionError:
-------
I.e. it looks like lock=False is not actually supported. Or am I
reading this wrong? If not, I can submit a bug report.
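In the meantime, constructing the unsynchronized array directly seems
to sidestep this code path entirely. A minimal sketch of what I mean,
assuming RawArray is the intended lock-free variant:

from multiprocessing.sharedctypes import RawArray

# RawArray takes the same typecode/size arguments as Array,
# but returns the bare ctypes array with no lock wrapper
arr = RawArray('i', 1)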
I am trying to create a shared, read-only numpy.ndarray between
several processes. After some googling, the basic idea is:

import multiprocessing as mp
import scipy

sarr = mp.Array('i', 1000)
# get_obj() is the public accessor for the ctypes array behind the lock wrapper
ndarr = scipy.frombuffer(sarr.get_obj(), dtype='int32')
Since it will be read-only (after being filled once in a single
process), I don't think I need any locking mechanism. But is that
really true, given garbage collection, reference counting, and the
other implicit things going on?
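For concreteness, the full pattern I have in mind looks roughly like
this (a sketch only; it assumes the children inherit the shared buffer
by fork and only ever read from it, and the worker function and sizes
are made up for illustration):

import multiprocessing as mp
import scipy

sarr = mp.Array('i', 1000)

def fill():
    # fill exactly once, in the parent, before any workers exist
    ndarr = scipy.frombuffer(sarr.get_obj(), dtype='int32')
    ndarr[:] = scipy.arange(1000)

def worker(i):
    # each child re-wraps the inherited buffer; reads only, never writes
    ndarr = scipy.frombuffer(sarr.get_obj(), dtype='int32')
    return int(ndarr[i])

if __name__ == '__main__':
    fill()
    pool = mp.Pool(4)
    print pool.map(worker, range(0, 1000, 100))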
Or is there a recommended, better way to do this?
Thanks