RELEASED Python 2.3.1


Cousin Stanley

| Yes, you can still download the individual HTML files at:
| http://www.python.org/ftp/python/doc/2.3.1/

Raymond ....

Thanks for the reply ....

I'd like to see the Docs always be a separate download
from the standard distribution, to provide a bit leaner
download for those who don't program and aren't interested ....
 

Thomas Heller

Cousin Stanley said:
| There's a checkbox which you can uncheck
| to disable installing the htmlhelp file.
|
| Then you can download the HTML archive
| and install it manually.

Thomas ....

Thanks for the reply ....

It's good to know that an alternative
for the individual HTML doc files will remain ....

Would it be feasible to eliminate the Python Docs
from the standard distribution and always download
separately if desired ????

This would eliminate downloading the docs twice
if the user wants the separate doc files and provide
a leaner download for non-programming Python users
that will never develop anything in Python themselves
but who want a run-time environment only ....

Of course it would be possible, but it would certainly add confusion if
there were several downloads to select from.

Thomas
 

Alex Martelli

Peter Hansen wrote:
...
Alex' slides were interesting, but I think they might reflect a subtle
trend away from Python's traditional treatment of programmers as
consenting adults, with all the concern about newbies and the hints
that such a dangerous feature will be abused to no end.

Python has always struck a nice balance between the "trust the
programmer" principle (e.g., no complicating the language to make
some things 'private', and the like) and the need to enhance
programmer productivity, among other ways, by not offering "red
herrings" that may LOOK like they'd be good for typical purpose
X but actually aren't -- to understand the latter half of this
contention, consider the cases where the latter principle was NOT,
in fact, fully respected (__slots__, locals()[somename]=23,
reduce, ...). I've seen enough programmers (and then some) wrestle
with threading, and come up with all the architecturally-wrong answers,
to be fully aware that "raising an exception in another thread" would
fit the "red herrings case" to a tee; I considered that the only
real downside of the proposal, and so, I think, did all the group
that was discussing it. So, when Guido came up with the idea of
putting the feature in the C API only, there was much cheering --
just the right amount of distance away from "typical programmers"
to avoid a flood of puzzled questions here and to (e-mail address removed)!-)

The slides also suggest that there are almost no use cases whatsoever
for such a feature, though admitting there might, just barely, be enough
to merit adding it as a non-newbie C-only API which one has to jump
through hoops (or use ctypes :) to access.

I and Just canvassed widely at Europython, looking for people to
suggest more use cases, and nobody came up with anything that stood
up to examination beyond our basic use case of "debugging possibly
buggy (nonterminating) code, in cases where we just can't run the
possibly buggy code in the main thread and delegate a separate
watchdog thread to the purpose of interrupting the main one" (note
that a secondary thread raising an exception in the main one WAS
already going to be in 2.3 anyway, since IDLE 1.0 uses that). I
kept buttonholing people with exactly the same question at OSCON
and elsewhere, trying to beef up my docs, but again, nothing came up.

I can think of several use cases right off the top of my head, one
of them being while running automated unit and acceptance tests,
to ensure that a test involving threads will complete (generally
with a failure in this case) even if the code is broken and the
thread cannot be terminated.

Yes, yet another subcase of that one and only use case we kept
coming up with (just like my own immediate application for the
feature, and Just's, were two other such subcases). If you can
come up with use cases that are NOT just restatements of this
one with fake beards, now THAT would be interesting (and might
perhaps see the functionality exposed in module threading in
Python 2.4:). As things stand, it still seems to me that the
need to write debugging (including testing:) frameworks that
deal gracefully with buggy multi-threaded code (or cannot use
the main thread to run possibly buggy code because of other
strange constraints, e.g. related to GUI's or embedding) is
rare enough to warrant keeping the functionality a safe way off
from most programmers, though, clearly, real enough to warrant
_having_ the functionality around for those rare but important
special cases.


Alex
 

Terry Reedy

I agree that a separate Windows run-only distribution (including .pyc
instead of .py files) would be a good idea. But someone has to
volunteer the time or money to make it happen. The current release
procedure is still being refined and documented by the current
volunteers.

Terry J. Reedy
 

Thomas Heller

Terry Reedy said:
I agree that a separate Windows run-only distribution (including .pyc
instead of .py files) would be a good idea.

Why would you distribute .pyc instead of .py files? Most of the time
they are not even smaller, and tracebacks would be problematic.
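The traceback point is easy to demonstrate. A sketch in modern Python (the `boom` module and the temp paths are invented for illustration): once the .py source is gone, a traceback still carries the file name and line number, but the offending source line can no longer be shown.

```python
import os
import py_compile
import tempfile
import traceback
import importlib.util

# Write a module, compile it to .pyc, then delete the source --
# mimicking a run-only distribution that ships bytecode only.
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, "boom.py")
with open(src, "w") as f:
    f.write("def boom():\n    raise ValueError('oops')\n")
pyc = py_compile.compile(src, cfile=src + "c")
os.remove(src)

# Import the bytecode-only module; the .pyc suffix selects
# SourcelessFileLoader automatically.
spec = importlib.util.spec_from_file_location("boom", pyc)
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)

try:
    mod.boom()
except ValueError:
    tb = traceback.format_exc()

# File and line survive; the source text does not.
print("line 2" in tb)             # True
print("raise ValueError" in tb)   # False -- no source line available
```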
But someone has to volunteer the time or money to make it happen.

If interesting ideas evolve, I might be interested.

Thomas
 

Thomas Heller

Dave Brueck said:
I've been working on a simplistic implementation of such a run-time
for my own projects. It's functional but very experimental (read: does
what I need and not much else) and not well-documented or anything,
but if anybody wants to play with what I've done so far just drop me a
line. It's nice and small:

19,530 ctypes.zip
18,944 pycb.exe
1,908 pycbcom.tlb
19,456 pycbw.exe
445,952 python23.dll
749,092 python23.zip
3,072 w9xpopen.exe
20,480 _ctypes.pyd

(1.2 MB total - has _socket, select, _winreg, etc built in to the main dll)

Interesting.

Is the python23.dll compressed, or did you leave features out?
It registers itself as an ActiveX control so that from Internet
Explorer you can have a web page query to see if the run-time has been
installed (so that, e.g., you can have the user download the
app+runtime or just the app).

Small download size is a primary goal, with a close second being very
little differentiation between the dev (.py) and release (.exe)
environments (which has bitten me many times in the past). Thus I
don't run python.exe any more at all for projects that will end up
being distributed this way.

The library also registers the .pycb extension with Windows (pycb =
"Python code bundle") so that you can distribute your code in a small
app.pycb file that, from the user's perspective, is an
"executable". For example, for a personal project I just completed the
entire distribution consisted of:

venster.pycb (82k)
main.pycb (15k)

Under 100KB is not bad for a GUI app! :)

The .pycb format is basically ZIP + AES encryption, and pycb uses the new
import hooks in 2.3 to handle it. The encryption is just to keep honest
people out; anybody with the right combination of smart and bored could
figure out how to circumvent it.
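The 2.3-era import hooks (PEP 302) that pycb relies on have a direct modern descendant in importlib. A minimal sketch of a bundle-style hook, under loudly labeled assumptions: `BundleFinder` and the in-memory `SOURCES` dict are invented stand-ins, and a real .pycb loader would unzip and decrypt an archive where this one just reads a string.

```python
import importlib.abc
import importlib.util
import sys

# Hypothetical in-memory "bundle" standing in for a decrypted archive.
SOURCES = {"bundled_mod": "x = 41 + 1\n"}

class BundleFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    def find_spec(self, name, path=None, target=None):
        if name in SOURCES:
            return importlib.util.spec_from_loader(name, self)
        return None

    def create_module(self, spec):
        return None  # fall back to default module creation

    def exec_module(self, module):
        # A real bundle loader would unzip/decrypt here before exec'ing.
        exec(SOURCES[module.__name__], module.__dict__)

sys.meta_path.insert(0, BundleFinder())

import bundled_mod
print(bundled_mod.x)  # 42
```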

Mark Hammond and I have also been hacking on a new py2exe version which
is somewhat similar, available in a CVS sandbox subdirectory.

It creates small exe-files (gui, console, service, or com), containing
the main script as marshaled code objects, together with a zipfile
containing the needed python modules, shared between these
exe-files. The zipfile is imported using the zipimport feature of 2.3.

The code is factored out so that it should even be possible to add
encryption for the zipfile's contents, given that a customized
zipimporter is used.
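The zipimport feature Thomas describes can be sketched in a few lines of modern Python (the `bundle.zip` archive and the `greet` module are invented for illustration): any zip file placed on sys.path is handled transparently by zipimport.

```python
import os
import sys
import tempfile
import zipfile

# Build a tiny archive of pure-Python modules, then import from it.
tmpdir = tempfile.mkdtemp()
bundle = os.path.join(tmpdir, "bundle.zip")
with zipfile.ZipFile(bundle, "w") as z:
    z.writestr("greet.py", "def hello():\n    return 'hello from the zip'\n")

# Putting the zip on sys.path hands the import work to zipimport.
sys.path.insert(0, bundle)
import greet
print(greet.hello())   # hello from the zip
print(greet.__file__)  # e.g. <tmpdir>/bundle.zip/greet.py
```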

Thomas
 

Peter Hansen

Alex said:
I and Just canvassed widely at Europython, looking for people to
suggest more use cases, and nobody came up with anything that stood
up to examination beyond our basic use case of "debugging possibly
buggy (nonterminating) code, in cases where we just can't run the
possibly buggy code in the main thread and delegate a separate
watchdog thread to the purpose of interrupting the main one"

[snip Peter's example]
Yes, yet another subcase of that one and only use case we kept
coming up with (just like my own immediate application for the
feature, and Just's, were two other such subcases). If you can
come up with use cases that are NOT just restatements of this
one with fake beards, now THAT would be interesting (and might
perhaps see the functionality exposed in module threading in
Python 2.4:).

I know this won't meet the standard, but what about a use case
involving "actually running (as opposed to debugging) possibly
buggy (non-terminating) code". This could come up in cases such as
our embedded Linux product, which is supposed to be a long-running
process.

And yes, to make it clear, I am actually suggesting the possibility
that the product could ship with buggy code that could cause that
condition. It does happen in the real world... more often than
anybody normally cares to admit. More often with multi-threaded
applications than is good for anyone, of course. Sometimes,
nevertheless, an ugly necessity. Especially when custom digital
hardware is involved, as with embedded systems, and one can't
possibly test for all potential configurations or conditions.

The alternative to attempting to terminate the offending
thread, then restarting it, would be to kill the entire process,
which has a much more severe and immediate impact in several ways.
For example, startup of the application is on the order of 22
seconds, and during that time the user sees no response, so a
restart is definitely a last resort.

It would be "nice" to have the option of attempting to terminate
just the offending thread (based on an internal watchdog feature
which detected the bad condition) and then restart it, allowing the
rest of the application to continue largely unimpeded and not
affecting the user experience as severely. (To tie this closer
to the actual case: the user can still access the system through
a serial port, but network communications, which is all but
invisible to the user in this case, might halt briefly as the thread
restarted.)

Yes, I'm reaching somewhat. Although I actually would like that
feature, even if it were available it would be quite some time
before implementing it would be high enough on the list of priorities
to bother. And wait! It _is_ available, just not directly, so I
can hardly complain. :)

Point mostly taken...

-Peter
 

Dominic

One use case could be if you only want to use a limited number
of threads for some reason.

Then you could interrupt a low priority task and reassign the
thread to some more urgent task. Afterwards the old task could be
resumed. To make this work you would have to make the code
aware of those interrupts.

While playing with the new feature I noticed that it
takes a long time (>3 seconds) until the exception is thrown.
In contrast, interrupting the main thread with interrupt_main
seems not to be delayed.

Besides with Pyrex it's 2-4 lines to access
PyThreadState_SetAsyncExc from Python :)

Ciao,
Dominic
 

Thomas Heller

Dave Brueck said:
venster.pycb (82k)
main.pycb (15k)

Under 100KB is not bad for a GUI app! :)

The .pycb format is basically ZIP + AES encryption, and pycb uses the new
import hooks in 2.3 to handle it. The encryption is just to keep honest
people out; anybody with the right combination of smart and bored could
figure out how to circumvent it.

I have also experimented with importing encrypted .pyo files, but my
short experiments so far didn't give sufficient speed. Only a simple
string.translate() or the rotor module with one permutation didn't slow
down the import by several orders of magnitude. What decryption rates do
you get?

Thomas
 

Peter Hansen

Dominic said:
Besides with Pyrex it's 2-4 lines to access
PyThreadState_SetAsyncExc from Python :)

Posting those 2-4 lines would be helpful to those searching the archives
at a later date. ;-)

-Peter
 

Terry Reedy

Thomas Heller said:
Why would you distribute .pyc instead of .py files?

Off the top-of-my-head thought: so they could be pre-zipped without
the interpreter having to write them. But I remember now discussion of
turning off .pyc writes if desired. Anyway, such decisions are for
the doers.
Most of the time
they are not even smaller, and tracebacks would be problematic.

Tracebacks? Production consumer apps shouldn't have them, should
they? :)

Terry J. Reedy
 

Dominic

Peter said:
Posting those 2-4 lines would be helpful to those searching the archives
at a later date. ;-)

Sure, here they are:

cdef extern int PyThreadState_SetAsyncExc(long id, obj)


def interrupt(id, obj):
    PyThreadState_SetAsyncExc(id, obj)
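For those without Pyrex, the same call is reachable from pure Python via ctypes, as Alex hints elsewhere in the thread. A sketch for modern CPython, with the caveats loudly labeled: `async_raise` is a made-up helper name, and the exception is only delivered once the target thread executes Python bytecode again.

```python
import ctypes
import threading
import time

def async_raise(tid, exc_type):
    # PyThreadState_SetAsyncExc takes a thread id and an exception *class*.
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_ulong(tid), ctypes.py_object(exc_type))
    if res == 0:
        raise ValueError("invalid thread id")
    if res > 1:
        # More than one thread state affected: undo the damage and bail.
        ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_ulong(tid), None)
        raise SystemError("PyThreadState_SetAsyncExc failed")

caught = []

def worker():
    try:
        while True:
            time.sleep(0.05)   # exception lands when bytecode runs again
    except KeyboardInterrupt:
        caught.append("interrupted")

t = threading.Thread(target=worker)
t.start()
time.sleep(0.2)
async_raise(t.ident, KeyboardInterrupt)
t.join(timeout=5)
print(caught)  # ['interrupted']
```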
 

Alex Martelli

Dominic said:
One use case could be if you only want to use a limited number
of threads for some reason.

Then you could interrupt a low priority task and reassign the
thread to some more urgent task. Afterwards the old task could be
resumed. To make this work you would have to make the code
aware of those interrupts.

I don't think it works well: Python threads have no priorities,
so, even if you DID interrupt one thread that's working on a
"low-priority job" to feed it with a higher-priority one, OTHER
threads running low-priority jobs will happily keep stealing
CPU and other resources away from the allegedly "high-priority
job". And if you're thinking of somehow "suspending" ALL the
threads currently deemed to be running "low-priority jobs", I
think the whole architecture sounds creaky and fragile. I would
rather tweak (not anywhere as hard a job) module Queue to give
messages posted on Queue's a priority field; ensuring that not
ALL threads (in the pool that's peeling job requests off the
main "pending-jobs Queue") are simultaneously running long AND
low-priority jobs, so that one of them is going to respond soon
enough, is decently easy -- and you can add a global semaphore,
that high-priority jobs increment at their start and decrement
at their end, and low-priority jobs check periodically in their
main loops to ensure their work is suspended when any high-
priority task is running.
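Alex's suggested tweak, giving Queue messages a priority field, exists in modern Python as queue.PriorityQueue. A minimal sketch of a pool thread peeling prioritized jobs (the job names and the sentinel convention are invented):

```python
import queue
import threading

jobs = queue.PriorityQueue()
done = []

def worker():
    # Peel (priority, job) pairs off the queue; lowest number first.
    while True:
        priority, name = jobs.get()
        if name is None:          # sentinel: shut down
            break
        done.append(name)

# Enqueue low-priority work, then an urgent job, then the sentinel.
jobs.put((10, "low-a"))
jobs.put((10, "low-b"))
jobs.put((0, "urgent"))
jobs.put((99, None))

t = threading.Thread(target=worker)
t.start()
t.join()
print(done)  # ['urgent', 'low-a', 'low-b']
```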

While playing with the new feature I noticed that it
takes a long time (>3 seconds) until the exception is thrown.
In contrast, interrupting the main thread with interrupt_main
seems not to be delayed.

Hmmmm... care to show exactly the code you've been trying?
I get EXACTLY opposite results, as follows...:


import time
import thread
import threadex

def saywhen():
    for x in xrange(100000):
        time.sleep(0.1)

def intemain():
    global when_sent
    when_sent = None
    time.sleep(1.0)
    when_sent = time.time()
    thread.interrupt_main()
    time.sleep(1.0)

im_delays = []
for i in range(10):
    tid = thread.start_new_thread(intemain, ())
    try:
        saywhen()
    except:
        when_received = time.time()
    im_delays.append(when_received - when_sent)

def get_interrupted():
    global when_received
    when_received = None
    try:
        saywhen()
    except:
        when_received = time.time()

it_delays = []
for i in range(10):
    tid = thread.start_new_thread(get_interrupted, ())
    time.sleep(1.0)
    when_sent = time.time()
    threadex.threadex(tid, KeyboardInterrupt)
    time.sleep(1.0)
    it_delays.append(when_received - when_sent)

main_id = thread.get_ident()
def intemain1():
    global when_sent
    when_sent = None
    time.sleep(1.0)
    when_sent = time.time()
    threadex.threadex(main_id, KeyboardInterrupt)
    time.sleep(1.0)

im1_delays = []
for i in range(10):
    tid = thread.start_new_thread(intemain1, ())
    try:
        saywhen()
    except:
        when_received = time.time()
    im1_delays.append(when_received - when_sent)

im_delays.sort()
im1_delays.sort()
it_delays.sort()
print 'IM:', im_delays
print 'IT:', it_delays
print 'IM1:', im1_delays




Module threadex is a tiny interface exposing as 'threadex'
the PyThreadState_SetAsyncExc function. And the results on
my Linux (Mandrake 9.1) box are as follows...:


[alex@lancelot sae]$ python pai.py
IM: [2.4993209838867188, 2.4998600482940674, 2.4998999834060669,
2.4999450445175171, 2.4999510049819946, 2.4999560117721558,
2.4999659061431885, 2.499967098236084, 2.4999990463256836,
2.5000520944595337]
IT: [0.20004498958587646, 0.39922797679901123, 0.39999902248382568,
0.40000700950622559, 0.40000808238983154, 0.40002298355102539,
0.40002298355102539, 0.40002405643463135, 0.40003299713134766,
0.40004003047943115]
IM1: [0.10003900527954102, 0.39957892894744873, 0.40000700950622559,
0.40000796318054199, 0.40001499652862549, 0.40002000331878662,
0.40003204345703125, 0.40003299713134766, 0.40004301071166992,
0.40005004405975342]
[alex@lancelot sae]$

i.e., interrupt_main takes a very repeatable 2.5 seconds;
PyThreadState_SetAsyncExc typically 0.4 seconds, whether it's
going from main to secondary thread or vice versa, with occasional
"low peaks" of 0.1 or 0.2 seconds. Of course, it's quite
possible that there may be something biased in my setup, or
it may be a platform issue. But I'd be quite curious to
see the code you base your observation on.


Alex
 

Alex Martelli

Peter Hansen wrote:
...
I know this won't meet the standard, but what about a use case
involving "actually running (as opposed to debugging) possibly
buggy (non-terminating) code". This could come up in cases such as
our embedded Linux product, which is supposed to be a long-running
process.

And yes, to make it clear, I am actually suggesting the possibility
that the product could ship with buggy code that could cause that
condition. It does happen in the real world... more often than
anybody normally cares to admit. More often with multi-threaded
applications than is good for anyone, of course. Sometimes,

Yep. And even more often, the buggy code will involve race
conditions -- unsynchronized access to variables from >1 thread --
which will make you wail long and loud. Which is just part of
why using multiple processes, on a platform such as Linux (where
forking a process isn't the end of the world in terms of
performance), is so often preferable to using multiple threads
within a single process: the operating system gives you far better
support for isolating faults, ensuring communication between
processes DOES go "through the channels", terminating errant
processes, and so on, and so forth (as a cherry on top, if you're
on a multi-CPU machine you'll also get to exploit all of your
CPUs, while Python-coded threads wouldn't do that:).

I try to limit my multithreaded architectures to VERY simple
structures, based mostly on Queue's for inter-thread cooperation.
Whenever hairy issues emerge -- I try to move to multi-process
architectures instead, at least when I know I will be running
on an OS with decent support for processes. Admittedly, that
is not _always_ possible (sometimes one does have to run under
Windows, for example, with excellent support for threads but
rather heavy-weight processes); thus, it IS nice to be able to
try and interrupt another thread (it IS just a "try": if the
buggy code is looping forever within a try/except that just
keeps looping when you interrupt it, you're hosed anyway -- it
still isn't anywhere as solid as a multi-process architecture).
Remember, I was among the *paladins* of the new functionality;-).
However, I do think it's crucial that it be used only as a very
last resort, NOT in the 99.99% of cases which are best covered
by other architectures...
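The multi-process alternative Alex describes can be sketched with the standard library. A portable sketch using subprocess rather than fork (the half-second watchdog timeout is an arbitrary choice for illustration): unlike a hung thread, an errant child process can simply be killed outright by the OS.

```python
import subprocess
import sys

# Launch a child that never terminates, simulating buggy code.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(3600)"])

# The parent acts as the watchdog.
try:
    child.wait(timeout=0.5)
    hung = False
except subprocess.TimeoutExpired:
    hung = True
    child.kill()   # the OS terminates the errant process outright
    child.wait()

print(hung, child.poll() is not None)  # True True
```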

Yes, I'm reaching somewhat. Although I actually would like that
feature, even if it were available it would be quite some time
before implementing it would be high enough on the list of priorities
to bother. And wait! It _is_ available, just not directly, so I
can hardly complain. :)

Point mostly taken...

Sure, we did end up having the feature -- just a fingerbreadth away
from the average programmer's reach;-). Anybody who truly really
needs it IS doing stuff much more complicated than average, anyway;-).


Alex
 

Florian Schulze

Dave Brueck said:
I've been working on a simplistic implementation of such a run-time
for my own projects. [snip]
445,952 python23.dll

Interesting.

Is the python23.dll compressed,or did you leave features out?

I started to leave out stuff like imageop, audioop, etc - stuff I never
use, but I believe in the end I decided not to bother since it wouldn't
amount to a big difference, so IIRC that's a normal python23.dll (with
"minimize size" compiler optimizations turned on) + select + _socket +
_sre + pyexpat + zlib + _winreg + aes then run through UPX.

-Dave

I got several reports that UPXed dlls are bad, but I still need some hard
data. I can only say that I had a bug report which said that a function
didn't work correctly in a compressed dll. Another thing I heard is that
Windows can't share compressed dlls, so they are loaded separately for
each process which uses the dll. If anyone knows more details I would be
very grateful.

Regards,
Florian Schulze
 

Alex Martelli

Peter said:
The highlights mention the existence of a new API,
PyThreadState_SetAsyncExc, which is "deliberately accessible only from
C", that can interrupt a thread by sending it an exception.

I can't find an online discussion of this, so I'm asking here. Why was
this made accessible only from C? Is it dangerous? Experimental? Someone
feels it will be used inappropriately if too readily available at the
Python level?

As covered in previous discussion, basically the last reason.

Presumably somebody will come up with a little extension module or other
technique for calling this which will let anyone use it at will, so I'm
unclear on why it should be made inaccessible from Python.

Having to use a third-party extension module, or other kludge, will make
people more aware that this functionality is most likely NOT intended nor
appropriate for the use they have in mind (that's going to be the case
well over 90% of the time, IMNSHO based on teaching, consulting and
debugging lots and LOTS of horrid, inappropriate threading architectures
over the years). I just wish that most of the threading-synchronization
constructs currently available in Python, and module thread first and
foremost, were just as "arm's length away", leaving module threading and
module Queue as "the only obvious way to do it" for 90%+ of people's
actual threading needs... a sufficiently selfish and short-sighted
consultant might think that would reduce their volume of business, but
IMHO -- by removing most of the threading-related issues of most Python
programs -- it would just make everybody's life a little bit better;-).


Alex
 

Dominic

Well, first of all I agree with you that
a more traditional approach using queues
is better and more predictable.

And yes on a single CPU machine I was
thinking about stopping all "low priority"
threads.

This is somehow creaky and fragile but
you don't need a thread per "task" as
in your queue based alternative.

So you could have 3000 "tasks" and
one thread e.g.
However every "task" would need to recover
from its injected exception which
is kind of unpredictable and hard to program.

I would not use such an architecture ;-)

Now to the delay:

from testx import interrupt
from thread import start_new_thread,exit
from time import sleep,ctime
from sys import stdin,stdout

def f():
    try:
        while 1:
            sleep(1)
            print 'hi there', ctime()
            stdout.flush()
    except StandardError:
        print 'done: ', ctime()
        exit()


tid = start_new_thread(f, ())

sleep(1)

print 'initiate:', ctime()

interrupt(tid, StandardError())

print 'press any key'

stdin.readline()


result="""
[dh@hawk Python]$ python2.3 main.py
initiate: Sat Sep 27 00:11:30 2003
press any key
hi there Sat Sep 27 00:11:30 2003
hi there Sat Sep 27 00:11:31 2003
hi there Sat Sep 27 00:11:32 2003
hi there Sat Sep 27 00:11:33 2003
hi there Sat Sep 27 00:11:34 2003
hi there Sat Sep 27 00:11:35 2003
hi there Sat Sep 27 00:11:36 2003
hi there Sat Sep 27 00:11:37 2003
done: Sat Sep 27 00:11:37 2003

[dh@hawk Python]$
"""

machine="""
Linux hawk 2.4.18-6mdk #1 Fri Mar 15 02:59:08 CET 2002 i686 unknown
"""

testx="""
# Pyrex version 0.8.2

cdef extern int PyThreadState_SetAsyncExc(long id, obj)


def interrupt(id, obj):
    PyThreadState_SetAsyncExc(id, obj)
"""

As you can see it takes about 7 seconds until the
exception is injected.

Ciao,
Dominic


 

Dominic

I have used Python 2.3 which is probably
the reason. I'll try it with 2.3.1 again.

Ciao,
Dominic
 

Alex Martelli

Dominic said:
Well, first of all I agree with you that
a more traditional approach using queues
is better and more predictable.

And yes on a single CPU machine I was
thinking about stopping all "low priority"
threads.

This is somehow creaky and fragile but
you don't need a thread per "task" as
in your queue based alternative.

Queues don't make you need a thread per task: you
can perfectly well have threads from a pool peeling
tasks off a queue of tasks -- that's a frequent use,
in fact.

So you could have 3000 "tasks" and
one thread e.g.
However every "task" would need to recover
from its injected exception which
is kind of unpredictable and hard to program.

I would not use such an architecture ;-)

Me neither -- exceptions can arrive at any time, and
recovering and restarting a task under such conditions
is frighteningly hard. Much easier to have all tasks
coded as loops with reasonably frequent checks on e.g.
a lock, if you do need to temporarily suspend them all
to leave CPU available for an occasional high-priority
task -- this way the checks come when the task is in
a "reasonable", "restartable" state, implicitly.
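The "loops with reasonably frequent checks" pattern can be sketched with threading.Event in modern Python (the names `low_priority_task`, `pause`, and `stop` are invented for illustration): each iteration is one small unit of work, so suspension always happens at a restartable point.

```python
import threading
import time

pause = threading.Event()   # set => low-priority work should yield
stop = threading.Event()
log = []

def low_priority_task():
    # Cooperative loop: the suspension check happens between units
    # of work, so the task is always in a "restartable" state.
    while not stop.is_set():
        if not pause.is_set():
            log.append("work")  # one small unit of work
        time.sleep(0.01)

t = threading.Thread(target=low_priority_task)
t.start()
time.sleep(0.1)   # let the low-priority task make some progress
pause.set()       # a high-priority job arrives: suspend
before = len(log)
# ... the high-priority job would run here ...
pause.clear()     # high-priority job done: resume
time.sleep(0.1)
stop.set()        # orderly shutdown
t.join()
```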

Now to the delay: ...
As you can see it takes about 7 seconds until the
exception is injected.

Offhand you seem to be testing your console drivers and the
C runtime library interface to them, much more than you're
testing Python. The test I posted doesn't use the console
in such critical junctures -- admittedly it does use time.sleep,
one does have to waste time in SOME way without melting the
CPU, but that still feels less artificial to me than your
console-based minuet.


Alex
 
