Thread imbalance

Tuvas

I have a program running several threads. One of them must run every
specified interval (usually 1 second); the whole point of having a
thread is to do this. However, I've noticed the following: when
another part of the program is active, this thread slips into
disuse, i.e., it only runs about once every 4-5 seconds, or perhaps it
waits for a lull in the computing process. How can I ensure that this
does not happen? This thread uses little processing power, so it could
be set to a high priority, if there is a way to do this. Thanks!
 
Carl J. Van Arsdall

Tuvas said:
I have a program running several threads. One of them must run every
specified interval (usually 1 second); the whole point of having a
thread is to do this. However, I've noticed the following: when
another part of the program is active, this thread slips into
disuse, i.e., it only runs about once every 4-5 seconds, or perhaps it
waits for a lull in the computing process. How can I ensure that this
does not happen? This thread uses little processing power, so it could
be set to a high priority, if there is a way to do this. Thanks!
I think that might be difficult using Python threads, simply because of
how Python manages the global interpreter lock.

One suggestion I might have is to have python release the global
interpreter lock more frequently, you can read about the global
interpreter lock here:

http://docs.python.org/api/threads.html

You might also be able to use some timer/condition construct in
combination with this, something like

Thread A:
    if watchdogTimer():
        conditionVar.acquire()
        conditionVar.notify()      # wakes one waiter (thread B)
        conditionVar.release()

Thread B:
    while True:
        conditionVar.acquire()
        conditionVar.wait()        # releases the lock while blocked
        functionToDoSomething()
        conditionVar.release()

This is pseudo-Python, of course; if you need to know more about these
objects, I would suggest consulting the Python manual.
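Made concrete, the pseudocode above might look like this as a runnable sketch (modern Python; `runs`, `thread_b`, and the 0.05 s "watchdog" delay are illustrative stand-ins for whatever the OP's program actually does):

```python
import threading
import time

conditionVar = threading.Condition()
runs = []                       # records each firing of the periodic work

def functionToDoSomething():
    runs.append(time.time())    # placeholder for the real periodic work

def thread_b():
    # Thread B: block on the condition until thread A pokes us.
    # The timeout is a safety net so this demo can never hang.
    with conditionVar:
        conditionVar.wait(timeout=2.0)
    functionToDoSomething()

worker = threading.Thread(target=thread_b)
worker.start()

time.sleep(0.05)                # stand-in for "the watchdog timer expired"
with conditionVar:
    conditionVar.notify()       # wake thread B

worker.join()
print(len(runs))                # -> 1
```

The `with conditionVar:` blocks are the modern equivalent of the explicit acquire/release pairs in the pseudocode.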

-carl

--

Carl J. Van Arsdall
(e-mail address removed)
Build and Release
MontaVista Software
 
Ivan Voras

Tuvas said:
waits for a lull in the computing process. How can I ensure that this
does not happen? This thread uses little processing power, so it could
be set to a high priority, if there is a way to do this. Thanks!

Python is bad for concurrently executing/computing threads, but it
shouldn't be that bad - do you have lots of compute-intensive threads?

If you are running on a unix-like platform, see documentation for
signal() and SIGALRM - maybe it will help your task.
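A minimal sketch of the SIGALRM suggestion on a Unix-like platform (modern Python; the 0.05 s interval and the `on_alarm` handler are illustrative stand-ins — the OP would use 1 second and call his real periodic function):

```python
import signal
import time

tick_count = 0

def on_alarm(signum, frame):
    # Invoked each time the interval timer fires, regardless of
    # what the rest of the (single-threaded) program is doing.
    global tick_count
    tick_count += 1

signal.signal(signal.SIGALRM, on_alarm)
signal.setitimer(signal.ITIMER_REAL, 0.05, 0.05)   # fire every 0.05 s

time.sleep(0.3)             # stands in for the program's other work
signal.setitimer(signal.ITIMER_REAL, 0, 0)         # cancel the timer

print(tick_count)
```

Note that Python only delivers signals to the main thread, so this approach sidesteps the thread-scheduling question rather than solving it.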
 
Peter Hansen

Ivan said:
Python is bad for concurrently executing/computing threads, but it
shouldn't be that bad - do you have lots of compute-intensive threads?

Just in case anyone coming along in the future reads this statement, for
the record I'd like to say this is obviously a matter of opinion or
interpretation, since in my view Python is *excellent* for threads
(which are generally considered "concurrently executing", so I think
that adjective is redundant, too).

Ivan, what makes you say that Python is bad for threads? Did the
qualification "concurrently executing/computing" have some significance
that I missed?

-Peter
 
Ivan Voras

Peter said:
Ivan, what makes you say that Python is bad for threads? Did the
qualification "concurrently executing/computing" have some significance
that I missed?

Because of the GIL (Global Interpreter Lock). It can be a matter of
opinion, but by "good threading implementation" I mean that all threads
in the application should run "natively" on the underlying (p)threads
library at all times, without implicit serialization. For example, Java
and Perl do this, possibly also Lua and C#. Python and Ruby have a global
interpreter lock which prevents two threads of pure Python code (not
"code written in C" :) ) in one interpreter process from executing at the
same time.

Much can be said about the "at the same time" part, but in at least one
case it is unambiguous: on multiprocessor machines. Someone writing a
multithreaded server application could get into trouble if he doesn't
know this (of course, it depends on the exact circumstances and the
purpose of the application; for example, system calls which block, such
as read() and write(), can execute concurrently, while two "while True:
pass" threads cannot). This is not new information; it's available in
many forms in this newsgroup's archives.

I think it would also cause implicit problems in situations like the
OP's, where there's a "watchdog" thread monitoring some job or jobs,
and the job(s) interfere with the watchdog by contending for the GIL,
possibly when there are lots of threads or frequent spawning of
processes or threads.

I don't intend to badmouth Python, but to emphasize that one should be
familiar with all the details of the tool one is using :)
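To make the "two pure-Python compute loops" point concrete, here is a small sketch (modern Python): on CPython, both threads below finish correctly, but because of the GIL they take roughly as long as running the two loops one after the other on a single core — correctness is preserved, only parallel speedup is lost.

```python
import threading

def count_up(n, results, idx):
    # Pure-Python CPU-bound loop: under CPython's GIL, only one such
    # thread executes Python bytecode at any given instant.
    total = 0
    for _ in range(n):
        total += 1
    results[idx] = total

results = [0, 0]
threads = [threading.Thread(target=count_up, args=(200000, results, i))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)   # -> [200000, 200000]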
 
Peter Hansen

Ivan said:
Because of the GIL (Global Interpreter Lock).
...
Much can be said about the "at the same time" part, but in at least one
case it is unambiguous: on multiprocessor machines.

Okay, I thought that might be what you were talking about.

I agree that in multiprocessor situations the GIL can be a problem.

I'm equally certain that the GIL is not involved in the OP's problem,
unless he's running custom extensions that he's produced without any
awareness of the GIL.

I'm pretty sure that, for the OP's situation, Python threading is
perfectly fine and the problems he's facing are not inherent, but
related to the way he is doing things. I don't see enough information
in his post to help further, though.

-Peter
 
Aahz

Ivan said:
Because of the GIL (Global Interpreter Lock). It can be a matter of
opinion, but by "good threading implementation" I mean that all threads
in the application should run "natively" on the underlying (p)threads
library at all times, without implicit serialization. For example, Java
and Perl do this, possibly also Lua and C#. Python and Ruby have a global
interpreter lock which prevents two threads of pure Python code (not
"code written in C" :) ) in one interpreter process from executing at the
same time.

When did Perl gain threads? If you read Bruce Eckel, you also know that
the Java threading system has been buggy for something like a decade.
 
Neil Hodgson

Ivan said:
opinion, but by "good threading implementation" I mean that all threads
in the application should run "natively" on the underlying (p)threads
library at all times, without implicit serialization. For example, Java
and Perl do this, possibly also Lua and C#.

Lua has coroutines rather than threads. It can cooperate with
threading implemented by a host application or library.

See the coroutines chapter in
http://www.lua.org/pil/index.html

Neil
 
Ivan Voras

Neil said:
Lua has coroutines rather than threads. It can cooperate with
threading implemented by a host application or library.

I mentioned it because, as far as I know, Lua's interpreter doesn't do
implicit locking on its own, and if I want to run several threads of
pure Lua code, it's possible as long as I take care of data sharing and
synchronization myself.
 
Tuvas

The stuff it runs isn't heavily processor-intensive, but rather
consistent: it's looking to read incoming data. For some reason, when
it does this, it won't execute other threads until it's done. Hmmm.
Perhaps I'll just have to work on a custom read function that doesn't
depend so much on processing power.
 
Neil Hodgson

Ivan Voras:
I mentioned it because, as far as I know, Lua's interpreter doesn't do
implicit locking on its own, and if I want to run several threads of
pure Lua code, it's possible as long as I take care of data sharing and
synchronization myself.

Lua's interpreter will perform synchronization if you create
multiple threads that attach to a shared Lua state. You have to provide
some functions for Lua to call to perform the locking.

If you create multiple Lua states they are completely separate
worlds and do not need to be synchronized.

Neil
 
Bryan Olson

Tuvas said:
The stuff it runs isn't heavily processor-intensive, but rather
consistent: it's looking to read incoming data. For some reason, when
it does this, it won't execute other threads until it's done. Hmmm.
Perhaps I'll just have to work on a custom read function that doesn't
depend so much on processing power.

That is odd. Python should release its global lock while waiting
for I/O, and thus other threads should run. What read function
are you using? Can you provide a minimal example of your problem?
 
Tuvas

The read function used actually comes from a C library, for use over a
CAN interface. The same library appears to work perfectly fine in C. I
wrote an extension module for it. The function t_config is the
threaded function that calls a function named config once per
second. Note the time.time() timer:

def t_config():
    while True:
        if can_send_config:
            config()
            ts = time.time()
            time.sleep(1)
            if time.time() - ts > 1.5:
                print "Long time detected"
        else:
            time.sleep(.1)

When nothing is happening, the thread runs pretty consistently at
1-second intervals. However, when the program is doing I/O through this
C function, the time.sleep() call takes longer than requested.
Kind of strange, isn't it? I can explain in more detail if it's needed,
but I don't know how much it'll help...
 
Bryan Olson

Peter said:
Just in case anyone coming along in the future reads this statement, for
the record I'd like to say this is obviously a matter of opinion or
interpretation, since in my view Python is *excellent* for threads

Ivan already gave his answer: the GIL. The GIL is no real problem
on single processors. As commodity machines go to
multi-poly-hyper-cell-core, or whatever they call it, the GIL is
becoming much less attractive.

There are other issues of note:

As the OP's problem points out, Python threads don't offer any
priority control.

Timeouts on the threading module's Condition objects are
implemented by active polling. Timeouts in threading.Event and
Queue.Queue are implemented via threading.Condition so they
inherit the polling implementation.

A thread can wait on just one
Lock/RLock/Condition/Semaphore/Event/Timer at a time. There is no
analog of Win32's WaitForMultipleObjects(). As an example, suppose
we wanted to implement timeouts without polling by waiting for
either a Semaphore or a timer. With Python's standard thread and
threading libraries, there's no good way to do so.
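A common workaround for "wait on several sources at once" (not an equivalent of WaitForMultipleObjects, and in the library versions of this era the Queue timeout itself polled internally) is to funnel every event source into a single queue and wait on that with a timeout. A sketch in modern Python, where the `producer` thread is a hypothetical stand-in for any event source:

```python
import queue
import threading
import time

events = queue.Queue()

def producer():
    # Hypothetical event source: delivers one item shortly after start.
    time.sleep(0.05)
    events.put("data")

threading.Thread(target=producer).start()

try:
    # Wait for whichever comes first: an event, or the 1-second timeout.
    msg = events.get(timeout=1.0)
except queue.Empty:
    msg = "timeout"

print(msg)
```

Each source posts to the same queue, so one blocking get() covers all of them plus the timeout case.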
 
Bryan Olson

Tuvas said:
The read function used actually comes from a C library, for use over a
CAN interface. The same library appears to work perfectly fine in C. [...]
When nothing is happening, the thread runs pretty consistently at
1-second intervals. However, when the program is doing I/O through this
C function, the time.sleep() call takes longer than requested.
Kind of strange, isn't it? I can explain in more detail if it's needed,
but I don't know how much it'll help...

That's enough detail that I'll take a most-likely-guess:

When your C function runs, by default you hold the aforementioned
Global Interpreter Lock (GIL). Unless you explicitly release it
(via the Py_BEGIN_ALLOW_THREADS / Py_END_ALLOW_THREADS macros),
you will hold it until you return, and nothing else will run.
Python's own I/O operations always release the GIL right before
calling any potentially blocking operation and reacquire it
right after.

See:

http://docs.python.org/api/threads.html
 
