Is it possible to switch between threads manually?


hayyal

Hi folks,

I have a program which uses 5 threads to complete a process. When the application runs, the OS switches between the threads as it sees fit, as expected.

My question is: while the process is running, can I interrupt it manually and switch between threads? For example, suppose the application is running and, at the moment I interrupt it, execution is in thread3. Can I then switch from thread3 to thread5 and continue execution?
If yes, is there any difference between what the operating system does and what I did?
If yes, is there a chance of getting a core dump?
If yes, what happens to the stacks of the other threads?
If yes, what happens to the thread which was interrupted manually (in this case, what happens to thread3)? Will thread3 resume from where it was stopped, or from wherever the OS orders it to?

I appreciate your views and comments on this topic.

Thanks & regards
Nagaraj Hayyal
 
Guest

> Hi folks,

First, the current C++ standard has no knowledge of threads, so your question is off topic; comp.programming.threads is probably a better suited group for it.

> I have a program which uses 5 threads to complete a process.
> When the application runs, the OS switches between the threads
> as expected.

The whole idea of threads is that more than one of them can execute concurrently, so unless you have a single-core CPU without hyper-threading capabilities (or similar), your application's threads should be executing concurrently.

> My question is: while the process is running, can I interrupt it
> manually and switch between threads?
> For example, suppose the application is running and, at the moment
> I interrupt it, execution is in thread3.
> Can I then switch from thread3 to thread5 and continue execution?
> If yes, is there any difference between what the operating system
> does and what I did?

It depends on your threading library, but I have never heard of one that allows that kind of manipulation, nor can I come up with a reason to do so.

> If yes, is there a chance of getting a core dump?

Probably.

> If yes, what happens to the stacks of the other threads?

Each thread has its own stack, so not much, I would imagine.
> If yes, what happens to the thread which was interrupted manually
> (in this case, what happens to thread3)?

It stops running?

> Will thread3 resume from where it was stopped, or from wherever
> the OS orders it to?

If you find a library that allows you to do such a thing, then it would probably continue from where it was stopped.

> I appreciate your views and comments on this topic.

My view is that it is a weird idea with little practical application. If you want to allow such behaviour then write the application to allow it; I would imagine that for some applications, arbitrarily stopping a thread in the middle of execution could have disastrous consequences.
 
yanlinlin82

> Can I then switch from thread3 to thread5 and continue execution?
> If yes, is there any difference between what the operating system
> does and what I did?

I know Visual C++ has this function. When you interrupt the program, there is a menu item 'Thread' in 'Debug' which lists all the threads of the current program so you can switch between them. I think this function belongs to the debugger; other platforms probably have something similar.
 
werasm

> The whole idea of threads is that more than one of them can execute
> concurrently,

Sorry, OT, but your response got my attention.

Not necessarily. Threads were in use long before dual-core processors existed. In software that requires real-time response under certain circumstances (especially if one only has one processor), threads are used to prioritize the part that requires a real-time response. They are also used where one waits on blocking IO calls whilst keeping the GUI active, for example.

> so unless you have a single-core CPU without hyper-threading
> capabilities (or similar), your application's threads should be
> executing concurrently.

Yes, true, if they make use of round-robin scheduling.

> It depends on your threading library, but I have never heard of one
> that allows that kind of manipulation, nor can I come up with a
> reason to do so.

If you make use of pre-emptive scheduling (OS dependent) you may be able to do this by using thread priorities. I don't think Windows supports pre-emptive scheduling, for one. We use it under Linux, but it is only advised when something really needs to finish before anything else (even the kernel).

> I would imagine that for some applications, arbitrarily stopping a
> thread in the middle of execution could have disastrous consequences.

Yes, and for some it is absolutely required (if I understand you correctly). E.g. a little controller responsible for some mechanism that acts on incoming missiles monitors status (threadX), which takes X time. Suddenly an event happens indicating an incoming missile, and the controller (threadY) has to respond within Y time; if the rest of threadX completed first, the Y deadline would not be met. So you stop (or pause) threadX, continue with threadY until it completes, and then continue with threadX where you left off.

Actually, in Windows this happens all the time: it is called round-robin scheduling, where threads each get given a share of the processor, the ones not having a share at that particular instant in time being saved until they do get processor share. From an application's perspective, code that is executed in a thread doesn't have control over when it executes relative to other threads, except at specific points where one uses synchronization primitives (mutexes, semaphores, condition variables).

Regards,

Werner
 
James Kanze

> I have a program which uses 5 threads to complete a process.
> When the application runs, the OS switches between the threads
> as expected.
> My question is: while the process is running, can I interrupt it
> manually and switch between threads?

This depends very much on the system. I've never heard of a system which has a request "switch to thread x", however.

> For example, suppose the application is running and, at the moment
> I interrupt it, execution is in thread3.
> Can I then switch from thread3 to thread5 and continue execution?
> If yes, is there any difference between what the operating system
> does and what I did?

I'm not too sure what you're trying to do, but Posix threads (and all real-time systems, at least) have a provision for thread priority; if an external event unblocks a thread with higher priority than the one running, the thread with higher priority is guaranteed to run. (Note that on a modern machine, this doesn't mean that the originally running thread is paused, however. Most modern machines are capable of running several threads at the same time.) This feature is optional, however, and may not be implemented on the particular Posix implementation which you're using.
> If yes, is there a chance of getting a core dump?

There's always a chance of getting a core dump, even without threads.

> If yes, what happens to the stacks of the other threads?

If you get a core dump, the process is terminated.

> If yes, what happens to the thread which was interrupted manually
> (in this case, what happens to thread3)?
> Will thread3 resume from where it was stopped, or from wherever
> the OS orders it to?
> I appreciate your views and comments on this topic.

You'll have to explain what you're actually trying to do, in comp.programming.threads, since that's where the threading experts hang out. Note, however, that most threading issues are very system dependent, and scheduling policies differ even between different Posix implementations.
 
James Kanze

> Sorry, OT, but your response got my attention.
> Not necessarily.

He should have said "pseudo-concurrently" :). Conceptually, each thread represents a separate thread of execution, which runs in parallel with the other threads. (Obviously, real parallelism requires as many CPUs as there are active threads.)

> Threads were in use long before dual-core processors existed.

Except that they weren't always called threads :). Back before MMUs were standard, when memory wasn't protected, there really wasn't any difference between a thread and a process. My first experience with "multi-threaded" code, back in the 1970's, called them processes, but since everyone could access all of the memory, they behaved pretty much like threads today. (And an incorrectly managed pointer could corrupt the OS data structures, which gave a whole new dimension to undefined behavior; it really could wipe out a hard disk.)

And of course, even back then, we had multiprocessor systems. (But they were easier to understand and program, since there was no cache, no pipeline, and no hardware reordering.)

> In software that requires real-time response under certain
> circumstances (especially if one only has one processor),
> threads are used to prioritize the part that requires a real-time
> response. They are also used where one waits on blocking IO calls
> whilst keeping the GUI active, for example.
> Yes, true, if they make use of round-robin scheduling.
> If you make use of pre-emptive scheduling (OS dependent) you may be
> able to do this by using thread priorities. I don't think Windows
> supports pre-emptive scheduling, for one. We use it under Linux,
> but it is only advised when something really needs to finish before
> anything else (even the kernel).

I'm not sure about your vocabulary here. Scheduling policy is somewhat orthogonal to preemption; Windows definitely has preemptive threads, even if it doesn't support a priority-based threading policy. Preemption simply means that context switches may occur at any time, regardless of what the thread is doing. (Without preemption, you don't need locks, because your thread will retain control of the CPU until you tell the system it can change threads. It's actually a much easier model to deal with, but of course, it makes no sense in a multi-CPU environment.)

Note too that there are many variants of scheduling policies. Early Unix, for example, used a decaying priority for process scheduling: every so often, the "priority" of all of the processes was upped, by a factor which depended on their nice value, and as a process used CPU (and possibly other resources), its priority "decayed".

I think that Posix has provisions for relative priority within a process as well. (But again, all scheduling policy options are optional; a Posix system doesn't have to implement them.)
 
werasm

> Except that they weren't always called threads :).

Yes, I'm aware of this. I did maintenance on a Multibus 2 platform using Intel's iRMX in '97. After that we moved to VxWorks. In both cases they were called tasks.

> (And an incorrectly managed pointer could corrupt the OS data
> structures, which gave a whole new dimension to undefined behavior;
> it really could wipe out a hard disk.)

Yes, it cost an inexperienced programmer many late nights, the biggest culprit being sprintf most of the time.

> I'm not sure about your vocabulary here. Scheduling policy is
> somewhat orthogonal to preemption;

I got my vocabulary wrong. I meant to say "preemptive priority scheduling". Admittedly we always call it just "preemptive scheduling", although I see your point. The threads certainly do get preempted, whether one is using round-robin or preemptive priority. In the one case time sharing applies, whereas in the other the thread with the highest priority gets the processor. (This is still often used in embedded processors today; in fact we are using it on ARM under Linux.)

> Windows definitely has preemptive threads, even if it doesn't
> support a priority-based threading policy.

Yes, that was what I meant.

> (Without preemption, you don't need locks, because your thread
> will retain control of the CPU until you tell the system it can
> change threads.

... After which the system would preempt you and give control to the other thread? ... ;-)

> It's actually a much easier model to deal with, but of course,
> it makes no sense in a multi-CPU environment.)

I've never used or heard of this model (non-preemptive) before. I could perhaps think of simulating it, but that would require locks. I've implemented something like an Ada rendezvous (if I understand it correctly) that waits for another thread to complete an operation specified by it. This seems to simulate the model, as the thread effectively preempts itself when it goes into the rendezvous, but it certainly requires locks.

Do you have examples (of non-preemptive scheduling), for interest's sake?
 
James Kanze

> Yes, I'm aware of this. I did maintenance on a Multibus 2 platform
> using Intel's iRMX in '97. After that we moved to VxWorks. In both
> cases they were called tasks.

I've heard that word as well; back in the 1970's and 1980's, I tended to make the distinction: process ("processus" in French) when there was no memory protection, task ("tâche" in French) when there was. The real-time embedded processors I mostly worked on had processes; IBM mainframes had tasks.

This distinction went out the window when I started working on Unix (late 1980's), which had "processes", but used memory protection.

> Yes, it cost an inexperienced programmer many late nights, the
> biggest culprit being sprintf most of the time.

Or strcpy( malloc( strlen( s ) ), s ). On big-endian machines, that got the allocator writing to low memory very quickly.

[...]

> ... After which the system would preempt you and give control
> to the other thread? ... ;-)

If you request/authorize the switch, is preemption the correct word? (My "feeling" for the word preempt is that it implies something happening without my particularly desiring it.)

> I've never used or heard of this model (non-preemptive) before.

I'm not aware of anyone implementing it for processes. It obviously requires co-operating processes/threads, and so isn't appropriate for processes on a general-purpose, multi-user system. Threads within a process are supposed to collaborate, however, and I think that in most cases it would be preferable to the preemptive model we find everywhere.

> I could perhaps think of simulating it, but that would require
> locks.

I've considered that once or twice as well: a single mutex lock, always held except when you wanted to allow a context switch (instead of only holding it when you didn't want to allow one). It makes a lot of things considerably easier, but it does require wrapping all system calls that might block, to ensure that they count as legitimate context switch locations. (My main consideration was to allow lock-free logging; logging, of course, uses common resources which need protection.)
> I've implemented something like an Ada rendezvous (if I understand
> it correctly) that waits for another thread to complete an operation
> specified by it. This seems to simulate the model, as the thread
> effectively preempts itself when it goes into the rendezvous, but it
> certainly requires locks.
> Do you have examples (of non-preemptive scheduling), for interest's
> sake?

Nothing recent, but we used it a lot on the 8080. (The non-preemptive kernel I used on the 8080 fit in less than 80 bytes. Very useful when you only had 2K of ROM for the entire program.) I think that early Windows (pre-Windows 95) also used non-preemptive scheduling for its processes, but I'm not really sure; I never actually programmed on the system. I just heard rumors that non-cooperating processes could hang the system.
 
Jerry Coffin

[ ... ]

> This depends very much on the system. I've never heard of a
> system which has a request "switch to thread x", however.

That's true of kernel threading as a rule. Back in the bad old days of user threading, some packages provided this, though it was far more common for a thread to just yield, and the scheduler picked what thread to run next.

Depending on the target system, it sounds like the OP is asking for something closer to what Windows calls "fibers". If he wants something portable, I believe he'll have to do it himself though. In that case, it seems like the old cthreads package would be a reasonable place to start. It's been a long time since I played with that, but just glancing over the source code, cthread_thread_schedule(newthread) sounds like it's at least pretty close to what the OP is apparently asking for.
 
James Kanze

[ ... ]

> That's true of kernel threading as a rule. Back in the bad old days
> of user threading, some packages provided this, though it was far
> more common for a thread to just yield, and the scheduler picked
> what thread to run next.
> Depending on the target system, it sounds like the OP is asking for
> something closer to what Windows calls "fibers".

Just curious, but are "fibers" something like co-processes? (I'd completely forgotten that technique. It's been a long time since I last used it.)
 
Guest

[ ... ]

> Just curious, but are "fibers" something like co-processes?
> (I'd completely forgotten that technique. It's been a long time
> since I last used it.)

From what I can tell, a fiber in Windows is something like a userland thread; each normal thread can schedule multiple fibers. One might say that fibers are to threads what threads are to processes.
 
Jerry Coffin

[ ... ]

> Just curious, but are "fibers" something like co-processes?
> (I'd completely forgotten that technique. It's been a long time
> since I last used it.)

It's pretty much a user-land thread. You start with a normal kernel
thread. You create N-1 other fibers, as well as convert the original
thread to a fiber. Then you have a pool of N fibers that you can switch
between as you see fit. The kernel scheduler continues to schedule the
group of them as (essentially) a single thread, and you can pick which
fiber is going to execute at any given time.

IIRC, MS introduced fibers when they were still doing a JVM. I believe
they were introduced primarily (exclusively?) to support Java threads
using the existing kernel thread management instead of writing a whole
new thread manager into the JVM.
 
Alf P. Steinbach

* James Kanze:

> Just curious, but are "fibers" something like co-processes?

You mean co-routines. Yes, they are co-routines.

> (I'd completely forgotten that technique. It's been a long time
> since I last used it.)

It's been a long time since I last implemented it (then in terms of longjmp + a little assembly; it was a fad around 1990). But the interesting thing is that co-routines are so useful that they have been implemented in terms of e.g. Java threads, where the efficiency argument is void. And Windows API fibers are seemingly implemented in terms of Windows threads: you start with threads and convert them to fibers.

Cheers, & hth.,

- Alf
 
tragomaskhalos

> program.) I think that early Windows (pre-Windows 95) also used
> non-preemptive scheduling for its processes, but I'm not really
> sure; I never actually programmed on the system. I just heard
> rumors that non-cooperating processes could hang the system.

Certainly this is correct for Windows 3.1; one's code had to call the "Yield" API function to give other processes a chance! Despite which, I often feel that those early Windowses were somehow more reliable ... :)
 
Jerry Coffin

[ ... ]

> Certainly this is correct for Windows 3.1; one's code had to
> call the "Yield" API function to give other processes a chance!
> Despite which, I often feel that those early Windowses were
> somehow more reliable ... :)

One did not normally call the yield function. Most code had a loop to
repeatedly 1) call GetMessage, and 2) process the message that was
retrieved. GetMessage was where the yielding happened.

Of course, when/if you had to carry out processing that wasn't in
(direct) response to a message, things got a bit uglier...
 
Alf P. Steinbach

* Jerry Coffin:

[ ... ]

>> Certainly this is correct for Windows 3.1; one's code had to
>> call the "Yield" API function to give other processes a chance!
>> Despite which, I often feel that those early Windowses were
>> somehow more reliable ... :)
>
> One did not normally call the yield function. Most code had a loop
> to repeatedly 1) call GetMessage, and 2) process the message that
> was retrieved. GetMessage was where the yielding happened.
>
> Of course, when/if you had to carry out processing that wasn't in
> (direct) response to a message, things got a bit uglier...

I think this has wandered off-topic, but there are some relevant C++ perspectives.

In particular, the flurry of research on "active objects", mostly based on coroutines (although some were thread-based), seemed to die out quite silently in the latter half of the 1990's, even though the term "active object" is now being used for just about anything, like "well, it's sort of active, that's cool". I suspect practically useful active objects in C++ need language support, like Ada rendezvous.

Cheers,

- Alf
 
James Kanze

> * James Kanze:
> You mean co-routines.

Yes, that's the word.

> Yes, they are co-routines.
> It's been a long time since I last implemented it (then in terms of
> longjmp + a little assembly; it was a fad around 1990).

I think it was around 1985, or a little before, that I last used it. But I think it was really at the base of the USL threading package, as you say, with longjmp, etc. (In my earlier case, it was a lot simpler: I just had two stacks, one for the main process, and the other for the co-process, plus a special function which swapped the stack pointer. This was all in assembler, so I just passed parameters and return values in registers.)

> But the interesting thing is that co-routines are so useful
> that they have been implemented in terms of e.g. Java threads,
> where the efficiency argument is void. And Windows API fibers
> are seemingly implemented in terms of Windows threads: you
> start with threads and convert them to fibers.

Well, co-routines are very much like non-pre-emptive threads, which makes thread safety an order of magnitude or two simpler. From the description of others, I gather that threads/fibers are about what Unix (or at least Solaris) called LWP/threads. For the most part, my impression is that Unix (or at least Solaris) is moving away from that: kernel threads (the earlier LWPs) have gotten to the point that they're fast enough that you don't need anything even lower level.
 
