Let My Terminal Go

mystilleef

Hello,

A user of my application pointed me to a behavior in gVim,
the text editor, that I would like to implement in my
application.

When gVim is launched from a shell terminal, it completely
frees the terminal. You can continue to use the terminal for
whatever purpose you wish, including closing and exiting it,
without any effect on the running gVim instance.

How do I implement this in my application, which is written
in Python? I would like to believe it does not involve forking
my application into a new process. Maybe there is a signal I
can send to the operating system to achieve this, right?

Your help is appreciated.

Thanks
 
marduk

Hello,

mystilleef said:
A user of my application pointed me to a behavior in gVim,
the text editor, that I would like to implement in my
application.

When gVim is launched from a shell terminal, it completely
frees the terminal. You can continue to use the terminal for
whatever purpose you wish, including closing and exiting it,
without any effect on the running gVim instance.

How do I implement this in my application, which is written
in Python? I would like to believe it does not involve forking
my application into a new process. Maybe there is a signal I
can send to the operating system to achieve this, right?

gvim forks. Why do you want to avoid it?

import os, sys

pid = os.fork()
if pid != 0:
    # exit the parent so the shell gets its prompt back
    sys.exit(0)
# the child continues running here
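Note that the fork alone only gets your prompt back; the child is
still attached to the terminal's session, so closing the terminal
can still kill it. Detaching fully also takes setsid and redirecting
the std descriptors, as described elsewhere in this thread.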
 
Sybren Stuvel

(e-mail address removed) enlightened us with:
When gVim is launched from a shell terminal, it completely frees the
terminal. [...] How do I implement this in my application written in
python?

By using fork() and catching the HUP signal.
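
A minimal sketch of that combination; ignoring SIGHUP (rather than
installing a handler) is enough for the child to survive the terminal
closing:

import os, signal, sys

pid = os.fork()
if pid != 0:
    # parent exits, returning control to the shell
    sys.exit(0)

# child: ignore the hangup signal the kernel delivers when the
# controlling terminal goes away
signal.signal(signal.SIGHUP, signal.SIG_IGN)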

Sybren
 
Mike Meyer

Hello,

mystilleef said:
A user of my application pointed me to a behavior in gVim,
the text editor, that I would like to implement in my
application.

When gVim is launched from a shell terminal, it completely
frees the terminal. You can continue to use the terminal for
whatever purpose you wish, including closing and exiting it,
without any effect on the running gVim instance.

How do I implement this in my application, which is written
in Python? I would like to believe it does not involve forking
my application into a new process. Maybe there is a signal I
can send to the operating system to achieve this, right?

Several things need to happen.

First, you need to take yourself out of the session you are in. To do
that, you use the setsid system call, which is available in Python as
os.setsid.

Second, you need to tell the shell that launched you that it can
continue. The standard way to do this is to fork your process and
have the parent exit. That causes the parent shell to think your
process is dead, and so to forget about it completely. There are
other ways to do this, but they aren't as reliable.

Last, you need to detach your process from the terminal. You do that
by closing all the file descriptors you have that reference it; stdin,
stdout and stderr should do the trick. The standard trick is to reopen
them on /dev/null. This has to happen last, so that if there are
problems in the earlier steps, writing to stderr about them still does
some good.
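
Putting the three together, a minimal sketch. One practical wrinkle:
os.setsid fails if the caller is already a process group leader, so
in the code the fork has to come before the setsid:

import os, sys

def daemonize():
    # fork and let the parent exit, so the shell that launched us
    # gets its prompt back and forgets about us
    if os.fork() != 0:
        sys.exit(0)

    # the child starts a new session, leaving the old session and
    # its controlling terminal behind
    os.setsid()

    # last, point stdin, stdout and stderr at /dev/null, so that
    # any errors above could still reach the old stderr
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):
        os.dup2(devnull, fd)
    os.close(devnull)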

The easy way to do all these things - from C, anyway - is with
daemon(3). That isn't wrapped as part of the Python library. The
easiest way to solve your problem may be to write a wrapper for that
call. If daemon exists on enough systems, submitting your wrapper as a
patch to the os module would be appropriate.

<mike
 
Mystilleef

Hello,

Thank you. That's all I needed. For some reason, I had always assumed
forking was an expensive process. I guess I was ill-informed.
 
Mystilleef

Hello,

Thanks to all the responders and helpers on the group. I'm learning
every day.

Thanks
 
Ivan Voras

Mike said:
The easy way to do all these things - from C, anyway - is with
daemon(3). That isn't wrapped as part of the Python library. The
easiest way to solve your problem may be write a wrapper for that
call. If daemon exists on enough systems, submitting your wrapper as a
patch to the os modulee would be appropriate.

I think the daemon() library call only exists on the BSDs. Anyway, there
it is implemented with a fork() call and some additional code to close
the std descriptors, so there's no practical difference between calling
daemon() and doing the fork() yourself...
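
For what it's worth, glibc ships daemon(3) as well, so on Linux and
the BSDs you could call it directly through ctypes instead of writing
a C wrapper. A sketch, assuming a libc that actually exports the call:

import ctypes
import ctypes.util

# load the C library; assumes it exports daemon(3), as glibc and
# the BSD libcs do
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# daemon(nochdir=0, noclose=0) changes directory to / and redirects
# the std descriptors to /dev/null
if libc.daemon(0, 0) != 0:
    raise RuntimeError("daemon(3) failed")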
 
Dan Stromberg

Mystilleef said:
Thank you. That's all I needed. For some reason, I had always assumed
forking was an expensive process. I guess I was ill-informed.

In a loop, yes, it's expensive.

Done once, it's usually not unacceptable.
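
If you want a rough feel for the cost on your machine, a quick
micro-benchmark (a sketch; the numbers vary wildly by OS and load):

import os, time

# fork a batch of children that exit immediately, and time it
N = 1000
start = time.time()
for _ in range(N):
    pid = os.fork()
    if pid == 0:
        os._exit(0)       # child exits right away
    os.waitpid(pid, 0)    # parent reaps the child
elapsed = time.time() - start
print("%.0f microseconds per fork" % (elapsed / N * 1e6))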
 
Jorgen Grahn

Dan Stromberg said:
In a loop, yes, it's expensive. [...] Done once, it's usually not
unacceptable.

It depends on what you mean by expensive -- web servers can fork for each
HTTP request they get, in real-world scenarios, and get away with it. In
fact, I can't think of a scenario where it /would/ be unacceptable ;-)

But back to the original problem: I can't really see why anybody would need
the "let my terminal go" feature. Is there a reason why 'gvim foo.txt&'
isn't good enough?

/Jorgen
 
Paul Rubin

Jorgen Grahn said:
It depends on what you mean by expensive -- web servers can fork for each
HTTP request they get, in real-world scenarios, and get away with it.

This is OS dependent. Forking on Windows is much more expensive than
forking on Linux.
 
Grant Edwards

Paul Rubin said:
This is OS dependent. Forking on Windows is much more
expensive than forking on Linux.

Under VMS, fork/exec was so expensive that the Bourne shell
implementation in DECShell executed "simple" commands in the
shell's process rather than doing a fork/exec. Shell scripts that
used pipes or similar constructs requiring fork/exec ran _very_
slowly under DECShell.

Since the NT kernel is descended from VMS, I'm not surprised
that a fork is expensive.
 
Paul Rubin

Grant Edwards said:
Since the NT kernel is descended from VMS, I'm not surprised
that a fork is expensive.

Apache 2.x supports concurrency via threading as an alternative to
forking, basically in order to get acceptable performance on Windows.
 
Fredrik Lundh

Jorgen said:
In fact, I can't think of a scenario where it /would/ be unacceptable ;-)

if you're stuck on a system that doesn't use copy-on-write?

</F>
 
Jorgen Grahn

Paul Rubin said:
This is OS dependent. Forking on Windows is much more expensive than
forking on Linux.

Forking, to me, means doing what the Unix fork(2) system call does. Since
AFAIK there is no corresponding Win32 call, I assumed the original poster
was on Unix.

But now I see that he didn't use the word "fork"; someone else in the thread
did ...

You are correct, of course; the cost of spawning processes varies a lot
between OSes, and it's distinctly higher on Windows compared to Unixes in
general and Linux in particular.

(BTW, Eric Raymond argues that low-cost spawning is an important
characteristic of an OS: see
http://www.faqs.org/docs/artu/ch03s01.html#id2892171 )

/Jorgen
 
