Improving interpreter startup speed

James Mills

It's not optimal but it is very common (CGI for example).

Which is why we (the Python community)
created WSGI and mod_wsgi. C'mon guys,
these "problems" are a bit old and
outdated :)

--JamesMills
 
Gabriel Genellina

On Sun, 26 Oct 2008 23:52:32 -0200, James Mills wrote:
+1 This thread is stupid and pointless.
Even for a so-called cold startup 0.5s is fast enough!

I don't see the need to be rude.
And I DO care for Python startup time and memory footprint, and others do
too. Even if it's a stupid thing (for you).
 
James Mills

Depends on the tool: build tools and source control tools are examples where
it matters (especially when you start interfacing them with IDEs or
editors). Having fast command line tools is an important feature of
UNIX, and if you want to insert a Python-based tool in a given
pipeline, it can hurt if the pipeline is regularly updated.

Fair enough. But still:
0.5s cold startup is fast enough.
0.08s warm startup is fast enough.

Often "fast enough" is "fast enough"

--JamesMills
 
James Mills

I don't see the need to be rude.
And I DO care for Python startup time and memory footprint, and others do
too. Even if it's a stupid thing (for you).

I apologize. I do not see the point of comparing Python with
Ruby, however, or Python with anything else.

So instead of coming up with arbitrary problems, why don't
we come up with solutions for "Improving Interpreter Startup Speeds"?

I have only found that using the -S option speeds it up
significantly, but that's only if you're not using any site
packages and only using the built-in libraries.

Can site.py be improved?
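A rough way to measure that difference yourself (the helper below is just an
illustrative sketch; the timings will vary by machine):

    import subprocess
    import sys
    import time

    def best_startup(extra_args, runs=20):
        # Time `runs` warm startups of this interpreter and keep the best one.
        best = float("inf")
        for _ in range(runs):
            start = time.time()
            subprocess.call([sys.executable] + extra_args + ["-c", "pass"])
            best = min(best, time.time() - start)
        return best

    print("normal : %.3fs" % best_startup([]))      # site.py runs
    print("with -S: %.3fs" % best_startup(["-S"]))  # site.py skipped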

--JamesMills
 
durumdara

To make Python faster, you can:

1.) Use mod_python, and not CGI.
2.) Use a special Python server that remains in memory, and call
it from compiled C code. For example, the C code communicates with this
server over pipes or TCP (or via special files, with the result coming
back in another file); a rough sketch of this is shown below.
You can improve this server by splitting the work into Python
subprocesses that stay alive for X minutes.
You have one control process (py) which, like Apache,
communicates with the subprocesses, kills them after a timeout, and starts
new ones if needed.
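A minimal sketch of the resident-server idea in 2.), assuming a made-up
line-based protocol over TCP (the port and the protocol are only for
illustration):

    import SocketServer  # spelled "socketserver" on Python 3

    class CommandHandler(SocketServer.StreamRequestHandler):
        def handle(self):
            # Read one request line, do the real work here, write one reply line.
            request = self.rfile.readline().strip()
            self.wfile.write("echo: %s\n" % request)

    if __name__ == "__main__":
        # The server stays in memory; clients connect, send a line, read a line,
        # and never pay any interpreter start-up cost themselves.
        server = SocketServer.ThreadingTCPServer(("127.0.0.1", 9999), CommandHandler)
        server.serve_forever()

The client can then be a tiny compiled C helper (or even nc 127.0.0.1 9999)
that just writes a line and reads the answer.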

dd

James Mills wrote:
 
Terry Reedy

David said:
Any command line based on python is a real example of that problem.

No it is not.
The specific problem that you wrote, and that I responded to, was

"Not if the startup is the main cost for a command you need to repeat
many times. "

in a short enough period so that the startup overhead was a significant
fraction of the total time and therefore a burden.
 
Terry Reedy

James said:
So instead of coming up with arbitary problems, why don't
we come up with solutions for "Improving Interpreter Startup Speeds" ?

The current developers, most of whom use Python daily, are aware that
faster startup would be better. 2.6 and 3.0 start up quicker because
some devs combed through the list of startup imports to see what
could be removed (or, in one case, I believe, consolidated). Some were.
Anyone who is still itching on this subject can seek further
improvements and, if successful, submit a patch.

Or, one could check the Python wiki for a StartUpTime page and see if
one needs to be added or improved/updated with information from the
PyDev list archives to make it easier for a new developer to get up to
speed on what has already been done in this area and what might be done.
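For anyone who wants to poke at this themselves, the set of modules already
loaded at interpreter startup is easy to inspect (a generic check, not the
exact procedure the devs used):

    # Print the modules that are already imported before any user code runs.
    # Compare a normal run with "python -S" to see how much comes from site.py.
    import sys

    loaded = sorted(name for name, mod in sys.modules.items() if mod is not None)
    print("%d modules loaded at startup" % len(loaded))
    for name in loaded:
        print(name)

Running the interpreter with -v also prints a line for each import as it
happens, which makes unexpectedly expensive ones easy to spot.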

tjr
 
Lie

It's not optimal but it is very common (CGI for example).

CGI? When you're talking about CGI, network traffic is simply the
biggest bottleneck, not something like Python interpreter startup
time. Also, welcome to the 21st century, where CGI is considered an
outdated protocol.
 
James Mills

CGI? When you're talking about CGI, network traffic is simply the
biggest bottleneck, not something like Python interpreter startup
time. Also, welcome to the 21st century, where CGI is considered an
outdated protocol.

That's right. That's why we have WSGI. That's
why we built mod_wsgi for Apache. Hell, that's
why we actually have really nice web frameworks
such as CherryPy, Pylons, Paste, etc. They
perform pretty damn well!
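For anyone who hasn't seen WSGI: the point is that the interpreter and your
code are loaded once by a long-running server and then reused for every
request, so the start-up cost is paid once. A minimal application looks
roughly like this (served here with the stdlib reference server purely for
illustration):

    def application(environ, start_response):
        # Loaded once by the server (mod_wsgi, CherryPy, Paste, ...), then
        # called for every request with no further start-up cost.
        start_response("200 OK", [("Content-Type", "text/plain")])
        return ["Hello from a long-running Python process\n"]  # bytes on Python 3

    if __name__ == "__main__":
        from wsgiref.simple_server import make_server
        make_server("127.0.0.1", 8000, application).serve_forever()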

--JamesMills
 
bearophileHUGS

Terry Reedy:
The current developers, most of whom use Python daily, [...]

Thank you for bringing some light in this thread so filled with worse
than useless comments.

Bye,
bearophile
 
BJörn Lindqvist

2008/10/27 James Mills said:
Fair enough. But still:
0.5s cold startup is fast enough.
0.08s warm startup is fast enough.

Often "fast enough" is "fast enough"

Nope, when it comes to start up speed the only thing that is fast
enough is "instantly." :) For example, if I write a GUI text editor in
Python, the total cold start up time might be 1500 ms on a cold
system. 750 ms for the interpreter and 750 ms for the app itself.
However, if I also have other processes competing for IO, torrent
downloads or compilations for example, the start up time grows
proportional to the disk load. For example, if there is 50% constant
disk load, my app will start in 1.5 / (1 - 0.5) = 3 seconds (in the
best case, assuming IO access is allocated as efficiently as possible
when the number of processes grows, which it isn't). If the load is
75%, the start time becomes 1.5 / (1 - 0.75) = 6 seconds.

Now, if the Python interpreter's start-up time were 200 ms, my app's start-up
time with 75% disk load would become (0.2 + 0.75) / (1 - 0.75) = 3.8
seconds, which is significantly better.
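In other words, with this simplified model the start-up time scales as
base_time / (1 - disk_load), so the numbers above can be rechecked (or redone
for other loads) like this:

    def startup_under_load(base_seconds, disk_load):
        # Best-case start-up time when a fraction `disk_load` of the disk
        # bandwidth is taken by other processes (the simplified model above).
        return base_seconds / (1.0 - disk_load)

    print(startup_under_load(1.5, 0.5))          # 3.0 s: 750 ms + 750 ms at 50% load
    print(startup_under_load(1.5, 0.75))         # 6.0 s: same app at 75% load
    print(startup_under_load(0.2 + 0.75, 0.75))  # 3.8 s: 200 ms interpreter at 75% load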
 
Paul Boddie

Terry Reedy:
The current developers, most of whom use Python daily, [...]

Thank you for bringing some light in this thread so filled with worse
than useless comments.

Indeed. Observing that CGI is old-fashioned, aside from not really
helping people who choose or are obliged to use that technology,
doesn't detract from arguments noting various other areas where start-
up performance is worth improving (in build environments, for example,
as someone suggested), nor does it address the more basic issue of why
Python issues as many system calls as it does when starting up,
something which has actually been looked at by the core developers
(indicating that it isn't a waste of time as one individual claimed)
as well as by developers in other projects where application start-up
time has been criticised.

Although people are using multi-GHz CPUs on the desktop, there are
environments where it is undesirable for Python to go sniffing around
the filesystem just for the fun of it. In fact, various embedded
projects have employed some kind of daemon for Python in order to get
Python-based programs started quickly enough - something that I'm sure
plenty of people would ridicule if, say, Java were involved. The
better solution would just involve improving the obvious: the start-up
performance.

Paul
 
Steve Holden

BJörn Lindqvist said:
Nope, when it comes to start up speed the only thing that is fast
enough is "instantly." :) For example, if I write a GUI text editor in
Python, the total cold start up time might be 1500 ms on a cold
system. 750 ms for the interpreter and 750 ms for the app itself.
However, if I also have other processes competing for IO, torrent
downloads or compilations for example, the start up time grows
proportional to the disk load. For example, if there is 50% constant
disk load, my app will start in 1.5 / (1 - 0.5) = 3 seconds (in the
best case, assuming IO access is allocated as efficiently as possible
when the number of processes grows, which it isn't). If the load is
75%, the start time becomes 1.5 / (1 - 0.75) = 6 seconds.

Now, if the Python interpreter's start-up time were 200 ms, my app's start-up
time with 75% disk load would become (0.2 + 0.75) / (1 - 0.75) = 3.8
seconds, which is significantly better.

But still not fast enough to be regarded as even close to "instant", so
you appear to be fiddling while Rome burns ...

regards
Steve
 
