James Mills
It's not optimal but it is very common (CGI for example).
Which is why we (The Python Community) created WSGI and mod_wsgi. C'mon guys, these "problems" are a bit old and outdated.
--JamesMills
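For context, WSGI sidesteps the per-request interpreter startup that plain CGI pays: the application is a long-lived callable served by a persistent process. A minimal sketch using only the standard library's wsgiref (the app name and port are illustrative, not from the thread):

    # Minimal WSGI application served by the stdlib reference server.
    # Unlike CGI, the interpreter starts once and handles many requests.
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        # environ carries the request data; start_response sends status/headers.
        start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
        return [b"Hello from a persistent WSGI process\n"]

    if __name__ == "__main__":
        with make_server("", 8000, app) as server:  # port 8000 is arbitrary
            server.serve_forever()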
It's not optimal but it is very common (CGI for example).
+1 This thread is stupid and pointless.
Even for a so-called cold startup 0.5s is fast enough!
Depends on the tool: build tools and source control tools are examples where it matters (especially when you start interfacing them with IDEs or editors). Having fast command-line tools is an important feature of UNIX, and if you want to insert a Python-based tool into a given pipeline, it can hurt if the pipeline is run regularly.
I don't see the need to be rude.
And I DO care about Python's startup time and memory footprint, and others do too. Even if it's a stupid thing (for you).
David said: Any command-line tool based on Python is a real example of that problem.
James said: So instead of coming up with arbitrary problems, why don't we come up with solutions for "Improving Interpreter Startup Speeds"?
It's not optimal but it is very common (CGI for example).
CGI? When you're talking about CGI, network traffic is simply the biggest bottleneck, not something like Python interpreter startup time. Also, welcome to the 21st century, where CGI is considered an outdated protocol.
The current developers, most of whom use Python daily, [...]
2008/10/27 James Mills said: Fair enough. But still:
0.5s cold startup is fast enough.
0.08s warm startup is fast enough.
Often "fast enough" is "fast enough".
Terry Reedy:
The current developers, most of whom use Python daily, [...]
Thank you for bringing some light into this thread, so filled with worse-than-useless comments.
But still not fast enough to be regarded as even close to "instant".
Björn Lindqvist said: Nope, when it comes to start up speed the only thing that is fast
enough is "instantly." For example, if I write a GUI text editor in
Python, the total cold start up time might be 1500 ms on a cold
system. 750 ms for the interpreter and 750 ms for the app itself.
However, if I also have other processes competing for IO, torrent
downloads or compilations for example, the start up time grows
proportional to the disk load. For example, if there is 50% constant
disk load, my app will start in 1.5 / (1 - 0.5) = 3 seconds (in the
best case, assuming IO access is allocated as efficiently as possible
when the number of processes grows, which it isn't). If the load is
75%, the start time becomes 1.5 / (1 - 0.75) = 6 seconds.
Now if the Python interpreter's start up time was 200 ms, my app's start
up time with 75% disk load becomes (0.2 + 0.75) / (1 - 0.75) = 3.8
seconds which is significantly better.
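Björn's arithmetic follows a simple model: effective start-up time = base start-up time / (1 - disk load), under his stated assumption that IO is shared proportionally among processes. A small sketch of that model, rerunning the numbers from his post (the function name is mine):

    # Back-of-the-envelope model from the post: under a constant competing
    # disk load, an IO-bound start-up stretches by a factor of 1 / (1 - load).
    def startup_under_load(interpreter_s, app_s, disk_load):
        """Effective start-up time in seconds for a given fractional disk load."""
        if not 0 <= disk_load < 1:
            raise ValueError("disk_load must be in [0, 1)")
        return (interpreter_s + app_s) / (1 - disk_load)

    # Scenarios from the post: 750 ms interpreter + 750 ms app,
    # versus a hypothetical 200 ms interpreter start-up.
    for interp in (0.75, 0.2):
        for load in (0.0, 0.5, 0.75):
            t = startup_under_load(interp, 0.75, load)
            print(f"interpreter {interp:.2f}s, disk load {load:.0%}: {t:.1f}s")

Running it reproduces the figures quoted above: 3 s and 6 s at 50% and 75% load with a 750 ms interpreter, and 3.8 s at 75% load with a 200 ms interpreter.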