> that would be fairly difficult - consider that all the open file handles, db
> connections, stdin, stdout, and stderr would be __shared__. for multiple
> processes to all use them would be a disaster. in order to be able to fork a
> rails process robustly one would need to track a huge number of resources and
> de/re-allocate them in the child.
Interesting. I had thought about db connections but hadn't followed the
reasoning through to file descriptors and other shared resources. True, this
might be quite tricky and problematic, but, as khaines said, not necessarily a
showstopper.
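Just to make the descriptor point concrete, here is a minimal sketch (the /tmp
path is only for illustration) of how a forked child keeps writing through the
very file handle its parent opened:

# a forked child inherits the parent's open file handle, so both
# processes write through the same underlying descriptor
f = File.open("/tmp/fork_fd_demo.log", "w")   # path chosen just for this demo
f.sync = true          # avoid duplicated buffered output after the fork

if (pid = Process.fork).nil?
  f.puts "child  #{Process.pid}: writing via the inherited descriptor"
  exit!                # don't run the parent's at_exit handlers
else
  f.puts "parent #{Process.pid}: writing via the original descriptor"
  Process.wait(pid)
end
f.close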
> any decent kernel is going to share library pages in memory anyhow - they're
Indeed, I imagine (hope) that the code of a .so file would be shared between
processes. But I very much doubt the same holds true for .rb files. And I doubt
that compiled modules are more than a small fraction of the code.
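For the curious: on Linux you can get a rough idea of how much of a process's
memory is actually shared by summing the Shared_* lines of /proc/self/smaps.
A quick sketch, assuming a kernel recent enough to expose that file:

# rough, Linux-specific sketch: total up shared vs. private pages
# as reported in /proc/self/smaps
shared = priv = 0
File.foreach("/proc/self/smaps") do |line|
  shared += $1.to_i if line =~ /^Shared_(?:Clean|Dirty):\s+(\d+) kB/
  priv   += $1.to_i if line =~ /^Private_(?:Clean|Dirty):\s+(\d+) kB/
end
puts "shared: #{shared} kB, private: #{priv} kB"

Running it inside a forked Rails child should show how much of the loaded code
really gets shared.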
> mmap'd in iff binary - and files share the page cache, so it's unclear to me
Page cache... isn't that an entirely different topic? Access to shared data is
an open and shut case. Here I'm mostly interested in the CPU & memory cost of
the initialization phase, i.e. loading the *code*, not the data.
> what advantage this would give? not that it's a bad idea, but it seems very
> difficult to do in a generic way?
It may be difficult to do in a generic way, but the advantages seem obvious to
me. Hey, why tell when you can show? Please compare the behavior of:
require "/path/to/rails/app/config/environment.rb"
20.times do
break if Process.fork.nil?
end
sleep 10
vs:
# fork first, then require: each of the 21 processes loads the Rails
# environment from scratch
20.times do
  break if Process.fork.nil?   # fork returns nil in the child
end
require "/path/to/rails/app/config/environment.rb"
sleep 10
and tell me which one you like better ;-)
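(If you want numbers rather than a gut feeling: time how long each variant
takes before all 21 processes are asleep, and while they sleep watch overall
memory with something like "free -m" - per-process RSS double-counts shared
pages, so the system-wide figure is the more honest comparison.)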
Daniel