Chad Perrin
2 years x 1 developer @ $70k = 58x Dell PowerEdge 860 Quad Core Xeon
X3210s
Job Security Rocks!
Oops. I should have read the responses before I posted my own cost
comparison.
I don't think it was a matter of not getting something working -- IIRC
CD Baby did *work* when it was in Rails. In reality, I think it was that
he didn't understand MVC, Ruby or Rails when he started the migration --
it just looked cool, so he went out and hired a Rails programmer to do it.
M. Edward (Ed) Borasky said:
"Complex scheduling algorithm" means different things to different
people. Is it slow because the algorithm sucks or slow because it's not
written in C/C++? What kind of scheduling is it -- combinatorial?
Because he was able to do it himself, and then both _read_ the code
and _rewrite_ it.
Please read http://www.zedshaw.com/essays/c2i2_hypothesis.html,
Gadfly Festival section,
and everything Chad Fowler has written on the Big Rewrite. In this case
PHP was (roughly speaking, and not meant at all harshly) an alternative
to an Excel spreadsheet with VBA macros -- the issue was ownership, I
guess. Nothing critical.
He decided to write it _himself_. That's the main piece.
Chad said:
Yes, he decided to write it himself -- after giving up on Rails, for
reasons that, as far as I'm aware, relate to the fact that it wasn't
done in Rails after two years.
Actually, if I can be allowed to read between the lines, he went back
when his Ruby mentor left and he realized that he was not able to do it
in Ruby. He went back to what he knew when he was left on his own. He
sort of says that in the post.
Nope.
I go with Dave Thomas's verbiage "Ruby stays out of your way". That says
it all - dynamic typing, clear simple statements, endless extensibility,
and realistic scaling, all in a nutshell.
That's not scaling! (Okaaay, that's only one aspect of scaling!)
How did your Java design itself scale? The rate you add new features -
did it go up or down over time? _That's_ scaling. If the rate doesn't
slow down, you have time to tune your code to speed it up and handle
more users...
2 years to rebuild in Rails?! How?!
Simple. You can't force an existing database structure onto a framework
whose ORM expects its own conventions -- it doesn't work well, if it
works at all. Migrating the data itself is the easy part.
Chad said:
Assuming about an 80k salary and a 2,000 dollar server, a server is worth
about 50 hours of programmer time.
I just figured I'd provide a simple starting place for comparing the cost
of software development with that of hardware upgrades.
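The arithmetic behind that starting place is simple enough to sketch. A
back-of-the-envelope version, using only the figures from the post above
(an $80k salary, roughly 2,000 working hours per year, a $2,000 server):

```ruby
# Back-of-the-envelope cost comparison from the post: how many
# programmer-hours does one server cost?
salary         = 80_000.0   # dollars per year, per the post
hours_per_year = 2_000      # ~40 hours/week * 50 weeks
server_cost    = 2_000.0    # dollars, per the post

hourly_rate  = salary / hours_per_year         # 40.0 dollars/hour
server_hours = (server_cost / hourly_rate).round

puts server_hours   # prints 50
```

So a week and a quarter of one programmer's time buys a server, before
any of the ongoing costs discussed below.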
Charles said:
I find this perspective puzzling. In most large datacenters, the big
cost of operation is neither the cost of the servers nor the cost of the
development time to put code on them; it's the peripheral electricity,
administration, and cooling costs once the written application must be
deployed to thousands of users.
An application that scales poorly will require more hardware. Hardware
is cheap, but power and administrative resources are not. If you need 10
servers to run a poorly-scaling language/platform versus some smaller
number of servers to run other "faster/more scalable"
languages/platforms, you're paying a continuously higher cost to keep
those servers running. Better scaling means fewer servers and lower
continuous costs.
Even the most inexpensive and quickly-developed application's savings
will be completely overshadowed if deployment to a large datacenter
results in unreasonably high month-on-month expenses.
- Charlie
Thank you!! It's about time somebody put a dollar figure on the cost of
poor scalability and highlighted the nonsense of "adding servers is
cheaper than hiring programmers." They are two entirely different
economic propositions.
That's true. However, very roughly, compute resource can scale about
linearly with compute requirement.
Alternatively, you can reduce the compute requirement by having a more
complex software system.
What about Amdahl's law?
While it's true that very simple systems can perform badly because
they use poor algorithms and/or do not make dynamic optimizations,
more complex software generally means increased computational
requirements.
It definitely is. One aspect of Ruby that hinders scaling is the
absence of native threads IMHO. On the other hand, mechanisms are
provided for IPC (DRb for example) which are easy to use and thus may be
counted as compensating at least partially for the lack of native threading.
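To make the DRb point concrete, here is a minimal sketch. The `Counter`
service is a made-up example, and the server and client would normally
run in separate processes; they share one process here only so the
snippet is self-contained:

```ruby
require 'drb/drb'

# Hypothetical service object to be shared over DRb.
class Counter
  def initialize; @n = 0; end
  def increment;  @n += 1; end
end

# Port 0 asks the OS for any free port; DRb.uri reports the actual URI.
DRb.start_service('druby://localhost:0', Counter.new)

# The client holds a proxy; method calls are marshalled over the wire.
client = DRbObject.new_with_uri(DRb.uri)
client.increment
value = client.increment
puts value          # prints 2

DRb.stop_service
```

Since each DRb server is its own OS process, several of them can run on
separate cores (or separate machines) and cooperate this way, which is
the partial compensation for green threads mentioned above.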
IMHO this is not scaling (well, at least not if you follow common usage)
but extensibility or flexibility of the design which translates into
developer efficiency. Which does not say this is superfluous or the
wrong measure, not at all. I just think "scalability" is the
wrong term here.
What about it? Unless you're writing software that doesn't scale with
the hardware, more hardware means linear scaling, assuming bandwidth
upgrades. If bandwidth upgrades top out, you've got a bottleneck no
amount of hardware purchasing or programmer time will ever solve.
I thought "complex" was a poor choice of term here, for the most part.
It was probably meant as a stand-in for "more work at streamlining the
design, combined with greater code cleverness, is needed to scale
without throwing hardware at the problem."
Amdahl's law is relevant because most software _can't_ be written to
scale entirely linearly with the hardware, because most computational
problems are limited in the amount of parallelism they admit. You may
have been fortunate enough to have been presented with a lot of
embarrassingly parallel problems to solve, but that isn't the norm.
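For anyone who hasn't run the numbers, Amdahl's law is worth seeing
concretely: if a fraction p of the work can be parallelized, the best
speedup on n processors is S(n) = 1 / ((1 - p) + p/n). The 95% figure
below is just an illustrative assumption:

```ruby
# Amdahl's law: upper bound on speedup when a fraction p of the work
# is parallelizable and n processors are available.
def amdahl_speedup(p, n)
  1.0 / ((1.0 - p) + p / n)
end

# Even a program that is 95% parallel gets well under 10x from 10 servers:
puts amdahl_speedup(0.95, 10).round(2)   # prints 6.9

# And no amount of hardware beats 1 / (1 - p):
puts amdahl_speedup(0.95, 1_000_000).round(2)   # approaches 20.0
```

Which is exactly why "just add servers" stops paying off once the serial
fraction dominates.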