In general, humans are very bad at predicting which bits of code
will be the slow bits. Hence it's a good idea to write clean,
simple code, measure it, and only then decide where to optimize.
Google: "premature optimization"
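(To make the "measure it" part concrete, here's a minimal sketch using
the stdlib Benchmark module; the naive/smarter pair and the numbers are
made up purely for illustration:)

  require 'benchmark'

  # Two hypothetical implementations of the same task; the point is to
  # measure rather than guess which one matters.
  def naive_sum(n)
    (1..n).inject(0) { |acc, i| acc + i }
  end

  def smart_sum(n)
    n * (n + 1) / 2
  end

  Benchmark.bm(7) do |x|
    x.report("naive:") { 1_000.times { naive_sum(50_000) } }
    x.report("smart:") { 1_000.times { smart_sum(50_000) } }
  end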
Anecdote: my company works on real-time C++ 3D rendering software with
JS and Lua bindings to the runtime. Lua is not nearly as nice as
Ruby. To make matters harder for the scripters, we've chosen to do
certain things (for speed and to conserve memory) that make the
scripting even more verbose. These choices undoubtedly have had a
positive effect on performance, probably in the 1-2% FPS range. I
work very hard to keep the Lua I write as fast and lean as possible.
I made a change yesterday which took my (simple) presentation from
~890fps to ~910fps, and I was proud.
Then, yesterday, one of the C++ programmers found a naive code branch
in the C++ renderer. The net result was a 15-20x speed increase. (My
complex presentation went from 15fps to over 170fps on my machine.)
All the hard work and scripter-annoying decisions we made in the name
of speed have had nowhere NEAR as big an impact on speed as a single
10-minute algorithm change *in code that was already C++*.
Yet again, we discover (in real-life, real-world code) that choosing
the faster language helps a bit, but choosing the right algorithms is
gobs more important.
I would agree with what matthew says - write it all in Ruby, then find
out where you're being stupid (because we all are, now and then) and
fix the Ruby algorithms. THEN, if it's still not fast enough, find
the sticking points and rewrite them in C and see how that goes.
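(For the "find the sticking points" step, the stdlib profiler of that
era - require 'profile', or -rprofile on the command line - is the
zero-effort way to get a per-method time breakdown; the script name
here is just a placeholder:)

  ruby -rprofile my_script.rb    # prints a per-method time profile at exit

(It slows the run down a lot, but it's usually enough to show which
one or two methods actually deserve the C treatment.)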
I think the time you save by writing it all in Ruby should cover the
C conversion time (and then some), yielding a shorter development
cycle than writing a whole core library in C from the start. But then
that depends on how your Ruby skills compare to your C skills.