Reposting since Google Groups seems to be having problems.
Hi,
I'm about to develop a new framework for my company's applications,
and my first decision point is what kind of strings to use: std::string
or classic C char*.
Performance in my system is quite important - it's not a real-time
system, but almost - and I'm concerned about std::string's performance
in terms of speed. No doubt the std implementation is a lot easier to
use, but I can't sacrifice speed.
I'm using Sun Workshop 6. A very basic test shows that processing with
std::string can be three times slower than using char*. Is there any
improvement in later versions?
Thanks,
Jorge Ortiz
First, speed is implementation-dependent -- both compiler and library.
Since I presume you don't want to change compilers, you may be able to
find a faster library implementation (e.g., STLPort, Dinkumware, etc.).
Also, you might try fiddling with the switches for your compiler. On
some compilers, if you don't specify an optimization level, the
compiler doesn't even inline functions, which is a speed killer for the
C++ standard library in general. For more on these sorts of concerns,
you may want to consult a newsgroup (or list or whatever) that is more
familiar with your particular compiler and library.
Second, we might question the validity of your test. If you didn't
factor in the extra (read: manual, error-prone, often tedious) work
you'll have to do with arrays to validate lengths, prevent buffer
overflows, allocate and deallocate, etc., then it might not be a fair
test. See this FAQ for some more thoughts on why standard containers
should be preferred over arrays:
http://www.parashift.com/c++-faq-lite/containers.html#faq-34.1
Finally, let me remind you to beware of premature optimization. Herb
Sutter reminds us of the rules for optimizing
(http://www.gotw.ca/publications/mill09.htm):
"1. Don't optimize early. 2. Don't optimize until you know that it's
needed. 3. Even then, don't optimize until you know *what* [is] needed,
and *where*.
"By and large, programmers--that includes you and me--are notoriously
bad at guessing the actual space/time performance bottlenecks in their
own code. If you don't have performance profiles or other empirical
evidence to guide you, you can easily spend days optimizing something
that doesn't need optimizing and that won't measurably affect runtime
space or time performance. What's even worse, however, is that when you
don't understand what needs optimizing you may actually end up
pessimizing (degrading your program) by saving a small cost while
unintentionally incurring a large cost. Once you've run performance
profiles and other tests, and you actually know that a particular
optimization will help you in your particular situation, then it's the
right time to optimize."
Cheers! --M