> You must live in an extremely skewed world -- there are very few
> projects left today where you can justify simply starting in the C
> language. About the only places where this makes sense are where
> you have no other compilers available, and it's either that or
> assembly.
I thought you were making some good points, right up until the part
where you said the above, and then I became convinced you had simply
had some sort of brain-ECC error. You *must* be joking.
If you want C++ and have the memory and processor to spare, go for
it. If you like Python or Ruby better, they're also readily
available. At this point, it seems like your "mission" is to form
a line of converts heading down the hall for the other languages.
If so, that's rather OT for clc.
> *I* want features, more safety, and I want a heck of a lot more
> performance than C gives me!
C runs screamingly fast on most hardware, and the only places I've
ever had to tweak it for better performance are where standard
library functions have been the bottleneck. That's not the
language's fault, that's the "libc" implementors'. Writing around
such bottlenecks if/when they become a factor is no big deal if you
are even moderately competent. Going to Python or one of your other
alternative languages certainly isn't going to help with
performance.
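That kind of workaround is often as simple as keeping the library
call out of the hot loop entirely. A minimal sketch (the function
names are mine, not from any program discussed here):

```c
#include <ctype.h>
#include <string.h>

/* Naive version: strlen() is re-evaluated on every iteration, so a
   single pass over the string can degrade to O(n^2) on long inputs
   if the compiler doesn't hoist the call. */
void upcase_naive(char *s)
{
    for (size_t i = 0; i < strlen(s); i++)
        s[i] = (char)toupper((unsigned char)s[i]);
}

/* Workaround: walk to the NUL once.  Same result, one pass, and no
   repeated library calls inside the loop. */
void upcase_fast(char *s)
{
    for (; *s != '\0'; s++)
        *s = (char)toupper((unsigned char)*s);
}
```

Whether the naive version actually hurts depends on the compiler;
some hoist the strlen() themselves, which is exactly why measuring
before rewriting matters.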
> And why am I resorting to inline assembly any time I want real
> performance out of my state-of-the-art C compiler?
Because it can't read your mind? Or because you haven't experimented
enough with how your compiler optimizes particular C constructs for
a given target CPU/OS combination? Maybe your target compiler isn't
as SOTA as you think it is. Real-world example: VC++6 (used as a C
compiler) on Windows vs. gcc on Linux. The Windows implementation
(identical source code) ran 30% slower for a tight loop using a
particular standard library call. Looking at how code was generated
on both platforms, the gcc compiler was smart enough to resort to
the appropriate MMX instructions when compiled with the right arch
and cpu flags. The MS compiler simply did not. A little inline
assembly in my own replacement for the offending library call for
WIN32 and the program performed basically identically on both
platforms. I normally wouldn't care about maximum performance per
se if it required inline assembly because a lot of applications
simply don't need it. In this case, performance was THE thing so
it mattered above all else.
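The shape of that fix is just conditional compilation around the
slow platform's call. The macro and function names below are
invented for illustration, not the actual code from the program in
question:

```c
#include <stddef.h>
#include <string.h>

/* Keep the portable libc call wherever the compiler already
   generates good code, and substitute a hand-tuned replacement only
   on the platform whose library is the bottleneck. */
static void buffer_fill(unsigned char *dst, unsigned char value,
                        size_t n)
{
#if defined(_WIN32) && defined(USE_HAND_TUNED_FILL)
    /* On the slow platform, this is where the inline-assembly
       (e.g. MMX) replacement would go; a plain loop stands in for
       it here because the asm syntax is compiler-specific. */
    for (size_t i = 0; i < n; i++)
        dst[i] = value;
#else
    memset(dst, value, n);  /* gcc emits vectorized code given the
                               right -march/-mcpu flags */
#endif
}
```

Either branch produces the same bytes; only the generated code
differs, which is the whole point of isolating the call behind one
function.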
> Why am I constantly re-implementing standard data structures,
> debug wrappers, and various other things that are well-known
> practice? At least the C++ people had the good sense to create
> the STL ...
Because you never started building your own library of ADTs, or
didn't want to obtain one elsewhere? I can't read your mind.
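For what it's worth, the sort of ADT that accumulates in such a
personal library is usually small. A sketch of one, with type and
function names of my own choosing:

```c
#include <stdlib.h>

/* A growable int vector -- the kind of thing that gets rewritten
   over and over unless it lives in a personal library. */
typedef struct {
    int    *data;
    size_t  len, cap;
} IntVec;

/* Returns 0 on success, -1 on allocation failure (vector left
   unchanged on failure). */
static int intvec_push(IntVec *v, int x)
{
    if (v->len == v->cap) {
        size_t ncap = v->cap ? v->cap * 2 : 8;
        int *p = realloc(v->data, ncap * sizeof *p);
        if (p == NULL)
            return -1;
        v->data = p;
        v->cap  = ncap;
    }
    v->data[v->len++] = x;
    return 0;
}
```

Write it once, debug it once, and the "constantly re-implementing"
complaint goes away.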
> For example, there is almost no C standard library function that I
> cannot rewrite, or respecify to improve both performance and
> program safety. Almost the entire stdlib is of a "first hack"
> quality, practically enforced by the standard.
Then don't use it. There are a lot of standard lib functions that I
won't use either. As you say, it's not hard to implement similar
functionality on your own, or obtain it from a freely available or
commercial library.
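As a concrete example of "respecifying" a standard function for
safety, here is a bounded copy in the style of BSD's strlcpy (not
ISO C; the name safe_copy is mine): it always NUL-terminates, and
its return value lets the caller detect truncation.

```c
#include <stddef.h>

/* Copies src into dst, writing at most dstsize bytes including the
   terminating NUL.  Returns the full length of src, so a return
   value >= dstsize means the copy was truncated.  Modeled on the
   BSD strlcpy() interface; a sketch, not a drop-in libc. */
size_t safe_copy(char *dst, const char *src, size_t dstsize)
{
    size_t i = 0;
    if (dstsize > 0) {
        for (; src[i] != '\0' && i + 1 < dstsize; i++)
            dst[i] = src[i];
        dst[i] = '\0';
    }
    while (src[i] != '\0')   /* count the rest of src */
        i++;
    return i;
}
```

Compare that with strcpy(), which cannot be handed a buffer size at
all -- exactly the sort of "first hack" interface being complained
about.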