[snips]
True, but my point is:
(1) C introduces entirely new classes of bugs
and
(2) C makes bugs more likely, regardless of the programmer's skill
and
(3) C makes it much harder to root out and fix bugs
Frankly, I think that's a load of hogwash; C makes coding no more or less
likely to produce bugs than any other language - provided the coding is
done by a competent coder.
For example, in C, a competent coder knows you don't use gets; you use
the scanf family very carefully, or you use fgets with gay abandon,
knowing full well that, short of a serious library bug, you simply aren't
going to have to cope with a buffer overflow on input.
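A minimal sketch of that sort of input handling - the buffer size and the
newline stripping are arbitrary choices on my part, not anything sacred:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[256];     /* fixed-size buffer; fgets will not overrun it */

        if (fgets(line, sizeof line, stdin) != NULL)
        {
            line[strcspn(line, "\n")] = '\0';  /* drop the newline, if fgets kept one */
            printf("read: %s\n", line);
        }
        return 0;
    }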
Similarly, when handling strings, a competent coder knows you have to add
1 to the length to hold the terminating \0; so either he does so by
habit, or he writes functions that do it automatically - dupstr, for
example, could duplicate a string and handle the extra byte's space
itself, and the coder would never again have to worry about adding the
one or not.
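A dupstr along those lines might look something like this - just a
sketch, with the error handling kept minimal:

    #include <stdlib.h>
    #include <string.h>

    /* Duplicate a string; the +1 for the terminating '\0' lives here, and
       only here, so callers never have to remember it. */
    char *dupstr(const char *s)
    {
        size_t len = strlen(s) + 1;
        char *copy = malloc(len);

        if (copy != NULL)
            memcpy(copy, s, len);
        return copy;
    }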
Where I find you run into the most problems is with code that simply isn't
properly thought out, combined with coders who aren't quite as good as
they think they are.
Perhaps the classic example of this is the ever-repeating idea that, on
freeing a pointer, you should subsequently "NULL it out"; that is, the
sequence of code should read free(p); p = NULL;. The idea is that you
can then tell, easily, whether you're re-using a freed pointer.
This seems like a good idea until you realise there could have been an
intervening q = p; and that q will _not_ be NULL, but will also not be
valid. The cutesy "NULLing out" notion falls apart, and proper code
design would have rendered it irrelevant in the first place: if the code
is well designed, you don't need to worry whether p was freed or not,
you _know_ whether it was freed or not.
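To make the aliasing problem concrete, a contrived sketch:

    #include <stdlib.h>

    int main(void)
    {
        char *p = malloc(100);
        char *q = p;        /* the intervening alias */

        free(p);
        p = NULL;           /* p now tests as "freed"... */

        if (q != NULL)      /* ...but q still compares non-NULL, while
                               pointing at freed memory */
        {
            /* dereferencing q here would be undefined behaviour */
        }
        return 0;
    }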
There are some other issues - using the wrong types, assuming pointers
and ints are the same size or can be trivially converted, that sort of
thing - which may be specific to C, but a competent coder generally
isn't going to make those mistakes, any more than a competent
electrician is going to get himself zapped by poking a knife into a
possibly live socket; these are things he learns to avoid as a matter of
habit.
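For what it's worth, if a pointer genuinely has to round-trip through an
integer, C99's intptr_t is the type sized for the job; this sketch
assumes <stdint.h> provides it, which an implementation isn't strictly
required to do:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int x = 42;
        int *p = &x;

        /* Wrong on many platforms: an int may be narrower than a pointer.
           int bad = (int)p; */

        intptr_t ip = (intptr_t)p;      /* wide enough to hold the pointer */
        int *back = (int *)ip;          /* converts back to an equal pointer */

        printf("%d\n", *back);          /* prints 42 */
        return 0;
    }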
This leaves other issues - algorithmic failures, for example - which
apply not to C alone, but to all languages. Forgetting to calculate a
checksum, misdesigning an encryption function, or failing to check an
error code: these can happen in any language, not just C.
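The error-code case in C is usually nothing more exotic than forgetting
to test a return value - the file name here is just a placeholder:

    #include <stdio.h>

    int main(void)
    {
        FILE *fp = fopen("data.txt", "r");  /* placeholder file name */

        if (fp == NULL)                     /* the check so often skipped */
        {
            perror("data.txt");
            return 1;
        }

        /* ... use fp ... */
        fclose(fp);
        return 0;
    }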
Sure, C has its pitfalls, but so do all languages. If you're any good at
the language, you know what the pitfalls are and they generally don't
affect you, because you simply avoid the situations where they'd arise, as
a matter of habit.