[...]
I was a contractor for many, many years; I've
worked in a lot of different places (in different domains---I'm
a telecoms specialist, but I currently work in an investment
bank). And I can't think of a single case where the rule wasn't
int, unless there were strong technical reasons otherwise. That
seems to be the general attitude, adopted in most cases, I'm
sure, without the technical aspects ever having been considered.
One learns C++ from Stroustrup, and Stroustrup uses int
everywhere (without ever giving a reason, as far as I know; I
suspect that he does so simply because the people he learned C
from, Kernighan in particular, did so).
I postulated that also: I think they either haven't thought of
trying the alternative, or dismissed it too soon because they were
too close to the technology. Then again, I switched over when I
was doing C and didn't know all the ramifications (still don't,
but close enough), yet found the change worthwhile and never
really considered going back to preferring signed. So I'm kind of
the opposite.
When I first learned C, it was int everywhere. (I'm not even
sure that unsigned existed back then.) At some point, I started
using unsigned for things that couldn't be negative. That one
extra bit of range was important on the 16-bit machines of the
time. I switched back some years later, when I moved to 32-bit
machines---using unsigned didn't have any real advantages, and
was just one more thing to keep in mind. Plus, my colleagues
always expected bit operations when they saw unsigned.
[...]
Well there's more than that, such as loop counter rollover.
If you insist on using a for loop. When working down, I'd
normally write:
    int n = top;
    while ( n > 0 ) {
        -- n;
        //  ...
    }
Which, of course, works equally well with unsigned.
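To make that concrete (just a sketch; "top" here is whatever count
you're working down from):

    unsigned n = top;
    while ( n > 0 ) {
        -- n;
        //  ...
    }

    //  The naive descending for loop, on the other hand, never
    //  terminates with an unsigned counter, because it wraps around
    //  instead of going negative:
    for ( unsigned i = top - 1; i >= 0; -- i ) {    //  i >= 0 is always true
        //  ...
    }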
Nothing wrappers wouldn't solve. I'm not sure how many cases
get missed by compilers that give warnings for signed/unsigned
conversions.
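For what it's worth, the sort of case those warnings do catch
(with -Wsign-compare or its equivalent; the names here are just
for illustration):

    #include <vector>

    void f( std::vector< int > const& v )
    {
        for ( int i = 0; i < v.size(); ++ i ) {    //  int vs. size_t:
                                                    //  typical warning
            //  ...
        }
    }

What they generally don't flag is unsigned arithmetic that wraps,
e.g. v.size() - 1 when v is empty, since that's well-defined
behavior rather than a conversion.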
Wrappers have their own problems. For starters, someone new who
reads the code will (again) assume that something special is
going on.
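Just to be concrete about the sort of wrapper at issue (a minimal
sketch; the name Index is made up):

    #include <cassert>

    //  Holds an unsigned value, but checks the signed-to-unsigned
    //  boundary at construction, instead of silently converting.
    class Index
    {
        unsigned myValue;
    public:
        explicit Index( int value )
            : myValue( static_cast< unsigned >( value ) )
        {
            assert( value >= 0 );
        }
        operator unsigned() const { return myValue; }
    };

And a reader who hasn't seen the convention before will stop and
wonder why a plain unsigned (or int) wasn't good enough.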
You make it sound like unsigned was a compromise. How so?
Requiring size_t to be unsigned was a compromise. The fact that
you need both ptrdiff_t and size_t was recognized as a serious
problem---almost a design flaw. But the alternatives seemed
worse (back then, when the most frequently used machines were 16
bits).
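Roughly the problem, as I understand it (a sketch):

    #include <cstddef>

    std::size_t f( int* begin, int* end )
    {
        //  The difference of two pointers is a std::ptrdiff_t
        //  (signed), since it can legitimately be negative...
        std::ptrdiff_t diff = end - begin;

        //  ...but sizes (sizeof, container sizes, indexes) are
        //  std::size_t (unsigned), so the two types end up mixed in
        //  comparisons and arithmetic all over the place.
        return static_cast< std::size_t >( diff );
    }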
Well only for those who hold BS as infallible.
You're misreading my argument. I'm not saying that it's better
because BS does it. I'm saying that BS has a large influence,
and because he does it, a lot of other people do it; it is the
"expected" idiom for most programmers. If there were really
strong arguments for something else, then do so. But the burden
of proof is on the other side: most programmers will have their
expectations set by BS. Similar reasoning has made me drop my
insistence on not using .h for C++ headers, for example. In that
case, there are fairly strong technical arguments against it.
But not enough to justify bucking the expectations of the
everyday programmer.
I've seen the cfront code, and that's a mess, so I hope he has
improved since then (surely C++ has helped him a lot).
I've seen Unix kernel source code, and it's worse. On the
other hand, I've seen a lot of code written elsewhere, at that
time, which makes both look really nice.
You were doing great until that last "go with the crowd" statement!
(Unless you meant within an existing project, rather than a new
one where either alternative can be chosen; i.e., consistency is
of course important.) Also, I'm not sure whether you consider
richness of semantics a technical thing: using signed, you have
one thing, while using unsigned you have "divided and conquered".
Yes, but the division is mainly with regard to what the reader
understands. If you have a project with detailed coding
guidelines that everyone really understands, then you can choose
an arbitrary semantic meaning for the distinction
signed/unsigned. But in general, most programmers will
understand the distinction to be bit-wise operations or not. So
that's the distinction you go with.
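Concretely, something like this (the names are invented for the
example):

    //  The distinction most readers will expect: int for quantities,
    //  unsigned for values manipulated as bits.
    void example()
    {
        int      widgetCount = 0;         //  a count, even though it
                                          //  can never be negative
        unsigned statusFlags = 0x0004;    //  bit flags
        statusFlags |= 0x0010;            //  set a bit
        statusFlags &= ~0x0004u;          //  clear one
        ++ widgetCount;
    }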