Douglas said:
It makes sense to me, and I suspect to others.
Just because current platforms share a certain
characteristic X doesn't mean that it is wise
to make one's programs *depend* on characteristic
X, especially when it is easy to avoid such
dependence.
This is not only a property of current platforms; it is extremely likely
to be a property of future platforms. If making a particular assumption
simplifies programs, and the assumption is technically reasonable and
does not unduly limit implementations, why not make it?
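
To make that concrete: under two's complement a simple mask tests parity
even for negative values, while code that must also run on ones'-complement
or sign-magnitude machines has to fall back on the remainder operator.
A minimal sketch, illustrative only:

#include <stdio.h>

/* Assumes two's complement: -3 is represented as ...11111101, so the
 * low bit reflects parity even for negative values. On a ones'-complement
 * machine, -3 is ...11111100 and this test would report "even". */
static int is_odd_assuming_twos_complement(int x)
{
    return x & 1;
}

/* Representation-independent version: correct under any of the three
 * signed representations the standard permits. */
static int is_odd_portable(int x)
{
    return x % 2 != 0;
}

int main(void)
{
    printf("%d %d\n", is_odd_assuming_twos_complement(-3), is_odd_portable(-3));
    return 0;
}
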
You have previously argued, if I remember correctly, that the technical
advantages of each signed integer representation in specific circumstances
justify supporting multiple representations. I disagree -- the advantages of
having just one kind of signed integer representation outweigh the relative
advantages of any particular one. Much the same applies to 8-bit bytes:
standardizing on a single definition of a byte has been more important
than which specific size was chosen.
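
To make the difference visible: the three representations the standard
currently permits can be told apart by a one-line constant expression, and
code that genuinely caters to all of them ends up carrying exactly this
kind of case analysis. A sketch, nothing more:

#include <stdio.h>

/* (-1 & 3) probes how negative values are represented:
 *   3 -> two's complement   (-1 is ...11111111)
 *   2 -> ones' complement   (-1 is ...11111110)
 *   1 -> sign and magnitude (-1 is 10...00001)
 * The result is implementation-defined, but it is constant on any given
 * implementation, so the dispatch is effectively settled at compile time. */
static const char *signed_representation(void)
{
    switch (-1 & 3) {
    case 3:  return "two's complement";
    case 2:  return "ones' complement";
    case 1:  return "sign and magnitude";
    default: return "something stranger";
    }
}

int main(void)
{
    printf("signed ints here use %s\n", signed_representation());
    return 0;
}
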
Douglas also said:
I recall when the GNU project was insisting that
it was reasonable for them to assume C ints were
exactly 32 bits wide.
Since no individual speaks for "the GNU project", I very much doubt it.
Anyone who did say that was simply wrong: it was not reasonable.
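
C99 already provides a way to state such a requirement explicitly instead
of assuming it of int; a sketch:

#include <inttypes.h>   /* includes <stdint.h> and the printf macros */
#include <stdio.h>

int main(void)
{
    /* int32_t is defined only where an exact-width 32-bit two's-complement
     * type with no padding exists; int_least32_t is always present and
     * guarantees at least 32 bits. Neither requires int itself to be
     * 32 bits wide. */
    int32_t       exact   = -123456789;
    int_least32_t atleast = -123456789;

    printf("%" PRId32 " %" PRIdLEAST32 "\n", exact, atleast);
    return 0;
}
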
Assuming two's complement (and 8-bit bytes), OTOH, is reasonable. The
differences between these assumptions lie in their technical merits and
in the range of real-world hardware platforms and ABIs that are
consistent with them.
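
Both assumptions are also cheap to make explicit, so that a program relying
on them fails to compile on an exotic target instead of silently
misbehaving; a sketch using only <limits.h>:

#include <limits.h>

#if CHAR_BIT != 8
#error "this code assumes 8-bit bytes"
#endif

/* Only two's complement can represent one more negative value than
 * positive; ones'-complement and sign-magnitude ranges are symmetric.
 * (This also rejects the exotic case of a two's-complement implementation
 * whose most negative pattern is a trap representation -- deliberately
 * conservative.) */
#if INT_MIN != -INT_MAX - 1
#error "this code assumes two's-complement int"
#endif

int main(void) { return 0; }
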
David Hopwood <david.nospam.hopwood@blueyonder.co.uk>