You could say that in C also, and then for specific implementations allow
an override to alter it for machine-specific quirks (a 9-bit word, for
example).
If specific implementations can override standards-defined behaviour,
then the behaviour is no longer standard! You can't have a "standard"
that says "int is always 32-bit" and then say "but for /this/ particular
compiler, int is 16-bit". You have two choices - you can do as D does,
and specify that "int is always 32-bit" and therefore the language is
not suitable for smaller processors, or you can do as C does and say the
choice is "implementation dependent". A feature is /either/ fully
defined and specified, /or/ it is implementation dependent - it cannot
be both.
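To make that concrete, here is a rough sketch (an illustration only) of
what the C standards actually pin down: minimum ranges such as
INT_MAX >= 32767 and LONG_MAX >= 2147483647, with the exact widths left
to the implementation, and <limits.h> reporting what your particular
implementation chose:

  #include <limits.h>
  #include <stdio.h>

  int main(void)
  {
      /* The standard guarantees only minimum ranges; the actual
         values below are implementation dependent. */
      printf("CHAR_BIT    = %d\n", CHAR_BIT);
      printf("sizeof(int) = %u bytes\n", (unsigned)sizeof(int));
      printf("INT_MAX     = %d\n", INT_MAX);
      printf("LONG_MAX    = %ld\n", LONG_MAX);
      return 0;
  }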
So? I'm developing code in a computer language. I EXPECT it to behave
a certain way. I don't expect to have to bend to the peculiarities of
a particular machine. In fact, I should not even care about them in
most cases.
The idea that a C developer should know the mechanics of the implementation
of the language in ALL cases is crazy. And relying upon the idea that a
particular quantity must be "at least" so many bits is insane.
The whole point of the C standards is that programmers know which
parts are fixed in the specs, and which are variable. They can rely on
the fixed parts. For /some/ code, you might want to rely on
implementation-specific features - not all code has to be portable.
In the particular case of bit sizes, it is often perfectly reasonable to
work with types that are defined as "at least 16 bits". If you are
counting up to 1000, you don't care if the variable has a max of 32K or
2G. If you need bigger numbers, you can use "long int" and know that it
is at least 32 bits. If you need specific sizes (I often do in my
work), you can use types like int16_t and uint32_t. The system is
clear, flexible, portable, and works well on big and small systems. Of
course, it all relies somewhat on the programmer being competent - but
that applies to all programming tasks.
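As a sketch (purely illustrative, with made-up variable names), that
scheme looks like this in practice:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      int count;           /* "at least 16 bits" - plenty for counting to 1000 */
      long int total = 0;  /* "at least 32 bits" - fine for bigger sums        */
      uint32_t crc = 0;    /* exactly 32 bits, for when the size must be exact */

      for (count = 0; count < 1000; count++)
          total += count;

      crc ^= (uint32_t)total;  /* illustrative use of the exact-width type */
      printf("total = %ld, crc = %lu\n", total, (unsigned long)crc);
      return 0;
  }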
Fixed, rigid requirements should be defined and adhered to. And the CPU
designers, should they wish to offer a new product, will look at their
audience and determine whether a CPU whose smallest readable quantity is
32 bits is actually a good idea or not. The market would sort that out
pretty quickly.
Apparently you have /no/ concept of how the processor market works. You
live in your little world of x86, with brief excursions to ARM. Did you
know that it is only a few years ago that shipments of 8-bit cores
exceeded those of 4-bit cores? And that there are still far more 8-bit
cores sold than 32-bit? As for cpus that cannot access 8-bit or 16-bit
data, these are almost always DSPs - and there is good reason for that
behaviour. The manufacturers will continue to produce them, and
designers will continue to use them - because they give better value for
money (or power, or space) than alternative solutions. And they will
continue to program them in C, because C works fine with such cores.
That is the way the market works.
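(On such cores, incidentally, CHAR_BIT is typically 16 or 32 rather
than 8 - and C copes, because CHAR_BIT is only required to be at least
8. A module that genuinely needs octets can simply refuse to build on
them - a rough sketch:)

  #include <limits.h>

  #if CHAR_BIT != 8
  #error "This module assumes 8-bit bytes"
  #endif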
One thing that strikes me in your writing here is that you seem to
believe that there is such a thing as an "absolute" specification - that
you can define your language and say /exactly/ how it will always work.
This is nonsense. You can give more rigid specifications than the C
standards do - but there are no absolutes here. There are /always/
aspects of the language that will be different for different compilers,
different options, different targets. Once you understand this, I think
you will get on a little better.
So what? It is the responsibility of those who implement C for those
CPUs to figure out the mechanics. I'm writing for C, not for a machine.
It is /precisely/ because C does not define these details that you are
able to write for C and not for the machine. If C specified
requirements tuned for a particular processor type, then you would be
writing for that processor.
I'm frankly amazed C developers have tolerated this.
Flexibility can still exist when you have rigid specs. At that point,
the flexibility simply exists outside of the specs, in the form of
overrides which render parts of it contrary to specified behavior.
If you allow "overrides", you no longer have rigid specs. You have
"implementation dependent" behaviour. This is precisely what the C
standards do. Sometimes I think the C standards could restrict some
points to a few multiple-choice options rather than giving wider freedom
(they already have multiple choices for the format of signed integers),
but it is certainly better
that they say a particular point is "implementation dependent" than if
they were to say "/this/ is how to do it", and then allow
implementations to override that specification.
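For reference, the standards allow two's complement, ones' complement
and sign-and-magnitude for signed integers. If some piece of code really
depends on one of them, you can pin it down at compile time - a sketch
using the old array-size trick:

  #include <limits.h>

  /* Fails to compile unless INT_MIN == -INT_MAX - 1,
     i.e. unless the target uses two's complement. */
  typedef char assert_twos_complement[(INT_MIN + INT_MAX == -1) ? 1 : -1];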
On a platform without
a particular feature (no integer engine), the underlying mechanics mean
that everything must be done with floating point. The C developer should
never see ANY side effect of that in their properly written C program.
Yes, and that is what happens today with C. So what is your point?
The compiler should hide every ounce of that away so there are no
variations at all.
No, it should not hide /everything/. You should be free to develop
general portable code, and free to take advantage of particular
underlying architectures, depending on the sort of code you are writing.
C gives you that.
Communicating between machine X and machine Y through C's protocols in
that way should always yield correct results.
Yes, and C gives you that. You have to stick to the things that C
defines explicitly, but that's fine.
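To illustrate (a minimal sketch, with made-up function names, assuming
an octet-based target where uint8_t exists): if you pack data into bytes
in an explicitly defined order, rather than dumping a struct's in-memory
representation, machine X and machine Y agree on the result no matter
what their word sizes or byte orders are.

  #include <stdint.h>
  #include <stdio.h>

  /* Pack a 32-bit value into 4 octets, most significant first. */
  static void put_u32_be(uint8_t *out, uint32_t v)
  {
      out[0] = (uint8_t)(v >> 24);
      out[1] = (uint8_t)(v >> 16);
      out[2] = (uint8_t)(v >> 8);
      out[3] = (uint8_t)v;
  }

  /* Reassemble it on the other side, whatever that machine's endianness. */
  static uint32_t get_u32_be(const uint8_t *in)
  {
      return ((uint32_t)in[0] << 24) | ((uint32_t)in[1] << 16)
           | ((uint32_t)in[2] << 8)  |  (uint32_t)in[3];
  }

  int main(void)
  {
      uint8_t buf[4];
      put_u32_be(buf, 123456789u);
      printf("%lu\n", (unsigned long)get_u32_be(buf));  /* prints 123456789 */
      return 0;
  }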