The standard is violated purposefully by the inclusion of some kind of
command line switch which enables the alternate behavior.
Yes, you can. The standard says "int is always 32-bit" and the extension
says "when enabled, int is no longer 32-bit, but is now 16-bit."
The standard never changes. The extensions override the standard. Get it?
Yes, I get it - if you have overrides like this, you do not have a standard.
Sometimes I wonder whether or not English is your mother tongue - you
certainly seem to have different definitions for some words than the
rest of the world.
C's choice is a bad one. It is incorrect. It produces code which remains
in question until it has been tested on a particular implementation,
because there is no standard. Code that works perfectly well on 99
compilers might fail on the 100th because that particular compiler author
did something that the C "standard" allowed, through not having a
definition, yet which is inconsistent with other compilers. As such, my
code, because I use the behavior demonstrated by 99 out of 100 compilers,
cannot be known to work on any new compiler.
If the programmer writes code that depends on an assumption not
expressed in the C standards, then he is not writing valid C code. This
is no different from any other language.
For many of the implementation-defined points in the C standards, the
particular choices are fixed by the target platform, which often lets
you rely on some of these assumptions. And often there are ways to
check your assumptions or write code without needing them - compilers
have headers such as <limits.h> so that your code /will/ work with each
compiler.
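For example, here is a minimal sketch (my own, not from anyone's post)
of checking such an assumption at compile time with <limits.h>, rather
than hoping the implementation happens to match your expectations:

  #include <limits.h>
  #include <stdio.h>

  /* Refuse to compile on implementations that don't meet the
     assumption this (hypothetical) program relies on. */
  #if INT_MAX < 2147483647
  #error "This program assumes int is at least 32 bits"
  #endif

  int main(void)
  {
      printf("int occupies %d bits of storage here\n",
             (int)(sizeof(int) * CHAR_BIT));
      return 0;
  }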
Just because you don't know how to do such programming, does not mean C
does not support it.
It is a variable. And it's a hideous allowance, one that the authors of
C should never have permitted to be introduced. A rigid definition
should've been created, and then overrides for specific and unique
platform variances should've been encouraged. I'm even open to a common
set of command line switches which would let every compiler enable the
same extensions with the same switch.
Yes. Lunacy. Absolute lunacy.
Are you really trying to say it is "lunacy" to rely on Linux features
when writing Linux-specific code, or Windows features when writing
Windows-specific code?
If all I'm doing is counting up to 1000 then I don't care. How many useful
programs do you know of that only count up to 1000?
The huge majority of numbers in most programs are small.
And if you are writing a program that needs to count 100,000 lines of a
file, then you can be confident it will be running on a system with at
least 32-bit ints, and can therefore happily continue using "int".
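For what it's worth, here is a hypothetical sketch of that line-counting
case (the function is mine, not from the thread): if you want the count
to be safe even on an implementation with 16-bit int, the standard
already guarantees a wide enough type.

  #include <stdio.h>

  /* "long" is required by the C standard to hold at least
     2147483647, so the count is safe even where int is 16-bit. */
  long count_lines(FILE *fp)
  {
      long lines = 0;
      int c;

      while ((c = getc(fp)) != EOF) {
          if (c == '\n')
              lines++;
      }
      return lines;
  }

  int main(int argc, char *argv[])
  {
      FILE *fp = (argc > 1) ? fopen(argv[1], "r") : stdin;
      if (fp == NULL)
          return 1;
      printf("%ld lines\n", count_lines(fp));
      return 0;
  }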
The language needs to be fixed in place so that people can have
expectations across the board, in all cases, of how something should
work. And then, on top of that, every compiler author is able to
demonstrate their coding prowess by introducing a host of extensions
which make their C compiler better than another C compiler.
Marvellous idea - let's make some fixed standards, then tell all the
compiler implementers to break those standards in different ways. We
can even standardise command-line switches used to choose /how/ the
standards are to be broken. Perhaps we can have another command line
switch that lets you break the standards for the command line switches.
Competition would abound. New features
created. Creativity employed. The result would be a better compiler for
all, rather than niche implementations which force someone into a
particular toolset because the man-hours of switching away from compiler
X to compiler Y introduce unknown variables which could exceed reasonable
time frames, or expenses.
I don't care. We are not going backwards. We are going forwards. We
are not in the 1970s. We are in the 2010s. And things are not getting
smaller. They are getting bigger.
Again, you have no concept of how the processor market works, or what
processors are used.
If you want to restrict your own language to a particular type of
processor, that's fine - but you don't get to condemn C for being more
flexible.
32-bit and 64-bit CPUs today exist for exceedingly low cost. ARM CPUs
can be created with full support for video, networking, external storage,
for less than $1 per unit.
Again, you have no concept of what you are talking about. Yes, there
are ARM devices for less than $1 - about $0.50 is the cheapest available
in very large quantities. They don't have video, networking, etc. -
for that, you are up to around $15 minimum, including the required
external chips.
Putting an 8-bit cpu on a chip, on the other hand, can be done for
perhaps $0.10. The same price will give you a 4-bit cpu with ROM in a
bare die, ready to be put into a cheap plastic toy or greeting card.
Yes, 32-bit cpus (but not 64-bit cpus) are available at incredibly low
prices. No, they do not come close to competing with 8-bit devices for
very high volume shipments.
I will certainly agree that most /developers/ are working with 32-bit
devices - and most embedded developers that worked with 8-bit or 16-bit
devices are moving towards 32-bit devices. I use a lot more 32-bit
chips than I did even 5 years ago - but we certainly have not stopped
making new systems based on 8-bit and 16-bit devices.
If I were designing a new language, I would pick 32-bit as the minimum
size - but I am very glad that C continues to support a wide range of
devices.
We are not stuck in the past. We are standing at a switchover point.
Our manufacturing processes are down below 20 nm now. We can put so many
transistors on a chip today that there is no longer any comparison to
what existed even 15 years ago, let alone 40 years ago.
It's time to move on.
Nonsense.
Exactly - it is nonsense.
Will your specifications say anything about the timing of particular
constructs? I expect not - it is not fully specified. In some types of
work it would be very useful to be able to have guarantees about the
timing of the code (and there are some languages and toolsets that give
you such guarantees). For real-time systems, operating within the given
time constraints is a requirement for the code to work correctly as
desired - it is part of the behaviour of the code. Will your
specifications say anything about the sizes of the generated code? I
expect not - it is not fully specified. Again, there are times when
code that is too big is non-working code. Will your specifications give
exact requirements for the results of all floating point operations? I
expect not - otherwise implementations would require software floating
point on every processor except the one you happened to test on.
You can give more rigid specifications than the C
standards do - but there are no absolutes here. There are /always/
aspects of the language that will be different for different compilers,
different options, different targets. Once you understand this, I think
you will get on a little better.
We'll see. RDC will have rigid standards, and then it will allow for
variances that are needed by X, Y, or Z. But people who write code,
even code like a = a[i++], in RDC, will always know how it is going
to work, platform A or B, optimization setting C or D, no matter the
circumstance.
People who write code like "a = a[i++]" should be fired, regardless
of what a language standard might say.
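For concreteness, a small sketch of my own (not from either post) of the
kind of expression this argument is about; gcc warns about the first
assignment below with -Wsequence-point:

  #include <stdio.h>

  int main(void)
  {
      int a[4] = {10, 20, 30, 40};
      int i = 1;

      a[i] = i++;       /* undefined behaviour in C: the read of i in
                           a[i] is unsequenced relative to the i++  */

      i = 1;
      a[i] = a[i + 1];  /* well defined: nothing is modified twice and
                           there is no unsequenced read/write clash  */

      printf("%d %d %d %d\n", a[0], a[1], a[2], a[3]);
      return 0;
  }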
People are what matter. People make decisions. The compiler is not
authorized to go outside of their dictates to do things for them. If
the developer wanted something done a particular way, they should've
written it that way; otherwise, the compiler will generate
EXACTLY what the user specifies, in the way the user specifies it,
even if it has to dip into clunky emulation to carry out the mandated
workload on some obscure CPU that probably has no business still
being around in the year 2014.
No. It would guarantee that your program would work the same on all
CPUs, regardless of internal mechanical abilities. The purpose of the
language is to write common code. Having it work one way on one CPU,
and potentially another way on another CPU, or maintaining such an
insane concept as "with int, you can always know that it is at least
16-bits," when we are today in the era of 64-bit computers, is to be
found on the page of examples demonstrating wrongness.
You throw around insults like "insane" and "lunacy", without making any
attempt to understand the logic behind the C language design decisions
(despite having them explained to you).
The language should determine how all things are done. Anything beyond
that is done through exceptions to the standard, via switches which must
be explicitly enabled (because on a particular platform you want to take
advantage of whatever features you deem appropriate).
Bzzz! Wrong. You still have rigid specs. The specs are exactly what
they indicate. Only the overrides change the spec'd behavior. A person
writing a program on ANY version of RDC will know that it works EXACTLY
the same on ANY version of RDC no matter what the underlying architecture
supports. And that will be true of every form of syntax a person could
invent, use, employ, borrow, and so on...
Rigid specs are what computers need. They expose a range of abilities
through the ISA, and I can always rely upon those "functions" to work
exactly as spec'd. I don't need to know whether the underlying hardware
uses method A or method B to compute a 32-bit sum ... I just want a
32-bit sum.
That's how it will be with RDC.
It's odd that you use the example of computing the sum of two numbers as
justification for your confused ideas about rigid specifications, when
it was previously given to you as an example of why the C specifications
leave things like order of evaluation up to the implementation - it is
precisely so that when you write "x + y", you don't care whether "x" or
"y" is evaluated first, as long as you get their sum. But with your
"rigid specs" for RDC, you are saying that you /do/ care how it is done,
by insisting that the compiler first figures out "x", /then/ figures out
"y", and /then/ adds them.
Note, of course, that all your attempts to specify things like ordering
will be in vain - regardless of the ordering generated by the compiler,
modern processors will re-arrange the order in which the instructions
are carried out.
NO! You only have "implementation dependent" behavior because the authors
of the C language specs did not mandate that behavior should be consistent
across all platforms, with extensions allowed.
C exists as it does today because they chose this model of "I will leave
things open in the standard so that people can implement whatever they
choose, possibly based upon whether the breakfast they ate is settling
well in their stomach today," rather than the model of
"I will define everything, and let anyone who wants to step outside of
what I've defined do so, as per an extension."
To me, the latter is FAR AND AWAY the better choice.
Yes. I'd rather be a farmer than deal with this variation in behavior
across platforms for the rest of my life. I am a person. I am alive.
I am not convinced. I suspect you are a Turing test that has blown a
fuse.
I write something in a language, I expect it to do what I say. I do not
want to be subject to the personal inclinations of a compiler author who
may disagree with me on a dozen things, and agree with me on a dozen
others. I want to know what I'm facing, and then I want to explore the
grandness of that author's vision as to which extensions s/he has provided
unto me, making use of them on architecture X whenever I'm targeting
architecture X.
It should be that way on all aspects. On an 8-bit computer, ints should
be 32-bit and the compiler should handle it with emulation.
Why? You have absolutely no justification for insisting on 32-bit ints.
If I am writing code that needs a 32-bit value, I will use int32_t.
(Some people would prefer to use "long int", which is equally good, but
I like to be explicit.) There are no advantages in saying that "int"
needs to be 32-bit, but there are lots of disadvantages (in code size
and speed).
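A minimal sketch (mine, not from the post) of what that looks like in
practice with <stdint.h>:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      int32_t exact = 100000;          /* exactly 32 bits, where the
                                          implementation provides it  */
      int_least32_t at_least = 100000; /* at least 32 bits, available
                                          on every implementation     */
      int_fast16_t counter;            /* fastest type of >= 16 bits  */

      for (counter = 0; counter < 1000; counter++)
          ;                            /* small loop: plain int would
                                          be fine here as well        */

      printf("%ld %ld\n", (long)exact, (long)at_least);
      return 0;
  }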
However, by using the compiler switch to enable the alternate size, then
my code can be compiled as I desire for the lesser hardware.
This means that the same code has different meanings depending on the
compiler flags, and the target system. This completely spoils the idea
of your "rigid specifications" and that "RDC code will run in exactly
the same way on all systems". It is, to borrow your own phrase, lunacy.