[... Subthread, basically "C is 'bad' and the committee 'cowards' because
they allowed undefined behavior in the language".
...]
Who might have implied or explicitly suggested these items? C can
hardly be considered bad; it's been around a long time and is still
popular. That seems like a good thing.
I don't really perceive a
problem with a C Standard leaving expectations open in certain areas.
The C Standard appears to be evolving, too. A couple of things that
were, perhaps, ambiguous in C99 appear to have been pondered over for
C1X.
In order to have no UB, every pointer dereference would have to
include an "is this address valid" check. (Not just a check for NULL,
but a full validity check.) And that includes "is this address
writable" checks where necessary, too.
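Just to make the cost concrete, here's a toy sketch of what a
mandatory check on every read might look like. (Entirely hypothetical:
address_is_readable and checked_read are invented names, and the
"validity table" is faked with a single registered region; a real
implementation would need allocator/OS bookkeeping to answer the
question at all.)

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Toy stand-in for "is [p, p + n) readable?". Only one registered
 * region is "known" here; a real implementation would need help
 * from the allocator and the OS. */
static const void *known_base;
static size_t known_size;

static int address_is_readable(const void *p, size_t n)
{
    uintptr_t lo = (uintptr_t)known_base;
    uintptr_t q = (uintptr_t)p;
    return known_base != NULL
        && n <= known_size
        && q >= lo
        && q - lo <= known_size - n;
}

static int checked_read(const int *p)
{
    if (!address_is_readable(p, sizeof *p)) {
        fprintf(stderr, "invalid read of %p\n", (const void *)p);
        abort();  /* or a signal, or whatever the hypothetical rule said */
    }
    return *p;
}

int main(void)
{
    int a = 42;
    known_base = &a;
    known_size = sizeof a;
    printf("%d\n", checked_read(&a));  /* passes the check */
    return 0;
}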
How could "valid address" be defined beyond what the C Standard
currently offers?
And every call to free() would have to validate that the pointer is a
value previously returned from *alloc() and not yet freed.
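Something along those lines, sketched with invented names
(tracked_malloc/tracked_free, with a plain linked list standing in for
whatever bookkeeping a real implementation would hide inside
malloc/free itself):

#include <stdio.h>
#include <stdlib.h>

/* Toy allocation tracker: remembers every pointer handed out and
 * not yet freed. */
struct live_alloc {
    void *ptr;
    struct live_alloc *next;
};
static struct live_alloc *live_list;

static void *tracked_malloc(size_t n)
{
    void *p = malloc(n);
    if (p != NULL) {
        struct live_alloc *node = malloc(sizeof *node);
        if (node == NULL) {
            free(p);
            return NULL;
        }
        node->ptr = p;
        node->next = live_list;
        live_list = node;
    }
    return p;
}

static void tracked_free(void *p)
{
    struct live_alloc **link = &live_list;
    struct live_alloc *node;

    if (p == NULL)
        return;  /* free(NULL) is already well defined */
    while (*link != NULL && (*link)->ptr != p)
        link = &(*link)->next;
    if (*link == NULL) {
        fprintf(stderr, "free() of untracked pointer %p\n", p);
        abort();  /* a diagnosed error instead of undefined behaviour */
    }
    node = *link;
    *link = node->next;
    free(node);
    free(p);
}

int main(void)
{
    int *p = tracked_malloc(sizeof *p);
    tracked_free(p);
    tracked_free(p);  /* double free: caught and diagnosed, not UB */
    return 0;
}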
Additionally, it would be interesting if an implementation digitally
signed pointers to prevent tampering, too.
Is that a worthwhile
expectation for conforming implementations?
Since this subthread was turning into "a language with UB is
inexcusable", I was asking how one would detect out-of-bounds
accesses, and how much overhead that requires simply to handle the
"just-in-case" situations. Regardless of how such a standard would
define what to do, it still has to be a detectable situation.
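One sketch of making it detectable (not a proposal; the struct and
the names are invented) is to carry the bounds of the array object
alongside every pointer, which is also where the overhead shows up:
each pointer roughly triples in size and each access gains a range
check:

#include <stdio.h>
#include <stdlib.h>

/* Invented "fat pointer": the pointer plus the bounds of the array
 * object it was derived from. */
struct fat_ptr {
    int *ptr;
    int *base;
    size_t nelem;
};

static int fat_read(struct fat_ptr p)
{
    if (p.ptr < p.base || p.ptr >= p.base + p.nelem) {
        fprintf(stderr, "out-of-bounds read\n");
        abort();
    }
    return *p.ptr;
}

int main(void)
{
    int arr[4] = { 1, 2, 3, 4 };
    struct fat_ptr p = { arr + 3, arr, 4 };

    printf("%d\n", fat_read(p));  /* in bounds: prints 4 */
    p.ptr = arr + 4;              /* one past the end...  */
    printf("%d\n", fat_read(p));  /* ...caught at run time */
    return 0;
}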
Who is it that has said "a language with UB is inexcusable"? Sorry, I
don't remember that. If somebody did say that, I'd have to disagree
with them. That doesn't mean that a C Standard cannot be improved by
experts and practitioners and hobbyists over time, though.
But the bounds checking currently defined for C (with regard to
pointer arithmetic) does seem odd to me... It's not clear what "array
object" is being referred to, or how one can determine the "number of
elements" in that array object... Hence the other thread.
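For reference, here's how I read the rule in the simple case, where
the "array object" and its "number of elements" are obvious; the
puzzle is how to apply that wording once the array is only reached
through a pointer:

#include <stdio.h>

int main(void)
{
    int arr[4] = { 1, 2, 3, 4 };  /* the "array object": 4 elements */
    int *p = arr + 4;             /* one past the end: defined */
    int *q = arr + 3;             /* last element: defined */

    printf("%d\n", *q);           /* fine: prints 4 */
    /* *p       -- undefined: one-past-the-end may not be dereferenced */
    /* arr + 5  -- undefined: arithmetic beyond one-past-the-end */
    printf("%td\n", p - arr);     /* defined: prints 4 */
    return 0;
}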
There are sure to be trade-offs in decision-making. It appears that
function calls in C are meant to behave like an N-ary operator (as in
mathematics), rather than like a list of atomic argument evaluations
whose results are assigned to the parameters. The C Standard grants a
license for compilers to optimize as long as they yield results
consistent with the abstract semantics. Those abstract semantics
further grant a license for compilers to determine evaluation order.
That's permissive and consistent, but it does appear to mean that:
int a = 1;
printf("%d", ++a, a + 5);
yields undefined behaviour.
int a = 1;
/* No luck here, either: the commas add sequence points within each
   argument, but the evaluations of the two arguments may still
   interleave. */
printf("%d", (0, ++a, a), (0, 0, 0, 0, 0, 0, a + 5));
/* Or here. */
printf("%d", (0, 0, 0, 0, 0, 0, ++a, a), (0, a + 5));
Or:
volatile int a = 1;
/* D'oh! Still a violation: volatile constrains the accesses to 'a',
   not the order in which the arguments are evaluated. */
printf("%d %d", ++a, a + 5);
Or:
#include <stdio.h>
static inline int inc_int(int *param) {
    return ++*param;
}

int main(void) {
    int a = 1;
    /**
     * Worst-case is a non-inline function call and
     * the need for 'a' to be addressable.
     * C99 still might not define it.
     * C1X should be ok.
     */
    printf("%d", inc_int(&a), a + 5);
    return 0;
}
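For what it's worth (my own addition, nothing clever), the boring
workaround is fine in any revision: give the increment its own
statement, so the sequence point at the end of that full expression
does the job:

#include <stdio.h>

int main(void) {
    int a = 1;
    int b = ++a;             /* sequence point at the end of the statement */
    printf("%d", b, a + 5);  /* well defined; the excess argument is
                                evaluated but ignored */
    return 0;
}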