Article about Herb Schildt accepted at comp.risks


Seebs

Ben Bacarisse wrote:
<snip>
Hanlon's Razor.

I have considered that, but... There is a Father Brown mystery in which
he points out that someone MUST have known the truth in order to get every
single detail of something wrong.

Nilges does indeed approach that level of consistent error. However,
I think it may still be possible to provide an alternative theory that
explains this more as incompetence than malice, although arguably still
"malice" in a broader sense. If we consider the plausible notion that
he's got some severe pathological narcissism going, his errors can be
explained much more simply. It's obvious he knows very little about any
of the subjects on which we've seen him talk; however, it's also obvious
that he reflexively opposes people he dislikes at all times. If you view
his statements, not as claims about the actual facts under discussion,
but as assertions that the people he's disagreeing with are Bad People,
everything makes sense. He picked variable names in a reasonably
unexceptional way until I pointed out that he was doing something stupid;
after that he went out of his way to name them extremely badly, in a kind
of over-the-top satire/parody of "Hungarian".

Which is to say -- it's not consciously intended as incorrect, but it is
not mere coincidence that he's wrong so often.

-s
 

blmblm

[ snip ]

[ snip ]
Interesting. I'd expect Google to do the same thing no matter who
typed in the query, and to not change significantly for something like
this over the course of a day or two. I just tried it again with the
same results, from both Firefox and IE. Are you just using a Google
toolbar, or something like that, or are you actually going to www.google.com
and typing into the entry field there?

The latter, using Firefox -- but a rather old version.

Trying again, with a more-up-to-date version, I get the same behavior
you did.

Hm! But at least this makes a kind of sense.
 

Richard Bos

"Whimper"? Whatever.

(I *was* curious, and I find that the history goes back to .... 1988.
Good heavens. Some familiar themes, though, in those early posts.)

1925, ITYF. But the "rather" spoils it rather.

Richard
 

blmblm

1925, ITYF.

Say what? The history I'm talking about is the history of Nilges posts
to comp.risks. What history are you talking about?
But the "rather" spoils it rather.

Oh, maybe you mean the phrase that's being paraphrased rather
than quoted .... Yes, okay, after Googling for the original, that
seems like a good guess.
 

Tim Rentsch

David Thompson said:
On Mon, 10 May 2010 11:56:29 -0700 (PDT), John Bode

Not quite. First, a nit: the term 'conforming program' is defined by
the standard in a way that is useless, knowingly so, and 'strictly
conforming' is almost as bad.

I can't agree with this assessment. These two terms are defined
as they are to help the Standard make statements about certain
classes of programs. They may not serve the purpose you desire,
but that doesn't make them useless; they are useful for the
purpose that (I believe) the Standard expects for them.
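For illustration, a minimal sketch of how the two classes differ: the
program below is a conforming program (4p7) but not a strictly
conforming one (4p5), because its output depends on an
implementation-defined characteristic.

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* Not strictly conforming: the output depends on the
       implementation-defined value of INT_MAX.  Still a conforming
       program, since any conforming implementation accepts it.     */
    printf("INT_MAX = %d\n", INT_MAX);
    return 0;
}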
We've had several tries seeking a term
for the (useful!) category of programs that portably work correctly,
and the least bad IMO was 'clc-compliant'.

The term 'clc-compliant' seems reasonable, or at least plausible,
but as a definition this statement falls short. Unless there
is some sort of definition of what it means to "work correctly",
it says nothing.
More substantively: modulo bugs a conforming implementation should
accept *and correctly execute* a clc-compliant program *unless it's
too big* -- and then as Quality of Implementation it should give an
appropriate diagnostic, although the standard doesn't require it.

This statement is circular: a clc-compliant program is one that
portably works correctly, and conforming implementations must
correctly execute a clc-compliant program. Obviously this must
be true, otherwise the program in question is not 'clc-compliant',
by definition.
On the other side, it must diagnose any program with a syntax error or
constraint violation, but it is permitted to try to 'fix' such a
program into a valid and, if lucky, desired one.

Which is exactly the case under the existing Standard (absent
a #error directive).
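A small sketch of that #error carve-out -- the one mechanism the
Standard gives a program for forcing outright rejection, since
section 4 says translation shall not succeed if a #error directive
survives conditional inclusion:

#include <limits.h>

/* Fails loudly on implementations outside the assumed category
   instead of being silently "fixed".                               */
#if INT_MAX < 2147483647
#error "this code assumes int is at least 32 bits"
#endif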
And it can have extensions that
users intend and want, but mandate diagnosis in conforming mode.

This statement appears to be discussing changing the definition
of the term "conforming implementation" in the Standard. If so
then it is circularly self-contradictory -- it mandates diagnosis
for conditions that, by definition, the Standard does not
require diagnosing.

And it *may* diagnose other errors, but in general there are many
errors that are not practical to detect and thus cannot be rejected.

IOW I agree with 'accept' mostly but 'reject' mostly not

I don't disagree with what I read as the spirit of the quoted
comments. However, if there is some sort of objection to how these
terms are defined (and therefore what constitutes a "conforming
implementation"), I think the objectors should try writing a
complete replacement for section 4, "Conformance" (it's only eight
paragraphs). If criticism is going to be offered, don't do it from
the sidelines, get out there on the playing field. Don't want to
get out on the playing field? Then be prepared for someone to
say "put up or shut up".
 

David Thompson

I can't agree with this assessment. These two terms are defined
as they are to help the Standard make statements about certain
classes of programs. They may not serve the purpose you desire,
but that doesn't make them useless; they are useful for the
purpose that (I believe) the Standard expects for them.
'conforming' was a bodge to get C89 adopted; I strongly agree that
benefit justifies its infinitesimal cost, but as a class of programs
it's all strings over the alphabet, and I see no use for that.
s-c was a serious attempt, but I find it unuseful. In particular for
the purpose here, deciding a set of programs that all implementations
of interest 'should' accept; in practice 99.99% of programs I want to
*and do* compile and especially port are not s-c. 'Of interest' to me
is definitely limited to hosted since programs that rely on embedded I
don't expect to be portable (although parts may, and often should);
and I might be willing to exclude one or two really weird outliers --
I haven't hit any yet, but I don't rule out the possibility.

(As a concrete example, I was recently looking at an area in OpenSSL
littered with #if's to make some filename handling differ on VMS. It
depends majorly on impl-def characteristics of fopen() -- but in a way
I believe makes sense to users on both VMS and non-VMS systems.)
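Not the actual OpenSSL source, just a sketch of the shape such #if's
take; the macro name and the particular mode-string tweak are
assumptions for illustration only.

#include <stdio.h>

/* Sketch only.  The point is that the same call relies on different
   implementation-defined characteristics of fopen() on VMS and on
   other systems.                                                   */
static FILE *open_log(const char *name)
{
#if defined(OPENSSL_SYS_VMS)   /* macro name assumed for illustration */
    return fopen(name, "a+");  /* VMS-specific choice (assumed detail) */
#else
    return fopen(name, "a");
#endif
}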
The term 'clc-compliant' seems reasonable, or at least plausible,
but as a definition this statement falls short. Unless there
is some sort of definition of what it means to "work correctly",
it says nothing.
That is the hard part. My rough version is that a program works
correctly on some implementation if (each run of) it either produces
results consistent with (within the range specified by) a reasonable
understanding (mediated by discussion here if necessary) of the
standard (C plus others applicable like POSIX), or gives a clear
indication it has failed (either at translation or execution).

Preferably that 'result within standard' should also be within the
spec for the program, and that in turn consistent with the needs of
the user(s), but those are out of scope for the language.
This statement is circular: a clc-compliant program is one that
portably works correctly, and conforming implementations must
correctly execute a clc-compliant program. Obviously this must
be true, otherwise the program in question is not 'clc-compliant',
by definition.
It's not circular, it's just vague; if I reasonably expect it to work,
implementations should make it work unless they can't.
Which is exactly the case under the existing Standard (absent
a #error directive).


This statement appears to be discussing changing the definition
of the term "conforming implementation" in the Standard. If so
then it is circularly self-contradictory -- it mandates diagnosis
for conditions that, by definition, the Standard does not
require diagnosing.
Other way; if a program uses an extension that violates standard
syntax or constraints -- as practically all do, and that's a working
definition of 'extension' as opposed to 'implementation choice' --
then a conforming implementation must (required by standard) diagnose,
but may (by standard) then do anything, and should (IMO) accept using
its definition of the extension.

Formally this is the same as the prior point -- after diagnosis the
compiler tries to DWIM. But for an extension it can have very high
confidence about WIM, not just a guess.
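A concrete sketch of such an extension, as I understand GCC's
behaviour: a zero-length array violates a constraint (the array size
is required to be greater than zero), so a conforming implementation
must diagnose it, but GCC documents it as an extension and, after the
-pedantic warning, accepts it with the intended meaning.

#include <stddef.h>

struct packet {
    size_t len;
    unsigned char data[0];    /* constraint violation; GCC extension */
};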
I don't disagree with what I read as the spirit of the quoted
comments. However, if there is some sort of objection to how these
terms are defined (and therefore what constitutes a "conforming
implementation"), I think the objectors should try writing a
complete replacement for section 4, "Conformance" (it's only eight
paragraphs). If criticism is going to be offered, don't do it from
the sidelines, get out there on the playing field. Don't want to
get out on the playing field? Then be prepared for someone to
say "put up or shut up".

I certainly don't have a better solution. I haven't seen a language
conformance spec that could be accurately enforced except Ada,
and that only because for a while it had the US Defense Department
spending loads of money on it, and it was done from scratch, and even
so AIUI there were still serious disagreements on some (small) points.
And it took quite a few years before people could use it. (And
relatively few did or do, although I don't believe that's caused by
the conformance spec, and personally I even like the language.)

But to define what I consider 'decent' or 'satisfactory', and can
reasonably argue others should also, I think my method works.
 

Tim Rentsch

pete said:
I like "correct program".

ISO/IEC 9899:1999 (E)

4. Conformance

3 A program that is correct in all other aspects, operating on correct
data, containing unspecified behavior shall be a correct program and
act in accordance with 5.1.2.3.

Unfortunately whether a program is a 'correct program' in the
sense of 4p3 depends on what implementation it's running on.
That isn't much help if we're trying to identify a set of
programs that are portably correct. Hmmm, how about "portably
correct"?
 

Tim Rentsch

David Thompson said:
'conforming' was a bodge to get C89 adopted; I strongly agree that
benefit justifies its infinitesimal cost, but as a class of programs
it's all strings over the alphabet, and I see no use for that.
s-c was a serious attempt, but I find it unuseful. In particular for
the purpose here, deciding a set of programs that all implementations
of interest 'should' accept; in practice 99.99% of programs I want to
*and do* compile and especially port are not s-c. 'Of interest' to me
is definitely limited to hosted since programs that rely on embedded I
don't expect to be portable (although parts may, and often should);
and I might be willing to exclude one or two really weird outliers --
I haven't hit any yet, but I don't rule out the possibility.

(As a concrete example, I was recently looking at an area in OpenSSL
littered with #if's to make some filename handling differ on VMS. It
depends majorly on impl-def characteristics of fopen() -- but in a way
I believe makes sense to users on both VMS and non-VMS systems.)

It still sounds like you're just saying that the terms the
Standard uses don't meet your needs. I get that they don't
meet your needs, but that doesn't make them useless.

That is the hard part. My rough version is that a program works
correctly on some implementation if (each run of) it either produces
results consistent with (within the range specified by) a reasonable
understanding (mediated by discussion here if necessary) of the
standard

So... you want a term for a program that portably works
correctly, but you only require the program to run correctly
on one implementation? Where's the "portable" part?
(C plus others applicable like POSIX), or gives a clear
indication it has failed (either at translation or execution).

Surely defining a term pertaining to C compliance shouldn't
depend on requirements outside the C Standard such as POSIX.
Or do you mean something more like "portably POSIX compliant"?
Preferably that 'result within standard' should also be within the
spec for the program, and that in turn consistent with the needs of
the user(s), but those are out of scope for the language.

I would expect that whether a program is 'clc-compliant' (or
whatever other similar term is used) would depend only on
whether its behavior can be predicted reliably using the
Standard, not on anything having to do with the program's
specification (if it even has one at all). Trying to
define the criteria that decide whether a program meets
a specification sounds even harder than trying to define
some sort of "portably correct program" criteria.

It's not circular, it's just vague; if I reasonably expect it to work,
implementations should make it work unless they can't.

The statement quoted was the only one in the responded-to posting that
was anything like a definition of 'clc-compliant'. That makes it
circular. (The "unless too big" part is fuzzy, but that's neither
here nor there.) You did give the start of a definition in the
followup, but of course that doesn't affect statements made about the
previous posting.

Other way; if a program uses an extension that violates standard
syntax or constraints -- as practically all do, and that's a working
definition of 'extension' as opposed to 'implementation choice' --
then a conforming implementation must (required by standard) diagnose,
but may (by standard) then do anything, and should (IMO) accept using
its definition of the extension.

There are two sorts of things that might be termed extensions: those
that are syntax errors or constraint violations, and those that are
not SE/CV. For SE/CV, the Standard already provides the behavior you
want -- must issue a diagnostic, then the compiler can do anything it
wants (not counting #error). For extensions outside the range of
SE/CV, no diagnostic is required. Do you mean to require compilers
(or runtimes) to diagnose all instances of undefined behavior? Because
if a program doesn't have SE/CV, the only choices left are defined
(including unspecified and implementation-defined) and undefined
behavior. For that matter, do you mean to require compilers/runtimes
to diagnose relying on unspecified behavior or implementation-defined
behavior?

I simply don't understand which behaviors you mean to require
getting a diagnostic beyond SE/CV, which already require getting
a diagnostic.
Formally this is the same as the prior point -- after diagnosis the
compiler tries to DWIM. But for an extension it can have very high
confidence about WIM, not just a guess.

I don't see how this is any different from how things are now.
SE/CV -- diagnostic required, can do anything. Unspecified
behavior -- can do any of the different possibilities allowed.
Implementation-defined behavior -- must do what the implementation
says it will do. Undefined behavior -- can do anything. Which
ones of { unspecified behavior, implementation-defined behavior,
undefined behavior } do you mean to require diagnostics?
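For concreteness, a few stock illustrations of those categories --
a sketch, nothing exotic:

#include <stdio.h>

static int f(void) { putchar('f'); return 1; }
static int g(void) { putchar('g'); return 2; }

int main(void)
{
    /* Unspecified behavior: f() and g() may be evaluated in either
       order, so the letters may print as "fg" or "gf".             */
    printf(" %d\n", f() + g());

    /* Implementation-defined behavior: the result of right-shifting
       a negative signed value (6.5.7).                              */
    printf("%d\n", -1 >> 1);

    /* Undefined behavior (if uncommented): i modified twice without
       an intervening sequence point.                                */
    /* { int i = 0; i = i++; } */

    /* Constraint violation (if uncommented): assigning a plain int
       to a pointer; a diagnostic is required.                       */
    /* { char *p; p = 42; } */

    return 0;
}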

I certainly don't have a better solution. I haven't seen a language
conformance spec that could be accurately enforced except Ada,
and that only because for a while it had the US Defense Department
spending loads of money on it, and it was done from scratch, and even
so AIUI there were still serious disagreements on some (small) points.
And it took quite a few years before people could use it. (And
relatively few did or do, although I don't believe that's caused by
the conformance spec, and personally I even like the language.)

But to define what I consider 'decent' or 'satisfactory', and can
reasonably argue others should also, I think my method works.

I'm sympathetic to where I think you're trying to go with
this. However, I think actually doing it is much harder
than you think, because (a) actually defining the criteria
for a "good program" (or whatever we might call it) is
very difficult, and (b) even not counting the writing part,
getting people to agree on what the criteria should be
is also difficult, perhaps even moreso. If someone wants
to lead the charge, I'm happy to applaud and cheer them
on; at the same time I think it's unrealistic to expect
the chances of success to be even as high as 50%. So...
is it into the valley of death for the Light Brigade?
 

David Thompson

It still sounds like you're just saying that the terms the
Standard uses don't meet your needs. I get that they don't
meet your needs, but that doesn't make them useless.
If there is a good use for the universe set, I'd like to know it.
So... you want a term for a program that portably works
correctly, but you only require the program to run correctly
on one implementation? Where's the "portable" part?
Sorry, I quantified sloppily, I meant 'with respect to some=any'.
For each implementation X, my program run on X should produce results
consistent with my understanding of the standard, or none -- but not
necessarily the same as on Y!=X. In my OpenSSL example, it produces
different results on different implementations so it's not s.c., but
if on any given implementation X it produces either the results I
expect (for X) or indicates error, I call that correct.
Surely defining a term pertaining to C compliance shouldn't
depend on requirements outside the C Standard such as POSIX.
Or do you mean something more like "portably POSIX compliant"?
If I intended a particular program to be POSIX, then yes. That was not
the best example, since POSIX already has its own conformance rules.
If I intended a program to be C + IEEEFP (or now C99 with the *option*
for 60559) I would say that. If I intended it to be C + 256M usable
memory I would say that. These and more are all useful-IMO categories
of programs that correspond to useful-IMO ranges of platforms. But (I
claim) I can use the same decision process for all. I need to say
which standards apply to each actual program, yes.
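For what it's worth, a sketch of how the 'C + IEEE FP' category can
be stated in the source itself, using the C99 feature macro for
Annex F:

/* __STDC_IEC_559__ indicates Annex F (IEC 60559) support, so
   translation fails loudly on implementations outside the intended
   category.                                                        */
#if !defined(__STDC_IEC_559__)
#error "this program assumes IEC 60559 (IEEE 754) floating point"
#endif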
I would expect that whether a program is 'clc-compliant' (or
whatever other similar term is used) would depend only on
whether its behavior can be predicted reliably using the
Standard, not on anything having to do with the program's
specification (if it even has one at all). Trying to
define the criteria that decide whether a program meets
a specification sounds even harder than trying to define
some sort of "portably correct program" criteria.
That's why I said preferably and out-of-scope. But it is my *reason*
for this: my goal is programs that do some desired thing, on many
platforms. I'm responsible for writing code that expresses that
desire; if my code is 'x-compliant' I claim each implementation is
responsible for carrying it out as written or indicating failure.
There are two sorts of things that might be termed extensions: those
that are syntax errors or constraint violations, and those that are
not SE/CV. For SE/CV, the Standard already provides the behavior you
want -- must issue a diagnostic, then the compiler can do anything it
wants (not counting #error). For extensions outside the range of

I thought I said that. Maybe more than once.
SE/CV, no diagnostic is required. Do you mean to require compilers
(or runtimes) to diagnose all instances of undefined behavior? Because
if a program doesn't have SE/CV, the only choices left are defined
(including unspecified and implementation-defined) and undefined
behavior. For that matter, do you mean to require compilers/runtimes
to diagnose relying on unspecified behavior or implementation-defined
behavior?
No no no, I didn't say or mean that. I said that if the Standard
allows a range of possibilities as unspec or impl-def, for a given
construct in my program, the implementation must do something in that
range or indicate error. The Standard says (and you reiterate) the
first part; what I'm arguably adding is that the loose wording in 1p2
shouldn't be a loophole: the implementation must indicate, either at
translation or runtime, if it can't correctly execute my 'x-compliant'
program. I would like that indication to be specific and helpful, but
at a minimum it must NOT mimic the normal output of my program while
actually being silently wrong -- which UB can do.
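A stock sketch of 'UB that mimics normal output':

#include <stdio.h>

int main(void)
{
    int v[4] = {1, 2, 3, 4};
    int sum = 0;
    int i;

    /* Off-by-one: the last iteration reads v[4], which is undefined
       behavior.  On many implementations the run still completes and
       prints a plausible-looking number, i.e. it mimics normal
       output while being silently wrong.                            */
    for (i = 0; i <= 4; i++)
        sum += v[i];

    printf("sum = %d\n", sum);
    return 0;
}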
I'm sympathetic to where I think you're trying to go with
this. However, I think actually doing it is much harder
than you think, because (a) actually defining the criteria
for a "good program" (or whatever we might call it) is
very difficult, and (b) even not counting the writing part,
getting people to agree on what the criteria should be
is also difficult, perhaps even moreso. If someone wants
to lead the charge, I'm happy to applaud and cheer them
on; at the same time I think it's unrealistic to expect
the chances of success to be even as high as 50%. So...
is it into the valley of death for the Light Brigade?

I agree it is hard. The aside in my first post was because the
Standard didn't accomplish it -- neither 'conforming' nor 's-c'
accurately identifies 'usefully portable' programs. I don't consider
that a serious problem, since we can (and I did) use other criteria.
To keep my aside short and get to the actual points, I dismissed it
more harshly than needed, and for that I'm sorry.

Perhaps an (always-suspect) analogy: to me it's like saying pilots on
airline X aren't good at handling baggage. It might be nice in some
cases if they were, but it's okay if they aren't because we can get
other people to handle baggage as long as the pilots fly the planes
safely to the right places. The Standard should -- and AFAICT except a
very few murky corners does -- define what I reasonably can expect the
execution of a given C program to do, and what I can't expect.
That's necessary and sufficient.
 

Tim Rentsch

David Thompson said:
Sorry, I quantified sloppily, I meant 'with respect to some=any'.
For each implementation X, my program run on X should produce results
consistent with my understanding of the standard, or none -- but not
necessarily the same as on Y!=X. In my OpenSSL example, it produces
different results on different implementations so it's not s.c., but
if on any given implementation X it produces either the results I
expect (for X) or indicates error, I call that correct.

This sounds like a tautology. Don't all programs produce results for
each implementation that are (in the context of the implementation in
question) consistent with the Standard, if that implementation is
conforming? Or do you mean to change what it means to be consistent
with the Standard, or what it means for an implementation to be
conforming, or both?

[snip] my goal is programs that do some desired thing, on many
platforms. I'm responsible for writing code that expresses that
desire; if my code is 'x-compliant' I claim each implementation is
responsible for carrying it out as written or indicating failure.

Don't conforming implementations already do that (assuming we're
talking just about compliance with ISO C)? If not then that sounds
like you think the Standard's notion of conformance should be
changed -- right? If that's right then changed how?

[snip]
SE/CV, no diagnostic is required. Do you mean to require compilers
(or runtimes) to diagnose all instances of undefined behavior? Because
if a program doesn't have SE/CV, the only choices left are defined
(including unspecified and implementation-defined) and undefined
behavior. For that matter, do you mean to require compilers/runtimes
to diagnose relying on unspecified behavior or implementation-defined
behavior?
No no no, I didn't say or mean that. I said that if the Standard
allows a range of possibilities as unspec or impl-def, for a given
construct in my program, the implementation must do something in that
range or indicate error. The Standard says (and you reiterate) the
first part; what I'm arguably adding is that the loose wording in 1p2
shouldn't be a loophole:

I don't know what you're getting at with this statement.
Can you be more specific?
the implementation must indicate, either at
translation or runtime, if it can't correctly execute my 'x-compliant'
program. I would like that indication to be specific and helpful, but
at a minimum it must NOT mimic the normal output of my program while
actually being silently wrong -- which UB can do.

I'm not sure what you're saying here. Do you mean a 'clc-compliant'
program can't have undefined behavior on any implementation? Or
if it does then the UB must be diagnosed (on those implementations
where the UB occurs)? Or something else?
 

David Thompson

(Sorry for the extra-long delay; work problems.)

This sounds like a tautology. Don't all programs produce results for
each implementation that are (in the context of the implementation in
question) consistent with the Standard, if that implementation is
conforming? Or do you mean to change what it means to be consistent
with the Standard, or what it means for an implementation to be
conforming, or both?
It's not a tautology, just an (over?)detailed statement of a point I
hadn't gotten across. Yes, program P run on conforming implementation
X must produce results consistent with the standard, or detected
error. But in case of UB that doesn't help, because anything at all is
consistent with std; and even some non-UB results are undesired.
P is 'clc-compliant' (or whatever we call it) if it produces *desired*
results (never UB), or error, on all implementations.

I'm satisfied with *implementation* conformance; the issue was about a
useful category of *programs* between s-c and conforming.
[snip] my goal is programs that do some desired thing, on many
platforms. I'm responsible for writing code that expresses that
desire; if my code is 'x-compliant' I claim each implementation is
responsible for carrying it out as written or indicating failure.

Don't conforming implementations already do that (assuming we're
talking just about compliance with ISO C)? If not then that sounds
like you think the Standard's notion of conformance should be
changed -- right? If that's right then changed how?
Implementation conformance is okay.
[snip]
SE/CV, no diagnostic is required. Do you mean to require compilers
(or runtimes) to diagnose all instances of undefined behavior? Because
if a program doesn't have SE/CV, the only choices left are defined
(including unspecified and implementation-defined) and undefined
behavior. For that matter, do you mean to require compilers/runtimes
to diagnose relying on unspecified behavior or implementation-defined
behavior?
No no no, I didn't say or mean that. I said that if the Standard
allows a range of possibilities as unspec or impl-def, for a given
construct in my program, the implementation must do something in that
range or indicate error. The Standard says (and you reiterate) the
first part; what I'm arguably adding is that the loose wording in 1p2
shouldn't be a loophole:

I don't know what you're getting at with this statement.
Can you be more specific?
1p2 'does not specify ... size or complexity ... that will exceed the
capacity of ... system or ... processor' = part of an implementation.
This admits a (concrete) implementation can't handle the infinite set
of programs for which the Std defines abstract semantics, which must
be true; and silently allows that the boundary varies across
implementations, which in practice is true and unavoidable. AFAICS
nothing quite says what happens when you cross that boundary -- it
isn't SE/CV as defined, although it is a violation of a constraint in
a more general meaning. Is it 'nonportable ...' UB per 3.4.3? Is it
'correct in all other aspects' per 4p3? So keep reading:
I'm not sure what you're saying here. Do you mean a 'clc-compliant'
program can't have undefined behavior on any implementation? Or
if it does then the UB must be diagnosed (on those implementations
where the UB occurs)? Or something else?

The first. A clc-compliant program can't do something which invokes UB
because then I couldn't have a nontrivial expectation from std of
valid semantics, which was my criterion. But if the *only* problem is
my program exceeds size & complexity limits for an implementation, so
the implementation is unable to execute (the semantics of) the code as
written, I claim that *isn't* UB, and the implementation must
diagnose. (Or slightly weaker -- if it gives me an error indication,
but not one documented as a diagnostic per 3.10, that's fine.)

What I won't accept is just malfunctioning. The classic example: if
(as common) implementation uses 'the stack' for all autos, temps and
returns, and my code calls more deeply than fits in the available
stack space, and the stack growth just corrupts other data without any
indication, I consider that unacceptable.
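In code, the classic example looks something like this (a sketch):

#include <stdio.h>
#include <stdlib.h>

/* Recursion whose depth is chosen at run time.  Nothing in the
   source violates a constraint; whether exhausting the stack is UB
   or a 1p2 capacity issue is exactly the murky question above.  The
   acceptable outcomes are correct output or a clear failure
   indication -- not silent corruption of other data.               */
static unsigned long nest(unsigned long n)
{
    if (n == 0)
        return 0;
    return 1 + nest(n - 1);
}

int main(int argc, char **argv)
{
    unsigned long n = (argc > 1) ? strtoul(argv[1], NULL, 10) : 10;
    printf("%lu\n", nest(n));
    return 0;
}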

In practice I don't have problems. Such misbehavior would be widely
considered bad QoI, and get driven out of the market, at least for
general-purpose systems (roughly, hosted). (For true embedded = tiny
systems, I would be willing to leave this burden on the programmer, or
much better the toolchain, e.g. a call tree analyzer.) But is it
nonconforming? Do 4p3, and 5.1.2.3, apply over 1p2? I'm not 100% sure.
Clearly we can't require a conforming implementation correctly execute
an otherwise correct program whose source code is larger than the
universe -- then no implementation could ever be conforming.
Even a nonnegligible fraction of the planet would be unaffordable.
Fortunately I can't -- *and* don't want to -- write anything that big.
 

Tim Rentsch

David Thompson said:
(Sorry for the extra-long delay; work problems.)

David Thompson said:
On Sat, 03 Jul 2010 07:58:01 -0700, Tim Rentsch


On Thu, 17 Jun 2010 12:31:40 -0700, Tim Rentsch
[snip]
We've had several tries seeking a term
for the (useful!) category of programs that portably work correctly,
and the least bad IMO was 'clc-compliant'.

The term 'clc-compliant' seems reasonable, or at least plausible,
but as a definition this statement falls short. Unless there
is some sort of definition of what it means to "work correctly",
it says nothing.

That is the hard part. My rough version is that a program works
correctly on some implementation if (each run of) it either produces
results consistent with (within the range specified by) a reasonable
understanding (mediated by discussion here if necessary) of the
standard

So... you want a term for a program that portably works
correctly, but you only require the program to run correctly
on one implementation? Where's the "portable" part?

Sorry, I quantified sloppily, I meant 'with respect to some=any'.
For each implementation X, my program run on X should produce results
consistent with my understanding of the standard, or none -- but not
necessarily the same as on Y!=X. In my OpenSSL example, it produces
different results on different implementations so it's not s.c., but
if on any given implementation X it produces either the results I
expect (for X) or indicates error, I call that correct.

This sounds like a tautology. Don't all programs produce results for
each implementation that are (in the context of the implementation in
question) consistent with the Standard, if that implementation is
conforming? Or do you mean to change what it means to be consistent
with the Standard, or what it means for an implementation to be
conforming, or both?
It's not a tautology, just an (over?)detailed statement of a point I
hadn't gotten across. Yes, program P run on conforming implementation
X must produce results consistent with the standard, or detected
error. But in case of UB that doesn't help, because anything at all is
consistent with std; and even some non-UB results are undesired.
P is 'clc-compliant' (or whatever we call it) if it produces *desired*
results (never UB), or error, on all implementations.

I'm satisfied with *implementation* conformance; the issue was about a
useful category of *programs* between s-c and conforming.
[snip] my goal is programs that do some desired thing, on many
platforms. I'm responsible for writing code that expresses that
desire; if my code is 'x-compliant' I claim each implementation is
responsible for carrying it out as written or indicating failure.

Don't conforming implementations already do that (assuming we're
talking just about compliance with ISO C)? If not then that sounds
like you think the Standard's notion of conformance should be
changed -- right? If that's right then changed how?
Implementation conformance is okay.
[snip]

SE/CV, no diagnostic is required. Do you mean to require compilers
(or runtimes) to diagnose all instances of undefined behavior? Because
if a program doesn't have SE/CV, the only choices left are defined
(including unspecified and implementation-defined) and undefined
behavior. For that matter, do you mean to require compilers/runtimes
to diagnose relying on unspecified behavior or implementation-defined
behavior?

No no no, I didn't say or mean that. I said that if the Standard
allows a range of possibilities as unspec or impl-def, for a given
construct in my program, the implementation must do something in that
range or indicate error. The Standard says (and you reiterate) the
first part; what I'm arguably adding is that the loose wording in 1p2
shouldn't be a loophole:

I don't know what you're getting at with this statement.
Can you be more specific?
1p2 'does not specify ... size or complexity ... that will exceed the
capacity of ... system or ... processor' = part of an implementation.
This admits a (concrete) implementation can't handle the infinite set
of programs for which the Std defines abstract semantics, which must
be true; and silently allows that the boundary varies across
implementations, which in practice is true and unavoidable. AFAICS
nothing quite says what happens when you cross that boundary -- it
isn't SE/CV as defined, although it is a violation of a constraint in
a more general meaning. Is it 'nonportable ...' UB per 3.4.3? Is it
'correct in all other aspects' per 4p3? So keep reading:
I'm not sure what you're saying here. Do you mean a 'clc-compliant'
program can't have undefined behavior on any implementation? Or
if it does then the UB must be diagnosed (on those implementations
where the UB occurs)? Or something else?

The first. A clc-compliant program can't do something which invokes UB
because then I couldn't have a nontrivial expectation from std of
valid semantics, which was my criterion. But if the *only* problem is
my program exceeds size & complexity limits for an implementation, so
the implementation is unable to execute (the semantics of) the code as
written, I claim that *isn't* UB, and the implementation must
diagnose. (Or slightly weaker -- if it gives me an error indication,
but not one documented as a diagnostic per 3.10, that's fine.)

What I won't accept is just malfunctioning. The classic example: if
(as common) implementation uses 'the stack' for all autos, temps and
returns, and my code calls more deeply than fits in the available
stack space, and the stack growth just corrupts other data without any
indication, I consider that unacceptable.

In practice I don't have problems. Such misbehavior would be widely
considered bad QoI, and get driven out of the market, at least for
general-purpose systems (roughly, hosted). (For true embedded = tiny
systems, I would be willing to leave this burden on the programmer, or
much better the toolchain, e.g. a call tree analyzer.) But is it
nonconforming? Do 4p3, and 5.1.2.3, apply over 1p2? I'm not 100% sure.
Clearly we can't require a conforming implementation correctly execute
an otherwise correct program whose source code is larger than the
universe -- then no implementation could ever be conforming.
Even a nonnegligible fraction of the planet would be unaffordable.
Fortunately I can't -- *and* don't want to -- write anything that big.

Reading all this and trying to digest it, it sounds to
me like you're saying basically three distinct things,
namely:

1. A 'clc-compliant' program is one that never under
any circumstances on any imaginable implementation
(that is conforming) is subject to undefined behavior;

2. The description of what happens when a program
transgresses an (implementation-specific) implementation
limit must be made more definite (ie, the Standard must
state the requirements for such cases more definitely);
and

3. In particular, what happens when (2) occurs must
be defined (perhaps implementation-defined) behavior,
and not be undefined behavior.

Did I understand all that right? Is there anything I
missed?
 

David Thompson

On Mon, 23 Aug 2010 11:43:41 -0700, Tim Rentsch

Reading all this and trying to digest it, it sounds to
me like you're saying basically three distinct things,
namely:

1. A 'clc-compliant' program is one that never under
any circumstances on any imaginable implementation
(that is conforming) is subject to undefined behavior;
Never UB on any implementation I (we?) will use is the goal, yes.
Whether I can actually use conformance as the criterion, see below.
2. The description of what happens when a program
transgresses an (implementation-specific) implementation
limit must be made more definite (ie, the Standard must
state the requirements for such cases more definitely);
and
I'm not saying it must be stated, and I'm not even certain it can be
stated in a way that is rigid enough to be enforced but flexible
enough to allow the variations that we probably need to allow.

I am saying this is a 'weak spot' where there might possibly be a
problem. But I have not in practice seen such problems, and AIUI the
committees have quite enough more productive work to do.
3. In particular, what happens when (2) occurs must
be defined (perhaps implementation-defined) behavior,
and not be undefined behavior.
Effectively yes. I won't use an implementation where this can produce
undetected UB. Because of the 'weak spot' I'm not sure I can prove
it's nonconforming but if not it's definitely unacceptable-to-me QoI.
And it seems to be unacceptable to enough people that it doesn't happen,
which is why I don't consider it a serious problem.
 
