subroutine stack and C machine model


spinoza1111

spinoza1111 wrote:


It is a fact that the C language has been defined by the ISO C standards
for a long time.

So? A mistake has been certified standard for too long.
I've known plenty of good programmers who have switched to C from other
safer languages without problems.


Firstly he was not the only person on the standards committee, secondly
he probably was not the most influential person on it, and thirdly it is
only you who thinks a lot of the things you pick on (often inaccurately)
should be changed.


The programmers I know all have more sense than to rely on order of
evaluation within an expression *whatever* language they are programming
in, since it makes it harder for *people* to follow what is going on.
When order of evaluation matters anyone with any sense puts the items in
separate statements.
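
A minimal sketch of that practice, using hypothetical functions f1() and f2()
whose only side effect is printing their names (the names are illustrative only):

#include <stdio.h>

int f1(void) { printf("f1 "); return 3; }
int f2(void) { printf("f2 "); return 5; }

int main(void)
{
    /* x = f1() + f2();  -- the order of the two calls is unspecified */

    int a = f1();   /* f1() runs first */
    int b = f2();   /* then f2() */
    int x = a + b;
    printf("%d\n", x);
    return 0;
}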

That's your goal, to return us all to Fortran. I think putting things
in separate statements gives a false clarity to code, making it
understandable only to clerkish twerps.
He's helped a lot of people to misunderstand C.

Where are these Troglodytes? I think you're the cave dwellers!
It was like that right from the start. Some compilers may have provided
it as an extension in non-conforming mode, but that is another matter.


The issue your mistakes show is your inability to learn C and your lack
of qualification to talk about it.

I must be Benjamin Button, since I taught it at Princeton.
Plenty of people have learned to program successfully in C, in fact, of
the people I have known who had any aptitude for software development,
all of them have been able to learn C without all of the errors you make.

You cite a cite of a cite, replicating an illusion. In fact, I have
made fewer errors than many of the regs for the simple reason that I don't code in
C...because it's an inadequate language, and what I've learned about
it from bantering with you faggots has horrified me.
Back to the insults then.

Screw you. This place is a Sodom for newbies, where the crime of Sodom
was in fact inhospitality and not anal sex. The only people accorded
respect are the insiders, the regulars. You guys hate it when a newbie
calls you bad names, but you regularly kick the shit out of newbies,
especially when their technical and general education shows you to be
frauds.
If people stop replying you will stop inventing and posting rubbish then?



Your learning C is unlikely to convince you of anything true since you
don't seem to be able to learn it properly.




I've known experienced programmers learn C purely from K&R2 and reading
code written by others. They learned it quickly and without difficulty.
You don't need a book riddled with errors to learn a language.

Yeah, and then they get fat jobs at DE Shaw creating the credit
crisis.
 From what you've said previously, it was Nash's code that was in error,

No, it wasn't. At Bell Northern Research I realized that if constant
expressions are resolved at compile time, and absent a standard
dictating bad practice, the widest representation needed to be
used...because the programmer obviously meaning to calculate something
"at compile time" is often unable to specify the width of the result.
In the Nash case, pow() was used to compare a Long value but the
Microsoft compiler used Int precision, whereas the Borland compiler
didn't.

Nash, an extremely competent programmer, wanted to represent multiple-
precision numbers base big N flexibly. It made sense from the standard
of readability to express the limit as a power of 2, and the compiler
should have used long and not int precision.
and the compiler within specification. Of course, I would not expect you
to be able to tell the difference between an error in a compiler and the
compiler following the standard (for any language, not just C).

My "incompetence" is purely my failure to accept bullshit. It's why I
left the field because I don't like working with people whose
"knowledge" is of constructed facts which can and should be changed.

Patriarchal "competence" isn't my bag because in fact conformity leads
to more errors in the large. Everybody went along at NASA with the
"standardized" fact that "while we didn't design the Space Shuttle to
shed tiles on lift-off, we have in effect standardized and blessed
this, as a known defect which we don't choose to fix: if you disagree,
you're a stupid idiot".

The C standard is an instance of what anthropologist Diane Vaughan
names as "Normalized Deviance". In her study of the earlier Challenger
disaster, caused by the "standardization" process in the sense that
Morton-Thiokol engineers were browbeaten into approving a launch
despite "known unknowns" about O-rings, she discovered a male tendency
to transform deviance ("aw shit the O rings will probably hold, fella"
"aw shit the foam chunks will probably not harm anything" "aw shit it
doesn't matter if sequence is nondeterminate") into the Law ... which
whistle blowers question only at the risk of global challenges to
their competence.
 

Tim Streater

Richard Heathfield said:
In
<96ebd01e-e7d2-4040-9039-cf4019b7eb58@t11g2000prh.googlegroups.com>,
spinoza1111 wrote:

So presumably unclerkish twerps, clerkish non-twerps, and especially
unclerkish non-twerps can't understand clear code, right? I'm just
checking I understand you right.

I think Spinny has contempt for the poor sods who have to do maintenance
on impenetrable code written by smart erudite chaps like himself. It was
hard to write, so it should be hard to maintain.
 

Ben Bacarisse

<snip>

You are posting an increasing amount of nonsense of late. I don't
have the time or the inclination to read a fraction of it but this
stood out as a particular perversion of logic:
C's nondeterminacy is recognized as a bug and not a feature in
academia. This journal article recognizes "sequence points" as a C
idiom, and idioms are usually signs of a language mistake:

http://journals.cambridge.org/actio...72398F7187C.tomcat1?fromPage=online&aid=54521

"The presence of side effects in even a very simple language of
expressions gives rise to a number of semantic questions. The issue of
evaluation order becomes a crucial one and, unless a specific order is
enforced, the language becomes non-deterministic. In this paper we
study the denotational semantics of such a language under a variety of
possible evaluation strategies, from simpler to more complex,
concluding with unspecified evaluation order, unspecified order of
side effects and the mechanism of sequence points that is particular
to the ANSI C programming language. In doing so, we adopt a dialect of
Haskell as a metalanguage, instead of mathematical notation, and use
monads and monad transformers to improve modularity. In this way, only
small modifications are required for each transition. The result is a
better understanding of different evaluation strategies and a unified
way of specifying their semantics. Furthermore, a significant step is
achieved towards a complete and accurate semantics for ANSI C."

A technical matter to discuss, an academic journal, and a quote. It
looks impressive, but how does the quote back up the claim?

Well, it does not. Spinoza1111 goes from a quote that says that C's
sequence points are "particular to ANSI C" to a claim that this is an
idiom. From there he injects the idea that "idioms are usually signs
of a language mistake". But he is not yet done, because all that
misdirection was about sequence points. To extend that to unspecified
subexpression evaluation order, he simply relies on textual proximity:
by putting his claim just before the one that he says can be drawn
from the paper, he suggests that they are linked when, in fact, there
is no connection at all. It helps that the paper also discusses
evaluation order, but since the quote says nothing about C's choice
one way or the other, sleight of hand is needed to suggest that it is
critical.

Of course, the article /might/ be critical of C's choices, but the
quote does not support the claim. It is there just to fluff up the
argument. I can't be sure what the paper says because, like so many
journals, they want more than the price of a book just to read it, and
I am not /that/ interested in whether spinoza1111 has simply quoted
the wrong passage from it.

<masses snipped>

No point in asking you to trim your posts, I suppose?
 

Keith Thompson

spinoza1111 said:
No, it wasn't. At Bell Northern Research I realized that if constant
expressions are resolved at compile time, and absent a standard
dictating bad practice, the widest representation needed to be
used...because the programmer obviously meaning to calculate something
"at compile time" is often unable to specify the width of the result.
In the Nash case, pow() was used to compare a Long value but the
Microsoft compiler used Int precision, whereas the Borland compiler
didn't.

Nash, an extremely competent programmer, wanted to represent multiple-
precision numbers base big N flexibly. It made sense from the standard
of readability to express the limit as a power of 2, and the compiler
should have used long and not int precision.
[...]

There may well have been a problem similar to what you describe,
but I believe you have misunderstood it. The C pow() function
takes two double arguments and returns a double result. It does
not operate on int or long.

Incidentally, since C is case-sensitive, referring to these types
as Int and Long merely causes confusion.

Integer powers of 2 can be represented using the shift operator. For
example, 2**20 can be represented as 1<<20. But on a system with,
say, 16-bit ints and 32-bit longs, the expression 1<<20 will overflow;
since 1 and 20 are of type int, the result of 1<<20 is of type int.
Overflow can be avoided by writing 1L<<20 (or, perhaps better,
1UL<<20, since shifts on signed types can cause problems).
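A small sketch of the point, assuming for illustration a system with 16-bit
int and 32-bit long:

#include <stdio.h>

int main(void)
{
    /* 1 << 20 is computed in type int; on a 16-bit-int system it
       overflows before the conversion to long ever happens.       */
    long risky  = 1 << 20;
    long safe   = 1L << 20;            /* shift performed in type long  */
    unsigned long safest = 1UL << 20;  /* unsigned avoids signed-shift
                                          corner cases                  */
    printf("%ld %ld %lu\n", risky, safe, safest);
    return 0;
}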

My guess (and it's only a guess) is that you're suggesting that, since
the mathematical result of 1<<20 is outside the range of type int on
some particular system, it should yield a result of type long. This
would, I believe, cause far more problems than it would solve.

As it is, the type of 1<<20 can be determined entirely from the type
of its left operand (in this case, int). Having it depend on the
value of the result would mean that the type of 1<<20 would vary from
one implementation to another.

It would also mean that, given:
    int x = 1;
    int y = 20;
either the type of x<<y would differ from the type of 1<<20 (though
you'd expect the two expressions to be equivalent), or the type of
x<<y would depend on the run-time values of x and y, something that's
not possible in a statically typed language.

Or I suppose you could avoid the problem by having the << operator
(and, for consistency, most other operators) consistently yield a
result of the largest integer type of the appropriate signedness.
But that would make it impossible to use 16-bit or 32-bit arithmetic
(except as a compile-time optimization that cannot always be
performed).

If my somewhat wild guess about what the actual problem was happens to
be correct, and if Nash is as competent a programmer as you say (which
I have no reason to doubt), I'm sure he realized his error reasonably
quickly, corrected the code, and moved on.
 

Nick Keighley

all of
you creeps praise [Schildt] for his clarity, which shows you don't know the
meaning of that word, for it means "conducive to understanding and
acquiring JUSTIFIED TRUE BELIEF".

1. Free from opaqueness; transparent; bright; light;
luminous; unclouded.
[Webster]
 

Seebs

Because you make, for example, the claim that nondeterminacy makes a
language more efficient.

It makes it more amenable to efficient implementations.
Whereas in my book I point out that strictly
speaking, a language can neither be efficient nor inefficient.

It can, however, be amenable to an efficient implementation, or extremely
difficult to implement efficiently.
You seem incapable of understanding how when and why we optimize.

In the real world, optimization relies on finite resources. One of the
things that people do when building optimizers is allocate finite
resources both to implementing the optimizer, and to the execution of
the optimizer.

There are compilers out there which can spend hours to days optimizing
specific hunks of code... And this can be *cost-effective*. Keep that
in mind.

So, the question is: Are there real-world cases where the option of
reordering evaluations would permit optimizations which would be otherwise
prohibited? The answer is a completely unambiguous "yes".

Now, you might argue, perhaps even persuasively, that either such
optimizations will change the behavior of the code, or they could be
performed anyway. True enough.

But! What you have apparently missed is that they might change the
behavior of the code *in a way that the user does not care about*.

Let's take the example from Schildt's text:

x = f1() + f2();

As he correctly notes, the order of execution of f1 and f2 is not defined.

Now, consider:

int
f1(void) {
    puts("f1");
    return 3;
}

int
f2(void) {
    puts("f2");
    return 5;
}

Obviously, these functions have side effects. Reordering them results in
an observable change.

However, it is not at all obvious that the user cares which is which.

If we were in a case where the ABI's rules made it more expensive to perform
operations in one order than another, the option of reordering them could
improve performance. If it produces changes in behavior, that *might* be
a problem -- unless those changes are harmless. If all it changes is the
order of log messages, the developer might well prefer a faster form to
a slower one, even though it "changes" the output -- because the change is
irrelevant.
That's absurd.

But it is what he said.
(1) Schildt was not denying a minor free(NULL) because especially in
teaching, de minimis non curat lex: the law does not deal in
trivialities. Only an INCOMPETENT teacher insists on this level of
detail when introducing students to essentials. The tyro has no use
for free(NULL).

Ahh, but this is not a book labeled "Baby's First C Primer". It claims
to be "The Complete Reference". You could dispute whether the original
text ought to have mentioned this, but consider the description of
free() on page 434:

free() must only be called with a pointer that was previously
allocated with the dynamic allocation system's functions (either
malloc(), realloc(), or calloc()). Using an invalid pointer in
the call will probably destroy the memory management system and
cause a system crash.

Here, we are not talking about an introduction to the essentials; we are
talking about a *reference work*, billed as such, and it states something
flatly untrue. It is absolutely the case that a null pointer is "an invalid
pointer".
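
For reference, ISO C itself guarantees that free() accepts a null pointer and
does nothing; a minimal sketch:

#include <stdlib.h>

int main(void)
{
    char *p = NULL;
    free(p);      /* well-defined: if the pointer is null, no action occurs */
    free(NULL);   /* likewise */
    return 0;
}
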
(2) It's crazy to attack Herb for saying that in effect (x)[A(x)] (for
all x, property A is true) when he does not go on to say, there must
be only ONE free(). The student who's awake knows already that free(x)
returns x to available storage and that because of this x has no
referent. You're asking him when speaking to repeat other facts in
such a way that would only confuse. You say he's clear, and in this
you are right. You want him to be as unclear as the Standard would be
for the beginner!

People have asked on clc before why free(x) is failing when it worked the
previous time. After all, it was previously allocated.
Outside of programming, you need to start assuming the best of people
and not the worst.

I do! For instance, when I first saw one of your long posts on this topic,
I responded with careful thoughtful analysis to many of your claims and
asked for evidence for them, which was not forthcoming.

Having realized that you're generally unwilling to support your claims,
I've stopped bothering; now I'm just killing time and enjoying the fun.
For boneheads, who need to free() more than once because their code is
leaky.

Have you ever heard the phrase "belt and suspenders" applied to computing?

If you want to write robust code, it is not enough to be sure that someone
should have done something right -- even if you are that someone and have
verified it. You should be careful anyway.

Consider the following data structure:

struct foo {
    int x;
    int y;
    unsigned char *name;
};

You might find that you have a number of these, and that most but not all of
them have a name. What should foo_free() look like?

void
foo_free(struct foo *p) {
    if (p) {
        free(p->name);
        free(p);
    }
}

Now, why are we checking 'if (p)'? Because if an error elsewhere in the
program results in inadvertently passing a null pointer in, the reference
to p->name would invoke undefined behavior.

In pre-ISO C, you had to write:

    if (p) {
        if (p->name)
            free(p->name);
        free(p);
    }

and this usage, though common, was obnoxious -- and people sometimes forgot.
Fixing this improved the language.
Naw, I just used a fancy French word. Do you know what a *frisson* is?

Yeah, I just don't see the relevance.
Schildt nowhere claimed that the stack must be laid out in any
particular way any more than a math teacher says that in order to
conform to Euclid, the triangle must be the same size as that which he
draws on the board.

Incorrect. A math teacher might refer to "a" triangle, but will rarely
refer to "the" triangle.

Again, I'll quote for you directly from the book:

Figure 16-1 shows conceptually how a C program would appear in memory.
The stack grows downward as it is used.

+----------------+
|     Stack      |
|       |        |
|       v        |
+----------------+
|       ^        |
|       |        |
|  Free memory   |
|      for       |
|   allocation   |
+----------------+
|Global variables|
+----------------+
|                |
|    Program     |
|                |
+----------------+

(Figure 16-1 lovingly reproduced in vi, use fixed-pitch font plz.)

He continues:

"Memory to satisfy a dynamic allocation request is taken from the
heap, starting just above the global variables and growing towards the
stack. As you might guess, under fairly extreme cases the stack may
run into the heap."

This is not a mere illustration of one possible way of doing things; this is
a clear assertion that they are done in a particular way, and that that
particular way has consequences the reader must be aware of.
The student needs, at the cost of some illusions which can be
unlearned at a later date, to be helped over difficult ground. When
you've both taught and written a computer book, you'll have standing
in this field. Lists of errors don't give you standing.

Uh.

I have both taught (though not in college), and written a computer book.

I wrote a book on shell programming, which has only had one complaint made
so far about it, which is that the publisher insisted on tacking the word
"beginning" onto something that the most experienced shell programmers I
know have all considered to be rather advanced.

I have been, at some length, told that I did a very good job of presenting
the shell in a way that allows new readers to pick up the basics without
misleading them.
In your dreams. A stack, like a right triangle, is a "the" not an "a".
Herb, by referring to "the" stack was referring EITHER to the stack he
illustrated or its functional equivalent.

Ahh, but "its functional equivalent" might never "run into the heap".
He really was referring to a specific thing, not to an abstract
generalization.
In this you concede game, set and match. As I have shown, "clarity"
leads to understanding: understanding is knowledge of that which is
true.

You haven't shown it, you've asserted it. At most, you've established
that I was mistaken to claim that he was clear, but really, the dictionary
hasn't got your back this time.
It wouldn't have worked in my classes in C for the IBM Mainframe at
Trans Union in Chicago, and probably not even in my classes in C for
prospective computer science majors at Princeton. As it happened, a
few students at Trans Union complained that I used too much math. My
remit at Princeton wasn't to teach computer science and abstract data
structures. It was to get some students started in C.

That's nice.
Give newbies the credit most deserve.

I have found that relying on the reader's telepathy makes for a poor
learning experience. Fundamentally, while it's true that *most* readers
may be able to guess what you meant even when you say something else, it
is not true that *all* will -- and even "most" won't get it right all the
time.

If it were impossible to write easy-to-understand text without introducing
grave errors that the reader must magically guess were intended to be viewed
as possible examples rather than definitions, I think you would have a case;
we could reasonably assert that we have to make the books that will help the
largest number of people learn.

It's not impossible, though.

Kim King's _C Programming: A Modern Approach_ discusses "stacks" only in
terms of the abstract data type. It does not refer to the storage of
automatic objects as "a stack", and yet, I've yet to hear of anyone being
confused by King's particularly lucid explanation of how function calls
work.
I think you padded "C: the Complete Nonsense" by counting what Herb
did not say as positive sins of omission as if he should have written
a computer science treatise AND a standard.

Not really. If he says something which is untrue, that's a mark against
the accuracy of the book. Even just qualifying things with statements like
"on many common machines" would be enough that I wouldn't worry too much,
because it'd be enough to give the reader a warning not to rely too heavily
on this.

The problem with your model is that the reader might as well assume that,
while printf was used on some specific systems, other systems will have a
totally different formatted printing routine. Without some marker for
when he's talking about the language in general, and when he's talking about
a particular compiler, the reader can't be expected to know which things
generalize and which don't.
I don't think it would have been appropriate to prefix use of the
stack with a pompous prologemena on the Idea of the Stack.

Perhaps so -- in which case you could omit it entirely, since you can explain
C quite clearly without referring to "the stack".
Computer
people KNOW that things can be done in different ways.

But again, remember, we're talking about newbies. Notice how newbies come
here with questions like "I tried to type ^Z and it didn't get EOF", because
they *don't* know that things can be done multiple ways; they know that ^Z
is an EOF character, or that someone told them that anyway, and they're trying
to figure out what's happening.

-s
 

Seebs

Earlier on in this debate, you thought that unspecified evaluation
order was something introduced for the first time by C99.

Objection! You're assuming that the things he says are connected to
the things he believes, and I don't think we have support for that
claim at this time.
Microsoft's documentation is notoriously poor, but nobody ever said
that the order is non-deterministic. It /is/ determined - by each
implementor individually, according to his or her platform
constraints and opportunities.

More importantly, it's not at all clear that Microsoft's documentation
actually says what he thinks it does.
"[...] Where
several operators appear together, they have equal precedence and
are evaluated according to their associativity."
Not true. For example, in the test I conducted here just now, the MS
compiler evaluated A() * B() in accordance with associativity, but
A() ? B() : 0 the "wrong" way round wrt associativity.

Furthermore, operators and operands are not the same thing.

Which is to say: I suspect what the documentation writer MEANT to say
was that associativity determined which operators yielded the operands
for which other operators, and that the confusion about order of evaluation
was purely confusion.
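
A small sketch of that distinction, using hypothetical functions a(), b() and
c(): associativity fixes how the operands are grouped, but says nothing about
which call runs first.

#include <stdio.h>

static int a(void) { puts("a"); return 8; }
static int b(void) { puts("b"); return 4; }
static int c(void) { puts("c"); return 2; }

int main(void)
{
    /* Left-to-right associativity means this groups as (a() - b()) - c(),
       so the value is always 2; the order in which the three calls are
       made, and hence the order of the output lines, is unspecified.     */
    printf("%d\n", a() - b() - c());
    return 0;
}
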
No. I don't know where you got this non-deterministic nonsense from.
Evaluation order is an implementation decision. The implementor
determines evaluation order. Therefore, evaluation order is
determined, and therefore it cannot be non-deterministic.

I do not think this is the case.

The order of evaluation of subexpressions (and the order in which
their side effects take place) is unspecified, not implementation-defined.
(6.5, p3).

So.

Hypothetically, an implementor with a particularly large amount of spare
cash and nothing sane to do with it might choose to implement a
non-deterministic order of evaluation, possibly even relying on a hardware
entropy source.

In practice, however, it's merely not particularly predictable to the user;
evaluations may occur in different orders in even marginally different
contexts, or at different optimization levels, and so on.
Excuse me, but are you the autistic twerp to whom you are referring? I
ask only for information. I assume you mean you, since you're the
only twerp in this discussion, but I must admit I didn't know you
claimed to be good at maths.

Oh, I'm almost certainly technically a twerp. I sorta like it. I'll
tell Beloved Spouse that I am twerpsychorean.
Well, you can think so if you like, but hoping won't make it so.

I have no idea what he's talking about. I thought this guy just popped
up in 2008, were there lulz before that?
Why on earth not?

Because he's got ego investment in the theory that the page is harmful
and bad and should be wholly retracted if any part of it falls short
of the highest standards. You know, like the way that, having discovered
that some corn once went bad, we forever banned the growing of corn for
human consumption in any form.

-s
 

Seebs

It just happens to be the version I have. (In my defence, I didn't
actually buy it. A friend saw it on a remainder pile for a couple of
quid, and grabbed it as a gift for me. How nice.) If you have any
specific questions about it, let me know.

Check the example, probably right near page 53, of the "put_rec" function
which tries to write an array to disk (using sizeof(rec) to obtain
the size of an array argument...).

I believe that in the 2nd edition, the test is
if(len <> 1)

(Note: I just checked that it is page 53 in the third edition. I last
looked at that, I think, in 1995 or so. I have no idea why I remember
this and can only find my car keys about one day in two.)

-s
 

Seebs

Right. Unspecified does not mean non-deterministic. Yes, it's possible
that an implementor might /choose/ a non-deterministic order of
evaluation, but the Standard does not require it.

The term "an implementation decision" sounds eerily similar to "implementation
defined", and could be read that way. But furthermore, I'm not sure it is
true that the evaluation order is determined!
Yes, but EGN was under the impression that the Standard /required/
non-determinism, which it does.

This is actually funnier with the typo than without.
"Contemptible person"? No, I don't think so.

Demonstrably so, in fact! You will notice that Spinny appears to hold
me in contempt. So do a number of other people.
I first encountered him in, I think, 1999.

Wow.

I feel like a geek who only found out about Monty Python a couple of years
ago.

-s
 

Seebs

Right (apart from spaces). In 2e, it's page 58. Here's the complete
text:

Agh!

I tried to get the spaces wrong the way they were in the text, but apparently
I simply CAN'T type that.
"Coded as shown, put_rec() compiles and runs correctly on any
computer, no matter how many bytes are in an integer."

In fact, even after you've added the include and the calling code that
opens the file, and after you've corrected the <> nonsense, the code
will *only* work correctly on computers that have pointers at least
six times as wide as integers.

Interestingly, that part remains untouched in the 3rd edition -- what amuses
me is that the <> to != change proves the code was at least looked at, and
that moves this from "hah hah, what a silly oversight" to "wow, you just
don't get it, do you."

-s
 

Keith Thompson

Seebs said:
Wow.

I feel like a geek who only found out about Monty Python a couple of years
ago.

Except that Monty Python is, you know, funny.

(Now watch Spinoza complain that I've insulted him by saying he's not
funny.)
 

Keith Thompson

Richard Heathfield said:
Seebs wrote: said:
Check the example, probably right near page 53, of the "put_rec"
function which tries to write an array to disk (using sizeof(rec) to
obtain the size of an array argument...).

I believe that in the 2nd edition, the test is
if(len <> 1)

Right (apart from spaces). In 2e, it's page 58. Here's the complete
text:

/* write 6 integers to a disk file */
void put_rec(int rec[6], FILE *fp)
{
    int len;

    len = fwrite(rec, sizeof rec, 1, fp);
    if(len<>1) printf("write error");
}
[...]

It hardly seems worth mentioning that the error message is written
to stdout rather than stderr, and without a newline.

I usually compile code before posting it here, though I sometimes
don't bother if it's sufficiently short and simple (and yes, that
sloppiness does sometimes come back and bite me). I can hardly
imagine not bothering to compile code before publishing it in a book.

Then again, I suppose the process of converting compilable code
into printable pages is non-trivial, and errors could creep in
during typesetting, especially in the old days when more of the
process was manual. But that doesn't explain this example.
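
For comparison, a corrected sketch (not from the book): inside put_rec() the
parameter rec is really a pointer, so sizeof rec yields the size of a pointer
rather than of six ints. Spelling out the size avoids the trap, and the error
message goes to stderr with a newline:

#include <stdio.h>

/* write 6 integers to a disk file */
void put_rec(int rec[6], FILE *fp)
{
    size_t len;

    len = fwrite(rec, sizeof(int), 6, fp);   /* six ints, not sizeof rec */
    if (len != 6)
        fprintf(stderr, "write error\n");
}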
 

Seebs

You know and I know that "unspecified behaviour" and
"implementation-defined behaviour" are distinct. We know how they're
distinct, and we even know /why/ they're distinct. So are you arguing
vicariously?

Long story short: I've started trying to be careful with terms which,
in the absence of the formal definition, could easily be seen as semantically
equivalent to a particular formal term. Imagine someone who doesn't read
the standard for a hobby being told the following two things:

1. The sizes of the various types are implementation-defined;
the implementation must figure out what sizes to use for the types,
implement that, and document it somewhere.
2. Order of evaluation is an implementation decision; it must
be determined.

There are plenty of people who would interpret these as, on their face,
clearly referring to the same basic class of things -- those which are
decided or defined by the implementation.
True, but that doesn't count, because he holds /everyone/ in contempt.

Oh, it's okay. There are many people out there who hold only a few people
in contempt, but count me as one of them. Usually, in context, I end up
feeling it's probably meant to be complimentary.

-s
 

spinoza1111

all of
you creeps praise [Schildt] for his clarity, which shows you don't know the
meaning of that word, for it means "conducive to understanding and
acquiring JUSTIFIED TRUE BELIEF".

    1. Free from opaqueness; transparent; bright; light;
        luminous; unclouded.
        [Webster]

You've deliberately chosen the wrong definition: the visual
definition. Furthermore, you've selected an inferior dictionary. The
OED has two definitions, one relating to visual clarity and the other
linking "clarity" to understanding. Its definition of "understanding"
is the link to knowledge, and its definition of "knowledge" defines it
as "justified true belief".
 

spinoza1111

No, it wasn't. At Bell Northern Research I realized that if constant
expressions are resolved at compile time, and absent a standard
dictating bad practice, the widest representation needed to be
used...because the programmer obviously meaning to calculate something
"at compile time" is often unable to specify the width of the result.
In the Nash case, pow() was used to compare a Long value but the
Microsoft compiler used Int precision, whereas the Borland compiler
didn't.
Nash, an extremely competent programmer, wanted to represent multiple-
precision numbers base big N flexibly. It made sense from the standard
of readability to express the limit as a power of 2, and the compiler
should have used long and not int precision.

[...]

There may well have been a problem similar to what you describe,
but I believe you have misunderstood it.  The C pow() function
takes two double arguments and returns a double result.  It does
not operate on int or long.

At this date I do not clearly recall whether Nash used pow or shift.
Incidentally, since C is case-sensitive, referring to these types
as Int and Long merely causes confusion.

I shall flagellate myself accordingly.
Integer powers of 2 can be represented using the shift operator.  For

You mean "calculated" (I can pick a nit or two as well)
example, 2**20 can be represented as 1<<20.  But on a system with,

I'll alert the media
say, 16-bit ints and 32-bit longs, the expression 1<<20 will overflow;
since 1 and 20 are of type int, the result of 1<<20 is of type int.

Which is what effectively occurred.
Overflow can be avoided by writing 1L<<20 (or, perhaps better,
1UL<<20, since shifts on signed types can cause problems).

If memory serves this failed to work but the Borland compiler worked
for the fix. I do not recall if this was the fix at all.
My guess (and it's only a guess) is that you're suggesting that, since
the mathematical result of 1<<20 is outside the range of type int on
some particular system, it should yield a result of type long.  This
would, I believe, would cause far more problems than it would solve.

If l is a constant, this is what I recommend. I'm in fact trying to
recall whether the complete expression was a constant expression.
Higher-level and more intelligent programmers in my experience prefer
to avoid "magic numbers" and express mathematical relationships as
constant expressions using (in C) preprocessor names.

As it is, the type of 1<<20 can be determined entirely from the type
of its left operand (in this case, int).  Having it depend on the
value of the result would mean that the type of 1<<20 would vary from
one implementation to another.

Not in this scenario. A true constant expression such as 25*8 (or
TWENTY_FIVE * EIGHT where TWENTY_FIVE and EIGHT are preprocessor
variables, or more sensibly FORCE * MASS) has no type at all and I
believe the compiler developer can choose what he thinks is best. I
think widest precision is best.

It would also mean that, given:
    int x = 1;
    int y = 20;
either the type of x<<y would differ from the type of 1<<20 (though
you'd expect the two expressions to be equivalent), or the type of
x<<y would depend on the run-time values of x and y, something that's
not possible in a statically typed language.

Or I suppose you could avoid the problem by having the << operator
(and, for consistency, most other operators) consistently yield a
result of the largest integer type of the appropriate signedness.
But that would make it impossible to use 16-bit or 32-bit arithmetic

That was my solution on an internal compiler but ONLY for constant
expressions that used its preprocessor variables. As you know these
are defined in C with values that are strings and do not have any
type.
(except as a compile-time optimization that cannot always be
performed).

If my somewhat wild guess about what the actual problem was happens to
be correct, and if Nash is a competent a programmer as you say (which
I have no reason to doubt), I'm sure he realized his error reasonably
quickly, corrected the code, and moved on.

That is what happened. He probably would have figured out the problem
without me, but as it happened we worked it out together. At other
times, I and other members of Information Centers assisted him, and I
had other tasks including teaching C, doing C, and working in many
other languages including PL/I, Rexx, and assembler.
 

spinoza1111

Flash Gordon said:
spinoza1111 wrote: [...]
So stop replying.
If people stop replying you will stop inventing and posting rubbish then?

[...]

He probably wouldn't stop entirely, but I suspect the volume would
decrease considerably.  Most of what he posts is replies to others
(most of whom are replying to him).  If people stopped replying,
he'd have less to talk about.

But it's unlikely that's going to happen, unfortunately.

You are being dishonest since you replied concerning the Nash
question.
 

spinoza1111

I have read the paper. It deals mostly with a method that can be used
to evaluate order of evaluations. The quote was the complete paper
abstract. The paper itself is an interesting read but not for the reasons
suggested.

I don't have 30 quid otherwise I'd have read it. I probably shouldn't
have cited it but I wanted to show that C's indeterminacy is
recognized in academia.
 

spinoza1111

Keith said:
Flash Gordon said:
spinoza1111 wrote: [...]
So stop replying.
If people stop replying you will stop inventing and posting rubbish then? [...]

He probably wouldn't stop entirely, but I suspect the volume would
decrease considerably.  Most of what he posts is replies to others
(most of whom are replying to him).  If people stopped replying,
he'd have less to talk about.
But it's unlikely that's going to happen, unfortunately.

At least I only reply very rarely, but you are correct and I probably
should not have bothered. I'll probably go back to ignoring him for
another 6 months or more...

Nonsense, you're always here, Flashie. Without much of value compared
to Keithie boy, Dickie Heathfield or Bennie "B" Bacarisse. I'm the
best thing that's happened to clc in a long time since I create lively
discussion, I do my homework, and while I'm rusty on C, I have a
serious background with the language.
 

Seebs

You've deliberately chosen the wrong definition: the visual
definition. Furthermore, you've selected an inferior dictionary. The
OED has two definitions, one relating to visual clarity and the other
linking "clarity" to understanding. Its definition of "understanding"
is the link to knowledge, and its definition of "knowledge" defines it
as "justified true belief".

Something is clear if it leads you to an understanding *of what was said*,
not necessarily of the thing it was said about. The statement "elephants
are usually green" is quite clear -- you quickly and easily understand the
statement, allowing you to form a justified true belief as to the statement's
meaning.

Having done so, you can also quickly form a justified true belief that the
statement is false. However, that does not detract from the fact that you
were easily and quickly able to discern its meaning accurately.

The map is not the territory.

-s
 

Seebs

If l is a constant, this is what I recommend. I'm in fact trying to
recall whether the complete expression was a constant expression.
Higher-level and more intelligent programmers in my experience prefer
to avoid "magic numbers" and express mathematical relationships as
constant expressions using (in C) preprocessor names.

Often so!

But consider:

#define MEGA 1<<20
#define LIKELY_FACTOR 5

What is MEGA*LIKELY_FACTOR?

(Hint: It is probably not 5 << 20.)
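
Unpacking the hint: MEGA*LIKELY_FACTOR expands to 1<<20*5, and since * binds
tighter than <<, that is 1<<100. The usual remedy is to parenthesize the macro
body; a minimal sketch, assuming int is at least 32 bits:

#include <stdio.h>

#define MEGA          (1<<20)   /* parenthesized, so it multiplies as a unit */
#define LIKELY_FACTOR 5

int main(void)
{
    /* Unparenthesized, MEGA*LIKELY_FACTOR would expand to 1<<20*5,
       i.e. 1<<100, which is undefined behaviour.                   */
    printf("%d\n", MEGA * LIKELY_FACTOR);   /* 5242880 */
    return 0;
}
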
Not in this scenario. A true constant expression such as 25*8 (or
TWENTY_FIVE * EIGHT where TWENTY_FIVE and EIGHT are preprocessor
variables, or more sensibly FORCE * MASS) has no type at all

Wrong.

First off, there is no such thing as a preprocessor "variable". Preprocessor
things are called "macros", and this matters, because they don't have the
semantics of variables. In particular, they're merely strings of text until
there's a need to interpret them.

Secondly, 25*8 has a type. 25 has a type (int), 8 has a type (int), so 25*8
is an int.
and I
believe the compiler developer can choose what he thinks is best. I
think widest precision is best.

I have no idea where you got the idea that constants don't have types in C.
They do, although the rules for their type can be surprising. They're cleaner
in C99 than they were in C89.
That was my solution on an internal compiler but ONLY for constant
expressions that used its preprocessor variables. As you know these
are defined in C with values that are strings and do not have any
type.

Again, they're not variables. It's true that they don't really have a
type DURING PREPROCESSING, but:

#define FOO 25
#define BAR 8
unsigned long x = FOO*BAR;

This does not yield an initializer of 200; it yields an initializer of
25*8, which is an integer constant expression and has type int.
That is what happened. He probably would have figured out the problem
without me, but as it happened we worked it out together. At other
times, I and other members of Information Centers assisted him, and I
had other tasks including teaching C, doing C, and working in many
other languages including PL/I Rexx and assembler.

Quoting your credentials while mistakenly asserting that constants in C
don't have a type tends to undermine the credentials rather dramatically.

-s
 
