How to convert Infix notation to postfix notation


Seebs

Bone up on compiler optimization and tail recursion. I'm not gonna
explain it to you.

It just seems non-obvious to me that always returning a value is necessarily
better than not having to return a value.
Do you own Sethi Aho et al 2nd ed.?

Probably. :)
Don't even presume to lecture me, kiddo.

You posted code, you got feedback. :)
Your consideration for others
is on display in your treatment of Schildt.

I would never *dream* of giving you advice based on the theory that you were
interested in showing consideration for others. I'm telling you how to reduce
the likelihood that people will think you're an idiot. You can pursue that
out of pure self interest, no worries!
You have a disturbing tendency to lapse into sloppy English and
corporatese whenever you want to stop thinking, and when this harms
other people, it needs to stop.

Corporatese? I don't think so.
What on earth is "the reader's flow?"

People performing tasks such as "reading diagnostic messages" or "skimming
output from code" tend to perform them more reliably and more quickly when
the format and structure of the material is conducive to maintaining a
state called "flow".

http://www.eric.ed.gov/ERICWebPorta...&ERICExtSearch_SearchType_0=no&accno=EJ525775

Basic familiarity with the literature: Always a plus.
Haven't you run the latest edition?

These comments were all on the original edition. You asked what benefit
a format string might have, I gave an example.
It does this visually on any
monospace window by putting a dollar sign under the point at which the
error is discovered...although now that I think of it, it's probably a
mistake on my part to assume monospace output.

Not an unreasonable one, though.
No, it's organizing material for easy retrieval in a modern editor
with intellisense.

It still doesn't work.
He's not the originator. "Systems Hungarian" was already in use in IBM
when he was still in Hungary helping to destroy socialism as a punk
kid who needed a kick in the ass. It is in fact mentioned in a book by
Richard Diebold published in 1962.

Perhaps, but he's the reason people mostly adopted it -- and while Systems
Hungarian may well have been useful in 1962, it's been worthless since the
80s or so in nearly all languages.
Actually, for trivial algorithms, I can.

Probably not for this one. (If you're on x86, for instance, consider
that there's a single instruction for this operation, which is likely to
get used if it's a good fit. There's a reason compilers often have built-in
implementations of common library functions.)
I didn't pay a dime for Microsoft C++ Express.

No one said you did. I was referring to the "Lot Of Useless Crap". You
already got the Lot Of Useless Crap. It's there. You might as well use
it.
Yes. The confusion is here created not by me, however, but by C's lack
of a boolean type.

One of the things that fluent speakers learn is that beyond the raw formal
lexicon of a language, it will usually have idioms. C has had idioms for
true and false since 1978, and it would make sense to stick with those
idioms. (Or, if you prefer, you could always use the boolean type, since
there is one in C99.)
This is childish. "I care about all sorts of useless shit like what
main() returns but not about interpersonal decency, nor elegance I
didn't invent".

Oh, I'm all for elegance, regardless of who invented it, but -1 isn't
elegant.
You see, C lacks Boolean values,

Not since 1999 it doesn't. :)

But even in 1978... Let's consider the question.

If we are to define "false" and "true", what should they be?

Obviously:

if (false) {
}

should not execute the contents of the block. How about... 0?

Okay. So what's true? "!false" is true. What's !0? 1. true is 1.
and -1 is more
visible than 1: more readable: more elegant, and, in a twos complement
system, it uses the literary technique of "evocation", for it evokes
in the intelligent code reader a vision of all ones (and it's more
visible at what we used to call core dump time, and what is now The
Time of Blue Screen of Death).

Your continued assertions that you are the gold standard of the "intelligent
reader" are unsupported.

However, it gets a bit deeper than that.

There are many functions and APIs which have standardized on using negative
values to indicate *errors*. Because of this, readers who are experienced
with C will usually regard a "-1" as an error indicator. Your theory that it
would make sense for it to be regarded as a particularly idiomatic "true"
is fascinating, but it doesn't match the real world.
Not in any substantive way.

Your code has a buffer overrun that exists because you didn't pay attention
to the well-established idioms that allow experienced programmers to avoid
buffer overruns.

That's pretty substantive.
My goal was to get something coded and see
how you behave in a structured walkthrough

Ahh, that won't happen. I'm not about to put real effort into this stuff.
I view you as an amusing kook, no more. The moment I saw multiple responses
from you continuing to assert your tinfoil hat theory about how the
standards process worked, without even ONE bit of supporting evidence, I
gave up. I have not taken you seriously since, nor do I now.
Words, my lord: words words words. But that's not the problem. "In
context" is meaningless corporatese.

No, it isn't.

It's odd, because you keep talking about what users would expect, but you
appear to have carefully avoided learning anything about how users form
their expectations.
The "user" (sigh). Sloppy English: because in a structured walkthrough
users have no place, no more than managers. You invoke "the user" as a
deus ex machina: a Lacanian phallus. But literally, that's the person
who as you say needs to be innocent of the details!

Lots of handwaving, but you haven't addressed the point. The purpose of the
code is to be used. The person using it does not care about the internal
implementation; that person's sole interest is to get an infix expression
converted to RPN, for reasons we are not privileged to speculate on. :)
If you'd learn how to write properly, which you can't,

*snicker*

Bored now, gonna go change the litterbox.

I was hoping you'd stay fun longer, but you've actually really dropped off
in hilarity lately, such that I'd rather go deal with trash night. (One of
the few holidays nearly all Americans still revere.)

-s
 

spinoza1111

In <[email protected]>,

spinoza1111 wrote:


That has not been my experience of you.

You rarely make genuine technical remarks, dear boy. Complaining about
test questions, repeating saws, maxims, and folk-lore, and trashing
people online don't count, my dear fellow,
 

Kenny McCormack

That has not been my experience of you.

You rarely make genuine technical remarks, dear boy. Complaining about
test questions, repeating saws, maxims, and folk-lore, and trashing
people online don't count, my dear fellow,

So obvious. So true. And yet, Petey will come along any second now and
issue a denial.
 

spinoza1111

It just seems non-obvious to me that always returning a value is necessarily
better than not having to return a value.


Probably.  :)

Oh great, so you haven't opened it.
You posted code, you got feedback.  :)

I'm not complaining. You are. You need to use Ben Bacarisse as a role
model. If you have technical insight, share it. Shove your corporatese
and generalizations about people's competence (whether Schildt's,
Navia's, or mine), up your ass, because this medium simply isn't one
where you have a broad enough signal to make the online judgements you
make on Navia and I, and you were out of line with Schildt because you
don't know Microsoft platforms.

Schildt is weak on non-Microsoft and should not have used upper case
file names in #include statements for this reason. But for the same
reason, you weren't qualified to tech review his book. What is worse,
you enabled a series of personal attacks because you only published 20
flaws, but made references to hundreds, which you have not, as far as
I can see, documented.
I would never *dream* of giving you advice based on the theory that you were
interested in showing consideration for others.  I'm telling you how to reduce
the likelihood that people will think you're an idiot.  You can pursue that
out of pure self interest, no worries!

I have plenty of consideration for others, but unlike a feminized
corporate dweeb "this animal defends itself when attacked".
Corporatese?  I don't think so.

I do. When you are at a loss for enough to say technically you venture
without warning into abstractions with which you're not qualified to
deal. For example, your claim that Schildt somehow made software
incorrect by publishing a book is nonsense, because what makes
software incorrect is the refusal of technicians to abandon C.
People performing tasks such as "reading diagnostic messages" or "skimming
output from code" tend to perform them more reliably and more quickly when
the format and structure of the material is conducive to maintaining a
state called "flow".

No shit, Dick Tracy. Have you ever heard about the use of white space
in technical writing?
http://www.eric.ed.gov/ERICWebPortal/custom/portlets/recordDetails/de...

Basic familiarity with the literature:  Always a plus.


These comments were all on the original edition.  You asked what benefit
a format string might have, I gave an example.


Not an unreasonable one, though.


It still doesn't work.

Corporatese, because a naming standard doesn't work or fail. It is
usable or otherwise. Learn to write before you write another document
that trashes a decent person's career, punk.
Perhaps, but he's the reason people mostly adopted it -- and while Systems
Hungarian may well have been useful in 1962, it's been worthless since the
80s or so in nearly all languages.

Your laziness doesn't control. I find Systems Hungarian easy to
maintain. I also find that most programmers are too lazy to maintain
another's naming standards. Systems Hungarian was abandoned out of
laziness. Simonyi was told how to name variables at IBM but went his
own way, stealing the idea and taking credit for it.

Probably not for this one.  (If you're on x86, for instance, consider
that there's a single instruction for this operation, which is likely to
get used if it's a good fit.  There's a reason compilers often have built-in
implementations of common library functions.)

When I see the need, I'll use assembler to get the instruction. The
legacy C libraries are too full of gotchas to be worth the trouble in
the case of strings. If I were to use C for a "real" project, first
thing I'd do would be to reinvent strings.
No one said you did.  I was referring to the "Lot Of Useless Crap".  You
already got the Lot Of Useless Crap.  It's there.  You might as well use
it.

Can I quote you as you misrepresent Schildt? Peter Seebach says, use
Useless Crap.

(Chortle)
One of the things that fluent speakers learn is that beyond the raw formal
lexicon of a language, it will usually have idioms.  C has had idioms for
true and false since 1978, and it would make sense to stick with those
idioms.  (Or, if you prefer, you could always use the boolean type, since
there is one in C99.)

I'd rather use C Sharp.
Oh, I'm all for elegance, regardless of who invented it, but -1 isn't
elegant.

I don't think you know what elegance is. For starters elegance is
conceptual unity, and for this reason I have always been sickened by
discussions among programmers, with their consistent misuse of
language ("language x isn't efficient, but language y is"). Right up
there is having to argue with some dweeb about whether a snippet is
elegant out of context. As it happened, it was best for me in my
limited time, simply to show that this conversion could be grammar-driven, to use -1 as truthiness.

I am so glad not to be a programmer anymore, because programmers don't
realize that while a guy like Dijkstra was indeed concerned with
elegance of coding languages, what he meant was integrated syntax and
semantics, and would probably regard it as silly to waste a person's
time criticising -1 as truthiness.

Also, while you yourself have not demonstrated the willingness or
ability to post code extempore, you probably excuse your own stylistic
and technical failings by calling new code throwaway, while acting as
if others only have the right here to post bug free code...a common
failing of the twerps here. I come here for your feedback on technical
matters on C exclusively because as I have said you appear to me to
know C while having much to learn about programming.
Not since 1999 it doesn't.  :)

But even in 1978... Let's consider the question.

If we are to define "false" and "true", what should they be?

Obviously:

        if (false) {
        }

should not execute the contents of the block.  How about... 0?

Okay.  So what's true?  "!false" is true.  What's !0?  1.  true is 1.

Wow, I'll alert the media. 1==!0? New math indeed.

You wouldn't say this if you had in the past to debug at machine code
level, because 1 looks too much like 0 to be a good way to represent
truthiness. -1 is best.
Your continued assertions that you are the gold standard of the "intelligent
reader" are unsupported.

However, it gets a bit deeper than that.

There are many functions and APIs which have standardized on using negative
values to indicate *errors*.  Because of this, readers who are experienced
with C will usually regard a "-1" as an error indicator.  Your theory that it
would make sense for it to be regarded as a particularly idiomatic "true"
is fascinating, but it doesn't match the real world.

I don't have to follow broken standards. I am very familiar, from
longer experience than you, that in many situations 0 means "all
clear". This is an unfortunate problem because in other contexts, it
means false.

The problem is not using -1 for truthiness. It is that globally, C is
a broken language that fosters bad practices, buggy code, personality
dysfunction, and the politics of personal destruction when people
scapegoat others for their refusal to use a safer language.

Your code has a buffer overrun that exists because you didn't pay attention
to the well-established idioms that allow experienced programmers to avoid
buffer overruns.

Well, sure, if I were in a C workgroup (not if hell froze over, but
let's suppose) then I would use my coworker's idioms which, no matter
what you believe, vary all over the map because C IS BROKEN. As it
happens, in the code you reviewed, there probably is an off-by-one
situation which I fixed this evening in code not ready for release.

FYI, this evening, I changed the malloc to malloc exactly n bytes
where n is the total number of characters in the infix expression that
aren't blanks or parentheses to see my append test FAIL (properly)
because I am now appending more than the allocated number of bytes,
less one for the Silly Assed Null, whereas earlier I'd allocated far
too many bytes.

Which was actually a good thing, because I realized in bantering with
you that I needed to make sure that append failed correctly!

It now (at this minute) fails all the time, because I "forgot" that I
ALSO add extra blanks around output tokens, and these weren't
accounted for.

If I were outputting infix, I would simply not output blanks, but
blanks are necessary in suffix because operands can be adjacent in
suffix.

The most elegant solution would be to only insert a blank where
necessary but this would require the routine which runs before parsing
and counts outputting characters to know when in the output two
operands would be adjacent. I believe that this would be when they are
separated in the infix by a binary operator.

This might complicate the pre-scan, and another less optimal solution
would be simply to request twice the bytes needed from malloc, an
inelegant solution indeed.

The point being I shall think this true
Without, in all probability, any help from you.
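The sizing question above admits a simple worst-case bound. A sketch (assumptions mine, not the actual code under discussion): count the characters that can reach the output, then reserve one separating blank per such character plus the terminating null, rather than trying to predict exactly where blanks are needed.

```c
#include <stddef.h>

/* Hypothetical pre-scan helper: an upper bound on the bytes needed for
 * the RPN output of an infix string. Blanks and parentheses never
 * appear in the output; every other character may be followed by one
 * separating blank, so 2*k + 1 bytes always suffice (k output-bound
 * characters, one blank each, one null terminator). */
size_t rpn_buffer_size(const char *infix)
{
    size_t k = 0;
    for (; *infix; infix++)
        if (*infix != ' ' && *infix != '(' && *infix != ')')
            k++;
    return 2 * k + 1;
}
```

Over-reserving by a provable bound like this is less elegant than a token-exact count, but it cannot under-allocate.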

That's pretty substantive.


Ahh, that won't happen.  I'm not about to put real effort into this stuff.

No, you prefer to shit all over a person and leave. As you did to
Schildt. You want sympathy for being autistic without taking
responsibility for the fact that autistic people are very bad at
understanding other minds...and have no standing when assessing the
knowledge of a Schildt, which you nonetheless do.

I view you as an amusing kook, no more.  The moment I saw multiple responses

**** you, asshole. I am nothing of the kind. I am an experienced
programmer with thirty years of experience who left the field in
disgust when it was taken over by autistic twerps.
from you continuing to assert your tinfoil hat theory about how the
standards process worked, without even ONE bit of supporting evidence, I
gave up.  I have not taken you seriously since, nor do I now.


No, it isn't.

It's odd, because you keep talking about what users would expect, but you
appear to have carefully avoided learning anything about how users form
their expectations.

Users? I don't use the word.
Lots of handwaving, but you haven't addressed the point.  The purpose of the
code is to be used.  The person using it does not care about the internal
implementation; that person's sole interest is to get an infix expression
converted to RPN, for reasons we are not privileged to speculate on.  :)

This is just wrong. For one thing, this "user" has to know, not
whether the code is "correct" (for in fact all code is "incorrect" in
the absolute sense that a computer is ultimately a finite state
automaton) but its limitations, and in this case and many others, this
"user" (I prefer "caller") would rather read the code. And note that
despite this, this "user" does not, in fact, give a rat's ass that the
code uses -1 to signal truthiness, since the code returns a string.

And you also failed, now that I think of it, to realize that
infix2Polish should NOT free() this string: it is the caller's
responsibility to free() it. Yet you added this to your Vicious Tirade
with the same lack of diligence you showed in "C: the Complete
Nonsense".

Yes, main() should: but you did not say this because as in the Vicious
Tirade you did not make constructive suggestions. Had you done so,
McGraw Hill might not have treated you like a joke.

Why is it when I submitted a proposal to O'Reilly in 1999, it was
accepted? Why is it when I contacted Apress in 2001 I was invited to
write a book? Why is it I get stuff published, whereas you failed to
get an errata published? Why did my 13 year old get errata published
and not you?

It's because you were destructive and had no insight as to what was
going on.

You should have done your homework as I did when I taught IBM
mainframe C at Transunion. I had enough familiarity from Princeton and
Bell-Northern to be able to describe the culture of C, including its
case sensitivity and use of ASCII in place of EBCDIC. Whereas you
attacked Schildt without having any experience, as far as I can see,
on his platforms.

You had no standing, and this is why McGraw Hill rejected you. You
were the crazy man with the tinfoil hat. The problem is that the
Internet empowers creeps like you to sound authoritative, because it
confuses pointers to facts with facts.

Withdraw "C: The Complete Nonsense". Apologize publicly to Herb. And
here, do not presume ever again to advise me on scientific or
stylistic issues. Confine your comments to your area of competence,
which is C.
 

Seebs

Oh great, so you haven't opened it.

I dunno, my memory's crap. I can't remember which books I've read.

But I don't see you advancing any argument that always moving data is
necessarily more efficient than not always moving data.
I'm not complaining. You are.

Uh, actually, you're complaining. You were claiming that my behavior
didn't meet your needs.
You need to use Ben Bacarisse as a role model.

While I certainly don't dispute that he'd be a plausible one, I am
not much for role models.
If you have technical insight, share it.

Why, certainly! Thank you for your permission. :p
Shove your corporatese

You keep using this word, I do not think it means what you think it means.
and generalizations about people's competence (whether Schildt's,
Navia's, or mine), up your ass, because this medium simply isn't one
where you have a broad enough signal to make the online judgements you
make on Navia and I,

I don't recall having made any particular judgements about Navia. I have
enough data to state confidently that you don't know what you're talking
about, as widely demonstrated across several different (though related)
fields of inquiry.
and you were out of line with Schildt because you
don't know Microsoft platforms.

First off, several of his claims were false even there.

Secondly, if he wanted to write a book called "C for Microsoft Platforms Only:
The Incomplete Reference With A Few Errors To Make Sure You Are Staying On
Your Toes", I have every confidence that it would be well-received.
Schildt is weak on non-Microsoft and should not have used upper case
file names in #include statements for this reason.

No, he shouldn't have used them because *they are never in any way a
better choice*. They don't make anything work that otherwise wouldn't.

Your inability to grasp this point astounds me. The question is not
one where there's a tradeoff. Covering the far/huge memory model crap?
Perfectly reasonable, because it's going to be useful to people on a common
platform. Note: *It offers them a benefit they cannot have by doing
things the standard way*. Using uppercase letters for headers specified
as lowercase? Stupid, *because it does not offer any benefit*.
But for the same
reason, you weren't qualified to tech review his book.

As long as it claims to cover ANSI C, rather than DOS/Windows-only, I am.
What is worse,
you enabled a series of personal attacks because you only published 20
flaws, but made references to hundreds, which you have not, as far as
I can see, documented.

So what? Plenty of other people have pointed out flaws, and there are plenty
more. A representative sample is more useful.
I have plenty of consideration for others, but unlike a feminized
corporate dweeb "this animal defends itself when attacked".

So does my cat, albeit more effectively.

Of course you do. But you can't actually, say, justify or support
that point.
When you are at a loss for enough to say technically you venture
without warning into abstractions with which you're not qualified to
deal.

Wait, that's not a telepathy helmet, it's a mirror!
For example, your claim that Schildt somehow made software
incorrect by publishing a book is nonsense, because what makes
software incorrect is the refusal of technicians to abandon C.

Even if we grant (despite the lack of support or evidence from you)
the theoretical notion that C is a source of errors, it's still noticeable
that those errors are exacerbated by people trying to learn from Schildt,
and mitigated by people trying to learn from other authors.
No shit, Dick Tracy. Have you ever heard about the use of white space
in technical writing?

Yes.

Keep in mind, unless you've got more books out there, I've done 2-5 times
as much technical writing as you have. White space? A good thing *used
intelligently*. Double-spacing paragraph text would not be a good thing.
Large spaces between members of lists, also not good.
Corporatese, because a naming standard doesn't work or fail. It is
usable or otherwise.

A naming standard presumably has goals. The goals given for Hungarian
notation are not met by Systems Hungarian. Therefore, it does not work.
Learn to write before you write another document
that trashes a decent person's career, punk.

Good thought. I'll just zip back in my time machine and tell my earlier self
to start learning to write seriously five or ten years ago... oh, wait, he
already did. Lucky that, I think my time machine's batteries are low, and
I haven't got a steam locomotive or a thunderstorm anyway.
Your laziness doesn't control.

But your incompetence does.
I find Systems Hungarian easy to maintain.

It certainly can be, but it is COMPLETELY WORTHLESS.
I also find that most programmers are too lazy to maintain
another's naming standards. Systems Hungarian was abandoned out of
laziness.

No, it was abandoned because it is COMPLETELY WORTHLESS (in a strongly-typed
language).
Simonyi was told how to name variables at IBM but went his
own way, stealing the idea and taking credit for it.

Have you ever gone five minutes without inventing a conspiracy theory
involving people stealing, lying, and cheating?
When I see the need, I'll use assembler to get the instruction. The
legacy C libraries are too full of gotchas to be worth the trouble in
the case of strings.

Only they're not, at least, not for strlen().

Basic familiarity with the tools is a prerequisite for coding effectively.
If I were to use C for a "real" project, first
thing I'd do would be to reinvent strings.

Badly.

Yes, we already saw that your first idea was to make something which
carefully checked the limits for an append -- incorrectly. That's not
encouraging.
Can I quote you as you misrepresent Schildt?

That's a philosophical question, since you've never shown that I've
misrepresented Schildt at all. You've come up with some incredible
and elaborate reinterpretations of his writing, ignored entire arguments,
and then lied about the rest, but you've never actually shown anything
about what he actually said or what I said about it.
I'd rather use C Sharp.

Please do. C has a problem with an undeserved bad reputation because
shitty programmers insist on using it then blaming the language for their
inability to learn to do simple things.
I don't think you know what elegance is.

Oh, of course not.
Also, while you yourself have not demonstrated the willingness or
ability to post code extempore, you probably excuse your own stylistic
and technical failings by calling new code throwaway, while acting as
if others only have the right here to post bug free code...

Who said that? Correcting code doesn't mean the poster didn't have the
right to post it, only that it had an error of some sort.

A lot of my code is sorta dodgy, certainly. That's one of the reasons
I hang out and look at code; to try to learn more.
You wouldn't say this if you had in the past to debug at machine code
level,

Sure I would.
because 1 looks too much like 0 to be a good way to represent
truthiness. -1 is best.

Nope. You're still wrong, and no amount of making excuses is gonna change
it.

In C, the canonically-true value is 1, not -1. -1 indicates an error.

You're acting like one of those foreigners who picked up a bit of a language
from a phrase book, and is now arguing with the native speakers that they're
pronouncing it wrong and don't have their idioms right.
I don't have to follow broken standards.

You don't, but if you don't follow a standard merely because you think
it's broken, you're going to be useless as a contributor to any project
that will ever involve other people.
The problem is not using -1 for truthiness. It is that globally, C is
a broken language that fosters bad practices, buggy code, personality
dysfunction, and the politics of personal destruction when people
scapegoat others for their refusal to use a safer language.

That's interesting, because you're the person here who writes buggy code
and indulges in ceaseless personal attacks. It seems to me that you're
projecting your own pre-existing failings on C.
Well, sure, if I were in a C workgroup (not if hell froze over, but
let's suppose) then I would use my coworker's idioms which, no matter
what you believe, vary all over the map because C IS BROKEN.

You keep saying C is broken, but other people don't seem to be having
the problems you do.
You want sympathy for being autistic

No, I don't.
without taking
responsibility for the fact that autistic people are very bad at
understanding other minds...

You might consider learning about things from sources other than movies
and television.

Autistic people are not necessarily always bad at understanding other minds,
and have noticeable advantages in doing so. Specifically, not being
constrained to accept the instinctive heuristics of the brain as Revealed
Truth. Big win, there.
and have no standing when assessing the
knowledge of a Schildt, which you nonetheless do.

Knowledge is easily assessed when looking at a book someone wrote on a topic.
**** you, asshole. I am nothing of the kind.

You have been ranting for almost two years about the Schildt thing.
What we know so far:
* You demonstrate no familiarity, at all, with the contents of his books
about C.
* You consistently misunderstand basic features of C.
* You make a broad range of claims about the standardization process, none
of which you have presented ANY evidence for.
* You contradict yourself freely.

In short, you behave as though you are a complete nutter.
I am an experienced
programmer with thirty years of experience who left the field in
disgust when it was taken over by autistic twerps.

It's a wonderful thing to know that you are totally above the sort
of personal attacks and character assassination you so decry. :)

.... and this still stands.

You haven't actually supported any of your claims, but you keep making
them.
Users? I don't use the word.

That's nice.
This is just wrong.

It's nice that you think that, but your handwaving and use of scare
quotes are not an argument.
And you also failed, now that I think of it, to realize that
infix2Polish should NOT free() this string: it is the caller's
responsibility to free() it.

You can define those responsibilities however you want.
Yes, main() should: but you did not say this because as in the Vicious
Tirade you did not make constructive suggestions.

I made a number of constructive suggestions in regard to your sample code,
but I didn't try to catch everything; it stopped being interesting.
Had you done so,
McGraw Hill might not have treated you like a joke.

You keep referring to this hypothetical event, but it never happened.
Why is it when I submitted a proposal to O'Reilly in 1999, it was
accepted?

I don't know, maybe it was a plausible-looking proposal? I suspect that
this was before your current incarnation as a Usenet troll.
Why is it when I contacted Apress in 2001 I was invited to
write a book?

Again, maybe you had a nice proposal?
Why is it I get stuff published, whereas you failed to
get an errata published?

I did not try to get an errata published. I contacted a publisher to
state that one of their books had a disturbingly large number of errors.
They offered me a small amount of money to do a technical review. At the
time, I thought the amount of money was too small, so I turned them down.
Why did my 13 year old get errata published
and not you?

Possibly he submitted errata, and I didn't?
It's because you were destructive and had no insight as to what was
going on.

No, it's because I never submitted errata. Sort of a key point, that.
You should have done your homework as I did when I taught IBM
mainframe C at Transunion. I had enough familiarity from Princeton and
Bell-Northern to be able to describe the culture of C, including its
case sensitivity and use of ASCII in place of EBCDIC. Whereas you
attacked Schildt without having any experience, as far as I can see,
on his platforms.

The book was not sold as being platform-specific -- and in any event, the
bulk of the complaints were not platform-specific.
You had no standing, and this is why McGraw Hill rejected you.

Except, again, they didn't.
Withdraw "C: The Complete Nonsense".

Not until you show it to be incorrect, rather than merely not as well written
as it would be if I did it again today.
Apologize publically to Herb.

Not unless you show that I was actually wrong. Not merely "acting according
to different priorities than some random guy on the internet". Show that the
FACTS were wrong, or shut up.
And here, do not presume ever again to advise me on scientific or
stylistic issues.
No.

Confine your comments to your area of competence, which is C.

I'll comment on anything I feel like, in my areas, plural, of competence --
which include C, shell, some other languages, and general issues of
programming style.

-s
 

Phil Carmody

Ben Bacarisse said:
I dithered about that. In the end, since it is used only on this
program, the name was just about OK.

Erm, but doesn't it tromp on reserved namespace?
vs.
size_t new_cap = s->capacity ? 2 * s->capacity : 8;

Measure? Having said that, surely some applications prefer
slower increments, and your measurements of time and wastage
might well be completely misaligned with others'. I'm pretty
sure the measurements have already been done - the C++ standard
library probably has a reasonable corpus backing it up.

Sleep-deprived, but still just able to type,
Phil
 

Ben Bacarisse

Phil Carmody said:
Erm, but doesn't it tromp on reserved namespace?

No, I don't think so but, to be honest, I am not sure -- and that alone
is reason enough to avoid the name. The code (of mine, BTW) from which
I borrowed that small part uses a capitalised name, but that looked odd
in the context of the other code, so I replaced it.

I think the str[a-z][a-z]* names are only reserved as identifiers with
external linkage (and probably as function-like macros when string.h
is included) though I admit to finding the passages in the standard
less than crystal clear. I used it as struct tag and as a type name.

I hope someone with more certain knowledge can say one way or the other.

<snip>
 

Keith Thompson

Ben Bacarisse said:
Phil Carmody said:
Erm, but doesn't it tromp on reserved namespace?

No, I don't think so but, to be honest, I am not sure -- and that alone
is reason enough to avoid the name. The code (of mine, BTW) from which
I borrowed that small part uses a capitalised name, but that looked odd
in the context of the other code, so I replaced it.

I think the str[a-z][a-z]* names are only reserved as identifiers with
external linkage (and probably as function-like macros when string.h
is included) though I admit to finding the passages in the standard
less than crystal clear. I used it as struct tag and as a type name.

I hope someone with more certain knowledge can say one way or the other.

C99 7.1.3p1:

All identifiers with external linkage in any of the following
subclauses (including the future library directions) are always
reserved for use as identifiers with external linkage.

Each identifier with file scope listed in any of the following
subclauses (including the future library directions) is reserved
for use as a macro name and as an identifier with file scope in
the same name space if any of its associated headers is included.

This includes identifiers starting with str (or mem, or wcs) and
a lowercase letter. Using "string" as a struct tag is ok (though
probably inadvisable). Using it as a type name with file scope
is not.
 

Ben Bacarisse

Keith Thompson said:
Ben Bacarisse said:
Phil Carmody said:
typedef struct string string;

I don't like the name "string" for this. The term "string" is pretty
well defined in C, and I think this has real potential to confuse
readers in a larger program.

I dithered about that. In the end, since it is used only on this
program, the name was just about OK.

Erm, but doesn't it tromp on reserved namespace?

No, I don't think so but, to be honest, I am not sure -- and that alone
is reason enough to avoid the name. The code (of mine, BTW) from which
I borrowed that small part uses a capitalised name, but that looked odd
in the context of the other code, so I replaced it.

I think the str[a-z][a-z]* names are only reserved as identifiers with
external linkage (and probably as function-like macros when string.h
is included) though I admit to finding the passages in the standard
less than crystal clear. I used it as struct tag and as a type name.

I hope someone with more certain knowledge can say one way or the other.

C99 7.1.3p1:

All identifiers with external linkage in any of the following
subclauses (including the future library directions) are always
reserved for use as identifiers with external linkage.

Each identifier with file scope listed in any of the following
subclauses (including the future library directions) is reserved
for use as a macro name and as an identifier with file scope in
the same name space if any of its associated headers is included.

This includes identifiers starting with str (or mem, or wcs) and
a lowercase letter. Using "string" as a struct tag is ok (though
probably inadvisable). Using it as a type name with file scope
is not.

Thank you. For some odd reason I thought the type names were in their
own name space so I also needed to read 6.2.3 "Name spaces of
identifiers" to understand 7.1.3p1. Seems daft now but that was my
chain of thought...
 

spinoza1111

I dunno, my memory's crap.  I can't remember which books I've read.

So we listen to your opinions why?
But I don't see you advancing any argument that always moving data is
necessarily more efficient than not always moving data.


Uh, actually, you're complaining.  You were claiming that my behavior
didn't meet your needs.

No, your behavior is unacceptable.
While I certainly don't dispute that he'd be a plausible one, I am
not much for role models.


Why, certainly!  Thank you for your permission.  :p


You keep using this word; I do not think it means what you think it means.


I don't recall having made any particular judgements about Navia.  I have
enough data to state confidently that you don't know what you're talking
about, as widely demonstrated across several different (though related)
fields of inquiry.

...or that I know so much more, and am so free of learning disorders,
that you can't follow my reasoning.
First off, several of his claims were false even there.

Secondly, if he wanted to write a book called "C for Microsoft Platforms Only:
The Incomplete Reference With A Few Errors To Make Sure You Are Staying On
Your Toes", I have every confidence that it would be well-received.

Grow up. Most platforms are Microsoft.
No, he shouldn't have used them because *they are never in any way a
better choice*.  They don't make anything work that otherwise wouldn't.

Your inability to grasp this point astounds me.  The question is not
one where there's a tradeoff.  Covering the far/huge memory model crap?
Perfectly reasonable, because it's going to be useful to people on a common
platform.  Note:  *It offers them a benefit they cannot have by doing
things the standard way*.  Using uppercase letters for headers specified
as lowercase?  Stupid, *because it does not offer any benefit*.

Perhaps distinguishing lower and upper case file names is the mistake.
It might seem cool, but it's English - centric.
As long as it claims to cover ANSI C, rather than DOS/Windows-only, I am.


So what?  Plenty of other people have pointed out flaws, and there are plenty
more.  A representative sample is more useful.

No, they haven't. They've repeated what you have said, or they have
said that "Schildt sucks", which isn't pointing out a flaw at all.
So does my cat, albeit more effectively.


Of course you do.  But you can't actually, say, justify or support
that point.

I have supported it.

It is corporate-speak to pass on rumors about a person's competence
without substantiation, and this is what you've done with Schildt.
Wait, that's not a telepathy helmet, it's a mirror!


Even if we grant (despite the lack of support or evidence from you)
the theoretical notion that C is a source of errors, it's still noticeable
that those errors are exacerbated by people trying to learn from Schildt,
and mitigated by people trying to learn from other authors.

That's simply not the case. As it happens, people use his approaches
to generate correct software. What part of "critical thinking" and
"testing" don't you understand?
Yes.

Keep in mind, unless you've got more books out there, I've done 2-5 times
as much technical writing as you have.  White space?  A good thing *used

Wow, the autistic writer of documents nobody bothers to read.
intelligently*.  Double-spacing paragraph text would not be a good thing.
Large spaces between members of lists, also not good.


A naming standard presumably has goals.  The goals given for Hungarian
notation are not met by Systems Hungarian.  Therefore, it does not work.

In my experience this is not the case. Systems Hungarian makes
selection of data type a design point and not coding and the fact that
it's difficult to change is a good thing. This is because before
coding starts you need to know your data types.
Good thought.  I'll just zip back in my time machine and tell my earlier self
to start learning to write seriously five or ten years ago... oh, wait, he
already did.  Lucky that, I think my time machine's batteries are low, and
I haven't got a steam locomotive or a thunderstorm anyway.


But your incompetence does.

If you're a technical writer, what are you doing misrepresenting
yourself as a competent programmer?
 

bartc

Richard Heathfield said:
In
spinoza1111 wrote:

Absolute rubbish. If you're going by installation count, there are
approximately 1,000,000,000 PCs out there, not all of which run MS
operating systems. And there are well over four times as many mobile
phones, of which only a tiny fraction run Windows Mobile. And if
you're going by platform count, Microsoft has written a handful of
versions of Windows (1, 2, 3, 95, 98, NT3.5, NT4, ME, 2000, XP,
Vista, 7, plus slight mods thereof), and they released a few versions
of MS-DOS. There are at least that many different mainframe operating
systems, and then there's all the minicomputer OSs, not forgetting
alternate PC OSs, and all the Mac platforms, various games consoles,
and a whole bundle of mobile phone systems, engine management
systems, set top boxes, and various other embedded devices. MS is a
tiny drop in a rather large puddle.

I keep hearing all this. But since the 80's, most of the computers I've been
able to buy have come with a MS operating system. Most of the rest have been
Macs.

By computers I mean what you normally expect: a box with a screen and
keyboard.

There are also a number of products which might have microprocessors inside
but are not primarily computers (easy to tell when they contain any
programmable devices, because they take ages to power up, are temperamental,
slow, unresponsive, and half the time don't work; it seems not only MS are
capable of writing crappy software).

Regarding C, some development will be targeted at MS/PC platforms, the rest
at everything else.

I don't know what the mix is (counting developers, not numbers of
end-products), but I'd say the MS/PC lot are still a sizeable chunk; they
would welcome that book that was mentioned and there's no real reason for
them to care whether their product runs on anything else.

Likewise, someone developing C for your engine management system can't
really be expected to care whether his product will work on a PC.
 

Nick Keighley

trim your posts?

Oh, I'm all for elegance, regardless of who invented it, but -1 [for true]
isn't elegant.

I don't think you know what elegance is. For starters elegance is
conceptual unity, [...] As it happened, [-1 for true] was best for me in my
limited time, simply to show that this conversion could be grammar-
driven, to use -1 as truthiness.

you could use a macro and diffuse all these arguments

#define TRUE -1

I quite like the grammar-driven approach as an idea.


personally I thought this was one of C's mistakes. They should have
had a boolean type (or alias) from day 1 and null should have been a
language word. Oh, and function prototypes: they should have been in
from the beginning. Lint? feugh!

I don't think there's any should about it. Various languages have made
various decisions. This is why the actual values should be hidden
behind macros (or even language words!). We don't get our knickers in
a twist about the representation of floating point values or character
sets (ok, we /do/; but we shouldn't).
Wow, I'll alert the media. 1==!0? New math indeed.

well == isn't very standard mathematical notation to start with. You
are aware that in C the ! operator yields 1 when applied to 0?

  x    !x
  0     1
  1     0
  nz    0

where nz is any non-zero value. Hence !!x must yield either 0 or 1.
But of course Peter is begging the question.

You wouldn't say this if you had in the past to debug at machine code
level, because 1 looks too much like 0 to be a good way to represent
truthiness.

you're a hardware engineer, right? In my experience they're the people
who can't tell a 0 from a 1.

There are many [C] functions and APIs which have standardized on using negative
values to indicate *errors*.  Because of this, readers who are experienced
with C will usually regard a "-1" as an error indicator.  Your theory that it
would make sense for it to be regarded as a particularly idiomatic "true"
is fascinating, but it doesn't match the real world.

although this is a bit of an angels-on-the-head-of-a-pin discussion,
Peter Seebach /is/ correct here. When I read Spinoza's code I was
expecting -1 to be a failure indication. Idioms and cultural norms are
important.


you can't have a structured walkthru in a bunch of usenet posts. Who's
the Co-ordinator?

Users? I don't use the word.

You do actually. What alternative would you offer for "the people who
use a program"? If a user of my program says it is hard to use I
listen (I don't necessarily fix it but I do listen). If a random
person made comments he'd get less weight added to his comments.

I have, apocryphally, heard the comment "I don't like being called a
'user'; it sounds like I take drugs".

why don't the end-users have a place? Who else can assess usability?

You invoke "the user" as a
deus ex machina: [...] But literally, that's the person
who as you say needs to be innocent of the details!

the user interface and what the system does isn't a detail

This is just wrong. For one thing, this "user" has to know, not
whether the code is "correct" (for in fact all code is "incorrect" in
the absolute sense that a computer is ultimately a finite state
automaton) but its limitations, and in this case and many others, this
"user" (I prefer "caller")

caller and user are not synonyms

would rather read the code.

users don't typically read code. Though the user of an infix to
postfix library is not a very typical user...

[...] I had enough familiarity [...] to be able to describe the culture of C,
including its case sensitivity and use of ASCII in place of EBCDIC.

the C standard does not specify the use of ASCII

Whereas you
attacked Schildt without having any experience, as far as I can see,
on his platforms.

the point is, Schildt didn't have to write a platform specific book.
And if that is what he wanted to do he should have picked a better
title.

<snip>
 

Nick Keighley

Absolute rubbish. If you're going by installation count, there are
approximately 1,000,000,000 PCs out there, not all of which run MS
operating systems. And there are well over four times as many mobile
phones, of which only a tiny fraction run Windows Mobile.

and settop boxes, and mp3 players and engine management systems, and
HDTVs, and satnavs, and...

And the internet, and e-commerce, and factory production lines...

And... oh I give up.

And if
you're going by platform count, Microsoft has written a handful of
versions of Windows (1, 2, 3, 95, 98, NT3.5, NT4, ME, 2000, XP,
Vista, 7, plus slight mods thereof), and they released a few versions
of MS-DOS. There are at least that many different mainframe operating
systems, and then there's all the minicomputer OSs, not forgetting
alternate PC OSs, and all the Mac platforms, various games consoles,
and a whole bundle of mobile phone systems, engine management
systems, set top boxes, and various other embedded devices.

ah, ok you got those. Have you heard of Linux?

MS is a tiny drop in a rather large puddle.

It bugged me when the adverts came out for Windows 7. "A new version
of the Windows operating system which already runs 90% of the world's
computers".

No, the mistake is Schildt's *failure* to distinguish between upper
and lower case.


Hardly. A great many languages use case distinctions just as much as
English does, and in some cases even more (e.g. German).

most programs are written with the English alphabet.


Er, yes it is.


If they follow his advice, they can hardly fail to produce incorrect
software.

do either of you have any empirical evidence for your claims?

<snip>
 

Nick Keighley

Richard Heathfield said:
spinoza1111 wrote:
Absolute rubbish. If you're going by installation count, there are
approximately 1,000,000,000 PCs out there, not all of which run MS
operating systems. [lots of examples]

I keep hearing all this. But since the 80's, most of the computers I've been
able to buy have come with a MS operating system. Most of the rest have been
Macs.

You probably haven't bought many articulated lorries either but they
still exist.
By computers I mean what you normally expect: a box with a screen and
keyboard.

So all those machines that sit in air conditioned rooms running google
or e-bay or generating Pixar's latest aren't computers?
There are also a number of products which might have microprocessors inside
but are not primarily computers (easy to tell when they contain any
programmable devices, because they take ages to power up, are temperamental,
slow, unresponsive, and half the time don't work; it seems not only MS are
capable of writing crappy software).

Some things like MP3 players, digital cameras and GSM phones can't be
built without microprocessors.

Your phone would just be a rather ugly paperweight if it wasn't for
all the base-stations, switches, databases, management systems,
routers, billers, etc etc. Most of that stuff looks very much like a
computer to me and a lot of it doesn't run MS DOS.

One of those thingies mentioned above has over 3/4 million lines of
software in it. And *that's* not a computer?

Do you know a how many *million* lines of software there is in a
modern commercial aircraft?

Regarding C, some development will be targeted at MS/PC platforms, the rest
at everything else.

I don't know what the mix is (counting developers, not numbers of
end-products), but I'd say the MS/PC lot are still a sizeable chunk;
agreed


they
would welcome that book that was mentioned and there's no real reason for
them to care whether their product runs on anything else.

personally I'd prefer a book that clearly distinguished standard stuff
from non-standard stuff. Some of us like to write software that runs
on multiple platforms. That 3/4 million line program has been ported
between OSs once and between DBMSs another time.
Likewise, someone developing C for your engine management system can't
really be expected to care whether his product will work on a PC.

do you really think they test an EMS by bunging a PROM in and starting
the engine?

Host Based Development
 

Nick Keighley

On several non-PC projects I've worked on, we cared passionately that
the program would work on a PC, because that was the easiest place to
do development and debugging.

so we actually *like* PCs. They just aren't the whole world. I've
worked on stuff where the target platform didn't exist when we started
testing.
 

bartc

Richard Heathfield said:
Richard Heathfield said:
spinoza1111 wrote:
Grow up. Most platforms are Microsoft.

Absolute rubbish. [...] MS is a tiny drop in a rather large puddle.

I keep hearing all this. But since the 80's, most of the computers
I've been able to buy have come with a MS operating system. Most of
the rest have been Macs.

How many mainframe systems have you bought, personally, in the last
twenty years? How many minicomputer systems? And of course most of
the computers that you /have/ bought in the last twenty years aren't
MS platforms.

They don't seem to stock mainframes at PC World or Dixons.
If you can arbitrarily restrict the meaning of the word "computer" as
much as you like, it's easy enough to come to almost any conclusion
about them that you wish.

It's fairly obvious when something is a computer, i.e. a desktop or
laptop PC. Everything else is more specialised. There might be some
consumer gadgets that are getting close though.
 

Seebs

So we listen to your opinions why?

Because my opinions are frequently right.

Look at it this way: Who is better at mathematics, someone who knows all the
standard trigonometry relationships, or someone who can't remember them but
can rederive them on the fly any time they come up?
No, your behavior is unacceptable.

That's a "complaint". (It's also patently false, as at least some people
appear to accept it.)
...or that I know so much more, and am so free of learning disorders,
that you can't follow my reasoning.

Being a cautious sort, I did consider this as a possibility, briefly. I have
shown your arguments to a broad selection of people with either different
learning disorders or none that we know of. The responses are essentially
consistent; everyone thinks you're a nutter. This would be because you don't
HAVE any reasoning, just various assertions offered as a sort of miasma
of impotent rage against C.
Grow up.
Did.

Most platforms are Microsoft.

Not really. Most desktop computers are Microsoft -- but much of what
Schildt wrote is no longer true of them. The vast majority of CPUs running
C code, though, are not running MS operating systems, and many of the ones
which are, are cell phones or other devices which, again, don't conform to
the rules Schildt discussed.

But even if it were true, the best case is that we reduce the set of errors
to about two thirds of its current size, and observe that the other third
of the "errors" were in fact a single error, the claim that the book covered
all systems and portable code. That would get us from incompetence about C
to mere fraud, doubtless an improvement. :)
Perhaps distinguishing lower and upper case file names is the mistake.

Even if you were right, you'd still be wrong.

The job of a technical writer is to describe what is, not just what
should have been.

I'd hate to see you dealing with history books. "Schildt's claim that
the Japanese never attacked Pearl Harbor is ridiculous." "But that would
have been a horrible mistake; they'd have gotten America into WWII and
probably subsequently lost the war. Clearly, it was the Japanese that
made the mistake, Schildt is correct."
It might seem cool, but it's English - centric.

Doesn't matter. It's what is. It is an error for the book to describe
things incorrectly, *even if you don't like the way they are*.
No, they haven't.

Untrue. We've pointed you at several instances.
They've repeated what you have said, or they have
said that "Schildt sucks",

Untrue. No one has been identified as having said this. Seriously, find
an example.
I have supported it.

No, you haven't. You've claimed that you have, but normally you just post
non-sequiturs, occasionally you post untrue things.
It is corporate-speak to pass on rumors about a person's competence
without substantiation,

I never knew that junior high school students were the epitome of
"corporate-speak".
and this is what you've done with Schildt.

Only it isn't. It isn't what I've done with Schildt, either.

I've passed on verified factual claims with substantiation and support.
The support's strong enough that you're now reduced to claiming that
the real world should have been different so that the book wouldn't be
wrong. That's... Well, it's funny, and it's why I see no evidence
that *anyone* is taking you seriously anymore.
That's simply not the case.

Again, "yes it is". Go look at the Google Groups archives from back when
his books were popular.
As it happens, people use his approaches
to generate correct software.

Not demonstrably.
What part of "critical thinking" and "testing" don't you understand?

I understand those fine. However, that doesn't make the book correct; that
makes the book incorrect in a way that many users recover from.

Here's the great part.

Critical thinking is what happens when people realize that a source is
unreliable. One of the ways this can be streamlined is to tell them about it
up front. Then, they critically think about the value of the book, toss
it in the trash, and get one that doesn't suck.
Wow, the autistic writer of documents nobody bothers to read.

Fascinating assertion. Not particularly supported, but fascinating.
In my experience this is not the case. Systems Hungarian makes
selection of data type a design point and not coding and the fact that
it's difficult to change is a good thing. This is because before
coding starts you need to know your data types.

You continue to hammer home the fact that you have no clue what you're
talking about. The "data types" you need to know are at the level
of "structure" or "integer"; if you think you need to know whether something
is an int or a long during the design phase, your design is flawed.
If you're a technical writer, what are you doing misrepresenting
yourself as a competent programmer?

This makes no sense. Why should the categories be mutually exclusive?
If they are mutually exclusive, which are you?

-s
 

Seebs

It's fairly obvious when something is a computer, i.e. a desktop or laptop
PC. Everything else is more specialised. There might be some consumer
gadgets that are getting close though.

But if they have CPUs, they're still computers. We target a lot of those
things with C compilers.

-s
 
