C++ sucks for games


David Steuber

Gerry Quinn said:
I don't find that. Succinct, perhaps, but I see no special merit in
terseness. Surely the ease of recognition of common patterns and idioms
is far more important?

If the code fits into a common idiom or pattern, it would probably be
easiest to just have a name for it. The "anaphoric if" pattern would
be a simple case where the pattern can be expressed more easily by a
name than with code structure.
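
For instance, the classic anaphoric if (AIF, from Paul Graham's
_On Lisp_) names the tested value "it", so the result of the test
doesn't have to be recomputed or bound by hand. A minimal sketch:

  (defmacro aif (test then &optional else)
    ;; Evaluate TEST once and bind the result to IT (deliberate
    ;; capture -- that's the anaphora), then branch on it.
    `(let ((it ,test))
       (if it ,then ,else)))

  ;; (aif (gethash key table) (process it) (complain))
  ;; rather than writing (gethash key table) twice.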

Of course, every language has common idioms. In C, I would expect to
see:

i++;

rather than:

i = i + 1;

And I would expect to see for(;;){...} for infinite loops rather than
while(1){...}.
I speak as one who has of late taken to scoping off any part of a
function that has variables local to that part, with a pair of curly
brackets that have a line each. (Dividing functions into 'mini-
functions' that can be refactored as necessary, if you like.)

Maybe I've just got a bigger screen. (Or maybe it's because, using a
mouse, it's easier to scroll?)

This is something I see a lot of in Lisp. Throw in the refactoring,
and vertical space is conserved.
Fortunately, this Lisp advantage has been incorporated into all
languages since Fortran, by way of the underscore '_' character. The
new 'camelCase' technology has also helped with giving comprehensible
names to symbols. (Of course it only works when case is significant...)

I've never liked the underscore '_' character because I have to hold
down the SHIFT key to type it. This is not so with the hyphen '-'
character. In C, I tend to use 'camelCase' to avoid the underscore.
I would prefer camel-case, but C doesn't allow it: the tokenizer
reads camel-case as the expression camel minus case.
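
In Lisp, by contrast, the hyphen is an ordinary constituent
character, so hyphenated names are single symbols. A trivial sketch
with made-up names:

  (defun frames-per-second (frame-count elapsed-seconds)
    ;; FRAME-COUNT and ELAPSED-SECONDS are each one symbol; there
    ;; is no infix minus for the hyphens to collide with.
    (/ frame-count elapsed-seconds))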

I'm also no longer convinced that case sensitivity is a good thing for
program tokens.
 

Wade Humeniuk

Jon said:
It seems to me that the point you make about it taking as long in
lisp as in C++ (where the original claim was lisp was more
productive) cannot be substantiated meaningfully without this extra
context.

When I wrote my version of Concentration I did not have it in my mind
that I wanted to show that I was more productive. In this thread there
has been a desire for verification that (Common) Lisp is even capable
of being used for any application (let alone a game). I just wanted to
show that it 1) can be, 2) has reasonable performance, and 3) is
deliverable in a fairly small footprint. I also hope some people who
have not seen Lisp before will actually look at the Lisp code. (It
does not bite.)


Wade
 

Jerry Coffin

[ ... ]
Uh, if you look at the comments, that file looks a hell of a lot like a
(very direct) translation of some Fortran code.

I probably _should_ have looked at the comments at the top, but
immediately went looking for what corresponded to what I happened to
be looking at in C++ right then. Then, when I really started _looking_
at the code, I mostly just wanted to quit looking as quickly as
possible!

I suppose I really _should_ have looked for some other code, (almost
ANY other code) but quite frankly, looking at that took away any
interest I might have had for such a search, at least for a while.

[ ... ]
I doubt you'd find any competent Lisp programmer who would actually
write new code like that - it's nasty.

I certainly _hope_ not!
It is unfortunate that your search turns that up as the first result.

True -- I certainly doubt it's anywhere close to representative. OTOH,
as I understand things, part of how Google determines ranking is other
links to that site, indicating that this code is not being left to the
obscurity it so richly deserves. Worse, given that it does show up as
the first result of an obvious search, it probably gets seen quite a
bit (certainly more than it deserves).
 

Ray Blaak

David Steuber said:
I'm also no longer convinced that case sensitivity is a good thing for
program tokens.

In a pure ASCII world you are probably right.

In a Unicode world there is no other choice: case *in*sensitivity is confusing
and ambiguous in general. It is better to simply ignore it as an issue.
 

Wade Humeniuk

Jerry said:
[ ... ]

Uh, if you look at the comments, that file looks a hell of a lot like a
(very direct) translation of some Fortran code.


I probably _should_ have looked at the comments at the top, but
immediately went looking for what corresponded to what I happened to
be looking at in C++ right then. Then, when I really started _looking_
at the code, I mostly just wanted to quit looking as quickly as
possible!

I suppose I really _should_ have looked for some other code, (almost
ANY other code) but quite frankly, looking at that took away any
interest I might have had for such a search, at least for a while.

[ ... ]

I doubt you'd find any competent Lisp programmer who would actually
write new code like that - it's nasty.


I certainly _hope_ not!

I have taken some time to look at the code. My impression is that it is
not so bad. When writing the code, the author's primary concern was
correctness against a reference implementation. I also see this as the
overriding concern. As a user of the code, if there were a problem I
could find the reference and directly check whether everything was OK.
If the code had been rewritten to satisfy some aesthetic criterion,
that all-important checking would be lost. Kudos to the author of the
code for keeping to the spirit of Lisp (expressing the solution in the
most natural language of the experts)! As for all the variable names
being terse, all I can say is that they have MEANING to the experts.

I also find it strange that you would choose whether to use a library
based on some aesthetic judgement. If one were to look at the internals
of LINPACK one might be appalled, but LINPACK works, it's tested, and
it's fast. Real code tends to be cruddy inside its package, but that is
hardly important. One should not look a gift horse in the mouth.

Wade
 

stedetro

This thread brings up a good question. Although I don't know Lisp, and
it looks too complicated for someone like me, I hear only the greatest
things about it from experts in most programming languages. I hear it
is probably the most flexible language in the world and can do things
in ways other languages can't.

My question is why is everyone not using Lisp or why is Lisp not more
popular in all programming fields?

Stede
 

William Bland

I don't find that. Succinct, perhaps, but I see no special merit in
terseness. Surely the ease of recognition of common patterns and idioms
is far more important?

Yes, excessive terseness is bad. However, I find the *existence* of common
patterns to be a sign that I haven't got my design right yet, and that I
might need a new macro.
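
To make that concrete, here is a sketch with made-up names: if the
same acquire/use/release boilerplate keeps showing up, one macro
makes the pattern vanish from the call sites.

  (defmacro with-locked-resource ((var acquire-form) &body body)
    ;; Bind the acquired resource to VAR and guarantee release even
    ;; on a non-local exit. RELEASE-RESOURCE is assumed to exist;
    ;; all names here are illustrative only.
    `(let ((,var ,acquire-form))
       (unwind-protect
            (progn ,@body)
         (release-resource ,var))))

Callers then write (with-locked-resource (r (acquire)) ...) and the
pattern is no longer something readers have to recognize by eye.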

Cheers,
Bill.
 

Peter Seibel

This thread brings up a good question. Although I don't know Lisp, and
it looks too complicated for someone like me, I hear only the greatest
things about it from experts in most programming languages. I hear it
is probably the most flexible language in the world and can do things
in ways other languages can't.

My question is why is everyone not using Lisp or why is Lisp not more
popular in all programming fields?

For one possible answer, check out _The Wisdom of Crowds_ by James
Surowiecki, particularly the section on "plank road fever". While the
overall thesis of his book is that crowds can be quite smart, he also
discusses several situations in which the wisdom of crowds breaks
down--such as when too many people start basing their decisions on
decisions made previously by other people.

-Peter
 

Jerry Coffin

[ ... ]
Taking what I presume you consider a decent piece of C++ code as a
starting point, and then comparing it to the google "I feel lucky"
Common Lisp version, isn't a terribly good way to eliminate personal
bias. In fact, just "terrible" describes that method better, IMHO.

Sounds a lot like sour grapes to me. Perhaps I didn't make it clear,
but I picked out both pieces of code in essentially the same fashion.
Other than the fact that the Lisp code was so horrendous, I probably
wouldn't have made a big deal about it in either case.
And, I suspect, don't FIR filters fit my description of one of the two
areas where the benefits of Common Lisp aren't all that clear, namely
some special purpose number crunching?

Well, I certainly wouldn't consider FIR filters particularly "special
purpose" at all -- quite the contrary, they're useful (and heavily
used) in a huge variety of situations.

They're also closely related to the FFT, DCT, etc., used in many forms
of lossy compression. Now, perhaps to you "HDTV", "cell phone", "CD",
"DVD" and "MP3" all sound so obscure that something used in all of the
above qualifies as "special purpose", but here on planet Earth, that's
not particularly accurate.
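
For concreteness, the direct form of a FIR filter is just a sliding
dot product of a fixed coefficient vector with the most recent
inputs. A minimal, unoptimized sketch in CL (names hypothetical,
double-float vectors assumed):

  (defun fir-filter (coeffs input)
    ;; Direct-form FIR: y[i] = sum over j of coeffs[j] * input[i-j],
    ;; for the j's that stay in range.
    (let* ((n (length input))
           (k (length coeffs))
           (output (make-array n :element-type 'double-float
                                 :initial-element 0d0)))
      (dotimes (i n output)
        (dotimes (j (min k (1+ i)))
          (incf (aref output i)
                (* (aref coeffs j) (aref input (- i j))))))))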

You've also conveniently forgotten that you claimed that even in these
cases, Lisp was still just as easy to use, while C and C++ merely
provided better performance.

Now, I've yet to see Lisp measure up in performance for these jobs,
but let's face reality: even if we totally ignored performance, none
of the Lisp code we've seen for the job so far has been anywhere close
to as nice as the C++ versions.

Now, I'll certainly admit that there are times it's worth trading off
some readability to gain performance, and other times the reverse --
but it's a lot harder to justify the Lisp code when it's slower AND
harder to read.
I mean, when you have some
functional abstraction with a very clearly defined mathematical
relationship between input and output, and algorithms without too much
fancy control flow, then almost any language/syntax will do (sometimes
even assembly), and effortless (predictable) performance becomes the
more important factor.

It sounds to me like you've just said that Lisp provides advantages
only when/if you've done such insufficient analysis that you shouldn't
be writing code of ANY kind yet.
Finally, it seems to me others here have shown that properly written
Common Lisp does quite a bit better than the example you found.

Better than that example, but the C++ code is still clearly superior.

[ ... ]
Right, it's everyone else who makes the bugs happen. What a pity it's
not you who writes all software.

You seem to specialize in non sequiturs. If I decide to use a long
long, I can certainly do so. Somehow you make a jump from "large
enough integer type" to "perfectly bug-free code". If that was
actually justified, then any language that provided
arbitrary-precision integers would guarantee bug-free code, but
somehow that doesn't seem to have happened.

At the same time, it should be pointed out that on the (relatively
rare) occasion that somebody cares, they can easily use
arbitrary-precision integers in C++. Doing the same in C is
possible, but you
have to use prefix-notation function calls to do nearly everything,
which is ugly (i.e. it looks like Lisp).
 

Hannah Schroeter

Hello!

rif said:
ps. I *do* feel that CL's lack of "inline" non-homogeneous arrays
(i.e. arrays of structures) gives C/C++ advantages for certain kinds
of programs which need to be simultaneously quite fast and extremely
memory efficient, but that's a different story.

Would it help to make arrays of (unsigned-byte 8) or (unsigned-byte 32)
and write inlined accessor functions, then?

Kind regards,

Hannah.
 

rif

Would it help to make arrays of (unsigned-byte 8) or (unsigned-byte 32)
and write inlined accessor functions, then?

Kind regards,

Hannah.

It will help for some but not all applications. If you want to have a
struct whose fields are all the same type, and that type can be
inlined in arrays, then you can just allocate a big array of that type
and build syntax to do the right thing. If your structs contain
fields of different types (say, (unsigned-byte 32)'s *and*
double-floats), then you have to either pay a memory penalty or move
to a "multiple arrays" setup (what would naturally be an array of
structs, where each struct is two ints and a double-float, becomes
an array of ints and an array of double-floats). If you're accessing
the arrays more-or-less sequentially, this is fine, and in fact may be
more efficient than the alternatives on modern hardware (thanks to
Duane Rettig for pointing this out). If you're accessing the arrays
randomly, you're going to get a lot less locality of reference, and
it's going to hurt you a lot.
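
Sketched out, the "multiple arrays" version might look like this
(field names are made up, and the INLINE declamation only helps on
implementations that honor it):

  (defstruct records
    ;; One specialized vector per field instead of one struct per
    ;; record: an (unsigned-byte 32) column and a double-float column.
    (ids     (make-array 0 :element-type '(unsigned-byte 32))
             :type (simple-array (unsigned-byte 32) (*)))
    (weights (make-array 0 :element-type 'double-float)
             :type (simple-array double-float (*))))

  (defun make-record-table (n)
    (make-records
     :ids (make-array n :element-type '(unsigned-byte 32)
                        :initial-element 0)
     :weights (make-array n :element-type 'double-float
                            :initial-element 0d0)))

  (declaim (inline record-id record-weight))
  (defun record-id (recs i)
    (aref (records-ids recs) i))
  (defun record-weight (recs i)
    (aref (records-weights recs) i))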

I think this would be a pretty delicious extension to CMUCL (or any
other CL that compiles to native code). I declare a structure type,
and get inlined arrays that can hold the structure type. I agree that
the consequences are undefined if I try to put anything else in the
array.

rif
 

Christopher C. Stacy

Ray Blaak said:
In a pure ASCII world you are probably right.

In a Unicode world there is no other choice: case *in*sensitivity is
confusing and ambiguous in general. It is better to simply ignore it
as an issue.

I think maybe case (in)sensitivity is being conflated with rich
character sets and their coding implementations. One certainly wants
to be able to enter all the necessary characters for one's native
language. But in which languages does the case of the letters in single
words actually change their meaning, other than to distinguish between
proper and common nouns?

Note that for compound words and phrases, I can see actual case
preservation (but still insensitive) as being preferable.

But most of the flaming I see about case sensitivity has nothing
to do with languages other than English, and I think that StudlYCaSE
is terrible in English. One good hint that it's an unnatural artifact
of infix syntax -- no dashes ("-") allowed -- is that youDontSeePeople
spellingMessagesLikeThis. (setf (makes-much-more-sense *this*) t).
That's my 2c subjective analysis of the practical and aesthetic issues.

(I can't imagine I've said anything here that hasn't been
argued 1000 times, though, so I'll shut up now.)
 

Christopher C. Stacy

Gerry Quinn said:
Well, there is truth in that, but on the other hand, "how fast can an
experienced programmer do it" is probably a pretty relevant metric. As
for inexperienced programmers, I don't think Lisp is going to take over
from VB any time soon...

Lisp is not being targeted particularly at Visual Basic programmers
at the moment, so your prediction makes sense. But it says nothing
about whether they would like it better. Historical experience has
shown that beginner programmers do very well with Lisp-like languages
compared against traditional BASIC. One thing that's missing from a
more modern comparison is the "visual" (drag-and-drop-and-properties)
programming environment for Lisp.
 

Christopher C. Stacy

Brian Downing said:
Uh, if you look at the comments, that file looks a hell of a lot like a
(very direct) translation of some Fortran code. So if it looks like
Fortran, that's probably why. Compare with the output of f2c.

Heck, it even says at the top:

;;; translation of most of the FORTRAN programs given in "Digital Filter
;;; Design" by Parks and Burrus

I doubt you'd find any competent Lisp programmer who would actually
write new code like that - it's nasty.

Automatic FORTRAN -> Lisp translation programs have been seriously
used for mathematical libraries since the 1970s. I wonder if the
source code you're looking at is the result of such a translation.
Usually, such translation systems try to preserve the structure
of the original FORTRAN program.

(Otherwise, I guess this was some human accomplishing the same thing.)
 

Paul Khuong

Ray Blaak said:
In a pure ASCII world you are probably right.

In a Unicode world there is no other choice: case *in*sensitivity is confusing
and ambiguous in general. It is better to simply ignore it as an issue.
If you program with Unicode characters, I can only hope you don't
charge for it. Is that an ASCII 'r' or a lookalike glyph from some
other script?... And people complain about APL, which, at least,
wasn't ambiguous. Non-ASCII characters belong in strings and
comments.
 

Ray Blaak

Ray Blaak said:
In a Unicode world there is no other choice: case *in*sensitivity is
confusing and ambiguous in general. It is better to simply ignore it
as an issue.
[...]
But in which languages does the case of the letters in single words actually
change their meaning, other than to distinguish between proper and regular
nouns?

The canonical example I know of is: in German, the lowercase of "ß" is
"s". The lowercase of "S" is also "s". Certainly "ß" should not be
considered the same as "S". But how should the identifier tokens
"foos", "fooS" and "fooß" compare in a case-insensitive language?
But most of the flaming I see about case sensitivity has nothing
to do with languages other than English, and I think that StudlYCaSE
is terrible in English.

It is terrible in English. So are many identifiers that are only in a single
case. So are many other identifiers in general.

My take on this is simple: use good identifiers based on how comprehensible
they are, period.

The issue of case-(in)sensitivity can be viewed as an orthogonal one. In the
presence of Unicode, being case insensitive is not worth the trouble.
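
To see why, consider what a per-character fold does. A sketch (the
behavior of CHAR-UPCASE on #\ß is implementation-dependent; it
typically leaves it alone, since there is no single-character
uppercase):

  (string-equal "fooß" "fooS")   ; => NIL under simple folding
  (string-equal "fooß" "fooss")  ; => NIL (lengths differ)
  (string-upcase "fooß")         ; => "FOOß", not the German "FOOSS"

STRING-UPCASE works character by character, so no per-character
folding of identifiers can match the language's real case rules.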
 

Vladimir Sedach

To write a good chess program, I suppose, one does not have to
be a good chess player, but rather a good programmer. An AI chess
program is "opaque"; it does not play chess the way its author
would, so one cannot see the chess-playing style of its author
"through it".

No, but it does play endgames like those stored in its database by
the programmer. The difference between chess and art is that the
former is a formal system (in fact, I don't think it is incorrect to
think of a chess game as one path in a non-deterministic serial
cellular automaton with very weird rules, but then again maybe I'm
reading into ANKOS too much). Others have pointed out that you can
generate art (and anything and everything else, really) by randomly
flipping bits in a bitmap (from what I can tell, Aaron works in a
scale-independent representation, but let's consider only this
representation of the final product), but there is a threshold
difference - the board is so large that a "tree search" to make
pictures is wholly unfeasible (but of course if you can come up with a
decision criterion for "art," you've certainly accomplished
something!).
Possibly, because the program was not capable of developing
something new.

Something I recall about that incident now (I read about it in Pamela
McCorduck's _Aaron's Code_) is that it happened sometime in the 70s,
back when Cohen hand-colored Aaron's drawings. He is a very distinctive
colorist, and in that sense I guess it did limit the style of Aaron's
output more than its database of subject representations.
In the same sense, an A.I. program will not just be an
implementation of the style, ideas or thoughts of its creator,
but rather an open system with a memory of its own, able to
interact, evolve and create something new.

Well, by those criteria, no one has managed to make anything close to
an AI program in any field and likely won't for the foreseeable
future. What Harold Cohen has done (and what I think AI is really all
about, despite the current trend in connectionism) is make explicit
the rules for "making art," at least in a style similar to his own. To
go back to the chess issue for an example, I read a book called
_Blondie24_, where the authors described how they trained a
modest-sized neural network to play moderately well at checkers
(except for the endgames, oops!), then go on to hype connectionism and
how all these "self-learning" (note that they stopped training the
system when they put it into play against real opponents) systems are
the future. Not that this was hot news or anything (the book was
published in 1999), but of course they don't know _how_ (or even
really why) the network works. About the only thing close to research
they did in the book was observe (and perpetrate) gender role-play on
checkers websites (hence the name of the book). Anyway, to close my
"Perl Harbor sucked and I miss you" rant, if you want "an open system
with an own memory, being able to interact, evolve and to create
something new," you're out of luck.

Vladimir
 

Stefan Ram

Ray Blaak said:
The canonical example I know of is: in German, the lowercase of
"ß" is "s". The lowercase of "S" is also "s". Certainly "ß" should
not be considered the same as "S".

Actually, "ß" already is lowercase.

The language usually does not require an uppercase version of
"ß", because there are no words beginning with "ß". (Unless
one considers "ß" to be a word in itself, but this is usually
written as "das Eszett" ["the eszett"] or "das 'ß'" [which,
strictly speaking, would be wrong, because German nouns must have
their first letter capitalized].)

If one still insists on using capital characters, there are
/two/ uppercase versions: the default uppercase spelling is
"SS", while the spelling "SZ" has to be used when the default
spelling might cause ambiguity. So the capitalizations of
"Masse" and "Maße" (two different words with different
meanings) are "MASSE" and "MASZE", respectively.
 

Jerry Coffin

[ ... ]
I have taken some time to look at the code. My impression is that it is
not so bad.

In that case, I certainly hope I never see anything you'd admit WAS
"so bad".

Beyond that I have little left to say to you -- it would appear to me
that your views of programming are sufficiently different from my own
as to preclude any hope of a rational discussion between us on the
subject.
 
