C++ sucks for games


Frode Vatvedt Fjeld

Gerry Quinn said:
What do you mean? Rabbit::Jump() seems a perfectly applicable
example.

This is functional abstraction, which is all well and good. Lisp macros
provide for syntactic (and even linguistic[*]) abstraction, which opens
up a whole new playing field. The combination of functional and
syntactic abstraction is the dynamic duo that defines Lisp's power.

Gerry Quinn said:
Well, many of the boasts about Lisp seem to be about its ability to
create 'new languages' for a given problem domain. You mutter about
people not seeing the benefits, but you don't seem so keen to
elucidate them.

It is one of those things whose value is difficult to convey except by
personal experience. Imagine trying to convince a dedicated 1980's era
BASIC programmer of the benefits of functional abstraction. He'd say
something like "Oh, so a function is just a GOSUB with a name instead
of a line-number? Doesn't seem like much to me. Functions have
arguments and return values, you say? Well, duh, I pass arguments in
global variables, and I think anyone unable to keep their variables
straight is unfit to write programs. So this just helps weed out the
weaklings." Well, not quite.

Do you know what "functional programming" means? Some say it's
programming without side-effects, but really it's about writing your
programs in terms of functional abstraction. Or, in other words, in
terms of the function-call protocol your language provides in the form
of syntax and mechanism. Some people think this concept is the panacea
that should be applied universally to solve the software crisis. But
let's assume it's not. Still, the benefit of the clear, manifest,
predictable relationship between the program text and the result of
the program's execution is unquestionable (even if the hypothetical
BASIC-programmer preferred his global variables). Bugs tend to creep
in when programs have side-effects (which might be called ad-hoc
protocols, contrasted to the language's function-call protocol) that
have consequences for the program that are difficult to spot by
looking at the program text. It is one of the very great benefits of
syntactic abstraction to be able to massage program text such that one
gets the best of both worlds: the clear, manifest relationship between
program text and program execution results, and also
application-specific, non-function-call protocols/side-effects
(i.e. one is not restricted to "functional programming"). (And this is
not to say that there aren't many cases of programming with
side-effects where the consequences are clear and simple without the
use of macros.)

[*] By linguistic abstraction I mean syntactic abstraction carried out
to such a degree that one can no longer be said to be programming in the
original language. Normally, syntactic abstraction integrates
naturally with the original language.
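
To make the earlier point about syntactic abstraction concrete, here is a
small sketch (my own illustration, not from the original post) of a
side-effect protocol captured as a macro: the call site keeps the clear
text-to-behaviour relationship, while the open/redirect/close bookkeeping
lives in one place. The names WITH-LOG-TO and *LOG-STREAM* are invented
for the example.

(defvar *log-stream* *standard-output*
  "Stream that logging output goes to.")

(defmacro with-log-to ((path) &body body)
  "Send *LOG-STREAM* to the file at PATH around BODY, closing the
file even on a non-local exit."
  (let ((stream (gensym "STREAM")))
    `(let ((,stream (open ,path :direction :output
                                :if-exists :supersede)))
       (unwind-protect
            (let ((*log-stream* ,stream))
              ,@body)
         (close ,stream)))))

;; At the call site the whole protocol is one readable form:
(with-log-to ("/tmp/run.log")
  (format *log-stream* "starting~%"))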
 

Greg Menke

Gerry Quinn said:
The same thing I use the safety catch on a gun for. It stops me
shooting myself in the foot.

- Gerry Quinn

If the safety is the only thing keeping you from shooting yourself in
the foot, I think you should put away your gun for a while. And
perhaps re-examine your preconceptions about why access specifiers are
so important.

Gregm
 

Greg Menke

Computer Whizz said:
> "Computer Whizz"
> beginning of the evaluated form and the parameters are the rest of
> it. I can't see how that's "the same place" any more than the
> reception is in the same place as the janitor's closet. They're in
> the same building, sure, but the reception is right at the
> entrance.

Yes - but "our" buildings have walls between the rooms.
(OK - I sound like an ass a little there... Still, Lisp programmers seem all
happy and fine saying "I can read Lisp code OK" when it's not you - but the
editor reading it out for you.
I might use a simple syntax highlighter to show variables/functions/static
text but I can just as easily judge and evaluate raw C/C++/most other
languages - even if they've been typed by a monkey with no actual use of the
tab key... )

Obfuscated C is just as bad as obfuscated Lisp. It is amazing to me
that you make these claims about Lisp without having gotten a good
deal of experience actually using it.

Computer Whizz said:
That's where they differ more prominently! C/C++/whatever IS (at least in my
mind) very similar to English. It has words following each other in a
"flow".

In what way is Lisp different?

Gregm
 

Peter Lewerin

Gerry Quinn said:
When I wrote that I HAD done a smidgeon of research, and my impression
was that yes, Common Lisp has classes, but they have a very 'bolted-on'
look.

ROFL. CLOS fits seamlessly with non-OOP Lisp. There is a paradigm
difference between CLOS and non-OOP Lisp code, but the language is the
same, and it's quite possible to mix the two. In comparison, only in
recent years has C++ begun to even approach the level of integration
between OOP and non-OOP aspects that CLOS has, but there are still
areas where they just don't mix.
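
(A quick illustration of that seamless fit, my own sketch rather than
Peter's: generic functions dispatch on built-in, non-CLOS types exactly as
they do on DEFCLASS classes, so the two styles mix freely. The names
describe-thing and rabbit are invented for the example.)

(defgeneric describe-thing (x)
  (:documentation "Return a short description of X."))

(defmethod describe-thing ((x integer))
  (format nil "the integer ~d" x))

(defmethod describe-thing ((x string))
  (format nil "the string ~s" x))

(defclass rabbit () ())

(defmethod describe-thing ((x rabbit))
  "a CLOS rabbit")

;; Built-in types and user-defined classes share one protocol:
(mapcar #'describe-thing (list 3 "hi" (make-instance 'rabbit)))
;; => ("the integer 3" "the string \"hi\"" "a CLOS rabbit")
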
Gerry Quinn said:
They are built with Lisp macros, no doubt for ideological reasons

More likely because that's a very good way to build language tools.

Gerry Quinn said:
- perhaps as a consequence they seem to lack things we might expect such
as access specifiers. That's why I wrote what I did.

If there is anything in the Lisp macro system that precludes adding
access specifiers to slots (class members), I'd like to know about it.
 

Maahes

Possibly a good psychological example of the differences between C
programmers and Lisp programmers. C Programmers generally hate the idea of
an editor that understood conceptual blocks. I know I do. I like to program
left to right in text and don't want the compiler inserting brackets and
such.
 

Jon Boone

Gerry Quinn said:
The same thing I use the safety catch on a gun for. It stops me
shooting myself in the foot.

Can you be a bit more specific? For example, do you make
extensive use of the protected specifier? Or do you stick primarily
to the public/private specifiers?

--jon
 

Peter Seibel

Gerry Quinn said:
Well, many of the boasts about Lisp seem to be about its ability to
create 'new languages' for a given problem domain. You mutter about
people not seeing the benefits, but you don't seem so keen to
elucidate them.

Gerry, if you're actually interested in exploring what Lisp has to
offer, you might be interested in taking a look at the draft chapters
of my soon-to-be-published book on Common Lisp, available at:

<http://www.gigamonkeys.com/book/>

It is written exactly for people like yourself--experienced
programmers in other languages who are perhaps skeptical of the claims
Lisp advocates make. It contains several chapters of "practical"
projects including building a parser for ID3 tags in MP3 files and a
Shoutcast server, many of which use the "build an embedded language"
approach made possible by Lisp's macros.

Anyway, the chapters on the web are first drafts so there may be some
unpolished bits and even a few forward references that I need to
straighten out, but quite a few people have told me that it is in good
enough shape to help them at least see what the heck us Lispers are
going on and on about all the time.

-Peter
 

Matthew Danish

Gerry Quinn said:
That seems clear. May it be assumed it does not claim to possess the
advantages of the C++/Simula model of OO?

The perceived advantages of the C++ model are rather dubious. CL
certainly does not claim to possess many of the disadvantages of that
system. The C++ model is far too static to be acceptable to Lisp (and
Smalltalk) programmers. And "data-hiding" (notice that I did not say
encapsulation) is antithetical to the general Lisp philosophy, which
is about open-ness, introspection, and empowering the programmer.
 

Kenneth Tilton

Maahes said:
Possibly a good psychological example of the differences between C
programmers and Lisp programmers. C Programmers generally hate the idea of
an editor that understood conceptual blocks. I know I do. I like to program
left to right in text and don't want the compiler inserting brackets and
such.

You misunderstand. Lispniks still edit text as so many characters. We
just have /at our disposal/ when we choose to call on them sundry mad
useful tools for editing blocks of code in one go.

Just to make this clear, occasionally while dicing and slicing the code
I temporarily have parens unbalanced. For the duration of that state,
and in that narrow section of text, there are some blocks I can see
which I cannot grab in one go, precisely because the power tools do not
stop me from mashing things up any way I like while editing, and until I
get the parens rebalanced the editor will grab the wrong stuff.

And what you have no way of knowing since you cannot edit this way is
how often when editing one can usefully grab all the logic from, say,
one branch of an IF statement. (Answer: allllll the time.)

This to me is why the parens which scare non-Lispniks so much actually
make editing, formatting, and reading code vastly easier (after about
three weeks of practice).

kenny
 

Steven E. Harris

Jon Boone said:
What is it that you use access specifiers for?

Refining interfaces and enforcing invariants.

In CL, using the package system deliberately can aid in /communicating/
these intents or desires, but there's a stronger non-enforceable trust
factor involved.
 

Kaz Kylheku

Pardon my pointing out the obvious, but in C or C++, none of the above
is an operator at all.

It's true that the operator terminology is not used in this way in the
defining documents for these languages. However, it's not inaccurate
to call these operators, and the terminology has a vacancy: they need
to be called something.

This is a statement: while (<expr>) <statement>

This is a reserved keyword: while

But what do you call the *role* that the keyword plays within the
statement? I would argue that it's an operator. It distinguishes the
semantics from something else like if (<expr>) <statement>.

Some other symbols in C have two names. For example = is sometimes an
operator (in assignment expressions) and sometimes a punctuator
(separating the initializing expression in a declarator).

And then there is the sizeof keyword, whose job description in a unary
expression is that of operator. (Also, typeid in C++).

My second remark is that it would be an obvious extension to the
semantics of C to allow statements to have return values. In fact such
extensions exist; for example GNU C's statement expressions, where a
brace-enclosed block wrapped in parentheses yields the value of the
last expression evaluated inside it.

C borrows from Lisp the idea that evaluation is performed by
expressions, including side effects such as assignment, and even these
yield result values. But then the idea is screwed up by the
introduction of statements, which don't propagate a result value. In
principle, compound statements, selection statements and others could
actually be expressions. If that were the case, it would be even more
obvious to call the keywords operators, in analogy with sizeof.
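
As an aside, here is a tiny Lisp sketch of that "everything is an
expression" idea (my own illustration, not part of the original exchange):
the selection and sequencing constructs return values, so they compose
directly.

(defun larger (a b)
  ;; IF is an expression; its value is the value of whichever branch ran.
  (if (> a b) a b))

;; PROGN, the analogue of a C compound statement, yields the value of
;; its last form, so it can sit in an argument position:
(larger 2 (progn (format t "side effect~%") 7))
;; prints "side effect", returns 7
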
Partly true, but not entirely so -- in particular, see below about
M-expressions.

M-expressions survive in disguised form in various functional
languages, which are about as readable as line noise.
When John McCarthy designed Lisp, the intent was that S-expressions
would be used for data. Programmers were intended to write code using
M-expressions, which have a syntax more like other languages,
including some inconsistencies, such as using a mixture of parentheses
and square brackets, as well as sometimes putting the first item in a
list outside the brackets and sometimes inside.

I should have said *is* designed. I'm referring to the current design,
not to the various bad ideas that were discarded or shunted along the
way toward that design.
The original intent was that M-expressions would be translated to
S-expressions, and then those would be interpreted. As a first step,
an interpreter that accepted S-expressions was written. Before a
front-end, or even a syntax for the language it would accept, was
finished, the S-expression syntax had been accepted to the extent that
M-expressions never materialized.

You might (and may well be right to) believe that Lisp is better off for
this, but claiming it was a deliberate design, at the very best,
distorts the facts beyond recognition.

Sure it was a deliberate design. It just hadn't been suspected that
the syntax would gain that level of acceptance as a primary way to
input code. McCarthy's intuition suggested to him that it would be
regarded as a disadvantage.
Common Lisp actually captures that original intent by supporting a
programmable lexical analyzer, in which you can set up custom
notations. There exists a highly portable infix package in which you
can write something like:

f(x, y, a[i + j] += 3)

the custom reader converts it to the list object:

(f x y (incf (aref a (+ i j)) 3))

I don't know of any project which uses that. There don't seem to be
takers for this, like there weren't any takers for M-expressions way
back when. The syntactic masochists are using things like CAML or
Haskell.

By the way, I should mention that C++ pays my bills. I'm a long-time C
and C++ programmer, in addition to many other things. I didn't become
interested in Lisp until I was around 31 years old. I never
encountered it at university. There were some Scheme courses, but I
got equivalent credit for them from another school. I heard that
Scheme was some dumbed-down Lisp for teaching, and so I smelled
trouble (and now I know how right I had been!) There was a programming
languages course that served as a prerequisite for the upper level
compiler construction course, but I got around that one as well! I
talked to the prof and his concern was simply whether I could handle the
flow of assignments that led up to the working compiler, and I
replied that I was a solid C hacker, and so he said okay. ;)
 

Cameron MacKinnon

Matthew said:
The perceived advantages of the C++ model are rather dubious. CL
certainly does not claim to possess many of the disadvantages of that
system. The C++ model is far too static to be acceptable to Lisp (and
Smalltalk) programmers. And "data-hiding" (notice that I did not say
encapsulation) is antithetical to the general Lisp philosophy, which
is about open-ness, introspection, and empowering the programmer.

One point that should be emphasized: Common Lisp's standard object
system is the default (rather than the only) object system available.
Lispers seem to have reached consensus that it represents the best
design for general use. Since the object system is just more Lisp code,
people who desire C++ style semantics, or indeed any other style they
can dream up, can write or acquire an object system that suits them.

Contrast this with C++, where the object system is factory installed and
not user serviceable.
 

Jock Cooper

Gerry Quinn said:
I disagree - it is in fact rare for code to be changed per se. What
would be the point in writing code whose only purpose is to be written
over? Whether original code is erased or not is completely irrelevant
to the issue of whether code is self-modifying. The *program* is still
self-modifying.

Because I might get tired of typing something like:

(let ((addr-1 (get-html-value "addr-1"))
      (addr-2 (get-html-value "addr-2"))
      (city (get-html-value "city"))
      (state (get-html-value "state"))
      (zip (get-html-value "zip"))
      (other-value (some-function (get-html-value "other")))
      (name (get-html-value "username")))
  ...some code...
  )

so I write a macro to transform some code and now I can
write thusly:

(with-form-values (addr-1 addr-2 city state zip (name "username")
                   (other-value "other" some-function))
  ...some code...)

which is transformed into the top code.

Now I have eliminated a source of typos and made the code much more
readable. Note that a function will not work for this because I am
establishing lexical bindings for "...some code..."
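
For anyone curious how such a macro might be put together, here is one
possible sketch (my guess at an implementation, not Jock's actual code;
it assumes get-html-value takes the field name as a string):

(defmacro with-form-values ((&rest specs) &body body)
  "Each SPEC is a symbol, (VAR \"field\"), or (VAR \"field\" TRANSFORM).
A bare symbol uses its own lowercased name as the field name."
  `(let ,(mapcar
          (lambda (spec)
            (destructuring-bind (var &optional field transform)
                (if (listp spec) spec (list spec))
              (let ((value-form
                      `(get-html-value
                        ,(or field (string-downcase (symbol-name var))))))
                `(,var ,(if transform
                            `(,transform ,value-form)
                            value-form)))))
          specs)
     ,@body))

Macroexpanding the WITH-FORM-VALUES call above yields essentially the LET
form at the top of the post.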
 

Petter Gustad

Maahes said:
Possibly a good psychological example of the differences between C
programmers and Lisp programmers. C Programmers generally hate the
idea of an editor that understood conceptual blocks. I know I do. I
like to program left to right in text and don't want the compiler
inserting brackets and such.

The compiler does not insert brackets and such. It's quite common to
have an *editor* (like Emacs) do so, but only when you type a command
asking it to.

Petter
 

Kaz Kylheku

Jon Boone said:
What ever do you mean by "bolted-on"?

I'd like to see a definition of ``bolted on'' which covers the Common
Lisp object system, but somehow skillfully manages to exclude the C++
object system.

A good test for determining whether an object system is bolted on is
to answer the question: are there some ``basic types'' inherited from
an older language which don't participate in the class system?
Most of any Common Lisp system is going to be written in Common
Lisp (which means either macros or functions).

.... and it is for very pragmatic reasons.
The lack of access specifiers is a design decision, not a
by-product of implementation.

Common Lisp supports access control, just not in the same way as C++.

Firstly, there is the package system, which lets symbols be marked as
exported, or remain unexported. The package system provides a general
mechanism for controlling feature visibility, regardless of what kind
of entities you are working with, be they classes or whatever.

Next, accessor methods can be specified for a class slot, right in the
definition of that slot for disciplined access. Without accessors,
it's less convenient to get at a slot.

If you really want to hide slots, you can use secret uninterned
symbols to name them.

The C++ style of protection is completely at odds with the very design
of the object system, because there is no notion of class scope. The
body of a method in Lisp is not in some special scope in which the
symbols denoting the variables of a class are magically bound to those
slots. The ordinary scoping rule applies.

Public versus private issues play out at the package level.
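
Put together, those three mechanisms might look something like this (my
own sketch, not from the post; the BANK package and its names are
invented):

(defpackage :bank
  (:use :cl)
  ;; only these three names are public (exported)
  (:export #:account #:balance #:deposit))

(in-package :bank)

(defclass account ()
  ;; The slot itself is named by an uninterned symbol, so code outside
  ;; this form cannot even spell it for SLOT-VALUE; the exported
  ;; accessor BALANCE is the disciplined way in.
  ((#:balance-slot :initform 0 :accessor balance)))

(defgeneric deposit (account amount))

(defmethod deposit ((a account) (amount number))
  (incf (balance a) amount))

;; From another package, BANK:BALANCE and BANK:DEPOSIT are visible;
;; getting at unexported internals requires the deliberate BANK::name
;; double-colon syntax, which reads as "I know I'm trespassing".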
 

Kaz Kylheku

[ ... ]
It's not ambiguous if you're a C compiler, but it is if you're a human who
hasn't memorised all the operator precedence rules.

Based on this reply, about all I can guess is that you don't know what
"ambiguous" really means.

Ambiguous means I have to look up a bunch of hidden rules that were
added to an ambiguous language and arbitrarily resolve the parsing
conflicts. The existence of the hidden rules doesn't solve all of the
problems.

By the way, look at the ridiculous grammar factoring that's needed in
order to avoid the use of explicit precedence and associativity rules.
The expression 42 ends up being a multiplicative-expression *and*
an additive-expression at the same time. Yet it neither adds nor
multiplies.

Being ignorant of the meaning of a statement
doesn't imply that the statement is ambiguous; to be ambiguous, it
must be impossible to determine the meaning of the statement,
regardless of one's knowledge of the language, etc.

My utter ignorance of Japanese is FAR from justification for claiming
that all Japanese is ambiguous.

I'm almost certain that Japanese is full of ambiguities, being a
natural language.

The difference is that there aren't any hidden rules about it. An
ambiguity in natural language doesn't pretend to be something else.

If you utter something ambiguous that leads to a misunderstanding, you
can't claim superiority by referring the poor victim of your
misunderstanding to an associative precedence chart.

Now every message must have some hidden rules which indicate how it
should be parsed. But those rules can be made minimal. Moreover, the
message can even be structured such that the rules can be just about
deduced from its structure.

An ambiguous message is one whose parsing cannot be cracked without
access to the hidden rules, even if it is encountered by an
intelligence which is able to correctly figure out every other aspect
of the message, like which symbols are operators and operands, and
that the operators are binary, and so on.

Suppose that an alien encounters a message from Earth which looks like

A + B * C / D - E * F

The alien might be able to deduce what the operators are, and then it
is stuck.

Now the same alien receives:

(- (+ A (/ (* B C) D)) (* E F))

Aha, it's obvious that the two symbols ( ) are special and that they
serve to enclose, as suggested by their shape. Moreover, they balance,
which reinforces that suspicion.

Given a larger repertoire of such messages, it becomes obvious that
the operators like * and + always appear in the left position of a
sublist, and the other symbols like A B C are in the remaining
positions. So the message itself can convey the idea that + - * / are
somehow different from these other A B C ... In the first message,
even this is a huge leap. One could suspect that A and + are unary
operators, and find nothing in the message to contradict the idea.
I find it particularly interesting that C and C++ both contain some
things that (based on pure syntax) really ARE ambiguous, but those

What, like evaluation orders? That's not pure syntax, but semantics.

That's a whole different pile of unbelievable idiocy that should have
been fixed long ago.

Or, I'm guessing that perhaps by ``pure syntax'' you mean ``just the
raw grammar, with no symbol table information''. As in, what is this:

(a)(b)(c)+(d);

Our choices are:

typedef int a, b, c;
int d;
// ...
(a)(b)(c)+(d); // unary + expression put through casts!

Another choice:

extern int (*a(int))(int);
int b, c, d;
// ...

// call a with argument b; it returns a pointer to a function that
// takes an int and returns an int. That function is called with
// argument c, and the resulting int is added to the value of d.

(a)(b)(c)+(d);

And so on. I once wrote a parser which could deal with these
ambiguities. The goal of the parser was to classify an expression into
three categories: 1) has side effects for sure. 2) might have side
effects depending on which way it is parsed and 3) has no side
effects, for sure. When there was an ambiguity like the above, it
would parse it both ways, record its findings and backtrack. The
backtracking was done using an exception handling package hacked over
setjmp and longjmp in C.

With this package, one can put assertions into function-like #define
macros, so that at run time, suspicious uses of these macros would
blow up!

For example, suppose you were implementing int getc(FILE *) as a
macro, and you needed to evaluate the stream argument twice in the
macro expansion.

With my package, I could write the getc() macro such that the first
time it is evaluated, the enclosed expression is parsed and
classified. If it is found to have side effects, like getc(stream++),
then the program stops with an assertion. If it's suspected to have
side effects, a warning is produced and the program continues, and of
course if there are no side effects, it is silent. In both these
cases, the expression is stored into a hash table so it doesn't have
to be parsed again; the next time that same call is evaluated, the
hash will tell that all is good.

So as you can see, I have done some incredibly devilish things in
order to make a dumb language safer.
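
For contrast, a sketch of why this class of trouble doesn't arise with
Lisp macros (my own example, not Kaz's code; MY-GETC and *STREAMS* are
invented names): the macro binds its argument form to a generated symbol
once, and the expansion can then refer to that binding as often as it
likes.

(defmacro my-getc (stream-form)
  "Read one character from the stream produced by STREAM-FORM,
returning NIL at end of file. STREAM-FORM is evaluated exactly once."
  (let ((stream (gensym "STREAM")))
    `(let ((,stream ,stream-form))
       ;; The binding can be used any number of times below;
       ;; the side effects of STREAM-FORM already happened, once.
       (and (peek-char nil ,stream nil nil)
            (read-char ,stream)))))

;; Even a side-effecting argument is safe:
;; (my-getc (pop *streams*))   ; POP happens once, as expected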
 

Gerry Quinn

Gerry Quinn said:
I disagree - it is in fact rare for code to be changed per se. What
would be the point in writing code whose only purpose is to be written
over? Whether original code is erased or not is completely irrelevant
to the issue of whether code is self-modifying. The *program* is still
self-modifying.

Jock Cooper said:
Because I might get tired of typing something like:

(let ((addr-1 (get-html-value "addr-1"))
      (addr-2 (get-html-value "addr-2")) [--]

so I write a macro to transform some code and now I can
write thusly:

(with-form-values (addr-1 addr-2 city state zip (name "username")
                   (other-value "other" some-function))
  ...some code...)

which is transformed into the top code.

Now I have eliminated a source of typos and made the code much more
readable. Note that a function will not work for this because I am
establishing lexical bindings for "...some code..."

Sure, that's a reason why you might want to use this. And I think, if
you are making the point that this should not be called 'self-modifying
code' in the proper sense, you do have a point - you are manipulating
strings in a way that might be handled elsewise in another language. If
you look at the above, it's really data you are manipulating, not code.

Downloaded your 'UltraFractal'. I was curious to see if it was written
in Lisp, but it doesn't appear so. [Not that that invalidates anything
you say.]
 

Gerry Quinn

Matthew Danish said:
The perceived advantages of the C++ model are rather dubious. CL
certainly does not claim to possess many of the disadvantages of that
system. The C++ model is far too static to be acceptable to Lisp (and
Smalltalk) programmers. And "data-hiding" (notice that I did not say
encapsulation) is antithetical to the general Lisp philosophy, which
is about open-ness, introspection, and empowering the programmer.

I'll take that as a 'yes'.

Gerry Quinn
 
