C++ sucks for games


Peter Lewerin

(e-mail address removed) (Jerry Coffin) wrote
Other than the fact that the Lisp code was so horrendous,

Well, the C++ code was quite horrendous too, so maybe it evens out.
:)

I'm actually a bit surprised that no C++ programmers have pointed out
yet that the C++ is sloppily written, obfuscated, and potentially
dangerous. It's *not* a flattering example of C++.
 

Frode Vatvedt Fjeld

Well, I certainly wouldn't consider FIR filters particularly
"special purpose" at all -- quite the contrary, they're useful (and
heavily used) in a huge variety of situations.

They're also closely related to the FFT, DCT, etc., used in many
forms of lossy compression. Now, perhaps to you "HDTV", "cell phone",
"CD", "DVD" and "MP3" all sound so obscure that something used in
all of the above qualifies as "special purpose", but here on planet
earth, that's not particularly accurate.

Of course they are; by "special purpose" I did not mean "single
application". FFT and DCT etc. were exactly what I was talking about. I
consider e.g. an 8x8 iDCT transform for MPEG decoding to be a quite
special-purpose function, with a very clearly mathematically defined
relationship between input and output, just as I explained. That this
particular function is used in a wide array of gadgets and programs
is really quite beside the point.
You've also conveniently forgotten that you claimed that even in
these cases, Lisp was still just as easy to use, while C and C++
merely provided better performance.

It seems to me that it is you who is conveniently forgetting that you
have been shown Lisp code that is shorter than your C++ code. Of
course you think the C++ version "nicer", but don't pretend that's
anything but your subjective opinion.

In fact, I'd find it quite interesting to discuss your example from
another sub-thread.

Compare "h[n] = val/N;" to "(setf (aref h n) (/ val numtaps))".

Never mind that any person literate in both languages sees these as
identical. I think my point above is illustrated even in these two
simple lines, because the Lisp syntax here (setf ...) scales very well
in terms of concepts and mechanisms. Once one learns to appreciate the
expressive power of Lisp's "places", of which "(aref h n)" is an
example here, it also becomes very natural to want to express array
elements as such places, and the idea that one should introduce
special syntax for this particular and mundane concept just to save
some characters of program text, or to emulate high-school maths,
becomes ridiculous. The C++ syntax here, on the other hand, goes
nowhere. Points about syntax such as this are of course completely lost
on the casual reader of Lisp code, but they are valid and important,
nonetheless.
Now, I'll certainly admit that there are times it's worth trading
off some readability to gain performance, and other times the reverse
-- but it's a lot harder to justify the Lisp code when it's slower
AND harder to read.

Let me point out that one needs to be a minimum of literate in any
language in order to be in a position where one can make any kind of
judgement on readability. That a language which you presumably rarely,
if ever, use extensively appears obscure to you shouldn't surprise
you.
It sounds to me like you've just said that Lisp provides advantages
only when/if you've done such insufficient analysis that you
shouldn't be writing code of ANY kind yet.

Well, there's this concept called "exploratory programming" that I
really like and which is impossible with C++. But that's not the issue
here. If all the programs you're writing are merely the encoding of
some perfect mathematical model of every function, data and control
flow, then good for you, but I believe that's not the reality most
programmers find themselves in.
You seem to specialize in non sequiturs. If I decide to use a long
long, I can certainly do so. Somehow you make a jump from "large
enough integer type" to "perfectly bug-free code". If that was
actually justified, then any language that provided
arbitrary-precision integers would guarantee bug-free code, but
somehow that doesn't seem to have happened.

It is you who is making unwarranted jumps here. I thought it obvious
from the context that "the bugs" refers to program errors resulting
from unexpected integer wrap-arounds. My point is simply that this is
something that does happen, even if 64-bit integers have been around
for a long time. And even 64-bit isn't immune to overflow.
At the same time, it should be pointed out that on the (relatively
rare) occasion that somebody cares, they can easily use arbitrary
precision integers in C++. Doing the same in C is possible, but you
have to use prefix-notation function calls to do nearly everything,
which is ugly (i.e. it looks like Lisp).

What you mean is that you have to start fighting the language: You'll
have two completely separate kinds of numbers, and you'll have to
concern yourself with which operators apply to which variables
etc. And the compiler is left completely in the dark about what's
going on wrt. type inference etc., something I'd guess is true also
for C++.
 

Gerry Quinn

Yes, excessive terseness is bad. However, I find the *existence* of common
patterns to be a sign that I haven't got my design right yet, and that I
might need a new macro.

David: "anaphoric if" [WTF?]

Ray: "lower case is best because it's a unicode world" [and here I
thought MSVC was so primitive compared to the magical text-editors]

William: "existence of patterns shows new macro is needed" [patterns are
ubiquitous and normal, *words* are patterns for heaven's sake]

I can see that this subthread is devolving into the most primitive form
of Lisp advocacy. How long before someone claims that no respectable
language can have more or less than four letters in its name?

- Gerry Quinn
 

Ray Blaak

Actually, "ß" already is lowercase. [...]
If one still insists to use capital characters, there are
/two/ uppercase versions: the default uppercase spelling is
"SS", while the spelling "SZ" has to be used when the default
spelling might cause ambiguity. So the capitalizations of
"Masse" and "Maße" (two different words with different
meanings) are "MASSE" and "MASZE", respectively.

Thanks for the correction.

So how would these identifiers compare in a case-insensitive language:
masze, MASZE, maße?

My advice is still to bail on the issue and simply say: different codepoints
== different identifiers.
 

Ray Blaak

Gerry Quinn said:
Ray: "lower case is best because it's a unicode world" [and here I
thought MSVC was so primitive compared to the magical text-editors]

Actually, that's: case-sensitive is best...

Of course, maybe it's even better to have a language with ASCII-only
identifiers (or some other suitable "visual" Unicode subset) and then be case
*in*sensitive after all.
 

Marcin 'Qrczak' Kowalczyk

Followup-To set arbitrarily, it's off-topic everywhere...

Ray Blaak said:
So how would these identifiers compare in a case-insensitive language:
masze, MASZE, maße?

Unicode provides default case mapping algorithms, which are then tuned
for a few special cases in a few languages (Turkish, Azeri, Lithuanian).

http://www.unicode.org/versions/Unicode4.0.0/ch03.pdf section 3.13
http://www.unicode.org/versions/Unicode4.0.0/ch05.pdf section 5.18

Case folding is neither uppercasing nor lowercasing, but their
combination with the property that it is the smallest equivalence
relation which folds together strings which can be made equal by
uppercasing or lowercasing.

After case folding ß, SS and ss are equivalent. Yes, this breaks when
ß is uppercased to SZ. A dumb algorithm cannot unambiguously solve
this without breaking languages where SS and SZ are significantly
different. There is also an unsolvable problem with dotted and dotless
I/i, and Unicode tables leave a choice here. This is the closest
language-independent approximation which can be automated.
 

Hartmann Schaffer

Jerry said:
In that case, I certainly hope I never see anything you'd admit WAS
"so bad".

Beyond that I have little left to say to you -- it would appear to me
that your views of programming are sufficiently different from my own
as to preclude any hope of a rational discussion between us on the
subject.

apparently you didn't read beyond the snippet you quoted. wade
proceeded to give some reasons.

also, see Christopher Stacy's article where he expresses his suspicion
that this code is the output of a fortran->lisp translator

hs
 

Jerry Coffin

[ ... ]
Of course they are; by "special purpose" I did not mean "single
application". FFT and DCT etc. were exactly what I was talking about.

Okay, so "special purpose" in your vocabulary translates to what most
people would call "extremely general purpose".

Bottom line: we're not talking about something extremely obscure that
virtually nobody ever has a reason to deal with, or anything like
that. We're talking about something that's reasonably representative
of what quite a few people really do on quite a regular basis.

[ ... ]
That this particular function is used in a wide array of gadgets
and programs is really quite beside the point.

It may be beside whatever point it was that you were trying to make,
but if so, it would seem to indicate that your "point" was really
pretty nearly pointLESS.
It seems to me that it is you who is conveniently forgetting that you
have been shown Lisp code that is shorter than your C++ code.

No, I have not. There was Lisp code shorter than the SPUC code -- _my_
C++ code is shorter still.
Of
course you think the C++ version "nicer", but don't pretend that's
anything but your subjective opinion.

The claim _seems_ to be that the Lisp code was better than the SPUC
code because it was shorter. If we take that as the measure, then my
C++ code is clearly better than the Lisp code that was posted.

I'm not convinced that's the case though -- both my code and the Lisp
code were shortened by introducing more indirection. While this often
increases convenience, it quite dependably slows execution speed. What
we have is a pretty clear tradeoff between size and speed. If you
don't need the speed, then the reduced size is an advantage, but
anybody making such a tradeoff should certainly realize that he hasn't
simply "improved" the code, but that he's changed the tradeoffs
involved.
In fact, I'd find it quite interesting to discuss your example from
another sub-thread.

Compare "h[n] = val/N;" to "(setf (aref h n) (/ val numtaps))".

Never mind that any person literate in both languages sees these as
identical.

Close anyway.

That still leaves room for a fairly substantial and meaningful
difference though: that C++ is easily readable by quite a few people
who aren't familiar with C++ itself, but happen to be familiar with
Pascal, Fortran, BASIC, Java, or any number of other vaguely similar
languages.

By contrast, the Lisp is more or less unique to Common Lisp. Somebody
familiar with some older Lisp (e.g. ZetaLisp or MacLisp) or with
something like Scheme could probably take a pretty fair _guess_ at
what was going on, but that's about it -- setf has been implemented as
a macro in Scheme, but unless the Schemer in question happened to also
be familiar with Common Lisp as well, he really wouldn't know what was
going on.

This goes back to the (lack of) syntax in Lisp. Knowing one language
in the Algol/Fortran group tends to make quite a bit of almost any
other in the group fairly easy to understand. In Lisp, by contrast,
learning the syntax teaches you only syntax. That lets you identify
(to some extent) which things are going to be evaluated as
functions/special forms, but that's about it. Until you know exactly
what "setf" happens to mean, you can't divine anything about what this
might mean.

I do feel obliged to point out that Lisp's paucity of syntax doesn't
_have_ to spell the disaster embodied in CL. It's only a naming
convention, but at least when it's written at all well, Scheme is
clearly better in this respect -- setf would be named setf! (and let
is let!, etc.), predicates always end in "?", and so on, so the name
of a function can provide at least some information, whereas in CL you
have no more than a totally arbitrary association between a string and
some action.
I think my point above is illustrated even in these two
simple lines, because the Lisp syntax here (setf ...) scales very well
in terms of concepts and mechanisms. Once one learns to appreciate the
expressive power of Lisp's "places", of which "(aref h n)" is an
example here, it also becomes very natural to want to express array
elements as such places, and the idea that one should introduce
special syntax for this particular and mundane concept just to save
some characters of program text, or to emulate high-school maths,
becomes ridiculous.

Somehow I'm reminded of the old joke about the proud mother pointing
at her son in the parade and saying "Oh look, everybody's out of step
except my Johnny."

"Once one learns to appreciate...it becomes natural" is a cheap
debating trick at best -- it basically translates to "Anybody who
disagrees with me on this point is stupid or ignorant." Unfortunately,
that's simply not a valid point.

The theoretical similarity of all Turing-complete languages has been
cited previously in this thread. There are real differences between
languages, however: one is that a real language provides reasonable
syntax. Another is that even though it is Turing complete, a real
language is defined to help solve problems in some specific category.

C++ provides a specialized syntax for array access for the simple
reason that this is often helpful in solving the problems for which it
is intended.

Lisp refuses to provide such special syntax. AFAICT, for the simple
reason that nobody has really ever pinned down what it's supposed to
be good at -- because its advocates persist in claiming that it's good
at everything. The result is a language that _could_ be good at
almost anything, but really _is_ good for almost nothing.

Lisp reminds me of one of my relatives: he's brilliant, athletic (or
used to be anyway), and good enough looking that even though he's now
past 60, he still attracts nearly every woman within miles. Deciding
what he was going to do when he grew up would have meant giving up
other things for which he really did have the potential. He refused to
do so, which has prevented him from concentrating on anything. The
result is that instead of missing out on some possibilities, he's
missed out on ALL of them.
The C++ syntax here, on the other hand, goes
nowhere. Points about syntax such as this are of course completely lost
on the casual reader of Lisp code, but they are valid and important,
nonetheless.

IOW, you like Lisp better, and dismiss all who disagree as lacking
your brilliant insights, etc., ad nauseam.

Frankly, your lofty claims strike me as little more than hot air. Your
later guesses about C++ make it clear that you lack the knowledge
necessary to comment on it intelligently.
Let me point out that one needs to be a minimum of literate in any
language in order to be in a position where one can make any kind of
judgement on readability. That a language which you presumably rarely,
if ever, use extensively appears obscure to you shouldn't surprise
you.

The code has an extra level of indirection. That has nothing to do
with my ability to read, and everything to do with how the code is
structured.

Your presumption that I rarely if ever make extensive use of the
language is simply incorrect -- in fact, if I chose to, I could put up
better arguments in its defense than you have so far (and Lisp I've
written is certainly better than the average of what's been posted so
far in this thread).

[ ... ]
Well, there's this concept called "exploratory programming" that I
really like and which is impossible with C++.

I believe any claim of the form "X is impossible in Y", where X is a
technique and Y is a major programming language, reflects ignorance or
dishonesty (or both) on the part of the person making the claim.

[ ... ]
It is you who is making unwarranted jumps here. I thought it obvious
from the context that "the bugs" refers to program errors resulting
from unexpected integer wrap-arounds. My point is simply that this is
something that does happen, even if 64-bit integers have been around
for a long time. And even 64-bit isn't immune to overflow.

I see -- so treating what you said as what you meant was an
"unwarranted jump". Okay, I guess from now on I'll know better than to
trust you.

In the end, we're left with one simple fact though: while people
certainly do write buggy code in C++ on a regular and ongoing basis,
integer overflows appear to account for no more than a minuscule
percentage of that.

I think you're tilting at a windmill here -- and AFAICT, the windmill
is imaginary.

[ ... ]
What you mean is that you have to start fighting the language: You'll
have two completely separate kinds of numbers, and you'll have to
concern yourself with which operators apply to which variables
etc.

Here we seem to nearly agree, but "It's like Lisp" is a lot more
succinct way of saying it.
And the compiler is left completely in the dark about what's
going on wrt. type inference etc., something I'd guess is true also
for C++.

Your guess indicates less about the subject than your ignorance of it.
 

Jerry Coffin

[ ... ]
apparently you didn't read beyond the snippet you quoted.

Actually, I read the whole thing.
wade proceeded to give some reasons.

Yes and no -- he gave excuses (not really reasons) and even those
weren't reasons to believe the code was good, but merely that it was
justifiable to continue using the code despite its poor quality.

Another look at the code itself, however, reveals the real truth: you
can argue all day long that horrible code is excusable, but the fact
remains that this is the worst code I've looked at in years (in any
language). In the end, only one conclusion is possible: attempting to
discuss programming with anybody who claims it's "not so bad" is
pointless.
also, see Christopher Stacy's article where he expresses his suspicion
that this code is the output of a fortran->lisp translator

Yes, it may well be. In fact I've previously pointed out that I agree
that this is probably the case. This may tell us how this horrible
code was produced, but does nothing to change the fact that it IS
horrible.

The truly sad part is that it's a fair guess that the Fortran code has
long since been fixed. From the looks of the code, it's based on
something written in Fortran 66 (or earlier). Fortran added block
structures in the 1977 standard, and has been updated a couple more
times since then, to the point that it now supports reasonable
structure. Most Fortran programmers have embraced this sufficiently
that spaghetti code like this is rarely seen in Fortran anymore.
 

Mike Ajemian

Jerry Coffin said:
Hartmann Schaffer <[email protected]> wrote in message
[ ... ]
apparently you didn't read beyond the snippet you quoted.

Actually, I read the whole thing.
wade proceeded to give some reasons.

Yes and no -- he gave excuses (not really reasons) and even those
weren't reasons to believe the code was good, but merely that it was
justifiable to continue using the code despite its poor quality.

Another look at the code itself, however, reveals the real truth: you
can argue all day long that horrible code is excusable, but the fact
remains that this is the worst code I've looked at in years (in any
language). In the end, only one conclusion is possible: attempting to
discuss programming with anybody who claims it's "not so bad" is
pointless.
also, see Christopher Stacy's article where he expresses his suspicion
that this code is the output of a fortran->lisp translator

Yes, it may well be. In fact I've previously pointed out that I agree
that this is probably the case. This may tell us how this horrible
code was produced, but does nothing to change the fact that it IS
horrible.

The truly sad part is that it's a fair guess that the Fortran code has
long since been fixed. From the looks of the code, it's based on
something written in Fortran 66 (or earlier). Fortran added block
structures in the 1977 standard, and has been updated a couple more
times since then, to the point that it now supports reasonable
structure. Most Fortran programmers have embraced this sufficiently
that spaghetti code like this is rarely seen in Fortran anymore.

--
Later,
Jerry.

The universe is a figment of its own imagination.

It's Jessica Rabbit code:
I'm not bad, I was just generated that way...

All this noise about old machine generated fortran->lisp code from *a very
long time ago* - and the code works fine (when did the Remez algorithm
change?) I just spent a very short time extracting just the remes function
and supporting code from the web page cited
(http://www.dxarts.washington.edu/docs/clmman/fltdes.lisp), compiled it and
ran the example located in the header comments. It compiled, loaded and ran
without a hitch.

On visual inspection, the code could use a little error-checking to prevent
div 0 (14-15 occurrences). And I wouldn't call it production code without
mods. It's not even close to being the worst code I've seen in any
language - not by a long shot, well, except for the GO's. It's ugly, but it
works. Agree it's machine generated. If I had to work with it, I'd diagram,
refactor, add error-handling, comment and add a test harness. Probably take
a day, maybe more, probably less. At least in Lisp. Equivalent C++ code
cleanup would be difficult to determine. I have much more experience with
C++ and think that it would take longer to refactor bad code like this in
C++ than it would in Lisp. YMMV.

If anybody wants the code, let me know.

Mike

From mathworld (note the last line):

http://mathworld.wolfram.com/RemezAlgorithm.html

An algorithm for determining optimal coefficients for digital filters. The
Remez algorithm in effect goes a step beyond the minimax approximation
algorithm to give a slightly finer solution to an approximation problem.

The Remez exchange algorithm (Remez 1957) was first studied by Parks and
McClellan (1972). The algorithm is an iterative procedure consisting of two
steps. One step is the determination of candidate filter coefficients h(n)
from candidate "alternation frequencies," which involves solving a set of
linear equations. The other step is the determination of candidate
alternation frequencies from the candidate filter coefficients (Lim and
Oppenheim 1988). Experience has shown that the algorithm converges very
fast, and is widely used in practice to design optimal filters.

A FORTRAN implementation is given by Rabiner (1975). A description
emphasizing the mathematical foundations rather than digital signal
processing applications is given by Cheney (1999), who also spells Remez as
Remes (Cheney 1966, p. 96).
 

Gerry Quinn

(e-mail address removed) (Jerry Coffin) wrote


Well, the C++ code was quite horrendous too, so maybe it evens out.
:)

I'm actually a bit surprised that no C++ programmers have pointed out
yet that the C++ is sloppily written, obfuscated, and potentially
dangerous. It's *not* a flattering example of C++.

Couldn't find it.

- Gerry Quinn
 

Peter Ashford

No, they never will. That is because nothing ever changes in
programming. That is why the Web is invariably programmed using COBOL,
VSAM, CICS and...uh-oh....

...never mind!!!

kenny (going back to work on his Lisp game engine with his new OpenGL
shading reference in hand)

Dude, I'm writing openGL / Java code - I *know* that you can write good
games without using C++. HOWEVER what I or you do has no effect on the
game studios out there who have millions invested in staff knowledge and
tool chains built around C++.

It wouldn't even matter if these alternate solutions were technologically
superior (which you might argue) because the investment in current
technology adds up to a hell of a lot of inertia to change.

I wasn't making a claim that technology never changes, or that better
solutions never come along - I'm talking about the studios' investment in
tools and knowledge. That was the point of the ASM->C->C++ language
transition comment in the original post.
 

Kenny Tilton

Peter said:
Dude, I'm writing openGL / Java code - I *know* that you can write good
games without using C++. HOWEVER what I or you do has no effect on the
game studios out there who have millions invested in staff knowledge and
tool chains built around C++.

Fine, fine, fine! Like any dinosaur, C++ has a lot of inertia, for all
the reasons you stated. Do you think COBOL/VSAM had no inertia? Staff
knowledge? Existing software?

But you said "...and probably never will". Bzzt!

My read on the situation is, inertia schmertia. These dynasties
disappear overnight. And the edge Lisp has over C++ is close to an order
of magnitude, so this will be an exceptionally quick transition.

This whole thread started when some Microsoft drone did his master's
bidding and trashed Lisp on his personal Web site. That just confirms
what any follower of comp.lang.lisp can tell you: the Ice Age is over.
The thaw has begun. A steady stream of newbies has the old-timers
scrambling to keep up with the newby FAQs (and we're so happy to see
them that we do not even mind that the Qs are FA).

And Micro$oft is scared, as they always are by anything they cannot control.

As they say in your business, Game Over. I recommend AllegroCL on the
win32 platform, btw, if you want to start your re-training.

:)

kt
 

Gerry Quinn

This whole thread started when some Microsoft drone did his master's
bidding and trashed Lisp on his personal Web site. That just confirms
what any follower of comp.lang.lisp can tell you: the Ice Age is over.
The thaw has begun. A steady stream of newbies has the old-timers
scrambling to keep up with the newby FAQs (and we're so happy to see
them that we do not even mind that the Qs are FA).

No, it started when some idiot posted here that "C++ sucked for games".
The website you refer to has been there a long time, though not a
fraction as long as Lisp has existed.

If Lisp ever should have a major turnaround in its popularity, MS will
quite happily add Visual Lisp to their product range.

- Gerry Quinn
 

Philippa Cowderoy

The website you refer to has been there a long time, though not a
fraction as long as Lisp has existed.

If Lisp ever should have a major turnaround in its popularity, MS will
quite happily add Visual Lisp to their product range.

I'm not so sure it'd be that simple. It'd require major changes to the CLR
as I understand it - for better or worse, .NET assumes a Java-like OO
model. And I suspect MS's commitment to it is pretty hefty; it's their
ticket off x86 should they feel the need.
 
K

Kenny Tilton

Gerry said:
No, it started when some idiot posted here that "C++ sucked for games".

I meant in the larger sense, not that of the literal thread. And the
"idiot" is not really an idiot; the idiocy was a parody of the web site,
and the provocativeness was intended simply to make for a lively, drawn-out
thread which would turn the un-saved on to Lisp.

The thread almost died, but you have kept it going nicely by sticking to
reasonable technical arguments and being more open-minded than is really
appropriate on Usenet. :)
The website you refer to has been there a long time, though not a
fraction as long as Lisp has existed.

When the site got noticed again and slammed on cll (again), the dweeb
added a second page pretending to respond to critics but actually
ducking all the solid objections.
If Lisp ever should have a major turnaround in its popularity, MS will
quite happily add Visual Lisp to their product range.

Yep. And it will break the standard so they can monopolize Lisp as well.
That will probably be the end of them.

kt
 

Raghar

There is this website called Google where you can go
and type things like "lisp comparison C++" and it
<snip>
This applies to the post of Ron Garret as well.
So let's just take one of them for verification. It looks like I
had http://www.flownet.com/gat/papers/lisp-java.pdf offline on my
computer. So we might look at that.

It starts with the big name "Lisp as an Alternative to Java", so I hope
I'm talking about the right article.
From a first few looks it seems it was done around winter 2000. It
also referred to some study that used a 1.2.? version of the JVM. So
first we should say IT'S AN OUTDATED study that shouldn't be used for
any current comparisons, at least with Java.

Then they talked about the development time (in hours):

  LISP     2 - 8.5
  Java     4 - 63
  C / C++  3 - 25

It seems strange. It shouldn't be so high for Java. (And for C, if
not messing with pointers, it should be more like 17 hours.) Perhaps
it depended on the programmers' experience. Let's look at how many
years they have behind them. (I have just over 3000 hours, if someone
would like to do some criticism.)

  LISP     6.5 years
  Java     7.7 years
  C / C++  9.6 years
7.7? What version of Java was available when they started
programming? Simple counting: 2000 - 7.7. I remembered one person
talking about a company that required programmers with 5 years of
experience back in 1996. Wait a minute, I remember he also said
something about... When was the first version of the JVM? A beta was
released in late 1995; the release version of the JVM came out in
1996. So those programmers were... I REFUSE to consider this article
particularly valid if there are programmers with longer experience
with the language than the age of the language's compiler. (They said
they were volunteers from Usenet, so they were unlikely to be the
people who built the Java compiler.)
So we might consider the lower values as somewhat relevant.
However, we don't know if they accounted for breaks and toilets, so
we could correct for that by 2 hours: add 2 hours to the lower value
and subtract 2 hours from the larger value. Now all times are roughly
equal, so we could say they spent around 2 hours on the problem
itself and the rest of the time retyping it into their favorite
language.
That 63 hours of programming in Java is only one result. It seems
to be caused by 50 hours of learning Java and 13 hours of typing
the algorithm into the language.
The median for Java is lower than for C; that's expected behaviour.
(With both sides using Eclipse, experienced programmers and perfect
libraries for development, Java time is around 0.8 of C. However, it
would be stupid to stage any races.)
The lower median for LISP could be caused by a problem that is very
easy to program in LISP, or by students who did such a problem
recently in school.
Conclusion: that article contains almost no information.

Should I expect that everyone verifies Internet articles against 3
independent sources and, in the case of program tests, runs them as
well?
 

Raghar

When the site got noticed again and slammed on cll (again), the
dweeb added a second page pretending to respond to critics but
actually ducking all the solid objections.

What site? This thread was started by some person who didn't post
here a second time.

This thread has just 668 posts and is growing...

The funny thing is, he didn't say LISP is the solution to all those
problems; yet the main debate was about LISP.
 
