Is LISP the ultimate programming language?


Nick Keighley

On 20 Oct, 11:05, Nick Keighley <[email protected]>
wrote:

It's probably a nice language in theory, but in practice
nothing really useful has been programmed with it.

Emacs, Naughty Dog games engine. I believe some of the internal
file formats in GCC look remarkably Lisp-like.
It's the same with many languages that are supposed to be better
than C++. I think the reason is simple: horrible syntax.

Have you seen a complex template declaration?

I know some people don't appreciate the syntax of C++, but
at least it can be read by humans, in most cases (there are
examples of very twisted C++ source code which is harder
to read than usual).

Yes, the Lots of Irritating Silly Parentheses take some getting
used to, but you /can/ get used to it (surprisingly rapidly); an
editor that can count brackets helps.
 

Richard Herring

Computerized symbolic mathematics systems, i.e. Macsyma and its
imitators. I find that useful on a weekly basis ;-)

I suppose that qualifies. For an appropriate definition of
"intelligence", of course.
 

Richard Herring

Rui Maciel said:
Do you believe that the progress observed in an entire scientific
domain has any relation with the use of a
certain programming language by some projects?

I'm not the one making claims here. Did you miss this:

?
 

Francesco

Emacs, Naughty Dog games engine. I believe some of the internal
file formats in GCC look remarkably Lisp-like.

Hi, I'm just curious: where did you read that Naughty Dog game engine
is written in LISP?
If you go to their website, career section, they explicitly ask for C,
C++ and C# knowledge.

http://www.naughtydog.com/site/careers/server_programmer/

Anyway it seems strange since the vast majority of games engines are
written either in C or C++.
Just to name a few:
- Unreal
- Quake
- Valve's Engine

And when it's not explicitly stated, all games software houses REQUIRE
C++ skills.
I'm not really a fan of C++, but it's the de-facto standard for the
games industry.
Other languages are usually used for scripting (Lua, Lisp, etc.).

I'm no expert, but the statement (Naughty Dog's engine in LISP) seems
strange.
By the way, Naughty Dog games are awesome, Uncharted rules!

Bye,
Francesco

<SNIP>
 

Nick Keighley

Hi, I'm just curious: where did you read that Naughty Dog game engine
is written in LISP?
If you go to their website, career section, they explicitly ask for C,
C++ and C# knowledge.

I believe the high-performance bits were always written in C++, but
that there was a scripting language or something that was Lispy. I
think the Lisp bits were later rewritten in C++. If you google "lisp
naughty dog" you get quite a few hits. For instance:
http://ynniv.com/blog/2005/12/lisp-in-games-naughty-dogs-jax-and.html

(which also gives you an idea of why they stopped using it...)

or
http://en.wikipedia.org/wiki/Game_Oriented_Assembly_Lisp
 

Keith H Duggar

It can, and some do. Obviously, it's harder for the compiler
than optimizing around std::vector, but then, optimizing around
std::vector is harder for the compiler than optimizing around
Fortran style arrays. (C-style arrays are a problem for the
compiler, because they end up being pointers.)

And what would you replace this C-style array "problem" with?
In other words, how would you implement iterators for C-style
array sequences? Do you have a link to a proposal for C++ or
examples of languages that "do it right" and support Stepanov
iterator concepts as C++ does?

KHD
 

James Kanze

And what would you replace this C-style array "problem" with?

Make arrays first class types, behaving as any other type.
In other words, how would you implement iterators for C-style
array sequences?

The same way you implement them for any other container. For
that matter, you usually inhibit the conversion of array to
pointer (by using pass by reference) when you want "iterators"
for C style arrays, using something like:

template< typename T, size_t n >
T*
end( T (&array)[ n ] )
{
    return array + n ;
    // or return &array[0] + n, if there was no conversion,
    // or just something like
    // return array.end(), if the language were defined
    // thusly.
}

Basically, you block the conversion because it involves loss of
important information concerning the type, mainly how many
elements the array contains.
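
A minimal sketch of how such helpers get used, for concreteness
(illustrative only, not code from the thread; C++11 later standardized
essentially this pair as std::begin/std::end):

#include <algorithm>
#include <cstddef>
#include <iostream>

// Pass by reference keeps the array's size in its type, so both ends
// of the sequence can be computed without a separate length argument.
template< typename T, std::size_t n >
T* begin( T (&array)[ n ] ) { return array; }

template< typename T, std::size_t n >
T* end( T (&array)[ n ] ) { return array + n; }

int main()
{
    int values[] = { 3, 1, 4, 1, 5 };
    std::sort( begin( values ), end( values ) );
    for ( int* it = begin( values ); it != end( values ); ++it )
        std::cout << *it << ' ';
    std::cout << '\n';
}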
Do you have a link to a proposal for C++ or examples of
languages that "do it right" and support Stepanov iterator
concepts as C++ does?

The STL has been implemented in Ada. The problems doing so had
nothing to do with arrays; the problems had to do with the fact
that Ada's generics work somewhat differently than those in C++.

And of course, one might reasonably ask why you would want to
use the STL iterator idiom, which follows the C idiom in
requiring a separate object to contain all of the necessary
information, when you could put all the information in a single
object, resulting in a more powerful and simpler to use
abstraction.
 

ardjussi

No it doesn't.  C++ wants to be a multi-paradigm programming
language, supporting many different paradigms, including object
oriented programming.  Many (not all) of the criticisms of it
come from people who insist that only one paradigm is good, and
are upset with C++ because it doesn't impose that paradigm.  In
practice, the more tools you have in your workbox, the more
effective you can be (provided you know how to use them).  For
any given paradigm, C++ is probably a bit harder to use than a
language dedicated to only supporting that paradigm, but it has
the advantage of not imposing any one paradigm, and letting you
use whichever one is most appropriate to the problem at hand.

Having said that: let's look at where your quotes are coming
from.


The term was coined by Alan Kay, but has gone on to acquire a
life and a meaning of its own, beyond what he originally
conceived of.  In particular, Kay's OO was very much dynamically
bound, with inheritance only for implementation; almost all
serious OO languages today use static type checking, with
inheritance mainly of implementation.  Arguably, these are two
different things, and a different word should have been chosen,
but it wasn't.  (Meyer once defined OO in a way that it couldn't
be done in Smalltalk :-).)


Note that Kay was the inventor of Smalltalk.  (Interested party,
so to speak.)


That's rather obvious, since C++ didn't exist at the time.  More
to the point, Kay was very much thinking in terms of dynamic
type checking (as in Lisp); as I said, the word has evolved, and
most people associate it with languages with static type
checking: Java and Eiffel, if not C++.


I'm not too sure how to take that (except maybe as sour grapes):
Kay obviously was playing with Smalltalk (and probably Lisp,
given the influence of Lisp on Smalltalk) long before C++ was
even invented.


I'm not familiar with any of the above people, but I'd be
interested in knowing more about the context in which they are
speaking.  *IF* the goal is to teach OO as the unique solution,
and as an end in itself, then no, C++ is not the ideal language.
If the goal is to teach effective programming, using OO when
appropriate, other languages when appropriate... C++ probably
isn't the ideal language, either, but none of the others are
very good either.  It's a real problem; in some ways, I'd argue
that the first programming course should still be taught in
Pascal (but that's just showing my age).  In other ways, I'd
argue that the language isn't that important in itself: the best
book on programming (in general) that I know is by far
Stroustrup's: _Programming Principles and Practice Using C++_.
And one of the main reasons is in the title: it insists on the
principles and practice, reducing the language (C++) to the role
of a tool (which is what it should be).  He could rewrite the
book to use Java (or Lisp, or just about any other language)
without major modifications.


Bertrand Meyer is the inventor of Eiffel.  And he's very
dogmatic; you could replace C++ with any other language except
Eiffel in the above, and he would probably agree.  As I said,
I've read a paper in which he proves not only that you can't
really do OO in C++, but that you can't do it in Smalltalk
either.  (Meyer's contributions to software engineering are not
to be underestimated, but I do wish he'd be less dogmatic about
Eiffel.)


Given the quality of Linux, I don't think you can quote Linus on
software engineering issues.
No language can help this. But do you believe that after removing this
problem choosing C++ over C would have helped?
Finding evidence is difficult if not impossible, but just opinion?
-Jussi
 

James Kanze

No language can help this. But do you believe that after
removing this problem choosing C++ over C would have helped?
Finding evidence is difficult if not impossible, but just opinion?

I definitely think it's easier to develop quality code in C++
than in C, perhaps by an order of magnitude or more.
Theoretically, perhaps some other languages would be even easier
than C++, but practically, for whatever reasons, they aren't
available or aren't appropriate for developing kernel code.
 

osmium

James Kanze wrote:

I definitely think it's easier to develop quality code in C++
than in C, perhaps by an order of magnitude or more.

I think that statement epitomizes a huge problem I have had with C++, and
always will have. As I learned I kept looking for this wonderful magic
bullet in C++, kind of like structured programming. Once one really learns
about structured programming there is an enormous increase in productivity,
perhaps as much as an order of magnitude. As I worked myself through the
process of learning C++, I would say to myself, yes, this is nice, that is
nice, but where is the *really* good part? After many years I have
concluded that there is no really good part. It is just a complicated
agglomeration of pretty good ideas, fitted together in one language.
Someone mentioned, upthread, a Swiss army knife, and I think it is an
excellent metaphor. My problem is that I detest Swiss army knives and
Crescent wrenches!

Change order of magnitude to 40% better and I am on board as a C++ guy.

You can't make a nice, cohesive, consistent language (such as Algol 60 or
Pascal) from such diverse roots: one root is C, cryptic beyond belief with
'%' meaning modulo; the other is the huge, 15-character-or-so, guru-selected
names of things used in the STL. I sometimes find myself using *two* lines
of code for a simple for loop with iterators. It's kind of like someone
grafted COBOL onto a Teletype-friendly APL. It is just a nasty, ugly mix,
usable, but still distasteful.
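
The sort of thing being complained about, for concreteness (a typical
pre-C++11 loop over a container; illustrative only):

#include <string>
#include <vector>

int main()
{
    std::vector< std::string > names;
    names.push_back( "alpha" );
    names.push_back( "beta" );

    // The iterator's type alone nearly fills a line, so the declaration
    // and the loop header often end up split across two lines.
    for ( std::vector< std::string >::const_iterator it = names.begin();
          it != names.end(); ++it )
    {
        // use *it
    }
    return 0;
}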
 

Keith H Duggar

osmium wrote:



I think that statement epitomizes a huge problem I have had with C++, and
always will have. As I learned I kept looking for this wonderful magic
bullet in C++, kind of like structured programming. Once one really learns
about structured programming there is an enormous increase in productivity,
perhaps as much as an order of magnitude. As I worked myself through the
process of learning C++, I would say to myself, yes, this is nice, that is
nice, but where is the *really* good part? After many years I have
concluded that there is no really good part.

To me the "really good part" is generic programming (including,
among other features, overloading, operator overloading,
and templates) and the paired constructor+destructor paradigm.
As for OO, it's just so-so ;-)

For example, generic programming made the numerical work I was
doing FAR simpler and more enjoyable than it was in Fortran or
C.
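
A rough illustration of the sort of thing meant here (my sketch, not
code from the post): one generic routine works unchanged for built-in
arrays, standard containers, and any element type with the right
operators.

#include <iostream>
#include <vector>

// Generic accumulation: works for any iterator range and any value type
// that supports operator+=, built-in or user-defined.
template< typename Iterator, typename T >
T sum( Iterator first, Iterator last, T init )
{
    for ( ; first != last; ++first )
        init += *first;
    return init;
}

int main()
{
    double a[] = { 1.5, 2.5, 3.0 };
    std::vector< int > v;
    v.push_back( 1 ); v.push_back( 2 ); v.push_back( 3 );

    std::cout << sum( a, a + 3, 0.0 ) << '\n';          // prints 7
    std::cout << sum( v.begin(), v.end(), 0 ) << '\n';  // prints 6
}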

KHD
 

Keith H Duggar

Make arrays first class types, behaving as any other type.

As in make them std::vector? Why isn't that better done at the
library level as has already been done (in multiple different
ways to meet different needs)?
In other words, how would you implement iterators for C-style
array sequences?

The same way you implement them for any other container. For
that matter, you usually inhibit the conversion of array to
pointer (by using pass by reference) when you want "iterators"
for C style arrays, using something like:

template< typename T, size_t n >
T*
end( T (&array)[ n ] )
{
    return array + n ;
    // or return &array[0] + n, if there was no conversion,
    // or just something like
    // return array.end(), if the language were defined
    // thusly.
}

Basically, you block the conversion because it involves loss of
important information concerning the type, mainly how many
elements the array contains.

So your chief complaint is that information about the size of
the array is lost? Ok, then I'm a bit confused, because you also
mentioned Fortran as an example of a language that does not have
C's "problem"; however, Fortran also discards size information, and
it must be passed in as additional parameters (or otherwise
known). For this reason, canonical Fortran looks like:

      subroutine sum ( a, n, s )
      real s
      real a(n)
      do i = 1, n
         s = s + a(i)
      end do
      end

      real a(3)
      ...
      call sum(a,3,s)

It even supports passing subarrays in effectively the same way
as C++, by the fact that size is not part of the type. For example:

      real b(9)
      ...
      call sum(b(4),3,s)

to sum the middle 3 elements of the original 9 element array.

So at least for built-in arrays, Fortran, just like C/C++, doesn't
make size part of the type and provides no built-in operators to
query the size, bounds, etc.
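
For comparison, the C/C++ counterpart of the quoted Fortran, sketched
for illustration: the size travels as a separate parameter, and a
sub-range is passed by pointing into the middle of the array.

#include <cstddef>
#include <iostream>

// The size is not part of the parameter's type; it must be passed
// separately, just as in the Fortran example above.
double sum( const double* a, std::size_t n )
{
    double s = 0.0;
    for ( std::size_t i = 0; i != n; ++i )
        s += a[ i ];
    return s;
}

int main()
{
    double b[ 9 ] = { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
    double whole  = sum( b, 9 );      // the full array
    double middle = sum( b + 3, 3 );  // b[3]..b[5], like call sum(b(4),3,s)
    std::cout << whole << ' ' << middle << '\n';   // 45 15
}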

So anyhow, what difference do you see with regard to array size
information between Fortran and C?
The STL has been implemented in Ada. The problems doing so had
nothing to do with arrays; the problems had to do with the fact
that Ada's generics work somewhat differently than those in C++.

Ok, so you would be happy if built-in arrays were instead std::vector?
I.e. a class with some interface to the size, in this case .size(),
.begin(), etc., just like the standard interface that Ada offers to
array types (First, Last, etc.)?

So then what would the type of "new T[]" and the return type of
malloc(sizeof(T)*N) be? std::vector<T>?
And of course, one might reasonably ask why you would want to
use the STL iterator idiom, which follows the C idiom in

I think it is misleading to say the STL iterator idiom "follows"
the C idiom. Stepanov had abstract idioms and concepts in mind
drawn from a lineage quite distinct from C. It just so happened
by fortune that the C/C++ memory model and concrete pointer fit
very nicely into his abstract concepts.
requiring a separate object to contain all of the necessary
information, when you could put all the information in a single
object, resulting in a more powerful and simpler to use
abstraction.

That really is a separate and large topic. You would certainly have
trouble justifying in absolute terms the "all information", "more
powerful", and "simpler to use" claims. You could look to the recent
debates stirred up by Andrei's "iterators must go" range advocacy to
see many cogent arguments against your view above.

I don't think there is any sense in opening that debate here;
but I would like to know: do you advocate "range" concepts a la
Andrei (for generalizing sequences, I mean) or something else?

KHD
 

Stefan Ram

osmium said:
As I learned I kept looking for this wonderful magic
bullet in C++, kind of like structured programming.

It is the automatic memory management that C++ does
for you! No, wait, C++ does not have this feature. Sorry.

Well, regarding the topic of this thread: Lisp, today, has
lost some of its advantages compared to other programming
languages, because they have already copied many parts
from LISP. For example:

- The first garbage collector ever was written for a
LISP implementation. So when Java has a garbage
collector today, this comes from LISP. One might
say »from artificial intelligence research«.

- At the time LISP was created, FORTRAN was IIRC
unable to do recursive function calls, so possibly
that was implemented in LISP first, too. See

http://www-formal.stanford.edu/jmc/history/lisp/node3.html

There surely are more things that are used everywhere
in programming today and were done in LISP first.

On the other hand, many features today's Lisp programmers
are fond of, like macros and CLOS, were not a part of the
classical LISP.

So the 1950s »LISP« and the modern »(Common )Lisp« are
quite different languages.

One can see that Java (a language with a garbage collector
and run-time array index checks and without pointer
arithmetic) now executes most programs of the shootout
benchmarks just as fast as C++ (as given by 6 »1/1« entries
in the »Time« column):

http://shootout.alioth.debian.org/u32q/benchmark.php?test=all&lang=gpp&lang2=java&box=1

This benchmark was done with Java 1.6, but Java 1.7 is
reported to be even faster. The author of one benchmark
wrote:

»Java 5 <===18% faster===< Java 6 <===46% faster===< Java 7«

http://www.taranfx.com/blog/java-7-whats-new-performance-benchmark-1-5-1-6-1-7

He also wrote about a new garbage collector with »much
smaller pause times«.

My usual collection of quotations regarding garbage
collection might already be known to some readers of
this newsgroup:

»There were two versions of it, one in Lisp and one in
C++. The display subsystem of the Lisp version was faster.
There were various reasons, but an important one was GC:
the C++ code copied a lot of buffers because they got
passed around in fairly complex ways, so it could be quite
difficult to know when one could be deallocated. To avoid
that problem, the C++ programmers just copied. The Lisp
was GCed, so the Lisp programmers never had to worry about
it; they just passed the buffers around, which reduced
both memory use and CPU cycles spent copying.«

<[email protected]>

»A lot of us thought in the 1990s that the big battle would
be between procedural and object oriented programming, and
we thought that object oriented programming would provide
a big boost in programmer productivity. I thought that,
too. Some people still think that. It turns out we were
wrong. Object oriented programming is handy dandy, but
it's not really the productivity booster that was
promised. The real significant productivity advance we've
had in programming has been from languages which manage
memory for you automatically.«

http://www.joelonsoftware.com/articles/APIWar.html

»[A]llocation in modern JVMs is far faster than the best
performing malloc implementations. The common code path
for new Object() in HotSpot 1.4.2 and later is
approximately 10 machine instructions (data provided by
Sun; see Resources), whereas the best performing malloc
implementations in C require on average between 60 and 100
instructions per call (Detlefs, et. al.; see Resources).
And allocation performance is not a trivial component of
overall performance -- benchmarks show that many
real-world C and C++ programs, such as Perl and
Ghostscript, spend 20 to 30 percent of their total
execution time in malloc and free -- far more than the
allocation and garbage collection overhead of a healthy
Java application (Zorn; see Resources).«

http://www-128.ibm.com/developerworks/java/library/j-jtp09275.html?ca=dgr-jw22JavaUrbanLegends

»Perhaps the most important realisation I had while developing
this critique is that high level languages are more important
to programming than object-orientation. That is, languages
which have the attribute that they remove the burden of
bookkeeping from the programmer to enhance maintainability and
flexibility are more significant than languages which just
add object-oriented features. While C++ adds object-orientation
to C, it fails in the more important attribute of being high
level. This greatly diminishes any benefits of the
object-oriented paradigm.«

http://burks.brighton.ac.uk/burks/pcinfo/progdocs/cppcrit/index005.htm
 

James Kanze

As in make them std::vector? Why isn't that better done at the
library level as has already been done (in multiple different
ways to meet different needs)?

That's what was decided, but there are still repercussions.
There's no way you can create an std::vector with static
initialization, for example, and the initialization syntax isn't
the same. (The committee is working on the latter.)
In other words, how would you implement iterators for C-style
array sequences?
The same way you implement them for any other container. For
that matter, you usually inhibit the conversion of array to
pointer (by using pass by reference) when you want "iterators"
for C style arrays, using something like:
template< typename T, size_t n >
T*
end( T (&array)[ n ] )
{
    return array + n ;
    // or return &array[0] + n, if there was no conversion,
    // or just something like
    // return array.end(), if the language were defined
    // thusly.
}
Basically, you block the conversion because it involves loss
of important information concerning the type, mainly how
many elements the array contains.
So your chief complaint is that information about the size of
the array is lost?

Not only, with regards to C style vectors. My chief complaint
is that they follow completely different rules than other types
of objects. The decision to use pass by reference, rather than
the usual pass by value, should be made by the programmer. (And
yes, in both cases, length information should be preserved.)
Ok then I'm a bit confused because you also mentioned Fortran
as an example of a language that does not have C's "problem"
however Fortran also discards size information and it must be
passed in as additional parameters (or otherwise known).

Agreed, and now that I think of it, I'm not sure that Fortran is
a good example; Fortran's arrays don't convert to pointers,
which can then be abused, but you can't (or at least, you
couldn't when I used Fortran) assign an array to another array.
(It's hard to compare Fortran, of course, because it never uses
pass by value.)

The point is that objects in C++ have a specific behavior: this
holds for the basic types, for pointers, for structs, and in
fact, for every object type except arrays. (By default, anyway.
In C++, you can replace that behavior, at least in part, by
overloading operators and defining a copy and a default
constructor.) That default behavior includes things like copy
and assignment.
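
A small illustration of the asymmetry being described (a sketch, not
from the post): a struct containing an array copies and assigns like
any other object, while the bare array does not, and silently decays
instead.

struct Wrapped { int data[ 4 ]; };

int main()
{
    int a[ 4 ] = { 1, 2, 3, 4 };
    int b[ 4 ] = { 0, 0, 0, 0 };
    // b = a;            // error: arrays do not support assignment
    int* p = a;          // instead, 'a' silently converts to a pointer

    Wrapped wa = { { 1, 2, 3, 4 } };
    Wrapped wb = { { 0, 0, 0, 0 } };
    wb = wa;             // fine: structs copy and assign like any object

    return ( p == a && wb.data[ 0 ] == 1 ) ? 0 : 1;
}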
Ok so you would happy if built-in arrays were instead
std::vector? Ie a class with some interface to the size in
this case .size(), .begin(), etc just like the standard
interface that Ada offers to array types (First, Last, etc)?

That would be one solution. There are many. All I really
insist on is that arrays work like any other type---a struct
doesn't implicitly convert to a pointer to its first element in
just about any context; nor should an array. And of course, as
a side effect, indexation would be indexation, and not pointer
arithmetic.

Doing this does allow carrying the size around, so you could
then add all of the advantages that allows (like efficient
bounds checking). But that's really a second point---important,
but not as primordial as the first.
So then what would the type of "new T[]" and the return type
of malloc(sizeof(T)*N) be? std::vector<T>?

The return type of new T should be T*, for *all* T. Not just
for the cases where T is not an array. That's really part of
what I'm complaining about: it's totally aberrant that the
return type of new int and new int[n] be the same. And that you
have to use a different form of delete on them, because the
original type has been lost.
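
The point spelled out as code (illustrative only): both expressions
below yield a plain int*, yet they must be released with different
forms of delete.

int main()
{
    int* single = new int( 42 );   // one int
    int* block  = new int[ 10 ];   // ten ints -- same static type, int*

    // Nothing in the type records which form was used, so the programmer
    // has to remember which delete is required:
    delete   single;               // must not be delete[]
    delete[] block;                // must not be plain delete
    return 0;
}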
That really is a separate and large topic.

Agreed. But anyone who's worked with complex iterators to any
extent (filtering iterators, etc.) realizes what a pain it is
not being able to know the end from within the iterator. And
anyone who makes use of extensive functional decomposition with
e.g. one function determining the range, and another function
iterating over it, has suffered from the fact that you need two
separate objects to define the range.
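
For concreteness, a minimal sketch of the "single object" alternative
being argued for here (illustrative only; it is not Andrei's range
design nor any standard facility): the two iterators are bundled into
one object that knows its own end.

#include <iostream>
#include <iterator>
#include <vector>

// One object carrying both the current position and the end of the sequence.
template< typename Iterator >
class Range
{
public:
    Range( Iterator first, Iterator last ) : cur_( first ), end_( last ) {}
    bool done() const { return cur_ == end_; }
    void next()       { ++cur_; }
    typename std::iterator_traits< Iterator >::reference
         get() const  { return *cur_; }
private:
    Iterator cur_;
    Iterator end_;
};

int main()
{
    std::vector< int > v;
    v.push_back( 1 ); v.push_back( 2 ); v.push_back( 3 );

    // One function can build the range and another can consume it,
    // without two separate iterator objects being passed around.
    Range< std::vector< int >::iterator > r( v.begin(), v.end() );
    for ( ; !r.done(); r.next() )
        std::cout << r.get() << '\n';
}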
You would certainly have trouble justifying in absolute the
"all information", "more powerful", and "simpler to use"
claims.

Not in the least.
You could look to the recent debates stirred up by Andrei's
"iterators must go" range advocacy to see many cogent
arguments against your view above.
I don't think there is any sense in opening that debate here;
but, I would like to know do you advocate "range" concepts a
la Andrei (for generalizing sequences I mean) or something
else?

Not having seen what Andrei is advocating, I don't know. But it
should only take one object in order to iterate, since a
function can only return a single object to be used as a single
argument to another function.
 

James Kanze

"James Kanze" wrote:
<WRT Linux quality>
I think that statement epitomizes a huge problem I have had
with C++, and always will have. As I learned I kept looking
for this wonderful magic bullet in C++, kind of like
structured programming.

And that is your problem. There is no magic bullet. There are
a number of important paradigms which improve productivity, each
important, and the strength of C++ is that all of them are
possible in it. Any one may be better implemented in another
language, but the real productivity gain is in using whichever
one is appropriate in a given situation.
Once one really learns about structured programming there is
an enormous increase in productivity, perhaps as much as an
order of magnitude. As I worked myself through the process of
learning C++, I would say to myself, yes, this is nice, that
is nice, but where is the *really* good part? After many
years I have concluded that there is no really good part. It
is just a complicated agglomeration of pretty good ideas,
fitted together in one language. Someone mentioned, upthread,
a Swiss army knife, and I think it is an excellent metaphor.
My problem is that I detest Swiss army knives and Crescent
wrenches!

Perhaps. The screwdriver in a Swiss army knife probably isn't
as good as a purpose built screwdriver, but if you need to drive
a screw, it's a lot better than a hammer.
Change order of magnitude to 40% better and I am on board as a
C++ guy.
You can't make a nice, cohesive, consistent language (such as
Algol 60 or Pascal) from such diverse roots: one root is C,
cryptic beyond belief with '%' meaning modulo; the other is the
huge, 15-character-or-so, guru-selected names of things used in
the STL.

That's certainly a problem, and a more Pascal-like syntax (with
far fewer special characters) would certainly improve things.
But the number of special characters, or the use of {} instead
of begin/end, is, in the end, a detail. The real difference comes
with encapsulation (and access control), the ability to use
polymorphism when appropriate (and the fact that you're not
stuck with it when it isn't), the ability to use programming by
contract idioms (with private virtual functions), the ability to
define abstract types (like std::vector), and the ability to
separate interface (in the header file) and implementation (in
the source). None of these are unique to C++, and for any one,
you could probably find a better language, but having all of the
possibilities at hand makes C++ a powerful tool. Not perfect,
but it works, and the alternatives all seem to have some fatal
weakness.
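
A brief sketch of the "programming by contract with private virtual
functions" idiom mentioned above (often called the non-virtual
interface idiom; the class names here are invented for illustration):

#include <cassert>

class Connection
{
public:
    // Public, non-virtual interface: the contract is checked here once,
    // for every derived class.
    int send( const char* data, int length )
    {
        assert( data != 0 && length >= 0 );     // precondition
        int sent = doSend( data, length );
        assert( sent >= 0 && sent <= length );  // postcondition
        return sent;
    }
    virtual ~Connection() {}

private:
    // Private virtual: derived classes customize the behavior but
    // cannot bypass the checks in send().
    virtual int doSend( const char* data, int length ) = 0;
};

class LoopbackConnection : public Connection
{
private:
    virtual int doSend( const char*, int length ) { return length; }
};

int main()
{
    LoopbackConnection c;
    return c.send( "hi", 2 ) == 2 ? 0 : 1;
}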
 

Keith H Duggar

That's what was decided, but there are still repercussions.
There's no way you can create an std::vector with static
initialization, for example, and the initialization syntax isn't
the same. (The committee is working on the latter.)

Right. However those limitations of course apply to any UDT.
So given that you are arguing for making the built-in array
behave the same as other types I guess that is the price to
pay (at least for now).
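
To make the limitation concrete (a sketch; the list-initialization
shown in the comment is the C++0x work alluded to above):

#include <vector>

// A built-in array can be statically initialized; the data can sit in the
// executable image with no run-time construction at all.
static const int table[] = { 1, 1, 2, 3, 5, 8 };

// A std::vector cannot: it is built at run time, and (pre-C++0x) with a
// different, clumsier initialization syntax.
static const std::vector< int > vtable( table, table + 6 );

// C++0x/C++11 later added list-initialization, closing the syntax gap:
//     static const std::vector< int > vtable = { 1, 1, 2, 3, 5, 8 };

int main() { return vtable.size() == 6 ? 0 : 1; }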
In other words, how would you implement iterators for C-style
array sequences?
The same way you implement them for any other container. For
that matter, you usually inhibit the conversion of array to
pointer (by using pass by reference) when you want "iterators"
for C style arrays, using something like:
template< typename T, size_t n >
T*
end( T (&array)[ n ] )
{
    return array + n ;
    // or return &array[0] + n, if there was no conversion,
    // or just something like
    // return array.end(), if the language were defined
    // thusly.
}
Basically, you block the conversion because it involves loss
of important information concerning the type, mainly how
many elements the array contains.
So your chief complaint is that information about the size of
the array is lost?

Not only, with regards to C style vectors. My chief complaint
is that they follow completely different rules than other types
of objects. The decision to use pass by reference, rather than
the usual pass by value, should be made by the programmer. (And
yes, in both cases, length information should be preserved.)
Ok then I'm a bit confused because you also mentioned Fortran
as an example of a language that does not have C's "problem"
however Fortran also discards size information and it must be
passed in as additional parameters (or otherwise known).

Agreed, and now that I think of it, I'm not sure that Fortran is
a good example; Fortran's arrays don't convert to pointers,
which can then be abused, but you can't (or at least, you
couldn't when I used Fortran) assign an array to another array.
(It's hard to compare Fortran, of course, because it never uses
pass by value.)

The point is that objects in C++ have a specific behavior: this
holds for the basic types, for pointers, for structs, and in
fact, for every object type except arrays. (By default, anyway.
In C++, you can replace that behavior, at least in part, by
overloading operators and defining a copy and a default
constructor.) That default behavior includes things like copy
and assignment.

Would you have any problem if built-in arrays were simply
removed entirely from the core language? Replaced instead
by some equivalent of a raw block new/malloc.
That would be one solution. There are many.

What other solutions besides something that is effectively
equivalent? Is removing them entirely a solution?

All I really
insist on is that arrays work like any other type---a struct
doesn't implicitly convert to a pointer to its first element in
just about any context; nor should an array. And of course, as
a side effect, indexation would be indexation, and not pointer
arithmetic.

Doing this does allow carrying the size around, so you could
then add all of the advantages that allows (like efficient
bounds checking). But that's really a second point---important,
but not as primordial as the first.
So then what would the type of "new T[]" and the return type
of malloc(sizeof(T)*N) be? std::vector<T>?

The return type of new T should be T*, for *all* T. Not just
for the cases where T is not an array. That's really part of
what I'm complaining about: it's totally aberrant that the
return type of new int and new int[n] be the same. And that you
have to use a different form of delete on them, because the
original type has been lost.

And what about malloc? Should such raw allocation capability
be defined in the core library? Or should there be an equivalent
core language operator? Ie something that returns a simple
pointer to a block.
Agreed. But anyone who's worked with complex iterators to any
extent (filtering iterators, etc.) realizes what a pain it is
not being able to know the end from within the iterator. And
anyone who makes use of extensive functional decomposition with
e.g. one function determining the range, and another function
iterating over it, has suffered from the fact that you need two
separate objects to define the range.


Not in the least.


Not having seen what Andrei is advocating, I don't know. But it
should only take one object in order to iterate, since a
function can only return a single object to be used as a single
argument to another function.

Well as I said, this isn't the thread for that debate. For starters
if you are interested here is one major thread on the issue

'Andrei's "iterators must go" presentation'

http://groups.google.com/group/comp...d7d869060/024054e2168a86ff?#024054e2168ak86ff

I don't think you participated in that one so you may have missed
it entirely and you might like to read it. If you do I'd be most
interested in your thoughts. But I think we should start another
thread for that.

Thanks.

KHD
 

James Kanze

Right. However those limitations of course apply to any UDT.
So given that you are arguing for making the built-in array
behave the same as other types I guess that is the price to
pay (at least for now).

Yes. More generally, it's the price we pay for evolution,
rather than revolution (i.e. compatibility with historical
situations). On the other hand, evolution means benefiting from
previous experience, which is a definite advantage; every really
new language I've seen has introduced its own set of problems,
not foreseen from the start for lack of experience with the
idiom.

[...]
Would you have any problem if built-in arrays were simply
removed entirely from the core language? Replaced instead by
some equivalent of a raw block new/malloc.

If built-in arrays were simply removed, then they'd have to be
replaced with something with a similar syntax, to avoid breaking
code. What I'd have liked, way back when, would have been that
something like "int a[N];" have semantics similar to "struct
{int a[N];}", i.e. it acted like a real object, which could be
assigned and passed and returned by value, there was no implicit
conversion to int*, and &a had the type int (*)[N]. Or even
int (*)[]; i.e. the size is lost when you take the address. The
point is that you only lose the type information explicitly.
But I think I would prefer int (*)[N] with an implicit
conversion to int (*)[]---functions that want to treat arrays of
variable size can; those that require a specific size can also
insist on that. Ideally, there would be some means of
recovering the initial size from the array, but if we consider
the date when all of this was being specified, I suspect that
that would be asking too much.
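
For anyone who wants the "struct { int a[N]; }" idea spelled out, a
sketch (essentially what C++11 later standardized as std::array; the
name here is invented):

// An array wrapped in a struct behaves like any other object: it can be
// copied, assigned, passed and returned by value, and it never silently
// decays to a pointer.
template< typename T, unsigned N >
struct FixedArray
{
    T elems[ N ];
    T&       operator[]( unsigned i )       { return elems[ i ]; }
    const T& operator[]( unsigned i ) const { return elems[ i ]; }
    unsigned size() const                   { return N; }
};

FixedArray< int, 3 > increment( FixedArray< int, 3 > a )  // by value
{
    for ( unsigned i = 0; i != a.size(); ++i )
        ++a[ i ];
    return a;
}

int main()
{
    FixedArray< int, 3 > x = { { 1, 2, 3 } };
    FixedArray< int, 3 > y = increment( x );   // x itself is unchanged
    return ( x[ 0 ] == 1 && y[ 0 ] == 2 ) ? 0 : 1;
}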

Of course, all of this neglects the fact that Kernighan and
Ritchie were coming from B, where everything is just a machine
word (and "arrays" are a machine word initialized to point to
the first element), where it is all coherent and makes sense.
And that arrays were present from the first; structs (and other
elaborated types which behave as first class objects) came
later.
What other solutions besides something that is effectively
equivalent?

I don't know. To tell the truth, I've not thought about it
much, since practically speaking, nothing's going to change
(except the initialization syntax for class types like vector).
Is removing them entirely a solution?

Only if we consider breaking all existing code a solution. :-)

I'm really only complaining about "what should have been", 30 or
more years ago. I don't think there's much we can do about it
today.
insist on is that arrays work like any other type---a struct
doesn't implicitly convert to a pointer to its first element in
just about any context; nor should an array. And of course, as
a side effect, indexation would be indexation, and not pointer
arithmetic.
Doing this does allow carrying the size around, so you could
then add all of the advantages that allows (like efficient
bounds checking). But that's really a second point---important,
but not as primordial as the first.
So then what would the type of "new T[]" and the return type
of malloc(sizeof(T)*N) be? std::vector<T>?
The return type of new T should be T*, for *all* T. Not just
for the cases where T is not an array. That's really part of
what I'm complaining about: it's totally aberrant that the
return type of new int and new int[n] be the same. And that you
have to use a different form of delete on them, because the
original type has been lost.
And what about malloc?

Allocating an array using malloc should work exactly like
allocating an int using malloc. E.g.:

typedef int array[10];

int* pi = (int*)malloc( sizeof(int) );
array* pa = (array*)malloc( sizeof(array) );

(Note that the above is actually legal today. It does mean that
to access the array, you need to write (*pa). I wonder if
this awkwardness didn't also play a role in the early rules,
although it seems natural to me, and it fully parallels what I
do with a dynamically allocated int.)
Should such raw allocation capability be defined in the core
library?

Yes. At least, I think so; I'm not 100% sure. C++ should
continue to be a language that can be used at the lowest level
as well, for e.g. things like kernel code. On the other hand,
if you're defining memory allocation at this level, then you
probably should define other things, like IO, at this level as
well. C (and C++) doesn't, leaving that up to the system (Posix
or Windows I/O, etc.)
 

Jerry Coffin

[ ... ]
- At the time, LISP was created, FORTRAN was IIRC
unable to do recursive function calls, so possibly
that was implemented in LISP first, too. See

Simon, Newell and Shaw's IPL implemented recursion before McCarthy
even started working on Lisp.

As for the rest, you seem far more interested in advocating a
viewpoint than being accurate. Characterizing six out of thirteen as
"most" is little short of a blatant lie. Quoting IBM's Java web site
hardly qualifies as a disinterested source. Quoting Joel Spolsky
makes you seem overly credulous -- while he seems like a perfectly
decent guy, he doesn't seem to have any credentials to qualify him as
an authority on much of anything.

Those, however, pale to insignificance when you quote Ian Joyner. If
you read his diatribe carefully, essentially every argument he makes
works out as "C++ is different from (Eiffel|Java), and therefore
wrong" (or the variant, "Bertrand Meyer advocates something
different, therefore C++ is wrong").

Accepting and quoting such arguments as if they meant something
indicates that you're not even attempting to think objectively about
the subject matter.
 

Isaac Gouy

On 22 Oct, 10:29, (e-mail address removed)-berlin.de (Stefan Ram) wrote:
-snip-
  One can see that Java (a language with a garbage collector
  and run-time array index checks and without pointer
  arithmetic) now executes most programs of the shootout
  benchmarks just as fast as C++ (as given by 6 »1/1« entries
  in the »Time« column):

http://shootout.alioth.debian.org/u32q/benchmark.php?test=all&lang=gp...
-snip-

"just as fast" is given by »-« not »1/1«
 
