Teaching new tricks to an old dog (C++ -->Ada)

  • Thread starter Turamnvia Suouriviaskimatta
Georg Bauhaus

T said:
I'm with Ioannis on this one... I fail to see how p[-1] is any less an
index on an array than somearray[1] is.

One difference is a 1:1 correspondence of index values and
indexed items. This suggests not using p[-1] or somearray[1]
interchangeably:

There is one named index type.
There is one named array type.
The index type is used in the declaration of the array type,
stating the set of permissible array index values.

type Item_Array is array (Index_Type) of Item;

Values in the index type designate items in the problem domain.
This propagates into the declaration of the array type.
It also propagates into its use.

Ioannis really started, I think, from this 1:1 correspondence.
He had (intuitively?) mapped these kinds of array to std::map
in sample programs. Conceptually this seems right because the
specific index values and the items are associated 1:1 in the
array.

From this perspective, after choosing an associative container
for representing the (index, item) pairs, you can no longer
use somemap[-1] or somemap[1] interchangeably. The perspective
has shifted from computing offsets to an association. 1 is
associated with one item in the array, -1 is associated with
another.

In this sense, p[-1] and somearray[1] are different.
In a sense, -1 and 1 are treated as names rather than
computable index values.

(Ada-like arrays have STL's key_type so to speak.)


Georg
 

Robert A Duff

Ed Falis said:
This is just a matter of simile. A tagged type and derivatives of
tagged types provide dispatching and other typical OOP facilities.

I don't see it that way. Tagged types provide inheritance and type
extension. Ada's *class* feature provides run-time polymorphism
(i.e. dispatching calls).

That is, you can use tagged types all you like, but you'll never get any
dispatching until you say 'Class. Ada's notion of "class" (i.e. "class
of types") doesn't exactly match what C++ calls "class".
Where the concept differs from the class concept is that visibility is
orthogonal, provided by packages and other more traditional Ada
facilities, while the class concept combines the two.

Yeah, that, too. C++ wraps all three things into one language feature
(well, sort of -- there are namespaces), whereas Ada splits them out.
I was a bit taken aback by Jerry Coffin's "idiocy" remark, since I
can see advantages of both ways. I somewhat prefer the Ada "splitting"
way. Or maybe his "idiocy" comment was merely directed at the words:
"tagged record". (Of course, it's usually "tagged private", not "tagged
record".)

Ada is not the only language that splits things up differently from the
mainstream OOP languages. CLOS, for example, comes to mind.

- Bob
 

Ioannis Vranos

Martin said:
Sure it is useful, very useful indeed. Only you started off wrong:

namespace Ada
{
template <
class Base_Type,
Base_Type The_First,
Base_Type The_Last>
class Range
{
.....
}

template <
class Element_Type,
class Index_Type> // derived from Range.
class Array
{
.....
}
}



I did not think the code up carefully; I just typed a quick, limited demonstration.
 

Robert A Duff

Martin Krischik said:
All true. But does that powerful construct work inside a multitasking
environment?

I see no reason why such powerful exception handling features cannot be
task/thread safe. In fact, many languages are superior to *both* Ada
and C++ in this regard.
And thread safety is not all. An Ada system implementing the (optional) Annex E
needs exceptions which can be passed from one process to another, both
processes running on different computers. Just like CORBA exceptions.

Same comment for distribution: "I see no reason..."
Of course, exceptions with attached information need some flattening
to be passed across partitions.

Followups disobeyed, sorry. I mean, you're answering somebody who's
probably *not* reading comp.lang.ada, so I thought he should see *my*
response.

- Bob
 

Dr. Adrian Wrigley

On Thu, 24 Mar 2005 17:59:39 -0500, jayessay wrote:

....
of _expressive capability_ in either Ada or C++ or Eiffel or ?? is so
_low_ that it again is simply amazing to see people so vehemently
arguing over differences that are almost invisible.

I think the point that many of the enthusiasts here are making is that
C++ and Ada have very different *outcomes* in real-world programming
tasks. The differences may be small in Computer Science terms,
and may be small in relation to CLOS/Prolog/Lisp, but in practical
commercial terms, the differences are (it is being claimed)
hugely significant.

The Ada claims are very bold in terms of better code defect density,
portability, total cost and maintainability for large projects -
particularly with real-time and/or distributed aspects.
The C++ claims are equally bold in terms of expressiveness,
performance and conciseness (what else?).

*I* think the evidence is now very strong that programming language
choice *is* massively important for overall success of large projects.
The problem is a lack of scientific method in determining this,
resulting in a big analytical problem with (also massive)
confounding factors.


one snippet from a Google search:
"Gartner is now saying 70% of all Java projects so far have failed"
but only "40% of all projects fail."

and
"C++ programs typically have roughly six times the lifetime ownership
costs of equivalent programs written in C, Ada or Fortran, and fewer
than one third of programming projects begun in C++ are completed."

It will be really interesting if the Navy switches mainly to
Java, to see if it affects reliability or cost. This should be
taken as a golden opportunity to research the outcome (but
still falls short of a controlled study).

Can anybody post links quantifying how much better C++ is
than C or Ada (in terms of project cost and outcome)?

Of course I enjoy programming in C/C++ very much, having
been using them for a decade or two. But I do find the
total debugging time so much shorter in Ada (I've almost
forgotten how to use the debugging tools!). This must
have an impact on the bottom line.
 

Ioannis Vranos

fabio said:
Sorry, maybe I was unable to explain the concept because of my poor
English. Please try to re-read the last paragraph, because you're
missing the point. I know that the standard guarantees a minimum
number of bits per type.

I was reasoning about porting C++ code from a machine providing 32-bit
"int"(s) to another machine where "int"(s) are only 16 bits. In that
situation, if you forget to substitute every "int" with "long", you
don't get any error from the compiler.


True. I wish we got such errors/warnings. I still see no reason why
compiler writers do not implement such errors/warnings, even though they
are not required by the standard.

That ported
program can execute for years without any problem. When that variable
is assigned a value like 32768 you get -32768 and then negative numbers
increasing towards zero for each "++var;". Remember that this
assignment was considered allowable, since the programmer chose an
"int" type for that variable knowing that it is stored in 32 bits on
his development machine.


Yes. Also I see no reason why explicit use of ranges and constraints
can't be introduced in a future standard revision as additional keywords
without breaking existing code.


Maybe that won't crash your program yet it is worse, because a bug has
been inserted and it may pass unnoticed for years.

What I want to say is that in C++ (1) you must know how many bits are
reserved for every type you use and (2) you must carefully change types
when compiling to different targets.


Yes, C++ is less safe than Ada (but still safer than C).


Instead in Ada a programmer can just write code that is portable across
different platforms without worrying of bits of storage for types. Just
ask the compiler "I need to store values between -80_000 and 80_000, so
please choose yourself how many bits I need", and you can be sure that
no bug will appear when porting code from a machine with a 32-bit "int"
to one with a 16-bit "int".


OK, I already know that Ada is safer than C++, when talking about
"implicit" safety as opposed to "explicit" safety, and C++ *is* the
second. Also I consider C++ being more "implicit" *paradigm-expressive*
than Ada, as opposed to explicit-expressiveness (about the same that
someone mentioned as expressiveness and expressibility in another message).


In other words I expect an intermediate C++ programmer (like myself) to
know how to write bullet-proof code.


The interesting thing in Ada is that either you can let your compiler
to choose how to internally reserve storage, or you can force a lot of
attributes for representation like the example that I provided:

type Counter is new Integer;
for Counter'Size use 48;
for Counter'Alignment use 16;

The above type declaration is something that can't be done in C++, as
far as I know. In order to do the same kind of thing you must use some
non-standard compiler extension. Ada is simultaneously lower-level and
higher-level than C++.


If you mean "more" I disagree. It simply provides more "implicit" safety
than C++. In the low-level part I think they are about the same
"implicit"-expressive, about the high level part (abstraction) I think
C++ is better than Ada and with more complete paradigm-support
("implicit" expressiveness).
 

Ioannis Vranos

Chad said:
What do you do when you need to map the data structure to a specific
space in memory for a memory-mapped sensor array e.g. the temperature
sensors for levels -2 to 8 in a building?


I am not sure I understood what you mean by this, but in C++ you can
use a specific numeric address in memory in this style:


int *p= reinterpret_cast<int *>(0x5556);


Of course this is system-specific and if nothing exists there will be a
problem. :)


You can also do:


// Completely safe and portable
#include <new>

class SomeClass {};


int main()
{
// 1000 bytes on the stack
unsigned char array[1000];

// Create SomeClass object in the beginning of the array
SomeClass *p= new(array)SomeClass;

// Create a second SomeClass object after the first
SomeClass *r= new(array+sizeof(*r))SomeClass;
}


With placement new you can create objects wherever you want. You can
also define your own versions of placement new, new, delete, new[],
delete[], etc., both globally and as members of a class:


#include <cstddef>
#include <new>
#include <iostream>


class SomeClass
{
static std::size_t occupied;
static unsigned char *buffer;
static const std::size_t MAX_SIZE=5*1024;

public:

~SomeClass() { std::cout<<"Destructor called!\n"; }

void *operator new(std::size_t size)
{
using namespace std;

cout<<"Class member operator new was called!\n";

if(occupied+size>MAX_SIZE)
throw bad_alloc();

occupied+=size;

return buffer+occupied-size;
}

void operator delete(void *p)
{
std::cout<<"Class member operator delete was called!\n";

occupied-=sizeof(SomeClass);
}
};

std::size_t SomeClass::occupied=0;
unsigned char *SomeClass::buffer=new unsigned char[SomeClass::MAX_SIZE];


int main()
{
// The member operator new is called implicitly
SomeClass *p=new SomeClass;

delete p;
}


So objects can undertake their own memory management. C++ standard
library containers use allocators, with the default one using the
default operator new. The library also gives you the ability to define
your own allocators, and provides facilities to manage raw memory
manually, but I do not know these yet.



Some pieces of Chapter 19 of TC++PL3 on the later:


"Implementers of containers often allocate() and deallocate() objects
one at a time. For a naive implementation of allocate(), this implies
lots of calls of operator new, and not all implementations of operator
new are efficient when used like that. As an example of a user-defined
allocator, I present a scheme for using pools of fixed-sized pieces of
memory from which the allocator can allocate() more efficiently than can
a conventional and more general operator new().

I happen to have a pool allocator that does approximately the right
thing, but it has the wrong interface (because it was designed years
before allocators were invented). This Pool class implements the notion
of a pool of fixed-sized elements from which a user can do fast
allocations and deallocations. It is a low-level type that deals with
memory directly and worries about alignment:"


Some other facilities:

"19.4.4 Uninitialized Memory

In addition to the standard allocator, the <memory> header provides a
few functions for dealing with uninitialized memory. They share the
dangerous and occasionally essential property of using a type name T to
refer to space sufficient to hold an object of type T rather than to a
properly constructed object of type T.

The library provides three ways to copy values into uninitialized space:"

and it talks about uninitialized_copy, uninitialized_fill and
uninitialized_fill_n.


Then, other facilities are mentioned like

"Algorithms often require temporary space to perform acceptably. Often,
such temporary space is best allocated in one operation but not
initialized until a particular location is actually needed.

Consequently, the library provides a pair of functions for allocating
and deallocating uninitialized space:

template <class T> pair<T*, ptrdiff_t>
get_temporary_buffer(ptrdiff_t); // allocate, don't initialize

template <class T> void return_temporary_buffer(T*); // deallocate, don't destroy

A get_temporary_buffer<X>(n) operation tries to allocate space for n or
more objects of type X.

If it succeeds in allocating some memory, it returns a pointer to the
first uninitialized space and the number of objects of type X that will
fit into that space; otherwise, the second value of the pair is zero.
The idea is that a system may keep a number of fixed-sized buffers ready
for fast allocation so that requesting space for n objects may yield
space for more than n. It may also yield less, however, so one way of
using get_temporary_buffer() is to optimistically ask for a lot and then
use what happens to be available."


And other stuff, it is an entire chapter.



C++ is very customizable: you can define handler functions for cases
where uncaught exceptions occur, define your own behaviour when a memory
allocation error occurs instead of the default throwing of bad_alloc, etc.


One of C++'s design ideals has been

"Leave no room for a lower level language except assembly".
 

Ioannis Vranos

Georg said:
One difference is a 1:1 correspondence of index values and
indexed items. This suggests not using p[-1] or somearray[1]
interchangeably:

There is one named index type.
There is one named array type.
The index type is used in the declaration of the array type,
stating the set of permissible array index values.


Again, one can easily define an array container that accepts
user-defined indices. The philosophical question that arises is *why no
one has done it to this day*.

However, I am going to build one some weekend (perhaps this one!) to
see if I can find any real use for it. Doing it sounds really simple:


template<class T, int MIN, int MAX>
class Array: public std::vector<T>
{
// Only provide definitions for at() and operator[] that
// really *only* call the base ones with the index shifted by MIN.
// And the few constructors doing the same, *only* explicitly passing
// the *same* arguments to the base constructors, plus checking MIN and
// MAX (only once).
};


I think it is *that* simple.

Of course it is only about container index ranges, not stored value ranges.


type Item_Array is array (Index_Type) of Item;

Values in the index type designate items in the problem domain.
This propagates into the declaration of the array type.
It also propagates into its use.

Ioannis really started, I think, from this 1:1 correspondence.
He had (intuitively?) mapped these kinds of array to std::map
in sample programs. Conceptually this seems right because the
specific index values and the items are associated 1:1 in the
array.


For simple things I have been using a vector like this:


[real situation comment:]
// For [-200, -101] values of Something
vector<int> counter(100);


[something is in range of -200, -101 (by concept)]
counter[something+200]++;


For more diverse things, maps are suitable (I guess in Ada too).

From this perspective, after choosing an associative container
for representing the (index, item) pairs, you can no longer
use somemap[-1] or somemap[1] interchangeably. The perspective
has shifted from computing offsets to an association. 1 is
associated with one item in the array, -1 is associated with
another.

In this sense, p[-1] and somearray[1] are different.
In a sense, -1 and 1 are treated as names rather than
computable index values.

(Ada-like arrays have STL's key_type so to speak.)


key_type is used only in maps!
 

Jerry Coffin

Robert A Duff wrote:

[ ... ]
Yeah, that, too. C++ wraps all three things into one language
feature (well, sort of -- there are namespaces), whereas Ada
splits them out. I was a bit taken aback by Jerry Coffin's
"idiocy" remark, since I can see advantages of both ways. I
somewhat prefer the Ada "splitting" way. Or maybe his "idiocy"
comment was merely directed at the words: "tagged record". (Of
course, it's usually "tagged private", not "tagged record".)
From a practical viewpoint, I've seen little real advantage to
requiring another construct to control visibility -- though given the
pre-existence of packages in Ada, continuing to use them when OO
support was added can hardly be seen as a surprise.

My real objection was (and is) entirely the terminology -- even in
assembly language, I think 'tagged' would be better avoided in this
situation as being far too closely tied to the implementation rather
than the meaning. In anything that attempts to provide even slightly
more abstraction, it strikes me as truly egregious.

At the same time, I'll also admit that the spelling used for a keyword
or two is rarely a good reason to condemn (or condone) an entire
language. In the end, if somebody's reading or writing code in either
Ada or C++, they should be sufficiently familiar with the language to
get past minor things like this -- that, however, doesn't excuse the
offense, but merely makes it easier to ignore. Furthermore, one of the
advantages often claimed for Ada is readability to people who don't use
the language on a regular basis -- but that doesn't seem (at least to
me) to be the case here.

I don't question Ada's expressiveness in a few specific areas, but IMO,
calling it even "fair" would qualify as quite generous at least wrt to
exception handling, object orientation or generics. I find this
particularly interesting since among mainstream languages it was
essentially the first to support generics at all, and was preceded
primarily by PL/I in exception handling.
 

Ioannis Vranos

Vinzent said:
That would simply depend on your definition of "intermediate". :)


.... which is knowing between 50% and 75% of the language.


Little story: Yesterday, a colleague of mine didn't believe me that
realloc() would copy the memory contents if necessary until I showed
him the reference. And he is doing C for more than ten years now.


Then he is definitely less than intermediate level in C. Besides that,
C90 is relatively small, and he should have learned it *all* by reading
K&R 2 in a matter of a few months, in a not-in-a-hurry mode.
 

Ioannis Vranos

Martin said:
At first: in this and other messages you had set the follow-ups to
comp.lang.ada, which means we cannot follow the discussion anymore!


Martin said:
Actually there are only 3 of them. And they are useful - two examples:

I have worked a lot with SQL and I always found it cumbersome to map the
SQL string type to C++. Most SQL string types are bounded - they have a
maximum size. std::string (and IBM's IString) is unbounded - so before you
can store your string inside the database you have to make a sanity check.
In Ada I could use Ada.Strings.Bounded_Strings instead.

The other example is CORBA. CORBA has two string types: string and
string<size>. In C++ both are mapped to std::string - and the CORBA/C++
mapping must check the size of the string. In Ada they are mapped to
Ada.Strings.Unbounded_Strings and Ada.Strings.Bounded_Strings.



What about using a const string?

Well, Ada 95 added a 200-character information string.





All true. But does that powerful construct work inside a multitasking
environment? It is indeed true that Ada exceptions are restricted - but
they are thread-safe. And thread-safe means that an exception may be
raised in one thread and caught in another. Exceptions raised inside a
rendezvous may even be caught in both threads participating in the
rendezvous.

And thread safety is not all. An Ada system implementing the (optional)
Annex E needs exceptions which can be passed from one process to
another, both processes running on different computers. Just like CORBA
exceptions.



In C++, multithreading is platform-specific. I think this is better than
Ada, but maybe I am just used to it.
 

Ioannis Vranos

Martin Krischik wrote:

--->> At first, in this and other messages you had set the follow-ups to
--->> comp.lang.ada which means we can not follow the discussion
--->> anymore!]

It makes no difference with interfaces, as interfaces hold no data.
That's the Java trick.


Being technically accurate, I have to say it is not a trick. It is a
subset of the OO paradigm.

ISO C++ provides the ability to define and inherit both interfaces
and complete classes.


Here is an interface:

#include <iostream>

class SomeInterface
{
public:
virtual void somefunc() =0;

// Place some definition if you want!
// Only one method needs to be pure ( =0)
virtual void func() { /* ... */ }

virtual ~SomeInterface() {}
};


class SomeClass: public SomeInterface
{
public:
void somefunc()
{
std::cout<<"SomeClass::somefunc() was called!\n";
}
};


int main()
{
// Error: No instances of SomeInterface can be created
// since it contains pure methods.
// SomeInterface obj;

SomeInterface *p= new SomeClass;

p->somefunc();

delete p;
}


C:\c>temp
SomeClass::somefunc() was called!

C:\c>
 

Ioannis Vranos

--->> At first, in this and other messages you had set the follow-ups to
--->> comp.lang.ada which means we can not follow the discussion
--->> anymore!]


Martin said:
Yes, and process/system boundaries when Annex E is implemented.

Martin



ISO C++ defines no threading facilities so far (but *care has been
taken* to make it easy to implement the algorithms and container
operations as thread-safe).

The only multithreading I know is the .NET lock-based multithreading. I
can't understand how in such an environment you can expect to catch
exceptions reliably. I can only assume that Ada's built-in
multithreading mechanism is not lock-based.



Here is a .NET example:


#using <mscorlib.dll>

using namespace System;
using namespace System::Threading;


class SomeException
{};


__gc class SomeClass
{
int index;

//...

public:

// ...


void DoSomething()
{
Monitor::Enter(this);

throw SomeException();

// Modify index

Monitor::Exit(this);
}

void DoSomethingElse()
{
Monitor::Enter(this);

// Modify index

Monitor::Exit(this);
}

// ...
};


int main() try
{
SomeClass *ps= __gc new SomeClass;


Thread *pthread1= __gc new Thread ( __gc new ThreadStart(ps,
&SomeClass::DoSomething) );

Thread *pthread2= __gc new Thread ( __gc new ThreadStart(ps,
&SomeClass::DoSomethingElse) );


//Start execution of ps->DoSomething()
pthread1->Start();

//Start execution of ps->DoSomethingElse()
pthread2->Start();
}

catch(SomeException)
{
Console::WriteLine("SomeException caught in main thread!\n");
}


How can you expect to catch the exception in the main thread for
example, since it terminates immediately after the two calls in the try
block?



The proper way in .NET (this isn't production-level code BTW):



#using <mscorlib.dll>

using namespace System;
using namespace System::Threading;

class SomeException
{};

__gc class SomeClass
{
int index;

//...

public:

// ...


void DoSomething() try
{
Monitor::Enter(this);

throw SomeException();

// Modify index

Monitor::Exit(this);
}

// Handle the exception locally, inside the thread that raised it
catch(SomeException)
{
// ...
}

void DoSomethingElse()
{
Monitor::Enter(this);

// Modify index

Monitor::Exit(this);
}

// ...
};


int main() try
{
SomeClass *ps= __gc new SomeClass;


Thread *pthread1= __gc new Thread ( __gc new ThreadStart(ps,
&SomeClass::DoSomething) );

Thread *pthread2= __gc new Thread ( __gc new ThreadStart(ps,
&SomeClass::DoSomethingElse) );


//Start execution of ps->DoSomething()
pthread1->Start();

//Start execution of ps->DoSomethingElse()
pthread2->Start();
}

catch(Exception *pe)
{
Console::WriteLine("{0}", pe->Message);
}
 

Jeremy J

And the level
of _expressive capability_ in either Ada or C++ or Eiffel or ?? is so
_low_

You don't even know which languages you are talking about!?!? You
have to compare lisp to language "??". Please avoid posting flamebait
such as this. Or maybe create a newsgroup called
comp.lang.expressiveness and mail me in a few years when you compile a
FAQ.

Jeremy J.
 

Larry Kilgallen

Vinzent 'Gadget' Hoefler wrote:


Then he is definitely less than intermediate level in C. Besides that,
C90 is relatively small, and he should have learned it *all* by reading
K&R 2 in a matter of a few months, in a not-in-a-hurry mode.

Is "intermediate" intended to describe what portion someone knows
of all possible information about a language ?

Or is "intermediate" intended to describe where someone ranks in
language knowledge across all programmers who use that language ?

I am reminded of humorist Garrison Keillor describing a community
quite proud that all its children are above average.
 

Jared

jayessay said:
Lisp macros are _not_ a
Lisp macros are full
Things like C/C++ preprocessor/"macro" simplistic text
substitutions are not
You can literally create
You can also
None of this is at all doable _in_ things like
You would have to write a
<snip>

And that is why $YOUR_LANGUAGE should allow arbitrary array indices. Or
should have all types be anonymous. Or should use structural type
equivalence. Or should combine data and method declarations for object
types. <Looks around guiltily.> Or should have a preprocessor. Or
infer all types. Or whatever. Just like $MY_LANGUAGE.

There's a reason I don't use your language. I assume there's a reason
you don't use mine. In my case, it's because I'm a hobbyist; I go with
whatever appeals to me. ML does and so does Piccola, and so do Cecil and
Aldor, to a certain extent, and I'll learn them when I get around to it.
I'm here because Ada appeals to me, although that's obscured by the
cross-posting. Most of the people here seem to be professionals; they
use whatever they're told to use. Thus the cynical comments about Ada
and C++, describing the way they were, rather than the way they are now.
Robert Duff made a comment a while ago about how silly most (I would
say without much hyperbole 99+%) of the points in these threads would
be to Lisp (and Smalltalk) folks. I couldn't agree more. You are all
arguing over differences that mean almost nothing looked at from these
other perspectives.

This is quite correct, but also largely irrelevant. The general context
of a language flamewar is that every feature $YOUR_LANGUAGE has that
$MY_LANGUAGE doesn't have is unnecessary, and a clear sign of its
inferiority. Any feature $MY_LANGUAGE has that $YOUR_LANGUAGE lacks is
absolutely essential, and a clear sign of its superiority.

A functional language like Lisp lacks many of the features being argued
about here; the obvious conclusion is that Ada and C++ are painfully
inferior. Similarly, most of Lisp's features, such as its macro system
or currying, are absent from the imperative languages; a clear sign of
Lisp's superiority.

Yes, a person can do all kinds of neat things in Lisp. A good programmer
can do all kinds of neat things elegantly in any language, and a bad
programmer cannot.

In the interest of full disclosure, I should add that I am not a good
programmer.

And the level
of _expressive capability_ in
Aldor?

is so
_low_ that it again is simply amazing to see people so vehemently
arguing over differences that are almost invisible.

I'm sure you meant expressive convenience. Turing completeness ensures
that "_expressive capability_" is equivalent, at least, for functional
computation. On the other hand, you seem so vehement about it, I'm not
really sure. I've already dealt with the conclusion I draw from your
personal expressive capability in these languages.

Anyway, first, a quote:

"The sooner we can forget that FORTRAN has ever existed, the better, for
as a vehicle of thought it is no longer adequate: it wastes our
brainpower, is too risky and therefore too expensive to use."
- E.W. Dijkstra, EWD 340

The arguments being carried on here echo this complaint. Implicit in the
arguments for both sides is that the one wastes brainpower while the
other conserves it. That's fairly important, given that brainpower is
the only tool a programmer really has. As for myself, I side with Ada,
and provide as evidence the followup to one of the many code fragments in
this thread. "A clever trick." Dijkstra again:

"The competent programmer is fully aware of the strictly limited size of
his own skull; therefore he approaches the programming task in full
humility, and among other things he avoids clever tricks like the
plague."
 

Wes Groleau

Jerry said:
Of course, in well-written code, you just wouldn't do anything like
this at all -- a picture of a wagon is NOT a wagon, and does not
satisfy the Liskov Substitution Principle, so having Picture_Of_A_Wagon
derive (publicly) from Wagon is just plain wrong.

It was just an illustration of the concept, which does not
hold sufficient interest for me to spend hours engineering
the question.
 

Wes Groleau

Ioannis said:
In C++, multithreading is platform-specific. I think this is better than
Ada, but maybe I am just used to it.

Of course it's better--it's job security.
Why would anyone want to write a program
and then draw unemployment while the company
re-compiles and runs it effortlessly on a
new platform?

--
Wes Groleau
"Grant me the serenity to accept those I cannot change;
the courage to change the one I can;
and the wisdom to know it's me."
-- unknown
 

Ioannis Vranos

Wes said:
Of course it's better--it's job security.
Why would anyone want to write a program
and then draw unemployment while the company
re-compiles and runs it effortlessly on a
new platform?


Actually .NET multithreading is standard-based, part of the CLI standard
(.NET is a CLI-compliant VM) so it is portable to all CLI VMs (e.g.
Mono, DotGNU).


About portability to non-CLI VMs, well, such an application cannot be
directly portable anyway, but multithreading as part of ISO C++ would
probably mean less porting effort.


C++0x will have multithreading support, and one benefit of being
somewhat late on this is perhaps more mature multithreading support. :)
 
