Teaching new tricks to an old dog (C++ -->Ada)

  • Thread starter Turamnvia Suouriviaskimatta

Ioannis Vranos

Peter said:
Of all the comments in this rambling (but remarkably bloodless) thread,
this is the one that, for me, gets closest to the fundamental
philosophical difference between the C language family and Ada.

The [0, +] style does indeed map closely to what is happening in the
machine; however, for me, what is happening in the machine is generally
much less interesting than what is being represented in my problem
domain. I am always struck by the way a C user's first thought always
seems to be about how many bits he needs to represent something where an
Ada user is concerned with real-world values and leaves the bit size to
be chosen by the compiler. Of course, when writing an interface to a
hardware device, we have to worry about bit patterns but I certainly
want to stop doing that as soon as possible and worry about the problem
domain instead.

So, for an array, I want to index it with a problem domain entity and
let the compiler turn that into a wholly uninteresting set of address
offsets.

Ioannis wanted an example of negative indexes. How about:

type X_Values is range -5 .. 5;
type Y_Values is range 0 .. 25;
type Graph is array (X_Values) of Y_Values;
Squares : constant Graph := (25, 16, 9, 4, 1, 0, 1, 4, 9, 16, 25);


I have to say that for simple cases of things like that - in case we
want to associate the x range with the y values - in C++ we "keep in
mind" that [0] is for -5, etc., in the following style (unsigned is used
for Y_Values):


// Indices stand for [-5, 5] X values
vector<unsigned> Graph(11);

Graph[0]= 25;


For much larger ranges one can use a map. If a map is considered
expensive, one way I can think of is using a vector of pairs:

// first for X_Values, second for Y_Values
vector<pair<int, unsigned> > Graph(11);


Graph[0].first = -5;
Graph[0].second = 25;


// Range checked
Graph.at(1).first= -4;
Graph.at(1).second= 16;


It isn't that incomprehensible if you are used to programming in C++.
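For illustration, a minimal sketch of the map-based alternative mentioned above, where the X value itself is the key; std::map trades the constant-time access of a vector for logarithmic lookups:

#include <map>
#include <iostream>

int main()
{
    using namespace std;

    // The key is the X value (-5 .. 5); the mapped value is Y.
    map<int, unsigned> graph;

    for (int x = -5; x <= 5; ++x)
        graph[x] = x * x;

    cout << graph[-5] << '\n';   // prints 25
}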
 

Dmitry A. Kazakov

As always: the default behaviour. Default is static inheritance.

As it will be with interfaces in Ada 2005, I suppose...
Take the following example:

class X : public A, public B { };

This causes a problem when you have

class A : public C {};

class B : public C {};

Of course C++ has a perfectly good solution:

class A : virtual public C {};

class B : virtual public C {};

Actually neither solution is automatically good. Consider:

class A : public AbstractList {};
class B : public AbstractList {};
class X : public A, public B {}; // Participates in both lists

The problems of the C++ model lie elsewhere:

1. It is difficult, if at all possible, to express more complex groupings when
the same base is sometimes shared and sometimes not, all in the same type
hierarchy.

2. The decision about "virtuality" has to be made too early and in the wrong
place. In the example above, it is X which should decide.

3. The C++ model has a distributed space overhead.

Perhaps the bases should be subject to overriding in the same way one
overrides methods, to achieve the desired flexibility. Then conflicts could be
resolved by either renaming (=overloading, ~non-virtual) or overriding
(~virtual).

IMO, it was wise not to rush into MI during the Ada 95 design... though MI
should come eventually, that is not in question.
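For reference, a compilable sketch of the shared-base case under discussion, using the class names from the example above:

#include <iostream>

class C { public: int data; };

// With plain (non-virtual) inheritance X would contain two separate C
// subobjects and x.data would be ambiguous; virtual bases make A and B
// share a single C.
class A : virtual public C {};
class B : virtual public C {};

class X : public A, public B {};

int main()
{
    X x;
    x.data = 42;                 // unambiguous: exactly one C subobject
    std::cout << x.data << '\n';
}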
 

Ioannis Vranos

Martin said:
As always: the default behaviour. Default is static inheritance.

Take the following example:

class X : public A, public B { };

This causes a problem when you have

class A : public C {};

class B : public C {};

Of course C++ has a perfectly good solution:

class A : virtual public C {};

class B : virtual public C {};

All very well, only I know C++ programmers who had 2 to 7+ years of
experience in C++ and did not know about it until I told them. And not just
one programmer: the whole team of about 20!


Perhaps it was in the pre-standard era? :)) I do not think there is any
intermediate-level C++ programmer who does not know about virtual bases.
 

Vinzent 'Gadget' Hoefler

Ioannis said:
I have to say that for simple cases of things like that - in case we
want to associate the x ranges with the y values - in C++ we "keep in
mind" that [0] is for -5, etc, in the style: [...]
It isn't that incomprehensible if you are used to programming in C++.

Of course it's not "incomprehensible"; for me it's inconvenient.

Once I actually got used to range types (these days I use them heavily,
although not so often with negative indices), it seems to me that I
sometimes just get confused when everything starts at zero, because
then I always have to keep in mind where and how a real-world range is
biased.

This distracts me from the original problem I am trying to solve.


Vinzent.
 

Vinzent 'Gadget' Hoefler

Ioannis said:
I do not think there is any
intermediate level C++ programmer that does not know virtual bases.

That would simply depend on your definition of "intermediate". :)

Little story: yesterday, a colleague of mine didn't believe me that
realloc() would copy the memory contents if necessary, until I showed
him the reference. And he has been doing C for more than ten years now.


Vinzent.
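For what it's worth, the behaviour in question can be sketched like this (the sizes are arbitrary; only the returned pointer may be used after the call):

#include <cstdio>
#include <cstdlib>
#include <cstring>

int main()
{
    char *p = static_cast<char *>(std::malloc(16));
    if (p == 0)
        return 1;
    std::strcpy(p, "hello");

    // realloc may have to move the block to satisfy the larger size; when it
    // does, it copies the old contents into the new block and frees the old one.
    char *q = static_cast<char *>(std::realloc(p, 1024 * 1024));
    if (q == 0)
    {
        std::free(p);
        return 1;
    }

    std::printf("%s\n", q);      // still "hello", wherever the block ended up
    std::free(q);
    return 0;
}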
 

Martin Krischik

Jerry said:
Speaking of strings, I'll digress for a moment: personally, I find it a
bit humorous when Ada advocates talk about things like having five
string types as an advantage. IMO, the need, or even belief, that five
string types are necessary or even useful is a _strong_ indication that
all five are wrong.

Actually there are only 3 of them. And they are useful - two examples:

I have worked a lot with SQL and I always found it cumbersome to map the
SQL string type to C++. Most SQL string types are bounded - they have a
maximum size. std::string (and IBM's IString) is unbounded - so before you
can store your string inside the database you have to make a sanity check.
In Ada I could use Ada.Strings.Bounded instead.

The other example is CORBA. CORBA has two string types: string and
string<size>. In C++ both are mapped to std::string - and the CORBA/C++
mapping must check the size of the string. In Ada they are mapped to
unbounded and bounded strings (Ada.Strings.Unbounded and
Ada.Strings.Bounded) respectively.
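The sanity check described above has to be hand-written on the C++ side; a minimal sketch (the column width and names are made up for illustration):

#include <stdexcept>
#include <string>

// Hypothetical bounded SQL column, e.g. VARCHAR(30): the length check that a
// bounded string type would give for free is written by hand.
const std::string::size_type MAX_NAME_LENGTH = 30;

void store_name(const std::string &name)
{
    if (name.size() > MAX_NAME_LENGTH)
        throw std::length_error("name does not fit in VARCHAR(30)");

    // ... hand the validated string to the database layer ...
}

int main()
{
    store_name("Ada Lovelace");              // fine
    // store_name(std::string(100, 'x'));    // would throw length_error
    return 0;
}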
Ada's exception handling is also primitive at best (exceptionally so,
if you'll pardon a pun). In particular, in Ada what you throw is
essentially an enumeration -- a name that the compiler can match up
with the same name in a handler, but nothing more. Only exact matches
are supported and no information is included beyond the identity of the
exception.

Well, Ada 95 added a 200-character information string.
In C++ you can throw an arbitrary type of object with an arbitrary
value. All the information relevant to the situation at hand can be
expressed cleanly and directly. The usual inheritance rules apply, so
an exception handler can handle not only one specific exception, but an
entire class of exceptions. Again, this idea can be expressed directly
rather than as the logical OR of the individual values. And, once
again, the addition of tagged records to Ada 95 testifies to the fact
that even its own designers recognized the improvement this adds in
general, but (whether due to shortsightedness, concerns for backward
compatibility or whatever) didn't allow this improvement to be applied
in this situation.

All true. But does that powerful construct work inside a multi-tasking
environment? It is indeed true that Ada exceptions are restricted - but they
are thread-safe. And thread-safe means that an exception may be raised in
one thread and caught in another. Exceptions raised inside a rendezvous may
even be caught in both threads participating in the rendezvous.

And thread safety is not all. An Ada system implementing the (optional) Annex E
needs exceptions which can be passed from one process to another, with both
processes running on different computers. Just like CORBA exceptions.

With Regards

Martin
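To make the quoted C++ side concrete, a minimal sketch (ParseError and its extra field are invented for illustration):

#include <iostream>
#include <stdexcept>
#include <string>

// The thrown object can carry arbitrary data, and a handler for a base
// class catches the whole family of derived exceptions.
class ParseError : public std::runtime_error
{
public:
    ParseError(const std::string &what, int line)
        : std::runtime_error(what), line(line) {}
    int line;
};

int main()
{
    try
    {
        throw ParseError("unexpected token", 42);
    }
    catch (const std::runtime_error &e)   // also catches ParseError
    {
        std::cout << e.what() << '\n';
    }
}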
 

Martin Krischik

Ioannis said:
Do you mean throwing an exception in one thread and catching it in another?

Yes, and across process/system boundaries when Annex E is implemented.

Martin
 

Martin Krischik

Ioannis said:
Perhaps it was in the pre-standard era? :)) I do not think there is any
intermediate level C++ programmer that does not know virtual bases.

They started off in the pre-standard era and never learned any new
tricks ;-) . However, the pre-standard IBM compiler they used was advanced
enough to have virtual base classes, and I used them a lot.

Martin
 

Martin Krischik

Dmitry said:
As it will be with interfaces in Ada 2005, I suppose...

It makes no difference with interfaces, as interfaces hold no data. That's
the Java trick.

Martin
 

fabio de francesco

Ioannis said:
fabio de francesco wrote:

[skip some lines]
The standard guarantees the minimum ranges: int can hold at least
16-bit values and long at least 32-bit values.

Sorry, maybe I was unable to explain the concept because of my poor
English. Please try to re-read the last paragraph, because you're
missing the point. I know that the standard guarantees a minimum number
of bits per type.

I was reasoning about porting C++ code from a machine providing
"int"(s) with 32 bits of capacity to another machine where "int"(s) are
only 16 bits. In that situation, if you forget to substitute every "int"
with "long" you don't get any error from the compiler. That ported
program can execute for years without any problem. When that variable
is assigned a value like 32768 you get -32768 and then negative numbers
increasing towards zero for each "++var;". Remember that this
assignment was considered allowable, since the programmer chose an
"int" type for that variable knowing that it is stored in 32 bits on
his development machine.

Maybe that won't crash your program, yet it is worse, because a bug has
been introduced and it may go unnoticed for years.

What I want to say is that in C++ (1) you must know how many bits are
reserved for every type you use, and (2) you must carefully change types
when compiling for different targets.

In Ada, instead, a programmer can just write code that is portable across
different platforms without worrying about the bits of storage for each
type. Just ask the compiler: "I need to store values between -80_000 and
80_000, so please choose yourself how many bits I need", and you can be
sure that no bug will be introduced when porting code from a machine with
a 32-bit "int" to one with a 16-bit "int".

The interesting thing in Ada is that you can either let your compiler
choose how to reserve storage internally, or you can force a lot of
representation attributes, as in the example that I provided:

type Counter is new Integer;
for Counter'Size use 48;
for Counter'Alignment use 16;

The above type declaration is something that can't be done in C++, as far
as I know. In order to do the same kind of thing you must use some
non-standard compiler extension. Ada is simultaneously lower-level and
higher-level than C++.

Ciao,

fabio de francesco
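One way to guard against the porting trap fabio describes is to spell out the width on the C++ side; a minimal sketch, assuming the C99/C++11 fixed-width types are available:

#include <stdint.h>   // <cstdint> in C++11; int32_t is 32 bits wherever it exists

int main()
{
    int32_t counter = 0;             // guaranteed width, unlike plain int

    for (int32_t i = 0; i < 80000; ++i)
        ++counter;                   // cannot silently wrap at 32767

    return counter == 80000 ? 0 : 1;
}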
 

Guest

Ioannis Vranos said:
Since I have had enough with this signed value range of Ada, here is a
quick implementation of mine and some uses of it.
Are you under the illusion that the code [your code included below] is
actually expressive?

About forty years ago, when I was working in a Fortran shop, we were awarded
a contract for an inventory management system. At that time, we were mostly
focused on missile defense software, but our DoD customer was persuaded by
our marketing people that we could do the job.

An argument ensued in our programming group about what language to use. I
was trying to convince my colleagues that we should use COBOL since, at that
time, it was the dominant language for business applications. In my view, COBOL
best expressed the kind of solutions we needed for the problem at hand.

Many of my colleagues were adamant about Fortran. For each example I gave of
how something would be done in COBOL, they countered with an example of how it
"could" be done in Fortran. As this debate continued, some of the Fortran
solutions began to look more and more bizarre. Our manager finally concluded,
correctly, that the problem could best be solved in COBOL.

From this experience, I concluded that there were two important considerations
in language selection: 1) how well does this language express the kind of
solutions required for the problem to be solved, and 2) is it possible to solve
the problem using a given language, even if the solution is a little ugly?

The first I called expressiveness. The second I called expressibility. In the
years since then, others have come to similar views, and that is why new
languages are designed from time to time.

A solution to almost any programming problem can be expressed in almost any
language. This is expressibility. When a language allows one to express that
solution concisely and with ease, that is expressiveness. Expressible?
Expressive? Which is more appropriate for solving problems?

While your solution demonstrates expressibility, it fails to meet the test of
expressiveness.

I admit that one must be a little careful when using the expressiveness test in
evaluating a language design. Some syntax might be expressive, but fail other
tests. For example, the popular

+=
*=
and other operator=

constructs of the C family of languages are wonderfully expressive of a simple
idea, but their use fails the test of compile-time confirmability. That is,
they are highly expressive, but some of them are also error-prone. The example
from Ada,

type Index is range -42 .. 453;

is confirmable every place it is used. It is expressive of a simple idea. It
does not involve any unusual behavior at any point in the program where it is
used. It can be checked by the compiler for validity. It can raise run-time
errors without the programmer inserting specialized code. It is expressive of
the idea of a static range of values for a named type.

No doubt that, when you evaluate your solution in the context of expressiveness
versus expressibility, you will see the difference.

------------------------------------------------------------------------------
I am sure one can create a better one, or a container that directly
supports ranges, if he devotes some time. So the question again arises:
since it is possible and nothing exists, probably it is not considered
useful to have such a feature in C++:


#include <vector>
#include <algorithm>
#include <cstdlib>
#include <stdexcept>

template <class T>
class range
{
    std::vector<T> array;
    T min, max;

public:
    range(const T &mi, const T &ma) : array(ma - mi + 1), min(mi), max(ma)
    {
        using namespace std;

        if (max < min)
            throw out_of_range("range: max < min");

        // Store the domain values min, min+1, ..., max.
        for (typename vector<T>::size_type i = 0; i < array.size(); ++i)
            array[i] = min + i;
    }

    // Translates a domain value into its zero-based offset.
    T operator[](const T &index) const
    {
        // Add range checking max >= index >= min if desirable

        return index - min;
    }

    // Number of values in the range.
    operator T() const { return array.size(); }

};


int main()
{
    using namespace std;

    range<int> r(-100, -20);

    // The conversion operator supplies the element count (81 values).
    vector<int> vec(r);

    // r[-65] translates the domain value -65 into the zero-based offset 35.
    vec[r[-65]] = 3;
}
 

Guest

A preprocessor ... almost entirely missing from Ada.
When programming in C++, almost every use of the preprocessor
is to make up for a shortcoming of the language. For example,

#ifdef
#ifndef

Other uses include some that are deprecated. For example,

#define

Since Ada does not have these shortcomings, preprocessors are not
as necessary. That being said, I do know of Ada shops that have
written their own preprocessors for such things as being able to
ignore certain blocks of code prior to compilation, code instrumentation,
and similar things. This turns out to be quite simple to do. I'm
not sure why it is even being discussed as an issue, since the C family
of preprocessor statements is something of a kludge.

Richard Riehle
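For readers unfamiliar with the idiom, the conditional-compilation use mentioned above looks roughly like this (ENABLE_TRACING and TRACE are made-up names):

// Compiled in or out depending on whether ENABLE_TRACING is defined.
#ifdef ENABLE_TRACING
#include <cstdio>
#define TRACE(msg) std::printf("trace: %s\n", msg)
#else
#define TRACE(msg) ((void)0)
#endif

int main()
{
    TRACE("starting up");   // expands to nothing unless ENABLE_TRACING is defined
    return 0;
}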
 

Pascal Obry

Jim Rogers said:
One Ada feature that cannot be implemented through any container library
is the ability to define a range-restricted floating point type. You can
do this in C++, but not through the use of a template. Templates cannot
take floating point values as parameters.

Is that still true? I have never understood why there was such a
restriction...

Pascal.

--

--|------------------------------------------------------
--| Pascal Obry Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--| http://www.obry.org
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver wwwkeys.pgp.net --recv-key C1082595
 

Jerry Coffin

(e-mail address removed) wrote:

[ ...]
C++ continues to evolve, but much of that evolution seems to follow a
course of shoring up things already in the language that don't quite
work as one might prefer, or adding a truss here and a buttress there
to prevent or enable deficiencies in the language; e.g., cast-away
const, a truly silly addition to the language.

IMO, this is _quite_ an inaccurate characterization.

C++ has changed exactly once since it was originally standardized. I do
not believe that _any_ of what was changed was to change behavior at
all -- rather, it was almost entirely changes in the standard to make
the wording more accurately reflect what was desired all along.

The most visible change was in the requirements for std::vector. The
original C++ standard never _quite_ requires that std::vector use
contiguous storage. That has now been changed so its storage must be
contiguous.

TTBOMK, nobody has ever implemented (or even designed) a version of
std::vector that didn't use contiguous storage. I'm not sure anybody
has really even proven that a version using non-contiguous storage
could even meet the standard's requirements for std::vector. It's
pretty clear from reading books written by committee members that most
(if not all) thought from the beginning of std::vector as using
contiguous storage.

IMO, your characterization bears no more than an extremely distant
relationship with reality.
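A small sketch of why the contiguity guarantee matters in practice (this is the common idiom of passing &v[0] to code expecting a plain array):

#include <cstring>
#include <vector>

int main()
{
    std::vector<char> buf(16);

    // Code like this relies on the elements being laid out contiguously,
    // which the corrected wording of the standard now guarantees.
    std::strcpy(&buf[0], "contiguous");
    return 0;
}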
 

kevin cline

Ada is every bit as expressive as C++. There is not likely to be
any improvement in the number of KSLOC using C++. Oh, yes,
if we choose to use a cryptic form of C++, with all the little
shortcuts that make the code less readable, we might achieve
some reduction in the KSLOC, but at what cost in understandability?

Richard Riehle

C++ is more expressive because of implicit generic/template function
instantiation. The limitations of Ada's explicit instantiation have
been shown to be a major impediment to creating advanced type systems,
e.g. modeling physical units.

Christopher Grein writes at
http://www.adapower.com/index.php?Command=Class&ClassID=Advanced&CID=215:

[Attempting to model physical units in Ada]
"Our attempt leads us to a plethora of overloaded functions. The number
of function definitions afforded runs into the hundreds...

....One could object that this definition has to be made only once and
for all in a reusable package, and later-on the package can simply be
withed and used without any need for the user to care about the
package's complexity, but unfortunately the argument is not fully
correct. Apart from the most probable compile time explosion, it takes
into account only simple multiplication and division. Operations like
exponentiation a**n and root extraction root(n, a) are not representable
at all.

So we have to confess that our attempt to let the compiler check
equations at compile time has miserably failed. The only proper way to
deal with dimensions in full generality is either to handle them as
attributes of the numeric values that are calculated and checked at
run-time or to use preprocessors. Also for these methods, there are
many examples to be found in literature."

Grein again, at
http://home.t-online.de/home/Christ-Usch.Grein/Ada/Dimension.html:

"A C++ solution, which is very similar to the Ada one presented here -
it does however not include fractional powers - can be found at
http://www.fnal.gov/docs/working-groups/fpcltf/html/SIunits-summary.html.
The big difference is that C++ templates allow type checking during
compile-time, so that no overhead, in either memory space or runtime, is
incurred. In this respect, C++ templates ARE MORE POWERFUL
than Ada generics." [Emphasis mine]

In my experience, C++ templates have allowed me to write fully
type-safe programs at a very high level, an ability which Ada generics
simply can not match.
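A minimal sketch of the template technique the quoted text alludes to (this is not the SIunits library itself; only three base dimensions and two operators, with names invented for illustration):

#include <iostream>

// The exponents of mass, length and time ride along as template parameters,
// so dimensionally wrong expressions do not compile, and the only run-time
// data is a single double.
template <int M, int L, int T>
struct Quantity
{
    double value;
    explicit Quantity(double v) : value(v) {}
};

template <int M, int L, int T>
Quantity<M, L, T> operator+(Quantity<M, L, T> a, Quantity<M, L, T> b)
{
    return Quantity<M, L, T>(a.value + b.value);
}

template <int M1, int L1, int T1, int M2, int L2, int T2>
Quantity<M1 + M2, L1 + L2, T1 + T2> operator*(Quantity<M1, L1, T1> a,
                                              Quantity<M2, L2, T2> b)
{
    return Quantity<M1 + M2, L1 + L2, T1 + T2>(a.value * b.value);
}

typedef Quantity<1, 0, 0>  Mass;            // kilograms
typedef Quantity<0, 1, -2> Acceleration;    // metres per second squared
typedef Quantity<1, 1, -2> Force;           // newtons

int main()
{
    Mass m(2.0);
    Acceleration a(9.81);
    Force f = m * a;                 // compiles: the dimensions work out
    // Mass wrong = m * a;           // would be rejected at compile time
    std::cout << f.value << " N\n";
}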
 

jimmaureenrogers

Pascal said:
Is that still true ? I have never understood why there was such
restriction...

I believe this is still true. I hope I will be corrected by a C++
expert if I am wrong.

My understanding is that C++ cannot rely on a particular floating point
representation across platforms. This inability to define type
representation appears to make template parameter instantiation difficult
for C++. I am guessing that this is a side-effect of the C and C++
type system for primitive types, where the type is very strongly
related to its physical representation.

Jim Rogers
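For comparison, the closest one can get in C++ without floating point template parameters pushes the range check to run time; a minimal sketch (the class and names are made up):

#include <stdexcept>

// The bounds are ordinary constructor arguments, so the check happens at
// run time rather than compile time.
class RangedDouble
{
    double value_;
    double min_, max_;

    void check(double v) const
    {
        if (v < min_ || v > max_)
            throw std::out_of_range("value outside declared range");
    }

public:
    RangedDouble(double min, double max, double initial)
        : value_(initial), min_(min), max_(max)
    {
        check(initial);
    }

    RangedDouble &operator=(double v)
    {
        check(v);
        value_ = v;
        return *this;
    }

    operator double() const { return value_; }
};

int main()
{
    RangedDouble voltage(0.0, 5.0, 3.3);
    voltage = 4.9;      // fine
    // voltage = 7.2;   // would throw std::out_of_range at run time
    return 0;
}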
 

T Beck

I'm with Ioannis on this one... I fail to see how p[-1] is any less an
index on an array than somearray[1] is. They're using the exact same
compile-time methods to get to where their data is, and if you really
want to use -1 as an index, it's allowing you to do exactly that. Is
there something magical about being able to declare it yourself that
makes it a "proper" index? Or would you just rather see Perl-style
negative indices? (which would make no sense in the way I've seen
people describe using negative indices in this discussion)
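For what it's worth, a negative index is already well-defined in C and C++ whenever the pointer refers into the middle of an array, which is roughly the point being argued here (the array below is just an example):

#include <iostream>

int main()
{
    int a[11];                  // conceptually indexed -5 .. 5
    for (int i = 0; i < 11; ++i)
        a[i] = (i - 5) * (i - 5);

    int *p = a + 5;             // p[x] now accepts x in -5 .. 5
    std::cout << p[-5] << ' ' << p[0] << ' ' << p[5] << '\n';   // 25 0 25
}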
 

jayessay

Jerry Coffin said:
A preprocessor per se, no. The type of capabilities is a different
story. For example, Lisp includes macros. Most assemblers include (much
more extensive) macros as well as an equivalent of C's
#if/#ifdef/#ifndef/#endif.

Lisp macros are _not_ a preprocessor and they are _vastly_ more
capable than what people typically think of when they see/hear
"macros". Lisp macros are full code analyzing and generating
constructs. Things like C/C++ preprocessor/"macro" simplistic text
substitutions are not in any way related to them. You can literally
create entire new languages with them, create embedded domain specific
languages, and/or create extensions to the "core" language _within_
Lisp itself. You can also build domain-specific code _optimizers_ in like
fashion. None of this is at all doable _in_ things like
C++/Ada/Eiffel/<pretty-much-you-name-it>. You would have to write a
compiler (or at least optimizer/code generator)_with_ those languages
_for_ the new language/constructs/optimization, etc.

Robert Duff made a comment a while ago about how silly most (I would
say without much hyperbole 99+%) of the points in these threads would
be to Lisp (and Smalltalk) folks. I couldn't agree more. You are all
arguing over differences that mean almost nothing looked at from these
other perspectives. Exceptions are another good example. Neither Ada
nor C++ nor Eiffel has anything even _remotely_ as potent as the
condition system in Common Lisp. Same goes for the so called "OO"
capabilities in these languages in comparison with CLOS. And the level
of _expressive capability_ in either Ada or C++ or Eiffel or ?? is so
_low_ that it again is simply amazing to see people so vehemently
arguing over differences that are almost invisible.


/Jon
 

Chad R. Meiners

I think an associative container like map fits better to this.

What do you do when you need to map the data structure to a specific
space in memory for a memory-mapped sensor array, e.g. the temperature
sensors for levels -2 to 8 in a building?
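A sketch of that scenario on the C++ side, with an entirely made-up base address and register width; the level-to-offset bias has to be maintained by hand at every access, whereas an Ada array indexed by -2 .. 8 with an address clause carries it in the type:

const int LOWEST_LEVEL  = -2;
const int HIGHEST_LEVEL = 8;

// Hypothetical memory-mapped temperature sensors, one 16-bit register per level.
volatile unsigned short *const sensor_base =
    reinterpret_cast<volatile unsigned short *>(0x40001000);

inline volatile unsigned short &sensor_for_level(int level)
{
    // No range check unless we write one ourselves.
    return sensor_base[level - LOWEST_LEVEL];
}

int main()
{
    unsigned short basement = sensor_for_level(LOWEST_LEVEL);    // offset 0
    unsigned short roof     = sensor_for_level(HIGHEST_LEVEL);   // offset 10
    return basement == roof ? 0 : 1;
}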
 
