Roland Pibinger
If you've got a bug report, please report it. You can either email them
to me, or post them in the Digital Mars C++ forums.
Already done weeks ago.
DMC++ does not implement export.
James said: So it doesn't follow the ISO C++ standards. This doesn't, in
itself, make it worse than a number of other compilers, but it
does mean that it's not the answer to the original question (any
more than is g++ or VC++), if the original question is to be
taken literally.
Roland said: Already done weeks ago.
kwikius said: First you need to know an awful lot about the C++ language to get
things working. The obvious effect of this is that C++
metaprogramming is seen as for experts only, which is a shame, as
metaprogramming is a beautiful technique which many less experienced
programmers would find rewarding if it were less complicated.
The other problem is that, because the techniques of metaprogramming
in C++ are basically based on hacks, there is a large cost in compile
time. Once you create simple types using metaprogramming you find that
these can in theory be used to construct more complex types, but there
is a practical limit, both in compile time and in compiler resources,
which puts a cap on what you can do in practice.
In D metaprogramming has been designed in as part of the language. It
is much simpler to use and also compiles fast. I believe that Daveed
Vandevoorde has been working on a separate metaprogramming language
for C++. It is a shame that the two sides can't get together and try to
look at getting a C++ compiler with these extensions together in a
similar way that the ConceptGCC compiler has been used to test and
prove the Concept ideas.
English isn't a programming language - we don't expect what we say to be
taken *literally* unless someone annotates the description saying so.
One would expect such a question to be worded like:
"Does anyone know of a C++ compiler which is 100% conformant with the
C++ standard?"
instead of:
"I wanted a C++ compiler which would follow the ANSI C++ standards.
If you could tell me an IDE also, it would be more helpful."
which is not even grammatically correct. Do you really think that a
language lawyer would also ask for an IDE? For understanding English,
context is crucial. Pulling sentences out of context and trying to take
them literally is to risk seriously misunderstanding people.
James Kanze said: He also didn't state his platform. Now, it's probably a pretty
good guess that it is Windows, but....
If you think about it, that's a fantastic amount of processing and
memory required to do something as trivial as a factorial. It is why
complex calculations cannot be done with C++ templates, even if they
theoretically can be. It's why C++ TMP compiles very, very slowly.
I'm not familiar with Daveed's work, nor did I know he was working on
such a language.
kwikius said: Yep. However I prefer C++ to D mainly because I have spent quite a
long time learning it and it looks to me like D is becoming just as
complex and quirky as it evolves.
Still I did like the metaprogramming in D.
I would prefer to have my cake and eat it I guess.
I'm not sure if he still is, but there is some info dated April 2003:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n1471.pdf
http://www.vandevoorde.com/Daveed/News/Archives/000015.html.
kwikius wrote:
That will inevitably happen. However, you can do much more complicated
things with D before the quirkiness sets in. And some day, D will get to
the point where E will become necessary.
The reports are that the same problems take dramatically less code to do
in D than in C++, and go together much faster.
Thank you, I'll have a look.
kwikius said: On 18 Jun, 18:32, Walter Bright <[email protected]> wrote:
The problem I had when I tried D was that there seemed to be no
equivalent to the C++ explicit keyword. It's useful, for example, for
value initialisation of my quan quantity types:
quan::length::mm x(1);
quan::length::mm y = 1; // Error: quantity constructor is declared
explicit.
That may seem trivial but it's just one of those tools that I am used
to having. However I figured that you have enough feedback and
requests for language features to contend with at the moment, and I
decided I would back off ;-)
I would agree with that from the short time I spent trying D out and
it certainly provides a good talking point for a separate C++
metaprogramming scheme, though my current hunch is that that is
unlikely to happen because the current metaprogramming hacks are
becoming firmly embedded, with time spent on honing the hacks (e.g.
variadic templates should help a lot).
The other thing of course is that none of the problems with C++
metaprogramming (compile time, compiler resources) are that visible in
textbooks or lectures... something about style over substance...
The other point is the 'macho' attitude, something I believe
contributed to the success of Dos/Win over Mac. Simply put,
programmers enjoy conquering ugly crap ;-) ( Maybe Linux is in there
too ;-)
I don't expect C++ TMP to change in any fundamental way. The C++0x
improvements just twiddle around the edges, there are no fundamental
improvements (other than variadic templates).
kwikius said: The major feature of C++0x for me is in making Concepts programming
entities rather than documentation entities.
There are other useful features, such as decltype and auto (aka
typeof), which should have been there as soon as the language had
overloaded operators.
There are so many plus points for Concepts that it is another reason
for sticking with C++ (I am aware that this isn't what you may want to
hear but I'm just laying it out ). Nor do I find it easy to explain.
An interesting example is a power function.
In the case of std::pow the power is a runtime value. However, in a
particular expression on quantities it's not possible to make the power
a runtime value, as the type of the result is dependent on it, so it
must be a compile-time value. To a layman this would appear to be a
limitation, but in practice every variable raised to a power must
conform to some Concept, probably either a numeric or a quantity.
e.g. Calculate an area y from a length x:
// use doubles
double x = 1;
double y = std::pow(x, 2);
// use quantities
quan::length::m xx(1);
quan::area::m2 yy = quan::pow<2>(xx);
In the first case the concepts are there but hidden, whereas in the
second case they are enforced: the whole expression is strongly typed,
and by revealing the concepts underlying the expression the pow
function has been given a much clearer hint for optimisation
(and the power must be a compile-time value).
There are many calculations where, once the concepts are revealed,
values can be moved to compile time.
One area that I have explored, and where there can be dramatic
optimisations of an order of magnitude, is in 3D matrix concatenation,
where many values are multiplied by 1 or 0. Every 3D matrix is in
fact a type, e.g. a translation matrix, a rotation matrix, etc. The
result type of a concatenation of the basic types is also a specific
type, which can be computed by the compiler from these 'primitive'
input types. (You can also use quantities or numerics or combinations
within a matrix.) Special static types are used for certain values
within the matrix; because all their properties are known at
compile time, calculations combining them with runtime types are very
easy to optimise away.
It works, but... interestingly this is one area where C++ does suffer
really badly, and it is frustrating, because in practice having the
compiler compute the result type of a matrix computation at compile
time reveals very clearly the problems (long compile times, running
out of compiler resources (in VC7.1)) discussed previously.
However, IMO, if the practical issues were solved, the results of this
work could be very positive for the complexity of 3D games etc., so
I'm continuing to plug away.
hmm.. It might be worth trying this stuff in D, and I reckon it would
be a good proof for D over C++. But therein lies another problem. Does
D have an equivalent of, for example, C++'s enable_if (SFINAE)?
D already has the equivalents of those.
D has a much simpler method of doing the same thing - specializations.
Extending the specializations a little covers what concepts do.
You might find these interesting:
http://www.digitalmars.com/pnews/re...rs.com...
http://www.csc.kth.se/~ol/physical.d
kwikius said: I think the generic term is "raising the level of abstraction". You
can then see the broad picture and overall design more clearly, and
the design is enforced by the compiler.
Yes.
Concepts already do something similar in documentation in C++: rather
than saying type x needs functions Y and Z and members A, B, and C,
else you will get a (compile or runtime) failure, you simply say type
x is or must be a model of Concept X. Doing this in documentation is
useful, but crucially the rules aren't enforced. This has consequences.
The first one is that there is no way to test a Concept. This means that
Concepts are academic entities and types can slip through the net.
Like anything that is designed, Concepts themselves need to be
designed, redesigned, and honed to get them right. They won't be correct
first time. It is very hard to test and use Concepts without being
able to enforce them. Basically template metaprogramming in C++ is
very similar to assembly language programming in this respect. It is
very hard to understand the code and very hard to track down errors.
A Concept is a concise and elegant way to express and ideally
enforce ... well a Concept ;-)
kwikius wrote:
It's roughly equivalent to an 'interface' in D. If you inherit from an
interface, you must provide an implementation of all of the interface's
members. A template type parameter can be constrained to be derived from
a particular interface. This is enforced by the compiler.
kwikius said: Actually this brings me to a feature of D's operator overloading
design that I wasn't too keen on, which is that operators must be
expressed as members of a particular class (in C++ terminology), IIRC.
A binary operation always applies to two types, and there are *4*
entities involved: the 2 input types, the operator, and the result type.
IOW, a binary operation involves 4 entities, and making it a member of
one input type doesn't express this elegantly (if anything the
operation is a "member" of the operator, which is the only constant
feature).
Further, as free functions it is possible to add overloads without
modifying classes.
Ideally one would be able to define operator overloads as free
functions, outside any class.
In fact there is a definite OOP flavour to D which despite contrary
claims is not so dominant in C++.
C++ has that weird asymmetry in that one ordering of operands is a
member function, the other is a free function. In D you can do both as
member functions, in either the class of one operand, or the class of
the other operand.
Why you'd want to is another matter <g>. I would argue that defining
operations on a class is the class designer's job, not the class user's job.
The other problem with C++'s method is it requires ADL (Koenig lookup).
Requiring operator overloads to be class members avoids that whole wacky
kerfuffle.
I suspect it is less of an OOP flavor and more of an orientation towards
modularity. C++ doesn't have modules, which leads to all sorts of
scaling problems (the worst of which are exported templates).