which of these 3 casts would you prefer?


BGB

The beauty of RAII is you don't have to implement it manually! This
eliminates a whole class of common programming errors.

yep, but this is still more of a convenience feature, rather than a
fundamental limitation.


it is not really too much different from using OpenGL and remembering to
always have a matching "glEnd();" for every "glBegin();", or a
"glPopMatrix();" for every "glPushMatrix();".

if one messes up on these sorts of things in OpenGL, things will turn
ugly...


Yet another manual kludge..

well, it can be seen this way...

I am not claiming here that C is perfect or equally programmer-friendly
to C++ or anything, but rather there may be other bigger concerns, and
that conveniences are not "essential".


in my own script language, there are value-types / structs, which will
call the destructor any time they go out of scope (internally, in the
VM, classes and structs are more-or-less equivalent, only that some
extra behaviors exist in the struct case, namely that it is copied
rather than passed by reference, and its destructor is called whenever
it goes out of scope).

but, anyways, they could potentially be used to do something analogous
to RAII (albeit more limited due to the present lack of
copy-constructors or similar).


now, why do they exist?... because these sorts of things are actually
sort of useful (personally, I find the "everything is a
pass-by-reference object" mentality of Java to be a little annoying,
among other things...).
 

Noah Roberts

yep, but this is still more of a convenience feature, rather than a
fundamental limitation.

That's getting pretty silly, isn't it? Both languages are Turing
complete, either one CAN do anything the other does just as you CAN do
anything in brainfuck that you could have done in C or C++. The
difference being of course, how much time do you want to waste writing
something that another language offers to you for free.
it is not really too much different from using OpenGL and remembering to
always have a matching "glEnd();" for every "glBegin();", or a
"glPopMatrix();" for every "glPushMatrix();".

if one messes up on these sorts of things in OpenGL, things will turn
ugly...

In C++ you can use RAII for that sort of thing. With C, again, you
have to manually track it all and do everything the hard way.
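As an illustrative sketch (the `ScopeGuard` type here is hypothetical, not
part of any real GL binding), a C++ RAII wrapper makes the matching call
impossible to forget:

```cpp
#include <cassert>
#include <functional>
#include <utility>

// Generic scope guard: runs a cleanup action when it goes out of scope,
// no matter how the scope is left (return, break, or exception).
class ScopeGuard {
public:
    explicit ScopeGuard(std::function<void()> onExit)
        : onExit_(std::move(onExit)) {}
    ~ScopeGuard() { if (onExit_) onExit_(); }
    ScopeGuard(const ScopeGuard&) = delete;
    ScopeGuard& operator=(const ScopeGuard&) = delete;
private:
    std::function<void()> onExit_;
};

// Usage sketch with the OpenGL pairing discussed above:
//   glBegin(GL_TRIANGLES);
//   ScopeGuard end([]{ glEnd(); });  // glEnd() can no longer be forgotten
//   /* ... emit vertices, return early, even throw ... */
//   // glEnd() runs automatically at the closing brace
```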
 

Ian Collins

yep, but this is still more of a convenience feature, rather than a
fundamental limitation.

It is a fundamental feature in the sense that it allows automated
resource management. Automated resource management leads to safer,
cleaner code. It also removes the single, albeit feeble, excuse for
using goto which has to be a good thing!
it is not really too much different from using OpenGL and remembering to
always have a matching "glEnd();" for every "glBegin();", or a
"glPopMatrix();" for every "glPushMatrix();".

if one messes up on these sorts of things in OpenGL, things will turn
ugly...

Exactly. With RAII, you eliminate the chance of things turning ugly.
well, it can be seen this way...

How else can you see it? Missing a finally (or leaving a release out of
one) is just as bad as forgetting a free in C.
I am not claiming here that C is perfect or equally programmer-friendly
to C++ or anything, but rather there may be other bigger concerns, and
that conveniences are not "essential".

No language feature is "essential". We can program in machine code if
we choose. Most of us don't write machine code these days (although I
have to admit I did enjoy it back in the day!) because we have
programming languages that directly support safer and more advanced
idioms.
 

Noah Roberts

yes, fair enough. C does not do this...

however, it could be argued that *not* having to write duplicated code
in this case is what is nifty/convenient/...

You seem to be saying that function overloading is not necessary for
generic programming because you don't have to use generic
programming. Same can be said of any paradigm.

Tell me, why are you not writing all your code in assembler?
Certainly it is possible to do so.
as well, there is another partial way around this problem:
one can implement a dynamic type-system, and then use wrapping and
run-time type-checking for a lot of this. it is a tradeoff (performance
is worse, ...), but it works.

In C++ we call this polymorphism and it is another built in feature we
don't have to develop on our own.
 

Nobody

yep, but this is still more of a convenience feature, rather than a
fundamental limitation.

You can't take this argument much further before you start arguing that
everything about high-level languages is a convenience, because you could
do the same thing in assembler.
it is not really too much different from using OpenGL and remembering to
always have a matching "glEnd();" for every "glBegin();", or a
"glPopMatrix();" for every "glPushMatrix();".

That's slightly different, as you normally care about the precise location
of the glEnd/glPopMatrix/etc. With destruction, you normally only care
that the object gets destroyed "soon" after its last use. You can force
destruction by using an extra block, but this is seldom considered
necessary.
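For what it's worth, the "extra block" trick looks like this (a minimal
sketch; the `Tracer` type and the event log are invented for illustration):

```cpp
#include <cassert>
#include <string>
#include <vector>

std::vector<std::string> events;  // records use/destruction order

struct Tracer {
    std::string name;
    explicit Tracer(std::string n) : name(std::move(n)) {}
    ~Tracer() { events.push_back(name + " destroyed"); }
};

void demo() {
    {
        Tracer t("scoped");           // extra block forces early destruction
        events.push_back("using t");
    }                                 // t dies here, not at the end of demo()
    events.push_back("after block");
}
```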
 

BGB

You seem to be saying that function overloading is not necessary for
generic programming because you don't have to use generic
programming. Same can be said of any paradigm.

potentially; one could then find the minimum core of a language which can
still express the desired set of functionality.

Tell me, why are you not writing all your code in assembler?
Certainly it is possible to do so.

because then I would have to port all that assembler between the various
target platforms...

I have enough assembler in the project already, much of it mirrored
between different CPU/OS configurations.

so, ASM is generally used sparingly, such as when a given feature can't
be readily implemented directly in a higher-level form.

granted, many nifty parts of the codebase are written in ASM (mostly x86
and x86-64, but there is some ARM related stuff around as well).

technically, the program involves self-modifying code as well.

a nifty use of ASM is to be able to implement reflection mechanisms such
as a generic apply, and also to implement features such as closures
which mimic ordinary function pointers, ...


however, ASM can also be a little verbose at times.

In C++ we call this polymorphism and it is another built in feature we
don't have to develop on our own.

yes, people keep commenting "well, I don't have to do X manually", but
what really is the big deal?...


but, anyways, about as soon as one wants to write an interpreter they
will have to implement a lot of this stuff anyways regardless of the
choice of language.

then one may also find themselves faced with such issues as to how to
effectively integrate the interpreter with the host language, ...

having a simpler host language, with a simpler ABI, ... can have its
advantages here.
 

Dombo

On 27-Aug-11 21:19, Noah Roberts wrote:
C-style casts are horrible because they can do anything at any time
without any warning. One minor code change can turn a static cast
into a reinterpret cast. You can't hunt these things down because
regular expressions are useless. The undefined behavior they cause
may or may not turn up at some time in the near future....it may be 2
decades before your code starts crashing in some place utterly
unrelated to the cast in an object that has nothing to do with what is
actually in the memory its using. NOBODY can keep track of every
place that needs to cast, especially within complex desktop
applications that use casting quite regularly. Stuff gets lost and
bad things happen. The new style casts provide a much better method
because they turn up more errors, you'll get a compiler error instead
of successful compile when your static_cast is no longer appropriate,
and can be searched for quite easily. They should be used for all
cases in which they can be, which is all cases in most projects.

Though I agree with most of what you say here (and your story is all too
familiar to me), personally I prefer to avoid having to use casts in
the first place. When I see code with a lot of casts I tend to get
nervous. Though C++ style casts are much more precise, I still get
worried when I see many of them. If casts cannot be avoided (which
sometimes is the case) the C++ style casts are to be preferred for the
reasons stated in this thread.
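A small sketch of the auditability point (the types and names here are
invented for illustration):

```cpp
#include <cassert>

struct Base    { virtual ~Base() = default; };
struct Derived : Base { int y = 2; };

int demo_downcast() {
    Derived d;
    Base* b = &d;

    // The named cast states its intent and is trivially greppable:
    Derived* ok = static_cast<Derived*>(b);

    // A C-style cast, (Derived*)b, compiles to the same thing today --
    // but if Derived later stops inheriting from Base, the C cast
    // silently degrades into a reinterpret_cast (undefined behavior),
    // while the static_cast becomes a compile-time error:
    //   int* p = static_cast<int*>(b);  // error: unrelated types
    //   int* p = (int*)b;               // compiles; UB at run time
    return ok->y;
}
```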
 

Jorgen Grahn

The reason why function and operator overloading is important is not
because it's "handy" or "convenient" or anything like that. It's because
it's *necessary* for generic programming.

Well, I think the handiness and convenience are important, too.

When I use C, one of the things I hate the most is coming up with
different names for (for example) is_empty(const Foo*) and
is_empty(Bar*). It really hurts readability.

For example, a C application I work on contains half a dozen linked
list types. They're implemented as just one List_t type, with void* as
values, and we have to write down /as comments/ what type the List_t
is supposed to contain.

I could easily write half a dozen type-safe wrapper structs with
associated next/prev/clear etc operations ... but the operation names
would be too long to be manageable. They would be

Subsystem_List_of_FooBar_next()
Subsystem_List_of_FooBar_prev()
Subsystem_List_of_FooBar_clear()

and my coworkers would find using those /more/ painful than
the current lack of typing. And they would probably be right.

/Jorgen
 

Jorgen Grahn

Perhaps a story would help you understand.

A while back I worked under a guy who used C++ but basically feared
everything about it. Boy did he love inheritance though. He used
inheritance as his only method of code reuse. He created these
gigantic higherarchies

[snip excellent thedailywtf.com material]

Sometimes I wonder if the critics aren't right after all -- that C++
is too complicated for its users to understand. But then so is any
other language, damn it!

/Jorgen

PS. Interesting spelling of "hierarchy". Still unsure if it's
intentional ;-)
 

Noah Roberts

Perhaps a story would help you understand.
A while back I worked under a guy who used C++ but basically feared
everything about it.  Boy did he love inheritance though.  He used
inheritance as his only method of code reuse.  He created these
gigantic higherarchies

[snip excellent thedailywtf.com material]

Sometimes I wonder if the critics aren't right after all -- that C++
is too complicated for its users to understand. But then so is any
other language, damn it!

Perhaps C++ puts too much power in the hands of people who can't use
it responsibly, but I think also that there are many people who will
just write crap code no matter what language they're using. I mean,
the person in question actually told me that, "The document/view
architecture is considered outdated," when I suggested we NOT mix
business and UI code into the same classes. Using Java or C# or some
other supposedly "easy" language would not have made his code any
better, it only perhaps would have lessened some of the consequences
regarding poor casting. Even then though, I seriously doubt it would
have made any difference.

I'm currently working with someone who seems to think globals are the
best thing since sliced bread and is actually AGAINST factoring them
out of functions that could easily work with parameters instead. This
time the language is C, which is supposedly "simpler" than C++ (which
again seems to be hated). Put such a developer in a "simple", weakly-
typed language or something and he'll do the same thing...it just
doesn't matter. In fact in many cases it may get much worse because
now they're not limited to multiple uses of the same type when multi-
purposing a global variable...they can use it as a string here and a
double over there.

The casts that C++ offers introduce a small safety against some of
what can happen when people do bad things, which EVERYONE does. The C
style casts just are not equipped for the introduction of things that
C++ adds to C, like multiple inheritance and stronger typing. They
won't fix broken coders though and I really don't think that C++ is to
blame for the kinds of poor approaches I've seen in the industry, with
surprising frequency and at surprisingly high levels of authority. A
good language might protect you from making basic mistakes in grammar
or semantics, but if your statements are nonsense to begin with then
no amount of simplicity is going to create sense of them.
 

Jorgen Grahn

You don't even need to be doing this to run into the lack of function
overloading as a great annoyance. I recently started working in a
project where the lead architect insists it's in C (there are a bunch
of other things that are more frustrating to me about it than that,
but whatever). I've found the necessity to think of obscure names to
make sure to avoid any naming conflicts is a serious quandary.

Damn, you already said what I posted in <slrnj5l5qe.gtj.grahn+nntp@
frailea.sa.invalid> one day later.

/Jorgen
 

Jorgen Grahn

this is a problem of naming conventions...

for example, a fairly common naming convention is something like:
library_subsystem_function.

so, a person would name things more like:
int MyFooLib_MyContainerStuff_IsHashEmpty()
{
    ...
}

with "MyFooLib" generally being something a bit more accurate.

now, provided people don't create libraries with the same names, then
there is not a clash.

Imagine a piece of code, perhaps a function with a loop, which uses
plenty of functions like that. Start with C++ names, e.g. empty(), and
then replace them with long, unique names like
MyFooLib_MyContainerStuff_IsHashEmpty(). I think you will agree that
readability decreases sharply.
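To make the comparison concrete (a sketch; the container types stand in for
the hypothetical FooBar lists):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// With overloading, one short, natural name serves every type:
bool is_empty(const std::vector<int>& v)           { return v.empty(); }
bool is_empty(const std::map<std::string, int>& m) { return m.empty(); }

// The C equivalent needs one long unique name per type, e.g.
// Subsystem_List_of_FooBar_is_empty(...), and every call site pays for it.
```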

/Jorgen
 

Juha Nieminen

BGB said:
RAII normally only really deals with cases which can be dealt with
manually though (apart from exceptions, which also don't exist in C, and
things like longjmp are rarely used).

You write as if the lack of RAII was only a minor inconvenience. I don't
agree with that.

One of the major problems with that lack, which means that resources
have to be managed manually, is that this management burden is "contagious".
With that I mean that if you have a type which needs to be destroyed
manually, that requirement is transferred to any other type that wants
to use that type as a member. (In other words, if you have eg. a struct
that needs to be constructed and destructed by explicitly calling some
functions, if you want to use that struct as a member of another struct,
you need to write equivalent construction/destruction functions for *that*
struct too, and the need to manually call them is "inherited" by that
struct. And so on. It can become quite complicated and burdensome.)
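The contrast can be sketched in a few lines of C++ (the
`Connection`/`Session` types are hypothetical):

```cpp
#include <cassert>

int open_handles = 0;  // stands in for a real resource count

// A resource-owning type: the constructor acquires, the destructor releases.
struct Connection {
    Connection()  { ++open_handles; }
    ~Connection() { --open_handles; }
    Connection(const Connection&) = delete;
    Connection& operator=(const Connection&) = delete;
};

// The containing type needs no special code at all: its implicitly
// generated destructor destroys the member, so the cleanup requirement
// is NOT "inherited" the way it is in C.
struct Session {
    Connection conn;
};
```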

It also makes generic programming a lot more difficult. If you make,
for example, a generic container, you have no way of knowing whether
the elements need to be properly destroyed before freeing them or not.
If you want your container to support such elements, you have to offer
some kind of construction/destruction paradigm, which can be inconvenient,
and in the case of elements that don't need it, needless overhead.
also, several other major languages, such as Java and C#, also lack RAII
(with exceptions, this behavior is handled by the use of a "finally"
clause).

This is actually a problem in Java because if you have to manage any
resource other than memory, you run into the problem of having to free
the resource manually (at least if the resource should be freed as soon
as possible; the finalizer mechanism in Java doesn't guarantee when
destructors will be called, or even that they will be called at all).

The 'finally' block only handles a subset of cases that RAII does, and
it's nevertheless more burdensome because you have to implement it
manually. (At least it's safer than anything in C, which is a plus.)
 

Noah Roberts

  You write as if the lack of RAII was only a minor inconvenience. I don't
agree with that.

Yeah, how did I miss that gem? I agree with you.

Exceptions are by far the only place where RAII is useful. They are
but one way for a function to exit early. Any and all times that you
have a resource or more to allocate and can leave a function early due
to error conditions is an important place to be using RAII. Without it
you're stuck making sure you manually release those parts you were so
far able to acquire.

Yeah, it can be said that it's "just" a convenience but I have to say
it's a pretty damn big one. Sort of like having a car is only a
"convenience" when you need to travel 100 miles.
 

Nobody

Yeah, how did I miss that gem? I agree with you.

Exceptions are by far the only place where RAII is useful.

I presume you mean "far from" rather than "by far"?

They are
but one way for a function to exit early. Any and all times that you
have a resource or more to allocate and can leave a function early due
to error conditions is an important place to be using RAII.

Not just functions, but any block. Aside from return: break, continue and
goto can cause automatic variables to leave scope.
 

Juha Nieminen

Noah Roberts said:
Exceptions are by far the only place where RAII is useful. They are
but one way for a function to exit early. Any and all times that you
have a resource or more to allocate and can leave a function early due
to error conditions is an important place to be using RAII. Without it
you're stuck making sure you manually release those parts you were so
far able to acquire.

Yeah, it can be said that it's "just" a convenience but I have to say
it's a pretty damn big one. Sort of like having a car is only a
"convenience" when you need to travel 100 miles.

And code blocks are not the only situation where RAII comes into play.
It also does so when types (which have to be constructed and destroyed
properly) are members of other types, or inside data containers. With
RAII the parent type doesn't need to worry about the member object: It
will be automatically constructed and destroyed appropriately, without
the parent type having to do anything special about it (with the only
exception being if the member object needs some constructor parameters,
which of course makes sense).

This is true even in the case of a specific type using another specific type
as a member. It becomes even more important with generic programming.
 

BGB

You write as if the lack of RAII was only a minor inconvenience. I don't
agree with that.

One of the major problems with that lack, which means that resources
have to be managed manually, is that this management burden is "contagious".
With that I mean that if you have a type which needs to be destroyed
manually, that requirement is transferred to any other type that wants
to use that type as a member. (In other words, if you have eg. a struct
that needs to be constructed and destructed by explicitly calling some
functions, if you want to use that struct as a member of another struct,
you need to write equivalent construction/destruction functions for *that*
struct too, and the need to manually call them is "inherited" by that
struct. And so on. It can become quite complicated and burdensome.)

well, this comes down to coding style:
most data types of this sort are not allocated/freed directly, but are
allocated/freed via function calls.

say:

    FOO_Context *ctx;

    ctx=FOO_NewContext(...);
    ...
    FOO_FreeContext(ctx);

however, rarely does this seem like a big deal.

also, the matter of multiple entry/exit points and releasing things also
has a typically straightforward solution:
if for some function it becomes awkward, it means the function is
probably doing too much and needs to be broken down into smaller ones.

typically, 5-25 lines is a good limit for a function size, as well as
the usual practice that a function only does a single conceptual
operation (as opposed to a function which does "this, that, and this
other thing"...).

It also makes generic programming a lot more difficult. If you make,
for example, a generic container, you have no way of knowing whether
the elements need to be properly destroyed before freeing them or not.
If you want your container to support such elements, you have to offer
some kind of construction/destruction paradigm, which can be inconvenient,
and in the case of elements that don't need it, needless overhead.

typical answers:
one generally doesn't use generic containers (containers are created on
an as-needed basis);
typically containers are homogeneous;
for complex non-uniform data types, typically a vtable and/or a pointer
to a destructor function can be used.


this is again why, as noted before, one memorizes/internalizes things
like hashing, linked lists, and sorting algorithms, as one often needs
to deal with them on a fairly regular basis (one memorizes basic
algorithms much as one memorizes things like lists of API functions, ...).

not that it has to be done in some sort of rote/school style way, but
one tends to memorize things after dealing with them a few times.

This is actually a problem in Java because if you have to manage any
resource other than memory, you run into the problem of having to free
the resource manually (at least if the resource should be freed as soon
as possible; the finalizer mechanism in Java doesn't guarantee when
destructors will be called, or even that they will be called at all).

The 'finally' block only handles a subset of cases that RAII does, and
it's nevertheless more burdensome because you have to implement it
manually. (At least it's safer than anything in C, which is a plus.)

potentially...

but then again, a typical pattern in C becomes:
int BAR_Sub_DoSomething(...)
{
    FOO_Context *ctx;
    int i;

    ctx=FOO_NewContext(...);
    ...
    i=FOO_GetFinalValue(ctx);
    FOO_FreeContext(ctx);
    return(i);
}

then the form of a function becomes itself a convention.

if success/failure status is involved, typically this is handled either
with "if()" blocks, or folding the next step into its own function (the
use of "goto" is nasty and so typically not done).


in Java, the usual strategy is to introduce one's own release methods
(rather than trying to rely on finally).


for some data types, it is also common to create one's own mini
allocator/free system.

public class Foo
{
    private static Foo freeList;
    private Foo next;

    public static final Foo newFoo()
    {
        Foo tmp;
        if(freeList != null)
        {
            tmp=freeList;
            freeList=tmp.next;
            tmp.next=null;
            return tmp;
        }
        tmp=new Foo();
        return tmp;
    }

    public static final void freeFoo(Foo tmp)
    {
        ...
        tmp.next=freeList;
        freeList=tmp;
    }

    public void free()
    {
        freeFoo(this);
    }
}


as well as things like:

    Foo obj=Foo.newFoo();
    try {
        ...
    } finally {
        obj.free();
    }

because, unlike what some people seem to claim, the GC is a good deal more
hit-or-miss in practice when it comes to non-trivial usage patterns (and
GC cycles are not always free).



for my own scripting language, I have a delete keyword (partly itself
inherited from ActionScript, which presumably got it from C++).

however, sadly, at the moment there is no good way to prove that no one
tries to access an object after freeing it (a potential safety/security
concern), but it is a tradeoff (however, like Flash, in my language it
is not required that the VM accept the request to delete something, and
it may potentially reject it in some cases, although at the moment it
will actually just free whatever is given to it provided the code has
the needed permissions).

granted, the addition of VM-level permissions checking (using a
POSIX-style model) was itself a subject of debate (others argued for
sandboxing and trying to make sure that sandboxed code could never get
any references to secure objects, worrying that any sort of security
checking would be too slow/complex/... to be usable).


or such...
 

Juha Nieminen

BGB said:
well, this comes down to coding style:
most data types of this sort are not allocated/freed directly, but are
allocated/freed via function calls.

How does that change what I said? It doesn't matter if it's allocated
and freed directly or via function calls. (Heck, malloc() and free() *are*
function calls.)

My point is that this manual construction/destruction requirement is
"inherited" by anything that wants to use those types. For example, if
you create a new struct that has such a type as a member, this struct
will now also have to be constructed/destructed manually, and so on.
The language offers no means to automate and hide this in any way.
typical answers:
one generally doesn't use generic containers (containers are created on
an as-needed basis);
typically containers are homogenous;

Containers being homogeneous has nothing to do with genericness. A container
being generic means that you can use it with any type, rather than it being
fixed to a single hard-coded type, even if the container is homogeneous.

Even if the container is hard-coded for one single type, if that type
requires construction and destruction, it makes the implementation of the
container more complicated and more error-prone.
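A sketch of what the generic solution buys (the `Elem` type is invented for
illustration):

```cpp
#include <cassert>
#include <vector>

int live_elements = 0;
int peak_elements = 0;

struct Elem {
    Elem()            { if (++live_elements > peak_elements) peak_elements = live_elements; }
    Elem(const Elem&) { if (++live_elements > peak_elements) peak_elements = live_elements; }
    ~Elem()           { --live_elements; }
};

// A generic container such as std::vector works for ANY element type: it
// runs element destructors automatically when elements are erased or when
// the container itself dies -- no per-type destruction protocol needed.
void demo() {
    std::vector<Elem> v(3);   // three elements constructed in place
}                             // all three destroyed here, automatically
```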
this is again why, as noted before, one memorizes/internalizes things
like hashing, linked lists, and sorting algorithms, as one often needs
to deal with them on a fairly regular basis (one memorizes basic
algorithms much as one memorizes things like lists of API functions, ...).

Having to implement the same algorithms over and over has never been a
good programming practice. That's why generic programming is such a great
aid in this.
then the form of a function becomes itself a convention.

You see, these coding conventions exist solely because of the deficiencies
of the language, rather than because they are good programming practices.
With RAII they become unneeded, and the resulting code becomes simpler,
more straightforward and easier to read.
 

BGB

I think these summarize perfectly the kind of mentality that the
language produces, and the reason why I hate the language so much.

C++ might not be as "programmer friendly" as many other high-level
languages, but at least it offers significantly more tools to aid the
programmer *without* the compromises of those higher-level languages
(such as increased memory consumption or the requirement of a very
complex runtime environment and JIT compiler to make the program even
acceptably fast, and which often do not exist in most systems, especially
the embedded ones.)

yes, well, this is true at least.

If someone wants to create big projects in C, even if C++ or other
languages would be perfectly good for the job, that's fine. However,
when these same people start preaching how the defects of the language
are "normal" and "not a big deal", rather than "yeah, it's bad, but you
just have to live with it", that's rationalizing.

I am not entirely sure I see the distinction though.

I have never been claiming here that C is as
nice/programmer-friendly/... as C++, rather that it is technically
usable (absent running into big problems which would render it
"technically unusable", as it seemed some others were trying to assert).

(examples of things I consider "unusable" would be things like trying to
write an OS kernel purely in standard Java and on generic PC-style
hardware, but this excludes cases where there is a common and reasonably
effective workaround).


as noted, I don't actually dislike C++ either, just there are a few
cases where it isn't ideal either, and as I see it, a lot of my own code
happens to fall into this case. so, it is a tradeoff...

one sacrifices some level of niceness, to gain certain other features
and capabilities.

elsewhere, one can sacrifice some of these capabilities (if they are not
useful in a given case), and gain some nice features.

I think probably each language has found an effective local minimum WRT
its costs, but each has different cost-sets.


it is much like, there are also cases where ASM is the ideal solution,
and other cases where a higher-level script language (such as JavaScript
/ Python / Lua / ...) is more ideal.

if something is used outside of its ideal usage domain, then it is
generally perceived as "worse" than whatever is dominant in the area.


or such...
 

Miles Bader

Jorgen Grahn said:
I'm surprised you're listing g++ warning options, but omit the topical
one: "-Wold-style-cast". I use it for all my code.

The problem with -Wold-style-cast, in my experience, is that
system/library headers are often chock-full of C-style casts,
resulting in huge quantities of warnings over which one has no
control...

-Miles
 
