C as a Subset of C++ (or C++ as a Superset of C)


Jens Gustedt

On 28.08.2012 17:57, David Brown wrote:
In practice, this works fine - but the guarantees in the standard are
weak. I /use/ bitfields regularly, with different types and enum types
in the bitfields. But I would prefer that the standards said that such
usage was completely well-defined.

can't agree on that, the standard describes exactly how this has to be
laid out. (well it could decide to pad it to 64 bits, but that's it.)
It will be fun to try it out. For my professional work, I am reliant on
"official" builds of gcc from various sources for various targets (as
well as a few other non-gcc compilers), but I like to keep track of the
latest developments too.

starting with gcc 4.7 they have moved to new interfaces for the
builtins that come close to what the standards (C and C++) describe
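
(Presumably these are the __atomic_* builtins that gcc 4.7 added to track the C11/C++11 memory model; that is my reading, since the post doesn't name them. A minimal sketch of the two spellings, with made-up names, noting that the <stdatomic.h> header itself only shipped with later gcc releases:)

#include <stdatomic.h>

/* C11 spelling, as the standard describes it: */
_Atomic int counter_c11;
void bump_c11(void) {
    atomic_fetch_add_explicit(&counter_c11, 1, memory_order_seq_cst);
}

/* gcc >= 4.7 builtin spelling, using the same memory-order model: */
int counter_builtin;
void bump_builtin(void) {
    __atomic_fetch_add(&counter_builtin, 1, __ATOMIC_SEQ_CST);
}
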
"Med vennlig hilsen" - "with friendly greetings" in Norwegian. It's a
habit, and I sometimes use it in English-language emails without
thinking about it.

ah, my first name might have triggered that reflex

Jens
 

Casey Carter

well, yes, but I listed that they could add this as well.
probably most C++ code wouldn't notice the change.

I should have been more clear about my point here: I was trying to say
that the only reason this definition of NULL is unacceptable to C++ is
because of the lack of the implicit conversion from void*. Given that
conversion, there would be no other barrier in C++ to using ((void*)0)
for NULL.
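
(A tiny illustration of that barrier; the snippet and the MYNULL name are mine, not from the thread:)

#define MYNULL ((void *)0)   /* the C-style definition under discussion */

int *p = MYNULL;   /* accepted in C: void* converts implicitly to int*       */
                   /* rejected in C++: no implicit conversion from void*, so */
                   /* this definition of NULL can't be used there            */
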
possibly, the change could be limited to 'extern "C"' code or similar
(and/or could be ignored in cases of templates or overloading, or at
least be a lower priority than any other matches).

I think the rules for overload resolution and template type deduction
are complicated enough without introducing a feature that behaves
differently depending on whether or not it interacts with those systems.

There's no such thing as 'extern "C" code': extern "foo" is a *linkage
specification* whose sole purpose is to tell the implementation to
engage the ABI machinery appropriate for language "foo".
there are plenty of uses besides malloc, so malloc is just one of many
cases (mmap, memcpy, VirtualAlloc, ...).

or, a biggie:
for any user-defined code which accepts or returns "void *".

Returns, not accepts. The implicit conversion _from_ void* is dangerous,
implicit conversion _to_ void* is perfectly safe type erasure.
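
(To make the asymmetry concrete, a sketch of my own with made-up names: memcpy's void* parameters need no cast in either language, while malloc's void* result needs a cast only in C++.)

#include <stdlib.h>
#include <string.h>

struct Foo { int x; };

void copy_demo(struct Foo *src) {
    struct Foo *dst;
    dst = malloc(sizeof *dst);          /* fine in C; C++ demands (struct Foo *)malloc(...) */
    if (dst != NULL)
        memcpy(dst, src, sizeof *dst);  /* fine in both: Foo* -> void* is always implicit   */
    free(dst);
}
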
for example, to do similar tasks in C++ often requires a larger number
of casts.



could work...



this would break large amounts of C code.

I'll handwave here and claim that's what compiler options are for. Valid
C90 code will always be valid C90 code, I see no reason why a future C20
compiler couldn't be instructed to compile code as C90. Old code will
always be old code, but does that mean that we have to keep on writing
old code forever?

It's already the case that C11 made some C99 features optional: there
may be conforming C11 compilers that refuse to compile some conforming
C99 programs. The kind of change I suggest is quantitatively but not
qualitatively different.
I don't think people on either side would want changes which cause their
existing programs to break and have to be rewritten.


we call this 'void *', where this is nearly the only thing that can
really be done with this type anyways ("void" variables are otherwise
pretty much useless, ...).

I am differentiating between the implicit conversion _from_ any pointer
type and the implicit conversion _to_ any pointer type; I posit that
those two features have unique design intentions and should therefore be
represented by distinct types. C conflates the two ideas in void*, C++
doesn't have the conversion _to_ any pointer type at all, except for
nullptr. (Given how simple it is to make a user-defined type in C++ that
implicitly converts to any pointer type, it's notable that I've never
seen anyone feel the need to do so.)
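
(For what it's worth, such a type really is only a few lines; this is a hypothetical sketch, not something from any library:)

#include <cstdlib>

// Hypothetical any_ptr: restores C-style "converts to any pointer" behaviour.
struct any_ptr {
    void *p;
    template <class T>
    operator T*() const { return static_cast<T*>(p); }
};

any_ptr my_alloc(std::size_t n) { return any_ptr{std::malloc(n)}; }

void demo() {
    int *pi = my_alloc(sizeof(int));   // no cast needed at the call site
    std::free(pi);
}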

void* has 2 uses in C:
1. it's the "sink" type to which all other pointer types can be
implicitly converted.
2. it's the "source" type that implicitly converts to all other pointer
types.

and 2 uses in C++:
1. "sink" type just as in C.
2. type-erased pointer that designates _some_kind_ of object about which
nothing is known except its location in memory.

The secondary usage is diametrically opposite between C and C++: one
disallows using a void* for any purpose without a cast, the other allows
you to pass a void* where you would any pointer type without a cast. In
C, I can pass the same void* to fclose, free, strcat, and
hundreds/thousands of other functions without a compiler diagnostic.
Using void* you've effectively opted out of a large part of the type system.

C programmers also often use void* as either a type-erased or generic
pointer but do so purely based on convention and discipline: you will
get no help from the compiler. If an intern jumps into your code the
next day and passes your type-erased pointer to fputs, the compiler will
accept it happily.
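
(Putting that claim into code, a deliberately nonsensical sketch of my own:)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Never call this; it exists only to show that every line compiles
   cleanly in C no matter what p actually points to, while a C++
   compiler rejects all three calls. */
void opt_out(void *p, char *buf)
{
    free(p);          /* p treated as heap memory              */
    fclose(p);        /* p treated as a FILE*                  */
    strcat(buf, p);   /* p treated as a '\0'-terminated string */
}
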
yes, but the least impact route would be to just pick up C's "void *"
semantics for this, since C++ code isn't likely to notice, and existing
C code can keep working unmodified.

While it's true that making the language more permissive doesn't impact
the correctness of existing programs, that doesn't necessarily make it
always a good idea. I seriously doubt that the C++ community would ever
accept implicit conversion from void* into the language; the case I'm
attempting to make here is that C shouldn't have that feature either and
likely would not if it was designed afresh today.

Given that preserving the semantics of old code has a higher priority to
the C community than almost any other concern, I think it's unlikely
that we will ever have a C++ that is truly a superset of C. If anything
I think it's more likely that C++ would introduce even more breaking
changes to become _less_ compatible with C.
 

Casey Carter

That said, I don't really see the point of "void*" as it is in C++
nowadays. Since you'd have to do an explicit conversion any time you
use it, in C++ it has nothing that couldn't be done with an "unsigned
char*". On the contrary, "unsigned char*" allows byte-based pointer
arithmetic and inspecting the contents byte by byte.

The only reason, I think, for "void*" is interface compatibility with
C. Somehow it misses the whole point of it.

Jens

The point of a type system is to restrict what operations are valid, so
void* is desirable in C++ precisely because there are _fewer_ things you
can do with it than unsigned char*.
 

BGB

I should have been more clear about my point here: I was trying to say
that the only reason this definition of NULL is unacceptable to C++ is
because of the lack of the implicit conversion from void*. Given that
conversion, there would be no other barrier in C++ to using ((void*)0)
for NULL.

yep, which is probably why they could do it...
it does at least make more sense than defining it as 0 (which would give
a warning in C).

one possibility could be changing it internally to _Nullptr or similar, which
could be defined as "functionally equivalent to ((void *)0)".

I think the rules for overload resolution and template type deduction
are complicated enough without introducing a feature that behaves
differently depending on whether or not it interacts with those systems.

There's no such thing as 'extern "C" code': extern "foo" is a *linkage
specification* whose sole purpose is to tell the implementation to
engage the ABI machinery appropriate for language "foo".

they *could* make it work this way, but OTOH, maybe it isn't such a
great idea to add context-sensitivity in this case.

otherwise, C's "void *" semantics would then apply everywhere.

Returns, not accepts. The implicit conversion _from_ void* is dangerous,
implicit conversion _to_ void* is perfectly safe type erasure.

but, changing this behavior would break existing code.

consider you have something like:
Foo *obj;
obj=gcalloc(sizeof(Foo)); //allocated via a GC API
which would need to be changed everywhere to:
obj=(Foo *)gcalloc(sizeof(Foo)); //allocated via a GC API

this kind of thing isn't really a good option.

introducing another pointer type similarly has the problem of requiring
changing existing code to make it work (altering use of "void *" return
types), which is similarly not a good option.

I'll handwave here and claim that's what compiler options are for. Valid
C90 code will always be valid C90 code, I see no reason why a future C20
compiler couldn't be instructed to compile code as C90. Old code will
always be old code, but does that mean that we have to keep on writing
old code forever?

actually, I was thinking of C99 or C11 as the subset, not C90.
in both cases, "void *" works as it did before.

It's already the case that C11 made some C99 features optional: there
may be conforming C11 compilers that refuse to compile some conforming
C99 programs. The kind of change I suggest is quantitatively but not
qualitatively different.

it would break the majority of existing code, rather than just "some
code which uses a few features which never gained widespread adoption
anyways" (or "something which has largely fallen into disuse"); this is
a bit more severe, as it would break "nearly all existing C programs"...

I am differentiating between the implicit conversion _from_ any pointer
type and the implicit conversion _to_ any pointer type; I posit that
those two features have unique design intentions and should therefore be
represented by distinct types. C conflates the two ideas in void*, C++
doesn't have the conversion _to_ any pointer type at all, except for
nullptr. (Given how simple it is to make a user-defined type in C++ that
implicitly converts to any pointer type, it's notable that I've never
seen anyone feel the need to do so.)

void* has 2 uses in C:
1. it's the "sink" type to which all other pointer types can be
implicitly converted.
2. it's the "source" type that implicitly converts to all other pointer
types.

and 2 uses in C++:
1. "sink" type just as in C.
2. type-erased pointer that designates _some_kind_ of object about which
nothing is known except its location in memory.

The secondary usage is diametrically opposite between C and C++: one
disallows using a void* for any purpose without a cast, the other allows
you to pass a void* where you would any pointer type without a cast. In
C, I can pass the same void* to fclose, free, strcat, and
hundreds/thousands of other functions without a compiler diagnostic.
Using void* you've effectively opted out of a large part of the type
system.

C programmers also often use void* as either a type-erased or generic
pointer but do so purely based on convention and discipline: you will
get no help from the compiler. If an intern jumps into your code the
next day and passes your type-erased pointer to fputs, the compiler will
accept it happily.


actually, C code typically handles the type-erased case by defining new
types based on incomplete structs or similar, as in:
typedef struct _opaqueptr_s opaqueptr_t;

so, this isn't really an issue in practice.
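
(Spelled out, the pattern looks something like the sketch below; the names are made up, and since identifiers that begin with an underscore are reserved at file scope, a plain tag is safer than _opaqueptr_s.)

/* widget.h (sketch): callers see only an incomplete type, so the handle
   is opaque to them yet still distinct from every other pointer type. */
struct widget;
typedef struct widget widget_t;
widget_t *widget_create(void);
void widget_destroy(widget_t *w);

/* widget.c: only the implementation knows the layout. */
#include <stdlib.h>
struct widget { int id; };
widget_t *widget_create(void) { return calloc(1, sizeof(widget_t)); }
void widget_destroy(widget_t *w) { free(w); }
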

While it's true that making the language more permissive doesn't impact
the correctness of existing programs, that doesn't necessarily make it
always a good idea. I seriously doubt that the C++ community would ever
accept implicit conversion from void* into the language; the case I'm
attempting to make here is that C shouldn't have that feature either and
likely would not if it was designed afresh today.

Given that preserving the semantics of old code has a higher priority to
the C community than almost any other concern, I think it's unlikely
that we will ever have a C++ that is truly a superset of C. If anything
I think it's more likely that C++ would introduce even more breaking
changes to become _less_ compatible with C.

well, one could always argue that, on the other side, it may be that C
is already good and that it is C++ which needs changing to be more
compatible with C, but either way.


usually, the route of least impact is the most preferable, and simply
adopting C's "void *" semantics is what leads to this.


the other possible option is, granted, the introduction of an explicit
"C mode" into the compiler, which could possibly look like 'extern "C"'
but would in fact change the semantics, or maybe the introduction of new
syntax for this case, say:
_C { ... }

which would both enable C-style linkage and switch over to C-style
semantics.

maybe it could also be used to declare functions with C-style semantics
(but retaining C++ linkage), as in:
void myfunc(void) _C
{
Foo *obj;
obj=malloc(sizeof(Foo));
...
}

then C compilers would simply ignore the _C modifier (that or it is
handled similarly to how 'extern "C"' is generally handled in headers,
IOW, via preprocessor magic).

maybe it could be called something like:
C_MODE

and would be defined something like:
#ifndef C_MODE
#ifdef __cplusplus
#define C_MODE _C
#else
#define C_MODE
#endif
#endif


or, something along these lines...
 

Bo Persson

Jens Gustedt wrote 2012-08-27 18:34:
I am not sure that I understand this one. C has the optional uintXX_t
that are guaranteed to have a fixed width, no padding, etc., if they
exist. Would it for your purpose just suffice to make some of these
types mandatory, e.g. for 8, 16, 32 and 64 bit integers?

Why would you want to make this mandatory for everyone? Why not be
satisfied if your code is portable to systems having these types?

Whatever we do at the language level, we still have code that will not
run both on a cell phone and a supercomputer. In places where the
language standards are vague, it is to ALLOW for implementations on
other types of systems.

For example, I write some code for IBM mainframes. It will not run at
all on any systems not having IMS and DB2. It is totally non-portable
whatever language features we use. Just let us do that!


Bo Persson
 

Jens Gustedt

On 28.08.2012 20:08, Scott Lurndal wrote:
Jens Gustedt <[email protected]> writes:
Until you build code that must compile and run on both big-endian and
little-endian architectures, where the order of the bits _will_ change. Not
a problem unless the bits are mapped to hardware or are serialized as a
larger unit (e.g. uint32_t).

ah ok, the standard effectively only makes a statement about the value
(bit pattern seen as an unsigned value) of such a beast. How and in which
order this is stored depends on the endianness of the architecture. But
that is not a problem of bit-fields per se, but of the fact that C still
has to run on two different types of architectures with respect to
that.

Jens
 

James Kuyper

On 28.08.2012 17:57, David Brown wrote:

can't agree on that, the standard describes exactly how this has to be
laid out. (well it could decide to pad it to 64 bits, but that's it.)

That's not consistent with my understanding of the standard. Could you
specify precisely how you thought the standard required that structure
to be laid out?

The relevant clause from the C standard is 6.7.2.1p11:
An implementation may allocate any addressable storage unit large enough to hold a bit-field.
If enough space remains, a bit-field that immediately follows another bit-field in a
structure shall be packed into adjacent bits of the same unit. If insufficient space remains,
whether a bit-field that does not fit is put into the next unit or overlaps adjacent units is
implementation-defined. The order of allocation of bit-fields within a unit (high-order to
low-order or low-order to high-order) is implementation-defined. The alignment of the
addressable storage unit is unspecified.

In the C++ standard, 9.6p1 says much the same thing, in somewhat less
formal language.

Neither standard specifies the size of the allocation unit from which
bit-fields are allocated. Which size were you assuming was specified by
the standard?

If there's enough space to store b in the same allocation unit as 'a',
the C standard requires that 'b' be stored in bits adjacent to 'a', but
it does not specify the relative order of 'a' and 'b'. Both standards
explicitly state that the order of bit-fields within an allocation unit
is implementation-defined. What order were you assuming was specified by the standard?

Both standards explicitly leave it implementation-defined whether bit-fields can
overlap adjacent allocation units. Since the assumption that CHAR_BIT==8
was specified, that problem won't come up in this particular case, but
with different bit-field sizes, or even with the same sizes in a
different order, that could be an issue even on entirely conventional
systems. For instance, if "c" were moved before "b", and the allocation
unit was either 8 or 16 bits in size, then "b" might, or might not,
overlap two consecutive allocation units, depending upon the
implementation. Were you making any assumptions about how such overlaps
would be handled, or were you relying upon the assumption that
CHAR_BIT==8 to conclude that this particular structure has a uniquely
specified layout?
 

Bo Persson

BGB wrote 2012-08-28 05:34:
there are plenty of uses besides malloc, so malloc is just one of many
cases (mmap, memcpy, VirtualAlloc, ...).

But you don't use memcpy for C++ types, you use std::copy, which is typed.
And you don't do malloc either, you use the containers from the C++
standard library. Or use the occasional new X, which doesn't need a cast
either.
or, a biggie:
for any user-defined code which accepts or returns "void *".

No, these can be templates and accept or return T*, for any type T.
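
(For instance, a sketch of the kind of interface meant here; the function name duplicate is made up:)

#include <algorithm>
#include <cstddef>

// Instead of void *duplicate(const void *src, size_t n), the template keeps
// the element type all the way through, so neither side needs a cast.
template <typename T>
T *duplicate(const T *src, std::size_t n)
{
    T *dst = new T[n];
    std::copy(src, src + n, dst);   // typed copy in place of memcpy
    return dst;
}
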
for example, to do similar tasks in C++ often requires a larger number
of casts.

If you do it the C++ way, it actually uses fewer casts. :)


A big problem with trying to find the common ground between C and C++
is that lots of good C++ code isn't using many C features at all. And
well-written C code tends to be very non-idiomatic C++ code.


Bo Persson
 

Jens Gustedt

On 28.08.2012 19:24, Casey Carter wrote:
The point of a type system is to restrict what operations are valid,
sure

so void* is desirable in C++ precisely because there are _fewer_
things you can do with it than unsigned char*.

my question would be more, why would you need it at all? Where does
the idea of having a pointer that has no type information occur in
C++? In C++ all objects have a type from their creation ("new" always
generates a typed object) and all other stuff is regulated through the
different forms of cast, inheritance and all that. If you want to do
it really badly, you'd do a reinterpret_cast anyhow, no need to go
through "void*", no? I think C++ itself doesn't need "void*" much and
the use of it is rightly frowned upon in C++. (Well there are some
very restricted corner cases such as overloading "operator new" or
things like that.)

You need "void*" when you use "malloc" or other stuff for
compatibility with C. So why the hell have it function differently in
C++? This is just an inconvenience / annoyance.

Jens
 

Jens Gustedt

On 28.08.2012 13:04, Malcolm McLean wrote:
On Tuesday, 28 August 2012 08:27:55 UTC+1, Jens Gustedt wrote:
void * just documents that "this is an unsigned char * holding arbitrary
bytes". It's a bit pointless to require a cast to and from, because you
can't use the data in any way without converting it to something else,
but that's just a minor syntactical niggle.

minor syntax problem, perhaps, but it can be very annoying

In C, many agree that casts are to be avoided as much as possible,
because they often hide problems. Just introducing casts into code for
the sole reason of making it compatible with C++ is a show stopper.

Jens
 

Bo Persson

Jens Gustedt wrote 2012-08-28 21:02:
You need "void*" when you use "malloc" or other stuff for
compatibility with C. So why the hell have it function differently in
C++? This is just an inconvenience / annoyance.

No, it was done on purpose.

If you write

MyClass* x = malloc(sizeof(MyClass));

you are in real trouble, because you forgot to call the constructor of
MyClass. You really don't want to have that conversion by default.

In C this is less of a problem, because there is no constructor anyway.
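
(To spell that out with a sketch of my own: in C++, allocation and construction are separate steps, and the cast would only hide that the second step is missing.)

#include <cstdlib>
#include <new>

struct MyClass { int value = 42; };

void demo()
{
    MyClass *x = new MyClass;              // allocates and runs the constructor

    // If malloc is used anyway, construction is a separate, explicit step:
    void *raw = std::malloc(sizeof(MyClass));
    if (raw != nullptr) {
        MyClass *y = new (raw) MyClass;    // placement new constructs explicitly
        y->~MyClass();
        std::free(raw);
    }
    delete x;
}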


Bo Persson
 

James Kuyper

On 08/27/2012 12:34 PM, Jens Gustedt wrote:
....
... C has the optional uintXX_t
that are guaranteed to have a fixed width, no padding, etc., if they
exist. Would it for your purpose just suffice to make some of these
types mandatory, e.g. for 8, 16, 32 and 64 bit integers?

Implementations which fail to support those types do so almost
exclusively because there is no hardware support on the target platform
for a type that meets the standard's specifications for those types.
Keep in mind that the specifications are such as to rule out emulation.
For instance, an emulated 32-bit type on a system with CHAR_BIT==9 could
be called int_least32_t, or int_fast32_t, but it could not meet the
requirements to qualify as int32_t, because it would have to occupy at
least 36 bits, and designating 4 of those bits as padding is not allowed
for int32_t.

The sole effect of making those types mandatory would be to render all
implementations targeting such platforms non-conforming. Such platforms
may currently be exceedingly rare, or even non-existent - but is there
any value to be gained by prohibiting implementations for such platforms
from claiming standard conformance?
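
(Portable code can already cope with the exact-width types being optional, using the limit macros that <stdint.h> defines only when the corresponding type exists; a sketch:)

#include <stdint.h>

#ifdef INT32_MAX
typedef int32_t word32;        /* exactly 32 bits, no padding, two's complement */
#else
typedef int_least32_t word32;  /* always present; may be wider, e.g. 36 bits    */
#endif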
 

Jens Gustedt

On 28.08.2012 20:50, James Kuyper wrote:
That's not consistent with my understanding of the standard. Could you
specify precisely how you thought the standard required that structure
to be laid out?

The relevant clause from the C standard is 6.7.2.1p11:

In the C++ standard, 9.6p1 says much the same thing, in somewhat less
formal language.

I have the vague memory of someone claiming that C++ allowed for an
out of order storage of bit-fields, but I would be happy to learn the
contrary.
Neither standard specifies the size of the allocation unit from which
bit-fields are allocated. Which size were you assuming was specified by
the standard?

It must be an addressable storage unit, so it must be a multiple of 8
bits (I assumed that CHAR_BIT is 8). So whatever multiple of 8 is
chosen for it, c and d will always fit into the same unit. So the order
will always be a-b-c-d-e or the other way around.
If there's enough space to store b in the same allocation unit as 'a',
the C standard requires that 'b' be stored in bits adjacent to 'a', but
it does not specify the relative order of 'a' and 'b'. Both standards
explicitly state that the order of bit-fields within an allocation unit
is implementation-defined. What order were you assuming was specified by the standard?

Both standards explicitly leave it implementation-defined whether bit-fields can
overlap adjacent allocation units. Since the assumption that CHAR_BIT==8
was specified, that problem won't come up in this particular case, but
with different bit-field sizes, or even with the same sizes in a
different order, that could be an issue even on entirely conventional
systems. For instance, if "c" were moved before "b", and the allocation
unit was either 8 or 16 bits in size, then "b" might, or might not,
overlap two consecutive allocation units, depending upon the
implementation.

I was referring to this example only. Perhaps you missed the fact that
I referred to the case of potential overlap upthread already and that
we were discussing a specific case here that came from an example by
David where he wanted a layout that respected byte (uint8_t)
boundaries.

But, point taken, it has two different layouts.

Jens
 

Jens Gustedt

On 28.08.2012 21:24, James Kuyper wrote:
On 08/27/2012 12:34 PM, Jens Gustedt wrote:
...

Implementations which fail to support those types do so almost
exclusively because there is no hardware support on the target platform
for a type that meets the standard's specifications for those types.
Keep in mind that the specifications are such as to rule out emulation.
For instance, an emulated 32-bit type on a system with CHAR_BIT==9 could
be called int_least32_t, or int_fast32_t, but it could not meet the
requirements to qualify as int32_t, because it would have to occupy at
least 36 bits, and designating 4 of those bits as padding is not allowed
for int32_t.
exactly

The sole effect of making those types mandatory would be to render all
implementations targeting such platforms non-conforming. Such platforms
may currently be exceedingly rare, or even non-existent - but is there
any value to be gained by prohibiting implementations for such platforms
from claiming standard conformance?

POSIX went that path, for example, so it seems that they got away with
it well.

And just take it as a question; I didn't say that I want it
myself. You seem to be against it; I wouldn't be sure.

Also I think that CHAR_BIT==9 could imply the existence of uint9_t,
wouldn't it? One could expect that it then has the types that
correspond to sizeof(int)*CHAR_BIT and similar multiples, no?

I was just trying to get at what the original poster had in mind with his
question (which you completely snipped). I understood it that he was
looking for a way to access "uninterpreted" larger chunks of objects,
where the chunks correspond to the typical sizes that arithmetic types
have.

Jens
 

James Kuyper

On 28.08.2012 20:50, James Kuyper wrote:

I have the vague memory of someone claiming that C++ allowed for an
out of order storage of bit-fields, but I would be happy to learn the
contrary.

You won't be happy. "in-order" has two different possible meanings, and
each implementation has the option of choosing either one. Both C and
C++ are in agreement on this. Different implementations for the same
platform could make different choices (though market forces make that
unlikely); a single implementation could even make it a command-line option.
C++ allows more freedom than C, because it doesn't require consecutive
bit-fields in the same allocation unit to occupy adjacent sets of bits.
That means that there's a lot more than just two different possible orders.
It must be an addressable storage unit, so it must be a multiple of 8
bits (I assumed that CHAR_BIT is 8). So whatever multiple of 8 is
chosen for it, c and d will always fit into the same unit. So the order
will always be a-b-c-d-e or the other way around.

Even in C, there's a lot more than just two possible orders. If, for
example, the allocation unit size is 16, a and b will have to be in the
same allocation unit, and c, d, and e will have to be in the same
allocation unit, but depending upon the allocation order chosen by the
implementation, the fields within each allocation unit could be stored
in either order: "ab cde" or "ba edc". The implementation's definition
of allocation order could even specify different orders in even and odd
numbered words (though I can't think of any reason to do so, except on
middle-endian machines with allocation units of at least 32 bits): "ab
edc" or "ba cde".

Do you still feel that the C standard is sufficiently specific?
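
(One way to see which of these choices a given implementation actually makes is to dump the object representation. A sketch; the field widths below are made up, since the original struct from upthread isn't quoted in this post:)

#include <stdio.h>
#include <string.h>

struct S {           /* hypothetical widths, for illustration only */
    unsigned a : 4;
    unsigned b : 4;
    unsigned c : 2;
    unsigned d : 3;
    unsigned e : 3;
};

int main(void)
{
    struct S s;
    unsigned char bytes[sizeof s];
    size_t i;

    memset(&s, 0, sizeof s);
    s.b = 0xF;                      /* set one field to all ones          */
    memcpy(bytes, &s, sizeof s);    /* inspect the object representation  */

    printf("sizeof(struct S) = %zu\n", sizeof s);
    for (i = 0; i < sizeof s; i++)
        printf("byte %zu: 0x%02X\n", i, bytes[i]);
    return 0;
}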
 

James Kuyper

On 08/28/2012 04:47 PM, Jens Gustedt wrote:
....
Also I think that CHAR_BIT==9 could imply the existence of uint9_t,
wouldn't it? One could expect that it then has the types that
correspond to sizeof(int)*CHAR_BIT and similar multiples, no?

It would not be mandatory, but it would be extremely likely.
 

Jens Gustedt

On 28.08.2012 21:15, Bo Persson wrote:
Jens Gustedt wrote 2012-08-28 21:02:


No, it was done on purpose.

If you write

MyClass* x = malloc(sizeof(MyClass));

you are in real trouble, because you forgot to call the constructor of
MyClass. You really don't want to have that conversion by default.

exactly that was my point. But the problem is not that "malloc" has a
return type of "void*"; the problem is the use of "malloc"
itself. Don't shoot the messenger.

In C++, somebody using "malloc" is on his own. Casting to the target
pointer type is even more of a problem, because it is just hiding the
fact that you didn't call a constructor. You got to be very sure of
what you are doing, anyhow.

Requesting that you should cast the "void*" is counterproductive. A
"strict C++" mode that barks at you for any use of "malloc", *that*
would be appropriate.

I'd still like to hear of a plausible use case for "void*" in C++ that
is not related to C compatibility or to the overloading of "operator
new".

Jens
 

Melzzzzz

I'd still like to hear of a plausible use case for "void*" in C++ that
is not related to C compatibility or to the overloading of "operator
new".
Well, when implementing a template container for pointers, it is a great
optimization to use just a single class specialized for pointers to void,
and cast to the appropriate pointer type in the specialized classes,
while using a single implementation.
It greatly reduces code size.
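
(A sketch of that thin-wrapper pattern; the class names are mine:)

#include <cstddef>
#include <vector>

// One real implementation stores void*, and a thin partial specialization
// for T* just casts at the boundary, so every List<Foo*>, List<Bar*>, ...
// shares the same underlying code.
template <typename T> class List { /* generic version, not shown */ };

template <> class List<void*> {           // the one real implementation
    std::vector<void*> items;
public:
    void push(void *p)              { items.push_back(p); }
    void *at(std::size_t i) const   { return items[i]; }
    std::size_t size() const        { return items.size(); }
};

template <typename T> class List<T*> {    // thin typed wrapper over List<void*>
    List<void*> impl;
public:
    void push(T *p)                 { impl.push(p); }       // T* -> void* is implicit
    T *at(std::size_t i) const      { return static_cast<T*>(impl.at(i)); }
    std::size_t size() const        { return impl.size(); }
};

// List<int*> and List<double*> now share List<void*>'s code.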
 

Casey Carter

On 28.08.2012 21:15, Bo Persson wrote:

exactly that was my point. But the problem is not that "malloc" has a
return type of "void*"; the problem is the use of "malloc"
itself. Don't shoot the messenger.

In C++, somebody using "malloc" is on his own. Casting to the target
pointer type is even more of a problem, because it is just hiding the
fact that you didn't call a constructor. You got to be very sure of
what you are doing, anyhow.

Requesting that you should cast the "void*" is counterproductive. A
"strict C++" mode that barks at you for any use of "malloc", *that*
would be appropriate.

malloc is not the only function - standard or otherwise - that returns a
void*. Are you suggesting that C++ should forbid malloc, or that C++
should forbid void*, or that C++ should have some other type whose only
purpose is to represent void* returns from C functions?
 
