const and proxies of objects


ciccio

Hi all,

I'm lately a bit puzzled about something and it boils down to the
usage of the keyword const.

I'll sketch the problem and hope to find enlightenment here in this
group.

I would like to create some kind of object and proxies of that
object. A simple example would be a vector and the proxy is then a
subvector.

Assume I have the following array class that contains a simple array
definition

class array {
public:
typedef unsigned int size_type;

array() : data_(0), size_(0) {}
array(size_type size) { /* definition here */ }
array(const array &c) { /* copy constructor */ }

// all requested functions and operators here

private:
double * data_;
size_type size_;
};

This array class would then contain the data of my vector.

The private data members of the vector class are then an array
and an array reference. The reference is used for all data access and
thus in all functions. The vector constructors set the reference.
In case of a proxy, the reference is set to data of the original.

class vector {
public:
typedef array::size_type size_type;

vector() : _data_(), data_(_data_) { /* def here */ }
vector(size_type size) : _data_(size), data_(_data_) {
/* def here */
}
vector(const vector &c) : _data_(c.data_), data_(_data_) {
/* def here */
}

// proxy
vector(const vector &p, int i) : _data_(), data_(p.data_) {
// things i can be used for, e.g. a different length
}

// all requested functions and operators here

private:
array _data_;
array &data_;
// bunch of variables you want here (size, begin, end, stride)
};


The question I now have concerns the proxy constructor, i.e.

vector(const vector &p, int i) : _data_(), data_(p.data_) {
// things i can be used for, e.g. a different length
}

The constructor takes a const vector reference p, and creates a new
vector that can alter the data members of p afterwards. This could be
done by means of other functions. For example:

vector v(20);   // vector v of size 20
vector p(v,10); // vector p shares v's elements but addresses only the first 10
p(1) = 1.0;     // writes to v's data through the proxy p

How correct is this with respect to the idea of const?

I could define the constructor as

vector(vector &p, int i) : _data_(), data_(p.data_) {
// things i can be used for, e.g. a different length
}

but this would make it impossible to use

vector(vector(p,i),j)

because a temporary cannot be bound to a non-const reference.

So, how far should the const keyword reach? If an object is defined as
const, should the proxies or all derived objects also be const?

I don't know, I am a bit puzzled at the moment.

Regards
 

Alf P. Steinbach

* ciccio, on 02.06.2010 15:47:
I would like to create some kind of object and proxies of that
object. A simple example would be a vector and the proxy is then a
subvector.

Assume I have the following array class that contains a simple array
definition

class array {
public:
typedef unsigned int size_type;

array() : data_(0), size_(0) {}
array(size_type size) { /* definition here */ }
array(const array&c) { /* copy constructor */ }

// all requested functions and operators here

private:
double * data_;
size_type size_;
};

As long as you're using a raw array as the representation you'll also need to define
a destructor and a copy assignment operator (or at least declare them). This is
known as the "law of 3". If you need any single one of copy constructor,
destructor or copy assignment operator, you probably need all three.

If not then you'll leak memory and do double deallocations to compensate...
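
For concreteness, a minimal sketch of how the three could look, assuming data_
owns a buffer allocated with new[] (the allocation details were elided in the
original post, so treat this as illustration only):

#include <algorithm>  // std::copy, std::swap

class array {
public:
    typedef unsigned int size_type;

    array() : data_(0), size_(0) {}

    array(size_type size)
        : data_(new double[size]()), size_(size) {}

    array(const array &c)                       // copy constructor
        : data_(new double[c.size_]), size_(c.size_) {
        std::copy(c.data_, c.data_ + c.size_, data_);
    }

    ~array() { delete[] data_; }                // destructor

    array &operator=(const array &c) {          // copy assignment
        array tmp(c);                           // copy-and-swap
        std::swap(data_, tmp.data_);
        std::swap(size_, tmp.size_);
        return *this;
    }

private:
    double *data_;
    size_type size_;
};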

By the way, emulating the standard library wrt. defining local size types
everywhere is just added complication for strictly negative gain. I.e., it's
silly. :) Just Say No(TM) to that idea.

This array class would then contain the data of my vector.

The private data members of the vector class are then an array
and an array reference. The reference is used for all data access and
thus in all functions. The vector constructors set the reference.
In case of a proxy, the reference is set to data of the original.

class vector {
public:
typedef array::size_type size_type;

vector() : _data_(), data_(_data_) { /* def here */ }
vector(size_type size) : _data_(size), data_(_data_) {
/* def here */
}
vector(const vector&c) : _data_(c.data_), data_(_data_) {
/* def here */
}

// proxy
vector(const vector&p, int i) : _data_(), data_(p.data_) {
// things i can be used for, e.g. a different length
}

// all requested functions and operators here

private:
array _data_;
array&data_;
// bunch of variables you want here (size, begin, end, stride)
};

The usual name for this kind of thing is "slice". Another good reason to change
the name is that we already have 'std::vector'.

Check out std::valarray (§26.3 of C++98 standard).

I haven't used it but it's there, and it might free you from implementing this
yourself.
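
For example, something along these lines (a small untested sketch, not code
from this thread) gives a writable view on part of a valarray:

#include <valarray>

int main() {
    std::valarray<double> v(20);              // 20 doubles, all zero
    std::slice first10(0, 10, 1);             // start 0, length 10, stride 1
    v[first10] = 1.0;                         // assign through the slice
    std::valarray<double> sub = v[first10];   // or materialize the sub-vector
    return 0;
}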

The question I now have concerns the proxy constructor, i.e.

vector(const vector&p, int i) : _data_(), data_(p.data_) {
// things i can be used for, e.g. a different length
}

The constructor takes a const vector reference p, and creates a new
vector that can alter the data members of p afterwards. This could be
done by means of other functions. For example:

vector v(20);   // vector v of size 20
vector p(v,10); // vector p shares v's elements but addresses only the first 10
p(1) = 1.0;     // writes to v's data through the proxy p

How correct is this with respect to the idea of const?

As far as I understand your intention it isn't const-correct, really. There is
about the same problem with a smart pointer but we accept it for the smart
pointer because the constness of the smart pointer does not relate to the
constness of the pointee. However in your case you have a wrapper whose
constness or not presumably is meant to restrict (or not) available operations.
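
The pointer analogy, spelled out (the constness of the handle says nothing
about the constness of what it refers to):

int main() {
    double d = 1.0;
    double *const p = &d;   // the handle itself is const...
    *p = 2.0;               // ...but the pointee is still freely mutable
    return 0;
}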

It is extremely dangerous to use a copy constructor to do anything other than
actual /copying/. For one, the compiler is free to assume that a copy
constructor just copies, and optimize away a copy constructor call. For
another, programmers also expect a copy constructor to actually copy.

E.g., add a method ref() that produces a Ref, and a constructor that takes a
Ref. Don't try to make everything automatic, implicit and hidden, which IMO is
bad (except for all the exceptions to that rule, he he). Explicit is good.

I could define the constructor as

vector(vector&p, int i) : _data_(), data_(p.data_) {
// things i can be used for, e.g. a different length
}

but this would make it impossible to use

vector(vector(p,i),j)

because a temporary cannot be bound to a non-const reference.

slice( slice( p, i ).ref(), j )
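
A rough sketch of that shape (all names here are invented for illustration,
not taken from the original code): ref() is only callable on a non-const
slice, so a const slice stays read-only, and the aliasing is visible at the
call site.

#include <cstddef>

// Minimal stand-in for the array class from the original post.
class array {
public:
    array() : data_(0), size_(0) {}
    explicit array(std::size_t n) : data_(new double[n]()), size_(n) {}
    ~array() { delete[] data_; }
    double &operator[](std::size_t i) { return data_[i]; }
private:
    array(const array &);              // copying not needed for this sketch
    array &operator=(const array &);
    double *data_;
    std::size_t size_;
};

class slice {
public:
    // Hypothetical explicit handle to the underlying data.
    class Ref {
        friend class slice;
        explicit Ref(array &a) : data_(&a) {}
        array *data_;
    };

    explicit slice(std::size_t n) : data_(n), ref_(data_), offset_(0) {}

    // Proxy constructor: takes a Ref rather than a const slice&.
    slice(Ref r, int i) : data_(), ref_(*r.data_), offset_(i) {}

    Ref ref() { return Ref(ref_); }    // non-const member function

    double &operator()(std::size_t i) { return ref_[offset_ + i]; }

private:
    array data_;
    array &ref_;
    int offset_;
};

int main() {
    slice v(20);
    slice p(v.ref(), 0);   // explicit: p aliases v's data
    p(1) = 1.0;            // fine, because v was non-const
    return 0;
}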

So, how far should the const keyword reach? If an object is defined as
const, should the proxies or all derived objects also be const?

Depends; see comment about smart pointer above.


Cheers & hth.,

- Alf
 

Öö Tiib

Wrong, defining public typedefs in the style of the standard library is a
good thing (TM), you don't like it because it is at odds with your retarded
"use int everywhere" (or ptrdiff_t equivalent) mantra which you have
solidified in a blog post and must now constantly defend.

I have found it more important to have a limit on the contained element
count as part of a non-generic container interface. Containers that in
practice need more than a few thousand elements are rare; often it is a
few hundred. OTOH, when the container can become much larger, operating
on it starts to affect performance. Most comically, when container sizes
are likely so large that the sign bit of the index might start to matter,
you are in territory where the sign bit is again the most trivial of all
the issues to deal with.
 

Squeamizh

Wrong, defining public typedefs in the style of the standard library is a
good thing (TM), you don't like it because it is at odds with your retarded
"use int everywhere" (or ptrdiff_t equivalent) mantra which you have
solidified in a blog post and must now constantly defend.

/Leigh

How about some pros and cons? I'm not just being a pain in the ass,
I'm genuinely curious about this.
 

Kai-Uwe Bux

Eric said:
Ah yes, the "Leigh Indicator" still firing at 100%:

"If Leigh is in favor, it's wrong".

In this case, your suggestion implies that boost is wrong in defining
size_type for boost::array. I am not so sure that the "Leigh indicator" is
working properly here. Whenever I implement a container-like data structure,
I do a typedef size_type. It eases generic programming as client code can
rely on X::size_type being an appropriate type. (Beware: int may not be
universally appropriate, e.g., for file-backed containers std::streamoff
might be appropriate; and that is larger than int on some systems.)
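
For instance (just an illustration of that point, not code from the thread),
a generic helper can state its result type without guessing what the
container uses internally:

#include <vector>

template <typename Container>
typename Container::size_type
count_nonzero(const Container &c) {
    typename Container::size_type n = 0;
    for (typename Container::const_iterator it = c.begin(); it != c.end(); ++it)
        if (*it != 0)
            ++n;
    return n;
}

int main() {
    std::vector<int> v(10, 1);
    std::vector<int>::size_type n = count_nonzero(v);  // relies on the typedef
    return n == 10 ? 0 : 1;
}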


Best

Kai-Uwe Bux
 

Alf P. Steinbach

* Kai-Uwe Bux, on 03.06.2010 19:45:
... (Beware: int may not be
universally appropriate, e.g., for file-backed containers std::streamoff
might be appropriate; and that is larger than int on some systems.)

ptrdiff_t covers most of it in practice on current desktop systems for in-memory
containers.

This means simpler code: you don't have to care about or adapt client code to
the type.

Stream positions are a special case on 32-bit systems. Since standard arithmetic
doesn't generally suffice, a simple typedef is of dubious advantage. I'd say
negative advantage...

<quote
src="http://www.boost.org/doc/libs/1_43_0/libs/iostreams/doc/functions/positioning.html"
emphasis="mine">
The header <boost/iostreams/positioning.hpp> provides the definition of the
integral type boost::iostreams::stream_offset, capable of holding arbitrary
stream offsets on most platforms, together with the definition of two functions,
offset_to_position and position_to_offset, for converting between stream_offset
and std::streampos.

The type std::streampos is required to be able to hold an arbitrary stream
position, but it is not an integral type. Although std::streampos is
interconvertible with the integral type std::streamoff, the conversion from
std::streampos to std::streamoff *may not be faithful* for large (64-bit)
values. The integral type boost::iostreams::stream_offset is intended as a
replacement for std::streamoff, with the implicit conversions to and from
std::streampos being replaced by explicit conversion functions.

The implementation of offset_to_position and position_to_offset relies on
implementation defined behavior, and is guaranteed to work correctly for large
values only for standard libraries which define std::streamoff to be 64-bit type
or for which the Boost Iostreams library has been explicitly configured.
</quote>


Cheers & hth.,

- Alf
 

Kai-Uwe Bux

Alf said:
* Kai-Uwe Bux, on 03.06.2010 19:45:

ptrdiff_t covers most of it in practice on current desktop systems for
in-memory containers.
[...]

For file-backed containers, you might want to use long long or unsigned long
long, which may or may not be the same as ptrdiff_t or size_t.


Best

Kai-Uwe Bux
 

Alf P. Steinbach

* Kai-Uwe Bux, on 03.06.2010 21:09:
Alf said:
* Kai-Uwe Bux, on 03.06.2010 19:45:

ptrdiff_t covers most of it in practice on current desktop systems for
in-memory containers.
[...]

For file-backed containers, you might want to use long long or unsigned long
long, which may or may not be the same as ptrdiff_t or size_t.


<requoting snipped part>
Stream positions are a special case on 32-bit systems. Since standard arithmetic
doesn't generally suffice, a simple typedef is of dubious advantage. I'd say
negative advantage...

<quote
src="http://www.boost.org/doc/libs/1_43_0/libs/iostreams/doc/functions/positioning.html"
emphasis="mine">
The header <boost/iostreams/positioning.hpp> provides the definition of the
integral type boost::iostreams::stream_offset, capable of holding arbitrary
stream offsets on most platforms, together with the definition of two functions,
offset_to_position and position_to_offset, for converting between stream_offset
and std::streampos.

The type std::streampos is required to be able to hold an arbitrary stream
position, but it is not an integral type. Although std::streampos is
interconvertible with the integral type std::streamoff, the conversion from
std::streampos to std::streamoff *may not be faithful* for large (64-bit)
values. The integral type boost::iostreams::stream_offset is intended as a
replacement for std::streamoff, with the implicit conversions to and from
std::streampos being replaced by explicit conversion functions.

The implementation of offset_to_position and position_to_offset relies on
implementation defined behavior, and is guaranteed to work correctly for large
values only for standard libraries which define std::streamoff to be 64-bit type
or for which the Boost Iostreams library has been explicitly configured.
</quote>
</requoting snipped part>


Cheers & hth.,

- Alf
 

Kai-Uwe Bux

Alf said:
* Kai-Uwe Bux, on 03.06.2010 19:45:
... (Beware: int may not be
universally appropriate, e.g., for file-backed containers std::streamoff
might be appropriate; and that is larger than int on some systems.)
[...]
Stream positions are a special case on 32-bit systems. Since standard
arithmetic doesn't generally suffice, a simple typedef is of dubious
advantage. I'd say negative advantage...

<quote
src="http://www.boost.org/doc/libs/1_43_0/libs/iostreams/doc/functions/positioning.html"
emphasis="mine">
The header <boost/iostreams/positioning.hpp> provides the definition of
the integral type boost::iostreams::stream_offset, capable of holding
arbitrary stream offsets on most platforms, together with the definition
of two functions, offset_to_position and position_to_offset, for
converting between stream_offset
and std::streampos.

The type std::streampos is required to be able to hold an arbitrary stream
position, but it is not an integral type. Although std::streampos is
interconvertible with the integral type std::streamoff, the conversion
from std::streampos to std::streamoff *may not be faithful* for large
(64-bit) values. The integral type boost::iostreams::stream_offset is
intended as a replacement for std::streamoff, with the implicit
conversions to and from std::streampos being replaced by explicit
conversion functions.

The implementation of offset_to_position and position_to_offset relies on
implementation defined behavior, and is guaranteed to work correctly for
large values only for standard libraries which define std::streamoff to be
64-bit type or for which the Boost Iostreams library has been explicitly
configured. </quote>

I think, this caveat will become mostly obsolete with C++0x:

[27.5.1/1]
The type streamoff is a synonym for one of the signed basic integral types
of sufficient size to represent the maximum possible file size for the
operating system.


Best

Kai-Uwe Bux
 

Kai-Uwe Bux

Alf said:
* Kai-Uwe Bux, on 03.06.2010 21:09:
Alf said:
* Kai-Uwe Bux, on 03.06.2010 19:45:

... (Beware: int may not be
universally appropriate, e.g., for file-backed containers
std::streamoff might be appropriate; and that is larger than int on
some systems.)

ptrdiff_t covers most of it in practice on current desktop systems for
in-memory containers.
[...]

For file-backed containers, you might want to use long long or unsigned
long long, which may or may not be the same as ptrdiff_t or size_t.


<requoting snipped part>
Stream positions are a special case on 32-bit systems. Since standard
arithmetic doesn't generally suffice, a simple typedef is of dubious
advantage. I'd say negative advantage...

<quote
src="http://www.boost.org/doc/libs/1_43_0/libs/iostreams/doc/functions/positioning.html"
emphasis="mine">
The header <boost/iostreams/positioning.hpp> provides the definition of
the integral type boost::iostreams::stream_offset, capable of holding
arbitrary stream offsets on most platforms, together with the definition
of two functions, offset_to_position and position_to_offset, for
converting between stream_offset
and std::streampos.

The type std::streampos is required to be able to hold an arbitrary stream
position, but it is not an integral type. Although std::streampos is
interconvertible with the integral type std::streamoff, the conversion
from std::streampos to std::streamoff *may not be faithful* for large
(64-bit) values. The integral type boost::iostreams::stream_offset is
intended as a replacement for std::streamoff, with the implicit
conversions to and from std::streampos being replaced by explicit
conversion functions.

The implementation of offset_to_position and position_to_offset relies on
implementation defined behavior, and is guaranteed to work correctly for
large values only for standard libraries which define std::streamoff to be
64-bit type or for which the Boost Iostreams library has been explicitly
configured. </quote>
</requoting snipped part>

I replied to that point, which is somewhat different, elsethread.


Best

Kai-Uwe Bux
 

James Kanze

"Squeamizh" <[email protected]> wrote in message

[...]
Alf eschews unsigned integral types because of the potential for bug
creation when mixing signed and unsigned integral types in an expression.

No. Alf eschews unsigned integral types when the semantics
require an integer or a cardinal (member of I or N), because the
semantics of unsigned integral types in C++ are not those of an
integer or a cardinal.
This position is untenable for the following reasons:
1) There is a potential for bug creation using *any* language
feature.

Yes, but one does try to limit the damage when possible.
2) Unsigned integral types are already embedded extensively in
both the C++ and C standard libraries and trying to fight
against this is both stupid and pointless.

That's a valid argument (albeit overstated). When I'm using
functions of the standard library which return an unsigned type,
I use an unsigned type.

That's not very often, however.
 

Kai-Uwe Bux

James said:
[...]
Alf eschews unsigned integral types because of the potential for bug
creation when mixing signed and unsigned integral types in an expression.

No. Alf eschews unsigned integral types when the semantics
require an integer or a cardinal (member of I or N), because the
semantics of unsigned integral types in C++ are not those of an
integer or a cardinal.

That raises an interesting question: suppose for a second, the arithmetic
operations for unsigned types _were_ defined such that overflow caused
undefined behavior just like with the signed types. Under those
circumstances, _would_ you say that they have the semantics of cardinals? I
have the inclination to say "yes"; and then I can continue this line of
thought as follows: well, if I need a cardinal, I can use unsigned types and
_regard_ overflow as undefined behavior. (In _other contexts_, where I don't
want a cardinal, I would use the unsigned types because of the modular
arithmetic they provide.)

I think, the problem with the arithmetic type system in C++ is that the
unsigned types are declared to be the "attracting types" in a mixed
expression. If it was the other way around, intuitive semantics would be
guaranteed as long as overflow does not happen. After all, the counting
numbers are a subset of the integers.
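
A short demonstration of that "attraction" (assuming a typical 32-bit
unsigned int; most compilers warn about the comparison, which is telling):

#include <iostream>

int main() {
    int      i = -1;
    unsigned u = 1;
    // The usual arithmetic conversions turn i into unsigned, so the
    // comparison is evaluated as 4294967295u < 1u.
    if (i < u)
        std::cout << "mathematically expected\n";
    else
        std::cout << "-1 < 1 is false here\n";   // this branch is taken
    return 0;
}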


Best

Kai-Uwe Bux
 

James Kanze

news:d5ee011b-d702-4077-a4be-165490de7a93@y11g2000yqm.googlegroups.com...
"Squeamizh" <[email protected]> wrote in message
[...]
Alf eschews unsigned integral types because of the
potential for bug creation when mixing signed and unsigned
integral types in an expression.
No. Alf eschews unsigned integral types when the semantics
require an integer or a cardinal (member of I or N), because
the semantics of unsigned integral types in C++ are not
those of an integer or a cardinal.
The semantics of unsigned integral types in C++ include being
a suitable type for representing an integral value whose value
is only ever positive or zero.

No they don't. They violate the rules of natural numbers for
several operations. For things like serial numbers and such,
this may not be a problem, but the question still remains: why?
The natural type for all integral values is int (dixit Kernighan
and Ritchie); the other types exist for cases where their
specific semantics are required.
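
Concretely, one of the operations in question (a small illustration, assuming
a 32-bit unsigned int):

#include <iostream>

int main() {
    unsigned a = 3, b = 5;
    // For natural numbers a - b is simply undefined when a < b;
    // for unsigned int the result is reduced modulo 2^32 instead.
    std::cout << a - b << '\n';        // prints 4294967294
    std::cout << (a - b > a) << '\n';  // prints 1: a - b > a, unlike in N
    return 0;
}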

What you're basically using them for is a subrange type. Which
of course only makes sense if their actual range corresponds to
the desired subrange. Which is rarely the case.

[...]
Why is your level of usage relevant?

There's no issue of "level". When I'm interfacing with
functions like std::vector<>::size(), I use size_t. Because
mixing signed and unsigned causes even more problems. But most
of the time, when I'm dealing with a standard container, I'm
using iterators, and not integral types. So the issue isn't
important, and I use int for the integers.
Are you a god or something which requires devoted followers?

You're the one arguing positions without giving any valid
reasons.
 

James Kanze

James Kanze said:
The semantics of unsigned integral types in C++ include
being a suitable type for representing an integral value
whose value is only ever positive or zero.
From the C standard:
"The range of nonnegative values of a signed integer type is a
subrange of the corresponding unsigned integer type, and the
representation of the same value in each type is the same.33)
A computation involving unsigned operands can never overflow,
because a result that cannot be represented by the resulting
unsigned integer type is reduced modulo the number that is one
greater than the largest value that can be represented by the
resulting type."
My understanding of the above is that "unsigned int" *is* a
suitable type for storing non-negative values (i.e. cardinal
numbers).

For storing. The problems involve the definitions of the
operations on unsigned types, which quite clearly don't obey the
normal laws of integral (nor natural) arithmetic.
 

James Kanze

James said:
"Squeamizh" <[email protected]> wrote in message
[...]
Alf eschews unsigned integral types because of the
potential for bug creation when mixing signed and unsigned
integral types in an expression.
No. Alf eschews unsigned integral types when the semantics
require an integer or a cardinal (member of I or N), because
the semantics of unsigned integral types in C++ are not
those of an integer or a cardinal.
That raises an interesting question: suppose for a second, the
arithmetic operations for unsigned types _were_ defined such
that overflow caused undefined behavior just like with the
signed types. Under those circumstances, _would_ you say that
they have the semantics of cardinals?

No. Theoretically, of course, an implementation is allowed to
provide defined reasonable behavior (a signal or a program
abort) for overflow of an int; we both know, however, that the
behavior in most implementations is just as bad as that required
for unsigned.

What would be necessary is that subtraction give the expected
results. That e.g. a > b ==> a - b > 0 (or that subtraction
isn't allowed). Or that, if a < b, a - b resulted in some sort
of error that you could catch.

Also necessary would be some sort of reasonable behavior in the
case of mixed arithmetic. The C standards committee discussed
this in depth---when C was being standardized, most
implementations were value preserving (i.e.: given
int i = -1;
assert(i < someUnsigned);
was guaranteed not to fail). This is not without problems if
someUnsigned is very large. The C committee was aware of these
problems, and decided to replace value preserving with signed
preserving semantics. Which, it turns out, are even worse, but
no one knew it then, because no one had any real experience with
them.

From an abstract point of view, the ideal would be a single
integral type (signed), with subranges; one could add a cardinal
type (a la Modula), but with a range restricted to a subset of
the integral type values, and subtraction resulting in the
signed type. And subranges actually verified at runtime. This
has significant runtime implications, however, and goes against
the philosophy of C for that reason. The original C had int and
char, with char mainly for characters. The other types got
added for special uses. That's the history: people who've
actually lived it (those who worked at the original Bell labs
under Kernighan) use int. As do most others who, like myself,
learned pre-standard C.
I have the inclination to say "yes"; and then I can continue
this line of thought as follows: well, if I need a cardinal, I
can use unsigned types and _regard_ overflow as undefined
behavior. (In _other contexts_, where I don't want a cardinal,
I would use the unsigned types because of the modular
arithmetic they provide.)
I think, the problem with the arithmetic type system in C++ is
that the unsigned types are declared to be the "attracting
types" in a mixed expression. If it was the other way around,
intuitive semantics would be guaranteed as long as overflow
does not happen. After all, the counting numbers are a subset
of the integers.

It would certainly help. As I said, that solution is not
without problems either.
 

Kai-Uwe Bux

James said:
James said:
[...]
Alf eschews unsigned integral types because of the
potential for bug creation when mixing signed and unsigned
integral types in an expression.
No. Alf eschews unsigned integral types when the semantics
require an integer or a cardinal (member of I or N), because
the semantics of unsigned integral types in C++ are not
those of an integer or a cardinal.
That raises an interesting question: suppose for a second, the
arithmetic operations for unsigned types _were_ defined such
that overflow caused undefined behavior just like with the
signed types. Under those circumstances, _would_ you say that
they have the semantics of cardinals?

No. Theoretically, of course, an implementation is allowed to
provide defined reasonable behavior (a signal or a program
abort) for overflow of an int; we both know, however, that the
behavior in most implementations is just as bad as that required
for unsigned.

What would be necessary is that subtraction give the expected
results. That e.g. a > b ==> a - b > 0 (or that subtraction
isn't allowed).

Hm: a > b ==> a-b > 0 already holds for unsigned types. More
generally, if a and b are unsigned int and the mathematical value of a-b can
be represented as an unsigned int, then this _is_ the value of a-b. No
surprises. Weird things only happen when a-b has a value that cannot be
represented as an unsigned int, i.e., if a<b.
Or that, if a < b, a - b resulted in some sort of error that you could
catch.

That is a little unfair: for int you don't require that over/underflow
triggers catchable erros, yet you would agree (I think) that they model the
integers. Why should unsigned int not model the cardinals unless
over/underflow is flagged?

The point of whether over/underflow is detectable is surely important. I
don't see, however, why it enters the consideration on whether an arithmetic
type has "the semantics of X", where X is the integers or the cardinals.
Arithmetic types can suck, yet model the integers. Likewise, arithmetic
types can suck and still model the cardinals.

Also necessary would be some sort of reasonable behavior in the
case of mixed arithmetic. The C standards committee discussed
this in depth---when C was being standardized, most
implementations were value preserving (i.e.: given
int i = -1;
assert(i < someUnsigned);
was guaranteed not to fail). This is not without problems if
someUnsigned is very large. The C committee was aware of these
problems, and decided to replace value preserving with signed
preserving semantics. Which, it turns out, are even worse, but
no one knew it then, because no one had any real experience with
them.
Agreed.

From an abstract point of view, the ideal would be a single
integral type (signed), with subranges; one could add a cardinal
type (a la Modula), but with a range restricted to a subset of
the integral type values, and subtraction resulting in the
signed type. And subranges actually verified at runtime. This
has significant runtime implications, however, and goes against
the philosophy of C for that reason. The original C had int and
char, with char mainly for characters. The other types got
added for special uses. That's the history: people who've
actually lived it (those who worked at the original Bell labs
under Kernighan) use int. As do most others who, like myself,
learned pre-standard C.

You are shifting the topic farther away from the question whether unsigned
int can be said to model the cardinals to the question which arithmetic type
system is better. I agree with you that the system of arithmetic types
provided by C++ is not optimal (to say the least). However, within the given
type system, the unsigned types model the cardinal numbers just as
accurately as the signed types model the integers (i.e., with
wicked/undefined behavior if the result of an operation cannot be
represented in the type of the operands).
It would certainly help. As I said, that solution is not
without problems either.

True.


Best

Kai-Uwe Bux
 

Kai-Uwe Bux

Leigh Johnston wrote:
[...]
I can sum it up thus: just as it is possible to write code using signed
integers whose results don't overflow, it is also possible to write code using
unsigned integers whose results don't require modulo truncation.

Well, the modulo truncation comes in very handy even if all you want is to
guard against overflow. You can do:

unsigned a, b;
...
if ( a+b < a ) { // a+b will overflow
}

or

if ( a-b > a ) { // a-b will underflow
}

Note: the value of the subexpression a+b or a-b can be saved for use in the
non-overflowing case.

The corresponding code for signed types is more involved.
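
For comparison, a sketch of what the pre-check for int would look like (the
test has to be done before the operation, because signed overflow is
undefined behavior rather than modular):

#include <cassert>
#include <limits>

bool add_would_overflow(int a, int b) {
    if (b > 0)
        return a > std::numeric_limits<int>::max() - b;
    return a < std::numeric_limits<int>::min() - b;  // b <= 0
}

int main() {
    assert(!add_would_overflow(1, 2));
    assert(add_would_overflow(std::numeric_limits<int>::max(), 1));
    return 0;
}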


Best

Kai-Uwe Bux
 

Alessandro [AkiRoss] Re

James Kanze said:
   [...]
Alf eschews unsigned integral types because of the
potential for bug creation when mixing signed and unsigned
integral types in an expression.
No.  Alf eschews unsigned integral types when the semantics
require an integer or a cardinal (member of I or N), because
the semantics of unsigned integral types in C++ are not
those of an integer or a cardinal.
The semantics of unsigned integral types in C++ include being
a suitable type for representing an integral value whose value
is only ever positive or zero.

No they don't.  They violate the rules of natural numbers for
several operations.  For things like serial numbers and such,
this may not be a problem, but the question still remains: why?
The natural type for all integral values is int (dixit Kernighan
and Ritchie); the other types exist for cases where their
specific semantics are required.

Hello, sorry for my intrusion, but the topic is interesting :)

I was thinking: unsigned logic works for natural numbers (N), not
integers (I).
For example, subtraction is defined in a special way for naturals
(if it is defined at all), and usually it is modulo subtraction (as I
think it is in C++). So, my first question: which rules do they violate?

Next, just a consideration: the size of an array is a natural number,
it can't be negative.
So, I think, it is semantically wrong to use integers when you need
naturals.

Sorry, I'm not an expert in the language, but this topic is interesting!

Cheers
~AkiRoss
 
