size_t problems

CBFalconer

user923005 said:
.... snip ...

A size_t can describe the size of any object allowed by the C
language. An int cannot.

In the interests of accuracy, a size_t is guaranteed to describe
those sizes. An int is not guaranteed to have similar capability,
though on some systems it may.
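
A minimal sketch of the point (assuming a typical LP64 system where
int is 32 bits and size_t is 64 bits; printf's %zu is C99):

#include <stdio.h>

int main(void)
{
    size_t big = (size_t)3 * 1024 * 1024 * 1024;  /* a 3GB object size */
    int truncated = (int)big;  /* out of range: implementation-defined */

    printf("size_t holds %zu, int holds %d\n", big, truncated);
    return 0;
}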
 
CBFalconer

Richard said:
CBFalconer said:

No, it wasn't.


That's a good four orders of magnitude - almost five - away from
being a nautical mile.

I guess you never heard of mild exaggeration. :)
 
Ben Pfaff

Ed Jensen said:
And yet, other programming languages get by -- somehow -- by returning
an integer when asked for the length of a string.


And yet, other programming languages get by -- somehow -- even though
they don't even have unsigned integer types.

What programming languages are you thinking of here?
 
user923005

And yet, other programming languages get by -- somehow -- by returning
an integer when asked for the length of a string.

Can those same languages create objects with a size too large to be
held in an integer?
If 'yes', then those languages are defective. If 'no', then integer
is the correct return.
And yet, other programming languages get by -- somehow -- even though
they don't even have unsigned integer types.

I can create a language with a single type. Somehow, I think it will
be less effective than C for programming tasks.
I recognize and understand why the ranges of C types are defined the
way they're defined, but that doesn't minimize the pain when trying to
write 100% portable code.

The way to minimize the pain of writing 100% portable code is to write
it correctly, according to the language standard. For instance, that
would include using size_t for object sizes. Now, pre-ANSI C did not
have size_t. So that code will require effort to repair.
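
A small sketch of that idiom (hypothetical count_spaces helper, C90
style), where the index and the count share the type strlen() returns:

#include <stddef.h>

size_t count_spaces(const char *s)
{
    size_t i, count = 0;

    for (i = 0; s[i] != '\0'; i++)
        if (s[i] == ' ')
            count++;
    return count;
}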
 
Bart

Bart said:


Let's just say I'm suffering from imaginatrophy, shall we? The problem
here is not the negative value itself, but the comparison between a
value of a type that can store negative values and a value of a type
that cannot. These are fundamentally different concepts.

Possibly, but comparing two such values is still meaningful. Try
adding +6 to the numbers in my little example and both values must now
be positive and can be compared a little more easily. There's still a
question of overflow but that's a different problem.

Signed/unsigned numbers have different ranges. Why is it a big deal to
compare these two types of values? Is it because one type can store a
value that does not exist in the other? That's also a problem with
short and long ints. Anyway the solution can be simple, such as
converting the numbers into a type that accommodates both ranges.

Bart C
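
A minimal sketch of both the trap and the widening fix described above
(assuming 32-bit int and 64-bit long long):

#include <stdio.h>

int main(void)
{
    int i = -1;
    unsigned u = 1;

    /* The usual arithmetic conversions turn i into a huge unsigned
       value, so the comparison goes the "wrong" way. */
    if (i > u)
        printf("i > u (surprise!)\n");

    /* Converting both to a type that holds both ranges fixes it. */
    if ((long long)i < (long long)u)
        printf("i < u, as expected\n");

    return 0;
}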
 
Martin Wells

user923005:
An unsigned is a defective return from anything that describes the
size of an object.

I don't agree entirely. If:

a) Execution speed is of prime prime prime importance.
b) The value will never be greater than 65535.

, then I wouldn't call it "defective". However I'd use casts wherever
applicable in order to suppress compiler warnings.

Martin
 
Martin Wells

Ed Jensen:
I recognize and understand why the ranges of C types are defined the
way they're defined, but that doesn't minimize the pain when trying to
write 100% portable code.

It's largely painless, even fun, if you make it your religion
from the very start.

Martin
 
Keith Thompson

Martin Wells said:
user923005:

I don't agree entirely. If:

a) Execution speed is of prime prime prime importance.
b) The value will never be greater than 65535.

, then I wouldn't call it "defective". However I'd use casts wherever
applicable in order to suppress compiler warnings.

Using casts to suppress compiler warnings is widely considered to be a
bad idea. There can be cases, I suppose, where it's necessary, but
what you're really doing is saying to the compiler, "I know exactly
what I'm doing; if you think I'm wrong, please don't tell me". That's
fine if you're right; the problem is when you're wrong (it happens to
all of us), and you've gagged the compiler so it can't tell you.

A lot of newbie C programmers will write something like:

int *ptr = malloc(100 * sizeof(int));

and then change it to

int *ptr = (int*)malloc(100 * sizeof(int));

because the compiler warned them about a type mismatch on the
initialization. In fact, the compiler's warning was correct, and the
cast merely hides it. The programmer was making one of two mistakes:
forgetting the required '#include <stdlib.h>' (which makes the compiler
assume that malloc() returns int), or using a C++ compiler, which
doesn't allow this particular implicit conversion.

I'm not saying that you'd necessarily make this particular mistake.
But adding a cast to silence a warning should be thought of as a
fairly drastic step, and it should be very carefully considered.
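
The repair, of course, is the missing declaration rather than a cast;
a minimal sketch (make_array is a hypothetical example function):

#include <stdlib.h>   /* declares malloc() correctly */

int *make_array(void)
{
    int *ptr = malloc(100 * sizeof *ptr);  /* no cast needed in C */
    return ptr;  /* caller must check for NULL and eventually free() */
}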
 
Charlton Wilbur

BP> What programming languages are you thinking of here?

Perl, where one has to dig into internals to determine whether a
scalar variable is a number or a string and whether a number is an
integer or a floating-point value.

Scheme, where the important distinction is not signed or unsigned, but
exact or inexact.

Those are just two I can think of off the top of my head. I'm sure
there are more.

Charlton
 
Martin Wells

Keith Thompson:
I'm not saying that you'd necessarily make this particular mistake.
But adding a cast to silence a warning should be thought of as a
fairly drastic step, and it should be very carefully considered.

I know what you're saying and I agree with you: If you're gonna be
suppressing warnings through the use of casts then you'd better be
certain about what you're doing.

Of course though, in the hands of a competent programmer, the use of
casts to suppress warnings can be quite useful. I've done it quite a
few times, both in C and in C++. Here might be an example in fully
portable code:

char NumToDigit(unsigned const x)
{
    assert(x >= 0 && x <= 9);  /* Let's assume we'll never get bad input */

    return (char)('0' + x);   /* No warning, yippee! */
}

A note to those who don't fully understand integer promotion yet:

1: '0' is promoted to type "unsigned" before it's added to x.
2: The result of the addition is put into a char, but we shouldn't get
a truncation warning because we've explicitly used a cast.

Martin
 
Martin Wells

Keith Thompson:
A lot of newbie C programmers will write something like:

int *ptr = malloc(100 * sizeof(int));

and then change it to

int *ptr = (int*)malloc(100 * sizeof(int));

because the compiler warned them about a type mismatch on the
initialization. In fact, the compiler's warning was correct, and the
cast merely hides it. The programmer was making one of two mistakes:
forgetting the required '#include <stdlib.h>', making the compiler
assume that malloc() returns int, or using a C++ compiler which
doesn't allow this particular implicit conversion.

While I admire your sentiment as regards following the C89 Standard, I
still must condemn any compiler that allows the "implicit function
declaration" feature without the user having to explicitly request it.

compile a.cpp
--ERROR function declaration missing for "malloc".

compile a.cpp -i_want_implicit_function_declarations
--ERROR Type mismatch, "malloc" returns int.

Martin
 
Keith Thompson

Martin Wells said:
Keith Thompson:

While I admire your sentiment as regards following the C89 Standard, I
still must condemn any compiler that allows the "implicit function
declaration" feature without the user having to explicitly request it.

compile a.cpp
--ERROR function declaration missing for "malloc".

compile a.cpp -i_want_implicit_function_declarations
--ERROR Type mismatch, "malloc" returns int.

As do I -- but our condemnation of such compilers doesn't, alas,
prevent newbies from using them.
 
Keith Thompson

Martin Wells said:
Keith Thompson:

I know what you're saying and I agree with you: If you're gonna be
suppressing warnings through the use of casts then you'd better be
certain about what you're doing.

Of course though, in the hands of a competent programmer, the use of
casts to suppress warnings can be quite useful. I've done it quite a
few times, both in C and in C++. Here might be an example in fully
portable code:

char NumToDigit(unsigned const x)

I assume this was supposed to be 'unsigned int x'.
{
    assert(x >= 0 && x <= 9);  /* Let's assume we'll never get bad input */

    return (char)('0' + x);   /* No warning, yippee! */
}

Amusingly, the only warning I got on this was on the assert():

c.c:5: warning: comparison of unsigned expression >= 0 is always true
A note to those who don't fully understand integer promotion yet:

1: '0' is promoted to type "unsigned" before it's added to x.
2: The result of the addition is put into a char, but we shouldn't get
a truncation warning because we've explicitly used a cast.

I assume the compiler I used can be persuaded to issue such a warning.

Strictly speaking, though, the compiler is just as entitled to issue a
warning with the cast as without it. Most compilers choose not to do
so.
 
Martin Wells

Keith Thompson:
I assume this was supposed to be 'unsigned int x'.


No, I meant what I wrote. I'm curious as to why you would've thought
that...? Anyway, to explain why I wrote it that way:

1: I invariably use "unsigned" as an abbreviation of "unsigned int".
2: I pretty much use const wherever possible.

Amusingly, the only warning I got on this was on the assert():

c.c:5: warning: comparison of unsigned expression >= 0 is always true


Hehe, I should've copped that Z-)

I assume the compiler I used can be persuaded to issue such a warning.

Strictly speaking, though, the compiler is just as entitled to issue a
warning with the cast as without it. Most compilers choose not to do
so.


I think C++ has something called "implicit_cast" which is used
specifically for telling the compiler to stay quiet, but in C I think
the most common and reliable way is to use a plain ol' vanilla cast.
But yes, you'd be right to say that a compiler can warn about whatever
it wants to warn about.

With the whole "cast to suppress warning" thing, we're relying more on
industry common practice than anything inherent in the C language or
its standard.

Still though, I advocate its usage.

Martin
 
Martin Wells

Bill C:
So why not replace all the strlen() calls with your own function (maybe
call it i_strlen(), or somesuch name) that returns an int?


Not a bad band-aid at all...

...assuming we don't want to actually clean out the wound and
disinfect it.

Really though that's a good idea if "fixing the code" were out of the
question.

Martin
 
spacecriter (Bill C)

jacob said:
I am trying to compile as much code in 64 bit mode as
possible to test the 64 bit version of lcc-win.

The problem appears now that size_t is 64 bits.

Fine. It has to be since there are objects that are more than 4GB
long.

The problem is, when you have in thousands of places

int s;

// ...
s = strlen(str) ;

Since strlen returns a size_t, we have a 64 bit result being
assigned to a 32 bit int.

This can be correct, and in 99.9999999999999999999999999%
of the cases the string will be smaller than 2GB...

Now the problem:

Since I warn each time a narrowing conversion is done (since
that could lose data) I end up with hundreds of warnings each time
a construct like int a = strlen(...) appears. This clutters
everything, and important warnings get lost.


I do not know how to get out of this problem. Maybe one of you has
a good idea? How do you solve this when porting to 64 bits?

jacob

I assume that you don't want to redefine s as a size_t because it may be
used elsewhere as an int, and you would rather not track down everywhere it
may be used.

So why not replace all the strlen() calls with your own function (maybe
call it i_strlen(), or somesuch name) that returns an int?
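
A sketch of such a wrapper (hypothetical i_strlen, with a guard for the
rare string whose length really doesn't fit in an int):

#include <assert.h>
#include <limits.h>
#include <string.h>

int i_strlen(const char *s)
{
    size_t len = strlen(s);

    assert(len <= (size_t)INT_MAX);  /* catch the pathological case */
    return (int)len;
}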
 
Keith Thompson

Martin Wells said:
Keith Thompson:

No, I meant what I wrote. I'm curious as to why you would've thought
that...? Anyway, to explain why I wrote it that way:

1: I invariably use "unsigned" as an abbreviation of "unsigned int".
2: I pretty much use const wherever possible.

That was partly a failure on my part to understand what you wrote.
The "unsigned const x" threw me off enough that I momentarily forgot
that "unsigned" is synomymous with "unsigned int". (I probably would
have written "const unsigned int" myself.)

"const" in a parameter declaration doesn't do anything useful for the
caller, since (as I'm sure you know) a function can't modify an
argument anyway. It does prevent the function from (directly)
modifying its own parameter (a local object), but that's of no concern
to the caller.

It would make more sense to be able to specify "const" in the
*definition* of a function but not in the *declaration*. And gcc
seems to allow this:

int foo(int x);

int main(void)
{
    return foo(0);
}

int foo(const int x)
{
    return x;
}

but I'm not sure whether it's actually legal. In any case, it's not a
style that seems to be common.

I'm sympathetic to the idea of using const whenever possible.
<OT>If I ever design my own language, declared objects will be
constant (i.e., read-only) by default; if you want to be able to
modify an object, you'll need an extra keyword ('var'?) on the
declaration.</OT>

[...]
With the whole "cast to suppress warning" thing, we're relying more on
industry common practice than anything inherent in the C language or
its standard.

Still though, I advocate its usage.

Fair enough. I don't.
 
Ian Collins

Martin said:
Keith Thompson:


I know what you're saying and I agree with you: If you're gonna be
suppressing warnings through the use of casts then you'd better be
certain about what you're doing.

Of course though, in the hands of a competent programmer, the use of
casts to suppress warnings can be quite useful. I've done it quite a
few times, both in C and in C++. Here might be an example in fully
portable code:
If you use casts frequently in C, you are doing something wrong.

If you use naked casts at all in C++, you are doing something very wrong.

In my shops we always have a rule that all casts require a comment, a
good way to make developers think twice before using them.
char NumToDigit(unsigned const x)
{
    assert(x >= 0 && x <= 9);  /* Let's assume we'll never get bad input */

    return (char)('0' + x);   /* No warning, yippee! */
}
I can't find a compiler that issues a warning without the cast; just
out of interest, which one does?
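
In the spirit of that shop rule, a small sketch of a commented cast
(fraction is a hypothetical example; assumes whole is nonzero):

#include <stddef.h>

double fraction(size_t part, size_t whole)
{
    /* cast: integer division would discard the fractional part */
    return (double)part / (double)whole;
}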
 
Ben Pfaff

Martin Wells said:
While I admire your sentiment as regards following the C89 Standard, I
still must condemn any compiler that allows the "implicit function
declaration" feature without the user having to explicitly request it.

Implicit function declarations are part of C89. A compiler that
rejects programs that use this feature is not an implementation
of C89.
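
A deliberately broken sketch of what that C89 feature permits (do not
imitate):

/* No #include <stdlib.h>: under C89, malloc and free are implicitly
   declared as returning int, and the cast hides the mismatch. */
int main(void)
{
    char *p = (char *)malloc(10);  /* undefined behavior if int and
                                      char * differ in representation */
    if (p)
        free(p);
    return 0;
}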
 
CBFalconer

Keith said:
.... snip ...

"const" in a parameter declaration doesn't do anything useful for
the caller, since (as I'm sure you know) a function can't modify
an argument anyway. It does prevent the function from (directly)
modifying its own parameter (a local object), but that's of no
concern to the caller.

It does if you are passing a pointer to a const item. That way you
can protect the parameter and avoid copying large objects. Such
as, but not limited to, strings.
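
A sketch of that pattern (hypothetical my_strlen, taking a pointer to
const so the caller's string is both protected and not copied):

#include <stddef.h>

size_t my_strlen(const char *s)
{
    const char *p = s;

    while (*p != '\0')
        p++;
    return (size_t)(p - s);
}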
 
