Long and Int


ritesh.noronha

Hi,

I am a beginner programmer, and I would like to know the exact difference
between long and int. Why do we have two names for the same size of
variable?

Does the size of long and int change on various architectures?

thanks,
carl
 

Vladimir S. Oka

> Hi,
>
> I am a beginner programmer, and I would like to know the exact difference
> between long and int. Why do we have two names for the same size of
> variable?

They are not necessarily the same size at all. In fact, it is only
guaranteed that the size of `long` is greater than or equal to that of
an `int`.

> Does the size of long and int change on various architectures?

It sometimes does, but that is not required either. It depends more on
the actual C implementation than on the underlying architecture.
There's nothing stopping an implementation from having 64-bit `long`s
(or `int`s, for that matter), even on a 6-bit machine.
 

Ben Pfaff

Vladimir S. Oka said:
> [...] it is only guaranteed that the size of `long` is greater
> than or equal to that of an `int`.

You mean "range", not "size". The range of int is a subrange of
the range of long. There are no such guarantees about the sizes
of int and long.
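
For illustration, a minimal sketch (assuming a hosted C99 compiler, since it
uses the %zu format) that reports the two things separately: sizeof gives the
storage size in bytes, while the <limits.h> macros give the ranges that the
guarantees are actually about.

  #include <stdio.h>
  #include <limits.h>

  int main(void)
  {
      /* Storage sizes, in bytes of CHAR_BIT bits each. */
      printf("sizeof(int)  = %zu, sizeof(long) = %zu\n",
             sizeof(int), sizeof(long));

      /* Ranges: int's range is always a subrange of long's. */
      printf("int  range: %d .. %d\n",   INT_MIN, INT_MAX);
      printf("long range: %ld .. %ld\n", LONG_MIN, LONG_MAX);
      return 0;
  }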
 

Eric Sosman

> Hi,
>
> I am a beginner programmer, and I would like to know the exact difference
> between long and int. Why do we have two names for the same size of
> variable?

Because they're not necessarily the same size. Here are
some schemes I've encountered (the `bits' notation is informal,
not actually part of C):

bits(short) == 16, bits(int) == 16, bits(long) == 32
bits(short) == 16, bits(int) == 32, bits(long) == 32
bits(short) == 16, bits(int) == 32, bits(long) == 64

As you can see, `int' could have the same width as `short',
or the same as `long', or be different from both.

> Does the size of long and int change on various architectures?

Yes. The C language guarantees certain minima and
certain relationships between the types:

bits(char) >= 8
bits(short) >= 16 && bits(short) >= bits(char)
bits(int) >= 16 && bits(int) >= bits(short)
bits(long) >= 32 && bits(long) >= bits(int)
bits(long long) >= 64 && bits(long long) >= bits(long)

(The `long long' type was introduced in C99. C99 also allows
implementations to define other types of their own choosing,
subject to various rules; I haven't attempted to enumerate
these "exotic" integers because there'll be different sets
on different implementations.)
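
As a rough check of the relationships above, here is a small sketch (not from
the original post) that prints sizeof(TYPE) * CHAR_BIT for each type; note
that this counts any padding bits too, so it can be larger than the number of
value bits the informal `bits' notation is meant to suggest.

  #include <stdio.h>
  #include <limits.h>

  /* Storage width in bits, padding (if any) included. */
  #define BITS(type) (sizeof(type) * CHAR_BIT)

  int main(void)
  {
      printf("bits(char)      = %zu\n", BITS(char));
      printf("bits(short)     = %zu\n", BITS(short));
      printf("bits(int)       = %zu\n", BITS(int));
      printf("bits(long)      = %zu\n", BITS(long));
      printf("bits(long long) = %zu\n", BITS(long long));  /* C99 */
      return 0;
  }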
 

Micah Cowan

> Hi,
>
> I am a beginner programmer, and I would like to know the exact difference
> between long and int. Why do we have two names for the same size of
> variable?

They are not necessarily the same. In fact, on many implementations,
they are not.

All that you can ever portably rely on is that a signed int must be
capable of representing integers in at least the range of -32767 to
+32767, and a signed long int must be capable of representing integers
in at least the range of -2147483647 to +2147483647.

Now, as it happens on some popular platforms, both signed int and
signed long have the same range: -2147483648 to +2147483647 (note the
-2147483648 at the bottom end, one more value than the guaranteed
minimum range requires).

On those platforms, it doesn't much matter which one you decide to
use. However, when you're coding portable apps, the decision process
(for me, at least) is usually:

If I don't expect to need to represent integers outside of
the range [-32767,32767], then
Using a short or an int should be fine.
Else, if the range [-2147483647,2147483647] is likely to suffice, then
Using a long int should be fine.
Else, if even the range [-2147483647,2147483647] may not be enough, then
Consider using a long long or intmax_t (defined in
<inttypes.h>), or maybe a more flexible, variably-sized integer
library (see the sketch just after this list).

Note that none of these options (with the possible exception of using
a variable-length int library) obviates the need for you to always
check to make sure that you don't over-/underflow the variable. Always
check for this: it will make your life much easier, in the long
run. In some extreme cases, it has been shown to literally save lives
(http://en.wikipedia.org/wiki/Therac-25).
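
As a concrete example of the kind of check meant here (a sketch only;
checked_add is a hypothetical helper, not a standard function), test the
operands against the limits before doing the arithmetic, because signed
overflow is undefined behaviour in C:

  #include <limits.h>
  #include <stdio.h>

  /* Store a + b in *sum and return 1, or return 0 if it would overflow. */
  static int checked_add(int a, int b, int *sum)
  {
      if ((b > 0 && a > INT_MAX - b) ||
          (b < 0 && a < INT_MIN - b))
          return 0;
      *sum = a + b;
      return 1;
  }

  int main(void)
  {
      int s;
      if (checked_add(INT_MAX, 1, &s))
          printf("sum = %d\n", s);
      else
          puts("addition would overflow; handled gracefully");
      return 0;
  }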

> Does the size of long and int change on various architectures?

YES.

HTH,
Micah
 

Flash Gordon

> Hi,
>
> I am a beginner programmer, and I would like to know the exact difference
> between long and int.

One has a larger minimum size than the other.
> Why do we have two names for the same size of
> variable?

Because they are not always the same size.

> Does the size of long and int change on various architectures?

Yes.
--
Flash Gordon
Living in interesting times.
Web site - http://home.flash-gordon.me.uk/
comp.lang.c posting guidelines and intro -
http://clc-wiki.net/wiki/Intro_to_clc
 

Jordan Abel

> Hi,
>
> I am a beginner programmer, and I would like to know the exact difference
> between long and int. Why do we have two names for the same size of
> variable?

They're not the same size. Any fule kno that short is 18 bits, int is 27
bits, and long is 36 bits.

> Does the size of long and int change on various architectures?

And the point I was getting at with that sarcastic remark is: yes. On
many 64-bit architectures, long is 64 bit while int is still 32. On some
architectures, int is only 16 bits. And of course there are other odd
architectures. What I described above is at least legal, if somewhat
less than plausible [though, 18 and 36 for short and long with int as
one of them does exist in the real world. char is 9 bits on such
systems.]
 

Vladimir S. Oka

Ben said:
> Vladimir S. Oka said:
>> [...] it is only guaranteed that the size of `long` is greater
>> than or equal to that of an `int`.
>
> You mean "range", not "size". The range of int is a subrange of
> the range of long. There are no such guarantees about the sizes
> of int and long.

Yes. I blindly went with the OP's wording. :-(
 

Michael Mair

Eric said:

> Yes. The C language guarantees certain minima and
> certain relationships between the types:
>
> bits(char) >= 8
> bits(short) >= 16 && bits(short) >= bits(char)
> bits(int) >= 16 && bits(int) >= bits(short)
> bits(long) >= 32 && bits(long) >= bits(int)
> bits(long long) >= 64 && bits(long long) >= bits(long)
>
> (The `long long' type was introduced in C99. C99 also allows
> implementations to define other types of their own choosing,
> subject to various rules; I haven't attempted to enumerate
> these "exotic" integers because there'll be different sets
> on different implementations.)

Hypothetical Question: AFAIR, we have only that
rank(int) > rank(short) which implies range(int) contains
range(short) as a subrange. Might the DS9000 or such ilk
have implemented short and int with bits(short) > bits(int)
-- with appropriate padding bits to fulfill the range
requirement?

Cheers
Michael
 

Keith Thompson

Michael Mair said:
> Hypothetical Question: AFAIR, we have only that
> rank(int) > rank(short) which implies range(int) contains
> range(short) as a subrange. Might the DS9000 or such ilk
> have implemented short and int with bits(short) > bits(int)
> -- with appropriate padding bits to fulfill the range
> requirement?

Yes.
 

slebetman

Ben said:
> Vladimir S. Oka said:
>> [...] it is only guaranteed that the size of `long` is greater
>> than or equal to that of an `int`.
>
> You mean "range", not "size". The range of int is a subrange of
> the range of long. There are no such guarantees about the sizes
> of int and long.

But there is such a thing as the sizeof int and long ;-)
 

Eric Sosman

Michael said:
> Hypothetical Question: AFAIR, we have only that
> rank(int) > rank(short) which implies range(int) contains
> range(short) as a subrange. Might the DS9000 or such ilk
> have implemented short and int with bits(short) > bits(int)
> -- with appropriate padding bits to fulfill the range
> requirement?

Yes, taking `bits(TYPE)' as `sizeof(TYPE) * CHAR_BIT'.
My goal in cooking up the informal `bits' notation was to
express the rules in terms of the O.P.'s notion of "size,"
but to keep the explanation concise. My intent was that
`bits(TYPE)' would include only the value bits and sign
bit; dragging padding bits into the discussion would, I
think, have served more to confuse the O.P. than to
enlighten him.

I'll grant that it's a little sloppy to treat the range
of an integer type as equivalent to its width. Still, bit-
counting (of non-padding bits) makes a useful shorthand: it's
easier to grasp "64 bits" than "-9223372036854775807 through
9223372036854775807." (Which expression is easier to check for
correctness?) Also, bit-counting makes obvious the fact that
the integers are binary so the range limits must be just short
of powers of two; INT_MAX cannot be one million. Speaking of
the bit-width of an integer type is a shorthand with many
conveniences, much like speaking of the "precedence" of
arithmetic operators.
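
A small sketch of this shorthand in the other direction (not from the
original post): under C99's binary model the number of value bits in int can
be recovered from INT_MAX itself, independently of sizeof and of any padding
bits.

  #include <stdio.h>
  #include <limits.h>

  int main(void)
  {
      int value_bits = 0;
      int m;

      /* In C99, INT_MAX == 2^N - 1 for N value bits, so shifting it
         down to zero counts N; the sign bit adds one more non-padding
         bit. */
      for (m = INT_MAX; m != 0; m >>= 1)
          value_bits++;

      printf("int: %d value bits + 1 sign bit\n", value_bits);
      return 0;
  }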
 

Jordan Abel

> I'll grant that it's a little sloppy to treat the range of an
> integer type as equivalent to its width. Still, bit-counting (of
> non-padding bits) makes a useful shorthand: it's easier to grasp "64
> bits" than "-9223372036854775807 through 9223372036854775807." (Which
> expression is easier to check for correctness?) Also, bit-counting
> makes obvious the fact that the integers are binary so the range
> limits must be just short of powers of two; INT_MAX cannot be one
> million.

That's not actually clear - I remember being told that it would be legal
for that to be the case [and binary representations from 1000001 to
1048575 would be trap representations]. It's possible that C99
tightened this up along with explicitly specifying the three ways signed
types could be represented.
 

Micah Cowan

Jordan Abel said:
>> I'll grant that it's a little sloppy to treat the range of an
>> integer type as equivalent to its width. Still, bit-counting (of
>> non-padding bits) makes a useful shorthand: it's easier to grasp "64
>> bits" than "-9223372036854775807 through 9223372036854775807." (Which
>> expression is easier to check for correctness?) Also, bit-counting
>> makes obvious the fact that the integers are binary so the range
>> limits must be just short of powers of two; INT_MAX cannot be one
>> million.
>
> That's not actually clear - I remember being told that it would be legal
> for that to be the case [and binary representations from 1000001 to
> 1048575 would be trap representations]. It's possible that C99
> tightened this up along with explicitly specifying the three ways signed
> types could be represented.

I think that's probable. Certainly, by the current Standard, I don't
see any room for a given bit to be only "sometimes" a value bit...

-Micah
 
