128 bit integer


BGB / cr88192

Eric Sosman said:
See Question 18.15d on the comp.lang.c Frequently Asked
Questions (FAQ) page at <http://www.c-faq.com/>.

also (will add this much):
some compilers add 128 bit (or more) integer types as an extension feature,
but there is no real standardization here AFAIK.

it is also common to roll one's own large-integer support, such as by using a
struct of smaller integers and then applying strategies similar to
elementary-school arithmetic (carries, long multiplication) to make it behave
like a single larger value.

or such...
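
a rough sketch of that approach (type and helper names are made up; only
addition is shown):

#include <stdint.h>

/* hand-rolled unsigned 128-bit value built from two 64-bit halves */
typedef struct {
    uint64_t lo;
    uint64_t hi;
} u128;

static u128 u128_add(u128 a, u128 b)
{
    u128 r;
    r.lo = a.lo + b.lo;
    /* unsigned addition wraps, so a carry occurred iff r.lo < a.lo */
    r.hi = a.hi + b.hi + (r.lo < a.lo);
    return r;
}

multiplication and division follow the same schoolbook idea, just with more
code.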
 

Keith Thompson

BGB / cr88192 said:
also (will add this much):
some compilers add 128 bit (or more) integer types as an extension feature,
but there is no real standardization here AFAIK.
[...]

Well, there's some.

An implementation that provides 128-bit integer types will probably
define int128_t, int_least128_t, and int_fast128_t in <stdint.h>. All
these are optional, so even if long long is 128 bits an implementation
isn't *required* to define these typedefs (only the 8, 16, 32, and
64-bit "least" and "fast" typedefs are mandatory), but it would be silly
not to define them for 128 bits if the corresponding types exist.

And if any integer type is 128 bits or wider, intmax_t will be such a
type.

Repeat the above replacing "int" with "uint" for unsigned types.

But most current implementations, as far as I know, only support
integers up to 64 bits.
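
A quick compile-time check along those lines, relying on the rule that
<stdint.h> defines INT_LEAST128_MAX exactly when it provides int_least128_t
(a sketch; most implementations will take the #else branch today):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
#ifdef INT_LEAST128_MAX        /* present only if int_least128_t exists */
    int_least128_t x = INT64_MAX;
    x *= 4;                    /* would overflow any 64-bit type */
    printf("128-bit arithmetic available (x > INT64_MAX: %d)\n",
           (int)(x > INT64_MAX));
#else
    puts("this implementation provides no 128-bit integer types");
#endif
    return 0;
}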
 

BGB / cr88192

Keith Thompson said:
BGB / cr88192 said:
also (will add this much):
some compilers add 128 bit (or more) integer types as an extension feature,
but there is no real standardization here AFAIK.
[...]

Well, there's some.

An implementation that provides 128-bit integer types will probably
define int128_t, int_least128_t, and int_fast128_t in <stdint.h>. All
these are optional, so even if long long is 128 bits an implementation
isn't *required* to define these typedefs (only the 8, 16, 32, and
64-bit "least" and "fast" typedefs are mandatory), but it would be silly
not to define them for 128 bits if the corresponding types exist.

forgot about stdint.h, I suspect partly because MSVC lacks this header...

And if any integer type is 128 bits or wider, intmax_t will be such a
type.

Repeat the above replacing "int" with "uint" for unsigned types.

But most current implementations, as far as I know, only support
integers up to 64 bits.

yeah, this is true of both MSVC and GCC AFAIK (actually, IIRC GCC supports
128 bit ints for certain targets, but I am not sure).

both my compiler and Mr. Navia's compiler support 128 bit ints (mine uses
the type name "__int128" internally).

I think Open64 and a few others support them as well.

not really sure though, I haven't really looked into it much...
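
for what it's worth, gcc and clang predefine __SIZEOF_INT128__ when the
__int128 extension is available, so one can test for it; a sketch (not
standard C, and the helper name is made up):

#include <stdint.h>

#ifdef __SIZEOF_INT128__
typedef unsigned __int128 u128;

/* full 64x64 -> 128-bit product, no high bits lost */
static u128 widening_mul(uint64_t a, uint64_t b)
{
    return (u128)a * b;
}
#endif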
 

Andrey Tarasevich

Jorgen said:
I'm a bit curious: for what purpose do you need it? 64-bit integers
can hold values almost up to 19,000,000,000,000,000,000 which is ...
a lot.

I am not saying it would be useless, just that I cannot think of a use
for it.

The use for it is rather obvious. For example, if your program uses
64-bit integers, then every time you multiply these integers, you have
to perform that multiplication within a 128-bit type. You have a
rectangle whose coordinates are 64-bit integers? Well, then the
area of that rectangle is a 128-bit integer, whether you like it or not.

It also happens extremely often that 64-bit input has to produce a
64-bit result, but the intermediate evaluation has to go through
64x64-bit (i.e. 128-bit) multiplications.

The often-used practice in such situations is to use floating-point
types for the intermediate evaluations, which is really nothing more
than a "loser's way out" forced by the unavailability of integer types
of the required bit-width.
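
Without a 128-bit type, even getting the full 64x64-bit product portably
means splitting the operands into 32-bit halves by hand; a sketch of that
(the helper name is made up):

#include <stdint.h>

/* portable 64x64 -> 128-bit unsigned multiply via 32-bit halves;
   the high and low 64 bits of the product are returned separately */
static void mul_64x64_128(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo)
{
    uint64_t a_lo = (uint32_t)a, a_hi = a >> 32;
    uint64_t b_lo = (uint32_t)b, b_hi = b >> 32;

    uint64_t p0 = a_lo * b_lo;   /* bits  0..63  */
    uint64_t p1 = a_lo * b_hi;   /* bits 32..95  */
    uint64_t p2 = a_hi * b_lo;   /* bits 32..95  */
    uint64_t p3 = a_hi * b_hi;   /* bits 64..127 */

    /* carries out of the low 64 bits of the partial sums */
    uint64_t carry = ((p0 >> 32) + (uint32_t)p1 + (uint32_t)p2) >> 32;

    *lo = p0 + (p1 << 32) + (p2 << 32);
    *hi = p3 + (p1 >> 32) + (p2 >> 32) + carry;
}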
 

Keith Thompson

Seebs said:
You caught one of the obvious ones. The one that leapt out at me, of
course, was IPV6.

I'm not as familiar with IPV6 as I probably should be. What benefit
does using, say, uint128_t to store an IPV6 address give you over using,
say, unsigned char[16]? (Or wrap the array in a struct so you can do
assignment and equality comparison.)

In other words, how much sense does it make to do arithmetic on IPV6
addresses?

I'm not suggesting that the answer is "none".
 

Ersek, Laszlo

Keith Thompson said:
Seebs said:
You caught one of the obvious ones. The one that leapt out at me, of
course, was IPV6.

I'm not as familiar with IPV6 as I probably should be. What benefit
does using, say, uint128_t to store an IPV6 address give you over using,
say, unsigned char[16]? (Or wrap the array in a struct so you can do
assignment and equality comparison.)

In other words, how much sense does it make to do arithmetic on IPV6
addresses?

I'm not as familiar with IPv6 as I definitely should be, but I'll
extrapolate from IPv4.

http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing
http://en.wikipedia.org/wiki/Routing_table

Very superficially, you need fast bitand to see if an IP address matches a
route (an IP prefix): (ip_addr & route_netmask) == route_prefix. The route
with the longest matching prefix is taken. There are many tricks to speed
this up; for example, I just stumbled on

http://en.wikipedia.org/wiki/Luleå_algorithm

but that seems to require fast bitand and bitwise shifts too.
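
Spelled out in C (note the parentheses: == binds tighter than &), a minimal
IPv4-sized sketch with a made-up function name; the same shape applies to a
128-bit type where one exists:

#include <stdint.h>
#include <stdbool.h>

/* does ip_addr fall within the route (prefix, netmask)? */
static bool route_matches(uint32_t ip_addr,
                          uint32_t route_netmask,
                          uint32_t route_prefix)
{
    return (ip_addr & route_netmask) == route_prefix;
}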

lacos
 

Seebs

Keith Thompson said:
I'm not as familiar with IPV6 as I probably should be. What benefit
does using, say, uint128_t to store an IPV6 address give you over using,
say, unsigned char[16]? (Or wrap the array in a struct so you can do
assignment and equality comparison.)

In other words, how much sense does it make to do arithmetic on IPV6
addresses?

I'm not suggesting that the answer is "none".

I seem to recall that there are cases where it might be convenient to do
a bitwise mask on something, and "x & y" is simpler than iterating over an
array.
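
For a 16-byte address, the array version of that mask-and-compare ends up as
a loop; a sketch with made-up names:

#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* (x & y) == z, element by element, for a 16-byte address */
static bool masked_equal(const uint8_t x[16], const uint8_t y[16],
                         const uint8_t z[16])
{
    for (size_t i = 0; i < 16; i++)
        if ((x[i] & y[i]) != z[i])
            return false;
    return true;
}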

-s
 

Tim Rentsch

Keith Thompson said:
Seebs said:
You caught one of the obvious ones. The one that leapt out at me, of
course, was IPV6.

I'm not as familiar with IPV6 as I probably should be. What benefit
does using, say, uint128_t to store an IPV6 address give you over using,
say, unsigned char[16]? (Or wrap the array in a struct so you can do
assignment and equality comparison.)

In other words, how much sense does it make to do arithmetic on IPV6
addresses?

Some, but most of the benefit is probably just plain data movement.
IPv6 addresses divide nicely into two essentially independent 64-bit
pieces, so uint64_t[2] should provide nearly all of the benefit, as
far as arithmetic/logical operations go, of using uint128_t.
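
Something along these lines, as a sketch (struct and helper names are made
up):

#include <stdint.h>
#include <stdbool.h>

/* a 128-bit address carried as two 64-bit words */
typedef struct { uint64_t w[2]; } addr128;

/* (a & mask) == prefix, done word by word */
static bool match128(addr128 a, addr128 prefix, addr128 mask)
{
    return (a.w[0] & mask.w[0]) == prefix.w[0]
        && (a.w[1] & mask.w[1]) == prefix.w[1];
}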
 

Jorgen Grahn

Andrey Tarasevich said:
The use for it is rather obvious. For example, if your program uses
64-bit integers, then every time you multiply these integers, you have
to perform that multiplication within a 128-bit type. You have a
rectangle whose coordinates are 64-bit integers? Well, then the
area of that rectangle is a 128-bit integer, whether you like it or not.

But what would the purpose of that rectangle be? If it's a real shape,
and you measure with 1 um resolution, 32-bit coordinates let you describe
rectangles as large as roughly 4300 x 4300 m (and their areas still fit
in 64 bits). Possibly useful if you build nuclear plants or aircraft
carriers, but that's not something I do regularly.

(With 64-bit integers, the largest rectangle you can express is on the
order of a hundred astronomical units on a side, still with micrometer
precision.)
It also happens extremely often that 64-bit input has to produce a
64-bit result, but the intermediate evaluation has to go through
64x64-bit (i.e. 128-bit) multiplications.

I think "extremely often" is an extreme exaggeration, or that you work
in very different domains from me (the very ones I asked about, in
fact). I have never seen the need myself, and I've been programming
since the early 1990s.

/Jorgen
 

Jorgen Grahn

Seebs said:
Keith Thompson said:
I'm not as familiar with IPV6 as I probably should be. What benefit
does using, say, uint128_t to store an IPV6 address give you over using,
say, unsigned char[16]? (Or wrap the array in a struct so you can do
assignment and equality comparison.)

In other words, how much sense does it make to do arithmetic on IPV6
addresses?

I'm not suggesting that the answer is "none".

I seem to recall that there are cases where it might be convenient to do
a bitwise mask on something, and "x & y" is simpler than iterating over an
array.

But then you'd want uint256_t, uint512_t ... etc, when you hit the
128-bit limit, wouldn't you?

We have to stop *somewhere* (with the builtin C integer types, that is).

/Jorgen
 

Seebs

Jorgen said:
But then you'd want uint256_t, uint512_t ... etc, when you hit the
128-bit limit, wouldn't you?

The specific question was "IPv6", which is 128-bit. There's not yet an
IPv7. :)

-s
 

Bartc

(And the volume would be a 192-bit integer...)

Jorgen said:
(With 64-bit integers, the largest rectangle you can express is on the
order of a hundred astronomical units on a side, still with micrometer
precision.)

I think "extremely often" is an extreme exaggeration, or that you work
in very different domains from me (the very ones I asked about, in
fact). I have never seen the need myself, and I've been programming
since the early 1990s.

The requirements for integers tend to be 32 bits, 64 bits, or unlimited
(i.e. bigints).

I don't think there's an urgent need for a dedicated 128-bit integer that
supports all arithmetic ops, except that 64-bit processors make this trivial
to implement, so it might as well be used.
 

Andrey Tarasevich

Jorgen said:
But what would the purpose of that rectangle be? If it's a real shape,
and you measure with 1 um resolution, 32-bit coordinates let you describe
rectangles as large as roughly 4300 x 4300 m (and their areas still fit
in 64 bits). Possibly useful if you build nuclear plants or aircraft
carriers, but that's not something I do regularly.

Ha! _I_ actually use it regularly. And I work with semiconductor
microchips. Note: not aircraft carriers or nuclear plants, but
_microchips_. Where do such large values come from, you might ask? Well,
firstly, modern semiconductor technologies work with resolutions much
finer than 1 um. Secondly, to reduce the chance of serious
rounding problems in intermediate integral calculations with
non-rectilinear geometry we use "super-precision": the coordinates are
usually multiplied by some constant (like 2, 16, 32 etc., depending on
the angle variety in the input data). And yes, we have already grown out
of the range offered by 32-bit signed integer types. A 32-bit signed
integer is no longer enough to represent coordinates in microchip
geometry with sufficient "super-precision" (multiplier).
(With 64-bit integers, the largest rectangle you can express is on the
order of a hundred astronomical units on a side, still with micrometer
precision.)

Which is why there's hope that the 64-bit integer range should be enough
for everybody :) At least when it comes to linear dimensions.
I think "extremely often" is an extreme exaggeration, or that you work
in very different domains from me (the very ones I asked about, in
fact). I have never seen the need myself, and I've been programming
since the early 1990s.

Well, just for another example, when one is dealing with systems of
linear equations of small, pre-determined, fixed size (which, again,
often happens in geometrical applications) that have to be solved
precisely, often the best way to get to the solution is the well-known
Cramer's rule, especially if the system is sparse[-ish]. To solve a 4x4
system with 32-bit coefficients you generally need 128-bit integer
arithmetic. After switching to 64-bit coefficients, 256-bit integer
arithmetic becomes necessary. In this case, for obvious reasons, the
required "bitness" of the intermediate arithmetic grows even faster than
in the rectangle-area example.
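
The 2x2 case shows the pattern in miniature: even with 32-bit coefficients
the determinants already need a 64-bit intermediate. A sketch (names are
made up):

#include <stdint.h>
#include <stdbool.h>

/* Cramer's rule for  a*x + b*y = e,  c*x + d*y = f
   with 32-bit coefficients: x = x_num/den, y = y_num/den */
static bool solve2x2(int32_t a, int32_t b, int32_t c, int32_t d,
                     int32_t e, int32_t f,
                     int64_t *x_num, int64_t *y_num, int64_t *den)
{
    *den   = (int64_t)a * d - (int64_t)b * c;   /* fits: magnitude < 2^63 */
    *x_num = (int64_t)e * d - (int64_t)b * f;
    *y_num = (int64_t)a * f - (int64_t)e * c;
    return *den != 0;
}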
 

Dann Corbit

Andrey Tarasevich said:
Jorgen said:
But what would the purpose of that rectangle be? If it's a real shape,
and you measure with 1 um resolution, 32-bit coordinates let you describe
rectangles as large as roughly 4300 x 4300 m. [...]

Ha! _I_ actually use it regularly. And I work with semiconductor
microchips. [...] To solve a 4x4 system with 32-bit coefficients you
generally need 128-bit integer arithmetic. After switching to 64-bit
coefficients, 256-bit integer arithmetic becomes necessary. [...]

Other areas requiring high-precision integer mathematics:
1. Crypto (look at the openssl code)
2. Factoring (look at yafu)
3. Fractals (look at Fractint)
4. Number theory (see PARI/Gp)

Probably plenty besides these.
 
