I can't find any discussion of this question in this NG.
I'd like to implement some variable precision integer arithmetic in C,
and do it efficiently. A problem arises with the divide/remainder
operations. I'm using gcc on x86, but a lot of what I say applies to
other architectures.
First, I assume the div and ldiv functions are more efficient than
using / and % (obviously it depends on the compiler and level of
optimization, but one call yields both quotient and remainder instead of
two separate divisions).
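
For instance, something along these lines is what I have in mind (the
function and variable names are just for illustration):

#include <stdlib.h>

/* get quotient and remainder from a single div() call */
void split(int dividend, int divisor, int *q, int *r)
{
    div_t result = div(dividend, divisor);
    *q = result.quot;
    *r = result.rem;
}
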
However, div and ldiv require the length of the quotient, divisor,
dividend and remainder to all be the same. Now all the machines I've
ever worked with at the machine language level (quite a few of them)
implement integer division with a dividend that is twice as long as the
divisor, quotient and remainder. Moreover, this is what naturally
arises in multiple precision arithmetic: you get dividends twice as
long as everything else. So suppose I'm using 16-bit divisors,
quotients and remainders, but 32-bit dividends. If I were writing in
assembly language, I would use the instructions that divide 16 bits
into 32.
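
Just to make the shape of the operation concrete, the inner divide step
looks roughly like this in C (written with / and % here only to show the
operand sizes; int is 16 bits and long is 32 bits as on my machine, the
names are made up, and the high digit is assumed to be less than the
divisor so the quotient fits in 16 bits):

typedef unsigned int  digit;      /* 16-bit "digit", given 16-bit int */
typedef unsigned long twodigit;   /* 32-bit double-width value        */

/* divide the 32-bit value (high:low) by a 16-bit divisor;
   assumes high < divisor, so the quotient fits in 16 bits */
digit divide_step(digit high, digit low, digit divisor, digit *rem)
{
    twodigit dividend = ((twodigit)high << 16) | low;
    *rem = (digit)(dividend % divisor);
    return (digit)(dividend / divisor);
}
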
But when I write in C, I'm forced to use the ldiv function, since one
of the operands is 32 bits. (Let's say int=16-bit and long=32-bit, as
on my machine.) Then the compiler has to implement this ldiv call
using the instruction that divides 32 bits into 64 bits, since the
divisor is now 32 bits. In other words, the compiled code is forced to
use a divide instruction with operands twice as large as needed, due to
the design of the div and ldiv functions.
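
To make that concrete, the best I can do portably ends up looking
something like this (again the names are just illustrative):

#include <stdlib.h>

/* the 16-bit divisor is converted to long, so the division is
   carried out entirely at 32-bit width */
int divide_step_ldiv(long dividend, int divisor, int *rem)
{
    ldiv_t result = ldiv(dividend, divisor);
    *rem = (int)result.rem;
    return (int)result.quot;
}
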
Anybody know a way around this (apart from inserting assembly code in
my C program)?
Thanks in advance...