Rob Williscroft
dwrayment wrote:
Well, this is the *real* problem: all that bit-twiddling, and you've gone
and masked off the sign bit.
Let's start by working with some correct code (setting aside that a * b
can overflow for large operands):
bool thing( int a, int b )
{
return (a * b) > 0;
}
We can write a truth table (N is any value < 0, P is any value > 0):
a, b, return
------------
0, 0, false
0, P, false
0, N, false
P, 0, false
P, N, false
P, P, true
N, 0, false
N, P, false
N, N, true
Assuming a 32-bit two's-complement machine with an arithmetic right
shift, that is:
bool thing( int a, int b )
{
return (a && b) && ((a >> 31) == (b >> 31));
}
or (if you prefer):
bool thing( int a, int b )
{
return
(a && b)
&&
0 == ( ( unsigned(a) ^ unsigned(b) ) & 0x80000000U )
;
}
Assuming (again) that either of the above is faster than (a * b) > 0,
why wouldn't an optimizing C++ compiler make the change itself?
The reason others have objected to your assertion that "bit-twiddling
is faster" is that not only can the compiler do the job for you, so
can modern CPUs, and when the CPU does it you get a double win:
the generated code is shorter too.
Rob.
OK, last post for me; if some of you don't like it, there's nothing more I
can say.
First off, I'm not trying to reinvent the multiply op by simulating it
with adds and bit shifts. The question posed was how we can compare the
signs of numbers.
Solution 1 was a*b .... I posed a different solution, simply !a xor
b >> 31, and now that I think about it I'd rather do
a xor b & 0x70000000 == 0.
Rob Williscroft wrote:
Well, this is the *real* problem: all that bit-twiddling, and you've gone
and masked off the sign bit.
Here multiplying is inefficient, because doing a simple xor and
bitwise-and is less work than doing a multiply (on even the best of the
best of machines).