negative powers of two


Richard Herring

Jerry Coffin said:
On Jul 31, 6:01 pm, Jerry Coffin <[email protected]> wrote:

[ ... ]
Not really. I don't know of any mainframe which uses base 2 for
its floating point: IBM uses base 16, and both Unisys
architectures use base 8; Unisys MCP also normalizes in a
somewhat strange way. The fact that the bases are powers of two
means that the operation can still be done with masking and
shifting, but it requires a bit more than a base 2
representation would.

You start by saying "not really", but from what I can see you then
go on to pretty much confirm what I said -- that while these machines
aren't binary, they still work in a power-of-two base, which means
that frexp/ldexp will probably be faster than pow() for all of them.
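
For concreteness, here is a minimal sketch in plain standard C++ of the
two calls being compared for negative powers of two; whether ldexp
actually beats pow is of course a property of the implementation, not
of the standard.

#include <cmath>
#include <cstdio>

int main()
{
    for (int n = 1; n <= 10; ++n) {
        double via_ldexp = std::ldexp(1.0, -n);  // scale 1.0 by 2^-n
        double via_pow   = std::pow(2.0, -n);    // general exponentiation
        std::printf("2^-%d: %.10g (ldexp)  %.10g (pow)\n",
                    n, via_ldexp, via_pow);
    }
}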

Depends on how much extra bit-twiddling is needed to normalize the
results. IIRC IBM floating-point instructions can only be applied to
data in dedicated FP registers, so some extra copying may be needed.
I have to admit I'm a bit surprised though -- I thought IBM's
mainframes used decimal floating point, not hexadecimal...

No, but they have a decimal fixed-point format for currency etc.
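
Going back to the hexadecimal case above, here is a small illustration
of the "masking and shifting" point: on a base-16 representation,
multiplying by 2^k splits into whole base-16 exponent steps plus a
residual mantissa shift, and that shift is what can force an extra
normalization step. The HexFloat layout below is invented purely for
illustration -- it is not IBM's actual format.

#include <cmath>
#include <cstdint>
#include <cstdio>

// Hypothetical, simplified layout: value = mantissa * 16^exp16, with a
// 56-bit (14 hex digit) mantissa field.
struct HexFloat {
    std::uint64_t mantissa;
    int exp16;
};

// Multiply h by 2^k: whole base-16 exponent steps are free, the residual
// 0..3-bit shift touches the mantissa, and overflowing the 56-bit field
// forces a (truncating) renormalization -- the extra bit-twiddling.
HexFloat scale_by_pow2(HexFloat h, int k)
{
    int q = k / 4, r = k % 4;       // k = 4*q + r
    if (r < 0) { r += 4; --q; }     // keep the residual shift in 0..3
    h.exp16 += q;
    h.mantissa <<= r;
    while (h.mantissa >> 56) {      // shifted past the mantissa field?
        h.mantissa >>= 4;           // drop a hex digit (may lose bits)
        ++h.exp16;
    }
    return h;
}

double to_double(const HexFloat& h) // only for checking the sketch
{
    return std::ldexp(static_cast<double>(h.mantissa), 4 * h.exp16);
}

int main()
{
    HexFloat one{1, 0};                    // 1 * 16^0 == 1.0
    HexFloat r = scale_by_pow2(one, -3);   // multiply by 2^-3
    std::printf("%g\n", to_double(r));     // prints 0.125
}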
 

Keith H Duggar

Jerry said:
Ah, I can finally see the source of the misunderstanding.

The algorithm I mentioned was for machines at one end of the spectrum
-- basically IEEE 754 or something very similar.

I also mentioned machines with "weird" representations, for which
frexp/ldexp could be quite slow -- quite possibly slower than pow,
which is the important point under the circumstances.

Not only did you mention "weird" (in your own words "drastically
different" or "strange") machines, but you also ignorantly claimed
they were "unusual", which we all take to mean *uncommon*.

Jerry said:
I did not say, or mean to imply, that there was no middle ground
between those extremes. I guess since I didn't mention the middle
ground, I can understand how somebody could infer that I didn't
intend for there to be any.

I'm sure many will accept that weak cop-out and let you slither
quietly away. Here is the simple proof that, even using your own
terms and admitted Jerry "facts" (let's just call them "jerry"),
you were wrong.

http://groups.google.com/group/comp.lang.c++/msg/f3f7d4df96ec0124
Jerry said:
OTOH, if the native floating point representation is drastically
different from that [base-2 maybe base-2^n], they could end up pretty
slow -- but machines with such strange floating point representations
are pretty unusual.

jerry-1) machines with "drastically different" or "strange" floating
point representations are "pretty unusual"

http://groups.google.com/group/comp.lang.c++/msg/5cd89bd1b73504be
Jerry said:
Yes, that's probably the most obvious anyway.

jerry-2) decimal floating-point representations are "strange"

http://groups.google.com/group/comp.lang.c++/msg/6bead24b1aeae51a
Jerry said:
I have to admit I'm a bit surprised though -- I thought IBM's
mainframes used decimal floating point, not hexadecimal...

jerry-3) thought IBM mainframes used decimal floating point

From those jerrys we can easily deduce

jerry-3 AND jerry-2 IMPLIES jerry-4) thought IBM mainframes used
"strange" floating point representation

jerry-4 AND jerry-1 IMPLIES jerry-5) thought IBM mainframes are
"pretty unusual".

And there you have it, a clear deductive demonstration from your
own jerrys (Jerry Facts) and words that you were ignorant of the
actual facts and were wrong. Sadly, the closest you can come to
publicly admitting this is to say that you were "surprised" and
that there was "a misunderstanding".

KHD
 
