Accessing array elements via floating point formats.


George Neuner

Suppose we have a table of 100 values representing a sine wave, and we
have theta as a floating-point value.

By taking theta/2PI * 100 we can create an index into the sine table.
However, this calculation isn't inherently integral.
If we linearly interpolate between floor(theta/2PI * 100) and
ceil(theta/2PI * 100) we can get slightly more accurate results. If the
hardware does it automatically for us, we can get the results very
quickly.
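
A minimal C sketch of that table lookup with linear interpolation (the
table name and size are illustrative, the table is assumed to be filled
elsewhere, and theta is assumed non-negative):

#include <math.h>

#define PI         3.14159265358979323846
#define TABLE_SIZE 100

/* sine_table[i] is assumed to hold sin(2*PI * i / TABLE_SIZE),
   filled in elsewhere. */
extern double sine_table[TABLE_SIZE];

/* Approximate sin(theta) for theta >= 0 by linear interpolation
   between the two nearest table entries. */
double table_sin(double theta)
{
    double pos  = theta / (2.0 * PI) * TABLE_SIZE;  /* fractional index */
    int    lo   = (int)floor(pos) % TABLE_SIZE;     /* lower entry      */
    int    hi   = (lo + 1) % TABLE_SIZE;            /* upper entry      */
    double frac = pos - floor(pos);                 /* weight in [0,1)  */

    return sine_table[lo] * (1.0 - frac) + sine_table[hi] * frac;
}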

Yes, but you can also compute sine directly without using a table and
my point was that, in general, using a table is unnecessary if the
direct computation is fast enough. Trigonometric functions happen to
be one area where the direct computation is, in general, too slow.

Hardware interpolation within an interval could be very useful, but
there are just too many ways to perform interpolation, with good
reasons for each. I seriously doubt that putting any one method in
hardware will convince people using other methods to switch.

And would the hardware method handle extrapolation as well? If so,
how?

George
 

Skybuck Flying

Andy "Krazy" Glew said:
(1) IEEE Decimal Floating Point, 754-2008

(2) You said "7.1". You realize that ".1" is not an exactly
representable binary fraction?

You probably meant "7.125".

No, I meant 7.1.

The .1 means something totally different once it's between the brackets [].

It's treated as a modifier.

It's just a constant decimal notation... which is either stuffed into an
integer... or perhaps stuffed into the floating point itself somewhere,
if possible and if that doesn't upset the hardware.

Bye,
Skybuck.
 

Malcolm McLean

And would the hardware method handle extrapolation as well? If so,
how?
No, extrapolation is, in general, invalid, whilst interpolation is
generally valid.

(e.g. if I have a list of the number of aircraft in the US air force
between December 1941 and August 1945, one entry per month, then I can
make a reasonable guess at the mid-month numbers by taking the mean of
the figures for the adjacent months. However, I can't extrapolate
beyond August 1945; something happened then to the US air force and my
predictions will be wildly wrong.)
 

Skybuck Flying

Andy "Krazy" Glew said:
Andy "Krazy" Glew said:
On 12/13/2010 6:27 PM, Skybuck Flying wrote:
On 12/13/2010 7:50 AM, Skybuck Flying wrote:
Apparently somebody on Google completely misunderstood how the
fractional part would be used to access individual bits. He's not in my
Outlook Express folder so he's probably a troll.

Concerning the potential troll... I cannot find his posting anymore...
but it doesn't matter... at least I clarified a bit how I saw it ;)

The nice thing is it doesn't matter how the floating point format
works... because we human beings can design the language to fit
whatever we want... so we don't have to use the fractional part for
anything... and can give the source code notation a different meaning.

Bye,
Skybuck.



But, what about decimal versus binary floating point?

What about it?

Screw decimals... computers work with binary!

(1) IEEE Decimal Floating Point, 754-2008

(2) You said "7.1". You realize that ".1" is not an exactly
representable binary fraction?

You probably meant "7.125".

No, I meant 7.1.

The .1 means something totally different once it's between the brackets [].

It's treated as a modifier.

It's just a constant decimal notation... which is either stuffed into an
integer... or perhaps stuffed into the floating point itself somewhere,
if possible and if that doesn't upset the hardware.

Then you are not using an IEEE binary floating point representation for
your addresses. You are using something that looks like floating point
when typed, but is actually something else.
I.e. just a language syntax and notation.

No, the floating point is fed to the CPU, which will take care of it...
as well as the optional integer or whatever it may be.

Bye,
Skybuck.
 

Skybuck Flying

vArray[
HowManyCharactersDoYouNeedToTypeBeforeItGetsVeryAnnoyingAndCostlyToConsistentlyHaveToCastToIntHaveYouEverWrittenALotOfArrayCodeLikeIHaveInDelphiNoYouProbablyHaveNotBecauseCDoesNotHaveTheGreatArraySupportThatDelphiHasSoInOtherWordsYouHaveNoIdeaWhatSoEverWhatsItLikeToConstantlyHaveToRoundIndexes
] = YouGetItNow ?

As for your "combining bits of 1.5": that idea is just stupid. Why would
you even want to do that? Never... and it's confusing as well.

Since vArray[ 5.4 / 3.4 ] will normally not compile anyway, this
notation is not valid and therefore it's not a problem.

In my original idea I said to ignore the fraction... I think that's best
because newbies don't understand fractions that well... and the
fractions are probably not that useful anyway... You could try to write
weird code like:

vArray[ (1.0 + 1/2 + 1/4 + 1/8) ] = vSomething;

But wouldn't you much rather write something shorter like:

vArray[ 1.7 ] =

At least .7 is easy to remember, while 0.5 + 0.25 + 0.125 and so forth
is not.

Also there is nothing preventing the compiler from interpreting the above
code as:

vArray[ Float.Integer ] =

Two separate variables... one for array indexing, one for bit indexing.

If the bit indexing is a bad idea... fine then drop it.

But at least the float idea is nice, which was my original idea! ;) :p
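
A small C sketch of how a compiler might lower that notation, splitting
the written index into an integer element index and a first-decimal-digit
bit index (all names here are invented for illustration):

#include <math.h>
#include <stdint.h>

/* Hypothetical lowering of vArray[ 1.7 ] = ...: split the floating-point
   "index" into an integer element index and a bit index taken from the
   first decimal digit, as described above. */
void store_bit_with_float_index(uint32_t *vArray, double findex, int bit_value)
{
    double   int_part;
    double   frac    = modf(findex, &int_part);        /* 1.7 -> 1.0 and ~0.7 */
    size_t   element = (size_t)int_part;               /* array index: 1      */
    unsigned bit     = (unsigned)(frac * 10.0 + 0.5);  /* ".7" read as bit 7  */

    if (bit_value)
        vArray[element] |=  (1u << bit);
    else
        vArray[element] &= ~(1u << bit);
}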

Bye,
Skybuck =D
 

George Neuner

No, extrapolation is, in general, invalid, whilst interpolation is
generally valid.

(e.g. if I have a list of the number of aircraft in the US air force
between December 1941 and August 1945, one entry per month, then I can
make a reasonable guess at the mid-month numbers by taking the mean of
the figures for the adjacent months. However, I can't extrapolate
beyond August 1945; something happened then to the US air force and my
predictions will be wildly wrong.)

But, a robot needs to extrapolate future positions based on its
current course and speed. It must do this for a variety of reasons,
but chiefly to determine whether a probable future position will
intersect with an obstacle.
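
A minimal sketch of that kind of dead-reckoning extrapolation in C (the
2-D types and the numbers are purely illustrative):

#include <stdio.h>

/* Predict where the robot will be dt seconds from now, given its current
   position and velocity. Obstacle checking would use this prediction. */
typedef struct { double x, y; } vec2;

static vec2 extrapolate(vec2 pos, vec2 vel, double dt)
{
    vec2 future = { pos.x + vel.x * dt, pos.y + vel.y * dt };
    return future;
}

int main(void)
{
    vec2 pos   = { 0.0, 0.0 };                 /* metres                */
    vec2 vel   = { 0.5, 0.2 };                 /* metres per second     */
    vec2 ahead = extrapolate(pos, vel, 2.0);   /* position 2 s from now */

    printf("predicted position: (%.2f, %.2f)\n", ahead.x, ahead.y);
    return 0;
}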

George
 

Keith Thompson

Malcolm McLean said:
No, extrapolation is, in general, invalid, whilst interpolation is
generally valid.

(e.g. if I have a list of the number of aircraft in the US air force
between December 1941 and August 1945, one entry per month, then I can
make a reasonable guess at the mid-month numbers by taking the mean of
the figures for the adjacent months. However, I can't extrapolate
beyond August 1945; something happened then to the US air force and my
predictions will be wildly wrong.)

Sure, if you carefully choose the end points of the range to make
extrapolation invalid, you'll find that extrapolation doesn't work
very well.

Interpolation does tend to be more reliable than extrapolation,
simply because you've got (at least) two data points to start with,
but I don't think the difference is as great as you imply.

If you had data from, say, May 1957 to March 1983 (months randomly
chosen from the 20th century), extrapolation would probably be
reasonably accurate.
 

nmm1

Interpolation does tend to be more reliable than extrapolation,
simply because you've got (at least) two data points to start with,
but I don't think the difference is as great as you imply.

If you had data from, say, May 1957 to March 1983 (months randomly
chosen from the 20th century), extrapolation would probably be
reasonably accurate.

Not if it was anything to do with computers :)

There are mathematical reasons that interpolation is more reliable
than extrapolation, in addition to extrapolation being vulnerable
to changing conditions. When it comes to hopeless inaccuracy, yes,
the difference is immense - there is much less difference for mere
minor inaccuracies.


Regards,
Nick Maclaren.
 

George Neuner

Have you looked at what code is generated? Especially for the plain
cast to int?

Have you? I don't claim that VC2008 is a great compiler, but I
happen to have it handy. I compiled the following for x64 optimized
for speed:

#include <stdio.h>
#include <tchar.h>

typedef long long bignum;

int _tmain(int argc, _TCHAR* argv[])
{
double f;
bignum i;

f = 1.23456789e29;
i = (bignum) f;

printf( "%20f -> %lld\n", f, i );
return 0;
}

The code for the cast is:

cvttsd2si r8,xmm1


Simple enough. But make the following little changes:

typedef unsigned long long bignum;
and
printf( "%20f -> %llu\n", f, i );

and suddenly the "simple" cast becomes:

movsd xmm2,mmword ptr [__real@43e0000000000000 (13FF021C0h)]
xor eax,eax
comisd xmm1,xmm2
movapd xmm0,xmm1
jbe wmain+37h (13FF01037h)
subsd xmm0,xmm2
comisd xmm0,xmm2
jae wmain+37h (13FF01037h)
mov rcx,8000000000000000h
mov rax,rcx
wmain+37h:
cvttsd2si r8,xmm0
add r8,rax

Please explain why these features are "necessary". You seem to think
that casting to int is a difficult-to-accomplish task.

Seems to me like it is.

George
 

George Neuner

George said:
But having to call such routines all the time seems a bit
overheadish/excessive.

Have you looked at what code is generated? Especially for the plain
cast to int?

Have you? I don't claim that VC2008 is a great compiler, but I
happen to have it handy. I compiled the following for x64 optimized
for speed:

typedef long long bignum;

int _tmain(int argc, _TCHAR* argv[])
{
double f;
bignum i;

f = 1.23456789e29;
i = (bignum) f;

printf( "%20f -> %lld\n", f, i );
return 0;
}

The code for the cast is:

cvttsd2si r8,xmm1


Simple enough. But make the following little changes:

typedef unsigned long long bignum;
and
printf( "%20f -> %llu\n", f, i );

and suddenly the "simple" cast becomes:

movsd xmm2,mmword ptr [__real@43e0000000000000 (13FF021C0h)]

Loading some magic value, presumably +2^63, i.e. the first fp value that
won't fit in a signed 64-bit int.
xor eax,eax

OK, avoiding the REX.W prefix because a 32-bit operation will always
zero the top 32 bits?
comisd xmm1,xmm2

This just did a signed fp compare but sets the flags as if it was an
unsigned int CMP!
movapd xmm0,xmm1
jbe wmain+37h (13FF01037h)
Skip the correction if the top bit was clear
subsd xmm0,xmm2

Subtract 2^63
comisd xmm0,xmm2

Check again, are we in range?
jae wmain+37h (13FF01037h)

If not, the conversion will and should overflow
mov rcx,8000000000000000h
mov rax,rcx
wmain+37h:
cvttsd2si r8,xmm0
add r8,rax

That is actually quite nice code; except for the spurious MOVAPD copy
from xmm1 to xmm0, it is probably as fast as you can make it while
still handling all inputs, including out of range, correctly.

Terje
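
In C terms, the annotated sequence computes roughly the following (a
sketch for finite inputs below 2^64; the fixup for out-of-range values
in the real code is omitted):

#include <stdint.h>

/* Rough C equivalent of the double -> unsigned 64-bit conversion above:
   values below 2^63 use the signed conversion directly; larger values
   are rebased by 2^63, converted, and then have 2^63 added back. */
uint64_t double_to_u64(double f)
{
    const double two63 = 9223372036854775808.0;   /* 2^63 */

    if (f < two63)
        return (uint64_t)(int64_t)f;

    return (uint64_t)(int64_t)(f - two63) + 0x8000000000000000ULL;
}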

I agree that it is about as good as can be ... but it is a lot more
complex than the single instruction in the signed integer case.

George
 

Jens Thoms Toerring

In comp.lang.c George Neuner said:
On Tue, 14 Dec 2010 00:09:20 -0800 (PST), Malcolm McLean
But, a robot needs to extrapolate future positions based on its
current course and speed. It must do this for a variety of reasons,
but chiefly to determine whether a probable future position will
intersect with an obstacle.

I wouldn't subscribe to the sentence

It's all about whether you have a (more or less) reliable model of
what's happening. For example, guessing at what the Dow Jones
was in the middle of a year from what it was at the start and
the end of a year would probably not be much more reliable than
guessing what it will be six months after the last data points
you have...

Interpolation without any knowledge about what can happen in
between the two (or more) data points you use is as error prone
as extrapolating from them. Interpolating the value of tan() in
between from its values at 80 and 100 degrees is a nice example:
if you don't know that the tan() function goes "berserk" in that
interval you might be fooled into believing the result of such an
"interpolation" and be infinitely surprised. Interpolation works
rather well when it's safe to assume that what happens in between
is not too far off from behaving linearly with time (or whatever
parameter you're basing your interpolation on).
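
A tiny C illustration of that tan() trap, interpolating at 90 degrees
where the function actually has a pole:

#include <math.h>
#include <stdio.h>

/* Linearly interpolate tan() at 90 degrees from its values at 80 and 100
   degrees: the result looks small and plausible, while the real function
   blows up at 90 degrees. */
int main(void)
{
    double d2r  = acos(-1.0) / 180.0;      /* degrees to radians     */
    double t80  = tan( 80.0 * d2r);        /* about +5.67            */
    double t100 = tan(100.0 * d2r);        /* about -5.67            */
    double mid  = 0.5 * (t80 + t100);      /* "interpolated" tan(90) */

    printf("interpolated tan(90 deg) = %g\n", mid);   /* close to 0 */
    return 0;
}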

So if you don't have any information and no model that has been
tested carefully, neither inter- nor extrapolation is reliable.
But for a robot extrapolating future positions, you normally
have a rather well-tested model of the world it's interacting
with, so extrapolations not too far into the future may be quite
good. Of course, there might be problems with the model, there
might be inaccuracies in the data used for the extrapolation,
there might be computational errors, etc., so trusting the
extrapolation only so far, and instead measuring how reality
develops in relation to the internal representation and taking
this into account, is prudent ;-)

Regards, Jens
 

Andrew Reilly

probably as fast as you can make it while still handling all inputs,
including out of range, correctly.

I haven't thought about it at all, but wouldn't it be possible to do the
conversion without comparisons as the dp-float sum of two 32-bit values
and appropriate scaling? I figure there must be a catch to that
strategy, because I've only ever seen it done this way, with tests and
magic numbers.

Cheers,
 

nmm1

I wouldn't subscribe to the sentence


It's all about whether you have a (more or less) reliable model of
what's happening. For example, guessing at what the Dow Jones
was in the middle of a year from what it was at the start and
the end of a year would probably not be much more reliable than
guessing what it will be six months after the last data points
you have...

Er, no ....

Some of the mathematical reasons for that assertion (which, I agree,
is a slight overstatement) do not rely on a specific model. If you
actually do the checking, I think that you will find you are wrong
for Dow Jones, but that's another guess :) Certainly, what you
can get with extrapolation but almost never with interpolation is
a radical change of conditions - a real crash, for example, not the
minor glitches we have seen in recent years.


Regards,
Nick Maclaren.
 

Andrew Reilly

The input is already double, the output uint64_t, so there is no room
for magic scaling adds which leave the desired result in the lower 32
bits.

Oh, pardon the dumbness on my part: I was wondering about the opposite
conversion (uint64_t to double), rather than what was actually being
discussed. I guess that's because it's one I bumped into recently.
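
For that direction the two-halves trick does work without comparisons:
each 32-bit half converts exactly, and one scaled add combines them (a
sketch; under the default round-to-nearest mode the result should match
a direct conversion):

#include <stdint.h>

/* The "sum of two 32-bit halves with scaling" conversion for
   uint64_t -> double. Both halves are exactly representable, so the
   single add performs the only rounding step. */
double u64_to_double(uint64_t u)
{
    double hi = (double)(uint32_t)(u >> 32);   /* upper 32 bits, exact */
    double lo = (double)(uint32_t)u;           /* lower 32 bits, exact */

    return hi * 4294967296.0 + lo;             /* hi * 2^32 + lo       */
}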

Sorry for the distraction.

Cheers,
 
