Hi,
Is anyone familiar with the floating point arithmetic that C uses? I'm
asking as I am trying to emulate this in C# and am having some
difficulty.
You really haven't asked a question to which any reasonable response can
be given. There is no way to guess what difficulty you might be having,
and of course this is not the place to ask about how to do _anything_ in
a language other than C, especially not a proprietary language like C#.
Of course someone is familiar with floating point arithmetic in C. It
could hardly be otherwise, since most of us use floating point
arithmetic. C, of course, does not have hard-and-fast rules about how
floating point arithmetic is done. It could hardly be otherwise, since
not only do different computers and different FPUs have different data
representations and different hardware instructions, but different
software implementers may use them in different ways. For example,
Microsoft seems never to have found a way to use long double to provide
either more precision or greater range than double, even though their
principal hardware would support it.
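You can see what your own implementation does in a couple of lines of C.
A sketch (the macro names are standard; the output, of course, is not,
and on Microsoft's compilers the two lines have historically come out
identical):

#include <float.h>
#include <stdio.h>

int main(void)
{
    printf("double:      %d mantissa digits, max %g\n",
           DBL_MANT_DIG, DBL_MAX);
    printf("long double: %d mantissa digits, max %Lg\n",
           LDBL_MANT_DIG, LDBL_MAX);
    return 0;
}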
C places limits on the observable behavior of programs that use floating
point, rather than decreeing a particular implementation as Microsoft
can with its proprietary language. Some of those limits are minimum
requirements to be met, and any implementation is free to do better.
Implementations describe their own floating point limits in <float.h>,
including the radix (base), the rounding behavior, whether arithmetic is
done in the type the expressions require or in a higher-ranked type
(double or long double), the difference between 1.0 and the next larger
value (for each type), the minimum number of significant decimal digits,
the number of mantissa digits in the base the implementation uses, how
many decimal digits it takes to represent the largest supported value,
the smallest and largest values for each type, and the smallest and
largest exponents for each type, both in the implementation's base and
in base 10. All of these, as long as they meet the standard's minimum
requirements, are subject to the implementer's decisions.
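The quickest way to see what a given implementation decided is to dump
those <float.h> values. A sketch for double (every name below is
standard C99; the values printed are whatever your implementation
defines):

#include <float.h>
#include <stdio.h>

int main(void)
{
    printf("FLT_RADIX       = %d\n", FLT_RADIX);       /* the base */
    printf("FLT_ROUNDS      = %d\n", FLT_ROUNDS);      /* rounding mode */
    printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD); /* evaluation type */
    printf("DBL_EPSILON     = %g\n", DBL_EPSILON);  /* gap above 1.0 */
    printf("DBL_DIG         = %d\n", DBL_DIG);      /* decimal digits */
    printf("DBL_MANT_DIG    = %d\n", DBL_MANT_DIG); /* mantissa digits */
    printf("DBL_MIN         = %g\n", DBL_MIN);
    printf("DBL_MAX         = %g\n", DBL_MAX);
    printf("DBL_MIN_EXP     = %d, DBL_MAX_EXP    = %d\n",
           DBL_MIN_EXP, DBL_MAX_EXP);
    printf("DBL_MIN_10_EXP  = %d, DBL_MAX_10_EXP = %d\n",
           DBL_MIN_10_EXP, DBL_MAX_10_EXP);
    return 0;
}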
That having been said, many implementations choose to use the
floating-point standard in
IEC 60559:1989, Binary floating-point arithmetic for microprocessor
systems, second edition.
The earlier designations for this standard were
IEC 559:1989 and ANSI/IEEE 754-1985, IEEE Standard for Binary
Floating-Point Arithmetic.
A companion standard is
ANSI/IEEE 854-1987, IEEE Standard for Radix-Independent Floating-Point
Arithmetic.
If, as implied by your question, C# does not use that standard, that is
a shocking situation.
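You need not guess: a C99 implementation that conforms to IEC 60559
predefines the macro __STDC_IEC_559__, so a program can check for
itself:

#include <stdio.h>

int main(void)
{
#ifdef __STDC_IEC_559__
    puts("IEC 60559 conformance claimed");
#else
    puts("no IEC 60559 conformance claimed");
#endif
    return 0;
}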
The standard header <fenv.h> will be helpful to you.
The #pragma STDC FENV_ACCESS (settable by the program) indicates
whether a C program will deal with control modes and status bits. The
macro FE_DFL_ENV specifies the default floating-point environment. The
header also contains macros for accessing the status flags for each
supported FP exception and more macros concerning rounding behavior; the
related #pragma STDC FP_CONTRACT (specified with <math.h>) controls
whether FP expressions can be optimized (contracted) to take advantage
of fast FP operations. And there are prototypes there for functions to
make use of all this, which you can find in your C reference manual by
checking functions whose names begin with "fe".
And Microsoft is the one to ask about your problem with C#. We really
don't discuss other languages here, especially not proprietary ones
(apart from one developer who keeps hyping his own product, which
happens to be a very good one, even though it is off-topic here). The
people who can tell you how to do things in C# -- although I can't
imagine why anyone would want to -- hang out in C# newsgroups or
subscribe to C# mailing lists.