What is a good way to do this (send a floating-point value
from one machine to another) without losing precision (and
ideally without increasing bandwidth)? I've run into this
issue frequently.
There is no absolute answer. To begin with, if the source
machine has greater precision than the target machine, it is
impossible, since the target machine won't be able to represent
the value with greater precision than it has. If the reverse is
true, it's also an interesting question: if I send the result
of 1.0/3.0, what the target machine receives won't be the
result of computing 1.0/3.0 at its own, higher precision.
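A quick illustration of that second point (just a sketch,
nothing protocol-specific):

    #include <iostream>
    #include <iomanip>

    int main()
    {
        // The nearest float to 1.0/3.0 is not the nearest double to
        // 1.0/3.0; the two differ around the 8th significant decimal
        // digit.
        float  f = 1.0f / 3.0f;
        double d = 1.0  / 3.0;
        std::cout << std::setprecision(17)
                  << "float : " << static_cast<double>(f) << '\n'
                  << "double: " << d << '\n';
        return 0;
    }

So even if the double machine receives the float value exactly,
it still doesn't have the result of 1.0/3.0 as it would have
computed it itself.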
In general, the protocol will specify a format, with whatever
precision it supports. If the source machine has greater
precision, it will round, and if the target machine has greater
precision, some possible values will never be sent. The one
exception I know of is BER encoding, in which the source machine
sends the value in its native encoding, with information about
the encoding. This means that there is never a loss of
precision when communicating between machines with the same
representation, but it also means a lot of extra complexity. An
awful lot. (In addition to sending the sign, the exponent and
the mantissa as separate fields, it also has to send information
as to how to interpret those fields.)
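To give an idea of what that decomposition looks like (this is
only a rough sketch, not actual BER; the field widths and the
PortableDouble name are my own assumptions):

    #include <cmath>
    #include <cstdint>

    // Sign, exponent and mantissa as separate integer fields, which
    // the receiver can rebuild with ldexp.  Handles normal values and
    // zero; infinities and NaNs would need additional flags.
    struct PortableDouble
    {
        std::int8_t   sign;       // +1 or -1
        std::int32_t  exponent;   // power of two applied to the mantissa
        std::uint64_t mantissa;   // fraction scaled up to an integer
    };

    PortableDouble encode(double value)
    {
        PortableDouble p;
        p.sign = value < 0.0 ? -1 : 1;
        int exp;
        double frac = std::frexp(std::fabs(value), &exp);
                                  // |value| = frac * 2^exp, frac in [0.5, 1)
        p.mantissa = static_cast<std::uint64_t>(std::ldexp(frac, 53));
                                  // exact for a mantissa of 53 bits or less
        p.exponent = exp - 53;
        return p;
    }

    double decode(PortableDouble const& p)
    {
        return p.sign * std::ldexp(static_cast<double>(p.mantissa), p.exponent);
    }

The integer fields can then be written in whatever byte order
the protocol specifies; the round trip is exact when both
machines use IEEE 754 doubles.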
At the bare minimum, do you know of a way to -check- what
format a "float" is in so that I could at least make the
application say "not implemented for this machine, please
contact the author" instead of silently screwing up?
There are a number of useful values in either <cfloat> or
<limits>. Those in <cfloat> are macros which evaluate to
constant expressions, which when integral can be used in the
preprocessor, e.g. for conditional compilation. (The ones that
would interest you here are FLT_RADIX, which applies to all three
types, plus xxx_MANT_DIG, xxx_MIN_EXP and xxx_MAX_EXP, where xxx
is one of FLT, DBL or LDBL.)
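For the "not implemented for this machine" diagnostic, something
along these lines would do (a minimal sketch, assuming the format
your protocol settles on is IEEE 754 double: radix 2, 53-bit
mantissa, exponent range [-1021, 1024]):

    #include <cfloat>
    #include <limits>

    // Compile-time check using the <cfloat> macros; the values tested
    // here are those of IEEE 754 double precision.
    #if FLT_RADIX != 2 || DBL_MANT_DIG != 53 \
        || DBL_MIN_EXP != -1021 || DBL_MAX_EXP != 1024
    #   error "not implemented for this machine, please contact the author"
    #endif

    // Or, with <limits> (C++11 and later):
    static_assert(std::numeric_limits<double>::is_iec559,
                  "not implemented for this machine, please contact the author");

If the check passes, the representations match and you can ship
the bytes (modulo byte order); if it doesn't, the user is told up
front instead of the application silently screwing up.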