range vs precision

Chad

What is the difference between range and precision in C? Also, if C
doesn't support fixed-point numbers, how is the addition of two
integers possible?

Chad
 
osmium

Chad said:
What is the difference between range and precision in C?

Range refers to the ability to represent very large and very small
numbers in the same datum: a variable x of type double might at one
moment hold the distance between a proton and an electron, and at
another the distance from the sun to some distant galaxy. Floating
point numbers are approximations of the infinitely many points on the
real number line; precision has to do with how good that approximation
is.

See this for more:

http://en.wikipedia.org/wiki/Floating_point

You can find the key factoids about your particular installation by
examining the constants in <float.h>.

Chad said:
Also, if C
doesn't support fixed-point numbers, how is the addition of two
integers possible?

The binary point is taken to be immediately to the right of the least
significant binary digit (bit). The tools are there to improve on that if
that doesn't please you.
 
Eric Sosman

Chad said:
What is the difference between range and precision in C? Also, if C
doesn't support fixed-point numbers, how is the addition of two
integers possible?

The range of a type is the span of values it can
represent. An `int', for example, has a range of at
least -32767 through +32767, perhaps more.

The precision of a type is the granularity with
which it divides up its range. The `int' can represent
values to a precision of one unit, but cannot represent
half a unit or a tenth of a unit. For floating-point
types the granularity varies with the magnitude of the
value, so it is customary to state the precision in
relative rather than absolute terms: so-and-so many
digits rather than plus-or-minus so-and-so much.

(Don't confuse precision with accuracy. If I say that
the square root of two is 2.7182818284590452353602874713527
I have made a statement that is extremely precise but
wildly inaccurate.)

The addition of integers is possible because C knows
how to add integers. If that's not a satisfactory answer,
you'll need to explain your doubts more clearly: Why do
you think fixed-point arithmetic is involved in any way?
 
Chad

The addition of integers is possible because C knows
how to add integers. If that's not a satisfactory answer,
you'll need to explain your doubts more clearly: Why do
you think fixed-point arithmetic is involved in any way?

On another site, I had a question about converting floating point to
binary and back. Anyhow, during this lecture, I had someone tell me
that fixed-point and floating-point were two different things. This
was something I wasn't aware of. After some googling, I saw some
comments on other newsgroups that said C didn't support fixed-point.
Hence, the questions.

Chad
 
bert

Chad said:
On another site, I had a question about converting floating point
to binary and back. Anyhow, during this lecture, I had someone
tell me that fixed-point and floating-point were two different things.
This was something I wasn't aware of. After some googling, I saw
some comments on other newsgroups that said C didn't support
fixed-point. Hence, the questions.
Chad

I think fixed-point arithmetic can have more than one meaning. In
one sense, C supports fixed-point arithmetic, with the point fixed
at the least significant end of the machine word. But descriptions
of old algorithms suggest that there used to be another meaning,
more like scaled, where the programmer would decide that his
machine word represented say 20 integer bits and 12 fraction
bits. This just means a shift after each multiplication, but as a
concept it's quite different from integer or floating-point. I don't
know whether there was ever any support for the practice of this
concept, either in old hardware or in a programming language.
 
osmium

Chad said:
On another site, I had a question about converting floating point to
binary and back. Anyhow, during this lecture, I had someone tell me
that fixed-point and floating-point were two different things. This
was something I wasn't aware of. After some googling, I saw some
comments on other newsgroups that said C didn't support fixed-point.
Hence, the questions.

I remember that thread. The guy who kept that thread alive said this in
message 11:

--- clip----
Wow that Klien guy [Jack Klien] is so ignorant, I can't believe he is
employed.
lmfao, I used to do fixed point arithmetic on 16 bit architecture,
there was no way you could squeeze out the accuracy and performance
with BCD. I was not alone other people that wrote similar functions
did not try either. BCD was invented by ignorants.
---- clip----
The guy lost any credibility he had when he insulted Jack Klein like that.
That is not a typo or a premature send. That is a planned response.

If you want to pursue fixed point, take a look at this.

http://www.answers.com/fixed-point&r=67
 
John Smith

Eric said:
(Don't confuse precision with accuracy. If I say that
the square root of two is 2.7182818284590452353602874713527
I have made a statement that is extremely precise but
wildly inaccurate.)

Samuel Johnson, who wrote the famous dictionary, and who was
known for his careful use of language, once asked a woman her
age. She replied that she was 25 and a half. Dr. Johnson observed
that her answer was "very precise, but not very accurate."

JS
 
Andrew Poelstra

Ben said:
C doesn't have built-in support for fixed-point arithmetic,
except for integer arithmetic as a special case. But you can
implement fixed-point arithmetic in C; for example, the
instructions for one part of an operating system project that I
wrote explain how to do so:
http://www.stanford.edu/class/cs140/projects/pintos/pintos_8.html#Fixed-Point Real Arithmetic

The only time you would need fixed-point numbers is when you are
outputting. printf does support fixed point, I believe.

In other words, what type of decimal math the language uses is
irrelevant, as long as you can output in the format you want.
 
Ben Pfaff

Andrew Poelstra said:
The only time you would need fixed-point numbers is when you are
outputting.

How do you propose to do fixed-point multiplication and division
without some kind of fix-up?
printf does support fixed point, I believe.

I don't understand what you mean by that.
In other words, what type of decimal math the language uses is
irrelevant, as long as you can output in the format you want.

I think you're wrong.
 
Keith Thompson

bert said:
I think fixed-point arithmetic can have more than one meaning. In
one sense, C supports fixed-point arithmetic, with the point fixed
at the least significant end of the machine word. But descriptions
of old algorithms suggest that there used to be another meaning,
more like scaled, where the programmer would decide that his
machine word represented say 20 integer bits and 12 fraction
bits. This just means a shift after each multiplication, but as a
concept it's quite different from integer or floating-point. I don't
know whether there was ever any support for the practice of this
concept, either in old hardware or in a programming language.

A number of languages have direct support for fixed-point arithmetic.
(Ada is one of them; I think PL/I is another.) No particular hardware
support is needed; addition and subtraction work exactly like integer
addition and subtraction, and multiplication and division just require
a scaling operation. It's easily emulated in other languages;
building support into a language gives you numeric literals and a
convenient syntax for declaring fixed-point types.
 
Keith Thompson

Andrew Poelstra said:
The only time you would need fixed-point numbers is when you are
outputting. printf does support fixed point, I believe.

In other words, what type of decimal math the language uses is
irrelevant, as long as you can output in the format you want.

Incorrect. Fixed-point has arithmetic properties that are very
different from those of floating-point, properties that can be quite
useful in some circumstances. For example, fixed-point with a scale
factor of 100 lets you do precise calculations with money;
floating-point introduces rounding errors, because 0.01 can't be
represented exactly in binary. (Some monetary calculations, such as
interest calculations, might yield results that require better
precision than 0.01, but that's another topic.)
 
Thad Smith

In both cases the meaning is the same: the binary point is fixed for a
particular variable. Integer arithmetic is indeed fixed point. It is
limited in that the user cannot specify the location of the binary point.
Keith Thompson said:
A number of languages have direct support for fixed-point arithmetic.
(Ada is one of them; I think PL/I is another.) No particular hardware
support is needed; addition and subtraction work exactly like integer
addition and subtraction, and multiplication and division just require
a scaling operation.

This refers to language-supported fixed-point arithmetic in which the
location of the binary point can be specified. In the most general case,
such as I think PL/I supports, it can be specified independently for
each variable. If the operands of an addition or subtraction operator
have different scale factors, then scaling needs to be performed before
the operation. The result will need to be scaled afterwards if it is
stored in a variable with a different scale factor than was used for the
operation.

Fixed point arithmetic can be done in C, but the user must track the
scaling and often employ higher precision arithmetic for intermediate
results.

Some hardware, at least in the past, had both an integer multiply and a
fractional multiply, which assumes that the binary point is to the left
of the most significant bit. A fractional multiplication might not
retain all the bits of the product: a 64-bit fraction times a 64-bit
fraction might yield a 64-bit fractional result. A floating point
multiply unit typically has a fractional multiply for the mantissas.
 
Thad Smith

Keith said:
Incorrect. Fixed-point has arithmetic properties that are very
different from those of floating-point, properties that can be quite
useful in some circumstances. For example, fixed-point with a scale
factor of 100 lets you do precise calculations with money;
floating-point introduces rounding errors, because 0.01 can't be
represented exactly in binary.

Good point. As another example, low-end embedded processors (without
floating point, having short hardware-supported data types, and with
limited RAM) can benefit significantly from careful use of fixed-point
arithmetic.
 
jacob navia

P.J. Plauger said:
Well, it kinda does now. TR18037 (Embedded C) defines a well thought
out addition to C for fixed point arithmetic. It'll be included in
our next release, though only the EDG front end supports the language
part. It's intended primarily for DSPs, but it's generally useful.

After reading that proposal, I was struck by the fact that for quite a
long time we have been circling around the problem of defining SUBTYPES
for the language.

Consider this:

int a;

and

volatile int a;

The second is a SUBTYPE of integer, i.e. an integer with a special
characteristic, in this case that it is volatile.

In the same manner, fixed point could be understood as a subtype of
double/float, etc, where the implementation changes.

This is even more obvious when further down in the proposal there is the
definition of address spaces, and we have declarations like:

_X char a,b,c;

(page 38)

This declaration means that the characters a,b, and c are stored in an
address space named _X.

Recently, Microsoft decided to standardize the various __declspec(foo)
declarations that sprinkle the Windows headers, adopting a standard
annotation system for declaring variables. At the same time, gcc has
its __attribute__ construct, which does the same thing.

What is needed now, I think, is to realize that subtypes (and all those
ad hoc implementations) are a real NEED for the language, and that we
should try to address THIS problem rather than defining a new ad hoc
solution for each specific need. What is needed is a general syntax
that allows a compiler to ignore subtype specifications it does not
support, but would allow standardization of annotations so that they
would be compatible, in the same sense that

#pragma

is used.

jacob
 
Andrew Poelstra

Ben said:
How do you propose to do fixed-point multiplication and division
without some kind of fix-up?

By doing floating-point arithmetic and then chopping off any extra
decimals. Do you want fixed-point arithmetic in the sense of having
rounding errors?
I don't understand what you mean by that.
printf will allow you to specify how many decimal places to display.
I think you're wrong.

Thank you for the comment. If you elaborate perhaps I'll be able to
correct myself and become a smarter person.
 
Andrew Poelstra

osmium said:
Better informed, not smarter. LOL.

Exactly. The message immediately after yours answered my question,
proving that perhaps I should read ahead before posting.

I had a snappy comeback, but I forgot it, so I'll just wink. ;-)
 
Ben Pfaff

Andrew Poelstra said:
By doing floating-point arithmetic and then chopping off any extra
decimals. Do you want fixed-point arithmetic in the sense of having
rounding errors?

If you want fixed-point arithmetic, presumably it's because
floating-point arithmetic is not suitable. For example, in the
example I cited earlier, floating-point arithmetic was not
allowed at all.
 
Barry Schwarz

Chad said:
What is the difference between range and precision in C? Also, if C
doesn't support fixed-point numbers, how is the addition of two
integers possible?
I suspect your question is about floating point numbers, but for
completeness, range also applies to integers.

For integer types, the range is the set of values between the minimum
and maximum allowable values, inclusive. The standard provides a set
of macros so your code has access to these values (e.g., INT_MIN and
INT_MAX for int). Each of these values is represented exactly.

For floating point types, the range is the smallest and largest
allowable positive values, plus the corresponding negative values, plus
0. But not all values in this range can be represented.

Since different systems use different representations and
since binary is not intuitively obvious in this regard, let's consider
a system where a floating point value is represented by a signed two
decimal digit exponent (base 10) and a signed four decimal digit
mantissa with an implied decimal point before the first mantissa
digit. Using ^ for exponentiation like the math newsgroups do, the
largest value that can be represented is 9999*10^99 and the smallest
positive value is 0001*10^-99. The range, expressed in normal
scientific notation, is 1.0*10^-103 to 9.999*10^98.

If you attempt to multiply 1024 (represented as 1024*10^4) by
itself, instead of 1048576, you get 1048*10^7 (or perhaps 1049
depending on how rounding is performed). Similarly, you cannot
represent 1/3 exactly on this system. You have to settle for
3333*10^0. You also end up with non-arithmetic results such as
10,000+1 == 10,000. The system is limited to four digits of precision
even though the range spans 200 orders of magnitude.

When you consider the binary techniques actually used, the problem is
compounded since "simple" decimal values such as 10.1 cannot be
represented exactly. Again, the standard provides a set of macros so
your code can know what the limits are (e.g., DBL_DIG, decimal digits
of precision, and DBL_EPSILON, the smallest number which when added to
1.0 produces a result greater than 1.0).

You might look up the differences between precision, significance, and
accuracy in a math reference.


 
