calculating the area of a circle problems


Frederick Gotham

Noah Roberts posted:
1>c:\documents and settings\nroberts\my documents\visual studio
2005\projects\playground\playground\playground.cpp(16) : error C2632:
'float' followed by 'float' is illegal
1>c:\documents and settings\nroberts\my documents\visual studio
2005\projects\playground\playground\playground.cpp(16) : error C2039:
'()' : is not a member of '`global namespace''
1>c:\documents and settings\nroberts\my documents\visual studio
2005\projects\playground\playground\playground.cpp(16) : error C2673:
'()' : global functions do not have 'this' pointers


Foolish of you to try and compile it.

Humans would be able to eat raw chicken if that's the way evolution
designed us to be. Are you now going to go off and eat raw chicken, and
then come back here complaining to me when you start feeling ill?

In other news, how do you like my hypothetical implementation of
float::operator()? The thing I like most about it is that it's
hypothetical. Do you like its hypotheticalness?
 

Frederick Gotham

kwikius posted:
It follows that those who make liberal use of unsigned int when they
don't need to, sooner or later turn raving mad from this practice.


It serves me just fine. Here's how I choose my integer type:

Firstly, my default choice is "unsigned int".

Next I consider if I'll be storing negative numbers; if so, then I use
"signed int".

Next I consider if I need a 32-Bit type. If so, I'll use "signed long" or
"unsigned long".

How long have you been following this practice?


A few years.
 

kwikius

Frederick said:
kwikius posted:



It serves me just fine. Here's how I choose my integer type:

Firstly, my default choice is "unsigned int".

By default I like to be able to swing either way...

regards
Andy Little
 

Jerry Coffin

[ ... ]
It serves me just fine. Here's how I choose my integer type:

Firstly, my default choice is "unsigned int".

Here's your first and biggest mistake. Your default choice should be
plain int. If the designers had intended "unsigned" to be the default,
they would have made it the default, and you'd have to use 'signed' to
get a signed int.

Having started out completely wrong, none of the rest of your rules make
any sense at all. A sensible set would look more like:

First, separate size from signedness. Choose the size necessary to hold
the range of numbers you'll use. By default, this is usually plain int.
Choosing something smaller is generally restricted to 1) interfacing
with external code that uses the smaller type, or 2) places that you
need to create a large number of objects, and want to minimize space
usage.

Then choose signedness:

The default is signed.

If you want something that's basically a collection of bits, use
unsigned.

On the rare occasion that there's no signed type that will hold the
range of values you need, and there is an unsigned type that will, use
unsigned and hope for the best -- but be aware that you're producing
relatively fragile code.

That's pretty much it. The "collection of bits" situation does include a
few things that may initially look like arithmetic. For example, if you
wanted to create a large integer type, you might have something like
this:

#include <vector>

class large_integer {
    int most_significant;
    std::vector<unsigned> less_significant;
    // ...
};

While you do math on the object as a whole, and implement it using the
normal arithmetic operators on the objects in the vector, the items in
the vector aren't really arithmetic values in their own right -- even
though you're using something like the addition operator, you're still
really doing bit manipulation.
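
A minimal sketch of what that looks like in practice, assuming the limbs
are stored least significant first; the helper name add_limbs and the
storage layout are my own illustration, not something from Jerry's post.
The point is that the well-defined wraparound of unsigned is exactly what
drives the carry, so the elements behave as bit patterns rather than as
ordinary numbers:

#include <cstddef>
#include <vector>

// Hypothetical helper: add two limb vectors, least significant limb first.
// Unsigned wraparound is well defined, so "sum < x" reliably detects carry.
std::vector<unsigned> add_limbs(const std::vector<unsigned>& a,
                                const std::vector<unsigned>& b)
{
    std::vector<unsigned> result;
    unsigned carry = 0;
    const std::size_t n = a.size() > b.size() ? a.size() : b.size();

    for (std::size_t i = 0; i != n; ++i) {
        const unsigned x = i < a.size() ? a[i] : 0;
        const unsigned y = i < b.size() ? b[i] : 0;

        const unsigned sum = x + y;        // may wrap: that is the point
        const unsigned out = sum + carry;  // may wrap once more
        carry = (sum < x || out < sum) ? 1u : 0u;
        result.push_back(out);
    }
    if (carry)
        result.push_back(carry);
    return result;
}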
 

Frederick Gotham

Jerry Coffin posted:
Here's your first and biggest mistake. Your default choice should be
plain int. If the designers had intended "unsigned" to be the default,
they would have made it the default, and you'd have to use 'signed' to
get a signed int.


Actually, I thought it had more to do with the meaning of the word
"integer". Courtesy of dictionary.com:

A member of the set of positive whole numbers 1, 2, 3,... ,
negative whole numbers -1, -2, -3,... , and zero {0}.

Having started out completely wrong, none of the rest of your rules make
any sense at all.


You have yet to convince me that I'm wrong.

A sensible set would look more like:

First, separate size from signedness. Choose the size necessary to hold
the range of numbers you'll use.


Ambiguous. To represent 40000, I could use either signed long or unsigned
int.

By default, this is usually plain int.


By your doctrine, yes. By mine, it's "unsigned int".

Choosing something smaller is generally restricted to 1) interfacing
with external code that uses the smaller type, or 2) places that you
need to create a large number of objects, and want to minimize space
usage.


I concur on this point.

Then choose signedness:


This is the first thing I do, before I choose size.

The default is signed.

If you want something that's basically a collection of bits, use
unsigned.


I use "unsigned int" for a non-negative integer value.

On the rare occasion that there's no signed type that will hold the
range of values you need, and there is an unsigned type that will, use
unsigned and hope for the best -- but be aware that you're producing
relatively fragile code.


Nonsense. I've adequate knowledge of integer promotion, and what happens
when different integer types are mixed.
 

Daniel T.

First, separate size from signedness. Choose the size necessary to hold
the range of numbers you'll use.


Ambiguous. To represent 40000, I could use either signed long or unsigned
int.

Not if the range you wanted to represent was from -40000 to 40000. There
you go assuming unsigned again for no good reason. :)
 

Howard

Thomas J. Gritzan said:
jdcrief wrote:


Remove that, unnecessary and non-standard.

People always say that. Why should he remove it? What harm is it doing?
We all know Visual Studio puts it there, so let's just ignore it. It's
[usually] not germane to the problem, and isn't affecting anything.

Besides, don't you also have to go and change the "using precompiled
headers" project settings if you remove it?

-Howard
 

Frederick Gotham

Daniel T. posted:
Not if the range you wanted to represent was from -40000 to 40000.


When I want a specific range, I'll specify a specific range.

When I want a sole value, I'll specify a sole value.

There you go assuming unsigned again for no good reason. :)


Last time I checked, 40000 was a positive integer.
 

Victor Bazarov

Howard said:
Thomas J. Gritzan said:
jdcrief wrote:


Remove that, unnecessary and non-standard.

People always say that. Why should he remove it? What harm is it
doing? We all know Visual Studio puts it there, so let's just ignore
it. It's [usually] not germane to the problem, and isn't affecting
anything.
Besides, don't you also have to go and change the "using precompiled
headers" project settings if you remove it?

Please remove it when posting here. People often copy-and-paste code
into their development environments to test your code, and not everyone
in the world uses Visual Studio, so they would have to edit your code
before trying it, which brings up the point that your code should be
free of non-standard elements when posted here. That's all.


V
 

Jerry Coffin

[ ... ]
Ambiguous. To represent 40000, I could use either signed long or unsigned
int.

Yup -- and unless you have a really good reason to do otherwise, you use
signed. Given the remainder of the rules, I thought that would be pretty
obvious to anybody.
By your doctrine, yes. By mine, it's "unsigned int".

Only because yours is a mess.

[ ... ]
This is the first thing I do, before I choose size.

Yes, you've already made it clear that you get most of it backwards.
There's no real need to reiterate the point.
I use "unsigned int" for a non-negative integer value.

Yes, you've already made it clear that you use it in many situations to
which it's poorly suited at best. Again, no need to reiterate the
obvious.
Nonsense. I've adequate knowledge of integer promotion, and what happens
when different integer types are mixed.

Awareness of the rules doesn't change the fact that the code is fragile.
Forcing everybody who reads your code to constantly think about a
lengthy, rather complex set of rules is a real cost -- the integer
promotion rules in section 6.3.1.1 of the C99 standard occupy a full
page (and your shorter summary omitted several possibilities, such as
enums). Then we have to deal with the "usual arithmetic conversions",
another page-plus of rules.

Even knowing all of that, you're stuck with two facts: first of all, you
need to use signed integers part of the time. Second, the rules of the
language are oriented toward producing signed types as the result of a
promotion. Both of these result in combining signed and unsigned. The
result is code that's difficult to read or understand, and can give
strange results.

We all know that subtracting an unsigned from another unsigned can well
give a negative result. That's normal math. OTOH, consider this:

#include <stdio.h>
#include <limits.h>

int main() {
    unsigned short a = USHRT_MAX, b = USHRT_MAX;

    // other code, perhaps...

    // a and b are promoted to (signed) int before the multiplication,
    // which overflows a 32-bit int and typically wraps to a negative value.
    if (a * b < 0)
        printf("Result is negative.\n");
    return 0;
}

On a typical current implementation (twos-complement, 16-bit short, 32-
bit int) this will tell you that the result of multiplying two unsigned
numbers is negative! That's a whole lot different from normal math -- if
you multiply two signed numbers and get a negative result, that's at
least conceivable, depending on the values, even if those particular
values would not produce a negative number in normal math.

OTOH, to many people, the fact that multiplying two unsigned numbers can
produce a negative result seems just plain WRONG! Knowing how it
happened doesn't change the fact that it shouldn't be possible!
 

Mark P

Frederick said:
jdcrief posted:



The object known as "radius" was never defined.




Those parentheses might mean multiplication in normal mathematics, but not
in C++. Furthermore, you're using a signed integer type (and I presume your
"area" should never be negative).

unsigned area = pi*radius*radius;

Negative areas are quite common when dealing with oriented curves or
even basic calculus. And what happens when the OP wants to find the
difference in the areas of two circles by subtracting one (possibly
larger) value from another?
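
To make the pitfall concrete, here is a minimal sketch (the values are
invented for illustration): with unsigned areas, subtracting the larger
value from the smaller wraps around instead of going negative.

#include <iostream>

int main()
{
    unsigned small_area = 100;
    unsigned big_area = 400;

    // Unsigned arithmetic is modular, so this wraps to a huge positive
    // value (UINT_MAX - 299) rather than yielding -300.
    unsigned wrapped = small_area - big_area;

    // With plain int the subtraction behaves the way the OP would expect.
    int diff = 100 - 400;

    std::cout << wrapped << '\n' << diff << '\n';
}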
 

Frederick Gotham

Jerry Coffin posted:
Yup -- and unless you have a really good reason to do otherwise, you use
signed.


Convince me. Everything so far has persuaded me to use "unsigned".
Benefits:

(1) No UB upon overflow.
(2) 16-Bit type can be used instead of 32-Bit (for storing 40000).
(3) More efficient on machines other than 2's complement.
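
For what it's worth, point (1) refers to the fact that unsigned
arithmetic is defined to wrap modulo 2^N, whereas signed overflow is
undefined behaviour. A minimal illustration (my own example, not from
the thread):

#include <climits>
#include <iostream>

int main()
{
    unsigned u = UINT_MAX;
    ++u;                     // well defined: wraps around to 0
    std::cout << u << '\n';  // prints 0

    // The signed counterpart would be undefined behaviour:
    // int i = INT_MAX; ++i;
}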

Given the remainder of the rules, I thought that would be pretty
obvious to anybody.


Rules? Please clarify what rules you are referring to.

Only because yours is a mess.


Convince me. Right now, it just looks like you're preaching religion
(which would explain why your arguments can be irrational, and even
downright wrong).

Yes, you've already made it clear that you get most of it backwards.
There's no real need to reiterate the point.


Wordplay. Congratulations. Care to make a real argument?

Persuade me to use signed integer types when storing positive numbers; I'll
acquiesce to your request as soon as I'm convinced.

Yes, you've already made it clear that you use it in many situations to
which it's poorly suited at best. Again, no need to reiterate the
obvious.


Persuade me that it's poorly suited, because right now, it seems perfectly
logical to me to use an unsigned integer type to store a positive number.

Awareness of the rules doesn't change the fact that the code is fragile.


The code is not "fragile". That's a subjective observation, one which is
compounded by an air of incompetency.

Forcing everybody who reads your code to constantly think about a
lengthy, rather complex set of rules is a real cost -- the integer
promotion rules in section 6.3.1.1 of the C99 standard occupy a full
page (and your shorter summary omitted several possibilities, such as
enums). Then we have to deal with the "usual arithmetic conversions",
another page-plus of rules.


Would you like me to dumb-down my language too? Should I say "basic"
instead of "fundamental", "imprecise multiple meanings" instead of
"ambiguous"?

When I speak to young children, I'll dumb-down my language.

When I program for incompetent programmers, I'll dumb-down my code --
thankfully though, I've no intention to do so in the near future.

Even knowing all of that, you're stuck with two facts: first of all, you
need to use signed integers part of the time.


Yes, when storing negative numbers, or interfacing with 3rd party code
which uses signed integers where unsigned integers would be more suited.

Second, the rules of the language are oriented toward producing signed
types as the result of a promotion.


Yes, we must cast to overcome this. Take the following code for example:

unsigned short a = 65535, b = 65535;

unsigned c = a * b;

Even if an "unsigned int" can hold the value 4294836225, this snippet can
invoke undefined behaviour: "unsigned short" is promoted to "signed int"
whenever int can represent all of its values, so on a typical 32-Bit int
(or even a hypothetical 17-Bit one) the two promoted values are
multiplied, overflow, and invoke UB. We must keep our wits about us with
regard to integer promotion:

unsigned c = (unsigned)a * b;

Hiding under the bed and using signed integer types all the time won't make
it go away.
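
Putting those fragments together, a self-contained version of the
cast-based fix might look like this (the variable names follow the
snippet above):

#include <iostream>

int main()
{
    unsigned short a = 65535, b = 65535;

    // Casting one operand keeps the multiplication in unsigned arithmetic,
    // so there is no signed-int overflow during promotion.
    unsigned c = (unsigned)a * b;

    std::cout << c << '\n';  // 4294836225 where unsigned int is 32 bits
}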

Both of these result in combining signed and unsigned. The result is
code that's difficult to read or understand, and can give strange
results.


Not if coded proficiently.

We all know that subtracting an unsigned from another unsigned can well
give a negative result.


In human maths, yes, but not in C++ maths.

That's normal math. OTOH, consider this:

#include <stdio.h>
#include <limits.h>

int main() {
unsigned short a = USHRT_MAX, b = USHRT_MAX;
// other code, perhaps...

if (a * b < 0)
printf("Result is negative.\n");


Again here, we would need:

(unsigned)a * b

On a typical current implementation (twos-complement, 16-bit short, 32-
bit int) this will tell you that the result of multiplying two unsigned
numbers is negative!


The beauty of undefined behaviour.

As with all mathematical operations in computer programming, we must beware
of overflow.

The overflow of signed integer types leads to UB, so it may or may not
yield a negative result... it could very well just hang.

That's a whole lot different from normal math -- if
you multiply two signed numbers and get a negative result, that's at
least conceivable, depending on the values, even if those particular
values would not produce a negative number in normal math.


You still have UB if signed types overflow.

OTOH, to many people, the fact that multiplying two unsigned numbers can
produce a negative result seems just plain WRONG! Knowing how it
happened doesn't change the fact that it shouldn't be possible!


I would rather "unsigned" were the default.
 

Frederick Gotham

Mark P posted:
Negative areas are quite common when dealing with oriented curves or
even basic calculus.


Which is why I wrote "I presume". To be honest though, in all my years of
school and college, I don't think I've ever encountered a negative area.

And what happens when the OP wants to find the
difference in the areas of two circles by subtracting one (possibly
larger) value from another?


Assuming that there's a larger signed type available (and there must be for
your question to make sense):

unsigned area1 = 23352;
unsigned area2 = 1942917;

int diff = (long)area1 - area2;

If you find this too scary, you could use wrappers:

int NegDiff(unsigned a, unsigned b)
{
    STATIC_ASSERT(LONG_MAX >= UINT_MAX);

    return (long)a - b;
}
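
STATIC_ASSERT isn't defined in the snippet; a minimal C++98-style
compile-time assertion that would let it build might look like the
following (the macro name matches the usage above, but the implementation
is my guess -- in modern C++ you would simply write static_assert):

#include <climits>  // LONG_MAX, UINT_MAX

// Hypothetical compile-time assertion: when the condition is false the
// array size becomes -1, which is ill-formed, so compilation stops.
#define STATIC_ASSERT(cond) typedef char static_assert_check[(cond) ? 1 : -1]

int NegDiff(unsigned a, unsigned b)
{
    STATIC_ASSERT(LONG_MAX >= UINT_MAX);  // long must cover unsigned's range

    return (long)a - b;
}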
 

Mark P

Frederick said:
Mark P posted:



Which is why I wrote "I presume". To be honest though, in all my years of
school and college, I don't think I've ever encountered a negative area.

Well you get to learn something new. Lucky you.
Assuming that there's a larger signed type available (and there must be for
your question to make sense):

unsigned area1 = 23352;
unsigned area2 = 1942917;

int diff = (long)area1 - area2;

If you find this too scary, you could use wrappers:

int NegDiff(unsigned a, unsigned b)
{
    STATIC_ASSERT(LONG_MAX >= UINT_MAX);

    return (long)a - b;
}

Ah, of course, how silly of me not to recognize that requiring users to
cast every time they perform a subtraction is far superior to just using
an int to begin with.
 

Jerry Coffin

[ using unsigned ]
Benefits:

(1) No UB upon overflow.

True -- under a few rare circumstances, this is important. These
circumstances almost universally fit the situation I mentioned: when
you're using it as a collection of bits.
(2) 16-Bit type can be used instead of 32-Bit (for storing 40000).

Yes. So what? This typically sacrifices speed to save memory. On a few
occasions it's justified -- but it's fairly rare.
(3) More efficient on machines other than 2's complement.

We already discussed that once -- it's just not true.
Rules? Please clarify what rules you are referring to.

The rules I'd just posted.
Convince me. Right now, it just looks like you're preaching religion
(which would explain why your arguments can be irrational, and even
downright wrong).

If only they really were.
Persuade me to use signed integer types when storing positive numbers; I'll
acquiesce to your request as soon as I'm convinced.

It's obvious that you simply refuse to be convinced by rational
arguments. I've already pointed out how multiplying two variables of
unsigned type can and often will produce a result that's a negative
number. Yes, it's possible to use casts to work around the problem --
but you can't get away from the fact that you're starting by causing a
problem, and then using an ugly wart to (hopefully) keep the problem
from occurring.

As for your accusation of an "air of incompetency", to me it sounds
elitist and arrogant. Given the apologies you currently owe to Mark P
and Dilip for the remarks you made to them in another thread in this
newsgroup today, my personal advice would be to spend a little time
contemplating the value of humility.
 

Frederick Gotham

Jerry Coffin posted:
Yes. So what? This typically sacrifices speed to save memory. On a few
occasions it's justified -- but it's fairly rare.


I was referring to the contrary, actually. If a system has 16-Bit ints,
then:

unsigned i = 40000;

will be faster than:

long i = 40000;
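
Incidentally, C99's <stdint.h> (shipped with most compilers of the time,
and standardised for C++ as <cstdint> in C++11) lets the implementation
make exactly this width-versus-speed decision for you; a small sketch:

#include <stdint.h>  // <cstdint> in C++11 and later

// 40000 fits in 16 unsigned bits, so this can stay 16 bits wide on a
// machine with 16-bit ints, or widen where a wider type is faster.
uint_fast16_t u = 40000;

// The signed equivalent needs at least 32 bits to represent 40000.
int_fast32_t s = 40000;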

It's obvious that you simply refuse to be convinced by rational
arguments. I've already pointed out how multiplying two variables of
unsigned type can and often will produce a result that's a negative
number.


That's a moot point, because you invoke undefined behaviour. Next we'll be
discussing why your nasal demons have horns, and why mine only have hooves
on their hind legs. Do yours breathe fire?

Yes, it's possible to use casts to work around the problem --
but you can't get away from the fact that you're starting by causing a
problem, and then using an ugly wart to (hopefully) keep the problem
from occurring.


If I was going to be subtracting larger unsigned values from smaller
unsigned ones, then I may consider using a signed type. However, I won't
simply start using signed types all the time just because they make one
little scenario more convenient.

As for your accusation of an "air of incompetency", to me it sounds
elitist and arrogant.


Possibly so. But I'm up against people who constantly put forward their
view that "arrays are dangerous", and "null-terminated strings are
dangerous", and who condemn my code for using them.

The only reason someone would think that these things are dangerous is
if they consistently make mistakes with them.

Thus, to put forward my argument, I must segregate the competent
programmers from the incompetent programmers. Competent programmers can use
arrays and null-terminated strings -- incompetent programmers can't.
 

Jerry Coffin

[ ... ]
The only reason someone would think that these things are dangerous is
if they consistently make mistakes with them.

Nonsense. A person who's used them for years and never made a mistake
can still observe the number of mistakes that have been made.
Thus, to put forward my argument, I must segregate the competent
programmers from the incompetent programmers. Competent programmers can use
arrays and null-terminated strings -- incompetent programmers can't.

What a load of BS! Your measure of competence bears no relationship with
reality. Just for example, it basically starts by assuming that all
competent programmers are C programmers. There are hundreds of other
programming languages, and many of them simply don't use null-terminated
strings at all. Dismissing everybody who uses them as incompetent is
utterly ridiculous.

Even if we restrict your comments to C++ programmers, you're still being
idiotic. I've used null-terminated strings for years, and while I could
continue to do so indefinitely if necessary, I also realize that doing
so would be extremely counterproductive.

I haven't (at least as far as I'm aware) had any major problems with
null-terminated strings or arrays when I've used them -- but I've seen
enough problems with them to advise against their use most of the time.

One need not lose a hand to recognize the danger of large, unprotected
saw blades spinning at high speed. That doesn't mean you need to
recommend using a butter knife to attempt to build a house -- but
insisting that a saw only be operated with all its safety guards removed
would be equally silly.
 

Noah Roberts

Jerry said:
As for your accusation of an "air of incompetency", to me it sounds
elitist and arrogant.

He called me incompetent also. I wouldn't worry about it too much.
He doesn't work professionally and therefore doesn't have to worry about
any of the things professional programmers have to worry about. His
opinions are based on his limited experience as a hobby programmer who
gets to do things his way all the time.
 

Kaz Kylheku

Frederick said:
Jerry Coffin posted:



Convince me. Everything so far has persuaded me to use "unsigned".
Benefits:

(1) No UB upon overflow.

Correct programming isn't simply avoidance of undefined behavior. If a
calculation invokes undefined behavior due to overflow, that is a
software defect. Changing the types to unsigned to remove the undefined
behavior doesn't necessarily fix the defect.
(2) 16-Bit type can be used instead of 32-Bit (for storing 40000).

That will only save space in large arrays.
(3) More efficient on machines other than 2's complement.

This is backwards. At the bit level, most unsigned arithmetic is the
same as two's complement, and on a two's complement machine can share
most of the same circuitry.

The signed types have more latitude to follow the machine's native
representation.

It may well take more work to implement unsigned arithmetic on a
sign-magnitude machine where, for instance, subtracting 1 from 0 does
not give an all 1's bit pattern.
Yes, we must cast to overcome this. Take the following code for example:

unsigned short a = 65535, b = 65535;

Intelligent programmers don't use unsigned short simply because it's
the smallest available type which will "shrink wrap" a given value.
unsigned c = a * b;

Even if an "unsigned int" can hold the value 4294836225, this snippet can
invoke undefined behaviour: "unsigned short" is promoted to "signed int"
whenever int can represent all of its values, so on a typical 32-Bit int
(or even a hypothetical 17-Bit one) the two promoted values are
multiplied, overflow, and invoke UB. We must keep our wits about us with
regard to integer promotion:

The smart thing to do is to avoid integral promotion by avoiding use of
the short type.
unsigned c = (unsigned)a * b;

I.e.:

unsigned a = 65535, b = 65535;

Now the cast isn't needed. However, the result still depends on the
implementation: the multiplication is well defined, but on a 16-bit
unsigned int the product wraps. A truly competent programmer would make
it like this:

long a = 65535, b = 65535;

long c = a * b;

Now not only is the behavior defined, the product is actually correct as
well -- assuming long is wide enough to hold 4294836225; where long is
only 32 bits, the signed multiplication would itself overflow, and long
long is the type to reach for.

It doesn't matter that the behavior is well defined if the wrong result
is computed. The wrong result will have consequences. It might cause
undefined behavior in some later computation. Or the program may
produce incorrect output which has repercussions elsewhere in the
world.
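
A minimal sketch of the "compute it in a wide enough type" approach,
using long long (a C99 type and a common extension in C++ compilers of
the day, standard since C++11) so the product fits regardless of how
wide int and long happen to be:

#include <iostream>

int main()
{
    long long a = 65535, b = 65535;

    // long long is at least 64 bits, so 65535 * 65535 = 4294836225
    // is representable and no overflow occurs.
    long long c = a * b;

    std::cout << c << '\n';  // 4294836225
}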
In human maths, yes, but not in C++ maths.

However, the negative result may be what is intended. A single unsigned
operand can "poison" the entire calculation into being unsigned.

pointer += displacement - offset;

Suppose displacement is signed long but offset is unsigned long
(perhaps because someone thought that since it is always positive, an
unsigned type ought to be used!)

The intent may be that a negative value is computed in the case where
offset exceeds displacement. But of course displacement will be converted
to unsigned long, the subtraction will be done in unsigned arithmetic,
and a large positive value will be computed instead.

This may well cause undefined behavior when added to the pointer. Even
if it doesn't, it's still the wrong location.
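
A minimal sketch of that failure mode, with made-up values; it prints
the miscomputed offset rather than actually applying it to a pointer,
since doing so could itself be undefined:

#include <iostream>

int main()
{
    long displacement = 10;
    unsigned long offset = 25;  // "always positive", so someone made it unsigned

    // displacement is converted to unsigned long, so instead of -15 the
    // subtraction yields a huge positive value (ULONG_MAX - 14).
    unsigned long poisoned = displacement - offset;

    // Keeping everything signed gives the intended negative displacement.
    long intended = displacement - static_cast<long>(offset);

    std::cout << poisoned << '\n' << intended << '\n';
}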
 
