Division/math bug in perl?


John W. Kennedy

Snail said:
John W. Kennedy wrote:




Thank you for your response. So does this behavior stem from hardware?
Like how a CPU handles it? or? (when you said "hardware architect's
design" above.)

The design of C supposes that / and % for integers are mostly used with
positive numbers, and that object code should therefore be generated
that will run as fast as possible with positive numbers. The "divide"
instruction on most architectures will do this easily. But to force the
results for negative numbers to fit any particular pattern will take
extra instructions on some machines. Therefore, C gives the compiler
designer the freedom to create the fastest code for positive numbers, no
matter what the results are for negative numbers, and a well-thought-out
C compiler will normally do that. (Some compilers may include options to
force one philosophy or another.)
I did not realize that C (and C++?) do not define it themselves. If it is related to hardware, as I am now suspecting, that might explain why. But even so, I don't think it would have been difficult to program the conversion algorithms to work a certain way. It seems, from what I've gathered so far in this thread, that that is what Perl does.

Perl does not compile to true object code, so forcing the decision has
only a very minor effect on speed. Forcing a rule on C might make divide
operations expand from about two instructions to about six. Forcing it
in Perl might make it expand from, say, about fifty to about fifty-four.
 

Anno Siegel

Alfred Z. Newmane said:
darkon wrote:

Not mathematically it isn't.

Mathematics is what mathematicians define it to be.

In particular, the int function is rarely used in mathematics (the
floor and ceiling functions are). There is no binding convention
how it would have to be defined.
It should be -3. Think of it like this: the int part of 2.6 is 2, which is the /lowest/ number before the next highest integer on the number line. Applying this to -2.6, the /lowest/ number before the next integer is -3.

It is also the integer whose absolute value is maximal below or equal to (the absolute value of) 2.6. Apply that to -2.6, and the result is -2.

Anno
 

Ilya Zakharevich

[A complimentary Cc of this posting was sent to Snail]
I think I was a little misleading then, and I'm sorry. I knew that int does what it does. What I was really getting at was: why do languages like Perl, C, C++, etc., do this sort of division in the first place?

I have no idea why C chose this (IMO, completely broken) semantic of convert-to-integer. But since C did it, C++ had to follow.

Now, why did Perl do it? Before about v5.005, Perl was just a very shallow wrapper around C w.r.t. numeric stuff. And when I got bold enough to change the semantics of numerics, backward compatibility struck. One of the arguments (IIRC, by tchrist) was that code like

my $digit = int random 10;

was legitimate Perl, so it would not be very nice to suddenly make it produce 10 as a possible answer.

Hope this helps,
Ilya
 

Arndt Jonasson

Snail said:
[...]
I only wanted to start a discussion on why languages like Perl, C, C++ (and Java?) do this sort of division. I am thinking there has to be some logical reason why languages do this.

I think comp.programming may be the right group for this, but I haven't
read it for a long time, so I don't know if it's still a useful group.

I'm sure the question has been asked many times in comp.lang.c, and I
would be surprised if Chris Torek hasn't given an excellent answer at
some time.

C++ inherited the C semantics, of course. Perl maybe took the C semantics
because C was the predominant language in the environment where Perl
was developed (but this is speculation on my part).

For some languages, all the relevant functions exist: rounding upward, downward, and toward zero. The first two are often called 'ceiling' and 'floor' where they exist.

It's also historically ill-defined what a 'mod' function does when its
second argument is negative. I think that C, at least originally, took
the view that the C 'mod' operator ('%') did whatever the processor
instruction did, which was the expected thing for positive
arguments, and something you should not rely upon for negative arguments.
I don't know what the most recent C standard says about the subject.
 

Robert Sedlacek

Snail said:
Any hand calculator I've tried gives -3 for int(-2.6), like a Texas Instruments graphing calc.

If you're looking for that behavior, you might give the POSIX module a chance; I think it has rounding functions (floor and ceil). HTH.
 

Geoff

Why is this:

$ perl -e 'print (int (-2.6), "\n")'
-2

Shouldn't it be -3? I thought converting from float to int is supposed to give the integer part, which is -3, and not round towards zero, as it seems to be doing, resulting in -2? For that matter, why do C/C++ do this too?

Any hand calculator I've tried gives -3 for int(-2.6), like a Texas Instruments graphing calc.

To its credit, Perl does the mod operation correctly:

$ perl -e 'print (-13 % 5, "\n")'
2

Whereas in C/C++ you get -3, which is mathematically incorrect. (Any hand calculator I've tried gives 2 for the above operation.)

There is no int() function in C.

One may _cast_ a float to an int, but this is not a function call. The effects of this are platform- and implementation-dependent per the standard, due to the history of C from its inception.

Here's what the ANSI C documentation has to say:
"When an object of floating type is converted to an integral type, the
fractional part is truncated. No rounding takes place in the
conversion process."

The ANSI <float.h> header was an attempt to characterize the implementation-dependent behavior of the floating-point types so that a coder could detect at compile time the nature of the environment and avoid errors. Likewise, <limits.h> was an attempt to characterize the integral types.

For example, in Perl what would happen if you performed int(-33000.0)? The C standard only guarantees that INT_MIN is at most -32767, so on a platform with 16-bit ints, unless Perl automatically converts the value to a long, the result should be an error. If it silently converts it to a long, how will the program behave if what the programmer presumes is an int is used in a bitmap context?

In C++ it is the designer of the class who determines the behavior of
the operator. A proper numeric class might have an int operator that
takes the integer part of a floating type and a round operator that
rounds a float to the nearest integer.
 
