IF-free conception.


Evgeney Knyazhev

Start No-IF heapsort

Breakpoint 1, siftDown (a=0x7ffff6c03010, root=49999, end=100000)
at esort.cc:76
76 dv = (a[child] < a[child+1]) * dv;
(gdb) print child+1
$1 = 100000
Ben, so let's take code again:
while (child < end) {
    dv = (child + 1 < end);
    dv = (a[child] < a[child+1]) * dv;
    child += dv;
    dv = le(&a[root], &a[child]);
    end *= dv;
}
(gdb) print child+1

$1 = 100000
why'd you need that???? yes, child+1==end, but our very point is control 'child' value itself ;D
dv = (child + 1 < end);
this line controls a bound, thereby child<end (always that way).
 

Evgeney Knyazhev

frankly, this code has a potential problem :D

dv = (a[child] < a[child+1])*dv;

too intellectual memory management can abort this line as illegal access. actually, such policy would be quite wise. however, solution is simple: we could use buffer item to avoid ambiguities like that.
 

Ben Bacarisse

Evgeney Knyazhev said:
Start No-IF heapsort

Breakpoint 1, siftDown (a=0x7ffff6c03010, root=49999, end=100000)
at esort.cc:76
76 dv = (a[child] < a[child+1]) * dv;
(gdb) print child+1
$1 = 100000
Ben, so let's take code again:
while (child < end) {
    dv = (child + 1 < end);
    dv = (a[child] < a[child+1]) * dv;
    child += dv;
    dv = le(&a[root], &a[child]);
    end *= dv;
}
(gdb) print child+1

$1 = 100000
why'd you need that????

Because I'm trying to understand what it is you are missing. Obviously
you agree that child+1 == 100000. We agree on one thing at least.
yes, child+1==end, but our very point is control 'child' value itself ;D
dv = (child + 1 < end);
this line controls a bound, thereby child<end (always that way).

That line does not "control a bound", it simply sets dv to 0. The
problem (the out-of-bounds access) is in the next line:

dv = (a[child] < a[child+1])*dv;

This line has no defined meaning in C (or C++) because it accesses an
element of 'a' that does not exist. The previous statement does not
stop this statement from being executed, even though the value of dv
means that you or I could work out what the result is without that
out-of-bounds access.
 

Keith Thompson

Evgeney Knyazhev said:
Start No-IF heapsort

Breakpoint 1, siftDown (a=0x7ffff6c03010, root=49999, end=100000)
at esort.cc:76
76 dv = (a[child] < a[child+1]) * dv;
(gdb) print child+1
$1 = 100000
Ben, so let's take code again:
while (child < end) {
    dv = (child + 1 < end);
    dv = (a[child] < a[child+1]) * dv;
    child += dv;
    dv = le(&a[root], &a[child]);
    end *= dv;
}
(gdb) print child+1

$1 = 100000
why'd you need that???? yes, child+1==end, but our very point is control 'child' value itself ;D
dv = (child + 1 < end);
this line controls a bound, thereby child<end (always that way).

Your program evaluates the expression `a[child+1]` when `child+1 ==
100000`. Since the array `a` has no element 100000, the evaluation of
that expression has undefined behavior.

Multiplying the result by 0 doesn't change that.
 

Ben Bacarisse

Evgeney Knyazhev said:
frankly, this code has a potential problem :D

dv = (a[child] < a[child+1])*dv;

too intellectual memory management can abort this line as illegal
access. actually, such policy would be quite wise. however, solution
is simple: we could use buffer item to avoid ambiguities like that.

At last!
 

James Kuyper

frankly, this code has a potential problem :D

dv = (a[child] < a[child+1])*dv;

too intellectual memory management can abort this line as illegal access. actually, such policy would be quite wise. however, solution is simple: we could use buffer item to avoid ambiguities like that.

Ben's first message pointing out that problem was nearly two days ago.
76 messages have been posted since then, a large fraction of them
discussing this one issue. I'm glad you finally understand, but more
than a little annoyed that it took so long and so many words to achieve
that result.
By the way, "too intellectual memory management" sounds a bit
dismissive. Reading past the end of an array runs a real risk of reading
outside the section of memory that the operating system has made
available to your program. There are systems that strictly enforce
memory access limitations, by aborting processes that violate them. I
wouldn't call them "too intellectual", I'd call them "safer" or "more
secure".
 

glen herrmannsfeldt

(snip)
dv = (a[child] < a[child+1])*dv;
(snip)
Ben's first message pointing out that problem was nearly two days ago.
76 messages have been posted since then, a large fraction of them
discussing this one issue. I'm glad you finally understand, but more
than a little annoyed that it took so long and so many words to achieve
that result.
By the way, "too intellectual memory management" sounds a bit
dismissive. Reading past the end of an array runs a real risk of reading
outside the section of memory that the operating system has made
available to your program. There are systems that strictly enforce
memory access limitations, by aborting processes that violate them. I
wouldn't call them "too intellectual", I'd call them "safer" or "more
secure".

For a paging MMU with 4K pages, it isn't so likely that you end exactly
at the end of a page.

On the 80286 with OS/2, I was once allocating segments directly
from OS/2, instead of with malloc(), with the segment limit exactly
at the end of the array. But no one does that on current systems,
though segments are still there.

-- glen
 

Evgeney Knyazhev

frankly, this code has a potential problem :D
dv = (a[child] < a[child+1])*dv;
too intellectual memory management can abort this line as illegal access. actually, such policy would be quite wise. however, solution is simple: we could use buffer item to avoid ambiguities like that.



Ben's first message pointing out that problem was nearly two days ago.
76 messages have been posted since then, a large fraction of them
discussing this one issue. I'm glad you finally understand, but more
than a little annoyed that it took so long and so many words to achieve
that result.
By the way, "too intellectual memory management" sounds a bit
dismissive. Reading past the end of an array runs a real risk of reading
outside the section of memory that the operating system has made
available to your program. There are systems that strictly enforce
memory access limitations, by aborting processes that violate them. I
wouldn't call them "too intellectual", I'd call them "safer" or "more
secure".

James, C has never been a safe programming language. Most C code hasn't
been safe up to now :D This has been known since well before 2 days ago
;D In short, for me, it wasn't an issue at all. The real issue for me is
the idiotic optimization of the compiler: no-IF C code becomes IF-fy asm
output. Such irony all-around. XD
 

BartC

James, C has never been a safe programming language. Most C code hasn't
been safe up to now :D This has been known since well before 2 days ago ;D

You're proposing to replace if-statements with alternatives that might
be faster, but might crash. That's not really acceptable, and it can
also be considered cheating. It also seems debatable whether they are
actually faster. They are also unreadable, harder to maintain, and
likely to introduce extra, even more subtle bugs.

In short, you're not really selling your proposal very well! Your
devil-may-care attitude doesn't help either.
 

Öö Tiib

Evgeney Knyazhev said:
frankly, this code has a potential problem :D

dv = (a[child] < a[child+1])*dv;

too intellectual memory management can abort this line as illegal
access. actually, such policy would be quite wise. however, solution
is simple: we could use buffer item to avoid ambiguities like that.

You have one iteration too many. That dv is there to remove the effects
of the last iteration. It may be better to iterate one fewer time in
the first place. Less code often (not always) results in better
efficiency.

Congrats Ben. You deserve a gold medal for stubborn mentoring.
 

Öö Tiib

real issue for me is the idiotic optimization of the compiler: no-IF
C code becomes IF-fy asm output. Such irony all-around. XD

Evgeney, you are still far from a good feel for how funnily modern
processors and compilers perform, and how interesting it is to
measure their performance in a meaningful way. There is a lot more
irony coming. I offer you another example:

#include <stdio.h>
#include <time.h>

/// @return - time difference between start and stop in milliseconds
int ms_elapsed( clock_t start, clock_t stop )
{
    return (int)( 1000.0 * ( stop - start ) / CLOCKS_PER_SEC );
}

int const Billion = 1000000000;
/// & with numbers up to Billion gives 0, 0, 2, 2 repeating pattern
int const Pattern_0_0_2_2 = 0x40000002;

/// @return - half of Billion
int unpredictableIfs()
{
    int sum = 0;
    for ( int i = 0; i < Billion; ++i )
    {
        // true, true, false, false ...
        if ( ( i & Pattern_0_0_2_2 ) == 0 )
        {
            ++sum;
        }
    }
    return sum;
}

/// @return - half of Billion
int noIfs()
{
    int sum = 0;
    for ( int i = 0; i < Billion; ++i )
    {
        // 1, 1, 0, 0 ...
        sum += ( i & Pattern_0_0_2_2 ) == 0;
    }
    return sum;
}

int main()
{
    clock_t volatile start;
    clock_t volatile stop;
    int volatile sum;
    printf( "Following is a puzzle for Evgeney Knyazhev:\n" );

    start = clock();
    sum = unpredictableIfs();
    stop = clock();
    printf( "Unpredictable ifs took %d msec, answer was %d\n"
          , ms_elapsed(start, stop), sum );

    start = clock();
    sum = unpredictableIfs();
    stop = clock();
    printf( "Unpredictable ifs took %d msec, answer was %d\n"
          , ms_elapsed(start, stop), sum );

    start = clock();
    sum = noIfs();
    stop = clock();
    printf( "Same without ifs took %d msec; answer was %d\n"
          , ms_elapsed(start, stop), sum );

    start = clock();
    sum = unpredictableIfs();
    stop = clock();
    printf( "Unpredictable ifs took %d msec, answer was %d\n"
          , ms_elapsed(start, stop), sum );
}

Compiled with VS2008 and VS2010; /O2 optimizations, for 32 bit.
Ran on an Intel Core 2 with Windows XP 32-bit and an Intel Core i5
with Windows 7 64-bit.

Output:

Following is a puzzle for Evgeney Knyazhev:
Unpredictable ifs took 1360 msec, answer was 500000000
Unpredictable ifs took 1015 msec, answer was 500000000
Same without ifs took 1032 msec; answer was 500000000
Unpredictable ifs took 4812 msec, answer was 500000000

The values did deviate a bit here or there but the pattern was
like that. Enjoy!
 

gwowen

Compiled with VS2008 and VS2010; /O2 optimizations, for 32 bit.
Ran on an Intel Core 2 with Windows XP 32-bit and an Intel Core i5
with Windows 7 64-bit.

Output:

    Following is a puzzle for Evgeney Knyazhev:
    Unpredictable ifs took 1360 msec, answer was 500000000
    Unpredictable ifs took 1015 msec, answer was 500000000
    Same without ifs took 1032 msec; answer was 500000000
    Unpredictable ifs took 4812 msec, answer was 500000000

The values did deviate a bit here or there but the pattern was
like that. Enjoy!

Here are the results on MinGW, g++ 4.7.1, WinXP, Intel Core 2 2.40GHz:

gowen@MATHS01 ~
$ g++ -O0 foo.cpp && ./a.exe
Following is a puzzle for Evgeney Knyazhev:
Unpredictable ifs took 4734 msec, answer was 500000000
Unpredictable ifs took 4720 msec, answer was 500000000
Same without ifs took 5391 msec; answer was 500000000
Unpredictable ifs took 4719 msec, answer was 500000000

gowen@MATHS01 ~
$ g++ -O1 foo.cpp && ./a.exe
Following is a puzzle for Evgeney Knyazhev:
Unpredictable ifs took 1656 msec, answer was 500000000
Unpredictable ifs took 0 msec, answer was 500000000
Same without ifs took 1969 msec; answer was 500000000
Unpredictable ifs took 0 msec, answer was 500000000

gowen@MATHS01 ~
$ g++ -O2 foo.cpp && ./a.exe
Following is a puzzle for Evgeney Knyazhev:
Unpredictable ifs took 1656 msec, answer was 500000000
Unpredictable ifs took 1656 msec, answer was 500000000
Same without ifs took 1407 msec; answer was 500000000
Unpredictable ifs took 1656 msec, answer was 500000000

gowen@MATHS01 ~
$ g++ -O3 foo.cpp && ./a.exe
Following is a puzzle for Evgeney Knyazhev:
Unpredictable ifs took 1890 msec, answer was 500000000
Unpredictable ifs took 2516 msec, answer was 500000000
Same without ifs took 1422 msec; answer was 500000000
Unpredictable ifs took 2516 msec, answer was 500000000

Figure those out! (I particularly like the O1 results!)
 

Keith Thompson

James Kuyper said:
frankly, this code has a potential problem :D

dv = (a[child] < a[child+1])*dv;

too intellectual memory management can abort this line as illegal
access. actually, such policy would be quite wise. however, solution
is simple: we could use buffer item to avoid ambiguities like that.

Ben's first message pointing out that problem was nearly two days ago.
76 messages have been posted since then, a large fraction of them
discussing this one issue. I'm glad you finally understand, but more
than a little annoyed that it took so long and so many words to achieve
that result.
By the way, "too intellectual memory management" sounds a bit
dismissive. Reading past the end of an array runs a real risk of reading
outside the section of memory that the operating system has made
available to your program. There are systems that strictly enforce
memory access limitations, by aborting processes that violate them. I
wouldn't call them "too intellectual", I'd call them "safer" or "more
secure".

Not only that, but a compiler is permitted to assume that a program's
behavior is defined, and perform optimizations based on that assumption.
By writing code whose behavior is undefined, you are in effect lying to
the compiler.

The effects of accessing an array outside its bounds are not limited to
accessing memory adjacent to the array (which could be bad enough).
 

Keith Thompson

Evgeney Knyazhev said:
James, C has never been a safe programming language. Most C code
hasn't been safe up to now :D This has been known since well before 2
days ago ;D In short, for me, it wasn't an issue at all. The real
issue for me is the idiotic optimization of the compiler: no-IF C code
becomes IF-fy asm output. Such irony all-around. XD

It's true that C is less "safe" than many other programming languages,
in the sense that it's easy to write code whose behavior is undefined,
and most implementations are unlikely to detect such errors.

That's exactly why you need to write code *carefully*. It doesn't
justify a cavalier attitude towards safety; quite the opposite, in fact.

It's entirely possible to write safe C code, but it requires more
attention to detail than you've exhibited.

And as for your posting style: Google Groups, unfortunately, is a very
poor interface for posting to Usenet. Among other problems, it has a
serious bug that causes quoted text to be double-spaced. It would be
helpful if you'd delete the additional blank lines before posting.

And, as I've mentioned before, it's helpful to keep your text down to
about 72 columns; you're still writing each paragraph as a single long
line. It probably looks ok in your browser window, but it causes
problems for a lot of us who read Usenet in other ways.

Consider using a Usenet server other than Google Groups (I use
news.eternal-september.org; it's good and free) with a decent client
program (I use Gnus, which runs under Emacs; if you're not an Emacs
user, Mozilla Thunderbird might be a good alternative).
 

James Kuyper

On 02/18/2013 09:28 PM, glen herrmannsfeldt wrote:
....
For a paging MMU with 4K pages, it isn't so likely that you end exactly
at the end of a page.

It's sloppy programming to assume that, just because the chance is
small, it can be ignored. If you do enough allocations, allocating
exactly at the end of a page becomes a near certainty, even though each
individual allocation has only a small chance of ending up there.
 

Ken Brody

On 02/18/2013 09:28 PM, glen herrmannsfeldt wrote:
...

It's sloppy programming to assume that, just because the chance is
small, it can be ignored. If you do enough allocations, allocating
exactly at the end of a page becomes a near certainty, even though each
individual allocation has only a small chance of ending up there.

The odds of a large meteor passing within the orbits of geosynchronous
satellites are small. The odds of a medium-sized meteor entering the
Earth's atmosphere are smaller. The odds of both happening within 24
hours of each other are minuscule.

But not zero.
 

glen herrmannsfeldt

The odds of a large meteor passing within the orbits of geosynchronous
satellites are small. The odds of a medium-sized meteor entering the
Earth's atmosphere are smaller. The odds of both happening within 24
hours of each other are minuscule.
But not zero.

I haven't been following the details so carefully, but it seems likely
that the two events aren't statistically independent. That is, they
might have the same source. Believing events to be statistically
independent when they aren't is the cause of many mistakes, possibly
including much of the economic meltdown of 2008.

There is the story of an old lady (why are stories always about old
ladies?) who heard that the chance of a bomb on an airplane is 1 in
1000, but the chance of two bombs is 1 in a million. Just to be sure,
she always brings her own bomb.

-- glen
 

Öö Tiib

Here are the results on MinGW, g++ 4.7.1, WinXP, Intel Core 2 2.40GHz:

gowen@MATHS01 ~
$ g++ -O0 foo.cpp && ./a.exe
Following is a puzzle for Evgeney Knyazhev:
Unpredictable ifs took 4734 msec, answer was 500000000
Unpredictable ifs took 4720 msec, answer was 500000000
Same without ifs took 5391 msec; answer was 500000000
Unpredictable ifs took 4719 msec, answer was 500000000

gowen@MATHS01 ~
$ g++ -O1 foo.cpp && ./a.exe
Following is a puzzle for Evgeney Knyazhev:
Unpredictable ifs took 1656 msec, answer was 500000000
Unpredictable ifs took 0 msec, answer was 500000000
Same without ifs took 1969 msec; answer was 500000000
Unpredictable ifs took 0 msec, answer was 500000000

gowen@MATHS01 ~
$ g++ -O2 foo.cpp && ./a.exe
Following is a puzzle for Evgeney Knyazhev:
Unpredictable ifs took 1656 msec, answer was 500000000
Unpredictable ifs took 1656 msec, answer was 500000000
Same without ifs took 1407 msec; answer was 500000000
Unpredictable ifs took 1656 msec, answer was 500000000

gowen@MATHS01 ~
$ g++ -O3 foo.cpp && ./a.exe
Following is a puzzle for Evgeney Knyazhev:
Unpredictable ifs took 1890 msec, answer was 500000000
Unpredictable ifs took 2516 msec, answer was 500000000
Same without ifs took 1422 msec; answer was 500000000
Unpredictable ifs took 2516 msec, answer was 500000000

Figure those out! (I particularly like the O1 results!)

You are a fairly positive person! I would call g++ -O2 and g++ -O3
outright defective by those results. How can a "lower"-level optimizer
reuse the result of a function call when the "higher"-level optimizers
fail to do it?
 

Keith Thompson

Robert Wessel said:
In fact the odds of an allocation ending adjacent to a page boundary
are only about 1/1024, assuming 4K pages and 4 byte entries. Sizes
and alignment and granularities in the memory allocator may tweak that
a bit, of course.

That's assuming that the alignment of allocations is distributed
uniformly and randomly. In practice, allocation patterns for a
particular application may be extremely non-uniform in one way
or another.

Which means that if you decided that the probability of some event
happening is 1/1024, you may well find either that it *never* happens,
or that it *always* happens.

And even if the distribution is uniform, 1/1024 approaches certainty for
something that's done a billion times.
It may be that on some systems it is always safe to read the four
bytes after non-static allocations, because commonly the memory
allocator puts a length (for linking back to the actual allocation
header from the following block). Or in the case of a stack
allocation, that such a reference would only cause the next stack page
to be allocated. Depending on such is, of course, hideous.

Indeed.
 

glen herrmannsfeldt

(snip on memory allocation and the probability of a memory
access fault.)
In fact the odds of an allocation ending adjacent to a page boundary
are only about 1/1024, assuming 4K pages and 4 byte entries. Sizes
and alignment and granularities in the memory allocator may tweak that
a bit, of course.
It may be that on some systems it is always safe to read the four
bytes after non-static allocations, because commonly the memory
allocator puts a length (for linking back to the actual allocation
header from the following block).

For the usual malloc() implementation, the linkage information is
stored before the data. (Array element [-1].) In that case, you are
safer going off the low end of an array than the high end.
Or in the case of a stack allocation, that such a reference
would only cause the next stack page to be allocated.

In the case of a decrementing stack, as used on many currently
popular processors, you should be safe going off the high end,
not so safe the low end. (Assuming arrays are allocated with
higher elements at higher addresses.*)
Depending on such is, of course, hideous.

Yes. Though there might be some system dependent cases
where the time savings make it worthwhile.

It is common in sort algorithms to arrange the inner loop
such that it can't go off the end of an array, even without
a test. One way is to add a large-valued element to the end
of the array (possibly after copying the previous element).
You can then do a loop with a greater-or-equal exit test,
confident that the loop won't go past the end.

(*) The first Fortran compiler allocated arrays with
increasing subscripts at decreasing addresses. The index
registers apparently made that easier. As far as I know,
that would be legal in C if done consistently.

-- glen
 
