Java left shift and right shift operators.


Eric Sosman

I didn't know how inefficient it is. I heard the Math.(...) functions
are very fast and that they use native support like assembly language,
so they are quite fast.

"Fast" is not an absolute, and neither is "very fast." If you
plant an acorn today and have a fifty-foot oak tree in ten years,
that's "fast." If you strike a match now and see a flame in ten
minutes, that's "slow" even though it's half a million times faster
than the "fast" oak tree.

In other words, interpret "fast" in relation to the amount of
work being done.
Is there a "Integer" in java which is longer than 64 bits?
[...]
I have heard of BigInteger; I don't know how many bits it supports.

Perhaps the Javadoc for BigInteger might be helpful? Oh, no,
sorry, that's *way* too obvious, there must be a grue waiting for
anyone who tries to read the Javadoc. Never mind.
Can I use byte[] to represent a longer number and do multiplications/
divisions on them?

No. Somebody else could, if he were sufficiently masochistic,
but at your present level of Java proficiency[*] it's pretty clear
that you cannot.

[*] No offense: We all begin from ignorance. Your progress from
the starting point is not yet enough to enable you to do this task.
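
For what it's worth, java.math.BigInteger already does what the byte[]
idea would reinvent: it has no fixed bit limit beyond available memory,
and multiplication and division come built in. A minimal sketch (the
class name is mine, purely for illustration):

import java.math.BigInteger;

public class BigMul {
    public static void main(String[] args) {
        // 2^100 is well past 64 bits, yet arithmetic works as usual.
        BigInteger a = BigInteger.ONE.shiftLeft(100);  // 2^100
        BigInteger b = BigInteger.valueOf(12345);
        System.out.println(a.multiply(b));  // exact product
        System.out.println(a.divide(b));    // truncated quotient
    }
}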
 

Lew

Lew said:
Sanny, please attribute your citations.
Sanny wrote: [sic*]
Is there [...]
Eric Sosman wrote: [sic*]
Let me [...]
Sanny said:
I didnt [...]

*: Two of three attributions (intendly?) wrongly indented.

Lew, please indent attributions with the correct number of ">"s.

Obviously you neglected to see how far back these quotes came. The
indentations were correct. Check again.
 

Travers Naran

I have computed and found "if statements" take 20-50 times longer than
basic arithmetic operations. So I wanted to replace the if condition
with some arithmetic/math.

This is why I think assembly language should be mandatory in schools. :)

An if statement gets translated into something like (using a
pseudo-assembly for readability):

;; if (b > 0) return a << b else return a >> -b
compare b to 0
jump-less-than to L1
a := a << b
jump L2
L1:
b := -b
a := a >> b
L2:
return ;; result in a
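
In Java source, the whole routine is just the expression from the
comment line, wrapped here in a method whose name is my own:

static int shift(int a, int b) {
    return (b > 0) ? (a << b) : (a >> -b);
}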

It's the jump that's adding the time. But on an x86, this kind of code
can sit entirely in the instruction cache, and Intel/AMD processors
perform branch prediction to speed up exactly this sort of code.

The only way you should be getting a 20-50x difference is if you aren't
using any sort of optimizing runtime at all: for example, running
purely interpreted bytecode with no JIT compilation.

But usually this kind of if-checking and bit-twiddling is so fast that
it's not worth optimizing beyond a single statement. What are you
coding that makes this the most expensive calculation in your program?


OFF-TOPIC
=========

Bonus Question:
If branching is "20-50 times longer" than any of the operations
(negation or shifting), how could you re-write this assembly to use a
single branch statement?

Extra Bonus:
Using your favorite processor instruction set, come up with a fast
implementation of: if (b > 0) return a << b; else return a >> -b;
 

Andreas Leitgeb

Lew said:
Lew said:
Sanny, please attribute your citations.
Sanny wrote: [sic*]
Is there [...]
Eric Sosman wrote: [sic*]
Let me [...]
Sanny wrote:
I didnt [...]
*: Two of three attributions (intendly?) wrongly indented.
Lew, please indent attributions with the correct number of ">"s.
Obviously you neglected to see how far back these quotes came. The
indentations were correct. Check again.

I said they were wrongly *indented*, not that they were wrongly placed.

As I explained already once, a snippet like this:

Eric said:
>> Let me [...]

does NOT say that the words "Let me" were Eric's, but instead implies
that Eric Sosman had quoted someone who wrote those words, because there
are two levels of difference in indentation.

Had you instead written:

Eric said:
> Let me [...]

it would have been correct, and it would have remained clear in further
followups, each of which adds another ">" in front of all the lines
(except the new initial attribution line).

PS: there was another [sic] in my post, though: the word "intendedly"
had lost one of its "ed"s without my noticing.

PPS: The greater the need to spell out exactly who wrote what, the less
worthwhile it really is to contribute to that particular subthread. :)
 

Arved Sandstrom

On 11-04-27 03:57 AM, Andreas Leitgeb wrote:
[ SNIP ]
As I explained already once, a snippet like this:

Eric said:
>> Let me [...]

does NOT say that the words "Let me" were Eric's, but instead implies
that Eric Sosman had quoted someone who wrote those words, because there
are two levels of difference in indentation.
[ SNIP ]

That's strictly true, but Jesus, I hope nobody does that. :) I've seen
a few too many flame wars start from attribution confusion caused by
quote-finessing like that.

AHS
 

Lew

Andreas said:
Lew said:
Andreas said:
Lew wrote:
Sanny, please attribute your citations.
Sanny wrote: [sic*]
Is there [...]
Eric Sosman wrote: [sic*]
Let me [...]
Sanny wrote:
I didnt [...]
*: Two of three attributions (intendly?) wrongly indented.
Lew, please indent attributions with the correct number of ">"s.
Obviously you neglected to see how far back these quotes came. The
indentations were correct. Check again.
[Andreas]
I said they were wrongly *indented*, not that they were wrongly placed.

As I explained already once, a snippet like this:

Eric said:
>> Let me [...]

does NOT say that the words "Let me" were Eric's, but instead implies
that Eric Sosman had quoted someone who wrote those words, because there
are two levels of difference in indentation.

TFB, dude.
Had you instead written:

Eric said:
> Let me [...]

it would have been correct, and it would have remained clear in further
followups, each of which adds another ">" in front of all the lines
(except the new initial attribution line).

PS: there was another [sic] in my post, though: the word "intendedly"
had lost one of its "ed"s without my noticing.

PPS: The greater the need to spell out exactly who wrote what, the less
worthwhile it really is to contribute to that particular subthread. :)

**** that. I'm sticking with the indentation Thunderbird gives me, and the
occasional reinforcement with reminders of who's speaking. Work it out or
don't read it, I don't care which.
 

Lew

Patricia said:
Lew wrote:
As far as I can tell, Lew-Boy, ...

[I had not previously seen appending "-Boy" to the name as a convention
for addressing people in USENET articles, but I'll go along with
whatever mode of address the person I'm responding to seems to prefer.]

It's an informal mode of address popular in the U.S. It rhymes with "Danny
Boy", the title of the famous song set to an English folk tune. It's a rubric
often used between friends, as in, "Hey, Jimmy-Boy! How the hell have you
been, you old so-and-so?"

I suppose you're complaining that I was being too informal. Sorry. You, of
course, Dr. Shanahan, do not have to ask for the privilege of being informal
with me. I would be honored to have you call me "Lew-Boy".
 

Lew

Andreas said:
Lew said:
TFB, dude.
I wouldn't care a dime, either, if you weren't always so
f'ing insistent on others following usenet [sic] conventions.

You mean like actually attributing citations and providing enough useful
information about a problem that someone could help them?

Yeah, those are pretty fucking arbitrary conventions.
 

Paul Cager

It's an informal mode of address popular in the U.S.  It rhymes with "Danny
Boy", the title of the famous song set to an English folk tune.  It's a rubric
often used between friends, as in, "Hey, Jimmy-Boy!  How the hell have you
been, you old so-and-so?"

Describing "Londonderry Air" as an _English_ folk tune could get you
into troubles [sic].

P.S. I didn't realise your use of "-Boy" was just informal
friendliness. I'd assumed you used it as a mild insult. Is that not
the case?
 

Roedy Green

You claim inefficiency in something when you have no evidence whatsoever
that it is inefficient.

If you examined the code generated, you might be pleasantly surprised.
I did a fair bit of looking at the code Jet generated and came to the
conclusion it would nearly always generate better code than a human
would. It was not only optimising cycles, it was optimising pipelines.

--
Roedy Green Canadian Mind Products
http://mindprod.com
Politicians complain that Kindles and iBooks are killing jobs by
destroying the paper book industry. I see it as: they have created a way
to produce books for less than a third of the cost, without destroying
forests or emitting greenhouse gases in the process. They have created
wealth. They are encouraging literacy and cutting the costs of education.
 

Lew

Roedy said:
If you examined the code generated, you might be pleasantly surprised.
I did a fair bit of looking at the code Jet generated and came to the
conclusion it would nearly always generate better code than a human
would. It was not only optimising cycles, it was optimising pipelines.

Excellent point. HotSpot also optimizes code to a good level, thanks to its
dynamic nature. I don't know how to examine its results, although that would
be interesting to capture somehow.
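
(For what it's worth, HotSpot can be made to show its work: with the
hsdis disassembler plugin installed, the JVM's diagnostic flags will
dump the JIT-compiled code for each method. The class name below is
just a placeholder.)

java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly SomeBenchmark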
 

Owen Jacobson

On 4/26/2011 10:40 PM, Travers Naran wrote:
[...]

The obvious possibility is a random mix of left and right shifts, with
the two directions equally probable. In that case, the hardware branch
prediction would mis-predict, on average, half the time.

There are worse distributions. Consider a program that alternates
between the two possible branch outcomes, assuming a two-bit saturating
branch prediction counter like those found in several generations of
Intel chips: it's possible that the sequence will start by acting
opposite to the prediction and flipping the predicted direction for the
next branch (from b01 to b10 or vice versa). A strictly alternating
sequence of branches would then be mis-predicted *every time*.
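
To put a number on that from Java, a crude harness along these lines (my
own sketch; the class name, constants, and iteration counts are all
illustrative) will typically show the branchy shift running measurably
slower on a random mix of directions than on a constant one:

import java.util.Random;

public class ShiftBench {
    // the branchy version from Travers's example
    static int shift(int a, int b) { return b > 0 ? a << b : a >> -b; }

    static long run(int[] bs) {
        long start = System.nanoTime();
        int acc = 1;
        for (int i = 0; i < 100000000; i++) {
            acc = shift(acc | 1, bs[i & (bs.length - 1)]);
        }
        long elapsed = System.nanoTime() - start;
        if (acc == 42) System.out.print("");  // keep acc live so the loop isn't dead code
        return elapsed;
    }

    public static void main(String[] args) {
        int[] constant = new int[1024];  // always left shift: perfectly predictable
        int[] mixed = new int[1024];     // random directions: worst case for the predictor
        Random rnd = new Random(0);
        for (int i = 0; i < 1024; i++) {
            constant[i] = 3;
            mixed[i] = rnd.nextBoolean() ? 3 : -3;
        }
        for (int warm = 0; warm < 3; warm++) { run(constant); run(mixed); }  // JIT warm-up
        System.out.println("predictable: " + run(constant) / 1000000 + " ms");
        System.out.println("random mix:  " + run(mixed) / 1000000 + " ms");
    }
}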

One of the things that could've made Itanic so cool was the ability to
pre-emptively evaluate both sides of a branch, many instructions in
advance, and throw away all the alternatives but the one that "actually
happened". Unfortunately, programming around this was really hard even
for experienced compiler writers, and the chips were notoriously
power-hungry and expensive, so it never really took off.
When the hardware branch prediction gets it wrong, the result is a
complete pipeline flush. With deep pipelines and multiple instructions
in each pipeline stage, a pipeline flush costs the equivalent of dozens
of simple arithmetic operations.

My question is whether the measurement was from the actual program, or
from a benchmark that may differ from the real program in ways that
affect branch prediction.

My question is "who cares?"

There are lots of ways to squeeze more performance out of an
otherwise-optimal algorithm: make better use of vector operations,
migrate some or all of the work to the GPU (a massively-parallel
floating point unit), or find ways to avoid running it at all.
Measuring the cost of an if statement is pretty far down the list of
tuning choices.

(Having seen examples of Sanny's code in other newsgroups, I am
extremely doubtful that he's reached the "optimal algorithm" stage.
However, I haven't seen the code in question, so it's possible.)
Why bother? Your sample code only has one conditional branch,
"jump-less-than to L1". Unconditional jumps to a fixed target, such as
"jump L2" are always correctly predicted, and take about the same time
as a simple arithmetic operation.

Unless they force a cache stall. For a routine as trivial as the one
Travers posted, it's unlikely that the final 'return' would be on a
separate cache line from the start of the routine, but in a longer
routine, replacing the jump-to-return with a return might actually keep
the end of the routine out of cache.

Of course, if the code that called this routine has fallen out of
cache, the return instruction will force a cache stall, instead.
The method is short enough that the JVM will inline it anywhere it is
frequently called, so we don't have to worry about the return.

Agree. Even though it's not actually magic and doesn't always produce
totally optimal (either in time or in space) code, the JIT compiler
produces surprisingly good code most of the time. Second-guessing it
down at the level of individual machine instructions is the sort of
thing that impresses bystanders but rarely accomplishes anything useful.

-o
 

Lothar Kimmeringer

Travers said:
The usual rules for optimization:

1. Profile your code
2. Determine where your code is spending most of its time
3. Re-visit your algorithm and decide if you are using the right algorithm.
3a. If so, optimize the methods that are consuming most of the time.
3b. Implement a better algorithm

3c. Run your test cases to check that nothing is broken.
3d. Realize that you don't have any.
3e. Roll back your changes using SVN -> Revert.
3f. Write a test case checking the current algorithm and the boundaries
    where it stops working, and run it (a sketch follows below).
3g. Go back to 3b, this time skipping 3d - 3g.

...and repeat these processes.
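
By way of illustration (my sketch, not Lothar's): a bare-bones boundary
test for the shift routine discussed up-thread, checking b = 0 and the
extreme shift distances:

public class ShiftBoundaryTest {
    static int shift(int a, int b) { return b > 0 ? a << b : a >> -b; }

    public static void main(String[] args) {
        check(shift(1, 0) == 1, "b = 0 should leave the value alone");
        check(shift(1, 31) == Integer.MIN_VALUE, "1 << 31 lands on the sign bit");
        check(shift(-1, -31) == -1, "arithmetic right shift preserves the sign");
        System.out.println("all boundary checks passed");
    }

    static void check(boolean ok, String what) {
        if (!ok) throw new AssertionError(what);
    }
}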


Regards, Lothar
--
Lothar Kimmeringer E-Mail: (e-mail address removed)
PGP-encrypted mails preferred (Key-ID: 0x8BC3CD81)

Always remember: The answer is forty-two, there can only be wrong
questions!
 

Michael Wojcik

Owen said:
One of the things that could've made Itanic so cool was the ability to
pre-emptively evaluate both sides of a branch, many instructions in
advance, and throw away all the alternatives but the one that "actually
happened". Unfortunately, programming around this was really hard even
for experienced compiler writers, and the chips were notoriously
power-hungry and expensive, so it never really took off.

Eh? Speculative execution is hardly a rare CPU feature, among modern
general-purpose CPUs. At least one reference[1] claims "all modern
CPUs" have spec-exec - which seems dubious, but I suppose one could
quibble over what a "modern" CPU is. x86 CPUs have had it at least
since the Pentium Pro (P6).[2] PPC has had it since at least the 603.[3]

If it's branch predication (note I'm talking about *predication*, not
*prediction*, here) you're thinking of, that also antedates Itanium
and continues to be implemented in modern CPUs. DEC Alpha and HP
PA-RISC had limited predication, where stores to memory on a
speculative branch could be rewritten to write to a register instead,
with a predicate-controlled store after the branch result was known;
and PPC has had aggressive predication since at least the 604.[4]

US patent 5,471,593 describes predicated execution; it was filed in
1995. And it cites prior work, including the Cydra 5 from the late
'80s - which I don't know much about, but which apparently had a basic
predicated-execution system.

So, not new in the Itanium, and not sinking with it either.

[1] http://www.hardwaresecrets.com/printpage/209
[2] http://www.datasheetarchive.com/datasheet-pdf/078/DSAE0072944.html
[3] http://www.cs.pitt.edu/~alanjawi/history.html
[4] http://www.princeton.edu/~rblee/ELE572Papers/the-effects-of-predicated.pdf
 

Thomas Richter

Travers Naran wrote:
This is why I think assembly language should be mandatory in schools. :)

An if statement gets translated into something like (using a
pseudo-assembly for readability):

;; if (b > 0) return a << b else return a >> -b
compare b to 0
jump-less-than to L1
a := a << b
jump L2
L1:
b := -b
a := a >> b
L2:
return ;; result in a

Folks, this is all too complicated. As already stated, branches can kill
performance, but it is easy enough to do without them:

(x >> r) << l

with l = max(-s,0) and r = max(s,0), where s is the signed shift
distance (positive s means shift right, i.e. s = -b in the earlier
examples) and >> is the signed right shift. If the shift is always by
the same constant, it is easiest to precompute l and r once and apply
the same shifts every time.

Alternatively, if the shift distance is dynamic, l and r can also be
computed without any branch:

int m = s >> 31;        // m = -1 if s < 0, else 0
int r = s & ~m;         // r = max(s, 0): the right-shift distance
int l = (-s) & m;       // l = max(-s, 0): the left-shift distance
int y = (x >> r) << l;  // exactly one of r and l is non-zero

Whether this is faster depends, of course, on how well the hardware
branch prediction handles the branches in the original version.
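
A quick self-check (mine, not Thomas's) that the mask trick agrees with
the branchy version; note that s here is -b relative to the earlier
examples:

public class ShiftCheck {
    static int branchy(int a, int b) { return b > 0 ? a << b : a >> -b; }

    static int branchless(int x, int s) {
        int m = s >> 31;        // -1 if s < 0, else 0
        int r = s & ~m;         // max(s, 0)
        int l = (-s) & m;       // max(-s, 0)
        return (x >> r) << l;
    }

    public static void main(String[] args) {
        for (int b = -31; b <= 31; b++) {
            if (branchy(0xBEEF, b) != branchless(0xBEEF, -b))
                throw new AssertionError("mismatch at b = " + b);
        }
        System.out.println("branchless version agrees for all b in [-31, 31]");
    }
}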

Greetings,
Thomas
 
