On Fri, 02 Feb 2007 08:46:22 -0800, Chris Uppal wrote:
> It would be interesting to know whether the overhead of the method call
> (and its internal logic too) would be quicker than an explicit switch
> statement.
That's a very good question. If the method call is actually implemented
with a machine-language CALL instruction (at this point, for me, this is
purely a case of second-guessing the JVM's bytecode interpreter), then
call gates that result in task switches, plus other overhead involving
stacks, the various descriptor tables, etc., could cause serious
performance hits with repeated use.
Implementing a switch statement in assembler (which is actually a lot
more common than most people realize) requires conditional branch
instructions, which may or may not carry inherent overhead depending on
the architecture. My exposure to machine language is mostly limited to
the ~1 MHz 6502 (on the Commodore 64 in my elementary school years) and
Intel's 8088 through Pentium (but mostly the 8088 and the 80386), so I
don't know how CALL instructions work under the hood on RISC and other
well-known processors, or what effects they have.
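For what it's worth, here is a rough sketch of the kind of micro-benchmark
I have in mind (the class and method names are my own invention, and it
only crudely accounts for JIT warm-up and dead-code elimination):

// Crude comparison of a virtual method call vs. an explicit switch.
// javac compiles a dense switch like the one below into a tableswitch
// bytecode (essentially a jump table); a sparse one becomes lookupswitch.
public class DispatchComparison {

    interface Op { int apply(int x); }

    static final Op INC = new Op() { public int apply(int x) { return x + 1; } };
    static final Op DEC = new Op() { public int apply(int x) { return x - 1; } };
    static final Op NEG = new Op() { public int apply(int x) { return -x; } };

    // Dispatch via a virtual (interface) method call.
    static int viaMethodCall(Op op, int x) {
        return op.apply(x);
    }

    // Dispatch via an explicit switch statement.
    static int viaSwitch(int opCode, int x) {
        switch (opCode) {
            case 0:  return x + 1;
            case 1:  return x - 1;
            case 2:  return -x;
            default: throw new IllegalArgumentException("unknown op " + opCode);
        }
    }

    public static void main(String[] args) {
        final int N = 50000000;
        Op[] ops = { INC, DEC, NEG };
        long sink = 0;

        long t0 = System.nanoTime();
        for (int i = 0; i < N; i++) {
            sink += viaMethodCall(ops[i % 3], i);
        }
        long t1 = System.nanoTime();
        for (int i = 0; i < N; i++) {
            sink += viaSwitch(i % 3, i);
        }
        long t2 = System.nanoTime();

        System.out.println("method call: " + ((t1 - t0) / 1000000) + " ms");
        System.out.println("switch:      " + ((t2 - t1) / 1000000) + " ms");
        System.out.println("(ignore) " + sink); // keeps the loops from being optimized away
    }
}

(On a modern JIT the virtual calls may well be inlined after warm-up, so
the difference can shrink to nothing; the sketch only shows how one might
start measuring it rather than guessing.)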
> Actually, it's quite hard to imagine a situation where the performance of
> either technique would matter much -- bulk text is by its very nature IO
> limited, and if there isn't a huge bulk of text to scan, why should
> ultra-fast scanning be worth the effort?
I/O is a limiting factor today, but it may not be at some point in the
future. In fact, USB memory sticks and RAM disks come to mind immediately
as current real-world examples of breaking one type of such
expected/assumed I/O speed limits. With faster mass-storage technologies,
constantly improving caching algorithms (e.g., Novell's NetWare has an
extremely effective read-ahead cache that has been light-years ahead of
the industry for well over a decade, and is one of the reasons for its
long-standing reputation as the best Network Operating System for file
and print services), and a smattering of related hardware solutions
(e.g., caching SCSI and SAS controllers with vast amounts of
high-performance RAM installed), I firmly believe that I/O in most (if
not all) areas will continue to improve -- demand and marketplace
competition are two key driving forces.
Just because a situation where performance is crucial can't be fathomed
doesn't mean that one or more such situations don't, or won't, exist.
There are a number of areas in computer programming, and a few other
topics, that I know very well, but there is a great deal more that I know
little or nothing about (or don't even know exists), so I'm certain that
assuming I could consider all possible uses or scenarios for anything is
simply not realistic (although there's certainly nothing wrong with going
through the exercise of trying over any period of time).
Regarding "worth," how would one measure this? Optimization is clearly
the "right" thing to do in many cases, although it may not be
"economic"ally viable. Understanding the reality of "getting paid for our
work so we can continue to survive, etc." is obviously paramount, but if
we develop a tendancy to ignore "right" in favour of "economic"
considerations, then we also increase the risk losing at some [usually]
undeterminable point in the future. In essence, finding the optimal
balance for the long term will more likley help to consider "worth"
correctly.
Also, understanding the "bigger picture" of what an entire application is
designed for is a very helpful aid in determining which areas (if any) are
the best candidates for optimization.
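As a trivial illustration, even a crude timing harness around a suspected
hot spot (the names below are purely hypothetical) can tell you whether a
given area is worth optimizing at all, long before any serious profiler
is brought in:

public class HotSpotCheck {

    // Hypothetical stand-in for the real work being evaluated.
    static int processLine(String line) {
        return line.trim().length();
    }

    public static void main(String[] args) {
        String[] lines = new String[100000];
        java.util.Arrays.fill(lines, "  some representative input line  ");

        long total = 0;
        long start = System.nanoTime();
        for (String line : lines) {
            total += processLine(line);
        }
        long elapsed = System.nanoTime() - start;

        System.out.println("processed chars: " + total);
        System.out.println("elapsed: " + (elapsed / 1000000) + " ms");
    }
}

If that number is already a tiny fraction of the application's overall
run time, the "bigger picture" says the effort is better spent elsewhere.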
> (Except for pure intellectual interest, of course).
I used to do a lot of assembler programming (Roedy Green got me
interested in it many years ago), and I've had a somewhat keen interest in
code optimization ever since (but only as time allows).
One unexpected side effect of code optimization that I discovered early
on was that subtle "bugs" which would likely have gone undiscovered for
many years suddenly became obvious, often in an "out of the blue" sort of
way. So, in addition to the intellectual and performance considerations,
the discovery of unexpected programming errors is an excellent
justification for attempting even a small amount of optimization, since
it can result in much better code.