[ binary search ]
It is better design to decouple the algorithm from the result
processing. If nothing else, it gives you a reusable algorithm.
The algorithm is something abstract and, as such, inherently reusable
(although the practical usefulness of an algorithm may be limited
outside the problem situation it was intended to help with): provided
you need to write code which has to search for something in a sorted
array, you can use the reusable 'binary search' algorithm instead of
inventing your own searching algorithm ('binary search' is a really
bad example here because it is so absolutely trivial to implement).
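For illustration, this is the sort of trivial, hand-rolled implementation
I have in mind (a minimal sketch of my own, assuming a sorted array of
ints; the function name is made up):

/* Hand-rolled binary search over a sorted array of ints.
   Returns the index of key in a[0..n-1], or -1 if it is not present. */
int find_sorted(const int *a, int n, int key)
{
        int lo = 0, hi = n;             /* half-open search range [lo, hi) */

        while (lo < hi) {
                int mid = lo + (hi - lo) / 2;

                if (a[mid] < key)
                        lo = mid + 1;   /* key, if present, is to the right */
                else if (a[mid] > key)
                        hi = mid;       /* key, if present, is to the left */
                else
                        return mid;     /* found */
        }

        return -1;                      /* not found */
}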
It is also possible to create a reusable implementation of an
algorithm, which amounts to exchanging the problem of implementing the
algorithm for the problem of dealing with an external dependency whose
exact behaviour cannot be determined by inspecting the source code of a
particular program alone, whose exact behaviour may change independently
of the code of the programs using it, whose implementation isn't
necessarily particularly well-suited for solving the specific problem
at hand, and whose implementation may now (or at any time in future)
contain errors independent of the source code of any program using it.
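To make the 'reusable implementation' case concrete (again my own
illustrative example, not anything from the original discussion): the C
library already ships such an implementation, bsearch() from <stdlib.h>,
and using it means the actual searching code lives in the library rather
than in the program's own source:

#include <stdio.h>
#include <stdlib.h>

/* bsearch() does the searching; the program only supplies the comparison. */
static int cmp_int(const void *a, const void *b)
{
        int x = *(const int *)a, y = *(const int *)b;

        return (x > y) - (x < y);
}

int main(void)
{
        int sorted[] = { 2, 3, 5, 7, 11, 13 };
        int key = 7;
        int *hit = bsearch(&key, sorted, sizeof sorted / sizeof *sorted,
                           sizeof *sorted, cmp_int);

        if (hit)
                printf("found at index %td\n", hit - sorted);
        else
                printf("not found\n");

        return 0;
}

Whether that is a win depends on exactly the tradeoff described above:
the program gets shorter, but the behaviour of the search now has to be
taken on faith from (or dug out of) the library.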
In a nutshell, this means such a
program-using-the-reusable-implementation-of-a-reusable-algorithm can
possibly (or probably) be written faster, but it becomes harder to
debug and maintain in exchange. Unless the functionality gained in
this way is non-trivial, this is usually a bad tradeoff, since
debugging and maintaining code is generally more difficult and
time-consuming than writing it.
I recently spent roughly a complete work day trying to 'decrypt'
Ulrich Drepper's custom macro orgies inside glibc in order to
determine what code is actually being executed by a program which
failed inside the getpsnam library routine, before I was able to
identify the actual problem, what was causing it, and fix that; this
provides a nice example of the maintenance problem I wrote about in
the first paragraph. And I'd wager a bet that a majority of
'developers' won't even contemplate trying to determine why some
seemingly correct code fails inside a library call, but will just stack
workaround onto workaround in order to mitigate the practical
consequences whenever the reliability problems caused by the actual
error happen to reach them (and this is bound to happen much less often
than the problem affecting a user of the code in some significant
way).
Better still if this can be done with little or no overhead.
Computers are really fast nowadays, and avoiding the inherent overhead
of 'the most general "optimized" solution we were able to find'
usually turns 'performance problems' into non-issues: only heavy users
of insanely huge and complex third-party libraries are still affected
by that (this is a somewhat hyperbolic statement with a true core).