SG,
it really depends. You see, in languages like C++ or Fortran,
encapsulated functions for seemingly simple operations typically
favor numerical robustness, [...]
An example is LAPACK's LASSQ, which uses a sum-and-scale approach to
computing a sum of squares, precisely to avoid underflows or even
overflows (in some cases it is not even necessary to form the result
explicitly).
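
To make that concrete, here is a rough sketch of the idea in C++ (my
own names, not LAPACK's actual interface). The accumulator is a pair
(scale, sumsq) with the invariant sum = scale^2 * sumsq, so no square
is ever formed at its original magnitude, and a caller who only wants
to compare magnitudes never has to multiply the pair out at all:

#include <cmath>
#include <utility>
#include <vector>

// Sum-and-scale accumulation of a sum of squares (LASSQ-style sketch).
// Invariant: sum of x[i]^2 processed so far == scale^2 * sumsq.
std::pair<double, double> scaled_sum_of_squares(const std::vector<double>& x)
{
    double scale = 0.0;
    double sumsq = 1.0;
    for (double xi : x) {
        if (xi == 0.0) continue;
        double absxi = std::fabs(xi);
        if (scale < absxi) {
            // New largest element: rescale what we have to the new scale.
            double r = scale / absxi;
            sumsq = 1.0 + sumsq * r * r;
            scale = absxi;
        } else {
            // r <= 1, so r*r cannot overflow; at worst it is too small
            // to change sumsq.
            double r = absxi / scale;
            sumsq += r * r;
        }
    }
    return {scale, sumsq};   // the norm would be scale * std::sqrt(sumsq)
}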
This is all fine and dandy. I would agree with you that it's worth
the hassle to prevent underflows/overflows for things like std::abs.
A std::abs function with overflow/underflow prevention effectively
doubles the exponent range over which it works with reasonable
accuracy. Instead of only supporting exponents in the range of
-511 ... 511 (rough estimate), it will also work in cases where
numbers with exponents in the range -1022 ... 1022 are involved
(assuming IEEE 754 doubles). That's a big difference that is worth
mentioning.
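
Just to illustrate what buys that extra range, here is a rough sketch
of the usual scaling trick (my own names, not any particular library's
implementation): factor out the larger component so the square that is
actually formed stays close to 1.

#include <cmath>
#include <complex>
#include <utility>

// Overflow/underflow-avoiding magnitude of a complex number (sketch).
double abs_scaled(std::complex<double> z)
{
    double x = std::fabs(z.real());
    double y = std::fabs(z.imag());
    if (x < y) std::swap(x, y);         // ensure x >= y
    if (x == 0.0) return 0.0;           // avoid 0/0 for z == 0
    double r = y / x;                   // r <= 1, so r*r cannot overflow
    return x * std::sqrt(1.0 + r * r);  // == sqrt(x*x + y*y), but x*x is never formed
}

// The naive version already overflows once the exponents go beyond ~511:
//   std::sqrt(z.real()*z.real() + z.imag()*z.imag())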
But I don't see where a complicated std::norm like that helps
anybody. If your program works with numbers/vectors whose squared
norm is almost denormalized, you have a problem either way. A
complicated version of std::norm isn't going to help much here except
that it extends "its domain of not-sucking" by only a teensy bit.
That's not even worth mentioning, IMHO.
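
To put rough numbers on that (inputs made up purely for illustration):
the result of std::norm is itself a squared magnitude, so by the time
a clever evaluation order could make a difference, the true result has
already left (or is about to leave) the normal double range.

#include <cfloat>
#include <complex>
#include <cstdio>

int main()
{
    std::complex<double> big(1e160, 1e160);    // true norm ~2e320 > DBL_MAX
    std::complex<double> tiny(3e-155, 3e-155); // true norm ~1.8e-309, subnormal

    std::printf("norm(big)  = %g\n", std::norm(big));   // inf, unavoidable
    std::printf("norm(tiny) = %g\n", std::norm(tiny));  // subnormal, precision fading
    std::printf("DBL_MAX = %g, DBL_MIN = %g\n", DBL_MAX, DBL_MIN);
}

A scaled version might shave off a rounding error or two for inputs
like "tiny", but that's exactly the teensy bit of extra domain I mean.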
simple std::norm version:
-------------------------
PRO: better accuracy in most cases
PRO: fast
CON: bad accuracy on a tiny fraction of cases
that are dangerously close to values where
even the complicated version won't work
complicated std::norm version:
------------------------------
PRO: Underflow prevention for an insignificant
fraction of cases (not really a PRO, is it?)
CON: slow
CON: worse overall accuracy than the simple
     version in almost every case.
Am I making sense?