We need to know why we're rounding. If we're dealing with, say, currency
(or some analogous system), the proper solution is to do calculations in
an integer unit of which all other currency units are a multiple (e.g. for
Sterling, use pennies; in the USA, use cents; in the Eurozone, use euro
cents), and to establish a protocol for dealing with calculations that
don't fit into this process (e.g. interest calculations). If we're dealing
with calculations that simply require a neatening off for display
purposes, on the other hand, then the proper solution is to round the
text representation, not the value itself.
Not many people posting here seem to believe there are real and
practical reasons for rounding values to so many decimals (or rounding
to the nearest fraction, a related problem).
Currency seems to be the best-understood case. Storing values in
floating point as dollars/pounds/euros, and rounding intermediate
calculations to the nearest cent/penny (0.01), works perfectly well for
a typical shop or business invoice. Large banks adding up accounts for
millions of customers, governments and so on will, I'm sure, have their
specialist developers.
Another example is CAD (drawing tools), where input is inherently noisy
(23.423618182 mm) and the usual practice is to round ('snap') to the
nearest aesthetic value, depending on zoom factor and scale and so on:
23.0, 23.5, 23.42 and so on. Otherwise you would get all sorts of
skewed lines.
There is still a noise factor present (the errors we've been
discussing), but you would need to zoom in by a factor of a billion to
see them. In typical printouts things look perfect. In fact it's
interesting to zoom in and see these errors come to life on the
screen.
Also, everything is often stored in, say, millimetres, while the user
might be working in inches, so the rounding needs to be done in inches
(say hundredths of an inch, which would mean snapping to the nearest
multiple of 0.254 mm). Again that's an approximation, but it works well
enough (and it allows designs created with different units to be
combined).
Actually, rounding for printing purposes is probably not done much
outside printf() and similar functions. In fact, perhaps it's because
printf() does round floating-point numbers, and therefore shows a
value that is only an approximation, that so much misunderstanding
arises. Maybe it should indicate (with a trailing ? perhaps) that the
value printed is not quite right unless explicitly told to round.
Bart