James said:
They sound rather "artificial". Maybe specific definitions
introduced locally in order to characterize problems.
"Errors" are something that occurs but which shouldn't occur (in
an ideal world).
I wouldn't say "shouldn't". I mean, sure, you don't want the hard disk to
fail, but very many do. I'd prefer a definition of "error" more akin to
"an error is something you have to contingently plan for".
Programming errors are made by programmers,
Oh, those are bugs, not "errors" (not the way I was using "error"; I was
specifically trying to make the distinction between exactly that and those
expected things that you have to contingently plan for).
writing the code, and in general, can't be handled in the code
(because the presence of such errors means that we don't know
the current state of the program).
Repeating what I said can't hurt.
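To make the distinction concrete, here's a minimal C++ sketch (the names
are mine for illustration, not anything from the thread): a precondition
violation is a bug, so we assert and stop, because once it fires we no
longer trust the program's state; a missing file is an error the caller
has to contingently plan for, so the failure is part of the interface.

#include <cassert>
#include <cstddef>
#include <fstream>
#include <optional>
#include <string>

// Bug territory: an out-of-range index is a programming error, not a
// runtime condition to recover from, so we stop rather than "handle" it.
char nth_char(const std::string& s, std::size_t i)
{
    assert(i < s.size());
    return s[i];
}

// Error territory: the file may legitimately be absent; the caller must
// contingently plan for that, so failure is reported, not asserted.
std::optional<std::string> read_first_line(const std::string& path)
{
    std::ifstream in(path);
    if (!in) {
        return std::nullopt;    // expected, recoverable
    }
    std::string line;
    std::getline(in, line);
    return line;
}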
Another word (probably
better) for such errors is defects.
Well, to me that conjures up quality control from the '80s, so I just use
"bug", which is certainly the field's chosen term for such.
Other errors (dropped
connections, errors reading files, etc.) may be due to defects
in other equipment connected to the machine, but there's no
reason to abort everything when they occur.
Now those are examples of "errors" (not "bugs").
Other things, like
insufficient memory, are processed as "errors", since their
impact on the program is very much like that of a defect in
other equipment.
Yes, still "errors" to be planned for.
And of course, there are user errors.
Of course, some users ARE errors! (But I digress).
And
hardware errors in the equipment you're running on. (One of the
most difficult errors I ever had to track down was a hardware
design error which resulted in one bit in the generated machine
code changing---about once every three or four hours.)
How much of the universe a given program/system considers is
application-specific.
Before talking about appropriate strategy, I think you have to
rigorously define what types of errors you're talking about.
Identification and categorization of errors is indeed one step of a
handful that are needed for a comprehensive error "handling" plan. Before
identification of specific errors though, one needs a good grasp of the
larger scope of where the program/system resides, so in that respect
strategy definitely comes first (top-down, rather than bottom-up). It may
be "taken for granted" in mature organizations/teams/developers, but to
leave it out of the discussion would surely leave one thrashing too long
toward a solution from the bottom-up path.
I
like the term "defects" for errors in the system you deliver (as
opposed to defects or failures in connecting equipment, or user
error), and errors for the rest.
I like "bug", and I think it is the clearest term. It IS a bit more,
shall we say, "direct", though, as in: "I paid money for this buggy
software?! Fix the @#x& bugs!!". Rather than the much milder term you
suggested, which surely would only be spoken by the voice that says,
"Level 4, failed. You have, 2, more lives remaining" (ref: Minute to Win
It).
But even then, I wouldn't
ignore the fact that a defect in the software is always due to a
human error in the development process.
I'm not sure I like that outlook on things! As much as there are many
formal processes and methods and levels, I'd put up with a few bugs and buy
software from the guy who could use a few bucks (obviously not for the
flight control system on the airplane I board, duh). All said, different
domains require different strategies, methods and techniques.
Somebody made a
mistake, and at least to me, a "bug" sounds too impersonal,
random and caused by some external factor, which is not the
case.
Ohhhhh.... I get it now, you think that bugs are always somebody's FAULT.
Eww-k. (I'm gonna leave that there, because I want to say something in
response to it, and I feel that if I don't say anything I will have
implicitly responded to it by not saying anything, even though it may be
taken the wrong way.)