> With software the law is immature. To my way of thinking there are some
> implied obligations that come into effect as soon as a software program
> is published, regardless of price. Despite all the "legal" disclaimers
> to the effect that all the risk is assumed by the user of the free
> software, the fact is that the author would not make the program
> available unless he believed that it worked, and unless he believed that
> it would not cause harm. This is common sense.
Common sense has the interesting attribute that it is frequently totally
wrong.
I have published a fair amount of code which I was quite sure had at
least some bugs, but which I believed worked well enough for recreational
use or to entertain. Or which I thought might be interesting to someone
with the time or resources to make it work. Or which I believed worked in
the specific cases I'd had time to test.
I do believe that software will not cause harm *unless people do something
stupid with it*. Such as relying on it without validating it.
> I don't know if there is a legal principle attached to this concept, but
> if not I figure one will get identified. Simply put, the act of
> publishing _is_ a statement of fitness for use by the author, and to
> attach completely contradictory legal disclaimers to the product is
> somewhat absurd.
I don't agree. I think it is a reasonable *assumption*, in the absence of
evidence to the contrary, that the publication is a statement of *suspected*
fitness for use. But if someone disclaims that, well, you should assume that
they have a reason to do so.
Such as, say, knowing damn well that it is at least somewhat buggy.
Wind River Linux 3.0 shipped with a hunk of code I wrote, which is hidden
and basically invisible in the infrastructure. We are quite aware that it
had, as shipped, at least a handful of bugs. We are pretty sure that these
bugs have some combination of the following attributes:
1. Failure will be "loud" -- you can't fail to notice that a particular
failure occurred, and the failure will call attention to itself in some
way.
2. Failure will be "harmless" -- the final system image built in the run
that triggered the failure will still work, because the failure won't
affect it.
3. Failure will be caught internally and corrected.
So far, out of however many users over the last year or so, plus huge amounts
of internal use, we've not encountered a single counterexample. We've
encountered bugs which had only one of these traits, or only two of them,
but we have yet to find an example of an installed system failing to operate
as expected as a result of a bug in this software. (And believe me, we
are looking!)
That's not to say it's not worth fixing these bugs; I've spent much of my
time for the last couple of weeks doing just that. I've found a fair number
of them, some quite "serious" -- capable of resulting in hundreds or thousands
of errors -- all of which were caught internally and corrected.
The key here is that I wrote the entire program with the assumption that I
could never count on any other part of the program working. There's a
client/server model involved. The server is intended to be robust against
a broad variety of misbehaviors from the clients, and indeed, it has been
so. The client is intended to be robust against a broad variety of
misbehavior from the server, and indeed, it has been so. At one point in
early testing, a fairly naive and obvious bug resulted in the server
coredumping under fairly common circumstances. I didn't notice this for two
or three weeks because the code to restart the server worked consistently.
In fact, I only spotted it when I saw the segfault log messages on the
console...
A lot of planning goes into figuring out how to handle bad inputs, how
to fail gracefully if you can't figure out how to handle bad inputs, and so
on. Do enough of that carefully enough and you have software that is at
least moderately durable.
-s
p.s.: For the curious: It's something similar-in-concept to the "fakeroot"
tool used on Debian to allow non-root users to create tarballs or disk images
which contain filesystems with device nodes, root-owned files, and other
stuff that allows a non-root developer to do system development targeting
other systems. It's under GPLv2 right now, and I'm doing a cleanup pass
after which we plan to make it available more generally under LGPL. When
it comes out, I will probably announce it here, because even though it is
probably the least portable code I have EVER written, there is of course a
great deal of fairly portable code gluing together the various non-portable
bits, and some of it's fairly interesting.