Having to guess is unacceptable. If a function allocates a buffer by
calling malloc() and doesn't document the fact that the caller will have
to free() it, I won't be using that function, thankyouverymuch.
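For illustration only (the function and its comment are made up, not
taken from any real library), this is the sort of thing I mean by
documenting the ownership:

#include <stdlib.h>
#include <string.h>

/*
 * copy_buffer: returns a malloc()ed copy of src[0..len-1].
 * The CALLER owns the result and must free() it.
 * Returns NULL if allocation fails.
 */
unsigned char *copy_buffer(const unsigned char *src, size_t len)
{
    unsigned char *dst = malloc(len);
    if (dst != NULL)
        memcpy(dst, src, len);
    return dst;
}

Two lines of comment; without them I'd have to guess, and guessing
wrong means either a leak or freeing something I don't own.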
What does the language have to do with whether a function is documented?
If it operates on bitstreams but doesn't distinguish between a
bitstream consisting of 9 bits and one consisting of 16 bits whose last
7 bits are all zero, then it's not a valid compression function. Unless
it's meant to be lossy -- something that would need to be mentioned in
the documentation if there were any.
It is and it isn't. It's not "lossy"; that has another meaning. The last
few bits are often a problem for a bitstream, because conventional
backing store interfaces don't normally allow for storage of a specified
number of bits. So the true end of data is going to have to be tagged
specially, somehow.
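One way to do the tagging (a rough sketch; the names are mine) is to
carry the exact bit count alongside the bytes, so that a 9-bit stream
and a 16-bit stream with seven trailing zero bits stay distinct even
though both occupy whole bytes on the backing store:

#include <limits.h>
#include <stddef.h>

/* The payload plus its true length in bits.  Pad bits beyond nbits
 * are meaningless and may hold anything. */
struct bitstream {
    unsigned char *bytes;   /* CHAR_BIT bits held per element */
    size_t         nbits;   /* genuine length of the stream   */
};

/* Whole bytes needed to store the stream on this system. */
size_t bitstream_bytes(const struct bitstream *bs)
{
    return (bs->nbits + CHAR_BIT - 1) / CHAR_BIT;
}

Write nbits out as a small header, or in whatever form the interface
prefers, and the trailing padding stops mattering.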
And if that sentinel sequence occurs as valid data in the middle of the
bitstream? Or is it not intended to operate on arbitrary data?
A bitstream is data, not random bits. So a sentinel is like a zero in
a string. If you need to represent a string with embedded zeroes, you can
have an escape. But it has to be parsed by something which understands
it. As a bitstream gets passed about on systems with varying byte sizes,
it will inevitably tend to accumulate trailing bits until it is
parsed and trimmed back to its genuine size. That's unlikely to be much
of a practical problem, and we're only talking about one or two bytes
each time.
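If you'd rather have an in-band marker than a length field, the escape
works just as it does for strings. A sketch, with byte values picked
arbitrarily for the example (they have nothing to do with the function
under discussion):

#include <stddef.h>

#define END     0x00   /* sentinel marking the true end of data */
#define ESC     0x1B   /* escape introducer                     */
#define ESC_END 0x01   /* ESC ESC_END stands for a literal END  */
#define ESC_ESC 0x02   /* ESC ESC_ESC stands for a literal ESC  */

/* Copy src[0..len-1] into dst, escaping any END or ESC bytes and
 * appending one unescaped END as the terminator.  Returns the encoded
 * length; dst needs room for 2*len + 1 bytes in the worst case. */
size_t encode_with_sentinel(const unsigned char *src, size_t len,
                            unsigned char *dst)
{
    size_t out = 0;
    for (size_t i = 0; i < len; i++) {
        if (src[i] == END) {
            dst[out++] = ESC;
            dst[out++] = ESC_END;
        } else if (src[i] == ESC) {
            dst[out++] = ESC;
            dst[out++] = ESC_ESC;
        } else {
            dst[out++] = src[i];
        }
    }
    dst[out++] = END;      /* cannot be mistaken for data */
    return out;
}

The decoder reverses the substitutions and stops at the first unescaped
END, so an embedded occurrence of the marker never gets confused with
the genuine end of the stream.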
I can't be sure what the function does in the normal case, where
CHAR_BIT==8. If I had to compress and decompress data on a 9-bit
system I'd find something else to use. If I *had* to use this one for
some reason, I'd want to examine the source code and/or perform very
thorough testing; seeing it behave sensibly with a byte value of 0x101
wouldn't be enough to give me confidence that it won't corrupt my data.
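"Very thorough testing" would start with something like a round trip
over every byte value the implementation can represent, not just 0x101.
The signatures below are my guesses for the hypothetical compress /
decompress pair; link the harness against whatever the real ones are:

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Assumed signatures for the functions under discussion; the real
 * ones may well differ. */
size_t compress(const unsigned char *in, size_t inlen,
                unsigned char *out, size_t outcap);
size_t decompress(const unsigned char *in, size_t inlen,
                  unsigned char *out, size_t outcap);

/* Round-trip a buffer holding every byte value this implementation
 * can represent, whatever CHAR_BIT is.  Returns 0 on success. */
int round_trip_all_byte_values(void)
{
    size_t n = (size_t)UCHAR_MAX + 1;
    unsigned char *original = malloc(n);
    unsigned char *packed   = malloc(4 * n);   /* generous worst case */
    unsigned char *restored = malloc(n);
    int rc = 1;

    if (original && packed && restored) {
        for (size_t i = 0; i < n; i++)
            original[i] = (unsigned char)i;

        size_t plen = compress(original, n, packed, 4 * n);
        size_t rlen = decompress(packed, plen, restored, n);

        if (rlen == n && memcmp(original, restored, n) == 0)
            rc = 0;
        else
            fprintf(stderr, "round trip failed\n");
    }
    free(original);
    free(packed);
    free(restored);
    return rc;
}

Passing that still wouldn't prove much, which is rather the point.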
Any function can have bugs. The test tells you that CHAR_BIT isn't
hard-coded to 8 and that larger bytes are treated as larger. There might
be more bugs lurking there -- for example, if it uses a "rack" of 32 bits
and bytes are also 32 bits long, the "rack" might be too short. But that's
true of almost any function written in any language.
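To make the "rack" point concrete, here's a sketch of what such an
accumulator might look like (the names and layout are mine, not from
any particular implementation); the check at the top is exactly what a
version written with 8-bit bytes in mind tends to leave out:

#include <limits.h>
#include <stddef.h>

/* Pending output bits, flushed a byte at a time.  Start with the
 * struct zero-initialised. */
struct rack {
    unsigned long  bits;   /* pending bits, least significant first */
    unsigned       count;  /* how many of them are valid            */
    unsigned char *out;    /* flushed bytes land here               */
    size_t         len;    /* bytes written so far                  */
};

#define RACK_WIDTH (sizeof(unsigned long) * CHAR_BIT)

/* Append the low n bits of value (1 <= n <= RACK_WIDTH).  Returns 0,
 * or -1 if the pending bits plus the new ones won't fit the rack. */
int rack_put(struct rack *r, unsigned long value, unsigned n)
{
    if (r->count + n > RACK_WIDTH)
        return -1;                   /* rack too short on this system */

    r->bits |= (value & (~0UL >> (RACK_WIDTH - n))) << r->count;
    r->count += n;

    while (r->count >= CHAR_BIT) {
        r->out[r->len++] = (unsigned char)r->bits;  /* low CHAR_BIT bits */
        r->count -= CHAR_BIT;
        r->bits = (r->count == 0) ? 0 : r->bits >> CHAR_BIT;
    }
    return 0;
}

With CHAR_BIT == 32 and a 32-bit rack there is no headroom at all, so a
naive version that assumes it can always hold a partial byte plus a new
code would quietly drop bits; the explicit check at least turns that
into a detectable failure rather than corrupted output.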
In real life, such functions *do* have documentation -- perhaps good,
perhaps bad, perhaps incomplete, but more than just a bare declaration.
For this hypothetical example, and for the sake of discussion, I'd be
willing to accept that documentation does exist, and that it describes
the behavior adequately and correctly. Lacking that, I see little
reason to consider using it.
If you can employ perfect programmers who never make any mistakes, then
it really doesn't matter much what language you use; everything will
always go very smoothly.
The question is how the language responds when a programmer is sloppy,
when there's miscommunication (meticulous documentation, but in Chinese),
or when designs don't get done, or get compromised by urgent changes to
requirements.
We see that, given a difficult situation (an undocumented compress
function and a system which doesn't use 8-bit bytes), C doesn't respond
too badly. We can work out how the function behaves relatively easily,
and we can isolate any bugs or limitations.
No-one's saying that these are ideal circumstances, or that code
shouldn't be documented.