Well, what if I have just 19 bits to write? That means 2 chars and 3
bits!
This is really an algorithm matter rather than a C matter: you are
unlikely to find many systems that let you end a file on an arbitrary
bit boundary, no matter which language you use.
In the "alphabet" known to the compressor, include an artificial
"end of text" token. When you get to the end of the input source,
encode that token; then follow it with whatever amount of junk
bits you need to reach a character boundary. Your decompressor
will know not to examine the junk bits because it will decode the
end-of-text token and know not to go further.
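For concreteness, here is a minimal sketch of the output side in C.
The fixed 9-bit code width and the EOT_TOKEN value of 256 are
assumptions made up for illustration, as are the names put_code()
and flush_to_byte(); a real compressor would use whatever code space
its algorithm defines.

    #include <stdio.h>

    #define CODE_BITS 9    /* assumed fixed-width codes, for illustration */
    #define EOT_TOKEN 256  /* hypothetical: first code past the 256 literals */

    static unsigned bitbuf = 0;  /* pending bits, most significant first */
    static int bitcount = 0;

    static void put_code(FILE *fp, unsigned code)
    {
        bitbuf = (bitbuf << CODE_BITS) | (code & ((1u << CODE_BITS) - 1));
        bitcount += CODE_BITS;
        while (bitcount >= 8) {          /* emit whole octets as they fill */
            bitcount -= 8;
            putc((int)((bitbuf >> bitcount) & 0xFF), fp);
        }
    }

    static void flush_to_byte(FILE *fp)
    {
        if (bitcount > 0) {              /* pad the final partial octet */
            putc((int)((bitbuf << (8 - bitcount)) & 0xFF), fp);
            bitcount = 0;
        }
    }

    int main(void)
    {
        FILE *fp = fopen("out.bin", "wb");
        if (fp == NULL)
            return 1;
        put_code(fp, 'A');       /* a couple of example codes */
        put_code(fp, 'B');
        put_code(fp, EOT_TOKEN); /* the decoder stops here... */
        flush_to_byte(fp);       /* ...so these junk bits are never examined */
        fclose(fp);
        return 0;
    }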
In practice, you will find that it is often useful in a compression
routine to have a "flush dictionary" token with the same property
of "ignore bits until the next octet boundary". Instead of having
a specific "end of text" token, you can use the flush-dictionary
token and then end the file.
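The matching input side might look like the sketch below (same
made-up 9-bit codes; FLUSH_TOKEN is likewise hypothetical, and here
it doubles as the end-of-text token, since the decoder realigns to
the octet boundary and then simply finds end-of-file). The important
part is the realignment: after decoding the token, the junk bits are
discarded rather than misread as data.

    #include <stdio.h>

    #define CODE_BITS 9
    #define FLUSH_TOKEN 256  /* hypothetical, matching the sketch above */

    static unsigned bitbuf = 0;
    static int bitcount = 0;

    static int get_code(FILE *fp)
    {
        int c;
        while (bitcount < CODE_BITS) {
            if ((c = getc(fp)) == EOF)
                return EOF;
            bitbuf = (bitbuf << 8) | (unsigned)c;
            bitcount += 8;
        }
        bitcount -= CODE_BITS;
        return (int)((bitbuf >> bitcount) & ((1u << CODE_BITS) - 1));
    }

    int main(void)
    {
        FILE *fp = fopen("out.bin", "rb");
        int code;
        if (fp == NULL)
            return 1;
        while ((code = get_code(fp)) != EOF) {
            if (code == FLUSH_TOKEN) {
                bitcount -= bitcount % 8;  /* skip junk up to the octet boundary */
                /* a real decompressor would also reset its dictionary here */
                continue;
            }
            printf("decoded code %d\n", code);
        }
        fclose(fp);
        return 0;
    }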
The flush-dictionary token allows you to adapt to changing conditions
in the source data; you could imagine, for example, that the
statistics useful for compressing the introduction to a document
might be quite poor for compressing the technical material in the
main document itself. Also, a flush-dictionary token has the
useful property of allowing you to append the compressed versions
of several files together, and have the result decompress to the
original files appended together -- useful for building an archive,
for example.
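To make that archive trick concrete: if each member ends with the
flush token padded to an octet boundary, building the archive is
nothing smarter than appending the compressed files byte for byte.
A minimal sketch (the .z names are made up):

    #include <stdio.h>

    /* Append already-compressed files named on the command line into
       archive.z; no recoding is needed, because each member ends with
       the flush token padded to an octet boundary. */
    int main(int argc, char **argv)
    {
        FILE *out = fopen("archive.z", "wb");
        int i, c;
        if (out == NULL)
            return 1;
        for (i = 1; i < argc; i++) {
            FILE *in = fopen(argv[i], "rb");
            if (in == NULL)
                return 1;
            while ((c = getc(in)) != EOF)
                putc(c, out);
            fclose(in);
        }
        fclose(out);
        return 0;
    }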