The fgetc() function returns a value "as if" the (plain) char had
been unsigned to start with, so that on ordinary signed-char 8-bit
systems, fgetc() returns either EOF, or a value in [0..255]. As
long as UCHAR_MAX <= INT_MAX, EOF can be defined as any negative
"int" value.
The fputc() function should logically be called with equivalent
values, but the Standard says that it just converts its argument
to unsigned char -- so fputc(EOF, stream) just does the same thing
as fputc((unsigned char)EOF, stream).
Hosted implementations on machines in which "char" and "int" have
the same range (e.g., 32-bit char and 32-bit int) have a problem.
(The only implementations I know of in which char and int have the
same range are not "hosted", so they do not have to make stdio
work.)
Standard Library functions need not be written in portable C,
so there are any number of ways to deal with this.
Indeed. On the other hand, I think the example below is not
particularly good:
But you can use ferror() to differentiate, for output
functions, for example:
	int fputs(const char *s, FILE *stream)
	{
		while (*s != '\0') {
			if (putc(*s, stream) == EOF && ferror(stream) != 0) {
				return EOF;
			}
			++s;
		}
		return 0;
	}
The first problem is that ferror(stream) could be nonzero even
before entering this fputs(). (This is not actually harmful in
this case, as I will explain in a moment, but it suggests a
perhaps-incorrect model. Just because output failed earlier
does not necessarily mean that output will continue to fail.
Consider a floppy disk with a single bad sector, in which writes
to the bad sector fail, but writes to the rest of the disk work.)
The second problem is that the test is redundant, except on those
UCHAR_MAX > INT_MAX implementations that have problems implementing
fgetc(). The reason is that fputc() returns the character put,
i.e., (unsigned char)*s, on success. If UCHAR_MAX <= INT_MAX,
fputc() (and thus putc()) can only return EOF on failure, in the
same way that fgetc() can only return EOF on failure-or-EOF.