Dan said:
Arthur J. O'Dwyer said:
Dan Pop wrote:
because the library
specification relies on INT_MAX >= UCHAR_MAX and this would be
impossible if sizeof(int) == 1.
But is it possible we might have [letting plain char be signed]
#define CHAR_BIT 16
#define UCHAR_MAX 65535
#define SCHAR_MIN -32767 /* !!! */
#define SCHAR_MAX 32767
#define INT_MIN (-32767 - 1) /* written this way so the expression has type int */
#define INT_MAX 32767
#define EOF (-32767 - 1)
Is anything wrong, from the C standpoint, with these definitions?
Yes, for a hosted implementation:
int cannot represent the whole range of unsigned char.
Okay, then I just don't see why you say that. I thought you were
talking about the EOF-and-getchar issue, but you're not. What *are*
you referring to? Chapter and verse would probably help.
Yes, I was talking about the EOF-and-getchar issue.
Why would you think otherwise?
Arthur J. O'Dwyer may be under the impression that
I hereby give you permission to call me by my first name only. ;-)
"int cannot represent the whole range of unsigned char."
is an incorrect aphorism about C, being used to criticize the code,
rather than a direct criticism of the code.
And I have no idea what you mean by that, so I'll leave it for the
moment. However, re: Dan's reply: getchar() returns either a 'char'
value, cast to 'int', or it returns EOF, which is a negative 'int'
value unequal to any 'char' value. Right?
My #definitions above provide exactly enough numbers to do this
job: the range of 'char', which is signed, goes from -32767 to 32767,
and EOF is the 'int' value -32768. So if you were talking only about
the "EOF-and-getchar issue," you were wrong, AFAICT.
However, since I posted that message I noticed a post elsethread
talking about the <ctype.h> functions, which expect to be passed an
'unsigned char' value, cast to 'int' (emphasis on *unsigned*). That
complicates things, or
so I thought... but now I'm not so sure about that, either. I think
I'm really going to need the C&V here, or you're going to have to
show me a piece of code that pokes large holes in my #definitions.
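  For concreteness, the idiom that elsethread post was describing is
the usual <ctype.h> one; count_digits below is just a made-up function
to hang it on:

  #include <ctype.h>

  /* The usual idiom: convert plain char to unsigned char before
   * handing it to a <ctype.h> function, since those functions take an
   * int whose value must be representable as an unsigned char, or be
   * equal to EOF.
   */
  int count_digits(const char *s)
  {
      int n = 0;

      while (*s != '\0') {
          if (isdigit((unsigned char)*s))
              n++;
          s++;
      }
      return n;
  }

Whether every value of (unsigned char)*s survives the trip through the
int parameter under my #definitions is, I suppose, exactly the question.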
-Arthur