James said:
I'm not sure what your point is. Even if ints and chars have
the same size, they're different types.
C handled wchar_t as a typedef. That doesn't work in C++
because of overload resolution: you want wchar_t to behave as a
character type when you output it, for example; if it were a
typedef for int, you'd have a problem.
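A minimal sketch of the issue (assuming an implementation where
wchar_t and int happen to have the same size):

    #include <iostream>

    int main()
    {
        wchar_t wc = L'A';

        // In C++, wchar_t is a distinct type, so this selects the
        // character overload of operator<< and prints the letter A.
        std::wcout << wc << L'\n';

        // If wchar_t were just a typedef for an integer type (as in C),
        // the call above would pick the integer overload instead and
        // print the numeric code, e.g. 65.
        std::wcout << static_cast<int>(wc) << L'\n';
    }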
I mean, long long was introduced merely because the C committee
decided to add it in C99, for no other real reason. What will happen
if they decide in the future to add another such built-in type?
What are you disagreeing with? Neither literal type is Unicode,
unless an implementation decides to make it Unicode. Most of
the ones I have access to don't.
Are the implementations you mention compiling programs for OSes
that do provide Unicode?
Under Windows I suppose the current VC++ implements wchar_t as Unicode,
and on my OS (Linux) I suppose wchar_t is Unicode as well (though I
haven't verified the latter).
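For what it's worth, a quick way to check is something like the
following; typically it prints 2 with VC++ on Windows (UTF-16 code
units) and 4 with GCC/glibc on Linux (UCS-4/UTF-32), though the
standard leaves the size and encoding implementation-defined:

    #include <iostream>

    int main()
    {
        // Implementation-defined: commonly 2 on Windows (VC++),
        // 4 on Linux (GCC/glibc).
        std::cout << "sizeof(wchar_t) = " << sizeof(wchar_t) << '\n';
    }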
So with these new character types, will we get Unicode under OSes that
do not support Unicode? And with the introduction of these new types,
what will be the use of wchar_t?
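For reference, and assuming the new types in question are the proposed
char16_t and char32_t, the intent as I understand it is roughly this:
their literals are UTF-16/UTF-32 by definition, regardless of what the
OS itself supports, while wchar_t stays implementation-defined:

    int main()
    {
        // Hypothetical usage, assuming the proposed char16_t / char32_t.
        const char16_t u16[] = u"hello";  // UTF-16 code units
        const char32_t u32[] = U"hello";  // UTF-32 code units

        // wchar_t's encoding remains implementation-defined.
        const wchar_t w[] = L"hello";

        (void)u16; (void)u32; (void)w;
    }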
Essentially I am talking about restricting the introduction of new
features in the new standard to only the most essential ones. I have
the feeling that all these Unicode types will be messy. Why are all
these Unicode types needed? After each new version of Unicode, will it
be introduced as a new built-in type in the C++ standard? What will be
the use of the old ones? What I am saying is that we will end up with a
continuous accumulation of older built-in character types.
We are repeating C's mistakes here, adding built-in types instead of
providing these facilities as libraries.