Basil said:
The above header is not part of the C++ standard. However, the header <strstream> with its deprecated strstream classes is. I assume you just mistyped its name and forgot an appropriate namespace 'std' qualification (or a using directive).
int main() {
    wchar_t ff[10] = {'s', 'd', 'f', 'g', 't'};
    istrstream b1(ff);
    return 0;
}
Error message: Could not find a match for 'istrstream::istrstream(wchar_t *)'.
'istrstream' is for narrow characters and there is no wide character version, as 'istrstream' is not a class template like its replacement 'basic_istringstream' (which, at least conceptually, uses 'std::basic_string' as its representation). This should work:
std::wistringstream b1(ff);
Note that 'ff' has to be null terminated for this to work. Since the array 'ff' has more elements than are mentioned in the initializer list, the remainder is filled with null characters. I would consider this a pure accident and would rather write it like this:
wchar_t ff[] = L"sdfgt";
String literals, whether using narrow or wide characters, are always automatically null terminated.
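Putting the pieces together, a minimal sketch of the corrected program might look like this (the loop that echoes the characters back is just for illustration):

#include <sstream>
#include <iostream>

int main() {
    wchar_t ff[] = L"sdfgt";        // null terminated wide string literal
    std::wistringstream b1(ff);     // wide character string stream
    wchar_t c;
    while (b1.get(c))               // read the characters back one by one
        std::wcout << c;
    std::wcout << L'\n';
    return 0;
}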
1. Can I have a Unicode stream?
You can have a wide character stream. Unicode is an external encoding and it does not make much sense to talk of a Unicode stream (*). You can have Unicode encoding of the stuff written externally, if the implementation ships with an appropriate code conversion facet ('std::codecvt') or if you have a suitable implementation thereof (e.g. Dinkumware, <www.dinkumware.com>, offers a library doing things like this; you can implement it yourself if you want to).
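Just to illustrate the mechanism (not any particular vendor's library): such a facet is installed by imbuing the stream with a locale that contains it. The facet used below, 'std::codecvt_utf8', is an assumption; it comes from a later revision of the standard and may not be available to you. A Dinkumware or hand-written facet would be plugged in the same way.

#include <fstream>
#include <locale>
#include <codecvt>   // assumed to provide std::codecvt_utf8

int main() {
    std::wofstream out;
    // Install a locale whose codecvt facet converts the internal wide
    // characters to UTF-8 when writing; the locale owns the facet object.
    out.imbue(std::locale(out.getloc(), new std::codecvt_utf8<wchar_t>));
    out.open("data.txt");
    out << L"some wide character text\n";
    return 0;
}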
(*) At least conceptually this should be true. Unfortunately, the Unicode people messed up Unicode entirely and a program processing Unicode cannot really be completely Unicode agnostic: special treatment of combining characters is necessary at least.
2. If it is impossible, can I work with Unicode without the OS tools?
I wouldn't call it impossible. Inconvenient may be a better term. However, processing of Unicode is always inconvenient. This was apparently a major design goal of Unicode, although the stated goals were somewhat different...
3. Are there other compilers that support Unicode streams?
Standard conforming implementations at least allow processing of Unicode by means of the code conversion facets. However, the C++ standard does not define which external encodings need to be supported. Internally, C++ is guaranteed to process wide characters. However, these may be - and on some platforms normally are - 16 bit entities, which are not sufficient to represent every Unicode character in one entity (Unicode code points need 21 bits). Of course, even 32 bit entities would be insufficient due to stuff messed up by the Unicode people (notably combining characters).
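To make the 16 bit problem concrete, here is a small sketch; it assumes a platform where 'wchar_t' has 16 bits (as is typical e.g. on Windows), in which case a character outside the basic plane has to be stored as a surrogate pair:

#include <iostream>
#include <string>

int main() {
    // U+1D11E (musical G clef) does not fit into a 16 bit wchar_t and is
    // stored as the surrogate pair D834 DD1E on such platforms.
    std::wstring clef(L"\xD834\xDD1E");
    std::wcout << clef.size() << L'\n';   // prints 2: two units, one character
    return 0;
}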
The C++ view of character processing is that each character entity (i.e. each 'char' or 'wchar_t') represents a complete character. Possible multi-width encodings (UTF-8, UTF-16) are transformed to or from the internal representation during reading or writing using the 'std::codecvt' facet (with appropriate template parameters). Since 'wchar_t' is often 16 bits rather than the required 21 bits, this view is somewhat thwarted. Processing can still be done using the C++ mechanisms, e.g. using 'std::basic_string<wchar_t>' (aka 'std::wstring'), but it becomes much more complex. Of course, you will find the same complexity with other processing systems, too.
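To show where such a conversion hooks in, here is a skeleton of a user-defined conversion facet. The class name and the deliberately trivial byte-for-byte conversion are only placeholders; a real UTF-8 or UTF-16 facet would put the actual encoding logic into do_in() and do_out():

#include <locale>
#include <cwchar>

class trivial_codecvt: public std::codecvt<wchar_t, char, std::mbstate_t> {
protected:
    // Reading: convert the external 'char' sequence into internal 'wchar_t'.
    virtual result do_in(std::mbstate_t&,
                         const char* from, const char* from_end,
                         const char*& from_next,
                         wchar_t* to, wchar_t* to_end,
                         wchar_t*& to_next) const {
        while (from != from_end && to != to_end)
            *to++ = static_cast<wchar_t>(static_cast<unsigned char>(*from++));
        from_next = from;
        to_next = to;
        return from == from_end? ok: partial;
    }
    // Writing: convert the internal 'wchar_t' sequence into external 'char'.
    virtual result do_out(std::mbstate_t&,
                          const wchar_t* from, const wchar_t* from_end,
                          const wchar_t*& from_next,
                          char* to, char* to_end, char*& to_next) const {
        while (from != from_end && to != to_end)
            *to++ = static_cast<char>(*from++);
        from_next = from;
        to_next = to;
        return from == from_end? ok: partial;
    }
    virtual bool do_always_noconv() const throw() { return false; }
};

A facet like this would then be installed into a stream using imbue(), as shown above.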
4. What about Unicode streams in the standard?
As mentioned above, there are wide character streams, but the standard does not specifically address Unicode streams.