You mean, copying (parts of) MFC over to another platform and hacking it
to work there? Is this even legal?
Something like that. Not sure about the legal fine print; it's probably a
gray area. As I sell the end product, I didn't see restrictions on where
I compile my source. Even if I couldn't use the original code, it would be
no problem to re-implement the interface, which is pretty lean. But who
cares, when YAGNI applies in this case.
decays automatically into a const char* pointer,
Which is a great feature, and one that makes the std:: version crappy to
use in a real-life environment like WIN32 -- the one someone using VS is
pretty likely to target. And if you deal with code that originated in C,
you can replace char[] buffers without disturbing the client code.
And relying on UB whenever they happen to be passed to printf() or any
other varargs function.
It is not UB, as the implementation defines the behavior for that case.
You can find it somewhere on MSDN.
I understand MS is doing their best to make this
UB work, but it is still UB and makes the code yet more unportable.
Undefined by the standard, defined by the implementation -- the sum is well
defined, and works fine. Certainly, on a different implementation things
could change. Until that theoretical shit hits, you're way ahead, as the
alternative does not work on any platform to start with.
And if I am using _UNICODE, like all Windows code should do nowadays, such
usage compiles fine and silently misbehaves at run-time.
Dunno, I don't use _UNICODE. But in that realm you should use all the
'T' stuff consistently, and if you do, everything works fine.
Also, IMO the printf type-safety issue is quite last-century: we have a
ton of static analyzers, and this check is the most basic one. And those
who refuse to use static analysis have way bigger trouble to start with --
just look at the pages of companies creating those tools; most run their
product against open codebases and show example problems and counts. ;-)
This means you have done the conversion only halfway. And I dislike even
more the (LPCTSTR) noise advocated by MS.
I lost you here.
Just because in a Unicode-aware Windows program CString would be UTF-16.
You're welcome to use CStringA and CStringW instead if config-dependent
changes bother you. But if one wants common code that can be compiled for
either 8- or 16-bit chars, the MS way works fine, even if with some noise.
In a real setup I'd push for a global decision on one configuration, just
not support the other, and enjoy noiseless coding.
OK, I gather one could use some specialization of CStringT instead, but I
have never bothered to do this.
?
In my mind, it is easier to ignore part of std::string interface than
ignore the whole of it and learn something else instead.
So you ignore all 100+ functions and are left with just three usable
ones: the regular ctor, the range ctor and op+. Is that really worth a
standard class? Bah.
And besides,
according to the MSDN documentation CStringT has more member functions (35)
than basic_string (32). Maybe you indeed get hundreds if you count
overloads, but overloads all do basically the same job, so learning or
ignoring them is easier.
IMO overloads are still members, but we can drop that and switch to
counting useful things only.
Plus std::string does have some member functions which I use all the time
and which are lacking in CString:
- find_first_not_of, find_last_not_of
- compare with offset and length arguments
- append with offset and length arguments
The last one sounds like Mid(); the others I have never needed.
While these are handy functions sometimes, they have the common problem
that they are locale-dependent and thus the results are basically
unpredictable (and there are no variants taking explicit locale
arguments).
That is true, but it's a poor excuse to discard the most common use.
In my work the locale-specific strings are almost a different family,
while I make heavy use of strings tied to the ASCII set: "key"s used
throughout the program in all kinds of technical files and such.
The really localized things are firewalled away from the rest of the
program.
Even the wide-version CString::MakeUpper() seems to depend on the narrow
codepage locale (for reasons beyond my comprehension) and produces some
crap instead of working properly.
Probably so, but that's a different story for a different day. Messing up
the core use cases of a string is a source of problems. One violent case
to recall: we had problems with XML, and especially XSLT conversions, on
Linux -- they took literally hours. We profiled the problem down to
operator== of Glib::ustring, which we replaced (in an experimental setup)
with a simple strcmp, gaining some 100x speed with the same behavior. It
was normalizing the content before doing the actual compare -- as Unicode
is messed up with precomposed characters and other such stuff. Certainly
nothing like that was needed with our data, which had ASCII keys only, and
even if we wanted arbitrary content it would be presented in a consistent,
prenormalized way. So a simple strcmp served.
IMNSHO, imperfect generic locale support should never prevent
implementing the most frequent uses of strings.
I think this is a single point for CString so far.
And a really big one -- especially as, like I mentioned, in real life you
will have many extensions, where the string is a basic LEGO piece. The
C++98 version of std::string is just hostile to them, with its forced
copies and lack of guaranteed linear storage, while CString has all the
pieces out of the box.