James Kanze said:
Alf P. Steinbach /Usenet said:
* James Kanze, on 30.08.2010 17:05:
[...]
And of course, as good people here noted, why would you not
consider assembly?
Because Windows doesn't define the assembler level API.
Uh, it does. That's what an ABI means: defining that level.
if there is one thing I think Windows got right, better than
most Unix variants and friends did, it is that it tends to
define things down to the binary level, so although it may be
painful to interop at these levels in many cases, it is at
least possible...
I don't see what that buys you. However...
in nearly all forms of Unix, most binary details are left up
to the implementation (including flag bit constants, numerical
magic values, ...). only source-level compatibility is really
considered, and even then it is a bit hit or miss, as a lot is
still only vaguely defined and tends to differ from one system
to another.
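a minimal sketch of what "flag bit constants left to the implementation"
means in practice (the specific values below are only illustrative, and
the whole thing is just a sketch):

    // why numeric "magic values" aren't portable across Unix variants:
    // the open() flag O_CREAT is a different bit pattern on different
    // systems (e.g. octal 0100 on Linux, something else on the BSDs),
    // so hard-coding the number bakes one particular ABI into the code.
    #include <fcntl.h>

    int open_log_bad(const char *path) {
        // only works where O_WRONLY|O_CREAT happens to encode as 0101
        return open(path, 0101, 0644);
    }

    int open_log_ok(const char *path) {
        // the symbolic names resolve against whatever ABI the local
        // headers describe, so this stays source-portable
        return open(path, O_WRONLY | O_CREAT, 0644);
    }

the same kind of thing applies to errno values, ioctl numbers, struct
layouts, and so on.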
granted, there may be practical reasons for doing this, but
it doesn't help matters much, especially since no comprehensive
binary-level specifications have been undertaken.
for example, with Linux (theoretically a single OS), it is a
bit hit or miss whether binary code will even work between distros,
or between different versions of the same distro (since many
of the library writers see no problem in making changes to
library headers and APIs which tend to break binary code,
seeing things like changing structure layouts or magic numbers
as "innocent changes"...).
This is a real problem. Technically, it's not a problem with the
OS, since most of the libraries are third party, and not part of
the OS, but if you're trying to get something to work, it really
doesn't matter.
Note that Windows is even worse, however. You have to ensure
that all of the libraries were compiled with the same compiler,
using the same options. (The distributed binaries for Boost
didn't work with our system. Nor did those created by another
group in the company, using the same compiler, but slightly
different options.)
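an invented illustration of how "same compiler, slightly different
options" bites: a struct whose layout depends on a build setting, so two
binaries built from identical source disagree about it (the names and
the macro are made up; a well-known real-world version of this is MSVC's
iterator-debugging settings affecting the layout of standard library
containers):

    // one project defines LIB_DEBUG_CHECKS, the other doesn't; both
    // include the same header, yet they compute different sizes and
    // member offsets for 'job':
    struct job {
    #ifdef LIB_DEBUG_CHECKS
        unsigned long debug_magic;   // extra bookkeeping field
    #endif
        int   id;
        void *payload;
    };

    // a library built with the define and an application built without
    // it will pass 'job' objects of different sizes across the boundary,
    // and each side reads the other's fields at the wrong offsets.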
this usually only pops up when using C++ across library borders, though...
if one only uses C-level APIs across these borders, these problems are
greatly reduced, as the "which compiler with which options" issue mostly
disappears.
but, even then, one has to be fairly rigid about coding practices to avoid
many of the subtle issues which tend to pop up with more "casual" API-design
practices (even something as simple as passing and returning structs by
value, vs passing pointers to them, may foul things up, as different
compilers disagree over how to pass or return various kinds of struct, how
to pad misaligned struct members, the exact size/alignment of struct
arrays, ...). for example:
CC-A: expects a pointer as a hidden first arg for returning a given struct;
CC-B: expects this struct to be put in registers (say, the < 12-byte
EAX/ECX/EDX interpretation).
CC-A: expects to pass an internal reference to a struct on the stack;
CC-B: expects the whole struct to be placed on the stack.
....
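one way C-level APIs sidestep this (a hedged sketch; all the names here
are invented): never let a struct cross the boundary by value, only by
pointer, with an explicit size that doubles as a crude version check:

    #include <stddef.h>

    #ifdef __cplusplus
    extern "C" {
    #endif

    typedef struct fx_rect {
        int x, y, w, h;
    } fx_rect;

    /* risky: returning by value invites the hidden-pointer vs. register
       disagreements sketched above */
    fx_rect fx_get_bounds_by_value(void);

    /* safer: the caller owns the memory and only a pointer crosses the
       boundary; out_size lets a newer DLL notice an older caller */
    int fx_get_bounds(fx_rect *out, size_t out_size);

    #ifdef __cplusplus
    }
    #endif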
but, otherwise, at the C level things tend to be much more solid, whereas
the C++ level is an inter-compiler mess of sorts.
except WRT Cygwin, which adds its own bizarreness and which IMO, as a
matter of policy, should not be used to compile DLLs... (MinGW and MSVC
are fairly safe though, IME...).
however, most open-source code is difficult to get to build even on
Cygwin, much less MinGW, and MSVC typically requires a lot of internal
tweaking (as, sadly, even most "portable" OSS code tends to use the
occasional GCC-ism here or there...).
I fear you're right.
Most commercial libraries seem to realize the importance of
backward compatibility; if your code worked with version x, it
will work with version x+1. So if some new code requires a
newer version of the library, you don't break all of the older
code. I've not found this to be the case with most free
libraries, however. (But there are exceptions both ways.)
yes.
the problem comes down a lot to API design...
keeping everything from breaking requires fairly careful API design and
maintenance, which many/most OSS libraries don't seem to bother with...
not everyone wants to write libraries, say, with the look and feel of
OpenGL, although GL is a good example of a library/system which has done
notably well at avoiding versioning issues...
DirectX doesn't do as well, since an app usually has to consider issues
related to the particular version of the library it is developing
against, API versions, ..., and in a few cases DX has dropped little-used
features, potentially breaking any (likely rare) apps which may have
depended on them.
but, even then, DX is still a lot better than many OSS libraries in these
regards (many of which make little real effort to address the matter of
versioning, ...).
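the GL pattern, roughly, is that feature use is gated on a runtime query
rather than on whatever headers the app was compiled against; a rough
sketch, assuming a GL context is already current (glGetString is a real
call, the extension name is just a well-known example, the rest is
illustrative):

    // (on Windows, <windows.h> must be included before <GL/gl.h>)
    #include <GL/gl.h>
    #include <cstring>

    static bool has_extension(const char *name) {
        // classic GL 1.x/2.x style query; newer GL versions expose the
        // same information differently, but the principle is the same
        const char *exts =
            reinterpret_cast<const char *>(glGetString(GL_EXTENSIONS));
        return exts != nullptr && std::strstr(exts, name) != nullptr;
    }

    void draw_scene() {
        if (has_extension("GL_ARB_vertex_buffer_object")) {
            // newer path, typically via function pointers fetched at
            // runtime (wglGetProcAddress / glXGetProcAddress)
        } else {
            // old, never-removed fallback path
        }
    }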
but, yes, keeping one's bit flags, magic values, struct layouts, ... "set in
stone" (or avoiding them altogether), is a little more effort (and many OSS
libs don't bother).
directly using C++ across a library boundary is a practice I personally
think is nearly the opposite extreme, as it provides almost no protection
from compiler-dependent features or from versioning (since something as
"innocent" as adding a new virtual method to a class may change the vtable
layout of both the class and any other class which inherits from it, thus
breaking binary interop, ...).
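an invented sketch of why "just adding a method" breaks things when
classes cross the boundary directly:

    // version 1 of an exported interface:
    struct Renderer_v1 {
        virtual void begin_frame() = 0;   // first vtable slot
        virtual void end_frame()   = 0;   // second vtable slot
        virtual ~Renderer_v1() {}
    };

    // version 2 inserts a new virtual in the middle:
    struct Renderer_v2 {
        virtual void begin_frame() = 0;   // still the first slot
        virtual void set_clip()    = 0;   // new: takes the second slot
        virtual void end_frame()   = 0;   // pushed down one slot
        virtual ~Renderer_v2() {}
    };

    // a plugin compiled against v1 calls end_frame() through the second
    // slot; loaded against a v2 host it silently calls set_clip()
    // instead. the usual mitigations: only append new virtuals at the
    // end, freeze old slots forever, or expose a flat extern "C" API
    // and keep the C++ entirely on one side of the boundary.

(the exact slot layout is ABI-dependent in its details, but the breakage
pattern is the same across the major compilers.)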
even in naively designed codebases, this issue may pop up, since, say,
changing something in a header and rebuilding may leave much of the rest
of the codebase "stale" and cause bugs, requiring either that code be
rebuilt whenever headers change (a huge time-wasting hassle, especially
for Mloc codebases...), or doing a "clean" build (deleting all objects
and binary code) whenever any significant changes are made.
however, recently with my coding practices, I have largely avoided both of
the above (I neither track header changes, nor usually need to bother with
clean builds).
or such...