"James Kanze" <
[email protected]> wrote in message
<--
By "most machines", I meant most types of machines. Windows is
a bit of an exception, although even here, it's only really an
exception when using VC++, who decided not to use the standard
mechanism. (cdecl and stdcall resolve to ``extern "C"'' and
``extern "Pascal"'', I think. And thiscall is an extension of
cdecl, only relevant for ``extern "C++"''. The language also
allows a compiler to define something like ``extern "Fast"'',
although why one would use a slow call when a fast one is
available is beyond me.)
-->
most compilers I have seen have used the "__callconv" type keywords,
<--
The only one I've seen which does this is the Microsoft
compiler.
-->
GCC and most other compilers which support Windows also do this.
although granted my experience is mostly limited to Windows (where this
convention would likely be the default, as MS uses it),
<--
Most of my experience is Unix. I've only really approached Windows
in the last year (and only with the Visual Studios).
-->
yeah.
I develop on both Windows and Linux, but these days mostly on Windows, since
this is where most of the potential users are.
however, I use an "unusual" build setup, typically mixing MSVC and the GNU
toolchain (via Cygwin), which I worry may sometimes hinder others from
using the code (since they would have to figure out how to get this
build setup to work).
but, the problem is that building from the commandline is notably less
"nice" with MS's tools than with the GNU tools (MS's nmake sucks, ...).
also lame is that I often don't get around to keeping all of the
Linux-specific (and non-MSVC) code and Makefiles up to date, ... so when I do
get around to trying a Linux build, it usually takes several hours or more
to get everything to build and work again.
and Linux (where generally there is only a single calling
convention in use, hence no real need to use it...).
<--
I've not seen this under Linux. G++ does have a __attribute__
keyword, but this covers a lot more than linkage. It allows
declaring, for example, that a function never returns, or is
pure. Things so useful for the optimizer that the next version
of the standard will provide a similar mechanism.
-->
"no real need to use it" also implies "will rarely/never be seen" or "can be
assumed not to exist".
one could try using __stdcall on 32-bit Linux and see whether or not the
compiler accepts it, or what the exact result would be; I don't know...
<--
Under Windows, this technique is used for things like dllimport
and dllexport as well (since it is necessary), and on Intel
processors, there is an attribute cdecl, which will override any
compile line option telling the compiler to use a different
calling convention (which no one in their right mind would use,
since it means that you can't link with any number of other
programs---although by all rights, the "optional" form should be
the default for C++).
-->
yes.
there is also __declspec, which MSVC uses.
my compiler ended up having to support both __declspec and __attribute__
modifiers, since it partly emulates both MSVC and GCC (this is itself a bit
of a mess...). this is partly because many headers will not do anything
"sane" unless they are dealing with a compiler they know about.
#if defined(__GNUC__)
/* GCC-specific declarations */
#elif defined(_MSC_VER)
/* MSVC-specific declarations */
#elif defined(__WATCOMC__)
/* Watcom-specific declarations */
#else
#error "GASP! What is going on here!"
#endif
and having to fiddle with all these headers to add specifics for a new
compiler would be a problem.
so, it is easier to emulate whatever other compiler is being used for
building the code (using my own defines to identify my compiler in "emulation
mode"), and then try to sort out the mountains of crud which typically
results from this (IOW, the "side compiler" strategy...).
this notation makes a little more sense IMO than the 'extern
"lang" ' notation in many cases, as it allows defining the
calling convention of function pointers, ... which the latter
can't do effectively,
<--
Why not? Function pointers do have language binding as part of
their type; you can't assign the address of a C++ function to a
function pointer with ``extern "C"'' in its type. (At least two
compilers are broken in this regard, however, and don't enforce
the rule: Microsoft VC++ and g++.)
-->
possibly, but I was unaware of this working in the typical (inline) case, as
I would have thought it would require either defining them as proper
variables or as typedefs...
example:
fptr=((void *(__stdcall *)(HANDLE, LPCSTR))GetProcAddress(windll,
"GetProcAddress"))(windll, "CreateWindowEx");
although a contrived example, this sort of usage pattern does pop up
sometimes (usually in nasty code which is better off wrapped).
most commonly in my case, this sort of usage pattern pops up when calling
from statically compiled code into dynamically compiled code, since for
(likely fairly obvious) reasons, one can't call directly from a
statically-compiled piece of code into code which does not exist until
run-time.
this is also one of the few nasty places where I need glue-code to interface
with script languages (another place being to expose the contents of C
structs to dynamically-typed languages, ...), as the machinery for doing all
this transparently is still not complete.
actually, implicit C -> dynamic-typed language calls present a few other
hassles, such as needing to identify another function/prototype/... as a
"template" for the function pointer to return (although, many wrappers can
use themselves as the template, presuming they exist and are C functions).
admittedly, I don't really trust the code which does all this, and it is not
well tested, and so the strategy of using an API call to perform the
function call is preferable, rather than trying to call the thing via a
function pointer.
t=dyCall2("someDynamicFunction", x, y); //safer: avoids internal thunk-generation nastiness
the above also works in wrappers, but requires extra glue-code if the
objective is a function pointer to use as a callback or similar. current
"best practice" is not to use callbacks with dynamically-typed code, at
least until the machinery is better "proven safe", or at least "proven to
generally work"...
however, the 'extern "lang" ' notation does have the
advantage that it can be applied to an entire block of
declarations, rather than having to be repeated endlessly
for each declaration.
hence:
void (__stdcall *foo)(int x, double y);
<--
Or also:
extern "C" void foo(void* (*)(void*));
Which can only be called with a pointer to an ``extern "C"''
function.
-->
yes, the "declare a proper variable" case from before.
I have no idea if this works from casts though, or what the exact notation
would be.
<--
As is the case under Solaris, HP/UX and AIX. And probably every
other system around.
-->
most non-Windows systems on x86-64 AFAIK use the AMD64 (System V) calling convention.
<--
Only those running on AMD 64 bit hardware and the Intel clones
of them. Sparc has completely different conventions, as does
HP/UX on HP's PA based machines. IIRC, 32 bit Linux uses an
adaptation of Intel's Itanium 64 bit conventions.
-->
I did say "on x86-64" here, which naturally excludes things like SPARC, ...
IIRC, g++ on Win32 uses the same C++ ABI as on Linux, hence it doesn't play
well with MSVC for C++ code, creating a wall if one is using libraries
compiled with both compilers together...
admittedly, I personally like Win64's design a little more, as
AMD64 seems a bit complicated and over-engineered, and likely
to actually reduce performance slightly in common use cases.
<--
On 32-bit Linux, I've never used anything.
-->
on 32-bit Linux, there is only a single calling convention in
general use, so no one needs to... that doesn't mean the calling
convention is not there, only that there is no need to specify
it.
<--
Rather that the default is universal. I think the difference is
that under Windows, the default may be something like cdecl, but
many (most) of the OS interface functions use something else.
-->
yep.
<--
Yes and no. All of the C compilers I tried did the same thing.
All of the C++ compilers were different. But that's the case
almost universally today as well.
-->
there were differences. many C compilers used OMF as the object
format, and some used others (including COFF, which was typically
for DPMI-based compilers; I also seem to remember there being a
32-bit OMF, ...); there were a few others as well IIRC.
although, all this was long ago, and my memory is faded.
<--
Most of the compilers I used under MS-DOS used Microsoft's
object format, which was originally based on Intel's. The one
exception, Intel's own compiler, used the Intel format, but
could link the Microsoft object format as well.
-->
fair enough...
[...]
but, Windows and x86 (or Windows and x64 / x86-64) represent
the vast majority of total systems (desktop and laptop at
least) in use...
<--
You notice that you have to qualify it. I'd be surprised if
there were more Windows than Symbian or VxWorks, in terms of
numbers of machines running the system. And of course, Unix
still dominates servers and large scale embedded systems
(network management, telecoms, etc.).
-->
servers and embedded systems represent different domains...
a lot depends on whether the intended eventual target use of the app is:
on a server somewhere;
being used by an end-user;
on someone's cellphone;
in their microwave;
....
most of my experience is with desktop/end-user targeted software, and here
Windows is dominant...
Linux and OSX (on x86 or x86-64) are most of the rest.
ARM and PPC are used in many embedded systems. (not sure of the
OS popularity distribution for embedded systems; from what I
have seen I would guess: Linux, FreeDOS, and various
proprietary OSs...).
<--
VxWorks dominates, I think. Except on portable phones, where
Symbian dominates.
-->
fair enough.
most of my (limited) exposure to embedded systems has been with things like
Linksys routers and with Mizu and some other PRC-manufactured devices (which
often use Linux AFAICT).
granted, I am not sure where most devices are manufactured, or what the
largest manufacturing statistics tend to be.
(just as a wild guess from personal experience, I would think the PRC
manufactures most of the devices I see around, and what little I have heard
implies that a stripped-down Linux kernel is most popular there, but really
I don't know... and most of these devices are not terribly convenient to
just go and look at and try to figure out what sort of OS or HW they are
running...).
most other architectures and operating systems can be largely
safely ignored...
<--
Unless you're doing something important: a large scale server,
network management, etc. I've done far more work under Solaris
than under Windows.
-->
fair enough, but I have never really done much related to larger-scale
systems, since these are generally the sole property of people/companies/...
who actually have money...
otherwise, one may run a server which is basically just a Win XP laptop or
similar which is left always running and is maybe rebooted every so often,
like if it starts bogging down or crashes or whatever...
luckily, most often if XP crashes it will reboot anyways, minimizing the
need for manual intervention most of the time (except if it gets stuck on a
blue-screen or similar...).