Compile errors?


James Kuyper

I got the impression that you couldn't use sizeof() in an #if expression

Correct. C types, and objects of those types, don't even exist until
translation phase 7; #if expressions are evaluated during translation
phase 4. All #if expressions are evaluated using uintmax_t or intmax_t,
so even if sizeof() were allowed, the only things it could be applied to
would be expressions of those two types.
 

glen herrmannsfeldt

(snip on calling conventions and their advantages)
Not to my knowledge. Since an "internal" function is easy to detect at
compile time (declared static and never has its address taken within the
translation unit in which it's defined), it's probably not worthwhile to
make it a language feature.

Well, the main advantage (feature) and also disadvantage of internal
functions is access to variables from the host. I suppose having
file scope makes that not quite as necessary as it otherwise
would be.

The place I always miss it most is in calling a generic routine,
such as qsort, which then calls another routine. There is no
convenient way to pass other parameters through.

The problem occurs more in the case of integration, minimization,
or differential equation solving routines, such as are more often
written in Fortran. Other parameters were passed through COMMON,
as that was the only way to get them through.

There isn't so much need to pass values through to qsort(), but at
least one time I wanted to do that. You can use file scope variables
as long as you don't need to be reentrant.
It occurs to me that inlining is a (rather drastic) modification of
the calling convention. The set of functions that can be inlined is
probably about the same as the set of functions that can be called
with a non-standard (and perhaps more efficient) calling convention.

Seems to me that one advantage of inlining is that you might not
have to save all the registers (that would normally be saved).
On processors with many registers, that should be an advantage,
but on register starved systems, like IA32, and for not too
simple functions, it might be better to just save them.

In the case of an internal function, it would be a little more
obvious to the compiler that it has the choice between saving
or not.

Well, partly it goes back to the old tradeoff between time and
space. Inlining takes more space, but should be faster.

Even more, consider an internal function that is only called once.
You don't need to inline, just branch to where it is, and branch
back! (No return address to save.)

Oh well.

-- glen
 

Keith Thompson

BartC said:
I got the impression that you couldn't use sizeof() in an #if expression
otherwise sizeof(int*) would clearly be simpler. And trying it, it doesn't
seem to work.

Correct.

For this particular Microsoft bug, the fix is documented at
http://connect.microsoft.com/VisualStudio/feedback/details/721786
(as Geoff pointed out). A reasonable workaround is:

#include <stdint.h>

#if defined(_WIN32) || defined(_WIN64)
#undef INTPTR_MIN
#undef INTPTR_MAX
#undef UINTPTR_MAX

#ifdef _WIN64
#define INTPTR_MIN INT64_MIN
#define INTPTR_MAX INT64_MAX
#define UINTPTR_MAX UINT64_MAX
#else /* _WIN64 */
#define INTPTR_MIN INT32_MIN
#define INTPTR_MAX INT32_MAX
#define UINTPTR_MAX UINT32_MAX
#endif /* _WIN64 */
#endif

(assuming that _WIN32 is defined or not defined in the obvious way.)

You could wrap it in your own header, or you could probably even replace
the system header (though I'd be very hesitant to do that).
 

Geoff

"It" being the incorrect definition of UINTPTR_MAX (and, I suspect, of
INTPTR_MIN and INTPTR_MAX) on 64-bit Windows.

Microsoft says the bug was fixed in June 2011 and the fix will be
available in VC11.

Geoff: When you post a followup, please quote enough of the parent
article to provide context.

Since I was following up to my own post I thought quoting it
redundant.

I found this at the bottom of Microsoft's stdint.h:

/*
* Copyright (c) 1992-2009 by P.J. Plauger. ALL RIGHTS RESERVED.
* Consult your license regarding permissions and restrictions.
V5.20:0009 */


So now we know where it came from.
 

Keith Thompson

glen herrmannsfeldt said:
(snip on calling conventions and their advantages)



Well, the main advantage (feature) and also disadvantage of internal
functions is access to variables from the host. I suppose having
file scope makes that not quite as necessary as it otherwise
would be.

I don't know what you mean by "from the host".
The place I always miss it most is in calling a generic routine,
such as qsort, which then calls another routine. There is no
convenient way to pass other parameters through.

I suspect I've completely misunderstood what you mean by "internal
functions".

Do you mean functions defined inside another function definition? If
so, gcc offers that as an extension. But I don't see how that helps
when you want to pass parameters through qsort().

[...]
Even more, consider an internal function that is only called once.
You don't need to inline, just branch to where it is, and branch
back! (No return address to save.)

Then the compiler might as well just inline it and save two branches.
 

Keith Thompson

Geoff said:
[...]
Geoff: When you post a followup, please quote enough of the parent
article to provide context.

Since I was following up to my own post I thought quoting it
redundant.
[...]

It depends on how someone is reading.

In my case, I had already read your previous article; when I re-entered
the newsgroup the parent article was gone, and all I saw was your
followup. My newsreader does have a way to jump to the parent article,
but it's clunky; as far as I can tell there's no easy way to jump back
afterwards.
 

glen herrmannsfeldt

(snip on calling conventions, internal functions, and inlining)
I don't know what you mean by "from the host".

I think "host" is the Fortran term, though I wouldn't have thought
of it. I am not sure of the term used in other languages.

An internal function has access to variables in the function
that it is internal to. It can still access them if called
through a function pointer passed to another routine. (In the
case of recursive routines, to the appropriate invocation.)

PL/I could do that from the beginning. Fortran didn't allow
procedure pointers (or the more usual, using a function name as
an actual argument) until Fortran 2008, as I understand it, mostly
due to getting it right in the recursive case.
I suspect I've completely misunderstood what you mean by "internal
functions".
Do you mean functions defined inside another function definition? If
so, gcc offers that as an extension. But I don't see how that helps
when you want to pass parameters through qsort().

Internal functions have access to variables of the function that they
are internal to. Even if called indirectly, such as by qsort(),
the variables are still available. So, a function could set a
variable (say, for example, sort increasing or decreasing), call
qsort, which then calls the compare function. The compare function
checks that variable and returns the appropriate value.

Note that this works even in the case of reentrant functions and
recursion. Also, note that it complicates function pointers.

A more obvious way, and I don't know why they didn't do it, would
be to pass a (void *) to qsort, which it would then pass as a third
argument to the compare routine. (Look at the differential equation
solvers in Matlab and Octave, for examples. All the trailing arguments
are passed through.)

-- glen
 

Keith Thompson

glen herrmannsfeldt said:
I think "host" is the Fortran term, though I wouldn't have thought
of it. I am not sure of the term used in other languages.

An internal function has access to variables in the function
that it is internal to. It can still access them if called
through a function pointer passed to another routine. (In the
case of recursive routines, to the appropriate invocation.)

The more common term for that is "nested functions", and if a function
is defined inside another function definition I'd call the outer
function the "parent" function.

Pascal and Ada have allowed this from the beginning.

There are two common mechanisms for letting inner functions have access
to variables declared in the parent. One is a "static link", where a
function has a pointer to the data of its enclosing parent. Another is
called a "display", where a nested function has an array of links to its
parent, its grandparent, and so on, so that access through multiple
levels doesn't have to traverse a linked list.

[...]
 

Ian Collins

Robert said:
Most commonly called nested functions, Pascal being somewhat famous
for them. The nested functions can usually access variables defined
in their parent functions (etc.). IMO, they have limited utility;
(private) member functions of a class present a much more general
solution.

Although the C++ committee considered them useful enough to add Lambda
functions to the language. I guess they could also be added to C
without breaking anything. Anyone familiar with JavaScript will know
how convenient they can be.
 

Ian Collins

Robert said:
Intel certainly would not have been interested, but MS worked with AMD
from pretty early on. In fact it was MS who basically told Intel to
do AMD64 because they wouldn't support a second 64 bit x86 extension.
MS has always had a somewhat uneasy partnership with Intel, and has
usually backed AMD as a balance - if often somewhat quietly.

MS released a public AMD64 beta of XP something like nine months after
the first Opterons shipped. And pre-beta versions were available well
before that.

In any event, any notion that AMD was not primarily courting *MS* for
AMD64 support overestimates the importance of Linux in the market at
the time by an order of magnitude.

Indeed. If I recall correctly, Sun were the first to ship AMD64 boxes
running a native 64 bit OS.
 

David Brown

Well, I would consider the ratio of complexity, including the rest
of the calling sequence, in which case it isn't all that much.

Agreed - that's why I don't think it is a big issue. I just took issue
with Jacob's claim that the Linux calling convention was somehow more
complex than the Win64 convention.
For many years now, processors do enough instruction overlap that
you don't know that any time is wasted. If it saves more time later,
on average it might be less.

Again, I fully agree - and I understand there can be strange effects
from pipelining and re-ordering in some processors which could make
stack allocation before a call instruction faster than stack allocation
/after/ the call instruction. But I can't think of a way in which this
extra allocation of 32 bytes of stack would save time later - small
functions would not need the space, and big functions would typically
need more stack space and thus have to allocate stack themselves.

If there /is/ a way for this to save time on average, or numbers to
prove it, or an explanation for why the Win64 calling convention is the
way it is, I would like to hear about it. I don't know enough details
of x86-64, or Windows calling conventions, so I can only argue from
general cpu understanding and simple logic - if anyone can give better
reasoning then I would enjoy it, even if it proves me wrong!
 

David Brown

Seems to me that one advantage of inlining is that you might not
have to save all the registers (that would normally be saved).
On processors with many registers, that should be an advantage,
but on register starved systems, like IA32, and for not too
simple functions, it might be better to just save them.

Inlining can have more advantages than that. As well as avoiding saving
extra registers, it can often save a lot of data movement between
registers - you don't need to put parameters into specific registers,
and the inlined function can use any registers for return values (of
course, this is also more relevant on systems with lots of
general-purpose registers).

Inlining also gives scope for a variety of optimisations, such as
constant propagation, code hoisting, branch elimination, etc. This is
particularly true if the function is "called" with constant values in
some of its parameters.
In the case of an internal function, it would be a little more
obvious to the compiler that it has the choice between saving
or not.

Well, partly it goes back to the old tradeoff between time and
space. Inlining takes more space, but should be faster.

Due to the elimination of call overhead and the extra scope for
optimisation, inlining can often /save/ space as well as being faster.
Of course, if the inlined function is big and it is used in several
places, the result will be bigger - but small leaf functions are often
smaller when inlined.

And due to instruction caches, prefetch buffers, etc., smaller often
means faster.
Even more, consider an internal function that is only called once.
You don't need to inline, just branch to where it is, and branch
back! (No return address to save.)

gcc (with optimisations enabled, of course) will automatically inline
"local" functions that are only called once. Inlining will often give
better results (due to more optimisations being possible) than merely
changing the call/ret to branches. (Actually, it can get more
complicated than that with partial inlining, hot/cold partitioning for
locality of reference, etc.)
 

Alain Ketterlin

Keith Thompson said:
Alain Ketterlin said:
Maybe what BartC means is that: "The standard calling sequence
requirements apply only to global functions. Local functions that are
not reachable from other compilation units may use different
conventions." (from the AMD64 ABI Draft 0.99.4 -- Jan 13, 2010)

So, not "within code generated by a compiler", but "within a given
compilation unit, for local functions" only.
C of course doesn't use the terms "local" and "global" for functions.
[...]

Sure. But the ABI does.

The point is to figure out how to map the ABI's terms "local" and
"global" onto C.

Sometimes you can't. For instance, most optimizing compilers generate
specialized versions of a specific function when it appears appropriate.
They end up with a global instance, and one or more local instances, all
originating from the same C function. I guess this is one of the reasons
why the ABI document does not use C-level scopes.

-- Alain.
 

David Brown

On 17/02/2014 17:21, David Brown wrote:

Since Windows XP 64, presented to the public in April 2005, Windows has
been 64 bits. Next April it will be NINE years that Windows has been
using 64-bit code. I do not count the versions of Windows for the 64-bit
DEC Alpha, from 2001, or the 2001 version of Windows NT for Itanium.

Indeed, you do not have the figures...

What is of interest is not how long there has been 64-bit windows code -
but how long it has been in common use.

How many people actually /used/ Windows XP 64? About a dozen or so? I
was in fact one of these people - I had Windows XP 64 on my home machine
for perhaps 4 years or so. It was as far from being the "default" as
you could get, and there were virtually /zero/ 64-bit windows
applications. There were endless issues with drivers, as few
manufacturers had even heard of 64-bit windows. And there were many
compatibility and stability issues - I had some windows programs
(32-bit, obviously) that I had to run inside a virtual box machine
because they would not run on XP-64.

When Vista came out, the 64-bit version was reasonably usable (to the
extent that Vista was ever "usable"). It was not until Windows 7
arrived that 64-bit windows could be classified as "mainstream" - and
even then the 32-bit installation was most popular at the start.
Certainly most new Win7 (and Win8) installations are 64-bit now, and have been for a
couple of years, but even now the great majority of windows programs are
32-bit only.
 

David Brown

(snip, someone wrote)


Well, at least some Windows distributions give you two DVDs and you
decide which one to install. I believe Linux does that, too.

For many people "64" is bigger than "32" and so must be better.

Certainly 64-bit is the common choice for the system now, and there is
seldom good reason for picking a 32-bit version for Windows or Linux
(assuming your processor supports it, of course). My point is merely
about what is common for existing systems - it is only recently that
64-bit windows has been common. (According to Steam, about 70% of
windows systems in current use for gaming are 64-bit.)
Well, last I knew 64 bit Linux would also run 32 bit binaries.
If you install the 64 bit OS, you have your choice for which programs
you can run.

That's true, and I run several 32-bit programs on my 64-bit Linux
systems. But most people get most of their Linux software directly from
their distribution, and that mostly means 64-bit software for 64-bit
systems. So 64-bit has been the default for the majority of Linux
software for a long time, but 32-bit is still the most common for
Windows software (even on 64-bit systems).
 

Ben Bacarisse

Ian Collins said:
Although the C++ committee considered them useful enough to add Lambda
functions to the language. I guess they could also be added to C
without breaking anything. Anyone familiar with JavaScript will know
how convenient they can be.

The issues of scope and lifetime intervene here. EcmaScript's nested
functions can have indefinite lifetime, which is the main reason they
are so useful. By returning such a function from inside the scope in
which it is defined, it "captures" that environment (thereby also
extending the lifetime of the objects it refers to). In older languages
the mechanism was a purely lexical one, making it much less useful. For
example, in gcc's extension a nested function could not be called after
its enclosing function had returned.
 

Stephen Sprunk

Inlining can have more advantages than that. As well as avoiding
saving extra registers, it can often save a lot of data movement
between registers - you don't need to put parameters into specific
registers, and the inlined function can use any registers for return
values (of course, this is also more relevant on systems with lots
of general-purpose registers).

With an orthogonal register set--and x86(-64) mostly qualifies--this is
a non-issue. The compiler knows where each result needs to end up, so
it arranges the code above so that's where the result goes. It doesn't
calculate something, put the result in RBX and then copy that result to
RAX a few cycles later. Even if it did, that's free on OOO machines.

(There are a few x86(-64) instructions with fixed registers, e.g. MUL
and DIV, but in general that hasn't been true for a long time.)
Inlining also gives scope for a variety of optimisations, such as
constant propagation, code hoisting, branch elimination, etc. This
is particularly true if the function is "called" with constant values
in some of its parameters.

Indeed, and that can be a huge benefit--when inlining is possible.
And due to instruction caches, prefetch buffers, etc., smaller often
means faster.

OTOH, inlining can defeat instruction caches because the same code gets
loaded from different addresses, on top of any bloat issues.

Still, the other optimizations plus a decent prefetcher usually make
inlining a net win.

S
 

Martin Shobe

Again, I fully agree - and I understand there can be strange effects
from pipelining and re-ordering in some processors which could make
stack allocation before a call instruction faster than stack allocation
/after/ the call instruction. But I can't think of a way in which this
extra allocation of 32 bytes of stack would save time later - small
functions would not need the space, and big functions would typically
need more stack space and thus have to allocate stack themselves.

If there /is/ a way for this to save time on average, or numbers to
prove it, or an explanation for why the Win64 calling convention is the
way it is, I would like to hear about it. I don't know enough details
of x86-64, or Windows calling conventions, so I can only argue from
general cpu understanding and simple logic - if anyone can give better
reasoning then I would enjoy it, even if it proves me wrong!

The rationale given in their description
(http://msdn.microsoft.com/en-us/library/ms235286.aspx) is, "This aids
in the simplicity of supporting C unprototyped functions, and vararg
C/C++ functions."

Not sure I really buy it.

[snip]

Martin Shobe
 

glen herrmannsfeldt

(snip)
The more common term for that is "nested functions", and if a function
is defined inside another function definition I'd call the outer
function the "parent" function.

But isn't parent and child what you call the relationship in
the call tree? Seems to me you should have a different word for
this case.
Pascal and Ada have allowed this from the beginning.

I haven't done much Pascal for a while, but I think some might
have only allowed internal functions. That is, no separate compilation.
There are two common mechanisms for letting inner functions have access
to variables declared in the parent. One is a "static link", where a
function has a pointer to the data of its enclosing parent. Another is
called a "display", where a nested function has an array of links to its
parent, its grandparent, and so on, so that access through multiple
levels doesn't have to traverse a linked list.

That sounds about right.

Also, I believe Fortran still only allows one level of nesting.
That is, no internal to internal procedures. That avoids some of the
complications of variable association.

-- glen
 
