On 03/09/12 23:25, Ian Collins wrote:
Then we should also agree that providing the means to allocate beyond
the stack without any means of validating the allocation is a
serious specification bug.
This is just NOT TRUE!
int fn(int elements)
{
    int tab[elements < 1000 ? elements : 1000]; /* at most 1000 ints; assumes elements >= 1 */
    /* ... use tab ... */
    return 0;
}
There you have limited the number of stack bytes!
But this is stupid, compared to just:
int tab[1000];
If there is an upper limit, then you should just allocate to that limit
in every stack frame.
This way you have a fighting chance of reproducing the worst-case stack usage
in testing, making sure that the program fits into what the system makes
available.
As with any fatal bug, you want to hit worst-case stack usage in testing, rather
than have your users run into it.
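To make that concrete, a test build might simply call the function with
arguments chosen to produce its largest frame (a minimal sketch; the test
function is a hypothetical name):

/* Hypothetical test: drive fn() to its worst-case stack frame. */
void test_worst_case_stack(void)
{
    fn(1000);   /* elements >= 1000 clamps to the 1000-int maximum */
    fn(999);    /* largest unclamped size */
}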
You can combine stack allocation with malloc allocation as I
have shown in a thread in comp.std.c:
#include <stdlib.h>

int fn(size_t datasiz)
{
    char databuf[datasiz < 1000 ? datasiz : 1]; /* assumes datasiz >= 1 */
    char *p = databuf;
    int result = 0;
    if (datasiz >= 1000) {
        p = malloc(datasiz);
        if (p == NULL)
            return -1; /* error handler here */
    }
    /* ... work with p, compute result ... */
    if (p != databuf)
        free(p);
    return result;
}
What makes you confident that databuf[datasiz < 1000 ? datasiz : 1] will always
succeed? And whatever that reason is, why doesn't it make you confident that
databuf[999] will always succeed? (After all, datasiz could be 999.)
If you're confident that databuf[datasiz < 1000 ? datasiz : 1] will succeed,
but not that databuf[999] will, the only reason could be that you know that
datasiz is either greater than 999 or sufficiently less than 999. That is,
you know something about the value of datasiz from doing a complicated
analysis of stack space usage, one which takes run-time values into account.
I would rather not have to do that. It's easier to be confident that
databuf[999] will work, and once you have that assurance, there is little point
in writing databuf[datasiz < 1000 ? datasiz : 1].
The function will work just fine with:
char databuf[999];
Sure, if datasiz is significantly above 999 or well below it, you save stack
space with the VLA. But by doing so, you have created a function which needs a
suitable argument value in order to hit its worst-case stack usage, and that
case must be covered by a test.
The main advantage of the VLA is simplicity: you just declare it. No OOM
checks, no strategies for different sizes, no nothing. You've lost all that in
this function.
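(For contrast, the plain VLA version that this simplicity argument describes
would be just this; fn_vla is a hypothetical name:)

int fn_vla(size_t datasiz)
{
    char databuf[datasiz]; /* just declare it: no malloc, no OOM check, no size strategy */
    /* ... use databuf ... */
    return 0;
}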
You're trying to "sell" the VLA as a stack-space saving tool, but the argument
is not very powerful.
The one aspect of this that I "buy" is that with a VLA, we can bring stack
frames closer together, eliminating the slack space created by worst-case
allocations. This brings about better locality, for better caching and paging
performance. There is the question of whether this is offset by the additional
cost of the steps to allocate the VLAs, which depends on the execution
and data access patterns of the program.
How I might exploit this would be to have a compile-time macro which can be
configured to switch all fixed-size arrays to VLAs.
I would make sure that the program passes a battery of test cases using
statically-sized local arrays, never blowing its stack.
Production builds of the program would use dynamic sizes to compact the
stack.
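A minimal sketch of that scheme (the BUF_SIZE macro and the
TESTING_FIXED_ARRAYS switch are hypothetical names):

#ifdef TESTING_FIXED_ARRAYS
/* Test builds: every buffer gets its worst-case fixed size. */
#define BUF_SIZE(n, max) (max)
#else
/* Production builds: dynamic sizes to compact the stack. */
#define BUF_SIZE(n, max) ((n) < (max) ? (n) : (max))
#endif

void process(size_t datasiz)
{
    char databuf[BUF_SIZE(datasiz, 1000)]; /* assumes datasiz >= 1 */
    /* ... */
}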
This seems to land in the category of micro-optimization.
Here we have the best path: In MOST cases we will use a cheap stack
allocation.
Stack allocation is cheapest when it is constant. When the function is entered,
all the space it needs is reserved in one step. char databuf[1000] is even
cheaper than char databuf[datasiz ...].
(But then there is that aforementioned loss of access locality. Two things
located on either side of databuf could land in the same cache line if databuf
is small enough, the whole stack fits into fewer pages, and so on.)
char databuf[datasiz < 1000 ? datasiz : 1];
In the few cases where the data size is exceptionally big,
we use the more costly heap allocation. We can handle an unbounded
amount of data.
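For example, with fn() above, both of these hypothetical calls go through the
same interface; only the allocation path differs:

fn(10);      /* small request: served by the stack buffer */
fn(500000);  /* large request: served by malloc */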
This is by no means a new trick; you've just introduced the VLA into it.