Variable-length arrays: should they be used at all?


jacob navia

On 27/06/12 18:36, Stefan Ram wrote:
Ok, I see. LIST_SIZE must be either »sizeof List« or a
programmer-written literal, such as »104«, which is
error-prone.

(One could generate a header file »#define LIST_SIZE 104«
during the make process using means beyond the C language
only.)

However, in your code, you have:

List *L = (List *)&buffer[0];

. Doesn't this also mean that »List« must be disclosed to
the client, which, as I understand it, is the actual list
header structure?

No, the actual "List" structure need not be disclosed at all, since you
always work with POINTERS to that incomplete type and never with
the type itself.

This is the beauty of C "private" types: they are REALLY private
and you have absolutely NO CLUE at all what they actually are.

Interface and implementation are then completely separated. I can
(and have) changed the list structure several times without ever
needing to change client code.

More: Old code that was compiled long ago when the list structure
was different will STILL WORK.
 

Ben Bacarisse

Eric Sosman said:
A shared stack would use less memory, unless there was some
execution path in which all functions were active simultaneously.

Very true, and it seems that memory is scarce in this example so it's
not likely to be worthwhile.

<snip>
 

Stefan Ram

jacob navia said:
On 27/06/12 18:36, Stefan Ram wrote:
However, in your code, you have:
List *L = (List *)&buffer[0];
. Doesn't this also mean that »List« must be disclosed to
the client, which, as I understand it, is the actual list
header structure?
No, the actual "List" structure need not be disclosed at all, since you
always work with POINTERS to that incomplete type and never with
the type itself.

Ok, I see.

(Possibly, a public

#define LIST_SIZE 104

can sometimes be made less error-prone, once one adds a run-time
assertion that

LIST_SIZE == sizeof PRIVATE_LIST

to the initialization function for the list package, if there is such
an initialization function (one that is called not once per list
initialization, but only once for each process using the list
package).)
 

Rui Maciel

David said:
Yep, they should be used with caution. Mind you, the same arguments apply
to using recursion.

And it is applied: it is a common rule of thumb that recursion should
simply be avoided, precisely to sidestep nasty surprises such as the
ones you've described.

How deeply you can safely recurse, and the
consequences if you go too deep, are also murky.

That's essentially the problem with VLAs; if it isn't possible to know
beforehand if a VLA will work as expected or if it will actually crash the
program, shouldn't VLAs simply be avoided?

Considering that there are other ways to reserve memory that at least make
it possible to put in place the necessary safety checks to avoid having the
program blow up in the programmer's face, using VLAs may be a risk which
isn't worth taking.


Rui Maciel
 

Ian Collins

Tim Rentsch said:
There are two distinct concerns here. Let's take them one at a
time.

First, encountering a VLA declaration during execution may blow
the stack and crash, and there is no portable way to detect or
deal with that. However, it is easy for implementations to
provide a way of doing that, without adding any new language
constructs, as I explained in another posting.

The flip side of the first concern is that, one, it often isn't a
big deal in practice; two, implementors should be encouraged to
supply a mechanism for detecting/handling VLA allocation failure,
since it is easy to provide such a mechanism; and three, VLA
support provides an important benefit that does not have the
associated risk of undetectable allocation failure, namely,
variably modified types and more specifically pointers to VLAs,
which are useful even if VLA objects are never declared.

While the "easy to provide" may be true on some platforms, it will be
false for others without memory management. Even the easy
implementations may come at a run time cost which negates the
performance advantages of VLAs over dynamically allocated memory.
 

Tim Rentsch

Ian Collins said:
While the "easy to provide" may be true on some platforms, it will be
false for others without memory management. Even the easy
implementations may come at a run time cost which negates the
performance advantages of VLAs over dynamically allocated memory.

I guess you missed the posting where I explained this. No
elaborate memory management (eg, malloc() etc) is needed --
just ordinary stack allocation, the way VLAs are usually
done. For example:

int
function_needing_a_VLA( int n ){
    ...
    double values[n][n];
    if( ! &values ){
        /* here we have detected an allocation failure,
           and the stack is not blown (although of course
           we cannot use the 'values' VLA), so the error
           condition can be handled appropriately (eg,
           returning an error flag, longjmp(), ...) */
    } else {
        /* here we can use the 'values' VLA safely */
    }
}

How would this be done? Simple. The compiler notices the check
of the address of the VLA immediately after its declaration,
and does that check _before_ adjusting the stack pointer, ie,
generated code would include an addition (to the current stack
pointer) and a compare (to a stack limit value), before installing
the new value as the current stack pointer. It's hard to imagine
an architecture where VLAs can be implemented cheaply but this
check is expensive to do; so the runtime cost is pretty close to
zero, and actually would be zero for code that uses VLAs without
any checking.

Incidentally, here I have written the "VLA-using code" as a
separate branch of the if() statement, but there is no reason
that would have to be required. I wrote it this way just to
emphasize the different possible execution paths.

Does that seem a little better now?
 

Ian Collins

Tim Rentsch said:
Ian Collins said:
While the "easy to provide" may be true on some platforms, it will be
false for others without memory management. Even the easy
implementations may come at a run time cost which negates the
performance advantages of VLAs over dynamically allocated memory.

I guess you missed the posting where I explained this. No
elaborate memory management (eg, malloc() etc) is needed --
just ordinary stack allocation, the way VLAs are usually
done. For example:

int
function_needing_a_VLA( int n ){
    ...
    double values[n][n];
    if( ! &values ){
        /* here we have detected an allocation failure,
           and the stack is not blown (although of course
           we cannot use the 'values' VLA), so the error
           condition can be handled appropriately (eg,
           returning an error flag, longjmp(), ...) */
    } else {
        /* here we can use the 'values' VLA safely */
    }
}

How would this be done? Simple. The compiler notices the check
of the address of the VLA immediately after its declaration,
and does that check _before_ adjusting the stack pointer, ie,
generated code would include an addition (to the current stack
pointer) and a compare (to a stack limit value), before installing
the new value as the current stack pointer.

Assuming you have a stack limit value available.
It's hard to imagine
an architecture where VLAs can be implemented cheaply but this
check is expensive to do; so the runtime cost is pretty close to
zero, and actually would be zero for code that uses VLAs without
any checking.

I would expect it to be expensive on most, unless they already have a
mechanism in place.
Incidentally, here I have written the "VLA-using code" as a
separate branch of the if() statement, but there is no reason
that would have to be required. I wrote it this way just to
emphasize the different possible execution paths.

Does that seem a little better now?

Not really. I'd be interested in knowing about real implementations
that have a trivial means of determining how far through the stack you are.

Even where you can check, there's always the pathological case of the
VLA consuming almost all of the stack and the next function call failing!

I guess working in a field where resources are constrained and static
analysis is frequently used to determine stack limits, I'm somewhat biased.
 

Tim Rentsch

Ian Collins said:
Ian Collins said:
On 06/28/12 06:20 AM, Tim Rentsch wrote:

There are two distinct concerns here. Let's take them one at a
time.

First, encountering a VLA declaration during execution may blow
the stack and crash, and there is no portable way to detect or
deal with that. However, it is easy for implementations to
provide a way of doing that, without adding any new language
constructs, as I explained in another posting.

<snip second>

The flip side of the first concern is that, one, it often isn't a
big deal in practice; two, implementors should be encouraged to
supply a mechanism for detecting/handling VLA allocation failure,
since it is easy to provide such a mechanism; and three, VLA
support provides an important benefit that does not have the
associated risk of undetectable allocation failure, namely,
variably modified types and more specifically pointers to VLAs,
which are useful even if VLA objects are never declared.

While the "easy to provide" may be true on some platforms, it will be
false for others without memory management. Even the easy
implementations may come at a run time cost which negates the
performance advantages of VLAs over dynamically allocated memory.

I guess you missed the posting where I explained this. No
elaborate memory management (eg, malloc() etc) is needed --
just ordinary stack allocation, the way VLAs are usually
done. For example:

int
function_needing_a_VLA( int n ){
    ...
    double values[n][n];
    if( ! &values ){
        /* here we have detected an allocation failure,
           and the stack is not blown (although of course
           we cannot use the 'values' VLA), so the error
           condition can be handled appropriately (eg,
           returning an error flag, longjmp(), ...) */
    } else {
        /* here we can use the 'values' VLA safely */
    }
}

How would this be done? Simple. The compiler notices the check
of the address of the VLA immediately after its declaration,
and does that check _before_ adjusting the stack pointer, ie,
generated code would include an addition (to the current stack
pointer) and a compare (to a stack limit value), before installing
the new value as the current stack pointer.

Assuming you have a stack limit value available.

I find it hard to believe that there's a platform in use today
where some limit value (perhaps a conservative one) could not be
discovered. However, assuming there is such a platform, there's
an easy way out - it can just always indicate that an allocation
fails. The absence of a safely allocated VLA will be detected
and handled as the program sees fit.
I would expect it to be expensive on most, unless they already
have a mechanism in place.

I have no idea why you think that. All that is needed is
(1) getting the address of the place in the stack where
the VLA would go, (2) an addition (or subtraction) to
compare against a stack limit, and (3) the availability of
a stack limit to compare against. Parts (1) and (2) are
pretty much necessary for (inexpensive) VLAs without checking;
part (3) was covered above. It's a few instructions, I would
guess, and not more than that, on most platforms out there.

So let me turn that around and ask if you can give an
example of a platform where it's easy to provide VLAs
but hard to provide checking of the kind I've described.
Not really. I'd be interested in knowing about real implementations
that have a trivial means of determining how far through the stack you
are.

Stack pointer is an address? Easy to find that address (at least
approximately)? There is a static limit (or a current limit that
can be easily computed)? For me it's hard to imagine an actual
environment (as opposed to a hypothetical one) where it would be
expensive.
Even where you can check, there's always the pathological case of the
VLA consuming almost all of the stack and the next function call
failing!

Presumably VLAs (at least "large" ones) would be used primarily
in leaf procedures, but even if they aren't, there is a way to
protect against such eventualities:

{   char test_space[ n*n * sizeof (double) + SPACE_FOR_REMAINING_CALLS ];
    if( ! &test_space ){
        // OOPS! aren't going to have enough .. handle that
        // eg, return or longjmp()
    }
}
double values[n][n];
// okay to proceed here since the earlier check worked
I guess working in a field where resources are constrained and static
analysis is frequently used to determine stack limits, I'm somewhat
biased.

Clearly the environments I'm used to working in are different from
yours. But I don't think what I've suggested (ie, a mechanism to
detect VLA allocation failure) is at odds with what you do. It may
be the case that VLAs still don't work well in your environment; if
that's how it is, well then that's how it is. However, adding this
capability makes it more likely that VLAs _could_ be useful (perhaps
in conjunction with more elaborate static analysis) in those
environments. Certainly I don't mean to advocate use of VLAs in
environments where they shouldn't be used. At the same time, I
would like them to be capable of being used more safely in
environments where using VLAs is reasonable. I think the benefit
is (relatively) large, and the cost in terms of implementation
effort is pretty small. Of course, I'm just making semi-educated
guesses about the implementation costs, so if anyone has some
information about actual platforms or implementations that would
help nail those down it would be good to hear about that.
 

Ian Collins

I find it hard to believe that there's a platform in use today
where some limit value (perhaps a conservative one) could not be
discovered. However, assuming there is such a platform, there's
an easy way out - it can just always indicate that an allocation
fails. The absence of a safely allocated VLA will be detected
and handled as the program sees fit.

While I admit I've never looked too hard, I don't recall any of the
embedded systems I've used having one. It simply isn't a very common
thing to do.
I have no idea why you think that. All that is needed is
(1) getting the address of the place in the stack where
the VLA would go, (2) an addition (or subtraction) to
compare against a stack limit, and (3) the availability of
a stack limit to compare against. Parts (1) and (2) are
pretty much necessary for (inexpensive) VLAs without checking;
part (3) was covered above. It's a few instructions, I would
guess, and not more than that, on most platforms out there.

So let me turn that around and ask if you can give an
example of a platform where it's easy to provide VLAs
but hard to provide checking of the kind I've described.

Well on most Unix systems the stack limit can be changed on a running
process, so that muddies the waters a bit. The Solaris compiler does
have a stack check option, but I believe it causes an early SEGV rather
than an error return. I know Solaris also has a mechanism for getting
the process context, which includes the stack. So yes, VLAs could be
checked, but the compiler and the OS would have to cooperate.

Having to make a function call behind the scenes removes the simplicity
and immediacy of VLAs. Does it narrow the performance gap with malloc
significantly? Maybe.

Maybe a compromise would be to standardise a function like alloca with a
requirement to perform a space check?
 

Malcolm McLean

On Saturday, 30 June 2012 10:58:52 UTC+1, Ian Collins wrote:
On 06/30/12 05:48 AM, Tim Rentsch wrote:

Maybe a compromise would be to standardise a function like alloca with a
requirement to perform a space check?
It doesn't need a requirement to perform a space check; it just needs
to be allowed to do so.

The thinking is that if the program has to respond gracefully to
out-of-memory conditions, the platform will provide some way of doing
the check. You might need to set a compiler option that entails that
alloca() goes slower. But on many platforms there's nothing you can do
on out of memory. I believe that Linux automatically kills a process
at random if system memory is exhausted. So on such a system it's
pretty pointless to do anything other than segfault, and there's no
real reason to slow down alloca().
 

Tim Rentsch

Ian Collins said:
While I admit I've never looked too hard, I don't recall any of the
embedded systems I've used having one. It simply isn't a very common
thing to do.

Perhaps they don't at present, but is it hard for them to do so?
If what's needed is easy and cheap to provide, I expect those
OS's (or whatever is the moral equivalent) that don't already
provide that would likely be upgraded to do so. Also, even if
there is no OS support to find a dynamic stack limit, the
implementation can self-impose a static limit (which could be
checked once at program startup if nothing else), and then
check VLA allocation against the static limit.
Well on most Unix systems the stack limit can be changed on a running
process, so that muddies the waters a bit.

In Linux at least, setrlimit() is defined in libc.a. No reason
it couldn't be changed to remember the current stack limit in
a global variable.

(I assume you're talking about a process changing its own
stack limit, not having it changed by a different process,
unbeknownst to the affected process. Obviously there is
no way to protect against arbitrary outside interference,
but that's true of all sorts of things, not just this.)
The Solaris compiler does
have a stack check option, but I believe it causes a early SEGV rather
than an error return.

No reason an implementation couldn't have its own signal handler
installed to get that signal, and handle it appropriately
(including re-dispatching a SEGV that arises not as a result of
VLA stack adjustment, if such is needed). The signal() function
is part of the standard C library.
I know Solaris also has a mechanism for getting
the process context, which include the stack. So yes VLAs could be
checked, but the compiler and the OS would have to cooperate.

Certainly I expect there would be some level of cooperation
between the OS and the implementation (or maybe I should
say that as there would be some level of cooperation from
the OS). However I don't think the amount of cooperation
needed is very large - it could be as simple as memory
location with a pre-defined name that stores the current
stack limit. As long as there is _some_ cooperative
mechanism, it doesn't have to be the _same_ mechanism from
platform to platform: the implementation would know
which mechanism to use for the different platforms it
supported.
Having to make a function call behind the scenes removes the
simplicity and immediacy of VLAs. Does it narrow the performance gap
with malloc significantly? Maybe.

Perhaps I am wrong in this, but I believe (or assume) that in
most cases a fairly good estimate (ie, a safe estimate, but not
wildly wrong) can be made without a function call being
necessary. However, even if a function call is needed, if it's a
user-space function call (so no kernel context switch is needed),
I would expect the performance to be better than malloc(), both
because the deallocation code is simpler, and because in many or
most cases no explicit deallocation cycles are needed -- it just
happens automatically when the stack-unwinding procedure return
is done.
Maybe a compromise would be to standardise a function like alloca with
a requirement to perform a space check?

In a sense that is what I'm proposing, although with more
integrated semantics than alloca(). A key difference between
alloca() and VLAs is, once you do an alloca(), that space is used
(ie, never deallocated) until the function exits. By attaching
the checking semantics to VLAs, space can be allocated on the
stack (perhaps with a function call overhead), but deallocation
occurs (by decrementing the stack pointer, presumably) at the end
of the compound statement the VLA is in, which could be long
before the function exit point.
 

Phil Carmody

Tim Rentsch said:
For example:

int
function_needing_a_VLA( int n ){
...
double values[n][n];
if( ! &values ){
... here we have detected an allocation failure, ....
How would this be done? Simple. The compiler notices the check
of the address of the VLA immediately after its declaration

This may have been covered elsethread (I have a lot to catch up on)
but was there any reason why you chose to decorate the test with
an & operator? Why would
if(!values) {
not be an equally workable idiom? (I like the idea of there being
some idiom for this purpose, it does leave the language a bit
fragile without it.)

Phil
--
I'd argue that there is much evidence for the existence of a God.
Pics or it didn't happen.
-- Tom (/. uid 822)
 

Phil Carmody

Tim Rentsch said:
Presumably VLAs (at least "large" ones) would be used primarily
in leaf procedures.

I wouldn't expect that at all. If it's big, it will be allocated
once early on and then acted upon many times, in my experience.
(And possibly by functions of unknown complexity and depth - say
you're creating a bitmap on the stack, and then running arbitrary
filters across that image - how much stack space does an arbitrary
filter take?)

Phil
--
I'd argue that there is much evidence for the existence of a God.
Pics or it didn't happen.
-- Tom (/. uid 822)
 

Phil Carmody

Ian Collins said:
Rui Maciel said:
jacob navia wrote:
VLAs are indispensable when you want to avoid unnecessary calls
to the expensive malloc function.

But what about the inability to gracefully recover from a memory allocation
error? If a VLA is defined with the wrong size at the wrong moment then it
appears that it isn't possible to do anything about it, nor is it even
possible to put in place any safeguard to avoid that. In fact, are there
any scenarios where a VLA defined with an arbitrary size is guaranteed to
work as expected?

No (unless the implementation defines its own error-detection
mechanism).

But the same applies to old-style constant-size arrays. If you declare

double mat[100][100];

either at block scope or at file scope, there's no mechanism to detect
an allocation failure.

But you can do a static analysis.

Once you've solved the Halting Problem!

Phil
--
I'd argue that there is much evidence for the existence of a God.
Pics or it didn't happen.
-- Tom (/. uid 822)
 

Andrew Cooper

Once you've solved the Halting Problem!

Phil

Computers are finite state machines, so the Halting Problem need not be
solved. The Halting Problem argument only applies to infinite state
machines, as any finite state machine can be tested by providing every
permutation of input, single stepping it, and asserting that the global
state never repeats for a unique input.

~Andrew
 

Ian Collins

Ian Collins said:
But the same applies to old-style constant-size arrays. If you declare

double mat[100][100];

either at block scope or at file scope, there's no mechanism to detect
an allocation failure.

But you can do a static analysis.

Once you've solved the Halting Problem!

Don't you remember the tool I wrote to analyse the base station code way
back when??
 

gwowen

Once you've solved the Halting Problem!

You only need solve the Halting Problem if you need to do static
analysis using the same algorithm on arbitrary programs / inputs.
Algorithmically determining the stack usage of a small finite number
of programs / modules with constrained inputs is very much *not* the
Halting Problem.
 

Tim Rentsch

Phil Carmody said:
Tim Rentsch said:
For example:

int
function_needing_a_VLA( int n ){
...
double values[n][n];
if( ! &values ){
... here we have detected an allocation failure, ...
How would this be done? Simple. The compiler notices the check
of the address of the VLA immediately after its declaration

This may have been covered elsethread (I have a lot to catch up on)
but was there any reason why you chose to decorate the test with
an & operator? Why would
if(!values) {
not be an equally workable idiom? (I like the idea of there being
some idiom for this purpose, it does leave the language a bit
fragile without it.)

I think both should be workable. More specifically,
if either works I think the other should work the
same way, otherwise it violates The Law of Least
Astonishment. I chose this way of writing it so
the test sticks out like a very large sore thumb, to
make it very clear to a human reader that something
unusual is going on. But as far as the compiler
goes I think the two forms should be interchangeable.
And I'm happy to leave the question of which style
is better for another time...
 

Tim Rentsch

Andrew Cooper said:
Computers are finite state machines, so the Halting Problem need not be
solved. The Halting Problem argument only applies to infinite state
machines, as any finite state machine can be tested by providing every
permutation of input, single stepping it, and asserting that the global
state never repeats for a unique input.

This reasoning is wrong both in principle and in practice.

It's wrong in practice because the number of possible states is
so large there is no way to keep track of which ones have been
seen for even very small processors -- four 32-bit registers, for
example, already gives a state space of 2**128 states, and that's
not even counting any RAM. Computers are treated like Turing
Machines because the amount of state they have is so large that
treating them like FSMs is not practicable.

It's wrong in principle because computers do input/output, and that
expands the state available without any predetermined bound. There
is no reason a program couldn't ask someone to plug in a USB drive,
or two, or twenty, or open a network connection to "the cloud",
etc. Without a fixed bound on the state, what we have is no longer
a FSM.
 

Tim Rentsch

Phil Carmody said:
I wouldn't expect that at all. If it's big, it will be allocated
once early on and then acted upon many times, in my experience.
(And possibly by functions of unknown complexity and depth - say
you're creating a bitmap on the stack, and then running arbitrary
filters across that image - how much stack space does an arbitrary
filter take?)

In that case it's safe to say that our expectations are
different.

I might modify my previous statement to say "leaf or near-leaf
procedures", meaning functions that have a relatively fixed,
well-defined, and fairly shallow forest of called procedures,
probably done as an aid to program structuring.

My sense though is that you mean greater variability than that,
perhaps even unknown at compile time and potentially of unknown
(and large?) depth. Obviously that's a harder problem (and
may be so whether VLAs are used or not), and probably needs a
more powerful mechanism to be used safely.

So the mechanism I suggested, even though I think it does
fairly well for an important subset of the situations
involved when using VLAs, does address only a subset,
and a more general or more powerful mechanism would
be needed to handle cases like the ones you describe.

So now here is my question for you - what might such
a more general, more powerful, mechanism be?
 
