Weird binary behavior

Kris Garrett

On AIX-4.3.3 using xlc v5.0, I observe the following madness:

int
main(int argc, char *argv[])
{
    char ifxsrv[64];

    ProcConfig config;
    FSINFO fs_array[NUM_FS];
    SYSCFG util;
    int rc = 0;

    [..snipped..]

    return 0;
}

Results in "Illegal instruction(coredump)" upon exiting (via return 0).
The program does run to completion, but looks as if it somehow returns
to an invalid address. The weird part is that if the above code is
changed to:

int
main(int argc, char *argv[])
{
    ProcConfig config;
    FSINFO fs_array[NUM_FS];
    SYSCFG util;
    int rc = 0;
    char ifxsrv[64];

    [..snipped..]

    return 0;
}

The program runs to completion without coredumping on an illegal
instruction.

FSINFO, SYSCFG, and ProcConfig are defined in a separate header file.

My question is why would the order of declarations affect the stability
of an executable? Non-deterministic behavior == a bad day.
 
Walter Roberson

On AIX-4.3.3 using xlc v5.0, I observe the following madness:
Results in "Illegal instruction(coredump)" upon exiting (via return 0).
The program does run to completion, but looks as if it somehow returns
to an invalid address. The weird part is that if the above code is
changed to:
The program runs to completion without coredumping on an illegal
instruction.
My question is why would the order of declarations affect the stability
of an executable? Non-deterministic behavior == a bad day.

The order of declarations affects, on most implementations, the
order in which variables are allocated on a stack. Some implementations
bother to sort by size or by alignment (in hopes of packing better),
but some just put the variables on a stack either in the order
declared or in the reverse of the order declared.

Therefore, if you happen to be writing past the end of an array, before
the beginning of an array, or into freed memory, then whether or not you
have allocated a variable at a certain relative location in the code can
affect exactly what happens to be at the place being overwritten, and
thus can affect whether you see an obvious crash or not.
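
To make that concrete, here is a toy sketch (not code from the original
program; which neighbour actually gets clobbered is entirely
implementation-specific, which is the point):

#include <stdio.h>
#include <string.h>

/* Deliberately invokes undefined behaviour for illustration: the 8 bytes
   written past the end of buf may land on canary, on padding, on saved
   registers, or on the saved return address, depending on how the
   compiler happens to lay out the frame. */
int main(void)
{
    char buf[8];
    int canary = 0x1234;

    memset(buf, 'A', 16);               /* 8 bytes too many */

    printf("canary = %#x\n", canary);   /* might print 0x41414141, might
                                           print 0x1234, or might crash */
    return 0;
}

Re-order the declarations and you may see a different one of those
outcomes, without anything actually being fixed.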
 
Ark Khasin

Walter said:
The order of declarations affects, on most implementations, the
order in which variables are allocated on a stack. Some implementations
bother to sort by size or by alignment (in hopes of packing better),
but some just put the variables on a stack either in the order
declared or in the reverse of the order declared.

Therefore, if you happen to be writing past the end of an array, before
the beginning of an array, or into freed memory, then whether or not you
have allocated a variable at a certain relative location in the code can
affect exactly what happens to be at the place being overwritten, and
thus can affect whether you see an obvious crash or not.
If your assessment is correct, so is my claim that arrays on stack are
evil. This claim had been thoroughly rebuffed in this NG though. :)
-- Ark
 
Stephen Sprunk

Kris Garrett said:
On AIX-4.3.3 using xlc v5.0, I observe the following madness:

int
main(int argc, char *argv[])
{
    char ifxsrv[64];

    ProcConfig config;
    FSINFO fs_array[NUM_FS];
    SYSCFG util;
    int rc = 0;

    [..snipped..]

    return 0;
}

Results in "Illegal instruction(coredump)" upon exiting (via return 0).
The program does run to completion, but looks as if it somehow returns
to an invalid address. The weird part is that if the above code is
changed to: ....
My question is why would the order of declarations affect the stability
of an executable? Non-deterministic behavior == a bad day.

The reason is almost certainly that you're writing outside the bounds of one
of your arrays. If the arrays are in one order, your program corrupts the
return address on the stack; if the arrays are in a different order, it
corrupts some other part of the stack (like your other variables).

S
 
Richard Bos

In fact, I would say - as a first, not completely informed guess - that
somewhere in the OP's [..snipped..] code he scribbles past the end of
ifxsrv[], the array in question.
If your assessment is correct, so is my claim that arrays on stack are
evil.

No, that claim is not correct. Writing past the end of an array is evil,
regardless of where and how that array is declared.

Richard
 
CBFalconer

Ark said:
Walter Roberson wrote:
.... snip ...


If your assessment is correct, so is my claim that arrays on stack
are evil. This claim had been thoroughly rebuffed in this NG though.

No, the thing that is evil is writing into memory that is not part
of the destination object.
 
Kenneth Brody

Kris said:
On AIX-4.3.3 using xlc v5.0, I observe the following madness:

int
main(int argc, char *argv[])
{
    char ifxsrv[64];

    ProcConfig config;
    FSINFO fs_array[NUM_FS];
    SYSCFG util;
    int rc = 0;

    [..snipped..]

    return 0;
}

Results in "Illegal instruction(coredump)" upon exiting (via return 0).
The program does run to completion, but looks as if it somehow returns
to an invalid address. The weird part is that if the above code is
changed to:

int
main(int argc, char *argv[])
{
    ProcConfig config;
    FSINFO fs_array[NUM_FS];
    SYSCFG util;
    int rc = 0;
    char ifxsrv[64];

    [..snipped..]

    return 0;
}

The program runs to completion without coredumping on an illegal
instruction.

FSINFO, SYSCFG, and ProcConfig are defined in a separate header file.

My question is why would the order of declarations affect the stability
of an executable? Non-deterministic behavior == a bad day.

Pure guess, based on lack of any other information...

What if something caused a buffer overrun in ifxsrv[]? In the
initial version, the overrun smashes the call stack, while the
modified version smashes rc.

One possible test would be to put another array immediately
before and after ifxsrv's definition, memset them to a known
value, and then examine those arrays before returning from
main().
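
Something along these lines, for example (guard1, guard2, and the 0xAA
fill value are just names and numbers picked for this sketch; the
language does not actually promise that the guards end up adjacent to
ifxsrv, but with a compiler that allocates locals in declaration order
this often catches the overrun):

#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    char guard1[64];
    char ifxsrv[64];
    char guard2[64];
    unsigned i;

    memset(guard1, 0xAA, sizeof guard1);
    memset(guard2, 0xAA, sizeof guard2);

    /* [..snipped..] -- the original body goes here, filling ifxsrv */

    /* Before returning, check whether either guard was scribbled on. */
    for (i = 0; i < sizeof guard1; i++) {
        if ((unsigned char)guard1[i] != 0xAA ||
            (unsigned char)guard2[i] != 0xAA) {
            fprintf(stderr, "guard byte %u was clobbered\n", i);
            break;
        }
    }
    return 0;
}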

--
+-------------------------+--------------------+-----------------------+
| Kenneth J. Brody        | www.hvcomputer.com | #include              |
| kenbrody/at\spamcop.net | www.fptech.com     | <std_disclaimer.h>    |
+-------------------------+--------------------+-----------------------+
Don't e-mail me at: <mailto:[email protected]>
 
Ark Khasin

CBFalconer said:
No, the thing that is evil is writing into memory that is not part
of the destination object.
[Also to Richard Bos]
Let's face it: we mortals are fallible. When we admit that our errors do
sometimes go out in the wide world, it's appropriate to think about
minimizing the impact of an error. IMHO, corrupting the heap is not as
disastrous as corrupting a stack.

-- Ark
 
Flash Gordon

Ark Khasin wrote, On 27/08/07 21:20:
CBFalconer said:
No, the thing that is evil is writing into memory that is not part
of the destination object.
[Also to Richard Bos]
Let's face it: we mortals are fallible. When we admit that our errors do
sometimes go out in the wide world, it's appropriate to think about
minimizing the impact of an error. IMHO, corrupting the heap is not as
disastrous as corrupting a stack.

IMHO corrupting either is a complete disaster.
 
CBFalconer

Ark said:
.... snip ...

Let's face it: we mortals are fallible. When we admit that our
errors do sometimes go out in the wide world, it's appropriate to
think about minimizing the impact of an error. IMHO, corrupting
the heap is not as disastrous as corrupting a stack.

Not so. Either can (and probably does) cause undefined behaviour,
which includes giving the expected result.
 
Richard

Flash Gordon said:
Ark Khasin wrote, On 27/08/07 21:20:
CBFalconer said:
Ark Khasin wrote:
Walter Roberson wrote:

... snip ...
Therefore, if you happen to be writing past the end of an array,
before the beginning of an array, or into freed memory, then whether
or not you have allocated a variable at a certain relative location
in the code can affect exactly what happens to be at the place being
overwritten, and thus can affect whether you see an obvious crash
or not.
If your assessment is correct, so is my claim that arrays on stack
are evil. This claim had been thoroughly rebuffed in this NG though.

No, the thing that is evil is writing into memory that is not part
of the destination object.
[Also to Richard Bos]
Let's face it: we mortals are fallible. When we admit that our
errors do sometimes go out in the wide world, it's appropriate to
think about minimizing the impact of an error. IMHO, corrupting the
heap is not as disastrous as corrupting a stack.

IMHO corrupting either is a complete disaster.

Of course. The previous statement is simply ridiculous. The heap could
hold information as important as, if not more important than, what is
on the stack. Not only that, but heap corruption might take an eon to
debug since it is not immediately apparent - it could manifest itself
in one of a million undefined ways, including incorrect mallocs,
deallocs, data reads, password caches, etc. It is far more likely,
especially under a good debugger, that a prudent programmer would
notice a stack corruption, ESPECIALLY since the stack invariably gets
its own inspection window in the debuggers I have used.
 
Ark Khasin

Richard said:
Flash Gordon said:
Ark Khasin wrote, On 27/08/07 21:20:
CBFalconer wrote:
Ark Khasin wrote:
Walter Roberson wrote:

... snip ...
Therefore, if you happen to be writing past the end of an array,
before the beginning of an array, or into freed memory, then whether
or not you have allocated a variable at a certain relative location
in the code can affect exactly what happens to be at the place being
overwritten, and thus can affect whether you see an obvious crash
or not.
If your assessment is correct, so is my claim that arrays on stack
are evil. This claim had been thoroughly rebuffed in this NG though.
No, the thing that is evil is writing into memory that is not part
of the destination object.

[Also to Richard Bos]
Let's face it: we mortals are fallible. When we admit that our
errors do sometimes go out in the wide world, it's appropriate to
think about minimizing the impact of an error. IMHO, corrupting the
heap is not as disastrous as corrupting a stack.
IMHO corrupting either is a complete disaster.

Of course. The previous statement is simply ridiculous.

Your righteousness is ridiculous, but in a complex way.
The heap could
hold information as important as, if not more important than, what is
on the stack. Not only that, but heap corruption might take an eon to
debug since it is not immediately apparent - it could manifest itself
in one of a million undefined ways, including incorrect mallocs,
deallocs, data reads, password caches, etc. It is far more likely,
especially under a good debugger, that a prudent programmer would
notice a stack corruption, ESPECIALLY since the stack invariably gets
its own inspection window in the debuggers I have used.
Bugs don't like bright light; they tend to show up when you are not looking.
It is not uncommon in critical apps to wrap malloc so as to allocate
areas with guards on one or both sides.
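
A bare-bones sketch of that idea (xmalloc/xfree, the guard size, and the
fill byte are all inventions of this example, not a standard interface;
a real wrapper would also worry about the alignment of the returned
block, which is glossed over here):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define GUARD_SIZE 16
#define GUARD_BYTE 0xA5

/* Each allocation: [size_t n][front guard][n user bytes][rear guard] */
void *xmalloc(size_t n)
{
    unsigned char *base = malloc(sizeof(size_t) + 2 * GUARD_SIZE + n);

    if (base == NULL)
        return NULL;
    memcpy(base, &n, sizeof n);                       /* remember the size */
    memset(base + sizeof n, GUARD_BYTE, GUARD_SIZE);  /* front guard */
    memset(base + sizeof n + GUARD_SIZE + n, GUARD_BYTE, GUARD_SIZE);
    return base + sizeof n + GUARD_SIZE;
}

void xfree(void *p)
{
    unsigned char *user = p, *front, *rear, *base;
    size_t n, i;

    if (p == NULL)
        return;
    front = user - GUARD_SIZE;
    base  = front - sizeof n;
    memcpy(&n, base, sizeof n);
    rear  = user + n;
    for (i = 0; i < GUARD_SIZE; i++) {
        if (front[i] != GUARD_BYTE || rear[i] != GUARD_BYTE) {
            fprintf(stderr, "heap guard corrupted\n");
            abort();              /* fail loudly instead of limping on */
        }
    }
    free(base);
}

With the guards checked at xfree() time, a silent overrun turns into an
immediate, reproducible abort() instead of a mystery an eon later.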
 
