Uninitialized values?

James Kanze

The above disclaimer must be understood as representing the
view of some, but by no means all, C++ programmers.

I think it pretty much sums up the only reasonable attitude for
a professional programmer to have.
One of the stated purposes in the development of the C++
language was to keep the efficiency that makes C so valuable
for system programming. Adding the overhead of initializing
local variables without direction from the programmer increases
code size and execution time.

How much extra overhead does it really have? Have you measured
it? Most of the time, if you do actually initialize the
variable before use, the compiler will see it, and not bother
with any other initialization.
When you consider the fact that one of the first features that
C++ added to C was the ability to define local objects at any
point in a block, making it possible to defer definition until
you know what you want to initialize it with, the complaint
above can be seen in a different context.
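
Concretely, a minimal sketch of the difference (the helpers
computeValue() and use() are invented here purely for
illustration):

    int computeValue() { return 42; }   // stand-in helper
    void use(int) {}                    // stand-in helper

    void cStyle()
    {
        int n;                   // defined up front, uninitialized
        // ... other work ...
        n = computeValue();      // only now does it get a value
        use(n);
    }

    void cxxStyle()
    {
        // ... other work ...
        int n = computeValue();  // defined at the point where its
                                 // value is known; it can never be
                                 // observed uninitialized
        use(n);
    }
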
The author wants to add baggage to the language to protect
programmers from coding errors.

The author wants to make program behavior reproducible, so that
a test means something. If you don't believe in testing code,
and don't mind random behavior, there's no problem. Otherwise,
there is.
 
James Kanze

I can't find the word »trap« in ISO/IEC 14882:2003(E),
but I know it from ISO/IEC 9899:1999 (E). Maybe the authors
of ISO/IEC 14882:2003(E) thought that this was self-evident,
or they only use »illegal value« - but I also cannot find
»illegal value« or »legal value« in ISO/IEC 14882:2003(E).
You might be right, but I cannot prove it from ISO/IEC
14882:2003(E). Maybe ISO/IEC 14882:2003(E) targets a smaller
set of architectures than ISO/IEC 9899:1999 (E), ones where
there are no trap representations? But a C++ compiler will not
be able to hide trap representations from the programmer if a
C compiler can't hide them.
I also found this suggestion from November 3, 1995, which does
/not/ seem to be implemented in ISO/IEC 14882:2003(E):
»476 - Can objects with "indeterminate initial value" be referred to?
8.5p6 says:
"If no initializer is specified for an object with automatic or
dynamic storage duration, the object and its subobjects, if any,
have an indeterminate initial value."
The C standard specifies that accessing a variable with
indeterminate value results in undefined behavior, but the C++ draft
contains no such language.
Proposed Resolution:
Add the following text at the end of 8.5 paragraph 6:
"Referring to an object with an indeterminate value results in
undefined behavior."«

ISO/IEC 14882:2003 is based on ISO/IEC 9899:1990. The wording
in it is very close to that in C90. The wording is not very
precise (and in fact contradicts C90 in one small
particular---unintentionally, I'm pretty sure). The C committee
addressed the issue, and rewrote the section to be more precise;
this results in the wording in C99.

Even without C99, however, the wording in C++03 explicitly says
that "For character types, all bits of the object representation
participate in the value representation. For unsigned character
types, all possible bit patterns of the value representation
represent numbers. These requirements do not hold for other
types." In other words, not all bits participate in the value
representation. So some are free to create trap values.
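
A small sketch of the practical consequence (assuming a platform
where int has trap representations; the unsigned char guarantee
quoted above is what makes the byte-wise inspection safe):

    #include <cstring>

    void f()
    {
        int i;                   // indeterminate value

        // int j = i;            // reading i as an int is where a
        //                       // trap representation could bite

        unsigned char bytes[sizeof(int)];
        std::memcpy(bytes, &i, sizeof bytes);
        // Inspecting the raw bytes cannot trap: every bit pattern
        // of an unsigned char represents a valid number.
    }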

In practice, at least one currently marketed machine does have a
tag bit of some sort in its integral types, plus a number of
bits which must be 0. I don't know what happens if the tag bit
has the wrong value, but it could result in a trap. And if the
must-be-0 bits are not 0, the value is interpreted as a floating
point value, which is another possible form of "undefined
behavior".
 
Ian Collins

Andy said:
If static checking tools and compilers could do all the checking for us
we would never need to debug.
You said "I'd be quite happy if the debug-mode runtime checked for
uninitialised use". That is something lint or the compiler can do.
 
James Kanze

[...]
If static checking tools and compilers could do all the
checking for us we would never need to debug.

In general, if code compiles without errors, and passes code
review, it should work. An error downstream is generally a very
strong indication that your process needs improvement.
 
James Kanze

James Kanze wrote:
Are you seriously telling me that as a result of compiler
checks and code reviews you never have a bug?

I'm seriously telling you that any bug which occurred downstream
from development was investigated, and the process modified so
that it wouldn't reoccur. This process was initially instigated
because we more or less had to: we were delivering a turn-key
system with contractual penalties for downtime---every minute
the system wasn't available, the customer billed us. What we
found out was that this process also reduced our development
costs: it's actually cheaper to produce quality software than it
is to produce junk. (I suspect that this is only true up to a
point, and that at one error per 100KLoc, we hadn't reached that
point.)
 
Ian Collins

Andy said:
James,

If most people posted this up I'd think it was BS - but I've seen enough
from you to be pretty sure it isn't. We've had all sorts of people in
to advise us on processes, and none of them has come up with anything
that would be a real change from the way we've always written software:

Engage brain, double check it, then test it to death. About the only
thing that's changed is a formal code review - and I know that does miss
things.
There's also Engage brain, write test, write code to pass test, repeat.
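
A minimal sketch of that cycle with plain assert() (clamp() and
its tests are invented for the example; the tests are written
first, the body afterwards):

    #include <cassert>

    int clamp(int value, int low, int high);  // body written last

    void testClamp()
    {
        assert(clamp(5, 0, 10) == 5);    // in range: unchanged
        assert(clamp(-3, 0, 10) == 0);   // below range: clamp up
        assert(clamp(42, 0, 10) == 10);  // above range: clamp down
    }

    int clamp(int value, int low, int high)
    {
        if (value < low)  return low;
        if (value > high) return high;
        return value;
    }

    int main()
    {
        testClamp();   // all tests must pass before moving on
        return 0;
    }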
 
red floyd

James said:
I'm seriously telling you that any bug which occurred downstream
from development was investigated, and the process modified so
that it wouldn't reoccur. This process was initially instigated
because we more or less had to: we were delivering a turn-key
system with contractual penalties for downtime---every minute
the system wasn't available, the customer billed us. What we
found out was that this process also reduced our development
costs: it's actually cheaper to produce quality software than it
is to produce junk. (I suspect that this is only true up to a
point, and that at one error per 100KLoc, we hadn't reached that
point.)

You're describing a CMMI Level 4 process. Good stuff.
 
James Kanze

Andy Champ wrote:
If most people posted this up I'd think it was BS - but I've seen enough
from you to be pretty sure it isn't. We've had all sorts of people in
to advise us on processes, and none of them has come up with anything
that would be a real change from the way we've always written software:
Engage brain, double check it, then test it to death. About the only
thing that's changed is a formal code review - and I know that does miss
things.
I'd be fascinated to know what you are doing. Is it documented anywhere?

SEI. (http://www.sei.cmu.edu/)

Of course, it's not been the case everywhere I've worked. But
when the contract with the final user specifies contractual
penalties for down time, management is motivated to make it
work.

The key, or one of them, is the feedback. You get an error in
the field; you ask "what could we have done differently so that
it would have been caught in code review?" Sometimes, it's a
question of simply taking code review a bit more seriously, and
checking for "obvious" things, like uninitialized variables.
Other times, it's a question of writing simpler code (although
if the code is too complicated for the reviewers to be 100% sure
that it works, it needs to be reworked to be simpler). Or maybe
you're forgetting to consider the border cases: make it a point
for a while to list them explicitly, and verify them. I've also
found writing things out to be helpful---somehow, what
"obviously works" ends up having flaws when you try to explain
in writing why you're sure it works. The point is, of course,
that someone looking at your code should be rapidly convinced
that it is correct. Not that he doesn't see any errors, but
that he can see clearly that there aren't any.
 
James Kanze

Ian Collins wrote:

[...]
There's also Engage brain, write test, write code to pass
test, repeat.

Which doesn't change the basic problem. If an error slips
through the code review/tests/etc., what do you do about it?
What do you have to change so that code review will find similar
errors in the future, and to be sure that tests will reveal the
error (if possible---some things can't reasonably be tested)?
And so on. The problem has to be addressed at a higher level.
 
