Is NULL-checking redundant in accessor functions?

Jens M Andreasen

AFAIR this would happen in 286 protected mode with invalid segment
selectors.

Yes, it would perhaps have happened with a nonconforming malloc, and at
the time you are referring to, there was no other standard except for a
general 'look & feel'!

/j
 
Jens M Andreasen

It's deadly serious. It could even be the way forward in the battle
against wild pointer errors - I would be all for it.


Now this is a sensible reply!


mvh // Jens M Andreasen


PS: The pointer in the example is not in the wild ... yet :p
 
Jens M Andreasen

Yeah, I was appalled when I heard that Linux had done that. That is *not*
a conforming C implementation. If malloc returns a non-null pointer, then
the storage shall have been *allocated*, i.e. it exists, has addresses,
etc.

Nitpicking off-topic:

It is the C-lib that would have done the overcommitting. The
Linux kernel itself is pretty braindead (hopefully!) and will just do
weird things when you run out of swap.

/j
 
Douglas A. Gwyn

Francis said:
... IIRC it was the almost
unanimous opinion of those present at that meeting that a change to a
bit pattern that did not change the value (for example, normalising a
pointer representation on a machine using multiple representations of
addresses) was allowed at almost any time.

The problem with that is that previously we all seemed
to agree that a value could be safely copied via the
equivalent of memcpy in the same way. But if the
representation can change while its bytes are being
copied, that would no longer hold.
 
Jens M Andreasen

[ja:]
And as long as you just keep the credit-card in your pocket, nothing bad
happens. It is when you start using the teller-machine and actually move
some values that the police will trap you. No?

Unless you get caught for some other reason, and the police inventory your
pockets when you are put in jail.

Hey, wait a second ... The teller-machine/MMU analogy wasn't that bad!
Extending it to mean any bit pattern I may or may not have floating
around in memory leads us astray ...

Are you using the large memory model, or its 32-bit equivalent
(sometimes called the "intergalactic model")?

That would be the 'intergalactic' model :) Although I still have 5 inch
install disks with a 'no nonsense' license, I have no machine with an OS
where it can legally (or practically) be installed.
Nobody said anything about "mixing" models. Any model in which a
pointer contains a protected-mode selector piece (16 bits) in a segment
register will be affected.

OK! I suppose you still keep a DOS partition around for reference? If so,
please compile and run: p = malloc(somesize),p?free(p):, p? ...

We are now back in a timespace when even the proposed standard wasn't
more than a mere wink in the eye(s) of the founding fathers, but:

If memory serves me well, I would say that for a given project of some
size: As long as malloc did not fail, p would be within limits.

The key here is that we are not casting any random value into a pointer.
The implications for a task-switching OS would otherwise be tremendous:

p = malloc(somesize)
q(p) // q will free p but implementation cannot see it
/**/ // taskswitch here, flush, reload and fail?
.....
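
Spelled out as a minimal two-file sketch (the second translation unit is
an assumption, so that the implementation genuinely cannot see the free;
64 stands in for somesize):

/* other.c -- separate translation unit */
#include <stdlib.h>

void q(void *p)
{
    free(p);                /* caller's copy of p becomes indeterminate */
}

/* main.c */
#include <stdio.h>
#include <stdlib.h>

void q(void *p);            /* defined in other.c */

int main(void)
{
    void *p = malloc(64);
    if (p == NULL)
        return 1;
    q(p);                   /* frees p, invisibly to this translation unit */
    /* taskswitch here: if pointer representations were flushed and
       reloaded, would merely evaluating p trap? */
    printf("%d\n", p == NULL);
    return 0;
}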


Mmmm ... I can see a flaw in the taskswitch argument above: running in
ring zero as a kernel might very well be different from running in
userspace. And we are discussing specifics of a 386-class processor? This
might be a good entry point for shutting me up!

/j
 
Jarno A Wuolijoki

Even when a certain case of UB is now real-world impossible (and in the
case under discussion, it is not even that), computer science, for both
hardware and software, advances so rapidly that there's no guarantee
that it won't be possible next year, and state of the art in five.

Read it again and you'll find that I'm very much not talking about
the real world there;)

With a certain mindset 'every possible implementation does X' can be
read as 'it can be proven from [a list of c&v's] that a compiler
that performs some other action but X is not an implementation' which
is the same as 'the standard requires the compiler to do X' which is
the same as 'X is the defined behaviour here' which contradicts
'X is UB'.

Of course if you introduce some offtopic issues like physics of the
world surrounding us into the definition of 'possible', this logic
breaks down.
 
Dag-Erling Smørgrav

Douglas A. Gwyn said:
Yeah, I was appalled when I heard that Linux had done that.
That is *not* a conforming C implementation. If malloc
returns a non-null pointer, then the storage shall have
been *allocated*, i.e. it exists, has addresses, etc.

Pretty much every major operating system today, including the one
you're currently using, overcommits memory.

DES
 
Lawrence Kirby

I think you are mistaken, at least the above does not seem to match the
decision made in Sydney wrt indeterminate values. IIRC it was the almost
unanimous opinion of those present at that meeting that a change to a
bit pattern that did not change the value (for example, normalising a
pointer representation on a machine using multiple representations of
addresses) was allowed at almost any time.

But consider that I can view any (addressable) object as an array of
unsigned char. If the value (effectively the representation) of a
non-volatile object changes under that view there needs to be a good
reason for it.
It was also the opinion of
those present that an indeterminate value can be represented by any and
all possible bit patterns.

But whether the value of an object is indeterminate or not depends on the
type of lvalue used to access it. If I access an indeterminate pointer
with a pointer lvalue then, fair enough, that could change it to anything.
If I only access its bytes with unsigned char lvalues the implementation
has no business doing pointer related things behind the scenes.
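
A sketch of that view, with the byte-wise snapshots standing in for the
memcpy equivalent under discussion:

#include <stdlib.h>
#include <string.h>

int main(void)
{
    unsigned char before[sizeof(void *)];
    unsigned char after[sizeof(void *)];
    void *p = malloc(1);

    memcpy(before, &p, sizeof p);   /* snapshot of the representation */
    free(p);                        /* p's value is now indeterminate */
    memcpy(after, &p, sizeof p);    /* unsigned char access only; no
                                       pointer lvalue touches p */

    /* On the view argued above, the implementation has no business
       changing these bytes behind the scenes. */
    return memcmp(before, after, sizeof p) != 0;
}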

Lawrence
 
Lawrence Kirby

Yeah, I was appalled when I heard that Linux had done that.
That is *not* a conforming C implementation.

I don't see a problem with this.
If malloc
returns a non-null pointer, then the storage shall have
been *allocated*, i.e. it exists, has addresses, etc.

How will a strictly conforming program tell the difference?

Consider that the *only* program that a conforming C implementation must
translate and execute successfully is the one specified in 5.2.4.1, and
AFAIK this program isn't required to call malloc(). For any other strictly
conforming program all we know is that to the extent that the
implementation does execute it successfully the output generated must be
consistent with the abstract machine as specified by 5.1.2 and its
subclauses.

The execution of any program except the designated one may fail at any
time for any reason as long as the failure mode is clean in the sense that
it doesn't produce wrong output. There's nothing wrong with incomplete
output as long as the program didn't terminate normally, and didn't
otherwise violate 5.1.2.3.

There's no way a strictly conforming program can tell the difference (i.e.
engineer output based on this) between, say, malloc() overcommitting at
the point of allocation, the system subsequently taking memory away from
the program, a different failure such as a "stack" allocation failure,
kill -9, and so on.

7.20.3.3p3 says:

"The malloc function returns either a null pointer or a pointer to the
allocated space".

This is a description of the abstract machine (so says 5.1.2.3p1). A
conforming implementation need only follow the semantics of the abstract
machine as far as it is required to by the relevant parts of the standard,
notably in sections 4 and 5.

Lawrence
 
Francis Glassborow

Douglas A. Gwyn said:
The problem with that is that previously we all seemed
to agree that a value could be safely copied via the
equivalent of memcpy in the same way. But if the
representation can change while its bytes are being
copied, that would no longer hold.

I do not see any subsequent comments on the Sydney decision and I note
that neither you nor Larry were at the Redmond meeting. If either of you
believe the decision was wrong then I think you need to raise it fairly
quickly with WG14.

The specific point you raise was not raised at the Sydney meeting and
certainly should be considered.
 
CBFalconer

Nils said:
Douglas A. Gwyn wrote:
..... snip ...

Even if there is no way to convince you that lazy allocation does
make sense, you have to understand that it is a feature, not a bug.
Linux and the other systems I have mentioned above make it
*optional*. You can choose to use safe eager allocation instead if
you so desire, using /proc/sys/vm/overcommit_memory or sysctl
vm.overcommit_memory on Linux. The extent to which the system
shall overcommit can be configured too, using sysctl
vm.overcommit_ratio.


Then you will also have to complain about every other system
application that may disrupt the execution of a conforming C
program, such as the Unix kill command and debuggers (using
ptrace() and/or procfs to alter the state of the program).

As long as these things are useful, people will continue to prefer
their availability in favor of strict standards compliance.

There is no conformance problem if the system, on finding that the
lazy allocated memory is not available, simply pauses the program
for i/o until that memory chunk can be physically allocated. Then
the only thing affected is the run timing.
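
(For the record, the knob Nils mentions can also be flipped from a
program; a sketch assuming Linux and sufficient privileges, where mode 2
means "never overcommit", subject to vm.overcommit_ratio:)

#include <stdio.h>

int main(void)
{
    /* 0 = heuristic overcommit, 1 = always overcommit, 2 = never */
    FILE *f = fopen("/proc/sys/vm/overcommit_memory", "w");
    if (f == NULL) {
        perror("overcommit_memory");
        return 1;
    }
    fputs("2\n", f);
    return fclose(f) != 0;
}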
 
Richard Bos

Jens M Andreasen said:
OK! I suppose you still keep a DOS partition around for reference?
Yes.

If so, please compile and run: p = malloc(somesize),p?free(p):, p? ...

Not on your nelly. First, it's unsyntactical; second, several important
parts are missing; and third, an experiment on one implementation is
irrelevant, really. MS-DOS C compilers often did things that tried to go
beyond what the OS itself strictly allowed.
We are now back in a timespace when even the proposed standard wasn't
more than a mere wink in the eye(s) of the founding fathers, but:

Eh? 1995 was earlier than 1989?
If memory serves me well, I would say that for a given project of some
size: As long as malloc did not fail, p would be within limits.

The key here is that we are not casting any random value into a pointer.

You're not casting _anything_, random or otherwise.
The implications for a task-switching OS would otherwise be tremendous:

p = malloc(somesize)
q(p) // q will free p but implementation cannot see it
/**/ // taskswitch here, flush, reload and fail?

I don't see where you get that idea. There are many ways around this, of
which the most simple one in pure ISO C is that the value of p may now
be a trap value when seen as a pointer, but not as an array of unsigned
chars (since unsigned char does not have any trap values).
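
A sketch of that escape hatch (comparing against an actual null
pointer's bytes, since null is not required to be all-bits-zero):

#include <stdlib.h>
#include <stddef.h>

int main(void)
{
    void *p = malloc(16);
    void *const nullp = NULL;

    if (p == NULL)
        return 1;
    free(p);

    /* p == NULL would evaluate p as a pointer, which may now be a trap
       value; viewing it as an array of unsigned char cannot trap. */
    {
        const unsigned char *pb = (const unsigned char *)&p;
        const unsigned char *nb = (const unsigned char *)&nullp;
        size_t i;
        for (i = 0; i < sizeof p; i++)
            if (pb[i] != nb[i])
                return 1;       /* bytes differ from a null pointer's */
    }
    return 0;
}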
This might be a good entry point for shutting me up!

You reading the Standard would be a better solution to that problem.

Richard
 
Dag-Erling Smørgrav

Jens M Andreasen said:
It is the C-lib that would have done the overcommitting. The
Linux kernel itself is pretty braindead (hopefully!) and will just do
weird things when you run out of swap.

Wrong. The C library uses underlying kernel facilities (mmap() /
brk() / sbrk()) to allocate memory. The fact that those underlying
kernel facilities only allocate address space, not physical memory
(the physical memory is allocated only when the aforementioned address
space is accessed), is a feature of the kernel, not of the C library.

Moving away from strict C and into the POSIX realm, the C library or
the application can work around memory overcommit by touching every
allocated page (forcing physical memory or swap space to be allocated)
with a SIGSEGV handler installed to catch unsatisfied page faults, or
by using a large non-sparse file as a backing store.
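
A rough sketch of the page-touching variant (assuming POSIX
sigaction/sigsetjmp, and assuming the unsatisfied fault really arrives
as SIGSEGV as described above; eager_malloc is a hypothetical name):

#include <setjmp.h>
#include <signal.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static sigjmp_buf touch_env;

static void on_segv(int sig)
{
    (void)sig;
    siglongjmp(touch_env, 1);           /* escape the faulting write */
}

/* malloc that forces physical backing by writing one byte per page;
   returns NULL if the overcommitted memory turns out not to exist. */
void *eager_malloc(size_t size)
{
    unsigned char *p = malloc(size);
    struct sigaction sa, old;

    if (p == NULL || size == 0)
        return p;

    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_segv;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, &old);

    if (sigsetjmp(touch_env, 1) == 0) {
        size_t step = (size_t)sysconf(_SC_PAGESIZE);
        size_t i;
        for (i = 0; i < size; i += step)
            p[i] = 0;                   /* fault each page in */
        sigaction(SIGSEGV, &old, NULL);
        return p;
    }

    sigaction(SIGSEGV, &old, NULL);     /* a touch faulted */
    free(p);
    return NULL;
}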

DES
 
Michael Wojcik

Give me one instance where 'p == NULL' will fail (on that line) and I
will shut up! This is after p=malloc(somesize),free(p) ..

Perhaps there isn't one. I've just tried ILE C on the AS/400, which
was a promising candidate, and it let the test through, in this simple
case.

It might fail in a more complex one, where the freed pointer wasn't
in scope; for example, if I freed a pointer and then passed it to a
function in another t.u. which tested it for null. That's because on
the AS/400 pointers aren't simple addresses. They're 128-bit objects
(for a 64-bit address space) that have to be "materialized" before
they can be used. The AS/400 doesn't use a separate virtual address
space for each process ("job" in OS/400); it uses a single virtual
address space for the entire system, and uses strong pointer typing
and hardware address validation to enforce storage access rules.

Here's what the output of the %p format specifier looks like in ILE
C:

intp is SPP:0000 :1aefQPADEV0001MWW 006945 :174:1:24d4
funcp is PRP:0:24d4

The first is an int pointer, the second a function pointer.

The "materialize pointer" instruction does some sophisticated
checking. For example, the following code will produce a run-time
violation and abort the program (or, if the program is compiled for
debugging, suspend it), sending an operator message to the message
queue of the user who submitted it:

int main(void)
{
    void *voidp;
    int (*funcp)(void) = main;
    voidp = (void *)funcp;  /* trapped when the pointer is materialized */
}

The AS/400 will not let you convert a function pointer to a void
pointer, even with a cast. It catches that at runtime. That's no
odder than trapping a reference to a free'd pointer; it'd be trivial
for the materialization of the latter, even just to read its value
(and not dereference it), to check to see that it was still valid.

It doesn't require any change to the pointer representation, either.
A sequence of 128 bits may be a valid pointer at one point in a
program's lifetime, and not at another, and the implementation could
detect that if it wanted to. In this case, it appears that the IBM
compiler team decided not to trap this error; but the technology is
there for them to have done so.

And they might, in another revision of the compiler. And that's why
it's stupid to assume that, just because no one can point to such an
implementation today, such an implementation will never exist.

--
Michael Wojcik

They had forgathered enough of course in all the various times; they had
again and again, since that first night at the theatre, been face to face
over their question; but they had never been so alone together as they were
actually alone - their talk hadn't yet been so supremely for themselves.
-- Henry James
 
Gordon Burditt

There is no conformance problem if the system, on finding that the
lazy allocated memory is not available, simply pauses the program
for i/o until that memory chunk can be physically allocated. Then
the only thing affected is the run timing.

Unless the system deadlocks. It will if enough processes that hold lots
of memory are trying to grow, more processes are waiting on those, and
few processes that aren't waiting on the first two sets are going to
release sufficient memory.

Then again, the possibility of power failures or repossession of the
computer system also makes the implementation non-conforming.

Gordon L. Burditt
 
Douglas A. Gwyn

Dag-Erling Smørgrav said:
Moving away from strict C and into the POSIX realm, the C library or
the application can work around memory overcommit by touching every
allocated page (forcing physical memory or swap space to be allocated)
with a SIGSEGV handler installed to catch unsatisfied page faults, or
by using a large non-sparse file as a backing store.

When the SIGSEGV occurs all that can reliably be
determined is that malloc should have returned null.
That is malloc's job in the first place; fix it.

The idea that every reliable app will have to take
extraordinary measures to handle this situation
indicates what a horrible design decision it was.
 
Dag-Erling Smørgrav

Douglas A. Gwyn said:
[memory overcommit considered harmful]

Look, I can grok that ivory towers look really neat, but don't they
get awfully cold in the winter? Wouldn't you rather be living in the
real world?

DES
 
Douglas A. Gwyn

Dag-Erling Smørgrav said:
Douglas A. Gwyn said:
[memory overcommit considered harmful]
Look, I can grok that ivory towers look really neat, but don't they
get awfully cold in the winter? Wouldn't you rather be living in the
real world?

In the real world, it is not considered acceptable for
correct programs to be crashed due to such a poor design.
Also, in the world I come from, programmers *learn* how
to implement sparse arrays so that they don't *have* to
wreck the whole system.
 
