"null Considered Harmful"


Malcolm McLean

In any case, usually there's not much that can be done other than report the
fact and then quit, if this was a routine allocation that was expected to
work. It's too bad if this has happened to a customer; then you need to
establish the reason.

The reason is always that either the customer didn't have enough memory
installed, or some other process was taking up the memory.
 

BartC

Malcolm McLean said:
The reason is always that either the customer didn't have enough memory
installed, or some other process was taking up the memory.

So if a program has a memory leak, the solution is to just keep adding more
memory?!

It also happens that a wrong choice of algorithm could be using up too much
memory, perhaps for intermediate results. In either case the right response
is to fix the software rather than upgrade the hardware first.
 

Les Cargill

Malcolm said:
The reason is always that either the customer didn't have enough memory
installed, or some other process was taking up the memory.

That is a planning failure masquerading as a software bug.
 

BGB

If we're talking about memory allocation failures, then it will usually be
for one of two reasons: there's either a bug in the software that makes it
use up all the memory, or (more rarely) the machine has actually run out of
memory because the program needs too much of it. (Or, sometimes, other
programs might be hogging most of the memory.)

In any case, usually there's not much that can be done other than report the
fact and then quit, if this was a routine allocation that was expected to
work. It's too bad if this has happened to a customer; then you need to
establish the reason.

In some cases an out-of-memory condition (i.e. an attempt to allocate memory
that failed) *can* be handled by the software, but the code will know about
it and will use a different allocator that doesn't just quit. So in an
interactive app, for example, it can report that a particular operation
failed, but then carry on so the user can try something else.
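
As a rough sketch of that split (the wrapper names xmalloc and try_malloc
below are purely illustrative, not anything from this thread), the
"expected to work" allocations can go through a wrapper that reports and
quits, while the recoverable ones go through one whose NULL return the
caller is prepared to handle:

#include <stdio.h>
#include <stdlib.h>

/* For routine allocations that are expected to work: report and quit. */
void *xmalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL && size != 0) {
        fprintf(stderr, "out of memory (requested %zu bytes)\n", size);
        exit(EXIT_FAILURE);
    }
    return p;
}

/* For allocations the caller knows how to back out of: hand back
   malloc's result, NULL included, so the operation can be cancelled
   gracefully instead of the whole program dying. */
void *try_malloc(size_t size)
{
    return malloc(size);
}

An interactive app would then route the big, user-triggered allocations
through the second wrapper, report that the operation didn't fit in
memory, and carry on.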


OTOH: it can also be because the app is 32-bit, and there is only 3GB of
address space available for 32-bit apps, effectively limiting the
maximum amount of allocated memory to around 2.0 - 2.5 GB (the remaining
space being needed for things like stacks and the program
binaries/DLLs/... and similar).


Also, the app can be designed in such a way that it actually *uses* most
of the address space, without it actually being a leak.

For example, a voxel-based 3D engine can eat up lots of RAM for things
like voxel data and similar (lots of 3D arrays).

In such a case, memory allocation failure may then effectively mean "no,
your heap isn't getting any bigger", and may have to be dealt with
gracefully.

Never mind if it pages really badly on older computers that don't have a
lot of RAM installed, and requires ~8GB-16GB of swap space in these
cases, ...


I have used it successfully on an old laptop with 1GB of RAM though (set
up for 8GB swap), and it sort of runs passably (if the player doesn't
run around too much, the swap can mostly keep up).

FWIW, this laptop can't really run Minecraft either...
 

Charlton Wilbur

LC> So I have habits that *preclude* that sort of thing. It's too
LC> detailed, but enforce your constraints with the furniture
LC> provided by the language system.

But that's exactly my point: for some problem domains, taking 25% as
long to write the code (because things can be expressed tersely) but
having the code take 4 times longer to run (because the runtime is
handling exceptions, checking array bounds, and validating after each
operation that constraints are true) is a very desirable tradeoff.

This is why there are many languages and many development frameworks and
many development environments. The sweet spot for tradeoffs on a
multi-user 248K PDP-11 is not the same as the sweet spot for tradeoffs
on a 1MB Mac Plus is not the same as the sweet spot for tradeoffs on a
512MB iPhone 5.

Charlton
 

Malcolm McLean

On Friday, December 13, 2013 11:03:58 AM UTC, Bart wrote:

On 64-bit desktop systems, malloc will only return NULL if the size is 0
or ridiculously large because of a bug. If you keep allocating smallish
amounts of memory, you will run out of RAM, start swapping, and your
computer will slow down so badly that the user gives up waiting before
malloc ever returns NULL.

That's true for a GUI app that essentially provides a nice user interface
to a few trivial calculations. Which is a lot of software, but not
everything.

If you start attacking NP-complete, O(2^N) problems, then you often find
that you can get a reasonable answer in reasonable time, at the cost of a
huge amount of memory. Quite often that memory is in a tree or similar
structure that naturally maps to billions of allocations of small chunks.
However large your machine, you can swiftly exhaust the memory.
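
Whether that exhaustion can be survived depends on whether the allocation
sites propagate failure. A minimal sketch (struct node and make_node are
invented names for illustration, not anything from this thread): the node
allocator hands NULL back up the search instead of exiting, so the search
can free what it already built and report that it ran out of memory.

#include <stdio.h>
#include <stdlib.h>

struct node {
    int value;
    struct node *left, *right;
};

/* Allocate a node, propagating failure upward instead of exiting.
   On failure the caller still owns left and right, so it can free
   the partially built tree and stop cleanly. */
struct node *make_node(int value, struct node *left, struct node *right)
{
    struct node *n = malloc(sizeof *n);
    if (n == NULL)
        return NULL;
    n->value = value;
    n->left = left;
    n->right = right;
    return n;
}

int main(void)
{
    struct node *leaf = make_node(1, NULL, NULL);
    struct node *root = leaf ? make_node(2, leaf, NULL) : NULL;

    if (root == NULL) {
        fprintf(stderr, "ran out of memory building the tree\n");
        free(leaf);   /* free(NULL) is harmless if leaf also failed */
        return EXIT_FAILURE;
    }
    free(root->left);
    free(root);
    return 0;
}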
 

Les Cargill

Charlton said:
LC> So I have habits that *preclude* that sort of thing. It's too
LC> detailed, but enforce your constraints with the furniture
LC> provided by the language system.

But that's exactly my point: for some problem domains, taking 25% as
long to write the code (because things can be expressed tersely) but
having the code take 4 times longer to run (because the runtime is
handling exceptions, checking array bounds, and validating after each
operation that constraints are true) is a very desirable tradeoff.

Sure - that's included in what I meant. I am pretty sure it all comes
out in the wash. I doubt you'd get 400% speedups in development
time just from exceptions, though.

I just don't find explicit constraint checks to be that slow to put in,
or to test.

If I'm *really* pressed for time, I tend to use Tcl*,
and that sort of thing just doesn't matter, outside of
pulling data from files or sockets.

*path dependency...
This is why there are many languages and many development frameworks and
many development environments. The sweet spot for tradeoffs on a
multi-user 248K PDP-11 is not the same as the sweet spot for tradeoffs
on a 1MB Mac Plus is not the same as the sweet spot for tradeoffs on a
512MB iPhone 5.

May the ghost of Grace Murray Hopper haunt you with her nanosecond! :)

I mean, since it's getting to be "A Christmas Carol" season and all :)
 

Thomas Jahns

The reason is always that either the customer didn't have enough memory
installed, or some other process was taking up the memory.

Except that on today's machines it's more often address space than physical
memory that runs out. At least for the large chunk of programs still
compiled for 32-bit execution environments.

Thomas
 

Thomas Jahns

R-O-N-G. The decision about how to respond typically involves
a large amount of the caller's context -- in fact, a large amount
of the caller's caller's caller's caller's context (which is why
strategies like "call abort() on any failure" are stupid, and
"call a caller-supplied callback, passing it not enough information"
are even stupider).

For some programs that might be true. For the big automatons we run, error
recovery is usually not possible without user intervention (meaning the users
change input or program logic). Failing early and hard is usually the most sane
option.

Thomas
 

glen herrmannsfeldt

Except that on today's machines it's more often address space
than physical memory that runs out. At least for the large chunk
of programs still compiled for 32-bit execution environments.

I suppose, but just barely.

Reminds me of about 20 years ago when I was using a 486 machine
with 8MB as a router (no-one wanted it for anything else) and
put a brand new 2GB disk in it to run FreeBSD. I allocated 1GB
for swap, so 128 times physical memory.

But 4GB physical (installed) memory is pretty common now, though
it doesn't cost all that much for more. With the memory used by the
OS and other things that have to run, you really don't want more
than 2GB allocated for a user program. Many 32 bit systems limit
user address space to 2GB, leaving 2GB for the OS.

If programs made better use of virtual memory, larger address
space would be more useful.

-- glen
 

Charlton Wilbur

TJ> For some programs that might be true. For the big automatons we
TJ> run, error recovery is usually not possible without user
TJ> intervention (meaning the users change input or program
TJ> logic). Failing early and hard is usually the most sane option.

And since there are many possible sets of circumstances, there are many
programmers. And many programming languages.

Charlton
 

BGB

I suppose, but just barely.

Reminds me of about 20 years ago when I was using a 486 machine
with 8MB as a router (no-one wanted it for anything else) and
put a brand new 2GB disk in it to run FreeBSD. I allocated 1GB
for swap, so 128 times physical memory.

But 4GB physical (installed) memory is pretty common now, though
it doesn't cost all that much for more. With the memory used by the
OS and other things that have to run, you really don't want more
than 2GB allocated for a user program. Many 32 bit systems limit
user address space to 2GB, leaving 2GB for the OS.

If programs made better use of virtual memory, larger address
space would be more useful.

In most computers I have seen in recent years, 8GB or 16GB has gotten a
lot more common, with some higher-end "gamer rigs" with 32GB and similar
(ex: 4x 8GB modules...).

Newer PCs coming with 4GB is at this point mostly laptop territory.


My desktop PC has 16GB of RAM in it, FWIW (4x 4GB).
 

Siri Cruz

in most computers I have seen in recent years, 8GB or 16GB has gotten a
lot more common, with some higher-end "gamer rigs" with 32GB and similar
(ex: 4x 8GB modules...).

That's real memory, not virtual memory. I have 4 GB real memory and currently
180 GB virtual. Apple has switched to 64-bit virtual byte addresses, but the
limit on the virtual address space may be smaller because of restricted
address translation hardware. The real memory address is currently up to
about 35 bits; hardware restrictions might impose a limit below 64 bits.

The kernel and address translation hardware convert the potentially 64-bit
virtual address down to page faults or the much smaller 32-bit real address
space on my Mac. Or the slightly larger real address space on the Mac next to it.
 

BGB

For some programs that might be true. For the big automatons we run, error
recovery is usually not possible without user intervention (meaning the users
change input or program logic). Failing early and hard is usually the most sane
option.

And that is almost completely non-viable for end-user graphical application
software...


If the app just exits and dumps the user off at the desktop, they are
more likely to have a response like "WTF?!".

Better, at least, is to provide a notification error-box: "hey, this crap
has died on you.", or more often to attempt error recovery, very often
while playing a "ding" sound effect and/or popping up a notification box.


Many other types of applications are largely autonomous and will try to
handle any recovery on their own, filling in any holes with a plausible
substitute.

This is much more common in things like games and graphical software
(such as 3D modeling software, ...):
"hey, this 3D model uses a material which can't be loaded?! well, just
use some sort of generic checkerboard placeholder pattern or similar
instead, and maybe print an error message to the in-program console."



But it doesn't really make much sense to have completely different
infrastructure for command-line tools vs end-user application software,
so usually a general compromise is needed.

Most often, this is either some sort of exception mechanism, or
returning status indicators of some sort.
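
A sketch of that compromise (everything here -- the app_status enum,
load_model, show_error_box -- is invented for illustration): the core code
reports failure through a status return, and each front end decides
whether to fail hard or pop up a box and carry on.

#include <stdio.h>

/* Hypothetical status codes shared by both front ends. */
typedef enum { APP_OK, APP_ERR_NOMEM, APP_ERR_BADFILE } app_status;

/* Stub for the real work: reports failure through its return value
   instead of exiting, so each front end decides what to do with it. */
static app_status load_model(const char *path)
{
    return (path != NULL && path[0] != '\0') ? APP_OK : APP_ERR_BADFILE;
}

/* Stand-in for a GUI notification box. */
static void show_error_box(const char *msg)
{
    printf("[dialog] %s\n", msg);
}

/* Command-line front end: fail early and hard. */
static int cli_open(const char *path)
{
    app_status st = load_model(path);
    if (st != APP_OK) {
        fprintf(stderr, "load failed (status %d)\n", st);
        return 1;
    }
    return 0;
}

/* GUI front end: report the same status in a dialog and carry on. */
static void on_open_clicked(const char *path)
{
    if (load_model(path) != APP_OK)
        show_error_box("Couldn't load the model; try another file.");
}

int main(void)
{
    on_open_clicked("");           /* GUI path: pops a box, carries on */
    return cli_open("scene.mdl");  /* CLI path: status goes to the shell */
}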
 

BGB

That's real memory, not virtual memory. I have 4 GB real memory and currently
180 GB virtual. Apple has switched to 64-bit virtual byte addresses, but the
limit on the virtual address space may be smaller because of restricted
address translation hardware. The real memory address is currently up to
about 35 bits; hardware restrictions might impose a limit below 64 bits.

The kernel and address translation hardware convert the potentially 64-bit
virtual address down to page faults or the much smaller 32-bit real address
space on my Mac. Or the slightly larger real address space on the Mac next to it.

But, yes, the specific topic there was physical memory installed, which
is at this point typically 8GB or 16GB, rather than 4GB (at least in
newer desktop PCs, never mind older desktop PCs or laptops).

Also, never mind that a 32-bit process is normally limited to 2GB or 3GB,
and an application will use up the 32-bit virtual space well before the
available physical RAM is exhausted.


It is a very different situation on my old 2003-era laptop, which has a
larger virtual-address space than physical RAM (and as such using up the
whole 2-3GB of VA space will result in considerable swapping), and this
is only really doable because I went and turned the swap up to 8GB
(which is about 1/6 of said laptop's HDD space as well...).
 

Ian Collins

Thomas said:
For some programs that might be true. For the big automatons we run, error
recovery is usually not possible without user intervention (meaning the users
change input or program logic). Failing early and hard is usually the most sane
option.

Exceptions are a good option in this case. If you can't handle the
exception, the program will abort (fail early and fast). When you can
(say, when there is a user to prompt), the result is better than a crash.
 

Ian Collins

Charlton said:
П> I think better solution is to use exceptions. You can't ignore
П> an exception, whilst return code is very easy to ignore.

Exceptions add overhead to the runtime, which makes them only a better
solution in situations where that overhead is acceptable.

When implemented well, they only add overhead when they are thrown. The
normal code path should be faster and clearer without the overhead (to
both the human reader and the machine) of error-checking code.
 

protherojeff

"null Considered Harmful"

http://peat.org/2013/12/07/null-considered-harmful/



"In almost every software project I've been a part of,
the majority of errors I've encountered are caused by
unexpected null references. Many are caught at compile
or test time, but they always creep though, and few
platforms are exempt.

Null-pointer errors are characteristic of C variants, which use 1970's-era
typechecking. Modern typesafe languages eliminate this complete class of
bugs at compiletime, basically by treating "foo*, might be NULL" as a
different type from "foo*, known to be non-NULL". In Mythryl (which I happen
to maintain... :) the distinction is between types Null_Or(Foo) vs Foo.
(Null_Or() is a type constructor, another feature of modern languages based
in Hindley-Milner-Damas type-inference/type-checking.) Dereferencing a
pointer is not allowed unless it is known to be non-null.

In a language with this sort of typechecking, returning NULL pointers is
actually safer than throwing an exception, in general: it is easy to forget
to catch the exception at the appropriate leaf in the code, but the
typechecker guarantees that the leaf has to check for NULL before
dereferencing.

This might sound clumsy and intrusive, but actually it works very smoothly.
The type inference means that one rarely has to actually specify types
except at compilation-unit interfaces (the code looks more like Ruby than
Java, due to the pervasive lack of type declarations), and the overwhelming
majority of pointers are guaranteed by the typechecker to be non-NULL, so
it is fairly rare to have to explicitly check for NULL -- this typically
happens when calling a library routine that may fail due to filesystem
issues (permission, missing file) or such.

I've been programming in Mythryl pretty intensively for about ten years
now, and I must say in all that time I've never seen a null pointer bug,
and I haven't missed the experience one bit. :)

This technology is slowly trickling down to legacy languages like Java. See
for example
http://help.eclipse.org/juno/index....oc.user/tasks/task-using_null_annotations.htm.
So even if your installed base prevents you from upgrading to a modern
language, there is still hope!
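
As an aside, a faint echo of this is available even to C programmers: GCC
and Clang accept a "nonnull" function attribute, which lets the compiler
warn (via -Wnonnull) when a null pointer is passed where one was declared
not to be allowed. A minimal sketch of that much weaker, per-parameter
check -- nothing like the whole-program guarantee described above:

#include <stdio.h>

/* The nonnull attribute promises the compiler that argument 1 is
   never a null pointer; passing a literal NULL can then be flagged
   at compile time. */
size_t count_chars(const char *s, char c) __attribute__((nonnull(1)));

size_t count_chars(const char *s, char c)
{
    size_t n = 0;
    while (*s)
        if (*s++ == c)
            n++;
    return n;
}

int main(void)
{
    printf("%zu\n", count_chars("banana", 'a'));
    /* count_chars(NULL, 'a');   -- GCC/Clang warn about this call */
    return 0;
}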
 

glen herrmannsfeldt

(snip)
Null-pointer errors are characteristic of C variants, which use
1970's-era typechecking. Modern typesafe languages eliminate
this complete class of bugs at compiletime, basically by treating
"foo*, might be NULL" as a different type from "foo*, known to
be non-NULL". In Mythryl (which I happen to maintain... :)
the distinction is between types Null_Or(Foo) vs Foo.
(Null_Or() is a type constructor, another feature of modern
languages based in Hindley-Milner-Damas type-inference/type-checking.)
Dereferencing a pointer is not allowed unless it is known to
be non-null.

Hmm. I have written a number of simple tree processing routines
in Java, and usually check for null at the top, instead of before
each recursive call. For one, it means one if instead of two,
and usually in a simpler place. That means more recursion depth,
though.
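
In C terms that tradeoff looks something like the sketch below (the struct
node type is invented for illustration): one NULL test at the top of the
recursion, versus a test on each child before recursing.

#include <stddef.h>

struct node {
    struct node *left;
    struct node *right;
};

/* One NULL check at the top of the recursion: a single test, in one
   place, at the cost of an extra recursion level for each absent child. */
size_t tree_size(const struct node *t)
{
    if (t == NULL)
        return 0;
    return 1 + tree_size(t->left) + tree_size(t->right);
}

/* The alternative: test each child before recursing. Two tests per
   node, and the recursion never descends into a NULL subtree, but the
   caller must still never pass a NULL root. */
size_t tree_size_checked(const struct node *t)
{
    size_t n = 1;
    if (t->left != NULL)
        n += tree_size_checked(t->left);
    if (t->right != NULL)
        n += tree_size_checked(t->right);
    return n;
}
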
In a language with this sort of typechecking, returning NULL
pointers is actually safer than throwing an exception,
in general: It is easy to forget to catch the exception at
the appropriate leaf in the code, but the typechecker guarantees
that the leaf has to check for NULL before dereferencing.

Well, Java won't turn off the null Object test, so you will always
get the exception if you miss.

-- glen
 

Malcolm McLean

Null-pointer errors are characteristic of C variants, which use 1970's-era
typechecking. Modern typesafe languages eliminate this complete class of
bugs at compiletime, basically by treating "foo*, might be NULL" as a
different type from "foo*, known to be non-NULL".
In C++ a reference is known to be non-null.
But in some ways it's a bit of a nuisance, because references always have
to be initialized at the point they come into scope.
So we have:

class Node
{
    Node &link1;
    Node &link2;

    Node()
      : link1( /* what do we put here? */ ),
        link2( /* same problem */ )
    {
    }
};

If you're not careful you end up creating a dummy node, which ends up as
effectively a null pointer, except that the system inherently cannot catch
the bug if you try to use it.
 
