How to choose between malloc() and calloc()


lohith.matad

Hi all,
Though the purpose of both malloc() and calloc() is the same, we also
know that calloc() initializes the allocated locations to zero, and
that malloc() is used for allocating bytes whereas calloc() is used for
allocating a chunk of memory.
Apart from these, is there any strong reason why malloc() is preferred
over calloc(), or vice versa?

Looking forward to your clarifications, preferably detailed.

Thanks in advance
Lohi
 

kernelxu

Hi,
calloc() initializes all bits of the allocated memory to zero; however,
all-bits-zero is not guaranteed to represent the value zero for every
type. It is therefore useless overhead to use calloc() with floating-point
or pointer types, because that memory must still be treated as
uninitialized to avoid undefined behaviour. (Reference:
http://alien.dowling.edu/~rohit/wiki/index.php/C_Programming
"What is the difference between calloc() and malloc()?")
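
For instance, here is a minimal sketch (the struct node is purely
illustrative) of the portable way to get a genuinely null pointer and a
genuine 0.0: assign those members explicitly instead of relying on
calloc()'s all-bits-zero fill.

#include <stdlib.h>

/* Illustrative type only: calloc()'s all-bits-zero fill is not
   guaranteed to make `next' a null pointer or `weight' equal to 0.0. */
struct node {
    struct node *next;
    double weight;
    int count;
};

struct node *make_node(void)
{
    struct node *n = malloc(sizeof *n);
    if (n != NULL) {
        n->next = NULL;    /* explicit null pointer */
        n->weight = 0.0;   /* explicit floating-point zero */
        n->count = 0;
    }
    return n;
}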

 

Suman

Hi all,
Though the purpose of both malloc() and calloc() is the same, we also
know that calloc() initializes the allocated locations to zero, and
that malloc() is used for allocating bytes whereas calloc() is used for
allocating a chunk of memory.
Apart from these, is there any strong reason why malloc() is preferred
over calloc(), or vice versa?

1) Read the FAQ:
http://www.eskimo.com/~scs/C-faq/q7.31.html

2) Search the group
 

Lawrence Kirby

Hi all,
Though the purpose of both malloc() and calloc() is the same, we also
know that calloc() initializes the allocated locations to zero,

It initialises the bytes to zero; this doesn't guarantee that
floating-point objects will be zero or that pointers will be null.
and that malloc() is used for allocating bytes whereas calloc() is used
for allocating a chunk of memory.

There's really no distinction here; a "chunk of memory" is simply a
sequence of bytes. What you allocate in C is memory that can be used to
hold an object, whether that object happens to be an array of char or
something more complex like a structure. malloc() and calloc() can be
used for both.
Apart from these, is there any strong reason why malloc() is preferred
over calloc(), or vice versa?

If you want all-bytes-zero then it makes sense to use calloc(), but again
remember that this doesn't guarantee the initial values of floating-point
or pointer objects. If you don't need the zeroing, calloc() still carries
the overhead of doing it. malloc() is more commonly used because it is
simpler, and setting all bytes to zero is typically not the
initialisation you need.
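
For instance (the buffer and the counter array below are only
illustrative): when the contents are overwritten straight away, the
zeroing is wasted effort; when all-bytes-zero is exactly what you want,
calloc() states that intent directly.

#include <stdlib.h>
#include <string.h>

enum { BUFSIZE = 4096 };

void example(void)
{
    /* Contents are filled in immediately, so pre-zeroing would be
       wasted work -- plain malloc() is enough. */
    char *buf = malloc(BUFSIZE);
    if (buf != NULL)
        memset(buf, 'x', BUFSIZE);   /* stands in for fread(), sprintf(), ... */

    /* An array of counters that should start at zero -- calloc()
       expresses that directly (all-bits-zero is the value zero for
       integer types on any implementation you're likely to meet). */
    unsigned *counts = calloc(256, sizeof *counts);

    free(counts);
    free(buf);
}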

Lawrence
 

Bryan Donlan

Hi all,
Though the purpose of both malloc() and calloc() is the same, we also
know that calloc() initializes the allocated locations to zero, and
that malloc() is used for allocating bytes whereas calloc() is used for
allocating a chunk of memory.
Apart from these, is there any strong reason why malloc() is preferred
over calloc(), or vice versa?

malloc() is likely to be faster, as it does not need to zero the allocated
space. Apart from that, they're essentially equivalent: you can allocate a
chunk with malloc() by multiplying the element size by the number of
elements and allocating that many bytes.
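
Roughly, something like this (the element type and count are arbitrary):
the calloc() call and the malloc()/memset() pair request the same amount
of memory; calloc() just does the multiplication and the zeroing for you.

#include <stdlib.h>
#include <string.h>

void allocate_both_ways(size_t n)
{
    /* n elements of sizeof(double) bytes, zero-filled, in one call. */
    double *a = calloc(n, sizeof *a);

    /* The same size by hand: multiply yourself, then zero if wanted.
       Note the multiplication can overflow for huge n, a case most
       calloc() implementations detect and report by returning NULL. */
    double *b = malloc(n * sizeof *b);
    if (b != NULL)
        memset(b, 0, n * sizeof *b);

    free(a);
    free(b);
}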
 

kar1107

Hi all,
Though the purpose of both malloc() and calloc() is the same, we also
know that calloc() initializes the allocated locations to zero, and
that malloc() is used for allocating bytes whereas calloc() is used for
allocating a chunk of memory.
Apart from these, is there any strong reason why malloc() is preferred
over calloc(), or vice versa?

calloc() has the overhead of zeroing. If the code isn't hitting
performance bottlenecks, I would prefer calloc(). The reason is that it
eliminates a source of randomness, and that is a good thing from a
debugging perspective. A reproducible problem is a lot easier to debug
than one which manifests in a different way on every re-run.

Karthik
 

Eric Sosman

calloc() has the overhead of zeroing. If the code isn't hitting
performance bottlenecks, I would prefer calloc(). The reason is that it
eliminates a source of randomness, and that is a good thing from a
debugging perspective. A reproducible problem is a lot easier to debug
than one which manifests in a different way on every re-run.

The goal of debugging is to remove bugs, not to hide
them. To get them out into the open where they're easier
to squash, a value less "regular" than zero is often a
help. Initializing new allocations with 0xDEADBEEF is
traditional; there's no C library function to do the chore,
but it's not hard. I've also seen good results from using
memset() to fill each new area with 0x99.
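
Something along these lines is all it takes (xmalloc_dbg is just an
illustrative name; a real debugging allocator would also track sizes,
guard bands, and so on):

#include <stdlib.h>
#include <string.h>

/* Like malloc(), but poisons the new block with 0x99 so that reads of
   "uninitialized" memory are more likely to misbehave visibly instead
   of quietly looking like zero. */
void *xmalloc_dbg(size_t size)
{
    void *p = malloc(size);
    if (p != NULL)
        memset(p, 0x99, size);
    return p;
}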

As to the original topic: Others may have a different
experience, but I very seldom use calloc(). Usually I
allocate when I need someplace to store something, so the
malloc() is closely followed by filling the allocated area
with the data I wanted to put there. Pre-zeroing only to
overwrite immediately doesn't seem worth while.
 

Mark F. Haigh

calloc() has the overhead of zeroing. If the code isn't hitting
performance bottlenecks, I would prefer calloc(). The reason is that it
eliminates a source of randomness, and that is a good thing from a
debugging perspective. A reproducible problem is a lot easier to debug
than one which manifests in a different way on every re-run.

Usually seeing a calloc where a malloc should be translates roughly to:
"Warning: the programmer that wrote this was probably an idiot."


Mark F. Haigh
(e-mail address removed)
 

kar1107

Eric said:
The goal of debugging is to remove bugs, not to hide
them.

Using calloc() is hiding them? I am wondering how you arrive at such a
conclusion. Nobody is advocating practices to hide bugs. My statement
clearly says "a reproducible problem is easier to debug" -- where is
hiding mentioned?

To get them out into the open where they're easier
to squash, a value less "regular" than zero is often a
help. Initializing new allocations with 0xDEADBEEF is
traditional; there's no C library function to do the chore,
but it's not hard. I've also seen good results from using
memset() to fill each new area with 0x99.

So you see the benefit of using fixed patterns, right?
Whether 0x99 or 0x0, it does not matter. What matters is
you see the same (wrong) behavior on code execution.
As to the original topic: Others may have a different
experience, but I very seldom use calloc(). Usually I
allocate when I need someplace to store something, so the
malloc() is closely followed by filling the allocated area
with the data I wanted to put there. Pre-zeroing only to
overwrite immediately doesn't seem worth while.

True, if you can clearly see that that is the behavior, malloc()
fits the bill. Otherwise, the extra developer time saved in debugging
because of calloc() usage is a lot more valuable than the little
performance saved by not using it.

Karthik
 

Eric Sosman

Eric said:
[... about flood-filling freshly-allocated memory ...]
Initializing new allocations with 0xDEADBEEF is
traditional; there's no C library function to do the chore,
but it's not hard. I've also seen good results from using
memset() to fill each new area with 0x99.

So you see the benefit of using fixed patterns, right?
Whether 0x99 or 0x0, it does not matter. What matters is
you see the same (wrong) behavior on code execution.

Certainly the value of the fill matters. Pre-filling each
memory allocation attempts to cause an observable malfunction
from one particular class of bug: Fetching something from the
memory area without storing a "real" value first. All-bits-zero
is quite likely to be mistaken for a legitimate NULL (at the
"end" of a linked list gone astray, say), or for an ordinary
zero-valued integer or floating-point value. A program that
fetches such a thing from "uninitialized" memory stands a
reasonable chance of stumbling ahead successfully despite its
bug, producing no malfunction that a tester might notice.

A value like 0x99, on the other hand, has several chances
to produce nastier and more overt misbehavior:

- A pointer full of 0x99's is unlikely to satisfy the
alignment requirements (if any exist) of anything wider
than a `char'. Pluck an "uninitialized" pointer from a
memory area and use it to address a struct or to call a
function, and there's a good chance of something like
SIGBUS.

- A signed integer full of 0x99's is likely to be negative.
In many situations a negative number is nonsensical, and
may cause erroneous behavior a zero would not provoke.
Even if it's nothing worse than "Summary: -1717986919
errors detected" it'll raise more testers' eyebrows than
if the reported number were zero.

- A size_t full of 0x99's is likely to be "very large," so
large that it stands a chance of causing trouble. A call
like memcpy(&target, &source, 0x9999999999999999) is almost
sure to produce a trap of some kind.

- A string full of 0x99's has no terminator, and may well
cause trouble if passed to strlen() or printf() or whatever.
A string full of zeros has a valid terminator, and will
not cause trouble if you strcpy() from it.
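
As a small, hedged illustration of the sort of values a 0x99 fill
produces (the exact numbers are implementation-dependent; the comments
assume a two's-complement int32_t and a 64-bit size_t):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    unsigned char fill[16];
    int32_t i;
    size_t s;

    memset(fill, 0x99, sizeof fill);
    memcpy(&i, fill, sizeof i);   /* copy out to dodge alignment/aliasing issues */
    memcpy(&s, fill, sizeof s);

    /* Typically prints -1717986919 for the int32_t and an enormous,
       implausible count for the size_t. */
    printf("int32_t: %ld\n", (long)i);
    printf("size_t : %zu\n", s);
    return 0;
}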

Other fill patterns have their advantages, too: Filling
freshly-allocated memory with signalling NaNs seems a promising
avenue, for example. A good debugging allocator should probably
be able to use different fill patterns depending on an environment
variable or some such.[*]

[*] On one system I used years ago, privileged code could
set and clear the memory parity bits at will, regardless of the
data; the capability was used in diagnostic programs. One fellow
attempted to exploit this as a cheap way of catching this class
of bug: He'd set bad parity in uninitialized memory areas. If
the program wrote to the area it would regenerate correct parity
in the process, but if it read before reading there'd be a machine
malfunction trap which he could intercept. Unfortunately, if the
program crashed for some other reason and the core dump routine
came along ... We had to take the imaginative fellow aside and
speak to him rather sternly.

True, if you can clearly see that that is the behavior, malloc()
fits the bill. Otherwise, the extra developer time saved in debugging
because of calloc() usage is a lot more valuable than the little
performance saved by not using it.

We agree on the purpose, but not on the tactics. I feel
that a "clean" zero is less likely to expose a latent bug than
is a fill pattern constructed with diabolic intent. Also, if
the developer saves debugging time at the cost of letting more
bugs through, the trade-off needs justification (I'm not saying
it's always unjustifiable, just that it's a risk that needs
assessment). Finally, note that using calloc() rules out poisonous
fill patterns, making this bug-provoking technique unavailable.
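
For what it's worth, here is a minimal sketch of such a configurable
allocator (dbg_malloc and the DBG_FILL environment variable are made-up
names for illustration):

#include <stdlib.h>
#include <string.h>

/* Fill each new block with a byte taken from the (made-up) DBG_FILL
   environment variable, defaulting to 0x99 when it is absent or bad. */
void *dbg_malloc(size_t size)
{
    static int fill = -1;
    void *p;

    if (fill < 0) {                       /* parse the fill byte once */
        const char *env = getenv("DBG_FILL");
        char *end;
        long v = env ? strtol(env, &end, 0) : -1;
        fill = (env && *env && *end == '\0' && v >= 0 && v <= 0xFF)
                   ? (int)v : 0x99;
    }

    p = malloc(size);
    if (p != NULL)
        memset(p, fill, size);
    return p;
}

Exporting, say, DBG_FILL=0x00 or DBG_FILL=0x99 before a run then keeps
the "uninitialized" contents deterministic while still letting the
poison value vary between runs.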
 

Eric Sosman

Eric said:
[...] If
the program wrote to the area it would regenerate correct parity
in the process, but if it read before reading [...]

Tricky, that. Should have been "before writing," of course.
 

kar1107

Eric said:
Eric said:
[... about flood-filling freshly-allocated memory ...]
Initializing new allocations with 0xDEADBEEF is
traditional; there's no C library function to do the chore,
but it's not hard. I've also seen good results from using
memset() to fill each new area with 0x99.

So you see the benefit of using fixed patterns, right?
Whether 0x99 or 0x0, it does not matter. What matters is
you see the same (wrong) behavior on code execution.

Certainly the value of the fill matters. Pre-filling each
memory allocation attempts to cause an observable malfunction
from one particular class of bug: Fetching something from the
memory area without storing a "real" value first. All-bits-zero
is quite likely to be mistaken for a legitimate NULL (at the
"end" of a linked list gone astray, say), or for an ordinary
zero-valued integer or floating-point value. A program that
fetches such a thing from "uninitialized" memory stands a
reasonable chance of stumbling ahead successfully despite its
bug, producing no malfunction that a tester might notice.

A value like 0x99, on the other hand, has several chances
to produce nastier and more overt misbehavior:

- A pointer full of 0x99's is unlikely to satisfy the
alignment requirements (if any exist) of anything wider
than a `char'. Pluck an "uninitialized" pointer from a
memory area and use it to address a struct or to call a
function, and there's a good chance of something like
SIGBUS.

- A signed integer full of 0x99's is likely to be negative.
In many situations a negative number is nonsensical, and
may cause erroneous behavior a zero would not provoke.
Even if it's nothing worse than "Summary: -1717986919
errors detected" it'll raise more testers' eyebrows than
if the reported number were zero.

- A size_t full of 0x99's is likely to be "very large," so
large that it stands a chance of causing trouble. A call
like memcpy(&target, &source, 0x9999999999999999) is almost
sure to produce a trap of some kind.

- A string full of 0x99's has no terminator, and may well
cause trouble if passed to strlen() or printf() or whatever.
A string full of zeros has a valid terminator, and will
not cause trouble if you strcpy() from it.

Other fill patterns have their advantages, too: Filling
freshly-allocated memory with signalling NaNs seems a promising
avenue, for example. A good debugging allocator should probably
be able to use different fill patterns depending on an environment
variable or some such.[*]

I see your point. Yes, a fixed pattern like 0x99 is better than 0x0
as a debugging aid. My requirement is that I prefer deterministic
values: when I step through the debugger and see a value, I like it to
provide some clue. Your 0x99 certainly provides that; a simple 0x0
does too. But a truly random value (which is all malloc() guarantees)
doesn't help -- say it happens to look like a perfectly valid pointer
inside a struct I have just dumped in the debugger.

Within the standard language, if there were an allocation routine that
gave me a region prefilled with a fixed pattern, I'd use it always.
Unfortunately there is none except calloc().

Sure, there is a tradeoff here. If I knew for sure my malloc() would
prefill with some magic value (like 0xDEADBEEF, or say 0x0D0D0D0D, or
0x99), then I would prefer that malloc() to calloc().

Karthik
 
