Garbage collection


Tom Wright

Hi all

I suspect I may be missing something vital here, but Python's garbage
collection doesn't seem to work as I expect it to. Here's a small test
program which shows the problem on python 2.4 and 2.5:

$ python2.5
Python 2.5 (release25-maint, Dec 9 2006, 15:33:01)
[GCC 4.1.2 20061115 (prerelease) (Debian 4.1.1-20)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(at this point, Python is using 15MB)

(at this point, Python is using 327MB)

(at this point, Python is using 251MB)

(at this point, Python is using 252MB)
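
The code between those readings was stripped when the thread was archived. Going by the later replies, which mention a list of ten million ints and a gc.collect() call that returned 0, the session was presumably something along these lines (a reconstruction, not the original post):

>>> import gc
>>> a = range(10 ** 7)   # a list of ten million ints: the jump to ~327MB
>>> del a                # drop the only reference: the fall back to ~251MB
>>> gc.collect()         # force a collection: ~252MB afterwards
0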


Is there something I've forgotten to do? Why is Python still using such a
lot of memory?


Thanks!
 

Thinker


Tom said:
Hi all

I suspect I may be missing something vital here, but Python's garbage
collection doesn't seem to work as I expect it to. Here's a small test
program which shows the problem on python 2.4 and 2.5:
................ skip .....................
(at this point, Python is using 252MB)


Is there something I've forgotten to do? Why is Python still using such a
lot of memory?


Thanks!
How do you know the amount of memory used by Python?
ps, top, or something like that?

 

Tom Wright

Thinker said:
How do you know the amount of memory used by Python?
ps, top, or something like that?

$ ps up `pidof python2.5`
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
tew24 26275 0.0 11.9 257592 243988 pts/6 S+ 13:10 0:00 python2.5

"VSZ" is "Virtual Memory Size" (ie. total memory used by the application)
"RSS" is "Resident Set Size" (ie. non-swapped physical memory)
 

skip

Tom> I suspect I may be missing something vital here, but Python's
Tom> garbage collection doesn't seem to work as I expect it to. Here's
Tom> a small test program which shows the problem on python 2.4 and 2.5:

Tom> (at this point, Python is using 15MB)

Tom> (at this point, Python is using 252MB)

Tom> Is there something I've forgotten to do? Why is Python still using
Tom> such a lot of memory?

You haven't forgotten to do anything. Your attempts at freeing memory are
being thwarted (in part, at least) by Python's int free list. I believe the
int free list remains after the 10M individual ints' refcounts drop to zero.
The large storage for the list is grabbed in one gulp and thus mmap()d I
believe, so it is reclaimed by being munmap()d, hence the drop from 320+MB
to 250+MB.

I haven't looked at the int free list or obmalloc implementations in awhile,
but if the free list does return any of its memory to the system it probably
just calls the free() library function. Whether or not the system actually
reclaims any memory from your process depends on the details of the
malloc/free implementation. That is, the behavior is outside Python's
control.

Skip
 

Thinker


Tom said:
$ ps up `pidof python2.5`
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
tew24 26275 0.0 11.9 257592 243988 pts/6 S+ 13:10 0:00 python2.5

"VSZ" is "Virtual Memory Size" (ie. total memory used by the application)
"RSS" is "Resident Set Size" (ie. non-swapped physical memory)

This is the amount of memory allocated to the process, not the amount in
use by the Python interpreter. It is managed by the C library's malloc().
When you free a block of memory with free(), it is only returned to the C
library for later reuse; the C library does not always return it to the
kernel.

Since modern OSes use virtual memory, inactive memory is simply paged out
when more physical memory is needed. It doesn't hurt much if you have
enough swap space.

So what you get from the ps command is the memory allocated to the
process, which doesn't mean it is all in use by the Python interpreter.

 

Tom Wright

You haven't forgotten to do anything. Your attempts at freeing memory are
being thwarted (in part, at least) by Python's int free list. I believe
the int free list remains after the 10M individual ints' refcounts drop to
zero. The large storage for the list is grabbed in one gulp and thus
mmap()d I believe, so it is reclaimed by being munmap()d, hence the drop
from 320+MB to 250+MB.

I haven't looked at the int free list or obmalloc implementations in
awhile, but if the free list does return any of its memory to the system
it probably just calls the free() library function. Whether or not the
system actually reclaims any memory from your process depends on the
details of the malloc/free implementation. That is, the behavior is
outside Python's control.

Ah, thanks for explaining that. I'm a little wiser about memory allocation
now, but am still having problems reclaiming memory from unused objects
within Python. If I do the following:
(memory use: 953 MB)

...and then I allocate a lot of memory in another process (eg. open a load
of files in the GIMP), then the computer swaps the Python process out to
disk to free up the necessary space. Python's memory use is still reported
as 953 MB, even though nothing like that amount of space is needed. From
what you said above, the problem is in the underlying C libraries, but is
there anything I can do to get that memory back without closing Python?
 

skip

Tom> ...and then I allocate a lot of memory in another process (eg. open
Tom> a load of files in the GIMP), then the computer swaps the Python
Tom> process out to disk to free up the necessary space. Python's
Tom> memory use is still reported as 953 MB, even though nothing like
Tom> that amount of space is needed. From what you said above, the
Tom> problem is in the underlying C libraries, but is there anything I
Tom> can do to get that memory back without closing Python?

Not really. I suspect the unused pages of your Python process are paged
out, but that Python has just what it needs to keep going. Memory
contention would be a problem if your Python process wanted to keep that
memory active at the same time as you were running GIMP. I think the
process's resident size is more important here than virtual memory size (as
long as you don't exhaust swap space).

Skip
 

Tom Wright

Tom> ...and then I allocate a lot of memory in another process (eg. open
Tom> a load of files in the GIMP), then the computer swaps the Python
Tom> process out to disk to free up the necessary space. Python's
Tom> memory use is still reported as 953 MB, even though nothing like
Tom> that amount of space is needed. From what you said above, the
Tom> problem is in the underlying C libraries, but is there anything I
Tom> can do to get that memory back without closing Python?

Not really. I suspect the unused pages of your Python process are paged
out, but that Python has just what it needs to keep going.

Yes, that's what's happening.

Memory contention would be a problem if your Python process wanted to keep
that memory active at the same time as you were running GIMP.

True, but why does Python hang on to the memory at all? As I understand it,
it's keeping a big lump of memory on the int free list in order to make
future allocations of large numbers of integers faster. If that memory is
about to be paged out, then surely future allocations of integers will be
*slower*, as the system will have to:

1) page out something to make room for the new integers
2) page in the relevant chunk of the int free list
3) zero all of this memory and do any other formatting required by Python

If Python freed (most of) the memory when it had finished with it, then all
the system would have to do is:

1) page out something to make room for the new integers
2) zero all of this memory and do any other formatting required by Python

Surely Python should free the memory if it's not been used for a certain
amount of time (say a few seconds), as allocation times are not going to be
the limiting factor if it's gone unused for that long. Alternatively, it
could mark the memory as some sort of cache, so that if it needed to be
paged out, it would instead be de-allocated (thus saving the time taken to
page it back in again when it's next needed)

I think the process's resident size is more important here than virtual
memory size (as long as you don't exhaust swap space).

True in theory, but the computer does tend to go rather sluggish when paging
large amounts out to disk and back. Surely the use of virtual memory
should be avoided where possible, as it is so slow? This is especially
true when the contents of the blocks paged out to disk will never be read
again.


I've also tested similar situations on Python under Windows XP, and it shows
the same behaviour, so I think this is a Python and/or GCC/libc issue,
rather than an OS issue (assuming Python for linux and Python for windows
are both compiled with GCC).
 

Steve Holden

Tom said:
Yes, that's what's happening.


True, but why does Python hang on to the memory at all? As I understand it,
it's keeping a big lump of memory on the int free list in order to make
future allocations of large numbers of integers faster. If that memory is
about to be paged out, then surely future allocations of integers will be
*slower*, as the system will have to:

1) page out something to make room for the new integers
2) page in the relevant chunk of the int free list
3) zero all of this memory and do any other formatting required by Python

If Python freed (most of) the memory when it had finished with it, then all
the system would have to do is:

1) page out something to make room for the new integers
2) zero all of this memory and do any other formatting required by Python

Surely Python should free the memory if it's not been used for a certain
amount of time (say a few seconds), as allocation times are not going to be
the limiting factor if it's gone unused for that long. Alternatively, it
could mark the memory as some sort of cache, so that if it needed to be
paged out, it would instead be de-allocated (thus saving the time taken to
page it back in again when it's next needed)

Easy to say. How do you know the memory that's not in use is in a
contiguous block suitable for return to the operating system? I can
pretty much guarantee it won't be. CPython doesn't use a relocating
garbage collection scheme, so objects always stay at the same place in
the process's virtual memory unless they have to be grown to accommodate
additional data.

True in theory, but the computer does tend to go rather sluggish when paging
large amounts out to disk and back. Surely the use of virtual memory
should be avoided where possible, as it is so slow? This is especially
true when the contents of the blocks paged out to disk will never be read
again.

Right. So all we have to do is identify those portions of memory that
will never be read again and return them to the OS. That should be easy.
Not.

I've also tested similar situations on Python under Windows XP, and it shows
the same behaviour, so I think this is a Python and/or GCC/libc issue,
rather than an OS issue (assuming Python for linux and Python for windows
are both compiled with GCC).

It's probably a dynamic memory issue. Of course if you'd like to provide
a patch to switch it over to a relocating garbage collection scheme
we'll all await it with bated breath :)

regards
Steve
 

Dennis Lee Bieber

True, but why does Python hang on to the memory at all? As I understand it,
it's keeping a big lump of memory on the int free list in order to make
future allocations of large numbers of integers faster. If that memory is
about to be paged out, then surely future allocations of integers will be
*slower*, as the system will have to:

It may not just be that free list -- which on a machine with lots of
RAM may never be paged out anyway [mine (XP) currently shows: physical
memory total/available/system: 2095196/1355296/156900K, commit charge
total/limit/peak: 514940/3509272/697996K (limit includes page/swap file
of 1.5GB)] -- it could easily just be that the OS or runtime just
doesn't return memory to the OS until a process/executable image exits.
 

skip

Tom> True, but why does Python hang on to the memory at all? As I
Tom> understand it, it's keeping a big lump of memory on the int free
Tom> list in order to make future allocations of large numbers of
Tom> integers faster. If that memory is about to be paged out, then
Tom> surely future allocations of integers will be *slower*, as the
Tom> system will have to:

Tom> 1) page out something to make room for the new integers
Tom> 2) page in the relevant chunk of the int free list
Tom> 3) zero all of this memory and do any other formatting required by
Tom> Python

If your program's behavior is:

* allocate a list of 1e7 ints
* delete that list

how does the Python interpreter know your next bit of execution won't be to
repeat the allocation? In addition, checking to see that an arena in the
free list can be freed is itself not a free operation. From the comments at
the top of intobject.c:

free_list is a singly-linked list of available PyIntObjects, linked
via abuse of their ob_type members.

Each time an int is allocated, the free list is checked to see if it's got a
spare object lying about. If so, it is plucked from the list
and reinitialized appropriately. If not, a new block of memory sufficient
to hold about 250 ints is grabbed via a call to malloc, which *might* have
to grab more memory from the OS. Once that block is allocated, it's strung
together into a free list via the above ob_type slot abuse. Then the 250 or
so items are handed out one-by-one as needed and stitched back into the free
list as they are freed.

Now consider how difficult it is to decide if that block of 250 or so
objects is all unused so that we can free() it. We have to walk through the
list and check to see if that chunk is in the free list. That's complicated
by the fact that the ref count fields aren't initialized to zero until a
particular chunk is first used as an allocated int object and would have to
be to support this block free operation (=> more cost up front). Still,
assume we can semi-efficiently determine that a particular block is composed
of all freed int-object-sized chunks. We will then unstitch it from the
chain of blocks and call free() to free it. Still, we are left with the
behavior of the operating system's malloc/free implementation. It probably
won't sbrk() the block back to the OS, so after all that work your process
still holds the memory.

Okay, so malloc/free won't work. We could boost the block size up to the
size of a page and use mmap() to map a page into memory. I suspect that
would become still more complicated to implement, and the block size being
probably about eight times larger than the current block size would incur
even more cost to determine if it was full of nothing but freed objects.
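
(To make the scheme above concrete, here is a toy model in Python -- the real code is C in Objects/intobject.c, and the class names here are invented for illustration. It shows why answering "is this whole block unused?" means inspecting every slot in it.)

BLOCK_SIZE = 250   # roughly the number of ints skip mentions per malloc'd block

class Block(object):
    """One malloc'd chunk, carved into BLOCK_SIZE int-sized slots."""
    def __init__(self):
        self.free_slots = list(range(BLOCK_SIZE))   # every slot starts out free

    def alloc(self):
        if self.free_slots:
            return self.free_slots.pop()    # hand out a slot index
        return None

    def release(self, slot):
        self.free_slots.append(slot)        # a freed int goes back on the free list

    def is_empty(self):
        # The expensive question: is *every* slot back on the free list?
        # Only then could the whole block be handed back via free().
        return len(self.free_slots) == BLOCK_SIZE

class IntAllocator(object):
    def __init__(self):
        self.blocks = []

    def alloc(self):
        for block in self.blocks:
            slot = block.alloc()
            if slot is not None:
                return block, slot
        block = Block()                     # the malloc() step: grab a fresh block
        self.blocks.append(block)
        return block, block.alloc()

    def release(self, handle):
        block, slot = handle
        block.release(slot)
        # CPython never takes the next step for ints, which is why the memory
        # stays with the process:
        #     if block.is_empty(): self.blocks.remove(block)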

Tom> If Python freed (most of) the memory when it had finished with it,
Tom> then all the system would have to do is:

That's the rub. Figuring out when it is truly "finished" with the memory.

Tom> Surely Python should free the memory if it's not been used for a
Tom> certain amount of time (say a few seconds), as allocation times are
Tom> not going to be the limiting factor if it's gone unused for that
Tom> long.

This is generally the point in such discussions where I respond with
something like, "patches cheerfully accepted". ;-) If you're interested in
digging into this, have a look at the free list implementation in
Objects/intobject.c. It might make for a good Google Summer of Code
project:

http://code.google.com/soc/psf/open.html
http://code.google.com/soc/psf/about.html

but I'm not the guy you want mentoring such a project. There are a lot of
people who understand the ins and outs of Python's memory allocation code
much better than I do.

Tom> I've also tested similar situations on Python under Windows XP, and
Tom> it shows the same behaviour, so I think this is a Python and/or
Tom> GCC/libc issue, rather than an OS issue (assuming Python for linux
Tom> and Python for windows are both compiled with GCC).

Sure, my apologies. The malloc/free implementation is strictly speaking not
part of the operating system. I tend to mentally lump them together because
it's uncommon for people to use a malloc/free implementation different than
the one delivered with their computer.

Skip
 

Steven D'Aprano

On Wed, 21 Mar 2007 15:03:17 +0000, Tom Wright wrote:

[snip]
Ah, thanks for explaining that. I'm a little wiser about memory allocation
now, but am still having problems reclaiming memory from unused objects
within Python. If I do the following:

(memory use: 953 MB)

...and then I allocate a lot of memory in another process (eg. open a load
of files in the GIMP), then the computer swaps the Python process out to
disk to free up the necessary space. Python's memory use is still reported
as 953 MB, even though nothing like that amount of space is needed.

Who says it isn't needed? Just because *you* have only one object
existing, doesn't mean the Python environment has only one object existing.

From what you said above, the problem is in the underlying C libraries,

What problem?

Nothing you've described seems like a problem to me. It sounds like a
modern, 21st century operating system and programming language working
like they should. Why do you think this is a problem?

You've described an extremely artificial set of circumstances: you create
40,000,000 distinct integers, then immediately destroy them. The obvious
solution to that "problem" of Python caching millions of integers you
don't need is not to create them in the first place.

In real code, the chances are that if you created 4e7 distinct integers
you'll probably need them again -- hence the cache. So what's your actual
problem that you are trying to solve?

but is there anything I can do to get that memory back without closing
Python?

Why do you want to manage memory yourself anyway? It seems like a
horrible, horrible waste to use a language designed to manage memory for
you, then insist on over-riding its memory management.

I'm not saying that there is never any good reason for fine control of the
Python environment, but this doesn't look like one to me.
 

Steven D'Aprano

True, but why does Python hang on to the memory at all? As I understand it,
it's keeping a big lump of memory on the int free list in order to make
future allocations of large numbers of integers faster. If that memory is
about to be paged out, then surely future allocations of integers will be
*slower*, as the system will have to:

1) page out something to make room for the new integers
2) page in the relevant chunk of the int free list
3) zero all of this memory and do any other formatting required by Python

If Python freed (most of) the memory when it had finished with it, then all
the system would have to do is:

1) page out something to make room for the new integers
2) zero all of this memory and do any other formatting required by Python

Surely Python should free the memory if it's not been used for a certain
amount of time (say a few seconds), as allocation times are not going to be
the limiting factor if it's gone unused for that long. Alternatively, it
could mark the memory as some sort of cache, so that if it needed to be
paged out, it would instead be de-allocated (thus saving the time taken to
page it back in again when it's next needed)

And increasing the time it takes to re-create the objects in the cache
subsequently.

Maybe this extra effort is worthwhile when the free int list holds 10**7
ints, but is it worthwhile when it holds 10**6 ints? How about 10**5 ints?
10**3 ints?

How many free ints is "typical" or even "common" in practice?

The lesson I get from this is, instead of creating such an enormous list
of integers in the first place with range(), use xrange() instead.

Fresh running instance of Python 2.5:

$ ps up 9579
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
steve 9579 0.0 0.2 6500 2752 pts/7 S+ 03:42 0:00 python2.5


Run from within Python:
>>> n = 0
>>> for i in xrange(int(1e7)):
...     # create lots of ints, one at a time
...     # instead of all at once
...     n += i # make sure the int is used
...
>>> n
49999995000000L


And the output of ps again:

$ ps up 9579
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
steve 9579 4.2 0.2 6500 2852 pts/7 S+ 03:42 0:11 python2.5

Barely moved a smidgen.

For comparison, here's what ps reports after I create a single list with
range(int(1e7)), and again after I delete the list:

$ ps up 9579 # after creating list with range(int(1e7))
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
steve 9579 1.9 15.4 163708 160056 pts/7 S+ 03:42 0:11 python2.5

$ ps up 9579 # after deleting list
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
steve 9579 1.7 11.6 124632 120992 pts/7 S+ 03:42 0:12 python2.5


So there is another clear advantage to using xrange instead of range,
unless you specifically need all ten million ints all at once.
 

Tom Wright

Steven said:
You've described an extremely artificial set of circumstances: you create
40,000,000 distinct integers, then immediately destroy them. The obvious
solution to that "problem" of Python caching millions of integers you
don't need is not to create them in the first place.

I know it's a very artificial setup - I was trying to make the situation
simple to demonstrate in a few lines. The point was that it's not caching
the values of those integers, as they can never be read again through the
Python interface. It's just holding onto the space they occupy in case
it's needed again.

So what's your actual problem that you are trying to solve?

I have a program which reads a few thousand text files, converts each to a
list (with readlines()), creates a short summary of the contents of each (a
few floating point numbers) and stores this summary in a master list. From
the amount of memory it's using, I think that the lists containing the
contents of each file are kept in memory, even after there are no
references to them. Also, if I tell it to discard the master list and
re-read all the files, the memory use nearly doubles so I presume it's
keeping the lot in memory.
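
(Tom's summarising code isn't shown. A hypothetical version that streams each file rather than calling readlines() keeps only the per-file numbers alive; the "first column is a float" layout below is invented purely for illustration.)

def summarise(path):
    # Stream the file line by line so its full contents never exist as a
    # list that could be accidentally retained somewhere.
    total = 0.0
    count = 0
    f = open(path)
    try:
        for line in f:
            try:
                total += float(line.split()[0])   # hypothetical: first column is a number
            except (ValueError, IndexError):
                continue
            count += 1
    finally:
        f.close()
    if count:
        return count, total, total / count
    return 0, 0.0, 0.0

# Only the small summary tuples end up in the master list.
master = [summarise(name) for name in filenames]   # 'filenames' assumed to exist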

The program may run through several collections of files, but it only keeps
a reference to the master list of the most recent collection it's looked
at. Obviously, it's not ideal if all the old collections hang around too,
taking up space and causing the machine to swap.

Why do you want to manage memory yourself anyway? It seems like a
horrible, horrible waste to use a language designed to manage memory for
you, then insist on over-riding its memory management.

I agree. I don't want to manage it myself. I just want it to re-use memory
or hand it back to the OS if it's got an awful lot that it's not using.
Wouldn't you say it was wasteful if (say) an image editor kept an
uncompressed copy of an image around in memory after the image had been
closed?
 

Tom Wright

Steve said:
Easy to say. How do you know the memory that's not in use is in a
contiguous block suitable for return to the operating system? I can
pretty much guarantee it won't be. CPython doesn't use a relocating
garbage collection scheme

Fair point. That is difficult and I don't see a practical solution to it
(besides substituting a relocating garbage collector, which seems like a
major undertaking).

Right. So all we have to do is identify those portions of memory that
will never be read again and return them to the OS. That should be easy.
Not.

Well, you have this nice int free list which points to all the bits which
will never be read again (they might be written to, but if you're writing
without reading then it doesn't really matter where you do it). The point
about contiguous chunks still applies though.
 

Tom Wright

If your program's behavior is:

* allocate a list of 1e7 ints
* delete that list

how does the Python interpreter know your next bit of execution won't be
to repeat the allocation?

It doesn't know, but if the program runs for a while without repeating it,
it's a fair bet that it won't mind waiting the next time it does a big
allocation. How long 'a while' is would obviously be open to debate.

In addition, checking to see that an arena in
the free list can be freed is itself not a free operation.
(snip thorough explanation)

Yes, that's a good point. It looks like the list is designed for speedy
re-use of the memory it points to, which seems like a good choice. I quite
agree that it should hang on to *some* memory, and perhaps my artificial
situation has shown this as a problem when it wouldn't cause any issues for
real programs. I can't help thinking that there are some situations where
you need a lot of memory for a short time though, and it would be nice to
be able to use it briefly and then hand most of it back. Still, I see the
practical difficulties with doing this.
 

Steve Holden

Tom said:
I know it's a very artificial setup - I was trying to make the situation
simple to demonstrate in a few lines. The point was that it's not caching
the values of those integers, as they can never be read again through the
Python interface. It's just holding onto the space they occupy in case
it's needed again.


I have a program which reads a few thousand text files, converts each to a
list (with readlines()), creates a short summary of the contents of each (a
few floating point numbers) and stores this summary in a master list. From
the amount of memory it's using, I think that the lists containing the
contents of each file are kept in memory, even after there are no
references to them. Also, if I tell it to discard the master list and
re-read all the files, the memory use nearly doubles so I presume it's
keeping the lot in memory.

I'd like to bet you are keeping references to them without realizing it.
The interpreter won't generally allocate memory that it can get by
garbage collection, and reference counting pretty much eliminates the
need for garbage collection anyway except when you create cyclic data
structures.
The program may run through several collections of files, but it only keeps
a reference to the master list of the most recent collection it's looked
at. Obviously, it's not ideal if all the old collections hang around too,
taking up space and causing the machine to swap.

We may need to see code here for you to convince us of the correctness
of your hypothesis. It sounds pretty screwy to me.
I agree. I don't want to manage it myself. I just want it to re-use memory
or hand it back to the OS if it's got an awful lot that it's not using.
Wouldn't you say it was wasteful if (say) an image editor kept an
uncompressed copy of an image around in memory after the image had been
closed?

Yes, but I'd say it was the programmer's fault if it turned out that the
interpreter wasn't doing anything wrong ;-) It could be something inside
an exception handler that is keeping a reference to a stack frame or
something silly like that.

regards
Steve
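
(A classic example of the accidental reference Steve describes -- hypothetical code, not Tom's: a saved sys.exc_info() result keeps the traceback, the traceback keeps the frame, and the frame keeps every local variable alive.)

import sys

def load_and_summarise():
    lines = ['x'] * 10 ** 6           # stands in for the result of readlines()
    try:
        raise ValueError('bad line')  # stands in for a failure while summarising
    except ValueError:
        return sys.exc_info()         # the saved traceback keeps this frame --
                                      # and therefore 'lines' -- alive

saved = load_and_summarise()
# As long as 'saved' is referenced, that million-element list cannot be freed,
# even though nothing will ever read it again.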
 

Steven D'Aprano

I have a program which reads a few thousand text files, converts each to a
list (with readlines()), creates a short summary of the contents of each (a
few floating point numbers) and stores this summary in a master list. From
the amount of memory it's using, I think that the lists containing the
contents of each file are kept in memory, even after there are no
references to them. Also, if I tell it to discard the master list and
re-read all the files, the memory use nearly doubles so I presume it's
keeping the lot in memory.

Ah, now we're getting somewhere!

Python's caching behaviour with strings is almost certainly going to be
different to its caching behaviour with ints. (For example, Python caches
short strings that look like identifiers, but I don't believe it caches
great blocks of text or short strings which include whitespace.)
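
(For instance, CPython 2.x typically interns string literals that look like identifiers, but not ones containing whitespace; a quick check at the prompt, though the exact results can vary between versions and builds:)

>>> a = "hello"
>>> b = "hello"
>>> a is b          # identifier-like literals are interned: one shared object
True
>>> a = "hello world"
>>> b = "hello world"
>>> a is b          # whitespace means no interning: two separate objects
False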

But again, you haven't really described a problem, just a set of
circumstances. Yes, the memory usage doubles. *Is* that a problem in
practice? A few thousand 1KB files is one thing; a few thousand 1MB files
is an entirely different story.

Is the most cost-effective solution to the problem to buy another 512MB of
RAM? I don't say that it is. I just point out that you haven't given us
any reason to think it isn't.

The program may run through several collections of files, but it only keeps
a reference to the master list of the most recent collection it's looked
at. Obviously, it's not ideal if all the old collections hang around too,
taking up space and causing the machine to swap.

Without knowing exactly what you're doing with the data, it's hard to tell
where the memory is going. I suppose if you are storing huge lists of
millions of short strings (words?), they might all be cached. Is there a
way you can avoid storing the hypothetical word-lists in RAM, perhaps by
writing them straight out to a disk file? That *might* make a
difference to the caching algorithm used.

Or you could just have an "object leak" somewhere. Do you have any
complicated circular references that the garbage collector can't resolve?
Lists-of-lists? Trees? Anything where objects aren't being freed when you
think they are? Are you holding on to references to lists? It's more
likely that your code simply isn't freeing lists you think are being freed
than it is that Python is holding on to tens of megabytes of random text.
 

Nick Craig-Wood

Steven D'Aprano said:
Or you could just have an "object leak" somewhere. Do you have any
complicated circular references that the garbage collector can't resolve?
Lists-of-lists? Trees? Anything where objects aren't being freed when you
think they are? Are you holding on to references to lists? It's more
likely that your code simply isn't freeing lists you think are being freed
than it is that Python is holding on to tens of megabytes of random
text.

This is surely just the fragmented heap problem.

Returning unused memory to the OS is a hard problem, since memory usually
comes in page-sized (4k) chunks and you can only return pages at the end of
your memory (the sbrk() interface).

The glibc allocator uses mmap() for large allocations which *can* be
returned to the OS without any fragmentation worries.

However if you have lots of small allocations then the heap will be
fragmented and you'll never be able to return the memory to the OS.

However that is why we have virtual memory systems.
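
(A rough way to watch the difference Nick describes from inside Python, assuming Linux with /proc available and glibc's default mmap threshold -- not part of the original thread:)

def rss_kb():
    # Resident set size in kB, read from /proc (Linux-specific).
    for line in open('/proc/self/status'):
        if line.startswith('VmRSS:'):
            return int(line.split()[1])

print 'start:', rss_kb()

big = 'x' * (200 * 1024 * 1024)   # one 200MB request: glibc satisfies it with mmap()
print 'big string held:', rss_kb()
del big                           # freeing it is a munmap(), so RSS drops right back
print 'big string freed:', rss_kb()

ints = range(10 ** 7)             # ten million small int objects from heap blocks
print 'int list held:', rss_kb()
del ints                          # most of those blocks stay with the process
print 'int list freed:', rss_kb()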
 

Aahz

This is surely just the fragmented heap problem.

Possibly. I believe PyMalloc doesn't have as much of a problem in this
area, but off-hand I don't remember the extent to which strings use
PyMalloc. Nevertheless, my bet is on holding references as the problem
with doubled memory use.
 
