Long-running daemon acquiring giant memory footprint


Jason DiCioccio

I have written a long-running daemon in Ruby to handle dynamic DNS updates.
I have just recently moved it from Ruby 1.6 to Ruby 1.8 and updated all of
its libraries to their latest versions (it uses dbi and dbd-postgres). The
problem I am having now is that it appears to start out using a sane amount
of memory (around 8 MB), but by the same time the next day it will be using
close to 200 MB for the Ruby interpreter alone. The daemon code itself is
100% Ruby, so I don't understand how this leak is happening. Are there any
dangerous code segments I should look for that could make it do this? The
only thing I can think of is that every object returned from an SQL query is
.dup'd, since Ruby DBI passes a reference. However, these should be getting
swept up automatically by the garbage collector. This is driving me nuts and
I would love it if someone could point me in the right direction.

Thanks!
-JD-
 

Ara.T.Howard

postgresql result sets need a call to #clear or they can leak memory - are you
doing this?
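
A minimal sketch of that idea in terms of the old ruby-postgres API (the
connection parameters, table and query are illustrative, not taken from the
daemon): every PGresult returned by PGconn#exec wraps memory allocated by
libpq, and that memory is only released when #clear is called (or, much
later, when the object is finalized), so a busy loop should clear each
result explicitly.

require 'postgres'

# illustrative connection; host, database and credentials are placeholders
conn = PGconn.new('localhost', 5432, '', '', 'dyndns', 'dnsuser', 'secret')

def host_exists?(conn, host)
  res = conn.exec("SELECT 1 FROM nsentry WHERE host = '#{host}'")
  begin
    res.num_tuples > 0        # read what is needed from the result...
  ensure
    res.clear                 # ...then free the libpq result set right away
  end
end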

-a

Luke A. Kanies

The last time this happened to me it was because I had a member of a hash
referring to its parent. I would assume that would reliably cause memory
holes in just about any language. I would double-check your code and see if
you can find anything. I resolved the problem by chopping the code up until
I found the growing part.

Good luck!

Luke
 

Joel VanderWerf

If the 200MB is used by objects that are still known to the interpreter
(i.e., not garbage), then you can use ObjectSpace to find them. For
instance, just to count objects of each class:

irb(main):001:0> h = Hash.new(0); ObjectSpace.each_object {|x| h[x.class] += 1}
=> 6287
irb(main):002:0> h
=> {RubyToken::TkRBRACE=>1, IO=>3, Regexp=>253, IRB::WorkSpace=>1,
SLex::Node=>78, RubyToken::TkRBRACK=>1, RubyToken::TkINTEGER=>2,
Float=>5, NoMemoryError=>1, SLex=>1, RubyToken::TkRPAREN=>1,
RubyToken::TkBITOR=>2, RubyToken::TkIDENTIFIER=>7, RubyToken::TkNL=>1,
RubyToken::TkCONSTANT=>2, Proc=>49, IRB::Context=>1, IRB::Locale=>1,
RubyToken::TkSPACE=>7, ThreadGroup=>1, RubyToken::TkLPAREN=>1,
Thread=>1, fatal=>1, File=>10, String=>4413, Data=>1,
RubyToken::TkfLBRACE=>1, RubyToken::TkDOT=>3,
IRB::ReadlineInputMethod=>1, RubyToken::TkASSIGN=>1, Hash=>136,
IRB::Irb=>1, RubyToken::TkfLBRACK=>1, Object=>6, RubyLex=>1,
RubyToken::TkSEMICOLON=>1, MatchData=>111, Tempfile=>1, Module=>23,
RubyToken::TkOPASGN=>1, SystemStackError=>1, Binding=>2, Class=>345,
Array=>806}
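
For a daemon, the same census can be taken from inside the running process
rather than from irb. A minimal sketch (the signal choice and log path are
arbitrary): trap SIGUSR1 and append a class-by-class count to a file, so
that two snapshots taken hours apart show which class is actually growing.

trap('USR1') do
  counts = Hash.new(0)
  ObjectSpace.each_object { |obj| counts[obj.class] += 1 }
  File.open('/tmp/object_census.log', 'a') do |f|
    f.puts "#{Time.now}  total=#{counts.values.inject(0) { |s, n| s + n }}"
    counts.sort_by { |klass, n| -n }.first(20).each do |klass, n|
      f.puts "  #{klass} => #{n}"
    end
  end
end

A plain kill -USR1 <pid> against the daemon then writes a snapshot without
restarting it.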
 

Mauricio Fernández

The last time this happened to me it was because I had a member of a hash
referring to the parent. I would assume that that would reliably cause
memory holes in just about any language. I would double check your code,

Ruby does mark&sweep, not reference counting; I thus fail to see why
such a structure would fail to be reclaimed.

Do you mean something like

a = {}
a[:foo] = a

?
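
A small sketch to illustrate the point (the class and finalizer message are
made up for the example): under mark-and-sweep, an object that only refers
to itself is still garbage once nothing else reaches it, and Ruby reclaims
it.

class Node
  attr_accessor :other
end

def make_cycle
  n = Node.new
  n.other = n                                  # self-reference, like the hash above
  ObjectSpace.define_finalizer(n, proc { puts 'cycle reclaimed' })
  nil                                          # drop the only outside reference
end

make_cycle
GC.start    # 'cycle reclaimed' is printed once the collector actually runs

With pure reference counting (as in unextended Perl) the count never reaches
zero and such a structure does leak.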


Jason DiCioccio

I had a lot of hope for this ObjectSpace-counting method when I first tried
it out. Unfortunately, at the moment the process is using 162 MB of resident
memory and here's the output:

Total Objects: 8499
Detail: {EOFError=>2, DBI::Row=>81, SQLPool=>1, IO=>3, DBI::StatementHandle=>49,
fatal=>1, SystemStackError=>1, Float=>18, Binding=>1, Mutex=>4, String=>5218,
DBI::DatabaseHandle=>9, DBI::DBD::pg::pgCoerce=>9, DBI::DBD::pg::Tuples=>49,
TCPSocket=>7, ODS=>1, DBI::DBD::pg::Driver=>1, NoncritError=>3, RR=>6,
Thread=>23, DBI::DBD::pg::Statement=>49, DBI::SQL::preparedStatement=>49,
User=>4, ConditionVariable=>2, ThreadGroup=>1, DBI::DBD::pg::Database=>9,
Event=>22, Proc=>16, File=>1, Hash=>253, Range=>11, PGconn=>9, PGresult=>49,
CritError=>2, Errno::ECONNABORTED=>1, Object=>4, Bignum=>5, IOError=>5,
Whois=>1, TCPServer=>2, DBI::DriverHandle=>1, NoMemoryError=>1, Module=>34,
Array=>1922, Sql=>5, MatchData=>150, Class=>232, Regexp=>172}

At its peak it reaches about 20k objects. I'm guessing the drop occurs when
the garbage collector steps in; however, the memory size of the process
doesn't seem to drop at that point. I'm running FreeBSD 4.9 with Ruby 1.8.1
(2003-10-31). The problem was also occurring with the release and stable
builds of 1.8.0, but it was not occurring in 1.6.x.

Any ideas? Bug?

Thanks!
-JD-
 

Fritz Heinrichmeyer

Under FreeBSD, Solaris, etc., the memory size of a process never drops; at
most it stays constant. This is a strange feature. Under Linux the situation
is different: there you should see a drop, because Linux uses GNU malloc. At
least, this was the situation some years ago.
 

Jason DiCioccio

Hmm. So since my memory problem does not appear to be in Ruby's object space,
and all of my code is Ruby code, should this be considered a bug and
submitted as such? If so, I can do that. I wish I knew more about how Ruby's
GC worked. One thing about this daemon is that it is used heavily and at
times accepts many queries/updates per second. Is it possible that the GC is
unable to 'keep up'? Or does it not work that way (I assume it doesn't)? I
just don't see these object counts leading to process sizes of over 200 MB
after running for a while.
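
One way to test the "GC can't keep up" theory, as a hedged experiment rather
than a fix (the interval is arbitrary): force a collection from a background
thread. If the resident size still climbs, references really are being
retained somewhere, or the allocator simply never returns freed pages to the
OS, as the next post points out.

Thread.new do
  loop do
    sleep 60     # arbitrary interval
    GC.start     # force a full mark-and-sweep pass
  end
end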

Thanks again!
-JD-


Jason DiCioccio

Luke,
After a while of debugging, I found a bunch of objects being created that
apparently contain one of the primary keys in one of my database tables. The
thing is, the query should only return one result, and this particular row
is hardly ever referenced. So now I have found a line in my code that is
something like what you might have been referring to:

nsEntryId = nsEntryId[0][0]

That is called quite often; would that cause the object to stay around?
Is this what you were referring to?

If not, I'll have to dig a lot deeper and find out why these values are
scattered all over the object space.

Thanks in advance,
Jason DiCioccio

 

Luke A. Kanies

nsEntryId = nsEntryId[0][0]

That is called quite often; would that cause the object to stay around?
Is this what you were referring to?

Yep, that's exactly what I'm referring to. If you undef your local copy of
nsEntryId, it still maintains a pointer to itself, so it becomes a
closed-off lump of storage. I don't know much about Ruby's memory
management, but it apparently can't catch these problems (I know Perl
can't).

I expect that if you fix that self-reference, your growth will go away.

Good luck!
 

Jason DiCioccio

I imagine changing it to:

nsEntryId = nsEntryId[0][0].dup

would take care of the problem? Or just using another object name?

Regards,
-JD-


Luke A. Kanies

Um, I don't really know. I can't think of a generally good reason to have
an object refer to itself. Does the object actually need a
self-reference, or are you just reusing the name for convenience? If it's
just convenience, I definitely recommend using a different name.

Duplication might also solve the problem, but I'm not really sure. I'm still
a relative Ruby newbie and have not figured out all the details of when you
get a reference vs. a copy vs. a real variable. Others can hopefully chime
in with those answers.

Luke


Robert Klemme

Luke A. Kanies said:

nsEntryId = nsEntryId[0][0]

That is called quite often; would that cause the object to stay around?

Not the old nsEntryId unless nsEntryId[0][0] has a reference to it.
Yep, that's exactly what I'm referring to. If you undef your local copy
of nsEntryId, it still maintains a pointer to itself, so it becomes a
closed off lump of storage. I don't know much about ruby's memory
management, but it apparently can't catch these problems (I know perl
can't).

Hmmm, since Ruby uses mark-and-sweep GC, it should be able to catch loops -
self-references as well as loops made of several instances. It seems to me
that this is unlikely to be the reason for the memory growth. I'd rather
assume some other references are being held, as indicated above.

Kind regards

robert
 

Jason DiCioccio

Greetings,

--On Tuesday, November 18, 2003 2:12 AM +0900 Robert Klemme

Not the old nsEntryId unless nsEntryId[0][0] has a reference to it.

I've tried removing this line, and it does indeed appear to be the problem.
There was no reference from nsEntryId[0][0] back to nsEntryId. Is this a
bug? I don't think it had this problem in Ruby 1.6.x, so it appears
something changed in the handling of arrays between 1.6 and 1.8.


Regards,
-JD-
 

Robert Klemme

Jason DiCioccio said:
I've tried removing this line and it does indeed appear to be the problem.
There was no reference from nsEntryId[0][0] back to nsEntryId. Is this a
bug? I don't think it had this problem in ruby 1.6.x, so it appears
something changed in the handling of arrays between 1.6 and 1.8.

Hm. Since I don't know the data at hand, I'm not in a position to comment
more specifically. I do wonder, however, how this assignment could be
responsible for the growth in memory consumption if nsEntryId[0][0] does not
backreference nsEntryId.

I'd do a to_yaml of nsEntryId before and after the assignment to be able
to analyse the object graph. Maybe there's a backreference you
overlooked.
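
A minimal sketch of that idea (the file names are illustrative): dump the
value to YAML before and after the assignment and diff the two files; a
retained DBI object or an unexpected backreference shows up as extra nodes
in the dumped graph.

require 'yaml'

File.open('/tmp/nsentryid_before.yml', 'w') { |f| f.write nsEntryId.to_yaml }
nsEntryId = nsEntryId[0][0]
File.open('/tmp/nsentryid_after.yml', 'w') { |f| f.write nsEntryId.to_yaml }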

The other option would be a bug in the garbage collector, but I'm not very
inclined to believe that.

If there is an extension involved, it could hold on to the memory in ways
that are not easy to determine. That's another option that comes to mind.

Kind regards

robert
 

ts

J> I've tried removing this line and it does indeed appear to be the problem.
J> There was no reference from nsEntryId[0][0] back to nsEntryId. Is this a
J> bug? I don't think it had this problem in ruby 1.6.x, so it appears
J> something changed in the handling of arrays between 1.6 and 1.8.

Well, if you can, try to reproduce the problem with a *small* script and
post it if you still have the problem.
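
A skeleton of such a script (database name, credentials, table and query are
placeholders; it assumes the standard Ruby DBI API): hammer the same query
in a loop and print the resident size every so often, so the growth can be
reproduced outside the daemon.

require 'dbi'

dbh = DBI.connect('DBI:Pg:dyndns', 'dnsuser', 'secret')   # placeholders

50_000.times do |i|
  rows = dbh.select_all("SELECT NSEntryId FROM NSEntry WHERE Host = ?", 'www')
  id = rows[0][0] if rows[0]       # mimic the daemon's nsEntryId[0][0] pattern
  if i % 1000 == 0
    GC.start
    puts "#{i}  rss=#{`ps -o rss= -p #{Process.pid}`.strip} KB"
  end
end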


Guy Decoux
 

Jason DiCioccio

Robert,
The object in question is just data from a Ruby DBI object. You could be
right about the backreference; however, it is definitely not something I am
aware of putting there. Could it be a pointer that Ruby DBI puts into all of
its objects?

Here's a code snippet to illustrate what it contains:

nsEntryId, = Thread.current[:sql].query(
  "SELECT NSEntryId FROM NSEntry WHERE LOWER(Host) = LOWER(?) AND DomainId=?::bigint AND OwnerId=?::bigint AND UserId=?::bigint #{type} #{ target if target }",
  domain[0], Domain::id(domain[1]), Domain::ownerId(domain[1]), @aUser.id )

The Sql#query method is below as well:

def query( sqlRequest, *vars )
  @conn.prepare( sqlRequest ) do |sqlVar|
    begin
      sqlVar.execute( *vars )
    rescue => detail
      if detail.message =~ /server closed the connection unexpectedly/
        log("Connection to DB unexpectedly closed")
        $sqlPool.reopen!( @conn )
      end

      Thread.current.raise NoncritError, "An error occurred: #{detail}, \"#{sqlRequest}\""
    end

    case sqlRequest.word( 0 )
    when /^INSERT/ then return
    when /^UPDATE/ then return
    when /^DELETE/ then return
    end

    result = []
    rcnt = 0
    while (row = sqlVar.fetch) != nil
      result.push row.dup
      rcnt = rcnt.next
    end

    return [nil, rcnt] if result.length == 0
    return [result, rcnt]
  end
end
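
Given the census above (49 lingering PGresult, DBI::StatementHandle and
DBI::DBD::pg::Statement objects) and the earlier #clear remark, a hedged
variant is to finish the statement handle in an ensure clause, in case the
block form of prepare does not release it on the error path (the
reconnect/error handling from the original is omitted for brevity):

def query( sqlRequest, *vars )
  sqlVar = @conn.prepare( sqlRequest )
  begin
    sqlVar.execute( *vars )
    return if sqlRequest.word( 0 ) =~ /^(INSERT|UPDATE|DELETE)/
    result = []
    while (row = sqlVar.fetch) != nil
      result.push row.dup
    end
    return [nil, 0] if result.empty?
    return [result, result.length]
  ensure
    sqlVar.finish      # releases the driver-side statement and its result set
  end
end

The return values keep the original [rows, count] shape; only the cleanup is
made unconditional.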

Any ideas where this reference might be coming from if it exists?

Regards,
-JD-
 

Robert Klemme

Jason DiCioccio said:
Any ideas where this reference might be coming from if it exists?

Only the row.dup. What type does it return? Is it an Array or some
DBI-internal class? If it's DBI-internal, I'd try row.to_a instead, to make
sure that only the values are retained and not the row collection.
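
A minimal sketch of that change against the fetch loop in the Sql#query
method quoted above (assuming DBI::Row#to_a returns a plain Array of the
column values):

while (row = sqlVar.fetch) != nil
  result.push row.to_a     # keep only plain values, not the DBI::Row wrapper
  rcnt = rcnt.next
end

row.dup copies the DBI::Row object itself, so whatever that object
references internally (column metadata, possibly the statement) stays
reachable from the result; to_a keeps just the Strings and numbers.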

Cheers

robert
 
