why does this code leak?


Rick DeNatale

When you create the lambda, what is the value of "self" inside the
lambda?

The answer is that it is going to be the object in which the lambda
was created. In the code above, this would be the object that you are
trying to finalize -- i.e. an instance of Foo. Since the lambda has a
reference to the Foo instance, that instance will always be marked by
the GC, and hence, it will never be garbage collected.
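
For reference, here is a minimal sketch of the kind of code being discussed (the original may differ in detail):

class Foo
  def initialize
    # The lambda is created inside the instance method, so it closes over
    # self, i.e. the very Foo being constructed. The finalizer therefore
    # keeps the object it is supposed to finalize reachable forever.
    ObjectSpace.define_finalizer(self, lambda { puts "finalized" })
  end
end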

Right, this analysis is correct for Robert's code, and I was thinking
the same thing about Ara's "leaky_finalizer" code as well, but that
code, here simplified, doesn't have the same problem as far as I can
tell:

class Class
  def leaky_finalizer
    lambda {}
  end

  def new(*a, &b)
    object = allocate
    object.send :initialize, *a, &b
    object
  ensure
    ObjectSpace.define_finalizer object, leaky_finalizer
  end
end

Note that we are in class Class, so self when the lambda is created is
not the new instance but the class itself. In this case it looks as if
the lambda (or something else) is holding on to the binding of the
caller of leaky_finalizer (the new method), where object is bound to
the object to be finalized.
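
A quick way to check that (a sketch, assuming the class Class patch above has been loaded; Foo is just a stand-in class):

class Foo; end

lam = Foo.leaky_finalizer              # self inside leaky_finalizer is Foo, the class
p eval("self", lam.binding)            #=> Foo, the class rather than any instance
p eval("local_variables", lam.binding) #=> [] -- no instance in sight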
 

Tim Pease


Hmmm ... I get the same results as my previous example:

$ cat a.rb
class Class
  def leaky_finalizer
    lambda { |object_id|
      puts "#{object_id} #{local_variables.inspect} #{instance_variables.inspect}"
    }
  end

  def new(*a, &b)
    object = allocate
    object.send :initialize, *a, &b
    object
  ensure
    ObjectSpace.define_finalizer object, leaky_finalizer
  end
end

class Foo; end

10.times {
  GC.start
  Foo.new
  p "Foo" => ObjectSpace.each_object(Foo){}
}


$ ruby a.rb
{"Foo"=>1}
{"Foo"=>2}
{"Foo"=>3}
84800 ["object_id"] []
{"Foo"=>3}
89480 ["object_id"] []
84470 ["object_id"] []
{"Foo"=>2}
{"Foo"=>3}
84360 ["object_id"] []
83880 ["object_id"] []
{"Foo"=>2}
89480 ["object_id"] []
{"Foo"=>2}
83740 ["object_id"] []
{"Foo"=>2}
84730 ["object_id"] []
{"Foo"=>2}
84770 ["object_id"] []
84800 ["object_id"] []


It looks like everything is getting cleaned up -- just not as quickly
as one would assume. But by the end of the program, all 10 finalizers
have been called.

Blessings,
TwP
 

Robert Dober

I honestly fail to see why the closure should take a reference to the
object. I am with Ara here: there is no need to keep a reference to the
object, and this is not only my humble opinion but also that of Ruby 1.9 ;).
If it is not a bug, it is at least odd behavior.
<snip>
Robert
 

Rick DeNatale


Because the code which creates the proc doesn't do any analysis of
what's inside the block.

It's like the story about the guy who checked out of a hotel and
protested the mini-bar charge on his bill, saying, "I didn't use
anything from the mini-bar." The hotel manager said, "I'm sorry sir,
it's hotel policy; the mini-bar was available for your use, and we
have to charge a fee to maintain it."

The man hesitated a second, quickly wrote out a bill for $100, and
presented it to the hotel manager, who asked, "What's this for?"

The man said, "For sleeping with my wife."

The hotel manager said, "I didn't sleep with your wife!"

To which the man said, "But she was available for your use, and she's
much more expensive to maintain than that mini-bar."


Seriously, it might be possible for the Ruby parser to mark the AST or
byte-codes representing a block to indicate whether or not it needed
to be a closure, and perhaps even to limit what actually got bound,
but as far as I know it doesn't.
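
To see what that means in practice, here is a minimal sketch (Widget, make_callback and big_buffer are made-up names): even an empty block carries the full binding of its creation context, which you can poke at through Proc#binding:

class Widget
  def make_callback
    big_buffer = "x" * 1_000_000   # a local the block never mentions
    lambda { }                     # an empty block is still a full closure
  end
end

cb = Widget.new.make_callback
# The captured binding still knows about big_buffer and self, so both stay
# reachable for as long as cb does.
p eval("big_buffer.length", cb.binding)   #=> 1000000
p eval("self", cb.binding).class          #=> Widget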

I'm also of the opinion that expecting objects to be reclaimed as
rapidly as 'logically' possible might not be the best trade-off in
designing a GC anyway.
 

Robert Dober

Having said all this, I would urge caution, because such
implementation approaches work best when accompanied by careful
cost-benefit analysis.

Agreed, but do you think that this kind of indeterminism is acceptable
upon an explicit call of GC.start? I am not sure.

Cheers
Robert
 

Rick DeNatale

As said above, I would love to know a case where this is needed, in
which case one should probably file a bug report against Ruby 1.9.

Imagine this code:

class Foo
  def initialize
    creation_time = Time.now
    ObjectSpace.define_finalizer self,
      lambda { puts "An Object has died (#{creation_time}-#{Time.now}) R.I.P." }
  end
end

Here in the block, puts really means self.puts, so the block needs to
capture self as well as the local creation_time.

Now, since the VM doesn't look inside the block when creating a proc,
it has to assume that the binding of the context in which the block was
created must be captured.
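
For what it's worth, one common way around the capture (a sketch, not taken from the thread) is to build the finalizer proc in a scope that never sees the instance, for example a class-level factory method:

class Foo
  # The proc is created in class scope, so it closes over creation_time and
  # the class, but never over the instance being finalized.
  def self.finalizer(creation_time)
    lambda { |object_id| puts "Object #{object_id} died (created #{creation_time}) R.I.P." }
  end

  def initialize
    ObjectSpace.define_finalizer(self, self.class.finalizer(Time.now))
  end
end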
 

MenTaLguY


Also, even if the VM did look inside the block to see which variables were
captured, it would have to keep all of them around anyway, because the whole
binding has to remain accessible via Proc#binding.

(That's more or less the main reason why us JRuby folks aren't big fans
of Proc#binding...)

-mental
 

Robert Dober

Thanks for your time, Rick.
I have just written lots of test code and there is no need to post it;
it is clear that self is captured in the closure (probably very useful,
too).
This happens in 1.9 too, so why did we get the following?

class Foo
  def initialize
    creation_time = Time.now
    ObjectSpace.define_finalizer self,
      lambda { puts "An Object has died (#{creation_time}-#{Time.now}) R.I.P." }
  end
end

I guess the finalizer is not used and thus the lambda is thrown away:

682/183 > cat leak.rb && ruby1.9 leak.rb
# vim: sw=2 ts=2 ft=ruby expandtab tw=0 nu syn:
#

Foo = Class::new {
  def initialize
    ObjectSpace.define_finalizer self, lambda { p :finalized }
  end
}

(42/7).times {
  Foo.new
  GC.start
  p "Foo" => ObjectSpace.each_object(Foo){}
}
{"Foo"=>1}
{"Foo"=>1}
{"Foo"=>1}
{"Foo"=>1}
{"Foo"=>1}
{"Foo"=>1}

Bingo!!!
Robert
 

Rick DeNatale


Good observation!
$ qri Proc#binding
----------------------------------------------------------- Proc#binding
prc.binding => binding
------------------------------------------------------------------------
Returns the binding associated with prc. Note that Kernel#eval
accepts either a Proc or a Binding object as its second parameter.

   def fred(param)
     proc {}
   end

   b = fred(99)
   eval("param", b.binding)   #=> 99
   eval("param", b)           #=> 99

Any optimization of procs that made them less than a full closure, even
those representing an empty block, would break this 'specification'.

On the other hand, Ruby 1.9 made changes to much less obscure specifications!
 

MenTaLguY


Well... my impression was that Matz wasn't sold on the idea of changing it
at the time it was discussed on ruby-core.

-mental
 

Rick DeNatale

This discussion reminds me of how such little details can have
significant effects. Having the Proc#binding method seems to me to be
somewhat similar to the "classical" Smalltalk dependency design.

This was one of, if not the, first examples of the Observer pattern.

Smalltalk defines methods on Object which allow dependents (observers)
to be added to any object; an object notifies its observers when it
changes via self.changed, which sends the message update to each
dependent with the object as the parameter.

Since this could be used with any object, but was actually used with
few objects, the implementation in the Object class stored the list of
dependents in a global identity dictionary (a hash which uses identity
rather than equality when comparing keys) keyed on the object.

What this means is that as long as an object has any dependents, it
and its dependents can't be GCed, even though nothing outside of the
dependency graph refers to any of those objects.
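
A rough Ruby analogue of that retention problem (the registry and class names here are invented for illustration):

# A process-global, identity-keyed registry, analogous to Smalltalk's
# global dependents dictionary.
DEPENDENTS = {}.compare_by_identity   # Hash#compare_by_identity is 1.9+

class Sensor; end

s = Sensor.new
DEPENDENTS[s] = [:some_observer]

s = nil
GC.start
# The Sensor instance is still reachable through DEPENDENTS, so it cannot be
# collected until its entry is removed from the registry.
p ObjectSpace.each_object(Sensor) {}   #=> 1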

For Smalltalk applications which actually used dependents it was
common practice to override the method used to find the collection of
dependents and keep it in an instance variable in the object itself
rather than using the global identity dictionary. I just looked at
the Squeak image and there's a subclass of Object called Model whose
sole purpose is to do this.

Interestingly, if one were to do this in Ruby, the default
implementation could easily use an instance variable, since in Ruby,
unlike Smalltalk, instance variables don't take up any space in an
object until they're actually used, i.e.

class Object

  def dependents
    # Defer actually creating a dependents iv until we have at least one dependent
    @dependents || []
  end

  def add_dependent(dependent)
    (@dependents ||= []) << dependent
  end

  def changed
    self.dependents.each {|dependent| dependent.update(self)}
  end
end
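
A quick usage sketch (PrintObserver is made up; any object responding to update would do):

class PrintObserver
  def update(changed_object)
    puts "#{changed_object.inspect} changed"
  end
end

thing = Object.new
thing.add_dependent(PrintObserver.new)
thing.changed   # calls PrintObserver#update with thing as the argument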

This dynamic instance variable allocation is one of the reasons I now
prefer Ruby to Smalltalk, despite a long relationship with the latter.
 

Robert Dober

This is indeed a feature I like a lot; its suppression was discussed
once, but it is still there.
OTOH, who knows, maybe Squeak will have it tomorrow. Do you think that
would be possible with the current VM?

Cheers
Robert
 

Rick DeNatale


Well, just about anything is possible; as we used to say, it's a Simple
Matter of Programming.

On the other hand, I doubt that it would be practical to do this with
Squeak or other ST implementations of which I'm aware. It's pretty
fundamental to the design of the VM that instance variables are bound
at class definition time to an offset from the beginning of the
object. The byte code is optimized for fetching and storing such iv
references. When you change a class definition in Smalltalk, say by
adding an iv, the IDE recompiles all the methods of the class and
any subclasses, since this causes ivs to move around in the object
instance. Most ST implementations then mutate any existing
instances as well.

Dave Ungar, after starting work on Self, used to amuse himself by
going to various Smalltalk implementations, adding an instance
variable to Object and seeing how long the system lived.

I just tried this with Squeak: I got a warning that Object can't be
changed, with the option to proceed anyway, then a second warning
with a proceed option, after which it started churning away recompiling
all the classes in the image. It got through about 30 of the 1500 or so
and hung.

Ruby is more like Self than Smalltalk in this regard. In Ruby, IVs are
implemented as values in a hash keyed by the iv name. In Self, the
whole object is basically a collection of named slots, and methods are
just executable objects referenced by some of these slots.
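
A small sketch of that difference from the Ruby side (Point is just an illustrative class):

class Point; end

pt = Point.new
p pt.instance_variables            #=> [] -- no ivar slots are reserved up front

pt.instance_variable_set(:@x, 3)   # the ivar springs into existence on first store
p pt.instance_variables            #=> [:@x]   (["@x"] on 1.8)
p pt.instance_variable_get(:@x)    #=> 3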

So in Smalltalk, the class holds both a format descriptor of its
instances and a method dictionary used to find instance methods. In
Ruby, the instance layout is in the object itself and is self-described
via the hash, while the method dictionary remains in the klass. In
Self, everything is in the 'instance': there are no formal classes, but
there is a notion of delegation via a special reference slot, which is
used to find a named slot that is not in the current object.
 

Robert Dober

Very interesting stuff. I thought that in the Blue Book there were ivars
in predefined slots (16) and others were added in a dictionary (of
course a very rare case). So somehow I wondered if dynamic ivars could
just be added to the dictionary.
But I am afraid that I am completely OT now; however, thanks a lot for
your time, Rick.

Robert
 
