ability to run finalizers at a given point of a program?

  • Thread starter Guillaume Cottenceau

Guillaume Cottenceau

Hi,

I'm exploring the possibility of running finalizers at a given point
in a program. I have written the following program, and I'm seeing
the following unexpected behaviour: I call GC.start in the hope that
finalizers of out-of-scope objects will be run, but it seems they
aren't.

http://www.zarb.org/~gc/t/prog/destructor/nodestructor.rb

I have written the following program in OCaml, whose GC
implementation is closer to Ruby's than Perl's or Python's are (those
use reference counting, OCaml doesn't), and it seems that, contrary to
Ruby, finalizers are run as expected when a GC run is forced.
(However, when the closure used as the finalizer references anything
inside the object, that counts as a reference, so the object is always
reachable; this makes it not really useful as a finalizer/destructor
anyway.)

http://www.zarb.org/~gc/t/prog/destructor/destructor.ml

Do you guys have any insight into this Ruby behaviour? I use 1.8.2 on Linux.
 

ts

G> http://www.zarb.org/~gc/t/prog/destructor/nodestructor.rb

The code is

G> class Foo
G>   def initialize
G>     puts "*** constructor"
G>   end
G> end
G>
G> def scopeme
G>   foo = Foo.new
G>   ObjectSpace.define_finalizer(foo, proc { puts "*** pseudo-destructor" })
G> end
G>
G> scopeme
G> puts "Foo was out-scoped."
G>
G> GC.start
G> puts "Gc was run."

you must know that the gc is a little special, try it with

svg% cat b.rb
#!/usr/local/bin/ruby
class Foo
  def initialize
    puts "*** constructor"
  end

  def self.final
    proc { puts "*** pseudo-destructor" }
  end
end

def scopeme
  foo = Foo.new
  ObjectSpace.define_finalizer(foo, Foo.final)
end

a = scopeme
puts "Foo was out-scoped."

GC.start
puts "Gc was run."

svg%

svg% b.rb
*** constructor
Foo was out-scoped.
*** pseudo-destructor
Gc was run.
svg%




Guy Decoux
 

Robert Klemme

Guillaume Cottenceau said:
Hi,

I'm exploring the possibility of running finalizers at a given point
in a program. I have written the following program, and I'm seeing
the following unexpected behaviour: I call GC.start in the hope that
finalizers of out-of-scope objects will be run, but it seems they
aren't.

http://www.zarb.org/~gc/t/prog/destructor/nodestructor.rb

Apart from what Guy wrote already, why do you need to determine the point in
time when finalizers are called? The whole idea of GC and finalization is
that you *don't* care when it happens. If you need to ensure (hint, hint)
that some cleanup code is invoked at some point in time then the
transactional pattern employed by File.open() and others might be more
appropriate:

def do_work
  x = create_x_somehow
  begin
    yield x
  ensure
    # always called, even in case of exception
    x.cleanup
  end
end

do_work do |an_x|
  puts an_x.to_u
end

And another remark: as opposed to Java finalizers, Ruby finalizers are
guaranteed to be invoked, even on program exit:

$ ruby -e 'ObjectSpace.define_finalizer(Object.new){ puts "called" }'
called
$ ruby -e 'o=Object.new;ObjectSpace.define_finalizer(o){ puts "called" }'
called

Btw, you can also define exit handlers with at_exit:
http://www.ruby-doc.org/core/classes/Kernel.html#M001736
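A small sketch of Kernel#at_exit (the messages are just illustrative): handlers registered with at_exit run when the program terminates, last-registered first.

```ruby
# at_exit handlers fire at program termination, in reverse order of
# registration (LIFO). The strings below are illustrative.
handler = at_exit { puts "registered first, runs last" }
at_exit { puts "registered last, runs first" }

puts "main body still running"
```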

Kind regards

robert
 

Guillaume Cottenceau

What version of Ruby are you running? With your example I can see:

*** constructor
Foo was out-scoped.
Gc was run.
*** pseudo-destructor

(which tends to prove that GC.start did not trigger the finalizer -
just the same as my program actually)

What difference do you claim your program makes compared to mine?
 

Saynatkari

Guillaume said:
Hi,

I'm exploring the possibility of running finalizers at a given point
in a program. I have written the following program, and I'm seeing
the following unexpected behaviour: I call GC.start in the hope that
finalizers of out-of-scope objects will be run, but it seems they
aren't.

If you really want to somehow 'delete' the objects (rather than just
free some memory) at a certain point, you might want to use ensure.
It works 'outside' exception handling, too.

def foo()
  # Something
ensure
  # Finalize
end
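To illustrate the point, here is a small self-contained sketch (the method name and log entries are made up): the ensure clause runs whether or not the method body raises.

```ruby
# Sketch of a method-level ensure: the cleanup step runs even when the
# method body raises. `risky` and the log strings are illustrative.
log = []

def risky(log)
  log << "working"
  raise "boom"
ensure
  log << "cleaned up"   # runs despite the exception
end

begin
  risky(log)
rescue RuntimeError => e
  log << "caught #{e.message}"
end

puts log.inspect
```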

 

Guillaume Cottenceau

Apart from what Guy wrote already, why do you need to determine the point in
time when finalizers are called? The whole idea of GC and finalization is
that you *don't* care when it happens. If you need to ensure (hint, hint)

Yes. This was partly an academic question. Partly, I feel in my guts
that the Java people claiming "it's not a problem that we don't have
multiple inheritance, because you don't need it" and "it's not a
problem that we don't have destructors, because you don't need it" are
plain wrong (I do know this makes the implementation easier, and I'd
like people to admit that), and I am a bit sorry that Ruby, otherwise
a great language, follows this path. And partly, I've bumped into a
similar problem in a Java program I work on for a living (the need to
free DB connections associated with out-of-scope Java objects).

In other words, in my opinion there are cases where IO can be freed
with a try|begin/catch|rescue/finally|ensure, but there are other
cases where, for example, an object is a wrapper around some IO. In
such circumstances it makes good sense to free this IO when the object
goes out of scope instead of explicitly calling a close/free method,
especially when several locations in your program make use of such an
object. With a reference-counting GC (Perl, Python) the use of
destructors for this is immediate (and that may explain why they
provide destructors, btw), but with a mark & sweep or another
"asynchronous" GC it becomes a problem. This problem can possibly be
worked around by explicitly calling the GC from carefully crafted
locations ("when a new request enters" comes to mind when you deal
with a server-based service), though I admit this is far from ideal.
But even that seems impossible with Ruby (and Java), according to the
results of my short program.
And another remark: as opposed to Java finalizers, Ruby finalizers are
guaranteed to be invoked, even on program exit:

$ ruby -e 'ObjectSpace.define_finalizer(Object.new){ puts "called" }'
called
$ ruby -e 'o=Object.new;ObjectSpace.define_finalizer(o){ puts "called" }'
called

Yes, and this is a very good point, I know that.

Thanks for your message.
 

gabriele renzi

Guillaume Cottenceau ha scritto:

In other words, in my opinion there are cases where IO can be freed
with a try|begin/catch|rescue/finally|ensure, but other cases where,
for example, an object is a wrapper around some IO, and in such
circumstances it makes good sense to free this IO when the object goes
out of scope instead of explicitly calling a close/free method,
especially when several locations of your program make use of such an
object.

I think you slightly misunderstood the previous message.
In Ruby, whenever you want this kind of "create & use & destroy
quickly" idiom, you don't call a free/close method explicitly; you
rely on methods that handle it for you, say:

open('file') do |f|
  # bla bla
end

It is the #open call that takes care of freeing the resource; there is
no need to handle it yourself.

In Java you don't have blocks, so you always have to use "finally".
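The same idiom is easy to provide for your own classes. A minimal sketch, assuming a hypothetical Resource class (all names here are made up):

```ruby
# Sketch of the block-based resource idiom: a class-level helper
# yields the resource and guarantees cleanup via ensure.
# `Resource`, `use` and `close` are hypothetical stand-ins.
class Resource
  attr_reader :closed

  def initialize
    @closed = false
  end

  def use
    "result"
  end

  def close
    @closed = true
  end

  # Yields a fresh resource; closes it even if the block raises.
  def self.open
    r = new
    begin
      yield r
    ensure
      r.close
    end
  end
end

result = nil
handle = nil
Resource.open do |r|
  handle = r
  result = r.use
end
```

The caller never sees a close call; the helper owns the resource's whole lifetime, which is exactly what File.open does.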
 

Robert Klemme

Guillaume Cottenceau said:
Yes. This was partly an academic question, partly because I feel in my
guts that java people pretending that "it's not a problem we don't
have multiple inheritance because you don't need it" and "it's not a
problem we don't have destructors because you don't need it" are plain
wrong (I do know this is making implementation easier so I'd like that
people admit that) (and I am a bit sorry that ruby, otherwise a great
language, follows this path),

Yes, but with significant differences: 1. finalizers are guaranteed to be
invoked (unlike in Java) and 2. you cannot resurrect an object from the
finalizer (a really odd property of Java). Plus, there are other elegant
means to deal with automated resource deallocation (method + block).
and partly because I've bumped into a
similar problem in a java program I work on for a living (the need to
free DB connections associated with out of scope java objects).

Use "finally". You can as well mimic Ruby's behavior by defining a callback
interface (which in Ruby would be a block) like this:

interface Action {
  public void doit(Connection tx) throws SQLException;
}

class DbPool {
  void doit(Action action) throws SQLException {
    Connection tx = getFromPool();
    try {
      action.doit(tx);
    } finally {
      returnToPool(tx);
    }
  }
}

That is just slightly more verbose than the Ruby equivalent but just as safe
(i.e. cleanup is always properly done).
In other words, in my opinion there are cases where IO can be freed
with a try|begin/catch|rescue/finally|ensure but other cases where,
for example, an object is a wrapper around some IO, and in such
circumstances it makes good sense to free this IO when the object goes
out of scope instead of explicitly calling a close/free method,

As I tried to explain in my last post, you don't need to invoke the cleanup
explicitly, because you can encapsulate it in a method that takes a block.
especially when there can be several locations of your program that
make use of such an object. With a reference-counting implementation
of a GC (Perl, Python) the use of destructors for such a matter is
immediate (and that may explain why they provide destructors, btw) but
with a mark & sweep or another "asynchronous" GC it becomes a problem;
this problem can possibly be worked around by explicitly calling the
GC from carefully crafted locations ("when a new request enters" comes
to mind when you deal with a server-based service), however I admit this
is far from ideal. But even that seems impossible with Ruby (and Java)
according to the results of my short program.

No, explicitly invoking the GC is definitely *not* the solution for this. In
Ruby there is "ensure", used either directly or from a method that receives a
block. Even if some instance is a wrapper around an IO instance this
pattern can be applied - and it's the most appropriate one.

Another reason not to use GC for this is that you don't have access to the
GC'ed instance in the finalizer which makes things overly complicated.
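That limitation is visible in define_finalizer's own signature: the finalizer proc receives only the id of the collected object, never the object itself. A small sketch (the printed text is illustrative):

```ruby
# The finalizer proc is handed the object's id, not the object; by the
# time it runs, the instance is already unreachable.
obj = Object.new
fin = proc { |id| puts "finalizing object #{id}" }  # receives only the id
registration = ObjectSpace.define_finalizer(obj, fin)
obj = nil  # drop our reference; CRuby runs the finalizer by program exit at the latest
```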

I really think you are trying to use the wrong tool for the problem at hand.
Yes, and this is a very good point, I know that.

Thanks for your message.

You're welcome.

Cheers

robert
 

Guillaume Cottenceau

In other words, in my opinion there are cases where IO can be freed
As I tried to explain in my last post you don't need to invoke the cleanup
explicitely because you can encapsulate that in a method that takes a block.

As I tried to explain as well, let's try not to fall back on the usual
"your algorithm is broken" answer, and instead consider the problem
(you might want to think of it as a purely academic question, if that
helps).

Ok, since I know that no one will want to do that without a more
precise example, here it is: what happens when the resource is
allocated and worked on first; then, in a totally different part of the
program, much later, results are extracted from it (and this
extraction can possibly be performed multiple times); and then again
later (laaaaaater) the object is collected? Does this block trick
still work? It seems not, if I understand it correctly. And, may I
add, "destructor semantics" apply perfectly to such a circumstance:
putting, in the object's class itself, some code to be run when the
object disappears, whenever and under whatever circumstances that
happens.
 

Glenn Parker

Guillaume said:
Ok, since I know that no one will want to do that without a more
precise example, here it is: what happens when the resource is
allocated and worked on first, then in a totally different part of the
program, much later, results are extracted from it - and this
extraction can also possibly be performed multiple times, then again
later (laaaaaater) the object is collected? Does this block trick
still work? It seems not, if I understand it correctly.

Fair enough, long-lived objects are not suitable for the block-wrapping
trick. But that still doesn't explain the nature of the work you want
to do in a finalizer.

Do you know when the last "extraction" has been done (making it safe for
your finalizer to run)? Can you use that knowledge to explicitly run
the finalization method, instead of waiting for the GC?
 

Robert Klemme

Glenn Parker said:
Fair enough, long-lived objects are not suitable for the block-wrapping
trick.

Not necessarily. It might just be that the block is on a higher level. But
yes, there are applications where the pattern is awkward to apply.
But that still doesn't explain the nature of the work you want to do in a
finalizer.

Do you know when the last "extraction" has been done (making it safe for
your finalizer to run)? Can you use that knowledge to explicitly run the
finalization method, instead of waiting for the GC?

That's exactly the right question. Because if one can invoke the GC manually,
then one can just as well invoke some cleanup method. Nothing really gained.

robert
 

Yohanes Santoso

Guillaume Cottenceau said:
What version of Ruby are you running? With your example I can see:

*** constructor
Foo was out-scoped.
Gc was run.
*** pseudo-destructor

$ ruby -v
ruby 1.8.2 (2004-12-06) [i386-linux]

# Guillaume Cottenceau's version

$ ruby /tmp/guil.rb
*** constructor
Foo was out-scoped.
Gc was run.
*** pseudo-destructor

# TS' version

$ ruby /tmp/ts.rb
*** constructor
Foo was out-scoped.
*** pseudo-destructor
Gc was run.

So, I am getting the same result ts was getting.
What difference do you claim your program makes compared to mine?

The difference is in this:

(your version):

def scopeme
  foo = Foo.new
  ObjectSpace.define_finalizer(foo, proc { puts "*** pseudo-destructor" })
end

That block captures foo as well. As a result, that instance will never
become unreferenced, and thus will never be GC'ed until the program
ends.

OTOH, ts' version:

class Foo
  def self.final
    proc { puts "*** pseudo-destructor" }
  end
end

def scopeme
  foo = Foo.new
  ObjectSpace.define_finalizer(foo, Foo.final)
end

Notice that the finalizer block does not capture the Foo instance.

YS.
 

Pit Capitain

Robert said:
That's the exact right question. Because if one can invoke GC manually,
then one can as well invoke some cleanup method. Nothing really gained.

In some unit tests it would be nice if you could force the GC to run the
finalizers. For testing the finalizer code itself you can just call it
explicitly, that's right. But how can you verify that after calling a
cleanup method there are no more references to certain objects? The only
way I could think of was explicitly starting the GC and checking whether
the finalizers had been called. Unfortunately, this didn't work, because
I couldn't reliably force the GC to run the finalizers.

For example, on my system, Guy's code isn't working either:

C:\tmp>ruby -v r.rb
ruby 1.8.1 (2003-12-25) [i386-mswin32]
*** constructor
Foo was out-scoped.
Gc was run.
*** pseudo-destructor

If this has changed in 1.8.2 I'd be glad to update, but I doubt it.

Regards,
Pit
 

Guillaume Cottenceau

So, I am getting the same result ts was getting.

Now that is strange.. :/
The difference is in this:

(your version):

def scopeme
  foo = Foo.new
  ObjectSpace.define_finalizer(foo, proc { puts "*** pseudo-destructor" })
end

That block captures foo as well. As a result, that instance will never
become unreferenced, and thus will never be GC'ed until the program
ends.

OTOH, ts' version:

class Foo
  def self.final
    proc { puts "*** pseudo-destructor" }
  end
end

def scopeme
  foo = Foo.new
  ObjectSpace.define_finalizer(foo, Foo.final)
end

Notice that the finalizer block does not capture the Foo instance.

Wait.. both calls to define_finalizer have a reference to foo as the
first argument, and both have a closure as second argument which
doesn't have a reference to foo.

I fear I just fell into a parallel universe..
 

Guillaume Cottenceau

Do you know when the last "extraction" has been done (making it safe for
your finalizer to run)? Can you use that knowledge to explicitly run
the finalization method, instead of waiting for the GC?

Theoretically, I could use that knowledge - but again, compare
"explicitly running the finalization method" with "letting the
destructor do its job on its own". That's why I claim (real)
destructors are superior in this circumstance. The ability to
explicitly ask for a GC run would also be a little superior, since
it can be done at a single location of the program, whereas there
can be a large number of these "extractions". Back to my initial
question, I guess.

But even that solution is hardly an option, since the last "extraction"
can't be known :/

(sorry if some of my messages look harsh; I really appreciate all
answers, thank you all)
 

Guillaume Cottenceau

Do you know when the last "extraction" has been done (making it safe for
That's the exact right question. Because if one can invoke GC manually,
then one can as well invoke some cleanup method. Nothing really gained.

It's far from being as elegant as real destructors, but it's still
better, because invoking the GC manually can be done at a single point
of a program (for example, when a server-oriented request is finished),
whereas the cleanup method has to be invoked on each object at each
different location of the program where you know you need to.
 

Eric Hodel


Now that is strange.. :/


Wait.. both calls to define_finalizer have a reference to foo as the
first argument, and both have a closure as second argument which
doesn't have a reference to foo.

Incorrect. The first argument is irrelevant here;
ObjectSpace::define_finalizer is written this way probably to make
things ugly. You have to pass in the object you wish to attach a
finalizer to, and that's what the first arg is for.

In your version, foo is in-scope when the finalizer proc is created, so
the proc captures foo *even though it is not explicitly used in the
proc*. That's right! Just because you don't reference foo inside a
closure doesn't mean it magically disappears for you.
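This capture can be checked directly: a proc created while a local is in scope keeps the whole binding alive, even if the proc's body never mentions it. A small sketch (the names are illustrative):

```ruby
# A proc captures its entire creation binding, not just the variables
# its body mentions. `make_proc` and `foo` are illustrative names.
def make_proc
  foo = "still reachable through the proc"  # never used in the block below
  proc { "this body never mentions foo" }
end

pr = make_proc
# The captured binding still knows about foo:
captured = pr.binding.local_variable_get(:foo)
puts captured
```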

--
Eric Hodel - (e-mail address removed) - http://segment7.net
FEC2 57F1 D465 EB15 5D6E 7C11 332A 551C 796C 9F04

 

ts

G> What version of Ruby are you running? With your example I can see:

1.8.2

svg% cat b.rb
#!/usr/local/bin/ruby -v
class Foo
  def initialize
    puts "*** constructor"
  end

  def self.final
    proc { puts "*** pseudo-destructor" }
  end
end

def scopeme
  foo = Foo.new
  ObjectSpace.define_finalizer(foo, Foo.final)
end

a = scopeme
puts "Foo was out-scoped."

GC.start
puts "Gc was run."
svg%

svg% b.rb
ruby 1.8.2 (2004-12-25) [i686-linux]
*** constructor
Foo was out-scoped.
*** pseudo-destructor
Gc was run.
svg%


G> What difference do you claim your program makes compared to mine?

closure + assignment

svg% diff -u b.rb~ b.rb
--- b.rb~ 2005-04-13 10:08:15.000000000 +0200
+++ b.rb 2005-04-13 10:08:40.000000000 +0200
@@ -13,7 +13,7 @@
ObjectSpace.define_finalizer(foo, Foo.final)
end

-a = scopeme
+scopeme
puts "Foo was out-scoped."

GC.start
svg%

svg% b.rb
ruby 1.8.2 (2004-12-25) [i686-linux]
*** constructor
Foo was out-scoped.
Gc was run.
*** pseudo-destructor
svg%


the GC is conservative.



Guy Decoux
 

Pit Capitain

ts said:
closure + assignment

-a = scopeme
+scopeme

the GC is conservative.

Hi Guy,

thanks for the hint. I know you don't like to write in English, but
could you please try to explain why the assignment makes a difference?
The line

scopeme

doesn't work as expected, but each of

a = scopeme
puts scopeme
scopeme.id

has the desired effect. Why is it necessary to do something with the
finalizer proc at the top level, even just sending it a message? Doing
the same things inside the method "scopeme" doesn't work.

I read about the definition of a conservative GC, but couldn't find a
relation to this behavior. I could somehow understand if the difference
is whether the finalizer proc itself is reachable from the root objects
or not. This would explain why an assignment works at the top level, but
not inside the method.

What I don't understand though is why sending a message like "id" to the
finalizer proc is making a difference. And even more puzzling to me is
that this works only when doing it at the top level, but not inside the
method.

Regards,
Pit
 
