Talin
Thanks to everyone for pointing out my bone-headed errors; I don't
*usually* make that sort of mistake...
What I'm attempting to do is create an implementation of the unification
algorithm. Now, it's fairly straightforward to do unification on my own
custom types; all I have to do is define a unify() method for each class:
def unify( self, other, bindings ):
    ...
However, this doesn't work so well when the objects being unified are
built-in Python types (tuples, integers, etc.). That case requires the
"un-pythonic" testing of each individual type and then calling the
appropriate unification function for the given type.
Another important point is that this particular unifier supports
commutative functions (which accept their arguments in any order); that
is what the permutation stuff is for. (I haven't done associativity yet;
still working on it.) The unifier is structured as a stack of
generators, each yielding all possible matches. This allows the higher
levels of the unifier to backtrack by iterating over the alternatives
produced by the lower-level generators.
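To make that structure concrete, here is a stripped-down sketch of the
idea (the Var class and these function names are invented for the
example, and the isinstance dispatch papers over exactly the problem
raised in question 1 below):

from itertools import permutations

class Var(object):
    # Hypothetical stand-in for whatever the real unifier uses
    # to represent an unbound variable.
    def __init__(self, name):
        self.name = name

def unify(a, b, bindings):
    # Top level: yield zero or more binding dictionaries.
    if isinstance(a, Var):
        if a.name in bindings:
            for result in unify(bindings[a.name], b, bindings):
                yield result
        else:
            extended = dict(bindings)
            extended[a.name] = b
            yield extended
    elif isinstance(a, tuple) and isinstance(b, tuple):
        for result in unify_sequence(a, b, bindings):
            yield result
    elif a == b:
        yield bindings

def unify_sequence(seq_a, seq_b, bindings):
    # Element-wise unification; each level iterates over the
    # alternatives yielded by the level below, which is what
    # makes the backtracking work.
    if not seq_a and not seq_b:
        yield bindings
    elif seq_a and seq_b:
        for head in unify(seq_a[0], seq_b[0], bindings):
            for rest in unify_sequence(seq_a[1:], seq_b[1:], head):
                yield rest

def unify_commutative(args_a, args_b, bindings):
    # A commutative function matches if any ordering of its
    # arguments matches.
    for perm in permutations(args_b):
        for result in unify_sequence(args_a, perm, bindings):
            yield result

With this, list( unify( (Var('x'), 1), (2, 1), {} ) ) produces
[{'x': 2}].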
So here's my list of further questions:
1) Is there a better way to do "function overloading" on built-in
types than using a whole series of "if type( x ) is y" tests?
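Concretely, the kind of chain I'd like to get rid of looks roughly like
this (a simplified sketch; the helper names are invented):

def unify_builtin(a, b, bindings):
    # One branch per built-in type; this is the part that feels
    # un-pythonic to me.
    if type(a) is tuple:
        return unify_tuple(a, b, bindings)
    elif type(a) is int:
        return unify_atom(a, b, bindings)
    elif type(a) is str:
        return unify_atom(a, b, bindings)
    else:
        return iter(())    # no handler for this type: no matches

def unify_atom(a, b, bindings):
    # Atoms match only when they are equal.
    if a == b:
        yield bindings

def unify_tuple(a, b, bindings):
    # Stub for the example; the real version recurses element-wise.
    if type(b) is tuple and len(a) == len(b):
        yield bindings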
2) Is there an easy way to determine whether a given object has a
callable method named "unify"? I know how to determine whether there's
an attribute with that name, but not whether that attribute is
callable.
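For illustration, the part I know how to write, with the gap marked
(try_unify_method is a made-up name for the example):

def try_unify_method(obj, other, bindings):
    # Checking that the attribute exists is the easy part...
    if hasattr(obj, 'unify'):
        method = getattr(obj, 'unify')
        # ...but what is the clean test that 'method' is actually
        # callable, rather than a data attribute that happens to
        # be named "unify"?
        return method(other, bindings)
    return None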
3) The "bindings" object is a dictionary that is constructed as the
unification proceeds. (So for example, if I attempt to unify "x + 1"
with "2 + 1" it will have the binding "x : 2" in the dictionary.
I want this bindings object to behave like a function call scope, in
that you can "see" the variables in the enclosing scope. In fact, I'd
like to be able to use a regular Python namespace (such as a module) as
the enclosing scope, so that the unification algorithm has access to all
of the variable definitions within that module. (Again, this is part of
the notion that I want to be able to do unification operations on normal
Python data structures, rather than on specially "wrapped" types.)
In fact, I had thought about the idea of the bindings being an actual
Python "module" object; however, that doesn't appear to be possible (or
what I want, for that matter). Similarly, I briefly pondered capturing
the local function scope in a closure and exporting it to the outside
world.
Because of the backtracking nature of the matcher, I need to be able to
"undefine" bindings that have previously been defined. The way I am
currently doing this is to create a new "scope" each time I add a
binding, where the new scope's "parent" is the previous scope. (So in
fact each scope's dictionary has only one entry in it.) Something like
this:
for new_bindings in generate_matches( a, b, old_bindings ):
    ...
If none of the alternatives pan out, the matcher simply discards
new_bindings and reverts to old_bindings.
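The scope objects themselves would look something like this minimal
sketch (not my real class). Note that the outermost parent could be any
plain mapping, e.g. vars( some_module ), which is how the
module-as-enclosing-scope idea above would plug in:

class Bindings(dict):
    # Each scope holds exactly one new binding; lookups that miss
    # fall through to the parent scope.
    def __init__(self, parent=None):
        dict.__init__(self)
        self.parent = parent

    def bind(self, name, value):
        # Binding returns a *new* child scope, so "undefining" on
        # backtrack is just dropping the reference to the child.
        child = Bindings(parent=self)
        child[name] = value
        return child

    def lookup(self, name):
        scope = self
        while scope is not None:
            if name in scope:
                return scope[name]
            # The outermost parent may be a plain mapping such as
            # vars(module), which has no 'parent' attribute.
            scope = getattr(scope, 'parent', None)
        raise KeyError(name)

With this, new_bindings = old_bindings.bind( 'x', 2 ) creates the child
scope for one alternative, and backtracking never has to mutate
old_bindings at all.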
So my question here is: Do I want to continue using a subclass of dict
for this, or something more exotic?
4) I've seen a couple of code examples on the net where people use the
idiom "lambda: (x for x in [])" to represent a "null" iterator, i.e. one
that terminates immediately. How is this any different from simply
returning "()"? (Or range( n, n ), for that matter.) Which one is the
most efficient? And if they really are different, how about adding a
null iterator to itertools?
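For concreteness, the variants side by side; as far as I can tell, the
generator expression is the only one of the three that is already an
iterator rather than a mere iterable:

empty_gen = (x for x in [])    # the idiom in question: a generator
empty_tuple = ()               # an empty iterable, not an iterator
empty_range = range(0, 0)      # likewise an empty iterable

# Generators are their own iterators; the other two hand back a
# separate iterator object when iter() is applied to them.
assert iter(empty_gen) is empty_gen
assert iter(empty_tuple) is not empty_tuple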
-- Talin