Steven Bethard
PEP 288 was mentioned in one of the lambda threads and so I ended up
reading it for the first time recently. I definitely don't like the
idea of a magical __self__ variable that isn't declared anywhere. It
also seemed to me like generator attributes don't really solve the
problem very cleanly. An example from the PEP[1]:
def mygen():
    while True:
        print __self__.data
        yield None

g = mygen()
g.data = 1
g.next() # prints 1
g.data = 2
g.next() # prints 2
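For what it's worth, since __self__ was never actually implemented, the closest runnable approximation of that example I can see is to pass the wrapper object into the generator explicitly. This is just a sketch; GenWithAttrs is a name I made up, not anything from the PEP:

```python
# Sketch only: __self__ does not exist, so the wrapper object is handed to
# the generator function explicitly and plays the role of __self__.
class GenWithAttrs(object):
    def __init__(self, genfunc):
        self.gen = genfunc(self)   # the generator sees the wrapper as "self"

    def next(self):
        return next(self.gen)

def mygen(self):
    while True:
        yield self.data            # stands in for __self__.data

g = GenWithAttrs(mygen)
g.data = 1
g.next()   # -> 1
g.data = 2
g.next()   # -> 2
```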
I looked in the archives but couldn't find a good discussion of why
setting an attribute on the generator is preferable to passing the
argument to next. Isn't this example basically equivalent to:
class mygen(object):
    def next(self, data):
        print data
        return None

g = mygen()
g.next(1) # prints 1
g.next(2) # prints 2
Note that I didn't even define an __iter__ method since it's never used
in the example.
Another example from the PEP:
def filelike(packagename, appendOrOverwrite):
    data = []
    if appendOrOverwrite == 'w+':
        data.extend(packages[packagename])
    try:
        while True:
            data.append(__self__.dat)
            yield None
    except FlushStream:
        packages[packagename] = data

ostream = filelike('mydest','w')
ostream.dat = firstdat; ostream.next()
ostream.dat = firstdat; ostream.next()
ostream.throw(FlushStream)
This could be rewritten as:
class filelike(object):
    def __init__(self, packagename, appendOrOverwrite):
        self.packagename = packagename
        self.data = []
        if appendOrOverwrite == 'w+':
            self.data.extend(packages[packagename])
    def next(self, dat):
        self.data.append(dat)
        return None
    def close(self):
        packages[self.packagename] = self.data

ostream = filelike('mydest','w')
ostream.next(firstdat)
ostream.next(firstdat)
ostream.close()

(Note that packagename has to be stored on self in __init__, since it isn't otherwise in scope when close is called.)
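As a self-contained sanity check of that class, here it is again with a stub packages dict standing in for whatever module-level mapping the PEP's example assumes, and with concrete values in place of firstdat:

```python
# Stand-alone sketch of the class-based filelike; `packages` is a stub for
# the module-level mapping the PEP's example assumes.
packages = {'mydest': ['old']}

class filelike(object):
    def __init__(self, packagename, appendOrOverwrite):
        self.packagename = packagename
        self.data = []
        if appendOrOverwrite == 'w+':
            self.data.extend(packages[packagename])

    def next(self, dat):
        self.data.append(dat)

    def close(self):
        packages[self.packagename] = self.data

ostream = filelike('mydest', 'w')
ostream.next('first')
ostream.next('second')
ostream.close()
# packages['mydest'] is now ['first', 'second']
```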
So, I guess I have two questions:
(1) What's the benefit of the generator versions of these functions over
the class-based versions?
(2) Since in all the examples there's a one-to-one correspondence between
setting a generator attribute and calling the generator's next method,
aren't these attribute assignments basically just trying to define the
parameter list of 'next'?
If this is true, I would have expected that a more useful idiom would
look something like:
def mygen():
    while True:
        data, = nextargs()
        print data
        yield None

g = mygen()
g.next(1) # prints 1
g.next(2) # prints 2
where the nextargs function retrieves the arguments of the most recent
call to the generator's next function.
With a little sys._getframe hack, you can basically get this behavior now:
py> import sys
py> class gen(object):
....     def __init__(self, gen):
....         self.gen = gen
....     def __iter__(self):
....         return self
....     def next(self, *args):
....         return self.gen.next()
....     @staticmethod
....     def nextargs():
....         return sys._getframe(2).f_locals['args']
....
py> def mygen():
....     while True:
....         data, = gen.nextargs()
....         print data
....         yield None
....
py> g = gen(mygen())
py> g.next(1)
1
py> g.next(2)
2
Of course, it's still a little magical, but I think I like it a little
better because you can see, looking only at 'mygen', when 'data' is
likely to change value...
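A frame-free variant of the same idea, just to sketch it: instead of digging through sys._getframe, the wrapper could hand the generator an explicit mutable cell to read its arguments from. The names here (make_stepper, box, step) are made up for illustration, not a real API:

```python
# Sketch: pass next()'s argument through an explicit mutable cell instead
# of frame introspection. `box` and `step` are made-up names, not an API.
def make_stepper():
    box = [None]                 # step() stores the latest argument here

    def mygen():
        while True:
            yield box[0]         # plays the role of data, = nextargs()

    g = mygen()

    def step(arg):
        box[0] = arg
        return next(g)

    return step

step = make_stepper()
step(1)   # -> 1
step(2)   # -> 2
```

The trade-off is the same one as above: the generator body still shows exactly where its input comes from, but now without any reliance on frame internals.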
Steve
[1] http://www.python.org/peps/pep-0288.html