Most "active" coroutine library project?

Michele Simionato

I specifically left out all "yield" statements in my version, since
that's exactly the point here. :)  With "real" coroutines, they're not
necessary - coroutine calls look just like any other call.

Personally, I like the yield. I understand that being forced to rewrite
code to insert the yield is ugly, but the yield makes clear that the
control flow is special. In Scheme there is no syntactic distinction
between ordinary function calls and continuation calls, but they are
quite different since continuations do not return (!). I always thought
this was a wart of the language, not an advantage.
 
Grant Edwards

The first time I encountered coroutines was in Simula-67. Coroutine
switching was certainly explicit there. IIRC, the keyword was resume.

I'm not sure exactly what "coroutine calls" refers to, but the
"mis-feature" in Python co-routines that's being discussed is
the fact that you can only yield/resume from the main coroutine
function.

You can't call a function that yields control back to the other
coroutine(s). By jumping through some hoops you can get the
same effect, but it's not very intuitive and it sort of "feels
wrong" that the main routine has to know ahead of time when
calling a function whether that function might need to yield or
not.
 
Simon Forman

I'm not sure exactly what "coroutine calls" refers to, but the
"mis-feature" in Python co-routines that's being discussed is
the fact that you can only yield/resume from the main coroutine
function.

You can't call a function that yields control back to the other
coroutine(s).  By jumping through some hoops you can get the
same effect, but it's not very intuitive and it sort of "feels
wrong" that the main routine has to know ahead of time when
calling a function whether that function might need to yield or
not.

You mean a "trampoline" function? I.e. you have to call into your
coroutines in a special main function that expects as part of the
yielded value(s) the next coroutine to pass control to, and your
coroutines all need to yield the next coroutine?

~Simon
 
Grant Edwards

You mean a "trampoline" function? I.e. you have to call into your
coroutines in a special main function that expects as part of the
yielded value(s) the next coroutine to pass control to, and your
coroutines all need to yield the next coroutine?

Exactly. Compared to "real" coroutines where a yield statement
can occur anywhere, the trampoline business seems pretty
convoluted.
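The trampoline pattern under discussion can be sketched with plain
generators. This is an illustrative toy (the names `trampoline`, `inner`,
and `outer` are invented for this sketch, not from any library): each
coroutine yields either a plain value or another generator to "call", and
the trampoline runs sub-generators and feeds their last value back to the
caller.

```python
def trampoline(gen):
    """Run a generator-based coroutine, treating yielded generators as calls."""
    stack = []                      # frames of suspended callers
    value = None
    while True:
        try:
            result = gen.send(value)
        except StopIteration:
            if not stack:
                return value        # top frame finished
            gen = stack.pop()       # "return" to the suspended caller
            continue
        if hasattr(result, 'send'):  # a sub-coroutine was "called"
            stack.append(gen)
            gen, value = result, None
        else:
            value = result          # plain value: remember it as the result

def inner():
    yield 21                        # last yielded value acts as a return value

def outer():
    x = yield inner()               # "call" inner via the trampoline
    yield x * 2

# trampoline(outer()) evaluates to 42
```

This is exactly the "hoop": `outer` must know that `inner` is a coroutine
and "call" it with `yield`, and the dispatch logic lives in the trampoline
rather than in the language.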
 
Jason Tackaberry

You can't call a function that yields control back to the other
coroutine(s). By jumping through some hoops you can get the
same effect, but it's not very intuitive and it sort of "feels
wrong" that the main routine has to know ahead of time when
calling a function whether that function might need to yield or
not.

Not directly, but you can simulate this, or at least some pseudo form of
it which is useful in practice. I suppose you could call this "jumping
through some hoops," but from the point of view of the coroutine, it can
be done completely transparently, managed by the coroutine scheduler.

In kaa, which I mentioned earlier, this might look like:

import kaa

@kaa.coroutine()
def task(name):
    for i in range(10):
        print name, i
        yield kaa.NotFinished  # kind of like a time slice

@kaa.coroutine()
def fetch_google():
    s = kaa.Socket()
    try:
        yield s.connect('google.com:80')
    except:
        print 'Connection failed'
        return
    yield s.write('GET / HTTP/1.1\nHost: google.com\n\n')
    yield (yield s.read())

@kaa.coroutine()
def orchestrate():
    task('apple')
    task('banana')
    page = yield fetch_google()
    print 'Fetched %d bytes' % len(page)

orchestrate()
kaa.main.run()


The two task() coroutines spawned by orchestrate() continue to "run in
the background" while any of the yields in fetch_google() are pending
(waiting on some network resource).

It's true that the yields in fetch_google() aren't yielding control
_directly_ to one of the task() coroutines, but it _is_ doing so
indirectly, via the coroutine scheduler, which runs inside the main
loop.

Cheers,
Jason.
 
Grant Edwards

Not directly, but you can simulate this, or at least some pseudo form of
it which is useful in practice. I suppose you could call this "jumping
through some hoops,"

It's nice that I could, because I did. :)
but from the point of view of the coroutine, it can be done
completely transparently, managed by the coroutine scheduler.

In kaa, which I mentioned earlier, this might look like:

import kaa

@kaa.coroutine()
def task(name):
    for i in range(10):
        print name, i
        yield kaa.NotFinished  # kind of like a time slice

@kaa.coroutine()
def fetch_google():
    s = kaa.Socket()
    try:
        yield s.connect('google.com:80')

That's not completely transparent. The routine fetch_google()
has to know a priori that s.connect() might want to yield and
so has to invoke it with a yield statement. Completely
transparent would be this:

try:
    s.connect('google.com:80')
except:
    print 'Connection failed'
    return
yield s.write('GET / HTTP/1.1\nHost: google.com\n\n')
yield (yield s.read())

Again, you have to know ahead of time which functions might
yield and which ones don't, and call them differently. That's
the "hoop". If somewhere in the implementation of a function
you discover a need to yield, you have to modify all the "calls"
all the way up to the top frame.

It's true that the yields in fetch_google() aren't yielding control
_directly_ to one of the task() coroutines, but it _is_ doing so
indirectly, via the coroutine scheduler, which runs inside the main
loop.

True. But I wouldn't call that transparent.
 
Simon Forman

Not directly, but you can simulate this, or at least some pseudo form of
it which is useful in practice.  I suppose you could call this "jumping
through some hoops," but from the point of view of the coroutine, it can
be done completely transparently, managed by the coroutine scheduler.

In kaa, which I mentioned earlier, this might look like:

       import kaa

       @kaa.coroutine()
       def task(name):
          for i in range(10):
             print name, i
             yield kaa.NotFinished  # kind of like a time slice

       @kaa.coroutine()
       def fetch_google():
          s = kaa.Socket()
          try:
             yield s.connect('google.com:80')
          except:
             print 'Connection failed'
             return
          yield s.write('GET / HTTP/1.1\nHost: google.com\n\n')
          yield (yield s.read())

       @kaa.coroutine()
       def orchestrate():
           task('apple')
           task('banana')
           page = yield fetch_google()
           print 'Fetched %d bytes' % len(page)

       orchestrate()
       kaa.main.run()


The two task() coroutines spawned by orchestrate() continue to "run in
the background" while any of the yields in fetch_google() are pending
(waiting on some network resource).

It's true that the yields in fetch_google() aren't yielding control
_directly_ to one of the task() coroutines, but it _is_ doing so
indirectly, via the coroutine scheduler, which runs inside the main
loop.

Cheers,
Jason.



So Kaa is essentially implementing the trampoline function.

If I understand it correctly MyHDL does something similar (to
implement models of hardware components running concurrently.)
http://www.myhdl.org/
 
Jason Tackaberry

That's not completely transparent. The routine fetch_google()
has to know a priori that s.connect() might want to yield and
so has to invoke it with a yield statement.

With my implementation, tasks that execute asynchronously (which may be
either threads or coroutines) return a special object called an
InProgress object. You always yield such calls.

So you're right, it does require knowing a priori what invocations may
return InProgress objects. But this isn't any extra effort. It's
difficult to write any non-trivial program without knowing a priori what
callables will return, isn't it?

Completely transparent would be this: [...]
try:
    s.connect('google.com:80')
except:

Jean-Paul made the same argument. In my view, the requirement to yield
s.connect() is a feature, not a bug. Here, IMO explicit truly is better
than implicit. I prefer to know at what specific points my routines may
branch off.

And I maintain that requiring yield doesn't make it any less a
coroutine.

Maybe we can call this an aesthetic difference of opinion?

Again, you have to know ahead of time which functions might
yield and which ones don't and call them differently. That's
the "hoop".

To the extent it should be considered jumping through hoops to find what
any callable returns, all right.

True. But I wouldn't call that transparent.

What I meant by transparent is the fact that yield inside fetch_google()
can "yield to" (indirectly) any active coroutine. It doesn't (and
can't) know which. I was responding to your specific claim that:
[...] the "mis-feature" in Python co-routines that's being discussed
is the fact that you can only yield/resume from the main coroutine
function.

With my implementation this is only half true. It's true that for other
active coroutines to be reentered, "main" coroutines (there can be more
than one in kaa) will need to yield, but once control is passed back to
the coroutine scheduler (which is hooked into main loop facility), any
active coroutine may be reentered.
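That scheduler pattern can be sketched with plain generators. This is an
illustrative round-robin loop (the names `run` and `worker` are invented
for the sketch, not kaa's actual implementation): whenever any coroutine
yields, control returns to the scheduler, which may then reenter any
other active coroutine.

```python
from collections import deque

def run(*gens):
    """Round-robin over generator-based coroutines until all finish."""
    ready = deque(gens)
    order = []                  # record of which coroutine ran, and when
    while ready:
        gen = ready.popleft()
        try:
            order.append(next(gen))   # resume until its next yield
        except StopIteration:
            continue                  # coroutine finished; drop it
        ready.append(gen)             # still active; requeue it
    return order

def worker(name, steps):
    for i in range(steps):
        yield (name, i)               # yield control back to the scheduler

# run(worker('a', 2), worker('b', 2)) interleaves the two workers:
# [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```

Each yield goes "to the scheduler", and the interleaving shows that it
reaches the other coroutine indirectly, which is the transparency being
claimed here.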

Cheers,
Jason.
 
Jason Tackaberry

So Kaa is essentially implementing the trampoline function.

Essentially, yeah. It doesn't require (or support, depending on your
perspective) a coroutine to explicitly yield the next coroutine to be
reentered, but otherwise I'd say it's the same basic construct.

Cheers,
Jason.
 
Grant Edwards

Jean-Paul made the same argument. In my view, the requirement to yield
s.connect() is a feature, not a bug. Here, IMO explicit truly is better
than implicit. I prefer to know at what specific points my routines may
branch off.

And I maintain that requiring yield doesn't make it any less a
coroutine.

Maybe we can call this an aesthetic difference of opinion?

Certainly.

You've a very valid point that "transparent" can also mean
"invisible", and stuff happening "invisibly" can be a source of
bugs. All the invisible stuff going on in Perl and C++ has
always caused headaches for me.
 
Dennis Lee Bieber

EXX accomplished much of the context switch operation. I don't
remember how much RAM was available, but it wasn't much...
Zilog Z80... as with the rest of the "improved" 8080 family -- 64kB
address space...
 
Dennis Lee Bieber

The first time I encountered coroutines was in Simula-67. Coroutine
switching was certainly explicit there. IIRC, the keyword was resume.

Lucky you...

Xerox Sigma 6, CP/V, using Xerox Extended FORTRAN IV. It had, as I
recall, some means (besides the ASSIGNed GOTO) of passing labels between
caller and callee, such that the callee could "return" to a statement
not immediately following the call -- along with being able to call into
particular labels, not just the top of the routine.
 
Dave Angel

Dennis said:
Zilog Z80... as with the rest of the "improved" 8080 family -- 64kB
address space...
I knew of one Z80 implementation which gave nearly 128k to the user.
Code was literally in a separate 64k page from data, and there were
special ways to access it, when you needed to do code-modification on
the fly. The 64k bank select was normally chosen on each bus cycle by
status bits from the CPU indicating whether it was part of an
instruction fetch or a data fetch.

Actually even 64k looked pretty good, compared to the 1.5k of RAM and 2k
of PROM for one of my projects, a navigation system for shipboard use.

DaveA
 
Grant Edwards

Zilog Z80... as with the rest of the "improved" 8080 family --
64kB address space...

Right. I meant I didn't recall how much RAM was available in
that particular product. Using the shadow register set to
store context is limiting when compared to just pushing
everything onto the stack and then switching to another stack,
but that does require more RAM.
 
Grant Edwards

Actually even 64k looked pretty good, compared to the 1.5k of
RAM and 2k of PROM for one of my projects, a navigation system
for shipboard use.

I've worked on projects as recently as the past year that had
only a couple hundred bytes of RAM, and most of it was reserved
for a message buffer.
 
Piet van Oostrum

Grant Edwards said:
e> I specifically left out all "yield" statements in my version, since that's
e> exactly the point here. :) With "real" coroutines, they're not necessary -
e> coroutine calls look just like any other call. With Python's enhanced
e> generators, they are.
GE> I'm not sure exactly what "coroutine calls" refers to, but the
GE> "mis-feature" in Python co-routines that's being discussed is
GE> the fact that you can only yield/resume from the main coroutine
GE> function.

Yes, I know, but the discussion had drifted to making the yield
invisible, if I understood correctly.
GE> You can't call a function that yields control back to the other
GE> coroutine(s). By jumping through some hoops you can get the
GE> same effect, but it's not very intuitive and it sort of "feels
GE> wrong" that the main routine has to know ahead of time when
GE> calling a function whether that function might need to yield or
GE> not.

I know. I think this is an implementation restriction in Python, to make
stack management easier. Although if you would lift this restriction
some new syntax would have to be invented to distinguish a generator
from a normal function.
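The restriction Piet describes can be seen directly with plain
generators: yield only suspends the function it lexically appears in, so
a helper containing a yield cannot suspend its caller. (Python later
addressed exactly this with `yield from` delegation, PEP 380, added in
Python 3.3 -- well after this thread; the names below are invented for the
sketch.)

```python
def helper():
    yield 'from helper'

def outer():
    helper()                 # just creates and discards a generator object;
    yield 'from outer'       # nothing in outer is suspended by helper's yield

# outer() produces only its own yields:
# list(outer()) == ['from outer']

def outer_delegating():
    yield from helper()      # PEP 380: helper's yields reach outer's caller
    yield 'from outer'

# list(outer_delegating()) == ['from helper', 'from outer']
```

Before `yield from`, the only way to get this effect was the trampoline
machinery discussed above, which is why the restriction felt like an
implementation artifact rather than a design choice.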
 
Hendrik van Rooyen

I've worked on projects as recently as the past year that had
only a couple hundred bytes of RAM, and most of it was reserved
for a message buffer.

There is little reason to do that nowadays - one can buy a single-cycle 8032
running at 30 MHz with 16/32/64k of programming flash and 1k of RAM, as well
as some bytes of eeprom for around US$10.00 - in one-off quantities.

- Hendrik
 
Grant Edwards

There is little reason to do that nowadays - one can buy a
single-cycle 8032 running at 30 MHz with 16/32/64k of
programming flash and 1k of RAM, as well as some bytes of
eeprom for around US$10.00 - in one-off quantities.

$10 is pretty expensive for a lot of applications. I bet that
processor also uses a lot of power and takes up a lot of board
space. If you've only got $2-$3 in the money budget, 200uA at
1.8V in the power budget, and 6mm X 6mm of board-space, your
choices are limited.

Besides, if you can get by with 256 or 512 bytes of RAM, why pay
4X the price for a 1K part?

Besides which, the 8032 instruction set and development tools
are icky compared to something like an MSP430 or an AVR. ;)

[The 8032 is still head and shoulders above the 8-bit PIC
family.]
 
Arlo Belshee

Certainly.

You've a very valid point that "transparent" can also mean
"invisible", and stuff happening "invisibly" can be a source of
bugs.  All the invisible stuff going on in Perl and C++ has
always caused headaches for me.

There are some key advantages to this transparency, especially in the
case of libraries built on libraries. For example, all the networking
libraries that ship in the Python standard lib are based on the
sockets library. They assume the blocking implementation, but then add
HTTPS, cookie handling, SMTP, and all sorts of higher-level network
protocols.

I want to use non-blocking network I/O for (concurrency) performance.
I don't want to re-write an SMTP lib - my language ships with one.
However, it is not possible for someone to write a non-blocking socket
that is a drop-in replacement for the blocking one in the std lib.
Thus, it is not possible for me to use _any_ of the well-written
libraries that are already part of Python's standard library. They
don't have yields sprinkled throughout, so they can't work with a non-
blocking, co-routine implemented socket. And they certainly aren't
written against the non-blocking I/O APIs.

Thus, the efforts by lots of people to write entire network libraries
that, basically, re-implement the Python standard library, but change
the implementation of 7 methods (bind, listen, accept, connect, send,
recv, close). They end up having to duplicate tens of thousands of
LoC, just to change 7 methods.

That's where transparency would be nice - to enable that separation of
concerns.
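The incompatibility Arlo describes can be shown in a few lines. The class
below is hypothetical (not from any library): once recv() contains a
yield, it becomes a generator function, so any stdlib-style caller that
expects recv() to return data gets a generator object instead, and every
call site up the stack would need a yield added.

```python
import types

class CoroSocket(object):
    """Hypothetical coroutine-style socket: recv() yields to a scheduler."""
    def recv(self, bufsize):
        yield 'would-suspend-to-scheduler-here'  # placeholder for the real wait

def stdlib_style_read(sock):
    # Code written against the blocking API assumes recv() returns data.
    return sock.recv(1024)

data = stdlib_style_read(CoroSocket())
# data is a generator object, not bytes - the blocking-style caller breaks:
# isinstance(data, types.GeneratorType) is True
```

This is why a coroutine socket cannot be a drop-in replacement for the
blocking one, and why whole parallel library stacks got written instead.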
 
Hendrik van Rooyen

$10 is pretty expensive for a lot of applications. I bet that
processor also uses a lot of power and takes up a lot of board
space. If you've only got $2-$3 in the money budget, 200uA at
1.8V in the power budget, and 6mm X 6mm of board-space, your
choices are limited.

Besides, if you can get by with 256 or 512 bytes of RAM, why pay
4X the price for a 1K part?

Besides which, the 8032 instruction set and development tools
are icky compared to something like an MSP430 or an AVR. ;)

[The 8032 is still head and shoulders above the 8-bit PIC
family.]

I am biased.
I like the 8031 family.
I have written pre-emptive multitasking systems for it,
as well as state-machine round robin systems.
In assembler.
Who needs tools if you have a half decent macro assembler?
And if the macro assembler is not up to much, then you write your own
preprocessor using Python.

The 803x bit handling is, in my arrogant opinion, still the best of any
processor. - jump if bit set then clear as an atomic instruction rocks.

:)

Where do you get such nice projects to work on?

- Hendrik
 
