Laszlo Nagy
Is there any extension for Python that can do async I/O for PostgreSQL
with tornadoweb's ioloop?
Something like:
class MainHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        pg_connection.execute(long_taking_query_sql, params,
                              callback=self.on_query_opened)

    def on_query_opened(self, query):
        self.write(process_rows(query))
        self.finish()
What would be an alternative?
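For example, psycopg2 seems to have an asynchronous (poll based)
connection mode. If that can be wired into the IOLoop, I imagine the
integration would look roughly like the sketch below. This is untested
guesswork on my part; the AsyncQuery helper and its method names are
made up:

import psycopg2
import psycopg2.extensions
import tornado.ioloop

class AsyncQuery(object):
    """Made-up helper: run one query on an asynchronous psycopg2
    connection and call callback(cursor) from the IOLoop when the
    result is ready."""
    def __init__(self, dsn, sql, params, callback):
        self.ioloop = tornado.ioloop.IOLoop.instance()
        self.sql, self.params, self.callback = sql, params, callback
        self.cursor = None
        # async=1 makes connect() and execute() return immediately;
        # progress is driven by conn.poll() when the socket is ready.
        self.conn = psycopg2.connect(dsn, async=1)
        self.fd = self.conn.fileno()
        self.ioloop.add_handler(self.fd, self._handle_events,
                                tornado.ioloop.IOLoop.WRITE)
        self._poll()

    def _handle_events(self, fd, events):
        self._poll()

    def _poll(self):
        state = self.conn.poll()
        if state == psycopg2.extensions.POLL_OK:
            if self.cursor is None:
                # Connection phase finished; send the query, keep polling.
                self.cursor = self.conn.cursor()
                self.cursor.execute(self.sql, self.params)
                self.ioloop.update_handler(self.fd,
                                           tornado.ioloop.IOLoop.WRITE)
            else:
                # Query finished; hand the cursor back to the handler.
                self.ioloop.remove_handler(self.fd)
                self.callback(self.cursor)
        elif state == psycopg2.extensions.POLL_READ:
            self.ioloop.update_handler(self.fd, tornado.ioloop.IOLoop.READ)
        elif state == psycopg2.extensions.POLL_WRITE:
            self.ioloop.update_handler(self.fd, tornado.ioloop.IOLoop.WRITE)

The request handler could then do something like
AsyncQuery(dsn, long_taking_query_sql, params,
callback=self.on_query_opened). Is there an existing, reliable
extension that already does this?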
The theoretical problem: suppose there are 100 clients (web browsers)
connected to the server with keep-alive connections. They are doing
long-polls and they are also sending/receiving events (with short
response times). Each web browser has an associated state stored on the
server side, in memory (as an object tree). The state is bound to the
client with a session id. Most requests have to be answered with small
amounts of data, calculated from the session state or queried from the
database. Most database queries are simple, running for about 100 msec,
but a few of them will run for 1 sec or more. The number of requests
ending in database queries is relatively low (10/sec). Other requests
can be answered much faster, but they are much more frequent (100/sec,
that is, 1 request/sec/client). There is a big global cache full of
(Python) objects. Their purpose is to reduce the number of database
queries. The objects in the global cache emit events to objects found
in the client sessions. Generally, it is not possible to tell in
advance which request will end in a database query.
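To make the structure concrete, here is a rough sketch of what I mean
(simplified; the class and attribute names are only illustrative):

class SessionState(object):
    """Per-client object tree, kept in memory and bound to a session id."""
    def __init__(self, session_id):
        self.session_id = session_id
        self.objects = {}         # the object tree for this client
        self.pending_events = []  # events to deliver on the next long-poll

    def on_cache_event(self, event):
        self.pending_events.append(event)

class GlobalCache(object):
    """Process-wide cache of Python objects, used to avoid database
    queries; it pushes change events into interested sessions."""
    def __init__(self):
        self.objects = {}
        self.listeners = {}       # cache key -> set of SessionState

    def emit(self, key, event):
        for session in self.listeners.get(key, ()):
            session.on_cache_event(event)

sessions = {}        # session id -> SessionState
global_cache = GlobalCache()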
Multi-threading is not an option because the number of clients is too
high (running 100 threads is not good). This is why I decided to use
async I/O. Tornadoweb looks good for most of the requirements: async
I/O, session state stored in objects, etc. The biggest problem is that
psycopg is not compatible with this model. If I use blocking I/O calls
inside a request handler, they will block all other requests most of
the time, resulting in slow response times.
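To illustrate the problem, this is the kind of handler I am worried
about (hypothetical example, made-up connection string); while the
query runs, the IOLoop and therefore every other client is stalled:

import psycopg2
import tornado.web

class ReportHandler(tornado.web.RequestHandler):
    def get(self):
        # Synchronous psycopg2: each of these calls blocks the IOLoop,
        # so all other keep-alive clients wait until they return.
        conn = psycopg2.connect("dbname=mydb user=myuser")
        cur = conn.cursor()
        cur.execute(long_taking_query_sql, params)  # can take 1 sec or more
        self.write(process_rows(cur.fetchall()))
        conn.close()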
What would be a good solution for this?
Thanks,
Laszlo