PyQT app accessible over network?

Monte Milanuk

Hello all,

New guy here, with a kind of general question. Hopefully it's not too
silly...

I've been working at learning Python off and on now for a while, with a
couple of programs in mind as a goal - kind of specialized stuff that I
can't seem to find a good existing match for: competitor records,
score-keeping & results for an amateur sports tournament. Many places
use some cobbled-together Excel spreadsheet, which has its limitations.
Others use an antiquated DOS-style application written in PowerBASIC
that has issues of its own.

Probably 98-99% of the time the match administration would be done by a
single individual on a single PC, which seems like it would be nearly
ideal for a desktop application implemented in PyQt4 or similar. The
problem is (as usual) those edge cases where there are enough
volunteers/resources to have more than one person doing data entry
(maybe 2-3 in practice, but let's say 10-12 for argument's sake to pad
things a bit).

What I was wondering is what would be a good way of handling this with a
PyQt app? Build the desktop app first, and add some sort of
functionality to enable a lightweight web server and framework for the
additional data entry 'clients'? Or would it be better to create
dedicated PyQt client apps to connect to the PC/laptop running the
'main' application? Should I go a different direction entirely, with a
complete self-hosted webapp built on a framework like web2py?

As you can probably tell, I have only a vaguely fuzzy idea of 'how' at
this point... but I would like to be able to proceed with some
confidence that as I get further down the road I'm not going to run into
a dead-end and have to start over down a different path.

Thanks,

Monte
 
Wolfgang Keller

I've been working at learning python off and on now for a while, with
a couple programs in mind as a goal - kind of specialized stuff that
I can't seem to find a good match for already available, competitor
records, score-keeping & results for an amateur sports tournament.

So you want to develop a database application. That's a standard case.

Probably 98-99% of the time the match administration would be done by
a single individual on a single PC, which seems like it would be
nearly ideal for a desktop application implemented in PyQt4 or
similar. The problem is (as usual) those edge cases where there are
enough volunteers/resources to have more than one person doing data
entry (maybe 2-3 in practice, but lets say 10-12 for arguments sake
to pad things a bit).

PostgreSQL and the frameworks mentioned below don't care about the
number of clients. You could buy a zSeries (or whatever they are called
now) from IBM and serve thousands of clients simultaneously if you
needed to.

What I was wondering is what would be a good way of handling this
with a PyQt app? Build the desktop app first, and add some sort of
functionality to enable a lightweight web server and framework for
the additional data entry 'clients'?

No, you just implement a GUI in whatever GUI framework you want (PyQt,
PyGTK, wxPython) and use a client-server RDBMS for storage.

No web-nonsense gadgetry required with bloated cursor-animation
"browsers" etc.

For the storage I recommend PostgreSQL, for the client GUI, there are
several frameworks available:

using PyQt (& Sqlalchemy):
Pypapi: www.pypapi.org
Camelot: www.python-camelot.com
Qtalchemy: www.qtalchemy.org

using PyGTK:
Sqlkit: sqlkit.argolinux.org (also uses Sqlalchemy)
Kiwi: www.async.com.br/projects/kiwi
Glom: www.glom.org

using wxPython:
Dabo: www.dabodev.com
Defis: sourceforge.net/projects/defis (Russian only)
GNUe: www.gnuenterprise.org

Pypapi, Camelot, Sqlkit and Dabo seem to be the most active and best
documented/supported ones.

Sincerely,

Wolfgang
 
Monte Milanuk

Yes, I am looking at a database-centric application. I know that the
'larger' databases such as PostgreSQL, MySQL, etc. would not have any
problem handling that small amount of traffic.

My concern is that using postgres or mysql for this would be akin to
using a sledgehammer to swat a fly, when sqlite could most likely handle
the load well enough (I think) since the handful of people doing data
entry would rarely (if ever) be trying to write to the same record.
That would be the whole point of having multiple people doing data entry
in this situation - each one handling a different competitor's entry form
or submitted scores.

My other reason for wanting one 'central' app is that there are various
functions (setting up the tournament, closing registration, editing
scores, finalizing results) that I really *don't* want the
satellite/client apps to be able to do. My personal view is that sort
of thing needs to be handled from one point, by one person (the match
director or chief stats officer, depending on the size of the event).

That is why I was looking at things in terms of having one central app
that handles the database, whether locally via sqlite or postgres or
whatever, but have the clients' access go through that main application
in order to ensure that all they have is a limited set of CRUD abilities
for competitor registration and entering scores.

Thanks for the links... some of those I was already aware of (Camelot,
Dabo) but some of the others are new (QtAlchemy, etc). Should make for
interesting reading!

Thanks,

Monte
 
Alec Taylor

Monte: I noticed you mentioned web2py; that would be my recommendation.

You also mention different features being available to different
users; perfect use-case for web2py's built-in RBAC.

Scalability: Go with Postgres, MySQL; or considering how much data
you're talking about, even SQLite would be a close enough fit!

Another advantage of sticking to the web that hasn't been mentioned so
far is agnostic interoperability.

E.g.: you can CRUD on your TV (e.g.: if it runs Android); or on your
phone (e.g.: if you use twitter-bootstrap; which web2py comes with out
of the box; but is usable in any framework)
 
Monte Milanuk

Monte: I noticed you mentioned web2py; that would be my recommendation.

You also mention different features being available to different
users; perfect use-case for web2py's built-in RBAC.

Scalability: Go with Postgres, MySQL; or considering how much data
you're talking about, even SQLite would be a close enough fit!

Another advantage of sticking to the web that hasn't been mentioned so
far is agnostic interoperability.

E.g.: you can CRUD on your TV (e.g.: if it runs Android); or on your
phone (e.g.: if you use twitter-bootstrap; which web2py comes with out
of the box; but is usable in any framework)


Web2py does seem pretty attractive in that it seems to come with a lot
of functionality rolled in already. It seems to be pretty easy to
deploy... since this would be more of a case where the volunteer match
directors are not necessarily computer gurus, and something that can
literally run from a USB stick on nearly any computer has its benefits.
I've seen some examples (I think) of twitter-bootstrap in some other
demos of Flask, and it looked reasonably attractive without being too
over the top. web2py's DAL seems fairly straightforward too. Looks
like I may have to get more fluent in CSS & JavaScript, though...
 
Michael Torrie

Web2py does seem pretty attractive in that it seems to come with a lot
of functionality rolled in already. It seems to be pretty easy to
deploy... since this would be more of a case where the volunteer match
directors are not necessarily computer gurus, and something that can
literally run from a USB stick on nearly any computer has its benefits.
I've seen some examples (I think) of twitter-bootstrap in some other
demos of flask, and it looked reasonably attractive without being too
over the top. web2py's DAL seems fairly straight-forward too. Looks
like I may have to get more fluent in CSS & javascript, though...

If you just use web2py to implement the database calls and business
logic, and to implement a simple, clean API (RPC really) for the clients
to talk to, then you can still use your non-web UI tools like PyQt. But
as an added bonus you can do a web interface as well. You'll have
flexibility either way. A client is a client, whether it's web-based
and running on the same server, or a remote app using RPC over HTTP.

I think all web-based apps should expose a web service (an API). That
way you have the flexibility to do a variety of front-ends: a normal
web browser, a mobile browser, a standalone app (think Android or
iPhone).

As far as doing client/server stuff with just a database engine, unless
you have tight control over the environment end to end, from a security
pov, it's not a good idea to expose the database engine itself to the
internet. Better to put a restricted web services API in front of it
that handles all the authorization needs (access-control) on the
detailed level that you require.
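
The "simple, clean API" idea can be sketched with nothing but the
standard library. (The endpoint name and record shape below are made up
for illustration; in practice web2py's built-in service machinery would
provide the real thing.)

```python
# A minimal sketch of putting a small JSON API in front of the data
# store, so a PyQt desktop client, a browser, or a phone are all just
# HTTP clients. The /api/score endpoint and record fields are
# hypothetical.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

SCORES = {}  # stand-in for the real database


class ApiHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/api/score":  # hypothetical endpoint
            self.send_error(404)
            return
        length = int(self.headers["Content-Length"])
        record = json.loads(self.rfile.read(length))
        SCORES[record["competitor"]] = record["score"]
        body = json.dumps({"ok": True}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass


server = HTTPServer(("127.0.0.1", 0), ApiHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any client -- PyQt, browser JavaScript, a mobile app -- speaks the
# same API over plain HTTP.
url = "http://127.0.0.1:%d/api/score" % server.server_address[1]
req = urllib.request.Request(
    url,
    json.dumps({"competitor": "Alice", "score": 95}).encode(),
    {"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())

server.shutdown()
print(reply, SCORES)
```

The point is only the shape of the design: the access-control and
business logic live behind the API, and every front-end is an equally
untrusted client of it.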
 
Dennis Lee Bieber

My concern is that using postgres or mysql for this would be akin to
using a sledgehammer to swat a fly, when sqlite could most likely handle
the load well enough (I think) since the handful of people doing data
entry would rarely (if ever) be trying to write to the same record.
That would be the whole point of having multiple people doing data entry
in this situation - each one handling a different competitors entry form
or submitted scores.

Problem: SQLite3 (and M$ JET/Access) are considered "file server"
databases. Each instance of the program accessing the database is
directly opening the database file(s). While SQLite3 has a fairly
complex locking system, the normal locking is NOT "per record". Instead
it allows for multiple readers to be active at once; the first
connection/cursor to attempt to write anything will block any new
attempts to read, and will be blocked until all other active readers
exit (and none of those other readers can attempt to write). When there
are no other open readers, the writer can finish and commit changes.

That is why I was looking at things in terms of having one central app
that handles the database, whether locally via sqlite or postgres or
whatever, but have the clients access go through that main application
in order to ensure that all they have is a limited set of CRUD abilities
for competitor registration and entering scores.

At which point you've essentially written the conflict management
that a client/server system already provides.
 
Chris Angelico

Problem: SQLite3 (and M$ JET/Access) are considered "file server"
databases. Each instance of the program accessing the database is
directly opening the database file(s). While SQLite3 has a fairly
complex locking system, the normal locking is NOT "per record". Instead
it allows for multiple readers to be active at once; the first
connection/cursor to attempt to write anything will block any new
attempts to read, and will be blocked until all other active readers
exit (and none of those other readers can attempt to write). When there
are no other open readers, the writer can finish and commit changes.

Also MySQL, when using the default MyISAM back-end. In contrast,
PostgreSQL uses MVCC to permit lock-free reading in most cases (you
keep reading the version you can "see", and a writer happily tinkers
with a new version; until the writer COMMITs, its version is invisible
to you). There's more overhead to the PostgreSQL system, but it scales
better with multiple writers. (MySQL is primarily designed for dynamic
web sites, where there are thousands of readers but only (relatively)
occasional writers.)
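
As it happens, SQLite's WAL journal mode gives a small-scale taste of
the same snapshot idea, which makes it easy to demonstrate from Python
without a PostgreSQL server (file name below is made up):

```python
# Sketch of snapshot reads: in WAL mode a reader keeps seeing the
# version of the database that existed when its transaction started,
# while a writer commits a newer version underneath it.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "mvcc_demo.db")

init = sqlite3.connect(path, isolation_level=None)
init.execute("PRAGMA journal_mode=WAL")  # switch to the snapshot-style journal
init.execute("CREATE TABLE t (v INTEGER)")
init.execute("INSERT INTO t VALUES (1)")
init.close()

reader = sqlite3.connect(path, isolation_level=None)
reader.execute("BEGIN")
seen_before = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # snapshot taken here

writer = sqlite3.connect(path, isolation_level=None)
writer.execute("INSERT INTO t VALUES (2)")  # commits a new version; reader is not blocked

seen_during = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # still the old snapshot
reader.execute("COMMIT")
seen_after = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]   # fresh snapshot
print(seen_before, seen_during, seen_after)
```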

ChrisA
 
Alec Taylor

If you just use web2py to implement the database calls and business
logic, and to implement a simple, clean API (RPC really) for the clients
to talk to, then you can still use your non-web UI tools like PyQt. But
as an added bonus you can do a web interface as well. You'll have
flexibility either way. A client is a client, whether it's web-bases
and running on the same server, or a remote app using RPC over HTTP.

I think all web-based apps should expose a web service (an API). that
way you have flexibility to do a variety of front-ends. Normal web
browser, mobile browser, a standalone app (think android or iphone).

As far as doing client/server stuff with just a database engine, unless
you have tight control over the environment end to end, from a security
pov, it's not a good idea to expose the database engine itself to the
internet. Better to put a restricted web services API in front of it
that handles all the authorization needs (access-control) on the
detailed level that you require.

Michael Torrie: I have seen a few PyWt examples in alpha, if that's
what you're describing…

But there would still be more implementation overhead than just using
e.g. SQLFORM(db.table_name) to create a CRUD form.

I don't see any disadvantage of using web2py for everything; unless
we're talking decentralised infrastructure, in which case a queuing
mechanism would likely be better, with each client implementing a
server as well (thus still no use-case for Qt).

Also, SQLite has a number of excellent features, notably two-file
deployments, so it's very portable. Otherwise, for Postgres or MySQL
you'd probably need to package in your own silent installer (which
admittedly isn't overly difficult, but is quite involved)…

Looks like I may have to get more fluent in
CSS & javascript, though...

Understanding how `style` attributes work, how to use Firebug (or
Chrome Dev Tools), and finding a good JavaScript widget library (e.g.
from Twitter Bootstrap) should be more than enough for your project.

In fact; it's been enough for almost all my projects!

(though now that I'm moving to AngularJS, I'll need to get more
involved on the JS front :p)
 
Wolfgang Keller

My concern is that using postgres or mysql for this would be akin to
using a sledgehammer to swat a fly,

I wouldn't use MySQL for anything that requires anything other than
"select".

And PostgreSQL has extremely spartan resource requirements in the
default configuration. It runs on Linux on hardware where (the most
recent) Windows alone wouldn't run.

My other reason for wanting one 'central' app is that there are
various functions (setting up the tournament, closing registration,
editing scores, finalizing results) that I really *don't* want the
satellite/client apps to be able to do.

Easy, you simply restrict access rights to the corresponding tables for
the individual users. Any halfway decent database application framework
will allow you to configure the application correspondingly for each
user.
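
In PostgreSQL terms that restriction is plain SQL; a minimal sketch,
assuming hypothetical role names (statsofficer, dataentry) and table
names (competitors, scores):

```sql
-- Hypothetical roles and tables; run as the database owner.
CREATE ROLE statsofficer LOGIN PASSWORD 'changeme';
CREATE ROLE dataentry LOGIN PASSWORD 'changeme';

-- The stats officer / match director can do everything.
GRANT ALL PRIVILEGES ON competitors, scores TO statsofficer;

-- Data-entry volunteers get only the limited CRUD they need,
-- and no rights at all on the tournament-setup tables.
GRANT SELECT, INSERT, UPDATE ON competitors, scores TO dataentry;
```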

Sincerely,

Wolfgang
 
Wolfgang Keller

As far as doing client/server stuff with just a database engine,
unless you have tight control over the environment end to end, from a
security pov, it's not a good idea to expose the database engine
itself to the internet. Better to put a restricted web services API
in front of it that handles all the authorization needs
(access-control) on the detailed level that you require.

Excuse me, but that's bullshit.

PostgreSQL is definitely more secure than any self-made RPC protocol
with a self-made "web" server on top of SQLite that re-invents what
PostgreSQL provides out of the box, and much more efficiently than HTTP
could ever do it. Experience with securing PostgreSQL servers exposed
to "the internet" has been accumulating for well over a decade now. You
won't get anywhere close to that level of security (and reliability)
with your private self-made web nonsense anytime soon.

And if there's anything that all those script kiddies know their way
around, it's HTTP servers.

Sincerely,

Wolfgang
 
Chris Angelico

Excuse me but that's bullshit.

I don't use the term but I absolutely agree with the sentiment. Of
course, if you're assuming a MySQL setup, then yes, exposing the
database engine directly would have risks. But I grew up with DB2, and
there were MANY ways in which you could control exactly what people
could do (views and stored procedures being the two easiest/most
commonly used) - to the extent that one of the recommended
organizational structures was to have the end-user login actually *be*
the database connection credentials, and to have your fancy app just
connect remotely. There's a guarantee that someone who logs in as a
non-administrator cannot access administrative functionality.
PostgreSQL has all those same features, packaged up in an open source
system; MySQL has a philosophical structure of "user logs in to app,
but app logs in to database as superuser regardless of user login".

ChrisA
 
Frank Millman

On 24/02/2013 16:58, Chris Angelico wrote:

[...]
MySQL has a philosophical structure of "user logs in to app,
but app logs in to database as superuser regardless of user login".

Out of curiosity, is there anything wrong with that approach?

The project I am developing is a business/accounting application, which
supports multiple database systems - at this stage, PostgreSQL, MS SQL
Server, and sqlite3.

I use exactly the philosophy you describe above. If I relied on the
RDBMS's internal security model, I would have to understand and apply
each one separately.

Any comments will be appreciated.

Frank Millman
 
Chris Angelico

Out of curiosity, is there anything wrong with that approach?

The project I am developing is a business/accounting application, which
supports multiple database systems - at this stage, PostgreSQL, MS SQL
Server, and sqlite3.

I use exactly the philosophy you describe above. If I relied on the RDBMS's
internal security model, I would have to understand and apply each one
separately.

Fundamentally no; it's a viable approach, as evidenced by the success
of MySQL and the myriad applications that use it in this way. It's a
matter of damage control and flexibility. Suppose your web server were
to be compromised - there are so many exploits these days that can
result in files on the server being unexpectedly read and transmitted
to the attacker. Your database superuser password (or, let's hope,
"just" database admin) is compromised, and with it the entire
database. This also forces you to treat the web application (usually
PHP scripts) as back-end.

In contrast, if you control permissions in the database itself, you
can actually treat the application as the front-end. You can happily
deploy it, exactly as-is, to untrusted systems. Sure, your typical PHP
system won't ever need that, but when you write something in Python,
it's much more plausible that you'd want to run it as a desktop app
and connect remotely to the database. It's flexibility that you may or
may not use, but is still nice to have.

Most RDBMSes have a broadly similar permissions system; at any rate,
no more different than the ancillaries (PostgreSQL has the "SERIAL"
type (which is a shortcut for INTEGER with a default value and an
associated SEQUENCE object), MySQL has AUTO_INCREMENT, etc, etc - if
you're going to support all of them, you either go for the lowest
common denominator, or you have different code here and there anyway).
You control access of different types to different named objects;
reading requires SELECT privilege on all tables/views read from,
editing requires INSERT/UPDATE, etc. For finer control than the table,
just deny all access to the table and grant access to a view. For more
complicated stuff ("edits to this table must have corresponding
entries in the log"), either triggers or stored procedures can do the
job.

It may take a lot of work to get the permissions down to their
absolute minimum, but one easy "half-way house" would be to create a
read-only user - SELECT permission on everything, no other perms. Not
applicable to all situations, but when it is, it's an easy way to
manage the risk of compromise.
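
SQLite has no user accounts to grant SELECT to, but for the single-file
case the same "read-only half-way house" idea can be sketched with a
read-only connection (file name below is made up):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "scores.db")

# The administrative connection creates and populates the table.
admin = sqlite3.connect(path, isolation_level=None)
admin.execute("CREATE TABLE scores (competitor TEXT, score INTEGER)")
admin.execute("INSERT INTO scores VALUES ('Alice', 95)")
admin.close()

# URI mode=ro opens the file read-only: SELECT works, writes are refused.
readonly = sqlite3.connect("file:%s?mode=ro" % path, uri=True)
rows = readonly.execute("SELECT * FROM scores").fetchall()
try:
    readonly.execute("INSERT INTO scores VALUES ('Bob', 80)")
    write_refused = False
except sqlite3.OperationalError:  # "attempt to write a readonly database"
    write_refused = True
print(rows, write_refused)
```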

I'm sure others can weigh in with a lot more detail.

ChrisA
 
Frank Millman

Fundamentally no; it's a viable approach, as evidenced by the success
of MySQL and the myriad applications that use it in this way. It's a
matter of damage control and flexibility. Suppose your web server were
to be compromised - there are so many exploits these days that can
result in files on the server being unexpectedly read and transmitted
to the attacker. Your database superuser password (or, let's hope,
"just" database admin) is compromised, and with it the entire
database. This also forces you to treat the web application (usually
PHP scripts) as back-end.

[snip much valuable food for thought]
I'm sure others can weigh in with a lot more detail.

Thanks for the input, Chris - much appreciated.

I don't have a lot of experience in this area, but it is a very
important topic and I have applied my mind to the issues as best I can,
so I would appreciate a critique of my current approach.

The main app is written in python. It is designed to run on a server. It
could be on the same server as the database or not - the person setting
up the system supplies the connection parameters.

The app runs a web server (cherrypy) which anyone can connect to via a
browser, with a valid userid and password. User credentials are stored
in the database, and the system has its own mapping of which users (or
rather roles) have access to which tables. The front end is written in
Javascript.

So to refer to your two concerns of damage control and flexibility, the
second one does not really apply in my case - I would never want the
main app to run on a desktop.

Regarding security, obviously it is a concern. However, the various user
ids and passwords have to be stored *somewhere*, and if that store is
compromised I would have thought that they would be equally vulnerable.

There is one idea I think is worth looking into, when I have time. I
subscribe to the 'getmail' mailing list, and for a long time the
maintainer has resisted pressure to encrypt the mailbox password in the
configuration file, on the grounds that if the password is vulnerable,
the encryption method is equally vulnerable, so it would give a false
sense of security. However, he has recently been persuaded of the merits
of using something called a 'keyring'. I don't know much about it, but
it is on my list of things to look at some time.

All comments welcome.

Frank
 
Dennis Lee Bieber

It may take a lot of work to get the permissions down to their
absolute minimum, but one easy "half-way house" would be to create a
read-only user - SELECT permission on everything, no other perms. Not
applicable to all situations, but when it is, it's an easy way to
manage the risk of compromise.

I think I'd recommend that even this read permission be limited to
the tables required by the application... Wouldn't want someone to
"accidentally" read the database user account tables, would we?

MySQL's permission system, as I recall (I'm too lazy to grab one of the
five MySQL reference books on my shelf), can be set at the "database",
"table", and "column" levels. (Setting permissions at the column level
would be painful, IMO -- especially if one has a goodly number of tables
with lots of fields; creating a view and using a table-level restriction
may be better -- not sure if MySQL views honor the access restrictions,
though.)
 
Dennis Lee Bieber

The app runs a web server (cherrypy) which anyone can connect to via a
browser, with a valid userid and password. User credentials are stored
in the database, and the system has its own mapping of which users (or
rather roles) have access to which tables. The front end is written in
Javascript.

Regarding security, obviously it is a concern. However, the various user
ids and passwords have to be stored *somewhere*, and if it is
compromised I would have thought that they would be equally vulnerable.

Which maps fairly directly to the MySQL (and likely other DBMS)
access control. If you are already storing UserID/passwords in a
(restricted access) table -- you might as well make them the native
database user accounts and use the database restriction controls to
limit access to database/table/column... Roles may be trickier if a
single userID is allowed to act in different roles (but then, if a user
can specify which role they are acting as, nothing prevents them from
always picking the most capable role, so just give them the accesses for
the highest role they are allowed).
 
Chris Angelico

I think I'd recommend that even this read permission be limited to
the tables required by the application... Wouldn't want someone to
"accidentally" read the database user account tables, would we?

Of course; once you have the concept of divided access levels, you can
take it wherever you like. But some systems don't even HAVE "database
user account tables" as such; look at this site:

http://rosuav.com/1/

That's an old PHP-based site of mine, originally done in MySQL, now
using PostgreSQL but not as yet moved off PHP. In index.php, the
database connection has read-only access; there's a separate page that
lets me log in using higher database credentials, and thus gain the
power to add/edit entries. It's fine for the read-only user to have
access to every table, because there's really only one table (not
counting statistics).

ChrisA
 
