peufeu
postgresql
When not using transactions, MySQL will blow away Postgres in
INSERT/UPDATE speed, until concurrency rises a bit and the
readers-block-writers locking strategy used by MyISAM starts to show its
weaknesses. This fits mass hosting well, for instance: a lot of small
databases on one MySQL server will not run into concurrency problems.
Of course, when not using transactions you have to remember that your
data is not safe: any power failure can corrupt your database.
Postgres on a RAID controller with a battery-backed write cache no
longer has to sync the disk on each commit, so it gets a lot faster, and
you still have data safety. You can also run it with fsync=off for a
massive speedup in transactions per second, but then you lose that
safety.
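As a sketch, the trade-off above corresponds to one knob in postgresql.conf (parameter name as in stock PostgreSQL; the comments are my reading of the behaviour):

```ini
# postgresql.conf -- durability setting discussed above (sketch)

# Default: flush WAL to disk on every commit. With a battery-backed
# RAID write cache the controller acknowledges the flush from its
# cache, so commits are fast *and* survive a power failure.
fsync = on

# Turning fsync off skips the flush entirely: a big gain in
# transactions per second, but a crash can corrupt the database.
#fsync = off
```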
When using transactions (InnoDB), I've read that Postgres is a bit faster.
Regarding query optimization: for simple queries like grabbing a single
row from a table, MySQL will be a little faster (say 0.2 vs 0.3 ms),
while for moderately complex queries, like joins over four or more
medium-sized tables (>10 K rows), Postgres can be faster by anything
from 1x to 1000x. I've seen it happen: the same query taking 0.5 seconds
in MySQL and 0.5 ms in Postgres, simply because MySQL couldn't plan it
correctly.
I'd suggest that on anything non-trivial, Postgres will be a lot faster.
> I don't know about the whole picture, but I know from evidence on this
> group that there are PostgreSQL driver modules (the name "psycopg" comes
> to mind, but this may be false memory) that appear to take diabolical
> liberties with DBAPI-2.0, whereas my experience with MySQLdb has been
> that I can interchange the driver with mxODBC (for example) as a drop-in
> replacement (modulo the differing paramstyles :-().
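The paramstyle mismatch mentioned above can be papered over with a small shim; `qmark_to_format` is a hypothetical helper, not part of any driver, and this naive version ignores `?` inside string literals:

```python
# Sketch of a paramstyle shim: rewrite DB-API "qmark" placeholders (?)
# into "format" placeholders (%s), the style MySQLdb and psycopg use.
# Naive: does not handle ? characters inside SQL string literals.

def qmark_to_format(sql: str) -> str:
    """Replace each ? placeholder with %s."""
    return sql.replace("?", "%s")

print(qmark_to_format("SELECT name FROM users WHERE id = ? AND active = ?"))
```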
psycopg is extremely fast and powerful, and it does a lot more than the
DB-API specifies.
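For reference, the portable DB-API 2.0 core that any conforming driver (MySQLdb, psycopg, mxODBC, ...) must provide is quite small; the sketch below uses the stdlib sqlite3 driver so it runs anywhere, but only the module name and the paramstyle would change with another driver:

```python
import sqlite3

# The portable DB-API 2.0 pattern: connect, get a cursor, execute
# with bound parameters, fetch, commit. sqlite3 uses qmark style (?).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO users (name) VALUES (?)", ("peufeu",))
conn.commit()
cur.execute("SELECT name FROM users WHERE id = ?", (1,))
row = cur.fetchone()
print(row[0])
conn.close()
```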
I'd say that database independence is a utopia: once you start using
triggers, stored procedures, and database-specific column types, you'll
be more or less tied to one database, and doing this is necessary to get
good performance and generally do things right.