Stupid ODBC!

  • Thread starter Lennon Day-Reynolds

Lennon Day-Reynolds

Sean: I like the idea of a separate component that handles nothing but
ID generation. Properly done, it could easily extend to other useful
applications (session IDs for non-RDBMS-backed webapps, etc.).

Since you could always just increment the counter again whenever a
database transaction failed and had to be retried, I suspect you
wouldn't really even need to manage the two operations under a single
transactional context.

Now the only question is what underlying storage engine to
use...Berkeley DB? CDB? Flat files? PStore?
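For the PStore option, a minimal sketch of such a component might look like this (class and method names are hypothetical, just for illustration). PStore, from Ruby's standard library, gives atomic file-backed transactions per process, and a failed database transaction can simply ask for the next value again, wasting one id:

```ruby
require 'pstore'

# Hypothetical sketch of a standalone ID-generation component backed by
# PStore. Each named counter is incremented inside a PStore transaction,
# which writes the file atomically.
class SequenceStore
  def initialize(path)
    @store = PStore.new(path)
  end

  # Atomically increment and return the counter named +name+.
  def next_id(name)
    value = nil
    @store.transaction do
      value = (@store[name] || 0) + 1
      @store[name] = value
    end
    value
  end
end
```

A fresh store starts each counter at 1, and the value survives restarts because PStore re-reads the file on every transaction.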

Okay, looks like I have a good weekend project now.

Lennon
 

Iki

At least for the MSSQL server I think the most appropriate query is

select SCOPE_IDENTITY()

just after the insert statement.
You could also use

select IDENT_CURRENT('table_name')

but I suspect this isn't appropriate
due to concurrency issues.
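Since SCOPE_IDENTITY() only reports identity values generated in the current session and scope, an adapter would have to issue both statements on the same connection. A rough sketch of how an insert method might pair them (the connection API and method names here are invented for illustration, not actual Active Record code):

```ruby
# Very naive quoting, for illustration only.
def quote(value)
  value.is_a?(String) ? "'#{value.gsub("'", "''")}'" : value.to_s
end

# Hypothetical adapter method: run the INSERT and SCOPE_IDENTITY() on
# the same connection, so the id query sees the row we just inserted
# and not one from a concurrent session (the IDENT_CURRENT pitfall).
def insert_and_return_id(conn, table, columns, values)
  row = values.map { |v| quote(v) }.join(", ")
  conn.execute("INSERT INTO #{table} (#{columns.join(', ')}) VALUES (#{row})")
  conn.select_value("SELECT SCOPE_IDENTITY()").to_i
end
```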

Keep up the good work and let us know about the ODBC adapter.
I for one could use it to try out ActiveRecord on Windows.
Actually I am a bit surprised that it wasn't the first adapter built.
 

Sean O'Dell

Sean: I like the idea of a separate component that handles nothing but
ID generation. Properly done, it could easily extend to other useful
applications (session IDs for non-RDBMS-backed webapps, etc.).

Now the only question is what underlying storage engine to
use...Berkeley DB? CDB? Flat files? PStore?

I would use a regular old file. If you have a persistent layer, maintain it
internally as a native integer and write it out each time it's incremented as
text. Re-load the value from the file when the persistent layer first loads.
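A bare-bones version of that approach might look like this (names hypothetical). Note there is no locking at all here, which is exactly the gap the thread turns to next:

```ruby
# Flat-file counter along the lines Sean describes: hold the value as
# an integer in memory, write it out as text on each increment, and
# re-read it from the file when the layer first loads. No locking.
class FileCounter
  def initialize(path)
    @path = path
    @value = File.exist?(path) ? File.read(path).to_i : 0
  end

  def increment
    @value += 1
    File.write(@path, @value.to_s)
    @value
  end
end
```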

Sean O'Dell
 

Ara.T.Howard

I would use a regular old file. If you have a persistent layer, maintain it
internally as a native integer and write it out each time it's incremented as
text. Re-load the value from the file when the persistent layer first loads.

Sean O'Dell

this is a lot trickier than it sounds: you will have locking issues since every
thread/process will need to coordinate access to this file. you can wrap a
call to File#flock with a mutex, but this will fail with, for example, NFS
mounted home directories. for that you could use a mutex wrapped fcntl lock,
but that will fail on certain OSes - like Solaris. you could use my lockfile
class, which will work with threads, processes, and NFS filesystems, but that
will greatly complicate matters...

in short, i think that if you move state out of the database you'll need to
write a database. well, not really, but you have two choices:

* coordinate access via a central server
* coordinate access amongst all clients

the first is probably impossible/overkill and the second is very, very hard.
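For the single-host case, the flock-plus-mutex combination mentioned above looks roughly like this (a sketch, and as Ara notes, flock cannot be trusted on NFS mounts):

```ruby
# The Mutex serializes threads within this process; the exclusive
# flock serializes against other processes on the same machine.
# Neither helps on an NFS-mounted filesystem.
COUNTER_MUTEX = Mutex.new

def next_counter_value(path)
  COUNTER_MUTEX.synchronize do
    File.open(path, File::RDWR | File::CREAT) do |f|
      f.flock(File::LOCK_EX)   # released automatically on close
      value = f.read.to_i + 1
      f.rewind
      f.truncate(0)
      f.write(value.to_s)
      value
    end
  end
end
```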

-a
--
===============================================================================
| EMAIL :: Ara [dot] T [dot] Howard [at] noaa [dot] gov
| PHONE :: 303.497.6469
| A flower falls, even though we love it;
| and a weed grows, even though we do not love it.
| --Dogen
===============================================================================
 

Sean O'Dell

this is a lot trickier than it sounds: you will have locking issues since
every thread/process will need to coordinate access to this file. you can
wrap a call to File#flock with a mutex, but this will fail with, for
example, NFS mounted home directories. for that you could use a mutex
wrapped fcntl lock, but that will fail on certain OSes - like Solaris. you
could use my lockfile class, which will work with threads, processes, and
NFS filesystems, but that will greatly complicate matters...

A middle layer server would be able to use file locks and mutexes to
coordinate access, which is what I advocated.

However, if multiple processes wanted to handle the file directly, they only
need to lock the file at appropriate times. I actually have a pretty large
project right now that consists of multiple processes accessing the same
files using file locks as semaphores. It works very well, but I wouldn't
have had much success with it if I hadn't written a module to manage the
locks.

But for a single file it shouldn't be tricky.

Sean O'Dell
 

Carl Youngblood

Yes. I did this exact test a couple of years ago and it just keeps
incrementing. I think the only time it resets its counter is if you
do a truncate or something like that.
 

Ara.T.Howard

But for a single file it shouldn't be tricky.

unless you are on something pervasive, and evil, like NFS... i agree in
general with you and have done a lot of that too. just wanted to point out
some pitfalls.

regards.

-a
 

Sean O'Dell

unless you are on something pervasive, and evil, like NFS... i agree in
general with you and have done a lot of that too. just wanted to point out
some pitfalls.

Oh, okay, I understand. Extend that then to make it a general rule: NEVER use
file locks as semaphores on any file system that doesn't support them properly
or when failure of the file system is common (disconnects). To anyone who
uses file locks on NFS, Samba, mounted FTP shares, etc.: /salute.

Sean O'Dell
 

Lennon Day-Reynolds

I appreciate everyone being so interested in this, but I think we're
wandering far afield of the whole point here, which is implementing a
simple ORM layer atop a standard RDBMS backend. Implementing a
concurrent atomic sequence manager external to that database sounds a
lot like reinventing the wheel, and is certainly way outside the scope
of what I was trying to do with Active Record.

At this point, I think I'm going to have to go one of two ways on
this. The first option would be to simply write a SQL Server-specific
(rather than generic ODBC) adapter for Active Record, using the
built-in identity management functionality. The simpler option for my
purposes may actually be to transfer the data I need (which is mostly
periodic reporting from a sales transaction database) from SQL Server
to something easier to work with (like SQLite) via straight ODBC/SQL,
do my Active Record-based manipulations on that local snapshot, and
then drop the local database.

The general issue of supporting arbitrary ODBC backends through Active
Record will probably take some more time to solve.

Lennon
 

Carl Youngblood

What's so bad about the idea of using the database itself to guarantee
atomicity?
 

Lennon Day-Reynolds

Mainly the fact that I can't change the schema on some of the
databases I need to access, as they are backing live production
systems.

Really, the more we go around and around on this, the more I realize
that a simple "Silver Bullet" solution just doesn't really exist. I'm
still going to work on a SQL Server adapter for Active Record, just
for future projects (and because I've made enough noise at this point
to feel like I need some code to back it all up).

Unfortunately, my plan of hacking away today and tomorrow while here at
OSCON has been complicated by the premature death of my laptop AC
adapter last night. I still have a perfectly good Thinkpad; it just
has no battery charge, and no way of recharging until I can get a new
wall wart.

Lennon
 

David Heinemeier Hansson

At this point, I think I'm going to have to go one of two ways on
this. The first option would be to simply write a SQL Server-specific
(rather than generic ODBC) adapter for Active Record, using the
built-in identity management functionality.

Here's what I think would be a good idea. Implement an abstract ODBC
driver, which holds as much as can be shared across all ODBC-accessible
databases, then subclass that for each of the interesting databases you
want to access through ODBC. This would allow each database, like SQL
Server, to make full use of its abilities to do the auto-id stuff --
without worrying about the rest of the ODBC feature set.

Furthermore, it would allow database-specific optimizations to be
made, if any could be found. So the ODBC driver is never really
interesting in itself; it's just a quicker way to get more specific
database drivers into Active Record.
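The layering David describes could be sketched like this (class and method names are invented for illustration, not actual Active Record API; the DB2 query shown is an assumption based on that database's local-identity function):

```ruby
# Hypothetical sketch: the abstract ODBC adapter owns the shared
# plumbing, and each thin subclass overrides only the database-specific
# pieces, such as the query for the last auto-generated id.
class AbstractOdbcAdapter
  def last_insert_id_sql
    raise NotImplementedError, "subclass must supply the identity query"
  end
end

class SqlServerOdbcAdapter < AbstractOdbcAdapter
  def last_insert_id_sql
    "SELECT SCOPE_IDENTITY()"
  end
end

class Db2OdbcAdapter < AbstractOdbcAdapter
  def last_insert_id_sql
    "SELECT IDENTITY_VAL_LOCAL() FROM SYSIBM.SYSDUMMY1"
  end
end
```

Generic ODBC code paths stay in the base class; anything a particular backend does better gets an override, which is why the bare ODBC driver never needs to be interesting on its own.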

Does that make sense?
--
David Heinemeier Hansson,
http://www.rubyonrails.org/ -- Web-application framework for Ruby
http://www.instiki.org/ -- A No-Step-Three Wiki in Ruby
http://www.basecamphq.com/ -- Web-based Project Management
http://www.loudthinking.com/ -- Broadcasting Brain
http://www.nextangle.com/ -- Development & Consulting Services
 
