SQL Transactions and Database Pooling

DiscoStu

Hello Everyone,

I'm writing a Java server that uses the Jakarta pooling classes to
store database connections. I'm then trying to use those connections to
send blocks of data to the SQL Server. When I set
autoCommit(false), it throws this exception at me:

java.sql.SQLException: [Microsoft][SQLServer 2000 Driver for
JDBC]Can't start manual transaction mode because there are cloned
connections.


I have WAY too many individual insert statements to do them one-by-one.
How can I use my database connection pool and still send big blocks of
insert statements and only have them processed on the SQL Server when
they ALL get there, not receive-process, receive-process, and so on?
Roughly, this is what I'm doing, stripped down, as sketched below.
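A minimal sketch of my code (pool setup is omitted and the table/column
names are made up for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class BulkInsert {

    // dataSource is the pooled DataSource built from the Jakarta/commons
    // pooling classes; "my_table" and its columns are placeholders.
    public static void insertAll(DataSource dataSource, String[][] rows)
            throws SQLException {
        Connection con = dataSource.getConnection();   // borrowed from the pool
        try {
            con.setAutoCommit(false);                  // <-- this is the call that throws
            PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO my_table (id, payload) VALUES (?, ?)");
            try {
                for (String[] row : rows) {
                    ps.setString(1, row[0]);
                    ps.setString(2, row[1]);
                    ps.executeUpdate();
                }
                con.commit();   // want all rows applied only once they have ALL arrived
            } finally {
                ps.close();
            }
        } finally {
            con.close();                               // returns the connection to the pool
        }
    }
}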

Thanks
 
rkm

DiscoStu said:
How can I use my database connection pool and still send big blocks of
insert statements and only have them processed on the SQL Server when
they ALL get there?

Use stmt.addBatch() to build up a big set of statements,
then executeBatch() to send and run them in one go.
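Something like this (sketch only; the connection comes from your pool
and the table/column names are placeholders):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchInsert {

    // Queue every insert locally, then send the whole set in one shot.
    public static void insertBatch(Connection con, String[][] rows)
            throws SQLException {
        PreparedStatement ps = con.prepareStatement(
                "INSERT INTO my_table (id, payload) VALUES (?, ?)");
        try {
            for (String[] row : rows) {
                ps.setString(1, row[0]);
                ps.setString(2, row[1]);
                ps.addBatch();                 // queued client-side, nothing sent yet
            }
            int[] counts = ps.executeBatch();  // one call runs the whole batch
            // counts[i] is the update count (or status code) for the i-th statement
        } finally {
            ps.close();
        }
    }
}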

The downside, at least with Oracle, is that if any of them throws an
exception, the returned array of status codes is practically useless
and you can't tell which statement had the error. That part sucks, so
I now do the bind-execute on individual statements, and I find it to
be faster than executeBatch overall, especially when it gets into a
recovery situation.

How much faster? On certain of my tables, executeBatch routinely runs
at about 300 inserts per second. If it goes into recovery mode because
of, say, a constraint violation, I retry each statement to see which
one threw, then switch that one from an INSERT to an UPDATE and rerun
it. Doing that, overall throughput drops to about 10 rows per second.
With the individual bind-execute approach, however, I immediately see
the bad statement throw an error and repeat only that row as an
UPDATE. The overall insert rate with this approach hits 500 to 600 per
second, which is faster than executeBatch even when executeBatch isn't
getting any errors.
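My per-row version looks roughly like this (sketched for illustration;
the table, columns, and the way a duplicate is detected are simplified):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class RowByRowUpsert {

    // Bind and execute one row at a time; when an insert throws (e.g. a
    // constraint violation), retry just that row as an UPDATE.
    public static void upsertAll(Connection con, String[][] rows)
            throws SQLException {
        PreparedStatement insert = con.prepareStatement(
                "INSERT INTO my_table (id, payload) VALUES (?, ?)");
        PreparedStatement update = con.prepareStatement(
                "UPDATE my_table SET payload = ? WHERE id = ?");
        try {
            for (String[] row : rows) {
                try {
                    insert.setString(1, row[0]);
                    insert.setString(2, row[1]);
                    insert.executeUpdate();
                } catch (SQLException e) {
                    // The offending row is known immediately, so only this one
                    // row is rerun as an UPDATE instead of retrying the whole batch.
                    update.setString(1, row[1]);
                    update.setString(2, row[0]);
                    update.executeUpdate();
                }
            }
        } finally {
            insert.close();
            update.close();
        }
    }
}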

Rick
 
Benny

Hi,

Does your connection string contain the attribute "SelectMethod"? It
should be set to "cursor", like the following:


jdbc:microsoft:sqlserver://x.x.x.x:1433;DatabaseName=DBNAME;SelectMethod=cursor
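For example, if the pool is built with a commons-dbcp BasicDataSource,
the URL goes in like this (just a sketch; the host, credentials, and
database name are placeholders):

import org.apache.commons.dbcp.BasicDataSource;

public class PoolConfig {

    public static BasicDataSource createDataSource() {
        BasicDataSource ds = new BasicDataSource();
        // Driver class of the SQL Server 2000 Driver for JDBC.
        ds.setDriverClassName("com.microsoft.jdbc.sqlserver.SQLServerDriver");
        // Note the SelectMethod=cursor attribute at the end of the URL.
        ds.setUrl("jdbc:microsoft:sqlserver://x.x.x.x:1433;"
                + "DatabaseName=DBNAME;SelectMethod=cursor");
        ds.setUsername("user");
        ds.setPassword("password");
        return ds;
    }
}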

Benny
 
DiscoStu

That's excellent, Benny! I'm going to give that a try. I never would
have found that otherwise.

Greg
 
