Writing Unicode to database using ODBC


Mudcat

In short, what I'm trying to do is read a document using an XML parser
and then upload that data back into a database. I've got the code more
or less complete, using xml.etree.ElementTree for the parser and
dbi/odbc for my db connection.

To fix problems with unicode I built a work-around that maps unicode
characters to equivalent ascii characters and then encodes everything
to ascii. That allowed me to build the application and debug it
without running into problems when printing to a file or to stdout.
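
Roughly, the work-around looks like this (a minimal sketch, not the
actual code; the mapping table and function name are just illustrative):

    # Illustrative only: map the "smart" punctuation coming out of the XML
    # to plain ASCII look-alikes, then force the whole string to ASCII.
    ASCII_EQUIVALENTS = {
        u'\u201c': u'"',    # left double quotation mark
        u'\u201d': u'"',    # right double quotation mark
        u'\u2018': u"'",    # left single quotation mark
        u'\u2019': u"'",    # right single quotation mark
    }

    def to_ascii(text):
        for uni_char, ascii_char in ASCII_EQUIVALENTS.items():
            text = text.replace(uni_char, ascii_char)
        # Anything that slipped through unmapped becomes '?' instead of crashing.
        return text.encode('ascii', 'replace')

    print to_ascii(u'+CMGL: (\u201cREC UNREAD\u201d) OK')   # +CMGL: ("REC UNREAD") OK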

However, now that I've got all that working I'd like to simply take
the unicode data from the xml parser and then pass it directly into
the database (which is currently set up for unicode data). I've run
into problems and just can't figure why this isn't working.

The breakdown is occurring when I try to execute the db query:

cur.execute( query )

Fairly straightforward. I get the following error:

File "atp_alt.py", line 273, in dbWrite
cur.execute( query )
UnicodeEncodeError: 'ascii' codec can't encode character u'\u201c' in
position 3
79: ordinal not in range(128)

I've verified that query is of type unicode by checking the type a
statement or two earlier (output: <type 'unicode'>).

So then I thought maybe the odbc execute just can't handle unicode
data. But when I do the following command:

query = query.encode('utf-8')

It actually works. So apparently execute can handle unicode data. The
problem now is that basically the data has been encoded twice and is
in the wrong format when I pull it from the database:
u'+CMGL: (\xe2\u20ac\u0153REC UNREAD\xe2\u20ac\x9d,\xe2\u20ac\x9dREC
READ\xe2\u20ac\x9d,\xe2\u20ac\x9dSTO UNSENT\xe2\u20ac\x9d,\xe2\u20ac
\x9dSTO SENT\xe2\u20ac\x9d,\xe2\u20ac\x9dALL\xe2\u20ac\x9d) OK'

The non-alpha characters should be double-quotes. It works correctly
if I copy/paste into the editor:

'\xe2\x80\x9cREC'
<type 'str'>


I can then decode that string to get back the proper unicode data. I
can't do the same with the data out of the db because it's a unicode
object, which is the wrong type for the data it actually contains.
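
To make the type problem concrete (the values below are illustrative:
pasted stands for the copy/pasted text and fetched for what comes back
from the db):

    # The copy/pasted value is a byte string (str), so UTF-8 decoding works.
    pasted = '\xe2\x80\x9cREC'
    print repr(pasted.decode('utf-8'))    # u'\u201cREC'

    # The value from the database is already a unicode object; calling
    # .decode() on it makes Python 2 encode it to ASCII first, which fails.
    fetched = u'\xe2\u20ac\u0153REC'
    fetched.decode('utf-8')               # raises UnicodeEncodeError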

I think the problem is that I'm having to encode data again to force
it into the database, but how can I use the odbc.execute() function
without having to do that?
 

John Machin

> However, now that I've got all that working I'd like to simply take
> the unicode data from the xml parser and then pass it directly into
> the database (which is currently set up for unicode data). I've run
> into problems and just can't figure why this isn't working.

What database? What does "set up for unicode data" mean? If you are
using MS SQL Server, are your text columns defined to be varchar or
nvarchar or something else?
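
For example (using pyodbc here, with a made-up DSN and table name),
you can ask the server directly how the text columns are declared:

    import pyodbc

    conn = pyodbc.connect('DSN=mydsn;UID=user;PWD=secret')   # placeholder DSN
    cur = conn.cursor()
    cur.execute(
        "SELECT COLUMN_NAME, DATA_TYPE "
        "FROM INFORMATION_SCHEMA.COLUMNS "
        "WHERE TABLE_NAME = 'messages'"                      # placeholder table
    )
    for name, data_type in cur.fetchall():
        print name, data_type    # nvarchar/ntext store Unicode; varchar is limited to the code page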

> The breakdown is occurring when I try to execute the db query:
>
>           cur.execute( query )
>
> Fairly straightforward. I get the following error:
>
>   File "atp_alt.py", line 273, in dbWrite
>     cur.execute( query )
> UnicodeEncodeError: 'ascii' codec can't encode character u'\u201c' in
> position 379: ordinal not in range(128)
>
> I've verified that query is of type unicode by checking the type a
> statement or two earlier (output: <type 'unicode'>).
>
> So then I thought maybe the odbc execute just can't handle unicode
> data.

It appears to be expecting a str object, not a unicode object.
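
You can see the same failure without any database involved; converting
the unicode query with str() (which, at a guess, is roughly what the
odbc module does internally) uses the default ASCII codec:

>>> str(u'\u201cREC UNREAD\u201d')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode character u'\u201c' in position 0: ordinal not in range(128)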
> But when I do the following command:
>
>           query = query.encode('utf-8')
>
> It actually works. So apparently execute can handle unicode data.

"not crashing" != "works"
> The problem now is that basically the data has been encoded twice and
> is in the wrong format when I pull it from the database:

No, your utf8 string has been DEcoded using some strange encoding.
> u'+CMGL: (\xe2\u20ac\u0153REC UNREAD\xe2\u20ac\x9d,\xe2\u20ac\x9dREC
> READ\xe2\u20ac\x9d,\xe2\u20ac\x9dSTO UNSENT\xe2\u20ac\x9d,\xe2\u20ac
> \x9dSTO SENT\xe2\u20ac\x9d,\xe2\u20ac\x9dALL\xe2\u20ac\x9d) OK'

It would help very much if you showed the repr() of your input unicode
text.

Observation: the first bunch of rubbish output (\xe2\u20ac\u0153)
differs from all the others (\xe2\u20ac\x9d).
> The non-alpha characters should be double-quotes.

What "double-quotes" character(s)? Unicode has several: U+0022
(unoriented), U+201C (left), U+201D (right), plus more exotic ones.
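
A quick way to check which of those you actually have (unicodedata is
in the standard library):

>>> import unicodedata
>>> for ch in u'"\u201c\u201d':
...     print repr(ch), unicodedata.name(ch)
...
u'"' QUOTATION MARK
u'\u201c' LEFT DOUBLE QUOTATION MARK
u'\u201d' RIGHT DOUBLE QUOTATION MARK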
> It works correctly if I copy/paste into the editor:
>
> '\xe2\x80\x9cREC'

More observations:

>>> '\xe2\x80\x9c'.decode('cp1252')
u'\xe2\u20ac\u0153'

Aha! The first load of rubbish! However:

>>> '\xe2\x80\x9d'.decode('cp1252')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python25\lib\encodings\cp1252.py", line 15, in decode
    return codecs.charmap_decode(input,errors,decoding_table)
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 2:
character maps to <undefined>

Hmmm, try the ferschlugginer mbcs encoding:

>>> '\xe2\x80\x9d'.decode('mbcs')
u'\xe2\u20ac\x9d'

That matches the rest of the rubbish exactly.

So, if you must persist with the odbc module, either encode your
unicode text with mbcs, not utf8, or find out how to "set up for
unicode data" so that utf8 is the default.

You may like to consider using pyODBC or mxODBC instead of odbc.
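
For example, with pyodbc and an nvarchar column you should be able to
pass the unicode object straight through as a bound parameter (a
sketch only; the DSN, table and column names are made up):

    import pyodbc

    conn = pyodbc.connect('DSN=mydsn;UID=user;PWD=secret')   # placeholders again
    cur = conn.cursor()

    text = u'+CMGL: (\u201cREC UNREAD\u201d) OK'
    # Binding the value as a parameter means the statement itself stays plain
    # ASCII, and the driver handles the Unicode value.
    cur.execute("INSERT INTO messages (body) VALUES (?)", (text,))
    conn.commit()

    cur.execute("SELECT body FROM messages")
    print repr(cur.fetchone()[0])    # should come back as a unicode object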

HTH,
John
 
