coldpizza
Hi,
I have a basic Python CGI web form that shows data from a SQLite3 database. It runs under the built-in CGI web server, which I start like this:
Code:
import SimpleHTTPServer
import SocketServer

# Serve the current directory over HTTP on port 80.
SocketServer.TCPServer(("", 80), SimpleHTTPServer.SimpleHTTPRequestHandler).serve_forever()
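(As an aside, I believe the handler that actually executes CGI scripts is CGIHTTPServer.CGIHTTPRequestHandler rather than SimpleHTTPRequestHandler; a rough sketch of that setup, assuming the script sits under a cgi-bin/ directory and that the port number does not matter, would be:)

Code:
import CGIHTTPServer
import SocketServer

# Sketch only: serves files from the current directory and runs
# scripts placed under ./cgi-bin/ as CGI (that directory is the
# handler's default; the port is arbitrary).
SocketServer.TCPServer(("", 8000), CGIHTTPServer.CGIHTTPRequestHandler).serve_forever()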
The script runs OK with plain ASCII characters, but when I try to process non-ASCII data I get a UnicodeDecodeError exception ('ascii' codec can't decode byte 0xd0 in position 0: ordinal not in range(128)).
I have added the 'u' prefix to all my literal strings, and I _have_ wrapped all my output statements in myString.encode('utf8', "replace"), but apparently the UnicodeDecodeError occurs because of a string that comes back to the script through cgi.FieldStorage().
I.e. I have the lines:
form = cgi.FieldStorage()
word = form['word']
which retrieve the 'word' value from a GET request.
I am using this 'word' variable like this:
print u'''<input type="text" name="blabla" value="%s">''' % (word)
and apparently this causes exceptions with non-ASCII strings.
I've also tried this:
print u'''<input type="text" name="blabla" value="%s">''' % (word.encode('utf8'))
but I still get the same UnicodeDecodeError.
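To show more precisely what I mean, here is a minimal snippet outside of CGI that reproduces the same exception for me. It is only a sketch: it assumes that whatever cgi.FieldStorage() hands back is effectively a UTF-8-encoded byte string (e.g. Cyrillic text, which would explain the 0xd0 byte), and the sample value is made up.

Code:
# -*- coding: utf-8 -*-
# Python 2 sketch; 'word' stands in for the value I get from the form.
word = u'Привет'.encode('utf-8')  # a plain byte string: '\xd0\x9f...'

# Interpolating the byte string into a unicode template:
try:
    print u'''<input type="text" name="blabla" value="%s">''' % (word)
except UnicodeDecodeError as e:
    print 'plain interpolation:', e   # 'ascii' codec can't decode byte 0xd0 ...

# Interpolating after calling .encode('utf8') on it:
try:
    print u'''<input type="text" name="blabla" value="%s">''' % (word.encode('utf8'))
except UnicodeDecodeError as e:
    print 'with .encode():', e        # same error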
What is the general good practice for working with UTF-8? The standard Python CGI documentation says nothing about character sets. It seems insane to have to explicitly wrap every string in .encode('utf8'), but even that does not work.
Could the problem be related to the encoding of the string returned by cgi.FieldStorage()? My page uses UTF-8 encoding. What would be the encoding of the data that comes back from the browser after the form is submitted?
Why does Python always try to use 'ascii'? I have checked all my strings and they are prefixed with 'u'. I have also tried replacing the print statements with
sys.stdout.write(DATA.encode('utf8'))
but this did not help.
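Again as a sketch of what I mean (with the same made-up byte-string assumption for the form value as above):

Code:
# -*- coding: utf-8 -*-
import sys

# When DATA is a real unicode object, encoding and writing works:
DATA = u'Привет'
sys.stdout.write(DATA.encode('utf8') + '\n')

# But if DATA is already a UTF-8 byte string (which is what I suspect
# the form gives me), the very same call raises the same
# UnicodeDecodeError before anything is written:
DATA = u'Привет'.encode('utf8')
sys.stdout.write(DATA.encode('utf8') + '\n')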
Any clues?
Thanks in advance.