Umlaut characters in Unicode

  • Thread starter Jürgen Kahrs
  • Start date

Jürgen Kahrs

Martin said:
Why is an umlaut a problem? Unicode certainly contains/allows umlaut
characters.

Umlauts are not a problem for Unicode. They are a problem if you write
a text with an editor in ISO-8859-1 mode and then view it with an
editor in UTF-8 mode.

For example, while writing this posting I am using ISO-8859-1 mode,
and this is a u-umlaut: ü
Now switch your news reader to UTF-8 and you will find that the
character no longer looks like a u-umlaut.
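
A minimal Java sketch of what happens here (a hypothetical example, not code
from this thread; the class and variable names are made up): the single
ISO-8859-1 byte 0xFC reads back as ü when decoded as ISO-8859-1, but a strict
UTF-8 decoder rejects the same byte outright.

    import java.nio.ByteBuffer;
    import java.nio.charset.CharacterCodingException;
    import java.nio.charset.CodingErrorAction;
    import java.nio.charset.StandardCharsets;

    public class UmlautBytes {
        public static void main(String[] args) throws Exception {
            // In ISO-8859-1, u-umlaut is the single byte 0xFC.
            byte[] latin1Bytes = { (byte) 0xFC };

            // Decoded as ISO-8859-1, it is the expected character.
            String asLatin1 = new String(latin1Bytes, StandardCharsets.ISO_8859_1);
            System.out.println("ISO-8859-1: " + asLatin1);   // prints the umlaut

            // Decoded as strict UTF-8, the same byte is rejected:
            // 0xFC is not a legal start of a UTF-8 sequence.
            try {
                StandardCharsets.UTF_8.newDecoder()
                        .onMalformedInput(CodingErrorAction.REPORT)
                        .decode(ByteBuffer.wrap(latin1Bytes));
            } catch (CharacterCodingException e) {
                System.out.println("UTF-8: malformed input - " + e);
            }
        }
    }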
 

Steve W. Jackson

Jürgen Kahrs said:
:Martin Honnen wrote:
:
:> Why is an umlaut a problem? Unicode certainly contains/allows umlaut
:> characters.
:
:Umlauts are not a problem for Unicode. They are a problem if you write
:a text with an editor in ISO-8859-1 mode and then view it with an
:editor in UTF-8 mode.
:
:For example, while writing this posting I am using ISO-8859-1 mode,
:and this is a u-umlaut: ü
:Now switch your news reader to UTF-8 and you will find that the
:character no longer looks like a u-umlaut.

That's precisely the problem we've encountered with our application,
which stores its data in UTF-8 encoded XML documents.

We maintain everything internally in our Java application as part of a
DOM, and it's saved to an external file on request. But we failed to
force the byte stream written to the file to be encoded to UTF-8, so it
used the default ISO-8859-1 on our American systems. When the next
attempt was made to read the file (only if such characters appeared),
errors occurred because there were non-UTF-8 characters present.

The solution we found was to serialize the DOM with UTF-8 encoding
specified (which we were already doing) and then also to specify UTF-8
encoding on the output file stream when writing. Once this was done,
opening such an XML file in an editor clearly showed something that did
not resemble the letter with an umlaut, accent, or other special feature.
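
As a rough illustration of the mismatch (a hypothetical sketch, not our
actual application code; the class and helper names are invented), writing
the same text through a writer that falls back to ISO-8859-1 produces a byte
that is not valid UTF-8, while naming UTF-8 explicitly produces the correct
two-byte sequence:

    import java.io.ByteArrayOutputStream;
    import java.io.OutputStreamWriter;
    import java.io.Writer;

    public class EncodingMismatchSketch {
        public static void main(String[] args) throws Exception {
            String text = "J\u00FCrgen";   // "Jürgen" - contains ü (U+00FC)

            // What the old code effectively did: the writer used the platform
            // default, which was ISO-8859-1 on the affected systems
            // (simulated explicitly here so the output is deterministic).
            ByteArrayOutputStream asLatin1 = new ByteArrayOutputStream();
            try (Writer w = new OutputStreamWriter(asLatin1, "ISO-8859-1")) {
                w.write(text);
            }

            // The fix: name UTF-8 explicitly when wrapping the output stream.
            ByteArrayOutputStream asUtf8 = new ByteArrayOutputStream();
            try (Writer w = new OutputStreamWriter(asUtf8, "UTF-8")) {
                w.write(text);
            }

            System.out.println("ISO-8859-1 bytes: " + hex(asLatin1.toByteArray()));
            System.out.println("UTF-8 bytes:      " + hex(asUtf8.toByteArray()));
            // ISO-8859-1 writes the single byte fc for ü, which is not a legal
            // UTF-8 sequence, so a later parse of the "UTF-8" file fails.
        }

        private static String hex(byte[] bytes) {
            StringBuilder sb = new StringBuilder();
            for (byte b : bytes) sb.append(String.format("%02x ", b & 0xFF));
            return sb.toString();
        }
    }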

= Steve =
 

Jürgen Kahrs

Steve said:
We maintain everything internally in our Java application as part of a
DOM, and it's saved to an external file on request. But we failed to
force the byte stream written to the file to be encoded to UTF-8, so it
used the default ISO-8859-1 on our American systems. When the next
attempt was made to read the file (only if such characters appeared),
errors occurred because there were non-UTF-8 characters present.

Yes, this is the situation I was thinking of.
Now, with your unpleasant experience in mind,
would you say that the following document was
also encoded in an inadequate way ?

http://belnet.dl.sourceforge.net/sourceforge/ganttproject/ganttproject-example3.xml

As I said in my original posting, I am guessing
that the author used an ISO-8859-1 environment
(just like you) but forgot to change the encoding
declaration from UTF-8 to ISO-8859-1.

Thanks for answering !
 

Steve W. Jackson

Jürgen Kahrs said:
:Steve W. Jackson wrote:
:
:> We maintain everything internally in our Java application as part of a
:> DOM, and it's saved to an external file on request. But we failed to
:> force the byte stream written to the file to be encoded to UTF-8, so it
:> used the default ISO-8859-1 on our American systems. When the next
:> attempt was made to read the file (only if such characters appeared),
:> errors occurred because there were non-UTF-8 characters present.
:
:Yes, this is the situation I was thinking of.
:Now, with your unpleasant experience in mind,
:would you say that the following document was
:also encoded in an inadequate way ?
:
: http://belnet.dl.sourceforge.net/sourceforge/ganttproject/ganttproject-example3.xml
:
:As I said in my original posting, I am guessing
:that the author used an ISO-8859-1 environment
:(just like you) but forgot to change the encoding
:declaration from UTF-8 to ISO-8859-1.
:
:Thanks for answering !

It looks to me as if it's not encoded properly, based on the visual
appearance of the <resource> element near the end.

Just to make clear what I said earlier, the problem we encountered did
not stem from using an ISO-8859-1 encoding in the XML itself. All of
our files already included <?xml version="1.0" encoding="UTF-8"?> at the
top when serialized, since we told the XML serializer to use UTF-8.

In addition, we write the file using Java's OutputStreamWriter, to
which we pass both the stream being written (in this case a Java
FileOutputStream designating the file) and the encoding to use when
writing to it. Only if *both* of these things were done would
non-ASCII characters get correctly written and then parse without error
next time around. We got a separate report of this same problem from a
German user who used a directory name containing an umlaut-o (as in ö)
and from a French user with an accented e (as in é).
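
A minimal sketch of that two-part fix, assuming a javax.xml.transform.Transformer
is used for serialization (the thread does not say which serializer our
application uses, and the file, element, and attribute names here are made up):

    import java.io.FileOutputStream;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.OutputKeys;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;

    public class SaveDomUtf8 {
        public static void main(String[] args) throws Exception {
            // Build a tiny DOM containing a non-ASCII character.
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            Element root = doc.createElement("resource");
            root.setAttribute("name", "J\u00FCrgen");   // "Jürgen"
            doc.appendChild(root);

            // 1) Tell the serializer to declare and emit UTF-8 ...
            Transformer t = TransformerFactory.newInstance().newTransformer();
            t.setOutputProperty(OutputKeys.ENCODING, "UTF-8");

            // 2) ... and also give the writer wrapping the file stream the
            //    same encoding, so the bytes on disk really are UTF-8.
            try (Writer out = new OutputStreamWriter(
                    new FileOutputStream("example.xml"), "UTF-8")) {
                t.transform(new DOMSource(doc), new StreamResult(out));
            }
        }
    }

When the result is wrapped in a Writer like this, the ENCODING output property
only controls the XML declaration; the Writer controls the actual bytes, which
is why the two settings have to agree.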

= Steve =
 

Richard Tobin

Jürgen Kahrs said:

The file at that URL appears to be well-formed, and contains a
correctly encoded UTF-8 u-with-umlaut. I don't see any problem with it.

Putting a UTF-8 declaration on a file that is really Latin-1 (and which
contains non-ascii characters) will almost always result in a detectable
error because the result will almost always be an illegal UTF-8 byte
sequence. An XML parser should detect the error.
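
A small sketch of that detection (a hypothetical example, not from the thread;
the exact wording of the error depends on the parser): a document declared as
UTF-8 but containing the Latin-1 byte 0xFC is rejected by a standard Java XML
parser.

    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;
    import javax.xml.parsers.DocumentBuilderFactory;

    public class DetectBadUtf8 {
        public static void main(String[] args) throws Exception {
            // Declared as UTF-8, but the u-umlaut is written as the single
            // ISO-8859-1 byte 0xFC -- an illegal UTF-8 sequence.
            byte[] head = "<?xml version=\"1.0\" encoding=\"UTF-8\"?><r>"
                    .getBytes(StandardCharsets.US_ASCII);
            byte[] tail = "</r>".getBytes(StandardCharsets.US_ASCII);
            byte[] doc = new byte[head.length + 1 + tail.length];
            System.arraycopy(head, 0, doc, 0, head.length);
            doc[head.length] = (byte) 0xFC;                    // Latin-1 ü
            System.arraycopy(tail, 0, doc, head.length + 1, tail.length);

            try {
                DocumentBuilderFactory.newInstance().newDocumentBuilder()
                        .parse(new ByteArrayInputStream(doc));
                System.out.println("parsed (unexpected)");
            } catch (Exception e) {
                // Typically reported as something like
                // "Invalid byte 1 of 1-byte UTF-8 sequence."
                System.out.println("rejected: " + e);
            }
        }
    }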

-- Richard
 

Alan J. Flavell

Richard said:
Putting a UTF-8 declaration on a file that is really Latin-1 (and which
contains non-ascii characters) will almost always result in a detectable
error

Indeed...

because the result will almost always be an illegal UTF-8 byte
sequence. An XML parser should detect the error.

In fact, anything which is supposed to handle utf-8 should give up at
that point, if only for security reasons. XML is a higher layer in
the protocol layer-cake: I'm not sure that it really should be allowed
to have any say in these lower-level problems. That way lie dragons,
from a security analysis point of view.
 

Martin Honnen

Jürgen Kahrs wrote:

Now, with your unpleasant experience in mind,
would you say that the following document was
also encoded in an inadequate way ?

http://belnet.dl.sourceforge.net/sourceforge/ganttproject/ganttproject-example3.xml


As I said in my original posting, I am guessing
that the author used an ISO-8859-1 environment
(just like you) but forgot to change the encoding
declaration from UTF-8 to ISO-8859-1.

I have no problems viewing that file with Netscape 7 or IE 6; I don't
see anything displayed incorrectly that would suggest the encoding has
not been declared correctly.
 

Jürgen Kahrs

Richard said:
Putting a UTF-8 declaration on a file that is really Latin-1 (and which
contains non-ascii characters) will almost always result in a detectable
error because the result will almost always be an illegal UTF-8 byte sequence.

I should have looked into the hexdump immediately:

00002250 20 6e 61 6d 65 3d 22 41 6e 64 72 65 61 73 20 50 | name="Andreas P|
00002260 6c c3 bc 73 63 68 6b 65 22 20 66 75 6e 63 74 69 |l..schke" functi|

The UTF-8 byte sequence C3 BC decodes to position 0xFC (U+00FC), as described here:

http://www.pemberley.com/janeinfo/latin1.html#utf8

And 0xFC is indeed the position of ü, as shown on page 2 of this chart:

http://www.unicode.org/charts/PDF/U0080.pdf
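
The arithmetic can be checked with a few lines of Java (a hypothetical sketch,
not part of the original posting):

    import java.nio.charset.StandardCharsets;

    public class UtfBits {
        public static void main(String[] args) {
            // U+00FC is 1111 1100 in binary. That does not fit in a single
            // 7-bit ASCII byte, so UTF-8 spreads it over two bytes of the
            // form 110xxxxx 10xxxxxx:
            //   high bits  11      -> 110 00011 = 0xC3
            //   low 6 bits 111100  -> 10 111100 = 0xBC
            int cp = 0xFC;
            int first  = 0xC0 | (cp >> 6);     // 0xC3
            int second = 0x80 | (cp & 0x3F);   // 0xBC
            System.out.printf("%02X %02X%n", first, second);

            // The library agrees:
            for (byte b : "\u00FC".getBytes(StandardCharsets.UTF_8)) {
                System.out.printf("%02X ", b & 0xFF);
            }
            System.out.println();
        }
    }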

This mixture of byte-level encodings and character sets is a pain
when you only work with it rarely.
An XML parser should detect the error.

The problem was that I did not trust my parser.
I think I should put the Unicode 4.0 book on my bookshelf.

Thanks to all who answered.
 
