wxjmfauth
On Wednesday, 30 October 2013 03:17:21 UTC+1, Chris Angelico wrote:
---------
His idea of bad handling is "oh how terrible, ASCII and BMP have
optimizations". He hates the idea that it could be better in some
areas instead of having even timings across the board. But the FSR
actually has some distinct benefits even in the areas he's citing -
watch this:
0.3582399439035271
The first two examples are his examples done on my computer, so you
can see how all four figures compare. Note how testing for the
presence of a non-Latin1 character in an 8-bit string is very fast.
Same goes for testing for non-BMP character in a 16-bit string. The
difference gets even larger if the string is longer:
2.8308718007456264
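The timeit calls behind these figures are not reproduced above; a rough
sketch of the kind of membership test being described (the strings and
iteration counts here are assumptions, not the original code) would be:

import timeit

# Pure-ASCII haystack: stored 1 byte per character under the FSR (PEP 393).
# The needle is outside Latin-1, so it cannot occur in a 1-byte string and
# the test can answer False without scanning the haystack.
print(timeit.timeit("'\u1234' in a", setup="a = 'asdf' * 10000", number=100000))

# The same early exit one level up: a BMP-only (2-byte) haystack searched
# for an astral (4-byte) character.
print(timeit.timeit("'\U0001F600' in b", setup="b = '\u1234' * 10000", number=100000))

# Control: the needle fits the haystack's representation and is absent,
# so the search really does scan all 40,000 characters.
print(timeit.timeit("'z' in a", setup="a = 'asdf' * 10000", number=100000))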
Wow! The FSR speeds up searches immensely! It's obviously the best
thing since sliced bread!
ChrisA
---------
It is not obvious how to make comparisons with all these
methods and characters (lookup depending on the position
in the table, ...). The only thing that can be done and
observed is the tendency between the subsets the FSR
artificially creates.
One can use the best algorithms to adjust bytes; it is
very hard to escape from the fact that if one manipulates
two strings with different internal representations, it
is necessary to find a way to have a "common internal
coding" prior to any manipulation.
It seems to me that this FSR, with its "negative logic",
is always attempting to "optimize" for the worst
case instead of "optimizing" for the best case.
This effect shows up most clearly on the memory side.
Compare utf-8, which has a memory optimization on
a per-code-point basis, with the FSR, which has an
optimization based on subsets (one of its purposes).
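To make that contrast concrete, a small sketch (an arbitrary example
string; exact byte counts depend on the interpreter build):

import sys

s = "a" * 999 + "\u20ac"   # a single non-Latin-1 character at the end

# utf-8 pays per code point: 999 one-byte characters plus 3 bytes for
# the euro sign.
print(len(s.encode("utf-8")))

# The FSR picks one representation for the whole string, so that single
# character moves every character to 2 bytes, plus a fixed header.
print(sys.getsizeof(s))
print(sys.getsizeof("a" * 1000))   # the all-ASCII version, about half the size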
jmf