bearophileHUGS
Hello, this time I have a question about PIL usage; maybe if Lundh has
some time he can answer me.
I am experimenting with different color quantization algorithms, so
having computed the palette with a clustering function, I use the code
below to quantize the original image and produce an image without
dithering (so I can see the quantization results better).
I have seen that the *standard* distance function I use isn't standard
enough: given the same fixed palette I computed, most graphics programs
give me a different (better) quantized image. I don't know what's
wrong/different in this quantize function; maybe you can tell me.
I'd also like to know if there is simpler code in PIL to do the same
thing (given my palette and a truecolor image), but this is less
important. I know about dither=Image.NONE for the im.convert() method,
but I don't know a good way to use it for this problem. (Note that the
quantize function below uses a perceptual color distance, but for the
quantization done by PIL I can settle for its standard color distance
function.)
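For the "simpler PIL way" question: one approach that may work (a
sketch, not verified on every PIL/Pillow version) is to load the custom
palette into a small "P"-mode image and let quantize() map pixels onto
it. The function name quantize_with_palette and its arguments are my
own, not part of PIL:

```python
from PIL import Image

def quantize_with_palette(im, palette):
    # im: truecolor ("RGB") image; palette: list of (r, g, b) tuples.
    # Pad the palette to 256 entries by repeating the last color, then
    # build a tiny "P" image that carries it.
    flat = [component for color in palette for component in color]
    flat += list(palette[-1]) * (256 - len(palette))
    pal_im = Image.new("P", (1, 1))
    pal_im.putpalette(flat)
    # quantize() with an explicit palette image maps each pixel to the
    # nearest palette entry; newer Pillow also accepts a dither
    # argument (e.g. dither=Image.Dither.NONE) to force no dithering.
    return im.quantize(palette=pal_im)
```

Whether dithering is applied by default here seems to depend on the
library version, so checking the output visually is still needed.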
Thank you,
bearophile
# Input:
#   im = input truecolor image
#   palette = a palette computed by me, of about 32 colors
#   im_out = output image with no dithering

def quantize(data, palette_short):
    out_data = []
    for rgb in data:
        dist_min = 1e100
        closest_col = None
        for col_pos, pal_col in enumerate(palette_short):
            # Standard (squared Euclidean) distance:
            #dr = rgb[0] - pal_col[0]
            #dg = rgb[1] - pal_col[1]
            #db = rgb[2] - pal_col[2]
            #d = dr*dr + dg*dg + db*db
            d = perceptualColorDistance(rgb, pal_col)
            if d < dist_min:
                dist_min = d
                closest_col = col_pos
        out_data.append(closest_col)
    return out_data
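For comparison, here is a vectorized sketch of the same nearest-color
search using the plain squared Euclidean distance (NumPy assumed;
perceptualColorDistance is my own function and is not reproduced here):

```python
import numpy as np

def quantize_euclidean(data, palette_short):
    # data: sequence of (r, g, b) pixels; palette_short: list of (r, g, b).
    pixels = np.asarray(data, dtype=np.int64)        # shape (n, 3)
    pal = np.asarray(palette_short, dtype=np.int64)  # shape (k, 3)
    # Squared distance between every pixel and every palette color: (n, k)
    d2 = ((pixels[:, None, :] - pal[None, :, :]) ** 2).sum(axis=2)
    # Index of the nearest palette color for each pixel
    return d2.argmin(axis=1).tolist()
```

This avoids the Python-level double loop, at the cost of building an
(n, k) distance matrix in memory.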
#..........
import psyco; psyco.bind(quantize)
# Copy of palette, to speed up quantization
palette_short = list(palette)
# Add duplicated colors (the last one) to produce a palette of 256 colors
palette.extend(palette[-1] for i in xrange(256 - len(palette)))
# Create empty paletted output image
im_out = Image.new("P", im.size, 0)
# Flatten the list of colors, for PIL
#flattened_palette = flatten(palette)
flattened_palette = [component for color in palette for component in color]
# Put the computed palette in the output image
im_out.putpalette(flattened_palette)
# Quantize the input image with the computed palette
data = im.getdata()
out_data = quantize(data, palette_short)
# Put the computed data inside the output image
im_out.putdata(out_data)
# Save computed output image
im_out.save(out_filename)