CSS tables - more confused


dorayme

Whoa. Chemicals. I used to do b/w darkroom work, too. I can smell
the stop bath, now. :)


Yes, nice isn't it! You are right, the stop bath is the thing with the distinctive smell...

dorayme
 

Jim Scott

Sorry Jim, I missed your bit about scanning in my last post and just
noticed! So maybe we need to look at your method of scanning. My remarks can
be transposed for the scanning procedure: scan at the highest quality and go
from there in a good photo editor.

If you are not sure about what is not the best about your pics - and as I
said, many are *very nice and acceptable* - perhaps look at Windfarm, around
the props and the pole on the left; look at Anchors2, over the bridge to the
right of the windmill there... sort of watery aberrations, possibly the sign
of over-compression (the poor algorithm does its best!). Yes, I know around
the props it looks almost OK because it might be air disturbance! But it is a
fault I think can be seen in a few pics where there is not this "excuse" :)

dorayme

One of the problems is that photos like Windfarm and the 1993 shipyard
series were all scans from 6.5" x 4.5" prints.
Most of them were worked on with Adobe Photoshop and "Saved for the Web".
 

dorayme

From: Jim Scott said:
One of the problems is that photos like Windfarm and the 1993 shipyard
series were all scans from 6.5" x 4.5" prints.
Most of them were worked on with Adobe Photoshop and "Saved for the Web".


This should not be a problem that results in such marks. A good-quality scan
from a decent pic of that size will result in a good pic that would more
than fill most screens (because the pixels on a screen are very much less
dense than the equivalent dots bunched in a digital print of the same file -
plenty of material to work with). Try not to use commands such as "Save for
the Web" if you can help it, because you mostly don't know what is happening
and therefore lose control and experience. Scan at a high resolution. Open
in PS. In one or other order (I am awaiting feedback on a slight question on
this), reduce the pic to a size you roughly want and compress, choosing a
number from 1 to 10 in JPG levels. Try 5 or 6 to begin with; 4 might be OK. I
suspect the faults I mentioned are due to too much compression (lower numbers
like 0, 1, 2). You can keep compressing a bit (don't go the other way though)
and you can adjust the px dimensions down (again, don't go the other way)
till you get a result you want. You will gain experience and not have to
fiddle too much after a while...
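
Here is a minimal sketch of that reduce-then-compress step, assuming Python
with the Pillow library (9.1+ syntax) rather than Photoshop; the file names
and sizes are made up for illustration:

from PIL import Image

scan = Image.open("shipyard_scan.tif").convert("RGB")   # high-resolution master scan

# Reduce the pixel dimensions to roughly what the page needs, keeping the aspect ratio.
web = scan.copy()
web.thumbnail((560, 560), Image.Resampling.BICUBIC)

# Compress for the web. Pillow's quality scale runs 1-95, unlike Photoshop's
# 0-12 levels; something around 75-85 is a moderate setting.
web.save("shipyard_web.jpg", quality=80)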

Good luck...

dorayme
 

Oli Filth

dorayme said:
When starting with a big px size high quality file of any type, two
operations are usually necessary:

1. Reducing the pixel size

2. Compressing the info using say jpg.

Of course, both operations will result in loss of quality. I was using the
term compression to refer to the operation in 2.

What exactly are you saying more than this?

If you resize a JPEG, what's actually happening (AFAIK) is:

* decompression (JPEG -> bitmap)
* resize
* compression (bitmap -> JPEG)

The JPEG is decompressed to a bitmap before resizing, and then recompressed.

So in resizing *after* converting to JPEG, you've performed two lots of
JPEG compression and one resize. Whereas resizing *before* conversion
means one JPEG compression and one resize.
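
Here is a rough sketch of those two orderings, assuming Python with the
Pillow library (not mentioned in this thread); "master.tif" and the target
size are hypothetical:

from PIL import Image

master = Image.open("master.tif").convert("RGB")   # lossless source
target = (560, 400)

# Order 1: resize first, then a single JPEG compression.
master.resize(target, Image.Resampling.BICUBIC).save("resize_then_jpeg.jpg", quality=80)

# Order 2: compress to JPEG first...
master.save("fullsize.jpg", quality=80)
# ...then resizing that JPEG means a decompress, a resize, and a *second*
# lossy compression when the result is saved again.
resized = Image.open("fullsize.jpg").resize(target, Image.Resampling.BICUBIC)
resized.save("jpeg_then_resize.jpg", quality=80)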
 

Spartanicus

dorayme said:
It is not pointless if you do not know what use you will make of the pic or
if you might want to print later.

You claimed that it would enhance the quality of the web images the OP
is currently publishing (the biggest size he uses is about 560x400).
And it is not pointless if they are to be
given later to someone else to prepare - who would not want better orig pics
to work with?

Anyone with common sense. Handling bigger and/or non-compressed images
significantly increases resource usage; it's slower and needless for most
web images. Bigger doesn't mean better.
I am sure there is a lot of good software besides PS. If it is really true
that there is a lot of cheap software that does this job just as well, then
fine.

Not just cheap, there is plenty of free software that does at least as
good a job at resizing and/or jpeg compression as the $599 Photoshop.
Suck it and see; I would not tend to be too trusting beforehand
though... But I don't think (not quite on your point, I realise) the same
can be said for the resizing (px size, height, width wise) algorithms in
different software. In PS it is very good in quality (using bicubic,
which was slow on old computers but lightning-fast on modern ones).

Bilinear and bicubic resizing is quite fast on my Pentium 2/266Mhz, but
I use sensible source images.
Oli was saying something on this bit; I am not fully with you. Could you
spell out this argument, please?

Assuming photo type images for the web, the final images need jpeg
compression. This compression can only be applied after they've been
resized; if you compress before you resize you lose information twice.
 

dorayme

From: Spartanicus said:
You claimed that it would enhance the quality of the web images the OP
is currently publishing (the biggest size he uses is about 560x400).
-------------------------------------------------------------
With respect, I did not quite do this. It was general and conservative
advice to avoid the pitfalls of going too far the other way and keeping
control of the situation. How do you know what the quality of the camera
used is? How do you know how it all worked or how the scans were made or how
deft the OP is in these matters? Again, it was not pointless advice. It
might have been very conservative.
-------------------------------------------------------------
Anyone with common sense. Handling bigger and/or non-compressed images
significantly increases resource usage; it's slower and needless for most
web images. Bigger doesn't mean better.
----------------------------------------
I did not imply it was certainly better. Is no subtlety allowed? The point
is that it is a safer route. Once taken, a pic cannot always be retaken. It
can be degraded but rarely upgraded. If you know the chain of
responsibilities in these matters you can be more confident in your common
sense. The point is not that bigger is necessarily better; it is that it is
safer. I have cases of this sort of thing quite regularly: if clients had
only taken their pics at higher res I would be able to use *a part* of the
pic for something I need, but as it is, that possibility is closed off.
Beware of too much common sense - it has a history of being wrong... :)
----------------------------------------
Not just cheap, there is plenty of free software that does at least as
good a job at resizing and/or jpeg compression as the $599 Photoshop.


Bilinear and bicubic resizing is quite fast on my Pentium 2/266Mhz, but
I use sensible source images.
---------------------------------------------------------------
Good for you. (I don't like the implication about sensible but what can I
do? I am destined to be hurt badly by you all and I will speak to my shrink
about curbing over-sensitivity...) I sometimes have to resize huge files
designed for printing and it takes almost no time using the bicubic (forget
the other one these days). And this on an uncharacteristically unmodern Mac.
Truth is I do not always have control over what I get, but I sure as hell
prefer big to start with. And when I do have control as when scanning I
start higher than common sense might suggest and work down.
----------------------------------------------------------------
Assuming photo type images for the web, the final images need jpeg
compression. This compression can only be applied after they've been
resized; if you compress before you resize you lose information twice.
_________________________________
Well, I gave some reasoning to show that either way you lose info twice; if
there is something to clear this up, I would be interested. But you just
repeat the claim. Perhaps there is something that is obvious to you that you
can detect and explain that I am missing?
__________________________________
dorayme
 

Spartanicus

I sometimes have to resize huge files
designed for printing and it takes almost no time using the bicubic (forget
the other one these days)

Bicubic and bilinear are both needed, in fact bicubic is poorly suited
to scale downwards. Rule of thumb for photo realistic images: use
bicubic to scale upwards, and bilinear to scale down.
Well, I gave some reasoning to show that either way you lose info twice

Your calculation is flawed, using your example you lose information 3
times (compress to save space, rescale downward, compress for web
publication).
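
If you want to try the rule of thumb yourself, here is a minimal sketch
assuming Python with Pillow; the source file and sizes are hypothetical:

from PIL import Image

src = Image.open("big_scan.tif").convert("RGB")

# Downscale with each filter and compare the results by eye (or by difference).
small = (560, 400)
src.resize(small, Image.Resampling.BILINEAR).save("down_bilinear.png")
src.resize(small, Image.Resampling.BICUBIC).save("down_bicubic.png")

# Upscale with each filter; the rule of thumb predicts bicubic looks better here.
big = (src.width * 2, src.height * 2)
src.resize(big, Image.Resampling.BILINEAR).save("up_bilinear.png")
src.resize(big, Image.Resampling.BICUBIC).save("up_bicubic.png")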
 

dorayme

From: Spartanicus said:
Bicubic and bilinear are both needed, in fact bicubic is poorly suited
to scale downwards. Rule of thumb for photo realistic images: use
bicubic to scale upwards, and bilinear to scale down.

I have always thought it a bad idea to scale up in pixel based images. But
it is an interesting matter. Have you anything to back up your claims? And
your rule of thumb? Arguments from the nature of the algorithms, examples,
references and *explanations* from experts in these matters (please do not
give general links that do not go to the heart of the matter)?

My knowledge is that bicubic in PS gives smoother tonal gradations. I got my
view from official PS publications and my own experiences. In one PS manual,
an Adobe PS User Guide (and a beautifully produced book, if I may add!) to
hand, it describes three interpolation algorithms, "Nearest Neighbour,
Bilinear, and Bicubic", and says these are respectively slower in operation
but better in quality.

Here is a similar quote from the help files attached to many PS programs:

When an image is resampled, Adobe Photoshop uses an interpolation method to
assign color values to any new pixels based on the color values of existing
pixels in the image. The more sophisticated the method, the more quality and
detail from the original image are preserved.

The General Preferences dialog box lets you specify a default interpolation
method to use whenever images are resampled with the Image Size or
transformation commands. The Image Size command also lets you specify an
interpolation method other than the default.

To specify the default interpolation method:
1 Choose File > Preferences > General.
2 For Interpolation, choose an option:
• Nearest Neighbor for the fastest, but least precise, method. This method
can result in jagged effects, which become apparent when distorting or
scaling an image or performing multiple manipulations on a selection.
• Bilinear for a medium-quality method.
• Bicubic for the slowest, but most precise, method, resulting in the
smoothest tonal gradations.

dorayme
 

dorayme

From: Spartanicus said:
Your calculation is flawed, using your example you lose information 3
times (compress to save space, rescale downward, compress for web
publication).


You give no explanation or argument at all. Plus you add an unfair factor
(see after *). The basic issue is information loss in two possible
operations, you have a big image file and try for a reasonable file size and
width/height size. Either you compress first and then resize or the other
way round. In both cases you lose info twice. I did advise someone to do it
in one direction, to which you objected. I admitted I had done it both
ways but that my intuitions were for one. I am not so sure now, it is true.
But I am pretty sure it makes little if any difference. You (and Oli) are
saying the one direction loses info twice and I say it loses info twice
anyway.

I did some tests this morning on a new very sharp and big monitor. I made a
gradient 5000 * 400 psd, flattened and made two tests. There was no
difference that I could detect at all (and not in speed either). The file
sizes and appearances of the results were identical. (I was rather hoping to
back my initial intuition that jpging first was superior... but it was not at
all true!). Given this, I would probably now advise people to do what I often
do because it is very convenient (not for the unsubstantiated claims you
make): resize and then jpg. I still hanker for a test that might make any
difference more apparent. I will look at some 100+ MB pic files I have when
I have more time (Hasselblad negatives professionally scanned at - I am told
- approx $100 US each. Ouch, even though I did not pay for them)...

dorayme

(You are picking up an irrelevancy in a "third" to do with advice about
re-jpging or re-sizing if you are not happy with previous attempts - of
course you lose even more info then, but you would too if you went back to
the original file and made a more forceful two-step to get to what you want.)
 

Spartanicus

dorayme said:
I have always thought it a bad idea to scale up in pixel based images. But
it is an interesting matter. Have you anything to back up your claims? And
your rule of thumb? Arguments from the nature of the algorithms, examples,
references and *explanations* from experts in these matters (please do not
give general links that do not go to the heart of the matter)?

From the help file of my editor: (search the web for confirmation)

In the Resize Type box, select the type of resizing for Paint Shop Pro
to apply. There are four choices:

1) Smart size, where Paint Shop Pro chooses the best algorithm based on
the current image characteristics.

2) Bicubic resample, which uses a process called interpolation to
minimize the raggedness normally associated with expanding an image. As
applied here, interpolation smoothes out rough spots by estimating how
the "missing" pixels should appear, and then filling them with the
appropriate color. It produces better results than the Pixel resize
method with photo-realistic images and with images that are irregular or
complex. Use Bicubic resample when enlarging an image.

3) Bilinear resample, which reduces the size of an image by applying a
similar method as Bicubic resample. Use it when reducing photo-realistic
images and images that are irregular or complex.

4) Pixel Resize, where Paint Shop Pro duplicates or removes pixels as
necessary to achieve the selected width and height of an image. It
produces better results than the resampling methods when used with
hard-edged images.
My knowledge is that bicubic in PS gives smoother tonal gradations. I got my
view from official PS publications and my own experiences. In one PS manual,
an Adobe PS User Guide (and a beautifully produced book, if I may add!) to
hand, it describes three interpolation algorithms, "Nearest Neighbour,
Bilinear, and Bicubic", and says these are respectively slower in operation
but better in quality.

If that's all it says on the topic then bin the book.
 

Spartanicus

dorayme said:
You give no explanation or argument at all.

It's been explained to you twice by two different people that to be resized,
an image needs to be uncompressed. Lose your combative stance; this is a
requirement for you to start learning.
 

Oli Filth

dorayme said:
You give no explanation or argument at all. Plus you add an unfair factor
(see after *). The basic issue is information loss in two possible

No, he's not. I think you're missing the point that for a computer to be
able to resize a JPEG image, that image *must* be decompressed before any
operations can occur. This is all automatic and behind the scenes.
Therefore, overall you're performing *two* JPEG compressions, each one of
which is lossy.
I did some tests this morning on a new very sharp and big monitor. I made a
gradient 5000 * 400 psd, flattened and made two tests. There was no
difference that I could detect at all (and not in speed either). The file
sizes and appearances of the results were identical. (I was rather hoping to
back my initial intuition that jpging first was superior... but it was not at
all true!). Given this, I would probably now advise people to do what I often
do because it is very convenient (not for the unsubstantiated claims you
make): resize and then jpg. I still hanker for a test that might make any
difference more apparent. I will look at some 100+ MB pic files I have when
I have more time (Hasselblad negatives professionally scanned at - I am told
- approx $100 US each. Ouch, even though I did not pay for them)...

Well, a gradient's not the best test of a JPEG algorithm, but still, I
can assure you there's a difference that one can easily measure:

* Assuming JPEG1 is the Resized->Compressed image, and JPEG2 is the
Compressed->Resized image.

* Load both JPEGs into Photoshop

* Copy one and paste it as a 2nd layer on top of the other.

* Right-click the top layer in the Layers panel and select "Blending
Options..." for the top layer, then select "Difference" in the "Blend
Mode" drop-down box (i.e. it subtracts it from the bottom layer).

* Select Layer->Flatten Image to create a single layer.

* Select Image->Adjustments->Curves... to get the contrast adjustment graph.

* Bring the low end right up (set the point to Input=4, Output=255). You
*will* see a difference then.


Obviously, this doesn't prove which is superior. To do this, perform the
difference comparison test above twice, firstly between the resized BMP
and JPEG1, and then between the resized BMP and JPEG2. The result for the second
will be much brighter, proving that this picture is definitively more
inaccurate and therefore has lost more information.

Check out http://olifilth.co.uk/compression/ if you can't be bothered to
do all this yourself! (I assure you that JPEG3 and JPEG4 were created
with the same contrast enhancement settings.)
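
For anyone without Photoshop, roughly the same measurement can be scripted.
Here is a sketch assuming Python with Pillow and NumPy; the file names are
hypothetical and the images are assumed to share the same pixel dimensions:

from PIL import Image, ImageChops
import numpy as np

def difference_map(reference_path, test_path, out_path):
    ref = Image.open(reference_path).convert("RGB")
    test = Image.open(test_path).convert("RGB")
    diff = ImageChops.difference(ref, test)   # per-pixel |a - b|, like the "Difference" blend mode
    arr = np.asarray(diff, dtype=np.float64)
    # Crude stand-in for the Curves step: stretch 0..4 up to 0..255 so small errors become visible.
    Image.fromarray(np.clip(arr * (255.0 / 4.0), 0, 255).astype(np.uint8)).save(out_path)
    return arr.mean()                          # mean absolute error, for a numeric comparison

# Compare both JPEGs against the resized, uncompressed reference.
print(difference_map("resized_reference.bmp", "jpeg1_resized_then_compressed.jpg", "diff1.png"))
print(difference_map("resized_reference.bmp", "jpeg2_compressed_then_resized.jpg", "diff2.png"))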
 

dorayme

From: Spartanicus said:
From the help file of my editor: (search the web for confirmation)

In the Resize Type box, select the type of resizing for Paint Shop Pro
to apply. There are four choices:

1) Smart size, where Paint Shop Pro chooses the best algorithm based on
the current image characteristics.

2) Bicubic resample, which uses a process called interpolation to
minimize the raggedness normally associated with expanding an image. As
applied here, interpolation smoothes out rough spots by estimating how
the "missing" pixels should appear, and then filling them with the
appropriate color. It produces better results than the Pixel resize
method with photo-realistic images and with images that are irregular or
complex. Use Bicubic resample when enlarging an image.

3) Bilinear resample, which reduces the size of an image by applying a
similar method as Bicubic resample. Use it when reducing photo-realistic
images and images that are irregular or complex.

4) Pixel Resize, where Paint Shop Pro duplicates or removes pixels as
necessary to achieve the selected width and height of an image. It
produces better results than the resampling methods when used with
hard-edged images.

Right, you quote your Paint Shop Pro software's help file and I quoted the PS
help files. PS implies something different from PSP. Now we need to find out
if they are using the same algorithms with the same names. Bilinear can
actually use two different techniques, but both are based on taking account
of at most 4 relevant pixels in the source, whereas bicubic takes a more
detailed and plausibly more thorough account of the surrounding 16 pixels.
My experience in using bicubic for reduction in PS bears this out.
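
As an illustrative toy of what "at most 4 relevant pixels" means for
bilinear interpolation, here is a plain-Python sketch (not the actual
Photoshop or PSP code, whose internals are not given here):

def bilinear_sample(pixels, x, y):
    """Sample a grayscale image (a 2D list of values) at fractional coordinates."""
    x0, y0 = int(x), int(y)          # top-left pixel of the 2x2 neighbourhood
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = x - x0, y - y0          # fractional position inside that cell
    top = pixels[y0][x0] * (1 - fx) + pixels[y0][x1] * fx
    bottom = pixels[y1][x0] * (1 - fx) + pixels[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

# Bicubic applies the same idea over a 4x4 (16 pixel) neighbourhood with cubic
# weights, which is where the smoother tonal gradations come from.
print(bilinear_sample([[0, 100], [100, 200]], 0.5, 0.5))   # -> 100.0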

I might do some tests to see when I fire up a big sharp new screen on a
different computer. Perhaps there are more detailed references for PS
elsewhere... In the meantime, let us not assume that all the software has
consistent algorithms or that the help files settle the matter...
If that's all it says on the topic then bin the book.


You worry about chucking your own things out... Look around you... anything
not perfect? Throw it out!

dorayme
 

dorayme

From: Oli Filth said:
No, he's not. I think you're missing the point that for a computer to be
able to resize a JPEG image, that image *must* be decompressed before any
operations can occur. This is all automatic and behind the scenes.
Therefore, overall you're performing *two* JPEG compressions, each one of
which is lossy.

The way you are putting it now, maybe I am getting a glimpse of your meaning
Oli! Sorry to be so slow. Let me think aloud:

The only cases at the core of all this are two, forget all other things. To
jpg an image first and then to resize for width and height or to do it the
other way around instead.

I have thought all along that either way (jpging and then resizing or vice
versa) results in two sets of losses. Tell me one thing: if you resize an
image (a tiff or PSD) are you counting this as a loss of info? I am. So if I
resize first and then I jpg I lose info twice. If I jpg first I lose once,
if I then resize I lose again I am saying. But you are pointing to an
interesting thing here: you are saying that in this last case there is a
third loss. The jpging losses - that's one. The start of the resize incurs a
loss in uncompressing - that's two? And the resize down itself - that's
three. More loss in one road for preparing pics than the other. If this is
right I feel like Salieri to your Mozart. Or are we at cross purposes? I
must think about this idea of decompression loss (never considered it
really... gulp!)

I will come back to you on the rest of your most interesting comments on
practical tests. I am most keen to do the tests myself and certainly "can be
bothered". I am surprised that a gradient is not the best, but I am tired
now; Spartanicus has worn me out by being so horrible to me...

:)

dorayme
 

dorayme

From: Spartanicus said:
It's been explained to you twice by two different people that to be resized,
an image needs to be uncompressed. Lose your combative stance; this is a
requirement for you to start learning.


When was it *explained* twice? Are you counting repeated assertions? And what
is your evidence that it is a requirement of learning simply to accept what
you are told? I have learnt a lot from this group. Sorry if it annoys you
that I do not simply accept what my experience and tests and understanding
tell me in favour of a couple of repeated assertions by you and maybe by one
other. Your idea of how best to learn is positively medieval! True, there
are circumstances where it is right for a student to accept what a known
authoritative teacher recommends but you kid yourself if you suppose this is
one of those situations. The best teachers I know have always been prepared
to back up their recommendations and I have been fortunate indeed to have
known some. I am sorry you see my trying to get to the bottom of this issue
as best as I can as combative. Personally I like students to enquire and
challenge robustly and when I was a student I used to react badly to people
with your attitude. Anyway, if you cannot trust my sincerity in this matter,
or I displease you, do not trouble yourself any further.

dorayme
 

Oli Filth

dorayme said:
I have thought all along that either way (jpging and then resizing or vice
versa) results in two sets of losses. Tell me one thing: if you resize an
image (a tiff or PSD) are you counting this as a loss of info? I am.
Yup.

So if I resize first and then I jpg I lose info twice. If I jpg first I lose
once, if I then resize I lose again I am saying. But you are pointing to an
interesting thing here: you are saying that in this last case there is a
third loss. The jpging losses - that's one. The start of the resize incurs a
loss in uncompressing - that's two? And the resize down itself - that's
three.

That's nearly it. The decompression itself isn't lossy (though some
algorithms might be, due to slight rounding errors); it's the
*re*compression afterwards that is the problem. *Every* time a picture
is recompressed to a JPEG, information can be lost (although if a
picture is repeatedly uncompressed and recompressed there should be an
upper bound to how much information is lost in total).
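
A quick way to watch that upper bound in practice, sketched with Python,
Pillow and NumPy ("photo.tif" and the quality setting are hypothetical):

import io
import numpy as np
from PIL import Image

original = Image.open("photo.tif").convert("RGB")
ref = np.asarray(original, dtype=np.float64)

current = original
for generation in range(1, 11):
    buf = io.BytesIO()
    current.save(buf, format="JPEG", quality=75)   # recompress...
    buf.seek(0)
    current = Image.open(buf).convert("RGB")       # ...and decompress again
    err = np.sqrt(np.mean((np.asarray(current, dtype=np.float64) - ref) ** 2))
    print(generation, round(err, 2))               # the error grows at first, then levels off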
I will come back to you on the rest of your most interesting comments on
practical tests. I am most keen to do the tests myself and certainly "can be
bothered" I am surprised that a gradient is not the best, but I am tired
now, Spartanicus has worn me out by being so horrible to me...

I didn't mean "if you can't be bothered" in a derogatory sense ;)

JPEG is designed specifically for photo-realistic images, which a
continuous linear gradient most certainly isn't. When it comes to
measuring loss quantitatively, the only useful way is a statistical
measure based on a sample set of real photographs. Typically this is
done by taking a photo, compressing it, calculating the difference, and
finding the RMS (root mean square) error and dividing it by the number
of pixels.

With enough photos of different types (e.g. subject matter, colour
depth, light levels) one can come up with the average error per pixel
that a compression algorithm produces.
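
That measurement is easy to script. Here is a sketch assuming Python with
Pillow and NumPy; the file names and the quality setting are hypothetical:

import io
import numpy as np
from PIL import Image

def jpeg_rms_error(path, quality=80):
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)   # compress in memory
    buf.seek(0)
    roundtrip = Image.open(buf).convert("RGB")           # decompress again
    a = np.asarray(original, dtype=np.float64)
    b = np.asarray(roundtrip, dtype=np.float64)
    return np.sqrt(np.mean((a - b) ** 2))                # RMS error per pixel value

# Average over a varied sample set to characterise a codec/setting.
photos = ["windfarm.tif", "anchors2.tif", "shipyard.tif"]
print(sum(jpeg_rms_error(p) for p in photos) / len(photos))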
 

dorayme

From: Spartanicus said:
This experiment demonstrates that the rule of thumb is correct:
http://nickyguides.digital-digest.com/bilinear-vs-bicubic.htm


Am grateful for this reference and will look into it and test the
interesting arguments put and get back to you if I may... Thank you.

Bit surprised (but I do understand) that the discussion has brought in the
element of upsizing, which I almost never do; I always return to my big
source material and work down if I want bigger than I have prepared (and
that is a motivation for my advice to folk to start with as good as possible
- I know you think this a bit neurotic and wasteful, fair enough, but at
least I don't get end-result trouble from the policy). There have been
occasions when these sources were not available; I looked into it and was
very disappointed indeed in all enlargements except the most modest (to make
a set of very closely similar pics on a web page all the same without
cropping). I even downloaded some software that claimed to do this better
than any other technique and though it was admittedly better than standard
image editing software, I reckon it was still lousy! Frankly, enlarging pics
digitally is very much a back-burner issue. But you have raised my
curiosity, as it happens...

dorayme
 

dorayme

From: Oli Filth said:
That's nearly it. The decompression itself isn't lossy (though some
algorithms might be, due to slight rounding errors); it's the
*re*compression afterwards that is the problem. *Every* time a picture
is recompressed to a JPEG, information can be lost (although if a
picture is repeatedly uncompressed and recompressed there should be an
upper bound to how much information is lost in total).

OK, resize-then-jpg counts as 2 losses; we are agreed on this. jpg-then-
resize counts as 3 for you and I am struggling to see more than 2 but am
starting to get your drift! Where are the 3 losses: the jpging is 1, the
resize is 2 and the *saving as jpg* at the resize is 3. It is this last loss
that I have been missing presumably in all this... I need to think about
this "recompress loss" more, the significance of this loss over and above
the resize loss.

I can't do *best* visual tests on the screens in front of me right now but
will later on a very big brand new one I own on another machine. The *quick*
test I have just done on a good sharp regular photo taken with a digital
camera of an outdoor scene results in no difference I can detect but more
importantly for now: no difference in file size. To clarify:

I jpgd at level 5, I resized and then saved as jpg at the same level as my
first jpging. I got a file of 64,667K. I then went back to the original and
resized first, jpgd at level 5 and saved. Again 64,667K. So if information
loss is reflected in file size, this "third loss" is not happening? But I am
not saying here that you have no case. I will look into it further.

I didn't mean "if you can't be bothered" in a derogatory sense ;)

Oli, it never even crossed my mind that you meant it in a derogatory way.
JPEG is designed specifically for photo-realistic images, which a
continuous linear gradient most certainly isn't. When it comes to
measuring loss quantitatively, the only useful way is a statistical
measure based on a sample set of real photographs. Typically this is
done by taking a photo, compressing it, calculating the difference, and
finding the RMS (root mean square) error and dividing it by the number
of pixels.

With enough photos of different types (e.g. subject matter, colour
depth, light levels) one can come up with the average error per pixel
that a compression algorithm produces.

Will think more on this. In what you say there may be the answer to my puzzle
about file size. I did suppose that a gradient was an excellent model for
most photographic pics, but maybe I am wrong. I was thinking that gradients
are just the thing GIFs are bad at - they are bad at photos mostly too - and
I was supposing that it was the feature of gradual tonal changes, which are
at the heart of so much photography, that was being captured by my gradient.
(It was truly a beautiful sight, a whole spectrum 5000 wide on a very nice
screen... try it, it calms the nerves. I had better add I am not suggesting
you are not calm...)

dorayme

(I sent you a direct email of this by mistake; the buttons for posts are too
close together... I must install that newsreader software that someone
suggested a while back and get off OE)
 

Spartanicus

dorayme said:
Bit surprised (but I do understand) that the discussion has brought in the
element of upsizing, which I almost never do; I always return to my big
source material and work down if I want bigger than I have prepared (and
that is a motivation for my advice to folk to start with as good as possible
- I know you think this a bit neurotic and wasteful

I argued against your suggestion to habitually record or scan in much
bigger dimensions than what is expected to be needed for the end image,
thus a downsize scenario.

Oversizing only occasionally makes sense, such as when there is noticeable
sensor noise; downscaling can then noticeably reduce the noise. But even
in such an event, a 2x oversized source should be more than enough.
 
