From: Ofnuts on
On 10/08/2010 09:40, Martin Brown wrote:
> On 07/08/2010 21:42, Ofnuts wrote:
>> I wrote a short utility to check the usual claim that JPEG image quality
>> degrades with successive saves.
>>
>> This utility saves an image multiple times, each time after making a
>> minor and very localized change to it. To rule out any cleverness in
>> "convert" that might minimize losses when re-saving JPEG to JPEG, the
>> image is saved to a lossless format (PNG) and then converted from PNG to
>> JPEG. The resulting image is then compared with the original image
>> (diff0-*) and with the result of the first step (diff1-*); red pixels
>> are the changed pixels.
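>>
>> In outline, each cycle looks like this (a minimal sketch with
>> illustrative file names, not the exact utility):
>>
>>   #!/bin/sh
>>   Q=75                              # JPEG quality under test
>>   convert original.png step-1.png   # step 1 starts from the pristine image
>>   for i in $(seq 1 10); do
>>     # the minor, very localized change: a one-pixel marker that moves
>>     convert step-$i.png -fill red -draw "point $((10 + i)),10" step-$i.png
>>     convert step-$i.png -quality $Q step-$i.jpg    # lossy encode
>>     convert step-$i.jpg step-$((i + 1)).png        # decode for next cycle
>>     composite -compose difference original.png step-$i.jpg diff0-$i.png
>>   done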
>>
>> The utility and the results of some runs are available here:
>>
>> http://dl.free.fr/rjtMETz9h
>>
>> The subdirectories provided are the results of running the utility over
>> the same image with JPEG quality 25, 50, 75, and 90.
>>
>> Now for the interesting part. This dispels some misunderstandings:
>>
>> - In all cases, most of the damage occurs on the 1st save. The
>> subsequent saves show very little difference from the first step, even
>> at very low quality settings. Save steps beyond the third do not add any
>> loss... The JPEG algorithm is "stable": the decoded values
>> eventually get re-encoded the very same way.
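>>
>> (A quick way to verify that re-encoding has stabilized: decode a late
>> step, re-encode it at the same quality, and count differing pixels.
>> A sketch, where the AE metric is that count:
>>
>>   convert step-10.jpg -quality 75 recode.jpg
>>   compare -metric AE step-10.jpg recode.jpg null:
>>
>> Once the process has settled, the reported count is 0.)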
>
> This is basically correct. The coefficients for each 8x8 (or, with
> subsampling, 16x16) block usually converge onto an attractor in 5-10
> cycles, or may bounce between a few closely related versions in a cyclic
> way. Not sure I would be so bold as to say it is stable, but it is
> mostly chaotic around the same stable attractor, giving a series of very
> similar-looking images that may repeat with a short period (0, 1, 2, 3,
> etc.).
>
> Serious damage tends to be mostly caused by the chroma subsampling
> routine, which averages the chroma (Cb, Cr) channels over 2x2 blocks in
> the common 4:2:0 case, and by certain boundary-condition errors in the
> classical reconstruction methods.
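>
> The effect is easy to reproduce (a sketch; the exact spelling accepted
> by -sampling-factor varies between ImageMagick versions):
>
>   convert test.png -quality 90 -sampling-factor 4:4:4 full.jpg
>   convert test.png -quality 90 -sampling-factor 4:2:0 sub.jpg
>   compare -metric AE full.jpg sub.jpg delta.png  # counts differing pixels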
>
>> - The amount of "damage" is very low at reasonable quality settings (75
>> or above). To get an experimental "feel" (the same procedure is
>> scripted just after this list):
>>
>> -- load the original image and the result of any step in photo-editing
>> software that supports layers
>> -- obtain the "difference" between the two layers
>> -- the resulting image looks uniformly black to the naked eye
>> -- use a "threshold" transform and lower the threshold value until
>> recognizable patterns appear (besides the marker dots at top left)
>> -- at 90 quality, using the result of the 10th step, the first white
>> pixel shows up at threshold 20 (an artefact at the lower border, due to
>> the picture height not being a multiple of 8); the first pixel within
>> the image proper appears at threshold 11
>> -- at 75 quality, the difference produces a recognizable ghost of the
>> linnet; the threshold method shows that most differences are below 20
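>>
>> The same procedure in ImageMagick terms (a sketch; the threshold is
>> given here as a percentage of full scale, and 20/255 is about 8%):
>>
>>   composite -compose difference original.png step-10.jpg diff.png
>>   convert diff.png -threshold 8% thresh.png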
>
> A long while ago I did one based on an 8x8 test pattern designed to
> distress the JPEG algorithm. The results are at:
>
> http://www.nezumi.demon.co.uk/photo/jpeg/2/jpeg2.htm
>
> The difference between chroma-subsampled JPEG saves (the default in most
> applications) and full-chroma JPEG is very significant. A lot of
> info is lost in the chroma subsampling and upsampling steps.
>
> The zoomed version doesn't look right in modern browsers that smooth
> upscaled images: they are 8x8-pixel blocks that should have sharp edges.
>>
>> Disclaimers:
>>
>> - Global image changes (white balance, contrast, colors) are a whole
>> different matter, not addressed here (though, IMHO, the problem with JPEG
>> in these operations is more the 8-bit-per-channel limit it puts on the
>> picture, which in turn leads to a comb-like histogram)
>>
>> - The original JPEG uses 1:1:1 sub-sampling (full chroma, i.e. 4:4:4),
>> and so does 'convert' by default.
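>>
>> With ImageMagick the sampling can be forced explicitly and then checked
>> (a sketch; the identify property name may vary between versions):
>>
>>   convert in.png -quality 75 -sampling-factor 4:4:4 out.jpg
>>   identify -format "%[jpeg:sampling-factor]\n" out.jpg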
>
> Full chroma sampling is very much better at preserving image integrity
> than subsampled chroma (but the subsampled files are considerably
> smaller). PSPro 8 manages to do both incorrectly, resulting in patterns
> in the sky (and other artefacts that can be demonstrated on simple
> test cases).
>>
>> -- Unless reproduced by different means, these results only apply when
>> the same software is used throughout.
>
> And only if you use exactly the same quality settings for every save.
>
> I agree though that JPEG is blamed for a lot of things that are not its
> fault. You can encode line-art graphics quite successfully with the
> right choice of Q and full chroma sampling. The algorithm is optimised
> for photographic images but it is not limited to them. PNG is usually
> more compact for line art but not always.
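>
> A quick way to compare the two on a given piece of line art (a sketch):
>
>   convert lineart.png -quality 95 -sampling-factor 4:4:4 lineart.jpg
>   ls -l lineart.png lineart.jpg   # PNG usually wins here, but not always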

PNG is vastly under-used. As a developer, I sometimes get bug reports
about the GUI from pixel-peepers, and the "evidence" is an artefact-laden
JPEG. I have to teach them the beauties of PNG (which, unfortunately,
is still not supported as an image format by some "enterprise" software).


--
Bertrand
From: Ryan McGinnis on

On 8/10/2010 2:40 AM, Martin Brown wrote:

> I did one based on an 8x8 test pattern that is designed to distress the
> JPEG algorithm a long while ago. The results are at:
>
> http://www.nezumi.demon.co.uk/photo/jpeg/2/jpeg2.htm

That is a fascinating page; thanks for sharing.

--
-Ryan McGinnis
The BIG Storm Picture -- http://bigstormpicture.com
Vortex-2 image licensing at http://vortex-2.com
Getty: http://www.gettyimages.com/search/search.aspx?artist=Ryan+McGinnis

From: Paul Furman on
Martin Brown wrote:
> On 09/08/2010 19:35, Paul Furman wrote:
>> Ofnuts wrote:
>>>
>>> - In all cases, most of the damage occurs on the 1st save. The
>>> subsequent saves show very little difference from the first step, even
>>> at very low quality settings. Save steps beyond the third do not add any
>>> loss... The JPEG algorithm is "stable": the decoded values
>>> eventually get re-encoded the very same way.
>>
>> Interesting. Also worth noting: while an image remains open (in
>> Photoshop at least), you can save as often as you like for backup, and
>> it won't compound the 'damage' until you close the file and open it
>> again.
>
> A lot of programs do that: they just rename the in-memory buffer, without
> reloading the image that results from the JPEG encode and decode cycle.
>
> This can be misleading, and I have seen people ruin images by overwriting
> an original with a lower-quality copy, because they did not realise that
> what they saw on the screen did not reflect what was encoded in the file.
> Applications that let you see a zoomable preview of the encoded and
> decoded image, plus a file-size estimate, are better.

I'm not sure what you're saying, but this can be tested in PS just by
using high compression in Save For Web, zoomed in on the preview. I
wasn't implying PS is special; it's just the only one I know this works
for. Lightroom sometimes uses old previews, and other programs...