From: Don on 14 Apr 2006 07:21
On 13 Apr 2006 09:13:56 -0700, "Alan Meyer" <ameyer2(a)yahoo.com> wrote:
>> At 100% magnification (i.e. 1:1) even a JPG image at lowest
>> compression (i.e. highest quality) stands out like a sore thumb when
>> compared to the original.
>I downloaded the two images that Raphael referenced in his reply
>then magnified them 16 times in area (i.e., 4004 pixels per dimension
>instead of the 1001 pixels of the original image.)
>I'm not sure I can tell them apart.
>Then I magnified them almost 100 times i.e., almost 10 times
>as many pixels per dimension. Now, if I looked closely, I could see
>that some individual pixels had different colors. But I still couldn't
>see JPEG artifact squares, and still wasn't sure which image was
>which without looking at the file names.
The point is that's your personal, subjective perception. However,
objectively, there is a marked difference. Just because you yourself
can't see it or find it too minor it doesn't mean it's not there or
that it doesn't influence the workflow.
As I said last time, if you're happy with JPGs (and apparently you
are) that's great! But the problem is that's a subjective impression
which goes contrary to objective facts. But if it satisfies your
requirements, then, of course, that's all that counts.
>> JPG uses 8-bit precision simply because that's all today's monitors
>> can display. However, in not too distant future monitors will expand
>> this dynamic range and then a 16-bit (or higher) dynamic range will
>> become essential.
>Leaving aside the monitors, how much precision can the
>human eye distinguish? I suspect the best eyes can only do
>around 10 bits, though I'm not at all sure about that.
No, it's more like 8 bits; some say as low as 6. *But* (and that's
a big but!) the total dynamic range of the human eye is far larger.
What this means is that even though we only have this 6-8 bit
"window", the window moves through the far greater total dynamic
range our eyes can cover.
For example, when going from a dark area into a bright area you're
temporarily blinded and can't see any detail. But after a while as
your eyes adjust (the 6-8 bit "window" moves to highlights) you start
seeing detail. The same goes the other way around, of course.
So even though the window itself is relatively small, it covers a
large absolute area and can move according to lighting conditions. If
you only use JPGs this total or absolute area is limited to 8 bits and
that's a massive loss of information.
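The idea of a small window sliding over a much larger absolute range can be sketched in a few lines of code. This is purely illustrative, not a model of real vision; the 8-bit window size and the linear clipping are assumptions for the sake of the example:

```python
def perceive(luminance, adaptation_level, window_bits=8):
    """Map an absolute luminance onto a small 'window' of discrete
    levels centred on the current adaptation level.
    Illustrative only -- not a model of human vision."""
    levels = 2 ** window_bits          # e.g. 256 levels in the window
    half_span = levels // 2
    # Offset from the adaptation point, clipped to the window edges.
    offset = luminance - adaptation_level
    offset = max(-half_span, min(half_span - 1, offset))
    return offset + half_span          # 0 .. levels-1

# Stepping from a dark room (adapted near 100) into sunlight (20000):
# detail near 20000 is clipped until the "window" re-centres there.
print(perceive(20000, adaptation_level=100))    # clipped at 255
print(perceive(20000, adaptation_level=20000))  # re-adapted: 128
```

The point of the analogy: an 8-bit file fixes the window once and for all, where the eye keeps moving it.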
>Furthermore, human perception is non-linear. There are some
>ranges in which we are more sensitive than others. I believe
>that some JPEG compression algorithms know that and take
>advantage of it to produce images that are really very close to
>the maximum human perception.
>However I'm not an expert on this. Someone who is should post the
We have different sensitivity to different wavelengths. In practice
this means we see green much better than red or blue. Specifically,
the receptors in our eyes break down roughly as follows: red = 30%,
green = 59%, blue = 11%. So that's the ratio used to calculate
luminance when converting color to grayscale.
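Those 30/59/11 figures are the classic luma weights (ITU-R BT.601 gives them as 0.299/0.587/0.114), and a grayscale conversion is just that weighted sum:

```python
def luma(r, g, b):
    """Weighted sum reflecting the eye's sensitivity: green counts
    far more than red or blue (ITU-R BT.601 coefficients)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

print(round(luma(255, 255, 255)))  # pure white -> 255
print(round(luma(0, 255, 0)))      # pure green -> 150
print(round(luma(255, 0, 0)))      # pure red   -> 76
```

Note how full-intensity green alone lands near mid-gray while full-intensity red stays much darker, which is exactly the asymmetry being described.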
But that's not really the point. It's that data is being lost. First
by going from 16-bit (or 14-bit in the case of some scanners, etc.)
to 8-bit. And then, if that weren't enough, reducing that further by
applying JPG compression. That's simply not suitable for archiving.
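The 16-bit to 8-bit step alone collapses levels irreversibly, before JPG compression even enters the picture. A minimal sketch, counting how many distinct values survive the quantization:

```python
def to_8bit(v16):
    """Quantize a 16-bit sample (0..65535) to 8 bits (0..255) by
    keeping the top 8 bits and discarding the rest."""
    return v16 >> 8

# 256 consecutive 16-bit values all collapse into one 8-bit value:
samples = range(4096, 4096 + 256)
unique_out = {to_8bit(v) for v in samples}
print(len(list(samples)), "->", len(unique_out))  # 256 -> 1
```

Every block of 256 neighbouring tonal values becomes indistinguishable, which is why later curves or levels edits on the 8-bit file can't recover shadow or highlight detail.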
Don't forget also that the JPG compression level, i.e. "quality", is
not standardized! Each program uses its own scale. You mentioned 10
as the maximum; well, the highest quality in Photoshop is 12. Both
numbers are meaningless on their own because there is no common
reference.
>I agree with part of that too. And I agree that people should do
>their own tests and draw their own conclusions.
>I'll go further and say that whether you are losing "massive"
>amounts of data is also a subjective conclusion. There's no
>doubt that a computer will find a significant difference between
>TIFF and good JPEG. But whether that's "massive" from a
>human point of view is not obvious to me.
It is if you look at it in context. Not only is the difference there
but once you bring in the workflow, JPG is not (objectively speaking)
suitable for archiving.
I have no problem (and indeed do it myself!) with JPGs used "for
consumption" i.e. to distribute on DVDs or upload to a web site.
The problem is editing and archiving. Just because you find it
difficult to see the differences, or don't find them objectionable,
doesn't change the fact that the loss of data between a JPG image and
the original is massive.
>Finally, I want to defend my point that good scanning is more important
>than saving TIFFs.
>The quality of the scanner, the decisions made by the scanning
>software, the adjustments for color and contrast, the cleaning of
>the image and the glass plate - all have a bigger effect on final
>results than TIFF vs. good JPEG.
No, as I mentioned last time, that's just factually wrong. All that
work will go to waste if your workflow is based on JPGs and/or on
editing in the scanner software.
For example, the editing in scanning software is very limited and only
contains a small subset of necessary tools. Any editing decisions you
make there will be based on the tiny preview "keyhole" and the 8-bit
histogram. Etc. All of that will lead to (objectively) massive loss
of information. Even more if you do "touch ups" later.
The "proper" workflow is to scan at maximum scanner bit depth and
native resolution without using any of the scanner software editing
features (e.g. curves etc). The only exception is ICE due to the way
it's implemented (for marketing reasons :-/).
Such a scan is known as a "raw scan" and it contains everything the
particular scanner can pull out of the media. *That's* the image which
should be archived! TIFF seems the format of choice but any *lossless*
format will do as long as one does the conversions later if needed.
One then edits this image using an external editor (much better than
the cut-down versions with limited features in scanner software). The
editing should be performed at the original bit depth and resolution.
When done, one may save that image as well for the record and to be
able to go back to it without having to edit all over again.
The last step is to then convert such an image "for consumption". This
may be for printing in which case the resolution and color information
will be reduced to match the printer. Or it may be for viewing in
which case the resolution will be reduced to fit the monitor and then
saved as JPG.
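That "reduce for consumption" step is an ordinary downsample. A minimal sketch using plain Python lists as a stand-in for grayscale image data (in practice you would do this in an image editor or with a library such as Pillow):

```python
def box_downsample(pixels, factor):
    """Reduce a 2D grayscale image by averaging factor x factor
    blocks -- a simple box filter, for illustration only."""
    h, w = len(pixels), len(pixels[0])
    out = []
    for y in range(0, h - factor + 1, factor):
        row = []
        for x in range(0, w - factor + 1, factor):
            block = [pixels[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) // len(block))
        out.append(row)
    return out

# A 4x4 scan reduced 2x for viewing becomes 2x2; the archived
# original keeps the full resolution.
img = [[10, 10, 20, 20],
       [10, 10, 20, 20],
       [30, 30, 40, 40],
       [30, 30, 40, 40]]
print(box_downsample(img, 2))  # [[10, 20], [30, 40]]
```

The key point is that this step is destructive, which is exactly why it must come last, applied to a copy.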
The beauty of this approach is that the original is still available.
Secondly, once the monitor size or bit-depth changes (and they will!)
or a new printer is purchased or the print fades (and it will!) you go
to your original image or the saved edited image and reduce to
accommodate these new requirements.
From: Raphael Bustin on 14 Apr 2006 09:27
On 12 Apr 2006 22:15:37 -0700, "Noons" <wizofoz2k(a)yahoo.com.au> wrote:
>Raphael Bustin wrote:
>> Here's a 4000 dpi film scan snippet as JPG:
>sorry, without knowing how magnified the images
>are it's impossible to do a comparison.
>Still, the jpeg one appears to be slightly more
>"muddy" in the shadows under the roof eaves.
>That's consistent with what I've seen it do
>in similar cases.
Mr Noons, the information is there: the film
scan is at 4000 dpi. If you know the effective
resolution of your monitor, you can calculate
the magnification. In all probability, your
monitor is set to around 75-100 dpi, so the
magnification is no less than 40 x.
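The arithmetic here is just the ratio of scan resolution to display resolution (the 75-100 dpi figure is an assumption about typical monitors of the day):

```python
def on_screen_magnification(scan_dpi, monitor_dpi):
    """At a 1:1 pixel view each scan pixel maps to one screen pixel,
    so the apparent magnification is the ratio of the two."""
    return scan_dpi / monitor_dpi

print(on_screen_magnification(4000, 100))  # 40.0
print(on_screen_magnification(4000, 75))   # ~53.3
```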
From: Raphael Bustin on 14 Apr 2006 09:31
On Thu, 13 Apr 2006 14:33:55 +0200, Don <phoney.email(a)yahoo.com>
>On Wed, 12 Apr 2006 13:50:35 -0400, "Alan Meyer" <ameyer2(a)yahoo.com>
>>I believe that the TIFF vs JPEG issue is a red herring. I defy
>>you to look at an image on screen or on paper from a TIFF
>>vs. one from a JPEG compressed to 1/10 original size, and
>>tell which is which.
>>If you blow up the images to the point where you can see
>>individual pixels, you will see that some of the pixels are
>>different in the two images. But even then, unless you know
>>in advance, you won't be able to tell which was the original,
>That's just patently false!
>At 100% magnification (i.e. 1:1) even a JPG image at lowest
>compression (i.e. highest quality) stands out like a sore thumb when
>compared to the original.
Sorry, that's BS. See my earlier post on this subject.
In a blind test, you could not tell them apart.
Here's a 4000 dpi film scan snippet as JPG:
and here's the same as a TIF:
These are each 1000 x 1000 pixel crops straight off
From: Noons on 15 Apr 2006 03:08
Raphael Bustin wrote:
> Mr Noons, the information is there: the film
> scan is at 4000 dpi.
That means nothing. I don't know how big
or small of a crop this is. Not from the
posted urls, which simply point to an image.
From: Raphael Bustin on 15 Apr 2006 09:48
On 15 Apr 2006 00:08:23 -0700, "Noons" <wizofoz2k(a)yahoo.com.au> wrote:
>Raphael Bustin wrote:
>> Mr Noons, the information is there: the film
>> scan is at 4000 dpi.
>That means nothing. I don't know how big
>or small of a crop this is. Not from the
>posted urls, which simply point to an image.
Child, right-click on the image. Your browser
tells you the size, in pixels. Or suck it into your
image editor and get the info that way.
You whined that you didn't know the magnification.
I showed you that the information was there and
that you were just playing dumb.
Now you whine that you "don't know how big
the crop is." Which is equally false, and equally
The crop in question is 0.25" x 0.25" of a scan of
a slide at 4000 dpi, at the scan resolution.
Read the text on the page that those images
came from. It's all spelled out in detail.
You know how to peel back a URL, don't you,
Mr. Noons? You understand that, given this URL --
there's likely to be more of the same here?
You knew that, right? Or do you enjoy showing
your ignorance at every opportunity?
But deal with the reality of what's there first.
Why does the crop size matter? Why does
the magnification matter? I've showed the
same exact image, saved as TIF and then
as JPG, and asked you to point out the
differences. You can't, so you evade.
scan snippets page: