From: Lorenzo J. Lucchini on
Don wrote:
> On Thu, 29 Sep 2005 18:16:04 +0200, "Lorenzo J. Lucchini"
> <ljlbox(a)tiscali.it> wrote:
>
> [snip]
>
>>Don't be so fixated with my 8 bits.
>
> I'm not. It's simply a very important factor when considering the
> context because it directly affects the result. So to ignore it would
> lead to wrong conclusions.

But I'm not even ignoring it, since at the moment I haven't yet reached
the point of trying to apply my "slanted edge" measurements to a *real*
image.

What's the point of arguing that the "real" image should be 16-bit
instead of 8-bit, when I currently have no way to test a "real" image?
When my program is able to do that, we'll see if a 16-bit image is
needed.

Unless you mean...

>>>I strongly suspect (but don't know for a fact) that you will not be
>>>able to show a demonstrable difference between any custom sharpening
>>>and just applying unsharp mask at 8-bit depth.
>>
>>Well, wait a minute: at any rate, I'll be able to use a *measured* --
>>instead of just *guessed* -- amount (and kind) of sharpening.
>
> But that measurement will be so inaccurate as to be virtually
> meaningless i.e. the margin of error will be so high that it will not
> improve on the guessed amount. Indeed it's quite conceivable that in a
> significant number of cases it may actually produce worse results than
> guessing.

... that I should *scan the slanted edge* at 16 bit -- as you say that
it's the *measurement* that's inaccurate.

But, you see, Imatest's manual doesn't particularly insist on scanning
the edge at 16-bit, and I think I can see the reason: the oversampling
that the slant allows compensates very well for the low bit-depth.

You see, even if I scan the edge at 8-bit, the final edge spread
function I obtain will have a *much higher* bit depth than 8-bit --
that's because I'm scanning the edge transition multiple times (200
times, if the scan is 200 pixels tall).

Wouldn't you think an image would get to have a much higher bit depth
than 8 bit if you multi-scan it 200 times, even if each of the scans is
made at 8-bit? :-)

>>Then I guess you'll be able to obtain the same results by using a "right
>>amount" of unsharp masking... but that's precisely what I want to avoid,
>>guessing values!
>
> Please don't get me wrong. I hate guessing. But if the metrics are
> inadequate then you may end up making matters worse. I mean, you can't
> measure millimeters with a ruler that only has marks for centimeters.

But, you see, the *metrics* are not inadequate (or at least, wouldn't be
if my program wasn't full of bugs).
At worst, it's the (8-bit) scanned *picture* that's inadequate, but in
that case, it can only be as inadequate for "guessing" as it is for
using measured values!

That is, at most you should be able to obtain, by guessing, the *same*
result I obtain with measurement, but no better.
Then, sure, using a 16-bit scan of the picture may well improve the
results of *both* guessing and using measurements.

>>I'm not an experienced photographer, quite the contrary, and I can't
>>really tell how much unsharp masking "looks right".
>
> On a tangent, I personally don't use any sharpening at all. I compared
> sharpened images to the originals (at large magnification) and didn't
> really like what unsharp mask did. But, that's a matter of personal
> taste...

Isn't that because of the haloes perhaps?
Halos are precisely one of the things I wish to avoid with the PSF
method, which should be able to compute optimal sharpening that
*doesn't* cause haloes.

I think the main problem will (still) be noise, as it will be amplified
by the sharpening, and computing the PSF doesn't help much with that.

I think it's going to be a matter of compromise here: how much noise am
I prepared to accept versus how much (or, up to what frequency) do I
want to improve the MTF.
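
In fact that compromise can be written down directly: a Wiener-style
regularized inverse trades the two off with a single constant. A
minimal sketch (just the textbook formula, not necessarily what my
program will end up doing):

    import numpy as np

    def restore_gain(mtf, k):
        # Wiener-style gain: boost each frequency by MTF/(MTF^2 + k).
        # k ~ noise-to-signal power: small k sharpens harder but
        # amplifies noise more; large k gives up on the weakest
        # frequencies.
        mtf = np.asarray(mtf, dtype=float)
        return mtf / (mtf ** 2 + k)

    # Where the MTF is 0.1 a naive inverse would multiply noise by 10;
    # with k = 0.01 the gain is capped at 0.1/(0.01 + 0.01) = 5.
    print(restore_gain(np.array([1.0, 0.5, 0.1]), k=0.01))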

>>>I think you can improve the sharpness considerably more (even at 8-bit
>>>depth) by simply aligning individual channels to each other.
>>
>>That's another possibility to explore, sure. Imatest, SFRWin and my
>>program all give some measurement of color aberrations -- channel
>>misalignment, chiefly.
>
> I was shocked when I discovered this on my film scanner. I expected
> that the channel alignment would be much more accurate.

Well, looking at the MTF graphs, I see that while my scanner does about
1500dpi on the CCD axis, it does something like 1300dpi on the motor axis.

Now, this is the resolution, not the color aberration, but I'm afraid it
gives a measure of color aberration, too. Actually, I think color
aberrations increase *more* than resolution decreases.

> [snip: the ruler test]

I'll try, just for laughs (or cries).
But even after measuring what's going on with a ruler, I'm still afraid
there is very little that can be done.

Possibly, one could scan a ruler next to the film, and use some program
to "reshape" every color channel based on the positions of the ruler
ticks... but, I dunno, I have a feeling this is only going to work in
theory.

>>>If the three RGB channels are not perfectly aligned (and they never
>>>are!) then combining them in any way will introduce a level of
>>>inaccuracy (fuzziness). In case of luminance that inaccuracy will also
>>>have a green bias, while the average will be more even - which is why
>>>I said that your original idea to use average seems like the "lesser
>>>evil" when compared to the skewed and green-biased luminance values.
>>
>>At this point, I think I have a better idea: let's *first* measure the
>>amount of misalignment, and then average the channels to luminance
>>*after* re-aligning them.
>
> That's what I'm saying (individual channel measurements) only use a
> straight average when you combine the results. Luminance will simply
> introduce a large green bias at the expense of red, for the most part.

More at the expense of blue, I think.
But, besides this, what you said before was that either average or
luminance will not be right when the channels are misaligned.

What I'm saying is, well, this will be no issue, as the channels will be
re-aligned *before* taking either average or luminance.
"Will", because this is not currently done in my program -- don't know
about Imatest.
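
When it is done, the measurement itself should be simple enough;
something like this sketch (NumPy, hypothetical names): take each
channel's edge position as the centroid of its LSF and compare.

    import numpy as np

    def edge_center(esf):
        # The edge position is the centroid of the line spread
        # function, i.e. of the ESF's derivative; this gives a
        # sub-pixel location.
        lsf = np.abs(np.gradient(np.asarray(esf, dtype=float)))
        return (np.arange(lsf.size) * lsf).sum() / lsf.sum()

    # Channel misalignment, in (oversampled) ESF samples:
    #   offset_r = edge_center(esf_r) - edge_center(esf_g)
    #   offset_b = edge_center(esf_b) - edge_center(esf_g)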

> [snip]
>
>>>>So you see that I'm *already* doing measurements that are inherently
>>>>"perceptual". So why not be coherent and keep this in mind throughout
>>>>the process?
>>>
>>>Because perception is subjective. When there is no other way, then
>>>yes, use perception. But since you already have the values of those
>>>gray pixels it just seems much more accurate to use those values.
>>
>>I'm not sure. Yes, I have the real values, but... my program tries to
>>answer the question "how much resolution can my scanner get from the
>>original?". The answer itself depends on the observer's eye, as a person
>>might be able to see constrasts less than 10%, and another might only
>>see up to 15%, say.
>
> No, the answer is not based on any one individual person. The answer
> is quite clear if you read out the gray values. Whether one person can
> see those grays and the other can't doesn't really matter in the
> context of objective measurements.

But then why were 50% and (especially) 10% chosen as standard?
Because of some physical reason? No, because they make perceptual sense:
10% is the boundary where the average human eye stops seeing contrast.

Sure, if you read the *whole* MTF instead of just MTF50 and MTF10, you
don't have to choose an observer-dependent frequency; but, like it or
not, MTF50 and MTF10 are standards. Sometimes it makes sense to follow
standards.
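
And reading the standard points off the curve is trivial anyway, so
nothing stops a program from reporting both the whole MTF and the two
numbers. Roughly (a hypothetical sketch, not my actual code):

    import numpy as np

    def mtf_frequency(freqs, mtf, level):
        # First frequency at which the MTF drops to `level` (0.5 for
        # MTF50, 0.1 for MTF10), by linear interpolation.
        mtf = np.asarray(mtf, dtype=float)
        idx = np.nonzero(mtf <= level)[0]
        if idx.size == 0:
            return freqs[-1]      # never reaches the level in range
        i = idx[0]
        if i == 0:
            return freqs[0]
        f0, f1, m0, m1 = freqs[i - 1], freqs[i], mtf[i - 1], mtf[i]
        return f0 + (m0 - level) * (f1 - f0) / (m0 - m1)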

>>So, since an average observer *must* be "invented" anyway for the
>>results to have any practical meaning, then it makes sense to also
>>adjust them so that the colors the average eye sees best count more, as
>>they *will* affect perceived resolution.
>
> If you want to test human perception that's a completely different
> test. However, that doesn't change the *objective* resolution of the
> scanner.
>
> I mean take number of colors on a monitor. You can objectively
> determine how many colors your monitor can display. And then you can
> also determine how many of those colors an average person can
> distinguish. Those are two totally different test, using different
> metrics.

But you can express the number of colors a monitor can display with
*one* number.

You can't express a scanner's "objective resolution" with one number;
you must print an MTF graph.
But what if you want a measure in ppi (which many people want)? Then you
have to *choose a point* on the MTF, and the choice will be based on
human perception: "your scanner can do 1500ppi [at 10% contrast, because
that's what you will see at best]".
It follows that such a figure should be a *perceptually weighted*
average of the three channels' values.
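
The arithmetic behind such a figure is simple enough; for example
(made-up MTF10 values, the usual Rec. 601 luminance weights -- only a
sketch):

    # Hypothetical per-channel MTF10 values, in cycles/pixel of a
    # 2400dpi scan, combined with Rec. 601 luminance weights:
    mtf10   = {'r': 0.28, 'g': 0.33, 'b': 0.25}    # made-up numbers
    weights = {'r': 0.299, 'g': 0.587, 'b': 0.114}
    cycles_per_pixel = sum(weights[c] * mtf10[c] for c in 'rgb')
    # One cycle needs two samples, so:
    ppi = 2 * cycles_per_pixel * 2400
    print(round(ppi))    # -> 1468, i.e. "about 1500ppi"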

On the other hand, I can see some merit in using a simple average when
presenting someone with the *whole* MTF graph.

But still, for the purpose of sharpening, Bart says color aberrations
will occur if average is used instead of luminance. We'll see how that
works out (hopefully, if I can complete the program).

>>And "perceived resolution" is the only viable metric: in fact, if I
>>wanted to measure "real" resolution, I would have to say that my scanner
>>really does resolve 2400 dpi (which it doesn't), as just before Nyquist,
>>there is (for example) a 0.00001 response.
>>Hey, that's still resolution, isn't it! But it's resolution that counts
>>nothing, as no observer will be able to see it, and sharpening won't
>>help because noise will overwhelm everything else.
>
> In that case, it appears you're not really interested in your
> scanner's actual resolution but your perception of that resolution.
> And that's a different test.

Maybe, but it seems that Imatest is interested in the same, and Imatest
doesn't look like it was written by idiots who didn't know what they
were testing.

Anyway, let's end it here: my program will default to luminance, but
will offer an option of using average, and it will always also show the
three channels separately anyway.

The beauty of choice!


by LjL
ljlbox(a)tiscali.it
From: Don on
On Fri, 30 Sep 2005 18:36:39 +0200, "Lorenzo J. Lucchini"
<ljlbox(a)tiscali.it> wrote:

>Unless you mean...

Yes, that's what I mean! ;o)

>... that I should *scan the slanted edge* at 16 bit -- as you say that
>it's the *measurement* that's inaccurate.

Bingo!

>But, you see, Imatest's manual doesn't particularly insist on scanning
>the edge at 16-bit, and I think I can see the reason: the oversampling
>that the slant allows compensates very well for the low bit-depth.

I don't know what the Imatest manual says, but I suspect, being a
generic type of test, they compensate in advance just in case people do
use less than optimal bit depths.

But that doesn't mean one should use lower bit depths if one has more.

>Wouldn't you think an image would get to have a much higher bit depth
>than 8 bit if you multi-scan it 200 times, even if each of the scans is
>made at 8-bit? :-)

Not really, especially not for flatbeds because of the stepper motor
inaccuracies. You will just blur the image more. Not to mention it
will take "forever" to acquire all those 100s of samples. And all
along you have a ready and (compared to 200 * 8-bit scans) a quick
solution i.e. 16-bit!

The point I'm making is why try to "fix" 8-bit when there is 16-bit
readily available? Now, if your scanner did *not* have 16-bit then,
yes, trying to get 8-bit as accurate as possible makes sense.

But having said that, in my life I've done even sillier things (much,
*much*, sillier things!) simply because they were fun. And if that's
the goal, then just ignore everything I say and have fun! :o)

>But, you see, the *metrics* are not inadequate (or at least, wouldn't be
>if my program wasn't full of bugs).
>At worst, it's the (8-bit) scanned *picture* that's inadequate, but in
>that case, it can only be as inadequate for "guessing" as it is for
>using measured values!

Yes, an 8-bit image is inadequate, which results in inadequate metrics.

>That is, at most you should be able to obtain, by guessing, the *same*
>result I obtain with measurement, but no better.
>Then, sure, using a 16-bit scan of the picture may well improve the
>results of *both* guessing and using measurements.

It will not improve guessing because we only have 8-bit eyes (some say
even only 6-bit) so you will not even be able to perceive or see the
extra color gradation available in 16-bit. But *mathematics* will!

>> On a tangent, I personally don't use any sharpening at all. I compared
>> sharpened images to the originals (at large magnification) and didn't
>> really like what unsharp mask did. But, that's a matter of personal
>> taste...
>
>Isn't that because of the haloes perhaps?

Halos are just one thing, but I don't like the concept behind it, that
edges are changed in such a drastic way.

The problem is the image is *not* made sharper but the contrast of
transition between dark and light is simply increased because humans
perceive high contrast as sharpness. It's an optical illusion, really.

That's what bothered me, conceptually. The image was degraded in order
to generate an optical illusion and that just doesn't make sense to
me. It's like anti-aliasing (which I *HATE*!) and which *pretends* to
"remove" jaggies by blurring everything!!! To me, that's madness! But
that's just me... ;o)

>Halos are precisely one of the things I wish to avoid with the PSF
>method, which should be able to compute optimal sharpening that
>*doesn't* cause haloes.

By definition (because of increased contrast) it will cause haloes.
Whether you see them or not is another matter. If you zoom in and
compare to the original you will see them.

>I think the main problem will (still) be noise, as it will be amplified
>by the sharpening, and computing the PSF doesn't help much with that.
>
>I think it's going to be a matter of compromise here: how much noise am
>I prepared to accept versus how much (or, up to what frequency) do I
>want to improve the MTF.

Yes, that's exactly what it boils down to! You have to balance one
against the other. Which means, back to "guessing" what looks better.

>> [snip: the ruler test]
>
>I'll try, just for laughs (or cries).
>But even after measuring what's going on with a ruler, I'm still afraid
>there is very little that can be done.
>
>Possibly, one could scan a ruler next to the film, and use some program
>to "reshape" every color channel based on the positions of the ruler
>ticks... but, I dunno, I have a feeling this is only going to work in
>theory.

It will work in practice too if you have guides along both axes. The
trouble is that's very clumsy and time consuming.

If you do decide to do that I would create a movable frame so you
have guides on all 4 sides. That's because the whole assembly wiggles
as it travels so the distortion may not be the same at opposite edges.

Also, the misalignment is not uniform but changes because the stepper
motor sometimes goes faster and sometimes slower! So you will not be
able to just change the height/width of the image and have perfect
reproduction. You'll actually have to transform the image. Which means
superimposing a grid... Which means figuring out the size of that grid
i.e. determine the variance of stepper motor speed change... Argh!!

Of course, the key question is, is it worth it? In my case, in the
end, I decided it wasn't. But it still bugs me! ;o)

>> That's what I'm saying (individual channel measurements) only use a
>> straight average when you combine the results. Luminance will simply
>> introduce a large green bias at the expense of red, for the most part.
>
>More at the expense of blue, I think.
>But, besides this, what you said before was that either average or
>luminance will not be right when the channels are misaligned.
>
>What I'm saying is, well, this will be no issue, as the channels will be
>re-aligned *before* taking either average or luminance.

I would *not* align them because that would change the values!!! And
those changes are bound to be much more than what you're measuring!

In principle, you should never do anything to the data coming from the
scanner if the goal is to perform measurements. That's why even gamma
is not applied but only linear data is used for calculations.

I really think the best way is to simply do each channel separately
and then see what the results are. In theory, they should be pretty
equal. If you want a single number I would then just average those
three results.

> > [snip]
> >
>> No, the answer is not based on any one individual person. The answer
>> is quite clear if you read out the gray values. Whether one person can
>> see those grays and the other can't doesn't really matter in the
>> context of objective measurements.
>
>But then why were 50% and (especially) 10% chosen as standard?
>Because of some physical reason? No, because they make perceptual sense:
>10% is the boundary where the average human eye stops seeing contrast.

No, there is a physical reason why those luminance percentages were
chosen. It's to do with how our eyes are built and the sensors for
individual colors.

I mean, if you're measuring how fast a car is going, are you going to
change the scale because of how you perceive speed or are you going to
ignore your subjective perception and just measure the speed?

This is the same thing. You measure the amounts of gray the scanner
can't resolve. How you perceive this gray is totally irrelevant to
the measurement.

Now, if you want to measure *your perception*, that's a different
thing altogether. But before you measure something as subjective as
individual perception, you still need objective measurements as a
starting point and a baseline.

>But still, for the purpose of sharpening, Bart says color aberrations
>will occur if average is used instead of luminance. We'll see how that
>works out (hopefully, if I can complete the program).

Bart has a tendency to be literal and just repeats what he reads
elsewhere without paying enough attention to context. Mind you, it's
good information with good links and I often save his messages because
of that, but it's really just repeating stuff seen somewhere else.

I mean, in this case Bart may very well be right, but I'd ask Kennedy
because Kennedy thinks laterally and actually analyzes what the
implications are in the full context.

>Maybe, but it seems that Imatest is interested in the same, and Imatest
>doesn't look like it was written by idiots who didn't know what they
>were testing.

They may be testing a different thing, though. BTW, why don't you just
use Imatest?

Anyway, as I said at the outset, I'm just kibitzing here and threw in
that luminance note because it seemed contradictory to the task.

But instead of wasting time on replying carry on with programming! :o)

Don.
From: Lorenzo J. Lucchini on
Don wrote:
> On Fri, 30 Sep 2005 18:36:39 +0200, "Lorenzo J. Lucchini"
> <ljlbox(a)tiscali.it> wrote:
>
> [snip]
>
>>Wouldn't you think an image would get to have a much higher bit depth
>>than 8 bit if you multi-scan it 200 times, even if each of the scans is
>>made at 8-bit? :-)
>
> Not really, especially not for flatbeds because of the stepper motor
> inaccuracies. You will just blur the image more. Not to mention it
> will take "forever" to acquire all those 100s of samples. And all
> along you have a ready and (compared to 200 * 8-bit scans) a quick
> solution i.e. 16-bit!

Ok, you're probably right here for "real" images.
But this doesn't apply to the slanted edge: you aren't *really* taking
200 scans, it's just that every scan line "counts as a sampling pass" in
reconstructing the ESF.

The "16-bit quick solution" don't change much for scanning a slanted
edge, as you have to do the oversampling anyway.

It might be that scanning the edge at 16 bit still gives better results
than scanning it at 8 bit. Let's find out...

No, let's not find out: SFRWin doesn't accept 16 bit edges.
(Which might be a clue that they're not necessary, anyway)

Don't know about Imatest.

By the way, I see that the levels (or perhaps the gamma) are different
if I scan at 16-bit and if I scan at 8-bit, with otherwise the same
settings. Actually, the 16-bit scan clips. Wonderful, another bug in my
fine scanner driver!

> The point I'm making is why try to "fix" 8-bit when there is 16-bit
> readily available? Now, if your scanner did *not* have 16-bit then,
> yes, trying to get 8-bit as accurate as possible makes sense.
>
> But having said that, in my life I've done even sillier things (much,
> *much*, sillier things!) simply because they were fun. And if that's
> the goal, then just ignore everything I say and have fun! :o)

But, no, one goal is to make an alternative to Imatest (its SFR
function, at least) and SFRWin, and the other goal is to reconstruct a
PSF to sharpen images.

The goal is not to get 16-bit from 8-bit... aren't you just getting
confused with other threads or parts of this thread?

Yes, currently I'm scanning things at 8-bit. Yes, I'm scanning my
slanted edges at 8-bit, too.
But my program works in floating point; it can load both 8-bit and 16-bit
edge images (though the code for loading 16-bit PPM isn't tested right
now); it's just that I'm using it with 8-bit images right now.

It's not functional enough to make a difference at the moment, in any case!

>>But, you see, the *metrics* are not inadequate (or at least, wouldn't be
>>if my program wasn't full of bugs).
>>At worst, it's the (8-bit) scanned *picture* that's inadequate, but in
>>that case, it can only be as inadequate for "guessing" as it is for
>>using measured values!
>
> Yes, an 8-bit image is inadequate, which results in inadequate metrics.

Let's agree on terms. I took the "metrics" as meaning the slanted edge
test results, and the "image" is just the image, that is the picture to
be sharpened (or whatever).

>>That is, at most you should be able to obtain, by guessing, the *same*
>>result I obtain with measurement, but no better.
>>Then, sure, using a 16-bit scan of the picture may well improve the
>>results of *both* guessing and using measurements.
>
> It will not improve guessing because we only have 8-bit eyes (some say
> even only 6-bit) so you will not even be able to perceive or see the
> extra color gradation available in 16-bit. But *mathematics* will!

No, I was saying that *with an 8-bit image* your "guess" couldn't be
better than my measurements -- the best you can achieve is to make it
*as good as* the measurements. Visible or not to the eye, there are
only 8 bpc in the image.

And then, if our eyes really have the equivalent of 6 bits, you wouldn't
be able to guess as well as you measure even with a poor 8-bit image!

> [snip]
>
>>Halos are precisely one of the things I wish to avoid with the PSF
>>method, which should be able to compute optimal sharpening that
>>*doesn't* cause haloes.
>
> By definition (because of increased contrast) it will cause haloes.
> Whether you see them or not is another matter. If you zoom in and
> compare to the original you will see them.

No wait -- my understanding is that, by definition, "optimal sharpening"
is the highest amount you can apply *without* causing haloes.
Perhaps unsharp mask in particular always causes them, I don't know, but
there isn't only unsharp mask around.

Haloes show quite clearly on the ESF graph, and I assure you that I
*can* apply some amount of sharpening that doesn't cause "hills" in the
ESF graph.
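
That check could even be automated: just look for overshoot past the
ESF's plateaus. A rough sketch (hypothetical names again, not what my
program currently does):

    import numpy as np

    def halo_overshoot(esf):
        # Halo "hills" = the sharpened ESF overshooting its plateaus.
        # Estimate the plateaus from the outermost samples, then
        # measure any excursion beyond them, as a fraction of the step.
        e = np.asarray(esf, dtype=float)
        dark, light = e[:8].mean(), e[-8:].mean()
        lo, hi = min(dark, light), max(dark, light)
        over = max(e.max() - hi, lo - e.min(), 0.0)
        return over / (hi - lo)

    # halo_overshoot(esf) == 0 -> no hills; > 0 -> haloes, whether
    # you can see them or not.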

>>I think the main problem will (still) be noise, as it will be amplified
>>by the sharpening, and computing the PSF doesn't help much with that.
>>
>>I think it's going to be a matter of compromise here: how much noise am
>>I prepared to accept versus how much (or, up to what frequency) do I
>>want to improve the MTF.
>
> Yes, that's exactly what it boils down to! You have to balance one
> against the other. Which means, back to "guessing" what looks better.

As far as noise is concerned, yes, this is mostly true.
But noise and haloes are two separate issues!

Anyway, what would seem a reasonable "balance" to me is this: apply the
best sharpening you can that does not cause noise to go higher than the
noise a non-staggered-CCD-array would have.

This is what I'd call the "right" sharpening for Epson scanners: make it
as sharp as the linear CCD scanners, making noise go no higher than a
linear CCD scanner's (since you know a linear CCD of the same size as my
staggered CCD has more noise in general, as the sensors are smaller).
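
As a sketch (nothing my program does yet), that compromise might look
like the loop below: push the unsharp amount up until a crude noise
estimate hits the budget. The budget itself and a proper noise
estimator are of course the hard parts; everything here is a stand-in.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def sharpen_to_noise_budget(img, noise_budget, radius=1.0, step=0.1):
        # img: float array. Raise the unsharp-mask amount until the
        # high-frequency residual (a crude stand-in for a real noise
        # estimate) would exceed the budget.
        amount = 0.0
        while amount < 5.0:
            trial = img + (amount + step) * (img - gaussian_filter(img, radius))
            if np.std(trial - gaussian_filter(trial, 3.0)) > noise_budget:
                break
            amount += step
        return img + amount * (img - gaussian_filter(img, radius))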

>>>[snip: the ruler test]
>>
>>I'll try, just for laughs (or cries).
>>But even after measuring what's going on with a ruler, I'm still afraid
>>there is very little that can be done.
>>
>>Possibly, one could scan a ruler next to the film, and use some program
>>to "reshape" every color channel based on the positions of the ruler
>>ticks... but, I dunno, I have a feeling this is only going to work in
>>theory.
>
> It will work in practice too if you have guides along both axes. The
> trouble is that's very clumsy and time consuming.
>
> If you do decide to do that I would create a movable frame so you
> have guides on all 4 sides. That's because the whole assembly wiggles
> as it travels so the distortion may not be the same at opposite edges.
>
> Also, the misalignment is not uniform but changes because the stepper
> motor sometimes goes faster and sometimes slower! So you will not be
> able to just change the height/width of the image and have perfect
> reproduction. You'll actually have to transform the image. Which means
> superimposing a grid... Which means figuring out the size of that grid
> i.e. determine the variance of stepper motor speed change... Argh!!

Yes, this is precisely what I meant with "is only going to work in
theory". Remember also that I was talking in the context of color
aberrations, which means the process would have to be repeated *three
times* separately for each channel!

It'd take ages of processing time... and, also, what kind of
super-ruler should we get? Any common ruler just isn't going to be good
enough: the ticks will be too thick, non-uniformly spaced and unsharp;
and the transparent plastic the ruler is made of will, itself, cause
color aberrations.

> Of course, the key question is, is it worth it? In my case, in the
> end, I decided it wasn't. But it still bugs me! ;o)

I know. By the way, changing the topic slightly, what about two-pass
scanning and rotating the slide/film 90 degrees between the two passes?
I mean, we know the stepper motor axis has worse resolution than the CCD
axis. So, perhaps multi-pass scanning would work best if we let the CCD
axis get a horizontal *and* a vertical view of the image.

Of course, you'd still need to sub-pixel align and all that hassle, but
perhaps the results could be better than the "usual" multi-pass scanning.
Clearly, there is a disadvantage in that you'd have to physically rotate
your slides or film between passes...

>>>That's what I'm saying (individual channel measurements) only use a
>>>straight average when you combine the results. Luminance will simply
>>>introduce a large green bias at the expense of red, for the most part.
>>
>>More at the expense of blue, I think.
>>But, besides this, what you said before was that either average or
>>luminance will not be right when the channels are misaligned.
>>
>>What I'm saying is, well, this will be no issue, as the channels will be
>>re-aligned *before* taking either average or luminance.
>
> I would *not* align them because that would change the values!!! And
> those changes are bound to be much more than what you're measuring!

Hm? I don't follow you. When you have got the ESF, you just *have* your
values. You can then move them around to your heart's content, and you
won't lose anything. Which implies that you can easily move the three
ESFs so that they're all aligned (i.e. the "edge center" is found in the
same place), before taking any kind of average.

> In principle, you should never do anything to the data coming from the
> scanner if the goal is to perform measurements. That's why even gamma
> is not applied but only linear data is used for calculations.

Yes, and I'm not doing anything to the data *coming from the scanner*;
just to the ESF, which is a high-precision, floating point function that
I've calculated *from* the scanner data.
It's not made of pixels: it's made of x's and y's, in double precision
floating point. I assure you that I'm already doing so much more
(necessary) evil to these functions, that shifting them around a bit
isn't going to lose anything.
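
Concretely, the shift is nothing more than evaluating each ESF at
offset x's. A sketch, assuming uniformly resampled ESFs and a
per-channel edge-center estimate (hypothetical names):

    import numpy as np

    def align_and_average(esfs, centers):
        # Evaluate each channel's ESF at x + (its center - reference
        # center) so all edge centers coincide, then average sample
        # by sample.
        n = min(len(e) for e in esfs)
        x = np.arange(n, dtype=float)
        ref = centers[0]
        shifted = [np.interp(x + (c - ref), x, np.asarray(e[:n], dtype=float))
                   for e, c in zip(esfs, centers)]
        return np.mean(shifted, axis=0)

    # esf_gray = align_and_average([esf_r, esf_g, esf_b],
    #                              [center_r, center_g, center_b])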

> I really think the best way is to simply do each channel separately
> and then see what the results are. In theory, they should be pretty
> equal. If you want a single number I would then just average those
> three results.

Yes, in theory. In practice, my red channel has a visibly worse MTF than
the green channel, for one.

>>>[snip]
>>>
>>>No, the answer is not based on any one individual person. The answer
>>>is quite clear if you read out the gray values. Whether one person can
>>>see those grays and the other can't doesn't really matter in the
>>>context of objective measurements.
>>
>>But then why were 50% and (especially) 10% chosen as standard?
>>Because of some physical reason? No, because they make perceptual sense:
>>10% is the boundary where the average human eye stops seeing contrast.
>
> No, there is a physical reason why those luminance percentages were
> chosen. It's to do with how our eyes are built and the sensors for
> individual colors.

Did you just say "with how our eyes are built"? Now that's perceptual!
Ok, not necessarily perceptual in the sense that it has to do with our
brain, but it has to do with the observer.

MTF10 is chosen *because the average observer can't see less than 10%
contrast* (because of how his eyes are built, or whatever; it's still
the observer, not the data).

> I mean, if you're measuring how fast a car is going, are you going to
> change the scale because of how you perceive speed or are you going to
> ignore your subjective perception and just measure the speed?

Hmm, we definitely base units and scales of measurements on our
perception. We don't use light-years, we use kilometers; some weird
peoples even use "standardized" parts of their bodies (like my "average
observer", you see), such as feet, inches and funny stuff like that ;-P

Sure, we do use light-years now when we measure things that are
*outside* our normal perception.

> This is the same thing. You measure the amounts of gray the scanner
> can't resolve. How you perceive this gray is totally irrelevant to
> the measurement.

But the problem is that there is *no* amount of gray the scanner can't
resolve! It can resolve everything up to half Nyquist. I mean *my*
scanner. It's just that it resolves frequencies near Nyquist with such a
low contrast that they're hardly distinguishable.

Where do you draw the line? Just how uniformly gray must your test
pattern be before you say "ok, this is the point after which my
scanner has no useful resolution"?

I don't see a choice other than the perceptual choice. Which also has
the advantage of being a fairly standard choice.

> [snip]
>
> They may be testing a different thing, though. BTW, why don't you just
> use Imatest?

Because they're asking money for it :-) I've had my trial runs, finished
them up, and I'm now left with SFRWin and no intention to buy Imatest
(not that it's a bad program, it's just that I don't buy much of
anything in general).

I'm not sure I would call myself a "free software advocate", but I
definitely do like free software. And certainly the fact that my program
might be useful to other people gives me more motivation to write it,
than if it were only useful to myself.
Not necessarily altruism, mind you, just seeing a lot of downloads of a
program I've written would probably make me feel like a star :-) hey, we're
human.

> Anyway, as I said at the outset, I'm just kibitzing here and threw in
> that luminance note because it seemed contradictory to the task.
>
> But instead of wasting time on replying carry on with programming! :o)

I can't keep programming all day anyway! -- well, I did that
yesterday, and the result was one of those headaches you don't quickly
forget.

Anyway, have you tried out ALE yet? I don't think it can re-align
*single* rows or columns in an image, but it does perform a lot of
geometry transformations while trying to align images. And it works with
16 bit images and all, which you were looking for, weren't you? It's
just so terribly slow.


by LjL
ljlbox(a)tiscali.it
From: john rehn on
Hi !

I did not see this thread before I started my own.
But may I ask you: what SFR do you actually get from your
scanners?

regards

john rehn

From: Bart van der Wolf on

"john rehn" <john.e.rehn(a)gmail.com> wrote in message
news:1128251538.948673.68060(a)g49g2000cwa.googlegroups.com...
> Hi !
>
> I did not see this thread before I started my own.
> But may I ask you: what SFR do you actually get from your
> scanners?

http://www.xs4all.nl/~bvdwolf/main/foto/Imatest/SFR_DSE5400_GD.png

Bart
