From: Lorenzo J. Lucchini on
Don wrote:
> On Wed, 28 Sep 2005 15:28:36 +0200, "Lorenzo J. Lucchini"
> <ljlbox(a)tiscali.it> wrote:
>
>
>>>I've only been skimming this thread and not paying too much attention
>>>because the MTF of my scanner is what it is and there's nothing I can
>>>do about it... So, with that in mind...
>>
>>Well, but I don't plan to stop here. What I'd mostly like to obtain from
>>all this is a way to "sharpen" my scans using a function that is
>>tailored to my scanner's specifics (rather than just "unsharp mask as I
>>see fit").
>>
>>So, you see, there is a practical application of the measurements I can
>>obtain, it's not just about knowing how poor the scanner's resolution is.
>
> I agree with that in principle. However, in practical terms I think
> that starting with an 8-bit image there's only so much accuracy you
> can achieve.

Don't be so fixated with my 8 bits. I'll start working with 16 bits,
possibly after buying a USB 2 card, if need be.

> I strongly suspect (but don't know for a fact) that you will not be
> able to show a demonstrable difference between any custom sharpening
> and just applying unsharp mask at 8-bit depth.

Well, wait a minute: at any rate, I'll be able to use a *measured* --
instead of just *guessed* -- amount (and kind) of sharpening.

Then I guess you'll be able to obtain the same results by using a "right
amount" of unsharp masking... but that's precisely what I want to avoid,
guessing values!
I'm not an experienced photographer, quite the contrary, and I can't
really tell how much unsharp masking "looks right".
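
To be concrete about what I mean by a "measured" amount of sharpening,
here is a rough sketch (the function name and the cap value are made
up, and real MTF correction would need more care with noise): invert
the measured MTF in the frequency domain, capping the boost so noise
isn't amplified without bound.

/* Sketch: turn a measured MTF into a frequency-domain sharpening
   gain. mtf[] holds the measured response at nbins frequencies,
   gain[] receives the amplification to apply at each frequency,
   and maxgain caps the boost to keep noise under control. */
void mtf_to_gain(const double *mtf, double *gain, int nbins,
                 double maxgain)
{
    for (int i = 0; i < nbins; i++) {
        if (mtf[i] > 1.0 / maxgain)
            gain[i] = 1.0 / mtf[i];  /* plain inverse filter */
        else
            gain[i] = maxgain;       /* cap where response is weak */
    }
}

Multiplying the scan's spectrum by these gains (or building the
equivalent convolution kernel) would then be the "tailored" sharpening.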

> I think you can improve the sharpness considerably more (even at 8-bit
> depth) by simply aligning individual channels to each other.

That's another possibility to explore, sure. Imatest, SFRWin and my
program all give some measurement of color aberrations -- channel
misalignment, chiefly.

Certainly, those values could find some use; however, I don't have much
hope: in fact, unsurprisingly, my scanner seems to have very little
color misalignment in the CCD direction, but it does show quite a bit of
misalignment in the motor direction!
But, I thought, that must be because the motor's steps are not very
regular and precise. If that's the cause, then it'll be impossible to
try and re-align the colors, as misalignment will change at every scan line.

Sort of like what you've found out with multi-pass scan alignment...

>>And why do you say I'm measuring the "objective values" of the pixels
>>instead of their "perceptual values"? I'm mostly trying to measure
>>resolution, in the form of the MTF.
>
> Because it's all based on those gray pixels which are created because
> the scanner can't resolve that border area. So it's much better to
> read the actual values of those gray pixels rather than take either an
> average or luminance value.
>
> If the three RGB channels are not perfectly aligned (and they never
> are!) then combining them in any way will introduce a level of
> inaccuracy (fuzziness). In case of luminance that inaccuracy will also
> have a green bias, while the average will be more even - which is why
> I said that your original idea to use average seems like the "lesser
> evil" when compared to the skewed and green-biased luminance values.

At this point, I think I have a better idea: let's *first* measure the
amount of misalignment, and then average the channels to luminance
*after* re-aligning them.
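
In code, roughly this (a sketch only: it assumes the per-channel
offsets have already been measured, and it shifts by whole pixels,
while the real misalignment is probably subpixel and would need
interpolation):

/* Sketch: average three channels to gray after shifting red and
   blue by their measured offsets relative to green (whole pixels
   only; subpixel misalignment would need interpolation). */
void align_and_average(const unsigned char *r, const unsigned char *g,
                       const unsigned char *b, double *gray,
                       int width, int roff, int boff)
{
    for (int x = 0; x < width; x++) {
        int xr = x + roff;
        int xb = x + boff;
        /* clamp at the row ends */
        if (xr < 0) xr = 0; else if (xr >= width) xr = width - 1;
        if (xb < 0) xb = 0; else if (xb >= width) xb = width - 1;
        gray[x] = (r[xr] + g[x] + b[xb]) / 3.0;
    }
}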

Of course, I must first be sure the way I currently measure misalignment
is correct, as SFRWin gives different results.
But that's (by far) not the only thing that's currently wrong in my
program...

>>So you see that I'm *already* doing measurements that are inherently
>>"perceptual". So why not be coherent and keep this in mind throughout
>>the process?
>
> Because perception is subjective. When there is no other way, then
> yes, use perception. But since you already have the values of those
> gray pixels it just seems much more accurate to use those values.

I'm not sure. Yes, I have the real values, but... my program tries to
answer the question "how much resolution can my scanner get from the
original?". The answer itself depends on the observer's eye, as a person
might be able to see constrasts less than 10%, and another might only
see up to 15%, say.

So, since an average observer *must* be "invented" anyway for the
results to have any practical meaning, it makes sense to also adjust
them so that the colors the average eye sees best count more, as they
*will* affect perceived resolution.
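
Concretely, the difference between the two combinations is only in the
channel weights (the luminance coefficients below are the Rec. 601
ones; whether they are the "right" weights here is exactly the open
question):

/* Plain average vs. luminance weighting of the three channels.
   The 0.299/0.587/0.114 weights are the Rec. 601 luma
   coefficients; note how strongly they favor green over red
   and blue. */
double channel_average(double r, double g, double b)
{
    return (r + g + b) / 3.0;
}

double channel_luminance(double r, double g, double b)
{
    return 0.299 * r + 0.587 * g + 0.114 * b;
}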

And "perceived resolution" is the only viable metric: in fact, if I
wanted to measure "real" resolution, I would have to say that my scanner
really does resolve 2400 dpi (which it doesn't), as just before Nyquist,
there is (for example) a 0.00001 response.
Hey, that's still resolution, isn't it! But it's resolution that counts
for nothing, as no observer will be able to see it, and sharpening won't
help because noise will overwhelm everything else.
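
So what my program should report as "the" resolution is something like
the frequency where the MTF first drops below a visibility threshold;
10% ("MTF10") is the conventional choice. A sketch, with the simplest
possible interpolation:

/* Sketch: find the frequency at which the measured MTF first
   drops below a visibility threshold (e.g. 0.10 for "MTF10").
   freq[] and mtf[] are parallel arrays of nbins samples in
   order of increasing frequency. Returns the crossing point,
   linearly interpolated, or -1.0 if the MTF never crosses. */
double resolution_limit(const double *freq, const double *mtf,
                        int nbins, double threshold)
{
    for (int i = 1; i < nbins; i++) {
        if (mtf[i] < threshold && mtf[i - 1] >= threshold) {
            double t = (threshold - mtf[i - 1])
                     / (mtf[i] - mtf[i - 1]);
            return freq[i - 1] + t * (freq[i] - freq[i - 1]);
        }
    }
    return -1.0;
}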

>>>Actually, what I would do is measure each channel *separately*!
>>
>>... I'm doing this already.
>>The "gray" channel is measured *in addition* to the other three
>>channels, and is merely a convenience.
>
> That's good. So displaying individual results should be easy.

Not should, is. The table I output has the fields "Frequency", "Red
response", "Green response", "Blue response", "Luminance response".

If the user wants to take advantage of the luminance results, they can,
but they can just as well ignore them.


by LjL
ljlbox(a)tiscali.it
From: Lorenzo J. Lucchini on
stevenj(a)alum.mit.edu wrote:
> Lorenzo J. Lucchini wrote:
>
>>>Depending on the library implementation, for complex numbers, Abs[z]
>>>gives the modulus |z| .
>>
>>No, there isn't such a function in FFTW.
>
> FFTW is not a complex-number library. You can easily take the absolute
> value of its complex numbers yourself by sqrt(re*re + im*im), or you can
> use the standard C complex math library (or the C++ library via a
> typecast).

I'd like to remain within plain C; and the complex math library is,
AFAIK, part of C99, so it might not be available on all compilers yet.
(Yeah, I know, I'm using // comments, but *those* are in every stupid
compiler ;-)

Currently I'm doing precisely the computation you describe (well, I'm
using pow(), actually); it's just that Bart says this is not a robust
method with "extreme" values. But it should be OK for now, in any case.

>>However, there are functions to
>>directly obtain a real-to-real FFT; I probably should look at them,
>>although I'm not sure if the real data they output are the moduli or
>>simply the real parts of the transform's output.
>
> Neither. The real-to-real interface is primarily for transforms of
> real-even or real-odd data (i.e. DCTs and DSTs), which can be expressed
> via purely real outputs. They are also for transforms of real data
> where the outputs are packed into a real array of the same length, but
> the outputs in this case are still complex numbers (just stored in a
> different format).

I see, thanks. That wasn't very clear from the FFTW manual, at least to
someone who's meeting Fourier transforms for the first time in his life.
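
For the record, this is how I now understand the "halfcomplex" packing
would be unpacked into moduli, if I've finally read the manual right (a
sketch, untested):

#include <math.h>
#include <fftw3.h>

/* Sketch: transform n real samples with FFTW's real-to-real
   interface and unpack the FFTW_R2HC "halfcomplex" output into
   moduli. The layout is: out[0] = DC, then out[k] and out[n-k]
   hold the real and imaginary parts of bin k. */
void real_fft_magnitudes(double *in, double *mag, int n)
{
    double *out = fftw_malloc(sizeof(double) * n);
    fftw_plan p = fftw_plan_r2r_1d(n, in, out, FFTW_R2HC,
                                   FFTW_ESTIMATE);
    fftw_execute(p);

    mag[0] = fabs(out[0]);              /* DC bin, purely real */
    for (int k = 1; k < (n + 1) / 2; k++)
        mag[k] = hypot(out[k], out[n - k]);
    if (n % 2 == 0)
        mag[n / 2] = fabs(out[n / 2]);  /* Nyquist bin, n even */

    fftw_destroy_plan(p);
    fftw_free(out);
}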


by LjL
ljlbox(a)tiscali.it
From: Bart van der Wolf on

"Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message
news:POG_e.2576$133.720(a)tornado.fastwebnet.it...
SNIP
These two, cygwin1.dll and cygfftw3-3.dll, are sufficient.

SNIP
> It's there. As you might suspect, at
> http://ljl.741.com/slantededge-alpha3.tar.gz

I'll give the "slantededge.exe" a try. I'm just trying to get it to
recognize my options, but no results are saved (only some statistics
are reported on screen). For example "--csv-esf testedge.ppm" is an
"unrecognized" option.

SNIP
> Should these 10% and 90% positions be fixed as if the image were
> normalized?

Yes, between the 10% and 90% response of the normalized ESF data
(assuming gamma 1.0, which I would take for granted anyway, since all
data should be linearized).

Bart

From: Lorenzo J. Lucchini on
Bart van der Wolf wrote:
>
> "Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message
> news:POG_e.2576$133.720(a)tornado.fastwebnet.it...
> SNIP
> These two, cygwin1.dll and cygfftw3-3.dll, are sufficient.
>
> SNIP
>
>> It's there. As you might suspect, at
>> http://ljl.741.com/slantededge-alpha3.tar.gz
>
>
> I'll give the "slantededge.exe" a try. I'm just trying to get it to
> recognize my options, but no results are saved (only some statistics are
> reported on screen). For example "--csv-esf testedge.ppm" is an
> "unrecognized" option.

Wait a moment: you must specify a filename to save the ESF to.

Try with this command line:

slantededge.exe --verbose --csv-esf esf.txt --csv-lsf lsf.txt
    --csv-mtf mtf.txt testedge.ppm

It works for me.

> SNIP
>
>> Should these 10% and 90% positions be fixed as if the image were
>> normalized?
>
> Yes, between the 10% and 90% response of the normalized ESF data
> (assuming gamma 1.0, which I would take for granted anyway, since all
> data should be linearized).

Fine, thanks. I'll fix it.
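
(For my own reference, the fix should amount to something like this
sketch: normalize the ESF between its end plateaus, then find the 10%
and 90% crossings by linear interpolation. A rising edge, and sensible
plateaus at the ends, are assumed.)

/* Sketch: 10%-90% rise distance of an ESF. esf[] holds n samples
   of a rising edge; the first and last samples are taken as the
   low and high plateaus for normalization. The result is in
   units of the sample spacing, or -1.0 on failure. */
double rise_10_90(const double *esf, int n)
{
    double lo = esf[0], hi = esf[n - 1];
    double x10 = -1.0, x90 = -1.0;

    for (int i = 1; i < n; i++) {
        double a = (esf[i - 1] - lo) / (hi - lo);  /* normalized */
        double b = (esf[i] - lo) / (hi - lo);
        if (x10 < 0 && a < 0.1 && b >= 0.1)
            x10 = (i - 1) + (0.1 - a) / (b - a);
        if (x90 < 0 && a < 0.9 && b >= 0.9)
            x90 = (i - 1) + (0.9 - a) / (b - a);
    }
    return (x10 >= 0 && x90 >= 0) ? x90 - x10 : -1.0;
}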

by LjL
ljlbox(a)tiscali.it
From: Don on
On Thu, 29 Sep 2005 18:16:04 +0200, "Lorenzo J. Lucchini"
<ljlbox(a)tiscali.it> wrote:

>>>Well, but I don't plan to stop here. What I'd mostly like to obtain from
>>>all this is a way to "sharpen" my scans using a function that is
>>>tailored to my scanner's specifics (rather than just "unsharp mask as I
>>>see fit").
....
>>
>> I agree with that in principle. However, in practical terms I think
>> that starting with an 8-bit image there's only so much accuracy you
>> can achieve.
>
>Don't be so fixated with my 8 bits.

I'm not. It's simply a very important factor when considering the
context because it directly affects the result. So to ignore it would
lead to wrong conclusions.

>> I strongly suspect (but don't know for a fact) that you will not be
>> able to show a demonstrable difference between any custom sharpening
>> and just applying unsharp mask at 8-bit depth.
>
>Well, wait a minute: at any rate, I'll be able to use a *measured* --
>instead of just *guessed* -- amount (and kind) of sharpening.

But that measurement will be so inaccurate as to be virtually
meaningless, i.e. the margin of error will be so high that it will not
improve on the guessed amount. Indeed, it's quite conceivable that in a
significant number of cases it may actually produce worse results than
guessing.

>Then I guess you'll be able to obtain the same results by using a "right
>amount" of unsharp masking... but that's precisely what I want to avoid,
>guessing values!

Please don't get me wrong. I hate guessing. But if the metrics are
inadequate then you may end up making matters worse. I mean, you can't
measure millimeters with a ruler that only has marks for centimeters.

>I'm not an experienced photographer, quite the contrary, and I can't
>really tell how much unsharp masking "looks right".

On a tangent, I personally don't use any sharpening at all. I compared
sharpened images to the originals (at large magnification) and didn't
really like what unsharp mask did. But, that's a matter of personal
taste...

>> I think you can improve the sharpness considerably more (even at 8-bit
>> depth) by simply aligning individual channels to each other.
>
>That's another possibility to explore, sure. Imatest, SFRWin and my
>program all give some measurement of color aberrations -- channel
>misalignment, chiefly.

I was shocked when I discovered this on my film scanner. I expected
that the channel alignment would be much more accurate.

>Certainly, those values could find some use; however, I don't have much
>hope: in fact, unsurprisingly, my scanner seems to have very little
>color misalignment in the CCD direction, but it does show quite a bit of
>misalignment in the motor direction!

Yes, that's a very big problem with all flatbeds! The stepper motor is
also very irregular as the belts slip, etc.

>But, I thought, that must be because the motor's steps are not very
>regular and precise. If that's the cause, then it'll be impossible to
>try and re-align the colors, as misalignment will change at every scan line.
>
>Sort of like what you've found out with multi-pass scan alignment...

Here's a little test to try: Take a ruler and scan it twice at the
optical resolution of your scanner: once horizontally, and once
vertically.
resolution of your scanner: once horizontally, and once vertically.
Rotate one of them and compare! It makes a grown man cry! ;o)

Now, if that weren't bad enough, do several vertical scans (along the
stepper motor axis) and compare! And then really start to cry! ;o)

It sure made me pull my hair out! I noticed this when I was scanning some
square images and was surprised that their vertical vs. horizontal
dimensions in Photoshop were not the same. I then rotated the photo,
scanned again, and now it was stretched in the other direction!
Aaaaarrrrggghhh!

That's when I invented "the ruler test". My scans on the stepper motor
axis are about 1 mm shorter than scans on the CCD axis, over a 10 cm
strip.

>> If the three RGB channels are not perfectly aligned (and they never
>> are!) then combining them in any way will introduce a level of
>> inaccuracy (fuzziness). In case of luminance that inaccuracy will also
>> have a green bias, while the average will be more even - which is why
>> I said that your original idea to use average seems like the "lesser
>> evil" when compared to the skewed and green-biased luminance values.
>
>At this point, I think I have a better idea: let's *first* measure the
>amount of misalignment, and then average the channels to luminance
>*after* re-aligning them.

That's what I'm saying (individual channel measurements); only use a
straight average when you combine the results. Luminance will simply
introduce a large green bias at the expense of red, for the most part.

Luminance really has no part in this context because it only skews the
results by favoring the green channel and neglecting the red.

>Of course, I must first be sure the way I currently measure misalignment
>is correct, as SFRWin gives different results.
>But that's (by far) not the only thing that's currently wrong in my
>program...

That's why we like programming! ;o) It's solving those problems!

>>>So you see that I'm *already* doing measurements that are inherently
>>>"perceptual". So why not be coherent and keep this in mind throughout
>>>the process?
>>
>> Because perception is subjective. When there is no other way, then
>> yes, use perception. But since you already have the values of those
>> gray pixels it just seems much more accurate to use those values.
>
>I'm not sure. Yes, I have the real values, but... my program tries to
>answer the question "how much resolution can my scanner get from the
>original?". The answer itself depends on the observer's eye, as a person
>might be able to see constrasts less than 10%, and another might only
>see up to 15%, say.

No, the answer is not based on any one individual person. The answer
is quite clear if you read out the gray values. Whether one person can
see those grays and another can't doesn't really matter in the
context of objective measurements.

>So, since an average observer *must* be "invented" anyway for the
>results to have any practical meaning, it makes sense to also adjust
>them so that the colors the average eye sees best count more, as they
>*will* affect perceived resolution.

If you want to test human perception that's a completely different
test. However, that doesn't change the *objective* resolution of the
scanner.

I mean, take the number of colors on a monitor. You can objectively
determine how many colors your monitor can display. And then you can
also determine how many of those colors an average person can
distinguish. Those are two totally different tests, using different
metrics.

>And "perceived resolution" is the only viable metric: in fact, if I
>wanted to measure "real" resolution, I would have to say that my scanner
>really does resolve 2400 dpi (which it doesn't), as just before Nyquist,
>there is (for example) a 0.00001 response.
>Hey, that's still resolution, isn't it! But it's resolution that counts
>for nothing, as no observer will be able to see it, and sharpening won't
>help because noise will overwhelm everything else.

In that case, it appears you're not really interested in your
scanner's actual resolution, but in your perception of that resolution.
And that's a different test.

>>>>Actually, what I would do is measure each channel *separately*!
>>>
>>>... I'm doing this already.
>>>The "gray" channel is measured *in addition* to the other three
>>>channels, and is merely a convenience.
>>
>> That's good. So displaying individual results should be easy.
>
>Not should, is.

It's a figure of speech.

Don.