From: Bart van der Wolf

"Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message
news:aVw_e.1452$133.727(a)tornado.fastwebnet.it...
SNIP
> I'm not sure. Besides Bart, I think I've read somewhere else that
> luminance should be used in this process. Perhaps it's even in the
> ISO recommendations.

The ISO standard allows testing whatever one wants, single channels
or weighted combinations of more than one, whatever suits the
purpose, but a formal test of a device should include R+G+B
measurements. However, to quote the standard: "If desired, a
luminance resolution measurement may be made on a luminance signal
formed from an appropriate combination of the colour records".
Since the purpose is sharpening (which should be done in the
Luminance channel if you want to avoid colored artifacts), it only
makes sense to use a weighting that simulates the luminance
sensitivity of the human eye.

Imatest also calculates a Y channel for luminance (and that was not
an uninformed choice), as it is the most significant channel for the
sensation we call 'sharpness'. To the human eye, color resolution is
much lower than luminance resolution.

> And why do you say I'm measuring the "objective values" of the
> pixels instead of their "perceptual values"? I'm mostly trying to
> measure resolution, in the form of the MTF. People usually cite the
> MTF50 and the MTF10, because these are points where it *makes
> perceptual sense* to measure: MTF10 is about the point where the
> human eye cannot discern contrast anymore, Bart said.

You don't have to take my word, but this is what the ISO says:
"A very good correlation between limiting visual resolution and the
spatial frequency associated with a 0,10 SFR response has been found
experimentally. Should this frequency exceed the half-sampling
frequency, the limiting visual resolution shall be the spatial
frequency associated with the half-sampling frequency".
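
(As a rough illustration of that rule, a minimal C sketch; the
function name and the linear interpolation are my own assumptions,
not taken from any of the programs discussed:)

/* Limiting visual resolution per the ISO rule quoted above:
   the frequency where the SFR falls to 0.10, capped at the
   half-sampling (Nyquist) frequency. 'freq' and 'sfr' are
   parallel arrays of n samples, sfr normalized to 1.0 at DC. */
static double limiting_resolution(const double *freq, const double *sfr,
                                  int n, double nyquist)
{
    int i;
    for (i = 1; i < n; i++) {
        if (sfr[i] <= 0.10) {
            /* linear interpolation between the bracketing samples */
            double denom = sfr[i - 1] - sfr[i];
            double t = (denom > 0.0) ? (sfr[i - 1] - 0.10) / denom : 1.0;
            double f = freq[i - 1] + t * (freq[i] - freq[i - 1]);
            return (f > nyquist) ? nyquist : f;
        }
    }
    return nyquist; /* SFR never dropped to 0.10 in the measured range */
}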

SNIP
> In any case, it makes sense to conform to what other programs of
> this kind do, so that the results can be easily compared.

There are many different opinions on what the mix should be.
If you want to exactly match Imatest, you could use
Y=0.3*R+0.59*G+0.11*B
(http://www.imatest.com/docs/sfr_instructions2.html almost
halfway down the page under Channel).

Other researchers use L=0.299R+0.587G+0.114B .
And Luminance weighting according to ITU-R BT.709 is:
Y=0.2125*R+0.7154*G+0.0721*B
which comes close(r) to:
<http://hyperphysics.phy-astr.gsu.edu/hbase/vision/efficacy.html#c1>
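
(A minimal C sketch of such a selectable weighting, with the
coefficients quoted above as examples; the function name is just an
illustration:)

/* Weighted luminance from linear R, G, B samples. Pass whichever
   coefficients are wanted: 0.3/0.59/0.11 to match Imatest,
   0.2125/0.7154/0.0721 for ITU-R BT.709, and so on. */
static double luminance(double r, double g, double b,
                        double wr, double wg, double wb)
{
    return wr * r + wg * g + wb * b;
}

/* e.g. double y = luminance(r, g, b, 0.2125, 0.7154, 0.0721); */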

Whatever the choice (ultimately, user selectable would be best for
flexibility, but it makes comparisons more hazardous), I think human
perception should carry some weight when the goal is to optimize
sharpening.

Bart

From: Lorenzo J. Lucchini
Bart van der Wolf wrote:
>
> "Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message
> news:aVw_e.1452$133.727(a)tornado.fastwebnet.it...
>
> [snip]
>
> There are many different opinions on what the mix should be.
> If you want to exactly match Imatest, you could use
> Y=0.3*R+0.59*G+0.11*B
> (http://www.imatest.com/docs/sfr_instructions2.html almost
> halfway down the page under Channel).
>
> Other researchers use L=0.299R+0.587G+0.114B .
> And Luminance weighting according to ITU-R BT.709 is:
> Y=0.2125*R+0.7154*G+0.0721*B
> which comes close(r) to:
> <http://hyperphysics.phy-astr.gsu.edu/hbase/vision/efficacy.html#c1>
>
> Whatever the choice (ultimately, user selectable would be best for
> flexibility, but it makes comparisons more hazardous), I think human
> perception should carry some weight when the goal is to optimize
> sharpening.

I think I'll go for user selectable, with a default that's recommended
for comparing others' results.

But all this made me wonder about something else: would it make any
sense to compare the edge *position* of each (red, green and blue)
channel with the edge position in the luminance channel?

I mean: SFRWin gives "red", "blue" and "green" color offsets (for
measuring "color fringing"), but the "green" offset is always zero,
as the other two channels are compared to green.

Would comparing the three channels to luminance, instead, have any
advantage over SFRWin's approach? I don't remember what Imatest does here.


by LjL
ljlbox(a)tiscali.it
From: Bart van der Wolf

"Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message
news:L3l_e.674$133.670(a)tornado.fastwebnet.it...
> Bart van der Wolf wrote:
SNIP
>> The image code values should be linearized at this stage,
>> so film/sensor non-linearity and gamma adjustments
>> can't influence the calculations.
>
> Yes, I am currently ignoring gamma, as my test images
> are gamma=1.0 anyway.

For real accurate results, that remains to be verified ...
Testing a (transparent) step wedge may/will reveal 'interesting'
features of hardware *and* of scanner driver software.

> If I'm not mistaken, though, this all boils down to a
> "Pixel=InputPixel^Gamma" instead of just "Pixel=InputPixel", so
> it'll be very easy to add.

Yes, for straight Gamma only (sRGB profiled images use a
'slope-limited' Gamma). Also beware that Gamma adjustment
may mean 1/Gamma, depending on what is being adjusted where.
This does assume that Gamma is the only non-linearity.
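
(A minimal sketch of both cases in C: plain gamma as discussed, plus
the standard piecewise sRGB inverse for profiled images; watch the
1/Gamma direction:)

#include <math.h>

/* Undo a straight gamma adjustment: linear = encoded^gamma,
   with 'encoded' in 0.0..1.0. For gamma-2.2 material pass 2.2;
   if a source quotes the encoding exponent (e.g. 0.4545),
   that is 1/gamma and must be inverted first. */
static double linearize_gamma(double encoded, double gamma)
{
    return pow(encoded, gamma);
}

/* sRGB uses a slope-limited curve, so the inverse is piecewise: */
static double srgb_to_linear(double c)
{
    return (c <= 0.04045) ? c / 12.92
                          : pow((c + 0.055) / 1.055, 2.4);
}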

SNIP
>> I think that, especially on non-linear image codes, this will
>> influence the MTF results, because the contrast is expanded.
>> On a perfectly symmetrical brightness distribution its effect
>> will be small, but the possibility of clipping in later stages
>> should be avoided.
>
> I'm not sure I understand why it can affect the MTF, but I'll take
> your word for it.

Assume all it takes is a lowering of the highlight clipping point,
which essentially is the same as multiplying all luminance levels by a
fixed factor. That would work out differently for shadows/highlights
if the response was non-linear.

SNIP
>> Also a check for at least 20% edge modulation should be made, in
>> order to avoid a too low input S/N ratio.
>
> I'm taking note, but I think I'll leave such checks for later when
> the program is somewhat stable.

Obviously, I know how actual programming works (tackle the large
issues first, and get a working alpha version before working on the
'icing on the cake'), but just don't forget some obvious boundary
checks in the end.

>> It is however perfectly normal to normalize the ESF output to a
>> range between 0.0 and 1.0, and later to normalize the SFR/MTF to
>> 1.0 (100%) at zero spatial frequency.
>
> Currently, I'm normalizing the ESF, the LSF and the MTF to between
> 0.0 and 1.0.

Just note that actual MTFs can exceed 1.0, assuming correct
normalization to 1.0 at zero cycles. Edge sharpening halo can achieve
that easily.
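
(In other words: divide by the zero-frequency bin, not by the array
maximum. A minimal sketch:)

/* Normalize an MTF to 1.0 at zero spatial frequency. Dividing by
   mtf[0] (the DC bin) rather than by the maximum lets sharpening
   halo show up as responses above 1.0. The loop runs backwards so
   mtf[0] itself is scaled last. */
static void normalize_mtf(double *mtf, int n)
{
    int i;
    if (n <= 0 || mtf[0] == 0.0)
        return;
    for (i = n - 1; i >= 0; i--)
        mtf[i] /= mtf[0];
}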

SNIP
>> The ISO suggests to [...] determine the centroid of the LSF (by
>> calculating the discrete derivative of the ESF). The centroids can
>> be used for regression.
>>
>> The derivative suggested by the ISO is:
>> "for each line of pixels perpendicular to the edge, the edge is
>> differentiated using the discrete derivative "-0,5 ; +0,5", meaning
>> that the derivative value for pixel "X" is equal to -1/2 times the
>> value of the pixel immediately to the left, plus 1/2 times the
>> value of the pixel to the right".
>
> Sorry if I'm thick, but mathematics isn't my best friend...

Hey, I also know my limitations in that field ... ;-)

> You're implying that, for each line of pixels, the edge center(oid?)
> will be the absolute maximum of the above derivative, aren't you?

Yep.

> But isn't the absolute maximum of the derivative precisely the
> maximum gradient?

Rereading it, yes, but it actually is where the increasing contrast
turns into decreasing contrast (the first derivative being the slope
of the curve).

> (Though the formula I use is currently simpler than the one you
> cite: simply y'[i]=y[i+1]-y[i])

Yes, and it'll produce a difference, but actual nodes will on average
be displaced by half a pixel. Nevertheless, the sample code from
the ISO seems to do what you did, so I'd suggest leaving it that way.
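
(Both differentiators side by side in C, for reference; the function
names are illustrative:)

/* ISO centered difference: lsf[x] = -0.5*esf[x-1] + 0.5*esf[x+1],
   valid for x = 1 .. n-2. */
static void lsf_centered(const double *esf, double *lsf, int n)
{
    int x;
    for (x = 1; x < n - 1; x++)
        lsf[x] = 0.5 * (esf[x + 1] - esf[x - 1]);
    lsf[0] = lsf[n - 1] = 0.0;
}

/* Simple forward difference: lsf[x] = esf[x+1] - esf[x].
   Nodes end up displaced by half a pixel relative to the
   centered form, as noted above. */
static void lsf_forward(const double *esf, double *lsf, int n)
{
    int x;
    for (x = 0; x < n - 1; x++)
        lsf[x] = esf[x + 1] - esf[x];
    lsf[n - 1] = 0.0;
}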

SNIP
>> See earlier remark, and provisions need to be made to detect
>> multiple maxima (caused by noise/graininess).
>
> What kind of provisions?

With noisy images, there can be multiple LSF maxima from a single ESF.
One should decide which maximum to take. I dug up some old Word
document with C code for the SFR calculation. It takes the average
between the leftmost and rightmost maxima.
If your email address in your signature is valid, I can send you that
document.
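
(A minimal sketch of that provision, following the approach just
described; the exact-equality test assumes quantized sample values:)

/* With noisy input the LSF can show several samples at the peak
   value. Take the midpoint between the leftmost and rightmost
   occurrences of the maximum. */
static double lsf_peak_position(const double *lsf, int n)
{
    int x, left = 0, right = 0;
    double peak = lsf[0];

    for (x = 1; x < n; x++) {
        if (lsf[x] > peak) {
            peak = lsf[x];
            left = right = x;
        } else if (lsf[x] == peak) {
            right = x;
        }
    }
    return 0.5 * (left + right);
}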

SNIP
> Even though the SourceForge description currently says little more
> than "calculates the MTF from a slanted edge", ultimately I'd like
> this program to do automatic deconvolution (or whatever is best) of
> images based on the edge results.

Yes, that's a good goal, although it will take more than a single
slanted edge to get a two-dimensional asymmetric PSF. What's worse,
the PSF can (and does) change throughout the image, but a symmetrical
PSF will already allow improving image quality.
Some hurdles will need to be taken, but the goal is exactly what I am
looking for.

SNIP
> My main resource has been
> http://www.isprs.org/istanbul2004/comm1/papers/2.pdf
>
> where I took the evil alternative to the "4x bins" that I'm
> currently using, with all the regression nightmares it brings ;-)
> But it was an interesting document, anyway.

Yes, we're not the only ones still looking for the holy grail, it
seems.

I'm working on a method that will produce a PSF, based on the ESF
derived from a slanted edge. That PSF can be used in various
deconvolution methods, and it can be used to create a High-pass filter
kernel. Could be useful to incorporate in the final program.

Bart

From: Bart van der Wolf

"Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message
news:nim_e.699$133.298(a)tornado.fastwebnet.it...
SNIP
> The new archive at
> http://ljl.741.com/slantededge-alpha2.tar.gz
> or
> http://ljl.150m.com/slantededge-alpha2.tar.gz
> now includes a Windows executable, as well as a test image.

Thanks. Unfortunately I get an error box with:
"This application has failed to start because cygwin1.dll was not
found".

SNIP
> At
> http://ljl.741.com/comparison.gif
> there is a graph showing the MTF calculated both by my program and
> SFRWin, from the test image included in the archive.

The test image looks a bit odd. It seems like the edge was resized
vertically with a nearest neighbor interpolation. The edge pixels look
like they are 2 pixels high and 1 pixel wide. The noise in the image
is single pixel noise (and has a bit of hardware calibration
striping).

It looks strange, and thus produces strange sharpening artifacts if
pushed to the limits (deconvolution sharpening with a derived PSF, and
a Richardson-Lucy restoration with the same PSF). Nevertheless, the
Imatest SFR analysis looks identical to the SFRWin result.

Bart

From: Lorenzo J. Lucchini
Bart van der Wolf wrote:
>
> "Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message
> news:nim_e.699$133.298(a)tornado.fastwebnet.it...
> SNIP
>
>> The new archive at
>> http://ljl.741.com/slantededge-alpha2.tar.gz
>> or
>> http://ljl.150m.com/slantededge-alpha2.tar.gz
>> now includes a Windows executable, as well as a test image.
>
>
> Thanks. Unfortunately I get an error box with:
> "This application has failed to start because cygwin1.dll was not found".

Yeah, it's the Unix emulation layer that Cygwin-compiled programs
apparently need.
I've uploaded it at
http://ljl.741.com/cygwin1.dll.gz
http://ljl.150m.com/cygwin1.dll.gz

Putting it in the same directory as the program should work, as
should putting it in \Windows\System32.

> SNIP
>
>> At
>> http://ljl.741.com/comparison.gif
>> there is a graph showing the MTF calculated both by my program and
>> SFRWin, from the test image included in the archive.
>
>
> The test image looks a bit odd. It seems like the edge was resized
> vertically with a nearest neighbor interpolation. The edge pixels look
> like they are 2 pixels high and 1 pixel wide.

It could be: I've scanned some edges at 2400x4800 and resized them down
to see how this affected the MTF (remember the thread "Multi-sampling
and 2400x4800 scanners").

Obviously, I'm stupid enough to give very meaningful names to my files
such as "edge1.tif", "edge2.tif", "edgehg.tif", so it's entirely
possible that I took the wrong edge =)

I guess the next tarball will contain a freshly scanned edge, to avoid
this confusion.

> The noise in the image is
> single pixel noise (and has a bit of hardware calibration striping).

What is single pixel noise? Or rather, is it "single pixel" as
opposed to what?

> [snip]

by LjL
ljlbox(a)tiscali.it