From: Lorenzo J. Lucchini on
Bart van der Wolf wrote:
>
> "Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message
> news:L3l_e.674$133.670(a)tornado.fastwebnet.it...
>
>> Bart van der Wolf wrote:
>
> SNIP
>
>>> The image code values should be linearized at this stage,
>>> so film/sensor non-linearity and gamma adjustments
>>> can't influence the calculations.
>>
>> Yes, I am currently ignoring gamma, as my test images
>> are gamma=1.0 anyway.
>
> For real accurate results, that remains to be verified ...
> Testing a (transparent) step wedge may/will reveal 'interesting'
> features of hardware *and* of scanner driver software.

True. But test targets cost money, and I'm allergic to expenses ;-)
Well, sooner or later, perhaps I'll go buy an IT8 test target, so that I
have a gray scale for measuring gamma, as well as everything else for
measuring colors.

But not right now. Actually, one of the reasons why I got very
interested in this slanted edge test was its astonishing inexpensiveness :-)

By the way, can the test somehow also tell something about color
corrections? Intuitively, it seems to me that with the ESF and the MTF
for red, green and blue one should be able to fix at least *some* color
correction parameters.

>> If I'm not mistaken, though, this all boils down to a
>> "Pixel=InputPixel^Gamma" instead of just "Pixel=InputPixel",
>> so it'll be very easy to add.
>
> Yes, for straight Gamma only (sRGB profiled images use a
> 'slope-limited' Gamma). Also beware that Gamma adjustment
> may mean 1/Gamma, depending on what is being adjusted where.
> This does assume that Gamma is the only non-linearity.

But if I'm not mistaken, Imatest does the same, so I can be content with
it right now.
SFRWin on the other hand allows one to specify a look-up table.

> [snip]
>
> Just note that actual MTFs can exceed 1.0, assuming correct
> normalization to 1.0 at zero cycles. Edge sharpening halo can achieve
> that easily.

Right. I'll have to fix this, as the program currently doesn't give the
zero-cycles point any special treatment, but just normalizes.

> [snip]
>
>>> See earlier remark, and provisions need to be made to detect
>>> multiple maxima (caused by noise/graininess).
>>
>> What kind of provisions?
>
> With noisy images, there can be multiple LSF maxima from a single ESF.
> One should decide which maximum to take. I dug up some old Word document
> with C code for the SFR calculation. It takes the average between the
> leftmost and rightmost maxima.
> If your email address in your signature is valid, I can send you that
> document.

Yes, it's valid. It's usually full because of spam, but I'm keeping it
clean lately... just don't put "enlargement" in the subject and I should
receive it :-)

Anyway, can noise really cause such problems, besides in "extreme"
situations (i.e. when the measurements would be better thrown away anyway)?

I'm thinking that even a very high spike of noise shouldn't be able to
get higher than the edge contrast, considering that *many* lines are
averaged together to obtain the ESF...

Perhaps *very* bad calibration striping could cause that, but then again
one should probably throw away the scan in such a case.

> SNIP
>
>> Even though the SourceForge description currently says little more
>> than "calculates the MTF from a slanted edge", ultimately I'd like
>> this program to do automatic deconvolution (or whatever is best) of
>> images based on the edge results.
>
> Yes, that's a good goal, although it will take more than a single
> slanted edge to get a two-dimensional asymmetric PSF.

Well, I assume that a vertical+horizontal edge measurement can be
accurate enough for many purposes.

By the way, I've done (but not published) some tests on horizontal edges
(i.e. stepping motor resolution), and the results are a bit depressing!
About 100-200 dpi lower resolution than on the CCD axis, and a
frightening amount of color fringing.

> What's worse,
> the PSF can (and does) change throughout the image, but a symmetrical
> PSF will already allow image quality to be improved.
> Some hurdles will need to be taken, but the goal is exactly what I am
> looking for.

Well, it's a bit soon for this perhaps, but... how do I obtain the PSF
from two (h+v) ESFs?
Perhaps for each point, average the two points on the ESFs that are at
the same distance from the "center" of the function, and weight the
average based on the angle between the points and the two (h+v) axes?

But anyway...

> SNIP
>
>> My main resource has been
>> http://www.isprs.org/istanbul2004/comm1/papers/2.pdf
>>
>> where I took the evil alternative to the "4x bins" that I'm
>> currently using, with all the regression nightmares it brings ;-)
>> But it was an interesting document, anyway.
>
> Yes, we're not the only ones still looking for the holy grail, it seems.
>
> I'm working on a method that will produce a PSF, based on the ESF
> derived from a slanted edge. That PSF can be used in various
> deconvolution methods, and it can be used to create a High-pass filter
> kernel. Could be useful to incorporate in the final program.

... of course it can be useful. No need for me to invent the square
wheel when you've already worked on an octagonal one :-)


by LjL
ljlbox(a)tiscali.it
From: Bart van der Wolf on

"Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message
news:Gay_e.1611$133.572(a)tornado.fastwebnet.it...
SNIP
> What do you propose for image restoration, instead of deconvolution?
> ("image restoration" obviously being restricted to the things you
> can do when you have an ESF or PSF)

Straight deconvolution will 'restore' noise (and graininess) as well
as the signal itself; indeed, the noise may even be amplified. There
are better methods, like the adaptive (damped) Richardson-Lucy
algorithm, but that is a very processing-intensive and thus
time-consuming operation. The method used in "Image Analyzer", based
on the "CGLS" algorithm, is much faster but slightly noisier than RL.
<http://www.mathcs.emory.edu/~nagy/RestoreTools/index.html>

SNIP
> So in your opinion it's ok if I just consider an arbitrary number of
> pixels (like Imatest does) as constituting "the edge", without going
> to great length trying to have the program make an educated guess?

Yes, although flatbed scanners exhibit a significantly wider ESF with
long tails. Don't be too conservative. The range used by Imatest is
from -6 to +10 in quarter pixel increments, and thus allows handling
at least a symmetrical PSF with a support of 13 pixels, which would be
for a *very* blurry image.

SNIP
>> You'll have to window because of the abrupt edges. That's reality
>> in Fourier transforms, we deal with a small subset of the real data
>> which reaches out to infinity.
>
> Yes, but there are two other options I've considered:
>
> 1) taking *many* DFTs of small (Hanning-windowed) pieces of the LSF,
> and then average them together. Wouldn't this avoid the change of
> LSF shape that using a single, big window may cause?

Although possible, it's probably more efficient to code the
binning/averaging of many single-row LSFs, thus approaching the real
LSF. It's the real LSF that needs to be 'DFT'ed. IMO it's not so
useful to perform many DFTs on such a small array (e.g. 64 quarter
pixels if ranging from -6 to +10); it becomes less efficient and, I
think, more complex. Some
calculations are easier/faster in the frequency domain (e.g.
deconvolution) and others in the spatial domain (averaging and small
kernel convolution). The windowing has only to be done once on the
multiple LSF average (which already has lower noise than single
samples).

> 2) not using any window, but "cropping" the LSF so that the edges
> are (very near) zero. Would this have any chances of working? It
> would completely avoid changing the LSF's shape.

Truncation loses information, I don't think that's helpful. Windowing
will further reduce the already low response and it suppresses noise
at the boundaries. That's helpful.

SNIP
> Yes, I do this.
> MTF[i]=SquareRoot(ImaginaryPart[i]^2+RealPart[i]^2)

Depending on the library implementation, for complex numbers z,
Abs[z] gives the modulus |z|.
Also note http://www.library.cornell.edu/nr/bookcpdf/c5-4.pdf (5.4.3)
which is a more robust way of doing it in extreme cases (which may
never occur, but it's still more robust).

>>> - How many samples should my ESF/LSF have? I understand that it
>>> only depends on how high you want your frequencies to be -- i.e.,
>>> if I want to show the MTF up to 4xNyquist, I should have 4x more
>>> samples than there are real pixels. Is this correct?
>>
>> No.
>
> Anyway, the method I'm using comes from the document I pointed to in
> the other article, so it shouldn't be *too* stupid.
>
> Still, the method you describe sounds much simpler to implement, so
> I guess I'll go for it.

In the Netherlands we have a saying, "there are several roads, all
leading to Rome" ;-)
All I did was describe how the ISO suggests doing it, and it seems
relatively simple to code. Simple code reduces the chance of bugs
and unforeseen 'features'.


>
>> In the ISO method you would calculate an ESF for each line (row) of
>> pixels that crosses the edge. The average of all those ESFs is
>> produced after shifting each row in proportion with the centroid
>> regression. It is at that point, the shifting, that you bin the
>> pixels in an array that's 4x wider than the edge crop. That allows
>> you to bin the centroid with a 4x higher (=quarter pixel)
>> resolution. After that it's just statistics, larger numbers of ESFs
>> make a more likely approximation of the actual ESF.
>
> Let us see if I've understood this.
[...]

I think I have an even better suggestion. I have some C code examples
from the ISO that show how it could be implemented. If I can email
the 48KB Word document (is your ljlbox(a)tiscali.it in the signature
valid?), it'll become quite clear, I'm sure. I also have a 130KB PDF
file you'll love to read because it describes the various steps and
considerations in more detail.

One more remark. I've been having some discussions with Fernando
Carello (also from Italy) about PSFs and types of restoration. He was
willing (if time permits) to attempt to tackle that side of the
challenge. Although I understand that it's always a bit difficult to
fathom a mix of programming styles, it could potentially help (again,
if his time allows).

Bart

From: Bart van der Wolf on

"Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message
news:vyz_e.1841$133.1819(a)tornado.fastwebnet.it...
SNIP
> I think I'll go for user selectable, with a default that's
> recommended for comparing others' results.

Yep, seems like the pragmatic approach wins.

> But all this made me wonder about something else: would it make any
> sense to compare the edge *position* of each (red, green and blue)
> channel with the edge position in the luminance channel?

Depends on what one wants to investigate, but I see no direct use for
comparison of a color channel with Luminance. Besides Luminance for
sharpness, one could compare R, G, and B for Chromatic aberrations. In
digicams with a Bayer CFA it can be quite revealing what a difference
Raw converters can make. For scanners that seems less of an issue.

> I mean. SFRWin gives "red", "blue" and "green" color offsets (for
> measuring "color fringing"), but the "green" offset is always zero,
> as the other two channels are compared to green.
>
> Would comparing the three channels to luminance, instead, have any
> advantage over SFRWin's approach? I don't remember what Imatest does
> here.

No advantage for me. Imatest produces a figure for Chromatic
Aberration (http://www.imatest.com/docs/tour_sfr.html#ca).

Bart

From: Lorenzo J. Lucchini on
Bart van der Wolf wrote:
>
> "Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message
> news:Gay_e.1611$133.572(a)tornado.fastwebnet.it...
> SNIP
>
> [snip]
>
>> So in your opinion it's ok if I just consider an arbitrary number of
>> pixels (like Imatest does) as constituting "the edge", without going
>> to great length trying to have the program make an educated guess?
>
> Yes, although flatbed scanners exhibit a significantly wider ESF with
> long tails. Don't be too conservative. The range used by Imatest is from
> -6 to +10 in quarter pixel increments, and thus allows handling at
> least a symmetrical PSF with a support of 13 pixels, which would be for
> a *very* blurry image.

Well, I will use -10 .. +10 for now, if it doesn't get too slow (but
using the "4x bins" thing it should all go much faster, I think).
That should allow plenty of room.

> SNIP
>
>>> You'll have to window because of the abrupt edges. That's reality in
>>> Fourier transforms, we deal with a small subset of the real data
>>> which reaches out to infinity.
>>
>>
>> Yes, but there are two other options I've considered:
>>
>> 1) taking *many* DFTs of small (Hanning-windowed) pieces of the LSF,
>> and then average them together. Wouldn't this avoid the change of LSF
>> shape that using a single, big window may cause?
>
>
> Although possible, it's probably more efficient to code the
> binning/averaging of many single-row LSFs, thus approaching the real
> LSF. It's the real LSF that needs to be 'DFT'ed. IMO it's not so
> useful to perform many DFTs on such a small array (e.g. 64 quarter
> pixels if ranging from -6 to +10); it becomes less efficient and, I
> think, more complex. Some
> calculations are easier/faster in the frequency domain (e.g.
> deconvolution) and others in the spatial domain (averaging and small
> kernel convolution). The windowing has only to be done once on the
> multiple LSF average (which already has lower noise than single samples).

No, sorry, I didn't mean to take the DFTs of all single-row LSFs.

What I meant is:
- assume you have a (final) LSF that is 128 values long
- you take the windowed DFT of the first 16 values
- then you take the DFT of values from 8 to 24
- then from 16 to 32
- etc
- then you average all these together

I've not just invented this strange thing, I've read it somewhere,
though not in a graphics context, and I might have misinterpreted it.

>> 2) not using any window, but "cropping" the LSF so that the edges are
>> (very near) zero. Would this have any chances of working? It would
>> completely avoid changing the LSF's shape.
>
> Truncation loses information, I don't think that's helpful. Windowing
> will further reduce the already low response and it suppresses noise at
> the boundaries. That's helpful.

I see. Well, at least I guess that taking a "longer" LSF (i.e.
considering more pixels "part of the edge") could help reduce any
artifacts caused by the windowing, since most of the "important" data is
in a small part of the whole LSF.

Also, do you think a Hanning window is a reasonable choice for my
purposes, or should I choose a different window type?

> SNIP
>
>> Yes, I do this.
>> MTF[i]=SquareRoot(ImaginaryPart[i]^2+RealPart[i]^2)
>
> Depending on the library implementation, for complex numbers z,
> Abs[z] gives the modulus |z|.

No, there isn't such a function in FFTW. However, there are functions to
directly obtain a real-to-real FFT; I probably should look at them,
although I'm not sure whether the real data they output are the moduli
or simply the real parts of the transform's output.

> Also note http://www.library.cornell.edu/nr/bookcpdf/c5-4.pdf (5.4.3)
> which is a more robust way of doing it in extreme cases (which may never
> occur, but it's still more robust).

Site down at the moment :-) I'm starting to worry that it may somehow be
me...

> [snip]
>
> In the Netherlands we have a saying, "there are several roads, all
> leading to Rome" ;-)

Tutte le strade portano a Roma (all roads lead to Rome), clearly we
have that too, and in our case, it's actually true ;-) at least for
those roads that were originally built in ancient Roman times.

> [snip]
>
>> Let us see if I've understood this.
>
> [...]
>
> I think I have an even better suggestion. I have some C code examples
> from the ISO that show how it could be implemented. If I can email the
> 48KB Word document (is your ljlbox(a)tiscali.it in the signature valid?),
> it'll become quite clear, I'm sure.

It's valid, yes.
Only, how copyrighted is that stuff, and what's the legal state of those
documents? ISO might get upset with both of us, if somehow I wasn't
supposed to read or use their papers (ok, ok, in the real world they
won't even notice we exist I suppose).

> I also have a 130KB PDF file you'll
> love to read because it describes the various steps and considerations
> in more detail.

Sure, if the main concepts can be understood without getting too
deeply technical, it'll be most welcome.

> One more remark. I've been having some discussions with Fernando Carello
> (also from Italy) about PSFs and types of restoration. He was willing
> (if time permits) to attempt to tackle that side of the challenge.

I've read through some of the threads you've had with him.

> Although I understand that it's always a bit difficult to fathom a mix
> of programming styles, it could potentially help (again, if his time
> allows).

Sure! And if we work on separate aspects of the issue, the problem of
mixed programming styles is easily solved by having separate
sources/libraries communicate through an interface, you know, like I've
heard real programmers (rarely) do ;-)

Hm, I see a problem though... these PSF, MTF, ESF, DFT, regression,
curve-fitting etc. thingies, I don't know what to call most of them in
Italian :-D


by LjL
ljlbox(a)tiscali.it
From: Bart van der Wolf on

"Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message
news:1aB_e.2000$133.1794(a)tornado.fastwebnet.it...
SNIP
> Yeah, it's the Unix emulation layer that Cygwin compiled programs
> apparently need.
> I've uploaded it at
> http://ljl.741.com/cygwin1.dll.gz
> http://ljl.150m.com/cygwin1.dll.gz

Getting closer, although now it complains about not being able to locate
cygfftw3-3.dll (presumably the FFT routine library).

SNIP
>> The test image looks a bit odd. It seems like the edge was resized
>> vertically with a nearest neighbor interpolation. The edge pixels
>> look like they are 2 pixels high and 1 pixel wide.
>
> It could be: I've scanned some edges at 2400x4800 and resized them
> down to see how this affected the MTF (remember the thread
> "Multi-sampling and 2400x4800 scanners").

Yes, that's what I was thinking of, interpolation in one direction to
compensate for the aspect ratio.

SNIP
>> The noise in the image is single pixel noise (and has a bit of
>> hardware calibration striping).
>
> What is single pixel noise -- or, is it "single pixel" as opposed to
> what)?

No, the edge seems to be NN interpolated, but the noise is not (there
are single-pixel, as opposed to 1x2-pixel, variations).

I'll wait for the next tarball, before making further benchmark tests.

Bart
