From: Lorenzo J. Lucchini
Lorenzo J. Lucchini wrote:
> Bart van der Wolf wrote:
>
> > [snip]
>
> I know. The main problem is that I'm using the FFTW library for taking
> Fourier transforms, and while there seems to be a Windows version
> available (several, actually), well... the site is down. The web doesn't
> like me apparently.

Fortunately, I see that CygWin comes with FFTW. I guess it will be easy
enough then.

by LjL
ljlbox(a)tiscali.it
From: Bart van der Wolf

"Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message
news:5KYZe.49331$nT3.33343(a)tornado.fastwebnet.it...
SNIP

> and I've created a SourceForge project at
> http://sourceforge.net/projects/slantededge/
>
> with sourcecode in the CVS.
>
>
> I'll explain briefly what the program does and in what order it does
> it, as the source is not very well commented, er, at the moment.


I've added a few comments; hopefully they will help to better
understand the reasons behind the steps.

> 1) Read in an RGB image
> 2) We work with vertical edges, so rotate the image if needed

This is only done to simplify the calculations. All that's required is
to get the image data in a uniform orientation, so all subsequent
subroutines will know what to expect (which avoids rechecking
assumptions).

> 3) Besides the "red", "blue" and "green" channels, create a "gray"
> channel that is the average of the other three

The image code values should be linearized at this stage, so
film/sensor non-linearity and gamma adjustments can't influence the
calculations.

It is customary to use a different weighting than the (R+G+B)/3
average. The ISO suggests calculating a luminance channel; a ratio of
approx. 3R:6G:1B is closer to what we experience as luminance.
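
In C this weighting is a one-liner per pixel. A minimal sketch (the
function name is hypothetical; the 0.3/0.6/0.1 coefficients approximate
the 3R:6G:1B ratio, and the channels are assumed linearized already):

  /* Build a luminance channel from linearized RGB using the approximate
     3R:6G:1B weighting (Rec. 601 luma uses 0.299, 0.587, 0.114). */
  void rgb_to_luminance(const double *r, const double *g, const double *b,
                        double *y, int n)
  {
      for (int i = 0; i < n; i++)
          y[i] = 0.3 * r[i] + 0.6 * g[i] + 0.1 * b[i];
  }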

> 4) Normalize the image, so that 0.5% of the pixels clip down, and
> 0.5% clip up

I think that, especially on non-linear image codes, this will
influence the MTF results, because the contrast is expanded. On a
perfectly symmetrical brightness distribution its effect will be
small, but the possibility of clipping in later stages should be
avoided. Also, a check for at least 20% edge modulation should be
made, to avoid too low an input S/N ratio.

It is however perfectly normal to normalize the ESF output to a range
between 0.0 and 1.0, and later to normalize the SFR/MTF to 1.0 (100%)
at zero spatial frequency.

> 5) For each line in the image, find the point with the max adjacent
> pixel difference (should be the edge)

Not necessarily, that is just the maximum gradient and that need not
be the same as the edge.
The ISO suggests combining this with your step 9, and determining
the centroid of the LSF (by calculating the discrete derivative of the
ESF). The centroids can be used for regression.
The derivative suggested by the ISO is:
"for each line of pixels perpendicular to the edge, the edge is
differentiated using the discrete derivative "-0,5 ; +0,5", meaning
that the derivative value for pixel "X" is equal to -1/2 times the
value of the pixel immediately to the left, plus 1/2 times the value
of the pixel to the right".
They then specify something different in their centroid formula, but
perhaps they changed that in the official standard.
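
For one row of pixels, the derivative-plus-centroid step could look
like this sketch (hypothetical names; esf[] is assumed to hold the
linearized values of one row across the edge):

  /* ISO-style discrete derivative and centroid for one row: the LSF is
     the "-0.5 ; +0.5" derivative of the ESF, and the centroid of the
     LSF estimates the sub-pixel edge position for the row. */
  double edge_centroid(const double *esf, int n)
  {
      double sum = 0.0, weighted = 0.0;
      for (int x = 1; x < n - 1; x++) {
          double lsf = 0.5 * (esf[x + 1] - esf[x - 1]);
          sum += lsf;
          weighted += lsf * x;
      }
      return (sum != 0.0) ? weighted / sum : -1.0; /* -1.0: degenerate row */
  }

The per-row centroids are then what you feed to the least-squares
regression.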

There is another possible calculation method, done in the orthogonal
direction, i.e. almost along the edge instead of across it.

> 6) By doing a least squares regression on those points, find a
> straight line that ought to best represent the edge
> 7) Associate a distance from that line to each pixel in the image.

The ISO method shifts each row of the ESF by the calculated amount
from the regression, but uses quarter pixel bins. This produces a 4x
oversampling per pixel position.
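
A sketch of that binning step (hypothetical names; `offset` is the
row's edge position taken from the regression line):

  #include <math.h>

  /* Accumulate one row into quarter-pixel bins centered on the edge.
     After all rows are processed, bin_sum[i] / bin_count[i] gives one
     sample of the 4x oversampled ESF. */
  void bin_row(const double *row, int width, double offset,
               double *bin_sum, int *bin_count, int nbins)
  {
      for (int x = 0; x < width; x++) {
          int bin = (int)floor((x - offset) * 4.0) + nbins / 2;
          if (bin >= 0 && bin < nbins) {
              bin_sum[bin] += row[x];
              bin_count[bin]++;
          }
      }
  }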

> The function PixelValue(Distance) approximates the edge spread
> function.
> 8) Use "some kind" of local regression to create a uniformly-spaced
> version of the ESF, from the data described above.
> 9) Derive the line spread function from the edge spread function:
> LSF(i)=ESF(i+1)-ESF(i)

See earlier remark, and provisions need to be made to detect multiple
maxima (caused by noise/graininess).

> 10) Apply a Hanning window to the LSF

That is needed to reduce noise and the discontinuity at the borders of
the LSF.
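
Applying the window is straightforward; a minimal sketch in C:

  #include <math.h>

  /* Multiply the LSF by a Hann(ing) window so the DFT doesn't see an
     abrupt discontinuity at the borders of the array. */
  void hanning_window(double *lsf, int n)
  {
      const double pi = 3.14159265358979323846;
      for (int i = 0; i < n; i++)
          lsf[i] *= 0.5 * (1.0 - cos(2.0 * pi * i / (n - 1)));
  }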

> 11) Take the discrete Fourier transform of the resulting data

And take the Modulus, and normalize.
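
With re[]/im[] holding the complex DFT output, that amounts to this
sketch (hypothetical names):

  #include <math.h>

  /* Take the modulus of each DFT bin and normalize so that MTF(0) = 1. */
  void dft_to_mtf(const double *re, const double *im, double *mtf, int n)
  {
      double dc;
      for (int k = 0; k < n; k++)
          mtf[k] = sqrt(re[k] * re[k] + im[k] * im[k]);
      dc = mtf[0];
      if (dc > 0.0)
          for (int k = 0; k < n; k++)
              mtf[k] /= dc;
  }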

> Note that, at the moment, the input image must be an RGB, 8-bit
> ASCII ("plain") PPM file. These can be created using "*topnm" and
> "pnmtoplainpnm" from the NetPBM suite, or by using the GIMP.
> Type --help for some uncomprehensive help.

;-) Room for improvement ...

> I have a lot of doubts and questions to ask, but first I'd like to
> get an overall look at the program by someone competent, to find out
> what I have done *badly* wrong (something awful is bound to be
> there).

I'll have to study the source code before I can comment.

> Please keep in mind that I knew nothing about regressions, spread
> functions or Fourier transforms two weeks ago -- and I suppose I
> don't know that much more now.

Isn't that the fun of programming? It forces you to describe the
principles in detail. Learning new stuff is inevitable.

> I just read some Internet source and implemented what I thought they
> meant.

With the program (to be) available to the public, the chance of
helping hands increases. The source code you have probably read will
give a good start. As always, the devil is in the details, but that
can be overcome.

Bart

From: Bart van der Wolf

"Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message
news:%ib_e.38$U13.8(a)tornado.fastwebnet.it...
SNIP
> I'll see what I can do, perhaps I can just write a slow, simple DFT
> myself (I have no idea how difficult it is, I'll have to read a bit)
> as a compile alternative to FFTW.

http://astronomy.swin.edu.au/~pbourke/other/dft/ and
http://www.library.cornell.edu/nr/bookcpdf/c12-2.pdf have C code for a
DFT function.
http://www.library.cornell.edu/nr/cbookcpdf.html chapter 12 has more
background on Fourier transforms, and chapter 13.1 has routines and
background on Deconvolution (although there are better functions for
image restoration).
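
For LSF-sized inputs a naive O(n^2) DFT is more than fast enough; a
minimal sketch along the lines of those references:

  #include <math.h>

  /* Naive O(n^2) DFT of a real input sequence, writing the real and
     imaginary parts of bins 0..n-1. */
  void dft_real(const double *in, double *re, double *im, int n)
  {
      const double pi = 3.14159265358979323846;
      for (int k = 0; k < n; k++) {
          re[k] = im[k] = 0.0;
          for (int j = 0; j < n; j++) {
              double a = -2.0 * pi * k * j / n;
              re[k] += in[j] * cos(a);
              im[k] += in[j] * sin(a);
          }
      }
  }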

> Anyway, do you have a SourceForge account? With one, I can just add
> you to the project, and then you'll be able to access SF's shell
> servers. This way you could easily run - and even easily compile -
> the program remotely, without installing anything special on your
> computer.

No, I don't have an SF account.

> Now some of the most important questions I have:
>
> - What is Imatest's "10%" and "90%"? Initially, I took these as the
> minimum and maximum pixel values that can constitute the "edge".

No, there are several ways to quantify an "infinite" function in a
single number. On an Edge Response Function it is a common procedure
to choose the width between the 10% and 90% response points. On a
Gaussian type of function, the Full Width at Half Maximum (FWHM) is
often used.

> But it appears clear that the showed ESF also contains lower and
> higher values; besides, it always seem to go from -6 pixels to +8
> pixels from the edge center. Is there a reason for this?

I'm not sure about the exact reason for the choice, but I assume it
has to do with the shape of some ESFs that Norman Koren encountered
when he developed the program. The actual data in Imatest is recorded
from -6 to +10 at 0.25 intervals.

> So, now I suppose "10%" and "90%" are simply used to compute (guess
> what) the 10%-90% rise.

Actually the width in pixels between the two response points.
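
Computing that width from a normalized, rising ESF is a matter of
linear interpolation between samples; a sketch (hypothetical names;
`spacing` is the sample pitch, e.g. 0.25 for quarter-pixel bins):

  /* Width, in original pixels, between the 10% and 90% response points
     of a normalized (0..1), monotonically rising ESF. */
  double rise_width(const double *esf, int n, double spacing)
  {
      double x10 = -1.0, x90 = -1.0;
      for (int i = 0; i < n - 1; i++) {
          if (x10 < 0.0 && esf[i] <= 0.10 && esf[i + 1] > 0.10)
              x10 = i + (0.10 - esf[i]) / (esf[i + 1] - esf[i]);
          if (x90 < 0.0 && esf[i] <= 0.90 && esf[i + 1] > 0.90)
              x90 = i + (0.90 - esf[i]) / (esf[i + 1] - esf[i]);
      }
      return (x10 >= 0.0 && x90 >= 0.0) ? (x90 - x10) * spacing : -1.0;
  }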

> Which in turns call for: should I normalize the image before doing
> anything else? I currently normalize so that 0.5% of pixels clip to
> black and 0.5% clip to white.

No, it is better to only normalize the output curves but leave the
original data (which is where the truth is hidden) as it is. By
manipulating the image data itself, you run the risk of changing the
data (in case of non-linear response data), and of introducing
quantization errors (by rounding up/down half a bit).

> - Is it ok to take a (single) Fourier transform of the Hanning-
> windowed LSF?

Yes, it's not that much data, so the DFT is fast.

> Without windowing, I get weird results, but with windowing, I'm
> afraid I'm affecting the data.

You'll have to window because of the abrupt edges. That's reality in
Fourier transforms: we deal with a small subset of the real data,
which reaches out to infinity.

> My MTF's always look "smoother" than Imatest's and SFTWin's ones,
> and too high in the lower frequencies.

We'll have to see, but make sure you compute the magnitude of the
transform (take the "Modulus" of the DFT).

> - How many samples should my ESF/LSF have? I understand that it only
> depends on how high you want your frequencies to be -- i.e., if I
> want to show the MTF up to 4xNyquist, I should have 4x more samples
> than there are real pixels. Is this correct?

No. In the ISO method you would calculate an ESF for each line (row)
of pixels that crosses the edge. The average of all those ESFs is
produced after shifting each row in proportion with the centroid
regression. It is at that point, the shifting, that you bin the pixels
in an array that's 4x wider than the edge crop. That allows you to bin
the centroid with a 4x higher (=quarter pixel) resolution. After that
it's just statistics: larger numbers of ESFs make a better
approximation of the actual ESF.

The ISO takes one additional precaution: they take an integer number
of phase rotations. That means that if you e.g. calculated a slope of
1 pixel for every ten rows, then they take an integer multiple of ten
rows, starting at the top and truncating the image data at the bottom.
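
In code that could amount to something like this sketch (my reading of
the procedure; `slope` is assumed to be the regression slope in pixels
of horizontal shift per row):

  #include <math.h>

  /* Keep an integer number of "phase rotations": if the edge shifts one
     pixel every `period` rows, truncate to a multiple of that period so
     every quarter-pixel phase is sampled equally often. */
  int usable_rows(int total_rows, double slope)
  {
      int period;
      if (slope == 0.0)
          return total_rows;
      period = (int)(1.0 / fabs(slope) + 0.5); /* rows per 1-pixel shift */
      if (period < 1 || period > total_rows)
          return total_rows;
      return (total_rows / period) * period;
  }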

> - How do I reduce frequencies spacing in the DFT?

I'm not sure what you mean, but it may have to do with the previous
quarter pixel binning.
SNIP

> - The method I'm currently using for getting a smooth,
> uniformly-spaced ESF from the points I have is naive and
> very slow. The sources I've read suggest using "LOESS curve fitting"
> for this. I've had some trouble finding good references about this,
> and it seems very complicated anyway.

It apparently is a kind of locally weighted regression with reduced
sensitivity to outliers.

> The question is: is something simpler good enough?

Part of it may have to do with the quarter pixel sampling/binning.
If you just want to fit a monotone curve through regularly sampled
points, a simple interpolation (Cubic or Hermite) seems good enough to
me:
http://astronomy.swin.edu.au/~pbourke/other/interpolation/
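
For instance, the cubic (Catmull-Rom) variant from that page fits in a
few lines:

  /* Catmull-Rom cubic interpolation between y1 and y2, given the two
     surrounding samples; mu in [0,1] is the position between y1 and y2. */
  double cubic_interpolate(double y0, double y1, double y2, double y3,
                           double mu)
  {
      double a0 = -0.5 * y0 + 1.5 * y1 - 1.5 * y2 + 0.5 * y3;
      double a1 =        y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3;
      double a2 = -0.5 * y0            + 0.5 * y2;
      double a3 =        y1;
      return ((a0 * mu + a1) * mu + a2) * mu + a3;
  }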

Bart

From: Lorenzo J. Lucchini
Bart van der Wolf wrote:
>
> "Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message
> news:5KYZe.49331$nT3.33343(a)tornado.fastwebnet.it...
> SNIP
>
> [snip]
>
> I've added a few comments; hopefully they will help to better
> understand the reasons behind the steps.
>
>> 1) Read in an RGB image
>> 2) We work with vertical edges, so rotate the image if needed
>
> This is only done to simplify the calculations. All that's required is
> to get the image data in a uniform orientation, so all subsequent
> subroutines will know what to expect (which avoids rechecking assumptions).

Sure. I only specified this because I think I used terms like "lines",
"rows" etc. later on.

>> 3) Besides the "red", "blue" and "green" channels, create a "gray"
>> channel that is the average of the other three
>
> The image code values should be linearized at this stage, so film/sensor
> non-linearity and gamma adjustments can't influence the calculations.

Yes, I am currently ignoring gamma, as my test images are gamma=1.0 anyway.
If I'm not mistaken, though, this all boils down to
"Pixel=InputPixel^Gamma" instead of just "Pixel=InputPixel", so it'll
be very easy to add.
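
Something like this sketch, assuming values scaled to [0,1] and a
simple power-law encoding (real scanner response curves may differ):

  #include <math.h>

  /* Undo a power-law gamma encoding: if the file stores
     v = linear^(1/gamma), then pow(v, gamma) recovers the linear value;
     gamma = 1.0 is a no-op. */
  double linearize(double v, double gamma)
  {
      return pow(v, gamma);
  }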

> It is customary to use a different weighting than the (R+G+B)/3 average.
> The ISO suggests calculating a luminance channel; a ratio of approx.
> 3R:6G:1B is closer to what we experience as luminance.

I suspected this. This is also easy to do, so I'll fix it right now.

>> 4) Normalize the image, so that 0.5% of the pixels clip down, and 0.5%
>> clip up
>
> I think that, especially on non-linear image codes, this will influence
> the MTF results, because the contrast is expanded. On a perfectly
> symmetrical brightness distribution its effect will be small, but the
> possibility of clipping in later stages should be avoided.

I'm not sure I understand why it can affect the MTF, but I'll take your
word for it.

I've included this normalization step mainly because of the way I decide
how many pixels in each line will be considered part of the "edge", i.e.
contribute to the ESF: I considered every pixel above one threshold and
below another (e.g. 10% and 90%, or more recently 5% and 95%, since the
former didn't work out well) to be part of the edge.

But (also judging from your other message) it seems I was completely off
track about this. I suppose I can just do like Imatest does, and say
that 10 pixels on the right and 6 on the left of the edge center will be
"part of the edge".

This way, the normalization process becomes unnecessary.

> Also, a check for at least 20% edge modulation should be made, to
> avoid too low an input S/N ratio.

I'm taking note, but I think I'll leave such checks for later when the
program is somewhat stable.

> It is however perfectly normal to normalize the ESF output to a range
> between 0.0 and 1.0, and later to normalize the SFR/MTF to 1.0 (100%) at
> zero spatial frequency.

Currently, I'm normalizing the ESF, the LSF and the MTF to between 0.0
and 1.0.

>> 5) For each line in the image, find the point with the max adjacent
>> pixel difference (should be the edge)
>
> Not necessarily, that is just the maximum gradient and that need not be
> the same as the edge.
> The ISO suggests combining this with your step 9, and determining
> the centroid of the LSF (by calculating the discrete derivative of the
> ESF). The centroids can be used for regression.
>
> The derivative suggested by the ISO is:
> "for each line of pixels perpendicular to the edge, the edge is
> differentiated using the discrete derivative "-0,5 ; +0,5", meaning that
> the derivative value for pixel "X" is equal to -1/2 times the value of
> the pixel immediately to the left, plus 1/2 times the value of the pixel
> to the right".

Sorry if I'm thick, but mathematics isn't my best friend...
You're implying that, for each line of pixels, the edge center(oid?)
will be the absolute maximum of the above derivative, aren't you?

But isn't the absolute maximum of the derivative precisely the maximum
gradient?

(Though the formula I use is currently simpler than the one you cite:
simply y'[i]=y[i+1]-y[i])

> [snip]
>
>> 6) By doing a least squares regression on those points, find a
>> straight line that ought to best represent the edge
>> 7) Associate a distance from that line to each pixel in the image.
>
> The ISO method shifts each row of the ESF by the calculated amount from
> the regression, but uses quarter pixel bins. This produces a 4x
> oversampling per pixel position.

This doesn't sound like a bad idea at all. It'd probably simplify things
a lot, especially with my "local regression" problems...

>> The function PixelValue(Distance) approximates the edge spread function.
>> 8) Use "some kind" of local regression to create a uniformly-spaced
>> version of the ESF, from the data described above.
>> 9) Derive the line spread function from the edge spread function:
>> LSF(i)=ESF(i+1)-ESF(i)
>
> See earlier remark, and provisions need to be made to detect multiple
> maxima (caused by noise/graininess).

What kind of provisions?

> [snip]
>> 11) Take the discrete Fourier transform of the resulting data
>
> And take the Modulus, and normalize.

Yes, I forgot to mention these steps, but they're done by the program.

>> Note that, at the moment, the input image must be an RGB, 8-bit ASCII
>> ("plain") PPM file. These can be created using "*topnm" and
>> "pnmtoplainpnm" from the NetPBM suite, or by using the GIMP.
>> Type --help for some uncomprehensive help.
>
> ;-) Room for improvement ...

I know :-) But I was concentrating more on the mathematical aspects for
now... after all, I *am* able to write code that loads an image file --
well, it may take me some time, but I can -- while to be sure I can
manage to compute an MTF or things like that, I have to try first...

> [snip]
>
>> Please keep in mind that I knew nothing about regressions, spread
>> functions or Fourier transforms two weeks ago -- and I suppose I don't
>> know that much more now.
>
> Isn't that the fun of programming? It forces you to describe the
> principles in detail. Learning new stuff is inevitable.

Sure, one of the reasons why I didn't just upload some edges and ask
you to do my homework on them with Imatest! ;-)

Even though the SourceForge description currently says little more than
"calculates the MTF from a slanted edge", ultimately I'd like this
program to do automatic deconvolution (or whatever is best) of images
based on the edge results.

Like, "to sharpen an image, first scan a cutter or razor blade using the
'--edge' option, then run the program on the image with '--sharpen'".
Would be nice.

>> I just read some Internet source and implemented what I thought they
>> meant.
>
> With the program (to be) available to the public, the chance of helping
> hands increases.

Hope so!

> The source code you have probably read will give a good
> start. As always, the devil is in the details, but that can be overcome.

To be honest, I've read very little source, if any (well, except the
FFTW tutorial).

My main resource has been
http://www.isprs.org/istanbul2004/comm1/papers/2.pdf

where I took the evil alternative to the "4x bins" that I'm currently
using, with all the regression nightmares it brings ;-)
But it was an interesting document, anyway.


by LjL
ljlbox(a)tiscali.it
From: Lorenzo J. Lucchini
Lorenzo J. Lucchini wrote:
> Lorenzo J. Lucchini wrote:
>
>> Bart van der Wolf wrote:
>>
>> > [snip]
>
>
>> I know. The main problem is that I'm using the FFTW library for taking
>> Fourier transforms, and while there seems to be a Windows version
>> available (several, actually), well... the site is down. The web
>> doesn't like me apparently.
>
> Fortunately, I see that CygWin comes with FFTW. I guess it will be easy
> enough then.

The new archive at
http://ljl.741.com/slantededge-alpha2.tar.gz
or
http://ljl.150m.com/slantededge-alpha2.tar.gz
now includes a Windows executable, as well as a test image.
Also, the program now uses luminance instead of RGB average, can be told
to gamma-correct, and doesn't normalize the image anymore.


At
http://ljl.741.com/comparison.gif
there is a graph showing the MTF calculated both by my program and
SFRWin, from the test image included in the archive.


by LjL
ljlbox(a)tiscali.it