From: Lorenzo J. Lucchini on
Bart van der Wolf wrote:
>
> "Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message
> news:7A91f.15406$133.4116(a)tornado.fastwebnet.it...
> SNIP
>
>> Why is my edge odd-looking?
>
>
> The pixels "seem" to be sampled with different resolution
> horizontally/vertically. As you said earlier, you were experimenting
> with oversampling and downsizing, so that may be the reason. BTW, I am
> using the version that came with the compiled alpha 3.

Maybe I see what you mean: it seems that every pixel in the edge has a
vertical neighbour that has the same value.
But if I look at the noise, this doesn't seem to hold true anymore (and
you mentioned this before, I think).

Look... now that I think of it, I once scanned an edge where every pixel
on the edge was *darker* than the one below it (think of an edge with
the same orientation as testedge.tif).

Now, I don't really remember what settings I used on that one, so this
doesn't mean much, but I'm really sure that I've used all the "right"
settings for the testedge.ppm and testedge.tif I've included in alpha 3.

I can't quite explain this.

> [snip]

Before I try to digest what you wrote... :-) Have you heard of the Abel
transform -- whatever that is, of course, don't assume I know just
because I mention it! -- being used to reconstruct the PSF from the LSF?
(yes, unfortunately I've only read about "LSF", singular, up to now)

I ask just to be sure I'm not following a dead end, in case you already
looked into this.


by LjL
ljlbox(a)tiscali.it
From: Bart van der Wolf on

"Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message
news:FWi1f.16530$133.15287(a)tornado.fastwebnet.it...
> Bart van der Wolf wrote:
>>
>> "Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message
>> news:7A91f.15406$133.4116(a)tornado.fastwebnet.it...
SNIP
> Maybe I see what you mean: it seems that every pixel in the edge has
> a vertical neighbour that has the same value.
> But if I look at the noise, this doesn't seem to hold true anymore
> (and you mentioned this before, I think).

Correct, that looks a bit strange.

Of course, for testing purposes only, one could make a well-behaved
CGI slanted edge and apply several Gaussian blurs to it, and even add
a little Poisson noise. That won't solve things for your scanner, but
it does provide a known response for testing the program.
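
Something along these lines would do (a rough sketch in Python with
numpy/scipy; the blur sigma, edge angle and photon count are arbitrary
choices, not anything measured):

import numpy as np
from scipy.ndimage import gaussian_filter

height, width = 256, 256
angle_deg = 5.0                  # slant of the edge, in degrees
sigma = 1.5                      # the "known" blur to be recovered later

# Ideal step edge, slanted by angle_deg from vertical.
y, x = np.mgrid[0:height, 0:width]
edge = x > (width / 2 + np.tan(np.radians(angle_deg)) * (y - height / 2))
image = np.where(edge, 0.8, 0.2)         # bright and dark plateau levels

# Apply a known Gaussian blur (the PSF the program should recover).
blurred = gaussian_filter(image, sigma=sigma)

# Add a little Poisson (shot) noise by treating pixel values as photon counts.
photons = 1000.0
noisy = np.random.poisson(blurred * photons) / photons

Since the blur is known exactly, the ESF/LSF the program reports can be
checked against the Gaussian's analytical response.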

SNIP
> Before I try to digest what you wrote... :-) Have you heard of the
> Abel transform -- whatever that is, of course, don't assume I know
> just because I mention it! -- being used to reconstruct the PSF from
> the LSF? (yes, unfortunately I've only read about "LSF", singular,
> up to now)

No, I hadn't heard of it (or I wasn't paying attention when I did
;-)).
http://mathworld.wolfram.com/AbelTransform.html describes it, but I'm
not enough of a mathematician to immediately grasp its usefulness.
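
If I read the MathWorld page correctly, the relevant piece is the
inverse transform: when the PSF is rotationally symmetric, the LSF is
its line projection (its Abel transform), and

    f(r) = -(1/pi) * Integral[r..inf] F'(y) / sqrt(y^2 - r^2) dy

recovers the radial PSF f(r) from the measured LSF F(y). A crude
numerical version (Python/numpy; it assumes a noise-free, symmetric LSF
sampled from y = 0 outward, and uses a very simple quadrature):

import numpy as np

def inverse_abel(lsf, dy=1.0):
    """lsf[0] is the value at y = 0; samples every dy; symmetric LSF assumed."""
    lsf = np.asarray(lsf, dtype=float)
    n = len(lsf)
    y = np.arange(n) * dy
    dF = np.gradient(lsf, dy)            # numerical F'(y)
    psf = np.zeros(n)
    for i in range(n - 1):
        r = y[i]
        yy = y[i + 1:]                   # start just above r to avoid the pole
        psf[i] = -np.sum(dF[i + 1:] / np.sqrt(yy**2 - r**2)) * dy / np.pi
    return psf                           # radial PSF profile f(r); last sample stays 0

Whether that helps here is another matter, since a scanner's PSF is
usually not rotationally symmetric (your horizontal and vertical LSFs
clearly differ).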

> I ask just to be sure I'm not following a dead end, in case you
> already looked into this.

I'll have to see for myself if it can be of use; too early to tell.
However, do keep in mind that I am also considering the final
deconvolution or convolution step, which will take a very long time on
large images (e.g. 39 Mega-pixels on a full 5400 ppi scan from my film
scanner).

There is a lot of efficiency to be gained from a separable function
(like a Gaussian, or a polynomial(!)) versus one that requires a
square/rectangular kernel. It's roughly the difference between e.g. 18
and 81 multiplications per pixel when convolving with a 9x9 kernel,
times the number of pixels.
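
To illustrate with a quick sketch (Python with scipy.ndimage; the
Gaussian taps and image size are arbitrary): the same 9x9 Gaussian
applied as two 1-D passes gives the identical result as the full 2-D
kernel, at a fraction of the multiplications.

import numpy as np
from scipy.ndimage import convolve, convolve1d

image = np.random.rand(512, 512)        # stand-in for a scan

# 9 one-dimensional Gaussian taps (sigma chosen arbitrarily), summing to 1.
t = np.arange(-4, 5)
g = np.exp(-t**2 / (2 * 1.5**2))
g /= g.sum()

# Separable: one pass per axis, about 9 + 9 = 18 multiplications per pixel.
separable = convolve1d(convolve1d(image, g, axis=0), g, axis=1)

# Full 9x9 kernel (outer product of the taps): about 81 multiplications per pixel.
full = convolve(image, np.outer(g, g))

print(np.abs(separable - full).max())   # agrees to rounding error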

What I'm actually suggesting is that I'm willing to compromise a
little accuracy (:-() for a huge speed gain in execution. If execution
speed is unacceptable in actual use, then it won't be used. But I'm
willing to be surprised by any creative solution ...

One final remark for now: I think that for large images the
deconvolution path may prove too processing-intensive (although the
CGLS method used by "Image Analyzer" seems rather efficient). It is
probably faster to convolve in the spatial domain with a small kernel
than to deconvolve in the frequency domain, which is why I often
mention the high-pass filter solution. There are also free image
processing applications, like ImageMagick <http://www.imagemagick.org>
(which also includes APIs for C and C++), that can use arbitrarily
sized (square) convolution kernels, so the final processing can be
done in 16-bit/channel, or I believe even in 32-bit/channel if need be.
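
As a concrete (if generic) example of what I mean by the high-pass
filter solution, here is the kind of small kernel one could apply in
the spatial domain. The 3x3 Laplacian-based kernel and the 'amount' are
just placeholders, not something derived from your measured LSFs; the
same coefficients, written out as a matrix, could be fed to
ImageMagick's convolution operator.

import numpy as np
from scipy.ndimage import convolve

image = np.random.rand(512, 512)          # stand-in for a scanned image

# Identity plus a scaled Laplacian: boosts high frequencies, keeps the mean.
amount = 1.0
laplacian = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)
kernel = np.zeros((3, 3))
kernel[1, 1] = 1.0
kernel += (amount / 4.0) * laplacian      # kernel coefficients still sum to 1

sharpened = convolve(image, kernel, mode='reflect')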

Bart

From: Lorenzo J. Lucchini on
Bart van der Wolf wrote:
>
> "Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message
> news:FWi1f.16530$133.15287(a)tornado.fastwebnet.it...
>
>> Bart van der Wolf wrote:
>
> [snip]
>
> There is a lot of efficiency to be gained from a separable function
> (like a Gaussian, or a polynomial(!)) versus one that requires a
> square/rectangular kernel. It's roughly the difference between e.g. 18
> and 81 multiplications per pixel when convolving with a 9x9 kernel,
> times the number of pixels.
>
> What I'm actually suggesting is that I'm willing to compromise a little
> accuracy (:-() for a huge speed gain in execution. If execution speed is
> unacceptable in actual use, then it won't be used. But I'm willing to be
> surprised by any creative solution ...

You're right about speed, of course.

One thing: by "separable function" you mean one that can be split into
two one-dimensional functions, which are then applied along the
horizontal and vertical axes, don't you?

If so... are we really sure that there isn't a way to directly apply the
two LSFs for "deconvolution" (well, it'll have to be something slightly
different of course), instead of somehow reconstructing a PSF and then
splitting it again?
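
To make the question concrete, here's roughly what I have in mind,
sketched in Python under the assumption that the PSF really is
separable (so that the horizontal and vertical LSFs are its two
factors): build the 2-D transfer function as the outer product of the
1-D transforms of the LSFs, and do a regularized (Wiener-style)
division in the frequency domain, with no explicit 2-D PSF anywhere.
The noise_reg constant is a made-up regularization value.

import numpy as np

def centered_otf(lsf, n):
    """1-D transfer function of an LSF, its middle sample taken as the centre.
    Assumes the LSF is shorter than the image dimension n."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf / lsf.sum()
    padded = np.zeros(n)
    c = len(lsf) // 2
    for i, v in enumerate(lsf):
        padded[(i - c) % n] = v          # wrap so the centre sits at index 0
    return np.fft.fft(padded)

def deconvolve_from_lsfs(image, lsf_h, lsf_v, noise_reg=1e-2):
    """Wiener-style deconvolution of 'image' using only the two 1-D LSFs."""
    H, W = image.shape
    otf2d = np.outer(centered_otf(lsf_v, H), centered_otf(lsf_h, W))
    wiener = np.conj(otf2d) / (np.abs(otf2d) ** 2 + noise_reg)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * wiener))

Of course this only works to the extent that the PSF is truly
separable, and it ignores edge effects (the FFT wraps around), but it
would avoid the reconstruct-a-PSF-then-split-it detour entirely.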

> One final remark for now: I think that for large images the
> deconvolution path may prove too processing-intensive (although the
> CGLS method used by "Image Analyzer" seems rather efficient). It is
> probably faster to convolve in the spatial domain with a small kernel
> than to deconvolve in the frequency domain, which is why I often mention
> the high-pass filter solution. There are also free image processing
> applications, like ImageMagick <http://www.imagemagick.org> (which also
> includes APIs for C and C++), that can use arbitrarily sized (square)
> convolution kernels, so the final processing can be done in
> 16-bit/channel, or I believe even in 32-bit/channel if need be.

Indeed, I've been considering linking to ImageMagick, as well as to the
NetPBM library.

ImageMagick is probably more refined than NetPBM, and judging from your
article about resizing algorithms, it definitely has some merits.

Also, if I want to support more formats than just PNM (which I'm only
half-supporting right now, anyway), I'll have to use some kind of
library -- I'm definitely not going to manually write TIFF loading code;
not my cup of tea :-)


by LjL
ljlbox(a)tiscali.it
From: Bart van der Wolf on

"Lorenzo J. Lucchini" <ljlbox(a)tiscali.it> wrote in message
news:yfv1f.17515$133.14442(a)tornado.fastwebnet.it...
SNIP
> One thing: by "separable function" you mean one that can be split
> into two one-dimensional functions, which are then applied along the
> horizontal and vertical axes, don't you?

Yep.

> If so... are we really sure that there isn't a way to directly apply
> the two LSFs for "deconvolution" (well, it'll have to be something
> slightly different of course), instead of somehow reconstructing a
> PSF and then splitting it again?

I'm working on that, but my method currently starts with finding a
match to the ESF; there's no need to make a PSF for that.

For now the PSF is still useful for existing applications that require
a rectangular/square PSF as input for deconvolution, and the FIR
(Finite Impulse Response) filter support of many existing applications
is often limited to 7x7 or 9x9 kernels. Also, calculating a PSF from an
ESF is not too much effort, although the final execution of a PSF is
slower than applying two 'separated' functions.
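
For reference, the ESF-to-PSF step I have in mind is roughly this (a
sketch, not my actual code; the Hamming window is just one common way
to tame the noise that differentiation amplifies):

import numpy as np

def esf_to_lsf(esf):
    """Differentiate an (oversampled) ESF to get the LSF, window and normalise it."""
    lsf = np.gradient(np.asarray(esf, dtype=float))
    lsf *= np.hamming(len(lsf))          # suppress amplified noise in the tails
    return lsf / lsf.sum()

def separable_psf(lsf_h, lsf_v):
    """Square PSF kernel as the outer product of the two LSFs (assumes separability)."""
    psf = np.outer(np.asarray(lsf_v, dtype=float), np.asarray(lsf_h, dtype=float))
    return psf / psf.sum()

The outer product is where the separability assumption enters: applying
that square kernel directly costs len(lsf_h) * len(lsf_v)
multiplications per pixel, against len(lsf_h) + len(lsf_v) for the two
1-D passes.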

SNIP
> ImageMagick is probably more refined than NetPBM, and judging from
> your article about resizing algorithms, it definitely has some
> merits.

I have no experience with NetPBM, but ImageMagick is quite (too?)
powerful, and it also covers many file formats.

Bart
