From: Don on
On Sun, 02 Oct 2005 03:23:42 +0200, "Lorenzo J. Lucchini"
<ljlbox(a)tiscali.it> wrote:

>By the way, I see that the levels (or perhaps the gamma) are different
>if I scan at 16-bit and if I scan at 8-bit, with otherwise the same
>settings. Actually, the 16-bit scan clips. Wonderful, another bug in my
>fine scanner driver!

It could be that 8-bit clips as well but you just can't see it.
Another common problem when looking at 16-bit images with an 8-bit
histogram is that the programs don't calculate correctly. Even
Photoshop "massages" the histogram data before it shows it, resulting
in some really weird artefacts. For all those reasons I wrote
my own 16-bit histogram program.
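
A minimal sketch of the idea (not my actual program; assuming numpy and
a uint16 image): a true 16-bit histogram simply counts all 65536
possible values, instead of collapsing them into 256 bins first.

    import numpy as np

    def histogram_16bit(image):
        # one count per possible 16-bit value -- no 8-bit bucketing
        return np.bincount(image.ravel(), minlength=65536)

    # e.g. check for clipping at the extremes of the range
    img = np.random.randint(0, 65536, (100, 100), dtype=np.uint16)
    hist = histogram_16bit(img)
    print("at 0:", hist[0], " at 65535:", hist[65535])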

>> Yes, an 8-bit image is inadequate, which results in inadequate metrics.
>
>Let's agree on terms. I took the "metrics" as meaning the slanted edge
>test results, and the "image" is just the image, that is the picture to
>be sharpened (or whatever).

Yes, metrics simply means the results of a measurement. In the above
context, the image is whatever you made these measurements on.

>No wait -- my understanding is that, by definition, "optimal sharpening"
>is the highest amount you can apply *without* causing haloes.
>Perhaps unsharp mask in particular always causes them, I don't know, but
>there isn't only unsharp mask around.
>
>Haloes show quite clearly on the ESF graph, and I assure you that I
>*can* apply some amount of sharpening that doesn't cause "hills" in the
>ESF graph.

As I mentioned last time it's all about how the sharpening is done. It
simply means localized increase of (edge) contrast resulting in an
optical illusion i.e. we perceive such an image as sharp.

Now, whether you get ESF peaks is not really what I was addressing but
the fact that the whole concept of sharpening is based on this
selective contrast. So whether this causes ESF peaks or not, the image
has been (in my view) "corrupted". It may look good, and all that, but
I just don't like the concept.

>>>>[snip: the ruler test]
>>>
>> Of course, the key question is, is it worth it? In my case, in the
>> end, I decided it wasn't. But it still bugs me! ;o)
>
>I know. By the way, slightly changing the topic, what about two-pass
>scanning and rotating the slide/film 90 degrees between the two passes?
>I mean, we know the stepper motor axis has worse resolution than the CCD
>axis. So, perhaps multi-pass scanning would work best if we let the CCD
>axis get a horizontal *and* a vertical view of the image.
>
>Of course, you'd still need to sub-pixel align and all that hassle, but
>perhaps the results could be better than the "usual" multi-pass scanning.
>Clearly, there is a disadvantage in that you'd have to physically rotate
>your slides or film between passes...

And it's also nearly impossible to rotate by exactly 90 degrees, at
least not to an accuracy that satisfies the scanner. So there will be
problems with that too. Also, due to stretching, the pixels are no
longer perfectly rectangular, so that will have to be fixed. Etc.

It's a very clever idea, though!

Another option (for lower resolutions) is to simply take a picture
with a high resolution digital camera. This causes many other
problems, of course, but at least as far as horizontal vs vertical
distortion goes it could be much more regular than a scanner.

>>>What I'm saying is, well, this will be no issue, as the channels will be
>>>re-aligned *before* taking either average or luminance.
>>
>> I would *not* align them because that would change the values!!! And
>> those changes are bound to be much more than what you're measuring!
>
>Hm? I don't follow you. When you have got the ESF, you just *have* your
>values. You can then move them around to your heart's content, and you
>won't lose anything. Which implies that you can easily move the three
>ESFs so that they're all aligned (i.e. the "edge center" is found in the
>same place), before taking any kind of average.

I'm not talking about ESF per se but in general. If you align the
channels (using some sort of sub-pixel interpolation) you will be
changing the actual sampled values. This may work visually but it will
throw off any measurements or calculations based on such data.
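
A tiny illustration (made-up numbers, assuming numpy): shift one row of
a channel by half a pixel using linear interpolation, and you get
values that were never actually sampled.

    import numpy as np

    row = np.array([10.0, 10.0, 50.0, 200.0, 200.0])  # one row, one channel

    # half-pixel shift by linear interpolation: average adjacent samples
    shifted = 0.5 * (row[:-1] + row[1:])
    print(shifted)  # [ 10.  30. 125. 200.] -- 30 and 125 were never sampled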

>> In principle, you should never do anything to the data coming from the
>> scanner if the goal is to perform measurements. That's why even gamma
>> is not applied but only linear data is used for calculations.
>
>Yes, and I'm not doing anything to the data *coming from the scanner*;
>just to the ESF, which is a high-precision, floating point function that
>I've calculated *from* the scanner data.
>It's not made of pixels: it's made of x's and y's, in double precision
>floating point. I assure you that I'm already doing so much more
>(necessary) evil to these functions, that shifting them around a bit
>isn't going to lose anything.

I don't know exactly what you're doing and it may very well not be
important but it's an easy trap to fall into. That's all I was saying.

>> I really think the best way is to simply do each channel separately
>> and then see what the results are. In theory, they should be pretty
>> equal. If you want a single number I would then just average those
>> three results.
>
>Yes, in theory. In practice, my red channel has a visibly worse MTF than
>the green channel, for one.

That's *very* interesting!!! I wonder why that is?

>>BTW, why don't you just use Imatest?
>
>Because they're asking money for it :-)

Oh, really! That's disgusting!!

Like I said, I'm not really into all that, but aren't there free
versions available? Surely, others must have done this many times by
now? Especially if Imatest is so greedy!

>I've had my trial runs, finished
>them up, and I'm now left with SFRWin and no intention to buy Imatest
>(not that it's a bad program, it's just that I don't buy much of
>anything in general).
>
>I'm not sure I would call myself a "free software advocate", but I
>definitely do like free software. And certainly the fact that my program
>might be useful to other people gives me more motivation to write it,
>than if it were only useful to myself.

As you know, in GNU sense "free" doesn't refer to cost but to the fact
that the software is not "imprisoned".

>Not necessarily altruism, mind you, just seeing a lot of downloads of a
>program I've written would probably make me feel a star :-) hey, we're
>human.

That's one of the main motivations for some of the best free software
out there. Or simply because people are curious and don't believe
the marketroids, so they do things themselves.

>Anyway, have you tried out ALE yet?

No, unfortunately not! It's still sitting on top of my "X-files" (I
have a temporary "x" directory where I keep all my current stuff).

I got sidelined because I ran out of disk space. You see, I've done
all my programming and I'm now heavily into scanning. It's complicated
to explain but I want to scan everything to disk first before I start
offloading the images to DVDs. The reason is that the chronology is
unclear, so until I finish scanning *all* of them I will not be able
to re-order them correctly. (Looking at slides with the naked eye is not
good enough.) And I don't want to start burning DVDs only to find out
later that the images are actually out of chronological order. I'm just
being silly, but that's the workflow I chose.

So I was forced to get a new drive. And then "just for fun" I decided
to format it as NTFS (the first time I did that). Long story short,
I'm still running tests and "playing" with it...

>I don't think it can re-align
>*single* rows or columns in an image, but it does perform a lot of
>geometry transformation while trying to align images. And it works with
>16 bit images and all, which you were looking for, weren't you? It's
>just so terribly slow.

I have my own alignment program and it does what I need, but I was
just interested to see what they did in ALE. In the end, I not only
sub-pixel align in my program but actually transform the image. I do
this with 4 anchor points instead of going with a full mesh, exactly
because the mesh approach is so slow.
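
Roughly along these lines (not my actual code; assuming OpenCV, and the
file names and coordinates are made up): four point pairs determine a
single perspective transform, so the whole frame can be warped in one
pass instead of solving a slow per-cell mesh.

    import cv2
    import numpy as np

    img = cv2.imread("pass2.png")  # hypothetical second-pass scan
    h, w = img.shape[:2]

    # four matching anchor points: where they sit in pass 2 vs. pass 1
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32([[1.3, 0.7], [w + 0.4, 0.2],
                      [w - 0.6, h + 1.1], [0.2, h - 0.5]])

    M = cv2.getPerspectiveTransform(src, dst)  # 3x3 transform from 4 pairs
    aligned = cv2.warpPerspective(img, M, (w, h))
    cv2.imwrite("pass2_aligned.png", aligned)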

Don.
From: Lorenzo J. Lucchini on
Bart van der Wolf wrote:
>
> "john rehn" <john.e.rehn(a)gmail.com> wrote in message
> news:1128251538.948673.68060(a)g49g2000cwa.googlegroups.com...
>
>> Hi !
>>
>> I did not see this thread before I started my own.
>> But may I ask you: what SFR do you actually get from your
>> scanners?
>
>
> http://www.xs4all.nl/~bvdwolf/main/foto/Imatest/SFR_DSE5400_GD.png

Yes, "the guy" has some resolution graphs on xs4all :-)

You can see the figures for my Epson RX500 here:
http://ljl.150m.com/scans/fig-blade2.gif

But please notice that I might have made some mistakes scanning:
somehow, the edge image looks color-corrected and possibly
gamma-corrected, even though I thought I told the driver to disable that.

Still, the actual MTF doesn't look too different from the ones I've got
from scans that I know are good.

by LjL
ljlbox(a)tiscali.it
From: Lorenzo J. Lucchini on
Don wrote:
> On Sun, 02 Oct 2005 03:23:42 +0200, "Lorenzo J. Lucchini"
> <ljlbox(a)tiscali.it> wrote:
>
> [my 16-bit scans having different colors from my 8-bit scans]

I don't really know, I'll have to do some tests on this. Yeah, it could
be that Photoshop is messing up something, but the images do *look* very
different, too, with the 16-bit image having the whitepoint at 255,
while the 8-bit scan is around 230 or so.

> [snip]
>
>>Haloes show quite clearly on the ESF graph, and I assure you that I
>>*can* apply some amount of sharpening that doesn't cause "hills" in the
>>ESF graph.
>
> As I mentioned last time it's all about how the sharpening is done. It
> simply means localized increase of (edge) contrast resulting in an
> optical illusion i.e. we perceive such an image as sharp.
>
> Now, whether you get ESF peaks is not really what I was addressing but
> the fact that the whole concept of sharpening is based on this
> selective contrast. So whether this causes ESF peaks or not, the image
> has been (in my view) "corrupted". It may look good, and all that, but
> I just don't like the concept.

I see, but I'm not sure sharpening can be dismissed as an optical illusion.
From all I've understood, scanners (especially staggered array ones)
soften the original image, and sharpening, when done correctly, is
simply the inverse operation.

Actually, "softening" and "sharpening" are just two specific case, the
general concept being: if your optical system *corrupts* the original
target it's imaging, you can *undo* this corruption, as long as you know
exactly the (convolution) function that represents the corruption.

Look at these images, for example:
http://refocus-it.sourceforge.net/

All I know is that I can't read what's written in the original image,
while I can quite clearly read the "restored" version(s).
Yes, there is much more noise, that's unavoidable (especially in such an
extreme example)... but I have a problem calling the technique "corruption".

Or look at the first example image here:
http://meesoft.logicnet.dk/Analyzer/help/help2.htm#RestorationByDeconvolution

Sure, the "restored" image is not as good as one that was taken without
motion blur to begin with, but still the result is quite impressive.

And note that both programs, Refocus-it and Image Analyzer, *guess* (or
let the user guess) the kind of blurring function *from* the image --
which does result in artifacts, as guessing is hard (the same thing that
happens with unsharp masking).

But if you know instead of guessing, I'm convinced the sharpened result
will not only be more pleasing to the eye, but also mathematically
closer to the original target.
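
Here's a toy check of that claim (just a sketch, assuming numpy): with
a *known* blur kernel and no noise at all, frequency-domain
deconvolution recovers the original exactly.

    import numpy as np

    signal = np.zeros(64); signal[30:34] = 1.0  # a sharp "bar" target
    kernel = np.zeros(64); kernel[:5] = 0.2     # the known 5-tap box blur

    H = np.fft.fft(kernel)
    blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))

    # undo the convolution: divide by H in the frequency domain
    # (this particular H has no zeros; real PSFs and noise are another matter)
    restored = np.real(np.fft.ifft(np.fft.fft(blurred) / H))
    print(np.max(np.abs(restored - signal)))    # ~1e-16: exact up to rounding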

>>>>>[snip: the ruler test]
>>>>
>>>Of course, the key question is, is it worth it? In my case, in the
>>>end, I decided it wasn't. But it still bugs me! ;o)
>>
>>I know. By the way, slightly changing the topic, what about two-pass
>>scanning and rotating the slide/film 90 degrees between the two passes?
>>I mean, we know the stepper motor axis has worse resolution than the CCD
>>axis. So, perhaps multi-pass scanning would work best if we let the CCD
>>axis get a horizontal *and* a vertical view of the image.
>>
>>Of course, you'd still need to sub-pixel align and all that hassle, but
>>perhaps the results could be better than the "usual" multi-pass scanning.
>>Clearly, there is a disadvantage in that you'd have to physically rotate
>>your slides or film between passes...
>
> And it's also nearly impossible to rotate by exactly 90 degrees, at
> least not to an accuracy that satisfies the scanner. So there will be
> problems with that too. Also, due to stretching, the pixels are no
> longer perfectly rectangular, so that will have to be fixed. Etc.

Yes, but these things have to be done (though perhaps to a lesser
extent) with "simple" multi-pass scans, as well, because of the problems
we know -- stepper motor and all.
I'm not sure just *how much* increased complexity my idea would add to
the game.

> [snip]
>
>>>>What I'm saying is, well, this will be no issue, as the channels will be
>>>>re-aligned *before* taking either average or luminance.
>>>
>>>I would *not* align them because that would change the values!!! And
>>>those changes are bound to be much more than what you're measuring!
>>
>>Hm? I don't follow you. When you have got the ESF, you just *have* your
>>values. You can then move them around to your heart's content, and you
>>won't lose anything. Which implies that you can easily move the three
>>ESFs so that they're all aligned (i.e. the "edge center" is found in the
>>same place), before taking any kind of average.
>
> I'm not talking about ESF per se but in general. If you align the
> channels (using some sort of sub-pixel interpolation) you will be
> changing the actual sampled values. This may work visually but it will
> throw off any measurements or calculations based on such data.

Ok, I see.
But don't worry then, I don't have to do any sub-pixel interpolation in
the ESF case (or, you could say, I *do* have to do some, but I have to
do it no matter whether I have to re-align or not).

>>>In principle, you should never do anything to the data coming from the
>>>scanner if the goal is to perform measurements. That's why even gamma
>>>is not applied but only linear data is used for calculations.
>>
>>Yes, and I'm not doing anything to the data *coming from the scanner*;
>>just to the ESF, which is a high-precision, floating point function that
>>I've calculated *from* the scanner data.
>>It's not made of pixels: it's made of x's and y's, in double precision
>>floating point. I assure you that I'm already doing so much more
>>(necessary) evil to these functions, that shifting them around a bit
>>isn't going to lose anything.
>
> I don't know exactly what you're doing and it may very well not be
> important but it's an easy trap to fall into. That's all I was saying.

Ok. Just to explain briefly: imagine scanning a sharp edge. You now want
to obtain the function that describes how pixel values change across the
edge (the edge spread function = ESF).

So you take any single row of the scanned edge's image, and look at how
pixels change.

This function looks like the one I had uploaded here:
http://ljl.150m.com/scans/fig-blade2.gif (the first graph)


But, how can the function be so precisely defined, when there are only a
few pixels representing the edge transition in any one row of the edge
image?

Interpolation? No. You simply take more than just *one* row: you take
all of them.
And you scan an edge that's tilted by a few degrees with respect to the
scanner axis you want to measure.

This way, you get oversampled data, just as if you were doing many
misaligned scan passes -- only, you know quite precisely what the
misalignment of each row is (as you can measure where the edge *is* with
decent precision).

Once you've done this, and you have the ESF, you don't need to do
anything sub-pixel anymore; you already have an oversampled, "sub-pixel"
function that you've obtained not by interpolation, but by clever use of
real data.
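
In code, the gist is something like this (a rough sketch, not my actual
program; assuming numpy and a grayscale edge image):

    import numpy as np

    def esf_from_slanted_edge(img, bins_per_pixel=4):
        rows, cols = img.shape
        x = np.arange(cols)
        offsets, values = [], []
        for r in range(rows):
            row = img[r].astype(float)
            d = np.abs(np.diff(row))
            edge = np.sum(x[:-1] * d) / np.sum(d)  # sub-pixel edge position
            offsets.append(x - edge)   # each pixel's distance from "its" edge
            values.append(row)
        offsets = np.concatenate(offsets)
        values = np.concatenate(values)
        # pool all rows into bins 1/4 pixel wide; the tilt puts each row's
        # edge at a different sub-pixel phase, so the bins fill up with
        # real, distinct samples -- oversampling without interpolation
        idx = np.round(offsets * bins_per_pixel).astype(int)
        idx -= idx.min()
        sums = np.bincount(idx, weights=values)
        counts = np.bincount(idx)
        return sums / np.maximum(counts, 1)  # mean per bin = oversampled ESF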

>>>I really think the best way is to simply do each channel separately
>>>and then see what the results are. In theory, they should be pretty
>>>equal. If you want a single number I would then just average those
>>>three results.
>>
>>Yes, in theory. In practice, my red channel has a visibly worse MTF than
>>the green channel, for one.
>
> That's *very* interesting!!! I wonder why that is?

Well, for one, who's to say that my scanner's white light source is white?
If red is less well represented in the light source than the other
primaries, there will be more noise in the red channel.

Though noise shouldn't directly affect the MTF, AFAIK.

But there are other possibilities: being a flatbed, my scanner has a
glass. Who says the glass "spreads" all wavelengths the same way?

>>>BTW, why don't you just use Imatest?
>>
>>Because they're asking money for it :-)
>
> Oh, really! That's disgusting!!

:-) It's their right. It's a fine program after all. And guess what: a
guy on an Italian newsgroup just posted an *executable-only* Visual
Basic program for calculating resistor values from colors.
He was asking for advice about how to improve the program.
But when people told him they couldn't be of much help without the
source code, he replied that he wouldn't post it on the net.

My reaction was to write a similar (but hopefully better) program, send
it to him, and tell him I transferred the copyright to him ;-)

> Like I said, I'm not really into all that, but aren't there free
> versions available? Surely, others must have done this many times by
> now? Especially if Imatest is so greedy!

There is SFRWin, which is free, though not open source; and it only
outputs the MTF, while Imatest also gives you the ESF and the LSF (which
have to be calculated to get to the MTF, anyway), as well as some other
useful information.

Also, both Imatest and SFRWin only work under Windows (a version of
SFRWin, called SFR2, runs under Matlab, which *might* mean it could work
under Octave, for all I know, but I doubt it).

> [snip]

by LjL
ljlbox(a)tiscali.it
From: Don on
On Sun, 02 Oct 2005 16:07:04 +0200, "Lorenzo J. Lucchini"
<ljlbox(a)tiscali.it> wrote:

>> Now, whether you get ESF peaks is not really what I was addressing but
>> the fact that the whole concept of sharpening is based on this
>> selective contrast. So whether this causes ESF peaks or not, the image
>> has been (in my view) "corrupted". It may look good, and all that, but
>> I just don't like the concept.
>
>I see, but I'm not sure sharpening can be dismissed as an optical illusion.

It can, because the image is not really sharpened; only the contrast on
both sides of the border between dark and light areas is enhanced
locally.

To really sharpen the image one would need to "shorten" the transition
from dark to light, i.e. eliminate or reduce the "fuzzy" part, and
generally that's not what's being done.

One simple proof of that is halos. If the image were truly sharpened
(the fuzzy transition is shortened) you could never get haloes! In the
most extreme case of sharpening (complete elimination of gray
transition) you would simply get a clean break between black and
white. That's the sharpest case possible.

The fact that you get halos shows that so-called sharpening algorithms
do not really sharpen but only "fudge" or as I would say "corrupt".
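
You can see it in one dimension (a toy sketch with made-up numbers,
assuming numpy): unsharp mask a soft edge and the values overshoot past
black and white -- the halo -- while the grey transition stays just as
wide.

    import numpy as np

    edge = np.array([0, 0, 0, 64, 128, 192, 255, 255, 255], dtype=float)

    blur = np.convolve(edge, [1/3, 1/3, 1/3], mode="same")
    sharpened = edge + 1.5 * (edge - blur)  # classic unsharp mask, amount 1.5

    inner = sharpened[1:-1]          # ignore array-boundary artefacts
    print(inner.min(), inner.max())  # -32.0 and 286.5: under- and overshoot
    # the grey transition is still three pixels wide -- nothing "shortened"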

> From all I've understood, scanners (especially staggered array ones)
>soften the original image, and sharpening, when done correctly, is
>simply the inverse operation.

But that's my point, exactly: it does not really reverse the process
but only tries to, and that just adds to the overall "corruption".

>Actually, "softening" and "sharpening" are just two specific case, the
>general concept being: if your optical system *corrupts* the original
>target it's imaging, you can *undo* this corruption, as long as you know
>exactly the (convolution) function that represents the corruption.

In theory but not in practice. And certainly not always. You can't
reverse a lossy process. You can "invent" pixels to compensate
("pretend to reverse") but you can never get the lossy part back.

Now, some of those algorithms are very clever and produce good results
while others just corrupt the image even more (e.g. anti-aliasing).

Whether the result is acceptable or not depends on each individual
because it's a subjective call, really.

>Or look at the first example image here:
>http://meesoft.logicnet.dk/Analyzer/help/help2.htm#RestorationByDeconvolution
>
>Sure, the "restored" image is not as good as one that was taken without
>motion blur to begin with, but still the result is quite impressive.

Which only confirms what I said: Some processes are very clever but
you can never really get the lossy part back.

>>>>I really think the best way is to simply do each channel separately
>>>>and then see what the results are. In theory, they should be pretty
>>>>equal. If you want a single number I would then just average those
>>>>three results.
>>>
>>>Yes, in theory. In practice, my red channel has a visibly worse MTF than
>>>the green channel, for one.
>>
>> That's *very* interesting!!! I wonder why that is?
>
>Well, for one, who's to say that my scanner's white light source is white?
>If red is less well represented in the light source than the other
>primaries, there will be more noise in the red channel.
>
>Though noise shouldn't directly affect the MTF, AFAIK.
>
>But there are other possibilities: being a flatbed, my scanner has a
>glass. Who says the glass "spreads" all wavelengths the same way?

But none of that should affect the results because you're dealing with
the *relative* change in brightness along the edge. Now, in *absolute*
terms there may be a difference between channels, but if, for example,
red receives less light than the other channels, the *relative*
transition should still be the same, only the red pixels will be a bit
darker.

I don't think noise enters into this because red would need to receive
considerably less light for noise to affect the measurements. If that
were the case you would notice this in the scans as they would get a
cyan cast.

Don.
From: Lorenzo J. Lucchini on
Don wrote:
> On Sun, 02 Oct 2005 16:07:04 +0200, "Lorenzo J. Lucchini"
> <ljlbox(a)tiscali.it> wrote:
>
>
>>>Now, whether you get ESF peaks is not really what I was addressing but
>>>the fact that the whole concept of sharpening is based on this
>>>selective contrast. So whether this causes ESF peaks or not, the image
>>>has been (in my view) "corrupted". It may look good, and all that, but
>>>I just don't like the concept.
>>
>>I see, but I'm not sure sharpening can be dismissed as an optical illusion.
>
> It can, because the image is not really sharpened; only the contrast on
> both sides of the border between dark and light areas is enhanced
> locally.
>
> To really sharpen the image one would need to "shorten" the transition
> from dark to light, i.e. eliminate or reduce the "fuzzy" part, and
> generally that's not what's being done.
>
> One simple proof of that is halos. If the image were truly sharpened
> (the fuzzy transition is shortened) you could never get haloes! In the
> most extreme case of sharpening (complete elimination of gray
> transition) you would simply get a clean break between black and
> white. That's the sharpest case possible.
>
> The fact that you get halos shows that so-called sharpening algorithms
> do not really sharpen but only "fudge" or as I would say "corrupt".

But my point is that sharpening algorithms need not necessarily produce
haloes. I don't have proof -- actually, proof is what I'm hoping to
obtain if I can make my program work! -- but that is exactly my
hypothesis: halos don't have to occur.

By the way - not that it's particularly important, but I don't think the
"sharpest case possible" is a clean break between black and white, as at
least *one* gray pixel will be unavoidable, unless you manage to place
all of your "borders" *exactly* at the point of transition between two
pixels.

>>From all I've understood, scanners (especially staggered array ones)
>>soften the original image, and sharpening, when done correctly, is
>>simply the inverse operation.
>
> But that's my point, exactly: it does not really reverse the process
> but only tries to, and that just adds to the overall "corruption".

We should make a distinction between unsharp masking and similar
techniques, and processes based on knowledge of the system's point
spread function; the latter is what I'm trying to work on.

Unsharp masking just assumes that every pixel is "spread out" in a
certain way (well, you can set some parameters), and bases its
reconstruction on that.

*That*, I think, is its shortcoming. But if you knew *exactly* the way
every pixel is "spread out" (i.e., if you knew the point spread
function), my understanding is that you *could* then really reverse the
process, by inverting the convolution.

Read below before you feel an urge to say that it's impossible because
the process is irreversible...

>>Actually, "softening" and "sharpening" are just two specific case, the
>>general concept being: if your optical system *corrupts* the original
>>target it's imaging, you can *undo* this corruption, as long as you know
>>exactly the (convolution) function that represents the corruption.
>
> In theory but not in practice. And certainly not always. You can't
> reverse a lossy process. You can "invent" pixels to compensate
> ("pretend to reverse") but you can never get the lossy part back.

Now, this time, yes, what we're talking about is a lossy process, and as
such it cannot be completely reversed.

But before giving up, we should ask, *what is it* that makes it lossy?
Well, I'm still trying to understand how this all really works, but
right now, my answer is: noise makes the process lossy. If you had an
ideal scanner with no noise, then you could *exactly* reverse what the
sensor+optics do.

In real life, we have noise, and that's why you can't just do a
deconvolution and get a "perfect" result. The problem you'll have is
that you also amplify noise, but you won't be otherwise corrupting the
image.
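
A sketch of what I mean (assuming numpy; the PSF and the regularization
constant are made up): a plain inverse filter is exact without noise
but blows the noise up wherever the transfer function is small, while a
Wiener-style filter damps exactly those frequencies, trading a little
residual blur for noise control.

    import numpy as np

    rng = np.random.default_rng(0)
    signal = np.zeros(256); signal[100:140] = 1.0
    kernel = np.zeros(256); kernel[:7] = 1 / 7  # the *known* PSF

    H = np.fft.fft(kernel)
    blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))
    noisy = blurred + rng.normal(0, 0.01, 256)  # scanner noise

    # naive inverse filter vs. Wiener-style regularized inverse
    naive = np.real(np.fft.ifft(np.fft.fft(noisy) / H))
    wiener = np.real(np.fft.ifft(np.fft.fft(noisy) * np.conj(H)
                                 / (np.abs(H) ** 2 + 1e-3)))

    print(np.std(naive - signal), np.std(wiener - signal))  # naive is worse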

Sure, amplifying noise is still something you might not want to do...
but pretend you own my Epson staggered CCD scanner: you have a scanner
with half the noise of equivalent linear CCD scanners, but a worse MTF
in exchange. What do you do? You improve the MTF, at the expense of
raising the noise to the levels a linear CCD scanner would show.
And, in comparison with a linear CCD scanner, you've still gained
anti-aliasing.

Kennedy would agree :-) So let's quote him, although I can't guarantee
the quote isn't a bit out of context, as I've only just looked it up
quickly.

--- CUT ---

[Ed Hamrick]
> The one thing you've missed is that few (if any) flatbed scanners
> have optics that focus well enough to make aliasing a problem when
> scanning film. In this case, staggered linear array CCD's don't
> help anything, and just reduce the resolution.

[Kennedy McEwen]
If this was the situation then any 'loss' in the staggered CCD spatial
response would be more than adequately recovered by simple boost
filtering due to the increased signal to noise of the larger pixels

--- CUT ---

> [snip]
>
>>>>Yes, in theory. In practice, my red channel has a visibly worse MTF than
>>>>the green channel, for one.
>>>
>>>That's *very* interesting!!! I wonder why that is?
>>
>>Well, for one, who's to say that my scanner's white light source is white?
>>If red is less well represented in the light source than the other
>>primaries, there will be more noise in the red channel.
>>
>>Though noise shouldn't directly affect the MTF, AFAIK.

>>But there are other possibilities: being a flatbed, my scanner has a
>>glass. Who says the glass "spreads" all wavelengths the same way?
>
> But none of that should affect the results because you're dealing with
> the *relative* change in brightness along the edge. Now, in *absolute*
> terms there may be a difference between channels, but if, for example,
> red receives less light than the other channels, the *relative*
> transition should still be the same, only the red pixels will be a bit
> darker.

Not darker: the scanner calibration will set the light source as the
whitepoint, so the channels will still have the same brightness.
On second thought, I agree that the red source would have to be *really*
dimmer than the others for this to produce noticeably more noise.

But I think the glass hypothesis still stands: if the glass blurs red
more than it blurs the other colors, well, here you have a longer edge
transition, and a worse MTF.

> [snip]

by LjL
ljlbox(a)tiscali.it