From: DonRoth on
Later this week I will also revisit my image processing books and see if I can find more information about the image deconvolution process. In the meantime, I wish we could get some help here from other experts. FYI, you can normalize the deconvolution result to optimize grayscale viewing if desired (see attached). Now you can see why I still prefer the intensity graph as an image container over the picture control and Vision display control: the intensity graph is more flexible and easier to manipulate because it exposes so many property nodes. Fully contrast-enhanced grayscale viewing is achieved simply by autoscaling Z (and interpolating the colors).

Don
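
For reference, here is a minimal NumPy sketch of the kind of contrast stretch the attached normalize_array_data_for_8-bit.vi presumably performs (the same effect as autoscaling Z on the intensity graph): linearly rescale the result to the full 0-255 range. The function name and details are my assumptions, not taken from the VI.

    import numpy as np

    def normalize_to_8bit(result):
        """Linearly stretch a floating-point image to 0..255 for 8-bit viewing."""
        result = np.asarray(result, dtype=np.float64)
        lo, hi = result.min(), result.max()
        if hi == lo:                      # flat image: avoid dividing by zero
            return np.zeros(result.shape, dtype=np.uint8)
        scaled = (result - lo) / (hi - lo) * 255.0
        return scaled.astype(np.uint8)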


normalize_gray_scale_result.png:
http://forums.ni.com/attachments/ni/170/209189/1/normalize_gray_scale_result.png


normalize_array_data_for_8-bit.vi:
http://forums.ni.com/attachments/ni/170/209189/2/normalize_array_data_for_8-bit.vi
From: DonRoth on
Here is the code reposted in LabVIEW 8.0. Note that it uses Vision routines, and when I saved for a previous version I specified that the toolkit should be version 8.0 as well. If you cannot open the Vision routines, note that part of the code is now duplicated using only LabVIEW and gives results nearly identical to the output of the Vision portion, so you should be able to extract the LabVIEW-only code if you need to.
 
Sincerely,
 
Don


deconvolve_8.0.zip:
http://forums.ni.com/attachments/ni/170/209888/1/deconvolve_8.0.zip
From: GregS on
The code is pretty close. The main problem is that the FFT of the PSF contains zeros, so when you divide in the deconvolution you are dividing by zero all over the place, and you end up just looking at the noise! This is a well-known issue, so any article or book on deconvolution should cover it. The easiest solution is to add a weighting factor (essentially a very small number) to the zeros; this is known as Wiener deconvolution, because the weight is often based on a Wiener filter. The downside is that it slightly blurs the result, so you need to choose this value carefully.

I've modified your code a bit further, mostly to simplify it and to demonstrate how you might do an iterative Wiener deconvolution. I've done this with plain LabVIEW arrays rather than Vision modules, chiefly because it is then a simple extension to move it to 3D. The left-hand side of the diagram is essentially the same (except that you need to normalize the PSF to sum to 1), and then I've created an event loop so you can play with the value of alpha and the number of iterations to perform. The Wiener deconvolution is placed in a separate VI (see the attached PNG). I've added code to deconvolve iteratively: basically this subtracts the result from the original image and then performs a subsequent deconvolution on that "error" times the PSF, to improve the solution. The Swap Quadrants at the end just repositions the result.

Start with alpha = 0 and you'll see essentially the same result as you had. Now increase alpha slightly (say 0.00001): the noise is eliminated and the result begins to appear. If you keep increasing alpha, the noise keeps shrinking, but the result gets more blurred. Now, with alpha around 0.005, try increasing the number of iterations, even up to 5 or 50; for the right values you'll get a sharp image with perhaps a little ringing, which might be reduced if you zero-padded your arrays to eliminate wrap-around.

Hope this helps you get started. This is still the simplest deconvolution that "works"; look up the Richardson-Lucy algorithm if you need something better.

Cheers ~ Greg
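
For anyone following along outside LabVIEW, here is a rough NumPy sketch of the scheme Greg describes: a regularized frequency-domain division with a small alpha, plus an optional iterative refinement of the residual. The function name, the exact regularization form (conj(H) / (|H|^2 + alpha) is one common Wiener-style variant), and the residual update are my assumptions and may differ in detail from the attached VIs.

    import numpy as np

    def wiener_deconvolve(image, psf, alpha=0.005, n_iterations=1):
        """Deconvolve `image` with `psf` (arrays of the same shape)."""
        image = np.asarray(image, dtype=np.float64)
        psf = np.asarray(psf, dtype=np.float64)
        psf = psf / psf.sum()                    # normalize the PSF to sum to 1

        H = np.fft.fft2(psf)
        Y = np.fft.fft2(image)

        # Regularized inverse filter: the alpha term in the denominator keeps
        # the zeros of H from amplifying the noise (alpha = 0 reproduces the
        # raw, noise-dominated division).
        G = np.conj(H) / (np.abs(H) ** 2 + alpha)

        estimate = np.real(np.fft.ifft2(G * Y))
        for _ in range(n_iterations - 1):
            # Re-blur the current estimate, take the residual against the
            # data, deconvolve that residual, and add it back in (the
            # iterative refinement step described in the post).
            reblurred = np.real(np.fft.ifft2(np.fft.fft2(estimate) * H))
            residual = image - reblurred
            estimate = estimate + np.real(np.fft.ifft2(G * np.fft.fft2(residual)))

        # Equivalent of LabVIEW's "Swap Quadrants": recenter the result.
        return np.fft.fftshift(estimate)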


Deconvolve_Weiner.zip:
http://forums.ni.com/attachments/ni/170/210034/1/Deconvolve_Weiner.zip


Weiner.png:
http://forums.ni.com/attachments/ni/170/210034/2/Weiner.png
From: DonRoth on
I had some time this morning and wrote a small routine that pads the dimensions with the PSF's minimum value (rather than zero), on the theory that this lets us skip the Wiener formulation when it isn't needed. I padded only to the minimum size required, and in fact the dimensions can be off by one (as is the case in this example), but the result still appears to be valid. This can probably be tweaked and optimized, but it is a simple solution for now. So yes, it does appear that the array dimensions must be (about) equal before we can carry out the deconvolution. Thanks again for taking the time to look at all of this and for making great suggestions.
Sincerely,
 
Don
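
As a rough illustration of the padding idea (not the attached VI itself), here is a NumPy sketch that grows the PSF out to roughly the image's dimensions, filling with the PSF's minimum value rather than zero. The helper name and the choice to pad only at the high ends are assumptions; the attached VI may distribute the padding differently.

    import numpy as np

    def pad_to_match(psf, target_shape):
        """Pad `psf` up to `target_shape`, filling with its minimum value."""
        pad_rows = max(target_shape[0] - psf.shape[0], 0)
        pad_cols = max(target_shape[1] - psf.shape[1], 0)
        return np.pad(
            psf,
            ((0, pad_rows), (0, pad_cols)),      # pad at the high ends only
            mode="constant",
            constant_values=psf.min(),
        )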


Test_deconvolve_different_size_arrays_8.0_Folder.zip:
http://forums.ni.com/attachments/ni/170/210408/1/Test_deconvolve_different_size_arrays_8.0_Folder.zip