Now that we have the lens Point Spread Function (PSF), it should be straightforward to reverse its blurring effect out of the image, right?
After all, with a few simplifications typically valid in general photography, we know that the image $I$ projected by the lens onto the sensing plane of our digital camera is the continuous image $O$ from the scene in object space, projected onto the sensing plane per geometrical optics, convolved with the $PSF$ of the lens
$$ I(x,y) = O(x,y) \ast\ast \, PSF(x,y) \qquad (1) $$
with $\ast\ast$ representing two dimensional convolution and $(x,y)$ sensing/image plane coordinates. We know that convolution in the spatial domain is element-by-element multiplication in the frequency domain, so, indicating the Fourier Transform by $\mathcal{F}$,
$$ \mathcal{F}[I] = \mathcal{F}[O] \cdot \mathcal{F}[PSF] \qquad (2) $$
why not just divide, element-wise, the FT of image $I$ on the sensing plane by the FT of the $PSF$ in order to obtain the restored scene image $O$, unblurred by the lens? And isn’t this operation deconvolution, since it undoes convolution? Yes, yes but…
Deconvolution by Division
We’ll drop image plane coordinates for the rest of the article for readability. In order to restore the unblurred object image $O$ tainted by lens blur, we divide the blur out of the actual image $I$ on the sensing plane in the frequency domain, then return to the spatial domain via the inverse FT
$$ O = \mathcal{F}^{-1}\!\left[\frac{\mathcal{F}[I]}{\mathcal{F}[PSF]}\right] \qquad (3) $$
with the division above element by element. In digital photography the image is sampled in the spatial domain by an M×N sensor, and conversion to and from the frequency domain is performed by the Discrete Fourier Transform, which produces a spectrum of the same size. Since the division is performed element-wise, frequency by frequency, we need to ensure that both the image and the $PSF$ are the same size.
Images are typically thousands of pixels on a side, while the $PSF$ we have in Figure 1 is only 63×63 pixels because, as we saw in the previous post, the $PSF$ decays quickly: starting from a few pixels or so away from the center it is made up of truly tiny values. No problem, we can pad it with zeros to make it the same size as image $I$. Or even better, since our $PSF$ was the result of a model, we can calculate it out to whatever radius we want, keeping in mind that its spectrum will still be full of zeros in the frequency domain beyond diffraction extinction.
Therefore, taking a
- 2048×2048 target object sampled image $O$;
- Convolving it with the $PSF$ of Figure 1 to obtain image $I$ (below left);
- Dividing the DFT of $I$ by the DFT of the lens $PSF$ in Figure 1; and
- Taking the inverse DFT of that per Equation 3
we should get the unblurred original sampled image back, as shown below right:
And indeed Figure 2 right shows the restored image resulting from deconvolution by division to be identical to the original image before blurring. The original is not shown because it is truly identical to the restored one: after all $5 \cdot x / x$ is still 5, at least with floating point math.
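To make the round trip concrete, here is a minimal sketch of deconvolution by division per Equation 3 in Python/NumPy (the script behind this post's figures is Matlab/Octave, per the notes). The scene and the PSF are hypothetical stand-ins — a random image and a small Gaussian blob rather than the diffraction PSF of Figure 1 — and the image is 256×256 rather than 2048×2048 to keep it quick:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a random "scene" and a small Gaussian blob
# in place of the diffraction PSF of Figure 1
N = 256
scene = rng.random((N, N))

r = np.arange(-31, 32)                      # 63x63 support, as in Figure 1
xx, yy = np.meshgrid(r, r)
psf = np.exp(-(xx**2 + yy**2) / (2 * 1.0**2))
psf /= psf.sum()                            # normalize to unit volume

# Zero-pad the PSF to image size so the two DFTs match element-wise,
# then shift its center to the origin to avoid a phase ramp
psf_big = np.zeros((N, N))
psf_big[:63, :63] = psf
psf_big = np.roll(psf_big, (-31, -31), axis=(0, 1))

# Blur via multiplication in the frequency domain (Equations 1 and 2)
H = np.fft.fft2(psf_big)
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))

# Deconvolution by division (Equation 3)
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) / H))

print(np.max(np.abs(restored - scene)))     # tiny: round-off error only
```

With no noise in the loop, the maximum difference between the restored and original images is down at floating-point round-off level, just as Figure 2 shows.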
Only Works in Ideal Conditions
However, any minimal deviation from ideality in the system, say noise or the blurring function not being known precisely, will cause deconvolution by naive division to become unstable. This is what adding 2% Gaussian noise to convolved image $I$ (Figure 2 left) and deconvolving it with the same lens $PSF$ as earlier does to the restored image (below right):
Although it may be initially surprising that adding a little noise effectively obliterates the ability to restore a blurred image by naive inverse filtering, it is actually easy to understand why this happens.
As we know, each value of a Discrete Fourier Transform is the weighted sum of the intensities of every single pixel of the image being transformed. The reverse is true for the inverse transform, and this is where the problem arises: just a few unreasonably large values resulting from division in the frequency domain can act like a wrench in the otherwise perfect deconvolution machine.
For instance, if we pad with zeros the tiny 63×63 $PSF$ to match the size of the much larger 2048×2048 sample image, the resulting DFT will be made up of mostly tiny numbers which, when divided into the much larger values of the DFT of the image, will generate potentially huge figures. This is especially true at higher spatial frequencies, where signal energy approaches zero and the spectrum is dominated by noise. Dividing that noisy spectrum by very small values results in a very large noisy spectrum that contaminates the whole inverse transform, hence the resulting ‘restored’ image in Figure 3. This problem makes the method unusable other than in ideal conditions not typically found in photographic practice.
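The blow-up is easy to reproduce. In this Python/NumPy sketch (same hypothetical random scene and Gaussian stand-in PSF as above, not the post's actual data), 2% noise added after blurring gets divided by the near-zero high-frequency values of the PSF's spectrum and swamps the restoration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the post's data
N = 256
scene = rng.random((N, N))
xx, yy = np.meshgrid(np.arange(-31, 32), np.arange(-31, 32))
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()

psf_big = np.zeros((N, N))
psf_big[:63, :63] = psf
psf_big = np.roll(psf_big, (-31, -31), axis=(0, 1))
H = np.fft.fft2(psf_big)

blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))
noisy = blurred + 0.02 * rng.standard_normal((N, N))   # 2% Gaussian noise

# Naive deconvolution by division (Equation 3) on the noisy image
restored = np.real(np.fft.ifft2(np.fft.fft2(noisy) / H))

print(np.abs(H).min())          # near-zero denominators at high frequencies
print(np.std(restored - scene)) # many orders of magnitude above the 2% noise
```

The smallest values of $|H|$ sit far below the noise floor, so dividing by them amplifies the noise at those frequencies enormously, and the inverse transform spreads that contamination across every pixel of the "restored" image.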
Wiener-Helstrom to the Rescue
There are ways to mitigate this issue, and they tend to take the name of their proponents. One of the better known was proposed by Wiener, and indeed Wiener filter deconvolution is a well known method for deconvolving in the presence of noise. It adds a noise-dependent term to the denominator of the naive division in the frequency domain
$$ O = \mathcal{F}^{-1}\!\left[\frac{H^{*} \cdot \mathcal{F}[I]}{|H|^{2} + \frac{1}{SNR}}\right], \qquad H = \mathcal{F}[PSF] \qquad (4) $$
with $SNR$ the ratio of the power spectra of the signal and the noise, ideally estimated at every frequency. This version of deconvolution by division in the frequency domain incorporates what is known as a least square error or minimum mean square error filter [1].
It is clear that in clean portions of the spectrum, where SNR is high, this additional term drives the restored image spectrum to values similar to those of the naive solution of Equation 3; conversely, it drives the resulting spectrum towards zero in noisy portions, because there SNR itself approaches zero. In the frequency domain this is equivalent to saying that the Wiener term turns the overall filter increasingly from high-pass to low-pass as noise power increases, thus mitigating its negative high frequency amplification effects.
Below, the blurred image to the left is again corrupted by the addition of 2% Gaussian noise, and the restored image to the right is recovered using Equation 4 with a blanket SNR of 50. As you can see, deconvolution by division in the frequency domain, damped by the Wiener filter, has indeed been able to restore some of the lost sharpness – but at the cost of unwelcome artifacts.
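A Wiener-damped version of the same Python/NumPy sketch (again with hypothetical stand-in data; only the blanket SNR of 50 matches the figure) shows the denominator term at work:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-ins for the post's data
N = 256
scene = rng.random((N, N))
xx, yy = np.meshgrid(np.arange(-31, 32), np.arange(-31, 32))
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()

psf_big = np.zeros((N, N))
psf_big[:63, :63] = psf
psf_big = np.roll(psf_big, (-31, -31), axis=(0, 1))
H = np.fft.fft2(psf_big)

blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))
noisy = blurred + 0.02 * rng.standard_normal((N, N))   # 2% Gaussian noise

# Wiener-Helstrom deconvolution (Equation 4) with a blanket SNR of 50:
# where |H|^2 >> 1/SNR this behaves like naive division; where |H|^2 is
# tiny, the 1/SNR term keeps the denominator away from zero
snr = 50.0
G = np.conj(H) / (np.abs(H)**2 + 1.0 / snr)
restored = np.real(np.fft.ifft2(np.fft.fft2(noisy) * G))

# Compare against naive inverse filtering on the same noisy image
naive = np.real(np.fft.ifft2(np.fft.fft2(noisy) / H))
print(np.std(restored - scene))   # bounded residual error
print(np.std(naive - scene))      # unbounded noise amplification
```

The Wiener result stays bounded because frequencies the blur has destroyed are rolled off rather than amplified; what cannot be recovered simply stays missing, which is where the residual softness and artifacts come from.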
I won’t pursue this approach further for now because for typical photographic images of relatively high IQ, iterative methods tend to produce better results. Next.
Notes and References
1. Digital Image Processing, Third Edition, Gonzalez and Woods, Pearson 2009. Section 5.8, p. 353.
2. The Matlab/Octave script used to generate images in this post can be found here