Deconvolution is one of the processes by which we can attempt to undo blurring introduced by our hardware while capturing an image. It can be performed in the spatial domain via a kernel or in the frequency domain by dividing the Fourier transform of the image data by the transform of one or more Point Spread Functions. The best single deconvolution PSF to use for Capture Sharpening is the one that caused the blurring in the first place: the System PSF. It is, however, often not easy or practical to determine.
The following simplified discussion and subsequent articles assume the System PSF at the center of a well corrected lens, ignoring how lens aberrations change throughout the rest of the field of view.
There are at least three approaches to determining PSFs for deconvolution capture sharpening. The first is measuring the actual two-dimensional System PSF, but that is hard to accomplish even with decent photographic equipment because the PSF is typically too small compared to the size of a pixel and the Bayer pattern. The second is modeling the 2D System PSF from the physical characteristics of the individual camera/lens components as set up, the subject of a future post. The third is eyeballing the 2D System PSF through an easily applied catch-all function, an approach also often used by the deconvolution routines found in current raw converters. This post deals with this last method in the frequency domain.
The generic approach typically assumes a two-dimensional Gaussian (Normal) System PSF, because when numerous different component PSFs are convolved together the result tends to take on a Gaussian shape. One of the properties of a Gaussian is that it retains its Gaussian form when going back and forth between the spatial and the frequency domain, although with different parameters: a spatial Gaussian of standard deviation σ pixels has an MTF that is itself a Gaussian, exp(-2π²σ²f²), with f in cycles per pixel.
We’ve seen that the MTF curve of a D4+85mm:1.8G at f/5.6 (green solid line in the picture below) looks in fact quite similar to the MTF of a Gaussian PSF with standard deviation (radius) of 0.65 pixels (red dotted line), an indication that a Gaussian PSF would be a decent approximation for deconvolution.
Since deconvolution in the frequency domain is in theory a simple (err, complex) division, the resulting capture-sharpened MTF should be the quotient of the two curves:
Deconvolved MTF = (System PSF MTF) / (deconvolution PSF MTF).
In the example above, at 0.1 cycles per pixel the MTF after deconvolution will show a value of about 0.89/0.92 = 0.97; and at 0.4 cycles per pixel it will be about 0.29/0.27 = 1.07. Clearly wherever the two curves intersect they have the same value, so the result of the division/deconvolution will be an MTF of 1, as we can see in the figure above at spatial frequencies of 0, 0.29 and 0.59 cycles per pixel.
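The arithmetic is easy to check. In the sketch below the System MTF readings (0.89 at 0.1 cy/px, 0.29 at 0.4 cy/px) are eyeballed off the figure, and the deconvolution MTF is computed from the Gaussian of radius 0.65 pixels; the function names are mine, not from any particular package:

```python
import math

def gaussian_mtf(f, sigma):
    """MTF of a Gaussian PSF of standard deviation sigma (pixels)
    at spatial frequency f (cycles/pixel): exp(-2*pi^2*sigma^2*f^2)."""
    return math.exp(-2 * math.pi**2 * sigma**2 * f**2)

sigma = 0.65  # radius of the Gaussian deconvolution PSF, in pixels

# System MTF values read off the figure at two spatial frequencies
for f, system_mtf in [(0.1, 0.89), (0.4, 0.29)]:
    deconvolved = system_mtf / gaussian_mtf(f, sigma)
    print(f"{f} cy/px: {system_mtf:.2f} / {gaussian_mtf(f, sigma):.2f} = {deconvolved:.2f}")
```

Any small differences from the figures quoted above come down to rounding the curve readings.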
As long as the deconvolution PSF MTF in the denominator is less than one, the MTF resulting from the division will be higher than the System MTF. Because well behaved photographic System MTF curves decrease monotonically at least up to Nyquist, an ever smaller (and therefore noisier) signal is amplified more and more as spatial frequency increases. In the example above the signal at Nyquist is at only 12% of its original contrast, so deconvolution would need to amplify it more than eight times to restore it.
Amplification in the frequency domain therefore helps make some detail more apparent, but to do so it amplifies everything, including noise, which results in increased noise and artifacts in our deconvolved images; so it is best used in moderation. Getting the balance right is easier said than done and is accomplished through additional filtering to limit and cut off unwanted noise and artifacts.
Here is the result of dividing (deconvolving) our example’s well behaved System MTF by the Gaussian PSF MTF of radius 0.65 px shown in the previous figure:
The severe amplification of higher spatial frequencies needs to be kept in check by appropriate filtering in order not to flood the deconvolved image with noise and artifacts. Frequencies at and above 0.7 cy/px need to be cut off altogether.
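As a sketch of what such filtering might look like, here is the naive inverse filter built from the Gaussian MTF, with a hard cutoff at 0.7 cy/px. The variable names and the hard cutoff are illustrative only; real plug-ins use smoother, more sophisticated windows:

```python
import numpy as np

sigma = 0.65     # Gaussian deconvolution radius (pixels)
cutoff = 0.7     # cy/px; everything at and above this is discarded

f = np.linspace(0.0, 1.0, 201)                # spatial frequency axis, cycles/pixel
g = np.exp(-2 * np.pi**2 * sigma**2 * f**2)   # Gaussian deconvolution MTF

# Naive inverse filter: amplification grows without bound as g -> 0
naive_gain = 1.0 / g

# Simplest possible check on the amplification: a hard frequency cutoff
filtered_gain = np.where(f < cutoff, naive_gain, 0.0)
```

At Nyquist (0.5 cy/px) the gain is already about 8x, in line with the figure quoted earlier.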
Keeping in check such amplification and its side effects is one of the hardest jobs for deconvolution plug-in designers and anyone considering applying deconvolution to their images. It is the main reason why naive division is virtually never used in practice and more advanced methods are applied instead, as you can read in these articles.
Ideally, with a perfectly clean signal, after deconvolution we would end up with an MTF close to unity from 0 to 0.5 cycles per pixel, the so-called Nyquist spatial frequency – and zero MTF from there on up. That would be an indication that all captured original spatial information has been restored in the deconvolved image. There is only one way to do that, and that is to use as the deconvolution PSF the same PSF that caused the blurring in the first place: the System PSF. But that is difficult to accomplish in practice because the System PSF is seldom known precisely. And we have seen how even a seemingly small difference between the system and deconvolution PSFs can cause wild swings in amplification, especially with the smaller and noisier signals at higher frequencies.
In addition, not all System MTF curves are as well behaved (Gaussian) as the one in this example; in fact most aren't. In a future post we will consider a class of sensors that – because the captured image is not pre-filtered by an optical low-pass filter – typically shows poorly behaved System MTF curves: AAless sensors.
You describe simple linear deconvolution; there is another type of deconvolution which can outperform it: non-linear deconvolution such as Richardson-Lucy. RawTherapee (http://rawtherapee.com/) can perform this type of capture sharpening.
Hi DSPog, nice of you to drop in. You are right, mention of iterative methods would be useful.
Richardson-Lucy isn’t non-linear deconvolution.
When we say non-linear deconvolution we are talking about the model, not the solver.
Richardson-Lucy has a linear model with non-Gaussian noise. One way to solve it is with a non-linear solver.
Right R, with Poisson noise. See the dedicated RL article.
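For readers who want to see the shape of the iteration, here is a minimal 1-D Richardson-Lucy sketch; the multiplicative update is what keeps the estimate non-negative under Poisson statistics. Function names and the test kernel are illustrative, not from any particular package:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=30):
    """Minimal 1-D Richardson-Lucy deconvolution sketch.
    The blur model is linear (convolution with psf); the multiplicative
    update enforcing non-negativity is what calls for an iterative solver."""
    psf_flipped = psf[::-1]
    # Start from a flat estimate with the same total flux as the data
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # guard against /0
        estimate = estimate * np.convolve(ratio, psf_flipped, mode="same")
    return estimate
```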
Hi,
I’m doing fluid mechanics, but have a big interest in photography. We have a kind of modeling used to describe turbulent flows that is based on deconvolution. In the spatial domain we perform filtering and use the transfer function, or the filter directly, to compute the deconvolution. This is done using something like: http://de.wikipedia.org/wiki/Van-Cittert-Dekonvolution. It’s nice because all we need to have is the filter itself. It helps to restore detail near the filter cutoff, and the Van Cittert procedure avoids singularities in the inversion process. Great site, by the way!
Thanks, and thanks for the pointer, I’ll definitely take a look.
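For anyone curious, the Van Cittert iteration mentioned above is remarkably compact: re-blur the current estimate with the known filter and add back the residual, with no division involved. A minimal 1-D sketch (names and relaxation factor are illustrative):

```python
import numpy as np

def van_cittert(observed, psf, iterations=20, beta=1.0):
    """Minimal 1-D Van Cittert deconvolution sketch: re-blur the
    current estimate with the known filter (psf) and add back a
    fraction beta of the residual. Only the filter itself is needed,
    and no explicit division (hence no singularity) is performed."""
    estimate = observed.copy()
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        estimate = estimate + beta * (observed - reblurred)
    return estimate
```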