UnSharp Masking (USM) capture sharpening is somewhat equivalent to taking a black/white marker and drawing along every transition in the picture to make it stand out more – automatically. Line thickness and darkness are chosen arbitrarily to achieve the desired effect, much like painters do.
One way to look at USM is to imagine coming across one such simplified transition in an image, say a sharp edge from black to white (your choice of the horizontal or vertical ones below)
and plotting its profile as if you were crossing it perpendicularly. The plot of the relative brightness (Luminance) profile might look something like this (0 signal is black, 1 is white, from an actual Edge Spread Function):
The painter/photographer looks at the result and says to herself: “Hmm, that’s one fuzzy edge. It takes what looks like the distance of 6 pixels to go from black to white. Surely I can make it look sharper than that. Maybe I can arbitrarily squeeze its ends together so that it fits in fewer pixels.” She takes out her tool (USM/marker), dials in darkness 1 and thickness 1, and redraws the transition to her liking:
Now the transition fits in a space of less than two pixels. “Alright, that looks more like it” she says contentedly and moves on to the next transition.
The only problem with this approach to capture sharpening is that it has little if anything to do with the reality of the scene. It is completely perceptual, arbitrary and destructive. It is not reversible. We can make the slope of the transition (acutance!) as steep as we like simply by choosing more or less aggressive parameters. MTFs shoot through the roof beyond what’s physically possible; actual scene information need not apply. Might as well draw the transition in with a thick marker.
There is nothing inherently wrong with it: the USM approach is perfectly fine and quite useful in many cases, especially where creative or output sharpening is concerned. But as far as capture sharpening goes, upon closer scrutiny USM always disappoints (at least to this humble observer) because its arbitrariness and artificiality show up in all their glory, as you can clearly see above: halos (overshoots, undershoots), ringing, pixels boldly going where they were never meant to go – where is the center of the transition now?
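The mechanics behind those halos are easy to reproduce in a toy 1D sketch. USM adds back a scaled difference between the signal and a blurred copy of itself, so an aggressive amount inevitably pushes pixels past black and past white on either side of an edge. The edge values, blur radius and amount below are made-up illustration numbers, not taken from the figures above:

```python
# Toy 1D unsharp mask on a fuzzy black-to-white edge.
# The blur radius and "amount" are arbitrary choices, just like the
# painter's marker thickness and darkness in the analogy above.

edge = [0.0, 0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0, 1.0]  # ~6-pixel transition

def box_blur(signal):
    """3-tap box blur, clamping indices at the ends of the signal."""
    n = len(signal)
    return [(signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def unsharp_mask(signal, amount):
    """sharpened = original + amount * (original - blurred)."""
    return [s + amount * (s - b) for s, b in zip(signal, box_blur(signal))]

sharp = unsharp_mask(edge, amount=2.0)
print([round(v, 3) for v in sharp])
# The transition is steeper, but pixels now undershoot below 0 (black) on
# the dark side and overshoot above 1 (white) on the bright side: the halos.
```

Note that the midpoint of the transition is left untouched (blurring changes nothing where the profile is locally linear), while the ends get pushed outside the legal tonal range.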
So what is the judicious pixel peeper supposed to do in order to restore a modicum of pre-capture sharpness?
Well, contrary to USM’s approach, one could start with scene information first. If the aggregate edge profile in the raw data looks like that; if such and such an aperture produces this type of blur; if the pixels were this shape and size, and the AA filter this strong and of this type; if the lens bends and blurs light that way around the area of the transition – then perhaps we can model and attempt to undo some of the blurring introduced by each of these components of our camera/lens system, taking a good stab at reconstructing what the edge actually looked like before it was blurred by them.
The process by which we attempt to undo one by one the blurring introduced by each of these components is called deconvolution.
Deconvolution math is easier performed in the frequency domain because there it involves mainly simple (err, complex) division/multiplication. If one can approximately model and characterize the effect of each component in the frequency domain, one can in theory undo blurring introduced by it – with many limits, mostly imposed by imperfect modeling, complicated 2D variations in parameters and (especially) noise combined with insufficient energy at higher frequencies. In general photography you can undo some of it well, some of it approximately, some of it not at all. This is what characterization in the frequency domain looks like for the simplest components to model:
Note how in this case the various modeled blurring components (dashed lines) multiply out to a pretty good match of the overall measured actual performance (solid lines).
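The “components multiply” idea can be sketched with the two easiest components to model: the diffraction MTF of an ideal circular aperture and the sinc MTF of a square pixel aperture. The pixel pitch, f-number and wavelength below are hypothetical round numbers, not measurements behind the figure:

```python
import math

def mtf_diffraction(nu, wavelength_mm, f_number):
    """Diffraction MTF of an ideal circular aperture (incoherent light)."""
    nu_c = 1.0 / (wavelength_mm * f_number)   # diffraction cutoff, cycles/mm
    s = min(nu / nu_c, 1.0)
    return (2.0 / math.pi) * (math.acos(s) - s * math.sqrt(1.0 - s * s))

def mtf_pixel(nu, pitch_mm):
    """MTF of a square pixel aperture: |sinc(nu * pitch)|."""
    x = math.pi * nu * pitch_mm
    return 1.0 if x == 0.0 else abs(math.sin(x) / x)

# Hypothetical system: 4.3 um pixel pitch, f/5.6, 550 nm green light (in mm)
pitch, N, lam = 0.0043, 5.6, 0.00055
nyquist = 1.0 / (2.0 * pitch)                 # ~116 cycles/mm

# In the frequency domain the component MTFs simply multiply together
system_mtf = mtf_diffraction(nyquist, lam, N) * mtf_pixel(nyquist, pitch)
print(f"system MTF at Nyquist ~ {system_mtf:.3f}")
```

A fuller model would multiply in further terms for the AA filter, defocus, lens aberrations and so on, each one pulling the solid system curve down a little more.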
Deconvolution can also be performed in the spatial domain by applying discrete kernels to the image – though this typically requires much more computing power. Either way, as far as capture sharpening is concerned, deconvolution results are much more appealing to the eye of this pixel peeper than the rougher, arbitrary alternative of USM. And as a bonus, deconvolution is by and large reversible and not as destructive. USM can always be added later, in moderation, for specific effect if required.
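The frequency-domain recipe described above can be sketched in a toy 1D setting: blur a sharp edge with a known PSF via circular convolution, then restore it by dividing the spectra. The eps guard stands in for the real-world limits mentioned earlier – noise and frequencies where the blur has destroyed essentially all signal energy. This is a naive inverse filter under idealized, noiseless assumptions, not what any particular tool actually ships:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * f * k / n) for k in range(n))
            for f in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * k / n) for f in range(n)).real / n
            for k in range(n)]

def circular_convolve(x, h):
    n = len(x)
    return [sum(x[(k - m) % n] * h[m] for m in range(n)) for k in range(n)]

# A perfectly sharp edge, and a known symmetric 3-tap blur (taps at 0, +1, -1)
edge = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
psf  = [0.5, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0, 0.25]
blurred = circular_convolve(edge, psf)

# Deconvolution: divide the spectra, guarding frequencies the blur zeroed out
eps = 1e-9
restored = idft([Xf / Hf if abs(Hf) > eps else 0.0
                 for Xf, Hf in zip(dft(blurred), dft(psf))])
print([round(v, 3) for v in restored])  # the original sharp edge comes back
```

Because the blur model here is exact and there is no noise, the edge is recovered essentially perfectly; with a real capture, the division blows up wherever noise dominates a weak model response, which is why practical implementations regularize (e.g. Wiener-style) instead of dividing blindly.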
In a nutshell: USM is a meat cleaver handled by an artistic butcher. Deconvolution is a scalpel wielded by a careful photographic surgeon. There is little room for USM in capture sharpening – meaning use it only when absolutely needed and even then in strict moderation.
Note that many times applying a linear convolution kernel in the spatial domain is much faster than working in the frequency domain.
Right Roy, deconvolution works best with relatively clean images and one trades off ‘accuracy’ for speed, although that’s less of a consideration with today’s powerful machines.
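That speed-versus-accuracy trade-off is easy to see with an iterative spatial-domain scheme such as Richardson–Lucy deconvolution, sketched below in toy 1D form (the edge and PSF are made up; real tools work in 2D with measured or modeled PSFs). Each extra iteration costs compute and recovers a little more of the edge, and the per-iteration ratio step is exactly where noise, if present, would get amplified:

```python
def circular_convolve(x, h):
    n = len(x)
    return [sum(x[(k - m) % n] * h[m] for m in range(n)) for k in range(n)]

def richardson_lucy(observed, psf, iterations):
    """Iterative spatial-domain deconvolution (Richardson-Lucy)."""
    psf_adj = [psf[0]] + psf[1:][::-1]      # circularly reversed PSF
    estimate = list(observed)               # start from the blurred data
    for _ in range(iterations):
        reblurred = circular_convolve(estimate, psf)
        ratio = [o / max(r, 1e-12) for o, r in zip(observed, reblurred)]
        estimate = [e * c for e, c in
                    zip(estimate, circular_convolve(ratio, psf_adj))]
    return estimate

edge = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
psf  = [0.5, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0, 0.25]
blurred = circular_convolve(edge, psf)

err = lambda est: sum(abs(a - b) for a, b in zip(est, edge))
# More iterations buy accuracy at the cost of time
for it in (1, 10, 100):
    print(it, round(err(richardson_lucy(blurred, psf, it)), 4))
```

With noiseless data like this, the estimate keeps improving; with a noisy capture the same iterations eventually start "sharpening" the noise, which is why clean images deconvolve best.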
Dear Sir,
Very interesting article, but a little too scientific with respect to my background. I am interested in deconvolution and my feeling is that it is much better than USM. I tried ImagesPlus, which is the favorite deconvolution software of Roger Clark, but with modest results given the difficulty of finding the right parameters. I am also having a look at RawTherapee, but it does not look as sharp as the other one. My feeling is that deconvolution settings to recover Bayer demosaicing could be easily set and should be universal. Then the calibration for each lens and aperture should be a completely different story. Any suggestion for someone passionate about photos, but not skilled enough to dive into the scientific approach of deconvolution?
Thanks in advance
Best regards
Niccolo Baldassini
Hi Niccolo,
You make some interesting points. I think deconvolution is excellent at capture sharpening images that were taken with good technique. In such cases its effect is necessarily subtle. If one wants to get creative then HP, USM, local contrast etc., are just as good if not better.
With regards to undoing the effect of demosaicing, that’s just one of many components that add up to form the hardware+software system PSF. It is easier to deal with it in the aggregate than in isolation, not least because different demosaicing algorithms leave different footprints.
That’s why most commercial software tends to pretend that the overall blur is gaussian in shape and attempts to undo that. ImagesPlus is an astro program not well suited for general use. I like Focus Magic, as well as RawTherapee’s implementation, but there are many others.
Jack
PS If you enjoyed this one, you may be interested in the series of articles that starts here.