Canon recently introduced its EOS-1D X Mark III Digital Single-Lens Reflex [Edit: and now also possibly the R5 Mirrorless ILC], touting a new and improved Anti-Aliasing filter, which it calls a High-Res Gaussian Distribution LPF, claiming that
“This not only helps to suppress moiré and color distortion, but also improves resolution.”
In this article we will try to cut through the marketing speak and better understand the theoretical implications of the new AA filter. For the abridged version, jump to the Conclusions at the bottom. In a picture:
This article will discuss a simple frequency domain model for an Anti-Aliasing (or Optical Low Pass) Filter, a hardware component sometimes found in a digital imaging system[1]. The filter typically sits just above the sensor and its objective is to block as much as possible of the aliasing- and moiré-creating energy above the monochrome Nyquist spatial frequency, while letting through as much as possible of the real image-forming energy below it, hence the low-pass designation.
In consumer digital cameras it is often implemented by introducing one or two birefringent plates in the sensor’s filter stack. This is how Nikon shows it for one of its DSLRs:
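To see in rough terms why a birefringent plate acts as a low-pass filter: each plate splits a ray into two displaced copies, so its impulse response along the split direction is a pair of deltas, and its one-dimensional MTF is the magnitude of a cosine. A minimal sketch (the function name and the split distance are illustrative choices of mine, not any manufacturer's specification):

```python
import numpy as np

def aa_filter_mtf(f, d):
    """MTF of a single birefringent plate that splits each ray into
    two points separated by distance d (d in pixels if f is in
    cycles/pixel). Two deltas at +/- d/2 -> |cos(pi * d * f)|."""
    return np.abs(np.cos(np.pi * d * f))

# With a one-pixel split the first null lands exactly at the
# monochrome Nyquist frequency (0.5 cycles/pixel)
f = np.array([0.0, 0.25, 0.5])   # cycles/pixel
mtf = aa_filter_mtf(f, d=1.0)    # 1 at origin, ~0.71, ~0 at Nyquist
```

A weaker (or stronger) AA effect simply corresponds to a smaller (or larger) split distance d, which moves the null above or below Nyquist.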
Having shown that our simple two-dimensional MTF model is able to predict the performance of the combination of a perfect lens and a square monochrome pixel with 100% Fill Factor, we now turn to the effect of the sampling interval on spatial resolution, according to the guiding formula:
(1)   $\mathrm{MTF}_{Sys} = \left|\, \widehat{PSF}_{lens} \cdot \widehat{PSF}_{pixel} \,\right| \ast\ast\; \widehat{III}$

The hats in this case mean the Fourier Transform of the relative component normalized to 1 at the origin ($\widehat{PSF}(0,0) = 1$), that is the individual MTFs of the perfect lens PSF, the perfect square pixel and the delta grid $III$; $\ast\ast$ represents two dimensional convolution.
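To make the roles of the first two terms concrete, here is a small numerical sketch of the pre-sampling system MTF: the diffraction MTF of a perfect lens with a circular aperture multiplied by the sinc MTF of a square 100% fill-factor pixel. The convolution with the delta-grid FT, which generates the spectral replicas, is deliberately left out here, and all numeric values are illustrative rather than taken from the article:

```python
import numpy as np

def mtf_diffraction(f, fc):
    """MTF of a perfect (diffraction-limited) lens with a circular
    aperture; fc = 1/(lambda * N) is the diffraction cutoff."""
    s = np.clip(f / fc, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(s) - s * np.sqrt(1.0 - s * s))

def mtf_pixel(f, pitch):
    """MTF of a square 100% fill-factor pixel: |sinc(f * pitch)|."""
    return np.abs(np.sinc(f * pitch))  # np.sinc is sin(pi x)/(pi x)

# Illustrative values: 4.3 um pitch, f/5.6, 530 nm light
pitch = 4.3e-3             # mm
fc = 1.0 / (530e-6 * 5.6)  # diffraction cutoff, ~337 cycles/mm
f = np.linspace(0.0, 1.0 / (2.0 * pitch), 5)  # up to Nyquist
system = mtf_diffraction(f, fc) * mtf_pixel(f, pitch)
```

Both factors equal 1 at the origin by the normalization above, and the product falls monotonically out to Nyquist, which is the expected shape of a perfect AA-less system response.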
Sampling in the Spatial Domain
While exposed, a pixel sees the scene through its aperture and accumulates energy as photons arrive. Below left is the representation of, say, the intensity that a star projects onto the sensing plane, in this case resulting in an Airy pattern since we said that the lens is perfect. During exposure each pixel integrates (counts) the arriving photons, an operation that mathematically can be expressed as the convolution of the shown Airy pattern with a square the size of the effective pixel aperture, here assumed to have 100% Fill Factor. It is the convolution in the continuous spatial domain of the lens PSF with the pixel aperture PSF shown in Equation (2) of the first article in the series.
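In one dimension, this pixel-integration step can be sketched as a convolution with a box the width of the pixel aperture on an oversampled grid, followed by decimation to the pixel centers. The PSF stand-in below is a generic sinc-squared, not the Airy pattern of the article, and the oversampling factor is an arbitrary choice:

```python
import numpy as np

oversample = 16  # fine-grid samples per pixel (illustrative)
# A 16-pixel-wide stretch of the sensing plane, finely sampled
x = (np.arange(-8 * oversample, 8 * oversample) + 0.5) / oversample
psf = np.sinc(x) ** 2                   # stand-in for the lens PSF
box = np.ones(oversample) / oversample  # 1-pixel-wide aperture, 100% FF
integrated = np.convolve(psf, box, mode='same')  # pixel integration
sampled = integrated[::oversample]      # one value per pixel center
```

The averaging by the box lowers and broadens the peak of the PSF, which is exactly the low-pass action of the pixel aperture before the sampling step proper.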
Sampling is then the multiplication of the result of that convolution by an infinitesimally small Dirac delta function at the center of each pixel (the red dots below left), producing the sampled image below right.
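The practical consequence of multiplying by a grid of deltas is that any frequency above Nyquist folds back below it after sampling. A quick check, assuming unit pixel pitch so that Nyquist is 0.5 cycles/pixel (frequencies chosen for illustration):

```python
import numpy as np

# Sampling = taking values only at the pixel centers. A sinusoid at
# 0.7 cycles/pixel, above Nyquist (0.5), is indistinguishable after
# sampling from its alias at 1 - 0.7 = 0.3 cycles/pixel.
x = np.arange(32)                     # pixel centers, pitch = 1
above = np.cos(2 * np.pi * 0.7 * x)   # above Nyquist
alias = np.cos(2 * np.pi * 0.3 * x)   # its alias below Nyquist
print(np.allclose(above, alias))      # True
```

This folding is precisely the energy that an AA filter tries to suppress before it reaches the delta grid.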
The next few posts will describe a linear spatial resolution model that can help a photographer better understand the main variables involved in evaluating the 'sharpness' of photographic equipment and related captures. I will show numerically that the combined spectral frequency response (MTF) of a perfect AA-less monochrome digital camera and lens in two dimensions can be described as the magnitude of the normalized product of the Fourier Transform (FT) of the lens Point Spread Function by the FT of the pixel footprint (aperture), convolved with the FT of a rectangular grid of Dirac delta functions centered at each pixel:
With a few simplifying assumptions we will see that the effect of the lens and sensor on the spatial resolution of the continuous image on the sensing plane can be broken down into these simple components. The overall ‘sharpness’ of the captured digital image can then be estimated by combining the ‘sharpness’ of each of them.
Several sites for photographers perform spatial resolution 'sharpness' testing of a specific lens and digital camera setup by capturing a target. You can also measure your own equipment relatively easily to determine how sharp your hardware is. However, comparing results from site to site, and with your own, can be difficult and/or misleading, starting with the multiplicity of units used: cycles/pixel, line pairs/mm, line widths/picture height, line pairs/image height, cycles/picture height, etc.
This post will address the units involved in spatial resolution measurement, using readings from the popular slanted edge method as an example, although the discussion applies generally.
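As a preview of the conversions involved, two of the most common mappings can be sketched as follows. The pixel pitch and sensor height used in the example are round illustrative values, not those of any specific camera:

```python
# Converting between common spatial-resolution units, assuming
# square pixels; pitch in mm, sensor height in pixels.
def cypx_to_lpmm(cy_per_px, pitch_mm):
    """cycles/pixel -> line pairs (cycles) per mm."""
    return cy_per_px / pitch_mm

def cypx_to_lwph(cy_per_px, height_px):
    """cycles/pixel -> line widths per picture height
    (one cycle = one line pair = two line widths)."""
    return 2 * cy_per_px * height_px

# Example: an MTF50 reading of 0.30 cycles/pixel on a sensor
# 4000 pixels tall with a 6 um (0.006 mm) pitch
lpmm = cypx_to_lpmm(0.30, 0.006)  # ~50 lp/mm
lwph = cypx_to_lwph(0.30, 4000)   # ~2400 lw/ph
```

Note that cycles/pixel depends only on the sensor's sampling grid, while the per-mm and per-picture-height figures fold in pitch and sensor size, which is why the same hardware can look very different across sites quoting different units.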