Tag Archives: spectral frequency response

Minimalist ESF, LSF, MTF by Monotonic Regression

Because the article on the Slanted Edge Method of estimating the Spectral Frequency Response of a camera and lens is one of the more popular on this site, I have fielded variations on the following question many times over the past ten years:

How do you go from the intensity cloud produced by the projection of a slanted edge captured in a raw file to a good estimate of the relevant Line Spread Function?

Figure 1.  Slanted edge captured in the raw data and projected to the edge normal.  The data are noisy because of shot noise and PRNU.  How do we estimate the underlying edge profile (orange line, the Edge Spread Function)?
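To make the question concrete, here is a minimal sketch of the projection step that produces such a cloud; the ROI size, edge angle and noise level are illustrative stand-ins for a real raw capture, not values from the article:

```python
import numpy as np

# Toy sketch of the projection step; all names and values are assumptions.
rng = np.random.default_rng(0)
h, w = 64, 64
theta = np.deg2rad(5.0)                    # assumed slant of the edge

# Synthetic slanted edge: a smooth transition plus sensor-like noise
y, x = np.mgrid[0:h, 0:w]
dist = (x - w / 2) * np.cos(theta) - (y - h / 2) * np.sin(theta)
roi = 0.5 * (1 + np.tanh(dist / 1.5)) + rng.normal(0, 0.02, (h, w))

# Project every pixel center onto the edge normal: each pixel yields one
# (distance, intensity) pair, producing the dense scattered cloud of
# Figure 1 at far finer spacing than the pixel pitch.
cloud_x = dist.ravel()                     # signed distance from the edge
cloud_y = roi.ravel()                      # raw value at that distance
order = np.argsort(cloud_x)
cloud_x, cloud_y = cloud_x[order], cloud_y[order]
```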

So I decided to write down the answer that I have settled on.  It relies on monotone spline regression to obtain an Edge Spread Function (ESF) and then reuses the parameters of the regression to infer the relative regularized Line Spread Function (LSF) analytically in one go.
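As a rough sketch of the regression step, the snippet below substitutes plain isotonic regression (scikit-learn's IsotonicRegression, i.e. PAVA) for the monotone splines actually used in the article; since the step-wise PAVA fit has no smooth coefficients to differentiate, the LSF here falls back to a numerical derivative rather than the analytic one described above:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# A synthetic intensity cloud stands in for the projected samples
rng = np.random.default_rng(1)
cloud_x = np.sort(rng.uniform(-8, 8, 4096))     # distances from the edge
cloud_y = 0.5 * (1 + np.tanh(cloud_x / 1.5)) + rng.normal(0, 0.02, 4096)

# Monotone fit of the ESF: isotonic regression (PAVA) used here in
# place of the article's monotone splines, purely for brevity
esf = IsotonicRegression(increasing=True).fit_transform(cloud_x, cloud_y)

# With a spline fit the LSF would follow analytically from the fitted
# coefficients; here we resample to a regular grid and difference
grid = np.linspace(cloud_x.min(), cloud_x.max(), 512)
lsf = np.gradient(np.interp(grid, cloud_x, esf), grid)
```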

This front-loads all uncertainty into the estimation of the ESF, since the other steps on the way to the SFR become purely mechanical.  In addition the monotonicity constraint puts some guardrails around the curve, keeping it on the straight and narrow without further effort.
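The mechanical remainder might look like the sketch below: window the sampled LSF, take its discrete Fourier transform and normalize to DC.  The oversampled Gaussian LSF and the Hann window are assumptions for illustration, not necessarily the article's choices:

```python
import numpy as np

# From sampled LSF to SFR: window, Fourier transform, normalize to DC.
# A Gaussian LSF on a 1/8-pixel grid stands in for the regression output.
grid = np.arange(-8, 8, 0.125)              # pixels, 8x oversampled
lsf = np.exp(-0.5 * (grid / 0.7) ** 2)      # illustrative LSF shape

win = np.hanning(lsf.size)                  # a common windowing choice
spec = np.abs(np.fft.rfft(lsf * win))
mtf = spec / spec[0]                        # normalize so SFR(0) = 1
freq = np.fft.rfftfreq(lsf.size, d=grid[1] - grid[0])   # cycles/pixel
```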

This minimalist, what-you-see-is-what-you-get approach gets around the usual need for signal conditioning such as binning, finite difference calculations and other filtering, with their drawbacks and compensations.  It has the potential to be further refined, so consider it a hot-rod DIY kit.  Even so it is an intuitively direct implementation of the method and it provides strong validation for Frans van den Bergh’s open source MTF Mapper, the undisputed king in this space,[1] as it produces very similar results with raw slanted edge captures.

Continue reading Minimalist ESF, LSF, MTF by Monotonic Regression

A Simple Model for Sharpness in Digital Cameras – Defocus

This series of articles has dealt with modeling an ideal imaging system’s ‘sharpness’ in the frequency domain.  We looked at the effects of the hardware on spatial resolution: diffraction, sampling interval, sampling aperture (e.g. a squarish pixel) and anti-aliasing (OLPF) filters.  The next two posts will deal with modeling typical simple imperfections related to the lens: defocus and spherical aberrations.

Defocus = OOF

Defocus means that the sensing plane is not exactly where it needs to be for image formation in our ideal imaging system: the image is therefore out of focus (OOF).  Said another way, light from a point source would go through the lens but converge either behind or in front of the sensing plane, as shown in the following diagram, for a lens with a circular aperture:

Figure 1. Top to bottom: Back Focus, In Focus, Front Focus.  To the right is what the relative PSF would look like on the sensing plane.  Image under license courtesy of Brion.
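In the crudest geometric-optics view (a simpler first-order model than the frequency-domain treatment the full article develops), the out-of-focus PSF of a circular aperture is a uniform blur disc, whose MTF is a jinc function.  A minimal sketch under that assumption, with an illustrative 3-pixel disc diameter:

```python
import numpy as np
from scipy.special import j1

# Geometric-optics model of defocus: the PSF is a uniform blur disc of
# diameter d, whose MTF is the jinc function 2*J1(pi*d*f)/(pi*d*f)
def defocus_mtf(f, d):
    """MTF of a uniform disc of diameter d (pixels) at frequency f (c/p)."""
    x = np.pi * d * f
    return np.where(x == 0, 1.0, 2 * j1(x) / np.maximum(x, 1e-12))

f = np.linspace(0, 0.5, 256)        # spatial frequency up to Nyquist
mtf_oof = defocus_mtf(f, d=3.0)     # assumed 3-pixel blur circle
```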

Continue reading A Simple Model for Sharpness in Digital Cameras – Defocus

A Simple Model for Sharpness in Digital Cameras – AA

This article is part of a continuing series on the determinants of sharpness in an imaging system and will discuss a simple frequency domain model for an AntiAliasing (or Optical Low Pass) Filter, a hardware component sometimes found in a digital imaging system[1].  The filter typically sits just above the sensor and its objective is to block as much as possible of the aliasing and moiré creating energy above the monochrome Nyquist spatial frequency while letting through as much as possible of the real image forming energy below that, hence the low-pass designation.

Figure 1. The blue line indicates the pass-through performance of an ideal anti-aliasing filter presented with an Airy PSF (Original): pass all spatial frequencies below Nyquist (0.5 c/p) and none above that. No filter has such ideal characteristics and, if it did, its hard edges would result in undesirable ringing in the image.

In consumer digital cameras it is often implemented by introducing one or two birefringent plates in the sensor’s filter stack.  This is how Nikon shows it for one of its DSLRs:

Figure 2. Typical Optical Low Pass Filter implementation in a current Digital Camera, courtesy of Nikon USA (yellow displacement ‘d’ added).
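A single birefringent plate splitting each ray into two spots displaced by d, as in Figure 2, is commonly modeled as a pair of half-weight impulses convolved with the PSF, which multiplies the MTF by |cos(π·d·f)|.  A minimal sketch under that standard model; the displacement value below is illustrative:

```python
import numpy as np

# Two-spot model of one birefringent plate: the PSF is convolved with a
# pair of half-weight impulses separated by the displacement d of
# Figure 2, so the MTF gains a factor |cos(pi * d * f)|
def aa_mtf(f, d=1.0):
    """MTF factor of a 2-spot beam splitter with displacement d (pixels)."""
    return np.abs(np.cos(np.pi * d * f))

# d = 1 pixel puts the first null exactly at the monochrome Nyquist
# frequency of 0.5 cycles/pixel
print(aa_mtf(np.array([0.0, 0.25, 0.5]), d=1.0))   # -> [1.0, 0.707..., ~0]
```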

Continue reading A Simple Model for Sharpness in Digital Cameras – AA

A Simple Model for Sharpness in Digital Cameras – I

The next few posts will describe a linear spatial resolution model that can help a photographer better understand the main variables involved in evaluating the ‘sharpness’ of photographic equipment and related captures.  I will show numerically that the combined spectral frequency response (MTF) of a perfect AA-less monochrome digital camera and lens in two dimensions can be described as the magnitude of the normalized product of the Fourier Transform (FT) of the lens Point Spread Function (PSF) and the FT of the pixel footprint (aperture), convolved with the FT of a square grid of Dirac delta functions centered at each pixel:

    \[ MTF_{2D} = \left|\widehat{PSF_{lens}}\cdot \widehat{PIX_{ap}}\right|_{pu}\ast\ast\: \widehat{\delta\delta_{pitch}} \]

With a few simplifying assumptions we will see that the effect of the lens and sensor on the spatial resolution of the continuous image on the sensing plane can be broken down into these simple components.  The overall ‘sharpness’ of the captured digital image can then be estimated by combining the ‘sharpness’ of each of them.
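As a numerical taste of that product, here is a 1-D sketch combining two of the simple components: the diffraction MTF of a perfect circular-aperture lens and the aperture MTF of a 100% fill-factor square pixel.  Pitch, f-number and wavelength are illustrative values, not anything from the articles:

```python
import numpy as np

# 1-D slice of the model: diffraction MTF of a perfect circular-aperture
# lens times the aperture MTF of a 100% fill-factor square pixel.
pitch = 4.5e-3                      # pixel pitch, mm (assumed)
N, lam = 5.6, 530e-6                # f-number and wavelength, mm (assumed)
fc = 1 / (lam * N)                  # diffraction cutoff, cycles/mm

f_cpp = np.linspace(0, 0.5, 256)    # cycles/pixel, up to Nyquist
s = np.clip(f_cpp / pitch / fc, 0, 1)          # normalized frequency f/fc

mtf_diff = (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s ** 2))
mtf_pix = np.abs(np.sinc(f_cpp))    # np.sinc is sin(pi x)/(pi x); width = 1 pitch
mtf_system = mtf_diff * mtf_pix     # product of the component MTFs
```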

The stage will be set in this first installment with a little background and perfect components.  Following articles will deal with the effect on captured sharpness of diffraction, sampling aperture and pitch, anti-aliasing filters, defocus and other lens aberrations.

We will learn how to measure MTF curves for our equipment, look at numerical methods to model PSFs and MTFs from the wavefront at the pupil of the lens, and the theory behind them.

Continue reading A Simple Model for Sharpness in Digital Cameras – I