Category Archives: MTF

Minimalist ESF, LSF, MTF by Monotonic Regression

Because the Slanted Edge Method of estimating the Spatial Frequency Response (SFR) of a camera and lens is one of the more popular articles on this site, I have fielded variations on the following question many times over the past ten years:

How do you go from the intensity cloud produced by the projection of a slanted edge captured in a raw file to a good estimate of the relevant Line Spread Function?

Figure 1.  Slanted edge captured in the raw data and projected to the edge normal.  The data are noisy because of shot noise and PRNU.  How do we estimate the underlying edge profile (orange line, the Edge Spread Function)?

So I decided to write down the answer that I have settled on.  It relies on monotone spline regression to obtain an Edge Spread Function (ESF) and then reuses the parameters of the regression to infer the corresponding regularized Line Spread Function (LSF) analytically in one go.

This front-loads all uncertainty onto the estimation of the ESF, since the other steps on the way to the SFR become purely mechanical.  In addition, the monotonicity constraint puts some guardrails around the curve, keeping it on the straight and narrow without further effort.
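To make this concrete, below is a minimal Python sketch of one way to implement such a fit; it is not the article's actual code, and the knot count and degree are arbitrary placeholders.  It expresses the ESF as a clamped B-spline whose coefficients are constrained to be non-decreasing (a sufficient condition for a monotone spline) and solves for them by non-negative least squares; the LSF then reuses the same parameters, differentiating the spline analytically.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import nnls

def monotone_esf_lsf(x, y, n_knots=30, degree=3):
    """Fit a non-decreasing regression spline to the projected edge cloud
    (x: distance from the edge along its normal, y: raw intensity) and
    return the fitted ESF plus its analytic derivative, the LSF."""
    order = np.argsort(x)                      # design_matrix expects sorted x
    x, y = x[order], y[order]
    # Clamped knot vector spanning the data
    t = np.r_[[x[0]] * degree,
              np.linspace(x[0], x[-1], n_knots),
              [x[-1]] * degree]
    n_coef = len(t) - degree - 1
    A = BSpline.design_matrix(x, t, degree).toarray()
    # Reparametrize c = cumsum(d) with d >= 0: non-decreasing B-spline
    # coefficients guarantee a non-decreasing spline.  Solve with NNLS.
    L = np.tril(np.ones((n_coef, n_coef)))
    d, _ = nnls(A @ L, y - y.min())
    esf = BSpline(t, L @ d + y.min(), degree)
    lsf = esf.derivative()   # same parameters, differentiated analytically
    return esf, lsf
```

From there the rest is indeed mechanical: evaluate lsf on a uniform grid, take the magnitude of its Fourier Transform and normalize it to one at the origin to obtain the SFR.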

This minimalist, what-you-see-is-what-you-get approach gets around the usual need for signal conditioning such as binning, finite difference calculations and other filtering, with their drawbacks and compensations.  It has the potential to be further refined, so consider it a hot-rod DIY kit.  Even so, it is an intuitively direct implementation of the method and it provides strong validation for Frans van den Bergh’s open source MTF Mapper, the undisputed king in this space,[1] as it produces very similar results with raw slanted edge captures. Continue reading Minimalist ESF, LSF, MTF by Monotonic Regression

Introduction to Texture MTF

Texture MTF is a method to measure the sharpness of a digital camera and lens by capturing the image of a target of known characteristics.  It purports to better evaluate the perception of fine details in low contrast areas of the image – what is referred to as ‘texture’ – in the presence of noise reduction, sharpening or other non-linear processing performed by the camera before writing data to file.

Figure 1. Image of a Dead Leaves low-contrast target. Such targets are designed to have controlled scale- and direction-invariant features with a power-law Power Spectrum.

The Modulation Transfer Function (MTF) of an imaging system represents its spatial frequency response, from which many metrics related to perceived sharpness are derived: MTF50, SQF, SQRI, CMT Acutance etc.  In these pages we have in the past used the slanted edge method to good effect to obtain accurate estimates of a system’s MTF curves.[1]

In this article we will explore three proposed methods to determine Texture MTF and/or estimate the Optical Transfer Function of the imaging system under test from a reference power-law Power Spectrum target.  All three rely on variations of the ratio of the captured to the reference image in the frequency domain: straight Fourier Transforms; Power Spectral Density; and Cross Power Density.  In so doing we will develop some intuitions about their strengths and weaknesses. Continue reading Introduction to Texture MTF
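To give a feel for how the three ratios differ in code, here is a minimal Python sketch, not taken from the article, assuming a registered, same-size captured and reference image pair and radially averaging each two dimensional spectrum before taking the ratio:

```python
import numpy as np

def radial_average(spec2d):
    """Average a centered 2-D spectrum over annuli of integer radius,
    keeping spatial frequencies up to Nyquist."""
    h, w = spec2d.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(xx - w // 2, yy - h // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=spec2d.ravel())
    counts = np.bincount(r.ravel())
    return (sums / np.maximum(counts, 1))[: min(h, w) // 2]

def texture_mtf_ratios(captured, reference):
    """The three frequency-domain ratios: plain Fourier Transform
    magnitudes, Power Spectral Densities, and Cross Power Density over
    the reference PSD, each reduced to a 1-D curve vs. frequency."""
    C = np.fft.fftshift(np.fft.fft2(captured))
    R = np.fft.fftshift(np.fft.fft2(reference))
    ft = radial_average(np.abs(C)) / radial_average(np.abs(R))
    psd = np.sqrt(radial_average(np.abs(C) ** 2) /
                  radial_average(np.abs(R) ** 2))
    cpd = radial_average((C * np.conj(R)).real) / radial_average(np.abs(R) ** 2)
    return ft, psd, cpd
```

Note that the cross power term is the only one of the three sensitive to the relative phase of capture and reference, which is one source of the different strengths and weaknesses explored in the full article.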

Fourier Optics and the Complex Pupil Function

In the last article we learned that a complex lens can be modeled as just an entrance pupil, an exit pupil and a geometrical optics black-box in between.  Goodman[1] suggests that all optical path errors for a given Gaussian point on the image plane can be thought of as being introduced by a custom phase plate at the pupil plane, delaying or advancing the light wavefront locally according to the aberration function \Delta W(u,v), as earlier described.

The phase plate distorts the forming wavefront, introducing diffraction and aberrations, while otherwise allowing us to treat the rest of the lens as if it followed geometrical optics rules.  It can be associated with either the entrance or the exit pupil.  Photographers are usually concerned with the effects of the lens on the image plane so we will associate it with the adjacent Exit Pupil.

Figure 1.  Aberrations can be fully described by distortions introduced by a fictitious phase plate inserted at the uv exit pupil plane.  The phase error distribution is the same as the path length error described by wavefront aberration function ΔW(u,v), introduced in the previous article.
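In Goodman's formalism the result is the generalized complex pupil function of the title: the unaberrated pupil's amplitude transmittance multiplied by a phase factor proportional to the path length error,

\begin{equation*} \mathcal{P}(u,v) = P(u,v)\, e^{i\frac{2\pi}{\lambda}\Delta W(u,v)} \end{equation*}

with P(u,v) equal to one inside the aperture and zero outside, and \lambda the wavelength, so that the 2\pi/\lambda factor converts the path length error \Delta W(u,v) into a phase error.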

Continue reading Fourier Optics and the Complex Pupil Function

An Introduction to Pupil Aberrations

As discussed in the previous article, so far we have assumed ideal optics, with spherical wavefronts propagating into and out of the lens’ Entrance and Exit pupils respectively.  That would only be true if there were no aberrations. In that case the photon distribution within the pupils would be uniform and such an optical system would be said to be diffraction limited.

Figure 1.   Optics as a black box, fully described for our purposes by its terminal properties at the Entrance and Exit pupils.  A horrible attempt at perspective by your correspondent: the Object, Pupils and Image planes should all be parallel and share the optical axis z.

On the other hand, if lens imperfections, aka aberrations, were present, the photon distribution in the Exit Pupil would be distorted and unable to form a perfectly spherical wavefront on exit, with consequences for the intensity distribution of photons reaching the image.

Either pupil can be used to fully describe the light collection and concentration characteristics of a lens.  In imaging we are typically interested in what happens after the lens so we will choose to associate the performance of the optics with the Exit Pupil. Continue reading An Introduction to Pupil Aberrations

A Simple Model for Sharpness in Digital Cameras – Sampling & Aliasing

Having shown that our simple two dimensional MTF model is able to predict the performance of the combination of a perfect lens and square monochrome pixel with 100% Fill Factor, we now turn to the effect of the sampling interval on spatial resolution, according to the guiding formula:

(1)   \begin{equation*} MTF_{Sys2D} = \left|\widehat{PSF_{lens}} \cdot \widehat{PIX_{ap}}\right|_{pu} \ast\ast\; \widehat{\delta\delta_{pitch}} \end{equation*}

The hats in this case mean the Fourier Transform of the respective component normalized to 1 at the origin (_{pu}), that is the individual MTFs of the perfect lens PSF, the perfect square pixel and the sampling delta grid; ** represents two dimensional convolution.
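To see what the formula implies in practice, here is a minimal Python sketch of a one dimensional slice, with made-up parameters rather than values from the article: the presampled MTF is the product of a perfect-lens diffraction MTF and a square-pixel sinc, and convolution with the delta grid replicates that product at multiples of the sampling frequency (magnitudes are summed here for illustration; strictly, aliased copies add as complex amplitudes).

```python
import numpy as np

def presampled_mtf(f, pitch, fill=1.0, cutoff=2.0):
    """Product of a perfect-lens diffraction MTF (cutoff = 1/(lambda N),
    a hypothetical value here) and a square pixel aperture MTF, 1-D slice."""
    s = np.minimum(np.abs(f) / cutoff, 1.0)
    lens = (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s**2))
    pix = np.abs(np.sinc(f * pitch * fill))  # np.sinc(x) = sin(pi x)/(pi x)
    return lens * pix

def system_mtf(f, pitch, replicas=2):
    """Convolution with the sampling delta grid: copies of the presampled
    MTF centered on multiples of the sampling frequency 1/pitch."""
    return sum(presampled_mtf(f - k / pitch, pitch)
               for k in range(-replicas, replicas + 1))

f = np.linspace(0, 1.0, 512)    # spatial frequency, cycles per pitch
mtf = system_mtf(f, pitch=1.0)  # response above 0.5 c/p folds back
```

Any presampled response left above the Nyquist frequency of 0.5 cycles per pitch shows up in the overlap of the replicas, which is aliasing by another name.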

Sampling in the Spatial Domain

While exposed, a pixel sees the scene through its aperture and accumulates energy as photons arrive.  Below left is the representation of, say, the intensity that a star projects on the sensing plane, in this case resulting in an Airy pattern since we said that the lens is perfect.  During exposure each pixel integrates (counts) the arriving photons, an operation that mathematically can be expressed as the convolution of the shown Airy pattern with a square the size of the effective pixel aperture, here assumed to have 100% Fill Factor.  This is the convolution in the continuous spatial domain of the lens PSF with the pixel aperture PSF shown in Equation (2) of the first article in the series.

Sampling is then the multiplication of the result of this convolution by an infinitesimally small Dirac delta function at the center of each pixel (the red dots below left), producing the sampled image below right.

Figure 1. Left, 1a: A highly zoomed (3200%) image of the lens PSF, an Airy pattern, projected onto the imaging plane where the sensor sits. Pixels shown outlined in yellow. A red dot marks the sampling coordinates. Right, 1b: The sampled image zoomed at 16000%, 5x as much, because in this example each pixel’s width is 5 linear units on the side.
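The same pipeline can be sketched numerically; in the Python fragment below every number (fine-grid step, Airy scale, the 5-unit pixel) is a stand-in chosen to mirror Figure 1 rather than taken from the article.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import j1

grid = 0.2                         # fine-grid step, linear units
x = np.arange(-40, 40, grid)
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y) + 1e-12         # avoid 0/0 at the origin

# Perfect-lens PSF: Airy pattern with a hypothetical scale factor
v = np.pi * r / 5.0
airy = (2 * j1(v) / v) ** 2

# Pixel integration: convolve with a 5x5-unit square aperture (100% FF)
pix = ((np.abs(X) <= 2.5) & (np.abs(Y) <= 2.5)).astype(float)
blurred = fftconvolve(airy, pix / pix.sum(), mode='same')

# Sampling: keep only the values at the pixel centers (the Dirac comb)
pitch = 5.0
step = int(round(pitch / grid))
sampled = blurred[::step, ::step]
```

Here `sampled` plays the role of the image on the right of Figure 1, one value per pixel center.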

Continue reading A Simple Model for Sharpness in Digital Cameras – Sampling & Aliasing

System MTF from Bayer Sensors

In this and the previous article I discuss how Modulation Transfer Functions (MTF) obtained from every raw color plane of a Bayer CFA in isolation can be combined to provide an objective and meaningful composite MTF curve for the imaging system as a whole.  There are two main ways to accomplish this goal:

  • an input-referred linear Hardware System MTF (MTF_L) that reflects the mix of spectral information captured in the raw data, divorced from downstream color science; and
  • an output-referred linear Luminance System MTF (MTF_Y) that reflects the luminance channel of the image as neutrally displayed.

Both are valid on their own, though the weights of the former are fixed for any Bayer sensor while those of the latter are scene, camera/lens and illuminant dependent.  For this reason I usually prefer input-referred weights as a first pass when comparing camera and lens hardware in similar conditions. Continue reading System MTF from Bayer Sensors
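Mechanically, both composites are weighted sums of the per-plane curves over a common frequency axis.  Below is a minimal Python sketch with illustrative numbers of my own choosing: the 1:2:1 split is an assumption based on the population of each color plane in a Bayer CFA, and the second set are Rec. 709 luminance coefficients standing in for output-referred weights.

```python
import numpy as np

f = np.linspace(0, 0.5, 101)                  # cycles per pixel
mtf_r, mtf_g, mtf_b = (np.exp(-(f / s) ** 2)  # placeholder per-plane MTFs
                       for s in (0.30, 0.35, 0.28))

def composite_mtf(mtf_r, mtf_g, mtf_b, weights):
    """Weighted sum of per-color-plane MTF curves sampled on the same
    frequency axis; weights are normalized to sum to one."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return w[0] * mtf_r + w[1] * mtf_g + w[2] * mtf_b

# Input-referred hardware weights: two greens per 2x2 CFA quartet
# (assumed), hence fixed for any Bayer sensor
mtf_hw = composite_mtf(mtf_r, mtf_g, mtf_b, [1, 2, 1])

# Output-referred luminance weights: e.g. Rec. 709 Y coefficients after
# the camera-to-display transform, hence scene/illuminant dependent
mtf_y = composite_mtf(mtf_r, mtf_g, mtf_b, [0.2126, 0.7152, 0.0722])
```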