Category Archives: Optics

What is Resolution?

In photography, Resolution refers to the ability of an imaging system to capture fine detail from the scene, making it a key determinant of Image Quality.  For instance, with high-resolution equipment we might be able to count the tiny leaves on a distant tree, while with lower-resolution equipment we might not.  Or the leaves might look sharp with the former and unacceptably mushy with the latter.

We quantify resolution by measuring detail contrast after it has inevitably been smeared by the imaging process.  As detail becomes smaller and more closely spaced in the image, its blurred darker and lighter parts start mixing together until the relative contrast decreases to the point that it disappears.  This limit is referred to as diffraction extinction, beyond which all detail is lost and no additional spatial information can be captured from the scene.

Sinusoidal target of increasing frequency to diffraction extinction: increasingly small detail smeared by the imaging process, highly magnified.

The units of resolution are spatial frequencies, the inverse of the size (and viewing distance) of the detail in question.  Of course at diffraction extinction no visual information is captured, so in most cases the criteria for usability are set by larger detail than that – or equivalently by lower frequencies.  Thresholds tend to be application specific and arbitrary.
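As a concrete example, assuming a diffraction limited lens and mid-spectrum light of about 0.53 μm, the extinction frequency on the sensing plane at f/8 would be

\begin{equation*} f_{ext} = \frac{1}{\lambda N} \approx \frac{1}{0.00053\,mm \times 8} \approx 236\; cycles/mm \end{equation*}

so usable thresholds necessarily sit at frequencies below that.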

The type of resolution being measured must also be specified, since the term can be applied to different physical quantities and conditions: sensor, spatial, temporal, spectral, type of light, medium etc.  In photography we are normally interested in Spatial Resolution from incoherent light traveling in air, so that will be the focus here.

Continue reading What is Resolution?

An Introduction to Pupil Aberrations

As discussed in the previous article, so far we have assumed ideal optics, with spherical wavefronts propagating into and out of the lens’s Entrance and Exit Pupils respectively.  That would only be true if there were no aberrations: in that case the photon distribution within the pupils would be uniform, and such an optical system would be said to be diffraction limited.

Figure 1.   Optics as a black box, fully described for our purposes by its terminal properties at the Entrance and Exit pupils.  A horrible attempt at perspective by your correspondent: the Object, Pupils and Image planes should all be parallel and share the optical axis z.

On the other hand, if lens imperfections (aka aberrations) were present, the photon distribution in the Exit Pupil would be distorted and unable to form a perfectly spherical wavefront out of it, with consequences for the intensity distribution of the photons reaching the image.

Either pupil can be used to fully describe the light collection and concentration characteristics of a lens.  In imaging we are typically interested in what happens after the lens so we will choose to associate the performance of the optics with the Exit Pupil. Continue reading An Introduction to Pupil Aberrations

Pupils and Apertures

We’ve seen in the last article that the job of an ideal photographic lens is simple: to receive photons from a set of directions bounded by a spherical cone with its apex at a point on the object; and to concentrate them in directions bounded  by a spherical cone with its apex at the corresponding point on the image.   In photography both cones are assumed to be in air.

In this article we will distill the photon collecting and distributing function of a complex lens down to its terminal properties, the Entrance and Exit Pupils, allowing us to deal with any lens simply and consistently. Continue reading Pupils and Apertures

Diffracted DOF Aperture Guides: 24-35mm

As a landscape shooter I often wonder whether the old rules for DOF still apply to current small pixels and sharp lenses.  I therefore roughly measured the spatial resolution performance of my Z7 with the 24-70mm f/4 S in the center to see whether ‘f/8 and be there’ still makes sense today.  The journey and the diffraction- and simple-aberration-aware model were described in the last few posts.  The results are summarized in the Landscape Aperture-Distance charts presented here for the 24, 28 and 35mm focal lengths.

I also present the data in the form of a simplified plot to aid in making the right compromises when the focusing distance is flexible.  This information is strictly valid only for the Z7 and kit lens in the center, though it probably applies just as well to cameras with similarly spec’d pixels and lenses. Continue reading Diffracted DOF Aperture Guides: 24-35mm

DOF and Diffraction: Setup

The two-thin-lens model for precision Depth Of Field estimates described in the last two articles is almost ready to be deployed.  In this article we will describe the setup used to develop the scenarios outlined in the next one.

The beauty of the hybrid geometrical-Fourier optics approach is that, with an estimate of the field produced at the exit pupil by an on-axis point source, we can generate the image of the resulting Point Spread Function and related Modulation Transfer Function.

Pretend that you are a photon from such a source in front of an f/2.8 lens focused at 10m with about 0.60 microns of third order spherical aberration – and you are about to smash yourself onto the ‘best focus’ observation plane of your camera.  Depending on whether you leave exactly from the in-focus distance of 10 meters or slightly before/after it, the impression you would leave on the sensing plane would look as follows:

Figure 1. PSF of a lens with about 0.6um of third order spherical aberration, focused at 10m.

The width of the square above is 30 microns (um), which corresponds to the diameter of the Circle of Confusion used for old-fashioned geometrical DOF calculations with full frame cameras.  The first ring of the in-focus PSF at 10.0m has a diameter of about 2.44\lambda \frac{f}{D} = 3.65 microns.   That’s about the size of the estimated effective square pixel aperture of the Nikon Z7 camera that we are using in these tests.
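That figure follows directly from the first null of the Airy pattern, assuming mid-spectrum light of about 0.535 μm at f/2.8:

\begin{equation*} 2.44\,\lambda\,\frac{f}{D} \approx 2.44 \times 0.535\,\mu m \times 2.8 \approx 3.65\;\mu m \end{equation*}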
Continue reading DOF and Diffraction: Setup

DOF and Diffraction: Image Side

This investigation of the effect of diffraction on Depth of Field is based on a two-thin-lens model, as suggested by Alan Robinson[1].  We chose this model because it allows us to associate geometrical optics with one lens and Fourier optics with the other, thus simplifying the underlying math and our understanding.

In the last article we discussed how the front element of the model presents to the rear element the wavefront resulting from an on-axis source, as a function of its distance from the lens.  We accomplished this by using simple geometry in complex notation.  In this one we will take the relative wavefront present at the exit pupil and project it onto the sensing plane, taking diffraction into account numerically.  We already know how to do that since we dealt with this subject in the recent past.
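A minimal numpy sketch of the first half of that step, assuming a unit-radius pupil, a simple quadratic defocus term and a made-up quarter-wave error (not the article’s actual code):

```python
import numpy as np

lam  = 0.535e-6            # assumed mid-spectrum wavelength, meters
W020 = 0.25 * lam          # made-up peak defocus error at the pupil rim, meters

x = np.linspace(-1, 1, 512)
X, Y = np.meshgrid(x, x)
rho2 = X**2 + Y**2         # squared normalized radial pupil coordinate

# Complex field at the exit pupil: uniform amplitude inside the aperture,
# with a quadratic (defocus) phase error growing toward the rim
pupil_field = (rho2 <= 1) * np.exp(1j * 2*np.pi * W020 * rho2 / lam)

# From here a single zero-padded FFT projects the field onto the sensing
# plane, as covered in earlier articles
```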

Figure 1. Where is the plane with the Circle of Least Confusion?  Through Focus Line Spread Function Image of a lens at f/2.8 with the indicated third order spherical aberration coefficient, and relative measures of ‘sharpness’ MTF50 and Acutance curves.  Acutance is scaled to the same peak as MTF50 for ease of comparison and refers to my typical pixel peeping conditions: 100% zoom, 16″ away from my 24″ monitor.

Continue reading DOF and Diffraction: Image Side

DOF and Diffraction: Object Side

In this and the following articles we shall explore the effects of diffraction on Depth of Field through a two-lens model that separates geometrical and Fourier optics in a way that keeps the math simple, though via complex notation.  In the process we will gain a better understanding of how lenses work.

The results of the model are consistent with what can be obtained via classic DOF calculators online but should be more precise in critical situations, like macro photography.  I am not a macro photographer, so I would be interested in having the results of the method validated by someone who is.
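For reference, the geometrical benchmark in that regime is the familiar magnification form of the Depth of Field approximation, with N the f-number, c the Circle of Confusion and m the magnification:

\begin{equation*} DOF \approx \frac{2\,N\,c\,(1+m)}{m^2} \end{equation*}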

Figure 1. Simple two-thin-lens model for DOF calculations in complex notation.  Adapted under licence.

Continue reading DOF and Diffraction: Object Side

The Nikon Z7’s Insane Sharpness

Ever since getting a Nikon Z7 MILC a few months ago I have been blown away by the level of sharpness it produces.  I thought that my surprise might be the result of moving up from 24 to 45.7MP, or of the excellent pin-point focusing mode, or of the lack of an Antialiasing filter.  Well, it turns out that there is probably more at work than that.

This weekend I pulled out the largest cutter blade I could find and set it up, rough and tumble, near vertically about 10 meters away to take a peek at what the MTF curves that produce such sharp results might look like.

Continue reading The Nikon Z7’s Insane Sharpness

Capture Sharpening: Estimating Lens PSF

The next few articles will outline the first few tiny steps towards achieving perfect capture sharpening, that is deconvolution of an image by the Point Spread Function (PSF) of the lens used to capture it.  This is admittedly a complex subject, fraught with a myriad of ever-changing variables even in a lab, let alone in the field.  But studying it can give a glimpse of the possibilities and insights into the processes involved.

I will explain the steps I followed and show the resulting images and measurements.  Jumping the gun, the blue line below represents the starting system Spatial Frequency Response (SFR)[1], the black one unattainable/undesirable perfection and the orange one the result of part of the process outlined in this series.

Figure 1. Spatial Frequency Response of the imaging system before and after Richardson-Lucy deconvolution by the PSF of the lens that captured the original image.
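To fix ideas, the core Richardson-Lucy iteration can be sketched in a few lines; this is a generic textbook formulation with scipy, not the exact pipeline behind Figure 1, and the psf argument is assumed to be normalized to unit sum:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    """Minimal Richardson-Lucy deconvolution sketch.

    observed: blurred image, 2D float array of linear (raw-like) data
    psf:      point spread function, assumed normalized to sum to 1
    """
    estimate = np.full_like(observed, observed.mean())
    psf_flipped = psf[::-1, ::-1]      # correlation rather than convolution kernel
    eps = 1e-12                        # guards against division by zero
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = observed / (blurred + eps)
        estimate = estimate * fftconvolve(ratio, psf_flipped, mode='same')
    return estimate
```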

Continue reading Capture Sharpening: Estimating Lens PSF

Wavefront to PSF to MTF: Physical Units

In the last article we saw that the intensity Point Spread Function and the Modulation Transfer Function of a lens could be easily approximated numerically by applying Discrete Fourier Transforms to its generalized exit pupil function \mathcal{P} twice in sequence.[1]

Numerical Fourier Optics: amplitude Point Spread Function, intensity PSF and MTF

Obtaining the 2D DFTs is easy: simply feed MxN numbers representing the two dimensional complex image of the Exit Pupil function in its uv space to a Fast Fourier Transform routine and, presto, it produces MxN numbers representing the amplitude of the PSF on the xy sensing plane.  Figure 1a shows a simple case where pupil function \mathcal{P} is a uniform disk representing the circular aperture of a perfect lens with MxN = 1024×1024.  Figure 1b is the resulting intensity PSF.

Figure 1. 1a Left: Array of numbers representing a circular aperture (zeros for black and ones for white).  1b Right: Array of numbers representing the PSF of image 1a, in the classic shape of an Airy Pattern (contrast slightly boosted).
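In code the whole chain really is just a handful of lines; below is a minimal numpy sketch under the same assumptions as Figure 1, with a uniform circular pupil padded within a 1024×1024 grid:

```python
import numpy as np

M = 1024
x = np.linspace(-1, 1, M)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= 0.25).astype(float)  # disk of ones, diameter half the grid

# First DFT: pupil function -> amplitude PSF; its squared magnitude
# is the intensity PSF of Figure 1b
aPSF = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
PSF  = np.abs(aPSF)**2

# Second DFT: intensity PSF -> MTF, normalized to 1 at the origin
MTF = np.abs(np.fft.fft2(PSF))
MTF /= MTF[0, 0]
```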

Simple and fast.  Wonderful.  Below is a slice through the center, the 513th row, zoomed in.  Hmm….  What are the physical units on the axes of the data produced by the DFT? Continue reading Wavefront to PSF to MTF: Physical Units

Aberrated Wave to Image Intensity to MTF

Goodman, in his excellent Introduction to Fourier Optics[1], describes how an image is formed on a camera sensing plane starting from first principles, that is electromagnetic propagation according to Maxwell’s wave equation.  If you want the play-by-play account I highly recommend his math-intensive book.  But for the budding photographer it is sufficient to know what happens at the Exit Pupil of the lens, because after that the transformations to Point Spread and Modulation Transfer Functions are straightforward, as we will show in this article.

The following diagram exemplifies the last few millimeters of the journey that light from the scene has to travel in order to be absorbed by a camera’s sensing medium.  Light from the scene in the form of field U arrives at the front of the lens.  It goes through the lens, being partly blocked and distorted by it, and arrives at the lens’s virtual back end, the Exit Pupil; we’ll call this blocking/distorting function P.  Other than in very simple cases, the Exit Pupil does not necessarily coincide with a specific physical element or Principal surface.[iv]  It is a convenient mathematical construct which condenses all of the light transforming properties of a lens into a single plane.

The complex light field at the Exit Pupil’s two dimensional uv plane is then  U\cdot P as shown below (not to scale, the product of the two arrays is element-by-element):

Figure 1. Simplified schematic diagram of the space between the exit pupil of a camera lens and its sensing plane. The space is assumed to be filled with air.
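As a preview, in compact form the remaining transformations are, per Goodman’s development and with constants omitted:

\begin{equation*} PSF(x,y) \propto \left|\mathcal{F}\{U\cdot P\}\left(\frac{x}{\lambda z},\frac{y}{\lambda z}\right)\right|^2, \qquad MTF = \left|\widehat{PSF}\right|_{pu} \end{equation*}

with \mathcal{F} the Fourier Transform, z the Exit Pupil to sensing plane distance and the pu subscript indicating normalization to one at the origin.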

Continue reading Aberrated Wave to Image Intensity to MTF

Taking the Sharpness Model for a Spin – II

This post will continue looking at the spatial frequency response measured by MTF Mapper off slanted edges in DPReview.com raw captures, and at the relative fits by the ‘sharpness’ model discussed in the last few articles.  The model takes the physical parameters of the digital camera and lens as inputs and produces theoretical directional system MTF curves comparable to measured data.  As we will see, the model seems to be able to simulate these systems well – at least within this limited set of parameters.

The following fits refer to the green channel of a number of interchangeable lens digital camera systems with different lenses, pixel sizes and formats – from the current Medium Format 100MP champ to the 1/2.3″ 18MP sensor size also sometimes found in the best smartphones.  Here is the roster with the cameras as set up:

Table 1. The cameras and lenses under test.

Continue reading Taking the Sharpness Model for a Spin – II

Taking the Sharpness Model for a Spin

The series of articles starting here outlines a model of how the various physical components of a digital camera and lens can affect the ‘sharpness’ – that is the spatial resolution – of the  images captured in the raw data.  In this one we will pit the model against MTF curves obtained through the slanted edge method[1] from real world raw captures both with and without an anti-aliasing filter.

With a few simplifying assumptions, which include ignoring aliasing and phase, the spatial frequency response (SFR or MTF) of a photographic digital imaging system near the center can be expressed as the product of the Modulation Transfer Functions of each component in it.  For a current digital camera the main ones would typically be:

(1)   \begin{equation*} MTF_{sys} = MTF_{lens} (\cdot MTF_{AA}) \cdot MTF_{pixel} \end{equation*}

all in two dimensions.  Continue reading Taking the Sharpness Model for a Spin

A Simple Model for Sharpness in Digital Cameras – Polychromatic Light

We now know how to calculate the two dimensional Modulation Transfer Function of a perfect lens affected by diffraction, defocus and third order Spherical Aberration – under monochromatic light at the given wavelength and f-number.  In digital photography however we almost never deal with light of a single wavelength.  So what effect does an illuminant with a wide spectral power distribution, going through the color filters of a typical digital camera’s CFA before reaching the sensor, have on the spatial frequency responses discussed thus far?
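Numerically the answer is simple to compute because, with incoherent light, intensity PSFs add: the polychromatic MTF is an energy-weighted average of the monochromatic MTFs.  Below is a diffraction-only sketch, with made-up Gaussian weights standing in for the combined illuminant, green CFA filter and sensor response:

```python
import numpy as np

lams = np.linspace(450e-9, 650e-9, 21)    # wavelengths spanning a green passband, meters
weights = np.exp(-0.5 * ((lams - 535e-9) / 40e-9)**2)
weights /= weights.sum()                  # made-up relative energy per wavelength

N = 2.8                                   # f-number
f = np.linspace(0, 800, 401) * 1e3        # spatial frequency, cycles/m (0 to 800 cycles/mm)

def mtf_diffraction(f, lam, N):
    """Diffraction MTF of a perfect lens with a circular aperture."""
    s = np.clip(f * lam * N, 0, 1)        # frequency as a fraction of the cutoff 1/(lam*N)
    return (2/np.pi) * (np.arccos(s) - s * np.sqrt(1 - s**2))

# Polychromatic MTF: weighted average of the monochromatic curves
mtf_poly = sum(w * mtf_diffraction(f, lam, N) for w, lam in zip(weights, lams))
```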

Monochrome vs Polychromatic Light

Not much, it turns out. Continue reading A Simple Model for Sharpness in Digital Cameras – Polychromatic Light

A Simple Model for Sharpness in Digital Cameras – Spherical Aberrations

Spherical Aberration (SA) is one key component missing from our MTF toolkit for modeling an ideal imaging system’s ‘sharpness’ in the center of the field of view in the frequency domain.  In this article formulas will be presented to compute the two dimensional Point Spread and Modulation Transfer Functions of the combination of diffraction, defocus and third order Spherical Aberration for an otherwise perfect lens with a circular aperture.

Spherical Aberrations result because most photographic lenses are designed with quasi-spherical surfaces that do not necessarily behave ideally in all situations.  For instance, they may focus light on systematically different planes depending on whether the respective ray goes through the exit pupil closer to or farther from the optical axis, as shown below:

Figure 1. Top: an ideal spherical lens focuses all rays on the same focal point. Bottom: a practical lens with Spherical Aberration focuses rays on different planes depending on their radial distance from the optical axis in the exit pupil. Image courtesy of Andrei Stroe.
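For reference, in the standard treatment the wavefront error at the exit pupil for this combination is written

\begin{equation*} W(\rho) = W_{040}\,\rho^4 + W_{020}\,\rho^2 \end{equation*}

with \rho the normalized radial pupil coordinate, W_{040} the peak third order spherical aberration coefficient and W_{020} the defocus coefficient; the generalized pupil function then becomes \mathcal{P}(\rho) = P(\rho)\,e^{i 2\pi W(\rho)/\lambda}.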

Continue reading A Simple Model for Sharpness in Digital Cameras – Spherical Aberrations

A Simple Model for Sharpness in Digital Cameras – Defocus

This series of articles has dealt with modeling an ideal imaging system’s ‘sharpness’ in the frequency domain.  We looked at the effects of the hardware on spatial resolution: diffraction, sampling interval, sampling aperture (e.g. a squarish pixel), anti-aliasing OLPAF filters.  The next two posts will deal with modeling typical simple imperfections related to the lens: defocus and spherical aberrations.

Defocus = OOF

Defocus means that the sensing plane is not exactly where it needs to be for image formation in our ideal imaging system: the image is therefore out of focus (OOF).  Said another way, light from a point source would go through the lens but converge either behind or in front of the sensing plane, as shown in the following diagram, for a lens with a circular aperture:

Figure 1. Top to bottom: Back Focus, In Focus, Front Focus.  To the right is what the corresponding PSF would look like on the sensing plane.  Image under license courtesy of Brion.
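To get a sense of scale, a longitudinal displacement \delta z of the sensing plane corresponds, to a first approximation, to a peak defocus wavefront error at the pupil rim of

\begin{equation*} W_{020} = \frac{\delta z}{8 N^2} \end{equation*}

with N the f-number: a 0.1mm focusing error at f/2.8, for instance, amounts to roughly 0.1/(8 \times 2.8^2) \approx 1.6 microns of W_{020}, several wavelengths of visible light.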

Continue reading A Simple Model for Sharpness in Digital Cameras – Defocus

A Simple Model for Sharpness in Digital Cameras – Sampling & Aliasing

Having shown that our simple two dimensional MTF model is able to predict the performance of the combination of a perfect lens and a square monochrome pixel with 100% Fill Factor, we now turn to the effect of the sampling interval on spatial resolution, according to the guiding formula:

(1)   \begin{equation*} MTF_{Sys2D} = \left|\widehat{PSF_{lens}} \cdot \widehat{PIX_{ap}}\right|_{pu} \ast\ast\; \widehat{\delta\delta_{pitch}} \end{equation*}

The hats in this case mean the Fourier Transform of the respective component normalized to 1 at the origin (_{pu}), that is the individual MTFs of the perfect lens PSF, the perfect square pixel and the delta grid; ** represents two dimensional convolution.

Sampling in the Spatial Domain

While exposed, a pixel sees the scene through its aperture and accumulates energy as photons arrive.  Below left is a representation of, say, the intensity that a star projects on the sensing plane, in this case resulting in an Airy pattern since we said that the lens is perfect.  During exposure each pixel integrates (counts) the arriving photons, an operation that mathematically can be expressed as the convolution of the shown Airy pattern with a square the size of the effective pixel aperture, here assumed to have 100% Fill Factor.  It is the convolution in the continuous spatial domain of the lens PSF with the pixel aperture PSF shown in Equation (2) of the first article in the series.

Sampling is then the multiplication of the result of that convolution by an infinitesimally small Dirac delta function at the center of each pixel (the red dots below left), producing the sampled image below right.

Figure 1. Left, 1a: A highly zoomed (3200%) image of the lens PSF, an Airy pattern, projected onto the imaging plane where the sensor sits. Pixels shown outlined in yellow. A red dot marks the sampling coordinates. Right, 1b: The sampled image zoomed at 16000%, 5x as much, because in this example each pixel’s width is 5 linear units on the side.
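The whole of Figure 1 can be mimicked in one dimension with a few lines of numpy; a toy sketch, with the scale arbitrarily chosen so that the Airy first null falls at about 1.22 pixel pitches and the pixel given 100% Fill Factor:

```python
import numpy as np
from scipy.special import j1

# Fine 'continuous' grid: 5 samples per pixel pitch, as in Figure 1
dx = 0.2
x = np.arange(-20, 20, dx)                # units of pixel pitches

# Airy pattern intensity from the perfect lens; the scale is arbitrary,
# placing the first null at about 1.22 pitches
r = np.abs(x) + 1e-12
airy = (2 * j1(np.pi * r) / (np.pi * r))**2

# Pixel aperture: a box one pitch wide (100% Fill Factor); photon
# integration is the convolution of the two
box = np.ones(int(1/dx)) / int(1/dx)
integrated = np.convolve(airy, box, mode='same')

# Sampling: keep only the values at pixel centers (the red dots of Figure 1)
sampled = integrated[::int(1/dx)]
```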

Continue reading A Simple Model for Sharpness in Digital Cameras – Sampling & Aliasing

A Simple Model for Sharpness in Digital Cameras – Diffraction and Pixel Aperture

Now that we know from the introductory article that the spatial frequency response of a typical perfect digital camera and lens (its Modulation Transfer Function) can be modeled simply as the product of the Fourier Transforms of the lens Point Spread Function and of the pixel aperture, convolved with the Fourier Transform of a Dirac delta grid at pixel pitch spacing

(1)   \begin{equation*} MTF_{Sys2D} = \left|\widehat{PSF_{lens}} \cdot \widehat{PIX_{ap}}\right|_{pu} \ast\ast\; \widehat{\delta\delta_{pitch}} \end{equation*}

we can take a closer look at each of those components (pu here indicating normalization to one at the origin).  I used Matlab to generate the examples below but you can easily do the same with a spreadsheet – or with a few lines of code, as sketched here.
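Here is a numpy equivalent of those spreadsheet formulas for the first two components, using the usual closed forms for a circular aperture and a 100% Fill Factor square pixel; pitch and wavelength values are just assumptions for illustration:

```python
import numpy as np

pitch = 4.35e-6               # pixel pitch, meters (assumed, Z7-like)
lam   = 0.535e-6              # assumed mid-spectrum wavelength, meters
N     = 5.6                   # f-number

f = np.linspace(0, 0.5, 256) / pitch      # cycles/m, from zero to Nyquist

# Diffraction MTF of a perfect lens with a circular aperture
s = np.clip(f * lam * N, 0, 1)            # frequency as a fraction of the cutoff 1/(lam*N)
mtf_lens = (2/np.pi) * (np.arccos(s) - s * np.sqrt(1 - s**2))

# Pixel aperture MTF: magnitude of the FT of a box one pitch wide;
# np.sinc(x) is the normalized sin(pi x)/(pi x)
mtf_pixel = np.abs(np.sinc(f * pitch))

# Their product, per Equation (1), with sampling (aliasing) ignored
mtf_sys = mtf_lens * mtf_pixel
```

Continue reading A Simple Model for Sharpness in Digital Cameras – Diffraction and Pixel Aperture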

A Simple Model for Sharpness in Digital Cameras – I

The next few posts will describe a linear spatial resolution model that can help a photographer better understand the main variables involved in evaluating the ‘sharpness’ of photographic equipment and related captures.   I will show numerically that the combined spatial frequency response (MTF) of a perfect AA-less monochrome digital camera and lens in two dimensions can be described as the magnitude of the normalized product of the Fourier Transform (FT) of the lens Point Spread Function and the FT of the pixel footprint (aperture), convolved with the FT of a rectangular grid of Dirac delta functions centered at each pixel:

    \[ MTF_{2D} = \left|\widehat{PSF_{lens}} \cdot \widehat{PIX_{ap}}\right|_{pu} \ast\ast\; \widehat{\delta\delta_{pitch}} \]

With a few simplifying assumptions we will see that the effect of the lens and sensor on the spatial resolution of the continuous image on the sensing plane can be broken down into these simple components.  The overall ‘sharpness’ of the captured digital image can then be estimated by combining the ‘sharpness’ of each of them.

The stage will be set in this first installment with a little background and perfect components.  Later additional detail will be provided to take into account pixel aperture and Anti-Aliasing filters.  Then we will look at simple aberrations.  Next we will learn how to measure MTF curves for our equipment, and look at numerical methods to model PSFs and MTFs from the wavefront at the aperture. Continue reading A Simple Model for Sharpness in Digital Cameras – I

Chromatic Aberrations MTF Mapped

A number of interesting insights come to light once one realizes that, as far as the slanted edge method (of measuring the Modulation Transfer Function of a Bayer CFA digital camera and lens from its raw data) is concerned, it is as if it were dealing with identical images behind three color filters, each in its own separate, full resolution color plane:

Figure 1. CFA sensor frequency domain model: the Modulation Transfer Function of the three color planes can be measured separately, directly in the raw data, by the open source MTF Mapper.

Continue reading Chromatic Aberrations MTF Mapped

The Units of Spatial Resolution

Several sites for photographers perform spatial resolution ‘sharpness’ testing of a specific lens and digital camera setup by capturing a target.  You can also measure your own equipment relatively easily to determine how sharp your hardware is.  However, comparing results from site to site and to your own can be difficult and/or misleading, starting with the multiplicity of units used: cycles/pixel, line pairs/mm, line widths/picture height, line pairs/image height, cycles/picture height etc.

This post will address the units involved in spatial resolution measurement, using as an example readings from the popular slanted edge method, although their applicability is generic.
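The basic conversions are worth keeping handy; with the pixel pitch expressed in mm and the picture height in pixels:

\begin{equation*} lp/mm = \frac{cycles/pixel}{pitch}, \qquad lw/ph = 2 \times cycles/pixel \times picture\ height \end{equation*}

so, for instance, 0.25 cycles/pixel on a Z7-like camera (4.35 μm pitch, 5504-pixel picture height) corresponds to about 57 lp/mm and 2752 lw/ph.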

Continue reading The Units of Spatial Resolution

The Slanted Edge Method

My preferred method for measuring the spatial resolution performance of photographic equipment these days is the slanted edge method.  It requires a minimum of additional effort compared to capturing and simply eyeballing a pinch, Siemens or other chart, but it gives more useful, accurate and quantitative information in the language and units that have been used to characterize optical systems for over a century: it produces a good approximation to the Modulation Transfer Function of the two dimensional camera/lens system impulse response – at the location of the edge, in the direction perpendicular to it.

Much of what there is to know about an imaging system’s spatial resolution performance can be deduced by analyzing its MTF curve, which represents the system’s ability to capture increasingly fine detail from the scene – starting with perceptually relevant metrics like MTF50, discussed a while back.

In fact the area under the curve, weighted by some approximation of the Contrast Sensitivity Function of the Human Visual System, is the basis for many other, better accepted single-figure ‘sharpness’ metrics with names like Subjective Quality Factor (SQF), Square Root Integral (SQRI), CMT Acutance, etc.   And all this simply from capturing the image of a slanted edge, which one can actually and somewhat easily do at home, as presented in the next article.
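For the curious, the core of the method can be sketched in a few lines of numpy; a bare-bones illustration with the edge angle assumed known, and without the refinements a mature implementation like MTF Mapper applies:

```python
import numpy as np

def slanted_edge_mtf(roi, edge_angle_deg, oversample=4):
    """Bare-bones slanted edge SFR sketch.

    roi:            2D float array containing a single near-vertical edge
    edge_angle_deg: edge angle from vertical, assumed known here
                    (real implementations estimate it from the image)
    """
    h, w = roi.shape
    tan_a = np.tan(np.radians(edge_angle_deg))

    # Project every pixel onto the axis across the edge and accumulate an
    # oversampled Edge Spread Function (super-resolution binning)
    ys, xs = np.mgrid[0:h, 0:w]
    dist = xs - tan_a * ys               # horizontal distance from the edge
                                         # (~perpendicular for small angles)
    bins = np.round((dist - dist.min()) * oversample).astype(int)
    sums = np.bincount(bins.ravel(), weights=roi.ravel())
    counts = np.bincount(bins.ravel())
    esf = sums / np.maximum(counts, 1)

    # Differentiate to get the Line Spread Function, window it, DFT it
    lsf = np.gradient(esf)
    lsf *= np.hanning(lsf.size)          # mild window to tame edge noise
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]

    # Frequency axis in cycles/pixel of the original grid
    freq = np.fft.rfftfreq(lsf.size, d=1/oversample)
    return freq, mtf
```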

Continue reading The Slanted Edge Method