As a landscape shooter I often wonder whether the old rules of thumb for DOF still apply to current small pixels and sharp lenses. I therefore roughly measured the spatial resolution performance of my Z7 with the 24-70mm/4 S in the center to see whether ‘f/8 and be there’ still makes sense today. The journey and the diffraction- and simple-aberration-aware model were described in the last few posts. The results are summarized in the Landscape Aperture-Distance charts presented here for the 24, 28 and 35mm focal lengths.
I also present the data in the form of a simplified plot to aid in making the right compromises when the focusing distance is flexible. This information is strictly valid for the center of the Z7 and kit lens only, though it should apply just as well to cameras with similarly spec’d pixels and lenses. Continue reading Diffracted DOF Aperture Guides: 24-35mm→
After an exhausting two and a half hour hike you are finally resting, sitting on a rock at the foot of your destination, a tiny alpine lake, breathing in the thin air and absorbing the majestic scenery. A cool light breeze suddenly ripples the surface of the water, morphing what has until now been a perfect reflection into an impressionistic interpretation of the impervious mountains in the distance.
The beautiful flowers in the foreground are so close you can touch them, the reflection in the water 10-20m away, the imposing mountains in the background a few hundred meters further out. You realize you are hungry. As you search your backpack for the two panini you prepared this morning you begin to ponder how best to capture the scene: subject, composition, exposure, Depth of Field.
Depth of Field. Where to focus and at what f/stop? You tip your hat and just as you look up at the bluest of blue skies the number 16 starts enveloping your mind, like rays from the warm noon sun. You dial it in and as you squeeze the trigger that familiar nagging question bubbles up, as it always does in such conditions. If this were a one shot deal, was that really the best choice?
In this article we attempt to make explicit some of the trade-offs involved in the choice of Aperture for 24mm landscapes. The result of the process is a set of guidelines. The answers are based on the previously introduced diffraction-aware model for sharpness in the center throughout the depth of field – and a tripod-mounted Nikon Z7 + Nikkor 24-70mm/4 S kit lens at 24mm. Continue reading DOF and Diffraction: 24mm Guidelines→
The two-thin-lens model for precision Depth Of Field estimates described in the last two articles is almost ready to be deployed. In this article we describe the setup that will be used to develop the scenarios outlined in the next one.
The beauty of the hybrid geometrical-Fourier optics approach is that, with an estimate of the field produced at the exit pupil by an on-axis point source, we can generate the image of the resulting Point Spread Function and related Modulation Transfer Function.
Pretend that you are a photon from such a source in front of an f/2.8 lens focused at 10m with about 0.60 microns of third order spherical aberration – and you are about to smash yourself onto the ‘best focus’ observation plane of your camera. Depending on whether you leave exactly from the in-focus distance of 10 meters or slightly before/after it, the impression you would leave on the sensing plane would look as follows:
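The pupil-plane setup just described can be sketched numerically. The snippet below is a minimal illustration, not the article's actual code: the names, grid size and wavelength are my assumptions. It builds the generalized pupil function of a circular aperture carrying defocus and third order spherical aberration as Seidel wavefront terms.

```python
import numpy as np

# Minimal sketch (assumed names and values): generalized pupil function
# with defocus (W020) and third order spherical aberration (W040),
# both expressed in microns of wavefront error at the pupil edge.
M = 512                    # samples across the pupil grid
lam = 0.55                 # wavelength in microns (assumed green light)
W020 = 0.0                 # defocus coefficient, microns (0 = in focus)
W040 = 0.60                # spherical aberration coefficient, microns

y, x = np.mgrid[-1:1:1j*M, -1:1:1j*M]
rho2 = x**2 + y**2                     # squared normalized pupil radius
aperture = (rho2 <= 1.0)               # uniform disk amplitude

# Optical path difference across the pupil (Seidel polynomial terms)
opd = W020 * rho2 + W040 * rho2**2
pupil = aperture * np.exp(1j * 2*np.pi/lam * opd)
```

Feeding `pupil` to a 2D FFT, as in the earlier posts, then yields the amplitude PSF for that combination of defocus and spherical aberration.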
The width of the square above is 30 microns (µm), which corresponds to the diameter of the Circle of Confusion used for old-fashioned geometrical DOF calculations with full frame cameras. The first ring of the in-focus PSF at 10.0m has a diameter of about 2.44λN ≈ 3.65 microns. That’s about the size of the estimated effective square pixel aperture of the Nikon Z7 camera that we are using in these tests. Continue reading DOF and Diffraction: Setup→
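As a quick sanity check of those numbers (the mid-spectrum wavelength of about 0.535 µm is my assumption), the first-null diameter of the Airy pattern at f/2.8 works out as follows:

```python
# Back-of-envelope check (wavelength is an assumption of mine):
lam = 0.535                  # microns, mid-spectrum green
N = 2.8                      # f-number from the example above
airy_diam = 2.44 * lam * N   # diameter of the first null of the Airy PSF
# airy_diam comes out around 3.65 microns, roughly one Z7 pixel aperture
```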
This investigation of the effect of diffraction on Depth of Field is based on a two-thin-lens model, as suggested by Alan Robinson[1]. We chose this model because it allows us to associate geometrical optics with one lens and Fourier optics with the other, thus simplifying the underlying math and our understanding.
In the last article we discussed how the front element of the model could present at the rear element the wavefront resulting from an on-axis source as a function of distance from the lens. We accomplished this by using simple geometry in complex notation. In this one we will take the relative wavefront present at the exit pupil and project it onto the sensing plane, taking diffraction into account numerically. We already know how to do it since we dealt with this subject in the recent past.
In this and the following articles we shall explore the effects of diffraction on Depth of Field through a two-lens model that separates geometrical and Fourier optics in a way that keeps the math simple, albeit in complex notation. In the process we will gain a better understanding of how lenses work.
The results of the model are consistent with what can be obtained via classic DOF calculators online but should be more precise in critical situations, like macro photography. I am not a macro photographer, so I would be interested in having the results of the method validated by someone who is.
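For reference, the classic geometrical calculation that such online DOF calculators implement can be sketched in a few lines. This is a hedged sketch of the textbook thin-lens formulas, not the two-lens model discussed here; the 0.030 mm Circle of Confusion and the function name are my assumptions.

```python
import math

def dof_limits(f, N, s, coc=0.030):
    """Near and far limits of acceptable sharpness per the classic
    geometrical formulas. f = focal length, N = f-number, s = focus
    distance, coc = Circle of Confusion; all lengths in mm."""
    H = f * f / (N * coc) + f                 # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else math.inf
    return near, far

# 24mm at f/16 focused at 5m: near limit just under 1m, far at infinity
near, far = dof_limits(f=24, N=16, s=5000)
```

Such formulas know nothing about diffraction, which is exactly the gap the diffraction-aware model is meant to fill.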
In the last article we saw that the intensity Point Spread Function and the Modulation Transfer Function of a lens could be easily approximated numerically by applying Discrete Fourier Transforms to its generalized exit pupil function twice in sequence.[1]
Obtaining the 2D DFTs is easy: simply feed MxN numbers representing the two dimensional complex image of the Exit Pupil function in its space to a Fast Fourier Transform routine and, presto, it produces MxN numbers representing the amplitude of the PSF on the sensing plane. Figure 1a shows a simple case where the pupil function is a uniform disk representing the circular aperture of a perfect lens, with MxN = 1024×1024. Figure 1b is the resulting intensity PSF.
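The two-DFT recipe can be sketched as follows (a minimal illustration, with variable names and the disk radius assumed by me): a uniform-disk pupil is transformed once to obtain the amplitude PSF, squared for the intensity PSF, then transformed again for the MTF.

```python
import numpy as np

M = 1024
y, x = np.mgrid[-1:1:1j*M, -1:1:1j*M]
# Uniform disk pupil of a perfect lens, padded well inside the grid
pupil = ((x**2 + y**2) <= 0.25**2).astype(complex)

# First DFT: pupil function -> amplitude PSF on the sensing plane
amp_psf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
psf = np.abs(amp_psf)**2            # intensity PSF
psf /= psf.sum()                    # normalize total energy to 1

# Second DFT: intensity PSF -> MTF
mtf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
mtf /= mtf.max()                    # MTF is 1 at zero frequency
```

The `fftshift`/`ifftshift` calls only re-order samples so that the pupil center and zero frequency sit where you expect; they do not change the magnitudes involved.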
Simple and fast. Wonderful. Below is a slice through the center, the 513th row, zoomed in. Hmm… what are the physical units on the axes of the data produced by the DFT? Continue reading Wavefront to PSF to MTF: Physical Units→
Goodman, in his excellent Introduction to Fourier Optics[1], describes how an image is formed on a camera sensing plane starting from first principles, that is electromagnetic propagation according to Maxwell’s wave equation. If you want the play-by-play account I highly recommend his math-intensive book. But for the budding photographer it is sufficient to know what happens at the Exit Pupil of the lens, because after that the transformations to Point Spread and Modulation Transfer Functions are straightforward, as we will show in this article.
The following diagram exemplifies the last few millimeters of the journey that light from the scene has to travel in order to be absorbed by a camera’s sensing medium. Light from the scene arrives at the front of the lens in the form of a field. It goes through the lens, being partly blocked and distorted by it, and arrives at the lens’s virtual back end, the Exit Pupil; we’ll call this blocking/distorting function the pupil function. Other than in very simple cases, the Exit Pupil does not necessarily coincide with a specific physical element or Principal surface.[iv] It is a convenient mathematical construct which condenses all of the light-transforming properties of a lens into a single plane.
The complex light field at the Exit Pupil’s two dimensional plane is then as shown below (not to scale; the product of the two arrays is taken element-by-element):
We now know how to calculate the two dimensional Modulation Transfer Function of a perfect lens affected by diffraction, defocus and third order Spherical Aberration – under monochromatic light at a given wavelength and f-number. In digital photography, however, we almost never deal with light of a single wavelength. So what effect does an illuminant with a wide spectral power distribution, filtered by one of the color filters of a typical digital camera’s CFA before reaching the sensor, have on the spatial frequency responses discussed thus far?
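One common approximation (my sketch, not necessarily the method used in this series) treats the polychromatic MTF as a weighted average of monochromatic MTFs, the weights being the product of the illuminant's spectral power distribution and the CFA filter's transmission at each wavelength. Below with made-up weights and the diffraction-limited MTF of a circular aperture:

```python
import numpy as np

def diffraction_mtf(nu, lam_um, f_number):
    """Diffraction-limited MTF of a circular aperture.
    nu in cycles/mm, lam_um in microns; cutoff is 1/(lambda*N)."""
    nu_c = 1000.0 / (lam_um * f_number)       # cutoff in cycles/mm
    s = np.clip(nu / nu_c, 0.0, 1.0)
    return (2/np.pi) * (np.arccos(s) - s * np.sqrt(1 - s**2))

# Illustrative wavelengths and SPD x CFA weights (made-up values)
wavelengths = [0.45, 0.55, 0.65]              # microns
weights = np.array([0.2, 0.5, 0.3])
weights = weights / weights.sum()             # normalize to unit sum

nu = np.linspace(0, 400, 401)                 # cycles/mm
poly_mtf = sum(w * diffraction_mtf(nu, lam, f_number=5.6)
               for w, lam in zip(weights, wavelengths))
```

The weighted curve rolls off more gradually than any single monochromatic MTF near its cutoff, since each wavelength cuts off at its own 1/(λN) frequency.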
This series of articles has dealt with modeling an ideal imaging system’s ‘sharpness’ in the frequency domain. We looked at the effects of the hardware on spatial resolution: diffraction, sampling interval, sampling aperture (e.g. a squarish pixel) and anti-aliasing (OLPF) filters. The next two posts will deal with modeling typical simple imperfections related to the lens: defocus and spherical aberrations.
Defocus = OOF
Defocus means that the sensing plane is not exactly where it needs to be for image formation in our ideal imaging system: the image is therefore out of focus (OOF). Said another way, light from a point source would go through the lens but converge either behind or in front of the sensing plane, as shown in the following diagram, for a lens with a circular aperture:
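A hedged back-of-envelope sketch of the geometry above (thin-lens, small-angle assumptions; names are mine): displacing the sensing plane by dz spreads a point source over a blur circle of diameter dz/N and introduces a peak defocus wavefront error of dz/(8N²) at the pupil edge.

```python
def defocus_blur(dz_um, f_number):
    """Geometric blur diameter and peak defocus OPD (both in microns)
    for a sensing plane displaced dz_um microns from best focus,
    under the standard thin-lens small-angle approximations."""
    blur_diam = dz_um / f_number                # blur circle diameter
    w020 = dz_um / (8 * f_number**2)            # peak defocus OPD
    return blur_diam, w020

# 100 um of displacement at f/2.8: ~36 um blur, ~1.6 um of OPD
blur, w020 = defocus_blur(dz_um=100.0, f_number=2.8)
```

The OPD form of the error is what plugs naturally into the pupil-function model used throughout this series.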