The two-thin-lens model for precision Depth of Field estimates described in the last two articles is almost ready to be deployed. In this article we describe the setup that will be used to develop the scenarios outlined in the next one.
The beauty of the hybrid geometrical-Fourier optics approach is that, with an estimate of the field produced at the exit pupil by an on-axis point source, we can generate the image of the resulting Point Spread Function and related Modulation Transfer Function.
Pretend that you are a photon from such a source in front of an f/2.8 lens focused at 10m with about 0.60 microns of third order spherical aberration – and you are about to smash yourself onto the ‘best focus’ observation plane of your camera. Depending on whether you leave from exactly the in-focus distance of 10 meters or slightly before/after it, the impression you would leave on the sensing plane would look as follows:
The width of the square above is 30 microns (um), which corresponds to the diameter of the Circle of Confusion used for old-fashioned geometrical DOF calculations with full frame cameras. The first ring of the in-focus PSF at 10.0m has a diameter of about 2.44λN ≈ 3.65 microns. That’s about the size of the estimated effective square pixel aperture of the Nikon Z7 camera that we are using in these tests.
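As a quick check on the numbers above, the first dark ring of the Airy pattern has diameter 2.44λN. A minimal Python sketch, using the mean green wavelength and f-number quoted in this article:

```python
# First dark ring of the Airy diffraction pattern has diameter 2.44 * lambda * N.
wavelength_um = 0.535   # mean green wavelength from the text, in microns
f_number = 2.8

airy_first_ring_um = 2.44 * wavelength_um * f_number
print(f"First ring diameter: {airy_first_ring_um:.3f} um")   # ~3.65 um, as in the text

# Compare with the 30 um Circle of Confusion of geometric DOF calculations
coc_um = 30.0
print(f"CoC / first ring: {coc_um / airy_first_ring_um:.1f}x")
```

The historical CoC is roughly eight times wider than the in-focus diffraction core, which is why geometric DOF calculations ignore diffraction without much harm at wide apertures.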
Scaling the ‘Focal’ Length
The PSFs will always look like the animation above regardless of any available physical units because the relative Fraunhofer diffraction pattern will magically appear on the imaging plane, wherever that may occur. Physical scaling of both the PSF and MTF results is directly related to the product λN, as described in detail in a dedicated article.
We saw earlier that for our two-thin-lens model to produce ‘best focus’ the sensing plane of the camera will need to be located at working focal length fw, as defined in Equation (6) of the previous article. Consequently, the working f-number Nw will be equal to fw divided by the diameter of the aperture.
Scaling the Aberrations
In typical photographic situations, with the subject a fair distance away from the lens, working focal length and f-number are similar to their nominal values. But when approaching the macro photography regime the differences can be quite significant, so all parameters that depend on them will have to be scaled accordingly.
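To illustrate why this matters, the working f-number is often approximated in textbooks as Nw ≈ N·(1 + |m|), with m the magnification and unit pupil magnification assumed. The sketch below uses that simple thin-lens approximation, not the two-thin-lens model of the series, just to show how quickly the nominal value stops being a good stand-in near the macro regime:

```python
# Thin-lens sketch: working f-number vs subject distance.
# Nw ~ N * (1 + |m|) is the usual textbook approximation (unit pupil
# magnification assumed); the two-thin-lens model of this series refines it.
def magnification(f_mm: float, subject_mm: float) -> float:
    """Thin-lens magnification: 1/f = 1/s + 1/s'  ->  m = f / (s - f)."""
    return f_mm / (subject_mm - f_mm)

def working_f_number(n: float, m: float) -> float:
    return n * (1 + abs(m))

f_mm, n = 50.0, 2.8
for subject_m in (10.0, 1.0, 0.2):
    m = magnification(f_mm, subject_m * 1000)
    print(f"subject {subject_m:>5.1f} m: m = {m:.3f}, Nw = {working_f_number(n, m):.2f}")
```

At 10m the working f-number barely moves from the nominal f/2.8; at 0.2m it is already well past f/3.5.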
The assumptions for the scenarios to follow are based on the characteristics of a Nikon Z7 body mounting a Nikkor 50mm/1.8 S prime and a 24-70mm/4 S zoom lens. One parameter we are missing is the third order spherical aberration peak-to-valley coefficient W040 and how it varies with f-number.
To determine it, a box-cutter knife edge was captured 10 meters away for the 50mm lens and 3.5m away for the 24mm, in the center of the field of view, at the nominal f-numbers listed below. The p-v coefficients are labeled W040 in the tables for this exercise, though they are the plug in the fit (together with the p-v defocus coefficient W020), so in practice they also collect non-idealities like axial color, as described in a related article.
The tabled data below are obtained from the green raw channels, with a mean wavelength expected to be 0.535um and a pixel of about 4.33um pitch with a 0.88 linear aperture multiplier, yielding an assumed effective square pixel aperture of 3.81um on a side. Testing was not performed in studio conditions, so there is a certain amount of noise in the system, especially given the limited number of pixels on the knife (fewer than 150 on the long side).
First the Nikkor 50mm/1.8 S:
Nikkor 50mm/1.8 S | f/2 | f/2.8 | f/4 | f/5.6 | f/8 |
---|---|---|---|---|---|
Wyant Z8 (nm) | 198.1 | 99.6 | 50.2 | 35.8 | 13.3 |
p-v W040 (wavelengths) | 2.220 | 1.117 | 0.530 | 0.400 | 0.149 |
MTF50 (lp/mm) | 81.1 | 100.2 | 105.2 | 93.7 | 76.2 |
We know that peak-to-valley coefficient W040 should theoretically be inversely proportional to the f-number squared, and indeed that is what the measurements show[*]:
It seems therefore reasonable to model aberrations for the 50mm prime per the cyan line:

(1)   W040 ≈ 8.9 / N²   peak-to-valley wavelengths

The numerator of the fraction in the Equation would make for a neat lens quality metric as far as aberrations in the center go, wouldn’t it? Call it quality factor QF.
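The quality factor can be recovered from the measured table above with a one-parameter least-squares fit of W040 against 1/N², through the origin. A Python sketch using the 50mm data from the table:

```python
# One-parameter least-squares fit through the origin: W040 = QF / N^2.
# Data are the measured p-v W040 values (in wavelengths) from the table above.
f_numbers = [2.0, 2.8, 4.0, 5.6, 8.0]
w040 = [2.220, 1.117, 0.530, 0.400, 0.149]

x = [1.0 / n**2 for n in f_numbers]                       # regressor: 1/N^2
qf = sum(xi * wi for xi, wi in zip(x, w040)) / sum(xi**2 for xi in x)
print(f"QF ~ {qf:.1f}")                                   # about 8.9 for this prime

# Model check: predicted vs measured W040 at each f-stop
for n, wi in zip(f_numbers, w040):
    print(f"f/{n}: measured {wi:.3f}, model {qf / n**2:.3f}")
```

Note that f/5.6 sits visibly above the model line, consistent with the noise caveats mentioned earlier.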
I have since simplified the procedure to determine this quality factor and described it in the appendix. These are the estimated QFs for a Nikkor 24-70mm/4 S at its various focal lengths in the center.
Nikkor 24-70/4 S | 24mm | 28mm | 35mm | 50mm | 70mm |
---|---|---|---|---|---|
QF | 17.9 | 20.1 | 20.1 | 20.4 | 20.4 |
It looks like it is best corrected at 24mm, the lowest QF. There is a lot of play in the system so this is not exactly Jim Kasson quality data – but it will do for our DOF modeling purposes.
Measures of Sharpness
‘Depth’ in DOF depends on a threshold to indicate what is to be considered acceptably in-focus and what isn’t. We therefore need to decide on a sharpness metric to guide our choice.
Historical Depth of Field calculations are based on the diameter of the out-of-focus disk obtained geometrically, the Circle of Confusion; refer to the article on defocus for a description of it. All detail smaller than a CoC of about 0.03mm (30um) is deemed by historical assumptions to be in focus when captured with a full frame camera.
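For reference, the classical CoC-based near and far limits follow from the hyperfocal distance with the standard thin-lens formulas. A minimal Python sketch, using the 50mm f/2.8 at 10m setup from the opening of this article and the 30um CoC:

```python
# Classical geometric DOF from the Circle of Confusion (thin-lens formulas).
def hyperfocal_mm(f_mm: float, n: float, coc_mm: float) -> float:
    return f_mm**2 / (n * coc_mm) + f_mm

def dof_limits_mm(f_mm: float, n: float, coc_mm: float, subject_mm: float):
    h = hyperfocal_mm(f_mm, n, coc_mm)
    near = h * subject_mm / (h + subject_mm - f_mm)
    far = h * subject_mm / (h - subject_mm + f_mm) if subject_mm < h else float("inf")
    return near, far

# 50mm at f/2.8 focused at 10 m with the historical 30 um CoC
near, far = dof_limits_mm(50.0, 2.8, 0.030, 10_000.0)
print(f"near {near/1000:.2f} m, far {far/1000:.2f} m")   # roughly 7.5 m to 15 m
```

These are the distances the rest of the series compares against the diffraction-aware model.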
Reality is of course more complicated than that, and that’s what we have been attempting to model in these pages. For instance, these are the green channel PSFs from Figure 1 with perfect ‘focus’ at 10m and just third order Spherical Aberration, representative of the performance of a well corrected lens in the center of the field of view. The red line represents 30um, the historical diameter of the CoC.
More modern measures tend to exploit the correlation between the Modulation Transfer Function of the imaging system and humans’ perception of sharpness. Over the last half century the more popular such metrics have been variations on taking the area under the MTF curve after weighting it by a Contrast Sensitivity Function (CSF, the estimated MTF of the human visual system, see this article for an introduction). This means that they depend on viewing size, distance and conditions. They have names like SQF, SQRI and Edge Acutance.
SQF[1] uses a CSF that emphasizes higher frequencies and log integration. SQRI[2] uses a CSF that peaks earlier and takes the square root of the MTF curve before log integration. Recently Edge Acutance[3] appears to have become the standard adopted by the Camera Phone Image Quality initiative, with a simplified middle of the road CSF and linear integration up to the monochrome Nyquist frequency. Results are then normalized by dividing by the area under the full CSF curve.
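The recipe shared by all three metrics can be sketched as ‘area under the CSF-weighted MTF, normalized by the area under the CSF’. The Python fragment below shows the mechanics with a diffraction-limited MTF and an illustrative band-pass CSF stand-in; the CSF used here is NOT the CPIQ definition (see reference [3] for that), just a placeholder with the right general shape:

```python
import math

def mtf_diffraction(f: float, wavelength_mm: float, n: float) -> float:
    """Diffraction-limited MTF of a circular aperture at spatial frequency f (cy/mm)."""
    fc = 1.0 / (wavelength_mm * n)          # cutoff frequency
    x = min(f / fc, 1.0)
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

def csf(f: float) -> float:
    """Illustrative band-pass CSF stand-in (NOT the CPIQ definition)."""
    return f * math.exp(-0.2 * f)

def acutance(mtf, nyquist: float, steps: int = 2000) -> float:
    """Area under CSF-weighted MTF, normalized by area under the CSF alone."""
    df = nyquist / steps
    num = sum(mtf(i * df) * csf(i * df) * df for i in range(steps))
    den = sum(csf(i * df) * df for i in range(steps))
    return num / den

nyq = 1000.0 / (2 * 4.33)   # monochrome Nyquist for a 4.33 um pitch, ~115 cy/mm
a = acutance(lambda f: mtf_diffraction(f, 0.535e-3, 5.6), nyq)
print(f"Acutance ~ {a:.2f}")
```

With a real CSF the integration limits and weighting change, but the structure – weight, integrate, normalize – is the same.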
Similar results are often obtained by using MTF50, the spatial frequency at which contrast of detail captured by the system is halved. This has been a metric favored by testing sites online for years, so we know for instance that system MTF50 measurements above 40 lp/mm are considered very good and above 50 lp/mm excellent, with the best exceeding 100 lp/mm. We (I) don’t have such knowledge where Acutance is concerned, please share if you do.
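To put those lp/mm figures in context, the MTF50 of a purely diffraction-limited lens can be computed by bisection on the diffraction MTF; a Python sketch, keeping in mind this is an upper bound that real systems only approach when stopped down (and before the pixel aperture takes its cut):

```python
import math

def mtf_diffraction(f_cyc_mm: float, wavelength_mm: float, n: float) -> float:
    """Diffraction-limited MTF of a circular aperture."""
    fc = 1.0 / (wavelength_mm * n)
    x = min(f_cyc_mm / fc, 1.0)
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

def mtf50(wavelength_mm: float, n: float, tol: float = 1e-3) -> float:
    """Spatial frequency (cy/mm) where the diffraction MTF drops to 0.5."""
    lo, hi = 0.0, 1.0 / (wavelength_mm * n)   # MTF is monotone down to cutoff
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mtf_diffraction(mid, wavelength_mm, n) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for n in (2.8, 5.6, 8.0):
    print(f"f/{n}: diffraction-limited MTF50 ~ {mtf50(0.535e-3, n):.0f} lp/mm")
```

At f/8 the diffraction-limited ceiling is about 94 lp/mm, which helps explain why the measured 76.2 lp/mm in the 50mm table above is as good as it gets for that aperture once the pixel aperture is factored in.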
Next, putting the model to work and drawing some conclusions on landscape DOF at a 24mm focal length and a Nikon Z7 coupled to a Nikkor 24-70mm/4 S.
Appendix: Determining Lens Quality Factor
The DOF analysis that follows in the next article is based on a simplified model of the lens. It boils down to one main assumption: that the only aberrations present at the exit pupil are third order spherical aberration and axial color. This is a decent assumption for well corrected glass near the center of the field of view. In that case all we need to specify its performance throughout the aperture range is the Quality Factor QF introduced in Equation (1). This appendix explains how it was estimated for the lenses at hand.
Third order spherical aberration and other aberrations can be quantified by how much peak-to-valley wavefront error they induce at the exit pupil. The error, also referred to as Optical Path Difference, is expected to increase with the inverse of the f-number squared, as we saw in Figure 2 above. Therefore it is large and consequential at wide apertures but quickly becomes much smaller and less consequential as the lens is stopped down and only the well machined, progressively less aberrated, central portion of the lens contributes to forming the image. By f/8 it makes very little difference to the overall MTF curve, since the aperture is relatively tiny and a good lens is by then almost diffraction limited. This is reflected in a decreased sensitivity of the model to this variable at higher f-numbers – so it makes more sense to estimate it at lower f/stops, where it is larger and easier to measure.
I took a number of raw captures of a new utility cutter knife slanted a few degrees off the vertical. It was set up almost against a living room window, parallel to it, with a bright, uniform sky near the horizon as background. I chose an overcast day shortly after lunch, but any day will do as long as there are no visible clouds, birds or gradients near the edge, just bright, uniform sky. The camera was mounted on a tripod, square to and at the same height as the knife, about 150 focal lengths away, VR off, silent mode, pinpoint focusing, 3 second delayed release. For the Nikkor Z 24-70mm/4 S I used manual mode and selected f/4 at base ISO, with shutter speeds that showed some room to spare on the viewfinder histogram under spot metering. I took four captures, each time half pressing first to re-focus. I then stopped down 1/3 of a stop at a time and repeated the four captures per f/stop until f/5.6.
Next, crops of the edge from the raw files were put through the excellent open source MTF Mapper and the best focused green channel MTF curve selected for each f/stop. Then theoretical MTF curves were simulated per the model by varying coefficient W040 (with defocus coefficient W020 equal and opposite, for minimum RMS WFE ‘in-focus’) until they best matched the measured MTF. Here for example is the 24mm focal length of a Nikkor Z 24-70mm/4 S at f/5.6, resulting in a decent fit of the measured data by the model with W040 equal to 0.549 wavelengths (Z8 of 49nm in Wyant Zernike notation):
The fit could be better with more degrees of freedom but it is good enough in this context. The best-fit W040 at each f/stop was plotted vs 1/N², as in Figure 2, to obtain the Quality Factor shown in the relevant table.
In retrospect, if the quality of the data is good to start with, one could simply estimate W040 at the single aperture with the best resolution; results are close enough for these purposes. For example, f/5 at 28mm for my copy of the 24-70mm/4 S.
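The single-aperture shortcut amounts to inverting Equation (1): QF = W040 · N². Using the fitted W040 of 0.549 wavelengths at 24mm f/5.6 quoted above:

```python
# Single-aperture shortcut: invert Equation (1), QF = W040 * N^2.
w040 = 0.549   # p-v wavelengths fitted at f/5.6 (from the text)
n = 5.6
qf = w040 * n**2
print(f"QF ~ {qf:.1f}")   # ~17.2, close to the 17.9 in the 24mm column of the QF table
```

The small gap between the two values gives a feel for the ‘close enough’ claim above.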
Notes and References
1. Granger and Cupery (1972). “An optical merit function (SQF), which correlates with subjective image judgments.”
2. Barten, P. G. J. (1999). Contrast Sensitivity of the Human Eye and Its Effects on Image Quality. Eindhoven: Technische Universiteit Eindhoven. DOI: 10.6100/IR523072.
3. Baxter et al. “Development of the I3A CPIQ spatial metrics,” for the Camera Phone Image Quality initiative’s definition of Edge Acutance.
4. The Matlab/Octave code used to produce these plots can be downloaded from here.
Jack
Once again, thanks for developing and sharing your thinking and work.
I hope, in the end, your work doesn’t remain ‘just’ academic. I for one hope to be able to use your work in a practical way, in my case in one of my DoF Magic Lantern Lua scripts, which are currently limited to only work away from the macro end, which for landscape focus stacking, say, is fine.
I guess what I’m saying is, I hope you finish this thread with a practical example of how your approach, accounting for the focus distance and diffraction, leads to estimates of the near and far depth of field distances, assuming a specified ‘quality criterion’.
All the best.
Garry