In the last article we learned that a complex lens can be modeled as just an entrance pupil, an exit pupil and a geometrical-optics black box in between. Goodman[1] suggests that all optical path errors for a given Gaussian point on the image plane can be thought of as being introduced by a custom phase plate at the pupil plane, delaying or advancing the light wavefront locally according to the aberration function described earlier.
The phase plate distorts the forming wavefront, introducing diffraction and aberrations, while otherwise allowing us to treat the rest of the lens as if it followed geometrical optics rules. It can be associated with either the entrance or the exit pupil. Photographers are usually concerned with the effects of the lens on the image plane, so we will associate it with the adjacent Exit Pupil.
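As a rough illustration of the phase-plate idea, the sketch below builds a complex pupil function and obtains the PSF as the squared magnitude of its Fourier transform. The circular pupil, the grid size and the quarter-wave of spherical aberration are all illustrative assumptions, not values from the article:

```python
import numpy as np

# Sketch: a clear circular pupil with a phase "plate" W (wavefront error
# in units of wavelength).  The far-field PSF is the squared magnitude
# of the Fourier transform of the complex pupil function.  All values
# here are illustrative.
N = 256                                   # simulation grid size
y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
r = np.hypot(x, y) / (N // 4)             # pupil radius normalized to 1
pupil = (r <= 1.0).astype(float)          # clear circular aperture

W = 0.25 * r**4                           # e.g. 1/4 wave of spherical aberration
phase_plate = np.exp(1j * 2 * np.pi * W)  # local delay/advance of the wavefront

field = pupil * phase_plate               # complex field at the exit pupil
psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field))))**2
psf /= psf.sum()                          # normalize to unit energy
```

With W set to zero the same code produces the unaberrated Airy pattern, which is one way to see that the phase plate is where all the aberration bookkeeping lives.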
As a landscape shooter I often wonder whether the old rules for DOF still apply to current small pixels and sharp lenses. I therefore roughly measured the spatial resolution performance of my Z7 with the 24-70mm/4 S in the center to see whether ‘f/8 and be there’ still makes sense today. The journey and the diffraction- and simple-aberration-aware model were described in the last few posts. The results are summarized in the Landscape Aperture-Distance charts presented here for the 24, 28 and 35mm focal lengths.
I also present the data in the form of a simplified plot to aid in making the right compromises when the focusing distance is flexible. This information is valid for the Z7 and kit lens in the center only, though it likely applies just as well to cameras with similarly spec’d pixels and lenses. Continue reading Diffracted DOF Aperture Guides: 24-35mm→
After an exhausting two and a half hour hike you are finally resting, sitting on a rock at the foot of your destination, a tiny alpine lake, breathing in the thin air and absorbing the majestic scenery. A cool light breeze suddenly ripples the surface of the water, morphing what has until now been a perfect reflection into an impressionistic interpretation of the impervious mountains in the distance.
The beautiful flowers in the foreground are so close you can touch them, the reflection in the water 10-20m away, the imposing mountains in the background a few hundred meters further out. You realize you are hungry. As you search the backpack for the two panini you prepared this morning you begin to ponder how best to capture the scene: subject, composition, Exposure, Depth of Field.
Depth of Field. Where to focus and at what f/stop? You tip your hat and just as you look up at the bluest of blue skies the number 16 starts enveloping your mind, like rays from the warm noon sun. You dial it in and as you squeeze the trigger that familiar nagging question bubbles up, as it always does in such conditions. If this were a one shot deal, was that really the best choice?
In this article we attempt to provide information to make explicit some of the trade-offs necessary in the choice of Aperture for 24mm landscapes. The result of the process is a set of guidelines. The answers are based on the previously introduced diffraction-aware model for sharpness in the center along the depth of field – and a tripod-mounted Nikon Z7 + Nikkor 24-70mm/4 S kit lens at 24mm. Continue reading DOF and Diffraction: 24mm Guidelines→
The two-thin-lens model for precision Depth Of Field estimates described in the last two articles is almost ready to be deployed. In this one we describe the setup that will be used to develop the scenarios outlined in the next.
The beauty of the hybrid geometrical-Fourier optics approach is that, with an estimate of the field produced at the exit pupil by an on-axis point source, we can generate the image of the resulting Point Spread Function and related Modulation Transfer Function.
Pretend that you are a photon from such a source in front of an f/2.8 lens focused at 10m with about 0.60 microns of third-order spherical aberration – and you are about to smash yourself onto the ‘best focus’ observation plane of your camera. Depending on whether you leave exactly from the in-focus distance of 10 meters or slightly before/after it, the impression you would leave on the sensing plane would look as follows:
The width of the square above is 30 microns (um), which corresponds to the diameter of the Circle of Confusion used for old-fashioned geometrical DOF calculations with full frame cameras. The first ring of the in-focus PSF at 10.0m has a diameter of about 2.44λN ≈ 3.65 microns. That’s about the size of the estimated effective square pixel aperture of the Nikon Z7 camera that we are using in these tests. Continue reading DOF and Diffraction: Setup→
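For the record, the 3.65 micron figure follows directly from the Airy first-zero formula 2.44λN. A quick sanity check, assuming λ ≈ 0.535 µm (mid-green; my assumption, since the wavelength is not restated here):

```python
# Sanity check of the figures quoted above.  The wavelength is an
# assumption (0.535 µm, mid-green); the f-number is from the text.
wavelength_um = 0.535
f_number = 2.8

airy_first_ring_diameter = 2.44 * wavelength_um * f_number   # ≈ 3.65 µm
coc_diameter_um = 30.0            # classic full-frame Circle of Confusion

# The geometric CoC is roughly 8x wider than the in-focus Airy first ring
ratio = coc_diameter_um / airy_first_ring_diameter
```

So the classic CoC is an order of magnitude coarser than what the diffraction-limited PSF, and a Z7-class pixel, can actually resolve in the center.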
Canon recently introduced its EOS-1D X Mark III Digital Single-Lens Reflex [Edit: and now also possibly the R5 Mirrorless ILC] touting a new and improved Anti-Aliasing filter, which they call a High-Res Gaussian Distribution LPF, claiming that
“This not only helps to suppress moiré and color distortion, but also improves resolution.”
In this article we will try to dissect the marketing speak and understand a bit better the theoretical implications of the new AA. For the abridged version, jump to the Conclusions at the bottom. In a picture:
In this article we shall find that the effect of a Bayer CFA on the spatial frequencies and hence the ‘sharpness’ information captured by a sensor compared to those from the corresponding monochrome version can go from (almost) nothing to halving the potentially unaliased range – depending on the chrominance content of the image and the direction in which the spatial frequencies are being stressed. Continue reading Bayer CFA Effect on Sharpness→
This post continues looking at the spatial frequency response measured by MTF Mapper off slanted edges in DPReview.com raw captures, and the corresponding fits by the ‘sharpness’ model discussed in the last few articles. The model takes the physical parameters of the digital camera and lens as inputs and produces theoretical directional system MTF curves comparable to measured data. As we will see, the model seems to be able to simulate these systems well – at least within this limited set of parameters.
The following fits refer to the green channel of a number of interchangeable lens digital camera systems with different lenses, pixel sizes and formats – from the current Medium Format 100MP champ to the 1/2.3″ 18MP sensor size also sometimes found in the best smartphones. Here is the roster with the cameras as set up:
Having shown that our simple two dimensional MTF model is able to predict the performance of the combination of a perfect lens and square monochrome pixel with 100% Fill Factor we now turn to the effect of the sampling interval on spatial resolution according to the guiding formula:
(1)    $MTF_{Sys} = \widehat{PSF_{lens}} \cdot \widehat{PSF_{pixel}} \ ** \ \widehat{\delta\delta_{grid}}$

The hats in this case mean the Fourier Transform of the relative component normalized to 1 at the origin ($f = 0$), that is the individual MTFs of the perfect lens PSF, the perfect square pixel and the delta grid; $**$ represents two dimensional convolution.
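A one-dimensional sketch of the pre-sampling part of the guiding formula, with illustrative parameters (a diffraction-limited f/2.8 lens at an assumed λ = 0.535 µm, and a 4.35 µm pixel with 100% fill factor):

```python
import numpy as np

# 1-D sketch: system MTF = diffraction-limited lens MTF x pixel-aperture
# MTF.  Sampling would then replicate this product at multiples of
# 1/pitch (the convolution with the FT of the delta grid).  Parameters
# are illustrative, not from the article.
wavelength_mm = 0.535e-3     # assumed green light, mm
f_number = 2.8
pitch_mm = 4.35e-3           # e.g. a Z7-like 4.35 µm pixel, 100% fill factor

fc = 1.0 / (wavelength_mm * f_number)        # diffraction cutoff, cycles/mm
f = np.linspace(0, 1.2 / pitch_mm, 1000)     # spatial frequency axis

s = np.clip(f / fc, 0, 1)
mtf_lens = (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s**2))
mtf_pixel = np.abs(np.sinc(f * pitch_mm))    # np.sinc(x) = sin(pi x)/(pi x)

mtf_system = mtf_lens * mtf_pixel            # pre-sampling system MTF
```

Replicas of `mtf_system` centered at multiples of 1/pitch (about 230 cycles/mm here) then fold energy back below Nyquist; that folded energy is aliasing.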
Sampling in the Spatial Domain
While exposed, a pixel sees the scene through its aperture and accumulates energy as photons arrive. Below left is the representation of, say, the intensity that a star projects on the sensing plane, in this case resulting in an Airy pattern since we said that the lens is perfect. During exposure each pixel integrates (counts) the arriving photons, an operation that mathematically can be expressed as the convolution of the shown Airy pattern with a square the size of the effective pixel aperture, here assumed to have 100% Fill Factor. It is the convolution in the continuous spatial domain of the lens PSF with the pixel aperture PSF shown in Equation (2) of the first article in the series.
Sampling is then the multiplication of the result of that convolution by an infinitesimally small Dirac delta function at the center of each pixel (the red dots below left), producing the sampled image below right.
In this and the previous article I discuss how Modulation Transfer Functions (MTF) obtained from every color channel of a Bayer CFA raw capture in isolation can be combined to provide a meaningful composite MTF curve for the imaging system as a whole.
There are two ways that this can be accomplished: an input-referred approach that reflects the performance of the hardware only; and an output-referred one that also takes into consideration how the image will be displayed. Both are valid and the differences are typically minor, though the weights of the latter are scene, camera/lens and illuminant dependent – while the former are not. Therefore my recommendation in this context is to stick with input-referred weights when comparing cameras and lenses.[1] Continue reading Combining Bayer CFA MTF Curves – II→
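As a minimal sketch of such a combination, the snippet below takes a weighted mean of per-channel curves. Both the curves and the weights are purely illustrative (the weights are simply Bayer sampling proportions, an input-referred flavor) and may differ from the weights actually derived in the article:

```python
import numpy as np

# Minimal sketch: combine per-channel MTF curves with a weighted mean.
# Curves and weights below are illustrative stand-ins, not measured data.
f = np.linspace(0, 0.5, 6)                  # cycles/pixel
mtf_r = np.exp(-3.0 * f)                    # stand-in measured curves
mtf_g = np.exp(-2.5 * f)
mtf_b = np.exp(-3.5 * f)

w = {'r': 0.25, 'g': 0.5, 'b': 0.25}        # hypothetical input-referred
                                            # weights (Bayer proportions)
mtf_combined = w['r']*mtf_r + w['g']*mtf_g + w['b']*mtf_b
```

An output-referred variant would instead derive the weights from the displayed image (scene, illuminant and color transform included), which is why those weights move around from capture to capture.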
So, is it true that a Four Thirds lens needs to be about twice as ‘sharp’ as its Full Frame counterpart in order to be able to display an image of spatial resolution equivalent to the larger format’s?
It is, because of the simple geometry I will describe in this article. In fact with a few provisos one can generalize and say that lenses from any smaller format need to be ‘sharper’ by the ratio of their sensor diagonals in order to produce the same linear resolution on same-sized final images.
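The diagonal ratio behind the ‘about twice’ figure is easy to verify from the nominal 36x24mm and 17.3x13mm formats:

```python
import math

# Nominal sensor dimensions, mm
ff_diag = math.hypot(36.0, 24.0)     # Full Frame diagonal, ≈ 43.3 mm
ft_diag = math.hypot(17.3, 13.0)     # Four Thirds diagonal, ≈ 21.6 mm

# A smaller-format lens must resolve more line pairs per mm on the
# sensor by this factor to match the larger format on same-sized images
ratio = ff_diag / ft_diag            # ≈ 2.0
```

The same one-liner gives the required ‘sharpness’ factor for any pair of formats, provisos about equal final-image size and comparable pixel counts applying as discussed above.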
This is one of the reasons why Ansel Adams shot 4×5 and 8×10 – and I would too, were it not for logistical and pecuniary concerns.