In the last article we showed how a digital camera’s captured raw data is related to Color Science. In my next trick I will show that the CIE 2012 2 deg XYZ Color Matching Functions $\bar{x}(\lambda)$, $\bar{y}(\lambda)$, $\bar{z}(\lambda)$ displayed in Figure 1 are an exact linear transform of the Stockman & Sharpe (2000) 2 deg Cone Fundamentals $\bar{l}(\lambda)$, $\bar{m}(\lambda)$, $\bar{s}(\lambda)$ displayed in Figure 2

$$CMF = M \cdot CF \tag{1}$$

with $CMF$ and $CF$ in 3×N format, $M$ a 3×3 matrix and $\cdot$ matrix multiplication. Et voilà:[1]
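For readers who want to check this numerically, here is a minimal sketch. It assumes the CMFs and cone fundamentals have already been loaded as 3×N NumPy arrays sampled at the same wavelengths (for instance from the CVRL downloads); the variable and function names are mine, not anything from the articles.

```python
import numpy as np

# cmf: 3xN array of CIE 2012 2-deg x-bar, y-bar, z-bar
# cf:  3xN array of Stockman & Sharpe (2000) 2-deg cone fundamentals l-bar, m-bar, s-bar
# (both assumed loaded elsewhere, e.g. from the CVRL csv files)

def fit_transform(cmf, cf):
    """Least-squares estimate of the 3x3 matrix M such that cmf ~= M @ cf."""
    # Solve cf.T @ X ~= cmf.T, then M is X transposed
    X, *_ = np.linalg.lstsq(cf.T, cmf.T, rcond=None)
    return X.T

# M = fit_transform(cmf, cf)
# residual = np.abs(cmf - M @ cf).max()   # ~0 if the transform is indeed exact
```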
After an exhausting two and a half hour hike you are finally resting, sitting on a rock at the foot of your destination, a tiny alpine lake, breathing in the thin air and absorbing the majestic scenery. A cool light breeze suddenly ripples the surface of the water, morphing what has until now been a perfect reflection into an impressionistic interpretation of the impervious mountains in the distance.
The beautiful flowers in the foreground are so close you can touch them, the reflection in the water 10-20m away, the imposing mountains in the background a few hundred meters further out. You realize you are hungry. As you search the backpack for the two panini you prepared this morning you begin to ponder how best to capture the scene: subject, composition, Exposure, Depth of Field.
Depth of Field. Where to focus and at what f/stop? You tip your hat and just as you look up at the bluest of blue skies the number 16 starts enveloping your mind, like rays from the warm noon sun. You dial it in and as you squeeze the trigger that familiar nagging question bubbles up, as it always does in such conditions. If this were a one shot deal, was that really the best choice?
In this article we attempt to make explicit some of the trade-offs involved in the choice of Aperture for 24mm landscapes. The result of the process is a set of guidelines. The answers are based on the previously introduced diffraction-aware model for sharpness in the center along the depth of field – and a tripod-mounted Nikon Z7 + Nikkor 24-70mm/4 S kit lens at 24mm. Continue reading DOF and Diffraction: 24mm Guidelines→
The two-thin-lens model for precision Depth Of Field estimates described in the last two articles is almost ready to be deployed. In this one we will describe the setup that will be used to develop the scenarios that will be outlined in the next one.
The beauty of the hybrid geometrical-Fourier optics approach is that, with an estimate of the field produced at the exit pupil by an on-axis point source, we can generate the image of the resulting Point Spread Function and related Modulation Transfer Function.
Pretend that you are a photon from such a source in front of a f/2.8 lens focused at 10m with about 0.60 microns of third order spherical aberration – and you are about to smash yourself onto the ‘best focus’ observation plane of your camera. Depending on whether you leave exactly from the in-focus distance of 10 meters or slightly before/after that, the impression you would leave on the sensing plane would look as follows:
The width of the square above is 30 microns (um), which corresponds to the diameter of the Circle of Confusion used for old-fashioned geometrical DOF calculations with full frame cameras. The first ring of the in-focus PSF at 10.0m has a diameter of about $2.44\lambda N$ = 3.65 microns. That’s about the size of the estimated effective square pixel aperture of the Nikon Z7 camera that we are using in these tests. Continue reading DOF and Diffraction: Setup→
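As a quick sanity check of that figure, the first dark ring of an Airy pattern has a diameter of $2.44\lambda N$; the numbers below are mine, assuming mid-spectrum light of roughly 0.535 microns at f/2.8.

```python
# Diameter of the first dark ring of the Airy pattern: 2.44 * wavelength * f-number
wavelength_um = 0.535   # assumed mid-spectrum ('green') wavelength in microns
f_number = 2.8

first_ring_diameter_um = 2.44 * wavelength_um * f_number
print(f"{first_ring_diameter_um:.2f} um")   # ~3.65 um
```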
This investigation of the effect of diffraction on Depth of Field is based on a two-thin-lens model, as suggested by Alan Robinson[1]. We chose this model because it allows us to associate geometrical optics with one lens and Fourier optics with the other, thus simplifying the underlying math and our understanding.
In the last article we discussed how the front element of the model could present at the rear element the wavefront resulting from an on-axis source as a function of distance from the lens. We accomplished this by using simple geometry in complex notation. In this one we will take the relative wavefront present at the exit pupil and project it onto the sensing plane, taking diffraction into account numerically. We already know how to do it since we dealt with this subject in the recent past.
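In code the numerical projection boils down to a Fourier transform of the complex field at the exit pupil. The following is only a bare-bones sketch of the idea, not the exact routine used for the figures: the grid size, sampling and aberration coefficients are placeholders of mine.

```python
import numpy as np

N = 512                                   # samples across the pupil grid
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
rho2 = X**2 + Y**2
pupil = (rho2 <= 1).astype(float)         # clear circular aperture

# Wavefront error in waves: defocus (W020) and third-order spherical (W040) terms,
# placeholder coefficients only
W020 = 0.5
W040 = 0.6 / 0.535                        # e.g. 0.60 um of spherical at ~0.535 um
wavefront = W020 * rho2 + W040 * rho2**2

# Complex field at the exit pupil, propagated to the sensing plane (Fraunhofer)
field = pupil * np.exp(1j * 2 * np.pi * wavefront)
psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
psf /= psf.sum()

# MTF is the magnitude of the Fourier transform of the PSF
mtf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
mtf /= mtf.max()
```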
In this and the following articles we shall explore the effects of diffraction on Depth of Field through a two-lens model that separates geometrical and Fourier optics in a way that keeps the math simple, though via complex notation. In the process we will gain a better understanding of how lenses work.
The results of the model are consistent with what can be obtained via classic DOF calculators online but should be more precise in critical situations, like macro photography. I am not a macro photographer, so I would be interested in having the results of the method validated by someone who is.
Now that we know how to create a 3×3 linear matrix to convert white balanced and demosaiced raw data into connection space – and where to obtain the 3×3 linear matrix to then convert it to a standard output color space like sRGB – we can take a closer look at the matrices and apply them to a real world capture chosen for its wide range of chromaticities.
We understand from the previous article that rendering color with Adobe DNG raw conversion essentially means mapping raw data in the form of $rgb$ triplets into a standard color space via a Profile Connection Space in a two step process.
The first step white balances and demosaics the raw data, which at that stage we will refer to as $rgb$, followed by converting it to Profile Connection Space through linear projection by an unknown ‘Forward Matrix’ (as DNG calls it) of the form

$$XYZ_{D50} = FM \cdot rgb$$
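As an illustration only, applying such a matrix to white balanced, demosaiced raw triplets is a single matrix product per pixel. The matrix entries below are placeholders of mine, not the Z7’s actual ForwardMatrix; a real one comes from the camera’s DNG tags or profile.

```python
import numpy as np

# Placeholder 3x3 Forward Matrix (rows: X, Y, Z; columns: r, g, b)
FM = np.array([[0.65, 0.28, 0.03],
               [0.27, 0.72, 0.01],
               [0.00, 0.06, 0.76]])

def raw_to_xyz_d50(rgb_wb, forward_matrix=FM):
    """Project white balanced, demosaiced raw triplets (HxWx3) into XYZ D50."""
    return np.einsum('ij,hwj->hwi', forward_matrix, rgb_wb)

# xyz = raw_to_xyz_d50(rgb_wb)   # rgb_wb: white balanced, demosaiced raw image
```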
How do we translate captured image information into a stimulus that will produce the appropriate perception of color? It’s actually not that complicated[1].
Recall from the introductory article that a photon absorbed by a cone type ($L$, $M$ or $S$) in the fovea produces the same stimulus to the brain regardless of its wavelength[2]. Take the example of an observer’s eye focusing on its retina the image of a uniform object with a spectral photon distribution of 1000 photons/nm in the 400 to 720nm wavelength range and no photons outside of it.
Because the system is linear, cones in the foveola will weight the incoming photons by their relative sensitivity (probability) functions and add the results up to produce a stimulus proportional to the area under the curves. For instance a cone may see about 321,000 photons arrive and produce a relative stimulus of about 94,700, the weighted area under its curve.
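In code, that weighted sum is just a dot product of the photon spectrum with the cone’s sensitivity curve. The sketch below uses a made-up Gaussian stand-in for the cone fundamental purely for illustration; substitute the actual Stockman & Sharpe data for real numbers.

```python
import numpy as np

wavelengths = np.arange(400, 721)             # nm, 1 nm steps
photons = np.full(wavelengths.shape, 1000.0)  # 1000 photons/nm, zero elsewhere

# Placeholder cone sensitivity (probability of absorption) peaking mid-spectrum;
# replace with the real cone fundamental sampled at the same wavelengths
sensitivity = np.exp(-0.5 * ((wavelengths - 545) / 60.0) ** 2)

photons_arriving = np.trapz(photons, wavelengths)        # ~320,000 photons over the range
stimulus = np.trapz(photons * sensitivity, wavelengths)  # weighted area under the curve
```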
This article will set the stage for a discussion on how pleasing color is produced during raw conversion. The easiest way to understand how a camera captures and processes ‘color’ is to start with an example of how the human visual system does it.
An Example: Green
Light from the sun strikes leaves on a tree. The foliage of the tree absorbs some of the light and reflects the rest diffusely towards the eye of a human observer. The eye focuses the image of the foliage onto the retina at its back. Near the center of the retina there is a small circular area called fovea centralis which is dense with light receptors of well defined spectral sensitivities called cones. Information from the cones is pre-processed by neurons and carried by nerve fibers via the optic nerve to the brain where, after some additional psychovisual processing, we recognize the color of the foliage as green[1].
In this and the previous article I discuss how Modulation Transfer Functions (MTF) obtained from the raw data of each Bayer CFA color channel can be combined to provide a meaningful composite MTF curve for the imaging system as a whole.
There are two ways that this can be accomplished: an input-referred approach that reflects the performance of the hardware only; and an output-referred one that also takes into consideration how the image will be displayed. Both are valid and the differences are typically minor, though the weights of the latter are scene, camera/lens and illuminant dependent – while the former are not. Therefore my recommendation in this context is to stick with input-referred weights when comparing cameras and lenses.[1] Continue reading Combining Bayer CFA MTF Curves – II→
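For concreteness, a weighted combination of the four per-channel curves might look like the sketch below. The weights shown are illustrative placeholders of mine – equal per-photosite weights for an input-referred composite, sRGB luminance coefficients with green split in two for an output-referred one – not necessarily the exact values discussed in the articles.

```python
import numpy as np

def composite_mtf(mtf_r, mtf_g1, mtf_g2, mtf_b, weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted average of per-channel MTF curves sampled on the same frequency axis."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return w[0]*mtf_r + w[1]*mtf_g1 + w[2]*mtf_g2 + w[3]*mtf_b

# Input-referred: weight channels by their population in the Bayer CFA (1:1:1:1 per 2x2 tile)
# mtf_in = composite_mtf(mtf_r, mtf_g1, mtf_g2, mtf_b, (0.25, 0.25, 0.25, 0.25))

# Output-referred: weight by (illustrative) sRGB luminance coefficients, green split in two
# mtf_out = composite_mtf(mtf_r, mtf_g1, mtf_g2, mtf_b, (0.2126, 0.3576, 0.3576, 0.0722))
```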
In this and the following article I will discuss my thoughts on how MTF50 results, obtained via the slanted edge method from the raw data of the four Bayer CFA color channels of a neutral target captured with a typical camera, can be combined to provide a meaningful composite MTF50 for the imaging system as a whole. The scope of the discussion is neutral slanted edge measurements of Bayer CFA raw data for linear spatial resolution (‘sharpness’) evaluations of photographic hardware. Corrections, suggestions and challenges are welcome. Continue reading Combining Bayer CFA Modulation Transfer Functions – I→
For the purposes of ‘sharpness’ (spatial resolution) measurement in photography, cameras can be considered shift-invariant, linear systems when capturing scene detail of random size and direction such as one often finds in landscapes.
Shift invariant means that the imaging system should respond exactly the same way no matter where light from the scene falls on the sensing medium. We know that in a strict sense this is not true because, for instance, pixels tend to have squarish active areas so their response cannot be isotropic by definition. However, when using the slanted edge method of linear spatial resolution measurement we can effectively make it shift invariant by careful preparation of the testing setup. For example the edges should be slanted no more than this and no less than that. Continue reading Linearity in the Frequency Domain→
Whether the human visual system perceives a displayed slow changing gradient of tones, such as a vast expanse of sky, as smooth or posterized depends mainly on two well known variables: the Weber-Fechner Fraction of the ‘steps’ in the reflected/produced light intensity (the subject of this article); and spatial dithering of the light intensity as a result of noise (the subject of a future one).
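To get a feel for the first variable, the Weber-Fechner fraction of a single quantization step in a smooth gradient can be estimated directly from the encoding. This is only a toy sketch under assumptions of mine – an 8-bit, gamma 2.2 encoded display value with no dithering and no display black level.

```python
# Relative luminance step (Weber fraction) between adjacent 8-bit code values
# of a gamma 2.2 encoded gradient - a toy model, ignoring the display's black level
gamma = 2.2

def weber_fraction(level, bits=8):
    max_code = 2**bits - 1
    lum = (level / max_code) ** gamma
    lum_next = ((level + 1) / max_code) ** gamma
    return (lum_next - lum) / lum

# e.g. around mid-grey the step is roughly 1-2%, in deep shadows it is much larger
# print(weber_fraction(128), weber_fraction(16))
```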
In the last few posts I have made the case that Image Quality in a digital camera is entirely dependent on the light Information collected at a sensor’s photosites during Exposure. Any subsequent processing – whether analog amplification and conversion to digital in-camera and/or further processing in-computer – effectively applies a set of Information Transfer Functions to the signal that when multiplied together result in the data from which the final photograph is produced. Each step of the way can at best maintain the original Information Quality (IQ) but in most cases it will degrade it somewhat.
IQ: Only as Good as at Photosites’ Output
This point is key: in a well designed imaging system** the final image IQ is only as good as the scene information collected at the sensor’s photosites, independently of how this information is stored in the working data along the processing chain, on its way to being transformed into a pleasing photograph. As long as scene information is properly encoded by the system early on, before being written to the raw file – and information transfer is maintained in the data throughout the imaging and processing chain – final photograph IQ will be virtually the same independently of how its data’s histogram looks along the way.
A reader suggested that a High-Res Olympus E-M5 Mark II image used in the previous post looked sharper than the equivalent Sony a6000 image, contradicting the relative MTF50 measurements, perhaps showing ‘the limitations of MTF50 as a methodology’. That would be surprising because MTF50 normally correlates quite well with perceived sharpness, so I decided to check this particular case out.
‘Who are you going to believe, me or your lying eyes’?
Why Raw? The question is whether one is interested in measuring the objective, quantitative spatial resolution capabilities of the hardware or whether instead one would prefer to measure the arbitrary, qualitatively perceived sharpening prowess of (in-camera or in-computer) processing software as it turns the capture into a pleasing final image. Either is of course fine.
My take on this is that the better the IQ captured, the better the final image will be after post processing. In other words I am typically more interested in measuring the spatial resolution information produced by the hardware, comfortable in the knowledge that if I’ve got good quality data to start with, its appearance will only be improved in post by the judicious use of software. By IQ here I mean objective, reproducible, measurable physical quantities representing the quality of the information captured by the hardware, ideally in scientific units.
You want to measure how sharp your camera/lens combination is to make sure it lives up to its specs. Or perhaps you’d like to compare how well one lens captures spatial resolution compared to another you own. Or perhaps again you are in the market for new equipment and would like to know what could be expected from the shortlist. Or an old faithful is not looking right and you’d like to check it out. So you decide to do some testing. Where to start?
In the next four articles I will walk you through my methodology based on captures of slanted edge targets:
This is a recurring nightmare for a new photographer: they head out with their brand new state-of-the-art digital camera, capture a set of images with a vast expanse of sky or smoothly changing background, come home, fire them up on their computer, play with a few sliders and … gasp! … there are visible bands (posterization, stairstepping, quantization) all over the smoothly changing gradient. ‘Is my new camera broken?!’, they wonder in horror.
When first approaching photographic science a photographer is often confused by the unfamiliar units used. In high school we were taught energy and power in radiometric units like watts ($W$) – while in photography the same concepts are dealt with in photometric units like lumens ($lm$).
Once one realizes that both sets of units refer to the exact same physical process – energy transfer – but are fine tuned for two slightly different purposes, it becomes a lot easier to interpret the science behind photography through the theory one already knows.
It all boils down to one simple notion: lumens are watts as perceived by the Human Visual System.
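In practice, going from radiometric to photometric units means weighting the spectral power by the photopic luminous efficiency function $V(\lambda)$ and scaling by 683 lm/W. A sketch follows; the Gaussian stand-in for $V(\lambda)$ is only a rough approximation of mine, so use the tabulated CIE data for real work.

```python
import numpy as np

wavelengths = np.arange(380, 781)                    # nm
# Rough stand-in for the CIE photopic luminous efficiency function V(lambda),
# which peaks at 1.0 near 555 nm; replace with the tabulated values in practice
V = np.exp(-0.5 * ((wavelengths - 555) / 45.0) ** 2)

def watts_to_lumens(spectral_power_w_per_nm):
    """Luminous flux (lm) from spectral radiant flux (W/nm) on the same wavelength grid."""
    return 683.0 * np.trapz(spectral_power_w_per_nm * V, wavelengths)

# Example: a flat 1 mW/nm source across the visible range
# print(watts_to_lumens(np.full(wavelengths.shape, 1e-3)))
```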
Is MTF50 a good proxy for perceived sharpness? In this article and those that follow MTF50 indicates the spatial frequency at which the Modulation Transfer Function of an imaging system is half (50%) of what it would be if the system did not degrade detail in the image painted by incoming light.
It makes intuitive sense that the spatial frequencies that are most closely related to our perception of sharpness vary with the size and viewing distance of the displayed image.
For instance if an image captured by a Full Frame camera is viewed at ‘standard’ distance (that is a distance equal to its diagonal), it turns out that the portion of the MTF curve most representative of perceived sharpness appears to be around MTF90. On the other hand, when pixel peeping the spatial frequencies around MTF50 look to be a decent, simple to calculate indicator of it, assuming a well set up imaging system in good working conditions. Continue reading MTF50 and Perceived Sharpness→
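For reference, once an MTF curve has been measured, MTF50 is simply the frequency at which it first crosses 0.5, found by interpolation. The helper below is a sketch of mine, assuming frequencies in cycles/pixel and a curve that is monotonically decreasing near the crossing.

```python
import numpy as np

def mtf50(frequencies, mtf):
    """Return the lowest frequency at which the (normalized) MTF first drops to 0.5."""
    mtf = np.asarray(mtf, dtype=float) / mtf[0]      # normalize so MTF(0) = 1
    below = np.where(mtf <= 0.5)[0]
    if below.size == 0:
        return None                                  # never drops to 0.5 in the measured range
    i = below[0]
    # Linear interpolation between the bracketing samples
    f0, f1 = frequencies[i - 1], frequencies[i]
    m0, m1 = mtf[i - 1], mtf[i]
    return f0 + (0.5 - m0) * (f1 - f0) / (m1 - m0)
```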
UnSharp Masking (USM) capture sharpening is somewhat equivalent to taking a black/white marker and drawing along every transition in the picture to make it stand out more – automatically. Line thickness and darkness are chosen arbitrarily to achieve the desired effect, much like painters do. Continue reading Deconvolution vs USM Capture Sharpening→
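A minimal version of the USM recipe, for illustration only (a Gaussian blur stands in for the low-pass filter; the `radius` and `amount` parameters are my names and play the role of line thickness and darkness):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, radius=1.0, amount=1.0):
    """Classic USM: add back a scaled copy of the detail removed by a Gaussian blur."""
    blurred = gaussian_filter(image.astype(float), sigma=radius)
    detail = image - blurred                 # the 'mask': edges and fine detail
    return image + amount * detail           # thicker/darker lines as amount grows

# sharpened = np.clip(unsharp_mask(img, radius=1.5, amount=0.8), 0.0, 1.0)
```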