Category Archives: Color

In this article we confirm quantitatively that getting the White Point, hence the White Balance, right is essential to obtaining natural tones from our captures. How quickly do colors degrade if the estimated Correlated Color Temperature is off?

A Question of Balance
In this article I bring together qualitatively the main concepts discussed in the series and argue that in many (most) cases a photographer’s job during raw conversion, in order to obtain natural looking tones in their work, is to get the illuminant and related white balance right – and then to step away from any slider found in menus with the word ‘color’ in their name.
If you are an outdoor photographer trying to get balanced greens under an overcast sky – or a portrait photographer after good skin tones – dialing in the appropriate scene, illuminant and white balance settings puts the camera/converter manufacturer’s color science to work and gets you most of the way there safely. Of course the judicious photographer always knew to do that – hopefully now with a better appreciation as to why.
White Point, CCT and Tint
As we have seen in the previous post, knowing the characteristics of light at the scene is critical in order to determine the color transform that allows captured raw data to be displayed naturally in an output color space like the ubiquitous sRGB.
White Point
The light source’s Spectral Power Distribution (SPD) corresponds to a unique White Point, namely a set of coordinates in the CIE XYZ color space, obtained by multiplying its SPD (the blue curve below) wavelength-by-wavelength by the response of the retina of a typical viewer, otherwise known as the CIE Color Matching Functions of a Standard Observer ($\bar{x}(\lambda)$, $\bar{y}(\lambda)$, $\bar{z}(\lambda)$ in the plot).
Adding up (integrating) the three resulting curves we get three values, $X$, $Y$ and $Z$, that represent the illuminant’s coordinates in the XYZ color space. The White Point is obtained by dividing these coordinates by the $Y$ value to normalize it to 1.
For example a Standard Daylight Illuminant with a Correlated Color Temperature of 5300 kelvins has a White Point of[1]

$$[X \;\; Y \;\; Z] = [0.9593 \;\; 1.0000 \;\; 0.8833]$$
assuming CIE (2012) 2-deg XYZ “physiologically relevant” Color Matching Functions from cvrl.org.
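For the curious, the computation can be sketched in a few lines of numpy. The Gaussian curves and the blackbody radiator below are only stand-ins for the real cvrl.org tables and a measured SPD, so the printed values will merely approximate the ones above.

```python
import numpy as np

wl = np.arange(380.0, 781.0, 5.0)   # wavelength grid in nm, 5 nm spacing

# Gaussian stand-ins for the CIE 2012 2-deg CMFs; substitute the real
# tables from cvrl.org to reproduce the White Point quoted above.
def g(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

cmfs = np.stack([1.06 * g(600, 38) + 0.36 * g(446, 20),   # x-bar
                 g(556, 47),                              # y-bar
                 1.78 * g(449, 22)], axis=1)              # z-bar

# Blackbody SPD at 5300 K as a stand-in for the daylight illuminant.
h, c, kB, T = 6.626e-34, 2.998e8, 1.381e-23, 5300.0
lam = wl * 1e-9
spd = (2 * h * c**2 / lam**5) / (np.exp(h * c / (lam * kB * T)) - 1)

# Multiply the SPD by each CMF wavelength-by-wavelength, add up the
# three areas, then divide by Y so that it normalizes to 1.
XYZ = (spd[:, None] * cmfs).sum(axis=0) * 5.0
print(XYZ / XYZ[1])   # White Point, ~[0.96 1.00 0.88] with real data
```

Continue reading White Point, CCT and Tint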
Linear Color Transforms
Building on a preceding article of this series: once demosaiced, raw data from a Bayer Color Filter Array sensor represents the captured image as a set of triplets, corresponding to the estimated light intensity at a given pixel under each of the three spectral filters that make up the CFA. The filters are band-pass and named for the representative peak wavelength that each lets through: typically red, green and blue, or $r$, $g$, $b$ for short.
Since the resulting intensities are linearly independent they can form the basis of a 3D coordinate system, with each triplet representing a point within it. The system is bounded in the raw data by the extent of the Analog to Digital Converter, with all three channels spanning the same range, from Black Level with no light to clipping with maximum recordable light. Therefore it can be thought of as representing a space in the form of a cube – or better, a parallelepiped – with the origin at [0,0,0] and the opposite vertex at the clipping value in Data Numbers, expressed as [1,1,1] if we normalize all data by it.
The job of the color transform is to project demosaiced raw data to a standard output color space designed for viewing. Such spaces have names like sRGB, Adobe RGB or ProPhoto RGB. The output space can also be shown in 3D as a parallelepiped with the origin at [0,0,0] with no light and the opposite vertex at [1,1,1] with maximum displayable light.
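As a toy illustration of such a projection, here is a hypothetical 3×3 matrix applied to a couple of normalized raw triplets; the coefficients are made up for the sketch, since real ones depend on the camera and the chosen output space.

```python
import numpy as np

# Hypothetical matrix from white balanced camera rgb to a linear output
# space; each row sums to 1 so that raw white [1,1,1] maps to output white.
M = np.array([[ 1.6, -0.4, -0.2],
              [-0.3,  1.5, -0.2],
              [ 0.0, -0.5,  1.5]])

raw = np.array([[0.20, 0.35, 0.10],    # one demosaiced raw triplet per row,
                [0.80, 0.78, 0.75]])   # normalized so clipping sits at 1.0

# Project into the output parallelepiped and clip to its [0,1] extent.
out = np.clip(raw @ M.T, 0.0, 1.0)
print(out)
```

Continue reading Linear Color Transforms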
Cone Fundamentals & the LMS Color Space
In the last article we showed how a digital camera’s captured raw data is related to Color Science. In my next trick I will show that the CIE 2012 2-deg XYZ Color Matching Functions $\bar{x}$, $\bar{y}$, $\bar{z}$ displayed in Figure 1 are an exact linear transform of the Stockman & Sharpe (2000) 2-deg Cone Fundamentals $\bar{l}$, $\bar{m}$, $\bar{s}$ displayed in Figure 2
$$\begin{bmatrix} \bar{x} \\ \bar{y} \\ \bar{z} \end{bmatrix} = M_{lms \to xyz} \cdot \begin{bmatrix} \bar{l} \\ \bar{m} \\ \bar{s} \end{bmatrix} \qquad (1)$$

with the CMFs and Cone Fundamentals in 3xN format, $M_{lms \to xyz}$ a 3×3 matrix and $\cdot$ matrix multiplication. Et voilà:[1]
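The claim is easy to verify numerically. In the sketch below the cone fundamentals are Gaussian stand-ins and the 3×3 matrix is made up; with the real cvrl.org tables the same least-squares recovery yields the published transform with near-zero residuals.

```python
import numpy as np

wl = np.arange(390.0, 731.0, 1.0)
def g(mu, s): return np.exp(-0.5 * ((wl - mu) / s) ** 2)

# Gaussian stand-ins for Stockman & Sharpe 2-deg cone fundamentals, 3xN.
CONES = np.stack([g(570, 50), g(543, 45), g(442, 25)])    # l, m, s rows

# Build 'CMFs' as an exact linear combination of the cones via a made-up
# 3x3 matrix, then recover that matrix by least squares.
M_true = np.array([[1.94, -1.41, 0.36],
                   [0.69,  0.35, 0.00],
                   [0.00,  0.00, 2.15]])
CMFS = M_true @ CONES

M = np.linalg.lstsq(CONES.T, CMFS.T, rcond=None)[0].T
print(np.allclose(M, M_true))            # True
print(np.abs(CMFS - M @ CONES).max())    # ~0: the transform is exact
```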
Connecting Photographic Raw Data to Tristimulus Color Science
Absolute Raw Data
In the previous article we determined that the three values recorded by a digital camera and lens in the raw data at the center of the image plane – in units of Data Numbers per pixel, as a function of absolute spectral radiance at the lens – can be estimated as follows:
$$DN_a = c \int_{\lambda} L_a(\lambda) \cdot SSF(\lambda)\, d\lambda \qquad (1)$$

with subscript $a$ indicating absolute-referred units and $SSF(\lambda)$ the three system Spectral Sensitivity Functions. In this series of articles $\cdot$ is wavelength-by-wavelength multiplication (what happens to the spectrum of light as it progresses through the imaging system) and the integral just means the area under each of the three resulting curves (integration is what the pixels do during exposure). Together they represent an inner or dot product. All variables in front of the integral – collected in the constant $c$ above – were previously described and can be considered constant for a given photographic setup.
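Numerically the inner product is just an element-wise multiplication followed by a sum. A minimal sketch, with a flat spectrum, Gaussian stand-ins for the SSFs and a made-up constant $c$:

```python
import numpy as np

wl = np.arange(400.0, 701.0, 5.0)    # wavelength grid in nm
def g(mu, s): return np.exp(-0.5 * ((wl - mu) / s) ** 2)

L = np.full(wl.size, 0.02)           # spectral radiance at the lens (stand-in)
SSF = np.stack([g(600, 30),          # r, g, b system Spectral Sensitivity
                g(530, 35),          # Functions -- Gaussian stand-ins for
                g(460, 25)])         # the real measured curves

c = 1.2e4   # made-up constant lumping exposure time, pixel area, gain...

# Wavelength-by-wavelength multiplication, then the sum approximates the
# integral: one estimated Data Number per channel.
DN = c * (SSF * L).sum(axis=1) * 5.0
print(DN)
```

Continue reading Connecting Photographic Raw Data to Tristimulus Color Science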
The Physical Units of Raw Data
In the previous article we (I) learned that the Spectral Sensitivity Functions of a given digital camera and lens are the result of the interaction of light from the scene with all of the spectrally varied components that make up the imaging system: mainly the lens, the ultraviolet/infrared hot mirror, the Color Filter Array and other filters, and finally the photoelectric layer of the sensor, which is normally silicon in consumer kit.
In this one we will put the process on a more formal theoretical footing, setting the stage for the next few on the role of white balance.
The Spectral Response of Digital Cameras
Photography works because visible light from one or more sources reaches the scene and is reflected in the direction of the camera, which then captures a signal proportional to it. The journey of light can be described in integrated units of power all the way to the sensor, for instance so many watts per square meter. However, ever since Newton we have known that such total power is in fact the weighted sum of contributions from every frequency that makes up the light – what he called its spectrum.
Our ability to see and record color depends on knowing the distribution of the power contained within a subset of these frequencies and how it interacts with the various objects in its path. This article is about how a typical digital camera for photographers interacts with the spectrum arriving from the scene: we will dissect what is sometimes referred to as the system’s Spectral Response or Sensitivity.
Opening Raspberry Pi High Quality Camera Raw Files
The Raspberry Pi Foundation recently released an interchangeable lens camera module based on the Sony IMX477, a 1/2.3″ back side illuminated sensor with 3040×4056 pixels of 1.55um pitch. In this somewhat technical article we will unpack the 12-bit raw still data that it produces and render it in a convenient color space.
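As a taste of what the article covers, here is a minimal sketch using the rawpy library, assuming the capture was saved as a DNG (for example with 'libcamera-still -r'; the raw format appended to JPEGs by raspistill needs a separate unpacking step first). The filename is hypothetical.

```python
import numpy as np
import rawpy

# 'capture.dng' is a hypothetical filename for an HQ Camera raw capture.
with rawpy.imread('capture.dng') as raw:
    cfa = raw.raw_image_visible.astype(np.float32)

    # Normalize the 12-bit data: subtract the black level and scale so
    # that clipping sits at 1.0.
    black = np.mean(raw.black_level_per_channel)
    cfa = np.clip((cfa - black) / (raw.white_level - black), 0.0, 1.0)

    # Let rawpy demosaic and convert to a convenient color space (sRGB),
    # keeping the data linear and the exposure untouched.
    rgb = raw.postprocess(gamma=(1, 1), no_auto_bright=True, output_bps=16)
```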
Continue reading Opening Raspberry Pi High Quality Camera Raw Files
Phone Camera Color ‘Accuracy’
Just in case anyone was wondering (I was), it turns out that my smartphone camera produces a better SMI (Sensitivity Metamerism Index) color score off a ColorChecker Passport target than a full frame Nikon D610 DSLR.
My latest phone, a late 2017 incarnation of the LG V34, produces raw DNG files, so I went poking around. From what I could gather the sensor is most likely Sony’s IMX 234[1], 1/2.6″, Back Side Illuminated, stacked and based on the latest and cleanest Exmor RS technology. The sensor’s 1.12um pixels produce 16MP raw files with 10-bit depth, which I understand to be typical for current phone cameras. Other features include phase detect AF, an electronic shutter with variable integration time, HDR, hot pixel suppression and raw noise reduction (ugh!) – plus a slew of video features. Continue reading Phone Camera Color ‘Accuracy’
A Just Noticeable Color Difference
While checking some out-of-gamut tones on an xy Chromaticity Diagram I started to wonder how far apart two tones needed to be in order for an observer to notice a difference. Were the tones in the yellow and red clusters below discernible or would they be indistinguishable, all being perceived as the same ‘color’?
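A quick way to put numbers on the question is a CIELAB color difference: a frequently quoted rule of thumb places the just noticeable difference around ΔE*ab ≈ 2.3 under ideal viewing conditions. A minimal sketch with two made-up neighboring yellows:

```python
import numpy as np

def delta_e_76(lab1, lab2):
    """CIE 1976 color difference: Euclidean distance in CIELAB."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

# Two hypothetical neighboring yellow tones in CIELAB coordinates.
a = (80.0, 5.0, 75.0)
b = (80.0, 7.0, 74.0)

jnd = 2.3   # often-quoted threshold under ideal viewing conditions
print(delta_e_76(a, b), delta_e_76(a, b) > jnd)   # ~2.24: right at the edge
```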
The Perfect Color Filter Array
We’ve seen how humans perceive color in daylight as a result of three types of photoreceptors in the retina, called cones, each absorbing light from the scene with a different sensitivity to the arriving spectrum.
A photographic digital imager attempts to mimic the workings of cones in the retina, usually by arranging different color filters in an array (a CFA) on top of its photoreceptors, which we normally call pixels. In a Bayer CFA configuration there are three filters, named for the predominant wavelengths that each lets through (red, green and blue), arranged in quartets such as the one shown below:
A CFA is just one way to copy the action of cones: Foveon for instance lets the sensing material itself perform the spectral separation. It is the quality of the combined spectral filtering of the imaging system (lens, UV/IR filter, CFA, sensing material etc.) that determines how accurately a digital camera is able to capture color information from the scene. So what are the characteristics of better systems, and can perfection be achieved? In this article I will pick up the discussion where it was last left off and, ignoring noise for now, attempt to answer this question using CIE conventions, in the process gaining insight into the role of the compromise color matrix and developing a method to visualize its effects.[1] Continue reading The Perfect Color Filter Array
Phase One IQ3 100MP Trichromatic vs Standard Back Linear Color, Part III
Over the last two posts we’ve been exploring some of the differences introduced by tweaks to the Color Filter Array of the Phase One IQ3 100MP Trichromatic Digital Back versus its original incarnation, the Standard Back. Refer to those for the background. In this article we will delve into some of these differences quantitatively[1].
Let’s start with the compromise color matrices we derived from David Chew’s captures of a ColorChecker 24 in the shade of a sunny November morning in Ohio[2]. These are the matrices necessary to convert white balanced raw data to the perceptual CIE XYZ color space, where it is said there should be one-to-one correspondence with colors as perceived by humans, and therefore where most measurements are performed. They are optimized for each back under the given conditions but they are not perfect, the reason for the word ‘compromise’ in their name:
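The derivation amounts to a least-squares fit over all 24 patches at once, which is why the matrix can never be exact for every one of them. A sketch with synthetic patch data (the real procedure may minimize a perceptual error like ΔE rather than the plain XYZ residual used here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 24 white balanced raw patch triplets and their
# reference XYZ values, made imperfectly related on purpose.
rgb_wb = rng.uniform(0.05, 0.9, size=(24, 3))
M_hidden = np.array([[0.70, 0.20, 0.10],
                     [0.25, 0.65, 0.10],
                     [0.00, 0.15, 0.85]])
xyz_ref = rgb_wb @ M_hidden.T + rng.normal(0.0, 0.01, size=(24, 3))

# The compromise matrix minimizes the total residual over all patches.
M = np.linalg.lstsq(rgb_wb, xyz_ref, rcond=None)[0].T
print(np.abs(xyz_ref - rgb_wb @ M.T).max())   # small but never zero
```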
Continue reading Phase One IQ3 100MP Trichromatic vs Standard Back Linear Color, Part III
Phase One IQ3 100MP Trichromatic vs Standard Back Linear Color, Part II
We have seen in the last post that Phase One apparently performed a couple of main tweaks to the Color Filter Array of its Medium Format IQ3 100MP back when it introduced the Trichromatic: it made the shapes of color filter sensitivities more symmetric by eliminating residual transmittance away from the peaks; and it boosted the peak sensitivity of the red (and possibly blue) filter. It did this with the objective of obtaining more accurate, less noisy color out of the hardware, requiring less processing and weaker purple fringing to boot.
Both changes carry the compromises discussed in the last article so the purpose of this one and the one that follows is to attempt to measure – within the limits of my tests, procedures and understanding[1] – the effect of the CFA changes from similar raw captures by the IQ3 100MP Standard Back and Trichromatic, courtesy of David Chew. We will concentrate on color accuracy, leaving purple fringing for another time.
Continue reading Phase One IQ3 100MP Trichromatic vs Standard Back Linear Color, Part II
Phase One IQ3 100MP Trichromatic vs Standard Back Linear Color, Part I
It is always interesting when innovative companies push the envelope of the state-of-the-art of a single component in their systems because a lot can be learned from before and after comparisons. I was therefore excited when Phase One introduced a Trichromatic version of their Medium Format IQ3 100MP Digital Back last September because it could allow us to isolate the effects of tweaks to their Bayer Color Filter Array, assuming all else stays the same.
Thanks to two virtually identical captures by David Chew at getDPI, and Erik Kaffehr’s intelligent questions at DPR, in the following articles I will explore the effect on linear color of the new Trichromatic CFA (TC) vs the old one on the Standard Back (SB). In the process we will discover that – within the limits of my tests, procedures and understanding[1] – the Standard Back produces apparently more ‘accurate’ color while the Trichromatic produces better looking matrices, potentially resulting in ‘purer’ signals. Continue reading Phase One IQ3 100MP Trichromatic vs Standard Back Linear Color, Part I
Bayer CFA Effect on Sharpness
In this article we shall find that the effect of a Bayer CFA on the spatial frequencies, and hence the ‘sharpness’ information, captured by a sensor compared to those from the corresponding monochrome version can range from (almost) nothing to halving the potentially unaliased range – depending on the chrominance content of the image and the direction in which the spatial frequencies are being stressed. Continue reading Bayer CFA Effect on Sharpness
Linear Color: Applying the Forward Matrix
Now that we know how to create a 3×3 linear matrix to convert white balanced and demosaiced raw data into the connection space – and where to obtain the 3×3 linear matrix to then convert it to a standard output color space like sRGB – we can take a closer look at the matrices and apply them to a real world capture chosen for its wide range of chromaticities.
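The full chain then reduces to two matrix multiplications per pixel. A minimal sketch with a made-up Forward Matrix and the standard linear XYZ-to-sRGB matrix, ignoring the D50-to-D65 chromatic adaptation a DNG pipeline would insert between the two steps:

```python
import numpy as np

# Hypothetical Forward Matrix: white balanced camera rgb -> XYZ.
FM = np.array([[0.60, 0.25, 0.10],
               [0.30, 0.60, 0.10],
               [0.05, 0.10, 0.70]])

# Standard linear XYZ (D65) -> sRGB matrix.
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def to_srgb(rgb_wb):
    """rgb_wb: ...x3 white balanced, demosaiced raw data in [0,1]."""
    xyz = rgb_wb @ FM.T
    srgb = np.clip(xyz @ XYZ_TO_SRGB.T, 0.0, 1.0)
    return srgb ** (1 / 2.2)     # simple gamma in lieu of the sRGB curve

print(to_srgb(np.array([0.2, 0.4, 0.1])))
```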
Continue reading Linear Color: Applying the Forward Matrix
Color: Determining a Forward Matrix for Your Camera
We understand from the previous article that rendering color with Adobe DNG raw conversion essentially means mapping raw data in the form of $rgb$ triplets into a standard color space via a Profile Connection Space in a two-step process.
The first step white balances and demosaics the raw data, which at that stage we will refer to as $rgb_{wb}$, followed by converting it to the Profile Connection Space through linear projection by an unknown ‘Forward Matrix’ (as DNG calls it) of the form

$$XYZ_{D50} = FM \cdot rgb_{wb} \qquad (1)$$
with data as column-vectors in a 3xN array. Determining the nine coefficients of this matrix is the main subject of this article[1]. Continue reading Color: Determining a Forward Matrix for Your Camera
Color: From Object to Eye
How do we translate captured image information into a stimulus that will produce the appropriate perception of color? It’s actually not that complicated[1].
Recall from the introductory article that a photon absorbed by a given cone type (L, M or S) in the fovea produces the same stimulus to the brain regardless of its wavelength[2]. Take the example of the eye of an observer focusing onto the retina the image of a uniform object with a spectral photon distribution of 1000 photons/nm over the 400 to 720nm wavelength range and no photons outside of it.
Because the system is linear, cones in the foveola will weigh the incoming photons by their relative sensitivity (probability) functions and add the result up to produce a stimulus proportional to the area under the resulting curve. For instance one cone type may see about 321,000 photons arrive and produce a relative stimulus of about 94,700, the weighted area under its curve.
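The arithmetic is straightforward to reproduce. In the sketch below the cone sensitivity is a Gaussian stand-in, peak normalized to 1, so the stimulus only approximates the figure quoted above; use the real Stockman & Sharpe tables for the exact numbers.

```python
import numpy as np

wl = np.arange(400.0, 721.0, 1.0)       # 400-720 nm inclusive, 321 samples
photons = np.full(wl.size, 1000.0)      # flat spectrum: 1000 photons/nm

# Gaussian stand-in for one cone fundamental (peak sensitivity = 1).
sens = np.exp(-0.5 * ((wl - 545.0) / 38.0) ** 2)

arriving = photons.sum()                # 321,000 photons reach the cone
stimulus = (photons * sens).sum()       # weighted area: ~95,000 here
print(arriving, stimulus)
```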
An Introduction to Color in Digital Cameras
This article will set the stage for a discussion on how pleasing color is produced during raw conversion. The easiest way to understand how a camera captures and processes ‘color’ is to start with an example of how the human visual system does it.
An Example: Green
Light from the sun strikes leaves on a tree. The foliage of the tree absorbs some of the light and reflects the rest diffusely towards the eye of a human observer. The eye focuses the image of the foliage onto the retina at its back. Near the center of the retina there is a small circular area called the fovea centralis, which is dense with light receptors of well defined spectral sensitivities, called cones. Information from the cones is pre-processed by neurons and carried by nerve fibers via the optic nerve to the brain where, after some additional psychovisual processing, we recognize the color of the foliage as green[1].
Continue reading An Introduction to Color in Digital Cameras
How Is a Raw Image Rendered?
What are the basic low level steps involved in raw file conversion? In this article I will discuss what happens under the hood of digital camera raw converters in order to turn raw file data into a viewable image, a process sometimes referred to as ‘rendering’. We will use the following raw capture by a Nikon D610 to show how image information is transformed at every step along the way:
Rendering = Raw Conversion + Editing
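To anticipate the steps discussed in the article, here is a deliberately minimal sketch of such a pipeline. Everything in it is a simplification: the ‘demosaicing’ is a trivial half-size binning of assumed RGGB quartets, the matrices are taken as given, and real converters white balance before demosaicing and apply a proper tone curve.

```python
import numpy as np

def demosaic_half(cfa):
    """Trivial half-size 'demosaic' of an RGGB mosaic: one rgb pixel per quartet."""
    r = cfa[0::2, 0::2]
    g = (cfa[0::2, 1::2] + cfa[1::2, 0::2]) / 2.0
    b = cfa[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)

def render(cfa, black, white, wb_gains, fwd, xyz_to_srgb):
    x = np.clip((cfa.astype(np.float32) - black) / (white - black), 0, 1)
    rgb = demosaic_half(x) * wb_gains        # demosaic, then white balance
    xyz = rgb @ fwd.T                        # camera space -> XYZ
    srgb = np.clip(xyz @ xyz_to_srgb.T, 0, 1)
    return srgb ** (1 / 2.2)                 # stand-in for the sRGB tone curve
```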
Linearity in the Frequency Domain
For the purposes of ‘sharpness’ (spatial resolution) measurement in photography, cameras can be considered shift-invariant, linear systems when capturing scene detail of random size and direction, such as one often finds in landscapes.
Shift invariant means that the imaging system should respond exactly the same way no matter where light from the scene falls on the sensing medium. We know that in a strict sense this is not true because, for instance, pixels tend to have squarish active areas, so their response cannot be isotropic by definition. However, when using the slanted edge method of linear spatial resolution measurement we can effectively make it shift invariant by careful preparation of the testing setup – for example by slanting the edge only a few degrees off the vertical or horizontal. Continue reading Linearity in the Frequency Domain
Nikon CFA Spectral Power Distribution
I measured the Spectral Photon Distribution of the three CFA filters of a Nikon D610 in ‘Daylight’ conditions with a cheap spectrometer. Taking a cue from this post I pointed it at light from the sun reflected off a gray card and took a raw capture of the spectrum it produced.
An ImageJ plot did the rest. I took a dozen captures at slightly different angles and kept the clearest pictures of the spectrum. Shown are the three spectral curves averaged over the two best opposing captures, each proportional to the number of photons let through by the respective Color Filter. The units on the vertical axis are black-subtracted raw values (DN), therefore proportional to the number of photons incident on each channel. The Photopic Eye Luminous Efficiency Function (2 degree, Sharpe et al 2005) is also shown for reference, scaled to the same maximum as the green curve (although in energy units, my bad).
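The ImageJ step can equally be done in a few lines of numpy: average the demosaiced spectrum image down its columns to get one curve per channel. A sketch, assuming 'img' holds a linear, black-subtracted rendering of one capture with wavelength running along the x axis:

```python
import numpy as np

# Hypothetical input: HxWx3 linear, black-subtracted image of the spectrum.
img = np.random.rand(100, 640, 3)   # random stand-in for the real capture

curves = img.mean(axis=0)           # Wx3: one r, g, b curve, DN vs. column

# Averaging two opposing captures reduces geometric and shading errors:
# curves = (curves_a + curves_b[::-1]) / 2   # hypothetical second capture
```

Continue reading Nikon CFA Spectral Power Distribution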