Tag Archives: white balance

In this article we confirm quantitatively that getting the White Point, hence the White Balance, right is essential to obtaining natural tones out of our captures. How quickly do colors degrade if the estimated Correlated Color Temperature is off?
A Question of Balance
In this article I bring together qualitatively the main concepts discussed in the series and argue that in many (most) cases a photographer’s job during raw conversion, if the goal is natural looking tones in their work, is to get the illuminant and relative white balance right – and to step away from any slider found in a menu with the word ‘color’ in it.
If you are an outdoor photographer trying to get balanced greens under an overcast sky – or a portrait photographer after good skin tones – dialing in the appropriate scene, illuminant and white balance puts the camera/converter manufacturer’s color science to work and gets you most of the way there safely. Of course the judicious photographer always knew to do that – hopefully now with a better appreciation of why.
White Point, CCT and Tint
As we have seen in the previous post, knowing the characteristics of light at the scene is critical to determining the color transform that will allow captured raw data to be displayed naturally in an output color space like the ubiquitous sRGB.
White Point
The light source Spectral Power Distribution (SPD) corresponds to a unique White Point, namely a set of coordinates in the XYZ color space, obtained by multiplying its SPD (the blue curve below) wavelength-by-wavelength by the Color Matching Functions of a Standard Observer ($\bar{x}(\lambda)$, $\bar{y}(\lambda)$, $\bar{z}(\lambda)$).

Adding (integrating) the three resulting curves up we get three values, $X$, $Y$ and $Z$, that represent the illuminant’s coordinates in the XYZ color space. The White Point is then obtained by dividing these coordinates by the $Y$ value to normalize it to 1.

The White Point is then seen to be independent of the intensity of the arriving light, as $Y$ represents Luminance from the scene. For instance a Standard Daylight Illuminant with a Correlated Color Temperature of 5300K has a White Point of[1]

$XYZ_{WP} = [0.9593 \;\; 1.0000 \;\; 0.8833]$

Continue reading White Point, CCT and Tint
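As a rough numerical sketch of the procedure just described (the wavelength grid, SPD and CMF arrays below are placeholders standing in for the real tabulated data, not values from the article):

```python
import numpy as np

# Placeholder wavelength grid (nm); in practice the illuminant SPD and the CIE 1931
# 2-degree Standard Observer CMFs would be tabulated at these wavelengths.
wavelengths = np.arange(380, 781, 5)
d_lambda = 5.0

spd = np.ones(wavelengths.size)          # placeholder illuminant Spectral Power Distribution
cmfs = np.ones((wavelengths.size, 3))    # placeholder x-bar, y-bar, z-bar columns

# Wavelength-by-wavelength multiplication of the SPD by each CMF, then summation
# (the area under each of the three resulting curves) gives X, Y and Z.
XYZ = (spd[:, None] * cmfs).sum(axis=0) * d_lambda

# Dividing by Y normalizes the White Point, making it independent of light intensity.
white_point = XYZ / XYZ[1]
# With real tables for a 5300K daylight illuminant this reproduces [0.9593 1.0000 0.8833].
```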
Linear Color Transforms
Building on a preceding article of this series: once demosaiced, raw data from a Bayer Color Filter Array sensor represents the captured image as a set of triplets, corresponding to the estimated light intensity at a given pixel under each of the three spectral filters that are part of the CFA. The filters are band-pass and named for the representative peak wavelength that they let through, typically red, green and blue, or $r$, $g$, $b$ for short.
Since the resulting intensities are linearly independent they can form the basis of a 3D coordinate system, with each triplet representing a point within it. The system is bounded in the raw data by the extent of the Analog to Digital Converter, with all three channels spanning the same range, from Black Level with no light to clipping with maximum recordable light. Therefore it can be thought to represent a space in the form of a cube – or better, a parallelepiped – with the origin at [0,0,0] and the opposite vertex at the clipping value in Data Numbers, expressed as [1,1,1] if we normalize all data by it.
The job of the color transform is to project demosaiced raw data to a standard output color space designed for viewing. Such spaces have names like the ubiquitous sRGB. The output space can also be shown in 3D as a parallelepiped with the origin at [0,0,0] with no light and the opposite vertex at [1,1,1] with maximum displayable light.

Continue reading Linear Color Transforms
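As a minimal sketch of such a projection (the 3x3 matrix below is a made-up placeholder, not any particular camera’s forward matrix, and the data are assumed already white balanced and normalized to the [0,1] cube):

```python
import numpy as np

# Demosaiced raw triplets, Black Level subtracted and normalized to [0, 1] by the clipping value.
raw_rgb = np.random.rand(4, 4, 3)        # a tiny hypothetical image, shape (height, width, 3)

# Placeholder 3x3 linear transform from camera space to the output color space; in practice
# it comes from the camera/converter manufacturer's color characterization of the CFA.
M = np.array([[ 1.6, -0.4, -0.2],
              [-0.3,  1.5, -0.2],
              [ 0.0, -0.5,  1.5]])

# Each output triplet is a fixed linear combination of the input triplet: out = M @ raw.
out_rgb = raw_rgb @ M.T

# Triplets that land outside the output parallelepiped are clipped back to its [0, 1] extent.
out_rgb = np.clip(out_rgb, 0.0, 1.0)
```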
Connecting Photographic Raw Data to Tristimulus Color Science
Absolute Raw Data
In the previous article we determined that the three values recorded by a digital camera and lens in the raw data at the center of the image plane – in units of Data Numbers per pixel, as a function of absolute spectral radiance at the lens – can be estimated as follows:
(1)   $raw_a = k_a \int L_a(\lambda) \cdot SSF_{r,g,b}(\lambda)\, d\lambda$

with subscript $a$ indicating absolute-referred units, $L_a(\lambda)$ the absolute spectral radiance at the lens and $SSF_{r,g,b}$ the three system Spectral Sensitivity Functions. In this series of articles $\cdot$ is wavelength by wavelength multiplication (what happens to the spectrum of light as it progresses through the imaging system) and the integral just means the area under each of the three resulting curves (integration is what the pixels do during exposure). Together they represent an inner or dot product. The factor $k_a$ in front of the integral collects the variables previously described, which can be considered constant for a given photographic setup.

Continue reading Connecting Photographic Raw Data to Tristimulus Color Science
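In discrete form Equation (1) is just a wavelength-by-wavelength multiplication followed by a sum; here is a minimal numerical sketch, where the radiance, the SSFs and the constant $k_a$ are all placeholder values rather than measured data:

```python
import numpy as np

wavelengths = np.arange(380, 781, 5)      # nm
d_lambda = 5.0

L_a = np.ones(wavelengths.size)           # placeholder absolute spectral radiance at the lens
SSF = np.ones((wavelengths.size, 3))      # placeholder r, g, b system Spectral Sensitivity Functions
k_a = 1.0                                 # placeholder for the setup constants in front of the integral

# Wavelength-by-wavelength multiplication, then the area under each of the three
# resulting curves: three raw values, one per color channel, in DN per pixel.
raw_a = k_a * (L_a[:, None] * SSF).sum(axis=0) * d_lambda
```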
Phase One IQ3 100MP Trichromatic vs Standard Back Linear Color, Part II
We have seen in the last post that Phase One apparently performed a couple of main tweaks to the Color Filter Array of its Medium Format IQ3 100MP back when it introduced the Trichromatic: it made the shapes of the color filter sensitivities more symmetric by eliminating residual transmittance away from the peaks; and it boosted the peak sensitivity of the red (and possibly blue) filter. It did this with the objective of obtaining more accurate, less noisy color out of the hardware, requiring less processing and producing weaker purple fringing to boot.
Both changes carry the compromises discussed in the last article, so the purpose of this article and the one that follows is to attempt to measure – within the limits of my tests, procedures and understanding[1] – the effect of the CFA changes from similar raw captures by the IQ3 100MP Standard Back and Trichromatic, courtesy of David Chew. We will concentrate on color accuracy, leaving purple fringing for another time.
Continue reading Phase One IQ3 100MP Trichromatic vs Standard Back Linear Color, Part II
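Color accuracy in comparisons of this kind is commonly summarized as the CIELAB difference (ΔE) between rendered and reference patches; the sketch below shows that generic metric with made-up patch values, and is not necessarily the exact procedure used in these tests:

```python
import numpy as np

def xyz_to_lab(xyz, white):
    """CIE 1976 L*a*b* coordinates from XYZ tristimulus values, relative to a white point."""
    t = xyz / white
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

# Made-up XYZ values for 24 rendered and reference color-checker patches.
rendered = np.random.rand(24, 3)
reference = np.random.rand(24, 3)
white = np.array([0.9504, 1.0000, 1.0888])   # D65 white point, used here only as an example

# Classic Euclidean distance in CIELAB (Delta E*ab) per patch, then its mean over the chart.
delta_e = np.linalg.norm(xyz_to_lab(rendered, white) - xyz_to_lab(reference, white), axis=-1)
print(delta_e.mean())
```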
How Is a Raw Image Rendered?
What are the basic low level steps involved in raw file conversion? In this article I will discuss what happens under the hood of digital camera raw converters in order to turn raw file data into a viewable image, a process sometimes referred to as ‘rendering’. We will use the following raw capture by a Nikon D610 to show how image information is transformed at every step along the way:
Rendering = Raw Conversion + Editing
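As a hedged outline of the kind of low level steps the article walks through, here is a toy rendering chain; every function, value and the crude half-resolution demosaic are illustrative placeholders of mine, not the article’s exact recipe or any converter’s actual code:

```python
import numpy as np

def simple_demosaic(mosaic):
    """Crude half-resolution 'demosaic' of an RGGB mosaic, purely for illustration."""
    r = mosaic[0::2, 0::2]
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0
    b = mosaic[1::2, 1::2]
    return np.dstack([r, g, b])

def render(raw, black_level, clip_level, wb_multipliers, forward_matrix, gamma=2.2):
    """Illustrative linear rendering chain for an RGGB Bayer raw mosaic."""
    # 1. Subtract the Black Level and normalize by the clipping value so the data span [0, 1].
    img = np.clip((raw.astype(float) - black_level) / (clip_level - black_level), 0.0, 1.0)

    # 2. Demosaic the CFA mosaic into one full r, g, b triplet per pixel.
    rgb = simple_demosaic(img)

    # 3. White balance: scale the channels so a neutral subject comes out equal in r, g, b.
    rgb = rgb * np.asarray(wb_multipliers)

    # 4. Project white-balanced camera RGB into the output color space with a 3x3 matrix.
    rgb = np.clip(rgb @ np.asarray(forward_matrix).T, 0.0, 1.0)

    # 5. Apply the output tone curve / gamma so the image displays correctly.
    return rgb ** (1.0 / gamma)

# Usage with synthetic numbers (14-bit data, Black Level 600, made-up white balance):
raw = np.random.randint(600, 16383, size=(8, 8)).astype(np.uint16)
out = render(raw, black_level=600, clip_level=16383,
             wb_multipliers=[2.0, 1.0, 1.5], forward_matrix=np.eye(3))
```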
Combining Bayer CFA MTF Curves – II
In this and the previous article I discuss how Modulation Transfer Functions (MTF) obtained from the raw data of each Bayer CFA color channel can be combined to provide a meaningful composite MTF curve for the imaging system as a whole.
There are two ways that this can be accomplished: an input-referred approach that reflects the performance of the hardware only; and an output-referred one that also takes into consideration how the image will be displayed. Both are valid and the differences are typically minor, though the weights of the latter are scene, camera/lens and illuminant dependent – while the former are not. Therefore my recommendation in this context is to stick with input-referred weights when comparing cameras and lenses.[1]

Continue reading Combining Bayer CFA MTF Curves – II
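As a minimal sketch of what combining the per-channel curves looks like, with placeholder MTF data; the specific weights below (Bayer channel proportions for the input-referred case, sRGB luminance coefficients for the output-referred case) are typical choices and an assumption on my part, not necessarily the article’s figures:

```python
import numpy as np

# Placeholder per-channel MTF curves measured from the raw data, one value per spatial frequency.
frequencies = np.linspace(0.0, 0.5, 64)          # cycles per pixel
mtf_r = np.exp(-4.0 * frequencies)
mtf_g = np.exp(-3.5 * frequencies)
mtf_b = np.exp(-4.5 * frequencies)
channel_mtfs = np.stack([mtf_r, mtf_g, mtf_b])   # shape (3, number of frequencies)

# Input-referred weights: fixed, here simply the r, g, b proportions of a Bayer mosaic (assumption).
w_input = np.array([0.25, 0.50, 0.25])

# Output-referred weights: display dependent, here the sRGB/Rec.709 luminance coefficients (assumption).
w_output = np.array([0.2126, 0.7152, 0.0722])

mtf_system_input = w_input @ channel_mtfs        # composite MTF, hardware-only view
mtf_system_output = w_output @ channel_mtfs      # composite MTF, as displayed
```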