Tag Archives: CFA

Connecting Photographic Raw Data to Tristimulus Color Science

Absolute Raw Data

In the previous article we determined that the three r_{_L}g_{_L}b_{_L} values recorded in the raw data at the center of the image plane (in units of Data Numbers per pixel) by a digital camera and lens, as a function of absolute spectral radiance L(\lambda) at the lens, can be estimated as follows:

(1)   \begin{equation*} r_{_L}g_{_L}b_{_L} =\frac{\pi p^2 t}{4N^2} \int\limits_{380}^{780}L(\lambda) \odot SSF_{rgb}(\lambda)  d\lambda \end{equation*}

with subscript _L indicating absolute-referred units and SSF_{rgb} the three system Spectral Sensitivity Functions.   In this series of articles \odot denotes wavelength-by-wavelength multiplication (what happens to the spectrum of light as it progresses through the imaging system) and the integral just means the area under each of the three resulting curves (integration is what pixels do during exposure).  Together they represent an inner or dot product.  All variables in front of the integral were described previously and can be considered constant for a given photographic setup. Continue reading Connecting Photographic Raw Data to Tristimulus Color Science
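To make the notation concrete, here is a minimal Octave/Matlab sketch of Eq. (1), assuming the radiance L and the three SSFs have been loaded as columns sampled every nm, with p, t and N defined as in the equation; the variable names are mine, not from the article:

```matlab
% Numerical sketch of Eq. (1): multiply the spectrum wavelength by
% wavelength by each SSF, then take the area under each resulting curve.
lambda = 380:780;                    % nm, 401 samples
k = pi * p^2 * t / (4 * N^2);        % the constants in front of the integral
rgb = k * trapz(lambda, L .* SSF);   % L is 401x1, SSF is 401x3 -> 1x3, in DN
```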

The Physical Units of Raw Data

In the previous article we learned that the Spectral Sensitivity Functions of a given digital camera and lens are the result of the interaction of light from the scene with all of the spectrally varied components that make up the imaging system: mainly the lens, the ultraviolet/infrared hot mirror, the Color Filter Array and other filters, and finally the photoelectric layer of the sensor, which is normally silicon in consumer kit.

Figure 1. The journey of light from source to sensor.  Cone Ω will play a starring role in the narrative that follows.

In this one we will put the process on a more formal theoretical footing, setting the stage for the next few articles on the role of white balance.

Continue reading The Physical Units of Raw Data

The Spectral Response of Digital Cameras

Photography works because visible light from one or more sources reaches the scene and is reflected in the direction of the camera, which then captures a signal proportional to it.  The journey of light can be described in integrated units of power all the way to the sensor, for instance so many watts per square meter. However, ever since Newton we have known that such total power is in fact the weighted sum of contributions by every frequency that makes up the light, what he called its spectrum.

Our ability to see and record color depends on knowing the distribution of the power contained within a subset of these frequencies and how it interacts with the various objects in its path.  This article is about how a typical digital camera for photographers interacts with the spectrum arriving from the scene: we will dissect what is sometimes referred to as the system’s Spectral Response or Sensitivity.
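As a preview of how such curves come about, here is a sketch of the combination, assuming each component's spectral transmission has been loaded as a vector on a common wavelength axis; the variable names are illustrative, not the article's:

```matlab
% System SSF of one raw channel = product of the component responses
% the light meets on its way to silicon (all sampled on 'lambda', nm).
ssf_r = lens .* hotMirror .* cfa_r .* silicon;   % red channel
ssf_g = lens .* hotMirror .* cfa_g .* silicon;   % green channel
ssf_b = lens .* hotMirror .* cfa_b .* silicon;   % blue channel
plot(lambda, [ssf_r(:), ssf_g(:), ssf_b(:)]);
xlabel('wavelength (nm)'); ylabel('relative sensitivity');
```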

Figure 1. Spectral Sensitivity Functions of an arbitrary imaging system, resulting from combining the responses of the various components described in the article.

Continue reading The Spectral Response of Digital Cameras

Pi HQ Cam Sensor Performance

Now that we know how to open 12-bit raw files captured with the new Raspberry Pi High Quality Camera, we can learn a bit more about the capabilities of its 1/2.3″ Sony IMX477 sensor from a keen photographer’s perspective.  The subject is a bit dry, so I will give you the summary upfront.  These figures were obtained with my HQ module at room temperature and the raspistill --raw (-r) command:

| Raspberry Pi HQ Camera | raspistill --raw -ag 1 | Comments |
|---|---|---|
| Black Level | 256.3 DN | 256.0 - 257.3 based on gain |
| White Level | 4095 DN | Constant throughout |
| Analog Gain | 1 | Gain range 1 - 16 |
| Read Noise | 3 e- at gain 1; 1.5 e- at gain 16 | 1.53 DN and 11.50 DN respectively, from black frames |
| Clipping (FWC) | 8180 e- | At base gain; 3400 e-/um^2 |
| Dynamic Range | 11.15 stops; 11.3 stops | SNR = 1 to clipping; read noise to clipping, respectively |
| System Gain | 0.47 DN/e- | At base analog gain |
| Star Eater Algorithm | Partly defeatable | All channels, from base gain and from min shutter speed |
| Low Pass Filter | Yes | All channels, from base gain and from min shutter speed |
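For context, a sketch of the photon transfer arithmetic behind the System Gain and Read Noise rows, assuming two flat-field captures of the same uniform patch (flat1, flat2) and two lens-cap captures (dark1, dark2) have been loaded as double matrices of DN; the variable names are mine:

```matlab
black = 256.3;                                 % black level from the table, DN
f1 = flat1 - black;  f2 = flat2 - black;
S = mean((f1(:) + f2(:)) / 2);                 % mean signal, DN
% Differencing two identical frames cancels fixed pattern noise; the
% variance of the difference is twice the temporal variance of one frame.
varT   = var(f1(:) - f2(:)) / 2;               % shot + read variance, DN^2
readDN = std(dark1(:) - dark2(:)) / sqrt(2);   % read noise, DN
g = (varT - readDN^2) / S;                     % system gain, DN/e-
fprintf('gain %.2f DN/e-, read noise %.2f e-\n', g, readDN / g);
```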

Continue reading Pi HQ Cam Sensor Performance

Opening Raspberry Pi High Quality Camera Raw Files

The Raspberry Pi Foundation recently released an interchangeable lens camera module based on the Sony  IMX477, a 1/2.3″ back side illuminated sensor with 3040×4056 pixels of 1.55um pitch.  In this somewhat technical article we will unpack the 12-bit raw still data that it produces and render it in a convenient color space.

Figure 1. 12-bit raw capture by Raspberry Pi High Quality Camera with 16 mm kit lens at f/8, 1/2 s, base ISO. The image was loaded into Matlab and rendered Half Height Nearest Neighbor in the Adobe RGB color space with a touch of local contrast and sharpening.  Click on it to see it in its own tab and view it at 100% magnification. If your browser is not color managed you may not see colors properly.
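For readers who want to follow along, a sketch of the 12-bit unpacking step, assuming the raw payload appended by raspistill --raw has already been located at the end of the JPEG and read into the uint8 vector packed with any row padding stripped; two pixels are packed into three bytes, and the nibble assignment in the third byte should be verified against a known capture:

```matlab
% Unpack 12-bit packed data: bytes 1 and 2 hold the 8 most significant
% bits of two pixels, byte 3 holds their two 4-bit remainders.
b  = double(reshape(packed, 3, []));      % one column per 2-pixel group
p1 = b(1,:) * 16 + mod(b(3,:), 16);       % pixel 1: MSBs + low nibble
p2 = b(2,:) * 16 + floor(b(3,:) / 16);    % pixel 2: MSBs + high nibble
% (nibble order may be swapped on some firmware; check on a known image)
raw = reshape([p1; p2], 4056, 3040)';     % 3040 rows x 4056 columns
```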

Continue reading Opening Raspberry Pi High Quality Camera Raw Files

The HV Spectrogram

A spectrogram, also sometimes referred to as a periodogram, is a visual representation of the Power Spectrum of a signal.  The Power Spectrum answers the question “How much power is contained in the frequency components of the signal?”. In digital photography a Power Spectrum can show the relative strength of repeating patterns in captures and whether processing has been applied.

In this article I will describe how you can construct a spectrogram and how to interpret it, using dark field raw images taken with the lens cap on as an example.  This can tell us much about the performance of our imaging devices in the darkest shadows and how well tuned their sensors are there.

Figure 1. Horizontal and Vertical Spectrogram of noise captured in the raw data by a Nikon Z7 at base ISO with  the lens cap on.  The plot shows clear evidence of low-pass filtering in the blue CFA color plane and pattern noise repeating every 6 rows there and in one of the green ones.
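A minimal sketch of the horizontal half of such a plot, assuming one CFA plane of a black-subtracted dark frame has been loaded into the matrix plane; the vertical version is identical with the FFT taken down the columns instead:

```matlab
[~, cols] = size(plane);
x  = plane - mean(plane(:));                  % remove the mean (DC) term
P  = abs(fft(x, [], 2)).^2 / cols;            % power spectrum of each row
Ph = mean(P, 1);                              % average over all rows
f  = (0:cols-1) / cols;                       % frequency, cycles/pixel
semilogy(f(2:floor(cols/2)), Ph(2:floor(cols/2)));
xlabel('cycles/pixel'); ylabel('power');      % flat = white noise; spikes = patterns
```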

Continue reading The HV Spectrogram

The Perfect Color Filter Array

We’ve seen how humans perceive color in daylight as a result of three types of photoreceptors in the retina called cones that absorb wavelengths of light from the scene with different sensitivities to the arriving spectrum.

Figure 1.  Quantitative Color Science.

A photographic digital imager attempts to mimic the workings of cones in the retina, usually by having different color filters arranged in an array (CFA) on top of its photoreceptors, which we normally call pixels.  In a Bayer CFA configuration there are three filters, named for the predominant wavelengths that each lets through (red, green and blue), arranged in quartets such as shown below:

Figure 2.  Bayer Color Filter Array: RGGB  layout.  Image under license from Cburnett, pixels shifted and text added.
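In the raw file the quartets translate into four subsampled color planes; a sketch of how they can be pulled out of a mosaic with the RGGB origin shown above, assuming the mosaic has been loaded into the matrix raw:

```matlab
R  = raw(1:2:end, 1:2:end);   % red:    odd rows, odd columns
G1 = raw(1:2:end, 2:2:end);   % green1: odd rows, even columns
G2 = raw(2:2:end, 1:2:end);   % green2: even rows, odd columns
B  = raw(2:2:end, 2:2:end);   % blue:   even rows, even columns
```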

A CFA is just one way to copy the action of cones:  Foveon for instance lets the sensing material itself perform the spectral separation.  It is the quality of the combined spectral filtering part of the imaging system (lenses, UV/IR, CFA, sensing material etc.) that determines how accurately a digital camera is able to capture color information from the scene.  So what are the characteristics of better systems, and can perfection be achieved?  In this article I will pick up the discussion where it was last left off and, ignoring noise for now, attempt to answer this question using CIE conventions, in the process gaining insight into the role of the compromise color matrix and developing a method to visualize its effects.[1]  Continue reading The Perfect Color Filter Array

Phase One IQ3 100MP Trichromatic vs Standard Back Linear Color, Part III

Over the last two posts we’ve been exploring some of the differences introduced by tweaks to the Color Filter Array of the Phase One IQ3 100MP Trichromatic Digital Back versus its original incarnation, the Standard Back.  Refer to those for the background.  In this article we will delve into some of these differences quantitatively[1].

Let’s start with the compromise color matrices we derived from David Chew’s captures of a ColorChecker 24 in the shade of a sunny November morning in Ohio[2].   These are the matrices necessary to convert white balanced raw data to the perceptual CIE XYZ color space, where it is said there should be one-to-one correspondence with colors as perceived by humans, and therefore where most measurements are performed.  They are optimized for each back under the given conditions but they are not perfect, hence the word ‘compromise’ in their name:

Figure 1. Optimized Linear Compromise Color Matrices for the Phase One IQ3 100 MP Standard and Trichromatic Backs under approximately D65 light.
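To give a feel for how such matrices are judged, here is a sketch that scores a 3x3 matrix M against reference patch values; rgb holds the white balanced raw ColorChecker readings (24x3, one row per patch) and XYZref the corresponding D50 reference values, all assumed already gathered, and the metric is the simple 1976 Delta E:

```matlab
XYZ = rgb * M';                          % project each patch into XYZ(D50)
fl  = @(t) (t > (6/29)^3) .* t.^(1/3) + ...
      (t <= (6/29)^3) .* (t/(3*(6/29)^2) + 4/29);   % CIELAB helper
wp  = [0.9642 1.0 0.8249];               % D50 white point
lab = @(X) [116*fl(X(:,2)/wp(2)) - 16, ...
            500*(fl(X(:,1)/wp(1)) - fl(X(:,2)/wp(2))), ...
            200*(fl(X(:,2)/wp(2)) - fl(X(:,3)/wp(3)))];
dE = sqrt(sum((lab(XYZ) - lab(XYZref)).^2, 2));     % Delta E 1976 per patch
fprintf('mean dE %.2f, max dE %.2f\n', mean(dE), max(dE));
```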

Continue reading Phase One IQ3 100MP Trichromatic vs Standard Back Linear Color, Part III

Phase One IQ3 100MP Trichromatic vs Standard Back Linear Color, Part I

It is always interesting when innovative companies push the envelope of the state-of-the-art of a single component in their systems because a lot can be learned from before and after comparisons.   I was therefore excited when Phase One introduced a Trichromatic version of their Medium Format IQ3 100MP Digital Back last September, because it could allow us to isolate the effects of tweaks to their Bayer Color Filter Array, assuming all else stays the same.

Figure 1. IQ3 100MP Trichromatic (left) vs the rest (right), from PhaseOne.com.   Units are not specified but one would assume that the vertical axis is relative spectral sensitivity and the horizontal axis represents wavelength.

Thanks to two virtually identical captures by David Chew at getDPI, and Erik Kaffehr’s intelligent questions at DPR, in the following articles I will explore the effect on linear color of the new Trichromatic CFA (TC) vs the old one on the Standard Back (SB).  In the process we will discover that – within the limits of my tests, procedures and understanding[1] – the Standard Back produces apparently more ‘accurate’ color while the Trichromatic produces better looking matrices, potentially resulting in ‘purer’ signals. Continue reading Phase One IQ3 100MP Trichromatic vs Standard Back Linear Color, Part I

Bayer CFA Effect on Sharpness

In this article we shall find that the effect of a Bayer CFA on the spatial frequencies and hence the ‘sharpness’ information captured by a sensor compared to those from the corresponding monochrome version can go from (almost) nothing to halving the potentially unaliased range – based on the chrominance content of the image and the direction in which the spatial frequencies are being stressed. Continue reading Bayer CFA Effect on Sharpness

Linear Color: Applying the Forward Matrix

Now that we know how to create a 3×3 linear matrix to convert white balanced and demosaiced raw data into XYZ_{D50}  connection space – and where to obtain the 3×3 linear matrix to then convert it to a standard output color space like sRGB – we can take a closer look at the matrices and apply them to a real world capture chosen for its wide range of chromaticities.

Figure 1. Image with color converted using the forward linear matrix discussed in the article.
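A sketch of the full conversion for an image, assuming rgb is the white balanced, demosaiced raw data (HxWx3, linear, scaled 0-1) and M the camera's Forward Matrix; the XYZ(D50) to linear sRGB matrix below is the Bradford-adapted one published by Bruce Lindbloom:

```matlab
XYZ2sRGB = [ 3.1339 -1.6169 -0.4906;
            -0.9788  1.9161  0.0335;
             0.0719 -0.2290  1.4052];        % XYZ(D50) -> linear sRGB
pix = reshape(rgb, [], 3)';                  % 3xN column vectors
lin = min(max(XYZ2sRGB * (M * pix), 0), 1);  % to XYZ(D50), then sRGB, clipped
% apply the sRGB tone curve
low = lin <= 0.0031308;
lin(low)  = 12.92 * lin(low);
lin(~low) = 1.055 * lin(~low).^(1/2.4) - 0.055;
imshow(reshape(lin', size(rgb)));
```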

Continue reading Linear Color: Applying the Forward Matrix

Color: Determining a Forward Matrix for Your Camera

We understand from the previous article that rendering color with Adobe DNG raw conversion essentially means mapping raw data in the form of rgb triplets into a standard color space via a Profile Connection Space in a two-step process:

    \[ Raw Data \rightarrow  XYZ_{D50} \rightarrow RGB_{standard} \]

The first step white balances and demosaics the raw data, which at that stage we will refer to as rgb, followed by converting it to XYZ_{D50} Profile Connection Space through linear projection by an unknown ‘Forward Matrix’ (as DNG calls it) of the form

(1)   \begin{equation*} \left[ \begin{array}{c} X_{D50} \\ Y_{D50} \\ Z_{D50} \end{array} \right] = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \left[ \begin{array}{c} r \\ g \\ b \end{array} \right] \end{equation*}

with data as column-vectors in a 3xN array.  Determining the nine a coefficients of this matrix M is the main subject of this article[1]. Continue reading Color: Determining a Forward Matrix for Your Camera
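As a preview of the simplest approach, the nine coefficients can be estimated by ordinary least squares over a target of known patches; a sketch assuming rgb holds the 3x24 white balanced raw ColorChecker values and XYZ the corresponding 3x24 D50 reference values (the article's actual optimization may differ, for instance by weighting patches or minimizing Delta E instead):

```matlab
M = XYZ / rgb;        % mrdivide: the 3x3 M minimizing ||XYZ - M*rgb|| (Frobenius)
res = XYZ - M * rgb;  % per-patch residuals for a sanity check
fprintf('rms residual %.4f\n', sqrt(mean(res(:).^2)));
```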

Color: From Object to Eye

How do we translate captured image information into a stimulus that will produce the appropriate perception of color?  It’s actually not that complicated[1].

Recall from the introductory article that a photon absorbed by a cone type (\rho, \gamma or \beta) in the fovea produces the same stimulus to the brain regardless of its wavelength[2].  Take the example of the eye of an observer that focuses on the retina the image of a uniform object with a spectral photon distribution of 1000 photons/nm in the 400 to 720nm wavelength range and no photons outside of it.

Because the system is linear, cones in the foveola will weigh the incoming photons by their relative sensitivity (probability) functions and add the result up to produce a stimulus proportional to the area under the curves.  For instance a \gamma cone may see about 321,000 photons arrive and produce a relative stimulus of about 94,700, the weighted area under the curve:

Figure 1. Light made up of 321k photons of broad spectrum and constant Spectral Photon Distribution between 400 and 720nm  is weighted by cone sensitivity to produce a relative stimulus equivalent to 94,700 photons, proportional to the area under the curve
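The arithmetic behind the figure in sketch form, assuming the gamma cone's relative sensitivity has been loaded into the vector sens, sampled every nm:

```matlab
lambda   = 400:720;                   % nm, 321 samples
spd      = 1000 * ones(size(lambda)); % photons per nm, as in the example
arriving = sum(spd);                  % 321,000 photons
stimulus = sum(spd .* sens);          % weighted area, ~94,700 in the example
```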

Continue reading Color: From Object to Eye

An Introduction to Color in Digital Cameras

This article will set the stage for a discussion on how pleasing color is produced during raw conversion.  The easiest way to understand how a camera captures and processes ‘color’ is to start with an example of how the human visual system does it.

An Example: Green

Light from the sun strikes leaves on a tree.   The foliage of the tree absorbs some of the light and reflects the rest diffusely towards the eye of a human observer.  The eye focuses the image of the foliage onto the retina at its back.  Near the center of the retina there is a small circular area called the fovea centralis, which is dense with light receptors of well defined spectral sensitivities called cones. Information from the cones is pre-processed by neurons and carried by nerve fibers via the optic nerve to the brain where, after some additional psychovisual processing, we recognize the color of the foliage as green[1].

Figure 1. The human eye absorbs light from an illuminant reflected diffusely by the object it is looking at.

Continue reading An Introduction to Color in Digital Cameras

How Is a Raw Image Rendered?

What are the basic low level steps involved in raw file conversion?  In this article I will discuss what happens under the hood of digital camera raw converters in order to turn raw file data into a viewable image, a process sometimes referred to as ‘rendering’.  We will use the following raw capture by a Nikon D610 to show how image information is transformed at every step along the way:

Figure 1. Nikon D610 with AF-S 24-120mm f/4 lens at 24mm f/8 ISO100, minimally rendered from raw by Octave/Matlab following the steps outlined in the article.
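As a roadmap, the minimal steps look something like the sketch below, for a mosaic loaded into raw (double, RGGB layout); the black and white levels and the white balance multipliers are illustrative, not the D610's, and demosaic requires the Image Processing Toolbox:

```matlab
img = (raw - 600) / (16383 - 600);                  % 1-2. black subtract, scale
img = min(max(img, 0), 1);
img(1:2:end,1:2:end) = img(1:2:end,1:2:end) * 2.0;  % 3. white balance red...
img(2:2:end,2:2:end) = img(2:2:end,2:2:end) * 1.3;  %    ...and blue (illustrative)
rgb = double(demosaic(uint16(min(img,1)*65535), 'rggb')) / 65535;  % 4. demosaic
% 5. color matrix to the output space and 6. tone curve follow, as in the
% articles on the Forward Matrix.
```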

Rendering = Raw Conversion + Editing

Continue reading How Is a Raw Image Rendered?

A Simple Model for Sharpness in Digital Cameras – Polychromatic Light

We now know how to calculate the two dimensional Modulation Transfer Function of a perfect lens affected by diffraction, defocus and third order Spherical Aberration – under monochromatic light at the given wavelength and f-number.  In digital photography however we almost never deal with light of a single wavelength.  So what effect does an illuminant with a wide spectral power distribution, going through the color filter of a typical digital camera CFA before the sensor, have on the spatial frequency responses discussed thus far?

Monochrome vs Polychromatic Light

Not much, it turns out. Continue reading A Simple Model for Sharpness in Digital Cameras – Polychromatic Light
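For the curious, a sketch of the computation behind that conclusion, assuming the product of illuminant SPD and one channel's SSF has been collapsed into normalized weights w over the wavelengths lambda (in nm):

```matlab
N = 8;                                     % f-number
f = 0:2:500;                               % spatial frequency, cycles/mm
mtf = zeros(size(f));
for i = 1:numel(lambda)
    fc = 1 / (lambda(i) * 1e-6 * N);       % diffraction cutoff, cycles/mm
    s  = min(f / fc, 1);                   % normalized frequency, clipped
    mtf = mtf + w(i) * (2/pi) * (acos(s) - s .* sqrt(1 - s.^2));
end
plot(f, mtf); xlabel('cycles/mm'); ylabel('MTF');
```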

Combining Bayer CFA MTF Curves – II

In this and the previous article I discuss how Modulation Transfer Functions (MTF) obtained from every color channel of a Bayer CFA raw capture in isolation can be combined to provide a meaningful composite MTF curve for the imaging system as a whole.

There are two ways that this can be accomplished: an input-referred approach (L) that reflects the performance of the hardware only; and an output-referred one (Y) that also takes into consideration how the image will be displayed.  Both are valid and differences are typically minor, though the weights of the latter are scene, camera/lens and illuminant dependent – while the former are not.  Therefore my recommendation in this context is to stick with input-referred weights when comparing cameras and lenses[1]. Continue reading Combining Bayer CFA MTF Curves – II
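In sketch form, with the four per-channel curves mtfR, mtfG1, mtfG2 and mtfB measured on a common frequency axis: the output-referred weights below are my assumption of sRGB luminance coefficients with green split between the two green channels, while input-referred weights would instead come from the channels' mean raw levels off the neutral target:

```matlab
wY = [0.2126, 0.7152/2, 0.7152/2, 0.0722];       % Y weights: R, G1, G2, B
mtfY = wY(1)*mtfR + wY(2)*mtfG1 + wY(3)*mtfG2 + wY(4)*mtfB;
```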

Combining Bayer CFA Modulation Transfer Functions – I

In this and the following article I will discuss my thoughts on how MTF50 results obtained from raw data of the four Bayer CFA color channels off a neutral target captured with a typical camera through the slanted edge method can be combined to provide a meaningful composite MTF50 for the imaging system as a whole.   The perimeter of the discussion is neutral slanted edge measurements of Bayer CFA raw data for linear spatial resolution (‘sharpness’) photographic hardware evaluations.  Corrections, suggestions and challenges are welcome. Continue reading Combining Bayer CFA Modulation Transfer Functions – I

What is the Effective Quantum Efficiency of my Sensor?

Now that we know how to determine how many photons impinge on a sensor we can estimate its Effective Quantum Efficiency, that is the efficiency with which it turns such a photon flux (n_{ph}) into photoelectrons (n_{e^-} ), which will then be converted to raw data to be stored in the capture’s raw file:

(1)   \begin{equation*} EQE = \frac{n_{e^-} \text{ produced by average pixel}}{n_{ph} \text{ incident on average pixel}} \end{equation*}

I call it ‘Effective’, as opposed to ‘Absolute’, because it represents the probability that a photon arriving on the sensing plane from the scene will be converted to a photoelectron by a given pixel in a digital camera sensor.  It therefore includes the effect of microlenses, fill factor, the CFA and other filters on top of the silicon in the pixel.  Whether Effective or Absolute, QE is usually expressed as a percentage, as for instance in the specification sheet of the KAF-8300 by On Semiconductor, without IR/UV filters.

For instance if  an average of 100 photons per pixel were incident on a uniformly lit spot on the sensor and on average each pixel produced a signal of 20 photoelectrons we would say that the Effective Quantum Efficiency of the sensor is 20%.  Clearly the higher the EQE the better for Image Quality parameters such as SNR. Continue reading What is the Effective Quantum Efficiency of my Sensor?
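In sketch form, with illustrative numbers matching the example above; in practice the system gain would come from a photon transfer measurement like the one shown earlier on this page:

```matlab
n_ph   = 100;            % photons incident on the average pixel
signal = 42.6;           % mean black-subtracted raw signal, DN (illustrative)
gain   = 2.13;           % system gain, DN/e- (illustrative)
n_e    = signal / gain;  % 20 photoelectrons
EQE    = n_e / n_ph      % 0.20, i.e. 20%
```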

Nikon CFA Spectral Power Distribution

I measured the Spectral Photon Distribution of the three CFA filters of a Nikon D610 in ‘Daylight’ conditions with a cheap spectrometer.  Taking a cue from this post I pointed it at light from the sun reflected off a gray card  and took a raw capture of the spectrum it produced.

Figure: raw capture of the CFA spectrum produced by the spectrometer.

An ImageJ plot did the rest.  I took a dozen captures at slightly different angles to catch the clearest spectrum.  Shown are the three spectral curves averaged over the two best opposing captures, each proportional to the number of photons let through by the respective Color Filter.   The units on the vertical axis are black-subtracted values from the raw file (DN), therefore proportional to the number of incident photons in each case.   The Photopic Eye Luminous Efficiency Function (2 degree, Sharpe et al 2005) is also shown for reference, scaled to the same maximum as the green curve (although in energy units, my bad). Continue reading Nikon CFA Spectral Power Distribution

What Is Exposure

When capturing a typical photograph, light from one or more sources is reflected from the scene, reaches the lens, goes through it and eventually hits the sensing plane.

In photography, Exposure is the quantity of visible light per unit area incident on the image plane during the time that it is exposed to the scene.  Exposure is intuitively proportional to scene Luminance $L$ and exposure time $t$.  It is inversely proportional to the square of the lens f-number $N$ because the f-number determines the relative size of the cone of light captured from the scene.  You can read more about the theory in the article on angles and the Camera Equation.
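Putting these proportionalities together, and ignoring lens transmission losses and off-axis falloff, the on-axis photometric Exposure can be written, consistent with the constants in front of the integral in Eq. (1) near the top of this page:

    \[ H = \frac{\pi\, L\, t}{4 N^2} \]

with $H$ in lux-seconds when $L$ is in candelas per square meter.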

Continue reading What Is Exposure