The Spectral Response of Digital Cameras

Photography works because visible light from one or more sources reaches the scene and is reflected in the direction of the camera, which then captures a signal proportional to it. The journey of light can be described in integrated units of power all the way to the sensor, for instance so many watts per square meter. However, ever since Newton we have known that such total power is in fact the weighted sum of contributions from every frequency that makes up the light, what he called its spectrum.

Our ability to see and record color depends on knowing the distribution of the power contained within a subset of these frequencies and how it interacts with the various objects in its path.  This article is about how a typical digital camera for photographers interacts with the spectrum arriving from the scene: we will dissect what is sometimes referred to as the system’s Spectral Response or Sensitivity.

Figure 1. Spectral Sensitivity Functions of an arbitrary imaging system, resulting from combining the responses of the various components described in the article.

Where Spectral Distributions Come from

‘Spectral’ refers to the fact that the quantity in question is made up of a number of quasi-monochromatic samples, each of a limited bandwidth \Delta\lambda around a central wavelength \lambda.  In photography the range of interest is typically sampled every 1, 5 or 10 nanometers (nm) – a \Delta\lambda of more than 10nm is considered unsuitable for color calculations.

For every sample in the range, just the sub-spectrum in the relevant +/-\frac{1}{2}\Delta\lambda interval is fed to a calibrated power meter, which integrates (sums up) the total power received over it, for instance in units of watts.  The result is then divided by \Delta\lambda to obtain the sample that will be entered at the mean wavelength \lambda in units of W/nm.  The procedure is repeated at all other regularly spaced samples in the range of interest. I mention radiometric quantities like power and energy with their units of watts (W) or joules (J) here, but it is just as easy to switch to their photometric counterparts: irradiance<->illuminance, radiance<->luminance, watts<->lumens, W/sr<->cd, etc.; see the related article.[1]  These are the units of Color Science.
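The band-averaging procedure above can be sketched in a few lines of Python; the finely measured spectrum here is toy data, not an actual measurement:

```python
import numpy as np

# Finely measured spectrum (toy data): power per 0.5 nm bin, in watts
fine_wl = np.arange(380.0, 780.5, 0.5)                        # bin centers, nm
fine_power = np.exp(-((fine_wl - 560.0) / 80.0) ** 2) * 1e-6  # W per bin

delta = 5.0  # desired sampling interval Delta-lambda, nm
centers = np.arange(380.0 + delta / 2, 780.0, delta)          # sample wavelengths
spd = np.empty_like(centers)
for i, c in enumerate(centers):
    # total power in the +/- delta/2 interval, divided by delta -> W/nm
    mask = (fine_wl >= c - delta / 2) & (fine_wl < c + delta / 2)
    spd[i] = fine_power[mask].sum() / delta
```

The division by \Delta\lambda is what makes samples taken at different intervals comparable: a 1nm-sampled and a 5nm-sampled SPD of the same source land on the same curve in W/nm.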

The Human Visual System is considered to be effectively insensitive to wavelengths below 380nm and above 780nm, so typically that’s the range of interest – and what is considered to be visible light.  For photographic applications this is often reduced to something close to 400-700nm, since most current digital cameras sport ultraviolet-infrared filters (UV/IR ‘hot mirrors’) that block most energy beyond that range, as we will see below. Sometimes the wavelength axis is shown in units of Angstroms (Å) or microns (\mu m) instead of nm, but they are equivalent since 10Å = 1nm and 1000nm = 1\mu m.

There are several types of spectral plots found in the literature that look similar but come with potentially incompatible units.  See the Appendix for a description of the main types of such plots.

Imaging System Spectral Components

In photography, light radiance from the scene arrives at the camera, goes through the lens and a number of filters before reaching the sensing area where the pixel array sits.  Thanks to the photoelectric effect, active material in the pixels (usually silicon) converts the arriving energy into photoelectrons during exposure.  These are then roughly counted by the sensor’s downstream electronics and finally stored in the raw file in the form of Data Numbers representing captured image information.  Each one of these components has its own spectral response that factors into the overall system’s.

Below is a sample sequence of such spectral responses, in the order in which light from the scene would encounter them.  Components were chosen arbitrarily thinking about a digital camera and lens from a decade or so ago.  First lens Transmittance by itself:

spectral response transmission of lens
Figure 2. Spectral Transmittance of the Nikon AF-S 24-70mm f/2.8G ED lens, courtesy of LensTip.com

Lens transmittance can vary quite a bit depending on vintage, type (e.g. prime vs zoom), quality and manufacturer. Next the UV/IR filter (a.k.a. hot mirror) by itself:

Figure 3. Spectral transmittance of Nikon D810 UV-IR hot mirror, courtesy of Kolarivision.com

These tend to be relatively stable for a given manufacturer, though Nikon for instance tightened them up over a decade ago.

In older cameras at this stage of the filter stack there could be an antialiasing filter or an AR coated clear glass, possibly with a flattish spectral response – we’ll ignore those for now.

Next are the color dyes used in the Color Filter Array, by themselves:

Figure 4.  Spectral Transmittance of Fujifilm Color Filter Array dyes, courtesy of Fujifilm.com.

These can vary substantially depending on generation, manufacturer and the intended application.

Radiance from the scene has up to this point made it through the lens and the filter stack.  The responses so far have all been transmittances, so if radiance before the lens was expressed as a Spectral Power Distribution in units of W/sr/m^2/nm, those would still be the units here.

Light still needs to go through the microlenses and interact with silicon in order to let the photoelectric effect perform its magic and produce a signal in units of photoelectrons, proportional to the DN values that will be stored in the raw file.  Both microlenses and silicon have their own spectral response, as shown in the Absolute Quantum Efficiency plot of Figure 10 in the Appendix.

However, Quantum Efficiency is relative to quanta, not energy.  It represents the probability that a photon of a given wavelength interacting with silicon in a pixel will be absorbed by it and produce a photoelectron: quanta-in, quanta-out.  So before applying this last spectral response we need to convert the arriving radiance proportional to watts (hence joules) into a signal proportional to photons.  This is easily done because we know that the energy of a photon is \frac{hc}{\lambda} J/ph, with \lambda the wavelength of light, and h and c Planck’s constant and the speed of light in the chosen medium, respectively.

Since in photography we typically do not need absolute values, we can ignore the constants and simply convert energy to quanta by multiplying the spectral radiance in each sampled interval by its mean wavelength, normalized so that the wavelength at the center of the range, in this case 550nm, carries a weight of one.  Doing so on the Absolute QE data for the Monochrome version of On-Semiconductor’s KAF-8300 sensor with microlenses seen in Figure 10 below [2] produces a Responsivity plot: energy-in, quanta-out.

Figure 5.  Responsivity of onsemi KAF-8300 monochrome sensor with microlenses, energy-in, quanta-out at each small wavelength interval.
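As a sketch of the energy-to-quanta weighting just described – the QE curve below is made up for illustration, not the KAF-8300’s actual data:

```python
import numpy as np

wl = np.arange(400.0, 710.0, 10.0)               # nm
qe = 0.5 * np.exp(-((wl - 520.0) / 120.0) ** 2)  # hypothetical QE: quanta-in, quanta-out

# Per unit energy, a photon at wavelength lambda is "worth" lambda/550 photons
# relative to one at 550 nm, since photon energy is hc/lambda
responsivity = qe * (wl / 550.0)                 # energy-in, quanta-out (relative)
```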

Spectral Sensitivity Functions = System Response

By multiplying the cascaded responses of every component in the imaging system together wavelength by wavelength, we obtain the Spectral Response of the system as a whole, sometimes referred to as its Spectral Sensitivity Functions.  Here they are for the camera and lens cobbled together in this article:

Figure 6.  Spectral Response of the imaging system made up of the ad-hoc components discussed in the article: proportional to spectral radiance-in, DN-out.

The blue channel seems a bit lower than we are used to seeing – but in general not a bad result, given the ad-hoc nature of component selection (what appeared to be reliably available online).
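The wavelength-by-wavelength product that produces a plot like Figure 6 can be sketched as follows, with every component curve replaced by a made-up stand-in on a common wavelength grid:

```python
import numpy as np

wl = np.arange(400.0, 710.0, 10.0)                 # common grid, nm

# Hypothetical component responses, all toy curves for illustration
lens_t = np.full_like(wl, 0.9)                     # lens transmittance
hot_mirror = 1 / (1 + np.exp((wl - 680.0) / 10.0)) # UV/IR cut rolling off near 680 nm
cfa_green = np.exp(-((wl - 530.0) / 50.0) ** 2)    # green CFA dye transmittance
responsivity = 0.5 * (wl / 550.0)                  # sensor: energy-in, quanta-out

# The system response is the product of all components at each wavelength
ssf_green = lens_t * hot_mirror * cfa_green * responsivity
ssf_green /= ssf_green.max()                       # normalize peak to one
```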

Most of the SSFs found in papers are the result of a process similar to that described so far, after discounting the radiance of the test source.  They are almost always shown in relative units, often with the green or all peaks normalized to one.  For example, here is a comparison to the Spectral Sensitivity Functions of a circa 2011 Nikon D5100 with unknown lens (tsk tsk) measured at the National Physical Laboratory in the UK.[3]  The SSFs in Figure 7 are all normalized to peak at one; those derived in this article are represented by dashed lines, the D5100’s by solid ones:

Figure 7.  Comparison of the Spectral Sensitivity Functions derived in this article (dashed curves) with those for a Nikon D5100 and unknown lens measured by Darrodi et al. at NPL (solid curves).

Figure 6 would of course apply as-is to spectrally flat radiance arriving at the lens, for instance what is known as the equi-energy illuminant E after reflection by a gray card. The units are relative in that we don’t know the amount of radiance at the lens, just that it was spectrally flat.  We also don’t know the exposure time, the aperture of the lens or the area of the pixels.  It’s not difficult to find out all of the above and therefore to obtain an absolute scale.
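For example, the relative raw signal produced by a spectrally flat stimulus is just the discrete integral of each SSF over the range of interest; a minimal sketch with a hypothetical green-channel SSF:

```python
import numpy as np

wl = np.arange(400.0, 710.0, 10.0)         # nm
delta = 10.0                               # nm, sampling interval

ssf = np.exp(-((wl - 530.0) / 60.0) ** 2)  # hypothetical green-channel SSF, relative
spd = np.ones_like(wl)                     # spectrally flat radiance, relative units

# Relative raw signal: response times stimulus at each sample, summed over the range
signal = np.sum(ssf * spd * delta)
```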

And that’s the topic of the next article.

 

Appendix: Three Types of Spectral Plots

When studying this interesting subject one comes across three main types of spectral plots that all play a part in determining the Camera’s Spectral Response, with potentially incompatible units: spectral radiance/irradiance, spectral efficiency and spectral responsivity.

1) Spectral Radiance/Irradiance

The first type of plot is normally associated with Spectral irradiance or radiance on/through/from a surface, often under the moniker of a Spectral Power, Energy or Photon Distribution, as shown below for two daylight illuminants.

Figure 8.  Spectral Power Distribution of Standard Daylight illuminants D45 and D65

This type of plot can also show radiance, i.e. irradiance after reflection by an object.  Their units are proportional to the equivalent of watts per square meter per nm (for instance W/sr/m^2/nm).

The sum of the individual contributions of every small wavelength interval is proportional to the total power in watts per square meter that we started with in the first paragraph.  It is an approximation to the area under the curve, its integral.  In photography we deal with a finite exposure time and the small area of the camera’s pixels so spectral units can be shown proportional to energy instead (e.g. J/ \mu m^2/nm), since watts are joules per second.  Alternatively the plot can be shown in quanta, for instance per square micron per nanometer.
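Here is that discrete integral in code, together with the energy-to-photon conversion mentioned above; the SPD is toy data:

```python
import numpy as np

wl = np.arange(380.0, 785.0, 5.0)                  # nm, 5 nm samples
spd = np.exp(-((wl - 560.0) / 80.0) ** 2) * 1e-3   # toy spectral irradiance, W/m^2/nm

# Total irradiance: area under the curve, approximated by sample value times interval
total_w_per_m2 = np.sum(spd * 5.0)

# The same distribution in quanta: divide the energy in each interval by hc/lambda
h, c = 6.626e-34, 2.998e8                          # Planck's constant (J*s), speed of light (m/s)
photons = spd / (h * c / (wl * 1e-9))              # photons/s/m^2/nm
```

Note how, for equal energy, the photon count per interval grows with wavelength – the same effect exploited in the Responsivity conversion of Figure 5.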

2) Spectral Efficiency, Transmission and Reflection

These can be seen in various forms, such as filter Transmission or Object Reflectance, usually expressed as a percentage, with the same units in and out.  For instance the lens and filters in Figures 2-4, which show the spectral attenuation of energy they cause. Or the reflectance of green grass shown below, another example of a plot with power/energy-in, power/energy-out.

Figure 9. Reflectance of grass, courtesy of eumetrian.org. Keep in mind that the visible wavelength range is about 0.4-0.7 micrometers, a tiny portion of this plot.

But also the ability of a sensing material like silicon to absorb a photon and produce a photoelectron, wavelength by wavelength, usually referred to as Quantum Efficiency: this time quanta-in, quanta-out.  Here for instance  is a plot of the Absolute Quantum Efficiency of the Monochrome version of On-Semiconductor’s KAF-8300 sensor with microlenses (ex Kodak)[2]:

Figure 10.  Absolute Quantum Efficiency of the monochrome version of On-Semiconductor KAF-8300 CCD sensor, 5.4 micron pitch with microlenses but without cover glass, a function assumed to be performed by the UV-IR filter.  Courtesy of onsemi.com

This is what could be expected from a good Front Side Illuminated sensor designed over fifteen years ago.  Today’s Back Side Illuminated sensors with anti-reflective coatings and microlenses can achieve peak efficiency of over 90%.

3) Spectral Responsivity

The third type of spectral response plot seen in the literature is sometimes called a Responsivity plot and, although it may look somewhat similar to those in (2) above, it is quite different in that it involves a conversion of units, for instance W or J in and current or photoelectrons out: energy-in, quanta-out, as in Figure 5.

These are quite relevant to a camera’s Spectral Response because the job of a photographic imaging system is to convert Exposure (H_e),  i.e. energy per unit area, into quantal values per pixel to be stored in the raw file in the form of Data Numbers (DN).
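As a back-of-the-envelope sketch of that chain, with every number made up except the 5.4 micron pitch from Figure 10 – the exposure, QE and gain values are hypothetical:

```python
# Hypothetical end-to-end pixel signal chain, all values illustrative
exposure = 1.2e-15                 # Exposure H_e at one pixel, J/um^2 (toy value)
pixel_area = 5.4 ** 2              # um^2, 5.4 micron pitch as in Figure 10
energy = exposure * pixel_area     # energy collected by the pixel, J

# convert energy to photons at a representative 550 nm wavelength
h, c = 6.626e-34, 2.998e8          # Planck's constant (J*s), speed of light (m/s)
photons = energy / (h * c / 550e-9)

electrons = photons * 0.45         # times a hypothetical 45% QE
dn = electrons / 2.0               # hypothetical gain of 2 electrons/DN
```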

Absolute vs Relative Scales

Many of the three types of plot come in both absolute and relative form.  Absolute means that the input and output of the plot can be referred to absolute units of the quantity in question (for instance 3mW/m^2 in, 2mW/m^2 out), while relative means that we do not necessarily know or care what the starting absolute values are; we are just interested in what happens to them as they interact with the given component, as a function of a normalized input – for instance, what percentage of the energy/quanta makes it through at a given wavelength.

As it turns out, photographers think in terms of stops and the EV system, so photography is mainly built on relative relationships, and that’s where most of our interest lies.

Mix and Match

Sometimes the spectral effects of several components of the imaging system are merged, for example the combination of the effects of microlenses, anti-reflective coated clear glass, CFA dyes and silicon QE from the On-Semi KAC-12040 CMOS sensor spec sheet shown below [2]:

Figure 11. Absolute Quantum Efficiency of CMOS Bayer CFA KAC-12040 imager from On-Semi, courtesy of onsemi.com. Quanta-in, quanta-out.

One has to be careful when cascading such responses because the relative units may not be compatible as discussed in more detail in the article.

 

Notes and References


1. This article explains how to convert radiometric to photometric units.
2. Links to the specification sheets of the onsemi KAF-8300 and KAC-12040 imaging sensors are provided under the respective Figures.  They are available at onsemi.com.
3. The Nikon D5100 Spectral Sensitivity Functions shown in Figure 7 come from M. M. Darrodi, G. Finlayson, T. Goodman and M. Mackiewicz, “Reference data set for camera spectral sensitivity estimation”, J. Opt. Soc. Am. A, Vol. 32, No. 3, March 2015. The data, the paper and additional information are available from this page.

