In the previous article we learned that the Spectral Sensitivity Functions of a given digital camera and lens are the result of the interaction of light from the scene with all of the spectrally varied components that make up the imaging system: mainly the lens, the ultraviolet/infrared hot mirror, the Color Filter Array and other filters, and finally the photoelectric layer of the sensor, which in consumer kit is normally silicon.
In this one we will put the process on a more formal theoretical footing, setting the stage for the next few articles on the role of white balance.
Photography works because visible light from one or more sources reaches the scene and is reflected in the direction of the camera, which then captures a signal proportional to it. The journey of light can be described in integrated units of power all the way to the sensor, for instance in watts per square meter. However, ever since Newton we have known that such total power is in fact the weighted sum of contributions by every frequency that makes up the light, which he called its spectrum.
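As a minimal sketch of that idea in generic notation (not the article's own symbols, and using wavelength rather than Newton's frequency, as is customary in photography), the total irradiance arriving at a surface is the integral of its spectral irradiance over the visible range, roughly 400 to 700 nm:

$$E = \int_{400}^{700} E_e(\lambda)\, d\lambda$$

where $E_e(\lambda)$ is spectral irradiance in $\mathrm{W/(m^2 \cdot nm)}$ and $E$ is the resulting total power per unit area in $\mathrm{W/m^2}$.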
Our ability to see and record color depends on knowing the distribution of the power contained within a subset of these frequencies and how it interacts with the various objects in its path. This article is about how a typical photographic digital camera interacts with the spectrum arriving from the scene: we will dissect what is sometimes referred to as the system's Spectral Response or Sensitivity.
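To preview where this is going, here is a hedged numerical sketch of the interaction just described: the relative raw value recorded in each channel is proportional to the spectral product of illuminant, surface reflectance and that channel's sensitivity, integrated over wavelength. All spectra below are illustrative placeholders (flat light, a ramp reflectance, Gaussian stand-ins for the camera's SSFs), not measured data.

```python
import numpy as np

# Wavelength grid over the visible range, 5 nm steps (hypothetical sampling).
wl = np.arange(400, 701, 5, dtype=float)  # nm
dwl = 5.0                                 # step width, nm

def raw_response(illuminant, reflectance, ssf):
    """Relative raw value of one channel: illuminant * reflectance * sensitivity
    integrated over wavelength (simple Riemann sum)."""
    return np.sum(illuminant * reflectance * ssf) * dwl

# Placeholder spectra; a real workflow would load measured data instead.
illuminant = np.ones_like(wl)                 # flat, equal-energy light
reflectance = np.linspace(0.2, 0.8, wl.size)  # surface reflecting more toward the red
ssf = {                                       # crude Gaussian stand-ins for camera SSFs
    'r': np.exp(-0.5 * ((wl - 600.0) / 30.0) ** 2),
    'g': np.exp(-0.5 * ((wl - 530.0) / 30.0) ** 2),
    'b': np.exp(-0.5 * ((wl - 460.0) / 30.0) ** 2),
}

raw = {c: raw_response(illuminant, reflectance, s) for c, s in ssf.items()}
print(raw)  # relative r, g, b raw values for this patch under this light
```

Swapping in measured spectra for the placeholders is all it takes to turn this toy into a useful calculator, which is exactly the kind of bookkeeping the formal treatment below makes precise.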