It is always interesting when innovative companies push the state of the art of a single component in their systems, because a lot can be learned from before and after comparisons. I was therefore excited when Phase One introduced a Trichromatic version of their Medium Format IQ3 100MP Digital Back last September, because it allows us to isolate the effects of tweaks to their Bayer Color Filter Array, assuming all else stays the same.
Thanks to two virtually identical captures by David Chew at getDPI, and Erik Kaffehr’s intelligent questions at DPR, in the following articles I will explore the effect on linear color of the new Trichromatic CFA (TC) vs the old one on the Standard Back (SB). In the process we will discover that – within the limits of my tests, procedures and understanding[1] – the Standard Back produces apparently more ‘accurate’ color while the Trichromatic produces better looking matrices, potentially resulting in ‘purer’ signals.
What CFA Tweaks and Why?
It appears from an online interview with Phase One engineer Lau Nørdgaard[2] that Phase One leveraged their relationship with sensor manufacturer Sony to come up with at least two substantial changes to the Trichromatic CFA:
- They boosted the peak sensitivity of the red (and possibly blue) color filter so that it would be relatively closer to green’s (Figure 1 left); and
- They tried to keep the shapes of the three color filters more symmetrical by taming the vestigial ripples away from the peak wavelengths, emphasized in the right image of Figure 1.
The claim from their marketing material is that this will result in “astonishing color definition” … “more accurate[ly] than ever before”, less noise, less processing and less purple fringing. Of course without actual Spectral Sensitivity measurements of the CFA we can only guess at whether this will be true or not. But until someone does just that there is a lot that can be inferred by taking a closer look at what comes out of the black box in a given setting.
Does Boosting Red (and/or Blue) Make Sense?
As always in applied science, everything is a compromise. For instance, boosting relative red sensitivity should indeed result in cleaner reds, but there is a reason why typical CFAs look as follows instead (under equi-energy illuminant Se and a UV/IR filter, with thanks to Christian Mauer[3]):
I’ll let Mark Scott Abeln at DPR’s Photographic Science and Technology Forum explain[4]:
“Imagine the situation where the R, G, and B raw channels on a neutral target are all equal under broad daylight, say, specifically the situation where the channels don’t need any white balance at midday during midsummer at mid-latitudes with direct sunlight. In a certain sense, this seems reasonable, and I’ve toyed with the idea myself in the past, but it is less than optimal if you consider the whole range of camera use. The reason is that white balance under low and high color temperatures would require *reducing* exposure to avoid blowing out the R and/or B channels, respectively. This system would maximize exposure where it needed the least, in bright sunlight, and reduce exposure where it is needed the most, under dim incandescent lights and at dusk.” Also: “This new system would reduce the effectiveness of light meters, which are otherwise neutral with respect to color temperature, and this would be too great of a loss.”
Recall that raw data in each color channel is related to the area under the curve resulting from multiplying irradiance from the scene (say Figure 3 below) by the respective CFA sensitivity (Figure 2 above), wavelength by wavelength. Red is typically lower than blue because a generic camera needs to be shot indoors under low temperature light (say halogen, illuminant A) as well as outdoors under high temperature light (say in the shade of a sunny Nordic sky, illuminant D75).
Note that in energy units per small wavelength interval, Illuminant A gives a much stronger boost to ‘red’ wavelengths (and a much stronger attenuation to ‘blue’ ones) compared to a relatively flat daylight illuminant like D50. D75 does the opposite, although more weakly, boosting the energy of lower wavelengths while attenuating higher ones. The ‘green’ wavelengths sitting roughly in the middle of the range are the least changed in either case: they metaphorically act as the fulcrum for the other two.
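To make the ‘area under the curve’ idea concrete, here is a minimal sketch in Python/numpy. The Gaussian CFA curves and the black-body stand-ins for illuminant A and a ‘D75-like’ daylight below are illustrative assumptions, not measured data; the only point is to show how the R:G:B ratios – and hence the white balance multipliers – swing as the illuminant changes.

```python
import numpy as np

wl = np.arange(400, 701, 5)  # visible wavelengths, nm

def gaussian(wl, peak_nm, sigma_nm, height=1.0):
    """Illustrative CFA channel sensitivity (assumed shape, not measured data)."""
    return height * np.exp(-0.5 * ((wl - peak_nm) / sigma_nm) ** 2)

# Hypothetical Bayer CFA: green strongest, red and blue lower, as in Figure 2
cfa = {
    'R': gaussian(wl, 600, 30, 0.7),
    'G': gaussian(wl, 535, 35, 1.0),
    'B': gaussian(wl, 460, 30, 0.8),
}

def blackbody(wl_nm, T):
    """Relative spectral power of a black body at temperature T (Planck's law)."""
    wl_m = wl_nm * 1e-9
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    spd = 1.0 / (wl_m ** 5 * (np.exp(h * c / (wl_m * k * T)) - 1.0))
    return spd / spd.max()

# Illuminant A is close to a 2856 K black body; a ~7500 K black body
# stands in here for a cool 'D75-like' daylight.
for name, T in [('A (2856K)', 2856), ('D75-ish (7500K)', 7500)]:
    spd = blackbody(wl, T)
    # raw channel value ~ area under irradiance x sensitivity, wavelength by wavelength
    raw = {ch: np.trapz(spd * s, wl) for ch, s in cfa.items()}
    g = raw['G']
    print(f"{name}: R/G = {raw['R'] / g:.2f}, B/G = {raw['B'] / g:.2f}")
```

With these made-up curves the R/G ratio climbs sharply and B/G collapses when moving from the cool illuminant to the warm one – exactly the swing that the white balance multipliers, and the exposure headroom mentioned in Abeln’s quote above, have to accommodate.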
On the other hand the choice to bring red’s sensitivity up would be more than warranted if a camera were to be used mostly in D50+ ‘daylight’ conditions where red is naturally attenuated – and I suspect that this is indeed the Trichromatic’s target audience. I don’t think the following Hassy, and I assume the TC, would be welcome on the set of Barry Lyndon’s candlelight scenes (1800K correlated color temperature) but I am sure it would do well in the shade of a mountain glacier in the Nordic sun (7000K+)[*].
Does Getting Rid of the Red Bump Make Sense?
Cameras from volume manufacturers like Nikon, Canon etc. typically do not show the wild CFA sensitivity bumps attributed to them at the extremes of Figure 1b. The main reason is that their sensors usually sit under ultraviolet/infrared filters (sometimes one, sometimes two) that cut off energy below around 400nm and above around 700nm in order to restrict photon collection to the visible range. Such a filter set is sometimes referred to as a ‘hot mirror’ and its spectral transmittance looks something like this:
Typical lenses help with a relatively quick transmission roll-off below 400nm or so but their roll-off above 700nm is much more gradual, hence the need for a hot mirror:
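The effect of such a filter on stray response at the spectral extremes is easy to model. The sketch below approximates a hot mirror with two logistic roll-offs (the cut-off wavelengths and slopes are assumptions, not datasheet values) and applies it to a hypothetical red channel with a vestigial near-IR ripple like the one in Figure 1 right:

```python
import numpy as np

wl = np.arange(350, 801, 5)  # nm, extending into the UV and near-IR

def logistic(x, x0, k):
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Illustrative 'hot mirror': passes ~400-700 nm, with a soft UV edge
# and a sharper IR edge (assumed numbers, not a real filter curve)
hot_mirror = logistic(wl, 400, 0.15) * (1.0 - logistic(wl, 700, 0.25))

# Hypothetical red channel with a vestigial near-IR ripple around 780 nm
red_raw = (np.exp(-0.5 * ((wl - 600) / 30) ** 2)
           + 0.25 * np.exp(-0.5 * ((wl - 780) / 25) ** 2))
red_filtered = red_raw * hot_mirror

print(f"Red response at 780 nm before the filter: {red_raw[wl == 780][0]:.2f}, "
      f"after: {red_filtered[wl == 780][0]:.3f}")
```

The near-IR ripple is effectively wiped out once the hot mirror is in the optical path, which is why production cameras rarely show it.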
Sometimes photographers remove the filter(s) in order to obtain unique effects in their captures. Here for instance is the relative response of one such IR ‘converted’ camera, with the infrared filter removed:
That may be fine for fun artistic effects, but cameras without a hot mirror are usually converted that way precisely because of the unreal colors they produce. Unreal = inaccurate.
I don’t know whether the IQ3 100MP Standard Back has a UV/IR filter or whether its CFA looks like that in Figure 1 instead. What I do know is that any sensor with a CFA like that would have difficulty producing accurate linear color. The fact is that virtually all commercial cameras intended for photography do not show the ripples at the UV and IR ends of the spectrum displayed there, because they do sport a UV/IR filter. On the other hand they sometimes have a red bump or step more or less centered on the blue channel, as shown below between 425 and 550nm (this time CFA peaks are normalized to one):
The bumpy red filter extension to the lower wavelengths could be construed as undesired leakage, but I have heard it said that such a bump may actually be desirable because, depending on the color system used, it makes it easier to reproduce certain colors, say violet, accurately. For instance, when using the ubiquitous CIE XYZ color space conventions, the CIE 1931 Standard Observer’s Color Matching Functions look like this:
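Note in particular that the x̄(λ) matching function itself has exactly such a secondary lobe in the blue region, which is what a red filter with a ‘bump’ around 425-475nm helps to mimic. The snippet below evaluates the analytic piecewise-Gaussian fit to the 1931 CMFs published by Wyman, Sloan and Shirley (2013); the coefficients are quoted from that paper and should be treated as approximate:

```python
import numpy as np

def _lobe(wl, mu, sigma_left, sigma_right):
    """Piecewise Gaussian lobe with different widths left/right of the peak."""
    s = np.where(wl < mu, sigma_left, sigma_right)
    return np.exp(-0.5 * ((wl - mu) / s) ** 2)

def xbar_1931(wl):
    """Analytic fit of the CIE 1931 x-bar CMF (Wyman, Sloan & Shirley, 2013)."""
    return (1.056 * _lobe(wl, 599.8, 37.9, 31.0)
            + 0.362 * _lobe(wl, 442.0, 16.0, 26.7)
            - 0.065 * _lobe(wl, 501.1, 20.4, 26.2))

wl = np.arange(380, 701, 5)
x = xbar_1931(wl)
blue_peak = wl[np.argmax(x[wl < 500])]
print(f"x-bar has a secondary lobe peaking near {blue_peak} nm, "
      f"height ~{x[wl == blue_peak][0]:.2f} vs ~1.06 at 600 nm")
```

A CFA whose red channel shows a similar small response in the 425-475nm region is therefore closer to being a linear combination of the CMFs than one without it, all else being equal.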
What I do know is that Fujifilm’s new generation Color Mosaic Dyes – used in Color Filter Arrays by many manufacturers – have tried to do away with red leakage in the lower wavelengths starting around 2015, so the Trichromatic might be using those.[5] (The quest for the perfect CFA and bespoke variations, not to mention the presence or absence of the bump, are better explored in this article.)
The Proof of the Pudding
As always the proof of the pudding is in the eating. In the next article we will use David Chew’s almost identical captures from an IQ3 100MP Trichromatic and Standard Back to compute respective linear compromise color matrices and estimate relative color ‘accuracy’ – according to the procedure outlined earlier.
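The core of that comparison boils down to a few lines of linear algebra: given white-balanced raw RGB triplets captured from a test target and the corresponding reference values in the destination color space, a 3x3 compromise matrix can be fit by least squares and the residual color error compared between the two backs. The sketch below is only a bare-bones illustration of that idea, not the exact procedure used in the next article, and the patch values are placeholders:

```python
import numpy as np

# Each row: white-balanced raw RGB of one test-target patch (placeholder data)
raw_rgb = np.array([
    [0.18, 0.12, 0.07],
    [0.09, 0.14, 0.20],
    [0.25, 0.30, 0.22],
    [0.40, 0.41, 0.39],
    # ... in practice, all 24 ColorChecker patches
])

# Corresponding reference values in the destination space (placeholder data)
ref_xyz = np.array([
    [0.21, 0.17, 0.09],
    [0.12, 0.13, 0.25],
    [0.28, 0.30, 0.26],
    [0.43, 0.45, 0.44],
])

# Least-squares fit of the 3x3 matrix M such that raw_rgb @ M.T ~ ref_xyz
M, *_ = np.linalg.lstsq(raw_rgb, ref_xyz, rcond=None)
M = M.T

predicted = raw_rgb @ M.T
residual = np.sqrt(((predicted - ref_xyz) ** 2).sum(axis=1))
print("Compromise matrix:\n", np.round(M, 3))
print("Per-patch residual error:", np.round(residual, 4))
```

In practice the fit is usually weighted and the error is evaluated in a perceptual space rather than as a plain XYZ residual, but the principle is the same: the ‘better’ CFA is the one whose best compromise matrix leaves the smaller errors.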
Notes and References
1. Lots of provisos and simplifications for clarity as always. I am not a color scientist, so if you spot any mistakes please let me know.
2. The Luminous Landscape: Phase One TriChromatic Sensor Explained, January 2, 2018.
3. Many of the spectral CFAs in this article come from Christian Mauer’s Thesis at the Department of Media and Phototechnology, University of Applied Sciences Cologne: “Measurement of the spectral response of digital cameras with a set of interference filters”, 2009. You can find it here.
4. Mark Scott Abeln’s quote comes from here.
5. The Spectral Responses shown in Figure 8 are about 10 years old and reflect the CFA dyes used at the time. More recent dyes, like Fuji’s New Generation Color Mosaic, produce spectra that attempt to eliminate the red channel’s leakage in lower wavelengths, see for instance “Technology of color filter materials for image sensor“, Hiroshi Taguchi, Masashi Enokido, FUJIFILM Electronic Materials Co.,Ltd (2011).
6. A camera or colorimeter is said to be colorimetric if it satisfies the Luther condition (also called the “Maxwell-Ives criterion”): the product of the spectral responsivity of the photoreceptor and the spectral transmittance of the filters must be a linear combination of the Color Matching Functions. See here (in German, right click to translate it).
> but there is a reason why typical CFAs look as follows instead
That reason is really a compromise. Sensors nowadays are not daylight balanced, in order to achieve consistent performance across various lighting conditions. Daylight balanced sensors were used in Kodak ProBacks, and those had a more or less equal response in R and G in daylight with the B channel roughly half a stop under. From my experience with that digital back, that approach needed quite careful IR filtering (Kodak used reflective filters with sharp cut-offs), but it did result in superb skin tone reproduction. Performance under artificial lighting was of course variable.
Interesting, thank you Alexey