Tag Archives: raw data

Off Balance

In this article we confirm quantitatively that getting the White Point, hence the White Balance, right is essential to obtaining natural tones out of our captures.  How quickly do colors degrade if the estimated Correlated Color Temperature is off?
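To get a feel for the question, here is a minimal sketch (mine, not the article's) that places the White Point on the CIE daylight locus via the standard D-series chromaticity polynomials and tabulates how far the chromaticity drifts as the CCT estimate misses the mark. The xy distance printed is only a crude proxy for the perceptual error that the article quantifies properly.

```python
import numpy as np

def daylight_xy(cct):
    """xy chromaticity of the CIE D-series illuminant at the given CCT."""
    t = float(cct)
    if 4000 <= t <= 7000:
        x = 0.244063 + 0.09911e3 / t + 2.9678e6 / t**2 - 4.6070e9 / t**3
    elif 7000 < t <= 25000:
        x = 0.237040 + 0.24748e3 / t + 1.9018e6 / t**2 - 2.0064e9 / t**3
    else:
        raise ValueError("CCT outside the D-illuminant range")
    y = -3.000 * x**2 + 2.870 * x - 0.275
    return x, y

# How far does the White Point drift if a 5300 K scene is misjudged?
x0, y0 = daylight_xy(5300)
for err in (-500, -250, 250, 500):
    x, y = daylight_xy(5300 + err)
    print(f"CCT error {err:+5d} K: delta_xy = {np.hypot(x - x0, y - y0):.4f}")
```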

Continue reading Off Balance

A Question of Balance

In this article I bring together, qualitatively, the main concepts discussed in the series and argue that in many (most) cases a photographer’s job during raw conversion, in order to obtain natural looking tones, is to get the illuminant and the white balance relative to it right – and to step away from any slider found in menus with the word ‘color’ in them.

Figure 1. DON’T touch them color dials (including Tint)! courtesy of Capture One

If you are an outdoor photographer trying to get balanced greens under an overcast sky – or a portrait photographer after good skin tones – dialing in the appropriate scene, illuminant and white balance puts the camera/converter manufacturer’s color science to work and gets you most of the way there safely.  Of course the judicious photographer always knew to do that – hopefully now with a better appreciation as to why.

Continue reading A Question of Balance

White Point, CCT and Tint

As we have seen in the previous post, knowing the characteristics of the light at the scene is critical to determining the color transform that allows captured raw data to be displayed naturally in an output color space like the ubiquitous sRGB.

White Point

A light source’s Spectral Power Distribution (SPD) corresponds to a unique White Point, namely a set of coordinates in the XYZ color space, obtained by multiplying the SPD (the blue curve below) wavelength by wavelength by the Color Matching Functions of a Standard Observer (\bar{x}, \bar{y}, \bar{z}).

Figure 1.  Spectral Power Distribution of Standard Daylight Illuminant D5300 with a Correlated Color Temperature of 5300 K; and CIE (2012) 2-deg XYZ “physiologically relevant” Color Matching Functions from cvrl.org.

Adding up (integrating) the three resulting curves we get three values that represent the illuminant’s coordinates in the XYZ color space.  The White Point is then obtained by dividing these coordinates by the Y value, normalizing it to 1.
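In code the whole procedure is just a weighted sum. A minimal sketch, assuming the SPD and the CMFs have already been loaded as arrays sampled on a common wavelength grid (the variable names are mine):

```python
import numpy as np

def white_point(spd, cmf):
    """White Point from an illuminant SPD.

    spd: (N,)  Spectral Power Distribution on a wavelength grid
    cmf: (N,3) Color Matching Functions x_bar, y_bar, z_bar on the same grid
    """
    xyz = (spd[:, None] * cmf).sum(axis=0)  # multiply wavelength by wavelength, add up
    return xyz / xyz[1]                     # divide by Y so that Y = 1

# for a D5300-like SPD this should land near [0.9593, 1.0000, 0.8833]
```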

The White Point is then seen to be independent of the intensity of the arriving light, since Y represents Luminance from the scene.   For instance a Standard Daylight Illuminant with a Correlated Color Temperature of 5300 K has a White Point of[1]

XYZn = [0.9593 1.0000 0.8833]

Continue reading White Point, CCT and Tint

Linear Color Transforms

Building on a preceding article of this series: once demosaiced, raw data from a Bayer Color Filter Array sensor represents the captured image as a set of triplets, corresponding to the estimated light intensity at a given pixel under each of the three spectral filters that make up the CFA.   The filters are band-pass and named for the representative peak wavelength that they let through: typically red, green and blue, or r, g, b for short.

Since the resulting intensities are linearly independent they can form the basis of a 3D coordinate system, with each rgb triplet representing a point within it.  In the raw data the system is bounded by the range of the Analog to Digital Converter, with all three channels spanning the same interval: from the Black Level with no light to clipping at the maximum recordable light.  It can therefore be thought of as a space in the form of a cube – or better, a parallelepiped – with the origin at [0,0,0] and the opposite vertex at the clipping value in Data Numbers, expressed as [1,1,1] if we normalize all data by it.

Figure 1. The linear sRGB Cube, courtesy of Matlab toolbox Optprop.
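The sketch below (my illustration, not the article's code) ties the raw and output cubes together: it normalizes demosaiced raw triplets into the unit parallelepiped just described, then applies the kind of linear 3x3 matrix transform discussed next. The matrix is a made-up placeholder whose rows sum to one, not a real camera's.

```python
import numpy as np

def raw_to_output_rgb(raw_rgb, black_level, clip_level, M):
    """Normalize demosaiced raw rgb to [0,1], then apply a linear 3x3 color transform."""
    rgb = (raw_rgb.astype(np.float64) - black_level) / (clip_level - black_level)
    rgb = np.clip(rgb, 0.0, 1.0)      # inside the raw parallelepiped
    out = rgb @ M.T                   # one matrix multiply per pixel
    return np.clip(out, 0.0, 1.0)     # inside the output cube, still linear (no gamma yet)

# placeholder matrix for illustration only; each row sums to 1 so white maps to white
M = np.array([[ 1.6, -0.4, -0.2],
              [-0.3,  1.5, -0.2],
              [ 0.0, -0.5,  1.5]])
```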

The job of the color transform is to project demosaiced raw rgb data into a standard output RGB color space designed for viewing.   Such spaces have names like sRGB, Adobe RGB or Rec. 2020.  The output space can also be shown in 3D as a parallelepiped with the origin at [0,0,0] with no light and the opposite vertex at [1,1,1] with maximum displayable light.

Continue reading Linear Color Transforms

Pi HQ Cam Sensor Performance

Now that we know how to open 12-bit raw files captured with the new Raspberry Pi High Quality Camera, we can learn a bit more about the capabilities of its 1/2.3″ Sony IMX477 sensor from a keen photographer’s perspective.  The subject is a bit dry, so I will give you the summary upfront.  These figures were obtained with my HQ module at room temperature and the raspistill --raw (-r) command:

Raspberry Pi HQ Camera    raspistill --raw -ag 1               Comments
Black Level               256.3 DN                             256.0 - 257.3 based on gain
White Level               4095                                 Constant throughout
Analog Gain               1                                    Gain range 1 - 16
Read Noise                3 e- (gain 1), 1.5 e- (gain 16)      1.53 DN and 11.50 DN from black frames
Clipping (FWC)            8180 e-                              At base gain; 3400 e-/um^2
Dynamic Range             11.15 stops / 11.3 stops             SNR = 1 to clipping / read noise to clipping
System Gain               0.47 DN/e-                           At base analog gain
Star Eater Algorithm      Partly defeatable                    All channels, from base gain and from min shutter speed
Low Pass Filter           Yes                                  All channels, from base gain and from min shutter speed
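As an aside, the Black Level and Read Noise rows above come from black frame statistics. A minimal sketch of that kind of measurement, assuming the 12-bit Bayer data has already been unpacked into numpy arrays as shown in the earlier post (the helper name and arguments are mine):

```python
import numpy as np

def black_frame_stats(frame_a, frame_b, gain_dn_per_e=0.47):
    """Black Level and Read Noise from a pair of black (lens cap, min exposure) frames."""
    a = frame_a.astype(np.float64)
    b = frame_b.astype(np.float64)
    black_level = (a.mean() + b.mean()) / 2        # DN
    # differencing two frames cancels fixed pattern noise; the std shrinks by sqrt(2)
    read_noise_dn = (a - b).std() / np.sqrt(2)     # DN
    read_noise_e = read_noise_dn / gain_dn_per_e   # e-, using the measured system gain
    return black_level, read_noise_dn, read_noise_e
```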

Continue reading Pi HQ Cam Sensor Performance

The Difference Between Data and Information

In photography, digital cameras capture information about the scene, carried by photons reflected off it, and store it pretty well linearly as data in a raw file.  Data is the container; scene information is the substance.  There may or may not be information in the data, no matter what its form.  With a few limitations, what counts is the substance, information, not the form, data.

A Simple Example

Imagine for instance that you are taking stock of the number of remaining pieces in your dinner place settings.  You originally had a full set of 6 of everything but today, after many years of losses and breakage, this is the situation in each category:

Continue reading The Difference Between Data and Information

Determining Sensor IQ Metrics: RN, FWC, PRNU, DR, gain – 2

There are several ways to extract sensor IQ metrics like read noise, Full Well Count, PRNU and Dynamic Range from the mean and standard deviation statistics of a uniform patch in a camera’s raw file.  In the last post we saw how to do it by using such parameters to make the modeled curve match the measured SNR curve.  In this one we will achieve the same objective by fitting the mean and standard deviation data directly.  Since the measured data is identical, if the fit is good the results should be too.

Sensor Metrics from Measured Mean and Standard Deviation in DN
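In DN the variance model behind the fit is sigma^2 = rn^2 + g*S + (prnu*S)^2, with S the mean signal and g the system gain in DN/e-, so a plain quadratic fit of variance against mean recovers all three terms. A minimal sketch of the idea (not the actual code used in the article), assuming the means have had the Black Level subtracted:

```python
import numpy as np

def sensor_metrics(mean_dn, std_dn, clip_dn):
    """Fit sigma^2 = rn^2 + g*S + (prnu*S)^2 to uniform-patch statistics in DN."""
    c2, c1, c0 = np.polyfit(mean_dn, np.asarray(std_dn) ** 2, 2)
    rn_dn = np.sqrt(max(c0, 0.0))      # read noise in DN
    gain = c1                          # system gain, DN/e-
    prnu = np.sqrt(max(c2, 0.0))       # PRNU as a fraction of signal
    fwc_e = clip_dn / gain             # clipping expressed in e-
    rn_e = rn_dn / gain                # read noise in e-
    dr_stops = np.log2(fwc_e / rn_e)   # Dynamic Range, read noise to clipping
    return rn_e, gain, prnu, fwc_e, dr_stops
```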

Continue reading Determining Sensor IQ Metrics: RN, FWC, PRNU, DR, gain – 2

How Sharp are my Camera and Lens?

You want to measure how sharp your camera/lens combination is to make sure it lives up to its specs.  Or perhaps you’d like to know how well one lens captures spatial resolution compared to another you own.  Or perhaps again you are in the market for new equipment and would like to know what could be expected from the shortlist.  Or an old faithful is not looking right and you’d like to check it out.   So you decide to do some testing.  Where to start?

In the next four articles I will walk you through my methodology based on captures of slanted edge targets:

  1. The setup (this one)
  2. Why you need to take raw captures
  3. The Slanted Edge method explained
  4. The software to obtain MTF curves
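As a preview of where parts 3 and 4 end up, here is a deliberately bare-bones sketch of the slanted edge idea: locate the edge in each row, bin every pixel by its sub-pixel distance from the fitted edge to build an oversampled Edge Spread Function, differentiate it into a Line Spread Function, then take the FFT magnitude to get the MTF. The real method adds windowing choices, noise handling and corrections that the articles cover.

```python
import numpy as np

def slanted_edge_mtf(roi, oversample=4):
    """Toy slanted-edge MTF from a crop containing a near-vertical edge."""
    roi = roi.astype(np.float64)
    rows, cols = roi.shape
    # 1. edge position in each row: centroid of the horizontal gradient
    grad = np.abs(np.diff(roi, axis=1))
    xs = np.arange(grad.shape[1])
    centers = (grad * xs).sum(axis=1) / grad.sum(axis=1)
    # 2. fit a straight line through the per-row edge positions
    slope, intercept = np.polyfit(np.arange(rows), centers, 1)
    # 3. signed distance of every pixel from the edge, binned at 1/oversample px
    dist = np.arange(cols)[None, :] - (slope * np.arange(rows)[:, None] + intercept)
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins.ravel())
    esf = np.bincount(bins.ravel(), weights=roi.ravel()) / np.maximum(counts, 1)
    # 4. ESF -> LSF -> |FFT|, normalized to 1 at DC
    lsf = np.gradient(esf) * np.hanning(esf.size)
    mtf = np.abs(np.fft.rfft(lsf))
    freq = np.fft.rfftfreq(esf.size, d=1.0 / oversample)  # cycles/pixel
    return freq, mtf / mtf[0]
```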

Continue reading How Sharp are my Camera and Lens?