Tag Archives: sampling
Bayer CFA Effect on Sharpness
In this article we shall find that the effect of a Bayer CFA on the spatial frequencies, and hence the ‘sharpness’ information, captured by a sensor compared to those from the corresponding monochrome version can range from (almost) nothing to halving the potentially unaliased range, depending on the chrominance content of the image and the direction in which the spatial frequencies are being stressed. Continue reading Bayer CFA Effect on Sharpness
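A rough geometric sketch (in numpy, not taken from the article) makes the directional part of this concrete: the Nyquist limits of the Bayer sub-lattices compared with the full monochrome grid, for an assumed pixel pitch p and ignoring demosaicking.

```python
import numpy as np

# Back-of-the-envelope Nyquist limits (cycles per pixel pitch) for the sub-lattices a
# Bayer CFA samples on, versus the full monochrome grid. Pure sampling geometry;
# pitch p = 1 is an arbitrary unit and demosaicking is ignored.
p = 1.0

nyq_mono_hv   = 1 / (2 * p)               # full grid, horizontal/vertical: 0.50
nyq_mono_diag = np.sqrt(2) / (2 * p)      # full grid, along the diagonal:  ~0.71

nyq_rb_hv     = 1 / (4 * p)               # red/blue sit on a 2p grid:       0.25 (halved)
nyq_grn_hv    = 1 / (2 * p)               # green quincunx, on the axes:     0.50 (unchanged)
nyq_grn_diag  = 1 / (2 * np.sqrt(2) * p)  # green quincunx, on the diagonal: ~0.35 (halved)

print(f"mono  H/V {nyq_mono_hv:.2f}  diag {nyq_mono_diag:.2f}")
print(f"R/B   H/V {nyq_rb_hv:.2f}")
print(f"green H/V {nyq_grn_hv:.2f}  diag {nyq_grn_diag:.2f}")
```

For a scene with little chrominance the sub-lattices see essentially the same signal, which is the ‘almost nothing’ end of the range quoted above.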
Wavefront to PSF to MTF: Physical Units
In the last article we saw that the intensity Point Spread Function and the Modulation Transfer Function of a lens could be easily approximated numerically by applying Discrete Fourier Transforms to its generalized exit pupil function twice in sequence.[1]
Obtaining the 2D DFTs is easy: simply feed MxN numbers representing the two dimensional complex image of the Exit Pupil function in its plane to a Fast Fourier Transform routine and, presto, it produces MxN numbers representing the amplitude of the PSF on the sensing plane. Figure 1a shows a simple case where the pupil function is a uniform disk representing the circular aperture of a perfect lens with MxN = 1024×1024. Figure 1b is the resulting intensity PSF.
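A minimal numpy sketch of this procedure, with an assumed pupil radius; the two DFTs applied in sequence take the pupil to the intensity PSF and the intensity PSF on to the MTF.

```python
import numpy as np

# Minimal sketch, assuming numpy: a 1024x1024 generalized pupil function that is a
# uniform disk (perfect lens, circular aperture, zero phase) is fed to a 2D FFT; the
# squared magnitude of the result is the intensity PSF, and a second DFT of that
# yields the MTF. The disk radius in samples is an illustrative assumption.
M = N = 1024
radius = 64                                        # pupil radius in samples (assumed)
y, x = np.indices((M, N)) - M // 2
pupil = ((x**2 + y**2) <= radius**2).astype(complex)

amplitude_psf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
intensity_psf = np.abs(amplitude_psf)**2
intensity_psf /= intensity_psf.max()               # peak normalized to 1

center_slice = intensity_psf[M // 2, :]            # row index 512, i.e. the 513th row

mtf = np.abs(np.fft.fft2(intensity_psf))           # second DFT: intensity PSF -> MTF
mtf /= mtf[0, 0]                                   # normalized to 1 at zero frequency
```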
Simple and fast. Wonderful. Below is a slice through the center, the 513th row, zoomed in. Hmm… what are the physical units on the axes of the displayed data produced by the DFT? Continue reading Wavefront to PSF to MTF: Physical Units
Aberrated Wave to Image Intensity to MTF
Goodman, in his excellent Introduction to Fourier Optics[1], describes how an image is formed on a camera sensing plane starting from first principles, that is, electromagnetic propagation according to Maxwell’s wave equation. If you want the play-by-play account, I highly recommend his math-intensive book. But for the budding photographer it is sufficient to know what happens at the Exit Pupil of the lens, because after that the transformations to Point Spread and Modulation Transfer Functions are straightforward, as we will show in this article.
The following diagram exemplifies the last few millimeters of the journey that light from the scene has to travel in order to be absorbed by a camera’s sensing medium. Light from the scene arrives at the front of the lens in the form of an electromagnetic field. It goes through the lens, being partly blocked and distorted by it, until it reaches the lens’s virtual back end, the Exit Pupil; we will call this blocking/distorting action the generalized pupil function. Other than in very simple cases, the Exit Pupil does not necessarily coincide with a specific physical element or Principal surface.[iv] It is a convenient mathematical construct which condenses all of the light-transforming properties of a lens into a single plane.
The complex light field at the Exit Pupil’s two dimensional plane is then the element-by-element product of the two arrays shown below (not to scale):
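As an illustrative numpy sketch of that element-by-element product (the figure itself is not reproduced here; the wavefront error W, the quarter wave of defocus and the wavelength are assumptions made for the example, not the article’s values):

```python
import numpy as np

# Illustrative sketch, assuming numpy. The generalized pupil function is modeled as an
# amplitude mask times a phase term built from an assumed wavefront error W(x, y); the
# field leaving the Exit Pupil is the element-by-element product of the field arriving
# there with that function.
M = 512
wavelength = 0.55e-6                                 # meters, green light (assumed)
y, x = (np.indices((M, M)) - M // 2) / (M // 2)      # normalized pupil coordinates
rho2 = x**2 + y**2

amplitude = (rho2 <= 1.0).astype(float)              # uniform circular aperture
W = 0.25 * wavelength * (2 * rho2 - 1)               # a quarter wave of defocus (assumed)
pupil = amplitude * np.exp(1j * 2 * np.pi / wavelength * W)   # generalized pupil function

field_in = np.ones((M, M), dtype=complex)            # idealized field arriving at the pupil
field_out = field_in * pupil                         # element-by-element product
```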
A Simple Model for Sharpness in Digital Cameras – Sampling & Aliasing
Having shown that our simple two dimensional MTF model is able to predict the performance of the combination of a perfect lens and a square monochrome pixel with 100% Fill Factor, we now turn to the effect of the sampling interval on spatial resolution, according to the guiding formula:
(1)   $\mathrm{MTF}_{Sys} \;=\; \left|\, \widehat{PSF}_{lens} \cdot \widehat{PIX}_{aperture} \;**\; \widehat{\delta\delta}_{grid} \,\right|$
The hats in this case mean the Fourier Transform of the respective component normalized to 1 at the origin ($f = 0$), that is the individual MTFs of the perfect lens PSF, the perfect square pixel and the delta grid; $**$ represents two dimensional convolution.
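A rough one dimensional numpy rendering of Equation (1), with illustrative numbers (a 4.5 micron pitch and a perfect f/5.6 lens at 0.55 micron, neither taken from the article): the lens and pixel MTFs are multiplied, and the product is then replicated at multiples of the sampling frequency, which is what convolution with the Fourier Transform of the delta grid does.

```python
import numpy as np

# 1D sketch of equation (1), assuming numpy; all numbers are illustrative.
pitch = 4.5e-3                             # pixel pitch, mm (assumed)
fs = 1.0 / pitch                           # sampling frequency, cycles/mm
f = np.linspace(0.0, 2 * fs, 2001)         # spatial frequency axis, cycles/mm

cutoff = 1.0 / (0.55e-3 * 5.6)             # diffraction cutoff of a perfect f/5.6 lens
nu = np.clip(f / cutoff, 0.0, 1.0)
mtf_lens = (2 / np.pi) * (np.arccos(nu) - nu * np.sqrt(1 - nu**2))

mtf_pixel = np.abs(np.sinc(f * pitch))     # square pixel aperture, 100% fill factor
mtf_presample = mtf_lens * mtf_pixel       # the product of the two 'hatted' terms

# convolution with the delta grid FT: copies of the pre-sampling MTF centered at k * fs
# (summing the magnitude copies follows the hats-as-MTFs convention used above)
mtf_sampled = sum(np.interp(np.abs(f - k * fs), f, mtf_presample, right=0.0)
                  for k in range(-2, 3))

print("Nyquist = %.0f cycles/mm, pre-sampling MTF there = %.2f"
      % (fs / 2, np.interp(fs / 2, f, mtf_presample)))
```

Any energy left in the pre-sampling MTF beyond Nyquist ends up in the overlapping replicas, i.e. aliasing.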
Sampling in the Spatial Domain
While exposed, a pixel sees the scene through its aperture and accumulates energy as photons arrive. Below left is the representation of, say, the intensity that a star projects on the sensing plane, in this case resulting in an Airy pattern since we said that the lens is perfect. During exposure each pixel integrates (counts) the arriving photons, an operation that mathematically can be expressed as the convolution of the shown Airy pattern with a square the size of the effective pixel aperture, here assumed to have a 100% Fill Factor. This is the convolution in the continuous spatial domain of the lens PSF with the pixel aperture PSF shown in Equation (2) of the first article in the series.
Sampling is then the multiplication of the result of that convolution by an infinitesimally narrow Dirac delta function at the center of each pixel (the red dots below left), producing the sampled image below right.
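A small numpy/scipy sketch of these two steps on an oversampled grid (the grid size, oversampling factor and Airy scale are illustrative assumptions):

```python
import numpy as np
from numpy.fft import fft2, ifft2, fftshift, ifftshift
from scipy.special import j1                    # Bessel J1, used to build the Airy pattern

# Sketch of the two steps above, assuming numpy/scipy and illustrative numbers:
# 1) convolve the Airy pattern projected by the star with the square pixel aperture
#    (100% fill factor), 2) sample the result at the pixel centers (the delta grid).
oversample = 8                                  # sub-samples per pixel pitch
n = 256                                         # simulation grid, n x n sub-samples
y, x = np.indices((n, n)) - n // 2
r = np.hypot(x, y) / oversample                 # radius in units of pixel pitch

# Airy intensity pattern with its first zero at ~1.22 pixel pitches (arbitrary choice)
v = 3.8317 * np.maximum(r, 1e-9) / 1.22
airy = (2 * j1(v) / v) ** 2

# 100% fill factor pixel aperture: an oversample x oversample box of unit volume
box = np.zeros((n, n))
h = oversample // 2
box[n//2 - h:n//2 + h, n//2 - h:n//2 + h] = 1.0 / oversample**2

# step 1: continuous-domain convolution (done circularly via FFTs for brevity)
blurred = np.real(fftshift(ifft2(fft2(ifftshift(airy)) * fft2(ifftshift(box)))))

# step 2: multiplication by the Dirac delta grid = keep only the pixel-center values
sampled = blurred[::oversample, ::oversample]   # n//2 is a multiple of oversample,
                                                # so the PSF peak lands on a sample
```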
Continue reading A Simple Model for Sharpness in Digital Cameras – Sampling & Aliasing
A Simple Model for Sharpness in Digital Cameras – I
The next few posts will describe a linear spatial resolution model that can help a photographer better understand the main variables involved in evaluating the ‘sharpness’ of photographic equipment and related captures. I will show numerically that the combined spectral frequency response (MTF) of a perfect AA-less monochrome digital camera and lens in two dimensions can be described as the magnitude of the normalized product of the Fourier Transform (FT) of the lens Point Spread Function by the FT of the pixel footprint (aperture), convolved with the FT of a rectangular grid of Dirac delta functions centered at each pixel:

$\mathrm{MTF}_{Sys} \;=\; \left|\, \widehat{PSF}_{lens} \cdot \widehat{PIX}_{aperture} \;**\; \widehat{\delta\delta}_{grid} \,\right|$
With a few simplifying assumptions we will see that the effect of the lens and sensor on the spatial resolution of the continuous image on the sensing plane can be broken down into these simple components. The overall ‘sharpness’ of the captured digital image can then be estimated by combining the ‘sharpness’ of each of them.
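The claim that the components can be characterized separately and then combined can be checked numerically; below is a one dimensional numpy sketch in which a Gaussian stands in for the lens PSF (an assumption made purely for brevity, not the article’s lens model).

```python
import numpy as np

# 1D numerical check, assuming numpy; the Gaussian stand-in for the lens PSF and all
# numbers are illustrative. The MTF of the combined PSF (lens PSF convolved with the
# pixel aperture) equals the product of the individual MTFs, which is why each
# component's 'sharpness' can be estimated separately and then combined.
n, dx = 4096, 0.01                                   # samples and spacing in pixel pitches
x = (np.arange(n) - n // 2) * dx

lens_psf = np.exp(-x**2 / (2 * 0.35**2))             # stand-in lens PSF (Gaussian blur)
pixel_ap = (np.abs(x) <= 0.5).astype(float)          # 100% fill factor pixel, 1 pitch wide
combined = np.convolve(lens_psf, pixel_ap, mode='same')

def mtf(psf):
    m = np.abs(np.fft.rfft(psf))                     # magnitude spectrum (shift-invariant)
    return m / m[0]                                  # normalized to 1 at zero frequency

f = np.fft.rfftfreq(n, dx)                           # cycles per pixel pitch
system = mtf(lens_psf) * mtf(pixel_ap)
assert np.allclose(mtf(combined), system, atol=1e-6)
print("pre-sampling system MTF50 ~ %.2f cycles/pixel" % f[np.argmax(system < 0.5)])
```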
The stage will be set in this first installment with a little background and perfect components. Later, additional detail will be provided to take into account pixel aperture and Anti-Aliasing filters. Then we will look at simple aberrations. Next we will learn how to measure MTF curves for our equipment, and look at numerical methods to model PSFs and MTFs from the wavefront at the aperture. Continue reading A Simple Model for Sharpness in Digital Cameras – I
Sub Bit Signal
My camera has a 14-bit ADC. Can it accurately record information more than 14 stops below full scale? Can it store sub-LSB signals in the raw data?
With a well designed sensor the answer, unsurprisingly if you’ve followed the last few posts, is yes, it can. The key to capturing such tiny visual information in the raw data is a well behaved imaging system with a properly dithered ADC. Continue reading Sub Bit Signal
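A toy numpy simulation of the idea (the black level, read noise and sample count are made-up figures, not measurements): a 0.3 LSB mean signal, dithered by roughly 1 LSB of read noise and quantized by an ideal 14-bit ADC, survives in the raw data and can be recovered by averaging.

```python
import numpy as np

# Toy simulation of the claim, assuming numpy; all figures are illustrative.
rng = np.random.default_rng(0)
signal = 0.3                 # mean signal, in LSB: well below one 14-bit step
read_noise = 1.0             # r.m.s. read noise in LSB, acting as the dither
black_level = 512            # assumed bias so negative noise excursions aren't clipped

samples = rng.normal(black_level + signal, read_noise, size=1_000_000)
raw = np.clip(np.round(samples), 0, 2**14 - 1).astype(int)   # ideal 14-bit quantizer

print("mean(raw) - black level = %.3f LSB (true signal %.1f LSB)"
      % (raw.mean() - black_level, signal))
# Without the dither every sample would quantize to the same code and the
# 0.3 LSB signal would be lost.
```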