Tag Archives: numerical aperture

What is Resolution?

In photography, Resolution refers to the ability of an imaging system to capture fine detail from the scene, making it a key determinant of Image Quality.  For instance, with high-resolution equipment we might be able to count the tiny leaves on a distant tree, while with lower-resolution equipment we might not.  Or the leaves might look sharp with the former and unacceptably mushy with the latter.

We quantify resolution by measuring detail contrast after it has inevitably been smeared by the imaging process.  As detail becomes smaller and more closely spaced in the image, the blurred darker and lighter parts start to mix until their relative contrast drops to the point that it disappears, a limit referred to as diffraction extinction, beyond which all detail is lost and no additional spatial information can be captured from the scene.

Figure: Sinusoidal target of increasing frequency up to diffraction extinction – increasingly small detail smeared by the imaging process, highly magnified.

Resolution is expressed in units of spatial frequency, the inverse of the size and spacing of the detail in question.  Of course, at diffraction extinction no visual information is captured, so in most cases the criteria for usability are set by detail larger than that – or, equivalently, by lower frequencies.  Thresholds tend to be application-specific and arbitrary.
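To make the idea concrete, below is a minimal Python sketch (not the article's code; the Gaussian kernel is just an assumed stand-in for the blur of a real imaging chain) that builds a sinusoidal target of increasing frequency, smears it, and measures the surviving Michelson contrast in each region:

import numpy as np

n = 4096                                # samples across the target
x = np.arange(n)
freq = 0.25 * x / n                     # local frequency ramps from 0 to 0.25 cycles/sample
phase = 2 * np.pi * np.cumsum(freq)     # integrate frequency to obtain a chirp
target = 0.5 + 0.5 * np.sin(phase)      # sinusoidal target with values in [0, 1]

# Smear with a Gaussian kernel (sigma in samples) standing in for the imaging blur
sigma = 3.0
k = np.arange(-12, 13)
kernel = np.exp(-0.5 * (k / sigma) ** 2)
kernel /= kernel.sum()
blurred = np.convolve(target, kernel, mode='same')

# Michelson contrast, (max - min) / (max + min), in windows of increasing frequency:
# it falls as the detail gets finer, approaching zero at extinction
win = 200
for start in range(win, n - win, 800):
    seg = blurred[start:start + win]
    contrast = (seg.max() - seg.min()) / (seg.max() + seg.min())
    print(f"~{freq[start + win // 2]:.3f} cycles/sample: contrast {contrast:.2f}")

The printed contrast starts near one at low frequencies and collapses toward zero as the local frequency approaches extinction for this kernel, mirroring the behavior shown in the figure above.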

The type of resolution being measured must also be specified, since the term can apply to different physical quantities and conditions: sensor, spatial, temporal, spectral, type of light, medium, etc.  In photography we are normally interested in Spatial Resolution from incoherent light traveling in air, so that will be the focus here.

Continue reading What is Resolution?

Fourier Optics and the Complex Pupil Function

In the last article we learned that a complex lens can be modeled as just an entrance pupil, an exit pupil and a geometrical optics black-box in between.  Goodman[1] suggests that all optical path errors for a given Gaussian point on the image plane can be thought of as being introduced by a custom phase plate at the pupil plane, delaying or advancing the light wavefront locally according to aberration function \Delta W(u,v) as earlier described.

The phase plate distorts the forming wavefront, introducing diffraction and aberrations, while otherwise allowing us to treat the rest of the lens as if it followed geometrical optics rules.  It can be associated with either the entrance or the exit pupil.  Photographers are usually concerned with the effects of the lens on the image plane so we will associate it with the adjacent Exit Pupil.

Figure 1.  Aberrations can be fully described by the distortions introduced by a fictitious phase plate inserted at the uv exit pupil plane.  The phase error distribution is the same as the path length error described by wavefront aberration function ΔW(u,v), introduced in the previous article.
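For the numerically inclined, the phase-plate picture translates directly into a generalized (complex) pupil function P(u,v) = A(u,v)\,e^{i 2\pi \Delta W(u,v)/\lambda}, with A the aperture transmission and \Delta W the path error of the fictitious plate.  Below is a minimal Python sketch of the construction (assumed values, not the article's code), using a uniform disk for the aperture and an illustrative half wave of defocus for \Delta W:

import numpy as np

N = 512                                  # samples across the exit pupil plane
lam = 0.55e-6                            # wavelength in meters (green light)
u = np.linspace(-1, 1, N)                # normalized pupil coordinates
U, V = np.meshgrid(u, u)
rho2 = U**2 + V**2

A = (rho2 <= 1.0).astype(float)          # aperture transmission: 1 inside the disk, 0 outside

# Illustrative aberration: half a wave of defocus, dW = W020 * rho^2 (in meters)
W020 = 0.5 * lam
dW = W020 * rho2

# Generalized complex pupil function: the fictitious phase plate of Figure 1
P = A * np.exp(1j * 2 * np.pi * dW / lam)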

Continue reading Fourier Optics and the Complex Pupil Function

Angles and the Camera Equation

Imagine a bucolic scene on a clear sunny day at the equator: sand warmed by the tropical sun, with a typical irradiance (E) of about 1000 watts per square meter.  As discussed earlier, we could express this quantity as illuminance in lumens per square meter (lx) – or as a certain number of photons per second (\Phi) over an area of interest (\mathcal{A}).

(1)   \begin{equation*} E = \frac{\Phi}{\mathcal{A}} \quad (\mathrm{W},\ \mathrm{lm},\ \mathrm{photons/s}) \,/\, \mathrm{m}^2 \end{equation*}

How many photons/s per unit area can we expect on the camera’s image plane (irradiance E_i)?

Figure 1.  Irradiation transfer from scene to sensor.

In answering this question we will discover the Camera Equation as a function of opening angles – and set the stage for the next article on lens pupils.  By the way, all quantities in this article depend on wavelength and position in the Field of View; that dependence will be left implicit in the formulas to keep them readable – see Appendix I for a more formally correct version of Equation (1).
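For a rough sense of the numbers in Equation (1), here is a back-of-the-envelope Python sketch (assuming a single representative wavelength; a proper conversion would integrate over the source's spectrum) turning 1000 W/m² of irradiance into a photon rate per unit area:

h = 6.626e-34            # Planck constant, J*s
c = 2.998e8              # speed of light, m/s
lam = 555e-9             # representative wavelength, m (green)

E = 1000.0                           # irradiance, W/m^2
photon_energy = h * c / lam          # energy per photon, J (about 3.6e-19 J)
phi_per_area = E / photon_energy     # photons per second per m^2

print(f"{phi_per_area:.2e} photons/s per m^2")   # roughly 2.8e21

The sunlit sand therefore receives on the order of 10^21 photons per second on every square meter; the question above is how many of them end up per unit area on the sensor.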

Continue reading Angles and the Camera Equation

Wavefront to PSF to MTF: Physical Units

In the last article we saw that the intensity Point Spread Function and the Modulation Transfer Function of a lens could be easily approximated numerically by applying Discrete Fourier Transforms to its generalized exit pupil function \mathcal{P} twice in sequence.[1]

Figure: Numerical Fourier Optics – amplitude Point Spread Function, intensity PSF and MTF.

Obtaining the 2D DFTs is easy: simply feed the M×N numbers representing the two-dimensional complex image of the Exit Pupil function in its uv space to a Fast Fourier Transform routine and, presto, it produces M×N numbers representing the amplitude of the PSF on the xy sensing plane.  Figure 1a shows a simple case where pupil function \mathcal{P} is a uniform disk representing the circular aperture of a perfect lens, with M×N = 1024×1024.  Figure 1b is the resulting intensity PSF.

Figure 1.  1a, left: array of numbers representing a circular aperture – ones (appearing as a white disk) on a background of zeros (black).  1b, right: array of numbers representing the PSF of image 1a, in the classic shape of an Airy pattern (contrast slightly boosted).
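Below is a minimal numerical sketch of that double-transform pipeline in Python (assumed sizes and normalization, not the article's own code; the disk is kept well inside the array so that the resulting PSF is adequately sampled):

import numpy as np

N = 1024
u = np.linspace(-1, 1, N)
U, V = np.meshgrid(u, u)
pupil = (U**2 + V**2 <= 0.25).astype(float)   # uniform disk: the aperture of a perfect lens

# First transform: exit pupil function -> amplitude PSF on the xy sensing plane
amp_psf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))

# Intensity PSF is the squared modulus of the amplitude PSF (an Airy pattern here)
int_psf = np.abs(amp_psf) ** 2

# Second transform: intensity PSF -> OTF; its modulus, normalized to one at the
# origin, is the MTF
otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(int_psf)))
mtf = np.abs(otf) / np.abs(otf).max()

center = N // 2
print(int_psf[center, center], mtf[center, center])   # PSF peak and MTF(0) = 1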

Simple and fast.  Wonderful.  Below is a slice through the center, the 513th row, zoomed in.  Hmm… what are the physical units on the axes of the displayed data produced by the DFT?

Continue reading Wavefront to PSF to MTF: Physical Units

Aberrated Wave to Image Intensity to MTF

Goodman, in his excellent Introduction to Fourier Optics[1], describes how an image is formed on a camera sensing plane starting from first principles, that is, electromagnetic propagation according to Maxwell’s wave equation.  If you want the play-by-play account I highly recommend his math-intensive book.  But for the budding photographer it is sufficient to know what happens at the Exit Pupil of the lens, because after that the transformations to Point Spread and Modulation Transfer Functions are straightforward, as we will show in this article.

The following diagram exemplifies the last few millimeters of the journey that light from the scene has to travel in order to be absorbed by a camera’s sensing medium.  Light from the scene, in the form of field U, arrives at the front of the lens.  It goes through the lens, being partly blocked and distorted by it, and arrives at the lens’s virtual back end, the Exit Pupil; we’ll call this blocking/distorting function P.  Other than in very simple cases, the Exit Pupil does not necessarily coincide with a specific physical element or Principal surface.[iv]  It is a convenient mathematical construct which condenses all of the light-transforming properties of a lens into a single plane.

The complex light field at the Exit Pupil’s two-dimensional uv plane is then U\cdot P, as shown below (not to scale; the product of the two arrays is element-by-element):

Figure 1. Simplified schematic diagram of the space between the exit pupil of a camera lens and its sensing plane. The space is assumed to be filled with air.
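In code the product is just a sample-by-sample multiplication of two complex arrays.  A tiny Python sketch (assumed shapes and an illustrative aberration, not the article's code):

import numpy as np

N = 256
u = np.linspace(-1, 1, N)
X, Y = np.meshgrid(u, u)

U_field = np.ones((N, N), dtype=complex)        # idealized flat field arriving from the scene

aperture = (X**2 + Y**2 <= 1.0).astype(float)   # blocking: 1 inside the pupil, 0 outside
phase_error = 0.3 * (X**2 + Y**2)               # distorting: illustrative aberration, in waves
P = aperture * np.exp(1j * 2 * np.pi * phase_error)

field_at_exit_pupil = U_field * P               # the element-by-element product U·P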

Continue reading Aberrated Wave to Image Intensity to MTF