Tag Archives: point spread function

Introduction to Texture MTF

Texture MTF is a method to measure the sharpness of a digital camera and lens by capturing the image of a target of known characteristics.  It purports to better evaluate the perception of fine details in low contrast areas of the image – what is referred to as ‘texture’ – in the presence of noise reduction, sharpening or other non-linear processing performed by the camera before writing data to file.

Figure 1. Image of Dead Leaves low contrast target. Such targets are designed to have controlled scale and direction invariant features with a power law Power Spectrum.

The Modulation Transfer Function (MTF) of an imaging system represents its spatial frequency response, from which many metrics related to perceived sharpness are derived: MTF50, SQF, SQRI, CMT Acutance etc.  In these pages we have in the past used the slanted edge method to good effect to obtain accurate estimates of a system’s MTF curves.[1]

In this article we will explore three proposed methods to determine Texture MTF and/or estimate the Optical Transfer Function of the imaging system under test from a reference power-law Power Spectrum target.  All three rely on variations of the ratio of captured to reference image in the frequency domain: straight Fourier Transforms; Power Spectral Density; and Cross Power Density.  In so doing we will develop some intuitions about their strengths and weaknesses. Continue reading Introduction to Texture MTF
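The three frequency-domain ratios just listed can be sketched in a few lines of Python (a hedged illustration, not code from the articles; the function name and the `eps` regularizer are my own):

```python
import numpy as np

def texture_mtf(captured, reference, eps=1e-12):
    """Three frequency-domain estimates of Texture MTF from a captured and a
    reference image of the same dead-leaves target (2D float arrays)."""
    C = np.fft.fft2(captured)
    R = np.fft.fft2(reference)
    # 1) straight Fourier Transform ratio (complex, phase sensitive)
    mtf_ft = np.abs(C / (R + eps))
    # 2) square root of the Power Spectral Density ratio (phase blind)
    mtf_psd = np.sqrt(np.abs(C) ** 2 / (np.abs(R) ** 2 + eps))
    # 3) cross power density ratio: noise uncorrelated with the
    #    reference tends to average out of the numerator
    mtf_cpd = np.real(C * np.conj(R)) / (np.abs(R) ** 2 + eps)
    return mtf_ft, mtf_psd, mtf_cpd
```

With a noiseless, unblurred capture all three estimates reduce to one at every frequency; their differing behavior shows up once noise and non-linear processing enter the captured image.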

Fourier Optics and the Complex Pupil Function

In the last article we learned that a complex lens can be modeled as just an entrance pupil, an exit pupil and a geometrical optics black-box in between.  Goodman[1] suggests that all optical path errors for a given Gaussian point on the image plane can be thought of as being introduced by a custom phase plate at the pupil plane, delaying or advancing the light wavefront locally according to aberration function \Delta W(u,v) as earlier described.

The phase plate distorts the forming wavefront, introducing diffraction and aberrations, while otherwise allowing us to treat the rest of the lens as if it followed geometrical optics rules.  It can be associated with either the entrance or the exit pupil.  Photographers are usually concerned with the effects of the lens on the image plane so we will associate it with the adjacent Exit Pupil.

Figure 1.  Aberrations can be fully described by distortions introduced by a fictitious phase plate inserted at the uv exit pupil plane.  The phase error distribution is the same as the path length error described by wavefront aberration function ΔW(u,v), introduced in the previous article.
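A minimal numerical sketch of such a generalized pupil function, written in Python here (examples on this site otherwise use Matlab); the grid size and the third order spherical aberration coefficient standing in for ΔW(u,v) are illustrative values, not measurements:

```python
import numpy as np

N = 256                        # samples per side of the uv pupil grid
u = np.linspace(-1, 1, N)      # pupil coordinates normalized to the rim
U, V = np.meshgrid(u, u)
rho2 = U**2 + V**2             # normalized radial distance squared
A = (rho2 <= 1).astype(float)  # clear circular aperture (ones inside the disk)

lam  = 0.55e-6                 # mean wavelength, meters (assumed)
W040 = 0.60e-6                 # third order spherical aberration, meters (assumed)
deltaW = W040 * rho2**2        # wavefront error: W040 * rho^4

# the phase plate delays/advances the wavefront locally by deltaW
P = A * np.exp(1j * 2 * np.pi * deltaW / lam)  # generalized pupil function
```

The magnitude of P is just the aperture; all of the aberration information lives in its phase, exactly as the phase plate picture suggests.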

Continue reading Fourier Optics and the Complex Pupil Function

An Introduction to Pupil Aberrations

As discussed in the previous article, so far we have assumed ideal optics, with spherical wavefronts propagating into and out of the lens’ Entrance and Exit pupils respectively.  That would only be true if there were no aberrations. In that case the photon distribution within the pupils would be uniform and such an optical system would be said to be diffraction limited.

Figure 1.   Optics as a black box, fully described for our purposes by its terminal properties at the Entrance and Exit pupils.  A horrible attempt at perspective by your correspondent: the Object, Pupils and Image planes should all be parallel and share the optical axis z.

On the other hand if lens imperfections, aka aberrations, were present, the photon distribution in the Exit Pupil would be distorted and the wavefront leaving it would no longer be perfectly spherical, with consequences for the intensity distribution of photons reaching the image.

Either pupil can be used to fully describe the light collection and concentration characteristics of a lens.  In imaging we are typically interested in what happens after the lens so we will choose to associate the performance of the optics with the Exit Pupil. Continue reading An Introduction to Pupil Aberrations

Diffracted DOF Aperture Guides: 24-35mm

As a landscape shooter I often wonder whether old rules for DOF still apply to current small pixels and sharp lenses. I therefore roughly measured the spatial resolution performance of my Z7 with 24-70mm/4 S in the center to see whether ‘f/8 and be there’ still makes sense today.  The journey and the diffraction- and simple-aberration-aware model were described in the last few posts.  The results are summarized in the Landscape Aperture-Distance charts presented here for the 24, 28 and 35mm focal lengths.

I also present the data in the form of a simplified plot to aid making the right compromises when the focusing distance is flexible.  This information is valid for the Z7 and kit in the center only.  It probably applies just as well to cameras with similarly spec’d pixels and lenses. Continue reading Diffracted DOF Aperture Guides: 24-35mm

DOF and Diffraction: 24mm Guidelines

After an exhausting two and a half hour hike you are finally resting, sitting on a rock at the foot of your destination, a tiny alpine lake, breathing in the thin air and absorbing the majestic scenery.  A cool light breeze suddenly ripples the surface of the water, morphing what has until now been a perfect reflection into an impressionistic interpretation of the impervious mountains in the distance.

The beautiful flowers in the foreground are so close you can touch them, the reflection in the water 10-20m away, the imposing mountains in the background a few hundred meters further out.  You realize you are hungry.  As you search the backpack for the two panini you prepared this morning you begin to ponder how best to capture the scene: subject,  composition, Exposure, Depth of Field.

Figure 1. A typical landscape situation: a foreground a few meters away, a mid-ground a few tens and a background a few hundred meters further out.  Three orders of magnitude.  The focus point was on the running dog, f/16, 1/100s.  Was this a good choice?

Depth of Field.  Where to focus and at what f/stop?  You tip your hat and just as you look up at the bluest of blue skies the number 16 starts enveloping your mind, like rays from the warm noon sun. You dial it in and as you squeeze the trigger that familiar nagging question bubbles up, as it always does in such conditions.  If this were a one shot deal, was that really the best choice?

In this article we attempt to provide information to make explicit some of the trade-offs necessary in the choice of Aperture for 24mm landscapes.  The result of the process is a set of guidelines.  The answers are based on the previously introduced diffraction-aware model for sharpness in the center along the depth of field – and a tripod-mounted Nikon Z7 + Nikkor 24-70mm/4 S kit lens at 24mm.
Continue reading DOF and Diffraction: 24mm Guidelines

DOF and Diffraction: Setup

The two-thin-lens model for precision Depth Of Field estimates described in the last two articles is almost ready to be deployed.  In this one we will describe the setup used to develop the scenarios outlined in the next.

The beauty of the hybrid geometrical-Fourier optics approach is that, with an estimate of the field produced at the exit pupil by an on-axis point source, we can generate the image of the resulting Point Spread Function and related Modulation Transfer Function.

Pretend that you are a photon from such a source in front of an f/2.8 lens focused at 10m with about 0.60 microns of third order spherical aberration – and you are about to smash yourself onto the ‘best focus’ observation plane of your camera.  Depending on whether you leave exactly from the in-focus distance of 10 meters or slightly before/after it, the impression you would leave on the sensing plane would look as follows:

Figure 1. PSF of a lens with about 0.6um of third order spherical aberration focused at 10m.

The width of the square above is 30 microns (um), which corresponds to the diameter of the Circle of Confusion used for old-fashioned geometrical DOF calculations with full frame cameras.  The first ring of the in-focus PSF at 10.0m has a diameter of about 2.44\lambda \frac{f}{D} = 3.65 microns.   That’s about the size of the estimated effective square pixel aperture of the Nikon Z7 camera that we are using in these tests.
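The first-ring arithmetic can be checked in a couple of lines, assuming a mean wavelength of about 0.535 microns:

```python
# First dark ring diameter of the Airy pattern: 2.44 * lambda * N
lam_um = 0.535            # mean wavelength in microns (assumed)
N = 2.8                   # f-number, i.e. f/D
d_um = 2.44 * lam_um * N  # ~3.66 microns, close to the 3.65 quoted above
```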
Continue reading DOF and Diffraction: Setup

DOF and Diffraction: Image Side

This investigation of the effect of diffraction on Depth of Field is based on a two-thin-lens model, as suggested by Alan Robinson[1].  We chose this model because it allows us to associate geometrical optics with one lens and Fourier optics with the other, thus simplifying the underlying math and our understanding.

In the last article we discussed how the front element of the model presents at the rear element the wavefront resulting from an on-axis source as a function of distance from the lens.  We accomplished this by using simple geometry in complex notation.  In this one we will take the relative wavefront present at the exit pupil and project it onto the sensing plane, taking diffraction into account numerically.  We already know how to do this since we dealt with the subject in the recent past.

Figure 1. Where is the plane with the Circle of Least Confusion?  Through Focus Line Spread Function Image of a lens at f/2.8 with the indicated third order spherical aberration coefficient, and relative measures of ‘sharpness’ MTF50 and Acutance curves.  Acutance is scaled to the same peak as MTF50 for ease of comparison and refers to my typical pixel peeping conditions: 100% zoom, 16″ away from my 24″ monitor.

Continue reading DOF and Diffraction: Image Side

DOF and Diffraction: Object Side

In this and the following articles we shall explore the effects of diffraction on Depth of Field through a two-lens model that separates geometrical and Fourier optics in a way that keeps the math simple, though via complex notation.  In the process we will gain a better understanding of how lenses work.

The results of the model are consistent with what can be obtained via classic DOF calculators online but should be more precise in critical situations, like macro photography.  I am not a macro photographer so I would be interested in validation of the results of the explained method by someone who is.

Figure 1. Simple two-thin-lens model for DOF calculations in complex notation.  Adapted under licence.

Continue reading DOF and Diffraction: Object Side

Canon’s High-Res Optical Low Pass Filter

Canon recently introduced its EOS-1D X Mark III Digital Single-Lens Reflex [Edit: and now also possibly the R5 Mirrorless ILC] touting a new and improved Anti-Aliasing filter, which they call a High-Res Gaussian Distribution LPF, claiming that

“This not only helps to suppress moiré and color distortion,
but also improves resolution.”

Figure 1. Artist’s rendition of new High-res Low Pass Filter, courtesy of Canon USA

In this article we will try to dissect the marketing speak and understand a bit better the theoretical implications of the new AA filter.  For the abridged version, jump to the Conclusions at the bottom.  In a picture:

Figure 16: The less psychedelic, the better.

Continue reading Canon’s High-Res Optical Low Pass Filter

The Richardson-Lucy Algorithm

Deconvolution by the Richardson-Lucy algorithm is achieved by minimizing the convex loss function derived in the last article

(1)   \begin{equation*} J(O) = \sum \bigg (O**PSF - I\cdot \ln(O**PSF) \bigg) \end{equation*}

with

  • J, the scalar quantity to minimize, function of ideal image O(x,y)
  • I(x,y), linear captured image intensity laid out in M rows and N columns, corrupted by Poisson noise and blurred by the PSF
  • PSF(x,y), the known two-dimensional Point Spread Function that should be deconvolved out of I
  • O(x,y), the output image resulting from deconvolution, ideally without shot noise and blurring introduced by the PSF
  • **   two-dimensional convolution
  • \cdot   element-wise product
  • ln, element-wise natural logarithm

In what follows indices x and y, from zero to M-1 and N-1 respectively, are dropped for readability.  Articles about algorithms are by definition dry so continue at your own peril.

So, given captured raw image I blurred by known function PSF, how do we find the minimum value of J yielding the deconvolved image O that we are after?
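By way of preview, the classic multiplicative Richardson-Lucy update that answers this question can be sketched in a few lines of Python (a hedged illustration: the image is assumed periodic so that circular FFT convolution can stand in for **, and the array sizes, starting estimate and iteration count are illustrative choices):

```python
import numpy as np

def conv2(a, b):
    # circular 2D convolution via the FFT; b's origin assumed at index (0, 0)
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def richardson_lucy(I, psf, iterations=30, eps=1e-12):
    # multiplicative RL update: O <- O * ( psf_mirrored ** (I / (O ** psf)) )
    psf_m = np.roll(psf[::-1, ::-1], (1, 1), axis=(0, 1))  # psf(-x,-y), origin kept at (0, 0)
    O = np.full_like(I, I.mean())          # flat non-negative starting estimate
    for _ in range(iterations):
        ratio = I / (conv2(O, psf) + eps)  # observed over predicted intensity
        O = O * conv2(ratio, psf_m)        # project the correction back through the PSF
    return O
```

Each pass compares the captured image I to the current estimate blurred by the PSF, and multiplies the estimate by the back-projected ratio – a scheme that keeps O non-negative by construction.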

Continue reading The Richardson-Lucy Algorithm

Elements of Richardson-Lucy Deconvolution

We have seen that deconvolution by naive division in the frequency domain only works in ideal conditions not typically found in normal photographic settings, in part because of shot noise inherent in light from the scene. Half a century ago William Richardson (1972)[1] and Leon Lucy (1974)[2] independently came up with a better way to deconvolve blurring introduced by an imaging system in the presence of shot noise. Continue reading Elements of Richardson-Lucy Deconvolution

Capture Sharpening: Estimating Lens PSF

The next few articles will outline the first tiny steps towards achieving perfect capture sharpening, that is deconvolution of an image by the Point Spread Function (PSF) of the lens used to capture it.  This is admittedly a complex subject, fraught with myriad ever-changing variables even in a lab, let alone in the field.  But studying it can give a glimpse of the possibilities and insights into the processes involved.

I will explain the steps I followed and show the resulting images and measurements.  Jumping the gun, the blue line below represents the starting system Spatial Frequency Response (SFR)[1], the black one unattainable/undesirable perfection and the orange one the result of part of the process outlined in this series.

Figure 1. Spatial Frequency Response of the imaging system before and after Richardson-Lucy deconvolution by the PSF of the lens that captured the original image.

Continue reading Capture Sharpening: Estimating Lens PSF

Wavefront to PSF to MTF: Physical Units

In the last article we saw that the intensity Point Spread Function and the Modulation Transfer Function of a lens could be easily approximated numerically by applying Discrete Fourier Transforms to its generalized exit pupil function \mathcal{P} twice in sequence.[1]

Numerical Fourier Optics: amplitude Point Spread Function, intensity PSF and MTF

Obtaining the 2D DFTs is easy: simply feed MxN numbers representing the two dimensional complex image of the Exit Pupil function in its uv space to a Fast Fourier Transform routine and, presto, it produces MxN numbers representing the amplitude of the PSF on the xy sensing plane.  Figure 1a shows a simple case where pupil function \mathcal{P} is a uniform disk representing the circular aperture of a perfect lens with MxN = 1024×1024.  Figure 1b is the resulting intensity PSF.

Figure 1. 1a Left: Array of numbers representing a circular aperture (zeros for black and ones for white).  1b Right: Array of numbers representing the PSF of image 1a (contrast slightly boosted).
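In Python/numpy rather than the Matlab used for the figures, the two-transform chain might look like this sketch (grid size as in the text; the shift calls just keep the pupil and PSF centered in their arrays):

```python
import numpy as np

N = 1024
u = np.linspace(-1, 1, N)
U, V = np.meshgrid(u, u)
pupil = (U**2 + V**2 <= 1).astype(float)  # uniform disk: perfect lens aperture

# first DFT: pupil function -> amplitude PSF on the xy sensing plane
aPSF = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
PSF = np.abs(aPSF) ** 2                   # intensity PSF: the Airy pattern

# second DFT: intensity PSF -> OTF; its magnitude, normalized at the
# origin, is the MTF
OTF = np.fft.fft2(np.fft.ifftshift(PSF))
MTF = np.abs(OTF) / np.abs(OTF[0, 0])
```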

Simple and fast.  Wonderful.  Below is a slice through the center, the 513th row, zoomed in.  Hmm….  What are the physical units on the axes of displayed data produced by the DFT? Continue reading Wavefront to PSF to MTF: Physical Units

Aberrated Wave to Image Intensity to MTF

Goodman, in his excellent Introduction to Fourier Optics[1], describes how an image is formed on a camera sensing plane starting from first principles, that is electromagnetic propagation according to Maxwell’s wave equation.  If you want the play by play account I highly recommend his math intensive book.  But for the budding photographer it is sufficient to know what happens at the Exit Pupil of the lens because after that the transformations to Point Spread and Modulation Transfer Functions are straightforward, as we will show in this article.

The following diagram exemplifies the last few millimeters of the journey that light from the scene has to travel in order to be absorbed by a camera’s sensing medium.  Light from the scene in the form of field U arrives at the front of the lens.  It goes through the lens, being partly blocked and distorted by it as it arrives at its virtual back end, the Exit Pupil; we’ll call this blocking/distorting function P.  Other than in very simple cases, the Exit Pupil does not necessarily coincide with a specific physical element or Principal surface.[iv]  It is a convenient mathematical construct which condenses all of the light transforming properties of a lens into a single plane.

The complex light field at the Exit Pupil’s two dimensional uv plane is then  U\cdot P as shown below (not to scale, the product of the two arrays is element-by-element):

Figure 1. Simplified schematic diagram of the space between the exit pupil of a camera lens and its sensing plane. The space is assumed to be filled with air.

Continue reading Aberrated Wave to Image Intensity to MTF

A Simple Model for Sharpness in Digital Cameras – Defocus

This series of articles has dealt with modeling an ideal imaging system’s ‘sharpness’ in the frequency domain.  We looked at the effects of the hardware on spatial resolution: diffraction, sampling interval, sampling aperture (e.g. a squarish pixel), anti-aliasing (OLPF) filters.  The next two posts will deal with modeling typical simple imperfections related to the lens: defocus and spherical aberrations.

Defocus = OOF

Defocus means that the sensing plane is not exactly where it needs to be for image formation in our ideal imaging system: the image is therefore out of focus (OOF).  Said another way, light from a point source would go through the lens but converge either behind or in front of the sensing plane, as shown in the following diagram, for a lens with a circular aperture:

Figure 1. Top to bottom: Back Focus, In Focus, Front Focus.  To the right is how the corresponding PSF would look on the sensing plane.  Image under license courtesy of Brion.

Continue reading A Simple Model for Sharpness in Digital Cameras – Defocus

A Simple Model for Sharpness in Digital Cameras – AA

This article will discuss a simple frequency domain model for an AntiAliasing (or Optical Low Pass) Filter, a hardware component sometimes found in a digital imaging system[1].  The filter typically sits just above the sensor and its objective is to block as much as possible of the aliasing and moiré creating energy above the monochrome Nyquist spatial frequency while letting through as much as possible of the real image forming energy below it, hence the low-pass designation.

Figure 1. The blue line indicates the pass through performance of an ideal anti-aliasing filter presented with an Airy PSF (Original): pass all spatial frequencies below Nyquist (0.5 c/p) and none above that. No filter has such ideal characteristics and if it did its hard edges would result in undesirable ringing in the image.

In consumer digital cameras it is often implemented  by introducing one or two birefringent plates in the sensor’s filter stack.  This is how Nikon shows it for one of its DSLRs:

Figure 2. Typical Optical Low Pass Filter implementation  in a current Digital Camera, courtesy of Nikon USA (yellow displacement ‘d’ added).
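For intuition: a single birefringent plate splits each incoming ray into two of equal intensity a distance d apart, so in one dimension its PSF is a pair of impulses and its MTF is |cos(πdf)|.  With the typical displacement of one pixel pitch the null lands exactly at the monochrome Nyquist frequency.  A Python sketch (d and the frequency grid are illustrative):

```python
import numpy as np

d = 1.0                                 # spot separation in pixel pitches (typical choice)
f = np.linspace(0, 1, 101)              # spatial frequency in cycles/pixel
mtf_aa = np.abs(np.cos(np.pi * d * f))  # two-impulse PSF -> |cos| MTF, null at 0.5 c/p
```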

Continue reading A Simple Model for Sharpness in Digital Cameras – AA

The Units of Discrete Fourier Transforms

This article is about specifying the units of the Discrete Fourier Transform of an image and the various ways that they can be expressed.  This apparently simple task can be fiendishly unintuitive.

The image we will use as an example is the familiar Airy Pattern from the last few posts, at f/16 with light of mean 530nm wavelength. Zoomed in to the left in Figure 1; and as it looks in its 1024×1024 sample image to the right:

Airy Mesh and Intensity
Figure 1. Airy disc image I(x,y). Left, 1a, 3D representation, zoomed in. Right, 1b, as it would appear on the sensing plane (yes, the rings are there but you need to squint to see them).

Continue reading The Units of Discrete Fourier Transforms

A Simple Model for Sharpness in Digital Cameras – Sampling & Aliasing

Having shown that our simple two dimensional MTF model is able to predict the performance of the combination of a perfect lens and square monochrome pixel with 100% Fill Factor we now turn to the effect of the sampling interval on spatial resolution according to the guiding formula:

(1)   \begin{equation*} MTF_{Sys2D} = \left|(\widehat{ PSF_{lens} }\cdot \widehat{PIX_{ap} })\right|_{pu}\ast\ast\: \widehat{\delta\delta_{pitch}} \end{equation*}

The hats in this case mean the Fourier Transform of the respective component normalized to 1 at the origin (_{pu}), that is the individual MTFs of the perfect lens PSF, the perfect square pixel and the delta grid;  ** represents two dimensional convolution.

Sampling in the Spatial Domain

While exposed, a pixel sees the scene through its aperture and accumulates energy as photons arrive.  Below left is the representation of, say, the intensity that a star projects on the sensing plane, in this case resulting in an Airy pattern since we said that the lens is perfect.  During exposure each pixel integrates (counts) the arriving photons, an operation that mathematically can be expressed as the convolution of the shown Airy pattern with a square the size of the effective pixel aperture, here assumed to have 100% Fill Factor.  It is the convolution in the continuous spatial domain of lens PSF with pixel aperture PSF shown in Equation (2) of the first article in the series.

Sampling is then the product of an infinitesimally small Dirac delta function at the center of each pixel, the red dots below left, by the result of the convolution, producing the sampled image below right.
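The two steps just described can be illustrated by a one-dimensional Python sketch (a Gaussian stands in for the Airy pattern to keep the code short, and the grid sizes are illustrative):

```python
import numpy as np

fine = 5                                  # fine samples per pixel pitch
x = np.arange(-50, 51)                    # fine grid, units of pitch/fine
psf_lens = np.exp(-0.5 * (x / 3.0) ** 2)  # stand-in for the lens PSF
psf_lens /= psf_lens.sum()

# step 1: pixel integration = convolution with the pixel aperture,
# a box one pitch wide (100% fill factor)
pix_ap = np.ones(fine) / fine
on_plane = np.convolve(psf_lens, pix_ap, mode='same')

# step 2: sampling = product with the Dirac delta grid, i.e. keeping
# one value per pixel pitch
sampled = on_plane[::fine]
```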

Figure 1. Left, 1a: A highly zoomed (3200%) image of the lens PSF, an Airy pattern, projected onto the imaging plane where the sensor sits. Pixels shown outlined in yellow. A red dot marks the sampling coordinates. Right, 1b: The sampled image zoomed at 16000%, 5x as much, because in this example each pixel’s width is 5 linear units on the side.

Continue reading A Simple Model for Sharpness in Digital Cameras – Sampling & Aliasing

A Simple Model for Sharpness in Digital Cameras – Diffraction and Pixel Aperture

Now that we know from the introductory article that the spatial frequency response of a typical perfect digital camera and lens (its Modulation Transfer Function) can be modeled simply as the product of the Fourier Transform of the Point Spread Function of the lens and pixel aperture, convolved with a Dirac delta grid at cycles-per-pixel pitch spacing

(1)   \begin{equation*} MTF_{Sys2D} = \left|\widehat{ PSF_{lens} }\cdot \widehat{PIX_{ap} }\right|_{pu}\ast\ast\: \widehat{\delta\delta_{pitch}} \end{equation*}

we can take a closer look at each of those components (pu here indicating normalization to one at the origin).   I used Matlab to generate the examples below but you can easily do the same with a spreadsheet.   Continue reading A Simple Model for Sharpness in Digital Cameras – Diffraction and Pixel Aperture
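As a preview of those components, here is a hedged Python sketch (rather than the Matlab used for the figures) of the two analytic factors before the delta-grid convolution: the diffraction MTF of a circular aperture and the sinc MTF of a square pixel.  Wavelength, f-number and pitch are illustrative values:

```python
import numpy as np

lam, Nf, pitch = 0.53, 5.6, 4.35  # wavelength (um), f-number, pixel pitch (um)

fc = 1.0 / (lam * Nf)             # diffraction cutoff frequency, cycles/micron
f = np.linspace(0, fc, 200)       # spatial frequency on the sensor
s = f / fc                        # normalized frequency

# diffraction MTF of a circular aperture (autocorrelation of the disk)
mtf_diff = (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s**2))

# square pixel aperture MTF; np.sinc(x) is sin(pi*x)/(pi*x)
mtf_pix = np.abs(np.sinc(pitch * f))

mtf_sys = mtf_diff * mtf_pix      # system MTF before the delta-grid convolution
```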

A Simple Model for Sharpness in Digital Cameras – I

The next few posts will describe a linear spatial resolution model that can help a photographer better understand the main variables involved in evaluating the ‘sharpness’ of photographic equipment and related captures.   I will show numerically that the combined spatial frequency response (MTF) of a perfect AAless monochrome digital camera and lens in two dimensions can be described as the magnitude of the normalized product of the Fourier Transform (FT) of the lens Point Spread Function and the FT of the pixel footprint (aperture), convolved with the FT of a rectangular grid of Dirac delta functions centered at each pixel:

    \[ MTF_{2D} = \left|\widehat{ PSF_{lens} }\cdot \widehat{PIX_{ap} }\right|_{pu}\ast\ast\: \widehat{\delta\delta_{pitch}} \]

With a few simplifying assumptions we will see that the effect of the lens and sensor on the spatial resolution of the continuous image on the sensing plane can be broken down into these simple components.  The overall ‘sharpness’ of the captured digital image can then be estimated by combining the ‘sharpness’ of each of them.

The stage will be set in this first installment with a little background and perfect components.  Later additional detail will be provided to take into account pixel aperture and Anti-Aliasing filters.  Then we will look at simple aberrations.  Next we will learn how to measure MTF curves for our equipment, and look at numerical methods to model PSFs and MTFs from the wavefront at the aperture. Continue reading A Simple Model for Sharpness in Digital Cameras – I

COMBINING BAYER CFA MTF Curves – II

In this and the previous article I discuss how Modulation Transfer Functions (MTF) obtained from every color channel of a Bayer CFA raw capture in isolation can be combined to provide a meaningful composite MTF curve for the imaging system as a whole.

There are two ways that this can be accomplished: an input-referred approach (L) that reflects the performance of the hardware only; and an output-referred one (Y) that also takes into consideration how the image will be displayed.  Both are valid and differences are typically minor, though the weights of the latter are scene, camera/lens, illuminant dependent – while the former are not.  Therefore my recommendation in this context is to stick with input-referred weights when comparing cameras and lenses.[1] Continue reading COMBINING BAYER CFA MTF Curves – II

The Units of Spatial Resolution

Several sites for photographers perform spatial resolution ‘sharpness’ testing of a specific lens and digital camera set up by capturing a target.  You can also measure your own equipment relatively easily to determine how sharp your hardware is.  However comparing results from site to site and to your own can be difficult and/or misleading, starting from the multiplicity of units used: cycles/pixel, line pairs/mm, line widths/picture height, line pairs/image height, cycles/picture height etc.

This post will address the units involved in spatial resolution measurement using as an example readings from the popular slanted edge method, although their applicability is generic.
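For example, converting a single reading among these units requires only the pixel pitch and the picture height in pixels; the values below roughly approximate a Nikon Z7 and are illustrative:

```python
pitch_um = 4.35   # pixel pitch in microns (assumed)
height_px = 5504  # picture height in pixels (assumed)

f_cp = 0.25       # a reading in cycles/pixel, e.g. from the slanted edge method

f_lpmm = f_cp * 1000.0 / pitch_um  # line pairs (cycles) per mm on the sensor
f_lpph = f_cp * height_px          # line pairs per picture height
f_lwph = 2 * f_lpph                # line widths per picture height (2 lw = 1 lp)
```

Note that cycles/pixel and lp/mm describe the hardware independently of sensor size, while the per-picture-height units fold in the size of the capture.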

Continue reading The Units of Spatial Resolution

How to Get MTF Performance Curves for Your Camera and Lens

You have obtained a raw file containing the image of a slanted edge  captured with good technique.  How do you get the Modulation Transfer Function of the camera and lens combination that took it?  Download and feast your eyes on open source MTF Mapper version 0.4.16 by Frans van den Bergh.

[Edit, several years later: MTF Mapper has kept improving over time, making it in my opinion the most accurate slanted edge measuring tool available today, used in applications that range from photography to machine vision to the Mars Rover.   Did I mention that it is open source?

It now sports a Graphical User Interface which can load raw files and allow the arbitrary selection of individual edges by simply pointing and clicking, making this post largely redundant.  The procedure outlined will still work but there are easier ways to accomplish the same task today.  To obtain the same result with raw data and version 0.7.38 just install MTF Mapper, set the “Settings/Preferences” tab as follows and leave all else at default:

“Pixel size” is only needed to also show SFR in units of lp/mm and the “Arguments” field only if using an unspecified raw data CFA layout.  “Accept” and “File/Open with manual edge selection” your raw files.  Follow the instructions to select as many edges as desired.  Then in “Data set” open an “annotated” file and shift-click on the chosen edges to see the relative MTF plots.]

The first thing we are going to do is crop the edges and package them into a TIFF file format so that MTF Mapper has an easier time reading them.  Let’s use as an example a Nikon D810 + 85mm/1.8G ISO 64 studio raw capture by DPReview so that you can follow along if you wish.   Continue reading How to Get MTF Performance Curves for Your Camera and Lens

The Slanted Edge Method

My preferred method for measuring the spatial resolution performance of photographic equipment these days is the slanted edge method.  It requires a minimal amount of additional effort compared to capturing and simply eyeballing a pinch, Siemens or other chart, but it gives more useful, accurate, quantitative information in the language and units that have been used to characterize optical systems for over a century: it produces a good approximation to the Modulation Transfer Function of the two dimensional camera/lens system impulse response – at the location of the edge, in the direction perpendicular to it.

Much of what there is to know about an imaging system’s spatial resolution performance can be deduced by analyzing its MTF curve, which represents the system’s ability to capture increasingly fine detail from the scene – starting with perceptually relevant metrics like MTF50, discussed a while back.

In fact the area under the curve weighted by some approximation of the Contrast Sensitivity Function of the Human Visual System is the basis for many other, better accepted single figure ‘sharpness’ metrics with names like Subjective Quality Factor (SQF), Square Root Integral (SQRI), CMT Acutance, etc.   And all this simply from capturing the image of a slanted edge, which one can actually and somewhat easily do at home, as presented in the next article.

Continue reading The Slanted Edge Method

What Radius to Use for Deconvolution Capture Sharpening

The following approach will work if you know the spatial frequency at which a certain relative MTF level (e.g. MTF50) is achieved by your camera/lens combination as set up at the time that the capture was taken.

The process by which our hardware captures images and stores them  in the raw data inevitably blurs detail information from the scene. Continue reading What Radius to Use for Deconvolution Capture Sharpening
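As a preview of the approach: if the overall blur is approximated by a Gaussian PSF of standard deviation sigma (in pixels), its MTF is exp(-2(π·sigma·f)²), so a measured MTF50 frequency pins down sigma – one common basis for choosing a deconvolution ‘radius’.  A sketch with an illustrative MTF50 reading:

```python
import math

f50 = 0.30  # measured MTF50 frequency in cycles/pixel (illustrative)

# solve exp(-2*(pi*sigma*f50)^2) = 0.5 for sigma
sigma = math.sqrt(math.log(2) / 2) / (math.pi * f50)  # ~0.62 pixels here
```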