The Slanted Edge Method

My preferred method for measuring the spatial resolution performance of photographic equipment these days is the slanted edge method.  It requires minimal additional effort compared to capturing and simply eyeballing a Siemens star or other test chart, but it yields more useful, accurate, quantitative information in the language and units that have been used to characterize optical systems for over a century: it produces a good approximation to the Modulation Transfer Function of the two-dimensional camera/lens system impulse response – at the location of the edge, in the direction perpendicular to it.

Much of what there is to know about an imaging system’s spatial resolution performance can be deduced by analyzing its MTF curve, which represents the system’s ability to capture increasingly fine detail from the scene, starting from perceptually relevant metrics like MTF50, discussed a while back.

In fact the area under the curve, weighted by some approximation of the Contrast Sensitivity Function of the Human Visual System, is the basis for many other, better-accepted single-figure ‘sharpness’ metrics with names like Subjective Quality Factor (SQF), Square Root Integral (SQRI), CMT Acutance, etc.  And all this simply from capturing the image of a slanted edge, which one can actually and somewhat easily do at home, as presented in the next article.

In the Beginning there was Frans

I was introduced to the slanted edge method by Frans van den Bergh, author of the reference open-source program MTF Mapper, used extensively in these pages to produce linear spatial resolution curves from raw data.  I highly recommend Frans’ program, documentation and blog.

I will assume that you know that the Modulation Transfer Function (MTF) represents the Spatial Frequency Response of a linear, space invariant imaging system that can be obtained by taking the magnitude (modulus) of the Fourier Transform of the system’s impulse response otherwise known as its Point Spread Function (PSF).  It allows us to determine a system’s ‘sharpness’ performance at all spatial frequencies in one go.

The terms MTF (Modulation Transfer Function) and SFR (Spatial Frequency Response) are often used interchangeably in photography.  They both refer to the expected Michelson Contrast at the given linear spatial frequency, so in a way in imaging they can also be considered a Contrast Transfer Function, as explained in a dedicated article.

Point to Line to Edge

In a nutshell, the method is based on the idea that it is difficult to obtain the PSF of a camera/lens system by, say, taking a capture of a single distant star, a POINT source,  against a black sky because of the relative intensity and size of the target: it’s typically too small compared to the size of a pixel, too dim and too noisy.   Because the imaging system is assumed to be linear one could build intensity up and reduce noise by capturing a number of identical closely spaced stars aligned in a row (a straight LINE), then collapse the line into a unidimensional Line Spread Function to obtain the MTF in the direction perpendicular to the line.    Even better would be capturing a number of contiguous such lines of stars, which at this point could be considered a white EDGE against a darker sky.   Alas such constellations are hard to come by – but not to worry because we can make our own.  Here is what one looks like (228×146 raw pixels):

Figure 1. A slanted edge captured in a raw file.

Pretty simple, right?

Edge to Line to Modulation Transfer Function

The following picture is how Frans explains the slanted edge method: the raw data intensities of the two dimensional edge captured by our imaging system is projected onto a one dimensional line perpendicular to the edge, producing its intensity profile (the Edge Spread Function or ESF); the derivative of the ESF is the Line Spread Function (LSF, not shown below), which is then Fourier transformed into the Modulation Transfer Function (MTF) that we are after:

Edge Spread Function
Figure 2. How the 2D raw capture of a slanted edge is converted to a 1D edge intensity profile perpendicular to it, the Edge Spread Function.  Image courtesy of Frans van den Bergh.

Wait, I thought you said that the MTF was obtained from the Point Spread Function, not the Line Spread Function?

It turns out that the Fourier Transform of the 1-dimensional LSF is in fact equal to the Fourier Transform of the 2-dimensional PSF in the direction perpendicular to the edge, shown as the green arrow labeled ‘Edge Normal’ in the picture above.  So, as better explained further down, by applying the slanted-edge method we obtain a measurement of the 2D MTF of the imaging system as set up in just one direction, the direction perpendicular to the edge.
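This equivalence (a case of the Fourier Slice Theorem discussed further down) is easy to check numerically.  A minimal numpy sketch, using a made-up anisotropic Gaussian as a stand-in PSF: the Fourier Transform of the projected LSF matches the corresponding slice of the 2D Fourier Transform of the PSF.

```python
import numpy as np

# Stand-in PSF: an anisotropic Gaussian (sigma_x != sigma_y), so nothing
# here depends on rotational symmetry.
n = 256
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)
psf = np.exp(-(X**2 / (2 * 6.0**2) + Y**2 / (2 * 3.0**2)))
psf /= psf.sum()

# Project the 2D PSF onto the x axis (sum along y): the Line Spread Function
lsf = psf.sum(axis=0)

# 1D Fourier Transform of the LSF ...
mtf_from_lsf = np.abs(np.fft.fft(lsf))

# ... equals the corresponding central slice of the 2D Fourier Transform
slice_of_2d = np.abs(np.fft.fft2(psf))[0, :]

print(np.allclose(mtf_from_lsf, slice_of_2d))  # True
```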

Such a Modulation Transfer Function in our context represents the ability of an imaging system as a whole to transfer linear spatial resolution information (i.e. subject detail) to the raw file – at the location of the edge and in the direction perpendicular to it.

The Edge is an Integral Part of the System

The edge is ideally perfectly ‘sharp’, straight and of good contrast.  One of the better such targets is a back-lit utility knife edge or razor blade.  However in practice it can be the shadow cast by a bridge on a satellite photo or a high quality print of a uniformly black rectangle on uniformly white matte paper.

Depending on the regularization method used and abstract concepts like rational dependence, the edge needs to be tilted (slanted) ideally between 4 and 6 degrees off the vertical or horizontal and be at least 50 pixels long; 100 or more is better.  But it should not be too long if we want to minimize the effects of lens distortion and are interested in the performance of the system in just a small spot: the resulting MTF is the localized average of the performance of the system along the length of the edge.  Even excellent lenses can vary significantly over the distance covered by a long edge.

The edge on its own was produced by a process that itself has an MTF.  This edge MTF gets multiplied with the MTF of the other imaging components (lens, filter stack, sensor) to obtain the system’s MTF.  Therefore the relative quality of the edge (i.e. how much it can approximate an ideal step function at the capturing distance) determines the upper limit on the measured MTF.  If one uses a lower definition edge, such as one printed at home, the results may be less sensitive but still have comparative value to other measurements taken with the same target.
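The cascade can be illustrated with a toy calculation (all component curves below are invented stand-ins, not measurements): a softer edge scales the measured MTF down at every frequency, which is why absolute values suffer while comparisons against the same target remain valid.

```python
import numpy as np

f = np.linspace(0, 0.5, 101)                 # spatial frequency, cycles/pixel

# Invented stand-in component MTFs (illustrative shapes only):
mtf_lens = np.abs(np.sinc(f / 0.55))         # lens
mtf_pixel = np.abs(np.sinc(f))               # square pixel aperture, 100% fill
mtf_edge_good = np.abs(np.sinc(f / 4.0))     # near-ideal target (e.g. razor blade)
mtf_edge_poor = np.abs(np.sinc(f / 1.5))     # softer target (e.g. home print)

# Component MTFs cascade multiplicatively into the measured system MTF:
measured_good = mtf_lens * mtf_pixel * mtf_edge_good
measured_poor = mtf_lens * mtf_pixel * mtf_edge_poor
# measured_poor <= measured_good at every frequency: the edge caps the result
```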

Reconstituting the Continuous Intensity Profile

The advantage of the slanted-edge method is that it effectively super-samples the edge by the number of pixels along it.  Assuming the edge is perfectly straight  – not a given in some recent camera formats that rely on software to correct for poor lens distortion characteristics – if it is 200 pixels long then the edge profile is over-sampled two hundred times with great benefits in terms of effectively cancelling quantization and reducing the impact of noise and aliasing on spatial resolution measurement.  With the proper setup, this gets around the fact that a digital sensor is typically not space invariant  because of its edgy pixels and rectangular sampling grid.
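A minimal sketch of this super-sampling idea on a synthetic edge (all numbers invented): because of the slant, every pixel center sits at a different signed distance from the edge, so projecting pixels onto the edge normal samples the profile far more finely than the pixel pitch.

```python
import numpy as np

# Synthetic 100x100 raw crop with an edge slanted ~5 degrees off vertical;
# intensities and noise level are invented for illustration.
rng = np.random.default_rng(0)
theta = np.deg2rad(5.0)
normal = np.array([np.cos(theta), np.sin(theta)])      # unit edge normal
yy, xx = np.mgrid[0:100, 0:100]

# Signed distance of each pixel center from an edge line through (50, 50)
dist = (xx - 50) * normal[0] + (yy - 50) * normal[1]
img = np.where(dist < 0, 0.1, 0.9) + rng.normal(0, 0.01, (100, 100))

# Because of the slant, pixel centers fall at (almost) all distinct sub-pixel
# distances: sorting by distance super-samples the edge profile.
order = np.argsort(dist.ravel())
d_sorted = dist.ravel()[order]                         # ~10,000 samples
esf_samples = img.ravel()[order]                       # noisy, super-sampled ESF
```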

Figure 3. Green channel raw data from the captured edge image projected onto the normal axis. The Edge Spread Function (orange line) is constructed based on this noisy profile, which can be considered to be continuous. Note the higher standard deviation with higher intensity, the result of shot noise and photon response non uniformities. This edge was 152 pixels long at an angle of 4.286 degrees.

There are many ways to construct the ESF from projected raw data (see the answer to Julio in the comments below for one based on constrained regression, resulting in the orange curve in Figure 3).  MTF Mapper collects it into bins 1/8th of a pixel wide, leaving plenty of latitude for later calculations in a photographic context.  Binning has the effect of regularizing the data while providing initial noise reduction.  The physical units on the normal axis are pixels – here short for pixel pitch, which is the horizontal or vertical distance between the centers of two contiguous pixels.
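Binning can be sketched as follows (synthetic projected samples, not MTF Mapper's actual algorithm): each sample is dropped into a 1/8-pixel-wide bin according to its signed distance from the edge, and samples within a bin are averaged.

```python
import numpy as np

# Hypothetical projected samples, as in Figure 3: signed distance from the
# edge (in pixels) and corresponding raw intensity.
rng = np.random.default_rng(1)
dist = rng.uniform(-10, 10, 20000)
vals = np.where(dist < 0, 0.1, 0.9) + rng.normal(0, 0.02, dist.size)

bin_width = 1 / 8                                  # pixels
edges = np.arange(-10, 10 + bin_width, bin_width)  # bin boundaries
idx = np.digitize(dist, edges) - 1                 # bin index of each sample

counts = np.bincount(idx, minlength=len(edges) - 1)
sums = np.bincount(idx, weights=vals, minlength=len(edges) - 1)
esf = sums / np.maximum(counts, 1)                 # binned ESF (mean per bin)
centers = edges[:-1] + bin_width / 2               # bin centers, pixels
```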

Assuming a perfect edge of good contrast, no distortion, low noise and excellent capture technique, after some massaging the resulting Edge Spread Function can be considered for all intents and purposes a one-dimensional representation of the profile of the continuous light intensity about the edge after it has gone through optics and filters, as detected by a very small pixel aperture usually assumed to be squarish.  Since for, say, an ideally printed and illuminated edge the transition from black to white on paper is a step function centered at zero pixels below, any degradation in the ESF from this ideal can be ascribed to loss of sharpness due to the imaging system as set up:

Figure 4. Deviation from the ideal edge profile due to an imperfect imaging system (and target).

As mentioned, a printed edge cannot approximate a perfect step function in most situations, so a back-lit utility knife edge is better.

From ESF to PSF Intensity Profile to MTF

The differential of the ESF so obtained is the Line Spread Function, which is directly proportional to the intensity profile of the system’s 2D Point Spread Function in the direction perpendicular to the edge/line.  Yes, that’s approximately the projection of the impulse response of the imaging system in that one direction, aka what the two-dimensional continuous intensity profile of a star would look like if viewed from the side.

Figure 5. The Line Spread Function (LSF) is computed as the differential of the measured Edge Spread Function (ESF). The LSF is the 1D projection of the 2D Point Spread Function (PSF) of the system as a whole onto the edge normal.  Curves produced by MTF Mapper.

If the imaging system and the edge were perfect the LSF would be an infinitely narrow vertical pulse at zero pixels (a delta function) – but in practice they never are, so the pulse is instead spread out as shown in Figure 5.

Mathematically, the LSF represents an approximate instance of the Radon Transform of the two dimensional system PSF onto the edge normal.  Note that the ‘bright’ side of the LSF is noisier than the ‘dark’ side in this base ISO capture as a result of shot noise and Photo Response Non Uniformities.  Differentiation amplifies noise.

By taking the magnitude (the quadrature sum of the real and imaginary parts) of the Fourier Transform of the one-dimensional LSF we are able to determine fairly accurately* the combined Modulation Transfer Function of the edge, camera and lens at the position of the edge in the direction perpendicular to it.  The resulting curve tells us how good our equipment is at capturing detail of increasingly small size in that one direction.
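The last two steps take only a few lines of code.  A numpy sketch, with a synthetic Gaussian-blurred edge standing in for a measured ESF binned every 1/8th of a pixel:

```python
import numpy as np
from math import erf

step = 1 / 8                                      # ESF sample spacing, pixels
x = np.arange(-16, 16, step)
sigma = 0.7                                       # assumed blur, pixels
esf = np.array([0.5 * (1 + erf(xi / (sigma * np.sqrt(2)))) for xi in x])

lsf = np.gradient(esf, step)                      # differentiate ESF -> LSF
mtf = np.abs(np.fft.rfft(lsf))                    # magnitude of the FT
mtf /= mtf[0]                                     # normalize to 1 at DC
freq = np.fft.rfftfreq(lsf.size, d=step)          # cycles/pixel; Nyquist = 0.5

mtf50 = freq[np.argmax(mtf < 0.5)]                # first crossing below 50%
```

The analytic MTF50 of a Gaussian LSF with sigma = 0.7 pixels is about 0.27 cycles/pixel, which the sketch recovers to within its frequency resolution.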

Mathematically we have applied the Fourier-Slice Theorem to the projection of the system’s PSF obtaining a radial slice through the two-dimensional MTF of the two-dimensional PSF (see the answer to Daniel’s question below for a little more detail on this interesting bit of theory).

Here is MTF Mapper’s output, the SFR computed from the green raw channels of a capture of a centrally located back-lit razor edge 4.61 degrees off vertical, therefore representative of spatial frequency performance in an approximately easterly direction (the 24MP FF camera used for the capture is effectively AAless in this direction in landscape orientation):

Figure 6. MTF/SFR produced by MTF Mapper from a back-lit razor edge captured by a Nikon D610 mounting a 50mm/1.8D at ISO 100 and f/5.6.

In this case the blue MTF curve of the raw green channels of a Nikon D610 mounting a 50mm/1.8D @ f/5.6 indicates that system performance is excellent for a 24MP sensor, with good contrast transfer at higher frequencies and a reference MTF50 value of about 1590 lp/ph (or 66.3 lp/mm – see this article for an explanation of how the units used in spatial resolution are related).

On the other hand the curve hits the grayscale Nyquist frequency with a lot of energy still, at an MTF of around 0.37 (aka MTF37), which does not bode well for aliasing in very fine detail in this direction.  The grayscale Nyquist limit refers to the maximum number of monochrome line pairs that can be represented accurately by the sensor, in this case about 2020 since the D610 has twice that many pixels on the short side.

Super-Sampling Lets us See Far

We are able to measure the imaging system’s frequency response above the Nyquist frequency because of the method’s super-sampling, which allows us to take a much closer look at an approximation of the continuous PSF profile of the system on the imaging plane than possible for example with visual charts.  For instance we can see the null caused by pixel aperture (width) just above 4000 lp/ph or about  170 lp/mm.
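That null is the first zero of the pixel aperture's own MTF: a square aperture of width w has MTF |sinc(w·f)|, which first reaches zero at f = 1/w.  A quick check with the D610's approximately 5.97 µm pitch (24 mm sensor height over 4016 pixels, assuming a 100% effective fill factor):

```python
import numpy as np

pitch_mm = 24.0 / 4016            # ~5.97 um: 24 mm sensor height / 4016 pixels
null_lp_mm = 1 / pitch_mm         # first zero of |sinc(pitch * f)|, ~167 lp/mm
null_lp_ph = null_lp_mm * 24.0    # ~4016 lp/ph, just above 4000 as in Figure 6

f = np.linspace(0, 200, 2001)                # lp/mm
mtf_pixel = np.abs(np.sinc(pitch_mm * f))    # np.sinc(x) is sin(pi*x)/(pi*x)
```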

You can compare this MTF curve and reference values directly with those obtained by others with similarly good technique straight from raw green channels – a hard feat though, given the number of variables involved – to make an informed call as to which system is ‘sharper’ or whether an old lens has lost its touch.  With great care the method can provide absolute results but in most cases it is instead used to obtain relative results, with only one variable in the process varied at a time (for example  just the lens).  Keep in mind that in typical situations a difference of 10% in MTF50 values is noticeable, but a trained eye is needed to perceive a difference of 5% even when pixel peeping.

So how do we use MTF Mapper to obtain MTF curves?  Next.

*Slanted edge MTF results obtained with excellent technique should be comparable to those obtained with other methods.

25 thoughts on “The Slanted Edge Method”

          1. Hello Larry,

            Yes, sfrmat5 does appear to have a slight lower bias, probably some improperly handled filtering. I also find it hard to use when looking at a single MTF level because of its noisiness, a feature inherited from previous versions.

            Here is an example with a very easy, long edge of known parameters without any distortion or gradients:
            sfrmat5 vs MTF Mapper vs Truth
            The blue curve is the actual, known MTF. It is compared to MTF Mapper in an advanced mode and my own minimalist spline regression algorithm: they both seem to better track the true curve.

            Here is another edge in higher resolution, generated with known parameters (those of a Nikon Z7+24-70/4) by ‘mtf_generate_rectangle --b16 -d 300 -a 4 --adc-depth 14 --adc-gain 4 --pattern-noise 0.0043 --read-noise 5.4 -p wavefront-box --w020 -0.55 --w040 0.55 --lambda 0.535 --pixel-pitch 4.35 --aperture 5.6 --fill-factor 0.75’, where ‘mtf_mapper -eq --single-roi --nosmoothing’ and spline regression follow closely the ideal (not shown) but sfrmat5 hangs low:
            sfrmat5 vs MTF Mapper and spline regression - low bias

            Jack

  1. How do you pass from the edge projected onto the normal (many intensity values per pixel) to the ESF? I’ve been reading ISO 12233 and Frans’ blog but I don’t understand this step.

    1. Hi Julio,

      Many ways to pass from the projected raw data of Figure 3 to the ESF shown in Figure 5. The simplest is to average values within the chosen bin width (1/4 or 1/8 of a pixel typically). The result is then regularized but tends to be a bit noisy with typical slanted edge captures.

      There are more sophisticated methods, however. Recently I have been liking a monotonic regression algorithm by Kruskal that presumes a monotonic result (a good assumption with raw data, right?). It works really well. MTF Mapper is very clever though and produces even cleaner results.

      Ideally I would like to see a fast isotonic regression that minimizes error orthogonally to the resulting curve, as opposed to vertical least squares. If anybody has any suggestions I’d be happy to give them a try.
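For reference, the classic pool-adjacent-violators algorithm behind this kind of monotone (isotonic) least-squares fit can be sketched in a few lines of numpy (this is not MTF Mapper's method):

```python
import numpy as np

def pava(y):
    """Pool Adjacent Violators: least-squares non-decreasing fit to y."""
    blocks = []                       # [mean, size] of each monotone block
    for v in np.asarray(y, dtype=float):
        blocks.append([v, 1.0])
        # merge backwards while the ordering is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2 = blocks.pop()
            m1, w1 = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    return np.concatenate([np.full(int(w), m) for m, w in blocks])

# e.g. pava([1, 3, 2, 4]) pools the violating pair 3, 2 into their mean 2.5
```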

      Jack

      1. Hello Jack,

        Thanks for your fast reply! 🙂

        Oh, that’s exactly what I did (average values) and it worked OK. In case I need something more sophisticated in the future, I’ll refer to Jan de Leeuw’s site, it looks interesting, and yes, one could say that ESF will be monotonic generally.

        Isotonic regression sounds like a good idea too! In case I come up with something, I’ll let you know!

        Julio

      2. Hi Jack! I implemented a RANSAC fit to the ERF function. I figured that if the LSF perpendicular to the edge is expected to have a Gaussian profile, it makes sense to fit the ESF to its primitive. The ERF is obviously monotonic so this is taken care of. The data can be noisy (mine is) and I take care of this using RANSAC. Having said all of this I admit this doesn’t work perfectly for my (low-res high-sensitivity) sensor, which is not a typical camera, and I suspect this might be related to nonlinearity of the sensor. I do think this method can be a good and simple candidate for regular cameras.

        1. Interesting approach Oren, I wasn’t aware of RANSAC, good idea. Have you tried comparing it to MTF Mapper or Kruskal?

          Two thoughts come to mind on a quick wikipedia read of RANSAC. The first is that it presumes there is a valid model for inliers while outliers do not contribute to it. This is not necessarily the case with the raw ESF where shot noise, which tends to dominate the ‘outliers’, is part of the model – so it is possible that RANSAC would not be using all useful information from a capture. The second is that an Error Function is symmetrical by definition – while the ESF as captured by an imager often isn’t because of aberrations and its convolution with a de-centered edgy pixel aperture on a slant. But perhaps I do not understand how you worked the ERF into the process.

          I have gotten best results by using monotonic cubic spline regression after a suggestion by de Leeuw of the Kruskal link above. I’ll send you an email with some code. Even then I believe it is only as good as MTF Mapper, the real champ.

  2. This is a good explanation of how to estimate a 1D slice through the 2D MTF in a particular direction, but isn’t it making an implicit assumption that the MTF around a particular point on the image is 2D rotationally invariant about that point?

    In other words, should we NOT be applying this technique to images of the edge at different slant angles passing through the image point of interest, so we can at least check that the MTF is the same in all directions at that point?

    1. Hi Daniel, good question. There is no presumption of rotational symmetry for the 2D PSF or MTF with the slanted edge method. In fact, asymmetry is pretty well guaranteed by the contribution to the PSF and MTF of typically non-circular pixel apertures.

      One can collapse (project) a non isotropic 2D PSF at different angles, thus obtaining different 1D LSFs, each of which will produce a different slice from the 2D MTF of the 2D PSF – valid in the direction relative to the chosen angle. Such is the beauty of Radon Transforms combined with the Fourier Slice theorem (see for instance Gonzalez and Woods).

      With the slanted edge method, the slant of the edge fixes the angle of projection. One can capture the edge several times at increasing angles of rotation about its center, thus obtaining an LSF in each direction. An approximation of the 2D MTF can then be obtained by superimposing each corresponding 1D MTF slice in the respective direction. An approximation of the 2D PSF can be obtained by applying the inverse Radon Transform to a complete series of LSFs (i.e. through 180 degrees). This, by the way, is how CAT scans and MRIs work.

      Jack

  3. Quick comment on the diagram from Frans van den Bergh / MTF Mapper; it skips the differentiation step between edge aggregation and the Fourier Transform. Might confuse people who read the directions and then see that diagram and it’s not there.

  4. Hey, thank you very much for your effort. I have learned a lot and we are employing this method now in our production. However I am wondering what you would recommend as a sufficient method to produce the slanted edge target?

    Is a professional print on white paper really enough? Best wishes, René from Cubert.

    1. Hello René,

      It depends on your application: the lower the quality of the edge, the less latitude you have in measurement. For lens testing sites I see that professionally printed A0 charts seem to be sufficient, different sizes for different focal lengths. The better ones went to backlit charts; you may want to check out articles on this subject at lensrentals.com or on Frans van den Bergh’s blog.

      For absolute, repeatable results there is nothing better than a backlit utility knife or shaving blade. But if you are trying to test several spots simultaneously keeping them positioned and aligned properly can be challenging. For my purposes I just use one utility knife edge which I then move around the field of view. Focus stacking is almost mandatory.

      Jack

  5. Hi Jack,
    I really appreciate your time in writing this article. What is the best approach to calculate the 2D PSF from the slanted edge method: 1. calculating the 2D MTF from the LSF and applying the inverse FFT, or 2. calculating the 2D PSF by rotating the LSF, assuming rotational symmetry?

    1. Hi Damber, good question. The LSF is a 1D function so it will provide a 1D MTF, which corresponds to the slice of the 2D MTF in the direction perpendicular to the edge.

      So for Method 1, assuming rotational symmetry, you could rotate the 1D MTF slice to get the 2D MTF and then take its inverse Fourier transform for a putative 2D PSF. Keep in mind that, since the MTF is only a magnitude, this method will lose any phase information in the round trip (this may or may not be relevant for your application).

      Method 2 doesn’t work as-is, I believe, because the LSF is not a 1D slice of the 2D PSF, it is a projection in 1D of the full 2D PSF – so every sample in the LSF contains all intensities from the 2D PSF along the respective normal.

      Therefore the way to achieve what you are getting at is to apply the inverse Radon Transform (parallel-beam filtered back projection), see my answer to Daniel Lyddy a few comments up: for rotational symmetry, assume the same LSF rotated about its center every 5 degrees or so. This is the way I would do it.
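A minimal numerical sketch of Method 1 under the rotational-symmetry assumption (the ‘measured’ 1D MTF slice below is a made-up Gaussian):

```python
import numpy as np

n = 128
f = np.fft.fftfreq(n)                    # frequency axis, cycles/sample
fx, fy = np.meshgrid(f, f)
fr = np.hypot(fx, fy)                    # radial spatial frequency

# Stand-in for a measured 1D MTF slice, here a Gaussian; evaluating it at
# the radial frequency 'rotates' the slice into an isotropic 2D MTF.
mtf_2d = np.exp(-((fr / 0.15) ** 2))

# Inverse transform for a putative 2D PSF; phase is lost, so this is only
# exact for a symmetric, zero-phase system.
psf = np.fft.fftshift(np.fft.ifft2(mtf_2d).real)
# psf peaks at the center and is rotationally symmetric by construction
```

For a non-symmetric PSF the filtered back projection route above is needed instead.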

      Jack

  6. Hi,
    how would you explain the shape of the MTF? Why isn’t it linear? What are the effects that are responsible for it?
    Anne

    1. Hello Anne,

      The MTF curve obtained with the slanted edge method is the magnitude of the Fourier Transform of the Line Spread Function as described in this article. It represents the cascaded frequency response of every hardware component in the imaging system, each of which has its own signature, in the direction perpendicular to the edge.

      In digital photography, at its simplest there are at least two components: a lens with a circular aperture and a sensor with square pixels. The MTF of neither component is a straight line, even if the lens is aberration free and the pixels are otherwise perfect. You can read more about the MTF of typical simple components in cameras in the series of articles around here:

      https://www.strollswithmydog.com/resolution-model-digital-cameras-ii/

      https://www.strollswithmydog.com/resolution-model-digital-cameras-aa/

      https://www.strollswithmydog.com/simple-model-for-sharpness-in-digital-cameras-spherical-aberration/
