Monday, 18 June 2012

Combining box filters, AA filters and diffraction: Do I need an AA filter?

I have been building up to this post for some time now, so it should not surprise you too much. What happens when we string together the various components in the image formation chain?

Specifically, what happens when we combine the square pixel aperture, the sensor OLPF (based on a 4-dot beam splitter) and the Airy function (representing diffraction)? First off, this is what the MTF curves of our contestants look like:
The solid black curve represents a combined sensor OLPF (4-dot beam splitter type) + pixel aperture + lens MTF (diffraction only) model. This was recently shown to be a good fit for the D40 and D7000 sensors. The dashed blue curve represents the MTF of the square pixel aperture (plus diffraction), i.e., a box filter as wide as the pixel. The dashed red curve illustrates what a Gaussian MTF (plus diffraction) would look like, fitted to have an MTF50 value that is comparable to the OLPF model. Lastly, the solid vertical grey line illustrates the remaining contrast at a frequency of 0.65 cycles per pixel, which is well above the Nyquist limit at 0.5 cycles per pixel (dashed vertical grey line).

Note how both the Gaussian and the OLPF model have low contrast values at 0.65 cycles per pixel (0.04 and 0.02, respectively), while the square pixel aperture + lens MTF, representing a sensor without an AA filter, still has a contrast value of 0.27. It is generally accepted that patterns at a contrast below 0.1 are not really visible in photos. That illustrates how the OLPF successfully attenuates the frequencies above Nyquist, but how does this look in a photo?
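If you want to check these contrast figures yourself, the AA-filterless case is just a two-factor product. The sketch below assumes a D7000-like pixel pitch of 4.73 micron and an aperture of f/4 (my assumptions; the post does not state the exact parameters behind the plotted curves), and lands on the 0.27 figure:

```python
import numpy as np

lam, pitch, N, f = 0.55, 4.73, 4.0, 0.65   # wavelength/pitch in micron, f in cycles/pixel

def chat(s):
    # Diffraction MTF of a circular aperture, normalized frequency s in [0, 1]
    s = min(max(s, 0.0), 1.0)
    return (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s * s))

pixel = abs(np.sinc(f))                     # np.sinc(x) = sin(pi*x)/(pi*x): square pixel aperture MTF
print(pixel * chat((lam / pitch) * N * f))  # ~0.27, the "no AA filter" contrast at 0.65 c/p
```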

Ok, but how would it affect my photos visually?

I will now present some synthetic images to illustrate how much (or little) anti-aliasing we obtain at various apertures, both with and without an AA filter. The images will look like this:



The left panel is a stack of four sub-images (rows) separated by white horizontal bars. Each sub-image is simply a pattern of black-and-white bars, with both black and white bars being exactly 5 pixels wide (in this example). The four stacked sub-images differ only in phase, i.e., in each of the four rows the black-and-white pattern of bars is offset horizontally by a distance between 0 and 1 pixel.

The right panel is a 2x magnification of the left panel. Note that the third row in the stack is nice and crisp, containing almost pure black and pure white. The other rows have some grey values at the transition between the black and white bars, because the image has been rendered without any anti-aliasing.
These images are rendered by sampling each pixel at 2362369 sub-pixel positions, weighting each sampled point with the relevant point spread function.
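In outline, the renderer works like a supersampled convolution. Here is a minimal one-dimensional sketch of the idea; the Gaussian PSF and the modest sample count are my stand-ins, not the actual PSF or the sampling scheme used for the images in this post:

```python
import numpy as np

def bar_pattern(x, period=10.0, phase=0.0):
    # Scene: black/white bars, 5 pixels each when period = 10; 'phase' shifts
    # the whole pattern by a fraction of a pixel, as in the four rows above.
    return ((x + phase) % period < period / 2).astype(float)

def render_row(n_pixels=64, n_samples=2001, sigma=0.7, phase=0.0):
    # Each output pixel is a PSF-weighted average of sub-pixel scene samples;
    # the Gaussian here is assumed to already include the pixel aperture.
    u = np.linspace(-4 * sigma, 4 * sigma, n_samples)
    w = np.exp(-0.5 * (u / sigma) ** 2)
    w /= w.sum()
    return np.array([np.sum(w * bar_pattern(p - u, phase=phase))
                     for p in range(n_pixels)])

rows = [render_row(phase=ph) for ph in (0.0, 0.25, 0.5, 0.75)]
```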

The aliasing phenomenon known as frequency folding was illustrated in a previous post. When a scene contains patterns at a frequency exceeding the Nyquist limit (the highest frequency representable in the final image), the patterns alias, i.e., the frequencies above Nyquist appear as patterns below the Nyquist limit, and are in fact indistinguishable from real image content at that frequency. Here is a relevant example, illustrating how a frequency of 0.65 cycles per pixel (cycle length of 1.538 pixels) aliases onto a frequency of 0.35 cycles per pixel (cycle length of 2.857 pixels) if no AA filter is present:
 
This set was generated at a simulated aperture of f/1.4, which does not attenuate the high frequencies much. Observe how the two images in the "No OLPF" column look virtually the same, except for a slight contrast difference; it is not possible to tell from the image whether the original scene contained a pattern at 1.538 pixels per cycle, or 2.857 pixels per cycle.
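The folding arithmetic behind this example takes one line to verify: at one sample per pixel, any frequency f between 0.5 and 1 cycles per pixel lands on 1 - f.

```python
def folded(f):
    # Sampling at 1 sample/pixel: frequencies above Nyquist (0.5 c/p) fold to 1 - f
    return 1.0 - f if 0.5 < f < 1.0 else f

print(folded(0.65))          # 0.35 cycles per pixel
print(1 / 0.65, 1 / 0.35)    # 1.538 and 2.857 pixels per cycle, as above
```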

The "4-dot OLPF" column shows a clear difference between these two cases. If you look closely you will see some faint stripes in the 2x magnified version at 1.538 pixels per cycle, i.e., the OLPF did not completely suppress the pattern, but attenuated it strongly.

If we repeat the experiment at f/4, we obtain this image:
At f/4, we do not really see anything different compared to the f/1.4 images, except an overall decrease in contrast in all the panels.

Ok, rinse & repeat at f/8:
Now we can see the contrast in the "No OLPF" column, at 1.538 pixels per cycle, dropping noticeably. Diffraction is acting as a natural AA filter, effectively attenuating the frequencies above Nyquist.

Finally, at f/11 we see some strong attenuation in the sensor without the AA filter too:
You can still see some clear stripes (top left panel) in the 2x magnified view, but in the original size sub-panel the stripes are almost imperceptible.

Conclusion

So there you have it. A sensor without an AA filter can only really attain a significant increase in resolution at large apertures, where diffraction is not attenuating the contrast at higher frequencies too strongly. Think f/5.6 or larger apertures.

Unfortunately, this is exactly the aperture range in which aliasing is clearly visible, as shown above. In other words, if you have something like a D800E, you can avoid aliasing by stopping down to f/8 or smaller, but at those apertures your resolution will be closer to that of the D800. At apertures of f/5.6 and larger, you may experience aliasing, but you are also likely to have better sharpness than the D800.

Not an easy choice to make.

Personally, I would take the sensor with the AA filter.

Nikon D40 and D7000 AA filter MTF revisited

In an earlier post, I showed some early MTF measurements taken with both the D40 and the D7000, at the centre of the frame, using a Sigma 17-50 mm f/2.8 lens.
In this post, I revisit those measurements, presenting some new results for the D7000, and a model of the sensor AA filter (or OLPF).

Optical Low Pass Filters (OLPFs) in DSLRs

I have noticed previously that the measured MTF of the D40 was fairly close to a Gaussian, but that there were some small systematic discrepancies that were not accounted for. But how do you build an optical filter that has a Gaussian transfer function?

It turns out that the OLPFs in DSLRs are not really Gaussian, but something much simpler: beam splitters. A slice of Lithium Niobate crystal, cut at a specific angle, presents a different index of refraction depending on the polarization of the incident light. Let us assume, for the sake of argument, that a beam of horizontally polarized light passes straight through. Vertically polarized light, on the other hand, refracts (bends) as it enters the crystal, effectively taking a longer path through the crystal. As the vertically polarized light leaves the crystal, it refracts again to form a beam parallel to the horizontally polarized beam, but displaced sideways by a distance dependent on the thickness of the crystal.

Using a single layer of Lithium Niobate crystal, you can split a single beam into two parallel beams separated by a distance d, which is typically chosen to match the pixel pitch of the sensor. Since this happens for all beams, the image leaving the OLPF is the sum (average) of the incoming image and a shifted version of itself, translated by exactly one pixel pitch.

If you stack two of these filters, with the second one rotated through 90 degrees, you effectively split a beam into four, forming a square with sides equal to the pixel pitch (but often slightly less than the pitch, to improve resolution). A circular polariser is usually inserted between the two Lithium Niobate layers to "reset" the polarisation of the light before it enters the second Niobate layer.

Combining pixel aperture MTF with the OLPF MTF

So how does this beam-splitter effect the desired blurring? We can compute the combined PSF by convolving the beam-splitter impulse response with the pixel aperture impulse response (a box function).

The beam splitter is represented as four impulses, i.e., infinitely thin points. Basic Fourier transform theory tells us that the Fourier transform of a symmetric pair of impulses is just a cosine function, so the MTF of the 4-dot beam splitter is a product of cosines, one factor along each axis.

The "default" 4-way beam splitter impulse response filter can be denoted as the sum of four diagonally-placed impulses, i.e.,
f(x,y) = δ(x-d, y-d) + δ(x+d, y+d) + δ(x-d, y+d) + δ(x+d, y-d)
where δ(x,y) represents a 2D Dirac delta function, which is non-zero only if both x and y are zero, and d represents the OLPF split distance. More sophisticated OLPF designs are possible (e.g., 8-way splitters), but the 4-way designs appear to be popular. In my notation here, the distance between two beams would be 2d; this is to accommodate my usual notation of a pixel being defined over the area [-0.5, -0.5] to [0.5, 0.5].


The degree of blur is controlled by the d parameter, with d = 0 yielding no blurring, and d = 0.5 giving us a two-pixel-wide blur. Since d can be varied by controlling the thickness of the Lithium Niobate layers, a manufacturer can fine-tune the strength of the OLPF for a given sensor.
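Along either axis, the MTF of the splitter on its own is therefore just a cosine in frequency, which makes the effect of d easy to explore; a small sketch, using the conventions above:

```python
import numpy as np

def olpf_mtf(f, d):
    # Per-axis MTF of the beam splitter: the FT of delta(x-d) + delta(x+d)
    # is proportional to cos(2*pi*f*d). f in cycles/pixel, d in pixels.
    return np.abs(np.cos(2 * np.pi * f * d))

for d in (0.25, 0.375, 0.5):
    print(d, olpf_mtf(0.5, d), olpf_mtf(0.65, d))
# d = 0.5 puts the first null exactly at Nyquist (f = 0.5); d = 0.375 keeps more
# contrast below Nyquist at the cost of passing a little more energy above it.
```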

Convolving a square pixel aperture with four impulse functions is straightforward: just sum four copies of the box filter PSF, each shifted by d in the required direction. For d = 0.375, we obtain the following PSF:

in two dimensions, or
in three dimensions. Not exactly a smooth PSF, but then neither is the square pixel aperture.
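The same construction is easy to reproduce on a discrete grid, if you want to inspect the PSF numerically; the grid resolution below is an arbitrary choice of mine:

```python
import numpy as np
from scipy.signal import convolve2d

res = 64                                   # samples per pixel (illustrative)
d = int(0.375 * res)                       # OLPF split distance in samples

box = np.ones((res, res))                  # square pixel aperture, 1 pixel wide
box /= box.sum()

n = 2 * d + res + 1                        # canvas wide enough for all 4 impulses
splitter = np.zeros((n, n))
c = n // 2
for sx in (-d, d):
    for sy in (-d, d):
        splitter[c + sy, c + sx] = 0.25    # four equal, diagonally-placed beams

psf = convolve2d(splitter, box)            # = sum of four shifted copies of the box
```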

Simulating the combined effects of diffraction, OLPF and pixel aperture

We can derive the combined PSF directly by convolving the diffraction, OLPF and pixel aperture PSFs. Note that this combined PSF is parameterized by wavelength, lens aperture, pixel pitch, and OLPF split distance. For a wavelength of 0.55 micron, an aperture of f/4, a pixel pitch of 4.73 micron and an OLPF split distance of 0.375 pixels (=2.696 micron), we obtain the following PSF:
in two dimensions, or
in three dimensions. Notice how diffraction has smoothed out the combined OLPF and pixel aperture PSF (from previous section).
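Since convolution of the PSFs is equivalent to multiplication of the MTFs, the combined system is easiest to sketch in the frequency domain. Below is a minimal version using the parameters just listed; writing the 4-dot OLPF as a per-axis cosine is my reading of the model:

```python
import numpy as np

lam, N, pitch, d = 0.55, 4.0, 4.73, 0.375  # micron, f-number, micron, pixels

def chat(s):
    # Diffraction MTF of a circular aperture
    s = np.clip(s, 0.0, 1.0)
    return (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s * s))

def system_mtf(f):
    # f in cycles per pixel, along one axis
    pixel = np.abs(np.sinc(f))             # square pixel aperture
    olpf = np.abs(np.cos(2 * np.pi * f * d))
    diffraction = chat((lam / pitch) * N * f)
    return pixel * olpf * diffraction

f = np.linspace(0.0, 1.0, 201)
mtf = system_mtf(f)                        # ready to plot, or to hunt for MTF50
```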



Using this PSF we can generate synthetic images of a step edge. The MTF of this synthetic edge image can then be compared to a real image captured with a given sensor to see how well this model holds up.

Enter the Death Star

You can certainly measure lens or sensor sharpness (MTF) using any old chart that you printed on office paper, but if you are after resolution records, you will have to take a more rigorous approach. One of the simplest ways of obtaining a reasonably good slanted edge target is to blacken a razor blade with candle soot. This gives you a very straight edge, and very good contrast, since the soot does not reflect much light.

That leaves only two other questions: what do you use as a background against which you will capture the razor blade, and how do you illuminate your target?

Previously, I used an SB600 speedlight to illuminate my razor blade, which was mounted on a high-grade white paper backing. This produced reasonably good results, but it did not maximize contrast because the flash was lighting the scene from the front. There is also a possibility that the D7000 cycles its mirror when using a flash in live view mode, which could lead to mirror slap. So the flash had to go.

In its place I used a home-made integrating sphere, which I call the "Death Star" (sorry George, please do not sue me). Here is what it looks like with the razor blade mounted over the integrating sphere's port:
Yes, I know the highlight is blown out ....
Why use an integrating sphere? Well, the integrating sphere produces perfectly uniform diffuse lighting, which is ideal for creating a uniform white backdrop for the razor blade.

In addition, my home-made integrating sphere produces enough light to require a shutter speed of 1/200 s at f/4, ISO 100, to prevent blowing out the highlights. This is perfect for minimizing the influence of vibration (although higher shutter speeds would be even better).

Using this set-up, I then capture a number of shots focused in live view, using a remote shutter release. It appears that with this target, the AF is really accurate, since I could not really do much better with manual focus bracketing. One day I will get a focusing rail, which will make focus bracketing much simpler.

Lastly, all MTF measurements on the razor edge were performed using MTF Mapper. The "--bayer green" option was used to measure MTF using only the green photosites, thus avoiding any problems with Bayer demosaicing. The raw files were converted using dcraw's "-D" option.

D40 MTF and OLPF model

Here is the MTF plot of a D40 razor image captured at f/4 (manually focus bracketed), Bayer green channel only:

D40 green channel MTF, Sigma 17-50 mm f/2.8 at f/4
The D40's PSF was modelled by convolving the square pixel aperture (7.8 micron wide), the 4-point beam splitter (d=0.375 pixels), and the Airy function (f/4). A synthetic image of a step edge with this PSF was generated, and measured using MTF Mapper.

Purely judging the match between the model and the measured MTF by eye, one would have to conclude that the model captures the interesting parts of the MTF rather well. The measured MTF is slightly below the model, which is most likely caused by a smidgen of defocus.

The fact that the model fits so well could also be taken to imply that the Sigma 17-50 mm f/2.8 lens is relatively aberration-free at f/4, i.e., it is diffraction limited in the centre.

MTF50 resolution came in at 0.334 cycles per pixel, or 42.84 line pairs per mm, or 671 line pairs per picture height.

D7000 MTF and OLPF model

Here is the MTF plot of a D7000 razor image captured at f/4 (AF in live view), Bayer green channel only:

D7000 green channel MTF, Sigma 17-50 mm f/2.8 at f/4
The D7000's PSF was modelled by convolving the square pixel aperture, the 4-point beam splitter (d=0.375 pixels), and the Airy function (f/4). A synthetic image of a step edge with this PSF was generated, and measured using MTF Mapper.

The measured MTF does not fit the model MTF quite as well as it did in the D40's case. Given that the physical linear resolution is 60% higher, it is correspondingly harder to obtain optimal focus. The shape of the measured MTF relative to the model MTF is consistent with defocus blur.


The actual resolution figures are impressive: MTF50 is 0.304 cycles per pixel, or about 64 lp/mm, or equivalently, 992 line pairs per picture height.
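The unit conversions are worth spelling out once (3264 is the D7000's image height in pixels):

```python
mtf50_cp = 0.304                 # MTF50 in cycles per pixel
pitch_mm = 4.73e-3               # D7000 pixel pitch in mm
print(mtf50_cp / pitch_mm)       # ~64 lp/mm
print(mtf50_cp * 3264)           # ~992 line pairs per picture height
```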

If the model is indeed accurate, it would mean that the D7000 can theoretically obtain resolution figures close to 68 lp/mm at f/4 in the green channel, provided that the lens is purely diffraction limited, and perfectly focused.

Summary

Perhaps this is not such a surprising result, but it appears that Nikon is using the same relative strength of AA filter in both the D40 and the D7000; this can be deduced from the fact that both the D40 and the D7000 OLPF models fitted best with an OLPF split distance of 0.375 pixels.


The somewhat unexpected result, for me at least, was that the MTF shape is so sensitive to perfect focus. Specifically, it seems that the first zero of the MTF curve, at around 0.6875 cycles per pixel, is not readily visible unless focus is perfect. The zero is quite clear in the D40 curve, but not quite so visible in the D7000 curve. You are extremely unlikely to achieve this kind of focus in real world photography, though.


Wednesday, 6 June 2012

D800E versus diffraction

In a previous post (here), I have illustrated how diffraction through a circular aperture can be modelled either in the spatial domain as a point spread function (PSF), or in the frequency domain as a modulation transfer function (MTF). I will now put these models to use to investigate the influence of diffraction on the resolution that can be achieved with the D800E at various apertures.

Simulating the effect of diffraction

I will not go into the maths behind the diffraction MTF; this was discussed in another post (here). For now, it is sufficient to understand that we can combine the diffraction MTF with the sensor's MTF through multiplication in the frequency domain.

Assume for the moment that the D800E effectively does not have an AA filter (in practice, this might not be entirely true, i.e., the D800E may just have a very weak AA filter compared to other cameras). This allows us to model the pixel's MTF curve as a sinc function, as was shown in a previous post. Next, we assume that the lens is diffraction limited, i.e., the other lens aberrations are negligible, and thus the lens MTF is just the diffraction MTF.
For a D800(E) pixel pitch of 4.88 micron, and an aperture of f/8, we obtain the following combined MTF curve:
D800E combined MTF at f/8
The dashed grey curve represents the sensor's MTF, and the black curve represents the diffraction MTF. The blue curve is the product of these two curves, and represents the combined diffraction-and-sensor MTF.
At f/8, our peak MTF50 value will be 0.344 c/p, or 70.4 lp/mm. Note that this is still higher than what I measured on a D7000 at f/5, which peaked at about 0.29 c/p (61 lp/mm), but the D7000 has an AA filter.
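If you want to reproduce these MTF50 figures, the model as described here is simple to code up: the pixel sinc times the diffraction MTF, with MTF50 found by bisection. A sketch (this also generates the f/5.6, f/11 and f/16 figures quoted below):

```python
import numpy as np

lam, pitch = 0.55, 4.88          # green light and D800E pixel pitch, in micron

def chat(s):
    s = np.clip(s, 0.0, 1.0)
    return (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s * s))

def mtf(f, N):
    # combined pixel (sinc) and diffraction (chat) MTF, f in cycles/pixel
    return np.abs(np.sinc(f)) * chat((lam / pitch) * N * f)

def mtf50(N):
    lo, hi = 0.0, 1.0            # mtf is monotonically decreasing on [0, 1]
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mtf(mid, N) > 0.5 else (lo, mid)
    return lo

for N in (5.6, 8, 11, 16):
    f50 = mtf50(N)
    print(N, round(f50, 3), round(f50 / (pitch * 1e-3), 1))  # c/p and lp/mm
```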

Moving to even smaller apertures will cost us resolution, thus at f/11 the curve looks like this:
D800E combined MTF at f/11
At f/11, MTF50 peaks at only 0.278 c/p, or 57 lp/mm. This is still extremely crisp, although you might barely be able to see the difference compared to f/8 under ideal conditions. Pushing through to f/16:
D800E combined MTF at f/16
Note how close the combined MTF curve and the diffraction MTF curve have now become; this indicates that diffraction is starting to dominate the MTF curve, and thus also resolution. At f/16, MTF50 has dropped to 0.207, or about 42.3 lp/mm, which is not bad, but quite far from the 70 lp/mm we achieved at f/8.

What about going in the other direction? Here is what happens at f/5.6:
D800E combined MTF at f/5.6
MTF50 now reaches 0.412 c/p, or 84.4 lp/mm. At f/4 (not shown as a plot) we get 0.465 c/p (95.3 lp/mm), and so on. Below f/4 we will start seeing the residual aberrations of the lens take over, which will reduce effective resolution. I have no model for those yet, so I will stop here for now.

Ok, so I will go one step further. Here is the MTF plot at f/1.4, but keep in mind that for a real lens, other lens aberrations will alter the lens MTF so that it is no longer diffraction limited. But this is what it would have looked like if those aberrations were absent:
D800E combined MTF at f/1.4
Off the charts! MTF50 will sit at 0.557 c/p, or 114.1 lp/mm. The pixel MTF and the combined MTF are now very similar, which is to be expected, since diffraction effects are now almost negligible. Now if only they could build this lens ...

In closing

These results seem to support the suggestions floating around on the web that the D800E will start to visibly lose sharpness after f/8, compared to what it achieves at f/5.6. But this does not mean that f/11 is not sharp, since 57 lp/mm is not something to be sneezed at! Even more importantly, there is no "magical f-stop" after which diffraction causes the resolution to drop; diffraction will lower resolution at all f-stop values. The balance between diffraction blur and blur caused by other lens aberrations tends to cause lens resolution to peak at a certain aperture (around f/4 to f/5.6 for many lenses), but even at f/1.4 you will lose resolution to diffraction, just not a lot.

There are also some claims that the D700 was usable at f/16, but now suddenly the D800E will not be usable at f/16 any more. This is not true. If we compare a hypothetical D700E with our hypothetical D800E above, we see that the D800E attains an MTF50 value of 42.3 lp/mm at f/16, and the hypothetical D700E would reach only 37.2 lp/mm.

The real D700 has an AA filter. If we approximate the strength of this filter as a Gaussian with a standard deviation of 0.6246, then the D700 would only reach an MTF50 of 25.6 lp/mm at f/16. A similar approximation of the AA filter for the D800 would produce an MTF50 of 34.4 lp/mm at f/16. So the D800 (or D800E) will always capture more detail than the D700 at all apertures. The D800E is perfectly usable at f/16, and more so than the D700. 
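For the record, this is the approximation used in the preceding paragraphs: a Gaussian AA filter MTF multiplied by the diffraction MTF, with no separate pixel aperture term. The 8.45 micron D700 pitch is my assumption, derived from its sensor geometry; the result lands very close to the 0.216 c/p (25.6 lp/mm) quoted below.

```python
import numpy as np

lam = 0.55                                  # green light, micron

def chat(s):
    s = np.clip(s, 0.0, 1.0)
    return (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s * s))

def mtf(f, N, pitch, sigma):
    gauss = np.exp(-2.0 * (np.pi * sigma * f) ** 2)   # FT of a Gaussian PSF, sigma in pixels
    return gauss * chat((lam / pitch) * N * f)

def mtf50(N, pitch, sigma):
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mtf(mid, N, pitch, sigma) > 0.5 else (lo, mid)
    return lo

f50 = mtf50(16, 8.45, 0.6246)               # D700-like sensor at f/16
print(round(f50, 3), round(f50 / 8.45e-3, 1))
```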

[Incidentally, the diffraction + Gaussian AA filter approximation used here appears to be quite accurate. Roger Cicala's Imatest results on the D800 and D700 with the Zeiss 25 mm f/2 (see here) agree with my figures. From Roger's charts, we see the D800 at f/5.6 achieves 1200 lp/ph, or about 50.06 lp/mm, compared to my figure of 50.7 lp/mm. The D700 at f/5.6 attains roughly 750 lp/ph (31.38 lp/mm) in Roger's test, and my model predicts 31.9 lp/mm.]

The catch, though, is that the D700's MTF50 at f/16 is 0.216 c/p (25.6 lp/mm), whereas the D800's MTF50 at f/16 is 0.168 c/p (34.4 lp/mm). The apparent per-pixel sharpness of the D700 will exceed that of the D800 at 100% magnification on-screen. If you view them at the same size, though, the D800 will be somewhat sharper.

An introduction to diffraction

Incorporating diffraction

A while ago I posted an article on the MTF of a sensor without an AA filter in the absence of diffraction (here if you've missed it). By not including the effects of diffraction, we could see how the lack of an AA filter produced significant aliasing, which shows up visually as false detail.

Now it is time to add the effects of diffraction back in. I do not have the required background in physics to explain this from first principles, but we can probably blame diffraction on the wave properties of light. If we take a beam of light and pass it through a narrow vertical slit, and then project it onto a surface, we do not see quite what we expected.

Instead of seeing this:

we actually see this

Intensity pattern of diffraction through narrow slit

Of course, to illustrate this principle I had to compress the intensity of the diffraction image quite a bit, since the secondary stripes are much dimmer than the main one. Making the slit narrower causes the intensity pattern to spread out more.
The intensity pattern that we observe is in fact sinc²(x), and looks like this:
Intensity profile of narrow slit, sinc²(x)
Recall that we first observed the sinc(x) function, which is defined as
sinc(x) = sin(x) / x,
when we computed the Fourier transform of the box function. Note that a cross-section of the vertical slit (along constant y values) is in fact a box function. Why is the intensity function sinc²(x) rather than sinc(x)? Again, I lack the physics background to explain this properly, but it is squared because we are dealing with incoherent light, which is a safe assumption for typical photographic light sources. Coherent light sources, such as lasers, will have a sinc(x) diffraction pattern upon passing through a narrow slit.
If we have a square hole (aperture) rather than a vertical slit, we will observe this intensity pattern:
Intensity pattern of square aperture

which is simply sinc²(x) * sinc²(y).

If we want to introduce a circular aperture, like the one in a real lens, then we can no longer treat x and y separately. A circular aperture produces an intensity pattern that looks like this:
Intensity pattern of circular aperture (Airy pattern)
This pattern is also known as the Airy pattern, with the bright central circular area (before first black ring) known as the Airy disc. Again, a smaller aperture causes the size of this disc to increase, and the entire pattern to spread out.

Although x and y can no longer be treated independently, we can transform them to polar coordinates to obtain r = √(x² + y²), which yields the intensity function jinc²(r), where
jinc(r) = 2*BesselJ(r)/r
and BesselJ is a Bessel function of the first kind, of order 1 (see Wikipedia). Since this function is radially symmetrical (as can be seen in the intensity image above), we can plot it simply as a function of r:
Intensity profile (radial) of circular aperture, jinc²(r)

The result is that the circular aperture behaves somewhat like a square aperture, except that instead of observing a sinc²(x) function, we instead observe a jinc²(r). Note that the side-lobes of the jinc²(r) function are much smaller than those of sinc²(x) --- you can probably guess what effect this will have on MTF.

Diffraction MTF

So what does the MTF curve of a diffraction pattern look like? For the first case, namely the MTF of the diffraction pattern of a vertical slit, we can perform the analysis in one dimension, to obtain the following curve:
MTF of sinc²(x) PSF

If you have guessed that it looks an awful lot like a straight line, then you guessed correctly. Recall that the intensity function of the vertical slit is sinc²(x). The Fourier transform of sinc(x) is a box function (see here for a recap), and vice versa, so the MTF of sinc(x) is just a box function in the frequency domain. Since multiplication in the time (or spatial) domain is equivalent to convolution in the frequency domain, we can see that the Fourier transform of sinc²(x) must be the convolution of a box function with itself in the frequency domain. Lastly, recall that the convolution of a box filter with itself (once only) yields a triangle function, and we are only plotting the positive frequencies in the MTF plot, i.e., you can mirror the MTF around the y axis to see the triangle more clearly, like this:
MTF of sinc²(x) PSF, showing negative and positive frequencies
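This is easy to confirm numerically. Note that NumPy's sinc is the normalized form sin(πx)/(πx), so the triangle reaches zero at f = 1 in the units below:

```python
import numpy as np

n, dx = 1 << 16, 0.01
x = (np.arange(n) - n // 2) * dx
psf = np.sinc(x) ** 2                       # sinc^2 PSF, finely sampled

mtf = np.abs(np.fft.rfft(np.fft.ifftshift(psf)))
mtf /= mtf[0]
f = np.fft.rfftfreq(n, d=dx)

print(np.interp([0.0, 0.5, 1.0], f, mtf))   # ~[1.0, 0.5, 0.0]: a triangle
```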


This result extends to the MTF of the diffraction pattern of a square aperture, since x and y can simply be treated independently. As you have probably guessed, the 2D MTF of the square aperture looks like this:

MTF (left) and PSF (right) of square aperture
Note that this MTF shape can also be obtained through the convolution of two 2D box functions.

You may be tempted to treat the circular aperture in the same way, but trying to compute the MTF in one dimension will not give you the correct result. If you can visualise the MTF of the square aperture as the convolution of a box function with itself, then you can visualise the MTF of the circular aperture as the convolution of two "pillbox" functions that look like this:

A pillbox function
The convolution of two of these pillbox functions will give you the MTF on the left:

MTF (left) and PSF (right) of circular aperture
which is called the Chinese hat function, or chat(x). We can take a cross-section through the centre of this 2D function to obtain this plot:
chat(f) = MTF of jinc²(r)
Note that this function is very nearly a triangle function --- except that it curves a bit. This should not be too surprising, since you can imagine a square box function inscribed within the cylinder formed by the pillbox function.

If we focus only on the positive half of the frequency spectrum, the function can be expressed as
chat(s) = 2/π * (acos(s) - s*√(1 - s²))
which is the form that you will usually see in optics literature. Now we know what the MTF curve looks like, so we can obtain the corresponding point spread function by computing the inverse Fourier transform of the MTF curve.


If you perform this inversion in two dimensions, and then take a cross-section through the result, you will obtain the known intensity function, jinc²(r).
Right. Now sample the jinc²(r) function in one dimension, and apply the Fast Fourier Transform (FFT) to obtain the 1D MTF:
Trying to obtain chat(f) by computing FFT of jinc²(r)
Whoops! That does not work as expected. You have to perform the calculations in 2D for this to work. It only took me a (tortuous) weekend to discover this useful fact. If only someone had warned me ...
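To spare you the same weekend: here is the 2D route in outline. The grid size and extent are illustrative choices, and with these conventions the radial slice of the 2D FFT should follow chat(π·f); a 1D FFT of a slice of the PSF will not.

```python
import numpy as np
from scipy.special import j1

n, dx = 1024, 0.25
x = (np.arange(n) - n // 2) * dx
xx, yy = np.meshgrid(x, x)
r = np.hypot(xx, yy)

r_safe = np.where(r == 0, 1.0, r)           # dodge the 0/0 at the origin
jinc = 2 * j1(r_safe) / r_safe
jinc[r == 0] = 1.0                          # limit of 2*J1(r)/r as r -> 0
psf = jinc ** 2

otf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
otf /= otf[n // 2, n // 2]
f = np.fft.fftshift(np.fft.fftfreq(n, d=dx))

def chat(s):
    s = np.clip(np.abs(s), 0.0, 1.0)
    return (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s * s))

# Radial slice of the 2D result versus the analytic chat function:
print(np.max(np.abs(otf[n // 2, :] - chat(np.pi * f))))   # should be small
```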

Other useful facts regarding diffraction

Just as with a Gaussian, the width of the MTF function is inversely proportional to the width of the PSF, i.e., smaller apertures produce larger diffraction patterns.

Diffraction is wavelength-dependent. If jinc²(r) is your PSF, then r should be expressed as
r = π * d / (λ * N)
where d is your radial distance from the optical axis, as measured in the focal plane. If you are modelling the diffraction PSF in terms of pixels, then you can express d in pixels, but you must adjust λ accordingly, such that
rₚ = π * dₚ / ((λ/Δ) * N)
where Δ denotes your pixel pitch, expressed in the same units as λ. For example, the Nikon D800 has a pixel pitch of 4.88 micron, and green light has a wavelength of 550 nm, or 0.55 micron. This means that the diffraction PSF expressed in D800 pixel-units would be
rₚ ≈ π * dₚ / (0.112705 * N).

If we want to obtain the equivalent function in the frequency domain, i.e., the diffraction MTF, we use the scale
sₚ = (λ/Δ) * N * fₚ
where fₚ is the frequency scale in cycles per pixel, and sₚ is the normalized frequency that we plug into the equation
chat(sₚ) = 2/π * (acos(sₚ) - sₚ*√(1 - sₚ²)).
For the D800 pixel size, we would use
sₚ ≈ 0.112705 * N * fₚ
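Plugging in some numbers: diffraction extinction (where chat reaches zero) sits at sₚ = 1, i.e., at fₚ = 1 / (0.112705 * N) for the D800.

```python
lam, pitch = 0.55, 4.88              # micron
k = lam / pitch                      # ~0.112705
for N in (4, 8, 16):
    print(N, round(1 / (k * N), 3))  # extinction in cycles/pixel: 2.218, 1.109, 0.555
```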

In a future post, some examples of MTF curves incorporating diffraction will be investigated, using the D800E as an example.