Monday 18 June 2012

Combining box filters, AA filters and diffraction: Do I need an AA filter?

I have been building up to this post for some time now, so it should not surprise you too much. What happens when we string together the various components in the image formation chain?

Specifically, what happens when we combine the square pixel aperture, the sensor OLPF (based on a 4-dot beam splitter) and the Airy function (representing diffraction)? First off, this is what the MTF curves of our contestants look like:
The solid black curve represents a combined sensor OLPF (4-dot beam splitter type) + pixel aperture + lens MTF (diffraction only) model. This was recently shown to be a good fit for the D40 and D7000 sensors. The dashed blue curve represents the MTF of the square pixel aperture (plus diffraction), i.e., a box filter as wide as the pixel. The dashed red curve illustrates what a Gaussian MTF (plus diffraction) would look like, fitted to have an MTF50 value that is comparable to the OLPF model. Lastly, the solid vertical grey line illustrates the remaining contrast at a frequency of 0.65 cycles per pixel, which is well above the Nyquist limit at 0.5 cycles per pixel (dashed vertical grey line).
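
To make the multiplication of these components concrete, here is a minimal sketch of how the individual MTF curves can be computed and combined. The pixel pitch, wavelength, aperture and OLPF split distance below are all assumptions chosen for illustration (roughly a 4.78 micron pitch at 550 nm and f/4, with a one-pixel split), so the result will not reproduce the fitted D40/D7000 model exactly:

    import numpy as np

    def mtf_box(f):
        # 100% fill-factor square pixel aperture: |sinc(f)|, f in cycles/pixel
        # (np.sinc(x) computes sin(pi*x)/(pi*x))
        return np.abs(np.sinc(f))

    def mtf_olpf(f, split=1.0):
        # 1D projection of a 4-dot beam-splitter OLPF: two impulses 'split'
        # pixels apart give |cos(pi*f*split)|; a one-pixel split (an assumption)
        # places the null exactly at Nyquist (0.5 cycles/pixel)
        return np.abs(np.cos(np.pi * f * split))

    def mtf_diffraction(f, pitch_um=4.78, wavelength_um=0.55, N=4.0):
        # Diffraction MTF of an ideal circular aperture in incoherent light;
        # the cut-off in cycles/pixel is pitch / (wavelength * f-number)
        fc = pitch_um / (wavelength_um * N)
        nu = np.clip(f / fc, 0.0, 1.0)
        return (2.0 / np.pi) * (np.arccos(nu) - nu * np.sqrt(1.0 - nu**2))

    f = 0.65  # cycles per pixel, well above Nyquist
    no_olpf = mtf_box(f) * mtf_diffraction(f)
    with_olpf = no_olpf * mtf_olpf(f)
    print(no_olpf, with_olpf)  # ~0.27 and ~0.12 with these assumed values

With these assumed parameters the "no OLPF" contrast lands close to the 0.27 quoted below; the "with OLPF" value depends strongly on the assumed split distance, and the fitted model above contains further low-pass terms, which is why it reaches 0.02 rather than 0.12.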

Note how both the Gaussian and the OLPF model have low contrast values at 0.65 cycles per pixel (0.04 and 0.02, respectively), while the square pixel aperture + lens MTF, representing a sensor without an AA filter, still has a contrast value of 0.27. It is generally accepted that patterns at a contrast below 0.1 are not really visible in photos. That illustrates how the OLPF successfully attenuates the frequencies above Nyquist, but how does this look in a photo?

Ok, but how would it affect my photos visually?

I will now present some synthetic images to illustrate how much (or little) anti-aliasing we obtain at various apertures, both with and without an AA filter. The images will look like this:



The left panel is a stack of four sub-images (rows) separated by white horizontal bars. Each sub-image is simply a pattern of black-and-white bars, with both black and white bars being exactly 5 pixels wide (in this example). The four stacked sub-images differ only in phase, i.e., in each of the four rows the black-and-white pattern of bars is offset by a horizontal distance between 0 and 1 pixels in length.

The right panel is a 2x magnification of the left panel. Note that the third row in the stack is nice and crisp, containing almost pure black and pure white. The other rows have some grey values at the transition between the black and white bars, because the image has been rendered without any anti-aliasing.
These images are rendered by sampling each pixel at 2362369 sub-pixel positions, weighting each sampled point with the relevant point spread function.
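
In pseudocode form, the rendering reduces to a weighted average of scene samples around each pixel centre. The sketch below is a 1D simplification with a Gaussian stand-in for the true combined PSF and a much coarser sub-pixel grid; the sigma, the oversampling factor and the four phase offsets are all assumptions:

    import numpy as np

    def render_bars(width_px=64, period_px=10, phase_px=0.0,
                    psf_sigma=0.5, oversample=33):
        # Sub-pixel sample offsets covering +-3 sigma around each pixel centre,
        # weighted by a Gaussian PSF (a stand-in for the real OLPF + aperture
        # + diffraction PSF used in the post)
        support = 3.0 * psf_sigma
        offsets = np.linspace(-support, support, oversample)
        weights = np.exp(-0.5 * (offsets / psf_sigma) ** 2)
        weights /= weights.sum()
        pixels = np.empty(width_px)
        for i in range(width_px):
            x = i + 0.5 + offsets + phase_px  # scene coordinates of the samples
            # square wave: 5 px white, 5 px black when period_px == 10
            scene = ((x % period_px) < period_px / 2).astype(float)
            pixels[i] = np.dot(weights, scene)
        return pixels

    # four rows differing only in phase, as in the stacked sub-images above
    rows = [render_bars(phase_px=p) for p in (0.0, 0.25, 0.5, 0.75)]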

The aliasing phenomenon known as frequency folding was illustrated in a previous post. When a scene contains patterns at a frequency exceeding the Nyquist limit (the highest frequency representable in the final image), the patterns alias, i.e., the frequencies above Nyquist appear as patterns below the Nyquist limit, and are in fact indistinguishable from real image content at that frequency. Here is a relevant example, illustrating how a frequency of 0.65 cycles per pixel (cycle length of 1.538 pixels) aliases onto a frequency of 0.35 cycles per pixel (cycle length of 2.857 pixels) if no AA filter is present:
 
This set was generated at a simulated aperture of f/1.4, which does not attenuate the high frequencies much. Observe how the two images in the "No OLPF" column look virtually the same, except for a slight contrast difference; it is not possible to tell from the image whether the original scene contained a pattern at 1.538 pixels per cycle, or 2.857 pixels per cycle.
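
That the two patterns really are indistinguishable after sampling is easy to verify numerically: at integer pixel positions the two cosines produce identical sample values.

    import numpy as np

    n = np.arange(12)                      # pixel (sample) indices
    above = np.cos(2 * np.pi * 0.65 * n)   # 0.65 cycles/pixel, above Nyquist
    folded = np.cos(2 * np.pi * 0.35 * n)  # its alias at 0.35 cycles/pixel
    print(np.allclose(above, folded))      # True: the samples are identical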

The "4-dot OLPF" column shows a clear difference between these two cases. If you look closely you will see some faint stripes in the 2x magnified version at 1.538 pixels per cycle, i.e., the OLPF did not completely suppress the pattern, but attenuated it strongly.

If we repeat the experiment at f/4, we obtain this image:
At f/4, we do not really see anything different compared to the f/1.4 images, except an overall decrease in contrast in all the panels.

Ok, rinse & repeat at f/8:
Now we can see the contrast in the "No OLPF" column, at 1.538 pixels per cycle, dropping noticeably. Diffraction is acting as a natural AA filter, effectively attenuating the frequencies above Nyquist.

Finally, at f/11 we see some strong attenuation in the sensor without the AA filter too:
You can still see some clear stripes (top left panel) in the 2x magnified view, but in the original-size sub-panel the stripes are almost imperceptible.
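
The whole aperture sweep can be summarised numerically with the helpers from the first sketch (repeated here so the snippet stands alone; the pixel pitch and wavelength remain the same assumptions as before):

    import numpy as np

    def mtf_box(f):
        return np.abs(np.sinc(f))

    def mtf_diffraction(f, pitch_um=4.78, wavelength_um=0.55, N=4.0):
        fc = pitch_um / (wavelength_um * N)
        nu = np.clip(f / fc, 0.0, 1.0)
        return (2.0 / np.pi) * (np.arccos(nu) - nu * np.sqrt(1.0 - nu**2))

    # "No OLPF" contrast at 0.65 cycles/pixel across the aperture sweep;
    # with these assumptions: roughly 0.38, 0.27, 0.12 and 0.04
    for N in (1.4, 4.0, 8.0, 11.0):
        print(f"f/{N}: {mtf_box(0.65) * mtf_diffraction(0.65, N=N):.2f}")

With these assumed values the contrast only falls below the 0.1 visibility threshold around f/11, which matches the visual impression of the panels: a noticeable drop at f/8, near-invisibility at f/11.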

Conclusion

So there you have it. A sensor without an AA filter can only really attain a significant increase in resolution at large apertures, where diffraction is not attenuating the contrast at higher frequencies too strongly. Think f/5.6 or larger apertures.

Unfortunately, this is exactly the aperture range in which aliasing is clearly visible, as shown above. In other words, if you have something like a D800E, you can avoid aliasing by stopping down to f/8 or smaller, but at those apertures your resolution will be closer to that of the D800. At apertures of f/5.6 and larger, you may experience aliasing, but you are also likely to have better sharpness than the D800.

Not an easy choice to make.

Personally, I would take the sensor with the AA filter.

3 comments:

  1. Your blog is very interesting! A few comments:

    In all cases the "No OLPF" column shows higher contrast for the 2.857 pixels/cycle, even at high-diffraction f/11. So "No OLPF" can show advantages even when there is a lot of diffraction.

    I think aliasing has to be evaluated with real systems and real images, because it will be extremely rare to match your simulated diffraction-limited performance at f/4 (~400 line pairs/mm). The Nikon D800e will need over 100 lp/mm for luminance aliasing to occur - this is not easy - and even then the aliasing will not necessarily be visible. Maybe an Ocean Optics $$$ lens. Although rare, it must happen sometimes, but I have not been able to find on the web any examples (verifiable with the raw NEF file) of luminance aliasing by the D800e.

    It seems to me that as long as the fill factor is close to 100% there is sufficient filtering from the pixel aperture to suppress visible luminance aliasing.

    Color moire from demosaicking the Bayer array is another matter. This is by far the most common aliasing effect. I think that the real purpose of aa filters is to make it easier and faster to minimize color moire while demosaicking in-camera. The red and blue color channels in the Bayer array, if considered in isolation, have a Nyquist rate only 1/4 the sampling rate of the full sensor, in other words 1/2 Nyquist. Anything between 1/2 Nyquist and Nyquist can cause aliasing in those channels. Modern demosaicking methods take advantage of correlations among the channels so that, for example, the un-aliased higher frequencies in the green or luminance channels can be combined with the red and blue channels, while aliased content in red and blue can be estimated from the green or luminance channel and subtracted. For example, see Glotzbach.

    Regards,
    Cliff

    Replies
    1. (completing my thoughts on this)

      As long as the luminance channel is relatively clean (thanks to 100% fill factor, high sampling rate, diffraction and aberrations), and color moire can be addressed with software, doesn't the aa filter become unnecessary? Granted it all hinges on the software, which might not be available.

    2. Thanks for reading the blog and taking time to comment!
      The Glotzbach presentation is quite interesting.

      I agree with your observation that the "No OLPF" case will always offer more detail, even when there is significant diffraction blurring. This benefit will decrease at smaller apertures, though. The only catch is that we do not obtain this extra contrast (resolution) "for free" --- we pay for it with a higher risk of aliasing.

      My synthetic examples certainly pick a worst-case scenario, and I agree that this is unlikely to be a problem in real-world use.

      I would like to offer my perspective on the demosaicing problem. We can consider the following approaches:
      1) No OLPF, large aperture lens (i.e., definite chances of aliasing), clever demosaicing algorithm that uses high-frequency info from green channel to reduce R & B aliasing;
      2) Standard OLPF, followed by standard demosaicing, followed by sharpening (e.g., Richardson-Lucy deconvolution with appropriate PSF);
      3) Same as (2), but use the demosaicing algorithm from (1).

      We know that option (1) will work very well as long as there is strong correlation between the green channel and the other two (R & B). The problems arise when the decorrelation between the green channel and the other channels occurs only at the higher frequencies. We know that the green channel is not a true luminance channel, hence it does not fully cover the R & B bands, so no actual measurement of the high frequencies between Nyquist/2 and Nyquist is available in the R & B channels. The demosaicing algorithm cannot distinguish this case from the case where there truly is no detail in R & B in this frequency range (N/2 to N), so the algorithm must guess incorrectly in one of the two cases. I do not expect this to happen frequently in real images, but I am trying to determine which strategy is the safest under all possible conditions.
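
      A quick numeric illustration of that missing band: in 1D, red samples sit on every second pixel, so a pattern comfortably below the sensor's Nyquist still folds in the red channel (a toy check, not a demosaicing algorithm):

          import numpy as np

          n = np.arange(0, 24, 2)               # red sites: every 2nd pixel (1D Bayer)
          f = 0.35                              # c/p: below sensor Nyquist (0.5) ...
          red = np.cos(2 * np.pi * f * n)       # ... but above red's Nyquist (0.25)
          alias = np.cos(2 * np.pi * 0.15 * n)  # folds to 0.15 c/p in the red channel
          print(np.allclose(red, alias))        # True: identical red samples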

      In practice, I think such a demosaicing algorithm will introduce false detail, which will probably be far less objectionable than colour moire.

      But what happens in case (1) when we have detail above Nyquist in the green channel? Detail at 0.65 cycles per pixel will definitely cause visible aliasing in the green channel (contrast ~ 0.3) if we have no OLPF. This false detail will be propagated by the demosaicing algorithm, since aliasing is by definition not detectable.

      This is where strategies (2) and (3) come in. These strategies have a much lower chance of suffering from aliasing: at 0.65 cycles/pixel their contrast is below 0.1, so the remaining aliasing should not be too visible. The demosaicing algorithm cannot propagate false detail, since there is virtually no false detail in the green channel to begin with. The loss of contrast caused by the OLPF can be reversed by using deconvolution, which will theoretically give us the same MTF response as the "No OLPF" case below Nyquist.

      Of course, we cannot really achieve perfect deconvolution, because we have sensor noise, shot noise, and that little bit of residual aliasing to contend with.
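
      For reference, the core of such a deconvolution is simple; this is a bare-bones 1D Richardson-Lucy sketch (the PSF would be the combined OLPF + aperture + diffraction kernel, and in practice one would regularise or stop early precisely because of the noise just mentioned):

          import numpy as np

          def richardson_lucy_1d(observed, psf, iterations=30):
              # Minimal noise-free Richardson-Lucy iteration; 'observed' is the
              # blurred signal and 'psf' the assumed blur kernel (1D float arrays)
              psf = psf / psf.sum()
              psf_mirror = psf[::-1]
              estimate = np.full_like(observed, observed.mean())
              for _ in range(iterations):
                  blurred = np.convolve(estimate, psf, mode="same")
                  ratio = observed / np.maximum(blurred, 1e-12)
                  estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
              return estimate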

      So which strategy is better: capture maximum detail by omitting the OLPF, and suffer from aliasing artifacts, which may be propagated from the green channel onto the other channels, or remove most of the aliasing with an OLPF, and try to restore detail using deconvolution (in the presence of noise)?

      I doubt there is a single "correct" answer to this question. Since we expect that real-world lenses will not be diffraction limited, we know that aliasing in the green channel will probably not be too noticeable. Under these conditions, strategy (1) probably wins, especially as we move towards ever-smaller pixel pitches.

      I just happen to like strategies (2) and (3), because they have some theoretical advantages. I cannot demonstrate (yet) whether these advantages are realized in real-world images.

      But stay tuned :)
