Wednesday 18 December 2013

mtf_generate_rectangle grows up

Fed up with squares?

If you have used mtf_generate_rectangle before, you will know why it is called mtf_generate_rectangle. A bit over two years ago, when I started working on the first release of MTF Mapper, I ran into a specific problem: I could not find any synthetic images rendered with specific, exactly known point spread functions. This meant that the only way I could tell whether MTF Mapper was working correctly was to compare its output to that of other slanted edge implementations.

While this may be sufficient for some, it did not sit well with me. What if all those other implementations were tested in the same way? If that were the case, then all slanted edge implementations (available on the Internet) could be flawed. Clearly, some means of verifying MTF Mapper independently of other slanted edge algorithm implementations was required.

Thus mtf_generate_rectangle was born. The original implementation relied on sampling a dense grid, but quickly progressed to an importance sampling rendering algorithm tailored to a Gaussian PSF. The Gaussian PSF had some important advantages over other (perhaps more realistic) PSFs: the analytical MTF curve was known, and simple to compute (the MTF of a Gaussian PSF is just another scaled Gaussian). Since the slanted edge algorithm requires a step edge as input, it seemed logical to choose a square as the target shape; this would give us four step edges for the price of one.
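To make that last point concrete (using the one-dimensional form, and assuming sigma is measured in pixels and f in cycles per pixel): a Gaussian PSF proportional to exp(-x^2/(2*sigma^2)) has an analytical MTF of exp(-2*pi^2*sigma^2*f^2), so quantities like the expected MTF50 can be computed in closed form and compared directly against what MTF Mapper reports for the synthetic image.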

Such synthetic images, composed of a black square on a white background, are perfectly suited to the task of testing the slanted edge algorithm. Unfortunately, they are not great for illustrating the visual differences between different PSFs. There are a few well-known target patterns that are found on resolution test charts designed for visual interpretation. The USAF1951 pattern consists of sets of three short bars (see examples later on); the width of these bars decreases in a geometric progression, and the user is supposed to note the scale at which the bars are no longer clearly distinguishable.

Another popular test pattern is the Siemens star. This pattern comprises circular wedges radiating from the centre of the design. The main advantage of the Siemens star is that resolution (spacing between the edges of the wedges) decreases in a continuous fashion, as opposed to the discrete intervals of the USAF1951 chart. I am not a huge fan of the Siemens star, though, mostly because it is hard to determine the exact point at which the converging bars (wedges) blur into a gray mess. It is far too easy to confuse aliasing with real detail on this type of chart. Nevertheless, other people seem to like this chart.

Lastly, there is the familiar "pinched wedge" pattern (also illustrated later in this post), which contains a set of asymptotically convergent bars. The rate of convergence is much slower than the Siemens star, and a resolution scale usually accompanies the pattern, making it possible to visually measure resolution in a fashion similar to the USAF1951 chart, but with slightly more accuracy. I rather like this design, if only for the fact that the resulting pictures are aesthetically pleasing.

Today I announce the introduction of a fairly powerful new feature in mtf_generate_rectangle: the ability to render arbitrary polygon shapes.

The implementation

You can now pass fairly general polygonal targets using the "--target_poly <filename>" command line option. The format of the file specified using <filename> is straightforward: any number of polygons can be specified, with each polygon defined by an integer <n> denoting the number of vertices, followed by <n> pairs of <x,y> coordinates. I usually separate these components with newline characters, but this is not critical.
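For illustration, a file describing a single square target might look like this (the coordinate values below are just made up for this example, and as noted above the exact choice of separators is not critical):

4
0 0
100 0
100 100
0 100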

The polygons themselves can be convex or concave. In theory, non-trivial self-intersections should be supported, but I have not tested this myself yet. There is currently no way to associate multiple contours with a single polygon, so you cannot specify any polygons containing holes. I work around this by simply splitting the input polygons with a line passing through such a hole: for example, a polygon representing the digit "0" can be split down the middle, producing two concave polygons that touch on the split line.

General polygon intersections

For a while now I have cheated by relying on the Sutherland-Hodgman algorithm to compute the intersection between two polygons. Specifically, this operation is required by all the importance sampling algorithms involving a non-degenerate photosite aperture (e.g., the "airy-box" and "airy-4dot-olpf" PSF options specified with the "-p" option of mtf_generate_rectangle). This article explains the process in more detail, but the gist is that each "sample" during the rendering process is proportional to the area of the intersection between the target polygon geometry and a polygon representing the photosite aperture (suitably translated). If we assume that the photosite aperture polygon is simply a square (or, more generally, convex) then we can rely on the Sutherland-Hodgman algorithm to compute the intersection: we simply "clip" the target polygon with the photosite aperture polygon, and compute the area of the resulting clipped polygon.
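To make that a little more concrete, here is a minimal C++ sketch of the idea (this is not the actual mtf_generate_rectangle code, just an illustration of clipping against a convex clipper and taking the area of the result as the sample weight):

#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// Signed area of a polygon via the shoelace formula.
double signed_area(const std::vector<Vec2>& p) {
    double a = 0;
    for (size_t i = 0; i < p.size(); i++) {
        const Vec2& u = p[i];
        const Vec2& v = p[(i + 1) % p.size()];
        a += u.x * v.y - v.x * u.y;
    }
    return 0.5 * a;
}

// One Sutherland-Hodgman pass: keep the parts of 'subject' that lie to the
// left of the directed edge a -> b of a counter-clockwise convex clipper.
std::vector<Vec2> clip_edge(const std::vector<Vec2>& subject, const Vec2& a, const Vec2& b) {
    std::vector<Vec2> out;
    auto side = [&](const Vec2& p) {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    };
    for (size_t i = 0; i < subject.size(); i++) {
        const Vec2& cur = subject[i];
        const Vec2& prev = subject[(i + subject.size() - 1) % subject.size()];
        double sc = side(cur);
        double sp = side(prev);
        if (sc >= 0) {                      // current vertex is inside
            if (sp < 0) {                   // edge enters: add the crossing point first
                double t = sp / (sp - sc);
                out.push_back(Vec2{prev.x + t * (cur.x - prev.x), prev.y + t * (cur.y - prev.y)});
            }
            out.push_back(cur);
        } else if (sp >= 0) {               // edge leaves: add only the crossing point
            double t = sp / (sp - sc);
            out.push_back(Vec2{prev.x + t * (cur.x - prev.x), prev.y + t * (cur.y - prev.y)});
        }
    }
    return out;
}

// Per-sample weight: area of the target polygon covered by the (translated)
// photosite aperture polygon; the aperture must be convex for this to be valid.
double covered_area(std::vector<Vec2> target, const std::vector<Vec2>& aperture) {
    for (size_t i = 0; i < aperture.size() && !target.empty(); i++) {
        target = clip_edge(target, aperture[i], aperture[(i + 1) % aperture.size()]);
    }
    return std::fabs(signed_area(target));
}

The translated photosite aperture polygon plays the role of the clipper here, and the returned area (up to normalisation) becomes the sample value.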

Now this is where the cheat comes in: the clipped result produced by the Sutherland-Hodgman algorithm is only correct if both polygons are convex. If the target polygon (the clipee) is concave, and the clipping polygon is convex, the Sutherland-Hodgman algorithm may produce degenerate vertices (see figure 1 below). The cheat that I employed relied on the observation that degenerate sections of a polygon have zero area, and thus have no influence on the sampling process.

Fig 1: Clipping a concave polygon with the Sutherland-Hodgman algorithm

This allowed me to continue using the Sutherland-Hodgman algorithm, mostly because the algorithm is simple to implement, and very efficient. This efficiency stems from the way in which an actual intersection vertex is only computed if an edge of the clipee polygon is known to cross an edge of the clipper polygon. This is a huge saving, especially if one considers that the photosite aperture polygon (the clipper) will typically lie either entirely outside or entirely inside the target polygon, except of course near the edges of the target polygon.

All this comes apart when the photosite aperture polygon becomes concave. Solving that problem requires significantly more effort. For a start, a more general polygon clipping algorithm is required. I chose the Greiner-Hormann algorithm, with Kim's extensions to cater for the degenerate cases. In this context, the degenerate cases occur when some of the vertices of the clipee polygon coincide with vertices (or edges) of the clipper polygon. This happens fairly often when constructing a quadtree (more on that later). At any rate, the original Greiner-Hormann algorithm is fairly straightforward to implement, but adding Kim's enhancements for handling the degenerate cases required a substantial amount of effort (read: hours of debugging). The Greiner-Hormann algorithm is quite elegant, and I can highly recommend reading the original paper.

Internally, mtf_generate_rectangle classifies polygons as being either convex or concave. If the photosite aperture is convex, the Sutherland-Hodgman algorithm is employed during the sampling process; otherwise it falls back to Kim's version of the Greiner-Hormann algorithm. The performance impact is significant: concave photosite polygons render 20 times more slowly than square photosite polygons on complex scenes. For simpler scenes, concave photosite polygons render about four times more slowly than squares; circular photosites (well, 60-sided regular polygons, actually) render about two times more slowly than squares.
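The convex/concave classification itself is cheap. As a rough sketch (again, not the actual implementation), a simple polygon is convex if all consecutive edge pairs turn in the same direction:

#include <vector>

struct Point { double x, y; };

// A simple polygon is convex if every consecutive pair of edges turns in the
// same direction, i.e., the cross products all share the same sign.
bool is_convex(const std::vector<Point>& poly) {
    const size_t n = poly.size();
    if (n < 4) return true;  // a triangle is always convex
    bool has_pos = false, has_neg = false;
    for (size_t i = 0; i < n; i++) {
        const Point& a = poly[i];
        const Point& b = poly[(i + 1) % n];
        const Point& c = poly[(i + 2) % n];
        double cross = (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x);
        if (cross > 0) has_pos = true;
        if (cross < 0) has_neg = true;
        if (has_pos && has_neg) return false;  // mixed turn directions: concave
    }
    return true;
}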

Part of this difference is due to the asymptotic complexity of the two clipping algorithms, expressed in terms of the number of intersection point calculations: the Sutherland-Hodgman algorithm has a complexity of O(c*n), where "c" is the number of crossing edges (i.e., c << m, where "m" is the number of vertices in the clipee polygon) and "n" is the number of edges in the clipper. The Greiner-Hormann algorithm has a complexity of O(n*m); on top of that, each intersection vertex requires a significant amount of additional processing.
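To put some purely illustrative numbers to that: clipping a target polygon fragment with m = 100 vertices against a concave photosite polygon with, say, n = 16 edges means on the order of 100 x 16 = 1600 candidate intersection calculations per sample under Greiner-Hormann, whereas the Sutherland-Hodgman path only computes intersections for the handful of target edges that actually cross the photosite boundary.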

Divide and conquer

To offset some of the additional complexity of allowing arbitrary target polygons to be specified, a quadtree spatial index was introduced. The quadtree does for 2D searches what a binary tree does for linear searches: it reduces the number of operations from O(n) to O(log(n)).

First up, each polygon is wrapped with an axis-aligned bounding box (AABB), which is just an educated-sounding way of saying that the minimum and maximum values of the vertices are recorded for both the x and y dimensions of the polygon. This step already offers a tremendous potential speed-up, because two polygons can only intersect if their bounds overlap. The bounds check is reduced to four comparisons, which can be implemented using short-circuit boolean operators, so non-overlapping bounds can be detected with as little as a single comparison in the best case.
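In C++, such a bounds check could look something like this (a sketch, assuming the box simply stores its extreme coordinates):

struct AABB {
    double min_x, min_y, max_x, max_y;
};

// Two boxes overlap only if they overlap on both axes; thanks to short-circuit
// evaluation, a far-away box can be rejected after the very first comparison.
inline bool overlaps(const AABB& a, const AABB& b) {
    return a.min_x <= b.max_x && a.max_x >= b.min_x &&
           a.min_y <= b.max_y && a.max_y >= b.min_y;
}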

Once each individual polygon has a bounding box, we can start to aggregate them into a scene (internally, mtf_generate_rectangle treats this as a multipolygon with its own bounding box). The quadtree algorithm starts with this global bounding box and splits it into four quadrants. The bounding box of each quadrant is then used as a clipping polygon, so that all the polygons are clipped to fit exactly inside their quadrant.

After one iteration, we have potentially reduced the number of intersection tests by a factor of four. For example, if we determine (using the bounding boxes) that the photosite aperture polygon falls entirely inside the top-right quadrant, then we only have to process the (clipped) polygons found inside that quadrant. If a quadrant is empty, we can simply skip it; otherwise, we can shrink the bounding box to fit tightly around the remaining clipped polygons (see figure 2 below).

The next logical step is to apply this quadrant subdivision recursively to each of the original quadrants. We can keep on recursively subdividing the quadrants until a manageable number of polygons (or more correctly, a manageable number of polygon edges) is reached in each quadrant subtree. We must balance the cost of further subdivision against the gains of reducing the number of edges in each subdivided quadrant: every time we add another level to the quadtree we add four additional bounds checks, and eventually the cost of those bounds checks adds up.
Fig 2: Levels 0 (green), 1 (blue) and 2 (magenta) of the Quadtree decomposition of the scene (light gray)
Fig 3: Levels 2 (green), 5 (blue), and 6 (magenta) of the Quadtree decomposition
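The recursive subdivision described above could be sketched roughly as follows. This is an illustrative C++ sketch rather than the actual mtf_generate_rectangle code: the edge threshold and maximum depth are arbitrary placeholder values, and the real implementation also shrinks each quadrant's bounds around the clipped polygons, as mentioned above.

#include <memory>
#include <vector>

struct Pt { double x, y; };
typedef std::vector<Pt> Polygon;

struct Bounds { double min_x, min_y, max_x, max_y; };

// Clip 'poly' to the axis-aligned rectangle 'b' by clipping against each of
// its four sides in turn (Sutherland-Hodgman); may return an empty polygon.
Polygon clip_to_bounds(const Polygon& poly, const Bounds& b) {
    auto inside = [&](const Pt& p, int side) {
        if (side == 0) return p.x >= b.min_x;
        if (side == 1) return p.x <= b.max_x;
        if (side == 2) return p.y >= b.min_y;
        return p.y <= b.max_y;
    };
    auto cut = [&](const Pt& s, const Pt& e, int side) {
        double t = (side == 0) ? (b.min_x - s.x) / (e.x - s.x)
                 : (side == 1) ? (b.max_x - s.x) / (e.x - s.x)
                 : (side == 2) ? (b.min_y - s.y) / (e.y - s.y)
                               : (b.max_y - s.y) / (e.y - s.y);
        return Pt{ s.x + t * (e.x - s.x), s.y + t * (e.y - s.y) };
    };
    Polygon cur = poly;
    for (int side = 0; side < 4 && !cur.empty(); side++) {
        Polygon next;
        for (size_t i = 0; i < cur.size(); i++) {
            const Pt& s = cur[(i + cur.size() - 1) % cur.size()];
            const Pt& e = cur[i];
            if (inside(e, side)) {
                if (!inside(s, side)) next.push_back(cut(s, e, side));
                next.push_back(e);
            } else if (inside(s, side)) {
                next.push_back(cut(s, e, side));
            }
        }
        cur = next;
    }
    return cur;
}

struct QuadNode {
    Bounds bounds;
    std::vector<Polygon> polys;          // only populated in leaf nodes
    std::unique_ptr<QuadNode> child[4];  // empty quadrants remain null
};

std::unique_ptr<QuadNode> build(const Bounds& bounds, const std::vector<Polygon>& polys,
                                int depth) {
    const size_t max_edges = 64;  // placeholder thresholds
    const int max_depth = 8;
    size_t edges = 0;
    for (size_t i = 0; i < polys.size(); i++) edges += polys[i].size();

    std::unique_ptr<QuadNode> node(new QuadNode());
    node->bounds = bounds;
    if (edges <= max_edges || depth >= max_depth) {
        node->polys = polys;  // few enough edges: stop subdividing here
        return node;
    }
    double mx = 0.5 * (bounds.min_x + bounds.max_x);
    double my = 0.5 * (bounds.min_y + bounds.max_y);
    Bounds quad[4] = {
        { bounds.min_x, bounds.min_y, mx, my }, { mx, bounds.min_y, bounds.max_x, my },
        { bounds.min_x, my, mx, bounds.max_y }, { mx, my, bounds.max_x, bounds.max_y }
    };
    for (int q = 0; q < 4; q++) {
        std::vector<Polygon> clipped;
        for (size_t i = 0; i < polys.size(); i++) {
            Polygon c = clip_to_bounds(polys[i], quad[q]);
            if (c.size() >= 3) clipped.push_back(c);
        }
        if (!clipped.empty()) {  // skip empty quadrants entirely
            node->child[q] = build(quad[q], clipped, depth + 1);
        }
    }
    return node;
}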

If the number of quadtree levels is sufficient, we end up "covering" the target polygons with rectangular tiles (the bounding boxes of the quadrants), providing a coarse approximation of the target polygon shape (see figure 3 above). Every sampling location outside these bounding boxes can be discarded early on, so rendering time is not really influenced by the size of the "background" any more.

If the quadtree is well-balanced, the amount of work (number of actual polygon-polygon intersection tests) can be kept almost constant throughout the entire rendered image, regardless of the number of vertices in the scene. I have confirmed this with some quick-and-dirty tests: halving the number of vertices in a scene (by using a polygon simplification method) has almost no impact on rendering time.

Some examples

Enough talk. Time for some images:


USAF1951-style chart with 4 levels

Siemens star chart
Pinch chart
All the charts above were rendered with a photosite pitch of 4.73 micron, using the Airy + square photosite (100% fill-factor) model at an aperture of f/4, simulating light at 550 nm wavelength. The command for generating the last chart would look something like this:

mtf_generate_rectangle.exe --pixel-pitch 4.73 --aperture 4 -p airy-box -n 0 --target-poly pinch.txt 

where "pinch.txt" is the file specifying the polygon geometry (which happens to be in the same folder as mtf_generate_rectangle.exe in my example above). These target polygon geometry files are included in the MTF Mapper package from version 0.4.16 onwards (their names are usaf1951r.txt, siemens.txt, and pinch.txt.)

OLPF demonstration

The "pinch chart" provides a very clear demonstration of the effects of the Optical Low-Pass Filter (OLPF) found on many DSLRs (actually, most, depending on when you read this).

Before I present some images, first a note about effective chart magnification. Most real-world test charts are printed at a known size, i.e., you can say with confidence that a particular target (say, a USAF1951 pattern) has a known physical size, and thus measures physical resolution expressed in line pairs per millimetre (lp/mm). It is relatively straightforward to extend this to synthetic images generated with mtf_generate_rectangle by carefully scaling your target polygon dimensions. For the time being, though, I prefer to fall back to a pixel-centric view of the universe. In other words, I chose to specify my target polygon geometry in terms of pixel dimensions. This was mostly motivated by my desire to illustrate specific effects (aliasing, etc.) visually. Just keep that in mind: the images I present below are not intended to express resolution in physical units; they are pretty pictures.
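For those who do want physical units, the conversion is simple arithmetic: at the 4.73 micron pitch used below, one cycle per pixel corresponds to 1/0.00473 ≈ 211 line pairs per millimetre, so the Nyquist frequency of 0.5 cycles per pixel works out to roughly 106 lp/mm.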

With that out of the way, here is a sample of a hypothetical D7000 without an OLPF --- this could be something like the Pentax K-5 IIs.

Hypothetical D7000 without OLPF, f/4

And here is the same simulated image, but this time using an OLPF, i.e., this should be quite close to the real D7000:

D7000 (with OLPF), f/4

I repeated the simulations using an f/8 aperture:

Hypothetical D7000 without OLPF, f/8


and again, the D7000 (with OLPF) simulated at f/8:
D7000 (with OLPF), f/8

Here is a crop comparing the interesting part of the chart across these four configurations:

First up, notice the false detail in the "f/4, no OLPF" panel, occurring to the right of the scale bar tick labelled "1". This is a good example of aliasing --- compare that to the "f/4, OLPF" panel, which just fades to gray mush to the right of its tick mark. In the bottom two panels we can see the situation is significantly improved at f/8, where diffraction suppresses most of the objectionable aliasing.

2 comments:

  1. Excellent tour de force, Frans. How does _generate_ compare to your real life D7k, both visually and as measured? And where can we download version 0.4.16 from?

    Replies
    1. Thanks Jack.

      I'll see if I can match up a real photo captured using the D7k using one of the new charts.

      Version 0.4.16 is working perfectly on Linux, but I had a bit of a battle getting the build to work under Visual Studio Express 2013. The Windows binaries will be at the usual sourceforge location "real soon" :)
