Wednesday, 18 December 2013

mtf_generate_rectangle grows up

Fed up with squares?

If you have used mtf_generate_rectangle before you will know why it is called mtf_generate_rectangle. A bit over two years ago, when I started working on the first release of MTF Mapper, I ran into a specific problem: I could not find any synthetic images rendered with exactly known point spread functions. This meant that the only way I could tell whether MTF Mapper was working correctly was to compare its output to other slanted edge implementations.

While this is sufficient for some, it did not sit well with me. What if all those other implementations were tested in the same way? If that was the case, then all slanted edge implementations (available on the Internet) could be flawed. Clearly, some means of verifying MTF Mapper independently of other slanted edge algorithm implementations was required.

Thus mtf_generate_rectangle was born. The original implementation relied on sampling a dense grid, but quickly progressed to an importance sampling rendering algorithm tailored to a Gaussian PSF. The Gaussian PSF had some important advantages over other (perhaps more realistic) PSFs: the analytical MTF curve was known, and simple to compute (the MTF of a Gaussian PSF is just another scaled Gaussian). Since the slanted edge algorithm requires a step edge as input, it seemed logical to choose a square as the target shape; this would give us four step edges for the price of one.

Such synthetic images, composed of a black square on a white background, are perfectly suited to the task of testing the slanted edge algorithm. Unfortunately, they are not great for illustrating the visual differences between different PSFs. There are a few well-known target patterns found on resolution test charts designed for visual interpretation. The USAF1951 pattern consists of sets of three short bars (see examples later on); the widths of these bars decrease in a geometric progression, and the user is supposed to note the scale at which the bars are no longer clearly distinguishable.

Another popular test pattern is the Siemens star. This pattern comprises circular wedges radiating from the centre of the design. The main advantage of the Siemens star is that resolution (spacing between the edges of the wedges) decreases in a continuous fashion, as opposed to the discrete intervals of the USAF1951 chart. I am not a huge fan of the Siemens star, though, mostly because it is hard to determine the exact point at which the converging bars (wedges) blur into a gray mess. It is far too easy to confuse aliasing with real detail on this type of chart. Nevertheless, other people seem to like this chart.

Lastly, there is the familiar "pinched wedge" pattern (also illustrated later in this post), which contains a set of asymptotically convergent bars. The rate of convergence is much slower than the Siemens star, and a resolution scale usually accompanies the pattern, making it possible to visually measure resolution in a fashion similar to the USAF1951 chart, but with slightly more accuracy. I rather like this design, if only for the fact that the resulting pictures are aesthetically pleasing.

Today I announce the introduction of a fairly powerful new feature in mtf_generate_rectangle: the ability to render arbitrary polygon shapes.

The implementation

You can now pass fairly general polygonal targets using the "--target-poly <filename>" command line option. The format of the file specified via <filename> is straightforward: any number of polygons can be specified, with each polygon defined by an integer <n> denoting the number of vertices, followed by <n> pairs of <x,y> coordinates. I usually separate these components with newline characters, but this is not critical.
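For example, a file describing a single square with 100-pixel sides (purely illustrative coordinates) could look like this:

    4
    0 0
    100 0
    100 100
    0 100

A second polygon would simply follow the first in the same file, again starting with its vertex count.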

The polygons themselves can be convex or concave. In theory, non-trivial self-intersections should be supported, but I have not tested this myself yet. There is currently no way to associate multiple contours with a single polygon, thus you cannot specify any polygons containing holes. I work around this by simply splitting the input polygons with a line passing through such a hole: for example, a polygon representing the digit "0" can be split down the middle, producing two concave polygons that touch on the split line.

General polygon intersections

For a while now I have cheated by relying on the Sutherland-Hodgeman algorithm to compute the intersection between two polygons. Specifically, this operation is required by all the importance sampling algorithms involving a non-degenerate photosite aperture (e.g., the "airy-box" and "airy-4dot-olpf" PSFs specified with the "-p" option of mtf_generate_rectangle). This article explains the process in more detail, but the gist is that each "sample" during the rendering process is proportional to the area of the intersection between the target polygon geometry and a polygon representing the photosite aperture (suitably translated). If we assume that the photosite aperture polygon is simply a square (or, more generally, convex) then we can rely on the Sutherland-Hodgeman algorithm to compute the intersection: we simply "clip" the target polygon with the photosite aperture polygon, and compute the area of the resulting clipped polygon.
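To make this concrete, here is a minimal sketch of the per-sample computation: a textbook Sutherland-Hodgeman clip followed by the shoelace area formula. This is an illustration of the idea, not the actual code inside mtf_generate_rectangle.

    #include <vector>
    #include <cmath>

    struct Point { double x, y; };
    using Polygon = std::vector<Point>;   // vertices in counter-clockwise order

    // > 0 if p lies to the left of the directed edge (a -> b)
    double side(const Point& p, const Point& a, const Point& b) {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    // intersection of segment (p, q) with the infinite line through (a, b)
    Point intersect(const Point& p, const Point& q, const Point& a, const Point& b) {
        double t = side(p, a, b) / (side(p, a, b) - side(q, a, b));
        return { p.x + t * (q.x - p.x), p.y + t * (q.y - p.y) };
    }

    // Sutherland-Hodgeman: clip 'subject' (may be concave) against a convex
    // 'clipper'; a concave subject may yield degenerate zero-area sections,
    // which is harmless when we only want the area.
    Polygon clip(Polygon subject, const Polygon& clipper) {
        for (size_t i = 0; i < clipper.size() && !subject.empty(); i++) {
            const Point& a = clipper[i];
            const Point& b = clipper[(i + 1) % clipper.size()];
            Polygon out;
            for (size_t j = 0; j < subject.size(); j++) {
                const Point& p = subject[j];
                const Point& q = subject[(j + 1) % subject.size()];
                bool p_in = side(p, a, b) >= 0;
                bool q_in = side(q, a, b) >= 0;
                if (p_in) out.push_back(p);
                if (p_in != q_in) out.push_back(intersect(p, q, a, b));
            }
            subject = out;
        }
        return subject;
    }

    // shoelace formula: area of a simple polygon
    double area(const Polygon& poly) {
        double a = 0;
        for (size_t i = 0; i < poly.size(); i++) {
            const Point& p = poly[i];
            const Point& q = poly[(i + 1) % poly.size()];
            a += p.x * q.y - q.x * p.y;
        }
        return std::fabs(a) * 0.5;
    }

Each sample then reduces to area(clip(target, photosite_aperture)), with the aperture polygon translated to the current sample position.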

Now this is where the cheat comes in: the clipped result produced by the Sutherland-Hodgeman algorithm is only guaranteed to be correct if both polygons are convex. If the target polygon (the clipee) is concave, and the clipping polygon is convex, the Sutherland-Hodgeman algorithm may produce degenerate vertices (see figure 1 below). The cheat that I employed relied on the observation that degenerate sections of a polygon have zero area, thus they have no influence on the sampling process.

Fig 1: Clipping a concave polygon with the Sutherland-Hodgeman algorithm

This allowed me to continue using the Sutherland-Hodgeman algorithm, mostly because the algorithm is simple to implement, and very efficient. This efficiency stems from the way in which an actual intersection vertex is only computed if an edge of the clipee polygon is known to cross an edge of the clipper polygon. This is a huge saving, especially if one considers that the photosite aperture polygon (clipper) will typically be either entirely outside the target polygon, or entirely inside the target polygon, except of course near the edges of the target polygon.

All this comes apart when the photosite aperture polygon becomes concave. Solving that problem requires significantly more effort. For a start, a more general polygon clipping algorithm is required. I chose the Greiner-Hormann algorithm, with Kim's extensions to cater for the degenerate cases. In this context, the degenerate cases occur when some of the vertices of the clipee polygon coincide with vertices (or edges) of the clipper polygon. This happens fairly often when constructing a quadtree (more on that later). At any rate, the original Greiner-Hormann algorithm is fairly straightforward to implement, but adding Kim's enhancements for handling the degenerate cases required a substantial amount of effort (read: hours of debugging). The Greiner-Hormann algorithm is quite elegant, and I can highly recommend reading the original paper.

Internally, mtf_generate_rectangle classifies polygons as being either convex or concave. If the photosite aperture is convex, the Sutherland-Hodgeman algorithm is employed during the sampling process; otherwise it falls back to Kim's version of the Greiner-Hormann algorithm. The performance impact is significant: concave photosite polygons render 20 times more slowly than square photosite polygons when rendering complex scenes. For simpler scenes, concave photosite polygons render about four times more slowly than squares; circular apertures (well, 60-sided regular polygons, actually) render about two times more slowly than squares.
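The convex/concave classification itself is cheap: a polygon is convex exactly when the cross products of all consecutive edge pairs share the same sign. A sketch, reusing the Point/Polygon types from the clipping code above (self-intersecting polygons are not detected here):

    bool is_convex(const Polygon& poly) {
        int sign = 0;
        size_t n = poly.size();
        for (size_t i = 0; i < n; i++) {
            const Point& a = poly[i];
            const Point& b = poly[(i + 1) % n];
            const Point& c = poly[(i + 2) % n];
            double cross = (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x);
            if (cross != 0) {                      // collinear vertices are ignored
                int s = cross > 0 ? 1 : -1;
                if (sign == 0) sign = s;           // remember the first turn direction
                else if (s != sign) return false;  // turn direction flips: concave
            }
        }
        return true;
    }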

Part of this difference is due to the asymptotic complexity of the two clipping algorithms, expressed in terms of the number of intersection point calculations: the Sutherland-Hodgeman algorithm has a complexity of O(c*n), where "c" is the number of crossing edges, i.e., c << m, where "m" is the number of vertices in the clipee polygon, and "n" is the number of edges in the clipper. The Greiner-Hormann algorithm has a complexity of O(n*m); on top of that, each intersection vertex requires a significant amount of additional processing.

Divide and conquer

To offset some of the additional complexity of allowing arbitrary target polygons to be specified, a quadtree spatial index was introduced. The quadtree does for 2D searches what a binary tree does for linear searches: it reduces the number of operations from O(n) to O(log(n)).

First up, each polygon is wrapped with an axis-aligned bounding box (AABB), which is just an educated-sounding way of saying that the minimum and maximum values of the vertices are recorded for both x and y dimensions of a polygon. This step already offers us a tremendous potential speed-up, because two polygons can only intersect if their bounds overlap. The bounds check is reduced to four comparisons, which can be implemented using short-circuit boolean operators, so non-overlapping bounds can be detected with as little as a single comparison in the best case.
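In code, the whole test is tiny; something along these lines:

    struct AABB { double min_x, min_y, max_x, max_y; };

    // Boxes overlap only if they overlap on both axes; '&&' short-circuits,
    // so a miss on the x-axis is rejected after a single comparison.
    bool overlaps(const AABB& a, const AABB& b) {
        return a.min_x <= b.max_x && a.max_x >= b.min_x &&
               a.min_y <= b.max_y && a.max_y >= b.min_y;
    }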

Once each individual polygon has a bounding box, we can start to aggregate them into a scene (internally, mtf_generate_rectangle treats this as a multipolygon with its own bounding box). The quadtree algorithm starts with this global bounding box, and splits it into four quadrants. The bounding box of each quadrant is taken as a clipping polygon, clipping all the polygons to fit exactly inside the quadrant.

After one iteration, we have potentially reduced the number of intersection tests by a factor of four. For example, if we determine (using the bounding boxes) that the photosite aperture polygon falls entirely inside the top-right quadrant, then we only have to process the (clipped) polygons found inside that quadrant. If a quadrant is empty, we can simply skip it; otherwise, we can shrink the bounding box to fit tightly around the remaining clipped polygons (see figure 2 below).

The next logical step is to apply this quadrant subdivision recursively to each of the original quadrants. We can keep on recursively subdividing the quadrants until a manageable number of polygons (or more correctly, a manageable number of polygon edges) is reached in each quadrant subtree. We must balance the cost of further subdivision against the gains of reducing the number of edges in each subdivided quadrant: every time we add another level to the quadtree we add four additional bounds checks, and eventually the cost of the bounds checks adds up. (A sketch of the recursion follows after the figures below.)
Fig 2: Levels 0 (green), 1 (blue) and 2 (magenta) of the Quadtree decomposition of the scene (light gray)
Fig 3: Levels 2 (green), 5 (blue), and 6 (magenta) of the Quadtree decomposition
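Putting the pieces together, the construction can be sketched as follows. The helpers quadrants(), rect() and shrink_to_fit(), as well as the two thresholds, are hypothetical names for this illustration; the real implementation differs in the details.

    const int MAX_DEPTH = 8;       // recursion limit (hypothetical tuning value)
    const size_t MAX_EDGES = 20;   // stop splitting below this many edges

    struct QuadNode {
        AABB bounds;                     // shrunk to fit the clipped polygons
        std::vector<Polygon> polys;      // leaf payload, clipped to 'bounds'
        std::vector<QuadNode> children;  // empty for leaf nodes
    };

    QuadNode build(const AABB& bounds, const std::vector<Polygon>& polys, int depth) {
        QuadNode node{bounds, {}, {}};
        if (depth >= MAX_DEPTH || total_edges(polys) <= MAX_EDGES) {
            node.polys = polys;          // few enough edges: test these directly
            return node;
        }
        for (const AABB& q : quadrants(bounds)) {   // four equal quadrants
            std::vector<Polygon> clipped;
            for (const Polygon& p : polys) {
                Polygon c = clip(p, rect(q));       // clip to the quadrant rectangle
                if (area(c) > 0) clipped.push_back(c);
            }
            if (!clipped.empty()) {                 // empty quadrants are skipped
                node.children.push_back(build(shrink_to_fit(q, clipped), clipped, depth + 1));
            }
        }
        return node;
    }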

If the number of quadtree levels is sufficient, we end up "covering" the target polygons with rectangular tiles (the bounding boxes of the quadrants), providing a coarse approximation of the target polygon shape (see figure 3 above). Every sampling location outside these bounding boxes can be discarded early on, so rendering time is not really influenced by the size of the "background" any more.

If the quadtree is well-balanced, the amount of work (number of actual polygon-polygon intersection tests) can be kept almost constant throughout the entire rendered image, regardless of the number of vertices in the scene. I have confirmed this with some quick-and-dirty tests: halving the number of vertices in a scene (by using a polygon simplification method) has almost no impact on rendering time.

Some examples

Enough talk. Time for some images:


USAF1951-style chart with 4 levels

Siemens star chart
Pinch chart (click for full size)
All the charts above were rendered with a photosite pitch of 4.73 micron, using the Airy + square photosite (100% fill-factor) model at an aperture of f/4, simulating light at 550 nm wavelength. The command for generating the last chart would look something like this:

mtf_generate_rectangle.exe --pixel-pitch 4.73 --aperture 4 -p airy-box -n 0 --target-poly pinch.txt 

where "pinch.txt" is the file specifying the polygon geometry (which happens to be in the same folder as mtf_generate_rectangle.exe in my example above). These target polygon geometry files are included in the MTF Mapper package from version 0.4.16 onwards (their names are usaf1951r.txt, siemens.txt, and pinch.txt.)

OLPF demonstration

The "pinch chart" provides a very clear demonstration of the effects of the Optical Low-Pass Filter (OLPF) found on many DSLRs (actually, most, depending on when you read this).

Before I present some images, first a note about effective chart magnification. Most real-world test charts are printed at a known size, i.e., you can say with confidence that a particular target (say, a USAF1951 pattern) has a known physical size, and thus measures physical resolution expressed in line pairs per millimetre (lp/mm). It is relatively straightforward to extend this to synthetic images generated with mtf_generate_rectangle by carefully scaling your target polygon dimensions. For the time being, though, I prefer to fall back to a pixel-centric view of the universe. In other words, I chose to specify my target polygon geometry in terms of pixel dimensions. This was mostly motivated by my desire to illustrate specific effects (aliasing, etc.) visually. Just keep that in mind: the images I present below are not intended to express resolution in physical units; they are pretty pictures.
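(Should you want physical units anyway, the conversion is simple: lp/mm = (cycles per pixel) / (pixel pitch in mm). At the 4.73 micron pitch used in these simulations, 0.25 cycles per pixel corresponds to 0.25 / 0.00473 ≈ 52.9 lp/mm.)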

With that out of the way, here is a sample of a hypothetical D7000 without an OLPF --- this could be something like the Pentax K-5 IIs.

Hypothetical D7000 without OLPF, f/4

And here is the same simulated image, but this time using an OLPF, i.e., this should be quite close to the real D7000:

D7000 (with OLPF), f/4

I repeated the simulations using an f/8 aperture:

Hypothetical D7000 without OLPF, f/8


and again, the D7000 (with OLPF) simulated at f/8:
D7000 (with OLPF), f/8

Here is a crop comparing the interesting part of the chart across these four configurations:

First up, notice the false detail in the "f/4, no OLPF" panel, occurring to the right of the scale bar tick labelled "1". This is a good example of aliasing --- compare that to the "f/4, OLPF" panel, which just fades to gray mush to the right of its tick mark. In the bottom two panels we can see the situation is significantly improved at f/8, where diffraction suppresses most of the objectionable aliasing.

Friday, 6 December 2013

Simulating microlenses: kicking it up a notch

Preamble

My first stab at simulating microlenses made some strong assumptions regarding the effective shape of the photosite aperture. Reader IlliasG subsequently pointed me to an illustration depicting a more realistic photosite aperture shape --- which happens to be a concave polygon.

At first, it might seem trivial to use this photosite aperture shape in the usual importance sampling algorithm employed by mtf_generate_rectangle. It turns out to be a bit more involved than that ...

The mtf_generate_rectangle tool relied on an implementation of the Sutherland-Hodgeman polygon clipping routine to compute the area of the intersection of the photosite aperture and the target polygon (which is typically a rectangle). The Sutherland-Hodgeman algorithm is simple to implement and reasonably efficient, but it requires the clipping polygon to be convex. A concave photosite aperture therefore called for a new polygon clipping routine that allows concave/concave polygon intersections (astute readers may spot that I could simply have exchanged the clipper/clipee polygons, but I wanted concave/concave intersections anyway). After some reading, it seemed that the Greiner-Hormann algorithm had a fairly simple implementation ...

... but it did not handle the degenerate cases (vertices of clipping/clippee polygons coinciding, or a vertex falling on the edge of the other polygon). Kim's extension solves that problem, but it took me a while to implement.

Effective photosite aperture (with microlenses)

The Suede (on dpreview forums) posted a diagram of the effective aperture shape after taking the microlenses into account. I thumb-sucked an analytical form for this shape, which looks like this (my shape in cyan overlaid on The Suede's image):
The fit of my thumb-sucked approximation is not perfect, but I declare it to be good enough for government work. I decided to call this the rounded-square photosite aperture (that is the identifier used by mtf_generate_rectangle).

I am not sure how to scale this shape relative to the 100% fill-factor square. Intuitively, it seems that the shape should remain inscribed within the square photosite, or otherwise the microlens would be collecting light from the neighbouring photosites too. This type of scaling (as illustrated above) still leaves the corners of the photosite somewhat darkened, which is what we were aiming for. Incidentally, this scaling only gives me a fill-factor of ~89.5%. I guess the "100% fill-factor" claim sometimes seen in connection with microlenses applies to equivalent light-gathering ability, rather than geometric area.

Results

MTF curves for 0-degree step edge


MTF curves for 45-degree step edge

The two plots above illustrate the MTF curves of three possible photosite aperture shapes, combined with an Airy PSF (aperture=f/5.6, photosite pitch=4.73 micron, lambda=550 nm). The first plot is obtained by orienting the step edge at 0 degrees, i.e., our MTF cross-section is along the x-axis of the photosite. In the second plot, the step edge was oriented at 45 degrees relative to the photosite, i.e., it represents the diagonal across the photosite.
Both plots include the MTF curves for an inscribed circle aperture, for comparison. Note that the fill-factors have not been normalized, that is, each aperture appears at its native size, which maximizes aperture area without going outside the square photosite's bounds.

Purely based on its fill factor of ~90%, we would expect the first zero of the rounded-square aperture's MTF curve to land between the 100% fill-factor square and the 78% fill factor circle, which is clearly visible in the first plot. In fact, the rounded-square aperture's MTF curve appears to be a blend of the square and circle curves, which makes sense.

The second plot above shows that the rounded-square aperture still exhibits some anisotropic behaviour, but that the effect is less pronounced than that observed with a square photosite (see this article for more details on anisotropic behaviour); this also seems logical given the shape.

In the real world (well, simulated real world, at least)

The MTF curves show some small but measurable differences between the 100% fill-factor square photosite aperture and the ~90% rounded-square photosite aperture response to a step edge. But can you see these differences in an image?

USAF1951-style chart, f/1.4, 100% fill-factor square photosite aperture
USAF1951-style chart, f/1.4, ~90% fill-factor rounded square photosite aperture


Well ... not really (click on the image to see the full-size version). I even opened up the aperture to f/1.4 to accentuate the differences in the photosite apertures. Just to show you something, here is a rendering using a highly astigmatic photosite aperture (a rectangle 0.01 times the photosite pitch in height, but a full pitch wide):
USAF1951-style chart, f/1.4, 2% fill-factor thin rectangular photosite aperture
Note that this is basically point-sampling in the vertical direction, but box-sampling in the horizontal direction. This shows up as rather severe aliasing (jaggies) in the vertical direction.

In the real real world

So how do these simulated MTF curves compare to actual measured MTF curves? In a previous post I described the method I used to capture the MTF of a Nikon D40 camera with a sharp lens set to an aperture of f/4. Here is a comparison of the simulated MTF curves to the empirically measured MTF curve.

Nikon D40 at f/4
At first glance it might seem that the 100% fill-factor square photosite aperture simulation is marginally closer to the measured curve, but keep in mind that these simulations were both performed with an OLPF split factor of 0.375. This value of 0.375 was determined by trial and error using the 100% fill-factor photosite simulation; it is likely that the optimal OLPF split factor for the rounded-square photosite aperture model is different. In fact, I would expect a slightly larger value, say around 0.38 or 0.385, to perform better, purely on the basis of the difference in fill factor (100% vs ~90%) between the two simulations.

So yes, you could say I am lazy for not optimizing the OLPF split factor for the rounded-square photosite aperture model right now, but I do not feel comfortable doing any sort of quantitative comparison between the models with only one empirical sample at hand (one measured D40 MTF curve). Until such time as I have sufficient data to perform a proper optimization and evaluation of the models, I will leave it at the following statement: it certainly appears that the rounded-square model is a viable approximation of the photosite aperture of the D40.

Thursday, 24 October 2013

Simulating microlenses, first take.

Up to now, mtf_generate_rectangle assumed that the simulated sensor had square pixels with a 100% fill factor. This assumption does not reflect reality all that well, but it does simplify the derivation of analytical MTF curves for certain cases.

The effect of fill factor on a square photosite (assuming that the active part of the photosite is just a smaller square centred in the outer square representing the photosite) is fairly straightforward: we are keeping the sampling rate the same, since the photosite pitch is unaffected, but we are reducing the size of the square being convolved with the incoming image. As a result, we would expect a lower fill factor to yield a better MTF curve, i.e., contrast will be higher than the 100% fill factor baseline. But it is still a good idea to test this, just to be sure ...


Implementing variable fill factors

Using the importance sampling algorithm described here, all we have to do is replace the square polygon representing the active area of the photosite with a smaller one, and we are done. The resulting PSF is thus the convolution of the photosite aperture and the Airy function (representing diffraction through the lens aperture). Unless otherwise stated, results were obtained at a wavelength of 550 nm, a photosite pitch of 4.73 micron, and an aperture of f/8, using a simulated system without an optical low-pass filter (OLPF), which appears to be all the rage lately.
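In sketch form, the rendering of a single pixel looks roughly like this (using clip() and area() helpers in the style of the Sutherland-Hodgeman discussion above; the PSF offsets are assumed to have been drawn elsewhere, e.g., by inverse-CDF sampling of the Airy profile):

    #include <vector>

    // Monte Carlo estimate of one pixel's intensity: average the area of
    // overlap between the target and the photosite aperture, with the
    // aperture displaced by offsets distributed according to the PSF.
    double render_pixel(const Polygon& target, const Polygon& aperture,
                        double cx, double cy, const std::vector<Point>& psf_offsets) {
        double acc = 0;
        for (const Point& d : psf_offsets) {
            Polygon shifted;
            shifted.reserve(aperture.size());
            for (const Point& v : aperture) {
                shifted.push_back({v.x + cx + d.x, v.y + cy + d.y});
            }
            acc += area(clip(target, shifted));
        }
        // normalise so that a fully covered aperture yields an intensity of 1.0
        return acc / (psf_offsets.size() * area(aperture));
    }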

MTF curve of Airy + square photosite PSF, at 100% and 50% fill factors
This result confirms our suspicions: if we decrease the fill factor by shrinking the square photosite aperture, the cut-off frequency of the low-pass sinc() filter is increased correspondingly (see here for an overview of diffraction and box functions). The MTF50 of the 50% fill factor sensor is ≈0.38, compared to an MTF50 of ≈0.34 for the 100% fill factor case.
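As a quick sanity check: the MTF of a box filter of width w (in pixels) is |sin(π·f·w) / (π·f·w)|, with its first zero at f = 1/w cycles per pixel. At 100% fill factor w = 1, so the first zero sits at 1 cycle/pixel; at 50% fill factor the square shrinks to w = √0.5 ≈ 0.707, pushing the first zero out to ≈1.41 cycles/pixel. (The MTF50 values above are lower than these numbers suggest because the Airy MTF is multiplied in.)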

So what are the downsides to using a smaller fill factor? Well, we are allowing substantially more contrast through above the Nyquist frequency (0.5 cycles per pixel), which will definitely increase the chances of aliasing artifacts (moiré, and/or "jaggies"). In the limit, we can imagine the fill factor approaching zero, which gives us a point-sampler, which will result in severe aliasing artifacts, such as the typical jagged edges we see when we render a polygon by taking only one sample at the centre of each pixel.

There is another effect that photographers care deeply about: noise. The relative magnitude of photon shot noise increases as the fill factor decreases: the number of photons collected is directly proportional to the active area of the photosite, and the signal-to-noise ratio of photon shot noise only improves with the square root of the photon count. The simulation above was conducted with zero noise, mostly to illustrate the pure geometric effects of the fill factor.

Speaking of geometric effects, a slight diversion into the interaction between edge orientation and photosite aperture shape is in order.

Square photosites are anisotropic

It is rather important to recall that an MTF curve is only a 1D cross-section of the true 2D MTF. If the 2D MTF is radially symmetric (e.g., the Airy MTF due to a circular lens aperture), then the orientation of our 1D cross-section is irrelevant.

The 2D sinc() function representing the MTF of a square aperture is not radially symmetric, hence the 1D MTF curve is only representative of the specific orientation that was chosen. The results in this post were all derived using a combined Airy and photosite aperture simulation; since the Airy MTF is radially symmetric, and the photosite aperture MTF is not, we can expect the combined system MTF to lack perfect radial symmetry. The question remains, though: is the combined MTF symmetric enough to ignore this matter entirely?

Feeling somewhat lazy today, I chose to evaluate this empirically, rather than deriving the analytical combined MTF at arbitrary orientations. Since we can directly simulate the edge spread function of a given PSF using mtf_generate_rectangle, I decided to vary the orientation of the simulated step edge relative to the simulated photosite grid, which is equivalent to taking our 1D cross-section of the 2D MTF at the chosen orientation.

Before we get to the results, first some predictions: We saw that the first zero of the sinc() low-pass filter of the square photosite aperture moved to a higher frequency when we decreased the fill factor. Intuitively, a wider photosite aperture produces stronger low-pass filtering. The length of the diagonal of a square is √2 × side_length, so we might expect a stronger low-pass filtering effect if the step edge is parallel to a diagonal of the square photosite aperture. And now the results ...
MTF curves of a square photosite (plus diffraction) at different edge orientations
Notice that there is a minute difference: the 45-degree edge orientation produced a slightly weaker low-pass filtering effect!
Subtracting the 45-degree MTF curve from the 0-degree MTF curve gives us a better view of the difference:
MTF difference between 0-degree edge and 45-degree edge
The difference certainly appears to be structured, and not in the expected direction. Well, certainly not the direction that I expected.

Fortunately the explanation is relatively simple. Consider the following diagram:
Representation of the area of the photosite (orange) covered by the step edge (blueish), for 0-degree and 45-degree edge orientations
If w represents the side length of our square, then the left-hand diagram shows us that the area covered by the 0-degree step edge is simply t × w over the range 0 < t < w/2. The right-hand diagram illustrates that the area covered by the 45-degree step edge (bluish triangle) is t × t, over the range 0 < t < √0.5 × w (in both cases we only have to integrate up to the midpoint to study the behaviour in question). The area covered by the step edge can be plotted as functions:
We can see that although the 45-degree case starts out with a lead (if we align the two curves so that they reach an area of 0.5 simultaneously, the 45-degree curve starts at roughly t = -0.2071, i.e., 0.5 - √0.5), the 0-degree case catches up near t = 0.1. From that point onwards, the 0-degree step edge covers a larger part of the photosite aperture than the 45-degree step edge does. In practice, this means that although the 45-degree case is technically "wider", the 0-degree case presents a stronger low-pass filter. Keep in mind that on top of this rather small difference due to the anisotropy of the square photosite aperture, we are blending in the radially symmetric Airy MTF, which further suppresses the anisotropy.

The size of this effect is minute, as can be seen in the MTF difference diagram above. The MTF50 values are ≈0.3407 and ≈0.342 for the 0-degree and the 45-degree cases, respectively. In conclusion, we see that the anisotropy of the square photosite aperture is mostly masked by the strong isotropy of the Airy MTF at f/8. At larger apertures, the anisotropy is likely to be more apparent, but further analyses will be performed with a step edge orientation of 0 degrees only.

Approximating microlenses

It has been suggested that the microlenses alter the effective shape of the active area of a photosite. (For example, reader IlliasG contributed this info here). A regular polygon approximating a circle seems to be a reasonable starting point for simulating more realistic microlenses. Similar to the fill factor implementation, this merely requires swapping out the polygon used to specify the geometry of the active part of the photosite, and performing importance sampling as usual. (If you can point me at a more accurate description of the effective shape of the combined microlens and photosite aperture, I would be happy to incorporate that into MTF Mapper).
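Generating such a polygon is trivial; a quick sketch, reusing the Point/Polygon types from earlier:

    // regular n-gon inscribed in a circle of the given radius, centred on (cx, cy)
    Polygon regular_polygon(int n, double radius, double cx, double cy) {
        const double pi = 3.14159265358979323846;
        Polygon poly;
        for (int i = 0; i < n; i++) {
            double theta = 2.0 * pi * i / n;
            poly.push_back({cx + radius * std::cos(theta),
                            cy + radius * std::sin(theta)});
        }
        return poly;
    }

The area of a regular n-gon with circumradius r is (n/2) × r² × sin(2π/n), so a 60-sided polygon inscribed in a unit photosite (r = 0.5) covers 30 × 0.25 × sin(6°) ≈ 0.784 of the pixel, which explains the ≈78.4% fill factor quoted below (a true circle would give π/4 ≈ 0.7854).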

Before we look at the results, first a prediction: modelling the active area of the photosite as a circular disc, we should see a net decrease of the geometric fill factor, hence the low-pass filtering effect is expected to decrease. 

MTF curve of square photosite aperture (1x1) versus circular photosite aperture (radius 1)
No real surprises in these results. For the circular photosite aperture, I chose the circle inscribed in the square photosite, since this seemed most reasonable. Note that the fill factor of the circular photosite aperture is ≈78.4%, rather than the expected π/4 ≈ 0.7854, because I approximated the circle with a 60-sided regular polygon. So how much of the difference between the 100% fill factor square aperture and the 78% fill factor circular aperture is due directly to fill factor, and how much is due to the actual shape?
By subtracting the MTF curves as indicated in the legend of the plot above, we can see that, after matching the effective fill factor, the remaining differences are quite small. From the red dashed curve we can see that the circular (well, 60-sided regular polygon) photosite aperture behaves isotropically, whereas the 78% fill factor square photosite aperture still exhibits anisotropy (dashed blue curve).


Conclusion

I have not performed sufficient experiments to make any inferences regarding behaviour at larger apertures, but at f/8 on a 4.73 micron pitch, it definitely appears as if the geometric fill factor of the photosite is responsible for the bulk of the difference between a 100% fill factor square photosite and a 78% fill factor inscribed circular photosite aperture.

Once we match the effective fill factors, the differences between the square aperture and the circular aperture are of the same magnitude as the differences due to the anisotropy of the square aperture. At larger apertures, we should see more significant differences, but at f/8 the differences are not as significant as one might suspect.

I would like to revisit my D40 experiment armed with the new fill factor and photosite geometry functionality in MTF Mapper. Stay tuned for that!

MTF Mapper will include new options for controlling photosite aperture fill factor and shape from version 0.4.16 onwards, which should be released shortly.

Friday, 11 October 2013

How sharpness interacts with the accuracy of the slanted edge method

How MTF50 error varies with sharpness (click for a larger version)
Just a brief post to show how the absolute error in MTF50 measurements increases with increasing MTF50 values. The chart above is a box plot of the MTF50 error at a range of MTF50 values.

Before we jump into a discussion of the chart itself, I would like to quickly explain how these values were obtained. Inside the MTF Mapper package you will find a tool called mtf_generate_rectangle.exe (in the Windows distribution). This tool generates synthetic images comprising a black rectangular target on a white background, i.e., exactly like the blocks you see in typical test charts (e.g., Imatest charts). These synthetic images simulate a specified point spread function, optionally adding some realistic sensor noise to create a synthetic image that is quite close to what you would be able to capture with your actual camera.

Since the point spread function controls the resulting MTF50 value of the image, we can choose to generate an image with an exact, known MTF50 value. The chart above was thus obtained by generating a large number of synthetic images at each of the MTF50 levels indicated on the x-axis. The MTF50 error is just the difference between the known MTF50 value of a given synthetic image, and the actual MTF50 value measured by MTF Mapper on the same image. By generating a number of images at each MTF50 level, each image with a pseudo-random noise component that differs slightly from the other images at the same MTF50 level, we obtain the distribution of MTF Mapper's measurement error at the given MTF50 level.

With that out of the way, what can we learn from the chart?

The black bar in the centre of each red box is the median error, which stays fairly close to zero. This is good news, since it means that on average MTF Mapper measurements are unbiased.

The red box itself gives an indication of the spread of the MTF50 measurement error. The most important message here is that the absolute MTF50 measurement error increases with the nominal MTF50 level. If you have a sharp lens, the absolute measurement error (in cycles per pixel, or line pairs per mm) will be greater than that of a soft lens. In my experience, a sharp lens will have an MTF50 value of about 0.25 cycles per pixel or higher when perfectly focused.

If we divide the MTF50 error by the MTF50 level to obtain the relative error (e.g., the percentage error), we still see the same trend of increasing relative error with increasing MTF50 level.  I did not include a plot of that, but MTF Mapper's measurement error remains below 5% at real-world noise levels all the way up to an MTF50 value of 0.5 cycles per pixel. You will never obtain MTF50 values that high from a normal DSLR. For a more realistic value of about 0.3 cycles per pixel (a really, really sharp lens), MTF Mapper's relative measurement error will remain below 2% at real-world noise levels.

The bottom line: it is harder to obtain an accurate MTF50 estimate of a sharp lens than it is to do so for a soft lens. In reality, this means you have to evaluate more samples (images) for sharp lenses than for soft lenses.

What about Imatest or DxOLabs measurements?

I do not own a copy of either, so I could not test their software comprehensively using the same method. I can tell you that other freely available slanted edge implementations (e.g., Mitre) behave in exactly the same way as MTF Mapper did on the same synthetic images.

Looking at the maths behind the slanted edge method, I would expect that all implementations should behave exactly like MTF Mapper in this regard, i.e., the measurement error increases with increasing MTF50 values. This follows directly from the steeper slope we see in the MTF curve of a sharp lens, which means that the MTF50 value is more sensitive to small observation errors, such as those caused by sensor noise.

DxO uses a different method of computing sharpness, but ultimately they end up evaluating the MTF curve as well, so their method is likely to be similarly affected by increasing sensitivity to sensor noise with increasing nominal sharpness.

How to obtain your own copy of MTF Mapper

MTF Mapper is a free-as-in-beer Open Source project, currently hosted on Sourceforge.net. You can download pre-built binaries for both Windows and Ubuntu Linux, as well as the source code if you like.

Thursday, 31 January 2013

Effects of ISO on MTF50 measurements

How does the ISO setting affect my MTF50 measurements?

This question arose in the comments section of Roger Cicala's blog (specifically, this article). The short answer is: the expected MTF50 score (the mean, in other words) is unaffected by high ISO noise, but the reliability of MTF50 measurements decreases with increasing ISO settings.

The mtf_generate_rectangle tool included in the MTF Mapper package allows us to generate synthetic images of black rectangular targets on a white background, while specifying the exact MTF50 value we would like to obtain. In addition, we can simulate sensor noise either as plain additive Gaussian white noise, or using a more realistic (but still simple) noise model that incorporates three noise terms: read noise, photon shot noise, and pixel-response non-uniformity (PRNU).

To generate noise using the more realistic sensor noise model, we must provide three parameters: PRNU magnitude, read noise magnitude (standard deviation in electrons), and ADC gain (electrons per DN). I used Marianne Oelund's data (posted here) for the D7000 sensor. An example of the command to generate a realistic ISO 100 image using our simulated D7000 sensor would be:

./mtf_generate_rectangle -m 0.3 -l --b16  --pattern-noise 0.0085  --read-noise 3.7 --adc-gain 2.643  --adc-depth 12 -c 0.33

(Please excuse the --adc-depth 12 parameter. I know the D7000 has a 14-bit ADC, but specifying 12 here is a dirty hack to produce an output file that covers the full 16-bit range). I have verified that the statistics of a synthetic image generated using these parameters matches that obtained from an actual raw D7000 image (currently, I have only verified at ISO 100).
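For the curious, the noise model amounts to something like the following sketch. This is my paraphrase of the three-term model, not necessarily the exact formulation inside mtf_generate_rectangle; in particular, a real sensor's PRNU is a gain error that is fixed per photosite across frames, whereas this sketch simply draws it per call.

    #include <random>

    // One simulated photosite: PRNU acts as a multiplicative gain error,
    // photon shot noise is Poisson in the collected electrons, read noise is
    // additive Gaussian (in electrons), and the ADC gain converts to DN.
    double simulate_photosite(double mean_electrons, double prnu, double read_noise,
                              double adc_gain, std::mt19937& rng) {
        std::normal_distribution<double> gauss(0.0, 1.0);
        double gain_error = 1.0 + prnu * gauss(rng);        // e.g., prnu = 0.0085
        std::poisson_distribution<long> shot(mean_electrons * gain_error);
        double electrons = static_cast<double>(shot(rng));  // photon shot noise
        electrons += read_noise * gauss(rng);               // e.g., 3.7 e- at ISO 100
        return electrons / adc_gain;                        // e.g., 2.643 e-/DN at ISO 100
    }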

To generate a simulated image at ISO 400, you can use this command:
./mtf_generate_rectangle -m 0.3 -l --b16  --pattern-noise 0.0085  --read-noise 2.5 --adc-gain 0.641  --adc-depth 12 -c 0.33

Note that I have compressed the contrast of the simulated edge quite a bit (the -c 0.33 parameter) to avoid clipping at higher ISO values. This will increase MTF50 standard deviation values slightly --- I usually use -c 0.2 when generating ISO 100 images.

Ok, enough of the preamble. I decided on an MTF50 value of 0.3 cycles/pixel, which is close to the best value I have managed to obtain from the D7000. This corresponds to 979 line pairs per picture height, the units that Roger reports his results in (0.3 cycles/pixel × 3264 pixels of image height ≈ 979 lp/ph). I generated 400 synthetic edges at the various simulated ISO settings; here is the resulting box plot:
Measured MTF50 values (lp/ph), with a target of 979 lp/ph
It is pretty clear from the plot that the median MTF50 value remains largely unchanged over the whole ISO range, only dipping slightly at ISO 12800, but this may simply be a sampling artifact given the large standard deviation.

This graph tells us two important facts: 
  1. The mean MTF50 value (over many experiments) is unaffected by ISO setting, and
  2. The standard deviation of MTF50 values increases with increasing ISO setting. In other words, if you are forced to use ISO 1600, you will have to take many more measurements to obtain a reasonable estimate of your mean MTF50 value.
The same results are presented below in tabular form:


ISO      Mean MTF50 (lp/ph)   Std. dev. (lp/ph)
100      980.0                10.1
200      979.7                12.6
400      979.2                16.7
800      978.8                22.4
1600     979.1                32.1
3200     979.1                45.2
6400     979.2                66.0
12800    976.6                91.8

I did perform a quick Shapiro-Wilk normality test, and the data are consistent with a Gaussian distribution within each ISO category, so you can go ahead and compute confidence intervals using n=400 and the values in the table above.

If you want to perform MTF50 testing at ISO 800, I would recommend that you compute the trimmed mean using only the middle 50% of the data (around the median). Put differently, at ISO 800 you could capture twice as many MTF50 test charts, and compute your mean using the middle 50%, which should yield comparable results to performing the test at ISO 100.

Even so, you should find that about 80% of your measurements will have an error of less than 29 lp/ph at ISO 800, which amounts to only 3%. I would be willing to bet that other errors (quality of your test chart, vibrations, etc.) will probably be of a similar magnitude, so I would not really worry too much about performing MTF50 measurements at ISO 800.

Why is MTF50 not affected by ISO?

These results go counter to any subjective evaluation of sharpness at higher ISO settings. The reason is fairly simple: the slanted edge method used to compute the MTF50 measurements applies fairly heavy noise suppression internally. 

Specifically, the slanted edge method constructs an oversampled edge profile across the edge being measured. This effectively involves computing the mean image intensity along lines running parallel to the edge. Since this mean is unweighted, we obtain maximal suppression of additive Gaussian white noise (this is a known property of the unweighted mean). Because we are averaging along lines parallel to the edge, we expect the signal level (intensity) to be constant, so unweighted averaging gives us the maximum likelihood estimate of a constant with additive Gaussian white noise. The result is that the sensor noise (aggravated by higher ISO settings) is filtered out quite effectively.
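A rough sketch of the binning step illustrates where the averaging happens (this mirrors the idea, with hypothetical parameters, not MTF Mapper's actual code; the conventional 4x oversampling is assumed):

    #include <vector>
    #include <cmath>

    // Build the oversampled edge spread function (ESF): project each pixel
    // centre onto the edge normal (unit vector (nx, ny)), drop it into a bin
    // 1/4 pixel wide, and take the unweighted mean per bin. The per-bin
    // averaging along lines parallel to the edge suppresses the sensor noise.
    std::vector<double> build_esf(const std::vector<double>& img, int width, int height,
                                  double nx, double ny, double edge_dist, int bins) {
        std::vector<double> sum(bins, 0.0);
        std::vector<int> count(bins, 0);
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                double d = nx * x + ny * y - edge_dist;  // signed distance to the edge
                int bin = static_cast<int>(std::floor(d * 4.0)) + bins / 2;
                if (bin >= 0 && bin < bins) {
                    sum[bin] += img[y * width + x];
                    count[bin]++;
                }
            }
        }
        std::vector<double> esf(bins, 0.0);
        for (int i = 0; i < bins; i++) {
            if (count[i] > 0) esf[i] = sum[i] / count[i];  // unweighted mean per bin
        }
        return esf;
    }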

The moral of the story is that the slanted edge method does exactly what it is supposed to do: measure the modulation transfer function (MTF) of the optical system. This is a property of the optical system that is independent of sensor noise. It does lead to some confusion, though, since subjective evaluation of noisy high-ISO images definitely creates the perception of reduced sharpness. This implies that we must appreciate the difference between an MTF estimated using the slanted edge method, and the (subjective) human perception of sharpness, which is sensitive to high-ISO noise.

Why we see increased variability at high ISO

We have seen some evidence (constant mean MTF50 with increasing ISO from the experimental results above) that MTF50 is not affected by typical sensor noise, and we have a plausible mechanism to explain why we should expect slanted edge MTF50 values to be unaffected by increasing sensor noise. Despite this, we observe an increase in MTF50 standard deviation, which could only mean that on some high ISO runs we obtain MTF50 values that differ a lot from the mean MTF50 value.

I can think of two possible explanations:
  1. The slanted edge method cannot suppress the noise completely, thus at higher noise levels we will see some errors creeping in (because the edge profile becomes noisy).
  2. The slanted edge method depends on accurate estimation of the edge orientation. This becomes harder to achieve when the image is noisy. MTF Mapper employs three different techniques to try and improve the accuracy of edge orientation estimates (least-squares fitting to the edge, joint estimation of parallel edges, and systematic fine-tuning to reduce local edge profile variance), but orientation errors do tend to increase with increasing image noise.

Appendix

Of course, people would like to see what the simulated images look like, so here is a selection:
 
ISO 100
ISO 400
ISO 800
ISO 1600
ISO 3200
ISO 6400
ISO 12800

A few comments are in order. These images are simulations of the green channel (which is the channel from which Marianne's data was derived). Also note that the images are displayed here in sRGB space as 8-bit files, but that the actual measurements were performed on their linear 16-bit counterparts. CFA demosaicing may actually affect your MTF50 measurements, but this should not have a huge impact unless the lens has severe spherical or chromatic aberration, so I chose to work only with the green channel for simplicity.