MTF Mapper, a blog by Frans van den Bergh<br /><br />2017-08-22: A brief overview of lens distortion correction<br /><br />Before I post an article on the details of MTF Mapper's automatic lens distortion correction features, I would like to describe in some detail the lens distortion model adopted by MTF Mapper.<br /><br /><h3>The basics</h3><div>Radial lens distortion is pretty much as the name suggests: the lens distorts the apparent radial position of an imaged point, relative to its ideal position predicted by the simple pinhole model. The pinhole model tells us that the position of a point in the scene, P(x, y, z) [assumed to be in the camera reference frame], is projected onto the image plane at position p(x, y) as governed by the focal length f, such that</div><div> p<sub>x</sub> = (P<sub>x</sub> - C<sub>x</sub>) * f/(P<sub>z</sub> - C<sub>z</sub>)</div><div> p<sub>y</sub> = (P<sub>y</sub> - C<sub>y</sub>) * f/(P<sub>z</sub> - C<sub>z</sub>)</div><div>where C(x, y, z) represents the centre of projection of the lens (i.e., the apex of the imaging cone).</div><div><br /></div><div>We can express the point p(x, y) in polar coordinates as p(r, theta), where r<sup>2</sup> = p<sub>x</sub><sup>2</sup> + p<sub>y</sub><sup>2</sup>; the angle theta is dropped, since we assume that the radial distortion is symmetrical around the optical axis.</div><div><br /></div><div>Given this description of the pinhole part of the camera model, we can then model the observed radial position r<sub>d</sub> as </div><div> r<sub>d</sub> = r<sub>u</sub> * F(r<sub>u</sub>)</div><div>where F() is some function that describes the distortion, and r<sub>u</sub> is the undistorted radial position, which we simply called "r" above in the pinhole 
model.</div><div><br /></div><div>Popular choices of F() include:</div><div><ul><li>Polynomial model (simplified version of Brown's model), with <br />F(r<sub>u</sub>) = 1 + k<sub>1</sub> * r<sub>u</sub><sup>2</sup> + k<sub>2</sub> * r<sub>u</sub><sup>4</sup></li><li>Division model (extended version of Fitzgibbon's model), with <br />F(r<sub>u</sub>) = 1 / (1 + k<sub>1</sub> * r<sub>u</sub><sup>2</sup> + k<sub>2</sub> * r<sub>u</sub><sup>4</sup>)</li></ul><div>Note that these models are really just simple approximations to the true radial distortion function of the lens; these simple models persist because they appear to be sufficiently good approximations for practical use.</div></div><div><br /></div><div>I happen to prefer the <i>division model</i>, mostly because it is reported in the literature to perform slightly better than the polynomial model [1, 2].</div><div><br /><h3>Some examples of radial distortion</h3></div><div>Now for some obligatory images of grid lines to illustrate the common types of radial lens distortion we are likely to encounter. 
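As an aside, both models translate directly into code. The following is a minimal Python sketch (the function names are mine, and I assume radii normalized so that r = 1 at the image corners):

```python
def polynomial_model(r_u, k1, k2):
    # Simplified Brown model: r_d = r_u * (1 + k1*r_u^2 + k2*r_u^4)
    return r_u * (1.0 + k1 * r_u**2 + k2 * r_u**4)

def division_model(r_u, k1, k2):
    # Extended Fitzgibbon model: r_d = r_u / (1 + k1*r_u^2 + k2*r_u^4)
    return r_u / (1.0 + k1 * r_u**2 + k2 * r_u**4)

# With k1 = k2 = 0 both models reduce to the identity, i.e. a pure pinhole
# projection; non-zero coefficients warp the radius.
for r_u in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(r_u, polynomial_model(r_u, -0.3, 0.0), division_model(r_u, -0.3, 0.0))
```

Note how similar the two models are; they differ only in whether the correction factor multiplies or divides the radius.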
First off, the undistorted grid:</div><div class="separator" style="clear: both; text-align: center;"></div><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-guRj4PpWwq0/WZvdkpm3HSI/AAAAAAAABek/sVA6fislNBwoVsAqWogZZ2p5WbfAC1v9ACLcBGAs/s1600/gridlines_undistorted.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="400" data-original-width="600" height="266" src="https://3.bp.blogspot.com/-guRj4PpWwq0/WZvdkpm3HSI/AAAAAAAABek/sVA6fislNBwoVsAqWogZZ2p5WbfAC1v9ACLcBGAs/s400/gridlines_undistorted.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 1: What the grid should look like on a pinhole camera</td></tr></tbody></table><div>Add some barrel distortion (k<sub>1</sub> = -0.3, k<sub>2</sub> = 0 using division model) to obtain this:</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-ZmPk5XqLvAU/WZvd5rvJWnI/AAAAAAAABeo/-E7_1MZwuUQ9t_WpuadTslyKJBavx-gkwCLcBGAs/s1600/gridlines_barrel.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="400" data-original-width="600" height="266" src="https://3.bp.blogspot.com/-ZmPk5XqLvAU/WZvd5rvJWnI/AAAAAAAABeo/-E7_1MZwuUQ9t_WpuadTslyKJBavx-gkwCLcBGAs/s400/gridlines_barrel.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 2: Barrel distortion, although I think "Surface of an inflated balloon distortion" would be more apt.</td></tr></tbody></table><div>Note how the outer corners of our grid lines appear at positions closer to the centre than we saw in the undistorted grid. 
We can instead move those corners further outwards from where they were in the undistorted grid to obtain pincushion distortion (k<sub>1</sub> = 0.3, k<sub>2</sub> = 0 using division model):</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-BJbzSxlvDSQ/WZvemrCBRtI/AAAAAAAABew/m0jxpzV5KCAoOftB-dctTbY83WZTEkCbACLcBGAs/s1600/gridlines_pincushion.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="400" data-original-width="600" height="266" src="https://2.bp.blogspot.com/-BJbzSxlvDSQ/WZvemrCBRtI/AAAAAAAABew/m0jxpzV5KCAoOftB-dctTbY83WZTEkCbACLcBGAs/s400/gridlines_pincushion.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 3: Pincushion distortion, although I would prefer "inaccurate illustration of gravitationally-induced distortion in space-time".</td></tr></tbody></table><div>If we combine these two main distortion types, we obtain moustache distortion (k<sub>1</sub> = -1.0, k<sub>2</sub> = 1.1 using division model):</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-e-DCvd7L8vQ/WZvfqJ2uDkI/AAAAAAAABe8/Njr47HhcjhkjvO2z60tO4-hLSY-b0KlvgCLcBGAs/s1600/gridlines_moustache.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="400" data-original-width="600" height="266" src="https://3.bp.blogspot.com/-e-DCvd7L8vQ/WZvfqJ2uDkI/AAAAAAAABe8/Njr47HhcjhkjvO2z60tO4-hLSY-b0KlvgCLcBGAs/s400/gridlines_moustache.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 4: Moustache distortion.</td></tr></tbody></table><div>We can swap the 
order of the barrel and pincushion components to obtain another type of moustache distortion, although I do not know if any extant lenses actually exhibit this combination (k<sub>1</sub> = 0.5, k<sub>2</sub> = -0.5 using division model):</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-X8RDjkHCn4g/WZvgpe__eQI/AAAAAAAABfI/tNxMXxEEqpQjPFRiMAUie9w7NTzNfleFACLcBGAs/s1600/gridlines_moustache_inv.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="400" data-original-width="600" height="266" src="https://3.bp.blogspot.com/-X8RDjkHCn4g/WZvgpe__eQI/AAAAAAAABfI/tNxMXxEEqpQjPFRiMAUie9w7NTzNfleFACLcBGAs/s400/gridlines_moustache_inv.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 5: Alternative (inverted?) moustache distortion.</td></tr></tbody></table><br /><div><h3>Quantifying distortion</h3></div><div>Other than using the k<sub>1</sub> and k<sub>2</sub> parameters (which might be a bit hardcore for public consumption), how would we summarize both the type and the magnitude of a lens' radial distortion? It appears that this is more of a rhetorical question than we would like it to be. 
There are several metrics currently in use, most of them unsatisfying in some respect or another.</div><div><br /></div><div>One of the most widely used metrics is SMIA "TV distortion", which expresses distortion as a percentage in accordance with the following diagram:</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-ISRxtiEu03A/WZv-gKTsnwI/AAAAAAAABfY/Qy7YPun7T402ul4lJzBxxYCuBXuJ2zN-QCLcBGAs/s1600/smia.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="266" data-original-width="454" height="233" src="https://4.bp.blogspot.com/-ISRxtiEu03A/WZv-gKTsnwI/AAAAAAAABfY/Qy7YPun7T402ul4lJzBxxYCuBXuJ2zN-QCLcBGAs/s400/smia.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 6: Slightly simplified SMIA TV distortion</td></tr></tbody></table><div>The SMIA TV distortion metric is just 100*(A - B)/B. If the value is negative you have barrel distortion, and positive values imply pincushion distortion. If you have moustache distortion of the kind shown in Figures 4 and 5, then you could very likely obtain a value of 0% distortion. 
Whoops!</div><div><br /></div><div>I only show SMIA TV distortion here to make a concrete link to the k<sub>1</sub> parameter, and to highlight that SMIA TV distortion is not useful in the face of moustache distortion.</div><div><br /></div><h3>Using the division model</h3><div>There is one subtlety that is worth pondering a while: are we modelling the forward distortion, i.e., the distortion model maps our undistorted pinhole projected points to their distorted projected points, or are we modelling the reverse mapping, i.e., we model the correction required to map the distorted projected points to their undistorted pinhole projected points?</div><div><br /></div><div>The important point to note is that neither the polynomial model, nor the division model, compels us to choose a specific direction, and both models can successfully be applied in either direction by simply swapping r<sub>d</sub> and r<sub>u</sub> in the equations above. I can think of two practical implications of choosing a specific direction:</div><div><ol><li>If we choose the forward direction (such as presented above in "The basics") where r<sub>d</sub> = r<sub>u</sub> * F(r<sub>u</sub>), then we must have a way of inverting the distortion if we want to correct an actual distorted image as received from the camera. If we undistort an entire image, then we would prefer to have an efficient implementation of the reverse mapping, i.e., an efficient way to recover r<sub>u</sub> from a given observed r<sub>d</sub>. It is not immediately clear that a closed-form solution to this reverse mapping exists, and we may have to resort to an iterative method. Depending on how we plan to obtain our distortion coefficients k<sub>1</sub> and k<sub>2</sub>, the forward distortion approach may turn out to be far more computationally costly than the reverse distortion approach. 
To summarize: inverting the distortion model for each pixel in the image can be costly.</li><li>The process of estimating k<sub>1</sub> and k<sub>2</sub> typically involves a non-linear optimization process, which can be computationally costly if we have to compute the reverse mapping on a large number of points during each iteration of the optimization algorithm. I have a strong aversion to using an iterative approximation method inside an iterative optimization process, since this is almost certainly going to be rather slow. To summarize: inverting the distortion model during non-linear optimization of k<sub>1</sub> and k<sub>2</sub> can be costly.</li></ol><div>Just how costly is it to compute the undistorted points given the distorted points and a forward distortion model?</div></div><div><ul><li>Polynomial model: <br />r<sub>d</sub> = r<sub>u</sub> * (1 + k<sub>1</sub> * r<sub>u</sub><sup>2</sup> + k<sub>2</sub> * r<sub>u</sub><sup>4</sup>), or after collecting terms,<br />r<sub>u</sub> + r<sub>u</sub> * k<sub>1</sub> * r<sub>u</sub><sup>2</sup> + r<sub>u</sub> * k<sub>2</sub> * r<sub>u</sub><sup>4</sup> - r<sub>d</sub> = 0<br />k<sub>1</sub> * r<sub>u</sub><sup>3</sup> + k<sub>2</sub> * r<sub>u</sub><sup>5</sup> + r<sub>u</sub> - r<sub>d</sub> = 0<br />Since we are given r<sub>d</sub>, we can compute potential solutions for r<sub>u</sub> by finding the roots of a 5th-order polynomial.</li><li>Division model:<br />r<sub>d</sub> = r<sub>u</sub> / (1 + k<sub>1</sub> * r<sub>u</sub><sup>2</sup> + k<sub>2</sub> * r<sub>u</sub><sup>4</sup>), or<br />r<sub>d</sub> * k<sub>1</sub> * r<sub>u</sub><sup>2</sup> + r<sub>d</sub> * k<sub>2</sub> * r<sub>u</sub><sup>4</sup> - r<sub>u</sub> + r<sub>d</sub> = 0<br />This looks similar to the polynomial model, but at least we only have to find the roots of a 4th-order polynomial, which we can do in closed form using Ferrari's method; conveniently, the quartic has no r<sub>u</sub><sup>3</sup> term, so it is already in depressed form.</li><br /></ul><div>In both cases we have to 
find the all the roots, including the complex ones, and then choose the appropriate real root to obtain r<sub>u</sub> given r<sub>d</sub> (I assume here that the distortion is invertible, which we can enforce in practice by constraining k<sub>1</sub> and k<sub>2</sub> as proposed by Santana-Cedres et al. [3]).<br />Alternatively, we could try a fixed-point iteration scheme, i.e., initially guess that r<sub>u</sub> = r<sub>d</sub>, substitute this into the equation r<sub>u</sub> = r<sub>d</sub> / F(r<sub>u</sub>) to obtain a new estimate of r<sub>u</sub>, rinse and repeat until convergence (this is what OpenCV does). Both of these approaches are far too computationally demanding to calculate for every pixel in the image, so it would appear that we would be better off by estimating the reverse distortion model.</div></div><div><br /></div><div>But there is a trick that we can employ to speed up the process considerably. First, we note that our normalized distorted radial values are in the range [0, 1], if we normalize such that the corner points of our image have r = 1, and the image centre has r = 0. Because the interval is closed, it is straightforward to construct a look-up table to give us r<sub>u</sub> for a given r<sub>d</sub>, using, for example, the root-finding solutions above. If we construct our look-up table such that r<sub>d</sub> is sampled with a uniform step length, then we can use a pre-computed quadratic fit to interpolate through the closest three r<sub>d</sub> values to obtain a very accurate estimate of r<sub>u</sub>. The combination of a look-up table plus quadratic interpolation is almost as fast as evaluating the forward distortion equation. 
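To make this concrete, here is a small Python sketch of both the fixed-point inversion and the look-up-table-plus-quadratic-interpolation trick. The coefficients, table size, and helper names are all illustrative choices of mine, and I use the fixed-point scheme (rather than quartic root-finding) to fill the table:

```python
K1, K2 = -0.2, 0.05  # illustrative division-model coefficients (mild, invertible)

def forward(r_u):
    # Forward division model: r_d = r_u * F(r_u), with F(r) = 1/(1 + k1*r^2 + k2*r^4)
    return r_u / (1.0 + K1 * r_u**2 + K2 * r_u**4)

def invert_fixed_point(r_d, iterations=50):
    # Fixed-point scheme: guess r_u = r_d, then repeatedly substitute into
    # r_u = r_d / F(r_u) until the estimate settles.
    r_u = r_d
    for _ in range(iterations):
        r_u = r_d * (1.0 + K1 * r_u**2 + K2 * r_u**4)
    return r_u

def build_lut(n=1024):
    # Tabulate r_u at uniformly spaced r_d samples; the table must be
    # rebuilt whenever K1 or K2 changes.
    step = forward(1.0) / (n - 1)
    return step, [invert_fixed_point(i * step) for i in range(n)]

def invert_lut(r_d, step, table):
    # Quadratic (Lagrange) interpolation through the three closest entries.
    i = min(max(int(round(r_d / step)), 1), len(table) - 2)
    x = r_d / step - i
    y0, y1, y2 = table[i - 1], table[i], table[i + 1]
    return y1 + 0.5 * x * (y2 - y0) + 0.5 * x * x * (y2 - 2.0 * y1 + y0)
```

For these mild coefficients the fixed-point iteration converges quickly; for stronger distortion the root-finding route, together with the invertibility constraints of Santana-Cedres et al. [3], becomes more important.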
The only limitation to the look-up table approach, though, is that we have to recompute the table whenever k<sub>1</sub> or k<sub>2</sub> changes, meaning that the look-up table method is perfect for undistorting an entire image for a given k<sub>1</sub> and k<sub>2</sub>, but probably too expensive to use during the optimization task to find k<sub>1</sub> and k<sub>2</sub>.</div><div><br /></div><div>So this is exactly what MTF Mapper does: the forward distortion model is adopted so that the optimization of k<sub>1</sub> and k<sub>2</sub> is efficient, with a look-up table + quadratic interpolation implementation for undistorting the entire image.<br /><br /><h3>Some further observations on the models</h3></div><div>If you stare at the equation for the inversion of the division model for a while, you will see that </div><div> r<sub>d</sub> * k<sub>1</sub> * r<sub>u</sub><sup>2</sup> + r<sub>d</sub> * k<sub>2</sub> * r<sub>u</sub><sup>4</sup> - r<sub>u</sub> + r<sub>d</sub> = 0</div><div>neatly reduces to</div><div> r<sub>d</sub> * k<sub>1</sub> * r<sub>u</sub><sup>2</sup> - r<sub>u</sub> + r<sub>d</sub> = 0</div><div>if we assume that k<sub>2</sub> = 0. This greatly simplifies the root-finding process, since we can use the well-known quadratic formula, or at least, the numerically stable version of it. This is such a tempting simplification of the problem that many authors [1, 2] claim that a division model with only a single k<sub>1</sub> parameter is entirely adequate for modeling radial distortion in lenses.</div><div>That, however, is demonstrably false in the case of moustache distortion, which requires a local extremum or inflection point in the radial distortion function. 
For example, the distortion function that produces Figure 4 above looks like this:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-OEPX1nNkiLs/WZvdia_i6oI/AAAAAAAABes/x-0l3l1emV0FobS3xCEwk3sc5nU-JoSZgCEwYBhgL/s1600/moustache.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="480" data-original-width="640" height="300" src="https://1.bp.blogspot.com/-OEPX1nNkiLs/WZvdia_i6oI/AAAAAAAABes/x-0l3l1emV0FobS3xCEwk3sc5nU-JoSZgCEwYBhgL/s400/moustache.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 7: The distortion function F() corresponding to Figure 4.</td></tr></tbody></table>It is clear that the division model with k<sub>2</sub> = 0 cannot simultaneously produce the local minimum observed at the left (r<sub>d</sub> = 0) and the local maximum to the right (r<sub>d</sub> ~ 0.65).<br /><br />Similar observations apply to the polynomial model, i.e., we require k<sub>2</sub> ≠ 0 to model moustache distortion.<br /><br /><h3>Wrapping up</h3></div><div>I think that covers the basics of radial distortion modelling. 
In a future article I will demonstrate how one would go about determining the parameters k<sub>1</sub> and k<sub>2</sub> from a sample image.</div><div><br /></div><h3>References</h3><div><ol><li>Fitzgibbon, A.W., Simultaneous linear estimation of multiple view geometry and lens distortion, Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2001.</li><li>Wu, F., Wei, H., Wang, X., Correction of image radial distortion based on division model, SPIE Optical Engineering, 56(1), 2017.</li><li>Santana-Cedres, D., et al., Invertibility and estimation of two-parameter polynomial and division lens distortion models, SIAM Journal on Imaging Sciences, 8(3):1574-1606, 2015.</li></ol></div><br /><br />2017-08-01: Image interpolation: Fighting the fade, part 1<br /><br />Over the last six months or so I kept on bumping into a particularly vexing problem related to image interpolation: the contrast in an interpolated image drops to zero in the worst case of interpolating at an exact half-pixel shift relative to the original image.<br /><br />Consider the case where you are trying to co-register (align) two images, such as two images captured by the same camera, but with a small translation of the camera between the two shots. If we translate the camera purely in the horizontal direction, then the shift between the two images will be <i>h</i> pixels, where <i>h</i> can be any real number. The integer part of <i>h</i> will not cause us any trouble for reasonable values of <i>h</i>, i.e., values small enough that the two images still overlap. 
The trouble really lies in the fractional part of <i>h</i>, since this forces us to interpolate pixel values from the <i>moving image</i> if we want it to line up correctly with the <i>fixed image.</i><br /><br />The worst-case scenario, as mentioned above, is if the fractional part of <i>h</i> is exactly 0.5 pixels, since this implies that the value of a pixel in the interpolated moving image will be the mean of the two closest pixels from the original moving image. Figure 1 illustrates what such an interpolated moving image will look like for a half-pixel shift; the edges are annotated with their MTF50 values.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-R4aYDhty31c/WYApP-jDYJI/AAAAAAAABb4/A-gEU3tY6LAh548Qt0Qg2DNB3L0DbbcowCLcBGAs/s1600/annotated_shift_05_crop.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1046" data-original-width="1272" height="328" src="https://4.bp.blogspot.com/-R4aYDhty31c/WYApP-jDYJI/AAAAAAAABb4/A-gEU3tY6LAh548Qt0Qg2DNB3L0DbbcowCLcBGAs/s400/annotated_shift_05_crop.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 1: Scaled (Nearest Neighbour interpolation) view of an interpolated moving image that experienced a half-pixel shift. The numbers are the measured MTF50 values, in cycles/pixel.</td></tr></tbody></table>Looking closely at the vertical edges of the gray square, we can see that there are some visible interpolation artifacts manifesting as overshoot and undershoot. This image was interpolated using OMOMS cubic spline interpolation [2], which is the best method that I am aware of. Linear interpolation would produce much more blurring (but no overshoot/undershoot). 
And of course we see a marked drop in MTF50 on the vertical edges!<br /><br />At any rate, the MTF curves for the edges are illustrated in Figure 2. The blue curve corresponds to the horizontal edge (i.e., the direction that experienced no interpolation), and the orange curve corresponds to the vertical edge (a 0.5-pixel horizontal shift interpolation). The green curve was obtained from another simulation where the moving image was shifted by 0.25 pixels.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-jhpJ5EU_aok/WYAqR1b6lDI/AAAAAAAABcA/fTqBIARf1aAbDZNcI_gWH4G0dpD4LKCkACLcBGAs/s1600/mtf_shift.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="473" data-original-width="750" height="252" src="https://3.bp.blogspot.com/-jhpJ5EU_aok/WYAqR1b6lDI/AAAAAAAABcA/fTqBIARf1aAbDZNcI_gWH4G0dpD4LKCkACLcBGAs/s400/mtf_shift.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 2: MTF curves of interpolated moving images corresponding to fractional horizontal shifts of zero (blue), 0.25 pixels (green), and 0.5 pixels (orange).</td></tr></tbody></table><br />Certainly the most striking feature of the orange curve is how the contrast drops to exactly zero at the Nyquist frequency (0.5 cycles/pixel). 
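This zero at Nyquist is easy to reproduce numerically. The sketch below uses plain linear interpolation rather than the OMOMS spline used for the figures, but the half-pixel average that causes the fade is the same (the helper name is mine):

```python
import math

def shifted_contrast_at_nyquist(t):
    # Sample a Nyquist-frequency cosine, cos(pi*n) = (-1)^n, then resample it
    # with a fractional shift t using linear interpolation.
    n = 64
    s = [math.cos(math.pi * i) for i in range(n)]
    shifted = [(1.0 - t) * s[i] + t * s[i + 1] for i in range(n - 1)]
    # Peak-to-peak amplitude of the resampled signal, relative to the original.
    return (max(shifted) - min(shifted)) / (max(s) - min(s))

for t in (0.0, 0.25, 0.333, 0.425, 0.5):
    print(t, shifted_contrast_at_nyquist(t))
```

At t = 0.5 every resampled value is the mean of a +1 and a -1 sample, so the contrast is identically zero; no interpolation kernel can dodge that particular average.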
The smaller 0.25-pixel shift (green curve) shows a dip in contrast around Nyquist, but this would probably not be noticeable in most images.<br /><br />In Figure 3 we can see that this loss of contrast around Nyquist follows a smooth progression as we approach a fractional shift of 0.5 pixels.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-fz2INqOzJG4/WYAsRZBsQQI/AAAAAAAABcM/6axD0KCC3yEyDVn60U-ZhwBerB4PcSHDQCLcBGAs/s1600/mtf_shift_fractions.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="473" data-original-width="750" height="251" src="https://3.bp.blogspot.com/-fz2INqOzJG4/WYAsRZBsQQI/AAAAAAAABcM/6axD0KCC3yEyDVn60U-ZhwBerB4PcSHDQCLcBGAs/s400/mtf_shift_fractions.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 3: MTF curves of interpolated moving images corresponding to fractional horizontal shifts of 0.25 pixels (blue), 0.333 pixels (green), and 0.425 pixels (orange).</td></tr></tbody></table>The conclusion from this experiment is that we really want to avoid interpolating an image with a fractional shift of 0.5 pixels (in either/both horizontal and vertical directions), since this will produce a very noticeable loss of contrast at higher frequencies, i.e., we will lose all the fine details in the interpolated image.<br /><br /><h3>Radial distortion lens correction</h3><div>An applied example of where this interpolation problem crops up is when we apply a radial distortion correction model to improve the geometry of images captured by a lens exhibiting some distortion (think barrel or pincushion distortion). 
I aim to write a more thorough article on this topic soon, but for now it suffices to say that our radial distortion correction model specifies for each pixel (x, y) in our corrected image where we have to go and sample the distorted image.</div><div><br /></div><div>I prefer to use the division model [1], which implies that for a pixel (x, y) in the corrected image, we go and sample the pixel at</div><div> x' = (x - x<sub>c</sub>) / (1 + k<sub>1</sub>r<sup>2</sup> + k<sub>2</sub>r<sup>4</sup>) + x<sub>c</sub></div><div>where</div><div> r = sqrt((x - x<sub>c</sub>)<sup>2</sup> + (y - y<sub>c</sub>)<sup>2</sup>)<br />and (x<sub>c</sub>, y<sub>c</sub>) denotes the centre of distortion (which could be the centre of the image, for example).</div><div>The value of y' is calculated the same way. The actual distortion correction is then simply a matter of visiting each pixel (x, y) in our undistorted image, and setting its value to the interpolated value extracted from (x', y') in the distorted image.</div><div><br />The important part to remember here is that the value (x', y') can assume any fractional pixel value, including the dreaded half-pixel shift.<br /><br /><h3>An example of mild pincushion distortion</h3></div><div>In order to illustrate the effects of radial distortion correction, I thought it best to start with synthetic images with known properties. 
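The remapping formula described above takes only a few lines of code. A minimal Python sketch (the function name is mine; whether the coordinates are normalized before computing r is left to the caller):

```python
def distorted_coords(x, y, xc, yc, k1, k2):
    # Division model: for a pixel (x, y) in the corrected image, return the
    # location (x', y') at which the distorted image must be sampled.
    r2 = (x - xc)**2 + (y - yc)**2       # r^2, so r^4 is simply r2*r2
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return (x - xc) / f + xc, (y - yc) / f + yc
```

Undistorting an image is then a loop over all (x, y), interpolating the distorted image at distorted_coords(x, y, ...); note that the centre of distortion maps to itself.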
Figure 4 illustrates<b> a 100% crop near the top-left corner </b>of the reference image, i.e., what we would have obtained if the lens did not have any distortion.</div><div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-XATb1bSZQUI/WYBXXbpMwfI/AAAAAAAABdA/sDiXdb0XPakp8T321pwTeY9G2ktQQ2iTACLcBGAs/s1600/distort_000_crop.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="680" data-original-width="1000" height="271" src="https://1.bp.blogspot.com/-XATb1bSZQUI/WYBXXbpMwfI/AAAAAAAABdA/sDiXdb0XPakp8T321pwTeY9G2ktQQ2iTACLcBGAs/s400/distort_000_crop.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 4: The pure, undistorted reference image. Note that the closely-spaced black lines blur into gray bars because of the simulated Gaussian Point Spread Function (PSF) with an MTF50 of 0.35 c/p. If you squint hard enough, you can see some traces of the original black bars. Rendered at 400% size with nearest-neighbour upscaling. (click for 100% view)</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"></div><br />I simulated a very mild pincushion distortion with k<sub>1</sub> = 0.025 and k<sub>2</sub> = 0, which produces an SMIA lens distortion figure of about -1.62%. This distortion was applied to the polygon geometry, which was again rendered with a Gaussian PSF with an MTF50 of 0.35 c/p. The result is shown in Figure 5. 
Keep in mind that you cannot really see the pincushion distortion at this scale, since we are only looking at the top-left corner of a much larger image.<br /><div class="separator" style="clear: both; text-align: center;"></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-CvD08QQT5y8/WYBX4arOJPI/AAAAAAAABdE/z-zus8fLfYA_7-Fo6PLRuXKqJNeT6QA8wCLcBGAs/s1600/distort_0025_crop.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="680" data-original-width="1000" height="271" src="https://3.bp.blogspot.com/-CvD08QQT5y8/WYBX4arOJPI/AAAAAAAABdE/z-zus8fLfYA_7-Fo6PLRuXKqJNeT6QA8wCLcBGAs/s400/distort_0025_crop.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 5: Similar to Figure 4, but with about 1.62% pincushion distortion applied to the polygon geometry. Rendered at 400% size with nearest-neighbour upscaling. (click for 100% view)</td></tr></tbody></table><br />We can see the first signs of trouble in Figure 5: notice how the black/white bars appear to "fade out" at regular intervals. The straight lines of Figure 4 are no longer perfectly straight, nor are they aligned with the image rows and columns. The lines thus cross from one row (or column) to the next, and the gray patches correspond to the regions where the lines fell halfway between two rows (or columns), leading to the apparent loss of contrast.<br /><br />It is important to understand at this point that the fading in Figure 5 is not a processing artifact; this is exactly what would happen if you were to photograph similar thin bars that are not aligned with the image rows/columns.<br /><br />Finally, we arrive at the radial distortion correction phase. 
Figure 6 illustrates what the corrected image would look like if we used standard cubic interpolation to resample the image.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-M_wZUfNlIGQ/WYBYO_41SVI/AAAAAAAABdI/spM2P5W2nekiUJgAKw7nHjuPoZpEyU1jwCLcBGAs/s1600/undistort_0025_cubic_crop.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="680" data-original-width="1000" height="271" src="https://2.bp.blogspot.com/-M_wZUfNlIGQ/WYBYO_41SVI/AAAAAAAABdI/spM2P5W2nekiUJgAKw7nHjuPoZpEyU1jwCLcBGAs/s400/undistort_0025_cubic_crop.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 6: The undistorted version of Figure 5. Resampling was performed using standard cubic interpolation. Rendered at 400% size with nearest-neighbour upscaling. (click for 100% view).</td></tr></tbody></table>Some additional fading appears in Figure 6. If you flick between Figures 5 and 6 (after clicking for 100% view) you will notice that an extra set of fading patches appears in between the original fading patches. These extra fades are the manifestation of the phenomenon illustrated in Figure 2: the contrast drops to zero as the interpolation sample position approaches a fractional pixel offset of 0.5. The interesting thing about these additional fades is that they are not recoverable using sharpening --- once the contrast reaches zero, no amount of sharpening will be able to recover it.<br /><br /><h3>A potential workaround</h3></div><div>The aim of radial distortion correction is to remove the long-range (or large-scale) distortion, since the curving of supposedly straight lines (e.g., building walls) is only really visible once the distortion produces a shift of more than one pixel. 
Unfortunately we cannot simply ignore the fractional pixel shifts --- this would be equivalent to using nearest-neighbour interpolation, with its associated artifacts.</div><div><br /></div><div>Perhaps we can cheat a little: what if we pushed our interpolation coordinates away from a fractional pixel shift of 0.5? Let x' be the real-valued x component of our interpolation coordinate obtained from the radial distortion correction model above. Further, let x<sub>f</sub> be the largest integer less than x' (the floor of x'). If x' - x<sub>f</sub> < 0.5, then let d = x' - x<sub>f</sub>. (We can deal with the d > 0.5 case by symmetry).<br /><br />Now, if d > 0.375, we compress the value of d linearly such that 0.375 <= d' <= 0.425. We can obtain the new value of x', which we can call x", such that x" = x<sub>f</sub> + (x' - x<sub>f</sub> ) * 0.4 + 0.225. Looking back at Figure 3, we see that a fractional pixel shift of 0.425 seems to leave us with at least a little bit of contrast; this is where the magic numbers and thresholds were divined from.<br /><br />Does this work? 
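Before we look at the results, here is the recipe above transcribed into a short sketch (a direct transcription of the formulas, not MTF Mapper's internal implementation):

```python
import math

def compress_fraction(xp):
    """Push the fractional part of an interpolation coordinate away from 0.5.

    Fractions in (0.375, 0.5] are compressed linearly into (0.375, 0.425],
    and the d > 0.5 case is handled by symmetry, so the range (0.425, 0.575)
    is never used as an interpolation phase.
    """
    xf = math.floor(xp)
    d = xp - xf
    if d <= 0.5:
        if d > 0.375:
            d = d * 0.4 + 0.225            # x" = x_f + (x' - x_f)*0.4 + 0.225
    else:
        m = 1.0 - d                        # mirror image of the d < 0.5 case
        if m > 0.375:
            d = 1.0 - (m * 0.4 + 0.225)
    return xf + d
```

Fractional parts below 0.375 (or above 0.625) pass through unchanged, so the remapping only perturbs the coordinates that would have lost the most contrast.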
Well, Figure 7 shows the result of the above manipulation of the interpolation coordinates, followed by the same cubic interpolation method used in Figure 6.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-IcVpn-Cbxpg/WYBfo7eFbqI/AAAAAAAABdY/-YwNHiiQSXQ-aDZbfZtrE-Aw8eKeBhdkgCLcBGAs/s1600/undistort_0025_proposed_no_fs_crop.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="680" data-original-width="1000" height="271" src="https://4.bp.blogspot.com/-IcVpn-Cbxpg/WYBfo7eFbqI/AAAAAAAABdY/-YwNHiiQSXQ-aDZbfZtrE-Aw8eKeBhdkgCLcBGAs/s400/undistort_0025_proposed_no_fs_crop.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 7: The undistorted version of Figure 5. Resampling was performed using the modified interpolation coordinates followed by cubic interpolation. Rendered at 400% size with nearest-neighbour upscaling. (click for 100% view).</td></tr></tbody></table>Careful squinting reveals that the additional fading patches observed in Figure 6 have been reduced noticeably. This looks promising. Of course, one might argue that I have just added some more aliasing to the image. Which might be the case.<br /><br />Further testing will be necessary, especially on more natural looking scenes. I might be able to coax sufficient distortion from one of my lenses to perform some real-world experiments.<br /><br /><h3>Further possibilities</h3></div><div>Using the forced geometric error method proposed above, we can now extract at least some contrast at the frequencies near Nyquist. We also know what the fractional pixel shift was in both x and y, so we know what the worst-case loss-of-contrast would be. 
By combining these two bits of information we can sharpen the image adaptively, where the sharpening strength is adjusted according to the expected loss of contrast.</div><div><br /></div><div>Stay tuned for part two, where I plan to investigate this further.</div><div><br /></div><h3>References</h3><div><ol><li>Fitzgibbon, A.W.: Simultaneous linear estimation of multiple view geometry and lens distortion. In: Proc. IEEE International Conference on Computer Vision and Pattern Recognition, pp. 125–132 (2001).</li><li>Thevenaz, P., Blu, T. and Unser, M.: Interpolation revisited, IEEE Transactions on Medical Imaging, 19(7), pp. 739–758, 2000.</li></ol></div>Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com0tag:blogger.com,1999:blog-6555460465813582847.post-78289050160753291562017-07-18T03:06:00.000-07:002017-07-18T03:06:07.785-07:00Windows binaries are now 64-bitI figured that by 2017 most Windows users will probably be running a 64-bit version of Windows, so it should be reasonably safe to switch to distributing 64-bit binaries from version 0.6 onward.<br /><br />The practical benefit of this move is that larger images can now be processed safely; late in the 0.5 series of MTF Mapper you could cause it to crash by feeding it a 100 MP image. 
While it is possible to rework some of MTF Mapper's code to use substantially less memory (e.g., some of the algorithms can be run in a sliding-window fashion rather than whole-image-at-a-time, and I could add some on-the-fly compression in other places), it just seemed like much less work to switch to 64-bit Windows binaries.<br /><br />That being said, if there is sufficient demand, I am willing to build 32-bit binaries occasionally.<br /><br />There are quite a few new things in the 0.6 series of MTF Mapper (check the Settings dialog of the GUI):<br /><br /><ol><li>Fully automatic radial distortion correction using only one image (preferably of an MTF Mapper test chart, but anything with black trapezoidal targets on a white background will work). Enabling this feature slows down the processing quite a bit, so I do not recommend using this by default. More on this in an upcoming blog article.</li><li>Correction of equiangular (f-theta) fisheye lens images.</li><li>Correction of stereographic fisheye lens images.</li></ol><div>I plan on posting an article or two on these new features, so stay tuned!</div>Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com0tag:blogger.com,1999:blog-6555460465813582847.post-41618979787317099142017-05-03T01:25:00.000-07:002017-05-03T01:25:09.652-07:00New --single-roi input modeMy original vision was for MTF Mapper to be fully automated; all you had to do was provide it with an image of one of the MTF Mapper test charts. The implementation was centered on the idea that detecting a dark, roughly rectangular target on a white background was a much more tractable problem than detecting arbitrary edges (hopefully representing slanted edges) in arbitrary input images. 
Figure 1 illustrates what a suitable MTF Mapper input image looks like.<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-1VFGruWzcPs/WQmEfrDrsuI/AAAAAAAABZo/IYRwvzeQN1on9X7UtbbXq00srSo1Yv_XACLcB/s1600/rect.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://1.bp.blogspot.com/-1VFGruWzcPs/WQmEfrDrsuI/AAAAAAAABZo/IYRwvzeQN1on9X7UtbbXq00srSo1Yv_XACLcB/s1600/rect.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 1: A single target (black rectangle) on a white background. MTF mapper can detect any number of such shapes in your input image; the target objects need not be perfectly rectangular either, as some deviation from perfect 90-degree corners is allowed.</td></tr></tbody></table><div><div>This approach did pay off, and still does, allowing users to design their own test charts that just work with MTF Mapper without requiring specific support for each custom test chart design.</div><div><br /></div><div>As it turns out, many users have a very different workflow which does not allow them to specify their own chart. Examples of this include Jack Hogan's analysis of the DP Review test chart images, or Jim Kasson's razor-blade focus rail experiments. This type of workflow produces a rectangular Region Of Interest (ROI) that contains only a single edge. 
Figure 2 illustrates what a typical input image from this use case looks like.</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-Feil6IrO1FM/WQmFTUNeGqI/AAAAAAAABZ0/81xvWPxX3WMnWdg4itKn-C8UEBkMOpHaACLcB/s1600/f5_500.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://2.bp.blogspot.com/-Feil6IrO1FM/WQmFTUNeGqI/AAAAAAAABZ0/81xvWPxX3WMnWdg4itKn-C8UEBkMOpHaACLcB/s1600/f5_500.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 2: A rectangular ROI containing a single slanted edge.</td></tr></tbody></table><div>In the past, MTF Mapper could only process images that look like Figure 2 by specifying the <span style="font-family: Courier New, Courier, monospace;">-b </span>option, which would add a white border around the image, thereby transforming it to look more like the expected input convention illustrated in Figure 1. This was a bit of a hack, and has some severe drawbacks. The most prominent disadvantage of the <span style="font-family: Courier New, Courier, monospace;">-b</span> option is that the automatic dark target detection code in MTF Mapper could fail to detect the target if the edge contrast was poor, or if the edge was extremely blurry. 
Fussing with the detection threshold (<span style="font-family: Courier New, Courier, monospace;">-t</span> option) sometimes helped, but this just highlighted the fact that the <span style="font-family: Courier New, Courier, monospace;">-b</span> option was a hack.</div></div><div><br /></div><div>From MTF Mapper version 0.5.21 onwards, there is a new option, <span style="font-family: Courier New, Courier, monospace;">--single-roi</span><span style="font-family: inherit;">, which is intended to replace the use of the </span><span style="font-family: Courier New, Courier, monospace;">-b</span><span style="font-family: inherit;"> option when the input images look like Figure 2. </span></div><div><span style="font-family: inherit;">The </span><span style="font-family: Courier New, Courier, monospace;">--single-roi</span><span style="font-family: inherit;"> input mode completely bypasses the automatic thresholding and target detection code, and instead assumes that the input image contains only a single edge. The ROI does not have to be centered perfectly on the edge, but I recommend that your ROI must include <i>at least</i> 30 pixels on each side of the edge. 
MTF Mapper will automatically restrict the analysis to the region of the image that falls within a distance of 28 pixels from the actual edge, so it does not hurt to have a few extra pixels on the sides of the edge (meaning the left and/or right side of an edge oriented as shown in Figure 2).</span></div><div><span style="font-family: inherit;"><br /></span></div><div><span style="font-family: inherit;">A typical invocation would look like this:</span></div><div><span style="font-family: Courier New, Courier, monospace;">mtf_mapper.exe --single-roi -q image.png output_dir</span></div><div><span style="font-family: inherit;">which would produce two files (</span><span style="font-family: Courier New, Courier, monospace;">edge_mtf_values.txt</span><span style="font-family: inherit;">, </span><span style="font-family: Courier New, Courier, monospace;">edge_sfr_values.txt</span><span style="font-family: inherit;">) in </span><span style="font-family: "Courier New", Courier, monospace;">output_dir. </span><span style="font-family: inherit;">The second and third columns of </span><span style="font-family: Courier New, Courier, monospace;">edge_mtf_values.txt</span> give you the image coordinates of the centre of the detected edge (not really that useful in combination with <span style="font-family: Courier New, Courier, monospace;">--single-roi</span>), and the fourth column gives you the measured MTF50 value. To learn the mysteries of the format of the <span style="font-family: "Courier New", Courier, monospace;">edge_sfr_values.txt</span><span style="font-family: inherit;"> file you must first signal the secret MTF Mapper handshake.</span></div><div><br /></div><div>Note that it is also possible to use the <span style="font-family: Courier New, Courier, monospace;">--single-roi</span> mode in conjunction with the MTF Mapper GUI, provided that your images have already been cropped to look like Figure 2. 
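For scripted workflows, the edge_mtf_values.txt columns described above are easy to pull into your own analysis. A short sketch (I am assuming plain whitespace-separated numeric columns; verify against the output of your MTF Mapper version):

```python
def read_edge_mtf(path):
    """Return a list of (x, y, mtf50) tuples from an edge_mtf_values.txt file,
    taking columns 2-3 as the edge-centre image coordinates and column 4 as
    MTF50 (cycles/pixel), as described above. The exact file layout is an
    assumption; inspect a sample file before relying on this."""
    edges = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 4:
                continue  # skip blank or malformed lines
            edges.append((float(parts[1]), float(parts[2]), float(parts[3])))
    return edges
```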
Just add the string "--single-roi" to the "Arguments" field of the Settings dialog; now you can view the SFR curve of your edge as described in <a href="http://mtfmapper.blogspot.co.za/2017/04/view-mtf-sfr-curves-in-gui.html">this post</a>. </div><div><br /></div><div>You can still use the <span style="font-family: Courier New, Courier, monospace;">--bayer red</span> option with the <span style="font-family: Courier New, Courier, monospace;">--single-roi</span> option to process only the red channel (for example) from an un-demosaiced Bayer image, such as produced by <span style="font-family: Courier New, Courier, monospace;">dcraw -4 -d;</span></div><div>just be careful that your ROI is cropped such that the starting row/column of the Bayer pattern is RGGB (the only format currently supported by MTF Mapper).</div><div><br /></div><div><br /></div><div><span style="font-family: inherit;"><br /></span></div>Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com0tag:blogger.com,1999:blog-6555460465813582847.post-27457535195826139702017-04-15T01:02:00.001-07:002017-04-16T11:04:10.012-07:00View MTF (SFR) curves in the GUIThe Easter bunny has delivered. That most elusive of egg-laying mammals has brought you a new GUI feature, which finally completes the feature set I originally envisioned for MTF Mapper.<br /><br />To visualize the MTF curve (or SFR curve, if you prefer), load up any suitable image in MTF Mapper using the menu option "File -> Open". 
Select the "Annotated image" output mode, like so:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-grxGOGwjaeg/WPHFVWY_wxI/AAAAAAAABXk/3vhKIxwiQ3QoeSNyyGkQT8LskpuiuncbACLcB/s1600/gui_open.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="265" src="https://3.bp.blogspot.com/-grxGOGwjaeg/WPHFVWY_wxI/AAAAAAAABXk/3vhKIxwiQ3QoeSNyyGkQT8LskpuiuncbACLcB/s400/gui_open.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Make sure "Annotated image" is selected in the desired output types box</td></tr></tbody></table>You may select any of the other output types (e.g., "Grid") concurrently, except the "Focus position" output type, which is not currently compatible with the other output types.<br /><br />Click the "Open" button, and wait for the outputs to start appearing in the "Data set" tree-view panel. Expand the entry in the tree-view to expose the "annotated" entry, and click on it:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-6QZThAUEHUI/WPHGrzsnD-I/AAAAAAAABXw/FcdSUxb3ugAFDAgI7AEav8s_Iu163agwgCLcB/s1600/gui_annotated.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="265" src="https://3.bp.blogspot.com/-6QZThAUEHUI/WPHGrzsnD-I/AAAAAAAABXw/FcdSUxb3ugAFDAgI7AEav8s_Iu163agwgCLcB/s400/gui_annotated.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Note the cyan-coloured text superimposed on the edges of your black target squares. 
These values are the MTF50 values of the edges, expressed in cycles per pixel.</td></tr></tbody></table>Those cyan-coloured text labels serve two purposes: a) they tell you the MTF50 (cycles per pixel) of the edge on top of which it is drawn, and b) they are clickable (left mouse button) targets to bring up the MTF curve display for the selected edge.<br /><br /><i>A brief digression: If the text label is displayed in yellow (rather than cyan), then it indicates that MTF Mapper has deemed the edge to be of "medium" quality. This usually means that the edge orientation is poor, that is, close to one of the <a href="http://mtfmapper.blogspot.co.za/2015/06/critical-angles.html">critical angles</a>, which means that the displayed MTF50 value may be less accurate than the ideal. The edge will also be displayed in yellow if the edge length is sub-optimal (meaning too few pixels along the edge), which also degrades the MTF accuracy. 
Normally the values displayed in yellow are still usable, but be careful.</i><br /><i><br /></i><i>Occasionally you will see some of the MTF50 labels displayed in red, like in this example:</i><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-l7SchMP67SI/WPHK1l1jXXI/AAAAAAAABX8/15Y84yZwKPUznq7YdIEUIV8_vY8lEGDlwCLcB/s1600/gui_red.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="265" src="https://1.bp.blogspot.com/-l7SchMP67SI/WPHK1l1jXXI/AAAAAAAABX8/15Y84yZwKPUznq7YdIEUIV8_vY8lEGDlwCLcB/s400/gui_red.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The leftmost red label coincides with an edge with a ~25 degree orientation, which is within 2 degrees of the critical angle at 26.565 degrees</td></tr></tbody></table><i>It is possible that an edge labelled in red is still usable, but there is no way to know for certain. 
I would rather recommend that you try to re-align your camera so that no edges end up with red labels, or that you ignore the edges with red labels for any serious analysis.</i><br /><br />Back to the main story: clicking on an edge label pops up the MTF curve display window,<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-TuDMQsYUNn0/WPHMM_mJQfI/AAAAAAAABYI/msIMPugAkgkREtFH5Gh6fIy4soVn9mWjQCLcB/s1600/gui_one_mtf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="265" src="https://3.bp.blogspot.com/-TuDMQsYUNn0/WPHMM_mJQfI/AAAAAAAABYI/msIMPugAkgkREtFH5Gh6fIy4soVn9mWjQCLcB/s400/gui_one_mtf.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The result of a single left-button click on an MTF50 label</td></tr></tbody></table>I hope the plot is self-explanatory. It corresponds to the MTF curve plots found in other slanted edge tools (Imatest, QuickMTF, ImageJ slanted edge plugin, etc.), with the x-axis of the plot indicating spatial frequency in cycles per pixel, and the y-axis indicating the contrast at the indicated frequency. In the top-right corner you can see a tag "MTF50=0.087" in a shade of blue that matches the plotted curve; this indicates the MTF50 value of the edge you just clicked on, as you might have expected.<br /><br />The vertical gray bar is a cursor that follows the mouse, which will read off the actual contrast value corresponding to the MTF curve at the indicated spatial frequency; the read-out of this cursor is displayed just below the plot ("frequency: 0.098 contrast: 0.403" in the example). 
Again, the colour of the "contrast: <xyz>" readout matches that of the plotted curve.<br /><br />While the MTF curve window is open, you may left-click on any other edge in the "annotated" image to replace the contents of the MTF curve window with the data corresponding to the newly selected edge. This includes clicking on edges in <i>any other</i> "annotated" output available in the "Data set" tree-view.<br /><br />If you would like to compare the MTF curves of two edges, select the first edge as above, but hold down <shift> while left-clicking on the second edge. This adds the second edge to the plot:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-Ix5Hd6RZO0A/WPHPTJm8YMI/AAAAAAAABYU/AlQrRSVWIjYvyqIqyGX-NBqnjs3pr5EGwCLcB/s1600/gui_two_mtf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="265" src="https://2.bp.blogspot.com/-Ix5Hd6RZO0A/WPHPTJm8YMI/AAAAAAAABYU/AlQrRSVWIjYvyqIqyGX-NBqnjs3pr5EGwCLcB/s400/gui_two_mtf.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Note the addition of the green curve, and the two green text labels, corresponding to the newly added edge's MTF</td></tr></tbody></table>The read-out below the plot tracks and displays the MTF for both curves at the current spatial frequency, making it easy to read off accurate values for comparison purposes. 
Note that you can again select the second edge from <i>any other</i> "annotated" output in the "Data set" tree-view, making it easy to compare curves from different lenses or cameras.<br /><br />Lastly, you can add a third curve to the plot using the same <shift>+left click method.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-wgRq-y9EI08/WPHQoapR5GI/AAAAAAAABYg/njlXpkFv6OwLyapOJDjhq0L13Ea1hkXWACLcB/s1600/gui_three_mtf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="265" src="https://3.bp.blogspot.com/-wgRq-y9EI08/WPHQoapR5GI/AAAAAAAABYg/njlXpkFv6OwLyapOJDjhq0L13Ea1hkXWACLcB/s400/gui_three_mtf.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Adding a third curve behaves as expected</td></tr></tbody></table><br />If you already have three curves plotted, the last curve is replaced by this action. If you left click on a new edge without holding down shift, the plot reverts to displaying only a single MTF curve (using the newly selected edge's MTF).<br /><br /><h4><span style="background-color: #666666;">Update (2017/04/16)</span></h4><div><span style="background-color: #666666;">I have since added the two save buttons; grab a copy of version 0.5.19 or later to give it a try.</span></div><div><span style="background-color: #7f6000;"><br /></span></div><h4>Future improvements</h4><div>It might be useful to drop a pin, or some other visual marker onto the "annotated" image to indicate which edge was selected. 
It might be even more useful to show the actual ROI used by MTF Mapper, but that information is not currently available in the outputs.</div><div><br /></div>Let me know if you have any other suggestions for useful features to add to the MTF curve display function.<br /><br /><h4>Where?</h4>This feature is available from MTF Mapper 0.5.18 onwards, available from <a href="https://sourceforge.net/projects/mtfmapper/files/windows/">SourceForge.</a> <br /><br />Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com0tag:blogger.com,1999:blog-6555460465813582847.post-55531408242386224092017-04-04T07:14:00.000-07:002017-04-09T04:01:56.640-07:00Focus peak measurement with MTF Mapper: Description and ValidationIt is a truth universally acknowledged that a single man in possession of a large aperture lens must capture images with as shallow a depth of field as he can manage. (Jane, please forgive me ...).<br /><br />All kidding aside, the downside to employing a shallow depth of field is the way in which it accentuates even the smallest focus error. By focus error I mean that the apparent position of the focus plane is not where the photographer intended. And of course there is no such thing as a focus plane, since in reality it is a curved surface, but for convenience I will use the term focus plane here.<br /><br />Even if we accept the convenient notion of a focus plane, we still have not really explained clearly what a focus plane is. One way of describing the focus plane would be to say that it is the distance at which the <i>circle of confusion</i> is minimized (as projected onto the image sensor). Personally, I am not a fan of using the circle of confusion to measure focus (or defocus, to be more precise), mostly because it is hard to measure the circle of confusion. 
The other difficulty with the notion of the circle of confusion is that it conjures up the image of these perfect little circular discs being formed on the image sensor, which is a rather crude simplification that does not take into account the actual point spread function (PSF) of the imaging system.<br /><br />A much more convenient (to me, at least) way of defining a focus plane is to do so in terms of MTF, since this explicitly acknowledges the full PSF. This idea has been proposed recently by Jim Kasson (<a href="http://blog.kasson.com/the-last-word/towards-a-macro-mtf-test-protocol/" target="_blank">example from his blog</a>, <a href="https://www.dpreview.com/forums/post/57877800" target="_blank">sample discussion from DPR forum</a>), using MTF50 as the final criterion. It only takes a little bit of thought to see that <i>circle of confusion diameter </i>and MTF50 are both approximations of the degree of sharpness of an image; note that using MTF50 might discard some of the useful information that we could extract from the full MTF curve, but it is convenient to have only a single value to express our measure of "sharpness" (I am deliberately avoiding the term "resolution" here, since the slanted edge method measures MTF, not resolution).<br /><br />We can plot the "sharpness" measure of our choice (MTF50) as a function of distance from the camera to produce a curve like this one:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-rW0kkD1ERi8/WONf0v9YYJI/AAAAAAAABP4/M_ViSr_FPAMlXkTl9oLf_dhfVWvDp-fFwCLcB/s1600/mtf_dof_curve.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://2.bp.blogspot.com/-rW0kkD1ERi8/WONf0v9YYJI/AAAAAAAABP4/M_ViSr_FPAMlXkTl9oLf_dhfVWvDp-fFwCLcB/s400/mtf_dof_curve.png" width="400" 
/></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 1: An example of MTF50 at a fixed position on a hypothetical image sensor, plotted as a function of the distance between the sensor and the slanted edge target (it happens to be a 50 mm f/1.8 lens at f/2.8)</td></tr></tbody></table><br /><div class="separator" style="clear: both; text-align: center;"></div>Figure 1 illustrates the MTF50 that we would measure as we move our slanted edge target relative to the camera. Now that we have something to visualize, it is easy to explain what a focus plane is: the hypothetical plane that is parallel to our image sensor, located at the distance that maximizes MTF50 (e.g., the peak of the curve, as indicated by the green line in Figure 1). Similarly, we could define depth of field (DOF) as the length of the interval between the two dashed gray lines, where the dashed gray lines correspond to the distances from the camera at which MTF50 = 0.15 cycles per pixel. Note that this is an arbitrary re-definition of DOF only for illustration of the concept as it applies to the curve shown in Figure 1.<br /><br />In this article, I will use the terms "focus peak distance", "focus distance", and "focus plane position" interchangeably to refer to the distance corresponding to the green line in Figure 1. As you can probably deduce from the title, this article deals with the measurement of this focus distance value using MTF Mapper.<br /><br /><h3>A new test chart</h3><div>The "classic" MTF Mapper test charts proved to be inadequate when it came to accurate measurement of the focus peak distance. Firstly, none of the older charts provided the required density of slanted edges to obtain a robust measurement. Secondly, the older charts did not allow MTF Mapper to convert image space (pixel) coordinates to real-world coordinates (in mm). 
The new chart is illustrated in Figure 2:</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-MfzKmm31lIw/WONjN4fjeiI/AAAAAAAABQE/fYURU6qedvwohO2M5k-qT9QUbAnWUtTJQCLcB/s1600/chart.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://2.bp.blogspot.com/-MfzKmm31lIw/WONjN4fjeiI/AAAAAAAABQE/fYURU6qedvwohO2M5k-qT9QUbAnWUtTJQCLcB/s400/chart.png" width="282" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 2: The new "focus" MTF Mapper chart type</td></tr></tbody></table><div>This chart is a 45-degree slanted chart design. The camera should be pointed towards the centre of the chart; the centre of the chart falls on the dashed line, halfway between the two central fiducials (the large black dots). The chart should be tilted at 45 degrees around the axis illustrated with the dashed (horizontal) line. Notice that the large black bars down the centre of the chart decrease in size towards the bottom of the chart --- that end of the chart should be closer to the camera (so that perspective ends up distorting the bars to have roughly the same size in the final image, although this is not critical). 
Figures 3 (a) and (b) illustrate two possible chart orientations relative to the camera.</div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-qQeIAgxdjk4/WOc44L0cUvI/AAAAAAAABUg/_1i-ULWQXjw5CrPvvs0iW5M-TqOq6aWggCLcB/s1600/chart_diagram_na.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://1.bp.blogspot.com/-qQeIAgxdjk4/WOc44L0cUvI/AAAAAAAABUg/_1i-ULWQXjw5CrPvvs0iW5M-TqOq6aWggCLcB/s400/chart_diagram_na.png" width="360" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 3a: One possible set-up, with the chart tilting top-to-bottom at 45 degrees</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-fwfHTyiAzrU/WOc7QXhheuI/AAAAAAAABUw/J_DAWU0_1EAOmOqM2jc4DQM3wUw0k8vBACLcB/s1600/chart_diagram_alt_na.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="190" src="https://4.bp.blogspot.com/-fwfHTyiAzrU/WOc7QXhheuI/AAAAAAAABUw/J_DAWU0_1EAOmOqM2jc4DQM3wUw0k8vBACLcB/s400/chart_diagram_alt_na.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 3b: An alternative set-up, with the chart tilting left-to-right at 45 
degrees</td></tr></tbody></table><br /></div><div class="separator" style="clear: both; text-align: center;"></div><div>The chart does not have to be positioned in a portrait orientation; landscape orientation works just fine (compare Figure 3(a) to 3(b)). The camera can also be used in either landscape or portrait orientation, as long as you can fit most of the chart into the image. If you happen to have a sub-optimal combination of focal length, chart size and distance from the chart, then you may crop the chart a little if you must. It is important that the 45-degree tilt of the chart is around the correct axis (running through the dashed line of the chart shown in Figure 2), and that the other two axes are close to square.</div><div><br /></div><div><b>It is critical to print the chart at the correct size (without "fit to page" scaling). </b>MTF Mapper relies on the fact that distances on the chart are correct --- note the four "+" markers near the corners of the chart, which you can use to verify that your print came out at the right scale. Of course, if you do print with some page scaling, or you print the A3 chart on an A4 page, then MTF Mapper will still work, but the distances that it reports will no longer be accurate. Lastly, note that the fiducials are coded to allow MTF Mapper to identify the correct chart size, so if you pay close attention, you will see the differently sized charts are not just scaled copies of a base chart size.</div><div><br /></div><div><b>This is a manual-focus chart only.</b> The camera should preferably be focused (manually) on the centre of the chart, i.e., roughly at the point halfway between the two central fiducials (black dots). The chart features around this point are not suitable for auto-focus use because there is no way to tell what part of the chart (in the general region around the centre) the camera chooses to focus on. 
Just to clarify: <b>This chart should not be used for PDAF micro-adjustment / AF fine-tuning.</b></div><div><b><br /></b></div><div>A minor digression: Although this chart is not suitable for use with auto-focus, nothing prevents you from using a removable overlay target. You could, for example, use a second printed page containing only a large black rectangle (like the central rectangle in the MTF Mapper "perspective" chart type) to perform the auto-focus operation. Lock the focus, remove the overlay, and capture the image of this new chart. If you plan ahead, you could use fridge magnets to make the process of adding/removing the auto-focus overlay target more convenient. Or you could wait for the eventual release of a new auto-focus MTF Mapper chart I plan on introducing.</div><div><br /></div><h3>A new output type</h3><div>To process images of the new "focus" chart just introduced, a new output type has been added to MTF Mapper. As of MTF Mapper version 0.5.16, this output type is not compatible with the other typical output types produced by MTF Mapper, i.e., when you choose the "focus position" output type, you should not enable any other output types (they will not produce usable output). This is a temporary inconvenience, and I aim to fix this sometime.
The corresponding command-line switch for this new output type is "--focus"; it produces a file called "focus_peak.png".</div><div><br /></div><div>An example of this output type is illustrated in Figure 4:</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-UJaqwQItQTY/WON4k_MPQ9I/AAAAAAAABQw/Ms8hfSIQsbQA6ExqQDFWz52EKl0-tKraQCLcB/s1600/focus_peak.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://4.bp.blogspot.com/-UJaqwQItQTY/WON4k_MPQ9I/AAAAAAAABQw/Ms8hfSIQsbQA6ExqQDFWz52EKl0-tKraQCLcB/s400/focus_peak.png" width="377" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 4: An example of the "--focus" MTF Mapper output. Note that the chart was oriented as shown in Figure 3b</td></tr></tbody></table><div>The curve illustrated in Figure 1 is overlaid on top of the captured image by back-projecting the curve onto the image using the estimated camera perspective transformation, to produce the green curve in Figure 4. The "height" of this curve is simply scaled to fill the image of the chart, so the peak of the curve will always be on the midline of the black slanted edge bars.</div><div>The dark blue line illustrates the intersection of the hypothetical focus plane with the surface of the chart. In the centre of the image illustrated in Figure 4 we see an orange-ish coordinate origin marker (four outwards pointing arrows), representing the physical centre of the chart. The red reticule (with its four inwards pointing arrows) indicates the centre of the captured image; together these two features provide feedback for centering the camera on the chart.</div><div><br /></div><div>Right under the peak of the green curve we see two values reported in cyan-coloured text.
The first line is the MTF50 value measured at the peak, and the second is the focus plane position relative to the centre of the chart. In other words, MTF Mapper subtracts the estimated position (distance from the camera) of the centre of the chart from the focus peak distance to compute the value displayed in this second line below the green curve.</div><div><br /></div><div>Lastly, it may be worth reading my article on <a href="http://mtfmapper.blogspot.co.za/2017/02/automatic-chart-orientation-estimation.html" target="_blank">chart orientation estimation</a>, since the underlying method of extracting the camera pose parameters is the same one used by the "--chart-orientation" output mode. If you are using the command line version of MTF Mapper, take note that you may have to specify the focal ratio of your camera + lens combination in order for the camera pose parameters to be correct. For example, a 105 mm lens mounted on a Nikon APS-C body (23.6 mm sensor width) would require the command "--focal-ratio 4.45" to improve the accuracy of the "Estimated chart distance" value reported at the bottom of the "focus_peak.png" output image. The value 4.45 is derived from (lens focal length)/(sensor width), i.e., 105/23.6 ~ 4.45. Because the "focus peak depth" value reported in the output (the -24.7 mm in Figure 4) is a relative measurement, it is expected that an incorrect "Estimated chart distance" value will <i>not</i> have a large impact, but more testing has to be performed to confirm this.</div><div><br /></div><h3>The principle</h3><div>The curve presented in Figure 1 seems to imply that we have a single slanted edge that we measure as we move it away from the camera, starting at a distance closer than the focus plane distance. This is an entirely valid way of obtaining the measurements required to produce Figure 1, and Jim Kasson has done exactly that. 
It does require a good linear rail, preferably a computer-controlled one, to automate the capture of a large number of images from our desired range of distances (from the camera).</div><div><br /></div><div>We can obtain a fairly decent approximation if we use a 45-degree chart with a large number of slanted edges. The tilt in the chart naturally ensures that these slanted edges appear at different distances from the camera. All that MTF Mapper has to do is extract slanted edge MTF values, and reconstruct the MTF50 vs distance curve.</div><div><br /></div><div>That sounds straightforward, but there is one fairly large caveat: if our edge is slanted (as required for the slanted edge method to work), then that edge will pass through a range of distances, e.g., the starting tip of the edge is closer to the camera, and the endpoint of the edge is further from the camera. If the MTF50 value varies as a function of distance (as illustrated in Figure 1), then strictly speaking the PSF of the image formation process also varies along this edge. This violates the central assumption of the slanted edge method, which implicitly assumes that we can measure the MTF at a single location in the field by examining a small region around that location. In practice, the MTF we measure with the slanted edge method is a blend of the MTFs at the various distances the edge passes through.</div><div><br /></div><div>There is not much we can do about it, but it helps to oversample. Each of the long edges of the slanted edge bars in the "focus" test chart (see Figure 2) is processed with a sliding window that applies the slanted edge method to only a short section of the edge at a time. This approach increases our sampling density whilst minimising the range of depth values over which each slanted edge MTF calculation is performed, i.e., we only assume that the true MTF is constant over a very small section of the edge.
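The sliding-window idea can be sketched as follows. This is an illustrative sketch only, not MTF Mapper's actual implementation; the window length and step size are assumed values chosen for the example:

```python
# Illustrative sketch of sliding-window sampling along a long slanted edge;
# window_px and step_px are assumptions, not MTF Mapper's actual settings.

def sliding_windows(edge_length_px, window_px=40, step_px=10):
    """Break a long edge into short, overlapping segments."""
    windows = []
    start = 0
    while start + window_px <= edge_length_px:
        windows.append((start, start + window_px))
        start += step_px
    return windows

# Each window would yield one (depth, MTF50) sample; the depth is taken at
# the window centre, so the "constant MTF" assumption only has to hold over
# window_px pixels instead of the whole edge.
centres = [0.5 * (a + b) for (a, b) in sliding_windows(200)]
```

With the overlap above, a 200-pixel edge yields far more samples than splitting it into disjoint 40-pixel pieces would, which is exactly the oversampling the text describes.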
The slanted edge method is known to produce estimates with a smaller standard deviation as the length of the edge it is applied to increases (provided the true MTF remains constant); conversely, we expect that each of our individual slanted edge measurements performed with the sliding window method will result in a large standard deviation in the estimated MTF50 value. Fortunately, we can safely assume that our desired MTF50 vs distance curve must be smooth, thus we can fit a smooth model to our multiple noise-contaminated MTF50 measurements. It turns out that a rational polynomial function of order (4, 2) seems to fit rather nicely in all the cases I have examined so far, so that is what MTF Mapper uses internally.</div><div><br /></div><div>This strategy violates any number of model-fitting assumptions (e.g., my noisy samples are bound to be correlated, and the noise might be correlated too), but it seems to work in practice.</div><div><br /></div><div>One last observation: Why are the slanted edge bars oriented so that they run left-to-right if the chart is tilted at 45 degrees top-to-bottom (assuming portrait orientation, as shown in Figure 2)? What would happen if we had only a single edge running top-to-bottom, and we applied the sliding window approach to that edge? It turns out that this top-to-bottom method is viable, but because each short edge segment passes through a larger range of distances (from the camera) compared to the left-to-right edges, the sensitivity of the detection of the peak of the MTF50 vs distance curve is compromised. So it works, just not as well as with the edge orientation of Figure 2.</div><div><br /></div><h3>Accuracy assessment: set-up</h3><div>So does it work? This turns out to be a fairly hard question to answer.
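As a brief aside, the order-(4, 2) rational fit mentioned in the previous section is straightforward to prototype. The sketch below is not MTF Mapper's code: it uses a made-up defocus curve and synthetic noise, and linearises the model y·Q(d) = P(d) so that ordinary least squares applies, then reads the peak off a dense grid:

```python
import numpy as np

# Sketch of fitting a rational function of order (4, 2),
#   mtf(d) ~ P(d) / Q(d),  deg P = 4, deg Q = 2, Q(0) = 1,
# to noisy (depth, MTF50) samples. Rewriting y = P(d)/Q(d) as
#   y = P(d) - y*(q1*d + q2*d^2)
# makes the problem linear in the unknown coefficients.
# The "true" curve below is invented purely for illustration.

def fit_rational_4_2(d, y):
    A = np.column_stack([d**k for k in range(5)] + [-y * d, -y * d**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:5], np.concatenate([[1.0], coef[5:]])

def eval_rational(p, q, d):
    # coefficients are stored lowest-degree first, so reverse for np.polyval
    return np.polyval(p[::-1], d) / np.polyval(q[::-1], d)

rng = np.random.default_rng(0)
d = np.linspace(-30.0, 30.0, 200)        # depth relative to chart centre [mm]
y = 0.25 / (1.0 + (d / 12.0) ** 2)       # invented MTF50-vs-depth curve
y = y + rng.normal(0.0, 0.005, d.size)   # noisy per-window measurements

p, q = fit_rational_4_2(d, y)
grid = np.linspace(-30.0, 30.0, 2001)
peak_depth = grid[np.argmax(eval_rational(p, q, grid))]
```

Note that the linearised least-squares step weights the residuals by Q(d), so it is not quite the maximum-likelihood fit, but for a smooth, well-sampled curve the recovered peak lands very close to the true one.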
One approach would be to validate the single-image-45-degree-chart method against a computer-controlled focusing rail (i.e., physically moving the edge like Jim Kasson does), but I do not have one of those handy.</div><div><br /></div><div>I settled on a rather different approach that relies on the observation that a lens fitted on an extension tube can no longer focus at infinity. From what I could gather, a lens set to focus at infinity will focus at a distance <i>d </i>= <i>f</i>*(<i>f/e</i> + 2) + <i>e</i>, where <i>f </i>is the focal length, and <i>e</i> is the extension length (update: see Appendix A below for a discussion of this formula). I happen to have a Micro-Nikkor 105 mm f/4 Ai lens with a hard stop at infinity. After a bit of iterative experimentation (translation: building something, then going back to the drawing board, then salvaging the hardware) I found that an extension of about 6.4 mm will cause the 105 mm lens to focus at a distance of about 1939 mm. At this distance, the lens covers an object size just a tad smaller than an A3 test chart.</div><div><br /></div><div>Of course, there are some practical problems. Firstly, it is rather difficult to measure a distance of 1939 mm with good accuracy using my available tools. More importantly, I am not quite sure where to measure this distance <i>from </i>(update: As mentioned in Appendix A, this is the total lens conjugate distance, i.e., the distance between the image plane and the focus plane. Since I do not know the principal plane separation distance of my lens, I still cannot use the total lens conjugate distance directly)<i>. </i>Even if I could solve the measurement problem, I would still only end up with a single measurement, and no experimental variables to vary.</div><div><br /></div><div>My solution was to build a variable-length extension tube. 
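The extension formula above is easy to sanity-check numerically (a quick sketch; units are millimetres, and 6.4 mm is the approximate extension quoted in the text):

```python
# Thin-lens focus distance for a lens focused via extension, using the
# formula d = f*(f/e + 2) + e quoted above. Units: millimetres.

def focus_distance(f_mm, e_mm):
    return f_mm * (f_mm / e_mm + 2.0) + e_mm

d = focus_distance(105.0, 6.4)   # -> roughly 1939 mm, matching the text
```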
The idea was that I could preset the effective length of the extension tube with good accuracy if I used shim stock (or feeler gauges) --- all I had to do is build an extension tube from scratch, since none of the commercially available ones appear to go below 8 mm. Here is a photo of my custom extension tube mounted between my D7000 and the 105 mm Nikkor lens:</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-xibY-_P0JkM/WOOP7pQhgWI/AAAAAAAABRQ/X_K9uJP0BIECZsIARi0N_grhbpMp9eC9wCLcB/s1600/mount_with_lens.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="286" src="https://2.bp.blogspot.com/-xibY-_P0JkM/WOOP7pQhgWI/AAAAAAAABRQ/X_K9uJP0BIECZsIARi0N_grhbpMp9eC9wCLcB/s400/mount_with_lens.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 5: The bronze-coloured ring is part of my extension tube</td></tr></tbody></table><div>Here is what the front of my extension tube looks like with the lens removed:</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-2C_8FBkCMmw/WOOQTU9xLUI/AAAAAAAABRU/ucDehD7EyfsCzJifRQZeWZ1NZ5UQ9fw0gCLcB/s1600/mount_only.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="331" src="https://4.bp.blogspot.com/-2C_8FBkCMmw/WOOQTU9xLUI/AAAAAAAABRU/ucDehD7EyfsCzJifRQZeWZ1NZ5UQ9fw0gCLcB/s400/mount_only.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 6: extension tube with the lens removed</td></tr></tbody></table><div>As you can see from Figure 6, it was a bit of a tight fit to build an adaptor that was wide enough to allow 
adjustment without removing the lens, but still small enough to physically fit below the prism housing.</div><div>Here is what the extension tube looks like with some shims installed:</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-lAWXidTLllY/WOOQ_DVrQOI/AAAAAAAABRY/-IGLF-Y7WUwBE9J7_JlLWRjfi_9nShOoACLcB/s1600/front_with_shims.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="322" src="https://2.bp.blogspot.com/-lAWXidTLllY/WOOQ_DVrQOI/AAAAAAAABRY/-IGLF-Y7WUwBE9J7_JlLWRjfi_9nShOoACLcB/s400/front_with_shims.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 7: extension tube with some shims installed, front view</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-auu60KlRHCg/WOORKZ1PGcI/AAAAAAAABRc/__xjXGYMwpcNj3u11wjnHkM6aJlXl3wQwCLcB/s1600/rear_with_shims.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="317" src="https://3.bp.blogspot.com/-auu60KlRHCg/WOORKZ1PGcI/AAAAAAAABRc/__xjXGYMwpcNj3u11wjnHkM6aJlXl3wQwCLcB/s400/rear_with_shims.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 8: extension tube with some shims installed, rear view</td></tr></tbody></table><div>As can be seen in Figure 7, the front part of the extension tube comprises two parts: the front flange (visible as the large ring with the black pen markings), and a Nikon F-mount female bayonet mount. The inner four screws fix the female mount to the outer flange.
</div><div><br /></div><div>The rear flange has an integrated Nikon F-mount male bayonet. I discovered that the male bayonet is a lot easier to manufacture than the female F-mount bayonet --- that probably explains why Nikon sells them :) Figure 8 also shows how the shims are installed between the front and rear flanges. Careful lapping of the flanges (and a bit of shimming with aluminium foil) ensured that the front surface of the female bayonet mount was parallel to the rear surface of the male bayonet mount to within 5 micron.</div><div><br /></div><div>The front of the rear flange looks like this when we open up the extension tube:</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-VTyCNRo6JaE/WOOTgeDKl2I/AAAAAAAABRg/-ugaLEKWYLkIBwA0-WlpIg716Lto3xzPACLcB/s1600/open_adaptor.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="280" src="https://1.bp.blogspot.com/-VTyCNRo6JaE/WOOTgeDKl2I/AAAAAAAABRg/-ugaLEKWYLkIBwA0-WlpIg716Lto3xzPACLcB/s400/open_adaptor.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 9: front face of rear flange seen in the foreground</td></tr></tbody></table><div>And lastly, we can see the rear face of the front flange:</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-8v5xIAFQkMg/WOOT1ACM4vI/AAAAAAAABRk/DIabIm8qgIINfCY18SPROCT75a9R5WyPQCLcB/s1600/adaptor_front_plate.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="285" src="https://3.bp.blogspot.com/-8v5xIAFQkMg/WOOT1ACM4vI/AAAAAAAABRk/DIabIm8qgIINfCY18SPROCT75a9R5WyPQCLcB/s400/adaptor_front_plate.jpg"
width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 10: rear face of front flange.</td></tr></tbody></table><div>Notice the dowel pins in Figure 10: these acted as the registration mechanism so that the flanges always line up correctly without any rotation or tilt.</div><div><br /></div><div>The length of the extension tube can be adjusted by installing three shims between the front and rear flanges. Measurements with a micrometer show that the repeatability of this process was around 5 micron. I also learned that bargain-store feeler gauges are not necessarily manufactured down to the tolerances that this experiment demanded --- I found some evidence that the feeler gauge thickness varied a little bit across their surfaces. I compensated as much as possible by labeling the position at which a particular shim should be installed, and I measured the effective extension tube length rather than relying on the nominal shim thickness.</div><div><br /></div><div>I ended up with the following (effective) shim thicknesses: 130, 94, 72, 58, and 45 micron.</div><div><br /></div><h3>Accuracy assessment: the results</h3><div>The basic experiment involves setting up the chart so that the apparent focus plane position was just slightly in front of the chart center when the 130 micron shims were installed. For each shim set, I then captured 10 images to yield 50 images in total. I repeated the whole experiment a second time to check for repeatability. 
Figure 11 presents the resulting box-and-whisker plot.</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-gmbb8DKoI70/WOOXMpkfT3I/AAAAAAAABRo/36WUXP8KOQk6Ts9VnhZrOHqVA2B9p1ESwCLcB/s1600/shims.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://4.bp.blogspot.com/-gmbb8DKoI70/WOOXMpkfT3I/AAAAAAAABRo/36WUXP8KOQk6Ts9VnhZrOHqVA2B9p1ESwCLcB/s400/shims.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 11: Focus position shift measured by MTF Mapper as a function of shim thickness</td></tr></tbody></table><div>The y-axis denotes the "focus peak depth" value reported by MTF Mapper; all the values are positive, indicating that the measured focus plane position was slightly in front of the chart centre in all cases.</div><div>Other than the large variability (across 10 images) of Set A with the 45 micron shims, it would appear that the individual measurements were quite robust. The typical standard deviation within a particular batch of 10 images was below 0.3 mm.</div><div><br /></div><div>Using the formula presented above, we can compute the expected focus plane position for each of the shim sets; however, we still have no way to measure these absolute distances (and the principal plane separation distance of the lens is unknown). Instead, we can subtract the focus distance predicted by the formula for the 45 micron shim from the predictions for the other shim sets; applying the same subtraction to the focus peak depth values reported by MTF Mapper allows us to perform a relative comparison.
The results are presented in Figure 12:</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-eI5wtqQ52J0/WOOc4aG1q0I/AAAAAAAABR0/fc59s-Sdh-Yy5jRhselV0bMQbXeEoZd2QCLcB/s1600/table1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="222" src="https://2.bp.blogspot.com/-eI5wtqQ52J0/WOOc4aG1q0I/AAAAAAAABR0/fc59s-Sdh-Yy5jRhselV0bMQbXeEoZd2QCLcB/s400/table1.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 12: Summary of results. All values reported in millimeters</td></tr></tbody></table><div><br /></div><div>The second column contains the "focus peak depth" value reported by MTF Mapper. The second last column lists the relative focus peak depth value; these values should be compared to the relative values derived from the formula appearing in the last column.</div><div><br /></div><div>Overall we see a reasonable agreement between the relative values derived from the formula, and the relative values as measured by MTF Mapper. There are some outliers (set B, 58 micron shim), but the differences between expected and measured values are typically below 1 mm. Keep in mind that a 5 micron change in shim thickness produces a change of 1.3 mm in focus plane position using the formula, i.e., the measured values appear to be within the mechanical repeatability of the shimming process itself.</div><div><br /></div><div>The smallest change in shim thickness tested here was 13 micron (45 micron shim set swapped out with 58 micron shim set), followed closely by the 72 vs 58 micron shim combination with a difference of 14 micron.
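The sensitivity figure quoted above (a 5 micron extension change moving the focus plane by roughly 1.3 mm) follows directly from the same formula. A quick sketch, assuming the approximately 6.4 mm base extension from earlier; units are millimetres:

```python
# How much does the focus plane move per small change in extension?
# Uses d = f*(f/e + 2) + e near e = 6.4 mm (the approximate base
# extension from the text). Units: millimetres.

def focus_distance(f_mm, e_mm):
    return f_mm * (f_mm / e_mm + 2.0) + e_mm

f, e = 105.0, 6.4
shift_5um = focus_distance(f, e) - focus_distance(f, e + 0.005)
shift_13um = focus_distance(f, e) - focus_distance(f, e + 0.013)
# shift_5um comes out near 1.3 mm, and shift_13um near 3.5 mm, so a
# 13 micron shim swap should be comfortably visible in the measured
# focus peak depth.
```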
In both those cases it is clear (see Figure 11) that, using the "focus peak depth" values reported by MTF Mapper, one can easily discern the change in focus plane position induced by a 13 micron change in shim thickness.</div><div><br /></div><div>Why would we want to do this? One application would be the calibration of a camera system where we have to shim the flange distance (the distance from the sensor to the front surface of the lens mounting flange) to ensure that the image formed on the sensor is in focus when a reference lens is mounted. This is particularly useful for systems with hard infinity focus stops.</div><div><br /></div><div>Of course, one would have to consider things like actual image magnification relative to sensor resolution when interpreting this "smallest discernible change in flange distance" measurement, because MTF Mapper performs the analysis of images at the pixel level. More testing!<br /><br /><h3>References</h3><div>[Burke2012]: Burke, Michael W., <i>Image Acquisition: Handbook of Machine Vision Engineering</i>, Springer Science & Business Media, 2012.<br /><br /></div><h3>Appendix A</h3></div><div>The formula used to calculate the focus distance of the lens with focal length <i>f</i> and extension <i>e</i> is <i>d </i>= <i>f</i>*(<i>f/e</i> + 2) + <i>e. </i>This formula is taken from [Burke2012, p311], where <i>d </i>is stated to be the total lens conjugate distance. The total lens conjugate distance is the sum of the object-to-lens-centre and image-to-lens-centre distances when looking at the thin lens model.
Burke notes that the derivation of this equation depends on the lens being symmetric, which allows us to assume that <i>d</i> = <i>d<sub>o</sub></i> + 2<i>f </i>+ <i>d<sub>i</sub></i>, where <i>d<sub>o</sub></i> is the object-to-focal-point distance, and <i>d<sub>i</sub></i> is the image-to-focal-point distance.<br /><br />I strongly doubt that my Micro-Nikkor 105 mm f/4 Ai is really a symmetric lens, so I just <i>assume </i>that this formula still gives reasonable results. Burke's formula only applies to a thin lens, which I am fairly certain my lens is not (being a compound lens). The implication of this, from my understanding, is that there is an additional distance <i>d<sub>p</sub></i> that separates the two principal planes which must be added to the total lens conjugate distance, which implies that <i>d</i> = <i>d<sub>o</sub></i> + 2<i>f </i>+ <i>d<sub>i </sub></i>+ <i>d<sub>p</sub></i>. Using my convention above, where <i>d<sub>i</sub></i> is called <i>e </i>(denoting extension), we see that the thick lens version of this equation should be <i>d </i>= <i>f</i>*(<i>f/e</i> + 2) + <i>e </i>+ <i>d<sub>p</sub></i>.<br /><br />Unfortunately I have no idea what the value of <i>d<sub>p</sub></i> would be for my lens. 
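Writing the thick-lens version out for two different extensions <i>e</i><sub>1</sub> and <i>e</i><sub>2</sub> shows why this is not fatal, since the unknown <i>d<sub>p</sub></i> term cancels in the difference:

```latex
d(e) = \frac{f^2}{e} + 2f + e + d_p
\qquad\Rightarrow\qquad
d(e_1) - d(e_2) = f^2\left(\frac{1}{e_1} - \frac{1}{e_2}\right) + (e_1 - e_2)
```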
Serendipitously, I only end up using the difference between <i>d </i>values computed using different values of <i>e</i>, meaning that the subtraction removes the <i>d<sub>p</sub></i> term from the difference, so I can get away with using the thin lens version of the formula.</div><div><br /></div><h2>Automatic chart orientation estimation: validation experiment</h2><div><i>Frans van den Bergh, 2017-02-10 (updated 2017-02-13)</i></div>In my previous post I mentioned that it is rather important to ensure that your MTF Mapper test chart is parallel to your sensor (or that the chart is perpendicular to the camera's optical axis, which is almost the same thing) so that you do not confuse chart misalignment with a tilted lens element. I have added the functionality to automatically estimate the orientation of the MTF Mapper test chart relative to the camera using circular fiducials embedded in the test chart.
Here is an early sample of the output, which nicely demonstrates what I am talking about:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-1AtCBj1zZHU/WJ2lW4DrPOI/AAAAAAAABN4/0_fFrDhkqgwJ-LaD8T0fTixpKc4xU8lxQCLcB/s1600/chart_orientation_sample.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="265" src="https://4.bp.blogspot.com/-1AtCBj1zZHU/WJ2lW4DrPOI/AAAAAAAABN4/0_fFrDhkqgwJ-LaD8T0fTixpKc4xU8lxQCLcB/s400/chart_orientation_sample.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 1: Sample output of chart orientation estimation</td></tr></tbody></table>Figure 1 shows an example of the MTF Mapper "lensprofile" chart type, with the new embedded circular fiducials (they are a bit like 2D circular bar codes). Notice that the actual photo of the chart is rendered in black-and-white; everything that appears in colour was drawn in by MTF Mapper.<br />There is an orange plus-shaped coordinate origin marker (in the centre of the chart), as well as a reticle (the red circle with the four triangles) to indicate where the camera is aimed at. Lastly, we have the three orientation indicators in red, green and blue, showing us the three Tait-Bryan angles: Roll, Pitch and Yaw.<br /><br />But how do I know that the angles reported by MTF Mapper are accurate?<br /><br /><h3>The set-up</h3><div>I do not have access to any actual optics lab hardware, but I do have some machinist tools. Fortunately, being able to ensure that things are flat, parallel or perpendicular is a fairly important part of machining, so this might just work. 
First I have to ensure that I have a sturdy device for mounting my camera; in Figure 2 you can see the hefty steel block that serves as the base of my camera mount.</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-rBPMcRpey44/WJ2osAftkLI/AAAAAAAABOE/dgrXaRXxrk8z27H_RcG7kZAwqKoC54oggCLcB/s1600/DSC_3468_overview.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="363" src="https://2.bp.blogspot.com/-rBPMcRpey44/WJ2osAftkLI/AAAAAAAABOE/dgrXaRXxrk8z27H_RcG7kZAwqKoC54oggCLcB/s400/DSC_3468_overview.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 2: Overview of my set-up</td></tr></tbody></table><div>I machined the steel block on a lathe to produce a "true" block, meaning that the two large faces of the large shiny steel block are parallel, and that those two large faces are also perpendicular to the rear face on which the steel block is standing in the photo. The large black block in Figure 2 is a granite surface plate; this one is flat to something ridiculous like 3.5 micron maximum deviation over its entire surface. The instrument with the clock face is a dial test indicator; this one has a resolution of 2 micron per division. It is used to accurately measure small relative displacements through the pivoting action of the lever you can see in contact with the lens mount flange of the camera body. </div><div><br /></div><div>Using this dial test indicator, surface plate and surface gauge, I first checked that the two large faces of the steel block were parallel: they were parallel to within about 4 micron. Next, I stood up the block on its rear face (bottom face in Figure 2), and measured the perpendicularity. 
The description of that method is a bit outside the scope of this post, but the answer is what matters: near the top of the steel block the deviation from perpendicularity was also about 4 micron. The result of all this fussing with parallelism and perpendicularity is that I know (because I measured it) that my camera mounting block can be flipped through 90 degrees by either placing it on the large face with the camera pointing horizontally, or standing it up with the camera pointing to the ceiling.</div><div><br /></div><div>That was the easiest part of the job. Now I had to align my camera mount so that the actual mounting flange was parallel to the granite surface plate. </div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-dZQON7Sbmf4/WJ2uhvhzMTI/AAAAAAAABOU/cEhOwNIHWNMLqxuHPl9x6QcSF9E3riidQCLcB/s1600/DSC_3465_4pt1.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="311" src="https://2.bp.blogspot.com/-dZQON7Sbmf4/WJ2uhvhzMTI/AAAAAAAABOU/cEhOwNIHWNMLqxuHPl9x6QcSF9E3riidQCLcB/s400/DSC_3465_4pt1.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 3: Still busy tweaking the mounting flange parallel to the surface plate</td></tr></tbody></table><div>The idea is that you keep on adjusting the camera (bumping it with the tripod screw partially tightened, or adding shims) until the dial test indicator reads almost zero at four points, as illustrated between Figures 2 and 3. Eventually I got it parallel to the surface plate to within 10 micron, and called it good.</div><div><br /></div><div>This means that when I flip the steel block into its horizontal position (see Figure 4) the lens mount flange is perpendicular to the surface plate with a reasonably high degree of accuracy.
Eventually, I will arrange my test chart in a similar fashion, but bear with me while I go through the process.</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-uHTBX01IuSw/WJ2wNvNAT8I/AAAAAAAABOg/-ZFMCC8jtnQDIa-HVDUY26DhyemGth6CwCLcB/s1600/DSC_3473_level.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="225" src="https://4.bp.blogspot.com/-uHTBX01IuSw/WJ2wNvNAT8I/AAAAAAAABOg/-ZFMCC8jtnQDIa-HVDUY26DhyemGth6CwCLcB/s400/DSC_3473_level.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 4: Using a precision level to ensure my two reference surfaces are parallel</td></tr></tbody></table>In Figure 4 you can see more of my set-up. The camera is close to its final position, and you can see a precision level placed on the granite surface plate just in front of the camera itself. That spirit level measures down to a one-division movement of the bubble for each 20 micron height change at a distance of one metre, or 0.0011459 decimal degrees if you prefer. I leveled the granite surface plate in both directions. Next, I placed a rotary table about 1 metre from the camera --- you can see it to the left in Figure 4. The rotary table is fairly heavy (always a good thing), quite flat, and will later be used to rotate the test chart. The rotary table was shimmed until it too was level in both directions.<br /><div><br /></div><div>The logic is as follows: I cannot directly measure if the rotary table's surface is parallel with the granite surface plate, but I can ensure that both of them are level, which is going to ensure that their surfaces are parallel to within the tolerances that I am working to here. This means that I know that my camera lens mount is perpendicular to the rotary table's surface. 
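As a quick check, the level's one-division sensitivity quoted above converts to an angle like so (a small sketch):

```python
import math

# One division of the precision level corresponds to a 20 micron height
# change over a 1 metre base; converting that slope to degrees:
angle_deg = math.degrees(math.atan2(20e-6, 1.0))
# -> about 0.0011459 degrees, the figure quoted in the text
```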
All I now have to do is place my test chart so that it is perpendicular to the rotary table's surface, and I can be certain that my test chart is parallel to my camera's mounting flange. I aligned and shimmed my test chart until it was perpendicular to the rotary table top, using a precision square, resulting in the set-up shown in Figure 5.</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-ndM1yhyZSS4/WJ232xt9YFI/AAAAAAAABOw/UILPqpVoTk0pFWIuP3n6n8tXCQ4RaRUAgCLcB/s1600/DSC_3479_final_setup.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="268" src="https://4.bp.blogspot.com/-ndM1yhyZSS4/WJ232xt9YFI/AAAAAAAABOw/UILPqpVoTk0pFWIuP3n6n8tXCQ4RaRUAgCLcB/s400/DSC_3479_final_setup.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 5: overview of the final set-up. Note the obvious change in colour temperature relative to Figure 4. Yes, it took that long to get two surfaces shimmed level.</td></tr></tbody></table><div><br /><h3>One tiny little detail (or make that two)</h3></div><div>Astute readers may have picked up on two important details:</div><div><ol><li>I am assuming that my camera's lens mounting flange is parallel to the sensor. In theory, I could stick the dial test indicator into the camera and drag the stylus over the sensor itself to check, but I do actually use my camera to take photographs occasionally, so no sense in ruining it just yet. Not even in the name of science.</li><li>The entire process above only ensures that I have two planes (the test chart, and the camera's sensor) standing perpendicularly on a common plane. From the camera's point of view, this means there is no up/down tilt, but there may be any amount of left/right tilt between the sensor and the chart. 
This is not the end of the world, since my initial test will only involve the measurement of pitch (as illustrated in Figure 1).</li></ol><h3>The first measurements</h3></div><div><span style="color: #990000;">Note: Results updated on 13/02/2017 to reflect improvements in MTF Mapper code. New results are a bit more robust, i.e., lower standard deviations.</span><br /><span style="color: #990000;"><br /></span>From the set-up above, I know that my expected pitch angle should be zero. Or at least small. MTF Mapper appears to agree: the first measurement yielded a pitch angle of -0.163148 degrees, which is promising. Of course, if your software gives you the expected answer on the first try, you may not be quite done yet. More testing!</div><div><br /></div><div>I decided to shim the base of the plywood board that the test chart was mounted on. The board is 20 mm thick, so the 180 micron shim (0.18 mm) that I happened to have handy should give me a tilt of about 0.52 degrees. I also had a 350 micron (0.35 mm) shim nearby, which yields a 1 degree tilt. That gives me three test cases (~zero degrees, ~zero degrees plus 0.52 degree relative tilt, and ~zero degrees plus 1 degree relative tilt). I captured 10 shots at each setting, which produced the following results:</div><div><ol><li>Expected = 0 degrees. Measurements ranged from -0.163 degrees to -0.153 degrees, for a mean measurement of -0.1597 degrees and a standard deviation of 0.00286 degrees.</li><li>Expected = 0.52 degrees. Measurements ranged from 0.377 to 0.394 degrees, for a mean measurement of 0.3910 degrees with a standard deviation of 0.00509 degrees. Given that our zero measurement started at -0.16 degrees, the relative angle between the two test cases comes to 0.5507 degrees (compared to the expected 0.52 degrees).</li><li>Expected = 1.00 degrees. Measurements ranged from 0.814 to 0.828, for a mean measurement of 0.8210 degrees with a standard deviation of 0.00423 degrees. 
The tilt relative to the starting point is 0.9806 degrees (compared to the expected 1.00 degrees).</li></ol><div>I am calling that good enough for government work. It seems that there may have been a small residual error in my set-up, leading to the initial "zero" measurement coming in at -0.16 degrees instead of zero, or perhaps there is another source of bias that I have not considered.</div></div><div><br /></div><h3>Compound angles</h3><div>Having established that the pitch angle measurement appears to be fairly close to the expected absolute angle, I set out to test the relative accuracy of yaw angle measurements. Since my set-up above does not establish an absolute zero for the yaw angle, I cheated a bit: I used MTF Mapper to bring the yaw angle close to zero by nudging the chart a bit, so I started from an estimated yaw angle of 0.67 degrees. At this setting, I zeroed my rotary table, which, as you can see from Figure 5 above, will rotate the test chart approximately around the vertical (y) axis to produce a desired (relative) yaw angle. At this point I got a bit lazy, and only captured 5 shots per setting, but I did rotate the chart to produce the sequence of relative yaw rotations in 0.5 degree increments. The mean values measured over each set of 5 shots were 0.673, 1.189, 1.685, 2.211, 2.717, and 3.157. If we subtract the initial 0.67 degrees (which represents our zero for relative measurements), then we get 0.000, 0.5165, 1.012, 1.538, 2.044, and 2.484, which seems pretty close to the expected multiples of 0.5.</div><div><br /></div><div>In the final position, I introduced the 0.18 mm shim to produce a pitch angle of 0.5 degrees. Over 5 shots a mean yaw angle of 3.132 degrees was measured (or 2.459 if we subtract out the zero-angle of 0.67). I should have captured a few more shots, since at such small sample sizes it is hard to tell if the added yaw angle has changed the pitch angle, or not. It is entirely possible that I moved the chart while inserting the shim. 
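The shim-to-tilt arithmetic used throughout these tests is simple enough to sketch (Python, using the board and shim dimensions given above):

```python
import math

def shim_tilt_deg(shim_mm, base_mm):
    """Tilt angle (in degrees) produced by a shim under one edge of a base."""
    return math.degrees(math.atan2(shim_mm, base_mm))

# 20 mm thick board, with the two shims described above
print(shim_tilt_deg(0.18, 20.0))  # ~0.52 degrees
print(shim_tilt_deg(0.35, 20.0))  # ~1.0 degree
```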
That is what you get with a shoddy experimental procedure, I guess. Next time I will have to machine a more positive mechanism for adjusting the chart position.</div><div><br /></div><h3>Discussion</h3><div>Note that MTF Mapper could only extract the chart orientation correctly if I provided the focal length of the lens explicitly. My <a href="http://mtfmapper.blogspot.co.za/2017/02/limitations-of-using-single-shot-planar.html" target="_blank">previous post</a> demonstrated why it appears to be impossible to estimate the focal length automatically when the test chart is so close to being parallel with the sensor. This is unfortunate, because it means that there is no way that MTF Mapper can estimate the chart orientation completely automatically --- some user-provided input is required.</div><div><br /></div><div>The good news is that it seems that MTF Mapper can actually estimate the chart orientation with sufficient accuracy to aid the alignment of the test chart. Both repeatability (worst-case spread) and relative error appear to be better than 0.05 degrees, or about three minutes of arc, which compares favourably with the claimed accuracy of Hasselblad's linear mirror unit. Keep in mind that I tested under reasonably good conditions (ISO 100, 1/200 s shutter speed, f/2.8), so my accuracy figures do not represent the worst-case scenario. Lastly, because of the limitations of my set-up, my absolute error was around 0.16 degrees, or 10 minutes of arc; it is possible that actual accuracy was better than this.<br /><br />How does this angular accuracy relate to the DOF of the set-up? To put some numbers on it: I used a 50 mm lens on an APS-C size sensor at a focus distance of about 1 metre. If we take the above results, and simplify them to say that MTF Mapper can probably get us to within 0.1 degrees under these conditions, then we can calculate the depth error at the extreme edges of the test chart. I used an A3 chart, so our chart width is 420 mm. 
If the chart has a yaw angle of 0.1 degrees (and we are shooting for 0 degrees), then the right edge of our chart will be 0.37 mm further away than expected, or our total depth error from the left edge of the chart to the right edge will be twice that, about 0.73 mm. If I run the numbers through vwdof.exe, the "critical" DOF criterion (CoC of 0.01 mm) yields a DOF of 8.95 mm. So our total depth error will be around 8% of our DOF. Will that be enough to cause us to think our lens is tilted when we look at a full-field MTF map? </div><div><br /></div><div>Only one way to find out. More testing!</div><div><br /></div>Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com0tag:blogger.com,1999:blog-6555460465813582847.post-82321813647911641612017-02-08T01:21:00.000-08:002017-04-07T01:31:37.463-07:00Limitations of using single-shot planar targets to perform automatic camera calibrationWhen you are trying to measure the performance of your system across the entire field, it is rather important to ensure that your test chart is parallel to your sensor. If you are not careful, then a slight tilt in your test chart could look very much like a tilted lens element if you are looking at the MTF values, i.e., two opposite corners of your MTF image would appear to be soft: is your lens tilted along the diagonal, or is the chart tilted along the same diagonal?<br /><br />My solution to this problem is to directly estimate the camera pose from the MTF test chart. I have embedded fiducial markers in the latest MTF Mapper test charts which will allow me to measure the angle between your sensor and your test chart. 
This post details a particular difficulty I encountered while implementing the camera pose estimation method as part of MTF Mapper.<br /><br /><h3>The classical approach</h3><div>Classical planar calibration target methods like Tsai [Tsai1987] or Zhang [Zhang2000] prescribe that you capture several images of your planar calibration target, while ensuring that there is sufficient translation and rotation between the individually captured images. From each of the images you can extract a set of correspondences, e.g., the location of a prominent image feature (corner of a square, for example) and the corresponding real-world coordinates of that feature.</div><div><br /></div><div>This sounds tricky, until you realize that you are allowed to express the real-world coordinates in a special coordinate system attached to your planar calibration target. This implies that you can put all the reference features at z=0 in your world coordinate system (their other two coordinates are known through measurement with a ruler, for example), meaning that even if you moved the calibration object (rather than the camera) to capture your multiple calibration images, the model assumes that the calibration object was fixed and the camera moved around it.</div><div><br /></div><div>A set of four such correspondences is sufficient to estimate a 3x3 homography matrix up to a scale factor, since four correspondences yield 8 equations to solve for the 8 free parameters of the matrix. A homography is a linear transformation that can map one plane onto another, such as mapping our planar calibration target onto the image sensor. For each of our captured calibration images we can solve these equations to obtain a different homography matrix. The key insight is that this homography matrix can be decomposed to separate the intrinsic camera parameters from the extrinsic camera parameters. 
We can use a top-down approach to understand how the homography matrix is composed.</div><div><br /></div><div>To keep things a bit simpler, we can assume that the principal point of the system is fixed at the centre of the captured image. We can thus normalize our image coordinates so that the principal point maps to (0,0) in normalized image coordinates, and while we are at it we can divide the result by the width of the image so that <i>x</i> coordinates run from -0.5 to 0.5 in normalized image coordinates. This centering and rescaling generally improves the numerical stability of the camera parameter estimation process. This gives us the intrinsic camera matrix <b>K</b>, such that<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-Rex90LLLTuc/WOdNGmMxchI/AAAAAAAABVA/jtjpUsUatXktFxDgRXkoACBiRXFPALpugCLcB/s1600/eq1_na.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-Rex90LLLTuc/WOdNGmMxchI/AAAAAAAABVA/jtjpUsUatXktFxDgRXkoACBiRXFPALpugCLcB/s1600/eq1_na.png" /></a></div></div><div><div class="separator" style="clear: both; text-align: center;"></div>where <i>f </i>denotes the focal length of the camera. Note that I am forcing square pixels without skew. This appears to be a reasonable starting point for interchangeable lens cameras. 
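As a small illustrative sketch (numpy; the function names are mine, not MTF Mapper's), the normalization step and the resulting one-parameter <b>K</b> look like this:

```python
import numpy as np

def normalize(px, py, width, height):
    """Map pixel coordinates to normalized image coordinates: subtract the
    principal point (assumed to be the image centre) and divide by the width."""
    return ((px - width / 2.0) / width, (py - height / 2.0) / width)

def intrinsic_matrix(f):
    """Intrinsic matrix K for square pixels, zero skew, and a principal point
    of (0, 0) in normalized image coordinates; f is the only free parameter."""
    return np.array([[f, 0.0, 0.0],
                     [0.0, f, 0.0],
                     [0.0, 0.0, 1.0]])

# x coordinates now run from -0.5 to 0.5, as described above
print(normalize(4948.0, 1640.0, 4948.0, 3280.0))  # -> (0.5, 0.0)
```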
We can combine the intrinsic camera parameters and the extrinsic camera parameters into a single 3x4 matrix <b>P</b>, such that<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-3S8JoS1-ecM/WOdNgRu-VFI/AAAAAAAABVE/kndhNPn3qKg3MJnvTuMdSkIX5D-w8scCACLcB/s1600/eq2_na.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-3S8JoS1-ecM/WOdNgRu-VFI/AAAAAAAABVE/kndhNPn3qKg3MJnvTuMdSkIX5D-w8scCACLcB/s1600/eq2_na.png" /></a></div></div><div class="separator" style="clear: both; text-align: center;"></div><div>where the 3x3 matrix <b>R</b> represents a rotation matrix, and the vector <b>t </b>represents a translation vector. The extrinsic camera parameters <b>R</b> and <b>t</b> are often referred to as the camera pose, and represent the transformation from world coordinates (i.e., our calibration target local coordinates) to homogeneous camera coordinates. If we have multiple calibration images, then we obtain a different <b>R</b> and <b>t</b> for each image, but the intrinsic camera matrix <b>K </b>must be common to all views of the chart.</div><div><br /></div><div>The process of estimating <b>K </b>and the set of <b>R</b><sub><i>i</i></sub> and <b>t</b><sub><i>i</i></sub> over all the images <i>i</i> is called <i>bundle adjustment</i> [Triggs1999]. Typically we will use all the available point correspondences (hopefully more than four) from each view to minimize the backprojection error, i.e., we take our known chart-local world coordinates from each correspondence, transform them with the appropriate <b>P </b>matrix, divide by the third (<i>z</i>) coordinate to convert homogeneous coordinates to normalized image coordinates, and calculate the Euclidean distance between this back-projected image point and the measured image coordinates (e.g., output of a corner-finding algorithm) of the corresponding point in the captured image. 
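To make the backprojection error concrete, here is a minimal sketch (numpy; the example <b>P</b> below is my own, with the chart parallel to the sensor at distance d, and is not taken from MTF Mapper):

```python
import numpy as np

def backprojection_error(P, world_xy, observed):
    """Squared backprojection error for a single correspondence. world_xy is a
    chart-local (x, y) point (z = 0 on the planar target); observed is the
    measured position of the same feature in normalized image coordinates."""
    X = np.array([world_xy[0], world_xy[1], 0.0, 1.0])  # homogeneous world point
    p = P @ X                                           # 3x4 camera matrix
    projected = p[:2] / p[2]                            # divide by the third coordinate
    return float(np.sum((projected - np.asarray(observed)) ** 2))

# Example: f = 1.3 (normalized units), chart parallel to the sensor, d = 10
f, d = 1.3, 10.0
P = np.array([[f, 0.0, 0.0, 0.0],
              [0.0, f, 0.0, 0.0],
              [0.0, 0.0, 1.0, d]])
print(backprojection_error(P, (0.1, 0.2), (0.013, 0.026)))  # essentially zero
```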
The usual recommendation is to use a Levenberg-Marquardt algorithm to solve this non-linear optimization problem to minimize the sum of the squared backprojection errors.<br /><br />Strictly speaking, we usually include a radial distortion coefficient or two in the camera model to arrive at a more realistic camera model than the pinhole model presented here, but I am going to ignore radial distortion here to simplify the discussion.<br /><br /><h3>Single-view calibration using a planar target</h3></div><div>From the definition of the camera matrix <b>P </b>above we can see that even if we only have a single view of the planar calibration target, we can still estimate both our intrinsic and extrinsic camera parameters using the usual bundle adjustment algorithms. Zhang observed that when a planar calibration target is employed, we can estimate a 3x3 homography matrix <b>H </b>such that<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-ZrijEHpKfyI/WOdNr9YWudI/AAAAAAAABVI/0UY6O-r5bx0rVqClng2AgF4vPIrXZ6USgCLcB/s1600/eq3_na.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-ZrijEHpKfyI/WOdNr9YWudI/AAAAAAAABVI/0UY6O-r5bx0rVqClng2AgF4vPIrXZ6USgCLcB/s1600/eq3_na.png" /></a></div></div><div class="separator" style="clear: both; text-align: center;"></div><div>where the vectors <b>r</b><sub><i>1</i></sub> and <b>r</b><sub><i>2</i></sub> define the first two basis vectors of the world coordinate frame in camera coordinates, and <b>t</b> is a translation vector. Since we require <b>r</b><sub><i>1</i></sub> and <b>r</b><sub><i>2</i></sub> to be orthonormal, the third basis vector of the world coordinate frame is just the cross product of <b>r</b><sub><i>1</i></sub> and <b>r</b><sub><i>2</i></sub>. 
This little detail explains how the 8 free parameters of the homography <b>H</b> are able to represent all the required degrees of freedom we expect in our full camera matrix <b>P</b>.<br /><br />In the previous section we restricted our intrinsic camera parameters to a single unknown <i>f</i>, since both <i>P</i><sub><i>x</i></sub> and <i>P</i><sub><i>y</i></sub> are already known because we assume the principal point coincides with the image centre. With a little bit of algebraic manipulation we can see that Zhang's orthonormality constraints allow us to estimate the focal length <i>f</i> directly from the homography matrix <b>H </b>(see Appendix A below).<br /><br />So this leaves me with a burning question: if we can estimate all the required camera parameters using only a single view of a planar calibration target, why do all the classical methods require multiple views (with different camera poses)?<br /><br /><h3>Limitations of single-view calibration using planar targets</h3></div><div>To answer that question, we simply have to find an example of where the single-view case would fail to estimate the camera parameters correctly. The simplest case would be to assume that our rotation matrix <b>R</b> is the 3x3 identity matrix (camera axis is perpendicular to planar calibration target), and that our translation vector is of the form [0 0 <i>d</i>] where <i>d </i>represents the distance of the calibration target from the camera's centre of projection. 
This scenario reduces our camera matrix <b>P</b> to<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-P7glQsZOpfY/WOdNyRoNDQI/AAAAAAAABVM/ern6PMA5X5EjoWhjUGW_VRkUOXoC0S8bgCLcB/s1600/eq4_na.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://3.bp.blogspot.com/-P7glQsZOpfY/WOdNyRoNDQI/AAAAAAAABVM/ern6PMA5X5EjoWhjUGW_VRkUOXoC0S8bgCLcB/s1600/eq4_na.png" /></a></div></div><div class="separator" style="clear: both; text-align: center;"></div><div> A given point [<i>x y</i> 0] in world coordinates is thus transformed to [<i>fx fy d</i>] in homogeneous camera coordinates. We can divide out the homogeneous coordinate to obtain our desired normalized image coordinates as [<i>fx</i>/<i>d fy</i>/<i>d</i>].</div><div>And there we see the problem: the normalized image coordinates depend only on the ratio <i>f</i>/<i>d, </i>which implies that we do not have sufficient constraints to estimate both <i>f</i> and <i>d</i> from this single view. The intuitive interpretation is simple to understand: you can always increase <i>d, </i>i.e., move further away from the calibration target while adjusting the focal length <i>f </i>(zooming in) to keep <i>f</i>/<i>d </i>constant without affecting the image captured by the camera.</div><div>This happens because there is no variation in the depth of the calibration target correspondence points expressed in camera coordinates, thus the depth-dependent properties of a perspective projection are entirely absent.<br /><br />We can try to apply the formula in Appendix A to estimate the focal length directly from the homography corresponding to the matrix <b>P</b> above, but we quickly run into a divide-by-zero problem. This should give us a hint. If we choose to ignore the hint, we can apply a bundle adjustment algorithm to estimate both the intrinsic and extrinsic camera parameters from correspondences generated using the matrix <b>P</b>. 
All that this will achieve is that we will find an arbitrary pair of <i>f</i> and <i>d </i>values that satisfy the constant ratio <i>f</i>/<i>d </i>imposed by <b>P</b>.<br /><br /><h3>The middle road</h3></div><div>What happens if we have a slightly less pathological scenario? Let us assume that there is a small tilt between the calibration target plane and the sensor. For simplicity, we can just choose a rotation around the <i>y </i>axis so that<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-r0IzMqxh8yw/WOdN5WJZKAI/AAAAAAAABVQ/T3J1_QB8zLAuiAsqhmksWDHTm5LwufIQgCLcB/s1600/eq7_na.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-r0IzMqxh8yw/WOdN5WJZKAI/AAAAAAAABVQ/T3J1_QB8zLAuiAsqhmksWDHTm5LwufIQgCLcB/s1600/eq7_na.png" /></a></div><div class="separator" style="clear: both; text-align: center;"></div>We know that for a small angle θ, sin(θ) ≈ 0 and cos(θ) ≈ 1, so our matrix <b>P</b> will be very similar to the sensor-parallel-to-chart case above. The corresponding homography <b>H</b> should be<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-Y1v1ajlqTwQ/WOdN8-1PSrI/AAAAAAAABVU/U8WSAspLIjwPqCEm5N9tASOBpAnM5EXjgCLcB/s1600/eq8_na.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-Y1v1ajlqTwQ/WOdN8-1PSrI/AAAAAAAABVU/U8WSAspLIjwPqCEm5N9tASOBpAnM5EXjgCLcB/s1600/eq8_na.png" /></a></div></div><div class="separator" style="clear: both; text-align: center;"></div><div>We can apply the formula in Appendix A to <b>H</b>, which simplifies to f<sup>2</sup> = f<sup>2</sup>, which is a relief. 
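The f/d ambiguity of the degenerate parallel-chart case described earlier is easy to demonstrate numerically (a sketch; the chart dimensions are just an example):

```python
def project(f, d, world_xy):
    """Pinhole projection of a chart point when the chart is parallel to the
    sensor at distance d, with focal length f: the result depends only on f/d."""
    x, y = world_xy
    return (f * x / d, f * y / d)

corner = (0.21, -0.1485)  # an A3 chart corner in chart-local metres (example)
print(project(1.0, 1.0, corner))
print(project(2.0, 2.0, corner))  # doubling both f and d changes nothing
```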
The question is: how accurately can we estimate the homography <b>H </b>using actual correspondences extracted from the captured images?<br /><br />I know from simulations using MTF Mapper that the position of my circular fiducials can readily be estimated to an accuracy of 0.1 pixels under fairly heavy simulated noise. The objective now is to measure the impact of this uncertainty on the accuracy of the homography estimated using OpenCV's <span style="font-family: "courier new" , "courier" , monospace;">findHomography</span><span style="font-family: inherit;"> function. I start out with a camera matrix <b>P </b>like the one above with only a rotation around the <i>y</i> axis. A set of 25 points are generated on my virtual calibration target, serving as the world coordinates (with the same real-world dimensions as the actual A3 chart used by MTF Mapper). These are transformed using <b>P</b> to obtain the `perfect' simulated corresponding image coordinates representing the position of the fiducials. I perturb these perfect coordinates by adding Gaussian noise with a standard deviation of about </span>0.000020210 units, which corresponds to an error of 0.1 pixels, but expressed in normalized image coordinates (divided by 4948, the width of a D7000 raw image). Now I can systematically measure the uncertainty in the focal length estimated with the formula of Appendix A as a function of the angle between the chart and the sensor, θ. 
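A simplified version of this experiment can be sketched as follows. Note the substitutions: a plain DLT stands in for OpenCV's <span style="font-family: "courier new" , "courier" , monospace;">findHomography</span>, and the focal length estimate uses the equal-norm constraint on the first two rotation columns, which is not necessarily the exact formula of Appendix A:

```python
import numpy as np

rng = np.random.default_rng(0)

def homography_dlt(world, image):
    """Estimate a 3x3 homography from point correspondences with a plain DLT
    (a stand-in for OpenCV's findHomography)."""
    A = []
    for (X, Y), (x, y) in zip(world, image):
        A.append([X, Y, 1.0, 0, 0, 0, -x * X, -x * Y, -x])
        A.append([0, 0, 0, X, Y, 1.0, -y * X, -y * Y, -y])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 3)   # null-space vector; overall scale is arbitrary

def focal_from_homography(H):
    """Single-view focal length from the equal-norm constraint on the first
    two rotation columns (scale-invariant, so the DLT scale does not matter)."""
    num = H[0, 0]**2 + H[1, 0]**2 - H[0, 1]**2 - H[1, 1]**2
    den = H[2, 1]**2 - H[2, 0]**2
    return float(np.sqrt(num / den))

# Chart rotated by theta about the y axis at distance d, f in normalized units
theta, f, d = np.radians(5.0), 1.3, 1.0
H_true = np.array([[f * np.cos(theta), 0.0, 0.0],
                   [0.0, f, 0.0],
                   [-np.sin(theta), 0.0, d]])
world = [(x, y) for x in np.linspace(-0.21, 0.21, 5)
                for y in np.linspace(-0.148, 0.148, 5)]
image = []
for X, Y in world:
    p = H_true @ np.array([X, Y, 1.0])
    # perturb by ~0.1 px of noise, expressed in normalized image coordinates
    image.append(tuple(p[:2] / p[2] + rng.normal(0.0, 2.02e-5, size=2)))
print(focal_from_homography(homography_dlt(world, image)))  # close to 1.3
```

As theta shrinks toward zero, the denominator of the focal length estimate collapses, and the same noise produces wildly varying estimates, which is exactly the behaviour shown in Figure 1.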
I ran 100000 iterations at a selection of angles, and calculated the difference between the 75th and 50th percentile of the estimated focal length as a measure of spread.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-LNCItXehbQI/WJrW2SwRYCI/AAAAAAAABNA/qAewV88KoEw3jMLjmpY9SV9ieigVdmZtwCLcB/s1600/spread_f.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="300" src="https://1.bp.blogspot.com/-LNCItXehbQI/WJrW2SwRYCI/AAAAAAAABNA/qAewV88KoEw3jMLjmpY9SV9ieigVdmZtwCLcB/s400/spread_f.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 1</td></tr></tbody></table>In Figure 1 we see that the spread of the focal length estimates increases dramatically once the angle θ drops below about 2 degrees. For the purpose of using the estimated camera pose to measure if you have aligned your chart parallel to your camera sensor, this is really terrible news: essentially, we cannot estimate the focal length of the camera reliably if the chart is close to being correctly aligned.</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/--ISL9G57nlY/WJrg0q6bdsI/AAAAAAAABNg/DmPJqQ1Z9twGJZgZOLqkpz58ISiua5uhwCLcB/s1600/median_f.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="300" src="https://2.bp.blogspot.com/--ISL9G57nlY/WJrg0q6bdsI/AAAAAAAABNg/DmPJqQ1Z9twGJZgZOLqkpz58ISiua5uhwCLcB/s400/median_f.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 2</td></tr></tbody></table><div>Figure 2 shows that the focal length estimate is relatively unbiased for angles above 
about 1 degree, but once the angle becomes small enough, we overestimate the focal length dramatically.<br /><br />This experiment demonstrated that small errors in the estimated position of features (e.g., corners or centres of circular targets) lead to dramatic errors in focal length estimation. Intuitively, this makes sense, since the relative magnitude of perspective effects decreases the closer we approach a parallel alignment between the sensor and the calibration target. Since perspective effects depend on the distance from the chart, and the estimated distance from the chart is effectively controlled by the estimated focal length (assuming the same framing), this seems reasonable.<br /><br />I have tried using bundle adjustment, rather than homography estimation as an intermediate step, but clearly the problem lies with the unfavourable viewing geometry and the resulting subtlety of the perspective effects, not with the algorithm used to estimate the focal length. At least, as far as I can tell.</div><div><br /><h3>Hobson's choice</h3>If we take the focal length of the camera as a given parameter, then the ambiguity is resolved, and we can obtain a valid, unique estimate of the calibration target distance <i>d. </i>This is not entirely surprising, since our assumed constrained intrinsic camera parameters depend only on the focal length <i>f</i>, i.e., <b>K </b>is known, thus the pose of the camera can be estimated for any given view, even the degenerate case where the calibration target is parallel to the sensor.<br /><br />In other words, I see no way other than requiring the user to specify the focal length as an input to MTF Mapper. I will try to extract this information from the EXIF data when the MTF Mapper GUI is used, but it seems that not all cameras report this information. Fortunately, it seems that a user-provided focal length need not be 100% accurate in order to obtain a reasonable estimate of the chart orientation relative to the camera. 
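With <i>f</i> given, the chart-to-image homography can be decomposed directly into <b>R</b> and <b>t</b>. Here is a sketch of the standard decomposition (my own minimal version, not MTF Mapper's actual code), applied to the degenerate chart-parallel-to-sensor case:

```python
import numpy as np

def pose_from_homography(H, f):
    """Recover the camera pose (R, t) from a chart-to-image homography when
    the focal length f is known (square pixels, zero skew, centred principal
    point, as assumed throughout)."""
    Kinv = np.diag([1.0 / f, 1.0 / f, 1.0])
    h1, h2, h3 = (Kinv @ H).T           # columns of K^-1 H
    lam = 1.0 / np.linalg.norm(h1)      # scale from the unit-norm constraint on r1
    r1, r2, t = lam * h1, lam * h2, lam * h3
    r3 = np.cross(r1, r2)               # third basis vector: cross product of r1, r2
    return np.column_stack([r1, r2, r3]), t

# The degenerate view: chart parallel to the sensor at distance d = 1
f, d = 1.3, 1.0
H = np.array([[f, 0.0, 0.0],
              [0.0, f, 0.0],
              [0.0, 0.0, d]])
R, t = pose_from_homography(H, f)
print(R)  # identity rotation
print(t)  # [0. 0. 1.]: with f fixed, the distance d is recovered uniquely
```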
</div><div><br /></div><h4>References</h4><div><ul><li>[Zhang2000], Z. Zhang, A flexible new technique for camera calibration, IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11), pp. 1330-1334, 2000.</li><li>[Tsai1987], R. Tsai, A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses, IEEE Journal on Robotics and Automation, 3(4), pp. 323-344, 1987.</li><li>[Triggs1999], B. Triggs, P. McLauchlan, R. Hartley, A. Fitzgibbon, Bundle Adjustment — A Modern Synthesis, ICCV '99: Proceedings of the International Workshop on Vision Algorithms, Springer-Verlag, pp. 298-372, 1999.</li></ul><div><br /></div></div><h4>Appendix A</h4><div>If we have a homography <b>H </b>between our normalized image coordinate plane and our planar calibration target, such that<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-zvocu2glNaU/WOdOExIxcXI/AAAAAAAABVY/sA8ixUCiCTwIFryd44-fHD_5axMoVWWswCLcB/s1600/eq5_na.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-zvocu2glNaU/WOdOExIxcXI/AAAAAAAABVY/sA8ixUCiCTwIFryd44-fHD_5axMoVWWswCLcB/s1600/eq5_na.png" /></a></div></div><div class="separator" style="clear: both; text-align: center;"></div><div>where <i>h</i><sub>33</sub> is an arbitrary scale factor, then the focal length of the camera can be estimated assuming square pixels, zero skew and a principal point of (0,0) in normalized image coordinates, using the formula<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-niMobU0VAlE/WOdOJjhSuII/AAAAAAAABVc/prmcLN4z3S8QhYyfT0gRatNHWYEeMaNEgCLcB/s1600/eq6_na.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-niMobU0VAlE/WOdOJjhSuII/AAAAAAAABVc/prmcLN4z3S8QhYyfT0gRatNHWYEeMaNEgCLcB/s1600/eq6_na.png" /></a></div></div><div 
class="separator" style="clear: both; text-align: center;"></div><div>Note that this is only one possibility, derived from the constraint that <b>r</b><sub>1</sub> is a unit vector.</div>Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com2tag:blogger.com,1999:blog-6555460465813582847.post-74495603033722251632016-11-25T04:30:00.001-08:002016-11-25T04:30:18.348-08:00MTF Mapper finally gets a logo!<span style="font-family: inherit;">It is a sad day for command line enthusiasts, but MTF Mapper has finally conformed by adopting a logo for its GUI version.</span><br /><br />I guess in the world of graphical user interfaces, a logo is to an application what a flag is to a nation (cue the <a href="http://www.goodreads.com/quotes/239641-we-stole-countries-with-the-cunning-use-of-flags-just" target="_blank">Eddie Izzard reference</a>).<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-xvq1Qt_xMO8/WDgtrGTV2YI/AAAAAAAABIg/eZZHC1lHGx8Fys7W7gjWsjSySy0smyPIgCLcB/s1600/mtf_mapper_gui_256.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://2.bp.blogspot.com/-xvq1Qt_xMO8/WDgtrGTV2YI/AAAAAAAABIg/eZZHC1lHGx8Fys7W7gjWsjSySy0smyPIgCLcB/s1600/mtf_mapper_gui_256.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><br /></td></tr></tbody></table>There is of course a new version of MTF Mapper (0.5.11 or later) available over on <a href="http://sourceforge.net/projects/mtfmapper/files/windows/" target="_blank">SourceForge</a>. 
Lots of fixes and cleanup to the GUI; please let me know what you think of the new(ish) interface.Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com6tag:blogger.com,1999:blog-6555460465813582847.post-49711173063511685492016-06-13T23:08:00.000-07:002016-06-14T05:03:05.825-07:00Running MTF Mapper under WineMTF Mapper 0.5.2 was compiled using MSVC Express 2013, which Microsoft calls "vc12". The Windows binaries have been linked statically against the runtime, but this does not appear to be sufficient to run MTF Mapper under wine without further tweaks.<br /><br />For me, running "<span style="font-family: "courier new" , "courier" , monospace;">winetricks vcrun2013</span>" in the console seemed to do the trick. I would say that this is a necessary step to get MTF Mapper to work under wine.<br /><br />In case you are wondering, without the winetricks step I get the following error:<br /><span style="font-family: "courier new" , "courier" , monospace;">wine: Call from 0x7b83c506 to unimplemented function msvcr120.dll.?_Trace_ppl_function@Concurrency@@YAXABU_GUID@@EW4ConcRT_EventType@1@@Z, aborting</span><br /><span style="font-family: "courier new" , "courier" , monospace;"><br /></span>Let me know if there are any other issues related to wine, and I'll see what I can do.Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com0tag:blogger.com,1999:blog-6555460465813582847.post-43082430951968134772016-04-13T08:30:00.000-07:002016-04-13T08:30:53.011-07:00MTF Mapper vs Imatest vs Quick MTFI recently noticed that <a href="http://www.quickmtf.com/" target="_blank">Quick MTF</a> now has an automated region-of-interest (ROI) detection function. This allows me (in theory) to perform the same type of automated testing that I applied to MTF Mapper and Imatest. 
Now would be a good time to read the <a href="http://mtfmapper.blogspot.co.za/2015/07/taking-on-imatest.html" target="_blank">Imatest comparison</a> post to familiarise yourself with my testing procedure.<br /><br />Anyhow, the automatic ROI functionality in Quick MTF is <i>almost</i> able to work with the simulated Imatest charts I produced with mtf_generate_rectangle. I had to manually adjust about half of the ROIs to ensure that Quick MTF was using as much of each edge as possible, i.e., similar ROIs to what Imatest and MTF Mapper used. Since the edge locations remain the same across all the test images, I used the "open with the same ROI" option to keep the experiment as fair as possible.<br /><br />I also discovered that Quick MTF's "trial" limit of 40 tests can be bypassed with relatively little fuss (Oleg, if you are reading this, I promise not to share the secret).<br /><br />Lastly, note that I performed these tests using the "ISO 12233" mode of Quick MTF. The default settings produce much smoother plots, but these are severely biased, i.e., they report MTF50 values that are much too low. To illustrate: the default settings produce a 95th percentile relative error of 13% when measured using images with an expected MTF50 of 0.25 c/p; switching to ISO 12233 mode reduces the error to only 5%.
As expected, the standard deviation of MTF50 error is lower in the default mode, but I maintain that bias and variance should <i>both </i>be managed well.<br /><br /><h4>The results </h4><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-Yhi8xBBYOxw/Vw5cReL-rcI/AAAAAAAABFQ/uchdK5OuSD8DUW3HrlOri9bxENRt2TL9QCLcB/s1600/qmtf_boxplot.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://3.bp.blogspot.com/-Yhi8xBBYOxw/Vw5cReL-rcI/AAAAAAAABFQ/uchdK5OuSD8DUW3HrlOri9bxENRt2TL9QCLcB/s400/qmtf_boxplot.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 1: Quick MTF MTF50 relative error boxplot</td></tr></tbody></table>Figure 1 illustrates the relative MTF50 error boxplot, calculated as <span style="font-family: inherit;"> 100*(measured_mtf50 - expected_mtf50)/expected_mtf50. Firstly, Quick MTF should be commended for its unbiased performance between expected MTF50 values of 0.1 and 0.4 cycles/pixel; the median error is exactly zero. Unfortunately, a strong bias appears after 0.4 c/p, which is consistent with some (light) smoothing of the ESF. 
The boxes, and especially the whiskers, are a bit wide, which is more readily seen in Figure 2.</span><br /><span style="font-family: inherit;"><br /></span><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-iF4YpCyB_48/Vw5d8DV5Q8I/AAAAAAAABFc/5HIRVSbRxGI89Zu2kOLaZxOlNR0PITzhACLcB/s1600/sd_comparison.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://2.bp.blogspot.com/-iF4YpCyB_48/Vw5d8DV5Q8I/AAAAAAAABFc/5HIRVSbRxGI89Zu2kOLaZxOlNR0PITzhACLcB/s400/sd_comparison.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 2: Standard deviation of relative MTF50 error</td></tr></tbody></table><span style="font-family: inherit;">Things go a bit pear-shaped when we look at the standard deviation of the relative MTF50 error. If we consider the "usable" range of 0.08 to 0.5 c/p, then Quick MTF keeps the standard deviation below 3.5%, which is not bad, but Imatest and MTF Mapper perform a bit better here.
A more useful (and my preferred) measure is the 95th percentile of relative MTF50 error magnitude, as illustrated in Figure 3.</span><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-tOpbn87J56c/Vw5fMsnrWTI/AAAAAAAABFo/9sNbJcwFi4E9-q6GM0LPUnRGbbskTQCTwCLcB/s1600/p95_comparison.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://2.bp.blogspot.com/-tOpbn87J56c/Vw5fMsnrWTI/AAAAAAAABFo/9sNbJcwFi4E9-q6GM0LPUnRGbbskTQCTwCLcB/s400/p95_comparison.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 3: 95th percentile of relative MTF50 error magnitude</td><td class="tr-caption" style="text-align: center;"><br /></td></tr></tbody></table><span style="font-family: inherit;"><br /></span><span style="font-family: inherit;">The values in Figure 3 have a natural interpretation: the magnitude of the error will remain below the indicated value in about 95% of the edges measured with each tool. This measure combines the effects of bias (Figure 1) and variance (Figure 2) in one convenient value. Consider again the "usable" range of 0.08 to 0.5 c/p: Quick MTF only manages to keep the error below about 9% across the range. It does quite a bit better in the centre of the range, almost matching Imatest at 0.2 c/p.</span><br /><span style="font-family: inherit;"><br /></span><h4><span style="font-family: inherit;">Conclusion</span></h4><span style="font-family: inherit;">The Imatest results were not based on the latest version; I do not have an Imatest license, and my trial has expired, so it will take a fair bit of effort to refresh the Imatest results. 
The Quick MTF 2.09 results are current, though.</span><br /><span style="font-family: inherit;">Based on these versions, it would appear that MTF Mapper still produces competitive results. And you cannot beat MTF Mapper's price.</span><br /><span style="font-family: inherit;"><br /></span><h4><br /></h4>Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com0tag:blogger.com,1999:blog-6555460465813582847.post-48950142779990955522015-11-03T02:16:00.001-08:002015-11-03T04:14:09.466-08:00PffffFFTttt...There is no doubt that FFTW is one of the fastest FFT implementations available. It can be a pain to include in a Microsoft Visual Studio project, though. Maybe I am "using it wrong"...<br /><br />One solution to this problem is to include my own FFT implementation in MTF Mapper, thereby avoiding the FFTW dependency entirely. Although it is generally frowned upon to use a homebrew FFT implementation in lieu of an existing, proven library, I decided it was time to ditch FFTW.<br /><br />One of the main advantages of using a homebrew FFT implementation is that it avoids the GPL license of FFTW. Not that I have any fundamental objection to the GPL, but the main sources of MTF Mapper are available under a BSD license, which is a less strict license than the GPL. In particular, the BSD license makes allowance for commercial use of the code. Before anyone asks, no, MTF Mapper is not going closed source or anything like that. 
All things being equal, the BSD license is just less restrictive, and avoiding FFTW brings MTF Mapper closer to being a pure BSD (or compatible) license project.<br /><br /><h3>FFT Implementation</h3>After playing around with a few alternative options, including my first C++ FFT implementation from way back in first year at university, I settled on Sorensen's radix-2 real-valued FFT (Sorensen, H.V., et al., Real-Valued Fast Fourier Transform Algorithms, IEEE Transactions on Acoustics, Speech, and Signal Processing, 35(6), 1987). This algorithm appears to be a decent balance between complexity and theoretical efficiency, but I had to work fairly hard at the code to produce a reasonably efficient implementation.<br /><br />I tried to implement it in fairly straightforward C++, but taking care to use pointer walks instead of array indexing, and using lookup tables for both the bit-reversal process and the sine/cosine functions. These changes produced an algorithm that was at least as fast as my similarly optimized complex FFT implementation augmented with a two-for-the-price-of-one step for real-valued inputs.<br /><br />One thing I did notice is that the FFT in its "natural" form does not lend itself to an efficient streaming implementation. For example, the first pass of the radix-2 algorithm looks like this:<br /><blockquote class="tr_bq">for (; xp <= xp_sentinel; xp += 2) { // butterfly on each adjacent pair<br /> double xt = *xp; // save x[i] before it is overwritten<br /> *(xp) = xt + *(xp+1);<br /> *(xp+1) = xt - *(xp+1);<br />}</blockquote>Note that the value of x[i] (here *xp) is overwritten in the 3rd line of the code, while the original value of x[i] (copied into xt) is still required in the 4th line of the code. This write-after-read dependency causes problems for out-of-order execution.
Maybe the compiler is smart enough to unroll the loop and intersperse the reads and writes to achieve maximal utilization of all the processing units on the CPU, but the stride of the loop and the packing of the values is not ideal for SSE2/AVX instructions either. I suppose that this can be addressed with better code, but before I spend time on that I first have to determine how significant the raw performance of the FFT is in the context of MTF Mapper.<br /><br /><h3>Real world performance in MTF Mapper</h3>So how much time does MTF Mapper spend calculating FFTs? Well, one FFT for every edge. A high-density grid-style test chart has roughly 1452 edges. According to a "callgrind" trace produced using valgrind, MTF Mapper v0.4.21 spends 0.09% of its instruction count inside FFTW's real-valued FFT algorithm.<br /><br />Using the homebrew FFT of MTF Mapper 0.4.23 the total number of instruction fetches increases by about 1.34%, but this does not imply a 1.34% increase in runtime. The callgrind trace indicates that 0.31% of v0.4.23's instructions are spent in the new FFT routine.<br /><br />In relative terms, this implies that the new routine is roughly 3.5 times slower, but this does not account for the additional overheads incurred by FFTW's memory allocation routines (the FFTW routine is not in-place, hence requires a new buffer to be allocated before every FFT to keep the process thread-safe). <br /><br />Measuring the actual wall-clock time gives us a result of 22.27 ± 0.14 seconds for 20 runs of MTF Mapper v0.4.21 on my test image, versus 21.63 ± 0.16 seconds for 20 runs of v0.4.23 (each experiment repeated 4 times for computing standard deviations). These timings were obtained on a Sandy-bridge laptop with 8 hyperthreads on 4 cores.
The somewhat surprising reversal of the standings (the homebrew FFT now outperforms the FFTW implementation) just goes to show that the interaction between hyperthreading, caching, and SSE/AVX unit contention can produce some surprising results.<br /><br />Bottom line: the homebrew FFT is fast enough (at least on the two hardware/compiler combinations I tested).<br /><br /><h3>Are we done yet?</h3>Well, surely you want to know how fast the homebrew FFT is in relation to FFTW in a fair fight, right?<br /><br />I set up a simple test using FFTW version 3.3.4, built on Gentoo using gcc-4.9.3, on a Sandy-bridge laptop CPU (i7-2720QM) with a base clock of 2.2 GHz. This was a single-threaded test, so we should see a maximum clock speed of 3.3 GHz, if we are lucky.<br /><br />For a 1024-sample real-valued FFT, 2 million iterations took 14.683 seconds using the homebrew code, and only 5.798 seconds using FFTW. That is a ratio of ~2.53.<br /><br />For a 512-sample (same as what MTF Mapper uses) real-valued FFT, 2 million iterations took 6.635 seconds using the homebrew code, and only 2.743 seconds using FFTW. That is a ratio of ~2.42.<br /><br />According to general impressions gathered from the Internet, you are doing a good-enough job if you are less than 4x slower than FFTW. I ran metaFFT's benchmarks, which gave ratios of 2.4x and 2.1x relative to FFTW for sizes 1024 and 512, respectively (these were probably complex transforms, so not a straight comparison).<br /><br />The MTF Mapper homebrew FFT appears to be in the right ballpark: at least fast enough not to cause embarrassment.Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com3tag:blogger.com,1999:blog-6555460465813582847.post-51255245997216625752015-07-05T10:30:00.000-07:002015-07-05T10:30:00.252-07:00A critical lookMost of the posts on this blog are tutorial / educational in style.
I have come across a paper published by an Imatest employee that requires some commentary of a more critical nature. With some experience in the academic peer review process, I hope I can maintain the appropriate degree of objectivity in my commentary.<br /><br />At any rate, if you have no interest in this kind of commentary / post, please feel free to skip it.<br /><br /><h3>The paper</h3>The paper in question is: Jackson K. M. Roland, "A study of slanted-edge MTF stability and repeatability", Proc. SPIE 9396, Image Quality and System Performance XII, 93960L (January 8, 2015); doi:10.1117/12.2077755; http://dx.doi.org/10.1117/12.2077755.<br /><br />A copy can be obtained directly from Imatest <a href="http://www.imatest.com/wp-content/uploads/2015/02/Slanted-Edge_MTF_Stability_Repeatability.pdf" target="_blank">here</a>.<br /><br /><h3>Interesting point of view</h3>One of the contributions of the paper is a discussion of the impact of edge orientation on MTF measurements. The paper appears to approach the problem from a direction that is more closely aligned with the ISO12233:2000 standard, rather than Kohm's method ("Modulation transfer function measurement method and results for the Orbview-3 high resolution imaging satellite", Proceedings of ISPRS, 2004).<br /><br />By that I mean that Kohm's approach (and MTF Mapper's approach) is to compute an estimate of the edge normal, followed by projection of the pixel centre coordinates (paired with their intensity values) onto this normal. This produces a dense set of samples across the edge in a very intuitive way; the main drawback of this approach being the potential increase in processing cost, because it lends itself better to a floating point implementation.<br /><br />The ISO12233:2000 approach rather attempts to project the edge "down" (assuming a vertical edge) onto the bottom-most row of pixels in the region of interest (ROI).
Using the slope of the edge (estimated earlier), each pixel's intensity (sample) can be shifted left or right by the appropriate phase offset before being projected onto the bottom row. If the bottom row is modelled as bins with 0.25-pixel spacing, this process allows us to construct our 4x-oversampled, binned ESF estimate with the minimum amount of computational effort (although that might depend on whether a particular platform has strong floating-point capabilities).<br /><br />The method proposed in the Imatest paper is definitely of the ISO12233:2000 variety. How can we tell? Well, the Imatest paper proposes that the ESF must be corrected by appropriate scaling of the x values using a scaling factor of cos(theta), where theta is the edge orientation angle. What this accomplishes is to "squash" the range of x values (i.e. pixel column) to be spaced at an interval that is consistent with the pixel's distance as measured along the normal to the edge. For a 5 degree angle, this correction factor is only 0.9962, meaning that distances will be squashed by a very small amount indeed. So little, in fact, that the ISO12233:2000 standard ignores this correction factor, because a pixel at a horizontal distance of 16 pixels will be mapped to a normal distance of 15.94. Keeping in mind that the ESF bins are 0.25 pixels wide, this error must have seemed small.<br /><br />I recognize that the Imatest paper proposes a valid solution to this "stretching" of the ESF that would occur in its absence, and that this stretching would become quite large at larger angles (about a 1.5 pixel shift at 25 degrees for our pixel at a horizontal distance of 16 pixels).<br /><br />My critique of this approach is that it would typically involve the use of floating point calculations, the potential avoidance of which appears to have been one of the main advantages of the ISO12233:2000 method. 
If you are going to use floating point values, then Kohm's method is more intuitive.<br /><br /><h3>Major technical issues</h3><ol><li>The Point Spread Functions (PSFs) used to perform the "real world" and simulated experiments were rather different, particularly in one very important aspect. The Canon 6D camera has a PSF that is anisotropic, which follows directly from its square (or even L-shaped) photosites. The composite PSF for the 6D would be an Airy pattern (diffraction) convolved with a square photosite aperture (physical sensor) convolved with a 4-dot beam splitter (the OLPF). Of course I do not have inside information on the exact photosite aperture (maybe chipworks has an image) nor the OLPF (although a 4-dot Lithium Niobate splitter seems reasonable). The point remains that this type of PSF will yield noticeably higher MTF50 values when the slanted edge approaches 45 degrees. Between the 5 and 15 degree orientations employed in the Imatest paper, we would expect a difference of about 1%. This is below the error margin of Imatest, but with a large enough set of observations this systematic effect should be visible.<br /><br />In contrast, the Gaussian PSF employed to produce the simulated images is (or at least is supposed to be) isotropic, and should show no edge-orientation dependent bias. Bottom line: the "real world" images had an anisotropic PSF, and the simulated images had an isotropic PSF. This means that the one cannot be used in the place of the other to evaluate the effects of edge orientation on measured MTF. Well, at least not without separating the PSF anisotropy from the residual orientation-dependent artifacts of the slanted edge method.</li><li>On page 7 the Imatest paper states that "The sampling of the small Gaussian is such that the normally rotationally-invariant Gaussian function has directional factors as you approach 45 degree increments."
This is further "illustrated" in Figure 13.<br /><br />At this point I take issue with the reviewers who allowed the Imatest paper to be published in this state. If you suddenly find that your Gaussian PSF becomes anisotropic, you have to take a hard look at your implementation. The only reason that the Gaussian (with a small standard deviation) is starting to develop "directional factors" is because you are undersampling the Gaussian beyond repair.<br /><br />The usual solution to this problem is to increase the resolution of your synthetic image. By generating your synthetic image at, say, 10x the scale, all your Gaussian PSFs will be reasonably wide in terms of samples in the oversampled image. For MTF measurement using the slanted edge method, you do not even have to downsize your oversampled image before applying the slanted edge method. All you have to do is to change the scale of your resolution axis in your MTF plot. That way you do not even have to worry about the MTF of the downsampling kernel.<br /><br />There are several methods that produce even higher quality simulated images. At this point I will plug my own work: see <a href="http://mtfmapper.blogspot.com/2012/04/accurate-method-for-rendering-synthetic.html" target="_blank">this post</a> or <a href="http://www.prasa.org/proceedings/2012/prasa2012-13.pdf" target="_blank">this paper</a>. These approaches rely on importance sampling (for diffraction PSFs) or direct numerical integration of the Gaussian in two dimensions; both these approaches avoid any issues with downsampling and do not sample on a regular grid. 
These methods are implemented in <span style="font-family: "Courier New", Courier, monospace;">mtf_generate_rectangle.exe<span style="font-family: inherit;"></span></span>, which is part of the MTF Mapper package.</li></ol><br /><h3>Minor technical issues</h3><ol><li>On page 1 the Imatest paper states that the ISO 12233:2014 standard lowered the edge contrast "because with high contrast the measurement becomes unstable". This statement is quite vague, and appears to contradict the results presented in Figure 8, which shows no degradation of performance at high contrast, even in the presence of noise.<br /><br />I would offer some alternative explanations: the ISO12233 standard is often applied to images compressed with DCT-based quantization methods, such as JPEG. A high-contrast edge typically shows up with a large-magnitude DCT coefficient at higher frequencies; exactly the frequencies that are more strongly quantized, hence the well-known appearance of "mosquito noise" in JPEG images. A lower contrast edge will reduce the relative energy at higher frequencies, thus the stronger quantization of high frequencies will have a proportionately smaller effect. I am quite tempted to go and test this theory right away.<br /><br />Another explanation, one that is covered in some depth on Imatest's own website, is of course the potential intensity clipping that may result from incorrect exposure. Keeping the edge contrast in a more manageable range reduces the chance of clipping. Another more subtle reason is that a lower contrast chart allows more headroom for sharpening without clipping. By this I mean that sharpening (of the unsharp masking type) usually results in some "ringing" which manifests as overshoot (on the bright side of the edge) and undershoot (on the dark side of the edge).
If chart contrast was so high that the overshoot of overzealous sharpening would be clipped, then it would be harder to measure (and observe) the extent of oversharpening.</li><li>The noise model employed is a little basic. Strictly speaking, the standard deviation of the additive Gaussian white noise should be signal dependent; this is a more accurate model of photon shot noise, and is trivial to implement. I have not done a systematic study of the effects of noise simulation models on the slanted edge method, but in 2015 one really should simulate photon shot noise as the dominant component of additive noise.</li><li>Page 6 of the Imatest paper states that "There is a problem with this 5 degree angle that has not yet been addressed in any standard or paper." All I can say to this is that Kohm's paper has presented an alternative solution to this problem that really should be recognized in the Imatest paper.</li></ol><h3>Summary</h3>Other than the unforgivable error in the generation of the simulated images, a fair effort, but more time spent on the literature, especially papers like Kohm's, would have changed the tone of the paper considerably, which in turn would have made it more credible.<br /> <br /><ol></ol>Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com1tag:blogger.com,1999:blog-6555460465813582847.post-34230501438321847342015-07-05T05:38:00.000-07:002015-07-31T04:40:13.430-07:00Taking on Imatest<br />After having worked on MTF Mapper for almost five years now, I have decided that it is time to go head-to-head with Imatest. I downloaded a trial version of Imatest 4.1.12 to face off against MTF Mapper 0.4.18.<br /><br />For the purpose of this comparison I decided to generate synthetic images using <span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span>.
This allows me to use a set of images rendered using an accurately known PSF, meaning that we know exactly what the actual MTF50 value should be for those images. I decided to render a test chart conforming to the SFRPlus format, since that allows me to extract a fair number of edges for each test case. The approximately-sfrplus-chart looks like this:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/--cXVvDfj9fg/VZj5Tp-avGI/AAAAAAAAA9A/qPgtIijihNA/s1600/sfr_m_25_5_2.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="286" src="http://3.bp.blogspot.com/--cXVvDfj9fg/VZj5Tp-avGI/AAAAAAAAA9A/qPgtIijihNA/s400/sfr_m_25_5_2.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 1: SFRPlus style chart with an MTF50 value of 0.35 cycles/pixel</td></tr></tbody></table><span style="font-family: Times, "Times New Roman", serif;"><span style="font-family: inherit;"> </span></span>SFRPlus was quite happy to automatically identify and extract regions of interest (ROIs) over all the relevant edges from this image. MTF Mapper can also extract edges from this image automatically. One notable difference is that SFRPlus includes the edges of the squares that overlap with the black bars at the top and bottom of the images, whereas MTF Mapper only considers edges that form part of a complete square. 
To keep the comparison fair, I discarded the results from the top and bottom rows of squares (as extracted by SFRPlus), leaving us with 19*4 edges per image (SFRPlus ignores the third square in the middle column).<br /><br /><h3>Validating the test images</h3>(This section can be skipped if you trust my methodology)<br /><br />Although I have written quite a few posts here on this blog regarding the algorithms used by <span style="font-family: Times, "Times New Roman", serif;"><span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span><span style="font-family: inherit;"> to</span> render synthetic images, I will now show from first principles that the synthetic images truly have the claimed point spread functions (PSFs), and thus known MTFs.</span><br /><span style="font-family: Times, "Times New Roman", serif;"><br /></span><span style="font-family: Times, "Times New Roman", serif;">I rendered the synthetic image using a command like this:</span><br /><br /><span style="font-family: "Helvetica Neue",Arial,Helvetica,sans-serif;">mtf_generate_rectangle.exe --b16 --pattern-noise 0.0085 --read-noise 2.5 --adc-gain 0.641 --adc-depth 12 -c 0.33 --target-poly sfrchart.txt -m 0.35 -p gaussian-sampled --airy-samples 100</span><br /><span style="font-family: inherit;"><br /></span><span style="font-family: inherit;">This particular command renders the SFRPlus chart using a Gaussian PSF with an MTF50 value of 0.35.
Reasonably realistic sensor noise is simulated, including photon shot noise, which implies that the noise standard deviation scales as the square root of the signal level; in plain English: we have more noise in bright parts of the image.</span><br /><br /><span style="font-family: "Helvetica Neue",Arial,Helvetica,sans-serif;"><span style="font-family: inherit;">I ran a version of <span style="font-family: "Courier New",Courier,monospace;">mtf_mapper<span style="font-family: inherit;"> that</span></span> dumped the raw samples extracted from the image (normally used to construct the binned ESF); I specified the edge angle as 5 degrees to remove all possible sources of error. NB: the "raw_esf_values.txt" file produced by MTF Mapper contains the binned ESF, and is not suitable for this particular experiment because of the smoothing inherent in the binning.</span></span><br /><span style="font-family: "Helvetica Neue",Arial,Helvetica,sans-serif;"><br /></span><span style="font-family: inherit;">Given that I specified an MTF50 value of 0.35 cycles per pixel, we know that the standard deviation of the true PSF should be 0.5354018 pixels [ sqrt( log(0.5)/(-2*pi*pi*0.35*0.35) ) ]. From this we can calculate the expected analytical ESF, which is simply Phi(x/sigma)*(upper-lower) + lower, where Phi() is the integral of the unit Gaussian, i.e., the normal CDF; in terms of the standard error function, Phi(z) = 0.5*(1 + erf(z/sqrt(2))). The values upper and lower merely represent the mean white and black levels, which were defined as lower = 65536*0.33/2 and upper = 65536 - lower. With these values, I can now plot the expected analytical ESF along with the raw ESF samples dumped by MTF Mapper.
</span><br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-lQJo99iKHSQ/VZkAIOaYv_I/AAAAAAAAA9Q/5iLTIVSiIZg/s1600/esf_with_erf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://4.bp.blogspot.com/-lQJo99iKHSQ/VZkAIOaYv_I/AAAAAAAAA9Q/5iLTIVSiIZg/s400/esf_with_erf.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 2: Raw ESF samples along with analytical ESF</td></tr></tbody></table><span style="font-family: inherit;"><span style="font-family: inherit;">I should mention that I shifted the analytical ESF along the "d" axis to compensate for any residual bias in MTF Mapper's edge position estimate. We can see that the overall shape of the analytical ESF appears to line up quite well with the ESF samples extracted from the synthetic image. Next we look at the difference between the two curves:</span></span><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-dKdr9HVs8cI/VZkA-1G82aI/AAAAAAAAA9c/yR4nRTokZvk/s1600/esf_minus_erf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://1.bp.blogspot.com/-dKdr9HVs8cI/VZkA-1G82aI/AAAAAAAAA9c/yR4nRTokZvk/s400/esf_minus_erf.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 3: ESF difference</td></tr></tbody></table><span style="font-family: inherit;"><span style="font-family: inherit;"> </span><br />We see two things in Figure 3: The mean difference appears to be close to zero, and the noise magnitude appears to increase with increasing signal levels (to the right). 
The increase in noise was expected, since that follows from the photon shot noise model used to simulate sensor noise. We can normalize the noise by dividing the ESF difference (noise) by the square root of the analytical ESF, which gives us this plot:</span><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-r46oLhYUoKw/VZkBtsouWjI/AAAAAAAAA9k/hzwSY0ofEes/s1600/esf_residuals.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://4.bp.blogspot.com/-r46oLhYUoKw/VZkBtsouWjI/AAAAAAAAA9k/hzwSY0ofEes/s400/esf_residuals.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 4: Normalised ESF difference</td></tr></tbody></table><span style="font-family: inherit;">This normalization appears to keep the noise standard deviation constant, which would be consistent with garden-variety additive Gaussian white noise. 
The density estimate of the normalized noise looks Gaussian:</span><div class="separator" style="clear: both; text-align: center;"></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-JhrHDysiChs/VZkCxUnD92I/AAAAAAAAA90/IKQseSXEObU/s1600/esf_residual_density.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://1.bp.blogspot.com/-JhrHDysiChs/VZkCxUnD92I/AAAAAAAAA90/IKQseSXEObU/s400/esf_residual_density.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 5: Normalized ESF difference density</td></tr></tbody></table><span style="font-family: inherit;">Running the normalized residuals through the Shapiro-Wilk normality test gives us a p-value of 0.03722 over our 3285 samples. That is bad news, because it means our data is non-Gaussian at a 5% significance level. <strike>It is, however, Gaussian at a 10% confidence level.</strike> Correction: normality of the residuals is not rejected at a 3% (or 2.5%, or 1%) significance level. The qqnorm() plot is pretty straight too, which tells us it is more likely that the Shapiro-Wilk test is negatively affected by the large number of samples, than that the residuals are truly not Gaussian. </span><br /><br /><span style="font-family: inherit;">Now that we have confirmed that the distribution of the residuals is Gaussian, we can fit a line through them. This line comes out with a slope of -0.005765, which means that our normalized residuals are fairly flat.
Lastly, we can perform some LOESS smoothing on the normalized residuals:</span><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-77yLOFB05ec/VZkF480ZzEI/AAAAAAAAA-A/EvpAx4iCvos/s1600/esf_residual_loess.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://3.bp.blogspot.com/-77yLOFB05ec/VZkF480ZzEI/AAAAAAAAA-A/EvpAx4iCvos/s400/esf_residual_loess.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 6: LOESS fit on normalized ESF difference</td></tr></tbody></table><span style="font-family: inherit;">Again, we can see that the LOESS-smoothed values oscillate around 0, i.e., there is no trend in the difference between the analytical ESF and the ESF measured from our synthetic image.</span><br /><br /><span style="font-family: inherit;">The mean signal-to-noise ratio in the bright regions of the images comes out at around 15 dB; because we compute the LSF (or PSF, if you prefer) from the derivative of the ESF, the bright parts of the image are representative of the worst-case noise. Alternatively, we can say that the noise is quite similar to that produced by a Nikon D7000 at ISO 400, for an SFRplus test chart at a 5:1 contrast ratio.</span><br /><span style="font-family: inherit;"><br /></span><span style="font-family: inherit;">I have shown that there is no systematic difference between the ESF extracted from a synthetic image and the expected analytical ESF. The simulated noise also behaves in the way that we would expect from the properties of the simulated sensor. Based on these observations, we can safely assume that the synthetic images have the desired PSF, i.e., the simulated MTF50 values are spot-on. 
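The line fit and LOESS smoothing above are standard R fare (lm() and loess()); the same sanity check is only a few lines of Python. Here is a sketch on synthetic residuals, with a plain moving average standing in for LOESS (the residual values are simulated, not the real ones):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic normalized residuals: zero-mean noise over the ESF support.
x = np.linspace(-8.0, 8.0, 3285)       # distance from the edge, in pixels
resid = rng.normal(size=x.size)

# Least-squares line: trend-free residuals should give a slope near 0.
slope, intercept = np.polyfit(x, resid, 1)

# Crude LOESS stand-in: a centred moving average, which should oscillate
# around 0 just like the smoothed curve in Figure 6.
kernel = np.ones(201) / 201.0
smooth = np.convolve(resid, kernel, mode="valid")
```

For truly trend-free residuals both the fitted slope and the smoothed curve stay within a small band around zero, which is the behaviour the real residuals exhibit.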
(In previous posts I examined the properties of the simulated ESF values in the absence of noise, but here I chose to demonstrate the PSF properties directly on the actual images used in the Imatest vs MTF Mapper comparison).</span><br /><span style="font-family: inherit;"><br /></span><br /><h3><span style="font-family: inherit;">The results</span></h3><span style="font-family: inherit;">The results presented here were obtained by running Imatest 4.1.12 and MTF Mapper 0.4.18 on <a href="http://sourceforge.net/projects/mtfmapper/files/simulated_sfrplus_charts.zip/download" target="_blank">these</a> images (about 100MB). SFRPlus (from Imatest, of course) was configured to enable the LSF correction that was recently introduced. Other than that, all settings were left to defaults, including leaving the apodization option enabled. I turned off the "quick mtf" option, although I did not check to see whether this affected the results. After a run of SFRPlus, the "save data" option was used to store the results, after which the "MTF50" column values were extracted, discarding the top and bottom row edges as explained before.</span><br /><span style="font-family: inherit;"><br /></span><span style="font-family: inherit;">MTF Mapper was run using the "-t 0.5 -r" settings; the "-t 0.5" option is required to allow MTF Mapper to work with the rather low 5:1 contrast ratio. The values output to "raw_mtf_values.txt" were used as the representative MTF50 values extracted by MTF Mapper.</span><br /><span style="font-family: inherit;"><br /></span><span style="font-family: inherit;">Simulated images were produced over the MTF50 range 0.1 cycles/pixel to 0.7 cycles/pixel in increments of 0.05 cycles/pixel, with one extra data point at 0.08 cycles/pixel to represent the low end (which is quite blurry). For each MTF50 level a total of three images were simulated, each with a different seed to produce unique sensor noise. 
</span><span style="font-family: inherit;"><span style="font-family: inherit;"> This gives us 19*3*4 = 228 samples at each MTF50 level. </span> </span><br /><br /><span style="font-family: inherit;">As in previous posts, the results will be evaluated in two ways: bias and variance. The first plots to consider illustrate both bias and variance simultaneously, although it is somewhat harder to compare the variance of the methods on these plots.</span><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-gl5PjYesU9I/VZkPb0cqr-I/AAAAAAAAA-Q/LzGbbktVqn4/s1600/imatest_boxplot.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://2.bp.blogspot.com/-gl5PjYesU9I/VZkPb0cqr-I/AAAAAAAAA-Q/LzGbbktVqn4/s400/imatest_boxplot.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 7: Imatest relative error boxplot</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-nvEagE8ibXk/VZkPociYILI/AAAAAAAAA-Y/YeUQ0RDv1Sc/s1600/mapper_boxplot.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://4.bp.blogspot.com/-nvEagE8ibXk/VZkPociYILI/AAAAAAAAA-Y/YeUQ0RDv1Sc/s400/mapper_boxplot.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 8: MTF Mapper relative error boxplot</td></tr></tbody></table><span style="font-family: inherit;">In figures 7 and 8, the relative difference (or error) is calculated as 100*(measured_mtf50 - expected_mtf50)/expected_mtf50. 
It is clear that Imatest 4.1.12 slightly underestimates MTF50 values above 0.2 cycles/pixel; this pattern is typical of what one would expect if the MTF curve is not adequately corrected for the low-pass filtering effect of the ESF binning step (see <a href="http://mtfmapper.blogspot.com/2015/06/improved-apodization-and-bias-correction.html" target="_blank">this post</a></span>). MTF Mapper corrects for this low-pass filtering effect, producing no clear trend in median MTF50 error over the range considered. We can plot the median measured MTF50 relative error for Imatest and MTF Mapper on the same plot:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-1FSH4MeT9GA/VboyvdjYCVI/AAAAAAAAA_Y/45eAX9_Izas/s1600/median_comparison.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://2.bp.blogspot.com/-1FSH4MeT9GA/VboyvdjYCVI/AAAAAAAAA_Y/45eAX9_Izas/s400/median_comparison.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 9: Median relative MTF50 error comparison</td></tr></tbody></table><br /><div class="separator" style="clear: both; text-align: center;"></div>Figure 9 shows us that the Imatest bias is not all that severe; it remains below 2% over the range of MTF50 values we are likely to encounter in actual photos. (NB: Up to July 30, 2015, this figure had Imatest and MTF Mapper swapped around).<br /><br />So that illustrates bias. 
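All three performance metrics used in this comparison (median relative error, its standard deviation, and the 95th percentile of the absolute relative error) derive from the same relative-difference formula; a short sketch with made-up measurements for one hypothetical MTF50 level:

```python
import numpy as np

rng = np.random.default_rng(0)

expected_mtf50 = 0.30   # cycles/pixel; one hypothetical simulation level

# Hypothetical measurements: a small negative bias plus measurement noise.
measured = expected_mtf50 * (0.99 + rng.normal(0.0, 0.02, size=228))

# The relative difference (error), in percent, as used in Figures 7 and 8.
rel_err = 100.0 * (measured - expected_mtf50) / expected_mtf50

median_bias = np.median(rel_err)            # the Figure 9 bias metric
sd = rel_err.std(ddof=1)                    # the Figure 10 variance metric
p95 = np.percentile(np.abs(rel_err), 95)    # the Figure 11 combined metric
```

Note that the 95th percentile is taken over the absolute relative error, which is what lets it fold bias and variance into a single number.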
To measure variance we can plot the standard deviation at each MTF50 level:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-1JmjSGVPuD4/VZkXqxBMmhI/AAAAAAAAA-w/iw9BGOaA74Q/s1600/sd_comparison.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://4.bp.blogspot.com/-1JmjSGVPuD4/VZkXqxBMmhI/AAAAAAAAA-w/iw9BGOaA74Q/s400/sd_comparison.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 10: Standard deviation of relative MTF50 error</td></tr></tbody></table>Other than at very low MTF50 values (say, 0.08 cycles/pixel and lower), it would appear that MTF Mapper 0.4.18 produces more consistent MTF50 measurements than Imatest 4.1.12.<br /><br />A final performance metric to consider is the 95th percentile of relative MTF50 error. By computing this value on the absolute value of the relative error, it combines both variance and bias into a single measurement that tells us how close our measurements will be to the true MTF50 value, in 95% of measurements. 
Here is the plot:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-jqbLg3dY--o/VZkYvQ4A8kI/AAAAAAAAA-8/O-nXOddCXjk/s1600/p95_comparison.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://1.bp.blogspot.com/-jqbLg3dY--o/VZkYvQ4A8kI/AAAAAAAAA-8/O-nXOddCXjk/s400/p95_comparison.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 11: 95th percentile of MTF50 error</td></tr></tbody></table>Of all the performance metrics presented here, I consider Figure 11 to be the most practical measure of accuracy.<br /><br /><h3>Conclusion</h3>It took quite a bit of effort on my part to improve MTF Mapper to the point where it produces more accurate results than Imatest. There are some other aspects I have not touched on here, such as how accuracy varies with edge orientation. For now, I will say that MTF Mapper produces accurate results at known critical angles, whereas Imatest appears to fail at an angle of 26.565 degrees. Given that Imatest never claimed to work well at angles other than 5 degrees, I will let that one slide.<br /><br />I have also not included any comparisons to other freely available slanted edge implementations (sfrmat, Quick MTF, the slanted edge ImageJ plugin, mitreSFR). I can tell you from informal testing that most of them appear to perform significantly worse than Imatest, mostly because none of those implementations appear to include the finite-difference-derivative correction. Maybe I will back this opinion up with some more detailed results in future.<br /><br />So where does that leave your typical Imatest user? Well, the difference in accuracy between Imatest and MTF Mapper is relatively small. 
What I mean by that is that these results do not imply that Imatest users have to switch over to MTF Mapper; rather, they show that MTF Mapper users can trust their measurements to be at least as good as those obtained with Imatest. And, of course, MTF Mapper is free, and the source code is available.<br /><br />There are some fairly nifty features that I noticed in SFRPlus during this experiment. It appears that SFRPlus will perform lens correction automatically, meaning that radial distortion curvature can be corrected for on the fly. MTF Mapper currently limits the length of the edge it will include in the analysis as a means of avoiding the effects of strong radial distortion. But now that I am aware of this feature, I think it would be relatively straightforward to include lens distortion correction in MTF Mapper. So little time, so many neat ideas to play with ...<br /><br /><br /><i>Frans van den Bergh</i><br /><br /><h2>Truncation of the ESF</h2><i>2015-06-24</i><br />A really quick post to highlight one specific aspect: what happens to the MTF produced by the slanted edge method if the ESF is truncated.<br /><br />To recap: The slanted edge method projects image intensity values onto the normal of the edge to produce the Edge Spread Function (ESF). Any practical implementation has to place an upper limit on the maximum distance that pixels can be from the edge (as measured along the edge normal). 
MTF Mapper, for example, only considers pixels up to a distance of 16 pixels from the edge.<br /><br />Looking back at the Airy pattern that results from the diffraction of light through a circular aperture, we can see that the jinc<sup>2</sup> function has infinite support; in other words, it tapers off towards zero, but never actually reaches zero at any finite distance.<br /><br />We also know that the effective width of the Airy pattern increases with increasing f-number. Herein lies the problem: a slanted edge implementation that truncates the ESF will necessarily discard part of the Airy pattern. The discarded part is of course the samples furthest from the edge, and we know that those samples tend to contribute more to the lower frequencies in the MTF.<br /><br />Simulating a slanted edge image using the Airy + photosite aperture model, with an aperture of f/8, light at 550 nm, a 100% fill-factor square photosite aperture, and 4.886 micron photosite pitch (something approximating the D810), we can investigate the impact of the truncation distance on the MTF as measured by the slanted edge method. Here goes:<br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-mlj_H6n9v9c/VYqQIiPzBjI/AAAAAAAAA8Y/GEWgfBECqfo/s1600/airy_esf_truncation.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://4.bp.blogspot.com/-mlj_H6n9v9c/VYqQIiPzBjI/AAAAAAAAA8Y/GEWgfBECqfo/s400/airy_esf_truncation.png" width="400" /></a></div>The green dotted line represents the expected MTF curve (from our simple model). I have zoomed in on the low-frequency region, where we can see that both truncated MTF measurements (red and black curves) only start to follow the green curve closely after about 0.10 cycles per pixel. 
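To put a rough number on how much of the Airy pattern a 16-pixel window discards, we can use the classic encircled-energy expression for a circular aperture, E(x) = 1 &minus; J<sub>0</sub>(x)&sup2; &minus; J<sub>1</sub>(x)&sup2;, with x = &pi;r/(&lambda;N). A sketch using the f/8, 550 nm, 4.886 micron values from the simulation; note this is the radial encircled energy, a proxy for the projected ESF tail rather than the exact quantity:

```python
import numpy as np
from scipy.special import j0, j1

def airy_encircled_energy(r_um, wavelength_um, f_number):
    """Fraction of Airy-pattern energy inside radius r (classic J0/J1 formula)."""
    x = np.pi * r_um / (wavelength_um * f_number)
    return 1.0 - j0(x) ** 2 - j1(x) ** 2

wavelength = 0.550    # micron
N = 8.0               # f-number
pitch = 4.886         # micron per photosite

# Energy inside the first dark ring (r = 1.22*lambda*N): the textbook ~83.8%.
e_first_ring = airy_encircled_energy(1.22 * wavelength * N, wavelength, N)

# Energy inside a 16-pixel truncation radius.
e_16px = airy_encircled_energy(16 * pitch, wavelength, N)
```

At f/8 roughly 99% of the energy falls within 16 pixels; the ~1% that is discarded is small, but it lives in the far tail, which is precisely the part that feeds the lowest frequencies of the MTF, and the discarded fraction grows at larger f-numbers.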
We also note that both the red and black curves contain a few points that are clearly above the green curve between 0 and 0.05 cycles per pixel. It is physically impossible for the measured MTF to exceed the diffraction MTF (blue curve), so we can state with confidence that this is a measurement error.<br /><br />If we compare the red and the black curves we can see that a wider truncation window (red curve) reduces the overshoot at low frequencies. If we had the opportunity to use an even wider truncation window, we would be able to reduce the overshoot to even lower levels.<br /><br />Lastly, if we introduce <a href="http://mtfmapper.blogspot.com/2015/06/improved-apodization-and-bias-correction.html" target="_blank">apodization</a> into the mix we are compounding the problem even further by attenuating the edges of the PSF. This leads to even greater overshoot (at low frequencies) in our measured MTF curve.<br /><br />Bottom line: The slanted edge method is constrained by practical limitations, most notably the desire to have a finite truncation window, and the desire to reduce the impact of image noise using apodization of the PSF. These constraints lead to overshoot in the lowest frequencies of the measured MTF. It may be possible to apply an empirical correction to minimize the overshoot, but only at the cost of making strong assumptions regarding the shape of the MTF, which is best avoided.<br /><br /><i>Frans van den Bergh</i><br /><br /><h2>Anisotropy</h2><i>2015-06-23</i><br />In my post on "critical angles" I mentioned that there was one other factor to consider when looking at the influence of edge orientation on slanted edge analysis. I will refer to that phenomenon as the influence of <i>anisotropic</i> point spread functions. 
In this context, I use the term anisotropic to refer to point spread functions that are not radially symmetric.<br /><br />The simplest example of an anisotropic PSF is to consider just a square photosite aperture, without any lens aperture diffraction.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-QEn4Py4OURs/VYkwpwEhkgI/AAAAAAAAA58/55iD8IQyNZo/s1600/square_integration.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="183" src="http://2.bp.blogspot.com/-QEn4Py4OURs/VYkwpwEhkgI/AAAAAAAAA58/55iD8IQyNZo/s400/square_integration.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"> Figure 1: Edge orientation relative to photosite aperture</td></tr></tbody></table><br /><div class="separator" style="clear: both; text-align: center;"></div>In figure 1 we can see the interaction between our slanted edge (shown in blue here) and the photosite aperture (orange). If the value <i>t</i> represents the distance from the centre of our photosite aperture to the right edge of our slanted edge (rectangle or step edge), then we can consider the overlapping area between the two as a function of t. The interesting range of values for <i>t</i> would be between -√0.5 and √0.5, if we assume the photosite is a square with sides of length 1. 
Plotting this overlapping area as a function of <i>t</i> gives us Figure 2:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-STpf-DM37pE/VYkxiEuQUHI/AAAAAAAAA6I/itPKfjon-4g/s1600/square_integration_area.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://4.bp.blogspot.com/-STpf-DM37pE/VYkxiEuQUHI/AAAAAAAAA6I/itPKfjon-4g/s400/square_integration_area.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 2: Fraction of square photosite covered by slanted edge as a function of edge distance to photosite centre</td></tr></tbody></table>When the edge orientation angle theta is 0 degrees, then we obtain a linear function, which is what one would expect. If the edge is at a 45 degree angle (as shown in the right panel of Figure 1), then we obtain the other extreme. Angles between 0 and 45 degrees produce a curve that is somewhere in between these extremes.<br /><br />What can we learn from these curves? Well, we can see that an edge orientation of 45 degrees will overlap with the photosite square from -√0.5 to √0.5, whereas the 0 degrees edge orientation only results in overlap between -0.5 and 0.5. From this we can infer that the square appears wider when approached by an edge with a 45 degree orientation. We also know that a square photosite acts as a low-pass filter, in the sense that the image captured by our sensor is the convolution of this low-pass filter and the analytical model of our scene. 
This might lead one to believe that the 45 degree case would result in a stronger low-pass filter, because it is clearly "wider" than the 0 degree case.<br /><br />We can plot the derivative of the curves from Figure 2:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-EmA20bdL9_I/VYlMPVURH8I/AAAAAAAAA6o/fE1ro30LlHo/s1600/square_integration_width.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://1.bp.blogspot.com/-EmA20bdL9_I/VYlMPVURH8I/AAAAAAAAA6o/fE1ro30LlHo/s400/square_integration_width.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 3: Instantaneous width of PSF </td></tr></tbody></table><br />The 0 degree case is easy to visualize with the help of the left panel of Figure 1: clearly, the width of the photosite square (measured along the step edge) is constant. The 45 degree case is also readily visualized by noting that we cross the widest part of the photosite square when t=0 (right panel of Figure 1); this nicely corresponds to the peak instantaneous width of √2 in Figure 3.<br /><br />We can interpret the curve in Figure 3 as a weighting function, i.e., the relative contribution to the convolution of the edge and the photosite aperture at distance <i>t </i>from the centre of the photosite aperture. Looking at the problem this way reveals a new angle: the 45 degree case concentrates a fair amount of its total weight close to t=0. Roughly 50.6% of its weight is located in the part where it is wider than the 0 degree case, corresponding to the central region of Figure 3 where the red curve is above the gray curve. In contrast, only about 8.6% of the weight of the 45 degree curve is located in the two tail ends (t < -0.5 and t > 0.5). 
If we compare this to the 0 degree case, we obtain 42% in the centre (area under gray curve where the red curve is above the gray curve), and of course 0% in the tails.<br /><br />This is a rather unexpected turn of events, since it implies that even though the 45 degree case starts overlapping with the edge sooner (the regions -√0.5 < t < -0.5 and 0.5 < t < √0.5), it represents only a small fraction of the total interaction with the edge. Instead of the 45 degree case being a stronger low-pass filter than the 0 degree case, we expect the opposite because the 45 degree case has roughly 20% (50.6/42) more of its weight located close to t=0.<br /><br />We appear to have two mildly conflicting views:<br />a) the 45 degree case is "wider" at its widest point, thus it should be a stronger low-pass filter than the 0 degree case, and<br />b) more of the weight of the 45 degree case is close to the centre, hence it should present a <i>weaker </i>low-pass filter than the 0 degree case.<br /><br />I am betting on outcome b), mostly because I already know what the empirical results will tell us .... <br /><br /><h3>Empirical results for square photosites (no diffraction)</h3>The prediction favoured by outcome b) in the previous section tells us that we should expect MTF50 values to increase as we progress from a relative edge orientation of 0 degrees through to 45 degrees. Simulations were performed in the absence of noise, using 30 repetitions over sub-pixel shifts. 
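The weight fractions quoted above are easy to verify numerically by grid-sampling the unit square; in particular, the roughly 8.6% tail weight of the 45 degree case falls straight out of the coverage function of Figure 2. A sketch:

```python
import numpy as np

def coverage(t, theta_deg, n=1000):
    """Fraction of a unit square (centred on the origin) lying on the covered
    side of an edge whose normal makes angle theta with the x-axis, at signed
    distance t from the photosite centre."""
    theta = np.radians(theta_deg)
    c = (np.arange(n) + 0.5) / n - 0.5      # sample coordinates in [-0.5, 0.5]
    x, y = np.meshgrid(c, c)
    return np.mean(x * np.cos(theta) + y * np.sin(theta) <= t)

# At t = 0 the edge bisects the square, regardless of orientation.
half_0, half_45 = coverage(0.0, 0.0), coverage(0.0, 45.0)

# Weight of the 45 degree case in the tails |t| > 0.5:
# analytically 2*(sqrt(0.5) - 0.5)^2, or about 8.6%.
tail_45 = coverage(-0.5, 45.0) + (1.0 - coverage(0.5, 45.0))

# The 0 degree case has no weight beyond |t| = 0.5 at all.
tail_0 = coverage(-0.5, 0.0) + (1.0 - coverage(0.5, 0.0))
```

The same coverage function reproduces both curves of Figure 2 if evaluated over a grid of t values, and its finite differences give the instantaneous-width curves of Figure 3.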
Keep in mind that the MTF50 value of a square photosite aperture is about 0.6033 cycles per pixel, which is quite high.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-t8EO9TkydE8/VYlcT5cvEmI/AAAAAAAAA64/3_1wzwHZY1w/s1600/pure_square_psf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://1.bp.blogspot.com/-t8EO9TkydE8/VYlcT5cvEmI/AAAAAAAAA64/3_1wzwHZY1w/s400/pure_square_psf.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 4: Square (box) PSF relative MTF50 error as a function of edge orientation</td></tr></tbody></table>We can see that MTF50 overestimation steadily increases to about 5% as we approach 45 degrees.<br /><br />Just to check, let us examine an isotropic PSF: a pure Gaussian without any photosite aperture simulation. This should yield a purely Gaussian MTF. Same simulation, but with the radially symmetric Gaussian PSF:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-aJvhjbr2Jhg/VYlevkjXAEI/AAAAAAAAA7E/bhdIWZOfHm0/s1600/pure_gaussian_psf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://4.bp.blogspot.com/-aJvhjbr2Jhg/VYlevkjXAEI/AAAAAAAAA7E/bhdIWZOfHm0/s400/pure_gaussian_psf.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 5: Gaussian PSF relative MTF50 error as a function of edge orientation</td></tr></tbody></table>Other than a bit of a glitch at 2 degrees producing a few outliers, we see a fairly flat median MTF50 error with the Gaussian PSF. 
No systematically increasing MTF50 error with increasing angle appears.<br /><br /><h3>Somewhat real world: squares plus diffraction</h3>We have seen that a box PSF (without diffraction) produces strong anisotropy, and that a Gaussian PSF (without photosite aperture) produces no noticeable anisotropy. Using a PSF consisting of an Airy pattern convolved with a square photosite aperture should put us somewhere in the middle of the anisotropy scale.<br /><br />Simulations were repeated using a simulated aperture at f/2.8, light at 550 nm, a photosite pitch of 4.73 micron and no AA (OLPF) filter. These settings give an expected MTF50 value of ~ 0.504 cycles per pixel, which is slightly lower than the expected MTF50 value of ~ 0.6 cycles per pixel seen in the previous section. Accordingly, the MTF50 errors may be slightly reduced (or at least the expected variance should be reduced).<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-hl5wVl-jF9E/VYly1zzvwKI/AAAAAAAAA8E/mnSOIP9d_OE/s1600/pure_airybox_psf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://3.bp.blogspot.com/-hl5wVl-jF9E/VYly1zzvwKI/AAAAAAAAA8E/mnSOIP9d_OE/s400/pure_airybox_psf.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 6: Airy+box PSF relative MTF50 error as a function of edge orientation</td></tr></tbody></table><br />The trend is clearly visible, but appears to be only about 60% of the magnitude of the case without diffraction (about 2.5% at 44 degrees, vs about 4% without diffraction). 
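The "expected MTF50" values quoted for this model can be reproduced directly: the Airy + square photosite MTF is the diffraction MTF of a circular aperture multiplied by the |sinc| MTF of a 100% fill-factor photosite, and MTF50 is where that product crosses 0.5. A sketch (my own root-finding, not MTF Mapper code):

```python
import numpy as np
from scipy.optimize import brentq

def diffraction_mtf(f_cyc_per_um, wavelength_um, f_number):
    """MTF of an ideal circular aperture; cutoff at 1/(lambda*N)."""
    s = np.clip(f_cyc_per_um * wavelength_um * f_number, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(s) - s * np.sqrt(1.0 - s * s))

def system_mtf(f_cyc_per_px, pitch_um, wavelength_um, f_number):
    """Airy pattern combined with a 100% fill-factor square photosite;
    frequency expressed in cycles per pixel."""
    d = diffraction_mtf(f_cyc_per_px / pitch_um, wavelength_um, f_number)
    return d * abs(np.sinc(f_cyc_per_px))   # np.sinc is sin(pi x)/(pi x)

def mtf50(pitch_um, wavelength_um, f_number):
    return brentq(lambda f: system_mtf(f, pitch_um, wavelength_um, f_number) - 0.5,
                  1e-6, 1.0)

# Box PSF alone: |sinc| crosses 0.5 at ~0.6034 cycles/pixel.
box_only = brentq(lambda f: abs(np.sinc(f)) - 0.5, 1e-6, 1.0)

m_f28 = mtf50(4.73, 0.550, 2.8)   # ~0.504 cyc/px, as quoted above
m_f4 = mtf50(4.73, 0.550, 4.0)    # ~0.461 cyc/px, the f/4 grid-chart case
```

This also recovers the ~0.6 cycles/pixel figure for the pure box PSF of the previous section, and shows how quickly diffraction pulls the system MTF50 down as the f-number grows.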
Smaller apertures (larger f-numbers) will reduce the anisotropy as the Airy component of the PSF will start to dominate the photosite aperture PSF.<br /><br /><h3>Any practical implications?</h3>The effect of PSF anisotropy on MTF measurements is real, but appears to be relatively small. At 2.5%, do we even have to worry about it?<br /><br />Unfortunately, we have to at least be aware of this for certain types of testing and measurement. Because the error (overestimation) is systematic, it will show up in any measurement that sweeps through a range of angles, just like the MTF Mapper grid test chart, pictured here:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-DU-SrlPox4I/VYlqty0cxJI/AAAAAAAAA7U/1cPxuSqfLEM/s1600/grid_sample.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="272" src="http://2.bp.blogspot.com/-DU-SrlPox4I/VYlqty0cxJI/AAAAAAAAA7U/1cPxuSqfLEM/s400/grid_sample.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">MTF Mapper grid test chart</td></tr></tbody></table>This chart can be used to produce Sagittal/Meridional MTF50 plots across your lens/sensor/camera. The chart aims to keep one edge perpendicular to the virtual line connecting that edge to the centre of the chart, which inevitably causes some of the squares to approach a 45 degree edge orientation.<br /><br />I simulated this chart using <span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span>, using an aperture of f/4, an Airy+box PSF, green light and a photosite pitch of 4.73 micron. 
Passing this synthetic image through MTF Mapper to produce a surface plot (-s option) yields this result:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-wnvxW4u-qHc/VYls100BvRI/AAAAAAAAA7g/opdemWtkPY8/s1600/grid_image_airybox_f4.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="640" src="http://4.bp.blogspot.com/-wnvxW4u-qHc/VYls100BvRI/AAAAAAAAA7g/opdemWtkPY8/s640/grid_image_airybox_f4.png" width="466" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 7: MTF50 plot obtained from simulated rendering of the MTF Mapper grid test chart using the Airy-box MTF with a square photosite aperture</td></tr></tbody></table><br />The systematic distortion of MTF50 values is clearly visible, even though the range of values is quite small. The maximum value on the scale is 0.47, which is only about 2% higher than the expected MTF50 value of 0.46073 (at 0 degrees, of course). But the cross pattern is clearly visible. At least I have confirmed the cause.<br /><br />Pushing for even greater realism, I repeated the simulation using the "rounded-square" photosite aperture that MTF Mapper provides. 
Here is the surface plot:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-UUYQyPVi2ro/VYlvyXiMu2I/AAAAAAAAA7s/GuJfXFK0RPo/s1600/grid_image_airybox_f4_rounded.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="640" src="http://3.bp.blogspot.com/-UUYQyPVi2ro/VYlvyXiMu2I/AAAAAAAAA7s/GuJfXFK0RPo/s640/grid_image_airybox_f4_rounded.png" width="466" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 8: MTF50 plot obtained from simulated rendering of the MTF Mapper grid test chart using the Airy-box MTF with a rounded-square photosite aperture</td></tr></tbody></table>We can see that the MTF50 values are slightly higher (I think the effective fill factor is slightly lower for my hand-crafted rounded corner photosite aperture), but ignore that bit for the moment. 
Instead, notice that the range is even smaller than the square aperture case (Figure 7), but the cross pattern is still visible.<br /><br />Lastly, if we use a circular photosite aperture, we get this:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-qSnndCdbvoI/VYlxhS8PVzI/AAAAAAAAA74/4SIQSLZozzE/s1600/grid_image_airybox_f4_circle.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="640" src="http://2.bp.blogspot.com/-qSnndCdbvoI/VYlxhS8PVzI/AAAAAAAAA74/4SIQSLZozzE/s640/grid_image_airybox_f4_circle.png" width="465" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"> Figure 9: MTF50 plot obtained from simulated rendering of the MTF Mapper grid test chart using the Airy-box MTF with a circular photosite aperture</td></tr></tbody></table>Other than the fact that the resulting image is appallingly ugly, we can see that the cross structure has disappeared, as expected.<br /><br /><h3>Conclusion</h3>Anisotropy is a reality that we have to deal with if we apply the slanted edge method to edges that approach a relative orientation of 45 degrees with respect to the (presumed) square photosites. The isotropy of the Airy pattern helps to attenuate the overestimation of edges approaching 45 degrees, but the systematic effect is still clearly visible in simulated images.<br /><br />I tried to construct an elegant analytical explanation for the interaction between the edge orientation and a square photosite aperture. This turned out to be harder than I expected, so I only have some interesting plots to offer for now. What did emerge from the theory is that we should not focus on the apparent width of the photosite aperture, but rather on the distribution of its weight relative to the centre. 
The somewhat startling conclusion is that we should observe higher MTF50 measurements when the orientation approaches 45 degrees.<br /><br />This was supported by the actual experiments using simulated imagery. <br /><br />So what can we do about this systematic distortion? Well, the only sound solution would be to stick to edges with a relative orientation of about 5 degrees. This is not a universal solution, though, because it makes it impossible to measure in the true Sagittal/Meridional directions. Imatest solved the problem by sticking to 5 degree angles and referring to "horizontal" and "vertical" MTF. This works well enough if you wish to measure peak astigmatism, but it does not allow you to measure MTF in the optically more appropriate sagittal/meridional directions.<br /><br />I might add a 5-degree test chart to MTF Mapper in future, just to cover all bases.Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com0tag:blogger.com,1999:blog-6555460465813582847.post-48504828297245789422015-06-16T04:04:00.003-07:002015-06-16T04:04:31.411-07:00MTF Mapper v0.4.17 Windows binary releasedMust be a slow news day.<br /><br />Anyhow, a Windows binary of the latest release of MTF Mapper, v.0.4.17, is now available on <a href="https://sourceforge.net/projects/mtfmapper/files/windows/" target="_blank">sourceforge.</a><br /><br />Version 0.4.17 does not add any new functionality as such, but it does incorporate a few improvements in measurement accuracy. 
If I broke anything, please let me know!<br /><br />Also, I finally upgraded the dcraw version included in the Windows binaries to 9.26.Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com0tag:blogger.com,1999:blog-6555460465813582847.post-16624934500403276492015-06-15T08:10:00.001-07:002017-04-06T00:27:43.389-07:00Critical anglesIt is often said that there is more than one way to skin a cat.<br /><br />Well, today I discovered an Imatest <a href="http://www.imatest.com/wp-content/uploads/2015/02/Slanted-Edge_MTF_Stability_Repeatability.pdf" target="_blank">article</a> that demonstrates just how wildly different slanted edge implementations can (and apparently do) vary. I will leave my critique of said article for another day, but I will note that this article makes reference to the "5 degrees" rule that is often seen when slanted edge measurements are performed.<br /><br />The "5 degrees" rule states that the orientation of the edge relative to the sensor's photosite grid should be approximately 5 degrees (either horizontal or vertical).<br /><br />There are two notable reasons for this: firstly, a 5 degree angle is far from the critical angles (the topic of this post), and secondly, a 5 degree angle ensures that the potential non-rotationally symmetric behaviour of the PSF is minimized. 
A discussion of the non-rotationally symmetric PSFs will also be postponed to a future article.<br /><br /><h3>A closer look at the slanted edge method</h3>Figure 1 illustrates how MTF Mapper constructs the oversampled edge spread function (ESF) that is the starting point of the MTF calculation.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-0A2l5AlUmzk/WOXtiE4WUGI/AAAAAAAABSY/JcEPrbgi5YUY-NoV2WEcP6r6Ve21mGqBACLcB/s1600/se_method1_na.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="295" src="https://3.bp.blogspot.com/-0A2l5AlUmzk/WOXtiE4WUGI/AAAAAAAABSY/JcEPrbgi5YUY-NoV2WEcP6r6Ve21mGqBACLcB/s400/se_method1_na.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 1: How the ESF is sampled</td></tr></tbody></table><br /><div class="separator" style="clear: both; text-align: center;"></div>We want to oversample the ESF so that we can increase the effective Nyquist limit; this is extremely important if we want to measure frequencies close to the natural Nyquist limit of 0.5 cycles per pixel of our sensor. The Shannon-Nyquist theorem shows us that we will have aliasing at frequencies above 0.5 cycles per pixel if we sample at a rate of 1 sample per pixel.<br /><br />Pushing up our sampling rate to 8x moves the Nyquist limit up to 4 cycles per pixel, which allows us to examine the behaviour of our MTF curve near 0.5 cycles per pixel without fear that we are being misled by aliasing artifacts.<br /><br />How can we increase the spatial sampling rate of our sensor? Well, we cannot change the sensor, but we can use a trick to generate a synthetic ESF. Looking at Figure 1 above we can see that the edge (represented as a black line) crosses the pixel grid in different places as we move along the edge. 
More importantly, pay attention to the shortest distance from each black dot (representing the centre of each pixel/photosite) to the black edge. Notice how this distance varies by a fraction of the pixel spacing as we move along the edge.<br /><br />Let us assume that we have a coordinate system with its origin at the centre of our top/leftmost pixel of our sensor, such that the black dots representing the pixel centres can be addressed by integer coordinates. If we take the (x, y) coordinate of a pixel near the edge, and project this coordinate onto the vector representing the edge normal (i.e., the vector perpendicular to the edge under analysis), then we obtain a real-valued scalar that represents the distance of the pixel centre from our edge. We can pair this projected distance-from-edge value with the intensity of that pixel to form a sample point on our synthetic ESF, as shown in Figure 1.<br /><br />How does this help us to oversample the ESF? Well, if we choose an appropriate edge orientation angle, say, 5 degrees, then the projected ESF points will be densely spaced. In other words, the average distance between two consecutive samples in our projected ESF will be a fraction of the pixel spacing. We can partition the projected ESF points into bins of width 0.125 pixels to produce a regularly-spaced sampled ESF with 8x oversampling.<br /><br />We know this works well for 5 degrees (because that is what everyone is doing), but what is so special about 5 degrees?
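To make the procedure concrete, here is a minimal sketch of the projection-and-binning step (hypothetical Python, not MTF Mapper's actual implementation; it assumes a grayscale image as a 2-D array, and a known edge angle and position):

```python
import numpy as np

def oversampled_esf(img, theta, x0, y0, halfwidth=4.0, bin_width=0.125):
    """Project pixel centres onto the edge normal, then bin at 0.125 px.

    theta is the edge angle (radians, from horizontal); (x0, y0) is any
    point on the edge. Returns the 8x oversampled ESF.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    nx, ny = -np.sin(theta), np.cos(theta)      # unit normal to the edge
    d = (xs - x0) * nx + (ys - y0) * ny         # signed distance from edge
    keep = np.abs(d) < halfwidth                # only pixels near the edge
    d, v = d[keep], img[keep].astype(float)
    nbins = int(2 * halfwidth / bin_width)      # 64 bins at 8x oversampling
    idx = ((d + halfwidth) / bin_width).astype(int)
    counts = np.bincount(idx, minlength=nbins)
    sums = np.bincount(idx, weights=v, minlength=nbins)
    return sums / np.maximum(counts, 1)         # mean intensity per bin
```

With a 5 degree edge every 0.125 px bin receives several projected samples; at the critical angles discussed below, some bins come up empty.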
To answer that, we have to slog through some elementary math.<br /><br /><h3>Spacing of projected samples</h3>Figure 2 illustrates one possible way in which we can assign integer coordinates to the pixels near the edge under analysis.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-ZFC7sfgKavs/WOXtvihSyFI/AAAAAAAABSc/WtOrcOyg-2szx1nTP1qdrLm6ikvppkpYACLcB/s1600/se_method_proj_na.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="360" src="https://1.bp.blogspot.com/-ZFC7sfgKavs/WOXtvihSyFI/AAAAAAAABSc/WtOrcOyg-2szx1nTP1qdrLm6ikvppkpYACLcB/s400/se_method_proj_na.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 2: How pixel coordinates are assigned</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"></div><br /><div class="separator" style="clear: both; text-align: center;"></div>Note that we can pick an arbitrary origin (shown as (X<sub>0</sub>,Y<sub>0</sub>) in red); this just simplifies the math that will follow. This point need not fall exactly on the edge, but without loss of generality we can pretend that it does, since this means we can use integer coordinates to refer to the pixel centres of pixels near the edge.<br /><br />The orientation of the edge can be specified in degrees as measured from the horizontal, but I prefer using the slope of the line. If the angle between the edge and the horizontal is θ, then the direction perpendicular to the edge can be represented as the unit length vector (-sin(θ), cos(θ)). This would be expressed as a slope 1/Δx = tan(θ), such that Δx = 1/tan(θ).<br /><br />The normal vector (-sin(θ), cos(θ)) then becomes (-1, Δx) * 1/√(1 + Δx<sup>2</sup>). 
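This equivalence between the two forms of the normal vector is easy to sanity-check numerically (a small Python sketch):

```python
import numpy as np

theta = np.deg2rad(5.0)              # any angle in (0, 90) works
dx = 1.0 / np.tan(theta)             # the slope parameter defined above
n_angle = np.array([-np.sin(theta), np.cos(theta)])
n_slope = np.array([-1.0, dx]) / np.sqrt(1.0 + dx * dx)
print(np.allclose(n_angle, n_slope))  # True
```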
We project our pixel centres, represented as integer coordinates (x, y), onto this normal vector by computing the dot product (x, y) · (-1, Δx) * 1/√(1 + Δx<sup>2</sup>), which evaluates to d(x,y) = 1/√(1 + Δx<sup>2</sup>) * (-x + yΔx).<br /><br />The function d(x,y) thus computes the distance that the pixel located at (x, y) is from the origin (X<sub>0</sub>,Y<sub>0</sub>), which we will pretend falls on the edge; this means that d(x,y) measures the perpendicular distance of point (x, y) from the edge. The projected ESF point is thus [d(x,y), I(x+X<sub>0</sub>, y + Y<sub>0</sub>)], where I(i, j) denotes the intensity of the pixel located at pixel (i, j).<br /><br />Suppose that we focus only on the subset of pixels with integer coordinates (p, q) such that 0 ≤ d(p, q) < 1. If we are to achieve 8x oversampling, then there must be at least 8 unique distance values d(p, q) in this interval. In fact, we would require these 8 points to be spread out uniformly such that at least one d(p, q) value falls in the interval [0, 0.125), one in [0.125, 0.25), and so on, such that each of the sub-intervals of length 0.125 between 0 and 1 contains at least one point.<br /><br />Consider, for example, the case where Δx = 4. This reduces d(p, q) to 1/√(1 + 4<sup>2</sup>) * (-p + 4q) = (-p + 4q)/√17. Because both p and q are integers, we can deduce that d(p, q) must be an integer multiple of 1/√17. How many integer multiples of 1/√17 can we fit in between 0 and 1? If we enumerate them, we can choose p and q such that (-p + 4q) takes any value in the set {0, 1, 2, 3, 4, 5, 6, ...}. But √17 = 4.123106 (and change), so if (-p + 4q) ≥ 5, then d(p, q) > 1. That leaves only the set {0, 1, 2, 3, 4}, such that the only values of 0 ≤ d(p, q) < 1 are {0, 1/√17, 2/√17, 3/√17, 4/√17}.<br /><br />Whoops! If Δx = 4, then there will only be 5 unique values of d(p, q) between 0 and 1, and we need at least 8 points between 0 and 1 to achieve 8x oversampling!
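This counting argument is easy to verify by brute force (a quick Python check):

```python
import numpy as np

def unique_projected_distances(dx, span=60):
    """Distinct values of d(p, q) = (-p + q*dx)/sqrt(1 + dx^2) in [0, 1)."""
    norm = np.sqrt(1.0 + dx * dx)
    vals = {round((-p + q * dx) / norm, 9)
            for p in range(-span, span) for q in range(-span, span)
            if 0.0 <= (-p + q * dx) / norm < 1.0}
    return sorted(vals)

print(len(unique_projected_distances(4)))   # 5 -- too few for 8x oversampling
# a 5 degree edge, by contrast, yields many more than 8 distinct values:
print(len(unique_projected_distances(1.0 / np.tan(np.deg2rad(5.0)))))
```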
The implications of the failure to achieve 8x oversampling will be covered a bit later; first we must identify the critical angles.<br /><br /><h3>Enumerating the problem angles</h3>We already know that Δx = 4 causes our 8x oversampling to fail; this corresponds to an angle of atan(1/4) = 14.036 degrees. In fact, it is fairly simple to see that for any integer value Δx, we will have Δx + 1 unique values between 0 and 1 (if we include the 0 in our count). For 8x oversampling, the spacing between d(p, q) values must be less than 0.125, which happens when we have at least 8 unique d(p, q) values between 0 and 1. For Δx = 8, we see that 1/√(1 + Δx<sup>2</sup>) = 1/√65 ≈ 0.12403.<br /><br />The angles that will lead to a failure of the 8x oversampling mechanism are thus: 45, 26.565051, 18.434949, 14.036243, 11.309932, 9.462322, and 8.130102.<br /><br />Some other Δx values are also problematic: 1.5 and 2.5. These yield only 2Δx + 1 unique values (including zero). Setting Δx = 1.25 yields a total of only 7 unique values (multiples of 1/√41, including zero). These fractional slopes occur at angles of 33.69007, 21.80141, and 38.65981 degrees.<br /><br />There may even be more of these problematic angles, but this is as far as I have come with this analysis. Feel free to comment if you can help me identify other values of Δx that will lead to undersampling.<br /><br /><h3>Dealing with the critical angles</h3>So what exactly happens when we do not have at least one sample every 0.125 pixels along the ESF? The corresponding bin in the resampled ESF will be missing, and leaving gaps in the resampled ESF leads to severe distortion of the MTF because those gaps show up as high-frequency transitions in the FFT.<br /><br />A workable strategy is to fall back on 4x oversampling. Another strategy is to simply interpolate from nearby bins.
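Before moving on, the full set of problem angles can be generated directly from the degenerate slopes identified above (a Python sketch; the printed values match the angles quoted above, up to rounding):

```python
import numpy as np

# Degenerate slopes: integer dx gives dx + 1 distinct projected distances,
# dx in {1.5, 2.5} gives 2*dx + 1, and dx = 1.25 gives 7 -- all fewer than
# the 8 distinct values needed for 8x oversampling.
bad_dx = [1, 2, 3, 4, 5, 6, 7, 1.5, 2.5, 1.25]
angles = sorted((np.degrees(np.arctan(1.0 / dx)) for dx in bad_dx),
                reverse=True)
print([round(a, 6) for a in angles])
```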
Both of these solutions address the primary issue (gaps in the ESF/PSF), but the residual impact of the interpolation/replacement on the final MTF is harder to mitigate.<br /><br /><h3>A new hope</h3>After my previous post (<a href="http://mtfmapper.blogspot.com/2015/06/improved-apodization-and-bias-correction.html" target="_blank">on improved apodization</a>) I started thinking about the notion of applying low-pass filters to an interpolating function applied directly to the dense ESF samples, before binning is performed. I realized that my explanation of the equivalence between binning and fitting an interpolating function + low-pass filtering + sampling only holds when the points are relatively uniformly distributed within each bin.<br /><br />This got me thinking that I can probably apply a low-pass filter directly to the dense ESF samples, even before binning. The implementation of this approach feels familiar; it turns out to be similar to the method I implemented to perform importance sampling when using an Airy + photosite aperture PSF (<a href="http://mtfmapper.blogspot.com/2012/11/importance-sampling-how-to-simulate.html" target="_blank">this post</a>). 
Before describing the new method, first consider this illustration of plain vanilla unweighted binning:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-iW106sWO4tM/VX7ITvw8PqI/AAAAAAAAA4c/joXvTjYvWxU/s1600/uniform_binning.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="300" src="https://3.bp.blogspot.com/-iW106sWO4tM/VX7ITvw8PqI/AAAAAAAAA4c/joXvTjYvWxU/s320/uniform_binning.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 3: Unweighted binning</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"></div><br />The pink boxes denote the bins, each 0.125 pixels wide; the horizontal direction depicted here corresponds to the "d" axis in Figure 2. The midpoint, or representative "x" value for each bin is indicated by the arrows and the values in blue. The green dots represent individual dense ESF samples --- their "y" values are not important in this diagram; the positions of the green dots merely illustrate where each dense ESF sample is located within each bin in terms of x value, and the number of dots gives a rough indication of the density of the dense ESF samples.<br /><br />If we use plain binning, then we choose as representative x value for each bin the midpoint of the bin. The representative y value is obtained as the mean of the y values of the ESF samples within that bin. In Figure 3, the rightmost bin has many ESF samples quite close to the midpoint of the bin, but almost as many ESF samples near the edge of the bin.
The effect of unweighted averaging would be that the samples near the right edge of the bin will carry roughly the same weight as the samples near the middle of our bin, but clearly the samples near the middle of the bin should have had a larger weight in computing the representative value for this bin.<br /><br />A much better way of binning would be to combine the binning step with the low-pass filtering step. Instead of representing each dense ESF sample as a point, each sample becomes a small rectangle, as shown here:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-3bpvyDhPW6g/VX7IcaW4dTI/AAAAAAAAA4k/sQiHAREbxIU/s1600/weighted_binning.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="300" src="https://3.bp.blogspot.com/-3bpvyDhPW6g/VX7IcaW4dTI/AAAAAAAAA4k/sQiHAREbxIU/s320/weighted_binning.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 4: Weighted binning</td></tr></tbody></table>Now we can make the weight of each sample point proportional to the overlap of the sample's rectangle and the bin extents. This gives the samples closer to the midpoint more weight, but it also allows a point to contribute to multiple adjacent bins, depending on the width of the rectangle. This smooths out the transition from one bin to the next, especially if the rectangle is wider than the bin width. (Ok, so the rectangle in the diagram is really just a 1-D interval, not a 2D shape. But the principle still holds.)<br /><br />Yes, I have just reinvented kernel density estimation. Sigh.
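The two binning variants can be sketched as follows (hypothetical Python; MTF Mapper's implementation differs in the details, and the interval width w here plays the role of the rect(w*x) kernel):

```python
import numpy as np

def bin_esf(d, v, width=0.125, w=1.0 / 3.0, weighted=True):
    """Resample dense ESF samples (distances d, intensities v) onto
    regular bins of `width` pixels.

    weighted=False: plain averaging of whatever lands in each bin.
    weighted=True:  each sample becomes an interval of width w, and its
    weight in a bin is the overlap between that interval and the bin.
    """
    lo, hi = float(d.min()), float(d.max())
    edges = np.arange(lo, hi + width, width)
    centers = edges[:-1] + width / 2.0
    num = np.zeros(len(centers))
    den = np.zeros(len(centers))
    for di, vi in zip(d, v):
        if not weighted:
            b = min(int((di - lo) / width), len(centers) - 1)
            num[b] += vi
            den[b] += 1.0
        else:
            a0, a1 = di - w / 2.0, di + w / 2.0
            first = max(int((a0 - lo) / width), 0)
            last = min(int((a1 - lo) / width), len(centers) - 1)
            for b in range(first, last + 1):
                ov = min(a1, edges[b + 1]) - max(a0, edges[b])
                if ov > 0.0:
                    num[b] += ov * vi
                    den[b] += ov
    return centers, num / np.maximum(den, 1e-12)
```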
<br /><br />Anyhow, this binning approach also makes the low-pass filtering step explicit, so if each dense ESF sample is now represented by an interval of width w pixels, then we are effectively convolving the ESF with a rect(w * x) function. We can remove the low-pass filtering effect on the MTF (calculated further down the pipeline) by dividing the MTF by sinc(0.5 * w * f), as I have shown in my previous post.<br /><br />Our binning process is beginning to look more like a proper approach to sampling: we apply a low-pass filter to our dense ESF points to remove (or at least strongly attenuate) higher frequencies, followed by choosing one representative value at the midpoint of each bin (the downsampling step). By choosing w = 0.33333 pixels, we have a fairly strong low-pass filter, but one that still has a cut-off frequency that is high enough to allow good detail at least up to 3 cycles per pixel.<br /><br />Because of the (relatively) wide low-pass filter, we could probably drop from 8x oversampling down to 4x oversampling, but I like the extra frequency resolution the 8x oversampling produces in the MTF.<br /><br /><h3>Results</h3>Simulating synthetic images with noise similar to that produced by a D7000 at ISO 800 (but a Gaussian PSF), we can investigate the benefits of the new binning method. Ideally, what we would like to see is no difference between accuracy at a 4 degree angle, and accuracy at one of the critical angles. 
To quantify this, here is a comparison of the 95th percentile of the relative MTF50 error (over a range of MTF50 values from 0.08 cycles/pixel to 0.5 cycles/pixel):<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-w34IYU1hExA/VX7llD1gBXI/AAAAAAAAA5E/B4hoYYvWVSw/s1600/gauss_iso800_newbin.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="200" src="https://1.bp.blogspot.com/-w34IYU1hExA/VX7llD1gBXI/AAAAAAAAA5E/B4hoYYvWVSw/s400/gauss_iso800_newbin.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 5: 95th percentile of relative MTF50 error (click to enlarge)</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"></div>The results are very promising. Most notable is the fact that the new binning method performs virtually identically regardless of edge orientation, with 26.565 degrees being the only angle that is <i>slightly</i> worse than the others.
There may be a slight drop relative to MTF Mapper v0.4.16 (at 4 degrees), but keep in mind the contribution of the change in windowing method discussed in my previous post.<br /><br />Just to be sure, I checked for bias at an edge orientation of 4 degrees (although I recycled the ISO800 images):<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-vwsx9Ml5h9o/VX7l5kmEdaI/AAAAAAAAA5M/PQ_UM9swQMY/s1600/gauss_noise_bias.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://4.bp.blogspot.com/-vwsx9Ml5h9o/VX7l5kmEdaI/AAAAAAAAA5M/PQ_UM9swQMY/s400/gauss_noise_bias.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 6: Relative MTF50 deviation (%)</td></tr></tbody></table>We can see that the new binning method does not introduce any bias in MTF50 estimates --- of course this is after correction using the MTF of the low-pass filter, as described above.<br /><br /><h3>Conclusion</h3>With the new binning method I can say that MTF Mapper no longer has significant problems with edges of certain orientations.
More testing is required, but the 95th percentile of relative MTF50 error appears to be below 5%, regardless of edge orientation, for MTF50 values from 0.08 cycles/pixel through to 0.5 cycles/pixel.<br /><br />The improved binning method will be included in the next release (which should be v0.4.17).<br /><br />Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com0tag:blogger.com,1999:blog-6555460465813582847.post-75611754780301179192015-06-11T06:52:00.000-07:002017-04-06T00:36:47.853-07:00Improved apodization and bias correctionFollowing on the relatively recent addition of LSF correction to Imatest, I decided to revisit some of the implementation details of MTF Mapper.<br /><br />The brutal truth is that MTF Mapper used an empirical correction factor (shock, shock, horror!) to remove the observed bias in measured MTF curves. The empirical correction factor (or rather, family of correction factors) was obtained by generating a synthetic image with a known, analytical MTF curve, and calculating the resulting ratio of the measured curve (as produced by MTF Mapper) to the expected analytical curve.<br /><br />This had the advantage that it would remove both known distortions, such as that generated by the finite-difference approximation to the derivative (which Imatest refers to as the <a href="http://www.imatest.com/2015/04/lsf-correction-factor-for-slanted-edge-mtf-measurements/" target="_blank">LSF correction factor</a>), and other distortions which were produced by processes that I did not fully understand at the time.<br /><br />This post will deal with two of the distortions that I have identified, and I will propose solutions that will enable MTF Mapper to do away with the empirical correction approach.<br /><br /><h3>Apodization</h3>Apodization, also called "windowing", is a way to attenuate some of the artifacts resulting from the application of the FFT (or DFT, if you like) to a signal of finite length.
The DFT/FFT assumes that the signal is periodic, that is, the first (leftmost) sample is preceded (circularly) by the last (rightmost) sample. Applying the FFT to a signal that is discontinuous when treated in this circularly wrapped-around way usually results in significant energy spuriously appearing on the high frequency end of the frequency spectrum.<br /><br />A common windowing function is the Hamming window, which looks like a cosine function centered on the center of the sequence of samples. The samples are multiplied component-wise with the window function, effectively producing a new set of samples such that the leftmost and rightmost samples are scaled to very low magnitudes. Since the left- and rightmost samples are now all close to zero, we are guaranteed to have a signal that no longer has a discontinuity when wrapping around the left/right ends.<br /><br />So why would we use apodization as part of the slanted edge method? First, recall how the slanted edge method works:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-Km63cEpSnow/WOXuSjkbDFI/AAAAAAAABSk/h6oQ118FZ640VVbNmPC4TTBVUUUhWcE2QCLcB/s1600/se_method1_na.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="295" src="https://3.bp.blogspot.com/-Km63cEpSnow/WOXuSjkbDFI/AAAAAAAABSk/h6oQ118FZ640VVbNmPC4TTBVUUUhWcE2QCLcB/s400/se_method1_na.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Step 1: generate the edge spread function (ESF)</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"></div>This diagram shows how the individual pixel intensities are projected along a line that coincides with the edge we are analyzing. 
Owing to the angle of the edge relative to the pixel grid, the spacing of the projected values (along the direction perpendicular to the edge) is much finer than the original pixel spacing. The densely-spaced projected values are binned to form a regularly-spaced set of samples at (usually) 4x or 8x oversampling relative to the pixel grid. This allows us to measure frequencies above the Nyquist limit imposed by the original pixel grid.<br /><br />Now we can compute the MTF as illustrated here:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-FOEu0zYfn44/WOXuximAfoI/AAAAAAAABSo/LsiphEP0KUofV5lwIQtgFL8prRosg7gOgCLcB/s1600/se_method2_na.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="640" src="https://1.bp.blogspot.com/-FOEu0zYfn44/WOXuximAfoI/AAAAAAAABSo/LsiphEP0KUofV5lwIQtgFL8prRosg7gOgCLcB/s640/se_method2_na.png" width="417" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Step 2: Compute MTF from PSF using FFT</td></tr></tbody></table><br /><div class="separator" style="clear: both; text-align: center;"></div>Notice that the PSF is usually quite compact, i.e., most of the area under the PSF curve is located close to the centre of the PSF curve. This is typical of a PSF extracted from a real-world edge. We see some noise on the tails of the PSF, with visibly more noise on the right side --- this is an artifact of photon shot noise being relative to the signal level, so the noise magnitude is larger in the bright parts of the image.<br /><br />Anyhow, since the noise is random, we might end up with large values on the edges, such as can be seen on the right end of the PSF samples.
This is exactly the scenario which we would like to avoid, so we can apply a window to "squash" the samples near the edges of the PSF.<br /><br />MTF Mapper had been using a plain Hamming window up to now --- this resulted in a systematic bias in MTF measurements, particularly affecting edges with an MTF50 value below 0.1 cycles per pixel.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-xuH2bDBF58Y/WOXvI9m7aUI/AAAAAAAABSs/Kci0ht_ran0r4Jc6g-8w71IBVpCFCyXSQCLcB/s1600/hamming_window_na.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="158" src="https://2.bp.blogspot.com/-xuH2bDBF58Y/WOXvI9m7aUI/AAAAAAAABSs/Kci0ht_ran0r4Jc6g-8w71IBVpCFCyXSQCLcB/s400/hamming_window_na.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Hamming window</td></tr></tbody></table><br />Two things are visible here: the noise is suppressed reasonably well (on the ends of the green curve) after multiplying the PSF by the Hamming window function (see right side of illustration), and the PSF appears to contract slightly, effectively becoming slightly narrower after windowing.<br /><br />The apparent narrowing of the PSF has the expected impact on MTF50 values: they are overestimated slightly.<br /><br />I identified three possible methods to address this systematic overestimation of MTF50 values (on the low end of MTF50 values): empirical correction (as MTF Mapper has been doing so far), deconvolution, and using a different window function.<br /><br />We can "reverse" the effect of the windowing after we have applied the FFT to obtain the MTF. By the convolution theorem, we know that multiplication in the time domain becomes convolution in the frequency domain.
Since we multiply the PSF by the window function in the time domain, it stands to reason that we must deconvolve the MTF by the Fourier transform of the window function. Except that deconvolution is a black art that is best avoided.<br /><br />I have tried many different approaches, but the high noise levels in the PSF make for a poor experience, more apt to inject additional distortion into our MTF than to undo the slight distortion caused by windowing in the first place.<br /><br />That leaves us only with the last option: choose a different window function. Purely based on aesthetics, I decided on the Tukey window with an alpha parameter of 0.6:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-DbDnmE8EVho/WOXvzJbAi4I/AAAAAAAABS0/QZ_xSQGySgQg8Q-jijMAItxHMdorF6ucQCLcB/s1600/tukey_window_na.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="183" src="https://1.bp.blogspot.com/-DbDnmE8EVho/WOXvzJbAi4I/AAAAAAAABS0/QZ_xSQGySgQg8Q-jijMAItxHMdorF6ucQCLcB/s400/tukey_window_na.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Tukey window</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"></div>Notice that we may get slightly less noise suppression, but in return we distort the PSF far less. In fact, at this level (MTF50 = 0.05) the distortion is negligible, and no further correction factors are required. This is the new apodization method employed by MTF Mapper.<br /><br /><h3>LSF correction and beyond</h3>As already mentioned, the finite-difference method used to calculate the PSF (or LSF, if you are pedantic) from the ESF is not identical to the ideal analytical derivative of the ESF.
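For the record, both windows discussed above are simple to generate (a numpy sketch; the alpha = 0.6 value is the one quoted above, and MTF Mapper's own implementation may differ in detail):

```python
import numpy as np

def tukey(n, alpha=0.6):
    """Tukey (tapered cosine) window: flat top of relative width
    1 - alpha, with cosine tapers on the outer alpha fraction."""
    x = np.linspace(0.0, 1.0, n)
    w = np.ones(n)
    left = x < alpha / 2.0
    w[left] = 0.5 * (1.0 + np.cos(2.0 * np.pi / alpha * (x[left] - alpha / 2.0)))
    w[left[::-1]] = w[left][::-1]   # mirror the taper onto the right edge
    return w

n = 257
hamming = np.hamming(n)   # the old window
tk = tukey(n, 0.6)        # the new window, alpha = 0.6
```

The Tukey window is identically 1 over its central plateau, so the well-sampled middle of the PSF is left untouched; the Hamming window attenuates every sample except the exact centre, which is why it narrows a compact PSF slightly.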
A sin(x)/x correction factor can be employed to effectively remove this distortion. The Imatest <a href="http://www.imatest.com/2015/04/lsf-correction-factor-for-slanted-edge-mtf-measurements/" target="_blank">article</a> on this topic does a fine job of explaining the maths behind this correction; the method was originally published by Burns while working at Kodak.<br /><br />Since MTF Mapper employs 8x oversampling, we must divide the calculated MTF by the function sin(π * f/4)/(π * f/4). Clarification: this stems from the sample spacing of 0.125 pixels. Plugging this into the finite-difference derivative calculation as explained in the Imatest article we see that for 8x oversampling we will have a correction factor of sin(π * f/4)/(π * f/4) as opposed to the sin(π * f/2)/(π * f/2) we would have had for 4x oversampling. <br /><br />Even after applying this correction factor, though, we can see a systematic difference between the expected ideal MTF and the MTF produced by the slanted edge method. To understand this (final?) distortion, we have to rewind back to the step where we construct the ESF (helpfully captioned "Step 1" above...).<br /><br />The projection used to form the dense ESF samples produces a dense set of points, but these points are no longer spaced at convenient regular intervals. The FFT rather depends on being fed regularly spaced samples, so the simplest solution is to bin the samples at our desired oversampling factor. An oversampling factor of 8x thus produces bins that are 0.125 pixels wide.<br /><br />Again following the path of least resistance, we simply average all the values in each bin to obtain our regularly-sampled ESF. This seems like such a harmless little detail, but if we stop and think about it, we realize that this must be a low-pass filter. Why?<br /><br />Well, consider first a continuous interpolation function passing through all the ESF samples before binning.
We would like to sample this function at regular intervals (0.125 pixels, to be exact), but we know that point sampling will produce horrible aliasing artifacts. The correct approach is to apply a low-pass filter, i.e., convolve our interpolating function with some filter. Let us choose a simple box filter of width 0.125 pixels. If we first convolve the interpolating function with this box filter, and then point-sample at intervals of 0.125 pixels, we end up with exactly the same result as we would obtain from binning followed by averaging all the values in each bin. This approach is optimal in terms of noise suppression for a Gaussian noise source, so even though it sounds simplistic, it is a good solution.<br /><br />Fortunately, this process is easily reversible by indiscriminate application of the convolution theorem: convolution in the time domain can be reversed by dividing the MTF (in the frequency domain) by the Fourier transform of our low-pass filter. And by now we know that the Fourier transform of a box filter is the sinc() function --- all we have to do is choose the proper frequency.<br /><br />At 8x oversampling, our bin width is 0.125 pixels, resulting in a low-pass filter of rect(8x). In the Fourier domain, this means we must divide the MTF by sinc(π * f/8) --- this will effectively reverse the attenuation of the MTF induced by the low-pass filter.<br /><br />To illustrate the effect of these two components (discrete derivative and binning low-pass filter) we can look at a simple example using a Gaussian PSF, with no added noise, and no apodization.
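The equivalence between bin-averaging and box filtering followed by point sampling is easy to verify numerically. The sketch below is my illustration of the claim, using 8 discrete dense samples per bin in place of a continuous interpolating function; the two computations agree to machine precision:

```python
import numpy as np

rng = np.random.default_rng(0)
dense = rng.standard_normal(64 * 8)   # dense samples, 8 per 0.125-pixel bin

# Binning + averaging: the mean of each consecutive group of 8 samples
binned = dense.reshape(-1, 8).mean(axis=1)

# Box filter of width 8 samples, followed by point sampling once per bin
box = np.ones(8) / 8
filtered = np.convolve(dense, box, mode="valid")  # filtered[j] = mean(dense[j:j+8])
sampled = filtered[::8]                           # one sample per bin

assert np.allclose(binned, sampled)
```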
We start with the dense ESF of an edge with an MTF50 value of exactly 0.25 cycles/pixel:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-rshhs7d2WYc/VXlD4ge6GvI/AAAAAAAAA0c/-BEpfhbmlKw/s1600/dense_esf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://2.bp.blogspot.com/-rshhs7d2WYc/VXlD4ge6GvI/AAAAAAAAA0c/-BEpfhbmlKw/s400/dense_esf.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 1: Dense ESF</td></tr></tbody></table><br />This ESF is binned into bins of width 0.125 pixels:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-c6pAakzsoJ0/VXlF_RSpqLI/AAAAAAAAA0o/YLgRc7M7p-Y/s1600/resampled_esf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://1.bp.blogspot.com/-c6pAakzsoJ0/VXlF_RSpqLI/AAAAAAAAA0o/YLgRc7M7p-Y/s400/resampled_esf.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 2: binned ESF</td></tr></tbody></table><br />Next we calculate the discrete derivative to obtain the PSF:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-EVMbotjMDaI/VXlKfFWAtJI/AAAAAAAAA1I/mVe-gV2Vgtk/s1600/binned_psf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://4.bp.blogspot.com/-EVMbotjMDaI/VXlKfFWAtJI/AAAAAAAAA1I/mVe-gV2Vgtk/s400/binned_psf.png" width="400" /></a></td></tr><tr><td 
class="tr-caption" style="text-align: center;">Figure 3: discrete PSF</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"></div> This PSF is passed through the FFT to obtain the following MTF curve:<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-2-d4wwHy-l8/VXlKj0HolRI/AAAAAAAAA1Q/xhqc1Np7FrE/s1600/raw_mtf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://3.bp.blogspot.com/-2-d4wwHy-l8/VXlKj0HolRI/AAAAAAAAA1Q/xhqc1Np7FrE/s400/raw_mtf.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 4: measured MTF curve</td></tr></tbody></table>That MTF curve looks pretty good. And it looks very much like half of a Gaussian, just as we would expect. But looks can be deceiving at this scale. We know the true analytical MTF curve that we would expect: a Gaussian with a standard deviation of about 0.2123305 (and change).
So next we plot the measured MTF curve divided by the expected MTF curve:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-cK7W6ootrok/VXlQf3D4ksI/AAAAAAAAA1w/GZjb5JQemU4/s1600/basic_ratio.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://3.bp.blogspot.com/-cK7W6ootrok/VXlQf3D4ksI/AAAAAAAAA1w/GZjb5JQemU4/s400/basic_ratio.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 5: Uncorrected MTF ratio (red)</td></tr></tbody></table>The dashed blue curve is the sin(π * f/4)/(π * f/4) function, corresponding to the discrete derivative correction, and the red curve is the ratio of measured to expected MTF. Clearly these two curves have roughly the same shape. Let us take our measured MTF curve, divide it by the sinc(f) curve to apply the discrete derivative correction, and plot the ratio of the corrected curve to the expected curve:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-EXUdljvITSM/VXlSc7GVAEI/AAAAAAAAA2A/eqR1yL7Om0A/s1600/corrected_ratio.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://2.bp.blogspot.com/-EXUdljvITSM/VXlSc7GVAEI/AAAAAAAAA2A/eqR1yL7Om0A/s400/corrected_ratio.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 6: Partially corrected MTF ratio (red)</td></tr></tbody></table>Note how the red curve (corrected MTF divided by expected MTF) has flattened out --- keep in mind that we would expect this curve to flatten out into a straight line. 
The black dashed line is the function sin(π * f/8)/(π * f/8), i.e., the Fourier transform of the rect(8x) low-pass filter induced by the binning process. Now we can combine the two corrections, i.e., take the measured MTF, divide by the discrete derivative correction, and then divide the result by the low-pass correction; this gives us the "fully corrected" MTF curve. Plotting the fully corrected MTF curve divided by the expected analytical MTF curve yields this:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-j98wiX4w1vE/VXlUVdE6pLI/AAAAAAAAA2M/-8MwM11AZKs/s1600/fully_corrected_ratio.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://1.bp.blogspot.com/-j98wiX4w1vE/VXlUVdE6pLI/AAAAAAAAA2M/-8MwM11AZKs/s400/fully_corrected_ratio.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 7: Fully corrected MTF ratio (red)</td></tr></tbody></table>The red curve is almost, but not quite, a constant value of 1.0. 
This demonstrates that the low-pass correction helps to bring us closer to the expected ideal MTF curve.<br /><br />If we zoom out a bit on the last plot, we see things are not entirely rosy:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-h-hIk3hJs7o/VXlVJEXmTLI/AAAAAAAAA2Y/ecZbSrnLQSc/s1600/fully_corrected_ratio_wide.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://3.bp.blogspot.com/-h-hIk3hJs7o/VXlVJEXmTLI/AAAAAAAAA2Y/ecZbSrnLQSc/s400/fully_corrected_ratio_wide.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 8: Fully corrected MTF ratio (red), wide view</td></tr></tbody></table>Once we move past a frequency of 1 cycle per pixel, the corrected curve does not match the expected curve so well anymore, at least not when expressed as a ratio. But looking back at Figure 4 above, we see that the measured MTF curve is practically zero beyond 1 cyc/pixel anyway, so we should expect some numerical instability when dividing the measured curve by the expected curve. This also explains my choice of scale in a few of the plots above.<br /><br />If we express the difference between the fully corrected curve and the expected analytical curve as a percentage of the magnitude of the analytical curve, we see that the fully corrected curve deviates only about 0.15% at 1 cyc/pixel, and only about 0.05% at 0.5 cyc/pixel (Nyquist). For reference, the relative deviation of a completely uncorrected curve is about 10% and 3% at 1 and 0.5 cyc/pixel respectively. 
Applying only the discrete derivative correction leaves a deviation of about 2.8% and 0.6%.<br /><br />So adding the correction for the low-pass filter effect of the binning is definitely in the diminishing returns category, but I certainly aim to make MTF Mapper the most accurate tool out there, so no expense is spared.<br /><br />Summary: The full correction, which takes care of both the finite-difference distortion and the attenuation induced by the low-pass filter (implicitly part of the binning operation), is the product of the two individual terms, i.e.,<br /><div style="text-align: center;">c(f) = sin(π * f/4)/(π * f/4) * sin(π * f/8)/(π * f/8).</div>The MTF curve is corrected by dividing by this correction factor.<br /><br /><h3>Accuracy evaluation</h3>To demonstrate the effect of the new apodization and MTF correction approaches, we can look at the MTF50 accuracy over a range of MTF50 values. For each of the MTF50 levels shown below, a number of synthetic images were rendered without adding any simulated noise --- this is to emphasize the inherent bias in measured MTF50 values.
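As an aside, the full correction c(f) can be exercised end-to-end on a noise-free synthetic Gaussian edge. The Python sketch below is only an illustration (MTF Mapper itself is C++, and extracts its ESF from an image; here the ESF is sampled analytically): it bins a dense ESF, takes the central difference, and divides the resulting MTF by c(f).

```python
import numpy as np
from math import erf, sqrt, log

# Gaussian MTF with MTF50 = 0.25 cyc/pixel: MTF(f) = exp(-f^2 / (2 s^2))
s = 0.25 / sqrt(2 * log(2))            # ~0.2123305, as quoted in the text
sigma = 1 / (2 * np.pi * s)            # matching PSF standard deviation (pixels)

# Dense analytic ESF (32 samples per bin), bin-averaged to 8x oversampling
delta = 0.125
x = (np.arange(-512 * 32, 512 * 32) + 0.5) * (delta / 32)
esf = np.array([0.5 * (1 + erf(v / (sigma * sqrt(2)))) for v in x])
esf_binned = esf.reshape(-1, 32).mean(axis=1)   # 1024 bins, 0.125 pixels apart

# Central-difference LSF, then the (normalized) MTF via the FFT
lsf = (esf_binned[2:] - esf_binned[:-2]) / (2 * delta)
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
f = np.fft.rfftfreq(len(lsf), d=delta)          # cycles/pixel

# c(f): derivative term times binning term; np.sinc(x) = sin(pi*x)/(pi*x)
c = np.sinc(f / 4) * np.sinc(f / 8)
corrected = mtf / c

analytic = np.exp(-f ** 2 / (2 * s ** 2))
sel = f <= 1.0
max_err = np.max(np.abs(corrected[sel] - analytic[sel]))  # small residual
```

Up to 1 cyc/pixel the corrected curve tracks the analytic Gaussian MTF to well below the percent level, consistent with the deviations reported above.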
All edges were kept at a relative angle of 4.5 degrees, with 30 repetitions rendered using small sub-pixel shifts of the rectangle.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-n32gU2MTx_g/VXlhtGvh_LI/AAAAAAAAA2o/sIBafCY_MDM/s1600/gauss_nonoise.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://1.bp.blogspot.com/-n32gU2MTx_g/VXlhtGvh_LI/AAAAAAAAA2o/sIBafCY_MDM/s400/gauss_nonoise.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 9: Relative MTF50 deviation on a Gaussian PSF</td></tr></tbody></table>Our three contestants are MTF Mapper v0.4.16, which employs a Hamming windowing function and empirical MTF curve correction, followed by an implementation that uses a Hamming window with only the discrete derivative correction, and finally the new implementation using a Tukey windowing function with both discrete derivative and binning low-pass corrections.<br /><br />It is clear that the Hamming window + derivative correction (blue curve) produces a significant bias at low MTF50 values, raising their values artificially (as expected from the apparent narrowing of the PSF). Also note how the MTF50 values are underestimated at higher MTF50 values, which is again consistent with the effects of the binning low-pass filter.<br /><br />Both the empirical correction method (red curve) and the new Tukey window plus full correction (black curve) display much lower bias in their MTF50 estimates, as seen in Figure 9.<br /><br />What happens when we use a different PSF to generate our synthetic images? This time I chose the Airy + photosite aperture (square aperture, 100% fill factor) as a representative. 
This corresponds to something like a D7000 sensor without an OLPF, and without noise.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-5hGQ9MXlg7Y/VXl-maHL5AI/AAAAAAAAA24/zqaIp8OP4SE/s1600/airy_nonoise.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://1.bp.blogspot.com/-5hGQ9MXlg7Y/VXl-maHL5AI/AAAAAAAAA24/zqaIp8OP4SE/s400/airy_nonoise.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 10: Relative MTF50 deviation on an Airy+box PSF</td></tr></tbody></table>Firstly, we see some shockingly large errors on the low MTF50 side. The data points correspond to a simulated aperture of f/64, followed by f/32, f/16, f/8, f/5.6, f/4, and finally f/2.8. A reasonable explanation for the difference between the results in Figures 9 and 10 might be the wider support of the Airy PSF. Typically, the central peak of the Airy PSF is narrower than a Gaussian, but the Gaussian also drops off to zero more quickly, i.e., the Airy PSF has more energy in the tails of the PSF. This means that a wide (f/64) Airy PSF will be affected more strongly by the windowing function, and may even suffer from some truncation of the PSF --- this notion seems to be supported by the difference between the Tukey and Hamming window curves (black vs blue).<br /><br />Interestingly, the empirical correction performed better than expected, doing almost as well as the Tukey + full correction method. This is somewhat unexpected, since the empirical correction factors were calculated from a Gaussian PSF.<br /><br />Since these experiments were all performed in the absence of simulated noise, they really only test the inherent <i>bias </i>of the various methods.
The good news is that the Tukey + full correction approach appears to be an overall improvement over the existing empirical correction, even though the improvement is really quite small.<br /><br /><h3>Adding in some noise</h3>It always makes sense to look at both bias and variance when comparing the quality of two competing models. In this spirit, the experiments above were repeated under mild noise conditions, corresponding to roughly ISO 800 on a D7000 sensor. First up, the Gaussian PSF:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-yJo6-LbOs4g/VXmI6elQ8FI/AAAAAAAAA3I/pUXfvxs7idk/s1600/gauss_iso800.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://1.bp.blogspot.com/-yJo6-LbOs4g/VXmI6elQ8FI/AAAAAAAAA3I/pUXfvxs7idk/s400/gauss_iso800.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 11: Standard deviation of relative MTF error on Gaussian PSF</td></tr></tbody></table>Figure 11 presents the standard deviation of the relative MTF50 error, expressed as a percentage. We see the impact of the Tukey windowing function quite clearly: since the Tukey window does not attenuate such a large part of the PSF (i.e., less of the edge of the PSF is attenuated), we see a small increase in the standard deviation of the relative error. As expected, both the methods using the Hamming window perform nearly identically.<br /><br /><h3>Conclusion</h3>MTF Mapper will employ the new apodization function (Tukey window) as well as the analytically-derived full correction in lieu of the older Hamming window + empirical correction, starting from the next release.
This should be v0.4.17 onwards.<br /><br />The new correction method is more elegant, and makes fewer assumptions regarding the shape of the MTF curve, unlike the empirical correction that was trained on only Gaussian MTFs. But throwing out the empirical correction brings back the strong attenuation of the PSF at lower MTF50 values, so the Hamming window had to be replaced with the Tukey window.<br /><br />We pay a small price for using the Tukey window, but realistically the MTF50 error should remain below 5% (for an expected MTF50 value of 0.5 c/p) even under quite noisy conditions.<br /><br />In theory it should be possible to incorporate strong low-pass filtering of the PSF, followed by suitable reversal-via-division of the low-pass filter in the frequency domain. In practice, I have not seen any worthwhile improvement in accuracy. I suspect that some non-linear adaptive filter may be able to strike the right balance, but that will have to wait for now.Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com5tag:blogger.com,1999:blog-6555460465813582847.post-84335311607462149802015-04-15T05:01:00.000-07:002015-04-15T05:01:52.339-07:00Trust, but verifyA while back I wrote:<br /> <i>"I could not find any synthetic images rendered with specific, exactly known point spread functions. 
This meant that the only way that I could tell if MTF Mapper was working correctly was to compare its output to other slanted edge implementations."</i> <a href="http://mtfmapper.blogspot.com/2013/12/mtfgeneraterectangle-grows-up.html" target="_blank">(here)</a><br /><br />This was the main motivation for developing <span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span>.<br /><br />It turns out that Imatest recently updated their SFR measurement algorithm to include the well-known finite-difference-correction <a href="http://www.imatest.com/2015/04/lsf-correction-factor-for-slanted-edge-mtf-measurements/" target="_blank">(here).</a> I first encountered this correction in one of Burns' papers. According to the Imatest news article, this correction is now included in the ISO 12233:2014 standard. <br /><br />I have yet to test the new version of Imatest against the synthetic images produced by <span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span>, but I do recall that some quick-and-dirty testing a few years back hinted at Imatest underestimating MTF50 slightly, which would be consistent with an algorithm that does not apply the finite difference correction. As pointed out in the Imatest news article, this difference is really only noticeable when dealing with higher MTF50 values, so this does not imply that all older Imatest results are now suddenly obsolete.<br /><br /><br />It does raise an important point about traceability and independent verification, though. Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com0tag:blogger.com,1999:blog-6555460465813582847.post-26699452283507516372014-01-09T04:33:00.000-08:002014-01-09T04:33:12.523-08:00Analogue image zoom: comparing 6 megapixel images with 16 megapixel images<h2>The problem</h2>Let us say you have captured a shot of a test chart with a D40, and repeated the process (same lens) with a D7000. 
The D40 gives you a 6 megapixel image, and the D7000 gives you a 16 megapixel image. You would like to compare the sharpness of the one camera to that of the other.<br /><br />There are several options for executing this comparison:<br /><ol><li>Print both shots at the same size. This is probably the way to go if you intend to print a lot of uncropped photos.</li><li>Scale down the 16 MP image to 6 MP, and compare at 100% view.</li><li>Scale up the 6 MP image to 16 MP, and compare at 100% view.</li><li>Scale both images to some other resolution, e.g., 8 MP (like DxO Labs do), or maybe 24 MP.</li></ol>There is at least one sound reason why option 4 is the better choice amongst options 2 through 4: scaling artifacts. Performing an MTF analysis of an image upscaled with a popular cubic scaling algorithm (Mitchell) reveals that there is some effective contrast enhancement that takes place as part of the scaling process, visible as overshoot and undershoot in an edge profile plot. By scaling both images, you are at least trying to compare apples to apples, especially if you are unsure of exactly what sharpening algorithm your software will employ.<br /><br />Of course, this entire post deals with visual interpretation of test chart images. If you are interested in other properties (e.g., MTF) then go ahead and use MTF Mapper to perform such measurements directly. The normal slanted edge MTF analysis does not really tell you what your aliasing will look like after dropping the OLPF from your sensor, nor how apparent sharpness is influenced by demosaicing algorithms. For such evaluations visual interpretation might prove useful still.<br /><br /><h2>Another option: simulation</h2>Once you embrace simulated images, such as those produced with <span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span>, you have a fifth option: directly produce a 16 MP (equivalent) image but keep the point spread function (PSF) of the 6 MP camera.
I call this feature "analogue scaling", mostly because it effectively resamples the 6 MP image to 16 MP, but without using a discretized 6 MP image as the source. Instead, the usual "infinite precision" analytical description of the target scene is simply scaled down (relative to the photosite pitch), and the sample spacing of the rendered image is adjusted accordingly.<br /><br />Here is a comparison between a simulated D40 (photosite pitch 7.8 micron, f/4, green light, 4-dot OLPF) and a D7000 (photosite pitch 4.73 micron, f/4, green light, 4-dot OLPF):<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-Iqo3JgRqOcQ/Us6LvHqOcVI/AAAAAAAAAqA/rQ38GSnTS08/s1600/pinch_d40.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://3.bp.blogspot.com/-Iqo3JgRqOcQ/Us6LvHqOcVI/AAAAAAAAAqA/rQ38GSnTS08/s1600/pinch_d40.png" height="373" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Simulated D40 image, effectively magnified by a factor of ~1.65 (click for full-size)</td></tr></tbody></table><br /> <br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-pNWse9e8vLA/Us6MLnywKSI/AAAAAAAAAqI/ZUFeBE8T95Q/s1600/pinch_d7k.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-pNWse9e8vLA/Us6MLnywKSI/AAAAAAAAAqI/ZUFeBE8T95Q/s1600/pinch_d7k.png" height="374" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Simulated D7000 image (click for full-size)</td></tr></tbody></table><br />These images were generated with the following commands:<br /><blockquote 
class="tr_bq">mtf_generate_rectangle.exe --target-poly doc/pinch.txt -p airy-box -n 0 --analogue-scale 0.606410 --pixel-pitch 7.8 --aperture 4 -o pinch_d40.png</blockquote>and<br /><blockquote class="tr_bq">mtf_generate_rectangle.exe --target-poly doc/pinch.txt -p airy-4dot-olpf -n 0 --pixel-pitch 4.73 --aperture 4 -o pinch_d7k.png</blockquote>respectively. Note that the analogue scale factor is ~0.606, which is 4.73/7.8, i.e., the ratio between the photosite pitch of the D7000 and D40. Specifying an <span style="font-family: "Courier New",Courier,monospace;">--analogue-scale</span> factor of greater than 1.0 will produce aliasing (and an apparent increase in sharpness), while a factor of less than 1.0 will produce smoothing, as would be expected from upscaling.<br /><br />Note that the "-n 0" switch turns off simulated sensor noise. Since sensor noise is currently computed in the domain of the output image, this switch is required to produce a correct pair of images. If noise is left on, you would obtain a scaled D40 image with image noise appearing at the size of D7000 pixels, which will clearly cause the D40 image to appear better than it should. This can be fixed (in the mtf_generate_rectangle code) by scaling the "noise image" correctly, but I honestly only thought of this problem now as I am busy writing this blog post :)<br /><br /><h2>Discussion</h2>Does this approach work? Well, take a look at the point where the converging lines blur into a gray mess in the D40 image (top image above). This appears to happen after the tick marked "2" in the horizontal set of lines --- maybe about one-third of the way from "2" to "1".<br /><br />In the D7000 image, the extinction point (gray mess) appears almost exactly at the tick marked "1" in the horizontal set of converging lines. This appears about right, since we know the linear resolution of the D40 is about 0.606 that of the D7000.
Not really a rigorous proof, but at least reassuring.<br /><br />To summarize: the <span style="font-family: "Courier New",Courier,monospace;">--analogue-scale</span> option of <span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span> will allow you to produce a pair of synthetic images of a test chart to simulate two different cameras, with different photosite pitch values, but without the hassle or potential artifacts introduced by upscaling the discrete image produced by the lower-resolution camera.<br /><br />Of course, this type of simulation will allow you to investigate potential future sensors too. How would a 50 MP APS-C camera render a resolution test chart .... ?<br /><br />ps: MTF Mapper version 0.4.16 is finally available <a href="https://sourceforge.net/projects/mtfmapper/files/">here</a> --- this is the first version to support all the required features to produce the results found in this post.Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com0tag:blogger.com,1999:blog-6555460465813582847.post-21839996998835626062013-12-18T03:39:00.000-08:002017-04-06T00:45:35.871-07:00mtf_generate_rectangle grows up<h2>Fed up with squares?</h2>If you have used <span style="font-family: "courier new" , "courier" , monospace;">mtf_generate_rectangle</span> before you will know why it is called mtf_generate_<i>rectangle</i>. A bit over two years ago, when I started working on the first release of MTF Mapper, I ran into a specific problem: I could not find any synthetic images rendered with specific, exactly known point spread functions. This meant that the only way that I could tell if MTF Mapper was working correctly was to compare its output to other slanted edge implementations.<br /><br />While this is sufficient for some, it did not sit well with me. What if all those other implementations were tested in the same way? 
If that were the case, then <i>all</i> slanted edge implementations (available on the Internet) could be flawed. Clearly, some means of verifying MTF Mapper independently of other slanted edge algorithm implementations was required.<br /><br />Thus <span style="font-family: "courier new" , "courier" , monospace;">mtf_generate_rectangle</span> was born. The original implementation relied on sampling a dense grid, but quickly progressed to an importance sampling rendering algorithm tailored to a Gaussian PSF. The Gaussian PSF had some important advantages over other (perhaps more realistic) PSFs: the analytical MTF curve was known, and simple to compute (the MTF of a Gaussian PSF is just another scaled Gaussian). Since the slanted edge algorithm requires a step edge as input, it seemed logical to choose a square as the target shape; this would give us four step edges for the price of one.<br /><br />Such synthetic images, composed of a black square on a white background, are perfectly suited to the task of testing the slanted edge algorithm. Unfortunately, they are not great for illustrating the visual differences between different PSFs. There are a few well-known target patterns that are found on resolution test charts designed for visual interpretation. The USAF1951 pattern consists of sets of three short bars (see examples later on); the widths of these bars are decreased in a geometric progression, and the user is supposed to note the scale at which the bars are no longer clearly distinguishable.<br /><br />Another popular test pattern is the Siemens star. This pattern comprises circular wedges radiating from the centre of the design. The main advantage of the Siemens star is that resolution (spacing between the edges of the wedges) decreases in a continuous fashion, as opposed to the discrete intervals of the USAF1951 chart.
I am not a huge fan of the Siemens star, though, mostly because it is hard to determine the exact point at which the converging bars (wedges) blur into a gray mess. It is far too easy to confuse aliasing with real detail on this type of chart. Nevertheless, other people seem to like this chart.<br /><br />Lastly, there is the familiar "pinched wedge" pattern (also illustrated later in this post), which contains a set of asymptotically convergent bars. The rate of convergence is much slower than that of the Siemens star, and a resolution scale usually accompanies the pattern, making it possible to visually measure resolution in a fashion similar to the USAF1951 chart, but with slightly more accuracy. I rather like this design, if only for the fact that the resulting pictures are aesthetically pleasing.<br /><br />Today I announce the introduction of a fairly powerful new feature in <span style="font-family: "courier new" , "courier" , monospace;">mtf_generate_rectangle</span>: the ability to render arbitrary polygon shapes.<br /><br /><h2>The implementation</h2>You can now pass fairly general polygonal targets using the "--target-poly <filename>" command line option. The format of the file specified using <filename> is straightforward: any number of polygons can be specified, with each polygon defined by an integer <n> denoting the number of vertices, followed by <n> pairs of <x,y> coordinates. I usually separate these components with newline characters, but this is not critical.<br /><br />The polygons themselves can be convex or concave. In theory, non-trivial self-intersections should be supported, but I have not tested this myself yet. There is currently no way to associate multiple contours with a single polygon, thus you cannot specify any polygons containing holes.
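To make the file format concrete, here is a short sketch that writes a hypothetical two-polygon target file (a convex square and a concave "L"; the coordinates and filename are invented for this example):

```python
# Hypothetical --target-poly input: each polygon is written as a vertex
# count n, followed by n x,y coordinate pairs (whitespace-separated).
polygons = [
    [(0, 0), (100, 0), (100, 100), (0, 100)],                            # convex square
    [(120, 0), (220, 0), (220, 40), (160, 40), (160, 100), (120, 100)],  # concave "L"
]
with open("targets.txt", "w") as fh:
    for poly in polygons:
        fh.write("%d\n" % len(poly))
        for vx, vy in poly:
            fh.write("%d %d\n" % (vx, vy))
```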
I work around this by simply splitting the input polygons with a line passing through such a hole: for example, a polygon representing the number "0" can be split down the middle, producing two concave polygons that touch on the split line.<br /><br /><h3>General polygon intersections</h3>For a while now I have cheated by relying on the Sutherland-Hodgman algorithm to compute the intersection between two polygons. Specifically, this operation is required by all the importance sampling algorithms involving a non-degenerate photosite aperture (e.g., "airy-box" and "airy-4dot-olpf" PSF options specified with the "-p" option to mtf_generate_rectangle). <a href="http://mtfmapper.blogspot.com/2012/11/importance-sampling-how-to-simulate.html">This article</a> explains the process in more detail, but the gist is that each "sample" during the rendering process is proportional to the area of the intersection between the target polygon geometry and a polygon representing the photosite aperture (suitably translated). If we assume that the photosite aperture polygon is simply a square (or more generally, convex) then we can rely on the Sutherland-Hodgman algorithm to compute the intersection: we simply "clip" the target polygon with the photosite aperture polygon, and compute the area of the resulting clipped polygon.<br /><br />Now this is where the cheat comes in: the clipped result produced by the Sutherland-Hodgman algorithm is only correct if both polygons are convex. If the target polygon (the clipee) is concave, and the clipping polygon is convex, the Sutherland-Hodgman algorithm may produce degenerate vertices (see Figure 1 below).
The cheat that I employed relied on the observation that degenerate sections of a polygon have zero area, thus they have no influence on the sampling process.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-fZ_UJe7aP6g/UrF8DyBeEaI/AAAAAAAAApQ/uJJyhcEh4aU/s1600/sutherland.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://1.bp.blogspot.com/-fZ_UJe7aP6g/UrF8DyBeEaI/AAAAAAAAApQ/uJJyhcEh4aU/s1600/sutherland.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig 1: Clipping a concave polygon with the Sutherland-Hodgman algorithm</td></tr></tbody></table><br />This allowed me to continue using the Sutherland-Hodgman algorithm, mostly because the algorithm is simple to implement and very efficient. This efficiency stems from the way in which an actual intersection vertex is only computed if an edge of the clippee polygon is known to cross an edge of the clipper polygon. This is a <i>huge</i> saving, especially if one considers that the photosite aperture polygon (clipper) will typically be either entirely outside the target polygon, or entirely inside the target polygon, except of course near the edges of the target polygon.<br /><br />All this comes apart when the photosite aperture polygon becomes concave. Solving that problem requires significantly more effort. For a start, a more general polygon clipping algorithm is required. I chose the Greiner-Hormann algorithm, with Kim's extensions to cater for the degenerate cases. In this context, the degenerate cases occur when some of the vertices of the clippee polygon coincide with vertices (or edges) of the clipper polygon. This happens fairly often when constructing a quadtree (more on that later). 
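To make the fast path concrete, here is a minimal Python sketch of the convex-clipper case (not MTF Mapper's actual C++ implementation): Sutherland-Hodgman clipping followed by a shoelace-formula area computation. It assumes the clipper polygon is convex with vertices in counter-clockwise order.

```python
def shoelace_area(poly):
    # Absolute area of a simple polygon given as a list of (x, y) tuples.
    a = 0.0
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        a += x0 * y1 - x1 * y0
    return abs(a) / 2.0

def clip_polygon(subject, clipper):
    # Sutherland-Hodgman: clip `subject` against each edge of `clipper` in
    # turn.  `clipper` must be convex, vertices in counter-clockwise order.
    # An intersection vertex is only computed when a subject edge actually
    # crosses the current clipper edge, which is what makes this cheap.
    # Concave subjects may yield the zero-area degenerate sections
    # mentioned in the text.
    output = list(subject)
    m = len(clipper)
    for i in range(m):
        (cx0, cy0), (cx1, cy1) = clipper[i], clipper[(i + 1) % m]
        ex, ey = cx1 - cx0, cy1 - cy0  # directed clipper edge

        def inside(p):
            # Left of (or on) the directed edge, i.e. on the interior side.
            return ex * (p[1] - cy0) - ey * (p[0] - cx0) >= 0.0

        def cross_point(p, q):
            # Intersection of segment p-q with the clipper edge's line.
            dx, dy = q[0] - p[0], q[1] - p[1]
            t = ((p[1] - cy0) * ex - (p[0] - cx0) * ey) / (dx * ey - dy * ex)
            return (p[0] + t * dx, p[1] + t * dy)

        current, output = output, []
        if not current:
            break
        s = current[-1]
        for e in current:
            if inside(e):
                if not inside(s):
                    output.append(cross_point(s, e))
                output.append(e)
            elif inside(s):
                output.append(cross_point(s, e))
            s = e
    return output
```

Each importance sample is then proportional to `shoelace_area(clip_polygon(target, photosite_aperture))`; the general Greiner-Hormann path only has to take over when the photosite aperture itself is concave.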
At any rate, the original Greiner-Hormann algorithm is fairly straightforward to implement, but adding Kim's enhancements for handling the degenerate cases required a substantial amount of effort (read: hours of debugging). The Greiner-Hormann algorithm is quite elegant, and I can highly recommend reading the original paper.<br /><br />Internally, mtf_generate_rectangle classifies polygons as being either convex or concave. If the photosite aperture is convex, the Sutherland-Hodgman algorithm is employed during the sampling process, otherwise it will fall back to Kim's version of the Greiner-Hormann algorithm. The performance impact is significant: concave photosite polygons render 20 times more slowly than square photosite polygons when rendering complex scenes. For simpler scenes, the concave photosite polygons will render about four times more slowly than squares; circular ones (well, 60-sided regular polygons, actually) will render about two times more slowly than squares.<br /><br />Part of this difference is due to the asymptotic complexity of the two clipping algorithms, expressed in terms of the number of intersection point calculations: the Sutherland-Hodgman algorithm has a complexity of O(c*n), where "c" is the number of crossing edges, i.e., c &lt;&lt; m, where "m" is the number of vertices in the clippee polygon, and "n" is the number of edges in the clipper. The Greiner-Hormann algorithm has a complexity of O(n*m); on top of that, each intersection vertex requires a significant amount of additional processing.<br /><br /><h3>Divide and conquer</h3>To offset some of the additional complexity of allowing arbitrary target polygons to be specified, a quadtree spatial index was introduced. 
The quadtree does for 2D searches what a binary tree does for linear searches: it reduces the number of operations from O(n) to O(log(n)).<br /><br />First up, each polygon is wrapped with an axis-aligned bounding box (AABB), which is just an educated-sounding way of saying that the minimum and maximum values of the vertices are recorded for both x and y dimensions of a polygon. This step already offers us a tremendous potential speed-up, because two polygons can only intersect if their bounds overlap. The bounds check is reduced to four comparisons, which can be implemented using short-circuit boolean operators, so non-overlapping bounds can be detected with as little as a single comparison in the best case.<br /><br />Once each individual polygon has a bounding box, we can start to aggregate them into a scene (internally, mtf_generate_rectangle treats this as a multipolygon with its own bounding box). The quadtree algorithm starts with this global bounding box, and splits it into four quadrants. The bounding box of each quadrant is taken as a clipping polygon, clipping all the polygons to fit exactly inside the quadrant.<br /><br />After one iteration, we have potentially reduced the number of intersection tests by a factor of four. For example, if we determine (using the bounding boxes) that the photosite aperture polygon falls entirely inside the top-right quadrant, then we only have to process the (clipped) polygons found inside that quadrant. If a quadrant is empty, we can simply skip it; otherwise, we can shrink the bounding box to fit tightly around the remaining clipped polygons (see figure 2 below).<br /><br />The next logical step is to apply this quadrant subdivision recursively to each of the original quadrants. We can keep on recursively subdividing the quadrants until a manageable number of polygons (or more correctly, a manageable number of polygon edges) is reached in each quadrant subtree. 
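The bounding-box rejection test really is as cheap as advertised; a hypothetical sketch (illustrative names, not MTF Mapper's actual code):

```python
class AABB:
    # Axis-aligned bounding box of a polygon: just the min/max of the
    # vertex coordinates in the x and y dimensions.
    def __init__(self, poly):
        xs = [p[0] for p in poly]
        ys = [p[1] for p in poly]
        self.minx, self.maxx = min(xs), max(xs)
        self.miny, self.maxy = min(ys), max(ys)

    def overlaps(self, other):
        # Four comparisons; short-circuit evaluation means the common
        # "no overlap" case can exit after the very first failed test.
        return (self.minx <= other.maxx and other.minx <= self.maxx and
                self.miny <= other.maxy and other.miny <= self.maxy)
```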
We must balance the cost of further subdivision against the gains of reducing the number of edges in each subdivided quadrant. Every time that we add another level to the quadtree we add four additional bounds checks --- eventually the cost of the bounds checks adds up.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-CBwLNE2Y0Xc/WOXxuKxnOvI/AAAAAAAABTA/GGLpgrKZLhsjrq8Pt_Jzo2ZCXBR0ew_RACLcB/s1600/quadtree_levels012_na.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://2.bp.blogspot.com/-CBwLNE2Y0Xc/WOXxuKxnOvI/AAAAAAAABTA/GGLpgrKZLhsjrq8Pt_Jzo2ZCXBR0ew_RACLcB/s1600/quadtree_levels012_na.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig 2: Levels 0 (green), 1 (blue) and 2 (magenta) of the Quadtree decomposition of the scene (light gray)</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-FtUfMBntbMs/WOXx6DjFLnI/AAAAAAAABTE/5kbYTAEZ33Awv85qSMt7TKjPhgnO0La0wCLcB/s1600/quadtree_levels256_na.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://2.bp.blogspot.com/-FtUfMBntbMs/WOXx6DjFLnI/AAAAAAAABTE/5kbYTAEZ33Awv85qSMt7TKjPhgnO0La0wCLcB/s1600/quadtree_levels256_na.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig 3: Levels 2 (green), 5 (blue), and 6 (magenta) of the Quadtree decomposition</td></tr></tbody></table><br />If the number of quadtree levels is sufficient, we end up "covering" the target polygons with rectangular tiles (the bounding boxes of the quadrants), providing a coarse approximation of the target polygon shape (see figure 3 
above). Every sampling location outside these bounding boxes can be discarded early on, so rendering time is not really influenced by the size of the "background" any more.<br /><br />If the quadtree is well-balanced, the amount of work (number of actual polygon-polygon intersection tests) can be kept almost constant throughout the entire rendered image, regardless of the number of vertices in the scene. I have confirmed this with some quick-and-dirty tests: halving the number of vertices in a scene (by using a polygon simplification method) has almost no impact on rendering time.<br /><br /><h2>Some examples</h2>Enough talk. Time for some images:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-826r27EYuw8/UrFaXLf_MdI/AAAAAAAAAoE/i9hZdjB1Bo0/s1600/usaf1951_p473_noolpf_f4.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://4.bp.blogspot.com/-826r27EYuw8/UrFaXLf_MdI/AAAAAAAAAoE/i9hZdjB1Bo0/s400/usaf1951_p473_noolpf_f4.png" width="352" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">USAF1951-style chart with 4 levels</td></tr></tbody></table><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/--OGdOekuxxA/UrFar_4LrQI/AAAAAAAAAoM/lTNbzIsvIuU/s1600/siemens_p473_noolpf_f4.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" 
src="https://4.bp.blogspot.com/--OGdOekuxxA/UrFar_4LrQI/AAAAAAAAAoM/lTNbzIsvIuU/s400/siemens_p473_noolpf_f4.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Siemens star chart</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-9_sYA_Iy8kw/UrFa3bl-GBI/AAAAAAAAAoQ/xO7YH5qBLO8/s1600/pinch_p473_noolpf_f4.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="373" src="https://3.bp.blogspot.com/-9_sYA_Iy8kw/UrFa3bl-GBI/AAAAAAAAAoQ/xO7YH5qBLO8/s400/pinch_p473_noolpf_f4.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Pinch chart (click for full size)</td></tr></tbody></table>All the charts above were rendered with a photosite pitch of 4.73 micron, using the Airy + square photosite (100% fill-factor) model at an aperture of f/4, simulating light at 550 nm wavelength. The command for generating the last chart would look something like this:<br /><br /><span style="font-family: "courier new" , "courier" , monospace;">mtf_generate_rectangle.exe --pixel-pitch 4.73 --aperture 4 -p airy-box -n 0 --target-poly pinch.txt </span><br /><br />where "pinch.txt" is the file specifying the polygon geometry (which happens to be in the same folder as <span style="font-family: "courier new" , "courier" , monospace;">mtf_generate_rectangle.exe</span> in my example above). 
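To make the geometry file format concrete, here is a small made-up example (not the contents of the actual pinch.txt) describing two separate triangles, assuming whitespace/newline-separated values: a vertex count, followed by that many coordinate pairs, repeated for each polygon:

```
3
0 0
100 0
50 80
3
120 0
220 0
170 80
```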
These target polygon geometry files are included in the MTF Mapper package from version 0.4.16 onwards (their names are <span style="font-family: "courier new" , "courier" , monospace;">usaf1951r.txt</span>, <span style="font-family: "courier new" , "courier" , monospace;">siemens.txt</span>, and <span style="font-family: "courier new" , "courier" , monospace;">pinch.txt</span>.)<br /><br /><h3>OLPF demonstration</h3>The "pinch chart" provides a very clear demonstration of the effects of the Optical Low-Pass Filter (OLPF) found on many DSLRs (actually, most, depending on when you read this).<br /><br />Before I present some images, first a note about effective chart magnification. Most real-world test charts are printed at a known size, i.e., you can say with confidence that a particular target (say, a USAF1951 pattern) has a known physical size, and thus measures physical resolution expressed in line pairs per millimetre (lp/mm). It is relatively straightforward to extend this to synthetic images generated with mtf_generate_rectangle by carefully scaling your target polygon dimensions. For the time being, though, I prefer to fall back to a pixel-centric view of the universe. In other words, I chose to specify my target polygon geometry in terms of pixel dimensions. This was mostly motivated by my desire to illustrate specific effects (aliasing, etc.) visually. 
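For reference, converting between the pixel-centric view and physical units is a one-liner: a frequency in cycles per pixel divided by the pixel pitch (in mm) gives line pairs per millimetre. A quick sketch (hypothetical helper, not part of MTF Mapper):

```python
def cycles_per_pixel_to_lpmm(f_cyc_per_pix, pitch_um):
    # lp/mm = (cycles/pixel) / (pixel pitch in mm)
    return f_cyc_per_pix / (pitch_um * 1e-3)

# The Nyquist frequency (0.5 cycles/pixel) at the 4.73 micron pitch used
# throughout this post:
nyquist_lpmm = cycles_per_pixel_to_lpmm(0.5, 4.73)  # roughly 105.7 lp/mm
```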
Just keep that in mind: the images I present below are not intended to express resolution in physical units; they are pretty pictures.<br /><br />With that out of the way, here is a sample of a hypothetical D7000 without an OLPF --- this could be something like the Pentax K-5 IIs.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-9_sYA_Iy8kw/UrFa3bl-GBI/AAAAAAAAAoU/TutyjgWwD-g/s1600/pinch_p473_noolpf_f4.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="373" src="https://2.bp.blogspot.com/-9_sYA_Iy8kw/UrFa3bl-GBI/AAAAAAAAAoU/TutyjgWwD-g/s400/pinch_p473_noolpf_f4.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Hypothetical D7000 without OLPF, f/4</td></tr></tbody></table><br />And here is the same simulated image, but this time using an OLPF, i.e., this should be quite close to the real D7000:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-h4w88WqybFE/UrFeK0peNHI/AAAAAAAAAoc/H6L-VO0-gPk/s1600/pinch_p473_olpf_f4.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="373" src="https://3.bp.blogspot.com/-h4w88WqybFE/UrFeK0peNHI/AAAAAAAAAoc/H6L-VO0-gPk/s400/pinch_p473_olpf_f4.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">D7000 (with OLPF), f/4</td></tr></tbody></table><br />I repeated the simulations using an f/8 aperture:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a 
href="http://3.bp.blogspot.com/-XeME2ya51sA/UrFfd-AJemI/AAAAAAAAAoo/n3A-4vNRU-w/s1600/pinch_p473_noolpf_f8.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="373" src="https://3.bp.blogspot.com/-XeME2ya51sA/UrFfd-AJemI/AAAAAAAAAoo/n3A-4vNRU-w/s400/pinch_p473_noolpf_f8.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Hypothetical D7000 without OLPF, f/8</td></tr></tbody></table><br /><div class="separator" style="clear: both; text-align: center;"></div><br />and again, the D7000 (with OLPF) simulated at f/8: <br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-1j1d_rZUbEc/UrFfq4d1OgI/AAAAAAAAAow/5B8FOFrtzyY/s1600/pinch_p473_olpf_f8.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="373" src="https://1.bp.blogspot.com/-1j1d_rZUbEc/UrFfq4d1OgI/AAAAAAAAAow/5B8FOFrtzyY/s400/pinch_p473_olpf_f8.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">D7000 (with OLPF), f/8</td><td class="tr-caption" style="text-align: center;"><br /></td></tr></tbody></table><br />Here is a crop comparing the interesting part of the chart across these four configurations:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-d6m__kXYk_E/UrFmXZuwPTI/AAAAAAAAApA/1Murp5JCQWM/s1600/closeup_rc.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-d6m__kXYk_E/UrFmXZuwPTI/AAAAAAAAApA/1Murp5JCQWM/s1600/closeup_rc.png" /></a></div>First up, notice the false detail in the "f/4, no OLPF" panel, occurring to the right of the scale bar tick labelled "1". 
This is a good example of aliasing --- compare that to the "f/4, OLPF" panel, which just fades to gray mush to the right of its tick mark. In the bottom two panels we can see the situation is significantly improved at f/8, where diffraction suppresses most of the objectionable aliasing.<br /><br /><i>Posted by Frans van den Bergh, 2013-12-06.</i><br /><br /><h2>Simulating microlenses: kicking it up a notch</h2><h3>Preamble</h3>My <a href="http://mtfmapper.blogspot.com/2013/10/simulating-microlenses-first-take.html">first stab</a> at simulating microlenses made some strong assumptions regarding the effective shape of the photosite aperture. Reader IlliasG subsequently pointed me to an illustration depicting a more realistic photosite aperture shape --- which happens to be a concave polygon.<br /><br />At first, it might seem trivial to use this photosite aperture shape in the usual importance sampling algorithm employed by <span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span>. It turns out to be a bit more involved than that ....<br /><br />The <span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span><span style="font-family: inherit;"> tool relied on an implementation of the Sutherland-Hodgman polygon clipping routine to compute the area of the intersection of the photosite aperture and the target polygon (which is typically a rectangle). The Sutherland-Hodgman algorithm is simple to implement, and reasonably efficient, but it requires the clipping polygon to be convex, so I required a new polygon clipping routine </span>to allow concave/concave polygon intersections (astute readers may spot that I could simply exchange the clipping/clippee polygons, but I wanted concave/concave intersections anyway). 
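Deciding whether a given aperture polygon needs the general routine at all comes down to a convexity test; a hypothetical sketch (not MTF Mapper's actual code): a simple polygon is convex exactly when the cross products of all consecutive edge pairs share one sign.

```python
def is_convex(poly):
    # A simple polygon is convex iff the cross products of all pairs of
    # consecutive edges have the same sign (collinear, zero-cross edge
    # pairs are ignored).
    n = len(poly)
    sign = 0
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        cx, cy = poly[(i + 2) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False  # a reflex vertex: the polygon is concave
    return True
```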
After some reading, it seemed that the Greiner-Hormann algorithm had a fairly simple implementation ...<br /><br />... but it did not handle the degenerate cases (vertices of clipping/clippee polygons coinciding, or a vertex falling on the edge of the other polygon). Kim's extension solves that problem, but it took me a while to implement.<br /><br /><h3>Effective photosite aperture (with microlenses)</h3>The Suede (on dpreview forums) posted a <a href="http://www.dpreview.com/forums/post/51904462">diagram</a> of the effective aperture shape after taking the microlenses into account. I thumb-sucked an analytical form for this shape, which looks like this (my shape in cyan overlaid on The Suede's image):<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-FiY-le_cjOU/UqF4aT5wO3I/AAAAAAAAAmo/y0U9_PiU-4w/s1600/suede.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-FiY-le_cjOU/UqF4aT5wO3I/AAAAAAAAAmo/y0U9_PiU-4w/s1600/suede.png" /></a></div>The fit of my thumb-sucked approximation is not perfect, but I declare it to be good enough for government work. I decided to call this the <span style="font-family: "Courier New", Courier, monospace;">rounded-square</span> photosite aperture (that is the identifier used by <span style="font-family: "Courier New", Courier, monospace;">mtf_generate_rectangle</span>).<br /><br />I am not sure how to scale this shape relative to the 100% fill-factor square. Intuitively, it seems that the shape should remain inscribed within the square photosite, or otherwise the microlens would be collecting light from the neighbouring photosites too. This type of scaling (as illustrated above) still leaves the corners of the photosite somewhat darkened, which is what we were aiming for. Incidentally, this scaling only gives me a fill-factor of ~89.5%. 
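As an aside, it is easy to get a feel for where such rounded shapes land between a circle (fill factor π/4 ≈ 78.5%) and a square (100%). The snippet below is purely illustrative, not the actual rounded-square shape used by mtf_generate_rectangle: it numerically integrates a superellipse |x|⁴ + |y|⁴ ≤ 1 inscribed in the pixel, which gives a fill factor of roughly 93%, in the same ballpark as the ~89.5% quoted above.

```python
def superellipse_fill_factor(p=4.0, steps=200000):
    # Fraction of the square [-1, 1]^2 covered by |x|^p + |y|^p <= 1,
    # via a midpoint Riemann sum: for each x, |y| <= (1 - |x|^p)^(1/p).
    total = 0.0
    dx = 2.0 / steps
    for i in range(steps):
        x = -1.0 + (i + 0.5) * dx
        total += 2.0 * (1.0 - abs(x) ** p) ** (1.0 / p) * dx
    return total / 4.0  # the bounding square has area 4
```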
I guess the "100% fill-factor" claim sometimes seen in connection with microlenses applies to equivalent light-gathering ability, rather than geometric area.<br /><br /><h3>Results</h3><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-4A3rSrYe61s/UqGAV2RD1FI/AAAAAAAAAm4/IAyS23Y4GHA/s1600/box_a00_default_ff.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://1.bp.blogspot.com/-4A3rSrYe61s/UqGAV2RD1FI/AAAAAAAAAm4/IAyS23Y4GHA/s400/box_a00_default_ff.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">MTF curves for 0-degree step edge</td></tr></tbody></table><h3></h3><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-L5RJjNBhQ5E/UqGAjTPhR4I/AAAAAAAAAnA/JIK7Je_a0V8/s1600/box_a45_default_ff.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://4.bp.blogspot.com/-L5RJjNBhQ5E/UqGAjTPhR4I/AAAAAAAAAnA/JIK7Je_a0V8/s400/box_a45_default_ff.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">MTF curves for 45-degree step edge</td></tr></tbody></table><br />The two plots above illustrate the MTF curves of three possible photosite aperture shapes, combined with an Airy PSF (aperture=f/5.6, photosite pitch=4.73 micron, lambda=550 nm). The first plot is obtained by orienting the step edge at 0 degrees, i.e., our MTF cross-section is along the x-axis of the photosite. 
In the second plot, the step edge was oriented at 45 degrees relative to the photosite, i.e., it represents the diagonal across the photosite.<br />Both plots include the MTF curves for an inscribed circle aperture, for comparison. Note that the fill-factors have not been normalized, that is, each aperture appears at its native size, which maximizes aperture area without going outside the square photosite's bounds.<br /><br />Purely based on its fill factor of ~90%, we would expect the first zero of the rounded-square aperture's MTF curve to land between those of the 100% fill-factor square and the ~78% fill-factor circle, which is clearly visible in the first plot. In fact, the rounded-square aperture's MTF curve appears to be a blend of the square and circle curves, which makes sense.<br /><br />The second plot above shows that the rounded-square aperture still exhibits some anisotropic behaviour, but that the effect is less pronounced than that observed with a square photosite (see <a href="http://mtfmapper.blogspot.com/2013/10/simulating-microlenses-first-take.html">this article</a> for more details on anisotropic behaviour); this also seems logical given the shape.<br /><br /><h3>In the real world (well, simulated real world, at least)</h3>The MTF curves show some small but measurable differences between the 100% fill-factor square photosite aperture and the ~90% rounded-square photosite aperture response to a step edge. 
But can you see these differences in an image?<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-EaEw0rAM9kI/UqHEYK3oB9I/AAAAAAAAAng/wT6G-sb0XsQ/s1600/render_usaf_square.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://2.bp.blogspot.com/-EaEw0rAM9kI/UqHEYK3oB9I/AAAAAAAAAng/wT6G-sb0XsQ/s400/render_usaf_square.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">USAF1951-style chart, f/1.4, 100% fill-factor square photosite aperture</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-xcIre4BSKRM/UqHEpAj02RI/AAAAAAAAAno/9f2Hlr-w-S4/s1600/render_usaf_rsquare.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://2.bp.blogspot.com/-xcIre4BSKRM/UqHEpAj02RI/AAAAAAAAAno/9f2Hlr-w-S4/s400/render_usaf_rsquare.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">USAF1951-style chart, f/1.4, ~90% fill-factor rounded square photosite aperture</td></tr></tbody></table><br /><br />Well ... not really (click on the image to see full-size version). I even opened up the aperture to f/1.4 to accentuate the differences in the photosite apertures. 
Just to show you <i>something,</i> here is a rendering using a highly astigmatic photosite aperture (a rectangle that is 0.01 times the photosite pitch in height, but one times the pitch wide):<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-fKBVzvJLTSI/UqHE2xNo3PI/AAAAAAAAAnw/WSk-n8aWAv0/s1600/render_usaf_astig.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://2.bp.blogspot.com/-fKBVzvJLTSI/UqHE2xNo3PI/AAAAAAAAAnw/WSk-n8aWAv0/s400/render_usaf_astig.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">USAF1951-style chart, f/1.4, 2% fill-factor thin rectangular photosite aperture</td></tr></tbody></table>Note that this is basically point-sampling in the vertical direction, but box-sampling in the horizontal direction. This shows up as rather severe aliasing (jaggies) in the vertical direction.<br /><br /><h3>In the real real world</h3>So how do these simulated MTF curves compare to actual measured MTF curves? In <a href="http://mtfmapper.blogspot.com/2012/06/nikon-d40-and-d7000-aa-filter-mtf.html">a previous post</a> I described the method I used to capture the MTF of a Nikon D40 camera with a sharp lens set to an aperture of f/4. Here is a comparison of the simulated MTF curves to the empirically measured MTF curve. 
<br /><h3></h3><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-bQjG1G7qo9U/UqGejet_rzI/AAAAAAAAAnQ/IUUEgdGHTdI/s1600/d40_comparison.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://3.bp.blogspot.com/-bQjG1G7qo9U/UqGejet_rzI/AAAAAAAAAnQ/IUUEgdGHTdI/s400/d40_comparison.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Nikon D40 at f/4</td></tr></tbody></table>At first glance it might seem that the 100% fill-factor square photosite aperture simulation is marginally closer to the measured curve, but keep in mind that these simulations were both performed with an OLPF split factor of 0.375. This value of 0.375 was determined by trial and error using the 100% fill-factor photosite simulation --- it is likely that the optimal OLPF split factor for the rounded-square photosite aperture model is different. In fact, I would expect a slightly larger value, say around 0.38 or 0.385 to perform better, purely on the difference in fill factor (100% vs ~90%) between the two simulations.<br /><br />So yes, you could say I am lazy for not optimizing the OLPF split factor for the rounded-square photosite aperture model right now, but I do not feel comfortable doing any sort of quantitative comparison between the models with only one empirical sample at hand (one measured D40 MTF curve). 
Until such time as I have sufficient data to perform a proper optimization and evaluation of the models, I will leave it at the following statement: it certainly appears that the rounded-square model is a viable approximation of the photosite aperture of the D40.<br /><br /><i>Posted by Frans van den Bergh, 2013-10-24.</i><br /><br /><h2>Simulating microlenses, first take</h2>Up to now, <span style="font-family: "courier new" , "courier" , monospace;">mtf_generate_rectangle</span> assumed that the simulated sensor had square pixels with a 100% fill factor. This assumption does not reflect reality all that well, but it does simplify the derivation of analytical MTF curves for certain cases.<br /><br />The effect of fill factor on a square photosite (assuming that the active part of the photosite is just a smaller square centred in the outer square representing the photosite) is fairly straightforward: we are keeping the sampling rate the same, since the photosite pitch is unaffected, but we are reducing the size of the square being convolved with the incoming image. As a result, we would expect a lower fill factor to yield a better MTF curve, i.e., contrast will be higher than the 100% fill factor baseline. But it is still a good idea to test this, just to be sure ...<br /><br /><h2>Implementing variable fill factors</h2>Using the importance sampling algorithm described <a href="http://mtfmapper.blogspot.com/2012/11/importance-sampling-how-to-simulate.html">here</a>, all we have to do is replace the square polygon representing the active area of the photosite with a smaller one, and we are done. The resulting PSF is thus the convolution of the photosite aperture and the Airy function (representing diffraction through the lens aperture). 
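In code, that "replace the square polygon with a smaller one" step is just a scaling; a hypothetical sketch (illustrative helper, not the actual implementation):

```python
def photosite_aperture(pitch, fill_factor):
    # Centred square active area covering `fill_factor` of the photosite:
    # side length sqrt(fill_factor) * pitch, so area = fill_factor * pitch^2,
    # while the sampling grid (the pitch) is left unchanged.
    h = 0.5 * pitch * fill_factor ** 0.5
    return [(-h, -h), (h, -h), (h, h), (-h, h)]
```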
Unless otherwise stated, results were obtained at a wavelength of 550 nm, a photosite pitch of 4.73 micron, and an aperture of f/8, using a simulated system without an optical low-pass filter (OLPF), which appears to be all the rage lately.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-aZbFy1KO86Q/Umjavs0vS-I/AAAAAAAAAks/JpQUXOS8sKE/s1600/box_ff100vsff50.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://2.bp.blogspot.com/-aZbFy1KO86Q/Umjavs0vS-I/AAAAAAAAAks/JpQUXOS8sKE/s400/box_ff100vsff50.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">MTF curve of Airy + square photosite PSF, at 100% and 50% fill factors</td></tr></tbody></table>This result confirms our suspicions: if we decrease the fill factor by shrinking the square photosite aperture, the cut-off frequency of the low-pass sinc() filter is increased correspondingly (see <a href="http://mtfmapper.blogspot.com/2012/06/diffraction-and-box-filters.html">here</a> for an overview of diffraction and box functions). The MTF50 of the 50% fill factor sensor is ≈0.38, compared to an MTF50 of ≈0.34 for the 100% fill factor case.<br /><br />So what are the downsides to using a smaller fill factor? Well, we are allowing substantially more contrast through above the Nyquist frequency (0.5 cycles per pixel), which will definitely increase the chances of aliasing artifacts (moiré, and/or "jaggies"). In the limit, we can imagine the fill factor approaching zero, which gives us a point-sampler, which will result in severe aliasing artifacts, such as the typical jagged edges we see when we render a polygon by taking only one sample at the centre of each pixel.<br /><br />There is another effect that photographers care deeply about: noise. 
The relative magnitude of photon shot noise increases as the fill factor decreases: the number of photons collected is directly proportional to the active area of the photosite, but the shot noise only grows with the square root of the photon count, so a smaller active area yields a lower signal-to-noise ratio. The simulation above was conducted with zero noise, mostly to illustrate the pure geometric effects of the fill factor.<br /><br />Speaking of geometric effects, a slight diversion into the interaction between edge orientation and photosite aperture shape is in order.<br /><br /><h3>Square photosites are anisotropic</h3>It is rather important to recall that an MTF curve is only a 1D cross-section of the true 2D MTF. If the 2D MTF is radially symmetric (e.g., the Airy MTF due to a circular lens aperture), then the orientation of our 1D cross-section is irrelevant.<br /><br />The 2D sinc() function representing the MTF of a square aperture is not radially symmetric, hence the 1D MTF curve is only representative of the specific orientation that was chosen. The results in this post were all derived using a combined Airy and photosite aperture simulation; since the Airy MTF is radially symmetric, and the photosite aperture MTF is not, we can expect the combined system MTF to lack perfect radial symmetry. The question remains, though: is the combined MTF symmetric enough to ignore this matter entirely?<br /><br />Feeling somewhat lazy today, I chose to evaluate this empirically, rather than deriving the analytical combined MTF at arbitrary orientations. 
Since we can directly simulate the edge spread function of a given PSF using <span style="font-family: "courier new" , "courier" , monospace;">mtf_generate_rectangle</span>, I decided to vary the orientation of the simulated step edge relative to the simulated photosite grid, which is equivalent to taking our 1D cross-section of the 2D MTF at the chosen orientation.<br /><br />Before we get to the results, first some predictions: We saw that the first zero of the sinc() low-pass filter of the square photosite aperture moved to a higher frequency when we decreased the fill factor. Intuitively, a wider photosite aperture produces stronger low-pass filtering. The length of the diagonal of a square is √2 × <i>side_length</i>, so we might expect a stronger low-pass filtering effect if the step edge is parallel to a diagonal of the square photosite aperture. And now the results ...<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-qN_7ZUWs0Uo/UmjoLuapmBI/AAAAAAAAAk8/icbvIM6EbUU/s1600/box_0vs45.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://2.bp.blogspot.com/-qN_7ZUWs0Uo/UmjoLuapmBI/AAAAAAAAAk8/icbvIM6EbUU/s400/box_0vs45.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">MTF curves of a square photosite (plus diffraction) at different edge orientations</td></tr></tbody></table>Notice that there is a minute difference: the 45-degree edge orientation produced a slightly <i>weaker</i> low-pass filtering effect!<br />Subtracting the 45-degree MTF curve from the 0-degree MTF curve gives us a better view of the difference:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td 
style="text-align: center;"><a href="http://2.bp.blogspot.com/-r_Mokh-ftUo/UmjocfCLXPI/AAAAAAAAAlE/IPKey0U7C1M/s1600/box_0vs45diff.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://2.bp.blogspot.com/-r_Mokh-ftUo/UmjocfCLXPI/AAAAAAAAAlE/IPKey0U7C1M/s400/box_0vs45diff.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">MTF difference between 0-degree edge and 45-degree edge</td></tr></tbody></table>The difference certainly appears to be structured, and not in the expected direction. Well, certainly not the direction that I expected.<br /><br />Fortunately the explanation is relatively simple. Consider the following diagram:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-enkKh-vaKoE/WOXyo7k9RMI/AAAAAAAABTM/-TgaoCPyKWk6iD81ROT9dNQow1TpNR3rwCLcB/s1600/square_integration_na.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="188" src="https://2.bp.blogspot.com/-enkKh-vaKoE/WOXyo7k9RMI/AAAAAAAABTM/-TgaoCPyKWk6iD81ROT9dNQow1TpNR3rwCLcB/s400/square_integration_na.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Representation of the area of the photosite (orange) covered by the step edge (blueish), for 0-degree and 45-degree edge orientations</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"></div>If <i>w </i>represents the side length of our square, then the left-hand diagram shows us that the area covered by the 0-degree step edge is simply <i>t </i>× <i>w</i> over the range 0 < <i>t </i>< <i>w</i>/2. 
The right-hand diagram illustrates that the area covered by the 45-degree step edge (bluish triangle) is <i>t × t</i>, over the range 0 < t < √0.5 × <i>w </i>(in both cases we only have to integrate up to the midpoint to study the behaviour in question). The areas covered by the step edge can be plotted as functions of <i>t</i>:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-3DoHAgq6qJY/Umj3DVJjx7I/AAAAAAAAAlw/XlRPEfqCcjI/s1600/square_integration_plot.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://4.bp.blogspot.com/-3DoHAgq6qJY/Umj3DVJjx7I/AAAAAAAAAlw/XlRPEfqCcjI/s400/square_integration_plot.png" width="400" /></a></div>We can see that although the 45-degree case starts out with a lead (the first part of the corner starts at roughly -0.2071 if we align them so that they reach an area of 0.5 simultaneously), the 0-degree case catches up near t=0.1. From that point onwards, the 0-degree step edge covers a larger part of the photosite aperture than the 45-degree step edge does. In practice, this means that although the 45-degree case is technically "wider", the 0-degree case presents a stronger low-pass filter. Keep in mind that on top of this rather small difference due to the anisotropy of the square photosite aperture, we are blending in the radially symmetric Airy MTF, which further suppresses the anisotropy.<br /><br />The size of this effect is minute, as can be seen in the MTF difference diagram above. The MTF50 values are ≈0.3407 and ≈0.342 for the 0-degree and the 45-degree cases, respectively. In conclusion, we see that the anisotropy of the square photosite aperture is mostly masked by the strong isotropy of the Airy MTF at f/8. 
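The crossover argument above is easy to verify numerically. The sketch below (standalone Python, not MTF Mapper code) grid-samples a unit-square photosite aperture, computes the covered fraction for both edge orientations as a function of the perpendicular edge depth t, aligns the two curves so that they reach an area of 0.5 simultaneously, and then locates the point where the 0-degree coverage overtakes the 45-degree coverage:

```python
import numpy as np

n = 1000
xs = (np.arange(n) + 0.5) / n
X, Y = np.meshgrid(xs, xs)  # grid sample of a unit-square aperture

def cov0(t):
    # fraction covered by a 0-degree edge at depth t from one side
    return np.mean(X < t)

def cov45(t):
    # fraction covered by a 45-degree edge at perpendicular depth t,
    # measured from a corner of the square
    return np.mean((X + Y) / np.sqrt(2.0) < t)

# both orientations cover half the aperture at these depths
t0_half, t45_half = 0.5, np.sqrt(0.5)
offset = t45_half - t0_half  # the ~0.2071 alignment shift mentioned above

# after aligning the curves at area 0.5, find the first depth at which
# the 0-degree coverage exceeds the 45-degree coverage
ts = np.arange(0.01, 0.49, 0.005)
diff = np.array([cov0(t) - cov45(t + offset) for t in ts])
crossover = ts[np.argmax(diff > 0)]
print(round(float(crossover), 3))
```

The crossover lands near t ≈ 0.09, matching the "catches up near t=0.1" observation from the plot.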
At larger apertures, the anisotropy is likely to be more apparent, but further analyses will be performed with a step edge orientation of 0 degrees only.<br /><br /><h2>Approximating microlenses</h2>It has been suggested that the microlenses alter the effective shape of the active area of a photosite. (For example, reader IlliasG contributed this info <a href="http://mtfmapper.blogspot.com/2012/06/nikon-d40-and-d7000-aa-filter-mtf.html">here</a>). A regular polygon approximating a circle seems to be a reasonable starting point for simulating more realistic microlenses. Similar to the fill factor implementation, this merely requires swapping out the polygon used to specify the geometry of the active part of the photosite, and performing importance sampling as usual. (If you can point me at a more accurate description of the effective shape of the combined microlens and photosite aperture, I would be happy to incorporate that into MTF Mapper).<br /><br />Before we look at the results, first a prediction: modelling the active area of the photosite as a circular disc, we should see a net decrease of the geometric fill factor, hence the low-pass filtering effect is expected to <i>decrease</i>.<i> </i><br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-d87eXVRnvC4/UmkAXzxkNuI/AAAAAAAAAmA/2hvgfYkaP_g/s1600/box_vs_ml.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://3.bp.blogspot.com/-d87eXVRnvC4/UmkAXzxkNuI/AAAAAAAAAmA/2hvgfYkaP_g/s400/box_vs_ml.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">MTF curve of square photosite aperture (1x1) versus circular photosite aperture (radius 1)</td></tr></tbody></table>No real surprises in these results. 
For a circular photosite aperture, I chose the circle <i>inscribed</i> in the square photosite, since this seemed more reasonable. Note that the fill factor of the circular photosite aperture is ≈78.4%, rather than the expected π/4 ≈ 78.54%, because I approximated the circle as a 60-sided regular polygon. So how much of the difference between the 100% fill factor square aperture and the 78% fill factor circular aperture is due directly to fill factor, and how much is due to the actual shape?<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-4soUKau7XgI/UmkTD4-zBZI/AAAAAAAAAmQ/4Sc0-fIxXdo/s1600/box_vs_ml_ff78_diff.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://1.bp.blogspot.com/-4soUKau7XgI/UmkTD4-zBZI/AAAAAAAAAmQ/4Sc0-fIxXdo/s400/box_vs_ml_ff78_diff.png" width="400" /></a></div>By subtracting the MTF curves as indicated in the legend of the plot above, we can see that, after matching the effective fill factor, the remaining differences are quite small. From the red dashed curve we can see that the circular (well, 60-sided regular polygon) photosite aperture behaves isotropically, whereas the 78% fill factor square photosite aperture still exhibits anisotropy (dashed blue curve).<br /><br /><br /><h2>Conclusion</h2>I have not performed sufficient experiments to make any inferences regarding behaviour at larger apertures, but at f/8 on a 4.73 micron pitch, it definitely appears as if the geometric fill factor of the photosite is responsible for the bulk of the difference between a 100% fill factor square photosite and a 78% fill factor inscribed circular photosite aperture.<br /><br />Once we match the effective fill factors, the differences between the square aperture and the circular aperture are of the same magnitude as the differences due to the anisotropy of the square aperture. 
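As an aside, the ≈78.4% effective fill factor of the 60-sided polygon quoted above follows directly from the area of a regular n-gon inscribed in a circle of radius w/2; a two-line check (standalone Python, not MTF Mapper code):

```python
import math

def inscribed_polygon_fill_factor(n_sides):
    # regular n-gon inscribed in a circle of radius 0.5 (the circle
    # inscribed in a unit-square photosite), as a fraction of the
    # unit-square photosite area
    r = 0.5
    return 0.5 * n_sides * r * r * math.sin(2.0 * math.pi / n_sides)

print(round(inscribed_polygon_fill_factor(60), 4))  # → 0.784
print(round(math.pi / 4.0, 4))                      # → 0.7854 (exact circle)
```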
At larger apertures, we should see more pronounced differences, but at f/8 the differences are not as significant as one might suspect.<br /><br />I would like to revisit my D40 experiment armed with the new fill factor and photosite geometry functionality in MTF Mapper. Stay tuned for that!<br /><br />MTF Mapper will include new options for controlling photosite aperture fill factor and shape from version 0.4.16 onwards, which should be released relatively shortly.<br /><br />Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com4tag:blogger.com,1999:blog-6555460465813582847.post-24292908591964079072013-10-11T02:30:00.002-07:002013-10-11T02:44:47.273-07:00How sharpness interacts with the accuracy of the slanted edge method<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-y9WvMR1_nXI/Ule9LlyD43I/AAAAAAAAAkQ/D2o9Iq4ktv0/s1600/test.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="240" src="http://4.bp.blogspot.com/-y9WvMR1_nXI/Ule9LlyD43I/AAAAAAAAAkQ/D2o9Iq4ktv0/s400/test.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">How MTF50 error varies with sharpness (click for a larger version)</td><td class="tr-caption" style="text-align: center;"><br /></td></tr></tbody></table>Just a brief post to show how the absolute error in MTF50 measurement increases with increasing MTF50 values. The chart above is a box plot of the MTF50 error at a range of MTF50 values.<br /><br />Before we jump into a discussion of the chart itself, I would like to quickly explain how these values were obtained. Inside the MTF Mapper package you will find a tool called <span style="font-family: "Courier New", Courier, monospace;">mtf_generate_rectangle.exe</span> (in the Windows distribution). 
This tool generates synthetic images comprising a black rectangular target on a white background, i.e., exactly like the blocks you see in typical test charts (e.g., Imatest charts). These synthetic images simulate a specified point spread function, optionally adding some realistic sensor noise to create a synthetic image that is quite close to that which you would be able to capture with your actual camera. Since the point spread function controls the resulting MTF50 value of the image, we can choose to generate an image with an exact, known MTF50 value. The chart above is thus obtained by generating a large number of synthetic images at each of the MTF50 levels indicated on the x-axis. The MTF50 error is just the difference between the known MTF50 value of a given synthetic image, and the actual MTF50 value measured by MTF Mapper on the same image. By generating a number of images at each MTF50 level, each image with a pseudo-random noise component that differs slightly from the other images at the same MTF50 level, we obtain the distribution of MTF Mapper's measurement error at the given MTF50 level. With that out of the way, what can we learn from the chart?<br /><br />The black bar in the centre of each red box is the median error, which stays fairly close to zero. This is good news, since it means that on average MTF Mapper measurements are unbiased.<br /><br />The red box itself gives an indication of the spread of the MTF50 measurement error. The most important message here is that the absolute MTF50 measurement error increases with the nominal MTF50 level. If you have a sharp lens, the absolute measurement error (in cycles per pixel, or line pairs per mm) will be greater than that of a soft lens. 
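A toy model illustrates the mechanism. Assume, purely for illustration, a Gaussian MTF curve (not the shape of a real system MTF): if every measured contrast value is perturbed by a small amount ε, the 50% crossing shifts by roughly ε divided by the local slope of the curve, and that slope becomes shallower as the nominal MTF50 increases. The sketch below (hypothetical standalone code, not MTF Mapper's implementation) shows the crossing shift growing in direct proportion to the nominal MTF50:

```python
import math

def crossing(mtf, lo=1e-9, hi=2.0):
    # bisect for the frequency at which mtf(f) falls to 0.5
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mtf(mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def mtf50_shift(f50, eps):
    # shift of the 50% crossing when every contrast value is raised by eps
    a = math.log(2.0) / f50 ** 2  # Gaussian MTF with the given MTF50
    base = crossing(lambda f: math.exp(-a * f * f))
    perturbed = crossing(lambda f: math.exp(-a * f * f) + eps)
    return perturbed - base

for f50 in (0.1, 0.2, 0.3):
    # the shift scales linearly with the nominal MTF50
    print(f50, round(mtf50_shift(f50, 0.01), 5))
```

The same contrast perturbation moves the measured MTF50 of the "sharp" (f50 = 0.3) system three times as far as that of the "soft" (f50 = 0.1) system.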
In my experience, a sharp lens will have an MTF50 value of about 0.25 cycles per pixel or higher when perfectly focused.<br /><br />If we divide the MTF50 error by the MTF50 level to obtain the relative error (e.g., the percentage error), we still see the same trend of increasing relative error with increasing MTF50 level. I did not include a plot of that, but MTF Mapper's measurement error remains below 5% at real-world noise levels all the way up to an MTF50 value of 0.5 cycles per pixel. You will never obtain MTF50 values that high from a normal DSLR. For a more realistic value of about 0.3 cycles per pixel (a really, really sharp lens), MTF Mapper's relative measurement error will remain below 2% at real-world noise levels.<br /><br />The bottom line: it is harder to obtain an accurate MTF50 estimate of a sharp lens than it is to do so for a soft lens. In reality, this means you have to evaluate more samples (images) for sharp lenses than for soft lenses.<br /><br /><h3>What about Imatest or DxOLabs measurements?</h3>I do not own a copy of either, so I could not test their software comprehensively using the same method. I can tell you that other freely available slanted edge implementations (e.g., Mitre) behave in exactly the same way as MTF Mapper did on the same synthetic images.<br /><br />Looking at the maths behind the slanted edge method, I would expect that all implementations should behave exactly like MTF Mapper in this regard, i.e., the measurement error increases with increasing MTF50 values. 
This follows directly from the shallower slope of the MTF curve of a sharp lens around the 50% contrast point, which means that the MTF50 value (the frequency at which the curve crosses 50% contrast) shifts further for small observation errors, such as those caused by sensor noise.<br /><br />DxO uses a different method of computing sharpness, but ultimately they end up evaluating the MTF curve as well, so their method is likely to be similarly affected by increasing sensitivity to sensor noise with increasing nominal sharpness.<br /><br /><h3>How to obtain your own copy of MTF Mapper</h3>MTF Mapper is a free-as-in-beer Open Source project, currently hosted on <a href="http://sourceforge.net/projects/mtfmapper/">Sourceforge.net</a>. You can download pre-built binaries for both Windows and Ubuntu Linux, as well as the source code if you like. Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com1