MTF Mapper blog, by Frans van den Bergh<br /><br /><h3>Automatic chart orientation estimation: validation experiment</h3>Posted 2017-02-10, last updated 2017-02-13.<br /><br />In my previous post I mentioned that it is rather important to ensure that your MTF Mapper test chart is parallel to your sensor (or that the chart is perpendicular to the camera's optical axis, which is almost the same thing), so that you do not confuse chart misalignment with a tilted lens element. I have added the functionality to automatically estimate the orientation of the MTF Mapper test chart relative to the camera using circular fiducials embedded in the test chart. Here is an early sample of the output, which nicely demonstrates what I am talking about:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-1AtCBj1zZHU/WJ2lW4DrPOI/AAAAAAAABN4/0_fFrDhkqgwJ-LaD8T0fTixpKc4xU8lxQCLcB/s1600/chart_orientation_sample.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="265" src="https://4.bp.blogspot.com/-1AtCBj1zZHU/WJ2lW4DrPOI/AAAAAAAABN4/0_fFrDhkqgwJ-LaD8T0fTixpKc4xU8lxQCLcB/s400/chart_orientation_sample.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 1: Sample output of chart orientation estimation</td></tr></tbody></table>Figure 1 shows an example of the MTF Mapper "lensprofile" chart type, with the new embedded circular fiducials (they are a bit like 2D circular bar codes). 
Notice that the actual photo of the chart is rendered in black-and-white; everything that appears in colour was drawn in by MTF Mapper.<br />There is an orange plus-shaped coordinate origin marker (in the centre of the chart), as well as a reticle (the red circle with the four triangles) to indicate where the camera is aimed. Lastly, we have the three orientation indicators in red, green and blue, showing us the three Tait-Bryan angles: Roll, Pitch and Yaw.<br /><br />But how do I know that the angles reported by MTF Mapper are accurate?<br /><br /><h3>The set-up</h3><div>I do not have access to any actual optics lab hardware, but I do have some machinist tools. Fortunately, being able to ensure that things are flat, parallel or perpendicular is a fairly important part of machining, so this might just work. First I have to ensure that I have a sturdy device for mounting my camera; in Figure 2 you can see the hefty steel block that serves as the base of my camera mount.</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-rBPMcRpey44/WJ2osAftkLI/AAAAAAAABOE/dgrXaRXxrk8z27H_RcG7kZAwqKoC54oggCLcB/s1600/DSC_3468_overview.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="363" src="https://2.bp.blogspot.com/-rBPMcRpey44/WJ2osAftkLI/AAAAAAAABOE/dgrXaRXxrk8z27H_RcG7kZAwqKoC54oggCLcB/s400/DSC_3468_overview.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 2: Overview of my set-up</td></tr></tbody></table><div>I machined the steel block on a lathe to produce a "true" block, meaning that the two large faces of the shiny steel block are parallel, and that those two large faces are also perpendicular to the rear face on which the steel block is standing in the photo. 
The large black block in Figure 2 is a granite surface plate; this one is flat to something ridiculous like 3.5 micron maximum deviation over its entire surface. The instrument with the clock face is a dial test indicator; this one has a resolution of 2 micron per division. It is used to accurately measure small relative displacements through the pivoting action of the lever you can see in contact with the lens mount flange of the camera body. </div><div><br /></div><div>Using this dial test indicator, surface plate and surface gauge, I first checked that the two large faces of the steel block were parallel: they were parallel to within about 4 micron. Next, I stood up the block on its rear face (bottom face in Figure 2), and measured the perpendicularity. The description of that method is a bit outside the scope of this post, but the answer is what matters: near the top of the steel block the deviation from perpendicularity was also about 4 micron. The result of all this fussing with parallelism and perpendicularity is that I know (because I measured it) that my camera mounting block can be flipped through 90 degrees, either by placing it on the large face, with the camera pointing horizontally, or by standing it up, with the camera pointing at the ceiling.</div><div><br /></div><div>That was the easiest part of the job. Now I had to align my camera mount so that the actual mounting flange was parallel to the granite surface plate. 
</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-dZQON7Sbmf4/WJ2uhvhzMTI/AAAAAAAABOU/cEhOwNIHWNMLqxuHPl9x6QcSF9E3riidQCLcB/s1600/DSC_3465_4pt1.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="311" src="https://2.bp.blogspot.com/-dZQON7Sbmf4/WJ2uhvhzMTI/AAAAAAAABOU/cEhOwNIHWNMLqxuHPl9x6QcSF9E3riidQCLcB/s400/DSC_3465_4pt1.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 3: Still busy tweaking the mounting flange parallel to the surface plate</td></tr></tbody></table><div>The idea is that you keep on adjusting the camera (bumping it with the tripod screw partially tightened, or adding shims) until the dial test indicator reads almost zero at four points, as illustrated in Figures 2 and 3. Eventually I got it parallel to the surface plate to within 10 micron, and called it good.</div><div><br /></div><div>This means that when I flip the steel block into its horizontal position (see Figure 4) the lens mount flange is perpendicular to the surface plate with a reasonably high degree of accuracy. 
Eventually, I will arrange my test chart in a similar fashion, but bear with me while I go through the process.</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-uHTBX01IuSw/WJ2wNvNAT8I/AAAAAAAABOg/-ZFMCC8jtnQDIa-HVDUY26DhyemGth6CwCLcB/s1600/DSC_3473_level.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="225" src="https://4.bp.blogspot.com/-uHTBX01IuSw/WJ2wNvNAT8I/AAAAAAAABOg/-ZFMCC8jtnQDIa-HVDUY26DhyemGth6CwCLcB/s400/DSC_3473_level.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 4: Using a precision level to ensure my two reference surfaces are parallel</td></tr></tbody></table>In Figure 4 you can see more of my set-up. The camera is close to its final position, and you can see a precision level placed on the granite surface plate just in front of the camera itself. That spirit level measures down to a one-division movement of the bubble for each 20 micron height change at a distance of one metre, or 0.0011459 decimal degrees if you prefer. I leveled the granite surface plate in both directions. Next, I placed a rotary table about 1 metre from the camera --- you can see it to the left in Figure 4. The rotary table is fairly heavy (always a good thing), quite flat, and will later be used to rotate the test chart. The rotary table was shimmed until it too was level in both directions.<br /><div><br /></div><div>The logic is as follows: I cannot directly measure if the rotary table's surface is parallel with the granite surface plate, but I can ensure that both of them are level, which is going to ensure that their surfaces are parallel to within the tolerances that I am working to here. This means that I know that my camera lens mount is perpendicular to the rotary table's surface. 
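As an aside, the level sensitivity figure quoted above (0.0011459 degrees per division) is easy to verify; a quick Python sketch:

```python
import math

# One division of the precision level corresponds to a 20 micron height
# change over a distance of 1 metre; convert that slope to an angle.
rise = 20e-6  # metres
run = 1.0     # metres
angle_deg = math.degrees(math.atan2(rise, run))
print(angle_deg)  # ~0.0011459 degrees per division
```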
All I now have to do is place my test chart so that it is perpendicular to the rotary table's surface, and I can be certain that my test chart is parallel to my camera's mounting flange. I aligned and shimmed my test chart until it was perpendicular to the rotary table top, using a precision square, resulting in the set-up shown in Figure 5.</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-ndM1yhyZSS4/WJ232xt9YFI/AAAAAAAABOw/UILPqpVoTk0pFWIuP3n6n8tXCQ4RaRUAgCLcB/s1600/DSC_3479_final_setup.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="268" src="https://4.bp.blogspot.com/-ndM1yhyZSS4/WJ232xt9YFI/AAAAAAAABOw/UILPqpVoTk0pFWIuP3n6n8tXCQ4RaRUAgCLcB/s400/DSC_3479_final_setup.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 5: overview of the final set-up. Note the obvious change in colour temperature relative to Figure 4. Yes, it took that long to get two surfaces shimmed level.</td></tr></tbody></table><div><br /><h3>One tiny little detail (or make that two)</h3></div><div>Astute readers may have picked up on two important details:</div><div><ol><li>I am assuming that my camera's lens mounting flange is parallel to the sensor. In theory, I could stick the dial test indicator into the camera and drag the stylus over the sensor itself to check, but I do actually use my camera to take photographs occasionally, so no sense in ruining it just yet. Not even in the name of science.</li><li>The entire process above only ensures that I have two planes (the test chart, and the camera's sensor) standing perpendicularly on a common plane. From the camera's point of view, this means there is no up/down tilt, but there may be any amount of left/right tilt between the sensor and the chart. 
This is not the end of the world, since my initial test will only involve the measurement of pitch (as illustrated in Figure 1).</li></ol><h3>The first measurements</h3></div><div><span style="color: #990000;">Note: Results updated on 13/02/2017 to reflect improvements in MTF Mapper code. New results are a bit more robust, i.e., lower standard deviations.</span><br /><span style="color: #990000;"><br /></span>From the set-up above, I know that my expected pitch angle should be zero. Or at least small. MTF Mapper appears to agree: the first measurement yielded a pitch angle of -0.163148 degrees, which is promising. Of course, if your software gives you the expected answer on the first try, you may not be quite done yet. More testing!</div><div><br /></div><div>I decided to shim the base of the plywood board that the test chart was mounted on. The board is 20 mm thick, so the 180 micron shim (0.18 mm) that I happened to have handy should give me a tilt of about 0.52 degrees. I also had a 350 micron (0.35 mm) shim nearby, which yields a 1 degree tilt. That gives me three test cases (~zero degrees, ~zero degrees plus 0.52 degree relative tilt, and ~zero degrees plus 1 degree relative tilt). I captured 10 shots at each setting, which produced the following results:</div><div><ol><li>Expected = 0 degrees. Measurements ranged from -0.163 degrees to -0.153 degrees, for a mean measurement of -0.1597 degrees and a standard deviation of 0.00286 degrees.</li><li>Expected = 0.52 degrees. Measurements ranged from 0.377 to 0.394 degrees, for a mean measurement of 0.3910 degrees with a standard deviation of 0.00509 degrees. Given that our zero measurement started at -0.16 degrees, the relative angle between the two test cases comes to 0.5507 degrees (compared to the expected 0.52 degrees).</li><li>Expected = 1.00 degrees. Measurements ranged from 0.814 to 0.828 degrees, for a mean measurement of 0.8210 degrees with a standard deviation of 0.00423 degrees. 
The tilt relative to the starting point is 0.9806 degrees (compared to the expected 1.00 degrees).</li></ol><div>I am calling that good enough for government work. It seems that there may have been a small residual error in my set-up, leading to the initial "zero" measurement coming in at -0.16 degrees instead, or perhaps there is another source of bias that I have not considered.</div></div><div><br /></div><h3>Compound angles</h3><div>Having established that the pitch angle measurement appears to be fairly close to the expected absolute angle, I set out to test the relative accuracy of yaw angle measurements. Since my set-up above does not establish an absolute zero for the yaw angle, I cheated a bit: I used MTF Mapper to bring the yaw angle close to zero by nudging the chart a bit, so I started from an estimated yaw angle of 0.67 degrees. At this setting, I zeroed my rotary table, which, as you can see in Figure 5 above, will rotate the test chart approximately around the vertical (y) axis to produce a desired (relative) yaw angle. At this point I got a bit lazy, and only captured 5 shots per setting, but I did rotate the chart to produce the sequence of relative yaw rotations in 0.5 degree increments. The mean values measured over each set of 5 shots were 0.673, 1.189, 1.685, 2.211, 2.717, and 3.157. If we subtract the initial 0.67 degrees (which represents our zero for relative measurements), then we get 0.000, 0.5165, 1.012, 1.538, 2.044, and 2.484, which seems pretty close to the expected multiples of 0.5.</div><div><br /></div><div>In the final position, I introduced the 0.18 mm shim to produce a pitch angle of 0.5 degrees. Over 5 shots a mean yaw angle of 3.132 degrees was measured (or 2.459 if we subtract out the zero angle of 0.67). I should have captured a few more shots, since at such small sample sizes it is hard to tell whether the added pitch angle has changed the yaw angle, or not. It is entirely possible that I moved the chart while inserting the shim. 
That is what you get with a shoddy experimental procedure, I guess. Next time I will have to machine a more positive mechanism for adjusting the chart position.</div><div><br /></div><h3>Discussion</h3><div>Note that MTF Mapper could only extract the chart orientation correctly if I provided the focal length of the lens explicitly. My <a href="http://mtfmapper.blogspot.co.za/2017/02/limitations-of-using-single-shot-planar.html" target="_blank">previous post</a> demonstrated why it appears to be impossible to estimate the focal length automatically when the test chart is so close to being parallel with the sensor. This is unfortunate, because it means that there is no way that MTF Mapper can estimate the chart orientation completely automatically --- some user-provided input is required.</div><div><br /></div><div>The good news is that it seems that MTF Mapper can actually estimate the chart orientation with sufficient accuracy to aid the alignment of the test chart. Both repeatability (worst-case spread) and relative error appear to be better than 0.05 degrees, or about three minutes of arc, which compares favourably with the claimed accuracy of Hasselblad's linear mirror unit. Keep in mind that I tested under reasonably good conditions (ISO 100, 1/200 s shutter speed, f/2.8), so my accuracy figures do not represent the worst-case scenario. Lastly, because of the limitations of my set-up, my absolute error was around 0.16 degrees, or 10 minutes of arc; it is possible that actual accuracy was better than this.<br /><br />How does this angular accuracy relate to the DOF of the set-up? To put some numbers up: I used a 50 mm lens on an APS-C size sensor at a focus distance of about 1 metre. If we take the above results, and simplify them to say that MTF Mapper can probably get us to within 0.1 degrees under these conditions, then we can calculate the depth error at the extreme edges of the test chart. I used an A3 chart, so our chart width is 420 mm. 
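With those numbers, the depth error introduced at the chart edges by a small residual yaw angle is straightforward to compute; a quick Python sketch (the 8.95 mm DOF figure is taken from vwdof.exe as quoted below):

```python
import math

chart_width = 420.0  # mm, A3 chart
yaw = 0.1            # degrees of residual yaw
dof = 8.95           # mm, "critical" DOF from vwdof.exe (CoC = 0.01 mm)

edge_depth = (chart_width / 2.0) * math.sin(math.radians(yaw))
total_depth_error = 2.0 * edge_depth
print(edge_depth)               # ~0.37 mm at one edge
print(total_depth_error)        # ~0.73 mm from left edge to right edge
print(total_depth_error / dof)  # ~0.08, i.e., about 8% of the DOF
```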
If the chart has a yaw angle of 0.1 degrees (and we are shooting for 0 degrees), then the right edge of our chart will be 0.37 mm further away than expected, or our total depth error from the left edge of the chart to the right edge will be twice that, about 0.73 mm. If I run the numbers through vwdof.exe, the "critical" DOF criterion (CoC of 0.01 mm) yields a DOF of 8.95 mm. So our total depth error will be around 8% of our DOF. Will that be enough to cause us to think our lens is tilted when we look at a full-field MTF map? </div><div><br /></div><div>Only one way to find out. More testing!</div><div><br /></div><h3>Limitations of using single-shot planar targets to perform automatic camera calibration</h3>Posted 2017-02-08.<br /><br />When you are trying to measure the performance of your system across the entire field, it is rather important to ensure that your test chart is parallel to your sensor. If you are not careful, then a slight tilt in your test chart could look very much like a tilted lens element if you are looking at the MTF values, i.e., two opposite corners of your MTF image would appear to be soft: is your lens tilted along the diagonal, or is the chart tilted along the same diagonal?<br /><br />My solution to this problem is to directly estimate the camera pose from the MTF test chart. I have embedded fiducial markers in the latest MTF Mapper test charts which will allow me to measure the angle between your sensor and your test chart. 
This post details a particular difficulty I encountered while implementing the camera pose estimation method as part of MTF Mapper.<br /><br /><h3>The classical approach</h3><div>Classical planar calibration target methods like Tsai [Tsai1987] or Zhang [Zhang2000] prescribe that you capture several images of your planar calibration target, while ensuring that there is sufficient translation and rotation between the individually captured images. From each of the images you can extract a set of correspondences, e.g., the location of a prominent image feature (corner of a square, for example) and the corresponding real-world coordinates of that feature.</div><div><br /></div><div>This sounds tricky, until you realize that you are allowed to express the real-world coordinates in a special coordinate system attached to your planar calibration target. This implies that you can put all the reference features at z=0 in your world coordinate system (their other two coordinates are known through measurement with a ruler, for example), meaning that even if you moved the calibration object (rather than the camera) to capture your multiple calibration images, the model assumes that the calibration object was fixed and the camera moved around it.</div><div><br /></div><div>A set of four such correspondences is sufficient to estimate a 3x3 homography matrix up to a scale factor, since four correspondences yield 8 equations to solve for the 8 free parameters of the matrix. A homography is a linear transformation that can map one plane onto another, such as mapping our planar calibration target onto the image sensor. For each of our captured calibration images we can solve these equations to obtain a different homography matrix. The key insight is that this homography matrix can be decomposed to separate the intrinsic camera parameters from the extrinsic camera parameters. 
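To make the four-correspondence claim concrete, here is a minimal direct linear transform (DLT) sketch in Python/NumPy; real implementations (e.g., OpenCV's findHomography) add coordinate normalization and robust outlier handling, but the linear algebra is the same:

```python
import numpy as np

def homography_from_points(src, dst):
    # Estimate H (up to scale) such that dst ~ H @ src, from n >= 4
    # correspondences. Each correspondence contributes two linear equations
    # in the 9 entries of H; with 4 points we have 8 equations for the 8
    # free parameters, solved via the SVD null vector.
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary scale for readability

# Round-trip check with a known homography (numbers made up for the demo):
H_true = np.array([[1.10, 0.02, 0.30],
                   [-0.01, 0.95, -0.20],
                   [0.001, 0.002, 1.00]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
pts = (H_true @ np.c_[src, np.ones(4)].T).T
dst = pts[:, :2] / pts[:, 2:3]
H_est = homography_from_points(src, dst)
```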
We can use a top-down approach to understand how the homography matrix is composed.</div><div><br /></div><div>To keep things a bit simpler, we can assume that the principal point of the system is fixed at the centre of the captured image. We can thus normalize our image coordinates so that the principal point maps to (0,0) in normalized image coordinates, and while we are at it we can divide the result by the width of the image so that <i>x</i> coordinates run from -0.5 to 0.5 in normalized image coordinates. This centering and rescaling generally improves the numerical stability of the camera parameter estimation process. This gives us the intrinsic camera matrix <b>K</b>, such that</div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-DLGC3yU2knA/WJmLSeKzMPI/AAAAAAAABLk/g_0_dw283WYCJQx_5A0i8hea0DKaWdXdQCLcB/s1600/eq1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-DLGC3yU2knA/WJmLSeKzMPI/AAAAAAAABLk/g_0_dw283WYCJQx_5A0i8hea0DKaWdXdQCLcB/s1600/eq1.png" /></a></div>where <i>f </i>denotes the focal length of the camera. Note that I am forcing square pixels without skew. This appears to be a reasonable starting point for interchangeable lens cameras. We can combine the intrinsic camera parameters and the extrinsic camera parameters into a single 3x4 matrix <b>P</b>, such that</div><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-fUB6BKBTZTE/WJlkmCHieJI/AAAAAAAABK0/2n-u1Q1NnFU6m5zbdqxJ6sWAsTLRhowxwCLcB/s1600/eq2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-fUB6BKBTZTE/WJlkmCHieJI/AAAAAAAABK0/2n-u1Q1NnFU6m5zbdqxJ6sWAsTLRhowxwCLcB/s1600/eq2.png" /></a></div><div>where the 3x3 matrix <b>R</b> represents a rotation matrix, and the vector <b>t </b>represents a translation vector. 
The extrinsic camera parameters <b>R</b> and <b>t</b> are often referred to as the camera pose, and represent the transformation required to transform from world coordinates (i.e., our calibration target local coordinates) to homogeneous camera coordinates. If we have multiple calibration images, then we obtain a different <b>R</b> and <b>t</b> for each image, but the intrinsic camera matrix <b>K </b>must be common to all views of the chart.</div><div><br /></div><div>The process of estimating <b>K </b>and the set of <b>R</b><sub><i>i</i></sub> and <b>t</b><sub><i>i</i></sub> over all the images <i>i</i> is called <i>bundle adjustment</i> [Triggs1999]. Typically we will use all the available point correspondences (hopefully more than four) from each view to minimize the backprojection error, i.e., we take our known chart-local world coordinates from each correspondence, transform them with the appropriate <b>P </b>matrix, divide by the third (<i>z</i>) coordinate to convert homogeneous coordinates to normalized image coordinates, and calculate the Euclidean distance between this back-projected image point and the measured image coordinates (e.g., output of a corner-finding algorithm) of the corresponding point in the captured image. 
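The backprojection error just described fits in a few lines; a sketch (assuming chart-local points (x, y, 0) and normalized image coordinates as before):

```python
import numpy as np

def backproject(P, world_xy):
    # Project chart-local points (x, y, 0) through the 3x4 camera matrix P
    # and return normalized image coordinates.
    n = len(world_xy)
    X = np.c_[world_xy, np.zeros(n), np.ones(n)]  # homogeneous (x, y, 0, 1)
    cam = (P @ X.T).T                             # homogeneous camera coordinates
    return cam[:, :2] / cam[:, 2:3]               # divide by the third coordinate

def backprojection_rmse(P, world_xy, measured_xy):
    # Root-mean-square Euclidean distance between projected and measured points.
    d = backproject(P, world_xy) - measured_xy
    return np.sqrt(np.mean(np.sum(d * d, axis=1)))
```

Bundle adjustment then amounts to minimizing this quantity over the parameters that make up each view's <b>P</b>.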
The usual recommendation is to use a Levenberg-Marquardt algorithm to solve this non-linear optimization problem to minimize the sum of the squared backprojection errors.<br /><br />Strictly speaking, we usually include a radial distortion coefficient or two in the camera model to arrive at a more realistic camera model than the pinhole model presented here, but I am going to ignore radial distortion here to simplify the discussion.<br /><br /><h3>Single-view calibration using a planar target</h3></div><div>From the definition of the camera matrix <b>P </b>above we can see that even if we only have a single view of the planar calibration target, we can still estimate both our intrinsic and extrinsic camera parameters using the usual bundle adjustment algorithms. Zhang observed that when a planar calibration target is employed, we can estimate a 3x3 homography matrix <b>H </b>such that </div><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-OTon-k66mYk/WJlpiNg5BAI/AAAAAAAABLE/bkar_tjL-3MZydbBaP4RsgRGttWmJ0T8wCLcB/s1600/eq3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-OTon-k66mYk/WJlpiNg5BAI/AAAAAAAABLE/bkar_tjL-3MZydbBaP4RsgRGttWmJ0T8wCLcB/s1600/eq3.png" /></a></div><div>where the vectors <b>r</b><sub><i>1</i></sub> and <b>r</b><sub><i>2</i></sub> define the first two basis vectors of the world coordinate frame in camera coordinates, and <b>t</b> is a translation vector. Since we require <b>r</b><sub><i>1</i></sub> and <b>r</b><sub><i>2</i></sub> to be orthonormal, the third basis vector of the world coordinate frame is just the cross product of <b>r</b><sub><i>1</i></sub> and <b>r</b><sub><i>2</i></sub>. 
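Given <b>K</b>, the pose can be read off from <b>H</b> in exactly this way; a sketch of the decomposition, with a synthetic round trip (the numbers are made up for the demo):

```python
import numpy as np

def pose_from_homography(K, H):
    # Recover (R, t) from H ~ K [r1 r2 t] when K is known. The scale of H
    # is fixed by requiring that r1 be a unit vector; r3 is the cross
    # product of r1 and r2, as noted in the text.
    M = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(M[:, 0])
    r1 = lam * M[:, 0]
    r2 = lam * M[:, 1]
    r3 = np.cross(r1, r2)
    t = lam * M[:, 2]
    return np.column_stack([r1, r2, r3]), t

# Round trip: a 10 degree rotation about y, chart roughly 1 unit away.
f = 2.0
K = np.diag([f, f, 1.0])
th = np.radians(10.0)
R_true = np.array([[np.cos(th), 0.0, np.sin(th)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(th), 0.0, np.cos(th)]])
t_true = np.array([0.05, -0.02, 1.0])
H = 3.7 * (K @ np.column_stack([R_true[:, 0], R_true[:, 1], t_true]))  # arbitrary scale
R_est, t_est = pose_from_homography(K, H)
```

With a noisy H the recovered r1 and r2 will not be exactly orthonormal, and a real implementation would re-orthogonalize the rotation matrix (e.g., via SVD).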
This little detail explains how the 8 free parameters of the homography <b>H</b> are able to represent all the required degrees of freedom we expect in our full camera matrix <b>P</b>.<br /><br />In the previous section we restricted our intrinsic camera parameters to a single unknown <i>f</i>, since both <i>P</i><sub><i>x</i></sub> and <i>P</i><sub><i>y</i></sub> are already known because we assume the principal point coincides with the image centre. With a little bit of algebraic manipulation we can see that Zhang's orthonormality constraints allow us to estimate the focal length <i>f</i> directly from the homography matrix <b>H </b>(see Appendix A below).<br /><br />So this leaves me with a burning question: if we can estimate all the required camera parameters using only a single view of a planar calibration target, why do all the classical methods require multiple views (with different camera poses)?<br /><br /><h3>Limitations of single-view calibration using planar targets</h3></div><div>To answer that question, we simply have to find an example of where the single-view case would fail to estimate the camera parameters correctly. The simplest case would be to assume that our rotation matrix <b>R</b> is the 3x3 identity matrix (camera axis is perpendicular to planar calibration target), and that our translation vector is of the form [0 0 <i>d</i>] where <i>d </i>represents the distance of the calibration target from the camera's centre of projection. 
This scenario reduces our camera matrix <b>P</b> to</div><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-4KugPs3Q11c/WJmL_pljDlI/AAAAAAAABLs/u-qt3LUM8O8svMwb7Q0Fs4n5dn2VIbINQCLcB/s1600/eq4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-4KugPs3Q11c/WJmL_pljDlI/AAAAAAAABLs/u-qt3LUM8O8svMwb7Q0Fs4n5dn2VIbINQCLcB/s1600/eq4.png" /></a></div><div> A given point [<i>x y</i> 0] in world coordinates is thus transformed to [<i>fx fy d</i>] in homogeneous camera coordinates. We can divide out the homogeneous coordinate to obtain our desired normalized image coordinates as [<i>fx</i>/<i>d fy</i>/<i>d</i>].</div><div>And there we see the problem: the normalized image coordinates depend only on the ratio <i>f</i>/<i>d, </i>which implies that we do not have sufficient constraints to estimate both <i>f</i> and <i>d</i> from this single view. The intuitive interpretation is simple to understand: you can always increase <i>d, </i>i.e., move further away from the calibration target while adjusting the focal length <i>f </i>(zooming in) to keep <i>f</i>/<i>d </i>constant without affecting the image captured by the camera.</div><div>This happens because there is no variation in the depth of the calibration target correspondence points expressed in camera coordinates, thus the depth-dependent properties of a perspective projection are entirely absent.<br /><br />We can try to apply the formula in Appendix A to estimate the focal length directly from the homography corresponding to the matrix <b>P</b> above, but we quickly run into a divide-by-zero problem. This should give us a hint. If we choose to ignore the hint, we can apply a bundle adjustment algorithm to estimate both the intrinsic and extrinsic camera parameters from correspondences generated using the matrix <b>P</b>. 
All that this will achieve is that we will find an arbitrary pair of <i>f</i> and <i>d </i>values that satisfy the constant ratio <i>f</i>/<i>d </i>imposed by <b>P</b>.<br /><br /><h3>The middle road</h3></div><div>What happens if we have a slightly less pathological scenario? Let us assume that there is a small tilt between the calibration target plane and the sensor. For simplicity, we can just choose a rotation around the <i>y </i>axis so that<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-Wqb4BNuhOIg/WJrffA7RZBI/AAAAAAAABNU/oPwc7LhvcHIpzkDlZ4AeSYCv56DHa-fHwCLcB/s1600/eq7.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://3.bp.blogspot.com/-Wqb4BNuhOIg/WJrffA7RZBI/AAAAAAAABNU/oPwc7LhvcHIpzkDlZ4AeSYCv56DHa-fHwCLcB/s1600/eq7.png" /></a></div>We know that for a small angle θ, sin(θ) ≈ 0 and cos(θ) ≈ 1, so our matrix <b>P</b> will be very similar to the sensor-parallel-to-chart case above. The corresponding homography <b>H</b> should be</div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-ON4oYcjXyzM/WJnCVniIi4I/AAAAAAAABMg/2kOReEFhhGgDIVFU_g8bOaZRP1LNqLB1ACLcB/s1600/eq8.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://3.bp.blogspot.com/-ON4oYcjXyzM/WJnCVniIi4I/AAAAAAAABMg/2kOReEFhhGgDIVFU_g8bOaZRP1LNqLB1ACLcB/s1600/eq8.png" /></a></div><div>We can apply the formula in Appendix A to <b>H</b>, which simplifies to f<sup>2</sup> = f<sup>2</sup>, which is a relief. 
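I have not reproduced Appendix A here, but the flavour of such a constraint-based estimate is easy to demonstrate: under the square-pixel, centred-principal-point assumption, the equal-norm constraint on <b>r</b><sub><i>1</i></sub> and <b>r</b><sub><i>2</i></sub> yields a closed form for <i>f</i>, and it exhibits exactly the two behaviours described above. A sketch (the helper names are mine):

```python
import numpy as np

def focal_from_homography(H):
    # Estimate f from H ~ K [r1 r2 t] with K = diag(f, f, 1), using the
    # equal-norm constraint |inv(K) h1| = |inv(K) h2| (r1 and r2 are unit
    # vectors). The expression is invariant to the scale of H. Returns nan
    # when the geometry leaves f unconstrained.
    num = H[0, 0]**2 + H[1, 0]**2 - H[0, 1]**2 - H[1, 1]**2
    den = H[2, 1]**2 - H[2, 0]**2
    if den == 0 or num / den < 0:
        return float("nan")
    return float(np.sqrt(num / den))

def synthetic_H(f, theta_deg, d=1.0):
    # H = K [r1 r2 t] for a chart rotated by theta around the y axis.
    th = np.radians(theta_deg)
    r1 = np.array([np.cos(th), 0.0, -np.sin(th)])
    r2 = np.array([0.0, 1.0, 0.0])
    t = np.array([0.0, 0.0, d])
    return np.diag([f, f, 1.0]) @ np.column_stack([r1, r2, t])

print(focal_from_homography(synthetic_H(2.0, 5.0)))  # recovers f = 2.0
print(focal_from_homography(synthetic_H(2.0, 0.0)))  # nan: 0/0, f unconstrained
```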
The question is: how accurately can we estimate the homography <b>H </b>using actual correspondences extracted from the captured images?<br /><br />I know from simulations using MTF Mapper that the position of my circular fiducials can readily be estimated to an accuracy of 0.1 pixels under fairly heavy simulated noise. The objective now is to measure the impact of this uncertainty on the accuracy of the homography estimated using OpenCV's <span style="font-family: Courier New, Courier, monospace;">findHomography</span><span style="font-family: inherit;"> function. I start out with a camera matrix <b>P </b>like the one above with only a rotation around the <i>y</i> axis. A set of 25 points are generated on my virtual calibration target, serving as the world coordinates (with the same real-world dimensions as the actual A3 chart used by MTF Mapper). These are transformed using <b>P</b> to obtain the `perfect' simulated corresponding image coordinates representing the position of the fiducials. I perturb these perfect coordinates by adding Gaussian noise with a standard deviation of about </span>0.000020210 units, which corresponds to an error of 0.1 pixels, but expressed in normalized image coordinates (divided by 4948, the width of a D7000 raw image). Now I can systematically measure the uncertainty in the focal length estimated with the formula of Appendix A as a function of the angle between the chart and the sensor, <span style="background-color: white; color: #252525;">θ. 
I ran </span><span style="color: #252525;">100000 iterations at a selection of angles, and calculated the difference between the 75th and 50th percentile of the estimated focal length as a measure of spread.</span><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-LNCItXehbQI/WJrW2SwRYCI/AAAAAAAABNA/qAewV88KoEw3jMLjmpY9SV9ieigVdmZtwCLcB/s1600/spread_f.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="300" src="https://1.bp.blogspot.com/-LNCItXehbQI/WJrW2SwRYCI/AAAAAAAABNA/qAewV88KoEw3jMLjmpY9SV9ieigVdmZtwCLcB/s400/spread_f.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 1</td></tr></tbody></table>In Figure 1 we see that the spread of the focal length estimates increases dramatically once the angle <span style="background-color: white; color: #252525;">θ drops below about 2 degrees. 
For the purpose of using the estimated camera pose to measure whether you have aligned your chart parallel to your camera sensor, this is really terrible news: essentially, we cannot estimate the focal length of the camera reliably if the chart is close to being correctly aligned.</span></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/--ISL9G57nlY/WJrg0q6bdsI/AAAAAAAABNg/DmPJqQ1Z9twGJZgZOLqkpz58ISiua5uhwCLcB/s1600/median_f.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="300" src="https://2.bp.blogspot.com/--ISL9G57nlY/WJrg0q6bdsI/AAAAAAAABNg/DmPJqQ1Z9twGJZgZOLqkpz58ISiua5uhwCLcB/s400/median_f.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 2</td></tr></tbody></table><div>Figure 2 shows that the focal length estimate is relatively unbiased for angles above about 1 degree, but once the angle becomes small enough, we overestimate the focal length dramatically.<br /><br />This experiment demonstrated that small errors in the estimated position of features (e.g., corners or centres of circular targets) lead to dramatic errors in focal length estimation. Intuitively, this makes sense, since the relative magnitude of perspective effects decreases the closer we approach a parallel alignment between the sensor and the calibration target. 
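To make the degeneracy concrete, here is a small numerical sketch (C++, with hypothetical helper names) of a closed-form focal length estimate for a chart rotated about the y axis. It uses the standard equal-norm constraint on the first two columns of H (Zhang-style), which may differ in detail from the unit-norm derivation in Appendix A, but it degenerates in the same way: for a pure y-axis rotation, both the numerator and the denominator of the estimate vanish as the angle goes to zero, so any noise in H is amplified without bound.

```cpp
#include <cassert>
#include <cmath>

// Closed-form focal length from a chart-to-image homography H, assuming
// square pixels, zero skew and a principal point of (0,0) in normalized
// image coordinates, i.e. K = diag(f, f, 1). This variant uses the
// equal-norm constraint ||K^-1 h1|| = ||K^-1 h2||; note that it is
// invariant to the arbitrary overall scale of H.
double focal_from_homography(const double H[3][3]) {
    double num = H[0][0]*H[0][0] + H[1][0]*H[1][0]
               - H[0][1]*H[0][1] - H[1][1]*H[1][1];
    double den = H[2][1]*H[2][1] - H[2][0]*H[2][0];
    // For a chart nearly parallel to the sensor, num and den both approach
    // zero: a 0/0 form, which is the instability visible in Figure 1.
    return std::sqrt(num / den);
}

// Exact H = K [r1 r2 t] for a chart rotated by theta about the y axis,
// at distance d along the optical axis (world plane z = 0).
void make_H(double f, double theta, double d, double H[3][3]) {
    double c = std::cos(theta), s = std::sin(theta);
    H[0][0] = f*c; H[0][1] = 0.0; H[0][2] = 0.0;
    H[1][0] = 0.0; H[1][1] = f;   H[1][2] = 0.0;
    H[2][0] = -s;  H[2][1] = 0.0; H[2][2] = d;
}
```

With noise-free input the true f is recovered at any angle; perturbing the entries of H by even a tiny amount swamps the ratio num/den once the angle drops below a degree or two, which matches the behaviour measured above.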
Since perspective effects depend on the distance from the chart, and the estimated distance from the chart is effectively controlled by the estimated focal length (assuming the same framing), this seems reasonable.<br /><br />I have tried using bundle adjustment, rather than homography estimation, as an intermediate step, but clearly the problem lies with the unfavourable viewing geometry and the resulting subtlety of the perspective effects, not with the algorithm used to estimate the focal length. At least, as far as I can tell.</div><div><br /><h3>Hobson's choice</h3>If we take the focal length of the camera as a given parameter, then the ambiguity is resolved, and we can obtain a valid, unique estimate of the calibration target distance <i>d. </i>This is not entirely surprising, since our assumed constrained intrinsic camera parameters depend only on the focal length <i>f</i>, i.e., <b>K </b>is known; thus the pose of the camera can be estimated for any given view, even the degenerate case where the calibration target is parallel to the sensor.<br /><br />In other words, I see no way other than requiring the user to specify the focal length as an input to MTF Mapper. I will try to extract this information from the EXIF data when the MTF Mapper GUI is used, but it seems that not all cameras report this information. Fortunately, it seems that a user-provided focal length need not be 100% accurate in order to obtain a reasonable estimate of the chart orientation relative to the camera. </div><div><br /></div><h4>References</h4><div><ul><li>[Zhang2000], Z. Zhang, A flexible new technique for camera calibration, IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11), pp. 1330-1334, 2000.</li><li>[Tsai1987], R. Tsai, A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses, IEEE Journal on Robotics and Automation, 3(4), pp. 323-344, 1987.</li><li>[Triggs1999], B. Triggs, P.
McLauchlan, R. Hartley, A. Fitzgibbon, Bundle Adjustment — A Modern Synthesis, ICCV '99: Proceedings of the International Workshop on Vision Algorithms, Springer-Verlag, pp. 298-372, 1999.</li></ul><div><br /></div></div><h4>Appendix A</h4><div>If we have a homography <b>H </b>between our normalized image coordinate plane and our planar calibration target, such that</div><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-5CvusqQ5GCE/WJmQ85QFozI/AAAAAAAABL8/Wpviq4OLLdAmX6tUKeqWJyZ0EccP7obSwCLcB/s1600/eq5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-5CvusqQ5GCE/WJmQ85QFozI/AAAAAAAABL8/Wpviq4OLLdAmX6tUKeqWJyZ0EccP7obSwCLcB/s1600/eq5.png" /></a></div><div>where <i>h</i><sub>33</sub> is an arbitrary scale factor, then the focal length of the camera can be estimated assuming square pixels, zero skew and a principal point of (0,0) in normalized image coordinates, using the formula</div><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-PVGRUgdj8Uk/WJmRVe82dII/AAAAAAAABMA/cRXEZKFqNOokQub7SWeA8Mvh4TTSoYblwCLcB/s1600/eq6.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-PVGRUgdj8Uk/WJmRVe82dII/AAAAAAAABMA/cRXEZKFqNOokQub7SWeA8Mvh4TTSoYblwCLcB/s1600/eq6.png" /></a></div><div>Note that this is only one possibility, derived from the constraint that <b>r</b><sub>1</sub> is a unit vector.</div>Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com2tag:blogger.com,1999:blog-6555460465813582847.post-74495603033722251632016-11-25T04:30:00.001-08:002016-11-25T04:30:18.348-08:00MTF Mapper finally gets a logo!<span style="font-family: inherit;">It is a sad day for command line enthusiasts, but MTF Mapper has finally conformed by adopting a logo for its GUI version.</span><br /><br />I guess in the world of graphical user 
interfaces, a logo is to an application what a flag is to a nation (cue the <a href="http://www.goodreads.com/quotes/239641-we-stole-countries-with-the-cunning-use-of-flags-just" target="_blank">Eddie Izzard reference</a>).<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-xvq1Qt_xMO8/WDgtrGTV2YI/AAAAAAAABIg/eZZHC1lHGx8Fys7W7gjWsjSySy0smyPIgCLcB/s1600/mtf_mapper_gui_256.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://2.bp.blogspot.com/-xvq1Qt_xMO8/WDgtrGTV2YI/AAAAAAAABIg/eZZHC1lHGx8Fys7W7gjWsjSySy0smyPIgCLcB/s1600/mtf_mapper_gui_256.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><br /></td></tr></tbody></table>There is of course a new version of MTF Mapper (0.5.11 or later) available over on <a href="http://sourceforge.net/projects/mtfmapper/files/windows/" target="_blank">SourceForge</a>. Lots of fixes and cleanup to the GUI; please let me know what you think of the new(ish) interface.Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com6tag:blogger.com,1999:blog-6555460465813582847.post-49711173063511685492016-06-13T23:08:00.000-07:002016-06-14T05:03:05.825-07:00Running MTF Mapper under WineMTF Mapper 0.5.2 was compiled using MSVC Express 2013, which Microsoft calls "vc12". The Windows binaries have been linked statically against the runtime, but this does not appear to be sufficient to run MTF Mapper under wine without further tweaks.<br /><br />For me, running "<span style="font-family: "courier new" , "courier" , monospace;">winetricks vcrun2013</span>" in the console seemed to do the trick. 
I would say that this is a necessary step to get MTF Mapper to work under wine.<br /><br />In case you are wondering, without the winetricks step I get the following error:<br /><span style="font-family: "courier new" , "courier" , monospace;">wine: Call from 0x7b83c506 to unimplemented function msvcr120.dll.?_Trace_ppl_function@Concurrency@@YAXABU_GUID@@EW4ConcRT_EventType@1@@Z, aborting</span><br /><span style="font-family: "courier new" , "courier" , monospace;"><br /></span>Let me know if there are any other issues related to wine, and I'll see what I can do.Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com0tag:blogger.com,1999:blog-6555460465813582847.post-43082430951968134772016-04-13T08:30:00.000-07:002016-04-13T08:30:53.011-07:00MTF Mapper vs Imatest vs Quick MTFI recently noticed that <a href="http://www.quickmtf.com/" target="_blank">Quick MTF</a> now has an automated region-of-interest (ROI) detection function. This allows me (in theory) to perform the same type of automated testing that I applied to MTF Mapper and Imatest. Now would be a good time to read the <a href="http://mtfmapper.blogspot.co.za/2015/07/taking-on-imatest.html" target="_blank">Imatest comparison</a> post to familiarise yourself with my testing procedure.<br /><br />Anyhow, the automatic ROI functionality in Quick MTF is <i>almost</i> able to work with the simulated Imatest charts I produced with mtf_generate_rectangle. I had to manually adjust about half of the ROIs to ensure that Quick MTF was using as much of each edge as possible, i.e., similar ROIs to what Imatest and MTF Mapper used. 
Since the edge locations remain the same across all the test images, I used the "open with the same ROI" option to keep the experiment as fair as possible.<br /><br />I also discovered that Quick MTF's "trial" limit of 40 tests can be bypassed with relatively little fuss (Oleg, if you are reading this, I promise not to share the secret).<br /><br />Lastly, note that I performed these tests using the "ISO 12233" mode of Quick MTF. The default settings produce much smoother plots, but these are severely biased, i.e., they report MTF50 values that are much too low. To illustrate: the default settings produce a 95th percentile relative error of 13% when measured using images with an expected MTF50 of 0.25 c/p; switching to ISO 12233 mode reduces the error to only 5%. As expected, the standard deviation of MTF50 error is lower in the default mode, but I maintain that bias and variance should <i>both </i>be managed well.<br /><br /><h4>The results </h4><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-Yhi8xBBYOxw/Vw5cReL-rcI/AAAAAAAABFQ/uchdK5OuSD8DUW3HrlOri9bxENRt2TL9QCLcB/s1600/qmtf_boxplot.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://3.bp.blogspot.com/-Yhi8xBBYOxw/Vw5cReL-rcI/AAAAAAAABFQ/uchdK5OuSD8DUW3HrlOri9bxENRt2TL9QCLcB/s400/qmtf_boxplot.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 1: Quick MTF MTF50 relative error boxplot</td></tr></tbody></table>Figure 1 illustrates the relative MTF50 error boxplot, calculated as <span style="font-family: inherit;"> 100*(measured_mtf50 - expected_mtf50)/expected_mtf50. Firstly, Quick MTF should be commended for its unbiased performance between expected MTF50 values of 0.1 and 0.4 cycles/pixel; the median error is exactly zero. 
Unfortunately, a strong bias appears after 0.4 c/p, which is consistent with some (light) smoothing of the ESF. The boxes, and especially the whiskers, are a bit wide, which is more readily seen in Figure 2.</span><span style="font-family: inherit;"><br /></span><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-iF4YpCyB_48/Vw5d8DV5Q8I/AAAAAAAABFc/5HIRVSbRxGI89Zu2kOLaZxOlNR0PITzhACLcB/s1600/sd_comparison.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://2.bp.blogspot.com/-iF4YpCyB_48/Vw5d8DV5Q8I/AAAAAAAABFc/5HIRVSbRxGI89Zu2kOLaZxOlNR0PITzhACLcB/s400/sd_comparison.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 2: Standard deviation of relative MTF50 error</td></tr></tbody></table><span style="font-family: inherit;">Things go a bit pear-shaped when we look at the standard deviation of the relative MTF50 error. If we consider the "usable" range of 0.08 to 0.5 c/p, then Quick MTF keeps the standard deviation below 3.5%, which is not bad, but Imatest and MTF Mapper perform a bit better here. 
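For completeness, the summary statistics plotted in Figures 1 through 3 are easy to reproduce. The sketch below (with made-up sample values; this is not Imatest's or Quick MTF's code) computes the relative MTF50 error, its standard deviation, and the 95th percentile of the error magnitude:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Relative MTF50 error in percent: 100*(measured - expected)/expected.
std::vector<double> relative_errors(const std::vector<double>& measured,
                                    double expected) {
    std::vector<double> e;
    for (double m : measured) e.push_back(100.0 * (m - expected) / expected);
    return e;
}

// p-th percentile via linear interpolation on the sorted sample.
double percentile(std::vector<double> v, double p) {
    std::sort(v.begin(), v.end());
    double idx = p / 100.0 * (v.size() - 1);
    size_t lo = size_t(idx);
    size_t hi = std::min(lo + 1, v.size() - 1);
    return v[lo] + (idx - lo) * (v[hi] - v[lo]);
}

// Sample standard deviation (requires at least two samples).
double stddev(const std::vector<double>& v) {
    double mean = 0.0, ss = 0.0;
    for (double x : v) mean += x;
    mean /= v.size();
    for (double x : v) ss += (x - mean) * (x - mean);
    return std::sqrt(ss / (v.size() - 1));
}

// 95th percentile of the error magnitude (the Figure 3 measure, which
// combines bias and variance in a single number).
double p95_abs_error(const std::vector<double>& errors) {
    std::vector<double> mag;
    for (double e : errors) mag.push_back(std::fabs(e));
    return percentile(mag, 95.0);
}
```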
A more useful (and my preferred) measure is the 95th percentile of relative MTF50 error magnitude, as illustrated in Figure 3.</span><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-tOpbn87J56c/Vw5fMsnrWTI/AAAAAAAABFo/9sNbJcwFi4E9-q6GM0LPUnRGbbskTQCTwCLcB/s1600/p95_comparison.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://2.bp.blogspot.com/-tOpbn87J56c/Vw5fMsnrWTI/AAAAAAAABFo/9sNbJcwFi4E9-q6GM0LPUnRGbbskTQCTwCLcB/s400/p95_comparison.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 3: 95th percentile of relative MTF50 error magnitude</td><td class="tr-caption" style="text-align: center;"><br /></td></tr></tbody></table><span style="font-family: inherit;"><br /></span><span style="font-family: inherit;">The values in Figure 3 have a natural interpretation: the magnitude of the error will remain below the indicated value in about 95% of the edges measured with each tool. This measure combines the effects of bias (Figure 1) and variance (Figure 2) in one convenient value. Consider again the "usable" range of 0.08 to 0.5 c/p: Quick MTF only manages to keep the error below about 9% across the range. It does quite a bit better in the centre of the range, almost matching Imatest at 0.2 c/p.</span><br /><span style="font-family: inherit;"><br /></span><h4><span style="font-family: inherit;">Conclusion</span></h4><span style="font-family: inherit;">The Imatest results were not based on the latest version; I do not have an Imatest license, and my trial has expired, so it will take a fair bit of effort to refresh the Imatest results. 
The Quick MTF 2.09 results are current, though.</span><br /><span style="font-family: inherit;">Based on these versions, it would appear that MTF Mapper still produces competitive results. And you cannot beat MTF Mapper's price.</span><br /><span style="font-family: inherit;"><br /></span><h4><br /></h4>Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com0tag:blogger.com,1999:blog-6555460465813582847.post-48950142779990955522015-11-03T02:16:00.001-08:002015-11-03T04:14:09.466-08:00PffffFFTttt...There is no doubt that FFTW is one of the fastest FFT implementations available. It can be a pain to include in a Microsoft Visual Studio project, though. Maybe I am "using it wrong"...<br /><br />One solution to this problem is to include my own FFT implementation in MTF Mapper, thereby avoiding the FFTW dependency entirely. Although it is generally frowned upon to use a homebrew FFT implementation in lieu of an existing, proven library, I decided it was time to ditch FFTW.<br /><br />One of the main advantages of using a homebrew FFT implementation is that it avoids the GPL license of FFTW. Not that I have any fundamental objection to the GPL, but the main sources of MTF Mapper are available under a BSD license, which is a less strict license than the GPL. In particular, the BSD license makes allowance for commercial use of the code. Before anyone asks, no, MTF Mapper is not going closed source or anything like that. 
All things being equal, the BSD license is just less restrictive, and avoiding FFTW brings MTF Mapper closer to being a pure BSD (or compatible) license project.<br /><br /><h3>FFT Implementation</h3>After playing around with a few alternative options, including revisiting my first C++ FFT implementation from way back in first year at university, I settled on Sorensen's radix-2 real-valued FFT (Sorensen, H.V., et al., Real-Valued Fast Fourier Transform Algorithms, IEEE Transactions on Acoustics, Speech, and Signal Processing, 35(6), 1987). This algorithm appears to be a decent balance between complexity and theoretical efficiency, but I had to work fairly hard at the code to produce a reasonably efficient implementation.<br /><br />I tried to implement it in fairly straightforward C++, taking care to use pointer walks instead of array indexing, and using lookup tables for both the bit-reversal process and the sine/cosine functions. These changes produced an algorithm that was at least as fast as my similarly optimized complex FFT implementation augmented with a two-for-the-price-of-one step for real-valued inputs.<br /><br />One thing I did notice is that the FFT in its "natural" form does not lend itself to an efficient streaming implementation. For example, the first pass of the radix-2 algorithm looks like this:<br /><blockquote class="tr_bq">for (; xp <= xp_sentinel; xp += 2) { <br /> double xt = *xp;<br /> *(xp) = xt + *(xp+1);<br /> *(xp+1) = xt - *(xp+1);<br />}</blockquote>Note that the value of x[i] (here *xp) is overwritten in the 3rd line of the code, while the original value of x[i] (copied into xt) is still required in the 4th line of the code. This write-after-read dependency causes problems for out-of-order execution. 
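The loop above is the len = 2 stage of the general radix-2 butterfly. For readers who want to experiment with these data dependencies, here is a compact textbook decimation-in-time FFT (complex-valued, so not the Sorensen real-valued variant that MTF Mapper actually uses); every butterfly has the same read-then-overwrite pattern:

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

// Generic iterative radix-2 decimation-in-time FFT, in-place on complex data.
// A textbook implementation for illustration only; MTF Mapper's real-valued
// FFT (and its bit-reversal and twiddle lookup tables) differ in detail.
void fft(std::vector<std::complex<double>>& x) {
    const size_t n = x.size(); // must be a power of two
    // bit-reversal permutation
    for (size_t i = 1, j = 0; i < n; i++) {
        size_t bit = n >> 1;
        for (; j & bit; bit >>= 1) j ^= bit;
        j ^= bit;
        if (i < j) std::swap(x[i], x[j]);
    }
    const double pi = std::acos(-1.0);
    for (size_t len = 2; len <= n; len <<= 1) {
        std::complex<double> wlen(std::cos(-2.0*pi/len), std::sin(-2.0*pi/len));
        for (size_t i = 0; i < n; i += len) {
            std::complex<double> w(1.0, 0.0);
            for (size_t k = 0; k < len/2; k++) {
                std::complex<double> u = x[i+k];           // read ...
                std::complex<double> v = x[i+k+len/2] * w;
                x[i+k]       = u + v;                      // ... then overwrite
                x[i+k+len/2] = u - v;
                w *= wlen;
            }
        }
    }
}
```

A real-valued algorithm such as Sorensen's essentially halves the work of this complex transform by exploiting the conjugate symmetry of the spectrum of real input.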
Maybe the compiler is smart enough to unroll the loop and intersperse the reads and writes to achieve maximal utilization of all the processing units on the CPU, but the stride of the loop and the packing of the values is not ideal for SSE2/AVX instructions either. I suppose that this can be addressed with better code, but before I spend time on that I first have to determine how significant raw FFT performance is in the context of MTF Mapper.<br /><br /><h3>Real world performance in MTF Mapper</h3>So how much time does MTF Mapper spend calculating FFTs? Well, one FFT for every edge. A high-density grid-style test chart has roughly 1452 edges. According to a "callgrind" trace produced using valgrind, MTF Mapper v0.4.21 spends 0.09% of its instruction count inside FFTW's real-valued FFT algorithm.<br /><br />Using the homebrew FFT of MTF Mapper 0.4.23, the total number of instruction fetches increases by about 1.34%, but this does not imply a 1.34% increase in runtime. The callgrind trace indicates that 0.31% of v0.4.23's instructions are spent in the new FFT routine.<br /><br />In relative terms, this implies that the new routine is roughly 3.5 times slower, but this does not account for the additional overheads incurred by FFTW's memory allocation routines (the FFTW routine is not in-place, hence requires a new buffer to be allocated before every FFT to keep the process thread-safe). <br /><br />Measuring the actual wall-clock time gives us a result of 22.27 ± 0.14 seconds for 20 runs of MTF Mapper v0.4.21 on my test image, versus 21.631 ± 0.16 seconds for 20 runs of v0.4.23 (each experiment repeated 4 times for computing standard deviations). These timings were obtained on a Sandy-bridge laptop with 4 cores / 8 threads. 
The somewhat surprising reversal of the standings (the homebrew FFT now outperforms the FFTW implementation) just goes to show that the interaction between hyperthreading, caching, and SSE/AVX unit contention can produce counterintuitive results.<br /><br />Bottom line: the homebrew FFT is fast enough (at least on the two hardware/compiler combinations I tested).<br /><br /><h3>Are we done yet?</h3>Well, surely you want to know how fast the homebrew FFT is in relation to FFTW in a fair fight, right?<br /><br />I set up a simple test using FFTW version 3.3.4 built on Gentoo using gcc-4.9.3, running on a Sandy-bridge laptop CPU (i7-2720QM) with a base clock of 2.2 GHz. This was a single-threaded test, so we should see a maximum clock speed of 3.3 GHz, if we are lucky.<br /><br />For a 1024-sample real-valued FFT, 2 million iterations took 14.683 seconds using the homebrew code, and only 5.798 seconds using FFTW. That is a ratio of ~2.53.<br /><br />For a 512-sample (same as what MTF Mapper uses) real-valued FFT, 2 million iterations took 6.635 seconds using the homebrew code, and only 2.743 seconds using FFTW. That is a ratio of ~2.42.<br /><br />According to general impressions gathered from the Internet, you are doing a good-enough job if you are less than 4x slower than FFTW. I ran metaFFT's benchmarks, which gave ratios of 2.4x and 2.1x relative to FFTW for sizes 1024 and 512, respectively (these were probably complex transforms, so not a straight comparison).<br /><br />The MTF Mapper homebrew FFT appears to be in the right ballpark: at least fast enough not to cause embarrassment....Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com3tag:blogger.com,1999:blog-6555460465813582847.post-51255245997216625752015-07-05T10:30:00.000-07:002015-07-05T10:30:00.252-07:00A critical lookMost of the posts on this blog are tutorial / educational in style. 
I have come across a paper published by an Imatest employee that requires some commentary of a more critical nature. With some experience in the academic peer review process, I hope I can maintain the appropriate degree of objectivity in my commentary.<br /><br />At any rate, if you have no interest in this kind of commentary / post, please feel free to skip it.<br /><br /><h3>The paper</h3>The paper in question is : Jackson K. M. Roland, " A study of slanted-edge MTF stability and repeatability ", Proc. SPIE 9396, Image Quality and System Performance XII, 93960L (January 8, 2015); doi:10.1117/12.2077755; http://dx.doi.org/10.1117/12.2077755.<br /><br />A copy can be obtained directly from Imatest <a href="http://www.imatest.com/wp-content/uploads/2015/02/Slanted-Edge_MTF_Stability_Repeatability.pdf" target="_blank">here</a>.<br /><br /><h3>Interesting point of view</h3>One of the contributions of the paper is a discussion of the impact of edge orientation on MTF measurements. The paper appears to approach the problem from a direction that is more closely aligned with the ISO12233:2000 standard, rather than Kohm's method ("Modulation transfer function measurement method and results for the Orbview-3 high resolution imaging satellite", Proceedings of ISPRS, 2004).<br /><br />By that I mean that Kohm's approach (and MTF Mapper's approach) is to compute an estimate of the edge normal, followed by projection of the pixel centre coordinates (paired with their intensity values) onto this normal. This produces a dense set of samples across the edge in a very intuitive way; the main drawback of this approach being the potential increase in the processing cost because it lends itself better to a floating point implementation.<br /><br />The ISO12233:2000 approach rather attempts to project the edge "down" (assuming a vertical edge) onto the bottom-most row of pixels in the region of interest (ROI). 
Using the slope of the edge (estimated earlier), each pixel's intensity (sample) can be shifted left or right by the appropriate phase offset before being projected onto the bottom row. If the bottom row is modelled as bins with 0.25-pixel spacing, this process allows us to construct our 4x-oversampled, binned ESF estimate with the minimum amount of computational effort (although that might depend on whether a particular platform has strong floating-point capabilities).<br /><br />The method proposed in the Imatest paper is definitely of the ISO12233:2000 variety. How can we tell? Well, the Imatest paper proposes that the ESF must be corrected by appropriate scaling of the x values using a scaling factor of cos(theta), where theta is the edge orientation angle. What this accomplishes is to "squash" the range of x values (i.e. pixel column) to be spaced at an interval that is consistent with the pixel's distance as measured along the normal to the edge. For a 5 degree angle, this correction factor is only 0.9962, meaning that distances will be squashed by a very small amount indeed. So little, in fact, that the ISO12233:2000 standard ignores this correction factor, because a pixel at a horizontal distance of 16 pixels will be mapped to a normal distance of 15.94. Keeping in mind that the ESF bins are 0.25 pixels wide, this error must have seemed small.<br /><br />I recognize that the Imatest paper proposes a valid solution to this "stretching" of the ESF that would occur in its absence, and that this stretching would become quite large at larger angles (about a 1.5 pixel shift at 25 degrees for our pixel at a horizontal distance of 16 pixels).<br /><br />My critique of this approach is that it would typically involve the use of floating point calculations, the potential avoidance of which appears to have been one of the main advantages of the ISO12233:2000 method. 
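For comparison, here is a simplified sketch of the projection-onto-normal (Kohm-style) construction of the 4x-oversampled, binned ESF. The names and ROI handling are hypothetical, and MTF Mapper's actual implementation differs in detail (for example, empty bins would be interpolated rather than left at zero):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One pixel from the region of interest: centre coordinates and intensity.
struct Sample { double x, y, intensity; };

// Kohm-style ESF: project each pixel centre onto the edge normal, then
// average intensities in 0.25-pixel bins (the 4x-oversampled ESF).
std::vector<double> binned_esf(const std::vector<Sample>& roi,
                               double ex, double ey,      // a point on the edge
                               double nx, double ny,      // unit edge normal
                               double half_width = 8.0) { // pixels either side
    const double bin_w = 0.25;
    const int nbins = int(2 * half_width / bin_w);
    std::vector<double> sum(nbins, 0.0);
    std::vector<int> count(nbins, 0);
    for (const Sample& s : roi) {
        // signed distance from the edge, measured along the normal
        double d = (s.x - ex) * nx + (s.y - ey) * ny;
        int b = int(std::floor((d + half_width) / bin_w));
        if (b >= 0 && b < nbins) { sum[b] += s.intensity; count[b]++; }
    }
    std::vector<double> esf(nbins, 0.0);
    for (int i = 0; i < nbins; i++)
        if (count[i]) esf[i] = sum[i] / count[i]; // empty bins stay at zero
    return esf;
}
```

Because the distance is measured along the normal, no cos(theta) correction is needed, regardless of the edge orientation.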
If you are going to use floating point values, then Kohm's method is more intuitive.<br /><br /><h3>Major technical issues</h3><ol><li>The Point Spread Functions (PSFs) used to perform the "real world" and simulated experiments were rather different, particularly in one very important aspect. The Canon 6D camera has a PSF that is anisotropic, which follows directly from its square (or even L-shaped) photosites. The composite PSF for the 6D would be an Airy pattern (diffraction) convolved with a square photosite aperture (physical sensor) convolved with a 4-dot beam splitter (the OLPF). Of course I do not have inside information on the exact photosite aperture (maybe Chipworks has an image) nor the OLPF (although a 4-dot Lithium Niobate splitter seems reasonable). The point remains that this type of PSF will yield noticeably higher MTF50 values when the slanted edge approaches 45 degrees. Between the 5 and 15 degree orientations employed in the Imatest paper, we would expect a difference of about 1%. This is below the error margin of Imatest, but with a large enough set of observations this systematic effect should be visible.<br /><br />In contrast, the Gaussian PSF employed to produce the simulated images is (or at least is supposed to be) isotropic, and should show no edge-orientation-dependent bias. Bottom line: the "real world" images had an anisotropic PSF, and the simulated images had an isotropic PSF. This means that one cannot be used in place of the other to evaluate the effects of edge orientation on measured MTF. Well, at least not without separating the PSF anisotropy from the residual orientation-dependent artifacts of the slanted edge method.</li><li> On page 7 the Imatest paper states that "The sampling of the small Gaussian is such that the normally rotationally-invariant Gaussian function has directional factors as you approach 45 degree increments." 
This is further "illustrated" in Figure 13.<br /><br />At this point I take issue with the reviewers who allowed the Imatest paper to be published in this state. If you suddenly find that your Gaussian PSF becomes anisotropic, you have to take a hard look at your implementation. The only reason that the Gaussian (with a small standard deviation) is starting to develop "directional factors" is that you are undersampling the Gaussian beyond repair.<br /><br />The usual solution to this problem is to increase the resolution of your synthetic image. If you generate your synthetic image at, say, 10x the scale, all your Gaussian PSFs will be reasonably wide in terms of samples in the oversampled image. For MTF measurement you do not even have to downsize your oversampled image before applying the slanted edge method. All you have to do is to change the scale of your resolution axis in your MTF plot. That way you do not even have to worry about the MTF of the downsampling kernel.<br /><br />There are several methods that produce even higher quality simulated images. At this point I will plug my own work: see <a href="http://mtfmapper.blogspot.com/2012/04/accurate-method-for-rendering-synthetic.html" target="_blank">this post</a> or <a href="http://www.prasa.org/proceedings/2012/prasa2012-13.pdf" target="_blank">this paper</a>. These approaches rely on importance sampling (for diffraction PSFs) or direct numerical integration of the Gaussian in two dimensions; both these approaches avoid any issues with downsampling and do not sample on a regular grid. 
These methods are implemented in <span style="font-family: "Courier New", Courier, monospace;">mtf_generate_rectangle.exe<span style="font-family: inherit;"></span></span>, which is part of the MTF Mapper package.</li></ol><br /><h3>Minor technical issues</h3><ol><li>On page 1 the Imatest paper states that the ISO 12233:2014 standard lowered the edge contrast "because with high contrast the measurement becomes unstable". This statement is quite vague, and appears to contradict the results presented in Figure 8, which shows no degradation of performance at high contrast, even in the presence of noise.<br /><br />I would offer some alternative explanations: the ISO12233 standard is often applied to images compressed with DCT-based quantization methods, such as JPEG. A high-contrast edge typically shows up with a large-magnitude DCT coefficient at higher frequencies; exactly the frequencies that are more strongly quantized, hence the well-known appearance of "mosquito noise" in JPEG images. A lower contrast edge will reduce the relative energy at higher frequencies, thus the stronger quantization of high frequencies will have a proportionately smaller effect. I am quite tempted to go and test this theory right away.<br /><br />Another explanation, one that is covered in some depth on Imatest's own website, is of course the potential intensity clipping that may result from incorrect exposure. Keeping the edge contrast in a more manageable range reduces the chance of clipping. Another more subtle reason is that a lower contrast chart allows more headroom for sharpening without clipping. By this I mean that sharpening (of the unsharp masking type) usually results in some "ringing" which manifests as overshoot (on the bright side of the edge) and undershoot (on the dark side of the edge). 
If chart contrast were so high that the overshoot of overzealous sharpening would be clipped, then it would be harder to measure (and observe) the extent of oversharpening.</li><li>The noise model employed is a little basic. Strictly speaking, the standard deviation of the additive Gaussian white noise should be signal dependent; this is a more accurate model of photon shot noise, and is trivial to implement. I have not done a systematic study of the effects of noise simulation models on the slanted edge method, but in 2015 one really should simulate photon shot noise as the dominant component of additive noise.</li><li>Page 6 of the Imatest paper states that "There is a problem with this 5 degree angle that has not yet been addressed in any standard or paper." All I can say to this is that Kohm's paper has presented an alternative solution to this problem that really should be recognized in the Imatest paper.</li></ol><h3>Summary</h3>Other than the unforgivable error in the generation of the simulated images, a fair effort, but more time spent on the literature, especially papers like Kohm's, would have changed the tone of the paper considerably, which in turn would have made it more credible.<br /> <br /><ol></ol>Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com1tag:blogger.com,1999:blog-6555460465813582847.post-34230501438321847342015-07-05T05:38:00.000-07:002015-07-31T04:40:13.430-07:00Taking on Imatest<br />After having worked on MTF Mapper for almost five years now, I have decided that it is time to go head-to-head with Imatest. I downloaded a trial version of Imatest 4.1.12 to face off against MTF Mapper 0.4.18.<br /><br />For the purpose of this comparison I decided to generate synthetic images using <span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span>. 
This allows me to use a set of images rendered using an accurately known PSF, meaning that we know exactly what the actual MTF50 value should be for those images. I decided to render a test chart conforming to the SFRPlus format, since that allows me to extract a fair number of edges for each test case. The approximately-sfrplus-chart looks like this:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/--cXVvDfj9fg/VZj5Tp-avGI/AAAAAAAAA9A/qPgtIijihNA/s1600/sfr_m_25_5_2.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="286" src="http://3.bp.blogspot.com/--cXVvDfj9fg/VZj5Tp-avGI/AAAAAAAAA9A/qPgtIijihNA/s400/sfr_m_25_5_2.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 1: SFRPlus style chart with an MTF50 value of 0.35 cycles/pixel</td></tr></tbody></table><span style="font-family: Times, "Times New Roman", serif;"><span style="font-family: inherit;"> </span></span>SFRPlus was quite happy to automatically identify and extract regions of interest (ROIs) over all the relevant edges from this image. MTF Mapper can also extract edges from this image automatically. One notable difference is that SFRPlus includes the edges of the squares that overlap with the black bars at the top and bottom of the images, whereas MTF Mapper only considers edges that form part of a complete square. 
To keep the comparison fair, I discarded the results from the top and bottom rows of squares (as extracted by SFRPlus), leaving us with 19*4 edges per image (SFRPlus ignores the third square in the middle column).<br /><br /><h3>Validating the test images</h3>(This section can be skipped if you trust my methodology)<br /><br />Although I have written quite a few posts here on this blog regarding the algorithms used by <span style="font-family: Times, "Times New Roman", serif;"><span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span><span style="font-family: inherit;"> to</span> render synthetic images, I will now show from first principles that the synthetic images truly have the claimed point spread functions (PSFs), and thus known MTFs.</span><br /><span style="font-family: Times, "Times New Roman", serif;"><br /></span><span style="font-family: Times, "Times New Roman", serif;">I rendered the synthetic image using a command like this:</span><br /><br /><span style="font-family: "Helvetica Neue",Arial,Helvetica,sans-serif;">mtf_generate_rectangle.exe --b16 --pattern-noise 0.0085 --read-noise 2.5 --adc-gain 0.641 --adc-depth 12 -c 0.33 --target-poly sfrchart.txt -m 0.35 -p gaussian-sampled --airy-samples 100</span><br /><span style="font-family: inherit;"><br /></span><span style="font-family: inherit;">This particular command renders the SFRPlus chart using a Gaussian PSF with an MTF50 value of 0.35. 
Reasonably realistic sensor noise is simulated, including photon shot noise, which implies that the noise standard deviation scales as the square root of the signal level; in plain English: we have more noise in bright parts of the image.</span><br /><br /><span style="font-family: "Helvetica Neue",Arial,Helvetica,sans-serif;"><span style="font-family: inherit;">I ran a version of <span style="font-family: "Courier New",Courier,monospace;">mtf_mapper<span style="font-family: inherit;"> that</span></span> dumped the raw samples extracted from the image (normally used to construct the binned ESF); I specified the edge angle as 5 degrees to remove all possible sources of error. NB: the "raw_esf_values.txt" file produced by MTF Mapper contains the binned ESF, and is not suitable for this particular experiment because of the smoothing inherent in the binning.</span></span><br /><span style="font-family: "Helvetica Neue",Arial,Helvetica,sans-serif;"><br /></span><span style="font-family: inherit;">Given that I specified an MTF50 value of 0.35 cycles per pixel, we know that the standard deviation of the true PSF should be 0.5354018 pixels [ sqrt( log(0.5)/(-2*pi*pi*0.35*0.35) ) ]. From this we can calculate the expected analytical ESF, which is simply erf(x/sigma)*(upper-lower) + lower, where erf() here denotes the integral of the unit Gaussian, i.e., the cumulative distribution function of the standard normal distribution (note that this differs from the usual "error function" convention by a scaling of the argument and of the output range). The values upper and lower merely represent the mean white and black levels, which were defined as lower = 65536*0.33/2 and upper = 65536 - lower. With these values, I can now plot the expected analytical ESF along with the raw ESF samples dumped by MTF Mapper. 
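These two formulas are easy to check numerically. A quick sketch in Python (note that scipy's erf is the conventional error function, so the unit-Gaussian CDF is written out explicitly; the constants match the values quoted above):

```python
import numpy as np
from scipy.special import erf

def mtf50_to_sigma(mtf50):
    """Std. dev. of a Gaussian PSF whose MTF, exp(-2*pi^2*sigma^2*f^2),
    equals 0.5 at frequency mtf50 (in cycles/pixel)."""
    return np.sqrt(np.log(0.5) / (-2.0 * np.pi ** 2 * mtf50 ** 2))

def analytical_esf(x, sigma, lower, upper):
    """Expected ESF: the standard normal CDF, scaled to [lower, upper].
    Phi(z) = 0.5*(1 + erf(z/sqrt(2))) is the CDF of the unit Gaussian."""
    phi = 0.5 * (1.0 + erf(x / (sigma * np.sqrt(2.0))))
    return lower + (upper - lower) * phi

sigma = mtf50_to_sigma(0.35)    # ~0.5354018 pixels, as quoted in the text
lower = 65536 * 0.33 / 2.0      # mean black level
upper = 65536 - lower           # mean white level
```

At x = 0 (on the edge) this yields the midtone level 32768, and far from the edge it converges to the black and white levels, as it should.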
</span><br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-lQJo99iKHSQ/VZkAIOaYv_I/AAAAAAAAA9Q/5iLTIVSiIZg/s1600/esf_with_erf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://4.bp.blogspot.com/-lQJo99iKHSQ/VZkAIOaYv_I/AAAAAAAAA9Q/5iLTIVSiIZg/s400/esf_with_erf.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 2: Raw ESF samples along with analytical ESF</td></tr></tbody></table><span style="font-family: inherit;"><span style="font-family: inherit;">I should mention that I shifted the analytical ESF along the "d" axis to compensate for any residual bias in MTF Mapper's edge position estimate. We can see that the overall shape of the analytical ESF appears to line up quite well with the ESF samples extracted from the synthetic image. Next we look at the difference between the two curves:</span></span><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-dKdr9HVs8cI/VZkA-1G82aI/AAAAAAAAA9c/yR4nRTokZvk/s1600/esf_minus_erf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://1.bp.blogspot.com/-dKdr9HVs8cI/VZkA-1G82aI/AAAAAAAAA9c/yR4nRTokZvk/s400/esf_minus_erf.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 3: ESF difference</td></tr></tbody></table><span style="font-family: inherit;"><span style="font-family: inherit;"> </span><br />We see two things in Figure 3: The mean difference appears to be close to zero, and the noise magnitude appears to increase with increasing signal levels (to the right). 
The increase in noise was expected, since that follows from the photon shot noise model used to simulate sensor noise. We can normalize the noise by dividing the ESF difference (noise) by the square root of the analytical ESF, which gives us this plot:</span><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-r46oLhYUoKw/VZkBtsouWjI/AAAAAAAAA9k/hzwSY0ofEes/s1600/esf_residuals.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://4.bp.blogspot.com/-r46oLhYUoKw/VZkBtsouWjI/AAAAAAAAA9k/hzwSY0ofEes/s400/esf_residuals.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 4: Normalised ESF difference</td></tr></tbody></table><span style="font-family: inherit;">This normalization appears to keep the noise standard deviation constant, which would be consistent with garden-variety additive Gaussian white noise. 
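This normalization step can be demonstrated with a toy simulation (illustrative values only, not MTF Mapper's actual code): if the noise standard deviation equals the square root of the noise-free signal, then dividing the residuals by the square root of the analytical signal level makes the noise level uniform across the edge.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shot-noise model: noise standard deviation = sqrt(signal level),
# mimicking the photon shot noise used in the synthetic images.
signal = np.linspace(5000.0, 55000.0, 4000)   # stand-in for the analytical ESF
noisy = signal + rng.normal(0.0, 1.0, signal.size) * np.sqrt(signal)

residual = noisy - signal                     # the "ESF difference"
normalized = residual / np.sqrt(signal)       # divide by sqrt of analytical ESF

# Before normalization the noise grows with the signal; afterwards it is flat.
dark_std = residual[signal < 20000.0].std()
bright_std = residual[signal > 40000.0].std()
norm_dark = normalized[signal < 20000.0].std()
norm_bright = normalized[signal > 40000.0].std()
```

The raw residuals are visibly noisier on the bright side, while the normalized residuals have (approximately) unit standard deviation everywhere.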
The density estimate of the normalized noise looks Gaussian:</span><br /><div class="separator" style="clear: both; text-align: center;"></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-JhrHDysiChs/VZkCxUnD92I/AAAAAAAAA90/IKQseSXEObU/s1600/esf_residual_density.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://1.bp.blogspot.com/-JhrHDysiChs/VZkCxUnD92I/AAAAAAAAA90/IKQseSXEObU/s400/esf_residual_density.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 5: Normalized ESF difference density</td></tr></tbody></table><span style="font-family: inherit;">Running the normalized residuals through the Shapiro-Wilk normality test gives us a p-value of 0.03722 over our 3285 samples. That is bad news, because it means our data is non-Gaussian at a 5% significance level. <strike>It is, however, Gaussian at a 10% confidence level.</strike> Correction: Normality of the normalized residuals cannot be rejected at a 3% (or 2.5%, or 1%) significance level. The qqnorm() plot is pretty straight too, which tells us it is more likely that the Shapiro-Wilk test is negatively affected by the large number of samples, than that the residuals are truly not Gaussian. </span><br /><br /><span style="font-family: inherit;">Now that we have confirmed that the distribution of the residuals is Gaussian, we can fit a line through them. This line comes out with a slope of -0.005765, which means that our normalized residuals are fairly flat. 
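As an aside, this behaviour of the Shapiro-Wilk test is easy to reproduce. The sketch below (using scipy rather than R, purely for illustration) runs the test on 3285 genuinely Gaussian values, and also computes a qqnorm-style straightness measure: the correlation between the sorted sample and the corresponding normal quantiles.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 3285                                # same sample count as in the text
residuals = rng.normal(0.0, 1.0, n)     # stand-in for the normalized residuals

# The W statistic stays extremely close to 1 for Gaussian data, even when
# the p-value bounces around; with thousands of samples the test has the
# power to flag practically negligible departures from normality.
w_stat, p_value = stats.shapiro(residuals)

# qqnorm-style check: correlation of sorted residuals against normal
# quantiles; a value close to 1 corresponds to a straight Q-Q plot.
theoretical = stats.norm.ppf((np.arange(1, n + 1) - 0.5) / n)
qq_corr = np.corrcoef(np.sort(residuals), theoretical)[0, 1]
```

For data this close to Gaussian, the Q-Q correlation is a more robust summary than the p-value alone, which is the argument made above.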
Lastly, we can perform some LOESS smoothing on the normalized residuals:</span><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-77yLOFB05ec/VZkF480ZzEI/AAAAAAAAA-A/EvpAx4iCvos/s1600/esf_residual_loess.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://3.bp.blogspot.com/-77yLOFB05ec/VZkF480ZzEI/AAAAAAAAA-A/EvpAx4iCvos/s400/esf_residual_loess.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 6: LOESS fit on normalized ESF difference</td></tr></tbody></table><span style="font-family: inherit;">Again, we can see that the LOESS-smoothed values oscillate around 0, i.e., there is no trend in the difference between the analytical ESF and the ESF measured from our synthetic image.</span><br /><br /><span style="font-family: inherit;">The mean signal-to-noise ratio in the bright regions of the images comes out at around 15dB; because we compute the LSF (or PSF if you prefer) from the derivative of the ESF, the bright parts of the image are representative of the worst-case noise. Alternatively, we can say that the noise is quite similar to that produced by a Nikon D7000 at ISO400, for an SFRPlus test chart at a 5:1 contrast ratio.</span><br /><span style="font-family: inherit;"><br /></span><span style="font-family: inherit;">I have shown that there is no systematic difference between the ESF extracted from a synthetic image and the expected analytical ESF. The simulated noise also behaves in the way that we would expect from properties of the simulated sensor. Based on these observations, we can safely assume that the synthetic images have the desired PSF, i.e., the simulated MTF50 values are spot-on. 
(In previous posts I examined the properties of the simulated ESF values in the absence of noise, but here I chose to demonstrate the PSF properties directly on the actual images used in the Imatest vs MTF Mapper comparison).</span><br /><span style="font-family: inherit;"><br /></span><br /><h3><span style="font-family: inherit;">The results</span></h3><span style="font-family: inherit;">The results presented here were obtained by running Imatest 4.1.12 and MTF Mapper 0.4.18 on <a href="http://sourceforge.net/projects/mtfmapper/files/simulated_sfrplus_charts.zip/download" target="_blank">these</a> images (about 100MB). SFRPlus (from Imatest, of course) was configured to enable the LSF correction that was recently introduced. Other than that, all settings were left to defaults, including leaving the apodization option enabled. I turned off the "quick mtf" option, although I did not check to see whether this affected the results. After a run of SFRPlus, the "save data" option was used to store the results, after which the "MTF50" column values were extracted, discarding the top and bottom row edges as explained before.</span><br /><span style="font-family: inherit;"><br /></span><span style="font-family: inherit;">MTF Mapper was run using the "-t 0.5 -r" settings; the "-t 0.5" option is required to allow MTF Mapper to work with the rather low 5:1 contrast ratio. The values output to "raw_mtf_values.txt" were used as the representative MTF50 values extracted by MTF Mapper.</span><br /><span style="font-family: inherit;"><br /></span><span style="font-family: inherit;">Simulated images were produced over the MTF50 range 0.1 cycles/pixel to 0.7 cycles/pixel in increments of 0.05 cycles/pixel, with one extra data point at 0.08 cycles/pixel to represent the low end (which is quite blurry). For each MTF50 level a total of three images were simulated, each with a different seed to produce unique sensor noise. 
</span><span style="font-family: inherit;"><span style="font-family: inherit;"> This gives us 19*3*4 = 228 samples at each MTF50 level. </span> </span><br /><br /><span style="font-family: inherit;">As in previous posts, the results will be evaluated in two ways: bias and variance. The first plots to consider illustrate both bias and variance simultaneously, although it is somewhat harder to compare the variance of the methods on these plots.</span><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-gl5PjYesU9I/VZkPb0cqr-I/AAAAAAAAA-Q/LzGbbktVqn4/s1600/imatest_boxplot.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://2.bp.blogspot.com/-gl5PjYesU9I/VZkPb0cqr-I/AAAAAAAAA-Q/LzGbbktVqn4/s400/imatest_boxplot.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 7: Imatest relative error boxplot</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-nvEagE8ibXk/VZkPociYILI/AAAAAAAAA-Y/YeUQ0RDv1Sc/s1600/mapper_boxplot.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://4.bp.blogspot.com/-nvEagE8ibXk/VZkPociYILI/AAAAAAAAA-Y/YeUQ0RDv1Sc/s400/mapper_boxplot.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 8: MTF Mapper relative error boxplot</td></tr></tbody></table><span style="font-family: inherit;">In figures 7 and 8, the relative difference (or error) is calculated as 100*(measured_mtf50 - expected_mtf50)/expected_mtf50. 
It is clear that Imatest 4.1.12 underestimates MTF50 values slightly for MTF50 values above 0.2 cycles/pixel; this pattern is typical of what one would expect if the MTF curve is not adequately corrected for the low-pass filtering effect of the ESF binning step (see <a href="http://mtfmapper.blogspot.com/2015/06/improved-apodization-and-bias-correction.html" target="_blank">this post</a></span>). MTF Mapper corrects for this low-pass filtering effect, producing no clear trend in median MTF50 error over the range considered. We can plot the median measured MTF50 relative error for Imatest and MTF Mapper on the same plot:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-1FSH4MeT9GA/VboyvdjYCVI/AAAAAAAAA_Y/45eAX9_Izas/s1600/median_comparison.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://2.bp.blogspot.com/-1FSH4MeT9GA/VboyvdjYCVI/AAAAAAAAA_Y/45eAX9_Izas/s400/median_comparison.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 9: Median relative MTF50 error comparison</td></tr></tbody></table><br /><div class="separator" style="clear: both; text-align: center;"></div>Figure 9 shows us that the Imatest bias is not all that severe; it remains below 2% over the range of MTF50 values we are likely to encounter in actual photos. (NB: Up to July 30, 2015, this figure had Imatest and MTF Mapper swapped around).<br /><br />So that illustrates bias. 
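For reference, the summary statistics behind Figures 7 through 11 are straightforward to compute. Here is a sketch with made-up measurements (the 228 samples per level matches the experiment; the measured values themselves are synthetic, purely for illustration):

```python
import numpy as np

def relative_error_percent(measured, expected):
    """Relative MTF50 error as plotted: 100*(measured - expected)/expected."""
    measured = np.asarray(measured, dtype=float)
    return 100.0 * (measured - expected) / expected

rng = np.random.default_rng(1)
expected_mtf50 = 0.35
# Pretend measurements: 228 edges at one MTF50 level, ~1% relative scatter.
measured = expected_mtf50 * (1.0 + rng.normal(0.0, 0.01, 228))

err = relative_error_percent(measured, expected_mtf50)
bias = np.median(err)                     # bias, per Figure 9
spread = np.std(err)                      # variance, per Figure 10
p95 = np.percentile(np.abs(err), 95.0)    # combined metric, per Figure 11
```

Taking the 95th percentile of the absolute relative error is what folds bias and variance into the single accuracy number discussed below.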
To measure variance we can plot the standard deviation at each MTF50 level:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-1JmjSGVPuD4/VZkXqxBMmhI/AAAAAAAAA-w/iw9BGOaA74Q/s1600/sd_comparison.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://4.bp.blogspot.com/-1JmjSGVPuD4/VZkXqxBMmhI/AAAAAAAAA-w/iw9BGOaA74Q/s400/sd_comparison.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 10: Standard deviation of relative MTF50 error</td></tr></tbody></table>Other than at very low MTF50 values (say, 0.08 cycles/pixel and lower), it would appear that MTF Mapper 0.4.18 produces more consistent MTF50 measurements than Imatest 4.1.12.<br /><br />A final performance metric to consider is the 95th percentile of relative MTF50 error. By computing this value on the absolute value of the relative error, it combines both variance and bias into a single measurement that tells us how close our measurements will be to the true MTF50 value, in 95% of measurements. 
Here is the plot:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-jqbLg3dY--o/VZkYvQ4A8kI/AAAAAAAAA-8/O-nXOddCXjk/s1600/p95_comparison.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://1.bp.blogspot.com/-jqbLg3dY--o/VZkYvQ4A8kI/AAAAAAAAA-8/O-nXOddCXjk/s400/p95_comparison.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 11: 95th percentile of MTF50 error</td></tr></tbody></table>Of all the performance metrics presented here, I consider Figure 11 to be the most practical measure of accuracy.<br /><br /><h3>Conclusion</h3>It took quite a bit of effort on my part to improve MTF Mapper to the point where it produces more accurate results than Imatest. There are some other aspects I have not touched on here, such as how accuracy varies with edge orientation. For now, I will say that MTF Mapper produces accurate results at known critical angles, whereas Imatest appears to fail at an angle of 26.565 degrees. Given that Imatest never claimed to work well at angles other than 5 degrees, I will let that one slide.<br /><br />I have also not included any comparisons to other freely available slanted edge implementations (sfrmat, Quick MTF, the slanted edge ImageJ plugin, mitreSFR). I can tell you from informal testing that most of them appear to perform significantly worse than Imatest, mostly because none of those implementations appear to include the finite-difference-derivative correction. Maybe I will back this opinion up with some more detailed results in future.<br /><br />So where does that leave your typical Imatest user? Well, the difference in accuracy between Imatest and MTF Mapper is relatively small. 
What I mean by that is that these results do not imply that Imatest users have to switch over to using MTF Mapper, rather, these results show that MTF Mapper users can trust their measurements to be at least as good as those obtained by Imatest. And, of course, MTF Mapper is free, and the source code is available.<br /><br />There are some fairly nifty features that I noticed in SFRPlus during this experiment. It appears that SFRPlus will perform lens correction automatically, meaning that radial distortion curvature can be corrected for on the fly. MTF Mapper currently limits the length of the edge it will include in the analysis as a means of avoiding the effects of strong radial distortion. But now that I am aware of this feature, I think it would be relatively straightforward to include lens distortion correction in MTF Mapper. So little time, so many neat ideas to play with ...<br /><br /><br />Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com0tag:blogger.com,1999:blog-6555460465813582847.post-24511458518265566812015-06-24T04:29:00.000-07:002015-06-24T04:29:26.863-07:00Truncation of the ESFA really quick post to highlight one specific aspect: what happens to the MTF produced by the slanted edge method if the ESF is truncated.<br /><br />To recap: The slanted edge method projects image intensity values onto the normal of the edge to produce the Edge Spread Function (ESF). Any practical implementation has to place an upper limit on the maximum distance that pixels can be from the edge (as measured along the edge normal). 
MTF Mapper, for example, only considers pixels up to a distance of 16 pixels from the edge.<br /><br />Looking back at the Airy pattern that results from the diffraction of light through a circular aperture we can see that the jinc<sup>2</sup> function has infinite support, in other words, it tapers off toward zero but never actually reaches zero on any finite domain.<br /><br />We also know that the effective width of the Airy pattern increases with increasing f-number. Herein lies the problem: a slanted edge implementation that truncates the ESF will necessarily discard part of the Airy pattern. The discarded part is of course the samples furthest from the edge, and we know that those samples tend to contribute more to the lower frequencies in the MTF.<br /><br />Simulating a slanted edge image using the Airy + photosite aperture model, with an aperture of f/8, light at 550 nm, a 100% fill-factor square photosite aperture, and 4.886 micron photosite pitch (something approximating the D810), we can investigate the impact of the truncation distance on the MTF as measured by the slanted edge method. Here goes:<br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-mlj_H6n9v9c/VYqQIiPzBjI/AAAAAAAAA8Y/GEWgfBECqfo/s1600/airy_esf_truncation.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://4.bp.blogspot.com/-mlj_H6n9v9c/VYqQIiPzBjI/AAAAAAAAA8Y/GEWgfBECqfo/s400/airy_esf_truncation.png" width="400" /></a></div>The green dotted line represents the expected MTF curve (from our simple model). I have zoomed in on the low-frequency region, but we can see that both the truncated MTF measurements (red and black curves) tend to follow the green curve more closely after about 0.10 cycles per pixel. 
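The mechanism behind the low-frequency overshoot can be reproduced with a toy one-dimensional example (an illustrative heavy-tailed LSF, not the actual Airy-derived LSF): chopping off the tails removes mostly low-frequency energy, and because the MTF is normalized to 1 at DC, the surviving low frequencies are pushed up above the true curve.

```python
import numpy as np

def mtf_from_lsf(lsf):
    """MTF as the magnitude of the DFT of the LSF, normalized to 1 at DC."""
    m = np.abs(np.fft.rfft(lsf))
    return m / m[0]

# Toy LSF with slowly decaying (Lorentzian) tails, at 8 samples per pixel.
x = np.arange(-64.0, 64.0, 0.125)
lsf = 1.0 / (1.0 + (x / 2.0) ** 2)

def truncated_mtf(window):
    """Discard all LSF samples with |x| > window (in pixels), as a slanted
    edge implementation with a finite truncation window effectively does."""
    return mtf_from_lsf(np.where(np.abs(x) <= window, lsf, 0.0))

full = mtf_from_lsf(lsf)       # reference: (almost) untruncated
narrow = truncated_mtf(8.0)    # aggressive truncation
wide = truncated_mtf(16.0)     # 16-pixel window

k = 3                          # a low, nonzero frequency bin (~0.023 cyc/pixel)
```

At the low-frequency bin k, the narrow window overshoots the reference curve more than the wide window does, which is exactly the behaviour of the red and black curves above.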
We also note that both the red and black curves contain a few points that are clearly above the green curve between 0 and 0.05 cycles per pixel. It is physically impossible for the measured MTF to exceed the diffraction MTF (blue curve), so we can state with confidence that this is a measurement error.<br /><br />If we compare the red and the black curves we can see that a wider truncation window (red curve) reduces the overshoot at low frequencies. If we had the opportunity to use an even wider truncation window, we would be able to reduce the overshoot to even lower levels.<br /><br />Lastly, if we introduce <a href="http://mtfmapper.blogspot.com/2015/06/improved-apodization-and-bias-correction.html" target="_blank">apodization</a> into the mix we are compounding the problem even further by attenuating the edges of the PSF. This leads to even greater overshoot (at low frequencies) in our measured MTF curve.<br /><br />Bottom line: The slanted edge method is constrained by practical limitations, most notably the desire to have a finite truncation window, and the desire to reduce the impact of image noise using apodization of the PSF. These constraints lead to overshoot in the lowest frequencies of the measured MTF. It may be possible to apply an empirical correction to minimize the overshoot, but only at the cost of making strong assumptions regarding the shape of the MTF, which is best avoided.Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com1tag:blogger.com,1999:blog-6555460465813582847.post-58010420364769150892015-06-23T08:20:00.000-07:002015-06-23T08:20:38.462-07:00AnisotropyIn my post on "critical angles" I mentioned that there was one other factor to consider when looking at the influence of edge orientation on slanted edge analysis. I will refer to that phenomenon as the influence of <i>anisotropic</i> point spread functions. 
In this context, I use the term anisotropic to refer to point spread functions that are not radially symmetric.<br /><br />The simplest example of an anisotropic PSF is to consider just a square photosite aperture, without any lens aperture diffraction.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-QEn4Py4OURs/VYkwpwEhkgI/AAAAAAAAA58/55iD8IQyNZo/s1600/square_integration.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="183" src="http://2.bp.blogspot.com/-QEn4Py4OURs/VYkwpwEhkgI/AAAAAAAAA58/55iD8IQyNZo/s400/square_integration.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"> Figure 1: Edge orientation relative to photosite aperture</td></tr></tbody></table><br /><div class="separator" style="clear: both; text-align: center;"></div>In figure 1 we can see the interaction between our slanted edge (shown in blue here) and the photosite aperture (orange). If the value <i>t</i> represents the distance from the centre of our photosite aperture to the right edge of our slanted edge (rectangle or step edge), then we can consider the overlapping area between the two as a function of t. The interesting range of values for <i>t</i> would be between -√0.5 and √0.5, if we assume the photosite is a square with sides of length 1. 
Plotting this overlapping area as a function of <i>t</i> gives us Figure 2:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-STpf-DM37pE/VYkxiEuQUHI/AAAAAAAAA6I/itPKfjon-4g/s1600/square_integration_area.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://4.bp.blogspot.com/-STpf-DM37pE/VYkxiEuQUHI/AAAAAAAAA6I/itPKfjon-4g/s400/square_integration_area.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 2: Fraction of square photosite covered by slanted edge as a function of edge distance to photosite centre</td></tr></tbody></table>When the edge orientation angle theta is 0 degrees, then we obtain a linear function, which is what one would expect. If the edge is at a 45 degree angle (as shown in the right panel of Figure 1), then we obtain the other extreme. Angles between 0 and 45 degrees produce a curve that is somewhere in between these extremes.<br /><br />What can we learn from these curves? Well, we can see that an edge orientation of 45 degrees will overlap with the photosite square from -√0.5 to √0.5, whereas the 0 degrees edge orientation only results in overlap between -0.5 and 0.5. From this we can infer that the square appears wider when approached by an edge with a 45 degree orientation. We also know that a square photosite acts as a low-pass filter, in the sense that the image captured by our sensor is the convolution of this low-pass filter and the analytical model of our scene. 
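The overlap curves of Figure 2 are easy to reproduce numerically. The sketch below samples the unit square on a grid and counts the fraction of sample points on the near side of the edge; a brute-force stand-in for the analytic piecewise expressions:

```python
import numpy as np

def overlap_area(t, theta_deg, n=801):
    """Fraction of a unit square (centred on the origin) lying on the near
    side of a straight edge at signed distance t from the centre, where
    theta is the angle between the edge normal and the square's side."""
    theta = np.radians(theta_deg)
    g = (np.arange(n) + 0.5) / n - 0.5          # grid over the photosite
    xx, yy = np.meshgrid(g, g)
    return float(np.mean(xx * np.cos(theta) + yy * np.sin(theta) <= t))

# At t = 0 both orientations cover exactly half the square, but the 45
# degree edge starts overlapping sooner: its support runs over +-sqrt(0.5).
half0 = overlap_area(0.0, 0.0)
half45 = overlap_area(0.0, 45.0)
early45 = overlap_area(-0.6, 45.0)   # corner already clipped: (sqrt(0.5)-0.6)^2
early0 = overlap_area(-0.6, 0.0)     # no overlap yet at 0 degrees
```

Near the corner the 45 degree overlap grows quadratically, (t + √0.5)², which is the curved onset visible in Figure 2, while the 0 degree overlap is exactly linear.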
This might lead one to believe that the 45 degree case would result in a stronger low-pass filter, because it is clearly "wider" than the 0 degree case.<br /><br />We can plot the derivative of the curves from Figure 2:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-EmA20bdL9_I/VYlMPVURH8I/AAAAAAAAA6o/fE1ro30LlHo/s1600/square_integration_width.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://1.bp.blogspot.com/-EmA20bdL9_I/VYlMPVURH8I/AAAAAAAAA6o/fE1ro30LlHo/s400/square_integration_width.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 3: Instantaneous width of PSF </td></tr></tbody></table><br />The 0 degree case is easy to visualize with the help of the left panel of Figure 1: clearly, the width of the photosite square (measured along the step edge) is constant. The 45 degree case is also readily visualized by noting that we cross the widest part of the photosite square when t=0 (right panel of Figure 1); this nicely corresponds to the peak instantaneous width of √2 in Figure 3.<br /><br />We can interpret the curve in Figure 3 as a weighting function, i.e., the relative contribution to the convolution of the edge and the photosite aperture at distance <i>t </i>from the centre of the photosite aperture. Looking at the problem this way reveals a new angle: the 45 degree case presents a fair amount of its total weight located close to t=0. Roughly 50.6% of its weight is located in the part where it is wider than the 0 degree case, corresponding to the central region of Figure 3 where the red curve is above the gray curve. In contrast, only about 8.6% of the weight of the 45 degree curve is located in the two tail ends (t < -0.5 and t > 0.5). 
If we compare this to the 0 degree case, we obtain 42% in the centre (area under gray curve where the red curve is above the gray curve), and of course 0% in the tails.<br /><br />This is a rather unexpected turn of events, since it implies that even though the 45 degree case starts overlapping with the edge sooner (the regions -√0.5 < t < -0.5 and 0.5 < t < √0.5), it represents only a small fraction of the total interaction with the edge. Instead of the 45 degree case being a stronger low-pass filter than the 0 degree case, we expect the opposite because the 45 degree case has roughly 20% (50.6/42) more of its weight located close to t=0.<br /><br />We appear to have two mildly conflicting views:<br />a) the 45 degree case is "wider" at its widest point, thus it should be a stronger low-pass filter than the 0 degree case, and<br />b) more of the weight of the 45 degree case is close to the centre, hence it should present a <i>weaker </i>low-pass filter than the 0 degree case.<br /><br />I am betting on outcome b), mostly because I already know what the empirical results will tell us .... <br /><br /><h3>Empirical results for square photosites (no diffraction)</h3>The prediction favoured by outcome b) in the previous section tells us that we should expect MTF50 values to increase as we progress from a relative edge orientation of 0 degrees through to 45 degrees. Simulations were performed in the absence of noise, using 30 repetitions over sub-pixel shifts. 
Keep in mind that the MTF50 value of a square photosite aperture is about 0.6033 cycles per pixel, which is quite high.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-t8EO9TkydE8/VYlcT5cvEmI/AAAAAAAAA64/3_1wzwHZY1w/s1600/pure_square_psf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://1.bp.blogspot.com/-t8EO9TkydE8/VYlcT5cvEmI/AAAAAAAAA64/3_1wzwHZY1w/s400/pure_square_psf.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 4: Square (box) PSF relative MTF50 error as a function of edge orientation</td></tr></tbody></table>We can see that the MTF50 overestimation steadily increases to about 5% as we approach 45 degrees.<br /><br />Just to check, let us examine an isotropic PSF: a pure Gaussian without any photosite aperture simulation. This should yield a purely Gaussian MTF. Same simulation, but with the radially symmetric Gaussian PSF:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-aJvhjbr2Jhg/VYlevkjXAEI/AAAAAAAAA7E/bhdIWZOfHm0/s1600/pure_gaussian_psf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://4.bp.blogspot.com/-aJvhjbr2Jhg/VYlevkjXAEI/AAAAAAAAA7E/bhdIWZOfHm0/s400/pure_gaussian_psf.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 5: Gaussian PSF relative MTF50 error as a function of edge orientation</td></tr></tbody></table>Other than a bit of a glitch at 2 degrees producing a few outliers, we see a fairly flat median MTF50 error with the Gaussian PSF. 
No systematically increasing MTF50 error with increasing angle appears.<br /><br /><h3>Somewhat real world: squares plus diffraction</h3>We have seen that a box PSF (without diffraction) produces strong anisotropy, and that a Gaussian PSF (without photosite aperture) produces no noticeable anisotropy. Using a PSF consisting of an Airy pattern convolved with a square photosite aperture should put us somewhere in the middle of the anisotropy scale.<br /><br />Simulations were repeated using a simulated aperture at f/2.8, light at 550 nm, a photosite pitch of 4.73 micron and no AA (OLPF) filter. These settings give an expected MTF50 value of ~ 0.504 cycles per pixel, which is slightly lower than the expected MTF50 value of ~ 0.6 cycles per pixel seen in the previous section. Accordingly, the MTF50 errors may be slightly reduced (or at least the expected variance should be reduced).<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-hl5wVl-jF9E/VYly1zzvwKI/AAAAAAAAA8E/mnSOIP9d_OE/s1600/pure_airybox_psf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://3.bp.blogspot.com/-hl5wVl-jF9E/VYly1zzvwKI/AAAAAAAAA8E/mnSOIP9d_OE/s400/pure_airybox_psf.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 6: Airy+box PSF relative MTF50 error as a function of edge orientation</td></tr></tbody></table><br />The trend is clearly visible, but appears to be only about 60% of the magnitude of the case without diffraction (about 2.5% at 44 degrees, vs about 4% without diffraction). 
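The expected MTF50 values quoted here can be reproduced from the standard analytical MTFs: a sinc for the box (photosite) aperture, and the textbook diffraction MTF of an ideal circular aperture. The following Python sketch is illustrative only (the function names are mine, and this is not MTF Mapper's code):

```python
import math

def mtf_box(f):
    # MTF of a 100% fill-factor square photosite, edge parallel to a side;
    # f in cycles/pixel.
    return 1.0 if f == 0 else abs(math.sin(math.pi * f) / (math.pi * f))

def mtf_diffraction(f, f_cutoff):
    # MTF of an ideal circular (diffraction-limited) aperture.
    s = f / f_cutoff
    if s >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(s) - s * math.sqrt(1.0 - s * s))

def mtf50(mtf, lo=1e-6, hi=1.0):
    # Bisection for the frequency at which the MTF drops to 0.5 (valid here
    # because both MTFs above are monotone decreasing on (0, 1)).
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mtf(mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

pitch_mm = 4.73e-3       # photosite pitch: 4.73 micron
wavelength_mm = 550e-6   # green light: 550 nm

def airy_box_mtf50(f_number):
    # Diffraction cut-off frequency, converted from cycles/mm to cycles/pixel.
    fc = pitch_mm / (wavelength_mm * f_number)
    return mtf50(lambda f: mtf_box(f) * mtf_diffraction(f, fc))

print(mtf50(mtf_box))       # ~0.6033 cycles/pixel: box aperture alone
print(airy_box_mtf50(2.8))  # ~0.504 cycles/pixel: Airy + box at f/2.8
print(airy_box_mtf50(4.0))  # ~0.461 cycles/pixel: Airy + box at f/4
```

Note that these closed-form MTFs describe the edge-normal-aligned (0 degree) case; the anisotropy discussed in this post is precisely the deviation from them at other orientations.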
Smaller apertures (larger f-numbers) will reduce the anisotropy as the Airy component of the PSF will start to dominate the photosite aperture PSF.<br /><br /><h3>Any practical implications?</h3>The effect of PSF anisotropy on MTF measurements is real, but appears to be relatively small. At 2.5%, do we even have to worry about it?<br /><br />Unfortunately, we have to at least be aware of this for certain types of testing and measurement. Because the error (overestimation) is systematic, it will show up in any measurement that sweeps through a range of angles, just like the MTF Mapper grid test chart, pictured here:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-DU-SrlPox4I/VYlqty0cxJI/AAAAAAAAA7U/1cPxuSqfLEM/s1600/grid_sample.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="272" src="http://2.bp.blogspot.com/-DU-SrlPox4I/VYlqty0cxJI/AAAAAAAAA7U/1cPxuSqfLEM/s400/grid_sample.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">MTF Mapper grid test chart</td></tr></tbody></table>This chart can be used to produce Sagittal/Meridional MTF50 plots across your lens/sensor/camera. The chart aims to keep one edge perpendicular to the virtual line connecting that edge to the centre of the chart, which inevitably causes some of the squares to approach a 45 degree edge orientation.<br /><br />I simulated this chart using <span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span>, using an aperture of f/4, an Airy+box PSF, green light and a photosite pitch of 4.73 micron. 
Passing this synthetic image through MTF Mapper to produce a surface plot (-s option) yields this result:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-wnvxW4u-qHc/VYls100BvRI/AAAAAAAAA7g/opdemWtkPY8/s1600/grid_image_airybox_f4.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="640" src="http://4.bp.blogspot.com/-wnvxW4u-qHc/VYls100BvRI/AAAAAAAAA7g/opdemWtkPY8/s640/grid_image_airybox_f4.png" width="466" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 7: MTF50 plot obtained from simulated rendering of the MTF Mapper grid test chart using the Airy-box MTF with a square photosite aperture</td></tr></tbody></table><br />The systematic distortion of MTF50 values is clearly visible, even though the range of values is quite small. The maximum value on the scale is 0.47, which is only about 2% higher than the expected MTF50 value of 0.46073 (at 0 degrees, of course). But the cross pattern is unmistakable. At least I have confirmed the cause.<br /><br />Pushing for even greater realism I repeated the simulation using the "rounded-square" photosite aperture that MTF Mapper provides. 
Here is the surface plot:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-UUYQyPVi2ro/VYlvyXiMu2I/AAAAAAAAA7s/GuJfXFK0RPo/s1600/grid_image_airybox_f4_rounded.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="640" src="http://3.bp.blogspot.com/-UUYQyPVi2ro/VYlvyXiMu2I/AAAAAAAAA7s/GuJfXFK0RPo/s640/grid_image_airybox_f4_rounded.png" width="466" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 8: MTF50 plot obtained from simulated rendering of the MTF Mapper grid test chart using the Airy-box MTF with a rounded-square photosite aperture</td></tr></tbody></table>We can see that the MTF50 values are slightly higher (I think the effective fill factor is slightly lower for my hand-crafted rounded corner photosite aperture), but ignore that bit for the moment. 
Instead, notice that the range is even smaller than the square aperture case (Figure 7), but the cross pattern is still visible.<br /><br />Lastly, if we use a circular photosite aperture, we get this:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-qSnndCdbvoI/VYlxhS8PVzI/AAAAAAAAA74/4SIQSLZozzE/s1600/grid_image_airybox_f4_circle.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="640" src="http://2.bp.blogspot.com/-qSnndCdbvoI/VYlxhS8PVzI/AAAAAAAAA74/4SIQSLZozzE/s640/grid_image_airybox_f4_circle.png" width="465" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"> Figure 9: MTF50 plot obtained from simulated rendering of the MTF Mapper grid test chart using the Airy-box MTF with a circular photosite aperture</td></tr></tbody></table>Other than the fact that the resulting image is appallingly ugly, we can see that the cross structure has disappeared, as expected.<br /><br /><h3>Conclusion</h3>Anisotropy is a reality that we have to deal with if we apply the slanted edge method to edges that approach a relative orientation of 45 degrees with respect to the (presumed) square photosites. The isotropy of the Airy pattern helps to attenuate the overestimation of edges approaching 45 degrees, but the systematic effect is still clearly visible in simulated images.<br /><br />I tried to construct an elegant analytical explanation for the interaction between the edge orientation and a square photosite aperture. This turned out to be harder than I expected, so I only have some interesting plots to offer for now. What did emerge from the theory is that we should not focus on the apparent width of the photosite aperture, but rather on the distribution of its weight relative to the centre. 
The somewhat startling conclusion is that we should observe higher MTF50 measurements when the orientation approaches 45 degrees.<br /><br />This was supported by the actual experiments using simulated imagery. <br /><br />So what can we do about this systematic distortion? Well, the only sound solution would be to stick to edges with a relative orientation of about 5 degrees. This is not a universal solution, though, because it makes it impossible to measure in the true Sagittal/Meridional directions. Imatest solved the problem by sticking to 5 degree angles and referring to "horizontal" and "vertical" MTF. This works well enough if you wish to measure peak astigmatism, but it does not allow you to measure MTF in the optically more appropriate sagittal/meridional directions.<br /><br />I might add a 5-degree test chart to MTF Mapper in future, just to cover all bases.Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com0tag:blogger.com,1999:blog-6555460465813582847.post-48504828297245789422015-06-16T04:04:00.003-07:002015-06-16T04:04:31.411-07:00MTF Mapper v0.4.17 Windows binary releasedMust be a slow news day.<br /><br />Anyhow, a Windows binary of the latest release of MTF Mapper, v.0.4.17, is now available on <a href="https://sourceforge.net/projects/mtfmapper/files/windows/" target="_blank">sourceforge.</a><br /><br />Version 0.4.17 does not add any new functionality as such, but it does incorporate a few improvements in measurement accuracy. 
If I broke anything, please let me know!<br /><br />Also, I finally upgraded the dcraw version included in the Windows binaries to 9.26.Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com0tag:blogger.com,1999:blog-6555460465813582847.post-16624934500403276492015-06-15T08:10:00.001-07:002016-08-30T22:50:18.594-07:00Critical anglesIt is often said that there is more than one way to skin a cat.<br /><br />Well, today I discovered an Imatest <a href="http://www.imatest.com/wp-content/uploads/2015/02/Slanted-Edge_MTF_Stability_Repeatability.pdf" target="_blank">article</a> that demonstrates just how wildly different slanted edge implementations can (and apparently do) vary. I will leave my critique of said article for another day, but I will note that this article makes reference to the "5 degrees" rule that is often seen when slanted edge measurements are performed.<br /><br />The "5 degrees" rule states that the orientation of the edge relative to the sensor's photosite grid should be approximately 5 degrees (either horizontal or vertical).<br /><br />There are two notable reasons for this: firstly, a 5 degree angle is far from the critical angles (the topic of this post), and secondly, a 5 degree angle ensures that the potential non-rotationally symmetric behaviour of the PSF is minimized. A discussion of the non-rotationally symmetric PSFs will also be postponed to a future article.<br /><br /><h3>A closer look at the slanted edge method</h3>Figure 1 illustrates how MTF Mapper constructs the oversampled edge spread function (ESF) that is the starting point of the MTF calculation. 
<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-_n8JJY37_mk/VXqkFV5yjoI/AAAAAAAAA3s/VzNWz4CIICc/s1600/se_method1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="292" src="https://4.bp.blogspot.com/-_n8JJY37_mk/VXqkFV5yjoI/AAAAAAAAA3s/VzNWz4CIICc/s400/se_method1.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 1: How the ESF is sampled</td></tr></tbody></table>We want to oversample the ESF so that we can increase the effective Nyquist limit; this is extremely important if we want to measure frequencies close to the natural Nyquist limit of 0.5 cycles per pixel of our sensor. The Shannon-Nyquist theorem shows us that we will have aliasing at frequencies above 0.5 cycles per pixel if we sample at a rate of 1 sample per pixel.<br /><br />Pushing up our sampling rate to 8x moves the Nyquist limit up to 4 cycles per pixel, which allows us to examine the behaviour of our MTF curve near 0.5 cycles per pixel without fear that we are being misled by aliasing artifacts.<br /><br />How can we increase the spatial sampling rate of our sensor? Well, we cannot change the sensor, but we can use a trick to generate a synthetic ESF. Looking at Figure 1 above we can see that the edge (represented as a black line) crosses the pixel grid in different places as we move along the edge. More importantly, pay attention to the shortest distance from each black dot (representing the centre of each pixel/photosite) to the black edge. 
Notice how this distance varies by a fraction of the pixel spacing as we move along the edge.<br /><br />Let us assume that we have a coordinate system with its origin at the centre of our top/leftmost pixel of our sensor, such that the black dots representing the pixel centres can be addressed by integer coordinates. If we take the (x, y) coordinate of a pixel near the edge, and project this coordinate onto the vector representing the edge normal (i.e., the vector perpendicular to the edge under analysis), then we obtain a real-valued scalar that represents the distance of the pixel centre from our edge. We can pair this projected distance-from-edge value with the intensity of that pixel to form a sample point on our synthetic ESF, as shown in Figure 1.<br /><br />How does this help us to oversample the ESF? Well, if we choose an appropriate edge orientation angle, say, 5 degrees, then the projected ESF points will be densely spaced. In other words, the average distance between two consecutive samples in our projected ESF will be a fraction of the pixel spacing. We can partition the projected ESF points into bins of width 0.125 pixels to produce a regularly-spaced sampled ESF with 8x oversampling.<br /><br />We know this works well for 5 degrees (because that is what everyone is doing), but what is so special about 5 degrees? To answer that, we have to slog through some elementary math.<br /><br /><h3>Spacing of projected samples</h3>Figure 2 illustrates one possible way in which we can assign integer coordinates to the pixels near the edge under analysis. 
<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-O1uy136fRcg/VXq0Nz27CGI/AAAAAAAAA38/qrpGLNng-Ag/s1600/se_method_proj.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="358" src="https://2.bp.blogspot.com/-O1uy136fRcg/VXq0Nz27CGI/AAAAAAAAA38/qrpGLNng-Ag/s400/se_method_proj.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 2: How pixel coordinates are assigned</td></tr></tbody></table>Note that we can pick an arbitrary origin (shown as (X<sub>0</sub>,Y<sub>0</sub>) in red); this just simplifies the math that will follow. This point need not fall exactly on the edge, but without loss of generality we can pretend that it does, since this means we can use integer coordinates to refer to the pixel centres of pixels near the edge.<br /><br />The orientation of the edge can be specified in degrees as measured from the horizontal, but I prefer using the slope of the line. If the angle between the edge and the horizontal is θ, then the direction perpendicular to the edge can be represented as the unit length vector (-sin(θ), cos(θ)). This would be expressed as a slope 1/Δx = tan(θ), such that Δx = 1/tan(θ).<br /><br />The normal vector (-sin(θ), cos(θ)) then becomes (-1, Δx) * 1/√(1 + Δx<sup>2</sup>). We project our pixel centres, represented as integer coordinates (x, y), onto this normal vector by computing the dot product (x, y) · (-1, Δx) * 1/√(1 + Δx<sup>2</sup>), which evaluates to d(x,y) = 1/√(1 + Δx<sup>2</sup>) * (-x + yΔx).<br /><br />The function d(x,y) thus computes the distance that the pixel located at (x, y) is from the origin (X<sub>0</sub>,Y<sub>0</sub>), which we will pretend falls on the edge; this means that d(x,y) measures the perpendicular distance of point (x, y) from the edge. 
The projected ESF point is thus [d(x,y), I(x+X<sub>0</sub>, y + Y<sub>0</sub>)], where I(i, j) denotes the intensity of the pixel located at (i, j).<br /><br />Suppose that we focus only on the subset of pixels with integer coordinates (p, q) such that 0 ≤ d(p, q) < 1. If we are to achieve 8x oversampling, then there must be at least 8 unique distance values d(p, q) in this interval. In fact, we would require these 8 points to be spread out uniformly such that at least one d(p, q) value falls in the interval [0, 0.125), one in [0.125, 0.25), and so on, such that each of the sub-intervals of length 0.125 between 0 and 1 contain at least one point.<br /><br />Consider, for example, the case where Δx = 4. This reduces d(p, q) to 1/√(1 + 4<sup>2</sup>) * (-p + 4q) = (-p + 4q)/√17. Because both p and q are integers, we can deduce that d(p, q) must be an integer multiple of 1/√17. How many integer multiples of 1/√17 can we fit in between 0 and 1? If we enumerate them, we can choose p and q such that (-p + 4q) takes on the values {0, 1, 2, 3, 4, 5, 6, ...}. But √17 = 4.123106 (and change), so if (-p + 4q) ≥ 5, then d(p, q) > 1. That leaves only the set {0, 1, 2, 3, 4}, such that the only values of 0 ≤ d(p, q) < 1 are {0, 1/√17, 2/√17, 3/√17, 4/√17}.<br /><br />Whoops! If Δx = 4, then there will only be 5 unique values of d(p, q) between 0 and 1, and we need at least 8 points between 0 and 1 to achieve 8x oversampling! The implications of the failure to achieve 8x oversampling will be covered a bit later; first we must identify the critical angles.<br /><br /><h3>Enumerating the problem angles</h3>We already know that Δx = 4 causes our 8x oversampling to fail; this corresponds to an angle of atan(1/4) = 14.036 degrees. In fact, it is fairly simple to see that for any integer value Δx, we will have Δx + 1 unique values between 0 and 1 (if we include the 0 in our count). 
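This counting argument is easy to check by brute force, using the d(p, q) formula above. A small illustrative Python sketch (my own, not MTF Mapper code):

```python
import math

def unique_distances(dx, span=50):
    # All distinct values of d(p, q) = (-p + q*dx) / sqrt(1 + dx^2) that fall
    # in [0, 1), for pixel offsets (p, q) in a modest neighbourhood of the edge.
    norm = math.sqrt(1.0 + dx * dx)
    vals = set()
    for p in range(-span, span + 1):
        for q in range(-span, span + 1):
            dist = (-p + q * dx) / norm
            if 0.0 <= dist < 1.0:
                vals.add(round(dist, 9))   # merge floating-point duplicates
    return sorted(vals)

print(len(unique_distances(4)))     # 5 values: k/sqrt(17), k = 0..4
print(len(unique_distances(8)))     # 9 values: spacing 1/sqrt(65) < 0.125, so 8x is safe
print(len(unique_distances(1.5)))   # 4 values for the fractional slope 1.5
print(len(unique_distances(1.25)))  # 7 values for the fractional slope 1.25
```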
For 8x oversampling, the spacing between d(p, q) values must be less than 0.125, which happens when we have at least 8 unique d(p, q) values between 0 and 1. For Δx = 8, we see that 1/√(1 + Δx<sup>2</sup>) = 1/√65 ≈ 0.12403.<br /><br />The angles that will lead to a failure of the 8x oversampling mechanism are thus: 45, 26.565051, 18.434949, 14.036243, 11.309932, 9.462322, and 8.130102 degrees, i.e., atan(1/Δx) for integer Δx from 1 to 7.<br /><br />Some other Δx values are also problematic: 1.5 and 2.5. These yield only 2Δx + 1 unique values (including zero). Setting Δx = 1.25 yields only 7 unique values (including zero). These fractional slopes occur at angles of 33.69007, 21.80141, and 38.65981 degrees.<br /><br />There may even be more of these problematic angles, but this is as far as I have come with this analysis. Feel free to comment if you can help me identify other values of Δx that will lead to undersampling.<br /><br /><h3>Dealing with the critical angles</h3>So what exactly happens when we do not have at least one sample every 0.125 pixels along the ESF? The corresponding bin in the resampled ESF will be missing, and leaving gaps in the resampled ESF leads to severe distortion of the MTF because those gaps show up as high-frequency transitions in the FFT.<br /><br />A workable strategy is to fall back on 4x oversampling. Another strategy is to simply interpolate from nearby bins. Both of these solutions address the primary issue (gaps in the ESF/PSF), but the residual impact of the interpolation/replacement on the final MTF is harder to mitigate.<br /><br /><h3>A new hope</h3>After my previous post (<a href="http://mtfmapper.blogspot.com/2015/06/improved-apodization-and-bias-correction.html" target="_blank">on improved apodization</a>) I started thinking about the notion of applying low-pass filters to an interpolating function applied directly to the dense ESF samples, before binning is performed. 
I realized that my explanation of the equivalence between binning and fitting an interpolating function + low-pass filtering + sampling only holds when the points are relatively uniformly distributed within each bin.<br /><br />This got me thinking that I can probably apply a low-pass filter directly to the dense ESF samples, even before binning. The implementation of this approach feels familiar; it turns out to be similar to the method I implemented to perform importance sampling when using an Airy + photosite aperture PSF (<a href="http://mtfmapper.blogspot.com/2012/11/importance-sampling-how-to-simulate.html" target="_blank">this post</a>). Before describing the new method, first consider this illustration of plain vanilla unweighted binning:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-iW106sWO4tM/VX7ITvw8PqI/AAAAAAAAA4c/joXvTjYvWxU/s1600/uniform_binning.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="300" src="https://3.bp.blogspot.com/-iW106sWO4tM/VX7ITvw8PqI/AAAAAAAAA4c/joXvTjYvWxU/s320/uniform_binning.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 3: Unweighted binning</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"></div><br />The pink boxes denote the bins, each 0.125 pixels wide; the horizontal direction depicted here corresponds to the "d" axis in Figure 2. The midpoint, or representative "x" value for each bin is indicated by the arrows and the values in blue. 
The green dots represent individual dense ESF samples --- their "y" values are not important in this diagram; the positions of the green dots merely illustrate where each dense ESF sample is located within each bin in terms of x value, and the number of dots gives a rough indication of the density of the dense ESF samples.<br /><br />If we use plain binning, then we choose as representative x value for each bin the midpoint of the bin. The representative y value is obtained as the mean of the y values of the ESF samples within that bin. In Figure 3, the rightmost bin has many ESF samples quite close to the midpoint of the bin, but almost as many ESF samples near the edge of the bin. The effect of unweighted averaging would be that the samples near the right edge of the bin will carry roughly the same weight as the samples near the middle of our bin, but clearly the samples near the middle of the bin should have had a larger weight in computing the representative value for this bin.<br /><br />A much better way of binning would be to combine the binning step with the low-pass filtering step. Instead of representing each dense ESF sample as a point, each sample becomes a small rectangle, as shown here:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-3bpvyDhPW6g/VX7IcaW4dTI/AAAAAAAAA4k/sQiHAREbxIU/s1600/weighted_binning.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="300" src="https://3.bp.blogspot.com/-3bpvyDhPW6g/VX7IcaW4dTI/AAAAAAAAA4k/sQiHAREbxIU/s320/weighted_binning.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 4: Weighted binning</td></tr></tbody></table>Now we can make the weight of each sample point proportional to the overlap of the sample's rectangle and the bin extents. 
This will allow the samples closer to the midpoint more weight, but it also allows a point to contribute to multiple adjacent bins, depending on the width of the rectangle. This smooths out the transition from one bin to the next, especially if the rectangle is wider than the bin width. (Ok, so the rectangle in the diagram is really just a 1-D interval, not a 2D shape. But the principle still holds.)<br /><br />Yes, I have just reinvented kernel density estimation. Sigh. <br /><br />Anyhow, this binning approach also makes the low-pass filtering step explicit, so if each dense ESF sample is now represented by an interval of width w pixels, then we are effectively convolving the ESF with a rect(w * x) function. We can remove the low-pass filtering effect on the MTF (calculated further down the pipeline) by dividing the MTF by sinc(0.5 * w * f), as I have shown in my previous post.<br /><br />Our binning process is beginning to look more like a proper approach to sampling: we apply a low-pass filter to our dense ESF points to remove (or at least strongly attenuate) higher frequencies, followed by choosing one representative value at the midpoint of each bin (the downsampling step). By choosing w = 0.33333 pixels, we have a fairly strong low-pass filter, but one that still has a cut-off frequency that is high enough to allow good detail at least up to 3 cycles per pixel.<br /><br />Because of the (relatively) wide low-pass filter, we could probably drop from 8x oversampling down to 4x oversampling, but I like the extra frequency resolution the 8x oversampling produces in the MTF.<br /><br /><h3>Results</h3>Simulating synthetic images with noise similar to that produced by a D7000 at ISO 800 (but a Gaussian PSF), we can investigate the benefits of the new binning method. Ideally, what we would like to see is no difference between accuracy at a 4 degree angle, and accuracy at one of the critical angles. 
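Before looking at the numbers, the overlap-weighted binning described above can be sketched as follows. This is my own illustrative implementation, not MTF Mapper's code; the kernel width of 1/3 pixel follows the w = 0.33333 choice mentioned earlier:

```python
import numpy as np

def weighted_bin_esf(d, intensity, bin_width=0.125, kernel_width=1.0 / 3.0):
    # d: projected distance-from-edge of each dense ESF sample (pixels);
    # intensity: the corresponding pixel values.
    # Each sample is treated as a 1-D interval of length kernel_width centred
    # on d; its weight in a bin equals the length of overlap with that bin.
    lo = np.floor(d.min() / bin_width) * bin_width
    n_bins = int(np.ceil((d.max() - lo) / bin_width)) + 1
    num = np.zeros(n_bins)
    den = np.zeros(n_bins)
    for x, y in zip(d, intensity):
        left, right = x - 0.5 * kernel_width, x + 0.5 * kernel_width
        b0 = max(int(np.floor((left - lo) / bin_width)), 0)
        b1 = min(int(np.floor((right - lo) / bin_width)), n_bins - 1)
        for b in range(b0, b1 + 1):
            bin_left = lo + b * bin_width
            overlap = min(right, bin_left + bin_width) - max(left, bin_left)
            if overlap > 0.0:
                num[b] += overlap * y
                den[b] += overlap
    centres = lo + (np.arange(n_bins) + 0.5) * bin_width
    esf = np.where(den > 0, num / np.maximum(den, 1e-12), np.nan)
    return centres, esf

# Demo at the critical angle atan(1/4): the projected distances cluster on
# multiples of 1/sqrt(17), so plain 0.125-pixel binning leaves empty bins,
# but the overlap-weighted version (kernel wider than the bin) does not.
dx = 4.0
xs, ys = np.meshgrid(np.arange(-20, 21), np.arange(-20, 21))
d = ((-xs + ys * dx) / np.sqrt(1.0 + dx * dx)).ravel()
d = d[np.abs(d) < 3.0]
inten = 0.5 * (1.0 + np.tanh(2.0 * d))   # a smooth synthetic edge profile
centres, esf = weighted_bin_esf(d, inten)
print(np.isnan(esf).any())               # every bin is populated
```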
To quantify this, here is a comparison of the 95th percentile of the relative MTF50 error (over a range of MTF50 values from 0.08 cycles/pixel to 0.5 cycles/pixel):<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-w34IYU1hExA/VX7llD1gBXI/AAAAAAAAA5E/B4hoYYvWVSw/s1600/gauss_iso800_newbin.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="200" src="https://1.bp.blogspot.com/-w34IYU1hExA/VX7llD1gBXI/AAAAAAAAA5E/B4hoYYvWVSw/s400/gauss_iso800_newbin.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 5: 95th percentile of relative MTF50 error (click to enlarge)</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"></div>The results are very promising. Most notable is the fact that the new binning method performs virtually identically regardless of edge orientation, with 26.565 degrees being the only angle that is <i>slightly</i> worse than the others. 
There may be a slight drop relative to MTF Mapper v0.4.16 (at 4 degrees), but keep in mind the contribution of the change in windowing method discussed in my previous post.<br /><br /> Just to be sure, I checked for bias at an edge orientation of 4 degrees (although I recycled the ISO800 images):<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-vwsx9Ml5h9o/VX7l5kmEdaI/AAAAAAAAA5M/PQ_UM9swQMY/s1600/gauss_noise_bias.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://4.bp.blogspot.com/-vwsx9Ml5h9o/VX7l5kmEdaI/AAAAAAAAA5M/PQ_UM9swQMY/s400/gauss_noise_bias.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 6: Relative MTF50 deviation (%)</td></tr></tbody></table>We can see that the new binning method does not introduce any bias in MTF50 estimates --- of course this is after correction using the MTF of the low-pass filter, as described above.<br /><br /><h3>Conclusion</h3>With the new binning method I can say that MTF Mapper no longer has significant problems with edges of certain orientations. 
More testing is required, but the 95th percentile of relative MTF50 error appears to be below 5%, regardless of edge orientation, for MTF50 values from 0.08 cycles/pixel through to 0.5 cycles/pixel.<br /><br />The improved binning method will be included in the next release (which should be v0.4.17).<br /><br />Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com0tag:blogger.com,1999:blog-6555460465813582847.post-75611754780301179192015-06-11T06:52:00.000-07:002015-06-16T02:44:16.700-07:00Improved apodization and bias correctionFollowing on the relatively recent addition of LSF correction to Imatest, I decided to revisit some of the implementation details of MTF Mapper.<br /><br />The brutal truth is that MTF Mapper used an empirical correction factor (shock, shock, horror!) to remove the observed bias in measured MTF curves. The empirical correction factor (or rather, family of correction factors) was obtained by generating a synthetic image with a known, analytical MTF curve, and calculating the resulting ratio of the measured curve (as produced by MTF Mapper) to the expected analytical curve.<br /><br />This had the advantage that it would remove both known distortions, such as that generated by the finite-difference approximation to the derivative (which Imatest refers to as the <a href="http://www.imatest.com/2015/04/lsf-correction-factor-for-slanted-edge-mtf-measurements/" target="_blank">LSF correction factor</a>), and other distortions which were produced by processes that I did not fully understand at the time.<br /><br />This post will deal with two of the distortions that I have identified, and I will propose solutions that will enable MTF Mapper to do away with the empirical correction approach.<br /><br /><h3>Apodization</h3>Apodization, also called "windowing", is a way to attenuate some of the artifacts resulting from the application of the FFT (or DFT, if you like) to a signal of a finite length. 
The DFT/FFT assumes that the signal is periodic, that is, the first (leftmost) sample is preceded (circularly) by the last (rightmost) sample. Applying the FFT to a signal that is discontinuous when treated in this circularly wrapped-around way usually results in significant energy spuriously appearing on the high frequency end of the frequency spectrum.<br /><br />A common windowing function is the Hamming window, which looks like a raised cosine centered on the middle of the sequence of samples. The samples are multiplied component-wise with the window function, effectively producing a new set of samples such that the leftmost and rightmost samples are scaled to very low magnitudes. Since the left- and rightmost samples are now all close to zero, we are guaranteed to have a signal that no longer has a discontinuity when wrapping around the left/right ends.<br /><br />So why would we use apodization as part of the slanted edge method? First, recall how the slanted edge method works:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-zxl2kgA6DJE/VXmM2W42n4I/AAAAAAAAA3U/9KWL3DOUSq8/s1600/se_method1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="292" src="http://3.bp.blogspot.com/-zxl2kgA6DJE/VXmM2W42n4I/AAAAAAAAA3U/9KWL3DOUSq8/s400/se_method1.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Step 1: generate the edge spread function (ESF)</td></tr></tbody></table><br /><div class="separator" style="clear: both; text-align: center;"></div>This diagram shows how the individual pixel intensities are projected along a line that coincides with the edge we are analyzing. 
Owing to the angle of the edge relative to the pixel grid, the density of the projected values (along the direction perpendicular to the edge) is much greater than that of the original pixel grid. The densely-spaced projected values are binned to form a regularly-spaced set of samples at (usually) 4x or 8x oversampling relative to the pixel grid. This allows us to measure frequencies above the Nyquist limit imposed by the original pixel grid.<br /><br />Now we can compute the MTF as illustrated here:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-YDC377GXmwU/VXg9Lhw1FvI/AAAAAAAAAzs/R1GfJc-lMZo/s1600/se_method2.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="640" src="http://3.bp.blogspot.com/-YDC377GXmwU/VXg9Lhw1FvI/AAAAAAAAAzs/R1GfJc-lMZo/s640/se_method2.png" width="414" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Step 2: Compute MTF from PSF using FFT</td></tr></tbody></table><br /><div class="separator" style="clear: both; text-align: center;"></div>Notice that the PSF is usually quite compact, i.e., most of the area under the PSF curve is located close to the centre of the PSF curve. This is typical of a PSF extracted from a real-world edge. 
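The two steps just illustrated can be condensed into a small sketch (plain Python with a naive DFT; the function names are my own, and this is only an illustration of the method, not MTF Mapper's actual implementation):

```python
import cmath
import math

def binned_esf(dists, intensities, oversample=8):
    # Step 1: pool the projected samples into bins of width
    # 1/oversample pixels, then average each bin to obtain a
    # regularly spaced, oversampled ESF.
    width = 1.0 / oversample
    bins = {}
    for d, v in zip(dists, intensities):
        bins.setdefault(math.floor(d / width), []).append(v)
    return [sum(v) / len(v) for _, v in sorted(bins.items())]

def mtf_from_esf(esf):
    # Step 2: the finite difference of the ESF gives the PSF (LSF);
    # the magnitude of its DFT, normalized to DC, is the MTF.
    psf = [b - a for a, b in zip(esf, esf[1:])]
    n = len(psf)
    dft = [abs(sum(p * cmath.exp(-2j * cmath.pi * k * i / n)
                   for i, p in enumerate(psf))) for k in range(n)]
    return [m / dft[0] for m in dft]
```

For a perfect step edge the PSF is a single spike, so this sketch returns a flat MTF of 1.0 at all frequencies, which makes a handy sanity check.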
We see some noise on the tails of the PSF, with visibly more noise on the right side --- this is an artifact of photon shot noise, which scales with the signal level, so the noise magnitude is larger in the bright parts of the image.<br /><br />Anyhow, since the noise is random, we might end up with large values on the edges, such as can be seen on the right end of the PSF samples. This is exactly the scenario which we would like to avoid, so we can apply a window to "squash" the samples near the edges of the PSF.<br /><br />MTF Mapper had been using a plain Hamming window up to now --- this resulted in a systematic bias in MTF measurements, particularly affecting edges with an MTF50 value below 0.1 cycles per pixel.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-rdBT78AlAKs/VXhC2NeJK9I/AAAAAAAAAz4/q44O62Fon0c/s1600/hamming_window.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="151" src="http://2.bp.blogspot.com/-rdBT78AlAKs/VXhC2NeJK9I/AAAAAAAAAz4/q44O62Fon0c/s400/hamming_window.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Hamming window</td></tr></tbody></table><br />Two things are visible here: the noise is suppressed reasonably well (on the ends of the green curve) after multiplying the PSF by the Hamming window function (see right side of illustration), and the PSF appears to contract slightly, effectively becoming slightly narrower after windowing.<br /><br />The apparent narrowing of the PSF has the expected impact on MTF50 values: they are overestimated slightly.<br /><br />I identified three possible methods to address this systematic overestimation of MTF50 values (on the low end of MTF50 values): empirical correction (as MTF Mapper has been doing so far), deconvolution, and using a 
different window function.<br /><br />We can "reverse" the effect of the windowing after we have applied the FFT to obtain the MTF. By the convolution theorem, we know that convolution in the time domain becomes multiplication in the frequency domain. Since we multiply the PSF by the window function in the time domain, it stands to reason that we must deconvolve the MTF by the Fourier transform of the window function. Except that deconvolution is a black art that is best avoided.<br /><br />I have tried many different approaches, but the high noise levels in the PSF make for poor results, more apt to inject additional distortion into our MTF than to undo the slight distortion caused by windowing in the first place.<br /><br />That leaves us only with the last option: choose a different window function. Purely based on aesthetics, I decided on the Tukey window with an alpha parameter of 0.6:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-S89GP8CfYXc/VXhHtwc-k5I/AAAAAAAAA0I/aJv2kr6GlSo/s1600/tukey_window.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="176" src="http://1.bp.blogspot.com/-S89GP8CfYXc/VXhHtwc-k5I/AAAAAAAAA0I/aJv2kr6GlSo/s400/tukey_window.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Tukey window</td></tr></tbody></table>Notice that we may get slightly less noise suppression, but in return we distort the PSF far less. In fact, at this level (MTF50 = 0.05) the distortion is negligible, and no further correction factors are required. 
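For reference, the Tukey (tapered cosine) window is easy to write down; here is a plain-Python sketch of the standard definition (not MTF Mapper's code), where alpha is the fraction of the window occupied by the cosine tapers:

```python
import math

def tukey(n, alpha=0.6):
    # Tukey window: flat over the central (1 - alpha) fraction of the
    # samples, with cosine tapers over alpha/2 on each end.
    w = []
    for k in range(n):
        x = k / (n - 1)
        if x < alpha / 2:
            w.append(0.5 * (1 + math.cos(math.pi * (2 * x / alpha - 1))))
        elif x <= 1 - alpha / 2:
            w.append(1.0)
        else:
            w.append(0.5 * (1 + math.cos(math.pi * (2 * x / alpha - 2 / alpha + 1))))
    return w
```

With alpha = 0.6, the central 40% of the samples pass through unattenuated, which is why the PSF is distorted far less than with a Hamming window.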
This is the new apodization method employed by MTF Mapper.<br /><br /><h3>LSF correction and beyond</h3>As already mentioned, the finite-difference method used to calculate the PSF (or LSF, if you are pedantic) from the ESF is not identical to the ideal analytical derivative of the ESF. A sin(x)/x correction factor can be employed to effectively remove this distortion. The Imatest <a href="http://www.imatest.com/2015/04/lsf-correction-factor-for-slanted-edge-mtf-measurements/" target="_blank">article</a> on this topic does a fine job of explaining the maths behind this correction; the method was originally published by Burns while working at Kodak.<br /><br />Since MTF Mapper employs 8x oversampling, we must divide the calculated MTF by the function sin(π * f/4)/(π * f/4). Clarification: This stems from the sample spacing of 0.125 pixels. Plugging this into the finite-difference derivative calculation as explained in the Imatest article we see that for 8x oversampling we will have a correction factor of sin(π * f/4)/(π * f/4) as opposed to the sin(π * f/2)/(π * f/2) we would have had for 4x oversampling. <br /><br />Even after applying this correction factor, though, we can see a systematic difference between the expected ideal MTF and the MTF produced by the slanted edge method. To understand this (final?) distortion, we have to rewind back to the step where we construct the ESF (helpfully captioned "Step 1" above...).<br /><br />The projection used to form the dense ESF samples produces a dense set of points, but these points are no longer spaced at convenient regular intervals. The FFT rather depends on being fed regularly spaced samples, so the simplest solution is to bin the samples at our desired oversampling factor. An oversampling factor of 8x thus produces bins that are 0.125 pixels wide.<br /><br />Again following the path of least resistance, we simply average all the values in each bin to obtain our regularly-sampled ESF. 
This seems like such a harmless little detail, but if we stop and think about it, we realize that this must be a low-pass filter. Why?<br /><br />Well, consider first a continuous interpolation function passing through all the ESF samples before binning. We would like to sample this function at regular intervals (0.125 pixels, to be exact), but we know that point sampling will produce horrible aliasing artifacts. The correct approach is to apply a low-pass filter, i.e., convolve our interpolating function with some filter. Let us choose a simple box filter of width 0.125 pixels. If we first convolve the interpolating function with this box filter, and then point-sample at intervals of 0.125 pixels, we end up with exactly the same result as we would obtain from binning followed by averaging all the values in each bin. This approach is optimal in terms of noise suppression for a Gaussian noise source, so even though it sounds simplistic, it is a good solution.<br /><br />Fortunately, this process is easily reversible by indiscriminate application of the convolution theorem: convolution in the time domain can be reversed by dividing the MTF (in the frequency domain) by the Fourier transform of our low-pass filter. And by now we know that the Fourier transform of a box filter is the sinc() function --- all we have to do is choose the proper frequency.<br /><br />At 8x oversampling, our bin width is 0.125 pixels, resulting in a low-pass filter of rect(8x). In the Fourier domain, this means we must divide the MTF by sinc(π * f/8) --- this will effectively reverse the attenuation of the MTF induced by the low-pass filter.<br /><br />To illustrate the effect of these two components (discrete derivative and binning low-pass filter) we can look at a simple example using a Gaussian PSF, with no added noise, and no apodization. 
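As a quick aside before the worked example, the attenuation caused by this implicit box filter is easy to evaluate; a plain-Python sketch (illustrative only, not MTF Mapper's implementation; f is in cycles/pixel):

```python
import math

def sinc(x):
    # Unnormalized sinc, sin(x)/x, with the removable singularity at 0.
    return 1.0 if x == 0.0 else math.sin(x) / x

def binning_lowpass_mtf(f, oversample=8):
    # MTF of the box filter implied by averaging over bins of width
    # 1/oversample pixels; at 8x oversampling this is sinc(pi*f/8).
    return sinc(math.pi * f / oversample)
```

By my own calculation this comes to an attenuation of roughly 0.6% at Nyquist (0.5 cyc/pixel) and roughly 2.6% at 1 cyc/pixel, so the effect is small but systematic.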
We start with the dense ESF of an edge with an MTF50 value of exactly 0.25 cycles/pixel:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-rshhs7d2WYc/VXlD4ge6GvI/AAAAAAAAA0c/-BEpfhbmlKw/s1600/dense_esf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://2.bp.blogspot.com/-rshhs7d2WYc/VXlD4ge6GvI/AAAAAAAAA0c/-BEpfhbmlKw/s400/dense_esf.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 1: Dense ESF</td></tr></tbody></table><br />This ESF is binned into bins of width 0.125 pixels:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-c6pAakzsoJ0/VXlF_RSpqLI/AAAAAAAAA0o/YLgRc7M7p-Y/s1600/resampled_esf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://1.bp.blogspot.com/-c6pAakzsoJ0/VXlF_RSpqLI/AAAAAAAAA0o/YLgRc7M7p-Y/s400/resampled_esf.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 2: binned ESF</td></tr></tbody></table><br />Next we calculate the discrete derivative to obtain the PSF:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-EVMbotjMDaI/VXlKfFWAtJI/AAAAAAAAA1I/mVe-gV2Vgtk/s1600/binned_psf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://4.bp.blogspot.com/-EVMbotjMDaI/VXlKfFWAtJI/AAAAAAAAA1I/mVe-gV2Vgtk/s400/binned_psf.png" width="400" /></a></td></tr><tr><td 
class="tr-caption" style="text-align: center;">Figure 3: discrete PSF</td></tr></tbody></table><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-60FhaPGdTE4/VXlI5twyHeI/AAAAAAAAA08/FnwW7zRSEYA/s1600/binned_psf.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><br /></a></div> This PSF is passed through the FFT to obtain the following MTF curve:<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-2-d4wwHy-l8/VXlKj0HolRI/AAAAAAAAA1Q/xhqc1Np7FrE/s1600/raw_mtf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://3.bp.blogspot.com/-2-d4wwHy-l8/VXlKj0HolRI/AAAAAAAAA1Q/xhqc1Np7FrE/s400/raw_mtf.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 4: measured MTF curve</td></tr></tbody></table>That MTF curve looks pretty good. And it looks very much like half of a Gaussian, just as we would expect. But looks can be deceiving at this scale. We know the true analytical MTF curve that we would expect: a Gaussian with a standard deviation of about 0.2123305 (and change). 
So next we plot the measured MTF curve divided by the expected MTF curve:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-cK7W6ootrok/VXlQf3D4ksI/AAAAAAAAA1w/GZjb5JQemU4/s1600/basic_ratio.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://3.bp.blogspot.com/-cK7W6ootrok/VXlQf3D4ksI/AAAAAAAAA1w/GZjb5JQemU4/s400/basic_ratio.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 5: Uncorrected MTF ratio (red)</td></tr></tbody></table>The dashed blue curve is the sin(π * f/4)/(π * f/4) function, corresponding to the discrete derivative correction, and the red curve is the ratio of measured to expected MTF. Clearly these two curves have roughly the same shape. Let us take our measured MTF curve, divide it by the sinc(f) curve to apply the discrete derivative correction, and plot the ratio of the corrected curve to the expected curve:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-EXUdljvITSM/VXlSc7GVAEI/AAAAAAAAA2A/eqR1yL7Om0A/s1600/corrected_ratio.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://2.bp.blogspot.com/-EXUdljvITSM/VXlSc7GVAEI/AAAAAAAAA2A/eqR1yL7Om0A/s400/corrected_ratio.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 6: Partially corrected MTF ratio (red)</td></tr></tbody></table>Note how the red curve (corrected MTF divided by expected MTF) has flattened out --- keep in mind that we would expect this curve to flatten out into a straight line. 
The black dashed line is the function sin(π * f/8)/(π * f/8), i.e., the Fourier transform of the rect(8x) low-pass filter induced by the binning process. Now we can combine the two corrections, i.e., take the measured MTF, divide by the discrete derivative correction, and then divide the result by the low-pass correction; this gives us the "fully corrected" MTF curve. Plotting the fully corrected MTF curve divided by the expected analytical MTF curve yields this:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-j98wiX4w1vE/VXlUVdE6pLI/AAAAAAAAA2M/-8MwM11AZKs/s1600/fully_corrected_ratio.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://1.bp.blogspot.com/-j98wiX4w1vE/VXlUVdE6pLI/AAAAAAAAA2M/-8MwM11AZKs/s400/fully_corrected_ratio.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 7: Fully corrected MTF ratio (red)</td></tr></tbody></table>The red curve is almost, but not quite, a constant value of 1.0. 
This demonstrates that the low-pass correction helps to bring us closer to the expected ideal MTF curve.<br /><br />If we zoom out a bit on the last plot, we see things are not entirely rosy:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-h-hIk3hJs7o/VXlVJEXmTLI/AAAAAAAAA2Y/ecZbSrnLQSc/s1600/fully_corrected_ratio_wide.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://3.bp.blogspot.com/-h-hIk3hJs7o/VXlVJEXmTLI/AAAAAAAAA2Y/ecZbSrnLQSc/s400/fully_corrected_ratio_wide.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 8: Fully corrected MTF ratio (red), wide view</td></tr></tbody></table>Once we move past a frequency of 1 cycle per pixel, the corrected curve does not match the expected curve so well anymore, at least not when expressed as a ratio. But looking back at Figure 4 above, we see that the measured MTF curve is practically zero beyond 1 cyc/pixel anyway, so we should expect some numerical instability when dividing the measured curve by the expected curve. This also explains my choice of scale in a few of the plots above.<br /><br />If we express the difference between the fully corrected curve and the expected analytical curve as a percentage of the magnitude of the analytical curve, we see that the fully corrected curve deviates only about 0.15% at 1 cyc/pixel, and only about 0.05% at 0.5 cyc/pixel (Nyquist). For reference, the relative deviation of a completely uncorrected curve is about 10% and 3% at 1 and 0.5 cyc/pixel respectively. 
Applying only the discrete derivative correction leaves a deviation of about 2.8% and 0.6%.<br /><br />So adding the correction for the low-pass filter effect of the binning is definitely in the diminishing returns category, but I certainly aim to make MTF Mapper the most accurate tool out there, so no expense is spared.<br /><br />Summary: The full correction to take care of both the finite-difference correction and the removal of the attenuation induced by the low-pass filter (implicitly part of the binning operation) is the product of the two individual terms, i.e.,<br /><div style="text-align: center;">c(f) = sin(π * f/4)/(π * f/4) * sin(π * f/8)/(π * f/8).</div>The MTF curve is corrected by dividing by this correction factor.<br /><br /><h3>Accuracy evaluation</h3>To demonstrate the effect of the new apodization and MTF correction approaches, we can look at the MTF50 accuracy over a range of MTF50 values. For each of the MTF50 levels shown below, a number of synthetic images were rendered without adding any simulated noise --- this is to emphasize the inherent bias in measured MTF50 values. 
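The summary correction can be sketched directly in plain Python (helper names are my own; this illustrates the formula above, not MTF Mapper's source):

```python
import math

def sinc(x):
    # Unnormalized sinc, sin(x)/x, with the removable singularity at 0.
    return 1.0 if x == 0.0 else math.sin(x) / x

def full_correction(f):
    # c(f) for 8x oversampling: the discrete-derivative term times the
    # binning low-pass term; f is in cycles/pixel.
    return sinc(math.pi * f / 4.0) * sinc(math.pi * f / 8.0)

def corrected_mtf(freqs, measured):
    # The measured MTF curve is corrected by dividing by c(f).
    return [m / full_correction(f) for f, m in zip(freqs, measured)]
```

Dividing a measured curve by c(f) undoes both the finite-difference distortion and the binning low-pass attenuation in one step.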
All edges were kept at a relative angle of 4.5 degrees, with 30 repetitions rendered using small sub-pixel shifts of the rectangle.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-n32gU2MTx_g/VXlhtGvh_LI/AAAAAAAAA2o/sIBafCY_MDM/s1600/gauss_nonoise.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://1.bp.blogspot.com/-n32gU2MTx_g/VXlhtGvh_LI/AAAAAAAAA2o/sIBafCY_MDM/s400/gauss_nonoise.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 9: Relative MTF50 deviation on a Gaussian PSF</td></tr></tbody></table>Our three contestants are MTF Mapper v0.4.16, which employs a Hamming windowing function and empirical MTF curve correction, followed by an implementation that uses a Hamming window with only the discrete derivative correction, and finally the new implementation using a Tukey windowing function with both discrete derivative and binning low-pass corrections.<br /><br />It is clear that the Hamming window + derivative correction (blue curve) produces a significant bias at low MTF50 values, raising their values artificially (as expected from the apparent narrowing of the PSF). Also note how the MTF50 values are underestimated at higher MTF50 values, which is again consistent with the effects of the binning low-pass filter.<br /><br />Both the empirical correction method (red curve) and the new Tukey window plus full correction (black curve) display much lower bias in their MTF50 estimates, as seen in Figure 9.<br /><br />What happens when we use a different PSF to generate our synthetic images? This time I chose the Airy + photosite aperture (square aperture, 100% fill factor) as a representative. 
This corresponds to something like a D7000 sensor without an OLPF, but without noise.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-5hGQ9MXlg7Y/VXl-maHL5AI/AAAAAAAAA24/zqaIp8OP4SE/s1600/airy_nonoise.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://1.bp.blogspot.com/-5hGQ9MXlg7Y/VXl-maHL5AI/AAAAAAAAA24/zqaIp8OP4SE/s400/airy_nonoise.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 10: Relative MTF50 deviation on an Airy+box PSF</td></tr></tbody></table>Firstly, we see some shockingly large errors on the low MTF50 side. The data points correspond to a simulated aperture of f/64, followed by f/32, f/16, f/8, f/5.6, f/4 and finally f/2.8. A reasonable explanation for the difference between the results in Figure 9 and 10 might be the wider support of the Airy PSF. Typically, the central peak of the Airy PSF is narrower than a Gaussian, but the Gaussian also drops off to zero more quickly, i.e., the Airy PSF has more energy in the tails of the PSF. This means that a wide (f/64) Airy PSF will be affected more strongly by the windowing function, and may even suffer from some truncation of the PSF --- this notion seems to be supported by the difference between the Tukey and Hamming window curves (black vs blue).<br /><br />Interestingly the empirical correction performed better than expected, doing almost as well as the Tukey + full correction method. This is somewhat unexpected, since the empirical correction factors were calculated from a Gaussian PSF.<br /><br />Since these experiments were all performed in the absence of simulated noise, they really only test the inherent <i>bias </i>of the various methods. 
The good news is that the Tukey + full correction approach appears to be an overall improvement over the existing empirical correction, even though the improvement is really quite small.<br /><br /><h3>Adding in some noise</h3>It always makes sense to look at both bias and variance when comparing the quality of two competing models. In this spirit, the experiments above were repeated under mild noise conditions, corresponding to roughly ISO 800 on a D7000 sensor. First up, the Gaussian PSF:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-yJo6-LbOs4g/VXmI6elQ8FI/AAAAAAAAA3I/pUXfvxs7idk/s1600/gauss_iso800.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://1.bp.blogspot.com/-yJo6-LbOs4g/VXmI6elQ8FI/AAAAAAAAA3I/pUXfvxs7idk/s400/gauss_iso800.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 11: Standard deviation of relative MTF error on Gaussian PSF</td></tr></tbody></table>Figure 11 presents the standard deviation of the relative MTF50 error, expressed as a percentage. We see the impact of the Tukey windowing function quite clearly: since the Tukey window does not attenuate such a large part of the PSF (i.e., less of the edge of the PSF is attenuated), we see a small increase in the standard deviation of the relative error. As expected, both the methods using the Hamming window perform nearly identically.<br /><br /><h3>Conclusion</h3>MTF Mapper will employ the new apodization function (Tukey window) as well as the analytically-derived full correction in lieu of the older Hamming window + empirical correction, starting from the next release. 
This should be v0.4.17 onwards.<br /><br />The new correction method is more elegant, and makes fewer assumptions regarding the shape of the MTF curve, unlike the empirical correction that was trained on only Gaussian MTFs. But throwing out the empirical correction brings back the strong attenuation of the PSF at lower MTF50 values, so the Hamming window had to be replaced with the Tukey window.<br /><br />We pay a small price for using the Tukey window, but realistically the MTF50 error should remain below 5% (for an expected MTF50 value of 0.5 c/p) even under quite noisy conditions.<br /><br />In theory it should be possible to incorporate strong low-pass filtering of the PSF, followed by suitable reversal-via-division of the low-pass filter in the frequency domain. In practice, I have not seen any worthwhile improvement in accuracy. I suspect that some non-linear adaptive filter may be able to strike the right balance, but that will have to wait for now.Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com5tag:blogger.com,1999:blog-6555460465813582847.post-84335311607462149802015-04-15T05:01:00.000-07:002015-04-15T05:01:52.339-07:00Trust, but verifyA while back I wrote:<br /> <i>"I could not find any synthetic images rendered with specific, exactly known point spread functions. 
This meant that the only way that I could tell if MTF Mapper was working correctly was to compare its output to other slanted edge implementations."</i> <a href="http://mtfmapper.blogspot.com/2013/12/mtfgeneraterectangle-grows-up.html" target="_blank">(here)</a><br /><br />This was the main motivation for developing <span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span>.<br /><br />It turns out that Imatest recently updated their SFR measurement algorithm to include the well-known finite-difference-correction <a href="http://www.imatest.com/2015/04/lsf-correction-factor-for-slanted-edge-mtf-measurements/" target="_blank">(here).</a> I first encountered this correction in one of Burns' papers. According to the Imatest news article, this correction is now included in the ISO 12233:2014 standard. <br /><br />I have yet to test the new version of Imatest against the synthetic images produced by <span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span>, but I do recall that some quick-and-dirty testing a few years back hinted at Imatest underestimating MTF50 slightly, which would be consistent with an algorithm that does not apply the finite difference correction. As pointed out in the Imatest news article, this difference is really only noticeable when dealing with higher MTF50 values, so this does not imply that all older Imatest results are now suddenly obsolete.<br /><br /><br />It does raise an important point about traceability and independent verification, though. Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com0tag:blogger.com,1999:blog-6555460465813582847.post-26699452283507516372014-01-09T04:33:00.000-08:002014-01-09T04:33:12.523-08:00Analogue image zoom: comparing 6 megapixel images with 16 megapixel images<h2>The problem</h2>Let us say you have captured a shot of a test chart with a D40, and repeated the process (same lens) with a D7000. 
The D40 gives you a 6 megapixel image, and the D7000 gives you a 16 megapixel image. You would like to compare the sharpness of the one camera to that of the other.<br /><br />There are several options for executing this comparison:<br /><ol><li>Print both shots at the same size. This is probably the way to go if you intend to print a lot of uncropped photos.</li><li>Scale down the 16 MP image to 6 MP, and compare at 100% view.</li><li>Scale up the 6 MP image to 16 MP, and compare at 100% view.</li><li>Scale both images to some other resolution, e.g., 8 MP (like DxO labs do), or maybe 24 MP.</li></ol>There is at least one sound reason why option 4 is the better choice amongst options 2 through 4: scaling artifacts. Performing an MTF analysis of an image upscaled with a popular cubic scaling algorithm (Mitchell) reveals that there is some effective contrast enhancement that takes place as part of the scaling process, visible as overshoot and undershoot in an edge profile plot. By scaling both images, you are at least trying to compare apples to apples, especially if you are unsure of exactly what sharpening algorithm your software will employ.<br /><br />Of course, this entire post deals with visual interpretation of test chart images. If you are interested in other properties (e.g., MTF) then go ahead and use MTF Mapper to perform such measurements directly. The normal slanted edge MTF analysis does not really tell you what your aliasing will look like after dropping the OLPF from your sensor, nor how apparent sharpness is influenced by demosaicing algorithms. For such evaluations visual interpretation might prove useful still.<br /><br /><h2>Another option: simulation</h2>Once you embrace simulated images, such as those produced with <span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span>, you have a fifth option: directly produce a 16 MP (equivalent) image but keep the point spread function (PSF) of the 6 MP camera. 
I call this feature "analogue scaling", mostly because it effectively resamples the 6 MP image to 16 MP, but without using a discretized 6 MP image as the source. Instead, the usual "infinite precision" analytical description of the target scene is simply scaled down (relative to the photosite pitch), and the sample spacing of the rendered image is adjusted accordingly.<br /><br />Here is a comparison between a simulated D40 (photosite pitch 7.8 micron, f/4, green light, 4-dot OLPF) and a D7000 (photosite pitch 4.73 micron, f/4, green light, 4-dot OLPF):<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-Iqo3JgRqOcQ/Us6LvHqOcVI/AAAAAAAAAqA/rQ38GSnTS08/s1600/pinch_d40.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://3.bp.blogspot.com/-Iqo3JgRqOcQ/Us6LvHqOcVI/AAAAAAAAAqA/rQ38GSnTS08/s1600/pinch_d40.png" height="373" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Simulated D40 image, effectively magnified by a factor of ~1.65 (click for full-size)</td></tr></tbody></table><br /> <br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-pNWse9e8vLA/Us6MLnywKSI/AAAAAAAAAqI/ZUFeBE8T95Q/s1600/pinch_d7k.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-pNWse9e8vLA/Us6MLnywKSI/AAAAAAAAAqI/ZUFeBE8T95Q/s1600/pinch_d7k.png" height="374" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Simulated D7000 image (click for full-size)</td></tr></tbody></table><br />These images were generated with the following commands:<br /><blockquote 
class="tr_bq">mtf_generate_rectangle.exe --target-poly doc/pinch.txt -p airy-box -n 0 --analogue-scale 0.606410 --pixel-pitch 7.8 --aperture 4 -o pinch_d40.png</blockquote>and<br /><blockquote class="tr_bq">mtf_generate_rectangle.exe --target-poly doc/pinch.txt -p airy-4dot-olpf -n 0 --pixel-pitch 4.73 --aperture 4 -o pinch_d7k.png</blockquote>respectively. Note that the analogue scale factor is ~0.606, which is 4.73/7.8, i.e., the ratio between the photosite pitch of the D7000 and D40. Specifying an <span style="font-family: "Courier New",Courier,monospace;">--analogue-scale</span> factor of greater than 1.0 will produce aliasing (and an apparent increase in sharpness), while a factor of less than 1.0 will produce smoothing, as would be expected from upscaling.<br /><br />Note that the "-n 0" switch turns off simulated sensor noise. Since sensor noise is currently computed in the domain of the output image, this switch is required to produce a correct pair of images. If noise is left on, you would obtain a scaled D40 image with image noise appearing at the size of D7000 pixels, which will clearly cause the D40 image to appear better than it should. This can be fixed (in the mtf_generate_rectangle code) by scaling the "noise image" correctly, but I honestly only thought of this problem now as I am busy writing this blog post :)<br /><br /><h2>Discusssion</h2>Does this approach work? Well, take a look at the point where the converging lines blur into a gray mess in the D40 image (top image above). This appears to happen after the tick marked "2" in the horizontal set of lines --- maybe about one-third of the way from "2" to "1".<br /><br />In the D7000 image, the extinction point (gray mess) appears almost exactly at the tick marked "1" in the horizontal set of converging lines. This appears about right, since we know the linear resolution of the D40 is about 0.606 that of the D7000. 
Not really a rigorous proof, but at least reassuring.<br /><br />To summarize: the <span style="font-family: "Courier New",Courier,monospace;">--analogue-scale</span> option of <span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span> will allow you to produce a pair of synthetic images of a test chart to simulate two different cameras, with different photosite pitch values, but without the hassle or potential artifacts introduced by upscaling the discrete image produced by the lower-resolution camera.<br /><br />Of course, this type of simulation will allow you to investigate potential future sensors too. How would a 50 MP APS-C camera render a resolution test chart .... ?<br /><br />ps: MTF Mapper version 0.4.16 is finally available <a href="https://sourceforge.net/projects/mtfmapper/files/">here</a> --- this is the first version to support all the required features to produce the results found in this post.Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com0tag:blogger.com,1999:blog-6555460465813582847.post-21839996998835626062013-12-18T03:39:00.000-08:002013-12-18T03:39:52.020-08:00mtf_generate_rectangle grows up<h2>Fed up with squares?</h2>If you have used <span style="font-family: "Courier New", Courier, monospace;">mtf_generate_rectangle</span> before you will know why it is called mtf_generate_<i>rectangle</i>. A bit over two years ago, when I started working on the first release of MTF Mapper, I ran into a specific problem: I could not find any synthetic images rendered with specific, exactly known point spread functions. This meant that the only way that I could tell if MTF Mapper was working correctly was to compare its output to other slanted edge implementations.<br /><br />While this is sufficient for some, it did not sit well with me. What if all those other implementations were tested in the same way? 
If that were the case, then <i>all</i> slanted edge implementations (available on the Internet) could be flawed. Clearly, some means of verifying MTF Mapper independently of other slanted edge algorithm implementations was required.<br /><br />Thus <span style="font-family: "Courier New", Courier, monospace;">mtf_generate_rectangle</span> was born. The original implementation relied on sampling a dense grid, but quickly progressed to an importance sampling rendering algorithm tailored to a Gaussian PSF. The Gaussian PSF had some important advantages over other (perhaps more realistic) PSFs: the analytical MTF curve was known, and simple to compute (the MTF of a Gaussian PSF is just another scaled Gaussian). Since the slanted edge algorithm requires a step edge as input, it seemed logical to choose a square as the target shape; this would give us four step edges for the price of one.<br /><br />Such synthetic images, composed of a black square on a white background, are perfectly suited to the task of testing the slanted edge algorithm. Unfortunately, they are not great for illustrating the visual differences between different PSFs. There are a few well-known target patterns that are found on resolution test charts designed for visual interpretation. The USAF1951 pattern consists of sets of three short bars (see examples later on); the widths of these bars decrease in a geometric progression, and the user is supposed to note the scale at which the bars are no longer clearly distinguishable.<br /><br />Another popular test pattern is the Siemens star. This pattern comprises circular wedges radiating from the centre of the design. The main advantage of the Siemens star is that resolution (spacing between the edges of the wedges) decreases in a continuous fashion, as opposed to the discrete intervals of the USAF1951 chart. 
I am not a huge fan of the Siemens star, though, mostly because it is hard to determine the exact point at which the converging bars (wedges) blur into a gray mess. It is far too easy to confuse aliasing with real detail on this type of chart. Nevertheless, other people seem to like this chart.<br /><br />Lastly, there is the familiar "pinched wedge" pattern (also illustrated later in this post), which contains a set of asymptotically convergent bars. The rate of convergence is much slower than that of the Siemens star, and a resolution scale usually accompanies the pattern, making it possible to visually measure resolution in a fashion similar to the USAF1951 chart, but with slightly more accuracy. I rather like this design, if only for the fact that the resulting pictures are aesthetically pleasing.<br /><br />Today I announce the introduction of a fairly powerful new feature in <span style="font-family: "Courier New", Courier, monospace;">mtf_generate_rectangle</span>: the ability to render arbitrary polygon shapes.<br /><br /><h2>The implementation</h2>You can now pass fairly general polygonal targets using the "--target-poly <filename>" command line option. The format of the file specified using <filename> is straightforward: any number of polygons can be specified, with each polygon defined by an integer <n> denoting the number of vertices, followed by <n> pairs of <x,y> coordinates. I usually separate these components with newline characters, but this is not critical.<br /><br />The polygons themselves can be convex or concave. In theory, non-trivial self-intersections should be supported, but I have not tested this myself yet. There is currently no way to associate multiple contours with a single polygon, thus you cannot specify any polygons containing holes. 
I work around this by simply splitting the input polygons with a line passing through such a hole: for example, a polygon representing the number "0" can be split down the middle, producing two concave polygons that touch on the split line.<br /><br /><h3>General polygon intersections</h3>For a while now I have cheated by relying on the Sutherland-Hodgman algorithm to compute the intersection between two polygons. Specifically, this operation is required by all the importance sampling algorithms involving a non-degenerate photosite aperture (e.g., "airy-box" and "airy-4dot-olpf" PSF options specified with the "-p" option to mtf_generate_rectangle). <a href="http://mtfmapper.blogspot.com/2012/11/importance-sampling-how-to-simulate.html">This article</a> explains the process in more detail, but the gist is that each "sample" during the rendering process is proportional to the area of the intersection between the target polygon geometry and a polygon representing the photosite aperture (suitably translated). If we assume that the photosite aperture polygon is simply a square (or, more generally, convex) then we can rely on the Sutherland-Hodgman algorithm to compute the intersection: we simply "clip" the target polygon with the photosite aperture polygon, and compute the area of the resulting clipped polygon.<br /><br />Now this is where the cheat comes in: the clipped result produced by the Sutherland-Hodgman algorithm is only correct if both polygons are convex. If the target polygon (the clipee) is concave, and the clipping polygon is convex, the Sutherland-Hodgman algorithm may produce degenerate vertices (see figure 1 below). 
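<br /><br />In its simplest form, this clip-then-measure sampling step can be sketched as follows. This is an illustrative Python rendition only (the real implementation is in C++, and all the names here are mine):

```python
def clip(subject, clipper):
    """Sutherland-Hodgman: clip 'subject' against a convex 'clipper'.
    Both polygons are lists of (x, y) tuples in counter-clockwise order.
    If 'subject' is concave, the output may contain degenerate
    (zero-area) sections, as discussed in the text."""
    def inside(p, a, b):
        # True if p lies to the left of (or on) the directed edge a -> b.
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersect(p, q, a, b):
        # Intersection of line p-q with line a-b (standard determinant form).
        d1 = (p[0] - q[0], p[1] - q[1])
        d2 = (a[0] - b[0], a[1] - b[1])
        den = d1[0] * d2[1] - d1[1] * d2[0]
        n1 = p[0] * q[1] - p[1] * q[0]
        n2 = a[0] * b[1] - a[1] * b[0]
        return ((n1 * d2[0] - d1[0] * n2) / den, (n1 * d2[1] - d1[1] * n2) / den)

    output = list(subject)
    for i in range(len(clipper)):
        a, b = clipper[i], clipper[(i + 1) % len(clipper)]
        polygon, output = output, []
        for j in range(len(polygon)):
            p, q = polygon[j], polygon[(j + 1) % len(polygon)]
            if inside(q, a, b):
                if not inside(p, a, b):
                    output.append(intersect(p, q, a, b))
                output.append(q)
            elif inside(p, a, b):
                output.append(intersect(p, q, a, b))
    return output

def area(poly):
    # Shoelace formula; degenerate sections contribute zero area.
    n = len(poly)
    return 0.5 * abs(sum(poly[i][0] * poly[(i + 1) % n][1] -
                         poly[(i + 1) % n][0] * poly[i][1] for i in range(n)))

# A 2x2 target square sampled by a unit-square "photosite" that overlaps
# one quarter of it: the sample weight is the overlap area, i.e. 1.0.
target = [(0, 0), (2, 0), (2, 2), (0, 2)]
photosite = [(1, 1), (3, 1), (3, 3), (1, 3)]
overlap = area(clip(target, photosite))
print(overlap)  # 1.0
```

Note that the concave-subject case is exactly where the degenerate output mentioned above can appear.<br /><br />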
The cheat that I employed relied on the observation that degenerate sections of a polygon have zero area, thus they have no influence on the sampling process.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-fZ_UJe7aP6g/UrF8DyBeEaI/AAAAAAAAApQ/uJJyhcEh4aU/s1600/sutherland.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-fZ_UJe7aP6g/UrF8DyBeEaI/AAAAAAAAApQ/uJJyhcEh4aU/s1600/sutherland.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig 1: Clipping a concave polygon with the Sutherland-Hodgman algorithm</td></tr></tbody></table><br />This allowed me to continue using the Sutherland-Hodgman algorithm, mostly because the algorithm is simple to implement, and very efficient. This efficiency stems from the way in which an actual intersection vertex is only computed if an edge of the clipee polygon is known to cross an edge of the clipper polygon. This is a <i>huge</i> saving, especially if one considers that the photosite aperture polygon (clipper) will typically be either entirely outside the target polygon, or entirely inside the target polygon, except of course near the edges of the target polygon.<br /><br />All this comes apart when the photosite aperture polygon becomes concave. Solving that problem requires significantly more effort. For a start, a more general polygon clipping algorithm is required. I chose the Greiner-Hormann algorithm, with Kim's extensions to cater for the degenerate cases. In this context, the degenerate cases occur when some of the vertices of the clipee polygon coincide with vertices (or edges) of the clipper polygon. This happens fairly often when constructing a quadtree (more on that later). 
At any rate, the original Greiner-Hormann algorithm is fairly straightforward to implement, but adding Kim's enhancements for handling the degenerate cases required a substantial amount of effort (read: hours of debugging). The Greiner-Hormann algorithm is quite elegant, and I can highly recommend reading the original paper.<br /><br />Internally, mtf_generate_rectangle classifies polygons as being either convex or concave. If the photosite aperture is convex, the Sutherland-Hodgman algorithm is employed during the sampling process; otherwise it will fall back to Kim's version of the Greiner-Hormann algorithm. The performance impact is significant: concave photosite polygons render 20 times more slowly than square photosite polygons when rendering complex scenes. For simpler scenes, the concave photosite polygons will render about four times more slowly than squares; circular apertures (well, 60-sided regular polygons, actually) will render about two times more slowly than squares.<br /><br />Part of this difference is due to the asymptotic complexity of the two clipping algorithms, expressed in terms of the number of intersection point calculations: the Sutherland-Hodgman algorithm has a complexity of O(c*n), where "c" is the number of crossing edges, i.e., c << m, where "m" is the number of vertices in the clipee polygon, and "n" is the number of edges in the clipper. The Greiner-Hormann algorithm has a complexity of O(n*m); on top of that, each intersection vertex requires a significant amount of additional processing.<br /><br /><h3>Divide and conquer</h3>To offset some of the additional complexity of allowing arbitrary target polygons to be specified, a quadtree spatial index was introduced. 
The quadtree does for 2D searches what a binary tree does for linear searches: it reduces the number of operations from O(n) to O(log(n)).<br /><br />First up, each polygon is wrapped with an axis-aligned bounding box (AABB), which is just an educated-sounding way of saying that the minimum and maximum values of the vertices are recorded for both x and y dimensions of a polygon. This step already offers us a tremendous potential speed-up, because two polygons can only intersect if their bounds overlap. The bounds check is reduced to four comparisons, which can be implemented using short-circuit boolean operators, so non-overlapping bounds can be detected with as little as a single comparison in the best case.<br /><br />Once each individual polygon has a bounding box, we can start to aggregate them into a scene (internally, mtf_generate_rectangle treats this as a multipolygon with its own bounding box). The quadtree algorithm starts with this global bounding box, and splits it into four quadrants. The bounding box of each quadrant is taken as a clipping polygon, clipping all the polygons to fit exactly inside the quadrant.<br /><br />After one iteration, we have potentially reduced the number of intersection tests by a factor of four. For example, if we determine (using the bounding boxes) that the photosite aperture polygon falls entirely inside the top-right quadrant, then we only have to process the (clipped) polygons found inside that quadrant. If a quadrant is empty, we can simply skip it; otherwise, we can shrink the bounding box to fit tightly around the remaining clipped polygons (see figure 2 below).<br /><br />The next logical step is to apply this quadrant subdivision recursively to each of the original quadrants. We can keep on recursively subdividing the quadrants until a manageable number of polygons (or more correctly, a manageable number of polygon edges) is reached in each quadrant subtree. 
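<br /><br />The recursive subdivision described above can be sketched as follows. This is an illustrative Python version; the function names and termination constants are my own, and a full implementation clips each polygon to its quadrant rather than merely binning it by bounding box:

```python
def bbox(poly):
    """Axis-aligned bounding box (AABB) of a list of (x, y) vertices."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    return (min(xs), min(ys), max(xs), max(ys))

def overlaps(a, b):
    """Two AABBs can only intersect if they overlap in both dimensions;
    short-circuit evaluation may reject after a single comparison."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def build(polys, bounds, max_edges=16, depth=0, max_depth=6):
    """Recursively split 'bounds' into quadrants until each leaf holds a
    manageable number of polygon edges (or the depth limit is reached)."""
    edges = sum(len(p) for p in polys)
    if edges <= max_edges or depth >= max_depth:
        return {"bounds": bounds, "polys": polys}
    x0, y0, x1, y1 = bounds
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    children = []
    for qb in [(x0, y0, xm, ym), (xm, y0, x1, ym),
               (x0, ym, xm, y1), (xm, ym, x1, y1)]:
        sub = [p for p in polys if overlaps(bbox(p), qb)]
        if sub:  # empty quadrants are simply skipped
            children.append(build(sub, qb, max_edges, depth + 1, max_depth))
    return {"bounds": bounds, "children": children}

# Two small squares in opposite corners of a 10x10 scene: one level of
# subdivision separates them, and the two empty quadrants are dropped.
s1 = [(1, 1), (2, 1), (2, 2), (1, 2)]
s2 = [(6, 6), (7, 6), (7, 7), (6, 7)]
tree = build([s1, s2], (0, 0, 10, 10), max_edges=4)
print(len(tree["children"]))  # 2
```
<br /><br />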
We must balance the cost of further subdivision against the gains of reducing the number of edges in each subdivided quadrant. Every time that we add another level to the quadtree we add four additional bounds checks --- eventually the cost of the bounds checks adds up.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-lBnavhs5XK0/UrGD_BITxTI/AAAAAAAAApg/2bS2DXDVyDM/s1600/quadtree_levels012.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://3.bp.blogspot.com/-lBnavhs5XK0/UrGD_BITxTI/AAAAAAAAApg/2bS2DXDVyDM/s1600/quadtree_levels012.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig 2: Levels 0 (green), 1 (blue) and 2 (magenta) of the Quadtree decomposition of the scene (light gray)</td></tr></tbody></table><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-GSBpX5i-TQk/UrGEPAvuu_I/AAAAAAAAApo/gWGsU-YG7jg/s1600/quadtree_levels256.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-GSBpX5i-TQk/UrGEPAvuu_I/AAAAAAAAApo/gWGsU-YG7jg/s1600/quadtree_levels256.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig 3: Levels 2 (green), 5 (blue), and 6 (magenta) of the Quadtree decomposition</td></tr></tbody></table><br />If the number of quadtree levels is sufficient, we end up "covering" the target polygons with rectangular tiles (the bounding boxes of the quadrants), providing a coarse approximation of the target polygon shape (see figure 3 above). 
Every sampling location outside these bounding boxes can be discarded early on, so rendering time is not really influenced by the size of the "background" any more.<br /><br />If the quadtree is well-balanced, the amount of work (number of actual polygon-polygon intersection tests) can be kept almost constant throughout the entire rendered image, regardless of the number of vertices in the scene. I have confirmed this with some quick-and-dirty tests: halving the number of vertices in a scene (by using a polygon simplification method) has almost no impact on rendering time.<br /><br /><h2>Some examples</h2>Enough talk. Time for some images:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-826r27EYuw8/UrFaXLf_MdI/AAAAAAAAAoE/i9hZdjB1Bo0/s1600/usaf1951_p473_noolpf_f4.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://4.bp.blogspot.com/-826r27EYuw8/UrFaXLf_MdI/AAAAAAAAAoE/i9hZdjB1Bo0/s400/usaf1951_p473_noolpf_f4.png" width="352" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">USAF1951-style chart with 4 levels</td></tr></tbody></table><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/--OGdOekuxxA/UrFar_4LrQI/AAAAAAAAAoM/lTNbzIsvIuU/s1600/siemens_p473_noolpf_f4.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" 
src="http://4.bp.blogspot.com/--OGdOekuxxA/UrFar_4LrQI/AAAAAAAAAoM/lTNbzIsvIuU/s400/siemens_p473_noolpf_f4.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Siemens star chart</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-9_sYA_Iy8kw/UrFa3bl-GBI/AAAAAAAAAoQ/xO7YH5qBLO8/s1600/pinch_p473_noolpf_f4.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="373" src="http://3.bp.blogspot.com/-9_sYA_Iy8kw/UrFa3bl-GBI/AAAAAAAAAoQ/xO7YH5qBLO8/s400/pinch_p473_noolpf_f4.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Pinch chart (click for full size)</td></tr></tbody></table>All the charts above were rendered with a photosite pitch of 4.73 micron, using the Airy + square photosite (100% fill-factor) model at an aperture of f/4, simulating light at 550 nm wavelength. The command for generating the last chart would look something like this:<br /><br /><span style="font-family: "Courier New", Courier, monospace;">mtf_generate_rectangle.exe --pixel-pitch 4.73 --aperture 4 -p airy-box -n 0 --target-poly pinch.txt </span><br /><br />where "pinch.txt" is the file specifying the polygon geometry (which happens to be in the same folder as <span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle.exe</span> in my example above). 
These target polygon geometry files are included in the MTF Mapper package from version 0.4.16 onwards (their names are <span style="font-family: "Courier New",Courier,monospace;">usaf1951r.txt</span>, <span style="font-family: "Courier New",Courier,monospace;">siemens.txt</span>, and <span style="font-family: "Courier New",Courier,monospace;">pinch.txt</span>.)<br /><br /><h3>OLPF demonstration</h3>The "pinch chart" provides a very clear demonstration of the effects of the Optical Low-Pass Filter (OLPF) found on many DSLRs (actually, most, depending on when you read this).<br /><br />Before I present some images, first a note about effective chart magnification. Most real-world test charts are printed at a known size, i.e., you can say with confidence that a particular target (say, a USAF1951 pattern) has a known physical size, and thus measures physical resolution expressed in line pairs per millimetre (lp/mm). It is relatively straightforward to extend this to synthetic images generated with mtf_generate_rectangle by carefully scaling your target polygon dimensions. For the time being, though, I prefer to fall back to a pixel-centric view of the universe. In other words, I chose to specify my target polygon geometry in terms of pixel dimensions. This was mostly motivated by my desire to illustrate specific effects (aliasing, etc.) visually. 
Just keep that in mind: the images I present below are not intended to express resolution in physical units; they are pretty pictures.<br /><br />With that out of the way, here is a sample of a hypothetical D7000 without an OLPF --- this could be something like the Pentax K-5 IIs.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-9_sYA_Iy8kw/UrFa3bl-GBI/AAAAAAAAAoU/TutyjgWwD-g/s1600/pinch_p473_noolpf_f4.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="373" src="http://2.bp.blogspot.com/-9_sYA_Iy8kw/UrFa3bl-GBI/AAAAAAAAAoU/TutyjgWwD-g/s400/pinch_p473_noolpf_f4.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Hypothetical D7000 without OLPF, f/4</td></tr></tbody></table><br />And here is the same simulated image, but this time using an OLPF, i.e., this should be quite close to the real D7000:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-h4w88WqybFE/UrFeK0peNHI/AAAAAAAAAoc/H6L-VO0-gPk/s1600/pinch_p473_olpf_f4.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="373" src="http://3.bp.blogspot.com/-h4w88WqybFE/UrFeK0peNHI/AAAAAAAAAoc/H6L-VO0-gPk/s400/pinch_p473_olpf_f4.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">D7000 (with OLPF), f/4</td></tr></tbody></table><br />I repeated the simulations using an f/8 aperture:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a 
href="http://3.bp.blogspot.com/-XeME2ya51sA/UrFfd-AJemI/AAAAAAAAAoo/n3A-4vNRU-w/s1600/pinch_p473_noolpf_f8.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="373" src="http://3.bp.blogspot.com/-XeME2ya51sA/UrFfd-AJemI/AAAAAAAAAoo/n3A-4vNRU-w/s400/pinch_p473_noolpf_f8.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Hypothetical D7000 without OLPF, f/8</td></tr></tbody></table><br /><div class="separator" style="clear: both; text-align: center;"></div><br />and again, the D7000 (with OLPF) simulated at f/8: <br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-1j1d_rZUbEc/UrFfq4d1OgI/AAAAAAAAAow/5B8FOFrtzyY/s1600/pinch_p473_olpf_f8.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="373" src="http://1.bp.blogspot.com/-1j1d_rZUbEc/UrFfq4d1OgI/AAAAAAAAAow/5B8FOFrtzyY/s400/pinch_p473_olpf_f8.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">D7000 (with OLPF), f/8</td><td class="tr-caption" style="text-align: center;"><br /></td></tr></tbody></table><br />Here is a crop comparing the interesting part of the chart across these four configurations:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-d6m__kXYk_E/UrFmXZuwPTI/AAAAAAAAApA/1Murp5JCQWM/s1600/closeup_rc.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-d6m__kXYk_E/UrFmXZuwPTI/AAAAAAAAApA/1Murp5JCQWM/s1600/closeup_rc.png" /></a></div>First up, notice the false detail in the "f/4, no OLPF" panel, occurring to the right of the scale bar tick labelled "1". 
This is a good example of aliasing --- compare that to the "f/4, OLPF" panel, which just fades to gray mush to the right of its tick mark. In the bottom two panels we can see the situation is significantly improved at f/8, where diffraction suppresses most of the objectionable aliasing.Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com2tag:blogger.com,1999:blog-6555460465813582847.post-36674752705103931732013-12-06T05:01:00.000-08:002013-12-06T05:01:18.452-08:00Simulating microlenses: kicking it up a notch<h3>Preamble </h3>My <a href="http://mtfmapper.blogspot.com/2013/10/simulating-microlenses-first-take.html">first stab</a> at simulating microlenses made some strong assumptions regarding the effective shape of the photosite aperture. Reader IlliasG subsequently pointed me to an illustration depicting a more realistic photosite aperture shape --- which happens to be a concave polygon.<br /><br />At first, it might seem trivial to use this photosite aperture shape in the usual importance sampling algorithm employed by <span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span>. It turns out to be a bit more involved than that ....<br /><br />The <span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span><span style="font-family: inherit;"> tool relied on an implementation of the Sutherland-Hodgman polygon clipping routine to compute the area of the intersection of the photosite aperture and the target polygon (which is typically a rectangle). The Sutherland-Hodgman algorithm is simple to implement, and reasonably efficient, but it requires the clipping polygon to be convex, so I required a new polygon clipping routine </span>to allow concave/concave polygon intersections (astute readers may spot that I could simply exchange the clipping/clippee polygons, but I wanted concave/concave intersections anyway). 
After some reading, it seemed that the Greiner-Hormann algorithm had a fairly simple implementation ...<br /><br />... but it did not handle the degenerate cases (vertices of clipping/clippee polygons coinciding, or a vertex falling on the edge of the other polygon). Kim's extension solves that problem, but it took me a while to implement.<br /><br /><h3>Effective photosite aperture (with microlenses)</h3>The Suede (on dpreview forums) posted a <a href="http://www.dpreview.com/forums/post/51904462">diagram</a> of the effective aperture shape after taking the microlenses into account. I thumb-sucked an analytical form for this shape, which looks like this (my shape in cyan overlaid on The Suede's image):<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-FiY-le_cjOU/UqF4aT5wO3I/AAAAAAAAAmo/y0U9_PiU-4w/s1600/suede.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-FiY-le_cjOU/UqF4aT5wO3I/AAAAAAAAAmo/y0U9_PiU-4w/s1600/suede.png" /></a></div>The fit of my thumb-sucked approximation is not perfect, but I declare it to be good enough for government work. I decided to call this the <span style="font-family: "Courier New", Courier, monospace;">rounded-square</span> photosite aperture (that is the identifier used by <span style="font-family: "Courier New", Courier, monospace;">mtf_generate_rectangle</span>).<br /><br />I am not sure how to scale this shape relative to the 100% fill-factor square. Intuitively, it seems that the shape should remain inscribed within the square photosite, or otherwise the microlens would be collecting light from the neighbouring photosites too. This type of scaling (as illustrated above) still leaves the corners of the photosite somewhat darkened, which is what we were aiming for. Incidentally, this scaling only gives me a fill-factor of ~89.5%. 
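<br /><br />Out of curiosity, that number can be checked numerically. The sketch below is purely illustrative: it assumes a superellipse ("squircle") inscribed in the photosite, with a hand-picked exponent, which is not necessarily the analytical form I actually used:

```python
def superellipse_fill_factor(n, samples=1000):
    """Fraction of the photosite square [-1,1]^2 covered by the inscribed
    superellipse |x|^n + |y|^n <= 1, estimated by midpoint sampling over
    one quadrant (the shape is symmetric in both axes)."""
    inside = 0
    for i in range(samples):
        x = (i + 0.5) / samples
        for j in range(samples):
            y = (j + 0.5) / samples
            if x ** n + y ** n <= 1.0:
                inside += 1
    return inside / (samples * samples)

# n = 2 is the inscribed circle (pi/4, the familiar 78% fill-factor case);
# larger exponents pull the shape towards the full square.
fill_circle = superellipse_fill_factor(2.0)   # ~0.785
# A hand-picked exponent around 3.2 lands close to the ~89.5% figure
# quoted above (the exponent is my assumption, not a fitted value).
fill_rounded = superellipse_fill_factor(3.2)  # ~0.895
```
<br /><br />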
I guess the "100% fill-factor" claim sometimes seen in connection with microlenses applies to equivalent light-gathering ability, rather than geometric area.<br /><br /><h3>Results</h3><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-4A3rSrYe61s/UqGAV2RD1FI/AAAAAAAAAm4/IAyS23Y4GHA/s1600/box_a00_default_ff.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://1.bp.blogspot.com/-4A3rSrYe61s/UqGAV2RD1FI/AAAAAAAAAm4/IAyS23Y4GHA/s400/box_a00_default_ff.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">MTF curves for 0-degree step edge</td></tr></tbody></table><h3></h3><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-L5RJjNBhQ5E/UqGAjTPhR4I/AAAAAAAAAnA/JIK7Je_a0V8/s1600/box_a45_default_ff.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://4.bp.blogspot.com/-L5RJjNBhQ5E/UqGAjTPhR4I/AAAAAAAAAnA/JIK7Je_a0V8/s400/box_a45_default_ff.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">MTF curves for 45-degree step edge</td></tr></tbody></table><br />The two plots above illustrate the MTF curves of three possible photosite aperture shapes, combined with an Airy PSF (aperture=f/5.6, photosite pitch=4.73 micron, lambda=550 nm). The first plot is obtained by orienting the step edge at 0 degrees, i.e., our MTF cross-section is along the x-axis of the photosite. 
In the second plot, the step edge was oriented at 45 degrees relative to the photosite, i.e., it represents the diagonal across the photosite.<br />Both plots include the MTF curves for an inscribed circle aperture, for comparison. Note that the fill-factors have not been normalized, that is, each aperture appears at its native size, which maximizes aperture area without going outside the square photosite's bounds.<br /><br />Purely based on its fill factor of ~90%, we would expect the first zero of the rounded-square aperture's MTF curve to land between the 100% fill-factor square and the 78% fill-factor circle, which is clearly visible in the first plot. In fact, the rounded-square aperture's MTF curve appears to be a blend of the square and circle curves, which makes sense.<br /><br />The second plot above shows that the rounded-square aperture still exhibits some anisotropic behaviour, but that the effect is less pronounced than that observed with a square photosite (see <a href="http://mtfmapper.blogspot.com/2013/10/simulating-microlenses-first-take.html">this article</a> for more details on anisotropic behaviour); this also seems logical given the shape.<br /><br /><h3>In the real world (well, simulated real world, at least)</h3>The MTF curves show some small but measurable differences between the 100% fill-factor square photosite aperture and the ~90% rounded-square photosite aperture in their response to a step edge. 
But can you see these differences in an image?<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-EaEw0rAM9kI/UqHEYK3oB9I/AAAAAAAAAng/wT6G-sb0XsQ/s1600/render_usaf_square.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://2.bp.blogspot.com/-EaEw0rAM9kI/UqHEYK3oB9I/AAAAAAAAAng/wT6G-sb0XsQ/s400/render_usaf_square.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">USAF1951-style chart, f/1.4, 100% fill-factor square photosite aperture</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-xcIre4BSKRM/UqHEpAj02RI/AAAAAAAAAno/9f2Hlr-w-S4/s1600/render_usaf_rsquare.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://2.bp.blogspot.com/-xcIre4BSKRM/UqHEpAj02RI/AAAAAAAAAno/9f2Hlr-w-S4/s400/render_usaf_rsquare.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">USAF1951-style chart, f/1.4, ~90% fill-factor rounded square photosite aperture</td></tr></tbody></table><br /><br />Well ... not really (click on the image to see full-size version). I even opened up the aperture to f/1.4 to accentuate the differences in the photosite apertures. 
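As an aside on the geometry: the ~90% figure is consistent with a square whose corners are replaced by quarter circles, although I should stress that this particular shape is only my assumption for illustration here; the exact rounded-square polygon used in the simulations may differ. For a unit square with corner radius <i>r</i>, the geometric fill factor is 1 − (4 − π)r², so r ≈ 0.34 gives roughly 90%. A quick Monte Carlo sketch:

```python
import numpy as np

# Assumed shape (for illustration only): a unit square whose corners are
# replaced by quarter circles of radius r.  Fill factor = 1 - (4 - pi)*r^2.
r = np.sqrt(0.1 / (4.0 - np.pi))     # choose r to target a 90% fill factor

rng = np.random.default_rng(1)
x, y = rng.random((2, 1_000_000))                    # points in the unit square
u, v = np.minimum(x, 1 - x), np.minimum(y, 1 - y)    # fold into one corner
# A point is outside the rounded square if it lies in the corner square
# but outside the quarter circle centred at (r, r):
in_corner_cut = (u < r) & (v < r) & ((u - r) ** 2 + (v - r) ** 2 > r ** 2)
fill_factor = 1.0 - in_corner_cut.mean()
print(r, fill_factor)    # r ~ 0.34, fill factor ~ 0.90
```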
Just to show you <i>something,</i> here is a rendering using a highly astigmatic photosite aperture (a rectangle that is 0.01 times the photosite pitch in height, but one times the pitch wide):<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-fKBVzvJLTSI/UqHE2xNo3PI/AAAAAAAAAnw/WSk-n8aWAv0/s1600/render_usaf_astig.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://2.bp.blogspot.com/-fKBVzvJLTSI/UqHE2xNo3PI/AAAAAAAAAnw/WSk-n8aWAv0/s400/render_usaf_astig.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">USAF1951-style chart, f/1.4, 2% fill-factor thin rectangular photosite aperture</td></tr></tbody></table>Note that this is basically point-sampling in the vertical direction, but box-sampling in the horizontal direction. This shows up as rather severe aliasing (jaggies) in the vertical direction.<br /><br /><h3>In the real real world</h3>So how do these simulated MTF curves compare to actual measured MTF curves? In <a href="http://mtfmapper.blogspot.com/2012/06/nikon-d40-and-d7000-aa-filter-mtf.html">a previous post</a> I described the method I used to capture the MTF of a Nikon D40 camera with a sharp lens set to an aperture of f/4. Here is a comparison of the simulated MTF curves to the empirically measured MTF curve. 
<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-bQjG1G7qo9U/UqGejet_rzI/AAAAAAAAAnQ/IUUEgdGHTdI/s1600/d40_comparison.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://3.bp.blogspot.com/-bQjG1G7qo9U/UqGejet_rzI/AAAAAAAAAnQ/IUUEgdGHTdI/s400/d40_comparison.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Nikon D40 at f/4</td></tr></tbody></table>At first glance it might seem that the 100% fill-factor square photosite aperture simulation is marginally closer to the measured curve, but keep in mind that these simulations were both performed with an OLPF split factor of 0.375. This value of 0.375 was determined by trial and error using the 100% fill-factor photosite simulation --- it is likely that the optimal OLPF split factor for the rounded-square photosite aperture model is different. In fact, I would expect a slightly larger value, say around 0.38 or 0.385, to perform better, based purely on the difference in fill factor (100% vs ~90%) between the two simulations.<br /><br />So yes, you could say I am lazy for not optimizing the OLPF split factor for the rounded-square photosite aperture model right now, but I do not feel comfortable doing any sort of quantitative comparison between the models with only one empirical sample at hand (one measured D40 MTF curve).
Until such time as I have sufficient data to perform a proper optimization and evaluation of the models, I will leave it at the following statement: it certainly appears that the rounded-square model is a viable approximation of the photosite aperture of the D40.<br /><br />Frans van den Bergh<br /><br /><b>Simulating microlenses, first take</b> (2013-10-24)<br /><br />Up to now, <span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span> assumed that the simulated sensor had square pixels with a 100% fill factor. This assumption does not reflect reality all that well, but it does simplify the derivation of analytical MTF curves for certain cases.<br /><br />The effect of fill factor on a square photosite (assuming that the active part of the photosite is just a smaller square centred in the outer square representing the photosite) is fairly straightforward: we are keeping the sampling rate the same, since the photosite pitch is unaffected, but we are reducing the size of the square being convolved with the incoming image. As a result, we would expect a lower fill factor to yield a better MTF curve, i.e., contrast will be higher than the 100% fill factor baseline. But it is still a good idea to test this, just to be sure ...<br /><br /><br /><h2>Implementing variable fill factors</h2>Using the importance sampling algorithm described <a href="http://mtfmapper.blogspot.com/2012/11/importance-sampling-how-to-simulate.html">here,</a> all we have to do is replace the square polygon representing the active area of the photosite with a smaller one, and we are done. The resulting PSF is thus the convolution of the photosite aperture and the Airy function (representing diffraction through the lens aperture).
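As a quick sanity check on this model (an independent 1D re-derivation, not the code inside <span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span>): along the x-axis the combined MTF is just the product of the diffraction MTF and |sinc(s·f)|, where s = √(fill factor) is the side length of the active square in pixel units. A minimal sketch, assuming λ = 550 nm, a 4.73 micron pitch and f/8, matching the simulations that follow:

```python
import numpy as np

WAVELENGTH_MM = 550e-6    # 550 nm
F_NUMBER = 8.0
PITCH_MM = 4.73e-3        # 4.73 micron photosite pitch
CUTOFF = PITCH_MM / (WAVELENGTH_MM * F_NUMBER)   # diffraction cut-off, cycles/pixel

def airy_mtf(f):
    # Diffraction MTF of a circular lens aperture, f in cycles/pixel
    x = np.clip(f / CUTOFF, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x * x))

def system_mtf(f, fill_factor):
    # 1D cross-section along the x-axis: the square active area contributes
    # |sinc(s*f)| with s = sqrt(fill_factor); np.sinc(x) = sin(pi x)/(pi x)
    s = np.sqrt(fill_factor)
    return airy_mtf(f) * np.abs(np.sinc(s * f))

def mtf50(fill_factor):
    f = np.linspace(0.0, 1.0, 100001)
    m = system_mtf(f, fill_factor)
    return f[np.argmax(m < 0.5)]   # first frequency where the MTF drops below 0.5

print(mtf50(1.0), mtf50(0.5))      # close to the simulated ~0.34 and ~0.38
```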
Unless otherwise stated, results were obtained at a wavelength of 550 nm, a photosite pitch of 4.73 micron, and an aperture of f/8, using a simulated system without an optical low-pass filter (OLPF), which appears to be all the rage lately.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-aZbFy1KO86Q/Umjavs0vS-I/AAAAAAAAAks/JpQUXOS8sKE/s1600/box_ff100vsff50.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://2.bp.blogspot.com/-aZbFy1KO86Q/Umjavs0vS-I/AAAAAAAAAks/JpQUXOS8sKE/s400/box_ff100vsff50.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">MTF curve of Airy + square photosite PSF, at 100% and 50% fill factors</td></tr></tbody></table>This result confirms our suspicions: if we decrease the fill factor by shrinking the square photosite aperture, the cut-off frequency of the low-pass sinc() filter is increased correspondingly (see <a href="http://mtfmapper.blogspot.com/2012/06/diffraction-and-box-filters.html">here</a> for an overview of diffraction and box functions). The MTF50 of the 50% fill factor sensor is ≈0.38, compared to an MTF50 of ≈0.34 for the 100% fill factor case.<br /><br />So what are the downsides to using a smaller fill factor? Well, we are allowing substantially more contrast through above the Nyquist frequency (0.5 cycles per pixel), which will definitely increase the chances of aliasing artifacts (moiré, and/or "jaggies"). In the limit, we can imagine the fill factor approaching zero, which gives us a point-sampler, which will result in severe aliasing artifacts, such as the typical jagged edges we see when we render a polygon by taking only one sample at the centre of each pixel.<br /><br />There is another effect that photographers care deeply about: noise. 
The relative magnitude of photon shot noise increases as the fill factor decreases, since the number of photons collected is directly proportional to the active area of the photosite, and the signal-to-noise ratio of a Poisson process grows with the square root of that count. The simulation above was conducted with zero noise, mostly to illustrate the pure geometric effects of the fill factor.<br /><br />Speaking of geometric effects, a slight diversion into the interaction between edge orientation and photosite aperture shape is in order.<br /><br /><h3>Square photosites are anisotropic</h3>It is rather important to recall that an MTF curve is only a 1D cross-section of the true 2D MTF. If the 2D MTF is radially symmetric (e.g., the Airy MTF due to a circular lens aperture), then the orientation of our 1D cross-section is irrelevant.<br /><br />The 2D sinc() function representing the MTF of a square aperture is not radially symmetric, hence the 1D MTF curve is only representative of the specific orientation that was chosen. The results in this post were all derived using a combined Airy and photosite aperture simulation; since the Airy MTF is radially symmetric, and the photosite aperture MTF is not, we can expect the combined system MTF to lack perfect radial symmetry. The question remains, though: is the combined MTF symmetric enough to ignore this matter entirely?<br /><br />Feeling somewhat lazy today, I chose to evaluate this empirically, rather than deriving the analytical combined MTF at arbitrary orientations.
Since we can directly simulate the edge spread function of a given PSF using <span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span>, I decided to vary the orientation of the simulated step edge relative to the simulated photosite grid, which is equivalent to taking our 1D cross-section of the 2D MTF at the chosen orientation.<br /><br />Before we get to the results, first some predictions: We saw that the first zero of the sinc() low-pass filter of the square photosite aperture moved to a higher frequency when we decreased the fill factor. Intuitively, a wider photosite aperture produces stronger low-pass filtering. The length of the diagonal of a square is √2 × <i>side_length</i>, so we might expect a stronger low-pass filtering effect if the step edge is parallel to a diagonal of the square photosite aperture. And now the results ...<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-qN_7ZUWs0Uo/UmjoLuapmBI/AAAAAAAAAk8/icbvIM6EbUU/s1600/box_0vs45.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://2.bp.blogspot.com/-qN_7ZUWs0Uo/UmjoLuapmBI/AAAAAAAAAk8/icbvIM6EbUU/s400/box_0vs45.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">MTF curves of a square photosite (plus diffraction) at different edge orientations</td></tr></tbody></table>Notice that there is a minute difference: the 45-degree edge orientation produced a slightly <i>weaker</i> low-pass filtering effect!<br />Subtracting the 45-degree MTF curve from the 0-degree MTF curve gives us a better view of the difference:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td 
style="text-align: center;"><a href="http://2.bp.blogspot.com/-r_Mokh-ftUo/UmjocfCLXPI/AAAAAAAAAlE/IPKey0U7C1M/s1600/box_0vs45diff.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://2.bp.blogspot.com/-r_Mokh-ftUo/UmjocfCLXPI/AAAAAAAAAlE/IPKey0U7C1M/s400/box_0vs45diff.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">MTF difference between 0-degree edge and 45-degree edge</td></tr></tbody></table>The difference certainly appears to be structured, and not in the expected direction. Well, certainly not the direction that I expected.<br /><br />Fortunately the explanation is relatively simple. Consider the following diagram:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-3ZR27cbVItw/Umjx5kQ_dmI/AAAAAAAAAlU/7H6gQf58U7s/s1600/square_integration.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="182" src="http://4.bp.blogspot.com/-3ZR27cbVItw/Umjx5kQ_dmI/AAAAAAAAAlU/7H6gQf58U7s/s400/square_integration.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Representation of the area of the photosite (orange) covered by the step edge (blueish), for 0-degree and 45-degree edge orientations</td></tr></tbody></table>If <i>w </i>represents the side length of our square, then the left-hand diagram shows us that the area covered by the 0-degree step edge is simply <i>t </i>× <i>w</i> over the range 0 < <i>t </i>< <i>w</i>/2. The right-hand diagram illustrates that the area covered by the 45-degree step edge (bluish rectangle) is <i>t × t</i>, with <i>t</i> measured along the diagonal, over the range 0 < t < √0.5 × <i>w </i>(in both cases we only have to integrate up to the midpoint to study the behaviour in question).
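These coverage functions are easy to cross-check numerically. In the sketch below (w = 1, with t measured perpendicular to the edge) the 0-degree coverage grows as t, the 45-degree coverage comes out as t × t in this parametrisation, and aligning the two curves so that both reach an area of 0.5 at t = 0.5 puts their crossover near t ≈ 0.09:

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.random((1_000_000, 2))       # uniform samples in a unit square (w = 1)

t = 0.3
area_0 = np.mean(pts[:, 0] <= t)                               # ~ t
area_45 = np.mean((pts[:, 0] + pts[:, 1]) / np.sqrt(2) <= t)   # ~ t*t

# Align the curves so both reach an area of 0.5 at t = 0.5; the 45-degree
# curve then starts at 0.5 - sqrt(0.5) ~ -0.2071.  Find where the 0-degree
# curve catches up, i.e. solve (t + shift)^2 = t by bisection.
shift = np.sqrt(0.5) - 0.5
lo, hi = 1e-6, 0.4
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if (mid + shift) ** 2 - mid > 0.0:
        lo = mid
    else:
        hi = mid
crossover = 0.5 * (lo + hi)
print(area_0, area_45, crossover)      # ~0.30, ~0.09, ~0.086
```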
The area covered by the step edge can be plotted as functions:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-3DoHAgq6qJY/Umj3DVJjx7I/AAAAAAAAAlw/XlRPEfqCcjI/s1600/square_integration_plot.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://4.bp.blogspot.com/-3DoHAgq6qJY/Umj3DVJjx7I/AAAAAAAAAlw/XlRPEfqCcjI/s400/square_integration_plot.png" width="400" /></a></div>We can see that although the 45-degree case starts out with a lead (the first part of the corner starts at roughly -0.2071 if we align them so that they reach an area of 0.5 simultaneously), the 0-degree case catches up near t=0.1. From that point onwards, the 0-degree step edge covers a larger part of the photosite aperture than the 45-degree step edge does. In practice, this means that although the 45-degree case is technically "wider", the 0-degree case presents a stronger low-pass filter. Keep in mind that on top of this rather small difference due to the anisotropy of the square photosite aperture, we are blending in the radially symmetric Airy MTF, which further suppresses the anisotropy.<br /><br />The size of this effect is minute, as can be seen in the MTF difference diagram above. The MTF50 values are ≈0.3407 and ≈0.342 for the 0-degree and the 45-degree cases, respectively. In conclusion, we see that the anisotropy of the square photosite aperture is mostly masked by the strong isotropy of the Airy MTF at f/8. At larger apertures, the anisotropy is likely to be more apparent, but further analyses will be performed with a step edge orientation of 0 degrees only.<br /><br /><h2>Approximating microlenses</h2>It has been suggested that the microlenses alter the effective shape of the active area of a photosite. (For example, reader IlliasG contributed this info <a href="http://mtfmapper.blogspot.com/2012/06/nikon-d40-and-d7000-aa-filter-mtf.html">here</a>).
A regular polygon approximating a circle seems to be a reasonable starting point for simulating more realistic microlenses. Similar to the fill factor implementation, this merely requires swapping out the polygon used to specify the geometry of the active part of the photosite, and performing importance sampling as usual. (If you can point me at a more accurate description of the effective shape of the combined microlens and photosite aperture, I would be happy to incorporate that into MTF Mapper).<br /><br />Before we look at the results, first a prediction: modelling the active area of the photosite as a circular disc, we should see a net decrease of the geometric fill factor, hence the low-pass filtering effect is expected to <i>decrease</i>.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-d87eXVRnvC4/UmkAXzxkNuI/AAAAAAAAAmA/2hvgfYkaP_g/s1600/box_vs_ml.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://3.bp.blogspot.com/-d87eXVRnvC4/UmkAXzxkNuI/AAAAAAAAAmA/2hvgfYkaP_g/s400/box_vs_ml.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">MTF curve of square photosite aperture (1x1) versus circular photosite aperture (radius 1)</td></tr></tbody></table>No real surprises in these results. For a circular photosite aperture, I chose the circle <i>inscribed</i> in the square photosite, since this seemed more reasonable. Note that the fill factor of the circular photosite aperture is ≈78.4%, rather than the expected π/4 ≈ 78.54%, because I approximated the circle as a 60-sided regular polygon.
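The ≈78.4% figure checks out: the area of a regular n-gon inscribed in a circle of radius r is (n/2)·r²·sin(2π/n), and for n = 60 and r = 0.5 (the circle inscribed in a unit-pitch photosite) this comes to just under π/4. A one-liner to confirm:

```python
import math

def regular_polygon_fill_factor(n):
    # Area of a regular n-gon inscribed in a circle of radius 0.5,
    # i.e. the inscribed circle of a unit-pitch square photosite.
    r = 0.5
    return 0.5 * n * r * r * math.sin(2.0 * math.pi / n)

print(regular_polygon_fill_factor(60))   # ~0.7840, vs pi/4 ~ 0.7854
```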
So how much of the difference between the 100% fill factor square aperture and the 78% fill factor circular aperture is due directly to fill factor, and how much is due to the actual shape?<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-4soUKau7XgI/UmkTD4-zBZI/AAAAAAAAAmQ/4Sc0-fIxXdo/s1600/box_vs_ml_ff78_diff.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://1.bp.blogspot.com/-4soUKau7XgI/UmkTD4-zBZI/AAAAAAAAAmQ/4Sc0-fIxXdo/s400/box_vs_ml_ff78_diff.png" width="400" /></a></div>By subtracting the MTF curves as indicated in the legend of the plot above, we can see that, after matching the effective fill factor, the remaining differences are quite small. From the red dashed curve we can see that the circular (well, 60-sided regular polygon) photosite aperture behaves isotropically, whereas the 78% fill factor square photosite aperture still exhibits anisotropy (dashed blue curve).<br /><br /><br /><h2>Conclusion</h2>I have not performed sufficient experiments to make any inferences regarding behaviour at larger apertures, but at f/8 on a 4.73 micron pitch, it definitely appears as if the geometric fill factor of the photosite is responsible for the bulk of the difference between a 100% fill factor square photosite and a 78% fill factor inscribed circular photosite aperture.<br /><br />Once we match the effective fill factors, the differences between the square aperture and the circular aperture are of the same magnitude as the differences due to the anisotropy of the square aperture. At larger apertures, we should see more significant differences, but at f/8 the differences are not as significant as one might suspect.<br /><br />I would like to revisit my D40 experiment armed with the new fill factor and photosite geometry functionality in MTF Mapper.
Stay tuned for that!<br /><br />MTF Mapper will include new options for controlling photosite aperture fill factor and shape from version 0.4.16 onwards, which should be released shortly.<br /><br />Frans van den Bergh<br /><br /><b>How sharpness interacts with the accuracy of the slanted edge method</b> (2013-10-11)<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-y9WvMR1_nXI/Ule9LlyD43I/AAAAAAAAAkQ/D2o9Iq4ktv0/s1600/test.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="240" src="http://4.bp.blogspot.com/-y9WvMR1_nXI/Ule9LlyD43I/AAAAAAAAAkQ/D2o9Iq4ktv0/s400/test.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">How MTF50 error varies with sharpness (click for a larger version)</td></tr></tbody></table>Just a brief post to show how the absolute error in MTF50 measurement increases with increasing MTF50 values. The chart above is a box plot of the MTF50 error at a range of MTF50 values.<br /><br />Before we jump into a discussion of the chart itself, I would like to quickly explain how these values were obtained. Inside the MTF Mapper package you will find a tool called <span style="font-family: "Courier New", Courier, monospace;">mtf_generate_rectangle.exe</span> (in the Windows distribution). This tool generates synthetic images comprising a black rectangular target on a white background, i.e., exactly like the blocks you see in typical test charts (e.g., Imatest charts).
These synthetic images simulate a specified point spread function, optionally adding some realistic sensor noise to create a synthetic image that is quite close to that which you would be able to capture with your actual camera. Since the point spread function controls the resulting MTF50 value of the image, we can choose to generate an image with an exact, known MTF50 value. The chart above is thus obtained by generating a large number of synthetic images at each of the MTF50 levels indicated on the x-axis. The MTF50 error is just the difference between the known MTF50 value of a given synthetic image, and the actual MTF50 value measured by MTF Mapper on the same image. By generating a number of images at each MTF50 level, each image with a pseudo-random noise component that differs slightly from the other images at the same MTF50 level, we obtain the distribution of MTF Mapper's measurement error at the given MTF50 level. With that out of the way, what can we learn from the chart?<br /><br />The black bar in the centre of each red box is the median error, which stays fairly close to zero. This is good news, since it means that on average MTF Mapper measurements are unbiased.<br /><br />The red box itself gives an indication of the spread of the MTF50 measurement error. The most important message here is that the absolute MTF50 measurement error increases with the nominal MTF50 level. If you have a sharp lens, the absolute measurement error (in cycles per pixel, or line pairs per mm) will be greater than that of a soft lens. In my experience, a sharp lens will have an MTF50 value of about 0.25 cycles per pixel or higher when perfectly focused.<br /><br />If we divide the MTF50 error by the MTF50 level to obtain the relative error (e.g., the percentage error), we still see the same trend of increasing relative error with increasing MTF50 level. 
I did not include a plot of that, but MTF Mapper's measurement error remains below 5% at real-world noise levels all the way up to an MTF50 value of 0.5 cycles per pixel. You will never obtain MTF50 values that high from a normal DSLR. For a more realistic value of about 0.3 cycles per pixel (a really, really sharp lens), MTF Mapper's relative measurement error will remain below 2% at real-world noise levels.<br /><br />The bottom line: it is harder to obtain an accurate MTF50 estimate of a sharp lens than it is to do so for a soft lens. In reality, this means you have to evaluate more samples (images) for sharp lenses than for soft lenses.<br /><br /><h3>What about Imatest or DxOLabs measurements?</h3>I do not own a copy of either, so I could not test their software comprehensively using the same method. I can tell you that other freely available slanted edge implementations (e.g., Mitre) behave in exactly the same way as MTF Mapper did on the same synthetic images.<br /><br />Looking at the maths behind the slanted edge method, I would expect that all implementations should behave exactly like MTF Mapper in this regard, i.e., the measurement error increases with increasing MTF50 values. This follows directly from the steeper slope we see in the MTF curve of a sharp lens, which means that the MTF50 value is more sensitive to small observation errors, such as those caused by sensor noise.<br /><br />DxO uses a different method of computing sharpness, but ultimately they end up evaluating the MTF curve as well, so their method is likely to be similarly affected by increasing sensitivity to sensor noise with increasing nominal sharpness.<br /><br /><h3>How to obtain your own copy of MTF Mapper</h3>MTF Mapper is a free-as-in-beer Open Source project, currently hosted on <a href="http://sourceforge.net/projects/mtfmapper/">Sourceforge.net</a>. You can download pre-built binaries for both Windows and Ubuntu Linux, as well as the source code if you like.
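To make the earlier sensitivity argument concrete, here is a toy model (a hypothetical exponential MTF curve, chosen only because its MTF50 is trivial to control; real lens MTF curves are not exponential): perturb the curve by a small vertical error ε and watch how far the 50% crossing moves. The shift grows in direct proportion to the nominal MTF50 value:

```python
import numpy as np

f = np.linspace(0.0, 2.0, 2_000_001)   # frequency axis, cycles/pixel
eps = 0.01                             # small vertical measurement error

shifts = []
for f50 in (0.1, 0.2, 0.3):
    mtf = np.exp(-f * np.log(2.0) / f50)       # toy curve with MTF50 = f50
    measured = mtf + eps                       # perturbed "measurement"
    crossing = f[np.argmax(measured < 0.5)]    # first f where the curve < 0.5
    shifts.append(crossing - f50)
print(shifts)   # roughly [0.0029, 0.0058, 0.0087]: the error scales with MTF50
```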
Frans van den Bergh<br /><br /><b>Effects of ISO on MTF50 measurements</b> (2013-01-31)<br /><br /><h3>How does the ISO setting affect my MTF50 measurements?</h3>This question arose in the comments section of Roger Cicala's blog (specifically, <a href="http://www.lensrentals.com/blog/2013/01/a-24-70mm-system-comparison/comment-page-1#comment-28163">this article</a>). The short answer is: the expected MTF50 score (the mean, in other words) is unaffected by high ISO noise, but the reliability of MTF50 measurements decreases with increasing ISO settings.<br /><br />The <span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span> <span style="font-family: inherit;">tool included in the MTF Mapper</span> package allows us to generate synthetic images of black rectangular targets on a white background, while specifying the exact MTF50 value we would like to obtain. In addition, we can simulate sensor noise either as plain additive Gaussian white noise, or using a more realistic (but still simple) noise model that incorporates three noise terms: read noise, photon shot noise, and pixel-response non-uniformity (PRNU).<br /><br />To generate noise using the more realistic sensor noise model, we must provide three parameters: <i>PRNU magnitude</i>, <i>read noise magnitude</i> (standard deviation in electrons), and <i>ADC gain</i> (electrons per DN). I used <a href="http://www.dpreview.com/members/8931023692">Marianne Oelund's</a> data (<a href="http://forums.dpreview.com/forums/post/37165492">posted here</a>) for the D7000 sensor.
An example of the command to generate a realistic ISO 100 image using our simulated D7000 sensor would be:<br /><br /><span style="font-family: "Courier New",Courier,monospace;">./mtf_generate_rectangle -m 0.3 -l --b16 --pattern-noise 0.0085 --read-noise 3.7 --adc-gain 2.643 --adc-depth 12 -c 0.33</span><br /><br /><span style="font-family: inherit;">(Please excuse the --adc-depth 12 parameter. I know the D7000 has a 14-bit ADC, but specifying 12 here is a dirty hack to produce an output file that covers the full 16-bit range).</span> I have verified that the statistics of a synthetic image generated using these parameters matches that obtained from an actual raw D7000 image (currently, I have only verified at ISO 100).<br /><br />To generate a simulated image at ISO 400, you can use this command:<br /><span style="font-family: "Courier New",Courier,monospace;">./mtf_generate_rectangle -m 0.3 -l --b16 --pattern-noise 0.0085 --read-noise 2.5 --adc-gain 0.641 --adc-depth 12 -c 0.33</span><br /><br /><span style="font-family: inherit;">Note that I have compressed the contrast of the simulated edge quite a bit (the -c 0.33 parameter) to avoid clipping at higher ISO values. This will increase MTF50 standard deviation values slightly --- I usually use -c 0.2 when generating ISO 100 images.</span><br /><br /><span style="font-family: inherit;">Ok, enough of the preamble. I decided on an MTF50 value of 0.3 cycles/pixel, which is close to the best value I have managed to obtain from the D7000. This corresponds to 979 line pairs per picture height (the units that Roger reports his results in). 
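Before moving on to the results, a brief aside on the noise model itself. The sketch below is my own simplified stand-alone re-implementation for illustration, not the code inside <span style="font-family: "Courier New",Courier,monospace;">mtf_generate_rectangle</span>; PRNU is modelled as a Gaussian per-pixel gain variation. With the ISO 100 parameters above, the variance of a uniform patch should come out close to the sum of the shot, read and PRNU variances:

```python
import numpy as np

def simulate_patch(mean_e, read_noise_e, prnu, n, seed=0):
    """Simulate n pixels of a uniform patch, in electrons: Poisson shot
    noise, fixed-pattern PRNU gain variation, and Gaussian read noise."""
    rng = np.random.default_rng(seed)
    gain = 1.0 + prnu * rng.standard_normal(n)   # pixel response non-uniformity
    photons = rng.poisson(mean_e, n)             # photon shot noise
    return gain * photons + read_noise_e * rng.standard_normal(n)

# D7000-ish ISO 100 parameters from the text: read noise 3.7 e-, PRNU 0.85%
patch = simulate_patch(1000.0, 3.7, 0.0085, 200_000)
expected_var = 1000.0 + 3.7 ** 2 + (0.0085 * 1000.0) ** 2
print(patch.var(), expected_var)   # the two should agree to within a few percent
```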
I generated 400 synthetic edges at the various simulated ISO settings; here is the resulting box plot:</span><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-Yr_B3rmAJqs/UQolfl7oeUI/AAAAAAAAAf4/1iu6FUST8LY/s1600/m0.3_c0.33_boxplot.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://2.bp.blogspot.com/-Yr_B3rmAJqs/UQolfl7oeUI/AAAAAAAAAf4/1iu6FUST8LY/s400/m0.3_c0.33_boxplot.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Measured MTF50 values (lp/ph), with a target of 979 lp/ph</td></tr></tbody></table><span style="font-family: inherit;">It is pretty clear from the plot that the median MTF50 value remains largely unchanged over the whole ISO range, only dipping slightly at ISO 12800, but this may simply be a sampling artifact given the large standard deviation.</span><br /><span style="font-family: inherit;"><br /></span><span style="font-family: inherit;">This graph tells us two important facts: </span><br /><ol><li><span style="font-family: inherit;">The mean MTF50 value (over many experiments) is unaffected by ISO setting, and</span></li><li><span style="font-family: inherit;">The standard deviation of MTF50 values increases with increasing ISO setting.
In other words, if you are forced to use ISO 1600, you will have to take many more measurements to obtain a reasonable estimate of your mean MTF50 value.</span></li></ol><span style="font-family: inherit;">The same results are presented below in tabular form:</span><br /><br /><span style="font-family: "Courier New",Courier,monospace;"></span> <br /><center><table border="0" cellspacing="0" cols="3"> <colgroup span="3" width="89"></colgroup> <tbody><tr> <td align="RIGHT" bgcolor="#FFCC99" height="18" ign="LEFT" style="border-bottom: 1px solid #000000; border-top: 1px solid #000000;">ISO</td> <td align="RIGHT" bgcolor="#FFCC99" style="border-bottom: 1px solid #000000; border-top: 1px solid #000000;">Mean MTF50 (lp/ph)</td> <td align="RIGHT" bgcolor="#FFCC99" style="border-bottom: 1px solid #000000; border-top: 1px solid #000000;">Std.dev</td> </tr><tr> <td align="RIGHT" height="17" sdnum="1033;" sdval="100">100</td> <td align="RIGHT" sdnum="1033;0;0.0" sdval="980.0193">980.0</td> <td align="RIGHT" sdnum="1033;0;0.0" sdval="10.07233">10.1</td> </tr><tr> <td align="RIGHT" bgcolor="#CFE7E5" height="17" sdnum="1033;" sdval="200">200</td> <td align="RIGHT" bgcolor="#CFE7E5" sdnum="1033;0;0.0" sdval="979.7256">979.7</td> <td align="RIGHT" bgcolor="#CFE7E5" sdnum="1033;0;0.0" sdval="12.63983">12.6</td> </tr><tr> <td align="RIGHT" height="17" sdnum="1033;" sdval="400">400</td> <td align="RIGHT" sdnum="1033;0;0.0" sdval="979.189">979.2</td> <td align="RIGHT" sdnum="1033;0;0.0" sdval="16.68402">16.7</td> </tr><tr> <td align="RIGHT" bgcolor="#CFE7E5" height="17" sdnum="1033;" sdval="800">800</td> <td align="RIGHT" bgcolor="#CFE7E5" sdnum="1033;0;0.0" sdval="978.7639">978.8</td> <td align="RIGHT" bgcolor="#CFE7E5" sdnum="1033;0;0.0" sdval="22.43218">22.4</td> </tr><tr> <td align="RIGHT" height="17" sdnum="1033;" sdval="1600">1600</td> <td align="RIGHT" sdnum="1033;0;0.0" sdval="979.1205">979.1</td> <td align="RIGHT" sdnum="1033;0;0.0" sdval="32.06091">32.1</td> </tr><tr> <td 
align="RIGHT" bgcolor="#CFE7E5" height="17" sdnum="1033;" sdval="3200">3200</td> <td align="RIGHT" bgcolor="#CFE7E5" sdnum="1033;0;0.0" sdval="979.1245">979.1</td> <td align="RIGHT" bgcolor="#CFE7E5" sdnum="1033;0;0.0" sdval="45.22945">45.2</td> </tr><tr> <td align="RIGHT" height="17" sdnum="1033;" sdval="6400">6400</td> <td align="RIGHT" sdnum="1033;0;0.0" sdval="979.2275">979.2</td> <td align="RIGHT" sdnum="1033;0;0.0" sdval="65.99316">66.0</td> </tr><tr> <td align="RIGHT" bgcolor="#CFE7E5" height="17" sdnum="1033;" sdval="12800" style="border-bottom: 1px solid #000000;">12800</td> <td align="RIGHT" bgcolor="#CFE7E5" sdnum="1033;0;0.0" sdval="976.572" style="border-bottom: 1px solid #000000;">976.6</td> <td align="RIGHT" bgcolor="#CFE7E5" sdnum="1033;0;0.0" sdval="91.75438" style="border-bottom: 1px solid #000000;">91.8</td> </tr></tbody></table></center><br />I did perform a quick Shapiro-Wilk normality test, and the data are consistent with a Gaussian distribution within each ISO category, so you can go ahead and compute confidence intervals using n=400 and the values in the table above.<br /><br />If you want to perform MTF50 testing at ISO 800, I would recommend that you compute the trimmed mean using only the middle 50% of the data (around the median). Put differently, at ISO 800 you could capture twice as many MTF50 test charts, and compute your mean using the middle 50%, which should yield comparable results to performing the test at ISO 100.<br /><br />Even so, you should find that about 80% of your measurements will have an error of less than 29 lp/ph at ISO 800, which amounts to only 3%. I would be willing to bet that other errors (quality of your test chart, vibrations, etc.) will probably be of a similar magnitude, so I would not really worry too much about performing MTF50 measurements at ISO 800.<br /><br /><h3>Why is MTF50 not affected by ISO?</h3><div>These results go counter to any subjective evaluation of sharpness at higher ISO settings. 
The reason is fairly simple: the <i>slanted edge </i>method used to compute the MTF50 measurements applies fairly heavy noise suppression internally. </div><div><br /></div><div>Specifically, the slanted edge method constructs an oversampled edge profile across the edge being measured. This effectively involves computing the mean image intensity along lines running parallel to the edge. Since the signal level (intensity) along any line parallel to the edge is expected to be constant, the unweighted mean along such a line is the maximum likelihood estimate of a constant corrupted by additive Gaussian white noise, and thus offers maximal noise suppression. The result is that the sensor noise (aggravated by higher ISO settings) is filtered out quite effectively.</div><div><br /></div><div>The moral of the story is that the slanted edge method does exactly what it is supposed to do: measure the modulation transfer function (MTF) of the optical system. This is a property of the optical system that is independent of sensor noise. It does lead to some confusion, though, since subjective evaluation of noisy high-ISO images definitely creates the perception of reduced sharpness. This implies that we must appreciate the difference between an MTF estimated using the slanted edge method, and the (subjective) human perception of sharpness, which <i>is</i> sensitive to high-ISO noise.<br /><br /><h3><span style="font-family: inherit;">Why we see increased variability at high ISO</span></h3><span style="font-family: inherit;">We have seen some evidence (the constant mean MTF50 with increasing ISO in the experimental results above) that MTF50 is not affected by typical sensor noise, and we have a plausible mechanism to explain why we should expect slanted edge MTF50 values to be unaffected</span> by increasing sensor noise. 
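The unweighted-mean argument is easy to check numerically. Here is a toy sketch (illustrative only, not MTF Mapper's code): each "bin" of the oversampled edge profile is modelled as the unweighted mean of N noisy samples of a constant signal, and the noise in the bin shrinks by a factor of √N.

```python
import random
import statistics

random.seed(42)

# Toy model: a constant signal level with additive Gaussian white noise,
# averaged over N samples per profile bin (values are arbitrary examples).
signal, sigma, N = 0.5, 0.1, 64

bin_means = [
    statistics.fmean(random.gauss(signal, sigma) for _ in range(N))
    for _ in range(2000)
]

mean_of_means = statistics.mean(bin_means)
sd_of_means = statistics.stdev(bin_means)
print(mean_of_means)  # close to the true signal level of 0.5
print(sd_of_means)    # close to sigma / sqrt(N) = 0.0125
```

With N = 64 samples per bin the noise drops by a factor of eight, which gives a feel for why the estimated MTF remains so stable even at high ISO.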
Despite this, we observe an increase in MTF50 standard deviation, which could only mean that on some high ISO runs we obtain MTF50 values that differ a lot from the mean MTF50 value.<br /><br />I can think of two possible explanations:<br /><ol><li>The slanted edge method cannot suppress the noise completely, thus at higher noise levels we will see some errors creeping in (because the edge profile becomes noisy). </li><li>The slanted edge method depends on accurate estimation of the edge orientation. This becomes harder to achieve when the image is noisy. MTF Mapper employs three different techniques to try and improve the accuracy of edge orientation estimates (least-squares fitting to the edge, joint estimation of parallel edges, and systematic fine-tuning to reduce local edge profile variance), but orientation errors do tend to increase with increasing image noise.</li></ol></div><h3>Appendix</h3>Of course, people would like to see what the simulated images look like, so here is a selection:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-sGziEqxwnRg/UQo2I4Q4MuI/AAAAAAAAAgU/aveZMchMREQ/s1600/rect_100.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://3.bp.blogspot.com/-sGziEqxwnRg/UQo2I4Q4MuI/AAAAAAAAAgU/aveZMchMREQ/s1600/rect_100.png" /> </a></td><td style="text-align: center;"></td></tr><tr><td class="tr-caption" style="text-align: center;">ISO 100</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-L-hc4mMc8Ts/UQo2Qp2174I/AAAAAAAAAgc/MQJchC-ulPM/s1600/rect_400.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img 
border="0" src="http://2.bp.blogspot.com/-L-hc4mMc8Ts/UQo2Qp2174I/AAAAAAAAAgc/MQJchC-ulPM/s1600/rect_400.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">ISO 400</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-DqD2ET6bwHk/UQo2WmbkCrI/AAAAAAAAAgk/9NPbOugCXL4/s1600/rect_800.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://2.bp.blogspot.com/-DqD2ET6bwHk/UQo2WmbkCrI/AAAAAAAAAgk/9NPbOugCXL4/s1600/rect_800.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">ISO 800</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-H_93INI-_Ao/UQo2cXajZYI/AAAAAAAAAgs/0kBuY8c8xXk/s1600/rect_1600.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://3.bp.blogspot.com/-H_93INI-_Ao/UQo2cXajZYI/AAAAAAAAAgs/0kBuY8c8xXk/s1600/rect_1600.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">ISO 1600</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-iwTfX9gpp8Q/UQo2hd4V0FI/AAAAAAAAAg0/MA6t7eoG_2U/s1600/rect_3200.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://2.bp.blogspot.com/-iwTfX9gpp8Q/UQo2hd4V0FI/AAAAAAAAAg0/MA6t7eoG_2U/s1600/rect_3200.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">ISO 3200</td><td class="tr-caption" style="text-align: center;"><br 
/></td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-0pMI_R9QILA/UQo2u_2JTgI/AAAAAAAAAg8/XTfZP-jRHOY/s1600/rect_6400.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-0pMI_R9QILA/UQo2u_2JTgI/AAAAAAAAAg8/XTfZP-jRHOY/s1600/rect_6400.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">ISO 6400</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-rHmBlW_WE90/UQo20fyRvXI/AAAAAAAAAhE/-8tr5J4K4m4/s1600/rect_12800.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://2.bp.blogspot.com/-rHmBlW_WE90/UQo20fyRvXI/AAAAAAAAAhE/-8tr5J4K4m4/s1600/rect_12800.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">ISO 12800</td></tr></tbody></table><br />A few comments are in order. These images are simulations of the green channel (which is the channel from which Marianne's data was derived). Also note that the images are displayed here in sRGB space as 8-bit files, but that the actual measurements were performed on their linear 16-bit counterparts. 
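For reference, the sRGB encoding referred to here is the standard transfer function (these are the standard IEC 61966-2-1 formulas, not MTF Mapper's code):

```python
def srgb_encode(linear):
    """Linear [0,1] -> sRGB [0,1], standard sRGB transfer function."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1.0 / 2.4) - 0.055

def srgb_decode(srgb):
    """sRGB [0,1] -> linear [0,1], the inverse transfer function."""
    if srgb <= 0.04045:
        return srgb / 12.92
    return ((srgb + 0.055) / 1.055) ** 2.4

# An 18% linear grey encodes to roughly 46% in sRGB; measuring on the
# encoded values would therefore distort the edge profile.
print(round(srgb_encode(0.18), 3))
```

This is why the measurements are performed on the linear data, while the sRGB versions are shown only for display.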
CFA demosaicing may actually affect your MTF50 measurements, but this should not have a huge impact unless the lens has severe spherical or chromatic aberration, so I chose to work only with the green channel for simplicity.Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com4tag:blogger.com,1999:blog-6555460465813582847.post-40193684942362923912012-11-02T06:37:00.000-07:002012-11-02T06:37:14.289-07:00Importance sampling: How to simulate diffraction and OLPF effectsSo how exactly does one render a synthetic image that simulates both the effects of diffraction (circular aperture) and the 4-dot blur of a Lithium Niobate optical low-pass filter (OLPF)?<br /><br />Before jumping into the combined effect, I will start with a method of rendering the effects of diffraction only.<br /><br /><h3>Diffraction simulation using weights on a regular grid</h3>The most intuitive method of rendering diffraction effects is direct convolution. This involves convolution in the spatial domain, i.e., for each pixel in the output image, sample the underlying scene (usually a black square on a white rectangle in my examples) at many points arranged on a fine, sub-pixel-spaced regular grid. Each of these samples is then multiplied with an Airy disc function centred on that pixel, and added to the running total for each pixel.<br /><br />This works reasonably well because it is very simple to sample the underlying scene: you just have to determine whether a sample point is inside the black rectangle, or not. The appropriate weights for the Airy disc function are obtained by directly evaluating the appropriately scaled jinc function (see <a href="http://mtfmapper.blogspot.com/2012/06/diffraction-and-box-filters.html">here</a> for an overview).<br /><br />This regular grid sampling strategy is described in some detail in a <a href="http://mtfmapper.blogspot.com/2012/04/accurate-method-for-rendering-synthetic.html">previous post</a>. 
It works well enough for some functions, but it turns out to be a poor choice for the Airy disc function, for which the weights are close to zero almost everywhere outside of the central peak:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-mOqywAg7_Lo/UBZrusNmRNI/AAAAAAAAAXk/ogQozgfegN4/s1600/d2_sjinc_psf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="320" src="http://1.bp.blogspot.com/-mOqywAg7_Lo/UBZrusNmRNI/AAAAAAAAAXk/ogQozgfegN4/s320/d2_sjinc_psf.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The Airy disc function (jinc squared)</td><td class="tr-caption" style="text-align: center;"><br /></td></tr></tbody></table>Those peripheral rings have very low magnitude compared to the central peak, which means that samples from those regions will have a much smaller influence on our answer than samples from the centre.<br /><br />Looking at the problem from another angle we see that this discrete sampling and convolution is really just a discrete approximation to the continuous convolution of the Airy function and the underlying scene, sampled at the pixel positions of our synthetic output image. This implies that other techniques, such as Monte Carlo integration, could potentially be applied to evaluate the integral.<br /><br /><h3>Monte Carlo integration</h3>It may sound exotic, but MC integration is a very straightforward concept. Imagine that you want to compute the surface area of a unit circle (radius=1). Fit a square tightly around this circle (say, width=2, centred on circle), and generate (<i>x,y</i>) coordinates randomly (uniform distribution) within this square. 
For each of these random coordinates that fall inside the circle, increment a counter A.<br />After a fair number of samples (N), the value A/N will approximate <i>π/4</i>, which is the ratio of the area of the circle to that of the square.<br /><br />This may seem like a roundabout method of computing the area of a circle, but the method is very general. For example, instead of simply incrementing the counter A when a point is inside the circle, we could have evaluated some function f(<i>x,y</i>) at each point inside the circle. This means that we can approximate complex double integrals over complex, non-convex regions, without breaking a sweat.<br /><br />Note that this method is very similar to the direct convolution approach described above, and if we assume that our "random" coordinates just happened to fall on a regular grid, then we can see that these two approaches are really very similar. So why would you choose random coordinates over a regular grid?<br /><br />I can think of two disadvantages to a regular grid: 1) you have to know how finely spaced your grid should be in advance, and 2) your regular grid may interfere with the shape of the region you are integrating over. If you have some way of measuring convergence (say, the variance of your estimate), then you can keep on generating random samples until convergence; with a regular grid, you must sample all the points, or your integral will be severely biased.<br /><br />Sampling using random (<i>x,y</i>) coordinates is not optimal, though, since random numbers have a tendency of forming clumps, and leaving gaps (they must, or they would not be random!). What works better in practice is a quasi-random sequence of numbers, such as a Sobol sequence or the Halton sequence. 
These quasi-random sequences will fill the space more evenly, and you can still stop at any point during the integration if convergence has been achieved; they also tend to produce a lower variance in the integral than random sampling for the same number of samples.<br /><br /><h3>Importance sampling</h3>While uniform random number sequences (and uniform quasi-random sequences) provide a convenient way of choosing new sampling positions for our Monte Carlo integrator, they can be wasteful if the function we are integrating has large regions over which the function value is small compared to the function's maximum, such as the Airy disc function above. What would happen if we could choose our sampling points in a biased manner, such that we choose sampling positions proportional to the function value at those positions?<br /><br />This is the essence of importance sampling. If we can choose our sampling points according to a distribution that better fits the shape of the function we are integrating, then we can concentrate our samples in the areas that have a larger function value (weight), and are thus more important. This strategy reduces the variance of our MC integration procedure significantly.<br /><br />Take the Airy disc function as an example: we could choose our sampling positions according to a Gaussian distribution. This will concentrate samples closer to the centre, but we have to be careful to keep the standard deviation of the Gaussian wide enough so that sufficient samples can be generated from the outer rings of the Airy disc function. 
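The radical inverse at the heart of the Halton sequence takes only a few lines. Here is a sketch (illustrative only, not MTF Mapper's implementation) that replays the earlier circle-area example with quasi-random points:

```python
def halton(index, base):
    # Radical-inverse (van der Corput) value of `index` in `base`; a 2D
    # Halton sequence pairs two such sequences with co-prime bases.
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

# Estimate pi/4 as the fraction of quasi-random points in [0,1]^2 that
# fall inside the unit circle (after mapping the square to [-1,1]^2).
N = 20000
inside = sum(
    1 for i in range(1, N + 1)
    if (2.0 * halton(i, 2) - 1.0) ** 2 + (2.0 * halton(i, 3) - 1.0) ** 2 <= 1.0
)
estimate = 4.0 * inside / N
print(estimate)  # close to pi
```

Running the same experiment with pseudo-random points typically wanders further from the true value for the same N, which is the lower-variance property mentioned above.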
There is one very important thing to remember when applying importance sampling: you must divide the function value at each sampling position by the probability of observing that sampling position.<br /><br />Thus, if we generate our sampling points using a Gaussian distribution, then we have a Gaussian pdf p(<i>x,y</i>) = exp(-x<sup>2</sup>/(2s<sub>x</sub><sup>2</sup>) - y<sup>2</sup>/(2s<sub>y</sub><sup>2</sup>)) / (2πs<sub>x</sub>s<sub>y</sub>), where s<sub>x</sub> and s<sub>y </sub>denote the respective standard deviations in <i>x</i> and <i>y</i>, which means we must add the value f(<i>x,y</i>)/p(<i>x,y</i>) to our accumulator, rather than f(<i>x,y</i>)/N, as we would normally do with a regular grid or with uniform sampling. This is what it looks like when we are rendering a grey rectangle on a white background:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-njDe-2-utFM/UJOp96a-tmI/AAAAAAAAAcI/vy8r9z7z0VI/s1600/pixel_fragment_gauss_is.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://2.bp.blogspot.com/-njDe-2-utFM/UJOp96a-tmI/AAAAAAAAAcI/vy8r9z7z0VI/s320/pixel_fragment_gauss_is.png" width="304" /></a></div>The gridlines denote the boundaries of the discrete pixels of our synthetic image. The red dots represent the sampling points of one pixel (the one the arrow points to).<br />Notice how the density of the sampling points decreases as we move away from the centre of the pixel --- this density is exactly Gaussian.<br /><br /><br />Since we are really computing the product of the weighting function and the underlying scene, we are accumulating I(<i>x,y</i>)*f(<i>x,y</i>)/p(<i>x,y</i>), where I(<i>x,y</i>) measures whether a point is inside or outside our target object (black rectangle). <br /><br />What happens if the distribution of our sampling points matches the distribution of f(<i>x,y</i>) exactly? 
Then f(<i>x,y</i>) = p(<i>x,y</i>), and we are effectively weighting each point equally, with the sampling density itself achieving the desired weighting. This strategy is optimal, since it makes the most effective use of every single sample. The only way to improve on this is to stratify according to the scene content as well, but that makes things a bit complicated.<br /><br /><h3>Importance sampling and the Airy disc</h3>So how do you generate quasi-random (<i>x,y</i>) coordinates with a distribution that matches the Airy disc function? Same way you generate points from a Gaussian: by inverting the cumulative distribution. This technique is called the "inverse transform sampling method". For a Gaussian, you can use Moro's inversion, but I am not aware of any fitted polynomials for inverting the cumulative distribution of the Airy disc function. What now?<br /><br />Well, I decided to use a look-up table to approximate the cumulative distribution of the Airy disc function. Since the function is radially symmetrical, this is just a 1-dimensional look-up table, which I have implemented as a piecewise-linear function. Thus, given a pair of uniform variates (<i>x,y</i>) in the range [0,1][0,1], you can obtain a sample following the Airy disc function density by choosing an angle θ = <i>2π * x,</i> and a radial distance <i>r </i>by looking up the value <i>y</i> in the cumulative distribution of the unit Airy disc function. Scaling for wavelength, pixel pitch and f-number can be performed on <i>r</i> afterwards.<br /><br />There is only one small trick, though: If you generate a polar 2D coordinate as [r cos(θ), r sin(θ)], where <i>r</i> has a uniform distribution, you will end up with more points close to the centre than on the outer rim. You want to partition the circular disc into parts of equal area as a function of radius, which means that your <i>r</i> must first be transformed to r' = √r. 
This is critical, or your true distribution of points will differ from your assumed distribution, and your weighting of samples will be biased.<br /><br />To apply this to the Airy disc function cumulative distribution table, we just go back to basics. The cumulative distribution as a function of the radius <i>r</i> can be approximated as a finite sum:<br /><div style="text-align: center;">F(r<sub>n</sub>) = F(r<sub>n-1</sub>) + (2 jinc(r<sub>n</sub>))<sup>2</sup> * (<i>πr<sub>n</sub><sup>2</sup> - </i><i>πr<sub>n-1</sub><sup>2</sup>)</i></div>where r<sub>n</sub> is simply our discrete sample along the radius (something like r<sub>n</sub> = n/N). This looks like a simple Riemann sum, with the important change being that our "width" parameter is not a linear function of r<sub>n</sub>, but in fact quadratic. This small change ensures that outer radii are assigned an area-proportionally larger weight, so that we can generate our sampling positions in polar coordinates without biasing them towards the centre. 
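The look-up table construction described by the finite sum above can be sketched in a few lines. This is illustrative code rather than MTF Mapper's implementation: it assumes jinc(r) = J1(πr)/(πr), evaluates J1 with a crude numerical quadrature, and uses a much smaller table (r up to 10, 1000 entries) than the one described in the text, purely for brevity:

```python
import bisect
import math

def j1(x):
    # Bessel function of the first kind, order 1, via its integral
    # representation J1(x) = (1/pi) * integral_0^pi cos(t - x sin t) dt,
    # evaluated with a simple midpoint rule; adequate for a sketch.
    n = 500
    return sum(math.cos(t - x * math.sin(t))
               for t in ((k + 0.5) * math.pi / n for k in range(n))) / n

def airy(r):
    # Unit Airy disc PSF: (2 jinc(r))^2 with jinc(r) = J1(pi r) / (pi r).
    x = math.pi * r
    return 1.0 if x < 1e-9 else (2.0 * j1(x) / x) ** 2

def build_cdf(r_max=10.0, n=1000):
    # F(r_n) = F(r_{n-1}) + (2 jinc(r_n))^2 * (pi r_n^2 - pi r_{n-1}^2):
    # the annulus areas supply the quadratic "width" term from the text.
    radii, cdf, total, prev_area = [0.0], [0.0], 0.0, 0.0
    for i in range(1, n + 1):
        r = r_max * i / n
        area = math.pi * r * r
        total += airy(r) * (area - prev_area)
        prev_area = area
        radii.append(r)
        cdf.append(total)
    return radii, [c / total for c in cdf]  # normalised so F(r_max) = 1

radii, cdf = build_cdf()
# Inverse-transform lookup: the radius that encloses half of the PSF mass.
r_median = radii[bisect.bisect_left(cdf, 0.5)]
print(r_median)
```

Because this CDF is already area-weighted, inverting it directly yields correctly distributed radii; most of the mass sits well inside the first dark ring at r ≈ 1.22, which is exactly why importance sampling pays off here.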
<br /><br /><h3>Summary of Airy disc function sampling</h3>To briefly recap, here is the recipe for simulating the effects of diffraction:<br /><ol><li>Generate a cumulative distribution of the unit Airy disc function and store it in a look-up table.</li><li>Generate N (<i>x,y</i>) coordinate pairs in the range [0,1][0,1] using a quasi-random sequence such as the Halton sequence.</li><li>Transform these coordinates to an Airy disc distribution by <br />θ = <i>2π * x<br />r = </i>LUT[sqrt<i>(y)</i>]<i><br /> </i>(<i>x</i>',<i>y</i>') = [r cos(θ), r sin(θ)]</li><li>For each pixel, add the pixel centre coordinates to each sampling point (<i>x</i>',<i>y</i>') to obtain (<i>x</i>",<i>y</i>").</li><li>Evaluate the scene at (<i>x</i>",<i>y</i>"), thus accumulating I(<i>x</i>",<i>y</i>") * f(<i>x</i>",<i>y</i>")/p(<i>x</i>",<i>y</i>").</li><li>Repeat steps 4-5 for all pixels in the target image.</li></ol>You may wonder about the scaling term f(<i>x</i>",<i>y</i>")/p(<i>x</i>",<i>y</i>"), which seems superfluous given that we expect this value to be equal to 1.0. Well, since I have a discrete approximation of the density (the LUT), I decided to use the actual probability from the LUT as p(<i>x</i>",<i>y</i>"), and the Airy disc function as f(<i>x</i>",<i>y</i>"). 
This way, if there are any residual errors in the approximation, the weighting should correct for them.<br /><br />This algorithm can be illustrated as follows:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-rdLdurBKWW0/UJOqrriQe4I/AAAAAAAAAcQ/GRjMed059Pc/s1600/pixel_fragment_diff_is.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://4.bp.blogspot.com/-rdLdurBKWW0/UJOqrriQe4I/AAAAAAAAAcQ/GRjMed059Pc/s320/pixel_fragment_diff_is.png" width="304" /></a></div>Notice the typical Airy pattern rings that are formed by the higher-density regions of our sampling point distribution.<br /><br />One tiny detail has been omitted so far: the radius of the region that we will sample. The magnitude of the Airy pattern drops off fairly rapidly, and it is tempting to only build our look-up table for the range 0 <= r <= 5. This yields very good sampling of the centre of the Airy pattern, and thus the higher frequencies in the MTF curve of our synthetic image. Unfortunately, such severe truncation distorts the lower frequencies of the MTF curve noticeably. I have obtained reasonable results with 0 <= r <= 45, storing roughly 20000 points in my look-up table.<br /><br /><h3>Convolution of Airy PSF and OLPF</h3>Unfortunately, we are not done yet. For a sensor without an OLPF, we must still convolve the Airy PSF with the pixel PSF (typically a box function) to obtain the desired value for a given pixel. There are two ways of doing this: 1) convolve a sampled Airy PSF with a sampled box PSF to produce a sampled combined PSF, or 2) sample the scene using importance-sampled points, but perform the box function convolution with the scene at each sampling point.<br /><br />The first method is simple and straightforward, but suffers from all the usual disadvantages of regular grid sampling. 
It requires a very fine grid to produce good results; somewhere around 1050625 samples per pixel in my experience. The second method is really quite efficient, provided that we have a good way of performing the box function convolution.<br /><br />As it turns out, the convolution of a box function centred at a specific coordinate with our target object is just the area of intersection between the polygon defining the box function, and the rectangle defining our target object (provided, of course, that our target is a rectangle). I relied on the Sutherland-Hodgman polygon clipping routine to clip the box function polygon with the target rectangle's polygon, which is quite efficient. Here is an illustration of such a box function intersecting our polygon, with the box function (in blue) just happening to align with the pixel grid:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/--CsqLBSaPUk/UJOs72kBFXI/AAAAAAAAAcY/RBhUTys_Sik/s1600/pixel_fragment_area_weighted.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://3.bp.blogspot.com/--CsqLBSaPUk/UJOs72kBFXI/AAAAAAAAAcY/RBhUTys_Sik/s320/pixel_fragment_area_weighted.png" width="304" /></a></div><br /><br />The importance sampling algorithm from the previous section remains largely unchanged: in step 5, the evaluation of I(<i>x</i>",<i>y</i>") now simply denotes the result of the box function convolution, i.e., the area of overlap between a box function (width 1 pixel) centred at (<i>x</i>",<i>y</i>"), and the target rectangle.<br /><br />Finally, to render a 4-dot OLPF blur, such as that effected by a Lithium Niobate AA filter, you simply take the average of four samples at the coordinates (<i>x</i>" ± 0.375,<i> y</i>" ± 0.375), assuming of course a split distance of 0.375 pixels (or total spread of 0.75 pixels). 
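In the axis-aligned case illustrated above, the general polygon clip reduces to two interval overlaps, so the box-aperture convolution and the 4-dot average can be sketched as follows (hypothetical helper names; MTF Mapper itself uses the general Sutherland-Hodgman clip so that rotated apertures also work):

```python
def box_rect_overlap(cx, cy, rect, w=1.0):
    """Area of intersection between a w-by-w photosite aperture centred
    at (cx, cy) and an axis-aligned target rectangle (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = rect
    ox = max(0.0, min(cx + w / 2, x1) - max(cx - w / 2, x0))
    oy = max(0.0, min(cy + w / 2, y1) - max(cy - w / 2, y0))
    return ox * oy

def olpf_4dot_sample(cx, cy, rect, split=0.375):
    """Average of four box overlaps at (cx +/- split, cy +/- split)."""
    return sum(box_rect_overlap(cx + dx, cy + dy, rect)
               for dx in (-split, split) for dy in (-split, split)) / 4.0

rect = (0.0, 0.0, 10.0, 10.0)
print(box_rect_overlap(5.0, 5.0, rect))   # 1.0: aperture fully inside
print(box_rect_overlap(0.0, 5.0, rect))   # 0.5: aperture straddles the edge
print(olpf_4dot_sample(0.0, 5.0, rect))   # 0.5 by symmetry
```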
Each sample thus required four polygon intersection calculations, like this:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-0DSknne_xxQ/UJPFBnAtk_I/AAAAAAAAAc0/N3cEKQzf7UY/s1600/pixel_fragment_olpf.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="309" src="http://4.bp.blogspot.com/-0DSknne_xxQ/UJPFBnAtk_I/AAAAAAAAAc0/N3cEKQzf7UY/s320/pixel_fragment_olpf.png" width="320" /></a></div><br />This approach is conceptually simple, and fairly flexible. The main disadvantage is that rendering times will increase by roughly a factor of four. Fortunately, the larger support of the 4-dot OLPF PSF means that the synthetic image rendered using it will be smoother, which means we can reduce the number of samples required to obtain a reasonable result.<br /><br />One more advantage: since this rendering approach implements the photosite aperture as a polygon intersection, it is trivial to model different aperture designs. For example, the default choice of a "gap less" photosite aperture is not entirely realistic, since practical sensors typically do not have 100% fill factors. As pointed out by one of the MTF Mapper blog readers, modern "gap less" microlens designs still suffer from attenuation in the corners, resulting in a near-circular photosite aperture.<br /><br /><h3>Demonstration</h3>We have time for a quick demonstration of the various PSF types, using a photosite pitch of 4.73 micron, i.e., like the Nikon D7000, assuming green light at 0.55 micron wavelengths. 
Here are some synthetic images:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-xgM4EaL8qRk/UJPJqyWveLI/AAAAAAAAAdQ/iiqOix-5mNg/s1600/rect_gaussian.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://3.bp.blogspot.com/-xgM4EaL8qRk/UJPJqyWveLI/AAAAAAAAAdQ/iiqOix-5mNg/s1600/rect_gaussian.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">a) Gaussian PSF with sd=0.57 pixels, mtf50=0.33</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-2T0-grMxYgU/UJPKUI97rMI/AAAAAAAAAdg/QCshRIvkpzk/s1600/rect_airy-box_f8.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://4.bp.blogspot.com/-2T0-grMxYgU/UJPKUI97rMI/AAAAAAAAAdg/QCshRIvkpzk/s1600/rect_airy-box_f8.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">b) f/8 circular aperture + square pixel aperture, mtf50=0.337</td><td class="tr-caption" style="text-align: center;"><br /></td></tr></tbody></table><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-EDS3k4_qEkw/UJPJ1Sy5dkI/AAAAAAAAAdY/x4JBjn-l-OQ/s1600/rect_airy-4dot-olpf_f8.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-EDS3k4_qEkw/UJPJ1Sy5dkI/AAAAAAAAAdY/x4JBjn-l-OQ/s1600/rect_airy-4dot-olpf_f8.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">c) f/8 circular aperture diffraction + 4-dot 
OLPF + square pixel aperture, mtf50=0.26</td></tr></tbody></table>Note that the MTF50 values of examples (a) and (b) above are almost the same, and unsurprisingly, the images also look very much the same. Sample (c) looks just a tad softer --- exactly what we would expect image (b) to look like after adding an OLPF.<br /><br />It seems like quite a lot of effort to simulate images with PSFs that correspond to diffraction effects, only to end up with images that look like those generated with Gaussian PSFs.<br /><br /><h3>Conclusion</h3>That is probably enough for one day. In a future post I will provide more information on rendering time and accuracy.<br /><br />All the algorithms discussed here have been implemented in the <i>mtf_generate_rectangle </i>tool included in the MTF Mapper package from version 0.4.12 onwards. See the documentation on the "-p" option, which now includes "gaussian", "airy", "airy-box" and "airy-4dot-olpf" PSF types.Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com0tag:blogger.com,1999:blog-6555460465813582847.post-32419206875668175872012-10-23T07:52:00.002-07:002012-10-31T06:08:33.204-07:00Ultimate macro photography camera?<h3>The problem </h3>If you have ever played around with macro photography at a magnification of 1x or more, you will have encountered the curse of shallow Depth of Field (DOF). It is often desirable in portrait photography to isolate the subject by having only the subject in focus, with the background nicely out of focus, i.e., you want relatively shallow DOF.<br /><br />Unfortunately, there is such a thing as too little DOF, where it becomes difficult to keep the entire subject in focus, or at least the parts you would like to keep in focus. This problem pops up in macro photography all the time. 
Consider this example shot at 2.8x magnification (and then cropped a bit):<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-TCyfcjvMkKk/UIZjqvsesII/AAAAAAAAAbs/X5rQDvHdY10/s1600/ant.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-TCyfcjvMkKk/UIZjqvsesII/AAAAAAAAAbs/X5rQDvHdY10/s1600/ant.jpg" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">An ant at 2.8x magnification, lens aperture f/4</td></tr></tbody></table>Depending on your preference, you may decide that the DOF in this shot is just fine. Personally, I would have liked to have more of the hair running down the centre line of the ant's head, especially between the mandibles and the antennae, to be in focus.<br /><br />Normally, you can just stop down the lens to increase DOF. Unfortunately, there is the small matter of diffraction that gets in your way. To fully appreciate the problem, first consider the way in which magnification affects the <i>effective aperture</i> (also called the "working aperture"). The aperture of the lens has to be scaled according to magnification, so that<br /><div style="text-align: center;">N<sub>e</sub> = N * (1 + <i>m</i>)</div><div style="text-align: left;">where N<sub>e</sub> denotes the effective aperture, N denotes the aperture as set on the lens, and <i>m</i> is the magnification ratio. At 1:1 magnification, which is usually considered the start of the "macro" range, the value of <i>m</i> is equal to 1.0. This means that the image projected onto the sensor is physically the same size as the object being photographed. 
(Please note that this equation is an approximation that assumes that the pupil ratio is equal to 1.0; many real lenses have pupil ratios that differ significantly from 1.0, and a different form of the equation is required to include the pupil ratio in such cases, but the overall behaviour of the equation is unchanged. For simplicity, I assume a pupil ratio of 1.0 in this article.)<br /><br />For non-macro photography, the value of <i>m</i> is usually small, around 0.1 or less, which implies that the effective aperture N<sub>e</sub> is approximately equal to N, the aperture selected on the lens. Under these conditions, you can just stop down the lens to increase DOF, at least to around f/11 or f/16 on modern sensors with roughly 4.8 micron photosite pitch values. Going beyond f/11 on these sensors will increase DOF, but diffraction softening will start to become visible.<br /><br />In the macro domain, though, it is an entirely different kettle of fish. The shot of the ant above serves as an example: at 2.8x magnification, with the lens set to f/4, we obtain an effective aperture of f/15.2. If we stop down the lens just a little bit, say to f/5, we are already at f/19. The minimum aperture of the lens I used is f/22, which gives us a mind-bogglingly small effective aperture of f/83.6. Keep in mind that we were running into visible diffraction softening even at a lens aperture of f/4.<br /><br />How bad is this diffraction softening? Well, assuming an aberration-free lens that is in its diffraction-limited range, the f/4 lens on a D7000 body would give us a maximum resolution of 39.6 line pairs per mm. A 12x8 inch print would thus render at 3 lp/mm (divide the 39.6 by the print magnification factor of ~13), which is barely acceptable at approximately 152 DPI.<br /><br />Bumping up the f-number to f/22 on the lens gives us only 8.7 lp/mm of resolution on the sensor, or 0.675 lp/mm (34 DPI) in print. 
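If you want to play with these numbers yourself, the arithmetic is trivial. Here is a quick Python sketch (the helper names are mine, not part of MTF Mapper; it assumes a pupil ratio of 1 and a wavelength of 0.55 micron, and the MTF50 estimate considers diffraction only, ignoring the sensor, which is why only the f/22 figure from the text is reproduced here):

```python
import math

def effective_aperture(n, m):
    """Effective (working) aperture N_e = N * (1 + m), assuming a pupil ratio of 1."""
    return n * (1 + m)

def diffraction_mtf50_lpmm(n_e, wavelength_mm=0.55e-3):
    """Approximate MTF50 of an aberration-free, diffraction-limited lens, in lp/mm.

    The diffraction MTF of a circular aperture drops to 0.5 at roughly 0.404
    times the cutoff frequency 1/(lambda * N_e)."""
    return 0.404 / (wavelength_mm * n_e)

# The examples from the text, at 2.8x magnification:
print(effective_aperture(4, 2.8))    # f/15.2
print(effective_aperture(5, 2.8))    # f/19
print(effective_aperture(22, 2.8))   # f/83.6

# At f/22 on the lens, diffraction alone limits us to about 8.7 lp/mm:
print(diffraction_mtf50_lpmm(effective_aperture(22, 2.8)))
```

The f/4 figure quoted in the text (39.6 lp/mm) comes out lower than this pure-diffraction estimate would suggest because it also folds in the sensor response.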
To reach roughly 152 DPI we can print at a maximum size of 2.6x1.75 inch; anything larger will look visibly softer than the f/4 example in the preceding paragraph.<br /><br />So how much DOF do we have at 2.8x magnification with the lens set to f/4? Only about 76 micron, or less than 1/10th of a millimetre. Stopping down the lens to f/22 increases DOF to 418 micron, or 0.418 mm. I would consider a DOF of 0.3 mm to be workable for many insects, depending on their pose relative to the focus plane.<br /><br /><b>To summarise:</b> At magnifications of greater than 1x, we can increase DOF by stopping down the lens, but the effective aperture quickly becomes so small that diffraction softening destroys much of the detail we were hoping to capture.<br /><br />Can we work around this problem?<br /><br /><h3>Defining "depth of field"</h3>Compact cameras usually have significantly more DOF than SLR cameras. The explanation of this is actually quite straightforward, if rarely heard. But first we must define what we mean by depth of field. (You can safely skip ahead to the next section if you are confident that you know what DOF is.)<br /><br />Shallow DOF is not the same thing as a blurred-out background; shallow DOF just means that the region of the image which will be acceptably sharp is shallow. Nothing is said about the character of the part of the image which is not in focus. The common misconception that long focal lengths produce shallow DOF is based on the confusion of these concepts: lenses with longer focal lengths produce more blurry backgrounds, but their DOF is actually very similar to lenses with shorter focal lengths after you have moved back to compensate for the smaller field of view. <br /><br />DOF is only meaningful in the context of specific viewing conditions. First up, DOF only pertains to the region of the image which is <i>acceptably sharp</i> under the conditions in which the final image is viewed. 
This is usually taken to be a given print size viewed at a specified distance. Working back from what constitutes an acceptably sharp print, we arrive at the definition of the <i>circle of confusion</i> (CoC). In other words, we take a single, sharp point in the print, and "project" this back onto the sensor. This projected point forms a little disc on the sensor, with a diameter equal to the CoC.<br /><br />Reasoning backwards, we can see that at the sensor, any image feature smaller than the disc formed by the CoC will be compressed to a single point in the final print. Any feature larger than the CoC will be printed across more than one dot in print, and will thus be visible to the viewer. This ties the CoC to our definition of the region of acceptable sharpness: point light sources that are in front of (or behind) the exact plane of focus (in the scene) will project onto the sensor as small discs. If these discs are smaller than the CoC, then they will appear in focus. If the point source is moved away from the exact plane of focus, it will reach the distance where the disc that it forms on the sensor first matches, and then exceeds, the CoC, at which point it will start to appear blurry.<br /><br />The DOF is thus the range of distances between which image features are rendered as points in the print, i.e., appear acceptably sharp. At this point it should be clear that the size of the print relative to the size of the sensor has a direct impact on the perceived DOF in print, since a small-sensor image will have to be magnified more to reach the desired print size, compared to the image formed on a large sensor (with the same field of view). 
This implies that the CoC of a small sensor will be proportionally smaller, i.e., the CoC for a 35 mm sensor size ("full-frame") is usually taken as 0.03 mm, while a compact camera with a 5.9 mm sensor width will have a CoC of 0.0049 mm.<br /><br /><h3>Why small sensors have more DOF</h3>So why does a small-sensor compact camera appear to have a large depth of field? The physical size of the aperture is the key concept. An f/4 lens with a focal length of 105 mm will have a physical aperture diameter (actually, entrance pupil) of 26.25 mm. A 42 mm lens at f/4 will only have an aperture diameter of 10.5 mm.<br /><br />If the physical diameter of the aperture is small, the DOF will be large; just think of how a pinhole camera works. The catch is of course that a pinhole camera will suffer from diffraction softening, but you can hide this softening if your film/sensor is large enough that you do not have to magnify the image in print (i.e., contact prints).<br /><br />A small sensor requires a lens with a shorter focal length to achieve the same field of view as another lens fitted to a large sensor. For example, both the 105 mm lens and the 42 mm lens will produce the same field of view if we attach them to an APS-C sized sensor and a 5.9 mm width sensor, respectively. The 5.9 mm width sensor, though, will have a larger DOF because it has a smaller physical aperture.<br /><br />Note that you can substitute a change in position for the change in focal length. Say you use a 50 mm lens on a full-frame camera at f/2.8. To achieve the same subject size on an APS-C (crop) camera, you would have to move further backwards to match the field of view of the full-frame camera. You can keep the lens at f/2.8, but the full-frame image will have a shallower depth of field, even though we did not change the physical aperture size. 
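The CoC and entrance-pupil numbers quoted above follow from two simple scaling rules: the CoC shrinks in proportion to the sensor width, and the entrance pupil diameter is the focal length divided by the f-number. A quick check (the 0.03 mm full-frame CoC is the conventional value used in the text):

```python
# Circle of confusion scales with sensor size: a sensor that must be
# enlarged more for the same print size gets a proportionally smaller CoC.
full_frame_width = 36.0   # mm
compact_width = 5.9       # mm
coc_full_frame = 0.03     # mm, the conventional full-frame value

coc_compact = coc_full_frame * compact_width / full_frame_width
print(round(coc_compact, 4))  # 0.0049 mm

# Entrance pupil diameter = focal length / f-number.
print(105 / 4)  # 26.25 mm
print(42 / 4)   # 10.5 mm
```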
The trick is to realize that DOF is also a function of subject distance (magnification), so that the APS-C camera will have increased depth of field because it is further from the subject.<br /><br />It is instructive to play with VWDOF (available <a href="http://toothwalker.org/optics/vwdof.html">here</a>) to observe these effects first-hand.<br /><br /><h3>A small-sensor dedicated macro camera at 1:1</h3>Can we solve the problem of insufficient macro DOF by using a small sensor?<br />Will diffraction prevent us from obtaining sufficient detail with such small photosites?<br /><br />The idea is simple: What would happen if you increased the linear pixel density of your DX camera by a factor of four (each pixel is replaced by a 4x4 group of pixels)? We would end up with extremely small photosites, but still within the limits of what is currently possible. Then, instead of increasing the magnification of our lens, which would decrease our effective aperture a lot, we just crop out the centre 25% of our higher-resolution image. This effectively gives us an additional 4x magnification. If we are going to crop out the centre 25% of every image, then we might just as well use a smaller sensor. So, we keep the same number of pixels, but we use a sensor that is only 1/4 the size of our initial sensor (DX, in this case). Once we have a smaller sensor, we can replace our lens with a shorter focal length lens, which will be smaller, lighter, hopefully cheaper, but also easier to build to achieve exceptional sharpness. To illustrate this idea, a practical example now follows.<br /><br />Suppose we have a subject that is 28.3 mm wide (roughly the width you can achieve with a 105 mm lens at 1x magnification on a Nikon DX sensor). What we want to achieve is to capture the exact same subject, but with different sensor sizes. 
To achieve this, we will have to vary both the focal length and the lens magnification factor.<br /><br />To illustrate this concept, I will define three (hypothetical) cameras:<br /><ol><li>DX: 4980 pixels over 23.6 mm sensor width, <br />focal length = 105 mm, <br />required magnification = 1x, <br />lens aperture = f/4, <br />effective aperture = f/8</li><li>FX: 4980 pixels over 36 mm sensor width, <br />focal length = 127 mm, <br />required magnification = 1.525x, <br />lens aperture = f/4, <br />effective aperture = f/10</li><li>1/4DX: 4980 pixels over 5.9 mm sensor width, <br />focal length = 42 mm, <br />required magnification = 0.25x, <br />lens aperture = f/4, <br />effective aperture = f/5</li></ol><br />With these specifications, all the cameras will capture the subject at a distance of 210 mm, with the same field-of-view (FOV) of roughly 7.71 degrees, and the same subject size relative to the image frame. The 1/4DX camera is just what the name implies: scale down the sensor size by a factor of four in each dimension, but keep the same number of pixels. You can calculate the photosite pitch by dividing the sensor width by the number of pixels; the 1/4DX sensor would have a pitch of 1.18 micron, which is close to what is currently used in state-of-the-art cellphone sensors.<br /><br />We define DOF the same way for all the cameras, i.e., being able to produce the same relative circle of confusion when looking at a final print size of 12 inches wide. The actual circle of confusion for the 1/4DX camera will have to be much smaller (1/4 of the DX CoC) to compensate for the fact that we have to magnify the image 4x larger than the DX image to arrive at a 12 inch print.<br /><br />Computing the actual DOF using VWDOF:<br /><ol><li>DX: 0.314 mm</li><li>FX: 0.261 mm</li><li>1/4DX: 0.784 mm</li></ol><br />This is looking promising. The 1/4DX sensor gives us roughly 2.5 times more DOF (compared to DX) in the final print. 
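The geometry of the three hypothetical cameras above is easy to sanity-check in a few lines of Python (a sketch using the thin-lens relation, which is an approximation for real macro lenses; the helper names are mine):

```python
def photosite_pitch_um(sensor_width_mm, pixels):
    """Photosite pitch in micron: sensor width divided by pixel count."""
    return sensor_width_mm / pixels * 1000.0

def subject_distance_mm(focal_length_mm, magnification):
    """Thin-lens approximation: subject-to-lens distance = f * (1 + 1/m)."""
    return focal_length_mm * (1 + 1 / magnification)

# (sensor width mm, focal length mm, magnification) for the three cameras:
cameras = {
    "DX":    (23.6, 105.0, 1.0),
    "FX":    (36.0, 127.0, 1.525),
    "1/4DX": (5.9,  42.0,  0.25),
}

for name, (width, f, m) in cameras.items():
    print(name, round(photosite_pitch_um(width, 4980), 2), "micron pitch,",
          round(subject_distance_mm(f, m)), "mm subject distance")
```

All three combinations land on the same subject distance of roughly 210 mm, and the 1/4DX pitch comes out at the 1.18 micron mentioned above.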
The FX sensor gives us less DOF, which is more or less what we expected.<br /><br />Using the <a href="http://sourceforge.net/projects/mtfmapper">MTF Mapper</a> tool mtf_generate_rectangle, we can model the MTF50 values after taking into account the softening effect of diffraction (MTF50 values are a quantitative metric that correlates well with perceived sharpness). This allows us to calculate how sharp the lens will have to be for the 1/4DX camera to work, as well as what our final resolution will be once we have scaled the images up to 12-inch prints.<br /><br />The actual resolution of the final print, which arguably is the thing we want to maximise, turns out as follows:<br /><ol><li>DX: 4.31 line pairs / mm</li><li>FX: 4.6 lp/mm</li><li>1/4DX: 2.5 lp/mm</li></ol><br />What happened? Diffraction destroyed a lot of our resolution on the 1/4DX sensor. In short, we decreased our photosite pitch by a factor of four, which means that diffraction softening will now already be visible at f/4 (probably by f/2.8, even). We expected our smaller pixels to be able to capture smaller details, but diffraction blurred out the details to the point where very little detail remained at the pixel level.<br />We can try to remove the AA filter on the 1/4DX sensor (which we arguably no longer need at f/4 with a 1.18 micron photosite pitch) to win back a little resolution, which will give us 2.7 lp/mm in print. Not a huge gain, but we might as well use it.<br /><br />One tiny little detail is also quite important: the 1/4DX camera would require a really, really sharp lens: around 129 lp/mm. I think this is still possible, but we'll need a lens designer's opinion.<br /><br />So going for a smaller sensor with 4 times higher (linear) pixel density does indeed give you more DOF in a single shot, provided that you keep the lens aperture the same. 
The price for this is that the sharpest part of the printed picture will be noticeably less sharp than the prints produced on the DX camera. DOF increased by a factor of 2.5, but overall resolution decreased by a factor of 1.724.<br /><br />We could take our DX camera and stop it down to roughly f/9.1 on the lens, producing an effective aperture of f/18.2. This would produce comparable resolution to the 1/4DX camera (around 2.7 lp/mm in a 12-inch print), and the DOF would be 0.713 mm, which is ever so slightly less than what the 1/4DX camera would produce.<br /><br /><h3>Detours at 1:1</h3>Ok, so if we lose resolution but gain DOF by stopping down the aperture, what would happen if we opened up the aperture on the 1/4DX camera a bit?<br /><br />Well, at a lens aperture of f/1.6, the effective aperture would be f/2. This would produce a 12-inch print at a resolution of 5.5 lp/mm, which is higher than any of the other options above. But the DOF is exactly 0.314 mm, so we are no better off in that respect, except that we have increased our final print resolution slightly.<br /><br />A slightly less extreme aperture of f/2.25 would give us an effective aperture of f/2.8, which will match the print resolution of the DX camera, and give us 0.441 mm of DOF. That is a decent 40% increase in DOF over the DX camera, and still gives us the same print resolution.<br /><br />Proceeding along this path, we can choose a lens aperture of f/3.2, resulting in an effective aperture of f/4, yielding print resolution of 3.29 lp/mm. This is about 30% less resolution, but we have a DOF of 0.627 mm. 
The DX camera will reach the same print resolution (3.29 lp/mm) at an aperture of f/6.8 (effective aperture is f/13.5), which will yield a DOF of 0.533 mm, so the 1/4DX camera is looking less attractive at this point.<br /><br /><br /><h3>What happens at 2.8x magnification?</h3>We can repeat the process at 2.8x magnification, which gives us the following cameras:<br /><ol><li>DX: 4980 pixels over 23.6 mm sensor width, <br />focal length = 105 mm, <br />required magnification = 2.8x, <br />lens aperture = f/4, <br />effective aperture = f/15.2</li><li>1/4DX: 4980 pixels over 5.9 mm sensor width, <br />focal length = 58.5 mm, <br />required magnification = 0.7x, <br />lens aperture = f/2.6, <br />effective aperture = f/4.4</li></ol>Note that the aperture for 1/4DX was chosen so that the print resolution of the 1/4DX camera matches that of the DX camera. With these settings, the DX camera has a DOF of 0.076 mm, and the 1/4DX camera has a DOF of 0.0884 mm, so no real improvement in DOF if we keep the print resolution the same.<br /><br /><h3>Conclusion</h3>There does not appear to be a free lunch here. It is possible to increase the DOF of a 1/4DX camera relative to that of the DX camera, but it requires an extremely sharp lens. The lens will fortunately be quite small, so it may be feasible to construct such a lens.<br /><br />The price we pay for the increased DOF is that we have much smaller photosites, which will have a negative impact on image quality in the form of noise. Specifically, photon shot noise is related to the full-well capacity of a photosite, which in turn is linked to the photosite size. So while the 1/4DX camera may be able to offer slightly more DOF at just the right settings (e.g., lens aperture set to f/2.25), the trade-offs in terms of noise are unlikely to make this approach attractive.<br /><br />It would be interesting to explore the parameter space more methodically. For example, what if we try a 1/2DX camera instead? 
I am betting against it, but I probably should run the numbers and see ...</div>Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com0tag:blogger.com,1999:blog-6555460465813582847.post-36323257768659165032012-06-18T13:09:00.001-07:002012-06-19T02:54:17.512-07:00Combining box filters, AA filters and diffraction: Do I need an AA filter?I have been building up to this post for some time now, so it should not surprise you too much. What happens when we string together the various components in the image formation chain?<br /><br />Specifically, what happens when we combine the square pixel aperture, the sensor OLPF (based on a 4-dot beam splitter) and the Airy function (representing diffraction)? First off, this is what the MTF curves of our contestants look like:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-jfeSr88IBt0/T9-C3S52ohI/AAAAAAAAAXE/WDe-JEOlJ5o/s1600/mtf.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://2.bp.blogspot.com/-jfeSr88IBt0/T9-C3S52ohI/AAAAAAAAAXE/WDe-JEOlJ5o/s400/mtf.png" width="400" /></a></div>The solid black curve represents a combined sensor OLPF (4-dot beam splitter type) + pixel aperture + lens MTF (diffraction only) model. This was recently shown to be a <a href="http://mtfmapper.blogspot.com/2012/06/nikon-d40-and-d7000-aa-filter-mtf.html">good fit for the D40 and D7000 sensors.</a> The dashed blue curve represents the MTF of the square pixel aperture (plus diffraction), i.e., a box filter as wide as the pixel. The dashed red curve illustrates what a Gaussian MTF (plus diffraction) would look like, fitted to have an MTF50 value that is comparable to the OLPF model. 
Lastly, the solid vertical grey line illustrates the remaining contrast at a frequency of 0.65 cycles per pixel, which is well above the Nyquist limit at 0.5 cycles per pixel (dashed vertical grey line).<br /><br />Note how both the Gaussian and the OLPF model have low contrast values at 0.65 cycles per pixel (0.04 and 0.02, respectively), while the square pixel aperture + lens MTF, representing a sensor without an AA filter, still has a contrast value of 0.27. It is generally accepted that patterns at a contrast below 0.1 are not really visible in photos. That illustrates how the OLPF successfully attenuates the frequencies above Nyquist, but how does this look in a photo?<br /><br /><h3> Ok, but how would it affect my photos visually? </h3>I will now present some synthetic images to illustrate how much (or little) anti-aliasing we obtain at various apertures, both with and without an AA filter. The images will look like this:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-JsIbeDWkywg/T99iUdf-7KI/AAAAAAAAAWU/2dkTvJcMHKo/s1600/rot0_w10_box.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-JsIbeDWkywg/T99iUdf-7KI/AAAAAAAAAWU/2dkTvJcMHKo/s1600/rot0_w10_box.png" /></a></div><br /><br />The left panel is a stack of four sub-images (rows) separated by white horizontal bars. Each sub-image is simply a pattern of black-and-white bars, with both black and white bars being exactly 5 pixels wide (in this example). The four stacked sub-images differ only in phase, i.e., in each of the four rows the black-and-white pattern of bars is offset by a horizontal distance between 0 and 1 pixels in length.<br /><br />The right panel is a 2x magnification of the left panel. Note that the third row in the stack is nice and crisp, containing almost pure black and pure white. 
The other rows have some grey values at the transition between the black and white bars, because the image has been rendered without any anti-aliasing.<br />These images are rendered by sampling each pixel at 2362369 sub-pixel positions, weighting each sampled point with the relevant point spread function.<br /><br />The aliasing phenomenon known as <i>frequency folding</i> was illustrated in a previous <a href="http://mtfmapper.blogspot.com/2012/05/pixels-aa-filters-box-filters-and-mtf.html">post</a>. When a scene contains patterns at a frequency exceeding the Nyquist limit (highest frequency representable in the final image), the patterns <i>alias</i>, i.e., the frequencies above Nyquist appear as patterns below the Nyquist limit, and are in fact indistinguishable from real image content at that frequency. Here is a relevant example, illustrating how a frequency of 0.65 cycles per pixel (cycle length of 1.538 pixels) aliases onto a frequency of 0.35 cycles per pixel (cycle length of 2.857 pixels) if no AA filter is present:<br /><div class="separator" style="clear: both; text-align: center;"> <a href="http://3.bp.blogspot.com/-5ZhF4g_5tR0/T9-ANtlBbLI/AAAAAAAAAW0/jsTtRw8Hli4/s1600/cchirp_1.4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-5ZhF4g_5tR0/T9-ANtlBbLI/AAAAAAAAAW0/jsTtRw8Hli4/s1600/cchirp_1.4.png" /></a></div>This set was generated at a simulated aperture of f/1.4, which does not attenuate the high frequencies much. Observe how the two images in the "No OLPF" column look virtually the same, except for a slight contrast difference; it is not possible to tell from the image whether the original scene contained a pattern at 1.538 pixels per cycle, or 2.857 pixels per cycle.<br /><br />The "4-dot OLPF" column shows a clear difference between these two cases. 
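The folding relationship itself is pure sampling theory: on an integer pixel grid, a cosine at 0.65 cycles per pixel produces exactly the same samples as one at 1 &minus; 0.65 = 0.35 cycles per pixel. A minimal numerical check:

```python
import math

f_high = 0.65           # cycles per pixel, above the Nyquist limit of 0.5
f_alias = 1.0 - f_high  # folds onto 0.35 cycles per pixel

for n in range(20):  # sample at integer pixel positions
    s_high = math.cos(2 * math.pi * f_high * n)
    s_alias = math.cos(2 * math.pi * f_alias * n)
    # The two frequencies are indistinguishable once sampled:
    assert abs(s_high - s_alias) < 1e-9

print("0.65 cycles/pixel aliases onto", f_alias, "cycles/pixel")
```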
If you look closely you will see some faint stripes in the 2x magnified version at 1.538 pixels per cycle, i.e., the OLPF did not completely suppress the pattern, but attenuated it strongly.<br /><br />If we repeat the experiment at f/4, we obtain this image:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-U816SNzagvA/T9-CJgH_haI/AAAAAAAAAW8/51TFXTCUHUw/s1600/cchirp_4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-U816SNzagvA/T9-CJgH_haI/AAAAAAAAAW8/51TFXTCUHUw/s1600/cchirp_4.png" /></a></div>At f/4, we do not really see anything different compared to the f/1.4 images, except an overall decrease in contrast in all the panels.<br /><br />Ok, rinse & repeat at f/8:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-aoeKw2Jksl8/T9-FH0of-BI/AAAAAAAAAXM/4EwV5AJfcBM/s1600/cchirp_8.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-aoeKw2Jksl8/T9-FH0of-BI/AAAAAAAAAXM/4EwV5AJfcBM/s1600/cchirp_8.png" /></a></div>Now we can see the contrast in the "No OLPF" column, at 1.538 pixels per cycle, dropping noticeably. 
Diffraction is acting as a natural AA filter, effectively attenuating the frequencies above Nyquist.<br /><br />Finally, at f/11 we see some strong attenuation in the sensor without the AA filter too:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-v3mxhgtYJgI/T9-Ikas181I/AAAAAAAAAXY/-sK6JaSGqN8/s1600/cchirp_11.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-v3mxhgtYJgI/T9-Ikas181I/AAAAAAAAAXY/-sK6JaSGqN8/s1600/cchirp_11.png" /></a></div>You can still see some clear stripes (top left panel) in the 2x magnified view, but in the original size sub-panel the stripes are almost imperceptible.<br /><br /><h3> Conclusion</h3>So there you have it. A sensor without an AA filter can only really attain a significant increase in resolution at large apertures, where diffraction is not attenuating the contrast at higher frequencies too strongly. Think f/5.6 or larger apertures.<br /><br />Unfortunately, this is exactly the aperture range in which aliasing is clearly visible, as shown above. In other words, if you have something like a D800E, you can avoid aliasing by stopping down to f/8 or smaller, but at those apertures your resolution will be closer to that of the D800. At apertures of f/5.6 and larger, you may experience aliasing, but you are also likely to have better sharpness than the D800. 
<br /><br />Not an easy choice to make.<br /><br />Personally, I would take the sensor with the AA filter.Frans van den Berghhttps://plus.google.com/104242390274256831576noreply@blogger.com3tag:blogger.com,1999:blog-6555460465813582847.post-14098410969181786482012-06-18T09:50:00.001-07:002012-06-18T09:50:18.737-07:00Nikon D40 and D7000 AA filter MTF revisitedIn an earlier post, I showed some early MTF measurements taken with both the D40 and the D7000 at the centre of a Sigma 17-50 mm f/2.8 lens.<br />In this post, I revisit those measurements, presenting some new results for the D7000, and a model of the sensor AA filter (or OLPF).<br /><br /><h3> Optical Low Pass Filters (OLPFs) in DSLRs</h3>I have noticed previously that the measured MTF of the D40 was fairly close to a Gaussian, but that there were some small systematic discrepancies that were not accounted for. But how do you build an optical filter that has a Gaussian transfer function?<br /><br />It turns out that the OLPFs in DSLRs are not really Gaussian, but something much simpler: beam splitters. A slice of Lithium Niobate crystal, cut at a specific angle, presents a different index of refraction depending on the polarization of the incident light. Let us assume that a beam of horizontally polarized light passes through straight, for the sake of the argument. Vertically polarized light, on the other hand, refracts (bends) as it enters the crystal, effectively taking a longer path through the crystal. As the vertically polarized light leaves the crystal, it refracts again to form a beam parallel to the horizontally polarized beam, but displaced sideways by a distance dependent on the thickness of the crystal.<br /><br />Using a single layer of Lithium Niobate crystal, you can split a single beam into two parallel beams separated by a distance <i>d</i>, which is typically chosen to match the pixel pitch of the sensor. 
Since this happens for all beams, the image leaving the OLPF is the sum (average) of the incoming image and a shifted version of itself, translated by exactly one pixel pitch.<br /><br />If you stack two of these filters, with the second one rotated through 90 degrees, you effectively split a beam into four, forming a square with sides equal to the pixel pitch (but often slightly less than the pitch, to improve resolution). A circular polariser is usually inserted between the two Lithium Niobate layers to "reset" the polarisation of the light before it enters the second Niobate layer.<br /><br /><h3> Combining pixel aperture MTF with the OLPF MTF</h3>So how does this beam-splitter effect the desired blurring? We can compute the combined PSF by convolving the beam-splitter impulse response with the pixel aperture impulse response (a box function).<br /><br />The beam splitter is represented as four impulses, i.e., infinitely thin points. Basic Fourier transform theory tells us that the Fourier transform of a symmetric pair of impulses is a cosine function, so the MTF of the beam splitter will be a sum of cosines.<br /><br />The "default" 4-way beam splitter impulse response filter can be denoted as the sum of four diagonally-placed impulses, i.e.,<br /><div style="text-align: center;">f(<i>x</i>,<i>y</i>) = δ(<i>x</i>-<i>d</i>, <i>y</i>-<i>d</i>) + δ(<i>x</i>+<i>d</i>, <i>y</i>+<i>d</i>) + δ(<i>x</i>-<i>d</i>, <i>y</i>+<i>d</i>) + δ(<i>x+d</i>, <i>y</i>-<i>d</i>)</div>where δ(x,y) represents a 2D Dirac delta function, which is non-zero only if both x and y are zero, and <i>d</i> represents the OLPF <i>split distance.</i> More sophisticated OLPF designs are possible (e.g., 8-way splitters), but the 4-way designs appear to be popular. In my notation here, the distance between two beams would be 2<i>d</i>; this is to accommodate my usual notation of a pixel being defined over the area [-0.5, -0.5] to [0.5, 0.5]. 
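Convolving the four impulses with the box-shaped pixel aperture amounts to averaging four diagonally shifted copies of the box. A small Python sketch of this combined PSF (the function names are mine; d = 0.375 pixels as in the figures below):

```python
def box(x, y):
    """Unit square pixel aperture over [-0.5, 0.5] x [-0.5, 0.5]."""
    return 1.0 if abs(x) <= 0.5 and abs(y) <= 0.5 else 0.0

def olpf_psf(x, y, d=0.375):
    """PSF of the 4-dot beam splitter convolved with the pixel aperture:
    the average of four copies of the box, shifted diagonally by +/- d."""
    return 0.25 * (box(x - d, y - d) + box(x + d, y + d) +
                   box(x - d, y + d) + box(x + d, y - d))

# The support widens from 1 pixel to 1 + 2*d = 1.75 pixels along each axis:
print(olpf_psf(0.0, 0.0))    # centre: all four shifted boxes overlap
print(olpf_psf(0.87, 0.87))  # near a corner: only one shifted box contributes
print(olpf_psf(0.9, 0.9))    # outside the widened support
```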
<br /><br /><br />The degree of blur is controlled by the <i>d</i> parameter, with <i>d</i> = 0 yielding no blurring, and <i>d =</i> 0.5 giving us a two-pixel-wide blur. Since <i>d </i>can be varied by controlling the thickness of the Lithium Niobate layers, a manufacturer can fine-tune the strength of the OLPF for a given sensor. <br /><br />Convolving a square pixel aperture with four impulse functions is straightforward: just sum four copies of the box filter PSF, each shifted by <i>d</i> in the required direction. For <i>d</i> = 0.375, we obtain the following PSF:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-QanhX0Uozfo/T98T2JASy_I/AAAAAAAAAVc/WrGloAB9Yi4/s1600/psf_olpf.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://1.bp.blogspot.com/-QanhX0Uozfo/T98T2JASy_I/AAAAAAAAAVc/WrGloAB9Yi4/s320/psf_olpf.png" width="320" /></a></div><br />in two dimensions, or<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-8N8ARepOe0o/T98SsQAQFhI/AAAAAAAAAVM/Gr2sds4MsEY/s1600/psf_olpf_3d_rs.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://1.bp.blogspot.com/-8N8ARepOe0o/T98SsQAQFhI/AAAAAAAAAVM/Gr2sds4MsEY/s320/psf_olpf_3d_rs.png" width="298" /></a></div>in three dimensions. Not exactly a smooth PSF, but then neither is the square pixel aperture.<br /><br /><h3> Simulating the combined effects of diffraction, OLPF and pixel aperture</h3>We can derive the combined PSF directly by convolving the diffraction, OLPF and pixel aperture PSFs. Note that this combined PSF is parameterized by wavelength, lens aperture, pixel pitch, and OLPF split distance. 
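In the frequency domain the same convolution becomes a product of the component MTFs: the box aperture contributes a sinc, the beam splitter a cosine (taking a 1D cut along one axis, with beams displaced by &plusmn;<i>d</i>), and diffraction the standard circular-pupil MTF. This 1D profile is only a sketch of the full 2D model, but it reproduces the kind of above-Nyquist contrast values discussed in the previous post:

```python
import math

def pixel_mtf(f):
    """MTF of a square pixel aperture along one axis: |sinc(f)|, f in cycles/pixel."""
    return 1.0 if f == 0 else abs(math.sin(math.pi * f) / (math.pi * f))

def olpf_mtf(f, d=0.375):
    """1D MTF of the 4-dot beam splitter along one axis (beams at +/- d pixels)."""
    return abs(math.cos(2 * math.pi * d * f))

def diffraction_mtf(f, pitch_um=4.73, wavelength_um=0.55, n=4.0):
    """Diffraction MTF of an ideal circular aperture, f in cycles/pixel."""
    cutoff = pitch_um / (wavelength_um * n)  # diffraction cutoff in cycles/pixel
    s = f / cutoff
    if s >= 1.0:
        return 0.0
    return (2 / math.pi) * (math.acos(s) - s * math.sqrt(1 - s * s))

f = 0.65  # cycles per pixel, above the Nyquist limit of 0.5
no_olpf = pixel_mtf(f) * diffraction_mtf(f)     # roughly 0.27
with_olpf = no_olpf * olpf_mtf(f)               # strongly attenuated
print(round(no_olpf, 2), round(with_olpf, 2))
```

Without the OLPF term the contrast at 0.65 cycles per pixel comes out near 0.27; multiplying in the cosine term collapses it to almost nothing.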
For a wavelength of 0.55 micron, an aperture of f/4, a pixel pitch of 4.73 micron and an OLPF split distance of 0.375 pixels (=2.696 micron), we obtain the following PSF:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-EP1f4PUypbE/T98Vbi1wmyI/AAAAAAAAAVs/7BOhVyIw1AM/s1600/psf_diff_olpf.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://1.bp.blogspot.com/-EP1f4PUypbE/T98Vbi1wmyI/AAAAAAAAAVs/7BOhVyIw1AM/s320/psf_diff_olpf.png" width="320" /></a></div>in two dimensions, or<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-plxiHPS4Z3o/T98VhQv5vAI/AAAAAAAAAV0/LBZF5jn2-4E/s1600/psf_diff_olpf_3d_rs.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://4.bp.blogspot.com/-plxiHPS4Z3o/T98VhQv5vAI/AAAAAAAAAV0/LBZF5jn2-4E/s320/psf_diff_olpf_3d_rs.png" width="298" /></a></div>in three dimensions. Notice how diffraction has smoothed out the combined OLPF and pixel aperture PSF (from previous section).<br /><div class="separator" style="clear: both; text-align: center;"><br /></div><br /><br />Using this PSF we can generate synthetic images of a step edge. The MTF of this synthetic edge image can then be compared to a real image captured with a given sensor to see how well this model holds up.<br /><br /><h3> Enter the Death Star</h3>You can certainly measure lens or sensor sharpness (MTF) using any old chart that you printed on office paper, but if you are after resolution records, you will have to take a more rigorous approach. One of the simplest ways of obtaining a reasonably good slanted edge target is to blacken a razor blade with candle soot. 
This gives you a very straight edge, and very good contrast, since the soot reflects very little light.<br /><br />That leaves only two other questions: what do you use as a background against which you will capture the razor blade, and how do you illuminate your target?<br /><br />Previously, I used an SB600 speedlight to illuminate my razor blade, which was mounted on a high-grade white paper backing. This produced reasonably good results, but it did not maximize contrast because the flash was lighting the scene from the front. There is also a possibility that the D7000 cycles its mirror when using a flash in live view mode, which could lead to mirror slap. So the flash had to go.<br /><br />In its place I used a home-made integrating sphere, which I call the "Death Star" (sorry George, please do not sue me). Here is what it looks like with the razor blade mounted over the integrating sphere's port:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-0bpUeGPt58k/T9zPCQDVZII/AAAAAAAAAUY/h5tHUjmZVdM/s1600/deathstar.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="393" src="http://2.bp.blogspot.com/-0bpUeGPt58k/T9zPCQDVZII/AAAAAAAAAUY/h5tHUjmZVdM/s400/deathstar.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Yes, I know the highlight is blown out...</td></tr></tbody></table>Why use an integrating sphere? Because it produces perfectly uniform diffuse lighting, which is ideal for creating a uniform white backdrop for the razor blade.<br /><br />In addition, my home-made integrating sphere produces enough light to require a shutter speed of 1/200 s at f/4, ISO 100 to prevent blowing out the highlights.
This is perfect for minimizing the influence of vibration (although higher shutter speeds would be even better).<br /><br />Using this set-up, I then capture a number of shots focused in live view, using a remote shutter release. With this target the AF is really accurate, since I could not do much better with manual focus bracketing. One day I will get a focusing rail, which will make focus bracketing much simpler.<br /><br />Lastly, all MTF measurements on the razor edge were performed using <a href="http://sourceforge.net/projects/mtfmapper">MTF Mapper</a>. The "--bayer green" option was used to measure MTF using only the green photosites, thus avoiding any problems with Bayer demosaicing. The raw files were converted using dcraw's "-D" option.<br /><br /><h3> D40 MTF and OLPF model</h3>Here is the MTF plot of a D40 razor image captured at f/4 (manually focus bracketed), Bayer green channel only:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-GmfbdUtuJwQ/T98XALJncmI/AAAAAAAAAV8/8T58__UYu1s/s1600/d40_mtf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://3.bp.blogspot.com/-GmfbdUtuJwQ/T98XALJncmI/AAAAAAAAAV8/8T58__UYu1s/s400/d40_mtf.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">D40 green channel MTF, Sigma 17-50 mm f/2.8 at f/4</td></tr></tbody></table>The D40's PSF was modelled by convolving the square pixel aperture (7.8 micron wide), the 4-point beam splitter (<i>d</i>=0.375 pixels), and the Airy function (f/4).
A synthetic image of a step edge with this PSF was generated, and measured using MTF Mapper.<br /><br />Purely judging the match between the model and the measured MTF by eye, one would have to conclude that the model captures the interesting parts of the MTF rather well. The measured MTF is slightly below the model, which is most likely caused by a smidgen of defocus.<br /><br />The fact that the model fits so well could also be taken to imply that the Sigma 17-50 mm f/2.8 lens is relatively aberration-free at f/4, i.e., it is diffraction limited in the centre.<br /><br />MTF50 resolution came in at 0.334 cycles per pixel, or 42.84 line pairs per mm, or 671 line pairs per picture height.<br /><br /><h3> D7000 MTF and OLPF model</h3>Here is the MTF plot of a D7000 razor image captured at f/4 (AF in live view), Bayer green channel only:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-gyT8lKafoFs/T98dRUIuMoI/AAAAAAAAAWI/5E5hVb8cqjs/s1600/d7k_mtf.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="http://1.bp.blogspot.com/-gyT8lKafoFs/T98dRUIuMoI/AAAAAAAAAWI/5E5hVb8cqjs/s400/d7k_mtf.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">D7000 green channel MTF, Sigma 17-50 mm f/2.8 at f/4</td></tr></tbody></table>The D7000's PSF was modelled by convolving the square pixel aperture, the 4-point beam splitter (d=0.375 pixels), and the Airy function (f/4). A synthetic image of a step edge with this PSF was generated, and measured using MTF Mapper.<br /><br />The measured MTF does not fit the model MTF quite as well as it did in the D40's case. Given that the physical linear resolution is 60% higher, it is correspondingly harder to obtain optimal focus. 
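This model is also easy to check numerically in the frequency domain, where the convolution of PSFs becomes a product of MTFs: diffraction &times; pixel-aperture sinc &times; OLPF term (the four beam-splitter spots project onto the edge normal as two impulses at ±<i>d</i>, giving a cosine). A sketch, with function names of my own, assuming green light at 0.55 micron and the 4.73 micron pitch used earlier; it also lands on the "close to 68 lp/mm" theoretical ceiling for a perfectly focused, diffraction-limited f/4 lens quoted below:

```python
import math

def mtf_diffraction(f, fc):
    """Diffraction MTF of an ideal circular aperture; fc is the cutoff frequency."""
    s = f / fc
    if s >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(s) - s * math.sqrt(1.0 - s * s))

def mtf_model(f, pitch_um=4.73, fnum=4.0, wavelength_um=0.55, d=0.375):
    """Combined MTF in cycles/pixel: diffraction x pixel sinc x OLPF cosine."""
    fc = pitch_um / (wavelength_um * fnum)          # diffraction cutoff, cycles/pixel
    pix = 1.0 if f == 0 else abs(math.sin(math.pi * f) / (math.pi * f))
    olpf = abs(math.cos(2.0 * math.pi * d * f))     # two impulses at +/- d
    return mtf_diffraction(f, fc) * pix * olpf

def mtf50_cpp(**kw):
    """Bisect for the frequency where the combined MTF falls to 0.5."""
    lo, hi = 0.0, 0.6                               # MTF50 lies below the OLPF zero
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mtf_model(mid, **kw) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(mtf50_cpp())               # theoretical D7000 MTF50 at f/4, cycles/pixel
print(mtf50_cpp() / 0.00473)     # the same figure in lp/mm
```

Note that the first zero of the OLPF term falls at 1/(4<i>d</i>) &#8776; 0.67 cycles per pixel for <i>d</i> = 0.375, close to the measured first zero discussed in the summary below.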
The shape of the measured MTF relative to the model MTF is consistent with defocus blur.<br /><br /><br />The actual resolution figures are impressive: MTF50 is 0.304 cycles per pixel, or about 64 lp/mm, or equivalently, 992 line pairs per picture height.<br /><br />If the model is indeed accurate, it would mean that the D7000 can theoretically obtain resolution figures close to 68 lp/mm at f/4 in the green channel, provided that the lens is purely diffraction limited, and perfectly focused. <br /><br /><h3> Summary</h3>Perhaps this is not such a surprising result, but it appears that Nikon is using the same relative strength of AA filter in both the D40 and the D7000; this can be deduced from the fact that both the D40 and the D7000 OLPF models fitted best with an OLPF split distance of 0.375 pixels.<br /><br /><br />The somewhat unexpected result, for me at least, was that the MTF shape is so sensitive to perfect focus. Specifically, it seems that the first zero of the MTF curve, at around 0.6875 cycles per pixel, is not readily visible unless focus is perfect. The zero is quite clear in the D40 curve, but not quite so visible in the D7000 curve. You are extremely unlikely to achieve this kind of focus in real world photography, though.<br /><br /><h3>References</h3>1. 
Russ Palum, "Optical Antialiasing Filters", in <i>Single-Sensor Imaging: Methods and Applications for Digital Cameras</i>, edited by Rastislav Lukac, CRC Press, 2008<br /><br /><h2>D800E versus diffraction (2012-06-06)</h2><div style="text-align: left;">In a previous post (<a href="http://mtfmapper.blogspot.com/2012/06/diffraction-and-box-filters.html">here</a>), I illustrated how diffraction through a circular aperture can be modelled either in the spatial domain as a point spread function (PSF), or in the frequency domain as a modulation transfer function (MTF). I will now put these models to use to investigate the influence of diffraction on the resolution that can be achieved with the D800E at various apertures.</div><div style="text-align: left;"><br /></div><h3 style="text-align: left;"> Simulating the effect of diffraction</h3>I will not go into the maths behind the diffraction MTF; this was discussed in another post (<a href="http://mtfmapper.blogspot.com/2012/06/diffraction-and-box-filters.html">here</a>). For now, it is sufficient to understand that we can combine the diffraction MTF with the sensor's MTF through multiplication in the frequency domain.<br /><div style="text-align: left;"><br /></div><div style="text-align: left;">Assume for the moment that the D800E effectively does not have an AA filter (in practice, this might not be entirely true, i.e., the D800E may just have a very weak AA filter compared to other cameras). This allows us to model the pixel's MTF curve as a sinc(<i>x</i>), as was shown in <a href="http://mtfmapper.blogspot.com/2012/05/pixels-aa-filters-box-filters-and-mtf.html">a previous post</a>.
Next, we assume that the lens is diffraction limited, i.e., the other lens aberrations are negligible, and thus the lens MTF is just the diffraction MTF.</div><div style="text-align: left;">For a D800(E) pixel pitch of 4.88 micron, and an aperture of f/8, we obtain the following combined MTF curve:</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-X41nQeKUY9o/T89UcH9V9MI/AAAAAAAAATs/9hShVmq_uW0/s1600/d800e_f_8_.png" style="margin-left: auto; margin-right: auto;"><img border="0" height="320" src="http://3.bp.blogspot.com/-X41nQeKUY9o/T89UcH9V9MI/AAAAAAAAATs/9hShVmq_uW0/s320/d800e_f_8_.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">D800E combined MTF at f/8</td></tr></tbody></table><div style="text-align: left;">The dashed grey curve represents the sensor's MTF, and the black curve represents the diffraction MTF. The blue curve is the product of these two curves, and represents the combined diffraction-and-sensor MTF. </div><div style="text-align: left;">At f/8, our peak MTF50 value will be 0.344 c/p, or 70.4 lp/mm. Note that this is still higher than what I measured on a D7000 at f/5, which peaked at about 0.29 c/p (61 lp/mm), but the D7000 has an AA filter. 
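These combined curves are straightforward to reproduce numerically. Below is a sketch (my own code, not MTF Mapper's; it assumes 0.55 micron green light and an ideal circular aperture) that multiplies the diffraction MTF by the pixel-aperture sinc for the 4.88 micron pitch, and bisects for MTF50:

```python
import math

def mtf_d800e(f, fnum, wavelength_um=0.55, pitch_um=4.88):
    """Diffraction x square-pixel MTF for a sensor without an AA filter.
    Frequencies are in cycles/pixel."""
    fc = pitch_um / (wavelength_um * fnum)   # diffraction cutoff, cycles/pixel
    s = f / fc
    diff = 0.0 if s >= 1.0 else (2.0 / math.pi) * (
        math.acos(s) - s * math.sqrt(1.0 - s * s))
    pix = 1.0 if f == 0 else abs(math.sin(math.pi * f) / (math.pi * f))
    return diff * pix

def mtf50(fnum):
    """Bisect for the frequency where the combined MTF drops to 0.5."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mtf_d800e(mid, fnum) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for fnum in (5.6, 8, 11, 16):
    f50 = mtf50(fnum)
    print(f"f/{fnum}: MTF50 = {f50:.3f} c/p = {f50 / 0.00488:.1f} lp/mm")
```

This reproduces the figures discussed in this post: roughly 0.412 c/p at f/5.6, 0.344 at f/8, 0.278 at f/11 and 0.207 at f/16.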
</div><div style="text-align: left;"><br /></div><div style="text-align: left;">Moving to even smaller apertures will cost us resolution, thus at f/11 the curve looks like this:</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-bEbdwRQGGRo/T89VP1DwsMI/AAAAAAAAAT0/_l-8dYJJp1w/s1600/d800e_f_11_.png" style="margin-left: auto; margin-right: auto;"><img border="0" height="320" src="http://3.bp.blogspot.com/-bEbdwRQGGRo/T89VP1DwsMI/AAAAAAAAAT0/_l-8dYJJp1w/s320/d800e_f_11_.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">D800E combined MTF at f/11</td></tr></tbody></table><div style="text-align: left;">At f/11, MTF50 peaks at only 0.278 c/p, or 57 lp/mm. This is still extremely crisp, although you might barely be able to see the difference compared to f/8 under ideal conditions. Pushing through to f/16:</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-ASEvjTnV0mQ/T89V3uH_6XI/AAAAAAAAAT8/BXgvQAFwu8o/s1600/d800e_f_16_.png" style="margin-left: auto; margin-right: auto;"><img border="0" height="320" src="http://1.bp.blogspot.com/-ASEvjTnV0mQ/T89V3uH_6XI/AAAAAAAAAT8/BXgvQAFwu8o/s320/d800e_f_16_.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">D800E combined MTF at f/16</td><td class="tr-caption" style="text-align: center;"><br /></td></tr></tbody></table><div style="text-align: left;">Note how close the combined MTF curve and the diffraction MTF curve have now become; this indicates that diffraction is starting to dominate the MTF curve, and thus also resolution. 
At f/16, MTF50 has dropped to 0.207, or about 42.3 lp/mm, which is not bad, but quite far from the 70 lp/mm we achieved at f/8.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">What about going in the other direction? Here is what happens at f/5.6:</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-yDaSJoJDWPA/T89W3Nj0yfI/AAAAAAAAAUE/Qpj7mVvX7Uk/s1600/d800e_f_5.6_.png" style="margin-left: auto; margin-right: auto;"><img border="0" height="320" src="http://1.bp.blogspot.com/-yDaSJoJDWPA/T89W3Nj0yfI/AAAAAAAAAUE/Qpj7mVvX7Uk/s320/d800e_f_5.6_.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">D800E combined MTF at f/5.6</td></tr></tbody></table><div style="text-align: left;">MTF50 now reaches 0.412 c/p, or 84.4 lp/mm. At f/4 (not shown as a plot) we get 0.465 c/p (95.3 lp/mm), and so on. Below f/4 we will start seeing the residual aberrations of the lens take over, which will reduce effective resolution. I have no model for those yet, so I will stop here for now.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">Ok, so I will go one step further. Here is the MTF plot at f/1.4, but keep in mind that for a real lens, other lens aberrations will alter the lens MTF so that it is no longer diffraction limited. 
But this is what it would have looked like if those aberrations were absent:</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-yg7VKSy7gUA/T89ZMcRx8EI/AAAAAAAAAUM/l9gsPdpJSHI/s1600/d800e_f_1.4_.png" style="margin-left: auto; margin-right: auto;"><img border="0" height="320" src="http://4.bp.blogspot.com/-yg7VKSy7gUA/T89ZMcRx8EI/AAAAAAAAAUM/l9gsPdpJSHI/s320/d800e_f_1.4_.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">D800E combined MTF at f/1.4</td></tr></tbody></table><div style="text-align: left;">Off the charts! MTF50 will sit at 0.557 c/p, or 114.1 lp/mm. The pixel MTF and the combined MTF are now very similar, which is to be expected, since diffraction effects are now almost negligible. Now if only they could build this lens ...</div><div style="text-align: left;"><br /></div><h3 style="text-align: left;"> In closing</h3><div style="text-align: left;">These results seem to support the suggestions floating around on the web that the D800E will start to visibly lose sharpness after f/8, <i>compared to what it achieves at f/5.6</i>. But this does not mean that f/11 is not sharp, since 57 lp/mm is not something to be sneezed at! Even more importantly, there is no "magical f-stop" after which diffraction causes the resolution to drop; diffraction will lower resolution at all f-stop values. The balance between diffraction blur and blur caused by other lens aberrations tends to cause lens resolution to peak at a certain aperture (around f/4 to f/5.6 for many lenses), but even at f/1.4 you will lose resolution to diffraction, just not a lot.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">There are also some claims that the D700 was usable at f/16, but now suddenly the D800E will not be usable at f/16 any more. 
This is not true. If we compare a hypothetical D700E with our hypothetical D800E above, we see that the D800E attains an MTF50 value of 42.3 lp/mm at f/16, while the hypothetical D700E would reach only 37.2 lp/mm.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">The real D700 has an AA filter. If we approximate the strength of this filter as a Gaussian with a standard deviation of 0.6246, then the D700 would only reach an MTF50 of 25.6 lp/mm at f/16. A similar approximation of the AA filter for the D800 would produce an MTF50 of 34.4 lp/mm at f/16. So the D800 (or D800E) will always capture more detail than the D700 <i>at all apertures.</i> The D800E is perfectly usable at f/16, and more so than the D700.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">[Incidentally, the diffraction + Gaussian AA filter approximation used here appears to be quite accurate. Roger Cicala's Imatest results on the D800 and D700 with the Zeiss 25 mm f/2 (<a href="http://www.lensrentals.com/blog/2012/03/d-resolution-tests">see here</a>) agree with my figures. From Roger's charts, we see the D800 at f/5.6 achieves 1200 lp/ph, or about 50.06 lp/mm, compared to my figure of 50.7 lp/mm. The D700 at f/5.6 attains roughly 750 lp/ph (31.38 lp/mm) in Roger's test, and my model predicts 31.9 lp/mm.]</div><div style="text-align: left;"><br /></div>The catch, though, is that the D700's MTF50 at f/16 is 0.216 c/p (25.6 lp/mm), whereas the D800's MTF50 at f/16 is 0.168 c/p (34.4 lp/mm). The apparent per-pixel sharpness of the D700 will therefore exceed that of the D800 at 100% magnification on-screen; viewed at the same output size, though, the D800 will be somewhat sharper.
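For completeness, the diffraction + pixel + Gaussian AA model can be sketched as below (my own code; I assume the 0.6246 standard deviation is expressed in pixels, and 0.55 micron light). The MTF50 values this direct frequency-domain product yields land slightly below the synthetic-edge figures quoted above, since the exact numbers depend on how the edge is measured, but the D700E no-AA figure and the ordering of D800 ahead of D700 in lp/mm are reproduced:

```python
import math

def mtf_aa_model(f, pitch_um, fnum, sigma_px, wavelength_um=0.55):
    """Diffraction x square-pixel x Gaussian-AA MTF, f in cycles/pixel."""
    fc = pitch_um / (wavelength_um * fnum)   # diffraction cutoff, cycles/pixel
    s = f / fc
    diff = 0.0 if s >= 1.0 else (2.0 / math.pi) * (
        math.acos(s) - s * math.sqrt(1.0 - s * s))
    pix = 1.0 if f == 0 else abs(math.sin(math.pi * f) / (math.pi * f))
    aa = math.exp(-2.0 * (math.pi * sigma_px * f) ** 2)  # FT of a Gaussian PSF
    return diff * pix * aa

def mtf50_lpmm(pitch_um, fnum, sigma_px):
    """MTF50 by bisection, converted from cycles/pixel to lp/mm."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mtf_aa_model(mid, pitch_um, fnum, sigma_px) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) / (pitch_um * 1e-3)

print(mtf50_lpmm(8.45, 16, 0.0))      # hypothetical D700E (no AA filter) at f/16
print(mtf50_lpmm(8.45, 16, 0.6246))   # D700 with Gaussian AA at f/16
print(mtf50_lpmm(4.88, 16, 0.6246))   # D800 with Gaussian AA at f/16
```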