Thursday, 23 August 2018

Simulating defocus and spherical aberration in mtf_generate_rectangle

It has been many years since I last tinkered with the core functionality of mtf_generate_rectangle. For the uninitiated, mtf_generate_rectangle is a tool in the MTF Mapper package that is used to generate synthetic images with a known Point Spread Function (PSF). The tool can generate black rectangular targets on a white background for use with MTF Mapper's main functionality, but mtf_generate_rectangle can do quite a bit more, including rendering arbitrary multi-polygon scenes (I'll throw in some examples below).

A synthetic image is the result of convolving the 2D target scene with the 2D PSF of the system. In practice, this convolution is only evaluated at the centre of each pixel in the output image; see this previous post for a more detailed explanation of the algorithm. That post covers the method used to simulate an aberration-free ideal lens, often called a diffraction-limited lens, combined with the photosite aperture of the sensor. The Airy pattern, which describes the effects of diffraction, is readily modelled using a jinc(x) function, but if we want to add in the effects of defocus and spherical aberration, things become a little harder.

I can recommend Jack Hogan's excellent article on the topic, which describes the analytical form of a PSF that combines diffraction, defocus and spherical aberration. I repeat a version of this equation here with the normalization constant omitted:

PSF(r) = | ∫₀¹ γ(ρ) J₀(πrρ) ρ dρ |²

where

γ(ρ) = exp(i 2π (W020 ρ² + W040 ρ⁴))

specifies the defocus (W020 coefficient) and spherical aberration (W040 coefficient), both expressed as a multiple of the wavelength λ.

The PSF(r) equation contains two parameters that relate to radial distances: r, which denotes the normalized distance from the centre of the PSF, and ρ, which denotes a normalized radial distance across the exit pupil of the lens. Unfortunately we have to integrate out the ρ parameter to obtain our estimate of the PSF at each radial distance r at which we wish to sample the PSF. Figure 1 below illustrates our sampled PSF as a function of r (the PSF is shown mirrored around r=0 for aesthetic reasons). Keep in mind that for a radially symmetric PSF, r is in the range [0, ∞). For practical purposes we have to truncate r at some point; a reasonable cut-off is somewhere around 120 λN units (λ is the wavelength, N is the f-number of the lens aperture).

Figure 1: Sampled PSF of the combined effect of circular aperture diffraction and spherical aberration.

Integrating oscillating functions is hard

Numerical integration is hard, like eating a bag of pine cones, but that is peanuts compared to numerical integration of oscillating functions. Where does all this oscillation come from? Well, that Bessel function of the first kind (of order 0), J0, looks like this when r=20 (and λN = 1):
Figure 2: Bessel J0 when r=20
Notice how the function crosses the y=0 axis exactly 20 times. As one would expect, when r=120, we have 120 zero crossings. Why is it hard to numerically integrate a function with zero crossings? Well, consider the high-school equivalent of numerical integration using the rectangle rule: we evaluate the function f at some arbitrary x_i values (orange dots in Figure 3); we form rectangles with a "height" of f(x_i) and a width of (x_{i+1} - x_i), and we add up the values f(x_i) * (x_{i+1} - x_i) to approximate the area under the curve.
Figure 3: Notice how the areas of the negative rectangles do not appear to cancel the areas of the positive rectangles very well.
Ok, so I purposely chose some poorly and irregularly spaced x_i values. The integral approximation will be a lot more accurate if we use more (and better) x_i values, but you can see what the problem is: there is no way that the positive and negative rectangles will add up to anything near the correct area under the curve (which should be close to zero). We can use more sophisticated numerical integration methods, but it will still end in tears.
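
To make the problem concrete, here is a small standalone demonstration (my own sketch, not MTF Mapper code) that applies the midpoint rectangle rule to the function of Figure 2, i.e., J0(20πρ) on [0, 1], using C++17's std::cyl_bessel_j. The estimates only settle down once the rectangles become much narrower than the oscillation period:

    #include <cmath>
    #include <cstdio>

    int main() {
        constexpr double pi = 3.141592653589793;
        // integrate f(rho) = J0(20*pi*rho) over [0, 1] with n midpoint rectangles
        for (int n = 8; n <= 512; n *= 2) {
            double h = 1.0 / n;
            double sum = 0.0;
            for (int i = 0; i < n; i++) {
                double rho = (i + 0.5) * h;   // midpoint of the i-th rectangle
                sum += std::cyl_bessel_j(0.0, 20.0 * pi * rho) * h;
            }
            std::printf("n = %4d  estimate = % .8f\n", n, sum);
        }
        return 0;
    }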

Fortunately there is a very simple solution to this problem: we just split our function into intervals between the zero crossings of f(), integrate each interval separately, and add up the partial integrals. This strategy is excellent if we know where the zero crossings are.

And our luck appears to be holding, because the problem of finding the zero crossings of the Bessel J0 function has already been solved [1]. The algorithm is fairly simple: we know the locations of the first two roots of J0, at x_1 = 2.404826 and x_2 = 5.520078, and subsequent roots can be estimated as x_i = x_{i-1} + (x_{i-1} - x_{i-2}), for i >= 3. We can refine those estimates using a Newton-Raphson iteration, x_{i,j} = x_{i,j-1} + J0(x_{i,j-1})/J1(x_{i,j-1}), where J1 is the Bessel function of the first kind of order 1. We also know that if our maximum r is 120, then we expect 120 roots, and we just have to apply our numerical integration routine to each interval defined by [x_{i-1}, x_i] (plus the endpoints 0 and 120, of course).
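
In code, this scheme is only a handful of lines. Here is a sketch (C++17's std::cyl_bessel_j; recall that J0'(x) = -J1(x), which is where the "+" sign in the Newton-Raphson update comes from):

    #include <cmath>
    #include <vector>

    // roots of J0 up to (at least) x_max, bracketed by extrapolating the
    // previous spacing and polished with a few Newton-Raphson steps:
    // x <- x + J0(x)/J1(x)
    std::vector<double> bessel_j0_roots(double x_max) {
        std::vector<double> roots = {2.404826, 5.520078}; // first two roots
        while (roots.back() < x_max) {
            size_t n = roots.size();
            double x = roots[n-1] + (roots[n-1] - roots[n-2]); // initial guess
            for (int j = 0; j < 3; j++) {
                x += std::cyl_bessel_j(0.0, x) / std::cyl_bessel_j(1.0, x);
            }
            roots.push_back(x);
        }
        return roots;
    }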

But we are not quite there yet. Note that our aberration function γ(ρ) is a complex exponential function. If either the W020 or the W040 coefficient is non-zero, then our J0 function is multiplied by another oscillatory function; fortunately those oscillations are a function of ρ, and not r, so they do not introduce enough additional oscillation to cause major problems for realistic values, say W020 < 20 and W040 < 20. This is not the only challenge that γ(ρ) introduces, though. Although our final PSF is a real-valued function, the entire integrand is complex, and care must be taken to apply the modulus operator only to the final result. This implies that we must keep all the partial integrals of our [x_{i-1}, x_i] intervals as complex numbers, but it also requires the actual numerical integration routine to operate on complex numbers. Which brings us to the selection of a suitable numerical integration routine ...
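
For reference, the integrand of the PSF equation from the introduction can be written as follows (a sketch, not the actual implementation); note that the roots x_i from the previous sketch map to interval boundaries ρ_i = x_i/(πr):

    #include <cmath>
    #include <complex>

    // integrand of PSF(r): gamma(rho) * J0(pi*r*rho) * rho, with
    // gamma(rho) = exp(i*2*pi*(W020*rho^2 + W040*rho^4))
    std::complex<double> integrand(double rho, double r,
                                   double w020, double w040) {
        constexpr double pi = 3.141592653589793;
        double rho2 = rho * rho;
        std::complex<double> gamma = std::exp(std::complex<double>(
            0.0, 2.0 * pi * (w020 * rho2 + w040 * rho2 * rho2)));
        return gamma * std::cyl_bessel_j(0.0, pi * r * rho) * rho;
    }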

Candidate numerical integration routines

The simplest usable integration routine that I know of is the adaptive variant of Simpson's rule, but this only works well if your integrand is sufficiently smooth, and your subdivision threshold is chosen carefully. The adaptive Simpson algorithm is exceedingly simple to implement, so that was the first thing I tried. Why did I want to use my own implementation? Why not use something like Boost or GSL to perform the integration?

Well, the primary reason is that I dislike adding dependencies to MTF Mapper unless it is absolutely necessary. There is nothing worse, in my experience, than trying to build some piece of open-source code from source, only to spend hours trying to get just the right version of all the dependencies. Pulling in either Boost or GSL as a dependency just because I want to use a single integration routine is just not acceptable. Anyway, why would I pass up the opportunity to learn more about numerical integration? (Ok, so I admit, this is the real reason. I learn most when I implement things myself.)
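
For the record, the complex-valued adaptive Simpson routine is short enough to show in full. This is the textbook algorithm with Richardson extrapolation, written as a self-contained sketch rather than MTF Mapper's exact code; the subdivision threshold mentioned above is the tol parameter:

    #include <cmath>
    #include <complex>
    #include <functional>

    using cplx = std::complex<double>;

    static cplx simpson(double a, double b, cplx fa, cplx fm, cplx fb) {
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb);
    }

    static cplx adapt(const std::function<cplx(double)>& f, double a, double b,
                      cplx fa, cplx fm, cplx fb, cplx whole,
                      double tol, int depth) {
        double m = 0.5 * (a + b);
        cplx fl = f(0.5 * (a + m));             // midpoint of left half
        cplx fr = f(0.5 * (m + b));             // midpoint of right half
        cplx left  = simpson(a, m, fa, fl, fm);
        cplx right = simpson(m, b, fm, fr, fb);
        cplx delta = left + right - whole;
        if (depth <= 0 || std::abs(delta) <= 15.0 * tol) {
            return left + right + delta / 15.0; // Richardson extrapolation
        }
        return adapt(f, a, m, fa, fl, fm, left,  0.5 * tol, depth - 1) +
               adapt(f, m, b, fm, fr, fb, right, 0.5 * tol, depth - 1);
    }

    cplx integrate(const std::function<cplx(double)>& f, double a, double b,
                   double tol = 1e-10) {
        cplx fa = f(a), fb = f(b), fm = f(0.5 * (a + b));
        return adapt(f, a, b, fa, fm, fb, simpson(a, b, fa, fm, fb), tol, 50);
    }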

So I gave the adaptive Simpson algorithm a try, and it seemed to work well enough when I kept r < 20, thus avoiding the more oscillatory parts of the integrand. It was pretty slow, just like the literature predicted [2]. I decided to look for a better algorithm, but one that is still relatively simple to implement. This led me to TOMS468, and a routine called QSUBA. This FORTRAN77 implementation employs an adaptive version of Gauss-Kronrod integration. Very briefly, one of the main differences between Simpson's rule and Gaussian quadrature (quadrature being the archaic name for integration) is that the former approximates the integrand with a quadratic polynomial through regularly-spaced samples, whereas the latter can approximate the integrand as the product of a weighting function and a higher-order polynomial (with custom spacing). The Kronrod extension is a clever method of choosing our sample spacing that allows us to re-use previous function values while performing adaptive integration.

Much to my surprise, I could translate the TOMS468 FORTRAN77 code to C code using f2c, and it worked out of the box. It took quite a bit longer to port that initial C code to something that resembles good C++ code; all the spaghetti GOTO statements in the FORTRAN77 were faithfully preserved in the f2c output. I also had to extend the algorithm a little to support complex integrands.

Putting together the QSUBA routine and the root intervals of J0 described in the previous section seemed to do the trick. If I used only QSUBA without the root intervals, the integration was much slower, and led to errors at large values of r, as shown in Figure 4.
Figure 4: QSUBA integration without roots (top), and with roots (bottom). Note the logarithmic y-axis scale
Those spikes in the PSF certainly look nasty. Figure 5 illustrates how they completely ruin the MTF.
Figure 5: MTF derived from the two PSFs shown in Figure 4.
So how much faster is QSUBA compared to my original adaptive Simpson routine? Well, I measured about a 20-fold increase in speed.

Rendering the image

After having obtained a decent-looking PSF as explained above, the next step is to construct a cumulative distribution function (CDF) from the PSF, but this time we take into account that the real PSF is 2D. We can still do this with a 1D CDF, but at radial distance r we have to weight the probability in proportion to the area of an annulus at radius r. The method is described in detail in the section titled "Importance sampling and the Airy disc" in this post. The next step is to generate 2D sampling locations drawn from the CDF, which effectively places sampling locations with a density proportional to the intensity of the 2D PSF.
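
Here is a sketch of the idea with assumed names (the real code lives in mtf_generate_rectangle): each radial PSF sample is weighted by r before accumulation, and sampling then amounts to inverting the CDF and drawing a uniform angle:

    #include <algorithm>
    #include <cmath>
    #include <random>
    #include <vector>

    struct PsfSampler {
        std::vector<double> radius; // sampled radii r_i, ascending
        std::vector<double> cdf;    // cumulative sum of psf(r_i)*r_i, normalized

        PsfSampler(const std::vector<double>& r, const std::vector<double>& psf)
          : radius(r), cdf(r.size()) {
            double acc = 0.0;
            for (size_t i = 0; i < r.size(); i++) {
                acc += psf[i] * r[i];       // annulus-area weighting
                cdf[i] = acc;
            }
            for (double& c : cdf) c /= acc; // normalize to [0, 1]
        }

        // draw a 2D sampling location distributed according to the 2D PSF
        template <class Rng> void sample(Rng& rng, double& x, double& y) {
            constexpr double pi = 3.141592653589793;
            std::uniform_real_distribution<double> u(0.0, 1.0);
            auto it = std::lower_bound(cdf.begin(), cdf.end(), u(rng));
            double r = radius[it - cdf.begin()]; // inverse-transform lookup
            double theta = 2.0 * pi * u(rng);    // PSF is radially symmetric
            x = r * std::cos(theta);
            y = r * std::sin(theta);
        }
    };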

If we are simulating just the lens part, we simply calculate what fraction of our sampling locations fall inside the target polygon geometry (e.g., our black rectangles), and shade our output pixel accordingly. To add in the effect of the sensor's photosite aperture, which is essentially an additional convolution of our PSF with a square the size of a photosite (assuming a 100% fill factor), we replace the point-in-target-polygon test with an intersect-photosite-aperture-polygon-with-target-polygon step. This trick means that we do not have to resort to any approximations (e.g., discrete convolutions) to simulate the sensor side of our system PSF. Now for a few examples, as promised.
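
Building on the PsfSampler sketch above, the lens-only shading step reduces to a Monte Carlo fraction estimate. The Polygon type and its even-odd containment test below are my own illustrative stand-ins, not the actual rendering code:

    #include <random>
    #include <vector>

    struct Polygon {
        std::vector<std::pair<double,double>> v; // vertices, in order

        bool contains(double x, double y) const { // even-odd (crossing) rule
            bool in = false;
            for (size_t i = 0, j = v.size() - 1; i < v.size(); j = i++) {
                if ((v[i].second > y) != (v[j].second > y)) {
                    double xc = v[j].first + (y - v[j].second) /
                        (v[i].second - v[j].second) * (v[i].first - v[j].first);
                    if (x < xc) in = !in;
                }
            }
            return in;
        }
    };

    // fraction of PSF-distributed samples that miss the black target
    double shade_pixel(double px, double py, const Polygon& target,
                       PsfSampler& sampler, std::mt19937& rng, int n = 4096) {
        int inside = 0;
        for (int i = 0; i < n; i++) {
            double dx, dy;
            sampler.sample(rng, dx, dy);
            if (target.contains(px + dx, py + dy)) inside++;
        }
        return 1.0 - double(inside) / n; // black target on white background
    }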

Figure 6: MTF curves of simulated aberrations. The orange curve is f/5.6 with W040 = 0.75 (i.e., significant spherical aberration). The blue curve is f/5.6 with W040 = 2 (i.e., extreme spherical aberration). The green curve is f/5.6 with no spherical aberration, but W020 = 1 (i.e., a lot of defocus).
Before I show you the rendered images, take a look at Figure 6 to see what the MTF curves of the simulated images look like. First up, the orange curve with W040 = 0.75. This curve sags a little between 0.1 and 0.4 cycles/pixel, compared to the same lens without the simulated spherical aberration, but otherwise it still looks relatively normal. Figure 7 illustrates what a simulated image with such an MTF curve looks like.
Figure 7: Simulated lens at f/5.6 with W040 = 0.75, corresponding to the orange curve in Figure 6. Looks OK, if a little soft. (remember to click on the image for a 100% view)
The blue curve (in Figure 6) represents severe spherical aberration with W040 = 2, also rendered at f/5.6. Notice how the shape of the blue curve looks very different from what we typically see on (most?) real lenses, where the designers presumably do their best to keep the spherical aberrations from reaching this magnitude. The other interesting thing about the blue curve is that contrast drops rapidly in the range 0 to 0.1 cycles/pixel, but despite the sudden drop we then have a more gradual decrease in contrast. This simulated lens gives us Figure 8.
Figure 8: Simulated lens at f/5.6 with W040 = 2, corresponding to the blue curve in Figure 6. This image has the typical glow that I associate with spherical aberration.

Figure 9 illustrates a simulated scene corresponding to the green curve in Figure 6, representing significant defocus with W020 = 1, but no spherical aberration (W040 = 0). It also exhibits a new MTF curve shape that requires some explanation. It only appears as if the curve "bounces" at about 0.3 cycles per pixel; what is actually happening is that the OTF undergoes phase inversion between, say, 0.3 and 0.6 cycles per pixel, but because the MTF is the modulus of the OTF, we see a rectified version of the OTF (see Jack Hogan's article on defocus for an illustration of the OTF under defocus).


Figure 9: Simulated lens at f/5.6 with W020 = 1.0, corresponding to the green MTF curve in Figure 6. If you look very closely at 100% magnification (click on the image), you might just see some detail between the "2" and the "1" marks on the trumpet. This corresponds to the frequencies around 0.4 c/p, i.e., just after the bounce.
If we were to increase W020 to 2.0, we would see even more "bounces" as the OTF oscillates around zero contrast. Figure 10 shows what our simulated test chart would look like in this scenario. If you look closely near the "6" mark, you can see that the phase inversion manifests as an apparent reversal of the black and white stripes. Keep in mind that this amount of defocus aberration (say, W020 <= 2) is still in the region where diffraction interacts strongly with defocus. If you push the defocus much higher, you would start to enter the "geometric defocus" domain where defocus is essentially just an additional convolution of the scene with a circular disc. 
Figure 10: Simulated lens at f/5.6 with W020 = 2.0. This image looks rather out of focus, as one would expect, but notice how the contrast fades near the "7" mark on the trumpet, but then recovers somewhat between the "6" and "3" marks. This corresponds to the "bounce" near 0.3 cycles/pix we saw in the MTF curve. Look closely, and you will see the black and white stripes have been reversed between the "6" and "3" marks.

Sample usage

To reproduce the simulated images shown above, first grab an instance of the "pinch.txt" test chart geometry here. Then you can render the scenes using the following command:

mtf_generate_rectangle.exe --target-poly pinch.txt -p wavefront-box --aperture 5.6 --w040 2.0 -n 0.0 -o pinch_example.png

This should produce an 8-bit sRGB PNG image called pinch_example.png with significant spherical aberration (the --w040 parameter). You can use the --w020 parameter to add defocus to the simulation. Note that both these aberration coefficient arguments only take effect if you use either the wavefront-box or wavefront PSF models (as argument to the -p option); in general I recommend using the wavefront-box PSF, unless you specifically want to exclude the sensor photosite aperture component for some reason.
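
For example, a defocus-only simulation, roughly in the spirit of the green curve in Figure 6, should look something like this (the output filename is an arbitrary choice):

mtf_generate_rectangle.exe --target-poly pinch.txt -p wavefront-box --aperture 5.6 --w020 1.0 -n 0.0 -o pinch_defocus.png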

These new PSF models are available in MTF Mapper versions 0.7.4 and up.

Caveats

I am satisfied that the rendering algorithm implemented in mtf_generate_rectangle produces the correct PSF, and eventually MTF, for simulated systems with defocus and/or spherical aberration. What I have not yet confirmed to my own satisfaction is that the magnitudes of the aberrations W020 and W040, currently expressed as multiples of the wavelength (λ), do indeed effect aberrations with physically correct magnitudes. In other words, for now, treat them as unitless parameters that control the strength of the aberrations.

I hope to verify that the magnitude is physically correct at some point in the future.

If you simulate highly aberrated lenses, such as f/5.6 with W020 = 2, and measure a slanted-edge in the resulting image, you may notice that MTF Mapper does not reproduce the sharp "bounces" in the SFR curve quite as nicely as shown in Figure 6. You can add the "--nosmoothing" option to the Arguments field in the Preferences dialog to make those "bounces" more crisp, but this is only recommended if your image contains very low levels of image noise.

Another somewhat unexpected property of the W020 and W040 aberration coefficients is that the magnitude of their effect interacts strongly with the simulated f-number of the lens. In other words, simulating W020 = 0.5 at f/16 looks a lot more out-of-focus than W020 = 0.5 at f/2.8. This is because the W020 and W040 coefficients are specified as a displacement (along the axis of the lens) at the edge of the exit pupil of the lens, meaning that their angular impact depends strongly on the diameter of the exit pupil. Following Jack's article, the diameter of the defocus disc at the sensor plane scales as 8N λ W020 (translated to my convention for W020 and W040). Thus, if we want the same defocus disc diameter at f/2.8 that we saw at f/16 with W020 = 0.5, then we should choose W020 = 16/2.8 * 0.5 = 2.857. If I compare simulated images at f/2.8 with W020 = 2.857 to those at f/16 with W020 = 0.5, then it at least looks as if both images have a similar amount of blur, but obviously the f/16 image will lack a lot of the higher-frequency details outright.

Hold on a minute ...

The astute reader might have noticed something odd about the equation given for PSF(r) in the introduction. If we set W020 = 0, W040 = 0, N = 1 and λ = 1, then γ(ρ) is real-valued and equal to 1.0. This reduces the integral to

PSF(r) = | ∫₀¹ J₀(πrρ) ρ dρ |²
But without any defocus or spherical aberration, we should obtain the Airy pattern PSF that we have used in the past, i.e., we should get

PSF(r) = | 2 J₁(πr) / (πr) |²
It helps to call in some reinforcements at this stage. The trick is to invoke the Hankel transform. As shown by Piessens [3], the Hankel transform (of order zero) of a function f(r) is

F(s) = ∫₀^∞ f(r) J₀(sr) r dr
If we choose the function f(r) to be 1.0 when |r| < a, and zero otherwise, then Piessens [3, Example 9.2] shows that

F(s) = a J₁(as) / s
If we make the substitutions a = 1.0 and s = πr, then we can see that our PSF model that includes defocus and spherical aberration readily reduces (up to a constant scale factor in this case; I could have been more careful) to the plain old Airy pattern PSF if we force the aberrations to be zero.
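
If you are suspicious of this chain of identities, a few lines of standalone code (my sketch, again using C++17's std::cyl_bessel_j) will confirm it numerically:

    #include <cmath>
    #include <cstdio>

    int main() {
        constexpr double pi = 3.141592653589793;
        // check that integral_0^1 J0(pi*r*rho) rho drho = J1(pi*r)/(pi*r)
        for (double r : {0.5, 1.5, 7.0, 20.0}) {
            int n = 200000;      // brute force is fine for a sanity check
            double h = 1.0 / n, sum = 0.0;
            for (int i = 0; i < n; i++) {
                double rho = (i + 0.5) * h;
                sum += std::cyl_bessel_j(0.0, pi * r * rho) * rho * h;
            }
            double closed = std::cyl_bessel_j(1.0, pi * r) / (pi * r);
            std::printf("r = %5.1f  integral = %11.8f  closed form = %11.8f\n",
                        r, sum, closed);
        }
        return 0;
    }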


References

  1. S.K. Lucas and H.A. Stone, "Evaluating infinite integrals involving Bessel functions of arbitrary order", Journal of Computational and Applied Mathematics, 64:217-231, 1995.
  2. I. Robinson, "A comparison of numerical integration programs", Journal of Computational and Applied Mathematics, 5(3):207-223, 1979.
  3. R. Piessens, "The Hankel transform", in The Transforms and Applications Handbook, CRC Press, 2000.

Tuesday, 3 July 2018

Adding OpenGL to the GUI to make it zippy

The MTF Mapper GUI has always been a bit of a red-headed stepchild compared to the command-line version. In fact, the GUI just calls the command-line version to do the actual work. I have tried to keep the GUI functional, but minimal, mostly because I find working on the actual slanted-edge algorithm a lot more interesting than working on the GUI. At least the GUI is written in Qt, rather than, say, MATLAB ...

Fortunately I have found a way to make the GUI-related coding work a bit more interesting. I decided to upgrade the main image viewer of the GUI to an OpenGL implementation. The main motivation is that with an OpenGL rendering engine you essentially get high-quality image scaling for free, meaning you can effortlessly zoom into and out of an image without any noticeable lag. If you are familiar with the older MTF Mapper GUI (prior to version 0.7.0), then you may have experienced the unbearable delays when you try to adjust the image magnification with the mouse wheel.

Integrating OpenGL into Qt was a lot simpler than I expected; maybe this is because I only had to deal with the more modern QOpenGLWidget implementation. The somewhat more unexpected learning curve hit me when I tried to draw something in OpenGL. I think the last time I wrote any OpenGL code must have been in 2002, i.e., a while before OpenGL 2.0 was released. This meant that my knowledge of OpenGL was firmly stuck in the fixed-function pipeline era, so I had to start learning from scratch how to use the modern shader-based pipeline. Fortunately Joey de Vries created an excellent set of tutorials to help me get up to speed.

Anyhow, the idea is to cut the image (which may be larger than 10000 by 10000 pixels) into manageable tiles, and to map these tiles as textures onto quads. The textures are loaded with mipmapping enabled, so the textures on the tiles always appear smoothly rescaled regardless of the final display size of each tile. I chose to stick to power-of-two dimensions for the textures, even though modern GPUs should be able to handle non-power-of-two (NPOT) textures, mostly because I read some unconfirmed reports that certain integrated GPUs may experience slowdowns or other unexpected behaviour. With a bit of luck all these choices will maximise compatibility.
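
As an illustration, the per-tile texture setup looks roughly like this with Qt's QOpenGLTexture (a sketch, not the actual viewer code; a current OpenGL context is assumed):

    #include <QImage>
    #include <QOpenGLTexture>

    QOpenGLTexture* make_tile_texture(const QImage& tile) { // power-of-two sized
        auto* tex = new QOpenGLTexture(QOpenGLTexture::Target2D);
        // setData() generates the mipmap chain by default
        tex->setData(tile.convertToFormat(QImage::Format_RGBA8888).mirrored());
        tex->setMinificationFilter(QOpenGLTexture::LinearMipMapLinear);
        tex->setMagnificationFilter(QOpenGLTexture::Linear);
        tex->setWrapMode(QOpenGLTexture::ClampToEdge);
        return tex; // bind with tex->bind() before drawing the tile's quad
    }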

The hardest part of the OpenGL viewer was actually to implement the scrolling / zooming behaviour with the help of Qt's QAbstractScrollArea; there are very few examples of how to use this class, at least according to Google. I also discovered that if you enable zooming in/out with a mouse wheel, then it is critical that the image appears to zoom around the point in the image directly under the mouse cursor --- any other zooming strategy feels disorienting. And of course I learnt that doing this while getting both your QOpenGLWidget and your QAbstractScrollArea objects to agree on the state (i.e., where you are in the image) is not trivial.
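
The zoom-around-the-cursor bookkeeping itself boils down to very little once you strip away the Qt plumbing. A simplified sketch with assumed names: the invariant is that the scene point under the cursor must map to the same viewport position before and after the zoom.

    struct View {
        double scale = 1.0; // current magnification
        double off_x = 0.0; // scene coordinates of the viewport's top-left corner
        double off_y = 0.0;

        void zoom_at(double vx, double vy, double new_scale) {
            // scene point currently under the cursor at viewport (vx, vy)
            double sx = off_x + vx / scale;
            double sy = off_y + vy / scale;
            // choose new offsets so that (sx, sy) stays under the cursor
            off_x = sx - vx / new_scale;
            off_y = sy - vy / new_scale;
            scale = new_scale;
        }
    };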

I will update the MTF Mapper help/user guide accordingly, but for the record, here is a rundown of the image viewer controls:

  • You can scroll/pan the image by holding down the left mouse button while moving the mouse.
  • You can scroll/pan the image by using the mouse wheel; the default is vertical scrolling, but you can select horizontal scrolling by holding down the shift key while scrolling the wheel.
  • You can zoom in/out by scrolling the mouse wheel while holding down the control key. Zooming in is limited to a maximum magnification of 2x. Images that are smaller than the current viewport (window) size cannot be zoomed, nor can you zoom out past the point where one edge of your image matches the viewport width/height.
  • You can also zoom by holding down the right mouse button while moving the mouse up/down.
  • You can zoom in/out using the "+" and "-" keys on the keyboard after you have clicked (with any mouse button) at the location in the image around which you would like to zoom.
  • If you are viewing an "Annotated image", you can display the SFR curve of an edge by clicking on the annotation text, as described in Section 5.3 of the MTF Mapper help/user guide. The new feature is that a coloured dot will be drawn to indicate which edge you have selected, as illustrated below. (Yes, I know this feature should have been there from the start, but it would have been an enormous pain to implement without the new OpenGL viewer.)

I also discovered that actually loading a large annotated image can take a while, around 0.3 seconds on my test machine for a D7000-sized image, and around 1.8 seconds for an IQ180 image. If you are examining multiple images in a session, switching between the images can still feel painfully slow, especially if you repeatedly go back and forth between them. I decided to add a cache to speed this up; the default cache size is 1 GB, but you can change this (in the preferences) if your machine is memory constrained, or if you regularly open multiple 100 MP images (and have RAM to burn).

Since the new OpenGL-based viewer introduced a whole lot of brand-new code, I expect that there may be a few issues, so please let me know if you encounter any!
(You can download version 0.7.0 from SourceForge)

Wednesday, 16 May 2018

Automatic processing of Imatest charts

It turns out that people sometimes want to process an image of an Imatest SFRplus type chart in MTF Mapper. Or at the very least, I have received requests about this.

When I first released MTF Mapper, I took it as a given that people would just print out the MTF Mapper charts if they wanted to use MTF Mapper. In the meantime I have gotten wiser, and I now know how hard (or expensive?) it is to print high-quality test charts. So it actually makes perfect sense to use a good quality chart that you already own (e.g., an SFRplus chart) with MTF Mapper.

Unfortunately the design of the SFRplus-style charts includes a black bar that runs through the top of the top row of target squares, like in this example:
An example of an SFRplus chart. I blatantly copied this example from Imatest's website (please don't sue me).

This black bar causes MTF Mapper to see the entire top row of squares plus the black bar as a single object, and since this compound object does not resemble a square, it ignores it. The same thing happens with the black bar at the bottom of the chart. As a result, MTF Mapper only processes the interior squares, like so:
Ignore the actual MTF50 values, but do note that only the interior square targets were detected automatically

Other than the obvious spurious detections on the non-target squares (which you can cover up with post-its or such if necessary) the output is usable, but you lose the top and bottom rows of squares, which is not ideal.

A simple solution is to just crop your image to exclude the bars, and then to process the cropped image with MTF Mapper's "-b" option. This works, but it is rather clunky. So I added a convenience feature that will do this automatically.

You can choose the new File/Open Imatest image(s) option in the GUI, or you can add the --imatest-chart option if you use the command-line interface. Because the gray target squares of the SFRplus chart cover more of the white background than a typical MTF Mapper chart, you probably have to adjust the "Threshold" value (-t on the CLI, or under Settings/Preferences/Advanced in the GUI) a little to detect all the target squares. The default Threshold is 0.55, and bumping it down to 0.4 should work a little better. For our test image above, we then get this:
Much better; all target squares detected
Note that the top edges of the top row of squares are tagged with "N/A" rather than MTF50 values; this is just MTF Mapper's way of indicating that these edges do not represent valid measurements. If you are parsing the "edge_mtf_values.txt" file produced by the "-q" output option, these edges will have an MTF50 value of 1.0 (which is an impossible / invalid MTF50 value). Or you could identify them by their pixel coordinates, which is probably the better way.
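
On the command line, the invocation would look something like the line below; I am assuming the usual <image> <output directory> argument layout here, with -a requesting the annotated output:

mtf_mapper.exe sfrplus_chart.png . -a --imatest-chart -t 0.4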

This feature is available from MTF Mapper 0.6.21 onward.

Monday, 14 May 2018

Cropped single edge images now handled more elegantly in the GUI

A while back I wrote a post that described the "--single-roi" option of MTF Mapper. The "--single-roi" mode specifically allows you to feed MTF Mapper with images that have already been cropped to contain only a single edge, i.e., they look like this:

From MTF Mapper 0.6.20 onwards, you can now use the menu option File/Open single edge image(s) to load images that look like the one pictured above. Note that when you open an image using this new menu option, MTF Mapper will automatically enable the outputs that make sense (Annotated image output, with the ability to click on edges to view the SFR curve), while silently ignoring all the output types that do not make sense if you only have one edge.

Tuesday, 20 February 2018

Journal paper on MTF Mapper's deferred slanted edge analysis algorithm now available!

A new paper on MTF Mapper's deferred slanted edge analysis algorithm has been published in the Optical Society of America's JOSA A journal. The paper describes one of the methods that MTF Mapper can use to compensate for radial lens distortion. The paper also covers the technique that MTF Mapper uses to process Bayer CFA subsets, e.g., when processing just the green Bayer channel of a raw mosaiced image.

The full reference:
F. van den Bergh, Deferred slanted-edge analysis: a unified approach to spatial frequency response measurement on distorted images and color filter array subsets, Journal of the Optical Society of America A, Vol. 35, Issue 3, pp. 442-451 (2018).

You can see the on-line abstract here. The full article is paywalled, but if you contact me by email I can send you an alternative document that covers the same topic. I will probably post some articles on this topic here on the blog sometime too.

Saturday, 10 February 2018

Improved user documentation

If you are a long-time MTF Mapper user, then you are probably just as surprised by this unexpected turn of events as I am, but I have updated the docs. Actually, it gets better: I completely rewrote the user documentation to produce the new and improved (yes, it is new, and yes, it is an improvement on the old docs) MTF Mapper user guide.

You can grab a PDF copy of the user guide here. I have discovered a way to produce a decent-looking HTML version of the PDF documentation; by selecting Help in the GUI (MTF Mapper version 0.6.14 and later) you should see a copy of the user guide open in your system web browser. This is probably a better way to ensure that you are reading the latest version of the user guide.

I have tried to make the user guide more task-focused so that new users will be able to have a better idea of what they can do with MTF Mapper, as well as how they can get started. However, it took me about two weeks to write the new user guide, and it weighs in at 50+ pages, so it is probably still a little intimidating at first glance. If you are a new user, and you have any suggestions on ways in which I can improve the user guide, please let me know.

Unfortunately, even 50+ pages are not enough to really cover all the functionality, and the sometimes non-intuitive behaviour, of MTF Mapper, so there is still some work to be done.

Thursday, 8 February 2018

Device profile support

From version 0.6.16 onwards, MTF Mapper now supports device profiles embedded in input image files. If you normally feed MTF Mapper with raw camera files (via the GUI), then this new feature will not affect you in any way.

If you have been feeding MTF Mapper with JPEG files, or perhaps TIFF files produced by your preferred raw converter, then this new feature could have a meaningful impact on your results. You can jump ahead to the section on backwards compatibility if you want the low-down.

To explain what device profiles are, and why they affect MTF Mapper, I first have to explain what linear light is.

Linear light

At the sensor level we can assume that both CCD and CMOS sensors have a linear response, meaning that a linear increase in the amount of light falling on the sensor will produce a linear increase in the digital numbers (DNs) we read from the sensor. This is true up to the point where the sensor starts to saturate, where the response is likely to become noticeably non-linear.

The slanted-edge method at the heart of MTF Mapper expects that the image intensity values (DNs) are linear. If your intensity values are not linear, then the resulting SFR you produce using the slanted-edge method is incorrect.

Gamma and Tone Reproduction Curves

Rather than exploring the history of gamma correction in great detail, I'll try to summarize: back in the days of Cathode Ray Tube (CRT) displays, it was found that a linear change in the signal (voltage) sent to the tube did not produce a linear change in the display brightness. If you were to produce a graph of the input signal vs brightness, you would obtain something that looks like this:
Figure 1: A non-linear display response
If you fast-forward to the digital era, you can see how this non-linearity in the brightness response can become rather tiresome if you want to display, say, a grayscale image. If you took a linear light image from a CCD sensor, which we assume produced linear values in the range 0 to 255, and put that in the display adaptor frame buffer, then your image would appear to be too dark. The solution was to pre-distort the image with the inverse of the non-linear response of the display, i.e., using this function:
Figure 2: The inverse of Figure 1
If you take a linear signal and apply the inverse curve of Figure 2, then take the result and apply the non-linear display response curve of Figure 1, you end up with a linear signal again. The process of pre-distorting the digital image intensity values to compensate for the non-linear display response is called gamma correction.
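
As a minimal numerical illustration of that round trip, using the simple power-law model (a standalone sketch):

    #include <cmath>
    #include <cstdio>

    int main() {
        const double gamma = 2.2;
        for (double linear = 0.0; linear <= 1.0; linear += 0.25) {
            double encoded   = std::pow(linear, 1.0 / gamma); // Figure 2 curve
            double displayed = std::pow(encoded, gamma);      // Figure 1 curve
            std::printf("linear %.2f -> encoded %.3f -> displayed %.2f\n",
                        linear, encoded, displayed);
        }
        return 0;
    }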

This scheme worked well enough when you knew what the approximate non-linear display response was: on PCs the curve could be approximated as f(x) = x^2.2. Things became a lot more complicated when you wanted to display an image on different platforms; for example, early Macintosh systems were characterized by a gamma of 1.8, i.e., f(x) = x^1.8.

The solution to the platform interoperability problem was to attach some metadata to your image to clearly state whether the digital numbers referred to linear light intensity values, or whether they were pre-distorted non-linear values chosen to produce a perceived linear brightness image on a display. So how do you specify what your actual image represents? Do you specify the properties of the non-linear pre-distortion you applied to produce the values found in the image file, or do you instead specify the properties of the display device for which your image has been corrected? It turns out that both strategies were followed, with the PNG specification choosing the former, and most of the other formats (including ICC profiles) choosing the latter.

To cut to the chase: one important component of a device profile is its Tone Reproduction Curve (TRC); Figure 1 can be considered to be the TRC of our hypothetical CRT display device. Depending on the metadata format, you can either provide a single gamma parameter (γ) to describe the TRC as f(x) = x^γ, or you can provide a look-up table to describe the TRC.

Colour spaces

The other component of a device profile that potentially has an impact on how MTF Mapper operates is the definition of the colour space. The colour space matters because MTF Mapper transforms an RGB input image to a luminance image automatically. The main reason for this is that the slanted-edge method does not naturally apply to colour images; you have to choose to either apply it to each of the R, G and B channels separately, or you have to synthesize a single grayscale image from the colour channels. For MTF Mapper I chose the luminance (Y) component of the image, as represented in the CIE XYZ colour space, because this luminance component should correlate well with how our human vision system perceives detail.

So what is a colour space? To keep the explanation simple(r), I will just consider tristimulus colour spaces; in practice, those that describe a colour as a combination of three primary colours such as RGB (Red, Green, and Blue). Now consider the digital numbers associated with a given pixel in a linear 8-bit RGB colour space, e.g., (0, 255, 0) would represent a green pixel. Sounds straightforward, right? The catch is that we have not defined what we mean by "green". Two common RGB colour spaces that we encounter are sRGB and Adobe RGB; they have slightly different TRCs, but here we focus on their colour spaces. The difference between sRGB and Adobe RGB is that they (literally) have different definitions of the colour "green": our "green" pixel with linear RGB values (0, 255, 0) in the sRGB colour space would have the values (73, 255, 10) in the linear Adobe RGB colour space, because the Adobe RGB colour space uses a different green primary compared to sRGB. Note that the actual colour we wanted has not changed, but the internal representation has changed.

The nub of the matter is that our image file may contain the value (0, 255, 0), but we only really know what colour that refers to once we know in which colour space we are working. I hope you can see the parallel to the TRC discussion above: the image file contains some numbers, but we really do have to know how to interpret these numbers if we want consistent results.

ICC profiles

A valid ICC profile always contains both the TRC information and the colour space information that MTF Mapper requires to produce a linear luminance grayscale image. In fact, the ICC profile contains the matrix that tells us how to transform linear RGB values into CIE XYZ values adapted for D50 illumination (which is very convenient if you do not want to get into chromatic adaptation).

So if you provide MTF Mapper with a TIFF, PNG* or JPEG file with an embedded ICC profile, you can be sure that the resulting synthesized grayscale image will be essentially identical regardless of which colour profile you saved your image in.
*MTF Mapper 0.6.17 and later.

JPEG/Exif files

If your JPEG file has no ICC profile, and no Exif metadata, then MTF Mapper will just assume that your image is encoded in the sRGB profile. Most cameras appear to at least add Exif metadata, but that only helps a little bit, since the Exif standard only really has a definitive way of indicating that the image is encoded in an sRGB profile. If your JPEG file is encoded in the Adobe RGB space (most DSLRs allow you to configure the JPEG output this way), then MTF Mapper will try to infer this from the clues in the Exif data. 

MTF Mapper will use the appropriate TRC (either sRGB, or Adobe RGB), and the appropriate D50-adapted RGB-to-XYZ matrix will be selected for the luminance conversion.
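
In code form, the sRGB path amounts to the following sketch: the TRC below is the standard sRGB curve, and the Y weights are the D50-adapted sRGB weights quoted in the next section (this is illustrative, not MTF Mapper's actual implementation):

    #include <cmath>

    double srgb_to_linear(double v) { // v in [0, 1], standard sRGB TRC
        return v <= 0.04045 ? v / 12.92 : std::pow((v + 0.055) / 1.055, 2.4);
    }

    double luminance(double r8, double g8, double b8) { // 8-bit sRGB inputs
        double r = srgb_to_linear(r8 / 255.0);
        double g = srgb_to_linear(g8 / 255.0);
        double b = srgb_to_linear(b8 / 255.0);
        return 0.223 * r + 0.717 * g + 0.061 * b; // Y row of RGB-to-XYZ (D50)
    }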

Backwards compatibility (or the lack thereof)

Unfortunately the addition of device profile support has encouraged me to change the way in which JPEG files are converted to grayscale luminance images. In MTF Mapper version 0.6.14 and earlier, the JPEG RGB-to-YCrCb conversion was used to obtain a luminance image; from version 0.6.16 onwards the device profile conversion is used. In practice, this means that older versions would use
Y = 0.299R + 0.587G + 0.114B
regardless of whether the JPEG file was encoded in sRGB or Adobe RGB, which is clearly incorrect in the case of Adobe RGB files (regardless of the TRC differences). A typical weighting for an sRGB profile in version 0.6.16 and later would be
Y = 0.223R + 0.717G + 0.061B.

The practical implication is that results derived from JPEG files will be different between versions <= 0.6.14 and versions 0.6.16 and later. Figure 3 illustrates this difference on a sample sRGB JPEG file. To make matters worse, the difference will be exacerbated by lenses with significant chromatic aberration, because the relative weights of the RGB channels have changed.
Figure 3: SFR difference on the same edge of a JPEG image (Green is v0.6.14, Blue is v0.6.16)
In Figure 3 the difference is small, but noticeable. For example, when rounded to two decimal places this edge will display as an MTF50 of 0.19 c/p on version 0.6.16, but 0.20 c/p on version 0.6.14. I expect that there will be examples out there that will exhibit larger differences than what we see here, but I do not expect to see completely different SFR curves.

More importantly, MTF Mapper's behaviour regarding TIFF files has changed. In version 0.6.14 and earlier, all 8-bit input images were treated as if they were encoded in the sRGB profile; this probably produced the desired behaviour most of the time. If, however, a 16-bit TIFF file is used as input in version 0.6.14 and earlier, then MTF Mapper assumed the file contained linearly coded intensity values (i.e., gamma = 1.0). This behaviour worked fine if you used "dcraw -4 ..." to produce the TIFF (or .ppm) file, but would not work on 16-bit TIFF files produced by most raw converters or image editors. From version 0.6.16 onwards all TIFF files with embedded ICC profiles will work correctly, whether they are encoded in linear, sRGB, Adobe RGB or ProPhoto profiles. Figure 4 illustrates the difference on a 16-bit sRGB encoded TIFF file.
Figure 4: SFR difference on the same edge of a 16-bit sRGB TIFF image (Green is v0.6.14, Blue is v0.6.16)

In Figure 4 we see much larger differences between the SFR curves produced by versions 0.6.14 and 0.6.16; this larger difference is because 0.6.14 incorrectly interpreted the sRGB encoded (roughly gamma = 2.2) values as if they were linear.

One last important rule: MTF Mapper version 0.6.16 still interprets all other 16-bit input files without ICC profiles (PNG, PPM) as if they have a linear (gamma = 1.0) encoding.

Summary recommendations

Overall, you should be able to obtain more consistent results now that MTF Mapper supports embedded device profiles. For best results, choose to embed an ICC profile if your raw converter or image editor supports it. Given a choice, I still recommend using raw camera files directly with the GUI, or doing the raw conversion using "dcraw -4 -T ..." if you use the command-line interface.