Tuesday, 3 November 2015

PffffFFTttt...

There is no doubt that FFTW is one of the fastest FFT implementations available. It can be a pain to include in a Microsoft Visual Studio project, though. Maybe I am "using it wrong"...

One solution to this problem is to include my own FFT implementation in MTF Mapper, thereby avoiding the FFTW dependency entirely. Although it is generally frowned upon to use a homebrew FFT implementation in lieu of an existing, proven library, I decided it was time to ditch FFTW.

One of the main advantages of using a homebrew FFT implementation is that it avoids the GPL license of FFTW. Not that I have any fundamental objection to the GPL, but the main sources of MTF Mapper are available under a BSD license, which is a less strict license than the GPL. In particular, the BSD license allows the code to be incorporated into closed-source commercial products. Before anyone asks, no, MTF Mapper is not going closed source or anything like that. All things being equal, the BSD license is just less restrictive, and avoiding FFTW brings MTF Mapper closer to being a pure BSD (or compatible) license project.

FFT Implementation

After playing around with a few alternative options, including dusting off my first c++ FFT implementation from way back in my first year at university, I settled on Sorensen's radix-2 real-valued FFT (Sorensen, H.V., et al., Real-Valued Fast Fourier Transform Algorithms, IEEE Transactions on Acoustics, Speech, and Signal Processing, 35(6), 1987). This algorithm strikes a decent balance between complexity and theoretical efficiency, but I had to work fairly hard at the code to produce a reasonably efficient implementation.

I tried to implement it in fairly straightforward c++, taking care to use pointer walks instead of array indexing, and using look-up tables for both the bit-reversal process and the sine/cosine functions. These changes produced an implementation that was at least as fast as my similarly optimized complex FFT augmented with a two-for-the-price-of-one step for real-valued inputs.
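
To give a flavour of what that looks like, here is a simplified sketch of the pre-computed tables (illustrative only; the names and layout are mine for this post, not the actual MTF Mapper code):
#include <vector>
#include <cmath>
#include <cstddef>

// Pre-computed tables for an N-point radix-2 FFT; N must be a power of two.
struct fft_tables {
    std::vector<size_t> bitrev;  // bit-reversal permutation of 0..N-1
    std::vector<double> cost;    // cos(2*pi*k/N), k = 0..N/2-1
    std::vector<double> sint;    // sin(2*pi*k/N), k = 0..N/2-1

    explicit fft_tables(size_t N) : bitrev(N), cost(N/2), sint(N/2) {
        const double pi = 3.14159265358979323846;
        size_t j = 0;
        for (size_t i = 0; i < N; i++) {
            bitrev[i] = j;
            size_t bit = N >> 1;  // increment j in bit-reversed order
            for (; j & bit; bit >>= 1) j ^= bit;
            j |= bit;
        }
        for (size_t k = 0; k < N/2; k++) {
            cost[k] = std::cos(2.0*pi*k/N);
            sint[k] = std::sin(2.0*pi*k/N);
        }
    }
};
Since MTF Mapper only ever uses one FFT length (512 samples), tables like these only have to be built once.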

One thing I did notice is that the FFT in its "natural" form does not lend itself to an efficient streaming implementation. For example, the first pass of the radix-2 algorithm looks like this:
for (; xp <= xp_sentinel; xp += 2) {   // first radix-2 pass over adjacent pairs
    double xt = *xp;                   // save x[i] before it is overwritten
    *(xp)   = xt + *(xp+1);            // x[i]   = x[i] + x[i+1]
    *(xp+1) = xt - *(xp+1);            // x[i+1] = x[i] - x[i+1]
}
Note that the value of x[i] (here *xp) is overwritten in the 3rd line of the code, while the original value of x[i] (copied into xt) is still required in the 4th line. This write-after-read dependency causes problems for out-of-order execution. Maybe the compiler is smart enough to unroll the loop and intersperse the reads and writes to keep all the execution units on the CPU busy, but the stride of the loop and the packing of the values are not ideal for SSE2/AVX instructions either. I suppose this can be addressed with better code, but before I spend time on that I first have to determine how significant the raw performance of the FFT is in the context of MTF Mapper.
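
As an aside, the most obvious way to untangle the reads and writes is to load both inputs into locals before storing either output; a quick sketch (untested, so take it as a possibility rather than a fix):
for (; xp <= xp_sentinel; xp += 2) {
    double a = *(xp);      // read both inputs before ...
    double b = *(xp+1);
    *(xp)   = a + b;       // ... writing both outputs
    *(xp+1) = a - b;
}
This gives the compiler a little more freedom to schedule the loads and stores, but whether it helps in practice is exactly the kind of thing that would have to be measured.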

Real world performance in MTF Mapper

So how much time does MTF Mapper spend calculating FFTs? Well, one FFT for every edge. A high-density grid-style test chart has roughly 1452 edges. According to a "callgrind" trace produced using valgrind, MTF Mapper v0.4.21 spends 0.09% of its instruction count inside FFTW's real-valued FFT algorithm.
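
For the record, producing such a trace is as simple as running MTF Mapper under valgrind (the arguments here are placeholders for an actual chart image and output directory):
valgrind --tool=callgrind ./mtf_mapper chart.png output_dir
and then inspecting the resulting callgrind.out.<pid> file with a tool like kcachegrind.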

Using the homebrew FFT of MTF Mapper v0.4.23, the total number of instruction fetches increases by about 1.34%, but this does not imply a 1.34% increase in runtime. The callgrind trace indicates that 0.31% of v0.4.23's instructions are spent in the new FFT routine.

In relative terms, this implies that the new routine is roughly 3.5 times slower (0.31% versus 0.09% of comparable total instruction counts), but that figure does not account for the additional overhead incurred by FFTW's memory allocation routines: the FFTW transform is not in-place, so a new buffer has to be allocated before every FFT to keep the process thread-safe.
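
To make that overhead concrete, the out-of-place calling pattern looks roughly like this (a simplified sketch under my assumptions, not the literal MTF Mapper code; the plan would be created once with fftw_plan_r2r_1d(N, in, out, FFTW_R2HC, FFTW_ESTIMATE) and re-used):
#include <fftw3.h>
#include <cstring>

// Each call allocates fresh buffers so that multiple threads can
// execute the same plan concurrently via the new-array interface.
void fftw_edge_transform(fftw_plan plan, const double* profile,
                         double* spectrum, int N) {
    double* in  = (double*) fftw_malloc(sizeof(double) * N);
    double* out = (double*) fftw_malloc(sizeof(double) * N);
    std::memcpy(in, profile, sizeof(double) * N);

    fftw_execute_r2r(plan, in, out);  // thread-safe on per-call buffers

    std::memcpy(spectrum, out, sizeof(double) * N);  // half-complex (r2hc) layout
    fftw_free(in);
    fftw_free(out);
}
Two allocations and two copies per transform add up when there are on the order of 1452 edges to process.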

Measuring the actual wall-clock time gives us a result of 22.27 ± 0.14 seconds for 20 runs of MTF Mapper v0.4.21 on my test image, versus 21.63 ± 0.16 seconds for 20 runs of v0.4.23 (each experiment was repeated 4 times to compute the standard deviations). These timings were obtained on a Sandy Bridge laptop CPU with 4 cores presenting 8 hardware threads. The somewhat surprising reversal of the standings (the homebrew FFT build now outperforms the FFTW build) just goes to show that the interaction between hyperthreading, caching, and SSE/AVX unit contention can produce counter-intuitive results.

Bottom line: the homebrew FFT is fast enough (at least on the two hardware/compiler combinations I tested).

Are we done yet?

Well, surely you want to know how fast the homebrew FFT is in relation to FFTW in a fair fight, right?

I set up a simple test using FFTW version 3.3.4, built on Gentoo with gcc-4.9.3, running on a Sandy Bridge laptop CPU (i7-2720QM) with a base clock of 2.2 GHz. This was a single-threaded test, so we should see a maximum clock speed of 3.3 GHz, if we are lucky.
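
The timing harness itself was nothing fancy; something along these lines, with fft_func standing in for either the homebrew transform or the equivalent FFTW call (a sketch, not the exact benchmark code):
#include <chrono>

// Time a fixed number of transforms of the same buffer and return
// the elapsed wall-clock time in seconds.
template <typename F>
double time_transform(F fft_func, double* buffer, int iterations) {
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; i++) {
        fft_func(buffer);  // e.g., a 512- or 1024-sample real-valued FFT
    }
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(stop - start).count();
}
Each measurement below is the elapsed wall-clock time for 2 million such transforms.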

For a 1024-sample real-valued FFT, 2 million iterations took 14.683 seconds using the homebrew code, and only 5.798 seconds using FFTW. That is a ratio of ~2.53.

For a 512-sample real-valued FFT (the size MTF Mapper actually uses), 2 million iterations took 6.635 seconds using the homebrew code, and only 2.743 seconds using FFTW. That is a ratio of ~2.42.

According to general impressions gathered from the Internet, you are doing a good-enough job if you are less than 4x slower than FFTW. I also ran metaFFT's benchmarks, which gave ratios of 2.4x and 2.1x relative to FFTW for sizes 1024 and 512, respectively (those were probably complex transforms, so not a straight comparison).

The MTF Mapper homebrew FFT thus appears to be in the right ballpark, and at least fast enough not to cause embarrassment...