Source fortification is a powerful tool in modern compilers. When enabled, the compiler inspects the code and attempts to automatically replace risky functions with safer, bounds-checked versions. Of course, the compiler can only do that when it can figure out what those bounds should be, which isn't always easy. The developer gets little feedback on the success rate of this process, though. They may know that they enabled source code fortification (-D_FORTIFY_SOURCE), but they get no readout on how many of their memcpy instances were actually replaced with the safer __memcpy_chk function, for example. This matters to the consumer because seeing that a good software build practice was intended does not reveal whether the practice actually improved the safety of the resulting application. That made us really curious to dig into the data on source fortification and its efficacy.
We first looked at the fortification statistics on Linux. The following numbers cover only the 25% of Linux binaries (2,631 files) where fortification was enabled. Linux compilers, at the time we performed our initial analysis, had 72 functions that they inspect for potential fortification, and the binaries we examined contained almost 2 million instances of those 72 functions. Of those 2 million instances that could potentially be made safer, 91.7% were fortified! Well done, Linux. Of course, the 165,000+ function instances that remain in their less secure form warrant some concern, and there's further good and bad news when you look at all of this more closely. We viewed this data two different ways: first broken out by binary, then by function.
This chart shows the percent fortification for all binary files on our default base install of Ubuntu Linux. 31% of files were 95-100% fortified. Many of these had large function counts, with some as high as 25k fortified function instances. The rest of the files were evenly distributed from 0 to 95%. While these binaries generally had lower function counts, there were exceptions to this rule. Most notable was /lib/systemd/systemd, which had over 42k unfortified pread instances.
When we view the fortification data by function, as shown in the graphic below, we see that most functions are at one extreme or the other. 15 were 95-100% fortified, while 39 were 0-5% fortified. Of those 39 functions, 28 of them were *never* fortified. Many of these totally unfortified functions are relatively uncommon, with an average of 302 total instances across the 2,631 Ubuntu files we examined. On the other hand, the highly fortified functions had an average of 104k total instances. The most common function overall was sprintf, which had 1.3 million instances and was over 99.9% fortified.
So, function fortification is a fairly mature feature on Linux, although imperfect. It does well on many commonly used risky functions, fortifying 91.7% of eligible function instances overall. Still, over two thirds of the binaries that are intended to be fortified end up less than 95% fortified, and the developer gets no feedback on the success rate for their particular case.
On OSX, however, we see that this security feature is much less mature. Our OSX systems had more software installed, giving us almost 7 million observed function instances. Only 21.4% of these were fortified, which is fairly dismal, especially when compared to Linux's 91.7%.
In fact, while Linux had many binaries with high function counts that were almost entirely fortified, no OSX binaries with high function counts achieved high fortification scores. The highest function count in a 100% fortified file was 121.
The picture is similar when OSX fortification is viewed by function. 52 functions were never fortified. The most successful function was strcat, with 93% fortification, followed by strncat and sprintf (83% and 82%, respectively). Interestingly, we found that a couple of functions had higher fortification rates on OSX than they did on Linux.
In both Linux and OSX, we observed that string functions like these had higher success rates than functions dealing with (dynamic) memory pointers.
So, OSX has some catching up to do compared to Linux. This was a relatively unexpected result, but that's why this sort of data analysis is important.