Friday, November 10, 2017

Aiming for C++ sorting speed with a plain C API

A well known performance measurement result is that the C++ standard library's std::sort function is a lot faster than the C library's equivalent, qsort. When people first hear of this, most claim very strongly that it is not possible: C is just as fast as (if not faster than) C++, it must be a measurement error, the sorting algorithms used must be different, and so on. Then they run the experiment themselves and find that C++ is indeed faster.

The reason for this has nothing to do with how the sorting function is implemented but everything to do with the API. The C API for sorting, as described in the man pages looks like this:

void qsort(void *base, size_t nmemb, size_t size,
           int (*compar)(const void *, const void *));

The interesting point here is the last argument, which is a function pointer to a comparison function. Because of this, the sort implementation cannot inline the comparison but must instead call the comparator function through the pointer every single time, which translates to an indirect jump.

In C++ the sort function is a template, so the comparison function can be inlined into the implementation. This turns out to make a massive performance difference (details below). The only way to emulate this in plain C would be to ship the sort function as a preprocessor monster thingy that people could then instantiate in their own code. This leads to awful and hard to maintain code, so it is usually not done. It would be nice to provide similarly fast sorting performance through a stable plain C API, but due to the way shared libraries work, it's just not possible.

So let's do it by cheating.

If we know that a compiler is available during program execution, we can implement a hybrid solution that achieves this. Basically we emulate how JIT compilers work. All we need is something like this:

sorter opt_func = build_sorter("int",
    "(const int a, const int b) { return a < b; }");
(*opt_func)(num_array, NUM_INTS);

Here build_sorter is a function that takes as arguments the type of the item being sorted, and a sorting function as source code. Then the function calls into the external compiler to create an optimised sorting function and returns that via a function pointer that can be called to do the actual sort.

Full source code is available in this Github repo. Performance measurements for 100 million integers are as follows.

C++ is almost twice as fast as plain C. On-the-fly code generation is only 0.3 seconds slower than C++, which is the time it takes to compile the optimised sorting function. The codegen version uses the C++ sorting function internally, so this result is expected.

Thus we find that it is possible to provide C++ level performance with a plain C API, but it requires the ability to generate code at runtime.

Extra bonus

During testing it was discovered that, for whatever reason, the C++ compiler (I used GCC) is not able to inline free functions as well as lambdas. That is, this call:

std::sort(begin, end, sorting_function);

generates slower code than this one:

std::sort(begin, end, [](sorting_lambda_here));

even though the contents of both comparison functions are exactly the same (basically return a < b). This is why the comparison function source passed to build_sorter above is missing the function preamble: it gets pasted into a lambda.

Sunday, September 3, 2017

Comparing C and C++ usage and performance with a real world project

The relative performance of C and C++ is the stuff of folk legends and Very Strong Opinions. There are microbenchmarks that can prove differences in performance in any direction one could wish for but, as always, they are not conclusive in any way. For an actual comparison you'd need to take a complete, non-trivial program in one language, translate it to the other without making any functional changes and then compare the results. The problem is that this sort of conversion does not exist.

So I made one myself.

I took the well known pkg-config program, which is written in plain C using GLib, and converted it to C++. The original code is available at Freedesktop and the converted version is on Github (branches master and cpp). The C++ version does not have any dependencies outside of the C++ standard library, whereas the C version depends on GLib and, by extension, pcre (which is an internal dependency of GLib; pkg-config itself does not use regular expressions).

All tests were run on Ubuntu 17.04. The C++ version was tested both with GCC/libstdc++ and Clang/libc++. Measurements were done with the gtk+ test in pkg-config's test suite.

The results in a single array

                          C++ libstdc++   C++ libc++       C

Optimized exe size                180kB        153kB    47kB
minsize exe size                  100kB        141kB    43kB
3rd party dep size                    0            0   1.5MB
compile time                       3.9s         3.3s    0.1s
run time                          0.01s       0.005s  0.004s
lines of code                      3385         3385    3388
memory allocations                 9592         8571    5549
Explicit deallocation calls           0            0      79
memory leaks                          0            0   >1000
peak memory consumption           136kB         53kB    56kB

Binary sizes and builds

The first thing to note is that the C version is a lot smaller than the corresponding C++ executables. However, if you factor in the sizes of the external third party dependency binaries, i.e. the shared libraries of GLib and pcre, the C version is an order of magnitude bigger. One could argue endlessly about the correct way to calculate these sizes (because a system-provided library is shared among many programs), but we're not going to do that here.

C++ is known for its slow build times and that is apparent here as well. Again it should be noted that compiling the C dependencies takes several minutes, so on a platform where dependencies are built from source the C version is a lot slower to build.

The runtime is fast for all three versions, which is expected because pkg-config is a fairly simple program. Libstdc++ is slower than the other two, whose runtimes are within measurement error of each other.

Memory

Memory and resource management has traditionally been the weak point of C, where the programmer is responsible for shepherding and freeing every single resource. This shows clearly in the result table above. Perhaps the most striking number is the 79 explicit (that is, written and maintained by the developer) resource release calls. That means that more than 2% of all statements in the entire code base are resource deallocation calls.

Every manual resource deallocation call is a potential bug. This is confirmed by the number of memory leaks reported by Valgrind: more than 1000, several dozen of which are marked as "definitely lost". The C++ implementation, on the other hand, uses value types such as std::string and RAII consistently. Every resource is deallocated automatically by the compiler which, as we can see, does it perfectly. There are no resource leaks.

Memory consumption is also interesting. The C version works by creating an array of package objects and strings. Then it creates a hash table with pointers that point to said array. This is the classical C "sea of aliased pointers" problem, where the developer must keep track of the origin and meaning of every single pointer with no help from the compiler.

The C++ version has no pointers; it uses value types instead. This means that all data is stored twice: once in the array and a second time in the hash table. This could probably be optimized away but was left as is for the purposes of this experiment. Even with this duplication, the libc++ version uses less memory than the GLib one. Libstdc++ uses a fair bit more memory than the other two. To see why, let's look at some Massif graphs, starting with libstdc++.


This shows that for some reason libstdc++ allocates one 70 kB chunk during startup. If we ignore this allocation, memory consumption is about 60 kB, which is roughly the same as for the other two executables.

Plain C looks like this.


The most notable thing here is that Massif can not tell the difference between different allocation sources but instead lumps everything under g_malloc0. The C++ version shows allocations per container type which is extremely useful.

Finally, here is the chart for libc++.



Libc++ does not have an initial allocation like libstdc++, so its memory usage is lower. Its containers also seem to be more optimized, so it uses less memory overall. Memory consumption could probably be reduced further by using a linear probing hash map (which is what GLib does internally) rather than the node-based one required by the C++ standard, but that would mean taking an external dependency, which we want to avoid.

The conversion job

One of the many talking points of Rust is that converting C to it is easy. This is spoken of in quotes such as "Rust is the only language that allows you to convert existing C code into a memory safe language piece by piece" (link to original purposefully omitted to protect the innocent). Depending on your definition of a "memory safe language" this statement is either true or complete bunk.

If you are of the opinion that Rust is the only memory safe language then the statement is obviously true.

If not, then the statement is fairly vacuous. Every programming language that supports the plain C ABI and calling conventions, which is to say almost every one of them, has supported transitioning from C code one function at a time. Pascal, D, Java with JNI, even Fortran have been capable of this for decades.

C++ can also do this, but it goes even further: it supports replacing C structures one member at a time. Pkg-config had many structs consisting of things like GLists of char pointers. In any other programming language, changing one such member means converting the entire struct from C to the new language in a single step, which in turn means changing all code that uses said struct in one commit. Such commits are usually huge and touch a large fraction of the code base.

In C++ you can convert only a fraction of the struct, such as replacing one of the string lists with a std::vector<std::string>. The other members of the struct remain unchanged. This means smaller, more understandable commits. The extra bonus is that these changes do not affect functionality in any way: there are no test suite regressions during the update process, even when working with Frankenstein structs that are half C and half C++.

Following this train of thought to its final station yields slightly paradoxical results. If you have a legacy C code base that you want to convert to D or Rust or whatever, it might make sense to convert it to C++ first. This allows you to do the hard de-C-ification in smaller steps. The result is modern C++ with RAII and value types that is a lot simpler to convert to the final target language.

The only other programming language in common use that is capable of doing this is Objective C but it has the unfortunate downside of being Objective C.

Conclusions

Converting an existing C program into C++ can yield programs that are as fast, have fewer dependencies and consume less memory. The downsides include a slightly bigger executable and slower compilation times.

Tuesday, August 29, 2017

Dependencies, why are they so hard to get right?

The life cycle of any programming project looks roughly like this.


We start with the plain source code. It gets processed by the build system to produce so-called "build tree artifacts". These are executables, libraries and the like, but they are slightly special: they are stored inside the build tree and each build system sprinkles its own special magic into them. The files inside a build tree can not usually be run directly and the file system layout can be anything. The build tree is each build system's internal implementation detail, which is usually undocumented and definitely not stable. The only thing that can reliably operate on items in the build directory is the build system itself.

The final stage is the "staging directory", which usually is the system tree, for example /usr/lib on Unix machines, but can also be e.g. an app bundle dir on OSX or a standalone dir used to generate an MSI installer package on Windows. The important step here is installation. Conceptually it scrubs all traces of the build system's internal info and makes the outputs conform to the conventions of the current operating system.

The different dependency types

Based on this there are three different ways to obtain dependencies.


The first and simplest one is to take the source code of your dependency, put it inside your own project and pretend it is a native part of your project. Examples of this include the SQLite amalgamation file and some header-only C++ libraries. This way of obtaining dependencies is not generally recommended or interesting so we'll ignore it for the remainder of this post.

Next we'll look into the final case. Dependencies that are installed on the system are relatively easy to use as they are guaranteed to exist before any compilation steps are undertaken and they don't change during build steps. The most important thing to note here is that these dependencies must provide their own usage information in a build system independent format that is preferably fully declarative. The most widely accepted solution here is pkg-config but there can be others, as long as it is fully build system independent.
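pkg-config's .pc format is the canonical example of such a declarative description. A minimal file for a hypothetical libfoo might look like this (all names and paths here are illustrative):

```
prefix=/usr
libdir=${prefix}/lib
includedir=${prefix}/include

Name: foo
Description: A hypothetical example library
Version: 1.0.0
Requires: glib-2.0 >= 2.40
Libs: -L${libdir} -lfoo
Cflags: -I${includedir}
```

Any build system can consume this: it is pure data, with no assumptions about how the consuming project is built.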

Which leaves us the middle case: build system internal dependencies. There are many implementations of this ranging from Meson subprojects to CMake internal projects and many new languages such as D and Rust which insist on compiling all dependencies by themselves all the time. This is where things get complicated.

Since the internal states of build trees differ, it is easy to see that you can not mix two different build systems within one single build tree. Or rather, you could, but it would require one of them to be in charge and the other one to do all of the following:
  • conform to the file layout of the master project
  • conform to the file format internals of the master project (which, if you remember, are undocumented and unstable)
  • export full information about what it generates, where and how to the master project in a fully documented format
  • accept dependency information for any dependency built by the master project in a standardized format
And there's a bunch more. If you go to any build system developer and tell them to add these features to their system, they will first laugh at you and then tell you that it will never, ever happen.

This is totally understandable. Pairing together the output of two wildly different unstable interfaces in a reliable way is not fun or often even possible. But it gets worse.

Lucy in the Sky with Diamond Dependency Graphs

Suppose that your dependency graph looks like this.

The main program uses two libraries, libbaz and libbob. Each builds with a different build system, each of which has its own package manager functionality. Both depend on a common library, libfoo. As an example, libbob might be a language wrapper for libfoo whereas libbaz only uses it internally. It is crucially important that the combined project has one, and only one, copy of libfoo, shared by both dependents. Duplicated dependencies lead, at best, to link-time errors and, at worst, to ten-hour debugging sessions of madness in production.

The question then becomes: who should build libfoo? If it is provided as a system dependency this is not an issue, but for build tree dependencies things break horribly. Each package manager will most likely insist on compiling all of its own dependencies (in its own special format) and plain refuse to work with anything else. And what if we want the main program to build libfoo instead (as it is the one in charge)? This quagmire is the main reason why certain language advocates' advice of "just call into our build tool [which does not support any way of injecting external dependency information] from your build tool and things will work" is ultimately unworkable.

What have we learned?

  1. Everything is terrible and broken.
  2. Every project must provide a completely build system agnostic way of declaring how it is to be used when it is provided as a system dependency.
  3. Every build system must support reading said dependency information.
  4. Mixing multiple build systems in a single build directory is madness.

Saturday, August 19, 2017

Apple laptops have become garbage

When OSX launched, it quite quickly attracted a lot of Linux users and developers. There were three main reasons for this:

  1. Everything worked out of the box
  2. The hardware was great, even sexy
  3. It was a full Unix laptop
It is interesting, then, that none of these things really hold true any more.

Everything works out of the box

I have an Android phone. One of the things one would like to do with it is to take pictures and then transfer them to a computer. On Linux and Windows this is straightforward: you plug in the USB cable, select "share pictures" on the phone and the operating system pops up a file dialog. Very simple.

In OSX this does not work. Because Android is a competitor to the iPhone (which is where Apple makes most of its money nowadays), it is in Apple's business interest not to work together with competing products. They have actively and purposefully chosen to make things worse for you, the paying customer, for their own gain. Google provides a file transfer helper application, but since it is not hooked into the OS, its UX is not very good.

But let's say you personally don't care about that. Maybe you are a fully satisfied iPhone user. Very well, let's look at something completely different: external monitors. In this year's EuroPython conference's introductory presentation, the speaker took the time to explicitly warn anyone presenting with a latest-model MacBook Pro that it would not work with the venue's projectors. Things have really turned on their heads, because up until a few years ago Macs were pretty much the only laptops that always worked.

This problem is not limited to projectors. At home I have an HP monitor that has been connected to many a different video source and has always worked flawlessly. The only exception is the new work laptop. Connecting it to this monitor makes the system go completely wonky: on every connection it does an impressive impersonation of the dance floor of a German gay bar, with colors flickering and things switching on and off and changing size for about ten seconds. Then it works. Until the screen saver kicks in and the whole cycle repeats.

If this was not enough, every now and then the terminal application crashes. It just goes completely blank and does not respond to anything. This is a fairly impressive feat for an application that reached feature stability in 1993 or thereabouts.

Great hardware

One of the things I do in my day job is mobile app development (specifically Android). This means connecting an external display, mouse and keyboard to the work laptop. Since Macs have only two USB ports, those are already fully taken and there is nowhere to plug the development phone. The choices are to either unplug the mouse whenever you need to deploy or debug on the device, or to use a USB hub.

Using dongles for connectivity is annoying, but at least with a hub one can get things working. Except no. I have a nice USB hub that I have used for many years on many devices and it works like a charm. Except on this work computer. Connecting anything through the hub causes something to break, so the keyboard stops working every two minutes. The only solution is to unplug the hub and replug it, or, more realistically, to not use the hub at all and live without an external mouse. This is even more ridiculous when you consider that Apple was the main pioneer driving USB adoption back in the day.

Newer laptop models are even worse. They have only USB-C connectors and each consecutive model seems to have fewer and fewer of them. Maybe their eventual goal is to have a laptop with no external connection slots, not even a battery charger port. The machine would ship from the factory pre-charged and once the juice runs out (with up to 10 hours of battery life™) you have to throw it away and buy a new one. It would make for good business.

After the introduction of the Retina display (which is awesome) the only notable hardware innovation has been the emojibar. It took the concept of function buttons and made it worse.

Full Unix support

When OSX launched it was a great Unix platform. It is still pretty much the same as it was then, but by modern standards it is ridiculously outdated. There is no Python 3 out of the box, and Python 2 is several versions behind the latest upstream release. Other tools are even worse. Perl is 5.18 from 2014 or so, Bash is 3.2 with a copyright year of 2007, Emacs is from 2014 and Vim from 2013. This is annoying even for people who don't use Macs but just maintain software that supports OSX: having to preserve compatibility with these stone age tools is not fun.

What is causing this dip in quality?

There are many things one could say about the current state of affairs. However there is already someone who has put it into words much more eloquently than any of us ever could. Take it away, Steve:

Post scriptum

Yes, this blog post was written on a MacBook, but it is one of the older models, which were still good. I personally need to maintain a piece of software that has native support for OSX, so I'm probably going to keep using it for the foreseeable future. That being said, if someone starts selling a laptop with a RISC-V processor, a retina-level display and a matte screen, I'll probably be first in line to get one.

Monday, August 7, 2017

Reconstructing old game PC speaker music

Back when dinosaurs walked the earth regular PCs did not have sound cards by default. Instead they had a small piezoelectric speaker that could only produce simple beeps. The sound had a distinctive feel and was described with words such as "ear-piercing", "horrible" and "SHUT DOWN THAT INFERNAL RACKET THIS INSTANT OR SO HELP ME GOD".

The biggest limitation of the sound system was that it could only play one constant tone at a time. This is roughly equivalent to playing the piano with one finger and only pressing one key at a time. Which meant that the music in games of the era had to be simple. (Demoscene people could do crazy things with this hardware but it's not relevant for this post so we'll ignore it.)

An interesting challenge, then, is whether you could take a recording of game music from that era, automatically detect the notes that were played, reconstruct the melody and play it back on modern audio devices. It seems like a fairly simple problem, and indeed there are ready-made solutions for detecting the dominant note in a given block of audio data. This works fairly well but has one major problem: music changes from one note to another seamlessly, and if you just chop the audio into constant-sized blocks, you get blocks containing two different consecutive notes. This confuses pitch detectors. In order to split the sound into single-note blocks you'd need to know the length of each note, and you can't determine that unless you have already detected the pitches.

This circular problem could probably be solved with some sort of an incremental refinement search or having a detector for blocks with note changes. We're not going to do that. Let's look at the actual waveform instead.
This shows that the original signal consists of square waves, which makes this specific pitch detector a lot simpler to write. All we need to do is detect when the signal transitions between the "up" and "down" values; this is called a zero-crossing detector. Adding the duration of one "up" segment and the following "down" segment gives the duration of one full duty cycle, and the frequency being played is the inverse of that value.

With this algorithm we can get an almost cycle-accurate reconstruction of the original sound data. The problem is that it takes a lot of space, so we need to merge consecutive cycles that are close enough to each other. This requires a bit of tolerance and guesswork, since the original analog components were not of the highest quality and have noticeable jitter in note lengths. With some polishing and postprocessing the end result goes something like this. Enjoy.

Monday, July 24, 2017

Managing the build definitions of a big project with many subprojects and interdependencies

Last week the news broke that Boost is switching from their own build system to CMake. This made me finally look properly into how Boost is built and what lessons we can learn from it. The results turned out to be quite interesting.

For those interested in diving into Boost's code, note that the source layout in the Git repos is different from the release tarballs. The latter have a sort of "preinstalled header" directory containing all public headers, whereas in Git the headers live inside each individual repository. There also seem to be two different sets of build definitions, one for each layout.

Creating a sample project

My first idea was to convert a subset of Boost into Meson for a direct comparison. I spent a lot of time looking at the Jamfiles and could not understand a single thing about them. So instead I created a demonstration project called Liftoff, which can be downloaded from Github. The project had the following requirements:
  • support many standalone subprojects
  • subprojects can depend on other subprojects
  • shared dependencies are built only once, every project using it gets the same instance
  • subprojects can be built either as shared or static libraries or used in a header only mode
  • can build either all projects or only one + all its dependencies
  • any dependency can also be obtained from the system if it is available
  • monorepo layout, but support splitting it up into many individual repos if desired

The libraries

The project consists of four independent subprojects:
  • lo_test, a simple unit testing framework
  • lo_adder, a helper module for adding integers, depends on lo_test
  • lo_strings, a helper module for manipulating strings, has no dependencies
  • lo_shuttle, an application to launch shuttles, depends on all other modules
Note how both lo_adder and lo_shuttle depend on lo_test. Each subproject comes with a header and unit tests, some come with a dependency library as well.

The dependency bit

The core idea behind Meson's dependency system is that projects can declare dependency objects which specify how the dependency should be used (sort of like a Meson-internal pkg-config file). This is how it looks for the string library:

lo_strings_dep = declare_dependency(link_with : string_lib,
  include_directories : include_directories('.'),
)

Other projects can then request this dependency object and use it to build their targets like this:

string_dep = dependency('lo_strings', fallback : ['lo_strings', 'lo_strings_dep'])

This is Meson nomenclature for "try to find the dependency on the system and, if it is not found, use the one in the given subproject". The dependency object can then be used in build targets and the build system takes care of the rest.

Building it

The build command from the command line is this:

meson build
ninja -C build test

This builds and runs all tests. Once you have it built, here are things to try:
  • toggle between shared and static libraries with mesonconf -Ddefault_library=shared [or static]
  • note how the test library is built only once, even though it is used by two different subprojects
  • do a mesonconf -Dmodule=lo_strings and build, note that no other subproject is built anymore
  • do a mesonconf -Dmodule=lo_adder and build, note that lo_test is built automatically, because it is a direct dependency of lo_adder

"Header only" dependencies

Some projects want to ship header-only libraries but also make it possible to build a helper library, usually to cut down on build times. This can be done but it is usually not pretty: you need to write "implementation header files" and do magic preprocessor incantations to ensure things are built in the proper locations. We could replicate all of that in Meson if we wanted to; after all, it's only grunt work. But we're not going to do that.

Instead we are going to do something fancier.

The main problem here is that traditionally there has been no way to declare that a dependency comes with source files that must be compiled into the dependent target. In Meson this is supported. The lo_strings subproject can be set up to build in this way with the following command:

mesonconf build -Dlo_strings:header_only=true

When the project is built after this, the lo_strings library itself is not built; instead its source files are compiled directly inside the dependent targets. Note that the build definition files of the dependent targets do not change at all: they are identical regardless of where the dependency comes from or how it is built. Switching between build modes does not require changing the build definitions either; it can be toggled from the outside.

How much space do the build definitions take in total?

66 lines.

Tuesday, July 18, 2017

Experiment: binary size reduction by using common function tails

In embedded development the most important feature of any program is its size. The raw performance does not usually matter that much, but size does. A program that is even one byte larger than available flash size is useless.

GCC, Clang and other free compilers do an admirable job of creating small executables when asked to with the -Os compiler switch. However, there are still optimizations that could be added. Suppose we have two functions that look like this:

int funca() {
  int i = 0;
  i+=func2();
  return i+func1();
}

int funcb() {
  int i = 1;
  i+=func3();
  return i+func1();
}

They would get compiled into the following asm on x86-64:

funca():
        push    rbp
        mov     rbp, rsp
        sub     rsp, 16
        mov     DWORD PTR [rbp-4], 0
        call    func2()
        add     DWORD PTR [rbp-4], eax
        call    func1()
        mov     edx, eax
        mov     eax, DWORD PTR [rbp-4]
        add     eax, edx
        leave
        ret
funcb():
        push    rbp
        mov     rbp, rsp
        sub     rsp, 16
        mov     DWORD PTR [rbp-4], 1
        call    func3()
        add     DWORD PTR [rbp-4], eax
        call    func1()
        mov     edx, eax
        mov     eax, DWORD PTR [rbp-4]
        add     eax, edx
        leave
        ret

If you look carefully, the last 7 instructions of both functions are identical. In fact, the code above can be rewritten as this:

funca():
        push    rbp
        mov     rbp, rsp
        sub     rsp, 16
        mov     DWORD PTR [rbp-4], 0
        call    func2()
common_tail:
        add     DWORD PTR [rbp-4], eax
        call    func1()
        mov     edx, eax
        mov     eax, DWORD PTR [rbp-4]
        add     eax, edx
        leave
        ret
funcb():
        push    rbp
        mov     rbp, rsp
        sub     rsp, 16
        mov     DWORD PTR [rbp-4], 1
        call    func3()
        jmp common_tail

Depending on your point of view this can be seen either as a cool hack or as an insult to everything that is good and proper in the world. funcb does an unconditional jump into the body of an unrelated function. The reason this works is that both functions end in a ret instruction, which pops a return address off the stack and jumps to it (that is, back to the caller of the current function). Since both code segments are identical, they can be collapsed into one. This is an optimisation that can only be done at the assembly level, because C prohibits gotos between functions.

How much does this save?

To test this I wrote a simple Python script that parses assembly output, finds the ends of functions and replaces common tails with jumps as described above. It uses a simple heuristic and only does the reduction if there are three or more common instructions. I then ran it on the assembly output of SQLite's "amalgamation" source file, which resulted in reductions such as this one:

Ltail_packer_57:
        setne   %al
Ltail_packer_1270:
        andb    $1, %al
        movzbl  %al, %eax
        popq    %rbp
        retq

This function tail is used in two different ways: sometimes with the setne instruction and sometimes without. In total the asm file contained 1801 functions, of which 1522 could be deduped. The most common removals looked like this:

       addq    $48, %rsp
       popq    %rbp
       retq

That is, the common function suffix. Interestingly, when the deduped asm is compiled, the output is about 10 kB bigger than without dedupping. The original code was 987 kB. I did not measure where the difference comes from. It could be because the extra labels need extra metadata, or because a jmp instruction takes more space than the instructions it replaces when the jump needs a 32-bit offset. A smarter implementation would try to keep jump distances short enough to fit in the two-byte short form of jmp, which takes an 8-bit offset, instead of the five-byte form with its 32-bit offset.

Is this actually worth doing?

On x86, probably not, because those machines have a lot of RAM to spare and people running them usually care mostly about raw performance. The x86 instruction set is also quite compact because it has variable-size encoding. The situation is different on ARM and other embedded platforms. They have fewer instructions and a constant encoding size (usually 32 bits), which means longer instruction sequences and thus more potential for size reduction. Some embedded compilers do this optimization, so The Real World would seem to indicate that it is worth it.

I wanted to run the test on ARM assembly as well, but parsing it for function tails is much harder than for x86 asm, so I gave up. Knowing the real benefits would thus require comments from an actual compiler engineer. I don't even pretend to be one on the Internet, so I just filed a feature request about this in the Clang bug tracker.