Sun, 21 Feb 2010

Lies, damned lies and ...

The build system at work needed a bit of a cleanup. It's a fairly standard autoconf/automake deal that has been extended by people who don't understand those tools. That's understandable: there are, after all, very few people who truly do. I certainly don't.

Still, seeing HOST confused with TARGET does indicate there's a problem.

While I don't know much about autotools I do know a thing or two about plain old make. I spent some time setting up a non-recursive build system, borrowing heavily from the Linux kernel build system bag of tricks.
After a little work I got the first few applications building with the new system. Before committing it I wanted some performance numbers. Sure, the new system is (at least in my opinion) much cleaner, simpler, and more powerful, but a faster build is something every developer cares about.
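To make the non-recursive idea concrete, here's a minimal sketch of the layout (file and target names are hypothetical; the kernel's Kbuild machinery is far more elaborate than this). The key point is that there is exactly one make invocation: each directory contributes a small fragment that appends its objects to a global list, so dependencies can cross directory boundaries.

```make
# Top-level Makefile (sketch, hypothetical names).

objs :=
include foo/module.mk      # contains e.g.:  objs += foo/main.o
include bar/module.mk      # contains e.g.:  objs += bar/util.o

app: $(objs)
	$(CC) -o $@ $^

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<
```

Because make sees the whole dependency graph at once, `make -j` can parallelize across directories, and nothing gets rebuilt because a sub-make couldn't tell it was up to date.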

Measuring build times can be a little tricky, as the numbers can vary significantly due to caching, other processes running on the system, and so on.
The mathematical tools to deal with this exist, but who wants to spend hours refreshing statistics courses? Fortunately there's a very simple tool to do exactly the kind of statistical calculations benchmarks call for: ministat. All you need to do is hand it two files with the results to compare.
It's a FreeBSD tool; getting it to work on Linux is a (trivial) exercise left for the reader.
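Collecting the input files is straightforward: one wall-clock time per line, one file per configuration. A minimal sketch, assuming GNU date (for `%N`) and with `sleep` standing in for the actual build command:

```shell
#!/bin/sh
# Sketch: collect wall-clock times for 11 runs of one build system.
# `sleep 0.1` is a stand-in for the real command (e.g. make -j6);
# the output file is one of the two files ministat will compare.
out=/tmp/old-system.txt
: > "$out"
i=0
while [ "$i" -lt 11 ]; do
    start=$(date +%s.%N)
    sleep 0.1                       # stand-in for the actual build
    end=$(date +%s.%N)
    # append elapsed seconds, one value per line
    awk -v s="$start" -v e="$end" 'BEGIN { printf "%.2f\n", e - s }' >> "$out"
    i=$((i + 1))
done
```

Repeat with the new system writing to a second file, then compare with `ministat /tmp/old-system.txt /tmp/new-system.txt`.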

Here's the output of 11 test runs of both the old (autotools, recursive make) and the new (plain make, non-recursive) system when running 6 jobs simultaneously.


x old-system.txt
+ new-system.txt
+------------------------------------------------------------------------------+
|+                                                                          x  |
|+                                                                          x  |
|+                                                                          x  |
|++                                                                         x  |
|++                                                                         x  |
|++                                                                         xx |
|++                                                                         xxx|
|A|                                                                         A| |
+------------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x  11        662.44        673.03        663.69     665.20545      3.224591
+  11        324.47        329.55        326.46     326.61364     1.8295151
Difference at 95.0% confidence
  -338.592 +/- 2.3318
  -50.9003% +/- 0.350539%
  (Student's t, pooled s = 2.62156)
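(For the curious: the +/- 2.3318 half-width is the standard pooled two-sample t interval. With n = 11 per group, that's 20 degrees of freedom, and the 95% critical value is t = 2.086:

    2.086 * 2.62156 * sqrt(1/11 + 1/11) = 2.3318

exactly the number ministat reports.)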

The results surprised me a little: partly because there's so little variation between runs, but mostly because the build time with the new system is so much better. With variance this low, simply comparing the first measurement of each system would have told us the same thing. On the other hand, now we know for sure.

I believe I'll have little trouble convincing my coworkers that the new system is better.

posted at: 11:37 | path: / | [ 0 comments ]