
A Brief History of Automated Builds

Almost every programming book starts with a small example program that can be compiled from the command line with a simple call to the compiler, perhaps with a few flags. Known as "Hello World", the few lines of code needed to print the text are almost universal.
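For illustration, a minimal version in C might look like this (the file name hello.c is just for the example):

    /* hello.c - print a greeting and exit */
    #include <stdio.h>

    int main(void)
    {
        printf("Hello, World!\n");
        return 0;
    }

Producing a build is then a single compiler invocation, such as cc -o hello hello.c.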

What the reader takes away from the example is how to call the compiler to produce a build. However, as they progress in learning the language, it soon becomes clear that real programs need multiple source files, and that compiling these manually is tedious and error-prone. The solution is either to write a script that compiles the modules or to use a build tool.

The Unix operating system (and its derivatives and clones, such as FreeBSD and Linux) includes a tool called Make, which automates builds based on a configuration file (a Makefile) describing which source files are needed by which components of the build. Thanks to its inclusion in Unix and the Unix-like operating systems, Make has become fairly universal, and there are several versions, including Microsoft's NMAKE for Windows. However, Make isn't the only build tool; the Apache Ant build system, for example, is very popular for Java.
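A minimal Makefile shows the idea. The file names here are hypothetical, and real Makefiles usually add variables and pattern rules (note that recipe lines must start with a tab):

    # Link the final program from its object files
    hello: main.o greet.o
    	cc -o hello main.o greet.o

    # Each object file depends on its source file and the headers it includes
    main.o: main.c greet.h
    	cc -c main.c

    greet.o: greet.c greet.h
    	cc -c greet.c

Running make compares file timestamps against these rules and rebuilds only the targets whose prerequisites have changed.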

The problem with tools like Make, when used in their standard configuration, is that compilation is performed sequentially: one file is compiled, then the next, and so on. When there are thousands of files containing millions of lines of code, this can be very slow. With the advent of cheap multi-core processors it became practical to compile source files in parallel, and Make offers a flag (-j, for jobs) which tells it to run multiple compiles at the same time. On a 32-core machine with solid-state disks (SSDs), parallel building can cut the compile time of the Linux kernel from hours to a few minutes.
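For example, to allow up to eight compile jobs at once (the job count is usually matched to the number of available cores):

    $ make -j8            # run up to 8 jobs in parallel
    $ make -j"$(nproc)"   # on Linux, match the number of CPU cores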

The catch with parallel building is resolving the dependencies. If the Makefile doesn't define each module's dependencies precisely, one of two things happens: build times remain high, because overly broad cross-dependencies force the build back towards sequential execution, or the build breaks, because modules are compiled and linked in the wrong order.
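A sketch of the second failure mode, using hypothetical file names and a made-up make-header script: main.c includes a generated header, but the rule for main.o doesn't list that header as a prerequisite, so a serial build may succeed by luck while a parallel one fails:

    # generated.h is produced by another rule
    generated.h: generated.h.in
    	./make-header generated.h.in > generated.h

    # BUG: main.c #includes generated.h, but it isn't listed here,
    # so under "make -j" main.c may be compiled before the header exists
    main.o: main.c
    	cc -c main.c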

Tools like ElectricAccelerator are able to analyze a build system and create a dependency map. One way they do this is by monitoring file usage during a build and detecting when each file is read or written by the build process. Such maps help guarantee that the build is consistent and doesn't break due to out-of-order compiles.

ElectricAccelerator also uses caching technology to reuse the output of previous compilations and so avoid unnecessary recompiles. The ccache tool, which can be used together with Make, performs a similar function.
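ccache is typically used as a compiler wrapper. One common way to route a Make-driven build through it (assuming gcc as the compiler) is:

    $ make CC="ccache gcc" -j8   # compile via the cache
    $ ccache -s                  # show cache hit/miss statistics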

Conclusion

On large projects, build times significantly influence productivity. Parallelism and caching let builds complete more quickly, which in turn allows downstream activities (such as testing) to proceed without hold-ups.
