All of us who work in software development want the products we release to be high quality and inexpensive to maintain in the field. This is no big revelation; it’s common knowledge, and it’s common sense. There is no silver bullet for achieving these goals, but that does not mean parts of the effort cannot be automated.
A good part of the answer (not the full answer, by any means) to this problem is the use of a source code static analysis checker. For those of you unfamiliar with this idea, a static analysis checker is a program that takes your source code (or sometimes the object code) as input and analyzes it, without executing it, with the intent of determining the correctness of the code. Usually a static analysis program is composed of a collection of checkers, each of which analyzes for a specific type of problem. Different combinations of these checkers can be turned on to search for specific sets of defects.
After a static analysis run is complete, the tool generates reports describing the errors identified in the source code. In rough terms, here are some major types of errors normally identified by source code static analysis.
1. Logic errors. Syntax errors are normally caught by the compiler, but logic errors, such as using a variable before initializing it, are caught by a static analyzer. Other examples include data-flow analysis that finds unreachable (dead) code, detection of poor coding practices, and enforcement of desired coding standards.
2. Concurrency errors. Race conditions, locking-sequence problems, and other issues in software that involves multiple threads or processes can be analyzed and the problems identified.
3. Security errors. These include such things as buffer overflows, unvalidated user input, memory-management mistakes, and other vulnerable coding practices.
I’d now like to touch on four benefits of using this approach to improve software reliability.
1. Quality. By quality, what is meant here is identifying defects up front and fixing them before customers ever experience them. Having static analysis present a list of defects before any code testing even begins certainly provides a head start in fixing real and potential code problems.
2. Time to market. Identifying problems before the various testing cycles and alpha/beta deployments at customer sites can shorten the entire software development lifecycle. Much has been written about how each stage of the cycle a defect survives into multiplies the cost of remediation by roughly 10X. Identifying and fixing some non-trivial number of defects before testing can only help testing and delivery schedules.
3. Legal insulation. Note that this is insulation rather than protection, since the legal system is not a perfectly predictable entity. But the fact that a software vendor has exercised due diligence by using appropriate tools to promote software quality can only help if software defects lead to a lawsuit. Virtually no non-trivial software can be certified as “defect free,” but having taken proactive steps to minimize defects may form part of a defense against such lawsuits.
4. Food and Drug Administration (FDA) or other government mandates. For medical device companies, the FDA has not yet mandated the use of static analysis for medical software. However, it has identified this as a desirable practice, and in the future there is some possibility that such practices will become part of a government mandate to assure patient safety. Other industries that deal with government agencies or regulatory bodies may also want to pay attention to the need for due diligence and avail themselves of best practices in their software development processes.
One final topic that deserves coverage is false positives: static analysis output can contain a variety of reported issues that turn out not to be serious errors. There are a couple of approaches to this problem.
First, it is worth having developers adopt coding standards so that these kinds of false positives do not appear in the first place. A false positive is usually the result of some substandard coding practice; while it may not be an immediate coding error, it may be problematic in other ways.
Next, static analysis tools normally have a variety of settings, from conservative to highly aggressive, in determining what might be an error. It is recommended to begin with the conservative settings in order not to be overwhelmed by the number of errors presented in the reports. If possible, at first turn on only the checkers that catch the most serious types of errors.
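One concrete way to apply this advice, sketched here with the open-source cppcheck tool (your analyzer and its option names may differ, and the source path is a placeholder):

```shell
# Start conservatively: cppcheck reports outright errors by default,
# and --enable=warning adds only the more serious heuristic checks.
cppcheck --enable=warning src/

# Later, once the initial reports have been resolved, widen the set:
# cppcheck --enable=warning,performance,portability src/
```

Most commercial tools offer an equivalent notion of checker categories or severity levels that can be enabled incrementally.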
Finally, a process should be put in place to drive the identified errors to resolution. When the count of open errors approaches zero, other checkers can be turned on to catch other types of errors, which can then become the focus of developer remediation.
To sum up, static analysis is not, and should not be, the only effort exerted to attain code quality. Activities like proper coding practices, code reviews, and the different stages of code testing are also very important. It’s not a single action that delivers quality code, but the combination of known good practices. As mentioned previously, static analysis of source code is not by itself a silver bullet, but it should not be ignored.