About The Contest

The first-ever Olympiad of Misguided Geeks contest at Worse Than Failure (or OMGWTF for short) is a new kind of programming contest. Readers are invited to get creative and devise a calculator with the craziest code they can write. One lucky and potentially insane winner will get either a brand new MacBook Pro or a comparable Sony VAIO laptop.

Code Analysis

by David Maxwell

Alex asked me to write an article because Coverity is contributing to the contest that has just been launched. Coverity's code analysis tool Prevent is going to provide some objective statistics for the judges to use when evaluating submissions.

Programming is creative work. This site contains daily examples of just how creative some programmers are. Some programmers are more ingenious than others.

In any creative effort, some tools help the creator express himself or enable work beyond what is possible with bare hands. Other tools contribute to the quality of the finished product, the way a carpenter's measuring tools and levels ensure a building is as sturdy as the designer intended.

Programmers are used to working with tools like editors, compilers, and debuggers. Those tools are required as part of the software development process, but they don't tell you anything about how sturdy a program is. People have accepted for years that complexity is a fundamental challenge in software development, but most tools provide limited help in dealing with the effect of that complexity.

One particular challenge is that as a program grows, the number of distinct paths through the code can increase rapidly. Code reviews do catch errors, but they also miss them, because reviewers can't spend enough time to consider every execution path in a sizable program. Test cases catch errors too, but often the safety check the programmer forgot has no test case that triggers it. And while a failure condition may occur by chance, in some cases it can be triggered intentionally by someone malicious.
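The kind of forgotten safety check described above might look like the following minimal sketch (the function and its names are hypothetical, not from any contest entry). With the length check present the code is safe; delete that one line and the happy-path tests still pass, while the buffer overflow waits on the rarely exercised long-input path:

```c
#include <stddef.h>
#include <string.h>

/* Copy a command name into a caller-supplied fixed-size buffer.
 * Typical tests exercise short inputs and pass either way; the
 * overflow only exists on the long-input path, which is exactly
 * the kind of path a reviewer or test suite can miss. */
int store_command(char *dst, size_t dst_len, const char *src) {
    if (src == NULL)
        return -1;
    if (strlen(src) >= dst_len)   /* the safety check that is easy to forget */
        return -1;
    strcpy(dst, src);             /* safe only because of the check above */
    return 0;
}
```

A path-sensitive analyzer reports the missing check by following the long-input path to the `strcpy`, whether or not any test happens to take it.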

Here's where a new kind of tool is valuable. It's a kind of tool that looks at code the same way a programmer would while doing a code review, except it has the patience to look at every line of code and every path through the program.

The concept of static analysis isn't new. Many programmers have seen output from a tool called lint that has been a companion to C compilers for many years. The limitation of lint is that it looks at the text of a program without understanding execution paths, user-defined data types, APIs, security implications, and multi-threading issues like deadlocks.
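As a hypothetical illustration of that limitation, every line in the sketch below is locally unremarkable, which is roughly all a purely textual checker can judge. Spotting the defect requires following execution paths through the function: when the read fails, it returns without releasing the buffer or the file handle, a leak that exists only on that one error path:

```c
#include <stdio.h>
#include <stdlib.h>

/* Sum the bytes of a file (hypothetical example; names are invented).
 * Each statement is fine in isolation; the defect is path-dependent. */
long checksum_file(const char *path) {
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return -1;
    unsigned char *buf = malloc(4096);
    if (buf == NULL) {
        fclose(f);
        return -1;
    }
    size_t n = fread(buf, 1, 4096, f);
    if (n == 0)
        return -1;          /* defect: buf and f leak on this path only */
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += buf[i];
    free(buf);
    fclose(f);
    return sum;
}
```

A path-aware analyzer reports the leak by tracking what each path releases; a line-by-line textual scan sees nothing wrong on any single line.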

If you've ever done a code review, you can imagine the problems that result. Saying anything informative about a program without understanding what its functions are supposed to do, how its data is stored, or how it runs is largely impossible. If you've never participated in a code review, find some source code you don't know, pick a line at random, and try to understand its effect without looking at the rest of the program. Context is everything.

A tool that actually understands the guts of the code has several advantages over tools that do not. If a particular code construct is risky, the tool can check whether the programmer used safety measures to ensure that the construct cannot misbehave. If bad data is set up in one function but a reference to it causes an error in another, the tool can see which inputs make that happen.
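A minimal hypothetical sketch of that interprocedural case: the bad data (a NULL return) is set up in one function and dereferenced in another, and an analyzer that tracks values across calls can report the specific input, `len == 0`, that connects the two:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Invented example functions -- not from any real codebase. */

/* Returns a freshly allocated "#name" label, or NULL on bad input
 * or allocation failure: the bad data is set up here... */
char *make_label(const char *name, size_t len) {
    if (len == 0)
        return NULL;
    char *label = malloc(len + 2);       /* '#' + name + NUL */
    if (label == NULL)
        return NULL;
    snprintf(label, len + 2, "#%s", name);
    return label;
}

size_t label_length(const char *name, size_t len) {
    char *label = make_label(name, len);
    size_t n = strlen(label);            /* ...and crashes here when NULL */
    free(label);
    return n;
}
```

Within `label_length` alone, nothing looks wrong; the defect is only visible to a reader (or a tool) that knows `make_label` can return NULL.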

As compared to a manual review, a tool never gets tired, bored or lazy. Each line of code in each path of execution is thoroughly reviewed with respect to every other line in that code path. Each line is reviewed with a full understanding of the data and functions it refers to.

So why is it called static analysis? Because the program is not running while it is being reviewed: Coverity's analysis operates on the source code and on how the compiler will interpret it. The other type of testing, by contrast, is called dynamic analysis. Dynamic analysis tools include program-specific test cases and tools that observe a program while it is running.

Static and dynamic analysis partly overlap in the issues they can identify, but they are complementary methods, because each can find issues the other cannot. Going back to the code review example: if you write a 'hello world' application, a static analysis tool can confirm that the code is structurally sound, but not whether it performs the task it's intended to. If there's a typo in a literal string, or a formatting error such as an extra carriage return, only test cases that verify the program's output against its specification will detect it. Conversely, static analysis can find bugs for any possible input, not only the subset exercised by test cases.
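A hypothetical sketch of that division of labor: the function below is structurally flawless, so static analysis has nothing to report, yet only a test comparing its output against the specified greeting will catch the typo:

```c
/* Invented example: no leaks, no bad pointers, no unchecked paths.
 * Static analysis is silent -- but the string does not match the
 * specification ("Hello, world!"), which only a dynamic test sees. */
const char *greeting(void) {
    return "Helo, world!";   /* typo a static tool cannot see */
}
```

A specification-based test expecting `"Hello, world!"` fails immediately; no amount of structural analysis would have flagged the string.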

Coding errors are introduced in different ways. Sometimes an API is used in a way its designer did not intend. Sometimes an existing piece of code is reused without an understanding of the consequences. Code may embed assumptions about the system it was written for that fail when the program is ported to a new platform. External time pressures can lead to a lack of testing or safety mechanisms in the code.

When software is developed under time constraints, test cases and code reviews are often the first things removed from the schedule. Everyone knows the code has to be written, so development time is difficult to cut back on. Quality, however, is difficult to quantify, and although it has costs, they tend to arrive later, by which point people have forgotten that lack of quality was their root cause.

In the contest, there are time constraints driven by the submission deadline, and resources are limited by contestants' free time and their level of interest in the prizes, so extensive testing is unlikely. The judges want to make the best use of their time as well, and analyzing entries line by line isn't a lot of fun. So we'll use the methods most appropriate to each task: the compilers will confirm that the code builds, test cases will confirm functionality, Prevent will evaluate code quality, and the judges will evaluate style, humor, and ingenuity.