Why Undefined Behavior Matters

So Many Bugs

So, you’ve been programming for one month, or for ten years, and you have gotten familiar with Murphy’s and Sturgeon’s laws. And you are wondering if there is some way to catch all the bugs, not just spray-and-pray with regression/unit/integration/system/acceptance testing.

Or you are a curious onlooker, wondering why all these programmers cannot make a piece of software that works reliably, what with apps doing unexpected things and all the security breaches you hear of.

First, what are we talking about? Essentially, a bug is any time a system does not do what it is expected to do:

The Mars Climate Orbiter disintegrated in the Martian atmosphere because one module produced measurements in non-metric units and nobody converted them. Seriously.

As you can see, there is quite a spectrum. I like to sort bugs into three categories:

Today, I would like to focus on the last one. It looks like it should be the most tractable, since the scope is well constrained. But it is still a non-trivial problem.

What we want is two-fold:

  1. Specify what we expect the program to do
  2. Verify that the program meets our specification

Even the first step is tricky. For the sake of simplicity, we are going to reduce the scope of what we want to achieve, and focus on a well-delimited set of things our program should do: have a well-defined behavior.

What does this mean? Let us say you wrote a program that does some kind of scientific computation, except you used meters instead of kilometers. Then the result is going to be wrong, of course, but in a predictable way: it will be off by a factor of 1,000. If you run the program again with the same inputs, you should get the same result.
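To make this concrete, here is a contrived sketch (the function and values are made up) of a bug that is wrong but deterministic:

#include <stdio.h>

// Hypothetical example: distance is given in meters, but the formula
// expects kilometers. The result is wrong, but predictably wrong.
double travel_time_hours(double distance_m, double speed_km_h) {
    return distance_m / speed_km_h; // off by a factor of 1,000, every run
}

int main(void) {
    // 5 km at 50 km/h should be 0.1 h; this prints 100.000000 instead.
    printf("%f\n", travel_time_hours(5000.0, 50.0));
}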

On the contrary, if your program contains undefined behavior, it could do unexpected things. It might output the right value; it might output a wrong value; it might crash; it might allow an attacker to take control of your computer. And it might behave differently every time you run it. Worse: it might run perfectly fine during all your tests, but output the wrong value when it is being used to pilot a real plane with real passengers.

How can we know if a program contains undefined behavior? First, we need to talk about programming languages.

Why is C an “Unsafe” Language?

When people talk about “undefined behavior”, they are usually referring to undefined behaviors in the C programming language. There is a reason why this particular language is criticized as being “memory-unsafe”, and why so many security vulnerabilities stem from code written in C. And that reason is that the C language leaves the door open for many bugs.

To understand why, we must first ask ourselves what “the C programming language” refers to. Who decides what it is, and what if someone does things differently?

Historically, C was loosely defined and had many dialects. Every compiler author would do things a bit differently. The people writing a compiler to compile “C” code to 8080 assembly made different choices than the ones targeting PDP-11. The same program could end up doing different things on different processors. Things were messy.

So the C compiler writers, the C developers, and the processor manufacturers came together and agreed on a common definition of what “C” really meant. Thus was born the C standard.

The C standard had to reconcile people doing very different things. And these people cared a lot about performance. Processors at the time were about a hundred times slower than today’s. And that is only counting clock frequency: nowadays, each CPU core can do much more per cycle than the CPUs of that era, and a modern CPU has many cores. It made sense that they really wanted C code to keep compiling to efficient assembly, so that programs ran at decent speeds.

Now, what happens if the code tries to access memory it is not supposed to? Well, the compiler cannot just add runtime checks everywhere, since that would have a negative impact on performance. And it cannot always know in advance whether this will happen, since that would require solving arbitrarily complex mathematical problems.

Because of this, the C standard requires neither. Instead, if the code performs such an invalid operation, it is said to have “undefined behavior”.
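To see why such checks are costly, here is a sketch of the kind of code a compiler would have to emit around every array access if the standard required it (the name checked_read and the error handling are made up):

#include <stdio.h>
#include <stdlib.h>

// Hypothetical: what every access a[i] would have to become.
int checked_read(const int *a, size_t len, size_t i) {
    if (i >= len) {              // an extra comparison and branch, every time
        fprintf(stderr, "out-of-bounds read at index %zu\n", i);
        abort();                 // fail loudly instead of invoking UB
    }
    return a[i];
}

int main(void) {
    int a[4] = {0, 1001, 1005, 1012};
    printf("%d\n", checked_read(a, 4, 3)); // prints 1012
    printf("%d\n", checked_read(a, 4, 4)); // aborts instead of reading garbage
}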

So what? If code crashes, I’ll notice the problem. If it outputs the wrong result, my tests will notice it.

Simplicius, my imaginary straw-man C developer

Wait a minute, renegade figment of my imagination! Undefined behavior means that the compiler is allowed to produce code that does anything. In particular, it is allowed to assume that undefined behavior never occurs in the program, and to deduce things from that. And you can prove many things if you assume something which is false.

Let’s take an example. The code below uses a toy authorization function that just checks whether the user id (uid) belongs to a list of authorized users. As you might have already guessed from the topic of this article, there is an undefined behavior in this code. First question: can you see it?

#include <stdio.h>
int authorized_uids[] = {0, 1001, 1005, 1012};
int is_authorized(int uid) {
    for (int i = 0; i <= 4; i++) {
        if (uid == authorized_uids[i]) {
            return 1;
        }
    }
    return 0;
}
int main(void) {
    if (is_authorized(4242)) {
        puts("You are now logged in as root.");
    } else {
        puts("You are a lowly common user.");
    }
}

Spoiler: answer below.

The loop is written as for (int i = 0; i <= 4; i++), so i takes the values 0, 1, 2, 3 and 4. Then we use it as an index to read from an array: authorized_uids. However, this array only contains 4 elements, at indices 0, 1, 2 and 3. There is no index 4 in this array. Trying to access index 4 is an out-of-bounds access, which is a type of undefined behavior.

Second question: what will happen when we compile and run this code? You can actually do it, but do try to have a guess first.

The universal answer is: anything can happen. Since there is an undefined behavior, the program could do anything. However, there are two common things compilers will do in this case.

Option 1. It can translate this C code to assembly in a rather naive way. For an x86 processor, it would translate authorized_uids[i] as an assembly instruction reading memory at the address authorized_uids + i.

This might seem a bit obvious, but the compiler does not have to do that. Even when there is no undefined behavior, the compiler is allowed to emit any code it wants, as long as it produces the same result. For instance, it might find a faster series of instructions that achieves the same result. This is what “optimizing” means.

If we then run the compiled program, the CPU will try to read at authorized_uids + i. Most likely, however, this will still be in the valid memory of the program. Indeed, memory is usually allocated generously (segments for the stack, pages for the heap). So, the assembly instruction will not cause a crash; it will return some value. Most likely, the value will not be 4242, so the test will fail, and the user will not be authorized.

This is the middle column in the compiler explorer below. At the top, you can see that the source code for is_authorized compiles to various assembly instructions. And, at the bottom, you can see the output of the program: “You are a lowly common user.”.

Option 2. Now, let’s have a look at the rightmost column. The output is “You are now logged in as root.”! This is not what this program was supposed to do at all! And this is not just a fluke. It happens every time you run it!

Then, why is the result different this time? If you look closely, you will notice that the only difference is that we passed the option -O2 to the compiler. It tells the compiler to try harder to optimize the program.

In fact, if you look at the generated assembly, you will see that the compiler optimized the function to something extremely efficient:

is_authorized(int):
        movl    $1, %eax
        ret

This means that the function is_authorized just always returns 1. Wat?

Let’s try to understand the reasoning of the compiler. It sees:

int is_authorized(int uid) {
    for (int i = 0; i <= 4; i++) {
        if (uid == authorized_uids[i]) {
            return 1;
        }
    }
    return 0;
}

This is equivalent to:

int is_authorized(int uid) {
    if (uid == authorized_uids[0]) { return 1; }
    if (uid == authorized_uids[1]) { return 1; }
    if (uid == authorized_uids[2]) { return 1; }
    if (uid == authorized_uids[3]) { return 1; }
    if (uid == authorized_uids[4]) { return 1; }
    return 0;
}

But the compiler is allowed to assume that there is no undefined behavior in the code. So uid == authorized_uids[4] must never be evaluated. The compiler can therefore safely assume that the function always exits through one of the first four return statements. In other words, it always returns 1.
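For the record, the fix is a one-character change (i <= 4 becomes i < 4); a more robust version derives the bound from the array itself, so it cannot drift out of sync:

#include <stddef.h>

int authorized_uids[] = {0, 1001, 1005, 1012};

int is_authorized(int uid) {
    // Compute the element count from the array itself:
    size_t n = sizeof(authorized_uids) / sizeof(authorized_uids[0]);
    for (size_t i = 0; i < n; i++) {  // i < n, never i <= n
        if (uid == authorized_uids[i]) {
            return 1;
        }
    }
    return 0;
}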

But couldn’t the compiler at least warn about this?

Someone rightfully annoyed with the state of the universe

Fighting Undefined Behavior

If you run gcc with -Wall -Wextra -Wpedantic, or even clang with -Weverything, you still get no warnings about the undefined behavior!

Why don’t they warn about this? Because this is harder than it looks. The compiler cannot always know where undefined behavior will happen. Let’s have a look at the example below.

// hour ∈ [0, 23]
// minute ∈ [0, 59]
// second ∈ [0, 59]
int time_to_seconds(int hour, int minute, int second) {
    return hour * 3600 + minute * 60 + second;
}

If a compiler had to ensure the absence of undefined behavior when compiling the above code, it would have to assume that hour, minute and second could contain any possible value from INT_MIN to INT_MAX. The compiler does not understand comments written for humans, and it does not understand variable or function names, even when they are chosen appropriately.

To make the code safe, the compiler would have to introduce runtime checks in the compiled code to make sure that no overflows occur during the computation. In practice, that would imply 2 comparisons and 2 branches before each of the 2 multiplications and each of the 2 additions. But C was not designed for safety. It was designed for speed. All these comparisons and branches add up. On a simple benchmark, I observed that a safe version would be between 2 and 3 times as slow as the unsafe one.
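For instance, a safe variant might look like the following sketch, built on the GCC/Clang __builtin_*_overflow intrinsics (the error handling here, aborting, is a made-up placeholder):

#include <stdio.h>
#include <stdlib.h>

// Every multiplication and addition is checked before being trusted.
int time_to_seconds_checked(int hour, int minute, int second) {
    int h, m, hm, total;
    if (__builtin_mul_overflow(hour, 3600, &h) ||
        __builtin_mul_overflow(minute, 60, &m) ||
        __builtin_add_overflow(h, m, &hm) ||
        __builtin_add_overflow(hm, second, &total)) {
        fprintf(stderr, "overflow in time_to_seconds\n");
        abort();
    }
    return total;
}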

For the same reason, if the compiler had to warn about every potential undefined behavior, developers would be flooded with false positives. In the example above, the compiler would have to emit a warning for each arithmetic operation. With that many false alarms, developers would get in the habit of simply ignoring all warnings.

But I do not care about performance that much! I want runtime checks everywhere. Or at least, I want to see the warnings! I set -Wall -Wextra -Wpedantic -Wconversion -Weverything. I really do want to know when there are potential undefined behaviors in my code.

Let’s address each of these.

Runtime checks. The clang compiler does have the ability to add runtime checks! ASan (Address Sanitizer) looks closely at the way memory is allocated, deallocated and used. UBSan (Undefined Behavior Sanitizer) looks at a few additional things, such as invalid arithmetic operations. Both work by adding instructions to the resulting binary executable, which check everything; since the program has more things to do, it runs slower. Valgrind/memcheck works directly on the compiled program and inserts its own checks when running it; it, too, makes the program run much slower. These tools catch several types of issues every time they actually occur in the program. However, it is often non-trivial to understand why an issue happened. Using the function from the previous example, compiling with clang -fsanitize=undefined time-to-seconds.c and running the program would result in something like:

time-to-seconds.c:5:17: runtime error: signed integer overflow: 2147483647 * 3600 cannot be represented in type 'int'
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior time-to-seconds.c:5:17 in 
time-to-seconds.c:5:33: runtime error: signed integer overflow: 2147483647 * 60 cannot be represented in type 'int'
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior time-to-seconds.c:5:33 in 

Syntactic checks. Although they do not understand what your program does, compilers are able to notice suspicious patterns in the source code. For instance, they can see if (x = 42) and emit a warning, since this is a common mistake where the developer meant if (x == 42). Modern compilers include an extensive set of such patterns to warn the user about potential mistakes. Some tools specialize in finding such patterns in the code, such as clang-tidy or cppcheck. Since they do not change the way the code is compiled, they incur no slowdown when actually running the program. However, they can only find common mistakes, or they would risk emitting too many false warnings. On the example above, no linter emits any warnings, since doing so would likely be a false positive. However, they do warn about more obvious cases:

warning: integer overflow in expression of type ‘int’ results in ‘1’ [-Woverflow]
   39 |     INT_MAX * INT_MAX;
      |             ^
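For reference, the code that triggers this warning is as simple as the following sketch (the line number in the diagnostic will of course differ):

#include <limits.h>

int main(void) {
    int x = INT_MAX * INT_MAX; // constant overflow, folded to 1 at compile time
    (void)x;
    return 0;
}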

If we want to go further, we need to look at formal methods. The idea is to use mathematical, systematic approaches to find all the undefined behaviors, in a provable way. Such methods can even prove the absence of undefined behavior (when there is none, of course).

I want to talk about two techniques from formal methods in particular. I am familiar with them because the company where I work, TrustInSoft, sells an industrial tool to apply them to real code used in critical applications.

Abstract interpretation. The idea for this one is intuitive: we interpret the source code in an abstract machine that executes each statement exactly as specified by the standard, and emits a warning whenever undefined behavior could happen. In essence, it is an implementation of the abstract machine described by the C standard.

The interesting thing is that you can go even further by having the abstract machine work with sets of values instead of singular values. You can then analyze a function for all possible inputs (the value of each parameter being represented as a set of concrete values), instead of one input at a time. Let us consider the example below.

int abs(int x) {
    if (x < 0) {
        return -x;
    } else {
        return x;
    }
}

We can analyze this using the Eva plugin of Frama-C, or the Value plugin of TrustInSoft Analyzer.

$ frama-c -eva abs.c -main abs -verbose=0
[eva:alarm] abs.c:3: Warning: signed overflow. assert -x ≤ 2147483647;

This command interprets the source code in an abstract machine, starting from the abs function. By default, it assumes the argument can be any possible value (any integer from INT_MIN to INT_MAX). When checking the program for all these possible values, it notices a problem in one specific case: when x is INT_MIN, computing -x is undefined behavior, because the result would be bigger than INT_MAX. Indeed, INT_MIN is usually -2³¹ while INT_MAX is 2³¹-1, because of the way integers are represented. If you are interested in how these tools manage to work with huge sets of values, look up interval arithmetic.
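To give a flavor of how this works, here is a toy sketch of an interval domain in C (entirely hypothetical; real analyzers such as Eva are far more sophisticated):

#include <limits.h>
#include <stdio.h>

// A set of possible int values, tracked as a single range [lo, hi].
typedef struct { long long lo, hi; } interval;

// Abstract negation: negating [lo, hi] yields [-hi, -lo].
// Raise an alarm if some concrete result would not fit in an int.
interval abstract_neg(interval x) {
    interval r = { -x.hi, -x.lo };
    if (r.hi > INT_MAX || r.lo < INT_MIN) {
        printf("alarm: signed overflow possible in -x\n");
    }
    return r;
}

int main(void) {
    // The "x < 0" branch of abs(): x ranges over [INT_MIN, -1].
    interval x = { INT_MIN, -1 };
    interval r = abstract_neg(x); // alarms, since -INT_MIN > INT_MAX
    printf("result range: [%lld, %lld]\n", r.lo, r.hi);
}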

Weakest Precondition. The other technique is much more general and can actually be used to verify arbitrary facts on programs, not just to find undefined behaviors. For instance, with this, you can prove that your implementation of a sorting algorithm actually does what it is supposed to do. However, it also takes much longer to learn to use it, and to analyze a real program. If you are actually interested in it, I can only recommend the excellent tutorial written by Allan Blanchard.
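To give just a taste: with Frama-C’s WP plugin, you write the specification as an ACSL contract in a special comment, and the tool generates proof obligations showing that the code meets it. A minimal sketch on the abs function from above (assuming limits.h is included for INT_MIN):

/*@ requires x > INT_MIN;
    ensures \result >= 0;
    ensures \result == x || \result == -x;
*/
int abs(int x) {
    if (x < 0) {
        return -x;
    } else {
        return x;
    }
}

The requires clause rules out the one input for which -x overflows; running frama-c -wp on the file then attempts to discharge the resulting proof obligations.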

What About Other Languages?

We explained why C is an unsafe language. But are the other languages unsafe?

C++. This is essentially a superset of C, and it adds its own undefined behaviors. It does introduce features to help avoid common C mistakes, but the complexity it brings also hands the user another footgun.

Java, Python, Ruby. These languages work by being compiled to bytecode, which is similar to assembly, except that it is not run directly by the processor; it is run by the corresponding virtual machine. In other words, each language targets a single architecture. While C needs to encompass CPUs whose memory models vary significantly, these languages can specify strictly what should happen. They do not care as much about performance, either. The result is that, if you try to access out of the bounds of an array, you do not get undefined behavior. You get a nice, clean exception telling you what the problem is, and a call stack telling you where it happened.

JavaScript. JavaScript is the worst client-side language. It is also the only one. Surprisingly, however, it is safe in the sense that there are no undefined behaviors (there were some until ECMAScript 2016).

Rust. Now, Rust is an interesting case. Like C, it is designed to be fast. Unlike C, it is designed to be safe. It achieves this thanks to the concept of unsafety. By default, checks are added wherever they are needed, and dangerous operations are simply prohibited. You can turn off these checks to do lower-level operations and improve performance where needed, but you have to do so explicitly. The point is that unsafety is opt-in, so developers know to be careful with undefined behaviors.

As you can see, we can do quite a bit about undefined behavior. There are very fast, very simple-to-use tools that might find common mistakes, and there are advanced, exhaustive tools that require more involvement. And there are languages trying to make safety the default.

There is still a long way to go until software becomes as reliable as it could be, but we’re getting there, one byte at a time.

