E. Allen Emerson, The University of Texas at Austin
Model Checking: A Personal Perspective
Joseph Sifakis, Verimag Laboratory
The Quest for Correctness -- Beyond Verification
[Abstract]
Five years ago, frustrated by the never-ending process of finding bugs that developers had cleverly hidden throughout our software, I started a new project with Galen Hunt to rethink what software might look like if it were written, from scratch, with the explicit intent of producing more robust and reliable software artifacts. The Singularity project in Microsoft Research pursued several novel strategies to this end. It has successfully encouraged researchers and product groups to think beyond the straitjacket of time-tested software architectures and to consider new solutions that cross the bounds of academic disciplines such as programming languages, operating systems, and tools.
Singularity built a new operating system using a new programming language, a new software architecture, and new verification tools. The Singularity OS incorporates a system architecture based on software isolation of processes. Sing#, the programming language, is an extension of C# that provides pre- and post-conditions; object invariants; verifiable, first-class support for OS communication primitives; and strong support for systems programming and code factoring.
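The abstract does not show Sing#'s contract syntax. As a rough illustration of what pre-/post-conditions and object invariants buy a systems programmer, here is a hedged Python sketch; the `check_invariant` decorator, the `BoundedQueue` class, and the assertion-based checking discipline are illustrative inventions, not Sing# features, and real Sing# contracts are checked by the compiler and verifier rather than at runtime.

```python
# Illustrative only: mimics pre-/post-conditions and object invariants
# with runtime assertions; Sing# expresses these in the language itself.

def check_invariant(method):
    """Run the object invariant before and after every public method."""
    def wrapper(self, *args, **kwargs):
        assert self._invariant(), "invariant violated on entry"
        result = method(self, *args, **kwargs)
        assert self._invariant(), "invariant violated on exit"
        return result
    return wrapper

class BoundedQueue:
    def __init__(self, capacity):
        assert capacity > 0                         # precondition
        self._items = []
        self._capacity = capacity

    def _invariant(self):
        # Object invariant: the queue never exceeds its capacity.
        return 0 <= len(self._items) <= self._capacity

    @check_invariant
    def enqueue(self, item):
        assert len(self._items) < self._capacity    # precondition: not full
        self._items.append(item)

    @check_invariant
    def dequeue(self):
        assert self._items                          # precondition: not empty
        return self._items.pop(0)

q = BoundedQueue(2)
q.enqueue("a")
q.enqueue("b")
assert q.dequeue() == "a"
```

The point of pushing such contracts into the language, as Sing# does, is that a verifier can discharge many of these checks statically, so entire classes of misuse never reach runtime.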
From its start, the Singularity project was driven by the question: what would a software platform look like if it were designed with the primary goal of improving the reliability and robustness of software? To this end, we adopted three strategies. First, Singularity is almost entirely written in a safe, modern programming language, which eliminates many serious defects such as buffer overruns. Second, the system architecture limits the propagation of runtime errors by providing numerous, inexpensive, well-defined failure boundaries, thereby making it easier to achieve robust and correct system behavior, even in the presence of imperfect software. Finally, Singularity was designed from the start to facilitate the widespread use of sound program verification tools, with the belief that these tools could provide strong guarantees that entire classes of errors were eliminated.
The success of Singularity raises the possibility that it is time to rethink the traditional design, architecture, and construction practices for software in light of its increasingly central role in the world and the unprecedented threats to its security and integrity. It also poses interesting questions about today's balance of effort between finding defects in existing software and developing the next generation of languages and tools, which could make a qualitative improvement in software robustness. The advent of parallel programming, occasioned by the multicore revolution, makes this rethinking even more relevant, as that sea change opens the door to other radical changes in software.
I discuss the main achievements in the area of formal verification, in particular their impact on the development of Computer Science as a discipline, as well as future research directions.
The presentation starts with a short overview of formal verification techniques and their main characteristics, followed by an analysis of their current status with respect to: 1) requirements specification; 2) faithfulness of modeling; 3) scalability of verification methods.
Compositional modeling and verification is the main challenge for tackling complexity. I identify two complementary research directions for overcoming current difficulties in compositional techniques: 1) Moving from low-level automata-based composition to component-based composition, by developing frameworks encompassing heterogeneous components; 2) Using such frameworks to study compositionality techniques for particular architectures and/or specific properties.
I argue that these directions are not only an opportunity for reinvigorating formal verification; they also lead to constructivity results that will help close the gap between Formal Methods / Verification and Algorithms / Complexity.
There is a long history of security attacks that succeed by violating the system designer's assumptions about how things work. Even if a designer does everything right -- within the "obvious" model -- such attacks can still succeed. How can we, as designers and verifiers of systems, cope with these "outside-the-box" attacks?
The classic examples of assumption-violating attacks are the timing attacks on cryptosystems first introduced by Kocher. Cryptosystems are designed so that an attacker who has black-box access to an implementation (and does not know the secret key) cannot deduce the key. Extensive mathematical analysis of the input-output behavior of cryptographic functions led to the belief (though unfortunately not proof) that an attacker who can observe the input-output behavior of cryptosystems cannot feasibly find the secret key. Kocher showed that even if this is true, the running time of common cryptographic algorithms does depend on the secret key. Though the dependence of running time on the key is complex, Kocher showed how to use randomized experiments to extract enough signal to deduce the key, at least in principle. Brumley and Boneh later showed that such attacks are practical, even across a network.
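As a simplified illustration of the principle (not Kocher's actual attack, which exploits the data-dependent running time of modular exponentiation), the toy sketch below models "time" as loop iterations in an early-exit comparison and recovers a made-up secret byte by byte; the secret, the `check` oracle, and all parameters are invented for illustration.

```python
# Toy model of a timing side channel: an early-exit comparison's
# "running time" (here, deterministic loop iterations) depends on how
# much of a guess matches the secret, letting an attacker recover the
# secret one byte at a time without ever seeing it directly.

SECRET = b"k3y!"  # hypothetical secret, unknown to the attacker

def check(guess):
    """Early-exit comparison; returns (matched?, iterations executed)."""
    steps = 0
    for g, s in zip(guess, SECRET):
        steps += 1
        if g != s:              # bail out at the first mismatch
            return False, steps
    return len(guess) == len(SECRET), steps

def recover_secret(length):
    """Recover the secret using only the timing side channel."""
    known = b""
    for pos in range(length):
        for byte in range(256):
            guess = known + bytes([byte]) + b"\x00" * (length - pos - 1)
            ok, steps = check(guess)
            # A correct byte lets the comparison run one step further.
            if ok or steps > pos + 1:
                known += bytes([byte])
                break
    return known

assert recover_secret(len(SECRET)) == SECRET
```

A real attacker faces noisy wall-clock measurements rather than an exact step count, which is why Kocher's attack needed randomized experiments and statistics to extract the same signal.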
Systems can be redesigned to resist Kocher-style timing attacks, and many systems were redesigned after the attacks became known. However, any new attack that breaks existing systems imposes significant cost and puts existing systems at risk. Designers began to worry that more outside-the-box attacks might be coming.
This fear turned out to be justified, as more attacks on cryptosystems were uncovered. Kocher, Jaffe, and Jun discovered that the power consumption of crypto implementations leaked information about their internal computations, allowing secret keys to be discovered. Fine-grained fluctuations in power consumption during a computation turned out to leak even more information. Again, systems could be redesigned -- though not so easily this time -- but again existing systems were vulnerable.
Cryptosystems fell victim to other outside-the-box attacks. Boneh, DeMillo, and Lipton showed how to defeat public-key cryptosystems by inducing faults in the cryptographic computation. Remarkably, even if the attacker couldn't determine exactly where, when, and how the error occurred, just knowing there was an error and having access to the corrupted output was sufficient to deduce the secret key in most cases. Biham and Shamir found a similar result for symmetric cryptosystems. More systems had to be redesigned, even though they were secure under (then-)existing models.
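The striking step in the Boneh-DeMillo-Lipton result, deducing a factor of the modulus from a single faulty output, can be shown concretely for RSA signatures computed with the Chinese Remainder Theorem. The sketch below uses tiny illustrative parameters (toy primes, textbook RSA without padding, a deterministic single-bit fault), so it should be read as a demonstration of the arithmetic, not the attack in practical form.

```python
# Toy demonstration of the Boneh-DeMillo-Lipton fault attack on
# CRT-based RSA signatures, with tiny textbook parameters.
from math import gcd

p, q = 1009, 1013                  # toy primes; real moduli are ~2048 bits
N = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def sign_crt(m, fault=False):
    """RSA signature via CRT; optionally inject a fault in the mod-q half."""
    s_p = pow(m, d % (p - 1), p)
    s_q = pow(m, d % (q - 1), q)
    if fault:
        s_q ^= 1                   # a single bit flip in one CRT branch
    # CRT recombination: s is congruent to s_p mod p and s_q mod q
    h = ((s_p - s_q) * pow(q, -1, p)) % p
    return s_q + q * h

m = 4242
good = sign_crt(m)
bad = sign_crt(m, fault=True)
assert pow(good, e, N) == m        # the correct signature verifies

# The faulty signature is still correct mod p but wrong mod q, so
# bad^e - m is divisible by p and not by q: the gcd leaks the factor p.
assert gcd((pow(bad, e, N) - m) % N, N) == p
```

Note that the attacker needs neither the location nor the nature of the fault, only the corrupted signature and the message, which is exactly what makes the attack so hard to model in advance.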
Similar attacks worked in non-crypto settings as well. Kuhn and Anderson showed attacks on satellite television smartcards. Govindavajhala and Appel showed how to break Java virtual machine security by using heat lamps to induce thermal errors in PC execution. Several researchers showed how to breach security barriers by observing the timing of execution. Halderman et al. defeated operating system memory protection by cooling DRAM chips and then cutting system power.
The upshot of all this is that outside-the-box attacks seem to be a fact of life. We can expect to see more attacks exploiting new, unexpected attack modes. Though we can't predict what the attacks will be, we know they are coming. How can we, as system designers and verifiers, cope?
We are often advised to think outside the box. While it is good to widen our horizons and think creatively from time to time, "think outside the box" is not really actionable advice. What we need instead is to find a better box -- one whose boundaries are not so often broken.
How exactly to do this is not a question we can answer now. One goal of this presentation is to catalyze a discussion about how we might improve our models, or how we might design systems to be more resilient against failures of our models to match reality. Until we can answer this question, our systems will remain fragile, no matter how hard we work on verifying them.