Computer security is a much-studied area of computer science, unfortunately characterized as much by art as science. This talk describes some steps taken towards engineering rigor, first in a data-driven analysis of where security vulnerabilities occur in “real life”, and second in a new technique, Chaining Layered Integrity Checks, which addresses many of these vulnerabilities.
I begin with a presentation of preliminary results from a study (performed with the Computer Emergency Response Team) of a life cycle model for security vulnerabilities, based on data from incident reports. The results suggest that the security rule of thumb “10% technology, 90% management” has some basis in fact.
I then turn to the issue of computer system integrity. In a computer system, the integrity of lower layers is typically treated as axiomatic by higher layers. Under the presumption that the hardware comprising the machine (the lowest layer) is valid, the integrity of a layer can be guaranteed if and only if: (1) the integrity of the lower layers is checked, and (2) transitions to higher layers occur only after integrity checks on them are complete. The resulting integrity “chain” inductively guarantees system integrity. If the integrity chain is broken at any layer boundary in a system design, the integrity of the system as a whole cannot be guaranteed. The Chaining Layered Integrity Checks (CLIC) model is used to develop a system architecture with which integrity for an entire system can be guaranteed. I have also demonstrated that CLIC can be realized, by implementing AEGIS, a prototype secure bootstrap system for personal computer systems.
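The inductive chain described above can be sketched in a few lines. This is a hypothetical illustration, not the AEGIS implementation: the layer names, images, and use of SHA-256 digests are assumptions chosen for the example. Layer 0 (hardware) is taken as axiomatically valid; each subsequent layer is verified before control is transferred to it.

```python
import hashlib

def digest(data: bytes) -> str:
    """Cryptographic digest used for the integrity check (assumed SHA-256)."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical layer images, from the trusted hardware layer upward.
LAYERS = [
    ("hardware", b"layer0-rom"),        # axiomatically trusted base case
    ("firmware", b"layer1-firmware"),
    ("boot block", b"layer2-bootblock"),
    ("kernel", b"layer3-kernel"),
]

# Expected digests, recorded ahead of time (e.g. in protected storage).
EXPECTED = [digest(image) for _, image in LAYERS]

def boot(layers, expected):
    """Transition upward only after the next layer's check completes.

    By induction: if layers 0..i are valid and layer i+1 passes its
    check, layers 0..i+1 are valid; a failed check halts the boot.
    """
    for i, (name, image) in enumerate(layers):
        if i > 0 and digest(image) != expected[i]:
            raise RuntimeError(f"integrity check failed at layer {i} ({name})")
        # Integrity of layers 0..i is now established; continue upward.
    return "system integrity established"

print(boot(LAYERS, EXPECTED))
```

Tampering with any single layer image breaks the chain and halts the boot, which is exactly the property the model requires: integrity of the whole follows only if every link holds.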