How much security is enough? No one today can satisfactorily answer this question for computer-related risks. The first generation of computer security risk modelers struggled with issues arising from their binary view of security, which ensnared them in an endless web of assessment, disagreement, and gridlock. Even as professional risk managers wrest responsibility away from the first-generation technologists, they still cannot answer the question with sufficient quantitative rigor. Their efforts are handicapped by a reliance on non-quantitative methodologies originally developed to address the deployment and organizational-acceptance problems that plagued first-generation tools.
In this report, I argue that these second-generation approaches are only temporary solutions to the computer security risk-management problem and will eventually yield to decision-focused, quantitative, analytic techniques. Using quantitative decision analysis, I propose a candidate modeling approach that explicitly incorporates uncertainty and flexibly accommodates varying degrees of modeling detail, addressing many of the failings of previous modeling paradigms. Because quantitative modeling requires data, I also present a compilation and critique of publicly available computer security data. I highlight the importance of data collection, sharing, and standardization through discussions of measurement, relevance, terminology, competition, and liability. I conclude with a case study demonstrating how uncertain data and expert judgments are used within the proposed modeling framework to give meaningful guidance to risk managers and, ultimately, to answer the question: How much is enough?