Quantitative Methods for Assessing Cyber Risk

Accurately model risk to up-level cyber discussions and evolve security postures

Most businesses are very comfortable assessing risk, whether it comes from a failing project, market uncertainty, workplace injury, or any number of other causes. But when it comes to cyber security, rigor disappears, hand-waving commences, and analysts pick a color (red, yellow, or green).

Quantification is one of the most valuable endeavors in business, but most organizations are still guessing about cyber security. Return on investment goes unmeasured, and analysts rely on media hype instead of hard data. But it doesn’t have to be this way. The data and tools exist for organizations to make better investment decisions, and with a little practice anyone can become an expert.

In part one of this three-part series, we’ll explain why we need data. Lots of evidence shows that without data, we make really bad decisions. In part two, we focus on the data itself, and where to get it. In part three, we focus on the tools we need to calculate cyber risk. Putting these together, combined with some practice, will have you making data-driven decisions in no time.

Part 1: Cyber Confusion

Making good decisions in cyber security is hard. Misinformation abounds, the public doesn’t understand root causes of data breaches, and a million security vendors want to sell you something. Without the aid of hard data, decision makers can quickly be led astray. In this post, we’ll cover a basic overview of how organizations make investment decisions, and talk about the pitfalls of poor risk management programs and misinformation.

Quantitative Cyber Risk Assessment Is an Immature Field

The bar for assessing cyber risk is pretty low. Many organizations make decisions by sitting around a table, or using a lot of hand-waving (often by a salesperson). Security vendors like to tell customers that all they have to do is buy their product and it will fix all of their problems.

When formal cyber risk programs at organizations do exist, they’re often qualitative: risks are labeled likely or unlikely, and potential impact is labeled minor, moderate, or major. And while this methodology gets people to sit down and discuss cyber risk, it’s limited by the subjectivity of language. A word as simple as “unlikely” might mean very different things to different people.
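One common remedy is to agree on explicit probability ranges for each qualitative label before any discussion starts, so that “unlikely” means the same thing to everyone in the room. Here’s a minimal sketch of that idea; the labels and ranges below are hypothetical, not from any standard:

```python
# Agreed-upon mapping from qualitative labels to annual probability ranges.
# These particular ranges are illustrative only.
likelihood_ranges = {
    "unlikely": (0.00, 0.10),   # up to a 10% chance per year
    "possible": (0.10, 0.40),
    "likely":   (0.40, 1.00),
}

def label_for(probability):
    """Map an estimated annual probability back to its agreed label."""
    for label, (low, high) in likelihood_ranges.items():
        if low <= probability < high or (high == 1.00 and probability == 1.00):
            return label
    raise ValueError("probability must be between 0 and 1")

print(label_for(0.05))  # unlikely
print(label_for(0.25))  # possible
```

The point isn’t the code itself but the discipline: once the ranges are written down, two analysts who say “unlikely” are making the same claim, and that claim can later be checked against data.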

People may interpret things in radically different ways with even small changes in phrasing. Here’s an example from “Thinking, Fast and Slow” by Daniel Kahneman: a group of surgeons is asked whether they recommend a surgery that has a one-month survival rate of 90%. 84% of the doctors recommend the operation. A second group of surgeons is asked whether they recommend a surgery that has a one-month mortality rate of 10%. Notice that this is the same question, just phrased differently. But this time, only 50% of the doctors recommend surgery! It’s a bit terrifying to think that if you’re going in for a life-saving surgery or procedure, you can influence the surgeon’s recommendation by how you phrase the question.

Now imagine that you want to ask your CISO a question like “how secure are we these days?” How do you know that the CISO understands what “secure” means in the same way you do?

Confusion Abounds

Confusion also permeates discussions about cybersecurity. For example, several media outlets ran stories stating that cybercrime costs the global economy $1 trillion per year. This number is nonsense and was based on a survey of companies that asked about data breach costs. The study was then taken out of context and extrapolated, basically saying that if 100 companies lost $500 million, then all 20,000 companies in the US must have lost $100 billion. This huge extrapolation doesn’t fit other data or rigorous studies, but the $1 trillion headline was quoted repeatedly at very high levels in government and in the media. No one ever asked to see the data behind the claim.
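The arithmetic behind that kind of extrapolation is worth making explicit, because it looks reasonable until you see the assumption it smuggles in. Using the illustrative numbers above (not the actual survey’s figures), the headline number comes from treating a small, self-selected sample as if it were representative of the whole population:

```python
# Naive extrapolation: take the mean loss from a small, non-random sample
# and multiply it across the entire population. Numbers are illustrative.
sample_companies = 100
sample_total_loss = 500e6          # $500 million across the surveyed sample
population_companies = 20_000      # population size used in the headline claim

mean_loss = sample_total_loss / sample_companies      # $5 million per company
extrapolated = mean_loss * population_companies       # $100 billion

print(f"${extrapolated / 1e9:.0f} billion")  # prints $100 billion
```

The flaw is that breach-cost surveys over-represent companies with large, newsworthy losses, and loss distributions are heavy-tailed, so the sample mean can overstate the population mean by orders of magnitude. Multiplying a biased mean by 20,000 just multiplies the bias by 20,000.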

Vendors have generated some of the most misleading cyber reports ever published. One example comes from a vendor report that talks about “the precision interval for the mean value of annualized total cost”. Doesn’t that sound sophisticated? The report really makes it look like they’re producing detailed analyses of how much cyber is costing, but there’s actually no such term as a precision interval. It’s a completely made-up term. Confidence intervals and credible intervals exist, but precision intervals do not.
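For contrast, here’s what the real concept looks like: a 95% confidence interval for a mean annualized cost, computed from a made-up sample of per-company losses (the figures and the report’s methodology are not reproduced here; this is only a sketch of the standard textbook calculation):

```python
# A 95% confidence interval for a sample mean, using the t distribution.
# The loss figures ($M per year) are hypothetical.
import math
import statistics

losses = [1.2, 0.4, 3.1, 0.9, 2.2, 0.7, 1.8, 0.5]
n = len(losses)
mean = statistics.mean(losses)
sem = statistics.stdev(losses) / math.sqrt(n)   # standard error of the mean
t_crit = 2.365                                  # two-sided 95% t value, df = n - 1 = 7

low, high = mean - t_crit * sem, mean + t_crit * sem
print(f"mean ${mean:.2f}M, 95% CI (${low:.2f}M, ${high:.2f}M)")
```

A report that understood its own statistics would show something like this, state the sample size, and note how wide the interval is; invented terminology is a signal that none of that happened.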

As we can see, in cyber risk assessment, simply using the right mathematical tools puts you ahead of the game. The bar is really quite low when it comes to quantitative cyber risk assessment, and you have to be careful about how you ingest information. Another great paper, “Sex, Lies, and Cybercrime Surveys”, examines the problems with relying on vendor reports and cyber crime surveys as evidence for different problems in the cyber landscape.

Countless other examples of bad studies exist, but the result is the same: they create inaccurate perceptions about what is important and lead to poor decision making. IT professionals should be skeptical of headlines and demand transparency into where the data came from.

At Expanse, we’ve integrated data-driven decision making into the entire customer lifecycle. Customers can make purchase decisions based on facts – not fear, uncertainty, and doubt. We track key metrics about an organization’s perimeter over time, and arm the IT organization with data that guide their security program.

In the next two posts, we’ll dive into examples of data (where to get it), and the tools needed to make better investment decisions (how to use it).

Read part two here.

Dr. Marshall Kuypers, Director of Cyber Risk, is passionate about quantitative risk and cyber systems. He wrote his PhD thesis at Stanford on using data driven methods to improve risk analysis at large organizations. He was a fellow at the Center for International Security and Cooperation, and he has modeled cyber risk for the NASA Jet Propulsion Lab and assessed supply chain risk and cyber systems with Sandia National Labs.