# How to Measure Anything in Cybersecurity Risk

## Metadata
- Author: [[Douglas W. Hubbard and Richard Seiersen]]
- Full Title: How to Measure Anything in Cybersecurity Risk
- Category: #books
## Highlights
- We propose that there are just three reasons why anyone ever thought something was immeasurable—cybersecurity included—and all three are rooted in misconceptions of one sort or another. We categorize these three reasons as concept, object, and method. Various forms of these objections to measurement will be addressed in more detail later in this book (especially in Chapter 5). But for now, let's review the basics:
  - Concept of measurement. The definition of measurement itself is widely misunderstood. If one understands what “measurement” actually means, a lot more things become measurable.
  - Object of measurement. The thing being measured is not well defined. Sloppy and ambiguous language gets in the way of measurement.
  - Methods of measurement. Many procedures of empirical observation are not well known. If people were familiar with some of these basic methods, it would become apparent that many things thought to be immeasurable are not only measurable but may have already been measured. ([Location 1001](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=1001))
- The mnemonic “howtomeasureanything.com,” where the “c,” “o,” and “m” in “com” stand for concept, object, and method, respectively. ([Location 1011](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=1011))
- Definition of Measurement: A quantitatively expressed reduction of uncertainty based on one or more observations. ([Location 1041](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=1041))
- His simple formula, known as Bayes's theorem, describes how new information can update prior probabilities. ([Location 1133](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=1133))
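
  A minimal sketch of the update Bayes's theorem describes, with hypothetical numbers (an alert updating the probability that a system is compromised); none of these figures come from the book:

  ```python
  # Bayes's theorem: P(H | E) = P(E | H) * P(H) / P(E)
  p_h = 0.02              # prior: P(system is compromised)
  p_e_given_h = 0.90      # P(alert fires | compromised)
  p_e_given_not_h = 0.05  # P(alert fires | not compromised), the false-positive rate

  p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # total probability of the alert
  posterior = p_e_given_h * p_h / p_e
  print(f"P(compromised | alert) = {posterior:.3f}")  # ~0.269, up from the 0.02 prior
  ```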
- This is a distinction that cybersecurity professionals need to understand. Those who think of probabilities as only being the result of calculations on data—and not also a reflection of personal uncertainty—are, whether they know it or not, effectively presuming a particular interpretation of probability. They are choosing the “frequentist” interpretation, and while they might think of this as objective and scientific, many great statisticians, mathematicians, and scientists would beg to differ. (The original How to Measure Anything book has an in‐depth exposition of the differences.) ([Location 1142](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=1142))
## New highlights added September 3, 2023 at 8:21 AM
- You may notice that the beliefs some hold about statistics will contradict the following facts:
  - There is no single, universal sample size required to be “statistically significant.”
  - To compute it correctly, statistical significance is a function of not only sample size but also the variance within a sample and the “null hypothesis.” These would be used to compute something called a “P‐value.” This result is then compared to a stated “significance level.” Lacking those steps, the declaration of what is statistically significant cannot be trusted.
  - Once you know how to compute statistical significance and understand what it means, then you will find out that it isn't even what you wanted to know in the first place. Statistical significance does not mean you learned something and the lack of statistical significance does not mean you learned nothing. ([Location 1281](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=1281))
- For now, it is probably better if you drop the phrase “statistically significant” from your vocabulary. What you want to determine is whether you have less uncertainty after considering some source of data and whether that reduction in uncertainty warrants some change in actions. ([Location 1297](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=1297))
- Rule of Five: There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population. ([Location 1335](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=1335))
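
  A quick check of where the 93.75% comes from, plus a Monte Carlo confirmation against an arbitrary, made-up population:

  ```python
  import random
  import statistics

  # A sample of five fails to bracket the median only when all five values land
  # on the same side of it: 2 * (1/2)**5 = 1/16, so the success rate is 15/16.
  print(1 - 2 * 0.5**5)  # 0.9375

  # Monte Carlo check against an arbitrary skewed population.
  population = [random.lognormvariate(0, 1) for _ in range(100_000)]
  median = statistics.median(population)
  trials = 100_000
  hits = sum(
      min(s) <= median <= max(s)
      for s in (random.sample(population, 5) for _ in range(trials))
  )
  print(hits / trials)  # close to 0.9375
  ```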
- Laplace's Rule of Succession: Given that some event occurred m times out of n observations, the probability it will occur in the next observation is (1 + m)/(2 + n). ([Location 1380](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=1380))
- Remember, for the LRS just think of how many observations are in a “reference class” (your years of experience, other companies in a given year, etc.) and out of those observations how many “hits” (threat events of a given type) were observed. ([Location 1568](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=1568))
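
  A minimal sketch of the LRS applied to a hypothetical reference class; the counts below are made up for illustration:

  ```python
  def laplace_rule_of_succession(hits: int, observations: int) -> float:
      """P(event occurs in the next observation) = (1 + m) / (2 + n)."""
      return (1 + hits) / (2 + observations)

  # Hypothetical reference class: 40 company-years (say, 10 peer firms over
  # 4 years) in which 2 major breaches of a given type were observed.
  print(laplace_rule_of_succession(hits=2, observations=40))  # ~0.071 per year

  # Even with zero observed hits the estimate is not zero:
  print(laplace_rule_of_succession(hits=0, observations=40))  # ~0.024 per year
  ```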
- In following chapters, we will describe in greater detail the problems that are introduced by the risk matrix and measurable improvements observed when using more quantitative methods, even if the quantitative assessments are subjective. ([Location 1649](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=1649))
- In addition to correcting for overconfidence, the expert can also be improved by using methods that account for two other sources of error in judgment: the high degree of expert inconsistency and a tendency to make common inference errors when it comes to thinking probabilistically. These improvements will also be addressed in upcoming chapters. (Of course, these sources of error are not dealt with in the typical risk matrix at all.) ([Location 1729](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=1729))
- What the CISO needs is a “return on control” calculation. That is the monetized value of the reduction in expected losses divided by the cost of the control. ([Location 1743](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=1743))
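
  A minimal sketch of that calculation; all dollar figures below are hypothetical placeholders:

  ```python
  # Return on control = (monetized reduction in expected losses) / (cost of the control)
  expected_loss_without_control = 2_400_000  # annualized expected loss, no control ($)
  expected_loss_with_control = 1_500_000     # annualized expected loss with the control ($)
  control_cost = 300_000                     # annual cost of the control ($)

  return_on_control = (
      expected_loss_without_control - expected_loss_with_control
  ) / control_cost
  print(f"Return on control: {return_on_control:.1f}x")  # 3.0x
  ```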
## New highlights added September 3, 2023 at 8:21 PM
- We propose that the single most important measurement in cybersecurity risk assessment, or any other risk assessment, is to measure how well the risk assessment methods themselves work. ([Location 2205](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=2205))
- We assert that if firms are using cybersecurity risk‐analysis methods that cannot show a measurable improvement or, even worse, if they make risk assessment worse, then that is the single biggest risk in cybersecurity, and improving risk assessment will be the single most important risk management priority. ([Location 2216](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=2216))
- This research generated one of the most consistently replicated and consequential findings of psychology: that even relatively naive statistical models seem to outperform human experts in a surprising variety of estimation and forecasting problems. ([Location 2341](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=2341))
- It is impossible to find any domain in which humans clearly outperformed crude extrapolation algorithms, less still sophisticated statistical ones. Why Does This Happen? Robyn Dawes, Meehl's colleague whom we mentioned earlier, makes the case that poor performance by humans in forecasting and estimation tasks is partly due to inaccurate interpretations of probabilistic feedback. ([Location 2394](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=2394))
- The problem is that experts often seem to conflate the knowledge of a vast set of details in their field with their skill at forecasting uncertain future events. A cybersecurity expert can become well versed in technical details such as conducting penetration tests, using encryption tools, setting up firewalls, using various types of cybersecurity software, and much more—and still be unable to realistically assess their own skills at forecasting future events. ([Location 2475](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=2475))
- In fact, our goal is to elevate the expert. We want to treat the cybersecurity expert as part of the risk assessment system. Like a race car or athlete, they need to be monitored and fine‐tuned for maximum performance. The expert is really a type of measurement instrument that can be “calibrated” to improve its output. ([Location 2487](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=2487))
- Perhaps the most common method of combining expert judgments is sometimes referred to in the US military as the “BOGSAT” method—that is, the “Bunch of Guys Sitting Around Talking” method (excuse the gender specificity). The experts meet in a room and talk about how likely an event would be, or what its impact would be if it occurred, until they reach a consensus (or at least until remaining objections have quieted down). ([Location 2670](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=2670))
- So whichever risks made the “Top 10” was mostly a function of which individuals judged them. ([Location 3111](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=3111))
- Intelligence analysts should be self‐conscious about their reasoning process. They should think about how they make judgments and reach conclusions, not just about the judgments and conclusions themselves. —Richards J. Heuer Jr., Psychology of Intelligence Analysis ([Location 3118](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=3118))
- In short, matrices are ambiguity amplifiers. ([Location 3202](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=3202))
- Remember, the reason we are promoting the quantitative methods mentioned in this book is that we can point to specific research showing that they are superior ‐ that is, measurably superior ‐ to specific alternatives like expert intuition. ([Location 3311](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=3311))
- The researchers concluded that many subjects were just more intolerant of error from algorithms than error from humans. In other words, they had a double standard. ([Location 3330](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=3330))
- In every single presentation I give in analytics I have a slide regarding how to respond to people who say you can't model that. Of course, they are actually building a model in their head when they make a decision. I tell them, “We just want the model out of your head.” ([Location 3477](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=3477))
## New highlights added September 4, 2023 at 9:21 PM
- Reputation, after all, seems to be the loss category cybersecurity professionals resort to when they want to create the most FUD. It comes across as an unbearable loss. ([Location 3961](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=3961))
- Another look at more recent events and more recent academic research does show that there is a direct market response to major events—but only for some companies. ([Location 4009](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=4009))
## New highlights added September 6, 2023 at 7:31 AM
- A 90% CI is a range that has a 90% chance of containing the correct answer. ([Location 4195](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=4195))
- This research shows that almost everyone tends to be biased either toward “overconfidence” or “underconfidence” about their estimates, the vast majority being overconfident (see inset, “Two Extremes of Subjective Confidence”). ([Location 4230](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=4230))
- We won't go into detail here about that debate but if you want to read about it, see the original How to Measure Anything book, especially the third edition. ([Location 4251](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=4251))
- We sometimes call this the “absurdity test.” It reframes the question from “What do I think this value could be?” to “What values do I know to be ridiculous?” We look for answers that are obviously absurd and then eliminate them until we get to answers that are still unlikely but not entirely implausible. This is the edge of our knowledge about that quantity. After a few calibration tests and practice with methods such as listing pros and cons, using the equivalent bet and anti‐anchoring, estimators learn to fine‐tune their “probability senses.” Most people get nearly perfectly calibrated after just a half‐day of training. Most important, even though subjects may have been training on general trivia, the calibration skill transfers to any area of estimation. ([Location 4438](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=4438))
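
  One way calibration can be scored after such training is to check what share of a subject's stated 90% confidence intervals contained the true values; the answers below are invented purely for illustration:

  ```python
  # Each tuple is (stated lower bound, stated upper bound, true value).
  answers = [
      (200, 1_000, 630),
      (1_800, 1_900, 1_869),
      (5, 50, 12),
      (10_000, 80_000, 96_000),  # a miss
      (0.5, 3.0, 1.6),
  ]
  hits = sum(lo <= truth <= hi for lo, hi, truth in answers)
  print(f"{hits}/{len(answers)} of the 90% CIs contained the true value")
  # Over many questions, a well-calibrated estimator should land near 90%.
  ```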
## New highlights added September 6, 2023 at 8:31 PM
- Judgments are better and more consistent if they are based on just a few reality checks. For example, if 100 identified risks each have a 5% to 20% chance of occurrence per year, then we are expecting to see several such events per year. If none of those events have happened in the past five years, then we might want to rethink our estimates (Hubbard and Seiersen both observed this when working together on the same project). ([Location 4735](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=4735))
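
  A sketch of that reality check using the numbers from the highlight, assuming independence across risks and years:

  ```python
  n_risks = 100
  p_low, p_high = 0.05, 0.20  # stated per-risk annual probability range

  # Expected events per year if those estimates were right: 5 to 20.
  print(n_risks * p_low, n_risks * p_high)

  # Probability of seeing zero events in 5 years even at the low end of the range:
  p_none_in_5_years = (1 - p_low) ** (n_risks * 5)
  print(f"{p_none_in_5_years:.1e}")  # ~7e-12, so a quiet 5 years means the estimates need rethinking
  ```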
## New highlights added September 7, 2023 at 8:31 AM
- The previous chapter showed how the performance of subjective probabilities is objectively measurable—and they have been measured thoroughly in published scientific literature. These subjective “prior probabilities” (“priors” for short) are the starting point of all our analyses. This is the best way to both preserve the special knowledge and experience of the cybersecurity expert and produce results that are mathematically meaningful and useful in simulations. Stating our current uncertainty in a quantitative manner allows us to update our probabilities with new observations using some powerful mathematical methods. ([Location 4829](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=4829))
## New highlights added September 8, 2023 at 8:58 PM
- Objective independent data is sparse. However, suppose you know of a firm similar to yours that implemented MFA across all platforms a year ago, and they say they haven't had a material breach since then. Is this evidence that MFA is working? It's certainly not proof it is working because, given our estimate above, there was a 90% chance of not having a significant breach, anyway. But it turns out it is evidence in the sense that the information from one year of no observed breaches at one organization does reduce our uncertainty, even if just incrementally. ([Location 4946](https://readwise.io/to_kindle?action=open&asin=B0C1RJ9SR1&location=4946))
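
  A rough sketch of the size of that update; the 90% "no breach anyway" figure comes from the passage, while the prior and the 96% likelihood are assumptions invented for illustration:

  ```python
  p_effective = 0.50              # assumed prior: MFA materially reduces breach risk
  p_nobreach_if_effective = 0.96  # assumed: annual breach chance drops from 10% to 4%
  p_nobreach_if_not = 0.90        # from the text: 90% chance of no breach anyway

  p_nobreach = (p_nobreach_if_effective * p_effective
                + p_nobreach_if_not * (1 - p_effective))
  posterior = p_nobreach_if_effective * p_effective / p_nobreach
  print(f"P(MFA effective | a year with no breach) = {posterior:.3f}")  # ~0.516, a small update
  ```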