The Cybersecurity Canon - How to Measure Anything: Finding the Value of ‘Intangibles’ in Business

Jul 19, 2017
18 minutes

We modeled the Cybersecurity Canon after the Baseball and Rock & Roll Halls of Fame, except for cybersecurity books. We have more than 25 books on the initial candidate list, but we are soliciting help from the cybersecurity community to grow that number well beyond that. Please write a review and nominate your favorite.

The Cybersecurity Canon is a real thing for our community. We have designed it so that you can directly participate in the process. Please do so!

Book review by Canon Committee Member, Rick Howard: “How to Measure Anything: Finding the Value of ‘Intangibles’ in Business” (2011), by Douglas W. Hubbard.

Executive Summary

Douglas Hubbard's "How to Measure Anything: Finding the Value of 'Intangibles' in Business" is an excellent candidate for the Cybersecurity Canon Hall of Fame. He describes how it is possible to collect data to support risk decisions for even the hardest kinds of questions. He says that network defenders do not need 100 percent accuracy in their models to support these risk decisions; we can simply strive to reduce our uncertainty about ranges of possibilities. He writes that this particular view of probability is called Bayesian, and that it was out of favor within the statistical community until fairly recently, when it became obvious that it worked for a certain set of really hard problems. He describes a few simple math tricks that all network defenders can use to make predictions that support risk decisions for their organizations, and he even demonstrates how easy it is to run your own Monte Carlo simulations using nothing more than a spreadsheet. Because of all of that, "How to Measure Anything: Finding the Value of 'Intangibles' in Business" is indeed a Cybersecurity Canon Hall of Fame candidate, and you should have read it by now.

Introduction

The Cybersecurity Canon project is a “curated list of must-read books for all cybersecurity practitioners – be they from industry, government or academia — where the content is timeless, genuinely represents an aspect of the community that is true and precise, reflects the highest quality and, if not read, will leave a hole in the cybersecurity professional’s education that will make the practitioner incomplete.” [1]

This year, the Canon review committee inducted this book into the Canon Hall of Fame: “How to Measure Anything in Cybersecurity Risk," by Douglas W. Hubbard and Richard Seiersen. [2] [3]

According to Canon Committee member Steve Winterfeld, "How to Measure Anything in Cybersecurity Risk" is an extension of Hubbard's successful first book, "How to Measure Anything: Finding the Value of 'Intangibles' in Business." It lays out why statistical models beat expertise every time. It is a book anyone who is responsible for measuring risk, developing metrics, or determining return on investment should read, and it provides a strong foundation in quantitative analytics with practical application guidance. [4]

I personally believe that precision risk assessment is a key, and currently missing, element in the CISO's bag of tricks. As a community, network defenders are generally not good at transforming technical risk into business risk for the senior leadership team. For my entire career, I have gotten away with listing the 100+ security weaknesses within my purview and giving them a red, yellow, or green label to mean bad, kind-of-bad, or not bad. If any of my bosses had bothered to ask me why I gave one weakness a red label versus a green label, I would have said something like: "25 years of experience…blah, blah, blah…trust me…blah, blah, blah…can I have the money, please?"

I believe the network defender’s inability to translate technical risk into business risk with precision is the reason that the CISO is not considered at the same level as other senior C-suite executives, such as the CEO, CFO, CTO, and CMO. Most of those leaders have no idea what the CISO is talking about. For years, network defenders have blamed these senior leaders for not being smart enough to understand the significance of the security weaknesses we bring to them. But I assert that it is the other way around. The network defenders have not been smart enough to convey the technical risks to business leaders in a way they might understand.

This inability is the reason the Canon Committee inducted "How to Measure Anything in Cybersecurity Risk" and another precision risk book, "Measuring and Managing Information Risk: A FAIR Approach," into the Canon Hall of Fame. [3][4][5][6][7] These books are the places to start if you want to educate yourself on this new way of thinking about risk to the business.

For me though, this is not an easy subject. I slogged my way through both of these books because basic statistical models completely baffle me. I took stat courses in college and grad school but sneaked through them by the skin of my teeth. All I remember about stats was that it was hard. When I read these two books, I think I understood only about three-quarters of what I was reading, not because they were written badly but because I struggled with the material. I decided to get back to basics and read Hubbard's original book that Winterfeld referenced in his review, "How to Measure Anything: Finding the Value of 'Intangibles' in Business," to see if it was also Canon-worthy.

The Network Defender’s Misunderstanding of Metrics, Risk Reduction and Probabilities

Throughout the book, Hubbard emphasizes that seemingly dense and complicated risk questions are not as hard to measure as you might think. Drawing on scholars such as Edward Lee Thorndike and Paul Meehl, he lays out what he calls the Clarification Chain:

If it matters at all, it is detectable/observable.
If it is detectable, it can be detected as an amount (or range of possible amounts).
If it can be detected as a range of possible amounts, it can be measured. [8]

As a network defender, whenever I think about capturing metrics that will inform how well my security program is doing, my head begins to hurt. Oh, there are many things that we could collect – like outside IP addresses hitting my infrastructure, security control logs, employee network behavior, time to detect malicious behavior, time to eradicate malicious behavior, how many people must react to new detections, etc. – but it is difficult to see how that collection of potential badness demonstrates that I am reducing material risk to my business with precision. Most network defenders in the past, including me, have simply thrown our hands up in surrender. We seem to say to ourselves that if we can’t know something with 100 percent accuracy, or if there are countless intangible variables with many veracity problems, then it is impossible to make any kind of accurate prediction about the success or failure of our programs.

Hubbard makes the point that we are not looking for 100 percent accuracy. What we are really looking for is a reduction in uncertainty. He says that the concept of measurement is not the elimination of uncertainty but the abatement of it. If we can collect a metric that helps us reduce that uncertainty, even if it is just by a little bit, then we have improved our situation from not knowing anything to knowing something. He says that you can learn something from measuring with very small random samples of a very large population. You can measure the size of a mostly unseen population. You can measure even when you have many, sometimes unknown, variables. You can measure the risk of rare events. Finally, Hubbard says that you can measure the value of subjective preferences, like art or free time, or of life in general.

According to Hubbard, “We quantify this initial uncertainty and the change in uncertainty from observations by using probabilities.” [8] These probabilities refer to our state of uncertainty about a specific question. The math trick we all need to understand is expressing an estimate not as a single number but as a range of possibilities within which we are 90 percent sure the true value lies.

For example, we may be trying to reduce the number of humans who have to respond to a cyberattack. In this fictitious example, last year the Incident Response team handled 100 incidents with three people each – a total of 300 people. We think that installing a next-generation firewall will reduce that number. We don't know by exactly how much, but by some amount. We start by bracketing the question.

Do we think that installing the firewall will eliminate the need for all humans to respond? Absolutely not. What about reducing the number to three incidents with three people for a total of nine? Maybe. What about reducing the number to 10 incidents with three people for a total of 30? That might be possible. That is our lower limit.

Let’s go to the high side. Do you think that installing the firewall will have zero impact on reducing the number? No. What about 90 attacks with three people for a total of 270? Maybe. What about 85 attacks with three people for a total of 255? That seems reasonable. That is our upper limit.

By doing this bracketing, we can say that we are 90 percent sure that installing the next-generation firewall will reduce the number of humans who have to respond to cyber incidents from 300 to somewhere between 30 and 255. Astute network defenders will point out that this range is pretty wide. How is that helpful? Hubbard says that, first, you now know this, where before you knew nothing. Second, this is only the start. You can now collect other metrics that might help you narrow the range.
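
Hubbard later shows how to turn a calibrated interval like this into a probability distribution that a simulation can sample from. Here is a minimal Python sketch of that conversion, assuming (as Hubbard's spreadsheet examples generally do) that the quantity is roughly normally distributed; the constant 3.29 is simply the number of standard deviations a 90 percent interval spans.

    # Convert a calibrated 90% confidence interval into normal-distribution
    # parameters. Assumption: the quantity is roughly normally distributed,
    # in which case a 90% interval spans about 3.29 standard deviations.
    lower, upper = 30, 255            # 90% CI from the bracketing exercise above

    mean = (lower + upper) / 2        # midpoint of the interval
    std = (upper - lower) / 3.29      # implied standard deviation

    print(f"mean = {mean:.1f}, standard deviation = {std:.1f}")
    # Prints roughly: mean = 142.5, standard deviation = 68.4, a distribution
    # we could sample from in the Monte Carlo discussion later in this review.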

The History of Scientific Measurement Evolution

This particular view of probabilities, the idea that there is a range of outcomes that you can be 90 percent sure about, is the Bayesian interpretation of probabilities. Interestingly, this view of statistics has been out of favor for most of its history, going back to when Thomas Bayes penned the original formula in the 1740s. The naysayers were the Frequentists, whose theory says that the probability of an event can only be determined by how many times it has happened in the past. To them, modern science requires both objectivity and precise answers. According to Hubbard:

“The term ‘statistics’ was introduced by the philosopher, economist, and legal expert Gottfried Achenwall in 1749. He derived the word from the Latin statisticum, meaning ‘pertaining to the state.’ Statistics was literally the quantitative study of the state.” [8]

In the Frequentist view, the Bayesian philosophy requires a measure of “belief and approximations. It is subjectivity run amok, ignorance coined into science.” [7] But the real world has problems where the data is scant, and leaders worry about potential events that have never happened before. Bayesians were able to provide real answers to these kinds of problems, like breaking the Enigma encryption machine in World War II and finding a lost, sunken nuclear submarine, the search that was the basis for the movie “The Hunt for Red October.” But it wasn't until the early 1990s that the theory became commonly accepted. [7]

Hubbard walks the reader through the historical research behind the current state of scientific measurement. He explains how Paul Meehl, starting in the 1950s, demonstrated time and again that statistical models outperformed human experts. He describes the birth of information theory with Claude Shannon in the late 1940s and credits Stanley Smith Stevens, around the same time, with crystallizing the different scales of measurement: nominal, ordinal, interval, and ratio. He reports how Amos Tversky and Daniel Kahneman, through their research in the 1960s and 1970s, demonstrated that we can improve our measurements of subjective probabilities.

In the end, Hubbard defines “measurement” as this:

  • Measurement: A quantitatively expressed reduction of uncertainty based on one or more observations. [8]

Simple Math Tricks

Hubbard explains two math tricks that at first seem too good to be true but that, in the hands of a Bayesian practitioner, greatly simplify measurement for difficult problems (a quick simulation of both appears after the list):

  • The Power of Small Samples: The Rule of Five: There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population. [8]
  • The Single Sample Majority Rule (i.e., The Urn of Mystery Rule): Given maximum uncertainty about a population proportion – such that you believe the proportion could be anything between 0% and 100% with all values being equally likely – there is a 75% chance that a single randomly selected sample is from the majority of the population. [8]
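
The math behind both rules is short enough to check by brute force. Below is a minimal simulation sketch of my own, not from the book, that tests both claims; the lognormal population and the random seed are arbitrary choices made only for illustration.

    # Sanity-check both rules by simulation. The population distribution and
    # random seed are arbitrary choices, made only for illustration.
    import numpy as np

    rng = np.random.default_rng(42)
    trials = 100_000

    # Rule of Five: how often does the population median fall between the
    # smallest and largest values in a random sample of five? The "population"
    # here is a lognormal distribution whose true median is exp(3).
    true_median = np.exp(3.0)
    samples = rng.lognormal(mean=3.0, sigma=1.0, size=(trials, 5))
    contains = (samples.min(axis=1) <= true_median) & (true_median <= samples.max(axis=1))
    print(f"Rule of Five: {contains.mean():.4f} (theory: 0.9375)")

    # Urn of Mystery: the unknown proportion is uniform between 0% and 100%,
    # so how often does a single random draw come from the majority?
    p = rng.uniform(0.0, 1.0, size=trials)        # proportion of "green" in each urn
    draw_is_green = rng.uniform(size=trials) < p  # one marble drawn per urn
    majority_is_green = p > 0.5
    print(f"Single Sample Majority Rule: {(draw_is_green == majority_is_green).mean():.4f} (theory: 0.75)")

Run repeatedly, the first number hovers around 0.9375 and the second around 0.75, which is exactly what Hubbard's rules claim.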

I admit that the math behind these rules escapes me. But I don’t have to understand the math to use the tools. It reminds me of a moving scene from one of my favorite movies: “Lincoln.” President Lincoln, played brilliantly by Daniel Day-Lewis, discusses his reasoning for keeping the southern agents – who want to discuss peace before the 13th Amendment is passed – away from Washington.

"Euclid's first common notion is this: Things which are equal to the same thing are equal to each other. That's a rule of mathematical reasoning. It's true because it works. Has done and always will do.” [9]

The bottom line is that "statistically significant" does not mean "a large number of samples." Hubbard says that statistical significance has a precise mathematical meaning that most lay people do not understand and many scientists get wrong most of the time. For the purposes of risk reduction, stick to the idea of a 90 percent confidence interval around potential outcomes. The Power of Small Samples and the Single Sample Majority Rule are rules of mathematical reasoning that all network defenders should keep handy in their utility belts as they measure risk in their organizations.

Simple Measurement Best Practices and Definitions

As I said before, most network defenders think that measuring risk in terms of cybersecurity is too hard. Hubbard explains four rules of thumb that every practitioner should consider before giving up:

  • It’s been measured before.
  • You have far more data than you think.
  • You need far less data than you think.
  • Useful, new observations are more accessible than you think. [8]

He then defines “uncertainty” and “risk” through the lens of possibilities and probabilities:

  • Uncertainty: The lack of complete certainty, that is, the existence of more than one possibility.
  • Measurement of Uncertainty: A set of probabilities assigned to a set of possibilities.
  • Risk: A state of uncertainty where some of the possibilities involve a loss, catastrophe, or other undesirable outcome.
  • Measurement of Risk: A set of possibilities, each with quantified probabilities and quantified losses. [8]

In the network defender world, we tend to define risk in terms of threats and vulnerabilities and consequences. [10] Hubbard’s relatively new take gives us a much more precise way to think about these terms.
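
To make Hubbard's definition concrete, here is a minimal sketch of a "measurement of risk" as a small data structure. The events and numbers are entirely hypothetical; the point is only that each possibility carries a quantified probability and a quantified loss, which together yield an expected annual loss.

    # A tiny, hypothetical "measurement of risk": a set of possibilities,
    # each with a quantified probability and a quantified loss.
    from dataclasses import dataclass

    @dataclass
    class Possibility:
        name: str
        annual_probability: float   # chance the event occurs this year
        loss: float                 # estimated loss in dollars if it does

    risk_register = [
        Possibility("ransomware outage",         0.05, 2_000_000),
        Possibility("customer data breach",      0.02, 5_000_000),
        Possibility("business email compromise", 0.10,   250_000),
    ]

    # Expected annual loss is the probability-weighted sum of the losses.
    expected_loss = sum(p.annual_probability * p.loss for p in risk_register)
    print(f"Expected annual loss: ${expected_loss:,.0f}")   # $225,000 here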

Monte Carlo Simulations

According to Hubbard, the invention of the computer made it possible for scientists to run thousands of experimental trials based on probabilities for inputs. These trials are called Monte Carlo simulations. In the 1930s, Enrico Fermi used the method to calculate neutron diffusion by hand, with human mathematicians calculating the probabilities. In the 1940s, Stanislaw Ulam, John von Neumann, and Nicholas Metropolis realized that the computer could automate the Monte Carlo method and help them design the atomic and hydrogen bombs. Today, everybody who has access to a spreadsheet can run their own Monte Carlo simulations.

For example, take my earlier scenario of trying to reduce the number of humans who have to respond to a cyberattack. We said that, during the previous year, 300 people responded to cyber incidents, and that we were 90 percent certain the installation of a next-generation firewall would reduce the number of humans who have to respond to incidents to between 30 and 255.

We can refine that number even more by simulating hundreds or even thousands of scenarios inside a spreadsheet. I did this myself by setting up 100 scenarios in which I randomly picked a number between 0 and 300. I calculated the mean to be 131 and the standard deviation to be 64. Remember that the standard deviation is nothing more than a measure of spread around the mean. [11][12][13] The 68–95–99.7 rule says that roughly 68 percent of the recorded values will fall within one standard deviation of the mean, 95 percent within two standard deviations, and 99.7 percent within three. [8] With our original estimate, we said there was a 90 percent chance that the number would be between 30 and 255. After running the Monte Carlo simulation, we can say there is roughly a 68 percent chance that the number is between 67 and 195 (the mean plus or minus one standard deviation).
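
For readers who prefer code to spreadsheets, here is a minimal Python sketch of the same exercise. It assumes the same crude model used above, 100 scenarios drawn uniformly between 0 and 300, so the exact mean and standard deviation will vary slightly from run to run (and from any particular spreadsheet).

    # Replicate the spreadsheet exercise: 100 scenarios drawn uniformly
    # between 0 and 300 responders, summarized with the 68-95-99.7 rule.
    # The seed is arbitrary; results vary slightly from run to run.
    import numpy as np

    rng = np.random.default_rng(7)
    scenarios = rng.uniform(0, 300, size=100)   # 100 simulated outcomes

    mean = scenarios.mean()
    std = scenarios.std(ddof=1)                 # sample standard deviation

    # The 68-95-99.7 rule strictly applies to normal distributions, but this
    # mirrors how the spreadsheet exercise above summarizes its results.
    print(f"mean = {mean:.0f}, standard deviation = {std:.0f}")
    print(f"~68% of outcomes between {mean - std:.0f} and {mean + std:.0f}")
    print(f"~95% of outcomes between {mean - 2 * std:.0f} and {mean + 2 * std:.0f}")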

How about that? Even a statistical luddite like me can run his own Monte Carlo simulation.

Conclusion

After reading Hubbard's second book in the series, "How to Measure Anything in Cybersecurity Risk," I decided to go back to the original to see if I could understand with a bit more clarity exactly how the statistical models worked and to determine if the original was Canon-worthy too. I learned that there is probably a way to collect data to support risk decisions for even the hardest kinds of questions. I learned that we network defenders do not need 100 percent accuracy in our models to support these risk decisions; we can simply strive to reduce our uncertainty about ranges of possibilities. I learned that this particular view of probability is called Bayesian, and that it was out of favor within the statistical community until just recently, when it became obvious that it worked for a certain set of really hard problems. I learned that there are a few simple math tricks that we can all use to make predictions about these really hard problems that will help us make risk decisions for our organizations. And I even learned how to build my own Monte Carlo simulations to support those efforts. Because of all of that, "How to Measure Anything: Finding the Value of 'Intangibles' in Business" is indeed Canon-worthy, and you should have read it by now.

Sources

[1] "Cybersecurity Canon: Essential Reading for the Security Professional," by Palo Alto Networks, last visited 5 July 2017,

https://www.paloaltonetworks.com/threat-research/cybercanon.html

[2] "Cybersecurity Canon: 2017 Award Winners," by Palo Alto Networks, last visited 5 July 2017,

https://cybercanon.paloaltonetworks.com/award-winners/

[3] " 'How To Measure Anything in Cybersecurity Risk' – Cybersecurity Canon 2017," video interview by Palo Alto Networks, interviewer: Cybersecurity Canon Committee member, Bob Clark, interviewees Douglas W. Hubbard and Richard Seiersen, 7 June 2017, last visited 5 July 2017,

https://www.youtube.com/watch?v=2o_mAavdabg&t=3s

[4] "The Cybersecurity Canon: How to Measure Anything in Cybersecurity Risk," book review by Cybersecurity Canon Committee member, Steve Winterfeld, 2 December 2016, last visited 5 July 2017,

https://cybercanon.paloaltonetworks.com/

[5] "How to Measure Anything in Cybersecurity Risk," by Douglas W. Hubbard and Richard Seiersen, published by Wiley, April 25th 2016, last visited 5 July 2017,

https://www.goodreads.com/book/show/26518108-how-to-measure-anything-in-cybersecurity-risk?ac=1&from_search=true

[6] "The Cybersecurity Canon: Measuring and Managing Information Risk: A FAIR Approach," book review by Canon Committee member, Ben Rothke, 10 September 2015, last visited 5 July 2017,

https://www.paloaltonetworks.com/blog/2015/09/the-cybersecurity-canon-measuring-and-managing-information-risk-a-fair-approach/

[7] "Sharon Bertsch McGrayne: 'The Theory That Would Not Die' | Talks at Google," by Sharon Bertsch McGrayne, Google, 23 August 2011, last visited 7 July 2017,

https://www.youtube.com/watch?v=8oD6eBkjF9o

[8] "How to Measure Anything: Finding the Value of 'Intangibles' in Business," by Douglas W. Hubbard, published by John Wiley & Sons, 2007, last visited 10 July 2017,

https://www.goodreads.com/book/show/444653.How_to_Measure_Anything?ac=1&from_search=true

[9] "Lincoln talks about Euclid," by Alexandre Borovik, The De Morgan Forum, 20 December 2012, last visited 10 July 2017,

http://education.lms.ac.uk/2012/12/lincoln-talks-about-euclid/

[10] "BitSight Security Ratings Blog," by Melissa Stevens, 10 January 2017, last visited 10 July 2017,

https://www.bitsighttech.com/blog/cybersecurity-risk-thorough-definition

[11] "Standard Deviation – Explained and Visualized," by Jeremy Jones, YouTube, 5 April 2015, last visited 9 July 2017,

https://www.youtube.com/watch?v=MRqtXL2WX2M

[12] "Difference Between Standard Deviation and Variance," by Maths Partner, YouTube, 13 November 2016, last visited 9 July 2017,

https://www.youtube.com/watch?v=CVF9lr9mpes

[13] "Standard Deviation and Variance (Explaining Formulas)," by statisticsfun, YouTube, 7 February 2015, last visited 7 July 2017,

https://www.youtube.com/watch?v=VTE25D77UI8

References

“The Mathematical Theory of Communication,” by Claude Shannon and Warren Weaver, published by University of Illinois Press (first published 1949),

https://www.goodreads.com/book/show/880735.The_Mathematical_Theory_of_Communication?ac=1&from_search=true

"How Two Trailblazing Psychologists Turned the World of Decision Science Upside Down," by Michael Lewis, Vanity Fair, December 2016, last visited 7 July 2017,

http://www.vanityfair.com/news/2016/11/decision-science-daniel-kahneman-amos-tversky

"On the Theory of Scales of Measurement," S. S. Stevens, Science, New Series, Vol. 103, No. 2684. (Jun. 7, 1946), pp. 677-680.

https://marces.org/EDMS623/Stevens%20SS%20(1946)%20On%20the%20Theory%20of%20Scales%20of%20Measurement.pdf

"Paul Meehl: A Legend of Clinical Psychological Science," by Eric Jaffe, July 2013, last visited 7 July 2017,

https://www.psychologicalscience.org/observer/paul-meehl-a-legend-of-clinical-psychological-science

"Paul E. Meehl: Smartest Psychologist of the 20th Century?" by John A. Johnson, 8 February 2014, last visited 8 July 2017,

https://www.psychologytoday.com/blog/cui-bono/201402/paul-e-meehl-smartest-psychologist-the-20th-century

"The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy,"

by Sharon Bertsch McGrayne, Yale University Press, 14 May 2011, last visited 7 July 2017,

https://www.goodreads.com/book/show/10672848-the-theory-that-would-not-die?ac=1&from_search=true

 

