This is a companion post for my talk titled “Baby Steps: Easing your company into a quantitative cyber risk program.” This blog post contains links and resources for many of the items and concepts mentioned in the talk.
Abstract: Risk managers tasked with integrating quantitative methods into their risk programs - or even those just curious about it - may be wondering: Where do I start? Where do I get the mountain of data I need? What if my key stakeholders want to see risk communicated in colors?
Attendees will learn about common myths and misconceptions, learn how to get a program started, and receive tips on integrating analytical rigor into risk culture. When it comes to quant risk, ripping the Band-Aid off is a recipe for failure. Focusing on small wins in the beginning, building support from within, and keeping a positive bedside manner are the keys to long-term success.
Many security awareness metrics don’t tell us whether the program is working. They report something related, like how many people attend training, pass/fail rates on post-training quizzes, or sentiment surveys. I presume most CISOs want their security awareness training to reduce risk. How would you know if it does?
Therein lies the CISO’s white whale. CISOs don’t need (or want) metrics that prove the program exists or count the number of employees who completed training. CISOs need metrics that show employee behavior is noticeably influenced and measurably changed, proportional to the level of investment.
Nearly all of us have been in situations that required forming a hypothesis or drawing a conclusion to make a decision with limited information. This kind of decision-making crops up in all aspects of life, from personal relationships to business. However, there is one cognitive trap that we easily fall into from time to time: we tend to overcomplicate reasoning when it’s not necessary.
Tune in to just about any cable talk show or Sunday morning news program and you are likely to hear the terms “cyber war,” “cyber terrorism,” and “cyber espionage” bandied about in tones of grave solemnity, depicting some obscure but imminent danger that threatens our nation, our corporate enterprises, or even our own personal liberties. Stroll through the halls of a vendor expo at a security conference, and you will hear the same terms in the same tones, only here they are used to frighten you into believing your information is unsafe without the numerous products or services available for purchase.
A new year always means one thing in any field with an ample number of armchair pundits: another round of annual predictions.
The big problem with annual prediction lists is that they are written so generically and broadly that they are hardly ever wrong. They don’t offer any way to measure or define a successful prediction. To add to that, most list writers never bother to go back and grade themselves on the quality of their predictions.
Risk management is both art and science. There is no better example of risk as an art form than risk scenario building and statement writing. Scenario building is the process of identifying the critical factors that contribute to an adverse event and crafting a narrative that succinctly describes the circumstances and consequences if it were to happen. The narrative is then further distilled into a single sentence, called a risk statement, that communicates the essential elements from the scenario.
The whitepaper was peer reviewed and written in an academic tone. After reviewing my notes one last time, I decided to write up a post capturing some of my thoughts on the topic and the process: unfiltered, of course, and a little saltier than a whitepaper.
I recently wrapped up a true labor of love that occupied a bit of my free time in the late winter and early spring of 2021. The project is a peer-reviewed whitepaper I authored for ISACA, “Optimizing Risk Response,” released in July 2021. Following the whitepaper, I conducted a companion webinar titled “Rethinking Risk Response” on July 29, 2021. Both are available to ISACA members at the links above. The whitepaper should be available in perpetuity, and the webinar will be archived on July 29, 2022.
Effective risk governance means organizations are making data-driven decisions with the best information available at the moment. The elephant, of course, refers to the means and methods used to analyze and visualize risk. The de facto language of business risk is the risk matrix, which enables conversations about threats, prioritization and investments but lacks the depth and rigor needed to serve as a tool for strategic decision-making. However, there is a better option—one that unlocks deeper, more comprehensive conversations not only about risk, but also about how risk impedes or enables organizational strategy and objectives.
Some variability between experts is always expected and even desired. One expert (or a minority of experts) with a wildly divergent opinion is a fairly common occurrence in any risk analysis project that involves human judgment. Anecdotally, I'd say about one out of every five risk analyses I perform has this issue. There isn't one single way to deal with it; the risk analyst needs to get to the root cause of the divergence and make a judgment call.
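To make that concrete, here is a minimal sketch (my own illustration, not a prescription from the post) of one way to surface a divergent estimate before digging into root cause, using a simple interquartile-range fence. The expert names and numbers are hypothetical.

```python
import numpy as np

# Hypothetical annual loss-event frequency estimates from five experts.
estimates = {"Expert A": 0.20, "Expert B": 0.25, "Expert C": 0.30,
             "Expert D": 0.22, "Expert E": 2.00}

values = np.array(list(estimates.values()))
q1, q3 = np.percentile(values, [25, 75])
fence = 1.5 * (q3 - q1)  # the classic Tukey outlier fence

# Flag anyone outside the fence for a follow-up conversation, not exclusion.
for name, est in estimates.items():
    if est < q1 - fence or est > q3 + fence:
        print(f"{name} ({est}) diverges from the group - worth a follow-up")
```

Flagging is only the start; as the post notes, what you do with the divergent opinion is a judgment call.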
In December 2019, I made 15 predictions for 2020. I put a twist on my predictions: I wrote them to be measurable and completely gradable. They pass The Clairvoyant Test (or The Biff Test, if you please). More importantly, I put my money where my mouth is. So, how did I do?
Without a decision, a risk assessment is, at best, busywork. At worst, it produces an unfocused, time-intensive effort that does not help leaders achieve their objectives. Information risk professionals operate in a fast, ever-changing and often chaotic environment, and there is not enough time to assess every risk, every vulnerability and every asset. Identifying the underlying decision driving the risk assessment ensures that the activity is meaningful, ties to business objectives and is not just busywork.
Think of risk behavior as a baseball bat. A batter should not hit the ball on the knob or the end cap. It is wasted energy. One also does not want to engage in extreme risk seeking or risk avoidance behaviors. Somewhere in the middle there is an equilibrium. It is the job of the risk manager to help leadership find the balance between risk that enables business and risk that lies beyond an organization’s tolerance.
This post is the second of a two-part series on how to frame, scope, and model unusual or emerging risks in your company's risk register. Part 1 covered how to identify, frame, and conceptualize these kinds of risks. Part 2, this post, introduces several tips and steps I use to brainstorm emerging risks and fold the results into the risk register.
Every few months or so, we hear about a widespread vulnerability or cyber attack that makes its way to mainstream news. Some get snappy nicknames and their very own logos, like Meltdown, Spectre, and Heartbleed. Others, like the Sony Pictures Entertainment, OPM, and SolarWinds attacks, cause a flurry of activity across corporate America, with executives asking their CISOs and risk managers, “Are we vulnerable?”
I’ve noticed something unusual lately. There seems to be an increase in the number of events people are declaring Black Swans, and an ensuing philosophical tug-of-war with detractors saying they’re wrong. At first, I thought people were just going for clickbait headlines, but I now feel something else is going on. We are experiencing a sort of collective risk blindness: we’re unable or unwilling to see emerging risk in front of us.
Bitcoin and the 17th-century Dutch tulip market are starting to have more in common than one would think. The story begins in 17th-century Holland, when the demand for tulips, fueled by a jump in agritech, drove the price of bulbs up. Speculators piled on, starting a frenzy of borrowing, buying, markup selling, buying more, and placing bets on a tulip futures market. Some people got rich and didn’t think the good times would end… until they did.
Something extraordinary happened recently in the Information Security research report space. Why I think it’s so extraordinary might have passed you by, unless you geek out on statistical methods in opinion polling as I do. The report is Cisco’s 2021 Security Outcomes report, produced in collaboration with the Cyentia Institute; it is the only report in recent memory that uses sound statistical methods in conducting survey-based opinion research. What does that mean, and why is it so important? Glad you asked!
There are many myths about cyber risk quantification that have become so common, they border on urban legend. The idea that we need vast and near-perfect historical data is a compelling and persistent argument, enough to discourage all but the most determined of risk analysts. Here’s the flaw in that argument: actuarial science is a vast and varied discipline, with insurers selling policies on everything from automobile accidents to alien abduction - many of which have no actuarial tables or even historical data. Waiting for “perfect” historical data is a fruitless exercise and will prevent the analyst from using the data at hand, no matter how sparse or flawed, to drive better decisions.
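To illustrate the point, here is a minimal sketch, assuming nothing fancier than a beta-binomial model, of how even sparse, imperfect data can sharpen an estimate. The prior and the incident counts below are hypothetical placeholders, not data from any real program.

```python
from scipy import stats

# Prior belief: annual probability of a reportable incident is "low but
# plausible" - encoded here as Beta(1, 9), roughly 10% on average.
prior_alpha, prior_beta = 1, 9

# The sparse data at hand: five years of history, one observed incident.
years_observed, incidents = 5, 1

# The posterior combines the prior with the sparse evidence.
posterior = stats.beta(prior_alpha + incidents,
                       prior_beta + years_observed - incidents)

# Reporting a credible interval is more honest than a single point estimate.
low, high = posterior.ppf([0.05, 0.95])
print(f"Annual incident probability: {low:.0%} to {high:.0%} (90% CI)")
```

Even one data point moves the estimate; waiting for a mountain of data just means deciding with no model at all.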
Some people struggle with The Clairvoyant Test. They have a hard time grasping the rules: the clairvoyant can observe anything but cannot make judgments, read minds, or extrapolate. It’s no wonder they have a hard time; our cultural view of clairvoyants is shaped by the fake ones we see on TV. For example, Miss Cleo, John Edward, and Tyler “The Hollywood Medium” Henry often do make personal judgments and express opinions about future events. Almost every clairvoyant we see in movies and TV can read minds. I think people get stuck on this, and will often incorrectly declare that a metric or measurement passes The Clairvoyant Test, due to the cultural perception that clairvoyants know everything.
There’s an apocryphal business quote from Drucker, Deming, or maybe even Lord Kelvin that goes something like this: “You can’t manage what you don’t measure.” I’ll add that you can’t measure what you don’t clearly define.
Clearly defining the object of measurement is where many security metrics fail. I’ve found one small trick borrowed from the field of Decision Science that helps in the creation and validation of clear, unambiguous, and succinct metrics. It’s called The Clairvoyant Test, and it’s a 30-second thought exercise that makes the whole process quick and easy.
I was appointed in November 2019 to fill a vacancy and had a great time working with the Board and helping advance SIRA’s mission. There’s so much more to do, so I ran for a full 2-year spot.
A well-studied phenomenon is that perceptions of probability vary greatly between people. You and I perceive the statement “high risk of an earthquake” quite differently. There are so many factors that influence this disconnect: one’s risk tolerance, events that happened earlier that day, cultural and language considerations, background, education, and much more. Words sometimes mean a lot, and other times, convey nothing at all. This is the struggle of any risk analyst when they communicate probabilities, forecasts, or analysis results.
Earning The Open Group’s OpenFAIR certification is a big career booster for information risk analysts. Not only does it look good on your CV, but it also demonstrates your mastery of FAIR to current and potential employers. It also makes for better analysts, because it deepens one’s understanding of risk concepts that may not often be used. I passed the exam myself a while back, and I’ve also helped many people prepare and study for it. This is my recipe for studying for and passing the OpenFAIR exam.
There’s a special kind of history rewriting going on right now among some financial analysts, risk managers, C-level leaders, politicians, and anyone else responsible for forecasting and preparing for major business, societal, and economic disruptions. We’re about three months into the COVID-19 outbreak, and people are starting to declare this a “Black Swan” event. Not only is “Black Swan” a generally bad and misused metaphor, but the current pandemic also doesn’t fit the definition. I think it’s a case of CYA.
When the first edition of The Failure of Risk Management: Why It's Broken and How to Fix It by Douglas Hubbard came out in 2009, it made a lot of people uncomfortable. Hubbard laid out well-researched arguments that some of businesses’ most popular methods of measuring risk have failed and, in many cases, are worse than doing nothing. These methods include the risk matrix, heat map, ordinal scales, and other methods that fit into the qualitative risk category. Readers of the 1st edition will know that the fix is, of course, methods based on mathematical models, simulations, data, and evidence collection.
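For a flavor of what that fix looks like in practice, here is a minimal Monte Carlo sketch of an annualized loss model, the kind of simulation-based approach the book advocates. The frequency and severity parameters are hypothetical placeholders, not figures from the book.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
trials = 100_000  # each trial simulates one possible year

# Frequency: how many loss events occur in a simulated year (Poisson).
events = rng.poisson(lam=0.5, size=trials)

# Severity: each event's loss is lognormal; a year's loss is their sum.
annual_loss = np.array([
    rng.lognormal(mean=11, sigma=1.2, size=n).sum() for n in events
])

# Summarize as a range of outcomes rather than a single color on a heat map.
print(f"Median annual loss:   ${np.median(annual_loss):,.0f}")
print(f"95th percentile loss: ${np.percentile(annual_loss, 95):,.0f}")
```

Unlike a heat map cell, the output is a distribution you can interrogate: compare against risk tolerance, price a control, or rerun with updated evidence.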
I’m excited about the Exploit Prediction Scoring System (EPSS)! Most Information Security and IT professionals will tell you that one of their top pain points is vulnerability management. Keeping systems updated feels like a hamster wheel of work: update after update, yet always behind. It’s simply not possible to update all the systems all the time, so prioritization is needed. The Common Vulnerability Scoring System (CVSS) provides a way to rank vulnerabilities, but at least from the risk analyst’s perspective, something more is needed. EPSS is what we’ve been looking for.
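As a quick illustration (mine, not from the post), here is a sketch that pulls EPSS scores from FIRST.org’s public API and ranks a few CVEs by predicted exploitation probability. The CVE IDs are examples only, and field names and rate limits should be checked against the current EPSS API documentation.

```python
import requests

EPSS_API = "https://api.first.org/data/v1/epss"
cves = ["CVE-2021-44228", "CVE-2017-0144", "CVE-2019-0708"]

resp = requests.get(EPSS_API, params={"cve": ",".join(cves)}, timeout=10)
resp.raise_for_status()

# Each record carries the predicted exploitation probability ("epss")
# and its rank among all scored CVEs ("percentile"), returned as strings.
scores = resp.json()["data"]
for rec in sorted(scores, key=lambda r: float(r["epss"]), reverse=True):
    print(f'{rec["cve"]}: EPSS {float(rec["epss"]):.1%} '
          f'(percentile {float(rec["percentile"]):.1%})')
```

Pairing an exploitation probability with CVSS severity gives the prioritization signal that severity alone lacks.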
In this post, I’m going to cover two kinds of shit. The first kind is feces on the streets of San Francisco that I’m sure everyone knows about due to abundant news coverage. The second kind is bullshit; specifically, the kind found in faulty data gathering, analysis, hypothesis testing, and reporting.
It’s the end of the year, and that means two things: the year will be declared the “Year of the Data Breach” again (or an equivalent hyperbolic headline) and <drumroll> Cyber Predictions! I react to yearly predictions with equal parts groan and entertainment. They’re written so generically that they could hardly be considered predictions at all.