
Threat-Countering: Refining Risk-Based Cybersecurity

Posted by Geoff Stoker on Apr 9, 2019 8:38:00 AM


A lot of effort has gone into applying risk-based approaches to cybersecurity in general and patch management in particular. Risk management (RM) techniques tend to prioritize patching “based on the value of the information resource”[1] requiring the patch. At its core, however, cybersecurity is about fighting adaptive threats. So, we advocate the idea of threat-countering patch prioritization. This approach, while complementary to a risk-based one, provides a key advantage: it directly focuses on what we most want to accomplish — avoiding attacks.

The RM process model [2] enumerates four components: framing risk, assessing risk, responding to risk, and monitoring risk. This article is part 1 of a closer look at the risk assessment component, specifically as it applies to prioritizing patch management, and it recommends a threat-countering focus.

“Il n’est pas certain que tout soit incertain.” (It is not certain that everything is uncertain.) — Pascal

The history of RM is a bit fuzzy. But, likely as a result of Pascal’s and Fermat’s correspondence contemplating “the problem of points” (a gambling question), society was provided with a formalized way to predict, with a certain degree of accuracy, events which have not yet occurred. Want to bet $10 that an even number will come up on a single roll of a fair die? Your expected payoff is $10 * .5 = $5.

Following the early 1660s work of John Graunt (which was so ground-breaking that King Charles II ordered Graunt, a haberdasher, be admitted to the Royal Society of London), it became possible to understand probabilities related to life expectancy — the risk of death — which, although tough to predict for a particular individual, is sufficiently statistically regular when aggregated across the whole population. So, crudely, we can consider life insurance policies similar to gambling bets: the payoff analysis is done in much the same way (albeit with more complicated math), and probabilities are based on the risk of death for each age cohort of the larger population.

Thus, in a very brief, high-level way, we have the foundations from which the practice and study of RM would grow, with the initial idea that risks ranging from loss of life to ruined crops to lost sea shipments might adequately be managed with insurance. Following WWII, when it became clear that using insurance to manage every kind of risk was too expensive, the discipline expanded to include alternative strategies for risk mitigation. This expansion evolved into the idea of enterprise risk management (ERM) to deal with the wide array of risks facing complex organizations.

Now that information technology pervades almost all aspects of business and government, IT risk management has grown to be an important component of overall ERM. Today, many are rightly concerned about risks related to cybersecurity and collectively have brought RM ideas to bear on the problem. Pascal’s and Fermat’s ideas have been stretched into the future. NIST has defined risk as a measure of the extent to which an entity is threatened by a potential circumstance or event. Risk is further described as a function of the adverse impacts that arise if the circumstance/event occurs, and the likelihood of occurrence. This formula, impacts * likelihood = risk, bears a strong resemblance to the payoff equation.

Returning to the risk assessment component of the RM process, we find that NIST states the purpose of this component is to identify threats, vulnerabilities, impacts, and likelihood. Impacts occur when threats exploit vulnerabilities, so, stretching the formula a bit more, we could posit that (threats * vulnerabilities) * likelihood = risk. Threats, according to NIST, include equipment failure, environmental disruptions, human or machine errors, and purposeful attacks.
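To make the formula concrete, here is a minimal Python sketch using hypothetical impact and likelihood scores (not any particular scoring standard) that computes a risk value for each vulnerability as impacts * likelihood:

```python
# Minimal sketch: risk = impact * likelihood, with hypothetical scores.
# Impact is on a 0-10 scale (how bad exploitation would be); likelihood is a
# 0-1 estimate that the vulnerability will actually be exploited.
vulns = {
    "vuln-A": {"impact": 9.0, "likelihood": 0.02},  # severe, but rarely targeted
    "vuln-B": {"impact": 5.0, "likelihood": 0.60},  # moderate, actively exploited
    "vuln-C": {"impact": 7.5, "likelihood": 0.10},
}

for name, v in vulns.items():
    risk = v["impact"] * v["likelihood"]            # impacts * likelihood = risk
    print(f"{name}: risk = {risk:.2f}")
```

Note that in this made-up example, vuln-B, the less severe but actively exploited vulnerability, ends up with the highest risk score; this is the same intuition the threat-countering argument below builds on.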

An important distinction to make here is that not all threat types can be probabilistically modeled in the same way. The first three categories (failure, disruption, error) can be modeled to a great extent as random processes. The purposeful attacks category, however, is fundamentally different, and modeling it is not quite so straightforward.

So, how does this play into risk assessment and vulnerability patch management? Well, bad actors trawl the internet looking for vulnerable systems to exploit, much as fishing vessels trawl the ocean. But, just as whalers set off specially equipped for hunting whales and ill-equipped to capture lobsters or krill, so too are hackers primarily equipped to exploit just a few vulnerabilities. Several studies of vulnerabilities exploited in the wild have found that the rate is ~2%, or about 1 in 50. Just as whalers pose no threat to lobsters, hackers prepared to exploit systems with vulnerability A pose no threat to systems with vulnerability B.
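To give a feel for how much that ~2% figure narrows the problem, here is a rough sketch using randomly generated, hypothetical vulnerability identifiers and exploitation data (not a real feed) showing how a threat-countering view shrinks a backlog to the vulnerabilities attackers are actually equipped to exploit:

```python
import random

# Hypothetical example: 1,000 published vulnerabilities, of which roughly 2%
# (about 1 in 50) are ever exploited in the wild.
random.seed(0)
all_vulns = [f"VULN-{n:04d}" for n in range(1, 1001)]
exploited_in_wild = set(random.sample(all_vulns, k=20))   # ~2% of the total

# A threat-countering queue contains only the vulns attackers are fishing for.
threat_queue = [v for v in all_vulns if v in exploited_in_wild]
print(f"{len(threat_queue)} of {len(all_vulns)} vulnerabilities drive the real-world threat")
```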

As RM techniques are used to bring a risk-based approach to the patching process, many focus first on determining which systems are high-value or critical, which tends to lead to a strategy that prioritizes patching critical vulns on those systems. While it’s hard to argue against the intuitive appeal of such a focus, two key points are missing from this analysis:

  1. Studying hackers’ exploit patterns, it does not appear that they target systems based on relative value within an organization, but rather based on their ability to compromise particular vulnerabilities; so, prioritizing patching on critical systems does not directly counter real-world threat.
  2. Unlike other assets, the interconnected nature of IT systems makes it unclear that prioritizing patching of high-value or critical systems actually protects them. Often, the exploitation of hacker-attractive vulns on peripheral systems allows bad actors to gain a foothold from which they move internally to higher-value systems. The 2017 Equifax breach reflected hackers’ use of this technique.

While RM offers useful risk-based techniques in general, threat-countering is necessary when dealing with the special challenges of prioritizing vulnerability mitigation — we must directly meet the actual, real-world threat in order to stop it. Worrying about an unknown threat that might compromise your most high-value or critical system feels intuitively correct. However, if the real-world likelihood of that happening is zero, you haven’t made your organization any more secure by prioritizing patching of those systems. This is especially true if your resources are sufficiently limited (as many patching teams’ are) that you’re never able to get to vulns on less important systems that hackers are actually targeting. The rational move with patching prioritization is to play the game the opponent is playing and meet them where they are.
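As a simple illustration of “meeting them where they are,” the sketch below uses hypothetical systems, asset values, and exploitation likelihoods to sort the same patch backlog two ways: by the value of the affected system and by the likelihood that the vulnerability is actually being exploited:

```python
# Hypothetical patch backlog: each entry has the value of the affected system
# and an estimated likelihood that the vulnerability is exploited in the wild.
backlog = [
    {"vuln": "vuln-A", "system": "HR database",       "asset_value": 9, "exploit_likelihood": 0.01},
    {"vuln": "vuln-B", "system": "public web server", "asset_value": 3, "exploit_likelihood": 0.70},
    {"vuln": "vuln-C", "system": "finance app",       "asset_value": 8, "exploit_likelihood": 0.05},
    {"vuln": "vuln-D", "system": "test VM",           "asset_value": 1, "exploit_likelihood": 0.40},
]

# Risk-based ordering: patch the most valuable systems first.
by_value = sorted(backlog, key=lambda v: v["asset_value"], reverse=True)

# Threat-countering ordering: patch what attackers are actually exploiting first.
by_threat = sorted(backlog, key=lambda v: v["exploit_likelihood"], reverse=True)

print("by value: ", [v["vuln"] for v in by_value])    # vuln-A, vuln-C, vuln-B, vuln-D
print("by threat:", [v["vuln"] for v in by_threat])   # vuln-B, vuln-D, vuln-C, vuln-A
```

In this made-up backlog, the value-based ordering spends limited patching cycles on the HR database and finance app, while the vulnerability attackers are actually exploiting sits on a low-value web server, exactly the kind of foothold described in the Equifax example above.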

In part 2 on this idea of threat-countering patch prioritization, we’ll dig a little deeper into how this can be accomplished.

CYR3CON provides cyber threat intelligence through advanced machine learning and data mining of deep-/dark-web information. CYR3CON’s flagship product, CYR3CON Priority, ranks all vulnerabilities based on threat. Hacker discussions are analyzed with predictive machine learning algorithms that consider conversation content, hacker social structure, reputation, language, etc. in order to help organizations best mitigate risk by prioritizing patching against real-world threats.

[1] CISA Review Manual

[2] NIST SP 800-39, Managing Information Security Risk

