Risk Based Vulnerability Management – starting with the why

CVSS scores are static and limited
From an enterprise perspective, the consensus is that patching a vulnerability takes around 80 days from announcement. And that figure often reflects a focus on the top CVSS scores (9 and 10), with lower-rated vulnerabilities taking, in some instances, up to a year to patch. So what happens when a vulnerability scores low on CVSS but its real-world risk increases over time? Because of its original CVSS score it has dropped down the patching priority list, and its risk exposure grows as the time to patch stretches out while the 'highest' risks take precedence. In this situation, which factors should we be putting under the magnifying glass to make better-informed strategic decisions? We need access to real-time threat intelligence, including what is being discussed in the hacker community, to help us focus on the right prioritization and remediation.
Help, I’m Drowning
Focusing on something is better than burying your head in the sand and pretending there isn’t a problem, but the daily battle to keep on top of newly discovered vulnerabilities is like treading water. All it takes is one thing to come along and you go from (just about) treading water to drowning. That one thing can be a media spotlight on a newly released vulnerability, with the senior management team wanting to know immediately what the risks are to the business, or asking that dreaded ‘can we be hacked?’ question. The lack of context around vulnerabilities and the level of risk they pose means we’re often playing catch-up with hackers, and the pressure on security teams grows as management demands answers with little information available. How can we level the playing field and use data to predict the future ourselves?
Risk Based Vulnerability Management – organized chaos
First and foremost, Risk Based Vulnerability Management (RBVM) isn’t a silver bullet. You still have a huge list of vulnerabilities to remediate, but think of it as a funnel: all 12,000-odd vulnerabilities go in at the top, and by layering in contributing factors you sift through them to answer the ‘what should I remediate first?’ question on more than the CVSS score alone, so the number of vulnerabilities that actually need focused remediation is far smaller.
Let’s look at the contributing factors we should consider when prioritizing vulnerabilities (a short sketch of how they might combine follows the list):
- Business risk: Every asset you deploy in the enterprise has a business risk associated with it, ranging from low (the desktop PC that only accesses email and a browser) to very high (the server containing the database holding all your customers’ credit card transactions). Focusing on business risk allows us to prioritize the assets that, should they be compromised, would have the biggest impact on the company’s reputation and/or revenue.
- Exposure: As simple as answering the question ‘is this asset exposed to the Internet?’, whether directly or via NAT through firewall rules. Once we expose an asset to the world outside our network perimeter, we increase the risk of a vulnerability on it being exploited – this is low-hanging fruit for a hacker.
- Exploit available: This brings us to the third point – has an exploit been released for the vulnerability? Here’s another interesting statistic: while there is some disagreement over the exact numbers, between 85% and 95% of all published vulnerabilities are never exploited. To put that into context, taking the middle of that range at 90%, of the roughly 12,000 vulnerabilities released in 2019 only around 1,200 would have exploits created for them, which narrows down the pool once more.
- Threat context: Has it been exploited? Finally, if we can determine whether a vulnerability with a released exploit has actually been exploited in the wild (directly, through malware, or by some other means), we can focus our efforts there, because these pose the biggest danger – and we arrive at the small number of vulnerabilities to prioritize for our remediation efforts.
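To make the funnel a little more concrete, here is a minimal sketch in Python of how those four factors might be combined into a single priority score. The field names, weights and example records are purely illustrative assumptions – in practice these values would come from your scanner, your CMDB/GRC platform and your threat intelligence feeds – but the shape of the calculation is the point.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    cvss: float             # base CVSS score, 0-10
    business_risk: int      # 1 (low) to 5 (critical), e.g. sourced from a CMDB/GRC tool
    internet_exposed: bool  # reachable from outside the perimeter, directly or via NAT?
    exploit_available: bool # has a public exploit been released?
    exploited_in_wild: bool # is it known to be actively exploited?

def priority_score(v: Vulnerability) -> float:
    """Fold the contributing factors into one sortable score (weights are illustrative)."""
    score = v.cvss * v.business_risk       # impact if this particular asset is compromised
    if v.internet_exposed:
        score *= 1.5                       # exposed assets are low-hanging fruit
    if v.exploit_available:
        score *= 2.0                       # an exploit exists...
    if v.exploited_in_wild:
        score *= 3.0                       # ...and is being used in the wild
    return score

vulns = [
    Vulnerability("CVE-2019-0001", 9.8, 1, False, False, False),  # critical CVSS, low-value internal box
    Vulnerability("CVE-2019-0002", 6.5, 5, True, True, True),     # medium CVSS, exposed crown-jewel server
]
for v in sorted(vulns, key=priority_score, reverse=True):
    print(v.cve_id, round(priority_score(v), 1))
```

Note how the medium-severity finding on an exposed, business-critical asset with an exploit in active use comes out well ahead of the CVSS 9.8 on a low-risk internal machine – exactly the reordering that a purely CVSS-driven approach misses.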
Can it be that simple?
The answer is: it depends. It depends on the tools you use and how you track and gather all of the vulnerability information. Tracking business risk for every asset in an organization is in itself a challenging task unless you have tools such as GRC or CMDB platforms to ease the burden. But even then, keeping it up to date is a big strain on resources and can lead to mistakes.
Threat intelligence on vulnerabilities that have exploits is easy to find; finding out whether a vulnerability has been exploited recently is not so easy (though several well-known threat intelligence providers offer some of this capability).
Bringing it all together in one place to focus your attention on the real risks is another matter. You can do this in Excel (good luck if you have more than a few tens of thousands of rows of data), but more realistically you should look to your vulnerability management vendor to deliver some or all of this capability directly within their toolset. That way the hard work of risk prioritization is done for you – automated, trusted and verified – a no-brainer.
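As a rough illustration of that funnel effect (again just a sketch, reusing the hypothetical Vulnerability record from the earlier example with made-up thresholds), filtering a backlog step by step shows how quickly the working set shrinks from thousands of findings to a short, defensible remediation list:

```python
def funnel(findings: list) -> list:
    """Apply the contributing factors in turn, printing how much of the backlog survives each stage."""
    high_value = [f for f in findings if f.business_risk >= 4 or f.internet_exposed]
    exploitable = [f for f in high_value if f.exploit_available]
    in_the_wild = [f for f in exploitable if f.exploited_in_wild]
    print(f"all findings: {len(findings)}")
    print(f"business-critical or exposed: {len(high_value)}")
    print(f"with a public exploit: {len(exploitable)}")
    print(f"exploited in the wild (fix these first): {len(in_the_wild)}")
    return in_the_wild

shortlist = funnel(vulns)
```

Whether this lives in a spreadsheet macro or inside your vulnerability management platform, the principle is the same: each factor removes findings that don’t need immediate attention, leaving a list small enough to act on.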
What about Outpost24?
In 2020 Outpost24 is developing a risk-based vulnerability management tool, fully integrated into our vulnerability assessment solution, adding the fundamental building blocks to help our customers move to a truly risk-based vulnerability management approach backed by trusted data intelligence. For several years we have already provided the first indicator, exploit availability, giving our customers the ability to focus on the top 10–15% of vulnerabilities that have had exploits released and pose the biggest risk. Our next step in the journey is to give customers a full view of risk and help them answer the question ‘has the vulnerability been exploited in the wild?’, so it can be flagged in a simple way to prioritize vulnerability remediation.
And this will be covered in our next blog ‘Predicting the future: Will that vulnerability be exploited?’