Vulnerability Magnitude, Exploitation Velocity, Blast Radius… No, Not Rocket Science

One of the tangible effects of digital transformation is its impact on security teams, processes, and roadmaps.

Organizations are realizing that the technology landscape hosts a rich and varied digital biodiversity – species living in the cloud or in containers, on mobile devices or in the parallel universes of IoT/IIoT, and in spatio-temporal tunnels called CI/CD pipelines.

And this digital biodiversity must be continuously qualified, assessed, and remediated whenever something looks too anomalous… all of which are responsibilities of security teams.

The complexity these activities imply is remarkable, and often requires augmented capabilities to avoid overwhelming specialized resources.

But capabilities need to be grounded in solid processes, and this is where an issue often surfaces: a lack of operational efficiency.

Swivel-chair integration, multiple consoles, poorly implemented APIs, and manual operations are still common causes of long processes, human errors, and repetitive work. Solutions have started to appear that try to automate the steps and accelerate the process.

Data about discovered assets is made available to other platforms, which try to refine it into information that algorithms can process to understand the detected vulnerabilities. The data about the vulnerable surface is then propagated to further solutions, which overlay additional data to determine exploitability, enrich the context, and enable prioritization. Eventually, reports are produced for the infrastructure team to proceed with patching or remediation.

Again, this orchestration does little to improve operational efficiency, because the phases are processed by different platforms and different teams with varying objectives; the data therefore lack consistency and normalization, and require adaptation before they can be properly ingested and processed by the next consumer.
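
To make that friction concrete, here is a minimal sketch of the kind of glue code such fragmented pipelines typically require; the two record formats and every field mapping are hypothetical, invented purely for illustration.

    # Hypothetical adapter: translate findings exported by one scanner into
    # the schema another prioritization tool expects. All field names are
    # invented; real integrations need one such shim per hand-off.
    def adapt_finding(scanner_finding: dict) -> dict:
        return {
            # Same asset, different identifiers across the toolchain.
            "asset_id": scanner_finding["host"]["ip"],
            # Severity scales rarely agree: 1-5 here, labels there.
            "severity": {1: "LOW", 2: "LOW", 3: "MEDIUM",
                         4: "HIGH", 5: "CRITICAL"}[scanner_finding["sev"]],
            # Timestamps arrive in different formats and time zones.
            "detected_at": scanner_finding["scan_date"] + "Z",
            "cve_ids": scanner_finding.get("cves", []),
        }

Every hand-off in the chain multiplies this kind of adaptation work – and every shim is a fresh opportunity for data loss and errors.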

In short, a unified workflow is missing.

Qualys invented VMDR, an acronym for Vulnerability Management, Detection and Response.

It is a new app running within the Qualys Cloud Platform, processing the same consistent source of data across the capabilities that implement the entire process through a single, integrated workflow:

  • asset discovery, categorization, and dynamic tagging;
  • detection of the vulnerable surface by identifying OS and network vulnerabilities and configuration errors;
  • context enrichment based on cyber threat intelligence, augmented by a machine-learning engine to help prioritization;
  • refined prioritization based on exposure, business impact, and other distinctive traits of the digital landscape where the solution is deployed (a toy scoring sketch follows this list);
  • vulnerability–patch correlation, tailored to the assets and perimeters for the selected tags and for the prioritized vulnerable surfaces to be remediated;
  • support for remediation through patch deployment;
  • continuous validation of the security posture against CIS benchmarks.
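
To give a concrete flavor of the prioritization step, here is a minimal sketch combining severity, exploit context, and business impact into a single score. The weights, factors, and scale are invented for illustration and do not reproduce the actual VMDR scoring model.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        severity: int           # 1 (info) .. 5 (critical)
        exploit_available: bool
        wormable: bool
        asset_criticality: int  # 1 (lab box) .. 5 (crown jewel)
        internet_facing: bool

    def priority_score(d: Detection) -> float:
        score = float(d.severity * d.asset_criticality)  # base: 1 .. 25
        if d.exploit_available:
            score *= 1.5   # working exploit code in the wild
        if d.wormable:
            score *= 1.5   # self-propagation risk
        if d.internet_facing:
            score *= 1.3   # larger attack surface
        return score

    # Highest score first: these findings get remediated before the rest.
    detections = [
        Detection(5, True, True, 4, True),     # wormable RCE on a DMZ host
        Detection(3, False, False, 2, False),  # medium finding on a lab machine
    ]
    for d in sorted(detections, key=priority_score, reverse=True):
        print(round(priority_score(d), 1), d)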

All this without limits on the sensors you may need to properly observe your IT estate and collect data:

  • software agents designed to minimize the footprint on the servers, workstations, and mobile devices where they are installed;
  • virtual scanners to actively probe the networks;
  • passive sensors that listen to traffic and expose every visible device;
  • cloud APIs for instant visibility into PaaS/IaaS deployments;
  • container sensors to monitor images in registries or on hosts, as well as running containers.

All this in a unified application, where data are collected once and processed efficiently to support the whole workflow. All this with customizable dashboards and reports to keep critical KPIs under control, and with an API to feed the refined information into other workflows – such as CI/CD pipelines. Beyond the operational efficiency, the quality and accuracy of the information produced by this unified workflow make Qualys VMDR an effective support for risk mitigation.
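
As an example of that last point, a pipeline stage could pull the refined detection data and fail a build when high-severity, active findings exist on the hosts it just provisioned. The sketch below uses the Host List Detection call of the Qualys API v2; the credentials and IP range are placeholders, and the exact parameters available depend on your subscription, so treat this as an assumption to validate against the Qualys API documentation.

    import sys
    import requests

    # Qualys API v2 Host List Detection endpoint (the platform hostname
    # varies by subscription; this is the US Platform 1 URL).
    QUALYS_API = "https://qualysapi.qualys.com/api/2.0/fo/asset/host/vm/detection/"

    resp = requests.post(
        QUALYS_API,
        auth=("API_USERNAME", "API_PASSWORD"),           # placeholder credentials
        headers={"X-Requested-With": "ci-gate-script"},  # header the API requires
        data={
            "action": "list",
            "ips": "10.0.0.1-10.0.0.254",  # placeholder: hosts built by this pipeline
            "severities": "4-5",           # only high and critical detections
            "status": "Active",
        },
        timeout=60,
    )
    resp.raise_for_status()

    # The API answers in XML; a real gate would parse it properly. Here we
    # only check whether any detection was returned at all.
    if "<DETECTION>" in resp.text:
        print("High/critical vulnerabilities detected: failing the build.")
        sys.exit(1)
    print("No high/critical detections: build may proceed.")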

From a more pragmatic standpoint, this boils down to having a clear perception of three important things.

First, the vulnerability magnitude: the synthesis of your vulnerable surface, enriched with important contextual information such as patch availability for a given perimeter (taking supersedence and severity into account), and the ability to summarize this information according to your observational needs.
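
A toy sketch of that kind of summary, over hypothetical, already-parsed detection records (the field names are invented for illustration):

    from collections import Counter

    # Hypothetical detection records; "superseded_by" marks a patch that a
    # newer patch in the same chain already covers.
    detections = [
        {"qid": 91234, "severity": 5, "superseded_by": None},
        {"qid": 91000, "severity": 5, "superseded_by": 91234},  # older patch
        {"qid": 45678, "severity": 3, "superseded_by": None},
    ]

    # Supersedence: an older, already-covered patch should not inflate the
    # count of patches left to deploy on this perimeter.
    actionable = [d for d in detections if d["superseded_by"] is None]

    magnitude = Counter(d["severity"] for d in actionable)
    for severity in sorted(magnitude, reverse=True):
        print(f"severity {severity}: {magnitude[severity]} patch(es) to deploy")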

Second, the exploitation velocity: crucial for prioritizing and planning remediation, this is the data concerning the availability of an exploit, including details about the ease of exploitation and the potential collateral damage coming from a wormable weaponization of the vulnerability or from lateral movement following the compromise of a system.
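
As a toy illustration of how that exploit context might translate into remediation urgency (all factors, weights, and thresholds invented):

    # Toy "exploitation velocity" indicator: the more mature and dangerous
    # the exploit context, the faster remediation should happen.
    def exploitation_velocity(exploit_public: bool, ease: str, wormable: bool) -> str:
        """'ease' is one of 'hard', 'moderate', 'trivial' (hypothetical scale)."""
        score = 0
        score += 3 if exploit_public else 0  # working exploit code available
        score += {"hard": 0, "moderate": 1, "trivial": 2}[ease]
        score += 3 if wormable else 0        # self-propagating compromise
        if score >= 6:
            return "patch within 24 hours"
        if score >= 3:
            return "patch this week"
        return "patch in the normal cycle"

    print(exploitation_velocity(exploit_public=True, ease="trivial", wormable=True))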

Third, the blast radius: the combination of the network context enriched with the business criticality of assets, the automatic validation of CIS benchmarks, and the ML-assisted risk scoring of the vulnerable and exploitable surface tangibly helps estimate the potential harm of a security incident, and provides the refined information needed to measure and track the Time To Remediate.
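
A final toy sketch, computing Time To Remediate from detection records and grouping it by a coarse blast-radius tier (the record fields and the tiering rule are hypothetical):

    from datetime import datetime

    records = [
        {"first_found": "2020-03-01", "fixed": "2020-03-04",
         "business_critical": True,  "internet_facing": True},
        {"first_found": "2020-03-01", "fixed": "2020-03-20",
         "business_critical": False, "internet_facing": False},
    ]

    def tier(r: dict) -> str:
        # Crude blast-radius proxy: criticality and exposure widen the radius.
        if r["business_critical"] and r["internet_facing"]:
            return "large blast radius"
        return "small blast radius"

    for r in records:
        days = (datetime.fromisoformat(r["fixed"])
                - datetime.fromisoformat(r["first_found"])).days
        print(f"{tier(r)}: TTR = {days} days")

Vulnerability magnitude, exploitation velocity, blast radius: not rocket science after all – once the whole workflow runs on one consistent set of data.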
