Vulnerability Magnitude, Exploitation Velocity, Blast Radius… No, Not Rocket Science


One of the tangible effects of digital transformation is its impact on security teams, processes, and roadmaps.

Organizations are realizing that the technology landscape hosts a rich and varied digital biodiversity – with species living in the cloud or in containers, in mobile devices or in the parallel universes of IoT/IIoT, and in spatio-temporal tunnels called CI/CD pipelines.

This digital biodiversity should be continuously qualified, assessed, and remediated whenever something proves too anomalous… and all of these responsibilities fall to security teams.

The complexity these actions imply is remarkable, and teams often need to augment their capabilities to avoid a devastating impact on specialized resources.

But capabilities need to be grounded in solid processes, and this is where an issue often surfaces: a lack of operational efficiency.

Swivel-chair integration, multiple consoles, poorly implemented APIs, and manual operations are still common causes of long processes, human error, and repetitive work. Solutions have started to appear that try to automate the steps and accelerate the process.

Data about discovered assets is made available to other platforms, which try to refine it into information that algorithms can process to understand the detected vulnerabilities; the data about the vulnerable surface is then propagated to other solutions, which overlay additional data to determine exploitability, enrich the context, and enable prioritization; eventually, reports are produced for the infrastructure team to proceed with patching or remediation.

Again, this orchestration does little to improve operational efficiency, because each phase is processed by a different platform and a different team with different objectives; the resulting data lack consistency and normalization, and require adaptation before the next consumer can properly ingest and process them.

In short, there is a lack of a unified workflow.
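
To make that adaptation cost concrete, here is a minimal sketch of the glue code every handoff tends to require; the two export formats and all field names are invented for illustration and are not taken from any real product.

```python
# Hypothetical sketch: normalizing findings from two tools into one shape.
# Every field name here is invented for illustration purposes only.
from dataclasses import dataclass

@dataclass
class Finding:
    asset_id: str
    cve: str
    severity: int  # normalized to a 1-5 scale

def from_tool_a(record: dict) -> Finding:
    # Tool A exports severity as a string label.
    levels = {"critical": 5, "high": 4, "medium": 3, "low": 2, "info": 1}
    return Finding(record["host"], record["cve_id"], levels[record["sev"]])

def from_tool_b(record: dict) -> Finding:
    # Tool B exports a 0-10 CVSS-like float that must be rescaled.
    return Finding(record["asset"], record["cve"], max(1, round(record["score"] / 2)))
```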

Qualys invented VMDR, an acronym for Vulnerability Management, Detection and Response.

VMDR is a new app running within the Qualys Cloud Platform, processing the same consistent source of data across the products that implement the entire process through a single, integrated workflow (a minimal API sketch follows the list below):

  • asset discovery, categorization, and dynamic tagging;
  • detection of the vulnerable surface by identifying OS and network vulnerabilities and configuration errors;
  • context enrichment based on cyber threat intelligence, augmented by a machine-learning engine that helps prioritization;
  • refined prioritization based on exposure, business impact, and other distinctive traits of the digital landscape where the solution is deployed;
  • vulnerability-patch correlation, tailored to the assets and perimeters of the considered tags and to the prioritized vulnerable surfaces to be remediated;
  • support for remediation through patch deployment;
  • continuous validation of the security posture against CIS benchmarks.
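
As a minimal taste of that single data source, the sketch below pulls high-severity detections through the Qualys Cloud Platform API. It assumes the documented Host List Detection endpoint (/api/2.0/fo/asset/host/vm/detection/) and HTTP Basic authentication; verify the parameter names and your regional platform URL against the official API guide.

```python
# Hedged sketch: fetch severity 4-5 detections, tags included, from the
# Qualys VM Host List Detection API. Endpoint and parameters should be
# verified against the API documentation for your subscription.
import requests

QUALYS_API = "https://qualysapi.qualys.com"  # platform URL varies by region

def fetch_high_severity_detections(username: str, password: str) -> str:
    """Return the raw XML list of severity 4-5 detections with asset tags."""
    resp = requests.get(
        f"{QUALYS_API}/api/2.0/fo/asset/host/vm/detection/",
        params={"action": "list", "severities": "4-5", "show_tags": "1"},
        headers={"X-Requested-With": "vmdr-blog-example"},  # header the API requires
        auth=(username, password),
        timeout=60,
    )
    resp.raise_for_status()
    return resp.text
```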

All this without limits on the sensors you may need to properly observe your IT estate and collect data: software agents designed to minimize their footprint on the servers, workstations, and mobile devices where they are installed; virtual scanners to actively probe networks; passive sensors that listen to traffic and expose every visible device; cloud APIs for instant visibility into PaaS/IaaS deployments; and container sensors to monitor images in registries or on hosts, as well as running containers.

All this in a unified application, where data are collected once and processed efficiently to support the whole workflow. All this with customizable dashboards and reports to keep critical KPIs under control, and with an API to feed the refined information into other workflows – such as CI/CD pipelines. Beyond operational efficiency, the quality and accuracy of the information produced by this unified workflow make Qualys VMDR an effective support for risk mitigation.
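
As one example of feeding that refined information into a CI/CD pipeline, the sketch below gates a build on the detections fetched by the fetch_high_severity_detections helper above; the XML element name parsed here follows the Host List Detection response format, but treat it as an assumption to validate for your API version.

```python
# Hedged sketch of a CI/CD gate: fail the pipeline step when detections
# above a severity threshold are present. The SEVERITY element name follows
# the Host List Detection XML response; verify it for your API version.
import sys
import xml.etree.ElementTree as ET

def gate_on_detections(detection_xml: str, max_severity: int = 4) -> None:
    root = ET.fromstring(detection_xml)
    blocking = [
        sev.text
        for sev in root.iter("SEVERITY")
        if sev.text and int(sev.text) > max_severity
    ]
    if blocking:
        print(f"Gate failed: {len(blocking)} detections above severity {max_severity}")
        sys.exit(1)  # a non-zero exit code fails the pipeline step
    print("Gate passed: no blocking detections")
```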

From a more pragmatic standpoint, this boils down to having a clear perception of three important things.

First, the vulnerability magnitude: the synthesis of your vulnerable surface, enriched with important contextual information such as patch availability for a given perimeter – taking supersedence and severity into account – and the ability to summarize this information according to your observational needs.
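
Supersedence matters because a newer patch can replace several older ones, so only the latest patch in a chain should be recommended. A minimal sketch of that collapse, using a hypothetical supersedes mapping rather than any real Qualys data structure:

```python
# Hypothetical sketch: collapse a patch supersedence chain so that only
# the latest patch per chain is recommended. The "supersedes" mapping is
# invented for illustration.
def latest_patches(supersedes: dict[str, str]) -> set[str]:
    """Given patch -> superseded-patch edges, return patches not superseded."""
    superseded = set(supersedes.values())
    return {p for p in supersedes if p not in superseded}

# Example: KB3 supersedes KB2, which supersedes KB1 -> only KB3 remains.
assert latest_patches({"KB3": "KB2", "KB2": "KB1"}) == {"KB3"}
```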

Second, the exploitation velocity: crucially relevant for prioritizing and planning remediation, this is the data concerning the availability of an exploit, including details about the ease of exploitation and the potential collateral damage from a wormable weaponization of the vulnerability or from lateral movement following the compromise of a system.
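
One simple, entirely illustrative way to fold those signals into a remediation order; the fields and weights below are assumptions for the example, not Qualys' scoring model:

```python
# Illustrative sketch only: rank vulnerabilities by exploitation velocity.
# Fields and weights are invented; they do not reproduce Qualys' model.
def velocity_score(exploit_public: bool, ease: int, wormable: bool) -> int:
    """ease: 1 (hard) to 5 (trivial). Higher score = remediate sooner."""
    score = ease
    if exploit_public:
        score += 5   # a public exploit shortens the attacker's timeline
    if wormable:
        score += 10  # wormable bugs spread without operator effort
    return score

findings = [("CVE-A", True, 5, True), ("CVE-B", False, 2, False)]
findings.sort(key=lambda f: velocity_score(*f[1:]), reverse=True)
```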

Third, the blast radius: the combination of the network context enriched with the business criticality of assets, the automatic validation of CIS benchmarks, and the ML-assisted risk scoring of the vulnerable and exploitable surface provides tangible help in estimating the potential harm of a security incident, supplying the refined information needed to measure and track the Time To Remediate.
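
One illustrative way to combine those ingredients – again, a sketch under assumed inputs and weights, not the product's algorithm:

```python
# Illustrative sketch: estimate a blast radius from assumed inputs.
# All factors and weights are invented for the example.
def blast_radius(criticality: int, exposed: bool, cis_pass_rate: float) -> float:
    """criticality: 1-5 business impact; cis_pass_rate: 0.0-1.0 CIS compliance."""
    exposure_factor = 2.0 if exposed else 1.0       # internet-facing assets amplify harm
    hardening_factor = 1.0 + (1.0 - cis_pass_rate)  # weak hardening widens the radius
    return criticality * exposure_factor * hardening_factor
```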

Mark Gallagher: Driving The Future Towards High-Performance Through Big Data


The future of data-driven organizations has arrived, and spearheading businesses towards operational excellence is the vision that Mark Gallagher, founder and CEO of Performance Insights and Industry Analyst at Formula One, continues to advocate.

As organizations start to adopt more data-driven strategies, Gallagher shares with us the challenges that Big Data presents and their solutions, the opportunities that disruptive technologies can provide in tandem with data and analytics, and the future they hold for businesses and beyond.



The Challenges and Solutions of Data-Driven High Performance

Big Data and analytics have quickly become the key ingredients that businesses need to integrate to remain high-performing and agile in today's industry. Nevertheless, there are challenges that businesses have to overcome before they can transform into data-driven organizations.

One such challenge that Gallagher notes is the need for organizations to understand and find which data is most relevant to unlocking new opportunities, rather than relying on established systems.

“We may wish to gather data from the areas where we have some understanding,” notes Gallagher. “However, the real opportunity comes from questioning established systems and processes and examining data around the unknowns.”

While finding and utilizing data effectively is still a major challenge for most organizations, Gallagher believes the solution lies in organizations finding the right partners and using the right emerging technology to help improve performance.

“It is vital to work with the right partners to develop systems that can make rapid use of data,” Gallagher points out. “Real-time data encourages and facilitates real-time decision-making, and this is where the power of AI kicks in.”

In the world of Formula One, Gallagher found that both the quality and the speed of decision-making improved dramatically with the help of partners and AI in understanding and utilizing data. This enabled Gallagher to guarantee much higher levels of quality, reliability, risk management, and performance, allowing his teams to “avoid negative outcomes and guarantee more positive ones.”

Utilizing The Power of Disruptive Technology

On its own, Big Data has proven to be a disruptive technology. However, Gallagher believes that several emerging technologies can be “game-changers” for traditional business processes.

“The opportunities afforded by AI and Blockchain technology are only just being realized, and far from dehumanizing businesses,” Gallagher continues, “these tools will enable more people and organizations to work together seamlessly to drive improved outcomes for their customers, businesses, and supply chains.”

The benefits of emerging technologies go beyond organizational efficiency, and Gallagher points out how the Internet of Things (IoT) and artificial intelligence have helped build a more connected, data-driven environment in Formula One.

“We operate a fully connected environment so that we can manage our assets remotely – monitoring performance and quality, gathering diagnostic information, and ultimately managing the product life cycle better than ever,” Gallagher remarked on the use of IoT and artificial intelligence platforms.

Gallagher sees the innovation that IoT and artificial intelligence bring to Formula One, providing the information to make better use of resources and dramatically improve manufacturing systems. “In creating a digital twin of our product, we have moved to an environment where we can manufacture and manage much more efficiently.”

The Big Future of Big Data and Analytics

Focusing on the data that matters should be the priority for organizations, and as vast amounts of data become increasingly available, Gallagher and Formula One need to work with solutions that cut to the core of the issues and opportunities affecting businesses.

Gallagher points out how Big Data and analytics can be utilized in new ways for businesses and society as a whole in the future, noting that in “a data-rich world, we can mine more opportunities to add value.”

Beyond the profit margins, Big Data offers the opportunity to develop innovative solutions, and Gallagher shares this enthusiasm, saying that he is “very optimistic that many of the problems facing the world today will find their solutions in technology that develops as the result of having the data to understand issues properly.”

At the end of the day, data is just information, and when businesses can access a better quality of information, they can expect to improve outcomes across all areas of operations.