3 Reasons Why Compromise Detection is a Cyber Resilience Game Changer
by Martin Roesch
Cyber resilience has become a hot topic because many enterprises find it increasingly difficult to prevent disruptive and damaging attacks. The attack surface of modern hybrid multi-cloud enterprise networks keeps expanding and often spans OT, IoT, and IT environments. At the same time, generative AI capabilities in the hands of attackers are fueling a new era in the industrialization of hacking.
The consensus is that the historical focus on prevention and perimeter protection, even when we define “identity” as the perimeter, is no longer adequate. Recommendations to build cyber resilience through more comprehensive planning, stringent backup policies, tested response processes, better visibility at the edge, and closer partnership with executive leadership are all valid. However, we also need to change how we think about network security.
One newer resilience strategy is to evolve from threat-centric to compromise-centric detection, a fast and effective way to build cyber resilience and mitigate the impact of an attack.
Why compromise detection and how it works
1. Brittle defenses
Most organizations are primarily concerned with configuration management, whether that’s in the guise of vulnerability management, some type of configuration management database, or even Zero Trust network access (ZTNA). There are also endpoint and network-based threat detection tools at the point of attack that have milliseconds to detect and prevent an attack.
We typically think of this as “defense in depth,” but the truth is that it is really defense in adjacent scope: defenders usually have only one or two tools directly in the path of an attacker trying to complete an attack. In this model, a simple failure in governance leaves organizations exposed to disruption. For example:
- Teams haven’t had time to patch for a new vulnerability.
- EDR, NDR, IPS, and NGFW tools haven’t been configured to look for a new attack that is relevant to the organization.
- A new vulnerability has emerged that an attacker knew about before defenders, and a definition for detecting and protecting against that attack doesn’t exist yet.
That’s a relatively brittle architecture with little resilience. If these layers are circumvented, because they were either out of scope or lagging behind the attacker’s knowledge of available attacks, you’re open to being compromised. And when that happens, these technologies have few capabilities to identify that a compromise has taken place, much less to help scope, contain, and remediate the incident.
2. Outdated deployment requirements
Modern network environments are also problematic for traditional deep packet inspection (DPI)-based security methods, because those methods require deploying appliances or agents and see only the traffic passing through the points where they are deployed.
Many legacy vendors try to bring their appliance-based models to the cloud, and that model doesn’t translate effectively. There are also endpoints that can’t support an agent, that you aren’t aware of, or that you don’t control. A lot is missed in this model as well: east-west traffic, traffic between clouds, misconfigurations, post-exploitation persistence activity, lateral movement, and any area of the network where budget or access prevents deployment.
Even if you had unlimited resources and could close all of those gaps, the increasing use of encryption creates further challenges. Inspecting traffic once it’s encrypted means decrypting it first, which requires additional hardware and becomes computationally, operationally, and therefore financially expensive.
3. Tower of Babel problem
When assets and workloads are spread across multiple cloud platforms and on-prem, how do you know what you’ve got, what it’s doing, what’s happening to it, and what its typical activities are?
Chances are you’re going to use different tools from different providers to try to figure that out in each environment. These tools may or may not be looking for the same things, and each uses its own language to define what hostile activity or an anomaly looks like. Each tool frequently has its own configurations and threat definitions, and its own eventing and reporting platform. How these tools generate and report events varies, which produces very different results and no cohesive picture for common understanding and comprehensive risk mitigation. When an attack happens, answering the question “Now what?” is extremely difficult.
Enter compromise detection
The Netography® Fusion platform allows you to switch to a compromise-centric vantage point today, leveraging flow logs and DNS as the foundation.
Fusion ingests flow logs from your multi-cloud VPCs and VNets (as well as your on-prem network) and DNS logs without requiring you to deploy appliances or agents. That data is brought into a cloud-based, AI-powered analytics backend where normalization, enrichment, analytics, and detection happen.
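To make the normalization step concrete, here is a minimal sketch of what mapping one flow-log format into a common schema can look like. The FlowRecord fields and the parse_aws_vpc_flow helper are assumptions for illustration only, not Fusion’s internal data model; the field order follows the documented default (version 2) AWS VPC Flow Log format. Similar adapters would map Azure VNet flow logs and on-prem NetFlow/IPFIX into the same schema.

```python
# Illustrative only: a minimal normalization step for flow telemetry.
# The FlowRecord schema is an assumption for this sketch, not a vendor data model.
from dataclasses import dataclass

@dataclass
class FlowRecord:
    """A common schema that flow logs from any source are mapped into."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int      # IANA protocol number (6 = TCP, 17 = UDP)
    bytes: int
    packets: int
    start: int         # epoch seconds
    end: int
    action: str        # "ACCEPT" or "REJECT"

def parse_aws_vpc_flow(line: str) -> FlowRecord:
    """Parse one line of the default (version 2) AWS VPC Flow Log format:
    version account-id interface-id srcaddr dstaddr srcport dstport
    protocol packets bytes start end action log-status
    """
    f = line.split()
    return FlowRecord(
        src_ip=f[3], dst_ip=f[4],
        src_port=int(f[5]), dst_port=int(f[6]),
        protocol=int(f[7]),
        packets=int(f[8]), bytes=int(f[9]),
        start=int(f[10]), end=int(f[11]),
        action=f[12],
    )

# Documentation-style sample record:
sample = ("2 123456789010 eni-1235b8ca 172.31.16.139 172.31.16.21 "
          "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK")
print(parse_aws_vpc_flow(sample))
```

Once every source is expressed in one schema, the same detection logic can run across clouds and on-prem traffic alike, which is what dissolves the Tower of Babel problem described above.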
To build resilience and mitigate the impact of an attack, we have created over 300 open detection models that alert you to security threats happening in your environment in real time. We also leverage cloud and on-prem flow logs and DNS logs to detect anomalous activity and potential signs of compromise, informed by the context of the environment they protect, for example (a simplified sketch of two such detections follows the list):
- Unauthorized access attempts and lateral movement
- Unusual communication patterns
- Data harvesting before exfiltration
- Internal misuse and policy violations
- Network scanning and enumeration
- Unusual data transfer rates and protocols
- Configuration errors and network mismanagement
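As promised above, here is a minimal sketch of how two detections of this kind, network scanning and unusual data transfer volumes, could be expressed over normalized flow records (the FlowRecord objects from the earlier sketch). The thresholds and the internal_prefix parameter are arbitrary placeholders, and a real system would use learned baselines and environment context rather than static cutoffs; this is not how Fusion’s detection models are implemented.

```python
# Illustrative only: two toy detections over normalized flow records.
# `flows` is any iterable of FlowRecord objects as defined in the earlier sketch.
from collections import defaultdict

SCAN_TARGET_THRESHOLD = 100            # distinct (dst_ip, dst_port) pairs per source
EXFIL_BYTES_THRESHOLD = 5_000_000_000  # outbound bytes per source in the batch

def detect_scanning(flows):
    """Flag sources that touch an unusually large number of distinct
    destination IP/port pairs within the batch (a crude scan heuristic)."""
    targets_seen = defaultdict(set)
    for fr in flows:
        targets_seen[fr.src_ip].add((fr.dst_ip, fr.dst_port))
    return [src for src, targets in targets_seen.items()
            if len(targets) > SCAN_TARGET_THRESHOLD]

def detect_unusual_transfer(flows, internal_prefix="10."):
    """Flag internal sources whose outbound byte volume exceeds a static
    threshold (a stand-in for comparison against a learned baseline)."""
    outbound_bytes = defaultdict(int)
    for fr in flows:
        if fr.src_ip.startswith(internal_prefix) and not fr.dst_ip.startswith(internal_prefix):
            outbound_bytes[fr.src_ip] += fr.bytes
    return [src for src, total in outbound_bytes.items()
            if total > EXFIL_BYTES_THRESHOLD]
```

The point of the sketch is that compromise-centric detections operate on metadata you already have, so they keep working even where DPI appliances or agents were never deployed.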
If a response is warranted, Fusion signals out to the infrastructure or your existing tech stack utilizing our dozens of response integrations.
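As an illustration of what signaling out can look like in practice, here is a minimal sketch of pushing a detection event to a downstream tool over a generic authenticated webhook. The endpoint, token handling, and payload shape are assumptions for this example, not Fusion’s integration API.

```python
# Illustrative only: forwarding a detection event to an existing tool via a
# generic webhook. The URL, token, and payload fields are hypothetical.
import json
import urllib.request

def send_response_action(event: dict, webhook_url: str, token: str) -> int:
    """POST a detection event to a downstream tool (for example, a SOAR
    playbook trigger or a firewall block-list API) and return the HTTP status."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(event).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Hypothetical usage:
# send_response_action(
#     {"detection": "port_scan", "src_ip": "10.1.2.3", "severity": "high"},
#     "https://soar.example.com/hooks/flow-detections",
#     "REDACTED_TOKEN",
# )
```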
Fusion addresses the limitations of brittle defenses, outdated deployment requirements, and the Tower of Babel problem. As a 100% SaaS platform, Fusion can start ingesting flow and DNS logs in minutes and operates at cloud scale, which means you get meaningful detections in real time.
Detections that tell you what you’ve got, what it’s doing, and what’s happening to it across your entire cloud and on-prem environment help you strengthen your organization’s cyber resilience and confidently and quickly mitigate the impact of an attack.