“You Can’t Protect What You Can’t See” Still Rings True. Why Observability Now.
by Martin Roesch
Remember the old saying, “You can’t protect what you can’t see”? When I started preaching it as part of the marketing launch for Real-time Network Awareness (RNA), it seemed pretty obvious that we needed more visibility to protect our environments more effectively. But in the intervening years we have, as an industry, managed to go in the opposite direction, making it increasingly difficult to gain a comprehensive understanding of our modern networks.
Here’s what happened…
First came the increasing use of encryption in networks, a problem because recovering cleartext for analysis once traffic is encrypted gets computationally and operationally expensive, and therefore financially expensive too. With TLS 1.3, released in 2018, forward secrecy became effectively mandatory: every session negotiates unique, ephemeral keys, so decrypting captured traffic after the fact, even with a copy of the server’s private key, is no longer practical. As network encryption continues to evolve, the tricks we play to get around it become increasingly unwieldy and less effective at the same time.
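To make the forward-secrecy point concrete, here is a minimal sketch of the ephemeral key-agreement pattern TLS 1.3 mandates. This is not TLS itself, just an illustration using X25519 from Python’s cryptography library: each “session” uses throwaway key pairs that are discarded afterward, so there is nothing persistent an eavesdropper could later steal to decrypt recorded traffic.

```python
# Minimal sketch of ephemeral (forward-secret) key agreement -- the pattern
# TLS 1.3 mandates. Not TLS itself, just an illustration using X25519.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def new_session_key() -> bytes:
    # Both sides generate fresh, throwaway key pairs for this session only.
    client_priv = X25519PrivateKey.generate()
    server_priv = X25519PrivateKey.generate()

    # Each side combines its private key with the peer's public key;
    # both arrive at the same shared secret.
    shared = client_priv.exchange(server_priv.public_key())

    # Derive the actual session key from the shared secret.
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"session").derive(shared)
    # The ephemeral private keys go out of scope here; with nothing
    # persistent to steal, captured ciphertext can't be decrypted later.

print(new_session_key().hex())  # a different key every session
```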
This was followed by the problem posed by data centers and workloads moving to the cloud, where the classic appliance-based models don’t translate effectively. Generally speaking, cloud environments really don’t want you doing deep packet inspection at scale in the old model of using a few powerful sensors to observe large swathes of the network. As a result, workloads essentially have to defend themselves; every workload is an island responsible for its own integrity, defense, and reporting in the event of a successful compromise. Having worked in security for almost 30 years, I find this a dubious proposition, but it’s the road we, as an industry, decided to go down.
We have also seen the fallout from the traditional threat-centric approach to detection and response: the immense cost and complexity involved in finding a security event that warrants a response. Security teams now talk about millions of event logs per day flowing into their expensive pipeline of people, process, and technology just to figure out which events they need to worry about. As it turns out, when you do the analysis, in most circumstances a vanishingly small number of those events actually indicate a compromise or other security issue that should elicit a response.
The time has come to ask ourselves why we continue to try to protect our enterprise networks this way. Isn’t the definition of madness doing the same thing over and over again and expecting different results?
Getting to observability
When I started looking for ways to provide useful observability and detection capabilities to cloud users and similar architectures, capabilities that could help move us in the right direction, the architecture that came to the fore was the one pioneered for cloud-based antimalware. Known today as endpoint detection and response (EDR), these products collect metadata about the endpoint and the processes running on it, then forward that data to a cloud backend that does the heavy lifting of detection and sends out response information.
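As a rough sketch of that split, a lightweight agent gathering metadata while a cloud backend does the analysis, here is a minimal illustration. The backend URL is hypothetical, and a real EDR agent collects far richer telemetry than this:

```python
# Minimal sketch of the EDR-style split: a lightweight agent gathers
# endpoint metadata and ships it to a cloud backend for analysis.
# The backend URL is hypothetical; real agents collect far richer telemetry.
import socket
import psutil    # pip install psutil
import requests  # pip install requests

BACKEND = "https://backend.example.com/ingest"  # hypothetical endpoint

def collect_metadata() -> dict:
    # Metadata only: process names and network peers, not content.
    return {
        "host": socket.gethostname(),
        "processes": [p.info for p in
                      psutil.process_iter(["pid", "name", "username"])],
        "connections": [
            {"laddr": c.laddr, "raddr": c.raddr, "status": c.status}
            for c in psutil.net_connections(kind="inet") if c.raddr
        ],
    }

# The heavy lifting -- enrichment, detection, response -- happens server-side.
requests.post(BACKEND, json=collect_metadata())
```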
I brought these three threads together: the limitations of operating deep packet inspection (DPI) in heavily encrypted environments, the poor fit of the appliance-based form factor in the cloud, and the potential havoc wrought by poorly configured or curated threat-centric security approaches. And I began to imagine what an EDR-like architecture would look like in a network defense platform (NDP).
Fast forward to today, and what that looks like is our cloud-native Netography Fusion™ platform.
Netography Fusion collects metadata in real time from your multi-cloud VPCs as well as your existing on-prem network infrastructure. That data is brought into a cloud-based analytics backend where enrichment, analytics, and detection happen. If a response is warranted, Fusion signals out to the infrastructure or your tech stack using our dozens of integrations. And one of the most powerful aspects is that Fusion works equally well everywhere: in any cloud and on-prem, in IT and OT environments. That’s because it leverages information about activity in the environment and the participants in those activities instead of trying to decompose packet streams and protocols.
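To make “metadata from the infrastructure you already have” concrete, here is a simplified, generic sketch (not Fusion’s actual pipeline) that parses AWS VPC Flow Log records in the default v2 format, the kind of activity metadata this model consumes instead of packets:

```python
# Sketch: turning AWS VPC Flow Log records (activity metadata, not packets)
# into structured events ready for enrichment and detection.
from dataclasses import dataclass

# Field order of the default VPC Flow Logs v2 format.
FIELDS = ("version account_id interface_id srcaddr dstaddr srcport dstport "
          "protocol packets bytes start end action log_status").split()

@dataclass
class FlowEvent:
    src: str
    dst: str
    dport: int
    proto: int
    nbytes: int
    action: str

def parse(record: str) -> FlowEvent:
    f = dict(zip(FIELDS, record.split()))
    return FlowEvent(src=f["srcaddr"], dst=f["dstaddr"],
                     dport=int(f["dstport"]), proto=int(f["protocol"]),
                     nbytes=int(f["bytes"]), action=f["action"])

# Example record: an accepted SSH connection, visible without any decryption.
rec = ("2 123456789010 eni-0a1b2c3d 10.0.1.5 10.0.2.9 49152 22 6 "
       "10 840 1620140661 1620140721 ACCEPT OK")
print(parse(rec))
```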
Doing metadata-based analysis of activities and their participants elevates the conversation from threat-centric, “Tell me whenever you see a Log4j attack,” to compromise-centric, “I want to look for a containment failure in my Zero Trust environment.”
You can ask and get answers to more meaningful questions, for example (a simplified sketch of one such check follows this list):
- Are trust boundaries being violated?
- Are there communication patterns that shouldn’t be happening, like dev talking to prod?
- Are there changes to device communication patterns?
- Are applications exhibiting novel behaviors?
- Are users talking to things they shouldn’t be talking to?
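Here is what the first two questions can look like as a check over flow metadata. This is a deliberately simplified sketch; the zone labels and address ranges are hypothetical examples:

```python
# Sketch: answering "is dev talking to prod?" from flow metadata alone.
# Zone labels and address ranges are hypothetical examples.
from ipaddress import ip_address, ip_network

ZONES = {
    "dev":  ip_network("10.10.0.0/16"),
    "prod": ip_network("10.20.0.0/16"),
}
FORBIDDEN = {("dev", "prod")}  # policy: dev must never talk to prod

def zone_of(addr: str) -> str | None:
    ip = ip_address(addr)
    return next((z for z, net in ZONES.items() if ip in net), None)

def trust_boundary_violations(flows):
    # flows: iterable of (src, dst) pairs drawn from flow metadata
    for src, dst in flows:
        if (zone_of(src), zone_of(dst)) in FORBIDDEN:
            yield f"containment failure: {src} (dev) -> {dst} (prod)"

flows = [("10.10.3.7", "10.20.8.1"),   # dev -> prod: violation
         ("10.20.8.1", "10.20.8.2")]   # prod -> prod: fine
for alert in trust_boundary_violations(flows):
    print(alert)
```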
Instead of looking at every packet that comes across the wire to detect a specific attack and letting you know – regardless of whether or not that detection means anything in your environment – Netography Fusion observes activities to detect actual signs of abuse, misuse, misconfiguration, or compromise.
Reversing the trajectory from less to more visibility
It’s not surprising that “you can’t protect what you can’t see” still rings true. Yes, within packets and protocols you can see the activities inside the content being sent. But as you try to scale this approach to large organizations, the computational costs become immense, and the financial costs with them: you have to perform decryption and stateful packet inspection, and you have to deploy an appliance in exactly the right place to see the traffic and do something about it. Trying to do DPI across an entire enterprise with dozens to hundreds of points of presence on the internet is cost prohibitive, and it still leaves blind spots in and between clouds and out to OT environments.
Shifting to a model architected for today’s diverse, dispersed, and encrypted environments means you can instrument your entire network, because you are leveraging data from the infrastructure you already have, and therefore move in the direction of more visibility, not less. With Netography Fusion, you can observe the entirety of your enterprise network and all its points of presence: multi-cloud and on-prem, in IT and OT environments. And you can observe and respond to the handful of activities that absolutely matter, instead of sifting through the thousands of events that do not.