
Unraveling The Clues: Delivering On The Promise Of High-Fidelity, Actionable Alerts

By Dan Ramaswami, VP of Field Engineering

It’s the time of year for family gatherings and that often includes board games. Clue was a favorite for me growing up. The object is to be the first to unravel the clues and figure out the who, the what, and the where of the murder. Once you have two of the three, you’re able to drill down into what that third might be and you’re really close to cracking the case. More often than not, it seemed to me, “the who” was Colonel Mustard, “the what” was the candlestick, and “the where” was the library…but I digress.

There are a lot of parallels between the components we need to piece together over the course of a Clue game and the components we need to decorate the signals from Netography. The data that helps determine the who (user information), the what (host information), and the where (location information) lives across the Atomized Network environment. When we can gather and apply relevant data for organizational context, the signal gains far more luster and value, and we can determine whether an alert is of interest or not.

In my last blog, I talked about how we’ve done a good job in the industry of building large data repositories to house massive volumes of information, but the data we need is not always there. Even waiting until the signal lands in the SIEM doesn’t ensure we’ll always have all of that context.

The “who” lives in places like Active Directory and identity and access management (IAM) systems, and there can even be useful user information on local hosts. But all the datapoints needed to enrich the signal are scattered, so we swivel-chair between different tools and technologies to piece them together, which wastes a lot of time and effort.
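To make that concrete, here is a minimal sketch of “who” enrichment: joining a user directory export (say, pulled from Active Directory or an IAM system) onto a network signal keyed by source IP. The field names and lookup table are illustrative assumptions, not the Netography Fusion schema.

```python
# Hypothetical directory export: source IP -> user context.
USER_CONTEXT = {
    "10.20.30.40": {"user": "jdoe", "department": "Finance", "title": "Analyst"},
}

def enrich_who(alert: dict) -> dict:
    """Attach "who" context to an alert, if we have any for its source IP."""
    who = USER_CONTEXT.get(alert.get("src_ip"), {})
    return {**alert, **{f"who_{key}": value for key, value in who.items()}}

alert = {"src_ip": "10.20.30.40", "event": "suspicious_outbound"}
print(enrich_who(alert))
# -> {'src_ip': '10.20.30.40', 'event': 'suspicious_outbound',
#     'who_user': 'jdoe', 'who_department': 'Finance', 'who_title': 'Analyst'}
```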

The “what” usually resides in configuration management databases (CMDBs), and there’s a ton of useful information about the host in endpoint detection and response (EDR) systems as well. Additionally, there are numerous tools that take advantage of open-source intelligence (OSINT) to collect data on the host itself, so that we have information on the applications running, the services running, and the users on the system. All of this local data is needed to understand what the impact or severity of an event might be.
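In the same spirit, a sketch of “what” enrichment might merge host context from several sources (CMDB, EDR, OSINT scans) into one record. The source names, fields, and precedence rule here are assumptions; in this version, later sources simply win on conflicting keys.

```python
from functools import reduce

def merge_host_context(*sources: dict) -> dict:
    """Merge host records from multiple sources; later sources override earlier ones."""
    return reduce(lambda merged, source: {**merged, **source}, sources, {})

cmdb  = {"hostname": "fin-db-01", "owner": "dba-team", "os": "RHEL 8"}
edr   = {"os": "RHEL 8.9", "edr_agent": "installed", "last_seen": "2024-11-02"}
osint = {"open_services": ["ssh", "postgres"]}

print(merge_host_context(cmdb, edr, osint))
# The EDR's fresher OS version overrides the stale CMDB entry.
```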

The “where” information tends to be more “tribal” because there is a lot of regional information that is only understood and known by the people who designed the network. They are the ones who know, for example, that the first 15 IP addresses in a Class C IP address range are reserved for routers and maybe PBXs. That’s the kind of thing that, while important, isn’t always readily available and certainly doesn’t always make it onto the signal stream where it is needed most: in line with the alert itself.
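That tribal knowledge can be captured once as machine-readable “where” labels. Below is a minimal sketch, assuming illustrative ranges and label names: map address blocks to labels, then tag any IP against the list (a /28 covers the first 16 addresses of a Class C, roughly the “first 15” reserved above).

```python
import ipaddress

# Ordered most-specific-first: the first matching network wins.
WHERE_LABELS = [
    (ipaddress.ip_network("192.168.10.0/28"), "routers-and-pbx"),  # .0-.15
    (ipaddress.ip_network("192.168.10.0/24"), "hq-user-subnet"),
]

def label_where(ip: str) -> str:
    """Return the "where" label for an IP, or "unknown" if no range matches."""
    addr = ipaddress.ip_address(ip)
    for network, label in WHERE_LABELS:
        if addr in network:
            return label
    return "unknown"

print(label_where("192.168.10.3"))   # routers-and-pbx
print(label_where("192.168.10.77"))  # hq-user-subnet
```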

At Netography, part of our customer experience promise includes working with customers to define those important context data points and getting all that information into one place so that it can be applied as part of the signal stream to deliver alerts that matter. This is a powerful differentiator from data stores and data lakes, and it’s what the addition of context labeling in our latest release of the Netography Fusion® platform is all about.

We’ve built a mechanism to take in all this context through two different avenues, and our customer experience team is dedicated to helping customers leverage both. The first is through APIs: programmatically accessing this context and importing it into the Netography Fusion platform, so that as soon as the signal is lit, all that context is attached to it. The other is bulk loading the data manually: if there is a large data repository that needs to be manually exported and then imported into the Fusion platform, our team is happy to handle that entire process without customers having to lift a finger.
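For a sense of what the API avenue might look like, here is a hypothetical sketch of pushing context labels programmatically. The endpoint URL, auth scheme, and payload shape are invented for illustration; the real interface is defined by the Netography Fusion API documentation.

```python
import json
import urllib.request

API_URL = "https://api.example.com/context/labels"  # placeholder, not the real endpoint
API_TOKEN = "REDACTED"                              # placeholder credential

# One label record tying who/what/where context to an IP.
labels = [
    {"ip": "10.20.30.40", "who": "jdoe", "what": "fin-db-01", "where": "hq-user-subnet"},
]

request = urllib.request.Request(
    API_URL,
    data=json.dumps(labels).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(response.status)  # expect 200/201 on success
```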

The who, what, and where are the three main components that provide important data points for enrichment to create true, high-fidelity, actionable alerts worthy of an outcome. That’s why we’re focused on making it easy for customers to bring in and apply that organization-specific context at the time of ingestion to quickly crack the case. It won’t be Colonel Mustard with a candlestick in the library. But I can guarantee that your security operations center (SOC), cloud operations, and network teams will be able to triangulate all these disparate types of data quickly and act with conviction.