How to Close the Visibility Gaps Across Your Multi-Cloud Environment
By Martin Roesch
Nearly 90% of organizations have a multi-cloud environment today. Whether the result of an acquisition or adopted for cost containment, business resilience, best-of-breed capabilities, or compliance, multi-cloud environments are popular for good reasons. However, they also create operational challenges for teams trying to build a comprehensive picture of their cloud estate, detect security compromises, and surface compliance and governance issues.
Different tools
Think about an environment in which you have Microsoft Azure, Google Cloud, and Amazon Web Services (AWS) all active in your enterprise. When assets and workloads are spread across multiple cloud environments, how do you know what you’ve got, what it’s doing, what’s happening to it, and how it is interoperating within and between clouds? Chances are you’re going to use different tools from different providers to try to figure that out. These could include the native tools each cloud service provider offers, perhaps a tool such as Wiz in one environment but not in another, or logs aggregated into a data lake to try to synthesize a common understanding across all of your cloud properties.
Unfortunately, the different tools in each cloud environment have their own configurations to manage, their own distinct languages, and usually their own eventing and reporting platforms. If you are looking for security issues in those environments, you will most likely end up writing detection logic for each individual cloud provider as well. These tools are largely trying to achieve similar outcomes, but because they are different, they may not be looking for the same things: how they define and generate an event, what they report on, and how they report it all differ.
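To make that concrete, here is a minimal sketch of what it takes just to get three clouds’ flow records into one comparable shape. The field names are simplified from each provider’s flow log format, and the Flow type and from_* helpers are hypothetical illustrations, not any vendor’s actual schema:

    from dataclasses import dataclass

    @dataclass
    class Flow:
        """A provider-neutral flow record (deliberately minimal)."""
        src_ip: str
        dst_ip: str
        dst_port: int
        action: str  # normalized to "allow" / "deny"

    def from_aws(record: dict) -> Flow:
        # AWS VPC Flow Logs: space-delimited fields, assumed pre-parsed into a dict.
        return Flow(record["srcaddr"], record["dstaddr"], int(record["dstport"]),
                    "allow" if record["action"] == "ACCEPT" else "deny")

    def from_azure(flow_tuple: str) -> Flow:
        # Azure NSG flow logs: each flow is a comma-separated "flow tuple".
        _, src, dst, _, dport, _, _, decision = flow_tuple.split(",")[:8]
        return Flow(src, dst, int(dport), "allow" if decision == "A" else "deny")

    def from_gcp(entry: dict) -> Flow:
        # GCP VPC Flow Logs: JSON with a nested "connection" object; these logs
        # sample permitted traffic, so there is no deny action to normalize.
        c = entry["connection"]
        return Flow(c["src_ip"], c["dest_ip"], int(c["dest_port"]), "allow")

    print(from_azure("1700000000,10.1.0.4,10.2.0.5,51000,443,T,O,A"))

Every detection you write downstream depends on this kind of normalization being done consistently – and in practice each tool does it differently, or not at all.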
Unsurprisingly, a piecemeal approach leaves big gaps in understanding within each cloud environment, and you still don’t have observability into how the different clouds are interacting with each other. How you think things are configured and what is actually happening can be two radically different things. Application A in cloud A may be talking to application B in cloud B, and both may be using the same database in cloud C – but how do you know that, and is it okay? That may or may not be institutional knowledge, or it might be something just one team knows. But do the people operationally responsible for availability and policy compliance have the real-time oversight they need to confirm that traffic is acceptable, or to take action if it is the result of misconfiguration or compromise?
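As a rough illustration of that oversight question, here is a hypothetical policy check over normalized flow records tagged with the cloud they came from. The Flow type, the tagging, and the approval list are all assumptions made for the sake of the example:

    from typing import NamedTuple

    class Flow(NamedTuple):
        src_cloud: str
        src_ip: str
        dst_cloud: str
        dst_ip: str
        dst_port: int

    # Approved cross-cloud paths: (source cloud, destination cloud, destination port).
    APPROVED = {
        ("aws", "gcp", 5432),  # e.g., application A reading the shared database
    }

    def audit(flows: list[Flow]) -> None:
        for f in flows:
            cross_cloud = f.src_cloud != f.dst_cloud
            if cross_cloud and (f.src_cloud, f.dst_cloud, f.dst_port) not in APPROVED:
                print(f"unapproved cross-cloud flow: {f.src_ip} ({f.src_cloud}) "
                      f"-> {f.dst_ip}:{f.dst_port} ({f.dst_cloud})")

    audit([
        Flow("aws", "10.0.1.5", "gcp", "10.128.0.9", 5432),    # approved path
        Flow("azure", "10.1.0.7", "gcp", "10.128.0.9", 5432),  # gets flagged
    ])

The check itself is trivial; the hard part is what the sketch assumes away – having every cloud’s flows in one normalized place, tagged with enough context to evaluate in real time.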
Different teams
The gaps in knowledge created by using different tools are compounded by having different operational teams. A parallel that comes to mind is airplane pilots and the different environments in which they operate. A recreational pilot, a private pilot, and a commercial pilot all fly planes, but they hold different ratings, which means vastly different capabilities, experience, and qualifications.
The same can be said for the different teams operating within a multi-cloud environment. They are all trying to do the same thing – manage and secure their cloud environment – but one team may have grown up with AWS and know that environment really well, while another may just be getting started with Google Cloud. You might also have a team that digs through log files to find which IP addresses are talking to which over which ports, but that approach lacks the context needed to determine the impact of what is happening and whether it is a problem for governance or other security reasons. The typical alternative – sending all flow and log data to a data lake, then writing queries against it and leaning on dashboards and reports to detect compromises or anomalous behavior – can take hours to produce answers, because it is both costly and hard to scale.
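The log-digging approach boils down to something like the following sketch – reducing flow records to who-talked-to-whom-on-which-port counts (the Flow type here is, again, illustrative):

    from collections import Counter
    from typing import NamedTuple

    class Flow(NamedTuple):
        src_ip: str
        dst_ip: str
        dst_port: int

    def ip_port_pairs(flows: list[Flow]) -> Counter:
        # Reduce flows to (src, dst, port) counts: the raw output of log digging.
        return Counter((f.src_ip, f.dst_ip, f.dst_port) for f in flows)

    pairs = ip_port_pairs([
        Flow("10.0.1.5", "10.2.0.8", 443),
        Flow("10.0.1.5", "10.2.0.8", 443),
        Flow("10.0.9.3", "10.2.0.8", 22),
    ])
    for (src, dst, port), n in pairs.items():
        print(f"{src} -> {dst}:{port} x{n}")

The counts are easy to produce; what’s missing is everything that makes them actionable – what the hosts are, who owns them, and whether the conversation violates policy.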
It’s almost certain that you’ll run into trouble providing information to the governance and audit team or, if there is a compromise, achieving the observability across your entire multi-cloud environment needed to understand what is going on and what action to take. Without equally comprehensive capabilities everywhere, organizations are exposed to risk: if an attacker compromises one cloud environment, they may be able to move laterally within and between clouds, and even reach on-prem infrastructure.
Netography Fusion is the equalizer
Netography Fusion is a cloud-native Network Defense Platform (NDP) designed to absorb this dense, high-volume telemetry and bring diverse teams together. It aggregates and normalizes flow logs from all the major cloud providers and enriches that data with threat intelligence and context to provide complete, real-time visibility into what’s happening across your entire multi-cloud and hybrid network.
Fusion delivers visibility in a unified console and uses one common language, so you can write a detection once and apply it everywhere. As a 100% SaaS platform, it can start ingesting flow logs in minutes from anywhere in your multi-cloud network, and it operates at scale, which means you get meaningful detections in seconds rather than hours.
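In spirit, “write once and detect everywhere” looks like the sketch below. To be clear, this is not Fusion’s actual rule language – just an illustration that once flow records are normalized, a single piece of detection logic can cover every provider:

    from typing import NamedTuple

    class Flow(NamedTuple):
        cloud: str     # which provider the record came from
        src_ip: str
        dst_ip: str
        dst_port: int
        action: str    # "allow" / "deny" after normalization

    TELNET, RDP = 23, 3389

    def risky_service_exposure(f: Flow) -> bool:
        # One rule, evaluated identically for AWS, Azure, and GCP records.
        return f.action == "allow" and f.dst_port in (TELNET, RDP)

    all_flows = [
        Flow("aws", "203.0.113.7", "10.0.1.5", 3389, "allow"),
        Flow("azure", "10.1.0.7", "10.1.0.8", 443, "allow"),
        Flow("gcp", "198.51.100.2", "10.128.0.9", 23, "allow"),
    ]
    for f in all_flows:
        if risky_service_exposure(f):
            print(f"alert: {f.src_ip} -> {f.dst_ip}:{f.dst_port} in {f.cloud}")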
Netography Fusion closes the visibility gaps across your multi-cloud environment and delivers the same comprehensive capabilities to all of your teams, so you can capitalize on the benefits of your multi-cloud strategy while reducing risk.