Legacy Effect: Why Innovation Is Hard for Decades-Old Companies

By Matt Wilson, VP Product Management

As a longtime product management leader, I'm driven by innovation, and I've thought a lot about why legacy services have a hard time innovating. It has nothing to do with the strength of a company's product team or its engineers; successful technology companies come out of the gate with a strong product. The cracks emerge three to five years into the journey, when the market has evolved and companies begin to lose their innovation advantage. With the Atomized Network, we now see this happening to traditional network visibility and detection companies, and there are several reasons why.

The industry is changing out from underneath them. Enterprise networks are moving from static, low-bandwidth, highly centralized environments to composites of multi-cloud, hybrid-cloud, and on-premises infrastructure. These Atomized Networks are cloud-scale, ephemeral, and encrypted. Everything has an IP address, everybody is remote, and even what isn't remote is moving out of the traditional network environment into cloud infrastructure. These changes have massive implications for network security, so our conventional tools need to change as well.

Teams become overly attached to their old technology. If you've operated one way for years, have decades invested in that architecture, and it has worked well, it's hard to admit that it may no longer be the right way to do things. As the saying goes, "if all you have is a hammer, everything looks like a nail." But when you're no longer dealing with a nail, you need a different tool. When designing products for a new reality, teams have to be willing and able to throw out the technologies, methods, and procedures they used in the past and completely rearchitect.

Rethinking the problem is hard. Because the market has evolved, it's important to step back and take a fresh look at the problem you're trying to solve. The customer must be the guiding light, and the technology and architecture must change over time to meet customer requirements. But starting from scratch is significantly harder when you have a large, embedded user base that is accustomed to one way of doing things and still has old use cases to address alongside the new ones. And retraining a legacy team to adjust to and operate against new problem sets presents its own challenges.

Acquisition and "bolt-ons" become the default next step. Companies that can't rethink the problem and cast aside aging approaches are forced by the market to acquire. They buy companies as much for the technology and customers they bring as for new talent that introduces a fresh way of thinking. But few companies do acquisitions really well, because it requires completely changing their perspective and their development cycle. Often they end up creating bolt-ons, tacking new capabilities onto their existing suite of tools. The result is separate teams supporting distinct products. Users move between multiple panes of glass and multiple environments, using tools with different capabilities, which makes it impossible to detect or threat hunt across them and adds to the operational challenge of managing everything. The cultural impact of the cloud is just as important as the technology itself: it has changed how teams must operate to meet new go-to-market demands, and a failure to embrace new deployment models can make it extremely difficult for customers to consume these capabilities. Post-acquisition solutions typically aren't connected, easy, or intuitive, and sometimes they can't be because the old technology is still hanging around, so customers suffer and start to peel off.

This is why new companies are needed. 

At Netography, we do things differently because we aren't tied to the way things used to be. We believe that great developers don't invest their identity in their current code, and that great product teams must be willing to throw out anything and everything they've ever built to better solve a different set of problems for a changing industry. We do this by letting the customer experience we want to provide guide us, with the technology underneath adapting and changing to fit that journey.

In our four years of existence, we've evolved the way we do things multiple times. We've rebuilt entire subsystems and revamped how the product works to provide a better customer experience. Within the same portal, customers can visualize, detect, hunt, and remediate threats across their entire Atomized Network. This is possible because we do whatever it takes to better meet the market, from using cloud and on-prem flow data rather than packet captures for complete network visibility, to creating more efficient data stores. Recently, we added powerful context labeling that opens up new capabilities and new use cases, enabling teams to accelerate and improve analysis, decision-making, response, and reporting for the hyper-scale, multi-cloud world we live in.

Granted, continuous innovation is easier when you're a smaller company. But we also know that the most successful companies don't limit themselves to building a better mousetrap. Incremental change is not the end game; rearchitecting the problem from scratch, for the way the world is today and how it will evolve, is.