To Escape Data Chaos, Consolidate the Fragmented Storage Landscape
Multiplying Storage Solutions Leads to Data Sprawl
The digitization of so many aspects of our personal lives and business activities has exponentially increased the amount of data organizations collect across every industry. This data continues to double in size every two years and is expected to reach 44 zettabytes by 2020, making storage and management one of the most costly and resource-intensive aspects of IT today. Although the cost of memory and compute power continues to decline, hardware advances alone cannot keep pace with the demand for faster access to more data across an increasing variety of use cases.
The storage industry’s initial reaction to this challenge was to introduce a separate point solution for each task. Although this approach offers an immediate answer to each individual challenge, it is ultimately shortsighted and will not scale effectively. The typical enterprise buys backup targets and separate backup software from multiple vendors for data protection, a NAS solution for file services, and yet another storage product for test/dev, creating a sprawling storage landscape with many data copies—often without even realizing it.
This not only leads to duplicate data across the organization, but also makes storage management incredibly complex, as system administrators are forced to juggle a dizzying array of systems and interfaces on a daily basis. As data grows, so do the inefficiencies and complexity. Indeed, a recent IDC survey found that data growth and complexity were the top data management concerns for IT decision makers, ahead of demands for faster data retrieval and budgetary pressures. In an effort to keep up with the deluge, many IT departments have lost even the most basic insight into what data they’re collecting and how it’s being stored, which makes it impossible to control its growth.
The Case for Consolidation
Consolidating various storage workflows on a single platform can solve today’s management complexity problems and enable much more efficient use of storage resources. Approximately 80 percent of data is held in secondary (non-primary) storage (across solutions designed for disaster recovery and archiving, for example), but the industry has devoted far less attention to this area than to primary storage, which has realized major efficiencies through new, scalable, hyperconverged architectures. Copy data management has recently come into the spotlight and this year was added to Gartner’s Hype Cycle for Storage Technologies. But on the whole, secondary storage consolidation remains an afterthought.
Perhaps the lack of attention on secondary storage consolidation stems from the notion that it would be too difficult to bring so many different workflows, with varying quality-of-service and scalability requirements, together under a single architecture. Traditionally, backup is considered a passive data workflow, yet it still requires specific ingest speeds and recovery time objectives, while test/dev demands higher performance but tolerates lower resiliency. As a result, storage products have relied on customized designs for each type of workload.
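To make the tension concrete, the differing requirements can be sketched as per-workload quality-of-service profiles that a consolidated platform would have to satisfy simultaneously. This is a minimal illustration, not any vendor's actual model; the field names, numbers, and the `can_colocate` check are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class WorkloadQoS:
    """Hypothetical per-workload quality-of-service profile."""
    name: str
    min_ingest_mb_s: int      # required sustained ingest throughput
    rto_minutes: int          # recovery time objective
    replication_factor: int   # resiliency: number of data copies kept

# Illustrative profiles: backup needs ingest speed and a firm RTO,
# while test/dev favors performance over resiliency.
backup = WorkloadQoS("backup", min_ingest_mb_s=500, rto_minutes=60, replication_factor=3)
test_dev = WorkloadQoS("test/dev", min_ingest_mb_s=200, rto_minutes=240, replication_factor=1)

def can_colocate(a: WorkloadQoS, b: WorkloadQoS, node_ingest_mb_s: int) -> bool:
    """A consolidated platform must meet both profiles at once; here the
    node's ingest bandwidth must cover the combined demand."""
    return a.min_ingest_mb_s + b.min_ingest_mb_s <= node_ingest_mb_s

print(can_colocate(backup, test_dev, node_ingest_mb_s=1000))  # True
print(can_colocate(backup, test_dev, node_ingest_mb_s=600))   # False
```

The point of the sketch is that these profiles are just data: software can reason about them and place workloads dynamically, rather than hard-wiring one design per workload.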
But handling multiple performance requirements is not an insurmountable engineering challenge, and the potential benefits of consolidated secondary storage in the face of growing data sprawl should make this a top priority for the industry. The recent rise of affordable flash and web-scale storage architectures enables much more flexible platforms that can dynamically handle heterogeneous workload requirements while balancing performance and resiliency through intelligent software, all at lower cost.
Consolidating secondary storage on a single platform can address several key business problems at once. Bringing together data used for multiple functions can drastically reduce the amount of redundant data copies spread across the organization, immediately relieving pressure on storage resources. In addition, a single platform that handles all non-primary workloads can scale far more efficiently: instead of provisioning server space for each separate use case, administrators would be able to bundle all secondary storage demands together. In the same vein, having a single platform to view the full range of data use cases enables IT to understand how data is used and plan more effectively for expansion.
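One common mechanism behind the reduction in redundant copies is content-addressed storage: chunks are keyed by a hash of their contents, so identical data backed up by different workflows lands on the same physical copy. A minimal sketch (the `store` helper and chunk payload are hypothetical, and real systems add chunking, compression, and reference counting):

```python
import hashlib

def store(chunk_store: dict, data: bytes) -> str:
    """Content-addressed store: identical chunks share one physical copy."""
    digest = hashlib.sha256(data).hexdigest()
    chunk_store.setdefault(digest, data)  # write only if not already present
    return digest

chunks = {}
# The same VM image protected by three separate point solutions would be
# stored three times; on a consolidated, content-addressed platform the
# three references resolve to a single physical copy.
refs = [store(chunks, b"vm-image-block-0001") for _ in range(3)]
print(len(refs), "references ->", len(chunks), "physical copy")
```

The same property that saves capacity also aids visibility: a single namespace of content hashes is far easier to audit than copies scattered across independent silos.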
To handle the new wave of data demands, companies need to find a path through the fragmented storage landscape that confronts them today. One way out is to consolidate all of the point solutions that make up secondary storage on a single platform, where data is easy to track and manage and can be scaled efficiently. That doesn’t mean it’s an easy problem to solve – different types of data and use cases have vastly different workload requirements, and bringing them together will require a platform with a high degree of flexibility. However, CEOs, CTOs and IT pros need to understand that the current path of simply adding more point solutions to the data sprawl will end in complete chaos. We need to rethink our approach to storage with a focus on the simplicity, flexibility and scalability that fits the many ways we use data today.