The Evolution of the Storage Admin into the Data Admin
Posted in tech
It’s no secret that with all the different types of storage in the enterprise, a storage admin’s job is more complex and challenging than ever. Unprecedented data growth and new applications of data are making this role even harder, as storage admins find themselves managing new storage silos, including flash-based storage as well as private and public clouds.
DataSphere aligns data with the ideal storage resources to meet application needs, automatically and transparently, based on user-defined objectives. This makes storage administration more productive and predictable while reducing costs for enterprises. It also transforms many conventional storage-based tasks into activities focused on the value of data. As a result, storage admins will find their role evolving into a more visible one, where their contributions can be tied to the services they provide for data, and therefore the business. Let’s examine this transition.
Automating Storage Refreshes
Conventional migrations and upgrades require extensive planning and can take weeks or months to perform. Storage admins must size performance and capacity for all expected projects over the hardware’s expected lifetime, typically three to five years. Admins must then plan and perform multiple steps to minimize the risk of disruption or downtime when migrating storage systems. This risk increases as admins perform disruptive or destructive upgrades to existing storage systems, move active or critical data to new, more performant storage tiers, and push less active data down to lower-cost storage resources.
DataSphere automates the placement and movement of data – transparently to applications – to virtually eliminate the risk of downtime. The storage admin’s role becomes one of determining the levels of performance, protection and price that data requires, and then deploying hardware to support those requirements. To help admins make these assessments, DataSphere gives admins visibility into whether data is hot, cool or cold.
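As a rough sketch of how such an activity assessment might work, a file could be bucketed by how recently it was accessed. The thresholds and the use of access timestamps here are illustrative assumptions; DataSphere’s actual activity model is not described in this post.

```python
from datetime import datetime, timedelta

# Hypothetical windows for illustration only; real systems may weigh
# IO telemetry, not just the last access timestamp.
HOT_WINDOW = timedelta(days=7)
COOL_WINDOW = timedelta(days=90)

def classify_activity(last_access: datetime, now: datetime) -> str:
    """Bucket a file as hot, cool, or cold by its last access time."""
    age = now - last_access
    if age <= HOT_WINDOW:
        return "hot"
    if age <= COOL_WINDOW:
        return "cool"
    return "cold"
```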
Deployment of new resources becomes much easier, as admins can add NAS capacity from any vendor in a scale-out fashion. Storage refreshes simply require decommissioning a device and adding new performance or capacity to the DataSphere global namespace. DataSphere automatically rebalances data across the newly added resources.
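To make the rebalancing idea concrete, here is a minimal sketch of capacity-aware placement, assuming a hypothetical greedy strategy (largest files first, most-free tier wins). This is not DataSphere’s actual algorithm, just an illustration of spreading data across newly added resources.

```python
def rebalance(files: dict, tiers: dict) -> dict:
    """Greedy sketch: place each file on the tier with the most free capacity.

    files: {file_name: size_gb}; tiers: {tier_name: free_capacity_gb}.
    Returns {file_name: tier_name}.
    """
    placement = {}
    free = dict(tiers)
    # Placing the largest files first keeps the spread more even.
    for name, size in sorted(files.items(), key=lambda kv: -kv[1]):
        target = max(free, key=free.get)
        placement[name] = target
        free[target] -= size
    return placement
```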
What this means for storage admins is that instead of agonizing over how to minimize business disruption, they can focus on the best way to scale out their storage resources, choosing the storage services and tiers that meet the unique needs of their business’s data and setting up optimal procurement from their chosen vendor or reseller. Importantly, storage refreshes become a continuous, non-disruptive maintenance process rather than a periodic, high-risk event.
Automating SLA Alignment through Machine Learning
Traditionally, when an application owner needs storage, they work with a storage admin who will allocate static storage resources that can meet the application’s requirements. To prevent the need to move data, storage admins typically over-purchase storage. Even so, admins all have horror stories about the time an oversubscribed storage device slowed to a crawl, resulting in a fire drill to perform an emergency migration.
DataSphere enables storage admins to directly manage the needs of data by assigning performance (IOPS, bandwidth, latency) and protection objectives (durability, availability, and security) to data, down to the file level. Once objectives are applied, DataSphere automatically moves or places data on the ideal storage to meet those requirements.
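To illustrate the concept, a hypothetical objective schema, and a check that a storage tier’s capabilities satisfy it, might look like the following. The field names and structure are assumptions for this sketch, not DataSphere’s actual interface.

```python
from dataclasses import dataclass

@dataclass
class Objective:
    """Hypothetical per-file objective: performance plus protection."""
    min_iops: int
    max_latency_ms: float
    min_bandwidth_mbps: int
    durability_nines: int
    availability_nines: int
    encrypted: bool

def meets(tier_caps: dict, obj: Objective) -> bool:
    """Return True if a tier's capabilities satisfy the objective."""
    return (tier_caps["iops"] >= obj.min_iops
            and tier_caps["latency_ms"] <= obj.max_latency_ms
            and tier_caps["bandwidth_mbps"] >= obj.min_bandwidth_mbps
            and tier_caps["durability_nines"] >= obj.durability_nines
            and tier_caps["availability_nines"] >= obj.availability_nines
            and (tier_caps["encrypted"] or not obj.encrypted))
```

A placement engine would then move or keep each file on the cheapest tier for which `meets` returns True.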
Alerts and notifications let admins know when resources reach a given threshold. Since admins can deploy additional performance and capacity on demand, they can ensure applications always have the resources they need without conducting complex sizing exercises or over-purchasing capacity.
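The kind of threshold check behind such alerts can be sketched in a few lines; the 80% default and the data shapes here are illustrative assumptions, not product behavior.

```python
def check_thresholds(usage: dict, capacity: dict, threshold: float = 0.8) -> list:
    """Return alert messages for resources at or past the utilization threshold.

    usage and capacity map resource name -> GB (or any consistent unit).
    """
    alerts = []
    for name, used in usage.items():
        utilization = used / capacity[name]
        if utilization >= threshold:
            alerts.append(f"{name} at {utilization:.0%} utilization")
    return alerts
```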
In the event of unexpected application activity, DataSphere can automatically and transparently redistribute data to restore alignment with the application’s objective before end users even notice the problem. In fact, DataSphere uses machine learning to intelligently align storage resources with an enterprise’s unique data usage, protecting service levels far more effectively than traditional approaches. This allows storage admins to sign SLAs (Service Level Agreements) with much greater confidence, while reducing storage costs.
Storage Admins Can Deliver Data as a Service
In addition to automating data placement to meet SLAs, DataSphere enables storage administrators to create service catalogs that they can make directly available to application and data managers, with predefined costs to facilitate chargeback and showback.
The image below illustrates how a tiered catalog can offer different levels of service:
Figure 1 - DataSphere enables storage admins to create data-centric service catalogs.
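One way to model a tiered catalog with predefined costs for chargeback and showback is sketched below. The tier names, performance figures, and per-GB prices are hypothetical examples, not actual DataSphere catalog entries.

```python
# Hypothetical catalog: each tier pairs a service level with a price.
CATALOG = {
    "gold":   {"min_iops": 50000, "max_latency_ms": 1.0,  "cost_per_gb_month": 0.30},
    "silver": {"min_iops": 10000, "max_latency_ms": 5.0,  "cost_per_gb_month": 0.10},
    "bronze": {"min_iops": 500,   "max_latency_ms": 20.0, "cost_per_gb_month": 0.02},
}

def monthly_chargeback(tier: str, gb_used: float) -> float:
    """Compute a chargeback/showback figure for a consumer of the catalog."""
    return CATALOG[tier]["cost_per_gb_month"] * gb_used
```

An application owner picking "silver" for 1,000 GB could then see their monthly cost up front, which is what makes chargeback and showback straightforward.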
These service catalogs focus on the needs of application data and elevate the storage admin into a data-centric role, where the organization can clearly see the value they’ve been providing all along. At the same time, DataSphere removes the complexity and tedium of manual data movement, so storage admins can move on to projects that add even more value to the business. For example, instead of planning the next storage refresh, admins can expand their skills by taking on new projects, such as cloud initiatives, machine learning, Big Data and security.
With DataSphere, storage admins can evolve into data admins, leading the charge in enterprise initiatives that increase the company’s top and bottom lines, all while increasing productivity by streamlining application deployment and increasing uptime.