Fix NAS Waste to Save Storage Budget


In this blog series, we examine the inefficiencies of traditional NAS systems and how DataSphere solves them. This post looks at capacity inefficiency, a key contributor to rising storage costs. The previous post discussed how DataSphere improves NAS performance, and upcoming posts will cover how DataSphere enables tiering across different NAS systems and extends the capabilities of existing NAS investments.

Inefficient Capacity Utilization Drains Dollars

While NAS bottlenecks create performance inefficiencies, data gravity, the inability to easily move data, is a key driver of overspending on excess capacity. To avoid the business disruption of moving data later, IT commonly doubles the capacity that application owners request, so raw capacity spending often ends up at least twice what the business units actually planned for. When data growth projections are high, that waste compounds year after year, as sketched below.
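A back-of-the-envelope illustration, with all figures assumed for the example rather than taken from any vendor data: if IT doubles every request and data grows 30 percent a year, the excess spend climbs quickly.

```python
# Illustrative overprovisioning math; all figures are assumptions for the example.
requested_tb = 100        # capacity the application owners asked for
provision_factor = 2.0    # IT buys double to avoid disruptive data migrations later
annual_growth = 0.30      # assumed yearly data growth
cost_per_tb = 500         # assumed dollars per raw TB of primary NAS

for year in range(1, 4):
    needed = requested_tb * (1 + annual_growth) ** year
    purchased = needed * provision_factor
    excess = purchased - needed
    print(f"Year {year}: need {needed:,.0f} TB, buy {purchased:,.0f} TB, "
          f"excess spend ${excess * cost_per_tb:,.0f}")
```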

IT typically places an application’s entire data set on one type of storage. This is costly: studies have found that about 75 percent of stored data is inactive, or cold, meaning up to three-quarters of storage capacity is being used inefficiently.

Overpurchasing grows even more wasteful because IT typically places data on storage sized for an application’s peak needs, and those peaks may demand high-performance storage with an equally high price tag. The gap between storage cost and data value widens quickly as data ages and requires far less performance from the system it resides on. An object storage expert at a leading healthcare company recently told us he is surprised how many people don’t realize how cold enterprise data gets, and just how much of it there is to store. As data cools, the main resource it needs is often simply cheap archival capacity.

Save Big with Live Data Mobility

DataSphere solves the misalignment between data needs and storage capabilities by enabling even live data to move automatically across storage nodes without application disruption. This movement is data-aware: DataSphere monitors how applications experience storage and moves data to the right resource, on demand, as business needs evolve. Companies can then reclaim up to 75% of NAS capacity by automatically archiving cold data to a lower-cost filer, on-premises object storage, or the public cloud, greatly extending the life of existing investments.
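DataSphere’s placement logic is its own, but the basic idea of a cold-data sweep can be sketched with a simple last-access policy. The example below is a hypothetical illustration only: it scans a directory tree, flags files untouched for longer than a cutoff, and moves them to an archive path standing in for a cheaper tier. The mount points and threshold are assumptions.

```python
import shutil
import time
from pathlib import Path

# Hypothetical cold-data sweep: any file not read for COLD_AFTER_DAYS is a
# candidate for a cheaper tier. A real data-aware system would use richer
# telemetry (IOPS, throughput, business objectives) and would keep the file
# visible in its original namespace; this sketch simply relocates it.
COLD_AFTER_DAYS = 90
SOURCE = Path("/mnt/primary_nas/projects")    # assumed primary NAS mount
ARCHIVE = Path("/mnt/archive_tier/projects")  # assumed low-cost tier mount

def archive_cold_files(dry_run=True):
    if not SOURCE.is_dir():
        raise SystemExit(f"source tier not mounted: {SOURCE}")
    cutoff = time.time() - COLD_AFTER_DAYS * 86400
    for path in SOURCE.rglob("*"):
        if path.is_file() and path.stat().st_atime < cutoff:
            target = ARCHIVE / path.relative_to(SOURCE)
            if dry_run:
                print(f"would move {path} -> {target}")
            else:
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.move(str(path), str(target))

if __name__ == "__main__":
    archive_cold_files(dry_run=True)   # report candidates without touching data
```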

This efficiency translates into much lower costs. Once high-performance storage is no longer stuck hosting data that has gone cold, which can account for up to 75% of its total capacity as noted above, each system can store up to 4X more active data: active data that previously fit in a quarter of the system can now use all of it. By placing the right data on the right storage resource, DataSphere significantly reduces the amount of high-performance capacity needed to meet application demands. And since DataSphere enables additional capacity to be deployed in minutes, enterprises can greatly reduce overprovisioning without risking business continuity.
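A quick sanity check of that arithmetic, with the capacity figure below made up for the example:

```python
# Sanity check of the "4X more active data" claim under the 75% cold assumption.
total_capacity_tb = 400      # assumed size of one high-performance filer
cold_fraction = 0.75         # share of stored data that is cold, per the estimate above

active_before = total_capacity_tb * (1 - cold_fraction)  # active data shares the box with cold data
active_after = total_capacity_tb                          # cold data archived to a cheaper tier
print(f"Active capacity: {active_before:.0f} TB -> {active_after:.0f} TB "
      f"({active_after / active_before:.0f}X)")
```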

By automatically moving cold data to more cost-effective storage, including the cloud, DataSphere lets NAS capacity be used far more efficiently and eliminates the need to overprovision, while also making it easy to add capacity on demand. The result is greater agility and flexibility at dramatically lower cost. In the next blog post, we will examine how DataSphere enables enterprises to automatically move data between different types of NAS systems to fully utilize the capabilities of their NAS investments. If you can’t wait that long to learn how DataSphere can improve your NAS efficiency, contact us at deepdive@primarydata.com.


