Primary Data & The Woz: Continuing Our Mission for Simplicity

One of Steve Wozniak’s core engineering beliefs is simplicity by design. He applied this philosophy in his floppy disk designs at Apple, enabling them to be built with few parts at low cost, which was key to making personal computers affordable. He joined Fusion-io as its Chief Scientist when he learned that founders David Flynn and Rick White shared his philosophy. Fusion-io’s ioMemory products simplified performance storage by using NAND flash as a persistent memory tier, storing data at electronic speeds rather than at the mechanical speed of disks. The PCI Express form factor also eliminated the need for data to pass through a storage controller. The result of this simplification was an order of magnitude faster performance at a fraction of the price of performance disk-based systems.

While ioMemory’s simplicity toppled the performance limitations of mechanical disks, data became trapped on flash inside servers. Enterprises had to buy enough expensive flash capacity to hold the entire data set, even though only a fraction of that data needed that level of performance. As flash emerged to meet the need for application speed, cloud storage was also on the rise, providing the inexpensive capacity demanded by a world that was increasingly living online but didn’t want to delete anything.

This got the Fusion-io leadership team, including David Flynn, Lance Smith, Rick White, and Steve Wozniak, thinking about how much simpler enterprise IT would be if data could be made storage aware, and could be moved freely to take advantage of the diverse storage capabilities available in enterprise datacenters, from flash to shared storage to the cloud. The DataSphere metadata engine is the culmination of their efforts. Here are some highlights of how DataSphere continues their mission for simplicity:

·  Live, Uninterrupted Data Mobility: With flash, cloud, and shared storage all in use at most enterprises today, fixing the inefficiencies of storage and maximizing the benefits of each resource starts with the ability to move live data without application interruption. With different storage types simultaneously available to applications, IT can finally automate data migration, making it an ongoing activity rather than a disruptive project. Adopting the cloud for archival becomes easy, as does managing storage for virtualized environments and scaling performance in parallel.

·  Automated Data Management: DataSphere virtualizes data so it can be managed across all storage, from flash to shared storage to the cloud. Software automatically places and moves data across all storage types in the DataSphere global namespace according to user-defined objectives for performance, price, and protection (see the first sketch after this list). This movement is transparent to applications, even for data that is open and active.

·  On-Demand Scalability: DataSphere’s vendor- and protocol-agnostic architecture enables IT to add capacity of any storage type, on demand, within minutes. This includes assimilating storage that already has data on it. Enterprises can now add exactly the type of storage they want, when they need it.

·  Accelerate Data Performance: DataSphere simplifies and accelerates data access in several ways. First, it moves the metadata path out-of-band, eliminating congestion and the queuing of data operations behind metadata operations (the second sketch after this list illustrates the idea). Second, it gives applications native, direct data access, without the added latency of passing through a gateway or agent. Finally, it can distribute data across storage devices, enabling applications to access multiple files in parallel.

·  Accelerate Metadata Performance: DataSphere metadata operations are performed on dedicated metadata servers, ensuring metadata operations are never stuck behind data payloads and are always executed at low latency. This enables DataSphere to support billions of data objects within a single namespace.

·  Reduce Costs: In addition to these performance and manageability gains, DataSphere enables enterprises to reduce the amount of storage they must purchase, monitor, and maintain. Storage resources are used far more efficiently: Tier 1 capacity holds just the active data instead of entire data sets, mid-tier capacity holds cooler data, and cold data is automatically archived to the cloud (see the third sketch below). This capability, combined with on-demand scalability of any resource, dramatically reduces costs by extending the life of an enterprise’s existing storage investments while greatly reducing the need to overprovision future purchases.
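
To make objective-driven placement concrete, here is a minimal Python sketch of how a policy engine might pick a tier. The tier attributes, Objective fields, and numbers are hypothetical illustrations, not DataSphere’s actual API:

```python
# Minimal sketch of objective-based placement. The Tier/Objective
# shapes and all values below are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    latency_ms: float    # typical access latency
    cost_per_gb: float   # monthly $/GB
    replicas: int        # copies kept for protection

@dataclass
class Objective:
    max_latency_ms: float
    max_cost_per_gb: float
    min_replicas: int

def place(objective: Objective, tiers: list[Tier]) -> Tier:
    """Pick the cheapest tier that satisfies every stated objective."""
    eligible = [t for t in tiers
                if t.latency_ms <= objective.max_latency_ms
                and t.cost_per_gb <= objective.max_cost_per_gb
                and t.replicas >= objective.min_replicas]
    if not eligible:
        raise ValueError("no tier satisfies the objective")
    return min(eligible, key=lambda t: t.cost_per_gb)

tiers = [
    Tier("flash", latency_ms=0.1, cost_per_gb=0.50, replicas=2),
    Tier("nas",   latency_ms=5.0, cost_per_gb=0.10, replicas=2),
    Tier("cloud", latency_ms=50,  cost_per_gb=0.02, replicas=3),
]
want = Objective(max_latency_ms=10, max_cost_per_gb=0.25, min_replicas=2)
print(place(want, tiers).name)  # -> nas
```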
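
The out-of-band metadata path can be pictured in a few lines as well. In this toy sketch (all paths and node names hypothetical), a client makes one small metadata lookup to learn where a file’s blocks live, then reads those blocks directly from storage, so bulk data never queues behind metadata operations:

```python
# Toy illustration of an out-of-band metadata path: the client asks a
# metadata service where a file's blocks live, then reads them directly
# from storage nodes. All names here are hypothetical stand-ins.

METADATA = {  # stand-in for a dedicated metadata server
    "/reports/q3.dat": [("flash-node-1", 0), ("flash-node-2", 1)],
}

STORAGE = {  # stand-in for the storage nodes themselves
    ("flash-node-1", 0): b"first-half-",
    ("flash-node-2", 1): b"second-half",
}

def read_file(path: str) -> bytes:
    layout = METADATA[path]  # one small, out-of-band metadata lookup
    # Data path: fetch each block straight from its storage node;
    # blocks on different nodes could be fetched in parallel.
    return b"".join(STORAGE[loc] for loc in layout)

print(read_file("/reports/q3.dat"))  # b'first-half-second-half'
```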
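
Finally, the cost savings come from matching data temperature to storage price. Here is a simplified sketch of that demotion logic, assuming hypothetical tier names and idle-time thresholds; a real system would act on live telemetry and move data without interrupting applications:

```python
# Minimal sketch of temperature-based tiering. Tier names, thresholds,
# and the file catalog are hypothetical examples.
import time

DAY = 86400  # seconds

def choose_tier(last_access: float, now: float) -> str:
    """Demote data as it cools: flash -> shared storage -> cloud archive."""
    idle = now - last_access
    if idle < 7 * DAY:
        return "flash"         # hot: keep on Tier 1
    if idle < 90 * DAY:
        return "nas"           # warm: mid-tier shared storage
    return "cloud-archive"     # cold: cheapest capacity

now = time.time()
catalog = {  # file -> last access time
    "orders.db":   now - 2 * DAY,
    "logs-aug.gz": now - 30 * DAY,
    "backup-2015": now - 400 * DAY,
}
for name, last in catalog.items():
    print(name, "->", choose_tier(last, now))
```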

The DataSphere metadata engine simplifies datacenters holistically, at global scale, by making data storage aware and freeing it to move transparently as business needs evolve. This simplification makes data much easier to manage and increases application performance and storage scalability, all while reducing costs.

To learn more, check out our web site or drop us a line at deepdive@primarydata.com to schedule a discussion or demo.


