Connect Storage Resources in a Global Namespace
DataSphere pools multiple physical storage resources and presents them to clients as a single virtualized logical namespace. The global namespace greatly simplifies management, while open standards-based protocols make it easy to connect clients.
Add Intelligence to Data Management with Objectives
To dynamically and automatically respond to evolving business demands, DataSphere uses objectives to set an application’s data performance, cost and reliability goals throughout its operational life. Managing by objectives ensures the right data is on the right storage at the right time.
Maximize the Unique Features of Each Storage Resource
Despite the numerous options available today, each storage type fits within well-defined operational characteristics, or attributes. For example, server-side flash storage is very fast (providing low latency, high IOPS, and high bandwidth), is considered less reliable (in the event of a hardware failure), and carries a premium when compared to industry-standard networked storage. NAS filers and SAN arrays offer lower performance, but higher levels of data reliability through sophisticated RAID operations, error-correction schemes, or disaster-recovery redundancy. In recent years, cloud-based object storage has provided lower cost and greater capacity, but by comparison delivers the lowest performance, which makes it more suitable for colder data and archiving applications. Each of these operating attributes can be used to define data management objectives.
Automatic Data Movement Across Storage to Align the Right Resource for the Job
With DataSphere, admins can create objectives with the specific storage capabilities required to meet business needs. Target objectives can then be selected from a catalog of offerings with matching storage capabilities and applied to single files, directories, or shares, giving unprecedented application price/performance control. For example, a “Platinum” objective level can define a storage requirement with the highest IOPS, lowest latency, and highest bandwidth for temp space, logs, indexes, or swap space. In contrast, a “Bronze” objective would place less active data on lower-cost, lower-performing stores. DataSphere continually analyzes whether objectives are being met, and will automatically move data to maintain compliance.
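The catalog-of-objectives idea can be sketched in code. This is an illustrative model only, not DataSphere's actual objective syntax: the attribute names, thresholds, and the `meets` helper are all assumptions made for the example, with the “Platinum” and “Bronze” tiers borrowed from the text above.

```python
# Hypothetical sketch of an objective catalog; names and thresholds are
# illustrative assumptions, not DataSphere's real configuration schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class Objective:
    name: str
    min_iops: int           # minimum sustained IOPS the backing store must offer
    max_latency_ms: float   # worst acceptable access latency
    min_bandwidth_mbps: int # minimum sustained bandwidth

# A small catalog; a real deployment would define these per business need.
CATALOG = {
    "Platinum": Objective("Platinum", min_iops=100_000,
                          max_latency_ms=1.0, min_bandwidth_mbps=2_000),
    "Bronze":   Objective("Bronze", min_iops=500,
                          max_latency_ms=50.0, min_bandwidth_mbps=100),
}

def meets(store_iops: int, store_latency_ms: float,
          store_bw_mbps: int, obj: Objective) -> bool:
    """True if a storage resource satisfies an objective's requirements."""
    return (store_iops >= obj.min_iops
            and store_latency_ms <= obj.max_latency_ms
            and store_bw_mbps >= obj.min_bandwidth_mbps)

# An all-flash store easily meets Platinum; a cloud object tier does not.
print(meets(250_000, 0.5, 4_000, CATALOG["Platinum"]))  # True
print(meets(200, 120.0, 50, CATALOG["Platinum"]))       # False
```

A policy engine evaluating objectives this way could flag any file whose current store fails the check and queue it for automatic movement, which is the compliance loop the text describes.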
Storage Choice and Data Tiering
Thanks to a wide range of capabilities across performance, protection and price, today’s IT professionals have more choice than ever before when selecting a storage type or vendor to meet an application or business need. Given the storage diversity found in most petabyte-scale enterprises today, the challenge for IT is quickly becoming how to ensure the right resource is serving the right data at the right time.
Flash in a server is ultra-fast storage that can be attached via PCI Express to serve as a very low latency, high-IOPS, direct-attached storage tier, but it comes at a premium cost. Network-attached flash in an array also brings more performance to primary storage, at a high cost. Classic shared or networked NAS and SAN storage is known for high reliability and capacity, and cloud storage offers expandability at low cost, but with lower-access or near-line performance suited to cold data and archiving functions. Each of these storage types provides a unique price/performance capability with different levels of data reliability, and the choices get even broader when considering emerging technologies.
No matter the type or vendor, DataSphere pools these heterogeneous storage resources together and presents a virtualized view of data under a single namespace to clients. This global namespace greatly simplifies data management, while open standards-based protocols transparently connect the clients running the applications.
Combine Different Storage to Tier Data or Scale Out
Within the global namespace, DataSphere gives IT the power to configure Data Flow architectures that deliver a variety of capabilities objectively and automatically. Once storage is classified by its price/performance and reliability attributes and its data is virtualized, IT can consider several different configurations to move and place data without impacting or changing applications.
Traditional data migration is the simple act of moving data from old to new storage. However, with DataSphere objectives and client performance telemetry analysis, data can intelligently migrate to the appropriate storage: cold data to cloud, warm data to NAS arrays, and hot data to all-flash storage. Storage can now be tiered to support data throughout its lifecycle, from creation to long-term archival.
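The hot/warm/cold placement described above can be sketched as a simple temperature-to-tier mapping. This is a minimal illustration under assumed thresholds (one day for hot, thirty days for warm); DataSphere's actual telemetry analysis and placement algorithm are not shown in the source.

```python
# Hypothetical tiering policy keyed on access recency; tier names and
# day thresholds are assumptions for illustration, not product behavior.
from datetime import datetime, timedelta

def choose_tier(last_access: datetime, now: datetime) -> str:
    """Map data 'temperature' (recency of access) to a storage tier."""
    age = now - last_access
    if age < timedelta(days=1):
        return "all-flash"     # hot data: lowest latency, highest IOPS
    if age < timedelta(days=30):
        return "nas-array"     # warm data: reliable networked storage
    return "cloud-object"      # cold data: lowest cost, archive

now = datetime(2024, 1, 31)
print(choose_tier(datetime(2024, 1, 31), now))  # all-flash
print(choose_tier(datetime(2024, 1, 10), now))  # nas-array
print(choose_tier(datetime(2023, 6, 1), now))   # cloud-object
```

A background mover comparing each file's current location against `choose_tier` would migrate data as it cools, and likewise promote it back to faster storage if it turns hot again.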
When several NAS systems are clustered together, IT can scale out performance and accelerate metadata access separately from data access. For data-intensive applications, files can be load-balanced across separate NAS devices to allow parallel access for the highest level of I/O performance. When this architecture integrates the cloud, IT can archive cold data across multiple cloud providers, and even automatically promote data back from the cloud to higher-performing storage.