Supercharge Your Data with an Enterprise Metadata Engine
Posted in news
From flash to the cloud, enterprises face a paradox of choice when selecting storage to meet today's business and application objectives. Compounding the challenge, once storage is selected, data tends to stay frozen in place until it's time for archival. This creates both complexity and waste.
DataSphere is a software platform that automates the flow of data across existing infrastructure and the cloud, built with valuable feedback from enterprises managing data at petabyte scale. With DataSphere, enterprise IT no longer has to waste time managing storage and can finally shift efforts to managing their data.
The updates in DataSphere 1.2 evolve the platform from its early focus on development and testing (Dev-Test) environments to meeting demanding enterprise production requirements, with the scalability, broad platform support, and reliability, availability and serviceability needed to finally automate the flow of data to the right storage at the right time.
Today’s updates transform DataSphere into an enterprise metadata engine that automates the management of billions of files, accelerates performance, and leverages the cloud to deliver significant savings and efficiency. Let’s take a closer look at how DataSphere solves the complexity and cost of managing petabytes of data across cloud, flash, and shared storage.
Automating Management of Billions of Files
DataSphere enables enterprises to serve and manage data at petabyte scale. This means DataSphere must scale massively across flash, shared, and cloud storage, while enabling data to be managed easily across all storage types.
The diagram below shows DataSphere managing over a billion files across different types of storage from a single UI:
Flexing the Metadata Engine’s Muscle
The new release of DataSphere includes several features that leverage DataSphere’s metadata capabilities:
- Massive scalability, with automated management of billions of files, enables companies to serve and manage data at petabyte scale
- Scale-out NAS performance for unstructured data and other NAS workloads, with vendor-agnostic support for Dell EMC Isilon and NetApp ONTAP solutions
- Native cloud access with direct interfaces for Amazon S3 and compatible cloud platforms; scale cloud uploads and downloads linearly while preserving the namespace for applications
- Serve diverse enterprise environments with expanded support for Linux, macOS, and Windows, including SMB support for Windows Server 2008/Windows 7 and later
- Enterprise reliability, availability and serviceability, including non-disruptive HA failover and volume retrieval, ensures rapid recovery without impact to ongoing I/O processing
- Advanced data services, including cloning offloaded directly from clients, preserve application performance and optimize capacity usage
- Insight into file and client performance with hot file visibility; real-time performance graphs across different storage resources are visible on the user dashboard
- Improved performance through advanced metadata algorithm intelligence and resource usage, while continuing to maintain client I/O even while data is in flight
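The hot-file visibility above surfaces the most active files so they can be promoted to faster storage. As a rough illustration of the idea only (the function and log format below are illustrative assumptions, not DataSphere's interface), a minimal sketch that ranks files by observed access frequency:

```python
from collections import Counter

def hot_files(access_log, top_n=3):
    """Rank file paths by access frequency; the hottest files are
    candidates for promotion to faster storage such as flash.
    (Illustrative sketch, not DataSphere's actual API.)"""
    counts = Counter(access_log)
    return [path for path, _ in counts.most_common(top_n)]

# Example: a stream of file-access events observed by the metadata engine
log = ["/data/a.bin", "/data/b.bin", "/data/a.bin",
       "/data/c.bin", "/data/a.bin", "/data/b.bin"]
print(hot_files(log, top_n=2))  # ['/data/a.bin', '/data/b.bin']
```

A production metadata engine would of course track accesses continuously and weight recency, but the ranking principle is the same.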
Accelerate Performance Without Buying New Hardware
DataSphere enables enterprises to place the right data on the right storage at the right time across enterprise infrastructure and the cloud to automatically meet evolving application demands without interruption. By creating a global namespace that spans cloud, shared and local storage, DataSphere helps enterprises overcome performance bottlenecks without buying new hardware. DataSphere’s powerful policy engine flows data to the ideal storage resource to automatically meet performance, price and protection requirements. In addition, DataSphere can monitor and move cold data to lower cost tiers like the cloud while maintaining accessibility. Moving files to fast flash resources accelerates performance and optimizes existing storage investments without disrupting applications.
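The placement logic described above (matching data to storage by performance, price, and protection objectives) can be sketched in the abstract. The tier names, thresholds, and `Objectives` type below are illustrative assumptions for the sake of the example, not DataSphere's actual policy interface:

```python
from dataclasses import dataclass

@dataclass
class Objectives:
    min_iops: int     # performance requirement
    max_cost: float   # price ceiling (illustrative $/GB/month units)
    replicas: int     # protection requirement

# Hypothetical tiers: (name, delivered IOPS, cost, replica count)
TIERS = [
    ("flash",         100_000, 0.30, 2),
    ("shared-nas",     10_000, 0.10, 2),
    ("cloud-archive",     500, 0.01, 3),
]

def place(obj: Objectives) -> str:
    """Pick the cheapest tier that satisfies every objective."""
    candidates = [(cost, name) for name, iops, cost, reps in TIERS
                  if iops >= obj.min_iops
                  and cost <= obj.max_cost
                  and reps >= obj.replicas]
    if not candidates:
        raise ValueError("no tier meets the objectives")
    return min(candidates)[1]

print(place(Objectives(min_iops=50_000, max_cost=0.50, replicas=2)))  # flash
print(place(Objectives(min_iops=100, max_cost=0.05, replicas=2)))     # cloud-archive
```

A real policy engine re-evaluates placement as access patterns change, which is how cold data drifts toward cheaper tiers like the cloud while staying accessible in the same namespace.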
Contact us to learn more at DeepDive@primarydata.com.