The DataSphere architecture simplifies operations for customers while helping enterprises seamlessly align the right data to the right storage at the right time. From its metadata engine and live data mobility to its high-performance scale-out NAS technology, DataSphere is designed to overcome the limitations of traditional storage and help petabyte-scale enterprises respond to changing business demands.
Your Most Valuable Asset: Metadata
- Metadata and machine learning unlock intelligent data management: As a metadata engine, DataSphere is designed to decouple the architecturally rigid relationship between applications and where their data is stored. Offloading metadata access to DataSphere delivers predictable, low-latency metadata operations by guaranteeing that metadata requests do not get “stuck” in the queue behind other data requests.
- Parallel access across storage maximizes data performance: Rather than waiting for sequential operations to complete, DataSphere can leverage parallel access through the latest optimizations of the standard NFS v4.2 protocol. Leveraging NFS v4.2 significantly speeds up metadata and small-file operations, requiring less than half the protocol-specific network round trips of NFS v3.
- Live mobility frees your data from gravity: DataSphere collects metadata about each client’s data access and how it experiences storage (IOPS, latency, bandwidth, and availability). Intelligent analytics are then applied against business objectives, and data is moved, as needed, to achieve desired levels of performance, cost, and reliability. DataSphere makes real-time automated decisions for data placement, moves data without disruption in order to overcome or prevent outages, and maintains alignment to service level agreements or objectives.
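The telemetry-driven placement loop the bullets describe can be sketched as follows. All names here (`FileTelemetry`, `Objective`, the tier labels and thresholds) are hypothetical illustrations of the concept, not DataSphere's actual API:

```python
from dataclasses import dataclass

@dataclass
class FileTelemetry:
    # Hypothetical per-file metrics of the kind the text describes
    path: str
    tier: str            # current storage tier, e.g. "nvme", "nas", "cloud"
    iops: float
    latency_ms: float
    bandwidth_mbps: float

@dataclass
class Objective:
    # A simplified service-level objective applied to a set of files
    max_latency_ms: float
    min_bandwidth_mbps: float

def out_of_alignment(t: FileTelemetry, o: Objective) -> bool:
    """True when observed telemetry violates the objective."""
    return t.latency_ms > o.max_latency_ms or t.bandwidth_mbps < o.min_bandwidth_mbps

def plan_moves(telemetry, objective, fast_tier="nvme"):
    """Return the files that should be promoted to a faster tier."""
    return [t.path for t in telemetry
            if t.tier != fast_tier and out_of_alignment(t, objective)]

samples = [
    FileTelemetry("/proj/a.dat", "nas", iops=500, latency_ms=12.0, bandwidth_mbps=80),
    FileTelemetry("/proj/b.dat", "nas", iops=90,  latency_ms=2.0,  bandwidth_mbps=400),
]
slo = Objective(max_latency_ms=5.0, min_bandwidth_mbps=100)
print(plan_moves(samples, slo))  # → ['/proj/a.dat']
```

In the real system the "move" step would be carried out transparently by the data layer; this sketch only shows the align-or-move decision.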
How Machine Learning Makes Intelligent Decisions for Your Objectives
DataSphere provides clients access to billions of files across multiple storage devices in parallel. Performance is accelerated by balancing I/O load at a file level across the storage devices and by offloading the metadata tasks from the storage devices so they are free to serve more data.
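File-level I/O balancing of the kind described above can be illustrated with a simple least-loaded placement heuristic. The device names and load figures are invented for illustration; this is not DataSphere's actual scheduler:

```python
def least_loaded(devices: dict) -> str:
    """Pick the storage device currently serving the least I/O load.

    `devices` maps a device name to its current load (e.g. outstanding IOPS).
    An illustrative heuristic, not DataSphere's real placement logic.
    """
    return min(devices, key=devices.get)

# Hypothetical per-device load as observed by the metadata engine
load = {"nas-1": 4200, "nas-2": 1800, "nas-3": 3100}

# Each new or rebalanced file lands on the least-loaded device
print(least_loaded(load))  # → nas-2
```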
DataSphere continuously collects telemetry in the form of metadata to learn the IOPS, bandwidth, and latency from each client, for each file accessed. This provides a rich understanding of how storage devices are performing, which files are active, and whether application data is out of alignment with objectives. If data falls out of alignment, DataSphere automatically moves it to the right storage tier without disruption to running applications.
DataSphere manages data using DSX Data Portals and Data Movers. Data can flow across heterogeneous storage types, including the cloud, and data in flight remains accessible to applications as it moves from one store to another.
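Because a metadata engine sits between clients and storage, a non-disruptive move can be modeled as copy-then-atomic-remap: clients keep resolving the file through the metadata layer, and the mapping flips only once the copy completes. The sketch below is a simplified model under that assumption; the `Namespace` and `live_move` names are hypothetical, not DataSphere's real interfaces:

```python
import shutil
import threading

class Namespace:
    """Maps a logical path to its current physical location.

    A lock stands in for the metadata engine's atomic update: readers
    always see either the old location or the new one, never a gap.
    """
    def __init__(self):
        self._map = {}
        self._lock = threading.Lock()

    def resolve(self, logical: str) -> str:
        with self._lock:
            return self._map[logical]

    def remap(self, logical: str, physical: str) -> None:
        with self._lock:
            self._map[logical] = physical

def live_move(ns: Namespace, logical: str, dst: str) -> None:
    """Copy data to the new store, then atomically flip the mapping.

    The file stays readable at its old location for the whole copy;
    only the final metadata update changes what clients resolve.
    """
    src = ns.resolve(logical)
    shutil.copyfile(src, dst)   # data flows between stores
    ns.remap(logical, dst)      # single atomic metadata update
```

A production mover would also have to track writes that arrive during the copy (e.g. with a final delta sync before the remap); this model omits that step.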