See What You’ve Been Missing: Proactively Spot and Solve Performance Problems

Posted in tech

Diagnosing performance problems in the average enterprise is difficult today because poor visibility makes it hard to locate their source. When applications slow down, determining whether the hotspot is in compute, the network, caches, or storage takes investigation, and few enterprises can afford the time or cost of a trial-and-error hunt for the root cause.

DataSphere transforms this process with metadata-driven insight and analytics. By gathering telemetry on clients, DataSphere enables IT admins to see which applications are generating the workload and the performance each client receives. It also gathers performance information across all storage resources integrated into the global dataspace.

In addition, DataSphere charts workloads historically. This allows administrators to make decisions based on whether a workload spike is a one-time or recurring event. Importantly, DataSphere makes all this information available from a single UI to finally enable administrators to view data and storage performance across the datacenter. The diagram below illustrates how admins can easily view the IO activity of files:

Figure 1: DataSphere charts workloads historically, helping administrators better manage data.

In addition, the diagram below illustrates how admins can view the performance and capacity of all storage systems in the global dataspace:

Figure 2: Admins can view all storage systems’ performance and capacity from a single pane of glass.

Automating Remediation

Once admins locate performance problems, resolving them can still be challenging: fixes frequently require migrating data, which can mean application downtime and business disruption.

DataSphere makes remediating problems simple in the following ways:

  • Data-centric, policy-based management. DataSphere allows administrators to assign policies that define data performance (IOPS, bandwidth, latency) and protection (availability, durability, security) objectives. Admins no longer have to determine whether a particular storage system can meet application owners’ requirements. They simply create the policy and let DataSphere place data on the ideal storage for the job.
  • Non-disruptive, automated remediation. When data falls out of compliance with policy, DataSphere can move it to another storage system that can meet the data’s objectives, without impacting the application. This eliminates the complexity of planning migrations to avoid disrupting business continuity.
  • Scale out any storage type. DataSphere enables admins to add performance and capacity of any storage type, including server flash, SAN, NAS, and cloud/object storage, as needed, automatically rebalancing workloads according to the data’s policies. This reduces the need to overprovision, which can yield substantial savings. It also simplifies life for IT: when a new storage request arrives, admins no longer have to determine which suitable storage devices have capacity for the new application or, if more storage must be deployed, how and when to redistribute existing data.
  • Smart, granular control. DataSphere gives admins file-level control over all data in the global dataspace and makes objectives easy to apply via Smart Objectives, which assign objectives to data dynamically using rules and conditions. For example, admins can apply objectives by matching file-extension patterns such as .log, .tmp, .dat, or .sql. Smart Objectives can also act on file activity or inactivity: admins can specify that all files in a share accessed within the last day are placed on storage delivering 10,000 IOPS, 100 MB/s of bandwidth, and 0.5 ms latency, while files not accessed in the last 30 days are moved to object/cloud storage.
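To make the rule-and-condition idea concrete, here is a minimal sketch of how extension-pattern and activity-based placement rules like those described above could be evaluated. All names, tiers, and thresholds are illustrative assumptions, not DataSphere’s actual API:

```python
from datetime import datetime, timedelta
from fnmatch import fnmatch

# Hypothetical placement objectives (illustrative values from the example above).
HOT_TIER = {"tier": "flash", "iops": 10_000, "bandwidth_mb_s": 100, "latency_ms": 0.5}
COLD_TIER = {"tier": "object", "iops": None, "bandwidth_mb_s": None, "latency_ms": None}

def place(file_name, last_accessed, now=None):
    """Return a placement objective for a file based on simple rules."""
    now = now or datetime.now()
    age = now - last_accessed
    # Activity rule: files touched within the last day go to fast storage.
    if age <= timedelta(days=1):
        return HOT_TIER
    # Inactivity rule: files idle for 30+ days move to cloud/object storage.
    if age >= timedelta(days=30):
        return COLD_TIER
    # Extension rule: scratch and log files are demoted regardless of age.
    if any(fnmatch(file_name, pattern) for pattern in ("*.log", "*.tmp")):
        return COLD_TIER
    return {"tier": "default"}
```

In a real system these rules would be evaluated continuously against file metadata, so data migrates between tiers as its activity profile changes, without administrator intervention.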

DataSphere makes data management easy by delivering comprehensive visibility into applications’ data access and storage use, while automating remediation of hot spots non-disruptively across any storage customers choose to deploy. Want to learn more? Connect with us at

Contact Form