M&E Scales Simple with DataSphere Management


In this blog series, we discuss how Media and Entertainment (M&E) companies use DataSphere to simplify scale-out architectures and make existing resources more powerful and effective. The series consists of three posts that compare a DataSphere scale-out system to a leading traditional scale-out system.

This post reviews the management challenges posed by traditional scale-out systems, then examines how DataSphere overcomes these challenges by simplifying architectures and expanding customer storage options. In addition to greater simplicity and choice, DataSphere increases visibility and automates remediation to improve manageability. The previous post discussed how DataSphere improves performance to accelerate time to market. The next post will describe how DataSphere reduces infrastructure needs to lower costs.

For background, DataSphere is a data virtualization platform that creates a global dataspace of storage resources spanning file, block and object protocols. It enables enterprises to orchestrate data non-disruptively across all of their storage resources according to IT-defined storage performance, protection and price requirements. Visit our DataSphere page to learn more.

Comprehensive Visibility Makes Troubleshooting Easy

It’s not easy to diagnose storage problems today, because poor visibility makes it difficult to locate the source of performance issues. When applications slow down, determining whether hotspots are on the server (compute), network, caches, or storage takes investigation, and few enterprises can afford the time or cost of a trial-and-error hunt for the root cause of the problem.

DataSphere transforms this process with metadata-driven insight and analytics. Because DataSphere gathers telemetry on clients, IT admins can see which applications are generating the workload and the performance each client receives. DataSphere also gathers performance information across all storage resources integrated into the global dataspace.

In addition, DataSphere charts workloads historically. This allows administrators to make decisions based on whether a workload spike is a one-time or recurring event. Importantly, DataSphere makes all of this information available from a single UI to finally enable administrators to view data and storage performance across the datacenter.


Fig. 1: DataSphere charts workloads historically, helping administrators better manage data.
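To make the one-time versus recurring distinction concrete, here is a minimal Python sketch of how historical workload samples could be bucketed to flag recurring spikes. The client names, threshold, and classify_spikes helper are hypothetical illustrations of the idea, not DataSphere's actual analytics.

from collections import defaultdict
from datetime import datetime

# Hypothetical telemetry samples: (client, timestamp, IOPS) gathered from
# clients in the global dataspace.
samples = [
    ("render-node-01", datetime(2018, 5, 1, 22, 0), 18000),
    ("render-node-01", datetime(2018, 5, 8, 22, 0), 17500),
    ("render-node-02", datetime(2018, 5, 3, 14, 0), 21000),
]

SPIKE_THRESHOLD_IOPS = 15000

def classify_spikes(samples):
    """Group spikes by client and hour-of-week to separate recurring
    patterns (e.g., a nightly render job) from one-off events."""
    buckets = defaultdict(list)
    for client, ts, iops in samples:
        if iops >= SPIKE_THRESHOLD_IOPS:
            # Spikes that recur on the same weekday and hour across weeks
            # are likely scheduled work rather than anomalies.
            buckets[(client, ts.weekday(), ts.hour)].append(ts)
    return {
        key: ("recurring" if len(times) > 1 else "one-time")
        for key, times in buckets.items()
    }

print(classify_spikes(samples))

In this toy example, render-node-01 spikes at the same hour in consecutive weeks and is flagged as recurring, while render-node-02's spike is a one-time event.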

The DataSphere architecture also simplifies scale-out systems, removing the need for unnecessary nodes and caches, along with the trouble spots they introduce.

Overcoming Storage Problems with Intelligent, Granular Control

Once admins locate problem applications today, resolving the problems can be challenging because most solutions do not give admins sufficiently granular control. As noted in the previous post, traditional systems’ coarse and static load distribution makes remediating problems difficult. Most systems let admins respond by reassigning a client to a different node, but the workload of the entire client must move with it, and it’s difficult for admins to measure individual and aggregate workloads. It’s entirely possible that moving the client to another node will simply move the hot spot to the new node.

DataSphere resolves these problems in several ways. First, as mentioned in the previous section, DataSphere eliminates the metadata management chokepoint common to most scale-out systems. Second, DataSphere allows administrators to assign performance and protection objectives to data down to the file level. This gives admins granular control over the policies that DataSphere uses to automatically rebalance workloads. Admins can even assign data a priority level to prevent one application from consuming all the storage resources at the expense of another application that may be more important to the business.

Finally, while file-level control is an advantage, few admins want to apply policies to individual files one by one. DataSphere makes assigning objectives to data easy with a feature called Smart Objectives, which apply objectives to data dynamically. For example, admins can apply objectives by pattern matching on files of a given type, such as .log, .tmp, .dat or .sql. Objectives can also be applied based on file activity or inactivity: a Smart Objective can ensure that all files in a share that have been accessed in the last day are placed on storage that delivers 10,000 IOPS, 100MB/s of bandwidth, and 0.5ms latency, while files that have not been accessed in the last 30 days are moved to object/cloud storage.
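As a concrete illustration, here is a minimal Python sketch of what such activity- and pattern-based objectives might look like. The SmartObjective and StorageTarget classes and all field names are hypothetical stand-ins, not DataSphere's actual policy syntax.

from dataclasses import dataclass
from datetime import timedelta
from typing import Optional

@dataclass
class StorageTarget:
    min_iops: Optional[int] = None
    min_bandwidth_mb_s: Optional[int] = None
    max_latency_ms: Optional[float] = None
    tier: Optional[str] = None            # e.g. "performance" or "object/cloud"

@dataclass
class SmartObjective:
    name: str
    file_pattern: Optional[str] = None            # e.g. "*.log", "*.tmp"
    accessed_within: Optional[timedelta] = None   # applies to recently active files
    idle_for: Optional[timedelta] = None          # applies to inactive files
    target: Optional[StorageTarget] = None

objectives = [
    # Keep files touched in the last day on fast storage.
    SmartObjective(
        name="hot-share-data",
        accessed_within=timedelta(days=1),
        target=StorageTarget(min_iops=10_000, min_bandwidth_mb_s=100,
                             max_latency_ms=0.5, tier="performance"),
    ),
    # Move files idle for 30 days to object/cloud storage.
    SmartObjective(
        name="cold-share-data",
        idle_for=timedelta(days=30),
        target=StorageTarget(tier="object/cloud"),
    ),
]

The point of the sketch is the declarative shape of the policy: admins describe what the data needs, and the platform decides where the data lives.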

DataSphere Automates Data Management

The Media and Entertainment industry often struggles to balance data storage requirements as needs change over the course of the data lifecycle. While active projects require very high performance, completed projects are so large that M&E and Visual Effects (VFX) companies can’t afford to leave them on performance tiers.

While capacity tiers are much more economical for longer-term storage, M&E companies have difficulty capitalizing on the cost-saving potential of the cloud because completed project data must still remain accessible for marketing, distribution, and sequels. As a result, traditional scale-out systems commonly support just two tiers: performance and capacity. This leaves customers with the choice to pay for either fast, expensive storage or slow, cost-efficient storage.

But what impacts M&E IT most is that moving data between tiers is a manual process that must be done during off-hours to avoid the slowdowns that migration causes on active projects. Off-hours are an increasingly scarce commodity in an industry that works around the globe. Some M&E companies have developed homegrown solutions that create a single namespace across tiers so data can be moved as needed. The problem with these solutions is that they are complex, consisting of auto-mounters, symlinks, and scripts. These solutions introduce many moving parts to troubleshoot and points of instability. They also represent a massive project for IT when it’s time to upgrade storage.
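For illustration, here is a minimal Python sketch of the kind of homegrown tiering script described above; the paths and the archive_project helper are hypothetical. Even this stripped-down version hints at the fragility the full solutions suffer from.

import shutil
from pathlib import Path

# Illustrative tier mount points; real environments layer auto-mounters,
# per-project exceptions, and scheduling logic on top of this.
PERF_TIER = Path("/mnt/performance/projects")
CAPACITY_TIER = Path("/mnt/capacity/projects")

def archive_project(project: str) -> None:
    """Copy a finished project to the capacity tier, then leave a symlink
    behind so the namespace still resolves for artists and pipelines."""
    src = PERF_TIER / project
    dst = CAPACITY_TIER / project
    shutil.copytree(src, dst)   # slow bulk copy that must run off-hours
    shutil.rmtree(src)          # fails if any file is still open
    src.symlink_to(dst)         # one more moving part to troubleshoot

Every project migrated this way accumulates another symlink that IT must untangle when it is time to upgrade or replace the underlying storage.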

DataSphere makes data management easy by delivering comprehensive visibility into applications’ data access and storage use, while automating remediation of hot spots non-disruptively across any storage customers choose to deploy. In the next blog post, we will examine how DataSphere reduces infrastructure to cut scale-out costs. If you can’t wait that long to start investigating how DataSphere can help, connect with us at deepdive@primarydata.com.


