Data Virtualization Delivers Big Savings, Integration, and Agility
Posted in news
Primary Data announced this week that DataSphere has been included in Gartner’s 2017 “Market Guide for Data Virtualization.” We’re seeing that the need for agility is driving adoption of data virtualization at many enterprises, and this lines up with Gartner’s statement that “In the age of digital business, data and analytics leaders along with their data integration teams are feeling immense pressure to reduce data silos and become more agile with data access and delivery.”
Sweetening the deal even further, the benefits of data virtualization extend beyond agility: the Market Guide estimates it can help organizations spend up to 40% less. In addition, in the report “Adopt Data Virtualization to Improve Agility and Bimodal Traits in Your Aging Data Integration,” Gartner emphasized data virtualization’s ability to empower existing data integration architectures, recommending that data and analytics leaders: “Investigate the technical and business benefits of complementing (not replacing) your existing data integration architecture with data virtualization. This will enable diverse and distributed data access (e.g., IoT data, big data and cloud data) with flexibility, agility, reusability and bimodal integration practices.”
DataSphere differs from many of the solutions in the 2017 Gartner Data Virtualization Market Guide. Let’s take a closer look at how it differs, and how DataSphere software can actually complement these data integration and data virtualization solutions.
DataSphere Automates Management of Storage Resources According to Application Needs
Unlike database-focused data virtualization solutions, DataSphere focuses on managing data movement and placement to deliver optimal response times to applications while making the best use of infrastructure. It abstracts data from the underlying storage infrastructure to eliminate storage silos. Once storage resources are connected within a global namespace, data movement and placement can be automated to give applications access to all storage resources. When storage is added to DataSphere, it is classified and pooled according to its performance, price, and protection attributes. As data is accessed in the global namespace, DataSphere monitors metadata to determine whether data is hot, cooling, or cold. With this intelligence, DataSphere automatically and transparently moves or places data on the ideal storage resource to meet IT-defined Objectives for applications and data. Over time, DataSphere uses machine learning capabilities to detect access patterns and optimize the alignment of data with the right resources, ensuring that pre-defined data Objectives are maintained.
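To make the hot/cooling/cold classification concrete, here is a minimal sketch of how access metadata could drive such a decision. All names and thresholds here are hypothetical illustrations of the general technique, not DataSphere’s actual API or policy values.

```python
from dataclasses import dataclass

# Hypothetical temperature thresholds, in seconds since last access.
HOT_WINDOW = 60 * 60            # touched within the last hour
COLD_WINDOW = 30 * 24 * 60 * 60 # untouched for roughly a month

@dataclass
class FileStats:
    """Illustrative stand-in for the access metadata a data-management layer tracks."""
    path: str
    last_access: float  # epoch seconds

def temperature(stats: FileStats, now: float) -> str:
    """Classify data as hot, cooling, or cold from its last-access time."""
    age = now - stats.last_access
    if age < HOT_WINDOW:
        return "hot"
    if age < COLD_WINDOW:
        return "cooling"
    return "cold"
```

A real system would of course weigh richer signals (access frequency, IOPS, application Objectives) rather than a single timestamp, but the shape of the decision is the same.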
How DataSphere Complements Data Integration and Data Virtualization Solutions
DataSphere can make existing data integration and new data virtualization solutions more powerful by automating the placement of data for the way it’s being used. It can move hot data to flash in servers, warm data to shared NAS storage, and cold data to cloud or on-premises object storage. This complements traditional data integration and data virtualization solutions by optimizing locality to improve service levels, according to how apps access data at any given moment, while also optimizing resource utilization to minimize costs.
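The tiering policy described above can be sketched as a simple mapping from data temperature to a storage class, with an optional per-application performance floor standing in for an IT-defined Objective. Again, every name here is illustrative, not DataSphere’s actual interface.

```python
# Tiers ordered from slowest/cheapest to fastest/most expensive.
TIERS = ["object-store", "shared-nas", "server-flash"]

# Hypothetical default mapping: cold -> cloud/object, warm -> NAS, hot -> flash.
TEMP_TIER = {"cold": "object-store", "cooling": "shared-nas", "hot": "server-flash"}

def place(temperature: str, floor: str = "object-store") -> str:
    """Pick a tier for data, promoting it if an Objective sets a faster floor."""
    wanted = TEMP_TIER[temperature]
    if TIERS.index(wanted) < TIERS.index(floor):
        return floor
    return wanted

print(place("hot"))                       # server-flash
print(place("cold", floor="shared-nas"))  # shared-nas
```

The floor argument illustrates why placement is policy-driven rather than purely temperature-driven: a business-critical application can be guaranteed a minimum service level even when its data cools.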
DataSphere’s approach to data virtualization across infrastructure focuses on giving data exactly what it needs across an enterprise’s heterogeneous storage resources. While this is a different approach to data virtualization from that of database-focused solutions, it can actually serve as a great complement to them, increasing service levels while improving storage efficiency. Want to learn more? Connect with us at email@example.com.