Data virtualization

Data virtualization is any approach to data management that allows an application to retrieve and manipulate data without requiring technical details about the data, such as how it is formatted at source, or where it is physically located.[1]

Unlike the traditional extract, transform, load (ETL) process, data virtualization leaves the data in place and provides real-time access to the source system. This reduces the risk of data errors and avoids the workload of moving data that may never be used.

Unlike a federated database system, it does not attempt to impose a single data model on the heterogeneous data it accesses. The technology also supports writing transaction data updates back to the source systems.[2]

To resolve differences in source and consumer formats and semantics, various abstraction and transformation techniques are used. This concept and software are a subset of data integration and are commonly used within business intelligence, service-oriented architecture data services, cloud computing, enterprise search, and master data management.

Examples

  • The Phone House—the trading name for the European operations of UK-based mobile phone retail chain Carphone Warehouse—implemented Denodo’s data virtualization technology between its Spanish subsidiary’s transactional systems and the Web-based systems of mobile operators.[2]
  • Novartis, which implemented a data virtualization tool from Composite Software to enable its researchers to quickly combine data from both internal and external sources into a searchable virtual data store.[2]
  • The storage-agnostic Primary Data data virtualization platform enables applications, servers, and clients to transparently access data while it is migrated between direct-attached, network-attached, and private and public cloud storage. David Flynn, co-founder of server flash memory pioneer Fusion-io and now CTO of Primary Data, saw data virtualization as a way to move data across storage types to maximize efficiency.
  • Linked Data can use a single hyperlink-based Data Source Name (DSN) to provide a connection to a virtual database layer that is internally connected to a variety of back-end data sources using ODBC, JDBC, OLE DB, ADO.NET, SOA-style services, and/or REST patterns.
  • Database virtualization may use a single ODBC-based DSN to provide a connection to a similar virtual database layer (see the sketch after this list).
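
As an illustrative sketch (not taken from any particular product), the Java fragment below shows how an application might query such a virtual database layer through a single JDBC connection. The JDBC URL, credentials, and view name are assumptions, and the virtualization server is assumed to resolve the virtual view against its back-end sources at query time.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class VirtualLayerClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical JDBC URL for a data virtualization server. The single
            // connection hides which back-end sources (relational databases, web
            // services, files, and so on) actually hold the data.
            String url = "jdbc:virtualization://dv-server:31010/sales_views";

            try (Connection conn = DriverManager.getConnection(url, "analyst", "secret");
                 Statement stmt = conn.createStatement();
                 // "customer_orders" is a hypothetical virtual view; the server
                 // resolves it against the underlying sources when the query runs.
                 ResultSet rs = stmt.executeQuery(
                         "SELECT customer_name, order_total FROM customer_orders")) {
                while (rs.next()) {
                    System.out.println(rs.getString("customer_name") + " -> "
                            + rs.getBigDecimal("order_total"));
                }
            }
        }
    }

Because the application sees only the virtual view, the underlying data can in principle be relocated or reformatted without changing the application code; only the virtualization layer's source mappings change.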

Functionality

Data virtualization software provides some or all of the following capabilities:

  • Abstraction – Abstract the technical aspects of stored data, such as location, storage structure, API, access language, and storage technology.
  • Virtualized Data Access – Connect to different data sources and make them accessible from a common logical data access point.
  • Transformation – Transform, improve the quality of, and reformat source data for consumer use.
  • Data Federation – Combine result sets from across multiple source systems (see the sketch after this list).
  • Data Delivery – Publish result sets as views and/or data services that client applications or users execute on request.
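
As a concrete illustration of the Data Federation and Data Delivery capabilities, the sketch below combines result sets from two separate source systems in application code; in an actual deployment this combination would be performed inside the data virtualization server and published as a view or data service. The connection URLs, table names, and column names are assumptions made for the example.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.HashMap;
    import java.util.Map;

    public class FederationSketch {
        public static void main(String[] args) throws Exception {
            // Two independent source systems (illustrative URLs): customer master
            // data in one database, order transactions in another.
            try (Connection crm = DriverManager.getConnection("jdbc:postgresql://crm-host/crm");
                 Connection erp = DriverManager.getConnection("jdbc:mysql://erp-host/erp")) {

                // Read customer names keyed by ID from the first source.
                Map<Integer, String> customers = new HashMap<>();
                try (Statement st = crm.createStatement();
                     ResultSet rs = st.executeQuery("SELECT customer_id, name FROM customers")) {
                    while (rs.next()) {
                        customers.put(rs.getInt("customer_id"), rs.getString("name"));
                    }
                }

                // Combine order rows from the second source with the customer data,
                // producing a single federated result for the consumer.
                try (Statement st = erp.createStatement();
                     ResultSet rs = st.executeQuery("SELECT customer_id, amount FROM orders")) {
                    while (rs.next()) {
                        String name = customers.getOrDefault(rs.getInt("customer_id"), "unknown");
                        System.out.println(name + "," + rs.getBigDecimal("amount"));
                    }
                }
            }
        }
    }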

Data virtualization software may include functions for development, operation, and/or management.

Benefits include:

  • Reduce risk of data errors
  • Reduce systems workload by avoiding unnecessary data movement
  • Increase speed of access to data on a real-time basis
  • Significantly reduce development and support time
  • Increase governance and reduce risk through the use of policies[3]
  • Reduce data storage required[4]

Drawbacks include:

  • May affect operational systems' response time, particularly if under-scaled to cope with unanticipated user queries or if not tuned early on[5]
  • Does not impose a single data model on heterogeneous data, meaning the user has to interpret the data unless it is combined with data federation and business understanding of the data[6]
  • Requires a defined governance approach to avoid budgeting issues with the shared services
  • Not suitable for recording historical snapshots of data; a data warehouse is better suited for this[6]
  • Change management "is a huge overhead, as any changes need to be accepted by all applications and users sharing the same virtualization kit"[6]

Technology

Some data virtualization technologies include:

History

Enterprise information integration (EII), a term first coined by MetaMatrix (whose technology is now part of Red Hat JBoss Data Virtualization), and federated database systems are terms used by some vendors to describe a core element of data virtualization: the capability to create relational JOINs in a federated VIEW.
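
A federated VIEW of this kind is typically defined with ordinary SQL in which the JOIN spans tables held in different source systems. The sketch below is an illustration rather than the syntax of any specific product: it submits such a view definition over JDBC to a hypothetical data virtualization server, and the schema and table names are assumptions.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class FederatedViewSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical administrative connection to a data virtualization server.
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:virtualization://dv-server:31010/admin", "admin", "secret");
                 Statement stmt = conn.createStatement()) {
                // The JOIN spans two logical schemas ("crm_source" and "erp_source")
                // that map to physically separate back-end systems; both names are
                // illustrative.
                stmt.execute(
                    "CREATE VIEW customer_orders AS "
                  + "SELECT c.customer_id, c.name, o.order_id, o.amount "
                  + "FROM crm_source.customers c "
                  + "JOIN erp_source.orders o ON o.customer_id = c.customer_id");
            }
        }
    }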

See also

References

  1. ^ "What is Data Virtualization?", Margaret Rouse, TechTarget.com, retrieved 19 August 2013
  2. ^ a b c "Data virtualisation on rise as ETL alternative for data integration", Gareth Morgan, Computer Weekly, retrieved 19 August 2013
  3. ^ "Rapid Access to Disparate Data Across Projects Without Rework", Informatica, retrieved 19 August 2013
  4. ^ "Data virtualization: 6 best practices to help the business 'get it'", Joe McKendrick, ZDNet, 27 October 2011
  5. ^ "Data management pros reveal benefits, drawbacks of data virtualization software", Mark Brunelli, SearchDataManagement, 11 October 2012
  6. ^ a b c "The Pros and Cons of Data Virtualization", Loraine Lawson, IT Business Edge, 7 October 2011
  7. ^ https://capsenta.com/

Further reading

  • Data Virtualization: Going Beyond Traditional Data Integration to Achieve Business Agility, Judith R. Davis and Robert Eve
  • Data Virtualization for Business Intelligence Systems: Revolutionizing Data Integration for Data Warehouses, Rick van der Lans
  • Data Integration Blueprint and Modeling: Techniques for a Scalable and Sustainable Architecture, Anthony Giordano