
Shard (database architecture)

From Wikipedia, the free encyclopedia


A database shard is a horizontal partition of data in a database or search engine. Each individual partition is referred to as a shard or database shard. Each shard is held on a separate database server instance, to spread load.

Some data within a database remains present in all shards,[notes 1] but some appears only in a single shard. Each shard (or server) acts as the single source for this subset of data.[1]

Database architecture

Horizontal partitioning is a database design principle whereby rows of a database table are held separately, rather than being split into columns (which is what normalization and vertical partitioning do, to differing extents). Each partition forms part of a shard, which may in turn be located on a separate database server or physical location.

There are numerous advantages to the horizontal partitioning approach. Since the tables are divided and distributed into multiple servers, the total number of rows in each table in each database is reduced. This reduces index size, which generally improves search performance. A database shard can be placed on separate hardware, and multiple shards can be placed on multiple machines. This enables a distribution of the database over a large number of machines, greatly improving performance. In addition, if the database shard is based on some real-world segmentation of the data (e.g., European customers v. American customers) then it may be possible to infer the appropriate shard membership easily and automatically, and query only the relevant shard.[2]
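
As a sketch of this last point, the following Python fragment (the server names and region keys are hypothetical, chosen only for illustration) shows how an application can infer shard membership directly from a real-world segment of the data and query only the relevant shard:

    # Hypothetical region-based shard map: one shard per real-world customer segment.
    SHARDS = {
        "EU": "db-eu.example.com",  # European customers
        "US": "db-us.example.com",  # American customers
    }

    def shard_for_customer(region: str) -> str:
        """Infer shard membership from the customer's region alone."""
        try:
            return SHARDS[region]
        except KeyError:
            raise ValueError(f"no shard configured for region {region!r}")

    # Only the relevant shard is contacted; the other shard is never queried.
    print(shard_for_customer("EU"))  # db-eu.example.com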

Disadvantages include:

  • A heavier reliance on the interconnect between servers[citation needed]
  • Increased latency when querying, especially where more than one shard must be searched.[citation needed]
  • Data or indexes are often only sharded one way, so that some searches are optimal, and others are slow or impossible.[clarification needed]
  • Issues of consistency and durability due to the more complex failure modes of a set of servers, which often result in systems making no guarantees about cross-shard consistency or durability.[citation needed]

In practice, sharding is complex. Although it has been done for a long time by hand-coding (especially where rows have an obvious grouping, as per the example above), this is often inflexible. There is a desire to support sharding automatically, both in terms of adding code support for it, and for identifying candidates to be sharded separately. Consistent hashing is one form of automatic sharding to spread large loads across multiple smaller services and servers.[3]
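
A minimal illustration of consistent hashing, written in Python with made-up shard names, maps keys onto a fixed ring of virtual nodes so that adding or removing a server remaps only a small fraction of the keys:

    # Illustrative consistent-hashing ring; not a production router.
    import bisect
    import hashlib

    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    class HashRing:
        def __init__(self, servers, vnodes=100):
            # Each server owns many points ("virtual nodes") on the ring,
            # which evens out the distribution of keys.
            self._ring = sorted((_hash(f"{s}#{i}"), s) for s in servers for i in range(vnodes))
            self._points = [point for point, _ in self._ring]

        def server_for(self, key: str) -> str:
            # A key belongs to the first ring point at or after its own hash.
            idx = bisect.bisect(self._points, _hash(key)) % len(self._ring)
            return self._ring[idx][1]

    ring = HashRing(["shard-a", "shard-b", "shard-c"])
    print(ring.server_for("customer:42"))  # one of the three shards, stable across calls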

Where distributed computing is used to separate load between multiple servers (either for performance or reliability reasons), a shard approach may also be useful.

Shards compared to horizontal partitioning

Horizontal partitioning splits one or more tables by row, usually within a single instance of a schema and a database server. It may offer an advantage by reducing index size (and thus search effort) provided that there is some obvious, robust, implicit way to identify in which partition a particular row will be found, without first needing to search the index, e.g., the classic example of the 'CustomersEast' and 'CustomersWest' tables, where their zip code already indicates where they will be found.
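
A sketch of that classic example in Python (the boundary value is invented purely for illustration) shows how the partition holding a row can be computed from the zip code alone, with no index lookup:

    def partition_for_zip(zip_code: str) -> str:
        # Assume, for illustration only, that zip codes below 50000 belong to "East".
        return "CustomersEast" if int(zip_code) < 50000 else "CustomersWest"

    print(partition_for_zip("10001"))  # CustomersEast
    print(partition_for_zip("90210"))  # CustomersWest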

Sharding goes beyond this: it partitions the problematic table(s) in the same way, but it does this across potentially multiple instances of the schema. The obvious advantage would be that search load for the large partitioned table can now be split across multiple servers (logical or physical), not just multiple indexes on the same logical server.

Splitting shards across multiple isolated instances requires more than simple horizontal partitioning. The hoped-for gains in efficiency would be lost if querying the database required both instances to be queried just to retrieve a simple dimension table. Beyond partitioning, sharding thus splits large partitionable tables across the servers, while smaller tables are replicated as complete units.
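
The following sketch (table and shard names are hypothetical) illustrates the resulting layout: the large partitionable table is split across the shards, while the small dimension tables are copied in full to each one:

    # Hypothetical layout: 'Orders' is partitioned, dimension tables are replicated.
    DIMENSION_TABLES = {"Countries": "full copy", "Currencies": "full copy"}

    shards = {
        "shard-eu": {"Orders": "rows for European customers", **DIMENSION_TABLES},
        "shard-us": {"Orders": "rows for American customers", **DIMENSION_TABLES},
    }

    # Every shard can answer an Orders-to-Countries join locally,
    # without contacting the other shard.
    for name, tables in shards.items():
        print(name, sorted(tables))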

This is also why sharding is related to a shared-nothing architecture: once sharded, each shard can live in a totally separate logical schema instance, physical database server, data center, or continent. There is no ongoing need to retain shared access between shards to the unpartitioned tables held in other shards.

This makes replication across multiple servers easy (simple horizontal partitioning does not). It is also useful for worldwide distribution of applications, where communications links between data centers would otherwise be a bottleneck.

There is also a requirement for some notification and replication mechanism between schema instances, so that the unpartitioned tables remain as closely synchronized as the application demands. This is a complex choice in the architecture of sharded systems: approaches range from making these effectively read-only (updates are rare and batched), to dynamically replicated tables (at the cost of reducing some of the distribution benefits of sharding) and many options in between.
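
As one point on that spectrum, the following Python sketch (the connection objects and the apply_update callable are placeholders) fans an update of an unpartitioned reference table out to every shard so that the copies stay synchronized:

    def replicate_reference_update(shard_connections, table, update, apply_update):
        """Fan a single reference-table update out to every shard's copy."""
        failures = []
        for conn in shard_connections:
            try:
                # e.g. run the same UPDATE statement against each shard's copy of the table
                apply_update(conn, table, update)
            except Exception as exc:  # a real system would need retries or a queue here
                failures.append((conn, exc))
        return failures  # a non-empty result means some shard copies are now stale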

Implementation of shards

Apache HBase
HBase provides automatic sharding.[4]
Azure SQL Database Elastic Database tools
Enables the data tier of an application to scale out and in via industry-standard sharding practices.[5]
Couchbase
Couchbase provides automatic, transparent sharding.
CUBRID
CUBRID allows sharding from version 9.0.
Elasticsearch
Elasticsearch enterprise search server provides sharding capabilities.[6]
eXtreme Scale
eXtreme Scale is a cross-process in-memory key/value datastore (a variety of NoSQL datastore). It uses sharding to achieve scalability across processes for both data and MapReduce-style parallel processing.[7]
Hibernate ORM
Hibernate Shards provides sharding support, although there has been little activity since 2007.[8][9]
IBM Informix
IBM has allowed sharding in Informix since version 12.1 xC1 as part of the MACH11 technology. Informix 12.10 xC2 added full compatibility with MongoDB drivers, allowing the mix of regular relational tables with NoSQL collections, while still allowing sharding, failover and ACID properties.[10][11]
Kdb+
Kdb+ allows sharding from version 2.0.
MonetDB
The open-source column store MonetDB allows read-only sharding as of its July 2015 release.[12]
MongoDB
MongoDB allows sharding from version 1.6.
MySQL Cluster
MySQL Cluster provides auto-sharding: the database is automatically and transparently partitioned across low-cost commodity nodes, allowing scale-out of read and write queries without requiring changes to the application.[13]
MySQL Fabric (part of MySQL utilities) includes sharding capability.[14]
Oracle NoSQL Database
Oracle NoSQL Database has automatic sharding and elastic, online expansion of the cluster (adding more shards).
OrientDB
OrientDB allows sharding from version 1.7.
pg_shard
A sharding extension for PostgreSQL that shards and replicates PostgreSQL tables for horizontal scale and high availability. The extension also seamlessly distributes SQL statements without requiring any changes to applications.[15]
Plugin for Grails
Grails supports sharding via the Grails Sharding Plugin.[16]
Ruby ActiveRecord
Octopus works as a database sharding and replication extension for the ActiveRecord ORM.
ScaleBase's Data Traffic Manager
ScaleBase's Data Traffic Manager is a software product dedicated to automating MySQL database sharding without requiring changes to applications.[17]
Shard Query
Open-source parallel query engine for MySQL.[18]
Solr Search Server
Solr enterprise search server provides sharding capabilities.[19]
Spanner
Spanner, Google's global-scale distributed database, shards data across multiple Paxos state machines to scale to "millions of machines across hundreds of datacenters and trillions of database rows".[20]
SQLAlchemy ORM
SQLAlchemy is a data-mapper for the Python programming language that provides sharding capabilities.[21]
Teradata
Teradata's data warehouse (DWH) was the first massively parallel database.[citation needed]

Disadvantages of sharding

Sharding a database table before it has been optimized locally causes premature complexity. Sharding should be used only when all other options for optimization are inadequate. The introduced complexity of database sharding causes the following potential problems:

  • Increased complexity of SQL - Developers have to write more complicated SQL to handle sharding logic, increasing the likelihood of bugs (see the sketch after this list).
  • Sharding introduces complexity - The sharding software that partitions, balances, coordinates, and ensures integrity can fail.
  • Single point of failure - Corruption of one shard due to network/hardware/systems problems causes failure of the entire table.
  • Failover servers more complex - Failover servers must themselves have copies of the fleets of database shards.
  • Backups more complex - Database backups of the individual shards must be coordinated with the backups of the other shards.
  • Operational complexity added - Adding/removing indexes, adding/deleting columns, modifying the schema becomes much more difficult.
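
The first item can be illustrated with a short scatter-gather sketch in Python (DB-API-style cursors are assumed; the table and column names are hypothetical): an aggregate that was a single SQL statement before sharding must now be issued to every shard and the partial results merged in application code:

    def count_orders_on_all_shards(shard_cursors, status):
        """Run the same aggregate on every shard and merge the partial results."""
        total = 0
        for cursor in shard_cursors:
            # What used to be one statement is now issued once per shard.
            cursor.execute("SELECT COUNT(*) FROM orders WHERE status = %s", (status,))
            total += cursor.fetchone()[0]
        return total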

These historical complications of do-it-yourself sharding are now being addressed by independent software vendors who provide autosharding solutions.

Etymology

The word "shard" in a database context may have been introduced by the CCA's "System for Highly Available Replicated Data".[22] There has been speculation[23] that the term might be derived from the 1997 MMORPG Ultima Online, but the SHARD database system predates this by at least nine years.

However, the SHARD system appears[24] to have used its redundant hardware only for replication and not for horizontal partitioning. It is not known[citation needed] whether present-day use of the term "shard" is derived from the CCA system, but in any case it refers to a different use of redundant hardware in database systems.

See also

Notes

  1. ^ Typically 'supporting' data such as dimension tables

References

  1. ^ Pramod J. Sadalage; Martin Fowler (2012), "4: Distribution Models", NoSQL Distilled, ISBN 0321826620
  2. ^ Rahul Roy (July 28, 2008). "Shard - A Database Design".
  3. ^ Ries, Eric. "Sharding for Startups".
  4. ^ "Apache HBase Sharding".
  5. ^ "Introducing Elastic Scale preview for Azure SQL Database".
  6. ^ "Index Shard Allocation".
  7. ^ http://publib.boulder.ibm.com/infocenter/wxsinfo/v7r1/index.jsp?topic=%2Fcom.ibm.websphere.extremescale.over.doc%2
  8. ^ "Hibernate Shards". 2007-02-08.
  9. ^ "Hibernate Shards".
  10. ^ "New Grid queries for Informix".
  11. ^ "NoSQL support in Informix".
  12. ^ "MonetDB July2015 Released". 31 August 2015.
  13. ^ "MySQL Cluster Features & Benefits". 2012-11-23.
  14. ^ "MySQL Fabric sharding quick start guide".
  15. ^ "pg_shard PostgreSQL extension".
  16. ^ "Grails Sharding Plugin".
  17. ^ "ScaleBase's Data Traffic Manager product architecture overview".
  18. ^ "Shard Query".
  19. ^ "Distributed Search".
  20. ^ Corbett, James C; Dean, Jeffrey; Epstein, Michael; Fikes, Andrew; Frost, Christopher; Furman, JJ; Ghemawat, Sanjay; Gubarev, Andrey; Heiser, Christopher; Hochschild, Peter; Hsieh, Wilson; Kanthak, Sebastian; Kogan, Eugene; Li, Hongyi; Lloyd, Alexander; Melnik, Sergey; Mwaura, David; Nagle, David; Quinlan, Sean; Rao, Rajesh; Rolig, Lindsay; Saito, Yasushi; Szymaniak, Michal; Taylor, Christopher; Wang, Ruth; Woodford, Dale. "Spanner: Google's Globally-Distributed Database" (PDF). Proceedings of OSDI 2012. Google. Retrieved 24 February 2014.
  21. ^ "Basic example of using the SQLAlchemy Sharding API".
  22. ^ Sarin, DeWitt & Rosenburg, Overview of SHARD: A System for Highly Available Replicated Data, Technical Report CCA-88-01, Computer Corporation of America, May 1988
  23. ^ Koster, Raph (2009-01-08). "Database "sharding" came from UO?". Raph Koster's Website. Retrieved 2015-01-17.
  24. ^ Sarin & Lynch, Discarding Obsolete Information in a Replicated Database System, IEEE Transactions on Software Engineering vol SE-13 no 1, January 1987