Written in: C++, PHP
Sherpa is Yahoo's next-generation cloud storage platform. It is a hosted, distributed, and geographically replicated key-value data store. It is a NoSQL system developed by Yahoo to address the scalability, availability, and latency needs of Yahoo websites. Sherpa offers features such as elastic growth, multi-tenancy, a global footprint for local low-latency access, asynchronous replication, representational state transfer (REST) based web service APIs, novel per-record consistency knobs, high availability, compression, secondary indexes, and record-level replication.
Sherpa is a multi-tenant system. An application stores data in a table, which is a collection of records. A table is sharded into smaller pieces called tablets, either by the hash value of the key or by key range. Tablets are stored on nodes referred to as storage units. A software routing layer keeps track of the mapping between each application's tablets and the storage units that hold them. Applications send requests to the router, which forwards them to the correct storage unit based on the tablet map. Clients can get, set, delete, and scan records via unique record primary keys.
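The routing step described above can be sketched as follows. This is an illustrative model, not Sherpa's actual code: the tablet count, map layout, and host names are assumptions, and hash-based sharding is shown (range-based sharding would use a sorted lookup instead).

```python
import hashlib

# Hypothetical sketch of Sherpa-style request routing: a hash-sharded
# table is split into tablets, and the router maps each record key to
# the storage unit that owns the containing tablet.

NUM_TABLETS = 8

# Tablet map maintained by the routing layer: tablet id -> storage unit.
# The assignment shown here (4 storage units) is purely illustrative.
tablet_map = {t: f"storage-unit-{t % 4}.example.com" for t in range(NUM_TABLETS)}

def tablet_for_key(key: str) -> int:
    """Hash-based sharding: the key's hash value picks the tablet."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % NUM_TABLETS

def route(key: str) -> str:
    """Return the storage unit responsible for this record's tablet."""
    return tablet_map[tablet_for_key(key)]
```

Because the router only consults the tablet map, tablets can be moved between storage units by updating the map, without involving the client.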
Sherpa's data model is a key-value store in which data is stored as JSON blobs. Data is organized into tables where primary key uniqueness can be enforced, but beyond that there are no fixed schemas. Sherpa supports single-table scans with predicates. Customers can choose from a variety of table types: distributed hash table, distributed ordered table, and mastered and merging tables. Application-specific access patterns determine the suitability of each table type, and query patterns affect how keys are defined.
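To make the distinction between table types concrete, the following single-process sketch (not Sherpa's actual API; all names are illustrative) models records as JSON blobs and contrasts a hash table, which serves point lookups, with an ordered table, which additionally supports predicate range scans:

```python
import json
from bisect import bisect_left

# Illustrative model of Sherpa's table types. Records are JSON blobs
# keyed by a primary key; schemas beyond the key are not enforced.

hash_table = {}      # models a distributed hash table (point gets)
ordered_keys = []    # sorted key list, models a distributed ordered table
ordered_table = {}

def put(key, record):
    """Set a record; the JSON blob carries whatever fields it likes."""
    blob = json.dumps(record)
    hash_table[key] = blob
    if key not in ordered_table:
        ordered_keys.insert(bisect_left(ordered_keys, key), key)
    ordered_table[key] = blob

def get(key):
    """Point lookup by primary key."""
    return json.loads(hash_table[key])

def scan(start, end, predicate=lambda rec: True):
    """Range scan with a predicate; efficient only on the ordered table."""
    out = []
    i = bisect_left(ordered_keys, start)
    while i < len(ordered_keys) and ordered_keys[i] < end:
        rec = json.loads(ordered_table[ordered_keys[i]])
        if predicate(rec):
            out.append(rec)
        i += 1
    return out
```

This is why access patterns drive the choice of table type: an application that only ever does point gets is served by the hash table, while one that needs ordered scans must define keys that sort usefully.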
Sherpa scales by partitioning data: each customer-defined table is divided into partitions called tablets. Tablets are thus both the unit of work assignment and the unit of tenancy. Each tablet contains a range of records. Sherpa can scale to very large numbers of tables, tablets, and records.
The system scales horizontally as new machines are added, with no downtime to applications. Other elasticity operations include tablet assignment, reassignment, and splitting.
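A tablet split, one of the elasticity operations mentioned above, can be sketched as follows. This is a simplified model under assumed data structures, not Sherpa's implementation: a tablet's key range is divided at a midpoint key, after which the two halves can be assigned to different storage units.

```python
# Illustrative tablet split: divide one tablet's key range in two.
# The tablet representation (dict with 'start', 'end', sorted 'keys')
# is an assumption made for this sketch.

def split_tablet(tablet):
    """Split a tablet at its median key into two contiguous tablets."""
    keys = tablet["keys"]
    mid = keys[len(keys) // 2]
    left = {"start": tablet["start"], "end": mid,
            "keys": [k for k in keys if k < mid]}
    right = {"start": mid, "end": tablet["end"],
             "keys": [k for k in keys if k >= mid]}
    return left, right
```

Because clients route through the tablet map rather than to fixed hosts, the two halves can be placed on different storage units after the split without application downtime.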
Data is automatically replicated to multiple nodes for fault tolerance. Replication across multiple data centers is supported. Single-node failure is transparent to the applications. Sherpa relies on a reliable transaction message bus for replicating transactions. This message bus guarantees at-least-once delivery of transaction messages.
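At-least-once delivery means a replica may receive the same transaction message more than once, so applying messages must be idempotent. One common way to achieve this, assumed here for illustration rather than taken from Sherpa's documented design, is to track applied transaction ids and skip duplicates:

```python
# Sketch of idempotent transaction apply on a replica receiving
# messages from an at-least-once delivery bus. Names are illustrative.

applied_txns = set()   # ids of transactions already applied
replica = {}           # the replica's key-value state

def apply_transaction(txn_id, key, value):
    """Apply a replicated write; duplicate deliveries are no-ops."""
    if txn_id in applied_txns:
        return False   # already applied, safe to discard
    replica[key] = value
    applied_txns.add(txn_id)
    return True
```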
Sherpa supports different levels of consistency, ranging from per-record timeline consistency where all writes are serialized to a master copy, to eventual consistency.
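Per-record timeline consistency can be sketched as follows. In this simplified model (an illustration, not Sherpa's implementation), all writes to a record pass through that record's master, which stamps each write with an increasing per-record sequence number; replicas apply updates in sequence order, so every copy moves along the same timeline even though replication is asynchronous.

```python
# Illustrative model of per-record timeline consistency: writes are
# serialized at the record's master and versioned per record.

master_versions = {}   # record key -> latest sequence number at master
replica_store = {}     # record key -> (seq, value) on an async replica

def master_write(key, value):
    """Serialize a write at the master; returns the replication message."""
    seq = master_versions.get(key, 0) + 1
    master_versions[key] = seq
    return (key, seq, value)

def replica_apply(key, seq, value):
    """Apply an update only if it is newer than what the replica holds."""
    cur_seq, _ = replica_store.get(key, (0, None))
    if seq > cur_seq:
        replica_store[key] = (seq, value)
```

Reads served from a replica are eventually consistent, but because stale or reordered updates are discarded, a replica never moves backwards along a record's timeline.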
Sherpa supports selective record replication: replication granularity can be set at the level of individual records as well as whole tables.
The backup feature allows multiple older copies of a full table to be saved in offline storage, from which customers can retrieve old versions of individual records.
Many applications need to access data via fields other than the primary key. For these access patterns, Sherpa supports asynchronous secondary indexes.
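The asynchronous nature of these indexes can be illustrated with the following sketch (assumed field names and mechanism, not Sherpa's actual design): a base-table write enqueues an index maintenance task, and a later background step updates the index, so index lookups may briefly lag the base table.

```python
from collections import defaultdict, deque

# Illustrative asynchronous secondary index on a non-primary-key field
# ("city" is an assumed example field). The index is updated by a
# deferred maintenance step, not inline with the base-table write.

base_table = {}
index_by_city = defaultdict(set)   # city -> set of primary keys
pending = deque()                  # queued index maintenance tasks

def write(pk, record):
    """Write the base record and enqueue the index update."""
    base_table[pk] = record
    pending.append((pk, record["city"]))

def run_index_maintenance():
    """Background step: drain the queue and update the index."""
    while pending:
        pk, city = pending.popleft()
        index_by_city[city].add(pk)

def lookup_by_city(city):
    """Query by the secondary field; may lag recent writes."""
    return {pk: base_table[pk] for pk in index_by_city[city]}
```

The trade-off shown here is the usual one for asynchronous indexes: base-table writes stay fast because they do not wait for index updates, at the cost of briefly stale index reads.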