EMC VPLEX

From Wikipedia, the free encyclopedia

EMC VPLEX is a virtual computer data storage software product introduced by EMC Corporation in May 2010.[1] VPLEX implements a distributed "virtualization" layer within and across geographically disparate Fibre Channel storage area networks and data centers.[2][3][4]

History

A previous virtual storage product from EMC Corporation called Invista was announced in 2005.[5] Five months after the announcement, Invista had not shipped, and was expected to not have much impact until 2007.[6] By 2009, some analysts suggested the Invista product might best be shut down.[7] Another product called the Symmetrix Remote Data Facility (SRDF) also was marketed when VPLEX was announced in May 2010.[8]

Architecture

Logical layout

VPLEX is deployed as a cluster consisting of one or more engines. Each engine consists of two redundant I/O directors and one I/O annex, each a single rack unit (1U) physical device. Each engine has 32 Fibre Channel ports (VS1: 16 front-end ports, 16 back-end ports) or 16 Fibre Channel ports (VS2: 8 front-end ports, 8 back-end ports) and is protected by two redundant stand-by power supplies. Each director is a bladed, multi-core, multi-processor x86 virtualization processing unit containing four hot-swappable I/O modules. The 1U I/O annex is used for intra-cluster director communication. Each director runs a Linux kernel and a specialized virtualization storage software environment called GeoSynchrony, which provides proprietary clustering capability. Each cluster has a service management station that provides all alerting and software management capabilities.[2]

VPLEX is based on standard EMC building block hardware architecture components such as those used in its Symmetrix product line.

VPLEX uses an in-band architecture, meaning that data flowing between a host and a storage controller passes through one or more directors. On the front end, VPLEX presents an interface that looks to a host like a storage controller (array), i.e. a SCSI target. On the back end, VPLEX presents an interface that looks to a storage controller like a host, i.e. a SCSI initiator.
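The in-band data path can be illustrated with a minimal sketch (illustrative Python only, not VPLEX code or its API; all class and method names here are invented for the example): the virtualization layer accepts I/O as if it were an array, resolves the virtual volume to a back-end LUN, and forwards the I/O as if it were a host.

```python
# Illustrative sketch of an in-band virtualization layer (not actual VPLEX code).
# A director sits between host and array: it looks like a SCSI target to the
# host and acts as a SCSI initiator toward the back-end storage controller.

class BackendArray:
    """Stands in for a back-end storage controller (the real SCSI target)."""
    def __init__(self):
        self.blocks = {}

    def read(self, lun, lba):
        # Unwritten blocks read back as zeros, one 512-byte sector per LBA.
        return self.blocks.get((lun, lba), b"\x00" * 512)

    def write(self, lun, lba, data):
        self.blocks[(lun, lba)] = data

class Director:
    """Front end: looks like an array to hosts. Back end: acts like a host."""
    def __init__(self, vol_map):
        # vol_map: virtual volume name -> (back-end array, back-end LUN)
        self.vol_map = vol_map

    def host_read(self, vvol, lba):
        array, lun = self.vol_map[vvol]   # resolve virtual -> physical
        return array.read(lun, lba)       # forward as an initiator

    def host_write(self, vvol, lba, data):
        array, lun = self.vol_map[vvol]
        array.write(lun, lba, data)

array = BackendArray()
director = Director({"vvol0": (array, 7)})
director.host_write("vvol0", 100, b"x" * 512)
assert director.host_read("vvol0", 100) == b"x" * 512
```

The point of the sketch is that every host I/O traverses the director, which is what "in-band" means; out-of-band designs instead keep the mapping lookup off the data path.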

A VPLEX cluster consists of one to four pairs of directors. Any director from any engine can fail over to any other director in the cluster in the case of hardware or path failure.

Terminology

Components of VPLEX include:[2]

  • Director - a single 1U virtualization processor.
  • Cluster - a set of one or more pairs of directors that is managed as a single entity.
  • Cluster (site management) IP address - a single IP address for a cluster, providing administrative interfaces (SSH and HTTPS).
  • VPLEX Management Console - a management GUI for VPLEX, installed on the System Management Server (SMS).
  • Virtual Volume - a unit of storage presented to the host by VPLEX.
  • Extent - an atomic unit of storage; an extent consists of some or all of a storage volume.
  • Device - a logical unit constructed from one or more extents. Devices can be of type RAID-0, RAID-1, or RAID-C and can be recursively constructed from other devices.
  • Storage View - a logical container consisting of front-end ports, registered host initiator ports, and virtual volumes. Storage Views determine host access to virtual volumes from VPLEX.
  • VPLEX Local - a VPLEX cluster within a single data center.
  • VPLEX Metro - two VPLEX clusters, within one data center or across data centers, separated by up to 5 ms of round-trip (RTT) latency.
  • VPLEX Geo - two VPLEX clusters, within one data center or across data centers, separated by up to 50 ms of round-trip (RTT) latency.

VPLEX models
Type-model         Cache [GB]  FC speed [Gb/s]  Engines  FC ports  Announced
VPLEX VS1 Single   64          8                1        32        10 May 2010
VPLEX VS1 Dual     128         8                2        64        10 May 2010
VPLEX VS1 Quad     256         8                4        128       10 May 2010
VPLEX VS2 Single   72          8                1        16        23 May 2011
VPLEX VS2 Dual     144         8                2        32        23 May 2011
VPLEX VS2 Quad     288         8                4        64        23 May 2011
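The extent/device/virtual-volume layering above can be sketched in a few lines of illustrative Python (the classes and the capacity rules are simplifications invented for this example, not a VPLEX interface): extents carve up storage volumes, and devices combine extents or other devices as RAID-0 (stripe), RAID-1 (mirror), or RAID-C (concatenation), recursively.

```python
# Hedged sketch of the terminology above (illustrative only, not VPLEX code).
# Capacity rules are simplified: a mirror's capacity is that of its smallest
# leg; stripes and concatenations sum their legs.

class Extent:
    """Some or all of a storage volume: (volume, offset, size in GB)."""
    def __init__(self, storage_volume, offset, size):
        self.storage_volume, self.offset, self.size = storage_volume, offset, size

    def capacity(self):
        return self.size

class Device:
    """A logical unit built from extents and/or other devices (recursive)."""
    def __init__(self, raid_type, children):
        assert raid_type in ("RAID-0", "RAID-1", "RAID-C")
        self.raid_type, self.children = raid_type, children

    def capacity(self):
        sizes = [c.capacity() for c in self.children]
        if self.raid_type == "RAID-1":   # mirror: usable space of one leg
            return min(sizes)
        return sum(sizes)                # stripe / concatenation: sum of legs

# Two 100 GB extents mirrored, then concatenated with a 50 GB extent.
mirror = Device("RAID-1", [Extent("sv1", 0, 100), Extent("sv2", 0, 100)])
dev = Device("RAID-C", [mirror, Extent("sv3", 0, 50)])
assert dev.capacity() == 150
```

A virtual volume would then be presented to the host on top of the resulting device, which is what the Storage View maps to initiator ports.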

Performance

A VPLEX VS2 Quad is advertised as supporting up to 3,000,000 I/O operations per second and up to 23.2 GB/s of throughput.[9]

Features

As of 2010 with release 4.0.0.00.11, the base major features of VPLEX were:[3]

Virtual Storage
Servers access VPLEX as if it were a storage array. The SCSI LUNs they see represent virtual disks (virtual volumes) which are allocated in VPLEX from a pool of storage volumes provided by one or more back-end storage arrays. A storage volume is simply a storage LUN provided by one of the storage arrays that VPLEX is connected to.
Data migration
VPLEX can move data between different devices or between different extents, while maintaining I/O access to the data.
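Non-disruptive migration of this kind is commonly implemented by mirroring the source onto the target, copying in the background while new writes go to both legs, and then cutting over. A hedged sketch (illustrative Python, not a VPLEX interface; the class and method names are invented):

```python
# Illustrative sketch of online data migration (not actual VPLEX code).
# Host I/O continues throughout: writes land on both legs while existing
# data is copied, then the target becomes the sole leg.

class Migration:
    def __init__(self, source, target):
        self.source, self.target = source, target
        self.legs = [source, target]          # writes are mirrored to both

    def write(self, lba, data):
        for leg in self.legs:                 # host I/O during the migration
            leg[lba] = data

    def sync(self):
        # Background copy; setdefault avoids clobbering fresher host writes.
        for lba, data in self.source.items():
            self.target.setdefault(lba, data)

    def commit(self):
        self.legs = [self.target]             # cut over: source is released

src = {0: b"old"}                             # pre-existing data on the source
mig = Migration(src, {})
mig.write(1, b"new")                          # host write during migration
mig.sync()
mig.commit()
assert mig.target[0] == b"old" and mig.target[1] == b"new"
```

The essential property is that at no point does the host lose access: the cut-over is a mapping change, not a data copy.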
Importing existing LUNs via a feature called application-consistent mode.
Application-consistent virtual volumes are one-to-one representations of existing storage volumes; such volumes can easily be imported back by a host after removing VPLEX from the data path. The ability to move easily between virtualized and non-virtualized disk storage is the main advantage of this approach. It limits the usable extent size to that of the underlying storage volume and imposes upper-level limits on device layout and construction.
Host LUN Mapping
The set of presented virtual volumes can be configured independently for each server.
Write-Through cache (Local and Metro)
Writes from hosts are cached by VPLEX, but only acknowledged back to the host once they have been acknowledged by the back-end storage array. In the initial VS1 release, VPLEX caching is very beneficial in read-skewed environments. Cache size is 32 GB per director.
Write-Back cache (Geo only)
Writes from hosts are cached by VPLEX, protected, and then acknowledged back to the host. For the VS1 hardware, cache size is 32 GB per director.
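The difference between the two caching modes is when the host acknowledgment is sent. A minimal illustrative contrast (invented Python, not VPLEX code): write-through forwards to the back-end array before acknowledging; write-back acknowledges once the write is held (and, in the real product, protected) in cache, destaging to the array later.

```python
# Illustrative contrast of write-through vs. write-back (not actual VPLEX code).

class Cache:
    def __init__(self, array, write_back=False):
        self.array = array                # back-end array, modeled as a dict
        self.write_back = write_back
        self.dirty = {}                   # cached writes not yet destaged

    def write(self, lba, data):
        if self.write_back:
            self.dirty[lba] = data        # hold in (protected) cache...
            return "ack"                  # ...and ack the host immediately
        self.array[lba] = data            # write-through: array first
        return "ack"                      # ack only after the array has it

    def destage(self):
        self.array.update(self.dirty)     # flush dirty data to the array
        self.dirty.clear()

backend = {}
wb = Cache(backend, write_back=True)
wb.write(5, b"data")
assert 5 not in backend                   # acked, but not yet on the array
wb.destage()
assert backend[5] == b"data"
```

Write-back hides back-end (and, in the Geo case, inter-site) latency from the host, at the cost of having to protect the cached copy until it is destaged; write-through never acknowledges data the array does not already hold.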
Power and space efficiency[10]
Virtual Volume Mirroring
Provides the ability to make two copies of a LUN within and across heterogeneous storage arrays.
Distributed Devices
Presentation of a logical device to hosts across geographically disparate clusters (<100 km, <5 ms latency), with full read/write host access provided by each VPLEX cluster.
The application layer manages (prevents) concurrent updates from multiple hosts.
AccessAnywhere ensures that all hosts read the most recent updates, regardless of source.

Base licensing covers up to 10 TB of attached back-end storage; beyond that, licensing is priced per TB according to price tier. Some optional features (e.g., Metro) are licensed separately.

References

  1. ^ "EMC launches VPLEX, eyes teleporting petabytes globally". ZDNet. http://www.zdnet.com/blog/btl/emc-launches-vplex-eyes-teleporting-petabytes-globally/34231
  2. ^ a b c "VPLEX Architecture Deployment" (PDF). EMC Corporation. 
  3. ^ a b Mark Peters (May 10, 2010). "EMC VPLEX: Virtual Storage Beyond Real Walls". Enterprise Strategy Group. Archived from the original on May 12, 2010. 
  4. ^ Vance, Ashlee (May 13, 2010). "EMC Now Performing Data Contortion Act". The New York Times. 
  5. ^ "EMC Announces EMC Invista Network Storage Virtualization Platform". Press release (EMC Corporation). May 16, 2005. Retrieved July 11, 2013. 
  6. ^ Lucas Mearian (October 18, 2005). "Q&A: EMC's Mark Lewis on virtualization, competition". Computer World. Retrieved July 11, 2013. 
  7. ^ Chris Mellor (February 9, 2009). "Why EMC should really rev InVista: Put up, or put down?". The Register. Retrieved July 11, 2013. 
  8. ^ Lucas Mearian (May 12, 2010). "Q&A: EMC's Brian Gallagher touts the new VPLEX appliance". Computer World. Retrieved July 11, 2013. 
  9. ^ "EMC VPLEX Local" (PDF). 
  10. ^ "EMC's Vplex Puts Data on the Bullet Train". TechNewsWorld. http://www.technewsworld.com/story/EMCs-Vplex-Puts-Data-on-the-Bullet-Train-69966.html