VMware ESX
Developer(s): VMware, Inc.
Stable release: 4.0U1a (build 208167) / December 10, 2009[1]
Platform: x64-compatible
Type: Virtual machine monitor
License: Proprietary
Website: VMware ESX

VMware ESX is an enterprise-level virtualization product offered by VMware, Inc. ESX is a component of VMware's larger offering, VMware Infrastructure, which adds management and reliability services to the core server product. ESX is being replaced by ESXi (see below).

The basic server requires some form of persistent storage—typically, an array of hard disk drives—for storing the virtualization kernel and support files. A variant of this design, VMware ESXi, does away with this requirement by embedding the hypervisor kernel in a dedicated hardware device. Both variants support the services offered by VMware Infrastructure.[2]

Technical description

Terms and working

VMware, Inc. refers to the hypervisor used by VMware ESX as "vmkernel".

Architecture

VMware states that the ESX product runs on "bare metal".[3] In contrast to other VMware products, it does not run atop a third-party operating system,[4] but instead includes its own kernel. Up through the current ESX version 4.0, a Linux kernel is started first[5], and is used to load a variety of specialized virtualization components, including VMware's 'vmkernel' component. This previously-booted Linux kernel then becomes the first running virtual machine and is called the service console. Thus, at normal run-time, the vmkernel is running on the bare computer and the Linux-based service console runs as the first virtual machine.

The vmkernel itself, which VMware claims is a microkernel,[6] has three interfaces to the outside world:

  • hardware
  • guest systems
  • service console (Console OS)

Interface to hardware

The vmkernel handles CPU and memory directly, using Scan-Before-Execution (SBE) to handle special or privileged CPU instructions.[7]
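
As a rough, purely conceptual sketch (not VMware's actual implementation), the idea behind scanning code before execution can be illustrated as follows: a block of guest instructions is inspected, and any instruction classified as privileged is routed to an emulation handler instead of being executed directly on the CPU. The opcode values and handler names below are placeholders.

    /* Conceptual illustration only: scan a block of guest "instructions"
     * and divert privileged ones to an emulation path before anything runs.
     * Opcode values and handlers are placeholders, not VMware internals. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool is_privileged(uint8_t opcode)
    {
        /* Stand-in for a table of privileged/sensitive instructions. */
        return opcode == 0xF4; /* HLT, which unprivileged code may not run */
    }

    static void emulate_privileged(uint8_t opcode)
    {
        printf("hypervisor emulates privileged opcode 0x%02X\n", (unsigned) opcode);
    }

    static void execute_natively(uint8_t opcode)
    {
        printf("opcode 0x%02X runs directly on the CPU\n", (unsigned) opcode);
    }

    int main(void)
    {
        uint8_t guest_block[] = { 0x90 /* NOP */, 0xF4 /* HLT */, 0x90 /* NOP */ };
        for (size_t i = 0; i < sizeof guest_block; i++) {
            if (is_privileged(guest_block[i]))
                emulate_privileged(guest_block[i]);
            else
                execute_natively(guest_block[i]);
        }
        return 0;
    }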

Access to other hardware (such as network or storage devices) takes place using modules. At least some of the modules derive from modules used in the Linux kernel. To access these modules, an additional module called vmklinux implements the Linux module interface. According to the README file, "This module contains the Linux emulation layer used by the vmkernel."[8]
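
As an illustration, the sketch below shows the Linux-style module entry points that such a driver exposes; the assumption (not stated in the source) is that vmklinux provides equivalents of the kernel services these entry points rely on, so that a driver written against the Linux module interface can be loaded by the vmkernel. The driver name is hypothetical.

    /* A minimal Linux-style kernel module skeleton, of the kind the adapted
     * drivers build on. Assumption: vmklinux emulates the kernel services
     * used here; "example" is a hypothetical driver name. */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    static int __init example_driver_init(void)
    {
        /* A real driver would register with a bus here, e.g. a PCI driver. */
        printk(KERN_INFO "example driver loaded\n");
        return 0;
    }

    static void __exit example_driver_exit(void)
    {
        printk(KERN_INFO "example driver unloaded\n");
    }

    module_init(example_driver_init);
    module_exit(example_driver_exit);

    MODULE_LICENSE("GPL");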

The vmkernel uses the following device drivers:[8]

  1. net/e100
  2. net/e1000
  3. net/bnx2
  4. net/tg3
  5. net/forcedeth
  6. net/pcnet32
  7. block/cciss
  8. scsi/adp94xx
  9. scsi/aic7xxx
  10. scsi/aic79xx
  11. scsi/ips
  12. scsi/lpfcdd-v732
  13. scsi/megaraid2
  14. scsi/mptscsi_2xx
  15. scsi/qla2200-v7.07
  16. scsi/megaraid_sas
  17. scsi/qla4010
  18. scsi/qla4022
  19. scsi/vmkiscsi
  20. scsi/aacraid_esx30
  21. scsi/lpfcdd-v7xx
  22. scsi/qla2200-v7xx

These drivers mostly equate to those described in VMware's hardware compatibility list.[9] All these modules fall under the GPL. Programmers have adapted them to run with the vmkernel: VMware, Inc. has changed the module-loading mechanism and made other minor changes.[8]

Guest systems

The vmkernel offers an interface to guest systems which simulates hardware. This takes place in such a way that a guest system can run unmodified atop the hypervisor. Because using unmodified drivers in the guest system consumes some system resources, VMware, Inc. offers special drivers for different operating systems to increase performance.[10] These enhanced drivers are typically installed on the guest OS as part of VMware Tools, which also adds utilities to better connect the guest OS with the underlying vmkernel and/or service console, for features such as improved clock synchronization and automatic guest OS shutdown. Each guest system behaves as an independent system.

Service console

The Service Console is a vestigial general-purpose operating system used most significantly as the bootstrap for the VMware kernel, vmkernel, and secondarily as a management interface. Both of these Console Operating System functions are being deprecated as VMware migrates exclusively to the 'embedded' ESX model, the current version being ESXi.[citation needed]

Linux dependencies

ESX uses a Linux kernel to load additional code: often referred to by VMware, Inc. as the "vmkernel". The dependencies between the "vmkernel" and the Linux part of the ESX server have changed drastically over different major versions of the software. The VMware FAQ[11] states: "ESX Server also incorporates a service console based on a Linux 2.4 kernel that is used to boot the ESX Server virtualization layer". The Linux kernel runs before any other software on an ESX host.[5] On ESX versions 1 and 2, no VMkernel processes run on the system during the boot process.[12] After the Linux kernel has loaded, the S90vmware script loads the vmkernel.[12] VMware, Inc. states that the vmkernel does not derive from Linux, but acknowledges that it has adapted certain device drivers from Linux device drivers. The Linux kernel continues running, under the control of the vmkernel, providing functions including the proc file system used by ESX and an environment to run support applications.[12] ESX version 3 loads the VMkernel from the Linux initrd, thus much earlier in the boot sequence than in earlier ESX versions.

In traditional systems, a given operating system runs a single kernel. The VMware FAQ mentions that ESX has both a Linux 2.4 kernel and the vmkernel — hence confusion over whether ESX has a Linux base. An ESX system starts a Linux kernel first, but that kernel loads the vmkernel (also described by VMware as a kernel), which according to VMware 'wraps around' the Linux kernel, and which (according to VMware, Inc.) does not derive from Linux.

The ESX userspace environment, known as the "Service Console" (or as "COS" or as "vmnix"), derives from a modified version of Red Hat Linux (Red Hat 7.2 for ESX 2.x and Red Hat Enterprise Linux 3 for ESX 3.x). In general, this Service Console provides management interfaces (CLI, webpage MUI, Remote Console). The VMware ESX hypervisor approach provides lower overhead and better control and granularity for allocating resources[citation needed] (CPU time, disk bandwidth, network bandwidth, memory utilization) to virtual machines, compared to so-called "hosted" virtualization, where a base OS handles the physical resources. It also increases security.[citation needed]

A further detail differentiates ESX from other VMware virtualization products: ESX supports VMware's proprietary cluster file system, VMFS. VMFS enables multiple hosts to access the same SAN LUNs simultaneously, while file-level locking protects file-system integrity.
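
The per-file locking idea can be illustrated, by loose analogy only, with POSIX advisory locks; VMFS itself uses on-disk locks that work across hosts sharing a LUN, which ordinary flock() calls do not. The file path below is illustrative.

    /* Loose analogy using POSIX advisory locks (not VMFS's on-disk locks):
     * take an exclusive lock on a virtual disk file before using it, so a
     * second process cannot write to it at the same time. The path is
     * illustrative only. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/file.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/vmfs/volumes/datastore1/vm1/vm1.vmdk", O_RDWR);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        /* Non-blocking exclusive lock: fails if someone else holds it. */
        if (flock(fd, LOCK_EX | LOCK_NB) != 0) {
            fprintf(stderr, "disk file is locked elsewhere; not proceeding\n");
            close(fd);
            return 1;
        }
        printf("lock acquired; safe to use the virtual disk\n");
        /* ... work with the disk ... */
        flock(fd, LOCK_UN);
        close(fd);
        return 0;
    }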

Purple Screen of Death

A Purple Screen of Death as seen in VMware ESX Server 3.0

In the event of a hardware error, the vmkernel can trigger a Machine Check Exception.[13] This results in an error message displayed on a purple console screen. This is colloquially known as a PSOD, or Purple Screen of Death, after the Blue Screen of Death in Windows operating systems.

Upon displaying a PSOD, the vmkernel writes debug information to the core dump partition. This information, together with the error codes displayed on the PSOD, can be used by VMware support to determine the cause of the problem.

Two other products operate in conjunction with ESX: VirtualCenter and Converter.[14]

  • VirtualCenter allows monitoring and management of multiple ESX or GSX servers. In addition, users must install it to run infrastructure services such as:
    • VMotion (transferring virtual machines between servers on the fly, with almost zero downtime)
    • SVMotion (transferring virtual machines between Shared Storage LUNs on the fly, with almost zero downtime)
    • DRS (automated VMotion based on host/VM load requirements/demands)
    • HA (restarting of Virtual Machine Guests in the event of a physical ESX Host failure)
  • Converter allows users to create VMware ESX Server- or Workstation-compatible virtual machines from either physical machines or from virtual machines made by other virtualization products. Converter replaces the VMware "P2V Assistant" and "Importer" products — P2V Assistant allowed users to convert physical machines into virtual machines; and Importer allowed the import of virtual machines from other products into VMware Workstation.

VMware ESXi

VMware ESXi
Developer(s): VMware, Inc.
Stable release: 4.0 / May 21, 2009[15]
Platform: x64-compatible
Type: Virtual machine monitor
License: Proprietary
Website: VMware ESX/ESXi

VMware ESXi is server virtualization software written by VMware. It is available either free of charge (with limited features) or as a full-featured licensed product. VMware ESX and VMware ESXi are both bare-metal hypervisors that install directly on the server hardware. The difference is that ESX also installs a local Linux-based service console, whereas ESXi omits the service console and relies on remote management tools. VMware recommends ESXi over ESX.

VMware ESXi was originally a reduced version of VMware ESX that allowed for a smaller 32 MB disk footprint on the host. With a simple console for basic (mostly network) configuration and the remote VMware Infrastructure Client interface, this allows more resources to be dedicated to the guest environments.

There are two variations of ESXi: VMware ESXi 3.5 Installable and VMware ESXi 3.5 Embedded Edition. Both can be upgraded to VMware Infrastructure 3[16] or to VMware vSphere 4.0 ESXi.

The product was originally named VMware ESX Server ESXi edition; through several revisions it became VMware ESXi 3, followed by ESXi 3.5 and now ESXi 4.

Version release history:

  • VMware ESX 3 Server ESXi edition
  • -- unknown --
  • VMware ESXi 3.5 First Public Release (Build 67921) (December 31, 2007)
  • VMware ESXi 3.5 Initial Release (Build 70348)
  • VMware ESXi 3.5 Update 1 (Build 82664)
  • VMware ESXi 3.5 Update 2 (Build 110271)
  • VMware ESXi 3.5 Update 3 (Build 123629)
  • VMware ESXi 3.5 Update 4 (Build 153875)
  • VMware ESXi 3.5 Update 5 (Build 207095)
  • VMware ESXi 4.0 (Build 164009) (May 21, 2009)
  • VMware ESXi 4.0 Update 1 (Build 208167)

Known limitations

Known limitations of VMware ESX, as of May 2009, include the following:

Infrastructure limitations

Some limitations in ESX Server 4 may constrain the design of data centers:[17][18]

  • Guest system maximum RAM: 255 GB
  • Host system maximum RAM: 1 TB
  • Number of hosts in an HA cluster: 32
  • Number of hosts in a DRS cluster: 32
  • Maximum number of processors per virtual machine: 8
  • Maximum number of processors per host: 64
  • Maximum number of cores per processor: 12
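
As a small illustration of how these figures might be used, the sketch below checks a planned configuration against the maximums quoted above; the struct, field names, and example values are hypothetical and simply restate the documented limits. They are not drawn from any VMware API.

    /* Hypothetical check of a planned deployment against the ESX 4 maximums
     * listed above; the limits are hard-coded from that list. */
    #include <stdbool.h>
    #include <stdio.h>

    struct plan {
        int vcpus_per_vm;
        int guest_ram_gb;
        int hosts_in_cluster;
        int processors_per_host;
        int cores_per_processor;
    };

    static bool within_limits(const struct plan *p)
    {
        return p->vcpus_per_vm        <= 8    /* processors per VM   */
            && p->guest_ram_gb        <= 255  /* guest RAM (GB)      */
            && p->hosts_in_cluster    <= 32   /* HA/DRS cluster size */
            && p->processors_per_host <= 64   /* processors per host */
            && p->cores_per_processor <= 12;  /* cores per processor */
    }

    int main(void)
    {
        struct plan p = { 8, 128, 16, 4, 6 };   /* an example configuration */
        printf("planned configuration %s the ESX 4 maximums\n",
               within_limits(&p) ? "fits within" : "exceeds");
        return 0;
    }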

Performance limitations

In terms of performance, virtualization imposes a cost in the additional work the CPU has to perform to virtualize the underlying hardware. Instructions that perform this extra work, and other activities that require virtualization, tend to lie in operating system calls. In an unmodified operating system, OS calls introduce the greatest portion of virtualization overhead.

Paravirtualization or other virtualization techniques may help with these issues. VMware and XenSource invented the Virtual Machine Interface for this purpose, and selected operating systems currently support it. A comparison between full virtualization and paravirtualization for ESX Server[19] shows that in some cases paravirtualization is much faster.

See also

References

  1. ^ "VMware ESX 4.0 Update 1a kb". VMware, Inc.
  2. ^ "Meet the Next Generation of Virtual Infrastructure Technology". VMware. Retrieved 2007-09-21.
  3. ^ "ESX Server Datasheet"
  4. ^ ""ESX Server Architecture"". Vmware.com. Retrieved 2009-07-01.
  5. ^ a b "ESX machine boots". Video.google.com.au. 2006-06-12. Retrieved 2009-07-01.
  6. ^ ""Support for 64-bit Computing"". Vmware.com. 2004-04-19. Retrieved 2009-07-01.
  7. ^ Gerstel, Markus: "Virtualisierungsansätze mit Schwerpunkt Xen"[dead link]
  8. ^ a b c ""ESX Server Open Source"". Vmware.com. Retrieved 2009-07-01.
  9. ^ ""ESX Hardware Compatibility List"". Vmware.com. 2008-12-10. Retrieved 2009-07-01.
  10. ^ "Benchmarking VMware ESX Server 2.5 vs Microsoft Virtual Server 2005 Enterprise Edition". Virtualization Benchmark Review. 2006-04-19. Retrieved 2009-06-11.
  11. ^ VMware FAQ[dead link]
  12. ^ a b c ESX Server Advanced Technical Design Guide[dead link]
  13. ^ "KB: Decoding Machine Check Exception (MCE) output after a purple screen error|publisher=VMware, Inc."
  14. ^ "P2V Assistant Documentation". Vmware.com. Retrieved 2009-07-01.
  15. ^ "VMware vSphere 4.0 Release Notes—ESXi Edition". VMware, Inc.
  16. ^ "Free VMware ESXi: Bare Metal Hypervisor with Live Migration". Vmware.com. Retrieved 2009-07-01.
  17. ^ "Configuration Maximums" (PDF). VMware, Inc. 2010-02-01. Retrieved 2010-02-01.
  18. ^ "What's new in VMware vSphere 4: Performance Enhancements" (PDF). VMware, Inc.
  19. ^ "Performance of VMware VMI" (PDF). VMware, Inc. 2008-02-13. Retrieved 2009-01-22.

3rd Party Tools

  • PowerWF - Provides a visual representation of PowerCLI scripts, converting them into workflows, or converting workflows into PowerShell cmdlets and modules. PowerCLI is VMware's addition to Microsoft's PowerShell for automation of virtual environments.
  • Vizioncore - Provides virtual machine monitoring through vFoglight and virtual machine backup through vRanger. Vizioncore offers a whole range of VMware virtualization tools.
  • iQuate - Provides administrators of large-scale (1,000 to 10,000 guest) environments with a single data repository for tracking guest deployments on physical hosts. Rather than searching through multiple VirtualCenter or vCenter consoles, administrators can (for example) locate where a guest is running, and which physical hosts are currently configured to run that guest image, in a single query.