Operating system–level virtualization

From Wikipedia, the free encyclopedia

Operating system–level virtualization is a server virtualization method in which the kernel of an operating system allows multiple isolated user space instances, instead of just one. Such instances (often called containers, virtualization engines (VE), virtual private servers (VPS), or jails) may look and feel like real servers from the point of view of their owners and users.

On Unix-like operating systems, this technology can be seen as an advanced implementation of the standard chroot mechanism. In addition to isolation mechanisms, the kernel often provides resource management features to limit the impact of one container's activities on the other containers.
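
As a minimal illustration of the chroot primitive (not of a full container runtime, which layers namespaces and resource controls on top of it), the following sketch confines a forked child process to an empty directory. The function name is illustrative, and the chroot(2) call requires root privileges:

```python
import os
import tempfile

def chroot_demo():
    """Fork a child, confine it with chroot(2) to an empty directory,
    and report what the child sees as its filesystem root."""
    if os.geteuid() != 0:
        return "skipped: chroot(2) requires root"
    jail = tempfile.mkdtemp()          # empty directory to serve as the child's "/"
    pid = os.fork()
    if pid == 0:                       # child: enter the jail
        try:
            os.chroot(jail)
            os.chdir("/")
            # Inside the jail, "/" is the (empty) jail directory.
            os._exit(0 if os.listdir("/") == [] else 1)
        except OSError:
            os._exit(2)
    _, status = os.waitpid(pid, 0)
    if os.WEXITSTATUS(status) == 0:
        return "child saw an empty filesystem root"
    return "chroot failed or was denied"

print(chroot_demo())
```

Note that the parent process is unaffected: the changed root applies only to the child and its descendants, which is the per-instance view of the filesystem that container mechanisms generalize.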

Uses

Virtual hosting environments commonly use operating system–level virtualization, where it is useful for securely allocating finite hardware resources among a large number of mutually distrusting users. System administrators may also use it, to a lesser extent, to consolidate server hardware by moving services running on separate hosts into containers on a single server.

Other typical scenarios include separating several applications into separate containers for improved security, hardware independence, and added resource management features. However, the security improvement a chroot mechanism alone provides is far from ironclad.[1]
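
Container implementations enforce such resource caps with kernel facilities (cgroups, on Linux). The simpler per-process rlimit interface illustrates the same principle of a kernel-enforced limit; a minimal sketch, in which the function name and byte sizes are illustrative:

```python
import os
import resource

def allocation_refused(limit_bytes, alloc_bytes):
    """Fork a child, cap its address space with RLIMIT_AS, attempt an
    allocation, and report whether the kernel refused it."""
    pid = os.fork()
    if pid == 0:
        # Lower the child's address-space limit; the parent is unaffected.
        resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, limit_bytes))
        try:
            bytearray(alloc_bytes)     # raises MemoryError past the cap
            os._exit(0)
        except MemoryError:
            os._exit(1)
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status) == 1

GiB = 1024 ** 3
# A 2 GiB request under a 1 GiB cap; expect True on Linux,
# where RLIMIT_AS is enforced.
print(allocation_refused(1 * GiB, 2 * GiB))
```

The design point is the same as for containers: the limit is applied by the kernel at allocation time, so the constrained process needs no cooperation or instrumentation.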

OS-level virtualization implementations that are capable of live migration can be used for dynamic load balancing of containers between nodes in a cluster.
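
Real implementations checkpoint full kernel-side process state (OpenVZ's checkpointing, for example). The following toy sketch only mimics the checkpoint/restore contract at the application level; the class and field names are hypothetical:

```python
import pickle

class ToyContainer:
    """Application-level stand-in for checkpoint/restore: real systems
    serialize full process state in the kernel, not a Python dict."""
    def __init__(self, state=None):
        self.state = state if state is not None else {"requests_served": 0}

    def serve(self):
        self.state["requests_served"] += 1

    def checkpoint(self):
        return pickle.dumps(self.state)     # freeze state into a portable blob

    @classmethod
    def restore(cls, blob):
        return cls(pickle.loads(blob))      # resume from the blob elsewhere

# "Migrate" a container from node A to node B mid-workload.
on_node_a = ToyContainer()
for _ in range(3):
    on_node_a.serve()
blob = on_node_a.checkpoint()               # blob would be shipped to node B
on_node_b = ToyContainer.restore(blob)
on_node_b.serve()                           # work continues where it left off
print(on_node_b.state["requests_served"])   # → 4
```

A load balancer using live migration applies exactly this pattern, except that the "blob" captures memory, open files, and network connections, so clients ideally never notice the move.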

Overhead

This form of virtualization usually imposes little or no overhead, because programs in virtual partitions use the operating system's normal system call interface and are neither emulated nor run in an intermediate virtual machine, as is the case with whole-system virtualizers (such as VMware ESXi and QEMU) or paravirtualizers (such as Xen and UML). It also does not require hardware assistance to perform efficiently.

Flexibility

Operating system–level virtualization is not as flexible as other virtualization approaches, since it cannot host a guest operating system different from the host's, nor a different guest kernel. On Linux, for example, containers based on different distributions are fine, but other operating systems such as Windows cannot be hosted.

Solaris partially overcomes this limitation with its branded zones feature, which allows a container on a Solaris 10 host to run an environment emulating an older Solaris 8 or 9 release. Linux branded zones (referred to as "lx" branded zones) are also available on x86-based Solaris systems, providing a complete Linux userspace and support for executing Linux applications; Solaris additionally provides the utilities needed to install the Red Hat Enterprise Linux 3.x or CentOS 3.x distributions inside "lx" zones.[2][3] Linux branded zones were removed from Solaris in 2010, but were reintroduced in 2014 in illumos, the open-source Solaris fork, with support for 32-bit Linux kernels.[4]

Storage

Some operating-system virtualizers provide file-level copy-on-write mechanisms. (Most commonly, a standard file system is shared between partitions, and those partitions that change the files automatically create their own copies.) This is easier to back up, more space-efficient and simpler to cache than the block-level copy-on-write schemes common on whole-system virtualizers. Whole-system virtualizers, however, can work with non-native file systems and create and roll back snapshots of the entire system state.
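
The file-level scheme amounts to a copy-up rule: reads fall through to a shared lower directory until the first write, which gives the partition a private copy. A simplified sketch under that assumption (the class and its methods are illustrative, not any real filesystem's API):

```python
import os
import shutil
import tempfile

class CowLayer:
    """File-level copy-on-write: reads fall through to a shared lower
    directory; the first write copies the file up into a private upper
    directory, after which this partition sees only its own copy."""
    def __init__(self, lower, upper):
        self.lower, self.upper = lower, upper
        os.makedirs(upper, exist_ok=True)

    def _resolve(self, name):
        up = os.path.join(self.upper, name)
        # Prefer the private copy if one exists, else the shared file.
        return up if os.path.exists(up) else os.path.join(self.lower, name)

    def read(self, name):
        with open(self._resolve(name)) as f:
            return f.read()

    def write(self, name, data):
        up = os.path.join(self.upper, name)
        low = os.path.join(self.lower, name)
        if not os.path.exists(up) and os.path.exists(low):
            shutil.copy(low, up)            # copy-up before first modification
        with open(up, "w") as f:
            f.write(data)

# Two partitions share one lower layer; a write in one is invisible to the other.
lower = tempfile.mkdtemp()
with open(os.path.join(lower, "etc.conf"), "w") as f:
    f.write("shared default")
a = CowLayer(lower, tempfile.mkdtemp())
b = CowLayer(lower, tempfile.mkdtemp())
a.write("etc.conf", "tuned for a")
print(a.read("etc.conf"))   # → tuned for a
print(b.read("etc.conf"))   # → shared default
```

This is why such schemes are space-efficient and easy to back up: the upper directory of each partition holds only the files that partition actually changed.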

Implementations

| Mechanism | Operating system | License | Available since/between | File system isolation | Copy on write | Disk quotas | I/O rate limiting | Memory limits | CPU quotas | Network isolation | Checkpointing and live migration | Root privilege isolation |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| chroot | most UNIX-like operating systems | varies by operating system | 1982 | Partial[5] | No | No | No | No | No | No | No | No |
| Docker | Linux[6] | Apache License 2.0 | 2013 | Yes | Yes | Not directly | Not directly | Yes | Yes | Yes | No | No |
| Linux-VServer (security context) | Linux | GNU GPLv2 | 2001 | Yes | Yes | Yes | Yes[7] | Yes | Yes | Partial[8] | No | Partial[9] |
| lmctfy | Linux | Apache License 2.0 | 2013 | Yes | Yes | Yes | Yes[7] | Yes | Yes | Partial[8] | No | Partial[9] |
| LXC | Linux | GNU GPLv2 | 2008 | Yes[10] | Partial (yes with Btrfs) | Partial (yes with LVM or disk quotas) | Yes | Yes | Yes | Yes | No | Yes[10] |
| OpenVZ | Linux | GNU GPLv2 | 2005 | Yes | No | Yes | Yes[11] | Yes | Yes | Yes[12] | Yes | Yes[13] |
| Parallels Virtuozzo Containers | Linux, Windows | Proprietary | 2001 | Yes | Yes | Yes | Yes[14] | Yes | Yes | Yes[12] | Yes | Yes |
| Solaris Containers | Solaris, OpenSolaris | CDDL | 2005 | Yes | Partial (yes with ZFS) | Yes | Partial (yes with illumos)[15] | Yes | Yes | Yes[16] | No[17] | Yes[18] |
| FreeBSD Jail | FreeBSD | BSD License | 1998 | Yes | Yes (ZFS) | Yes[19] | No | Yes[20] | Yes | Yes | No | Yes[21] |
| sysjail | OpenBSD, NetBSD | BSD License | no longer supported, as of March 3, 2009 | Yes | No | No | No | No | No | Yes | No | ? |
| WPARs | AIX | Proprietary | 2007 | Yes | No | Yes | Yes | Yes | Yes | Yes[22] | Yes[23] | ? |
| HP-UX Containers (SRP) | HP-UX | Proprietary | 2007 | Yes | No | Partial (yes with logical volumes) | Yes | Yes | Yes | Yes | Yes | ? |
| iCore Virtual Accounts | Windows XP | Proprietary/Freeware | 2008 | Yes | No | Yes | No | No | No | No | No | ? |
| Sandboxie | Windows | Proprietary/Shareware | 2004 | Yes | Yes | Partial | No | No | No | Partial | No | Yes |


References

  1. ^ "How to break out of a chroot() jail". 2002. Retrieved 7 May 2013. 
  2. ^ "Chapter 16: Introduction to Solaris Zones". "System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones". Oracle Corporation. 2010. Retrieved 2014-09-02. 
  3. ^ "Chapter 31: About Branded Zones and the Linux Branded Zone". "System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones". Oracle Corporation. 2010. Retrieved 2014-09-02. 
  4. ^ Bryan Cantrill (2014-09-28). "The dream is alive! Running Linux containers on an illumos kernel". slideshare.net. Retrieved 2014-10-10. 
  5. ^ The root user can easily escape from a chroot; chroot was never intended to be a security mechanism. [1]
  6. ^ "Docker drops LXC as default execution environment". InfoQ. 
  7. ^ a b With the CFQ scheduler, each guest gets a separate queue.
  8. ^ a b Networking is based on isolation, not virtualization.
  9. ^ a b 14 user capabilities are considered safe within a container. The rest cannot be granted to processes within that container without allowing those processes to potentially interfere with things outside that container. Linux-VServer Paper, Secure Capabilities.
  10. ^ a b Graber, Stéphane (1 January 2014). "LXC 1.0: Security features [6/10]". Retrieved 12 February 2014. "LXC now has support for user namespaces. [...] LXC is no longer running as root so even if an attacker manages to escape the container, he’d find himself having the privileges of a regular user on the host" 
  11. ^ Available since kernel 2.6.18-028stable021. The implementation is based on the CFQ disk I/O scheduler, but uses a two-level scheme, so I/O priority is not per-process but per-container. See OpenVZ wiki: I/O priorities for VE for details.
  12. ^ a b Each container can have its own IP addresses, firewall rules, routing tables and so on. Three different networking schemes are possible: route-based, bridge-based, and assigning a real network device (NIC) to a container.
  13. ^ Each container may have root access without being able to affect other containers. [2].
  14. ^ Available since version 4.0, January 2008.
  15. ^ Pijewski, Bill. "Our ZFS I/O Throttle". 
  16. ^ See OpenSolaris Network Virtualization and Resource Control and Network Virtualization and Resource Control (Crossbow) FAQ for details.
  17. ^ Cold migration (shutdown-move-restart) is implemented.
  18. ^ Non-global zones are restricted so they may not affect other zones via a capability-limiting approach. The global zone may administer the non-global zones. (Oracle Solaris 11.1 Administration, Oracle Solaris Zones, Oracle Solaris 10 Zones and Resource Management E29024.pdf, pages 356--360. Available within archive)
  19. ^ Check the "allow.quotas" option and the "Jails and File Systems" section on the FreeBSD jail man page for details.
  20. ^ "Hierarchical_Resource_Limits - FreeBSD Wiki". Wiki.freebsd.org. 2012-10-27. Retrieved 2014-01-15. 
  21. ^ "3.5. Limiting your program's environment". Freebsd.org. Retrieved 2014-01-15. 
  22. ^ Available since TL 02. See Fix pack information for: WPAR Network Isolation for details.
  23. ^ See Live Application Mobility in AIX 6.1
