OpenNebula

From Wikipedia, the free encyclopedia
OpenNebula
Developer(s): OpenNebula Community
Initial release: March 1, 2008
Stable release: 5.4.0[1] / 20 July 2017
Written in: C++, C, Ruby, Java, Shell script, lex, yacc
Operating system: Linux
Platform: Hypervisors (Xen, KVM, VMware, vCenter)
Available in: English, Russian, Spanish
Type: Cloud computing
License: Apache License version 2
Website: www.opennebula.org

OpenNebula is a cloud computing platform for managing heterogeneous distributed data center infrastructures. The OpenNebula platform manages a data center's virtual infrastructure to build private, public and hybrid implementations of infrastructure as a service. The two primary uses of the OpenNebula platform are data center virtualization solutions and cloud infrastructure solutions. The platform is also capable of offering the cloud infrastructure necessary to operate a cloud on top of existing infrastructure management solutions. OpenNebula is free and open-source software, subject to the requirements of the Apache License version 2.


History

The OpenNebula Project was started as a research venture in 2005 by Ignacio M. Llorente and Ruben S. Montero, and the first public release of the software followed in 2008. The goal of the research was to create efficient solutions for managing virtual machines on distributed infrastructures, with the ability to scale to high levels. Open-source development and an active community of developers have since helped mature the project. As adoption grew, the primary authors of the project founded C12G Labs in March 2010, now known as OpenNebula Systems, which provides value-added professional services to enterprises adopting or utilizing OpenNebula.


Description

OpenNebula orchestrates storage, network, virtualization, monitoring, and security[2] technologies to deploy multi-tier services (e.g. compute clusters[3][4]) as virtual machines on distributed infrastructures, combining both data center resources and remote cloud resources, according to allocation policies. According to the European Commission's 2010 report "... only few cloud dedicated research projects in the widest sense have been initiated – most prominent amongst them probably OpenNebula ...".[5]

The toolkit includes features for integration, management, scalability, security and accounting. It also claims standardization, interoperability and portability, providing cloud users and administrators with a choice of several cloud interfaces (Amazon EC2 Query, OGF Open Cloud Computing Interface and vCloud) and hypervisors (Xen, KVM and VMware), and can accommodate multiple hardware and software combinations in a data center.[6]

OpenNebula was a mentoring organization in Google Summer of Code 2010.[7]

OpenNebula is sponsored by OpenNebula Systems (formerly C12G).

OpenNebula is widely used by a variety of industries, including internet providers, telecommunications, information technology services, supercomputing, research laboratories, and international research projects. The OpenNebula Project is also used by some other cloud solutions as a cloud engine.[8] Since going public, OpenNebula has attracted many notable users across these industries.

Notable users from the telecommunications and internet industry include Akamai, Blackberry, Fuze, Telefónica, and INdigital.
Users in the information technology industry include CA Technologies, Hewlett Packard Enterprise, Hitachi, Informatica, CentOS, Netways, Ippon Technologies, Terradue 2.0, Unisys, MAV Technologies, Liberologico, Etnetera, EDS Systems, Inovex, Bosstek, Datera, Saldab, Hash Include, Blackpoint, Deloitte, Sharx dc, Server Storage Solutions, and NTS.
Government solutions utilizing the OpenNebula Project include the National Central Library of Florence, bDigital, Deutsch E-Post, RedIRIS, GRNET, Instituto Geografico Nacional, CSIC, Gobex, ASAC Communications, KNAW, Junta De Andalucia, Flanders Environmental Agency, red.es, CENATIC, Milieuinfo, SIGMA, and Computaex.
Notable users in the financial sector include TransUnion, Produpan, Axcess Financial, Farm Credit Services of America, and Nasdaq Dubai.
Media and gaming users include BBC, Unity, R.U.R., Crytek, iSpot.tv, and Nordeus.

Hosting providers include ON VPS, NBSP, Orion VM, CITEC, LibreIT, Quobis, Virtion, OnGrid, Altus, DMEx, LMD, HostColor, Handy Networks, BIT, Good Hosting, Avalon, noosvps, Opulent Cloud, PTisp, Ungleich.ch, TAS France, TeleData, CipherSpace, Nuxit, Cyon, Tentacle Networks, Virtiso BV, METANET, e-tugra, lunacloud, todoencloud, Echelon, Knight Point Systems, 2 Twelve Solutions, and flexyz.
SaaS and enterprise users include Scytl, LeadMesh, OptimalPath, RJMetrics, Carismatel, Sigma, GLOBALRAP, Runtastic, MOZ, Rentalia, Vibes, Yuterra, Best Buy, Roke, Intuit, Securitas Direct, trivago, and Booking.com.
Science and academia implementations include FAS Research Computing at Harvard University, FermiLab, NIKHEF, LAL CNRS, DESY, INFN, IPB Halle, CSIRO, fccn, AIST, KISTI, KIT, ASTI, Fatec Lins, MIMOS, SZTAKI, Ciemat, SurfSARA, ESA, NASA, ScanEX, NCHC, CESGA, CRS4, PDC, CSUC, Tokyo Institute of Technology, CSC, HPCI, Cerit-SC, LRZ, PIC, Telecom SUD Paris, Universidade Federal de Ceara, Instituto Superiore Mario Barella, Academia Sinica, UNACHI, UCM, Universite Catholique de Louvain, Universite de Strasbourg, ECMWF, EWE Tel, INAFTNG, TeideHPC, Cujae, and Kent State University.
Cloud products using OpenNebula include ClassCat, HexaGrid, NodeWeaver, Impetus, and ZeroNines.
The OpenNebula Project is also used internationally for research purposes. International research teams use the platform to study potential issues in the use and deployment of large-scale enterprise cloud and data center management projects. In 2010, the European Commission noted that very few large-scale research projects focused on cloud computing had been started, citing OpenNebula as the most prominent example.[5]


Development

The OpenNebula project follows a rapid release cycle, with the aim of giving users early access to new features and innovations. Major upgrades generally occur once a year, and each is typically followed by three to four maintenance updates. The OpenNebula project is fully open source and is made possible by the active community of developers behind it.

Release History

Version 4.4, released in 2014, brought a number of Open Cloud innovations, improved cloud bursting, and introduced the use of multiple system datastores for storage load policies.
Version 4.6 allowed users to run different OpenNebula instances in geographically dispersed data centers as a Federation. A new cloud portal for cloud consumers was also introduced, and the AppMarket gained support for importing OVAs.
Version 4.8 began offering support for Microsoft Azure and IBM SoftLayer. It also continued evolving the platform by incorporating support for OneFlow in the cloud view, which meant end users could now define virtual machine applications and services elastically.
Version 4.10 integrated the support portal with the Sunstone GUI. Login tokens were also introduced, and support was provided for VMs and vCenter.
Version 4.12 added security groups and improved vCenter integration. A showback model was also introduced to track and report cloud usage by department.
Version 4.14 introduced a newly redesigned and modularized graphical interface codebase, Sunstone. This was intended to improve code readability and ease the task of adding new components.

Milestones

2005 – Ignacio M. Llorente and Ruben S. Montero establish OpenNebula as a research project in Spain.
2008 – The OpenNebula open-source community is created and OpenNebula is released to the public.
March 2010 – C12G Labs is founded to provide services to enterprises utilizing the OpenNebula platform.
Summer 2010 – Google Summer of Code 2010 features OpenNebula as a mentoring organization.[7]
September 2013 – OpenNebula holds its first community conference.
2013–2014 – Large-scale production deployments are carried out, and SoftLayer and Microsoft Azure become hybrid cloud partners of OpenNebula.


Features[2]

The OpenNebula project focuses on providing a full-featured cloud computing platform in a simple, easy-to-use way. The following features are available in the platform.

Interfaces for cloud consumers and administrators

  • A number of APIs are available for the platform, including AWS EC2, EBS, and OGF OCCI.
  • A powerful yet familiar UNIX-based command-line interface is available to administrators.
  • The Sunstone Portal, a graphical user interface for cloud consumers and data center administrators, provides further ease of use.
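As a sketch of the command-line interface, a few typical administrator commands are shown below. This assumes an installed OpenNebula front-end; the template ID, VM ID, and name are placeholders for illustration.

```shell
# List hypervisor hosts and their state
onehost list

# List virtual machines known to the front-end
onevm list

# Instantiate a VM from template 0 (ID and name are placeholders)
onetemplate instantiate 0 --name "web-01"

# Power off VM 42
onevm poweroff 42
```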

Appliance marketplace

  • The OpenNebula Marketplace offers a wide variety of applications capable of running in OpenNebula environments.
  • A private catalogue of applications is deployable across OpenNebula instances.
  • The marketplace is fully integrated with the SunStone GUI.

Capacity and Performance Management

  • Resource allocation is possible via fine-grained ACLs.
  • Resource quota management enables users to track and limit computing, storage, and networking resource utilization.
  • Load balancing, high availability, and high-performance computing are possible via the dynamic creation of clusters that share datastores and virtual networks.
  • The dynamic creation of virtual data centers allows a group of users, under the control of a group administrator, to create and manage computing, storage, and networking capacity.
  • A powerful scheduling component manages tasks based on resource availability.

Security

  • Fine-grained ACLs, user quotas, and powerful user, group, and role management provide solid security.
  • The platform integrates with user-management services such as LDAP and Active Directory; built-in username/password, SSH, and X.509 authentication are also supported.
  • Login token functionality, fine-grained auditing, and isolation at various levels further increase security.

Integration with third-party tools

  • The platform features a modular and extensible architecture allowing third-party tools to be easily integrated.
  • Custom plug-ins are available for the integration of any third-party data center service.
  • A number of APIs allow for the integration of tools such as billing and self-service portals.
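As an illustration of this kind of API-level integration, the sketch below talks to OpenNebula's XML-RPC interface (the native API underlying the CLI and Sunstone) using only Python's standard library. The host, port, and credentials are placeholders; the actual pool query is shown commented out because it requires a live front-end.

```python
import xmlrpc.client

# OpenNebula's XML-RPC endpoint; port 2633 is the default for the oned daemon.
# Host and credentials below are placeholders for illustration.
ONE_ENDPOINT = "http://localhost:2633/RPC2"
ONE_AUTH = "oneadmin:password"  # session string in "username:password" form

# ServerProxy only builds the client; no connection is made until a call.
client = xmlrpc.client.ServerProxy(ONE_ENDPOINT)

# Against a live front-end, a call like the following would return a success
# flag, a body (XML describing the VM pool), and an error code:
# success, body, errcode = client.one.vmpool.info(ONE_AUTH, -2, -1, -1, -1)
```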


Internal architecture

Basic components

OpenNebula Internal Architecture

Host: Physical machine running a supported hypervisor.
Cluster: Pool of hosts that share datastores and virtual networks.
Template: Virtual Machine definition.
Image: Virtual Machine disk image.
Virtual Machine: Instantiated Template. A Virtual Machine represents one life-cycle, and several Virtual Machines can be created from a single Template.
Virtual Network: A group of IP leases that VMs can use to automatically obtain IP addresses. Virtual networks are created by mapping onto the physical ones and are made available to the VMs through the corresponding bridges on the hosts. A virtual network is defined in three parts:

  1. The underlying physical network infrastructure.
  2. The logical address space available (IPv4, IPv6, or dual stack).
  3. Context attributes (e.g. netmask, DNS, gateway). OpenNebula also comes with a Virtual Router appliance to provide networking services such as DHCP and DNS.
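These three parts can be seen together in an OpenNebula virtual network template. The sketch below is illustrative only; the bridge name, address range, and context values are placeholders.

```
NAME   = "private"
VN_MAD = "bridge"      # part 1: underlying physical infrastructure (a Linux bridge)
BRIDGE = "br0"

AR = [ TYPE = "IP4", IP = "192.168.100.2", SIZE = 100 ]  # part 2: logical address space

NETWORK_MASK = "255.255.255.0"   # part 3: context attributes passed to the VMs
GATEWAY      = "192.168.100.1"
DNS          = "192.168.100.1"
```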


Components and Deployment Model

OpenNebula Deployment Model

The OpenNebula Project's deployment model resembles a classic cluster architecture, which utilizes:

  • A front-end (master node)
  • Hypervisor enabled hosts (worker nodes)
  • Datastores
  • A physical network

Front-end machine

The master node, sometimes referred to as the front-end machine, executes all the OpenNebula services. This is the actual machine where OpenNebula is installed. OpenNebula services on the front-end machine include the management daemon (oned), scheduler (sched), the web interface server (Sunstone server), and other advanced components. These services are responsible for queuing, scheduling, and submitting jobs to other machines in the cluster. The master node also provides the mechanisms to manage the entire system. This includes adding virtual machines, monitoring the status of virtual machines, hosting the repository, and transferring virtual machines when necessary. Much of this is possible due to a monitoring subsystem which gathers information such as host status, performance, and capacity use. The system is highly scalable and is only limited by the performance of the actual server.

Hypervisor-enabled hosts

The worker nodes, or hypervisor-enabled hosts, provide the actual computing resources needed for processing all jobs submitted by the master node. These hosts run a virtualization hypervisor such as VMware, Xen, or KVM; the KVM hypervisor is natively supported and used by default. Virtualization hosts are the physical machines that run the virtual machines, and various platforms can be used with OpenNebula. A virtualization subsystem interacts with these hosts to take the actions required by the master node.

Storage

OpenNebula Storage

The datastores hold the base images of the virtual machines. They must be accessible to the front-end; this can be accomplished by using one of a variety of available technologies such as NAS, SAN, or direct attached storage.

OpenNebula includes three datastore classes: system datastores, image datastores, and file datastores. System datastores hold the images used for running the virtual machines; depending on the storage technology used, these can be complete copies of the original image, deltas, or symbolic links. Image datastores store the disk image repository; images are moved to or from the system datastore when virtual machines are deployed or manipulated. The file datastore is used for regular files, often kernels, ramdisks, or context files.
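As a sketch, an image datastore backed by a shared filesystem could be described with a template like the following; the name and driver choices are illustrative, not prescriptive.

```
NAME   = "production-images"
TYPE   = "IMAGE_DS"   # one of SYSTEM_DS, IMAGE_DS, FILE_DS
DS_MAD = "fs"         # datastore driver: plain filesystem
TM_MAD = "shared"     # transfer driver: datastore exported to hosts as a shared FS
```

A template of this kind would be registered with the `onedatastore create` command on the front-end.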

Physical networks

Physical networks are required to support the interconnection of storage servers and virtual machines in remote locations, and the front-end machine must be able to connect to all the worker nodes. At least two physical networks are required: a service network and an instance network. The front-end machine uses the service network to access hosts, manage and monitor hypervisors, and move image files. The instance network allows the virtual machines to connect across different hosts. The network subsystem of OpenNebula is easily customizable to allow easy adaptation to existing data centers.


See also

References

  1. ^ Releases
  2. ^ a b "OpenNebula Key Features and Functionality". OpenNebula documentation. Retrieved 13 October 2011.
  3. ^ R. Moreno-Vozmediano, R. S. Montero, and I. M. Llorente. "Multi-Cloud Deployment of Computing Clusters for Loosely-Coupled MTC Applications", Transactions on Parallel and Distributed Systems. Special Issue on Many Task Computing (in press, doi:10.1109/TPDS.2010.186)
  4. ^ R. S. Montero, R. Moreno-Vozmediano, and I. M. Llorente. "An Elasticity Model for High Throughput Computing Clusters", J. Parallel and Distributed Computing (in press, DOI: 10.1016/j.jpdc.2010.05.005)
  5. ^ a b "The Future of Cloud Computing" (PDF). European Commission Expert Group Report. 25 January 2010. Retrieved 12 December 2017.
  6. ^ B. Sotomayor, R. S. Montero, I. M. Llorente, I. Foster. "Virtual Infrastructure Management in Private and Hybrid Clouds", IEEE Internet Computing, vol. 13, no. 5, pp. 14-22, September/October 2009. DOI: 10.1109/MIC.2009.119
  7. ^ a b "OpenNebula @ GSoC 2010". Google Summer of Code 2010. Retrieved 27 December 2010.
  8. ^ "Featured Users". OpenNebula website. Retrieved 20 December 2017.

External links