Cloud load balancing

From Wikipedia, the free encyclopedia

Cloud load balancing is a type of load balancing performed in cloud computing.[1] It is the process of distributing workloads across multiple computing resources, which reduces costs associated with document management systems and maximizes the availability of resources. It should not be confused with Domain Name System (DNS) load balancing: while DNS load balancing uses dedicated software or hardware to perform the function,[2] cloud load balancing uses services offered by various computer network companies.[3]

Comparison With DNS Load Balancing[edit]

Cloud load balancing has an advantage over DNS load balancing in that it can transfer loads to servers globally, as opposed to distributing them across only local servers.[3] In the event of a local server outage, cloud load balancing redirects users to the closest regional server without interruption.[4]

Cloud load balancing addresses issues relating to TTL reliance present in DNS load balancing.[5] DNS directives can be enforced only once per TTL cycle, so switching between servers during a lag or server failure can take several hours. Incoming traffic continues to route to the original server until the TTL expires, which can create uneven performance, as different Internet service providers may reach the new server before others.[5] Another advantage is that cloud load balancing improves response time by routing remote sessions to the best-performing data centers.[1][6]

Importance of Load Balancing[edit]

Cloud computing brings advantages in "cost, flexibility and availability of service users."[7] These advantages drive the demand for cloud services. That demand raises technical issues in service-oriented architectures and Internet of Services (IoS)-style applications, such as high availability and scalability. As a major concern in these issues, load balancing allows cloud computing to "scale up to increasing demands"[7] by efficiently allocating dynamic local workload evenly across all nodes.[8]

Load Balancing Techniques[edit]

Scheduling Algorithms[edit]

Opportunistic Load Balancing (OLB) is an algorithm that assigns workloads to nodes in free order. It is simple but does not consider the expected execution time on each node.[9] Load Balance Min-Min (LBMM) assigns sub-tasks to the node that requires the minimum execution time.[9] In pseudo-code:

    Minmin()
    {
        generate a completionTime matrix
        for each task in taskList
        {
            find minimum completionTime from matrix;
            assign task to respective vm;
            update the completionTime;
        }
    }
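The pseudo-code above can be sketched as a runnable function. This is a minimal illustration, not a reference implementation: the completion-time matrix is assumed to be a dictionary mapping each task to its projected execution time on each VM, and the function name and data shapes are illustrative.

```python
def min_min(completion_time):
    """Greedy Min-Min scheduling sketch.

    completion_time: dict {task: {vm: execution time of task on vm}}.
    Repeatedly picks the (task, vm) pair with the smallest projected
    completion time (current VM load + task's time) among unassigned
    tasks, then updates that VM's load. Returns {task: vm}.
    """
    vm_load = {vm: 0.0 for times in completion_time.values() for vm in times}
    assignment = {}
    unassigned = set(completion_time)
    while unassigned:
        # find minimum completionTime from the matrix
        task, vm = min(
            ((t, v) for t in unassigned for v in completion_time[t]),
            key=lambda tv: vm_load[tv[1]] + completion_time[tv[0]][tv[1]],
        )
        assignment[task] = vm              # assign task to respective vm
        vm_load[vm] += completion_time[task][vm]  # update the completionTime
        unassigned.remove(task)
    return assignment
```

Because VM loads are updated after every assignment, later tasks account for work already placed on each VM, which is what distinguishes this scheme from assigning every task to its individually fastest node.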

Load Balancing Policies[edit]

Workload and Client Aware Policy (WCAP) is implemented in a decentralized manner with low overhead.[10] It specifies the unique and special property (USP) of requests and computing nodes. Using the USP information, the scheduler can decide the most suitable node to complete a request. WCAP makes the most of computing nodes by reducing their idle time, and it reduces performance time through searches based on content information.
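The USP-matching idea can be illustrated with a short sketch. The node representation, field names, and the fallback rule are assumptions made for illustration; the cited paper does not prescribe this exact data model.

```python
def wcap_schedule(request_usp, nodes):
    """Illustrative WCAP-style scheduling step.

    nodes: list of dicts like {"name": ..., "usp": ..., "idle": bool}.
    Prefers an idle node whose USP matches the request's USP; falls
    back to any idle node; returns None if no node is idle.
    """
    idle = [n for n in nodes if n["idle"]]
    matching = [n for n in idle if n["usp"] == request_usp]
    return (matching or idle or [None])[0]
```

Routing to USP-matching nodes is what lets WCAP reduce idle time: requests land on nodes already suited to their content, rather than waiting for a general-purpose node to free up.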

A Comparative Study of Algorithms[edit]

The Honeyhive algorithm is inspired by the "behavior of a colony of honeybees foraging and harvesting food."[7] Forager bees search for food, return to the hive and describe the food they found through a "waggle dance," which conveys the quantity, quality and distance of the food. In the Honeyhive algorithm, every server first plays a forager bee role and satisfies requests from virtual servers. Once the service is done, each server evaluates the profitability of the virtual server it just serviced. It then updates the advert board, which serves as the "waggle dance," recording the profitability of virtual servers. If the calculated profitability is high, the server continues to serve the current virtual server; otherwise, it returns to waiting.
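The decision rule above can be sketched as a single step of the algorithm. The data structures, the scalar profitability score, and the fixed threshold are illustrative assumptions; the cited paper defines profitability in more detail.

```python
def honeyhive_step(server, advert_board, threshold=1.0):
    """One post-service step of the honeybee-inspired rule.

    server: {"vs": virtual server just serviced, "profit": measured score}.
    advert_board: shared dict playing the role of the "waggle dance".
    Posts the profitability, then decides the server's next role.
    """
    advert_board[server["vs"]] = server["profit"]  # advertise the find
    if server["profit"] >= threshold:
        return "continue"  # keep serving the profitable virtual server
    return "wait"          # rejoin the idle pool
```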

Biased Random Sampling bases its job allocation on a network represented by a directed graph. For each execution node in this graph, in-degree represents available resources and out-degree represents allocated jobs. In-degree decreases during job execution, while out-degree increases after job allocation. In pseudo-code:

    BiasedRandomSampling()
    {
        for each task in task queue
        {
            init walklength = 0;
            while (task is not assigned to a vm) and (walklength <= threshold)
            {
                increment walklength;
                if indegree of vm > 0, assign task to vm and decrement indegree;
            }
            remove task from task queue;
        }
    }

    ProcessCompletedTask()
    {
        increment indegree of the vm assigned to the task
    }

Active Clustering is a self-aggregation algorithm that rewires the network.

The experimental result is that "Active Clustering and Random Sampling Walk predictably perform better as the number of processing nodes is increased,"[7] while the Honeyhive algorithm does not show this increasing pattern.

Client-side Load Balancer Using Cloud[edit]

A load balancer forwards packets to web servers according to the servers' workloads. However, it is hard to implement a scalable load balancer because of both the "cloud's commodity business model and the limited infrastructure control allowed by cloud providers."[11] The Client-side Load Balancer (CLB) solves this problem by using a scalable cloud storage service: the storage service delivers static content, while CLB lets clients choose the back-end web servers that serve dynamic content.
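The client-side selection step can be sketched as follows. The server names, the latency-based selection rule, and the `probe` function are assumptions for illustration; the cited paper describes a fuller architecture in which the server list itself is served from cloud storage.

```python
def choose_backend(servers, probe):
    """Client-side selection sketch.

    servers: list of back-end host names.
    probe: callable host -> measured latency in seconds.
    The client probes each back-end and sends its dynamic-content
    requests to the one that responds fastest.
    """
    return min(servers, key=probe)
```

Moving this choice to the client removes the load balancer as a bottleneck: no single component in the cloud has to see every request.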

References[edit]

  1. ^ a b Chee, Brian J.S. (2010). Cloud Computing: Technologies and Strategies of the Ubiquitous Data Center. CRC Press. ISBN 9781439806173. 
  2. ^ Xu, Cheng-Zhong (2005). Scalable and Secure Internet Services and Architecture. CRC Press. ISBN 9781420035209. 
  3. ^ a b "Research Report - In Demand – The Culture of Online Service Provision". Citrix. 14 October 2013. Retrieved 30 January 2014. 
  4. ^ Shatz, Gur (15 October 2013). "Bringing Layer 7 Load Balancing into the Cloud". Incapsula. Retrieved 30 January 2014. 
  5. ^ a b Furht, Borko (2010). Handbook of Cloud Computing. Springer. ISBN 9781441965240. 
  6. ^ Nolle, Tom. "Designing public cloud applications for a hybrid cloud future". Tech Target. Retrieved 30 January 2014. 
  7. ^ a b c d Randles, Martin, David Lamb, and A. Taleb-Bendiab. "A comparative study into distributed load balancing algorithms for cloud computing." Advanced Information Networking and Applications Workshops (WAINA), 2010 IEEE 24th International Conference on. IEEE, 2010.
  8. ^ Ferris, James Michael. "Methods and systems for load balancing in cloud-based networks." U.S. Patent Application 12/127,926.
  9. ^ a b Wang, S. C.; Yan, K. Q.; Liao, W. P.; Wang, S. S. (2010), "Towards a load balancing in a three-level cloud computing network", Proceedings of the 3rd International Conference on Computer Science and Information Technology (ICCSIT) (IEEE): 108–113, ISBN 978-1-4244-5537-9 
  10. ^ Kansal, Nidhi Jain, and Inderveer Chana. "Cloud load balancing techniques: A step towards green computing." IJCSI International Journal of Computer Science Issues 9.1 (2012): 1694-0814.
  11. ^ Wee, Sewook, and Huan Liu. "Client-side load balancer using cloud." Proceedings of the 2010 ACM Symposium on Applied Computing. ACM, 2010.