Data center network architectures
A data center is a pool of resources (computational, storage, network) interconnected using a communication network.[1] The Data Center Network (DCN) plays a pivotal role in a data center, as it interconnects all of the data center's resources. DCNs need to be scalable and efficient to connect tens or even hundreds of thousands of servers to handle the growing demands of cloud computing.[2][3] Today's data centers are constrained by the interconnection network.[4]
Types of data center networks
Three-tier DCN
The legacy three-tier DCN architecture follows a multi-rooted tree-based network topology composed of three layers of network switches, namely access, aggregate, and core layers.[5] The servers in the lowest layer are connected directly to one of the access layer switches. The aggregate layer switches interconnect multiple access layer switches. All of the aggregate layer switches are connected to each other by core layer switches, which are also responsible for connecting the data center to the Internet. The three-tier architecture is the most common network architecture used in data centers.[5] However, it is unable to handle the growing demands of cloud computing.[6] The higher layers of the three-tier DCN are highly oversubscribed,[2] and the major problems faced by the architecture include scalability, fault tolerance, energy efficiency, and cross-sectional bandwidth. The three-tier architecture uses enterprise-level network devices at the higher layers of the topology that are very expensive and power hungry.[4]
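To make the oversubscription figures concrete, the ratio at any switch layer is simply the aggregate downstream capacity divided by the aggregate upstream capacity. The following Python sketch uses hypothetical port counts and speeds for illustration; they are not taken from the cited design guide.

def oversubscription_ratio(downlink_gbps, uplink_gbps):
    """Aggregate downstream capacity divided by aggregate upstream
    capacity at a switch layer; 1.0 means full bisection bandwidth."""
    return downlink_gbps / uplink_gbps

# Hypothetical access switch: 48 x 1 Gbps server ports, 2 x 10 Gbps uplinks.
print(oversubscription_ratio(48 * 1, 2 * 10))  # 2.4, i.e. a 2.4:1 ratio

Because the ratio compounds across layers, the effective oversubscription near the core of a three-tier DCN can be far higher than at the access layer.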
Fat tree DCN
The fat tree DCN architecture addresses the oversubscription and cross-section bandwidth problems faced by the legacy three-tier DCN architecture. It employs commodity network switches arranged in a Clos topology.[2] The network elements in the fat tree topology also follow a hierarchical organization of network switches in access, aggregate, and core layers; however, the number of network switches is much larger than in the three-tier DCN. The architecture is composed of k pods, where each pod contains (k/2)² servers, k/2 access layer switches, and k/2 aggregate layer switches. The core layer contains (k/2)² core switches, where each core switch is connected to one aggregate layer switch in each of the pods. The fat tree topology offers a 1:1 oversubscription ratio and full bisection bandwidth.[2] The fat tree architecture uses a customized addressing scheme and routing algorithm. Scalability is one of the major issues in the fat tree DCN architecture: the maximum number of pods is equal to the number of ports in each switch.[6]
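The pod arithmetic above can be captured in a few lines of Python; the formulas are exactly those in the paragraph (k pods, (k/2)² servers and k/2 access and aggregate switches per pod, (k/2)² core switches), while the function name is our own.

def fat_tree_sizes(k):
    """Element counts for a fat tree built from k-port switches (k even)."""
    assert k % 2 == 0, "a fat tree requires an even switch port count"
    half = k // 2
    return {
        "pods": k,
        "servers_per_pod": half ** 2,
        "access_switches_per_pod": half,
        "aggregate_switches_per_pod": half,
        "core_switches": half ** 2,
        "total_servers": k * half ** 2,  # k^3 / 4
    }

print(fat_tree_sizes(48)["total_servers"])  # 27648 servers from 48-port switches

The total of k³/4 servers makes the scalability limit explicit: growing beyond it requires switches with more ports, not merely more switches.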
DCell
DCell is a server-centric hybrid DCN architecture in which one server is directly connected to many other servers.[3] A server in the DCell architecture is equipped with multiple Network Interface Cards (NICs). The DCell follows a recursively built hierarchy of cells. A cell0 is the basic unit and building block of the DCell topology, which is arranged in multiple levels, with a higher-level cell containing multiple lower-level cells. A cell0 contains n servers and one commodity network switch, which is used only to connect the servers within that cell0. A cell1 contains k = n + 1 cell0 cells, and similarly a cell2 contains k × n + 1 cell1 cells. DCell is a highly scalable architecture: a four-level DCell with only six servers per cell0 can accommodate around 3.26 million servers. Besides very high scalability, the DCell architecture exhibits very high structural robustness.[7] However, cross-section bandwidth and network latency are major issues in the DCell DCN architecture.[1]
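The recursive growth can be verified with a short calculation. Writing t_k for the number of servers in a level-k cell, the construction described above gives t_0 = n and t_k = t_(k-1) × (t_(k-1) + 1), since a cell_k contains t_(k-1) + 1 cell_(k-1) cells. A minimal sketch:

def dcell_servers(n, level):
    """Servers in a level-`level` DCell whose cell0 holds n servers:
    t_0 = n, t_k = t_(k-1) * (t_(k-1) + 1)."""
    t = n
    for _ in range(level):
        t = t * (t + 1)
    return t

print(dcell_servers(6, 1))  # 42 servers in a cell1
print(dcell_servers(6, 3))  # 3263442: a four-level DCell (cell0..cell3), ~3.26 million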
Others
Some of the other well-known DCN architectures include BCube,[8] CamCube,[9] FiConn,[10] Jellyfish,[11] and Scafida.[12]
Challenges
Scalability is one of the foremost challenges facing DCNs.[2] With the advent of the cloud paradigm, data centers are required to scale up to hundreds of thousands of nodes. Besides offering immense scalability, DCNs are also required to deliver high cross-section bandwidth. Current DCN architectures, such as the three-tier DCN, offer poor cross-section bandwidth and possess a very high oversubscription ratio near the root.[2] The fat tree DCN architecture delivers a 1:1 oversubscription ratio and high cross-section bandwidth, but its scalability is limited: the maximum number of pods equals k, the number of ports in a switch. DCell offers immense scalability, but it delivers very poor performance under heavy network load and one-to-many traffic patterns.
Performance analysis of DCNs
A quantitative analysis of the three-tier, fat tree, and DCell architectures, comparing performance (throughput and latency) under different network traffic patterns, has been performed.[1] The fat tree DCN delivers high throughput and low latency compared to the three-tier and DCell architectures. DCell suffers from very low throughput under high network load and one-to-many traffic patterns. One of the major reasons for DCell's low throughput is the very high oversubscription ratio on the links that interconnect the highest-level cells.[1]
Structural robustness and connectivity of DCNs
The DCell exhibits very high robustness against random and targeted attacks, retaining most of its nodes in the giant cluster even after 10% targeted failures.[7] It also maintains higher connectivity under multiple failures, whether targeted or random, as compared to the fat tree and three-tier DCNs.[13] One of the major reasons for the DCell's high robustness and connectivity is that each server is connected to multiple other nodes, a property not found in the fat tree or three-tier architectures.
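The giant-cluster metric used in such robustness studies can be illustrated with a small self-contained sketch. The adjacency-dict graph representation and the simple failure models below are our own simplifications, not the cited paper's exact methodology.

import random
from collections import deque

def giant_cluster_fraction(adj, fail_fraction, targeted=True, seed=0):
    """Fraction of surviving nodes that lie in the largest connected
    component after removing fail_fraction of nodes, either the
    highest-degree ones (targeted) or a uniform random sample."""
    nodes = list(adj)
    k = int(fail_fraction * len(nodes))
    if targeted:
        removed = set(sorted(nodes, key=lambda v: len(adj[v]), reverse=True)[:k])
    else:
        removed = set(random.Random(seed).sample(nodes, k))
    alive = [v for v in nodes if v not in removed]
    seen, largest = set(), 0
    for start in alive:
        if start in seen:
            continue
        seen.add(start)
        size, queue = 0, deque([start])
        while queue:  # BFS over the surviving subgraph
            v = queue.popleft()
            size += 1
            for w in adj[v]:
                if w not in removed and w not in seen:
                    seen.add(w)
                    queue.append(w)
        largest = max(largest, size)
    return largest / len(alive) if alive else 0.0

Running this on graphs built from the DCell, fat tree, and three-tier topologies (with fail_fraction=0.10 and targeted=True) reproduces the kind of comparison described above.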
Energy efficiency of DCNs
Concerns about the energy needs and environmental impacts of data centers are intensifying.[4] Energy efficiency is one of the major challenges of today's Information and communications technology (ICT) sector. The networking portion of a data center is estimated to consume around 15% of overall data center energy usage, and around 15.6 billion kWh of energy was consumed solely by the network infrastructure within data centers worldwide. The network infrastructure's share of data center energy consumption is expected to increase to around 50%.[4] The IEEE 802.3az standard, ratified in 2010, uses an adaptive link rate technique for energy efficiency.[14] Moreover, the fat tree and DCell architectures use commodity network equipment that is inherently energy efficient. Workload consolidation is also used to improve energy efficiency: the workload is consolidated onto a few devices so that idle devices can be powered off or put to sleep.[15]
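As a toy illustration of the consolidation idea (in the spirit of ElasticTree, but greatly simplified), the sketch below packs flow demands onto as few links as possible with a first-fit-decreasing policy; the function name and the single-link-capacity model are illustrative assumptions, not ElasticTree's actual optimizer.

def consolidate(flows_gbps, link_capacity_gbps):
    """First-fit-decreasing packing of flows onto links; links left
    carrying no traffic can be powered off or put to sleep."""
    links, loads = [], []
    for f in sorted(flows_gbps, reverse=True):
        for i in range(len(links)):
            if loads[i] + f <= link_capacity_gbps:
                links[i].append(f)
                loads[i] += f
                break
        else:
            links.append([f])
            loads.append(f)
    return links

# Eight 2 Gbps flows fit on two 10 Gbps links rather than eight lightly
# loaded ones, letting the equipment behind six links sleep.
print(len(consolidate([2] * 8, 10)))  # 2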
References
1. K. Bilal, S. U. Khan, L. Zhang, H. Li, K. Hayat, S. A. Madani, N. Min-Allah, L. Wang, D. Chen, M. Iqbal, C.-Z. Xu, and A. Y. Zomaya, "Quantitative Comparisons of the State of the Art Data Center Architectures," Concurrency and Computation: Practice and Experience, vol. 25, no. 12, pp. 1771-1783, 2013.
2. M. Al-Fares, A. Loukissas, and A. Vahdat, "A Scalable, Commodity Data Center Network Architecture," in ACM SIGCOMM 2008 Conference on Data Communication, Seattle, WA, 2008, pp. 63-74.
3. C. Guo, H. Wu, K. Tan, L. Shi, Y. Zhang, and S. Lu, "DCell: A Scalable and Fault Tolerant Network Structure for Data Centers," ACM SIGCOMM Computer Communication Review, vol. 38, no. 4, pp. 75-86, 2008.
4. K. Bilal, S. U. Khan, and A. Y. Zomaya, "Green Data Center Networks: Challenges and Opportunities," in 11th IEEE International Conference on Frontiers of Information Technology (FIT), Islamabad, Pakistan, December 2013, pp. 229-234.
5. Cisco, Cisco Data Center Infrastructure 2.5 Design Guide, Cisco Press, 2010.
6. Bilal et al., "A Taxonomy and Survey on Green Data Center Networks," Future Generation Computer Systems. http://sameekhan.org/pub/B_K_2013_FGCS.pdf
7. K. Bilal, M. Manzano, S. U. Khan, E. Calle, K. Li, and A. Y. Zomaya, "On the Characterization of the Structural Robustness of Data Center Networks," IEEE Transactions on Cloud Computing, vol. 1, no. 1, pp. 64-77, 2013. http://sameekhan.org/pub/B_K_2013_TCC.pdf
8. C. Guo et al., "BCube: A High Performance, Server-centric Network Architecture for Modular Data Centers," ACM SIGCOMM Computer Communication Review, vol. 39, no. 4, pp. 63-74, 2009.
9. P. Costa et al., CamCube: A Key-based Data Center, Technical Report MSR TR-2010-74, Microsoft Research, 2010.
10. D. Li et al., "FiConn: Using Backup Port for Server Interconnection in Data Centers," in IEEE INFOCOM, 2009.
11. A. Singla et al., "Jellyfish: Networking Data Centers Randomly," in 9th USENIX Symposium on Networked Systems Design and Implementation (NSDI), 2012.
12. L. Gyarmati and T. A. Trinh, "Scafida: A Scale-free Network Inspired Data Center Architecture," ACM SIGCOMM Computer Communication Review, vol. 40, no. 5, pp. 4-12, 2010.
13. M. Manzano, K. Bilal, E. Calle, and S. U. Khan, "On the Connectivity of Data Center Networks," IEEE Communications Letters, vol. 17, no. 11, pp. 2172-2175, 2013. http://sameekhan.org/pub/M_K_2013_CL.pdf
14. K. Bilal, S. U. Khan, S. A. Madani, K. Hayat, M. I. Khan, N. Min-Allah, J. Kolodziej, L. Wang, S. Zeadally, and D. Chen, "A Survey on Green Communications using Adaptive Link Rate," Cluster Computing, vol. 16, no. 3, pp. 575-589, 2013. http://sameekhan.org/pub/B_K_2012_CLUS.pdf
15. B. Heller et al., "ElasticTree: Saving Energy in Data Center Networks," in NSDI, vol. 10, 2010.