Colocation centre
A colocation centre (also spelled co-location, or colo) is a type of data centre where equipment, space, and bandwidth are available for rental to retail customers. They are sometimes also referred to as "carrier hotels." Colocation facilities provide space, power, cooling, and physical security for the server, storage, and networking equipment of other firms—and connect them to a variety of telecommunications and network service providers—with a minimum of cost and complexity.
Benefits
Colocation has become a popular option for companies with midsize IT needs, especially those in Internet-related businesses. It allows a company to focus its IT staff on the actual work being done, instead of the logistical support needs that underlie the work. Significant economies of scale in power and mechanical systems favour large colocation facilities, typically 4,500 to 9,500 square metres (roughly 50,000 to 100,000 square feet).
Claimed benefits of colocation include:[1]
- A predictable, operational expenditure model
- Additional capacity can be brought on quickly, cheaply, and only as needed
- Better access to space and power
- Experienced professionals managing your data center facility
- An ecosystem of partners in the same facility
- Dedicated infrastructure to build your cloud strategy
- Lean infrastructure to manage during times of rapid business change
- A better road map for disaster recovery
Colocation facilities provide, as a retail rental business, usually on a term contract:
- lockable rack cabinets or cages,
- power in a variety of formats, AC and DC,
- network connectivity, either as a 'house blend', where the colocation provider is itself a customer of several carriers and connects its clients to its own router for access to those carriers, or as direct 'cross-connect' access to the carriers' own routers, or both,
- cooling,
- physical security (including video surveillance, biometric and badge access, logging, and the like), and
- real-time live monitoring of all these functions for failures.
They usually also provide redundant systems for all of these features, to mitigate problems when any one of them fails.
The economies of scale that result from grouping many small-to-midsize customers together in one facility include:
- higher reliability due to redundant systems
- 24/7 monitoring by engineers
- lower network latency and higher bandwidth at a lower cost
- specialist staff, such as network and facilities engineers, who would not be cost-effective for any single client to keep on the payroll.
Major types of colocation customers are:
- Web commerce companies, who use the facilities for a safe environment and cost-effective, redundant connections to the Internet
- Major enterprises, who use the facility for disaster avoidance, offsite data backup and business continuity
- Telecommunication companies, who use the facilities to exchange traffic with other telecommunications companies and to reach potential clients. A colocation facility where many carriers are physically present is often called a 'carrier hotel', and the presence of many carriers increases the facility's value to some classes of potential customers.
- eCommerce sites, who use the facilities to house servers dedicated to processing secure transactions online.[2]
Typically, the colocation provider supplies the building, power, cooling, and physical security, while the client supplies the servers, storage, and networking equipment. Space in a facility is usually leased by the room, cage, rack, or cabinet. Many colocation providers are also expanding their portfolios to include managed services that support their clients' business initiatives. Several factors lead businesses to choose colocation over constructing their own data center; the main driver is avoiding the capital expenditure (CAPEX) associated with building and managing a large computing facility. Traditionally, colocation was popular with private companies mainly for disaster recovery, but in recent years it has also been used by cloud service providers.
For some enterprises colocation is an effective solution, but the approach has downsides. Distance from the facility can result in high travel costs when equipment must be handled in person. Clients can also find themselves locked into long-term contracts that prevent them from renegotiating prices when market rates fall. It is therefore important for a company to examine its colocation service level agreements (SLAs) carefully so that it is not taken aback by hidden charges.[3]
Configuration
Many colocation providers sell to a wide range of customers, ranging from large enterprises to small companies.[4] Typically, the customer owns the IT equipment and the facility provides power and cooling. Customers retain control over the design and usage of their equipment, but daily management of the data center and facility are overseen by the multi-tenant colocation provider.[5]
- Cabinets – A cabinet is a locking unit that holds a server rack. In a multi-tenant data center, servers within cabinets share raised-floor space with other tenants, in addition to sharing power and cooling infrastructure.[6]
- Cages – A cage is dedicated server space within a traditional raised-floor data center; it is surrounded by mesh walls and entered through a locking door. Cages share power and cooling infrastructure with other data center tenants.
- Suites – A suite is a dedicated, private server space within a traditional raised-floor data center; it is fully enclosed by solid partitions and entered through a locking door. Suites share power and cooling infrastructure with other data center tenants.
- Modules – Data center modules are purpose-engineered modules and components that offer scalable data center capacity. They typically use standardized components, which make them easy to add, integrate, or retrofit into existing data centers, and cheaper and easier to build.[7] In a colocation environment, the data center module is a data center within a data center, with its own steel walls and security protocol, and its own cooling and power infrastructure. “A number of colocation companies have praised the modular approach to data centers to better match customer demand with physical build outs, and allow customers to buy a data center as a service, paying only for what they consume.”[8]
Building features
Buildings with data centres inside them are often easy to recognize due to the amount of cooling equipment located outside or on the roof.[9]
Colocation facilities have many other special characteristics:
- Fire protection systems, including passive and active design elements, as well as implementation of fire prevention programmes in operations. Smoke detectors are usually installed to provide early warning of a developing fire by detecting particles generated by smouldering components prior to the development of flame. This allows investigation, interruption of power, and manual fire suppression using hand-held fire extinguishers before the fire grows to a large size. A fire sprinkler system is often provided to control a full-scale fire if it develops. Clean agent gaseous fire suppression systems are sometimes installed to suppress a fire earlier than the fire sprinkler system would. Passive fire protection elements include the installation of fire walls around the space, so that a fire can be restricted to a portion of the facility for a limited time if the active fire protection systems fail or are not installed.
- 19-inch racks for data equipment and servers, 23-inch racks for telecommunications equipment.
- Cabinets and cages for physical access control over tenants' equipment.
- Overhead or underfloor cable racks (trays) and fibre guides, with power cables usually run on racks separate from data cabling.
- Air conditioning is used to control the temperature and humidity in the space. ASHRAE recommends temperature and humidity ranges for reliable operation of electronic equipment.[10] The electrical power used by the electronic equipment is converted to heat, which is rejected to the ambient air in the data centre space. Unless the heat is removed, the ambient temperature will rise, resulting in electronic equipment malfunction. By controlling the space air temperature, the server components at the board level are kept within the manufacturer's specified temperature and humidity range. Air conditioning systems keep the humidity of equipment spaces within acceptable limits by cooling the return air below its dew point; with too much humidity, water may begin to condense on internal components (see the dew-point sketch after this list). In a dry atmosphere, ancillary humidification systems may add water vapour to the space to avoid the static electricity discharge problems that can damage components.
- Low-impedance electrical ground.
- Few, if any, windows.
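The air-conditioning item above notes that cooling return air below its dew point is what removes excess moisture. As a rough illustration of that relationship (not drawn from the article's sources), the sketch below estimates the dew point from temperature and relative humidity using the Magnus approximation; the coefficient values and the 24 °C / 55% sample conditions are assumptions chosen for the example.

```python
# Illustrative sketch only: estimating the dew point from air temperature and
# relative humidity using the Magnus approximation (the coefficients b and c are
# one common parameterisation; the sample conditions are hypothetical).
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float,
                b: float = 17.62, c: float = 243.12) -> float:
    """Approximate dew point (in degrees C) via the Magnus formula."""
    gamma = math.log(rel_humidity_pct / 100.0) + (b * temp_c) / (c + temp_c)
    return (c * gamma) / (b - gamma)

# Example: return air at 24 C and 55% relative humidity.
print(f"Dew point ~ {dew_point_c(24.0, 55.0):.1f} C")
# Dew point ~ 14.4 C
```

Any cooling-coil surface colder than the computed dew point will condense water out of the air, which is how the air-conditioning system dehumidifies the space; humidification works in the opposite direction when the air is too dry.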
Colocation data centres are often audited to prove that they live up to certain standards and levels of reliability; the most commonly seen standards are SSAE 16 SOC 1 Type I and Type II (formerly SAS 70 Type I and Type II) and the tier system of the Uptime Institute. For service organizations, SSAE 16 calls for a description of the organization's "system", which is far more detailed and comprehensive than SAS 70's description of "controls".[11] Other data center compliance standards include the Health Insurance Portability and Accountability Act (HIPAA) audit and PCI DSS.
Physical security
Most colocation centres have high levels of physical security; in the most extreme cases this includes on-site security guards with anti-terrorism training, while other facilities are simply guarded continuously. They may also employ CCTV.
Some colocation facilities require that employees escort customers, especially if there are not individual locked cages or cabinets for each customer. In other facilities, a PIN code or proximity card access system may allow customers access into the building, and individual cages or cabinets have locks. Biometric security measures, such as fingerprint recognition, voice recognition and "weight matching", are also becoming more commonplace in modern facilities. 'Man-traps' are also used, where a hallway leading into the data centre has a door at each end and both cannot be open simultaneously; visitors can be seen via CCTV and are manually authorized to enter.
Power
Colocation facilities generally have generators that start automatically when utility power fails, usually running on diesel fuel. These generators may have varying levels of redundancy, depending on how the facility is built.
Generators do not start instantaneously, so colocation facilities usually have battery backup systems. In many facilities, the operator of the facility provides large inverters to provide AC power from the batteries. In other cases, the customers may install smaller UPSes in their racks.
Some customers choose to use equipment that is powered directly by 48VDC (nominal) battery banks. This may provide better energy efficiency, and may reduce the number of parts that can fail, though the reduced voltage greatly increases necessary current, and thus the size (and cost) of power delivery wiring.
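To illustrate why a lower supply voltage means heavier wiring, the sketch below applies the basic relationship I = P / V to a hypothetical 5 kW rack fed at 230 V AC and at 48 V DC; the load figure and voltages are illustrative assumptions, not values from the article.

```python
# Illustrative arithmetic only: current drawn at different supply voltages
# for the same power load (the figures are hypothetical examples).

def current_amps(power_watts: float, voltage_volts: float) -> float:
    """Return the current required to deliver a given power at a given voltage (I = P / V)."""
    return power_watts / voltage_volts

rack_load_w = 5000  # a hypothetical 5 kW rack

for volts in (230, 48):
    amps = current_amps(rack_load_w, volts)
    print(f"{rack_load_w} W at {volts} V -> {amps:.1f} A")

# 5000 W at 230 V -> 21.7 A
# 5000 W at 48 V  -> 104.2 A
# Roughly five times the current at 48 V DC, which is why conductor size and cost grow.
```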
An alternative to batteries is a motor generator connected to a flywheel and diesel engine.
Many colocation facilities can provide redundant A and B power feeds to customer equipment, and high-end servers and telecommunications equipment can often have two power supplies installed.
“Redundancy in IT is a system design in which a component is duplicated so if it fails there will be a backup.”[12]
N+1, also referred to as “parallel redundant”: “The number of UPS modules that are required to handle an adequate supply of power for essential connected systems, plus one more.”[13]
2N+1, also referred to as “system plus system”: “2 UPS systems feeding 2 independent output distribution systems.”[14] Offers complete redundancy between sides A and B. “2(N+1) architectures fed directly to dual-corded loads provide the highest availability by offering complete redundancy and eliminating single points of failure.”[15]
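The difference between these redundancy schemes comes down to simple module counting. The sketch below works through a hypothetical example (an 800 kW critical load served by 250 kW UPS modules, both assumed figures) to show how N, N+1, 2N, and 2(N+1) configurations scale the number of modules required.

```python
# Illustrative arithmetic only: how many UPS modules different redundancy
# schemes require (the load and module ratings are hypothetical examples).
import math

def modules_required(load_kw: float, module_kw: float, scheme: str) -> int:
    """Return the UPS module count for a given critical load under a redundancy scheme."""
    n = math.ceil(load_kw / module_kw)  # N = modules needed just to carry the load
    if scheme == "N":
        return n
    if scheme == "N+1":
        return n + 1
    if scheme == "2N":
        return 2 * n
    if scheme == "2(N+1)":
        return 2 * (n + 1)
    raise ValueError(f"unknown scheme: {scheme}")

load_kw, module_kw = 800, 250  # hypothetical 800 kW load, 250 kW modules (so N = 4)
for scheme in ("N", "N+1", "2N", "2(N+1)"):
    print(f"{scheme:7s} -> {modules_required(load_kw, module_kw, scheme)} modules")
# N       -> 4 modules
# N+1     -> 5 modules
# 2N      -> 8 modules
# 2(N+1)  -> 10 modules
```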
Colocation facilities are sometimes connected to multiple sections of the utility power grid for additional reliability.
Cooling
Cooling within the data center can be accomplished in several ways, but in every case "cooling" refers to the removal of heat. The thousands of servers and other pieces of IT equipment inside a data center convert essentially all of the electrical power they consume into heat, and unless that heat is removed it will damage the sensitive equipment. Data center operators therefore use several technologies, including computer room air conditioners (CRAC), computer room air handlers (CRAH), and chiller plants. Some operators have opted instead for conductive cooling: whereas traditional cooling technologies rely on chilled-water systems, which consume large amounts of power and water, conductive cooling uses a refrigerant and consumes far less water and energy.
The operator of a colocation facility generally provides air conditioning for the computer and telecommunications equipment in the building. The cooling system generally includes some degree of redundancy.
In older facilities, the cooling system capacity often limits the amount of equipment that can operate in the building, more so than the available square footage.
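Because essentially all of the electrical power drawn by IT equipment ends up as heat, cooling capacity can be sized directly from the electrical load. The sketch below converts a hypothetical 500 kW IT load (an assumed figure, not from the article) into the equivalent heat-rejection requirement in BTU/hr and tons of refrigeration.

```python
# Illustrative arithmetic only: converting an IT electrical load into a cooling
# requirement, since essentially all power drawn by the equipment becomes heat.
# The 500 kW load is a hypothetical example.

BTU_PER_HR_PER_WATT = 3.412   # 1 W of heat = 3.412 BTU/hr
BTU_PER_HR_PER_TON = 12_000   # 1 ton of refrigeration = 12,000 BTU/hr

def cooling_required(it_load_kw: float) -> tuple[float, float]:
    """Return (BTU/hr, tons of refrigeration) needed to remove the heat of an IT load."""
    btu_per_hr = it_load_kw * 1_000 * BTU_PER_HR_PER_WATT
    tons = btu_per_hr / BTU_PER_HR_PER_TON
    return btu_per_hr, tons

btu, tons = cooling_required(500)  # a hypothetical 500 kW colocation hall
print(f"500 kW of IT load ~ {btu:,.0f} BTU/hr ~ {tons:.0f} tons of cooling")
# 500 kW of IT load ~ 1,706,000 BTU/hr ~ 142 tons of cooling
```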
Internal connections
Colocation facility owners have differing rules regarding cross connects between their customers, some of whom may be carriers. These rules may allow customers to run such connections at no charge, or allow customers to order such connections for a significant monthly fee. They may allow customers to order cross connects to carriers, but not to other customers.
Some colocation centres feature a "meet-me-room" where the different carriers housed in the centre can efficiently exchange data.
Most peering points sit in colocation centres.
Because of the high concentration of servers inside larger colocation centres, most carriers will be interested in bringing direct connections to such buildings.
In many cases, there will be a larger Internet Exchange hosted inside a colocation centre, where customers can connect for peering.
External connections
Colocation facilities generally have multiple locations for fibre optic cables to enter the building, to provide redundancy so that communications can continue if one bundle of cables is damaged. Some also have wireless backup connections, for example via satellite.
References
- ^ Rachel A. Dines, Sophia I. Vargas, Doug Washburn, and Eric Chi, "Build Or Colocate? The ROI Of Your Next Data Center", Forrester, August 2013
- ^ "Miami data center to protect Latin American e-commerce from fraud", Datacenter Dynamics
- ^ "Colocation Market Forecast to Reach Nearly $52 Billion by 2020"
- ^ Pashke, Jeff, "Going Open – Software vendors in transition", 451 Research. Retrieved 6 March 2016.
- ^ "Colocation: Managed or unmanaged?", 7L Networks. Retrieved 6 March 2016.
- ^ "Colocation Benefits And How To Get Started", Psychz Networks. Retrieved 18 February 2015.
- ^ DCD Intelligence, "Assessing the Cost: Modular versus Traditional Build", October 2013. Archived 2014-10-07 at the Wayback Machine.
- ^ John Rath, "DCK Guide To Modular Data Centers: The Modular Market", Data Center Knowledge, October 2011
- ^ Examples can be seen at http://www.datacentermap.com/blog/data-centers-from-the-sky-174.html
- ^ Thermal Guidelines for Data Processing Environments, 3rd Ed., ASHRAE
- ^ "SSAE 16 Compliance"
- ^ Clive Longbottom, "How to plan and manage datacentre redundancy", Computer Weekly, August 2013
- ^ Margaret Rouse, "N+1 UPS", TechTarget, June 2010
- ^ Emerson Network Power, "Powering Change in the Data Center"
- ^ Kevin McCarthy and Victor Avelar, "Comparing UPS System Design Configurations", Schneider Electric
External links
- Build Or Colocate? The ROI Of Your Next Data Center
- Multi-Tenant Datacenter Global Providers – 2014
- DCK Guide To Modular Data Centers: The Modular Market