Colocation centre

For the methods for the solution of differential equations, see Collocation method. For the corpus linguistics notion, see collocation.

A colocation centre or colocation center (also spelled co-location, collocation, colo, or coloc) is a type of data centre where equipment, space, and bandwidth are available for rental to retail customers. Colocation facilities provide space, power, cooling, and physical security for the server, storage, and networking equipment of other firms—and connect them to a variety of telecommunications and network service providers—with a minimum of cost and complexity.

Benefits

Colocation has become a popular option for companies with midsize IT needs—especially those in Internet-related businesses—because it allows the company to focus its IT staff on the actual work being done instead of the logistical support needs that underlie the work. Significant economies of scale in power and mechanical systems favour large colocation facilities, typically 4,500 to 9,500 square metres (roughly 50,000 to 100,000 square feet).

Reported benefits of colocation include:[1]

  • A predictable, operational-expenditure cost model
  • Additional capacity that can be brought on quickly, cheaply, and only as needed
  • Better access to space and power
  • Experienced professionals managing the data centre facility
  • An ecosystem of partners in the same facility
  • Dedicated infrastructure on which to build a cloud strategy
  • Leaner infrastructure to manage during times of rapid business change
  • A better road map for disaster recovery

As a retail rental business, colocation facilities typically provide, usually under a term contract:

  • lockable rack cabinets or cages,
  • power in a variety of formats, AC and DC,
  • network connectivity—either as a 'house blend', where the colo provider is a customer of several carriers and connects clients to its own router for access to multiple carriers, or as direct 'cross-connect' access to the routers of the carriers themselves, or both,
  • cooling,
  • physical security (including video surveillance, biometric and badge access, logging, and the like), and
  • real-time live monitoring of all these functions for failures.

Providers usually also supply redundant systems for all of these features, to mitigate the problems that arise when any one of them inevitably fails.

Economies of scale that result from grouping many small-to-midsized customers together in one facility include:

  • higher reliability due to redundant systems
  • 24/7 monitoring by engineers
  • lower network latency and higher bandwidth at a lower cost
  • specialist staff, such as network and facilities engineers, who would not be cost-effective for any single client to keep on the payroll.

Major types of colocation customers are:

  • Web commerce companies, who use the facilities for a safe environment and cost-effective, redundant connections to the Internet
  • Major enterprises, who use the facility for disaster avoidance, offsite data backup and business continuity
  • Telecommunication companies, which use the facilities to exchange traffic with other telecommunications companies and to reach potential clients—a colo facility where many carriers are physically present is often called a 'carrier hotel', and the presence of many carriers increases the facility's value to some classes of potential customers.
  • eCommerce sites, who use the facilities to house servers dedicated to processing secure transactions online.[2]

Configuration

“Multi-tenant [colocation] providers sell to a wide range of customers, from Fortune 1000 enterprises to small- and medium-sized organizations.”[3] “Typically the facility provides power and cooling to the space, but the IT equipment is owned by the customer. The value proposition of retail multi-tenant is that customers can retain full control of the design and management of their servers and storage, but turn over the daily task of managing data center and facility infrastructure to their multi-tenant provider.”[4]

  • Cabinets – A cabinet is a locking unit that holds a server rack. In a multi-tenant data center, servers within cabinets share raised-floor space with other tenants, in addition to sharing power and cooling infrastructure.
  • Cages – A cage is dedicated server space within a traditional raised-floor data center; it is surrounded by mesh walls and entered through a locking door. Cages share power and cooling infrastructure with other data center tenants.
  • Suites – A suite is a dedicated, private server space within a traditional raised-floor data center; it is fully enclosed by solid partitions and entered through a locking door. Suites share power and cooling infrastructure with other data center tenants.
  • Modules – A data center module is “a prefabricated, pretested module which is assembled in a custom-configured manner to form a complete solution, ideally defined by software.”[5] In a colocation environment, the data center module is a data center within a data center, with its own steel walls and security protocol, and its own cooling and power infrastructure. “A number of colocation companies have praised the modular approach to data centers to better match customer demand with physical build outs, and allow customers to buy a data center as a service, paying only for what they consume.”[6]

Building features

Buildings with data centres inside them are often easy to recognize due to the amount of cooling equipment located outside or on the roof.[7]

Colocation facilities have many other special characteristics:

[Image: A room in the Telecity colocation centre in Aubervilliers, a suburb of Paris]
[Image: A typical server rack, commonly seen in colocation]
  • Fire protection systems, including passive and active design elements, as well as implementation of fire prevention programmes in operations. Smoke detectors are usually installed to provide early warning of a developing fire by detecting particles generated by smouldering components prior to the development of flame. This allows investigation, interruption of power, and manual fire suppression using hand-held fire extinguishers before the fire grows to a large size. A fire sprinkler system is often provided to control a full-scale fire if it develops. Clean-agent gaseous fire suppression systems are sometimes installed to suppress a fire earlier than the fire sprinkler system. Passive fire protection elements include the installation of fire walls around the space, so a fire can be restricted to a portion of the facility for a limited time in the event of the failure of the active fire protection systems, or if they are not installed.
  • 19-inch racks for data equipment and servers, 23-inch racks for telecommunications equipment.
  • Cabinets and cages for physical access control over tenants' equipment.
  • Overhead or underfloor cable racks (trays) and fibre guides, with power cables usually run on a separate rack from data cabling.
  • Air conditioning is used to control the temperature and humidity in the space; a rough airflow-sizing sketch follows this list. ASHRAE recommends temperature and humidity ranges for reliable operation of electronic equipment.[8] The electrical power used by the electronic equipment is converted to heat, which is rejected to the ambient air in the data centre space. Unless the heat is removed, the ambient temperature will rise, resulting in electronic equipment malfunction. By controlling the space air temperature, the server components at the board level are kept within the manufacturer's specified temperature and humidity range. Air conditioning systems keep the humidity of equipment spaces within acceptable parameters by cooling the return air below the dew point; with too much humidity, water may begin to condense on internal components. If the air is too dry, ancillary humidification systems may add water vapour to the space, avoiding static-electricity discharges that can damage components.
  • Low-impedance electrical ground.
  • Few, if any, windows.
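
The airflow needed to remove a given heat load follows from the sensible-heat relation for air. Below is a minimal sketch in Python, not a design tool; the 35 kW load and 12 K supply-to-return temperature rise are assumed figures, and standard sea-level air properties are used:

    # Approximate the airflow needed to carry away a given IT heat load.
    # Sensible heat: P = rho * cp * Q * dT  =>  Q = P / (rho * cp * dT)

    RHO_AIR = 1.2    # air density, kg/m^3 (near sea level, about 20 C)
    CP_AIR = 1005.0  # specific heat of air, J/(kg*K)

    def required_airflow_m3s(heat_load_w: float, delta_t_k: float) -> float:
        """Volumetric airflow (m^3/s) needed to remove heat_load_w with a
        supply-to-return air temperature rise of delta_t_k kelvin."""
        return heat_load_w / (RHO_AIR * CP_AIR * delta_t_k)

    # Example: a 35 kW row of racks with a 12 K air-temperature rise needs
    # roughly 2.4 m^3/s (about 5,100 CFM) of cooling air.
    flow = required_airflow_m3s(35_000, 12)
    print(f"{flow:.2f} m^3/s (~{flow * 2118.88:,.0f} CFM)")

This is also why a larger allowable temperature rise reduces fan energy: the required airflow scales inversely with the supply-to-return difference.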

Colocation data centres are often audited to prove that they live up to certain standards and levels of reliability; the most commonly seen systems are SSAE 16 SOC 1 Type I and Type II (formerly SAS 70 Type I and Type II) and the tier system of the Uptime Institute. For service organizations today, SSAE 16 calls for a description of the organization's "system", which is far more detailed and comprehensive than SAS 70's description of "controls".[9] Other data centre compliance standards include HIPAA (Health Insurance Portability and Accountability Act) audits and PCI DSS.[10]

Physical security

Most colocation centres have high levels of physical security; in the most extreme cases this includes on-site security guards with anti-terrorism training,[11] while other facilities are simply guarded continuously. They may also employ CCTV.

[Image: A biometric key lock at a data centre access point (Netrepid Data Center, Harrisburg, PA, July 2013)]
[Image: Security features of a modular data center]

Some colocation facilities require that staff escort customers, especially if there are no individually locked cages or cabinets. In other facilities, a PIN code or proximity card access system may allow customers access into the building, and individual cages or cabinets have their own locks. Biometric security measures, such as fingerprint recognition, voice recognition, and "weight matching", are also becoming more commonplace in modern facilities. 'Man-traps' are also used, where a hallway leading into the data centre has a door at each end and the two doors cannot be open simultaneously; visitors can be seen via CCTV and are manually authorized to enter.
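
The man-trap's essential property is an interlock: the two doors can never release at the same time, and the inner door releases only after authorization. The toy Python sketch below illustrates that logic only; the class and method names are hypothetical, not taken from any real access-control product:

    # Toy sketch of a man-trap interlock: at most one door may be open,
    # and the inner door releases only for an authorized visitor.

    class ManTrap:
        def __init__(self) -> None:
            self.outer_open = False
            self.inner_open = False
            self.authorized = False

        def open_outer(self) -> bool:
            """The outer door releases only while the inner door is closed."""
            if self.inner_open:
                return False
            self.outer_open = True
            return True

        def close_outer(self) -> None:
            self.outer_open = False

        def authorize(self) -> None:
            """An operator confirms the visitor's identity (e.g. via CCTV)."""
            self.authorized = True

        def open_inner(self) -> bool:
            """The inner door releases only when the outer door is closed
            and the visitor has been authorized."""
            if self.outer_open or not self.authorized:
                return False
            self.inner_open = True
            return True

    trap = ManTrap()
    trap.open_outer()         # visitor steps into the trap
    trap.close_outer()        # outer door closes behind them
    trap.authorize()          # guard approves over CCTV
    assert trap.open_inner()  # only now does the inner door release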

Power

Colocation facilities generally have generators that start automatically when utility power fails, usually running on diesel fuel. These generators may have varying levels of redundancy, depending on how the facility is built.

Generators do not start instantaneously, so colocation facilities usually have battery backup systems. In many facilities, the operator of the facility provides large inverters to provide AC power from the batteries. In other cases, the customers may install smaller UPSes in their racks.
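
The batteries only need to carry the critical load for the seconds it takes the generators to start and accept load. A minimal sizing sketch in Python follows; the 500 kW load, 30-second gap, and 95% inverter efficiency are illustrative assumptions, and real designs add large margins for failed generator starts:

    # Estimate the battery energy needed to bridge a utility failure
    # until the standby generators come up to speed and accept load.

    def bridge_energy_kwh(load_kw: float, gap_seconds: float,
                          inverter_efficiency: float = 0.95) -> float:
        """Energy (kWh) the batteries must deliver to carry load_kw for
        gap_seconds, allowing for inverter losses."""
        return load_kw * (gap_seconds / 3600.0) / inverter_efficiency

    # Example: riding through a 30-second generator start at 500 kW takes
    # on the order of 4.4 kWh of deliverable battery energy.
    print(f"{bridge_energy_kwh(500, 30):.1f} kWh")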

Some customers choose to use equipment that is powered directly by 48VDC (nominal) battery banks. This may provide better energy efficiency, and may reduce the number of parts that can fail, though the reduced voltage greatly increases necessary current, and thus the size (and cost) of power delivery wiring.
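
The wiring penalty is straightforward Ohm's-law arithmetic: for a fixed power draw, current scales inversely with supply voltage. A quick Python comparison (the 10 kW load is an assumed figure, and distribution losses are ignored):

    # Compare conductor current for the same load at two supply voltages.
    # P = V * I  =>  I = P / V, so lower voltage means higher current and
    # heavier (costlier) power-delivery wiring.

    def load_current_a(power_w: float, voltage_v: float) -> float:
        """Current (A) drawn by a power_w load at voltage_v."""
        return power_w / voltage_v

    load_w = 10_000  # 10 kW of IT load (illustrative)
    for volts in (48, 208):
        print(f"{volts:>4} V -> {load_current_a(load_w, volts):6.1f} A")
    # 48 V carries ~208 A versus ~48 A at 208 V: more than four times
    # the current for the same power.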

An alternative to batteries is a motor-generator set coupled to a flywheel and a diesel engine.

Many colocation facilities can provide redundant 'A' and 'B' power feeds to customer equipment, and high-end servers and telecommunications equipment often can have two power supplies installed to take advantage of them.

[Image: Example data center redundant power delivery model from IO Phoenix]

“Redundancy in IT is a system design in which a component is duplicated so if it fails there will be a backup.”[12]

N+1, also referred to as “parallel redundant”: “The number of UPS modules that are required to handle an adequate supply of power for essential connected systems, plus one more.”[13]

2N+1, also referred to as “system plus system”: “2 UPS systems feeding 2 independent output distribution systems.”[14] Offers complete redundancy between sides A and B. “2(N+1) architectures fed directly to dual-corded loads provide the highest availability by offering complete redundancy and eliminating single points of failure.”[15]
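
A minimal Python sketch of how these topologies translate into UPS module counts; the 750 kVA load and 250 kVA module rating are assumed values:

    import math

    def modules_n(load_kva: float, module_kva: float) -> int:
        """N: the minimum number of UPS modules that can carry the load."""
        return math.ceil(load_kva / module_kva)

    def modules_n_plus_1(load_kva: float, module_kva: float) -> int:
        """N+1 ('parallel redundant'): one spare module beyond N."""
        return modules_n(load_kva, module_kva) + 1

    def modules_2n_plus_1(load_kva: float, module_kva: float) -> int:
        """2(N+1) ('system plus system'): two independent N+1 systems,
        each feeding its own output distribution (the A and B sides)."""
        return 2 * modules_n_plus_1(load_kva, module_kva)

    # Example: a 750 kVA critical load on 250 kVA modules gives
    # N = 3, N+1 = 4, and 2(N+1) = 8 modules across the A and B sides.
    print(modules_n(750, 250),
          modules_n_plus_1(750, 250),
          modules_2n_plus_1(750, 250))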

Colocation facilities are sometimes connected to multiple sections of the utility power grid for additional reliability.

Cooling

[Image: A colocation cooling rack used to moderate server temperatures (Netrepid Data Center, Harrisburg, PA, July 2013)]

The operator of a colocation facility generally provides air conditioning for the computer and telecommunications equipment in the building. The cooling system generally includes some degree of redundancy.

In older facilities, the capacity of the cooling system, rather than the available floor space, often limits the amount of equipment that can operate in the building: for example, a hall with 1 MW of cooling capacity can support at most 200 racks drawing 5 kW each, however much floor area remains empty.

Thermal energy storage

For example, at its data center in Phoenix, Arizona, IO uses a thermal energy storage system. “IO Phoenix uses the Ice Ball Thermal Storage System from Cryogel. The ice balls are water-filled, dimpled plastic spheres about the size of a softball, floating in tanks filled with a glycol solution. During the night, that solution is run through chillers and then pumped back into the tanks, freezing the ice balls. During the day, the solution – still chilled from the ice balls – is pumped through a heat exchanger that chills water in a separate loop used in the data center. That way, we’re running the chiller most at night when there is less demand on the power grid rather than during the day when the demand is greater (and electricity more expensive).”[16]
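
The appeal of ice storage comes from the large latent heat of fusion of water, which lets a modest tank bank many hours of chiller output. A back-of-the-envelope Python sketch follows; the 50-tonne ice mass is an assumed figure, and only the latent heat is counted, not sensible heating of the melt water:

    # Estimate the cooling capacity banked in a mass of frozen ice balls.

    LATENT_HEAT_FUSION_J_KG = 334_000.0  # latent heat of fusion of water ice
    TON_HOUR_J = 3_517.0 * 3600          # one ton-hour of refrigeration, in joules

    def stored_cooling_ton_hours(ice_mass_kg: float) -> float:
        """Ton-hours of cooling available from fully melting ice_mass_kg of ice."""
        return ice_mass_kg * LATENT_HEAT_FUSION_J_KG / TON_HOUR_J

    # Example: 50,000 kg (50 tonnes) of ice stores roughly 1,300 ton-hours,
    # enough to absorb about a 110-ton chilled-water load for 12 daytime hours.
    print(f"{stored_cooling_ton_hours(50_000):,.0f} ton-hours")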

Internal connections

Colocation facility owners have differing rules regarding cross connects between their customers, some of whom may be carriers. These rules may allow customers to run such connections at no charge, or allow customers to order such connections for a significant monthly fee. They may allow customers to order cross connects to carriers, but not to other customers.

Some colocation centres feature a "meet-me room" where the different carriers housed in the centre can efficiently exchange data. Most peering points sit in colocation centres, and because of the high concentration of servers inside larger centres, most carriers are interested in bringing direct connections to such buildings. In many cases, a larger Internet exchange is hosted inside a colocation centre, where customers can connect for peering.

External connections

Colocation facilities generally have multiple locations for fibre optic cables to enter the building, to provide redundancy so that communications can continue if one bundle of cables is damaged. Some also have wireless backup connections, for example via satellite.

List of data center colocation providers

  • TeleData
  • IO
  • Equinix
  • Telecity
  • Interxion
  • AT&T
  • Verizon
  • Level3
  • Latisys
  • Telehouse
  • Telx Group[17]

References

  1. ^ Rachel A. Dines, Sophia I. Vargas, Doug Washburn, and Eric Chi, “Build Or Colocate? The ROI Of Your Next Data Center”, Forrester, August 2013.
  2. ^ “Miami data center to protect Latin American e-commerce from fraud”, Datacenter Dynamics.
  3. ^ Jeff Paschke, “Multi-Tenant Datacenter Global Providers - 2014”, 451 Research, August 2014.
  4. ^ David Freeland, “Colocation and Managed Hosting”, FOCUS Telecom, Winter 2012.
  5. ^ DCD Intelligence, “Assessing the Cost: Modular versus Traditional Build”, October 2013.
  6. ^ John Rath, “DCK Guide To Modular Data Centers: The Modular Market”, Data Center Knowledge, October 2011.
  7. ^ Examples can be seen at http://www.datacentermap.com/blog/data-centers-from-the-sky-174.html
  8. ^ “Thermal Guidelines for Data Processing Environments”, 3rd ed., ASHRAE.
  9. ^ “SSAE 16 (SOC 1) Overview”.
  10. ^ “HIPAA Compliance Data Center”, Colocation America.
  11. ^ “Blue Ridge Data Center Fact Sheet”, Carpathia Hosting, Inc. Retrieved 4 April 2014.
  12. ^ Clive Longbottom, “How to plan and manage datacentre redundancy”, Computer Weekly, August 2013.
  13. ^ Margaret Rouse, “N+1 UPS”, TechTarget, June 2010.
  14. ^ Emerson Network Power, “Powering Change in the Data Center”.
  15. ^ Kevin McCarthy and Victor Avelar, “Comparing UPS System Design Configurations”, Schneider Electric.
  16. ^ Troy Rutman, “Innovation in the Data Center: Making Ice Helps IO.Phoenix Keep Energy Costs Down”, 18 September 2013.
  17. ^ David Freeland, “Colocation and Managed Hosting”, FOCUS Telecom, Winter 2012.
