|WikiProject Computing / Hardware||(Rated B-class, Top-importance)|
- 1 Merger proposal
- 2 Data Center vs. Server Farm
- 3 Halon
- 4 Grades/Tiers
- 5 UPTIME TIER CLASSIFICATION AS APPLIED TO NON-FACILITY-RELATED AREAS
- 6 History
- 7 DATA CENTER HUMOR
- 8 "Tier 4 data center ... security zones controlled by biometric ..."
- 9 Additional suggestions for the article
- 10 Incorrect ASHRAE specifications listed in Wiki entry
- 11 External link removed
- 12 Solution providers review
- 13 Merge from Server farm
- 14 Image
- 15 Uptime Institute vs. TIA-942 Classifications
- 16 hot/cold air section appears to be cut and paste
- 17 Add to the Requirements for modern data centers section
- 18 Merge with Modular data center
- 19 Storage?
- 20 a copy-paste?
- 21 Spelling of Data Center
- 22 Energy efficiency: PUE
The server room article appears to duplicate much of the content in the data center article in a slightly stubbier form. There is a stale discussion from 2010 on merging server farm into data center, and while I agree that there is a notable difference between the use of "server farm" and "data center" in the industry, "server room" seems much more poorly defined. At any rate, I don't think all three of these articles should exist separately; we should do some cleanup to make sure Data center and Server farm contain their pertinent information without too much overlap. Jenrzzz (talk) 01:57, 7 September 2012 (UTC)
- Seems a matter of scale that could be handled in one article. User:Fred Bauder Talk 12:01, 25 September 2012 (UTC)
A server room can exist in any building or business, and the room's owner can (usually) choose whether or not to adhere to the standards of a data center. Server farms and data centers can each be used by a single business or entity, or by multiple businesses or entities, and usually exist for that exclusive purpose. A server room is normally a room in a building that has been repurposed for housing the servers of (usually) only one business or entity, where the owner decides how much continuity is required or can be afforded, unless otherwise required. I recommend making the distinction that a server room serves only one entity, and merging the server farm and data center articles while noting the distinctions between the two. Once a server room houses servers for more than one entity and comes under standards regulation, it graduates to becoming a data center (or server farm). — Preceding unsigned comment added by Palmplant (talk • contribs) 15:52, 16 October 2012 (UTC)
- Data center and Server farm are more conceptually similar than are Data center and Server room. Typically a data center serves multiple businesses or organizations. A server farm is typically owned and used by a single organization, but physically could be mistaken for a data center. A server room is typically a small cluster of servers that serves a single organization on-location. 18.104.22.168 (talk) 19:27, 29 July 2014 (UTC)
Data Center vs. Server Farm
Data centers make very bad server farms, and no server farm could be certified as a data center. Only in the weird world of Wikipedia, where the people writing the articles know nothing about the industry, would such confusion exist.
Data storage (backup) must be deep-vaulted, secure, and with very limited access, as remote as possible. Server farms must be on major communications intersections (MAE-East, AADS, Palo Alto, etc.). All major telecom hubs are in major cities, and true vaulted data centers must never be located where rioting or civil unrest could interrupt service.
- Okay then, to prevent this from becoming "wikistupidia", could someone who knows this topic please edit in a statement about how a Data Center and Server Farm clearly differ inside the actual article before I do? Airelor (talk) 19:56, 23 September 2012 (UTC)
The halon fire system does not push the oxygen out of the room. That would kill anyone left in the room, which halon does not do. It chemically interrupts the fire chain reaction, extinguishing the fire. --Unsigned comment by 22.214.171.124
New trends in Gas Fire Suppression
The most recent demonstration of N2 being used as the fire-suppression gas was very impressive. Nohmi, the well-known fire-suppression engineering firm, held a series of demonstrations with N2 in August 2009 in Saitama, near Tokyo. Guests were allowed to remain in the test zones when the N2 was released.
The result was impressive, with the fire extinguished immediately and none of us feeling any ill effects.
Other gases being deployed include Inergen, a blend of argon, CO2, and nitrogen. Inergen is probably a trade name.
CO2 is also being used in limited applications.
The move away from Halon and FM-type gases is more pronounced with growing green awareness. CFCs are in general not being used in Japan. (Ozlanka - Tokyo, Japan --Ozlanka (talk) 04:45, 14 September 2009 (UTC)) —Preceding unsigned comment added by Ozlanka (talk • contribs) 04:33, 14 September 2009 (UTC)
I've left a link to http://www.donelan.com/design/general.html which describes the different "grades" of datacentres. It's practically the only reference on the subject I've found, despite everyone boasting that they're a "class A Datacenter". So the question is: is this simply another marketing buzzword, or does it actually mean something? I'm hoping the article could be updated to mention this. (I will try and get to it eventually, but in case I don't, I'm leaving these notes.) --geoff_o 15:48, 23 March 2006 (UTC)
The Donelan link seems to not work anymore, shall we remove it? JonnyRo
The Uptime Institute has a rating system for datacenters (tier 1 through 4). Ben 23:42, 9 June 2006 (UTC)
- Temperatures: Temperatures within a DC will vary, especially if you are using a hot-aisle/cold-aisle rack configuration, but as a general rule 22 °C ±1 degree is the typical target temperature range we look for and most vendors will offer. Not 17 degrees. Servers don't need to actually be cold; they just need not to be hot.
- My impression of the Uptime Institute rating scales is that they provide just enough information to give the impression of being useful without providing a practical benefit. The goal, of course, being that you think you need to hire UI to guide you through the gray areas. A fine technique from a marketing point of view, but of limited use in evaluating a DC. For example: what kinds of single points of failure are allowable in a Tier III center? If you just fix those SPoFs, does it become Tier IV? Can an N+1 system have a SPoF? What level of granularity are you assuming when you discuss SPoFs? Systems? Components? Parts? Do you need switchgear on each genset to be at N+1, or is one switchgear for N+1 gensets OK? Meaning, is the switchgear part of the system when you describe it as N+1? Again, fair enough to UI - answering these questions is what they are paid to do. But the I through IV description is, to my mind, not much better than "Not So Good" to "Pretty Darn Good". But I am curious what others think of this. Do others find UI more useful than I do - the free publications at least?
- There is a movement away from FM200 and other gas systems because they are so costly. Any savings you thought you had because of not damaging as much equipment (remember that you are still coating the area with some form of chemical or mist, which is going to put the machines out of action anyway) is outweighed by the big up-front cost, the constant cost of maintenance, and the high expense and time delay of recharging accidentally fired systems. A pre-action system, with no water in the pipes until the VESDA alarm activates, and zones of control - under floor, specific areas, etc. - is what we are going back to.
Added Sep. 14, 2009
"This arrangement is often made to achieve N+1 Redundancy in the systems." The N+1 referred to here is often open to interpretation. What ratio is actually practical and reliable needs to be well thought out. 4:1 ratio where N=4 is a robust back up system and we often use this ratio for UPS redundancy. In some equipment such as cooling systems, a 5:1 ratio can be used, where N=5.
With generators, the question of redundancy becomes more complex. Is N+1 actually required? After all, the generators are the backup for mains power. Do we then need a backup of a backup? Any comments would be welcome. (Ozlanka - Tokyo, Japan --Ozlanka (talk) 04:46, 14 September 2009 (UTC)) —Preceding unsigned comment added by Ozlanka (talk • contribs) 04:41, 14 September 2009 (UTC)
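For what it's worth, the arithmetic behind these ratios is simple enough to sketch (illustrative only; the function names below are mine, not from any standard):

```python
def units_for_n_plus_1(n):
    """N+1 redundancy: N units are needed to carry the full load,
    plus one identical spare."""
    return n + 1

def per_unit_load_fraction(n):
    """With N units sharing the load, each must be sized for 1/N of it."""
    return 1.0 / n

# The 4:1 UPS ratio mentioned above (N = 4): five units installed,
# any four of which can carry the full load at 25% each.
assert units_for_n_plus_1(4) == 5
assert per_unit_load_fraction(4) == 0.25

# The 5:1 cooling ratio (N = 5): six units, each sized for 20% of the load.
assert units_for_n_plus_1(5) == 6
assert per_unit_load_fraction(5) == 0.2
```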
UPTIME TIER CLASSIFICATION AS APPLIED TO NON-FACILITY-RELATED AREAS
The UI Tier levels have not been expanded to cover critical factors that could impact the practical operation of a DC.
Factors such as:
- Access to the DC in the event of a natural calamity: how many road routes, bridges, rail routes, etc. lead to the DC? Alternate routes are critical to maintaining access.
- Flood plain: the probability of flooding at the DC site is an important factor; any due diligence would investigate it.
- The building per se: type of construction, level of seismic isolation, etc.
- Air quality: in DCs that use air-side free cooling, air quality becomes critical and can impact server life.
- PUE: the efficiency of the DC should be a part of the UI classifications. With carbon-emission reductions being highlighted at all levels in the community, an efficient DC should be rated higher. A PUE of 1.5 or under should become one of the qualifications for a Tier 4 DC rating. --Ozlanka (talk) 05:05, 14 September 2009 (UTC)
It is nearly impossible to list all the factors that the TIA will audit to certify a datacenter with a tier classification - thousands of items. In broad strokes:
Tier 1 -> Stand-alone, non-fault tolerant
Tier 2 -> Stand-alone, maintenance-tolerant
Tier 3 -> Full redundancy, except when in maintenance
Tier 4 -> Full redundancy, including while in maintenance
As you say (I'd like to highlight this): tier classification has nothing to do with PUE (Tier 4 tends to have an extremely high PUE). However, it should be taken into consideration, to make the classification more future-proof and possibly greener.
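The broad-strokes summary above can be captured in a small lookup table (a sketch of this thread's own characterizations, not an official TIA or Uptime Institute definition):

```python
# Rough tier characterizations as summarized in this discussion; the real
# certification audits cover thousands of individual criteria.
TIER_SUMMARY = {
    1: "Stand-alone, non-fault tolerant",
    2: "Stand-alone, maintenance-tolerant",
    3: "Full redundancy, except when in maintenance",
    4: "Full redundancy, including while in maintenance",
}

def describe_tier(tier):
    """Return the broad-strokes description for a tier number (1-4)."""
    return TIER_SUMMARY[tier]

assert describe_tier(4) == "Full redundancy, including while in maintenance"
```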
I am doing some research on the subject of data centres, and a quite simple question - one that will cause quite a bit of debate - relates to their history: when did the term "data centre" first come into common use?
I remember visiting the Manchester University 'Main Computer Suite' in 1984 and it was in effect what we refer to today as a Data Centre with a few minor differences. (mainframes and no switched network)
So did it start with the implementation of rack-mount servers and a structured cabling infrastructure? If the latter, that would mean no earlier than 1987.
When did Rack Mount servers start being used? The earliest I can remember is the late 1990s.
Or am I barking up the wrong tree completely? Is it all based around the internet, and when we all started to use the world wide web in anger?
Any feedback would be helpful.
Caveman107 14:08, 11 June 2007 (UTC)
- > When did Rack Mount servers start being used? The earliest I can remember is the late 1990s.
- Please define "server", as many early computers in the 1970s were rack-mounted. Nearly all S-100 bus machines produced from the mid-1970s to the mid-1980s (when S-100 more or less died) were rack-mount, and they were used as general-purpose servers too. As S-100 died out, they were replaced by rack-mount machines containing IBM PC/AT-compatible motherboards.
- As for the history of "data center" - I don't know that as well, but IIRC there have been companies renting rack space since the 1970s and possibly into the 1960s. Internal corporate "computer rooms" have been called "data centers" too, but we'd need to dig through the old computer books from the 1960s, '70s, etc. to see when the term was used for the computer room and when it morphed into a term often applied to colo and similar rented space. 00:56, 2 September 2007 (UTC)
My father was a Univac technician and my father-in-law was an IBMer from 1960 through 1985. The term "data center" is actually a shortened form of "data processing center", which referred to a typical mainframe environment with the following characteristics: 1) a raised access floor for cabling (not originally used for A/C); 2) a mainframe and associated tape drives; and 3) a precision environmental control system to keep temperature and humidity within tight tolerances. Tim Dueck 18:41 PST, 21 May 2008
DATA CENTER HUMOR
- National Mutual: "When the fire alarm goes off, you have 30 seconds to clear the floor, then the doors lock. Halon gas is then released. .. Halon gas is perfectly safe ... It's the lack of oxygen which kills you.".
- Backup generators always fail to start UNTIL AFTER the battery banks have been exhausted.
- 9-track IBM tape write-protect rings make great frisbees on a slow graveyard shift.
- The BIG RED push button really does power down the whole floor.
- You can never find a tile-lifter when you need one.
- ICL Mainframes had prettier blinking lights than the IBM big iron.
"Tier 4 data center ... security zones controlled by biometric ..."
Additional suggestions for the article
Would anyone be interested in my adding two new sections to the Data Center entry for Wikipedia?
1. The data centre market - demand and supply dynamics
2. Data centre power - taking a holistic approach to data centre power reduction
Incorrect ASHRAE specifications listed in Wiki entry
The Wiki states that "ASHRAE's "Thermal Guidelines for Data Processing Environments" recommends a temperature range of 20–25 °C (68–75 °F) and humidity range of 40–55% with a maximum dew point of 17°C as optimal for data center conditions."
But if you follow the link, the range of 20–25 °C was the recommended range in 2004. The 2008 recommended range is 18–25 °C, with a recommended dew point between 5.5 °C and either 15 °C or 60% relative humidity (whether ASHRAE meant the lesser of the two, or the greater, is unclear). NextHopSelf (talk) 00:14, 19 January 2009 (UTC)
The page references obsolete ASHRAE thermal guidelines. The guidelines stated are from the 2004 version (68–77 °F and 40–55% RH). The current version is dated 2011, and the recommended ranges are now 64.4–80.6 °F, with a dew point of 41.9–59 °F capped at 60% RH.
The guidelines have been updated to the 2011 values, and the reference was updated to the 2012 official ASHRAE publication (the 2011 white paper that originally published them has been removed from the site). Ahadenfeldt (talk) 22:22, 2 December 2013 (UTC)
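For anyone cross-checking the Celsius and Fahrenheit figures quoted in this thread, a quick conversion sketch (my own helper, just to verify that the numbers line up):

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9.0 / 5.0 + 32.0

# 2004 recommended envelope: 20-25 C, i.e. 68-77 F.
assert round(c_to_f(20), 1) == 68.0
assert round(c_to_f(25), 1) == 77.0

# 2011 recommended envelope quoted above as 64.4-80.6 F, i.e. 18-27 C.
assert round(c_to_f(18), 1) == 64.4
assert round(c_to_f(27), 1) == 80.6
```

Note that 25 °C converts to 77 °F, which matches the 68–77 °F figure given for the 2004 version above.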
Hello, I've tried to add an external link to a new site for data center professionals (www.datacenterprofessionals.net), but it has been removed. What am I doing wrong (I'm also new to Wikipedia)? It's a valid link and would be useful for anyone involved in data centers. Thanks, Ken--DataCenterProfessionals (talk) 11:24, 8 January 2009 (UTC)
Solution providers review
Merge from Server farm
The articles for data center and server farm describe the same thing, and in fact they link to each other as equivalents. These articles should be merged into Data center, unless there is a compelling reason to keep Server farm as a different article. Henry Merriam (talk) 21:33, 21 December 2009 (UTC)
Agree. These two articles are too similar to keep them apart Floul1 (talk) 09:26, 12 February 2010 (UTC)
I somewhat agree that the two can be merged into data centres, but I think we would be missing the option to add virtual server farms to the server farms article. So I'm guessing the current server farm article can be merged into Data centre, but there should be one more article on virtual server farms.
I disagree strongly that these should be merged. Within the industry these two terms mean something very different. A server farm is a collection or set of servers performing the same function whereas a data centre is a facility for hosting the servers in a secure environment. It would make more sense to bring this point out in server farm. Tudorjames (talk) 10:19, 23 March 2010 (UTC)
Tudorjames is correct. A data center is simply not the same thing as a server farm. A data center may contain one or more server farms, or it may contain a collection of servers that perform individual functions and are therefore not configured as a server farm. The sentence "also called a data center" in the first paragraph of server farm is incorrect and should be deleted. For the same reason, the sentence "also called a server farm" in the first paragraph of data center should be deleted. I have been building and managing data centers for over 15 years. Arnoldpieper 15:15, 23 March 2010 (UTC)
The third paragraph in server farm is correct and supports what is being said: "The computers, routers, power supplies, and related electronics are typically mounted on 19-inch racks in a server room or data center." In other words, a server farm must be housed or hosted by a data center. It seems to me that whoever inserted the "also called a..." sentences has caused this confusion. Otherwise the content of both articles is for the most part correct, and they define different things. Arnoldpieper 15:25, 23 March 2010 (UTC)
This is the record of the offending entry on Data Center (the incorrect insertion of also called a server farm):
12:49, 12 May 2009 126.96.36.199 (talk) (15,974 bytes) (yet another synonym) (undo)
And here is the record of the offending entry on Server Farm (incorrect insertion of also called a data center):
12:41, 12 May 2009 188.8.131.52 (talk) (4,236 bytes) (add references, as requested.) (undo)
These two sentences should be deleted from both articles, stopping the confusion. Arnoldpieper 15:47, 23 March 2010 (UTC)
Strongly disagree. It is true that the two articles are similar. However, Data center covers more than server farms. Server farms can be used in places other than data centers; for instance, many schools and offices have server farms so that computer data can be accessed from computers outside the room. I don't think that server farms and data centers are identical. Related, yes. Same, no. I will strongly contest the merge of these two articles. - Riotrocket8676 operating from an outside computer. --184.108.40.206 (talk) 04:11, 22 June 2010 (UTC)
I disagree strongly that these should be merged. A datacenter can even contain no server farms or servers: it's the facility, not the contents. Maybe the articles should be edited, as the terms might be used incorrectly (I haven't checked that for the En wiki, though; it is a mistake that's often made - also in publications from within the industry).
Microsoft recently (well, last year) opened a new datacentre in Dublin to house, among other things, several server farms: infrastructure for MSN, infrastructure for cloud computing/Windows Azure, and also storage infrastructure for Microsoft's own support-case handling system, MSSolve. The datacentre also contains infrastructure for the networking side: the switches and routers that connect the server farms to MS's other datacentres, the MS internet backbone, and the MS corporate backbone. Thus to replace/merge server farm with Data center wouldn't be OK at all. JanT (talk) 23:37, 22 June 2010 (UTC)
The image is incorrectly licensed. It's from the Akamai NOCC Tour video. The author is not Gsmith1of2, so he can't just release it to the public domain. Sorry if wrong, but worth checking. —Preceding unsigned comment added by Malikussaid (talk • contribs) 10:02, 28 March 2010 (UTC)
Uptime Institute vs. TIA-942 Classifications
The classifications provided by the Uptime Institute (Tier I - IV) are not identical to the TIA-942 classifications (Tier 1 - 4). http://professionalservices.uptimeinstitute.com/myths.htm Oracleofbargth (talk) 17:23, 14 June 2011 (UTC)
hot/cold air section appears to be cut and paste
The bit on hot and cold aisles could be good, but right now it is a cut-and-paste job from an external site.
Either the section should be reverted for copyright reasons, or the author has to state that they work for the company and release the text, in which case it will need editing for CoI issues, and to sound less like an advertisement. SteveLoughran (talk) 19:40, 14 June 2011 (UTC)
Add to the Requirements for modern data centers section
I would like to add the following to the "Requirements for modern data centers" section. This sub-section discusses data center modernization.
There is a trend to modernize data centers in order to take advantage of the performance and energy efficiency increases of newer IT equipment and capabilities, such as cloud computing. This process is also known as data center transformation.
Organizations are experiencing rapid IT growth, but their data centers are aging. Industry research company IDC puts the average age of a data center at nine years. Gartner, another research company, says data centers older than seven years are obsolete. In May 2011, data center research organization Uptime Institute reported that 36 percent of the large companies it surveyed expect to exhaust IT capacity within 18 months.
Data center transformation takes a step-by-step approach through integrated projects carried out over time. This differs from a traditional method of data center upgrades that takes a serial and siloed approach. The typical projects within a data center transformation initiative include standardization/consolidation, virtualization, automation and security.
- Standardization/consolidation: The purpose of this project is to reduce the number of data centers a large organization may have. It also helps to reduce the number of hardware and software platforms, tools, and processes within a data center. Organizations replace aging data center equipment with newer equipment that provides increased capacity and performance. Computing, networking, and management platforms are standardized so they are easier to manage.
- Virtualization: There is a trend to use IT virtualization technologies to replace or consolidate multiple pieces of data center equipment, such as servers. Virtualization helps to lower capital and operational expenses and reduce energy consumption. Data released by investment bank Lazard Capital Markets reports that 48 percent of enterprise operations will be virtualized by 2012. Gartner views virtualization as a catalyst for modernization.
- Automation: Data center automation involves automating tasks such as provisioning, configuration, patching, release management, and compliance. As IT administration and staff time are at a premium, automating tasks makes data centers run more efficiently.
- Security: In modern data centers, the security of data on virtual systems is integrated with the existing security of physical infrastructure. The security of a modern data center must take into account physical security, network security, and data and user security.
 Mukhar, Nicholas. "HP Updates Data Center Transformation Solutions," August 17, 2011 http://www.mspmentor.net/2011/08/17/hp-updates-data-transformation-solutions/
 Sperling, Ed. "Next-Generation Data Centers," Forbes, March 15, 2010 http://www.forbes.com/2010/03/12/cloud-computing-ibm-technology-cio-network-data-centers.html
 Niccolai, James. "Data Centers Turn to Outsourcing to Meet Capacity Needs," CIO.com, May 10, 2011 http://www.cio.com/article/681897/Data_Centers_Turn_to_Outsourcing_to_Meet_Capacity_Needs
 Tang, Helen. "Three Signs it's time to transform your data center," August 3, 2010, Data Center Knowledge http://www.datacenterknowledge.com/archives/2010/08/03/three-signs-it%E2%80%99s-time-to-transform-your-data-center/
 Miller, Rich. "Complexity: Growing Data Center Challenge," Data Center Knowledge, May 16, 2007 http://www.datacenterknowledge.com/archives/2007/05/16/complexity-growing-data-center-challenge/
 Sims, David. "Carousel's Expert Walks Through Major Benefits of Virtualization," TMC Net, July 6, 2010 http://virtualization.tmcnet.com/topics/virtualization/articles/193652-carousels-expert-walks-through-major-benefits-virtualization.htm
 Delahunty, Stephen. "The New Urgency for Server Virtualization," InformationWeek, August 15, 2011. http://www.informationweek.com/news/government/enterprise-architecture/231300585
 Higginbotham, Stacey. "When It Comes to Virtualization, Are We There Yet?," GigaOM http://gigaom.com/2010/04/19/when-it-comes-to-virtualization-are-we-there-yet/
 Forgione, Joe. "Five Top Data Center Protection Challenges and Best Practices for Overcoming Them," ITBusinessEdge, July 25, 2011 http://www.ctoedge.com/content/five-top-data-center-protection-challenges-and-best-practices-overcoming-them
 Miller, Rich. "Gartner: Virtualization Disrupts Server Vendors," Data Center Knowledge, December 2, 2008 http://www.datacenterknowledge.com/archives/2008/12/02/gartner-virtualization-disrupts-server-vendors/
 Ritter, Ted. Nemertes Research, "Securing the Data-Center Transformation: Aligning Security and Data-Center Dynamics," http://lippisreport.com/2011/05/securing-the-data-center-transformation-aligning-security-and-data-center-dynamics/
Merge with Modular data center
The article Modular data center could be merged here into Data center. Merging it would make this a more complete article, which would help. بازرس (talk) 15:18, 6 May 2012 (UTC)
- Support Merge - Modular data center could stand to be pared down some anyway; there are far too many external links. VQuakr (talk) 03:27, 7 September 2012 (UTC)
Came here to find out the actual medium being used to store data in these data centers: there is nothing in the article about this basic fact! They're called DATA centers, therefore the number one thing they do is handle DATA, and part of that requires STORAGE of DATA. Yet nothing in the article explains what companies are using to store data (2.5", 3.5"? 3 TB, 4 TB drives? SATA, SAS?). I'm left none the wiser after looking at this page. Jimthing (talk) 07:42, 22 January 2013 (UTC)
- @Jimthing. What's your point/question: what do you expect to find? That there is only one way to store data on a storage array in a datacenter? Nope: anything is possible - but very often (large) SAN arrays will be used that are shared between many servers via iSCSI or Fibre Channel. So best to look at SAN (storage area network) and NAS (network-attached storage). These arrays can use a mix of media: disk drives in some RAID config, maybe in combination with solid-state disks for faster throughput...
- But you will also find servers with their own local (SATA/SAS) disks, e.g. for booting the OS. Or there is no HDD in the local system, because it boots from SAN or has the basic OS on an SD card (for example, you can get blade servers with an SD memory card on which VMware ESXi is installed, and all data - the virtual disks for the virtual machines running on the ESXi node - comes from a SAN). One final comment: a datacenter doesn't have to store data - you mainly process data in a datacenter. It is not a word for a "data library" but a location where you process data. And then it is handy to store it somewhere. Tonkie (talk) 05:35, 23 January 2013 (UTC)
This 2008 article from TechTarget was not mentioned as a source, but the Uptime Institute tier levels section could be a copy-paste from it. The Wikipedia article was created much later. --K0zka (talk) 12:37, 15 January 2014 (UTC)
Spelling of Data Center
The spelling of "data center" is not consistent in the article (or indeed on this talk page) - the term "datacenter" is often used, although I cannot find the single word without a space in either the Oxford or Cambridge online dictionaries, and Merriam-Webster does not define it at all. --BanzaiSi (talk) 17:21, 10 June 2014 (UTC)
Energy efficiency: PUE
In the "Energy efficiency" section, the statement "The average data center in the US has a PUE of 2.0, meaning that the facility uses two watts of overhead power for every watt delivered to IT equipment" is not correct. A PUE of 2.0 means that the facility uses two watts of TOTAL power for every watt delivered to IT equipment. Total power is overhead power plus IT equipment power. Thus a facility with two watts of OVERHEAD power for every watt of IT equipment power would have a PUE of 3.0, not 2.0. The text has been corrected. Piperh (talk) 18:42, 19 January 2015 (UTC)
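The corrected arithmetic can be verified in a few lines (a minimal sketch; the function is mine, not from the article):

```python
def pue(total_facility_watts, it_watts):
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_watts / it_watts

it = 1.0

# PUE 2.0: TOTAL power is twice IT power, i.e. one watt of overhead
# per watt delivered to IT equipment.
assert pue(it + 1.0, it) == 2.0

# Two watts of OVERHEAD per IT watt means three watts total: PUE 3.0.
assert pue(it + 2.0, it) == 3.0
```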