Google data centers
Google data centers are the large data center facilities Google uses to provide its services. They combine large storage drives, computer nodes organized in aisles of racks, internal and external networking, environmental controls (mainly cooling and dehumidification), and operations software (especially for load balancing and fault tolerance).
There is no official data on how many servers are in Google data centers, but Gartner estimated in a July 2016 report that Google at the time had 2.5 million servers. This number is changing as the company expands capacity and refreshes its hardware.
The locations of Google's various data centers by continent are as follows:
- Berkeley County, South Carolina ( ) — since 2007, expanded in 2013, 150 employees
- Council Bluffs, Iowa ( ) — announced 2007, first phase completed 2009, expanded 2013 and 2014, 130 employees
- Douglas County, Georgia ( ) — since 2003, 350 employees
- Bridgeport, Alabama ( )
- Lenoir, North Carolina ( ) — announced 2007, completed 2009, over 110 employees
- Montgomery County, Tennessee ( ) — announced 2015
- Mayes County, Oklahoma at MidAmerica Industrial Park ( ) — announced 2007, expanded 2012, over 400 employees
- The Dalles, Oregon ( ) — since 2006, 80 full-time employees
- Reno, Nevada — announced in 2018; 1,210 acres of land bought in 2017 in the Tahoe Reno Industrial Center; project approved by the state of Nevada in November 2018
- Henderson, Nevada — announced in 2019; 64 acres; $1.2B building costs
- Loudoun County, Virginia — announced in 2019
- Northland, Kansas City — announced in 2019, under construction
- Midlothian, Texas — announced in 2019; 375 acres; $600M building costs
- New Albany, Ohio — announced in 2019; 400 acres; $600M building costs
- Papillion, Nebraska — announced in 2019; 275 acres; $600M building costs
- Salt Lake City, Utah — announced in 2020
- Las Vegas, Nevada — announced in 2020
- Quilicura, Chile ( ) — announced 2012, online since 2015, up to 20 employees expected. A million investment plan to increase capacity at Quilicura was announced in 2018.
- Cerrillos, Chile – announced for 2020
- Colonia Nicolich, Uruguay – announced 2019
- Saint-Ghislain, Belgium ( ) — announced 2007, completed 2010, 12 employees
- Hamina, Finland ( ) — announced 2009, first phase completed 2011, expanded 2012, 90 employees
- Dublin, Ireland ( ) — announced 2011, completed 2012, 150 employees
- Eemshaven, Netherlands ( ) — announced 2014, completed 2016, 200 employees, €500 million expansion announced in 2018 
- Hollands Kroon (Agriport), Netherlands – announced 2019 
- Fredericia, Denmark ( ) — announced 2018, €600M building costs, completed in November 2020
- Zürich, Switzerland – announced in 2018, completed 2019
- Warsaw, Poland – announced in 2019, completed in 2021
- Jurong West, Singapore ( ) — announced 2011, completed 2013
- Changhua County, Taiwan ( ) — announced 2011, completed 2013, 60 employees
- Mumbai, India — announced 2017, completed 2019
- Tainan City, Taiwan — announced September 2019
- Yunlin County, Taiwan — announced September 2020
- Jakarta, Indonesia — announced in 2020
- New Delhi, India — announced in 2020, completed in July 2021
- Seoul, South Korea — announced in 2020
- Sun Microsystems Ultra II with dual 200 MHz processors and 256 MB of RAM. This was the main machine for the original Backrub system.
- 2 × 300 MHz dual Pentium II servers donated by Intel; between them they included 512 MB of RAM and 10 × 9 GB hard drives. The main search ran on these.
- F50 IBM RS/6000 donated by IBM, with 4 processors, 512 MB of memory and 8 × 9 GB hard disk drives.
- Two additional boxes included 3 × 9 GB hard drives and 6 × 4 GB hard disk drives respectively (the original storage for Backrub). These were attached to the Sun Ultra II.
- SDD disk expansion box with another 8 × 9 GB hard disk drives donated by IBM.
- Homemade disk box which contained 10 × 9 GB SCSI hard disk drives.
The customization goal is to purchase CPU generations that offer the best performance per dollar, not absolute performance. How this is measured is unclear, but it is likely to incorporate running costs of the entire server, and CPU power consumption could be a significant factor. Servers as of 2009–2010 consisted of custom-made open-top systems containing two processors (each with several cores), a considerable amount of RAM spread over 8 DIMM slots housing double-height DIMMs, and at least two SATA hard disk drives connected through a non-standard ATX-sized power supply unit. The servers were open top so more servers could fit into a rack. According to CNET and a book by John Hennessy, each server had a novel 12-volt battery to reduce costs and improve power efficiency.
According to Google, the total electrical power drawn by its global data center operations ranges between 500 and 681 megawatts. The combined processing power of these servers might have reached from 20 to 100 petaflops in 2008.
Details of Google's worldwide private network are not publicly available, but Google publications make reference to the "Atlas Top 10" report, which ranks Google as the third-largest ISP behind Level 3.
According to publicly available peering records, the Google network can be accessed from 67 public exchange points and 69 different locations across the world. As of May 2012, Google had 882 Gbit/s of public connectivity (not counting private peering agreements that Google has with the largest ISPs). This public network is used to distribute content to Google users as well as to crawl the internet to build its search indexes. The private side of the network is a secret, but a disclosure from Google indicates that it uses custom-built high-radix switch-routers (with a capacity of 128 × 10 Gigabit Ethernet ports) for the wide area network. Running no fewer than two routers per datacenter (for redundancy), the Google network scales into the terabit-per-second range (with two fully loaded routers, the bisection bandwidth amounts to 1,280 Gbit/s).
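The 1,280 Gbit/s figure can be sanity-checked with simple arithmetic. The port count, link speed, and router count below are the ones quoted in the text; the calculation itself is only an illustration:

```python
# Two fully loaded 128-port x 10 Gbit/s switch-routers per datacenter,
# as stated above.
PORTS_PER_ROUTER = 128
PORT_SPEED_GBITS = 10
ROUTERS_PER_DC = 2  # "no fewer than two routers per datacenter"

# Aggregate port capacity across both routers.
total_capacity_gbits = ROUTERS_PER_DC * PORTS_PER_ROUTER * PORT_SPEED_GBITS

# Bisection bandwidth counts each link once across the cut, i.e. half
# the aggregate port capacity.
bisection_gbits = total_capacity_gbits // 2
print(bisection_gbits)  # 1280, i.e. 1.28 Tbit/s
```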
From a datacenter view, the network starts at the rack level, where custom-made 19-inch racks contain 40 to 80 servers (20 to 40 1U servers on either side; newer servers are 2U rackmount systems), and each rack has an Ethernet switch. Servers are connected via a 1 Gbit/s Ethernet link to the top-of-rack switch (TOR). TOR switches are then connected to a gigabit cluster switch using multiple gigabit or ten-gigabit uplinks. The cluster switches themselves are interconnected and form the datacenter interconnect fabric (most likely using a dragonfly design rather than a classic butterfly or flattened-butterfly layout).
From an operations standpoint, when a client computer attempts to connect to Google, several DNS servers resolve www.google.com into multiple IP addresses via a round-robin policy. This acts as the first level of load balancing, directing the client to different Google clusters. A Google cluster has thousands of servers; once the client has connected, additional load balancing sends the query to the least-loaded web server. This makes Google one of the largest and most complex content delivery networks.
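The first-level, round-robin load balancing can be sketched as a resolver that hands out cluster addresses in rotation. The IP addresses and the class name below are invented for illustration; Google's actual resolver and cluster layout are not public:

```python
from itertools import cycle

# Hypothetical pool of cluster frontend addresses (documentation range).
cluster_ips = ["203.0.113.10", "203.0.113.20", "203.0.113.30"]

class RoundRobinResolver:
    """First-level load balancing: answer each lookup with the next
    address in rotation, the way a round-robin DNS answer spreads
    clients across clusters."""

    def __init__(self, addresses):
        self._pool = cycle(addresses)

    def resolve(self, hostname):
        # The hostname is ignored in this sketch; a real resolver would
        # keep one pool per queried name.
        return next(self._pool)

resolver = RoundRobinResolver(cluster_ips)
answers = [resolver.resolve("www.google.com") for _ in range(4)]
print(answers)  # the fourth query wraps back to the first address
```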
Google has numerous data centers scattered around the world. At least 12 significant Google data center installations are located in the United States. The largest known centers are located in The Dalles, Oregon; Atlanta, Georgia; Reston, Virginia; Lenoir, North Carolina; and Moncks Corner, South Carolina. In Europe, the largest known centers are in Eemshaven and Groningen in the Netherlands and Mons, Belgium. Google's Oceania Data Center is located in Sydney, Australia.
Data center network topology
One of the largest Google data centers is located in the town of The Dalles, Oregon, on the Columbia River, approximately 80 miles (129 km) from Portland. Codenamed "Project 02", the million complex was built in 2006 and is approximately the size of two American football fields, with cooling towers four stories high. The site was chosen to take advantage of inexpensive hydroelectric power, and to tap into the region's large surplus of fiber optic cable, a remnant of the dot-com boom. A blueprint of the site appeared in 2008.
In February 2009, Stora Enso announced that it had sold the Summa paper mill in Hamina, Finland to Google for €40 million. Google invested €200 million in the site to build a data center, and announced an additional €150 million investment in 2012. Google chose this location due to the availability and proximity of renewable energy sources.
Modular container data centers
Floating data centers
In 2013, the press revealed the existence of Google's floating data centers along the coasts of California (Treasure Island's Building 3) and Maine. The development project was kept under tight secrecy. The data centers were 250 feet long, 72 feet wide, and 16 feet deep. Google bought the patent for an in-ocean data center cooling technology in 2009 (along with a wave-powered ship-based data center patent in 2008). Shortly thereafter, Google declared that the two massive, secretly built structures were merely "interactive learning centers, [...] a space where people can learn about new technology."
Most of the software stack that Google uses on their servers was developed in-house. According to a well-known Google employee, C++, Java, Python and (more recently) Go are favored over other programming languages. For example, the back end of Gmail is written in Java and the back end of Google Search is written in C++. Google has acknowledged that Python has played an important role from the beginning, and that it continues to do so as the system grows and evolves.
The software that runs the Google infrastructure includes:
- Google Web Server (GWS) – custom Linux-based Web server that Google uses for its online services.
- Storage systems:
- Google File System and its successor, Colossus
- Bigtable – structured storage built upon GFS/Colossus
- Spanner – planet-scale database, supporting externally-consistent distributed transactions
- Google F1 – a distributed, quasi-SQL DBMS based on Spanner, which replaced a custom version of MySQL.
- Chubby lock service
- MapReduce and Sawzall programming language
- Indexing/search systems:
- Borg declarative process scheduling software
Google has developed several abstractions which it uses for storing most of its data:
- Protocol Buffers – "Google's lingua franca for data", a binary serialization format which is widely used within the company.
- SSTable (Sorted Strings Table) – a persistent, ordered, immutable map from keys to values, where both keys and values are arbitrary byte strings. It is also used as one of the building blocks of Bigtable.
- RecordIO – a sequence of variable-sized records.
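A minimal in-memory sketch of the SSTable contract described above: a sorted, immutable map from byte-string keys to byte-string values. Real SSTables are on-disk files with block indexes; the class and method names here are assumptions for illustration:

```python
import bisect

class SSTable:
    """Sketch of an SSTable: an ordered, immutable map from byte-string
    keys to byte-string values. Sorting happens once at construction;
    the table never changes afterwards."""

    def __init__(self, items):
        pairs = sorted(items)
        self._keys = [k for k, _ in pairs]
        self._values = [v for _, v in pairs]

    def get(self, key, default=None):
        # Binary search over the sorted key list.
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            return self._values[i]
        return default

    def range_scan(self, start, end):
        """Yield (key, value) pairs with start <= key < end, in key order.
        Ordered scans like this are what make SSTables a useful building
        block for Bigtable."""
        i = bisect.bisect_left(self._keys, start)
        while i < len(self._keys) and self._keys[i] < end:
            yield self._keys[i], self._values[i]
            i += 1

table = SSTable([(b"banana", b"2"), (b"apple", b"1"), (b"cherry", b"3")])
print(table.get(b"apple"))                 # b"1"
print(list(table.range_scan(b"a", b"c")))  # apple and banana entries
```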
Software development practices
Most operations are read-only. When an update is required, queries are redirected to other servers to simplify consistency issues. Queries are divided into sub-queries, which may be sent to different servers in parallel, thus reducing latency.
Like most search engines, Google indexes documents by building a data structure known as an inverted index, which maps each query word to a list of documents containing it. The index is very large due to the number of documents stored on the servers.
The index is partitioned by document IDs into many pieces called shards. Each shard is replicated onto multiple servers. Initially, the index was served from hard disk drives, as is done in traditional information retrieval (IR) systems. Google dealt with the increasing query volume by increasing the number of replicas of each shard, and thus the number of servers. Soon they found that they had enough servers to keep a copy of the whole index in main memory (although with low replication or no replication at all), and in early 2001 Google switched to an in-memory index system. This switch "radically changed many design parameters" of their search system, allowing a significant increase in throughput and a large decrease in query latency.
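The sharding scheme above, partitioning an inverted index by document ID and querying every shard, can be sketched in a few lines. The toy corpus, shard count, and docid-modulo partitioning rule are invented for illustration; Google's actual partitioning function is not public:

```python
from collections import defaultdict

# Toy corpus; real shards hold millions of documents.
documents = {
    1: "google data center network",
    2: "data center cooling",
    3: "search index serving",
    4: "google search network",
}

NUM_SHARDS = 2

def build_shards(docs, num_shards):
    """Partition the corpus by document ID and build one inverted index
    (word -> sorted list of docids) per shard."""
    shards = [defaultdict(list) for _ in range(num_shards)]
    for docid in sorted(docs):
        for word in docs[docid].split():
            shards[docid % num_shards][word].append(docid)
    return shards

def search(shards, word):
    """Fan the query out to every shard and merge the docid lists."""
    hits = []
    for shard in shards:
        hits.extend(shard.get(word, []))
    return sorted(hits)

shards = build_shards(documents, NUM_SHARDS)
print(search(shards, "google"))  # [1, 4]
print(search(shards, "data"))    # [1, 2]
```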
In June 2010, Google rolled out a next-generation indexing and serving system called "Caffeine" which can continuously crawl and update the search index. Previously, Google updated its search index in batches using a series of MapReduce jobs. The index was separated into several layers, some of which were updated faster than the others, and the main layer wouldn't be updated for as long as two weeks. With Caffeine, the entire index is updated incrementally on a continuous basis. Later, Google revealed a distributed data processing system called "Percolator", which is said to be the basis of the Caffeine indexing system.
- Web servers coordinate the execution of queries sent by users, then format the result into an HTML page. The execution consists of sending queries to index servers, merging the results, computing their rank, retrieving a summary for each hit (using the document server), asking for suggestions from the spelling servers, and finally getting a list of advertisements from the ad server.
- Data-gathering servers are permanently dedicated to spidering the Web. Google's web crawler is known as Googlebot. They update the index and document databases and apply Google's algorithms to assign ranks to pages.
- Each index server contains a set of index shards. They return a list of document IDs ("docid"), such that documents corresponding to a certain docid contain the query word. These servers need less disk space, but suffer the greatest CPU workload.
- Document servers store documents. Each document is stored on dozens of document servers. When performing a search, a document server returns a summary for the document based on query words. They can also fetch the complete document when asked. These servers need more disk space.
- Ad servers manage advertisements offered by services like AdWords and AdSense.
- Spelling servers make suggestions about the spelling of queries.
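The coordination described above — a web server fanning a query out to index shards, merging and ranking docids, then fetching summaries from document servers — can be sketched as a toy scatter-gather. All server contents, scores, and snippets here are invented for illustration:

```python
# Hypothetical in-process stand-ins for the server roles listed above.
INDEX_SHARDS = [
    {"network": [(0.9, 11), (0.4, 12)]},               # shard 0: word -> (score, docid)
    {"network": [(0.7, 23)], "cooling": [(0.8, 24)]},  # shard 1
]
DOC_SERVERS = {11: "Jupiter fabric...", 12: "TOR uplinks...",
               23: "cluster switch...", 24: "free cooling..."}

def handle_query(word, top_k=2):
    """Web-server role: scatter the query to every index shard, gather
    and rank the scored docids, then ask the document servers for a
    summary of each hit."""
    scored = []
    for shard in INDEX_SHARDS:          # scatter to all shards
        scored.extend(shard.get(word, []))
    scored.sort(reverse=True)           # gather + rank by score
    return [(docid, DOC_SERVERS[docid]) for _, docid in scored[:top_k]]

print(handle_query("network"))  # the two highest-scoring hits
```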
In October 2013, The Washington Post reported that the U.S. National Security Agency intercepted communications between Google's data centers, as part of a program named MUSCULAR. This wiretapping was made possible because, at the time, Google did not encrypt data passed inside its own network. This was rectified when Google began encrypting data sent between data centers in 2013.
Google's most efficient data center runs at 35 °C (95 °F) using only fresh air cooling, requiring no electrically powered air conditioning.
In December 2016, Google announced that—starting in 2017—it would purchase enough renewable energy to match 100% of the energy usage of its data centers and offices. The commitment will make Google "the world's largest corporate buyer of renewable power, with commitments reaching 2.6 gigawatts (2,600 megawatts) of wind and solar energy".
- "How Many Servers Does Google Have?". Data Center Knowledge. Retrieved September 20, 2018.
- "Google data centers, locations". Retrieved July 21, 2014.
- "Jackson County, Alabama". Google.
- "Google kicks off construction on M Alabama data center". Made in Alabama. Retrieved August 19, 2019.
- Dawn-Hiscox, Tanwen (February 20, 2018). "Google to spend m on Pryor data center expansion". Data Centre Dynamics. Archived from the original on April 23, 2019. Retrieved April 23, 2019.
- Tanwen Dawn-Hiscox (April 18, 2017). "Google is planning a massive data center in Nevada". Datacenterdynamics.com. Retrieved December 8, 2018.
- Jason Hidalgo (November 16, 2018). "Nevada approves Google's M data center near Las Vegas, M in tax incentives". Rgj.com. Retrieved December 8, 2018.
- Jason Hidalgo (September 16, 2020). "Google to invest $600 million in data center near Reno, gets tax break". Reno Gazette Journal. Retrieved October 26, 2020.
With our new data center in Storey County and our expanded investment in our Henderson site, Google will have two facilities in Nevada, bringing our total investment to over $1.88 billion.
- Torres-Cortez, Ricardo (September 16, 2020). "Google to invest additional $600M at Henderson data center – Las Vegas Sun Newspaper". lasvegassun.com. Retrieved February 12, 2021.
"With this latest announcement, Google will bring their total investment in the city of Henderson to $1.2 billion," said Mayor Debra March in the release
- "Henderson, Nevada – Data Centers – Google". Retrieved October 26, 2020.
- Baxtel. "Google Henderson NV Data Center". baxtel.com. Retrieved February 12, 2021.
- Report, Times-Mirror Staff. "Google 'caps off' $600M investment in Loudoun County". LoudounTimes.com. Retrieved February 12, 2021.
- "Loudoun County, Virginia – Data Centers – Google". Google Data Centers. Retrieved February 12, 2021.
- www.bizjournals.com https://www.bizjournals.com/kansascity/news/2019/08/28/google-data-center-deed-executed-for-kcmo-land.html. Retrieved February 12, 2021.
- "Google's massive $600M data center takes shape in Ellis County as tech giant ups Texas presence". Dallas News. June 14, 2019. Retrieved February 12, 2021.
- "Midlothian, Texas – Data Centers – Google". Google Data Centers. Retrieved February 12, 2021.
- Williams, Mark. "Google joins New Albany high-tech crowd with $600 million data center". The Columbus Dispatch. Retrieved February 12, 2021.
- "New Albany, Ohio – Data Centers – Google". Google Data Centers. Retrieved February 12, 2021.
- "Google confirms it is behind $600m Papillion data center project". www.datacenterdynamics.com. Retrieved February 12, 2021.
- "Papillion, Nebraska – Data Centers – Google". Google Data Centers. Retrieved February 12, 2021.
- "Google ha decido de invertir millones de dólares en su centro de datos en Chile". Newtechmag.net (in Spanish). September 28, 2018. Retrieved December 8, 2018.
- "Google instalará un nuevo data center en Chile". diarioeldia.cl (in Spanish).
- Observador, El. "Google instalará un centro de datos en Canelones". El Observador. Retrieved August 20, 2020.
- ICNDiario. "El gigante Google confirma que instalará su centro de datos en Uruguay | ICNDiario" (in Spanish). Retrieved August 20, 2020.
- "Google confirmó la instalación de su centro de datos en el Parque de las Ciencias". Montevideo Portal (in Spanish). Retrieved August 20, 2020.
- "Dublin, Ireland – Data Centers – Google". www.google.com. Retrieved April 2, 2019.
- "Google invests €1 billion in data centers in the Netherlands". NFIA. June 24, 2019. Retrieved February 12, 2021.
- "Google to Spend $1.1 Billion on New Data Centers in Netherlands". Data Center Knowledge. June 24, 2019.
- Sverdlik, Yevgeniy (November 20, 2018). "Google to Build M Data Center in Denmark". Data Center Knowledge. Archived from the original on April 16, 2019. Retrieved April 23, 2019.
- Baxtel. "Google Fredericia Denmark Data Center". baxtel.com. Retrieved February 12, 2021.
- "Google Building Cloud Data Centers Close to Swiss Banks". Data Center Knowledge. https://www.datacenterknowledge.com/google-alphabet/google-building-cloud-data-centers-close-swiss-banks [permanent dead link]
- "Google to Build Cloud Data Centers in Poland". Data Center Knowledge. September 27, 2019.
- Stiver, Dave (November 1, 2017). "GCP arrives in India with launch of Mumbai region". Google Cloud Blog. Retrieved July 30, 2019.
- "Google purchases land for new data center in Tainan". Taipei Times. September 12, 2019. Retrieved December 20, 2019.
- "Google to set up data center in Tainan". Focus Taiwan. September 11, 2019. Retrieved December 20, 2019.
- "Google to set up second data center in Taiwan". Taiwan News. September 11, 2019. Retrieved December 20, 2019.
- "Google confirms plans to build 3rd data center in Taiwan". Taiwan News. September 3, 2020. Retrieved September 3, 2020.
- "Namaste, India. Our new cloud region in Delhi NCR is now live". cloudonair.withgoogle.com.
- "Google Stanford Hardware". Stanford University (provided by Internet Archive). Archived from the original on February 9, 1999. Retrieved March 23, 2017.
- Merlin, Marc (2013). "Case Study: Live upgrading many thousand of servers from an ancient Red Hat distribution to a 10 year newer Debian based one" (PDF). Linux Foundation. Retrieved June 9, 2017.
- Tawfik Jelassi; Albrecht Enders (2004). "Case study 16 — Google". Strategies for E-business. Pearson Education. p. 424. ISBN 978-0-273-68840-2.
- Computer Architecture, Fifth Edition: A Quantitative Approach, ISBN 978-0123838728; Chapter Six; 6.7 "A Google Warehouse-Scale Computer" page 471 "Designing motherboards that only need a single 12-volt supply so that the UPS function could be supplied by standard batteries associated with each server"
- Google on-server 12V UPS, April 1, 2009.
- "Google Sustainability". Google Sustainability.
- "Analytics Press Growth in data center electricity use 2005 to 2010". Archived from the original on January 11, 2012. Retrieved May 22, 2012.
- Google Surpasses Supercomputer Community, Unnoticed?, May 20, 2008.
- "Fiber Optic Communication Technologies: What's Needed for Datacenter Network Operations", Research
- Lam, Cedric F. (2010), FTTH look ahead — technologies & architectures (PDF), p. 4
- "kumara ASN15169", Peering DB
- "Urs Holzle", Speakers, Open Network Summit, archived from the original on May 10, 2012, retrieved May 22, 2012
- Web Search for a Planet: The Google Cluster Architecture (Luiz André Barroso, Jeffrey Dean, Urs Hölzle)
- Warehouse size computers
- Dennis Abts, High Performance Datacenter Networks: Architectures, Algorithms, and Opportunities
- Fiach Reid (2004). "Case Study: The Google search engine". Network Programming in .NET. Digital Press. pp. 251–253. ISBN 978-1-55558-315-6.
- Rich Miller (March 27, 2008). "Google Data Center FAQ". Data Center Knowledge. Archived from the original on March 13, 2009. Retrieved March 15, 2009.
- Brett Winterford (March 5, 2010). "Found: Google Australia's secret data network". ITNews. Retrieved March 20, 2010.
- Singh, Arjun; Ong, Joon; Agarwal, Amit; Anderson, Glen; Armistead, Ashby; Bannon, Roy; Boving, Seb; Desai, Gaurav; Felderman, Bob; Germano, Paulie; Kanagala, Anand (2015). "Jupiter Rising: A Decade of Clos Topologies and Centralized Control in Google's Datacenter Network". Sigcomm '15. doi:10.1145/2785956.2787508. S2CID 2817692.
- Google "The Dalles, Oregon Data Center" Retrieved on January 3, 2011.
- Markoff, John; Hansell, Saul. "Hiding in Plain Sight, Google Seeks More Power." New York Times. June 14, 2006. Retrieved on October 15, 2008.
- Strand, Ginger. "Google Data Center" Harper's Magazine. March 2008. Retrieved on October 15, 2008. Archived August 30, 2012, at the Wayback Machine
- "Stora Enso divests Summa Mill premises in Finland for million". Stora Enso. February 12, 2009. Archived from the original on April 13, 2009. Retrieved December 2, 2009.
- [dead link] "Stooora yllätys: Google ostaa Summan tehtaan". Kauppalehti (in Finnish). Helsinki. February 12, 2009. Archived from the original on February 14, 2009. Retrieved February 12, 2009.
- "Google investoi 200 miljoonaa euroa Haminaan". Taloussanomat (in Finnish). Helsinki. February 4, 2009. Retrieved March 15, 2009.
- "Hamina, Finland". Retrieved April 23, 2018.
- Finland – First Choice for Siting Your Cloud Computing Data Center. Archived July 6, 2013, at the Wayback Machine Accessed August 4, 2010.
- Metz, Cade (April 10, 2009). "Google streams data center pods to world+dog". The Register.
- "United States Patent: 7278273". Patft.uspto.gov. Retrieved February 17, 2012.
- Rory Carroll (October 30, 2013). "Google's worst-kept secret: floating data centers off US coasts". Theguardian.com. Retrieved December 8, 2018.
- Rich Miller (April 29, 2009). "Google Gets Patent for Data Center Barges". Datacenterknowledge.com. Retrieved December 8, 2018.
- Martin Lamonica (September 8, 2008). "Google files patent for wave-powered floating data center". Cnet.com. Retrieved December 8, 2018.
- "Google's ship based datacenter patent application surfaces". Datacenterdynamics.com. September 7, 2008. Retrieved December 8, 2018.
- "Google barge mystery solved: they're for 'interactive learning centers'". Theguardian.com. November 6, 2013. Retrieved December 8, 2018.
- Brandon Bailey (August 1, 2014). "Google confirms selling a mystery barge". San Jose Mercury News. Retrieved April 7, 2015.
- Chris Morran (November 7, 2014). "What Happened To Those Google Barges?". Consumerist. Retrieved January 15, 2017.
- Mark Levene (2005). An Introduction to Search Engines and Web Navigation. Pearson Education. p. 73. ISBN 978-0-321-30677-7.
- "Python Status Update". Artima. January 10, 2006. Retrieved February 17, 2012.
- "Warning". Panela. Blog-city. Archived from the original on December 28, 2011. Retrieved February 17, 2012.
- "Quotes about Python". Python. Retrieved February 17, 2012.
- "Google Architecture". High Scalability. November 22, 2008. Retrieved February 17, 2012.
- Fikes, Andrew (July 29, 2010), "Storage Architecture and Challenges", TechTalk (PDF)[permanent dead link]
- "Colossus: Successor to the Google File System (GFS)". SysTutorials. November 29, 2012. Retrieved May 10, 2016.
- Dean, Jeffrey 'Jeff' (2009), "Design, Lessons and Advice from Building Large Distributed Systems", Ladis (keynote talk presentation), Cornell
- Shute, Jeffrey 'Jeff'; Oancea, Mircea; Ellner, Stephan; Handy, Benjamin 'Ben'; Rollins, Eric; Samwel, Bart; Vingralek, Radek; Whipkey, Chad; Chen, Xin; Jegerlehner, Beat; Littlefield, Kyle; Tong, Phoenix (2012), "F1 — the Fault-Tolerant Distributed RDBMS Supporting Google's Ad Business", Research (presentation), Sigmod
- The Register. Google Caffeine jolts worldwide search machine
- "Google official release note". Retrieved September 28, 2013.
- "Google Developing Caffeine Storage System | TechWeekEurope UK". Eweekeurope.co.uk. August 18, 2009. Archived from the original on November 15, 2011. Retrieved February 17, 2012.
- "Developer Guide – Protocol Buffers – Google Code". Retrieved February 17, 2012.
- Windley, Phil (June 24, 2008). "Velocity 08: Storage at Scale". Windley.com. Retrieved February 17, 2012.
- "Message limit – Protocol Buffers | Google Groups". Retrieved February 17, 2012.
- "Jeff Dean's keynote at WSDM 2009" (PDF). Retrieved February 17, 2012.
- Daniel Peng, Frank Dabek. (2010). Large-scale Incremental Processing Using Distributed Transactions and Notifications. Proceedings of the 9th USENIX Symposium on Operating Systems Design and Implementation.
- The Register. Google Percolator – global search jolt sans MapReduce comedown
- Chandler Evans (2008). "Google Platform". Future of Google Earth. Madison Publishing Company. p. 299. ISBN 978-1-4196-8903-1.
- Chris Sherman (2005). "How Google Works". Google Power. McGraw-Hill Professional. pp. 10–11. ISBN 978-0-07-225787-8.
- Michael Miller (2007). "How Google Works". Googlepedia. Pearson Technology Group. pp. 17–18. ISBN 978-0-7897-3639-0.
- Gellman, Barton; Soltani, Ashkan (October 30, 2013). "NSA infiltrates links to Yahoo, Google data centers worldwide, Snowden documents say". The Washington Post. Retrieved November 1, 2013.
- Savage, Charlie; Miller, Claire Cain; Perlroth, Nicole (October 30, 2013). "N.S.A. Said to Tap Google and Yahoo Abroad". The New York Times. Retrieved March 9, 2017.
- Gallagher, Sean (October 31, 2013). "How the NSA's MUSCULAR tapped Google's and Yahoo's private networks". Ars Technica. Condé Nast. Retrieved March 9, 2017.
- Miller, Claire Cain (October 31, 2013). "Angry Over U.S. Surveillance, Tech Giants Bolster Defenses". The New York Times. Retrieved March 9, 2017.
- Humphries, Matthew (March 27, 2012). "Google's most efficient data center runs at 95 degrees". geek.com. Archived from the original on June 13, 2016. Retrieved June 13, 2016.
- Hölzle, Urs (December 6, 2016). "We're set to reach 100% renewable energy — and it's just the beginning". The Keyword Google Blog. Retrieved December 8, 2016.
- Statt, Nick (December 6, 2016). "Google just notched a big victory in the fight against climate change". The Verge. Vox Media. Retrieved December 8, 2016.
- Etherington, Darrell (December 7, 2016). "Google says it will hit 100% renewable energy by 2017". TechCrunch. AOL. Retrieved December 8, 2016.
- L.A. Barroso; J. Dean; U. Hölzle (March–April 2003). "Web search for a planet: The Google cluster architecture" (PDF). IEEE Micro. 23 (2): 22–28. doi:10.1109/MM.2003.1196112.
- Shankland, Stephen, CNET news "Google uncloaks once-secret server." April 1, 2009.
- Google Research Publications