Type of project: Artificial intelligence and machine learning
Location: Mountain View, California
Google Brain's mission is to improve people's lives by making machines smarter. To do this, the team focuses on building highly flexible models that learn their own features and that use data and computation efficiently.
As the Google Brain team describes it: "This approach fits into the broader Deep Learning subfield of ML and ensures our work will ultimately make a difference for problems of practical importance. Furthermore, our expertise in systems complements this approach by allowing us to build tools to accelerate ML research and unlock its practical value for the world."
The so-called "Google Brain" project began in 2011 as a part-time research collaboration between Google Fellow Jeff Dean, Google Researcher Greg Corrado, and Stanford University professor Andrew Ng. Ng had been interested in using deep learning techniques to crack the problem of artificial intelligence since 2006, and in 2011 began collaborating with Dean and Corrado to build a large-scale deep learning software system, DistBelief, on top of Google's cloud computing infrastructure. Google Brain started as a Google X project and became so successful that it was graduated back to Google: Astro Teller has said that Google Brain paid for the entire cost of Google X.
In June 2012, the New York Times reported that a cluster of 16,000 computers dedicated to mimicking some aspects of human brain activity had successfully trained itself to recognize a cat based on 10 million digital images taken from YouTube videos. The story was also covered by National Public Radio and SmartPlanet.
In March 2013, Google hired Geoffrey Hinton, a leading researcher in the deep learning field, and acquired the company DNNResearch Inc. headed by Hinton. Hinton said that he would be dividing his future time between his university research and his work at Google.
On 26 January 2014, multiple news outlets reported that Google had purchased DeepMind Technologies for an undisclosed amount. Analysts later reported that the company was purchased for £400 million ($650M USD / €486M), although subsequent reports estimated the acquisition at over £500 million. The acquisition reportedly took place after Facebook ended its own negotiations with DeepMind Technologies in 2013 without reaching an agreement.
AI-devised encryption system
In October 2016, the Google Brain team ran an experiment on encrypting communications. In it, two AIs devised their own cryptographic algorithm to protect their messages from a third AI, which in turn tried to evolve its own system to break the AI-generated encryption. The experiment was successful: the first two AIs learned to secure their communications from scratch.
In this experiment, three AIs were created: Alice, Bob and Eve. The goal was for Alice to send a message to Bob, who would decrypt it, while Eve tried to intercept it. The AIs were not given specific instructions on how to encrypt their messages; they were given only a loss function. Whenever communication between Alice and Bob failed, whether because Bob misinterpreted Alice's message or Eve intercepted it, subsequent rounds showed the cryptography evolving so that Alice and Bob could communicate safely. The study thus showed that AIs can devise their own encryption system without being given any cryptographic algorithm beforehand, which could prove a breakthrough for message encryption in the future.
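The loss functions in the experiment pit the two sides against each other. The sketch below shows one plausible reading of that objective structure, assuming messages are bit vectors encoded as -1/1; the function names are illustrative, and the actual experiment trained neural networks for Alice, Bob and Eve against such losses by gradient descent rather than evaluating them on fixed guesses:

```python
import numpy as np

def wrong_bits(plaintext, guess):
    """Number of incorrectly recovered bits; bits are encoded as -1 or 1,
    so each wrong bit contributes |difference| = 2."""
    return np.sum(np.abs(plaintext - guess)) / 2.0

def eve_loss(plaintext, eve_guess):
    # Eve's only objective is to recover the plaintext.
    return wrong_bits(plaintext, eve_guess)

def alice_bob_loss(plaintext, bob_guess, eve_guess):
    # Alice and Bob want Bob to decrypt perfectly while Eve does no
    # better than random guessing (n/2 wrong bits out of n).
    n = plaintext.size
    bob_err = wrong_bits(plaintext, bob_guess)
    eve_err = wrong_bits(plaintext, eve_guess)
    return bob_err + ((n / 2.0 - eve_err) ** 2) / (n / 2.0) ** 2

plaintext = np.ones(16)                # a 16-bit message
eve_at_chance = np.array([1, -1] * 8)  # Eve gets half the bits wrong
```

Under this objective, Alice and Bob reach zero loss only when Bob decrypts perfectly and Eve is reduced to chance-level guessing, which matches the dynamic described above.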
Image enhancement

In February 2017, Google Brain announced an image-enhancement system that uses neural networks to fill in details in very low-resolution pictures. In the examples provided, the system transformed 8x8-pixel images into 32x32 ones.
The software uses two neural networks to generate the images. The first, a "conditioning network," maps the pixels of the low-resolution picture against other high-resolution images, downsampling the latter to 8x8 and trying to find matches. The second, a "prior network," analyzes the pixelated image and tries to add details based on a large number of high-resolution pictures. When the original 8x8 picture is upscaled, the system adds pixels based on its knowledge of what the picture should be. Finally, the outputs of the two networks are combined to create the final image.
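The combination step can be sketched roughly as follows, assuming each network emits per-pixel logits over 256 possible intensity values; shapes and names here are illustrative, and in the actual system the prior network generates pixels autoregressively (one at a time, conditioned on those already generated) rather than independently as shown:

```python
import numpy as np

def softmax(logits):
    """Softmax over the last axis (the 256 intensity values)."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def combine_networks(prior_logits, conditioning_logits):
    """Sum the prior and conditioning logits per pixel, then pick
    the most likely intensity for each pixel of the 32x32 output."""
    probs = softmax(prior_logits + conditioning_logits)
    return probs.argmax(axis=-1)

# Toy 32x32 output with 256 possible intensities per pixel.
prior = np.zeros((32, 32, 256))
conditioning = np.zeros((32, 32, 256))
conditioning[..., 128] = 5.0  # conditioning network favors mid-gray
image = combine_networks(prior, conditioning)
```

Adding logits before the softmax is equivalent to multiplying the two networks' per-pixel distributions, so each network can veto intensities the other proposes.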
This represents a breakthrough in enhancing low-resolution pictures. Although the added details are not part of the real image, only best guesses, the technology has shown impressive results in real-world testing: shown an enhanced picture next to the real one, humans were fooled 10% of the time for celebrity faces and 28% of the time for bedroom pictures. By comparison, normal bicubic scaling had produced disappointing results, fooling no one.
Breakthroughs for Google Translate
The Google Brain team has recently achieved significant breakthroughs for Google Translate, which is part of the Google Brain project. In September 2016, the team launched a new system, Google Neural Machine Translation (GNMT), an end-to-end learning framework able to learn from a large number of examples. While its introduction greatly increased the quality of Google Translate's translations for the pilot languages, it was very difficult to extend such improvements to all of its 103 languages. To address this problem, the Google Brain team developed a multilingual GNMT system, which extended the previous one to enable translations between multiple languages. It also allows for zero-shot translation: translation between a pair of languages that the system has never explicitly seen before.

Google has since announced that Google Translate can also translate speech without transcribing it, using neural networks: speech in one language can be turned directly into text in another language without an intermediate transcription step, which researchers at Google Brain say the networks make unnecessary. To train the system, they exposed it to many hours of Spanish audio together with the corresponding English text. The layers of the neural network, loosely modeled on the human brain, learned to link the corresponding parts and to map the audio waveform directly to English text.
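The key mechanism behind the multilingual system, as described in Google's zero-shot translation write-up, is remarkably simple: a single artificial token prepended to the source sentence tells the one shared model which target language to produce, and nothing else about the architecture changes. A minimal sketch (the token format follows the write-up; the function name is illustrative):

```python
def make_multilingual_example(source_tokens, target_lang):
    """Prepend an artificial token naming the target language.
    The same shared model then handles every language pair, which is
    what makes zero-shot translation between unseen pairs possible."""
    return ["<2{}>".format(target_lang)] + list(source_tokens)

# English source, Spanish target:
example = make_multilingual_example(["Hello", "world"], "es")
```

Because every training example carries such a token, the model learns a shared representation across languages, and at inference time a token for an unseen pair (say, Portuguese to Swahili) still steers it to the right output language.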
Team
Google Brain was initially established by Google Fellow Jeff Dean and visiting Stanford professor Andrew Ng (Ng later left to lead the artificial intelligence group at Baidu). As of 2017, team members include Anelia Angelova, Samy Bengio, Greg Corrado, George Dahl, Michael Isard, Anjuli Kannan, Hugo Larochelle, Quoc Le, Chris Olah, Vincent Vanhoucke, Vijay Vasudevan and Fernanda Viégas.
See also

- Artificial intelligence
- Glossary of artificial intelligence
- Google X
- Quantum Artificial Intelligence Lab – run by Google in collaboration with NASA and Universities Space Research Association
- Google Translate
References

- "Machine Learning Algorithms and Techniques Research at Google". Retrieved May 18, 2017.
- "Google Brain Team's Mission".
- "Google's Large Scale Deep Neural Networks Project". Retrieved 25 October 2015.
- Jeff Dean and Andrew Ng (26 June 2012). "Using large-scale brain simulations for machine learning and A.I.". Official Google Blog. Retrieved 26 January 2015.
- Markoff, John (June 25, 2012). "How Many Computers to Identify a Cat? 16,000". New York Times. Retrieved February 11, 2014.
- Jeffrey Dean; et al. (December 2012). "Large Scale Distributed Deep Networks" (PDF). Retrieved 25 October 2015.
- Conor Dougherty (16 February 2015). "Astro Teller, Google’s ‘Captain of Moonshots,’ on Making Profits at Google X". Retrieved 25 October 2015.
- "A Massive Google Network Learns To Identify — Cats". National Public Radio. June 26, 2012. Retrieved February 11, 2014.
- Shin, Laura (June 26, 2012). "Google brain simulator teaches itself to recognize cats". SmartPlanet. Retrieved February 11, 2014.
- "U of T neural networks start-up acquired by Google" (Press release). Toronto, ON. 12 March 2013. Retrieved 13 March 2013.
- Regalado, Antonio (January 29, 2014). "Is Google Cornering the Market on Deep Learning? A cutting-edge corner of science is being wooed by Silicon Valley, to the dismay of some academics". Technology Review. Retrieved February 11, 2014.
- Wohlsen, Marcus (January 27, 2014). "Google’s Grand Plan to Make Your Brain Irrelevant". Wired Magazine. Retrieved February 11, 2014.
- "Google Acquires UK AI startup Deepmind". The Guardian. Retrieved 27 January 2014.
- "Report of Acquisition, TechCrunch". TechCrunch. Retrieved 27 January 2014.
- Oreskovic, Alexei. "Reuters Report". Reuters. Retrieved 27 January 2014.
- "Google Acquires Artificial Intelligence Start-Up DeepMind". The Verge. Retrieved 27 January 2014.
- "Google acquires AI pioneer DeepMind Technologies". Ars Technica. Retrieved 27 January 2014.
- "Google beats Facebook for Acquisition of DeepMind Technologies". Retrieved 27 January 2014.
- "Google AI invents its own cryptographic algorithm; no one knows how it works". arstechnica.co.uk. Retrieved 2017-05-15.
- "Google Brain super-resolution image tech makes "zoom, enhance!" real". arstechnica.co.uk. Retrieved 2017-05-15.
- "Google just made 'zoom and enhance' a reality -- kinda". cnet.com. Retrieved 2017-05-15.
- "Google uses AI to sharpen low-res images". engadget.com. Retrieved 2017-05-15.
- Schuster, Mike; Johnson, Melvin; Thorat, Nikhil. "Zero-Shot Translation with Google’s Multilingual Neural Machine Translation System". Google Research Blog. Retrieved 15 May 2017.
- Reynolds, Matt. "Google uses neural networks to translate without transcribing". New Scientist. Retrieved 15 May 2017.
- "Speech Recognition and Deep Learning". Google Research Blog. Google. August 6, 2012. Retrieved February 11, 2014.
- "Improving Photo Search: A Step Across the Semantic Gap". Google Research Blog. Google. June 12, 2013.
- "This Is Google’s Plan to Save YouTube". Time. May 18, 2015.
- "Ex-Google Brain head Andrew Ng to lead Baidu's artificial intelligence drive". South China Morning Post.
- "Google Brain team website". https://research.google.com/teams/brain/. Retrieved 13 May 2017.
- Levy, Steven (April 25, 2013). "How Ray Kurzweil Will Help Google Make the Ultimate AI Brain". Wired Magazine. Retrieved February 11, 2014.
- Hernandez, Daniela (May 7, 2013). "The Man Behind the Google Brain: Andrew Ng and the Quest for the New AI". Wired Magazine. Retrieved February 11, 2014.
- Hof, Robert (April 23, 2013). "Deep Learning: With massive amounts of computational power, machines can now recognize objects and translate speech in real time. Artificial intelligence is finally getting smart". Technology Review. Retrieved February 11, 2014.
- "Ray Kurzweil and the Brains Behind the Google Brain". Big Think. December 8, 2013. Retrieved February 11, 2014.