Talk:Geoffrey Hinton

location of birth

This article says he was born in Bristol: http://www.magazine.utoronto.ca/feature/getting-smarter-computer-science-professor-geoffrey-hinton-is-helping-to-build-a-new-generation-of-intelligent-machines/ but the wiki article says London? — Preceding unsigned comment added by 82.2.180.6 (talk) 07:12, 23 June 2015 (UTC)

godfather joke

The first paragraph claims his work has gained him the nickname "godfather of neural networks". This must be a joke. What about the other, much older pioneers of neural networks? For example, Warren McCulloch was called the godfather of neural networks (see http://soma.berkeley.edu/books/MA/MassAction.html). And there are Teuvo Kohonen, Kunihiko Fukushima, Shun'ichi Amari, Paul Werbos, David E. Rumelhart, and others who may be more deserving of such a title. I checked the source: apparently it was Andrew Ng who called Hinton that in a Wired magazine article. But both Hinton and Ng work for the same company, Google (next door to Wired magazine). This looks like a company's self-promotion in disguise. It should not appear in any biography. Putting things straight (talk) 16:15, 8 October 2013 (UTC)

For future use

My father was a Stalinist and sent me to a private Christian school where we had to pray every morning. From a very young age I was convinced that many of the things that the teachers and other kids believed were just obvious nonsense. That's great training for a scientist and it transferred very well to artificial intelligence. But it was a nasty shock when I found out what Stalin actually did.

Why delete Alex Krizhevsky, whose breakthrough made this possible? Other co-workers are also mentioned.

User Nelson: You deleted my text on Krizhevsky and others. One cannot give Hinton sole credit for the work of Alex Krizhevsky. In fact, Hinton was resistant to Krizhevsky's idea. Why delete Krizhevsky? Other co-workers are also mentioned. Same for David E. Rumelhart and others. I also added Seppo Linnainmaa, the inventor of backpropagation (1970):

While a professor at Carnegie Mellon University (1982–1987), Hinton, David E. Rumelhart, and Ronald J. Williams were among the first researchers to demonstrate the use of the back-propagation algorithm (also known as the reverse mode of automatic differentiation, published by Seppo Linnainmaa in 1970) for training multi-layer neural networks, which has since been widely used in practical applications.[1]

The dramatic image-recognition milestone of AlexNet, designed by his student Alex Krizhevsky[2] for the ImageNet challenge in 2012,[3] helped to revolutionize the field of computer vision.
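For readers following this discussion who are unfamiliar with the method being attributed, here is a minimal sketch of backpropagation (the reverse-mode ordering of derivatives mentioned in the proposed text) on a toy two-layer network. The network sizes, data, squared-error loss, and learning rate are illustrative assumptions, not the setup of any of the cited papers; it only shows the general idea of propagating the error signal backwards through the layers.

```python
# Minimal, illustrative backpropagation sketch (toy sizes, NumPy only).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 samples, 3 input features, 1 target value each.
X = rng.normal(size=(4, 3))
y = rng.normal(size=(4, 1))

# Randomly initialized weights for a 3 -> 5 -> 1 network.
W1 = rng.normal(scale=0.1, size=(3, 5))
W2 = rng.normal(scale=0.1, size=(5, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(100):
    # Forward pass: compute activations layer by layer.
    z1 = X @ W1
    h = sigmoid(z1)
    y_hat = h @ W2
    loss = 0.5 * np.mean((y_hat - y) ** 2)

    # Backward pass: apply the chain rule from the output toward the
    # input (the "reverse mode" ordering of the derivatives).
    d_y_hat = (y_hat - y) / X.shape[0]   # dL/d y_hat
    d_W2 = h.T @ d_y_hat                 # dL/d W2
    d_h = d_y_hat @ W2.T                 # dL/d h
    d_z1 = d_h * h * (1.0 - h)           # dL/d z1 (sigmoid derivative)
    d_W1 = X.T @ d_z1                    # dL/d W1

    # Gradient-descent update.
    lr = 0.5
    W1 -= lr * d_W1
    W2 -= lr * d_W2

print(f"final loss: {loss:.4f}")
```

The key point for the attribution question above is only the ordering of the derivative computations (output to input), which is what the proposed text identifies with reverse-mode automatic differentiation; everything else in the sketch is an arbitrary toy choice.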

Extra section for high-profile case of plagiarism in the backpropagation paper?

Hinton's backpropagation paper[1] (he was the second of three authors) did not mention Seppo Linnainmaa, the inventor of the method. This is actually Hinton's most highly cited paper, together with the Krizhevsky paper.[3] At the moment, the article mentions this high-profile case of plagiarism only in passing, although it probably deserves an extra section.

Uf11 (talk)

  1. ^ a b Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (1986-10-09). "Learning representations by back-propagating errors". Nature. 323 (6088): 533–536. doi:10.1038/323533a0. ISSN 1476-4687.
  2. ^ Dave Gershgorn (18 June 2018). "The inside story of how AI got good enough to dominate Silicon Valley". Quartz. Retrieved 5 October 2018.
  3. ^ a b Krizhevsky, Alex; Sutskever, Ilya; Hinton, Geoffrey E. (2012-12-03). "ImageNet classification with deep convolutional neural networks". Advances in Neural Information Processing Systems. Curran Associates Inc.: 1097–1105.