Peter Dayan

From Wikipedia, the free encyclopedia

Peter Dayan is the director of the Gatsby Computational Neuroscience Unit at University College London. He is co-author of "Theoretical Neuroscience", a leading textbook on the computational and mathematical modeling of brain function (see computational neuroscience). He is known for applying Bayesian methods from machine learning and artificial intelligence to the understanding of neural function, and is particularly renowned for relating neurotransmitter levels to prediction errors and Bayesian uncertainties.[1] He co-authored the paper "Q-learning" with Chris Watkins and provided a proof of the convergence of TD(λ) for arbitrary λ (see temporal difference learning).[2][3] His h-index according to Google Scholar is 72.
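The Q-learning rule he analysed with Watkins, and the temporal-difference prediction error at its core, can be illustrated in a few lines. The Python sketch below is a conventional reconstruction of the standard tabular update, not code from the 1992 paper; the names Q, alpha, and gamma are assumed placeholders.

    # Illustrative tabular Q-learning step (after Watkins & Dayan, 1992).
    # alpha (learning rate) and gamma (discount factor) are assumed
    # hyperparameters; the environment interface is a placeholder.
    from collections import defaultdict

    def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
        # Temporal-difference (prediction) error:
        #   r + gamma * max_a' Q(s', a') - Q(s, a)
        td_error = r + gamma * max(Q[(s_next, b)] for b in actions) - Q[(s, a)]
        Q[(s, a)] += alpha * td_error  # move the estimate toward the target
        return td_error

    # Usage: a table that defaults to zero for unseen state-action pairs.
    Q = defaultdict(float)
    q_update(Q, s=0, a=1, r=1.0, s_next=2, actions=[0, 1])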

He began his career studying mathematics at the University of Cambridge (UK) before completing a PhD in artificial intelligence at the University of Edinburgh with David Willshaw, focusing on associative memory and reinforcement learning. He went on to a postdoctoral fellowship with Terry Sejnowski at the Salk Institute, took up an assistant professor position at the Massachusetts Institute of Technology, and later moved to University College London, where he became Professor and Director of the Gatsby Computational Neuroscience Unit.

References

  1. ^ Schultz, W., Dayan, P., & Montague, P. R. (1997). "A neural substrate of prediction and reward". Science, 275(5306), 1593–1599.
  2. ^ Dayan, P. (1992). "The convergence of TD(λ) for general λ". Machine Learning, 8(3–4), 341–362.
  3. ^ Watkins, C. J. C. H., & Dayan, P. (1992). "Q-learning". Machine Learning, 8(3–4), 279–292.
