Ronald J. Williams

Ronald J. Williams is a professor of computer science at Northeastern University and one of the pioneers of neural networks. He co-authored a paper on the backpropagation algorithm that triggered a boom in neural network research.[1] He also made fundamental contributions to the fields of recurrent neural networks[2][3] and reinforcement learning.[4] Together with Wenxu Tong and Mary Jo Ondrechen, he developed Partial Order Optimum Likelihood (POOL), a machine learning method used to predict active amino acids in protein structures. POOL is a maximum likelihood method with a monotonicity constraint, and it is a general predictor of properties that depend monotonically on the input features.[5]

References

  1. ^ Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323, 533–536.
  2. ^ Williams, R. J. and Zipser, D. (1989). A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1, 270–280.
  3. ^ Williams, R. J. and Zipser, D. (1994). Gradient-based learning algorithms for recurrent networks and their computational complexity. In Back-propagation: Theory, Architectures and Applications. Hillsdale, NJ: Erlbaum.
  4. ^ Williams, R. J. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8, 229–256.
  5. ^ Tong, W., Wei, Y., Murga, L. F., Ondrechen, M. J., and Williams, R. J. (2009). Partial Order Optimum Likelihood (POOL): Maximum Likelihood Prediction of Active Site Residues Using 3D Structure and Sequence Properties. PLoS Computational Biology, 5(1): e1000266.