User:Zachary kaplan/sandbox

Al-Nuwayri Article Evaluation

This stub-class article would benefit from a great deal more editing. In general, the article gives only the bare minimum about Al-Nuwayri's life, and what little is provided cites no sources. Although sources appear in the citations list, they are not tied to the article's specific claims, so a reader cannot follow them for confirmation. Though the information in the article cannot become dated (it is a historical biography) and appears to be correct, that correctness is not easy to verify. Additionally, the talk page is empty, suggesting that very few Wikipedians are interested in improving this article.

Article Editing Selection

  • Option One: Al-Nuwayri---the article is currently a stub, so expanding it would make sense in the context of the course (and especially given the instructor)
  • Option Two: scATAC-seq bioinformatics assays---an area where I may have more specialist knowledge than many others
  • Option Three: Stochastic Gradient Langevin Dynamics (SGLD)---no article exists on this topic yet

Stochastic Gradient Langevin Dynamics

Stochastic Gradient Langevin Dynamics (abbreviated as SGLD) is an optimization technique that combines characteristics of Stochastic Gradient Descent, a Robbins–Monro optimization algorithm, and Langevin Dynamics, a mathematical extension of molecular dynamics models. Like Stochastic Gradient Descent, SGLD is an iterative optimization algorithm; it introduces additional noise into the stochastic gradient estimator used in SGD to optimize a differentiable objective function.[1] Unlike traditional SGD, SGLD can be used for Bayesian learning, since the method produces samples from a posterior distribution over parameters given the available data. First described by Welling and Teh in 2011, the method has applications in many problems that require optimization and is most notably applied in machine learning.

Formal Definition

Given a parameter vector \theta, its prior distribution p(\theta), and a set of N data points X = \{x_i\}_{i=1}^{N}, Stochastic Gradient Langevin Dynamics samples from the posterior distribution p(\theta \mid X) \propto p(\theta) \prod_{i=1}^{N} p(x_i \mid \theta) by updating the chain:

    \Delta\theta_t = \frac{\epsilon_t}{2} \left( \nabla \log p(\theta_t) + \frac{N}{n} \sum_{i=1}^{n} \nabla \log p(x_{t_i} \mid \theta_t) \right) + \eta_t

where \eta_t \sim N(0, \epsilon_t I) is Gaussian noise, n is the size of the minibatch drawn at step t, and the step sizes \epsilon_t satisfy the following conditions:

    \sum_{t=1}^{\infty} \epsilon_t = \infty, \qquad \sum_{t=1}^{\infty} \epsilon_t^2 < \infty

For early iterations of the algorithm, each parameter update mimics Stochastic Gradient Descent; however, as the algorithm approaches a local minimum or maximum, the gradient shrinks toward zero and the chain produces samples concentrated around the maximum a posteriori mode, allowing for posterior inference.
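
A minimal sketch of this update rule in Python/NumPy is given below. The function and parameter names (sgld_sample, grad_log_prior, grad_log_lik) are illustrative rather than part of any standard library, and the polynomially decaying step-size schedule \epsilon_t = a(b + t)^{-\gamma} is one common choice that satisfies the conditions above:

    import numpy as np

    def sgld_sample(theta0, X, grad_log_prior, grad_log_lik,
                    n_iter=10000, batch_size=32, a=1e-2, b=1.0, gamma=0.55):
        """Draw posterior samples via stochastic gradient Langevin dynamics.

        X                          -- ndarray of N data points
        grad_log_prior(theta)      -- gradient of log p(theta)
        grad_log_lik(theta, batch) -- gradient of sum_i log p(x_i | theta) over the batch
        Step sizes eps_t = a * (b + t)**(-gamma) decay polynomially, so they
        sum to infinity while their squares remain summable (Robbins-Monro).
        """
        rng = np.random.default_rng()
        N = len(X)
        theta = np.asarray(theta0, dtype=float)
        samples = []
        for t in range(n_iter):
            eps = a * (b + t) ** (-gamma)              # decaying step size
            batch = X[rng.choice(N, size=batch_size)]  # random minibatch
            # Stochastic estimate of the gradient of the log-posterior:
            grad = grad_log_prior(theta) + (N / batch_size) * grad_log_lik(theta, batch)
            eta = rng.normal(0.0, np.sqrt(eps), size=theta.shape)  # injected noise
            theta = theta + 0.5 * eps * grad + eta     # SGLD update
            samples.append(theta.copy())
        return np.array(samples)

For a model with a conjugate Gaussian prior and likelihood, both gradient functions are linear in \theta, and the returned samples can be checked against the closed-form posterior.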

Application

SGLD is applicable in any optimization context where it is desirable to quickly obtain posterior samples rather than a single maximum a posteriori mode. The method maintains the computational efficiency of stochastic gradient descent relative to traditional gradient descent while providing additional information about the landscape around the critical points of the objective function. In practice, SGLD can be applied to the training of neural networks in deep learning, a task for which the method provides a distribution over model parameters. By introducing information about the variance of these parameters, SGLD provides a way to characterize the generalizability of these models at certain points in training.[2] Additionally, obtaining samples from a posterior distribution permits uncertainty quantification by means of credible intervals, a feature that is not possible using traditional stochastic gradient descent.
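
As an illustration, the posterior samples returned by a routine like the sketch above could be summarized as follows (the 20% burn-in fraction and the 95% level are illustrative choices, not part of the method):

    # Discard the early, SGD-like phase of the chain as burn-in,
    # then summarize the remaining posterior samples.
    posterior = samples[len(samples) // 5:]                 # keep the last 80%
    theta_mean = posterior.mean(axis=0)                     # posterior mean
    lo, hi = np.percentile(posterior, [2.5, 97.5], axis=0)  # 95% credible interval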

References

  1. ^ Welling, Max; Teh, Yee Whye (2011). "Bayesian Learning via Stochastic Gradient Langevin Dynamics" (PDF). Proceedings of the 28th International Conference on Machine Learning (ICML 2011).
  2. ^ Chaudhari, Pratik; Choromanska, Anna; Soatto, Stefano; LeCun, Yann; Baldassi, Carlo; Borgs, Christian; Chayes, Jennifer; Sagun, Levent; Zecchina, Riccardo (2017). "Entropy-SGD: Biasing Gradient Descent Into Wide Valleys". ICLR 2017. arXiv:1611.01838.