
User:Xemizt/mtd


High-dimensional approach

Typical (single-replica) MTD simulations can include up to 3 CVs; even with the multi-replica approach, it is hard to exceed 8 CVs in practice. This limitation comes from the bias potential, which is constructed by adding Gaussian functions (kernels) and is a special case of the kernel density estimator (KDE). The number of kernels required to maintain a constant KDE accuracy increases exponentially with the number of dimensions, so the MTD simulation length has to increase exponentially with the number of CVs to keep the bias potential equally accurate. In addition, for fast evaluation the bias potential is typically approximated on a regular grid,[1] and the memory required to store the grid also increases exponentially with the number of dimensions (CVs).
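The exponential scaling can be seen directly in a grid representation of the bias. The following minimal sketch (illustrative only, not taken from the cited sources; all parameter values are placeholders) accumulates Gaussian kernels on a regular grid; the grid holds bins^n_cv values, so both the memory and the number of kernels needed grow exponentially with the number of CVs.

```python
# Illustrative sketch: a metadynamics bias potential accumulated as a sum of
# Gaussian kernels on a regular grid. The grid size bins ** n_cv makes the
# exponential memory cost explicit. Parameter values are placeholders.
import numpy as np

n_cv = 2                      # number of collective variables (CVs)
bins = 100                    # grid points per CV dimension
sigma = 0.05                  # Gaussian kernel width
height = 0.1                  # Gaussian kernel height

# Regular grid over the unit hypercube [0, 1]^n_cv.
axes = [np.linspace(0.0, 1.0, bins) for _ in range(n_cv)]
grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)  # (..., n_cv)
bias = np.zeros(grid.shape[:-1])                             # bins**n_cv values

def deposit(center):
    """Add one Gaussian kernel centered at the current CV values."""
    d2 = np.sum((grid - center) ** 2, axis=-1)
    bias[...] += height * np.exp(-d2 / (2.0 * sigma ** 2))

# Deposit kernels along a (here: random placeholder) CV trajectory.
rng = np.random.default_rng(0)
for step in range(1000):
    deposit(rng.random(n_cv))

print(f"grid points: {bins ** n_cv}")  # grows exponentially with n_cv
```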

A high-dimensional generalization of metadynamics is NN2B.[2] It is based on two machine learning algorithms: the nearest-neighbor density estimator (NNDE) and the artificial neural network (ANN). NNDE replaces KDE in estimating the updates of the bias potential from short biased simulations, while the ANN is used to approximate the resulting bias potential. An ANN is a memory-efficient representation of high-dimensional functions, whose derivatives (the biasing forces) are efficiently computed with the backpropagation algorithm.[3]
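As an illustration of this representation, the sketch below (with assumed network sizes, not those of the published NN2B implementation) stores a bias potential in a one-hidden-layer network and obtains the biasing forces −∂V/∂s by backpropagation:

```python
# Illustrative sketch: an ANN as a memory-efficient bias potential whose
# derivatives (biasing forces) come from backpropagation. The architecture
# and sizes are assumptions, not the cited NN2B implementation.
import numpy as np

rng = np.random.default_rng(1)
n_cv, hidden = 8, 64                      # 8 CVs, one hidden layer
W1 = rng.normal(0, 0.3, (hidden, n_cv))   # input -> hidden weights
b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.3, hidden)           # hidden -> scalar output weights

def bias_and_force(s):
    """Evaluate V(s) and the biasing force -dV/ds by backpropagation."""
    h = np.tanh(W1 @ s + b1)              # forward pass
    V = W2 @ h
    # Backward pass: dV/ds = W1^T (W2 * tanh'(pre-activation)).
    dV_ds = W1.T @ (W2 * (1.0 - h ** 2))
    return V, -dV_ds

s = rng.random(n_cv)                      # current CV values
V, force = bias_and_force(s)
print(V, force.shape)                     # scalar bias, 8-component force
```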

An alternative method that exploits an ANN for the adaptive bias potential approximation uses mean potential forces for the estimation.[4] These methods are also a high-dimensional generalization of the adaptive biasing force (ABF) method.[5] Additionally, the training of the ANN can be improved using Bayesian regularization,[6] and the error of the approximation can be inferred by training an ensemble of ANNs.[4]
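The ensemble-based error estimate can be sketched as follows; random-feature regressors stand in for ANNs to keep the example short, and the data and hyperparameters are placeholders. The spread of the ensemble predictions serves as the inferred approximation error:

```python
# Illustrative sketch (not the cited reinforced-dynamics code): infer the
# approximation error from an ensemble of independently initialized
# function approximators fitted to the same mean-force data.
import numpy as np

rng = np.random.default_rng(2)
s_train = rng.uniform(-1, 1, (200, 1))                   # sampled CV values
f_train = np.sin(3 * s_train) + 0.05 * rng.normal(size=(200, 1))  # noisy forces

def fit_member(seed):
    """One ensemble member: random-feature ridge regression (ANN stand-in)."""
    r = np.random.default_rng(seed)
    W = r.normal(0, 3, (1, 50))                          # features differ per member
    b = r.uniform(0, 2 * np.pi, 50)
    phi = np.cos(s_train @ W + b)                        # (200, 50) feature matrix
    coef = np.linalg.solve(phi.T @ phi + 1e-3 * np.eye(50), phi.T @ f_train)
    return lambda s: np.cos(s @ W + b) @ coef

ensemble = [fit_member(k) for k in range(8)]

s_test = np.linspace(-1, 1, 5).reshape(-1, 1)
preds = np.stack([m(s_test) for m in ensemble])          # (8, 5, 1)
print("mean prediction:", preds.mean(axis=0).ravel())
print("inferred error :", preds.std(axis=0).ravel())     # ensemble disagreement
```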

Algorithms

Free energy estimator

The finite size of the kernels makes the bias potential fluctuate around a mean value. A converged free energy can be obtained by averaging the bias potential. The averaging is started from $t_{\mathrm{diff}}$, when the motion along the collective variable becomes diffusive:

$$\bar{F}(s) = -\frac{1}{t_{\mathrm{sim}} - t_{\mathrm{diff}}} \int_{t_{\mathrm{diff}}}^{t_{\mathrm{sim}}} V(s, t)\,\mathrm{d}t$$
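In practice the average is taken over discrete snapshots of the bias potential. A minimal sketch, assuming the snapshots V(s, t) are already available from a run (the arrays below are placeholders):

```python
# Illustrative sketch of the estimator above: the free energy is the negative
# time average of bias-potential snapshots taken after t_diff. The snapshot
# array stands in for output of an actual MTD run.
import numpy as np

times = np.linspace(0.0, 10.0, 101)          # snapshot times
grid = np.linspace(0.0, 1.0, 200)            # CV grid
# Placeholder bias snapshots V(s, t), shape (n_times, n_gridpoints).
V = np.cumsum(np.random.default_rng(3).random((101, 200)), axis=0)

t_diff = 4.0                                 # start of the diffusive regime
mask = times >= t_diff
# Mean over uniformly spaced snapshots approximates the time integral.
F = -V[mask].mean(axis=0)                    # averaged estimate of F(s)
F -= F.min()                                 # free energy is defined up to a constant
```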

Adaptive kernel algorithms

Well-tempered metadynamics

Well-tempered metadynamics (WT-MTD) is a modification of the original metadynamics algorithm in which the height of the Gaussian kernels is scaled down during the simulation according to the bias already deposited at the current point:

$$W(s, t) = W_0 \exp\!\left(-\frac{V(s, t)}{k_{\mathrm{B}}\,\Delta T}\right) \qquad (1)$$

where $W_0$ is the initial Gaussian height, $\Delta T$ is a tempering parameter, and $V(s, t)$ is the bias potential accumulated at the collective variable value $s$ up to time $t$.
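A minimal sketch of this height rescaling (with illustrative parameter values) deposits each new Gaussian with a height damped by the bias already present at the current CV value:

```python
# Illustrative sketch of the well-tempered height rescaling in Eq. (1).
# Parameter values and the CV trajectory are placeholders.
import numpy as np

kB = 0.0083145            # Boltzmann constant, kJ/(mol K)
delta_T = 1800.0          # tempering parameter Delta T (K)
w0 = 1.2                  # initial Gaussian height (kJ/mol)
sigma = 0.05              # Gaussian width

grid = np.linspace(0.0, 1.0, 400)
bias = np.zeros_like(grid)

def deposit_wt(s):
    """Deposit one well-tempered Gaussian at CV value s."""
    V_here = np.interp(s, grid, bias)             # current bias at s
    w = w0 * np.exp(-V_here / (kB * delta_T))     # tempered height, Eq. (1)
    bias[:] += w * np.exp(-(grid - s) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(4)
for _ in range(2000):
    deposit_wt(rng.random())                      # placeholder CV trajectory
```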

Well-tempered ensemble metadynamics

Well-tempered ensemble metadynamics (WTE-MTD)

Transition-tempered metadynamics

Transition-tempered metadynamics (TT-MTD)

Adaptive Gaussian metadynamics

Adaptive Gaussian metadynamics (AG-MTD)

Multiple-replica algorithms

Multiple-walker metadynamics

Multiple-walker metadynamics (MW-MTD)

Parallel tempering metadynamics

Parallel tempering metadynamics (PT-MTD)

Bias-exchange metadynamics

Bias-exchange metadynamics (BE-MTD)

Collective-variable tempering metadynamics

Collective-variable tempering metadynamics (CVT-MTD)

Parallel bias metadynamics

Parallel bias metadynamics (PB-MTD)

Replica state exchange metadynamics

Replica state exchange metadynamics (RSE-MTD)

Reconnaissance metadynamics

Reconnaissance metadynamics (RC-MTD)

Flux-tempered metadynamics

Flux-tempered metadynamics (FT-MTD)

Replica-averaged metadynamics

Replica-averaged metadynamics (RA-MTD)

Ensemble-biased metadynamics

Ensemble-biased metadynamics (EB-MTD)

Path integral metadynamics

Path integral metadynamics (PI-MTD)

Discrete metadynamics

Discrete metadynamics (D-MTD)

Lagrangian metadynamics

Lagrangian metadynamics (L-MTD)

  1. ^ "PLUMED: Metadynamics". plumed.github.io. Retrieved 2018-01-13.
  2. ^ Galvelis, Raimondas; Sugita, Yuji (2017-06-13). "Neural Network and Nearest Neighbor Algorithms for Enhancing Sampling of Molecular Dynamics". Journal of Chemical Theory and Computation. 13 (6): 2489–2500. doi:10.1021/acs.jctc.7b00188. ISSN 1549-9618.
  3. ^ Schneider, Elia; Dai, Luke; Topper, Robert Q.; Drechsel-Grau, Christof; Tuckerman, Mark E. (2017-10-11). "Stochastic Neural Network Approach for Learning High-Dimensional Free Energy Surfaces". Physical Review Letters. 119 (15): 150601. doi:10.1103/PhysRevLett.119.150601.
  4. ^ a b Zhang, Linfeng; Wang, Han; E, Weinan (2017-12-09). "Reinforced dynamics for enhanced sampling in large atomic and molecular systems. I. Basic Methodology". arXiv:1712.03461 [physics].
  5. ^ Comer, Jeffrey; Gumbart, James C.; Hénin, Jérôme; Lelièvre, Tony; Pohorille, Andrew; Chipot, Christophe (2015-01-22). "The Adaptive Biasing Force Method: Everything You Always Wanted To Know but Were Afraid To Ask". The Journal of Physical Chemistry B. 119 (3): 1129–1151. doi:10.1021/jp506633n. ISSN 1520-6106.
  6. ^ Sidky, Hythem; Whitmer, Jonathan K. (2017-12-07). "Learning Free Energy Landscapes Using Artificial Neural Networks". arXiv:1712.02840 [cond-mat, physics:physics].