List of Folding@home cores

From Wikipedia, the free encyclopedia

The distributed-computing project Folding@home uses scientific computer programs, referred to as "cores" or "fahcores", to perform calculations.[1][2] Folding@home's cores are based on modified and optimized versions of molecular simulation programs, including TINKER, GROMACS, AMBER, CPMD, SHARPEN, ProtoMol and Desmond.[1][3][4] Each variant is given an arbitrary identifier (Core xx). While the same core can be used by various versions of the client, separating the core from the client enables the scientific methods to be updated automatically as needed, without a client update.[1]

Active cores

The cores listed below are currently in use by the project.[1]

GROMACS

GROMACS, short for GROningen MAchine for Chemical Simulations, is a GPL-licensed open-source molecular dynamics simulation package originally developed at the University of Groningen and currently maintained by several universities and institutions worldwide.[5][6] Gromacs is highly optimized,[7] with built-in consistency checking and continual ETA estimates, and is primarily designed for biochemical molecules with complicated bonding interactions such as proteins, lipids and nucleic acids.[8] Gromacs calculations use single precision, and the package is used extensively throughout the project.[9] Folding@home has been granted a non-commercial, non-GPL license for Gromacs, and is thus not required to release its source code.[7][10] All variants use SIMD optimizations, including SSE on Intel processors, 3DNow+ on AMD chips and AltiVec on PowerPC processors.[9] This allows for a very significant speed increase over TINKER-based cores.[7]

  • Gromacs (Core 78)
    • This is the original Gromacs core,[7] and is currently available for uniprocessor clients only, supporting Windows, Linux, and macOS.[9]
  • Gromacs 33 (Core a0)
    • Available to Windows, Linux, and macOS uniprocessor clients only, this core uses the Gromacs 3.3 codebase, which allows a broader range of simulations to be run.[7][11]
  • Gromacs SREM (Core 80)
    • This core uses the Serial Replica Exchange Method, also known as REMD (Replica Exchange Molecular Dynamics) or GroST (Gromacs Serial replica exchange with Temperatures), in its simulations, and is available for Windows and Linux uniprocessor clients only.[7][12][13]
  • GroSimT (Core 81)
    • This core performs simulated tempering, the basic idea of which is to enhance sampling by periodically raising and lowering the temperature. This may allow Folding@home to sample the transitions between folded and unfolded conformations of proteins more efficiently.[7] Available for Windows and Linux uniprocessor clients only.[14]
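
The simulated-tempering idea described above can be sketched in a few lines. The following is a minimal illustration, not Folding@home or Gromacs code: the double-well "energy function", the temperature ladder, and the (here untuned) weights are all invented for the sketch. The walker makes ordinary Metropolis moves at its current temperature and occasionally attempts a Metropolis hop to a neighbouring temperature rung.

```python
import math
import random

random.seed(1)

def energy(x):
    # Toy double-well potential standing in for a protein's energy landscape.
    return (x * x - 1.0) ** 2

# Ladder of temperatures (units where k_B = 1). In a real simulated-tempering
# run the weights would be tuned so every rung is visited roughly uniformly.
temps = [0.2, 0.4, 0.8, 1.6]
weights = [0.0, 0.0, 0.0, 0.0]  # placeholder free-energy weights

x, t = 0.9, 0  # current coordinate and temperature index
for step in range(10000):
    # Ordinary Metropolis move at the current temperature.
    x_new = x + random.uniform(-0.3, 0.3)
    if random.random() < math.exp(min(0.0, -(energy(x_new) - energy(x)) / temps[t])):
        x = x_new
    # Periodically attempt to hop to a neighbouring temperature rung.
    if step % 100 == 0:
        t_new = max(0, min(len(temps) - 1, t + random.choice([-1, 1])))
        delta = (1.0 / temps[t_new] - 1.0 / temps[t]) * energy(x) \
                - (weights[t_new] - weights[t])
        if random.random() < math.exp(min(0.0, -delta)):
            t = t_new
```

At high rungs the walker crosses the barrier between the two wells easily; the low rungs then sample each well accurately, which is the sampling enhancement the core exploits.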

Double Precision

Variants of Gromacs with double precision instead of single precision.[15]

  • DGromacs (Core 79)
    • Available for uniprocessor clients, this core uses SSE2 processor optimization where supported and is capable of running on Windows, Linux, and macOS.[7][15]
  • DGromacsB (Core 7b)
    • Distinct from Core 79 in that it has several scientific additions.[7] It was initially released only for Linux in August 2007, with releases for the other platforms to follow.[16]
  • DGromacsC (Core 7c)
    • Very similar to Core 79, this core was initially released in April 2008 for Windows, Linux, and macOS uniprocessor clients.[17]
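
The practical difference between the single- and double-precision cores is accumulated rounding error over very many arithmetic steps. The sketch below is illustrative only and uses nothing from Gromacs: it repeatedly adds a small increment, once in Python's native binary64 arithmetic and once while rounding the running sum to binary32 after every addition, which is effectively what single-precision accumulation does.

```python
import struct

def to_f32(x):
    # Round a Python float (IEEE 754 binary64) to the nearest binary32 value.
    return struct.unpack('f', struct.pack('f', x))[0]

n = 1_000_000
dt = 0.1          # small timestep-like increment
dt32 = to_f32(dt)

# Double precision: Python floats are already IEEE 754 binary64.
total64 = 0.0
for _ in range(n):
    total64 += dt

# Single precision: round the running sum back to binary32 after every add.
total32 = 0.0
for _ in range(n):
    total32 = to_f32(total32 + dt32)

# total64 stays within a tiny fraction of the exact value 100000, while
# total32 drifts from it by a margin several orders of magnitude larger.
```

This drift is usually tolerable for the project's single-precision Gromacs cores, but some analyses need the binary64 behaviour, which is what the DGromacs variants provide.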

GB

This form of Gromacs uses a Generalized Born implicit solvent model. These support SSE instruction optimizations.[1]

  • GB Gromacs (Core 7a)
    • Available for uniprocessor clients only, on Windows, Linux, and macOS.[1][7][18]
  • GB Gromacs (Core a4)
    • Available for Windows, Linux,[19] and macOS,[20] this core was originally released in early October 2010,[21] and as of February 2011 uses the latest version of Gromacs, v4.5.3.[19]

SMP

These cores use symmetric multiprocessing on multiprocessor/multicore systems for faster calculations.[22][23]

  • SMP2 (Core a3)
    • The next generation of the SMP cores, this core uses threads rather than MPI-based inter-process communication, and is available for Windows, Linux, and macOS.[24][25]
  • SMP2 bigadv (Core a5)
    • Similar to a3, but this core is specifically designed to run larger-than-normal simulations.[26][27]
  • SMP2 bigadv (Core a6)
    • A newer version of the a5 core.
  • Core a7
    • A newer SMP core with support for AVX instructions.[28]
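
The structural shift from the MPI-based SMP cores to the thread-based SMP2 cores can be illustrated with a toy example; everything here (the 1/r pair potential, particle positions, and partitioning scheme) is invented for the sketch and bears no relation to Gromacs internals. A force/energy pass is split across threads of one process instead of across separate MPI processes:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-in for an MD energy pass: sum a hypothetical 1/r pair
# potential over all particle pairs, splitting the outer loop across
# worker threads of a single process.
positions = [0.5 * i + 1.0 for i in range(200)]

def chunk_energy(lo, hi):
    # Each worker thread handles a contiguous slice of the outer loop.
    e = 0.0
    for i in range(lo, hi):
        for j in range(i + 1, len(positions)):
            e += 1.0 / abs(positions[i] - positions[j])
    return e

n_threads = 4
n = len(positions)
bounds = [(k * n // n_threads, (k + 1) * n // n_threads)
          for k in range(n_threads)]
with ThreadPoolExecutor(max_workers=n_threads) as pool:
    total = sum(pool.map(lambda b: chunk_energy(*b), bounds))
```

Because the workers share one address space, no explicit message passing is needed, which is one practical attraction of the threads-based design over the earlier MPI cores.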

GPU

Cores for the Graphics Processing Unit use the graphics chip of modern video cards to do molecular dynamics. The GPU Gromacs core is not a true port of Gromacs; rather, key elements of Gromacs were taken and enhanced for GPU capabilities.[29]

GPU2

These are the second-generation GPU cores. Unlike the retired GPU1 cores, these variants are for CAL-enabled ATI 2xxx/3xxx or later series and CUDA-enabled NVIDIA 8xxx or later series GPUs.[30]

  • GPU2 (Core 11)
    • Available for x86 Windows clients only.[30] Support ended around September 1, 2011, because AMD/ATI dropped support for the Brook programming language the core used and moved to OpenCL. This forced Folding@home to rewrite its ATI GPU core code in OpenCL, the result of which is Core 16.[31]
  • GPU2 (Core 12)
    • Available for x86 Windows clients only.[30]
  • GPU2 (Core 13)
    • Available for x86 Windows clients only.[30]
  • GPU2 (Core 14)
    • Available for x86 Windows clients only,[30] this core was officially released on March 2, 2009.[32]

GPU3

These are the third-generation GPU cores, based on OpenMM, the Pande Group's own open library for molecular simulation. Although built on the GPU2 code, this generation adds stability and new capabilities.[33]

  • GPU3 (Core 15)
    • Available to x86 Windows only.[34]
  • GPU3 (Core 16)
    • Available to x86 Windows only.[34] Released alongside the new v7 client, this is a rewrite of Core 11 in OpenCL.[31]
  • GPU3 (Core 17)[35]
    • Available to Windows and Linux for AMD and NVIDIA GPUs using OpenCL. Performance is much improved thanks to OpenMM 5.1.
  • GPU3 (Core 18)
    • Available to Windows for AMD and NVIDIA GPUs using OpenCL. This core was developed to address some critical scientific issues in Core 17[36] and uses the latest technology from OpenMM[37] 6.0.1. There are currently issues regarding the stability and performance of this core on some AMD and NVIDIA Maxwell GPUs, so assignment of work units running on this core has been temporarily stopped for some GPUs.[38]
  • GPU3 (Core 21)
    • Available to Windows and Linux for AMD and NVIDIA GPUs using OpenCL or CUDA. It uses OpenMM 6.2 and fixes the Core 18 AMD/NVIDIA performance issues.

AMBER

Short for Assisted Model Building with Energy Refinement, AMBER is a family of force fields for molecular dynamics, as well as the name of the software package that simulates these force fields.[39] AMBER was originally developed by Peter Kollman at the University of California, San Francisco, and is currently maintained by professors at various universities.[40] The double-precision AMBER core is not currently optimized with SSE or SSE2,[41][42] but AMBER is significantly faster than the TINKER cores and adds some functionality which cannot be performed using Gromacs cores.[42]

  • PMD (Core 82)
    • Available for Windows and Linux uniprocessor clients only.[41]

ProtoMol

ProtoMol is an object-oriented, component-based framework for molecular dynamics (MD) simulations. ProtoMol offers high flexibility, easy extensibility and maintenance, and high performance, including parallelization.[43] In 2009, the Pande Group was working on a complementary new technique called Normal Mode Langevin Dynamics, which had the potential to greatly speed up simulations while maintaining the same accuracy.[33][44]

  • ProtoMol Core (Core b4)
    • Available to Linux x86/64 and x86 Windows.[45]

Inactive cores

These cores are not currently used by the project; they have either been retired as obsolete or are not yet ready for general release.[1]

TINKER

TINKER is a software package for molecular dynamics simulation, providing a complete and general set of tools for molecular mechanics and molecular dynamics, with some special features for biopolymers.[46]

  • Tinker core (Core 65)
    • An unoptimized uniprocessor core, this was officially retired as the AMBER and Gromacs cores perform the same tasks much faster. This core was available for Windows, Linux, and macOS.[47]

GROMACS

  • GroGPU (Core 10)
    • Available for ATI series 1xxx GPUs running under Windows.[48][49] Although mostly Gromacs based, parts of the core were rewritten.[48] This core was retired as of June 6, 2008 due to a move to the second generation of the GPU clients.[48]
  • Gro-SMP (Core a1)
    • The original SMP core, which used MPI for communication between processes.[50]
  • GroCVS (Core a2)
    • Available only to x86 Macs and x86/64 Linux, this core is very similar to Core a1, as it uses much of the same codebase, including the use of MPI. However, this core utilizes more recent Gromacs code and supports more features, such as extra-large work units.[51][52] It was officially retired due to the move to a threads-based SMP2 client.
  • Gro-PS3
    • Also known as the SCEARD core, this variant was for the PlayStation 3 game system,[53][54] which supported a Folding@home client until it was retired in November 2012. This core performed implicit solvation calculations like the GPU cores, but was also capable of running explicit solvent calculations like the CPU cores, taking the middle ground between the inflexible high-speed GPU cores and the flexible low-speed CPU cores.[55] This core used the Cell processor's SPEs for optimization, but did not support SIMD.

CPMD

Short for Car–Parrinello Molecular Dynamics, this core performs ab initio quantum mechanical molecular dynamics. Unlike classical molecular dynamics calculations, which use a force-field approach, CPMD includes the motion of electrons in the calculations of energy, forces and motion.[56][57] Quantum chemical calculations can yield a very reliable potential energy surface and naturally incorporate multi-body interactions.[57]

  • QMD (Core 96)
    • This is a double-precision[57] variant for Windows and Linux uniprocessor clients.[58] This core is currently "on hold" due to the main QMD developer, Young Min Rhee, graduating in 2006.[57] This core can use a substantial amount of memory, and was only available to machines that chose to "opt in".[57] SSE2 optimization on Intel CPUs is supported.[57] Due to licensing issues involving Intel libraries and SSE2, QMD Work Units were not assigned to AMD CPUs.[57][59]

SHARPEN

  • SHARPEN Core[60][61]
    • In early 2010, Vijay Pande said, "We've put SHARPEN on hold for now. No ETA to give, sorry. Pushing it further depends a lot on the scientific needs at the time."[62] This core uses a different format from standard F@H cores, in that each work packet sent to clients contains more than one "Work Unit" (using the normal definition).

Desmond

The software for this core was developed at D. E. Shaw Research. Desmond performs high-speed molecular dynamics simulations of biological systems on conventional computer clusters.[63][64][65][66] The code uses novel parallel algorithms[67] and numerical techniques[68] to achieve high performance on platforms containing a large number of processors,[69] but may also be executed on a single computer. Desmond and its source code are available without cost for non-commercial use by universities and other not-for-profit research institutions.

  • Desmond Core
    • Possibly available for Windows x86 and Linux x86/64,[70] this core is currently in development.[33]

References

  1. ^ a b c d e f g "Folding@home Cores". Retrieved 2007-11-06. 
  2. ^ Zagen30 (2011). "Re: Lucid Virtu and Foldig At Home". Retrieved 2011-08-30. 
  3. ^ Vijay Pande (2005-10-16). "Folding@home with QMD core FAQ" (FAQ). Stanford University. Retrieved 2006-12-03.  The site indicates that Folding@home uses a modification of CPMD allowing it to run on the supercluster environment.
  4. ^ Vijay Pande (2009-06-17). "Folding@home: How does FAH code development and sysadmin get done?". Retrieved 2009-06-25. 
  5. ^ Van Der Spoel D, Lindahl E, Hess B, Groenhof G, Mark AE, Berendsen HJ (2005). "GROMACS: fast, flexible, and free". J Comput Chem. 26 (16): 1701–18. doi:10.1002/jcc.20291. PMID 16211538. 
  6. ^ Hess B, Kutzner C, Van Der Spoel D, Lindahl E (2008). "GROMACS 4: Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation". J Chem Theory Comput. 4 (2): 435. doi:10.1021/ct700301q. 
  7. ^ a b c d e f g h i j k "Gromacs FAQ". 2007. Archived from the original (FAQ) on 2012-09-21. Retrieved 2011-09-03. 
  8. ^ "About Gromacs". Retrieved 2011-08-24. 
  9. ^ a b c "Gromacs Core". 2011. Retrieved 2011-08-21. 
  10. ^ "Folding@home Open Source FAQ". 2010. Archived from the original on 2012-09-21. Retrieved 2011-08-21. 
  11. ^ "Gromacs 33 Core". 2011. Retrieved 2011-08-21. 
  12. ^ "Gromacs SREM Core". 2011. Retrieved 2011-08-24. 
  13. ^ "Replica-exchange molecular dynamics method for protein folding". Chemical Physics Letters. 314: 141–151. 1999. Bibcode:1999CPL...314..141S. doi:10.1016/S0009-2614(99)01123-9. Retrieved 2011-08-24. 
  14. ^ "Gromacs Simulated Tempering core". 2011. Retrieved 2011-08-24. 
  15. ^ a b "Double Gromacs Core". 2011. Retrieved 2011-08-22. 
  16. ^ "Double Gromacs B Core". 2011. Retrieved 2011-08-22. 
  17. ^ "Double Gromacs C Core". 2011. Retrieved 2011-08-22. 
  18. ^ "GB Gromacs". 2011. Retrieved 2011-08-22. 
  19. ^ a b http://foldingforum.org/viewtopic.php?f=24&t=17528
  20. ^ http://foldingforum.org/viewtopic.php?f=24&t=18887#p189345
  21. ^ "Project 10412 now on advanced". 2010. Retrieved 2011-09-03. 
  22. ^ a b "SMP FAQ". 2011. Archived from the original (FAQ) on 2012-09-21. Retrieved 2011-08-22. 
  23. ^ "Gromacs SMP Core". 2011. Retrieved 2011-08-22. 
  24. ^ "Gromacs CVS SMP2 Core". 2011. Retrieved 2011-08-22. 
  25. ^ kasson (2011-10-11). "Re: Project:6099 run:3 clone:4 gen:0 - Core needs updating". Retrieved 2011-10-11. 
  26. ^ "Gromacs CVS SMP2 bigadv Core". 2011. Retrieved 2011-08-22. 
  27. ^ "Introduction of a new SMP core, changes to bigadv". 2011. Retrieved 2011-08-24. 
  28. ^ "CPU FAH core with AVX support? Mentioned a while back?". 2016-11-07. Retrieved 2017-02-18. 
  29. ^ Vijay Pande (2011). "ATI FAQ: Are these WUs compatible with other fahcores?". Archived from the original (FAQ) on 2012-09-21. Retrieved 2011-08-23. 
  30. ^ a b c d e "GPU2 Core". 2011. Retrieved 2011-08-23. 
  31. ^ a b "FAH Support for ATI GPUs". 2011. Retrieved 2011-08-31. 
  32. ^ ihaque (Pande Group member) (2009). "Folding Forum: Announcing project 5900 and Core_14 on advmethods". Retrieved 2011-08-23. 
  33. ^ a b c Vijay Pande (2009). "Update on new FAH cores and clients". Retrieved 2011-08-23. 
  34. ^ a b "GPU3 Core". 2011. Retrieved 2011-08-23. 
  35. ^ "GPU Core 17". 2014. Retrieved 2014-07-12. 
  36. ^ "Core 18 and Maxwell". Retrieved 19 February 2015. 
  37. ^ "Core18 Projects 10470-10473 to FAH". Retrieved 19 February 2015. 
  38. ^ "New Core18 (login required)". Retrieved 19 February 2015. 
  39. ^ "Amber". 2011. Retrieved 2011-08-23. 
  40. ^ "Amber Developers". 2011. Retrieved 2011-08-23. 
  41. ^ a b "AMBER Core". 2011. Retrieved 2011-08-23. 
  42. ^ a b "Folding@Home with AMBER FAQ" (FAQ). 2004. Retrieved 2011-08-23. 
  43. ^ "ProtoMol". Retrieved 2011-08-24. 
  44. ^ "Folding@home - About" (FAQ). 
  45. ^ "ProtoMol core". 2011. Retrieved 2011-08-24. 
  46. ^ "TINKER Home Page". Retrieved 2012-08-24. 
  47. ^ "Tinker Core". 2011. Retrieved 2012-08-24. 
  48. ^ a b c "Folding@home on ATI's GPUs: a major step forward". 2011. Archived from the original on 2012-09-21. Retrieved 2011-08-28. 
  49. ^ "GPU core". 2011. Retrieved 2011-08-28. 
  50. ^ "Gromacs SMP core". 2011. Retrieved 2011-08-28. 
  51. ^ "Gromacs CVS SMP core". 2011. Retrieved 2011-08-28. 
  52. ^ "New release: extra-large work units". 2011. Retrieved 2011-08-28. 
  53. ^ "PS3 Screenshot". 2007. Retrieved 2011-08-24. 
  54. ^ "PS3 Client". 2008. Retrieved 2011-08-28. 
  55. ^ "PS3 FAQ". 2009. Retrieved 2011-08-28. 
  56. ^ R. Car & M. Parrinello (1985). "Unified Approach for Molecular Dynamics and Density-Functional Theory". Phys. Rev. Lett. 55 (22): 2471–2474. Bibcode:1985PhRvL..55.2471C. doi:10.1103/PhysRevLett.55.2471. PMID 10032153. 
  57. ^ a b c d e f g "QMD FAQ" (FAQ). 2007. Retrieved 2011-08-28. 
  58. ^ "QMD Core". 2011. Retrieved 2011-08-24. 
  59. ^ "FAH & QMD & AMD64 & SSE2" (FAQ). 
  60. ^ "SHARPEN". Archived from the original on December 2, 2008. 
  61. ^ "SHARPEN: Systematic Hierarchical Algorithms for Rotamers and Proteins on an Extended Network (deadlink)". Archived from the original (About) on December 1, 2008. 
  62. ^ "Re: SHARPEN". 2010. Retrieved 2011-08-29. 
  63. ^ Kevin J. Bowers; Edmond Chow; Huafeng Xu; Ron O. Dror; Michael P. Eastwood; Brent A. Gregersen; John L. Klepeis; István Kolossváry; Mark A. Moraes; Federico D. Sacerdoti; John K. Salmon; Yibing Shan & David E. Shaw (2006). "Scalable Algorithms for Molecular Dynamics Simulations on Commodity Clusters" (PDF). Proceedings of the ACM/IEEE Conference on Supercomputing (SC06), Tampa, Florida, November 11–17, 2006. ACM. ISBN 0-7695-2700-0. 
  64. ^ Morten Ø. Jensen; David W. Borhani; Kresten Lindorff-Larsen; Paul Maragakis; Vishwanath Jogini; Michael P. Eastwood; Ron O. Dror & David E. Shaw (2010). "Principles of Conduction and Hydrophobic Gating in K+ Channels". Proceedings of the National Academy of Sciences of the United States of America. PNAS. 107 (13): 5833–5838. Bibcode:2010PNAS..107.5833J. doi:10.1073/pnas.0911691107. PMC 2851896Freely accessible. PMID 20231479. 
  65. ^ Ron O. Dror; Daniel H. Arlow; David W. Borhani; Morten Ø. Jensen; Stefano Piana & David E. Shaw (2009). "Identification of Two Distinct Inactive Conformations of the ß2-Adrenergic Receptor Reconciles Structural and Biochemical Observations". Proceedings of the National Academy of Sciences of the United States of America. PNAS. 106 (12): 4689–4694. Bibcode:2009PNAS..106.4689D. doi:10.1073/pnas.0811065106. PMC 2650503Freely accessible. PMID 19258456. 
  66. ^ Yibing Shan; Markus A. Seeliger; Michael P. Eastwood; Filipp Frank; Huafeng Xu; Morten Ø. Jensen; Ron O. Dror; John Kuriyan & David E. Shaw (2009). "A Conserved Protonation-Dependent Switch Controls Drug Binding in the Abl Kinase". Proceedings of the National Academy of Sciences of the United States of America. PNAS. 106 (1): 139–144. Bibcode:2009PNAS..106..139S. doi:10.1073/pnas.0811223106. PMC 2610013Freely accessible. PMID 19109437. 
  67. ^ Kevin J. Bowers; Ron O. Dror & David E. Shaw (2006). "The Midpoint Method for Parallelization of Particle Simulations". Journal of Chemical Physics. J. Chem. Phys. 124 (18): 184109:1–11. Bibcode:2006JChPh.124r4109B. doi:10.1063/1.2191489. PMID 16709099. 
  68. ^ Ross A. Lippert; Kevin J. Bowers; Ron O. Dror; Michael P. Eastwood; Brent A. Gregersen; John L. Klepeis; István Kolossváry & David E. Shaw (2007). "A Common, Avoidable Source of Error in Molecular Dynamics Integrators". Journal of Chemical Physics. J. Chem. Phys. 126 (4): 046101:1–2. Bibcode:2007JChPh.126d6101L. doi:10.1063/1.2431176. PMID 17286520. 
  69. ^ Edmond Chow; Charles A. Rendleman; Kevin J. Bowers; Ron O. Dror; Douglas H. Hughes; Justin Gullingsrud; Federico D. Sacerdoti & David E. Shaw (2008). "Desmond Performance on a Cluster of Multicore Processors". D. E. Shaw Research Technical Report DESRES/TR--2008-01, July 2008. 
  70. ^ "Desmond core". Retrieved 2011-08-24. 
