Big memory

From Wikipedia, the free encyclopedia

Big memory is a term for server workloads that need to run on machines with a large amount of RAM (random-access memory). Typical examples are databases, in-memory caches, and graph analytics,[1] and, more broadly, data science and big data workloads.

Some database systems are designed to run mostly or entirely in memory, rarely if ever retrieving data from disk or flash storage; see the list of in-memory databases.

The performance of big-memory systems depends on how CPUs or CPU cores access memory: through a conventional memory controller or through non-uniform memory access (NUMA). Performance also depends on the size and design of the CPU cache.
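For illustration, the following minimal C sketch (assuming Linux with the libnuma library installed and the program linked with -lnuma; the buffer size and node number are arbitrary) places an allocation on a specific NUMA node, so that threads running on that node's CPUs see local-memory latency rather than slower remote accesses:

#include <stdio.h>
#include <string.h>
#include <numa.h>          /* libnuma; link with -lnuma */

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }
    size_t size = 1UL << 30;   /* 1 GiB buffer, arbitrary for the example */
    int node = 0;              /* target NUMA node, arbitrary for the example */

    /* Allocate memory physically placed on the given node, so threads
       pinned to that node's CPUs avoid cross-node memory traffic. */
    void *buf = numa_alloc_onnode(size, node);
    if (buf == NULL) {
        perror("numa_alloc_onnode");
        return 1;
    }
    memset(buf, 0, size);      /* touch the pages so they are committed */
    numa_free(buf, size);
    return 0;
}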

Performance also depends on operating system design. The huge pages feature in Linux can improve the efficiency of virtual memory.[2] The transparent huge pages feature in Linux can offer better performance for some big-memory workloads.[3] Large-page support in Microsoft Windows enables server applications to establish large-page memory regions, with pages typically three orders of magnitude larger than the native page size.[4]
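A minimal Linux-only C sketch of how an application might request huge pages for a large buffer (the buffer size and fallback policy are illustrative assumptions, not a prescribed recipe): it first asks for explicit huge pages with MAP_HUGETLB, and on failure falls back to ordinary pages while advising the kernel to use transparent huge pages for the region.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define BUF_SIZE (64UL << 20)   /* 64 MiB, a multiple of the 2 MiB huge page size */

int main(void) {
    /* MAP_HUGETLB requests explicit huge pages; this typically fails
       unless huge pages were reserved beforehand
       (e.g. via /proc/sys/vm/nr_hugepages). */
    void *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (buf == MAP_FAILED) {
        /* Fall back to ordinary pages and advise the kernel to back
           the region with transparent huge pages where it can. */
        buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        madvise(buf, BUF_SIZE, MADV_HUGEPAGE);
    }
    memset(buf, 0, BUF_SIZE);   /* touch the memory to fault the pages in */
    munmap(buf, BUF_SIZE);
    return 0;
}

Fewer, larger pages mean fewer TLB entries and page-table levels to traverse for the same amount of memory, which is why both approaches can help workloads whose working sets span many gigabytes.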

References

  1. ^ "Efficient Virtual Memory for Big Memory Servers" (PDF). Retrieved 2016-09-24.
  2. ^ "Huge pages part 1 (Introduction)". Retrieved 2016-09-24.
  3. ^ "Transparent huge pages in 2.6.38". Retrieved 2016-09-24.
  4. ^ "Large-Page Support". Retrieved 2016-09-24.