Memory access pattern
In computing, a memory access pattern or IO access pattern is the pattern with which a system or program reads and writes memory on secondary storage. These patterns differ in the level of locality of reference and drastically affect cache performance,[1] and also have implications for the approach to parallelism[2][3] and distribution of workload in shared memory systems.[4] Further, cache coherency issues can affect multiprocessor performance,[5] which means that certain memory access patterns place a ceiling on parallelism (which manycore approaches seek to break).[6]
Computer memory is usually described as "random access", but traversals by software will still exhibit patterns that can be exploited for efficiency. Various tools exist to help system designers[7] and programmers understand, analyse and improve the memory access pattern, including VTune and Vectorization Advisor,[8][9][10][11][12] as well as tools to address GPU memory access patterns.[13]
Memory access patterns also have implications for security,[14][15] which motivates some to try to disguise a program's activity for privacy reasons.[16][17]
Examples
Sequential
The simplest extreme is the sequential access pattern, where data is read, processed, and written out with straightforward incremented/decremented addressing. These access patterns are highly amenable to prefetching.
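A sequential pass can be sketched as a single loop over contiguous elements (a minimal Python illustration; the array contents and names are hypothetical):

```python
# Sequential access: read each element in address order, process, write out.
# Hardware prefetchers detect this unit-stride pattern easily.
data = list(range(16))          # contiguous source
out = [0] * len(data)           # contiguous destination

for i in range(len(data)):      # addresses increment by one element each step
    out[i] = data[i] * 2        # read, process, write, all sequential
```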
Strided
Strided or simple 2D, 3D access patterns (e.g., stepping through multi-dimensional arrays) are similarly easy to predict, and are found in implementations of linear algebra algorithms and image processing. Loop tiling is an effective approach.[19] Some systems with DMA provided a strided mode for transferring data between subtiles of larger 2D arrays and scratchpad memory.[20]
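Strided access and loop tiling can be sketched as follows (a hedged illustration over a flat row-major array; all names and sizes are hypothetical):

```python
# A row-major 2D array stored flat: element (r, c) lives at index r*COLS + c.
ROWS, COLS = 4, 4
a = [r * COLS + c for r in range(ROWS) for c in range(COLS)]

# Walking a column steps through memory with a fixed stride of COLS elements.
col0 = [a[r * COLS + 0] for r in range(ROWS)]   # stride-COLS access

# Loop tiling: visit the array in small TILE x TILE blocks so each block
# stays cache-resident while it is worked on.
TILE = 2
visited = []
for rt in range(0, ROWS, TILE):
    for ct in range(0, COLS, TILE):
        for r in range(rt, rt + TILE):
            for c in range(ct, ct + TILE):
                visited.append(a[r * COLS + c])
```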
Linear
A linear access pattern is closely related to "strided", where a memory address may be computed from a linear combination of some index. Stepping through indices sequentially with a linear pattern yields strided access. A linear access pattern for writes (with any access pattern for non-overlapping reads) may guarantee that an algorithm can be parallelised, which is exploited in systems supporting compute kernels.
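The linear address computation can be sketched for a hypothetical row-major 3D array (the shape and function names are illustrative only):

```python
# Linear access pattern: the address is a linear combination of loop indices.
# For a hypothetical 3D array with shape (D0, D1, D2) in row-major order:
D0, D1, D2 = 2, 3, 4

def address(i, j, k):
    # base + i*stride_i + j*stride_j + k*stride_k, in element units
    return i * (D1 * D2) + j * D2 + k

# Stepping the innermost index sequentially yields unit-stride access:
addrs = [address(1, 2, k) for k in range(D2)]
```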
Nearest neighbor
Nearest neighbor memory access patterns appear in simulation, and are related to sequential or strided patterns. An algorithm may traverse a data structure using information from the nearest neighbors of a data element (in one or more dimensions) to perform a calculation. These are common in physics simulations operating on grids.[21] Nearest neighbor can also refer to inter-node communication in a cluster; physics simulations which rely on such local access patterns can be parallelized with the data partitioned into cluster nodes, with purely nearest-neighbor communication between them, which may have advantages for latency and communication bandwidth. This use case maps well onto torus network topology.[22]
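A minimal sketch of a nearest-neighbour (stencil) pass, here a hypothetical 1D three-point average reading each cell and its immediate neighbours:

```python
# Nearest-neighbour access: each output cell reads its own cell plus its
# immediate neighbours from the previous state (Jacobi-style update).
grid = [0.0, 3.0, 0.0, 3.0, 0.0, 3.0]    # illustrative data
new = grid[:]
for i in range(1, len(grid) - 1):        # interior cells only
    new[i] = (grid[i - 1] + grid[i] + grid[i + 1]) / 3.0
```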
2D spatially coherent
In 3D rendering, access patterns for texture mapping and rasterization of small primitives (with arbitrary distortions of complex surfaces) are far from linear, but can still exhibit spatial locality (e.g., in screen space or texture space). This can be turned into good memory locality via some combination of Morton order[23] and tiling for texture maps and frame buffer data (mapping spatial regions onto cache lines), or by sorting primitives via tile based deferred rendering.[24] It can also be advantageous to store matrices in Morton order in linear algebra libraries.[25]
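Morton (Z-order) indexing works by interleaving coordinate bits, so 2D-adjacent cells tend to land near each other in the 1D address space. A minimal sketch (function name and bit width are illustrative):

```python
# Morton (Z-order) index: interleave the bits of x and y.
def morton2(x, y, bits=8):
    z = 0
    for b in range(bits):
        z |= ((x >> b) & 1) << (2 * b)        # x bits go to even positions
        z |= ((y >> b) & 1) << (2 * b + 1)    # y bits go to odd positions
    return z
```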
Scatter
A scatter memory access pattern combines sequential reads with indexed/random addressing for writes.[26] Compared to gather, it may place less load on a cache hierarchy since a processing element may dispatch writes in a "fire and forget" manner (bypassing a cache altogether), whilst using predictable prefetching (or even DMA) for its source data.
However, it may be harder to parallelise since there is no guarantee the writes do not interact,[27] and many systems are still designed assuming that a hardware cache will coalesce many small writes into larger ones.
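A scatter loop can be sketched as a sequential read paired with an indexed write (all names are illustrative):

```python
# Scatter: read the source sequentially, write through an index array.
src = [10, 20, 30, 40]
idx = [2, 0, 3, 1]            # destination slot for each source element
dst = [0] * len(src)
for i in range(len(src)):     # sequential reads of src and idx...
    dst[idx[i]] = src[i]      # ...randomly addressed write
```

Note that nothing here guarantees the writes are disjoint: if `idx` contained duplicates, iterations would interact, which is exactly why scatter is harder to parallelise.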
In the past, forward texture mapping attempted to confine the randomness to the writes, whilst sequentially reading source texture information.
The PlayStation 2 console used conventional inverse texture mapping, but handled any scatter/gather processing "on-chip" using EDRAM, whilst 3D model data (and much texture data) was fed sequentially from main memory by DMA. This is why it lacked support for indexed primitives, and sometimes needed textures to be managed "up front" in the display list.
Gather
In a gather memory access pattern, reads are randomly addressed or indexed, whilst the writes are sequential (or linear).[26] An example is found in inverse texture mapping, where data can be written out linearly across scan lines, whilst random access texture addresses are calculated per pixel.
Compared to scatter, the disadvantage is that caching (and bypassing latencies) is now essential for efficient reads of small elements; however, it is easier to parallelise since the writes are guaranteed to not overlap. As such, the gather approach is more common for GPGPU programming,[27] where the massive threading (enabled by parallelism) is used to hide read latencies.[27]
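The mirror image of scatter can be sketched as an indexed read paired with a sequential write (names are illustrative):

```python
# Gather: randomly addressed reads, sequential writes. Because each output
# slot is written exactly once, every iteration can safely run in parallel.
src = [10, 20, 30, 40]
idx = [2, 0, 3, 1]            # source slot for each output element
dst = [src[idx[i]] for i in range(len(idx))]   # indexed read, linear write
```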
Combined gather and scatter
An algorithm may gather data from one source, perform some computation in local or on chip memory, and scatter results elsewhere. This is essentially the full operation of a GPU pipeline when performing 3D rendering: gathering indexed vertices and textures, and scattering shaded pixels in screen space. Rasterization of opaque primitives using a depth buffer is "commutative", allowing reordering, which facilitates parallel execution. In the general case synchronisation primitives would be needed.
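The combined pattern can be sketched as a gather-compute-scatter pipeline (all names and data are hypothetical, and the "on-chip" stage is just a local variable here):

```python
# Gather-compute-scatter sketch: fetch indexed inputs, compute locally,
# write results to indexed output positions.
verts = [1, 2, 3, 4]
gather_idx = [3, 1]           # which inputs this "kernel" reads
scatter_idx = [0, 2]          # where its results land
out = [0] * 4
for g, s in zip(gather_idx, scatter_idx):
    local = verts[g] * 10     # "on-chip" computation on gathered data
    out[s] = local            # scattered write
```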
Random
At the opposite extreme is a truly random memory access pattern. A few multiprocessor systems are specialised to deal with these.[28] The PGAS approach may help by sorting operations by data on the fly (useful when the problem *is* figuring out the locality of unsorted data).[21] Data structures which rely heavily on pointer chasing can often produce poor locality of reference, although sorting can sometimes help. Given a truly random memory access pattern, it may be possible to break it down (including scatter or gather stages, or other intermediate sorting) which may improve the locality overall; this is often a prerequisite for parallelizing.
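One such decomposition, sorting the accesses before performing them, can be sketched as follows (a hedged illustration; the table and query data are hypothetical):

```python
# Random indexed accesses can sometimes be made cache-friendlier by sorting
# the indices first, turning a random walk into a mostly forward sweep.
import random
random.seed(0)
table = list(range(1000))
queries = [random.randrange(1000) for _ in range(100)]  # random pattern

# Process queries in ascending index order; results are re-associated via
# their original positions, so the answer is unchanged.
order = sorted(range(len(queries)), key=lambda i: queries[i])
results = [0] * len(queries)
for i in order:                 # table is now touched in ascending order
    results[i] = table[queries[i]]
```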
Approaches
Data-oriented design
Data-oriented design is an approach intended to maximise the locality of reference, by organising data according to how it is traversed in the various stages of a program (i.e., organising so that data layout explicitly mirrors the access pattern), in contrast with the more common object-oriented approach.[1]
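The classic contrast is array-of-structures versus structure-of-arrays layout, sketched here in Python (the field names and data are illustrative only):

```python
# Array-of-structures (AoS) vs structure-of-arrays (SoA). If a pass only
# touches field "x", the SoA layout keeps those reads contiguous.
aos = [{"x": i, "y": -i} for i in range(4)]         # AoS: fields interleaved
soa = {"x": [i for i in range(4)],                  # SoA: each field contiguous
       "y": [-i for i in range(4)]}

sum_aos = sum(p["x"] for p in aos)   # strided reads over interleaved records
sum_soa = sum(soa["x"])              # sequential reads over one contiguous array
```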
Contrast with locality of reference
Locality of reference refers to a property exhibited by memory access patterns. A programmer will change the memory access pattern (by reworking algorithms) to improve the locality of reference,[29] and/or to increase potential for parallelism.[26] A programmer or system designer may create frameworks or abstractions (e.g., C++ templates or higher-order functions) that encapsulate a specific memory access pattern.[30][31]
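Such an abstraction can be sketched as a higher-order function that owns the traversal logic (an illustrative example, not a real library API):

```python
# A higher-order function can encapsulate an access pattern so that
# algorithms need not repeat the traversal logic.
def for_each_strided(buf, start, stride, fn):
    """Apply fn to every stride-th element of buf, starting at start."""
    out = []
    for i in range(start, len(buf), stride):
        out.append(fn(buf[i]))
    return out

evens_doubled = for_each_strided(list(range(10)), 0, 2, lambda v: v * 2)
```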
Different considerations for memory access patterns appear in parallelism beyond locality of reference, namely the separation of reads and writes. E.g.: even if the reads and writes are "perfectly" local, it can be impossible to parallelise due to dependencies; separating the reads and writes into separate areas yields a different memory access pattern, which may initially appear worse in pure locality terms, but is desirable to leverage modern parallel hardware.[26]
Locality of reference may also refer to individual variables (e.g., the ability of a compiler to cache them in registers), whilst the term memory access pattern only refers to data held in an indexable memory (especially main memory).
References
- ^ a b "Introduction to Data-Oriented Design" (PDF). Archived from the original (PDF) on 2019-11-16.
- ^ Jang, Byunghyun; Schaa, Dana; Mistry, Perhaad & Kaeli, David (2010-05-27). "Exploiting Memory Access Patterns to Improve Memory Performance in Data-Parallel Architectures". IEEE Transactions on Parallel and Distributed Systems. 22 (1). New York: IEEE: 105–118. doi:10.1109/TPDS.2010.107. eISSN 1558-2183. ISSN 1045-9219. S2CID 15997131. NLM unique id 101212014.
- ^ Jeffers, James; Reinders, James; Sodani, Avinash (2016-05-31). Intel Xeon Phi Processor High Performance Programming: Knights Landing Edition (2nd ed.). Morgan Kaufmann. ISBN 9780128091951.
- ^ Jana, Siddhartha; Schuchart, Joseph; Chapman, Barbara (2014-10-06). "Analysis of Energy and Performance of PGAS-based Data Access Patterns" (PDF). Proceedings of the 8th International Conference on Partitioned Global Address Space Programming Models. PGAS '14. New York, NY, USA: Association for Computing Machinery. pp. 1–10. doi:10.1145/2676870.2676882. ISBN 978-1-4503-3247-7.
- ^ Marandola, Jussara; Louise, Stéphane; Cudennec, Loïc; Acquaviva, Jean-Thomas; Bader, David (2012-10-11). "Enhancing Cache Coherent Architectures with Access Patterns for Embedded Manycore Systems". International Symposium on System-on-Chip 2012. IEEE: 1–7. doi:10.1109/ISSoC.2012.6376369. ISBN 978-1-4673-2896-8.
- ^ "intel terascale" (PDF).
- ^ Brown, Mary; Jenevein, Roy M.; Ullah, Nasr (29 November 1998). Memory Access Pattern Analysis. WWC '98: Proceedings of the Workload Characterization: Methodology and Case Studies (published 1998-11-29). p. 105. ISBN 9780769504506.
- ^ Ostadzadeh, S. Arash; Meeuws, Roel J.; Galuzzi, Carlo; Bertels, Koen (2010). "QUAD – A Memory Access Pattern Analyser" (PDF). In Sirisuk, Phaophak; Morgan, Fearghal; El-Ghazawi, Tarek; Amano, Hideharu (eds.). Reconfigurable Computing: Architectures, Tools and Applications. Lecture Notes in Computer Science. Vol. 5992. Berlin, Heidelberg: Springer. pp. 269–281. doi:10.1007/978-3-642-12133-3_25. ISBN 978-3-642-12133-3.
- ^ Che, Shuai; Sheaffer, Jeremy W.; Skadron, Kevin (2011-11-12). "Dymaxion: Optimizing memory access patterns for heterogeneous systems" (PDF). Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis. SC '11. New York, NY, USA: Association for Computing Machinery. pp. 1–11. doi:10.1145/2063384.2063401. ISBN 978-1-4503-0771-0.
- ^ Harrison, Luddy (1996-01-01). "Examination of a memory access classification scheme for pointer-intensive and numeric programs". Proceedings of the 10th international conference on Supercomputing - ICS '96. New York, NY, USA: Association for Computing Machinery. pp. 133–140. doi:10.1145/237578.237595. ISBN 978-0-89791-803-9.
- ^ Matsubara, Yuki; Sato, Yukinori (2014). "Online Memory Access Pattern Analysis on an Application Profiling Tool". 2014 Second International Symposium on Computing and Networking. pp. 602–604. doi:10.1109/CANDAR.2014.86. ISBN 978-1-4799-4152-0. S2CID 16476418.
- ^ "Putting Your Data and Code in Order: Data and layout".
- ^ Kim, Yooseong; Shrivastava, Aviral (2011-06-05). "CuMAPz: A tool to analyze memory access patterns in CUDA". Proceedings of the 48th Design Automation Conference. DAC '11. New York, NY, USA: Association for Computing Machinery. pp. 128–133. doi:10.1145/2024724.2024754. ISBN 978-1-4503-0636-2.
- ^ Canteaut, Anne; Lauradoux, Cédric; Seznec, André (2006). Understanding cache attacks (report thesis). INRIA. ISSN 0249-6399.
- ^ Hardesty, Larry (2013-07-02). "Protecting data in the cloud". MIT News.
- ^ Rossi, Ben (2013-09-24). "Boosting cloud security with oblivious RAM". Information Age.
- ^ Chuck Paridon. "Storage Performance Benchmarking Guidelines - Part I: Workload Design" (PDF). In practice, IO access patterns are as numerous as the stars.
- ^ Kennedy, Ken; McKinley, Kathryn S. (1992-08-01). "Optimizing for parallelism and data locality" (PDF). Proceedings of the 6th international conference on Supercomputing - ICS '92. New York, NY, USA: Association for Computing Machinery. pp. 323–334. doi:10.1145/143369.143427. ISBN 978-0-89791-485-7.
- ^ Saidi, Selma; Tendulkar, P.; Lepley, Thierry; Maler, O. (2012). "Optimal 2D Data Partitioning for DMA Transfers on MPSoCs" (PDF). 2012 15th Euromicro Conference on Digital System Design. IEEE: 584–591. doi:10.1109/DSD.2012.99. ISBN 978-0-7695-4798-5.
- ^ a b CITRIS and the Banatao Institute (2013-09-05). Partitioned Global Address Space Programming - Kathy Yelick. Retrieved 2024-11-02 – via YouTube. covers cases where PGAS is a win, where data may not be already sorted, e.g., dealing with complex graphs - see "science across the irregularity spectrum".
- ^ Weinberg, Jonathan; McCracken, Michael O.; Snavely, Allan; Strohmaier, Erich (12–18 November 2005). "Quantifying Locality in the Memory Access Patterns of HPC Applications" (PDF). ACM/IEEE SC 2005 Conference (SC'05). Seattle, WA, USA: IEEE. p. 50. doi:10.1109/SC.2005.59. ISBN 1-59593-061-2. Archived from the original (PDF) on 2016-08-03. mentions nearest neighbor access patterns in clusters
- ^ Hakura, Ziyad S.; Gupta, Anoop (1997-05-01). "The design and analysis of a cache architecture for texture mapping" (PDF). Proceedings of the 24th annual international symposium on Computer architecture. ISCA '97. New York, NY, USA: Association for Computing Machinery. pp. 108–120. doi:10.1145/264107.264152. ISBN 978-0-89791-901-2.
- ^ Nocentino, Anthony E.; Rhodes, Philip J. (2010-04-15). "Optimizing memory access on GPUs using morton order indexing" (PDF). Proceedings of the 48th Annual Southeast Regional Conference. ACMSE '10. New York, NY, USA: Association for Computing Machinery. pp. 1–4. doi:10.1145/1900008.1900035. ISBN 978-1-4503-0064-3. Archived from the original (PDF) on 2022-12-08.
- ^ Wise, David S.; Frens, Jeremy D. (1999). "Morton-order Matrices Deserve Compilers' Support". Technical Report 533. S2CID 17192354.
- ^ a b c d Harris, Mark (April 2005). "GPU Gems 2". 31.1.3 Stream Communication: Gather vs. Scatter. Archived from the original on 2016-06-14. Retrieved 2016-06-13.
- ^ a b c GPU gems. 2011-01-13. ISBN 9780123849892. Deals with "scatter memory access patterns" and "gather memory access patterns" in the text.
- ^ Wichmann, Nathan (2005). Cray and HPCC: Benchmark Developments and Results from the Past Year (PDF). CUG 2005 Proceedings. See global random access results for Cray X1; vector architecture for hiding latencies, not so sensitive to cache coherency.
- ^ "Optimize Data Structures and Memory Access Patterns to Improve Data Locality".
- ^ "Template-based Memory Access Engine for Accelerators in SoCs" (PDF).
- ^ "Multi-Target Vectorization With MTPS C++ Generic Library" (PDF). A C++ template library for producing optimised memory access patterns.