Talk:Memory access pattern

From Wikipedia, the free encyclopedia

memory access pattern / IO access pattern

Should 'IO access pattern' redirect here?

Would the article be better renamed "access pattern (computing)", mentioning both 'memory access pattern' and 'IO access pattern'?

Or is the distinction between memory and secondary storage not so important (they are just different forms of memory, although in most usage 'memory' means DRAM etc.) - and have we simply broadened the scope of the article by mentioning IO/storage explicitly?

Fmadd (talk) 15:11, 30 July 2016 (UTC)[reply]

I'd probably rename it. Memory access pattern usually doesn't refer to long-term storage IO (at least in my experience). Sizeofint (talk) 18:52, 30 July 2016 (UTC)[reply]

Memory access patterns and I/O access patterns are very distinct. Two very different research topics. — Preceding unsigned comment added by 2600:1700:FDB0:4B60:3C72:75B1:93D:2F96 (talk) 01:22, 5 November 2021 (UTC)[reply]

Unnamed section

I'm unsure that this deserves a page, but I like the idea of having clear definitions, and I just ran into trouble re: glossaries, so I gave this a go. EDIT: as I flesh it out I realise more the frustration of trying to embed these concepts elsewhere. 'Memory access pattern -> scatter/gather' refers to a way of organising a program, but this would be out of place in 'locality of reference'. It's about the flow of data and algorithm structure, not just the layout. Fmadd (talk) 17:23, 13 June 2016 (UTC)[reply]

This would seem to cover some material in 'locality of reference'; however, it's all from the perspective of the "memory access pattern", the thing that you change to improve 'locality of reference', and there are differences. One is the cause, the other is the effect. Memory access pattern also affects parallelism. My thinking is that definitions of specific phrases would make Wikipedia a better AI resource. — Preceding unsigned comment added by Fmadd (talkcontribs) 21:04, 11 June 2016‎ (UTC)[reply]

Hello! As visible from my edit, creating unreferenced articles actually isn't that helpful. Please note that we need reliable sources for new articles, which also confirm that the topic is notable enough to warrant a separate article. That's how Wikipedia works. By the way, we'd also need references for the glossary pages, no matter how straightforward it might be to define some of the terms. Also, please remember to sign your posts. — Dsimic (talk | contribs) 08:47, 13 June 2016 (UTC)[reply]
Well, I've tried to grab some references, and there'd be more detail to come; e.g. how certain memory access patterns affect parallelizability (turning 'scatter' into 'gather' for GPGPU). Think how this sort of thing interacts with the new 'hover cards' feature - isn't it nice when Wikipedia can display a direct definition in the hover card... This is why I went for glossaries; someone else told me 'make articles', and I had already told him "blah blah blah notability". This is why I also have a platform suggestion - "micro articles". The magic of a wiki (IMO) is being able to explain things in context (the flow of concepts): surely a graph rather than linear text.
Some papers talk about the implications of memory access patterns for security; that could be mentioned here.
Scatter and gather - vector addressing. These are 'memory access patterns', and can further be measured in terms of locality of reference (depending on what the indices do). Fmadd (talk) 17:21, 13 June 2016 (UTC)[reply]
It's all fine, but please keep in mind the WP:NOTABILITY guideline, as well as the need for glossary pages to include a reasonable amount of references. — Dsimic (talk | contribs) 23:41, 13 June 2016 (UTC)[reply]

Additional justification for existence

Take a look at the prospective new article List of AI accelerators, which I'm hoping will become an AI accelerator article. The concept of 'memory access pattern' (not just 'locality of reference') is important to explain the difference between a GPU and an AI accelerator, beyond the fact that 'they are both high throughput' etc.

GPUs and AI accelerators both exploit 'locality of reference', but their use cases have different 'access patterns': one is "gather(textures) -> scatter(pixels)", the other is (ideally) "keep weights in scratchpad; throw activations between bundles of neurons, in nearby scratchpads".

Fmadd (talk) 23:54, 17 June 2016 (UTC)[reply]