The following discussion is an archived discussion of the proposal. Please do not modify it. Subsequent comments should be made in a new section on the talk page. No further edits should be made to this section.
– In this particular case, the gerund form would be clearer. In computing, a cache itself is useless without a caching-oriented system that actually stops to examine the cache. At the same time, -ing is something of a natural disambiguator. The computing use is not the primary topic for Cache (disambiguation), but perhaps for Caching it is. Pnm (talk) 21:46, 28 January 2012 (UTC)
Rename to "Cache (computing)" to clarify and disambiguate right at the article-name level, since this is what the article is about. Hmains (talk) 00:57, 29 January 2012 (UTC)
Oppose, though I could be convinced on the primary topic issue. On the title of this article, though, I don't think the gerund gives the right semantics here. "What is caching? The act of putting something in a cache." Any definition of caching requires first a definition of cache, and that is strong evidence that it's the storage itself, not the act of storing, that is key. PowersT 01:01, 30 January 2012 (UTC)
Caching refers to the strategy, to the feature of a system. To implement caching requires four things: having a cache, checking the cache first when a request comes along, putting entries into the cache, and deciding when to dispose of entries. In general, the literature uses both terms, but with topics like web caching and database caching the gerund is more common than the noun, as you can see from those articles' sources. – Pnm (talk) 04:05, 30 January 2012 (UTC)
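The four parts listed above can be sketched as a minimal read-through cache. This is purely illustrative Python; the class name, the backing-store callback, and the choice of least-recently-used eviction are my own assumptions, not something from the discussion:

```python
from collections import OrderedDict

class ReadThroughCache:
    """Illustrative read-through cache with LRU eviction (hypothetical names)."""

    def __init__(self, fetch, capacity=2):
        self.fetch = fetch                    # the slow backing store
        self.capacity = capacity
        self.entries = OrderedDict()          # 1. having a cache

    def get(self, key):
        if key in self.entries:               # 2. check the cache first
            self.entries.move_to_end(key)     # mark as recently used
            return self.entries[key]
        value = self.fetch(key)               # miss: go to the backing store
        self.entries[key] = value             # 3. put the entry into the cache
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # 4. decide what to dispose of (LRU)
        return value

calls = []
def slow_fetch(key):
    calls.append(key)                         # record each trip to the backing store
    return key * 10

c = ReadThroughCache(slow_fetch)
c.get(1); c.get(1); c.get(2); c.get(3); c.get(1)
print(calls)  # [1, 2, 3, 1] - second get(1) hit; 1 was later evicted and refetched
```

With capacity 2, the second `get(1)` is served from the cache, while the final `get(1)` misses again because entry 1 was evicted to make room for 3.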
Those articles are about specific applications of a cache. A web cache is not materially different from a database cache; it's the use of the cache that differs. But the article about the wider concept of a cache is properly named after the object of the action, not the action. PowersT 13:05, 30 January 2012 (UTC)
The above discussion is preserved as an archive of the proposal. Please do not modify it. Subsequent comments should be made in a new section on this talk page. No further edits should be made to this section.
The article keeps referring to a 'backing store', but it never says what that is (probably a hard disk drive?). It needs to define backing store or provide a link to another article that describes what the backing store is.
The backing store is whatever the cache is caching. :-) It could be main memory, or a cache at a higher level of the cache hierarchy, in the case of a CPU cache or a TLB. It could be a disk drive, SSD, or remote file server for a file data cache ("disk cache"/"page cache"); it could be a Web server for a Web cache; and so on. Guy Harris (talk) 07:51, 2 June 2014 (UTC)
At the risk of sounding silly: all cache is volatile, right? I saw no mention of volatility anywhere in the article, and if that's the case, it should be added.— Preceding unsigned comment added by BlueFenixReborn (talk • contribs) 08:59, December 20, 2014 (UTC)
Hello! Obviously, that depends on what kind of device is used as a cache. For example, using DRAM results in a volatile cache, while using an HDD or SSD means the cached data is stored persistently. It's pretty much implicitly known whenever a particular cache layout is mentioned in the article; thus, I don't think that it should be clarified further. — Dsimic (talk | contribs) 08:39, 20 December 2014 (UTC)
Your browser for example also stores its caches on disk, which persists between restarts and reboots. Image viewer thumbnail caches are often stored persistently. -- intgr[talk] 08:45, 20 December 2014 (UTC)
Right, thank you both for the clarification. BlueFenixReborn (talk) 06:57, 31 December 2014 (UTC)
Often a larger, more distant resource incurs significant latency on access (e.g. it can take hundreds of clock cycles for a modern 4 GHz processor to reach DRAM). This is mitigated by reading from the distant resource in large chunks, in the hope that subsequent reads will be from nearby locations. Prediction hardware or prefetching may also guess where future reads will come from and issue requests ahead of time; done correctly, the latency is hidden altogether.
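The prefetching idea above can be shown with a toy simulation. This is a sketch under simplifying assumptions (a simple next-block prefetcher and a cache that never evicts); the function names are my own:

```python
def run(block_addresses, prefetch=True):
    """Count demand misses (accesses that must wait the full latency)
    for a toy cache with optional next-block prefetching."""
    cache, demand_misses = set(), 0
    for b in block_addresses:
        if b not in cache:
            demand_misses += 1    # must stall for the distant resource
            cache.add(b)
        if prefetch:
            cache.add(b + 1)      # guess the next block, fetch it early

    return demand_misses

sequential = list(range(8))
print(run(sequential, prefetch=False))  # 8 misses: every access stalls
print(run(sequential, prefetch=True))   # 1 miss: only the first access stalls
```

For a perfectly sequential access pattern the prefetcher's guess is always right, so only the very first access pays the latency; real predictors are imperfect, but the principle is the same.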
Beyond this, granularity is important. The use of a cache also allows for much higher throughput from the underlying resource, by assembling multiple fine-grained transfers into larger, more efficient requests. In the case of DRAM, this might be served by a wider bus. Imagine a program stepping through memory a byte at a time, but being served by a 128-bit off-chip bus; individual uncached byte accesses would allow only 1/16th of the total bandwidth to be used.
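The 1/16th figure can be made concrete with a little arithmetic. The numbers below are the ones assumed above (a 128-bit, i.e. 16-byte, bus); the helper function is hypothetical:

```python
BUS_WIDTH_BYTES = 16  # a 128-bit bus moves 16 bytes per transfer, used or not

def transfers_needed(n_bytes, useful_bytes_per_transfer):
    """Bus transfers needed to read n_bytes when each full-width
    transfer delivers only useful_bytes_per_transfer useful bytes."""
    return -(-n_bytes // useful_bytes_per_transfer)  # ceiling division

# Uncached byte-at-a-time access: one full-width transfer per byte.
uncached = transfers_needed(4096, 1)                # 4096 transfers
# Cached access: a cache-line fill uses all 16 bytes of each transfer.
cached = transfers_needed(4096, BUS_WIDTH_BYTES)    # 256 transfers

print(uncached // cached)  # 16: uncached access wastes 15/16 of the bandwidth
```

That is, the cache line acts as the assembly buffer that turns sixteen fine-grained byte reads into one full-width bus transfer.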