Cache control instruction
In computing, a cache control instruction is a hint embedded in the instruction stream of a processor intended to improve the performance of hardware caches, using foreknowledge of the memory access pattern supplied by the programmer or compiler.[1] They may reduce cache pollution, reduce bandwidth requirement, bypass latencies, by providing better control over the working set. Most cache control instructions do not affect the semantics of a program, although some can.
Examples
Such instructions, with variants, are supported by several processor instruction set architectures, including ARM, MIPS, PowerPC, and x86.
Prefetch
Also termed data cache block touch, the effect is to request loading of the cache line associated with a given address. This is performed by the PREFETCH instruction in the x86 instruction set. Some variants bypass higher levels of the cache hierarchy, which is useful in a 'streaming' context for data that is traversed once rather than held in the working set. The prefetch should occur sufficiently far ahead in time to mitigate the latency of memory access, for example in a loop traversing memory linearly. The GNU Compiler Collection intrinsic function __builtin_prefetch can be used to invoke this in the C and C++ programming languages.
Instruction prefetch
A variant of prefetch that targets the instruction cache.
Data cache block allocate zero
This hint is used to prepare cache lines before overwriting their contents completely. In this case, the CPU need not load anything from main memory. The semantic effect is equivalent to an aligned memset of a cache-line-sized block to zero, but the operation is effectively free.
Data cache block invalidate
This hint is used to discard cache lines without committing their contents to main memory. Care is needed, since incorrect results are possible: unlike other cache hints, it significantly modifies the semantics of the program. It is used in conjunction with allocate zero for managing temporary data, saving unneeded main-memory bandwidth and avoiding cache pollution.
Data cache block flush
This hint requests the immediate eviction of a cache line, making way for future allocations. It is used when it is known that data is no longer part of the working set.
Other hints
Some processors support a variant of load–store instructions that also imply cache hints. An example is load last in the PowerPC instruction set, which suggests that data will only be used once: the cache line in question may be pushed to the head of the eviction queue, whilst remaining available if still directly needed.
Alternatives
Automatic prefetch
In recent years, cache control instructions have become less popular as increasingly advanced application processor designs from Intel and ARM devote more transistors to accelerating code written in traditional languages, e.g., performing automatic prefetch, with hardware that detects linear access patterns on the fly. However, the techniques may remain valid for throughput-oriented processors, which have a different throughput-versus-latency tradeoff and may prefer to devote more area to execution units.
Scratchpad memory
Some processors support scratchpad memory, into which temporaries may be put, and direct memory access (DMA) to transfer data to and from main memory when needed. This approach is used by the Cell processor and some embedded systems. These allow greater control over memory traffic and locality (as the working set is managed by explicit transfers) and eliminate the need for expensive cache coherency in a manycore machine.
The disadvantage is that it requires significantly different programming techniques. It is very hard to adapt programs written in traditional languages such as C and C++, which present the programmer with a uniform view of a large address space (an illusion simulated by caches). A traditional microprocessor can more easily run legacy code, which may then be accelerated by cache control instructions, whilst a scratchpad-based machine requires dedicated coding from the ground up to even function. Cache control instructions are specific to a certain cache-line size, which in practice may vary between generations of processors in the same architectural family. Caches may also help coalesce reads and writes from less predictable access patterns (e.g., during texture mapping), whilst scratchpad DMA requires reworking algorithms for more predictable 'linear' traversals.
As such, scratchpads are generally harder to use with traditional programming models, although dataflow models (such as TensorFlow) might be more suitable.
Vector fetch
Vector processors (for example modern graphics processing units (GPUs) and Xeon Phi) use massive parallelism to achieve high throughput whilst working around memory latency (reducing the need for prefetching). Many read operations are issued in parallel for subsequent invocations of a compute kernel; calculations may be put on hold awaiting future data, whilst the execution units work on data from past requests that has already arrived. This is easier for programmers to leverage in conjunction with the appropriate programming models (compute kernels), but harder to apply to general-purpose programming.
The disadvantage is that many copies of temporary states may be held in the local memory of a processing element, awaiting data in flight.
References
[edit]- ^ "Power PC manual, see 1.10.3 Cache Control Instructions" (PDF). Archived from the original (PDF) on 2016-10-13. Retrieved 2016-06-11.