Hardware acceleration

From Wikipedia, the free encyclopedia

In computing, hardware acceleration is the use of computer hardware to perform some functions more efficiently than is possible in software running on a more general-purpose CPU. Examples of hardware acceleration include bit blit acceleration in graphics processing units (GPUs), the use of memristors to accelerate neural networks,[1] and regular-expression hardware acceleration for spam control in the server industry.[2] The hardware that performs the acceleration may be part of a general-purpose CPU or a separate unit. In the latter case, it is referred to as a hardware accelerator, or often more specifically as a 3D accelerator, cryptographic accelerator, etc.

Traditionally, processors were sequential (instructions were executed one by one) and were designed to run general-purpose algorithms controlled by instruction fetch (for example, moving temporary results to and from a register file). Hardware accelerators improve the execution of a specific algorithm by allowing greater concurrency, providing dedicated datapaths for its temporaries, and possibly reducing the overhead of instruction control. Modern processors are multi-core and often feature parallel SIMD units; even so, hardware acceleration still yields benefits. It is suitable for any computation-intensive algorithm that is executed frequently. Depending on the granularity, hardware acceleration can vary from a small functional unit to a large functional block (such as motion estimation in MPEG-2).

General-purpose processors such as CPUs, more specialized processors such as GPUs, fixed-function logic implemented on FPGAs, and fixed-function logic implemented on ASICs form a hierarchy that trades flexibility for efficiency: efficiency can increase by orders of magnitude as a given application moves from a general-purpose CPU toward a more specialized implementation.[3][4]


References


  1. ^ Mittal, S. "A Survey of ReRAM-based Architectures for Processing-in-memory and Neural Networks". Machine Learning and Knowledge Extraction, 2018.
  2. ^ "Regular Expressions in hardware". Retrieved 17 July 2014.
  3. ^ "Mining hardware comparison - Bitcoin". Retrieved 17 July 2014.
  4. ^ "Non-specialized hardware comparison - Bitcoin". Retrieved 25 February 2014.
  5. ^ Farabet, Clément, et al. "Hardware accelerated convolutional neural networks for synthetic vision systems". Circuits and Systems (ISCAS), Proceedings of the 2010 IEEE International Symposium on. IEEE, 2010.
  6. ^ "Compression Accelerators". Microsoft Research.