
Tensor Processing Unit



Tensor processing units (TPUs) are application-specific integrated circuits developed specifically for machine learning. Compared with graphics processing units (which, as of 2016, are frequently used for the same tasks), they are designed explicitly for a higher volume of reduced-precision computation (e.g. as little as 8-bit precision[1]) and lack hardware for rasterisation and texture mapping.[2] The term was coined for a specific chip designed for Google's TensorFlow framework. Other vendors are also developing AI accelerator designs, aimed at embedded and robotics markets.
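
The reduced-precision arithmetic mentioned above can be illustrated with a minimal sketch. The code below is not Google's implementation; it simply shows the general idea of quantizing 32-bit floating-point values to 8-bit integers, accumulating products in 32-bit integers, and rescaling, which approximates the full-precision result at lower cost per operation. The function and variable names are illustrative only.

    import numpy as np

    def quantize_to_int8(x):
        """Map a float32 array onto int8 with a simple symmetric scale."""
        scale = np.max(np.abs(x)) / 127.0
        q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
        return q, scale

    rng = np.random.default_rng(0)
    weights = rng.standard_normal((4, 8)).astype(np.float32)
    activations = rng.standard_normal(8).astype(np.float32)

    qw, w_scale = quantize_to_int8(weights)
    qa, a_scale = quantize_to_int8(activations)

    # Accumulate int8 products in int32, then rescale back to float32.
    int32_acc = qw.astype(np.int32) @ qa.astype(np.int32)
    approx = int32_acc * (w_scale * a_scale)
    exact = weights @ activations

    print(np.max(np.abs(approx - exact)))  # small quantization error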

Google has stated that its proprietary tensor processing units were used in the AlphaGo versus Lee Sedol series of man-machine Go games.[2]

See also

References

  1. Armasu, Lucian (May 19, 2016). "Google's Big Chip Unveil For Machine Learning: Tensor Processing Unit With 10x Better Efficiency (Updated)". Tom's Hardware. Retrieved June 26, 2016.
  2. Jouppi, Norm (May 18, 2016). "Google supercharges machine learning tasks with TPU custom chip". Google Cloud Platform Blog. Google. Retrieved June 26, 2016.