Advanced Vector Extensions
Advanced Vector Extensions (AVX) are extensions to the x86 instruction set architecture for microprocessors from Intel and AMD, proposed by Intel in March 2008 and first supported by Intel with the Sandy Bridge processor shipping in Q1 2011, and later by AMD with the Bulldozer processor shipping in Q3 2011. AVX provides new features, new instructions and a new coding scheme.
AVX2 expands most integer instructions to 256 bits and introduces fused multiply-accumulate (FMA) operations. AVX-512 expands AVX to 512-bit support using a new EVEX prefix encoding, proposed by Intel in July 2013 and first supported by Intel with the Knights Landing processor scheduled to ship in 2015.[1]
Advanced Vector Extensions
AVX uses sixteen YMM registers to perform a single instruction on multiple pieces of data. Each YMM register can hold:
- eight 32-bit single-precision floating point numbers or
- four 64-bit double-precision floating point numbers.
The width of the SIMD registers is increased from 128 bits to 256 bits, and the registers are renamed from XMM0–XMM7 to YMM0–YMM7 (XMM0–XMM15 to YMM0–YMM15 in x86-64 mode). In processors with AVX support, the legacy SSE instructions (which previously operated on 128-bit XMM registers) can be extended using the VEX prefix to operate on the lower 128 bits of the YMM registers.
Bits 511–256 | Bits 255–128 | Bits 127–0 |
---|---|---|
ZMM0 | YMM0 | XMM0 |
ZMM1 | YMM1 | XMM1 |
ZMM2 | YMM2 | XMM2 |
ZMM3 | YMM3 | XMM3 |
ZMM4 | YMM4 | XMM4 |
ZMM5 | YMM5 | XMM5 |
ZMM6 | YMM6 | XMM6 |
ZMM7 | YMM7 | XMM7 |
ZMM8 | YMM8 | XMM8 |
ZMM9 | YMM9 | XMM9 |
ZMM10 | YMM10 | XMM10 |
ZMM11 | YMM11 | XMM11 |
ZMM12 | YMM12 | XMM12 |
ZMM13 | YMM13 | XMM13 |
ZMM14 | YMM14 | XMM14 |
ZMM15 | YMM15 | XMM15 |
ZMM16 | YMM16 | XMM16 |
ZMM17 | YMM17 | XMM17 |
ZMM18 | YMM18 | XMM18 |
ZMM19 | YMM19 | XMM19 |
ZMM20 | YMM20 | XMM20 |
ZMM21 | YMM21 | XMM21 |
ZMM22 | YMM22 | XMM22 |
ZMM23 | YMM23 | XMM23 |
ZMM24 | YMM24 | XMM24 |
ZMM25 | YMM25 | XMM25 |
ZMM26 | YMM26 | XMM26 |
ZMM27 | YMM27 | XMM27 |
ZMM28 | YMM28 | XMM28 |
ZMM29 | YMM29 | XMM29 |
ZMM30 | YMM30 | XMM30 |
ZMM31 | YMM31 | XMM31 |
AVX introduces a three-operand SIMD instruction format, where the destination register is distinct from the two source operands. For example, an SSE instruction using the conventional two-operand form a = a + b can now use a non-destructive three-operand form c = a + b, preserving both source operands. AVX's three-operand format is limited to the instructions with SIMD operands (YMM), and does not include instructions with general purpose registers (e.g. EAX). Such support will first appear in AVX2.[2]
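As a minimal hedged sketch (not from the original text) of what the non-destructive form looks like from C, the 256-bit add below is emitted as the three-operand `vaddps ymm, ymm, ymm` by a compiler with AVX enabled (e.g. `gcc -O2 -mavx`), so neither source register is overwritten:

```c
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m256 a = _mm256_set1_ps(1.5f);   /* eight single-precision lanes */
    __m256 b = _mm256_set1_ps(2.5f);

    /* c = a + b; with AVX this compiles to the non-destructive
       three-operand form, leaving both a and b intact afterwards. */
    __m256 c = _mm256_add_ps(a, b);

    float out[8];
    _mm256_storeu_ps(out, c);
    printf("c[0] = %g, a and b are unchanged\n", out[0]);
    return 0;
}
```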
The alignment requirement of SIMD memory operands is relaxed.[3]
The new VEX coding scheme introduces a new set of code prefixes that extends the opcode space, allows instructions to have more than two operands, and allows SIMD vector registers to be longer than 128 bits. The VEX prefix can also be used on the legacy SSE instructions, giving them a three-operand form and making them interact more efficiently with AVX instructions without the need for VZEROUPPER and VZEROALL.
The AVX instructions support both 128-bit and 256-bit SIMD. The 128-bit versions can be useful for improving old code without needing to widen the vectorization, and they avoid the penalty of transitioning from SSE to AVX; they are also faster on some early AMD implementations of AVX. This mode is sometimes known as AVX-128.[4]
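A short hedged sketch of the AVX-128 idea: the same 128-bit intrinsic code, when built with AVX enabled (e.g. `gcc -O2 -mavx`), is emitted as VEX-encoded 128-bit instructions rather than legacy SSE:

```c
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128 a = _mm_set1_ps(1.0f);
    __m128 b = _mm_set1_ps(2.0f);

    /* Under -mavx this add is emitted as the VEX-encoded, three-operand
       `vaddps xmm, xmm, xmm` (AVX-128) instead of legacy SSE `addps`,
       so no SSE/AVX transition penalty is incurred. */
    __m128 c = _mm_add_ps(a, b);

    float out[4];
    _mm_storeu_ps(out, c);
    printf("%g\n", out[0]);
    return 0;
}
```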
New instructions
These AVX instructions are in addition to the ones that are 256-bit extensions of the legacy 128-bit SSE instructions; most are usable on both 128-bit and 256-bit operands.
Instruction | Description |
---|---|
VBROADCASTSS, VBROADCASTSD, VBROADCASTF128 | Copy a 32-bit, 64-bit or 128-bit memory operand to all elements of an XMM or YMM vector register. |
VINSERTF128 | Replaces either the lower half or the upper half of a 256-bit YMM register with the value of a 128-bit source operand. The other half of the destination is unchanged. |
VEXTRACTF128 | Extracts either the lower half or the upper half of a 256-bit YMM register and copies the value to a 128-bit destination operand. |
VMASKMOVPS, VMASKMOVPD | Conditionally reads any number of elements from a SIMD vector memory operand into a destination register, leaving the remaining vector elements unread and setting the corresponding elements in the destination register to zero. Alternatively, conditionally writes any number of elements from a SIMD vector register operand to a vector memory operand, leaving the remaining elements of the memory operand unchanged. On the AMD Jaguar processor architecture, this instruction with a memory source operand takes more than 300 clock cycles when the mask is zero, in which case the instruction should do nothing; this appears to be a design flaw.[5] |
VPERMILPS, VPERMILPD | Permute In-Lane. Shuffle the 32-bit or 64-bit vector elements of one input operand. These are in-lane 256-bit instructions, meaning that they operate on all 256 bits as two separate 128-bit shuffles, so they cannot shuffle across the 128-bit lanes.[6] |
VPERM2F128 | Shuffle the four 128-bit vector elements of two 256-bit source operands into a 256-bit destination operand, with an immediate constant as selector. |
VZEROALL | Set all YMM registers to zero and tag them as unused. Used when switching between 128-bit use and 256-bit use. |
VZEROUPPER | Set the upper half of all YMM registers to zero. Used when switching between 128-bit use and 256-bit use. |
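The intrinsics below are one hedged way to exercise several of these instructions from C (a sketch assuming <immintrin.h> and a build with -mavx; compilers normally map _mm256_broadcast_ss, _mm256_insertf128_ps, _mm256_maskload_ps and _mm256_zeroupper to VBROADCASTSS, VINSERTF128, VMASKMOVPS and VZEROUPPER respectively):

```c
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float x = 3.0f;
    /* VBROADCASTSS: replicate one 32-bit value into all 8 lanes of a YMM */
    __m256 bcast = _mm256_broadcast_ss(&x);

    /* VINSERTF128: build a 256-bit value from two 128-bit halves */
    __m128 lo = _mm_set1_ps(1.0f);
    __m128 hi = _mm_set1_ps(2.0f);
    __m256 combined = _mm256_insertf128_ps(_mm256_castps128_ps256(lo), hi, 1);

    /* VMASKMOVPS: load only the lanes whose mask element has its sign bit set */
    float src[8] = {10, 11, 12, 13, 14, 15, 16, 17};
    __m256i mask = _mm256_setr_epi32(-1, -1, -1, -1, 0, 0, 0, 0);
    __m256 masked = _mm256_maskload_ps(src, mask);

    float out[8];
    _mm256_storeu_ps(out, _mm256_add_ps(bcast, _mm256_add_ps(combined, masked)));
    for (int i = 0; i < 8; i++) printf("%g ", out[i]);
    printf("\n");

    /* VZEROUPPER: clear the upper YMM halves before calling legacy SSE code */
    _mm256_zeroupper();
    return 0;
}
```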
CPUs with AVX
- Intel
- Sandy Bridge processor, Q1 2011[7]
- Sandy Bridge E processor, Q4 2011[8]
- Ivy Bridge processor, Q1 2012
- Ivy Bridge E processor, Q3 2013
- Haswell processor, Q2 2013
- Haswell E processor, Q3 2014
- Broadwell processor, Q4 2014
- Broadwell E processor, Q2 2016
- Skylake processor, Q3 2015
- Kaby Lake processor, expected in 2016
- Cannonlake processor, expected in 2017
Note: Not all CPUs from the listed families support AVX. Generally, CPUs with the commercial denomination "Core i3/i5/i7" support it, whereas "Pentium" and "Celeron" CPUs do not.
- AMD:
- Bulldozer-based processor, Q4 2011[9]
- Piledriver-based processor, Q4 2012[10]
- Steamroller-based processor, Q1 2014
- Excavator-based processor, expected in 2015
- Jaguar-based processor
- Puma-based processor
Issues regarding compatibility between future Intel and AMD processors are discussed under XOP instruction set.
Compiler and assembler support
GCC supports AVX starting with version 4.6 (although a 4.3 branch had partial support), as does the Intel Compiler Suite starting with version 11.1. The Visual Studio 2010/2012 compiler supports AVX via intrinsics and the /arch:AVX switch. The Open64 compiler version 4.5.1 supports AVX with the -mavx flag, as do the Absoft and PathScale compilers. The Free Pascal compiler supports AVX and AVX2 with the -CfAVX and -CfAVX2 switches from version 2.7.1, and the Vector Pascal compiler supports AVX via the -cpuAVX32 flag. The GNU Assembler (GAS) inline assembly functions support these instructions (accessible via GCC), as do the Intel intrinsics and the Intel inline assembler (closely compatible with GAS, although more general in its handling of local references within inline code). Other assemblers such as MASM (VS2010 version), YASM,[11] FASM, NASM and JWASM also support AVX.
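As a sketch of how this compiler support is typically used (GCC/Clang extensions; the function names are illustrative, not from any particular project), a single translation unit can compile one routine with AVX code generation and select it at run time:

```c
#include <stdio.h>

/* AVX code generation for this function only; the rest of the file stays
   baseline so the same binary still runs on CPUs without AVX. */
__attribute__((target("avx")))
static void add_avx(const float *a, const float *b, float *c, int n) {
    for (int i = 0; i < n; i++)        /* the compiler may vectorize with YMM */
        c[i] = a[i] + b[i];
}

static void add_scalar(const float *a, const float *b, float *c, int n) {
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}

int main(void) {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8}, b[8] = {8, 7, 6, 5, 4, 3, 2, 1}, c[8];
    /* In recent GCC/Clang, __builtin_cpu_supports("avx") also verifies that
       the OS has enabled YMM state saving, not just the CPUID bit. */
    if (__builtin_cpu_supports("avx"))
        add_avx(a, b, c, 8);
    else
        add_scalar(a, b, c, 8);
    printf("%g\n", c[0]);
    return 0;
}
```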
Operating system support
AVX adds new register-state through the 256-bit wide YMM register file, so explicit operating system support is required to properly save and restore AVX's expanded registers between context switches. The following operating system versions support AVX:
- Apple OS X: Support for AVX added in 10.6.8 (Snow Leopard) update[12] released on June 23, 2011.
- Linux: supported since kernel version 2.6.30,[13] released on June 9, 2009.[14]
- Windows: supported in Windows 7 SP1, Windows Server 2008 R2 SP1,[15] and Windows 8
- Windows Server 2008 R2 SP1 with Hyper-V requires a hotfix to support AMD AVX (Opteron 6200 and 4200 series) processors, KB2568088
- FreeBSD: supported via a patch submitted on 21 January 2012,[16] which was included in the 9.1 stable release[17]
- DragonFly BSD added support in early 2013.
- OpenBSD added support on 21 March 2015.[18]
- Solaris 10 Update 10 and Solaris 11
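A hedged sketch of the check this requirement implies, as an application might perform it before using AVX (GCC/Clang on x86; the helper name is illustrative): CPUID confirms that the hardware has AVX and that the OS uses XSAVE, and XGETBV confirms that the OS actually enabled XMM and YMM state saving.

```c
#include <cpuid.h>
#include <stdint.h>
#include <stdio.h>

static int avx_usable(void) {
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 0;
    /* CPUID.1:ECX bit 27 = OSXSAVE (OS uses XSAVE/XRSTOR), bit 28 = AVX */
    if (!(ecx & (1u << 27)) || !(ecx & (1u << 28)))
        return 0;
    /* XGETBV with ECX=0 reads XCR0; bits 1 (XMM) and 2 (YMM) must both be
       set, meaning the OS saves/restores the full 256-bit register state. */
    uint32_t lo, hi;
    __asm__ volatile("xgetbv" : "=a"(lo), "=d"(hi) : "c"(0));
    (void)hi;
    return (lo & 0x6) == 0x6;
}

int main(void) {
    printf("AVX usable: %s\n", avx_usable() ? "yes" : "no");
    return 0;
}
```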
Advanced Vector Extensions 2
Advanced Vector Extensions 2 (AVX2), also known as Haswell New Instructions,[2] is an expansion of the AVX instruction set introduced in Intel's Haswell microarchitecture. AVX2 makes the following additions:
- expansion of most vector integer SSE and AVX instructions to 256 bits
- three-operand general-purpose bit manipulation and multiply
- Gather support, enabling vector elements to be loaded from non-contiguous memory locations
- DWORD- and QWORD-granularity any-to-any permutes
- vector shifts.
Sometimes another extension using a different cpuid flag is considered part of AVX2; those instructions are listed on their own page and not below:
- three-operand fused multiply-accumulate support (FMA3)
New instructions
Instruction | Description |
---|---|
VBROADCASTSS, VBROADCASTSD | Copy a 32-bit or 64-bit register operand to all elements of an XMM or YMM vector register. These are register versions of the same instructions in AVX1. There is no 128-bit version, but the same effect can be achieved using VINSERTF128. |
VPBROADCASTB, VPBROADCASTW, VPBROADCASTD, VPBROADCASTQ | Copy an 8-, 16-, 32- or 64-bit integer register or memory operand to all elements of an XMM or YMM vector register. |
VBROADCASTI128 | Copy a 128-bit memory operand to all elements of a YMM vector register. |
VINSERTI128 | Replaces either the lower half or the upper half of a 256-bit YMM register with the value of a 128-bit source operand. The other half of the destination is unchanged. |
VEXTRACTI128 | Extracts either the lower half or the upper half of a 256-bit YMM register and copies the value to a 128-bit destination operand. |
VGATHERDPD, VGATHERQPD, VGATHERDPS, VGATHERQPS | Gathers single- or double-precision floating point values using either 32- or 64-bit indices and a scale factor. |
VPGATHERDD, VPGATHERDQ, VPGATHERQD, VPGATHERQQ | Gathers 32- or 64-bit integer values using either 32- or 64-bit indices and a scale factor. |
VPMASKMOVD, VPMASKMOVQ | Conditionally reads any number of elements from a SIMD vector memory operand into a destination register, leaving the remaining vector elements unread and setting the corresponding elements in the destination register to zero. Alternatively, conditionally writes any number of elements from a SIMD vector register operand to a vector memory operand, leaving the remaining elements of the memory operand unchanged. |
VPERMPS, VPERMD | Shuffle the eight 32-bit vector elements of one 256-bit source operand into a 256-bit destination operand, with a register or memory operand as selector. |
VPERMPD, VPERMQ | Shuffle the four 64-bit vector elements of one 256-bit source operand into a 256-bit destination operand, with an immediate constant as selector. |
VPERM2I128 | Shuffle the four 128-bit vector elements of two 256-bit source operands into a 256-bit destination operand, with an immediate constant as selector. |
VPBLENDD | Doubleword immediate version of the PBLEND instructions from SSE4. |
VPSLLVD, VPSLLVQ | Shift left logical. Allows variable shifts where each element is shifted according to the packed input. |
VPSRLVD, VPSRLVQ | Shift right logical. Allows variable shifts where each element is shifted according to the packed input. |
VPSRAVD | Shift right arithmetic. Allows variable shifts where each element is shifted according to the packed input. |
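A small hedged sketch of two of the additions above via C intrinsics (assuming <immintrin.h> and a build with -mavx2; _mm256_i32gather_ps and _mm256_sllv_epi32 correspond to VGATHERDPS and VPSLLVD):

```c
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float table[32];
    for (int i = 0; i < 32; i++) table[i] = (float)i;

    /* Gather: load 8 floats from non-contiguous positions table[0,4,...,28] */
    __m256i idx = _mm256_setr_epi32(0, 4, 8, 12, 16, 20, 24, 28);
    __m256 g = _mm256_i32gather_ps(table, idx, 4);    /* scale = 4 bytes */

    /* Variable shift: each 32-bit lane is shifted by its own count */
    __m256i v   = _mm256_set1_epi32(1);
    __m256i cnt = _mm256_setr_epi32(0, 1, 2, 3, 4, 5, 6, 7);
    __m256i s   = _mm256_sllv_epi32(v, cnt);          /* 1, 2, 4, ..., 128 */

    float gf[8]; int si[8];
    _mm256_storeu_ps(gf, g);
    _mm256_storeu_si256((__m256i *)si, s);
    for (int i = 0; i < 8; i++) printf("%g/%d ", gf[i], si[i]);
    printf("\n");
    return 0;
}
```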
CPUs with AVX2
- Intel
- Haswell processor, Q2 2013
- Haswell E processor, Q3 2014
- Broadwell processor, Q4 2014
- Broadwell E processor, Q3 2016
- Skylake processor, Q3 2015
- Kaby Lake processor, expected in 2016
- Cannonlake processor, expected in 2017
- AMD
- Excavator-based processor, Q2 2015
AVX-512
AVX-512 is a set of 512-bit extensions to the 256-bit Advanced Vector Extensions SIMD instructions for the x86 instruction set architecture, proposed by Intel in July 2013 and scheduled to be first supported in 2015 by Intel's Knights Landing processor.[1]
AVX-512 instructions are encoded with the new EVEX prefix. It allows 4 operands, 7 new 64-bit opmask registers, a scalar memory mode with automatic broadcast, explicit rounding control, and a compressed displacement memory addressing mode. The width of the register file is increased to 512 bits, and the total register count is increased to 32 (registers ZMM0–ZMM31) in x86-64 mode.
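As a hedged illustration of the opmask registers, a minimal C intrinsics sketch (requires an AVX-512F processor and a build with -mavx512f; _mm512_mask_add_ps is the masked form of the EVEX-encoded VADDPS):

```c
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m512 a = _mm512_set1_ps(1.0f);        /* sixteen single-precision lanes */
    __m512 b = _mm512_set1_ps(2.0f);
    __mmask16 k = 0x00FF;                   /* enable only the low 8 lanes */

    /* Masked add: lanes with k=1 receive a+b, lanes with k=0 keep the value
       from the first (src) operand instead of being overwritten. */
    __m512 c = _mm512_mask_add_ps(a, k, a, b);

    float out[16];
    _mm512_storeu_ps(out, c);
    for (int i = 0; i < 16; i++) printf("%g ", out[i]);   /* 3 ... 3 1 ... 1 */
    printf("\n");
    return 0;
}
```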
AVX-512 consists of multiple extensions, not all of which are meant to be supported by all processors implementing them. The instruction set consists of the following:
- AVX-512 Foundation – adds several new instructions and expands most 32-bit and 64-bit floating point SSE-SSE4.1 and AVX/AVX2 instructions with EVEX coding scheme to support the 512-bit registers, operation masks, parameter broadcasting, and embedded rounding and exception control
- AVX-512 Conflict Detection Instructions (CDI) – efficient conflict detection to allow more loops to be vectorized, supported by Knights Landing[1]
- AVX-512 Exponential and Reciprocal Instructions (ERI) – exponential and reciprocal operations designed to help implement transcendental operations, supported by Knights Landing[1]
- AVX-512 Prefetch Instructions (PFI) – new prefetch capabilities, supported by Knights Landing[1]
- AVX-512 Vector Length Extensions (VL) – extends most AVX-512 operations to also operate on XMM (128-bit) and YMM (256-bit) registers (including XMM16-XMM31 and YMM16-YMM31 in x86-64 mode)[19]
- AVX-512 Byte and Word Instructions (BW) – extends AVX-512 to cover 8-bit and 16-bit integer operations[19]
- AVX-512 Doubleword and Quadword Instructions (DQ) – enhanced 32-bit and 64-bit integer operations[19]
- AVX-512 Integer Fused Multiply Add (IFMA) – fused multiply add for 52-bit integers.[20]: 746
- AVX-512 Vector Byte Manipulation Instructions (VBMI) – adds vector byte permutation instructions which are not present in AVX-512BW.
- AVX-512 Vector Neural Network Instructions Word variable precision (VNNIW) – vector instructions for deep learning.
- AVX-512 Fused Multiply Accumulation Packed Single precision (FMAPS) – vector instructions for deep learning.
Only the core extension AVX-512F (AVX-512 Foundation) is required by all implementations; desktop processors will additionally support CDI, VL, and BW/DQ, while computing coprocessors will support CDI, ERI and PFI.
The updated SSE/AVX instructions in AVX-512F use the same mnemonics as AVX versions; they can operate on 512-bit ZMM registers, and will also support 128/256 bit XMM/YMM registers (with AVX-512VL) and byte, word, doubleword and quadword integer operands (with AVX-512BW/DQ and VBMI).[20]: 23
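Since only AVX-512F is guaranteed, software typically probes for each subset before using it. A hedged sketch with the GCC/Clang builtin (recent compiler versions; the feature strings follow the CPUID flag names):

```c
#include <stdio.h>

int main(void) {
    /* Each call checks the corresponding CPUID feature flag at run time. */
    printf("AVX-512F:  %s\n", __builtin_cpu_supports("avx512f")  ? "yes" : "no");
    printf("AVX-512CD: %s\n", __builtin_cpu_supports("avx512cd") ? "yes" : "no");
    printf("AVX-512ER: %s\n", __builtin_cpu_supports("avx512er") ? "yes" : "no");
    printf("AVX-512PF: %s\n", __builtin_cpu_supports("avx512pf") ? "yes" : "no");
    printf("AVX-512VL: %s\n", __builtin_cpu_supports("avx512vl") ? "yes" : "no");
    printf("AVX-512BW: %s\n", __builtin_cpu_supports("avx512bw") ? "yes" : "no");
    printf("AVX-512DQ: %s\n", __builtin_cpu_supports("avx512dq") ? "yes" : "no");
    return 0;
}
```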
CPUs with AVX-512
AVX-512 Subset | F | CD | ER | PF | VL | BW | DQ | IFMA | VBMI |
---|---|---|---|---|---|---|---|---|---|
Xeon Phi x200 (aka Knights Landing, either host processor or coprocessor, 2016) | Yes | Yes | Yes | Yes | No | No | No | No | No |
Skylake EP/EX Xeon "Purley" (Xeon X5-E26xx V5) processors (expected in H2 2017) | Yes | Yes | No | No | Yes | Yes | Yes | No | No |
Cannonlake processors (expected in 2017) | Yes | Yes | No | No | Yes | Yes | Yes | Yes | Yes |
Applications
- Suitable for floating point-intensive calculations in multimedia, scientific and financial applications (integer operations were added later in AVX2).
- Increases parallelism and throughput in floating point SIMD calculations.
- Reduces register load due to the non-destructive instructions.
- Improves Linux RAID software performance (requires AVX2; AVX is not sufficient)[22]
Software
- Blender uses AVX2 in its Cycles render engine.
- Prime95/MPrime, the software used for GIMPS, has used AVX instructions since version 27.x.
- dnetc, the software used by distributed.net, has an AVX2 core available for its RC5 project and will soon release one for its OGR-28 project.
- Einstein@Home uses AVX in some of its distributed applications that search for gravitational waves.[23]
See also
References
- ^ a b c d e James Reinders (23 July 2013), AVX-512 Instructions, Intel, retrieved 20 August 2013
- ^ a b Haswell New Instruction Descriptions Now Available, Software.intel.com, retrieved 2012-01-17
- ^ "14.9". Intel 64 and IA-32 Architectures Software Developer's Manual Volume 1: Basic Architecture (PDF) (-051US ed.). Intel Corporation. p. 349. Retrieved 23 August 2014.
Memory arguments for most instructions with VEX prefix operate normally without causing #GP(0) on any byte-granularity alignment (unlike Legacy SSE instructions).
- ^ "i386 and x86-64 Options - Using the GNU Compiler Collection (GCC)". Retrieved 2014-02-09.
- ^ "The microarchitecture of Intel, AMD and VIA CPUs - An optimization guide for assembly programmers and compiler makers" (PDF). Retrieved 17 October 2016.
- ^ "Chess programming AVX2". Retrieved 17 October 2016.
- ^ "Intel Offers Peek at Nehalem and Larrabee". ExtremeTech. 2008-03-17.
- ^ "Intel Core i7-3960X Processor Extreme Edition". Retrieved 2012-01-17.
- ^ Dave Christie (2009-05-07), Striking a balance, AMD Developer blogs, retrieved 2012-01-17
- ^ New "Bulldozer" and "Piledriver" Instructions (PDF), AMD, October 2012
- ^ YASM 0.7.0 Release Notes http://yasm.tortall.net/releases/Release0.7.0.html
- ^ Twitter, retrieved 2010-06-23
- ^ x86: add linux kernel support for YMM state, retrieved 2009-07-13
- ^ Linux 2.6.30 - Linux Kernel Newbies, retrieved 2009-07-13
- ^ Floating-Point Support for 64-Bit Drivers, retrieved 2009-12-06
- ^ Add support for the extended FPU states on amd64, both for native 64bit and 32bit ABIs, svnweb.freebsd.org, 2012-01-21, retrieved 2012-01-22
- ^ "FreeBSD 9.1-RELEASE Announcement". Retrieved 2013-05-20.
- ^ Add support for saving/restoring FPU state using the XSAVE/XRSTOR, retrieved 2015-03-25
- ^ a b c James Reinders (17 July 2014). "Additional AVX-512 instructions". Intel. Retrieved 3 August 2014.
- ^ a b "Intel Architecture Instruction Set Extensions Programming Reference" (PDF). Intel. Retrieved 2014-01-29.
- ^ "Intel® Software Development Emulator | Intel® Software". software.intel.com. Retrieved 2016-06-11.
- ^ "Linux RAID". LWN. 2013-02-17.
- ^ "Einstein@Home Applications".