NEC SX-Aurora TSUBASA A300-8 server with eight vector engines on display at the NEC booth at SC'17 in Denver

The NEC SX-Aurora TSUBASA is a vector processor of the NEC SX architecture family.[1][2] Unlike previous SX supercomputers, the SX-Aurora TSUBASA is provided as a PCIe card, termed by NEC a "Vector Engine" (VE).[2] Up to eight VE cards can be inserted into a vector host (VH), typically an x86-64 server running the Linux operating system.[2] The product was announced in a press release on 25 October 2017, and NEC began selling it in February 2018.[3] It succeeds the SX-ACE.


SX-Aurora TSUBASA is a successor to the NEC SX series and SUPER-UX, the vector computer systems upon which the Earth Simulator supercomputer is based. Its hardware consists of x86 Linux hosts with vector engines (VEs) connected via a PCI Express (PCIe) interconnect.[4]

Its high memory bandwidth (0.75–1.2 TB/s) comes from eight cores and six HBM2 memory modules on a silicon interposer, implemented in the form factor of a PCIe card.[5] Operating system functionality for the VE is offloaded to the VH and handled mainly by the user-space daemons of VEOS.[6]

Depending on the clock frequency (1.4 or 1.6 GHz), each VE CPU has eight cores and a peak double-precision performance of 2.15 or 2.45 TFLOPS. The processor was the world's first implementation of six HBM2 modules on a silicon interposer, with a total of 24 or 48 GB of high-bandwidth memory. It is integrated in the form factor of a standard full-length, full-height, double-width PCIe card hosted by an x86-64 server, the vector host (VH). A server can host up to eight VEs, and clusters of VHs can scale to an arbitrary number of nodes.[1][7][2]
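The quoted peak figures follow directly from the clock frequency and the per-core throughput of 192 double-precision operations per cycle described under Functional units below. A quick sanity check (the helper function is illustrative, not part of any NEC tooling):

```python
# Peak performance = cores x ops/cycle x clock.
# 192 DP ops/cycle per core comes from the functional-unit description.
def cpu_peak_tflops(cores, clock_ghz, ops_per_cycle=192):
    return cores * ops_per_cycle * clock_ghz * 1e9 / 1e12

print(round(cpu_peak_tflops(8, 1.6), 2))  # 2.46 (quoted as 2.45 TFLOPS)
print(round(cpu_peak_tflops(8, 1.4), 2))  # 2.15
```

The same formula reproduces the 3.07 TFLOPS figure for the ten-core version 2 engine.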

Product releases

Version 2 Vector Engine[8]

SKU                                   20A    20B
Clock speed (GHz)                     1.6    1.6
Number of cores                       10     8
Core peak performance (DP GFLOPS)     307    307
Core peak performance (SP GFLOPS)     614    614
CPU peak performance (DP TFLOPS)      3.07   2.45
CPU peak performance (SP TFLOPS)      6.14   4.91
Memory bandwidth (TB/s)               1.53   1.53
Memory capacity (GB)                  48     48

Version 1 Vector Engine

Version 1.0 of the Vector Engine was produced in a 16 nm FinFET process (from TSMC) and released in three SKUs (subsequent variants append an E to the name):[9]

SKU                                   10A    10B    10C    10AE   10BE   10CE
Clock speed (GHz)                     1.6    1.4    1.4    1.584  1.408  1.400
Number of cores                       8      8      8      8      8      8
Core peak performance (DP GFLOPS)     307.2  268.8  268.8  304    270    268
Core peak performance (SP GFLOPS)     614    537    537    608    540    537
CPU peak performance (DP TFLOPS)      2.45   2.15   2.15   2.43   2.16   2.15
CPU peak performance (SP TFLOPS)      4.9    4.3    4.3    4.86   4.32   4.30
Memory bandwidth (TB/s)               1.2    1.2    0.75   1.35   1.35   1.00
Memory capacity (GB)                  48     48     24     48     48     24
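One way to compare the SKUs is the byte-per-flop ratio, a metric traditionally emphasized for NEC SX vector systems. The comparison below is derived from the table's own figures, not from NEC's spec sheets:

```python
# Memory bandwidth (TB/s) and DP peak (TFLOPS) per version 1 SKU,
# taken from the table above.
skus = {
    "10A": (1.2, 2.45),
    "10B": (1.2, 2.15),
    "10C": (0.75, 2.15),
}

# Bytes transferred per double-precision flop at peak.
for name, (bw_tbs, peak_tflops) in skus.items():
    print(name, round(bw_tbs / peak_tflops, 2))
```

The ratios (roughly 0.49, 0.56 and 0.35 B/FLOP) show that the 10C trades memory bandwidth, not compute, against the other SKUs.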

Functional units

Each of the eight SX-Aurora cores has 64 logical vector registers,[10] each 256 × 64 bits long, implemented as a combination of pipelining and 32-fold parallel SIMD units. The registers feed three fused multiply-add (FMA) floating-point units that can run in parallel, two arithmetic logic units (ALUs) handling fixed-point operations, and a divide and square-root pipe.[10] Considering only the FMA units and their 32-fold SIMD parallelism, a vector core is capable of 192 double-precision operations per cycle.[10] In "packed" vector operations, where two single-precision values are loaded into the space of one double-precision slot in the vector registers, the vector unit delivers twice as many operations per clock cycle as in double precision.
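The 192-operations-per-cycle figure decomposes as follows (each FMA counts as two floating-point operations, a multiply and an add):

```python
# Per-core throughput from the functional-unit description.
fma_pipes = 3       # three parallel FMA units
simd_width = 32     # 32-fold SIMD parallelism
flops_per_fma = 2   # multiply + add

dp_ops_per_cycle = fma_pipes * simd_width * flops_per_fma
print(dp_ops_per_cycle)      # 192 double-precision ops/cycle
print(2 * dp_ops_per_cycle)  # 384 ops/cycle in "packed" single precision
```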

A Scalar Processing Unit (SPU) on each core handles the non-vector instructions.

Memory and caches

The memory of the SX-Aurora TSUBASA processor consists of six second-generation High Bandwidth Memory (HBM2) modules implemented in the same package as the CPU with the help of Chip-on-Wafer-on-Substrate technology. Depending on the processor model, the HBM2 modules are 3D stacks of either four or eight dies, with 4 or 8 GB capacity each. The SX-Aurora CPUs thus have either 24 GB or 48 GB of HBM2 memory. The models implemented with the large HBM2 modules have 1.2 TB/s memory bandwidth.[11]
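The capacity and bandwidth figures are simple multiples of the six-module configuration:

```python
# Six HBM2 stacks per package; capacity scales with stack size.
modules = 6
capacity_small = modules * 4  # 4 GB stacks -> total GB
capacity_large = modules * 8  # 8 GB stacks -> total GB
print(capacity_small, capacity_large)  # 24 48

# At 1.2 TB/s aggregate, each stack contributes 0.2 TB/s.
print(round(1.2 / modules, 2))  # 0.2
```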

The cores of a vector engine share 16 MB of last-level cache (LLC), a write-back cache directly connected to the vector registers and to the L2 cache of the SPU. The LLC cache line size is 128 bytes. The priority of data retention in the LLC can to some extent be controlled in software, allowing the programmer to specify which variables or arrays should be retained in cache, a feature comparable to the Advanced Data Buffer (ADB) of the NEC SX-ACE.


NEC is currently selling the SX-Aurora TSUBASA vector engine integrated into five platforms:[12][9]

  • A111-1: a tower PC with one VE card of type 10B
  • A101-1: a tower PC with one VE card of type 10CE
  • A311-4: a dual socket 1U 19 inch rack-mountable Xeon scalable server equipped with up to four VE cards of type BE
  • A311-8: a dual socket 4U 19 inch rack-mountable Xeon scalable server with up to eight VE cards of type BE
  • A511-64: a 19 inch rack equipped with 64 VEs of type AE. This is the only configuration that is explicitly sold as a supercomputer.

Within a VH node, VEs can communicate with each other through PCIe. Large parallel systems built with the SX-Aurora use InfiniBand in a PeerDirect setup as the interconnect.

NEC also used to sell the SX-Aurora TSUBASA vector engine integrated into five platforms:

  • A100-1: a tower PC with one VE card of type 10C.
  • A300-2: a single socket 1U rack-mountable Skylake server equipped with up to two VE cards of type 10B or 10C.
  • A300-4: a dual socket 1U rack-mountable Skylake server equipped with up to four VE cards of type 10B or 10C.
  • A300-8: a dual socket 4U rack-mountable Skylake server with up to eight VE cards of type 10B or 10C.
  • A500-64: a rack equipped with either Intel Xeon Silver 4100 family or Intel Xeon Gold 6100 family CPUs and 32, 48 or 64 VEs of type 10A or 10B.[13]

All types are exclusively air-cooled, with the exception of the A500 series, which also utilizes water cooling.


Operating system

The operating system of the vector engine (VE) is called "VEOS", and has been offloaded entirely to the host system, the vector host (VH).[14] VEOS consists of kernel modules and user space daemons that:

  • manage VE processes and their scheduling on the VE
  • manage the virtual memory address spaces of the VE processes
  • handle transfers between VH and VE memory with the help of the VE DMA engines
  • handle interrupts and exceptions of VE processes, as well as their system calls.[15]

VEOS supports multitasking on the VE and almost all Linux system calls are supported in the VE libc.[15] Offloading operating system services to the VH shifts OS jitter away from the VE at the expense of increased latencies.[15] All VE operating system related packages are licensed under the GNU General Public License and have been published at

NEC later seems to have abandoned VEOS in favor of Red Hat Enterprise Linux or CentOS.

Software development

A Software Development Kit is available from NEC for developers and customers. It contains proprietary products and must be purchased from NEC. The SDK contains:

  • C, C++ and Fortran compilers that support automatic vectorization and automatic parallelization as well as OpenMP.[16]
  • Performance optimization tools: ftraceviewer and veperf.[17]
  • Optimized numerical libraries for the VE: BLAS, SBLAS, LAPACK, SCALAPACK, ASL, Heterosolver.[18]

NEC MPI is likewise a proprietary implementation and conforms to the MPI-3.1 standard specification.[19]

Hybrid programs that use the VE as an accelerator for certain kernel functions called from the host can be created using the VE offloading (VEO) C API.[20] To some extent VE offloading is comparable to OpenCL and CUDA, but it provides a simpler API and allows the kernels to be written in normal C, C++ or Fortran and to use almost any system call on the VE.[citation needed] Python bindings to VEO are available at


  1. ^ a b "NEC SX-Aurora TSUBASA - Vector Engine". Retrieved 2018-03-20.
  2. ^ a b c d Morgan, Timothy Prickett (October 27, 2017). "Can Vector Supercomputing Be Revived?". The Next Platform.
  3. ^ "NEC releases new high-end HPC product line, SX-Aurora TSUBASA". NEC. Retrieved 2018-03-21.
  4. ^ Imai, Teruyuki (2019), Gerofi, Balazs; Ishikawa, Yutaka; Riesen, Rolf; Wisniewski, Robert W. (eds.), "NEC Earth Simulator and the SX-Aurora TSUBASA", Operating Systems for Supercomputers and High Performance Computing, High-Performance Computing Series, Singapore: Springer, vol. 1, pp. 139–160, doi:10.1007/978-981-13-6624-6_9, ISBN 978-981-13-6624-6
  5. ^ Morgan, Timothy Prickett (2017-11-22). "A Deep Dive Into NEC's Aurora Vector Engine". The Next Platform. Retrieved 2020-07-02.
  6. ^ Focht, Erich. "First steps with the SX-Aurora TSUBASA vector engine". Retrieved 2020-07-02.
  7. ^ SX-Aurora TSUBASA Brochure
  8. ^ "NEC Vector Engine Models". Retrieved 15 September 2020.
  9. ^ a b[bare URL PDF]
  10. ^ a b c "NEC SX-Aurora TSUBASA Architecture". Retrieved 2018-03-20.
  11. ^ "SX-Aurora - Microarchitectures - NEC - WikiChip". Retrieved 2020-07-02.
  12. ^ "NEC SX-Aurora TSUBASA".
  13. ^ "NEC SX-Aurora TSUBASA A500-64".
  14. ^ "NEC SX Aurora TSUBASA — VSC documentation 1.0 documentation". Retrieved 2020-07-02.
  15. ^ a b c "A Look at NEC's Latest Vector Processor, the SX-Aurora". WikiChip Fuse. 2018-12-09. Retrieved 2020-08-27.
  16. ^ "NEC SX Aurora TSUBASA — VSC documentation 1.0 documentation". Retrieved 2020-08-27.
  17. ^ "NEC SX-Aurora TSUBASA Documentation".
  18. ^ "NEC SX-Aurora TSUBASA Vector System". Rechenzentrum der CAU. Retrieved 2020-08-27.
  19. ^ "NEC MPI User's Guide".
  20. ^ "SX-Aurora/veoffload". GitHub. Retrieved 2018-03-21.

External links