NEC SX-Aurora TSUBASA
The NEC SX-Aurora TSUBASA is a vector processor of the NEC SX architecture family. Unlike the previous SX supercomputers, the SX-Aurora TSUBASA is provided as a PCIe card, termed by NEC a "Vector Engine" (VE). Up to eight VE cards can be inserted into a vector host (VH), typically an x86-64 server running the Linux operating system. The product was announced in a press release on October 25, 2017, and NEC started selling it in February 2018. It succeeds the SX-ACE.
While its predecessors were all built in the form factor of a mainframe running some flavor of the proprietary SUPER-UX UNIX and using big-endian data representation, the SX-Aurora TSUBASA breaks with these traditions: it is implemented as a PCIe card, uses little-endian data representation like x86-64 PCs, and has a Linux look and feel on the operating system side.
It features very high memory bandwidth (0.75–1.2 TB/s), eight cores, and six HBM2 memory modules on a silicon interposer, implemented in the form factor of a PCIe card. Operating system functionality for the VE is offloaded to the VH and handled mainly by the user-space daemons of VEOS.
Depending on the clock frequency (1.4 or 1.6 GHz), each Vector Engine (VE) CPU has eight cores and a peak performance of 2.15 or 2.45 TFLOPS in double precision. The processor is the world's first implementation of six HBM2 modules on a silicon interposer, with a total of 24 or 48 GB of high-bandwidth memory. It is integrated in the form factor of a standard full-length, full-height, double-width PCIe card hosted by an x86-64 server, the Vector Host (VH). A server can host up to eight VEs, and clusters of VHs can scale to an arbitrary number of nodes.
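The quoted peak figures can be reproduced from the clock frequency and the per-core throughput of 192 double-precision operations per cycle described in the core section below; a quick back-of-the-envelope check in Python:

```python
def peak_dp_gflops(clock_ghz, cores=8, flop_per_cycle=192):
    """Peak double-precision GFLOPS = clock (GHz) x cores x FLOP/cycle."""
    return clock_ghz * cores * flop_per_cycle

print(round(peak_dp_gflops(1.4), 1))  # 2150.4 -> quoted as 2.15 TFLOPS
print(round(peak_dp_gflops(1.6), 1))  # 2457.6 -> quoted (truncated) as 2.45 TFLOPS
```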
Version 1.0 of the Vector Engine was produced in a 16 nm FinFET process and released in three SKUs:
| | Type 10A | Type 10B | Type 10C |
|---|---|---|---|
| Clock frequency (GHz) | 1.6 | 1.4 | 1.4 |
| Number of cores | 8 | 8 | 8 |
| Core peak performance (double precision GFLOPS) | 307.2 | 268.8 | 268.8 |
| CPU peak performance (double precision TFLOPS) | 2.45 | 2.15 | 2.15 |
| CPU peak performance (single precision TFLOPS) | 4.91 | 4.30 | 4.30 |
| Memory bandwidth (TB/s) | 1.2 | 1.2 | 0.75 |
| Memory capacity (GB) | 48 | 48 | 24 |
Each of the eight SX-Aurora cores has 64 logical vector registers, each 256 × 64 bits long, implemented as a mix of pipelining and 32-fold parallel SIMD units. The registers are connected to three fused multiply-add (FMA) floating-point units that can run in parallel, as well as two arithmetic logic units (ALUs) handling fixed-point operations and a divide and square root pipe. Considering only the FMA units and their 32-fold SIMD parallelism, a vector core is capable of 192 double-precision operations per cycle. In "packed" vector operations, where two single-precision values are loaded into the space of one double-precision slot in the vector registers, the vector unit delivers twice as many operations per clock cycle as in double precision.
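The 192 operations per cycle follow from counting each fused multiply-add as two floating-point operations across the three pipes and 32 SIMD lanes; a minimal sketch:

```python
fma_pipes = 3      # three parallel FMA units per core
simd_lanes = 32    # 32-fold parallel SIMD within each pipe
flop_per_fma = 2   # one fused multiply-add = two floating-point operations

dp_flop_per_cycle = fma_pipes * simd_lanes * flop_per_fma
print(dp_flop_per_cycle)      # 192 double-precision FLOP per cycle
print(dp_flop_per_cycle * 2)  # 384 per cycle in "packed" single precision
```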
A Scalar Processing Unit (SPU) handles non-vector instructions on each of the cores.
Memory and Caches
The memory of the SX-Aurora TSUBASA processor consists of six second-generation High Bandwidth Memory (HBM2) modules implemented in the same package as the CPU with the help of Chip-on-Wafer-on-Substrate technology. Depending on the processor model, the HBM2 modules are 3D stacks of either four or eight dies, with either 4 or 8 GB capacity each. The SX-Aurora CPUs thus have either 24 GB or 48 GB of HBM2 memory. The models implemented with the large HBM2 modules have 1.2 TB/s memory bandwidth.
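The two capacity variants follow directly from the module configuration:

```python
hbm2_modules = 6
small_cpu_gb = hbm2_modules * 4   # 4 GB per module -> 24 GB total
large_cpu_gb = hbm2_modules * 8   # 8 GB per module -> 48 GB total
print(small_cpu_gb, large_cpu_gb)  # 24 48
```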
The cores of a vector engine share 16 MB of last-level cache (LLC), a write-back cache directly connected to the vector registers and to the L2 cache of the SPU. The LLC cache line size is 128 bytes. The priority of data retention in the LLC can to some extent be controlled in software, allowing the programmer to specify which of the variables or arrays should be retained in cache, a feature comparable to the Advanced Data Buffer (ADB) of the NEC SX-ACE.
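To put the line size in perspective, a single full vector register spans several LLC lines (an illustrative calculation from the figures above, not an NEC specification):

```python
vreg_elements = 256   # 64-bit elements per vector register
element_bytes = 8
llc_line_bytes = 128

vreg_bytes = vreg_elements * element_bytes
print(vreg_bytes)                    # 2048 bytes in one vector register
print(vreg_bytes // llc_line_bytes)  # 16 LLC lines to fill one register
```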
NEC is currently selling the SX-Aurora TSUBASA vector engine integrated into five platforms:
- A100-1: a tower PC with one VE card of type 10C.
- A300-2: a single socket 1U rack-mountable Skylake server equipped with up to two VE cards of type 10B or 10C.
- A300-4: a dual socket 1U rack-mountable Skylake server equipped with up to four VE cards of type 10B or 10C.
- A300-8: a dual socket 4U rack-mountable Skylake server with up to eight VE cards of type 10B or 10C.
- A500-64: a rack equipped with 32, 48 or 64 VEs of type 10A or 10B.
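Assuming the fastest 2.45 TFLOPS VE variant throughout, the aggregate double-precision peak of the larger configurations can be estimated (an illustrative calculation, not an NEC figure):

```python
ve_peak_tflops = 2.45  # fastest SKU, double precision

# A300-8 with 8 VEs and the three A500-64 configurations
for num_ves in (8, 32, 48, 64):
    print(num_ves, "VEs:", round(num_ves * ve_peak_tflops, 1), "TFLOPS")
```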
Within a VH node, VEs can communicate with each other through PCIe. Large parallel systems built with SX-Aurora use InfiniBand in a PeerDirect setup as the interconnect.
The operating system of the vector engine (VE) is called "VEOS", and has been offloaded entirely to the host system, the vector host (VH). VEOS consists of kernel modules and user space daemons that:
- manage VE processes and their scheduling on the VE,
- manage the virtual memory address spaces of the VE processes,
- handle transfers between VH and VE memory with the help of the VE DMA engines,
- handle interrupts and exceptions of VE processes, as well as their system calls.
VEOS supports multitasking on the VE, and almost all Linux system calls are supported in the VE libc. Offloading operating system services to the VH shifts OS jitter away from the VE at the expense of increased latencies. All VE operating-system-related packages are licensed under the GNU General Public License and have been published on GitHub.
A Software Development Kit is available from NEC for developers and customers. It contains proprietary products and must be purchased from NEC. The SDK contains:
- C, C++ and Fortran compilers that support automatic vectorization and automatic parallelization as well as OpenMP.
- Performance optimization tools: ftraceviewer and veperf.
- Optimized numerical libraries for the VE: BLAS, SBLAS, LAPACK, SCALAPACK, ASL, Heterosolver.
NEC MPI is also a proprietary implementation and conforms to the MPI-3.1 standard specification.
Hybrid programs can be created that use the VE as an accelerator for certain kernel functions of a host program by using the VE Offloading (VEO) C API. VE offloading is to some extent comparable to OpenCL and CUDA, but provides a simpler API and allows the kernels to be developed in normal C, C++, or Fortran and to use almost any syscall on the VE. Python bindings to VEO are available on GitHub.