|Original author(s)||Continuum Analytics|
|Initial release||15 August 2012|
|Stable release||0.39.0 / 6 July 2018|
|Preview release||0.40.0dev0 / 10 July 2018|
|Written in||Python, C|
Numba is an open-source, NumPy-aware optimizing compiler for Python sponsored by Anaconda, Inc. and a grant from the Gordon and Betty Moore Foundation. It uses the LLVM compiler infrastructure to compile Python to CPU and GPU machine code.
Numba uses LLVM to compile Python code, at runtime, to code that executes natively on the CPU or GPU. Compilation is triggered by decorating Python functions, which lets users create native functions for specific input types, or have them created on the fly:
@jit('f8(f8[:])')
def sum1d(my_double_array):
    total = 0.0
    for i in range(my_double_array.shape[0]):
        total += my_double_array[i]
    return total
To make the above example work for any compatible input type, the explicit signature can be omitted; Numba then specializes the function automatically for each input type it is called with:
@jit
def sum1d(my_array):
    ...
Numba can compile Python functions to GPU code. Two approaches are currently available: writing CUDA kernels for NVIDIA GPUs, and targeting AMD GPUs through HSA. A CUDA kernel is written with the @cuda.jit decorator:
@cuda.jit
def increment_a_2D_array(an_array):
    x, y = cuda.grid(2)
    if x < an_array.shape[0] and y < an_array.shape[1]:
        an_array[x, y] += 1
For AMD HSA-enabled GPUs, the @hsa.jit decorator is used analogously; passing device=True declares a device function, which can only be called from other GPU functions:
@hsa.jit(device=True)
def a_device_function(a, b):
    return a + b
The following projects are alternative approaches to accelerating Python: