# Row-major order

In computing, row-major order and column-major order describe methods for arranging multidimensional arrays in linear storage such as memory.

The difference is simply this: in row-major order, rows of the array are contiguous in memory; in column-major order, the columns are contiguous.

Array layout is critical for correctly passing arrays between programs written in different languages. It is also important for performance when traversing an array because accessing array elements that are contiguous in memory is usually faster than accessing elements which are not, due to caching.[1] In some media such as tape or flash memory, accessing sequentially is orders of magnitude faster than nonsequential access.
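The performance effect of traversal order can be sketched in plain Python, emulating a row-major 2D array with a flat list (the dimensions and names here are illustrative, not from the original):

```python
# Sketch: a 4x3 array stored row-major in a flat list.
# Row-by-row traversal visits elements in storage order (stride 1),
# while column-by-column traversal jumps by `cols` elements each step.
rows, cols = 4, 3
flat = list(range(rows * cols))  # row-major storage of a 4x3 array

def row_wise(a, rows, cols):
    # Consecutive reads: offset r*cols + c increases by 1 each step.
    return [a[r * cols + c] for r in range(rows) for c in range(cols)]

def column_wise(a, rows, cols):
    # Strided reads: offset jumps by `cols` between consecutive accesses.
    return [a[r * cols + c] for c in range(cols) for r in range(rows)]

print(row_wise(flat, rows, cols))     # [0, 1, 2, 3, ..., 11]
print(column_wise(flat, rows, cols))  # [0, 3, 6, 9, 1, 4, ...]
```

In a language with real contiguous arrays (C, Fortran), the first pattern benefits from caching; the second incurs a cache-unfriendly stride.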

## Explanation and example

Following conventional matrix notation, rows are numbered by the first index of a two-dimensional array and columns by the second index, i.e., $a_{1,2}$ is the second element of the first row, counting downwards and rightwards. (Note this is the opposite of Cartesian conventions.)

The difference between row-major and column-major order is simply that the order of the dimensions is reversed. Equivalently, in row-major order the rightmost indices vary faster as one steps through consecutive memory locations, while in column-major order the leftmost indices vary faster.

This array

$\begin{bmatrix} 11 & 12 & 13 \\ 21 & 22 & 23 \end{bmatrix}$

would be stored as follows in the two orders:

- Row-major order: 11, 12, 13, 21, 22, 23
- Column-major order: 11, 21, 12, 22, 13, 23
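The two linearizations of the example matrix can be computed with a short sketch (plain Python, zero-based nested lists):

```python
# The 2x3 example matrix from above.
A = [[11, 12, 13],
     [21, 22, 23]]

# Row-major: concatenate the rows; elements of a row stay adjacent.
row_major = [A[r][c] for r in range(2) for c in range(3)]

# Column-major: concatenate the columns; elements of a column stay adjacent.
col_major = [A[r][c] for c in range(3) for r in range(2)]

print(row_major)  # [11, 12, 13, 21, 22, 23]
print(col_major)  # [11, 21, 12, 22, 13, 23]
```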

## Programming Languages

Programming languages that support multi-dimensional arrays typically have a native storage order for these arrays.

Row-major order is used in C/C++, Mathematica, PL/I, Pascal, Python (the NumPy default), Speakeasy, SAS, and others.

Column-major order is used in Fortran, OpenGL and OpenGL ES, MATLAB,[2] GNU Octave, S-Plus,[3] R,[4] Julia, Rasdaman, and Scilab.

## Transposition

As exchanging the indices of an array is the essence of array transposition, an array stored as row-major but read as column-major (or vice versa) will appear transposed. As actually performing this rearrangement in memory is typically an expensive operation, some systems provide options to specify individual matrices as being stored transposed.

For example, the Basic Linear Algebra Subprograms functions are passed flags indicating which arrays are transposed.[5]
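The transposition effect can be demonstrated in a small sketch (plain Python; the buffer reinterpretation here is illustrative): a 2×3 matrix is stored row-major, and the same flat buffer is then read back as a column-major 3×2 array.

```python
# A 2x3 matrix stored row-major in a flat buffer.
A = [[11, 12, 13],
     [21, 22, 23]]
flat = [A[r][c] for r in range(2) for c in range(3)]  # [11, 12, 13, 21, 22, 23]

# Reinterpret the same buffer as a column-major 3x2 array:
# element (i, j) sits at offset i + 3*j (first index contiguous).
B = [[flat[i + 3 * j] for j in range(2)] for i in range(3)]

# B equals the transpose of A, with no data movement.
transpose_A = [[A[r][c] for r in range(2)] for c in range(3)]
print(B)  # [[11, 21], [12, 22], [13, 23]]
print(B == transpose_A)  # True
```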

## Address calculation in general

The concept generalizes to arrays with more than two dimensions.

Consider a d-dimensional $N_1 \times N_2 \times \cdots \times N_d$ array with dimensions $N_k$ ($k = 1, \ldots, d$). A given element of this array is specified by a tuple $(n_1, n_2, \ldots, n_d)$ of $d$ zero-based indices $n_k \in [0, N_k - 1]$.

In row-major order, the last dimension is contiguous, so that the memory-offset of this element is given by:

$n_d + N_d \cdot (n_{d-1} + N_{d-1} \cdot (n_{d-2} + N_{d-2} \cdot (\cdots + N_2 n_1)\cdots)) = \sum_{k=1}^d \left( \prod_{\ell=k+1}^d N_\ell \right) n_k$
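The nested form of this formula is exactly Horner's rule, which gives a compact sketch (plain Python; the function name is illustrative):

```python
def row_major_offset(indices, dims):
    """Row-major offset of element (n_1, ..., n_d) in an N_1 x ... x N_d
    array: sum over k of n_k times the product of N_l for l > k."""
    offset = 0
    for n, N in zip(indices, dims):
        offset = offset * N + n  # Horner form of the nested formula
    return offset

# Example: a 2 x 3 x 4 array; the last index varies fastest.
print(row_major_offset((0, 0, 1), (2, 3, 4)))  # 1
print(row_major_offset((1, 2, 3), (2, 3, 4)))  # 23, the final element
```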

In column-major order, the first dimension is contiguous, so that the memory-offset of this element is given by:

$n_1 + N_1 \cdot (n_2 + N_2 \cdot (n_3 + N_3 \cdot (\cdots + N_{d-1} n_d)\cdots)) = \sum_{k=1}^d \left( \prod_{\ell=1}^{k-1} N_\ell \right) n_k$
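The column-major formula is the same Horner recurrence run over the dimensions in reverse (plain Python sketch; the function name is illustrative):

```python
def col_major_offset(indices, dims):
    """Column-major offset of element (n_1, ..., n_d): the first
    index varies fastest, so Horner's rule runs from the last
    dimension back to the first."""
    offset = 0
    for n, N in zip(reversed(indices), reversed(dims)):
        offset = offset * N + n  # Horner form of the nested formula
    return offset

# Example: a 2 x 3 x 4 array; the first index varies fastest.
print(col_major_offset((1, 0, 0), (2, 3, 4)))  # 1
print(col_major_offset((1, 2, 3), (2, 3, 4)))  # 23, the final element
```

Both orders map the full index range onto offsets $0, \ldots, N_1 N_2 \cdots N_d - 1$; only the assignment of tuples to offsets differs.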