Row and column vectors


In linear algebra, a column vector or column matrix is an m × 1 matrix, that is, a matrix consisting of a single column of m elements,

\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix} \,.

Similarly, a row vector or row matrix is a 1 × m matrix, that is, a matrix consisting of a single row of m elements,[1]

\mathbf x = \begin{bmatrix} x_1 & x_2 & \dots & x_m \end{bmatrix} \,.

Throughout, boldface is used for the row and column vectors. The transpose (indicated by T) of a row vector is a column vector

\begin{bmatrix} x_1 \; x_2 \; \dots \; x_m \end{bmatrix}^{\rm T} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix} \,,

and the transpose of a column vector is a row vector

\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix}^{\rm T} = \begin{bmatrix} x_1 \; x_2 \; \dots \; x_m \end{bmatrix} \,.
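These transposes can be illustrated numerically. The following is a minimal NumPy sketch (not part of the sourced text): a column vector is represented as an m × 1 array, and `.T` produces the 1 × m row vector, and vice versa.

```python
import numpy as np

# A column vector as a 3 x 1 matrix
x_col = np.array([[1], [2], [3]])

# Its transpose is a 1 x 3 row vector
x_row = x_col.T

print(x_col.shape)  # (3, 1)
print(x_row.shape)  # (1, 3)
```

Transposing twice returns the original column vector, matching the two identities above.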

The set of all row vectors forms a vector space called row space; similarly, the set of all column vectors forms a vector space called column space. The dimension of the row or column space equals the number of entries in the row or column vector.

The column space can be viewed as the dual space to the row space, since any linear functional on the space of column vectors can be represented uniquely as an inner product with a specific row vector.


To simplify writing column vectors in line with other text, they are sometimes written as row vectors with the transpose operation applied to them,

\mathbf{x} = \begin{bmatrix} x_1 \; x_2 \; \dots \; x_m \end{bmatrix}^{\rm T}

or

\mathbf{x} = \begin{bmatrix} x_1, x_2, \dots, x_m \end{bmatrix}^{\rm T} \,.

Some authors also use the convention of writing both column vectors and row vectors as rows, but separating row vector elements with commas and column vector elements with semicolons (see alternative notation 2 in the table below).

Standard matrix notation (array spaces, no commas, transpose signs):
  Row vector:    \begin{bmatrix} x_1 \; x_2 \; \dots \; x_m \end{bmatrix}
  Column vector: \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix} \text{ or } \begin{bmatrix} x_1 \; x_2 \; \dots \; x_m \end{bmatrix}^{\rm T}

Alternative notation 1 (commas, transpose signs):
  Row vector:    \begin{bmatrix} x_1, x_2, \dots, x_m \end{bmatrix}
  Column vector: \begin{bmatrix} x_1, x_2, \dots, x_m \end{bmatrix}^{\rm T}

Alternative notation 2 (commas and semicolons, no transpose signs):
  Row vector:    \begin{bmatrix} x_1, x_2, \dots, x_m \end{bmatrix}
  Column vector: \begin{bmatrix} x_1; x_2; \dots; x_m \end{bmatrix}


Matrix multiplication involves multiplying each row vector of one matrix by each column vector of the other matrix.

The dot product of two vectors a and b is equivalent to the matrix product of the row vector representation of a and the column vector representation of b,

\mathbf{a} \cdot \mathbf{b} = \mathbf{a}^\mathrm{T} \mathbf{b} = \begin{bmatrix}
    a_1 & a_2 & a_3
\end{bmatrix} \begin{bmatrix}
    b_1 \\ b_2 \\ b_3
\end{bmatrix} = a_1 b_1 + a_2 b_2 + a_3 b_3 \,,

which is also equivalent to the matrix product of the row vector representation of b and the column vector representation of a,

\mathbf{b} \cdot \mathbf{a} = \mathbf{b}^\mathrm{T} \mathbf{a} = \begin{bmatrix}
    b_1 & b_2 & b_3
\end{bmatrix} \begin{bmatrix}
    a_1 \\ a_2 \\ a_3
\end{bmatrix} = b_1 a_1 + b_2 a_2 + b_3 a_3 \,.

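The equivalence of the two dot products above can be checked numerically; for instance, in NumPy (an illustrative sketch, not part of the cited sources):

```python
import numpy as np

a = np.array([[1], [2], [3]])  # column vector a
b = np.array([[4], [5], [6]])  # column vector b

# a . b as the matrix product of a row vector and a column vector (a 1x1 matrix)
ab = a.T @ b
# b . a, with the roles of row and column swapped
ba = b.T @ a

print(ab.item())  # 32  (1*4 + 2*5 + 3*6)
print(ab.item() == ba.item())  # True
```

Both products reduce to the same scalar, reflecting the symmetry of the dot product.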
The matrix product of a column vector and a row vector gives the dyadic product of two vectors a and b, an example of the more general tensor product. The matrix product of the column vector representation of a and the row vector representation of b gives the components of their dyadic product,

\mathbf{a} \otimes \mathbf{b} = \mathbf{a} \mathbf{b}^\mathrm{T} = \begin{bmatrix}
    a_1 \\ a_2 \\ a_3
\end{bmatrix} \begin{bmatrix}
    b_1 & b_2 & b_3
\end{bmatrix} = \begin{bmatrix}
a_1b_1 & a_1b_2 & a_1b_3 \\
a_2b_1 & a_2b_2 & a_2b_3 \\
a_3b_1 & a_3b_2 & a_3b_3 \\
\end{bmatrix} \,,

which is not equivalent to the matrix product of the column vector representation of b and the row vector representation of a,

\mathbf{b} \otimes \mathbf{a} = \mathbf{b} \mathbf{a}^\mathrm{T} = \begin{bmatrix}
    b_1 \\ b_2 \\ b_3
\end{bmatrix} \begin{bmatrix}
    a_1 & a_2 & a_3
\end{bmatrix} = \begin{bmatrix}
b_1a_1 & b_1a_2 & b_1a_3 \\
b_2a_1 & b_2a_2 & b_2a_3 \\
b_3a_1 & b_3a_2 & b_3a_3 \\
\end{bmatrix} \,.

In this case the two matrices are different: each is the transpose of the other.
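The relationship between the two dyadic products can be demonstrated numerically; the following NumPy sketch (an illustration, not from the cited sources) shows that a b^T and b a^T differ but are transposes of one another:

```python
import numpy as np

a = np.array([[1], [2], [3]])  # column vector a
b = np.array([[4], [5], [6]])  # column vector b

outer_ab = a @ b.T  # a (x) b: 3x3 matrix with entries a_i * b_j
outer_ba = b @ a.T  # b (x) a: 3x3 matrix with entries b_i * a_j

# The two dyadic products differ, but each is the transpose of the other
print(np.array_equal(outer_ab, outer_ba))    # False
print(np.array_equal(outer_ab.T, outer_ba))  # True
```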

Preferred input vectors for matrix transformations

Frequently a row vector presents itself for an operation within n-space expressed by an n × n matrix M,

 v M = p \,.

Then p is also a row vector and may present to another n × n matrix Q,

 p Q = t \,.

Conveniently, one can write t = p Q = v MQ, telling us that the matrix product transformation MQ can take v directly to t. Continuing with row vectors, matrix transformations further reconfiguring n-space can be applied to the right of previous outputs.
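The row-vector convention can be sketched in NumPy (an illustration with small example matrices, not from the cited sources): transformations compose left to right, and v M Q equals v applied to the single matrix MQ.

```python
import numpy as np

v = np.array([[1.0, 2.0]])      # row vector input, 1 x 2
M = np.array([[0.0, 1.0],
              [1.0, 0.0]])      # example transformation: swaps coordinates
Q = np.array([[2.0, 0.0],
              [0.0, 3.0]])      # example transformation: scales coordinates

p = v @ M   # first transformation, applied on the right
t = p @ Q   # second transformation, also on the right

# The composed transformation is the single matrix M @ Q
print(np.array_equal(t, v @ (M @ Q)))  # True
```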

In contrast, when a column vector is transformed to become another column under an n × n matrix action, the operation occurs to the left,

 p = M v \,,\quad t = Q p ,

leading to the algebraic expression QM v for the composed output from the input v. The matrix transformations accumulate to the left in this use of a column vector for input. Because subsequent transformations then appear in right-to-left order, the column-vector convention runs against the natural bias to read left to right.

Nevertheless, using the transpose operation these differences between inputs of a row or column nature are resolved by an antihomomorphism between the groups arising on the two sides. The technical construction uses the dual space associated with a vector space to develop the transpose of a linear map.
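This antihomomorphism is the familiar rule that transposition reverses the order of a matrix product, (MQ)^T = Q^T M^T. A NumPy sketch (illustrative, not from the cited sources) shows the row-vector and column-vector conventions producing the same result once everything is transposed:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
Q = rng.standard_normal((3, 3))
v = rng.standard_normal((1, 3))   # row vector

# Row-vector convention: v -> v M Q
t_row = v @ M @ Q
# Column-vector convention on the transposed data: v^T -> Q^T M^T v^T
t_col = Q.T @ M.T @ v.T

# Transposition reverses the order of the factors: (M Q)^T = Q^T M^T
print(np.allclose(t_row.T, t_col))  # True
```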

For an instance where this row vector input convention has been used to good effect see Raiz Usmani,[2] where on page 106 the convention allows the statement "The product mapping ST of U into W [is given] by:

\alpha (ST) = (\alpha S) T = \beta T = \gamma."

(The Greek letters represent row vectors.)

Ludwik Silberstein used row vectors for spacetime events; he applied Lorentz transformation matrices on the right in his Theory of Relativity in 1914 (see page 143). In 1963, when McGraw-Hill published Differential Geometry by Heinrich Guggenheimer of the University of Minnesota, he used the row vector convention in chapter 5, "Introduction to transformation groups" (eqs. 7a, 9b, and 12 to 15). When H. S. M. Coxeter reviewed[3] Linear Geometry by Rafael Artzy, he wrote, "[Artzy] is to be congratulated on his choice of the 'left-to-right' convention, which enables him to regard a point as a row matrix instead of the clumsy column that many authors prefer."

References

  1. ^ Meyer (2000), p. 8
  2. ^ Raiz A. Usmani (1987) Applied Linear Algebra Marcel Dekker ISBN 0824776224. See Chapter 4: "Linear Transformations"
  3. ^ Coxeter Review of Linear Geometry from Mathematical Reviews


  • Axler, Sheldon Jay (1997), Linear Algebra Done Right (2nd ed.), Springer-Verlag, ISBN 0-387-98259-0 
  • Lay, David C. (August 22, 2005), Linear Algebra and Its Applications (3rd ed.), Addison Wesley, ISBN 978-0-321-28713-7 
  • Meyer, Carl D. (February 15, 2001), Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0-89871-454-8 
  • Poole, David (2006), Linear Algebra: A Modern Introduction (2nd ed.), Brooks/Cole, ISBN 0-534-99845-3 
  • Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International 
  • Leon, Steven J. (2006), Linear Algebra With Applications (7th ed.), Pearson Prentice Hall