In matrix algebra, operations on matrices differ from the analogous operations of scalar algebra in several respects. Matrix algebra operations are, in general, not commutative, and attention must be paid to whether the matrices are conformable with respect to the intended operation. It must also be noted whether a matrix operation pertains to matrix elements or to matrices.

==Subtraction of matrix elements==
To subtract elements of matrices ''A'' and ''B'',

:''C'' = ''A'' (&minus;) ''B''

the elements of matrix ''B'' are subtracted from their corresponding elements in matrix ''A'' and stored as the elements of matrix ''C''. All three matrices must have the same dimensions (the same number of rows and columns). The minus sign in the above equation is enclosed in parentheses to indicate subtraction of matrix elements, as contrasted with subtraction of matrices. The subtraction of matrix elements is defined for two matrices of the same dimensions and is computed by subtracting corresponding elements, i.e., (''a'' &minus; ''b'')[''i'', ''j''] = ''a''[''i'', ''j''] &minus; ''b''[''i'', ''j'']. For example,

:<math>
\begin{bmatrix}
a & b \\
c & d
\end{bmatrix}
(-)
\begin{bmatrix}
e & f \\
g & h
\end{bmatrix}
=
\begin{bmatrix}
a-e & b-f \\
c-g & d-h
\end{bmatrix}
</math>
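
A minimal Python sketch of this element-wise operation, assuming matrices represented as plain nested lists (the function name <code>subtract_elements</code> is illustrative, not standard terminology):

<syntaxhighlight lang="python">
# Element-wise subtraction C = A (-) B: both matrices must have the
# same dimensions; c[i][j] = a[i][j] - b[i][j].
def subtract_elements(a, b):
    if len(a) != len(b) or len(a[0]) != len(b[0]):
        raise ValueError("matrices must have the same dimensions")
    return [[x - y for x, y in zip(row_a, row_b)]
            for row_a, row_b in zip(a, b)]

print(subtract_elements([[1, 2], [3, 4]], [[0, 1], [2, 1]]))
# [[1, 1], [1, 3]]
</syntaxhighlight>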

==Subtraction of matrices==
Textbooks on matrix algebra (cf. Horst, 1963; Shores, 2003), while routinely describing the major and minor products of vectors, do not include analogous operations for the major and minor differences of minuends and subtrahends. These operations are easy to imagine but rarely discussed, as most of their potential applications can be accomplished just as well by other matrix algebra operations. On close scrutiny, however, the matrix algebra operation of subtraction (of vectors rather than elements of vectors, and of matrices rather than elements of matrices) can be used for concise expression of many abstract concepts within the matrix algebra framework. To subtract matrices ''A'' and ''B'',

:''C'' = ''A'' &minus; ''B''

the number of columns in matrix ''A'' must equal the number of rows in matrix ''B''; in other words, the matrices must be conformable to matrix subtraction. The resulting matrix ''C'' has the number of rows of the first matrix and the number of columns of the second matrix. For example, if matrix ''A'' is a 2&times;3 matrix and matrix ''B'' is a 3&times;2 matrix, the resulting matrix will be a 2&times;2 matrix. The schematic representation of matrix subtraction is shown below:

:<math>
\begin{bmatrix}
a & b & c \\
d & e & f
\end{bmatrix}
-
\begin{bmatrix}
g & h \\
i & j \\
k & l
\end{bmatrix}
=
\begin{bmatrix}
(a-g)+(b-i)+(c-k) & (a-h)+(b-j)+(c-l) \\
(d-g)+(e-i)+(f-k) & (d-h)+(e-j)+(f-l)
\end{bmatrix}
</math>

For instance,

:<math>
\begin{bmatrix}
1 & 2 & 3 \\
4 & 5 & 6
\end{bmatrix}
-
\begin{bmatrix}
7 & 8 \\
9 & 10 \\
11 & 12
\end{bmatrix}
=
\begin{bmatrix}
-21 & -24 \\
-12 & -15
\end{bmatrix}
</math>
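
A minimal Python sketch of the matrix subtraction described above, reproducing the numeric example (the function name <code>matrix_subtract</code> is our choice):

<syntaxhighlight lang="python">
# Matrix subtraction C = A - B in the sense defined above:
# c[i][j] = sum over k of (a[i][k] - b[k][j]); the number of
# columns of A must equal the number of rows of B.
def matrix_subtract(a, b):
    if len(a[0]) != len(b):
        raise ValueError("matrices are not conformable to matrix subtraction")
    inner = len(b)
    return [[sum(a[i][k] - b[k][j] for k in range(inner))
             for j in range(len(b[0]))] for i in range(len(a))]

A = [[1, 2, 3],
     [4, 5, 6]]
B = [[7, 8],
     [9, 10],
     [11, 12]]
print(matrix_subtract(A, B))  # [[-21, -24], [-12, -15]]
</syntaxhighlight>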

==Skew-symmetric matrices==
Subtracting the transpose of a matrix from the matrix itself, using the matrix subtraction defined above, yields a [[skew-symmetric matrix]]. For instance,

:<math>
\begin{bmatrix}
0 & 0 & 0 \\
0 & 0 & 1 \\
0 & 1 & 1 \\
1 & 1 & 1
\end{bmatrix}
-
\begin{bmatrix}
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 1 \\
0 & 1 & 1 & 1 \\
\end{bmatrix}
=
\begin{bmatrix}
0 & -1 & -2 & -3 \\
1 & 0 & -1 & -2 \\
2 & 1 & 0 & -1 \\
3 & 2 & 1 & 0 \\
\end{bmatrix}
</math>
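
In code, the same operation applied to a matrix and its transpose always produces a skew-symmetric result, since entry (''i'', ''j'') reduces to the ''i''-th row sum of the matrix minus its ''j''-th row sum. A sketch under the same nested-list assumption as above:

<syntaxhighlight lang="python">
# Subtracting the transpose of A from A with the article's matrix subtraction:
# C[i][j] = sum over k of (A[i][k] - A_t[k][j]) = rowsum(A, i) - rowsum(A, j).
A = [[0, 0, 0],
     [0, 0, 1],
     [0, 1, 1],
     [1, 1, 1]]
A_t = [list(col) for col in zip(*A)]  # transpose of A
C = [[sum(A[i][k] - A_t[k][j] for k in range(len(A_t)))
      for j in range(len(A_t[0]))] for i in range(len(A))]
# C == [[0, -1, -2, -3], [1, 0, -1, -2], [2, 1, 0, -1], [3, 2, 1, 0]]
assert all(C[i][j] == -C[j][i] for i in range(4) for j in range(4))
</syntaxhighlight>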

The major difference of a vector (the column vector minus its row transpose) also results in a skew-symmetric difference matrix. For instance, the outcome of the major difference of the vector '''x''' = [0, 1, 2, 3],

:<math>
\begin{bmatrix}
0 \\
1 \\
2 \\
3
\end{bmatrix}
-
\begin{bmatrix}
0 & 1 & 2 & 3 \end{bmatrix}
=
\begin{bmatrix}
0 & -1 & -2 & -3 \\
1 & 0 & -1 & -2 \\
2 & 1 & 0 & -1 \\
3 & 2 & 1 & 0 \\
\end{bmatrix}
</math>

is also a skew-symmetric matrix.
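
A sketch of the major difference of a vector, assuming the outer-difference reading of the example above, i.e. ''d''[''i''][''j''] = ''x''[''i''] &minus; ''x''[''j'']:

<syntaxhighlight lang="python">
# Major difference of x: the column vector minus its row transpose,
# d[i][j] = x[i] - x[j]; the result is always skew-symmetric.
x = [0, 1, 2, 3]
D = [[xi - xj for xj in x] for xi in x]
# D == [[0, -1, -2, -3], [1, 0, -1, -2], [2, 1, 0, -1], [3, 2, 1, 0]]
assert all(D[i][j] == -D[j][i] for i in range(len(x)) for j in range(len(x)))
</syntaxhighlight>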

==Skew-asymmetric matrices==
If the elements of the subtracted matrices are ordered, a skew-symmetric matrix can be transformed into a skew-asymmetric matrix by triangularization (retaining the lower triangle and replacing the remaining elements with zeros); for the above examples this yields

:<math>
\begin{bmatrix}
0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 \\
2 & 1 & 0 & 0 \\
3 & 2 & 1 & 0 \\
\end{bmatrix}
</math>

If the elements of the subtracted matrices are not ordered, a skew-symmetric matrix can be transformed into a skew-asymmetric matrix by asymmetrization, which retains only the positive elements. For instance, asymmetrization of the skew-symmetric matrix

:<math>
\begin{bmatrix}
0 & -1 & 2 & -3 \\
1 & 0 & -1 & -2 \\
-2 & 1 & 0 & -1 \\
3 & 2 & 1 & 0 \\
\end{bmatrix}
</math>

results in

:<math>
\begin{bmatrix}
0 & 0 & 2 & 0 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
3 & 2 & 1 & 0 \\
\end{bmatrix}
</math>
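
Both transformations are straightforward to express in code. A sketch, with function names of our choosing and the zero-or-keep readings described above:

<syntaxhighlight lang="python">
# Triangularization: keep the strictly lower triangle of an ordered
# skew-symmetric matrix, zeroing everything else.
def triangularize(m):
    n = len(m)
    return [[m[i][j] if i > j else 0 for j in range(n)] for i in range(n)]

# Asymmetrization: keep only the positive elements.
def asymmetrize(m):
    return [[v if v > 0 else 0 for v in row] for row in m]

S = [[ 0, -1,  2, -3],
     [ 1,  0, -1, -2],
     [-2,  1,  0, -1],
     [ 3,  2,  1,  0]]
print(asymmetrize(S))
# [[0, 0, 2, 0], [1, 0, 0, 0], [0, 1, 0, 0], [3, 2, 1, 0]]
</syntaxhighlight>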

The distinction between ordered and unordered skew-asymmetric matrices is significant in some contexts, for example within the context of test [[Homogeneity (psychometrics)|homogeneity]] and ordinal test theory.

==Applications==
The operation of matrix subtraction facilitates expression of many abstract concepts within the matrix algebra framework. For example, the [[true variance]] of the variable ''X'' = [0, 1, 2, 3], defined within the scalar algebraic context as the sum of the squared deviation scores divided by ''n'', is for this example 5/4 = 1.25. Using matrix algebra, the [[true variance]] can be defined as the sum of the squared elements of a skew-asymmetric matrix divided by the square of its order.

:<math>
\begin{bmatrix}
0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 \\
2 & 1 & 0 & 0 \\
3 & 2 & 1 & 0 \\
\end{bmatrix}
</math>

Thus, for the above example of a skew-asymmetric matrix of the 4<sup>th</sup> order, 1<sup>2</sup> + 2<sup>2</sup> + 1<sup>2</sup> + 3<sup>2</sup> + 2<sup>2</sup> + 1<sup>2</sup> = 20 and 20/4<sup>2</sup> = 1.25. The skew-asymmetric matrices also correspond to ordered graphs, as illustrated in Fig. 1. [[Image:Variance_2.jpg|none|frame|Fig. 1. Horizontal dendrogram corresponding to the above skew-asymmetric matrix.]]
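
A sketch verifying this equivalence for the example, computing the variance both from deviation scores and from the squared elements of the skew-asymmetric matrix:

<syntaxhighlight lang="python">
# Population variance of x two ways: classically, and as the sum of
# squared elements of the skew-asymmetric difference matrix over n^2.
x = [0, 1, 2, 3]
n = len(x)
mean = sum(x) / n
classical = sum((v - mean) ** 2 for v in x) / n  # 5 / 4 = 1.25
lower_triangle = sum((x[i] - x[j]) ** 2
                     for i in range(n) for j in range(i))
via_matrix = lower_triangle / n ** 2             # 20 / 16 = 1.25
assert classical == via_matrix
</syntaxhighlight>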

==Stochastic skew-asymmetric matrices==
Generalizing the matrix algebra operations on subtracted data matrices so as to encompass '''stochastic skew-asymmetric matrices''' yields dendrograms such as the one shown in Fig. 2.

[[Image:Variance_3.jpg|none|frame|Fig. 2. Dendrogram associated with a stochastic skew asymmetric matrix of differences.]]

These algorithms are among the basic matrix algebra operations used in visual statistics.

==References==
* Horst, P. (1963). ''Matrix algebra for social scientists''. New York: Holt.
* Krus, D. J., & Ceurvorst, R. W. (1979). Dominance, information, and hierarchical scaling of variance space. ''Applied Psychological Measurement'', 3, 515&ndash;527.
* Krus, D. J., & Wilkinson, S. M. (1986). Matrix differencing as a concise expression of variance. ''Educational and Psychological Measurement'', 46, 179&ndash;183.
* Shores, T. S. (2003). ''Applied linear algebra and matrix analysis''. New York: McGraw-Hill.

==External links==
* [http://www.maths.hscripts.com/matrix/learn-matrix-subtraction.php Matrix subtraction calculator]
* [http://www.visualstatistics.net/PPP%20Matrix%20Algebra/Animated%20Matrix%20Algebra.htm Matrix Algebra Operations on Matrices]


==See also==
*[[Matrix addition]]
*[[True variance]]
*[[Homogeneity (psychometrics)|Homogeneity]]

[[Category:Matrix theory]]
[[Category:Linear algebra]]
