 The Hadamard product operates on identically shaped matrices and produces a third matrix of the same dimensions.

In mathematics, the Hadamard product (also known as the Schur product or the entrywise product) is a binary operation that takes two matrices of the same dimensions and produces another matrix of the same dimensions, in which each element i, j is the product of the elements i, j of the two operands. It should not be confused with the more common matrix product. It is attributed to, and named after, either the French mathematician Jacques Hadamard or the German mathematician Issai Schur.

The Hadamard product is associative and distributive. Unlike the matrix product, it is also commutative.

## Definition

For two matrices A and B of the same dimension m × n, the Hadamard product A ∘ B is a matrix of the same dimension as the operands, with elements given by

$(A\circ B)_{ij}=(A)_{ij}(B)_{ij}.$

For matrices of different dimensions (m × n and p × q, where m ≠ p or n ≠ q), the Hadamard product is undefined.

The Hadamard product is also often denoted using the $\odot$ symbol instead of $\circ$ .
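As a quick illustration (a minimal sketch using NumPy, which is also discussed below), the definition amounts to elementwise multiplication of identically shaped arrays:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# Hadamard product: entrywise, requires identical shapes
H = A * B  # equivalently np.multiply(A, B)
print(H)   # [[ 5 12]
           #  [21 32]]
```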

## Example

For example, the Hadamard product for a 3 × 3 matrix A with a 3 × 3 matrix B is

${\begin{bmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{bmatrix}}\circ {\begin{bmatrix}b_{11}&b_{12}&b_{13}\\b_{21}&b_{22}&b_{23}\\b_{31}&b_{32}&b_{33}\end{bmatrix}}={\begin{bmatrix}a_{11}\,b_{11}&a_{12}\,b_{12}&a_{13}\,b_{13}\\a_{21}\,b_{21}&a_{22}\,b_{22}&a_{23}\,b_{23}\\a_{31}\,b_{31}&a_{32}\,b_{32}&a_{33}\,b_{33}\end{bmatrix}}.$

## Properties

• The Hadamard product is commutative (when working with a commutative ring), associative and distributive over addition. That is,
${\begin{aligned}&\mathbf {A} \circ \mathbf {B} =\mathbf {B} \circ \mathbf {A} ,\\&\mathbf {A} \circ (\mathbf {B} \circ \mathbf {C} )=(\mathbf {A} \circ \mathbf {B} )\circ \mathbf {C} ,\\&\mathbf {A} \circ (\mathbf {B} +\mathbf {C} )=\mathbf {A} \circ \mathbf {B} +\mathbf {A} \circ \mathbf {C} .\end{aligned}}$
• The identity matrix under Hadamard multiplication of two m × n matrices is an m × n matrix where all elements are equal to 1. This differs from the identity matrix under regular matrix multiplication, where only the elements of the main diagonal are equal to 1. Furthermore, a matrix has an inverse under Hadamard multiplication if and only if none of its elements are equal to zero.
• For vectors x and y, and corresponding diagonal matrices Dx and Dy with these vectors as their leading diagonals, the following identity holds:
$\mathbf {x} ^{*}(\mathbf {A} \circ \mathbf {B} )\mathbf {y} =\operatorname {tr} \left(\mathbf {D} _{\mathbf {x} }^{*}\mathbf {A} \mathbf {D} _{\mathbf {y} }\mathbf {B} ^{\mathsf {T}}\right),$
where x* denotes the conjugate transpose of x. In particular, using vectors of ones, this shows that the sum of all elements in the Hadamard product is the trace of $\mathbf {A} \mathbf {B} ^{\mathsf {T}}$. A related result for square A and B is that the column-sums of their Hadamard product are the diagonal elements of $\mathbf {B} ^{\mathsf {T}}\mathbf {A}$, and the row-sums are the diagonal elements of $\mathbf {A} \mathbf {B} ^{\mathsf {T}}$:

${\begin{aligned}\sum _{i}(A\circ B)_{ij}&=\left(B^{\mathsf {T}}A\right)_{jj},\\\sum _{j}(A\circ B)_{ij}&=\left(AB^{\mathsf {T}}\right)_{ii}.\end{aligned}}$ Similarly,

$\left(\mathbf {y} \mathbf {x} ^{*}\right)\circ \mathbf {A} =\mathbf {D} _{\mathbf {y} }\mathbf {A} \mathbf {D} _{\mathbf {x} }^{*}.$
• The Hadamard product is a principal submatrix of the Kronecker product.
• The Hadamard product satisfies the rank inequality
$\operatorname {rank} (\mathbf {A} \circ \mathbf {B} )\leq \operatorname {rank} (\mathbf {A} )\operatorname {rank} (\mathbf {B} ).$
• If A and B are positive-definite matrices, then the following inequality involving the Hadamard product is valid:
$\prod _{i=k}^{n}\lambda _{i}(\mathbf {A} \circ \mathbf {B} )\geq \prod _{i=k}^{n}\lambda _{i}(\mathbf {A} \mathbf {B} ),\quad k=1,\ldots ,n,$ where λi(A) is the ith largest eigenvalue of A.
• If D and E are diagonal matrices, then
${\begin{aligned}\mathbf {D} (\mathbf {A} \circ \mathbf {B} )\mathbf {E} &=(\mathbf {D} \mathbf {A} \mathbf {E} )\circ \mathbf {B} =(\mathbf {D} \mathbf {A} )\circ (\mathbf {B} \mathbf {E} )\\&=(\mathbf {A} \mathbf {E} )\circ (\mathbf {D} \mathbf {B} )=\mathbf {A} \circ (\mathbf {D} \mathbf {B} \mathbf {E} ).\end{aligned}}$

## Schur product theorem

The Hadamard product of two positive-semidefinite matrices is positive-semidefinite. This is known as the Schur product theorem, after German mathematician Issai Schur. For two positive-semidefinite matrices A and B, it is also known that the determinant of their Hadamard product is greater than or equal to the product of their respective determinants:

$\det(\mathbf {A} \circ \mathbf {B} )\geq \det(\mathbf {A} )\det(\mathbf {B} ).$

## In programming languages

Hadamard multiplication is built into certain programming languages under various names. In MATLAB, GNU Octave, GAUSS and HP Prime it is known as array multiplication, and in Julia as broadcast multiplication, with the symbol .*. In Fortran, R, J and Wolfram Language (Mathematica), it is done through the simple multiplication operator *, whereas the matrix product is done through the function matmul, the %*% operator, +/ .* and the . operator, respectively. In Python with the NumPy numerical library or the SymPy symbolic library, multiplying array objects as a1*a2 produces the Hadamard product, whereas multiplying as a1@a2, or multiplying matrix objects as m1*m2, produces a matrix product. The Eigen C++ library provides a cwiseProduct member function for the Matrix class (a.cwiseProduct(b)), while the Armadillo library uses the operator % to make compact expressions (a % b; a * b is a matrix product).
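A minimal sketch of the NumPy distinction between the two products:

```python
import numpy as np

a1 = np.array([[1, 2], [3, 4]])
a2 = np.array([[0, 1], [1, 0]])

hadamard = a1 * a2   # entrywise product: [[0, 2], [3, 0]]
matmul   = a1 @ a2   # matrix product:    [[2, 1], [4, 3]]
```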

## Applications

The Hadamard product appears in lossy compression algorithms such as JPEG. The decoding step involves an entry-for-entry product, in other words the Hadamard product.
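A toy sketch of that decoding step, with a hypothetical 2 × 2 quantization table and coefficient block (real JPEG uses 8 × 8 blocks):

```python
import numpy as np

# Hypothetical values for illustration only: JPEG-style dequantization
# multiplies the quantized coefficients entry-for-entry by the
# quantization table, i.e. forms a Hadamard product.
quant_table = np.array([[16, 11], [12, 14]])
quantized   = np.array([[3, 0], [-1, 2]])

dequantized = quant_table * quantized  # [[48, 0], [-12, 28]]
```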

It is used in the machine learning literature, for example to describe the architecture of recurrent neural networks such as GRUs and LSTMs.
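In those architectures, gates scale state vectors entrywise. A toy sketch of the standard LSTM update, with arbitrary illustrative values (no learned weights):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

c_prev = np.array([0.5, -1.0, 2.0])       # previous cell state
f = sigmoid(np.array([2.0, -2.0, 0.0]))   # forget gate
i = sigmoid(np.array([0.0, 1.0, -1.0]))   # input gate
g = np.tanh(np.array([1.0, -0.5, 0.3]))   # candidate values
o = sigmoid(np.array([1.0, 0.0, -0.5]))   # output gate

c = f * c_prev + i * g   # Hadamard products gate the cell state
h = o * np.tanh(c)       # Hadamard product gates the hidden state
```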

## Analogous operations

Other Hadamard operations are also seen in the mathematical literature, namely the Hadamard root and Hadamard power (which are in effect the same thing because of fractional indices), defined for a matrix such that:

For the Hadamard power,

${\begin{aligned}\mathbf {B} &=\mathbf {A} ^{\circ 2}\\B_{ij}&={A_{ij}}^{2},\end{aligned}}$

and for the Hadamard root,

${\begin{aligned}\mathbf {B} &=\mathbf {A} ^{\circ {\frac {1}{2}}}\\B_{ij}&={A_{ij}}^{\frac {1}{2}}.\end{aligned}}$

The Hadamard inverse reads

${\begin{aligned}\mathbf {B} &=\mathbf {A} ^{\circ -1}\\B_{ij}&={A_{ij}}^{-1}.\end{aligned}}$

A Hadamard division (entrywise division) is defined as

${\begin{aligned}\mathbf {C} &=\mathbf {A} \oslash \mathbf {B} \\C_{ij}&={\frac {A_{ij}}{B_{ij}}}.\end{aligned}}$
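All four operations can be sketched with NumPy's elementwise operators (assuming no zero entries where reciprocals or division are taken):

```python
import numpy as np

A = np.array([[1.0, 4.0], [9.0, 16.0]])
B = np.array([[2.0, 2.0], [3.0, 4.0]])

hadamard_power   = A ** 2    # entrywise square
hadamard_root    = A ** 0.5  # entrywise square root: [[1, 2], [3, 4]]
hadamard_inverse = A ** -1   # entrywise reciprocal (A must have no zeros)
hadamard_div     = A / B     # entrywise (Hadamard) division
```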