Extending scalar function to matrix functions
There are several techniques for lifting a real function to a square matrix function such that interesting properties are maintained. All of the following techniques yield the same matrix function, but the domains on which the function is defined may differ.
If the real function $f$ has the Taylor expansion
$$f(x) = f(0) + f'(0)\,x + \frac{f''(0)}{2!}\,x^2 + \cdots,$$
then a matrix function can be defined by substituting $x$ by a matrix: the powers become matrix powers, the additions become matrix sums, and the multiplications by the coefficients become scalar multiplications. If the real series converges for $|x| < r$, then the corresponding matrix series converges for a matrix argument $A$ whenever $\|A\| < r$ for some matrix norm that satisfies $\|AB\| \le \|A\|\,\|B\|$.
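As an illustration, here is a minimal sketch, assuming NumPy and SciPy are available (the helper name `taylor_matrix_exp` is ours, not a library routine). It truncates the Taylor series of the exponential, whose radius of convergence is infinite, and checks the result against `scipy.linalg.expm`:

```python
import numpy as np
from scipy.linalg import expm

def taylor_matrix_exp(A, terms=30):
    """Approximate exp(A) by truncating its Taylor series.

    Powers of x become matrix powers, and each term is scaled by the
    scalar Taylor coefficient 1/k!, exactly as in the scalar expansion.
    """
    n = A.shape[0]
    result = np.zeros((n, n))
    term = np.eye(n)               # the k = 0 term, A^0 / 0!
    for k in range(terms):
        result += term
        term = term @ A / (k + 1)  # next term: A^(k+1) / (k+1)!
    return result

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(np.allclose(taylor_matrix_exp(A), expm(A)))  # True
```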
If the matrix $A$ is diagonalizable, the problem reduces to evaluating the function at each eigenvalue. That is, we can find an invertible matrix $P$ and a diagonal matrix $D$ such that $A = PDP^{-1}$. Applying the power series definition to this decomposition, we find that $f(A)$ is defined by
$$f(A) = P \begin{bmatrix} f(d_1) & & \\ & \ddots & \\ & & f(d_n) \end{bmatrix} P^{-1},$$
where $d_1, \dots, d_n$ denote the diagonal entries of $D$.
For example, suppose one is seeking $A! = \Gamma(A+1)$ for
$$A = \begin{bmatrix} 1 & 3 \\ 2 & 1 \end{bmatrix}.$$
The eigenvalues are $1+\sqrt{6}$ and $1-\sqrt{6}$, so $A = P \operatorname{diag}\!\left(1+\sqrt{6},\, 1-\sqrt{6}\right) P^{-1}$ with $P = \begin{bmatrix} 3 & 3 \\ \sqrt{6} & -\sqrt{6} \end{bmatrix}$, and therefore
$$A! = P \begin{bmatrix} \Gamma(2+\sqrt{6}) & 0 \\ 0 & \Gamma(2-\sqrt{6}) \end{bmatrix} P^{-1}.$$
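The same recipe is easy to carry out numerically; the following sketch assumes NumPy and SciPy, and the helper name `matrix_function_diag` is ours rather than a library function:

```python
import numpy as np
from scipy.special import gamma

def matrix_function_diag(A, f):
    """Evaluate f(A) = P f(D) P^{-1} via an eigendecomposition.

    Assumes A is diagonalizable; f is applied entrywise to the
    eigenvalues on the diagonal of D.
    """
    d, P = np.linalg.eig(A)                     # A = P @ diag(d) @ inv(P)
    return P @ np.diag(f(d)) @ np.linalg.inv(P)

A = np.array([[1.0, 3.0], [2.0, 1.0]])
print(matrix_function_diag(A, lambda x: gamma(x + 1)))  # A! = Γ(A+1)
```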
All matrices, whether they are diagonalizable or not, have a Jordan normal form $A = P\,J\,P^{-1}$, where the matrix $J$ consists of Jordan blocks. Consider these blocks separately and apply the power series to each Jordan block:
$$f\!\left(\begin{bmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{bmatrix}\right) = \begin{bmatrix} \frac{f(\lambda)}{0!} & \frac{f'(\lambda)}{1!} & \cdots & \frac{f^{(n-1)}(\lambda)}{(n-1)!} \\ 0 & \frac{f(\lambda)}{0!} & \ddots & \vdots \\ \vdots & & \ddots & \frac{f'(\lambda)}{1!} \\ 0 & \cdots & 0 & \frac{f(\lambda)}{0!} \end{bmatrix}.$$
This definition can be used to extend the domain of the matrix function beyond the set of matrices with spectral radius smaller than the radius of convergence of the power series. Note that there is also a connection to divided differences.
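A small sketch of the Jordan-block formula, assuming NumPy and SciPy (the helper `f_jordan_block` is a hypothetical name): the caller supplies the derivatives of $f$, and the block is assembled superdiagonal by superdiagonal. Since every derivative of the exponential is the exponential itself, the result can be checked against `scipy.linalg.expm`:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

def f_jordan_block(lam, m, derivs):
    """Evaluate f on an m-by-m Jordan block with eigenvalue lam.

    derivs[j](lam) must give the j-th derivative of f at lam; the result
    is upper triangular with f^(j)(lam)/j! on the j-th superdiagonal.
    """
    F = np.zeros((m, m))
    for j in range(m):
        F += derivs[j](lam) / factorial(j) * np.eye(m, k=j)
    return F

lam, m = 2.0, 3
J = lam * np.eye(m) + np.eye(m, k=1)  # a single 3x3 Jordan block
print(np.allclose(f_jordan_block(lam, m, [np.exp] * m), expm(J)))  # True
```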
A Hermitian matrix has all real eigenvalues and can always be diagonalized by a unitary matrix $P$, according to the spectral theorem. In this case, the Jordan definition is natural. Moreover, this definition allows one to extend standard inequalities for real functions: if $f(a) \le g(a)$ for every eigenvalue $a$ of $A$, then $f(A) \preceq g(A)$, meaning that $g(A) - f(A)$ is positive semidefinite.
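A quick numerical illustration, assuming NumPy and SciPy (the helper `f_hermitian` is our name for the spectral-decomposition recipe): since $e^a \ge 1 + a$ holds at every real eigenvalue, $e^A - (I + A)$ should be positive semidefinite for any Hermitian $A$.

```python
import numpy as np
from scipy.linalg import expm

def f_hermitian(A, f):
    """Functional calculus for a Hermitian matrix via its spectral decomposition."""
    w, U = np.linalg.eigh(A)          # A = U @ diag(w) @ U*
    return (U * f(w)) @ U.conj().T

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
A = (X + X.T) / 2                     # a random real symmetric (Hermitian) matrix

print(np.allclose(f_hermitian(A, np.exp), expm(A)))  # True for Hermitian A

# e^a >= 1 + a at every eigenvalue, so expm(A) - (I + A) is PSD:
gap = expm(A) - (np.eye(4) + A)
print(np.linalg.eigvalsh(gap).min() >= -1e-12)        # True
```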
Cauchy's integral formula from complex analysis can also be used to generalize scalar functions to matrix functions. It states that for any analytic function $f$ defined on a set $D \subset \mathbb{C}$, one has
$$f(x) = \frac{1}{2\pi i} \oint_C \frac{f(z)}{z - x}\, dz,$$
where $C$ is a closed simple curve inside the domain $D$ enclosing $x$.
Now, replace $x$ by a matrix $A$ and consider a path $C$ inside $D$ that encloses all eigenvalues of $A$. One possibility to achieve this is to let $C$ be a circle around the origin with radius larger than $\|A\|$ for an arbitrary matrix norm $\|\cdot\|$. Then, $f(A)$ is definable by
$$f(A) = \frac{1}{2\pi i} \oint_C f(z)\,(zI - A)^{-1}\, dz.$$
This integral can readily be evaluated numerically using the trapezium rule, which converges exponentially in this case; the number of correct digits roughly doubles when the number of nodes is doubled. In routine cases, this is bypassed by Sylvester's formula.
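A minimal sketch of that quadrature, assuming NumPy and SciPy (the helper `cauchy_matrix_function` is a hypothetical name): discretize a circle enclosing the spectrum with equispaced nodes, sum the resolvent values, and compare against `scipy.linalg.expm`.

```python
import numpy as np
from scipy.linalg import expm

def cauchy_matrix_function(A, f, radius=None, nodes=64):
    """Approximate f(A) = (1/2πi) ∮ f(z) (zI - A)^{-1} dz with the
    trapezium rule on a circle of the given radius around the origin."""
    n = A.shape[0]
    if radius is None:
        radius = np.linalg.norm(A, 2) + 1.0             # strictly encloses the spectrum
    total = np.zeros((n, n), dtype=complex)
    for k in range(nodes):
        z = radius * np.exp(2j * np.pi * k / nodes)     # quadrature node on the contour
        R = np.linalg.solve(z * np.eye(n) - A, np.eye(n))  # resolvent (zI - A)^{-1}
        total += f(z) * R * (1j * z)                    # integrand times dz/dθ = iz
    # trapezium weight 2π/N combined with the 1/(2πi) prefactor gives 1/(iN);
    # the imaginary part cancels for real A and real-valued f on the reals.
    return (total / (1j * nodes)).real

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(np.allclose(cauchy_matrix_function(A, np.exp), expm(A)))  # True
```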
The above Taylor power series allows the scalar $x$ to be replaced by the matrix. However, this is not true in general when expanding in terms of $A(\eta) = A + \eta B$ about $\eta = 0$ unless $[A, B] = 0$. A counterexample is $f(x) = x^3$, which has a finite-length Taylor series. We compute this in two ways.
Brute force:
$$(A + \eta B)^3 = A^3 + \eta\left(A^2 B + ABA + BA^2\right) + \eta^2\left(AB^2 + BAB + B^2 A\right) + \eta^3 B^3$$
Using the scalar Taylor expansion for $f(a + \eta b)$ and replacing scalars with matrices at the end:
$$f(a + \eta b) = f(a) + f'(a)\,\frac{\eta b}{1!} + f''(a)\,\frac{(\eta b)^2}{2!} + f'''(a)\,\frac{(\eta b)^3}{3!} = a^3 + 3a^2(\eta b) + 3a(\eta b)^2 + (\eta b)^3 \;\to\; A^3 + 3A^2(\eta B) + 3A(\eta B)^2 + (\eta B)^3$$
The scalar expression assumes commutativity while the matrix expression does not, and thus they cannot be equated directly unless $[A, B] = 0$. For some $f(x)$ this can be dealt with using the same method as scalar Taylor series. For example, take $f(x) = \frac{1}{x}$. If $A^{-1}$ exists, then $f(A + \eta B) = (A + \eta B)^{-1} = \left(I + \eta A^{-1} B\right)^{-1} A^{-1}$. The expansion of the first term then follows the power series given above,
$$(A + \eta B)^{-1} = A^{-1} - \eta\, A^{-1} B A^{-1} + \eta^2\, A^{-1} B A^{-1} B A^{-1} - \cdots$$
The convergence criteria of the power series then apply, requiring $\left\| \eta A^{-1} B \right\|$ to be sufficiently small under the appropriate matrix norm. For more general problems, which cannot be rewritten in such a way that the two matrices commute, the ordering of matrix products produced by repeated application of the Leibniz rule must be tracked.
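Both points are easy to verify numerically; a sketch assuming NumPy, with randomly chosen (hence non-commuting) matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.eye(3) + 0.1 * rng.standard_normal((3, 3))  # invertible, well conditioned
B = rng.standard_normal((3, 3))
eta = 1e-3                                         # keeps ||η A^{-1} B|| well below 1

# First-order term of (A+ηB)^3: the distributive law gives A²B + ABA + BA²,
# whereas the scalar-style expansion would give 3A²B; they differ when [A,B] ≠ 0.
print(np.allclose(A@A@B + A@B@A + B@A@A, 3 * (A@A@B)))  # False in general

# Truncated series for the inverse:
# (A+ηB)^{-1} ≈ A^{-1} - η A^{-1}BA^{-1} + η² A^{-1}BA^{-1}BA^{-1}
Ai = np.linalg.inv(A)
approx = Ai - eta * Ai @ B @ Ai + eta**2 * Ai @ B @ Ai @ B @ Ai
exact = np.linalg.inv(A + eta * B)
print(np.abs(approx - exact).max())                # residual error of order η³
```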
A function $f$ is called operator concave if and only if
$$f\!\left(\tau A + (1-\tau) H\right) \succeq \tau f(A) + (1-\tau) f(H)$$
for all self-adjoint matrices $A, H$ with spectra in the domain of $f$ and all $\tau \in [0, 1]$. This definition is analogous to that of a concave scalar function. An operator convex function can be defined by switching $\succeq$ to $\preceq$ in the definition above.
The matrix logarithm is both operator monotone (that is, $0 \prec A \preceq H$ implies $\log A \preceq \log H$) and operator concave. The matrix square is operator convex. The matrix exponential is none of these. Loewner's theorem states that a function on an open interval is operator monotone if and only if it has an analytic extension to the upper and lower complex half-planes such that the upper half-plane is mapped to itself.
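Both properties of the logarithm can be spot-checked numerically; a sketch assuming NumPy and SciPy (the helper `random_pd` is our own construction of positive definite test matrices):

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(2)

def random_pd(n):
    """A random symmetric positive definite matrix (spectrum in log's domain)."""
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

A, H = random_pd(3), random_pd(3)
tau = 0.3

# Operator concavity: log(τA + (1-τ)H) - [τ log A + (1-τ) log H] should be PSD.
gap = logm(tau * A + (1 - tau) * H) - (tau * logm(A) + (1 - tau) * logm(H))
print(np.linalg.eigvalsh(gap.real).min() >= -1e-10)  # True

# Operator monotonicity: A ⪯ A + H (H is PD), so log(A) ⪯ log(A + H).
gap2 = logm(A + H) - logm(A)
print(np.linalg.eigvalsh(gap2.real).min() >= -1e-10)  # True
```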