# Grassmann number


In mathematical physics, a Grassmann number, named after Hermann Grassmann (also called an anticommuting number or supernumber), is an element of the exterior algebra over the complex numbers.[1] The special case of a 1-dimensional algebra is known as a dual number. Grassmann numbers saw an early use in physics to express a path integral representation for fermionic fields, although they are now widely used as a foundation for superspace, on which supersymmetry is constructed.

## Informal discussion

Grassmann numbers are generated by anti-commuting elements or objects. The idea of anti-commuting objects arises in multiple areas of mathematics: they are typically seen in differential geometry, where the differential forms are anti-commuting. Differential forms are normally defined in terms of derivatives on a manifold; however, one can "forget" or "ignore" the existence of any underlying manifold, "forget" or "ignore" that the forms were defined as derivatives, and instead simply contemplate objects that anti-commute and have no other pre-defined or pre-supposed properties. Such objects form an algebra, specifically the Grassmann algebra or exterior algebra.

The Grassmann numbers are elements of that algebra. The appellation of "number" is justified by the fact that they behave not unlike "ordinary" numbers: they can be added, multiplied and divided; they behave almost like a field. More can be done: one can consider polynomials of Grassmann numbers, leading to the idea of holomorphic functions. One can take derivatives of such functions, and then consider the antiderivatives as well. Each of these ideas can be carefully defined, and corresponds reasonably well to the equivalent concept from ordinary mathematics. The analogy does not stop there: one has an entire branch of supermathematics, where the analog of Euclidean space is superspace, the analog of a manifold is a supermanifold, the analog of a Lie algebra is a Lie superalgebra and so on. The Grassmann numbers are the underlying construct that makes this all possible.

Of course, one could pursue a similar program for any other field, or even ring, and this is indeed widely and commonly done in mathematics. However, supermathematics takes on a special significance in physics, because the anti-commuting behavior can be strongly identified with the quantum-mechanical behavior of fermions: the anti-commutation is that of the Pauli exclusion principle. Thus, the study of Grassmann numbers, and of supermathematics, in general, is strongly driven by their utility in physics.

Specifically, in quantum field theory, or more narrowly, second quantization, one works with ladder operators that create multi-particle quantum states. The ladder operators for fermions create field quanta that must necessarily have anti-symmetric wave functions, as this is forced by the Pauli exclusion principle. In this situation, a Grassmann number corresponds immediately and directly to a wave function that contains some (typically indeterminate) number of fermions.

## General description and properties

Grassmann numbers are individual elements or points of the exterior algebra generated by a set of n Grassmann variables or Grassmann directions or supercharges ${\displaystyle \{\theta _{i}\}}$, with n possibly being infinite. The usage of the term "Grassmann variables" is historic; they are not variables, per se; they are better understood as the basis elements of a unital algebra. The terminology comes from the fact that a primary use is to define integrals, and that the variable of integration is Grassmann-valued, and thus, by abuse of language, is called a Grassmann variable. Similarly, the notion of direction comes from the notion of superspace, where ordinary Euclidean space is extended with additional Grassmann-valued "directions". The appellation of charge comes from the notion of charges in physics, which correspond to the generators of physical symmetries (via Noether's theorem). The perceived symmetry is that multiplication by a single Grassmann variable swaps the ${\displaystyle \mathbb {Z} _{2}}$ grading between fermions and bosons; this is discussed in greater detail below.

The Grassmann variables are the basis vectors of a vector space (of dimension n). They form an algebra over a field, with the field usually being taken to be the complex numbers, although one could contemplate other fields, such as the reals. The algebra is a unital algebra, and the generators are anti-commuting:

${\displaystyle \theta _{i}\theta _{j}=-\theta _{j}\theta _{i}}$

Since the ${\displaystyle \theta _{i}}$ form a vector space over the complex numbers, it is trivial that they commute with the complex numbers; this is by definition. That is, for complex x, one has

${\displaystyle \theta _{i}x=x\theta _{i}.}$

The squares of the generators vanish:

${\displaystyle (\theta _{i})^{2}=0,}$ since ${\displaystyle \theta _{i}\theta _{i}=-\theta _{i}\theta _{i}.}$

In other words, a Grassmann variable is a non-zero square-root of zero.
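These defining relations can be sketched concretely. The following minimal illustration (not a standard library API; the representation is an assumption made for exposition) stores a Grassmann number as a mapping from sorted tuples of generator indices to coefficients; the anticommutation sign comes from counting the inversions needed to sort a product's indices:

```python
from itertools import product

def gr_mul(a, b):
    """Multiply two Grassmann numbers, each a dict {sorted index tuple: coefficient}.
    Anticommutation theta_i theta_j = -theta_j theta_i is encoded by the
    sign of the permutation that sorts the concatenated index tuple."""
    out = {}
    for (ia, ca), (ib, cb) in product(a.items(), b.items()):
        idx = ia + ib
        if len(set(idx)) < len(idx):   # repeated generator: theta_i^2 = 0
            continue
        # count inversions to get the sign of the sorting permutation
        inv = sum(1 for p in range(len(idx)) for q in range(p + 1, len(idx))
                  if idx[p] > idx[q])
        key = tuple(sorted(idx))
        out[key] = out.get(key, 0) + (-1) ** inv * ca * cb
    return {k: v for k, v in out.items() if v != 0}

t1 = {(1,): 1}   # theta_1
t2 = {(2,): 1}   # theta_2
assert gr_mul(t1, t2) == {(1, 2): 1}
assert gr_mul(t2, t1) == {(1, 2): -1}  # theta_2 theta_1 = -theta_1 theta_2
assert gr_mul(t1, t1) == {}            # theta_1^2 = 0
```

Scalar coefficients commute freely with the generators in this encoding, matching the relation above.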

## Formal definition

Formally, let V be an n-dimensional complex vector space with basis ${\displaystyle \theta _{i},i=1,\ldots ,n}$. We define the Grassmann algebra whose Grassmann variables are ${\displaystyle \theta _{i},i=1,\ldots ,n}$ to be the exterior algebra of V, namely

${\displaystyle \Lambda =\mathbb {C} \oplus V\oplus \left(V\wedge V\right)\oplus \left(V\wedge V\wedge V\right)\oplus \cdots ,}$

where ${\displaystyle \wedge }$ is the exterior product and ${\displaystyle \oplus }$ is the direct sum. The individual elements of this algebra are then called "Grassmann numbers". It is standard to omit the wedge symbol ${\displaystyle \wedge }$ when writing a Grassmann number; it is used here only to clearly illustrate how the exterior algebra is built up out of the elements of V. Thus, a completely general Grassmann number can be written as

${\displaystyle z=\sum _{k=0}^{\infty }\sum _{i_{1},i_{2},\cdots ,i_{k}}c_{i_{1}i_{2}\cdots i_{k}}\theta _{i_{1}}\theta _{i_{2}}\cdots \theta _{i_{k}},}$

where the cs are complex numbers, or, equivalently, ${\displaystyle c_{i_{1}i_{2}\cdots i_{k}}}$ is a complex-valued, completely antisymmetric tensor of rank k. Again, the ${\displaystyle \theta _{i}}$ can be seen here to be playing the role of a basis vector of a vector space.

Observe that the Grassmann algebra generated by n linearly independent Grassmann variables has dimension ${\displaystyle 2^{n}}$; this follows from the binomial theorem applied to the above sum, and the fact that the (n + 1)-fold product of variables must vanish, by the anti-commutation relations, above. In other words, for n variables, the series terminates:

${\displaystyle \Lambda =\mathbb {C} \oplus \Lambda ^{1}V\oplus \Lambda ^{2}V\oplus \cdots \oplus \Lambda ^{n}V}$

where ${\displaystyle \Lambda ^{k}V}$ is the k-fold alternating product. The dimension of ${\displaystyle \Lambda ^{k}V}$ is given by n choose k, the binomial coefficient. The special case of n = 1 is called a dual number, and was introduced by William Clifford in 1873.
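These dimension counts are easy to verify. A quick sketch for n = 5, using Python's standard `math.comb`:

```python
from math import comb

n = 5
# dim Lambda^k V = C(n, k); the total dimension sums to 2^n
# by the binomial theorem
dims = [comb(n, k) for k in range(n + 1)]
assert dims == [1, 5, 10, 10, 5, 1]
assert sum(dims) == 2 ** n == 32
```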

## Finite vs. countable generators

Two distinct kinds of supernumbers commonly appear in the literature: those with a finite number of generators, typically n = 1, 2, 3 or 4, and those with a countably-infinite number of generators. These two situations are not as unrelated as they may seem at first. First, in the definition of a supermanifold, one variant uses a countably-infinite number of generators, but then employs a topology that effectively reduces the dimension to a small finite number.[2]

In the other case, one may start with a finite number of generators, but in the course of second quantization, a need for an infinite number of generators arises: one each for every possible momentum that a fermion might carry.

## Involution, choice of field

The complex numbers are usually chosen as the field for the definition of the Grassmann numbers, as opposed to the real numbers, as this avoids some strange behaviors when a conjugation or involution is introduced. It is common to introduce an operator * on the Grassmann numbers such that:

${\displaystyle \theta =\theta ^{*}}$

when ${\displaystyle \theta }$ is a generator, and such that

${\displaystyle (\theta _{i}\theta _{j}\cdots \theta _{k})^{*}=\theta _{k}\cdots \theta _{j}\theta _{i}}$

One may then consider Grassmann numbers z for which ${\displaystyle z=z^{*}}$, and term these (super) real, while those that obey ${\displaystyle z^{*}=-z}$ are termed (super) imaginary. These definitions carry through just fine, even if the Grassmann numbers use the real numbers as the base field; however, in such a case, many coefficients are forced to vanish if the number of generators is less than 4. Thus, by convention, the Grassmann numbers are usually defined over the complex numbers.

Other conventions are possible; the above is sometimes referred to as the DeWitt convention; Rogers employs ${\displaystyle \theta ^{*}=i\theta }$ for the involution. In this convention, the real supernumbers always have real coefficients; whereas in the DeWitt convention, the real supernumbers may have both real and imaginary coefficients. Despite this, it is usually easiest to work with the DeWitt convention.

## Analysis

Products of an odd number of Grassmann variables anti-commute with each other; such a product is often called an a-number. Products of an even number of Grassmann variables commute (with all Grassmann numbers); they are often called c-numbers. By abuse of terminology, an a-number is sometimes called an anticommuting c-number. This decomposition into even and odd subspaces provides a ${\displaystyle \mathbb {Z} _{2}}$ grading on the algebra; thus Grassmann algebras are the prototypical examples of supercommutative algebras. Note that the c-numbers form a subalgebra of ${\displaystyle \Lambda }$, but the a-numbers do not (they are a subspace, not a subalgebra).

The definition of Grassmann numbers allows mathematical analysis to be performed, in analogy to analysis on complex numbers. That is, one may define superholomorphic functions, define derivatives, as well as defining integrals. Some of the basic concepts are developed in greater detail in the article on dual numbers.

As a general rule, it is usually easier to define the super-symmetric analogs of ordinary mathematical entities by working with Grassmann numbers with an infinite number of generators: most definitions become straightforward, and can be taken over from the corresponding bosonic definitions. For example, the Grassmann numbers can be thought of as generating a one-dimensional space;[clarification needed] a vector space, the m-dimensional superspace, then appears as the m-fold Cartesian product of ${\displaystyle \Lambda .}$[clarification needed] It can be shown that this is essentially equivalent to an algebra with m generators, but this requires work.[2][clarification needed]

## Integration

Integrals over Grassmann numbers are known as Berezin integrals. In order to reproduce the path integral for a Fermi field, the definition of Grassmann integration needs to have the following properties:

• linearity
${\displaystyle \int \,[af(\theta )+bg(\theta )]\,d\theta =a\int \,f(\theta )\,d\theta +b\int \,g(\theta )\,d\theta }$
• partial integration formula
${\displaystyle \int \left[{\frac {\partial }{\partial \theta }}f(\theta )\right]\,d\theta =0.}$

This results in the following rules for the integration of a Grassmann quantity:

${\displaystyle \int \,1\,d\theta =0}$
${\displaystyle \int \,\theta \,d\theta =1.}$

Thus we conclude that the operations of integration and differentiation of a Grassmann number are identical.
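This coincidence of integration and differentiation can be seen directly on a one-variable function f(θ) = a + bθ, which is the most general such function since θ² = 0. A minimal sketch (the coefficient-pair representation is an assumption made for exposition):

```python
def berezin(f):
    """Berezin integral of f(theta) = a + b*theta, represented as (a, b).
    By the rules  int 1 dtheta = 0  and  int theta dtheta = 1,
    the integral returns the coefficient b."""
    a, b = f
    return b

def d_dtheta(f):
    """Derivative of f(theta) = a + b*theta with respect to theta,
    which is simply b."""
    a, b = f
    return b

f = (3.0, 5.0)   # f(theta) = 3 + 5*theta
assert berezin(f) == d_dtheta(f) == 5.0
```

Both operations discard the constant term and extract the coefficient of θ, which is exactly the statement above.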

In the path integral formulation of quantum field theory the following Gaussian integral of Grassmann quantities is needed for fermionic anticommuting fields:

${\displaystyle \int \exp \left[-\theta ^{\rm {T}}A\eta \right]\,d\theta \,d\eta =\det A}$

with ${\displaystyle \theta }$ and ${\displaystyle \eta }$ being vectors of N independent Grassmann variables and A being an N × N matrix.
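The identity can be made plausible by expansion: since each variable squares to zero, only the term of the exponential containing every θ and η exactly once survives the Berezin integration, and collecting the permutation signs yields the Leibniz formula for det A. A sketch of that surviving sum (the function name is illustrative):

```python
from itertools import permutations
import numpy as np

def grassmann_gaussian(A):
    """Sum the terms that survive Berezin integration of exp(-theta^T A eta):
    one term per permutation sigma, with the sign of sigma, which is the
    Leibniz formula for det A."""
    N = A.shape[0]
    total = 0.0
    for sigma in permutations(range(N)):
        sign = 1
        for p in range(N):
            for q in range(p + 1, N):
                if sigma[p] > sigma[q]:
                    sign = -sign
        total += sign * np.prod([A[i, sigma[i]] for i in range(N)])
    return total

A = np.array([[2.0, 1.0],
              [0.5, 3.0]])
assert np.isclose(grassmann_gaussian(A), np.linalg.det(A))
```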

## Matrix representations

Grassmann numbers can be represented by matrices. Consider, for example, the Grassmann algebra generated by two Grassmann numbers ${\displaystyle \theta _{1}}$ and ${\displaystyle \theta _{2}}$. These Grassmann numbers can be represented by 4×4 matrices:

${\displaystyle \theta _{1}={\begin{bmatrix}0&0&0&0\\1&0&0&0\\0&0&0&0\\0&0&1&0\end{bmatrix}}\qquad \theta _{2}={\begin{bmatrix}0&0&0&0\\0&0&0&0\\1&0&0&0\\0&-1&0&0\end{bmatrix}}\qquad \theta _{1}\theta _{2}=-\theta _{2}\theta _{1}={\begin{bmatrix}0&0&0&0\\0&0&0&0\\0&0&0&0\\1&0&0&0\end{bmatrix}}.}$

In general, a Grassmann algebra on n generators can be represented by ${\displaystyle 2^{n}\times 2^{n}}$ square matrices. Physically, these matrices can be thought of as raising operators acting on a Hilbert space of n identical fermions in the occupation number basis. Since the occupation number for each fermion is 0 or 1, there are ${\displaystyle 2^{n}}$ possible basis states. Mathematically, these matrices can be interpreted as the linear operators corresponding to left exterior multiplication on the Grassmann algebra itself.
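The matrix identities are easy to check numerically. A quick sketch using the explicit 4×4 matrices of the two-generator example:

```python
import numpy as np

# the 4x4 matrices for theta_1 and theta_2 from the example above
t1 = np.zeros((4, 4)); t1[1, 0] = 1; t1[3, 2] = 1
t2 = np.zeros((4, 4)); t2[2, 0] = 1; t2[3, 1] = -1

# anticommutation and nilpotency hold as matrix identities
assert np.array_equal(t1 @ t2, -(t2 @ t1))
assert not np.any(t1 @ t1)   # theta_1^2 = 0
assert not np.any(t2 @ t2)   # theta_2^2 = 0
```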

## Generalisations

There are some generalisations of Grassmann numbers. These require rules in terms of N variables such that:

${\displaystyle \theta _{i_{1}}\theta _{i_{2}}\cdots \theta _{i_{N}}+\theta _{i_{N}}\theta _{i_{1}}\theta _{i_{2}}\cdots +\cdots =0}$

where the indices are summed over all permutations so that as a consequence:

${\displaystyle (\theta _{i})^{N}=0\,}$

for some N > 2. These are useful for calculating hyperdeterminants of N-tensors where N > 2 and also for calculating discriminants of polynomials for powers larger than 2. There is also the limiting case as N tends to infinity, in which case one can define analytic functions on the numbers. For example, in the case with N = 3 a single Grassmann number can be represented by the matrix:

${\displaystyle \theta ={\begin{bmatrix}0&1&0\\0&0&1\\0&0&0\end{bmatrix}}\qquad }$

so that ${\displaystyle \theta ^{3}=0}$. For two Grassmann numbers the matrix would be of size 10×10.
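The nilpotency of order 3 can be checked directly on this shift matrix; a quick numerical sketch:

```python
import numpy as np

# 3x3 shift matrix representing a single Grassmann variable with N = 3
theta = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [0, 0, 0]])

assert np.any(theta @ theta)               # theta^2 != 0
assert not np.any(theta @ theta @ theta)   # theta^3 = 0
```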

For example, the rules for N = 3 with two Grassmann variables imply:

${\displaystyle \theta _{1}(\theta _{2})^{2}+\theta _{2}\theta _{1}\theta _{2}+(\theta _{2})^{2}\theta _{1}=0}$

so that it can be shown that

${\displaystyle \theta _{1}(\theta _{2})^{2}=-{\frac {1}{2}}\theta _{2}\theta _{1}\theta _{2}=(\theta _{2})^{2}\theta _{1}}$

and so

${\displaystyle (\theta _{1})^{2}(\theta _{2})^{2}=(\theta _{2})^{2}(\theta _{1})^{2}=\theta _{1}(\theta _{2})^{2}\theta _{1}=\theta _{2}(\theta _{1})^{2}\theta _{2}=-{\frac {1}{2}}\theta _{1}\theta _{2}\theta _{1}\theta _{2}=-{\frac {1}{2}}\theta _{2}\theta _{1}\theta _{2}\theta _{1},}$

which gives a definition for the hyperdeterminant of a 2×2×2 tensor as

${\displaystyle (A^{abc}\theta _{a}\eta _{b}\psi _{c})^{4}=\det(A)(\theta _{1})^{2}(\theta _{2})^{2}(\eta _{1})^{2}(\eta _{2})^{2}(\psi _{1})^{2}(\psi _{2})^{2}.}$