# Grassmann number

In mathematical physics, a Grassmann number, named after Hermann Grassmann (also called an anticommuting number or supernumber), is an element of the exterior algebra over the complex numbers.[1] The special case of a 1-dimensional algebra is known as a dual number. Grassmann numbers saw an early use in physics to express a path integral representation for fermionic fields, although they are now widely used as a foundation for superspace, on which supersymmetry is constructed.

## Informal discussion

Grassmann numbers are generated by anti-commuting elements or objects. The idea of anti-commuting objects arises in multiple areas of mathematics: they are typically seen in differential geometry, where the differential forms are anti-commuting. Differential forms are normally defined in terms of derivatives on a manifold; however, one can contemplate the situation where one "forgets" or "ignores" the existence of any underlying manifold, and "forgets" or "ignores" that the forms were defined as derivatives, and instead, simply contemplate a situation where one has objects that anti-commute, and have no other pre-defined or pre-supposed properties. Such objects form an algebra, and specifically the Grassmann algebra or exterior algebra.

The Grassmann numbers are elements of that algebra. The appellation of "number" is justified by the fact that they behave not unlike "ordinary" numbers: they can be added, multiplied and divided: they behave almost like a field. More can be done: one can consider polynomials of Grassmann numbers, leading to the idea of holomorphic functions. One can take derivatives of such functions, and then consider the anti-derivatives as well. Each of these ideas can be carefully defined, and corresponds reasonably well to the equivalent concept from ordinary mathematics. The analogy does not stop there: one has an entire branch of supermathematics, where the analog of Euclidean space is superspace, the analog of a manifold is a supermanifold, the analog of a Lie algebra is a Lie superalgebra and so on. The Grassmann numbers are the underlying construct that makes this all possible.

Of course, one could pursue a similar program for any other field, or even ring, and this is indeed widely and commonly done in mathematics. However, supermathematics takes on a special significance in physics, because the anti-commuting behavior can be strongly identified with the quantum-mechanical behavior of fermions: the anti-commutation is that of the Pauli exclusion principle. Thus, the study of Grassmann numbers, and of supermathematics, in general, is strongly driven by their utility in physics.

Specifically, in quantum field theory, or more narrowly, second quantization, one works with ladder operators that create multi-particle quantum states. The ladder operators for fermions create field quanta that must necessarily have anti-symmetric wave functions, as this is forced by the Pauli exclusion principle. In this situation, a Grassmann number corresponds immediately and directly to a wave function that contains some (typically indeterminate) number of fermions.

When the number of fermions is fixed and finite, an explicit relationship between anticommutation relations and spinors is given by means of the spin group. This group can be defined as the subset of unit-length vectors in the Clifford algebra, and naturally factorizes into anti-commuting Weyl spinors. Both the anti-commutation and the expression as spinors arise in a natural fashion for the spin group. In essence, the Grassmann numbers can be thought of as discarding the relationships arising from spin, and keeping only the relationships due to anti-commutation.

## General description and properties

Grassmann numbers are individual elements or points of the exterior algebra generated by a set of n Grassmann variables or Grassmann directions or supercharges ${\displaystyle \{\theta _{i}\}}$, with n possibly being infinite. The usage of the term "Grassmann variables" is historic; they are not variables, per se; they are better understood as the basis elements of a unital algebra. The terminology comes from the fact that a primary use is to define integrals, and that the variable of integration is Grassmann-valued, and thus, by abuse of language, is called a Grassmann variable. Similarly, the notion of direction comes from the notion of superspace, where ordinary Euclidean space is extended with additional Grassmann-valued "directions". The appellation of charge comes from the notion of charges in physics, which correspond to the generators of physical symmetries (via Noether's theorem). The perceived symmetry is that multiplication by a single Grassmann variable swaps the ${\displaystyle \mathbb {Z} _{2}}$ grading between fermions and bosons; this is discussed in greater detail below.

The Grassmann variables are the basis vectors of a vector space (of dimension n). They form an algebra over a field, with the field usually being taken to be the complex numbers, although one could contemplate other fields, such as the reals. The algebra is a unital algebra, and the generators are anti-commuting:

${\displaystyle \theta _{i}\theta _{j}=-\theta _{j}\theta _{i}}$

Since the ${\displaystyle \theta _{i}}$ are elements of a vector space over the complex numbers, they, by definition, commute with complex numbers. That is, for complex x, one has

${\displaystyle \theta _{i}x=x\theta _{i}.}$

The squares of the generators vanish:

${\displaystyle (\theta _{i})^{2}=0,}$ since ${\displaystyle \theta _{i}\theta _{i}=-\theta _{i}\theta _{i}.}$

In other words, a Grassmann variable is a non-zero square-root of zero.
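These defining relations can be made concrete in a few lines of code. The sketch below is illustrative only (the names `gmul`, `_sort_with_sign` and the dict representation are not from any library): a Grassmann number is stored as a dictionary mapping sorted tuples of generator indices to coefficients, with the sign determined by the parity of the sorting permutation.

```python
def _sort_with_sign(indices):
    """Sort a tuple of generator indices, tracking permutation parity.
    Returns (sorted_tuple, sign), or (None, 0) if an index repeats,
    since a repeated generator squares to zero."""
    seq = list(indices)
    if len(set(seq)) != len(seq):
        return None, 0
    sign = 1
    for i in range(len(seq)):            # bubble sort, counting swaps
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return tuple(seq), sign

def gmul(a, b):
    """Product of two Grassmann numbers in the dict representation."""
    out = {}
    for t1, c1 in a.items():
        for t2, c2 in b.items():
            t, s = _sort_with_sign(t1 + t2)
            if t is not None:
                out[t] = out.get(t, 0) + s * c1 * c2
    return {t: c for t, c in out.items() if c != 0}

theta1 = {(1,): 1}      # the generator theta_1
theta2 = {(2,): 1}      # the generator theta_2

assert gmul(theta1, theta2) == {(1, 2): 1}    # theta_1 theta_2
assert gmul(theta2, theta1) == {(1, 2): -1}   # = -theta_1 theta_2
assert gmul(theta1, theta1) == {}             # squares vanish
```

The empty tuple `()` would hold the purely complex ("body") part of a general Grassmann number in this representation.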

## Formal definition

Formally, let V be an n-dimensional complex vector space with basis ${\displaystyle \theta _{i},i=1,\ldots ,n}$. The Grassmann algebra whose Grassmann variables are ${\displaystyle \theta _{i},i=1,\ldots ,n}$ is defined to be the exterior algebra of V, namely

${\displaystyle \Lambda (V)=\mathbb {C} \oplus V\oplus \left(V\wedge V\right)\oplus \left(V\wedge V\wedge V\right)\oplus \cdots \oplus \underbrace {\left(V\wedge V\wedge \cdots \wedge V\right)} _{n}\equiv \mathbb {C} \oplus \Lambda ^{1}V\oplus \Lambda ^{2}V\oplus \cdots \oplus \Lambda ^{n}V,}$

where ${\displaystyle \wedge }$ is the exterior product and ${\displaystyle \oplus }$ is the direct sum. The individual elements of this algebra are then called Grassmann numbers. It is standard to omit the wedge symbol ${\displaystyle \wedge }$ when writing a Grassmann number once the definition is established. A general Grassmann number can be written as

${\displaystyle z=\sum _{k=0}^{n}\sum _{i_{1},i_{2},\cdots ,i_{k}}c_{i_{1}i_{2}\cdots i_{k}}\theta _{i_{1}}\theta _{i_{2}}\cdots \theta _{i_{k}},}$

where ${\displaystyle (i_{1},i_{2},\ldots ,i_{k})}$ are strictly increasing k-tuples with ${\displaystyle 1\leq i_{j}\leq n,1\leq j\leq k}$, and the ${\displaystyle c_{i_{1}i_{2}\cdots i_{k}}}$ are complex, completely antisymmetric tensors of rank k. Again, the ${\displaystyle \theta _{i}}$, and the ${\displaystyle \theta _{i}\wedge \theta _{j}=\theta _{i}\theta _{j}}$ (subject to ${\displaystyle i<j}$), and larger finite products, can be seen here to be playing the role of basis vectors of subspaces of ${\displaystyle \Lambda }$.

The Grassmann algebra generated by n linearly independent Grassmann variables has dimension ${\displaystyle 2^{n}}$; this follows from the binomial theorem applied to the above sum, and the fact that the (n + 1)-fold product of variables must vanish, by the anti-commutation relations, above. The dimension of ${\displaystyle \Lambda ^{k}V}$ is given by n choose k, the binomial coefficient. The special case of n = 1 is called a dual number, and was introduced by William Clifford in 1873.
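The dimension count can be checked directly: the graded dimensions are the binomial coefficients, and they sum to ${\displaystyle 2^{n}}$. The value n = 4 below is an arbitrary illustration.

```python
# Dimensions of the graded pieces Lambda^k V for n = 4 generators,
# and their sum, the total dimension of the Grassmann algebra.
from math import comb

n = 4
dims = [comb(n, k) for k in range(n + 1)]   # dim of Lambda^k V
assert dims == [1, 4, 6, 4, 1]
assert sum(dims) == 2**n                    # total dimension 2^4 = 16
```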

In case V is infinite-dimensional, the above series does not terminate and one defines

${\displaystyle \Lambda _{\infty }(V)=\mathbb {C} \oplus \Lambda ^{1}V\oplus \Lambda ^{2}V\oplus \cdots .}$

The general element is now

${\displaystyle z=\sum _{k=0}^{\infty }\sum _{i_{1},i_{2},\cdots ,i_{k}}{\frac {1}{k!}}c_{i_{1}i_{2}\cdots i_{k}}\theta _{i_{1}}\theta _{i_{2}}\cdots \theta _{i_{k}}\equiv z_{B}+z_{S}=z_{B}+\sum _{k=1}^{\infty }\sum _{i_{1},i_{2},\cdots ,i_{k}}{\frac {1}{k!}}c_{i_{1}i_{2}\cdots i_{k}}\theta _{i_{1}}\theta _{i_{2}}\cdots \theta _{i_{k}},}$

where ${\displaystyle z_{B}}$ is sometimes referred to as the body and ${\displaystyle z_{S}}$ as the soul of the supernumber ${\displaystyle z}$.

## Properties

In the finite-dimensional case (using the same terminology) the soul is nilpotent, i.e.

${\displaystyle z_{S}^{n+1}=0,}$

but this is not necessarily so in the infinite-dimensional case.[2]

If V is finite-dimensional, then

${\displaystyle \theta _{i}z=0,\quad 1\leq i\leq n\Rightarrow z=c\theta _{1}\theta _{2}\cdots \theta _{n},\quad c\in \mathbb {C} ,}$

and if V is infinite-dimensional[3]

${\displaystyle \theta _{a}z=0\quad \forall a\Rightarrow z=0.}$

## Finite vs. countable sets of generators

Two distinct kinds of supernumbers commonly appear in the literature: those with a finite number of generators, typically n = 1, 2, 3 or 4, and those with a countably-infinite number of generators. These two situations are not as unrelated as they may seem at first. First, in the definition of a supermanifold, one variant uses a countably-infinite number of generators, but then employs a topology that effectively reduces the dimension to a small finite number.[4][5]

In the other case, one may start with a finite number of generators, but in the course of second quantization, a need for an infinite number of generators arises: one each for every possible momentum that a fermion might carry.

## Involution, choice of field

The complex numbers are usually chosen as the field for the definition of the Grassmann numbers, as opposed to the real numbers, as this avoids some strange behaviors when a conjugation or involution is introduced. It is common to introduce an operator * on the Grassmann numbers such that:

${\displaystyle \theta =\theta ^{*}}$

when ${\displaystyle \theta }$ is a generator, and such that

${\displaystyle (\theta _{i}\theta _{j}\cdots \theta _{k})^{*}=\theta _{k}\cdots \theta _{j}\theta _{i}}$

One may then consider Grassmann numbers z for which ${\displaystyle z=z^{*}}$, and term these (super) real, while those that obey ${\displaystyle z^{*}=-z}$ are termed (super) imaginary. These definitions carry through just fine, even if the Grassmann numbers use the real numbers as the base field; however, in such a case, many coefficients are forced to vanish if the number of generators is less than 4. Thus, by convention, the Grassmann numbers are usually defined over the complex numbers.

Other conventions are possible; the above is sometimes referred to as the DeWitt convention; Rogers employs ${\displaystyle \theta ^{*}=i\theta }$ for the involution. In this convention, the real supernumbers always have real coefficients; whereas in the DeWitt convention, the real supernumbers may have both real and imaginary coefficients. Despite this, it is usually easiest to work with the DeWitt convention.

## Analysis

Products of an odd number of Grassmann variables anti-commute with each other; such a product is often called an a-number. Products of an even number of Grassmann variables commute (with all Grassmann numbers); they are often called c-numbers. By abuse of terminology, an a-number is sometimes called an anticommuting c-number. This decomposition into even and odd subspaces provides a ${\displaystyle \mathbb {Z} _{2}}$ grading on the algebra; thus Grassmann algebras are the prototypical examples of supercommutative algebras. Note that the c-numbers form a subalgebra of ${\displaystyle \Lambda }$, but the a-numbers do not (they are a subspace, not a subalgebra).

The definition of Grassmann numbers allows mathematical analysis to be performed, in analogy to analysis on complex numbers. That is, one may define superholomorphic functions, derivatives, and integrals. Some of the basic concepts are developed in greater detail in the article on dual numbers.

As a general rule, it is usually easier to define the super-symmetric analogs of ordinary mathematical entities by working with Grassmann numbers with an infinite number of generators: most definitions become straightforward, and can be taken over from the corresponding bosonic definitions. For example, a single Grassmann number can be thought of as generating a one-dimensional space. A vector space, the m-dimensional superspace, then appears as the m-fold Cartesian product of these one-dimensional ${\displaystyle \Lambda .}$ It can be shown that this is essentially equivalent to an algebra with m generators, but this requires work.[6]

### Spinor space

The spinor space is defined as the Grassmann or exterior algebra ${\displaystyle \textstyle {\bigwedge }W}$ of the space of Weyl spinors ${\displaystyle W}$ (and anti-spinors ${\displaystyle {\overline {W}}}$), such that the wave functions of n fermions belong in ${\displaystyle \textstyle {\bigwedge }^{n}W}$.

## Integration

Integrals over Grassmann numbers are known as Berezin integrals (sometimes called Grassmann integrals). In order to reproduce the path integral for a Fermi field, the definition of Grassmann integration needs to have the following properties:

• linearity
${\displaystyle \int \,[af(\theta )+bg(\theta )]\,d\theta =a\int \,f(\theta )\,d\theta +b\int \,g(\theta )\,d\theta }$
• partial integration formula
${\displaystyle \int \left[{\frac {\partial }{\partial \theta }}f(\theta )\right]\,d\theta =0.}$

Moreover, the Taylor expansion of any function ${\displaystyle f(\theta )=A+B\theta }$ terminates after two terms because ${\displaystyle \theta ^{2}=0}$, and quantum field theory additionally requires invariance under the shift of integration variables ${\displaystyle \theta \to \theta +\eta }$ such that

${\displaystyle \int d\theta f(\theta )=\int d\theta (A+B\theta )\equiv \int d\theta ((A+B\eta )+B\theta ).}$

The only linear function satisfying this condition is a constant (conventionally 1) times B, so Berezin defined[7]

${\displaystyle \int d\theta (A+B\theta )\equiv B.}$

This results in the following rules for the integration of a Grassmann quantity:

• ${\displaystyle \int \,1\,d\theta =0}$
• ${\displaystyle \int \,\theta \,d\theta =1.}$

Thus we conclude that the operations of integration and differentiation of a Grassmann number are identical.
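These two rules, together with linearity, pin down the Berezin integral completely. A minimal sketch (representing ${\displaystyle f(\theta )=A+B\theta }$ by the coefficient pair (A, B); the names `berezin` and `d_dtheta` are illustrative, not from any library) makes the identity of integration and differentiation explicit:

```python
# For a single Grassmann variable, any function is f(theta) = A + B*theta,
# stored here as the coefficient pair (A, B).

def berezin(f):
    """Berezin integral: int f(theta) d(theta) = B."""
    A, B = f
    return B

def d_dtheta(f):
    """Left derivative: d/dtheta (A + B*theta) = B."""
    A, B = f
    return B

assert berezin((1.0, 0.0)) == 0.0   # int 1 d(theta) = 0
assert berezin((0.0, 1.0)) == 1.0   # int theta d(theta) = 1

f = (3.0, 5.0)                      # f(theta) = 3 + 5*theta
assert berezin(f) == d_dtheta(f)    # integration = differentiation
```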

In the path integral formulation of quantum field theory the following Gaussian integral of Grassmann quantities is needed for fermionic anticommuting fields, with A being an N × N matrix:

${\displaystyle \int \exp \left[-\theta ^{\rm {T}}A\eta \right]\,d\theta \,d\eta =\det A}$.

### Conventions and complex integration

An ambiguity arises when integrating over multiple Grassmann numbers. The convention that performs the innermost integral first yields

${\displaystyle \int d\theta \int d\eta \;\eta \theta =+1.}$

Some authors also define complex conjugation similar to Hermitian conjugation of operators,[8]

${\displaystyle (\theta \eta )^{*}\equiv \eta ^{*}\theta ^{*}=-\theta ^{*}\eta ^{*}.}$

With the additional convention

${\displaystyle \theta ={\frac {\theta _{1}+i\theta _{2}}{\sqrt {2}}},\quad \theta ^{*}={\frac {\theta _{1}-i\theta _{2}}{\sqrt {2}}},}$

we can treat θ and θ* as independent Grassmann numbers, and adopt

${\displaystyle \int d\theta ^{*}d\theta \,(\theta \theta ^{*})=1.}$

Thus a Gaussian integral evaluates to

${\displaystyle \int d\theta ^{*}d\theta \,e^{-\theta ^{*}b\theta }=\int d\theta ^{*}d\theta \,(1-\theta ^{*}b\theta )=\int d\theta ^{*}d\theta \,(1+\theta \theta ^{*}b)=b}$

and an extra factor of θθ* effectively introduces a factor of (1/b), just like an ordinary Gaussian,

${\displaystyle \int d\theta ^{*}d\theta \,\theta \theta ^{*}\,e^{-\theta ^{*}b\theta }=1.}$

Diagonalizing B by a unitary transformation, under which the Berezin measure is invariant, we can evaluate a general Gaussian integral involving a Hermitian matrix B with eigenvalues bi,[8][9]

${\displaystyle \left(\prod _{i}\int d\theta _{i}^{*}d\theta _{i}\right)e^{-\theta _{i}^{*}B_{ij}\theta _{j}}=\left(\prod _{i}\int d\theta _{i}^{*}d\theta _{i}\right)e^{-\theta _{i}^{*}b_{i}\theta _{i}}=\prod _{i}b_{i}=\det B.}$
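The determinant formula can be verified numerically with a short, self-contained sketch of the Grassmann algebra (the dict representation and the names `gmul`, `gexp` are illustrative, not from any library). It encodes the convention ${\displaystyle \int d\theta ^{*}d\theta \,(\theta \theta ^{*})=1}$ stated above and checks the result against `numpy.linalg.det` for a random real symmetric (hence Hermitian) matrix B:

```python
from math import factorial
import numpy as np

def _sort_with_sign(indices):
    """Sort generator indices, tracking permutation parity;
    a repeated index means the term vanishes."""
    seq = list(indices)
    if len(set(seq)) != len(seq):
        return None, 0
    sign = 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return tuple(seq), sign

def gmul(a, b):
    """Product of Grassmann numbers (dicts: sorted index tuple -> coefficient)."""
    out = {}
    for t1, c1 in a.items():
        for t2, c2 in b.items():
            t, s = _sort_with_sign(t1 + t2)
            if t is not None:
                out[t] = out.get(t, 0.0) + s * c1 * c2
    return out

def gexp(x, kmax):
    """exp of a nilpotent even element: the power series terminates."""
    result, power = {(): 1.0}, {(): 1.0}
    for k in range(1, kmax + 1):
        power = gmul(power, x)
        for t, c in power.items():
            result[t] = result.get(t, 0.0) + c / factorial(k)
    return result

n = 3
M = np.random.default_rng(0).standard_normal((n, n))
B = (M + M.T) / 2           # real symmetric, hence Hermitian

# Encode theta_i as odd index 2i-1 and theta_i^* as even index 2i.
# The exponent -theta^dagger B theta equals +sum_ij B_ij theta_j theta_i^*.
S = {}
for i in range(1, n + 1):
    for j in range(1, n + 1):
        t, s = _sort_with_sign((2 * j - 1, 2 * i))
        S[t] = S.get(t, 0.0) + s * B[i - 1, j - 1]

# With int d(theta^*) d(theta) (theta theta^*) = 1, the iterated integral
# picks out the coefficient of theta_1 theta_1^* ... theta_n theta_n^*.
integral = gexp(S, n).get(tuple(range(1, 2 * n + 1)), 0.0)
assert np.isclose(integral, np.linalg.det(B))
```

The same check passes for non-Hermitian B as well, since the determinant formula follows from the expansion of the exponential alone.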

## Matrix representations

Grassmann numbers can be represented by matrices. Consider, for example, the Grassmann algebra generated by two Grassmann numbers ${\displaystyle \theta _{1}}$ and ${\displaystyle \theta _{2}}$. These Grassmann numbers can be represented by 4×4 matrices:

${\displaystyle \theta _{1}={\begin{bmatrix}0&0&0&0\\1&0&0&0\\0&0&0&0\\0&0&1&0\end{bmatrix}}\qquad \theta _{2}={\begin{bmatrix}0&0&0&0\\0&0&0&0\\1&0&0&0\\0&-1&0&0\end{bmatrix}}\qquad \theta _{1}\theta _{2}=-\theta _{2}\theta _{1}={\begin{bmatrix}0&0&0&0\\0&0&0&0\\0&0&0&0\\1&0&0&0\end{bmatrix}}.}$

In general, a Grassmann algebra on n generators can be represented by ${\displaystyle 2^{n}\times 2^{n}}$ square matrices. Physically, these matrices can be thought of as raising operators acting on a Hilbert space of n identical fermions in the occupation number basis. Since the occupation number for each fermion is 0 or 1, there are ${\displaystyle 2^{n}}$ possible basis states. Mathematically, these matrices can be interpreted as the linear operators corresponding to left exterior multiplication on the Grassmann algebra itself.
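The 4×4 matrices above can be checked mechanically. In the ordered basis ${\displaystyle \{1,\theta _{1},\theta _{2},\theta _{1}\theta _{2}\}}$, each matrix acts by left exterior multiplication, and the anticommutation relations hold as matrix identities:

```python
import numpy as np

# Left exterior multiplication by theta_1 and theta_2 in the ordered
# basis {1, theta1, theta2, theta1*theta2}.
theta1 = np.array([[0, 0, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 1, 0]])
theta2 = np.array([[0,  0, 0, 0],
                   [0,  0, 0, 0],
                   [1,  0, 0, 0],
                   [0, -1, 0, 0]])

assert np.array_equal(theta1 @ theta2, -(theta2 @ theta1))  # anticommutation
assert not np.any(theta1 @ theta1)                          # theta1^2 = 0
assert not np.any(theta2 @ theta2)                          # theta2^2 = 0
# The product theta1*theta2 sends the basis vector 1 to theta1*theta2:
assert np.array_equal((theta1 @ theta2)[:, 0], np.array([0, 0, 0, 1]))
```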

## Generalisations

There are some generalisations to Grassmann numbers. These require rules in terms of N variables such that:

${\displaystyle \theta _{i_{1}}\theta _{i_{2}}\cdots \theta _{i_{N}}+\theta _{i_{N}}\theta _{i_{1}}\theta _{i_{2}}\cdots +\cdots =0}$

where the indices are summed over all permutations so that as a consequence:

${\displaystyle (\theta _{i})^{N}=0\,}$

for some N > 2. These are useful for calculating hyperdeterminants of N-tensors where N > 2 and also for calculating discriminants of polynomials for powers larger than 2. There is also the limiting case as N tends to infinity, in which case one can define analytic functions on the numbers. For example, in the case with N = 3 a single Grassmann number can be represented by the matrix:

${\displaystyle \theta ={\begin{bmatrix}0&1&0\\0&0&1\\0&0&0\end{bmatrix}}\qquad }$

so that ${\displaystyle \theta ^{3}=0}$. For two such Grassmann numbers the matrix would be of size 10×10.
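The nilpotency of this 3×3 shift matrix is straightforward to verify: its square is nonzero, while its cube vanishes.

```python
import numpy as np

# The 3x3 shift matrix representing a single N = 3 generalized
# Grassmann number: theta^2 != 0, but theta^3 = 0.
theta = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [0, 0, 0]])

assert np.any(theta @ theta)                 # theta^2 is nonzero
assert not np.any(theta @ theta @ theta)     # theta^3 = 0
```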

For example, the rules for N = 3 with two Grassmann variables imply:

${\displaystyle \theta _{1}(\theta _{2})^{2}+\theta _{2}\theta _{1}\theta _{2}+(\theta _{2})^{2}\theta _{1}=0}$

so that it can be shown that

${\displaystyle \theta _{1}(\theta _{2})^{2}=-{\frac {1}{2}}\theta _{2}\theta _{1}\theta _{2}=(\theta _{2})^{2}\theta _{1}}$

and so

${\displaystyle (\theta _{1})^{2}(\theta _{2})^{2}=(\theta _{2})^{2}(\theta _{1})^{2}=\theta _{1}(\theta _{2})^{2}\theta _{1}=\theta _{2}(\theta _{1})^{2}\theta _{2}=-{\frac {1}{2}}\theta _{1}\theta _{2}\theta _{1}\theta _{2}=-{\frac {1}{2}}\theta _{2}\theta _{1}\theta _{2}\theta _{1},}$

which gives a definition for the hyperdeterminant of a 2×2×2 tensor as

${\displaystyle (A^{abc}\theta _{a}\eta _{b}\psi _{c})^{4}=\det(A)(\theta _{1})^{2}(\theta _{2})^{2}(\eta _{1})^{2}(\eta _{2})^{2}(\psi _{1})^{2}(\psi _{2})^{2}.}$

## Notes

1. ^ DeWitt 1984, Chapter 1, page 1.
2. ^ DeWitt 1984, pp. 1-2.
3. ^ DeWitt 1984, p. 2.
4. ^ Rogers 2007a, Chapter 1 (available online)
5. ^ Rogers 2007, Chapter 1 and Chapter 8.
6. ^ Rogers 2007
7. ^ Berezin, F. A. (1966). The Method of Second Quantization. Pure and Applied Physics. 24. New York. ISSN 0079-8193.
8. ^ a b Peskin, Michael E.; Schroeder, Daniel V. (1995). An introduction to quantum field theory (5. (corrected) printing. ed.). Reading, Mass.: Addison-Wesley. ISBN 9780201503975.
9. ^ Indices' typo present in source.