# Jack function


In mathematics, the Jack function is a generalization of the Jack polynomial, introduced by Henry Jack. The Jack polynomial is a homogeneous, symmetric polynomial which generalizes the Schur and zonal polynomials, and is in turn generalized by the Heckman–Opdam polynomials and Macdonald polynomials.

## Definition

The Jack function ${\displaystyle J_{\kappa }^{(\alpha )}(x_{1},x_{2},\ldots )}$ of an integer partition ${\displaystyle \kappa }$, parameter ${\displaystyle \alpha }$, and infinitely many arguments ${\displaystyle x_{1},x_{2},\ldots }$ can be defined recursively as follows:

For m=1
${\displaystyle J_{k}^{(\alpha )}(x_{1})=x_{1}^{k}(1+\alpha )\cdots (1+(k-1)\alpha )}$
For m>1
${\displaystyle J_{\kappa }^{(\alpha )}(x_{1},x_{2},\ldots ,x_{m})=\sum _{\mu }J_{\mu }^{(\alpha )}(x_{1},x_{2},\ldots ,x_{m-1})x_{m}^{|\kappa /\mu |}\beta _{\kappa \mu },}$

where the summation is over all partitions ${\displaystyle \mu }$ such that the skew partition ${\displaystyle \kappa /\mu }$ is a horizontal strip, namely

${\displaystyle \kappa _{1}\geq \mu _{1}\geq \kappa _{2}\geq \mu _{2}\geq \cdots \geq \kappa _{n-1}\geq \mu _{n-1}\geq \kappa _{n}}$ (${\displaystyle \mu _{n}}$ must be zero or otherwise ${\displaystyle J_{\mu }(x_{1},\ldots ,x_{n-1})=0}$) and
${\displaystyle \beta _{\kappa \mu }={\frac {\prod _{(i,j)\in \kappa }B_{\kappa \mu }^{\kappa }(i,j)}{\prod _{(i,j)\in \mu }B_{\kappa \mu }^{\mu }(i,j)}},}$

where ${\displaystyle B_{\kappa \mu }^{\nu }(i,j)}$ equals ${\displaystyle \nu _{j}'-i+\alpha (\nu _{i}-j+1)}$ if ${\displaystyle \kappa _{j}'=\mu _{j}'}$ and ${\displaystyle \nu _{j}'-i+1+\alpha (\nu _{i}-j)}$ otherwise. The expressions ${\displaystyle \kappa '}$, ${\displaystyle \mu '}$, and ${\displaystyle \nu '}$ refer to the conjugate partitions of ${\displaystyle \kappa }$, ${\displaystyle \mu }$, and ${\displaystyle \nu }$, respectively. The notation ${\displaystyle (i,j)\in \kappa }$ means that the product is taken over all coordinates ${\displaystyle (i,j)}$ of boxes in the Young diagram of the partition ${\displaystyle \kappa }$.
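The recursion above translates directly into code. The following is a minimal, unoptimized Python sketch (all function and variable names are ours, not any library's API) that evaluates ${\displaystyle J_{\kappa }^{(\alpha )}}$ numerically for a partition given as a tuple of nonincreasing parts; the factors ${\displaystyle B_{\kappa \mu }^{\nu }}$ use the parts and conjugate of the superscript partition ${\displaystyle \nu }$, with the branch chosen by comparing ${\displaystyle \kappa _{j}'}$ and ${\displaystyle \mu _{j}'}$.

```python
def conjugate(kappa):
    """Conjugate partition: kappa'_j = #{i : kappa_i >= j}, for j = 1..kappa_1."""
    width = kappa[0] if kappa else 0
    return tuple(sum(1 for part in kappa if part >= j) for j in range(1, width + 1))

def B(nu, kappa, mu, i, j, alpha):
    """One linear factor B^nu_{kappa,mu}(i, j) of the recursion (boxes 1-indexed)."""
    kp, mup, nup = conjugate(kappa), conjugate(mu), conjugate(nu)
    kpj = kp[j - 1] if j <= len(kp) else 0
    mpj = mup[j - 1] if j <= len(mup) else 0
    npj = nup[j - 1] if j <= len(nup) else 0
    if kpj == mpj:
        return npj - i + alpha * (nu[i - 1] - j + 1)
    return npj - i + 1 + alpha * (nu[i - 1] - j)

def beta_coeff(kappa, mu, alpha):
    """beta_{kappa,mu}: product over boxes of kappa over product over boxes of mu."""
    num = den = 1.0
    for i in range(1, len(kappa) + 1):
        for j in range(1, kappa[i - 1] + 1):
            num *= B(kappa, kappa, mu, i, j, alpha)
    for i in range(1, len(mu) + 1):
        for j in range(1, mu[i - 1] + 1):
            den *= B(mu, kappa, mu, i, j, alpha)
    return num / den

def horizontal_strips(kappa):
    """All mu such that kappa/mu is a horizontal strip: kappa_{i+1} <= mu_i <= kappa_i."""
    def rec(i):
        if i == len(kappa):
            yield ()
        else:
            lo = kappa[i + 1] if i + 1 < len(kappa) else 0
            for m in range(lo, kappa[i] + 1):
                for rest in rec(i + 1):
                    yield (m,) + rest
    for mu in rec(0):
        yield tuple(p for p in mu if p > 0)  # drop trailing zeros

def jack_J(kappa, xs, alpha):
    """Evaluate J_kappa^(alpha)(x_1, ..., x_m) by the two-case recursion."""
    if not kappa:
        return 1.0
    if len(kappa) > len(xs):
        return 0.0                       # more parts than variables: J vanishes
    if len(xs) == 1:
        val = xs[0] ** kappa[0]
        for t in range(1, kappa[0]):
            val *= 1 + t * alpha         # x^k (1 + alpha) ... (1 + (k-1) alpha)
        return val
    total = 0.0
    for mu in horizontal_strips(kappa):
        inner = jack_J(mu, xs[:-1], alpha)
        if inner != 0.0:
            total += (inner * xs[-1] ** (sum(kappa) - sum(mu))
                      * beta_coeff(kappa, mu, alpha))
    return total
```

A convenient sanity check uses the connection with Schur polynomials discussed below: at ${\displaystyle \alpha =1}$ one has ${\displaystyle J_{(2)}^{(1)}(x_{1},x_{2})=2(x_{1}^{2}+x_{1}x_{2}+x_{2}^{2})}$ and ${\displaystyle J_{(1,1)}^{(1)}(x_{1},x_{2})=2x_{1}x_{2}}$.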

### Combinatorial formula

In 1997, F. Knop and S. Sahi [1] gave a purely combinatorial formula for the Jack polynomials ${\displaystyle J_{\lambda }^{(\alpha )}}$ in n variables:

${\displaystyle J_{\lambda }^{(\alpha )}=\sum _{T}d_{T}(\alpha )\prod _{s\in T}x_{T(s)}}$.

The sum is taken over all admissible tableaux of shape ${\displaystyle \lambda }$, and ${\displaystyle d_{T}(\alpha )=\prod _{s\in T{\text{ critical}}}d_{\lambda }(\alpha )(s)}$ with ${\displaystyle d_{\lambda }(\alpha )(s)=\alpha (a_{\lambda }(s)+1)+(l_{\lambda }(s)+1)}$.

An admissible tableau of shape ${\displaystyle \lambda }$ is a filling of the Young diagram ${\displaystyle \lambda }$ with numbers 1,2,…,n such that for any box (i,j) in the tableau,

• T(i,j) ≠ T(i',j) whenever i' > i.
• T(i,j) ≠ T(i',j−1) whenever j > 1 and i' < i.

A box ${\displaystyle s=(i,j)\in \lambda }$ is critical for the tableau T if j>1 and ${\displaystyle T(i,j)=T(i,j-1)}$.

This result can be seen as a special case of the more general combinatorial formula for Macdonald polynomials.
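For small shapes the Knop–Sahi formula can be checked by brute force: enumerate every filling with entries 1, …, n, keep the admissible ones, and weight each by the d-factors of its critical boxes. A minimal Python sketch (function names are ours):

```python
from itertools import product

def knop_sahi_J(lam, n, xs, alpha):
    """J_lambda^(alpha)(x_1..x_n) as a weighted sum over admissible fillings."""
    boxes = [(i, j) for i in range(1, len(lam) + 1)
             for j in range(1, lam[i - 1] + 1)]
    conj = [sum(1 for part in lam if part >= j) for j in range(1, lam[0] + 1)]

    def d(i, j):
        # d_lambda(alpha)(s) = alpha*(arm + 1) + (leg + 1)
        return alpha * (lam[i - 1] - j + 1) + (conj[j - 1] - i + 1)

    def admissible(T):
        for (i, j) in boxes:
            for (ip, jp) in boxes:
                if jp == j and ip > i and T[(i, j)] == T[(ip, jp)]:
                    return False      # repeat lower down in the same column
                if j > 1 and jp == j - 1 and ip < i and T[(i, j)] == T[(ip, jp)]:
                    return False      # forbidden repeat in the previous column
        return True

    total = 0.0
    for filling in product(range(1, n + 1), repeat=len(boxes)):
        T = dict(zip(boxes, filling))
        if not admissible(T):
            continue
        term = 1.0
        for (i, j) in boxes:
            if j > 1 and T[(i, j)] == T[(i, j - 1)]:
                term *= d(i, j)       # critical box contributes its d-factor
            term *= xs[T[(i, j)] - 1]
        total += term
    return total
```

This agrees with the recursive definition, e.g. ${\displaystyle J_{(2)}^{(1)}(1,2)=14}$.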

## C normalization

The Jack functions form an orthogonal basis in a space of symmetric polynomials, with inner product: ${\displaystyle \langle f,g\rangle =\int _{[0,2\pi ]^{n}}f\left(e^{i\theta _{1}},\ldots ,e^{i\theta _{n}}\right){\overline {g\left(e^{i\theta _{1}},\ldots ,e^{i\theta _{n}}\right)}}\prod _{1\leq j<k\leq n}\left|e^{i\theta _{j}}-e^{i\theta _{k}}\right|^{2/\alpha }d\theta _{1}\cdots d\theta _{n}.}$

This orthogonality property is unaffected by normalization. The normalization defined above is typically referred to as the J normalization. The C normalization is defined as

${\displaystyle C_{\kappa }^{(\alpha )}(x_{1},x_{2},\ldots ,x_{n})={\frac {\alpha ^{|\kappa |}(|\kappa |)!}{j_{\kappa }}}J_{\kappa }^{(\alpha )}(x_{1},x_{2},\ldots ,x_{n}),}$

where

${\displaystyle j_{\kappa }=\prod _{(i,j)\in \kappa }(\kappa _{j}'-i+\alpha (\kappa _{i}-j+1))(\kappa _{j}'-i+1+\alpha (\kappa _{i}-j)).}$

For ${\displaystyle \alpha =2}$, ${\displaystyle C_{\kappa }^{(2)}(x_{1},x_{2},\ldots ,x_{n})}$, often denoted simply ${\displaystyle C_{\kappa }(x_{1},x_{2},\ldots ,x_{n})}$, is known as the zonal polynomial.
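The constant ${\displaystyle j_{\kappa }}$ is a simple product over the boxes of the diagram; a short Python sketch (helper names are ours) computes it via the conjugate partition. At ${\displaystyle \alpha =1}$ both linear factors reduce to the hook length, so ${\displaystyle j_{\kappa }=H_{\kappa }^{2}}$, which makes a convenient sanity check.

```python
def j_norm(kappa, alpha):
    """j_kappa = prod over boxes (i,j) of kappa of the two linear factors."""
    width = kappa[0] if kappa else 0
    kp = [sum(1 for part in kappa if part >= j) for j in range(1, width + 1)]
    val = 1
    for i in range(1, len(kappa) + 1):
        for j in range(1, kappa[i - 1] + 1):
            val *= ((kp[j - 1] - i + alpha * (kappa[i - 1] - j + 1))
                    * (kp[j - 1] - i + 1 + alpha * (kappa[i - 1] - j)))
    return val
```

For example, ${\displaystyle j_{(2,1)}}$ at ${\displaystyle \alpha =1}$ equals ${\displaystyle H_{(2,1)}^{2}=3^{2}=9}$.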

## P normalization

The P normalization is given by the identity ${\displaystyle J_{\lambda }=H'_{\lambda }P_{\lambda }}$, where ${\displaystyle H'_{\lambda }=\prod _{s\in \lambda }(\alpha a_{\lambda }(s)+l_{\lambda }(s)+1)}$, and ${\displaystyle a_{\lambda }}$ and ${\displaystyle l_{\lambda }}$ denote the arm and leg length, respectively. For ${\displaystyle \alpha =1}$ each factor ${\displaystyle \alpha a_{\lambda }(s)+l_{\lambda }(s)+1}$ is the hook length of s, so ${\displaystyle H'_{\lambda }}$ becomes the hook product and ${\displaystyle P_{\lambda }}$ is the usual Schur function.

As with Schur polynomials, ${\displaystyle P_{\lambda }}$ can be expressed as a sum over Young tableaux; however, each tableau must be given an extra weight depending on the parameter ${\displaystyle \alpha }$.

Thus, a formula [2] for the Jack function ${\displaystyle P_{\lambda }}$ is given by

${\displaystyle P_{\lambda }=\sum _{T}\psi _{T}(\alpha )\prod _{s\in \lambda }x_{T(s)}}$

where the sum is taken over all column-strict (semistandard) tableaux T of shape ${\displaystyle \lambda }$ with entries in ${\displaystyle \{1,\ldots ,n\}}$, and ${\displaystyle T(s)}$ denotes the entry in box s of T.

The weight ${\displaystyle \psi _{T}(\alpha )}$ can be defined in the following fashion: each tableau T of shape ${\displaystyle \lambda }$ corresponds to a chain of partitions ${\displaystyle \emptyset =\nu _{0}\subseteq \nu _{1}\subseteq \dots \subseteq \nu _{n}=\lambda }$, where ${\displaystyle \nu _{i}/\nu _{i-1}}$ is the skew shape formed by the boxes of T containing the entry i. Then ${\displaystyle \psi _{T}(\alpha )=\prod _{i=1}^{n}\psi _{\nu _{i}/\nu _{i-1}}(\alpha )}$ where

${\displaystyle \psi _{\lambda /\mu }(\alpha )=\prod _{s\in R_{\lambda /\mu }-C_{\lambda /\mu }}{\frac {(\alpha a_{\mu }(s)+l_{\mu }(s)+1)}{(\alpha a_{\mu }(s)+l_{\mu }(s)+\alpha )}}{\frac {(\alpha a_{\lambda }(s)+l_{\lambda }(s)+\alpha )}{(\alpha a_{\lambda }(s)+l_{\lambda }(s)+1)}}}$

and the product is taken over all boxes s in ${\displaystyle \lambda }$ that lie in a row containing a box of ${\displaystyle \lambda /\mu }$ but in a column containing no box of ${\displaystyle \lambda /\mu }$.
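The tableau formula for ${\displaystyle P_{\lambda }}$ can likewise be tested by brute force for small shapes. The sketch below (function names are ours; exact arithmetic via `fractions.Fraction`) enumerates column-strict fillings, reads off the chain of shapes, and multiplies the ${\displaystyle \psi }$ weights; at ${\displaystyle \alpha =1}$ every factor of ${\displaystyle \psi }$ equals 1 and the sum reduces to the Schur polynomial.

```python
from fractions import Fraction
from itertools import product

def arm(p, i, j):
    return p[i - 1] - j

def leg(p, i, j):
    # leg l_p(i,j) = p'_j - i = number of rows below i with a box in column j
    return sum(1 for r in range(i, len(p)) if p[r] >= j)

def psi_skew(lam, mu, alpha):
    """psi_{lam/mu}(alpha): product over boxes in a row of lam/mu but not a column of it."""
    skew = [(i, j) for i in range(1, len(lam) + 1)
            for j in range(1, lam[i - 1] + 1)
            if i > len(mu) or j > mu[i - 1]]
    rows = {i for i, _ in skew}
    cols = {j for _, j in skew}
    res = Fraction(1)
    for i in range(1, len(lam) + 1):
        for j in range(1, lam[i - 1] + 1):
            if i in rows and j not in cols:   # such a box automatically lies in mu
                am, lm = arm(mu, i, j), leg(mu, i, j)
                al, ll = arm(lam, i, j), leg(lam, i, j)
                res *= Fraction(alpha * am + lm + 1, alpha * am + lm + alpha)
                res *= Fraction(alpha * al + ll + alpha, alpha * al + ll + 1)
    return res

def jack_P(lam, n, xs, alpha):
    """P_lambda^(alpha)(x_1..x_n) as a weighted sum over column-strict tableaux."""
    boxes = [(i, j) for i in range(1, len(lam) + 1)
             for j in range(1, lam[i - 1] + 1)]
    total = Fraction(0)
    for filling in product(range(1, n + 1), repeat=len(boxes)):
        T = dict(zip(boxes, filling))
        # rows weakly increase, columns strictly increase
        if any(j > 1 and T[(i, j)] < T[(i, j - 1)] for (i, j) in boxes):
            continue
        if any(i > 1 and T[(i, j)] <= T[(i - 1, j)] for (i, j) in boxes):
            continue
        weight, prev = Fraction(1), ()
        for k in range(1, n + 1):
            # nu_k = shape occupied by entries <= k
            shape = tuple(c for c in
                          (sum(1 for j in range(1, lam[i - 1] + 1) if T[(i, j)] <= k)
                           for i in range(1, len(lam) + 1)) if c > 0)
            weight *= psi_skew(shape, prev, alpha)
            prev = shape
        term = weight
        for s in boxes:
            term *= xs[T[s] - 1]
        total += term
    return total
```

For instance, ${\displaystyle P_{(2)}^{(\alpha )}(x_{1},x_{2})=x_{1}^{2}+{\tfrac {2}{\alpha +1}}x_{1}x_{2}+x_{2}^{2}}$, which at ${\displaystyle \alpha =1}$ is ${\displaystyle s_{(2)}}$.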

## Connection with the Schur polynomial

When ${\displaystyle \alpha =1}$, the Jack function is a scalar multiple of the Schur polynomial:

${\displaystyle J_{\kappa }^{(1)}(x_{1},x_{2},\ldots ,x_{n})=H_{\kappa }s_{\kappa }(x_{1},x_{2},\ldots ,x_{n}),}$

where

${\displaystyle H_{\kappa }=\prod _{(i,j)\in \kappa }h_{\kappa }(i,j)=\prod _{(i,j)\in \kappa }(\kappa _{i}+\kappa _{j}'-i-j+1)}$

is the product of all hook lengths of ${\displaystyle \kappa }$.
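The hook product ${\displaystyle H_{\kappa }}$ is straightforward to compute from the diagram; a minimal Python sketch (helper name is ours):

```python
def hook_product(kappa):
    """H_kappa = product over boxes of h(i,j) = kappa_i + kappa'_j - i - j + 1."""
    width = kappa[0] if kappa else 0
    conj = [sum(1 for part in kappa if part >= j) for j in range(1, width + 1)]
    H = 1
    for i in range(1, len(kappa) + 1):
        for j in range(1, kappa[i - 1] + 1):
            H *= kappa[i - 1] + conj[j - 1] - i - j + 1
    return H
```

For example, the hook lengths of ${\displaystyle (2,1)}$ are 3, 1, 1, so ${\displaystyle H_{(2,1)}=3}$.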

## Properties

If the partition has more parts than the number of variables, then the Jack function is 0:

${\displaystyle J_{\kappa }^{(\alpha )}(x_{1},x_{2},\ldots ,x_{m})=0,{\mbox{ if }}\kappa _{m+1}>0.}$

## Matrix argument

In some texts, especially in random matrix theory, authors have found it more convenient to use a matrix argument in the Jack function. The connection is simple. If ${\displaystyle X}$ is a matrix with eigenvalues ${\displaystyle x_{1},x_{2},\ldots ,x_{m}}$, then

${\displaystyle J_{\kappa }^{(\alpha )}(X)=J_{\kappa }^{(\alpha )}(x_{1},x_{2},\ldots ,x_{m}).}$