"Gaussian integration" redirects here. For the integral of a Gaussian function, see Gaussian integral.
Figure: Comparison between 2-point Gaussian and trapezoidal quadrature. The blue line is the polynomial ${\displaystyle y(x)=7x^{3}-8x^{2}-3x+3}$, whose integral over [−1, 1] is 2/3. The trapezoidal rule returns the integral of the orange dashed line, equal to ${\displaystyle y(-1)+y(1)=-10}$. The 2-point Gaussian quadrature rule returns the integral of the black dashed curve, equal to ${\displaystyle y(-{\sqrt {\scriptstyle 1/3}})+y({\sqrt {\scriptstyle 1/3}})=2/3}$. This result is exact, since the green region has the same area as the red regions.

In numerical analysis, a quadrature rule is an approximation of the definite integral of a function, usually stated as a weighted sum of function values at specified points within the domain of integration. (See numerical integration for more on quadrature rules.) An n-point Gaussian quadrature rule, named after Carl Friedrich Gauss, is a quadrature rule constructed to yield an exact result for polynomials of degree 2n − 1 or less by a suitable choice of the points xi and weights wi for i = 1, ..., n. The domain of integration for such a rule is conventionally taken as [−1, 1], so the rule is stated as

${\displaystyle \int _{-1}^{1}f(x)\,dx\approx \sum _{i=1}^{n}w_{i}f(x_{i}).}$

Gaussian quadrature as above will only produce good results if the function f(x) is well approximated by a polynomial function within the range [−1, 1]. The method is not, for example, suitable for functions with singularities. However, if the integrated function can be written as ${\displaystyle f(x)=\omega (x)g(x)}$, where g(x) is approximately polynomial and ω(x) is known, then alternative weights ${\displaystyle w_{i}'}$ and points ${\displaystyle x_{i}'}$ that depend on the weighting function ω(x) may give better results, where

${\displaystyle \int _{-1}^{1}f(x)\,dx=\int _{-1}^{1}\omega (x)g(x)\,dx\approx \sum _{i=1}^{n}w_{i}'g(x_{i}').}$

Common weighting functions include ${\displaystyle \omega (x)=1/{\sqrt {1-x^{2}}}}$ (Chebyshev–Gauss) and ${\displaystyle \omega (x)=e^{-x^{2}}}$ (Gauss–Hermite).
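As an illustrative check (not part of the original text), NumPy exposes Gauss–Hermite nodes and weights via `numpy.polynomial.hermite.hermgauss`; the sketch below integrates ${\displaystyle e^{-x^{2}}x^{2}}$ over the real line, whose exact value is ${\displaystyle {\sqrt {\pi }}/2}$:

```python
import numpy as np

# Gauss-Hermite quadrature handles the weight w(x) = exp(-x^2) on
# (-inf, inf); a 5-point rule is exact for polynomials g up to degree 9.
nodes, weights = np.polynomial.hermite.hermgauss(5)

# Integrate exp(-x^2) * x^2 over the real line; the exact value is sqrt(pi)/2.
approx = np.sum(weights * nodes**2)
exact = np.sqrt(np.pi) / 2
print(approx, exact)  # agree to machine precision
```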

It can be shown (see Press et al. or Stoer and Bulirsch) that the evaluation points xi are just the roots of a polynomial belonging to a class of orthogonal polynomials.

Figure: Graphs of Legendre polynomials (up to n = 5).

For the simplest integration problem stated above, i.e. with ${\displaystyle \omega (x)=1}$, the associated polynomials are Legendre polynomials, Pn(x), and the method is usually known as Gauss–Legendre quadrature. With the n-th polynomial normalized to give Pn(1) = 1, the i-th Gauss node, xi, is the i-th root of Pn; its weight is given by (Abramowitz & Stegun 1972, p. 887)

${\displaystyle w_{i}={\frac {2}{\left(1-x_{i}^{2}\right)[P'_{n}(x_{i})]^{2}}}.}$

Some low-order rules for solving the integration problem are listed below (over the interval [−1, 1]; see the section below for other intervals).

| Number of points, n | Points, xi | Weights, wi |
| --- | --- | --- |
| 1 | 0 | 2 |
| 2 | ${\displaystyle \pm {\sqrt {\tfrac {1}{3}}}}$ | 1 |
| 3 | 0 | ${\displaystyle {\tfrac {8}{9}}}$ |
| | ${\displaystyle \pm {\sqrt {\tfrac {3}{5}}}}$ | ${\displaystyle {\tfrac {5}{9}}}$ |
| 4 | ${\displaystyle \pm {\sqrt {{\tfrac {3}{7}}-{\tfrac {2}{7}}{\sqrt {\tfrac {6}{5}}}}}}$ | ${\displaystyle {\tfrac {18+{\sqrt {30}}}{36}}}$ |
| | ${\displaystyle \pm {\sqrt {{\tfrac {3}{7}}+{\tfrac {2}{7}}{\sqrt {\tfrac {6}{5}}}}}}$ | ${\displaystyle {\tfrac {18-{\sqrt {30}}}{36}}}$ |
| 5 | 0 | ${\displaystyle {\tfrac {128}{225}}}$ |
| | ${\displaystyle \pm {\tfrac {1}{3}}{\sqrt {5-2{\sqrt {\tfrac {10}{7}}}}}}$ | ${\displaystyle {\tfrac {322+13{\sqrt {70}}}{900}}}$ |
| | ${\displaystyle \pm {\tfrac {1}{3}}{\sqrt {5+2{\sqrt {\tfrac {10}{7}}}}}}$ | ${\displaystyle {\tfrac {322-13{\sqrt {70}}}{900}}}$ |
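As a hedged check of the tabulated values, NumPy's `numpy.polynomial.legendre.leggauss` returns these nodes and weights; the snippet below also reproduces the 2/3 result for the cubic from the figure caption:

```python
import numpy as np

# 2-point Gauss-Legendre rule; should match the table: nodes +-sqrt(1/3), weights 1.
nodes, weights = np.polynomial.legendre.leggauss(2)
assert np.allclose(np.sort(nodes), [-np.sqrt(1/3), np.sqrt(1/3)])
assert np.allclose(weights, [1.0, 1.0])

# The cubic from the figure caption integrates to 2/3 over [-1, 1],
# and a 2-point rule (exact through degree 3) reproduces it.
f = lambda x: 7*x**3 - 8*x**2 - 3*x + 3
val = np.sum(weights * f(nodes))
print(val)  # 0.666...
```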

## Change of interval

An integral over [a, b] must be changed into an integral over [−1, 1] before applying the Gaussian quadrature rule. This change of interval can be done in the following way:

${\displaystyle \int _{a}^{b}f(x)\,dx={\frac {b-a}{2}}\int _{-1}^{1}f\left({\frac {b-a}{2}}x+{\frac {a+b}{2}}\right)\,dx.}$

Applying the Gaussian quadrature rule then results in the following approximation:

${\displaystyle \int _{a}^{b}f(x)\,dx\approx {\frac {b-a}{2}}\sum _{i=1}^{n}w_{i}f\left({\frac {b-a}{2}}x_{i}+{\frac {a+b}{2}}\right).}$
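A minimal sketch of this mapping, assuming NumPy's `leggauss` for the standard nodes (the integrand and interval below are illustrative):

```python
import numpy as np

# Map the standard nodes on [-1, 1] to [a, b] and scale the weights by (b - a)/2.
def gauss_on_interval(f, a, b, n):
    x, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (b - a) * x + 0.5 * (a + b)    # mapped nodes
    return 0.5 * (b - a) * np.sum(w * f(t))  # scaled weighted sum

# Example: the integral of 1/x over [1, 3] is ln 3.
approx = gauss_on_interval(lambda t: 1.0 / t, 1.0, 3.0, 7)
print(approx, np.log(3.0))
```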

## Other forms

The integration problem can be expressed in a slightly more general way by introducing a positive weight function ω into the integrand, and allowing an interval other than [−1, 1]. That is, the problem is to calculate

${\displaystyle \int _{a}^{b}\omega (x)\,f(x)\,dx}$

for some choices of a, b, and ω. For a = −1, b = 1, and ω(x) = 1, the problem is the same as that considered above. Other choices lead to other integration rules. Some of these are tabulated below. Equation numbers are given for Abramowitz and Stegun (A & S).

| Interval | ω(x) | Orthogonal polynomials | A & S | For more information, see ... |
| --- | --- | --- | --- | --- |
| [−1, 1] | 1 | Legendre polynomials | 25.4.29 | Gauss–Legendre quadrature (above) |
| (−1, 1) | ${\displaystyle (1-x)^{\alpha }(1+x)^{\beta },\quad \alpha ,\beta >-1}$ | Jacobi polynomials | 25.4.33 (β = 0) | Gauss–Jacobi quadrature |
| (−1, 1) | ${\displaystyle {\frac {1}{\sqrt {1-x^{2}}}}}$ | Chebyshev polynomials (first kind) | 25.4.38 | Chebyshev–Gauss quadrature |
| [−1, 1] | ${\displaystyle {\sqrt {1-x^{2}}}}$ | Chebyshev polynomials (second kind) | 25.4.40 | Chebyshev–Gauss quadrature |
| [0, ∞) | ${\displaystyle e^{-x}}$ | Laguerre polynomials | 25.4.45 | Gauss–Laguerre quadrature |
| [0, ∞) | ${\displaystyle x^{\alpha }e^{-x},\quad \alpha >-1}$ | Generalized Laguerre polynomials | | Gauss–Laguerre quadrature |
| (−∞, ∞) | ${\displaystyle e^{-x^{2}}}$ | Hermite polynomials | 25.4.46 | Gauss–Hermite quadrature |

### Fundamental theorem

Let pn be a nontrivial polynomial of degree n such that

${\displaystyle \int _{a}^{b}\omega (x)\,x^{k}p_{n}(x)\,dx=0,\quad {\text{for all }}k=0,1,\ldots ,n-1.}$

If we pick the n nodes xi to be the zeros of pn, then there exist n weights wi that make the Gaussian quadrature approximation exact for all polynomials h(x) of degree 2n − 1 or less. Furthermore, all of these nodes xi lie in the open interval (a, b) (Stoer & Bulirsch 2002, pp. 172–175).

The polynomial pn is said to be an orthogonal polynomial of degree n associated with the weight function ω(x). It is unique up to a constant normalization factor. The idea underlying the proof is that, because of its sufficiently low degree, h(x) can be divided by ${\displaystyle p_{n}(x)}$ to produce a quotient q(x) of degree strictly lower than n and a remainder r(x) of degree likewise lower than n, so that both are orthogonal to ${\displaystyle p_{n}(x)}$, by the defining property of ${\displaystyle p_{n}(x)}$. Thus

${\displaystyle \int _{a}^{b}\omega (x)\,h(x)\,dx=\int _{a}^{b}\omega (x)\,r(x)\,dx.}$

Because of the choice of nodes xi, the corresponding relation

${\displaystyle \sum _{i=1}^{n}w_{i}h(x_{i})=\sum _{i=1}^{n}w_{i}r(x_{i})}$

holds also. The exactness of the computed integral for ${\displaystyle h(x)}$ then follows from the corresponding exactness for polynomials of degree n − 1 or less (as is ${\displaystyle r(x)}$).
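The theorem can be exercised numerically. The sketch below (illustrative, using NumPy's Gauss–Legendre rule for the case ω(x) = 1) confirms exactness for a random polynomial of degree 2n − 1 and shows that degree 2n is no longer integrated exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
x, w = np.polynomial.legendre.leggauss(n)

# A random polynomial h of degree 2n - 1: the n-point rule integrates it exactly.
c = rng.standard_normal(2 * n)        # coefficients of a degree 2n-1 polynomial
h = np.polynomial.Polynomial(c)
exact = h.integ()(1.0) - h.integ()(-1.0)
assert np.isclose(np.sum(w * h(x)), exact)

# Degree 2n is too high: the integral of x^(2n) is 2/(2n+1), but the rule misses it.
quad = np.sum(w * x**(2 * n))
print(quad, 2 / (2 * n + 1))  # close, but not equal
```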

#### General formula for the weights

The weights can be expressed as

${\displaystyle w_{i}={\frac {a_{n}}{a_{n-1}}}{\frac {\int _{a}^{b}\omega (x)p_{n-1}(x)^{2}dx}{p'_{n}(x_{i})p_{n-1}(x_{i})}}}$ (1)

where ${\displaystyle a_{k}}$ is the coefficient of ${\displaystyle x^{k}}$ in ${\displaystyle p_{k}(x)}$. To prove this, note that using Lagrange interpolation one can express r(x) in terms of ${\displaystyle r(x_{i})}$ as

${\displaystyle r(x)=\sum _{i=1}^{n}r(x_{i})\prod _{\begin{smallmatrix}1\leq j\leq n\\j\neq i\end{smallmatrix}}{\frac {x-x_{j}}{x_{i}-x_{j}}}}$

because r(x) has degree less than n and is thus fixed by the values it attains at n different points. Multiplying both sides by ω(x) and integrating from a to b yields

${\displaystyle \int _{a}^{b}\omega (x)r(x)dx=\sum _{i=1}^{n}r(x_{i})\int _{a}^{b}\omega (x)\prod _{\begin{smallmatrix}1\leq j\leq n\\j\neq i\end{smallmatrix}}{\frac {x-x_{j}}{x_{i}-x_{j}}}dx}$

The weights wi are thus given by

${\displaystyle w_{i}=\int _{a}^{b}\omega (x)\prod _{\begin{smallmatrix}1\leq j\leq n\\j\neq i\end{smallmatrix}}{\frac {x-x_{j}}{x_{i}-x_{j}}}dx}$
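This characterization of the weights as exact integrals of the Lagrange basis polynomials can be verified directly; the sketch below (illustrative, for ω(x) = 1 on [−1, 1]) integrates each basis polynomial exactly via NumPy's polynomial antidifferentiation and compares with `leggauss`:

```python
import numpy as np
from numpy.polynomial import Polynomial, legendre

n = 3
x, w_ref = legendre.leggauss(n)

# Build each Lagrange basis polynomial ell_i from the nodes and
# integrate it exactly over [-1, 1]; this reproduces the weight w_i.
w = []
for i in range(n):
    others = np.delete(x, i)
    ell = Polynomial.fromroots(others) / np.prod(x[i] - others)
    w.append(ell.integ()(1.0) - ell.integ()(-1.0))

print(w)
```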

This integral expression for ${\displaystyle w_{i}}$ can be rewritten in terms of the orthogonal polynomials ${\displaystyle p_{n}(x)}$ and ${\displaystyle p_{n-1}(x)}$ as follows.

We can write

${\displaystyle \prod _{\begin{smallmatrix}1\leq j\leq n\\j\neq i\end{smallmatrix}}\left(x-x_{j}\right)={\frac {\prod _{1\leq j\leq n}\left(x-x_{j}\right)}{x-x_{i}}}={\frac {p_{n}(x)}{a_{n}\left(x-x_{i}\right)}}}$

where ${\displaystyle a_{n}}$ is the coefficient of ${\displaystyle x^{n}}$ in ${\displaystyle p_{n}(x)}$. Taking the limit as x approaches ${\displaystyle x_{i}}$ and applying L'Hôpital's rule yields

${\displaystyle \prod _{\begin{smallmatrix}1\leq j\leq n\\j\neq i\end{smallmatrix}}\left(x_{i}-x_{j}\right)={\frac {p'_{n}(x_{i})}{a_{n}}}}$

We can thus write the integral expression for the weights as

${\displaystyle w_{i}={\frac {1}{p'_{n}(x_{i})}}\int _{a}^{b}\omega (x){\frac {p_{n}(x)}{x-x_{i}}}dx}$ (2)

In the integrand, writing

${\displaystyle {\frac {1}{x-x_{i}}}={\frac {1-\left({\frac {x}{x_{i}}}\right)^{k}}{x-x_{i}}}+\left({\frac {x}{x_{i}}}\right)^{k}{\frac {1}{x-x_{i}}}}$

yields

${\displaystyle \int _{a}^{b}\omega (x){\frac {x^{k}p_{n}(x)}{x-x_{i}}}dx=x_{i}^{k}\int _{a}^{b}\omega (x){\frac {p_{n}(x)}{x-x_{i}}}dx}$

provided ${\displaystyle k\leq n}$, because

${\displaystyle {\frac {1-\left({\frac {x}{x_{i}}}\right)^{k}}{x-x_{i}}}}$

is a polynomial of degree k − 1, which is then orthogonal to ${\displaystyle p_{n}(x)}$. So, if q(x) is a polynomial of degree at most n, we have

${\displaystyle \int _{a}^{b}\omega (x){\frac {p_{n}(x)}{x-x_{i}}}dx={\frac {1}{q(x_{i})}}\int _{a}^{b}\omega (x){\frac {q(x)p_{n}(x)}{x-x_{i}}}dx}$

We can evaluate the integral on the right hand side for ${\displaystyle q(x)=p_{n-1}(x)}$ as follows. Because ${\displaystyle {\frac {p_{n}(x)}{x-x_{i}}}}$ is a polynomial of degree n-1, we have

${\displaystyle {\frac {p_{n}(x)}{x-x_{i}}}=a_{n}x^{n-1}+s(x)}$

where s(x) is a polynomial of degree ${\displaystyle n-2}$. Since s(x) is orthogonal to ${\displaystyle p_{n-1}(x)}$ we have

${\displaystyle \int _{a}^{b}\omega (x){\frac {p_{n}(x)}{x-x_{i}}}dx={\frac {a_{n}}{p_{n-1}(x_{i})}}\int _{a}^{b}\omega (x)p_{n-1}(x)x^{n-1}dx}$

We can then write

${\displaystyle x^{n-1}=\left(x^{n-1}-{\frac {p_{n-1}(x)}{a_{n-1}}}\right)+{\frac {p_{n-1}(x)}{a_{n-1}}}}$

The term in the brackets is a polynomial of degree ${\displaystyle n-2}$, which is therefore orthogonal to ${\displaystyle p_{n-1}(x)}$. The integral can thus be written as

${\displaystyle \int _{a}^{b}\omega (x){\frac {p_{n}(x)}{x-x_{i}}}dx={\frac {a_{n}}{a_{n-1}p_{n-1}(x_{i})}}\int _{a}^{b}\omega (x)p_{n-1}(x)^{2}dx}$

According to Eq. (2), the weights are obtained by dividing this by ${\displaystyle p'_{n}(x_{i})}$ and that yields the expression in Eq. (1).
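For the Legendre case one can test Eq. (1) numerically: the leading coefficient of ${\displaystyle P_{k}}$ is ${\displaystyle {\tbinom {2k}{k}}/2^{k}}$ and ${\displaystyle \int _{-1}^{1}P_{n-1}(x)^{2}dx=2/(2n-1)}$, both standard facts. A hedged check against NumPy's built-in rule:

```python
import numpy as np
from math import comb
from numpy.polynomial import legendre

n = 4
x, w_ref = legendre.leggauss(n)

# Leading coefficient of the Legendre polynomial P_k is C(2k, k) / 2^k,
# and the norm integral of P_{n-1}^2 over [-1, 1] is 2 / (2n - 1).
a = lambda k: comb(2 * k, k) / 2.0**k
norm = 2.0 / (2 * n - 1)

# Evaluate P_{n-1} and P_n' at the nodes via Legendre-basis coefficients.
Pn1 = legendre.legval(x, [0] * (n - 1) + [1])
dPn = legendre.legval(x, legendre.legder([0] * n + [1]))

# Eq. (1): w_i = (a_n / a_{n-1}) * norm / (P_n'(x_i) * P_{n-1}(x_i)).
w = (a(n) / a(n - 1)) * norm / (dPn * Pn1)
print(w)
```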

${\displaystyle w_{i}}$ can also be expressed in terms of the orthogonal polynomials ${\displaystyle p_{n}(x)}$ and now ${\displaystyle p_{n+1}(x)}$. In the three-term recurrence relation ${\displaystyle p_{n+1}(x_{i})=(a)p_{n}(x_{i})+(b)p_{n-1}(x_{i})}$, the term with ${\displaystyle p_{n}(x_{i})}$ vanishes because ${\displaystyle x_{i}}$ is a root of ${\displaystyle p_{n}}$, so ${\displaystyle p_{n-1}(x_{i})}$ in Eq. (1) can be replaced by ${\displaystyle p_{n+1}(x_{i})/b}$.

#### Proof that the weights are positive

Consider the following polynomial of degree 2n − 2:

${\displaystyle f(x)=\prod _{\begin{smallmatrix}1\leq j\leq n\\j\neq i\end{smallmatrix}}(x-x_{j})^{2}}$

where, as above, the xj are the roots of the polynomial ${\displaystyle p_{n}(x)}$. Since the degree of f(x) is less than 2n − 1, the Gaussian quadrature formula involving the weights and nodes obtained from ${\displaystyle p_{n}(x)}$ applies. Since ${\displaystyle f(x_{j})=0}$ for j ≠ i, we have

${\displaystyle \int _{a}^{b}\omega (x)f(x)dx=\sum _{j=1}^{n}w_{j}f(x_{j})=w_{i}f(x_{i}).}$

Since both ${\displaystyle \omega (x)}$ and f(x) are non-negative functions, it follows that ${\displaystyle w_{i}>0}$.

### Computation of Gaussian quadrature rules

For computing the nodes xi and weights wi of Gaussian quadrature rules, the fundamental tool is the three-term recurrence relation satisfied by the orthogonal polynomials associated with the corresponding weight function. For n points, these nodes and weights can be computed in O(n²) operations by an algorithm derived by Gautschi (1968).

#### Gautschi's theorem

Gautschi's theorem (Gautschi, 1968) states that orthogonal polynomials ${\displaystyle p_{r}}$ with ${\displaystyle (p_{r},p_{s})=0}$ for ${\displaystyle r\neq s}$, degree${\displaystyle (p_{r})=r}$, and leading coefficient one (i.e. monic orthogonal polynomials) with respect to the scalar product

${\displaystyle (f(x),g(x))=\int _{a}^{b}\omega (x)f(x)g(x)dx}$

satisfy the recurrence relation

${\displaystyle p_{r+1}(x)=(x-a_{r,r})p_{r}(x)-a_{r,r-1}p_{r-1}(x)-\ldots -a_{r,0}p_{0}(x)}$

for ${\displaystyle r=0,1,\ldots ,n-1}$, where n is the maximal degree (which can be taken to be infinity), and where ${\displaystyle a_{r,s}=(xp_{r},p_{s})/(p_{s},p_{s})}$. First of all, the polynomials defined by the recurrence relation starting with ${\displaystyle p_{0}(x)=1}$ have leading coefficient one and the correct degree. Given the starting point ${\displaystyle p_{0}}$, the orthogonality of the ${\displaystyle p_{r}}$ can be shown by induction. For ${\displaystyle r=s=0}$ one has

${\displaystyle (p_{1},p_{0})=((x-a_{0,0})p_{0},p_{0})=(xp_{0},p_{0})-a_{0,0}(p_{0},p_{0})=(xp_{0},p_{0})-(xp_{0},p_{0})=0.}$

Now if ${\displaystyle p_{0},p_{1},\ldots ,p_{r}}$ are orthogonal, then also ${\displaystyle p_{r+1}}$, because in

${\displaystyle (p_{r+1},p_{s})=(xp_{r},p_{s})-a_{r,r}(p_{r},p_{s})-a_{r,r-1}(p_{r-1},p_{s})-\ldots -a_{r,0}(p_{0},p_{s})}$

all scalar products vanish except for the first one and the one where ${\displaystyle p_{s}}$ meets the same orthogonal polynomial. Therefore,

${\displaystyle (p_{r+1},p_{s})=(xp_{r},p_{s})-a_{r,s}(p_{s},p_{s})=(xp_{r},p_{s})-(xp_{r},p_{s})=0.}$

However, if the scalar product satisfies ${\displaystyle (xf,g)=(f,xg)}$ (which is the case for Gaussian quadrature), the recurrence relation reduces to a three-term recurrence relation: for ${\displaystyle s\leq r-2}$, ${\displaystyle xp_{s}}$ is a polynomial of degree at most r − 1. On the other hand, ${\displaystyle p_{r}}$ is orthogonal to every polynomial of degree at most r − 1. Therefore, one has ${\displaystyle (xp_{r},p_{s})=(p_{r},xp_{s})=0}$ and ${\displaystyle a_{r,s}=0}$ for s < r − 1. The recurrence relation then simplifies to

${\displaystyle p_{r+1}(x)=(x-a_{r,r})p_{r}(x)-a_{r,r-1}p_{r-1}(x)}$

or

${\displaystyle p_{r+1}(x)=(x-a_{r})p_{r}(x)-b_{r}p_{r-1}(x)}$

(with the convention ${\displaystyle p_{-1}(x)\equiv 0}$) where

${\displaystyle a_{r}:={\frac {(xp_{r},p_{r})}{(p_{r},p_{r})}},\qquad b_{r}:={\frac {(xp_{r},p_{r-1})}{(p_{r-1},p_{r-1})}}={\frac {(p_{r},p_{r})}{(p_{r-1},p_{r-1})}}}$

(the last because of ${\displaystyle (xp_{r},p_{r-1})=(p_{r},xp_{r-1})=(p_{r},p_{r})}$, since ${\displaystyle xp_{r-1}}$ differs from ${\displaystyle p_{r}}$ by a degree less than r).
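The recurrence and the formulas for ${\displaystyle a_{r}}$ and ${\displaystyle b_{r}}$ can be exercised directly. The sketch below (illustrative, for the Legendre weight ω(x) = 1 on [−1, 1], where exact polynomial integration is available) builds the monic polynomials and recovers the known coefficients ${\displaystyle b_{r}=r^{2}/(4r^{2}-1)}$:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Scalar product (f, g) = integral over [-1, 1] of f*g (weight w = 1),
# computed exactly by polynomial antidifferentiation.
def ip(f, g):
    h = (f * g).integ()
    return h(1.0) - h(-1.0)

x = Polynomial([0.0, 1.0])
p_prev, p = Polynomial([0.0]), Polynomial([1.0])   # p_{-1} = 0, p_0 = 1
bs = []
for r in range(4):
    a_r = ip(x * p, p) / ip(p, p)
    b_r = ip(p, p) / ip(p_prev, p_prev) if r > 0 else 0.0
    p_prev, p = p, (x - a_r) * p - b_r * p_prev    # three-term recurrence
    bs.append(b_r)

# For the Legendre weight, a_r = 0 and b_r = r^2 / (4 r^2 - 1).
print(bs)
```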

#### The Golub-Welsch algorithm

The three-term recurrence relation can be written in the matrix form ${\displaystyle J{\tilde {P}}=x{\tilde {P}}-p_{n}(x)\times \mathbf {e} _{n}}$ where ${\displaystyle {\tilde {P}}=[p_{0}(x),p_{1}(x),...,p_{n-1}(x)]^{T}}$, ${\displaystyle \mathbf {e} _{n}}$ is the ${\displaystyle n}$th standard basis vector, i.e. ${\displaystyle \mathbf {e} _{n}=[0,...,0,1]^{T}}$, and J is the so-called Jacobi matrix:

${\displaystyle \mathbf {J} ={\begin{pmatrix}a_{0}&1&0&\ldots &\ldots &\ldots \\b_{1}&a_{1}&1&0&\ldots &\ldots \\0&b_{2}&a_{2}&1&0&\ldots \\0&\ldots &\ldots &\ldots &\ldots &0\\\ldots &\ldots &0&b_{n-2}&a_{n-2}&1\\\ldots &\ldots &\ldots &0&b_{n-1}&a_{n-1}\end{pmatrix}}}$

The zeros ${\displaystyle x_{j}}$ of ${\displaystyle p_{n}(x)}$, which are used as nodes for the Gaussian quadrature, can be found by computing the eigenvalues of this tridiagonal matrix. This procedure is known as the Golub–Welsch algorithm.

For computing the weights and nodes, it is preferable to consider the symmetric tridiagonal matrix ${\displaystyle {\mathcal {J}}}$ with elements

{\displaystyle {\begin{aligned}{\mathcal {J}}_{i,i}&=J_{i,i}=a_{i-1}&&i=1,\ldots ,n\\{\mathcal {J}}_{i-1,i}={\mathcal {J}}_{i,i-1}&={\sqrt {J_{i,i-1}J_{i-1,i}}}={\sqrt {b_{i-1}}}&&i=2,\ldots ,n.\end{aligned}}}

J and ${\displaystyle {\mathcal {J}}}$ are similar matrices and therefore have the same eigenvalues (the nodes). The weights can be computed from the corresponding eigenvectors: if ${\displaystyle \phi ^{(j)}}$ is a normalized eigenvector (i.e., an eigenvector with Euclidean norm equal to one) associated with the eigenvalue xj, the corresponding weight can be computed from the first component of this eigenvector, namely:

${\displaystyle w_{j}=\mu _{0}\left(\phi _{1}^{(j)}\right)^{2}}$

where ${\displaystyle \mu _{0}}$ is the integral of the weight function

${\displaystyle \mu _{0}=\int _{a}^{b}\omega (x)dx.}$

See, for instance, (Gil, Segura & Temme 2007) for further details.
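A compact sketch of the Golub–Welsch algorithm for the Legendre weight, where the recurrence coefficients ${\displaystyle a_{r}=0}$ and ${\displaystyle b_{r}=r^{2}/(4r^{2}-1)}$ are known in closed form (the function name is illustrative):

```python
import numpy as np

def golub_welsch_legendre(n):
    # Symmetric Jacobi matrix for the Legendre weight: zero diagonal,
    # off-diagonal entries sqrt(b_r) with b_r = r^2 / (4 r^2 - 1).
    r = np.arange(1, n)
    beta = np.sqrt(r**2 / (4.0 * r**2 - 1.0))
    J = np.diag(beta, 1) + np.diag(beta, -1)
    nodes, vecs = np.linalg.eigh(J)       # eigenvalues are the nodes
    mu0 = 2.0                             # integral of w(x) = 1 over [-1, 1]
    weights = mu0 * vecs[0, :]**2         # squared first components
    return nodes, weights

nodes, weights = golub_welsch_legendre(5)
x_ref, w_ref = np.polynomial.legendre.leggauss(5)
print(nodes, weights)
```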

### Error estimates

The error of a Gaussian quadrature rule can be stated as follows (Stoer & Bulirsch 2002, Thm 3.6.24). For an integrand which has 2n continuous derivatives,

${\displaystyle \int _{a}^{b}\omega (x)\,f(x)\,dx-\sum _{i=1}^{n}w_{i}\,f(x_{i})={\frac {f^{(2n)}(\xi )}{(2n)!}}\,(p_{n},p_{n})}$

for some ξ in (a, b), where pn is the monic (i.e. the leading coefficient is 1) orthogonal polynomial of degree n and where

${\displaystyle (f,g)=\int _{a}^{b}\omega (x)f(x)g(x)\,dx.}$

In the important special case of ω(x) = 1, we have the error estimate (Kahaner, Moler & Nash 1989, §5.2)

${\displaystyle {\frac {(b-a)^{2n+1}(n!)^{4}}{(2n+1)[(2n)!]^{3}}}f^{(2n)}(\xi ),\qquad a<\xi <b.}$

Stoer and Bulirsch remark that this error estimate is inconvenient in practice, since it may be difficult to estimate the order 2n derivative, and furthermore the actual error may be much less than a bound established by the derivative. Another approach is to use two Gaussian quadrature rules of different orders, and to estimate the error as the difference between the two results. For this purpose, Gauss–Kronrod quadrature rules can be useful.
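The two-rule error estimate can be sketched as follows (the integrand is illustrative; its antiderivative is known in closed form, so the estimate can be compared with the actual error):

```python
import numpy as np

# Compare an n-point rule with a higher-order rule and take the
# difference as a proxy for the error of the lower-order result.
f = lambda x: np.exp(x) * np.cos(x)
x5, w5 = np.polynomial.legendre.leggauss(5)
x10, w10 = np.polynomial.legendre.leggauss(10)

low, high = np.sum(w5 * f(x5)), np.sum(w10 * f(x10))
estimate = abs(high - low)

# Exact antiderivative of exp(x) cos(x): exp(x) (sin x + cos x) / 2.
F = lambda x: np.exp(x) * (np.sin(x) + np.cos(x)) / 2
actual = abs(low - (F(1.0) - F(-1.0)))
print(estimate, actual)  # same order of magnitude
```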

### Gauss–Kronrod rules

If the interval [a, b] is subdivided, the Gauss evaluation points of the new subintervals never coincide with the previous evaluation points (except at the midpoint for odd numbers of evaluation points), and thus the integrand must be evaluated at every point. Gauss–Kronrod rules are extensions of Gauss quadrature rules generated by adding n + 1 points to an n-point rule in such a way that the resulting rule is of order 2n + 1. This allows for computing higher-order estimates while reusing the function values of a lower-order estimate. The difference between a Gauss quadrature rule and its Kronrod extension is often used as an estimate of the approximation error.

### Gauss–Lobatto rules

Gauss–Lobatto quadrature, also known as Lobatto quadrature (Abramowitz & Stegun 1972, p. 888), is named after the Dutch mathematician Rehuel Lobatto. It is similar to Gaussian quadrature with the following differences:

1. The integration points include the end points of the integration interval.
2. It is accurate for polynomials up to degree 2n − 3, where n is the number of integration points (Quarteroni, Sacco & Saleri 2000).

Lobatto quadrature of a function f(x) on the interval [−1, 1]:

${\displaystyle \int _{-1}^{1}{f(x)\,dx}={\frac {2}{n(n-1)}}[f(1)+f(-1)]+\sum _{i=2}^{n-1}{w_{i}f(x_{i})}+R_{n}.}$

Abscissas: xi is the ${\displaystyle (i-1)}$st zero of ${\displaystyle P'_{n-1}(x)}$.

Weights:

${\displaystyle w_{i}={\frac {2}{n(n-1)[P_{n-1}(x_{i})]^{2}}},\qquad x_{i}\neq \pm 1.}$

Remainder:

${\displaystyle R_{n}={\frac {-n(n-1)^{3}2^{2n-1}[(n-2)!]^{4}}{(2n-1)[(2n-2)!]^{3}}}f^{(2n-2)}(\xi ),\qquad -1<\xi <1.}$

Some of the weights are:

| Number of points, n | Points, xi | Weights, wi |
| --- | --- | --- |
| 3 | ${\displaystyle 0}$ | ${\displaystyle {\frac {4}{3}}}$ |
| | ${\displaystyle \pm 1}$ | ${\displaystyle {\frac {1}{3}}}$ |
| 4 | ${\displaystyle \pm {\sqrt {\frac {1}{5}}}}$ | ${\displaystyle {\frac {5}{6}}}$ |
| | ${\displaystyle \pm 1}$ | ${\displaystyle {\frac {1}{6}}}$ |
| 5 | ${\displaystyle 0}$ | ${\displaystyle {\frac {32}{45}}}$ |
| | ${\displaystyle \pm {\sqrt {\frac {3}{7}}}}$ | ${\displaystyle {\frac {49}{90}}}$ |
| | ${\displaystyle \pm 1}$ | ${\displaystyle {\frac {1}{10}}}$ |
| 6 | ${\displaystyle \pm {\sqrt {{\frac {1}{3}}-{\frac {2{\sqrt {7}}}{21}}}}}$ | ${\displaystyle {\frac {14+{\sqrt {7}}}{30}}}$ |
| | ${\displaystyle \pm {\sqrt {{\frac {1}{3}}+{\frac {2{\sqrt {7}}}{21}}}}}$ | ${\displaystyle {\frac {14-{\sqrt {7}}}{30}}}$ |
| | ${\displaystyle \pm 1}$ | ${\displaystyle {\frac {1}{15}}}$ |
| 7 | ${\displaystyle 0}$ | ${\displaystyle {\frac {256}{525}}}$ |
| | ${\displaystyle \pm {\sqrt {{\frac {5}{11}}-{\frac {2}{11}}{\sqrt {\frac {5}{3}}}}}}$ | ${\displaystyle {\frac {124+7{\sqrt {15}}}{350}}}$ |
| | ${\displaystyle \pm {\sqrt {{\frac {5}{11}}+{\frac {2}{11}}{\sqrt {\frac {5}{3}}}}}}$ | ${\displaystyle {\frac {124-7{\sqrt {15}}}{350}}}$ |
| | ${\displaystyle \pm 1}$ | ${\displaystyle {\frac {1}{21}}}$ |
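The Lobatto construction above can be checked against the table; the sketch below (illustrative, using NumPy's Legendre helpers) builds the n = 5 rule from the roots of ${\displaystyle P'_{n-1}}$ and the stated weight formulas:

```python
import numpy as np
from numpy.polynomial import legendre

# Lobatto nodes for n points: the endpoints plus the roots of P'_{n-1};
# interior weights follow 2 / (n (n-1) P_{n-1}(x_i)^2), endpoints get 2 / (n (n-1)).
n = 5
dP = legendre.legder([0] * (n - 1) + [1])      # coefficients of P'_{n-1}
interior = legendre.legroots(dP)               # sorted interior nodes
nodes = np.concatenate(([-1.0], interior, [1.0]))
w_end = 2.0 / (n * (n - 1))
w_int = 2.0 / (n * (n - 1) * legendre.legval(interior, [0] * (n - 1) + [1])**2)
weights = np.concatenate(([w_end], w_int, [w_end]))

# Should match the tabulated n = 5 row: 1/10 at +-1, 49/90 at +-sqrt(3/7), 32/45 at 0.
print(nodes, weights)
```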