
Ingleton's inequality


In mathematics, Ingleton's inequality is an inequality that is satisfied by the rank function of any representable matroid. In this sense it is a necessary condition for representability of a matroid over a finite field. Let M be a matroid and let ρ be its rank function. Ingleton's inequality states that for any subsets X1, X2, X3 and X4 of the support of M, the inequality

ρ(X1) + ρ(X2) + ρ(X1∪X2∪X3) + ρ(X1∪X2∪X4) + ρ(X3∪X4) ≤ ρ(X1∪X2) + ρ(X1∪X3) + ρ(X1∪X4) + ρ(X2∪X3) + ρ(X2∪X4) is satisfied.
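
As a concrete illustration (an editor's addition, not part of the original article), the inequality can be checked numerically for a matroid represented by the columns of an explicit matrix: the rank of a subset is then the rank of the corresponding columns. The Python/NumPy sketch below uses a hypothetical matrix A and brute-forces the inequality over all quadruples of subsets of the ground set.

    import numpy as np
    from itertools import chain, combinations

    # Hypothetical example: the columns of A are the elements of the representable matroid M(A).
    A = np.array([[1, 0, 0, 1, 1],
                  [0, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1]])

    E = range(A.shape[1])
    subsets = [frozenset(s) for s in chain.from_iterable(combinations(E, k) for k in range(len(E) + 1))]

    # Rank function of M(A): the rank of the chosen columns (0 for the empty set).
    rho = {S: (np.linalg.matrix_rank(A[:, sorted(S)]) if S else 0) for S in subsets}

    violations = 0
    for X1 in subsets:               # brute force over roughly a million quadruples
        for X2 in subsets:
            for X3 in subsets:
                for X4 in subsets:
                    lhs = rho[X1] + rho[X2] + rho[X1 | X2 | X3] + rho[X1 | X2 | X4] + rho[X3 | X4]
                    rhs = rho[X1 | X2] + rho[X1 | X3] + rho[X1 | X4] + rho[X2 | X3] + rho[X2 | X4]
                    violations += lhs > rhs
    print("violations:", violations)  # expected: 0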

Aubrey William Ingleton, an English mathematician, wrote an important paper in 1969[1] in which he surveyed the representability problem in matroids. Although the article is mainly expository, in it Ingleton stated and proved his inequality, which has found interesting applications in information theory, matroid theory, and network coding.[2]

Importance of the inequality


There are interesting connections between matroids, the entropy region, and group theory. Some of these connections are revealed by Ingleton's inequality.

Perhaps the most interesting application of Ingleton's inequality concerns the computation of network coding capacities. Linear network coding solutions are constrained by the inequality, and this has an important consequence:

The region of achievable rates using linear network coding can be, in some cases, strictly smaller than the region of achievable rates using general network coding.[3][4][5]

For definitions, see, e.g.,[6]

Proof


Theorem (Ingleton's inequality):[7] Let M be a representable matroid with rank function ρ and let X1, X2, X3 and X4 be subsets of the support set of M, denoted by the symbol E(M). Then:

ρ(X1) + ρ(X2) + ρ(X1∪X2∪X3) + ρ(X1∪X2∪X4) + ρ(X3∪X4) ≤ ρ(X1∪X2) + ρ(X1∪X3) + ρ(X1∪X4) + ρ(X2∪X3) + ρ(X2∪X4).

To prove the inequality we have to show the following result:

Proposition: Let V1, V2, V3 and V4 be subspaces of a vector space V. Then:

  1. dim(V1∩V2∩V3) ≥ dim(V1∩V2) + dim(V3) − dim(V1+V3) − dim(V2+V3) + dim(V1+V2+V3)
  2. dim(V1∩V2∩V3∩V4) ≥ dim(V1∩V2∩V3) + dim(V1∩V2∩V4) − dim(V1∩V2)
  3. dim(V1∩V2∩V3∩V4) ≥ dim(V1∩V2) + dim(V3) + dim(V4) − dim(V1+V3) − dim(V2+V3) − dim(V1+V4) − dim(V2+V4) + dim(V1+V2+V3) + dim(V1+V2+V4)
  4. dim(V1) + dim(V2) + dim(V1+V2+V3) + dim(V1+V2+V4) + dim(V3+V4) ≤ dim(V1+V2) + dim(V1+V3) + dim(V1+V4) + dim(V2+V3) + dim(V2+V4)

where Vi + Vj denotes the sum of the two subspaces, that is, the subspace spanned by their union.
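
As an informal sanity check of item (4) (an editor's sketch, assuming each subspace is given by a matrix of spanning columns, so that the dimension of a sum of subspaces is the rank of the concatenated columns), one can test the inequality on random subspaces; the dimensions and the seed below are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)

    def dim(*spaces):
        # Dimension of the sum of the given subspaces, each given by spanning columns.
        return np.linalg.matrix_rank(np.hstack(spaces))

    n = 8  # ambient dimension (arbitrary)
    V1, V2, V3, V4 = (rng.integers(-2, 3, size=(n, rng.integers(1, 5))) for _ in range(4))

    lhs = dim(V1) + dim(V2) + dim(V1, V2, V3) + dim(V1, V2, V4) + dim(V3, V4)
    rhs = dim(V1, V2) + dim(V1, V3) + dim(V1, V4) + dim(V2, V3) + dim(V2, V4)
    print(lhs, "<=", rhs, lhs <= rhs)  # item (4) predicts lhs <= rhs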

Proof (proposition): We will frequently use the standard vector space identity dim(U) + dim(W) = dim(U+W) + dim(U∩W).
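
The following toy example (added for illustration; the subspaces are chosen so that the intersection is known by construction) verifies this identity numerically.

    import numpy as np

    e = np.eye(6)
    U = e[:, [0, 1, 2]]   # U = span(e1, e2, e3)
    W = e[:, [2, 3, 4]]   # W = span(e3, e4, e5), so U∩W = span(e3) has dimension 1

    dim_U = np.linalg.matrix_rank(U)                    # 3
    dim_W = np.linalg.matrix_rank(W)                    # 3
    dim_sum = np.linalg.matrix_rank(np.hstack([U, W]))  # dim(U+W) = 5
    dim_int = 1                                         # dim(U∩W), known by construction

    print(dim_U + dim_W == dim_sum + dim_int)           # True: 3 + 3 == 5 + 1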

1. It is clear that (V1∩V2) + V3 ⊆ (V1+V3) ∩ (V2+V3), then

dim((V1∩V2)+V3) ≤ dim((V1+V3)∩(V2+V3)), hence
dim(V1∩V2∩V3) = dim(V1∩V2) + dim(V3) − dim((V1∩V2)+V3)
≥ dim(V1∩V2) + dim(V3) − dim((V1+V3)∩(V2+V3))
= dim(V1∩V2) + dim(V3) − {dim(V1+V3) + dim(V2+V3) − dim(V1+V2+V3)}
= dim(V1∩V2) + dim(V3) − dim(V1+V3) − dim(V2+V3) + dim(V1+V2+V3)
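
The inclusion used at the start of this step can also be checked numerically. The sketch below (an illustrative addition; the random subspaces and the helper functions are hypothetical) computes a spanning set of an intersection of column spaces from the kernel of [A | −B] and confirms that (V1∩V2) + V3 is contained in both V1+V3 and V2+V3.

    import numpy as np

    rng = np.random.default_rng(1)

    def intersection(A, B):
        # Spanning set of col(A) ∩ col(B): if A x = B y, then A x lies in both column spaces.
        M = np.hstack([A, -B])
        _, s, Vt = np.linalg.svd(M)
        tol = max(M.shape) * np.finfo(float).eps * (s[0] if s.size else 1.0)
        rank = int((s > tol).sum())
        kernel = Vt[rank:].T               # columns span the null space of M
        return A @ kernel[:A.shape[1], :]  # images A x of the kernel vectors (x, y)

    def contained_in(P, Q):
        # col(P) ⊆ col(Q) iff appending the columns of P does not increase the rank.
        return np.linalg.matrix_rank(np.hstack([Q, P])) == np.linalg.matrix_rank(Q)

    n = 7
    V1, V2, V3 = (rng.integers(-1, 2, size=(n, 4)) for _ in range(3))

    S = np.hstack([intersection(V1, V2), V3])    # spanning set of (V1∩V2) + V3
    print(contained_in(S, np.hstack([V1, V3])))  # True: contained in V1 + V3
    print(contained_in(S, np.hstack([V2, V3])))  # True: contained in V2 + V3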

2. It is clear that (V1∩V2∩V3) + (V1∩V2∩V4) ⊆ (V1∩V2), then

dim{(V1∩V2∩V3) + (V1∩V2∩V4)} ≤ dim(V1∩V2), hence
dim(V1∩V2∩V3∩V4) = dim(V1∩V2∩V3) + dim(V1∩V2∩V4) − dim{(V1∩V2∩V3) + (V1∩V2∩V4)}
≥ dim(V1∩V2∩V3) + dim(V1∩V2∩V4) − dim(V1∩V2)

3. From (1) and (2) we have:

dim(V1∩V2∩V3∩V4) ≥ dim(V1∩V2∩V3) + dim(V1∩V2∩V4) − dim(V1∩V2)
≥ dim(V1∩V2) + dim(V3) − dim(V1+V3) − dim(V2+V3) + dim(V1+V2+V3) + dim(V1∩V2) + dim(V4) − dim(V1+V4) − dim(V2+V4) + dim(V1+V2+V4) − dim(V1∩V2)
= dim(V1∩V2) + dim(V3) + dim(V4) − dim(V1+V3) − dim(V2+V3) − dim(V1+V4) − dim(V2+V4) + dim(V1+V2+V3) + dim(V1+V2+V4)

4. From (3) we have

dim(V1+V2+V3) + dim(V1+V2+V4) ≤ dim(V1∩V2∩V3∩V4) − dim(V1∩V2) − dim(V3) − dim(V4) + dim(V1+V3) + dim(V2+V3) + dim(V1+V4) + dim(V2+V4)

If we add dim(V1) + dim(V2) + dim(V3+V4) to both sides of the last inequality, we get

dim(V1) + dim(V2) + dim(V1+V2+V3) + dim(V1+V2+V4) + dim(V3+V4) ≤ dim(V1∩V2∩V3∩V4) − dim(V1∩V2) + dim(V1) + dim(V2) + dim(V3+V4) − dim(V3) − dim(V4) + dim(V1+V3) + dim(V2+V3) + dim(V1+V4) + dim(V2+V4)

By the standard identity, dim(V1) + dim(V2) − dim(V1∩V2) = dim(V1+V2) and dim(V3+V4) − dim(V3) − dim(V4) = −dim(V3∩V4), so the right-hand side equals dim(V1∩V2∩V3∩V4) − dim(V3∩V4) + dim(V1+V2) + dim(V1+V3) + dim(V1+V4) + dim(V2+V3) + dim(V2+V4). Since dim(V1∩V2∩V3∩V4) ≤ dim(V3∩V4), item (4) follows and the proof is complete.♣

Proof (Ingleton's inequality): Suppose that M is a representable matroid and let A = [v1 v2 … vn] be a matrix such that M = M(A). For X, Y ⊆ E(M) = {1, 2, … , n}, define U = <{vi : i ∈ X}>, the span of the vectors vi with i ∈ X, and define W = <{vj : j ∈ Y}> analogously.

If we suppose that U = <{u1, u2, … , um}> and W = <{w1, w2, … , wr}>, then clearly we have <{u1, u2, …, um, w1, w2, …, wr}> = U + W.

Hence: ρ(X∪Y) = dim <{vi : i ∈ X} ∪ {vj : j ∈ Y}> = dim(U + W).

Finally, if we define Vi = <{vr : r ∈ Xi}> for i = 1, 2, 3, 4, then by the last equality and item (4) of the above proposition we get the result.
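
To make this reduction concrete (an illustrative addition; the matrix and the subsets below are hypothetical), the sketch below checks for one instance that ρ(X∪Y) coincides with dim(U+W) and then evaluates both sides of Ingleton's inequality for the column matroid M(A).

    import numpy as np

    # Hypothetical representation: the columns v1, ..., v6 of A are the elements of M(A).
    A = np.array([[1, 0, 0, 1, 1, 0],
                  [0, 1, 0, 1, 0, 1],
                  [0, 0, 1, 0, 1, 1]])

    def rho(X):
        # Matroid rank of X ⊆ E(M) = {0, ..., 5}: the rank of the corresponding columns.
        return np.linalg.matrix_rank(A[:, sorted(X)]) if X else 0

    def dim_sum(*Xs):
        # dim(U + W + ...), each subspace spanned by the columns indexed by one of the sets.
        blocks = [A[:, sorted(X)] for X in Xs if X]
        return np.linalg.matrix_rank(np.hstack(blocks)) if blocks else 0

    X, Y = {0, 1, 3}, {1, 2, 4}
    print(rho(X | Y) == dim_sum(X, Y))   # True: ρ(X∪Y) = dim(U + W)

    X1, X2, X3, X4 = {0, 1}, {1, 2}, {2, 3, 4}, {0, 5}
    lhs = rho(X1) + rho(X2) + rho(X1 | X2 | X3) + rho(X1 | X2 | X4) + rho(X3 | X4)
    rhs = rho(X1 | X2) + rho(X1 | X3) + rho(X1 | X4) + rho(X2 | X3) + rho(X2 | X4)
    print(lhs, "<=", rhs, lhs <= rhs)    # Ingleton's inequality holds for this instance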

References

  1. ^ Ingleton, A.W. (1971). "Representation of matroids". In Welsh, D.J.A. (ed.). Combinatorial mathematics and its applications. Proceedings, Oxford, 1969. Academic Press. pp. 149–167. ISBN 0-12-743350-3. Zbl 0222.05025.
  2. ^ Ahlswede, Rudolf; N. Cai; Shuo-Yen Robert Li; Raymond Wai-Ho Yeung (2000). "Network Information Flow". IEEE Transactions on Information Theory. 46 (4): 1204–1216. doi:10.1109/18.850663.
  3. ^ Dougherty, R.; C. Freiling; K. Zeger (2005). "Insufficiency of Linear Network Codes". IEEE International Symposium on Information Theory, Adelaide, Australia: 264–267.
  4. ^ Dougherty, R.; C. Freiling; K. Zeger (2007). "Networks, matroids, and non-Shannon information inequalities". IEEE Transactions on Information Theory. 53 (6): 1949–1969. CiteSeerX 10.1.1.218.3066. doi:10.1109/TIT.2007.896862. S2CID 27096.
  5. ^ Li, S.-Y.R.; Yeung, R.W.; Ning Cai (2003). "Linear network coding". IEEE Transactions on Information Theory. 49 (2): 371. doi:10.1109/TIT.2002.807285.
  6. ^ Bassoli, Riccardo; Marques, Hugo; Rodriguez, Jonathan; Shum, Kenneth W.; Tafazolli, Rahim (2013). "Network Coding Theory: A Survey". IEEE Communications Surveys & Tutorials. 15 (4): 1950. doi:10.1109/SURV.2013.013013.00104. S2CID 691027.
  7. ^ Oxley, James (1992), Matroid Theory, Oxford: Oxford University Press, ISBN 0-19-853563-5, MR 1207587, Zbl 0784.05002.