# Talk:Linear programming

## The Diet Problem

Hi, I don't see anything on the diet problem, a class-book example of a linear programming problem in OR. Am I free to add it? (The diet problem would be a nice example to start with, because it explains a practical use in simple terms.)

Kind regards, StitchProgramming (talk) 12:18, 8 July 2011 (UTC)

## The difference of notation

Although the mathematical formulas are accompanied by a key, I feel we need to distinguish scalar variables from vector variables by notation. The article uses the scalar notation to describe both a vector and a real-valued variable. For example, we can't differentiate between a vector 'b' and a scalar 'b' until we read the key and find out which type it is.

Does anyone have the notation for vectors that they would like to share for this article?

It also doesn't help that one of the inequalities then goes on to compare with scalar 0. Some of the plain English descriptions of the criteria in the discussion below would be better than resorting to equations so early in the article. (71.233.167.118 (talk) 04:19, 12 May 2016 (UTC))
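One convention that would resolve this (just a sketch of the idea, not a proposal for specific symbols) is boldface lower-case for vectors, capitals for matrices, and plain italic for scalars:

```latex
% Standard-form LP with vectors typographically distinguished:
\begin{aligned}
  \text{maximize}   \quad & \mathbf{c}^{\mathsf{T}} \mathbf{x} \\
  \text{subject to} \quad & A\mathbf{x} \le \mathbf{b}, \\
                          & \mathbf{x} \ge \mathbf{0},
\end{aligned}
% where the inequalities are read componentwise and \mathbf{0} is the
% zero vector, which also addresses the "compare with scalar 0" point above.
```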

—Preceding unsigned comment added by 24.87.7.27 (talk) 02:55, 31 January 2008 (UTC)

I removed the bit about undecidability of certain ILP problems. It wasn't correct.

Cheers, Dave Peck Davekpeck 20:34, 25 January 2007 (UTC)

Can somebody please address the question of undecidability of Integer Linear Programming in "the worst case"? I believe that this statement is simply false. Can someone provide either (1) a reference demonstrating undecidability, or (2) instructions on how to remove this misleading piece of information?

If my understanding of this space is correct, 3SAT can be reduced to the general ILP problem in polynomial time, which makes ILP NP-hard; since the decision version is in NP, it is NP-complete — and in particular decidable.

If my understanding is incorrect, can someone clarify this point about undecidability with more detail and a reference?

Thanks!

-Dave Peck Davekpeck 02:01, 25 January 2007 (UTC)

"Linear programming was used during WW2 in the planning of convoys to protect merchant shipping" - Can anyone confirm this? If so, is this worth mentioning? Lawsonsj

The Pleasures of Counting by Tom Körner (Cambridge University Press) talks about the planning of convoys during WW2. Unfortunately, I do not remember whether Linear Programming was used. This may be one of the first times that Linear Programming was actually used (Operational Research started in WW2), and therefore worth mentioning. -- Jitse Niesen 22:41, 14 Aug 2003 (UTC)
I read a local magazine article about LP today and noticed that Wikipedia had no mention of its history. The history of linear programming is everywhere on the web other than here. I have now added it. Mlpkr 09:04, 23 December 2006 (UTC)

If people would like to mention in a clean way that a sufficient condition for an IP problem to be equivalent to its LP relaxation is that its constraint matrix be a totally unimodular matrix (see a Mathematical Programming Glossary), then write this and link to the new articles that I am creating: unimodular matrix and totally unimodular matrix. -- 02:00am PST, April 13, 2004 Simon Lacoste-Julien
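For editors wanting to experiment: totally unimodular means every square submatrix has determinant −1, 0, or 1, and for tiny matrices this can be checked by brute force. A sketch in Python (function names and example matrices are mine, purely illustrative; the check is exponential and only sensible at toy sizes):

```python
from itertools import combinations

def det(m):
    # integer determinant by Laplace expansion along the first row
    # (fine for the tiny submatrices considered here)
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def is_totally_unimodular_bruteforce(a):
    # every square submatrix must have determinant -1, 0, or 1
    rows, cols = len(a), len(a[0])
    for k in range(1, min(rows, cols) + 1):
        for ri in combinations(range(rows), k):
            for ci in combinations(range(cols), k):
                if det([[a[i][j] for j in ci] for i in ri]) not in (-1, 0, 1):
                    return False
    return True

# node-edge incidence matrix of the bipartite graph K_{2,2}: TU,
# so LP relaxations over it have integral vertex solutions
k22_incidence = [
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
]
# incidence matrix of a triangle (odd cycle): its full 3x3 determinant
# is -2, hence it is not TU
triangle_incidence = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
]
```

The bipartite case passes and the odd cycle fails, matching the standard characterization of which network-style LPs have integral optima.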

This was my first contribution to Wikipedia, so apologies if I have somehow mucked something up. I made a few changes. Partly, I just tried to add a small amount of material and improve the flow of the article, but there were a couple of more substantive changes.

1) The feasible region of the LP is, in general, a polyhedron, not a polytope (which would be bounded as it is the convex hull of a finite number of points). I tried to correct this throughout.

2) The simplex method does not necessarily converge to the optimal solution without special precautions that are not usually taken. "Cycling" can occur, where the algorithm gets stuck traveling around a cycle of vertices all having the same (non-optimal) objective value. One method (among many) of avoiding this is "lexicographic resolution of degeneracy." In practice, such cycling is virtually unknown, even when no precautions are taken.

Another (possibly) minor correction: doesn't LP require the polytope in which we search for the solution to be a _convex_ polytope? Albmont 13:29, 27 February 2007 (UTC)
Certainly correct, LP requires a convex polytope, i.e. a finite intersection of half-spaces. Lavaka (talk) 22:50, 7 September 2009 (UTC)
Cycling does occur in practice: see Manfred Padberg's book. (Isn't polyhedron the term preferred for possibly unbounded polyhedral sets, as the first editor mentioned?) Kiefer.Wolfowitz (talk) 18:32, 4 January 2010 (UTC)

## linear and nonlinear integer programming?

According to my friend, linear integer programming is in NP (complexity), not undecidable as the article claims. Can someone verify this? The article now claims that "In contrast to linear programming....", so does it refer only to nonlinear integer programming or to all integer programming? As it stands, the section about integer problems is ambiguous.--Thv 07:30, 2005 May 6 (UTC)

Unbounded integer programming is undecidable. See Unsolvability of some optimization problems by Wenxing Zhu. Sciolizer 17:56, 21 September 2007 (UTC)

The decision version of linear programming (i.e., is there a feasible solution with value greater than k, etc.) is of course in NP, linear or non-linear (you just plug in the numbers and check). The 0-1 linear integer program is NP-hard (the decision version, of course). Also note that a method is not NP-hard, since hardness in complexity theory deals with problems, not methods.

If the domain defined by the relaxed ILP is not infinite, the decision problem is in NP: we just have to list all the possibilities. The algorithm is exponential on a DTM but polynomial on an NDTM.

I am not sure that the same proof works on an infinite polyhedron, since the number of branches of execution on the NDTM could be infinite. It is possible that this makes the execution non-polynomial, but I'm not sure of it.
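The finite-domain argument can be illustrated directly: a deterministic machine just enumerates every 0-1 assignment, which is exponential in the number of variables, while a nondeterministic machine could guess the assignment and verify it in polynomial time. A sketch with made-up toy data (function name and instance are mine, purely illustrative):

```python
from itertools import product

def ilp_decision_bruteforce(c, a_ub, b_ub, k):
    """Is there a 0-1 vector x with A_ub @ x <= b_ub and c @ x >= k?
    Exponential enumeration -- the DTM analogue of guessing on an NDTM."""
    n = len(c)
    for x in product((0, 1), repeat=n):
        feasible = all(sum(row[j] * x[j] for j in range(n)) <= b
                       for row, b in zip(a_ub, b_ub))
        if feasible and sum(c[j] * x[j] for j in range(n)) >= k:
            return True
    return False

# toy knapsack-style instance: values 3, 4, 5; capacity constraint
# 2*x1 + 3*x2 + 4*x3 <= 5; best feasible value is 7 (x = (1, 1, 0))
reachable_7 = ilp_decision_bruteforce([3, 4, 5], [[2, 3, 4]], [5], 7)
reachable_8 = ilp_decision_bruteforce([3, 4, 5], [[2, 3, 4]], [5], 8)
```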


Delayed column generation also works for linear programming problems and is not practical in general unless the problem has some special structure. It would be better to list Branch&Bound, Branch&Cut and Branch&Price as advanced methods.--Palli 02:03, 22 Jun 2005 (UTC)

My first thought was that Matiyasevich's theorem implies that (nonlinear) integer programming is indeed undecidable (in the version: given a problem, does it have a feasible solution). However, your observation that the decision version is in NP sounds convincing. Is it possible that a problem is both undecidable and in NP?
I listed the algorithm you mentioned. Unfortunately, we have no articles on branch and cut and branch and price, and the little that I know about optimization is all continuous optimization. It would be grand if you could improve Wikipedia's articles in this area. Jitse Niesen 21:29, 22 Jun 2005 (UTC)

## Optimizing flow difficult or not?

"LP solvers are in widespread use for optimization of various problems in industry, such as optimization of flow in transportation networks, many of which can be transformed into linear programming problems only with some difficulty."

Is this saying that flow optimisation problems are hard to formulate as linear programs? That people use linear programming even though the transformation is hard? I'm struggling with the wording. From memory there is a special class of linear programming called transportation and another called maximal flow; is this referring to one of these problems, or to something different? njh 03:59, 19 January 2006 (UTC)

I think the sentence should be interpreted as saying that real-world transportation problems do not quite fit within the framework of linear programming. I believe you are right that there is a class of LP problems called transportation problems, but my best guess is that the sentence does not refer to this. I hope somebody else is able to give a more definitive answer. -- Jitse Niesen (talk) 14:20, 19 January 2006 (UTC)
The basic versions of most flow problems are considered to be easy, with several efficient combinatorial algorithms available. Their formulation as an LP is straightforward. The benefit of solving them as linear programming problems is the wide availability of LP solvers and modelling languages targeting LP/MIP problems. As the incidence matrix is totally unimodular, as long as a basis is found and the capacities are integral, the basic solutions provided by a simplex method will also be integer. However, flow problems quickly become difficult: integer multicommodity flows are very common in practice and are NP-complete. — Preceding unsigned comment added by Tyrneaeplith (talkcontribs) 12:57, 14 July 2014 (UTC)

The article states "Does LP admit a strongly polynomial algorithm?" as an open problem. I think that Karmarkar's algorithm is strongly polynomial. I haven't seen any "big M" in its complexity!

Potra and Wright, "Interior-point methods", J. Comput. Appl. Math., vol. 124, 2000, state that Karmarkar's algorithm needs O(nL) iterations and O(n^3.5 L) bit operations, where n is the number of variables and L is the length of the problem. Is there a more recent result? -- Jitse Niesen (talk) 10:32, 18 May 2006 (UTC)
I am confused about the same point. The stated complexities for Khachiyan's and Karmarkar's algorithms would imply that they're strongly polynomial, but that is contradicted later in the article, as well as by http://maven.smith.edu/~orourke/TOPP/P8.html#Problem.8 which says that it's exponential in L. We need an authoritative answer here. 115.129.27.166 (talk) —Preceding undated comment added 08:42, 16 March 2009 (UTC).
It's definitely not yet proven to be strongly polynomial. Stephen Wright's "Primal Dual methods for LP" (the title is something like that) has a good discussion, and uses the "L" notation. There have been some results that certain types of LP admit strongly polynomial algorithms, but these are not for general LP problems. A seminal result was from Tardos in the 1980s that there is a strongly polynomial algorithm if the equality matrix A has only small integer entries (the precise restrictions are not that simple to state, but it's not too important). Lavaka (talk) 22:34, 7 September 2009 (UTC)
All known polynomial-time algorithms have 2 parts. First, an iterative method delivers some uniform notion of progress with each iteration.
Second, the algebraic "height" of the (integer, rational, or real-algebraic) data guarantees that possible solutions are discretely spaced; thus, when the iterative method delivers a rational iterate in a sufficiently small ball around a true solution, the iterate is a correct solution! These properties give the polynomial-time finite-termination result (see an article by Victor Klee and a coauthor in the Handbook of Combinatorics or H. of Convex Geometry, for an authoritative statement).
An example may help. Consider univariate integer optimization. One makes some progress with every step. Suppose your "iterative method" produces only integer iterates. If you can prove that your method produces an approximate solution with error strictly less than 1/2 after polynomially many steps, then your algorithm would terminate at the correct integer! (Khachiyan rolled up his sleeves and did the hard work to make this argument rigorous for rational arithmetic and computations with rational approximations to square roots, etc. at each step.) Kiefer.Wolfowitz (talk) 18:41, 4 January 2010 (UTC)
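The univariate rounding argument above can be sketched in code (illustrative only; the quadratic objective is my own choice, and a continuous ternary search stands in for the "iterative method" that makes uniform progress):

```python
def ternary_search_min(f, lo, hi, tol=0.2):
    """Shrink [lo, hi] around the minimiser of a unimodal f until its
    width is at most tol, then return the midpoint."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2          # minimiser cannot lie in (m2, hi]
        else:
            lo = m1          # minimiser cannot lie in [lo, m1)
    return (lo + hi) / 2.0

def f(x):
    # stand-in convex objective whose continuous minimiser is 42.2,
    # so the best integer is 42 (made-up numbers, purely illustrative)
    return (x - 42.2) ** 2

approx = ternary_search_min(f, 0.0, 100.0)   # within 0.2 of the true minimiser
exact_integer_min = round(approx)            # error < 1/2, so rounding is exact
```

Progress (the shrinking interval) plus discreteness (integer answers are at least 1 apart) turns an approximate method into an exact one, which is the shape of Khachiyan's argument.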
Thus all known polynomial-time algorithms have complexities depending on ${\displaystyle L}$, a measure of the algebraic height of the data. Removing this dependence (if possible) is the central problem of computational optimization theory. Kiefer.Wolfowitz (talk) 19:01, 3 January 2010 (UTC)
If I understand the lineage of interior point methods correctly, Karmarkar's algorithm is equivalent to a log-barrier interior-point method for linear programming. It was recently proven (https://arxiv.org/pdf/1708.01544.pdf, or https://epubs.siam.org/doi/10.1137/17M1142132 for the journal version) that log-barrier methods for linear programming are *not* strongly polynomial time. The proof is by an exceedingly general counter-example, which is constructed in such a way to apply to any algorithm which attempts to follow the LP's central path. Rileyjmurray (talk) 06:04, 20 September 2018 (UTC)

## Merging with Job-shop problem

I disagree with a merger. It would be a lot of work and may require big changes to this linear programming article. I would suggest either a redirect from Job-shop problem to here, or to expand that one. Oleg Alexandrov (talk) 05:09, 8 June 2006 (UTC)

The current article already has one example, and I think that suffices. Therefore, I removed the merge tags. There are many problems with job-shop problem, as I explained on the talk page (particularly, it is usually not a linear programming problem), so I tagged it with Template:Cleanup-rewrite. I wouldn't mind if the article were deleted. -- Jitse Niesen (talk) 06:25, 8 June 2006 (UTC)

## not accessible

Mortimer Adler, a former editor of Encyclopedia Britannica, complained many years ago that the mathematics articles were written in a way that only mathematicians could understand. The other articles could be written so that a general reader could grasp them. Wikipedia is by no means alone in this respect.

Schmausschmaus (talk) 12:41, 26 March 2009 (UTC)

this is written from a specialist's point of view. how about an encyclopedic-type description, something anyone with a high school education could get a handle on in the intro?

Justforasecond 05:22, 18 August 2006 (UTC)

You are asking too much; some things are just complicated. Would an easier-to-understand introduction make you happy enough? Oleg Alexandrov (talk) 05:26, 18 August 2006 (UTC)
An introduction that an average reader could understand is more in the spirit of an encyclopedia Justforasecond 05:34, 18 August 2006 (UTC)
Given that it is exactly one year on now, I had a go myself at trying to add a one-line informal explanation. JohnFlux 22:43, 19 August 2007 (UTC)

Don't dumb down the article, it's very helpful as it is. Certainly more accessible than the notes I'm supposed to study from.

## Regarding: Does LP admit a polynomial pivot algorithm

This paper gives a randomized polynomial-time simplex algorithm:

As Madhu Sudan pointed out in Jonathan Kelner's thesis defense, a 'trivial' way to accomplish this would be to first solve the LP problem using any polynomial LP algorithm and then use the solution to identify a pivoting strategy.

That's a good point. I removed the following open questions for the moment.
• Does LP admit a (strongly or weakly) polynomial pivot algorithm (may be a non-simplex pivot algorithm, e.g., a criss-cross or arrangement method)?
• Does LP admit a (strongly or weakly) polynomial simplex pivot algorithm?
There might be a proper way to formulate this, but it needs to be more precise. -- Jitse Niesen (talk) 03:44, 25 November 2006 (UTC)

## Unclear Dual Example

${\displaystyle y_{A}+F_{1}y_{F}+P_{1}y_{P}\geq S_{1}}$ | (the farmer must receive no less than ${\displaystyle S_{1}}$ for his wheat)


If the y variables determine the unit costs, then the sum on the left-hand side of the constraint is the cost of producing a unit of wheat, not the revenue (as stated). I.e., we force the farmer to pay more for a unit of wheat than the profit (S) he gains per unit of wheat, which makes the interpretation and motivation of the problem very unclear. —Preceding unsigned comment added by 84.109.165.179 (talk) 09:17, 16 September 2008 (UTC)

I think one can interpret this as follows: the dual problem is posed by a person who wants to set prices for renting the land and buying the fertilizer and insecticide, so as to convince the farmer to rent out / sell all his belongings. The two inequalities can then be interpreted correctly, since if one of them is violated, the farmer would rather grow the wheat / barley than rent out / sell. The objective function is the total cost that the person has to pay. And it is intuitive that the optimum is just the same as before, since the farmer should only be willing to sell everything if he could not gain more by growing wheat / barley.

--Isaacto (talk) 05:52, 1 April 2010 (UTC)
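For reference, the symmetric primal-dual pair underlying the example, together with weak duality, which is what makes the "convince the farmer" interpretation work (standard material, stated generically rather than with the farmer's data):

```latex
% Generic symmetric primal-dual pair:
\begin{aligned}
  \text{(P)}\quad \max\ & c^{\mathsf{T}}x  && \text{s.t. } Ax \le b,\ x \ge 0, \\
  \text{(D)}\quad \min\ & b^{\mathsf{T}}y  && \text{s.t. } A^{\mathsf{T}}y \ge c,\ y \ge 0.
\end{aligned}
% Weak duality: for any feasible x and y,
%   c^{\mathsf{T}}x \;\le\; (A^{\mathsf{T}}y)^{\mathsf{T}}x
%                   \;=\; y^{\mathsf{T}}Ax \;\le\; y^{\mathsf{T}}b,
% so the buyer's total offer is never below the farmer's achievable revenue;
% equality at the optimum is strong duality.
```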

## software

For the listed software, "CVX" is not a solver, but rather a (very nice) front-end, like YALMIP, for the SeDuMi and SDPT3 solvers (these solvers handle quadratic programming and semidefinite programming as well; they are actually designed for semidefinite programming in particular). CVX is very nice because it translates problems from "human-style" input into the right format.

Also, another nice code is cvxopt, written in Python. This does have its own solver (again, solves semidefinite programming as well), or it can call a few special other solvers if you have them installed. Lavaka (talk) 23:17, 24 August 2009 (UTC)

I have updated our article's description to be more aligned to the official CVX description. You're correct, it's not a "solver", it is a "modeling system for disciplined convex programming." Nimur (talk) 22:56, 27 August 2009 (UTC)

AIMMS and GAMS are modelling languages primarily, and are out of place among the list of LP solvers. --Tyrneaeplith (talk) 20:49, 14 July 2014 (UTC)

## Unpublished theses from the 1980s lack the notability required to be mentioned in this Wikipedia article

Anonymous editor(s), operating from similar IP addresses, keep adding paragraphs about University of Houston doctoral theses from the 1980s. The text previously made misleading claims about the novelty of such methods (given journal publications and books by Scarf, Cline, etc.).

Algebraically, in 1982 Nguyen[1] and in 1985 Bruni[2] represented the LP problem as a fixed-point problem: find ${\displaystyle \scriptstyle x\geq 0}$ such that ${\displaystyle \scriptstyle {\mathfrak {P}}x=x}$ where ${\displaystyle \scriptstyle {\mathfrak {P}}}$ is an idempotent symmetric matrix. Nguyen employed the proximity map onto the non-negative cone ${\displaystyle \scriptstyle \{x|x\in {\mathfrak {R}}^{2m},x\geq 0\}}$ to solve the fixed-point problem. In 1992[3] and 2009[4] Jalaluddin derived a similar representation and proposed an affine regression method of solution.
• Bruni, A. J., Nonnegative, Nontrivial Fixed Points of Orthogonal Projections, PhD Thesis, Department of Mathematics, University of Houston, 1985.
• Jalaluddin Abdullah, Optimization by the Fixed-Point Method, Version 2.01. [2]
• Nguyen, Tung M., Applications Of Generalized Inverse To Circulant Matrices, Intersection Projections, and Linear Programming, PhD Thesis, Department of Mathematics, University of Houston, 1985.

Would such editors kindly refer to journal publications that are discussed by Math Reviews or by survey articles, e.g. by Todd? If no such references exist, then the theses are not notable.

Pardon my ignorance if these theses were in fact notable. Kiefer.Wolfowitz (talk) 14:57, 1 January 2010 (UTC)

I looked at the unpublished paper by Abdullah, which spends a lot of time paraphrasing known results in nonstandard form (e.g., the Riesz decomposition of a vector lattice) and then has a computational section consisting of c. 3-4 dimensional problems. This "paper" is unfit for publication; citing it so prominently is certainly inappropriate for Wikipedia, particularly when an anonymous user repeatedly adds it without giving a single response to objections. Do any senior editors know how to deal with such editing? Kiefer.Wolfowitz (talk) 19:48, 4 January 2010 (UTC)

I have left a warning at User talk:Hjjalal, reminding this editor of our policies on Conflict of interest and Edit warring. I assume he is the same person as the anons from the 135.* address range who were trying to promote this material earlier. EdJohnston (talk) 00:29, 5 January 2010 (UTC)
Thank you for your help. I do hope that the editor mentioned can find another (i.e. journal) source that summarizes the content of those dissertations and explains why they are notable. I now shall go in peace! Kiefer.Wolfowitz (talk) 00:37, 5 January 2010 (UTC)

### Other History

On related matters, is it noteworthy (I'm not an expert in the field) to mention the work by A. Land and A. Doig (now Harcourt)? Google Scholar has over 2500 cites, and A. Doig is a bit of a local hero (see this ABC News report).

A. H. Land and A. G. Doig, ‘An Automatic Method for Solving Discrete Programming Problems’ Econometrica, Vol.28 (1960), pp. 497-520 available here Thoglette (talk) 05:04, 8 October 2018 (UTC)

I added some of the best books, for intermediate-advanced readers. Nering and Tucker is a good book for beginners. The book of Williams is good for modelling, but I don't know about its 4th edition. Sincerely, Kiefer.Wolfowitz (talk) 23:35, 16 January 2010 (UTC)

## Difficulty of network flows as LPs?

In my youth, network problems were thought to be 1000 times easier to solve, because of special data structures and bit-level operations, developed by folks at Southern Methodist U. (Kennington, Helgason), Carnegie Mellon, Rice, etc. Robert Bixby at Rice supervised at least one thesis on recognizing network structures in LP problems (in the 1980s).

Could an editor please explain the statement that network problems can be solved as LP problems with difficulty? Do you mean that it is easier to solve network problems using purely network-theoretic algorithms than by solving them as LPs and ignoring the network structure? Kiefer.Wolfowitz (talk) 16:42, 17 January 2010 (UTC)

I rephrased this (brief) section. Kiefer.Wolfowitz (talk) 16:42, 17 January 2010 (UTC)

## How do I "complementary slackness"?

The term complementary slackness is used as a verb rather than an adjective in the article. I would fix it, but I can't. 70.250.179.231 (talk) 21:02, 7 February 2010 (UTC)

There are two situations in which no optimal solution can be found. First, if the constraints contradict each other (for instance, x ≥ 2 and x ≥ 1) then the feasible region is empty and there can be no optimal solution, since there are no solutions at all. In this case, the LP is said to be infeasible.

These constraints do not contradict each other. For example, x = 3 fits nicely. I'm guessing the second one should be x ≤ 1.

Could someone with editing rights fix this?

138.106.53.11 (talk) 15:09, 8 April 2010 (UTC)

Thanks for your suggestion. I fixed the error. Kiefer.Wolfowitz (talk) 18:05, 8 April 2010 (UTC)
However, other errors remain. I don't believe that the public is served by a discussion of the decomposition of a polyhedron as the sum of a (bounded) polytope and cone, but hope that another editor (with more energy now) would simplify the explanation of extreme points. (Papadimitriou and Steiglitz have a nice result on finding rational points of bounded algebraic height for rational data, which reduces the headaches with unbounded directions, btw.) Kiefer.Wolfowitz (talk) 18:35, 8 April 2010 (UTC)

## The Hirsch conjecture

Are there already any references on whether the recent disproof of the Hirsch conjecture is a construction that can be generalized to show that the diameter is not always linear, or does it "only" prove that it is larger than n-d? In other words, does it disprove the existence of a linear-time edge-following simplex algorithm? (Santos, Francisco (2010), "A counter-example to the Hirsch conjecture", 100 Years in Seattle: the mathematics of Klee and Grünbaum) is to be held in July. Thrufir (talk) 14:35, 11 May 2010 (UTC)

## Not accessible to most readers

I'm not a mathematician but I've studied enough maths to get two engineering degrees (BSc and MSc) and a PhD in medical physics. By the time I'd read the first two paragraphs, I'd already come across two terms which I did not have a clue what they meant (polytope and affine function) as well as a few which I really did not know, but could perhaps have some idea from the name.

If someone with two engineering degrees and a science Ph.D. finds the first two paragraphs hard going, I suggest the article needs rewriting. —Preceding unsigned comment added by 213.78.42.15 (talk) 01:40, 4 July 2010 (UTC)
We could change "affine" to "linear", and gloss "polytope" as "a solid geometric figure like solid polygons, tetrahedrons, and soccer balls" (or buckyballs). Would these changes satisfy you?
The ideas of linear programming are related more to functional analysis (using measure & integration theory and discussing specific dualities) and microeconomics than they are to the traditional engineering-physics curriculum (emphasizing differential & integral equations and related transforms, unless the latter curriculum is followed in Paris or Moscow!). Kiefer.Wolfowitz (talk) 16:52, 15 December 2010 (UTC)

## Dynamic programming

I think there should be some explanation about the difference between Linear Programming and Dynamic Programming.

Also, I think this article should belong to Category:Geometric algorithms, since it is mentioned as a Computational Geometry algorithm. --Erel Segal (talk) 16:20, 15 December 2010 (UTC)

Erel, I'll add the category you propose.
Second, do you have a reliable introductory source introducing linear programming, and mentioning that it's different than dynamic programming? (I can think of a bloated textbook by Rardin and advanced texts by Michel Minoux and Jeremy Shapiro that discuss both in different chapters, but I cannot think of a short introduction that does so. There is a discussion of time-structured problems in some advanced books on large-scale LP, which might be considered dynamic programming by some.) Padberg's rather demanding text does discuss "dynamic simplex" algorithms, but I don't remember any dynamic programming. Cheers, Kiefer.Wolfowitz (talk) 17:00, 15 December 2010 (UTC)
I don't really know about textbooks in this field, it just seems that dynamic programming and linear programming can be used to solve similar problems, so I think some comparison is in order. --Erel Segal (talk) 19:14, 15 December 2010 (UTC)
If you don't know of a reliable textbook or reliable survey or reliable paper (and nobody else does), then adding this material might be undue weight or original research; it might be worthwhile looking at the policies and deciding yourself or asking for a third opinion here.
Lagrangian duality: Denardo's reliable and notable book on dynamic programming has been republished by Dover and it mentions that Lagrangian dual methods are often more effective than dynamic programming! I would think that Lagrangian dual methods would be a higher priority to discuss than dynamic programming, especially for large problems or relaxations of complicating constraints for e.g. combinatorial problems; I think that the volume method of Barahona and company has been used in recent years on finding God's number for Rubik's cube and for airline scheduling, and subgradient methods were used by Karp etc. in the 1970s on the traveling salesman problem. I don't know of any similar break-throughs or interesting stories for dynamic programming, but this may just be my ignorance. Best regards, Kiefer.Wolfowitz (talk) 19:40, 15 December 2010 (UTC)

## Article not accessible redux / providing mainstream coverage

1. The book Linear Programming: Methods and Applications, Fifth Edition (2010) by Dr. Saul I. Gass provides an authoritative, easy-to-read coverage of methods and applications.

2. The book by Dr. Gass could be used to revise the structure and content of this article to considerable benefit.

3. There is no need to use the words "affine" and "polytope" when introducing the topic.

4. There is a need to introduce the term "transportation problem" and discuss the algorithms that are specific to its solution. A very quick Google search found [3]

5. I do not see any benefit to bringing dynamic programming into the article.

6. The language needs improvement.

7. This is another example of articles on math topics needing input from editors who are not mathematicians. Michael P. Barnett (talk) 02:07, 10 May 2011 (UTC)

Hi Michael!
Gass's book is a good and inexpensive introduction that can help readers who cannot afford Tucker & Nering and who find Papadimitriou & Steiglitz too mathematical: please add it to the bibliography.
A discussion of activity analysis and the transportation problem, mentioning Tjalling C. Koopmans, would be useful. George Stigler's diet problem would also be a useful addition, particularly as a 2-variable problem, e.g. peas and carrots. Maybe a discussion of a feasible budget set in microeconomics would make a nice illustration?
Please improve the language. (I just corrected some errors in the lede, but I am skeptical about the benefit of mentioning geometry in the lede.)
Best regards, 15:15, 10 May 2011 (UTC)

## Terminology for the different forms of equation

The mailing list for the GNU GLPK solver has just had a long and considered discussion about the terminology used to describe the various formulations of the LP problem. To learn more, search for subject line "optimality conditions paragraph (KKT and LP formulations)" at help-glpk.

Our conclusions are:

• no one had ever heard of the term augmented form
• we settled on standard form to describe this form
• we did not traverse the issue of what to call this

Several texts were consulted or quoted, including Floudas and Pardalos (2009) Encyclopedia of optimization – Second edition, Springer.

As a result, my suggestions are that:

• the use of standard form should be backed up with an authority or changed
• the use of augmented form should be backed up with an authority or changed

HTH Robbiemorrison (talk) 22:09, 15 May 2011 (UTC)

Please try asking the editor who wrote that material to improve the sourcing. If that editor is non-responsive, I would suggest consulting Murty, which is the primary source on the simplex algorithm. (Of course, textbooks disagree about "standard forms" and "augmented forms"; a homogeneous self-dual standard form differs from Karmarkar's standard form, which differs from Dantzig's standard form.) Best regards, 23:49, 15 May 2011 (UTC)

## History

Two newcomers (I believe) have made good edits to the history section. I would endorse our following Alex Schrijver's Theory of Linear and Integer Programming, because of its meticulous scholarship. I am concerned that the history no longer mentions von Neumann's contribution of duality theory. 20:54, 27 June 2011 (UTC)

## Still Ridiculously Incomprehensible

Linear programming is not a difficult concept to grasp at all, nor is it difficult to teach. Calculating solutions non-graphically may be difficult, but the basic concept is not. Check out http://www.purplemath.com/modules/linprog.htm. That is how you describe the concept. Once that is done, describe the mathematically interesting problems. Perhaps describe why it is so difficult to solve. Or explain why not just plot the inequalities and go from there. You folks certainly know the interesting questions much better than I, so you decide why all the fancy maths are necessary. Once you do that, then go on to the specialized mathematical descriptions. 209.193.57.103 (talk) 10:43, 6 February 2012 (UTC)
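For what it's worth, the "plot the inequalities" idea can even be written down as a naive algorithm: in two dimensions, intersect each pair of constraint boundaries, keep the intersection points that satisfy every constraint, and pick the best. A sketch (the function name and toy instance are mine; it also hints at why fancier methods exist, since the number of candidate vertices explodes as constraints and dimensions grow):

```python
from itertools import combinations

def solve_2d_lp(c, constraints):
    """Maximize c[0]*x + c[1]*y subject to a*x + b*y <= t for each
    (a, b, t) in `constraints`, by enumerating vertices of the feasible
    region. A bounded LP attains its optimum at a vertex, so checking
    all pairwise boundary intersections suffices in 2-D."""
    eps = 1e-9
    best = None
    for (a1, b1, t1), (a2, b2, t2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < eps:              # parallel boundaries: no vertex
            continue
        x = (t1 * b2 - t2 * b1) / det   # Cramer's rule for the 2x2 system
        y = (a1 * t2 - a2 * t1) / det
        if all(a * x + b * y <= t + eps for a, b, t in constraints):
            value = c[0] * x + c[1] * y
            if best is None or value > best[0]:
                best = (value, (x, y))
    return best

# toy instance: maximize x + y subject to x <= 2, y <= 3, x >= 0, y >= 0;
# the optimum is 5 at the corner (2, 3)
best = solve_2d_lp((1, 1), [(1, 0, 2), (0, 1, 3), (-1, 0, 0), (0, -1, 0)])
```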

## Integer Linear Programming Should Have Its Own Article

Integer linear programming is a very important subject, and it should have its own article. The German Wikipedia has an excellent article on the subject - I would STRONGLY urge that somebody copy it verbatim into the English Wikipedia as a good first step (including the excellent diagrams it contains for extra clarity). There's a machine translation of the German article here - link.

I would also agree with previous comments on this page that the article can be simplified. An obvious starting point would be a very simple example using, say, 2 inequalities with actual numbers. --New Thought (talk) 20:41, 16 September 2012 (UTC)

## Classification of the ellipsoid method

The ellipsoid method is classified under the interior-point class of methods. Does it belong there? Some sources seem to imply that the ellipsoid method is NOT an interior-point method:

What makes the ellipsoid method an interior-point method?

If this is changed, we should remember to change Template:Optimization_algorithms.

Alexeicolin (talk) 02:47, 10 January 2013 (UTC)

Technically the ellipsoid method, as it is usually defined, is not an interior-point method, because it deals with finding a feasible point and not with optimality. However, looking at "Introduction to Linear Optimization" by Bertsimas and Tsitsiklis (pages 378-379), there are 2 different methods for solving LPs with the ellipsoid method. The 2nd one given in that book, called the 'sliding objective ellipsoid method', would be an interior-point method (once an initial feasible point is found). I may be wrong about this since I am not an expert on different optimization algorithms and their classification, but this seems to be the distinction that should be given (between the ellipsoid method and something like the 'sliding objective ellipsoid method'). Zfeinst (talk) 18:36, 10 January 2013 (UTC)

## Methods to convert nonlinear problems to linear programming problems

Hello,

I am not sure where this should go, but I believe there should be examples that convert: absolute value, min, and max into their linear counterparts.

Forgive me if I make a mistake in the following examples; I do not know them by heart and am just quickly deriving them as I go.

e.g., min sum abs(x_i)

--- into ---

min sum e_i,

s.t.

e_i >= -x_i, for all i

e_i >= +x_i, for all i
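This reformulation can be sanity-checked numerically. Below is a sketch using SciPy's `linprog`; the extra constraint x_1 + x_2 >= 1 is a made-up example of mine (without some constraint the problem would be trivially zero), not something from the discussion above.

```python
from scipy.optimize import linprog

# min |x1| + |x2|  s.t.  x1 + x2 >= 1, rewritten with epigraph
# variables e1, e2:  min e1 + e2  s.t.  e_i >= x_i and e_i >= -x_i.
# Variable order: [x1, x2, e1, e2]; all constraints in <= form.
c = [0, 0, 1, 1]
A_ub = [
    [ 1,  0, -1,  0],   #  x1 - e1 <= 0   (e1 >=  x1)
    [-1,  0, -1,  0],   # -x1 - e1 <= 0   (e1 >= -x1)
    [ 0,  1,  0, -1],   #  x2 - e2 <= 0   (e2 >=  x2)
    [ 0, -1,  0, -1],   # -x2 - e2 <= 0   (e2 >= -x2)
    [-1, -1,  0,  0],   # -x1 - x2 <= -1  (x1 + x2 >= 1)
]
b_ub = [0, 0, 0, 0, -1]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 4)
assert abs(res.fun - 1.0) < 1e-6   # min of |x1| + |x2| on x1 + x2 >= 1 is 1
```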

e.g., min max(x_i)

--- into ---

min e,

s.t.

e >= x_i, for all i
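The minimax version can be checked the same way; again the constraint x_1 + x_2 >= 4 is my own made-up example to keep the problem bounded.

```python
from scipy.optimize import linprog

# min max(x1, x2)  s.t.  x1 + x2 >= 4, rewritten with a single
# epigraph variable e:  min e  s.t.  e >= x1 and e >= x2.
# Variable order: [x1, x2, e]; all constraints in <= form.
c = [0, 0, 1]
A_ub = [
    [ 1,  0, -1],   #  x1 - e <= 0   (e >= x1)
    [ 0,  1, -1],   #  x2 - e <= 0   (e >= x2)
    [-1, -1,  0],   # -x1 - x2 <= -4 (x1 + x2 >= 4)
]
b_ub = [0, 0, -4]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 3)
assert abs(res.fun - 2.0) < 1e-6   # optimum at x1 = x2 = e = 2
```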

e.g., Minimize the minimum of a finite collection min min(x_i)

--- into ---

min e,

s.t.

e <= x_i, for all i

NOTE - This has the degenerate solution e --> negative infinity, i.e. the problem is unbounded. Some software will ignore this degeneracy. Microsoft Excel's simplex solver appears (in at least some cases) to return the correct answer for problems of the form min_x min_i(f_i(x)), where f_i(x) is linear.

e.g., Converting an equality constraint (not really converting a nonlinear problem to an LP problem, but it should still be mentioned IMHO)

min x_i,

s.t.

x_i = g_i

--- into ---

min x_i,

s.t.

x_i <= g_i

x_i >= g_i
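A minimal numerical check of the equality-to-inequalities trick, again with `linprog`; the value 3 is a made-up example.

```python
from scipy.optimize import linprog

# Encode the equality x = 3 as the pair x <= 3 and -x <= -3,
# using only <=-type constraints.
c = [1]                      # min x
A_ub = [[1], [-1]]
b_ub = [3, -3]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)])
assert abs(res.fun - 3.0) < 1e-6   # x is pinned to 3
```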

## Request for better image

Previously the intro had only an image of the two-variable case, both the feasible region and the optimal isoquant. I've just added an image of a polyhedron for the three-variable feasible region -- it's the best one I can find for this purpose on Wikipedia, but it does not show a planar surface for a given value of the objective function.

Can someone good at creating images put in an image of a convex polyhedron with a plane adjacent to one of the vertices? Thanks. Duoduoduo (talk) 14:48, 7 August 2013 (UTC)

## Weak duality

There seems to be a contradiction between what is written in this article and the one on weak duality.

Present article: "The weak duality theorem states that the objective function value of the dual at any feasible solution is always greater than or equal to the objective function value of the primal at any feasible solution."

Weak duality article: "That means the solution to the primal (minimization) problem is always greater than or equal to the solution to an associated dual problem."

Am i missing something ? — Preceding unsigned comment added by Vldgrig (talkcontribs) 05:33, 6 January 2014 (UTC)

I think the difference is that this article looks at maximization problems and the weak duality article looks at minimization problems. -- Jitse Niesen (talk) 13:23, 6 January 2014 (UTC)

Yes; weak duality states that the objective function value of any feasible solution to the primal problem is a bound on the objective function value of any feasible solution of the dual, and vice versa. If the primal is a minimization problem, its dual will be a maximization one, and if the primal is a maximization problem, then its dual will be a minimization one. Strong duality states that if both the primal and dual problems have a feasible solution, then the optimal solution values will be equal. Hope this table helps:

|  | Primal is optimal | Primal is unbounded | Primal is infeasible |
| --- | --- | --- | --- |
| Dual is optimal | Yes, strong duality | No, due to weak duality | No, due to strong duality |
| Dual is unbounded | No, due to weak duality | No, due to weak duality | Yes |
| Dual is infeasible | No, due to strong duality | Yes | Yes, e.g. infeasible problems that are self-dual |

--Tyrneaeplith (talk) 17:23, 14 July 2014 (UTC)
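The relationship described above can be checked numerically. Below is a sketch using SciPy's `linprog`; the specific data A, b, c is a made-up example (a maximization primal with its minimization dual), not taken from the article.

```python
import numpy as np
from scipy.optimize import linprog

# Primal (maximization): max c^T x  s.t.  A x <= b, x >= 0.
# linprog minimizes, so pass -c and negate the optimal value.
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([8.0, 9.0])
c = np.array([2.0, 3.0])

primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
primal_opt = -primal.fun

# Dual (minimization): min b^T y  s.t.  A^T y >= c, y >= 0,
# i.e. -A^T y <= -c in linprog's <= form.
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)
dual_opt = dual.fun

# Weak duality: any feasible primal value <= any feasible dual value.
x_feas = np.array([1.0, 1.0])          # feasible: A @ x_feas = [3, 4] <= b
assert c @ x_feas <= dual_opt + 1e-9

# Strong duality: the two optimal values coincide (both 13 here).
assert abs(primal_opt - dual_opt) < 1e-6
```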

## T-Forward algorithm

The following section was recently added to the article:

A new approach for linear and nonlinear programming was proposed by Gang Liu. The author claims the complexity is ${\displaystyle O(NL\ln L)}$, which is faster than interior-point methods by a factor of ${\displaystyle O(N^{2.5}L)}$. The original paper for the T-Forward method:

T-Forward Method: A Closed-Form Solution and Strongly Polynomial Time Approach for Convex Nonlinear Programming

The T-Forward Method first constructs the feasible boundary or the Basic Feasible Solution (BFS) in closed-form format. Given a vector z, the line along the direction pointed to by z will have an intersection point with each of the planes defined by the constraints. The first point at which the line along z intersects a constraint plane gives the feasible boundary point. That point can be expressed in closed-form format.

The T-Forward method first finds a boundary point x, then finds the dual point y. Then, starting from a point on the line connecting x and y, it moves forward in the direction that increases the objective value to find another boundary point, which is called the T-Forward point. By repeating the T-Forward move several times, roughly ${\displaystyle O(\ln L)}$ times, the T-Forward method will give the exact optimal solution for LP problems.

Linear programming is a mature field and the article discusses two long-established methods (actually, families of methods): simplex and interior point. In contrast, this T-Forward method is a new method which has not yet been evaluated by the mathematical community. I think that the method and the quite extraordinary claims made for it need more exposure before it can be included in the article. -- Jitse Niesen (talk) 13:16, 4 April 2014 (UTC)

## Complementary slackness: incorrect content

″It is possible to obtain an optimal solution to the dual when only an optimal solution to the primal is known using the complementary slackness theorem.″

This only holds if the problem is non-degenerate. An arbitrary primal solution does not define an optimal dual solution or vice versa; it only tells us the optimal dual objective value (strong duality). As an example, consider a non-trivial polyhedron where finding a feasible primal point is not immediate, and assume that the primal objective is zero: an optimal dual solution will be trivial (all zeros) and offer no information whatsoever for solving the primal problem. If the problem is non-degenerate, then the optimal primal solution will define a unique optimal primal basis, which in turn is an optimal dual basis as well (using the non-degeneracy assumption), which defines the optimal (unique in this case) dual solution.

What about: "Optimal primal and dual feasible solutions can be characterized using the complementary slackness theorem." --Tyrneaeplith (talk) 09:42, 15 July 2014 (UTC)
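For readers following along, the theorem itself is easy to check numerically on a non-degenerate instance. Below is a sketch using SciPy's `linprog`; the data A, b, c is a made-up example in which both the primal and dual optima are unique.

```python
import numpy as np
from scipy.optimize import linprog

# max 3x1 + 2x2  s.t.  x1 + x2 <= 4,  x1 + 3x2 <= 6,  x >= 0
A = np.array([[1.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 2.0])

primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)
x, y = primal.x, dual.x   # unique optima here: x = (4, 0), y = (3, 0)

# Complementary slackness: y_i * (b - A x)_i = 0 and x_j * (A^T y - c)_j = 0.
primal_slack = b - A @ x    # (0, 2): second constraint is loose, so y2 = 0
dual_slack = A.T @ y - c    # (0, 1): second dual constraint loose, so x2 = 0
assert np.allclose(y * primal_slack, 0, atol=1e-6)
assert np.allclose(x * dual_slack, 0, atol=1e-6)
```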

## Conic sampling algorithm of Serang

This section is notably long for a method that is not widely known, and it does not read as an independent summary: bits are copy-pasted from the cited paper.

"If the vertices with superior objective value are sampled in a roughly uniform manner, then the expected runtime is logarithmic in the number of vertices (and thus polynomial)."

This is a very strong claim indeed. The original paper quoted does not offer anything supporting this apart from stating it in its discussion section and offering some computational experiments that are described as preliminary. The statement is a conjecture at best until proven, and should be removed. --Tyrneaeplith (talk) 10:02, 15 July 2014 (UTC)

## Generic blanket statement

Hi,

I don't comment much - but this statement seems kind of general, not really relevant and is not unique to the simplex algorithm: "The computing power required to test all the permutations to select the best assignment is vast; the number of possible configurations exceeds the number of particles in the observable universe. However, it takes only a moment to find the optimum solution by posing the problem as a linear program and applying the simplex algorithm. The theory behind linear programming drastically reduces the number of possible solutions that must be checked."

This is a blanket statement which can be said about any algorithm. Here is the same statement about sorting:

"The computing power required to test all the permutations to find the sorted assignment is vast; the number of possible configurations exceeds the number of particles in the observable universe. However, it takes only a moment to find the sorted solution by applying the bubble sort algorithm. The theory behind bubble sort drastically reduces the number of possible solutions that must be checked."

As already stated, the same could be said about almost any polynomial time algorithm. Every (decent) algorithm should be much better than "testing all permutations". Stating "the number of possible configurations exceeds the number of particles in the observable universe" is an amusing fact, but it only sounds impressive if you're not familiar with algorithms and computation.

Basically, I think that the above paragraph should be removed; however, I'm not really sure whether it's OK to just remove it or whether I should ask you guys first. Usually when I change things I just remove/add single words etc.

Thanks.

## Software clarification: lp_solve

I was going to add a clarification to the table that linear programming in R is done using the lpSolve toolbox. However, with a bit of reading around, this turns out to be just a wrapper for lp_solve 5.5. But this software isn't currently listed. The website suggests lp_solve is usable from a wide variety of other software (MATLAB, Python, Octave, etc.). Should this be added? It seems that if R (which wraps lp_solve) is notable, then lp_solve should have an entry. Elsewhere in Wikipedia lp_solve is listed (e.g. List_of_optimization_software), but it doesn't have an entry here. — Preceding unsigned comment added by 82.13.45.30 (talk) 13:56, 29 April 2017 (UTC)

## Integer coefficients

Zfeinst, regarding your recent undo of my little contribution: after I asked for a justification, you provided the Wikipedia link integer programming. But the first section of this article says explicitly "where ${\displaystyle \mathbf {c} ,\mathbf {b} }$ are vectors and ${\displaystyle A}$ is a matrix, where all entries are integers." It is not obvious how the other link you've provided defines an ILP, but all the examples given there use integer coefficients. One thing is certain: if the coefficients are rational numbers, then the problem can be reduced to a problem with integer coefficients by multiplying the inequalities by suitable constants. So, unless I'm wrong, even if the coefficients are not rational, by approximating them to a sufficient precision with rational numbers, I think it is possible (but not trivial) to reduce the problem to a problem with integer coefficients too. maimonid (talk) 16:58, 4 January 2018 (UTC)

I agree that an ILP with rational parameters can be reformulated (by scaling the variables) to have integer parameters, and likely there is a transformation if irrational parameters are desired. However, the statement would then be that an ILP is one which is equivalent to a linear program with integer parameters and integer constraints (which is how the canonical and standard forms on the integer programming page read, i.e., that non-standard-form problems can be converted into this form). The link [4] defines an ILP as a linear program in which at least some variables are integers. Similarly, "Linear Programming" by Vanderbei defines integer programming as "linear programs except that some or all of the variables are constrained to be integers." The examples provided all use integer parameters simply because they are simpler to write. What are some citations that explicitly state that all ILPs must have integer parameters? I do not know of any, but if they exist then I suggest that we mention that "sometimes/often ILPs additionally require the parameters to be integer valued" and provide citations. Zfeinst (talk) 22:51, 4 January 2018 (UTC)

## BLAS, LA/LIN-PACK, etc

As they do have current articles, curious why no mention of these ? 98.4.124.117 (talk) 02:43, 3 August 2018 (UTC)

They do linear algebra, but do they do linear programming, which is a different thing? Mgnbar (talk) 03:09, 3 August 2018 (UTC)
I see, that makes sense: they can be used for linear programming, but they are not purpose-built for it, as applications of linear algebra are much wider than just linear programming, although the latter is based on the former. So that's a cogent reason. 98.4.124.117 (talk) 06:07, 3 August 2018 (UTC)
You put it well. Cheers. Mgnbar (talk) 11:13, 3 August 2018 (UTC)