Talk:Quasiconvex function
This article is rated Start-class on Wikipedia's content assessment scale.
Morrey's version?
There's another notion of quasiconvexity that is very important in the calculus of variations, notable enough to be included in Wikipedia, in my opinion. It has a crucial role in the so-called direct method to find solutions of variational problems, in which one proves that the action functional is lower-semicontinuous and that the space of minimization candidates is compact, which directly implies that the minimum is attained. Quasiconvexity of the Lagrangian density, Morrey proved, is equivalent to the lower-semicontinuity of the action. It just seems natural to include this somewhere. A good reference is Bernard Dacorogna's book, Direct Methods in the Calculus of Variations, part II. Zatrp (talk) 14:58, 19 August 2022 (UTC)
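For readers landing here, a sketch of the definition being referred to, in the form given in Dacorogna's book (the symbols W, A, Ω and φ below are the usual ones in that reference, not taken from this talk page):

```latex
% Morrey quasiconvexity (one common formulation):
% W : \mathbb{R}^{m \times n} \to \mathbb{R} is quasiconvex if, for every
% bounded open set \Omega \subset \mathbb{R}^n,
\int_{\Omega} W\bigl(A + D\varphi(x)\bigr)\,dx \;\ge\; |\Omega|\, W(A)
\quad \text{for all } A \in \mathbb{R}^{m \times n},\ \varphi \in W_0^{1,\infty}(\Omega;\mathbb{R}^m).
```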
Redirects
I've just made several pages redirect here; this is the only actual article that deals with weak or strict quasiconcavity or quasiconvexity. Relatedly, I deleted the "see also" link to the nonexistent article on strict quasiconvexity, including a definition here. Tobacman 23:04, 9 October 2006 (UTC)
Properties
It says that a function that is both quasiconvex and quasiconcave is quasi-linear. This isn't true. Any monotonically increasing univariate function is both quasiconvex and quasiconcave but not necessarily quasi-linear, for example . This also extends to multivariate functions. OEdhan 10:36, 12 April 2019 (UTC)
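As a sanity check on the "both quasiconvex and quasiconcave" part of the claim, here is a minimal numerical sketch of my own (not from the article): it tests the defining inequalities f(λx + (1−λ)y) ≤ max(f(x), f(y)) and f(λx + (1−λ)y) ≥ min(f(x), f(y)) on random points for a monotonically increasing function. The choice f(x) = x³ is only an illustrative stand-in, not the example missing from the comment above.

```python
import random

def f(x):
    # Any monotonically increasing univariate function; x**3 is just an
    # illustrative stand-in chosen here, not the example from the comment above.
    return x ** 3

def check_quasi(f, trials=100_000, lo=-10.0, hi=10.0):
    """Numerically test the quasiconvexity and quasiconcavity inequalities
    on random triples (x, y, lam). A passing test is evidence, not a proof."""
    for _ in range(trials):
        x, y = random.uniform(lo, hi), random.uniform(lo, hi)
        lam = random.random()
        z = lam * x + (1 - lam) * y
        assert f(z) <= max(f(x), f(y)) + 1e-12   # quasiconvexity inequality
        assert f(z) >= min(f(x), f(y)) - 1e-12   # quasiconcavity inequality
    return True

print(check_quasi(f))  # True: consistent with f being quasiconvex and quasiconcave
```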
Optimization
Would it be accurate to say that quasiconvex optimization takes advantage of the property that "going downhill" always moves you closer to the minimum, and that this is why convex and quasiconvex optimization is easier than optimizing over other functions? (I had a long discussion on Talk:Convex optimization; I think this page is the better place for it.) —Ben FrantzDale 17:31, 12 February 2007 (UTC)
- Going downhill always moves you to a minimum, regardless of whether the function is convex or not. Did you mean "moves you closer to a global minimum"? Oleg Alexandrov (talk) 03:59, 13 February 2007 (UTC)
- In 1D, perhaps, but in general there are non-convex functions with a single minimum for which moving downhill will move you away from the minimum, even if subsequent downhill steps would eventually get you there. For example, consider a hill surrounded by a moat where the moat is deepest on the East side. If you are on the West side of the hill, going downhill moves you West, away from the deep part of the moat. Continuing downhill you would find the shallow side of the moat, then wander your way around to the deep end. —Ben FrantzDale 05:33, 13 February 2007 (UTC)
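To make the moat picture concrete, here is a toy construction of my own (not anything from the article): f(x, y) = ((x² + y²) − 1)² − 0.3x has a hill at the origin, a ring-shaped moat near radius 1 that is deepest on the East side, and a single local (and global) minimum near (1.04, 0); yet the steepest-descent direction at a point on the West slope of the hill points further West.

```python
import numpy as np

def f(p):
    # Toy "hill with a moat" surface: a ring of low values near radius 1,
    # tilted by the -0.3*x term so the moat is deepest on the East side.
    x, y = p
    return (x**2 + y**2 - 1) ** 2 - 0.3 * x

def grad(p):
    # Analytic gradient of f.
    x, y = p
    r2 = x**2 + y**2
    return np.array([4 * x * (r2 - 1) - 0.3, 4 * y * (r2 - 1)])

start = np.array([-0.3, 0.0])   # a point on the West slope of the central hill
step = -grad(start)             # steepest-descent direction at that point
print(step)                     # x-component is negative: the first downhill step
                                # heads West, away from the minimum near (1.04, 0)
```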
- OK, I don't know how to answer your question. Note, by the way, that the "going downhill" thing has a name: it is called gradient descent. Oleg Alexandrov (talk) 16:01, 13 February 2007 (UTC)
- The thing I am trying to understand is why "convex optimization" is so important. Consider a surface, f(x,y). I have a feel for why you would want every iso-contour to be convex: otherwise moving from point a to point b, where f(b) < f(a), you might have to go uphill. What I don't understand is why it is important that the function be convex. That is, why is it important that is convex whereas is not convex? —Ben FrantzDale (talk) 23:52, 20 May 2008 (UTC)
- With a convex function, you can tell that the approximate solution you've found has a value that's close to optimal. With a quasiconvex function, you can tell that the position of the approximate solution is close to the optimal position, but you can't tell whether the function has a very thin spike even closer to the optimal position that you've missed, causing the value of your approximate solution to be very far from the optimal value. —David Eppstein (talk) 00:19, 21 May 2008 (UTC)
- Thank you! Now that you say it, it seems completely obvious. If I understand correctly (in which case this will sound like I'm saying nothing new) the big deal about full convexity is that the gradient gives us a bound on the minimum value. So if f(x,y) is a surface in R3, then we could draw a triangle in x and y around the minimum and we could draw a tetrahedron (using the slopes at the corners of the triangle) that enclosed the true minimum. With a quasiconvex function, we can still draw the triangle around the minimum, but we may not know how far the rabbit hole goes. Is that right? —Ben FrantzDale (talk) 02:32, 21 May 2008 (UTC)
- I think you've got it. —David Eppstein (talk) 03:01, 21 May 2008 (UTC)
- "The big deal about full convexity is that the gradient gives us a bound on the minimum value". That's not true when you have constraints, and even without constraints, I would't phrase it like that. Rather, if you have no constraints, then if the gradient is zero, you know you are at the minimum value. Lavaka (talk) 16:08, 15 July 2013 (UTC)
Sion's minimax theorem
Hi guys. I've just kicked off Sion's minimax theorem. Anyone got any improvements? Robinh (talk) 20:10, 20 May 2008 (UTC)
come on.. go sciencedirect — Preceding unsigned comment added by 194.27.183.251 (talk) 08:37, 6 June 2012 (UTC)
relationship
The article should focus on the relationship between convex functions and quasiconvex functions. A concave function can be quasiconvex; for example, log(x) is concave, and it is quasiconvex. Jackzhp (talk) 19:32, 11 August 2010 (UTC)
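A one-line way to see the log example (this is just my spelling-out of the claim above, with α an arbitrary level): every sublevel set and every superlevel set of ln on (0, ∞) is an interval, hence convex.

```latex
% Sublevel and superlevel sets of \ln on (0, \infty), for any level \alpha:
\{\, x > 0 : \ln x \le \alpha \,\} = (0,\, e^{\alpha}], \qquad
\{\, x > 0 : \ln x \ge \alpha \,\} = [e^{\alpha},\, \infty).
% Both are intervals, hence convex, so \ln is quasiconvex and quasiconcave,
% even though it is concave (and not convex).
```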
sums of quasiconvex functions
[edit]This article says that "if the sum of a finite set of (nonconstant) quasiconvex functions is quasiconvex, then all but either zero or one of the functions must be convex; this result holds for separable functions, in particular.". The part after the semicolon could be true, depending on what your precise definition of separability is. The first part of this sentence is definitely not true: consider for example the functions f(x) = g(x) = -x^2 when x is nonnegative and 0 otherwise. These functions are both quasiconvex but not convex, and the sum of these functions is still quasiconvex.
The article gives three sources to back up the false claim that is made. First, a paper by Debreu and Koopmans is cited, but I checked this and it turns out that there the claim is proved only for a specific type of separable functions. Second, a paper by Arrow and Enthoven is cited, and this article claims that in that paper the "theorem was proved for the special case of twice continuously differentiable functions", but I scanned through the paper and I do not see where this is proved, so I don't believe it. Third, a paper by Crouzeix is cited that I cannot access, so I don't know what is proved there exactly.
I hope that someone can explain in case I am mistaken. Otherwise, I will proceed with removing this claim from the article.--192.16.184.223 (talk) 11:42, 3 October 2012 (UTC)
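Here is the numerical sketch of the counterexample mentioned above: it checks the quasiconvexity inequality on random triples for f and for the sum f + g (with g = f), and exhibits one explicit triple where convexity of f fails. This is evidence, not a proof.

```python
import random

def f(x):
    # The proposed counterexample: f(x) = -x**2 for x >= 0, and 0 otherwise.
    return -x**2 if x >= 0 else 0.0

def quasiconvex_ok(g, trials=100_000, lo=-5.0, hi=5.0):
    # Test g(lam*x + (1-lam)*y) <= max(g(x), g(y)) on random triples.
    for _ in range(trials):
        x, y = random.uniform(lo, hi), random.uniform(lo, hi)
        lam = random.random()
        if g(lam * x + (1 - lam) * y) > max(g(x), g(y)) + 1e-12:
            return False
    return True

h = lambda x: f(x) + f(x)   # the sum f + g, here with g = f

print(quasiconvex_ok(f), quasiconvex_ok(h))   # True True: both look quasiconvex
# Convexity of f fails: the value at the midpoint of (0, 2) lies above the chord.
print(f(1.0) <= 0.5 * f(0.0) + 0.5 * f(2.0))  # False, so f is not convex
```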
Strict quasiconvexity/quasiconcavity
I am not a math expert, but I believe that the statement
"A (strictly) quasiconvex function has (strictly) convex lower contour sets, while a (strictly) quasiconcave function has (strictly) convex upper contour sets."
does not hold in the case of strict quasiconvexity or strict quasiconcavity.
In the following I make an example with quasiconcavity.
The function
is obviously not strictly quasiconcave. But every upper contour set
is strictly convex for all x.
141.2.232.141 (talk) 17:20, 24 July 2013 (UTC) Raphael
- That is not a contradiction to the sentence in question. A contradiction would be a strict function whose level sets are not strict. In general, for a sentence that X implies Y, the only possible contradiction is something where X holds and Y does not hold. Here, X is the statement that the function is strict, and Y is the statement that its level sets are strict. —David Eppstein (talk) 18:19, 19 August 2022 (UTC)
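Spelling out the logic in symbols (this is only a restatement of the reply above, writing the upper contour sets as U_α):

```latex
% The sentence in the article asserts the implication
f \text{ strictly quasiconcave} \;\Longrightarrow\;
U_\alpha(f) := \{\, x : f(x) \ge \alpha \,\} \text{ is strictly convex for every } \alpha .
% A counterexample must satisfy the hypothesis and violate the conclusion;
% a non-strictly-quasiconcave f whose upper contour sets happen to be strictly
% convex only shows that the converse implication fails.
```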