Pseudoconvex function

In convex analysis and the calculus of variations, both branches of mathematics, a pseudoconvex function is a function that behaves like a convex function with respect to finding its local minima, but need not actually be convex. Informally, a differentiable function is pseudoconvex if it is increasing in any direction where it has a positive directional derivative. The property must hold throughout the whole domain of the function, and not only for nearby points.

Formal definition

Consider a differentiable function $f : X \subseteq \mathbb{R}^n \to \mathbb{R}$, defined on a (nonempty) convex open set $X$ of the finite-dimensional Euclidean space $\mathbb{R}^n$. This function is said to be pseudoconvex if the following property holds:[1]

for all $x, y \in X$: if $\nabla f(x) \cdot (y - x) \geq 0$, then $f(y) \geq f(x)$.

Equivalently:

for all $x, y \in X$: if $f(y) < f(x)$, then $\nabla f(x) \cdot (y - x) < 0$.

Here $\nabla f$ is the gradient of $f$, defined by

$$\nabla f = \left( \frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n} \right).$$

Note that the definition may also be stated in terms of the directional derivative of $f$ in the direction given by the vector $v = y - x$. This is because, as $f$ is differentiable, this directional derivative is given by

$$\nabla_v f(x) = \frac{\partial f(x)}{\partial v} = \lim_{h \to 0} \frac{f(x + h v) - f(x)}{h} = \nabla f(x) \cdot v.$$
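The defining implication lends itself to a direct numerical illustration. The following sketch (illustrative only, not part of the article; the helper names, test function, and sampled domain are assumptions) probes the condition $\nabla f(x) \cdot (y - x) \geq 0 \Rightarrow f(y) \geq f(x)$ on randomly sampled pairs of points, using a finite-difference gradient.

```python
# Illustrative sketch only: the helper names, the test function and the sampled
# domain are assumptions, not part of the article.
import numpy as np

def grad(f, x, h=1e-6):
    """Central-difference approximation of the gradient of f at x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def violates_pseudoconvexity(f, x, y, tol=1e-8):
    """True if the pair (x, y) witnesses a failure of the defining implication."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return grad(f, x) @ (y - x) >= 0 and f(y) < f(x) - tol

# Sampled check for f(x) = x + x^3 (pseudoconvex; see the Properties section below).
f = lambda x: float(x[0] + x[0] ** 3)
rng = np.random.default_rng(0)
pairs = rng.uniform(-3.0, 3.0, size=(1000, 2))
print(any(violates_pseudoconvexity(f, [a], [b]) for a, b in pairs))  # False
```

Such a sampled test can only falsify pseudoconvexity, never prove it; it is meant purely as a reading aid for the definition.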

Properties

Relation to other types of "convexity"

Every convex function is pseudoconvex, but the converse is not true. For example, the function $f(x) = x + x^3$ is pseudoconvex but not convex. Similarly, any pseudoconvex function is quasiconvex; but the converse is not true, since the function $f(x) = x^3$ is quasiconvex but not pseudoconvex. This can be summarized schematically as:

convex $\Rightarrow$ pseudoconvex $\Rightarrow$ quasiconvex
Figure: the functions $x^3$ (quasiconvex but not pseudoconvex) and $x^3 + x$ (pseudoconvex and thus quasiconvex); neither is convex.

To see that $f(x) = x^3$ is not pseudoconvex, consider its derivative at $x = 0$: $f'(0) = 0$. Then, if $x^3$ were pseudoconvex, we should have:

for all $y \in \mathbb{R}$: $f(y) \geq f(0) = 0$.

In particular it should be true for $y = -1$. But it is not, as $f(-1) = (-1)^3 = -1 < 0$.
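The same computation can be checked mechanically. The snippet below (an illustrative assumption, not part of the article) evaluates the pair $x = 0$, $y = -1$ and shows that the hypothesis of the definition holds while its conclusion fails.

```python
# Illustrative check of the counterexample above; not part of the article.
f = lambda t: t ** 3
df = lambda t: 3 * t ** 2          # f'(x) = 3x^2

x, y = 0.0, -1.0
print(df(x) * (y - x) >= 0)        # True: f'(0)(y - 0) = 0 >= 0, hypothesis holds
print(f(y) >= f(x))                # False: f(-1) = -1 < 0 = f(0), conclusion fails
```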

Sufficient optimality condition

For any differentiable function, Fermat's theorem gives a necessary condition of optimality: if $f$ has a local minimum at $x^*$ in an open domain, then $x^*$ must be a stationary point of $f$ (that is, $\nabla f(x^*) = 0$).

Pseudoconvexity is of great interest in the area of optimization because the converse is also true for any pseudoconvex function. That is:[2] if $x^*$ is a stationary point of a pseudoconvex function $f$, then $f$ has a global minimum at $x^*$. Note also that the result guarantees a global minimum (not only a local one).
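As a concrete illustration, the sketch below (illustrative only, not part of the article; the grid and tolerances are assumptions) takes the pseudoconvex function $x^2/(x^2 + 0.2)$ from the Examples section, verifies that $x^* = 0$ is a stationary point, and checks that it attains the smallest value on a sampled grid.

```python
# Illustrative sketch only: the grid and tolerances are assumptions.
import numpy as np

f = lambda x: x ** 2 / (x ** 2 + 0.2)
df = lambda x: 0.4 * x / (x ** 2 + 0.2) ** 2   # derivative via the quotient rule

x_star = 0.0
print(np.isclose(df(x_star), 0.0))             # True: x* is a stationary point

grid = np.linspace(-10.0, 10.0, 100001)
print(np.all(f(grid) >= f(x_star)))            # True: f(x*) is the smallest sampled value
```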

This last result also holds for a convex function, but it does not hold for a quasiconvex function. Consider for example the quasiconvex function

$$f(x) = x^3.$$

This function is not pseudoconvex, but it is quasiconvex. Also, the point $x = 0$ is a critical point of $f$, as $f'(0) = 0$. However, $f$ does not have a global minimum at $x = 0$ (not even a local minimum).

Figure: example of a quasiconvex function that is not pseudoconvex; it has a critical point at $x = 0$, but this is not a minimum.

Finally, note that a pseudoconvex function may not have any critical point. Take for example the pseudoconvex function $f(x) = x^3 + x$, whose derivative is always positive: $f'(x) = 3x^2 + 1 > 0$.
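A quick symbolic check (an illustrative assumption, not part of the article) confirms that this derivative never vanishes on the reals, so the function indeed has no critical points.

```python
# Illustrative check only: sympy is used here purely for the root search.
import sympy as sp

x = sp.symbols('x', real=True)
f = x ** 3 + x
print(sp.solveset(sp.diff(f, x), x, domain=sp.S.Reals))  # EmptySet: no critical points
```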

Examples

An example of a function that is pseudoconvex, but not convex, is

$$f(x) = \frac{x^2}{x^2 + k}, \qquad k > 0.$$

The figure shows this function for the case where $k = 0.2$. This example may be generalized to two variables as

$$f(x, y) = \frac{x^2 + y^2}{x^2 + y^2 + k}.$$

Figure: pseudoconvex function that is not convex: $x^2 / (x^2 + 0.2)$.
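The claimed properties of this example can be probed numerically. The sketch below (illustrative only; the sample points and tolerances are assumptions, not part of the article) exhibits a failure of the convexity chord inequality and finds no violation of the pseudoconvexity condition on a random sample.

```python
# Illustrative sketch only: sample points and tolerances are assumptions.
import numpy as np

k = 0.2
f = lambda x: x ** 2 / (x ** 2 + k)
df = lambda x: 2 * k * x / (x ** 2 + k) ** 2   # derivative via the quotient rule

# Not convex: the chord inequality fails, e.g. between x = 1 and y = 3.
x, y, lam = 1.0, 3.0, 0.5
print(f(lam * x + (1 - lam) * y) <= lam * f(x) + (1 - lam) * f(y))  # False

# Pseudoconvex on a sample: f'(a)(b - a) >= 0 never occurs together with f(b) < f(a).
rng = np.random.default_rng(1)
pts = rng.uniform(-5.0, 5.0, size=(2000, 2))
print(all(not (df(a) * (b - a) >= 0 and f(b) < f(a) - 1e-12) for a, b in pts))  # True
```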

The previous example may be modified to obtain a function that is not convex, nor pseudoconvex, but is quasiconvex:

$$f(x) = \frac{|x|}{|x| + k}.$$

The figure shows this function for the case where $k = 0.2$. As can be seen, this function is not convex because of the concavity, and it is not pseudoconvex because it is not differentiable at $x = 0$.

Figure: quasiconvex function that is not convex, nor pseudoconvex: $|x| / (|x| + 0.2)$.
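For this modified example, a similar sampled check (illustrative only; the sample points and tolerances are assumptions, not part of the article) shows a failure of the convexity chord inequality, while the one-dimensional quasiconvexity criterion $f(c) \leq \max(f(a), f(b))$ for $a \leq c \leq b$ holds on the sample.

```python
# Illustrative sketch only: sample points and tolerances are assumptions.
import numpy as np

k = 0.2
f = lambda x: np.abs(x) / (np.abs(x) + k)

# Not convex: the chord inequality fails, e.g. between x = 0.5 and y = 2.0.
a, b, lam = 0.5, 2.0, 0.5
print(f(lam * a + (1 - lam) * b) <= lam * f(a) + (1 - lam) * f(b))  # False

# Quasiconvex on a sample: f(c) <= max(f(a), f(b)) whenever a <= c <= b.
rng = np.random.default_rng(2)
triples = np.sort(rng.uniform(-5.0, 5.0, size=(2000, 3)), axis=1)
print(all(f(c) <= max(f(a), f(b)) + 1e-12 for a, c, b in triples))  # True
```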

Generalization to nondifferentiable functions

The notion of pseudoconvexity can be generalized to nondifferentiable functions as follows.[3] Given any function $f : X \to \mathbb{R}$, we can define the upper Dini derivative of $f$ by

$$f^+(x, u) = \limsup_{h \to 0^+} \frac{f(x + h u) - f(x)}{h},$$

where $u$ is any unit vector. The function is said to be pseudoconvex if it is increasing in any direction where the upper Dini derivative is positive. More precisely, this is characterized in terms of the subdifferential $\partial f$ as follows:

For all $x, y \in X$: if $x^* \in \partial f(x)$ is such that $\langle x^*, y - x \rangle \geq 0$, then $f(z) \geq f(x)$ for all $z \in [x, y]$,

where $[x, y]$ denotes the line segment adjoining $x$ and $y$.
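A crude numerical stand-in for the limsup above (an illustrative assumption, not part of the article) is to take the largest difference quotient over a decreasing sequence of step sizes; the sketch below applies it to $f(x) = |x|$ at the kink $x = 0$.

```python
# Illustrative sketch only: the step sizes and test function are assumptions.
import numpy as np

def upper_dini(f, x, u, hs=np.geomspace(1e-8, 1e-2, 25)):
    """Finite stand-in for the limsup defining the upper Dini derivative."""
    x, u = np.asarray(x, dtype=float), np.asarray(u, dtype=float)
    return max((f(x + h * u) - f(x)) / h for h in hs)

f = lambda x: float(np.abs(x[0]))
print(upper_dini(f, [0.0], [1.0]))   # 1.0: |x| increases in the direction u = +1
print(upper_dini(f, [0.0], [-1.0]))  # 1.0: |x| also increases in the direction u = -1
```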

Related notions

A pseudoconcave function is a function whose negative is pseudoconvex. A pseudolinear function is a function that is both pseudoconvex and pseudoconcave.[4] For example, linear–fractional programs have pseudolinear objective functions and linear–inequality constraints. These properties allow fractional-linear problems to be solved by a variant of the simplex algorithm (of George B. Dantzig).[5][6][7]
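As a small one-dimensional illustration of pseudolinearity (the specific fraction, sample range, and tolerance are assumptions, not from the article), the sketch below checks that $f(x) = (2x + 1)/(x + 3)$ passes a sampled test of the pseudoconvexity condition on $x > -3$ for both $f$ and $-f$, i.e. behaves pseudolinearly there.

```python
# Illustrative sketch only: the fraction, sample range and tolerance are assumptions.
import numpy as np

f = lambda x: (2 * x + 1) / (x + 3)
df = lambda x: 5 / (x + 3) ** 2      # derivative via the quotient rule; positive on x > -3

def passes_pseudoconvex_sample(g, dg, pairs, tol=1e-12):
    """No sampled pair may satisfy the hypothesis g'(a)(b - a) >= 0 yet have g(b) < g(a)."""
    return all(not (dg(a) * (b - a) >= 0 and g(b) < g(a) - tol) for a, b in pairs)

rng = np.random.default_rng(3)
pairs = rng.uniform(-2.9, 10.0, size=(2000, 2))
print(passes_pseudoconvex_sample(f, df, pairs))                               # True: pseudoconvex
print(passes_pseudoconvex_sample(lambda x: -f(x), lambda x: -df(x), pairs))   # True: pseudoconcave
```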

Given a vector-valued function $\eta$, there is a more general notion of $\eta$-pseudoconvexity[8][9] and $\eta$-pseudolinearity, wherein classical pseudoconvexity and pseudolinearity pertain to the case when $\eta(x, y) = y - x$.

See also

Notes

  1. ^ Mangasarian 1965
  2. ^ Mangasarian 1965
  3. ^ Floudas & Pardalos 2001
  4. ^ Rapcsak 1991
  5. ^ Chapter five: Craven, B. D. (1988). Fractional programming. Sigma Series in Applied Mathematics. Vol. 4. Berlin: Heldermann Verlag. p. 145. ISBN 3-88538-404-3. MR 0949209.
  6. ^ Kruk, Serge; Wolkowicz, Henry (1999). "Pseudolinear programming". SIAM Review. 41 (4): 795–805. Bibcode:1999SIAMR..41..795K. doi:10.1137/S0036144598335259. JSTOR 2653207. MR 1723002.
  7. ^ Mathis, Frank H.; Mathis, Lenora Jane (1995). "A nonlinear programming algorithm for hospital management". SIAM Review. 37 (2): 230–234. doi:10.1137/1037046. JSTOR 2132826. MR 1343214. S2CID 120626738.
  8. ^ Ansari, Qamrul Hasan; Lalitha, C. S.; Mehta, Monika (2013). Generalized Convexity, Nonsmooth Variational Inequalities, and Nonsmooth Optimization. CRC Press. p. 107. ISBN 9781439868218. Retrieved 15 July 2019.
  9. ^ Mishra, Shashi K.; Giorgi, Giorgio (2008). Invexity and Optimization. Springer Science & Business Media. p. 39. ISBN 9783540785613. Retrieved 15 July 2019.

References