Submodular set function

In mathematics, a submodular set function (also known as a submodular function) is a set function whose value, informally, has the property that the incremental value gained by adding a single element to an input set decreases as the input set grows. Submodular functions have a natural diminishing returns property which makes them suitable for many applications, including approximation algorithms, game theory (as functions modeling user preferences) and electrical networks. Recently, submodular functions have also found immense utility in several real-world problems in machine learning and artificial intelligence, including automatic summarization, multi-document summarization, feature selection, active learning, sensor placement, image collection summarization and many other domains.[1][2][3][4]

Definition

If $\Omega$ is a finite set, a submodular function is a set function $f\colon 2^{\Omega} \to \mathbb{R}$, where $2^{\Omega}$ denotes the power set of $\Omega$, which satisfies one of the following equivalent definitions.[5]

  1. For every $X, Y \subseteq \Omega$ with $X \subseteq Y$ and every $x \in \Omega \setminus Y$ we have that $f(X \cup \{x\}) - f(X) \geq f(Y \cup \{x\}) - f(Y)$.
  2. For every $S, T \subseteq \Omega$ we have that $f(S) + f(T) \geq f(S \cup T) + f(S \cap T)$.
  3. For every $X \subseteq \Omega$ and $x_1, x_2 \in \Omega \setminus X$ with $x_1 \neq x_2$ we have that $f(X \cup \{x_1\}) + f(X \cup \{x_2\}) \geq f(X \cup \{x_1, x_2\}) + f(X)$.

A nonnegative submodular function is also a subadditive function, but a subadditive function need not be submodular. If $\Omega$ is not assumed finite, then the above conditions are not equivalent. In particular a function $f$ defined by $f(S) = 1$ if $S$ is finite and $f(S) = 0$ if $S$ is infinite satisfies the first condition above, but the second condition fails when $S$ and $T$ are infinite sets with finite intersection.
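
For small ground sets, the diminishing-returns condition (condition 1) can be checked directly by brute force. The following Python sketch is illustrative (the helper names subsets and is_submodular are not from any library); it verifies submodularity of $f(S) = \sqrt{|S|}$, a concave function of the cardinality and hence monotone submodular.

    import math
    from itertools import chain, combinations

    def subsets(ground):
        """All subsets of the ground set, as frozensets."""
        s = sorted(ground)
        return [frozenset(c) for c in
                chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

    def is_submodular(f, ground):
        """Brute-force check of the diminishing-returns condition:
        f(X + x) - f(X) >= f(Y + x) - f(Y) for all X subset of Y and x outside Y."""
        ground = frozenset(ground)
        for X in subsets(ground):
            for Y in subsets(ground):
                if not X <= Y:
                    continue
                for x in ground - Y:
                    # Small tolerance guards against floating-point noise.
                    if f(X | {x}) - f(X) < f(Y | {x}) - f(Y) - 1e-12:
                        return False
        return True

    # f(S) = sqrt(|S|): a concave function of cardinality, hence monotone submodular.
    f = lambda S: math.sqrt(len(S))
    print(is_submodular(f, {1, 2, 3, 4}))   # True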

Types of submodular functions

Monotone

A submodular function $f$ is monotone if for every $T \subseteq S$ we have that $f(T) \leq f(S)$. Examples of monotone submodular functions include:

Linear functions
Any function of the form $f(S) = \sum_{i \in S} w_i$ is called a linear function. Additionally, if $w_i \geq 0$ for all $i$, then $f$ is monotone.
Budget-additive functions
Any function of the form $f(S) = \min\left(B, \sum_{i \in S} w_i\right)$ with $w_i \geq 0$ for each $i$ and $B \geq 0$ is called budget additive.[citation needed]
Coverage functions
Let $\Omega = \{E_1, E_2, \ldots, E_n\}$ be a collection of subsets of some ground set $\Omega'$. The function $f(S) = \left|\bigcup_{E_i \in S} E_i\right|$ for $S \subseteq \Omega$ is called a coverage function (see the sketch after this list). This can be generalized by adding non-negative weights to the elements.
Entropy
Let $\Omega = \{X_1, X_2, \ldots, X_n\}$ be a set of random variables. Then for any $S \subseteq \Omega$ we have that $H(S)$ is a submodular function, where $H(S)$ is the entropy of the set of random variables $S$.[6]
Matroid rank functions
Let $\Omega = \{e_1, e_2, \ldots, e_n\}$ be the ground set on which a matroid is defined. Then the rank function of the matroid is a submodular function.[7]
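
As an illustration of the coverage and budget-additive examples above, the following sketch builds both functions on a small, arbitrary collection (the names collection, coverage and budget_additive are illustrative); both functions are monotone submodular.

    # Coverage function: f(S) = size of the union of the chosen subsets.
    collection = {
        'E1': {'a', 'b', 'c'},
        'E2': {'c', 'd'},
        'E3': {'d', 'e', 'f'},
    }

    def coverage(S):
        """Number of ground elements covered by the subsets indexed by S."""
        return len(set().union(*(collection[i] for i in S))) if S else 0

    # Budget-additive function: f(S) = min(B, sum of non-negative weights in S).
    weights = {'E1': 2.0, 'E2': 1.0, 'E3': 3.0}
    B = 4.0

    def budget_additive(S):
        return min(B, sum(weights[i] for i in S))

    print(coverage({'E1', 'E2'}))          # 4   (covers {'a', 'b', 'c', 'd'})
    print(budget_additive({'E1', 'E3'}))   # 4.0 (capped at the budget B)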

Non-monotone

A submodular function which is not monotone is called non-monotone.

Symmetric

A non-monotone submodular function $f$ is called symmetric if for every $S \subseteq \Omega$ we have that $f(S) = f(\Omega \setminus S)$. Examples of symmetric non-monotone submodular functions include:

Graph cuts
Let $\Omega = \{v_1, v_2, \ldots, v_n\}$ be the vertices of a graph. For any set of vertices $S \subseteq \Omega$ let $f(S)$ denote the number of edges $e = (u, v)$ such that $u \in S$ and $v \in \Omega \setminus S$ (see the sketch after this list). This can be generalized by adding non-negative weights to the edges.
Mutual information
Let $\Omega = \{X_1, X_2, \ldots, X_n\}$ be a set of random variables. Then for any $S \subseteq \Omega$ we have that $f(S) = I(S; \Omega \setminus S)$ is a submodular function, where $I(S; \Omega \setminus S)$ is the mutual information.
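
A minimal sketch of the (unweighted) graph cut function on a small illustrative graph; the printed values confirm the symmetry $f(S) = f(\Omega \setminus S)$.

    # Undirected edges of a small illustrative graph on vertices 1..4.
    vertices = {1, 2, 3, 4}
    edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]

    def cut(S):
        """Number of edges with exactly one endpoint in S."""
        return sum((u in S) != (v in S) for u, v in edges)

    S = {1, 2}
    print(cut(S), cut(vertices - S))   # 3 3  -- symmetric: f(S) = f(V \ S)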

Asymmetric

A non-monotone submodular function which is not symmetric is called asymmetric.

Directed cuts
Let $\Omega = \{v_1, v_2, \ldots, v_n\}$ be the vertices of a directed graph. For any set of vertices $S \subseteq \Omega$ let $f(S)$ denote the number of edges $e = (u, v)$ such that $u \in S$ and $v \in \Omega \setminus S$. This can be generalized by adding non-negative weights to the directed edges.

Continuous extensions

Lovász extension

This extension is named after mathematician László Lovász. Consider any vector $\mathbf{x} = \{x_1, x_2, \ldots, x_n\}$ such that each $0 \leq x_i \leq 1$. Then the Lovász extension is defined as $f^L(\mathbf{x}) = \mathbb{E}\bigl[f(\{i \mid x_i \geq \lambda\})\bigr]$, where the expectation is over $\lambda$ chosen from the uniform distribution on the interval $[0, 1]$. The Lovász extension is a convex function.
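
Because the level set $\{i \mid x_i \geq \lambda\}$ changes only at the values $x_i$, the expectation over $\lambda$ can be evaluated exactly by sorting the coordinates. The following Python sketch does this for small examples (the helper name lovasz_extension and the example coverage function are illustrative, not from any library).

    def lovasz_extension(f, x):
        """Exact Lovász extension f^L(x), with x a dict mapping elements to values in [0, 1].

        The level set {i : x_i >= lambda} is piecewise constant in lambda, so the
        expectation over lambda ~ Uniform[0, 1] is a finite weighted sum of f values."""
        order = sorted(x, key=x.get, reverse=True)       # elements by decreasing x_i
        vals = [x[i] for i in order]
        total = (1.0 - vals[0]) * f(frozenset())         # lambda in (x_max, 1]: empty level set
        S = set()
        for k, i in enumerate(order):
            S.add(i)
            lo = vals[k + 1] if k + 1 < len(order) else 0.0
            total += (vals[k] - lo) * f(frozenset(S))    # lambda in (lo, vals[k]]
        return total

    # Example with a small coverage function (illustrative).
    cover = {1: {'a', 'b'}, 2: {'b', 'c'}, 3: {'c'}}
    f = lambda S: len(set().union(*(cover[i] for i in S))) if S else 0
    print(lovasz_extension(f, {1: 0.5, 2: 0.25, 3: 0.75}))   # 1.75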

Multilinear extension

Consider any vector $\mathbf{x} = \{x_1, x_2, \ldots, x_n\}$ such that each $0 \leq x_i \leq 1$. Then the multilinear extension is defined as $F(\mathbf{x}) = \sum_{S \subseteq \Omega} f(S) \prod_{i \in S} x_i \prod_{i \notin S} (1 - x_i)$.
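
Equivalently, the multilinear extension is the expected value of $f(S)$ when each element $i$ is included in $S$ independently with probability $x_i$. For a small ground set the defining sum can be evaluated directly, as in the following illustrative sketch (names are not from any library; in practice the extension is usually estimated by sampling).

    from itertools import chain, combinations

    def multilinear_extension(f, x):
        """Exact multilinear extension F(x) = sum over subsets S of
        f(S) * prod_{i in S} x_i * prod_{i not in S} (1 - x_i).
        x is a dict mapping elements to probabilities; the sum is exponential
        in |x|, so this is intended only for small ground sets."""
        elems = sorted(x)
        total = 0.0
        for S in chain.from_iterable(combinations(elems, r) for r in range(len(elems) + 1)):
            S = frozenset(S)
            p = 1.0
            for i in elems:
                p *= x[i] if i in S else (1.0 - x[i])
            total += p * f(S)
        return total

    # Example: the cut function of the single edge {1, 2}, i.e. f(S) = 1 iff exactly one endpoint is in S.
    f = lambda S: int((1 in S) != (2 in S))
    print(multilinear_extension(f, {1: 0.5, 2: 0.5}))   # 0.5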

Convex closure

Consider any vector $\mathbf{x} = \{x_1, x_2, \ldots, x_n\}$ such that each $0 \leq x_i \leq 1$. Then the convex closure is defined as $f^-(\mathbf{x}) = \min\left(\sum_S \alpha_S f(S) : \sum_S \alpha_S 1_S = \mathbf{x}, \sum_S \alpha_S = 1, \alpha_S \geq 0\right)$. It can be shown that $f^L(\mathbf{x}) = f^-(\mathbf{x})$.

Concave closure

Consider any vector $\mathbf{x} = \{x_1, x_2, \ldots, x_n\}$ such that each $0 \leq x_i \leq 1$. Then the concave closure is defined as $f^+(\mathbf{x}) = \max\left(\sum_S \alpha_S f(S) : \sum_S \alpha_S 1_S = \mathbf{x}, \sum_S \alpha_S = 1, \alpha_S \geq 0\right)$.
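
Both closures are linear programs with one variable $\alpha_S$ per subset $S$, so for small ground sets they can be computed directly. A minimal sketch using scipy.optimize.linprog (the function name closures is illustrative; the LP has $2^n$ variables and is intended only for tiny examples):

    import numpy as np
    from itertools import chain, combinations
    from scipy.optimize import linprog

    def closures(f, ground, x):
        """Convex and concave closures of f at x (a dict: element -> value in [0, 1]),
        solved as linear programs over a distribution alpha on all subsets."""
        elems = sorted(ground)
        subs = [frozenset(c) for c in
                chain.from_iterable(combinations(elems, r) for r in range(len(elems) + 1))]
        # Equality constraints: sum_S alpha_S * 1_S = x and sum_S alpha_S = 1;
        # alpha_S >= 0 is linprog's default variable bound.
        A_eq = np.array([[1.0 if e in S else 0.0 for S in subs] for e in elems]
                        + [[1.0] * len(subs)])
        b_eq = np.array([x[e] for e in elems] + [1.0])
        vals = np.array([f(S) for S in subs], dtype=float)
        convex = linprog(vals, A_eq=A_eq, b_eq=b_eq).fun      # f^-(x): minimize
        concave = -linprog(-vals, A_eq=A_eq, b_eq=b_eq).fun   # f^+(x): maximize via negation
        return convex, concave

    # Same illustrative coverage function and point as in the Lovász extension sketch:
    cover = {1: {'a', 'b'}, 2: {'b', 'c'}, 3: {'c'}}
    f = lambda S: len(set().union(*(cover[i] for i in S))) if S else 0
    print(closures(f, {1, 2, 3}, {1: 0.5, 2: 0.25, 3: 0.75}))
    # The first value is about 1.75, matching the Lovász extension, since f is submodular.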

Properties

  1. The class of submodular functions is closed under non-negative linear combinations. Consider any submodular functions $f_1, f_2, \ldots, f_k$ and non-negative numbers $\alpha_1, \alpha_2, \ldots, \alpha_k$. Then the function $g$ defined by $g(S) = \sum_{i=1}^{k} \alpha_i f_i(S)$ is submodular. Furthermore, for any submodular function $f$, the function defined by $g(S) = f(\Omega \setminus S)$ is submodular. The function $g(S) = \min(f(S), c)$, where $c$ is a real number, is submodular whenever $f$ is monotone submodular.
  2. If $f$ is a submodular function then $g$ defined as $g(S) = \phi(f(S))$, where $\phi$ is a concave function, is also a submodular function.
  3. Consider a random process where a set $T$ is chosen with each element in $\Omega$ being included in $T$ independently with probability $p$. Then the following inequality is true: $\mathbb{E}[f(T)] \geq p f(\Omega) + (1 - p) f(\varnothing)$, where $\varnothing$ is the empty set. More generally consider the following random process where a set $S$ is constructed as follows. For each $1 \leq i \leq l$, with $A_i \subseteq \Omega$, construct $S_i$ by including each element of $A_i$ independently into $S_i$ with probability $p_i$. Furthermore let $S = \cup_{i=1}^{l} S_i$. Then the following inequality is true: $\mathbb{E}[f(S)] \geq \sum_{R \subseteq [l]} \prod_{i \in R} p_i \prod_{i \notin R} (1 - p_i) f\left(\cup_{i \in R} A_i\right)$.[citation needed]

Optimization problems

Submodular functions have properties which are very similar to convex and concave functions. For this reason, an optimization problem which concerns optimizing a convex or concave function can also be described as the problem of maximizing or minimizing a submodular function subject to some constraints.

Submodular Minimization

The simplest minimization problem is to find a set $S \subseteq \Omega$ which minimizes a submodular function subject to no constraints. This problem is computable in (strongly)[8][9] polynomial time.[10][11] Computing the minimum cut in a graph is a special case of this general minimization problem. However, even simple constraints like cardinality lower bound constraints make this problem NP-hard, with polynomial lower bounds on the approximation factor.[12][13]

Submodular Maximization

Unlike minimization, maximization of submodular functions is usually NP-hard. Many problems, such as max cut and the maximum coverage problem, can be cast as special cases of this general maximization problem under suitable constraints. Typically, the approximation algorithms for these problems are based on either greedy algorithms or local search algorithms. The problem of maximizing a symmetric non-monotone submodular function subject to no constraints admits a 1/2 approximation algorithm.[14] Computing the maximum cut of a graph is a special case of this problem. The more general problem of maximizing an arbitrary non-monotone submodular function subject to no constraints also admits a 1/2 approximation algorithm.[15] The problem of maximizing a monotone submodular function subject to a cardinality constraint admits a $1 - 1/e$ approximation algorithm.[16] The maximum coverage problem is a special case of this problem. The more general problem of maximizing a monotone submodular function subject to a matroid constraint also admits a $1 - 1/e$ approximation algorithm.[17][18] Many of these algorithms can be unified within a semi-differential based framework of algorithms.[13]
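
For the cardinality-constrained monotone case, the algorithm of Nemhauser, Wolsey and Fisher [16] is the simple greedy rule: repeatedly add the element with the largest marginal gain. A minimal Python sketch (the function name greedy_max and the example coverage instance are illustrative):

    def greedy_max(f, ground, k):
        """Greedy maximization of a monotone submodular f under the constraint |S| <= k:
        repeatedly add the element with the largest marginal gain.
        Achieves a (1 - 1/e) approximation for monotone submodular f."""
        S = set()
        for _ in range(min(k, len(ground))):
            best = max(ground - S, key=lambda e: f(S | {e}) - f(S))
            S.add(best)
        return S

    # Example: maximum coverage with k = 2 (a special case of this problem).
    collection = {'E1': {'a', 'b', 'c'}, 'E2': {'c', 'd'}, 'E3': {'d', 'e', 'f'}}
    coverage = lambda S: len(set().union(*(collection[i] for i in S))) if S else 0
    print(greedy_max(coverage, set(collection), 2))   # e.g. {'E1', 'E3'}, covering 6 elements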

Apart from submodular minimization and maximization, another natural problem is the optimization of the difference of two submodular functions.[19][20] Unfortunately, this problem is not only NP-hard, but also inapproximable.[20] A related optimization problem is to minimize or maximize a submodular function subject to a submodular level set constraint (also called submodular optimization subject to submodular cover or submodular knapsack constraints). This problem admits bounded approximation guarantees.[21] Another optimization problem involves partitioning data based on a submodular function, so as to maximize the average welfare. This problem is called the submodular welfare problem.[22]

Applications

Submodular functions naturally occur in several real-world applications, in economics, game theory, machine learning and computer vision. Owing to the diminishing returns property, submodular functions naturally model costs of items, since there is often a larger discount with an increase in the number of items one buys. Submodular functions model notions of complexity, similarity and cooperation when they appear in minimization problems. In maximization problems, on the other hand, they model notions of diversity, information and coverage. For more information on applications of submodularity, particularly in machine learning, see the references.[4][23][24]

Citations

  1. ^ H. Lin and J. Bilmes, A Class of Submodular Functions for Document Summarization, ACL-2011.
  2. ^ S. Tschiatschek, R. Iyer, H. Wei and J. Bilmes, Learning Mixtures of Submodular Functions for Image Collection Summarization, NIPS-2014.
  3. ^ A. Krause and C. Guestrin, Near-optimal nonmyopic value of information in graphical models, UAI-2005.
  4. ^ a b A. Krause and C. Guestrin, Beyond Convexity: Submodularity in Machine Learning, Tutorial at ICML-2008
  5. ^ (Schrijver 2003, §44, p. 766)
  6. ^ "Information Processing and Learning" (PDF). cmu.
  7. ^ Fujishige (2005) p.22
  8. ^ S. Iwata, L. Fleischer, and S. Fujishige, A combinatorial strongly polynomial algorithm for minimizing submodular functions, J. ACM 48 (2001), pp. 761–777.
  9. ^ A. Schrijver, A combinatorial algorithm minimizing submodular functions in strongly polynomial time, J. Combin. Theory Ser. B 80 (2000), pp. 346–355.
  10. ^ M. Grötschel, L. Lovász and A. Schrijver, The ellipsoid method and its consequences in combinatorial optimization, Combinatorica 1 (1981), pp. 169–197.
  11. ^ W. H. Cunningham, On submodular function minimization, Combinatorica 5 (1985), pp. 185–192.
  12. ^ Z. Svitkina and L. Fleischer, Submodular approximation: Sampling-based algorithms and lower bounds, SIAM Journal on Computing (2011).
  13. ^ a b R. Iyer, S. Jegelka and J. Bilmes, Fast Semidifferential based submodular function optimization, Proc. ICML (2013).
  14. ^ U. Feige, V. Mirrokni and J. Vondrák, Maximizing non-monotone submodular functions, Proc. of 48th FOCS (2007), pp. 461–471.
  15. ^ N. Buchbinder, M. Feldman, J. Naor and R. Schwartz, A tight linear time (1/2)-approximation for unconstrained submodular maximization, Proc. of 53rd FOCS (2012), pp. 649-658.
  16. ^ G. L. Nemhauser, L. A. Wolsey and M. L. Fisher, An analysis of approximations for maximizing submodular set functions I, Mathematical Programming 14 (1978), 265–294.
  17. ^ G. Calinescu, C. Chekuri, M. Pál and J. Vondrák, Maximizing a submodular set function subject to a matroid constraint, SIAM J. Comp. 40:6 (2011), 1740-1766.
  18. ^ Y. Filmus, J. Ward, A tight combinatorial algorithm for submodular maximization subject to a matroid constraint, Proc. of 53rd FOCS (2012), pp. 659-668.
  19. ^ M. Narasimhan and J. Bilmes, A submodular-supermodular procedure with applications to discriminative structure learning, In Proc. UAI (2005).
  20. ^ a b R. Iyer and J. Bilmes, Algorithms for Approximate Minimization of the Difference between Submodular Functions, In Proc. UAI (2012).
  21. ^ R. Iyer and J. Bilmes, Submodular Optimization Subject to Submodular Cover and Submodular Knapsack Constraints, In Advances of NIPS (2013).
  22. ^ J. Vondrák, Optimal approximation for the submodular welfare problem in the value oracle model, Proc. of STOC (2008), pp. 461–471.
  23. ^ http://submodularity.org/.
  24. ^ J. Bilmes, Submodularity in Machine Learning Applications, Tutorial at AAAI-2015.
