Structured sparsity regularization is a class of methods, and an area of research in statistical learning theory, that extend and generalize sparsity regularization learning methods [1]. Both sparsity and structured sparsity regularization methods seek to exploit the assumption that the output variable (i.e., response, or dependent variable) to be learned can be described by a reduced number of variables in the input space (i.e., the domain, space of features or explanatory variables). Sparsity regularization methods focus on selecting the input variables that best describe the output. Structured sparsity regularization methods generalize and extend sparsity regularization methods by allowing for optimal selection over structures like groups or networks of input variables [2][3].

Common motivations for the use of structured sparsity methods are model interpretability, high-dimensional learning (where the dimensionality of the input space may be higher than the number of observations), and reduction of computational complexity [4]. Moreover, structured sparsity methods make it possible to incorporate prior assumptions on the structure of the input variables, such as overlapping groups [2], non-overlapping groups, and acyclic graphs [3]. Examples of uses of structured sparsity methods include face recognition [5], magnetic resonance imaging (MRI) processing [6], sociolinguistic analysis in natural language processing [7], and analysis of gene expression in breast cancer [8].

Definition and related concepts

Sparsity regularization

Consider the linear kernel regularized empirical risk minimization problem with a loss function $V(y_i, f(x_i))$ and the $\ell_0$ "norm" as the regularization penalty:

$\min_{w \in \mathbb{R}^p} \frac{1}{n} \sum_{i=1}^n V(y_i, \langle w, x_i \rangle) + \lambda \|w\|_0,$

where $x_i \in \mathbb{R}^p$, and $\|w\|_0$ denotes the $\ell_0$ "norm", defined as the number of nonzero entries of the vector $w$. We say that $f(x) = \langle w, x \rangle$ is sparse if $\|w\|_0 = s < p$, which means that the output can be described by a small subset of input variables.

More generally, assume a dictionary $\phi_j : X \to \mathbb{R}$ with $j = 1, \dots, p$ is given, such that the target function $f(x)$ of a learning problem can be written as:

$f(x) = \sum_{j=1}^p \phi_j(x) w_j, \quad \forall x \in X,$

and define the $\ell_0$ norm as the number of non-zero components of $w$:

$\|f\|_0 = |\{ j \mid w_j \neq 0,\ j \in \{1, \dots, p\} \}|$, where $|A|$ is the cardinality of set $A$.

We say that $f$ is sparse if $\|f\|_0 = s < p$.
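As a concrete illustration (not taken from the cited sources), the $\ell_0$ "norm" simply counts nonzero coefficients. A minimal sketch with a hypothetical coefficient vector:

    import numpy as np

    # Hypothetical coefficient vector in R^p with p = 8; only 2 entries are nonzero.
    w = np.array([0.0, 1.3, 0.0, 0.0, -0.7, 0.0, 0.0, 0.0])

    l0 = np.count_nonzero(w)   # the l0 "norm": number of nonzero entries
    print(l0)                  # -> 2, so w is sparse (s = 2 < p = 8)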


Structured sparsity regularization

Structured sparsity regularization extends and generalizes the variable selection problem that characterizes sparsity regularization [2][3]. Consider the above regularized empirical risk minimization problem with a general kernel and associated feature map $\phi_j : X \to \mathbb{R}$, $j = 1, \dots, p$:

$\min_{w \in \mathbb{R}^p} \frac{1}{n} \sum_{i=1}^n V\Big(y_i, \sum_{j=1}^p \phi_j(x_i) w_j\Big) + \lambda \|w\|_0.$

The regularization term $\lambda \|w\|_0$ penalizes each component $w_j$ independently, which means that the algorithm will suppress input variables independently from each other.

In several situations we may want to impose more structure in the regularization process, so that, for example, input variables are suppressed according to predefined groups. Structured sparsity regularization methods make it possible to impose such structure by adding structure to the norms that define the regularization term.

Structures and norms

Non-overlapping groups: group Lasso

The non-overlapping group case is the most basic instance of structured sparsity. In it, an a priori partition of the coefficient vector $w$ into $G$ non-overlapping groups is assumed. Let $w_g$ be the vector of coefficients in group $g$; we can then define a regularization term and its group norm as

$\lambda R(w) = \lambda \sum_{g=1}^G \|w_g\|_2,$

where $\|w_g\|_2 = \sqrt{\sum_{j=1}^{|G_g|} (w_g^j)^2}$ is the group $\ell_2$ norm, $G_g$ is group $g$, and $w_g^j$ is the j-th component of group $G_g$.

The above norm is also referred to as group Lasso[2]. This regularizer will force entire coefficient groups towards zero, rather than individual coefficients. As the groups are non-overlapping, the set of non-zero coefficients can be obtained as the union of the groups that were not set to zero, and conversely for the set of zero coefficients.
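A minimal sketch of the non-overlapping group penalty, assuming a hypothetical partition of an 8-dimensional coefficient vector into three groups (illustrative only, not code from the cited references):

    import numpy as np

    def group_lasso_penalty(w, groups, lam=1.0):
        """Sum of the Euclidean (l2) norms of the per-group coefficient sub-vectors."""
        return lam * sum(np.linalg.norm(w[g]) for g in groups)

    w = np.array([0.0, 0.0, 0.0, 1.5, -2.0, 0.3, 0.0, 0.0])
    groups = [[0, 1, 2], [3, 4, 5], [6, 7]]   # hypothetical non-overlapping partition

    print(group_lasso_penalty(w, groups))      # only the second group contributes

Because the penalty acts on whole sub-vectors, shrinking it to zero removes a group of variables at once rather than individual coefficients.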

Overlapping groups

Overlapping groups is the structured sparsity case in which a variable can belong to more than one group. This case is often of interest as it can represent a more general class of relationships among variables than non-overlapping groups can, such as tree structures or other types of graphs [3][8].

There are two types of overlapping group sparsity regularization approaches, which are used to model different types of input variable relationships:

Intersection of complements: group Lasso

The intersection of complements approach is used in cases when we want to select only those input variables that have positive coefficients in all groups they belong to. Consider again the group Lasso for a regularized empirical risk minimization problem:

$\min_{w \in \mathbb{R}^p} \frac{1}{n} \sum_{i=1}^n V(y_i, \langle w, x_i \rangle) + \lambda \sum_{g=1}^G \|w_g\|_2,$

where $\|w_g\|_2 = \sqrt{\sum_{j=1}^{|G_g|} (w_g^j)^2}$ is the group $\ell_2$ norm, $G_g$ is group $g$, and $w_g^j$ is the j-th component of group $G_g$.

As in the non-overlapping groups case, the group Lasso regularizer will potentially set entire groups of coefficients to zero. Selected variables are those with coefficients $w_j \neq 0$. However, as in this case groups may overlap, the selected support is obtained as the intersection of the complements of those groups that are set to zero.

This intersection of complements selection criterion implies the modeling choice that we allow some coefficients within a particular group to be set to zero, while others within the same group may remain positive. In other words, coefficients within a group may differ depending on the several group memberships that each variable within the group may have.
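The selection rule can be illustrated with a small set computation (a sketch with hypothetical overlapping groups, not taken from the references): if the regularizer zeroes out some groups, the zero coefficients are the union of those groups, so the selected support is the intersection of their complements.

    # Hypothetical example: 6 variables, overlapping groups, and suppose the
    # group Lasso sets groups g2 and g3 entirely to zero.
    variables = set(range(6))
    groups = {"g1": {0, 1, 2}, "g2": {2, 3}, "g3": {3, 4, 5}}
    zeroed = ["g2", "g3"]

    # Intersection of the complements of the zeroed groups = selected variables.
    support = set(variables)
    for g in zeroed:
        support &= variables - groups[g]

    print(sorted(support))   # -> [0, 1]: variable 2 is dropped because it lies in g2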

Union of groups: latent group Lasso

A different approach is to consider the union of groups for variable selection. This approach captures the modeling situation where variables can be selected as long as they belong to at least one group with positive coefficients. This modeling perspective implies that we want to preserve group structure.

The formulation of the union of groups approach is also referred to as latent group Lasso, and requires modifying the group norm considered above and introducing the following regularizer [3]:

$R(w) = \inf \left\{ \sum_{g=1}^G \|\bar{w}_g\|_2 \;:\; w = \sum_{g=1}^G \bar{w}_g \right\},$

where $\bar{w}_g \in \mathbb{R}^p$ is the vector of coefficients of group $g$, i.e., a vector whose components $\bar{w}_g^j$ may be nonzero for variables $j$ in group $G_g$ and are $0$ for all others: $\bar{w}_g^j = 0$ if $j$ is not in group $G_g$.

This regularizer can be interpreted as effectively replicating variables that belong to more than one group, therefore conserving group structure. As intended by the union of groups approach, requiring $w = \sum_{g=1}^G \bar{w}_g$ produces a vector of weights $w$ that effectively sums up the weights of all variables across all groups they belong to.
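One way to read this replication interpretation in code (a sketch with hypothetical groups and data, not an implementation from the cited papers): duplicate each column of the design matrix once per group it belongs to, so the replicated groups no longer overlap, and sum the latent copies of each coefficient to recover $w$.

    import numpy as np

    # Hypothetical overlapping groups over 5 variables.
    groups = [[0, 1, 2], [2, 3, 4]]

    def replicate_columns(X, groups):
        """Build a replicated design: each group gets its own copy of its columns."""
        return np.hstack([X[:, g] for g in groups])

    def collapse(latent_w, groups, p):
        """Map latent (replicated) coefficients back: w_j sums over all copies of j."""
        w = np.zeros(p)
        offset = 0
        for g in groups:
            w[np.array(g)] += latent_w[offset:offset + len(g)]
            offset += len(g)
        return w

    X = np.random.randn(10, 5)
    X_rep = replicate_columns(X, groups)                    # 10 x 6 replicated design
    latent_w = np.array([1.0, 0.5, 0.0, 0.2, 0.0, -0.3])    # e.g., from a group Lasso fit
    print(collapse(latent_w, groups, p=5))                  # summed weights per original variable

Running an ordinary non-overlapping group Lasso on the replicated design and collapsing the latent coefficients in this way corresponds to the union-of-groups selection behaviour described above.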

Algorithms for computation

Best subset selection problem

The problem of choosing the best subset of input variables can be naturally formulated under an $\ell_0$ penalization framework as [4]:

$\min_{w \in \mathbb{R}^p} \frac{1}{n} \sum_{i=1}^n V(y_i, \langle w, x_i \rangle) + \lambda \|w\|_0,$

where $\|w\|_0$ denotes the $\ell_0$ "norm", defined as the number of nonzero entries of the vector $w$.

Although this formulation makes sense from a modeling perspective, it is computationally infeasible, as it is equivalent to an exhaustive search evaluating all possible subsets of variables [4].

Two main approaches for solving the optimization problem are: 1) greedy methods, such as step-wise regression in statistics, or matching pursuit in signal processing; and 2) convex relaxation formulation approaches and proximal gradient optimization methods.
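For the greedy route, an off-the-shelf orthogonal matching pursuit can serve as an illustrative baseline. The sketch below assumes scikit-learn and synthetic data; it is not an implementation from the cited lecture notes.

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 20))
    w_true = np.zeros(20)
    w_true[[3, 7, 12]] = [1.5, -2.0, 0.8]            # sparse ground truth
    y = X @ w_true + 0.01 * rng.standard_normal(50)

    # Greedily pick at most 3 columns that best explain the residual.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3).fit(X, y)
    print(np.flatnonzero(omp.coef_))                 # indices of selected variables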

Convex relaxation

A natural approximation for the best subset selection problem is $\ell_1$ norm regularization [4]:

$\min_{w \in \mathbb{R}^p} \frac{1}{n} \sum_{i=1}^n V(y_i, \langle w, x_i \rangle) + \lambda \|w\|_1.$

Such a scheme is called basis pursuit or the Lasso, which replaces the $\ell_0$ "norm" with the convex, non-differentiable $\ell_1$ norm.
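As a usage sketch (assuming scikit-learn and synthetic data, not taken from the cited sources), the $\ell_1$-penalized problem with the quadratic loss is the familiar Lasso; the alpha parameter plays the role of the regularization parameter $\lambda$.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 30))
    w_true = np.zeros(30)
    w_true[[2, 5, 11]] = [1.0, -0.5, 2.0]            # sparse ground truth
    y = X @ w_true + 0.05 * rng.standard_normal(100)

    lasso = Lasso(alpha=0.1).fit(X, y)
    print(np.flatnonzero(lasso.coef_))               # nonzero coefficients: selected variables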

Proximal gradient methods

Proximal gradient methods, also called forward-backward splitting, are optimization methods useful for minimizing functions with a convex and differentiable component and a convex, potentially non-differentiable component.

As such, proximal gradient methods are useful for solving sparsity and structured sparsity regularization problems [9] of the following form:

$\min_{w \in \mathbb{R}^p} F(w) + R(w),$

where $F(w)$ is a convex and differentiable loss function such as the quadratic loss, and $R(w)$ is a convex, potentially non-differentiable regularizer such as the $\ell_1$ norm.
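A minimal sketch of a forward-backward (ISTA-style) iteration for the quadratic loss with an $\ell_1$ regularizer, assuming a fixed step size $1/L$ with $L$ the Lipschitz constant of the gradient; this is illustrative only, not the algorithm of reference [9].

    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(X, y, lam, n_iter=500):
        n, p = X.shape
        w = np.zeros(p)
        L = np.linalg.norm(X, 2) ** 2 / n               # Lipschitz constant of the gradient
        for _ in range(n_iter):
            grad = X.T @ (X @ w - y) / n                # forward (gradient) step on the loss
            w = soft_threshold(w - grad / L, lam / L)   # backward (proximal) step on the penalty
        return w

    # Hypothetical data: the recovered w should be sparse.
    rng = np.random.default_rng(2)
    X = rng.standard_normal((80, 25))
    w_true = np.zeros(25)
    w_true[[1, 4, 9]] = [2.0, -1.0, 0.5]
    y = X @ w_true
    print(np.flatnonzero(np.round(ista(X, y, lam=0.05), 3)))

For structured penalties such as the group Lasso, the same scheme applies with the soft-thresholding step replaced by the proximal operator of the corresponding group norm.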

Uses and applications

Structured sparsity regularization methods have been used in a number of settings where it is desirable to impose an a priori input variable structure on the regularization process. Some such applications are:

  • Compressive sensing in magnetic resonance imaging (MRI), reconstructing MR images from a small number of measurements, potentially yielding significant reductions in MR scanning time [6].
  • Robust face recognition in the presence of misalignment, occlusion and illumination variation [5].
  • Uncovering sociolinguistic associations between lexical frequencies used by Twitter authors and the socio-demographic variables of their geographic communities [7].
  • Gene selection analysis of breast cancer data using priors of overlapping groups, e.g., biologically meaningful gene sets [8].

See also

References

  1. ^ Rosasco, Lorenzo; Poggio, Tomaso. "A Regularization Tour of Machine Learning — MIT 9.520 Lecture Notes". Manuscript, Dec. 2014.
  2. ^ a b c d Yuan, M.; Lin, Y. (2006). "Model selection and estimation in regression with grouped variables". J. R. Stat. Soc. B. 68 (1): 49–67. doi:10.1111/j.1467-9868.2005.00532.x.
  3. ^ a b c d e Obozinski, G. (2011). "Group lasso with overlaps: the latent group lasso approach". INRIA Technical Report.
  4. ^ a b c d Rosasco, L. Lecture 10 of the lecture notes for 9.520: Statistical Learning Theory and Applications. Massachusetts Institute of Technology, Fall 2014. Available at http://www.mit.edu/~9.520/fall14/slides/class18/class18_sparsity.pdf
  5. ^ a b Jia, Kui; et al. (2012). "Robust and Practical Face Recognition via Structured Sparsity".
  6. ^ a b Chen, Chen; et al. (2012). "Compressive Sensing MRI with Wavelet Tree Sparsity". Proc. of the 26th Annual Conference on Neural Information Processing Systems (NIPS).
  7. ^ a b Eisenstein, Jacob; et al. (2011). "Discovering Sociolinguistic Associations with Structured Sparsity". Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics.
  8. ^ a b c Jacob, Laurent; et al. (2009). "Group Lasso with Overlap and Graph Lasso". Proceedings of the 26th International Conference on Machine Learning.
  9. ^ Villa, S.; Rosasco, L.; Mosci, S.; Verri, A. (2012). "Proximal methods for the latent group lasso penalty". Preprint. arXiv:1209.0368.

Category:Machine learning Category:First order methods Category:Convex optimization