Bayesian additive regression kernels

Bayesian additive regression kernels (BARK) is a non-parametric statistical model for regression and statistical classification.[1]

The unknown mean function is represented as a weighted sum of kernel functions, and the prior on this sum is constructed from an alpha-stable Lévy random field. This induces a joint prior distribution on the number of kernels, the kernel regression coefficients and the kernel location parameters. It can be shown[1] that the alpha-stable prior on the kernel regression coefficients can be approximated by Student's t-distributions. With this heavy-tailed prior on the kernel regression coefficients and a prior with finite support on the kernel location parameters, BARK achieves sparse representations. The shape parameters of the kernel functions capture non-linear interactions among the variables and can be used for feature selection.

Posterior inference on the unknown mean function is carried out with a reversible-jump Markov chain Monte Carlo algorithm; an R package implementing the method is available on CRAN.[2] For binary classification with a probit link, the model is augmented with latent normal variables, so the same sampling scheme used under Gaussian noise applies to the classification problem.
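
The references describe the representation in words only; a sketch of its generic form is given below, where the symbols J, β_j, χ_j and the kernel K are illustrative notation rather than taken from the cited paper:

    % Mean function as a weighted sum of J kernels; the number of kernels J, the coefficients
    % beta_j and the locations chi_j jointly receive the prior induced by the alpha-stable
    % Levy random field.
    \[ f(x) = \sum_{j=1}^{J} \beta_j \, K(x, \chi_j), \qquad
       y_i = f(x_i) + \varepsilon_i, \quad \varepsilon_i \sim \mathcal{N}(0, \sigma^2). \]
    % Probit-link classification via the usual latent-variable augmentation: the observed label
    % is the sign of a latent Gaussian variable centred at f(x_i).
    \[ z_i \sim \mathcal{N}\bigl(f(x_i), 1\bigr), \qquad y_i = \mathbf{1}\{ z_i > 0 \}. \]

Under this augmentation the conditional updates for the mean function given the latent z_i coincide with the Gaussian-noise regression case, which is why the same reversible-jump sampler carries over to classification.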

Notes