# Causal inference

Causal inference is the process of drawing a conclusion about a causal connection based on the conditions of the occurrence of an effect. The main difference between causal inference and inference of association is that the former analyzes the response of the effect variable when the cause is changed.[1][2] The science of why things occur is called etiology. Causal inference is an example of causal reasoning.

## Definition

Inferring the cause of something has been described as:

• "...reason[ing] to the conclusion that something is, or is likely to be, the cause of something else".[3]
• "Identification of the cause or causes of a phenomenon, by establishing covariation of cause and effect, a time-order relationship with the cause preceding the effect, and the elimination of plausible alternative causes."[4]

## Methods

Epidemiological studies employ a range of methods for collecting and measuring evidence of risk factors and effects, and different ways of measuring the association between the two. A hypothesis is formulated and then tested with statistical methods (see Statistical hypothesis testing). Statistical inference helps decide whether an apparent association in the data is due to chance (random variation) or reflects a genuine correlation, and if so, how strong that correlation is. However, correlation does not imply causation, so further methods must be used to infer causation.
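As a minimal sketch of this chance-versus-correlation step, a permutation test can estimate how likely an observed correlation is under pure chance: shuffling one variable destroys any real association while preserving the marginal distributions. The data below are simulated purely for illustration (the variable names and effect size are invented):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient of two 1-D arrays."""
    x = x - x.mean()
    y = y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

def permutation_test(x, y, n_perm=2000, seed=0):
    """Estimate a p-value for the observed correlation by shuffling y,
    which breaks any real association but keeps both marginals intact."""
    rng = np.random.default_rng(seed)
    observed = abs(pearson_r(x, y))
    exceed = 0
    for _ in range(n_perm):
        if abs(pearson_r(x, rng.permutation(y))) >= observed:
            exceed += 1
    # Add-one smoothing avoids reporting an exact zero p-value.
    return observed, (exceed + 1) / (n_perm + 1)

rng = np.random.default_rng(42)
exposure = rng.normal(size=200)
outcome = 0.5 * exposure + rng.normal(size=200)  # genuinely associated
r, p = permutation_test(exposure, outcome)
```

A small `p` says the association is unlikely to be random variation; it says nothing, by itself, about which variable causes which.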

Common frameworks for causal inference are structural equation modeling and the Rubin causal model.

## In epidemiology

Epidemiology studies patterns of health and disease in defined populations of living beings in order to infer causes and effects. An association between an exposure to a putative risk factor and a disease may be suggestive of, but is not equivalent to, causality, because correlation does not imply causation. Historically, Koch's postulates have been used since the 19th century to decide whether a microorganism was the cause of a disease. In the 20th century the Bradford Hill criteria, described in 1965,[5] have been used to assess the causality of variables outside microbiology, although even these criteria are not exclusive ways to determine causality.

In molecular epidemiology the phenomena studied are on a molecular biology level, including genetics, where biomarkers are evidence of cause or effects.

A recent trend is to identify evidence for the influence of exposures on molecular pathology within diseased tissue or cells, in the emerging interdisciplinary field of molecular pathological epidemiology (MPE). Linking exposures to the molecular pathologic signatures of a disease can help to assess causality. Because a given disease is inherently heterogeneous (the unique disease principle), disease phenotyping and subtyping have become trends in the biomedical and public health sciences, exemplified by personalized medicine and precision medicine.

## In computer science

Determining cause and effect from joint observational data for two time-independent variables, say X and Y, has been tackled by exploiting asymmetry between the evidence for a model in the direction X → Y and the evidence for a model in the direction Y → X. The primary approaches are based on algorithmic information theory models and noise models.

### Algorithmic Information Models

Compare two programs, both of which output both X and Y:

• Store Y, and a compressed version of X expressed in terms of the uncompressed Y.
• Store X, and a compressed version of Y expressed in terms of the uncompressed X.

Whichever program is shorter suggests that the variable stored uncompressed is more likely the cause of the computed variable.[6][7]
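A rough sketch of this comparison, using `zlib` compressed length as a computable stand-in for description length. This is an assumption of the sketch: a general-purpose compressor captures only literal repetition, so the code illustrates the scoring scheme rather than a reliable direction test, and practical methods use compressors suited to the data:

```python
import random
import zlib

def c(data: bytes) -> int:
    """Compressed length: a crude, computable proxy for Kolmogorov
    complexity (an assumption of this sketch)."""
    return len(zlib.compress(data, 9))

def cond(a: bytes, b: bytes) -> int:
    """Crude estimate of C(a | b): the extra cost of compressing a
    together with b, beyond compressing b alone."""
    return c(b + a) - c(b)

def infer_direction(x: bytes, y: bytes) -> str:
    """Prefer the direction whose total description length is shorter:
    store the cause, plus the effect described in terms of the cause."""
    x_to_y = c(x) + cond(y, x)  # program 2: store X, compress Y given X
    y_to_x = c(y) + cond(x, y)  # program 1: store Y, compress X given Y
    return "X -> Y" if x_to_y < y_to_x else "Y -> X"

# Toy data: y is a deterministic byte-wise transform of x.
random.seed(0)
x = bytes(random.randrange(256) for _ in range(2048))
y = bytes(v ^ 0xFF for v in x)
verdict = infer_direction(x, y)
```

Because `zlib` cannot see the XOR relationship, the two totals here come out nearly equal; the point of the sketch is only the structure of the two competing programs.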

### Noise Models

Noise models incorporate an independent noise term into the model and compare the evidence for the two possible causal directions.

Here are some of the noise models for the hypothesis X → Y with the noise term E:

• Additive noise:[8] $Y = F(X) + E$
• Linear noise:[9] $Y = pX + qE$
• Post-non-linear:[10] $Y = G(F(X) + E)$
• Heteroskedastic noise: $Y = F(X) + E \cdot G(X)$
• Functional noise:[11] $Y = F(X, E)$

The common assumptions in these models are:

• There are no other causes of Y.
• X and E have no common causes.
• The distribution of the cause is independent of the causal mechanism.

On an intuitive level, the idea is that the factorization of the joint distribution P(Cause, Effect) into P(Cause)·P(Effect | Cause) typically yields models of lower total complexity than the factorization into P(Effect)·P(Cause | Effect). Although the notion of "complexity" is intuitively appealing, it is not obvious how it should be precisely defined.[11] A different family of methods attempts to discover causal "footprints" from large amounts of labeled data, allowing the prediction of more flexible causal relations.[12]
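The additive-noise idea above can be sketched as a direction test: fit a regression in each direction and check whether the residual is independent of the presumed cause. The independence proxy used here (correlation between squared residual and squared input) and the simulated cubic relationship are simplifying assumptions for illustration; practical methods use kernel independence tests such as HSIC:

```python
import numpy as np

def anm_dependence(cause, effect, degree=5):
    """Fit effect = f(cause) + residual with a polynomial, then score how
    strongly the residual's magnitude depends on the cause. Under a correct
    additive-noise model the residual is independent of the cause, so the
    score should be near zero; in the wrong direction it typically is not."""
    coeffs = np.polyfit(cause, effect, degree)
    resid = effect - np.polyval(coeffs, cause)
    # Crude independence proxy: does the residual's spread vary with the input?
    return abs(np.corrcoef(resid**2, cause**2)[0, 1])

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 1000)
y = x**3 + 0.1 * rng.normal(size=1000)   # true model: X -> Y, additive noise

score_xy = anm_dependence(x, y)   # small: residual is independent of X
score_yx = anm_dependence(y, x)   # larger: backward residual depends on Y
direction = "X -> Y" if score_xy < score_yx else "Y -> X"
```

In the backward direction the residual spread is largest near Y = 0 (where the cubic is flattest), so the residual is visibly not independent of the input, and the asymmetry reveals the direction.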

## In statistics and economics

In statistics and economics, causality is often tested for using regression. Several methods can be used to distinguish actual causality from spurious indications of causality. First, the explanatory variable could be one that conceptually could not be caused by the dependent variable, thereby avoiding the possibility of being misled by reverse causation: for example, if the independent variable is rainfall and the dependent variable is the futures price of some agricultural commodity. Second, the instrumental variables technique may be employed to remove any reverse causation by introducing a role for other variables (instruments) that are known to be unaffected by the dependent variable. Third, the principle that effects cannot precede causes can be invoked, by including on the right side of the regression only variables that precede in time the dependent variable. Fourth, other regressors are included to ensure that confounding variables are not causing a regressor to spuriously appear to be significant. Correlation by coincidence, as opposed to correlation reflecting actual causation, can be ruled out by using large samples and by performing cross validation to check that correlations are maintained on data that were not used in the regression.
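The instrumental-variables technique can be sketched with two-stage least squares on simulated data (all variable names and coefficients are invented for illustration): the instrument z shifts the endogenous regressor but affects the outcome only through it, so regressing the outcome on the first-stage fitted values recovers the true effect that ordinary least squares misses.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=n)   # instrument: affects price, not quantity directly
u = rng.normal(size=n)   # unobserved confounder hitting both equations
price = z + u + rng.normal(size=n)
quantity = -2.0 * price + 2.0 * u + rng.normal(size=n)   # true effect: -2

def fit_ols(x, y):
    """Ordinary least squares of y on x; returns [intercept, slope]."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Naive OLS is biased because u moves price and quantity together.
naive = fit_ols(price, quantity)[1]

# Stage 1: project the endogenous regressor onto the instrument.
b0, b1 = fit_ols(z, price)
price_hat = b0 + b1 * z
# Stage 2: regress the outcome on the fitted values (2SLS estimate).
iv = fit_ols(price_hat, quantity)[1]
```

With these coefficients the naive slope converges to about −4/3 while the two-stage estimate converges to the true −2, because the fitted values carry only the instrument-driven variation in price, which is uncorrelated with the confounder.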

## Education

Graduate courses on causal inference have been introduced to the curriculum of many schools.[13][14]

## References

1. ^ Pearl, Judea (1 January 2009). "Causal inference in statistics: An overview" (PDF). Statistics Surveys. 3: 96–146. doi:10.1214/09-SS057.
2. ^ Morgan, Stephen; Winship, Chris (2007). Counterfactuals and Causal inference. Cambridge University Press. ISBN 978-0-521-67193-4.
3. ^ "causal inference". Encyclopædia Britannica, Inc. Retrieved 24 August 2014.
4. ^ John Shaughnessy; Eugene Zechmeister; Jeanne Zechmeister (2000). Research Methods in Psychology. McGraw-Hill Humanities/Social Sciences/Languages. pp. Chapter 1 : Introduction. ISBN 978-0077825362. Retrieved 24 August 2014.
5. ^ Hill, Austin Bradford (1965). "The Environment and Disease: Association or Causation?". Proceedings of the Royal Society of Medicine. 58 (5): 295–300. PMC 1898525. PMID 14283879.
6. ^ Budhathoki, Kailash; Vreeken, Jilles (2016). "Causal Inference by Compression". 2016 IEEE 16th International Conference on Data Mining (ICDM).
7. ^ Marx, Alexander; Vreeken, Jilles (2018). "Telling cause from effect by local and global regression". Knowledge and Information Systems. doi:10.1007/s10115-018-1286-7.
8. ^ Hoyer, Patrik O., et al. "Nonlinear causal discovery with additive noise models." NIPS. Vol. 21. 2008.
9. ^ Shimizu, Shohei, et al. "DirectLiNGAM: A direct method for learning a linear non-Gaussian structural equation model." The Journal of Machine Learning Research 12 (2011): 1225-1248.
10. ^ Zhang, Kun, and Aapo Hyvärinen. "On the identifiability of the post-nonlinear causal model." Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence. AUAI Press, 2009.
11. ^ a b Mooij, Joris M., et al. "Probabilistic latent variable models for distinguishing between cause and effect." NIPS. 2010.
12. ^ Lopez-Paz, David, et al. "Towards a learning theory of cause-effect inference". ICML. 2015.
13. ^ "GOVT 6069: Causal Inference". Government. 2019-02-27. Retrieved 2018-02-27.
14. ^ "Introduction to Causal Inference". Political Science. 2015-05-21. Retrieved 2018-08-26.