Anomaly detection: Difference between revisions
Revision as of 17:03, 8 November 2023
In data analysis, anomaly detection (also referred to as outlier detection and sometimes as novelty detection) is generally understood to be the identification of rare items, events or observations which deviate significantly from the majority of the data and do not conform to a well defined notion of normal behaviour.[1] Such examples may arouse suspicions of being generated by a different mechanism,[2] or appear inconsistent with the remainder of that set of data.[3]
Anomaly detection finds application in many domains including cybersecurity, medicine, machine vision, statistics, neuroscience, law enforcement and financial fraud, to name only a few. Anomalies were initially sought so that they could be clearly rejected or omitted from the data to aid statistical analysis, for example when computing the mean or standard deviation. They were also removed to improve the predictions of models such as linear regression, and more recently their removal has been shown to aid the performance of machine learning algorithms. However, in many applications the anomalies themselves are of interest and are often the most informative observations in the entire data set; these need to be identified and separated from noise or irrelevant outliers.
Three broad categories of anomaly detection techniques exist.[1] Supervised anomaly detection techniques require a data set that has been labeled as "normal" and "abnormal" and involve training a classifier. However, this approach is rarely used in anomaly detection due to the general unavailability of labelled data and the inherently unbalanced nature of the classes. Semi-supervised anomaly detection techniques assume that some portion of the data is labelled. This may be any combination of the normal or anomalous data, but more often than not the techniques construct a model representing normal behavior from a given normal training data set, and then test the likelihood of a test instance being generated by the model. Unsupervised anomaly detection techniques assume the data is unlabelled and are by far the most commonly used, owing to their wider applicability.
Definition
Many attempts have been made in the statistical and computer science communities to define an anomaly. The most prevalent ones include the following, and can be categorised into three groups: those that are ambiguous, those that are specific to a method with pre-defined thresholds usually chosen empirically, and those that are formally defined:
Ill defined
- An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism.[2]
- Anomalies are instances or collections of data that occur very rarely in the data set and whose features differ significantly from most of the data.
- An outlier is an observation (or subset of observations) which appears to be inconsistent with the remainder of that set of data.[3]
- An anomaly is a point or collection of points that is relatively distant from other points in multi-dimensional space of features.
- Anomalies are patterns in data that do not conform to a well defined notion of normal behaviour.[1]
Specific
- Let T be a set of observations from a univariate Gaussian distribution and O a point from T. Then O is an outlier if and only if the z-score of O is greater than a pre-selected threshold.
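This threshold rule can be sketched in a few lines (a minimal illustration; the data and the choice of 3 standard deviations as the cutoff are illustrative assumptions):

```python
from statistics import mean, stdev

def z_score_outliers(data, threshold=3.0):
    """Return the points whose absolute z-score exceeds the threshold."""
    mu, sigma = mean(data), stdev(data)
    return [x for x in data if abs(x - mu) / sigma > threshold]

# Fourteen near-Gaussian readings around 10, plus one gross anomaly.
T = [9.9, 10.1] * 7 + [30.0]
print(z_score_outliers(T))  # [30.0]
```

Note that with very few points a gross anomaly can inflate the sample standard deviation enough to mask itself, which is one motivation for robust estimators such as the Minimum Covariance Determinant.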
Applications
Anomaly detection is applicable in a very large number and variety of domains, and is an important subarea of unsupervised machine learning. As such it has applications in cyber-security, intrusion detection, fraud detection, fault detection, system health monitoring, event detection in sensor networks, detecting ecosystem disturbances, defect detection in images using machine vision, medical diagnosis and law enforcement.[4]
Intrusion detection
Anomaly detection was proposed for intrusion detection systems (IDS) by Dorothy Denning in 1986.[5] Anomaly detection for IDS is normally accomplished with thresholds and statistics, but can also be done with soft computing, and inductive learning.[6] Types of features proposed by 1999 included profiles of users, workstations, networks, remote hosts, groups of users, and programs based on frequencies, means, variances, covariances, and standard deviations.[7] The counterpart of anomaly detection in intrusion detection is misuse detection.
Preprocessing
Preprocessing data to remove anomalies can be an important step in data analysis, and is done for a number of reasons. Statistics such as the mean and standard deviation are more accurate after the removal of anomalies, and the visualisation of data can also be improved. In supervised learning, removing the anomalous data from the dataset often results in a statistically significant increase in accuracy.[8][9]
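As a sketch of this preprocessing step (the sensor readings and the median-absolute-deviation filter are illustrative assumptions, not a prescribed method):

```python
from statistics import mean, median

def mad_filter(data, k=3.0):
    """Keep points within k scaled median absolute deviations (MAD) of
    the median. Unlike the standard deviation, the MAD is barely
    inflated by the very anomalies being removed."""
    med = median(data)
    mad = median(abs(x - med) for x in data)
    return [x for x in data if abs(x - med) <= k * 1.4826 * mad]

readings = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 95.0]  # one sensor glitch
clean = mad_filter(readings)
print(mean(readings), mean(clean))  # mean drops from about 30.76 to 20.05
```

The factor 1.4826 scales the MAD so that it estimates the standard deviation for Gaussian data.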
Video Surveillance
Anomaly detection has become increasingly vital in video surveillance to enhance security and safety.[10][11] With the advent of deep learning technologies, methods using Convolutional Neural Networks (CNNs) and Simple Recurrent Units (SRUs) have shown significant promise in identifying unusual activities or behaviors in video data.[10] These models can process and analyze extensive video feeds in real time, recognizing patterns that deviate from the norm and may indicate potential security threats or safety violations.[10]
Popular techniques
Many anomaly detection techniques have been proposed in the literature.[1][12] The performance of these methods usually depends on the data set: for example, some may be suited to detecting local outliers, others to global outliers, and no method shows a systematic advantage over the others when compared across many data sets.[13][14] Almost all algorithms also require the setting of non-intuitive parameters that are critical for performance and usually unknown before application. Some popular techniques are listed below, broken down into categories:
Statistical
Parameter-free
Parametric
Density
- Density-based techniques (k-nearest neighbor,[15][16][17] local outlier factor,[18] isolation forests,[19][20] and many more variations of this concept[21])
- Subspace-,[22] correlation-based[23] and tensor-based [24] outlier detection for high-dimensional data[25]
- One-class support vector machines[26]
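A minimal sketch of the distance-based flavour of these methods, scoring each point by the distance to its k-th nearest neighbour, so that isolated points in low-density regions get large scores (the data and the value of k are illustrative assumptions):

```python
import math

def knn_outlier_scores(points, k=2):
    """Score each point by the distance to its k-th nearest neighbour;
    larger scores indicate points in sparser regions."""
    scores = []
    for i, p in enumerate(points):
        # Sorted distances from p to every other point.
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(dists[k - 1])
    return scores

pts = [(0, 0), (0, 1), (1, 0), (1, 1), (8, 8)]
scores = knn_outlier_scores(pts, k=2)
# The isolated point (8, 8) receives by far the largest score.
```

Practical implementations (e.g. in the software packages listed later in this article) add index structures so that neighbour search does not cost O(n²).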
Neural networks
- Replicator neural networks,[27] autoencoders, variational autoencoders,[28] long short-term memory neural networks[29]
- Bayesian networks[27]
- Hidden Markov models (HMMs)[27]
- Minimum Covariance Determinant[30][31]
- Deep Learning[32]
- Convolutional Neural Networks (CNNs): CNNs have shown exceptional performance in the unsupervised learning domain for anomaly detection, especially in image and video data analysis.[32] Their ability to automatically learn hierarchical spatial features, from low- to high-level patterns, makes them particularly suited to detecting visual anomalies. For instance, CNNs can be trained on image datasets to identify atypical patterns indicative of defects or out-of-norm conditions in industrial quality control scenarios.[33]
- Simple Recurrent Units (SRUs): In time-series data, SRUs, a type of recurrent neural network, have been used effectively for anomaly detection by capturing temporal dependencies and sequence anomalies.[32] Unlike traditional RNNs, SRUs are designed to be faster and more parallelizable, making them a better fit for real-time anomaly detection in complex systems such as dynamic financial markets or predictive maintenance in machinery, where identifying temporal irregularities promptly is crucial.[34]
Cluster based
- Clustering: Cluster analysis-based outlier detection[35][36]
- Deviations from association rules and frequent itemsets
- Fuzzy logic-based outlier detection
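One simple instance of cluster-based detection: fit cluster centroids with a tiny k-means and score each point by its distance to the nearest centroid (the data, the number of clusters, and the deterministic initialisation are illustrative assumptions):

```python
import math

def kmeans_centroids(points, k, iters=20):
    """A tiny k-means, initialised deterministically from the first k
    points, used here only to obtain cluster centroids."""
    centroids = list(points[:k])
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its cluster (keep it if empty).
        centroids = [
            tuple(sum(v) / len(cl) for v in zip(*cl)) if cl else centroids[c]
            for c, cl in enumerate(clusters)
        ]
    return centroids

def centroid_distance_scores(points, centroids):
    """Cluster-based outlier score: distance to the nearest centroid."""
    return [min(math.dist(p, c) for c in centroids) for p in points]

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (5, 20)]
scores = centroid_distance_scores(pts, kmeans_centroids(pts, k=2))
# (5, 20), far from both clusters, receives the largest score.
```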
Ensembles
- Ensemble techniques, using feature bagging,[37][38] score normalization[39][40] and different sources of diversity[41][42]
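The score-normalization step can be sketched as follows (a toy combination rule over made-up detector scores; real ensembles often use the more careful unification schemes cited above):

```python
def normalize(scores):
    """Min-max normalize raw outlier scores to [0, 1] so that detectors
    with different output scales can be combined."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def ensemble(score_lists):
    """Average the normalized scores of several detectors per point."""
    normed = [normalize(s) for s in score_lists]
    return [sum(col) / len(col) for col in zip(*normed)]

# Hypothetical raw scores from two detectors over five points; both
# agree the last point is most anomalous, but on different scales.
det_a = [0.1, 0.2, 0.1, 0.3, 0.9]
det_b = [12, 15, 11, 14, 80]
combined = ensemble([det_a, det_b])  # last point gets the top score
```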
Others
Explainable Anomaly Detection
Many of the methods discussed above yield only an anomaly score, which can often be explained to users only as the point lying in a region of low data density (or of density relatively low compared to its neighbors' densities). In explainable artificial intelligence, users demand methods with higher explainability. Some methods allow for more detailed explanations:
- The Subspace Outlier Degree (SOD)[22] identifies attributes where a sample is normal, and attributes in which the sample deviates from the expected.
- Correlation Outlier Probabilities (COP)[23] compute an error vector describing how a sample point deviates from an expected location; this can be interpreted as a counterfactual explanation: the sample would be normal if it were moved to that location.
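A toy attribute-wise explanation in this spirit (this is not the SOD or COP algorithm itself; the data and threshold are illustrative assumptions):

```python
from statistics import mean, stdev

def explain(point, data, threshold=2.0):
    """For each attribute, report how many standard deviations the point
    lies from that attribute's mean, and flag attributes beyond the
    threshold — a crude attribute-level explanation of an anomaly."""
    report = {}
    for a in range(len(point)):
        col = [p[a] for p in data]
        z = (point[a] - mean(col)) / stdev(col)
        report[a] = (round(z, 2), abs(z) > threshold)
    return report

data = [(1.0, 50), (1.1, 52), (0.9, 49), (1.0, 51), (1.05, 50)]
# A sample that is normal in attribute 0 but deviant in attribute 1.
print(explain((1.0, 70), data))
```

Such output tells the user not just that the point is anomalous, but in which attributes the deviation occurs.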
Software
- ELKI is an open-source Java data mining toolkit that contains several anomaly detection algorithms, as well as index acceleration for them.
- PyOD is an open-source Python library developed specifically for anomaly detection.[43]
- scikit-learn is an open-source Python library that contains some algorithms for unsupervised anomaly detection.
- Wolfram Mathematica provides functionality for unsupervised anomaly detection across multiple data types.[44]
Datasets
- Anomaly detection benchmark data repository with carefully chosen data sets of the Ludwig-Maximilians-Universität München; Mirror Archived 2022-03-31 at the Wayback Machine at University of São Paulo.
- ODDS – ODDS: A large collection of publicly available outlier detection datasets with ground truth in different domains.
- Unsupervised Anomaly Detection Benchmark at Harvard Dataverse: Datasets for Unsupervised Anomaly Detection with ground truth.
- KMASH Data Repository at Research Data Australia having more than 12,000 anomaly detection datasets with ground truth.
See also
References
- ^ a b c d Chandola, V.; Banerjee, A.; Kumar, V. (2009). "Anomaly detection: A survey". ACM Computing Surveys. 41 (3): 1–58. doi:10.1145/1541880.1541882. S2CID 207172599.
- ^ a b Hawkins, Douglas M. (1980). Identification of Outliers. Chapman and Hall London; New York.
- ^ a b Barnett, Vic; Lewis, Toby (1978). Outliers in statistical data. John Wiley & Sons Ltd.
- ^ Aggarwal, Charu (2017). Outlier Analysis. Springer Publishing Company, Incorporated. ISBN 978-3319475776.
- ^ Denning, D. E. (1987). "An Intrusion-Detection Model" (PDF). IEEE Transactions on Software Engineering. SE-13 (2): 222–232. CiteSeerX 10.1.1.102.5127. doi:10.1109/TSE.1987.232894. S2CID 10028835. Archived (PDF) from the original on June 22, 2015.
- ^ Teng, H. S.; Chen, K.; Lu, S. C. (1990). "Adaptive real-time anomaly detection using inductively generated sequential patterns". Proceedings. 1990 IEEE Computer Society Symposium on Research in Security and Privacy (PDF). pp. 278–284. doi:10.1109/RISP.1990.63857. ISBN 978-0-8186-2060-7. S2CID 35632142.
- ^ Jones, Anita K.; Sielken, Robert S. (1999). "Computer System Intrusion Detection: A Survey". Technical Report, Department of Computer Science, University of Virginia, Charlottesville, VA. CiteSeerX 10.1.1.24.7802.
- ^ Tomek, Ivan (1976). "An Experiment with the Edited Nearest-Neighbor Rule". IEEE Transactions on Systems, Man, and Cybernetics. 6 (6): 448–452. doi:10.1109/TSMC.1976.4309523.
- ^ Smith, M. R.; Martinez, T. (2011). "Improving classification accuracy by identifying and removing instances that should be misclassified" (PDF). The 2011 International Joint Conference on Neural Networks. p. 2690. CiteSeerX 10.1.1.221.1371. doi:10.1109/IJCNN.2011.6033571. ISBN 978-1-4244-9635-8. S2CID 5809822.
- ^ a b c "Video anomaly detection system using deep convolutional and recurrent models". Results in Engineering. 18: 101026. 2023-06-01. doi:10.1016/j.rineng.2023.101026. ISSN 2590-1230.
- ^ Zhang, Tan; Chowdhery, Aakanksha; Bahl, Paramvir (Victor); Jamieson, Kyle; Banerjee, Suman (2015-09-07). "The Design and Implementation of a Wireless Video Surveillance System". Proceedings of the 21st Annual International Conference on Mobile Computing and Networking. MobiCom '15. New York, NY, USA: Association for Computing Machinery: 426–438. doi:10.1145/2789168.2790123. ISBN 978-1-4503-3619-2.
- ^ Zimek, Arthur; Filzmoser, Peter (2018). "There and back again: Outlier detection between statistical reasoning and data mining algorithms" (PDF). Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery. 8 (6): e1280. doi:10.1002/widm.1280. ISSN 1942-4787. S2CID 53305944. Archived from the original (PDF) on 2021-11-14. Retrieved 2019-12-09.
- ^ Campos, Guilherme O.; Zimek, Arthur; Sander, Jörg; Campello, Ricardo J. G. B.; Micenková, Barbora; Schubert, Erich; Assent, Ira; Houle, Michael E. (2016). "On the evaluation of unsupervised outlier detection: measures, datasets, and an empirical study". Data Mining and Knowledge Discovery. 30 (4): 891. doi:10.1007/s10618-015-0444-8. ISSN 1384-5810. S2CID 1952214.
- ^ Anomaly detection benchmark data repository of the Ludwig-Maximilians-Universität München; Mirror Archived 2022-03-31 at the Wayback Machine at University of São Paulo.
- ^ Knorr, E. M.; Ng, R. T.; Tucakov, V. (2000). "Distance-based outliers: Algorithms and applications". The VLDB Journal the International Journal on Very Large Data Bases. 8 (3–4): 237–253. CiteSeerX 10.1.1.43.1842. doi:10.1007/s007780050006. S2CID 11707259.
- ^ Ramaswamy, S.; Rastogi, R.; Shim, K. (2000). Efficient algorithms for mining outliers from large data sets. Proceedings of the 2000 ACM SIGMOD international conference on Management of data – SIGMOD '00. p. 427. doi:10.1145/342009.335437. ISBN 1-58113-217-4.
- ^ Angiulli, F.; Pizzuti, C. (2002). Fast Outlier Detection in High Dimensional Spaces. Principles of Data Mining and Knowledge Discovery. Lecture Notes in Computer Science. Vol. 2431. p. 15. doi:10.1007/3-540-45681-3_2. ISBN 978-3-540-44037-6.
- ^ Breunig, M. M.; Kriegel, H.-P.; Ng, R. T.; Sander, J. (2000). LOF: Identifying Density-based Local Outliers (PDF). Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data. SIGMOD. pp. 93–104. doi:10.1145/335191.335388. ISBN 1-58113-217-4.
- ^ Liu, Fei Tony; Ting, Kai Ming; Zhou, Zhi-Hua (December 2008). "Isolation Forest". 2008 Eighth IEEE International Conference on Data Mining. pp. 413–422. doi:10.1109/ICDM.2008.17. ISBN 9780769535029. S2CID 6505449.
- ^ Liu, Fei Tony; Ting, Kai Ming; Zhou, Zhi-Hua (March 2012). "Isolation-Based Anomaly Detection". ACM Transactions on Knowledge Discovery from Data. 6 (1): 1–39. doi:10.1145/2133360.2133363. S2CID 207193045.
- ^ Schubert, E.; Zimek, A.; Kriegel, H. -P. (2012). "Local outlier detection reconsidered: A generalized view on locality with applications to spatial, video, and network outlier detection". Data Mining and Knowledge Discovery. 28: 190–237. doi:10.1007/s10618-012-0300-z. S2CID 19036098.
- ^ a b Kriegel, H. P.; Kröger, P.; Schubert, E.; Zimek, A. (2009). Outlier Detection in Axis-Parallel Subspaces of High Dimensional Data. Advances in Knowledge Discovery and Data Mining. Lecture Notes in Computer Science. Vol. 5476. p. 831. doi:10.1007/978-3-642-01307-2_86. ISBN 978-3-642-01306-5.
- ^ a b Kriegel, H. P.; Kroger, P.; Schubert, E.; Zimek, A. (2012). Outlier Detection in Arbitrarily Oriented Subspaces. 2012 IEEE 12th International Conference on Data Mining. p. 379. doi:10.1109/ICDM.2012.21. ISBN 978-1-4673-4649-8.
- ^ Fanaee-T, H.; Gama, J. (2016). "Tensor-based anomaly detection: An interdisciplinary survey". Knowledge-Based Systems. 98: 130–147. doi:10.1016/j.knosys.2016.01.027. S2CID 16368060.
- ^ Zimek, A.; Schubert, E.; Kriegel, H.-P. (2012). "A survey on unsupervised outlier detection in high-dimensional numerical data". Statistical Analysis and Data Mining. 5 (5): 363–387. doi:10.1002/sam.11161. S2CID 6724536.
- ^ Schölkopf, B.; Platt, J. C.; Shawe-Taylor, J.; Smola, A. J.; Williamson, R. C. (2001). "Estimating the Support of a High-Dimensional Distribution". Neural Computation. 13 (7): 1443–71. CiteSeerX 10.1.1.4.4106. doi:10.1162/089976601750264965. PMID 11440593. S2CID 2110475.
- ^ a b c Hawkins, Simon; He, Hongxing; Williams, Graham; Baxter, Rohan (2002). "Outlier Detection Using Replicator Neural Networks". Data Warehousing and Knowledge Discovery. Lecture Notes in Computer Science. Vol. 2454. pp. 170–180. CiteSeerX 10.1.1.12.3366. doi:10.1007/3-540-46145-0_17. ISBN 978-3-540-44123-6. S2CID 6436930.
- ^ J. An and S. Cho, "Variational autoencoder based anomaly detection using reconstruction probability", 2015.
- ^ Malhotra, Pankaj; Vig, Lovekesh; Shroff, Gautman; Agarwal, Puneet (22–24 April 2015). Long Short Term Memory Networks for Anomaly Detection in Time Series. European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. Bruges (Belgium).
- ^ Hubert, Mia; Debruyne, Michiel; Rousseeuw, Peter J. (2018). "Minimum covariance determinant and extensions". WIREs Computational Statistics. 10 (3). doi:10.1002/wics.1421. ISSN 1939-5108. S2CID 67227041.
- ^ Hubert, Mia; Debruyne, Michiel (2010). "Minimum covariance determinant". WIREs Computational Statistics. 2 (1): 36–43. doi:10.1002/wics.61. ISSN 1939-0068. S2CID 123086172.
- ^ a b c "Video anomaly detection system using deep convolutional and recurrent models". Results in Engineering. 18: 101026. 2023-06-01. doi:10.1016/j.rineng.2023.101026. ISSN 2590-1230.
- ^ Alzubaidi, Laith; Zhang, Jinglan; Humaidi, Amjad J.; Al-Dujaili, Ayad; Duan, Ye; Al-Shamma, Omran; Santamaría, J.; Fadhel, Mohammed A.; Al-Amidie, Muthana; Farhan, Laith (2021-03-31). "Review of deep learning: concepts, CNN architectures, challenges, applications, future directions". Journal of Big Data. 8 (1): 53. doi:10.1186/s40537-021-00444-8. ISSN 2196-1115. PMC 8010506. PMID 33816053.
- ^ Belay, Mohammed Ayalew; Blakseth, Sindre Stenen; Rasheed, Adil; Salvo Rossi, Pierluigi (2023). "Unsupervised Anomaly Detection for IoT-Based Multivariate Time Series: Existing Solutions, Performance Analysis and Future Directions". Sensors. 23 (5): 2844. doi:10.3390/s23052844. ISSN 1424-8220.
- ^ He, Z.; Xu, X.; Deng, S. (2003). "Discovering cluster-based local outliers". Pattern Recognition Letters. 24 (9–10): 1641–1650. Bibcode:2003PaReL..24.1641H. CiteSeerX 10.1.1.20.4242. doi:10.1016/S0167-8655(03)00003-5.
- ^ Campello, R. J. G. B.; Moulavi, D.; Zimek, A.; Sander, J. (2015). "Hierarchical Density Estimates for Data Clustering, Visualization, and Outlier Detection". ACM Transactions on Knowledge Discovery from Data. 10 (1): 5:1–51. doi:10.1145/2733381. S2CID 2887636.
- ^ Lazarevic, A.; Kumar, V. (2005). "Feature bagging for outlier detection". Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining. pp. 157–166. CiteSeerX 10.1.1.399.425. doi:10.1145/1081870.1081891. ISBN 978-1-59593-135-1. S2CID 2054204.
- ^ Nguyen, H. V.; Ang, H. H.; Gopalkrishnan, V. (2010). Mining Outliers with Ensemble of Heterogeneous Detectors on Random Subspaces. Database Systems for Advanced Applications. Lecture Notes in Computer Science. Vol. 5981. p. 368. doi:10.1007/978-3-642-12026-8_29. ISBN 978-3-642-12025-1.
- ^ Kriegel, H. P.; Kröger, P.; Schubert, E.; Zimek, A. (2011). Interpreting and Unifying Outlier Scores. Proceedings of the 2011 SIAM International Conference on Data Mining. pp. 13–24. CiteSeerX 10.1.1.232.2719. doi:10.1137/1.9781611972818.2. ISBN 978-0-89871-992-5.
- ^ Schubert, E.; Wojdanowski, R.; Zimek, A.; Kriegel, H. P. (2012). On Evaluation of Outlier Rankings and Outlier Scores. Proceedings of the 2012 SIAM International Conference on Data Mining. pp. 1047–1058. doi:10.1137/1.9781611972825.90. ISBN 978-1-61197-232-0.
- ^ Zimek, A.; Campello, R. J. G. B.; Sander, J. R. (2014). "Ensembles for unsupervised outlier detection". ACM SIGKDD Explorations Newsletter. 15: 11–22. doi:10.1145/2594473.2594476. S2CID 8065347.
- ^ Zimek, A.; Campello, R. J. G. B.; Sander, J. R. (2014). Data perturbation for outlier detection ensembles. Proceedings of the 26th International Conference on Scientific and Statistical Database Management – SSDBM '14. p. 1. doi:10.1145/2618243.2618257. ISBN 978-1-4503-2722-0.
- ^ Zhao, Yue; Nasrullah, Zain; Li, Zheng (2019). "Pyod: A python toolbox for scalable outlier detection". Journal of Machine Learning Research.
- ^ [1] Mathematica documentation