Automated machine learning

From Wikipedia, the free encyclopedia

Automated machine learning (AutoML) is the process of automating the end-to-end application of machine learning to real-world problems. In a typical machine learning application, practitioners must apply the appropriate data pre-processing, feature engineering, feature extraction, and feature selection methods to make the dataset amenable to machine learning. Following those preprocessing steps, practitioners must then perform algorithm selection and hyperparameter optimization to maximize the predictive performance of their final machine learning model. As many of these steps are often beyond the abilities of non-experts, AutoML was proposed as an artificial intelligence-based solution to the ever-growing challenge of applying machine learning.[1][2] Automating the end-to-end application of machine learning offers the advantages of simpler solutions, faster creation of those solutions, and models that often outperform hand-designed models.
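The manual steps described above (fitting candidate models and tuning their hyperparameters, then selecting by validation performance) can be sketched in miniature. The models, data, and function names below are invented for illustration and do not come from any real AutoML library:

```python
# Toy illustration of the steps AutoML automates: given candidate models and
# hyperparameter grids, exhaustively pick the combination with the lowest
# validation error. All names here are illustrative, not a real AutoML API.

def fit_mean(train, _):
    """Candidate model 1: predict the mean of the training targets."""
    mean = sum(y for _, y in train) / len(train)
    return lambda x: mean

def fit_ridge_1d(train, lam):
    """Candidate model 2: one-dimensional ridge regression; lam is a hyperparameter."""
    sxx = sum(x * x for x, _ in train) + lam
    sxy = sum(x * y for x, y in train)
    w = sxy / sxx
    return lambda x: w * x

def select_model(train, valid, candidates):
    """Algorithm selection + hyperparameter optimization by validation error."""
    def mse(predict):
        return sum((predict(x) - y) ** 2 for x, y in valid) / len(valid)
    return min(
        ((name, lam, fit(train, lam)) for name, fit, lams in candidates for lam in lams),
        key=lambda t: mse(t[2]),
    )

train = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
valid = [(4.0, 8.1), (5.0, 9.8)]
candidates = [
    ("mean", fit_mean, [None]),
    ("ridge", fit_ridge_1d, [0.0, 0.1, 1.0]),
]
name, lam, model = select_model(train, valid, candidates)
print(name, lam)  # the ridge model wins on this near-linear data
```

A real AutoML system automates exactly this loop, but over much larger model and hyperparameter spaces and with more sophisticated search than exhaustive enumeration.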

Targets of automation

Automated machine learning can target various stages of the machine learning process:[2]

  • Automated data preparation and ingestion (from raw data and miscellaneous formats)
    • Automated column type detection; e.g., boolean, discrete numerical, continuous numerical, or text
    • Automated column intent detection; e.g., target/label, stratification field, numerical feature, categorical text feature, or free text feature
    • Automated task detection; e.g., binary classification, regression, clustering, or ranking
  • Automated feature engineering
  • Automated model selection
  • Hyperparameter optimization of the learning algorithm and featurization
  • Automated pipeline selection under time, memory, and complexity constraints
  • Automated selection of evaluation metrics / validation procedures
  • Automated problem checking
    • Leakage detection
    • Misconfiguration detection
  • Automated analysis of results obtained
  • User interfaces and visualizations for automated machine learning
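As one concrete example from the targets above, automated column type detection can be sketched with a few heuristic rules. The rules and names here are illustrative assumptions, not a description of any particular system:

```python
# Minimal sketch of automated column type detection: classify each column of
# raw string data as boolean, numerical, or text. The rule set is invented
# for illustration; real systems use far richer heuristics and statistics.

def detect_column_type(values):
    cleaned = [v.strip().lower() for v in values if v.strip()]
    if all(v in {"true", "false", "0", "1", "yes", "no"} for v in cleaned):
        return "boolean"
    try:
        [float(v) for v in cleaned]  # every value parses as a number?
        return "numerical"
    except ValueError:
        return "text"

rows = [
    ["yes", "3.5", "alice"],
    ["no",  "2.0", "bob"],
    ["yes", "7.25", "carol"],
]
columns = list(zip(*rows))  # transpose rows into columns
types = [detect_column_type(col) for col in columns]
print(types)  # ['boolean', 'numerical', 'text']
```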

Examples

Notable platforms that tackle various stages of AutoML include:

Hyperparameter optimization and model selection

  • Auto-WEKA[3] is a Bayesian hyperparameter optimization layer on top of WEKA.
  • auto-sklearn[4] is a Bayesian hyperparameter optimization layer on top of scikit-learn.
  • ATM[5] is an open-source software library from the Human Data Interaction (HDI) project at MIT; it is a distributed, scalable AutoML system designed for ease of use.
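Auto-WEKA and auto-sklearn rely on Bayesian optimization; as a simpler stand-in, the search loop such tools automate can be sketched with random search over a toy objective. The objective function and parameter ranges below are invented for illustration:

```python
import random

# Random search over a hyperparameter space against a toy validation-error
# objective (lower is better). Bayesian optimization, as used by Auto-WEKA
# and auto-sklearn, replaces the blind sampling below with a model of the
# objective that proposes promising configurations.

def toy_validation_error(learning_rate, depth):
    # Invented stand-in for real validation error; minimized near lr=0.1, depth=5.
    return (learning_rate - 0.1) ** 2 + 0.01 * (depth - 5) ** 2

def random_search(objective, n_trials, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        params = {"learning_rate": rng.uniform(0.001, 1.0),
                  "depth": rng.randint(1, 10)}
        err = objective(**params)
        if best is None or err < best[0]:
            best = (err, params)
    return best

err, params = random_search(toy_validation_error, n_trials=200)
print(params)
```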

Full pipeline optimization

  • TPOT[6][7] is a Python library that automatically creates and optimizes full machine learning pipelines using genetic programming.
  • H2O Driverless AI[8] is an automated machine learning platform developed by H2O.ai for automated visualization, feature engineering, model training, hyperparameter optimization, and explainability.
  • TransmogrifAI[9][10] is a Scala/SparkML library created by Salesforce for automated data cleansing, feature engineering, model selection, and hyperparameter optimization.
  • RECIPE[11] is a framework that uses grammar-based genetic programming to build customized scikit-learn classification pipelines.
  • GA-Auto-MLC[12] and Auto-MEKAGGP[13] are freely available methods that perform automated multi-label classification in the MEKA software.[14]
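The genetic-programming approach behind TPOT and RECIPE can be illustrated with a small toy that evolves a pipeline encoded as a few categorical and numeric choices. The fitness function and search space below are invented and stand in for actually training and validating each candidate pipeline:

```python
import random

# Toy evolutionary search over pipeline configurations. A pipeline is a tuple
# (preprocessor, model, regularization strength); fitness is an invented score
# that simply prefers one particular combination, standing in for validation
# performance of a trained pipeline.

PREPROCESSORS = ["none", "scale", "pca"]
MODELS = ["tree", "knn", "linear"]

def fitness(pipeline):
    pre, model, c = pipeline
    score = {"none": 0.0, "scale": 0.2, "pca": 0.1}[pre]
    score += {"tree": 0.1, "knn": 0.0, "linear": 0.3}[model]
    score -= 0.05 * (c - 1.0) ** 2  # best regularization strength near 1.0
    return score

def mutate(pipeline, rng):
    pre, model, c = pipeline
    choice = rng.randrange(3)
    if choice == 0:
        pre = rng.choice(PREPROCESSORS)
    elif choice == 1:
        model = rng.choice(MODELS)
    else:
        c = max(0.01, c + rng.gauss(0, 0.3))
    return (pre, model, c)

def evolve(generations=30, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [("none", "tree", 0.5)] * pop_size
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        pop = survivors + [mutate(rng.choice(survivors), rng) for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
print(best)
```

TPOT's actual representation is a tree of scikit-learn operators rather than a fixed-length tuple, and RECIPE constrains candidates with a grammar, but the select-mutate-evaluate loop is the same idea.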

Deep neural network architecture search

  • Auto-Keras[15] is an open-source Python library for automated neural architecture search based on network morphism.
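The architecture-search loop can be illustrated in miniature with random search over network depth and width. The accuracy proxy below is invented and stands in for actually training each candidate network; real systems such as Auto-Keras[15] use network morphism and more sample-efficient search than the blind sampling shown here:

```python
import random

# Toy neural architecture search: randomly sample (depth, width) architectures
# and keep the one with the best score. proxy_accuracy is an invented stand-in
# for "train the candidate network and measure validation accuracy".

def proxy_accuracy(depth, width):
    # Peaks at a moderate depth and width, penalizing extremes.
    return 1.0 - 0.02 * (depth - 4) ** 2 - 0.001 * (width - 64) ** 2 / 64

def search_architectures(n_trials=100, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        arch = {"depth": rng.randint(1, 10),
                "width": rng.choice([16, 32, 64, 128, 256])}
        acc = proxy_accuracy(**arch)
        if best is None or acc > best[0]:
            best = (acc, arch)
    return best

acc, arch = search_architectures()
print(arch)
```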

See also

References

  1. ^ Thornton C, Hutter F, Hoos HH, Leyton-Brown K (2013). Auto-WEKA: Combined Selection and Hyperparameter Optimization of Classification Algorithms. KDD '13 Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining. pp. 847–855.
  2. ^ a b Hutter F, Caruana R, Bardenet R, Bilenko M, Guyon I, Kegl B, and Larochelle H. "AutoML 2014 @ ICML". AutoML 2014 Workshop @ ICML. Retrieved 2018-03-28.
  3. ^ Kotthoff L, Thornton C, Hoos HH, Hutter F, Leyton-Brown K (2017). "Auto-WEKA 2.0: Automatic model selection and hyperparameter optimization in WEKA". Journal of Machine Learning Research. 18 (25): 1–5.
  4. ^ Feurer M, Klein A, Eggensperger K, Springenberg J, Blum M, Hutter F (2015). "Efficient and Robust Automated Machine Learning". Advances in Neural Information Processing Systems 28 (NIPS 2015): 2962–2970.
  5. ^ Swearingen, Thomas; Drevo, Will; Cyphers, Bennett; Cuesta-Infante, Alfredo; Ross, Arun; Veeramachaneni, Kalyan (December 2017). "ATM: A distributed, collaborative, scalable system for automated machine learning". 2017 IEEE International Conference on Big Data (Big Data). IEEE: 151–162. doi:10.1109/bigdata.2017.8257923. ISBN 9781538627150.
  6. ^ Olson RS, Urbanowicz RJ, Andrews PC, Lavender NA, Kidd L, Moore JH (2016). Automating biomedical data science through tree-based pipeline optimization. Proceedings of EvoStar 2016. Lecture Notes in Computer Science. 9597. pp. 123–137. arXiv:1601.07925. doi:10.1007/978-3-319-31204-0_9. ISBN 978-3-319-31203-3.
  7. ^ Olson RS, Bartley N, Urbanowicz RJ, Moore JH (2016). Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science. Proceedings of EvoBIO 2016. Gecco '16. pp. 485–492. arXiv:1603.06212. doi:10.1145/2908812.2908918. ISBN 9781450342063.
  8. ^ Heller, Martin (2017-11-06). "Review: H2O.ai automates machine learning". InfoWorld.
  9. ^ Shubha Nabar (2018-08-16). "Open Sourcing TransmogrifAI – Automated Machine Learning for Structured Data - Salesforce Engineering". Salesforce Engineering. Retrieved 2018-08-16.
  10. ^ Kyle Wiggers (2018-08-16). "Salesforce open-sources TransmogrifAI, the machine learning library that powers Einstein". VentureBeat. Retrieved 2018-08-16. Once TransmogrifAI has extracted features from the dataset, it’s primed to begin automated model training. At this stage, it runs a cadre of machine learning algorithms in parallel on the data, automatically selects the best-performing model, and samples and recalibrates predictions to avoid imbalanced data.
  11. ^ de Sá, Alex G. C.; Pinto, Walter José G. S.; Oliveira, Luiz Otavio V. B.; Pappa, Gisele L. (2017), "RECIPE: A Grammar-Based Framework for Automatically Evolving Classification Pipelines", Lecture Notes in Computer Science, Springer International Publishing, pp. 246–261, doi:10.1007/978-3-319-55696-3_16, ISBN 9783319556956
  12. ^ de Sá, Alex G. C.; Pappa, Gisele L.; Freitas, Alex A. (2017). "Towards a Method for Automatically Selecting and Configuring Multi-label Classification Algorithms". Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO '17. New York, NY, USA: ACM: 1125–1132. doi:10.1145/3067695.3082053. ISBN 9781450349390.
  13. ^ de Sá, Alex G. C.; Freitas, Alex A.; Pappa, Gisele L. (2018). Auger, Anne; Fonseca, Carlos M.; Lourenço, Nuno; Machado, Penousal; Paquete, Luís; Whitley, Darrell (eds.). "Automated Selection and Configuration of Multi-Label Classification Algorithms with Grammar-Based Genetic Programming". Parallel Problem Solving from Nature – PPSN XV. Lecture Notes in Computer Science. Springer International Publishing. 11102: 308–320. doi:10.1007/978-3-319-99259-4_25. ISBN 9783319992594.
  14. ^ Read, Jesse; Reutemann, Peter; Pfahringer, Bernhard; Holmes, Geoff (January 2016). "Meka: A Multi-label/Multi-target Extension to Weka". J. Mach. Learn. Res. 17 (1): 667–671. ISSN 1532-4435.
  15. ^ Haifeng J, Qingquan S, Xia H (2018). "Auto-Keras: Efficient Neural Architecture Search with Network Morphism". arXiv:1806.10282 [cs.LG].