Data pre-processing

From Wikipedia, the free encyclopedia

Data preprocessing is an important step in the data mining process. The phrase "garbage in, garbage out" is particularly applicable to data mining and machine learning projects. Data-gathering methods are often loosely controlled, resulting in out-of-range values (e.g., Income: −100), impossible data combinations (e.g., Sex: Male, Pregnant: Yes), missing values, and so on. Analyzing data that has not been carefully screened for such problems can produce misleading results. Thus, ensuring the representation and quality of the data must come first, before any analysis is run.[1] Often, data preprocessing is the most important phase of a machine learning project, especially in computational biology.[2]
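The kinds of problems described above can be screened for programmatically. A minimal sketch, using the hypothetical Income, Sex, and Pregnant fields from the examples in this paragraph:

```python
# Screen records for out-of-range values, impossible combinations,
# and missing values before any analysis is run.
records = [
    {"Income": 52000, "Sex": "Female", "Pregnant": "Yes"},
    {"Income": -100, "Sex": "Male", "Pregnant": "No"},     # out-of-range income
    {"Income": 31000, "Sex": "Male", "Pregnant": "Yes"},   # impossible combination
    {"Income": None, "Sex": "Female", "Pregnant": "No"},   # missing value
]

def screen(record):
    """Return a list of data-quality problems found in one record."""
    problems = []
    if record["Income"] is None:
        problems.append("missing income")
    elif record["Income"] < 0:
        problems.append("out-of-range income")
    if record["Sex"] == "Male" and record["Pregnant"] == "Yes":
        problems.append("impossible sex/pregnancy combination")
    return problems

# Map each problematic record's index to the problems found in it.
flagged = {i: screen(r) for i, r in enumerate(records) if screen(r)}
```

In practice such checks are domain-specific; the point is that screening is a mechanical step that can run before any model is trained.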

If there is much irrelevant and redundant information present, or noisy and unreliable data, then knowledge discovery during the training phase becomes more difficult. Data preparation and filtering steps can take a considerable amount of processing time. Data preprocessing includes cleaning, instance selection, normalization, transformation, and feature extraction and selection, among other steps. The product of data preprocessing is the final training set.
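To make one of these steps concrete, a common form of normalization is min–max scaling, which rescales a numeric feature linearly onto the [0, 1] interval. A minimal sketch:

```python
def min_max_normalize(values):
    """Rescale a list of numbers linearly onto the [0, 1] interval."""
    lo, hi = min(values), max(values)
    if lo == hi:  # constant feature: every value maps to 0.0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

incomes = [20000, 35000, 50000, 80000]
normalized = min_max_normalize(incomes)  # smallest -> 0.0, largest -> 1.0
```

Other normalization schemes (e.g., standardization to zero mean and unit variance) follow the same pattern of applying one formula per feature.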

Data pre-processing may affect the way in which the outcomes of the final data processing can be interpreted.[3] This aspect should be carefully considered when interpretation of the results is a key point, such as in the multivariate processing of chemical data (chemometrics).

Tasks of data pre-processing

Data mining

Data preprocessing has its origins in data mining.[4] The idea is to aggregate existing information and search through its content. It was later recognized that machine learning and neural networks also require a data preprocessing step. Preprocessing has thus become a universal technique used throughout computing.

From a user's perspective, data preprocessing often amounts to combining existing comma-separated values (CSV) files.[5] Data are usually stored in files. The CSV format was already mentioned, but the data may also be stored in a Microsoft Excel sheet or in a JSON file.[6] A self-written script is applied to the files; technically, such a script can be written in Python or in R.[7]
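A minimal sketch of such a script, using Python's standard csv module to concatenate two hypothetical CSV sources that share the same columns (in-memory file objects stand in for real files here):

```python
import csv
import io

# Two hypothetical CSV sources with the same header row.
file_a = io.StringIO("id,income\n1,52000\n2,31000\n")
file_b = io.StringIO("id,income\n3,47000\n")

def combine(*files):
    """Concatenate CSV files row by row, keeping a single logical header."""
    rows = []
    for f in files:
        rows.extend(csv.DictReader(f))  # each row becomes a dict keyed by column name
    return rows

combined = combine(file_a, file_b)  # three rows, one shared schema
```

A real script would open files from disk (and, for Excel or JSON sources, use an appropriate reader), but the shape of the task is the same: read, align columns, and write out one combined dataset.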

A user transforms existing files into a new one for many reasons. The objectives of data preprocessing include adding missing values, aggregating information, labeling data with categories (data binning), and smoothing a trajectory.[8] More advanced techniques, such as principal component analysis and feature selection, work with statistical formulas and are applied to complex datasets recorded by GPS trackers and motion-capture devices.
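Two of the objectives above can be sketched in a few lines: labeling numeric values with category bins, and smoothing a recorded trajectory with a simple moving average. The bin edges, labels, and window size below are illustrative choices, not fixed conventions:

```python
def bin_label(value, edges, labels):
    """Assign a categorical label to a numeric value (data binning)."""
    for edge, label in zip(edges, labels):
        if value < edge:
            return label
    return labels[-1]  # value is above the last edge

def moving_average(points, window=3):
    """Smooth a 1-D trajectory with a centered moving average."""
    half = window // 2
    smoothed = []
    for i in range(len(points)):
        chunk = points[max(0, i - half):i + half + 1]  # shrinks at the ends
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

labels = [bin_label(v, edges=[30000, 60000], labels=["low", "mid", "high"])
          for v in [20000, 45000, 90000]]
smoothed = moving_average([1.0, 5.0, 3.0, 7.0])
```

Both operations are typical of the "simple script" style of preprocessing: a deterministic rule applied value by value, producing a new column or a cleaned signal.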

Semantic data preprocessing

Complex problems call for more elaborate techniques for analyzing existing information. Instead of creating a simple script that aggregates different numerical values into one, it makes sense to focus on semantics-based data preprocessing.[9] The idea here is to build a dedicated ontology that explains, at a higher level, what the problem is about.[10] Protégé is the standard software tool for this purpose.[11] A second, more advanced technique is fuzzy preprocessing, in which numerical values are grounded with linguistic information: raw data are transformed into natural language.
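The fuzzy-preprocessing idea can be sketched with triangular membership functions that map a raw number to the linguistic term it fits best. The term names and breakpoints below are illustrative assumptions, not part of any standard:

```python
def triangular(x, a, b, c):
    """Membership degree of x in a triangular fuzzy set with corners (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)  # rising edge
    return (c - x) / (c - b)      # falling edge

# Illustrative fuzzy sets for a temperature reading in degrees Celsius.
TERMS = {
    "cold": (-20.0, 0.0, 15.0),
    "warm": (5.0, 20.0, 30.0),
    "hot":  (25.0, 35.0, 50.0),
}

def to_linguistic(x):
    """Ground a raw numeric value in the best-fitting linguistic term."""
    return max(TERMS, key=lambda term: triangular(x, *TERMS[term]))

label = to_linguistic(22.0)  # a mild reading grounded as a word, not a number
```

The output of such a step is natural-language-like categories ("cold", "warm", "hot") that downstream semantic tools can reason over, which is the grounding the paragraph describes.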

References

  1. ^ Pyle, D. (1999). Data Preparation for Data Mining. Los Altos, California: Morgan Kaufmann Publishers.
  2. ^ Chicco, D. (December 2017). "Ten quick tips for machine learning in computational biology". BioData Mining. 10 (35): 35. doi:10.1186/s13040-017-0155-3. PMC 5721660. PMID 29234465.
  3. ^ Oliveri, Paolo; Malegori, Cristina; Simonetti, Remo; Casale, Monica (2019). "The impact of signal pre-processing on the final interpretation of analytical outcomes – A tutorial". Analytica Chimica Acta. 1058: 9–17. doi:10.1016/j.aca.2018.10.055. PMID 30851858.
  4. ^ Alasadi, Suad A.; Bhaya, Wesam S. (2017). "Review of data preprocessing techniques in data mining". Journal of Engineering and Applied Sciences. 12 (16): 4102–4107.
  5. ^ Srivastava, Shweta (2014). "Weka: A Tool for Data preprocessing, Classification, Ensemble, Clustering and Association Rule Mining". International Journal of Computer Applications. Foundation of Computer Science. 88 (10): 26–29. Bibcode:2014IJCA...88j..26S. doi:10.5120/15389-3809.
  6. ^ Makwana, Chintan H.; Rathod, Kirit R. (2014). "An Efficient Technique for Web Log Preprocessing using Microsoft Excel". International Journal of Computer Applications. Foundation of Computer Science. 90 (12): 25–28. Bibcode:2014IJCA...90l..25H. doi:10.5120/15774-4517.
  7. ^ Li, Canchen (2019). "Preprocessing Methods and Pipelines of Data Mining: An Overview". arXiv:1906.08510 [cs.LG].
  8. ^ Alasadi, Suad A.; Bhaya, Wesam S. (2017). "Review of data preprocessing techniques in data mining". Journal of Engineering and Applied Sciences. 12 (16): 4102–4107.
  9. ^ Culmone, Rosario; Falcioni, Marco; Quadrini, Michela (2014). "An ontology-based framework for semantic data preprocessing aimed at human activity recognition". SEMAPRO 2014: The Eighth International Conference on Advances in Semantic Processing. Alexey Cheptsov, High Performance Computing Center Stuttgart (HLRS). S2CID 196091422.
  10. ^ Perez-Rey, David; Anguita, Alberto; Crespo, Jose (2006). "OntoDataClean: Ontology-Based Integration and Preprocessing of Distributed Data". Biological and Medical Data Analysis. Springer Berlin Heidelberg. pp. 262–272. doi:10.1007/11946465_24.
  11. ^ Fernandez, F. Mary Harin; Ponnusamy, R. (2016). "Data Preprocessing and Cleansing in Web Log on Ontology for Enhanced Decision Making". Indian Journal of Science and Technology. Indian Society for Education and Environment. 9 (10). doi:10.17485/ijst/2016/v9i10/88899.
