Data profiling is the process of examining the data available from an existing information source (e.g. a database or a file) and collecting statistics or informative summaries about that data. The purpose of these statistics may be to:
- Find out whether existing data can be easily used for other purposes
- Improve the ability to search data by tagging it with keywords, descriptions, or assigning it to a category
- Assess data quality, including whether the data conforms to particular standards or patterns
- Assess the risk involved in integrating data in new applications, including the challenges of joins
- Discover metadata of the source database, including value patterns and distributions, key candidates, foreign-key candidates, and functional dependencies
- Assess whether known metadata accurately describes the actual values in the source database
- Understand data challenges early in any data-intensive project, so that late-project surprises are avoided; finding data problems late in a project can lead to delays and cost overruns
- Gain an enterprise-wide view of all data, for uses such as master data management, where key data is needed, or data governance, for improving data quality
Data profiling refers to the analysis of information for use in a data warehouse in order to clarify the structure, content, relationships, and derivation rules of the data. Profiling helps not only to understand anomalies and assess data quality, but also to discover, register, and assess enterprise metadata. The result of the analysis is used to determine the suitability of the candidate source systems, usually providing the basis for an early go/no-go decision, and also to identify problems for later solution design.

Data profiling takes place in a variety of settings, and much of the time people are unaware that their data is being collected: connecting to wifi in public locations, entering an email address in a contest, downloading an app, or completing a survey can all contribute data. Because it gives researchers access to comprehensive collections of data, data profiling plays a growing role in data-driven applications.
How Data Profiling is Conducted
Data profiling utilizes methods of descriptive statistics such as minimum, maximum, mean, mode, percentile, standard deviation, frequency, variation, aggregates such as count and sum, and additional metadata information obtained during data profiling such as data type, length, discrete values, uniqueness, occurrence of null values, typical string patterns, and abstract type recognition. The metadata can then be used to discover problems such as illegal values, misspellings, missing values, varying value representation, and duplicates.
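The single-column statistics and metadata listed above can be sketched in a few lines. The following is a minimal illustration, not the method of any particular profiling tool; the function name, the null markers recognized, and the digit/letter pattern encoding are all assumptions made for the example.

```python
import re
import statistics

def profile_column(values):
    """Compute simple profiling statistics for one column of raw values.

    `values` is a list of strings or None, as read from a file or database.
    Names and conventions here are illustrative, not from a specific tool.
    """
    # Treat None, empty strings, and the literal "NULL" as missing values.
    nulls = [v for v in values if v in (None, "", "NULL")]
    present = [v for v in values if v not in (None, "", "NULL")]
    numeric = []
    for v in present:
        try:
            numeric.append(float(v))
        except ValueError:
            pass

    profile = {
        "count": len(values),
        "null_count": len(nulls),
        "distinct_count": len(set(present)),
        "is_unique": len(set(present)) == len(present),  # candidate-key hint
        "max_length": max((len(str(v)) for v in present), default=0),
        # Typical string patterns: digits become 9, letters become A.
        "patterns": {re.sub(r"[A-Za-z]", "A",
                            re.sub(r"\d", "9", v)) for v in present},
    }
    if numeric and len(numeric) == len(present):  # column is fully numeric
        profile.update(
            min=min(numeric),
            max=max(numeric),
            mean=statistics.mean(numeric),
            stdev=statistics.pstdev(numeric),
        )
    return profile

# Example: a postal-code column with one null and one malformed value.
print(profile_column(["12345", "90210", None, "1234X"]))
```

The pattern set (`{"99999", "9999A"}` for the example) is what surfaces problems such as the malformed `"1234X"`: a column expected to hold five-digit codes should yield a single pattern.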
Different analyses are performed for different structural levels. For example, single columns can be profiled individually to gain an understanding of the frequency distribution of different values, and the type and use of each column. Embedded value dependencies can be exposed in a cross-column analysis. Finally, overlapping value sets, possibly representing foreign-key relationships between entities, can be explored in an inter-table analysis.
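The inter-table analysis mentioned above can be sketched as a containment check: a column pair is a foreign-key candidate when (almost) all of one column's values appear in the other. The table names and the 0.95 threshold below are illustrative assumptions, not fixed conventions.

```python
# Inter-table analysis sketch: flag a column pair as a foreign-key candidate
# when the distinct values of a child column are (almost) all contained in a
# candidate parent key column.

def containment(child_values, parent_values):
    """Fraction of distinct non-null child values found in the parent column."""
    child = {v for v in child_values if v is not None}
    parent = set(parent_values)
    if not child:
        return 0.0
    return len(child & parent) / len(child)

orders_customer_id = [1, 2, 2, 3, 7]   # child column from an "orders" table
customers_id = [1, 2, 3, 4, 5]         # key column from a "customers" table

score = containment(orders_customer_id, customers_id)
print(f"containment = {score:.2f}")    # 3 of 4 distinct values match: 0.75
if score >= 0.95:                      # assumed threshold
    print("foreign-key candidate")
else:
    print("orphan values present; weak candidate")
```

Because every child column must be compared against every potential key column, this check also illustrates why the computational cost of profiling grows sharply at the cross-table level.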
Normally, purpose-built tools are used for data profiling to ease the process. The computational complexity increases when going from single-column, to single-table, to cross-table structural profiling. Therefore, performance is an evaluation criterion for profiling tools.
When Data Profiling is Conducted
According to Kimball, data profiling is performed several times and with varying intensity throughout the data warehouse development process. A light profiling assessment should be undertaken immediately after candidate source systems have been identified and the DW/BI business requirements have been established. The purpose of this initial analysis is to clarify at an early stage whether the correct data is available at the appropriate level of detail and whether anomalies can be handled subsequently. If this is not the case, the project may be terminated.
In addition, more in-depth profiling is done prior to the dimensional modeling process in order to assess what is required to convert the data into a dimensional model. Detailed profiling extends into the ETL system design process in order to determine the appropriate data to extract and which filters to apply to the data set.
Additionally, data profiling may be conducted in the data warehouse development process after data has been loaded into staging, the data marts, etc. Conducting profiling at these stages helps ensure that data cleaning and transformations have been done correctly and in compliance with requirements.
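A post-load check of this kind can be sketched as re-profiling the staged rows and comparing the results against expectations recorded when the source was first profiled. The rule format and thresholds below are assumptions made for the example, not rules from Kimball's method.

```python
# Minimal post-load profiling check: compare staged rows against expectations
# (row count, per-column null rates) recorded during source profiling, so that
# broken cleaning or transformation steps are caught early.

def check_staged(rows, expectations):
    """Return a list of human-readable violations; an empty list means pass."""
    violations = []
    if len(rows) < expectations["min_rows"]:
        violations.append(
            f"row count {len(rows)} below {expectations['min_rows']}")
    for column, max_null_rate in expectations["max_null_rate"].items():
        nulls = sum(1 for r in rows if r.get(column) is None)
        rate = nulls / len(rows) if rows else 1.0
        if rate > max_null_rate:
            violations.append(
                f"{column}: null rate {rate:.2f} exceeds {max_null_rate}")
    return violations

staged = [{"id": 1, "email": "a@x.com"}, {"id": 2, "email": None}]
rules = {"min_rows": 2, "max_null_rate": {"id": 0.0, "email": 0.25}}
print(check_staged(staged, rules))  # the email null rate (0.50) violates 0.25
```

In practice such checks would run automatically after each load, failing the pipeline rather than printing.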
Benefits and Examples
The benefits of data profiling are to improve data quality, shorten the implementation cycle of major projects, and improve users' understanding of data. Discovering business knowledge embedded in the data itself is one of the significant benefits of data profiling, and it is one of the most effective technologies for improving data accuracy in corporate databases.

One example of data profiling is health tracking. Data is collected from apps and other media to build a general picture of the health and well-being of a population, covering topics such as fitness, menstrual cycles, mental health, and conditions such as diabetes, cardiovascular disease, and obesity. The statistics gained from these platforms capture a broad range of user perspectives and experiences. Health care professionals can use this information to identify common patterns among users, judge whether use of an app is actually improving patients' health, and tailor the app to patients' needs. One concern is the entry of inaccurate information; however, assuming the majority of users enter correct information, the aggregate statistics will typically balance out.
Data Profiling Tools
Some data profiling tools are free software, though not all free tools are open-source projects. In general, their functionality is more limited than that of commercial products, and they may not offer free telephone or online support. Furthermore, their documentation is not always thorough. Nevertheless, some small companies use these free tools instead of expensive commercial software, given the benefits that free tools provide.
See Also
- Data quality
- Data governance
- Master data management
- Database normalization
- Data visualization
- Analysis paralysis
- Data analysis
References
- Johnson, Theodore (2009). "Data Profiling". Encyclopedia of Database Systems. Springer, Heidelberg.
- Woodall, Philip; Oberhofer, Martin; Borek, Alexander (2014). "A classification of data quality assessment and improvement methods". International Journal of Information Quality. 3 (4). doi:10.1504/ijiq.2014.068656.
- Kimball, Ralph; et al. (2008). The Data Warehouse Lifecycle Toolkit (Second ed.). Wiley. p. 376. ISBN 9780470149775.
- Loshin, David (2009). Master Data Management. Morgan Kaufmann. pp. 94–96. ISBN 9780123742254.
- Loshin, David (2003). Business Intelligence: The Savvy Manager’s Guide, Getting Onboard with Emerging IT. Morgan Kaufmann. pp. 110–111. ISBN 9781558609167.
- Rahm, Erhard; Hai Do, Hong (December 2000). "Data Cleaning: Problems and Current Approaches". Bulletin of the Technical Committee on Data Engineering. 23 (4). IEEE Computer Society.
- Singh, Ranjit; Singh, Kawaljeet; et al. (May 2010). "A Descriptive Classification of Causes of Data Quality Problems in Data Warehousing". IJCSI International Journal of Computer Science Issues. 7 (3).
- Kimball, Ralph (2004). "Kimball Design Tip #59: Surprising Value of Data Profiling" (PDF). Kimball Group.
- Olson, Jack E. (2003). Data Quality: The Accuracy Dimension. Morgan Kaufmann. pp. 140–142.
- Dai, Wei; Wardlaw, Isaac. "Data Profiling Technology of Data Governance Regarding Big Data: Review and Rethinking". Information Technology, New Generations. pp. 439–450.