Data editing is the process of reviewing and adjusting collected survey data in order to control the quality of the data. It can be performed manually, with the assistance of a computer, or by a combination of both.
The term interactive editing is commonly used for modern computer-assisted manual editing. Most interactive data editing tools used at National Statistical Institutes (NSIs) allow the user to check the specified edits during or after data entry and, if necessary, to correct erroneous data immediately. Several approaches can be followed to correct erroneous data:
- Recontact the respondent
- Compare the respondent's data to their data from the previous year
- Compare the respondent's data to data from similar respondents
- Use the subject matter knowledge of the human editor
Interactive editing is a standard way to edit data. It can be used for both categorical and continuous data. Interactive editing reduces the time needed to complete the cyclical process of review and adjustment.
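The edit checking that such tools perform during data entry can be sketched as follows. This is a minimal illustration, not the interface of any particular NSI tool; the field names and the two edit rules (a ratio edit and a balance edit) are assumed for the example.

```python
# Sketch of edit checking during data entry, as done by interactive
# editing tools. The rules and field names are hypothetical examples.

def check_record(record):
    """Return a message for every edit rule the record violates."""
    violations = []
    # Ratio edit: turnover per employee should be within a plausible band.
    if record["employees"] > 0:
        ratio = record["turnover"] / record["employees"]
        if not (10 <= ratio <= 1000):
            violations.append("turnover per employee outside [10, 1000]")
    # Balance edit: profit must equal turnover minus costs.
    if record["turnover"] - record["costs"] != record["profit"]:
        violations.append("profit != turnover - costs")
    return violations

record = {"turnover": 5000, "costs": 4000, "profit": 900, "employees": 12}
print(check_record(record))  # the balance edit fails: 5000 - 4000 != 900
```

In an interactive tool, each reported violation would be shown to the human editor, who can then apply one of the correction approaches listed above.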
Selective editing is an umbrella term for several methods used to identify influential errors[note 1] and outliers.[note 2] Selective editing techniques aim to apply interactive editing to a well-chosen subset of the records, so that the limited time and resources available for interactive editing are allocated to the records where editing has the greatest effect on the quality of the final estimates of publication figures. In selective editing, the data is split into two streams:
- The critical stream
- The non-critical stream
The critical stream consists of records that are more likely to contain influential errors; these records are edited in the traditional interactive manner. The records in the non-critical stream, which are unlikely to contain influential errors, are not edited interactively.
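The split into streams is typically driven by a score function that estimates how much a record could influence the published totals. The sketch below assumes a simple and common form of score (design weight times the deviation from an anticipated value) and an illustrative threshold; actual score functions and cutoffs vary between surveys.

```python
# Sketch of selective editing: a score function ranks records by their
# likely influence on the estimates, and only high-scoring records go
# to the critical stream for interactive editing. The score form,
# anticipated values, and threshold are illustrative assumptions.

def score(reported, anticipated, weight):
    """Influence score: design weight times deviation from the anticipated value."""
    return weight * abs(reported - anticipated)

def split_streams(records, threshold):
    """Split record ids into a critical and a non-critical stream."""
    critical, non_critical = [], []
    for r in records:
        s = score(r["reported"], r["anticipated"], r["weight"])
        (critical if s >= threshold else non_critical).append(r["id"])
    return critical, non_critical

records = [
    {"id": "A", "reported": 1200, "anticipated": 1000, "weight": 5},  # score 1000
    {"id": "B", "reported": 480,  "anticipated": 500,  "weight": 2},  # score 40
    {"id": "C", "reported": 9000, "anticipated": 2000, "weight": 1},  # score 7000
]
critical, non_critical = split_streams(records, threshold=500)
print(critical, non_critical)  # ['A', 'C'] ['B']
```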
There are two methods of macro editing:
The first, the aggregation method, is followed in almost every statistical agency before publication: verifying whether figures to be published seem plausible. This is accomplished by comparing quantities in publication tables with the same quantities in previous publications. If an unusual value is observed, a micro-editing procedure is applied to the individual records and fields contributing to the suspicious quantity.
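This comparison of publication quantities can be sketched as a simple tolerance check against the previous period. The table cells, figures, and the 15% tolerance below are illustrative assumptions, not values from any actual publication.

```python
# Sketch of the aggregation approach to macro editing: publication
# totals are compared with the same totals from the previous period,
# and cells that move more than an assumed tolerance are flagged so
# that their contributing records can be micro-edited.

def flag_cells(current, previous, tolerance=0.15):
    """Return table cells whose relative change exceeds the tolerance."""
    flagged = []
    for cell, value in current.items():
        old = previous[cell]
        if old and abs(value - old) / abs(old) > tolerance:
            flagged.append(cell)
    return flagged

current  = {"retail": 1030, "transport": 520, "catering": 900}
previous = {"retail": 1000, "transport": 510, "catering": 610}
print(flag_cells(current, previous))  # only 'catering' moved more than 15%
```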
The second, the distribution method, uses the available data to characterize the distribution of the variables. All individual values are then compared with this distribution. Records containing values that could be considered uncommon, given the distribution, are candidates for further inspection and possibly for editing.
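One common way to compare individual values with the distribution is a robust outlier rule based on the median and the median absolute deviation (MAD). The cutoff of 3.5 below is a conventional but assumed choice; other distance measures are equally possible.

```python
# Sketch of the distribution approach to macro editing: each value is
# compared with the distribution of its variable via a robust
# median/MAD rule, and unusual values are flagged for inspection.

import statistics

def distribution_outliers(values, cutoff=3.5):
    """Return indices of values whose robust z-score exceeds the cutoff."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return []
    # 0.6745 scales the MAD so the score is comparable to a z-score.
    return [i for i, v in enumerate(values)
            if abs(0.6745 * (v - med) / mad) > cutoff]

values = [12, 14, 13, 15, 14, 13, 98, 12]
print(distribution_outliers(values))  # the value 98 at index 6 stands out
```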
In automatic editing, records are edited by a computer without human intervention. Prior knowledge of the values of a single variable or of a combination of variables can be formulated as a set of edit rules which specify or constrain the admissible values.
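A minimal sketch of such rule-based automatic editing is shown below. The two deductive corrections (a sign error and a "thousand error", where a respondent reports in units instead of thousands) are classic examples; the field name, the anticipated value, and the ratio band used to detect the unit error are assumptions for illustration.

```python
# Sketch of automatic editing: a record is checked against edit rules
# and, where a rule failure has an obvious deductive fix, it is
# corrected without human intervention.

def auto_edit(record, anticipated_turnover):
    """Apply simple deductive corrections to a copy of the record."""
    fixed = dict(record)
    # Edit rule: turnover must be non-negative -> assume a sign error.
    if fixed["turnover"] < 0:
        fixed["turnover"] = -fixed["turnover"]
    # Edit rule: turnover around 1000x the anticipated value suggests
    # the respondent reported in units instead of thousands.
    if anticipated_turnover and 500 <= fixed["turnover"] / anticipated_turnover <= 2000:
        fixed["turnover"] /= 1000
    return fixed

print(auto_edit({"turnover": -950_000}, anticipated_turnover=1000))
# -> {'turnover': 950.0}
```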
- Data cleansing
- Data pre-processing
- Data wrangling
- Iterative proportional fitting
- Triangulation (social science)
- errors that have a substantial impact on the publication figures
- values that do not fit a model of the data well