Error level analysis

From Wikipedia, the free encyclopedia

Error level analysis is the analysis of compression artifacts in digital data stored with lossy compression, such as JPEG images.

Principles

When used, lossy compression is normally applied uniformly to a set of data, such as an image, resulting in a uniform level of compression artifacts.

Alternatively, the data may consist of parts with different levels of compression artifacts. This difference may arise from the different parts having been repeatedly subjected to the same lossy compression a different number of times, or the different parts having been subjected to different kinds of lossy compression. A difference in the level of compression artifacts in different parts of the data may therefore indicate that the data has been edited.

In the case of JPEG, even a composite with parts subjected to matching compressions will have a difference in the compression artifacts.[1]

In order to make the typically faint compression artifacts more readily visible, the data to be analyzed is subjected to an additional round of lossy compression, this time at a known, uniform level, and the result is subtracted from the original data under investigation. The resulting difference image is then inspected manually for any variation in the level of compression artifacts. In 2007, N. Krawetz named this method "error level analysis".[1]
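The procedure above can be sketched with a toy model in which lossy compression is simulated by quantizing pixel values to multiples of a step size. This is an assumption for illustration only: real JPEG compression is far more complex, and all function and variable names here are invented, not from any forensics library.

```python
import random


def compress(pixels, step):
    """Toy stand-in for lossy compression: round each value to a multiple of step."""
    return [round(v / step) * step for v in pixels]


# An "image" as a flat list of pixel values 0..255.
random.seed(0)
original = [random.randrange(256) for _ in range(200)]

# The whole image is saved once with a lossy step of 5 ...
saved = compress(original, 5)

# ... then a region (indices 100..149) is spliced in from an
# uncompressed source, i.e. the image is edited.
edited = saved[:100] + original[100:150] + saved[150:]

# Error level analysis: resave at a known, uniform level and
# subtract from the data under investigation.
resaved = compress(edited, 5)
error = [abs(a - b) for a, b in zip(edited, resaved)]

# Untouched parts sit at the minimum error level (exactly zero in this
# toy model), while the spliced region shows a higher error level.
print(sum(error[:100]) / 100, sum(error[100:150]) / 50)
```

In this idealized model the previously compressed region resaves to itself, so its error level is exactly zero, while the freshly spliced region does not, which is the non-uniformity an analyst would look for in the difference image.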

Additionally, digital data formats such as JPEG sometimes include metadata describing the specific lossy compression used. If the observed compression artifacts differ from those expected from the metadata description, then the metadata may not describe the actual compressed data, which may indicate that the data have been edited.

Limitations

By its nature, data without lossy compression, such as a PNG image, cannot be subjected to error level analysis. Consequently, since a composite could be assembled from losslessly stored data and only then compressed uniformly, the presence of a uniform level of compression artifacts does not rule out editing of the data.

Additionally, any non-uniform compression artifacts in a composite may be removed by subjecting the composite to repeated, uniform lossy compression.[2] Also, if the image color space is reduced to 256 colors or less, for example, by conversion to GIF, then error level analysis will generate useless results.[3]
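The first of these limitations can be illustrated with the same kind of toy quantization model used above (an assumption standing in for JPEG; note that in this idealized model a single uniform resave already reaches the minimum error level, whereas a real JPEG may need several resaves):

```python
import random


def compress(pixels, step):
    """Toy stand-in for lossy compression: round each value to a multiple of step."""
    return [round(v / step) * step for v in pixels]


random.seed(1)
composite = [random.randrange(256) for _ in range(100)]  # freshly edited, uneven data

# One uniform lossy resave of the whole composite ...
flattened = compress(composite, 5)

# ... after which error level analysis at the same level yields a
# uniform, all-zero difference image: the non-uniform artifacts that
# would have revealed the edit are gone.
error = [abs(a - b) for a, b in zip(flattened, compress(flattened, 5))]
print(set(error))  # → {0}
```

This corresponds to the "black image" case noted in the FotoForensics documentation, where further resaves no longer alter the image and the algorithm can identify no modifications.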

More significantly, the interpretation of the level of compression artifacts in a given segment of the data is subjective, and the determination of whether editing has occurred is therefore not robust.[1]

Controversy

In May 2013, N. Krawetz used error level analysis on the 2012 World Press Photo of the Year and concluded on the Hacker Factor blog that it was "a composite" with modifications that "fail to adhere to the acceptable journalism standards used by Reuters, Associated Press, Getty Images, National Press Photographer's Association, and other media outlets". The World Press Photo organizers responded by letting two independent experts analyze the image files of the winning photographer and subsequently confirmed the integrity of the files. One of the experts, Hany Farid, said about error level analysis that "It incorrectly labels altered images as original and incorrectly labels original images as altered with the same likelihood". Krawetz responded by clarifying that "It is up to the user to interpret the results. Any errors in identification rest solely on the viewer".[4]

In May 2015, the citizen journalism team Bellingcat wrote that error level analysis revealed that the Russian Ministry of Defense had edited satellite images related to the Malaysia Airlines Flight 17 disaster.[5] In response, image forensics expert J. Kriese said about error level analysis: "The method is subjective and not based entirely on science", and that it is "a method used by hobbyists".[6] On his Hacker Factor blog, the inventor of error level analysis, N. Krawetz, criticized Bellingcat's use of error level analysis as "misinterpreting the results", but also criticized, on several points, J. Kriese's "ignorance" regarding error level analysis.[7]

References

  1. ^ a b c Wang, W.; Dong, J.; Tan, T. (October 2010). "Tampered Region Localization of Digital Color Images". Digital Watermarking: 9th International Workshop, IWDW 2010. Seoul, Korea: Springer. pp. 120–133. We are hardly able to tell the tampered region from the unchanged one sometimes just by human visual perception of JPEG compression noise 
  2. ^ "FotoForensics". fotoforensics.com. Retrieved 2015-09-20. If an image is resaved multiple times, then it may be entirely at a minimum error level, where more resaves do not alter the image. In this case, the ELA will return a black image and no modifications can be identified using this algorithm 
  3. ^ "FotoForensics - FAQ". fotoforensics.com. Retrieved 2015-09-20. 
  4. ^ Steadman, Ian (2013-05-16). "'Fake' World Press Photo isn't fake, is lesson in need for forensic restraint". Wired UK. Retrieved 2015-09-11. 
  5. ^ "bellingcat - MH17 - Forensic Analysis of Satellite Images Released by the Russian Ministry of Defence". bellingcat.com. 2015-05-31. Retrieved 2015-09-29. Error level analysis of the images also reveal the images have been edited 
  6. ^ Bidder, Benjamin (2015-06-04). "'Bellingcat Report Doesn't Prove Anything': Expert Criticizes Allegations of Russian MH17 Manipulation". Spiegel Online. Retrieved 2015-07-23. 
  7. ^ "Image Analysis - The Hacker Factor Blog". hackerfactor.com. Retrieved 2015-10-17. 
