From Wikipedia, the free encyclopedia

Audification is an auditory display technique for representing a sequence of data values as sound. By definition, it is described as a "direct translation of a data waveform to the audible domain."[1] Audification interprets a data sequence, usually a time series, as an audio waveform in which input data are mapped to sound pressure levels. Various signal processing techniques are used to assess data features. The technique allows the listener to hear periodic components as frequencies. Audification typically requires large data sets with periodic components.[2]
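The "direct translation" described above can be sketched in a few lines: each data value becomes one audio sample, so periodic structure in the data is heard as pitch. The following is an illustrative sketch only — the function name `audify`, the normalization, and the output rate are our own choices, not taken from the cited sources:

```python
import math
import struct
import wave

def audify(data, filename="audified.wav", sample_rate=44100):
    """Map a sequence of data values directly to audio samples.

    Each data value becomes one sample of the output waveform, so a
    periodic component in the data is heard as a tone whose pitch
    depends on the playback rate.
    """
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0
    # Normalize values into [-1, 1], then scale to 16-bit PCM.
    samples = [int(32767 * (2.0 * (x - lo) / span - 1.0)) for x in data]
    with wave.open(filename, "wb") as w:
        w.setnchannels(1)       # mono
        w.setsampwidth(2)       # 16-bit samples
        w.setframerate(sample_rate)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))

# Example: a synthetic "measurement" containing a periodic component.
data = [math.sin(2 * math.pi * 440 * t / 44100) for t in range(44100)]
audify(data)  # writes one second of audio; the component is heard as a tone
```

In practice the synthetic sine would be replaced by real measurement data (seismic traces, EEG channels, and so on), and the playback rate chosen so the frequencies of interest land in the audible band.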

Audification is most commonly applied when the most direct and simple auditory representation of the data is desired. In practice it is often paired with visual analysis: the sound is broken down in ways that can also be understood visually, so that further information can be extracted from the data.


The idea of audification was introduced in 1992 by Greg Kramer, initially as a sonification technique. This origin is also why audification is still commonly considered a type of sonification.

The goal of audification is to allow the listener to audibly experience the results of scientific measurements or simulations.

A 2007 study by Sandra Pauletto and Andy Hunt at the University of York suggested that users were able to detect attributes such as noise, repetitive elements, regular oscillations, discontinuities, and signal power in audification of time-series data to a degree comparable with visual inspection of spectrograms.[3]


Applications include audification of seismic data[4] and of human neurophysiological signals.[5] An example is the esophageal stethoscope, which amplifies naturally occurring sounds without conveying inherently silent variables such as the results of gas analysis.[6]


Converting ultrasound to audible sound is a form of audification that enables echolocation.[7][8] Other uses in the medical field include the stethoscope[9] and the audification of an EEG.[10]
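One common way to bring ultrasound into the audible range is heterodyning: multiplying the ultrasonic signal by a local oscillator yields sum and difference frequencies, and the difference falls within human hearing. A minimal sketch with synthetic signals — the sample rate and frequencies here are hypothetical values chosen for illustration, not taken from the cited studies:

```python
import math

rate = 192000                  # sample rate high enough to represent ultrasound
f_ultra, f_osc = 40000, 37000  # 40 kHz signal, 37 kHz local oscillator
n = rate // 100                # 10 ms of signal

# Multiply the ultrasonic signal by the oscillator (heterodyning).
mixed = [
    math.sin(2 * math.pi * f_ultra * t / rate)
    * math.sin(2 * math.pi * f_osc * t / rate)
    for t in range(n)
]

# sin(a)*sin(b) = 1/2*cos(a-b) - 1/2*cos(a+b): after low-pass filtering,
# only the 3 kHz difference tone remains, which is easily audible.
# Correlating with a 3 kHz cosine confirms that component is present.
f_diff = f_ultra - f_osc
corr = sum(m * math.cos(2 * math.pi * f_diff * t / rate)
           for t, m in enumerate(mixed)) / n
print(f"correlation with {f_diff} Hz tone: {corr:.3f}")  # ~0.25 (1/2 * 1/2)
```

A real heterodyne "bat detector" works on the same principle, with the microphone signal in place of the synthetic sine and an analog or digital low-pass filter removing the sum frequency.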


The development of electronic music can also be considered the history of audification, because all electronic instruments involve electrical processes that are audified through a loudspeaker.


Audification is of interest for research into auditory seismology and is used in earthquake prediction.[11] Applications include using seismic data to differentiate bomb blasts from earthquakes.[1]

The technique presents the sound waves of earthquakes alongside a visual representation, allowing both the eyes and the ears to contribute to understanding.
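Seismic energy is concentrated below roughly 20 Hz, under the lower limit of human hearing, so seismic audification typically time-compresses the record: samples acquired at a low rate are played back at an audio rate, multiplying every frequency by the same factor. A back-of-the-envelope sketch — the rates below are typical values, not figures from the cited work:

```python
recording_rate = 100    # Hz, a common seismometer sampling rate
playback_rate = 44100   # Hz, a standard audio rate

# Playing the low-rate record at the audio rate compresses time,
# scaling every frequency in the data by the same factor.
speedup = playback_rate / recording_rate  # 441x time compression

for f_seismic in (0.1, 1.0, 20.0):
    f_audible = f_seismic * speedup
    print(f"{f_seismic:5.1f} Hz in the data -> {f_audible:7.1f} Hz on playback")
```

With this factor a 1 Hz seismic oscillation is heard as a 441 Hz tone; a year of data, correspondingly, plays in under a day.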


NASA has used audification to represent radio and plasma wave[12] measurements.[13]


Both sonification and audification are representational techniques in which data sets, or selected features of them, are mapped into audio signals.[14] However, audification is a kind of sonification, a term which encompasses all techniques for representing data in non-speech audio.[citation needed] Their relationship is illustrated by the fact that sonifications in which data values directly define the audio signal are called audifications.[15]


  1. ^ a b Dean, Roger (2009). The Oxford Handbook of Computer Music. New York: Oxford University Press. p. 321. ISBN 9780195331615.
  2. ^ Hermann, T. & Ritter, H. (2004), "Sound and meaning in auditory data display" (PDF), Proceedings of the IEEE, IEEE, 92 (4): 730–741, doi:10.1109/jproc.2004.825904, S2CID 12354787
  3. ^ Pauletto, S. & Hunt, A. (2005), Brazil, Eoin (ed.), "A comparison of audio & visual analysis of complex time-series data sets" (PDF), Proceedings of the 11th International Conference on Auditory Display (ICAD2005): 175–181
  4. ^ Dombois, Florian (2001), Hiipakka, J.; Zacharov, N.; Takala, T. (eds.), "Using audification in planetary seismology" (PDF), Proceedings of the 7th International Conference on Auditory Display (ICAD2001): 227–230
  5. ^ Olivan, J.; Kemp, B. & Roessen, M. (2004), "Easy listening to sleep recordings: tools and examples" (PDF), Sleep Medicine, 5 (6): 601–603, doi:10.1016/j.sleep.2004.07.010, PMID 15511709, archived from the original (PDF) on 2012-04-25
  6. ^ Sanderson, Penelope; Watson, Marcus; Russell, W. John (2005). "Advanced Patient Monitoring Displays: Tools for Continuous Informing". Anesthesia and Analgesia. 101 (1): 161–8, table of contents. doi:10.1213/01.ANE.0000154080.67496.AE. PMID 15976225. S2CID 18818792.
  7. ^ Davies, Claire. "Where did that sound come from? Comparing the ability to localise using audification or audition".
  8. ^ Davies, Clare (2008). Audification of Ultrasound for Human Echolocation. Clare Davies.
  9. ^ "Introduction to Digital Stethoscopes and Electrical Component Selection Criteria - Tutorial - Maxim". Retrieved 2019-05-07.
  10. ^ Temko, A.; Marnane, W.; Boylan, G.; O'Toole, J. M.; Lightbody, G. (August 2014). "Neonatal EEG audification for seizure detection". 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. 2014: 4451–4454. doi:10.1109/EMBC.2014.6944612. ISBN 978-1-4244-7929-0. PMID 25570980. S2CID 18784120.
  11. ^ "Sounds of Seismic - Earth System Soundscape". Retrieved 2019-05-07.
  12. ^ Scarf, F. L.; Gurnett, D. A.; Kurth, W. S.; Coroniti, F. V.; Kennel, C. F.; Poynter, R. L. (1987). "Plasma wave measurements in the magnetosphere of Uranus". Journal of Geophysical Research: Space Physics. 92 (A13): 15217–15224. Bibcode:1987JGR....9215217S. doi:10.1029/JA092iA13p15217. ISSN 2156-2202.
  13. ^ Dombois, Florian. "Using Audification in planetary seismology" (PDF).
  14. ^ Vickers, Paul; Holdrich, Robert (December 2017). "Direct Segmented Sonification of Characteristic Features of the Data Domain". arXiv:1711.11368 [cs.HC].
  15. ^ Philipsen, Lotte; Kjærgaard, Rikke (2018). The Aesthetics of Scientific Data Representation: More than Pretty Pictures. New York: Routledge. ISBN 9781138679375.