Audification

From Wikipedia, the free encyclopedia

Audification is an auditory display technique for representing a sequence of data values as sound. It has been defined as the "direct translation of a data waveform to the audible domain."[1] An audification interprets the data sequence, usually a time series, as an audio waveform: input data values are mapped directly to sound pressure levels. Signal-processing techniques are often applied to bring out salient features of the data.
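
The direct translation described above can be sketched in Python using only the standard library (a minimal illustration, not drawn from the cited literature; the function name `audify`, the output filename, and the playback rate are arbitrary assumptions):

```python
import math
import struct
import wave

def audify(samples, path="audified.wav", rate=44100):
    """Audification sketch: each data point becomes one audio sample.

    The series is normalized to [-1, 1] and scaled to 16-bit PCM, so
    periodic structure in the data is heard as pitch when the file is
    played back at `rate` samples per second.
    """
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    pcm = [int(32767 * (2.0 * (s - lo) / span - 1.0)) for s in samples]
    with wave.open(path, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(struct.pack("<%dh" % len(pcm), *pcm))

# One second of synthetic "data" containing a 440 Hz oscillation
# becomes an audible 440 Hz tone.
data = [math.sin(2 * math.pi * 440 * t / 44100) for t in range(44100)]
audify(data)
```

Because every data point maps to exactly one audio sample, a dataset needs tens of thousands of values to yield even a second of sound at ordinary audio rates, which is why the technique suits large datasets.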

Audification is particularly applicable to large datasets with periodic components. An audification requires many data values, and it allows the listener to hear periodic components as frequencies.[2] For instance, seismic data has been audified, enabling listeners to successfully differentiate events caused by bomb blasts from those caused by earthquakes.[1]

A 2007 study by Sandra Pauletto and Andy Hunt at the University of York suggests that users were able to detect attributes such as noise, repetitive elements, regular oscillations, discontinuities, and signal power in audifications of time-series data to a degree comparable with visual inspection of spectrograms.[3] Applications include audification of seismic data[4] and of human neurophysiological signals.[5] An example is the esophageal stethoscope, which amplifies naturally occurring sound without conveying inherently noiseless variables such as the results of gas analysis.[6]

Audification vs. sonification

Both sonification and audification are representational techniques in which data sets, or selected features of them, are mapped into audio signals.[7] Audification is a specific kind of sonification, the broader term that encompasses all techniques for representing data in non-speech audio.[citation needed] The relationship can be seen in the naming convention: sonifications in which the data values directly define the audio signal are called audifications.[8]
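
The distinction can be sketched in Python (a minimal illustration under assumed conventions; the sample rate, base frequency, and mapping constants are arbitrary). In audification the data values *are* the waveform; in a parameter-mapping sonification each value instead controls a synthesis parameter, such as the pitch of a short tone:

```python
import math

RATE = 8000  # samples per second (illustrative choice)

def audify(data):
    """Audification: each data value becomes one audio sample,
    normalized to the range [-1, 1]."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0
    return [2.0 * (v - lo) / span - 1.0 for v in data]

def sonify_pitch(data, note_dur=0.1):
    """Parameter-mapping sonification: each data value sets the
    frequency of a short tone rather than the waveform itself."""
    out = []
    for v in data:
        freq = 220.0 + 10.0 * v  # hypothetical value-to-pitch mapping
        n = int(RATE * note_dur)
        out.extend(math.sin(2 * math.pi * freq * i / RATE) for i in range(n))
    return out

data = [3.0, 7.0, 5.0]
print(len(audify(data)))        # -> 3 samples: one per data value
print(len(sonify_pitch(data)))  # -> 2400 samples: three 800-sample tones
```

The contrast in output length shows why audification needs large datasets while other sonifications do not: three data values yield only three audio samples under audification, but three audible tones under the pitch mapping.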

References

  1. ^ a b Dean, Roger (2009). The Oxford Handbook of Computer Music. New York: Oxford University Press. p. 321. ISBN 9780195331615.
  2. ^ Hermann, T. & Ritter, H. (2004), "Sound and meaning in auditory data display" (PDF), Proceedings of the IEEE, IEEE, 92 (4): 730–741, doi:10.1109/jproc.2004.825904
  3. ^ Pauletto, S. & Hunt, A. (2005), Brazil, Eoin, ed., "A comparison of audio & visual analysis of complex time-series data sets" (PDF), Proceedings of the 11th International Conference on Auditory Display (ICAD2005): 175–181
  4. ^ Dombois, Florian (2001), Hiipakka, J.; Zacharov, N.; Takala, T., eds., "Using audification in planetary seismology" (PDF), Proceedings of the 7th International Conference on Auditory Display (ICAD2001): 227–230
  5. ^ Olivan, J.; Kemp, B. & Roessen, M. (2004), "Easy listening to sleep recordings: tools and examples" (PDF), Sleep Medicine, 5 (6): 601–603, doi:10.1016/j.sleep.2004.07.010, archived from the original (PDF) on 2012-04-25
  6. ^ Sanderson, Penelope; Watson, Marcus; Russell, W. John (2005). "Advanced Patient Monitoring Displays: Tools for Continuous Informing" (PDF). Semantic Scholar. Retrieved October 11, 2018.
  7. ^ Vickers, Paul; Höldrich, Robert (December 2017). "Direct Segmented Sonification of Characteristic Features of the Data Domain" (PDF). arXiv. Retrieved October 11, 2018.
  8. ^ Philipsen, Lotte; Kjærgaard, Rikke (2018). The Aesthetics of Scientific Data Representation: More than Pretty Pictures. New York: Routledge. ISBN 9781138679375.