Sonification
Sonification is the use of non-speech audio to convey information or perceptualize data.[1] Auditory perception has advantages in temporal, spatial, amplitude, and frequency resolution that open possibilities as an alternative or complement to visualization techniques.
For example, the rate of clicking of a Geiger counter conveys the level of radiation in the immediate vicinity of the device.
Though many experiments with data sonification have been presented in forums such as the International Community for Auditory Display (ICAD), sonification faces many challenges to widespread use for presenting and analyzing data. For example, studies show it is difficult, but essential, to provide adequate context for interpreting sonifications of data.[1][2] Many sonification attempts are also coded from scratch because of the lack of flexible tools for sonification research and data exploration.[3]
History
The Geiger counter, invented in 1908, is one of the earliest and most successful applications of sonification. A Geiger counter contains a tube of low-pressure gas; each particle it detects ionizes the gas and produces a pulse of current, which is heard as an audio click. The original version could detect only alpha particles. In 1928, Geiger and Walther Müller (a PhD student of Geiger) improved the counter so that it could detect more types of ionizing radiation.
In 1913, Dr. Edmund Fournier d'Albe of the University of Birmingham invented the optophone, which used selenium photosensors to detect black print and convert it into an audible output.[4] A blind reader could hold a book up to the device and hold an apparatus to the area they wanted to read. The optophone played a fixed group of notes: g c' d' e' g' b' c'' e''. Each note corresponded to a position in the optophone's reading area, and a note was silenced if black ink was sensed at that position. The missing notes thus indicated the positions of black ink on the page, which the listener could use to read.
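The note-silencing scheme lends itself to a compact illustration. The following is a minimal sketch of the mapping described above; the boolean pixel-column representation and the note frequencies are modern conveniences for illustration, not details of the 1913 device.

```python
# Sketch of the optophone's note-silencing scheme: eight fixed notes,
# one per vertical position in the reading area; a note is silenced
# wherever black ink is sensed, so letters are read as missing notes.

# Approximate frequencies (Hz) of the chord g c' d' e' g' b' c'' e''
OPTOPHONE_NOTES = [196.00, 261.63, 293.66, 329.63,
                   392.00, 493.88, 523.25, 659.26]

def sounding_notes(ink_column):
    """ink_column: eight booleans, True where black ink is sensed.
    Returns the frequencies that remain audible for this column."""
    return [f for f, ink in zip(OPTOPHONE_NOTES, ink_column) if not ink]

print(sounding_notes([False] * 8))               # blank paper: full chord
print(sounding_notes([True] * 4 + [False] * 4))  # ink across half the column
```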
Pollack and Ficks published the first perceptual experiments on the transmission of information via auditory display in 1954.[5] They experimented with combining sound dimensions such as timing, frequency, loudness, duration, and spatialization, and found that subjects could register changes in several dimensions at once. The experiments remained coarse, however, since each dimension had only two possible values.
John M. Chambers, Max Mathews, and F.R. Moore at Bell Laboratories did the earliest work on auditory graphing in their "Auditory Data Inspection" technical memorandum in 1974.[6] They augmented a scatterplot with sounds that varied in frequency, spectral content, and amplitude modulation for use in classification. They did not formally assess the effectiveness of these experiments.[7]
In the 1980s, pulse oximeters came into widespread use. Pulse oximeters can sonify the oxygen concentration of blood by emitting higher pitches for higher concentrations. In practice, however, this feature may not be widely utilized by medical professionals because of the risk of too many audio stimuli in medical environments.[8]
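As a sketch of this pitch mapping (the frequency and saturation ranges below are invented for illustration, not a medical standard):

```python
# Illustrative pitch mapping for pulse-oximetry sonification.
# The ranges are invented for this sketch, not a medical standard.

def spo2_to_frequency(spo2, f_low=440.0, f_high=880.0,
                      s_low=80.0, s_high=100.0):
    """Linearly map an SpO2 percentage onto a frequency range in Hz,
    so that higher oxygen saturation yields a higher pitch."""
    spo2 = max(s_low, min(s_high, spo2))    # clamp to the mapped range
    t = (spo2 - s_low) / (s_high - s_low)
    return f_low + t * (f_high - f_low)

for reading in (85, 92, 99):
    print(reading, "% SpO2 ->", round(spo2_to_frequency(reading), 1), "Hz")
```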
In 1992, the International Community for Auditory Display (ICAD) was founded by Gregory Kramer as a forum for research on auditory display which includes data sonification. ICAD has since become a home for researchers from many different disciplines interested in the use of sound to convey information through its conference and peer-reviewed proceedings.[9]
Some existing applications and projects
- Auditory altimeter, also used in skydiving.[10]
- Auditory thermometer
- Clocks with an audible tick every second, and with special chimes every 15 minutes
- Cockpit auditory displays
- Geiger counter
- Gravitational waves at LIGO
- Interactive sonification[11][12][13]
- Medical[14][15] and surgical auditory displays[16][17][18][19]
- Multimodal (combined sense) displays to minimize visual overload and fatigue
- Space physics
- Pulse oximetry in operating rooms[20][21]
- Speed alarm in motor vehicles
- Sonar
- Storm and weather sonification[22]
- Volcanic activity detection
- Cluster analysis of high-dimensional data using particle trajectory sonification
Sonification techniques
Many different components can be altered to change the user's perception of the sound, and in turn, their perception of the underlying information being portrayed. Often an increase or decrease in some level of this information is indicated by an increase or decrease in pitch, amplitude, or tempo, but it could also be indicated by varying other, less commonly used components. For example, a stock market price could be portrayed by pitch that rises as the price rises and falls as it drops. To let the user distinguish more than one stock, different timbres or brightnesses might be used for the different stocks, or they may be played from different points in space, for example through different sides of the listener's headphones.
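A minimal sketch of such a parameter mapping follows, assuming a hypothetical price series and an invented pitch range; it renders each data point as a short tone in a WAV file.

```python
# Parameter-mapping sketch: each data point is mapped to a pitch and
# rendered as a short tone. Series values and ranges are invented.
import math
import struct
import wave

RATE = 44100  # audio samples per second

def pitch_map(value, lo, hi, f_lo=220.0, f_hi=880.0):
    """Linearly map a data value in [lo, hi] to a frequency in Hz."""
    if hi == lo:
        return f_lo
    return f_lo + (value - lo) / (hi - lo) * (f_hi - f_lo)

def render(series, path="sonification.wav", note_sec=0.15):
    lo, hi = min(series), max(series)
    frames = bytearray()
    for value in series:
        freq = pitch_map(value, lo, hi)
        for n in range(int(RATE * note_sec)):
            sample = 0.5 * math.sin(2 * math.pi * freq * n / RATE)
            frames += struct.pack("<h", int(sample * 32767))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)   # one stock; a second could take the other channel
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(RATE)
        w.writeframes(bytes(frames))

# A rising price series produces a rising pitch contour.
render([101.2, 102.5, 101.8, 104.0, 107.3, 106.1])
```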
Many studies have sought the best techniques for presenting various types of information, but no conclusive set of techniques has yet been formulated. Because sonification is still considered a field in its infancy, current studies work toward determining the best set of sound components to vary in different situations.
Several different techniques for auditory rendering of data can be categorized:
- Acoustic Sonification[23]
- Audification (see the sketch following this list)
- Model-Based Sonification
- Parameter Mapping
- Stream-Based Sonification[24][25]
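Audification, the most direct of these techniques, plays the data samples themselves back as a waveform. A minimal sketch, using a synthesized stand-in for real measurement data:

```python
# Audification sketch: the data series itself becomes the waveform.
# The decaying oscillation below is a stand-in for real measurements.
import math
import struct
import wave

data = [math.exp(-t / 400.0) * math.sin(t / 3.0) for t in range(8000)]

peak = max(abs(x) for x in data) or 1.0  # normalize to full scale
with wave.open("audification.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)    # 16-bit samples
    w.setframerate(8000) # playback rate shifts the data into audible range
    w.writeframes(b"".join(
        struct.pack("<h", int(0.9 * x / peak * 32767)) for x in data))
```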
Software for sonification remains relatively scarce; most offerings are either dedicated programs for sonifying data or functions built into existing frameworks. Some examples are:
- SoniPy, an open source Python framework[26]
- Sonification Sandbox, a Java program to convert datasets to sounds[27]
- xSonify, a Java application to display numerical data as sound[28]
- Sound and sonification functions in the Wolfram Language[29]
- audiolyzR, an R package for data sonification[30]
- Data-to-Music API, a browser-based JavaScript API for real-time data sonification[31][32]
- Mozzi, a sonification synth for the open source Arduino platform
In addition to the software listed above, other tools commonly used to build sonifications include general-purpose sound-synthesis environments such as:
- Csound
- Max/MSP
- Pure Data
- SuperCollider
An alternative approach to traditional sonification is "sonification by replacement", for example Pulsed Melodic Affective Processing (PMAP).[33][34][35] In PMAP, rather than sonifying a data stream, the computational protocol is itself musical data, for example MIDI. The data stream represents a non-musical state: in PMAP, an affective state. Calculations can then be performed directly on the musical data, and the results can be listened to with a minimum of translation.
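To illustrate only the structural idea, the sketch below carries an affective state as a stream of MIDI-style note numbers and combines two states directly on the notes; the specific mapping is invented here and is not the published PMAP encoding.

```python
# Invented illustration of computing directly on musical data; this is
# not the published PMAP encoding, only the structural idea behind it.

def affect_to_stream(valence, arousal, length=8):
    """Encode valence (-1..1) as pitch height and arousal (0..1) as
    note density; returns MIDI-style note numbers, 0 meaning a rest."""
    base = 60 + int(valence * 12)           # happier -> higher pitch
    gap = max(1, int((1.0 - arousal) * 4))  # calmer -> sparser notes
    return [base if i % gap == 0 else 0 for i in range(length)]

def combine(stream_a, stream_b):
    """Blend two affective streams note-by-note; the result is itself
    playable musical data, needing no decoding step."""
    return [(a + b) // 2 if a and b else max(a, b)
            for a, b in zip(stream_a, stream_b)]

calm_happy = affect_to_stream(valence=0.8, arousal=0.2)
tense = affect_to_stream(valence=-0.5, arousal=0.9)
print(combine(calm_happy, tense))  # audible result, minimal translation
```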
References
- ^ a b Kramer, Gregory, ed. (1994). Auditory Display: Sonification, Audification, and Auditory Interfaces. Santa Fe Institute Studies in the Sciences of Complexity. Vol. Proceedings Volume XVIII. Reading, MA: Addison-Wesley. ISBN 0-201-62603-9.
- ^ Smith, Daniel R.; Walker, Bruce N. (2005). "Effects of Auditory Context Cues and Training on Performance of a Point Estimation Sonification Task". Journal of Applied Cognitive Psychology, 19, 1065–1087.
- ^ Flowers, J. H. (2005), Brazil, Eoin (ed.), "Thirteen years of reflection on auditory graphing: Promises, pitfalls, and potential new directions" (PDF), Proceedings of the 11th International Conference on Auditory Display (ICAD2005): 406–409
- ^ d'Albe, E. E. Fournier (May 1914), "On a Type-Reading Optophone", Proceedings of the Royal Society of London
- ^ Pollack, I.; Ficks, L. (1954), "Information of elementary multidimensional auditory displays", Journal of the Acoustical Society of America, 26: 136, doi:10.1121/1.1917759
- ^ Chambers, J. M.; Mathews, M. V.; Moore, F. R. (1974), "Auditory Data Inspection", Technical Memorandum 74-1214-20, AT&T Bell Laboratories
- ^ Frysinger, S. P. (2005), Brazil, Eoin (ed.), "A brief history of auditory data representation to the 1980s" (PDF), Proceedings of the 11th International Conference on Auditory Display (ICAD2005), Department of Computer Science and Information Systems, University of Limerick: 410–413
- ^ Craven, R M; McIndoe, A K (1999), "Continuous auditory monitoring—how much information do we register?" (PDF), British Journal of Anaesthesia, 83 (5): 747–749, doi:10.1093/bja/83.5.747
- ^ Kramer, G.; Walker, B.N. (2005), "Sound science: Marking ten international conferences on auditory display", ACM Transactions on Applied Perception, 2 (4): 383–388, CiteSeerX 10.1.1.88.7945, doi:10.1145/1101530.1101531
- ^ Montgomery, E.T.; Schmitt, R.W. (1997), "Acoustic altimeter control of a free vehicle for near-bottom turbulence measurements", Deep Sea Research Part I: Oceanographic Research Papers, 44 (6): 1077, doi:10.1016/S0967-0637(97)87243-3
- ^ Hermann, Thomas; Hunt, Andy; Pauletto, Sandra. "Interacting with Sonification Systems: Closing the Loop". Eighth International Conference on Information Visualisation (IV'04): 879–884. doi:10.1109/IV.2004.1320244.
- ^ Hermann, Thomas; Hunt, Andy. "The Importance of Interaction in Sonification". Proceedings of ICAD Tenth Meeting of the International Conference on Auditory Display, Sydney, Australia, July 6–9, 2004.
- ^ Pauletto, Sandra; Hunt, Andy. "A Toolkit for Interactive Sonification". Proceedings of ICAD Tenth Meeting of the International Conference on Auditory Display, Sydney, Australia, July 6–9, 2004.
- ^ Kather, Jakob Nikolas; Hermann, Thomas; Bukschat, Yannick; Kramer, Tilmann; Schad, Lothar R.; Zöllner, Frank Gerrit (2017). "Polyphonic sonification of electrocardiography signals for diagnosis of cardiac pathologies". Scientific Reports. 7: Article-number 44549. doi:10.1038/srep44549. Retrieved 20 May 2018.
- ^ Edworthy, Judy (2013). "Medical audible alarms: a review". J Am Med Inform Assoc. 20: 584–589. doi:10.1136/amiajnl-2012-001061. PMC 3628049. PMID 23100127.
- ^ Woerdeman, Peter A.; Willems, Peter W.A.; Noordsmans, Herke Jan; Berkelbach van der Sprenken, Jan Willem (2009). "Auditory feedback during frameless image-guided surgery in a phantom model and initial clinical experience". J Neurosurg. 110 (2): 257–262. doi:10.3171/2008.3.17431. Retrieved 20 May 2018.
- ^ Ziemer, Tim; Black, David (2017). "Psychoacoustically motivated sonification for surgeons". International Journal of Computer Assisted Radiology and Surgery. 12 ((Suppl 1):1): 265–266. doi:10.1007/s11548-017-1588-3. Retrieved 20 May 2018.
- ^ Ziemer, Tim; Black, David; Schultheis, Holger (2017). "Psychoacoustic sonification design for navigation in surgical interventions". Proceedings of Meetings on Acoustics. 30. doi:10.1121/2.0000557. Retrieved 20 May 2018.
- ^ Ziemer, Tim; Black, David (2017). "Psychoacoustic sonification for tracked medical instrument guidance". The Journal of the Acoustical Society of America. 141 (5): 3694. Retrieved 20 May 2018.
- ^ Hinckfuss, Kelly; Sanderson, Penelope; Loeb, Robert G.; Liley, Helen G.; Liu, David (2016). "Novel Pulse Oximetry Sonifications for Neonatal Oxygen Saturation Monitoring". Human Factors. 58 (2): 344–359. doi:10.1177/0018720815617406. Retrieved 20 May 2018.
- ^ Sanderson, Penelope M.; Watson, Marcus O.; Russell, John (2005). "Advanced Patient Monitoring Displays: Tools for Continuous Informing". Anesthesia & Analgesia. 101 (1). doi:10.1213/01.ANE.0000154080.67496.AE. Retrieved 20 May 2018.
- ^ Schuett, Jonathan H.; Winton, Riley J.; Batterman, Jared M.; Walker, Bruce N. (2014). "Auditory Weather Reports: Demonstrating Listener Comprehension of Five Concurrent Variables". Proceedings of the 9th Audio Mostly: A Conference on Interaction with Sound. AM '14. New York, NY, USA: ACM: 17:1–17:7. doi:10.1145/2636879.2636898. ISBN 9781450330329.
- ^ Barrass, S. (2012). "Digital Fabrication of Acoustic Sonifications". Journal of the Audio Engineering Society, September 2012.
- ^ Barrass, S.; Best, G. (2008). "Stream-based Sonification Diagrams". Proceedings of the 14th International Conference on Auditory Display, IRCAM Paris, 24–27 June 2008.
- ^ Barrass, S. (2009). "Developing the Practice and Theory of Stream-based Sonification". Scan Journal of Media Arts Culture, Macquarie University.
- ^ "SoniPy | HOME". www.sonification.com.au. Retrieved 2016-07-12.
- ^ "Sonification Sandbox". sonify.psych.gatech.edu. Retrieved 2016-07-12.
- ^ "SPDF – Sonification". spdf.gsfc.nasa.gov. Retrieved 2016-07-12.
- ^ "Sound and Sonification—Wolfram Language Documentation". reference.wolfram.com. Retrieved 2016-07-12.
- ^ "audiolyzR: Data sonification with R". 2013-01-13. Retrieved 2016-07-12.
- ^ "DTM API Demo". ttsuchiya.github.io. Retrieved 2017-06-21.
- ^ Tsuchiya, Takahiko (July 2015). "Data-to-music API: Real-time data-agnostic sonification with musical structure models". Georgia Tech Library. Retrieved 2017-06-21.
- ^ Kirke, Alexis; Miranda, Eduardo (2014-05-06). "Pulsed Melodic Affective Processing: Musical structures for increasing transparency in emotional computation". Simulation. 90 (5): 606. doi:10.1177/0037549714531060. Retrieved 2017-06-05.
- ^ "Towards Harmonic Extensions of Pulsed Melodic Affective Processing – Further Musical Structures for Increasing Transparency in Emotional Computation" (PDF). 2014-11-11. Retrieved 2017-06-05.
- ^ "A Hybrid Computer Case Study for Unconventional Virtual Computing". 2015-06-01. Retrieved 2017-06-05.
External links
- International Community for Auditory Display
- Sonification Report (1997) provides an introduction to the status of the field and current research agendas.
- The Sonification Handbook, an Open Access book that gives a comprehensive introductory presentation of the key research areas in sonification and auditory display.
- Auditory Information Design, PhD Thesis by Stephen Barrass 1998, User Centred Approach to Designing Sonifications.
- Mozzi: interactive sensor sonification on the Arduino microcontroller platform.
- Preliminary report on design rationale, syntax, and semantics of LSL: A specification language for program auralization, D. Boardman and AP Mathur, 1993.
- A specification language for program auralization, D. Boardman, V. Khandelwal, and AP Mathur, 1994.
- Sonification tutorial
- SonEnvir general sonification environment
- Sonification.de provides information about Sonification and Auditory Display, with links to interesting events and related projects
- Sonification for Exploratory Data Analysis, PhD Thesis by Thomas Hermann 2002, developing Model-Based Sonification.
- Sonification of Mobile and Wireless Communications
- Interactive Sonification a hub to news and upcoming events in the field of interactive sonification
- zero-th space-time association
- CodeSounding, an open source sonification framework that makes it possible to hear how any existing Java program "sounds", by assigning instruments and pitches to code statements (if, for, etc.) and playing them as they are executed at runtime. The flow of execution is thus played as a flow of music whose rhythm changes with user interaction.
- LYCAY, a Java library for sonification of Java source code
- WebMelody, a system for sonification of activity of web servers.
- Sonification of a Cantor set
- Sonification Sandbox v.3.0, a Java program to convert datasets to sounds, GT Sonification Lab, School of Psychology, Georgia Institute of Technology.
- Program Sonification using Java, an online chapter (with code) explaining how to implement sonification using speech synthesis, MIDI note generation, and audio clips.
- Live Sonification of Ocean Swell