Telematic performance

From Wikipedia, the free encyclopedia

The term telematic performance refers to a live performance (art, dance, music, etc.) which makes use of telecommunications and information technology to distribute the performers between two or more locations.

While this may involve the use of conventional videoconferencing technology, the term has more recently come to mean the use of Internet technologies. Performance groups may also refer to their events as internet concerts, online jamming, or teleconcerts.


On June 21, 1982, the first International Summer Solstice Radio Broadcast, created by Charlie Morrow and produced by the New Wilderness Foundation and WNYC FM, celebrated the first day of summer (and of winter below the equator) with a satellite mix of live events in Sweden, the USA, New Zealand, Canada, Denmark, and Italy. A real-time solstice jam concluded the show.[1]

On August 26, 2012, a "ghost pianist" concert was given at the Berlin Philharmonie. The Italian pianist Roberto Prosseda played Chopin's Grande Polonaise Brillante, Op. 22, on a Yamaha Clavinova digital piano backstage, sending the MIDI signal in real time to the robot pianist Teo Tronico, which mirrored Prosseda's performance on the Steinway grand piano on stage with the Berliner Symphoniker conducted by Michelangelo Galeati.


Performers and researchers work to overcome the following obstacles:

  1. Audio latency: Most musicians who play in a group rely on audio cues to maintain tempo and for other communication; the most obvious example is the drummer keeping time in pop music. Even in a normal rehearsal or performance environment there is a delay between the moment one musician plays and the moment another hears the sound, due to the speed of sound in air; this is typically 3–50 ms. When audio is transmitted through a digital medium (e.g., the Internet), the delay (or latency) can be much longer: cell phones have a latency of approximately 50 ms, applications such as Skype approximately 100 ms, and QuickTime and Windows Media server-based streaming systems add 8 or more seconds. When the latency is too high, the audio cues are no longer effective. Researchers[2] have shown that some musicians can ignore this delay while others find it obtrusive. In general, the higher the audio quality and the more channels transmitted, the higher the latency needs to be to reliably transmit the audio stream.
  2. Echo cancellation: Because the latency between locations is much longer than typical acoustic propagation delays, standard techniques for eliminating feedback from monitoring systems, such as equalization, do not work. However, when the latency between locations is low enough, teleconferencing echo-cancellation techniques can be used.
  3. Video latency: Musicians also rely on visual cues to maintain synchronization; the most obvious example is the conductor in orchestral music. Because video data is much larger than audio data, video latency tends to be higher than audio latency.
  4. Audio/video synchronization: To keep audio and video synchronized, the audio may have to be delayed more than necessary. Some performers opt to sacrifice A/V synchronization for lower audio latency; for example, improvised telematic music performances may require low-latency audio cues.
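The latency budgets above can be compared directly: the acoustic delay two co-located musicians already tolerate is just distance divided by the speed of sound, and a networked performance works only while the total one-way latency stays in a similar range. A minimal sketch, using the figures quoted above; the 25 ms "playability" threshold is an illustrative assumption, not a fixed standard:

```python
# Comparing acoustic delay on a stage to network latency.
# Assumption: a one-way latency budget of ~25 ms, chosen for
# illustration; real tolerance varies by musician and repertoire.

SPEED_OF_SOUND_M_PER_S = 343.0  # in air at roughly 20 degrees C


def acoustic_delay_ms(distance_m: float) -> float:
    """One-way delay, in ms, for sound travelling distance_m through air."""
    return distance_m / SPEED_OF_SOUND_M_PER_S * 1000.0


def playable(one_way_latency_ms: float, threshold_ms: float = 25.0) -> bool:
    """Whether a one-way latency fits the assumed ensemble threshold."""
    return one_way_latency_ms <= threshold_ms


# Two musicians 10 m apart on the same stage: ~29 ms of acoustic delay,
# already near the edge of the assumed budget.
stage_delay = acoustic_delay_ms(10.0)

# A 100 ms link (the Skype figure quoted above) far exceeds it,
# which is why such tools work for conversation but not tight ensemble play.
print(f"10 m on stage: {stage_delay:.1f} ms, playable={playable(stage_delay)}")
print(f"100 ms link: playable={playable(100.0)}")
```

This also illustrates why the 3–50 ms range quoted above matters: it corresponds to musicians standing roughly 1–17 m apart, so network links in that range feel like ordinary stage distances.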


Open source

Commercial software

Performance groups

See also


  1. ^ Rockwell, John (22 June 1982). "'DRUIDS' MARK SOLSTICE EUPHONIOUSLY". The New York Times.
  2. ^ Lester, M., & Boley, J. (2007). "The Effects of Latency on Live Sound Monitoring". Audio Engineering Society.