User:SparkWorks16/Music and artificial intelligence

Lead

Add: Erwin Panofsky proposed that in all art there exist three levels of meaning: primary meaning, or the natural subject; secondary meaning, or the conventional subject; and tertiary meaning, the intrinsic content of the subject.[1][2] AI music engages only the first of these, creating music without the "intention" that usually lies behind it, leaving composers who listen to machine-generated pieces unsettled by the lack of apparent meaning.[3]

History

Add the following:

Artificial intelligence finds its beginnings in music with the transcription problem: accurately recording a performance into musical notation as it is played. Père Engramelle's schematic of a "piano roll", a mode of automatically recording note timing and duration in a way that could easily be transcribed into proper musical notation by hand, was first implemented by the German engineers J. F. Unger and J. Hohlfeld in 1752.[4]
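
In modern software the piano roll survives as a simple data structure: each note is recorded as a pitch together with an onset time and a duration, from which notation can later be derived. The following Python sketch is a minimal illustration of that idea only; the function name and note events are invented for the example:

    # Illustrative sketch: a "piano roll" records each note as
    # (pitch, onset, duration), so a performance can be reconstructed
    # or transcribed later. The events below are invented for the example.
    import numpy as np

    # (MIDI pitch, onset in seconds, duration in seconds)
    events = [(60, 0.0, 0.5), (64, 0.5, 0.5), (67, 1.0, 1.0)]  # C4, E4, G4

    def to_piano_roll(events, fs=100, n_pitches=128):
        """Render note events as a binary pitch-by-time grid, fs frames/second."""
        end = max(onset + dur for _, onset, dur in events)
        roll = np.zeros((n_pitches, int(np.ceil(end * fs))), dtype=np.uint8)
        for pitch, onset, dur in events:
            roll[pitch, int(onset * fs):int((onset + dur) * fs)] = 1
        return roll

    print(to_piano_roll(events).shape)  # (128, 200): 128 pitches over 2.0 s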

In 1957, the ILLIAC I (Illinois Automatic Computer) produced the "Illiac Suite for String Quartet," a completely computer-generated piece of music. It was programmed by composer Lejaren Hiller and mathematician Leonard Isaacson.[5]

By 1983, Yamaha Corporation's Kansei Music System had gained momentum, and a paper on its development was published in 1989. The software used music information processing and artificial intelligence techniques to essentially solve the transcription problem for simple melodies, generating sheet music from a user's keyboard performance as it was played.[6] However, this held only for simple pieces; transcribing higher-level melodies and musical complexities is regarded even today as a difficult deep learning task, and near-perfect transcription remains a subject of research.[4]
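
At its simplest, the transcription problem amounts to estimating the pitch sounding at each moment and mapping it to the nearest note. The Python sketch below does this for a single frame using plain autocorrelation, assuming a monophonic signal; it is a minimal illustration of the general task, not the Kansei system's actual method, and the function names are invented for the example:

    # Estimate the fundamental frequency of a mono audio frame by
    # autocorrelation, then map it to the nearest equal-tempered note.
    import numpy as np

    NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def estimate_pitch(frame, sample_rate):
        """Return the dominant fundamental frequency (Hz) of a mono frame."""
        frame = frame - frame.mean()
        corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        rising = np.argmax(np.diff(corr) > 0)       # skip past the zero-lag peak
        period = rising + np.argmax(corr[rising:])  # next strong peak = period
        return sample_rate / period

    def to_note_name(freq):
        """Name the nearest equal-tempered note (A4 = 440 Hz)."""
        midi = int(round(69 + 12 * np.log2(freq / 440.0)))
        return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

    # A synthetic 440 Hz tone should transcribe as A4.
    sr = 44100
    t = np.arange(sr // 10) / sr
    print(to_note_name(estimate_pitch(np.sin(2 * np.pi * 440 * t), sr)))  # A4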

EMI would later become the basis for a more sophisticated algorithm called Emily Howell, named for its creator.[7]

In 2002, the music research team at the Sony Computer Science Laboratory Paris, led by French composer and scientist François Pachet, designed the Continuator, an algorithm uniquely capable of resuming a composition after a live musician stopped.[5]

Emily Howell would continue to make advancements in musical artificial intelligence, publishing its first album, "From Darkness, Light," in 2009, and its second, "Breathless," in 2012.[5]

In 2010, Iamus became the first AI to produce a fragment of original contemporary classical music in its own style: "Iamus' Opus 1." Located at the Universidad de Málaga (University of Málaga) in Spain, the computer can generate a fully original piece in a variety of musical styles in the span of eight minutes.[5]

Software applications

ChucK

Developed at Princeton University by Ge Wang and Perry Cook, ChucK is a text-based, cross-platform audio programming language.[8] By extracting and classifying the theoretical techniques it finds in musical pieces, the software can synthesize entirely new pieces from the techniques it has learned.[9] The technology is used by SLOrk (Stanford Laptop Orchestra) and PLOrk (Princeton Laptop Orchestra).

Copyright

Add: Recent advances in artificial intelligence by groups such as Stability AI, OpenAI, and Google have drawn numerous copyright claims against generative technology, including AI music. Should these lawsuits succeed, the machine learning models behind these technologies would have their training datasets restricted to the public domain.[10]

Musical deepfakes

A more recent development of AI in music is the application of audio deepfakes, which transfer the lyrics or musical style of a preexisting song to the voice or style of another artist. This has raised many concerns regarding the legality of the technology, as well as the ethics of employing it, particularly in the context of artistic identity.[11] It has also raised the question of to whom the authorship of these works is attributed. As AI cannot hold authorship of its own, current speculation suggests that there will be no clear answer until further rulings are made regarding machine learning technologies as a whole.[12]

References

  1. ^ Dilly, Heinrich (2020), "Panofsky, Erwin: Zum Problem der Beschreibung und Inhaltsdeutung von Werken der bildenden Kunst", Kindlers Literatur Lexikon (KLL), Stuttgart: J.B. Metzler, pp. 1–2, ISBN 978-3-476-05728-0, retrieved 2024-02-28
  2. ^ Erwin Panofsky, Studies in Iconology: Humanistic Themes in the Art of the Renaissance. Oxford 1939.
  3. ^ "Handbook of Artificial Intelligence for Music" (PDF). SpringerLink. doi:10.1007/978-3-030-72116-9.pdf.
  4. ^ a b "Research in music and artificial intelligence". dl.acm.org. doi:10.1145/4468.4469. Retrieved 2024-03-07.
  5. ^ a b c d Verma, Sourav (2021-01-01). "Artificial intelligence and music: History and the future perceptive". International Journal of Applied Research.
  6. ^ Katayose, Haruhiro; Inokuchi, Seiji (1989). "The Kansei Music System". Computer Music Journal. 13 (4): 72–77. doi:10.2307/3679555. ISSN 0148-9267.
  7. ^ David Cope (1987), "Experiments in Music Intelligence." In Proceedings of the International Computer Music Conference, San Francisco: Computer Music Assn.
  8. ^ Team, ChucK. "ChucK: A Strongly-Timed Music Programming Language". ccrma.stanford.edu. Retrieved 2024-02-28.
  9. ^ Foundations of On-the-fly Learning in the ChucK Programming Language
  10. ^ Samuelson, Pamela (2023-07-14). "Generative AI meets copyright". Science. 381 (6654): 158–161. doi:10.1126/science.adi0656. ISSN 0036-8075.
  11. ^ DeepDrake ft. BTS-GAN and TayloRVC: An Exploratory Analysis of Musical Deepfakes and Hosting Platforms
  12. ^ AI and Deepfake Voice Cloning: Innovation, Copyright and Artists’ Rights