Talk:Additive synthesis

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 70.109.177.113 (talk) at 06:21, 4 January 2012 (→‎Okay Cluster, we need to talk about what is meant by "realtime" or "real-time".: Also YELLING WITH CAPS and dimissing others as "wasting [your] time" won't work either.). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

WikiProject Electronics (Unassessed)
This article is part of WikiProject Electronics, an attempt to provide a standard approach to writing articles about electronics on Wikipedia. If you would like to participate, you can choose to edit the article attached to this page, or visit the project page, where you can join the project and see a list of open tasks. Leave messages at the project talk page.
This article has not yet received a rating on Wikipedia's content assessment scale.
This article has not yet received a rating on the project's importance scale.

Rephrase

Erm, the unique tone of an instrument is formed more by how those harmonics *change* over time, and by transient bits of noise and non-harmonic frequencies. Additive tries to emulate that by having a different envelope on each individual harmonic. I don't know how best to re-phrase it.

Sorry: I've reworded it a bit further. Hope all is well now. Dysprosia 09:53, 7 Sep 2003 (UTC)
I understand that additive is equivalent to wavetable if partials are harmonic. What if partials aren't harmonic? Which allows me to do this, and which does not? Petwil 06:02, 21 October 2006 (UTC)[reply]
Wavetable synthesis (not to be confused with basic PCM sample playback) cannot do inharmonic partials unless you were to detune the partials from their harmonic frequencies by constantly moving the phase of each partial, requiring a lot of wavetable updates. In additive synthesis, the partials (sine waves) are synthesized separately, then added. In their separate synthesis, there is no requirement that they be at harmonic frequencies; the frequencies of partials in additive synthesis can be whatever is specified. r b-j 04:13, 22 October 2006 (UTC)[reply]
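A minimal NumPy sketch of that point (the frequencies, envelope rates, and function names here are illustrative assumptions, not from any cited source): each partial is a separately generated sine with its own amplitude envelope, so nothing in the technique forces the frequencies to be harmonic multiples.

```python
import numpy as np

def additive_synth(partial_freqs, partial_envs, sr=44100, dur=1.0):
    """Sum independently controlled sine partials.  Because each
    partial is generated on its own, its frequency can be anything."""
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for freq, env in zip(partial_freqs, partial_envs):
        out += env(t) * np.sin(2 * np.pi * freq * t)
    return out

def decay(rate):
    """Simple exponential amplitude envelope."""
    return lambda t: np.exp(-rate * t)

# Harmonic partials: integer multiples of a 220 Hz fundamental ...
harmonic = additive_synth([220, 440, 660], [decay(2), decay(4), decay(6)])
# ... versus inharmonic partials at arbitrary frequencies (bell-like),
# which a single-cycle wavetable cannot represent directly:
inharmonic = additive_synth([220, 563.3, 921.7], [decay(2), decay(4), decay(6)])
```

The only difference between the two calls is the list of frequencies; the synthesis procedure itself is unchanged, which is the sense in which additive synthesis is indifferent to harmonicity.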


The Synclavier was a sampler and a programmable, harmonic-definable wavetable FM synthesizer. It was NOT a real additive synth: you can construct a patch defining 24 fixed partials per voice and apply dynamic enveloping and a very simple FM modulator with envelope; only with the partial-timbre upgrade can you specify several harmonic spectra and fade between them over time. I think the article meant to refer to machines such as the Kurzweil K150 or Kawai K5/K5000, and more remotely the Digital Keyboards Synergy, all of them from the first generation of additive hardware synths. The K150 is a REAL additive engine (and a compromise between quantity of oscillators and polyphony) where you can program each partial individually with envelopes (it's a shame that programming is only possible using an old Apple computer; it can't be done from the front panel). The K5 does the same but is a simplification, able to control only 4 groups of harmonics rather than each one individually; practice shows that individual control is desirable up to the 16th partial... The K5000 is the classic additive synth, but combined with samples: it's quite powerful but clumsy to work with compared to software synthesis.

The truth about the Synergy: the Synergy is a user-definable PM (as in FM), semi-algorithmic synth with additive capabilities and 32 digital oscillators. This means you could use it as a fully additive synth with 16 partials and two-voice polyphony (with limited timbral results), or in the most usual way: complex 8-voice-polyphony, Yamaha-FM-style synthesis. You can think of it as a DX7-style synth with much more flexible algorithms, envelopes (for the frequency and amplitude of each oscillator), and filter equalization. In fact, you can come very close to the original patches using a soft synth such as FM7; you cannot do the best patches (such as Wendy Carlos's collection) on a DX7 because of the limited envelopes and fixed operator output curves, not to mention the somewhat "metallic" quality of sound that all the DXs have. In comparison, the Synergy is really warm. That is all, and it is no small thing. — Preceding unsigned comment was added by r b-j at 08:17, 13 May 2007 (UTC), and edited by 190.190.31.69 at 20:46, 23 August 2010 (UTC)[reply]

(The preceding unsigned-comment attribution was itself amended by) 122.17.104.157 (talk) 00:08, 31 October 2011 (UTC)[reply]

Harmonic or inharmonic partials?

I can't figure out from this article whether additive synthesis involves harmonic partials only, or if inharmonics can be used as well. For example, an early section reads: Additive synthesis ...[combines] waveforms pitched to different harmonics, with a different amplitude envelope on each, along with inharmonic artifacts. Usually, this involves a bank of oscillators tuned to multiples of the base frequency. The term "inharmonic artifacts" implies that they are not deliberate but faults of the technology somehow. The general idea I get here is that additive synthesis is about combining harmonic partials of the fundamental frequency. But further down we get: Additive synthesis can also create non-harmonic sounds if the individual partials are not all having a frequency that is an integer multiple of the same fundamental frequency. Finally, another section says: ...wavetable synthesis is equivalent to additive synthesis in the case that all partials or overtones are harmonic (that is all overtones are at frequencies that are an integer multiple of a fundamental frequency...).

So I'm confused. If I combine a bunch of waves, some of which are not harmonics of the fundamental, is this additive synthesis or not? I'd always assumed it was. Another sentence on this page says: Not all musical sounds have harmonic partials (e.g., bells), but many do. In these cases, an efficient implementation of additive synthesis can be accomplished with wavetable synthesis [instead of additive synthesis]. Yet I've spent time using what I thought was additive synthesis to create bell-like tones, by incorporating various harmonic and inharmonic partials (using sine waves only). It seems like additive synthesis is the right term, since a bunch of waveforms are being "added" together, whether or not they are harmonic. But then again, that's just my uninformed sense. Is there a definitive definition one way or the other? If so, let's edit the page to make that clear. If not, ... let's edit the page to make that clear! Pfly (talk) 09:51, 18 November 2007 (UTC)[reply]

Good point. I fixed it. Some copy edit can improve the prose, but I guarantee that the math is correct. 207.190.198.130 (talk) 06:00, 19 November 2007 (UTC)[reply]

Acoustic instruments and electronic additive synthesizers

This section is a bit of a mess. It delves too deep into the features of a few digital synthesizers and makes quite strongly biased claims about them ("it's quite powerful but clumsy to work", "In comparison, the Synergy is really warm. That is all and is not small thing", "it's a shame that the programming is only possible using an old apple computer"), is ambiguous and jargony at times, and is generally quite poor in grammar and style. Partly it feels like an advertisement for a synth. Some examples of additive synthesizers would be welcome, but I think this section needs a complete rewrite. Jakce (talk) 11:28, 22 September 2010 (UTC)[reply]

Additional citations

Why and where does this article need additional citations for verification? What references does it need and how should they be added? Hyacinth (talk) 03:42, 30 December 2011 (UTC)[reply]

Personally, I think it's fine, but I'm not gonna de-tag it. I'll leave that to someone else. 71.169.185.162 (talk) 06:21, 30 December 2011 (UTC)[reply]
That seems a slightly strange question. Until December 2010, this article had no citations at all; since then, I have been expanding it and have added most of the citations in the "implementations" section. However, the other sections — the lead section (the definition of the notion) and the resynthesis section (the most interesting part) — still lack any citations at all. Your contributions are welcome! --Clusternote (talk) 09:12, 30 December 2011 (UTC)[reply]
After digging through several related references, I again felt that the descriptions in the lead section and re-synthesis section seem too naive (not practical), and possibly not based on reliable sources (apart from simple articles for beginners).
Although the descriptions are not incorrect, it seems hard to connect them with existing reliable research, and they are too abstract as a foundation for adding the extended results researched over the past several decades.
In my opinion, these sections should at some point be totally re-written.
--Clusternote (talk) 05:54, 3 January 2012 (UTC)[reply]

Okay Cluster, we need to talk about what is meant by "realtime" or "real-time".

I don't think you have the common use or meaning of "real-time" down; what you are describing is "time-variant". A "time-varying transient wave" is time-variant or, if you're more of a hardcore techie, nonstationary. What "real-time" means in any context is that the production of whatever is done at the time of consumption, not in advance. It is food that is not pre-cooked or pre-processed but cooked when it is eaten. As far as music synthesis or music processing is concerned, it means that the music is synthesized or processed at the time that it is heard. It means that it was not synthesized in advance and written to a soundfile to be played back later (usually because the computation time of the synthesis exceeded the time duration of the sound).

Real-time and time-variant really are different concepts. In music synthesis, one situation I can think of where they are related is the cranking of a knob (or bending a pitch wheel or mod wheel) during a note. If the synthesis is not real-time and the sound is output from a sound file, you cannot do that in playback unless the processing to change the pitch or modulation (whatever the mod is) can be applied to the sound playback via real-time post-processing.
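The offline-versus-real-time distinction can be sketched with a toy block-processing loop (illustrative code only; the names, block size, and knob mechanism are my assumptions, not anyone's actual implementation):

```python
import numpy as np

SR = 44100      # sample rate (Hz)
BLOCK = 512     # samples per processing block

def render_offline(freq, n_blocks):
    """Non-real-time: the whole sound is computed in advance,
    so its parameters are frozen before playback ever starts."""
    t = np.arange(n_blocks * BLOCK) / SR
    return np.sin(2 * np.pi * freq * t)

def render_realtime(read_knob, n_blocks):
    """Real-time style: audio is produced block by block at the time
    it is consumed, so a control (a pitch knob, say) can be re-read
    between blocks and change the sound mid-note."""
    phase = 0.0
    blocks = []
    for _ in range(n_blocks):
        freq = read_knob()                     # the knob may have moved
        n = np.arange(BLOCK)
        blocks.append(np.sin(phase + 2 * np.pi * freq * n / SR))
        # carry phase across blocks so the waveform stays continuous
        phase = (phase + 2 * np.pi * freq * BLOCK / SR) % (2 * np.pi)
    return np.concatenate(blocks)

# A "knob" that jumps from 440 Hz to 550 Hz halfway through the note:
knob_vals = iter([440.0] * 10 + [550.0] * 10)
rt = render_realtime(lambda: next(knob_vals), 20)
off = render_offline(440.0, 20)                # cannot react to the knob
```

Both renders are the same length and identical until the knob moves, but only the block-by-block version can follow the change; the offline render is fixed once written to the file.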

I think you need to come up with good references that support your notion that "real-time" means "time-variant". It doesn't. But both concepts have much to do with sound synthesis. 70.109.177.113 (talk) 05:53, 3 January 2012 (UTC)[reply]

Hi, 70.109.177.113. Please create an account before discussing, and show your sources for your opinion. I'm not interested in a time-wasting discussion with an unknown person without any reliable sources. --Clusternote (talk) 05:58, 3 January 2012 (UTC)[reply]
Well, that's kind of a cop-out, Cluster. Consider the merit of the content of the text that appears before you, not whatever disembodied being placed it there. Why, and from what source, did you come up with the notion that in any context, let alone the context of music synthesis, "real-time" means "time-varying"?
BTW there are some very good reasons I am posting as an IP and to itemize them here would obviate those reasons. I may edit pages that are not protected or semi-protected, so I have just as much "authority" (such as it is here in Wikipedia) as the next schlub. Just deal with the content rather than worry about who I am or may be. 70.109.177.113 (talk) 06:09, 3 January 2012 (UTC)[reply]

(reset indent)
If you want a meaningful discussion, please show your reliable sources on additive synthesis first, then briefly explain your opinion. I can't understand your previous complicated posts.

Note that additive synthesis for dynamic, time-varying waveform generation seems historically to have often been called "realtime additive synthesis", in the sense of both "realtime change of waveform, harmonics or timbre" and "realtime implementation, processing or computation". You can find several examples via a Google Scholar search. Or, more briefly, Sound on Sound's article of Oct. 1997 shows both usages of the term "real time". --Clusternote (talk) 13:32, 3 January 2012 (UTC)[reply]

Again, I have removed this content change from Clusternote that some might call OR but I would just call an error of category. Clusternote, you are mistaken with your assumption that they didn't mean real-time when they wrote real-time synthesis. What is real-time computing is precisely what is meant in real-time synthesis and your own cites make that connection ("STONE-AGE ADDITIVE"). It's your original contribution to the article that is required to be defended with citations that actually support the addition. 70.109.177.113 (talk) 06:01, 4 January 2012 (UTC)[reply]

70.109.177.113, please show your reliable sources before editing the article. I have already shown several sources on this issue on this page. You don't yet understand the situation. Almost all the citations on the article page were added by me; however, you have not yet shown any source supporting your opinion. Please don't revert the article until you can find sources supporting your opinion. --Clusternote (talk) 06:10, 4 January 2012 (UTC)[reply]

You are the editor adding content without sourcing it. The sources you cite actually disprove the claim you make (that "real-time" in synthesis is not the same as real-time computing). You are totally mistaken and you need to do some studying. Start with the sources you cited above. 70.109.177.113 (talk) 06:21, 4 January 2012 (UTC)[reply]


Please create account before further discussion

Please create an account before further discussion. A person with neither an account nor reliable sources does not merit trust. --Clusternote (talk) 07:04, 3 January 2012 (UTC)[reply]

Sorry, but I don't think it's appropriate to discriminate against an anonymous IP, even in a talk page. IPs have most of the rights of editing, discussion, consensus, etc. that registered accounts do. There are legitimate reasons to contribute via an IP and it's a core Wikipedia principle to embrace them. Verifiable content is what matters, not who contributes it. Users never need to be trusted, whether they are a named account or an IP. Enough philosophy... back to your argument!  :) --Ds13 (talk) 09:14, 3 January 2012 (UTC)[reply]
Thanks for your comment. However, the most problematic thing about this issue is that, on the article and talk pages, other users including this one have shown no citations or sources, and argue only from their uncertain memories or original research. The above discussion essentially lacks sources, and discussion without sources tends to be subjectively biased. We need reliable sources and responsible discussion. --Clusternote (talk) 10:21, 3 January 2012 (UTC)[reply]
Agree completely. This article needs more citations and less subjectivity. I'm going to make a pass through the article now to neutralize a few things. --Ds13 (talk) 18:28, 3 January 2012 (UTC)[reply]

Speech synthesis

Hi Clusternote. I understand the point of your recent edit, replacing text I deleted. Here's my perspective: until some reliable sources verify the relevance of the speech synthesis premise (intro sentence of that section?) then I struggle with any and all of its content being there. Will wait for more info. --Ds13 (talk) 02:33, 4 January 2012 (UTC)[reply]

The similarity between speech synthesis (implying analysis and re-synthesis) and additive synthesis is just what I already wrote in the article. Historically, speech analysis and re-synthesis were implemented by extracting the peak frequencies of formants, then reproducing those peak frequencies using oscillators (in the case of sinewave synthesis) or filters.
These methods did not directly implement harmonic analysis (and resynthesis based on it); however, the definition of additive synthesis should not be limited to harmonic analysis/resynthesis. The special case of additive synthesis based on harmonic analysis seems to be called "Spectral modeling synthesis" (or "Sinusoidal modeling" for the particular case using sinusoidal waves as basis functions)[a 1]
  1. ^ Julius O. Smith III (2011), "Additive Synthesis", Spectral Audio Signal Processing, Center for Computer Research in Music and Acoustics (CCRMA), Stanford University, ISBN 978-0-9745607-3-1
    — See also the section "Spectral modeling synthesis" which linked to the term "Sinusoidal modeling" on above page.
--Clusternote (talk) 05:35, 4 January 2012 (UTC)[reply]
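As a toy illustration of the sinewave-synthesis idea discussed above (the formant-track numbers, frame parameters, and function names are made up for this example, not taken from the cited sources): resynthesis can sum a few sinusoidal oscillators whose frequencies follow per-frame formant-peak estimates.

```python
import numpy as np

def sinewave_resynth(formant_tracks, sr=8000, hop=80):
    """Resynthesize audio from per-frame formant frequency estimates by
    summing a few sinusoidal oscillators that follow the tracks.
    formant_tracks: array of shape (n_frames, n_formants) in Hz;
    hop: frame hop in samples (80 samples = 10 ms at 8 kHz)."""
    n_frames, n_formants = formant_tracks.shape
    out = np.zeros(n_frames * hop)
    for k in range(n_formants):
        # interpolate the frame-rate frequency track up to audio rate
        freqs = np.interp(np.arange(len(out)) / hop,
                          np.arange(n_frames), formant_tracks[:, k])
        phase = 2 * np.pi * np.cumsum(freqs) / sr  # integrate frequency
        out += np.sin(phase)
    return out / n_formants

# Three slowly moving "formant" tracks (invented numbers, not real speech):
tracks = np.stack([np.linspace(500, 700, 50),
                   np.linspace(1500, 1200, 50),
                   np.full(50, 2500.0)], axis=1)
audio = sinewave_resynth(tracks)
```

This is additive synthesis in the broad sense Clusternote describes: a handful of non-harmonically related sinusoids with time-varying frequencies, summed, with no harmonic analysis involved.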