Additive synthesis

From Wikipedia, the free encyclopedia
Sound synthesis technique


Additive synthesis is a sound synthesis technique that creates timbre by adding sine waves together.[1][2]

The timbre of musical instruments can be considered in the light of Fourier theory to consist of multiple harmonic or inharmonic partials or overtones. Each partial is a sine wave of different frequency and amplitude that swells and decays over time due to modulation from an ADSR envelope or low-frequency oscillator.

Additive synthesis most directly generates sound by adding the output of multiple sine wave generators. Alternative implementations may use pre-computed wavetables or the inverse fast Fourier transform.

Explanation


The sounds that are heard in everyday life are not characterized by a single frequency. Instead, they consist of a sum of pure sine frequencies, each one at a different amplitude. When humans hear these frequencies simultaneously, we can recognize the sound. This is true both for "non-musical" sounds (e.g. water splashing, leaves rustling) and for "musical" sounds (e.g. a piano note, a bird's tweet). This set of parameters (the frequencies, their relative amplitudes, and how the relative amplitudes change over time) is encapsulated by the timbre of the sound. Fourier analysis is the technique used to determine these exact timbre parameters from an overall sound signal; conversely, the resulting set of frequencies and amplitudes is called the Fourier series of the original sound signal.

In the case of a musical note, the lowest frequency of its timbre is designated as the sound's fundamental frequency. For simplicity, we often say that the note is playing at that fundamental frequency (e.g. "middle C is 261.6 Hz"),[3] even though the sound of that note consists of many other frequencies as well. The set of the remaining frequencies is called the overtones (or the harmonics, if their frequencies are integer multiples of the fundamental frequency) of the sound.[4] In other words, the fundamental frequency alone is responsible for the pitch of the note, while the overtones define the timbre of the sound. The overtones of a piano playing middle C will be quite different from the overtones of a violin playing the same note; that is what allows us to differentiate the sounds of the two instruments. There are even subtle differences in timbre between different versions of the same instrument (for example, an upright piano vs. a grand piano).

Additive synthesis aims to exploit this property of sound in order to construct timbre from the ground up. By adding together pure frequencies (sine waves) of varying frequencies and amplitudes, we can precisely define the timbre of the sound that we want to create.
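This sum-of-sines construction can be sketched in a few lines of NumPy. The function name, the fundamental, and the 1/k amplitude roll-off below are illustrative choices, not taken from any particular synthesizer:

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second

def additive_tone(f0, amplitudes, duration=1.0):
    """Sum sine-wave partials at integer multiples of f0 (hypothetical helper)."""
    t = np.arange(int(duration * SAMPLE_RATE)) / SAMPLE_RATE
    y = np.zeros_like(t)
    for k, r in enumerate(amplitudes, start=1):
        y += r * np.sin(2 * np.pi * k * f0 * t)
    return y

# A crude sawtooth-like timbre: harmonic amplitudes fall off as 1/k.
tone = additive_tone(261.6, [1.0 / k for k in range(1, 9)])
```

Richer timbres come purely from choosing different amplitude sets; the summing machinery itself does not change.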

Definitions

See also: Fourier series and Fourier analysis
Schematic diagram of additive synthesis. The inputs to the oscillators are frequencies $f_k$ and amplitudes $r_k$.

Harmonic additive synthesis is closely related to the concept of a Fourier series, which is a way of expressing a periodic function as the sum of sinusoidal functions with frequencies equal to integer multiples of a common fundamental frequency. These sinusoids are called harmonics, overtones, or generally, partials. In general, a Fourier series contains an infinite number of sinusoidal components, with no upper limit to the frequency of the sinusoidal functions, and includes a DC component (one with frequency of 0 Hz). Frequencies outside of the human audible range can be omitted in additive synthesis. As a result, only a finite number of sinusoidal terms with frequencies that lie within the audible range are modeled in additive synthesis.

A waveform or function is said to be periodic if

$$y(t) = y(t+P)$$

for all $t$ and for some period $P$.

The Fourier series of a periodic function is mathematically expressed as:

$$\begin{aligned}y(t) &= \frac{a_0}{2} + \sum_{k=1}^{\infty}\left[a_k\cos(2\pi k f_0 t) - b_k\sin(2\pi k f_0 t)\right] \\ &= \frac{a_0}{2} + \sum_{k=1}^{\infty} r_k\cos\left(2\pi k f_0 t + \phi_k\right)\end{aligned}$$

where $f_0 = 1/P$ is the fundamental frequency, $a_k$ and $b_k$ are the Fourier coefficients, and $r_k$ and $\phi_k$ are the amplitude and phase offset of the $k$th harmonic.

Being inaudible, the DC component, $a_0/2$, and all components with frequencies higher than some finite limit, $K f_0$, are omitted in the following expressions of additive synthesis.

Harmonic form


The simplest harmonic additive synthesis can be mathematically expressed as:

$$y(t) = \sum_{k=1}^{K} r_k\cos\left(2\pi k f_0 t + \phi_k\right), \qquad (1)$$

where $y(t)$ is the synthesis output; $r_k$, $k f_0$, and $\phi_k$ are the amplitude, frequency, and phase offset, respectively, of the $k$th harmonic partial of a total of $K$ harmonic partials; and $f_0$ is the fundamental frequency of the waveform and the frequency of the musical note.

Time-dependent amplitudes

Example of harmonic additive synthesis in which each harmonic has a time-dependent amplitude. The fundamental frequency is 440 Hz.


More generally, the amplitude of each harmonic can be prescribed as a function of time, $r_k(t)$, in which case the synthesis output is

$$y(t) = \sum_{k=1}^{K} r_k(t)\cos\left(2\pi k f_0 t + \phi_k\right). \qquad (2)$$

Each envelope $r_k(t)$ should vary slowly relative to the frequency spacing between adjacent sinusoids. The bandwidth of $r_k(t)$ should be significantly less than $f_0$.
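A minimal sketch of equation (2), using slowly varying exponential decays as the per-harmonic envelopes $r_k(t)$; the decay rates and partial count are arbitrary illustrative values:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs          # one second of time samples
f0 = 440.0                      # fundamental frequency in Hz
K = 8                           # number of harmonic partials

y = np.zeros_like(t)
for k in range(1, K + 1):
    r_k = np.exp(-2.0 * k * t) / k   # envelope: higher harmonics decay faster
    y += r_k * np.cos(2 * np.pi * k * f0 * t)
```

Making the upper harmonics decay faster than the fundamental, as here, gives the plucked-string-like effect of a tone that grows darker as it fades.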

Inharmonic form


Additive synthesis can also produce inharmonic sounds (which are aperiodic waveforms) in which the individual overtones need not have frequencies that are integer multiples of some common fundamental frequency.[5][6] While many conventional musical instruments have harmonic partials (e.g. an oboe), some have inharmonic partials (e.g. bells). Inharmonic additive synthesis can be described as

$$y(t) = \sum_{k=1}^{K} r_k(t)\cos\left(2\pi f_k t + \phi_k\right),$$

where $f_k$ is the constant frequency of the $k$th partial.
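A minimal inharmonic sketch follows; the partial frequencies and decay rate below are loosely bell-like illustrative values, not measurements of any real bell:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
# Inharmonic partials: frequencies are not integer multiples of a fundamental.
partials = [(220.0, 1.0), (563.2, 0.6), (921.7, 0.4), (1483.9, 0.25)]

y = np.zeros_like(t)
for f_k, r_k in partials:
    y += r_k * np.exp(-3.0 * t) * np.cos(2 * np.pi * f_k * t)
```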

Example of inharmonic additive synthesis in which both the amplitude and frequency of each partial are time-dependent.


Time-dependent frequencies


In the general case, the instantaneous frequency of a sinusoid is the derivative (with respect to time) of the argument of the sine or cosine function. If this frequency is represented in hertz, rather than in angular frequency form, then this derivative is divided by $2\pi$. This is the case whether the partial is harmonic or inharmonic and whether its frequency is constant or time-varying.

In the most general form, the frequency of each non-harmonic partial is a non-negative function of time, $f_k(t)$, yielding

$$y(t) = \sum_{k=1}^{K} r_k(t)\cos\left(2\pi\int_0^t f_k(u)\,du + \phi_k\right). \qquad (3)$$
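Numerically, the phase integral in equation (3) becomes a running sum of instantaneous frequencies. A sketch with a single partial whose frequency glides upward (the glide endpoints are arbitrary illustrative values):

```python
import numpy as np

fs = 44100
f = np.linspace(220.0, 440.0, fs)      # f_k(t): a linear glide from 220 Hz to 440 Hz
phase = 2 * np.pi * np.cumsum(f) / fs  # discrete approximation of 2*pi * integral of f dt
y = np.cos(phase)                      # r_k(t) = 1 and phi_k = 0 for simplicity
```

Accumulating phase this way, rather than computing `cos(2*pi*f[n]*t[n])` directly, keeps the waveform continuous while the frequency changes.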

Broader definitions


Additive synthesis more broadly may mean sound synthesis techniques that sum simple elements to create more complex timbres, even when the elements are not sine waves.[7][8] For example, F. Richard Moore listed additive synthesis as one of the "four basic categories" of sound synthesis alongside subtractive synthesis, nonlinear synthesis, and physical modeling.[8] In this broad sense, pipe organs, which also have pipes producing non-sinusoidal waveforms, can be considered a variant form of additive synthesizers. Summation of principal components and Walsh functions has also been classified as additive synthesis.[9]

Implementation methods


Modern-day implementations of additive synthesis are mainly digital. (See the section Discrete-time equations for the underlying discrete-time theory.)

Oscillator bank synthesis


Additive synthesis can be implemented using a bank of sinusoidal oscillators, one for each partial.[1]

Wavetable synthesis

Main article: Wavetable synthesis

In the case of harmonic, quasi-periodic musical tones, wavetable synthesis can be as general as time-varying additive synthesis, but requires less computation during synthesis.[10][11] As a result, an efficient implementation of time-varying additive synthesis of harmonic tones can be accomplished by use of wavetable synthesis.
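The efficiency gain comes from computing the sum of partials only once per table, then replaying it by lookup. A rough sketch of the idea, assuming a simple nearest-neighbour table lookup (production wavetable oscillators interpolate between table entries):

```python
import numpy as np

TABLE_SIZE = 2048
fs = 44100

# Precompute one period of a harmonic waveform additively (done once, offline).
phase = 2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE
table = sum(np.sin(k * phase) / k for k in range(1, 9))

# Playback: step through the table at a rate proportional to the desired pitch.
f0 = 261.6
idx = (np.arange(fs) * f0 * TABLE_SIZE / fs) % TABLE_SIZE
y = table[idx.astype(int)]   # nearest-neighbour lookup; real code interpolates
```

Per output sample this costs one index update and one lookup, regardless of how many partials went into the table.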

Group additive synthesis


Group additive synthesis[12][13][14] is a method of grouping partials into harmonic groups (having different fundamental frequencies) and synthesizing each group separately with wavetable synthesis before mixing the results.

Inverse FFT synthesis


An inverse fast Fourier transform can be used to efficiently synthesize frequencies that evenly divide the transform period or "frame". By careful consideration of the DFT frequency-domain representation, it is also possible to efficiently synthesize sinusoids of arbitrary frequencies using a series of overlapping frames and the inverse fast Fourier transform.[15]
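The first case, frequencies that fall exactly on transform bins, can be sketched with NumPy's inverse real FFT; the frame length and bin choices are illustrative:

```python
import numpy as np

fs = 44100
N = 1024                           # frame length; bin spacing is fs/N Hz
spectrum = np.zeros(N // 2 + 1, dtype=complex)

# Place two partials at exact bin frequencies (integer multiples of fs/N).
for bin_index, amplitude in [(10, 1.0), (25, 0.5)]:
    # Scale so the real sinusoid comes out with the requested peak amplitude.
    spectrum[bin_index] = amplitude * N / 2

frame = np.fft.irfft(spectrum, n=N)  # one frame containing the summed sinusoids
```

One inverse FFT produces all partials in the frame at once, which is why this approach scales better than an oscillator bank when the number of partials is large.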

Additive analysis/resynthesis

Sinusoidal analysis/synthesis system for sinusoidal modeling (based on McAulay & Quatieri 1988, p. 161)[16]

It is possible to analyze the frequency components of a recorded sound, giving a "sum of sinusoids" representation. This representation can be re-synthesized using additive synthesis. One method of decomposing a sound into time-varying sinusoidal partials is short-time Fourier transform (STFT)-based McAulay-Quatieri analysis.[17][18]

By modifying the sum of sinusoids representation, timbral alterations can be made prior to resynthesis. For example, a harmonic sound could be restructured to sound inharmonic, and vice versa. Sound hybridisation or "morphing" has been implemented by additive resynthesis.[19]

Additive analysis/resynthesis has been employed in a number of techniques including Sinusoidal Modelling,[20] Spectral Modelling Synthesis (SMS),[19] and the Reassigned Bandwidth-Enhanced Additive Sound Model.[21] Software that implements additive analysis/resynthesis includes SPEAR,[22] LEMUR, LORIS,[23] SMSTools,[24] and ARSS.[25]

Products

Additive re-synthesis using timbre-frame concatenation:
Concatenation with crossfades (on Synclavier)
Concatenation with spectral envelope interpolation (on Vocaloid)

The New England Digital Synclavier had a resynthesis feature whereby samples could be analyzed and converted into "timbre frames", which were part of its additive synthesis engine. The Technos Acxel, launched in 1987, utilized the additive analysis/resynthesis model in an FFT implementation.

The vocal synthesizer Vocaloid has also been implemented on the basis of additive analysis/resynthesis: its spectral voice model, called the Excitation plus Resonances (EpR) model,[26][27] is an extension based on Spectral Modeling Synthesis (SMS), and its diphone concatenative synthesis is processed using a spectral peak processing (SPP)[28] technique similar to the modified phase-locked vocoder[29] (an improved phase vocoder for formant processing).[30] Using these techniques, spectral components (formants) consisting of purely harmonic partials can be appropriately transformed into the desired form for sound modeling, and the sequence of short samples (diphones or phonemes) constituting the desired phrase can be smoothly connected by interpolating matched partials and formant peaks, respectively, in an inserted transition region between different samples. (See also Dynamic timbres.)

Applications


Musical instruments

Main articles: Synthesizer, Electronic musical instrument, and Software synthesizer

Additive synthesis is used in electronic musical instruments. It is the principal sound generation technique used by Eminent organs.

Speech synthesis

Main article: Speech synthesis

In linguistics research, harmonic additive synthesis was used in the 1950s to play back modified and synthetic speech spectrograms.[31]

Later, in the early 1980s, listening tests were carried out on synthetic speech stripped of acoustic cues to assess their significance. Time-varying formant frequencies and amplitudes derived by linear predictive coding were synthesized additively as pure-tone whistles. This method is called sinewave synthesis.[32][33] The composite sinusoidal modeling (CSM)[34][35] used in a singing speech synthesis feature of the Yamaha CX5M (1984) is known to use a similar approach, which was independently developed during 1966–1979.[36][37] These methods are characterized by the extraction and recomposition of a set of significant spectral peaks corresponding to the several resonance modes occurring in the oral cavity and nasal cavity, from the viewpoint of acoustics. This principle was also utilized in a physical modeling synthesis method called modal synthesis.[38][39][40][41]

History

Harmonic synthesizer

Harmonic analysis was discovered by Joseph Fourier,[42] who published an extensive treatise of his research in the context of heat transfer in 1822.[43] The theory found an early application in the prediction of tides. Around 1876,[44] William Thomson (later ennobled as Lord Kelvin) constructed a mechanical tide predictor. It consisted of a harmonic analyzer and a harmonic synthesizer, as they were already called in the 19th century.[45][46] The analysis of tide measurements was done using James Thomson's integrating machine. The resulting Fourier coefficients were input into the synthesizer, which then used a system of cords and pulleys to generate and sum harmonic sinusoidal partials for the prediction of future tides. In 1910, a similar machine was built for the analysis of periodic waveforms of sound.[47] The synthesizer drew a graph of the combination waveform, which was used chiefly for visual validation of the analysis.[47]

Tone-generator utilizing it

Georg Ohm applied Fourier's theory to sound in 1843. This line of work was greatly advanced by Hermann von Helmholtz, who published eight years' worth of research in 1863.[48] Helmholtz believed that the psychological perception of tone color is subject to learning, while hearing in the sensory sense is purely physiological.[49] He supported the idea that perception of sound derives from signals from nerve cells of the basilar membrane, and that the elastic appendages of these cells are sympathetically vibrated by pure sinusoidal tones of appropriate frequencies.[47] Helmholtz agreed with the finding of Ernst Chladni from 1787 that certain sound sources have inharmonic vibration modes.[49]

Rudolph Koenig's sound analyzer and synthesizer

In Helmholtz's time, electronic amplification was unavailable. For synthesis of tones with harmonic partials, Helmholtz built an electrically excited array of tuning forks and acoustic resonance chambers that allowed adjustment of the amplitudes of the partials.[50] Built at least as early as 1862,[50] these were in turn refined by Rudolph Koenig, who demonstrated his own setup in 1872.[50] For harmonic synthesis, Koenig also built a large apparatus based on his wave siren. It was pneumatic and utilized cut-out tonewheels, and was criticized for the low purity of its partial tones.[44] Also, tibia pipes of pipe organs have nearly sinusoidal waveforms and can be combined in the manner of additive synthesis.[44]

In 1938, with significant new supporting evidence,[51] it was reported in the pages of Popular Science Monthly that the human vocal cords function like a fire siren to produce a harmonic-rich tone, which is then filtered by the vocal tract to produce different vowel tones.[52] By that time, the additive Hammond organ was already on the market. Most early electronic organ makers thought it too expensive to manufacture the plurality of oscillators required by additive organs, and began instead to build subtractive ones.[53] At a 1940 Institute of Radio Engineers meeting, the head field engineer of Hammond elaborated on the company's new Novachord as having a "subtractive system", in contrast to the original Hammond organ, in which "the final tones were built up by combining sound waves".[54] Alan Douglas used the qualifiers additive and subtractive to describe different types of electronic organs in a 1948 paper presented to the Royal Musical Association.[55] The contemporary wording additive synthesis and subtractive synthesis can be found in his 1957 book The Electrical Production of Music, in which he categorically lists three methods of forming musical tone-colours, in sections titled Additive synthesis, Subtractive synthesis, and Other forms of combinations.[56]

A typical modern additive synthesizer produces its output as an electrical, analog signal, or as digital audio, such as in the case of software synthesizers, which became popular around the year 2000.[57]

Timeline


The following is a timeline of historically and technologically notable analog and digital synthesizers and devices implementing additive synthesis.

Research implementation or publication | Commercially available | Company or institution | Synthesizer or synthesis device | Description | Audio samples
1900[58] | 1906[58] | New England Electric Music Company | Telharmonium | The first polyphonic, touch-sensitive music synthesizer.[59] Implemented sinusoidal additive synthesis using tonewheels and alternators. Invented by Thaddeus Cahill. | no known recordings[58]
1933[60] | 1935[60] | Hammond Organ Company | Hammond Organ | An electronic additive synthesizer that was commercially more successful than the Telharmonium.[59] Implemented sinusoidal additive synthesis using tonewheels and magnetic pickups. Invented by Laurens Hammond. | Model A
1950 or earlier[31] | - | Haskins Laboratories | Pattern Playback | A speech synthesis system that controlled amplitudes of harmonic partials by a spectrogram that was either hand-drawn or an analysis result. The partials were generated by a multi-track optical tonewheel.[31] | samples (archived 25 January 2012 at the Wayback Machine)
1958[61] | - | - | ANS | An additive synthesizer[62] that played microtonal spectrogram-like scores using multiple multi-track optical tonewheels. Invented by Evgeny Murzin. A similar instrument that utilized electronic oscillators, the Oscillator Bank, and its input device Spectrogram were realized by Hugh Le Caine in 1959.[63][64] | 1964 model
1963[65] | - | MIT | - | An off-line system for digital spectral analysis and resynthesis of the attack and steady-state portions of musical instrument timbres, by David Luce.[65] | -
1964[66] | - | University of Illinois | Harmonic Tone Generator | An electronic, harmonic additive synthesis system invented by James Beauchamp.[66][67] | samples (info)
1974 or earlier[68][69] | 1974[68][69] | RMI | Harmonic Synthesizer | The first synthesizer product that implemented additive[70] synthesis using digital oscillators.[68][69] The synthesizer also had a time-varying analog filter.[68] RMI was a subsidiary of Allen Organ Company, which had released the first commercial digital church organ, the Allen Computer Organ, in 1971, using digital technology developed by North American Rockwell.[71] | 1, 2, 3, 4
1974[72] | - | EMS (London) | Digital Oscillator Bank | A bank of digital oscillators with arbitrary waveforms and individual frequency and amplitude controls,[73] intended for use in analysis-resynthesis with the digital Analysing Filter Bank (AFB) also constructed at EMS.[72][73] Also known as the DOB. | in The New Sound of Music[74]
1976[75] | 1976[76] | Fairlight | Qasar M8 | An all-digital synthesizer that used the fast Fourier transform[77] to create samples from interactively drawn amplitude envelopes of harmonics.[78] | samples
1977[79] | - | Bell Labs | Digital Synthesizer | A real-time digital additive synthesizer[79] that has been called the first true digital synthesizer.[80] Also known as the Alles Machine or Alice. | sample (archived; info)
1979[80] | 1979[80] | New England Digital | Synclavier II | A commercial digital synthesizer that enabled development of timbre over time by smooth cross-fades between waveforms generated by additive synthesis. | Jon Appleton - Sashasonjon
- | 1996[81] | Kawai | K5000 | A commercial digital synthesizer workstation capable of polyphonic, digital additive synthesis of up to 128 sinusoidal waves, as well as combining them with PCM waves.[82] | -

Discrete-time equations


In digital implementations of additive synthesis, discrete-time equations are used in place of the continuous-time synthesis equations. A notational convention for discrete-time signals uses brackets, i.e. $y[n]$, where the argument $n$ can only take integer values. If the continuous-time synthesis output $y(t)$ is expected to be sufficiently bandlimited (below half the sampling rate, or $f_s/2$), it suffices to directly sample the continuous-time expression to get the discrete synthesis equation. The continuous synthesis output can later be reconstructed from the samples using a digital-to-analog converter. The sampling period is $T = 1/f_s$.

Beginning with (3),

$$y(t) = \sum_{k=1}^{K} r_k(t)\cos\left(2\pi\int_0^t f_k(u)\,du + \phi_k\right)$$

and sampling at discrete times $t = nT = n/f_s$ results in

$$\begin{aligned}y[n] = y(nT) &= \sum_{k=1}^{K} r_k(nT)\cos\left(2\pi\int_0^{nT} f_k(u)\,du + \phi_k\right) \\ &= \sum_{k=1}^{K} r_k(nT)\cos\left(2\pi\sum_{i=1}^{n}\int_{(i-1)T}^{iT} f_k(u)\,du + \phi_k\right) \\ &= \sum_{k=1}^{K} r_k(nT)\cos\left(2\pi\sum_{i=1}^{n}\left(T f_k[i]\right) + \phi_k\right) \\ &= \sum_{k=1}^{K} r_k[n]\cos\left(\frac{2\pi}{f_s}\sum_{i=1}^{n} f_k[i] + \phi_k\right)\end{aligned}$$

where

$r_k[n] = r_k(nT)$ is the discrete-time varying amplitude envelope, and
$f_k[n] = \frac{1}{T}\int_{(n-1)T}^{nT} f_k(t)\,dt$ is the discrete-time backward-difference instantaneous frequency.

This is equivalent to

$$y[n] = \sum_{k=1}^{K} r_k[n]\cos\left(\theta_k[n]\right)$$

where

$$\begin{aligned}\theta_k[n] &= \frac{2\pi}{f_s}\sum_{i=1}^{n} f_k[i] + \phi_k \\ &= \theta_k[n-1] + \frac{2\pi}{f_s} f_k[n]\end{aligned}$$ for all $n > 0$[15]

and

$$\theta_k[0] = \phi_k.$$
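The recursion for $\theta_k[n]$ translates directly into an oscillator-bank loop in which each partial carries a running phase incremented by $2\pi f_k[n]/f_s$ per sample. A sketch with constant amplitudes and frequencies (all values illustrative; in practice both would vary per sample):

```python
import numpy as np

fs = 44100
num_samples = 1000
freqs = np.array([440.0, 880.0, 1320.0])   # f_k: constant here, but may vary per sample
amps = np.array([1.0, 0.5, 0.25])          # r_k[n]: constant here for brevity
theta = np.zeros(3)                        # theta_k[0] = phi_k = 0

y = np.empty(num_samples)
for n in range(num_samples):
    theta += 2 * np.pi * freqs / fs        # theta_k[n] = theta_k[n-1] + 2*pi*f_k[n]/fs
    y[n] = np.sum(amps * np.cos(theta))
```

Keeping a running phase per partial, rather than recomputing the sum over $i$ each sample, is what makes the per-sample cost of the recursive form constant.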


References

  1. Julius O. Smith III. "Additive Synthesis (Early Sinusoidal Modeling)". Retrieved 14 January 2012. "The term 'additive synthesis' refers to sound being formed by adding together many sinusoidal components."
  2. Gordon Reid. "Synth Secrets, Part 14: An Introduction To Additive Synthesis". Sound on Sound (January 2000). Retrieved 14 January 2012.
  3. Mottola, Liutaio (31 May 2017). "Table of Musical Notes and Their Frequencies and Wavelengths".
  4. "Fundamental Frequency and Harmonics".
  5. Smith III, Julius O.; Serra, Xavier (2005). "Additive Synthesis". PARSHL: An Analysis/Synthesis Program for Non-Harmonic Sounds Based on a Sinusoidal Representation. Proceedings of the International Computer Music Conference (ICMC-87, Tokyo), Computer Music Association, 1987. CCRMA, Department of Music, Stanford University. Retrieved 11 January 2015. (online reprint)
  6. Smith III, Julius O. (2011). "Additive Synthesis (Early Sinusoidal Modeling)". Spectral Audio Signal Processing. CCRMA, Department of Music, Stanford University. ISBN 978-0-9745607-3-1. Retrieved 9 January 2012.
  7. Roads, Curtis (1995). The Computer Music Tutorial. MIT Press. p. 134. ISBN 978-0-262-68082-0.
  8. Moore, F. Richard (1995). Foundations of Computer Music. Prentice Hall. p. 16. ISBN 978-0-262-68082-0.
  9. Roads, Curtis (1995). The Computer Music Tutorial. MIT Press. pp. 150–153. ISBN 978-0-262-68082-0.
  10. Robert Bristow-Johnson (November 1996). "Wavetable Synthesis 101, A Fundamental Perspective" (PDF). Archived from the original (PDF) on 15 June 2013. Retrieved 21 May 2005.
  11. Andrew Horner (November 1995). "Wavetable Matching Synthesis of Dynamic Instruments with Genetic Algorithms". Journal of the Audio Engineering Society. 43 (11): 916–931.
  12. Julius O. Smith III. "Group Additive Synthesis". CCRMA, Stanford University. Archived from the original on 6 June 2011. Retrieved 12 May 2011.
  13. P. Kleczkowski (1989). "Group additive synthesis". Computer Music Journal. 13 (1): 12–20. doi:10.2307/3679851. JSTOR 3679851.
  14. B. Eaglestone and S. Oates (1990). "Analytical tools for group additive synthesis". Proceedings of the 1990 International Computer Music Conference, Glasgow. Computer Music Association.
  15. Rodet, X.; Depalle, P. (1992). "Spectral Envelopes and Inverse FFT Synthesis". Proceedings of the 93rd Audio Engineering Society Convention. CiteSeerX 10.1.1.43.4818.
  16. McAulay, R. J.; Quatieri, T. F. (1988). "Speech Processing Based on a Sinusoidal Model" (PDF). The Lincoln Laboratory Journal. 1 (2): 153–167. Archived from the original (PDF) on 21 May 2012. Retrieved 9 December 2013.
  17. McAulay, R. J.; Quatieri, T. F. (August 1986). "Speech analysis/synthesis based on a sinusoidal representation". IEEE Transactions on Acoustics, Speech, and Signal Processing. 34 (4): 744–754. doi:10.1109/TASSP.1986.1164910.
  18. "McAulay-Quatieri Method".
  19. Serra, Xavier (1989). A System for Sound Analysis/Transformation/Synthesis based on a Deterministic plus Stochastic Decomposition (PhD thesis). Stanford University. Retrieved 13 January 2012.
  20. Smith III, Julius O.; Serra, Xavier. "PARSHL: An Analysis/Synthesis Program for Non-Harmonic Sounds Based on a Sinusoidal Representation". Retrieved 9 January 2012.
  21. Fitz, Kelly (1999). The Reassigned Bandwidth-Enhanced Method of Additive Synthesis (PhD thesis). Dept. of Electrical and Computer Engineering, University of Illinois Urbana-Champaign. CiteSeerX 10.1.1.10.1130.
  22. SPEAR: Sinusoidal Partial Editing Analysis and Resynthesis, for Mac OS X, MacOS 9 and Windows.
  23. "Loris Software for Sound Modeling, Morphing, and Manipulation". Archived from the original on 30 July 2012. Retrieved 13 January 2012.
  24. SMSTools application for Windows.
  25. ARSS: The Analysis & Resynthesis Sound Spectrograph.
  26. Bonada, J.; Celma, O.; Loscos, A.; Ortola, J.; Serra, X.; Yoshioka, Y.; Kayama, H.; Hisaminato, Y.; Kenmochi, H. (2001). "Singing voice synthesis combining Excitation plus Resonance and Sinusoidal plus Residual Models". Proc. of ICMC. CiteSeerX 10.1.1.18.6258. (PDF)
  27. Loscos, A. (2007). Spectral processing of the singing voice (PhD thesis). Barcelona, Spain: Pompeu Fabra University. hdl:10803/7542. (PDF). See "Excitation plus resonances voice model" (p. 51).
  28. Loscos 2007, p. 44, "Spectral peak processing".
  29. Loscos 2007, p. 44, "Phase locked vocoder".
  30. Bonada, Jordi; Loscos, Alex (2003). "Sample-based singing voice synthesizer by spectral concatenation: 6. Concatenating Samples". Proc. of SMAC 03: 439–442.
  31. Cooper, F. S.; Liberman, A. M.; Borst, J. M. (May 1951). "The interconversion of audible and visible patterns as a basis for research in the perception of speech". Proc. Natl. Acad. Sci. U.S.A. 37 (5): 318–25. Bibcode:1951PNAS...37..318C. doi:10.1073/pnas.37.5.318. PMC 1063363. PMID 14834156.
  32. Remez, R. E.; Rubin, P. E.; Pisoni, D. B.; Carrell, T. D. (1981). "Speech perception without traditional speech cues". Science. 212 (4497): 947–950. Bibcode:1981Sci...212..947R. doi:10.1126/science.7233191. PMID 7233191. S2CID 13039853.
  33. Rubin, P. E. (1980). "Sinewave Synthesis Instruction Manual (VAX)" (PDF). Internal Memorandum. Haskins Laboratories, New Haven, CT. Archived from the original (PDF) on 29 August 2021. Retrieved 27 December 2011.
  34. Sagayama, S.; Itakura, F. (1979). 複合正弦波による音声合成 [Speech Synthesis by Composite Sinusoidal Wave]. Speech Committee of Acoustical Society of Japan (published October 1979). S79-39.
  35. Sagayama, S.; Itakura, F. (October 1979). 複合正弦波による簡易な音声合成法 [Simple Speech Synthesis Method by Composite Sinusoidal Wave]. Proceedings of Acoustical Society of Japan, Autumn Meeting. Vol. 3-2-3. pp. 557–558.
  36. Sagayama, S.; Itakura, F. (1986). "Duality theory of composite sinusoidal modeling and linear prediction". ICASSP '86: IEEE International Conference on Acoustics, Speech, and Signal Processing. Vol. 11 (published April 1986). pp. 1261–1264. doi:10.1109/ICASSP.1986.1168815. S2CID 122814777.
  37. Itakura, F. (2004). "Linear Statistical Modeling of Speech and its Applications -- Over 36-year history of LPC --" (PDF). Proceedings of the 18th International Congress on Acoustics (ICA 2004), We3.D, Kyoto, Japan, Apr. 2004. 3 (published April 2004): III-2077–2082. Archived from the original (PDF) on 24 May 2022. Retrieved 24 October 2014. "6. Composite Sinusoidal Modeling (CSM): In 1975, Itakura proposed the line spectrum representation (LSR) concept and its algorithm to obtain a set of parameters for new speech spectrum representation. Independently from this, Sagayama developed a composite sinusoidal modeling (CSM) concept which is equivalent to LSR but give a quite different formulation, solving algorithm and synthesis scheme. Sagayama clarified the duality of LPC and CSM and provided the unified view covering LPC, PARCOR, LSR, LSP and CSM. CSM is not only a new concept of speech spectrum analysis but also a key idea to understand the linear prediction from a unified point of view. ..."
  38. Adrien, Jean-Marie (1991). "The missing link: modal synthesis". In Giovanni de Poli; Aldo Piccialli; Curtis Roads (eds.). Representations of Musical Signals. Cambridge, MA: MIT Press. pp. 269–298. ISBN 978-0-262-04113-3.
  39. Morrison, Joseph Derek (IRCAM); Adrien, Jean-Marie (1993). "MOSAIC: A Framework for Modal Synthesis". Computer Music Journal. 17 (1): 45–56. doi:10.2307/3680569. JSTOR 3680569.
  40. Bilbao, Stefan (October 2009). "Modal Synthesis". Numerical Sound Synthesis: Finite Difference Schemes and Simulation in Musical Acoustics. Chichester, UK: John Wiley and Sons. ISBN 978-0-470-51046-9. "A different approach, with a long history of use in physical modeling sound synthesis, is based on a frequency-domain, or modal description of vibration of objects of potentially complex geometry. Modal synthesis [1,148], as it is called, is appealing, in that the complex dynamic behaviour of a vibrating object may be decomposed into contributions from a set of modes (the spatial forms of which are eigenfunctions of the particular problem at hand, and are dependent on boundary conditions), each of which oscillates at a single complex frequency. ..." (See also companion page.)
  41. Doel, Kees van den; Pai, Dinesh K. (2003). Greenebaum, K. (ed.). "Modal Synthesis For Vibrating Object" (PDF). Audio Anecdotes. Natick, MA: AK Peters. "When a solid object is struck, scraped, or engages in other external interactions, the forces at the contact point causes deformations to propagate through the body, causing its outer surfaces to vibrate and emit sound waves. ... A good physically motivated synthesis model for objects like this is modal synthesis ... where a vibrating object is modeled by a bank of damped harmonic oscillators which are excited by an external stimulus."
  42. ^Prestini, Elena (2004) [Rev. ed of: Applicazioni dell'analisi armonica. Milan: Ulrico Hoepli, 1996].The Evolution of Applied Harmonic Analysis: Models of the Real World. trans. New York, USA: Birkhäuser Boston. pp. 114–115.ISBN 978-0-8176-4125-2. Retrieved6 February 2012.
  43. ^ Fourier, Jean Baptiste Joseph (1822). Théorie analytique de la chaleur [The Analytical Theory of Heat] (in French). Paris, France: Chez Firmin Didot, père et fils. ISBN 9782876470460.
  44. ^ a b c Miller, Dayton Clarence (1926) [1916]. The Science of Musical Sounds. New York: The Macmillan Company. pp. 110, 244–248.
  45. ^ The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 49. Taylor & Francis: 490. 1875. [failed verification]
  46. ^ Thomson, Sir W. (1878). "Harmonic analyzer". Proceedings of the Royal Society of London. 27 (185–189). Taylor and Francis: 371–373. doi:10.1098/rspl.1878.0062. JSTOR 113690.
  47. ^ a b c Cahan, David (1993). Cahan, David (ed.). Hermann von Helmholtz and the foundations of nineteenth-century science. Berkeley and Los Angeles, USA: University of California Press. pp. 110–114, 285–286. ISBN 978-0-520-08334-9.
  48. ^ Helmholtz, Hermann von (1863). Die Lehre von den Tonempfindungen als physiologische Grundlage für die Theorie der Musik [On the sensations of tone as a physiological basis for the theory of music] (in German) (1st ed.). Leipzig: Leopold Voss. p. v.
  49. ^ a b Christensen, Thomas Street (2002). The Cambridge History of Western Music Theory. Cambridge, United Kingdom: Cambridge University Press. pp. 251, 258. ISBN 978-0-521-62371-1.
  50. ^ a b c von Helmholtz, Hermann (1875). On the sensations of tone as a physiological basis for the theory of music. London, United Kingdom: Longmans, Green, and Co. pp. xii, 175–179.
  51. ^ Russell, George Oscar (1936). Year Book - Carnegie Institution of Washington (1936). Carnegie Institution of Washington: Year Book. Vol. 35. Washington: Carnegie Institution of Washington. pp. 359–363.
  52. ^ Lodge, John E. (April 1938). Brown, Raymond J. (ed.). "Odd Laboratory Tests Show Us How We Speak: Using X Rays, Fast Movie Cameras, and Cathode-Ray Tubes, Scientists Are Learning New Facts About the Human Voice and Developing Teaching Methods To Make Us Better Talkers". Popular Science Monthly. 132 (4). New York, USA: Popular Science Publishing: 32–33.
  53. ^ Comerford, P. (1993). "Simulating an Organ with Additive Synthesis". Computer Music Journal. 17 (2): 55–65. doi:10.2307/3680869. JSTOR 3680869.
  54. ^ "Institute News and Radio Notes". Proceedings of the IRE. 28 (10): 487–494. 1940. doi:10.1109/JRPROC.1940.228904.
  55. ^ Douglas, A. (1948). "Electrotonic Music". Proceedings of the Royal Musical Association. 75: 1–12. doi:10.1093/jrma/75.1.1.
  56. ^ Douglas, Alan Lockhart Monteith (1957). The Electrical Production of Music. London, UK: Macdonald. pp. 140, 142.
  57. ^ Pejrolo, Andrea; DeRosa, Rich (2007). Acoustic and MIDI orchestration for the contemporary composer. Oxford, UK: Elsevier. pp. 53–54.
  58. ^ a b c Weidenaar, Reynold (1995). Magic Music from the Telharmonium. Lanham, MD: Scarecrow Press. ISBN 978-0-8108-2692-2.
  59. ^ a b Moog, Robert A. (October–November 1977). "Electronic Music". Journal of the Audio Engineering Society. 25 (10/11): 856.
  60. ^ a b Olsen, Harvey (14 December 2011). Brown, Darren T. (ed.). "Leslie Speakers and Hammond organs: Rumors, Myths, Facts, and Lore". The Hammond Zone. Hammond Organ in the U.K. Archived from the original on 1 September 2012. Retrieved 20 January 2012.
  61. ^ Holzer, Derek (22 February 2010). "A brief history of optical synthesis". Retrieved 13 January 2012.
  62. ^ Vail, Mark (1 November 2002). "Eugeniy Murzin's ANS – Additive Russian synthesizer". Keyboard Magazine. p. 120.
  63. ^ Young, Gayle. "Oscillator Bank (1959)".
  64. ^ Young, Gayle. "Spectrogram (1959)".
  65. ^ a b Luce, David Alan (1963). Physical correlates of nonpercussive musical instrument tones (Thesis). Cambridge, Massachusetts, U.S.A.: Massachusetts Institute of Technology. hdl:1721.1/27450.
  66. ^ a b Beauchamp, James (17 November 2009). "The Harmonic Tone Generator: One of the First Analog Voltage-Controlled Synthesizers". Prof. James W. Beauchamp Home Page. Archived from the original on 12 June 2018. Retrieved 29 May 2018.
  67. ^ Beauchamp, James W. (October 1966). "Additive Synthesis of Harmonic Musical Tones". Journal of the Audio Engineering Society. 14 (4): 332–342.
  68. ^ a b c d "RMI Harmonic Synthesizer". Synthmuseum.com. Archived from the original on 9 June 2011. Retrieved 12 May 2011.
  69. ^ a b c Reid, Gordon. "PROG SPAWN! The Rise And Fall of Rocky Mount Instruments (Retro)". Sound on Sound (December 2001). Archived from the original on 25 December 2011. Retrieved 22 January 2012.
  70. ^ Flint, Tom. "Jean Michel Jarre: 30 Years of Oxygene". Sound on Sound (February 2008). Retrieved 22 January 2012.
  71. ^ "Allen Organ Company". fundinguniverse.com.
  72. ^ a b Cosimi, Enrico (20 May 2009). "EMS Story - Prima Parte" [EMS Story - Part One]. Audio Accordo.it (in Italian). Archived from the original on 22 May 2009. Retrieved 21 January 2012.
  73. ^ a b Hinton, Graham (2002). "EMS: The Inside Story". Electronic Music Studios (Cornwall). Archived from the original on 21 May 2013.
  74. ^ The New Sound of Music (TV). UK: BBC. 1979. Includes a demonstration of DOB and AFB.
  75. ^ Leete, Norm. "Fairlight Computer – Musical Instrument (Retro)". Sound on Sound (April 1999). Retrieved 29 January 2012.
  76. ^ Twyman, John (1 November 2004). (inter)facing the music: The history of the Fairlight Computer Musical Instrument (PDF) (Bachelor of Science (Honours) thesis). Unit for the History and Philosophy of Science, University of Sydney. Archived from the original (PDF) on 23 March 2012. Retrieved 29 January 2012.
  77. ^ Street, Rita (8 November 2000). "Fairlight: A 25-year long fairytale". Audio Media magazine. IMAS Publishing UK. Archived from the original on 8 October 2003. Retrieved 29 January 2012.
  78. ^ "Computer Music Journal" (JPG). 1978. Retrieved 29 January 2012.
  79. ^ a b Leider, Colby (2004). "The Development of the Modern DAW". Digital Audio Workstation. McGraw-Hill. p. 58.
  80. ^ a b c Chadabe, Joel (1997). Electric Sound. Upper Saddle River, N.J., U.S.A.: Prentice Hall. pp. 177–178, 186. ISBN 978-0-13-303231-4.
  81. ^ "Kawai K5000 | Vintage Synth Explorer". www.vintagesynth.com. Retrieved 21 January 2024.
  82. ^ "Kawai K5000R & K5000S". www.soundonsound.com. Retrieved 21 January 2024.

Retrieved from "https://en.wikipedia.org/w/index.php?title=Additive_synthesis&oldid=1322506033"