
Digital music technology encompasses the use of digital instruments to produce, perform[1] or record music. These instruments vary, including computers, electronic effects units, software, and digital audio equipment. Digital music technology is used in the performance, playback, recording, composition, mixing, analysis and editing of music by professionals in all parts of the music industry.
In the late 19th century, Thaddeus Cahill introduced the Telharmonium, which is commonly considered the first electromechanical musical instrument.[2] In the early 20th century, Leon Theremin created the theremin, an early electronic instrument played without physical contact, introducing a new mode of sound production.
In the mid-20th century, sampling emerged, with artists like Pierre Schaeffer and Karlheinz Stockhausen manipulating recorded sounds on tape to create entirely new compositions. This laid the foundation for future electronic music production techniques.
In the 1960s, the Moog synthesizer, invented by Robert Moog, popularized analog synthesis. Musician Wendy Carlos demonstrated Moog's invention with the album Switched-On Bach, which consisted of works composed by Johann Sebastian Bach interpreted on the Moog synthesizer.[3] Meanwhile, tape-based studios, like the BBC Radiophonic Workshop, were at the forefront of electronic sound design.
The 1980s saw a major shift towards digital technology with the development of the Musical Instrument Digital Interface (MIDI) standard. This allowed electronic instruments to communicate with computers and each other, transforming music production. Digital synthesizers, such as the Yamaha DX7, became widely popular.[4]
The 1990s and 2000s witnessed the explosive growth of electronic dance music and its various subgenres, driven by the accessibility of digital music production tools and the rise of computer-based software synthesizers.
Courses in music technology are offered at many universities as part of degree programs focusing on performance, composition, and music research at the undergraduate and graduate levels. The study of music technology is usually concerned with the creative use of technology for creating new sounds, performing, recording, programming sequencers or other music-related electronic devices, and manipulating, mixing and reproducing music. Music technology programs train students for careers in "...sound engineering, computer music, audio-visual production and post-production, mastering, scoring for film and multimedia, audio for games, software development, and multimedia production."[5] Those wishing to develop new music technologies often train to become an audio engineer working in research and development.[6] Due to the increasing role of interdisciplinary work in music technology, individuals developing new music technologies may also have backgrounds or training in electrical engineering, computer programming, computer hardware design, acoustics, record production or other fields.

Digital music technologies are widely used to assist in music education for training students in the home, elementary school, middle school, high school, college and university music programs. Electronic keyboard labs are used for cost-effective beginner group piano instruction in high schools, colleges, and universities. Courses in music notation software and basic manipulation of audio and MIDI can be part of a student's core requirements for a music degree. Mobile and desktop applications are available to aid the study of music theory and ear training. Some digital pianos provide interactive lessons and games using the built-in features of the instrument to teach music fundamentals.[7]
Classic analog synthesizers include the Moog Minimoog, ARP Odyssey, Yamaha CS-80, Korg MS-20, Sequential Circuits Prophet-5, Roland TB-303, and Roland Alpha Juno.[8] One of the most iconic synthesizers is the Roland TB-303, which was widely used in acid house music.
Classic digital synthesizers include the Fairlight CMI, PPG Wave, Nord Modular and Korg M1.[8]
Computer and synthesizer technology joining together changed the way music is made and is one of the fastest-changing aspects of music technology today. Max Mathews, an acoustic researcher[9] at Bell Telephone Laboratories' Acoustic and Behavioural Research Department, is responsible for some of the first digital music technology in the 1950s. Mathews also pioneered a cornerstone of music technology: analog-to-digital conversion.[10]
At Bell Laboratories, Mathews conducted research to improve telecommunications quality for long-distance phone calls. Owing to long distances and low bandwidth, audio quality over phone calls across the United States was poor. Mathews therefore devised a method in which sound was synthesized by computer on the receiving end rather than transmitted. Mathews was an amateur violinist, and during a conversation with his superior at Bell Labs, John Pierce, Pierce posed the idea of synthesizing music through a computer. Since Mathews had already synthesized speech, he agreed and wrote a series of programs known as MUSIC. MUSIC consisted of two files: an orchestra file containing data telling the computer how to synthesize sound, and a score file instructing the program what notes to play using the instruments defined in the orchestra file. Mathews wrote five iterations of MUSIC, calling them MUSIC I through MUSIC V. Subsequently, as the program was adapted and expanded to run on various platforms, its name changed to reflect those changes. This series of programs became known as the MUSIC-N paradigm. The MUSIC concept survives today in the form of Csound.[11]
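The orchestra/score split described above can be sketched in a few lines of Python. This is a hypothetical, much-simplified illustration of the MUSIC-N idea, not the original programs: instrument definitions (the "orchestra") are kept separate from the list of notes to play (the "score"), and a renderer combines the two into audio samples.

```python
import math

SAMPLE_RATE = 8000  # a low rate keeps the example fast

# "Orchestra": instrument definitions mapping names to synthesis functions.
def sine_instrument(freq, dur):
    """Generate dur seconds of a sine tone at freq Hz."""
    n = int(dur * SAMPLE_RATE)
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

orchestra = {"sine": sine_instrument}

# "Score": which instrument plays, starting when, for how long, at what pitch.
score = [
    ("sine", 0.0, 0.5, 440.0),  # A4 for half a second, starting at t=0
    ("sine", 0.5, 0.5, 660.0),  # E5 for the next half second
]

def render(orchestra, score):
    """Mix every score event into one output buffer."""
    total = max(start + dur for _, start, dur, _ in score)
    out = [0.0] * int(total * SAMPLE_RATE)
    for name, start, dur, freq in score:
        samples = orchestra[name](freq, dur)
        offset = int(start * SAMPLE_RATE)
        for i, s in enumerate(samples):
            out[offset + i] += s
    return out

audio = render(orchestra, score)
```

The same separation of concerns survives in Csound, where the `.orc` and `.sco` files play exactly these two roles.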
Later, Max Mathews worked as an advisor to IRCAM in the late 1980s. There, he taught Miller Puckette, a researcher. Puckette developed a program in which music could be programmed graphically. The program could transmit and receive MIDI messages to generate interactive music in real time. Inspired by Mathews, Puckette named the program Max. Later, a researcher named David Zicarelli visited IRCAM, saw the capabilities of Max, and felt it could be developed further. He took a copy of Max with him when he left and eventually added capabilities to process audio signals. Zicarelli named this new part of the program MSP after Miller Puckette. Zicarelli developed the commercial version of Max/MSP and sold it at his company, Cycling '74, beginning in 1997. The company has since been acquired by Ableton.[11]
The first generation of professional commercially available computer music instruments, or workstations as some companies later called them, were sophisticated, elaborate systems that cost a great deal of money when they first appeared, ranging from $25,000 to $200,000.[12] The two most popular were the Fairlight and the Synclavier.
It was not until the advent of MIDI that general-purpose computers started to play a role in music production. Following the widespread adoption of MIDI, computer-based MIDI editors and sequencers were developed. MIDI-to-CV/Gate converters were then used to enable analog synthesizers to be controlled by a MIDI sequencer.[13]
At the NAMM Show of 1983 in Los Angeles, MIDI was released. A demonstration at the convention showed two previously incompatible analog synthesizers, the Prophet 600 and Roland Jupiter-6, communicating with each other, enabling a player to play one keyboard while getting the output from both of them. This development immediately allowed synths to be accurately layered in live shows and studio recordings. MIDI enables different electronic instruments and electronic music devices to communicate with each other and with computers. The advent of MIDI spurred a rapid expansion of the sales and production of electronic instruments and music software.
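The messages exchanged in such a setup are small byte sequences defined by the MIDI 1.0 specification. As a minimal sketch, the Python helpers below build the three-byte Note On and Note Off channel voice messages (a status byte carrying the channel, followed by note number and velocity); the function names are illustrative, not part of any real library.

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On message: status 0x90 | channel, note, velocity."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note, velocity=0):
    """Build a 3-byte MIDI Note Off message: status 0x80 | channel, note, velocity."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x80 | channel, note, velocity])

# Middle C (note number 60) at moderate velocity on channel 1 (index 0):
msg = note_on(0, 60, 100)  # bytes 0x90, 0x3C, 0x64
```

Because every compliant instrument parses these same byte layouts, a sequencer can drive any mix of synthesizers and drum machines with one stream of such messages.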
In 1985, several of the top keyboard manufacturers created the MIDI Manufacturers Association (MMA). This newly founded association standardized the MIDI protocol by generating and disseminating all the documents about it. With the development of the MIDI file format specification by Opcode, MIDI sequencer software from different music software companies could read and write each other's files.
Since the 1980s, personal computers have been the ideal platform for utilizing the vast potential of MIDI. This has created a large consumer market for MIDI-equipped hardware and software such as electronic keyboards, MIDI sequencers and digital audio workstations. With universal MIDI protocols, electronic keyboards, sequencers, and drum machines can all be connected together.
Coinciding with the history of computer music is the history of vocal synthesis. Prior to Max Mathews synthesizing speech with a computer, analog devices were used to recreate speech. In the 1930s, an engineer named Homer Dudley invented the Voice Operating Demonstrator (VODER), an electro-mechanical device which generated a sawtooth wave and white noise. Various parts of the frequency spectrum of the waveforms could be filtered to generate the sounds of speech. Pitch was modulated via a bar on a wrist strap worn by the operator.[14] In the 1940s, Dudley invented the Voice Operated Coder (VOCODER). Rather than synthesizing speech from scratch, this machine operated by accepting incoming speech and breaking it into its spectral components. In the late 1960s and early 1970s, bands and solo artists began using the VOCODER to blend speech with notes played on a synthesizer.[15]
At Bell Laboratories, Max Mathews worked with researchers Kelly and Lochbaum to develop a model of the vocal tract to study how its properties contributed to speech generation. Using this vocal tract model, they made a computer (an IBM 704) sing for the first time in 1962, performing a rendition of "Daisy Bell".[16] In the method, which would come to be known as physical modeling synthesis, a computer estimates the formants and spectral content of each word based on information about the vocal model, including various applied filters representing the vocal tract.
At IRCAM in France, researchers developed software called CHANT (French for sing), the first version of which ran between 1979 and 1983.[17] CHANT was based on FOF (Fonction d'Onde Formantique[citation needed]) synthesis, in which the peak frequencies of a sound are created and shaped using granular synthesis, as opposed to filtering frequencies to create speech.[18]
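As a loose illustration of the granular idea behind FOF, the Python sketch below (a hypothetical simplification, not CHANT's actual algorithm) overlaps short, enveloped sine "grains" once per fundamental period, so a spectral peak appears at the grain's formant frequency rather than being carved out of a rich waveform by a filter.

```python
import math

SAMPLE_RATE = 16000

def grain(formant_hz, grain_dur):
    """One grain: a sine at the formant frequency under a decaying envelope."""
    n = int(grain_dur * SAMPLE_RATE)
    return [math.exp(-5.0 * i / n) * math.sin(2 * math.pi * formant_hz * i / SAMPLE_RATE)
            for i in range(n)]

def fof_like(fundamental_hz, formant_hz, dur):
    """Overlap-add one grain per fundamental period, giving a vowel-like tone."""
    out = [0.0] * int(dur * SAMPLE_RATE)
    period = int(SAMPLE_RATE / fundamental_hz)  # samples between grain onsets
    g = grain(formant_hz, 0.015)                # one 15 ms grain, reused
    for start in range(0, len(out), period):
        for i, s in enumerate(g):
            if start + i < len(out):
                out[start + i] += s
    return out

tone = fof_like(110.0, 800.0, 0.5)  # 110 Hz pitch with an 800 Hz formant peak
```

The perceived pitch comes from the grain repetition rate, while the grain's internal frequency shapes the formant, which is the essential inversion of the filtering approach.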
Through the 1980s and 1990s, as MIDI devices became commercially available, speech was generated by mapping MIDI data to samples of the components of speech stored in sample libraries.[19]
In the 2010s, singing synthesis technology took advantage of advances in artificial intelligence, deep learning and machine learning to better represent the nuances of the human voice. New high-fidelity sample libraries combined with digital audio workstations facilitate editing in fine detail, such as shifting of formants, adjustment of vibrato, and adjustments to vowels and consonants. Sample libraries for various languages and various accents are available. With advancements in vocal synthesis, artists sometimes use sample libraries in lieu of backing singers.[20]

A synthesizer is an electronic musical instrument that generates electric signals that are converted to sound through instrument amplifiers and loudspeakers or headphones. Synthesizers may either imitate existing sounds (instruments, vocals, natural sounds, etc.), or generate new electronic timbres or sounds that did not exist before. They are often played with an electronic musical keyboard, but they can be controlled via a variety of other input devices, including sequencers, instrument controllers, fingerboards, guitar synthesizers, wind controllers, and electronic drums. Synthesizers without built-in controllers are often called sound modules, and are controlled using a controller device.
Synthesizers use various methods to generate a signal. Among the most popular waveform synthesis techniques are subtractive synthesis, additive synthesis, wavetable synthesis, frequency modulation synthesis, phase distortion synthesis, physical modeling synthesis and sample-based synthesis or a variant, granular synthesis. Synthesizers are used in many genres of pop, rock and dance music. Contemporary classical music composers from the 20th and 21st centuries write compositions for synthesizer.
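Of the techniques listed above, additive synthesis is the simplest to demonstrate: a timbre is built by summing sine-wave partials. The Python sketch below is illustrative only; it sums eight harmonics at 1/n amplitude, approximating a sawtooth-like tone.

```python
import math

SAMPLE_RATE = 44100

def additive(freq, dur, partials=8):
    """Additive synthesis: sum harmonics of freq at 1/k amplitude each."""
    n = int(dur * SAMPLE_RATE)
    out = []
    for i in range(n):
        t = i / SAMPLE_RATE
        # Harmonic k contributes sin(2*pi*k*f*t)/k; more partials = brighter tone.
        s = sum(math.sin(2 * math.pi * freq * k * t) / k
                for k in range(1, partials + 1))
        out.append(s * (2 / math.pi))  # rough scaling toward the [-1, 1] range
    return out

wave = additive(220.0, 0.1)  # a tenth of a second of a 220 Hz sawtooth-like tone
```

Subtractive synthesis works in the opposite direction, starting from a harmonically rich waveform like this one and filtering partials away.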

A drum machine is an electronic musical instrument designed to imitate the sound of drums, cymbals and other percussion instruments. Drum machines generate drum and cymbal sounds in a rhythm and tempo that is programmed by a musician. Drum machines are most commonly associated with electronic dance music genres such as house music, but are also used in many other genres. They are also used when session drummers are not available or when a production cannot afford the cost of a professional drummer. Since the 2010s, most drum machines have been sequencers with a sample playback (often a rompler) or synthesizer component that specializes in the reproduction of drum timbres.
Electro-mechanical drum machines were first developed in 1949, with the invention of the Chamberlin Rhythmate. The transistorized Seeburg Select-A-Rhythm appeared in 1964.[21][22][23][24]
Classic drum machines include the Korg Mini Pops SR-120, PAiA Programmable Drum Set, Roland CR-78, LinnDrum, Roland TR-909, Oberheim DMX, E-MU SP-12, Alesis HR-16, and Elektron SPS1 Machinedrum (in chronological order).[25]
Roland's TR-808 and TR-909 significantly changed the landscape of rhythm production, shaping genres like hip-hop and electronic dance music. Korg's KPR-77 and DDD-1 also made an impact. These drum machines were known for their distinctive sound and affordability. Over time, Japanese companies continued to innovate, producing increasingly sophisticated and user-friendly drum machines, such as the Roland TR-8 and Korg Volca Beats. Sly and the Family Stone's 1971 album There's a Riot Goin' On helped to popularize the sound of early drum machines, along with Timmy Thomas' 1972 R&B hit "Why Can't We Live Together" and George McCrae's 1974 disco hit "Rock Your Baby", which used early Roland rhythm machines.[26]
Digital sampling technology, introduced in the 1970s,[27][28][29][30][31] has become a staple of music production in the 2000s.[citation needed] Devices that use sampling record a sound digitally (often a musical instrument, such as a piano or flute being played) and replay it when a key or pad on a controller device (e.g., an electronic keyboard, electronic drum pad, etc.) is pressed or triggered. Samplers can alter the sound using various audio effects and audio processing. Sampling has its roots in France with the sound experiments carried out by musique concrète practitioners.
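A core operation behind this key-triggered replay is varying the playback rate of the stored recording to change its pitch. The Python sketch below is a hypothetical, minimal version of such "varispeed" resampling using linear interpolation; real samplers use more sophisticated interpolation, looping, and envelopes.

```python
def play_at_rate(sample, rate):
    """Read a recorded sound back at a new rate.

    rate > 1 plays faster (higher pitch, shorter output);
    rate < 1 plays slower (lower pitch, longer output).
    Values between source samples are linearly interpolated."""
    out = []
    pos = 0.0
    while pos < len(sample) - 1:
        i = int(pos)
        frac = pos - i
        out.append(sample[i] * (1 - frac) + sample[i + 1] * frac)
        pos += rate
    return out

recording = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]  # one cycle of a toy waveform
octave_up = play_at_rate(recording, 2.0)  # half as many samples: an octave higher
```

Doubling the rate halves the period of every cycle in the recording, which is why early samplers mapped one recording across a keyboard by assigning a different playback rate to each key.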
In the 1980s, when the technology was still in its infancy, digital samplers cost tens of thousands of dollars and were used only by top recording studios and musicians, putting them out of the price range of most musicians. Early samplers include the 8-bit Electronic Music Studios MUSYS-3 circa 1970, the Computer Music Melodian in 1976, the Fairlight CMI in 1979, the Emulator I in 1981, the Synclavier II Sample-to-Memory (STM) option circa 1980, the Ensoniq Mirage in 1984, and the Akai S612 in 1985. The Emulator I's successor, the Emulator II (released in 1984), listed for $8,000.[12] Other samplers released during this period carried high price tags, such as the K2000 and K2500.
Some important hardware samplers include the Kurzweil K250, Akai MPC60, Ensoniq Mirage, Ensoniq ASR-10, Akai S1000, E-mu Emulator, and Fairlight CMI.[32]
One of the biggest uses of sampling technology was by hip-hop music DJs and performers in the 1980s. Before affordable sampling technology was readily available, DJs would use a technique pioneered by Grandmaster Flash to manually repeat certain parts of a song by juggling between two separate turntables. This can be considered an early precursor of sampling. In turn, this turntablism technique originates from Jamaican dub music in the 1960s and was introduced to American hip hop in the 1970s.
In the 2000s, most professional recording studios use digital technologies, and recent samplers are entirely digital. This generation of digital samplers is capable of reproducing and manipulating sounds in fine detail. Digital sampling plays an integral part in some genres of music, such as hip-hop and trap. Advanced sample libraries have made complete performances of orchestral compositions possible that sound similar to a live performance.[33] Modern sound libraries allow musicians to use the sounds of almost any instrument in their productions.
Early samplers include the 12-bit Toshiba LMD-649 in 1981.[34]
The first affordable sampler in Japan was the Ensoniq Mirage in 1984. The Akai S612 became available in 1985, retailing for US$895. Other companies soon released affordable samplers, including the Oberheim DPX-1 in 1987, and more by Korg, Casio, Yamaha, and Roland. Some important hardware samplers in Japan include the Akai Z4/Z8, Roland V-Synth, and Casio FZ-1.[32]

MIDI has been the musical instrument industry standard interface since the 1980s through to the present day.[35] It dates back to June 1981, when Roland Corporation founder Ikutaro Kakehashi proposed the concept of standardization between different manufacturers' instruments as well as computers to Oberheim Electronics founder Tom Oberheim and Sequential Circuits president Dave Smith. In October 1981, Kakehashi, Oberheim and Smith discussed the concept with representatives from Yamaha, Korg and Kawai.[36] In 1983, the MIDI standard was unveiled by Kakehashi and Smith.[37][38]
Some universally accepted varieties of MIDI software applications include music instruction software, MIDI sequencing software, music notation software, hard disk recording/editing software, patch editor/sound library software, computer-assisted composition software, and virtual instruments. Current developments in computer hardware and specialized software continue to expand MIDI applications.
Reduced prices of personal computers led the masses to turn away from the more expensive workstations. Advancements in technology have increased the speed of hardware processing and the capacity of memory units. Software developers write new, more powerful programs for sequencing, recording, notating, and mastering music.
Digital audio workstation software, such as Pro Tools, Logic, and many others, has gained popularity among the vast array of contemporary music technology in recent years. Such programs allow the user to record acoustic sounds with a microphone or software instrument, which may then be layered and organized along a timeline and edited on a flat-panel display of a computer. Recorded segments can be copied and duplicated ad infinitum, without any loss of fidelity or added noise (a major contrast from analog recording, in which every copy leads to a loss of fidelity and added noise). Digital music can be edited and processed using a multitude of audio effects. Contemporary classical music sometimes uses computer-generated sounds (either pre-recorded, or generated and manipulated live) in conjunction with or juxtaposed against acoustic instruments like the cello or violin. Music is scored with commercially available notation software.[39]
In addition to digital audio workstations and music notation software, which facilitate the creation of fixed media (material that does not change each time it is performed), software facilitating interactive or generative music continues to emerge. Composition based on conditions or rules (algorithmic composition) has given rise to software which can automatically generate music based on input conditions or rules, so that the resulting music evolves each time the conditions change. Examples of this technology include software designed for writing music for video games, where music evolves as a player advances through a level or when certain characters appear, or music generated by artificial intelligence trained to convert biometrics like EEG or ECG readings into music.[40] Because this music is based on user interaction, it will be different each time it is heard. Other examples of generative music technology include the use of sensors connected to computers and artificial intelligence to generate music based on captured data, such as environmental factors, the movements of dancers, or physical input from a digital device such as a mouse or game controller. Software applications offering capabilities for generative and interactive music include SuperCollider, Max/MSP/Jitter, and Processing. Interactive music is made possible through physical computing, where data from the physical world affects a computer's output and vice versa.[11]
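A minimal example of rule-based (algorithmic) composition in this spirit: the Python sketch below, whose scale, rules, and names are illustrative assumptions rather than any particular system, performs a constrained random walk over a C major scale and produces a different melody for every seed.

```python
import random

# The rule set: start on the tonic of C major and move mostly stepwise,
# never leaping more than two scale degrees at a time.
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI note numbers, C4 to C5

def generate_melody(length, rng):
    """Random-walk melody: each step moves at most two scale degrees."""
    idx = 0  # start on the tonic
    melody = [C_MAJOR[idx]]
    for _ in range(length - 1):
        step = rng.choice([-2, -1, 0, 1, 2])
        idx = max(0, min(len(C_MAJOR) - 1, idx + step))  # clamp to the scale
        melody.append(C_MAJOR[idx])
    return melody

melody = generate_melody(16, random.Random(42))
```

Swapping the random source for live sensor data (dancer movement, EEG readings, game state) turns the same rule set into the kind of interactive, ever-changing music described above.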