Well before the advent of digital devices, mathematics and engineering were already powerful driving forces in the history of music. From Pythagoras’s geometric analysis of tone intervals all the way to the awe-inducing technical complexity of baroque organs, scientific research and craftsmanship have played a pivotal role in the evolution of humanity’s ever-expanding range of musical expressiveness.
Historically, it was instrument makers (a class of artist-engineers in their own right) who led this endeavor throughout the centuries; by harnessing the properties of different materials and designs, they managed to build a range of musical instruments that spans a large region of the vast multidimensional space of timbral features. A long process of trial, error, and selection, started over 30,000 years ago with the first bone flutes, culminated in the standard roll call of the large acoustic orchestral ensembles of the first half of the last century.

With the design of musical instruments came, almost immediately, the design of music \textit{machines}: the abstract representation of pitches and rhythms is indeed eminently numerical in its essence and lends itself naturally to mechanization. A few ancient texts describe automated singing birds built in Greece in the 2nd century BC; the first music box with a set of interchangeable playable cylinders appeared in Baghdad in the 9th century; and, of course, the 18th-century Jaquet-Droz automaton “la Musicienne” in Neuchâtel remains one of the pinnacles of what can be achieved in a purely mechanical fashion. The invention of the phonograph, itself a purely mechanical device in its first incarnation, ushered in the era of sound reproduction: an immense paradigm shift in the relationship between performers and listeners, and a giant step in the democratization of music enjoyment.
And finally, electricity and electronics expanded the boundaries of all music-related domains: reproduction became broadcasting; existing instruments could be amplified and their timbre could be tweaked; and new instruments such as the Theremin or the Moog synthesizer appeared on stage.

Digital computers entered the world of music only in the early 1960s; yet, in just a few decades, they managed to push all that was once considered standard practice to the pages of archival history. Computers have indeed started a brand new chapter in the history of our relationship with music, and our class sets out to explore some of the key aspects of this “digital new normal” from an engineering point of view.
\section{The central role of computers in music}
Today, digital devices play a primary and irreplaceable role in all facets of the music world: from composition to performance, from production to distribution, from analysis to enjoyment. This centrality is the result of the versatility of the digital paradigm, which combines general-purpose processors with the medium-agnostic numerical encoding of sampled information. In our class we will address three fundamental aspects of how the technical approach to music has changed.
\subsection{Sound reproduction}
Virtually all audible music today originates from digital devices, be it a cellphone, a dedicated audio player, or any of the myriad appliances that provide us with a staggering amount of high-quality musical entertainment. The synergistic convergence of cheaper hardware, high-density memory, and efficient compression algorithms such as MP3 allows for an unprecedented ease of dissemination and sharing of musical material. Cross-contamination of disparate musical genres is now the norm, and it has never been simpler for artists to showcase high-quality recordings of their work.

In the first part of our class we will review the basics of digital audio and the inner workings of digital-to-analog and analog-to-digital converters, and we will study some common audio encoding and compression standards.
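As a concrete preview, the following sketch (in Python with numpy, in the spirit of the class notebooks; the sampling rate, test frequency, and bit depth are illustrative choices of our own) implements the two operations at the heart of A/D conversion, sampling and quantization, and measures the resulting signal-to-noise ratio:

\begin{verbatim}
import numpy as np

fs = 8000          # sampling rate in Hz (illustrative)
f0 = 440.0         # test tone: A4
bits = 8           # quantizer resolution (illustrative)

# Sampling: evaluate the continuous-time sinusoid at t = n / fs
n = np.arange(fs)                        # one second of samples
x = np.sin(2 * np.pi * f0 * n / fs)

# Quantization: map each sample onto 2**bits uniform levels over [-1, 1)
step = 2.0 / 2**bits
xq = np.clip(np.round(x / step) * step, -1.0, 1.0 - step)

# The error x - xq behaves like noise and determines the SNR
snr_db = 10 * np.log10(np.mean(x**2) / np.mean((x - xq)**2))
print(f"{bits}-bit SNR: {snr_db:.1f} dB")
\end{verbatim}

For a full-scale sinusoid, the measured figure comes close to the textbook rule of thumb of roughly 6 dB of SNR per bit of resolution.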
\subsection{Sound Synthesis}
Computers have been used to generate sounds since their inception, with Australia’s CSIRAC playing a couple of simple tunes as far back as 1951. More formal efforts by Max Mathews in the US produced a series of digital synthesis software suites, from Music-I (1957) to Music-V (1967). At the time, the hardware was not powerful enough to generate sounds in real time, and therefore these tools were used almost exclusively in avant-garde compositions for a very limited public. The real breakthrough for performing artists took place in the early 1980s, with the arrival of relatively inexpensive digital keyboards such as Yamaha’s DX7 (based on FM synthesis) and the first samplers. Two decades later, synthesizers “lost” their keyboard and became simple software modules that could be run on general-purpose hardware – a virtualization trend that continues today with the availability of immense libraries of carefully sampled sounds. Today, an entire orchestra can easily be reproduced in real time, instrument by instrument, by a sufficiently powerful PC.

The second part of our class will address the basics of sound synthesis, the way in which samplers work, and the MIDI interoperability standard for digital instruments.
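As a first taste of synthesis, the sketch below (a toy two-operator example of our own devising, not the DX7’s actual six-operator algorithm; all parameter values are illustrative) generates a one-second tone using the frequency-modulation technique pioneered by John Chowning:

\begin{verbatim}
import numpy as np

fs = 44100                          # sampling rate in Hz
t = np.arange(fs) / fs              # one second of samples
fc, fm = 440.0, 220.0               # carrier and modulator (2:1 ratio)
I = 5.0                             # modulation index: governs brightness

envelope = np.exp(-3.0 * t)         # crude exponential amplitude decay
note = envelope * np.sin(2 * np.pi * fc * t
                         + I * np.sin(2 * np.pi * fm * t))
# 'note' is a float array in [-1, 1], ready to be written to a WAV file
\end{verbatim}

Varying the modulation index and the carrier-to-modulator frequency ratio is enough to move from mellow, flute-like tones to bright, bell-like ones, which is precisely what made FM synthesis so attractive on the limited hardware of the time.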
\subsection{Sound Recording}
The first commercial digital recordings of musical performances took place in the mid-1970s; by the end of the 1990s, virtually all stages of music production had moved to digital workstations. The art of analog recording and mastering had blossomed in the 1960s and 1970s, with recording engineers perfecting the techniques to “create a sound” that could go beyond the simple reproduction of a musical performance. Equalization, compression, and reverberation, to cite just a few, became essential production devices that helped expand the realm of what could be played back to a listener. When production moved to digital, these tools had to be reinvented – either from scratch, which created a new palette of “digital effects”, or by trying to reproduce in software the characteristics of the old analog circuitry. This virtualization of the recording studio also removed the last barrier to the ultimate democratization of music production, since now anyone with a PC can own professional recording tools.

In the last part of our class we will study how a home studio works, the design and purpose of several digital effects, the VST standard for the interoperability of audio processing software, and the challenges inherent to the simulation of old analog circuitry.
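To fix ideas, here is a minimal sketch of one such digital effect (in Python with numpy; the function name, signature, and default parameter values are our own, chosen for illustration): a recirculating delay line, the elementary building block behind echo and many reverberation units.

\begin{verbatim}
import numpy as np

def echo(x, fs, delay_s=0.25, feedback=0.4, mix=0.5):
    """Bare-bones recirculating delay line; parameters are illustrative."""
    d = int(delay_s * fs)                  # delay expressed in samples
    y = np.asarray(x, dtype=float).copy()
    for n in range(d, len(y)):
        y[n] += feedback * y[n - d]        # feed the delayed output back in
    return (1.0 - mix) * np.asarray(x, dtype=float) + mix * y
\end{verbatim}

Applied to a dry signal, the loop produces a train of decaying repetitions spaced \texttt{delay\_s} seconds apart; combining a few such delay lines of different lengths with allpass stages is already the skeleton of a classic Schroeder reverberator.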
\section{Syllabus}
This class, as we said, is an exploration of the relationship between digital computers and music. We will tackle this history mostly from an engineering and signal processing perspective, with a focus on the key results that took us from the first tentative steps in music generation to today’s ubiquitous success. Unlike most other computer music classes, however, we will not enter the fascinating but extremely subjective world of electronic music composition – \textit{pace} the admirers of Xenakis, Berio, Chowning, et al.
We will cover the following topics, together with applied examples using Python notebooks:
\begin{enumerate}
\item review of digital signal processing: discrete-time signals, spectral analysis
\item A/D and D/A converters: oversampling, sigma-delta
\item audio measurement standards; audio compression: the MP3 and FLAC standards
\item time-frequency analysis: pitch shifting, time stretching, vocoder