"<img src=\"beatles.png\" alt=\"Drawing\" style=\"float: left; width: 200px; margin: 20px 30px;\"/>\n",
" \n",
"\n",
" * recorded on October 18, 1964\n",
" * one of the first (if not the first) example of distortion via feedback\n",
" \n",
" \n",
"> _\"I defy anybody to find a record... unless it is some old blues record from 1922... that uses feedback that way. So I claim it for the Beatles. Before Hendrix, before The Who, before anybody. The first feedback on record.\"_ -- John Lennon\n",
"Note the big difference in spectral content between the undistorted and the distorted sound: since we know that linear filters cannot add frequency components, the system is clearly non linear!"
- "If we look at the location of the main peak in the spectrum, we can see that its frequency is about 110Hz, which corresponds to the pitch of the guitar's A string. We can also see that the clean tone has only a few detectable overtones."
+ "### 2.1. The biquad section\n",
+ "\n",
+    "One of the most useful building blocks in applied DSP is the _biquad_ section, that is, a generic second-order IIR filter with transfer function $H(z) = \\frac{b_0 + b_1 z^{-1} + b_2 z^{-2}}{a_0 + a_1 z^{-1} + a_2 z^{-2}}$.\n",
+    "One common specialization of the biquad section is the peaking equalization filter, namely a filter that can provide an arbitrary boost or attenuation for a given frequency band centered around a peak frequency. The filter is defined by the following parameters:\n",
+ "\n",
+ " 1. the desired gain in dB (which can be negative)\n",
+ " 1. the peak frequency $f_c$, where the desired gain is attained\n",
+ " 1. the bandwidth of the filter, defined as the interval around $f_c$ where the gain is greater (or smaller, for attenuators) than half the desired gain in dB"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {
+ "slideshow": {
+ "slide_type": "slide"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "def PEQ(fc, bw, gain, sf):\n",
+    "    \"\"\"Biquad peaking equalizer\"\"\"\n",
+ " w = 2 * np.pi * fc / sf\n",
+ " A = 10 ** (gain / 40) \n",
+    "    alpha = np.tan(np.pi * bw / sf)\n",
+ " c = np.cos(w)\n",
+ " b = np.array([1 + alpha * A, -2 * c, 1 - alpha * A])\n",
+ " a = np.array([1 + alpha / A, -2 * c, 1 - alpha / A])\n",
+ "When a string oscillates too widely, it will end up bumping against the fretboard. We can approximate this effect by introducing a limiting nonlinearity."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "metadata": {
+ "slideshow": {
+ "slide_type": "slide"
},
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "def fret_buzz(x, threshold):\n",
+ " out = np.where(x > threshold, threshold, x)\n",
+ " out = np.where(out < -threshold, -threshold, out)\n",
- "So far we have triggered the string with an instantaneous \"pluck\", that is with a delta sequence. But we could use other inputs."
+ "In \"I Feel Fine\" the fret buzz appears when the sound from the amplifier drives the A string into wider and wider oscillations. To model this effect we need to simulate a feedback path."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "slideshow": {
+ "slide_type": "slide"
+ }
+ },
+ "source": [
+ "### 3.1. Sustained excitation\n",
+ "\n",
+ "So far we have triggered the string with an instantaneous \"pluck\", that is with a delta sequence. But we could use other inputs:"
+    "In the current setup, the amplifier is responsible for some slight _equalization_ of the guitar sound: we are going to cut the bass end a little and boost the midrange using two peaking equalizers in series, as sketched below.\n",
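+    "\n",
+    "A minimal sketch of this idea (not the actual `ToneControls` object used later in the notebook), assuming that `PEQ` returns its `(b, a)` coefficients; the center frequencies, bandwidths and gains are purely illustrative:\n",
+    "\n",
+    "```python\n",
+    "from scipy import signal\n",
+    "\n",
+    "# illustrative settings: cut a bit of bass, boost the midrange\n",
+    "b1, a1 = PEQ(fc=100, bw=80, gain=-6, sf=DEFAULT_SF)\n",
+    "b2, a2 = PEQ(fc=1000, bw=500, gain=6, sf=DEFAULT_SF)\n",
+    "\n",
+    "def tone_controls(x):\n",
+    "    # two biquad sections in series: filter with one, then with the other\n",
+    "    return signal.lfilter(b2, a2, signal.lfilter(b1, a1, x))\n",
+    "```\n",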
- " Your browser does not support the audio element.\n",
- " </audio>\n",
- " "
- ],
- "text/plain": [
- "<IPython.lib.display.Audio object>"
- ]
- },
- "execution_count": 149,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "Audio(s, rate=DEFAULT_SF)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### 2.3. The fret buzz"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Alternatively, we can see the nonlinearity as being the string hitting the fretboard as its vibration becomes wider and wider. The super fast hitting generates high frequencies. Visually, the nonlinearity looks like a hard clipping waveshaper, except that when the amplitude reaches one (that is, when the string reaches its maximum amplitude), the value of the output bouces back in the $[-1,1]$ range instead of staying at the value 1, as if the string was boucing back on the fretboard very quickly before going back to its normal vibration range."
- "Although we have already studied the Karplus-Strong algorithm as an effective way to simulate a plucked sound, in this case we need a model that is closer to the actual physics of a guitar, since we'll need to drive the string oscillation in the feedback loop. \n",
- "\n",
- "In a guitar, the sound is generated by the oscillation of strings that are both under tension and fixed at both ends. Under these conditions, a displacement of the string from its rest position (i.e. the initial \"plucking\") will result in an oscillatory behavior in which the energy imparted by the plucking travels back and forth between the ends of the string in the form of standing waves. The natural modes of oscillation of a string are all multiples of the string's fundamental frequency, which is determined by its length, its mass and its tension (see, for instance, [here](http://www.phys.unsw.edu.au/jw/strings.html) for a detailed explanation). This image (courtesy of [Wikipedia](http://en.wikipedia.org/wiki/Vibrating_string)) shows a few oscillation modes on a string:\n",
- "These vibrations are propagated to the body of an acoustic guitar and converted into sound pressure waves or, for an electric guitar, they are converted into an electrical waveform by the guitar's pickups."
- "plt.title(\"Harmonics of the guitar extract\")\n",
- "plt.show()"
+ "## 5. The acoustic channel"
]
},
{
- "cell_type": "code",
- "execution_count": null,
+ "cell_type": "markdown",
"metadata": {},
- "outputs": [],
"source": [
- "plot_spectrum(x[10000:40000], fs, 1000)"
+ "The feedback loop is completed by taking into account the transfer of energy from the amp's loudspeaker to the A string. "
]
},
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "slideshow": {
+ "slide_type": "skip"
+ }
+ },
"source": [
- "Indeed we can see that the frequency content of the sound contains multiples of a fundamental frequency at 110Hz, which corresponds to the open A string on a standard-tuning guitar. \n",
+ "For feedback to kick in, two things must happen:\n",
"\n",
- "From a signal processing point of view, the guitar string acts as a resonator resonating at several multiples of a fundamental frequency; this fundamental frequency determines the _pitch_ of the played note. In the digital domain, we know we can implement a resonator at a single frequency $\\omega_0$ with a second-order IIR of the form \n",
+ "* the energy transfer from the pressure wave to the vibrating string should be non-negligible\n",
+ "* the phase of the vibrating string must be sufficiently aligned with the phase of the sound wave in order for the sound wave to \"feed\" the vibration.\n",
"\n",
- "i.e. by placing a pair of complex-conjugate poles close to the unit circle at an angle $\\pm\\omega_0$. A simple extension of this concept, which places poles at _all_ multiples of a fundamental frequency, is the **comb filter**. A comb filter of order $N$ has the transfer function\n",
+ "Sound travels in the air at about 340 meters per second and sound pressure (that is, signal amplitude) decays with the reciprocal of the traveled distance. We can build an elementary acoustic channel simulation by neglecting everything except delay and attenuation. The output of the acoustic channel for a guitar-amplifier distance of $d$ meters will be therefore\n",
- "It is easy to see that the poles of the filters are at $z_k = \\rho e^{j\\frac{2\\pi}{N}k}$, except for $k=0$ where the zero cancels the pole. For example, here is the frequency response of $H(z) = 1/(1 - (0.99)^N z^{-N})$ for $N=9$:"
+    "\n",
+    "$$\n",
+    "\ty[n] = \\frac{\\alpha}{d}\\, x[n - M]\n",
+    "$$\n",
+    "\n",
+    "where $\\alpha$ is the coupling coefficient between amp and string at a reference distance of 1 m, $d$ is the distance between guitar and amplifier, and $M$ is the propagation delay in samples; with an internal clock of $F_s$ Hz we have $M = \\lfloor F_s \\, d / c \\rfloor$, where $c$ is the speed of sound."
]
},
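+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a rough sketch (the channel object actually used later in the simulation is not shown in this excerpt), the acoustic channel boils down to a delay line read $M$ samples in the past and scaled by $\\alpha/d$; the class name and the default values below are assumptions for illustration only.\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "class AcousticChannel:\n",
+    "    # hypothetical helper: pure delay-and-attenuate model of the air path\n",
+    "    def __init__(self, alpha=1.0, max_distance=5, c=340.0, sf=DEFAULT_SF):\n",
+    "        self.alpha, self.c, self.sf = alpha, c, sf\n",
+    "        # circular delay line, long enough for distances up to max_distance meters\n",
+    "        self.buf = np.zeros(int(max_distance / c * sf) + 1)\n",
+    "        self.ix = 0\n",
+    "\n",
+    "    def get(self, x, d):\n",
+    "        # d (in meters) must not exceed max_distance\n",
+    "        M = int(np.floor(d / self.c * self.sf))   # propagation delay in samples\n",
+    "        self.buf[self.ix] = x                     # store the newest sample\n",
+    "        y = self.buf[(self.ix - M) % len(self.buf)] * self.alpha / d\n",
+    "        self.ix = (self.ix + 1) % len(self.buf)\n",
+    "        return y"
+   ]
+  },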
{
"cell_type": "code",
- "execution_count": null,
- "metadata": {
- "scrolled": true
- },
- "outputs": [],
- "source": [
- "class guitar:\n",
- " def __init__(self, pitch=110, fs=fs):\n",
- " # init the class with desired pitch and underlying sampling frequency\n",
- " self.M = int(np.round(fs / pitch) )# fundamental period in samples\n",
- "plt.title(\"Harmonics of the generated guitar\")\n",
- "plt.show()\n",
- "plt.show()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## 2. The amplifier"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
"source": [
- "In the \"I Feel Fine\" setup, the volume of the amplifier remains constant; however, because of the feedback, the input will keep increasing and, at one point or another, any real-world amplifier will be driven into saturation. When that happens, the output is no longer a scaled version of the input but gets \"clipped\" to the maximum output level allowed by the amp. We can easily simulate this behavior with a simple memoryless clipping operatora \"hard clipper\" as we implemented in the Nonlinear Modelling notebook: "
+ "## 6. Play it, Johnnie"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "We can easily check the characteristic of the amplifier simulator: "
+    "We simulate the recording studio events by plucking the string and then, after a few moments, moving the guitar very close to the amp."
]
},
{
"cell_type": "markdown",
- "metadata": {},
- "source": [
- "While the response is linear between -0.9 and +0.9, it is important to remark that the clipping introduces a nonlinearity in the processing chain. In the case of linear systems, sinusoids are eigenfunctions and therefore a linear system can only alter a sinusoid by modifying its amplitude and phase. This is not the case with nonlinear systems, which can profoundly alter the spectrum of a signal by creating new frequencies. While these effects are very difficult to analyze mathematically, from the acoustic point of view nonlinear distortion can be very interesting, and \"I Feel Fine\" is just one example amongst countless others. \n",
- "\n",
- "It is instructive at this point to look at the spectrogram (i.e. the STFT) of the sound sample (figure obtained with a commercial audio spectrum analyzer); note how, indeed, the spectral content shows many more spectral lines after the nonlinearity of the amplifier comes into play.\n",
- "The last piece of the processing chain is the acoustic channel that closes the feedback loop. The sound pressure waves generated by the loudspeaker of the amplifier travel through the air and eventually reach the vibrating string. For feedback to kick in, two things must happen:\n",
- "\n",
- "* the energy transfer from the pressure wave to the vibrating string should be non-negligible\n",
- "* the phase of the vibrating string must be sufficiently aligned with the phase of the sound wave in order for the sound wave to \"feed\" the vibration.\n",
- "\n",
- "Sound travels in the air at about 340 meters per second and sound pressure decays with the reciprocal of the traveled distance. We can build an elementary acoustic channel simulation by neglecting everything except delay and attenuation. The output of the acoustic channel for a guitar-amplifier distance of $d$ meters will be therefore\n",
- "\n",
- "$$\n",
- "\ty[n] = \\alpha x[n-M]\n",
- "$$\n",
- "\n",
- "where $\\alpha = 1/d$ and $M$ is the propagation delay in samples; with an internal clock of $F_s$ Hz we have $M = \\lfloor d/(c F_s) \\rfloor$ where $c$ is the speed of sound."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## 4. Play it, Johnny"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
"source": [
- "OK, we're ready to play. We will generate a few seconds of sound, one sample at a time, following these steps:\n",
+ "We will synthesize a few seconds of sound, one sample at a time, following these steps:\n",
"\n",
"* generate a guitar sample\n",
- "* process it with the nonlinear amplifier\n",
+ "* apply the fret buzz nonlinearity (which will kick in only after the signal exceeds a certain level)\n",
+ "* filter the sample with the equalizer\n",
"* feed it back to the guitar via the acoustic channel using a time-varying distance\n",
"\n",
- "During the simulation, we will change the distance used in the feedback channel model to account for the fact that the guitar is first played at a distance from the amplifier, and then it is placed very close to it. In the first phase, the sound will simply be a decaying note and then the feedback will start moving the string back in full swing and drive the amp into saturation. We also need to introduce some coupling loss between the sound pressure waves emitted by the loudspeaker and the string, since air and wound steel have rather different impedences. \n",
+    "During the simulation, we will change the distance used in the feedback channel model to account for the fact that the guitar is first played at a distance from the amplifier and then placed very close to it. In the first phase the sound will simply be a decaying note; then the feedback will drive the string back into full swing and the fret buzz will kick in."
]
},
{
"cell_type": "code",
- "execution_count": null,
- "metadata": {},
+ "execution_count": 53,
+ "metadata": {
+ "slideshow": {
+ "slide_type": "slide"
+ }
+ },
"outputs": [],
"source": [
- "g = guitar(110) # the A string\n",
- "f = feedback() # the feedback channel\n",
- "\n",
- "# the \"coupling loss\" between air and string is high. Let's say that\n",
- "# it is about 80dBs\n",
- "COUPLING_LOSS = 0.0001\n",
- "\n",
- "# John starts 3m away and then places the guitar basically against the amp\n",
- "# after 1.5 seconds\n",
- "START_DISTANCE = 3 \n",
- "END_DISTANCE = 0.05\n",
- "\n",
- "N = int(fs * 5) # play for 5 seconds\n",
- "y = np.zeros(N) \n",
- "x = [1] # the initial plucking\n",
- "# now we create each sample in a loop by processing the guitar sound\n",
- "# thru the amp and then feeding back the attenuated and delayed sound\n",
- "# to the guitar\n",
- "for n in range(N):\n",
- " y[n] = hit_fret(g.play(x), 1/0.8415)\n",
- " x = [COUPLING_LOSS * f.get(y[n], START_DISTANCE if n < (1.5 * fs) else END_DISTANCE)]\n",
- " \n",
- "Audio(data=y, rate=fs, embed=True)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Pretty close, no? Of course the sound is not as rich as the original recording since\n",
- "\n",
- "* real guitars and real amplifiers are very complex physical system with many more types of nonlinearities; amongst others:\n",
- " * the spectral content generated by the string varies with the amplitude of its oscillation\n",
- " * the spectrum of the generated sound is not perfectly harmonic due to the physical size of the string\n",
- " * the string may start touching the frets when driven into large oscillations\n",
- " * the loudspeaker may introduce additional frequencies if driven too hard\n",
- " * ...\n",
- "* we have neglected the full frequency response of the amp both in linear and in nonlinear mode\n",
- "* it's the BEATLES, man! How can DSP compete?\n",
+ "A_string = String(pitch=110)\n",
+ "amp = ToneControls()\n",
"\n",
- "Well, hope this was a fun and instructive foray into music and signal processing. You can now play with the parameters of the simulation and try to find alternative setups: \n",
+ "# create a trajectory for the guitar, from A to B (in meters)\n",
+ "A, B = 1.5, 0.05\n",
+ "position = np.r_[\n",
+ " np.linspace(A, B, int(1 * DEFAULT_SF)), # one second to get close to the amp\n",
+ " np.ones(int(3 * DEFAULT_SF)) * B # remain there for 3 seconds\n",
+ "]\n",
+ "N = len(position)\n",
+ "x, y = 1, np.zeros(N) \n",
"\n",
- "* try to change the characteristic of the amp, maybe using a sigmoid (hyperbolic tangent)\n",
- "* change the gain, the coupling loss or the frequency of the guitar\n",
- "* change John's guitar's position and verify that feedback does not occur at all distances."
+ "importing Jupyter notebook from FilterUtils.ipynb\n"
+ ]
+ }
+ ],
+ "source": [
+ "%matplotlib inline\n",
+ "import matplotlib\n",
+ "import matplotlib.pyplot as plt\n",
+ "import numpy as np\n",
+ "from IPython.display import Audio\n",
+ "from scipy import signal\n",
+ "\n",
+ "import import_ipynb\n",
+ "from FilterUtils import *"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "plt.rcParams['figure.figsize'] = 14, 4 \n",
+ "matplotlib.rcParams.update({'font.size': 14})"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "DEFAULT_SF = 16000"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 1. Introduction"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Although ultimately we need to design _discrete-time_ filters, in audio applications it is important to become familiar with some of the _analog_ filter design conventions; in particular:\n",
+ "\n",
+ " * filter specifications will most likely be expressed in terms of real-world frequencies in Hz. This will require that we know the sampling frequency of the signals that the digital filter will work on\n",
+    " * contrary to the usual linear plots used in discrete time, the frequency response of audio filters is generally displayed on a log-log graph (frequency on a log axis, magnitude in dB); this is because human auditory perception is approximately logarithmic both in frequency and in amplitude, as illustrated by the short sketch below"
+ ]
+ },
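+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "For instance, here is a minimal sketch (using `scipy.signal.freqz` and matplotlib directly, rather than the plotting helpers from `FilterUtils`) of how a magnitude response can be mapped to real-world frequencies and displayed in dB over a logarithmic frequency axis; the one-pole lowpass used here is purely illustrative.\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# a simple one-pole lowpass, just to have something to plot (illustrative)\n",
+    "b, a = [0.2], [1, -0.8]\n",
+    "\n",
+    "w, H = signal.freqz(b, a, worN=4096)\n",
+    "f = w / (2 * np.pi) * DEFAULT_SF                     # map normalized frequency to Hz\n",
+    "plt.semilogx(f[1:], 20 * np.log10(np.abs(H[1:])))    # dB over a log frequency axis\n",
+    "plt.xlabel('frequency (Hz)')\n",
+    "plt.ylabel('magnitude (dB)')\n",
+    "plt.show()"
+   ]
+  },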
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 1.1. The analog prototypes\n",
+ "\n",
+ "Historically, the development of electronic filters began with the design of passive analog filters, that is, filters using only resistors, capacitors and inductors (aka RLC circuits). Since these filters have no active elements that can provide signal amplification, the power of their output is necessarily smaller than the power of the input (or, in the limit, equal). This also guarantees the stability of these systems although, at least in theory, strong resonances can appear in the frequency response."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 2. The Biquad structure"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 3. Lowpass and highpass\n",
+ "\n",
+ "A second-order lowpass filter section will have a passband with approximately unit gain (0 dB) and a monotonically decreasing transition to the stopband. It is defined by two parameters:\n",
+ "\n",
+    " 1. the cutoff $f_c$, which corresponds to the frequency where the attenuation reaches -3 dB. Note that -3 dB corresponds to a power gain of $1/2$ (an amplitude gain of $1/\\sqrt{2}$), since $10\\log_{10}(1/2) \\approx -3$\n",
+    " 1. the \"quality factor\" $Q$, which determines the steepness of the transition band; by default $Q = 1/\\sqrt{2}$, which yields a Butterworth filter."
+ "When $Q = 1/\\sqrt{2}$, as we said, the lowpass section corresponds to a Butterworth filter, that is, a filter with the maximally steep transition band that can be achieved _without_ incurring a peak resonance at $f_c$. For higher values of $Q$ the magnitude response exhibits a peak around $f_c$, which results in an oscillatory impulse response as shown in the following examples:"
+ "A second-order bandpass filter section will have approximately unit gain (0 dB) in the passband and will decrease monotonically to zero in the stopband. It is defined by two parameters:\n",
+ "\n",
+ " 1. the center frequency $f_c$, where the gain is unitary\n",
+ " 1. the bandwidth $b = (f_+ - f_-)$, where $f_- < f_c < f_+$ are the first frequencies, left and right of $f_c$ where the attenuation reaches $-3$ dB. For the reasons explained above, note that the passband is almost but not exactly symmetric around $f_c$."
+ " frequency_response(b, a, dB=-50, half=True, axis=ax)"
+ ]
+ },
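+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The code for these sections is not shown in this excerpt; as an illustrative sketch (not necessarily the notebook's own implementation), here is a second-order lowpass built from the widely used \"Audio EQ Cookbook\" formulas, with the same argument conventions as the other biquad functions; the name `LP2` is hypothetical. The highpass and bandpass sections are obtained analogously, with different numerator coefficients.\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def LP2(fc, sf, Q=(1 / np.sqrt(2))):\n",
+    "    # second-order lowpass (illustrative sketch, standard cookbook formulas)\n",
+    "    w = 2 * np.pi * fc / sf\n",
+    "    alpha = np.sin(w) / (2 * Q)\n",
+    "    c = np.cos(w)\n",
+    "    b = np.array([(1 - c) / 2, 1 - c, (1 - c) / 2])\n",
+    "    a = np.array([1 + alpha, -2 * c, 1 - alpha])\n",
+    "    return b, a\n",
+    "\n",
+    "# example: cutoff at 1 kHz with the default (Butterworth) Q\n",
+    "cb, ca = LP2(1000, DEFAULT_SF)"
+   ]
+  },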
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A notch filter is the complementary filter to a resonator; its attenuation reaches $-\\infty$ at $f_c$ and its bandwidth is usually kept very small in order to selectively remove only a given frequency."
+    "Shelving filters are used to amplify either the low or the high end of a signal's spectrum. A high shelf, for instance, provides an arbitrary gain for high frequencies and has approximately unit gain in the low end of the spectrum. Shelving filters, high or low, are defined by the following parameters:\n",
+ "\n",
+ " 1. the desired _shelf gain_ in dB\n",
+ " 1. the midpoint frequency $f_c$, which corresponds to the frequency in the transition band where the gain reaches half its value.\n",
+    " 1. the \"quality factor\" $Q$, which determines the steepness of the transition band; as for lowpass filters, the default value $Q = 1/\\sqrt{2}$ yields the steepest transition band while avoiding resonances."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 140,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def LSH(fc, gain, sf, Q=(1/np.sqrt(2))):\n",
+ " \"\"\"Biquad low shelf\"\"\"\n",
+ " w = 2 * np.pi * fc / sf\n",
+ " A = 10 ** (gain / 40) \n",
+ " alpha = np.sin(w) / (2 * Q)\n",
+ " c = np.cos(w)\n",
+ " b = np.array([A * ((A + 1) - (A - 1) * c + 2 * np.sqrt(A) * alpha),\n",
+ " 2 * A * ((A - 1) - (A + 1) * c),\n",
+ " A * ((A + 1) - (A - 1) * c - 2 * np.sqrt(A) * alpha)])\n",
+ " a = np.array([(A + 1) + (A - 1) * c + 2 * np.sqrt(A) * alpha,\n",
+ " -2 * ((A - 1) + (A + 1) * c),\n",
+ " (A + 1) + (A - 1) * c - 2 * np.sqrt(A) * alpha])\n",
+    "A peaking equalizer filter is the fundamental ingredient in multiband parametric equalization. Each filter provides an arbitrary boost or attenuation for a given frequency band centered around a peak frequency and flattens to unit gain elsewhere. The filter is defined by the following parameters:\n",
+ "\n",
+ " 1. the desired gain in dB (which can be negative)\n",
+ " 1. the peak frequency $f_c$, where the desired gain is attained\n",
+    " 1. the bandwidth of the filter, defined as the interval around $f_c$ where the gain is greater (or smaller, for attenuators) than half the desired gain in dB; for instance, if the desired gain is 40 dB, all frequencies within the filter's bandwidth will be boosted by at least 20 dB. Note that the bandwidth is not exactly symmetric around $f_c$."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 152,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def PEQ(fc, bw, gain, sf):\n",
+    "    \"\"\"Biquad peaking equalizer\"\"\"\n",
+ " w = 2 * np.pi * fc / sf\n",
+ " A = 10 ** (gain / 40) \n",
+    "    alpha = np.tan(np.pi * bw / sf)\n",
+ " c = np.cos(w)\n",
+ " b = np.array([1 + alpha * A, -2 * c, 1 - alpha * A])\n",
+ " a = np.array([1 + alpha / A, -2 * c, 1 - alpha / A])\n",
+ " analog_response(cb, ca, DEFAULT_SF, dB=-50, points=10001)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 1. The Fundamental Filters"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 1.1. The notch filter"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+    "The first filter we will see is called the **Notch filter**. It gets rid of one selected frequency in the audio. It is generally used to remove a buzzing noise with constant frequency: for instance, home amplifiers often produce a hum at the frequency of the alternating current (50 or 60 Hz, depending on the region). This noise can be removed by placing a notch filter at that frequency in the amplifier.\n",
+ "\n",
+ "Generally, 2 parameters are used to control a notch filter:\n",
+ "- The cutoff frequency $\\omega_c$ which is going to be eliminated\n",
+ "- The quality factor $Q$ which defines how sharp or broad the filter will be around $\\omega_c$.\n",
+ "\n",
+ "\n",
+ "\n",
+    "A notch can be simply implemented as a second-order filter where a zero is placed on the unit circle at $\\omega_c$ and another zero is placed symmetrically at $-\\omega_c$. A pole is placed next to each zero, inside the unit circle in order to preserve stability, and the distance from the poles to the circle is determined by $Q$: the closer the poles to the circle, the sharper the filter. The transfer function of a notch can therefore be written as\n",
+    "\n",
+    "$$\n",
+    "\tH(z) = G\\,\\frac{1 - 2\\cos(\\omega_c)z^{-1} + z^{-2}}{1 - 2Q\\cos(\\omega_c)z^{-1} + Q^2 z^{-2}}\n",
+    "$$\n",
+    "\n",
+    "Note that the frequency $\\omega_c$ is an angular frequency, taking values between $0$ and $\\pi$. Here $Q$ is the radius of the poles (not the quality factor used in analog filter design) and takes values between 0 and 1: values close to 0 give a very broad notch, while values close to 1 remove only a narrow band around $\\omega_c$. Let's now implement it!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def notch(x, wc, Q=0.99, pole=False):\n",
+ " '''\n",
+ " Notch filter taking an audio signal 'x' and returning its filtered version.\n",
+ " x: the input signal\n",
+ " wc: the cutoff angular frequency\n",
+ " Q: the quality factor\n",
+ " pole: when set to True, plots the filter's pole-zero plot and frequency response. when set to False, return the filtered signal\n",
+ " '''\n",
+    "    # Define the numerator and the denominator of the transfer function:\n",
+    "    # zeros on the unit circle at +/- wc, poles at radius Q at the same angles\n",
+    "    # (one possible choice, following the construction above; G = 1 here,\n",
+    "    # since the output is normalized afterwards)\n",
+    "    b = np.array([1, -2 * np.cos(wc), 1])\n",
+    "    a = np.array([1, -2 * Q * np.cos(wc), Q ** 2])\n",
+    "    \n",
+ " # Plot pole-zero and frequency response when True\n",
+ " if pole==True:\n",
+ " freq_response(b,a)\n",
+ " return zplane(b,a)\n",
+ " \n",
+ " # Create filter from transfer function and normalize\n",
+ " x = lfilter(b, a, x)\n",
+ " return normalize(x) # Normalize"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+    "We can now plot the pole-zero plot and the frequency (and phase) response of the filter. The magnitude response is essentially a flat line except for two sharp dips down to zero (the notches), centered on the cutoff frequency. \n",
+    "\n",
+    "The pole-zero plot has 2 poles and 2 zeros, positioned as explained before. Try to change the values of $\\omega_c$ and $Q$ and observe how the poles and zeros move. Note that the gain factor $G$ is not observable on the pole-zero plot.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "notch(np.ones(1), 1, Q=0.9, pole=True);"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 1.2. The shelf filter"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+    "The second filter we will implement is the shelf filter. This filter exists in 2 versions, low shelf and high shelf. A low shelf filter boosts all the frequencies below the cutoff frequency $\\omega_c$ by a gain factor $G$; the frequencies above $\\omega_c$ are left unchanged. A high shelf does the same the other way around. The filters take 3 arguments: $\\omega_c$, $G$, and the type of shelf (high or low).\n",
+ "\n",
+ "The shelf filter has a slightly different implementation for its high and low variant. The low variant is composed of one pole and one zero that move on the real axis within the unit circle. The pole is always on the right of the zero. When the cutoff frequency $\\omega_c$ increases, both the pole and the zero move to the left, having less influence on the low frequencies and more on the high frequencies. The transfer function can be written as\n",
+    "The high shelf filter is basically the same, except that the pole and the zero have exchanged places, and the zero is now on the right of the pole. The transfer function is obtained by replacing $G$ with $1/G$ in the low shelf filter's transfer function, and then multiplying the result by $G$."
+ " a = np.array((np.sqrt(G) * np.tan(wc/2) + 1,(np.sqrt(G)*np.tan(wc/2)-1)))\n",
+ " \n",
+ " # Plot pole-zero and frequency response when True\n",
+ " if pole==True:\n",
+ " print(b, a)\n",
+ " freq_response(b,a)\n",
+ " return zplane(b,a)\n",
+ " \n",
+ " # Create filter from transfer function and normalize\n",
+ " x = lfilter(b, a, x)\n",
+ " return normalize(x) # Normalize"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We can now observe the effect of $G$ and $\\omega_c$ on the frequency response and the pole-zero plot. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "shelf(np.ones(1), 1, 2, \"low\", pole=True);"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 1.3. The cut filter (or pass filter)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+    "The lowcut and highcut filters (also commonly called highpass and lowpass filters, respectively) are a family of filters that eliminate all frequencies below, resp. above, a given cutoff frequency $\\omega_c$. They are used to get rid of the unwanted frequencies outside of the working range. For instance, when recording a guitar, which typically produces sounds between 80 and 5000 Hz, a producer may want to use lowcut and highcut filters to get rid of all the sound captured outside this range. An increasing lowcut filter is also often used by DJs to create a suspense effect before a bass drop, reducing the music to only the highest pitches before letting all of the frequencies back in at once to create the drop.\n",
+    "\n",
+    "One of the simplest implementations of the cut filters is a first-order design, consisting of one zero and one pole. The zero is stationary and is located on the unit circle at angle 0 for the lowcut filter, and at angle $\\pi$ for the highcut, in order to make sure that the frequencies at the low, resp. high, end of the spectrum are eliminated. The pole is positioned on the real axis to control how fast or slow the frequencies reach 0 in the neighbourhood of the zero, depending on the cutoff frequency $\\omega_c$.\n",
+    "\n",
+    "The transfer function of a highcut (lowpass) filter is written as\n",
+ " pole: when set to True, plots the filter's pole-zero plot and frequency response. when set to False, return the filtered signal\n",
+ " '''\n",
+ " alpha = (1-np.sin(wc))/np.cos(wc)\n",
+ " \n",
+ " # Design the transfer function for both versions of cut filter\n",
+ " if band==\"high\":\n",
+ " b = 0.5 * np.array([1-alpha, 1-alpha])\n",
+ " if band==\"low\":\n",
+ " b = 0.5 * np.array([1+alpha, -1-alpha])\n",
+ " \n",
+ " a = np.array([1, -alpha])\n",
+ " \n",
+ " # Plot pole-zero and frequency response when True\n",
+ " if pole==True:\n",
+ " print(b, a)\n",
+ " freq_response(b,a)\n",
+ " return zplane(b,a)\n",
+ " \n",
+ " # Create filter from transfer function and normalize\n",
+ " x = lfilter(b, a, x)\n",
+ " return normalize(x) # Normalize"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+    "We can now play with the parameters and observe how the transition evolves. We remark that these simple first-order cut filters always transition from a gain of 1 to a gain of 0 (or the other way around); the only thing that changes when moving the cutoff frequency is where, and how fast, the transition happens."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "cut(np.ones(1), .2, \"low\", pole=True);"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 2. Testing with a guitar sample"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 2.1. Notch filter"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Okay, we can now test our filter on a real example of guitar recording! First, let's listen to the recording without EQ."
+ "We can now test our notch filter. Let's say our guitar was connected to an amplifier that produces a buzzing sound at 60Hz. This can be modeled by simply adding a 60Hz sine to the signal."
+    "To get rid of it, we apply the notch filter at 60Hz. We first convert 60Hz into an angular frequency ($\\omega_c = 2\\pi \\cdot 60 / f_s$), and we choose a high quality factor so as to affect the frequencies neighbouring the buzz as little as possible. The frequency of the notch filter is plotted in red below."
+ "plt.title(\"Guitar after buzz notched out\")\n",
+ "plt.ylabel('Frequency [Hz]')\n",
+ "plt.xlabel('Time [sec]')\n",
+ "plt.show()\n",
+ "\n",
+ "Audio(guitar_notch, rate=fs)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The buzzing sound is gone!"
+ ]
+ },
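+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The code for this experiment is not shown in this excerpt; as a rough sketch of the idea, with `guitar` and `fs` standing in for the recording and its sampling rate loaded earlier (synthetic placeholders are used here so that the cell runs on its own):\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# rough sketch of the buzz experiment; 'guitar' and 'fs' stand in for the\n",
+    "# actual recording and its sampling rate used in the notebook\n",
+    "fs = 16000\n",
+    "guitar = np.random.randn(3 * fs) * 0.1                          # placeholder for the clean recording\n",
+    "n = np.arange(len(guitar))\n",
+    "guitar_buzz = guitar + 0.05 * np.sin(2 * np.pi * 60 * n / fs)   # add a 60 Hz hum\n",
+    "wc = 2 * np.pi * 60 / fs                                        # 60 Hz as an angular frequency\n",
+    "guitar_notch = notch(guitar_buzz, wc, Q=0.99)                   # notch out the hum"
+   ]
+  },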
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 2.2 Shelf filters"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+    "Now let's move to the shelf filters. Let's assume we only want to boost the bass-to-mid range of the guitar, to give it a warmer sound. The cutoff frequency of the shelf filter is plotted in red below. Note that when filtering a sound with a shelf filter all frequencies change, since the output is renormalized so that the global maximum amplitude is 1: the \"untouched\" frequencies are scaled down while the \"amplified\" frequencies end up relatively louder."
+ "plt.title(\"Frequencies of the guitar before and after shelving\")\n",
+ "plt.show()\n",
+ "\n",
+ "Audio(guitar_shelf, rate=fs)"
+ ]
+ },
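+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Again, the actual filtering call is not shown here; schematically, with the same assumed `guitar` and `fs`, a low shelf boost of the bass-to-mid range could look like this (the 500 Hz cutoff and the gain $G = 2$ are illustrative values only):\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# schematic use of the low shelf on the assumed 'guitar' array\n",
+    "wc = 2 * np.pi * 500 / fs            # illustrative cutoff, as an angular frequency\n",
+    "guitar_shelf = shelf(guitar, wc, 2, 'low')"
+   ]
+  },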
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+    "Finally, let's use the cut filters. A guitar typically spans from 80 to 5000Hz. We can then get rid of all the other frequencies by applying a lowcut and a highcut filter in sequence. In the plot below, the two cutoff frequencies of the lowcut and highcut filters are plotted in red. Together, these two filters form a **bandpass filter**, since they only keep one band of frequencies. The result may not be strikingly different from the original recording, since the removed frequencies are not produced by the guitar itself but mostly come from ambient noise and artifacts of the amplifier. However, when mixing the guitar with other instruments, keeping these unwanted (although subtle) out-of-range frequencies may quickly lead to a muddy mix."
+ "plt.title(\"Guitar before and after bandpass\")\n",
+ "plt.legend(loc=\"upper left\")\n",
+ "plt.show()\n",
+ "\n",
+ "Audio(guitar_cut, rate=fs)"
+ ]
+ },
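+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Schematically, with the same assumed `guitar` and `fs`, the band-limiting chain described above is just the two cut filters applied in sequence:\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# schematic band-limiting chain on the assumed 'guitar' array\n",
+    "wc_low = 2 * np.pi * 80 / fs       # lowcut: remove everything below ~80 Hz\n",
+    "wc_high = 2 * np.pi * 5000 / fs    # highcut: remove everything above ~5 kHz\n",
+    "guitar_cut = cut(cut(guitar, wc_low, 'low'), wc_high, 'high')"
+   ]
+  },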
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+    "That's it, we now have the basic ingredients of a simple EQ for music production. By combining these effects on several tracks, it is possible to create a clean mix. Have fun trying!"