Category Archives: JavaScript

Comb filters in acoustics

In this post, I’ll use a feedforward comb filter to explain the interference between two sources as observed at a specific location.

The comb filter shows the frequency response of the system. If two sources emit the same signal in space, then at a given location certain frequencies will be attenuated and others amplified, according to the frequency response of the comb filter.
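The magnitude response of such a feedforward comb filter is easy to compute directly. Below is a minimal sketch (a standalone illustration, not the simulation's code), assuming a hypothetical 1 ms path delay between the two signals:

```python
import math

def comb_magnitude(f, delay_s, a=1.0):
    """Magnitude response |H(f)| of a feedforward comb filter
    y(t) = x(t) + a * x(t - delay_s):
    |H(f)| = sqrt(1 + a^2 + 2*a*cos(2*pi*f*delay_s))."""
    return math.sqrt(1 + a * a + 2 * a * math.cos(2 * math.pi * f * delay_s))

# With an assumed 1 ms delay, peaks fall at multiples of 1 kHz
# and notches at odd multiples of 500 Hz.
delay = 0.001
print(comb_magnitude(0, delay))     # 2.0 (constructive)
print(comb_magnitude(500, delay))   # 0.0 (destructive)
print(comb_magnitude(1000, delay))  # 2.0 (constructive)
```

The regularly spaced peaks and notches are what give the filter its comb-like shape.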

The simulation

The red dot represents an ideal microphone in space. Click anywhere inside the simulation to move the sources and the microphone around (you need to click in three separate locations). You can adjust the frequency of the sources using the slider to the right.

How it works

The simulation is done using WebGL shaders, which makes it run really smoothly. The contributions of the two sources are summed for each pixel in each frame, which gives a nice visual representation of their interference in a 2D plane.

The simulation has the following properties:

  • The sources have identical phases and frequencies.
  • Time is slowed down: 2000 seconds in the simulation represent 1 second of real time.
  • The size of the box is 1 meter by 1 meter.
  • The sound sources are modeled as cylindrical waves, as per $$\frac{A}{\sqrt{r}}\mathrm{cos}(kr\pm\omega t)$$, with $$A = 1.0$$ for both sources.
  • The initial delay from the nearer source is left out of the comb filter diagram, but it could be added without changing the magnitude of the response.
  • The frequency response for a point is calculated directly from the frequency response of the depicted comb filter.
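Given these properties, the pressure at the microphone is simply the sum of the two cylindrical waves. The sketch below (a standalone illustration, not the simulation's WebGL code, with an assumed frequency and speed of sound) checks the numerically observed peak pressure against the closed-form interference amplitude:

```python
import math

A = 1.0      # source amplitude, as in the simulation
c = 343.0    # speed of sound in air, m/s (assumed)
f = 1000.0   # source frequency, Hz (assumed example)
k = 2 * math.pi * f / c
omega = 2 * math.pi * f

def summed_amplitude(r1, r2):
    """Peak pressure of two in-phase cylindrical sources
    A/sqrt(r)*cos(k*r - omega*t), observed at distances r1 and r2,
    found numerically by sampling one full period."""
    peak = 0.0
    for i in range(10000):
        t = i / 10000 / f
        p = (A / math.sqrt(r1) * math.cos(k * r1 - omega * t)
             + A / math.sqrt(r2) * math.cos(k * r2 - omega * t))
        peak = max(peak, abs(p))
    return peak

def analytic_amplitude(r1, r2):
    """Closed-form amplitude of the sum: depends only on the two
    per-distance amplitudes and the phase difference k*(r1 - r2)."""
    b1, b2 = A / math.sqrt(r1), A / math.sqrt(r2)
    return math.sqrt(b1 * b1 + b2 * b2 + 2 * b1 * b2 * math.cos(k * (r1 - r2)))
```

A path difference of half a wavelength makes the cosine term negative one, giving the deepest notch the comb filter predicts for that point.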

Room modes explained

Note: you need a modern browser that supports WebGL (I recommend Chrome, as the simulation works best on Chrome) to read this post. This post also assumes you’re on a desktop or laptop. Mobile devices (iPad etc) have poor support for WebGL at the moment.

Why are room modes bad?

Room modes accentuate specific frequencies. Here are some examples of when you might have stumbled upon them:

  • When listening to music using your high quality audio equipment, some specific bass notes always tend to sound much louder than the others.
  • The sound level on low frequencies seems to vary a lot depending on where in the room you are located.
  • When the neighbor is listening to music, and you always hear some bass notes louder than the rest of the music, it might be caused by room modes in your or your neighbor's apartment.
  • A large vehicle drives by your apartment, and you can hear how the sound resonates at a specific frequency. This is also often caused by room modes.
  • The low frequency sounds from your washing machine get amplified at certain rotation speeds.

The easiest to understand, and perhaps most obvious, disadvantage of room modes is in sound reproduction. It should be noted that room modes can cause numerous other problems, not directly related to high fidelity audio, in residential apartments. They might amplify sounds caused by traffic. They might sometimes amplify the sounds caused by HVAC equipment (ventilation, pumps, compressors). They might also cause some low frequencies to travel very efficiently from the neighbor's apartment to yours in a residential building (due to coupling), even if the structures themselves have good sound-insulating properties.

What are room modes?

A sound wave can be visualized, literally, as a wave. In the simulation above, you will see what happens when a sound source emits an impulse in a room with two walls (the sound is allowed to freely escape in the free directions). The plane represents a cut plane, i.e. the sound pressure at a certain height in the room. The deflection of the plane represents sound pressure. You can specify how many times the sound reflects from the walls using the controls (“open controls – reflections”).

Try moving the source around a bit, to get a feeling of how the simulation works. You can do this by adjusting the “position” slider in the control panel. Press “reset” to restart the simulation.

In this post, I will explain what room modes (standing waves) are. Just follow the steps below. If you want to, you can open the simulation in a new window.

  • Try setting the reflection count down to 1, to get a clear picture of what happens when the sound reflects from the walls.
  • Restart the simulation (“reset“).
  • Enable “show reflections”. This shows virtual sound sources, which are another way to think of reflections. It might be a bit confusing at first, but it will make some things clearer later on. Take a moment to see how the virtual sources combine to form a single reflection (remember to reset the view!).
  • Change the signal type to “SIN“, which represents a pure sound at a specific frequency.
  • Set the reflection count to 0, to get a clearer view of what’s happening. The sound this type of curve represents is very close to what you hear when you whistle. A sine wave with a long wavelength is perceived as a low note, while a short wavelength is perceived as a high note.
  • Set the sound position to -10 for the next step. Remember to keep the reflection count at 0.
  • Try playing around with the “frequency scale” setting (still without the reflections!). When the scale is set to 1, the length of the wave (the distance between two “peaks”) will be the same as the distance between the walls. When the scale is set to 2, two wavelengths will fit into the room. When the scale is set to 3, three wavelengths, and so on.
  • Set the frequency scale to 2.
  • Set the reflection count to 1.
  • Reset to get a feeling of what is happening. Remember that you can also close the controls.
  • If you’re confused at this point, try setting the signal type to PULSE, and then change it back to SIN. This should make things clearer.
  • At this point, what you’re seeing is constructive and destructive interference.
  • Try adding more reflections, this will make the effect even clearer.
  • This is what a room mode is: the same phenomenon, just in more complicated rooms with additional walls and details. Note that the mode can be heard most clearly in positions where the sound pressure varies the most.
  • When you now change the frequency scale slider to something other than a multiple of 0.5, you'll see that the room modes disappear (completely, if you're far from a multiple of 0.5). They only occur close to specific frequencies. At these frequencies, you might sometimes hear a distinct ringing sound in the room.
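The multiples-of-0.5 rule in the steps above corresponds to the familiar formula for axial modes between two parallel walls: a mode forms whenever a whole number of half-wavelengths fits between them, i.e. at f_n = n·c/(2L). A minimal sketch (the 4 m wall distance is an assumed example, not the simulation's geometry):

```python
def axial_mode_frequencies(length_m, n_modes, c=343.0):
    """Axial (one-dimensional) mode frequencies between two parallel
    walls a distance length_m apart: f_n = n * c / (2 * length_m),
    i.e. n half-wavelengths fit exactly between the walls."""
    return [n * c / (2 * length_m) for n in range(1, n_modes + 1)]

# For walls 4 m apart (an assumed, typical room dimension):
print(axial_mode_frequencies(4.0, 3))  # [42.875, 85.75, 128.625]
```

This is why the modes in the simulation appear only near multiples of 0.5 on the frequency scale: those are exactly the frequencies where the reflections reinforce each other.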


The good news is that annoying room modes can be attenuated. There are multiple ways to do it. In the case of hifi equipment, some modern amplifiers attempt to correct room modes using digital signal processing. But these digital methods won't sound nearly as good as the room would if you fixed the acoustics of the room itself.

Listening to vibrations

Update: here’s a small tune I made to demonstrate what the resulting sample would sound like as an instrument:

I've always been fascinated by what things, especially sound waves, look like in slow motion. By examining something in slow motion, even complex phenomena somehow start to seem intelligible.

In this post, I'm examining the behavior of a very simple instrument in slow motion. The physics are very much the same as when examining vibrations in structures, but instruments are (very much) nicer to listen to. The principles shown here are naturally applicable to larger-scale phenomena that occur in structures of different kinds (buildings, bridges, etc.).

The instrument

The instrument will consist of a cantilever beam made out of steel, with a length of 20 cm. The cross-sectional dimensions of the beam are 2 cm x 2 cm. The instrument will thus resemble something like a simple vibraphone / glockenspiel / large music box.

We'll use the same principle to get the sound of the beam as an electric guitar uses to pick up the sound of a string, by placing a (virtual) pickup three quarters of the beam length from the tip (or one quarter from the fixed end, whichever suits your fancy).

What will happen if we hit the very tip of the beam, with a very sharp, impulse-like force (even sharper and quicker than what’s possible with the hammer in the picture)?

The result

As noted above, the pickup works on the same principle as an electric guitar pickup: it transforms the deflection at a certain point on the string (or in this case, the beam) directly into sound. The resulting sound for the beam is as follows:

To see what the beam looks like directly after being hit by the sharp impulse, examine the resulting deformation of the beam as a function of time here (click on the blue area to load the content). The sound is generated from the movement of the circle in the direction of the y-axis.

Some things to notice when watching the time lapse of the deformation:

  • There’s a transient part at the very beginning (slightly visible in the 0.0015 s time span) which attenuates very quickly. This is the part of the response where standing waves (resonances) have not yet formed.
  • The two lowest natural frequencies (resonances) can be seen clearly; one at 464 Hz and another at 2910 Hz. The third natural frequency, 8150 Hz, can be seen at the very beginning of the response.
  • For this setup, the higher frequencies attenuate quickly. In the end, only the lowest natural frequency, 464 Hz, can be heard. This gives the distinct pitch you can hear in the sound sample above.
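These frequencies are consistent with the classic Euler-Bernoulli cantilever formula. The sketch below uses assumed nominal steel properties (E = 200 GPa, ρ = 7850 kg/m³), which give a somewhat lower first frequency than the 464 Hz found by the FEM model here, but it reproduces the ratios between the modes almost exactly:

```python
import math

# Euler-Bernoulli cantilever natural frequencies:
#   f_n = (lam_n^2 / (2*pi)) * sqrt(E*I / (rho*A*L^4))
# where lam_n are roots of 1 + cos(lam)*cosh(lam) = 0.
LAM = [1.8751, 4.6941, 7.8548]

def cantilever_frequencies(E, rho, L, b, h):
    A = b * h             # cross-sectional area, m^2
    I = b * h ** 3 / 12   # second moment of area, m^4
    return [(lam ** 2 / (2 * math.pi)) * math.sqrt(E * I / (rho * A * L ** 4))
            for lam in LAM]

# 20 cm steel beam, 2 cm x 2 cm cross section, assumed nominal steel:
freqs = cantilever_frequencies(200e9, 7850.0, 0.20, 0.02, 0.02)
```

With these nominal values the first mode lands around 408 Hz; the reported 464 Hz suggests slightly different material parameters in the model. The mode ratios, however, are fixed by the theory: f2/f1 ≈ 6.27 and f3/f1 ≈ 17.5, matching 2910 Hz and 8150 Hz against 464 Hz very closely.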

The theory

I'm using finite element analysis to examine the behavior of the beam. The theory is the same as in the previous post, but I've also calculated the mass matrix for the beam. I've used 25 elements for the beam, thus solving a system with 50 degrees of freedom. Note that I'm using Euler-Bernoulli beam theory, so some simplifications are made.

The damping was done using Rayleigh damping. For those familiar with this type of damping, the value of $$\alpha$$ was 1e-05 and the value of $$\beta$$ was 1.5e-06. I chose these values simply by listening to which damping values sounded best; not much thought was put into them. They seem surprisingly small compared to other values I've encountered online. Maybe someone more acquainted with Rayleigh damping could offer an opinion on this?
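For reference, Rayleigh damping $$C = \alpha M + \beta K$$ gives each mode the damping ratio $$\zeta_n = \alpha/(2\omega_n) + \beta\omega_n/2$$. A quick sketch with the values above (applied to the natural frequencies from this post) shows why the higher modes die out faster:

```python
import math

def modal_damping_ratio(f_hz, alpha=1e-5, beta=1.5e-6):
    """Modal damping ratio for Rayleigh damping C = alpha*M + beta*K:
    zeta_n = alpha / (2*omega_n) + beta * omega_n / 2."""
    omega = 2 * math.pi * f_hz
    return alpha / (2 * omega) + beta * omega / 2

# Damping ratios at the three natural frequencies from the post.
# The stiffness-proportional (beta) term dominates, so zeta grows
# with frequency: higher modes are damped more heavily.
for f in (464, 2910, 8150):
    print(f, modal_damping_ratio(f))
```

With these small values the first mode gets a damping ratio of only about 0.2 %, which is consistent with the long, clearly pitched ringing in the sound sample.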

I used the Newmark algorithm for time-stepping, with a value of 0.5 for both $$\beta_1$$ and $$\beta_2$$.
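In the common $$(\gamma, \beta)$$ notation this choice corresponds to the average-acceleration variant ($$\gamma = 0.5$$, $$\beta = 0.25$$), assuming $$\beta_1$$ plays the role of $$\gamma$$ and $$\beta_2$$ enters the displacement update as $$\Delta t^2/2\,[(1-\beta_2)a_n + \beta_2 a_{n+1}]$$. Here is a single-degree-of-freedom sketch of that scheme (an illustration, not the actual beam code):

```python
import math

def newmark_sdof(m, k, u0, v0, dt, n_steps, gamma=0.5, beta=0.25):
    """Newmark time stepping for an undamped single-DOF oscillator
    m*u'' + k*u = 0, average-acceleration variant (gamma=1/2, beta=1/4)."""
    u, v = u0, v0
    a = -k * u / m  # initial acceleration from the equation of motion
    for _ in range(n_steps):
        # predictors from the known state
        u_pred = u + dt * v + dt * dt * (0.5 - beta) * a
        v_pred = v + dt * (1 - gamma) * a
        # enforce m*a_new + k*(u_pred + beta*dt^2*a_new) = 0
        a_new = -k * u_pred / (m + k * beta * dt * dt)
        u = u_pred + beta * dt * dt * a_new
        v = v_pred + gamma * dt * a_new
        a = a_new
    return u, v

# Undamped oscillator with omega = 1 rad/s; exact solution u(t) = cos(t).
u, v = newmark_sdof(m=1.0, k=1.0, u0=1.0, v0=0.0, dt=0.001, n_steps=1000)
print(u)  # very close to cos(1.0)
```

This variant is unconditionally stable and introduces no numerical damping, so all energy loss in the result comes from the Rayleigh damping itself.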

The methods

I used Python for the calculations. Python is ideal for such an endeavor, and free! SymPy provided me with the tools I needed to solve the necessary equations, while NumPy did the calculations for me.

I saved the resulting calculations as a binary file containing only the necessary information. For example, only one byte per element describes the deformation at each time step in the animation seen above, as 8 bits is more than enough in this case.
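The idea behind the one-byte-per-element packing can be sketched as a simple min/max quantization (an illustration of the principle, not the actual file format used):

```python
def quantize_u8(samples):
    """Quantize a list of floats to one byte each, keeping the offset
    and scale needed to reconstruct approximate values later."""
    lo, hi = min(samples), max(samples)
    scale = (hi - lo) / 255 if hi > lo else 1.0
    return bytes(round((s - lo) / scale) for s in samples), lo, scale

def dequantize_u8(data, lo, scale):
    """Reconstruct approximate floats; error is at most scale/2."""
    return [lo + b * scale for b in data]

data, lo, scale = quantize_u8([-1.0, -0.25, 0.0, 0.6, 1.0])
restored = dequantize_u8(data, lo, scale)
```

Each deformation value is thus stored in a single byte, at the cost of a reconstruction error of at most half a quantization step, which is invisible in the animation.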

I wrote a script that loads these binary files (three of which can be seen above). The deformations can be examined as a function of time. I'm quite happy with how it turned out, even though some parts of the code are a bit hacky. You can see the JavaScript code here. Note that this code only displays the data from the binary files; the binary files themselves were calculated using Python.

Thoughts for the future

Perhaps the parameters could be tweaked a little, so the sound would be closer to that of a music box (which is the instrument I think the model most closely resembles). It would also be nice to try the Timoshenko beam model for comparison. All in all, this post was a nice exercise in programming in Python and JavaScript, and in doing finite element analysis in the time domain.