Using beam tracing to calculate reflections in JavaScript

I have been researching beam tracing for a project of mine for a while now. Beam tracing is a method for calculating reflection paths. I won’t go into any details of how beam tracing works, as this is something you can find on Google in case you’re not familiar with it.

My first attempt at beam tracing was made in Python, about a year ago. I made an algorithm that worked in 3D, but it became exceedingly complicated as I worked on it (mostly due to occlusion and clipping). As it turned out, I was overcomplicating things. The paper “Accelerated beam tracing algorithm” by Laine et al. served as an excellent introduction to how basic beam tracing should be done: as simply as possible. To get a better understanding of the paper, I decided to try to implement some of the very basics in 2D using JavaScript.

The result

Open simulation

Click to place the source; the specular reflections are updated automatically as you move the mouse around. Note that reflections are only calculated up to a certain limit (theoretically, there is an infinite number of reflections).

Firstly, it should be noted that I only utilised the very basic ideas presented in the paper (beam trees and accelerated ray tracing) in my implementation. I only wanted to spend a few days on this, which means that there are probably a lot of things that could (and should) be optimised. I spent most of the time hunting down bugs, which at times were very hard to find. I also pushed the MIT-licensed source code to Bitbucket, in the hope that someone might think of something useful to do with 2D beam tracing in JavaScript!
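
To give a concrete feel for the geometry involved, here is a minimal sketch of the mirroring step at the heart of specular reflection calculations: reflecting the source across the line containing a wall segment, which yields the virtual source that a beam-tree node represents. The names and structure below are my own illustration, not taken from the actual implementation.

```javascript
// Minimal sketch (not the actual implementation): mirror a 2D source point
// across the infinite line through a wall segment. The mirrored point is the
// virtual source used for a first-order specular reflection from that wall.
function mirrorPoint(source, wallStart, wallEnd) {
  const dx = wallEnd.x - wallStart.x;
  const dy = wallEnd.y - wallStart.y;
  const lenSq = dx * dx + dy * dy;

  // Project the source onto the line through the wall segment
  const t = ((source.x - wallStart.x) * dx + (source.y - wallStart.y) * dy) / lenSq;
  const projX = wallStart.x + t * dx;
  const projY = wallStart.y + t * dy;

  // Reflect the source through its projection
  return { x: 2 * projX - source.x, y: 2 * projY - source.y };
}

// Example: a source at (1, 2) mirrored across a wall lying on the x-axis
console.log(mirrorPoint({ x: 1, y: 2 }, { x: 0, y: 0 }, { x: 5, y: 0 })); // { x: 1, y: -2 }
```

A full beam tracer additionally clips each beam against the visible part of the wall and recurses, building a tree of virtual sources, but this mirroring is the core geometric operation.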

Airborne sound insulation

I thought it would be cool to demonstrate some of the very basics behind airborne sound insulation using a finite element simulation. This is also something I’m planning on utilising in real projects (i.e. it’s not just for playing around with).

Some background

Let’s assume that we have two rooms with nothing but a small piece of wall between them. The rooms are perfectly separated from each other; they’re connected by nothing but this one piece of wall. This means that the situation is analogous to laboratory measurements (i.e. when measuring the sound reduction index R, or the single-number quantity Rw). Situations in the field are different from the situation described here, as flanking transmissions are not taken into account in laboratory measurements.
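
For reference, the sound reduction index mentioned above describes how much of the incident sound power gets through the wall:

$$R = 10\,\log_{10}\frac{W_{\mathrm{incident}}}{W_{\mathrm{transmitted}}}\ \mathrm{dB},$$

and $$R_w$$ is the single-number rating obtained by comparing the measured $$R$$ spectrum against a standardised reference curve.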

Let’s also assume that this simple, homogeneous piece of wall is perfectly sealed. It’s modeled in 2D as simply supported (rotation is allowed at the boundaries).

I’m not going to go any deeper into the material parameters I used here, but here’s some background on the theory:

  • The simulation is done using a Timoshenko beam model for the piece of wall (with reduced integration used to avoid shear locking).
  • I used linear shape functions for both the fluid and beam domains.
  • The bottom boundary is completely absorbing.
  • There are two completely separate fluid domains, which are both coupled to the piece of wall in the middle.
  • I did the simulation in Python, and exported the results to a binary file which is read by the JavaScript viewer below (a rough sketch of such a reader follows this list).
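
The actual file layout isn’t documented here, so the following is only a rough sketch of how a JavaScript viewer could read simulation frames exported from Python as raw 32-bit floats; the real viewer’s format and code may well differ.

```javascript
// Rough sketch only; the actual file layout used by the viewer may differ.
// Assume the Python side wrote raw float32 values: one full pressure field
// (numNodes values) per time step, stored back to back.
async function loadFrames(url, numNodes) {
  const response = await fetch(url);
  const buffer = await response.arrayBuffer();
  const data = new Float32Array(buffer); // float32 view of the whole file

  // Split the flat array into one subarray per time step (no copying)
  const frames = [];
  const numFrames = Math.floor(data.length / numNodes);
  for (let i = 0; i < numFrames; i++) {
    frames.push(data.subarray(i * numNodes, (i + 1) * numNodes));
  }
  return frames;
}
```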

Simulation 1

http://kaistale.com/blog/131125coupling/viewer.html

The simulation is a bit heavier than the ones I’ve usually posted, as I had to use more elements to get a nice-looking result. Here’s a very short summary of what’s happening:

  1. A sound wave travels in the lower room.
  2. The sound wave arrives at the wall.
  3. The sound wave consists of positive pressure (compared to the surroundings and the other side of the wall), and as such exerts forces on the wall.
  4. The wall will deform as a consequence of the force. Note that the deformations of the wall are exaggerated in the visualization!
  5. As the wall deforms, it moves the air above it, creating new sound waves.

Simulation 2

I also made the following simulation, which is more complex and more difficult to understand, but looks way cooler. I especially like that you can see the sound moving faster in the wall than in the air when the first wave hits the wall (compare the sound waves below the wall to the sound waves above the wall).

Note: I switched the colors; here, red represents positive sound pressure. Also, the wall is clamped (rotation isn’t allowed at the ends of the wall).

http://kaistale.com/blog/131126coupling/viewer.html

Epilogue

The most important thing to note is that when you’re hearing sound through the wall, as in the simulated situations, what you’re hearing are the deformations of the wall. It’s the wall that radiates sound into the room. If the wall didn’t move at all as a consequence of the pressure waves hitting it, you wouldn’t hear anything. This is why heavy structures, such as concrete, insulate sound so well.
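
This is roughly captured by the so-called mass law: for a single, homogeneous wall, doubling the mass per unit area buys about 6 dB of additional sound reduction. One commonly quoted (field-incidence) approximation, valid well below the wall’s critical frequency, is

$$R \approx 20\,\log_{10}\!\left(m'' f\right) - 47\ \mathrm{dB},$$

where $$m''$$ is the mass per unit area in kg/m² and $$f$$ is the frequency in Hz.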

The situation becomes a whole lot more complicated with separated structures (i.e. light structures or drywall), and when flanking is taken into account. Measuring the sound reduction index with a diffuse sound field is another interesting task I’ll definitely have to do. I’ll most likely return to these topics in later posts. 🙂

Comb filters in acoustics

In this post, I’ll use a feedforward comb filter to explain interference between two sources at some specific location.

The comb filter shows the frequency response of the system. If we have two sources emitting the same signal in space, then at a given location they will attenuate and amplify certain frequencies according to the frequency response of the comb filter.
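
A feedforward comb filter is nothing more than a signal summed with a delayed, scaled copy of itself. In this setting the two copies are the contributions from the two sources, and the relevant delay is the difference in their travel times to the listening point:

$$y(t) = a_1\,x(t - t_1) + a_2\,x(t - t_2), \qquad |H(\omega)| = \left|\,a_1 + a_2\,e^{-i\omega(t_2 - t_1)}\right|,$$

where $$t_1, t_2$$ are the propagation delays and $$a_1, a_2$$ the corresponding amplitudes. The magnitude response has the characteristic comb shape: peaks where the two copies arrive in phase, and notches where they partially cancel.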

The simulation

The red dot represents an ideal microphone in space. Click anywhere inside the simulation to move the sources and the microphone around (you need to click in three separate locations). You can adjust the frequency of the sources using the slider to the right.

http://kaistale.com/blog/131004comb/comb.html

How it works

The simulation is done using WebGL shaders, which makes it run really smoothly. The two sources are summed for each pixel in each frame, which gives a nice visual representation of their interference in a 2D plane (a JavaScript sketch of this summation follows the list below).

The simulation has the following properties:

  • The sources have identical phases and frequencies.
  • 2000 seconds in the simulation represent 1 second of real time, i.e. time is slowed down by a factor of 2000.
  • The size of the box is 1 meter by 1 meter.
  • The sound sources are modeled as cylindrical waves, as per $$\frac{A}{\sqrt{r}}\cos(kr \pm \omega t)$$, with $$A = 1.0$$ for both sources.
  • The initial delay from the nearer source is left out of the diagram of the comb filter, but it could be added without any change in the magnitude of the response.
  • The frequency response for a point is calculated directly from the frequency response of the depicted comb filter.
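
Putting the pieces above together, here is a rough JavaScript sketch of both calculations: the per-point summation of the two cylindrical waves (which the shader does per pixel), and the magnitude of the resulting comb filter at the microphone. The names, the 343 m/s speed of sound and the example coordinates are illustrative assumptions, not values taken from the actual simulation code.

```javascript
// Illustrative sketch only, not the actual simulation code.
const SPEED_OF_SOUND = 343; // m/s in air; assumed example value

function distance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Instantaneous pressure at a point: the sum of the two cylindrical waves
// A / sqrt(r) * cos(k*r - omega*t). The WebGL shader evaluates this per pixel.
function pressureAt(point, source1, source2, freqHz, t, A = 1.0) {
  const omega = 2 * Math.PI * freqHz;
  const k = omega / SPEED_OF_SOUND;
  const r1 = distance(source1, point);
  const r2 = distance(source2, point);
  return (A / Math.sqrt(r1)) * Math.cos(k * r1 - omega * t) +
         (A / Math.sqrt(r2)) * Math.cos(k * r2 - omega * t);
}

// Magnitude of the feedforward comb filter at the microphone:
// |H(f)| = |a1 + a2 * exp(-i * 2*pi*f * (t2 - t1))|
function combMagnitude(freqHz, source1, source2, mic, A = 1.0) {
  const r1 = distance(source1, mic);
  const r2 = distance(source2, mic);
  const a1 = A / Math.sqrt(r1);
  const a2 = A / Math.sqrt(r2);
  const phase = 2 * Math.PI * freqHz * (r2 - r1) / SPEED_OF_SOUND;
  const re = a1 + a2 * Math.cos(phase);
  const im = -a2 * Math.sin(phase);
  return Math.hypot(re, im);
}

// Example: two sources inside the 1 m by 1 m box and a microphone off-centre
const s1 = { x: 0.2, y: 0.5 };
const s2 = { x: 0.8, y: 0.5 };
const mic = { x: 0.4, y: 0.9 };
for (const f of [200, 400, 800, 1600]) {
  console.log(`${f} Hz:`, combMagnitude(f, s1, s2, mic).toFixed(3));
}
```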

Room modes explained

Note: you need a modern browser that supports WebGL (I recommend Chrome, as the simulation works best there) to read this post. This post also assumes you’re on a desktop or laptop. Mobile devices (iPads etc.) have poor support for WebGL at the moment.

Why are room modes bad?

Room modes accentuate specific frequencies. Here are some examples of when you might have stumbled upon them:

  • When listening to music on your high-quality audio equipment, some specific bass notes always tend to sound much louder than others.
  • The sound level at low frequencies seems to vary a lot depending on where in the room you are located.
  • When the neighbor is listening to music and you always hear some bass notes louder than the rest of the music, this might be caused by room modes in your or your neighbor’s apartment.
  • A large vehicle drives by your apartment, and you can hear how the sound resonates at a specific frequency. This is also often caused by room modes.
  • The low-frequency sounds from your washing machine get amplified at certain rotation speeds.

The easiest to understand, and perhaps the most obvious, disadvantage of room modes is in sound reproduction. It should be noted that room modes can cause numerous other problems, not directly related to high-fidelity audio, in residential apartments. They might amplify sounds caused by traffic. They might sometimes amplify the sounds caused by HVAC equipment (ventilation, pumps, compressors). They might also cause some low frequencies to travel very efficiently from the neighbor’s apartment to your apartment in a residential building (due to coupling), even if the structures themselves have good sound insulating properties.

What are room modes?

http://kaistale.com/blog/130928roommodes/index.html

A sound wave can be visualized, literally, as a wave. In the simulation above, you will see what happens when a sound source emits an impulse in a room with two walls (the sound is allowed to escape freely in the open directions). The plane represents a cut plane at a certain height in the room, and its deflection represents the sound pressure there. You can specify how many times the sound reflects from the walls using the controls (“open controls – reflections”).

Try moving the source around a bit, to get a feeling of how the simulation works. You can do this by adjusting the “position” slider in the control panel. Press “reset” to restart the simulation.

In this post, I will explain what room modes (standing waves) are. Just follow the steps below. If you want to, you can open the simulation in a new window.

  • Try setting the reflection count down to 1, to get a clear picture of what happens when the sound reflects from the walls.
  • Restart the simulation (“reset“).
  • Enable “show reflections”. This shows virtual sound sources, which are another way to think about reflections. It might be a bit confusing at first, but you’ll see that it makes some things clearer later on. Take a moment to see how the virtual sources combine to form a single reflection (remember to reset the view!).
  • Change the signal type to “SIN“, which represents a pure sound at a specific frequency.
  • Set the reflection count to 0, to get a clearer view of what’s happening. The sound this type of curve represents is very close to what you hear when you whistle: a sine wave with a long wavelength is perceived as a low note, while one with a short wavelength is perceived as a high note.
  • Set the sound position to -10 for the next step. Remember to keep the reflection count at 0.
  • Try playing around with the “frequency scale” setting (still without the reflections!). When the scale is set to 1, the length of the wave (the distance between two “peaks”) will be the same as the distance between the walls. When the scale is set to 2, two wavelengths will fit into the room. When the scale is set to 3, three wavelengths, and so on.
  • Set the frequency scale to 2.
  • Set the reflection count to 1.
  • Reset to get a feeling of what is happening. Remember that you can also close the controls.
  • If you’re confused at this point, try setting the signal type to PULSE, and then change it back to SIN. This should make things clearer.
  • At this point, what you’re seeing is constructive and destructive interference.
  • Try adding more reflections; this will make the effect even clearer.
  • This is what a room mode is. It’s exactly this, but in more complicated rooms with additional walls and details. Note that the mode can be heard clearly in positions where the sound pressure varies the most.
  • When you now change the frequency scale slider to something other than a multiple of 0.5, you’ll see that the room modes disappear (completely, if you’re far away from a multiple of 0.5). They only occur close to specific frequencies; at these frequencies, you might sometimes hear a distinct ringing sound in the room (the sketch below shows how these frequencies can be calculated).
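
For two parallel walls a distance $$L$$ apart, these special frequencies are exactly those at which an integer number of half wavelengths fits between the walls, i.e. $$f_n = \frac{nc}{2L}$$; in the simulation’s terms, frequency scales of 0.5, 1, 1.5, ... correspond to $$n = 1, 2, 3, \ldots$$. Here is a small sketch of that calculation, using an assumed 343 m/s speed of sound and a 1 m wall spacing purely as example values:

```javascript
// Sketch: axial room mode frequencies between two parallel walls.
// A mode occurs whenever an integer number of half wavelengths fits exactly
// between the walls: f_n = n * c / (2 * L).
const SPEED_OF_SOUND = 343; // m/s in air; assumed example value
const WALL_DISTANCE = 1.0;  // m; assumed example value, not the simulation's actual size

function axialModeFrequencies(count, L = WALL_DISTANCE, c = SPEED_OF_SOUND) {
  const modes = [];
  for (let n = 1; n <= count; n++) {
    modes.push((n * c) / (2 * L)); // Hz
  }
  return modes;
}

console.log(axialModeFrequencies(4)); // [171.5, 343, 514.5, 686] Hz for a 1 m spacing
```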

Epilogue

The good news is that annoying room modes can be attenuated, and there are multiple ways to do it. In the case of hi-fi equipment, some modern amplifiers attempt to correct room modes using digital signal processing. But these digital methods won’t sound nearly as good as the room would if you fixed the acoustics of the room itself.