The sound insulation of facades

For this blog post, I made a simulation that demonstrates one way of thinking about the sound insulation of building facades. Note that there are some instabilities I didn't resolve, so running the simulation for too long might jam up your browser. Click the simulation to start it, and again to stop it.

Once again, you'll have the best experience with Chrome.

The simulation

In the simulation, the sound source (represented by a red dot) will emit sound every once in a while. The sound is represented by the small colored dots in the simulation. The color of each dot represents the part of the facade the sound traveled through.

The room is quite deep, 15 meters (almost 50 feet), and the simulation spans 0.8 seconds. At roughly 343 m/s, sound covers about 274 meters in that time, so there's room for plenty of back-and-forth reflections.

Each time the simulation restarts, the insulation of one of the colored parts of the facade is worse than that of the others. Take a look to see how this affects things.
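To give a rough idea of what the dots represent, here's a minimal sketch of the bookkeeping involved (the part names and transmission coefficients below are made up for illustration; the actual simulation works differently):

    import random

    # Transmission coefficients (fraction of sound energy let through) per
    # facade part. Names and values are hypothetical.
    facade_parts = {"window": 0.010, "wall": 0.001, "vent": 0.050}

    # On each restart, one randomly chosen part insulates worse than usual.
    weak_part = random.choice(list(facade_parts))
    tau = dict(facade_parts)
    tau[weak_part] *= 10.0  # ten times more energy gets through

    def transmit(energy, part):
        """Energy of a sound 'particle' after passing through a facade part."""
        return energy * tau[part]

    for part in tau:
        print(part, transmit(1.0, part))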

Discussion

A multitude of things are at play in the real world. When measuring the insulation properties of a facade, for example, the response of the receiving room is taken into account by "correcting" the result (this involves, among other things, multiple measurement points and measurements of the reverberation in the room).
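As a concrete (if hedged) example of such a correction, the standardized facade level difference used in field measurements has roughly the form

    D_2m,nT = L_1,2m - L_2 + 10 * lg(T / T_0)

where L_1,2m is the sound pressure level 2 m in front of the facade, L_2 the averaged level in the receiving room, T the measured reverberation time of that room, and T_0 = 0.5 s a reference reverberation time.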

Predictive calculation methods used by consultants are usually statistical, which in practice means that many details of the actual situation are left out. In most cases, though, a statistical approach is enough. The data available for the calculations is usually limited as well, which makes the statistical approach a very viable option.

Something similar to the method presented here could in theory give better results in cases where the statistical approach isn't that viable an option. I'm not sure about the simplification of sound traveling "straight through" the wall, though, as the situation is really more complicated than that (even though I know that some commercial software uses this approach to model sound insulation).

Using beam tracing to calculate reflections in JavaScript

I have been researching beam tracing for a project of mine for a while now. Beam tracing is a method for calculating reflection paths. I won't go into the details of how beam tracing works, as this is something you can find on Google in case you're not familiar with it.

My first attempt at beam tracing was made in Python, about a year ago. I made an algorithm that worked in 3D, but it became exceedingly complicated as I worked on it (mostly due to occlusion and clipping). As it turned out, I was overcomplicating things. The paper "Accelerated beam tracing algorithm" by Laine et al. served as an excellent introduction to how basic beam tracing should be done: as simply as possible. To get a better understanding of the paper, I decided to implement some of the very basics in 2D using JavaScript.

The result

Click to place the source; the specular reflections are updated automatically as you move the mouse around. Note that reflections are only calculated up to a certain limit (in theory, there is an infinite number of reflections).
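To make the geometry concrete, here's a minimal sketch of the image-source construction that specular reflections (and beam tracing) are built on: mirror the source across the wall, and the first-order reflection path is found by intersecting the line from the image source to the listener with the wall. This is a bare-bones illustration, not the implementation discussed below:

    import numpy as np

    def mirror(point, a, b):
        """Mirror a 2D point across the infinite line through wall endpoints a, b."""
        d = (b - a) / np.linalg.norm(b - a)  # unit vector along the wall
        foot = a + d * np.dot(point - a, d)  # projection of the point onto the line
        return 2.0 * foot - point            # reflection across the line

    def first_order_path(source, listener, a, b):
        """Reflection point on the segment a-b, or None if the path misses it."""
        image = mirror(source, a, b)
        u = listener - image                 # direction from image source to listener
        v = b - a
        m = np.array([u, -v]).T
        if abs(np.linalg.det(m)) < 1e-12:
            return None                      # path is parallel to the wall
        s, t = np.linalg.solve(m, a - image)
        if 0.0 <= s <= 1.0 and 0.0 <= t <= 1.0:
            return a + t * v                 # valid specular reflection point
        return None

    wall_a, wall_b = np.array([0.0, 0.0]), np.array([4.0, 0.0])
    print(first_order_path(np.array([1.0, 1.0]), np.array([3.0, 1.0]), wall_a, wall_b))
    # -> [2. 0.]: the reflection point lies on the wall, midway between the two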

Firstly, it should be noted that I only utilised the very basic ideas presented in the paper in my implementation (beam trees and accelerated ray tracing). I only wanted to spend a few days on this, which means that there are probably a lot of things which could (and should) be optimised. I spent most of the time pruning out bugs, which at times were very hard to find. I also pushed the MIT-licensed source code to Bitbucket, in the hopes that someone might think of something useful to do with 2D beam tracing in JavaScript!

Stereo widening using phase cancellation

I got the idea for this blog post during the holidays. It's nothing new (cross-talk cancellation), but interesting nonetheless. The basic principle is relatively simple, which makes it a fascinating example of applied physics!

Note that you need speakers, preferably ones you can place quite close to each other and in front of you; a laptop should generally work nicely. Also, this doesn't work in Internet Explorer. Then follow these steps and fill in the information below:

  • Measure the distance between your speakers (middle-to-middle, A).
  • Measure the distance from your ears to the middle point between your speakers (B).
  • Make sure your head is in the middle, as in the picture below, and at about the same distance as when you measured the distance B.
  • Click "Left" or "Right"!
[Figure: Distances to measure]

Try it out!

See "Troubleshooting" if nothing special seems to be happening.

Troubleshooting

What should be happening: you should see a user interface, in which you can play a sound by clicking "Left" or "Right". The sound should shift to the far right or far left when phase cancellation is enabled, even when the speakers are right in front of you.

There are some problems related to the audio playback part of the demonstration; it doesn't seem to work on a lot of systems (even if you're not using Internet Explorer). If the sound is more like a sharp bell than a soft bell, the sound is distorted: something went wrong, and the effect won't be that great, sorry. I might fix this at some point, but it seems to be related to the Web Audio API / pico.js somehow, so it's probably not an easy fix.

Some other stuff that might affect the result:

  • The measurements need to be fairly accurate.
  • Make sure you're not close to any walls.
  • Too much reverberation will impair the effect.
  • Sit in the middle, relative to the speakers, and look straight ahead.
  • Some speakers distort the sound when you turn the volume up (my laptop does), so try lowering the volume if the demonstration doesn't work. The audible sound should be as close to a sine wave (smooth) as possible.
  • There will be problems when you raise the frequency enough. Try 400 Hz, it worked nicely for me.
  • The effect is more pronounced when the speakers are close to each other and in front of you (like in a laptop); the sound will appear to be coming from a totally different direction.
  • If your speakers already form a wide stereo field, the effect isn't that noticeable.

What was that?

Let's start by considering what makes a sound appear to come from the direction of a speaker in general. The two most important cues that allow us to deduce the direction of a sound are related to how it arrives at our ears: the sound reaches each ear at a different time and at a different volume. People usually talk about the interaural level difference (ILD) and the interaural time difference (ITD). The sound usually has a different distance to travel to each ear, so it arrives a little bit later at the farther ear, and also at a slightly lower volume. In our case, the ITD and ILD are approximated using the distances in the image below.

[Figure: Sound source - ears, distance]
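To sketch how those distances turn into numbers (assuming simple point sources, straight-line propagation, and no head shadowing; the head width here is an assumed value):

    import math

    C = 343.0    # speed of sound in air, m/s
    HEAD = 0.18  # assumed ear-to-ear distance, m

    def itd_ild(a, b):
        """Rough ITD (s) and distance-only ILD (dB) for the right speaker.

        a: distance between the speakers (middle-to-middle), m
        b: distance from the ears to the midpoint between the speakers, m
        """
        d_near = math.hypot(b, a / 2 - HEAD / 2)  # right speaker -> right ear
        d_far = math.hypot(b, a / 2 + HEAD / 2)   # right speaker -> left ear
        itd = (d_far - d_near) / C                # arrival-time difference
        ild = 20 * math.log10(d_far / d_near)     # level difference from 1/r decay
        return itd, ild

    print(itd_ild(a=0.3, b=0.6))  # speakers 30 cm apart, listener 60 cm away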

Now let's introduce the concept of phase cancellation. This is something that active noise-cancelling headphones use, for example. Sound consists of pressure changes; if we can align positive and negative pressure changes at the same point in space, they sum to zero. The basic idea is explained in the following image:

[Figure: Phase cancellation (image from Wikipedia)]
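Numerically, the cancellation is as simple as it sounds; a sine wave plus its phase-inverted copy sums to silence:

    import numpy as np

    fs = 44100                              # sample rate, Hz
    t = np.arange(fs) / fs                  # one second of sample times
    wave = np.sin(2 * np.pi * 400 * t)      # a 400 Hz sine
    inverted = -wave                        # the same wave, phase inverted
    print(np.max(np.abs(wave + inverted)))  # 0.0: complete cancellation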

OK, let's now combine the two previous principles! We want to play a sound from the right speaker, but we want to cancel out the sounds arriving at the left ear. How do we do this?

[Figure: Cross-talk cancellation]

A
  • We play the sound using the right speaker.
  • We play another sound (with inverted phase) using the left speaker. We time the signal so that the sound from the right speaker and the sound from the left speaker arrive at the left ear simultaneously; they cancel each other out at the left ear.

B + C

There's still a problem: what happens when the sound with the inverted phase (image A) arrives at the right ear (image B)? This would cause trouble; we don't want anything to arrive at the right ear after the first signal! So what do we do? We cancel it out, once more, using a signal with the correct phase (image B).

But, once more, the signal we played in image B needs to be cancelled out at the left ear (image C). Well, I hope you get the picture: this goes on and on for a while. Luckily, we won't have to do it forever, as each successive cancellation signal is quieter than the last.
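Here's a minimal sketch of building that train of cancellation signals, under a deliberately simplified model (pure delays, a single 1/r attenuation factor per extra path, no head shadowing; real cross-talk cancellation filters are more involved):

    import numpy as np

    FS = 44100  # sample rate, Hz
    C = 343.0   # speed of sound, m/s

    def cancellation_train(d_near, d_far, hops=20, length=4096):
        """Impulse trains for the main (right) and cancelling (left) speakers.

        d_near: distance from a speaker to its nearer ear, m
        d_far:  distance from a speaker to the farther ear, m
        Each hop is delayed by the extra path length, attenuated by the
        distance ratio, and phase-inverted relative to the previous one.
        """
        dt = (d_far - d_near) / C  # extra travel time per hop, s
        g = d_near / d_far         # 1/r attenuation per hop
        right = np.zeros(length)
        left = np.zeros(length)
        amp = 1.0
        for k in range(hops):
            idx = int(round(k * dt * FS))
            if idx >= length:
                break
            target = right if k % 2 == 0 else left  # hops alternate speakers
            target[idx] += amp
            amp *= -g  # quieter and phase-inverted with every hop
        return right, left

    right, left = cancellation_train(d_near=0.603, d_far=0.646)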

Epilogue

I think this is a cool example which demonstrates the wave nature of sound. It's intuitive and relatively simple, yet gives results anyone with proper hearing can observe (assuming the demonstration works on your setup).

Airborne sound insulation

I thought it would be cool to demonstrate some of the very basics behind airborne sound insulation using a finite element simulation. This is also something I'm planning on utilising in real projects (i.e. it's not just for playing around with).

Some background

Let's assume that we have two rooms with nothing but a small piece of wall between them. The rooms are perfectly separated from each other; they're connected by nothing but this one piece of wall. This means that the situation is analogous to laboratory measurements (i.e. to measuring the sound reduction index R, or the single-number quantity Rw). Situations in the field differ from the one described here, as flanking transmission is not taken into account in laboratory measurements.
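For reference, the laboratory sound reduction index is determined from

    R = L_1 - L_2 + 10 * lg(S / A)

where L_1 and L_2 are the average sound pressure levels in the source and receiving rooms, S is the area of the test specimen, and A is the equivalent absorption area of the receiving room (this is the standard textbook definition; the measurement standards add further details).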

Let's also assume that this simple, homogeneous piece of wall is perfectly sealed. It's modeled in 2D as simply supported (we're allowing rotation at the boundaries).

I'm not going to go any deeper into the material parameters I used here, but here's some background on the theory:

  • The simulation is done using a Timoshenko model for the piece of wall (taking shear locking into account using reduced integration; see the sketch after this list).
  • I used linear shape functions for both the fluid and beam domains.
  • The bottom boundary is completely absorbing.
  • There are two completely separate fluid domains, which are both coupled to the piece of wall in the middle.
  • I did the simulation in Python and exported the results to a binary file, which is read by the JavaScript viewer below.
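As a hedged illustration of that first bullet, here's a minimal two-node Timoshenko beam element where the shear term is under-integrated with a single Gauss point, the usual trick against shear locking (placeholder material parameters; this is not the actual simulation code):

    import numpy as np

    def timoshenko_element(EI, kGA, L):
        """Stiffness matrix of a 2-node Timoshenko beam element.

        DOF ordering: (w1, theta1, w2, theta2), with linear shape functions
        for both deflection w and rotation theta. The shear term uses
        one-point (reduced) Gauss integration to avoid shear locking.
        """
        # Bending part: exact integration of EI * (dtheta/dx)^2
        kb = (EI / L) * np.array([
            [0.0,  0.0, 0.0,  0.0],
            [0.0,  1.0, 0.0, -1.0],
            [0.0,  0.0, 0.0,  0.0],
            [0.0, -1.0, 0.0,  1.0],
        ])
        # Shear part: gamma = dw/dx - theta, sampled at the midpoint only
        b = np.array([-1.0 / L, -0.5, 1.0 / L, -0.5])
        ks = kGA * L * np.outer(b, b)
        return kb + ks

    print(timoshenko_element(EI=1.0, kGA=1.0, L=1.0))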

Simulation 1

This simulation is a bit heavier than the ones I've usually posted, as I had to use more elements to get a nice-looking result. Here's a very short summary of what's happening:

  1. A sound wave travels in the lower room.
  2. The sound wave arrives at the wall.
  3. The sound wave consists of positive pressure (compared to the surroundings and to the other side of the wall), and as such exerts a force on the wall (the coupling relations are sketched after this list).
  4. The wall will deform as a consequence of the force. Note that the deformations of the wall are exaggerated in the visualization!
  5. As the wall deforms, it moves the air above it, creating new sound waves.
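In equation form (hedged to the textbook fluid-structure coupling conditions; the signs depend on the chosen normal direction), the wall is driven by the pressure jump across it,

    q(x, t) = p_lower - p_upper

and on the wall surface the fluid's normal pressure gradient follows the wall's acceleration,

    ∂p/∂n = -ρ_air · ∂²w/∂t²

which is exactly the two-way coupling described in steps 3 to 5 above.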

Simulation 2

I also made the following simulation, which is more complex and more difficult to understand, but looks way cooler. I especially like that you can see sound moving faster in the wall than in the air when the first wave hits the wall (compare the sound waves below the wall to those above it).

Note: I switched the colors; here, red represents positive sound pressure. Also, the wall is clamped (rotation isn't allowed at the ends of the wall).
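A hedged aside on that speed difference: wave speeds in solids are far higher than in air (the quasi-longitudinal wave speed in concrete is on the order of 3500 m/s, against roughly 343 m/s in air), and free bending waves on a plate or beam are dispersive, with phase speed

    c_B = (ω² · B / m″)^(1/4)

where B is the bending stiffness and m″ the mass per unit area, so the bending wave speed grows with frequency and can easily exceed the speed of sound in air.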

Epilogue

The most important thing to note is that when you hear sound through the wall, as in the simulated situations, what you're hearing are the deformations of the wall. It's the wall that radiates sound into the receiving room. If the wall didn't move at all as a consequence of the pressure waves hitting it, you wouldn't hear anything. This is why heavy structures, such as concrete walls, insulate sound so well.
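That intuition is captured by the approximate, diffuse-field mass law, which in one common form reads R ≈ 20 · lg(f · m″) - 47 dB. A quick sketch (valid only well below the coincidence frequency):

    import math

    def mass_law_r(f, m):
        """Approximate diffuse-field mass law for a single homogeneous wall.

        f: frequency in Hz, m: mass per unit area in kg/m^2.
        """
        return 20 * math.log10(f * m) - 47

    # Doubling the mass (or the frequency) buys roughly 6 dB.
    print(mass_law_r(500, 100))  # ~47 dB
    print(mass_law_r(500, 200))  # ~53 dB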

The situation becomes a whole lot more complicated with separated structures (i.e. light structures or drywall), and when flanking is taken into account. Measuring the sound reduction index with a diffuse sound field is another interesting task I'll definitely have to do. I'll most likely return to these topics in later posts. 🙂