Here are some of my favourite posts; I hope you'll enjoy them:

- https://roomresponse.com/
- Room modes in 3D using SketchUp and FEM
- Simulating the physics behind a music box
- Listening to vibrations
- Binaural Sound with the Web Audio API
- Synthesizing thunder using JavaScript
- Using beam tracing to calculate reflections in JavaScript
- Simulating cymatics
- Airborne sound insulation
- Comb filters in acoustics
- Room modes explained
- Finite element analysis of 2D acoustics

I'm going to do the following:

- Model the room using SketchUp
- Import it into Python
- Place the subwoofer in the corner
- Calculate the response at the listening position using FEM
- Measure the real response at the same position and compare it with the result of the model

My living room has two open doorways without doors, which means that the geometry will usually look like the one to the left (with all other doors closed). I was really curious about the effect the two "chambers" (the entryway and the kitchen) have on the response of the room.

I divided the model into tetrahedrons using MeshPy, resulting in the figure you can see above. The mesh is quite coarse, but dense enough for the purpose. In addition, I used linear elements. The room modes took about a second to calculate once the mesh was done (with the meshing only taking a few seconds), which I found impressive.
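With the mesh in hand, the mode calculation is a generalized eigenproblem: K p = (ω/c)² M p, with K and M assembled from the elements. Here's a minimal NumPy sketch of that solve for a 1D rigid-walled duct with linear elements, a toy stand-in for the actual 3D tetrahedral model:

```python
import numpy as np

# 1D rigid-walled duct with linear elements; a toy stand-in for the 3D case
c = 343.0      # speed of sound [m/s]
L = 5.0        # duct length [m]
n_el = 200     # number of linear elements
h = L / n_el
n = n_el + 1

# Standard 1D element matrices for the acoustic Helmholtz problem
ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])

K = np.zeros((n, n))
M = np.zeros((n, n))
for e in range(n_el):
    idx = np.ix_([e, e + 1], [e, e + 1])
    K[idx] += ke
    M[idx] += me

# Reduce K p = lam M p to a standard symmetric problem via Cholesky of M
Lc = np.linalg.cholesky(M)
B = np.linalg.solve(Lc, K)
A = np.linalg.solve(Lc, B.T).T
lam = np.sort(np.linalg.eigvalsh(0.5 * (A + A.T)))

# Mode frequencies; for a rigid pipe the analytic answer is f_n = n c / (2 L)
freqs = c * np.sqrt(np.abs(lam)) / (2.0 * np.pi)
```

For the duct, the computed frequencies land on the analytic f_n = n·c/(2L) series, which makes for a handy sanity check before trusting the 3D mesh.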

I added a *tiny* amount of damping to the system and assumed that the boundaries were perfectly rigid.

I recorded the signal in the middle of the room and applied the popular exponential sine sweep technique by Farina.
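The sweep itself is simple to generate. Here's a sketch with assumed parameters (I'm not reproducing the exact sweep length or frequency range I actually used):

```python
import numpy as np

# Farina's exponential sine sweep, with assumed (not the actual) parameters
fs = 48000              # sample rate [Hz]
T = 5.0                 # sweep duration [s]
f1, f2 = 20.0, 20000.0  # start / end frequencies [Hz]

t = np.arange(int(T * fs)) / fs
Lp = T / np.log(f2 / f1)
sweep = np.sin(2.0 * np.pi * f1 * Lp * (np.exp(t / Lp) - 1.0))

# Inverse filter: time-reversed sweep with an exponential amplitude ramp,
# so that convolving the recording with it yields the impulse response
inverse = sweep[::-1] * np.exp(-t / Lp)
```

The instantaneous frequency is f1·exp(t/Lp), so the sweep hits exactly f2 at t = T.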

The rigid-boundary assumption resulted in the resonant peaks sitting slightly higher than in the measurement. Still, I find the correlation between the responses below 100 Hz fantastic. There are apparently some details above 100 Hz that should be taken into account (probably furniture, and also less damping), but below 100 Hz there are a lot of similarities. Additionally, the mesh with linear elements might not be dense enough above 100 Hz. The speaker I used has a poor response below 50 Hz, so the sharp peaks there were regrettably not visible in the measurement. Note that I wasn't really measuring the response at 170 dB.

I tried a few different techniques to visualize the modes, and found it surprisingly difficult to get a nice looking result.

Using isosurfaces in Mayavi gave a result I think looks decent. Here's the mode representing the peak you can see at about 50 Hz, with the surfaces showing constant pressure values. As you can see, the whole room is actively participating in the mode.

Another interesting mode is the **lowest mode at ~20 Hz**. It's far lower than one would estimate using just the rectangular part of the room (37 Hz). It's fascinating how the gradient of the pressure field goes from one chamber to the other.

The living room seems to be a prime example of a room where FEM gives very nice results. I would argue that there would be absolutely no point in trying to deduce the lowest modes from the rectangular dimensions of the room, when the real room modes are so specific to the whole geometry. This analysis is also really quick to do, so it doesn't add much extra time to a project.

Well, let's do some testing.

These are the dimensions of the plate. The vibration isolators are chosen to be 10 cm × 10 cm and have a uniform dynamic stiffness of and a thickness of . The values I chose for concrete are and .

What you're seeing above is basically the result of a sine sweep on a damped system, where a sinusoidal force is applied 20% in from the outer right corner of the plate. So, we have a point force with a constant amplitude and a specific frequency attempting to vibrate the plate.

Note that the deformations are waaaayy exaggerated throughout this post.

One can clearly see that the deformation gets smaller as the frequency increases. Thus, a smaller portion of the exciting force is transferred to the foundations. Also note that in the beginning mostly one corner vibrates, but as we approach the modes at ~36 Hz (the modes will be shown later in this post) the plate starts to pivot/swing around the middle axis. So we're moving from a mix of modes 1 and 2/3 towards modes 2/3.

Starting from , I solved the dynamic system describing the steady-state solution with a sinusoidal excitation. I used Python, and obtained the mass and stiffness matrices using finite element analysis. The eigenfrequencies are calculated with a standard eigenvalue analysis (from ).

The Mindlin–Reissner plate theory with bilinear elements was used for the plate. For the vibration isolation, I used regular spring elements lumped at the nodes (vertical shear deformations of the isolators are not central at the lowest frequencies). For the damping matrix, I used Rayleigh damping with nothing but the stiffness-matrix-related term, with a coefficient of 0.0075, applied directly to the global system.

Only vertical movement is taken into account here.
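As a sketch of the solve, here's the same idea on a toy two-DOF system (the matrices are made up, not the actual plate model): for each driving frequency, the complex steady-state amplitude comes from the dynamic stiffness matrix, with the same stiffness-proportional Rayleigh damping:

```python
import numpy as np

# Toy 2-DOF system with invented values; stands in for the plate matrices
M = np.diag([1.0, 1.0])
K = np.array([[2000.0, -1000.0], [-1000.0, 1000.0]])
C = 0.0075 * K                       # stiffness-proportional Rayleigh damping
F = np.array([0.0, 1.0])             # unit sinusoidal force on the second DOF

freqs = np.linspace(1.0, 20.0, 400)  # driving frequencies [Hz]
resp = np.empty_like(freqs)
for i, f in enumerate(freqs):
    w = 2.0 * np.pi * f
    Z = -w**2 * M + 1j * w * C + K   # dynamic stiffness matrix
    u = np.linalg.solve(Z, F)        # complex steady-state amplitudes
    resp[i] = np.abs(u[1])           # response magnitude at the driven DOF
```

Sweeping the frequency and plotting `resp` gives exactly the kind of response curves shown in this post, with peaks at the (damped) eigenfrequencies.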

Alright! What are the lowest frequencies the system will resonate at? Or, to put it another way, what are the lowest eigenmodes, and the corresponding eigenfrequencies? The lowest mode corresponds to the one obtained from the formula for a mass-spring system, , which in this case gives .
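With placeholder numbers (not the actual pad stiffness or plate mass), the rigid-body estimate is a one-liner:

```python
import numpy as np

# f0 = sqrt(k/m) / (2 pi); both values below are made up for illustration
k_total = 4.0e7   # combined dynamic stiffness of all pads [N/m] (assumed)
m_total = 2000.0  # mass of the concrete plate [kg] (assumed)

f0 = np.sqrt(k_total / m_total) / (2.0 * np.pi)  # lowest eigenfrequency [Hz]
```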

How well does this system perform? What are the effects of these modes?

First, let's try placing the force in the **absolute middle**. This would be the ideal case, when we can be sure that everything works as well as it can. Let's also, to begin with, consider an **undamped system**. Note that concrete in itself definitely has some structural damping, so the response is by no means correct here:

Alright! We can see a sharp peak at around **mode 4**. Mode 5 isn't visible, as a force in the absolute middle has no way of waking it up. Note that we're examining an undamped system here, so this is what would happen if the concrete itself didn't dissipate energy at vibrations around ~200 Hz (which in reality it does).

Ok then! What happens if we add some damping to the system? Let's also apply the same kind of damping to the isolating pads.

Suddenly we can see that the mode at around 188 Hz vanishes completely. This is because the type of damping we added affects higher frequencies quite a bit. I chose a coefficient which very roughly should correspond to the damping behaviour of concrete, but I did not do any thorough research here so please note that the resonance could in reality very well be somewhere between the two previous figures. Still, the results seem to very strongly indicate that the mode at around 188 Hz shouldn't be visible in a situation like the one described here.

Ok then! Let's try moving the force a bit. Let's move it so it's 20% from both sides. First, let's examine the case without damping. What does the response look like now?

Apparently the force is situated in such a place that mode 4 isn't really excited. Instead, we can see that **mode 5** is clearly visible.

Once again, let's see what happens when we add some damping.

We can see that only the three first modes seem to affect the result. Still, I'd say that it's really important to notice that **the first three modes actually affect the result quite a bit.** This is also clearly visible in the animated gif at the beginning of the post.

The modes of the plate itself are very central when the structure doesn't have strong damping characteristics. This would be the case if the plate were made of steel, for example. If the plate is made of homogeneous concrete (no rebar is taken into account here), the results strongly indicate that only the modes where the structure can be assumed to act as a rigid body affect the result.

Thus, in a case like the one above, it should be enough to consider the plate as a perfectly rigid structure. Still, the results strongly indicate that if the excitation isn't perfectly centered, other modes than the lowest one can have a strong effect on the performance of the isolating system. Also, if there is reason to suspect that the modes of the plate in itself aren't damped, it might be useful to examine them as part of the system.

So, let's investigate things. Not by cutting down more trees, but by simulating it instead!

To simulate this, I drew a very simple tree-like structure in 2D (*in my defence, the tree I cut down looked a lot more centered than the one above, where it looks quite obvious that the tree would fall to the left*). The tree has a lot more branches on one side than on the other. The centre of mass is shown to the left of the trunk. Since the centre of mass is located to the left of the trunk, the resulting gravitational force will make the tree tip over to the left, no matter how we cut the main trunk (if there is some magic trick I'm not aware of, please let me know).

After cutting a dent in the trunk with the purpose of letting the tree fall to the right, we can see that it starts to lean to the left. Whoops. This moves the centre of mass even further to the left. From the figure above, you can also see that the tiny narrow partition to the left of the dent is carrying the whole tree; all the major forces (in red) are now centered around the dent.

As long as the centre of mass is located to the right of the tiny partition left after the dent, we should be fine. This is not always easy to see, especially if the tree has a lot of branches.
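The condition boils down to a moment check about the remaining hinge strip. A deliberately tiny sketch, with made-up numbers:

```python
# Which way does the tree rotate about the hinge strip left by the dent?
# The sign of the moment (x_com - x_hinge) * weight decides; values made up.
x_hinge = 0.0      # horizontal position of the hinge strip [m]
weight = 5000.0    # weight of the tree [N] (invented)

def fall_direction(x_com):
    """x_com: horizontal position of the centre of mass [m]."""
    moment = (x_com - x_hinge) * weight
    return "right" if moment > 0 else "left"
```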

I used my solver in Python with plane elements to calculate the figures above. The colors represent the von Mises stress.

The comb teeth [3] in the picture below are the ones responsible for the notes you can hear.

The first model I made consisted of a uniform thin metallic plate (see the really cool gif below!). Problematically, there's some really strong coupling between the pins. Here's what happens when you apply a static deflection to one of the pins and let it go:

As you can see, the adjacent pins start to vibrate after some time. The resulting tone is decent, but it's a bit "wobbly" due to the coupling between the pins:

The problem with the coupled pins must be something that happens with real music boxes as well. Well then, how has this problem been solved before? What I noticed was that the thickness of the plate seems to vary. OK, so let's double the thickness of the plate everywhere except at the pins. Additionally, let's move the stiff supports a bit closer to the pins. Combined, these actions significantly decrease the amount of vibration that travels from one pin to another. So, in practice, we get a tone that is a lot clearer. Great! After some minor additional changes, this results in the sound you can hear in the example at the very beginning of the post. There's still some coupling between some of the pins, but it's small enough not to be bothersome.

I used the Mindlin–Reissner plate theory, which should quite accurately describe the vibrating thin plate that causes the sound. I solved the problem numerically with my own finite element solver in Python.

I started by calculating the static deflection of each pin when a force is applied at the very tip. I then "released" the pin and calculated the resulting deformation in small time increments (using the GN22 algorithm). Then, I calculated the average position, velocity and acceleration of the plate at each time frame; if the whole plate is simplified and considered as a single mass with one degree of freedom, the forces the plate exerts on surrounding structures (such as a table, which I assume radiates most of the sound caused by the box, to simplify things) should be roughly proportional to the average acceleration of the plate.
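A single-DOF sketch of the release step with Newmark time stepping (the GN22 scheme with β = 1/4, γ = 1/2) looks like this; the modal mass, damping and stiffness are invented stand-ins for a real pin:

```python
import numpy as np

# Single-DOF stand-in for one pin: release from a static deflection and
# integrate in time with Newmark (beta = 1/4, gamma = 1/2). Values made up.
m, c, k = 1.0e-4, 1.0e-3, 40.0      # modal mass, damping, stiffness (assumed)
dt = 1.0e-4                          # time step [s]
beta, gamma = 0.25, 0.5

u = 1.0e-3                           # static deflection at release [m]
v = 0.0
a = (-c * v - k * u) / m             # consistent initial acceleration

hist = []
for _ in range(20000):
    # predictors from the previous state
    u_p = u + dt * v + dt**2 * (0.5 - beta) * a
    v_p = v + dt * (1.0 - gamma) * a
    # solve the equation of motion for the new acceleration (no force applied)
    a = -(c * v_p + k * u_p) / (m + gamma * dt * c + beta * dt**2 * k)
    u = u_p + beta * dt**2 * a
    v = v_p + gamma * dt * a
    hist.append(u)
hist = np.array(hist)
```

Starting from the static deflection with zero force, the displacement rings down as a damped sinusoid, which is exactly the raw "pluck" before any summation over pins.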

The frequencies the pins vibrate at can be calculated according to the formula for the lowest natural frequency of a cantilever beam:

where L is the length of the pin, E is the elastic modulus of the material, I is the moment of inertia and m is the mass per unit length. If is the length of the longest pin, we can now get the lengths of each subsequent pin using , where is how many semitones higher we wish to tune the pin to.
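Putting the two formulas together, with hypothetical pin dimensions (the real comb's geometry isn't reproduced here):

```python
import numpy as np

# Hypothetical steel pins; the point is the scaling, not the exact numbers.
# Lowest cantilever mode: f = (1.875^2 / 2 pi) * sqrt(E I / (m L^4)),
# so f scales as 1/L^2.
E = 200e9                 # elastic modulus of steel [Pa]
rho = 7850.0              # density [kg/m^3]
b, h = 1.0e-3, 0.5e-3     # pin width and thickness [m] (assumed)
I = b * h**3 / 12.0       # second moment of area
m = rho * b * h           # mass per unit length

def pin_frequency(L):
    return (1.875104**2 / (2.0 * np.pi)) * np.sqrt(E * I / (m * L**4))

# Since f ~ 1/L^2, a pin tuned n semitones higher has L_n = L_0 * 2**(-n/24)
L0 = 8.0e-3                                   # longest pin [m] (assumed)
lengths = L0 * 2.0 ** (-np.arange(18) / 24.0)
freqs = pin_frequency(lengths)
```

Because f scales as 1/L², twelve semitones (a factor of two in frequency) corresponds to shortening the pin by a factor of 2^(1/2) ≈ 1.41.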

I'm really happy with the result, it sounds almost as good as the real thing. I added a bit of reverb, which is quite important for a natural sound. I think this is as far as I'll go with my music box simulations, but we'll see.

Here are some nice images of vibrating window frames (ooh, wow)! The finite element model I made with Python will hopefully enable me to do two things: calculate the radiation coefficient of the window frame, and calculate the effect of coupling the wooden frame with the windowpane (although shear locking is definitely a problem for the model I'm using at the moment). These things should be very central when one considers the computational airborne sound insulation of the structure as a whole.

I'm also thinking of making a more detailed model of the music box I wrote about earlier using this model at some point. That should make a fun topic for the blog.

A hypothetical simplified small open-plan office is shown in the picture to the left. The spheres and cubes represent the possible positions of the office workers.

I'll assume that sturdy office screens are placed air-tightly against the wall and floor, so that sound doesn't *leak through the edges of the screens*. I'll also ignore any sound diffracted over the screens.

The sturdy office screens isolate sounds very well; this means that sound doesn't travel directly from one position to another, but instead through diffraction (which I assume to be negligible) and reflections.

First-order reflections are relatively easy to predict. Second-order reflections are already significantly harder. Third-order reflections (and above) are very hard to predict without computer simulations, and are often far from intuitive.

I wish to hear as little as possible of my coworkers. The office screens already attenuate direct sound. But this is not enough. If I don't consider the other routes the sounds travel from one position to another, the screens will function as little more than visual barriers.

So what do I want to do? It turns out that **early reflections** are almost always the most important reflections to consider when one wishes to affect speech intelligibility. Another important factor is the background noise level, but I'll assume that the ventilation provides a decent amount of masking noise. Keep in mind that by early reflections I mean reflections arriving early on in time, without taking any notice of how complicated the path the reflection has traveled is.
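As a toy illustration of how quickly this gets non-intuitive, here's the image-source method for just the first-order reflections in a bare shoebox room (the real office model has screens and furniture, which this ignores; all positions are made up):

```python
import numpy as np

# First-order image sources in a shoebox room: mirror the source in each of
# the six bounding planes and rank the reflections by arrival time.
c = 343.0                                 # speed of sound [m/s]
room = np.array([10.0, 6.0, 2.6])         # room dimensions [m] (assumed)
src = np.array([2.0, 3.0, 1.2])           # talker position (assumed)
rcv = np.array([7.0, 3.0, 1.2])           # listener position (assumed)

images = []
for axis in range(3):
    for wall in (0.0, room[axis]):
        im = src.copy()
        im[axis] = 2.0 * wall - im[axis]  # mirror the source in this plane
        images.append(im)

# Arrival time of each reflection: image-to-listener distance over c
delays = sorted(np.linalg.norm(im - rcv) / c for im in images)
```

Each higher order mirrors the images again, so the number of paths grows exponentially, which is why reflections beyond the first order are hard to reason about by hand.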

So here's my goal: I want to get rid of the early reflections as effectively as possible, using a relatively small amount of absorbing material, such as acoustic panels. Let's assume that I can't place anything on the floor, as it would make cleaning (and walking around the room) too difficult. What is the optimal way to place the absorption?

The figure to the left shows the places where absorbing material should be placed, with dark blue representing the most important positions. There are two places where the placement of absorbing material is very central in this example; the ceiling above the office workers and the wall on the opposite side of them. In this simple case, the answer is fairly intuitive. For more complex situations, this is not always the case.

Use headphones and click on any point around the person below to choose a direction for the incoming sound. The blue dots are in perpendicular directions relative to the listener. Try different head-related impulse responses (HRIR); some of them will work better than others, depending on the individual. **Note** that the simulation has only been tested on **Firefox** and **Chrome**! Also, some people get errors when their web audio context has a different sample rate than the HRIRs*.

*You need headphones for the following simulation!*

Head-related transfer functions describe the cues we receive that enable us to determine the direction a sound arrives from. We only have two ears. To be able to determine the direction the sound arrives from in 3D, our brain has to use all the information it can.

For example, the sound will often arrive at the other ear with a small delay. Also, there will often be a difference in the sound level at one ear, as compared to the other (especially at high frequencies). But, additionally, there is a ton of information available for our brain to use. Our shoulders reflect sound. Sound reflects and diffracts around our external ears (pinna). As our features, such as the shape of our pinna, are individual, so is the way our brain perceives sound in 3D.

Still, our heads are often similar enough that we can approximate 3D sound using ready-made head-related transfer functions. Once we have a description of how sound arrives at our ears from different angles, we can take any sound and play it back from some direction in 3D.
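In code, the playback step is then just a pair of convolutions. The "HRIRs" below are fake stand-ins (a pure delay plus attenuation), not CIPIC data:

```python
import numpy as np

# Binaural rendering sketch: convolve a mono signal with a left/right HRIR
# pair. These "HRIRs" are fake stand-ins, not measured data.
fs = 44100
n = fs // 2
rng = np.random.default_rng(0)
mono = rng.standard_normal(n)            # half a second of noise as input

hrir_left = np.zeros(64)
hrir_left[2] = 1.0                       # sound arrives earlier and louder...
hrir_right = np.zeros(64)
hrir_right[10] = 0.6                     # ...at the left ear in this example

left = np.convolve(mono, hrir_left)      # left-ear signal
right = np.convolve(mono, hrir_right)    # right-ear signal
```

With real HRIRs, interpolating between measured directions gives smooth movement of the virtual source; the principle stays the same.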

The simulation in this post uses head-related transfer functions from the CIPIC HRTF database. This paper by Henrik Møller provides some nice additional information about head-related transfer functions.

The source code is here: https://github.com/kai5z/hrtf-simulation

*) If your web audio context has a different sample rate than the HRIRs' sample rate (44.1 kHz), the audio won't work. Apparently the sample rate of the context isn't configurable (please correct me if I'm wrong!), so the HRIRs should be resampled for playback to work.
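A crude resampling sketch with linear interpolation (the "HRIR" below is a fake stand-in; for real use, a proper polyphase or FFT resampler would be better):

```python
import numpy as np

# Resample a 44.1 kHz impulse response to 48 kHz by linear interpolation.
fs_in, fs_out = 44100, 48000
hrir = np.sin(np.linspace(0.0, np.pi, 200))   # fake stand-in for a real HRIR

t_in = np.arange(len(hrir)) / fs_in           # original sample times
n_out = int(round(len(hrir) * fs_out / fs_in))
t_out = np.arange(n_out) / fs_out             # target sample times
hrir_resampled = np.interp(t_out, t_in, hrir)
```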
