# Room modes in non-rectangular rooms

I recently wrote a program for doing basic 2D finite element analysis of acoustics. I did this for a different kind of project, but thought it would be cool to try it out as a method of analyzing how sound behaves in L-shaped rooms. I used the following dimensions in the software (the room consists of the L-shape in the picture):

Obviously we're simplifying things a lot, as we're leaving the height of the room out of the calculations. Still, the calculations will give us a lot of information about how sound behaves from the perspective of the most interesting dimensions of the room.

Let's try feeding in a plane wave from the top of the room, just to see what happens. Try moving the slider around a bit to see how the sound field forms in the room. Please note that it can take a while for the content to load.

Some central things to note:

• When the plane wave reaches the convex corner, the corner radiates sound in all directions (including to the right).
• A sound field quickly forms in both the top-down and the left-right directions.

The red dot represents a microphone in the room, in case you're wondering. Let's see what the microphone gives us:

We can clearly see when the first diffracted sound arrives at the microphone. Two reflections arrive shortly afterwards, in close succession. After that, the sound field quickly becomes complicated.

Let's check out the frequency response as measured by the microphone:

We can clearly see at least a few room modes. Let's try examining the modes more thoroughly.

Additional: The plane wave

The plane wave consists of a Gaussian pulse. We can't feed too sharp a pulse into the room, as that would lead to errors in the calculations. By increasing the width of the pulse, we get a pulse that the calculations can handle.
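To illustrate the tradeoff, here's a minimal sketch of how widening a Gaussian pulse suppresses its high-frequency content. The sample rate and pulse widths here are made up for illustration; the actual simulation parameters aren't listed in this post.

```python
import numpy as np

def gaussian_pulse(t, t0, sigma):
    """A Gaussian pulse centered at t0 with width sigma (seconds)."""
    return np.exp(-0.5 * ((t - t0) / sigma) ** 2)

fs = 8000                      # sample rate, Hz (illustrative value)
t = np.arange(0, 0.1, 1 / fs)  # 0.1 s of signal

narrow = gaussian_pulse(t, 0.05, 0.5e-3)  # sharp pulse
wide = gaussian_pulse(t, 0.05, 2e-3)      # 4x wider pulse

freqs = np.fft.rfftfreq(len(t), 1 / fs)
spec_narrow = np.abs(np.fft.rfft(narrow))
spec_wide = np.abs(np.fft.rfft(wide))

# The wider pulse has far less energy at high frequencies, which is
# exactly why it is gentler on the numerical model.
k = np.argmin(np.abs(freqs - 500))  # inspect the 500 Hz bin
print(spec_wide[k] / spec_narrow[k])  # far below 1
```

The spectrum of a Gaussian is itself a Gaussian, so doubling the pulse width halves the bandwidth; the pulse just has to be wide enough that its spectrum has died out before the frequencies the mesh can't resolve.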

Additional: How the response of the room is calculated

I cheated a bit. The calculation model I used doesn't account for damping, which in practice means that the room would continue reverberating indefinitely. I calculated the response for 0.5 seconds and approximated damping simply by multiplying the non-fading response with a decaying curve, which isn't something that really should be done.

The Gaussian pulse has the following frequency content (magnitude): Using this, I calculated the response up to 200 Hz by deconvolving the pulse from the response. Deconvolution, in this case, means that I took the varying frequency content of the excitation into account. This can be done by dividing the frequency content of the response by the frequency content of the excitation.
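As a sketch of the idea, the division-based deconvolution looks something like the following. The excitation and "room response" here are made up, not the actual simulation data, and the 200 Hz limit mirrors the one used in the post.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000  # Hz (illustrative)
n = 1024

# A Gaussian excitation pulse and a made-up decaying "room response" h.
t = np.arange(n) / fs
excitation = np.exp(-0.5 * ((t - 0.1) / 0.005) ** 2)
h = rng.standard_normal(64) * np.exp(-np.arange(64) / 10.0)

# Measured signal = excitation convolved with the room response.
measured = np.convolve(excitation, h)[:n]

# Deconvolution: divide the spectra to recover the room's own
# frequency response, compensating for the pulse's frequency content.
freqs = np.fft.rfftfreq(n, 1 / fs)
H = np.fft.rfft(measured) / np.fft.rfft(excitation)

# Only trust bins where the excitation still has energy (here: < 200 Hz).
mask = freqs < 200
H_true = np.fft.rfft(h, n)
print(np.max(np.abs(H[mask] - H_true[mask])))  # small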
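placeholder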

## Room modes

I've written quite a bit about room modes recently, but not from the perspective of irregular rooms. Let's see what they look like in this case! Below are the 6 lowest room modes. Many of them are far from obvious, as can be seen. They bear very little resemblance to the lowest room modes in a rectangular room.

## Conclusions

If the geometry of the room differs even slightly from a rectangular layout, and one wishes to calculate an approximation of the room modes of the space, there really don't seem to be many options available. Numerical modeling using the finite element method, as used in this post, is a method which works nicely.

# Edge diffraction with geometrical acoustics

In this post, I'm using geometrical acoustics to calculate the sound field around a wedge, including specular reflections, shadow zones and diffraction.

## The simulation

You might encounter some problems with the simulation if you're not using recent hardware/software with full WebGL support.

The simulation: http://doca.kaistale.com/btm/

Note that when you move the source or change the frequency of the source, it will take a while for the diffraction calculations to update.

I recommend that you keep the source to the left of the wedge if you want to observe the results of the diffraction calculations, as the calculations are done starting to the right of the wedge (you'll see what I mean if you play around with the simulation).

I could make the code much more efficient by optimizing things (many times more efficient, probably), but I won't do any of that, as it's just a proof of concept.

## The theory

The specular reflections are calculated by reflecting the source relative to the wedge. The shadowing is self-explanatory.

The diffraction is calculated using the Biot-Tolstoy expressions, using the method presented by Svensson et al. (JASA 1999). The simulation spans 1 meter by 1 meter. The wedge spans 1 meter to both sides of the source. The simulation represents a cut plane perpendicular to the wedge.

The amplitude and phase of the diffracted signal are calculated from the impulse response at many points in space, using the Goertzel algorithm (for a single frequency). The algorithm is implemented server-side, using Python/Numpy.
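For reference, a minimal single-bin Goertzel implementation might look like the following. This is a generic textbook version, not the actual server-side code; the point is that it extracts one DFT bin without computing the full FFT.

```python
import numpy as np

def goertzel(x, k):
    """Complex value of DFT bin k of x, via the Goertzel recurrence."""
    n = len(x)
    w = 2.0 * np.pi * k / n
    coeff = 2.0 * np.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # Combine the two final filter states; the extra factor removes
    # the phase accumulated over the n-sample recurrence.
    return (s_prev - np.exp(-1j * w) * s_prev2) * np.exp(-1j * w * (n - 1))

# Sanity check against a full FFT: the two should match for bin 10.
rng = np.random.default_rng(1)
x = rng.standard_normal(256)
Xk = goertzel(x, 10)
print(abs(Xk), abs(np.fft.fft(x)[10]))  # magnitudes agree
```

The amplitude is `abs(Xk)` and the phase is `np.angle(Xk)`, which is exactly the pair the simulation needs per grid point.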

The calculations are done with some simplifications, to keep the server happy. I haven't validated the results in the simulation thoroughly; small errors could be seen for the few points I tested (most likely due to the simplifications). But the calculation model works: I compared my Python implementation more rigorously with the examples in the paper by Svensson.

## The implementation

A lot of things that definitely should not be calculated in the shaders are calculated there anyway. If you delve deeper into the code (especially the shaders), you will most definitely encounter quite a few ugly things. But the simulation runs happily on my computer(s), so I'm happy.

Each time you move the source or change the frequency, the contribution of the diffracted signal is calculated for a polar grid, spanning 32 x 32 points (it extends quite a bit outside the visible view). The origin of the grid is at the edge of the wedge. The calculations are done server-side, and fetched one data point at a time using jQuery/ajax/JSON. This makes the calculations really slow, but that's partly by design, as I don't want to strain the server too much. I mainly wanted to test how these types of calculations can be done using Django.

This diffraction data is passed to the shaders using a 2D texture, with two bytes ("red" and "green") representing the amplitude of the signal and one byte ("blue") representing the necessary phase data.
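The exact packing scheme isn't described here, but one plausible way to encode a 16-bit amplitude plus an 8-bit phase into RGB bytes would be the following sketch (the normalization choices are assumptions, not the actual code):

```python
import numpy as np

def pack(amplitude, phase, amp_max):
    """Pack amplitude into two bytes (R, G) and phase into one byte (B)."""
    # 16-bit amplitude: high byte in R, low byte in G.
    q = (np.clip(amplitude / amp_max, 0.0, 1.0) * 65535).astype(np.uint16)
    r = (q >> 8).astype(np.uint8)
    g = (q & 0xFF).astype(np.uint8)
    # 8-bit phase, mapped from [-pi, pi) onto [0, 255].
    b = ((phase + np.pi) / (2 * np.pi) * 255).astype(np.uint8)
    return r, g, b

def unpack(r, g, b, amp_max):
    amplitude = (r.astype(np.uint16) * 256 + g) / 65535 * amp_max
    phase = b / 255.0 * 2 * np.pi - np.pi
    return amplitude, phase

rng = np.random.default_rng(2)
amp = rng.uniform(0, 1, 100)
ph = rng.uniform(-np.pi, np.pi, 100)
r, g, b = pack(amp, ph, amp_max=1.0)
amp2, ph2 = unpack(r, g, b, amp_max=1.0)
print(np.max(np.abs(amp - amp2)), np.max(np.abs(ph - ph2)))
```

Two bytes give the amplitude a resolution of 1/65535 of full scale, while the single phase byte quantizes to steps of about 1.4 degrees, which is fine for shading a field plot.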

WebGL can handle linear interpolation for textures automatically, so the data is interpolated nicely in-between the data points.

# Listening to vibrations

Update: here's a small tune I made to demonstrate what the resulting sample would sound like as an instrument:

[audio kaistale.com/blog/130722beam1d/kaistale_blog_beamtune.mp3]

I've always been fascinated by what things, especially sound waves, look like in slow motion. When examined in slow motion, somehow even complex phenomena start to seem intelligible.

In this post, I'm examining the behavior of a very simple instrument in slow motion. The physics are very much the same as when examining vibrations in structures, but instruments are (very much) nicer to listen to. The principles shown here are naturally applicable to larger-scale phenomena that occur in structures of different kinds (buildings, bridges, etc.).

## The instrument

The instrument will consist of a cantilever beam made out of steel, with a length of 20 cm. The cross-sectional dimensions of the beam are 2 cm x 2 cm. The instrument will thus resemble something like a simple vibraphone / glockenspiel / large music box.

We'll use the same principle to get the sound of the beam as an electric guitar uses to pick up the sound of a string, by placing a (virtual) pickup at a position 3/4 of the way from the tip of the beam (or 1/4 of the way from the left end of the beam, whichever suits your fancy).

What will happen if we hit the very tip of the beam, with a very sharp, impulse-like force (even sharper and quicker than what's possible with the hammer in the picture)?

## The result

First, it should be noted that the same principle is used as when an electric guitar picks up the sound of a string. The pickup transforms the deflection at a certain point on the string (or in this case, beam) directly to sound. The resulting sound for the beam is as follows:

[audio http://kaistale.com/blog/130722beam1d/Waveform.wav]

To see what the beam looks like directly after being hit by the sharp impulse, examine the resulting deformation of the beam as a function of time here (click on the blue area to load the content). The sound is generated from the movement of the circle in the direction of the y-axis.

Some things to notice when watching the time lapse of the deformation:

• There's a transient part at the very beginning (slightly visible in the 0.0015 s time span) which attenuates very quickly. This is the part of the response where standing waves (resonances) have not yet formed.
• The two lowest natural frequencies (resonances) can be seen clearly; one at 464 Hz and another at 2910 Hz. The third natural frequency, 8150 Hz, can be seen at the very beginning of the response.
• For this setup, the higher frequencies attenuate quickly. In the end, only the lowest natural frequency, 464 Hz, can be heard. This gives the distinct pitch you can hear in the sound sample above.

I've always found that spectrograms describe situations like this nicely. Here is a spectrogram of the first 0.1 seconds of the resulting sound. The y-axis shows the frequency content, while the x-axis shows time.

## The theory

I'm using finite element analysis to examine the behavior of the beam. The theory is the same as in the previous post, but I've also calculated the mass matrix for the beam. I've used 25 elements for the beam, thus solving a 50-degree of freedom system. Note that I'm using the Euler-Bernoulli beam theory, so some simplifications are made.
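As a sketch of the approach, the natural frequencies of the cantilever can be computed from the standard Euler-Bernoulli (Hermite) element matrices. The material constants below are assumptions (E = 200 GPa, ρ = 7850 kg/m³; the post doesn't list the exact values used), so the resulting frequencies land in the same ballpark as, but not exactly on, the ones quoted above.

```python
import numpy as np
from scipy.linalg import eigh

# Assumed steel properties (not stated in the post):
E, rho = 200e9, 7850.0          # Pa, kg/m^3
L, b, h = 0.20, 0.02, 0.02      # m
A, I = b * h, b * h**3 / 12.0
n_el = 25                        # 25 elements, as in the post
le = L / n_el

# Standard Euler-Bernoulli element stiffness and consistent mass matrices.
Ke = (E * I / le**3) * np.array([
    [12,     6*le,     -12,     6*le],
    [6*le,   4*le**2,  -6*le,   2*le**2],
    [-12,   -6*le,      12,    -6*le],
    [6*le,   2*le**2,  -6*le,   4*le**2]])
Me = (rho * A * le / 420.0) * np.array([
    [156,    22*le,     54,    -13*le],
    [22*le,  4*le**2,   13*le,  -3*le**2],
    [54,     13*le,     156,   -22*le],
    [-13*le, -3*le**2, -22*le,  4*le**2]])

# Assemble: 2 DOFs (deflection, rotation) per node.
ndof = 2 * (n_el + 1)
K = np.zeros((ndof, ndof))
M = np.zeros((ndof, ndof))
for e in range(n_el):
    i = 2 * e
    K[i:i+4, i:i+4] += Ke
    M[i:i+4, i:i+4] += Me

# Cantilever: clamp deflection and rotation at the left end.
K, M = K[2:, 2:], M[2:, 2:]

# Natural frequencies from the generalized eigenvalue problem K v = w^2 M v.
w2 = eigh(K, M, eigvals_only=True)
freqs = np.sqrt(w2[:3]) / (2 * np.pi)
print(freqs)
```

With these assumed constants the first three frequencies come out around 410, 2600 and 7200 Hz; the exact values scale with sqrt(E/ρ), which explains the offset from the frequencies above.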

The damping was done using Rayleigh damping. For those familiar with this type of damping, the value for $\alpha$ was 1e-05 and the value for $\beta$ was 1.5e-06. I chose these values simply on the basis of listening to which damping values sounded better than others; not much thought was put into them. They seem surprisingly small compared to other values I encountered online, so maybe someone more acquainted with Rayleigh damping could offer me their opinion on this?
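Rayleigh damping builds the damping matrix as $C = \alpha M + \beta K$, which implies a modal damping ratio of $\zeta_n = \alpha/(2\omega_n) + \beta\omega_n/2$. A quick check of what the chosen coefficients imply at the natural frequencies above:

```python
import numpy as np

alpha, beta = 1e-5, 1.5e-6  # the Rayleigh coefficients from the post

def modal_damping(f_hz):
    """Damping ratio implied by C = alpha*M + beta*K at frequency f_hz."""
    w = 2 * np.pi * f_hz
    return alpha / (2 * w) + beta * w / 2

for f in (464.0, 2910.0, 8150.0):
    print(f, modal_damping(f))
```

With these values the stiffness-proportional term dominates: roughly 0.2% damping at 464 Hz but several percent at 8150 Hz, which matches the observation above that the higher modes die out quickly while the fundamental rings on.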

I used the Newmark algorithm for time-stepping, with a value of 0.5 for both $\beta_1$ and $\beta_2$.
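If $\beta_1$ and $\beta_2$ follow the common convention $\gamma = \beta_1$, $\beta = \beta_2/2$, these values correspond to the unconditionally stable average-acceleration variant. A minimal single-degree-of-freedom sketch of the scheme (a generic textbook version, not the actual beam code):

```python
import numpy as np

def newmark_sdof(m, c, k, f, dt, u0=0.0, v0=0.0, gamma=0.5, beta=0.25):
    """Newmark time stepping for m*u'' + c*u' + k*u = f(t)."""
    n = len(f)
    u = np.zeros(n); v = np.zeros(n); a = np.zeros(n)
    u[0], v[0] = u0, v0
    a[0] = (f[0] - c * v0 - k * u0) / m
    s = m + gamma * dt * c + beta * dt**2 * k  # effective "mass"
    for i in range(n - 1):
        # Predictors that use only known quantities from step i.
        u_p = u[i] + dt * v[i] + 0.5 * dt**2 * (1 - 2 * beta) * a[i]
        v_p = v[i] + dt * (1 - gamma) * a[i]
        # Solve for the new acceleration, then correct.
        a[i+1] = (f[i+1] - c * v_p - k * u_p) / s
        u[i+1] = u_p + beta * dt**2 * a[i+1]
        v[i+1] = v_p + gamma * dt * a[i+1]
    return u

# Free vibration of an undamped 1 Hz oscillator, released from u(0) = 1.
m, k = 1.0, (2 * np.pi) ** 2
dt, steps = 1e-3, 2000  # simulate 2 seconds
u = newmark_sdof(m, 0.0, k, np.zeros(steps), dt, u0=1.0)
```

The average-acceleration scheme introduces no numerical damping, so the oscillation amplitude stays at 1 indefinitely; that's precisely why the separate Rayleigh damping above is needed to make the sound decay.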

## The methods

I used Python for the calculations. Python is ideal for such an endeavor, and free! SymPy provided me with the tools I needed to solve the necessary equations, while NumPy did the calculations for me.

I saved the resulting calculations as a binary file, which only contained the necessary information. For example: only one byte per element describes the deformation at each time step in the animation seen above, as 8 bits are more than enough in this case.
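One plausible way to do this kind of one-byte-per-value storage is to rescale each frame to its own min/max before rounding to 8 bits. The post doesn't describe the exact scaling used, so the scheme below is an assumption:

```python
import numpy as np

def quantize(frame):
    """Store one frame of deflections as one byte per value."""
    lo, hi = frame.min(), frame.max()
    q = np.round((frame - lo) / (hi - lo) * 255).astype(np.uint8)
    return q, lo, hi

def dequantize(q, lo, hi):
    return lo + q / 255.0 * (hi - lo)

rng = np.random.default_rng(3)
frame = rng.standard_normal(26)  # e.g. one deflection per node
q, lo, hi = quantize(frame)
err = np.max(np.abs(dequantize(q, lo, hi) - frame))
print(err)  # at most half a quantization step, (hi - lo) / 510
```

The round trip costs at most half a quantization step of the frame's range, which is far below anything visible in an animation.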

I wrote a script that loads these binary files (of which 3 can be seen above). The deformations can be examined as a function of time. I'm quite happy with how it turned out, even though some parts of the code are quite hackish. You can see the JavaScript code here. Note that this code only displays the data from the binary files; the binary files themselves were calculated using Python.

## Thoughts for the future

Perhaps the parameters could be tweaked a little, so the sound would be closer to the sound from a music box (which is the instrument I think the model most closely resembles). Also, the Timoshenko beam model would be nice to try, for comparison. All in all, this post was a nice exercise in programming Python, JavaScript and doing finite element analysis in the time domain.