Vibration modes 1–4 of a window frame

# Vibrating windows

Here are some nice images of vibrating window frames (ooh, wow)! The finite element model I made with Python will hopefully enable me to do two things: calculate the radiation coefficient of the window frame, and calculate the effect of coupling the wooden frame with the windowpane (although shear locking is definitely a problem for the element formulation I'm using at the moment). Both of these are central when computing the airborne sound insulation of the structure as a whole.

I'm also thinking of making a more detailed model of the music box I wrote about earlier using this model at some point. That should make a fun topic for the blog.

# Acoustics of small open plan offices

In this post, I'm going to examine a hypothetical small open plan office, and the optimal way to treat the space acoustically. The publication covering some of the theory I'm going to base this on can be found here. I'm going to keep the example geometrically simple, so that the result is clear and somewhat intuitive.

## The setup

Small open office setup

A hypothetical simplified small open plan office is shown in the picture to the left. The spheres and cubes represent the possible positions for the office workers.

I'll assume that sturdy office screens are placed air-tightly against the wall and floor, so that sound doesn't leak through the edges of the screens. I'll also ignore any sound diffracted over the screens.

## Reflections

First degree reflections

The sturdy office screens block direct sound very well; this means that sound doesn't travel directly from one position to another, but instead via diffraction (which I assume to be negligible) and reflections.

First and second degree reflections

First degree reflections are relatively easy to predict. Second degree reflections are already significantly harder. Third degree reflections (and above) are very hard to predict without computer simulations, and are often already far from intuitive.
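Specular reflections like these can be computed with the image-source method: mirror the source across the reflecting plane and measure the straight-line distance from the mirror image to the receiver. A minimal Python sketch (the positions and ceiling height here are made up for illustration):

```python
import numpy as np

def image_source(src, plane_point, plane_normal):
    """Mirror a source position across a plane (first-order image source)."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    d = np.dot(np.asarray(src, dtype=float) - plane_point, n)
    return np.asarray(src, dtype=float) - 2.0 * d * n

def reflection_delay(src, rcv, plane_point, plane_normal, c=343.0):
    """Arrival delay (s) of the specular reflection via one plane."""
    img = image_source(src, plane_point, plane_normal)
    return np.linalg.norm(np.asarray(rcv, dtype=float) - img) / c

# Talker and listener 1.2 m above the floor, 3 m apart; ceiling at 2.6 m.
src = np.array([0.0, 0.0, 1.2])
rcv = np.array([3.0, 0.0, 1.2])
ceiling = reflection_delay(src, rcv, plane_point=[0, 0, 2.6], plane_normal=[0, 0, 1])
floor   = reflection_delay(src, rcv, plane_point=[0, 0, 0.0], plane_normal=[0, 0, 1])
print(f"ceiling reflection: {ceiling*1e3:.2f} ms, floor reflection: {floor*1e3:.2f} ms")
```

Higher-order reflections work the same way, by mirroring the image sources recursively, which is exactly where hand prediction stops being practical.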

## The goal

I wish to hear as little as possible of my coworkers. The office screens already attenuate direct sound. But this is not enough. If I don't consider the other routes the sounds travel from one position to another, the screens will function as little more than visual barriers.

So what do I want to do? It turns out that early reflections are almost always the most important reflections to consider when one wishes to affect speech intelligibility. Another important factor is the background noise level, but I'll assume that the ventilation provides a decent amount of masking noise. Keep in mind that by early reflections I mean reflections arriving early in time, regardless of how complicated a path the reflection has traveled.

I'll set the following goal: I want to get rid of the early reflections as effectively as possible, using a relatively small amount of absorbing material, such as acoustic panels. Let's assume that I can't place anything on the floor, as it would make cleaning (and walking around the room) too difficult. What is the optimal way to place the absorption?
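One crude way to shortlist absorber positions is to compute, for every pair of seats, the first-order reflection delay off each treatable surface, and see which surfaces carry the earliest arrivals. A toy Python sketch; the room dimensions and seat positions are made up, and only first-order paths are counted:

```python
import numpy as np

# Shoebox room: x in [0, 6], y in [0, 4], z in [0, 2.6] (metres).
# The floor is excluded, since we can't place anything on it.
SURFACES = {
    "ceiling":  ([0.0, 0.0, 2.6], [0, 0, -1]),
    "wall x=0": ([0.0, 0.0, 0.0], [1, 0, 0]),
    "wall x=6": ([6.0, 0.0, 0.0], [-1, 0, 0]),
    "wall y=0": ([0.0, 0.0, 0.0], [0, 1, 0]),
    "wall y=4": ([0.0, 4.0, 0.0], [0, -1, 0]),
}
SEATS = [np.array(p) for p in ([1.5, 1.0, 1.2], [1.5, 3.0, 1.2],
                               [4.5, 1.0, 1.2], [4.5, 3.0, 1.2])]

def first_order_delay(src, rcv, point, normal, c=343.0):
    """Delay (s) of the specular reflection off one plane, via the image source."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    img = src - 2.0 * np.dot(src - point, n) * n
    return np.linalg.norm(rcv - img) / c

# For each surface, the earliest first-order reflection over all seat pairs.
for name, (point, normal) in SURFACES.items():
    delays = [first_order_delay(s, r, np.asarray(point), normal)
              for i, s in enumerate(SEATS) for r in SEATS[i + 1:]]
    print(f"{name}: earliest first-order reflection {min(delays)*1e3:.1f} ms")
```

A proper treatment would also weight the reflections by level and include higher orders, which is what the simulations behind the result below do.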

## The result

The result

The figure to the left shows the places where absorbing material should be placed, with dark blue representing the most important positions. There are two places where absorbing material matters most in this example: the ceiling above the office workers and the wall opposite them. In this simple case, the answer is fairly intuitive. For more complex situations, this is not always the case.

# Binaural Sound with the Web Audio API

## The simulation

Use headphones and click on any point around the person below to choose a direction for the incoming sound. The blue dots mark the directions perpendicular to the listener. Try different head-related impulse responses (HRIR); some of them will work better than others, depending on the individual. Note that the simulation has only been tested on Firefox and Chrome! Also, some people get errors because their web audio context has a different sample rate than the HRIRs*.

You need headphones for the following simulation!

## The theory

Head-related transfer functions describe the cues that enable us to determine the direction a sound arrives from. We only have two ears, so to determine a sound's direction in 3D, our brain has to use all the information it can.

For example, the sound will often arrive at the other ear with a small delay. Also, there will often be a difference in the sound level at one ear, as compared to the other (especially at high frequencies). But, additionally, there is a ton of information available for our brain to use. Our shoulders reflect sound. Sound reflects and diffracts around our external ears (pinna). As our features, such as the shape of our pinna, are individual, so is the way our brain perceives sound in 3D.
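That interaural delay alone can be approximated surprisingly well by treating the head as a rigid sphere. A small sketch using Woodworth's classic formula; the head radius is a typical assumed value, not a measured one:

```python
import math

def itd_woodworth(azimuth_rad, head_radius=0.0875, c=343.0):
    """Interaural time difference (s) for a rigid spherical head.

    Woodworth's formula: ITD = a/c * (theta + sin theta), valid for
    azimuths from 0 (straight ahead) to pi/2 (directly to one side).
    """
    theta = abs(azimuth_rad)
    return head_radius / c * (theta + math.sin(theta))

# Sound arriving from directly to one side: the far ear lags by ~0.66 ms.
print(f"{itd_woodworth(math.pi / 2) * 1e3:.2f} ms")  # 0.66 ms
```

Sub-millisecond delays like this, together with the level and spectral cues, are what the HRIRs in the simulation encode implicitly.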

Still, our heads are often similar enough that we can approximate 3D sound with ready-made head-related transfer functions. Once we have a description of how sound arrives at our ears from different angles, we can take any sound and play it back from some direction in 3D.
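In practice, that playback step is just a convolution of the mono signal with the left- and right-ear HRIRs for the chosen direction. A sketch with toy stand-in HRIRs (real ones would come from a database such as CIPIC):

```python
import numpy as np

def binauralize(mono, hrir_left, hrir_right):
    """Render a mono signal at the direction the HRIR pair was measured for.

    Convolving with the two head-related impulse responses applies the
    interaural delay, level difference and spectral cues in one step.
    """
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)  # shape: (samples, 2)

# Toy HRIRs: pure delays with attenuation stand in for measured responses.
hrir_l = np.zeros(64); hrir_l[4] = 1.0    # near ear: short delay
hrir_r = np.zeros(64); hrir_r[30] = 0.5   # far ear: longer delay, quieter
noise = np.random.default_rng(0).standard_normal(4410)
stereo = binauralize(noise, hrir_l, hrir_r)
print(stereo.shape)  # (4473, 2)
```

The Web Audio API does the same thing with a ConvolverNode per ear, which is essentially what the simulation above sets up in JavaScript.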

The simulation in this post uses head-related transfer functions from the CIPIC HRTF database. This paper by Henrik Møller provides some nice additional information about head-related transfer functions.

## The source code

The source code is here: https://github.com/kai5z/hrtf-simulation

*) If your web audio context has a different sample rate than the HRIRs' sample rate (44.1 kHz), the audio won't work. Apparently the sample rate of the context isn't definable (please correct me if I'm wrong!), so the HRIR should be resampled for things to work.
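One way around the mismatch is to resample the HRIRs to the context's sample rate before building the convolver. A quick-and-dirty linear-interpolation sketch in Python, just to show the idea; a real implementation should use a proper band-limited resampler, and would live in JavaScript alongside the simulation:

```python
import numpy as np

def resample_hrir(hrir, sr_in, sr_out):
    """Resample an impulse response by linear interpolation.

    Crude but serviceable for short HRIRs; prefer a polyphase/sinc
    resampler (e.g. scipy.signal.resample_poly) for production use.
    """
    n_out = int(round(len(hrir) * sr_out / sr_in))
    t_in = np.arange(len(hrir)) / sr_in     # original sample times (s)
    t_out = np.arange(n_out) / sr_out       # target sample times (s)
    return np.interp(t_out, t_in, hrir)

hrir_44k = np.random.default_rng(1).standard_normal(200)  # stand-in 44.1 kHz HRIR
hrir_48k = resample_hrir(hrir_44k, 44100, 48000)
print(len(hrir_48k))  # 218
```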

# Predicting the absorption coefficient of micro-perforated plates

Aluminium honeycomb perforated panel

I was playing around with the topic of this post on the train a while ago (while reading the paper "Predicting the absorption of open weave textiles and micro-perforated membranes backed by an air space" by Kang et al.), and thought it would make a good blog post.

## Micro-perforated plates

What are micro-perforated plates/membranes? I'll just quote Wikipedia's concise explanation: "a thin [...] plate, made from one of several different materials, with small holes [...] in it. An MPP offers an alternative to traditional sound absorbers made from porous materials."

The basic idea is this: because the holes are so tiny, sound gets absorbed while passing through the holes due to friction. A nice example of a micro-perforated plate is a transparent membrane, which you can see through while it still offers sound-absorbing qualities.

## Designing micro-perforated panels

I wrote up some equations from the paper in JavaScript. The equations give the absorption coefficients for normally incident sound and for a diffuse sound field. Note that the following equations are only meant for designing flat micro-perforated plates with a relatively small closed air space behind them. Also, I take no responsibility for any errors in the calculations, although they did seem to agree with the plots in the paper.

This is currently for non-metallic substances only (as given by the paper), let me know if someone out there wants me to implement metallic membranes as well (or equations given by some other paper)!

If you're at a loss as to what to try, you can try the following values: hole width: 0.3 mm, hole spacing: 3.3 mm, cavity depth: 0.1 m, mass: 0.19 kg/m², thickness: 0.17 mm.
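As a rough cross-check, here is a Python sketch of Maa's classical micro-perforated panel model; note that this is not the exact Kang et al. formulation used by the widget above: it assumes a rigid panel (ignoring the membrane mass) with circular holes rather than square apertures, so the numbers will differ somewhat:

```python
import numpy as np

def mpp_absorption(f, d, t, spacing, D, eta=1.81e-5, rho=1.205, c=343.0):
    """Normal-incidence absorption of a rigid MPP backed by a closed cavity.

    Maa's model for circular holes on a square grid:
      f       frequency [Hz] (scalar or array)
      d       hole diameter [m]
      t       panel thickness [m]
      spacing centre-to-centre hole spacing [m]
      D       cavity depth [m]
    """
    omega = 2 * np.pi * np.asarray(f, dtype=float)
    sigma = np.pi * d**2 / (4 * spacing**2)        # perforation ratio
    x = d / 2 * np.sqrt(omega * rho / eta)         # perforation constant
    # Normalised resistance and mass reactance of the holes (Maa 1998).
    r = 32 * eta * t / (sigma * rho * c * d**2) * (
        np.sqrt(1 + x**2 / 32) + np.sqrt(2) / 32 * x * d / t)
    m = omega * t / (sigma * c) * (
        1 + 1 / np.sqrt(9 + x**2 / 2) + 0.85 * d / t)
    cavity = -1 / np.tan(omega * D / c)            # normalised cavity reactance
    return 4 * r / ((1 + r)**2 + (m + cavity)**2)

# Roughly the example values above (hole width read as a diameter).
f = np.array([250.0, 500.0, 1000.0, 2000.0])
alpha = mpp_absorption(f, d=0.3e-3, t=0.17e-3, spacing=3.3e-3, D=0.1)
print(np.round(alpha, 2))
```

The characteristic broad absorption peak set by the hole resistance and the cavity resonance shows up in both models.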