June 2023 Newsletter: Location, Location, Localization

In our April 2023 newsletter on selective auditory attention, we discussed how our brain processes sounds from the environment, separating what is important from what is background noise. Part of that process is determining where different sounds are located so that they can be identified and categorized. But how exactly is this spatial calculus accomplished? Read on to learn more about sound localization.

The Dynamic Duo
Unsurprisingly, it all starts with our ears – our two ears, to be exact. Humans have binaural hearing, meaning that each ear receives auditory signals independently. It is our two ears working in tandem that forms the basis of our ability to locate sounds. Using the subtle differences in a sound as it arrives at each ear, along with some nifty neural processing in between, we are able to infer the location of a sound, generating a 3D soundscape in our mind.

(2014AIST, https://creativecommons.org/licenses/by/3.0, via Wikimedia Commons)

Timing is Everything
One cue our brains utilize is the slight difference in the time it takes a sound to reach each ear, known as the Interaural Time Difference (ITD). A dog barking to your right, for example, will be heard a fraction of a millisecond earlier in your right ear than in your left (at most roughly 0.7 milliseconds, when the sound comes from directly to one side). Your brain implicitly interprets the length of this delay and estimates the angle of the incoming sound wave relative to your head.
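For the curious, here is a rough back-of-the-envelope sketch (in Python, not from the newsletter itself) of how the ITD maps to direction. It assumes a simple plane-wave model, an ear spacing of about 18 cm, and the speed of sound in air; real heads are more complicated, so treat the numbers as approximate.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, in air at roughly 20 °C
EAR_SPACING = 0.18       # m, assumed distance between the ears

def itd_for_azimuth(azimuth_deg: float) -> float:
    """Interaural time difference (seconds) for a plane wave arriving
    at the given azimuth (0 deg = straight ahead, 90 deg = directly right)."""
    theta = math.radians(azimuth_deg)
    extra_path = EAR_SPACING * math.sin(theta)   # extra distance to the far ear
    return extra_path / SPEED_OF_SOUND

def azimuth_for_itd(itd_s: float) -> float:
    """Invert the model: estimate azimuth (degrees) from a measured ITD."""
    return math.degrees(math.asin(itd_s * SPEED_OF_SOUND / EAR_SPACING))

# A dog barking 40 degrees to your right arrives ~0.34 ms earlier at the right ear.
itd = itd_for_azimuth(40)
print(f"ITD at 40 deg: {itd * 1e3:.2f} ms -> estimated azimuth {azimuth_for_itd(itd):.1f} deg")
```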

Taking it to Another Level
A second cue, the Interaural Level Difference (ILD), depends on the relative intensity of the sound in each ear. Take the same dog barking to your right: the sound will be louder in your right ear than in your left. This difference is due less to proximity than to the fact that your head itself obstructs the sound, leaving your far ear in its acoustic shadow.
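As a small, made-up illustration of what the ILD looks like in numbers: one common way to express it is the level ratio between the two ear signals in decibels. The signals and the 0.5 attenuation factor below are invented for the example, not measured data.

```python
import numpy as np

def ild_db(left: np.ndarray, right: np.ndarray) -> float:
    """Interaural level difference in dB; positive means the right ear is louder."""
    rms_left = np.sqrt(np.mean(left ** 2))
    rms_right = np.sqrt(np.mean(right ** 2))
    return 20.0 * np.log10(rms_right / rms_left)

# Toy example: the same 2 kHz tone, attenuated at the shadowed left ear.
fs = 44_100
t = np.arange(fs) / fs
right = np.sin(2 * np.pi * 2000 * t)
left = 0.5 * right                            # hypothetical head-shadow attenuation
print(f"ILD: {ild_db(left, right):.1f} dB")   # about +6 dB toward the right
```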

There is one catch, however: the ILD is not equal across all frequencies of sound. The lower the frequency of a sound, the longer its wavelength; once the wavelength becomes sufficiently large, the head becomes a vanishingly small obstacle by comparison. The wavelength of a 500 Hz sound, for example, is over two feet – multiple times the diameter of a human head. A 20 Hz sound, the lower limit of human hearing, has a nearly 60-foot wavelength. This size difference allows the sound to diffract, effectively “bending” around the head with minimal attenuation.
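The arithmetic behind those numbers is simple: wavelength is just the speed of sound divided by frequency. A quick sketch (assuming a typical head width of about 17.5 cm) reproduces them:

```python
SPEED_OF_SOUND = 343.0    # m/s, in air
HEAD_DIAMETER = 0.175     # m, assumed typical adult head width
METERS_PER_FOOT = 0.3048

for freq_hz in (20, 500, 4000):
    wavelength_m = SPEED_OF_SOUND / freq_hz   # wavelength = speed / frequency
    print(f"{freq_hz:>5} Hz: {wavelength_m:6.2f} m "
          f"({wavelength_m / METERS_PER_FOOT:5.1f} ft), "
          f"{wavelength_m / HEAD_DIAMETER:5.1f} head diameters")
```

At 20 Hz the wavelength spans nearly a hundred head widths, while at 4 kHz it is shorter than the head itself, which is why the head shadow only matters at higher frequencies.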

Ahead of the Game
A final clue in our soundscape sleuthing arises from the way an incoming sound is filtered by your upper torso, head, and ears. Depending on the direction of its origin, the spectral content of a sound is modified as it scatters and diffracts, intensifying some frequencies and attenuating others, giving your brain additional information to pinpoint the source. At high frequencies, with their especially short wavelengths, every detail matters, down to the minute contours of your face and ears.

All of the phenomena described in this newsletter contribute to what is known as the Head Related Transfer Function (HRTF). Every person has a unique, personalized HRTF – your auditory fingerprint. This fingerprint may not be very good at solving crimes, but without it, sound localization would be impossible.
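For readers who like to tinker, the sketch below shows the standard way an HRTF is applied in binaural audio: convolve a mono signal with a pair of head-related impulse responses (HRIRs), the time-domain counterparts of the HRTF, one per ear. The HRIRs here are crude placeholders; in practice they come from measured HRTF datasets.

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono: np.ndarray, hrir_left: np.ndarray, hrir_right: np.ndarray) -> np.ndarray:
    """Render a mono signal through a pair of head-related impulse
    responses (HRIRs); returns an (N, 2) stereo array."""
    left = fftconvolve(mono, hrir_left, mode="full")
    right = fftconvolve(mono, hrir_right, mode="full")
    return np.stack([left, right], axis=-1)

# Placeholder HRIRs; real ones come from a measured HRTF dataset.
fs = 44_100
mono = np.random.randn(fs)                         # one second of noise
hrir_left = np.zeros(256); hrir_left[30] = 0.5     # far ear: quieter and later
hrir_right = np.zeros(256); hrir_right[0] = 1.0    # near ear: louder and earlier
stereo = spatialize(mono, hrir_left, hrir_right)
print(stereo.shape)                                # (44355, 2)
```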

July 2023 Newsletter: PHL to PNQ (Peace & Quiet!)
Bungalow Beach Boardwalk Run