March 2024 Newsletter: To be or not to be… Intelligible

Have you ever sat in a noisy restaurant and had a hard time understanding your server, or been on a train and heard every word of a conversation being held three rows behind you? In both of these situations, you were aware of speech intelligibility. Sometimes, good speech intelligibility is key, such as in classrooms or conference rooms. At other times, we want to reduce speech intelligibility and improve speech privacy, such as in open offices. Read on to learn about some of the factors that go into making speech intelligible (or not).

To understand speech intelligibility, you need to understand speech. As you know, words are formed out of vowels and consonants. Both of these speech sounds start by producing an air stream with the lungs. Vowels are formed by engaging the vocal cords and shaping the air stream with the throat, tongue, lips, and nasal cavity. Consonants, on the other hand, are caused by restricting the air stream at one or more points within the vocal tract. Some consonants engage the vocal cords (voiced), while others do not (unvoiced). For example, the consonants “b” and “p” are both formed by restricting the air stream with the lips, but “b” is voiced and “p” is unvoiced.

So, what’s the big deal with consonants and vowels? In general, vowels are easier to identify, but consonants provide the nuance needed to give words meaning. In an environment with poor speech intelligibility, a person might say “bat”, but a listener could hear any number of words that share the same short-a vowel sound, such as “pat,” “bad,” or “pad.” The role that consonants play in speech intelligibility can be shown by the acoustic metric ALcons (articulation loss of consonants). This metric describes the percentage of consonants a listener hears incorrectly. A low number, such as 0.05, means very few consonants have been misheard (in this case, 5%), which indicates good speech intelligibility.
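For readers who like to see the arithmetic, here is a minimal sketch of how the ALcons figure above works out. The counts are hypothetical, chosen only to reproduce the 0.05 example from the text:

```python
# Illustrative only: ALcons is the fraction of consonants a listener
# hears incorrectly. The numbers below are made up for the example.
consonants_spoken = 100
consonants_misheard = 5  # e.g., "bat" heard as "pat" or "bad"

alcons = consonants_misheard / consonants_spoken
print(f"ALcons = {alcons:.2f} ({alcons:.0%} of consonants misheard)")
```

A result of 0.05 (5% misheard) indicates good speech intelligibility; larger values mean more consonants are being lost to the room.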

While ALcons can be a good metric for understanding speech intelligibility as it relates to room conditions like reverberation time, another important aspect in determining speech intelligibility is the speech-to-noise ratio. A positive speech-to-noise ratio (meaning the speech is louder than the background sound level) is more likely to result in good speech intelligibility. However, people can still communicate when the speech-to-noise ratio is negative because context clues aid in understanding. People are most likely to understand sentences as opposed to individual words, because their brains supply any missing information based on the sounds they are able to hear. One of the main metrics used to quantify speech intelligibility using speech-to-noise ratio is the articulation index (AI), which describes the percentage of words a listener can understand. A low number, such as 0.08, means very few words are understood (in this case, only 8%), which indicates poor speech intelligibility but good speech privacy.
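To make the speech-to-noise relationship concrete, here is a small sketch using a classic simplified one-band approximation, AI ≈ (SNR + 12) / 30, clamped to the range 0 to 1. This is an assumption for illustration: real AI calculations weight the speech-to-noise ratio across many frequency bands, so treat the numbers as rough.

```python
def articulation_index(snr_db: float) -> float:
    """Rough single-band AI estimate from speech-to-noise ratio (dB).

    Assumes the simplified one-band form AI = (SNR + 12) / 30,
    clamped to [0, 1]. Full AI methods weight many frequency bands;
    this is only a sketch of the trend.
    """
    return min(1.0, max(0.0, (snr_db + 12.0) / 30.0))

# A negative speech-to-noise ratio still yields a small, nonzero AI,
# which is why context clues can carry a conversation:
print(round(articulation_index(-9.6), 2))  # low AI: poor intelligibility, good privacy
print(round(articulation_index(6.0), 2))   # higher AI: most speech understood
```

Note how the formula bottoms out at 0 once speech sits far enough below the background level, and saturates at 1 when speech is well above it, matching the intuition that privacy and intelligibility trade off against each other.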

These are just a few of the factors that go into determining speech intelligibility. Here at Metropolitan Acoustics, we are well-versed in the many speech intelligibility metrics and how to interpret them. Whether you need privacy in your open office or clarity in your lecture hall, we are here to help!
