What’s the McGurk Effect?

The McGurk effect shows that visual cues, such as mouth and facial movements, play an important role in understanding spoken language. The phenomenon has been observed in both deaf and hearing individuals and has implications for the development of speech recognition software.

The McGurk effect is a principle involved in understanding human language. The essence of the effect is that visual cues play an important role in a listener’s understanding of the spoken word. Listeners depend on seeing the speaker’s mouth and facial movements, as well as on hearing, to understand the speaker’s meaning; when this visual information is missing, communication problems become more likely. The effect was first documented by researchers Harry McGurk and John MacDonald in 1976 and is sometimes called the McGurk-MacDonald effect.

The McGurk effect was evident to some people long before it was actually named. Deaf and hard of hearing people have been communicating by lip reading for centuries. An experienced lip reader picks up most of a speaker’s meaning by observing the movements of the mouth and face. Speech therapists and researchers who study this phenomenon quickly realized that it also applied to people with normal hearing. That is, all people unconsciously practice some form of lip reading in the course of everyday conversation.

The McGurk effect is easily observed. In a normal conversation, the listener looks at the speaker’s face. If the listener looks away, it takes more concentration to follow the speaker’s words, and some sentences may need to be repeated. When the speaker’s face is visible, the listener will depend on facial and mouth movements, as well as the speaker’s context and intent, to fully understand the speech. This concept, universal across spoken languages, is well known to deaf communities around the world and also plays a role in sign language.

Researchers can demonstrate the McGurk effect more thoroughly with computer speech simulators. These programs project an image of a human face that is coordinated with pre-programmed spoken phrases. Unlike human speakers, they can also be programmed to show the mouth movements of one sound while the audio plays a different one. When this happens, listeners are likely to perceive a third sound altogether: in the classic demonstration, the audio “ba” paired with the lip movements for “ga” is usually heard as “da.” This happens even when listeners know what the simulator is doing, suggesting that the McGurk effect is deeply ingrained in human consciousness.
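The idea of two cues fusing into a third percept can be made concrete with a small, self-contained sketch. The following Python snippet is purely illustrative and is not drawn from any particular study or simulator: it assumes made-up “support” values describing how strongly an audio track saying “ba” and a video of lips articulating “ga” each favor a few candidate syllables, multiplies the two cues together, and shows that the combined winner can be a third syllable, “da.”

SYLLABLES = ["ba", "da", "ga"]

# Hypothetical degree to which each cue supports each syllable (0..1).
# The audio track says "ba", which sounds somewhat like "da"; the lips
# articulate "ga", which looks somewhat like "da". These numbers are
# invented for illustration only.
audio_support = {"ba": 0.85, "da": 0.60, "ga": 0.05}
visual_support = {"ba": 0.05, "da": 0.60, "ga": 0.85}

def fuse(audio, visual):
    """Multiply the two cues and renormalise into response probabilities."""
    raw = {s: audio[s] * visual[s] for s in SYLLABLES}
    total = sum(raw.values())
    return {s: value / total for s, value in raw.items()}

if __name__ == "__main__":
    percept = fuse(audio_support, visual_support)
    for syllable, p in sorted(percept.items(), key=lambda kv: -kv[1]):
        print(f"{syllable}: {p:.2f}")
    # With these numbers "da" wins decisively, even though neither the
    # audio nor the video presented "da" on its own.

Multiplying the two cues, rather than averaging them, means a syllable only scores well when both the ear and the eye find it plausible, which is why the compromise syllable comes out on top in this toy example.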

The McGurk effect has been studied by computer specialists working on speech recognition software. It seems likely that, to fully capture the nuances of human speech, such programs must take the McGurk effect into account. Programs under development will use a small camera to observe a person’s facial movements as the person speaks a command. The computer will then integrate this information with the recorded sound for a more accurate understanding of the spoken command.
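A minimal sketch of that integration step appears below, assuming a simple late-fusion design: an acoustic model and a lip-reading model each assign probabilities to a small set of commands, and the two sets of scores are combined with a fixed weight. The command names, probabilities, and weighting are invented for illustration and do not describe any specific product or library.

import math

COMMANDS = ["play", "pause", "stop"]

def late_fusion(audio_probs, visual_probs, visual_weight=0.4):
    """Combine per-command probabilities from the microphone and the camera."""
    fused = {}
    for cmd in COMMANDS:
        fused[cmd] = ((1.0 - visual_weight) * math.log(audio_probs[cmd])
                      + visual_weight * math.log(visual_probs[cmd]))
    # Turn the weighted log scores back into normalised probabilities.
    peak = max(fused.values())
    exp_scores = {cmd: math.exp(s - peak) for cmd, s in fused.items()}
    total = sum(exp_scores.values())
    return {cmd: v / total for cmd, v in exp_scores.items()}

if __name__ == "__main__":
    # Invented model outputs: noisy audio leaves "play" and "pause" nearly
    # tied, but the lip-reading model clearly favours "pause".
    audio_probs = {"play": 0.45, "pause": 0.43, "stop": 0.12}
    visual_probs = {"play": 0.20, "pause": 0.70, "stop": 0.10}

    fused = late_fusion(audio_probs, visual_probs)
    best = max(fused, key=fused.get)
    print(f"recognised command: {best} ({fused[best]:.2f})")

Raising the visual weight makes the camera more decisive when the microphone signal is ambiguous, which is the practical benefit the paragraph above describes.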



