Is it like a rough inference of what’s being said based on mouth movements, or is it more precise somehow? Would it be a mistake to think you knew exactly what was said by reading lips (even if you were good at it)?
As someone who was born with unilateral moderate-to-severe hearing loss, I can read lips. The experience likely varies across the hearing-loss spectrum and with time of onset, and some people may be able to learn that skill themselves, idk. I’m sure 100% deaf people experience it in their own interesting way.
To me it’s not anything I consciously do, and it’s not something that’s really visible to me. The fact that I can still hear, just not as well as people with normal hearing, affects how it works. The way I’d explain it for me is kinda like this:
Sometimes I can’t hear enough to tell what is being said. One way my brain naturally deals with this is by reading the speaker’s lips and using that to help filter and understand what it’s hearing. I can kinda apply it as a skill, like with muted videos or people I can’t hear because of distance, but it doesn’t work that well and isn’t worthy of trust.
So for me it’s more of a sense, not something I do or think about. However, it’s basically the least-effort way to understand speech that isn’t clear enough. This is in contrast to another way I/my brain goes about it, which is trying really hard to figure out what it just heard.
To answer your last question: yes, it is likely a mistake. There’s a YouTube channel about that whole concept, called Bad Lip Reading. They dub over video with audio that matches the lips well enough.
This, completely. I didn’t even know how much I depended on reading lips until everyone worth listening to was wearing a mask.
Yeah, it really sucked: speech slightly muffled by the mask and no lips to read.
Same. My mind makes up random words for sounds all the time, so if I don’t have context or lips to read, the sentences I hear people say are just wild.
I repeat things back to people wearing masks so that I can be sure that I understand them. It annoys some of them, but whatever
As far as being similar to others’ experiences: I don’t have any significant hearing loss, but you basically just described my subjective experience reading lips (and subtitles).
Thanks! I appreciate the perspective on this, as lip-reading is kinda like “eye-reading” to me in that I’ve struggled to understand what’s involved.
To put my experience into perspective, which might work for at least a few people: subtitles. I mean… I’ve never asked anyone else, but y’all aren’t just reading them, right? To me they just clarify the speech subconsciously (for the most part), rather than me reading them off the screen when I need them. Subtitles are weird… Who knows if this is accurate to my experience or similar to others’.
This also helps me understand, as I often do watch stuff with subtitles to help better follow dialogue, and I’m usually not closely reading them all throughout.
When you say subtitles, do you mean closed captions? Because I agree those are a boost for me in following what I’m also seeing and hearing the person say. But with subtitles they’re speaking a different language, so lip-reading isn’t helpful and hearing just adds tone of voice.
Yeah I mean closed captions, oops
I mix them up all the time myself
It’s inference based on mouth movements, but it isn’t as rough as it seems: context plays a huge role in disambiguation, just like it does for you with homonyms that you hear. It’s just that the number of words that look alike when you read lips is larger than the number of words that sound the same, since some sounds are distinguished by articulations that you can’t immediately see (such as [f] vs. [v]: you won’t see the vocal folds vibrating for the latter, so “fine” and “vine” look almost* the same).
Also, the McGurk effect hints that everyone uses a bit of lip reading on an unconscious level; it’s just that for most of us [users of spoken languages] it serves to disambiguate/reinforce the acoustic signal.
*still not identical - for [v] the teeth will touch the bottom lip a bit longer.
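To make the ambiguity concrete, here’s a minimal sketch in Python. The phoneme-to-viseme grouping is a toy, made-up mapping (real viseme inventories are richer and vary by study), but it shows why visually distinct words are rarer than acoustically distinct ones, and why context has to carry the rest:

```python
# Minimal sketch of the "homophene" problem: many phonemes collapse to the
# same viseme (mouth shape), so words that sound different can look the
# same on the lips. The grouping below is a toy, made-up mapping, not a
# standard viseme inventory.

VISEME_CLASS = {
    # bilabials: lips pressed together look identical
    "p": "B", "b": "B", "m": "B",
    # labiodentals: teeth on lower lip; voicing ([f] vs [v]) is invisible
    "f": "F", "v": "F",
    # alveolars: the tongue is mostly hidden behind the teeth
    "t": "T", "d": "T", "n": "T", "s": "T", "z": "T",
    # a couple of vowels, coarsely grouped by lip shape
    "a": "A", "ai": "A",
}

def visible_form(phonemes: list[str]) -> str:
    """Collapse a phoneme sequence to what a lip reader can actually see."""
    return "".join(VISEME_CLASS.get(p, "?") for p in phonemes)

# "fine" vs "vine": the difference is voicing at the vocal folds, which is
# invisible, so both reduce to the same viseme string.
print(visible_form(["f", "ai", "n"]))  # FAT
print(visible_form(["v", "ai", "n"]))  # FAT

# "pat" / "bat" / "mat": a classic homophene set, all bilabial onsets.
for word in (["p", "a", "t"], ["b", "a", "t"], ["m", "a", "t"]):
    print(visible_form(word))  # BAT, three times
```

Context is what breaks those ties, the same way it picks between “there” and “their” for a hearing listener.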
Seinfeld had an episode that revolved around lip reading.
As a linguist, I suspect that everyone lipreads to some extent as a conversation repair mechanism. Accuracy probably depends on skill and context. Family members with hearing loss are pretty good at understanding a speaker that they can see clearly, even when there’s no sound information available at all.
Hearing loss, my dude. Army. Necessity is the mother of invention. It makes date night fun with my partner, though: I can read and sign remote convos to her, and sometimes that’s spicy/fun. It’s not perfect, but I can usually follow the thread, more so if the target is animated/angry/excited. Unfortunately the best ones are the hardest. We love the first-date awkward convos, the public breakups, and the admissions of guilt, but those tend to be subdued and difficult. When you get them, though, it is choice.
All I know is that when people started wearing masks, suddenly I had trouble understanding them. I guess I’d picked it up subconsciously alongside my hearing loss.
I still wear masks in public, but holy shit does it make people harder to understand when I can’t see their lips. I wanna say someone made a “clear” mask for that exact reason. Dunno exactly where I remember that from, but it’s a good idea.
There are others, but this looks like a good one
The muffling of the mask doesn’t help at all