
These are my Phonak hearing aids, plus a cat hair, which you’ll never get away from in this house.
I’m deaf. I wasn’t always deaf: I was born hearing, then I became hard of hearing, now I’m deaf. It might be due to a brain injury I suffered at 25, or it might be thanks to my father’s genetics. It could be both. Regardless, my experience of the world today is radically different to the one I grew up with: not inherently worse, but different in expected and unexpected ways. Life is harder because society is set up for hearing people; but life is not worse, even if I sometimes need to remind myself of that.
Journey Towards Silence will be a series of posts exploring my experience of sound, conversation, music and sign language from age 25 to now.
Of course, silence is inaccurate. Like the majority of deaf people, I can pick up some sounds, especially with hearing aids in; just not enough to qualify as hearing. When I’m not wearing hearing aids, I pick up a few syllables — and the deeper the voice, the more syllables come through — and some mechanical sounds, if they’re loud enough. I think I can hear some of the sounds my body makes inside, like when my stomach gurgles. But mostly, without those two little devices, I just “hear” tinnitus-like noises. When I am wearing them, if the environment isn’t loud and I’m facing someone, I can hold a conversation, but not in larger groups or out at events.
My deafness has two aspects to it: not enough information gets to my auditory cortex and I have difficulty processing the information that does get there. As an example, if someone with the right kind of voice says “Joan” with no other context, I’ll likely “hear” the vowel sounds and maybe the “n”, leaving me with “owe” or “own”. With context, I might work out that the person was talking about Joan Pawford (pictured), but even then, it takes me a moment. Conversations require a lot of focus. Recently, I’ve started to notice there’s a delay between what I see and what I “hear” (understand) — people’s mouth movements are out of sync with the sounds they’re making, as if they’re lip-syncing a syllable ahead of the soundtrack.

Joan Pawford would like someone in this bar to bring her a packet of Virginia Slims and a pint of G&T.
It’s also harder to understand television, video calls, phone calls and so on. It kind of works when the audio streams directly from the source to the hearing aids via Bluetooth, probably because I can adjust the sound extensively in the hearing aids’ app, but I still need subtitles to follow what’s going on. Unfortunately, in the case of the news, live television or calls, those are auto-generated closed captions that range from okay (e.g., a word or two wrong in a sentence, some missing phrases, about a half-sentence behind whatever sound I can hear) to incomprehensible. The focus it takes to decipher the content of a call is one of the biggest drains on my energy.
Although I have hearing aids, I can’t rely on them, so I’m also learning Irish Sign Language (ISL) and making other adaptations to my life. I have the privilege of access to an online ISL tutor and some apps, as well as a certain amount of security. That helps. Due to the auditory processing disorder, cochlear implants are not an option: they would essentially just supply sound to my auditory cortex in a different way (more on that in a later post). The “fixes” that occur to most hearing people aren’t right for so many Deaf or deaf people, so I’m finding my way led by many brilliant Deaf educators and activists, as I’ll discuss in this series.
Want to support the blog, which is going to be my primary source of income moving forward? I have a Patreon account here, a Liberapay account here, and a PayPal account here.