The Sitdown: Benjamin Lachman, cued speech advocate, Twista video guest

As told to Sandra Guy, Staff Reporter | Posted: 08/17/2014, 02:28am
Benjamin Lachman uses cued speech to talk to students at the Alexander Graham Bell Montessori School. | Michael Schmidt/Sun-Times

Benjamin Lachman comes from a family with a survival mentality — his father’s parents were Holocaust survivors. His parents helped him adapt when, at 18 months, they taught him cued speech because Mondini syndrome, a birth defect in which the cochlea is underdeveloped, left him deaf.

His mission is to bring cued speech — an alternative to sign language — to the mainstream, and he took a big step forward when he performed cued speech in a video featuring Chicago-area artist Twista. Cued speech involves eight hand symbols used in four positions near the face. It is, in essence, a phonetic visual representation of spoken language.

The Northbrook native, 32, lives in River North with his wife, Katie, who is hearing. He is a principal at Lachman Goldman Ventures, a computer consulting firm that analyzes and develops strategies around medical, scientific, information and agricultural technologies; serves on the board of the Lachman Foundation, a not-for-profit founded by his parents to advance cued speech, Jewish education, and environmental and social justice issues; and serves as a director of the Alexander Graham Bell Montessori School in Wheeling, a private school for the deaf and the hearing.

After I was born, my parents first followed the conventional path toward the generally accepted principles of deaf education by taking a class in American Sign Language.

Within six months, they had a vocabulary of 200 words. Deaf education professionals were ecstatic. A 2-year-old with a sign vocabulary of 200 words was above expectations, they exclaimed. Yet my parents struggled to come to terms with their inability to explain some of the most basic concepts.

My father asked the ASL teacher why he and my mother could not communicate with me in English phonetically. “If I can teach my computer to interact phonetically, why can’t I teach my son?” he asked. To answer that question, they discovered cued speech.

Cued speech takes two weeks to learn. Even if you use technology or choose surgery, surgery takes time; it can take months just to get on a list. That time goes by with no exposure to language.

We initially lost (a challenge to the public school system), but the long-term result was a win. The case ended in 1996, when the Illinois State Board of Education refused to allow parents to specify the methodology of education provided in public schools.

The legal perspective now is that cued speech is just a way to make English or any other spoken language accessible.

As a result of the 1996 ruling in my case, my parents and several other parents and teachers founded Alexander Graham Bell Montessori School.

I went to school at (the original) Alexander Graham Bell through second grade. Then I was mainstreamed at Solomon Schechter Day School, a private Jewish school, because they let me use a cued speech transliterator.

I went to Glenbrook North High School with a cued speech transliterator. After graduating from California Polytechnic State University in San Luis Obispo, Calif., and working for a few companies, I went back to get my MBA at the University of Illinois at Chicago.

One of the things about deafness is that it has a lot of variables. There are all kinds of cases and a lot of different outcomes with hearing technology.

Visual language, usually combined with auditory feedback technology, helps level those playing fields. It creates a uniform product. So you create a communications safety net.

I only had about 10 percent effectiveness with my cochlear implant. So my only option was visual language — or no language.

Cued speech has come a long way. We are doing a lot of visual content creation. We are able to communicate about our work more easily and make it more accessible.

I’m also working with businesses on technology products such as those that transcribe voice or other biofeedback into text. One option is audio recognition technology.

Another option would be visual or motion-capture products that convert cued speech into text, similar to a stenography machine, which requires the user to learn a phonemic keyboard.

There are also speech-to-text or cue-to-text possibilities that can be integrated into wearable augmented-reality products such as Google Glass or our own SeeBright headset that provides real-time captioning and other communications.

The Holy Grail is automatic voice-to-text for all voices, despite background noise.

Katie and I catch a lot of Cubs games, get our dog out of the apartment and try to learn more about this beautiful city we live in.

As for music, as a deaf individual, my musical experience is extremely tactile. I feel the vibrations and rhythms.

Many of my favorites are particularly bass-driven, but I enjoy diverse and dynamic rhythms, so my taste could range from Michael Jackson riffs to hip hop beats to just about anything with a good beat.


Twitter: @sandraguy
