Monday, November 26, 2012

Deep Neural Networks for Acoustic Modeling in Speech Recognition

An overview of progress representing the shared views of four research groups who have had recent successes in using deep neural networks for acoustic modeling in speech recognition.

Geoffrey Hinton, Li Deng, Dong Yu, George Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara Sainath, and Brian Kingsbury

The Guardian Article: "Feel the Music" project teaches deaf children a touch of Beethoven

The Mahler Chamber Orchestra and Norwegian pianist Leif Ove Andsnes teamed up with Paul Whittaker, a profoundly deaf musician who runs the UK charity Music and the Deaf, to create Feel the Music. The project, part of the MCO's Beethoven Journey concert series visiting 40 European cities between now and 2015, aims to open up the world of music to hearing-impaired children across Europe. Part of the project examines the way Beethoven's own deafness greatly influenced his compositions.

Studies show musical vibrations can have as much of an impact on the brain as real sounds, and that exposing deaf children to music early on can stimulate their brain music centres. "It's very rare that deaf children get the chance to work together with professional musicians, and especially with an orchestra," Whittaker says. "Not only does it open up a new world to children with hearing handicaps, it also takes musicians out of their comfort zones and makes them think anew about how they hear and understand music."

After feeling the instruments as they are played, the children are offered a chance to explore a concert hall and experience a performance by the MCO and Andsnes from the heart of a fee-paying audience. Find the full article here. Learn more about Music and the Deaf.

Wednesday, November 14, 2012

Film: Voices from El-Sayed

Capita Foundation announces: FILM
What: "Voices From El-Sayed" is a unique and moving documentary offering us intimate cinematic dialogue with El-Sayed's marvelous silent people.
When: Wed. Nov. 14
Where: UCSD Cross Cultural Center, 2nd floor of the Price Center. Room Comunidad Lg
Who: Featuring a discussion after the film with director Oded Adomi Leshem and UCSD Professor of Communication Carol Padden. Co-sponsored by Tritons for Israel, JStreetU and Hillel.
Cost: Free
RSVP: Click HERE to RSVP on the Facebook event page
More: Click HERE for the film preview or see the trailer below. In the picturesque Israeli Negev desert lies the Arab Bedouin village of El-Sayed. It has the largest percentage of deaf people in the world, yet no hearing aids can be seen, because in El-Sayed deafness is not a handicap. The tranquility of the village is interrupted by Salim El-Sayed's decision to change his deaf son's fate by inviting an Israeli doctor to perform a cochlear implant operation. This implanted bionic chip, which can make deaf people hear, is slowly reaching more secluded areas, even El-Sayed, which has neither paved roads nor electricity. "Voices from El-Sayed" is a unique and moving documentary offering intimate cinematic dialogue with El-Sayed's marvelous silent people and highlighting Israel's innovative way to heal the world.

Tuesday, November 13, 2012

New York Times article: The Science and Art of Listening

Published November 9, 2012

Here's a trick question. What do you hear right now? If your home is like mine, you hear the humming sound of a printer, the low throbbing of traffic from the nearby highway and the clatter of plastic followed by the muffled impact of paws landing on linoleum — meaning that the cat has once again tried to open the catnip container atop the fridge and succeeded only in knocking it to the kitchen floor.

The slight trick in the question is that, by asking you what you were hearing, I prompted your brain to take control of the sensory experience — and made you listen rather than just hear. That, in effect, is what happens when an event jumps out of the background enough to be perceived consciously rather than just being part of your auditory surroundings. The difference between the sense of hearing and the skill of listening is attention.

Hearing is a vastly underrated sense. We tend to think of the world as a place that we see, interacting with things and people based on how they look. Studies have shown that conscious thought takes place at about the same rate as visual recognition, requiring a significant fraction of a second per event. But hearing is a quantitatively faster sense. While it might take you a full second to notice something out of the corner of your eye, turn your head toward it, recognize it and respond to it, the same reaction to a new or sudden sound happens at least 10 times as fast.

This is because hearing has evolved as our alarm system — it operates out of line of sight and works even while you are asleep. And because there is no place in the universe that is totally silent, your auditory system has evolved a complex and automatic “volume control,” fine-tuned by development and experience, to keep most sounds off your cognitive radar unless they might be of use as a signal that something dangerous or wonderful is somewhere within the kilometer or so that your ears can detect.

This is where attention kicks in.

Attention is not some monolithic brain process. There are different types of attention, and they use different parts of the brain. The sudden loud noise that makes you jump activates the simplest type: the startle. A chain of five neurons from your ears to your spine takes that noise and converts it into a defensive response in a mere tenth of a second — elevating your heart rate, hunching your shoulders and making you cast around to see if whatever you heard is going to pounce and eat you. This simplest form of attention requires almost no brains at all and has been observed in every studied vertebrate.

More complex attention kicks in when you hear your name called from across a room or hear an unexpected birdcall from inside a subway station. This stimulus-directed attention is controlled by pathways through the temporoparietal and inferior frontal cortex regions, mostly in the right hemisphere — areas that process the raw, sensory input, but don’t concern themselves with what you should make of that sound. (Neuroscientists call this a “bottom-up” response.)

But when you actually pay attention to something you’re listening to, whether it is your favorite song or the cat meowing at dinnertime, a separate “top-down” pathway comes into play. Here, the signals are conveyed through a dorsal pathway in your cortex, part of the brain that does more computation, which lets you actively focus on what you’re hearing and tune out sights and sounds that aren’t as immediately important.

In this case, your brain works like a set of noise-suppressing headphones, with the bottom-up pathways acting as a switch to interrupt if something more urgent — say, an airplane engine dropping through your bathroom ceiling — grabs your attention.

Hearing, in short, is easy. You and every other vertebrate that hasn’t suffered some genetic, developmental or environmental accident have been doing it for hundreds of millions of years. It’s your lifeline, your alarm system, your way to escape danger and pass on your genes. But listening, really listening, is hard when potential distractions are leaping into your ears every fifty-thousandth of a second — and pathways in your brain are just waiting to interrupt your focus to warn you of any potential dangers.

Listening is a skill that we’re in danger of losing in a world of digital distraction and information overload. Find the full article here.

Monday, November 12, 2012

New York Times article: Imaginary Prizes Take Aim at Real Problems


Published November 8, 2012

IMAGINE putting up a prize of $20 million to inspire others to solve a particular problem. What would your challenge be?

In a series of interviews, one winner of the annual John D. and Catherine T. MacArthur Foundation fellowships was asked to propose her own $20 million challenge prize.

CHALLENGE: Use crowdsourcing to help the hearing-impaired

The paradox of America’s economy is that while it is hard for many people to find one paying job, almost everybody has several jobs they do free. We are bank tellers when we use the A.T.M., airline employees when we check ourselves in for flights and cashiers when we scan our items at the supermarket.

And we work on the cutting edge of technology, helping Google and Apple refine their voice recognition software each time we ask our phones to name the capital of Burkina Faso (it’s Ouagadougou) and follow up by asking, “How the heck do you pronounce that?”

Carol Padden, who is deaf and teaches communication at the University of California, San Diego, said she wanted to enlist volunteers to crowdsource a labor-intensive service: captioning video for the deaf and hard of hearing. Her $20 million prize would reward the person or team who devised an effective method to tap the power of the Internet to caption videos. She said this could involve “breaking down a video segment into very short one-minute clips which are sent out in the universe to be captioned by anyone. The short clips would be recombined to produce a captioned version of the original segment.”
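The clip-and-recombine workflow Ms. Padden describes can be sketched in a few lines of Python. This is only an illustration of the idea under assumed names: the functions `split_into_clips` and `recombine` and the `Clip` record are hypothetical, not part of any real captioning service.

```python
# Sketch of crowdsourced captioning: split a video into one-minute clips,
# let volunteers caption each clip independently, then reassemble the
# captions in clip order to cover the original segment.
from dataclasses import dataclass

CLIP_SECONDS = 60  # "very short one-minute clips"

@dataclass
class Clip:
    index: int    # position of the clip in the original video
    start: float  # offset into the video, in seconds
    end: float

def split_into_clips(duration: float) -> list[Clip]:
    """Break a video of `duration` seconds into one-minute clips."""
    clips = []
    start = 0.0
    while start < duration:
        end = min(start + CLIP_SECONDS, duration)
        clips.append(Clip(index=len(clips), start=start, end=end))
        start = end
    return clips

def recombine(captioned: dict[int, str], n_clips: int) -> str:
    """Reassemble volunteer captions in clip order; flag missing clips."""
    return " ".join(captioned.get(i, "[caption pending]") for i in range(n_clips))

# Usage: a 150-second video becomes three clips, captioned independently
# by different volunteers, then stitched back together.
clips = split_into_clips(150)
captions = {c.index: f"(captions for {c.start:.0f}-{c.end:.0f}s)" for c in clips}
print(recombine(captions, len(clips)))
```

Because each clip carries its index and time offsets, volunteers never need to see the whole video, and the recombination step is a simple ordered merge.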

Like many efforts initially aimed at helping those with disabilities, the project, Ms. Padden noted, would almost certainly have broader benefits. Parents pushing strollers, she said, are grateful for the curb cuts created for people in wheelchairs, just as patrons watching “Monday Night Football” in noisy bars count on closed captions to see what the announcers are saying.

Tuesday, November 6, 2012

New York Times Article: The Oops in the ‘O’ for Oregon

By: Isolde Raftery
Published: November 17, 2011

The Oregon players Josh Huff, center, and Eric Dungy, right, are filling their foreign language requirement by taking sign language.

If this makes some Ducks players blush, it is because many of them chose sign language to fulfill their foreign language requirement, and in sign language, the fans are saying — screaming, really — the word vagina.

Twenty-nine players on the team are enrolled in the university’s American Sign Language program. Their teacher delights in telling them the true meaning of the sign when they form a spade-shaped “O” with their hands.

“I did the ‘O’ once, and I never did it again,” said LaMichael James, the team’s star running back, who recently injured his right elbow. When discussing this, James spoke quietly so that those nearby would not hear. He would not make the sign. His elbow hurt, he demurred.

Older players recommended the sign language course, players said, because they found it engaging and intuitive — they had grown up using different signing systems on the field. A few players said sign language was a welcome alternative to Spanish, which had been a struggle in high school.

“A lot of people stereotype us and think we’re just sitting around and not doing anything,” said Dewitt Stuckey, a senior linebacker and second-year sign language student. “But in this class you have to pay attention. If not, you get completely lost.”

Stuckey, who said he wants to be a college counselor at a junior college, signed as he spoke. “It’s kind of rude for us not to sign when we talk,” he explained, motioning across the room to his teacher, Valentino Vasquez, who is deaf.

Another player mentioned Derrick Coleman, a running back at U.C.L.A., who is deaf. In fact, in 2009, an online news source for the deaf community counted at least 76 deaf and hard-of-hearing students who played in the N.C.A.A. Thirty-nine of them played for Division I teams.

Larson plans to tell her students that Gallaudet, a leading institution for the deaf in Washington, claims to have originated the football huddle. The story goes that on a blustery day in 1894, the team’s star player, Paul Hubbard, suspected that someone on the opposing team could read their signs and was anticipating their plays. Hubbard called for his teammates to form a circle. The huddle, at least in this version of its origin, was born.

Larson talks about football with her students in part because the sport is important in deaf culture, but also because she wants to reach the athletes. She first noticed large numbers of football players four years ago, when sign language was approved for the undergraduate foreign language requirement. Read the full article here.

New York Times Article: During Storm Updates, Eyes on an Interpreter

By Jeremy Peters
Published: October 30, 2012

Lydia Callis, left, a sign-language interpreter for Mayor Michael R. Bloomberg’s news briefings, has picked up a following.

The stories of devastation and destruction on the local news lately have not provided much in the way of relief — unless, that is, you happened to catch sight of a sign-language interpreter named Lydia Callis.

And she was pretty hard to miss. Ms. Callis has been a fixture at Mayor Michael R. Bloomberg’s news briefings, gesticulating, bobbing and nodding her way through the words of city officials as she communicates for the hearing-impaired.

Her expressiveness has caught the attention of the news media, and evidently the mayor himself, who now thanks her before almost everyone else as he prepares to give New Yorkers the latest updates on the storm.

Official news conferences in New York are often attended by sign-language interpreters. But they generally go unnoticed, blending in with the aides or elected officials who surround a mayor or a governor at such events.

Ms. Callis’s form makes it all but impossible not to notice her. With her smartly coifed short dark hair and sharp suits, she literally throws her whole body into signing, from her head to her hands to her hips.

She has inspired a tribute Tumblr page: Lydia Callis’s Face for Mayor, which has compiled images of her expressions as she signs. In one photo, Mr. Bloomberg looks on from behind, seemingly fixated on her hands.

New York magazine’s Web site called her “a legitimate reason to smile” amid all the grim news about the storm. Someone on YouTube set her signing to music, her gestures and jabs punctuating each beat.

On Twitter, she has been called hypnotizing, mesmerizing and a rock star. “I could watch her for hours,” one admirer wrote. “She needs to do sign language interpreting for everything everywhere forever,” another wrote.