Sunday, December 30, 2012

LETTER FROM THE PRESIDENT


December 30, 2012

Dear Capita Foundation researchers, supporters, and friends,

Our 2012 closes with considerable progress in hearing research.  We thank the Capita community and celebrate the achievements of hearing scientists around the globe.  We are proud to present this video of our hearing research community at San Diego's ARO meeting: Capita Foundation: Sound and Vision 2012.


Each year we receive remarkable grant applications for innovations that need support.  Funding is too limited to support all of the excellent projects this year.  Please see http://www.capitafoundation.org/programs.aspx, which lists the 2012 grant recipients.

Capita Foundation continues to be a resource for nimble scientific minds to create opportunity.  Recent projects include gene stimulation to grow hearing hair cells, studies of the auditory cortex interactions that improve speech in cochlear implant (CI) users, the use of computers to improve audiology tasks, and the prevention of sensory/hearing loss in elderly populations.

We honor all dedicated researchers in science and medicine who improve the quality of our hearing and the future of hearing recovery.

Wishing you the very best for health and happiness as we enter this new year.

Sincerely,

Robert E. Capita, President/CEO
Capita Foundation  


Thursday, December 20, 2012

Dr. William F. House, Inventor of Pioneering Ear-Implant Device, Dies at 89



By Douglas Martin
Published December 15, 2012 New York Times

Dr. House in 1981 with the first pre-school-age child to get a cochlear implant.
Dr. William F. House, a medical researcher who braved skepticism to invent the cochlear implant, an electronic device considered to be the first to restore a human sense, died on Dec. 7 at his home in Aurora, Ore. He was 89. The cause was metastatic melanoma, his daughter, Karen House, said.

 Dr. House pushed against conventional thinking throughout his career. Over the objections of some, he introduced the surgical microscope to ear surgery. Tackling a form of vertigo that doctors had believed was psychosomatic, he developed a surgical procedure that enabled the first American in space to travel to the moon. Peering at the bones of the inner ear, he found enrapturing beauty.

Even after his ear-implant device had largely been supplanted by more sophisticated, and more expensive, devices, Dr. House remained convinced of his own version’s utility and advocated that it be used to help the world’s poor.

Today, more than 200,000 people in the world have inner-ear implants, a third of them in the United States. A majority of young deaf children receive them, and most people with the implants learn to understand speech with no visual help.

Dr. House’s cochlear implant electronically translated sound into mechanical vibrations. His initial device, implanted in 1961, was eventually rejected by the body. But after refining its materials, he created a long-lasting version and implanted it in 1969.

He also developed the first surgical treatment for Meniere’s disease, which involves debilitating vertigo and had been viewed as a psychosomatic condition. His procedure cured the astronaut Alan B. Shepard Jr. of the disease, clearing him to command the Apollo 14 mission to the moon in 1971. In 1961, Shepard had become the first American launched into space.

In presenting Dr. House with an award in 1995, the American Academy of Otolaryngology-Head and Neck Surgery Foundation said, “He has developed more new concepts in otology than almost any other single person in history.”

Monday, December 17, 2012

The Scientific Power of Music


Music is powerful and has existed in all cultures throughout history. But why do humans find music so addictive and pleasurable?

Check out a short, neat video on the scientific power of music. 

Thursday, December 13, 2012

New Test To Better Understand Cause Of Childhood Deafness Within A Year

Published December 5, 2012
Medical News Today
A major advance in the diagnosis of inherited hearing loss has been made as a result of research funded by Action on Hearing Loss. A new genetic test has been piloted by scientists at the University of Antwerp that will ultimately make it possible to rapidly screen all known deafness genes to give a far more accurate diagnosis of the cause of a hearing loss.

The new test will help parents of a deaf child understand the chances of future siblings also being born deaf. Similar tests are also being developed at Great Ormond Street Hospital for Children, London and should be available to families by late 2013.

The findings, published today in the American Journal of Medical Genetics, show that by screening just 34 known deafness genes, an accurate diagnosis could be given in roughly half the cases. Ultimately, all known deafness genes could be screened for the same cost as it takes to test one or two genes today.

Professor Guy Van Camp, who led the project, said: "Using today's technology only a few of the many deafness genes can be routinely tested, which means that an accurate diagnosis can typically only be given in 10-20% of cases. Our new test uses advanced DNA sequencing technology that can in principle screen all known deafness genes at the same time."
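
To make the idea of panel-based screening concrete, here is a minimal, purely illustrative Python sketch (not the Antwerp group's actual pipeline) of what screening a fixed set of deafness genes amounts to once sequencing and variant annotation are done: keep only the variants that fall inside the panel and are classified as pathogenic. The gene names below are real deafness-associated genes, but the panel, the variant records, and the classifications are hypothetical placeholders.

```python
# Illustrative sketch only: filter annotated variants down to those that fall
# in a screened panel of known deafness genes and are flagged pathogenic.
# GJB2, SLC26A4, and MYO7A are real deafness-associated genes; the records
# and numbers below are made up for demonstration.

DEAFNESS_PANEL = {"GJB2", "SLC26A4", "MYO7A"}  # stand-in for the 34-gene panel

def candidate_diagnoses(variants):
    """Return variants that land in panel genes and are classified pathogenic."""
    return [v for v in variants
            if v["gene"] in DEAFNESS_PANEL and v["classification"] == "pathogenic"]

# Hypothetical output from an annotation pipeline:
variants = [
    {"gene": "GJB2",  "change": "c.35delG",      "classification": "pathogenic"},
    {"gene": "BRCA1", "change": "c.68_69delAG",  "classification": "pathogenic"},
    {"gene": "MYO7A", "change": "c.1200T>C",     "classification": "benign"},
]

print(candidate_diagnoses(variants))
# -> [{'gene': 'GJB2', 'change': 'c.35delG', 'classification': 'pathogenic'}]
```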

Dr Ralph Holme, Action on Hearing Loss's Head of Biomedical Research, said: "Knowing the cause of a child's deafness can also make it easier to predict how their hearing loss may change over time and help choose the most appropriate treatment or method of communication. This new test will also be very useful in providing a more accurate picture of the prevalence of different types of deafness affecting people across the UK."

For information about how Action on Hearing Loss is funding biomedical research to develop treatments to improve the everyday lives of people with hearing loss, click here.

Tuesday, December 4, 2012

Video: Just Listen Project explores the science of sound and the art of listening

Acoustic researchers are producing cutting-edge technologies and making discoveries that promise to change how we listen to and understand the world around us. The JUSTLISTENPROJECT will be the first to illuminate some of these advances for the public using giant-screen cinema, social media, and mobile apps—tools that make learning about science fun for people of all ages. 

Click here to view the video.

Monday, November 26, 2012

Deep Neural Networks for Acoustic Modeling in Speech Recognition

An overview of progress representing the shared views of four research groups who have had recent successes in using deep neural networks for acoustic modeling in speech recognition.

Geoffrey Hinton, Li Deng, Dong Yu, George Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara Sainath, and Brian Kingsbury -- Deep Neural Networks for Acoustic Modeling in Speech Recognition
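
For readers curious what "deep neural networks for acoustic modeling" means in practice, the sketch below is a minimal, hypothetical illustration of the core idea in the paper: a feed-forward network maps a window of acoustic feature frames to posterior probabilities over HMM states, replacing the Gaussian-mixture likelihoods of earlier recognizers. It is not the authors' implementation; the layer sizes are placeholders and the weights are untrained random values.

```python
# Minimal, hypothetical DNN acoustic model: a forward pass from a window of
# acoustic features to HMM-state posteriors. A real system would train the
# weights with backpropagation on labeled speech.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Illustrative dimensions: 11 context frames of 40 filterbank features in,
# posteriors over 2000 tied HMM states ("senones") out.
n_in, n_hidden, n_states = 11 * 40, 512, 2000
W1, b1 = rng.normal(scale=0.01, size=(n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.normal(scale=0.01, size=(n_states, n_hidden)), np.zeros(n_states)

def state_posteriors(feature_window):
    """Forward pass: acoustic feature window -> HMM-state posterior vector."""
    h = relu(W1 @ feature_window + b1)
    return softmax(W2 @ h + b2)

posteriors = state_posteriors(rng.normal(size=n_in))
print(posteriors.shape, posteriors.sum())   # (2000,) ~1.0
```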

The Guardian Article: "Feel the Music" project teaches deaf children a touch of Beethoven

The Mahler Chamber Orchestra and Norwegian pianist Leif Ove Andsnes teamed up with Paul Whittaker, a profoundly deaf musician, who runs the UK charity Music and the Deaf, to create Feel the Music. This project, a part of the MCO's Beethoven Journey concert series visiting 40 European cities between now and 2015, aims to open up the world of music to hearing-impaired children across Europe.  Part of the project examines the way Beethoven's own deafness greatly influenced his compositions.

Studies show musical vibrations can have as much of an impact on the brain as real sounds, and that exposing deaf children to music early on can stimulate their brain music centres. "It's very rare that deaf children get the chance to work together with professional musicians, and especially with an orchestra," Whittaker says. "Not only does it open up a new world to children with hearing handicaps, it also takes musicians out of their comfort zones and makes them think anew about how they hear and understand music."

After feeling the instruments as they are played, the children are offered a chance to explore a concert hall and experience a performance by the MCO and Andsnes from the heart of a fee-paying audience. Find the full article here. Learn more about Music and the Deaf.

Wednesday, November 14, 2012

Film: Voices from El-Sayed

Capita Foundation announces: FILM
What: "Voices From El-Sayed" is a unique and moving documentary offering an intimate cinematic dialogue with El-Sayed's marvelous silent people.
When: Wed. Nov. 14
Where: UCSD Cross Cultural Center, 2nd floor of the Price Center. Room Comunidad Lg
Who: Featuring a discussion after the film with director Oded Adomi Leshem and UCSD Professor of Communication Carol Padden. Co-sponsored by Tritons for Israel, JStreetU and Hillel.
Cost: Free
RSVP: Click HERE to RSVP on Facebook event
More:  Click HERE for a film preview or see the trailer below. In the picturesque Israeli Negev desert lies the Arab Bedouin village of El-Sayed. It has the largest percentage of deaf people in the world, yet no hearing aids can be seen, because in El-Sayed deafness is not a handicap. The tranquility of the village is interrupted by Salim El-Sayed's decision to change his deaf son's fate by inviting an Israeli doctor to perform a cochlear implant operation. This implanted bionic chip, which can make deaf people hear, is slowly reaching more secluded areas, even El-Sayed, which has neither paved roads nor electricity. "Voices from El-Sayed" is a unique and moving documentary offering intimate cinematic dialogue with El-Sayed's marvelous silent people and highlighting Israel's innovative way to heal the world.

Tuesday, November 13, 2012

New York Times article: The Science and Art of Listening


By SETH S. HOROWITZ,
Published November 9, 2012


Here's a trick question. What do you hear right now? If your home is like mine, you hear the humming sound of a printer, the low throbbing of traffic from the nearby highway and the clatter of plastic followed by the muffled impact of paws landing on linoleum — meaning that the cat has once again tried to open the catnip container atop the fridge and succeeded only in knocking it to the kitchen floor.

The slight trick in the question is that, by asking you what you were hearing, I prompted your brain to take control of the sensory experience — and made you listen rather than just hear. That, in effect, is what happens when an event jumps out of the background enough to be perceived consciously rather than just being part of your auditory surroundings. The difference between the sense of hearing and the skill of listening is attention.

Hearing is a vastly underrated sense. We tend to think of the world as a place that we see, interacting with things and people based on how they look. Studies have shown that conscious thought takes place at about the same rate as visual recognition, requiring a significant fraction of a second per event. But hearing is a quantitatively faster sense. While it might take you a full second to notice something out of the corner of your eye, turn your head toward it, recognize it and respond to it, the same reaction to a new or sudden sound happens at least 10 times as fast.

This is because hearing has evolved as our alarm system — it operates out of line of sight and works even while you are asleep. And because there is no place in the universe that is totally silent, your auditory system has evolved a complex and automatic “volume control,” fine-tuned by development and experience, to keep most sounds off your cognitive radar unless they might be of use as a signal that something dangerous or wonderful is somewhere within the kilometer or so that your ears can detect.

This is where attention kicks in.

Attention is not some monolithic brain process. There are different types of attention, and they use different parts of the brain. The sudden loud noise that makes you jump activates the simplest type: the startle. A chain of five neurons from your ears to your spine takes that noise and converts it into a defensive response in a mere tenth of a second — elevating your heart rate, hunching your shoulders and making you cast around to see if whatever you heard is going to pounce and eat you. This simplest form of attention requires almost no brains at all and has been observed in every studied vertebrate.

More complex attention kicks in when you hear your name called from across a room or hear an unexpected birdcall from inside a subway station. This stimulus-directed attention is controlled by pathways through the temporoparietal and inferior frontal cortex regions, mostly in the right hemisphere — areas that process the raw, sensory input, but don’t concern themselves with what you should make of that sound. (Neuroscientists call this a “bottom-up” response.)

But when you actually pay attention to something you’re listening to, whether it is your favorite song or the cat meowing at dinnertime, a separate “top-down” pathway comes into play. Here, the signals are conveyed through a dorsal pathway in your cortex, part of the brain that does more computation, which lets you actively focus on what you’re hearing and tune out sights and sounds that aren’t as immediately important.

In this case, your brain works like a set of noise-suppressing headphones, with the bottom-up pathways acting as a switch to interrupt if something more urgent — say, an airplane engine dropping through your bathroom ceiling — grabs your attention.

Hearing, in short, is easy. You and every other vertebrate that hasn’t suffered some genetic, developmental or environmental accident have been doing it for hundreds of millions of years. It’s your life line, your alarm system, your way to escape danger and pass on your genes. But listening, really listening, is hard when potential distractions are leaping into your ears every fifty-thousandth of a second — and pathways in your brain are just waiting to interrupt your focus to warn you of any potential dangers.

Listening is a skill that we’re in danger of losing in a world of digital distraction and information overload. Find the full article here.

Monday, November 12, 2012

New York Times article: Imaginary Prizes Take Aim at Real Problems



J. PEDER ZANE
Published November 8, 2012

IMAGINE putting up a prize of $20 million to inspire others to solve a particular problem. What would your challenge be?

In a series of interviews, one winner of the annual John D. and Catherine T. MacArthur Foundation fellowships was asked to propose her own $20 million challenge prize.

CHALLENGE: Use crowdsourcing to help the hearing-impaired

The paradox of America’s economy is that while it is hard for many people to find one paying job, almost everybody has several they do free. We are bank tellers when we use the A.T.M., airline employees when we check ourselves in for flights and cashiers when we scan our items at the supermarket.

And we work on the cutting edge of technology, helping Google and Apple refine their voice recognition software each time we ask our phones to name the capital of Burkina Faso (it’s Ouagadougou) and follow up by asking, “How the heck do you pronounce that?”

Carol Padden, who is deaf and teaches communication at the University of California, San Diego, said she wanted to enlist volunteers to crowdsource a labor-intensive service: captioning video for the deaf and hard of hearing. Her $20 million prize would reward the person or team who devised an effective method to tap the power of the Internet to caption videos. She said this could involve “breaking down a video segment into very short one-minute clips which are sent out in the universe to be captioned by anyone. The short clips would be recombined to produce a captioned version of the original segment.”
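
As a rough illustration of the split-and-recombine idea (a sketch of the concept only, not any official design from Ms. Padden), the Python snippet below divides a video's timeline into one-minute clips, accepts clip-local captions from volunteers, and shifts the timestamps back onto the full video's timeline when recombining.

```python
# Illustrative sketch: split a video into one-minute clips for volunteer
# captioning, then merge the clip-local captions back onto the full timeline.

CLIP_SECONDS = 60

def split_into_clips(video_seconds):
    """Return (start, end) boundaries, in seconds, for one-minute clips."""
    return [(t, min(t + CLIP_SECONDS, video_seconds))
            for t in range(0, video_seconds, CLIP_SECONDS)]

def recombine(clip_captions):
    """Merge clip-local captions [(clip_start, [(t0, t1, text), ...]), ...]
    into a single caption track on the original video's timeline."""
    track = []
    for clip_start, captions in clip_captions:
        for t0, t1, text in captions:
            track.append((clip_start + t0, clip_start + t1, text))
    return sorted(track)

# Hypothetical volunteer output for a 150-second video:
clips = split_into_clips(150)            # [(0, 60), (60, 120), (120, 150)]
volunteer_work = [
    (0,   [(2.0, 4.5, "Welcome back, everyone.")]),
    (60,  [(1.0, 3.0, "Let's look at the replay.")]),
    (120, [(5.0, 8.0, "And that's the game.")]),
]
print(recombine(volunteer_work))
```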

Ms. Padden noted that, like many efforts initially aimed at helping those with disabilities, the project would almost certainly have broader benefits. Parents pushing strollers, she said, are grateful for the curb cuts created for people in wheelchairs, just as patrons watching “Monday Night Football” in noisy bars count on closed captions to see what the announcers are saying.

Tuesday, November 6, 2012

New York Times Article: The Oops in the ‘O’ for Oregon

By: Isolde Raftery
Published: November 17, 2011

The Oregon players Josh Huff, center, and Eric Dungy, right, are filling their foreign language requirement by taking sign language.

If this makes some Ducks players blush, it is because many of them chose sign language to fulfill their foreign language requirement, and in sign language, the fans are saying — screaming, really — the word vagina.

Twenty-nine players on the team are enrolled in the university’s American Sign Language program. Their teacher delights in telling them the true meaning of the sign when they form a spade-shaped “O” with their hands.

 “I did the ‘O’ once, and I never did it again,” said LaMichael James, the team’s star running back, who recently injured his right elbow. When discussing this, James spoke quietly so that those nearby would not hear. He would not make the sign. His elbow hurt, he demurred.

Older players recommended the sign language course, players said, because they found it engaging and intuitive — they had grown up using different signing systems on the field. A few players said sign language was a welcome alternative to Spanish, which had been a struggle in high school.

“A lot of people stereotype us and think we’re just sitting around and not doing anything,” said Dewitt Stuckey, a senior linebacker and second-year sign language student. “But in this class you have to pay attention. If not, you get completely lost.”

Stuckey, who said he wants to be a college counselor at a junior college, signed as he spoke. “It’s kind of rude for us not to sign when we talk,” he explained, motioning across the room to his teacher, Valentino Vasquez, who is deaf.

Another player mentioned Derrick Coleman, a running back at U.C.L.A., who is deaf. In fact, in 2009, Deafdigest.net, an online news source for the deaf community, counted at least 76 deaf and hard-of-hearing students who played in the N.C.A.A. Thirty-nine of them played for Division I teams.

Larson plans to tell her students that Gallaudet, a leading institution for the deaf in Washington, claims to have originated the football huddle. The story goes that on a blustery day in 1894, the team’s star player, Paul Hubbard, suspected that someone on the opposing team could read their signs and was anticipating their plays. Hubbard called for his teammates to form a circle. The huddle, at least in this version of its origin, was born.

Larson talks about football with her students in part because the sport is important in deaf culture, but also because she wants to reach the athletes. She first noticed large numbers of football players four years ago, when sign language was approved for the undergraduate foreign language requirement. Read the full article here.

New York Times Article: During Storm Updates, Eyes on an Interpreter

By Jeremy Peters
Published: October 30, 2012

Lydia Callis, left, a sign-language interpreter for Mayor Michael R. Bloomberg’s news briefings, has picked up a following.

The stories of devastation and destruction on the local news lately have not provided much in the way of relief — unless, that is, you happened to catch sight of a sign-language interpreter named Lydia Callis.

And she was pretty hard to miss. Ms. Callis has been a fixture at Mayor Michael R. Bloomberg’s news briefings, gesticulating, bobbing and nodding her way through the words of city officials as she communicates for the hearing-impaired.

Her expressiveness has caught the attention of the news media, and evidently the mayor himself, who now thanks her before almost everyone else as he prepares to give New Yorkers the latest updates on the storm.

Official news conferences in New York are often attended by sign-language interpreters. But they generally go unnoticed, blending in with the aides or elected officials that surround a mayor or a governor at such events.

Ms. Callis’s form makes it all but impossible not to notice her. With her smartly coifed short dark hair and sharp suits, she literally throws her whole body into signing, from her head to her hands to her hips.

She has inspired a tribute Tumblr page: Lydia Callis’s Face for Mayor, which has compiled images of her expressions as she signs. In one photo, Mr. Bloomberg looks on from behind, seemingly fixated on her hands.

New York magazine’s Web site called her “a legitimate reason to smile” amid all the grim news about the storm. Someone on YouTube set her signing to music, her gestures and jabs punctuating each beat.

On Twitter, she has been called hypnotizing, mesmerizing and a rock star. “I could watch her for hours,” one admirer wrote. “She needs to do sign language interpreting for everything everywhere forever,” another wrote.

Wednesday, October 3, 2012

Founder's Day Lecture

FOUNDER’S DAY LECTURE

Tuesday, January 8, 2013

4 pm

CNCB Large Conference Room (formerly CMG), UCSD, La Jolla, CA

Ed Rubel, Ph.D.

University of Washington

Fish in a dish: discovering genetic and chemical modulators of inner ear hair cell death

The Founder’s Day lectures honor the founders of the U.C.S.D. Neurosciences Department, Drs. Robert B. Livingston M.D., Theodore H. Bullock Ph.D. and Robert Galambos M.D., Ph.D.  In 1965 Dr. Livingston became the founding Chair of what was then the world’s first Neurosciences Department.   He organized the Department according to his then-radical vision of bringing together all the sub-disciplines of neuroscience from the molecular to the cognitive and clinical with the aim of fostering multi-level collaborations.  The first two professors that Livingston hired were Ted Bullock, a distinguished neuroethologist and comparative physiologist, and Bob Galambos, a pioneer in auditory neurophysiology.  Together, these far-sighted scientists laid the foundations and guiding principles on which the Neurosciences Department and Neurosciences Graduate Program have grown and flourished.   

Please save the date

Thursday, August 30, 2012

American Sign Language Interpreted Tour | Museum of Photographic Arts

Museum of Photographic Arts
American Sign Language Interpreted Tour
09.15.2012 1:00 pm

Join MOPA for an American Sign Language interpreted guided tour. If you would like more information about the tour, please contact Jazmyne Lemar at lemar@mopa.org or 619-238-7559 x230.
Group rate for this event is $4.00.

Three Story House
Drawn from MOPA’s photography collection of more than 7,000 images, Three Story House traces how photographers have captured the familiarity of the domestic environment to tell stories of how we live and where we live, as well as transforming it into a creative space to make art.

The Jazz Loft Project: W. Eugene Smith in NYC, 1957-1965
In 1957, W. Eugene Smith, a former photographer at Life magazine, moved out of the home he shared with his wife and four children in Croton-on-Hudson, New York and moved into a dilapidated, five-story loft building at 821 Sixth Avenue in New York City’s wholesale flower district. 821 Sixth Avenue was a late-night haunt of musicians, including some of the biggest names in jazz—Charles Mingus, Zoot Sims, Bill Evans, and Thelonious Monk among them—and countless fascinating, underground characters.

http://www.mopa.org/event/2012-09-15/american-sign-language-interpreted-tour 

Thursday, August 23, 2012

Medical News Today Article: "What Is Deafness? What Is Hearing Loss?"




Medical News Today has released an article that not only defines the anatomy of sound but also offers insight into the differences between hearing loss and deafness. The article also explores the different types of hearing loss, whether hearing impairment can be prevented, the signs and symptoms of hearing loss, and what treatment options are currently available.

To read the full article, please visit: http://www.medicalnewstoday.com/articles/249285.php 


Robert Capita visits Hearing Research Institute

 
President Robert Capita at Hearing Research Institute (formerly House Ear Institute) last month with Principal Investigator Ray Goldsworthy and a summer intern.


 
President Robert Capita with Bob Shannon, Director/Division of Communication and Auditory Neuroscience; Neil Segil, Executive Vice President/Research; and Rahda Kalleri, Principal Investigator/Division of Communication and Auditory Neuroscience.