Friday, December 13, 2013

2013 CFAR grant recipients


Announcing the 2013 Capita Foundation Auditory Research Grant Recipients


Alain Dabdoub, Ph.D.
University of Toronto
 
Project:  “Induction of Cochlear Neurons by Defined Transcription Factors”
Auditory neurons (ANs) play a critical role in hearing: they transmit sound information from the inner ear to the brain, and their progressive degeneration is associated with disease, excessive noise and aging.  Like most neurons in the brain, ANs lack the ability to regenerate.  The loss of these cells therefore leads to permanent hearing impairment, and methods for inducing neuron replacement and regeneration have yet to be fully developed.


Daniel Bendor, Ph.D.
University College London

 

Project:  “Optimizing the Encoding of Temporal Information in an Auditory Cortical Prosthetic”

James Simmons, Ph.D.
Brown University
 
Project:  “A Novel Biological Model for Protection from Noise-Induced Hearing Loss”
Bats live in close proximity to many other bats, and they regularly undergo prolonged exposure to intense sound at 80-110 dB SPL.  We found that big brown bats do not experience temporary or permanent threshold shifts (TTS, PTS) after exposure to intense noise at levels and durations that cause massive hearing losses in other mammals such as humans, monkeys, cats, mice, and gerbils.  Could big brown bats provide a key to successful noise protection in humans?
Kazuaki Homma, Ph.D.
Northwestern University
 
Project: “Investigating Prestin’s Role in Outer Hair Cell Survival”
The objective of this study is to investigate how prestin contributes to survival of outer hair cells, the outcome of which could allow development of a novel strategy to reduce hearing impairment.
 
Lina Reiss, Ph.D.
Oregon Health & Science University
 
Project:  “Effects of Changing Frequency-to-Electrode Maps on Electrode Pitch Plasticity and Discrimination with Cochlear Implants”
Our laboratory recently demonstrated that pitch perceived through a cochlear implant (CI) can change over time by as much as 2-3 octaves, and that these changes depend on how the CI is programmed.  We hypothesize that a novel method of CI programming will lead to increased pitch differences between electrodes, and that this increased pitch separation will improve both electrode discrimination and speech perception in noise.

Marc Bassim, Ph.D.
American University of Beirut – Medical Center 

Project:  “Congenital Hearing Loss in the Middle East Area: Generating Patient-Specific iPS”
The purpose of this project is to model the pathological processes of Congenital Hearing Loss in vitro. This will be done through establishing iPS cells from patients with Congenital Hearing Loss belonging to a highly consanguineous population and inducing their differentiation into sensorineural cells.

Friday, September 13, 2013

Lasker Foundation Awards for Innovative Hearing Discoveries

Congratulations to Dr. Graeme M. Clark of the University of Melbourne in Australia, Dr. Ingeborg Hochmair of Med-El in Innsbruck, Austria, and Blake S. Wilson of Duke University in North Carolina for their work in developing the modern cochlear implant.  

The Albert and Mary Lasker Foundation recognized these fearless scientists with its award in Clinical Medical Research, often termed an “American Nobel”, totaling $250,000. The scientists’ cochlear devices use electrical stimuli to bypass damaged hair cells and directly stimulate the auditory nerve, which conveys messages to the brain for processing as hearing. Hearing has been restored through their efforts, despite public skepticism along the way.


Dr. Graeme M. Clark
Dr. Ingeborg Hochmair

Blake S. Wilson

Thursday, July 18, 2013

Playhouse Reteams New York Cast for 'Tribes'


Published June 22, 2013 UT-San Diego

When Nina Raine’s 2010 drama “Tribes” opens at La Jolla Playhouse this week, it will feature the same director and cast from its off-Broadway premiere last year. Leading the cast is Russell Harvard, a Texas-bred actor who stars as Billy, a deaf son in an intellectual British family who battle for attention over the dinner table. Billy does his best to keep up with his family’s verbal sparring by lip-reading, but he’s isolated from the world until he finds a home in a new tribe — the deaf community.

Harvard, 32, won a 2012 Theatre World Award for his performance as Billy in New York, and he recently reprised the role at the Mark Taper Forum in L.A. His film credits include playing the adult deaf son of Daniel Day-Lewis’ character in 2007’s “There Will Be Blood” and deaf mixed martial arts fighter Matt Hamill in 2010’s “The Hammer.”

Harvard is one of two deaf sons born to deaf parents. He communicates by sign language, but with hearing aids he can hear music and stage commands. In a recent email interview from his home in New York, Harvard talked about his life, his career and “Tribes.”

Q: The play “Tribes” relates to the different societal groupings that occur between the hearing and deaf worlds. But are there different tribes within the deaf community, and do you belong to a specific tribe?

A: Yes I do. In my experience, there are two main tribes. My family comes from a “D” or Deaf family. We’re like an everyday family with just the language of American Sign Language, culture and arts. There’s the “d” deaf that consists of deaf and hard-of-hearing people that does or doesn’t use the language and isn’t just involved in the (deaf) community.

Q: Tell me about your childhood.

A: I was born in Pasadena, Texas, in the backyard of my grandmother’s guesthouse. I come from a third-generation Deaf family. We moved to Austin when I was about a year old. During my childhood, my parents, brother and I communicated in American Sign Language. Difficulty only existed when my brother and I fought, almost about everything.

Q: Did you ever feel the sense of isolation like Billy does in the play?

A: I have, at times. Not all my hearing friends know ASL. There was a time when my hearing friend who knows how to sign would forget to interpret for me when we were in a big group of friends just chatting the night away. Then I’m lost in translation, which sometimes leads to paranoia. Then I think I shouldn’t even be here. It gets frustrating and hurtful when some friends don’t realize they aren’t accommodating me in the conversation. At the same time, I understand that there’s a learning curve and try to see it from their perspective. I have family members and friends who feel the same way being isolated. I’ve also heard that CODAs (children of deaf adults), who are born to deaf parents and have deaf brothers or sisters, sometimes feel left out because no one voices in the family and they get left out.

Q: How accurately does the play reflect real life for the deaf or hearing-impaired?

A: It’s accurate, yes. Billy was well portrayed as a deaf character, and the information about the deaf community was precise. My brother in the show, “Daniel,” and his psychological issues were honest because the family in “Tribes” captures the essence of Nina Raine’s family.

Q: Some of the roles you’ve played in films focus on your isolation from the hearing world. Do you think this is reflective of the reality of being deaf in a hearing world, or is this the perceived reality that hearing playwrights and screenwriters write for deaf characters?

A: I think screenwriters who write characters in fictional screenplays … portray a perceived reality with knowledge they received or researched about being deaf. They could be close to the reflective reality. However, nonfictional movies such as “Dummy Hoy” or “The Hammer,” for instance, are more reflective of the reality.

Q: Tell me about the experience with “Tribes” and working with director David Cromer.

A: It’s a wild ride I’ve been on, I have to say. I am fortunate to be working with a talented and incredible team of actors, directors and crews of “Tribes.” I am humbly blessed and proud of how far we’ve come. David is and always will be brilliant. He has these tremendous values and beliefs for the art of theater. He finds the truth in this production. I am very lucky to be working with him at La Jolla Playhouse, and I would work for him again in a heartbeat.

Q: What have your audiences been like for “Tribes”?

A: Not once have I heard bad things about this play. Occasionally someone from the deaf audience said they wish the whole play was subtitled on stage and not captioned offstage. They loved the idea of seeing the texts in the show so they don’t lose focus by looking at the caption and not at the actors. A hearing woman in New York City came to me after the matinee show and said, “I felt like I was in the show, like I was a part of the family. The mom and dad — how dare they neglect their children.”

Q: Tell me about the hurdles you’ve faced as a deaf actor.

A: I hate to say deafness is a factor. I really hope it isn’t. It shouldn’t be. I’ve said this before, but we do need fearless writers to write more roles for actors who happened to be deaf. Another factor would be my height. I’ve been told I’m too tall for roles I’ve auditioned for. I scored one on Fox Television’s “Fringe,” but the other one I didn’t. This (one) artistic director told me I should chop off parts of my legs for the role of Pippin.

Q: What are some dream roles on your bucket list?

A: I’ve got some crazy ideas, like competing on “Wipeout” or having a deaf tribe in “The Walking Dead.” I would love to play a creature on one of Syfy network’s television shows — “American Horror Story” and “Being Human” among my favorite television shows. A main or recurring role would be exciting. I hope to take motorcycle lessons after this show, and I am considering attending a school of bartending. I’ve been researching about a well-known deaf man who has saved more than 930 lives. I hope to make a movie about his life.

pam.kragen@utsandiego.com

Monday, July 15, 2013

Why We Hear Music the Way We Do



The magnificent Vi Hart — mathemusician extraordinaire, who has previously stop-motion-doodled our way to understanding such mysteries as space-time, Möbius strips, Fibonacci numbers, and the science of sound, frequency, and pitch — is back with another gem, this time illuminating Stravinsky’s atonal composition for Edward Lear’s classic nonsense poem, “The Owl and the Pussycat.” Stravinsky actually borrowed the basis for his composition from the 12-tone technique Arnold Schoenberg invented, which Hart explains as well. Enjoy, and keep an eye open for Hart’s delightful sideways sleight against the brokenness of copyright law, one that would’ve actually left Stravinsky particularly miffed.
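For readers wondering what the 12-tone technique Hart explains actually involves, here is a minimal illustrative sketch (ours, not from the video): a piece is built from a fixed ordering, or "row," of all twelve pitch classes, which the composer then reuses in transformed forms such as retrograde, inversion, and transposition.

```python
import random

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def random_tone_row():
    """A tone row is simply an ordering (permutation) of the twelve pitch classes."""
    row = list(range(12))
    random.shuffle(row)
    return row

def retrograde(row):
    """Play the row backwards."""
    return row[::-1]

def inversion(row):
    """Mirror each interval around the first note (mod 12)."""
    first = row[0]
    return [(first - (p - first)) % 12 for p in row]

def transpose(row, semitones):
    """Shift every note by the same number of semitones (mod 12)."""
    return [(p + semitones) % 12 for p in row]

def names(row):
    return " ".join(PITCH_CLASSES[p] for p in row)

if __name__ == "__main__":
    row = random_tone_row()
    print("Prime:     ", names(row))
    print("Retrograde:", names(retrograde(row)))
    print("Inversion: ", names(inversion(row)))
    print("Transposed:", names(transpose(row, 5)))
```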

Monday, June 17, 2013

Article: Why Music Makes our Brain Sing


Published June 7, 2013 in The New York Times
By ROBERT J. ZATORRE and VALORIE N. SALIMPOOR

MUSIC is not tangible. You can’t eat it, drink it or mate with it. It doesn’t protect against the rain, wind or cold. It doesn’t vanquish predators or mend broken bones. And yet humans have always prized music — or well beyond prized, loved it.

In the modern age we spend great sums of money to attend concerts, download music files, play instruments and listen to our favorite artists whether we’re in a subway or salon. But even in Paleolithic times, people invested significant time and effort to create music, as the discovery of flutes carved from animal bones would suggest.

So why does this thingless “thing” — at its core, a mere sequence of sounds — hold such potentially enormous intrinsic value?

The quick and easy explanation is that music brings a unique pleasure to humans. Of course, that still leaves the question of why. But for that, neuroscience is starting to provide some answers.

More than a decade ago, our research team used brain imaging to show that music that people described as highly emotional engaged the reward system deep in their brains — activating subcortical nuclei known to be important in reward, motivation and emotion. Subsequently we found that listening to what might be called “peak emotional moments” in music — that moment when you feel a “chill” of pleasure to a musical passage — causes the release of the neurotransmitter dopamine, an essential signaling molecule in the brain.

When pleasurable music is heard, dopamine is released in the striatum — an ancient part of the brain found in other vertebrates as well — which is known to respond to naturally rewarding stimuli like food and sex and which is artificially targeted by drugs like cocaine and amphetamine.

The idea that reward is partly related to anticipation (or the prediction of a desired outcome) has a long history in neuroscience. Making good predictions about the outcome of one’s actions would seem to be essential in the context of survival, after all. And dopamine neurons, both in humans and other animals, play a role in recording which of our predictions turn out to be correct. Read full article here.
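The prediction idea mentioned above is commonly formalized as a reward-prediction error: dopamine activity reflects the gap between the reward you expected and the reward you received. A minimal sketch of that textbook update rule (our illustration, not the authors' model):

```python
def update_expectation(expected_reward, actual_reward, learning_rate=0.1):
    """Move the expectation toward reality in proportion to the surprise
    (the reward-prediction error)."""
    prediction_error = actual_reward - expected_reward  # the "surprise" signal
    return expected_reward + learning_rate * prediction_error

# Example: a musical passage keeps resolving better than the listener expects.
expectation = 0.0
for _ in range(10):
    expectation = update_expectation(expectation, actual_reward=1.0)
    print(round(expectation, 3))  # creeps toward 1.0 as the surprise shrinks
```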



Friday, June 7, 2013

Field Trip to Copley Symphony Hall



With the support of the San Diego Symphony and the Capita Foundation, 200 students and families from Maryland Avenue Elementary School in La Mesa were given tickets and transportation to hear the world-famous Romeros, the “Royal Family of the Guitar,” perform with the orchestra on May 16, 2013.

Monday, May 20, 2013

Article: Imaginary Prizes Take Aim at Real Problems

By J. PEDER ZANE
Published: November 8, 2012 New York Times


IMAGINE putting up a prize of $20 million to inspire others to solve a particular problem. What would your challenge be?

Some of the world’s leading companies, including Google, Qualcomm and Nokia, have sponsored big-money contests challenging competitors around the world to design a host of wonders, including robots that can explore the moon, superefficient electric vehicles and more accurate methods for sequencing the human genome. The online movie streaming company Netflix awarded $1 million to a winning team of outsiders that helped it develop better ways to predict which films its customers would like.

Carol Padden, 2010 Fellow

CHALLENGE Use crowdsourcing to help the hearing-impaired
The paradox of America’s economy is that while it is hard for many people to find one paying job, almost everybody has several jobs they do for free. We are bank tellers when we use the A.T.M., airline employees when we check ourselves in for flights and cashiers when we scan our items at the supermarket.

And we work on the cutting edge of technology, helping Google and Apple refine their voice recognition software each time we ask our phones to name the capital of Burkina Faso (it’s Ouagadougou) and follow up by asking, “How the heck do you pronounce that?”

Carol Padden, who is deaf and teaches communication at the University of California, San Diego, said she wanted to enlist volunteers to crowdsource a labor-intensive service: captioning video for the deaf and hard of hearing. Her $20 million prize would reward the person or team who devised an effective method to tap the power of the Internet to caption videos. She said this could involve “breaking down a video segment into very short one-minute clips which are sent out in the universe to be captioned by anyone. The short clips would be recombined to produce a captioned version of the original segment.”
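As a rough sketch of the chunk-and-recombine idea Ms. Padden describes (our illustration with hypothetical helper names, not her proposal's actual design):

```python
def split_into_clips(video_duration_s, clip_length_s=60):
    """Break a video into roughly one-minute segments identified by (start, end) times."""
    starts = range(0, int(video_duration_s), clip_length_s)
    return [(s, min(s + clip_length_s, video_duration_s)) for s in starts]

def recombine_captions(clip_captions):
    """Merge per-clip captions back into one transcript, ordered by clip start time.
    clip_captions maps (start, end) -> caption text contributed by a volunteer."""
    return "\n".join(text for (_start, _end), text in sorted(clip_captions.items()))

# Example: a 150-second video split into three clips, each captioned by a different volunteer.
clips = split_into_clips(150)
captions = {clips[0]: "Welcome to today's lecture.",
            clips[1]: "Curb cuts help more people than wheelchair users.",
            clips[2]: "Thanks for watching."}
print(recombine_captions(captions))
```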

Ms. Padden noted that, like many efforts initially aimed at helping those with disabilities, the project would almost certainly have broader benefits. Parents pushing strollers, for example, are grateful for the curb cuts created for people in wheelchairs, just as patrons watching “Monday Night Football” in noisy bars count on closed captions to see what the announcers are saying.


Friday, May 10, 2013

Article: Road Traffic Noise and Diabetes: Long-Term Exposure May Increase Disease Risk


Noise from honking cars and police sirens can disrupt sleep, but it also may increase the chance of developing diabetes, according to a large study from Denmark.

The researchers compared noise levels from road traffic to the incidence of diabetes in 57,000 people. As the noise levels increased, so did the risk of developing the disease: the risk rose by 8 to 11 percent for every 10-decibel (dB) increase in road noise. A decibel is a measure of the loudness and intensity of sound.
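For context, the decibel scale is logarithmic, so a 10 dB increase is not a 10 percent increase in sound energy; it corresponds to roughly ten times the acoustic intensity. A quick check of that arithmetic (ours, not the study's):

```python
def intensity_ratio(db_increase):
    """Sound intensity scales as 10^(dB / 10)."""
    return 10 ** (db_increase / 10)

print(intensity_ratio(10))  # 10.0: a road that is 10 dB louder carries ~10x the acoustic power
print(intensity_ratio(3))   # ~2.0: every 3 dB roughly doubles the intensity
```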

The results suggest that living near heavily traveled roads may increase the risk of developing diabetes. To make sure they were measuring effects from noise, the researchers adjusted for several other variables associated with diabetes, including body mass index, education, lifestyle characteristics and nitrogen oxides, which are formed from vehicle exhaust and are known to increase the risk of the disease.

The results have important implications for urban planning. As major cities attempt to increase urban density, more people may live closer to heavier traffic and noisier roads. Further, people with low incomes typically live closer to major roads and highways, putting them at greater risk.

More here.

Friday, April 5, 2013

UPDATE FROM 2011 CAPITA FOUNDATION AUDITORY RESEARCH GRANT RECIPIENT

Dr. Carol Lee De Filippo
Professor at the National Technical Institute for the Deaf, Rochester, NY

Improving Speech Perception in Prelingually Deaf Adult Listeners: Exploring a Novel Training Concept


Audiologists at the National Technical Institute for the Deaf (NTID), located at Rochester Institute of Technology (RIT), have designed a novel audiovisual speech training strategy for adults born deaf (prelingually deaf adults) who obtained cochlear implants (CIs) beyond the critical period for auditory stimulation. The rehabilitative program examines the hypothesis that fading dominant visual speech cues will trigger neuroplasticity by fostering useful sensory integration of the visual and acoustic components of spoken language. Initial findings have implications for the efficacy of the strategy with prelingually deaf adults who have not benefitted from traditional auditory training.

Current CIs can produce dramatic speech perception benefits: in adults who lose all or most of their hearing later in life, by restoring hearing through already-developed audiovisual pathways, and in early-implanted children, by capitalizing on developmental plasticity. In contrast, prelingually deaf adult CI recipients typically have limited speech recognition skills and remain visually dominant, both behaviorally and neurologically.

“Life-long dependence on lipreading prior to implant may be one reason for the continued use of visual cortex for processing of speech, even post-implant. Thus, typical auditory training (listening only) is often frustrating, resulting in slow progress and highly varied outcomes, although exceptional cases of open-set speech recognition suggest that learning-dependent plasticity is possible,” says Dr. Carol De Filippo, NTID/RIT professor and PI for the study.

For the study, Dr. De Filippo and colleague Dr. Catherine Clark recruited prelingually deaf adults who perceived inadequate benefit from their CIs and were interested in a new rehabilitative strategy. One older adult and 7 young adults participated. Subjects completed 4 to 9 blocks of training (3,240 to 7,920 trials) over 3 weeks.  On each trial, they viewed a head-only audiovisual clip of one of 6 talkers speaking a vowel-consonant-vowel syllable (b, d, or g; with ah, ee, or oo) in one of 5 conditions, including the original (no added effects) and 4 edited clips that progressively obscured lipreading cues. Training used a three-alternative forced-choice task (“B”, “D”, or “G”) with feedback. As expected, performance decreased in the degraded conditions, indicating that the training materials were effective in requiring attention to auditory cues.  As training progressed, all subjects improved: by the last block, each scored better in the most degraded condition (auditory-only) for at least 2 of the 3 consonants. As a group, they also demonstrated a greater listening advantage on one or all of the consonants, ranging up to a 31% improvement in the clear audiovisual condition by the end of the study. Gains of this size are rarely seen in prelingually deaf adult listeners, particularly over such a short period (3 weeks). The ultimate goal of this work is to develop a unique clinical intervention that can evoke beneficial change in prelingually deaf adult CI users.
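For readers unfamiliar with the paradigm, a three-alternative forced-choice trial with feedback looks roughly like this in code (a minimal sketch of the general task, not the study's actual training software):

```python
import random

CONSONANTS = ["B", "D", "G"]

def run_trial(present_stimulus, get_response):
    """One forced-choice trial: present a syllable, collect a choice, give feedback."""
    target = random.choice(CONSONANTS)
    present_stimulus(target)             # e.g., play the audiovisual clip for the target
    response = get_response(CONSONANTS)  # listener picks "B", "D", or "G"
    correct = (response == target)
    print("Correct!" if correct else f"Not quite, that was {target}.")
    return correct

# Demo with a simulated listener who guesses at random (chance is about 33%).
if __name__ == "__main__":
    score = sum(run_trial(lambda t: None, lambda options: random.choice(options))
                for _ in range(9))
    print(f"{score}/9 correct")
```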

Thursday, April 4, 2013

Robert Capita on USA Network




Be sure to check out all the sailing excitement and drama with the Capita Foundation's President at the helm in "America's Cup Sailor," the episode that aired this spring on USA Network's new TV series "THE MOMENT."

http://www.youtube.com/watch?v=7LlMFqlweQg

Enjoy.

Robert E. Capita, President/CEO
Capita Foundation

Monday, March 18, 2013

TED Talk: The way we think about charity is dead wrong



Activist and fundraiser Dan Pallotta calls out the double standard that drives our broken relationship to charities. Too many nonprofits, he says, are rewarded for how little they spend -- not for what they get done. Instead of equating frugality with morality, he asks us to start rewarding charities for their big goals and big accomplishments (even if that comes with big expenses). In this bold talk, he says: Let's change the way we think about changing the world.

Everything the donating public has been taught about giving is dysfunctional, says AIDS Ride founder Dan Pallotta. He aims to transform the way society thinks about charity and giving and change. 

Monday, March 4, 2013

Article: In the news: extracting energy from the biologic battery in the inner-ear

A group of researchers consisting of Patrick Mercier (principal investigator of the Energy-Efficient Microsystems Group), Andrew Lysaght, Saurav Bandyopadhyay, Anantha Chandrakasan, and Konstantina Stankovic has discovered how to extract power from the biologic battery that occurs naturally within the inner ear of mammals.  The results, featured in the journal Nature Biotechnology this week, show for the first time that it is not only possible to extract energy from the ear, but that it is also possible to use this energy to power useful electronic devices – in this case a miniaturized radio transmitter and sensor.

One of the main engineering challenges of building such a bioelectronic energy-harvesting system is that the extractable power from the inner ear is extremely small – on the order of a few nanowatts.  By employing innovative near-zero-leakage power electronics, the researchers were able to boost the voltage of the biologic battery from approximately 80 mV to 1 V, which was then used to operate a 2.4 GHz radio transmitter.  The resulting chip design, implemented in a 180 nm CMOS technology, employed a heavily duty-cycled energy-buffering architecture, in which the radio transmitted a single packet approximately once per minute.
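To see why such aggressive duty cycling is needed, a back-of-the-envelope energy budget helps. The harvested power figure below comes from the article ("a few nanowatts"); the per-packet energy is purely an assumed, illustrative number:

```python
# Back-of-the-envelope budget for a duty-cycled transmitter on a nanowatt source.
# The per-packet energy is an assumed, illustrative figure, not a measured one.
harvested_power_w = 5e-9       # "a few nanowatts" extracted from the biologic battery
packet_energy_j = 200e-9       # assumed energy cost of one short 2.4 GHz packet

seconds_per_packet = packet_energy_j / harvested_power_w
print(f"Charge time per packet: about {seconds_per_packet:.0f} seconds")  # ~40 s here
```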


More detailed information regarding chip implementation results, clinical experiments, and future directions can be found in the paper here.

Figure caption (from the paper): Anatomy and physiology of the inner ear. (a) Schematic of a mammalian ear including the external, middle and inner ear, which contains the cochlea and vestibular end organs. The endoelectronics chip is illustrated in one possible location, although the experiments were done with the chip located outside of the middle ear cavity. (b) Cross-section of a typical cochlear half-turn, showing the endolymphatic space (yellow) bordered by tight junctions (red), the stria vascularis (green) and hair cells (blue), which are contacted by primary auditory neurons (orange).

Article: Medical devices powered by the ear itself

By Larry Hardesty, MIT News Office

For the first time, researchers power an implantable electronic device using an electrical potential — a natural battery — deep in the inner ear. 

Deep in the inner ear of mammals is a natural battery — a chamber filled with ions that produces an electrical potential to drive neural signals. In today’s issue of the journal Nature Biotechnology, a team of researchers from MIT, the Massachusetts Eye and Ear Infirmary (MEEI) and the Harvard-MIT Division of Health Sciences and Technology (HST) demonstrate for the first time that this battery could power implantable electronic devices without impairing hearing.

The devices could monitor biological activity in the ears of people with hearing or balance impairments, or responses to therapies. Eventually, they might even deliver therapies themselves.

In experiments, Konstantina Stankovic, an otologic surgeon at MEEI, and HST graduate student Andrew Lysaght implanted electrodes in the biological batteries in guinea pigs’ ears. Attached to the electrodes were low-power electronic devices developed by MIT’s Microsystems Technology Laboratories (MTL). After the implantation, the guinea pigs responded normally to hearing tests, and the devices were able to wirelessly transmit data about the chemical conditions of the ear to an external receiver.

“In the past, people have thought that the space where the high potential is located is inaccessible for implantable devices, because potentially it’s very dangerous if you encroach on it,” Stankovic says. “We have known for 60 years that this battery exists and that it’s really important for normal hearing, but nobody has attempted to use this battery to power useful electronics.”

The ear converts a mechanical force — the vibration of the eardrum — into an electrochemical signal that can be processed by the brain; the biological battery is the source of that signal’s current. Located in the part of the ear called the cochlea, the battery chamber is divided by a membrane, some of whose cells are specialized to pump ions. An imbalance of potassium and sodium ions on opposite sides of the membrane, together with the particular arrangement of the pumps, creates an electrical voltage.

Although the voltage is the highest in the body (outside of individual cells, at least), it’s still very low. Moreover, in order not to disrupt hearing, a device powered by the biological battery can harvest only a small fraction of its power. Low-power chips, however, are precisely the area of expertise of Anantha Chandrakasan’s group at MTL.

The MTL researchers — Chandrakasan, who heads MIT’s Department of Electrical Engineering and Computer Science; his former graduate student Patrick Mercier, who’s now an assistant professor at the University of California at San Diego; and Saurav Bandyopadhyay, a graduate student in Chandrakasan’s group — equipped their chip with an ultralow-power radio transmitter: After all, an implantable medical monitor wouldn’t be much use if there were no way to retrieve its measurements.

But while the radio is much more efficient than those found in cellphones, it still couldn’t run directly on the biological battery. So the MTL chip also includes power-conversion circuitry — like that in the boxy converters at the ends of many electronic devices’ power cables — that gradually builds up charge in a capacitor. The voltage of the biological battery fluctuates, but it would take the control circuit somewhere between 40 seconds and four minutes to amass enough charge to power the radio. The frequency of the signal was thus itself an indication of the electrochemical properties of the inner ear.
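That last point, that the interval between packets itself carries information, follows from simple energy bookkeeping: a stronger biological battery fills the capacitor faster, so packets arrive more often. A hedged sketch with made-up numbers (the per-packet energy is an assumption, not a figure from the article):

```python
def seconds_between_packets(harvested_power_w, packet_energy_j=200e-9):
    """With a fixed energy cost per packet, the gap between packets
    shrinks as the harvested power grows."""
    return packet_energy_j / harvested_power_w

# Illustrative sweep: a stronger source means more frequent packets.
for power_nw in (1, 3, 5):
    gap = seconds_between_packets(power_nw * 1e-9)
    print(f"{power_nw} nW harvested -> a packet roughly every {gap:.0f} s")
```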

To reduce its power consumption, the control circuit had to be drastically simplified, but like the radio, it still required a higher voltage than the biological battery could provide. Once the control circuit was up and running, it could drive itself; the problem was getting it up and running.

The MTL researchers solve that problem with a one-time burst of radio waves. “In the very beginning, we need to kick-start it,” Chandrakasan says. “Once we do that, we can be self-sustaining. The control runs off the output.”

Stankovic, who still maintains an affiliation with HST, and Lysaght implanted electrodes attached to the MTL chip on both sides of the membrane in the biological battery of each guinea pig’s ear. In the experiments, the chip itself remained outside the guinea pig’s body, but it’s small enough to nestle in the cavity of the middle ear.

Cliff Megerian, chairman of otolaryngology at Case Western Reserve University and University Hospitals Case Medical Center, says that he sees three possible applications of the researchers’ work: in cochlear implants, diagnostics and implantable hearing aids. “The fact that you can generate the power for a low voltage from the cochlea itself raises the possibility of using that as a power source to drive a cochlear implant,” Megerian says. “Imagine if we were able to measure that voltage in various disease states. There would potentially be a diagnostic algorithm for aberrations in that electrical output.” More here.

Friday, February 22, 2013

Cornell Scientists Create Functional, Lifelike Ear Using 3D-Printing and Living Cell Injections

By Lidija Grozdanic 
Published February 21, 2013 in Inhabitat 

Cornell bioengineers have combined 3-D printing with injectable gel molds to create an artificial ear that looks and functions like a real human ear. The new bioengineering method offers a more natural-feeling and painless alternative to conventional reconstructive surgery and prosthetics. The technology could help children born with microtia, a congenital ear deformity, and those who have suffered ear loss due to cancer or an accident.

Dr. Jason Spector, director of the Laboratory for Bioregenerative Medicine and Surgery and associate professor of plastic surgery at Weill Cornell Medical College, along with his colleague Dr. Lawrence Bonassar, first constructed the ears from a digitized 3-D image of a person’s ear. The image was used to build a mold of a solid ear using a 3-D printer.
“The reason why cartilage tissue engineering lends itself to this type of approach is that cartilage is unique—it doesn’t require an immediate blood supply to survive,” said Spector. “The use of our special collagen hydro-gel allows the cells to not only survive, but thrive, and lay down a cartilaginous matrix.”

The best time for implanting the 3D-printed ear would be around the age of 5 or 6, when the head and ears reach 80 percent of adult size, said Spector. The researchers are now developing ways of using cartilage cells to create synthetic organ transplants, by mixing cartilage cells with stem cells extracted from bone marrow. The procedure could take less than 30 minutes, instead of 5 to 6 hours required for rib cartilage harvesting surgery, according to Spector. More here.

Thursday, January 24, 2013

New Test To Better Understand Cause Of Childhood Deafness Within A Year


Scientists at the University of Antwerp have piloted a new genetic test that will ultimately make it possible to rapidly screen all known deafness genes, giving a far more accurate diagnosis of the cause of a hearing loss.

The new test will help parents of a deaf child understand the chances of future siblings also being born deaf. The findings, published today in the American Journal of Medical Genetics, show that by screening just 34 known deafness genes, an accurate diagnosis could be given in roughly half the cases.

The majority of childhood deafness is inherited and knowing the gene responsible can be incredibly important for parents who want to know the likelihood of subsequent children inheriting deafness. Knowing the cause of a child's deafness can also make it easier to predict how their hearing loss may change over time and help choose the most appropriate treatment or method of communication. More here.

Sound and Vision 2012

We are proud to present this captioned video of our hearing research community at San Diego's ARO meeting: Capita Foundation: Sound and Vision 2012.


Monday, January 21, 2013

Free student tickets to ASL Tour at the Museum of Photographic Arts!


The Capita Foundation is sponsoring 25 tickets for students interested in attending the American Sign Language (ASL) Tour this Saturday, January 26 from noon-1PM at the Museum of Photographic Arts in Balboa Park.

The following galleries will be showcased during the ASL Tour:

Ruud van Empel: Strange Beauty

Photo|Synthesis: 7th Annual Youth Exhibition.

Soapbox! The Audience Speaks


Please contact us at info@capitafoundation.org if you are interested in attending.

Thursday, January 17, 2013

Article: Alzheimer's Drug Dials Back Deafness In Mice



By Jon Hamilton
Published January 9, 2013 NPR

If you've spent years cranking your music up to 11, this item's for you.

A drug developed for Alzheimer's disease can partially reverse hearing loss caused by exposure to extremely loud sounds, an international team reports in the journal Neuron.

Before you go back to rocking the house with your Van Halen collection, though, consider that the drug has only been tried in mice so far. And it has never been approved for human use.

Loud noises cause hearing loss by injuring or killing hair cells, cells in the inner ear that transform sounds into electrical signals that are sent to the brain. Read more here.