Links for Keyword: Hearing



Links 41 - 60 of 524

Melissa Dahl TODAY The video will melt your heart: A deaf little boy is stunned when he hears his father’s voice for the first time after receiving an auditory brainstem implant. “Daddy loves you,” Len Clamp tells his 3-year-old son, Grayson, in a video that was recorded May 21 but is going viral today. (He signs the words, too, to be sure the boy would understand.) Grayson was born without cochlear nerves, the “bridge” that carries auditory information from the inner ear to the brain. He’s now among the first children in the U.S. to receive an auditory brainstem implant in a surgery done at the University of North Carolina in Chapel Hill, N.C., led by UNC head and neck surgeon Dr. Craig Buchman. The device is already being used in adults, but is now being tested in children at UNC as part of an FDA-approved trial. It’s similar to a cochlear implant, but instead of sending electrical stimulation to the cochlea, the electrodes are placed on the brainstem itself. Brain surgery is required to implant the device. "Our hope is, because we're putting it into a young child, that their brain is plastic enough that they'll be able to take the information and run with it," Buchman told NBCNews.com.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 18302 - Posted: 06.24.2013

By NICHOLAS BAKALAR Hearing loss in older adults increases the risk for hospitalization and poor health, a new study has found, even taking into account other risk factors. Researchers analyzed data on 529 men and women over 70 with normal hearing, comparing them with 1,140 whose hearing was impaired, most with mild or moderate hearing loss. The data were gathered in a large national health survey in 2005-6 and again in 2009-10. The results appeared in The Journal of the American Medical Association. After adjusting for race, sex, education, hypertension, diabetes, stroke, cardiovascular disease and other risks, the researchers found that people with poor hearing were 32 percent more likely to be hospitalized, 36 percent more likely to report poor physical health and 57 percent more likely to report poor emotional or mental health. The authors acknowledge that this is an association only, and that there may be unknown factors that could have affected the result. “There has been a belief that hearing loss is an inconsequential part of aging,” said the lead author, Dr. Frank R. Lin, an associate professor of otolaryngology at Johns Hopkins. “But it’s probably not. Everyone knows someone with hearing loss, and as we think about health costs, we have to take its effects into account.” Copyright 2013 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18267 - Posted: 06.13.2013

A team of NIH-supported researchers is the first to show, in mice, an unexpected two-step process that happens during the growth and regeneration of inner ear tip links. Tip links are extracellular tethers that link stereocilia, the tiny sensory projections on inner ear hair cells that convert sound into electrical signals, and play a key role in hearing. The discovery offers a possible mechanism for potential interventions that could preserve hearing in people whose hearing loss is caused by genetic disorders related to tip link dysfunction. The work was supported by the National Institute on Deafness and Other Communication Disorders (NIDCD), a component of the National Institutes of Health. Stereocilia are bundles of bristly projections that extend from the tops of sensory cells, called hair cells, in the inner ear. Each stereocilia bundle is arranged in three neat rows that rise from lowest to highest like stair steps. Tip links are tiny thread-like strands that link the tip of a shorter stereocilium to the side of the taller one behind it. When sound vibrations enter the inner ear, the stereocilia, connected by the tip links, all lean to the same side and open special channels, called mechanotransduction channels. These pore-like openings allow potassium and calcium ions to enter the hair cell and kick off an electrical signal that eventually travels to the brain where it is interpreted as sound. The findings build on a number of recent discoveries in laboratories at NIDCD and elsewhere that have carefully plotted the structure and function of tip links and the proteins that comprise them. Earlier studies had shown that tip links are made up of two proteins — cadherin-23 (CDH23) and protocadherin-15 (PCDH15) — that join to make the link, with PCDH15 at the bottom of the tip link at the site of the mechanotransduction channel, and CDH23 on the upper end. Scientists assumed that the assembly was static and stable once the two proteins bonded.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18265 - Posted: 06.13.2013

By ROBERT J. ZATORRE and VALORIE N. SALIMPOOR MUSIC is not tangible. You can’t eat it, drink it or mate with it. It doesn’t protect against the rain, wind or cold. It doesn’t vanquish predators or mend broken bones. And yet humans have always prized music — or, more than prized it, loved it. In the modern age we spend great sums of money to attend concerts, download music files, play instruments and listen to our favorite artists whether we’re in a subway or salon. But even in Paleolithic times, people invested significant time and effort to create music, as the discovery of flutes carved from animal bones would suggest. So why does this thingless “thing” — at its core, a mere sequence of sounds — hold such potentially enormous intrinsic value? The quick and easy explanation is that music brings a unique pleasure to humans. Of course, that still leaves the question of why. But for that, neuroscience is starting to provide some answers. More than a decade ago, our research team used brain imaging to show that music that people described as highly emotional engaged the reward system deep in their brains — activating subcortical nuclei known to be important in reward, motivation and emotion. Subsequently we found that listening to what might be called “peak emotional moments” in music — that moment when you feel a “chill” of pleasure to a musical passage — causes the release of the neurotransmitter dopamine, an essential signaling molecule in the brain. When pleasurable music is heard, dopamine is released in the striatum — an ancient part of the brain found in other vertebrates as well — which is known to respond to naturally rewarding stimuli like food and sex and which is artificially targeted by drugs like cocaine and amphetamine. © 2013 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 15: Language and Our Divided Brain
Link ID: 18251 - Posted: 06.10.2013

by Kim Krieger The song of the cicada has been romanticized in mariachi music, used to signify summer in Japanese cinematography, and cursed by many an American suburbanite wishing for peace and quiet. Despite the bugs' ubiquity, scientists haven't uncovered how they sing so loudly—some are as noisy as a jet engine—and why they don't expend much energy doing it. But researchers reported in Montreal yesterday at the 21st International Congress on Acoustics that they now have the answer. The detailed mechanism of the cicada's song is far from fully understood, says Paulo Fonseca, an animal acoustician at the University of Lisbon who was not involved in the project. The work by the researchers "is innovative and paves our way to a better understanding of this complex system allowing such small animals to produce such powerful sound." Cicadas aren't just a natural curiosity. Small devices that produce extremely loud noises while requiring very little power appeal to the U.S. Navy, which uses sonar for underwater exploration and communication. Derke Hughes, a research engineer at the Naval Undersea Warfare Center in Newport, Rhode Island, says that the loudest cicadas can make a noise 20 to 40 dB louder than the compact off-the-shelf RadioShack speaker in his office using the same amount of power. Intrigued, he and his colleagues used micro-computed tomography — a kind of CT scan that picks up details as small as a micron — to image a cicada's tymbal, which helps the insect make its deafening chirp. © 2010 American Association for the Advancement of Science
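The decibel figures above are easy to misread: because the decibel scale is logarithmic, a 20 to 40 dB advantage at the same input power means the cicada radiates 100 to 10,000 times the acoustic power of the speaker. A minimal sketch of the conversion (the function name is ours, for illustration only):

```python
def db_to_power_ratio(db: float) -> float:
    """Convert a decibel difference to a linear power ratio.

    Decibels are defined as 10 * log10(P1 / P0), so inverting
    gives P1 / P0 = 10 ** (db / 10).
    """
    return 10 ** (db / 10)

# The cicada's reported 20-40 dB edge over the desktop speaker:
print(db_to_power_ratio(20))  # 100.0   -> one hundred times the power
print(db_to_power_ratio(40))  # 10000.0 -> ten thousand times the power
```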

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 8: Hormones and Sex
Link ID: 18234 - Posted: 06.05.2013

Zoe Cormier A study of two ancient hominins from South Africa suggests that changes in the shape and size of the middle ear occurred early in our evolution. Such alterations could have profoundly changed what our ancestors could hear — and perhaps how they could communicate. Palaeoanthropologist Rolf Quam of Binghamton University in New York state and his colleagues recovered and analysed a complete set of the three tiny middle-ear bones, or ossicles, from a 1.8-million-year-old specimen of Paranthropus robustus and an incomplete set of ossicles from Australopithecus africanus, which lived from about 3.3 million to around 2.1 million years ago. The ossicles are the smallest bones in the human body, and are rarely preserved intact in hominin fossils, Quam says. In both specimens, the team found that the malleus (the first in the chain of the three middle-ear bones) was human-like — proportionally smaller than those in our ape relatives. Its size would also imply a smaller eardrum. The similarity between the two species points to a “deep and ancient origin” of this feature, Quam says. “This could be like bipedalism: a defining characteristic of hominins.” It is hard to draw conclusions about hearing just from the shape of the middle-ear bones because the process involves so many different ear structures, as well as the brain itself. However, some studies have shown that the relative sizes of the middle-ear bones do affect what primates can hear. Genomic comparisons with gorillas have indicated that changes in the genes that code for these structures might also demarcate humans from apes. © 2013 Nature Publishing Group

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18151 - Posted: 05.14.2013

by Michael Balter Researchers debate when language first evolved, but one thing is sure: Language requires us not only to talk but also to listen. A team of scientists now reports recovering the earliest known complete set of the three tiny middle ear bones—the malleus ("hammer"), incus ("anvil"), and stapes ("stirrup")—in a 2.0-million-year-old skull of Paranthropus robustus, a distant human relative found in South Africa (see photo). Reporting online today in the Proceedings of the National Academy of Sciences, the researchers found that the malleus of P. robustus, as well as one found earlier in the early human relative Australopithecus africanus, is similar to that of modern humans, whereas the two other ear bones most closely resemble those of existing African and Asian great apes. The team is not entirely sure what this precocious appearance of a human-like malleus means. But since the malleus is attached directly to the eardrum, the researchers suggest that it might be an early sign of the high human sensitivity to middle-range acoustic frequencies between 2 and 4 kilohertz—frequencies critical to spoken language, but to which apes and other primates are much less sensitive. © 2010 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18150 - Posted: 05.14.2013

Ed Yong Many moths have evolved sensitive hearing that can pick up the ultrasonic probes of bats that want to eat them. But one species comes pre-adapted for anything that bats might bring to this evolutionary arms race. Even though its ears are extremely simple — a pair of eardrums on its flanks that each vibrate four receptor cells — it can sense frequencies up to 300 kilohertz, well beyond the range of any other animal and higher than any bat can squeak. “A lot of previous work has suggested that some bats have evolved calls that are out of the hearing range of the moths they are hunting. But this moth can hear the calls of any bat,” says James Windmill, an acoustical engineer at the University of Strathclyde, UK, who discovered the ability in the greater wax moth (Galleria mellonella). His study is published in Biology Letters. Windmill's collaborator Hannah Moir, a bioacoustician now at the University of Leeds, UK, played sounds of varying frequencies to immobilized wax moths. As the insects “listened”, Moir used a laser to measure the vibrations of their eardrums, and electrodes to record the activity of their auditory nerves. The moths were most sensitive to frequencies of around 80 kilohertz, the average frequency of their courtship calls. But when exposed to 300 kilohertz, the highest level that the team tested, the insects' eardrums still vibrated and their neurons still fired. © 2013 Nature Publishing Group

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18137 - Posted: 05.09.2013

By Jill U. Adams Urban living may be harmful to your ears. That’s the takeaway from a study that found that more than eight in 10 New Yorkers were exposed to enough noise to damage their hearing. Perhaps more surprising was that so much of the city dwellers’ noise exposure was related not to noisy occupations but rather to voluntary activities such as listening to music. Which makes it hard for me not to worry that when my 16-year-old son is sitting nearby with his earbuds in, I can hear his music. There’s a pretty good chance that he’s got the volume up too loud — loud enough to potentially damage the sensory cells deep in his ear and eventually lead to permanent hearing loss. That’s according to Christopher Chang, an ear, nose and throat doctor at Fauquier ENT Consultants in Warrenton, who sees patients every day with hearing-related issues. “What he’s hearing is way too loud, because it’s concentrated directly into the ear itself,” he says of my son, adding that the anatomy of the ear magnifies sound as it travels through the ear canal. Listening to music through earbuds or headphones is just one way that many of us are routinely exposed to excessive noise. Mowing the lawn, going to a nightclub, riding the Metro, using a power drill, working in a factory, playing in a band and riding a motorcycle are all activities that can lead to hearing problems. Aging is the primary cause of hearing loss; noise is second, says Brian Fligor, who directs the diagnostic audiology program at Boston Children’s Hospital, and it’s usually the culprit when the condition affects younger people. Approximately 15 percent of American adults between the ages of 20 and 69 have high-frequency hearing loss, probably the result of noise exposure, according to the National Institute on Deafness and Other Communication Disorders. © 1996-2013 The Washington Post
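How loud is too loud for how long? The article doesn't give numbers, but the widely used NIOSH occupational criterion does: 8 hours at 85 dBA is the recommended daily limit, and every 3 dB increase halves the permissible time. A sketch under that assumption (the NIOSH limit is a workplace guideline, not a rule from this study):

```python
def safe_exposure_hours(level_dba: float, rel: float = 85.0, exchange: float = 3.0) -> float:
    """Permissible daily exposure under the NIOSH criterion:
    8 hours at the reference level (85 dBA by default), halved
    for every `exchange` dB above it (the 3-dB exchange rate)."""
    return 8.0 / (2 ** ((level_dba - rel) / exchange))

print(safe_exposure_hours(85))   # 8.0  hours
print(safe_exposure_hours(94))   # 1.0  hour (roughly a lawn mower)
print(safe_exposure_hours(100))  # 0.25 hour, i.e. 15 minutes
```

By this rule of thumb, earbuds turned up to around 100 dBA exhaust a whole day's safe dose in about 15 minutes.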

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18038 - Posted: 04.15.2013

By Meghan Rosen Whether you’re rocking out to Britney Spears or soaking up Beethoven’s classics, you may be enjoying music because it stimulates a guessing game in your brain. This mental puzzling explains why humans like music, a new study suggests. By looking at activity in just one part of the brain, researchers could predict roughly how much volunteers dug a new song. When people hear a new tune they like, a clump of neurons deep within their brains bursts into excited activity, researchers report April 12 in Science. The blueberry-sized cluster of cells, called the nucleus accumbens, helps make predictions and sits smack-dab in the “reward center” of the brain — the part that floods with feel-good chemicals when people eat chocolate or have sex. The berry-sized bit acts with three other regions in the brain to judge new jams, MRI scans showed. One region looks for patterns, another compares new songs to sounds heard before, and the third checks for emotional ties. As our ears pick up the first strains of a new song, our brains hustle to make sense of the music and figure out what’s coming next, explains coauthor Valorie Salimpoor, who is now at the Baycrest Rotman Research Institute in Toronto. And when the brain’s predictions are right (or pleasantly surprising), people get a little jolt of pleasure. All four brain regions work overtime when people listen to new songs they like, report the researchers, including Robert Zatorre of the Montreal Neurological Institute at McGill University © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Link ID: 18030 - Posted: 04.13.2013

By Andrew Grant A simple plastic shell has cloaked a three-dimensional object from sound waves for the first time. With some improvements, a similar cloak could eventually be used to reduce noise pollution and to allow ships and submarines to evade enemy detection. The experiments appear March 20 in Physical Review Letters. “This paper implements a simplified version of invisibility using well-designed but relatively simple materials,” says Steven Cummer, an electrical engineer at Duke University, who was not involved in the study. Cummer proposed the concept of a sound cloak in 2007. Scientists’ recent efforts to render objects invisible to the eye are based on the fact that our perception of the world depends on the scattering of waves. We can see objects because waves of light strike them and scatter. Similarly, the Navy can detect faraway submarines because they scatter sound waves (sonar) that hit them. So for the last several years scientists have been developing cloaks that prevent scattering by steering light or sound waves around an object. The drawback of this approach, however, is that it requires complex synthetic materials that are difficult to produce. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17966 - Posted: 03.30.2013

By Susan Milius Hey evolution, thanks for nothing. When a mammal embryo develops, its middle ear appears to form in a pop-and-patch way that seals one end with substandard, infection-prone tissue. “The way evolution works doesn’t always create the most perfect, engineered structure,” says Abigail Tucker, a developmental biologist at King’s College London. “Definitely, it’s made an ear that’s slightly imperfect.” The mammalian middle ear catches sound and transfers it, using three tiny bones that jiggle against the eardrum, to the inner ear chamber. Those three bones — the hammer, anvil and stirrup — are a trait that distinguishes the group from other evolutionary lineages. Research in mouse embryos finds that the middle ear begins as a pouch of tissue. Then its lining ruptures at one end and the break lets in a different kind of tissue, which forms the tiny bones of the middle ear. This intruding tissue originates from what’s called the embryo’s neural crest, a population of cells that gives rise to bone and muscle. Neural crest tissue has never before been known to create a barrier in the body. Yet as the mouse middle ear forms, this tissue creates a swath of lining that patches the rupture, Tucker and colleague Hannah Thompson report in the March 22 Science. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17947 - Posted: 03.25.2013

By JANE E. BRODY Noise, not age, is the leading cause of hearing loss. Unless you take steps now to protect your ears, sooner or later many of you — and your children — will have difficulty understanding even ordinary speech. Tens of millions of Americans, including 12 percent to 15 percent of school-age children, already have permanent hearing loss caused by the everyday noise that we take for granted as a fact of life. “The sad truth is that many of us are responsible for our own hearing loss,” writes Katherine Bouton in her new book, “Shouting Won’t Help: Why I — and 50 Million Other Americans — Can’t Hear You.” The cause, she explains, is “the noise we blithely subject ourselves to day after day.” While there are myriad regulations to protect people who work in noisy environments, there are relatively few governing repeated exposure to noise outside the workplace, from portable music devices, rock concerts, hair dryers, sirens, lawn mowers, leaf blowers, vacuum cleaners, car alarms and countless other sources. We live in a noisy world, and every year it seems to get noisier. Ms. Bouton notes that the noise level inside Allen Fieldhouse at the University of Kansas often exceeds that of a chain saw. After poor service, noise is the second leading complaint about restaurants. Proprietors believe that people spend more on food and drink in bustling eateries, and many have created new venues or retrofitted old ones to maximize sound levels. Copyright 2013 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17946 - Posted: 03.25.2013

By Rachel Ehrenberg BOSTON — For the first time, researchers have snapped pictures of mouse inner ear cells using an approach that doesn’t damage tissue or require elaborate dyes. The approach could offer a way to investigate hearing loss — the most common sensory deficit in the world — and may help guide the placement of cochlear devices or other implants. Inner ear damage and the deafness that results have long challenged scientists. The small delicate cochlea and associated parts are encased in the densest bone in the body and near crucial anatomical landmarks, including the jugular vein, carotid artery and facial nerve, which make them difficult to access. With standard anatomical imaging techniques such as MRI, the inner ear just looks like a small grey blob. “We can’t biopsy it, we can’t image it, so it’s very difficult to figure out why people are deaf,” said ear surgeon and neuroscientist Konstantina Stankovic of the Massachusetts Eye and Ear Infirmary in Boston. Stankovic and her colleagues took a peek at inner ear cells using an existing technique called two-photon microscopy. This approach shoots photons at the target tissue, exciting particular molecules that then emit light. The researchers worked with mice exposed to 160 decibels of sound for two hours —levels comparable to the roaring buzz of a snowmobile or power tools. Then they removed the rodents’ inner ears, which includes the spiraled, snail-shaped cochlea and other organs. Instead of cutting into the cochlea, the researchers peered through the “round window” — a middle ear opening covered by a thin membrane that leads to the cochlea. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 17820 - Posted: 02.19.2013

by Kelli Whitlock Burton Having a conversation in a noisy restaurant can be difficult. For many elderly adults, it's often impossible. But with a little practice, the brain can learn to hear above the din, a new study suggests. Age-related hearing loss can involve multiple components, such as the disappearance of sensory cells in the inner ear. But scientists say that part of the problem may stem from our brains. As we get older, our brains slow down—a natural part of aging called neural slowing. One side effect of this sluggishness is the inability to process the fast-moving parts of speech, particularly consonants at the beginning of words that sound alike, such as "b," "p," "g," and "d." Add background noise to the mix and "bad" may sound like "dad," says Nina Kraus, director of the Auditory Neuroscience Laboratory at Northwestern University in Evanston, Illinois. "Neural slowing especially affects our ability to hear in a noisy background because the sounds we need to hear are acoustically less salient and because noise also taxes our ability to remember what we hear." Building on animal studies that pointed to an increase in neural speed following auditory training, Kraus and colleagues enrolled 67 people aged 55 to 70 with no hearing loss or dementia in an experiment. Half the group completed about 2 months of exercises with Brain Fitness, a commercially available auditory training program by Posit Science. (The team has no connection to the company.) The exercises helped participants better identify different speech sounds and distinguish between similar-sounding syllables, such as "ba" or "ta." © 2010 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 17800 - Posted: 02.14.2013

By KATHERINE BOUTON At a party the other night, a fund-raiser for a literary magazine, I found myself in conversation with a well-known author whose work I greatly admire. I use the term “conversation” loosely. I couldn’t hear a word he said. But worse, the effort I was making to hear was using up so much brain power that I completely forgot the titles of his books. A senior moment? Maybe. (I’m 65.) But for me, it’s complicated by the fact that I have severe hearing loss, only somewhat eased by a hearing aid and cochlear implant. Dr. Frank Lin, an otolaryngologist and epidemiologist at Johns Hopkins School of Medicine, describes this phenomenon as “cognitive load.” Cognitive overload is the way it feels. Essentially, the brain is so preoccupied with translating the sounds into words that it seems to have no processing power left to search through the storerooms of memory for a response. Over the past few years, Dr. Lin has delivered unwelcome news to those of us with hearing loss. His work looks “at the interface of hearing loss, gerontology and public health,” as he writes on his Web site. The most significant issue is the relation between hearing loss and dementia. In a 2011 paper in The Archives of Neurology, Dr. Lin and colleagues found a strong association between the two. The researchers looked at 639 subjects, ranging in age at the beginning of the study from 36 to 90 (with the majority between 60 and 80). The subjects were part of the Baltimore Longitudinal Study of Aging. None had cognitive impairment at the beginning of the study, which followed subjects for 18 years; some had hearing loss. Copyright 2013 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 17787 - Posted: 02.12.2013

by Jacob Aron The mystery of how our brains perceive sound has deepened, now that musicians have smashed a limit on sound perception imposed by a famous algorithm. On the upside, this means it should be possible to improve upon today's gold-standard methods for audio perception. Devised over 200 years ago, the Fourier transform is a mathematical process that splits a sound wave into its individual frequencies. It is the most common method for digitising analogue signals, and some had thought that brains make use of the same algorithm when turning the cacophony of noise around us into individual sounds and voices. To investigate, Jacob Oppenheim and Marcelo Magnasco of Rockefeller University in New York turned to the Gabor limit, a part of the Fourier transform's mathematics that makes the determination of pitch and timing a trade-off. Rather like the uncertainty principle of quantum mechanics, the Gabor limit states that you can't accurately determine a sound's frequency and its duration at the same time. The pair reasoned that if people's hearing obeyed the Gabor limit, this would be a sign that they were using the Fourier transform. But when 12 musicians, some instrumentalists, some conductors, took a series of tests, such as judging slight changes in the pitch and duration of sounds at the same time, they beat the limit by up to a factor of 13. © Copyright Reed Business Information Ltd.
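The "splitting a sound wave into its individual frequencies" that the article describes can be shown in a few lines. Below is a naive discrete Fourier transform (DFT) applied to a toy two-tone signal; it is a didactic O(N²) sketch of the idea, not the fast FFT used in real audio software, and the signal values are invented for illustration:

```python
import cmath
import math

def dft(signal):
    """Naive discrete Fourier transform: each output bin k is the
    correlation of the signal with a complex sinusoid completing
    k cycles per window. O(N^2); for illustration only."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# A toy "sound wave": two pure tones, at 5 and 12 cycles per window.
n = 64
signal = [math.sin(2 * math.pi * 5 * t / n) + 0.5 * math.sin(2 * math.pi * 12 * t / n)
          for t in range(n)]
spectrum = [abs(x) for x in dft(signal)]

# The two largest bins below the Nyquist frequency recover the input tones.
peaks = sorted(range(1, n // 2), key=lambda k: spectrum[k], reverse=True)[:2]
print(sorted(peaks))  # [5, 12]
```

The Gabor limit the researchers tested is a property of exactly this representation: narrowing the analysis window sharpens timing but smears the spectral peaks, and vice versa, so no Fourier-based analysis can make both arbitrarily precise at once.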

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17775 - Posted: 02.09.2013

By C. CLAIBORNE RAY Q. Nearing 70, I have increasing difficulty hearing conversations, yet music in restaurants is too loud. Why? A. Age-related hearing loss, called presbycusis, is characterized by loss of hair cells in the base of the cochlea, or inner ear, that are attuned to capture and transmit high-frequency sounds, said Dr. Anil K. Lalwani, director of otology, neurotology and skull-base surgery at NewYork-Presbyterian Hospital/Columbia University Medical Center. Loss of high-frequency hearing leads to deterioration in the ability to distinguish words in conversation. Additionally, any noise in the environment leads to even greater loss in clarity of hearing. “Contrary to expectation, presbycusis is also associated with sensitivity to loud noises,” Dr. Lalwani said. “This is due to a poorly understood phenomenon called recruitment.” Normally, a specific sound frequency activates a specific population of hair cells located at a specific position within the cochlea. With hearing loss, this specificity is lost, and a much larger population of hair cells in the adjacent areas is “recruited” and also activated, producing sensitivity to noise. “Patients with presbycusis perceive an incremental increase in loudness to be much greater than those with normal hearing,” he said. “This explains why the elderly parent complains that ‘I am not deaf!’ ” when a son or daughter repeats a misheard sentence. © 2013 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 17760 - Posted: 02.05.2013

By James Gallagher Health and science reporter, BBC News A tiny "genetic patch" can be used to prevent a form of deafness which runs in families, according to animal tests. Patients with Usher syndrome have defective sections of their genetic code which cause problems with hearing, sight and balance. A study, published in the journal Nature Medicine, showed the same defects could be corrected in mice to restore some hearing. Experts said it was an "encouraging" start. There are many types of Usher syndrome tied to different errors in a patient's DNA - the blueprint for building every component of the body. One of those mutations runs in families descended from French settlers in North America. When they try to build a protein called harmonin, which is needed to form the tiny hairs in the ear that detect sound, they do not finish the job. It results in hearing loss at birth and has a similar effect in the eye, where it causes a gradual loss of vision. Scientists at the Rosalind Franklin University of Medicine and Science, in Chicago in the US, designed a small strip of genetic material which attaches to the mutation and keeps the body's factories building the protein. There has been something of a flurry of developments in restoring hearing in the past year. BBC © 2013

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17759 - Posted: 02.05.2013

by Elizabeth Devitt Birds may not have big brains, but they know how to navigate. They wing around town and across continents with amazing accuracy, while we watch and wonder. Biologists believe that sight, smell, and an internal compass all contribute to avian orienteering. But none of these skills completely explains how birds fly long distances or return home from places they've never been. A new study proposes that the animals use infrasound—low-level background noise in our atmosphere—to fly by "images" they hear. These acoustical maps may also explain how other creatures steer. Scientists have long considered infrasound as a navigational cue for birds. But no one had pinpointed how the process might work until U.S. Geological Survey geophysicist Jonathan Hagstrum in Menlo Park, California, became intrigued by the unexplained loss of almost 60,000 pigeons during a race from France to England in 1997. The race went bust when the birds' flight route crossed that of a Concorde jet, and Hagstrum wanted to know why. "When I realized the birds in that race were on the same flight path as the Concorde, I knew it had to be infrasound," he says. The supersonic plane laid down a sonic boom when most of the animals were flying across the English Channel. Normally, infrasound is generated when deep ocean waves send pressure waves reverberating into the land and atmosphere. Infrasound can come from other natural causes, such as earthquakes, or human-made events, such as the acceleration of the Concorde. The long, slow waves move across vast distances. Although humans can't hear them, birds and other animals are able to tune in. © 2010 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17746 - Posted: 02.02.2013