Links for Keyword: Hearing



Links 21 - 40 of 524

by Jennifer Viegas Music skills evolved at least 30 million years ago in the common ancestor of humans and monkeys, according to a new study that could help explain why chimpanzees drum on tree roots and monkey calls sound like singing. The study, published in the latest issue of Biology Letters, also suggests an answer to this chicken-and-egg question: Which came first, language or music? The answer appears to be music. "Musical behaviors would constitute a first step towards phonological patterning, and therefore language," lead author Andrea Ravignani told Discovery News. For the study, Ravignani, a doctoral candidate at the University of Vienna's Department of Cognitive Biology, and his colleagues focused on an ability known as "dependency detection." This has to do with recognizing relationships between syllables, words and musical notes. For example, once we hear a certain pattern like Do-Re-Mi, we listen for it again. Hearing something like Do-Re-Fa sounds wrong because it violates the expected pattern. Normally monkeys don't respond the same way, but this research grabbed their attention since it used sounds within their frequency ranges. In the study, squirrel monkeys sat in a sound booth and listened to a set of three novel patterns. (The researchers fed the monkeys insects between playbacks, so the monkeys quickly got to like this activity.) Whenever a pattern changed, similar to our hearing Do-Re-Fa, the monkeys stared longer, as if to say, "Huh?" © 2013 Discovery Communications, LLC.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18918 - Posted: 11.13.2013

By PAULA SPAN Jim Cooke blames his hearing loss on the constant roar of C-119 aircraft engines he experienced in the Air Force. He didn’t wear protective gear because, like most 20-year-olds, “you think you’re indestructible,” he said. By the time he was 45, he needed hearing aids for both ears. Still, he had a long career as a telephone company executive while he and his wife, Jean, raised two children in Broadview Heights, Ohio. Only after retirement, he told me in an interview, did he start having trouble communicating. Mr. Cooke had to relinquish a couple of part-time jobs he enjoyed because “I felt insecure about dealing with people on the phone,” he said. He withdrew from a church organization he led because he couldn’t grasp what members were saying at meetings. “He didn’t want to be in social situations,” Mrs. Cooke said. “It gave him a feeling of inadequacy, and anger at times.” Two years ago, when their grandchildren began saying that Granddad needed to replace his hearing aid batteries — although the batteries were fine — the Cookes went to the Cleveland Clinic, where an audiologist, Dr. Sarah Sydlowski, told Jim that at 76, he might consider a cochlear implant. Perhaps the heart-tugging YouTube videos of deaf toddlers suddenly hearing sounds have led us to think of cochlear implants as primarily for children. Or perhaps, said Dr. Frank R. Lin, a Johns Hopkins University epidemiologist, we consider late-life hearing loss normal (which it is), “an unfortunate but inconsequential aspect of aging,” and don’t explore treatment beyond hearing aids. In any case, the idea of older adults having a complex electronic device surgically implanted has been slow to catch on, even though by far the greatest number of people with severe hearing loss are seniors. © 2013 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18912 - Posted: 11.12.2013

by Laura Sanders Neonatal intensive care units are crammed full of life-saving equipment and people. The technology that fills these bustling hubs is responsible for saving the lives of fragile young babies. That technology is also responsible for quite a bit of noise. In the NICU, monitors beep, incubators whir and nurses, doctors and family members talk. This racket isn’t just annoying: NICU noise often exceeds acceptable levels set by the American Academy of Pediatrics, a 2009 analysis found. To dampen the din, many hospitals are shifting away from open wards to private rooms for preemies. Sounds like a no-brainer, right? Fragile babies get their own sanctuaries where they can recover and grow in peace. But in a surprising twist, a new study finds that this peace and quiet may actually be bad for some babies. Well aware of the noise problem in the NICU ward, Roberta Pineda of Washington University School of Medicine in St. Louis and colleagues went into their study of 136 preterm babies expecting to see benefits in babies who stayed in private rooms. Instead, the researchers found the exact opposite. By the time they left the hospital, babies who stayed in private rooms had less mature brains than those who stayed in an open ward. And two years later, babies who had stayed in private rooms performed worse on language tests. The results were not what the team expected. “It was extremely surprising,” Pineda told me. The researchers believe that the noise abatement effort made things too quiet for these babies. As distressing data from Romanian orphanages highlights, babies need stimulation to thrive. Children who grew up essentially staring at white walls with little contact from caregivers develop serious brain and behavioral problems, heartbreaking results from the Bucharest Early Intervention Project show. Hearing language early in life, even before birth, might be a crucial step in learning to talk later. And babies tucked away in private rooms might be missing out on some good stimulation. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 18898 - Posted: 11.09.2013

Learning a musical instrument as a child gives the brain a boost that lasts long into adult life, say scientists. Adults who used to play an instrument, even if they have not done so in decades, have a faster brain response to speech sounds, research suggests. The more years of practice during childhood, the faster the brain response was, the small study found. The Journal of Neuroscience work looked at 44 people in their 50s, 60s and 70s. The volunteers listened to a synthesised speech syllable, "da", while researchers measured electrical activity in the region of the brain that processes sound information - the auditory brainstem. Despite none of the study participants having played an instrument in nearly 40 years, those who completed between four and 14 years of music training early in life had a faster response to the speech sound than those who had never been taught music. Researcher Michael Kilgard, of Northwestern University, said: "Being a millisecond faster may not seem like much, but the brain is very sensitive to timing and a millisecond compounded over millions of neurons can make a real difference in the lives of older adults." As people grow older, they often experience changes in the brain that compromise hearing. For instance, the brains of older adults show a slower response to fast-changing sounds, which is important for interpreting speech. Musical training may help offset this, according to Dr Kilgard's study. BBC © 2013

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 18887 - Posted: 11.07.2013

On Easter Sunday in 2008, the phantom noises in Robert De Mong’s head dropped in volume -- for about 15 minutes. For the first time in months, he experienced relief, enough at least to remember what silence was like. And then they returned, fierce as ever. It was six months earlier that the 66-year-old electrical engineer first awoke to a dissonant clamor in his head. There was a howling sound, a fingernails-on-a-chalkboard sound, “brain zaps” that hurt like a headache and a high frequency "tinkle" noise, like musicians hitting triangles in an orchestra. Many have since disappeared, but two especially stubborn noises remain. One he describes as monkeys banging on cymbals. Another resembles frying eggs and the hissing of high voltage power lines. He hears those sounds every moment of every day. De Mong was diagnosed in 2007 with tinnitus, a condition that causes a phantom ringing, buzzing or roaring in the ears in the absence of any external noise. When the sounds first appeared, they did so as if from a void, he said. No loud noise trauma had preceded the tinnitus, as it does for some sufferers -- it was suddenly just there. And the noises haunted him, robbed him of sleep and fueled a deep depression. He lost interest in his favorite hobby: tinkering with his ‘78 Trans Am and his two Corvettes. He stopped going into work. That month, De Mong visited an ear doctor, who told him he had high frequency hearing loss in both ears. Another doctor at the Stanford Ear, Nose and Throat clinic confirmed it, and suggested hearing aids as a possibility. They helped the hearing, but did nothing for the ringing. © 1996-2013 MacNeil/Lehrer Productions.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18885 - Posted: 11.07.2013

by Sarah Zielinski In the United States, you’re rarely far from a road. And as you get closer to one, or other bits of human infrastructure, bird populations decline. But are the birds avoiding our cars or the noises produced by them? Noise might be a big factor, scientists have reasoned, because they’ve seen declines in bird populations near noisy natural gas compressor sites. It turns out that the sound of cars driving down a road is enough to deter many bird species from an area. Researchers from Boise State University in Idaho created a “phantom road” at a site in the Boise Foothills that is a stopover for migratory birds in the fall. They put up 15 speakers in Douglas fir trees and played recorded sounds of a road at intervals of four days — four days on, four days off. They then counted birds at three locations along their phantom road and three locations nearby where the road noises couldn’t be heard. The scientists spotted lots of birds during their study — more than 8,000 detections and 59 species. The birds they saw changed as the fall progressed, which was natural because the various species of migrating birds hit the stopover point at different times. But all that variation was good for the experiment, the researchers say, because it helped even out any fluctuations they might have seen from site to site and from noise-on to noise-off intervals, letting the researchers tease out the effects of the road noise. © Society for Science & the Public 2000 - 2013.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18879 - Posted: 11.06.2013

Brian Owens Bats that nest inside curled-up leaves may be getting an extra benefit from their homes: the tubular roosts act as acoustic horns, amplifying the social calls that the mammals use to keep their close-knit family groups together. South American Spix’s disc-winged bats (Thyroptera tricolor) roost in groups of five or six inside unfurling Heliconia and Calathea leaves. The leaves remain curled up for only about 24 hours, so the bats have to find new homes almost every day, and have highly specialized social calls to help groups stay together. When out flying, they emit a simple inquiry call. Bats inside leaves answer with a more complex response call to let group members know where the roost is. Gloriana Chaverri, a biologist at the University of Costa Rica in Golfito, took curled leaves into the lab and played recorded bat calls through them, to see how the acoustics were changed by the tapered tubular shape of the leaves. “The call emitted by flying bats got really amplified,” she says, “while the calls from inside the leaves were not amplified as much.” The inquiry calls from outside the roost were boosted by as much as 10 decibels as the sound waves were compressed while moving down the narrowing tube — the same thing that happens in an amplifying ear trumpet. Most response calls from inside the leaf were boosted by only 1–2 decibels, but the megaphone shape of the leaf made them highly directional. The results are published today in Proceedings of the Royal Society B. © 2013 Nature Publishing Group

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 15: Language and Our Divided Brain
Link ID: 18791 - Posted: 10.16.2013

By PETER ANDREY SMITH In a cavernous basement laboratory at the University of Minnesota, Thomas Stoffregen thrusts another unwitting study subject — well, me — into the “moving room.” The chamber has a concrete floor and three walls covered in faux marble. As I stand in the middle, on a pressure sensitive sensor about the size of a bathroom scale, the walls lurch inward by about a foot, a motion so disturbing that I throw up my arms and stumble backward. Indeed, the demonstration usually throws adults completely off balance. I’m getting off lightly. Dr. Stoffregen, a professor of kinesiology, uses the apparatus to study motion sickness, and often subjects must stand and endure subtle computer-driven oscillations in the walls until they are dizzy and swaying. Dr. Stoffregen’s research has also taken him on cruises — cruise ships are to motion sickness what hospitals are to pneumonia. “No one’s ever vomited in our lab,” he said. “But our cruises are a different story.” For decades now, Dr. Stoffregen, 56, director of the university’s Affordance Perception-Action Laboratory, has been amassing evidence in support of a surprising theory about the causes of motion sickness. The problem does not arise in the inner ear, he believes, but rather in a disturbance in the body’s system for maintaining posture. The idea, once largely ignored, is beginning to gain grudging recognition. “Most theories say when you get motion sick, you lose your equilibrium,” said Robert Kennedy, a psychology professor at the University of Central Florida. “Stoffregen says because you lose your equilibrium, you get motion sick.” Motion sickness is probably a problem as old as passive transportation. The word “nausea” derives from the Greek for “boat,” but the well-known symptoms arise from a variety of stimuli: lurching on the back of a camel, say, or riding the Tilt-a-Whirl at a fair. “Pandemonium,” the perpetually seasick Charles Darwin called it. Copyright 2013 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 7: Vision: From Eye to Brain
Link ID: 18691 - Posted: 09.24.2013

By Bruce Bower Babies have an ear for primeval dangers, a new study suggests. By age 9 months, infants pay special attention to sounds that have signaled threats to children’s safety and survival throughout human evolution, say psychologist Nicole Erlich of the University of Queensland, Australia, and her colleagues. Those sounds include a snake hissing, adults’ angry voices, a crackling fire, thunder claps and — as a possible indicator of a nearby but unseen danger — another infant’s cries. Noises denoting modern dangers, as well as pleasant sounds, failed to attract the same level of interest from 9-month-olds, Erlich and her colleagues report Aug. 27 in Developmental Science. People can learn to fear just about anything. But tens of thousands of years of evolution have primed infants’ brains to home in on longstanding perils, the scientists propose. “There is something special about evolutionarily threatening sounds that infants respond to,” Erlich says. Another study that supported that idea, by psychologist David Rakison of Carnegie Mellon University in Pittsburgh, found that 11-month-olds rapidly learn to associate fearful faces with images of snakes and spiders (SN: 9/26/09, p. 11). “There is now a coherent argument that infants are biologically prepared in at least two sensory systems to learn quickly which evolutionarily relevant objects to fear,” Rakison says. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 18627 - Posted: 09.10.2013

Two pioneers in the study of neural signaling and three researchers responsible for modern cochlear implants are winners of The Albert and Mary Lasker Foundation’s annual prize, announced today. The prestigious award honoring contributions in the medical sciences is often seen as a hint at future Nobel contenders. The prizes for basic and clinical research each carry a $250,000 honorarium. Richard Scheller of the biotech company Genentech and Thomas Südhof of Stanford University in Palo Alto, California, got their basic research Laskers for discovering the mechanisms behind the rapid release of neurotransmitters—the brain’s chemical messengers—into the space between neurons. This process underlies all communication among brain cells, and yet it was “a black box” before Scheller and Südhof’s work, says their colleague Robert Malenka, a synaptic physiologist at Stanford. The two worked independently in the late 1980s to identify individual proteins that mediate the process, and their development of genetically altered mice lacking these proteins was “an ambitious and high-risk approach,” Malenka says. Although “they weren’t setting out to understand any sort of disease,” their discoveries have helped unravel the genetic basis for neurological disorders such as Parkinson’s disease. This year’s clinical research prizes went to Graeme Clark, Ingeborg Hochmair, and Blake Wilson for their work to restore hearing to the deaf. In the 1970s, Hochmair and Clark of the cochlear implant company MED-EL in Innsbruck, Austria, and the University of Melbourne, respectively, were the first to insert multiple electrodes into the human cochlea to stimulate nerves that respond to different frequencies of sound. © 2012 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18625 - Posted: 09.10.2013

By Bruce Bower Strange things happen when bad singers perform in public. Comedienne Roseanne Barr was widely vilified in 1990 after she screeched the national anthem at a major league baseball game. College student William Hung earned worldwide fame and a recording contract in 2004 with a tuneless version of Ricky Martin’s hit song “She Bangs” on American Idol. Several singers at karaoke bars in the Philippines have been shot to death by offended spectators for mangling the melody of Frank Sinatra’s “My Way.” For all the passion evoked by pitch-impaired vocalists, surprisingly little is known about why some people are cringe-worthy crooners. But now a rapidly growing field of research is beginning to untangle the mechanics of off-key singing. The new results may improve scientists’ understanding of how musical abilities develop and help create a toolbox of teaching strategies for aspiring vocalists. Glimpses are also emerging into what counts as “in tune” to the mind’s ear. It seems that listeners are more likely to label stray notes as in tune when those notes are sung as opposed to played on a violin. Running through this new wave of investigations is a basic theme: There is one way to carry a tune and many ways to fumble it. “It’s kind of amazing that any of us can vocally control pitch enough to sing well,” says psychologist Peter Pfordresher of the University at Buffalo, New York. Still, only about 10 percent of adults sing poorly, several reports suggest (although some researchers regard that figure as an underestimate). Some of those tune-challenged crooners have tone deafness, a condition called amusia, which afflicts about 4 percent of the population. Genetic and brain traits render these folks unable to tell different musical notes apart or to recognize a tune as common as “Happy Birthday.” Amusia often — but curiously, not always — results in inept singing. Preliminary evidence suggests that tone-deaf individuals register pitch changes unconsciously, although they can’t consciously decide whether one pitch differs from another. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18612 - Posted: 09.07.2013

Elizabeth Pennisi Dolphins and bats don't have much in common, but they share a superpower: Both hunt their prey by emitting high-pitched sounds and listening for the echoes. Now, a study shows that this ability arose independently in each group of mammals from the same genetic mutations. The work suggests that evolution sometimes arrives at new traits through the same sequence of steps, even in very different animals. The research also implies that this convergent evolution is common—and hidden—within genomes, potentially complicating the task of deciphering some evolutionary relationships between organisms. Nature is full of examples of convergent evolution, wherein very distantly related organisms wind up looking alike or having similar skills and traits: Birds, bats, and insects all have wings, for example. Biologists have assumed that these novelties were devised, on a genetic level, in fundamentally different ways. That was also the case for two kinds of bats and toothed whales, a group that includes dolphins and certain whales, that have converged on a specialized hunting strategy called echolocation. Until recently, biologists had thought that different genes drove each instance of echolocation and that the relevant proteins could change in innumerable ways to take on new functions. But in 2010, Stephen Rossiter, an evolutionary biologist at Queen Mary, University of London, and his colleagues determined that both types of echolocating bats, as well as dolphins, had the same mutations in a particular protein called prestin, which affects the sensitivity of hearing. Looking at other genes known to be involved in hearing, they and other researchers found several others whose proteins were similarly changed in these mammals. © 2012 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18608 - Posted: 09.05.2013

by Jennifer Viegas Goldfish not only listen to music, but they also can distinguish one composer from another, a new study finds. The paper adds to the growing body of evidence that many different animals understand music. For the study, published in the journal Behavioural Processes, Shinozuka and colleagues Haruka Ono and Shigeru Watanabe played two pieces of classical music near goldfish in a tank. The pieces were Toccata and Fugue in D minor by Johann Sebastian Bach and The Rite of Spring by Igor Stravinsky. The scientists trained the fish to gnaw on a little bead hanging on a filament in the water. Half of the fish were trained with food to gnaw whenever Bach played and the other half were taught to gnaw whenever Stravinsky music was on. The goldfish aced the test, easily distinguishing the two composers and getting a belly full of food in the process. The fish were more interested in the vittles than the music, but earlier studies on pigeons and songbirds suggest that Bach is the preferred choice, at least for birds. “These pieces can be classified as classical (Bach) and modern (Stravinsky) music,” Shinozuka explained. “Previously we demonstrated that Java sparrows preferred classical over modern music. Also, we demonstrated Java sparrows could discriminate between consonance and dissonance.” © 2013 Discovery Communications, LLC.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 18600 - Posted: 09.03.2013

by Michael Marshall Life is tough when you're small. It's not just about getting trodden on by bigger animals. Some of the tiniest creatures struggle to make their bodies work properly. This leads to problems that us great galumphing humans will never experience. For instance, the smallest frogs are prone to drying out because water evaporates so quickly from their skin. Miniature animals can't have many offspring, because there is no room in their bodies to grow them. One tiny spider has even had to let its brain spill into its legs, because its head is too small to accommodate it. Gardiner's Seychelles frog is one of the smallest vertebrates known to exist, at just 11 millimetres long. Its tiny head is missing parts of its ears, which means it shouldn't be able to hear anything. It can, though, and that is thanks to its big mouth. One of only four species in the genus Sechellophryne, Gardiner's Seychelles frog is a true rarity. It is confined to a few square kilometres of two islands in the Seychelles, and even if you visit its habitat you're unlikely to see it. That's because the frog spends most of its time in moist leaf litter, so that it doesn't dry out. It eats tiny insects and other invertebrates. When it comes to hearing, it is sadly under-equipped. Unlike most frogs, it doesn't have an external eardrum. Inside its head, it does have the amphibian equivalent of a cochlea, which is the bit that actually detects sounds. But it doesn't have a middle ear to transmit the sound to the cochlea, and is also missing a bone called the columella that would normally help carry the sound. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18599 - Posted: 09.03.2013

By Dina Fine Maron Using sensors tucked inside the ears of live gerbils, researchers from Columbia University are providing critical insights into how the ear processes sound. In particular, the researchers have uncovered new evidence on how the cochlea, a coiled portion of the inner ear, processes and amplifies sound. The findings could lay the initial building blocks for better hearing aids and implants. The research could also help settle a long-simmering debate: Do the inner workings of the ear function somewhat passively with sound waves traveling into the cochlea, bouncing along sensory tissue, and slowing as they encounter resistance until they are boosted and processed into sound? Or does the cochlea actively amplify sound waves? The study, published in Biophysical Journal, suggests the latter is the case. The team, led by Elizabeth Olson, a biomedical engineer at Columbia University, used sensors that simultaneously measured small pressure fluctuations and cell-generated voltages within the ear. The sensors allowed the researchers to pick up phase shifts—a change in the alignment of the vibrations of the sound waves within the ear—suggesting that some part of the ear was amplifying sound. What causes that phase shift is still unclear, although the researchers think the power behind the phase shift comes from the outer hair cells. Apparently the hair cells’ movement serves to localize and sharpen the frequency region of amplification. The researchers wrote that the mechanism appears to be akin to a child swinging on the playground. If somebody pushes a swing just once, the oscillations will eventually die out. If a child pumps her legs at certain times, however, it will put energy into the oscillations—that is power amplification at work. © 2013 Scientific American

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18544 - Posted: 08.22.2013

By C. CLAIBORNE RAY A widely discussed 2006 study of transit noise in New York City, measuring noise on buses as well as in subway cars and on platforms, was described at the time as the first such formal study published since the 1930s. Done by scientists at the Mailman School of Public Health at Columbia University and published in The Journal of Urban Health, the study concluded that noise levels at subway and bus stops could easily exceed recognized public health recommendations and had the potential to damage hearing, given sufficient exposure. For example, guidelines from the Environmental Protection Agency and the World Health Organization set a limit of 45 minutes’ exposure to 85 decibels, the mean noise level measured on subway platforms. And nearly 60 percent of the platform measurements exceeded that level. The maximum noise levels inside subway cars were even higher than those on the platforms, with one-fifth exceeding 100 decibels and more than two-thirds exceeding 90 decibels. The study recommended properly fitted earplugs and earmuff-type protectors in loud transit environments, saying they could cut noise levels significantly at the eardrum. And it warned that personal listening devices only increased the total noise and risk. © 2013 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18495 - Posted: 08.13.2013

by Tanya Lewis, LiveScience In waters from Florida to the Caribbean, dolphins are showing up stranded or entangled in fishing gear with an unusual problem: They can't hear. More than half of stranded bottlenose dolphins are deaf, one study suggests. The causes of hearing loss in dolphins aren't always clear, but aging, shipping noise and side effects from antibiotics could play roles. "We're at a stage right now where we're determining the extent of hearing loss [in dolphins], and figuring out all the potential causes," said Judy St. Leger, director of pathology and research at SeaWorld in San Diego. "The better we understand that, the better we have a sense of what we should be doing [about it]." Whether the hearing loss is causing the dolphin strandings -- for instance, by steering the marine mammals in the wrong direction or preventing them from finding food -- is also still an open question. Dolphins are a highly social species. They use echolocation to orient themselves by bouncing high-pitched sound waves off of objects in their environment. They also "speak" to one another in a language of clicks and buzzing sounds. Because hearing is so fundamental to dolphins' survival, losing it can be detrimental. A 2010 study found that more than half of stranded bottlenose dolphins and more than a third of stranded rough-toothed dolphins had severe hearing loss. The animals' hearing impairment may have been a critical factor in their strandings, and all rescued cetaceans should be tested, the researchers said in the study, detailed in the journal PLOS ONE. © 2013 Discovery Communications, LLC.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18447 - Posted: 08.03.2013

Researchers have found in mice that supporting cells in the inner ear, once thought to serve only a structural role, can actively help repair damaged sensory hair cells, the functional cells that turn vibrations into the electrical signals that the brain recognizes as sound. The study in the July 25, 2013 online edition of the Journal of Clinical Investigation reveals the rescuing act that supporting cells and a chemical they produce called heat shock protein 70 (HSP70) appear to play in protecting damaged hair cells from death. Finding a way to jumpstart this process in supporting cells offers a potential pathway to prevent hearing loss caused by certain drugs, and possibly by exposure to excess noise. The study was led by scientists at the National Institutes of Health. Over half a million Americans experience hearing loss every year from ototoxic drugs — drugs that can damage hair cells in the inner ear. These include some antibiotics and the chemotherapy drug cisplatin. In addition, about 15 percent of Americans between the ages of 20 and 69 have noise-induced hearing loss, which also results from damage to the sensory hair cells. Once destroyed or damaged by noise or drugs, sensory hair cells in the inner ears of humans don’t grow back or self-repair, unlike the sensory hair cells of other animals such as birds and amphibians. This has made exploring potential pathways to protect or regrow hair cells in humans a major focus of hearing research.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18411 - Posted: 07.27.2013

By Emma Tracey BBC News, Ouch An online magazine for the deaf community, Limping Chicken, recently ran an item on how deaf and hearing people sneeze differently. The article by partially deaf journalist Charlie Swinbourne got readers talking - and the cogs started turning at Ouch too. Swinbourne observes that deaf people don't make the "achoo!" sound when they sneeze, while hearing people seem to do it all the time - in fact, he put it in his humorous list, The Top 10 Annoying Habits of Hearing People. Nor is "achoo" universal - it's what English-speaking sneezers say. The French sneeze "atchoum". In Japan, it's "hakashun" and in the Philippines, they say "ha-ching". Inserting words into sneezes - and our responses such as "bless you" - are cultural habits we pick up along the way. So it's not surprising that British deaf people, particularly users of sign language, don't think to add the English word "achoo" to this most natural of actions. For deaf people, "a sneeze is what it should be... something that just happens", says Swinbourne in his article. He even attempts to describe what an achoo-free deaf sneeze sounds like: "[There is] a heavy breath as the deep pre-sneeze breath is taken, then a sharper, faster sound of air being released." Very little deaf-sneeze research exists, but a study has been done on deaf people and their laughter. BBC © 2013

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18355 - Posted: 07.08.2013

by Helen Fields When a bat moves in for the kill, some moths jiggle their genitals. Researchers made the observation by studying three species of hawk moths—big moths that can hover—in Malaysia. They snared the insects with bright lights, tied tiny leashes around their waists, and let them fly while bat attack sounds played. All three species responded to the noises with ultrasound—which they made by shaking their private parts, the team reports online today in Biology Letters. Males have a structure they use for hanging onto females when they mate; to make the sound, they scrape a patch of large scales on the structure against the very end of their abdomen, letting out two bursts of rapid clicks. Females also make a sound, but the researchers aren't sure how. The scientists don't know exactly what the sounds are for, either. The noise may warn the bats that they're trying to mess with a fast-moving, hard-to-catch piece of prey, or it might jam the bat's ultrasound signals. Either way, the racy display may save their lives. © 2010 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 8: Hormones and Sex
Link ID: 18343 - Posted: 07.03.2013