Links for Keyword: Hearing



Links 81 - 100 of 533

by Elizabeth Norton Stop that noise! Many creatures, such as human babies, chimpanzees, and chicks, react negatively to dissonance—harsh, unstable, grating sounds. Since the days of the ancient Greeks, scientists have wondered why the ear prefers harmony. Now, scientists suggest that the reason may go deeper than an aversion to the way clashing notes abrade auditory nerves; instead, it may lie in the very structure of the ear and brain, which are designed to respond to the elegantly spaced structure of a harmonious sound. "Over the past century, researchers have tried to relate the perception of dissonance to the underlying acoustics of the signals," says psychoacoustician Marion Cousineau of the University of Montreal in Canada. In a musical chord, for example, several notes combine to produce a sound wave containing all of the individual frequencies of each tone. Specifically, the wave contains the base, or "fundamental," frequency for each note plus multiples of that frequency known as harmonics. Upon reaching the ear, these frequencies are carried by the auditory nerve to the brain. If the chord is harmonic, or "consonant," the notes are spaced neatly enough so that the individual fibers of the auditory nerve carry specific frequencies to the brain. By perceiving both the parts and the harmonious whole, the brain responds to what scientists call harmonicity. In a dissonant chord, however, some of the notes and their harmonics are so close together that two notes will stimulate the same set of auditory nerve fibers. This clash gives the sound a rough quality known as beating, in which the almost-equal frequencies interfere to create a warbling sound. Most researchers thought that phenomenon accounted for the unpleasantness of a dissonance. © 2010 American Association for the Advancement of Science
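
The "beating" described here has a simple mathematical form; the identity below is standard acoustics background rather than a result of the study. When two pure tones of nearly equal frequencies f1 and f2 sound together, trigonometry gives

$$ \sin(2\pi f_1 t) + \sin(2\pi f_2 t) = 2\cos\big(\pi(f_1 - f_2)t\big)\,\sin\big(\pi(f_1 + f_2)t\big) $$

a tone at the average frequency whose loudness swells and fades, with audible peaks arriving |f1 − f2| times per second. Two tones at 440 Hz and 444 Hz, for example, beat four times per second, producing the warbling interference the article describes.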

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17486 - Posted: 11.13.2012

by Will Ferguson For the first time, an electrical device has been powered by the ear alone. The team behind the technology used a natural electrochemical gradient in cells within the inner ear of a guinea pig to power a wireless transmitter for up to five hours. The technique could one day provide an autonomous power source for brain and cochlear implants, says Tina Stankovic, an auditory neuroscientist at Harvard University Medical School in Boston, Massachusetts. Nerve cells use the movement of positively charged sodium and potassium ions across a membrane to create an electrochemical gradient that drives neural signals. Some cells in the cochlea have the same kind of gradient, which is used to convert the mechanical force of the vibrating eardrum into electrical signals that the brain can understand. A major challenge in tapping such electrical potential is that the voltage created is tiny – a fraction of that generated by a standard AA battery. "We have known about DC potential in the human ear for 60 years but no one has attempted to harness it," Stankovic says. Now, Stankovic and her colleagues have developed an electronic chip containing several tiny, low-resistance electrodes that can harness a small amount of this electrical activity without damaging hearing. © Copyright Reed Business Information Ltd.
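
For a sense of scale (standard electrophysiology background, not figures from this article): the voltage a single ion gradient can sustain across a membrane is given by the Nernst equation,

$$ E = \frac{RT}{zF}\,\ln\frac{[\text{ion}]_{\text{out}}}{[\text{ion}]_{\text{in}}} $$

where R is the gas constant, T the absolute temperature, z the ion's charge and F the Faraday constant. Typical potassium gradients yield only tens of millivolts; the cochlea's well-known endocochlear potential is roughly +80 millivolts, a small fraction of an AA battery's 1.5 volts, which is why harvesting it demands extremely low-resistance, low-power electronics.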

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17471 - Posted: 11.10.2012

When you hear the sound of a nail scratching a blackboard, the emotional and auditory parts of your brain are interacting with one another, a new study reveals. The heightened activity and interaction between the amygdala, which is active in processing negative emotions, and the auditory parts of the brain explain why some sounds are so unpleasant to hear, scientists at Newcastle University have found. "It appears there is something very primitive kicking in," said Dr. Sukhbinder Kumar, the paper’s author. "It’s a possible distress signal from the amygdala to the auditory cortex." Researchers at the Wellcome Trust Centre for Neuroimaging at UCL and Newcastle University used functional magnetic resonance imaging (fMRI) to examine how the brains of 13 volunteers responded to a range of sounds. Listening to the noises inside the scanner, the volunteers rated them from the most unpleasant, like the sound of a knife on a bottle, to the most pleasing, like bubbling water. Researchers were then able to study the brain response to each type of sound. "At the end of every sound, the volunteers told us by pressing a button how unpleasant they thought the sound was," Dr. Kumar said. Researchers found that the activity of the amygdala and the auditory cortex was directly proportional to the ratings of perceived unpleasantness. They concluded that the emotional part of the brain, the amygdala, in effect takes charge and modulates the activity of the auditory part of the brain, provoking our negative reaction. © CBC 2012

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 11: Emotions, Aggression, and Stress
Link ID: 17364 - Posted: 10.13.2012

By Jason G. Goldman My high school biology teacher once told me that nothing was binary in biology except for alive and dead, and pregnant and not pregnant. Any other variation, he said, existed along a continuum. Whether or not the claim is technically accurate, it serves to illustrate an important feature of biological life. That is, very little in the biological world falls neatly into categories. A new finding, published today in PLoS ONE by Gustavo Arriaga, Eric P. Zhou, and Erich D. Jarvis from Duke University, adds to the list of phenomena that scientists once thought were categorical but may, in fact, not be. The consensus among researchers was that, in general, animals divide neatly into two categories: singers and non-singers. The singers include songbirds, parrots, hummingbirds, humans, dolphins, whales, bats, elephants, sea lions and seals. What these species all have in common – and what distinguishes them from the non-singers of the animal world – is that they are vocal learners. That is, these species can change the composition of the sounds that emanate from the larynx (for mammals) or syrinx (for birds), both in terms of acoustic qualities such as pitch, and in terms of syntax (the particular ordering of the parts of the song). It is perhaps not surprising that songbirds and parrots have been extremely useful as models for understanding human speech and language acquisition. When other animals, such as monkeys or non-human apes, produce vocalizations, they are always innate, usually reflexive, and never learned. But is the vocal learner/non-learner dichotomy truly reflective of biological reality? Maybe not. It turns out that mice make things more complicated. © 2012 Scientific American

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 8: Hormones and Sex
Link ID: 17352 - Posted: 10.11.2012

by Sarah C. P. Williams Scientists have enabled deaf gerbils to hear again—with the help of transplanted cells that develop into nerves that can transmit auditory information from the ears to the brain. The advance, reported today in Nature, could be the basis for a therapy to treat various kinds of hearing loss. In humans, deafness is most often caused by damage to inner ear hair cells—so named because they sport hairlike cilia that bend when they encounter vibrations from sound waves—or by damage to the neurons that transmit that information to the brain. When the hair cells are damaged, those associated spiral ganglion neurons often begin to degenerate from lack of use. Implants can work in place of the hair cells, but if the sensory neurons are damaged, hearing is still limited. "Obviously the ultimate aim is to replace both cell types," says Marcelo Rivolta of the University of Sheffield in the United Kingdom, who led the new work. "But we already have cochlear implants to replace hair cells, so we decided the first priority was to start by targeting the neurons." In the past, scientists have tried to isolate so-called auditory stem cells from embryoid bodies—aggregates of stem cells that have begun to differentiate into different types. But such stem cells can only divide about 25 times, making it impossible to produce them in the quantity needed for a neuron transplant. © 2010 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 17252 - Posted: 09.13.2012

By PERRI KLASS, M.D. When children learn to play a musical instrument, they strengthen a range of auditory skills. Recent studies suggest that these benefits extend all through life, at least for those who continue to be engaged with music. But a study published last month is the first to show that music lessons in childhood may lead to changes in the brain that persist years after the lessons stop. Researchers at Northwestern University recorded the auditory brainstem responses of college students — that is to say, their electrical brain waves — in response to complex sounds. The group of students who reported musical training in childhood had more robust responses — their brains were better able to pick out essential elements, like pitch, in the complex sounds when they were tested. And this was true even if the lessons had ended years ago. Indeed, scientists are puzzling out the connections between musical training in childhood and language-based learning — for instance, reading. We aren’t talking here about the “Mozart effect,” the claim that listening to classical music can improve people’s performance on tests. Instead, these are studies of the effects of active engagement and discipline. This kind of musical training improves the brain’s ability to discern the components of sound — the pitch, the timing and the timbre. Copyright 2012 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 17242 - Posted: 09.11.2012

by Hal Hodson IF YOU can hear, you probably take sound for granted. Without thinking, we swing our attention in the direction of a loud or unexpected sound - the honk of a car horn, say. Because deaf people lack access to such potentially life-saving cues, a group of researchers from the Korea Advanced Institute of Science and Technology (KAIST) in Daejeon built a pair of glasses which allows the wearer to "see" when a loud sound is made, and gives an indication of where it came from. An array of seven microphones, mounted on the frame of the glasses, pinpoints the location of such sounds and relays that directional information to the wearer through a set of LEDs embedded inside the frame. The glasses will only flash alerts on sounds louder than a threshold level, which is defined by the wearer. Previous attempts at devices which could alert deaf users to surrounding noises have been ungainly. For example, research in 2003 at the University of California, Berkeley, used a computer monitor to provide users with a visual aid to pinpoint the location of a sound. The Korean team have not beaten this problem quite yet - the prototype requires a user to carry a laptop around in a backpack to process the signal. But lead researcher Yang-Hann Kim stresses that the device is a first iteration that will be miniaturised over the next few years. Richard Ladner at the University of Washington in Seattle questions whether the device would prove beneficial enough to gain acceptance. "Does the benefit of wearing such a device outweigh the inconvenience of having extra technology that is seldom needed?" he asks. © Copyright Reed Business Information Ltd.
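
The article does not spell out how the KAIST prototype turns microphone signals into a direction, but a common approach for microphone arrays is time-difference-of-arrival (TDOA) estimation: a sound front reaches each microphone at a slightly different moment, and the delay fixes the bearing. Below is a minimal two-microphone sketch in Python; the function names and parameters are illustrative, not taken from the KAIST system.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 °C

def tdoa_bearing(sig_left, sig_right, mic_spacing, sample_rate):
    """Estimate a source bearing from two microphone signals via
    time difference of arrival (TDOA).

    Returns the angle in degrees from broadside: 0 is straight
    ahead, positive angles are toward the right microphone."""
    # The lag of the cross-correlation peak is the delay, in
    # samples, of the left channel relative to the right one.
    corr = np.correlate(sig_left, sig_right, mode="full")
    lag = np.argmax(corr) - (len(sig_right) - 1)
    delay = lag / sample_rate  # seconds

    # Far-field geometry: delay = mic_spacing * sin(theta) / c.
    sin_theta = np.clip(delay * SPEED_OF_SOUND / mic_spacing, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Toy demo: a click that reaches the right mic 10 samples sooner,
# i.e. the source sits off to the wearer's right.
fs = 48_000
click = np.zeros(1024)
click[100] = 1.0
left = np.roll(click, 10)  # same click, arriving later on the left
print(f"bearing: {tdoa_bearing(left, click, 0.15, fs):.1f} degrees")
```

A real device would run this continuously on short frames, compare more microphone pairs to resolve front/back ambiguity, and light the LED nearest the estimated bearing only when the frame's loudness crosses the user-set threshold.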

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17233 - Posted: 09.07.2012

Tuning a piano also tunes the brain, say researchers who have seen structural changes within the brains of professional piano tuners. Researchers at University College London and Newcastle University found that listening to two notes played simultaneously makes the brain adapt. Brain scans revealed highly specific changes in the hippocampus, which governs memory and navigation. These correlated with the number of years tuners had been doing this job. The Wellcome Trust researchers used magnetic resonance imaging to compare the brains of 19 professional piano tuners - who play two notes simultaneously to make them pitch-perfect - and 19 other people. What they saw were highly specific changes in both the grey matter - the nerve cells where information processing takes place - and the white matter - the nerve connections - within the brains of the piano tuners. Investigator Sundeep Teki said: "We already know that musical training can correlate with structural changes, but our group of professionals offered a rare opportunity to examine the ability of the brain to adapt over time to a very specialised form of listening." Other researchers have noted similar hippocampal changes in taxi drivers as they build up the detailed information needed to find their way around London's labyrinth of streets. BBC © 2012

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 17216 - Posted: 08.29.2012

By Christie Wilcox Music has a remarkable ability to affect and manipulate how we feel. Simply listening to songs we like stimulates the brain’s reward system, creating feelings of pleasure and comfort. But music goes beyond our hearts to our minds, shaping how we think. Scientific evidence suggests that even a little music training when we’re young can shape how brains develop, improving the ability to differentiate sounds and speech. With education funding constantly on the rocks and tough economic times tightening many parents’ budgets, students often end up with only a few years of music education. Studies to date have focused on neurological benefits of sustained music training, and found many upsides. For example, researchers have found that musicians are better able to process foreign languages because of their ability to hear differences in pitch, and have incredible abilities to detect speech in noise. But what about the kids who only get sparse musical tutelage? Does picking up an instrument for a few years have any benefits? The answer from a study just published in the Journal of Neuroscience is a resounding yes. The team of researchers from Northwestern University’s Auditory Neuroscience Laboratory tested the responses of forty-five adults to different complex sounds ranging in pitch. The adults were grouped based on how much music training they had as children, either having no experience, one to five years of training, or six to eleven years of music instruction. © 2012 Scientific American

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 17188 - Posted: 08.22.2012

by Nicholas St. Fleur With their trumpet-like calls, elephants may seem like some of the loudest animals on Earth. But we can't hear most of the sounds they make. The creatures produce low-frequency noises between 1 and 20 hertz, known as infrasounds, that help them keep in touch over distances as large as 10 kilometers. A new study reveals for the first time how elephants produce these low notes. Scientists first discovered that elephants made infrasounds in the 1980s. The head female in a herd may produce the noises to guide her group's movements, whereas a male who’s in a mating state called musth might use the calls to thwart competition from other males. Mother elephants even rely on infrasounds to keep tabs on a separated calf, exchanging "I'm here" calls with the wayward offspring in a fashion similar to a game of Marco Polo. These noises, which fall below the hearing range for humans, are often accompanied by strong rumbles with slightly higher frequencies that people can hear. By recording the rumbles and then speeding up the playback, the scientists can increase the frequency of the infrasounds, making them audible. [Figure: Good vibrations. The vocal folds of the excised larynx vibrating according to the myoelastic-aerodynamic method.] Researchers have speculated that the noises come from vibrations in the vocal folds of the elephant larynx. This could happen in two ways. In the first, called active muscular contraction (AMC), neural signals cause the muscles in the larynx to contract in a constant rhythm. Cats do this when they purr. The second possibility is known as the myoelastic-aerodynamic (MEAD) method, and it occurs when air flows through the vocal folds, causing them to vibrate—this also happens when humans talk. © 2010 American Association for the Advancement of Science
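
The playback trick mentioned above is simple to reproduce: samples written out at N times their original rate play N times faster, multiplying every frequency by N. Here is a minimal sketch using a synthetic signal (the 15 Hz tone and file name are illustrative, not the researchers' recordings):

```python
import numpy as np
from scipy.io import wavfile

fs = 44_100                   # original sampling rate, Hz
t = np.arange(0, 5, 1 / fs)   # 5 seconds of samples

# Synthetic stand-in for an elephant rumble: a 15 Hz tone,
# below the ~20 Hz floor of human hearing.
rumble = 0.8 * np.sin(2 * np.pi * 15 * t)

# Writing the same samples with a 10x sampling rate makes them
# play 10x faster, shifting 15 Hz up to an audible 150 Hz (and
# shortening the 5 s clip to 0.5 s).
speedup = 10
wavfile.write("rumble_x10.wav", fs * speedup,
              (rumble * 32767).astype(np.int16))
```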

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17125 - Posted: 08.04.2012

by Nicholas St. Fleur A house fly couple settles down on the ceiling of a manure-filled cowshed for a romantic night of courtship and copulation. Unbeknownst to the infatuated insects, their antics have attracted the acute ears of a lurking Natterer's bat. But this eavesdropper is no pervert—he's a predator set on a two-for-one dinner special. As a new study reveals, the hungry bat swoops in on the unsuspecting flies, guided by the sound of their precoital "clicks." Previous studies of freshwater amphipods, water striders, and locusts have shown that mating can make animals more vulnerable to predators, but these studies did not determine why. A team from the Max Planck Institute for Ornithology in Germany, led by the late Björn Siemers, found that the bat-fly interactions in the cowshed provided clues for understanding what tips off a predator to a mating couple. The researchers observed a scene worthy of a teen horror film as Natterer's bats (Myotis nattereri) preyed on mating house flies (Musca domestica). Bats find prey primarily through two methods: echolocation and passive acoustics. For most bats, echolocation is the go-to tracking tool. They send out a series of high-frequency calls and listen for the echoes produced when the waves hit something. The researchers found that by using echolocation, bats could easily find and catch house flies midflight, yet they had difficulty hunting stationary house flies. © 2010 American Association for the Advancement of Science
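
As general background on the echolocation described here (not a detail of this particular study): the round-trip travel time of an echo fixes the distance to a target,

$$ r = \frac{c\,\Delta t}{2} $$

so with sound traveling at c ≈ 343 m/s, an echo returning Δt = 5 ms after the call places the target about 0.86 m away. Resolving such delays, call after call, is what lets a bat track a fly in midair; a motionless fly pressed against a cluttered ceiling, by contrast, returns echoes that blend into those of the background.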

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 8: Hormones and Sex
Link ID: 17085 - Posted: 07.24.2012

By WILLIAM J. BROAD Scientists have long known that man-made, underwater noises — from engines, sonars, weapons testing, and such industrial tools as air guns used in oil and gas exploration — are deafening whales and other sea mammals. The Navy estimates that loud booms from just its underwater listening devices, mainly sonar, result in temporary or permanent hearing loss for more than a quarter-million sea creatures every year, a number that is rising. Now, scientists have discovered that whales can decrease the sensitivity of their hearing to protect their ears from loud noise. Humans tend to do this with index fingers; scientists haven’t pinpointed how whales do it, but they have seen the first evidence of the behavior. “It’s equivalent to plugging your ears when a jet flies over,” said Paul E. Nachtigall, a marine biologist at the University of Hawaii who led the discovery team. “It’s like a volume control.” The finding, while preliminary, is already raising hopes for the development of warning signals that would alert whales, dolphins and other sea mammals to auditory danger. Peter Madsen, a professor of marine biology at Aarhus University in Denmark, said he applauded the Hawaiian team for its “elegant study” and the promise of innovative ways of “getting at some of the noise problems.” But he cautioned against letting the discovery slow global efforts to reduce the oceanic roar, which would aid the beleaguered sea mammals more directly. © 2012 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17048 - Posted: 07.17.2012

People who are born deaf process the sense of touch differently than people who are born with normal hearing, according to research funded by the National Institutes of Health. The finding reveals how the early loss of a sense — in this case hearing — affects brain development. It adds to a growing list of discoveries that confirm the impact of experiences and outside influences in molding the developing brain. The study is published in the July 11 online issue of The Journal of Neuroscience. The researchers, Christina M. Karns, Ph.D., a postdoctoral research associate in the Brain Development Lab at the University of Oregon, Eugene, and her colleagues, show that deaf people use the auditory cortex to process touch stimuli and visual stimuli to a much greater degree than occurs in hearing people. The finding suggests that since the developing auditory cortex of profoundly deaf people is not exposed to sound stimuli, it adapts and takes on additional sensory processing tasks. "This research shows how the brain is capable of rewiring in dramatic ways," said James F. Battey, Jr., M.D., Ph.D., director of the National Institute on Deafness and Other Communication Disorders (NIDCD). "This will be of great interest to other researchers who are studying multisensory processing in the brain." Previous research, including studies performed by the lab director, Helen Neville, Ph.D., has shown that people who are born deaf are better at processing peripheral vision and motion. Deaf people may process vision using many different brain regions, especially auditory areas, including the primary auditory cortex. However, no one has tackled whether vision and touch together are processed differently in deaf people, primarily because in experimental settings it is more difficult to produce the kind of precise tactile stimuli needed to answer this question.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 8: General Principles of Sensory Processing, Touch, and Pain
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 5: The Sensorimotor System
Link ID: 17035 - Posted: 07.12.2012

by Elizabeth Preston It's 20 million years ago in the forests of Argentina, and Homunculus patagonicus is on the move. The monkey travels quickly, swinging between tree branches as it goes. Scientists have a good idea of how Homunculus got around thanks to a new fossil analysis of its ear canals and those of 15 other ancient primates. These previously hidden passages reveal some surprises about the locomotion of extinct primates—including hints that our own ancestors spent their lives moving at a higher velocity than today's apes. Wherever skeletons of ancient primates exist, anthropologists have minutely analyzed arm, leg, and foot bones to learn about the animals' locomotion. Some of these primates seem to have bodies built for leaping. Others look like they moved more deliberately. But in species such as H. patagonicus, there's hardly anything to go on aside from skulls. That's where the inner ear canals come in. "The semicircular canals function essentially as angular accelerometers for the head," helping an animal keep its balance while its head jerks around, says Timothy Ryan, an anthropologist at Pennsylvania State University, University Park. In the new study, he and colleagues used computed tomography scans to peer inside the skulls of 16 extinct primates, spanning 35 million years of evolution, and reconstruct the architecture of their inner ears. © 2010 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 16951 - Posted: 06.23.2012

Children exposed to HIV in the womb may be more likely to experience hearing loss by age 16 than are their unexposed peers, according to scientists in a National Institutes of Health research network. The researchers estimated that hearing loss affects 9 to 15 percent of HIV-infected children and 5 to 8 percent of children who did not have HIV at birth but whose mothers had HIV infection during pregnancy. Study participants ranged from 7 to 16 years old. The researchers defined hearing loss as the level at which sounds could be detected, averaged over four frequencies important for speech understanding (500, 1000, 2000, and 4000 hertz), that was 20 decibels or higher than the normal hearing level for adolescents or young adults in either ear. “Children exposed to HIV before birth are at higher risk for hearing difficulty, and it's important for them—and the health providers who care for them—to be aware of this,” said George K. Siberry, M.D., of the Pediatric, Adolescent, and Maternal AIDS Branch of the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), the NIH institute that leads the research network. Compared to national averages for other children their age, children with HIV infection were about 200 to 300 percent more likely to have a hearing loss. Children whose mothers had HIV during pregnancy but who themselves were born without HIV were 20 percent more likely to have hearing loss. The study was published online in The Pediatric Infectious Disease Journal.
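
The study's threshold definition amounts to a four-frequency pure-tone average (PTA), a standard audiological measure; written out (the formula is background, not quoted from the paper):

$$ \text{PTA} = \frac{T_{500} + T_{1000} + T_{2000} + T_{4000}}{4} $$

where T_f is the softest detectable level, in decibels, at frequency f hertz. A child counted as having hearing loss if the PTA in either ear was 20 decibels or more above the normal-hearing reference level for adolescents and young adults.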

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 16947 - Posted: 06.21.2012

By Susan Milius ALBUQUERQUE — Unbeknownst to humans, peacocks may be having infrasonic conversations. New recordings reveal that males showing off their feathers make deep rumbling sounds that are too low pitched for humans to hear. Other peacocks hear it though, Angela Freeman reported June 13 at the annual meeting of the Animal Behavior Society. When she played recordings of the newly discovered sound to peafowl, females looked alert and males were likely to shriek out a (human-audible) call. Peacocks are thus the first birds known to make and perceive noises below human hearing, Freeman said. “Really exciting,” said Roslyn Dakin of Queen’s University in Kingston, Canada, who studies the visual allure of peacock courtship. If peacocks can rumble, she suspects that other birds may be able to, too. “I don’t think this is a weird case,” she said. Such infrasound, or noise below 20 hertz, extends below the limit of human hearing. Biologists watched creatures such as elephants for centuries before recording technology uncovered the infrasound side of those animal conversations. But making infrasound doesn’t always mean communicating with it. Recordings have picked up infrasound from another bird, the capercaillie, but playing back the sounds to those birds has so far revealed no sign that they hear or care about their own infrasound. Freeman, an animal behaviorist at the University of Manitoba, was inspired to make detailed recordings of male peacocks by her coauthor’s impression that their fanned-out feather display curved slightly forward like a shallow satellite dish. © Society for Science & the Public 2000 - 2012

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 8: Hormones and Sex
Link ID: 16936 - Posted: 06.20.2012

By Susan Milius New high-speed video of the tropical bats swooping toward various frogs and toads shows that the predators deploy a sequence of senses to update their judgment of prey during an attack to avoid eating a toxic amphibian, says behavioral ecologist Rachel Page of the Smithsonian Tropical Research Institute in Gamboa, Panama. The bats proved hard to fool even when researchers played the call of a favorite edible frog while offering up another species, Page and her colleagues report in an upcoming Naturwissenschaften. In the tropics, various bats will nab a frog if given half a chance, but only the fringe-lipped species (Trachops cirrhosus) is known to follow frog calls, such as the “tuuun chuck” call of the túngara frog (Engystomops pustulosus). In tests in Panama, Page and her colleagues found that fringe-lipped bats turned aside in mid-air if researchers broadcast enticing túngara calls but offered up a cane toad (Rhinella marina), which is way too big for a bat to carry off. The possibility that incoming bats might use echolocation to avoid overweight prey intrigues bat specialist Brock Fenton at the University of Western Ontario in Canada. Early studies of these bats largely ignored possible last-minute echolocation, he says. The new tests also revealed that playing túngara calls while offering a right-sized but toxic leaf litter toad (Rhinella alata) led bats to catch and then drop the unpleasant prey. (Both bats and toads survived.) © Society for Science & the Public 2000 - 2012

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 16840 - Posted: 05.26.2012

By Rebecca Cheung Constant low-level noise might cause hearing problems, a new study in rats finds. The discovery, published online May 15 in Nature Communications, suggests that extended exposure to noise at levels usually deemed safe for human ears could actually impair sound perception. The findings are “definitely a warning flag,” says study coauthor Michael Merzenich, an integrative neuroscientist at the University of California, San Francisco. He adds that it will be important to find out whether people employed at factories where continuous low-intensity noise is emitted throughout the workday experience similar consequences. “The big picture is that there is no safe sound,” says Jos Eggermont, an auditory neuroscientist at the University of Calgary in Canada. Even sounds considered safe can cause damage if delivered in a repetitive way, he says. “There might be not-so-subtle effects that accumulate and affect communication and speech understanding.” It’s common knowledge that sustained exposure to louder noises — such as that above 85 decibels — or brief exposures to very loud noises above 100 decibels can cause inner ear damage and hearing impairments. But until recently, the impact of chronic, quieter sound hasn’t been well studied. In the new study, Merzenich and his colleague Xiaoming Zhou of East China Normal University in Shanghai exposed adult rats to 65-decibel sound — roughly at the higher end of normal human speech volume — for 10 hours daily. © Society for Science & the Public 2000 - 2012
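
For readers comparing the decibel figures in this piece (65, 85, and 100 dB), sound pressure level is logarithmic; the standard definition is

$$ L_p = 20\,\log_{10}\frac{p}{p_0}, \qquad p_0 = 20\ \mu\text{Pa} $$

so every 20 dB step is a tenfold increase in sound pressure (a hundredfold in intensity). The 65 dB exposure used here therefore carries one-tenth the pressure, and one-hundredth the acoustic power, of the 85 dB level conventionally treated as the damage threshold.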

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 16802 - Posted: 05.16.2012

DULL fingers? Blame your genes. It has just been discovered that sensitivity to touch is heritable, and apparently linked to hearing as well. Gary Lewin and colleagues at the Max Delbrück Center for Molecular Medicine in Berlin, Germany, measured touch in 100 healthy pairs of fraternal and identical twins. They tested finger sensitivity in two ways: by response to a high-frequency vibration and by the ability to identify the orientation of a very fine grating. Lewin's team found that up to 50 per cent of the variation in sensitivity to touch was genetically determined. Audio tests also showed that those with good hearing were more likely to have sensitive touch. The link between the two is logical, as both touch and hearing rely on sensory cells that detect mechanical forces. Next the researchers studied touch sensitivity in students with congenital deafness. They found that 1 in 5 also had impaired touch, indicating that some genes causing deafness may also dull the sense of touch. When they looked at a subset of individuals who were deaf and blind due to Usher syndrome, they found that mutations in a single gene, USH2A, caused both the disease and reduced sensitivity to touch (PLoS Biology, DOI: 10.1371/journal.pbio.1001318). The next step is to try to identify more genes that affect our sense of touch. "There are many more genes than just the one we found," says Lewin, adding that finding them "will hopefully show us more about the biology of touch". © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 8: General Principles of Sensory Processing, Touch, and Pain
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 5: The Sensorimotor System
Link ID: 16785 - Posted: 05.14.2012

By ANNE EISENBERG DIGITAL hearing aids can do wonders for faded hearing. But other devices can help, too, as audio technology adds new options to help people converse at a noisy restaurant, or talk quietly with a pharmacist at a crowded drugstore counter. Richard Einhorn, a composer who suddenly lost much of his hearing two years ago, relies on his hearing aid, of course, for general use. But when he is meeting friends at a busy coffee shop — where his hearing aid is not always good at distinguishing their voices amid the clatter — he removes it. He has a better solution. He pops on a pair of in-ear earphones and snaps a directional mike on his iPhone, which has an app to amplify and process sound. “I put the iPhone on the table,” he said. “I point it at whoever’s talking, and I can have conversations with them. Soon we forget the iPhone is sitting there.” Mr. Einhorn’s ad hoc solution to restaurant racket is a feasible one, said Jay T. Rubinstein, a professor of bioengineering and otolaryngology at the University of Washington. “It makes sense when you need to capture a speaker’s voice in a noisy environment,” he said. “A system that gives you a high-quality directional mike and good earphones can help people hear in a complex setting.” © 2012 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 16755 - Posted: 05.07.2012