Links for Keyword: Hearing



Links 81 - 100 of 539

Julian Richards, deputy editor, newscientist.com Let's take it from the top again... Human singing stars these days rely on Auto-Tune technology to produce the right pitch, but this songbird does it the old way - by listening out for its own mistakes. And it's also smart enough to ignore notes that are too far off to be true. Brains monitor their owners' physical actions via the senses, and use this feedback to correct mistakes in those actions. Many models of learning assume that the bigger the perceived mistake, the bigger the correction will be. Samuel Sober at Emory University in Atlanta, Georgia, and Michael Brainard of the University of California, San Francisco, suspected that the system is a bit cleverer than that - otherwise, for instance, a bird might over-correct its singing if it confused external sounds with its own voice, or if its brain made a mistake in processing sounds. They decided to fool Bengalese finches into thinking that they were singing out of tune, and measured what happened at different levels of this apparent tone-deafness. To do this, they fitted the birds with the stylish headphones shown in the photo above and fed them back the sound of their own singing, processed to sound sharper than it really was. The researchers sharpened the birdsong by degrees ranging from a quarter-tone to one-and-a-half tones. They found that the birds learned to "correct" their pitch more accurately and more quickly when they heard a smaller mistake than when they heard a large one. It was also clear that the bird brains took "errors" seriously when they fell within the normal range of pitches in the bird's song: the birds seemed to ignore errors outside this range. © Copyright Reed Business Information Ltd

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 17627 - Posted: 12.22.2012
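
The pitch shifts in this experiment translate directly into frequency ratios: in equal temperament, a shift of n cents multiplies frequency by 2^(n/1200), so the study's quarter-tone to one-and-a-half-tone range corresponds to ratios of roughly 1.03 to 1.19. A minimal sketch of that arithmetic (assuming 12-tone equal temperament; the researchers' actual signal processing is not described in the excerpt):

```python
# A pitch shift of n cents multiplies frequency by 2 ** (n / 1200)
# (100 cents = one equal-tempered semitone, 200 cents = one whole tone).

def shift_ratio(cents: float) -> float:
    """Multiplicative frequency ratio for a pitch shift given in cents."""
    return 2 ** (cents / 1200)

# The study's range: a quarter-tone (50 cents) up to 1.5 tones (300 cents).
for label, cents in [("quarter-tone", 50), ("semitone", 100), ("1.5 tones", 300)]:
    print(f"{label:>12}: x{shift_ratio(cents):.4f}")
# quarter-tone: x1.0293, semitone: x1.0595, 1.5 tones: x1.1892
```

Even the largest shift tested is under a 19% change in frequency, which helps explain why the birds could plausibly mistake the sharpened feedback for their own voice.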

By Wynne Parry and LiveScience NEW YORK — While jazz musician Vijay Iyer played a piece on the piano, he wore an expression of intense concentration. Afterward, everyone wanted to know: What was going on in his head? The way this music is often taught, "they tell you, you must not be thinking when you are playing," Iyer said after finishing his performance of John Coltrane's "Giant Steps," a piece that requires improvisation. "I think that is an impoverished view of what thought is. … Thought is distributed through all of our actions." Iyer's performance opened a panel discussion on music and the mind at the New York Academy of Sciences on Wednesday (Dec. 13). Music elicits "a splash" of activity in many parts of the brain, said panelist Jamshed Bharucha, a neuroscientist and musician, after moderator Steve Paulson of the public radio program "To the Best of Our Knowledge" asked about the brain's response to music. "I think you are asking a question we can only scratch the surface of in terms of what goes on in the brain," Bharucha said. [Why Music Moves Us] Creativity in the brain scanner Charles Limb, a surgeon who studies the neuroscience of music, is attempting to better understand creativity by putting jazz musicians and rappers in a brain-imaging scanner called a functional MRI, which measures blood flow in the brain, and asking them to create music or rap once in there. © 2012 Scientific American

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 11: Emotions, Aggression, and Stress
Link ID: 17618 - Posted: 12.19.2012

By DOUGLAS MARTIN Dr. William F. House, a medical researcher who braved skepticism to invent the cochlear implant, an electronic device considered to be the first to restore a human sense, died on Dec. 7 at his home in Aurora, Ore. He was 89. The cause was metastatic melanoma, his daughter, Karen House, said. Dr. House pushed against conventional thinking throughout his career. Over the objections of some, he introduced the surgical microscope to ear surgery. Tackling a form of vertigo that doctors had believed was psychosomatic, he developed a surgical procedure that enabled the first American in space to travel to the moon. Peering at the bones of the inner ear, he found enrapturing beauty. Even after his ear-implant device had largely been supplanted by more sophisticated, and more expensive, devices, Dr. House remained convinced of his own version’s utility and advocated that it be used to help the world’s poor. Today, more than 200,000 people in the world have inner-ear implants, a third of them in the United States. A majority of young deaf children receive them, and most people with the implants learn to understand speech with no visual help. Hearing aids amplify sound to help the hearing-impaired. But many deaf people cannot hear at all because sound cannot be transmitted to their brains, however much it is amplified. This is because the delicate hair cells that line the cochlea, the liquid-filled spiral cavity of the inner ear, are damaged. When healthy, these hairs — more than 15,000 altogether — translate mechanical vibrations produced by sound into electrical signals and deliver them to the auditory nerve. Dr. House’s cochlear implant electronically translated sound into mechanical vibrations. His initial device, implanted in 1961, was eventually rejected by the body. But after refining its materials, he created a long-lasting version and implanted it in 1969. © 2012 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17610 - Posted: 12.17.2012

By WILLIAM J. BROAD When a hurricane forced the Nautilus to dive in Jules Verne’s “Twenty Thousand Leagues Under the Sea,” Captain Nemo took the submarine down to a depth of 25 fathoms, or 150 feet. There, to the amazement of the novel’s protagonist, Prof. Pierre Aronnax, no whisper of the howling turmoil could be heard. “What quiet, what silence, what peace!” he exclaimed. That was 1870. Today — to the dismay of whale lovers and friends of marine mammals, if not divers and submarine captains — the ocean depths have become a noisy place. The causes are human: the sonar blasts of military exercises, the booms from air guns used in oil and gas exploration, and the whine from fleets of commercial ships that relentlessly crisscross the global seas. Nature has its own undersea noises. But the new ones are loud and ubiquitous. Marine experts say the rising clamor is particularly dangerous to whales, which depend on their acute hearing to locate food and one another. To fight the din, the federal government is completing the first phase of what could become one of the world’s largest efforts to curb the noise pollution and return the sprawling ecosystem to a quieter state. The project, by the National Oceanic and Atmospheric Administration, seeks to document human-made noises in the ocean and transform the results into the world’s first large sound maps. The ocean visualizations use bright colors to symbolize the sounds radiating out through the oceanic depths, frequently over distances of hundreds of miles. © 2012 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17589 - Posted: 12.11.2012

Alla Katsnelson Human eyes, set as they are in front-facing sockets, give us a limited angle of view: we see what is directly in front of us, with only a few degrees of peripheral vision. But bats can broaden and narrow their 'visual field' by modulating the frequency of the squeaks they use to navigate and find prey, researchers in Denmark suggest today in Nature. Bats find their way through the night by emitting sonar signals and using the echoes that return to them to create a map of their surroundings — a process called echolocation. Researchers have long known that small bats emit higher-frequency squeaks than larger bats, and most assumed that the difference arises because the smaller animals must catch smaller insects, from which low-frequency sound waves with long wavelengths do not reflect well. That didn't sound right to Annemarie Surlykke, a neurobiologist who studies bat echolocation at the University of Southern Denmark in Odense. “When you look at the actual frequencies, small bats would be able to detect even the smallest prey they take with a much lower frequency,” she says. “So there must be another reason.” Surlykke and her colleagues decided to test the hypothesis by studying six related species of bat that varied in size. They captured the animals in the wild and set them loose in a flight room — a pitch-dark netted corridor 2.5 metres high, 4.8 metres wide and 7 metres long, rigged on all sides with microphones and infrared cameras. “It’s a pretty confined space, so this corresponds to flying close to vegetation,” says Surlykke. © 2012 Nature Publishing Group

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17536 - Posted: 11.26.2012

by Douglas Heaven All the better to hear you with, my dear. A chance discovery has revealed that some insects have evolved mammal-like ears, with an analogous three-part structure that includes a fluid-filled vessel similar to the mammalian cochlea. Fernando Montealegre-Z at the University of Lincoln, UK, and colleagues were studying the vibration of the tympanal membrane – a taut membrane that works like an eardrum – in the foreleg of Copiphora gorgonensis, a species of katydid from the South American rainforest, when they noticed tiny vibrations in the rigid cuticle behind the membrane. When they dissected the leg behind that membrane, they unexpectedly burst a vessel filled with high-pressure fluid. The team analysed the fluid to confirm that it was not part of the insect's circulatory system and concluded instead that it played a cochlea-like role in sound detection. In most insects, sound vibrations transmit directly to neuronal sensors which sit behind the tympanal membrane. Mammals have evolved tiny bones called ossicles that transfer vibrations from the eardrum to the fluid-filled cochlea. The analogous structure in the katydid is a vibrating plate, exposed to the air on one side and fluid on the other. Smallest ear In mammals, the cochlea analyses a sound's frequency – how high or low it is – and the new structure found by the team appears to do the same job. Spanning only 600 micrometres, it is the smallest known ear of its kind in nature. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17497 - Posted: 11.17.2012

by Elizabeth Norton Stop that noise! Many creatures, such as human babies, chimpanzees, and chicks, react negatively to dissonance—harsh, unstable, grating sounds. Since the days of the ancient Greeks, scientists have wondered why the ear prefers harmony. Now, scientists suggest that the reason may go deeper than an aversion to the way clashing notes abrade auditory nerves; instead, it may lie in the very structure of the ear and brain, which are designed to respond to the elegantly spaced structure of a harmonious sound. "Over the past century, researchers have tried to relate the perception of dissonance to the underlying acoustics of the signals," says psychoacoustician Marion Cousineau of the University of Montreal in Canada. In a musical chord, for example, several notes combine to produce a sound wave containing all of the individual frequencies of each tone. Specifically, the wave contains the base, or "fundamental," frequency for each note plus multiples of that frequency known as harmonics. Upon reaching the ear, these frequencies are carried by the auditory nerve to the brain. If the chord is harmonic, or "consonant," the notes are spaced neatly enough so that the individual fibers of the auditory nerve carry specific frequencies to the brain. By perceiving both the parts and the harmonious whole, the brain responds to what scientists call harmonicity. In a dissonant chord, however, some of the notes and their harmonics are so close together that two notes will stimulate the same set of auditory nerve fibers. This clash gives the sound a rough quality known as beating, in which the almost-equal frequencies interfere to create a warbling sound. Most researchers thought that phenomenon accounted for the unpleasantness of a dissonance. © 2010 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17486 - Posted: 11.13.2012
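
The "beating" described above falls out of simple trigonometry: two tones of nearly equal frequency sum to a carrier at their mean frequency whose loudness waxes and wanes at the difference frequency. A minimal simulation (the specific frequencies are chosen for illustration and are not taken from the study):

```python
import numpy as np

# Two sines of nearly equal frequency sum to a carrier at the mean
# frequency whose amplitude waxes and wanes at the beat rate |f1 - f2|.
fs = 8000                      # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)  # one second of audio
f1, f2 = 440.0, 444.0          # a mistuned unison, 4 Hz apart
signal = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Trig identity: signal = 2 * cos(pi*(f1-f2)*t) * sin(pi*(f1+f2)*t),
# i.e. a 442 Hz tone inside an envelope that beats |f1 - f2| times per second.
print(f"beat rate: {abs(f1 - f2)} Hz")  # prints: beat rate: 4.0 Hz
```

Writing `signal` to a sound file and listening to it reproduces the rough, warbling quality the article attributes to dissonant intervals.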

by Will Ferguson For the first time, an electrical device has been powered by the ear alone. The team behind the technology used a natural electrochemical gradient in cells within the inner ear of a guinea pig to power a wireless transmitter for up to five hours. The technique could one day provide an autonomous power source for brain and cochlear implants, says Tina Stankovic, an auditory neuroscientist at Harvard University Medical School in Boston, Massachusetts. Nerve cells use the movement of positively charged sodium ions and negatively charged potassium ions across a membrane to create an electrochemical gradient that drives neural signals. Some cells in the cochlea have the same kind of gradient, which is used to convert the mechanical force of the vibrating eardrum into electrical signals that the brain can understand. Tiny voltage A major challenge in tapping such electrical potential is that the voltage created is tiny – a fraction of that generated by a standard AA battery. "We have known about DC potential in the human ear for 60 years but no one has attempted to harness it," Stankovic says. Now, Stankovic and her colleagues have developed an electronic chip containing several tiny, low resistance electrodes that can harness a small amount of this electrical activity without damaging hearing. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17471 - Posted: 11.10.2012

When you hear the sound of a nail scratching a blackboard, the emotional and auditory parts of your brain are interacting with one another, a new study reveals. The heightened activity and interaction between the amygdala, which is active in processing negative emotions, and the auditory parts of the brain explain why some sounds are so unpleasant to hear, scientists at Newcastle University have found. "It appears there is something very primitive kicking in," said Dr. Sukhbinder Kumar, the paper’s author. "It’s a possible distress signal from the amygdala to the auditory cortex." Researchers at the Wellcome Trust Centre for Neuroimaging at UCL and Newcastle University used functional magnetic resonance imaging (fMRI) to examine how the brains of 13 volunteers responded to a range of sounds. Listening to the noises inside the scanner, the volunteers rated them from the most unpleasant, like the sound of a knife on a bottle, to the most pleasing, like bubbling water. Researchers were then able to study the brain response to each type of sound. "At the end of every sound, the volunteers told us by pressing a button how unpleasant they thought the sound was," Dr. Kumar said. Researchers found that the activity of the amygdala and the auditory cortex were directly proportional to the ratings of perceived unpleasantness. They concluded that the emotional part of the brain, the amygdala, in effect takes charge and modulates the activity of the auditory part of the brain, provoking our negative reaction. © CBC 2012

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 11: Emotions, Aggression, and Stress
Link ID: 17364 - Posted: 10.13.2012

By Jason G. Goldman My high school biology teacher once told me that nothing was binary in biology except for alive and dead, and pregnant and not pregnant. Any other variation, he said, existed along a continuum. Whether or not the claim is technically accurate, it serves to illustrate an important feature of biological life. That is, very little in the biological world falls neatly into categories. A new finding, published today in PLoS ONE by Gustavo Arriaga, Eric P. Zhou, and Erich D. Jarvis from Duke University adds to the list of phenomena that scientists once thought were categorical but may, in fact, not be. The consensus among researchers was that, in general, animals divide neatly into two categories: singers and non-singers. The singers include songbirds, parrots, hummingbirds, humans, dolphins, whales, bats, elephants, sea lions and seals. What these species all have in common – and what distinguishes them from the non-singers of the animal world – is that they are vocal learners. That is, these species can change the composition of the sounds that emanate from the larynx (for mammals) or syrinx (for birds), both in terms of the acoustic qualities such as pitch, and in terms of syntax (the particular ordering of the parts of the song). It is perhaps not surprising that songbirds and parrots have been extremely useful as models for understanding human speech and language acquisition. When other animals, such as monkeys or non-human apes, produce vocalizations, they are always innate, usually reflexive, and never learned. But is the vocal learner/non-learner dichotomy truly reflective of biological reality? Maybe not. It turns out that mice make things more complicated. © 2012 Scientific American

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 8: Hormones and Sex
Link ID: 17352 - Posted: 10.11.2012

by Sarah C. P. Williams Scientists have enabled deaf gerbils to hear again—with the help of transplanted cells that develop into nerves that can transmit auditory information from the ears to the brain. The advance, reported today in Nature, could be the basis for a therapy to treat various kinds of hearing loss. In humans, deafness is most often caused by damage to inner ear hair cells—so named because they sport hairlike cilia that bend when they encounter vibrations from sound waves—or by damage to the neurons that transmit that information to the brain. When the hair cells are damaged, those associated spiral ganglion neurons often begin to degenerate from lack of use. Implants can work in place of the hair cells, but if the sensory neurons are damaged, hearing is still limited. "Obviously the ultimate aim is to replace both cell types," says Marcelo Rivolta of the University of Sheffield in the United Kingdom, who led the new work. "But we already have cochlear implants to replace hair cells, so we decided the first priority was to start by targeting the neurons." In the past, scientists have tried to isolate so-called auditory stem cells from embryoid bodies—aggregates of stem cells that have begun to differentiate into different types. But such stem cells can only divide about 25 times, making it impossible to produce them in the quantity needed for a neuron transplant. © 2010 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 17252 - Posted: 09.13.2012

By PERRI KLASS, M.D. When children learn to play a musical instrument, they strengthen a range of auditory skills. Recent studies suggest that these benefits extend all through life, at least for those who continue to be engaged with music. But a study published last month is the first to show that music lessons in childhood may lead to changes in the brain that persist years after the lessons stop. Researchers at Northwestern University recorded the auditory brainstem responses of college students — that is to say, their electrical brain waves — in response to complex sounds. The group of students who reported musical training in childhood had more robust responses — their brains were better able to pick out essential elements, like pitch, in the complex sounds when they were tested. And this was true even if the lessons had ended years ago. Indeed, scientists are puzzling out the connections between musical training in childhood and language-based learning — for instance, reading. Learning to play an instrument may confer some unexpected benefits, recent studies suggest. We aren’t talking here about the “Mozart effect,” the claim that listening to classical music can improve people’s performance on tests. Instead, these are studies of the effects of active engagement and discipline. This kind of musical training improves the brain’s ability to discern the components of sound — the pitch, the timing and the timbre. Copyright 2012 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 17242 - Posted: 09.11.2012

by Hal Hodson IF YOU can hear, you probably take sound for granted. Without thinking, we swing our attention in the direction of a loud or unexpected sound - the honk of a car horn, say. Because deaf people lack access to such potentially life-saving cues, a group of researchers from the Korea Advanced Institute of Science and Technology (KAIST) in Daejeon built a pair of glasses which allows the wearer to "see" when a loud sound is made, and gives an indication of where it came from. An array of seven microphones, mounted on the frame of the glasses, pinpoints the location of such sounds and relays that directional information to the wearer through a set of LEDs embedded inside the frame. The glasses will only flash alerts on sounds louder than a threshold level, which is defined by the wearer. Previous attempts at devices which could alert deaf users to surrounding noises have been ungainly. For example, research in 2003 at the University of California, Berkeley, used a computer monitor to provide users with a visual aid to pinpoint the location of a sound. The Korean team have not beaten this problem quite yet - the prototype requires a user to carry a laptop around in a backpack to process the signal. But lead researcher Yang-Hann Kim stresses that the device is a first iteration that will be miniaturised over the next few years. Richard Ladner at the University of Washington in Seattle questions whether the device would prove beneficial enough to gain acceptance. "Does the benefit of wearing such a device outweigh the inconvenience of having extra technology that is seldom needed?" he asks. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17233 - Posted: 09.07.2012
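
The article does not say how the KAIST prototype computes direction, but a standard approach for a microphone array is time-difference-of-arrival: sound reaches the nearer microphone first, and that delay fixes the bearing. A minimal two-microphone sketch of the general technique (geometry, sample rate, and test signal all invented for illustration; this is not the KAIST algorithm):

```python
import numpy as np

# Direction from time-difference-of-arrival (TDOA) for two microphones
# separated by distance d: delay = d * sin(theta) / c, so
# theta = arcsin(c * delay / d). Estimate the delay by cross-correlation.

c = 343.0      # speed of sound, m/s
d = 0.15       # mic spacing, m (roughly the width of a glasses frame)
fs = 48000     # sample rate, Hz

# Simulate a short noise burst arriving from 30 degrees off-axis.
rng = np.random.default_rng(0)
burst = rng.standard_normal(1024)
true_theta = np.deg2rad(30)
delay_samples = int(round(d * np.sin(true_theta) / c * fs))  # whole samples

left = np.concatenate([burst, np.zeros(delay_samples)])
right = np.concatenate([np.zeros(delay_samples), burst])     # arrives later

# Cross-correlate to find the lag of right relative to left,
# then invert the geometry to recover the angle.
corr = np.correlate(right, left, mode="full")
lag = corr.argmax() - (len(left) - 1)
theta_est = np.degrees(np.arcsin(c * lag / fs / d))
print(f"estimated bearing: {theta_est:.1f} degrees")
```

Quantizing the delay to whole samples limits angular resolution, which is one reason real arrays use more microphones, higher sample rates, or sub-sample interpolation.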

Tuning a piano also tunes the brain, say researchers who have seen structural changes within the brains of professional piano tuners. Researchers at University College London and Newcastle University found listening to two notes played simultaneously makes the brain adapt. Brain scans revealed highly specific changes in the hippocampus, which governs memory and navigation. These correlated with the number of years tuners had been doing this job. The Wellcome Trust researchers used magnetic resonance imaging to compare the brains of 19 professional piano tuners - who play two notes simultaneously to make them pitch-perfect - and 19 other people. What they saw was highly specific changes in both the grey matter - the nerve cells where information processing takes place - and the white matter - the nerve connections - within the brains of the piano tuners. Investigator Sundeep Teki said: "We already know that musical training can correlate with structural changes, but our group of professionals offered a rare opportunity to examine the ability of the brain to adapt over time to a very specialised form of listening." Other researchers have noted similar hippocampal changes in taxi drivers as they build up detailed information needed to find their way around London's labyrinth of streets. BBC © 2012

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 17216 - Posted: 08.29.2012

By Christie Wilcox Music has a remarkable ability to affect and manipulate how we feel. Simply listening to songs we like stimulates the brain’s reward system, creating feelings of pleasure and comfort. But music goes beyond our hearts to our minds, shaping how we think. Scientific evidence suggests that even a little music training when we’re young can shape how brains develop, improving the ability to differentiate sounds and speech. With education funding constantly on the rocks and tough economic times tightening many parents’ budgets, students often end up with only a few years of music education. Studies to date have focused on neurological benefits of sustained music training, and found many upsides. For example, researchers have found that musicians are better able to process foreign languages because of their ability to hear differences in pitch, and have incredible abilities to detect speech in noise. But what about the kids who only get sparse musical tutelage? Does picking up an instrument for a few years have any benefits? The answer from a study just published in the Journal of Neuroscience is a resounding yes. The team of researchers from Northwestern University’s Auditory Neuroscience Laboratory tested the responses of forty-five adults to different complex sounds ranging in pitch. The adults were grouped based on how much music training they had as children, either having no experience, one to five years of training, or six to eleven years of music instruction. © 2012 Scientific American

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 17188 - Posted: 08.22.2012

by Nicholas St. Fleur With their trumpet-like calls, elephants may seem like some of the loudest animals on Earth. But we can't hear most of the sounds they make. The creatures produce low-frequency noises between 1 and 20 hertz, known as infrasounds, that help them keep in touch over distances as large as 10 kilometers. A new study reveals for the first time how elephants produce these low notes. Scientists first discovered that elephants made infrasounds in the 1980s. The head female in a herd may produce the noises to guide her group's movements, whereas a male who’s in a mating state called musth might use the calls to thwart competition from other males. Mother elephants even rely on infrasounds to keep tabs on a separated calf, exchanging "I'm here" calls with the wayward offspring in a fashion similar to a game of Marco Polo. These noises, which fall below the hearing range for humans, are often accompanied by strong rumbles with slightly higher frequencies that people can hear. By recording the rumbles and then speeding up the playback, the scientists can increase the frequency of the infrasounds, making them audible. [Figure caption: Good vibrations. The vocal folds of the excised larynx vibrating according to the myoelastic-aerodynamic method.] Researchers have speculated that the noises come from vibrations in the vocal folds of the elephant larynx. This could happen in two ways. In the first, called active muscular contraction (AMC), neural signals cause the muscles in the larynx to contract in a constant rhythm. Cats do this when they purr. The second possibility is known as the myoelastic-aerodynamic (MEAD) method, and it occurs when air flows through the vocal folds causing them to vibrate—this also happens when humans talk. © 2010 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17125 - Posted: 08.04.2012
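
The playback trick mentioned above works because playing samples back at k times the recorded rate scales every frequency component by k: a 15 Hz rumble replayed at triple speed becomes an audible 45 Hz tone. A small sketch (the tone and rates are invented for illustration, not taken from the study):

```python
import numpy as np

# A 15 Hz "infrasound" tone sampled at 8 kHz for two seconds.
fs = 8000
t = np.arange(0, 2.0, 1 / fs)
rumble = np.sin(2 * np.pi * 15 * t)

# "Playing back" at 3x speed amounts to declaring the sample rate to be
# 3 * fs; every frequency component is multiplied by 3.
speedup = 3
playback_fs = fs * speedup

# Verify with an FFT: the spectral peak moves from 15 Hz to 45 Hz.
spectrum = np.abs(np.fft.rfft(rumble))
freqs_orig = np.fft.rfftfreq(len(rumble), 1 / fs)
freqs_fast = np.fft.rfftfreq(len(rumble), 1 / playback_fs)
print(freqs_orig[spectrum.argmax()])  # 15.0
print(freqs_fast[spectrum.argmax()])  # 45.0
```

The same samples describe both signals; only the declared sample rate changes, which is why speeding up a recording raises its pitch.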

by Nicholas St. Fleur A house fly couple settles down on the ceiling of a manure-filled cowshed for a romantic night of courtship and copulation. Unbeknownst to the infatuated insects, their antics have attracted the acute ears of a lurking Natterer's bat. But this eavesdropper is no pervert—he's a predator set on a two-for-one dinner special. As a new study reveals, the hungry bat swoops in on the unsuspecting flies, guided by the sound of their precoital "clicks." Previous studies of freshwater amphipods, water striders, and locusts have shown that mating can make animals more vulnerable to predators, but these studies did not determine why. A team from the Max Planck Institute for Ornithology in Germany, led by the late Björn Siemers, found that the bat-fly interactions in the cowshed provided clues for understanding what tips off a predator to a mating couple. The researchers observed a teenage horror film-like scene as Natterer's bats (Myotis nattereri) preyed on mating house flies (Musca domestica). Bats find prey primarily through two methods: echolocation and passive acoustics. For most bats, echolocation is the go-to tracking tool. They send out a series of high frequency calls and listen for the echoes produced when the waves hit something. The researchers found that by using echolocation, bats could easily find and catch house flies midflight, yet they had difficulty hunting stationary house flies. © 2010 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 8: Hormones and Sex
Link ID: 17085 - Posted: 07.24.2012

By WILLIAM J. BROAD Scientists have long known that man-made, underwater noises — from engines, sonars, weapons testing, and such industrial tools as air guns used in oil and gas exploration — are deafening whales and other sea mammals. The Navy estimates that loud booms from just its underwater listening devices, mainly sonar, result in temporary or permanent hearing loss for more than a quarter-million sea creatures every year, a number that is rising. Now, scientists have discovered that whales can decrease the sensitivity of their hearing to protect their ears from loud noise. Humans tend to do this with index fingers; scientists haven’t pinpointed how whales do it, but they have seen the first evidence of the behavior. “It’s equivalent to plugging your ears when a jet flies over,” said Paul E. Nachtigall, a marine biologist at the University of Hawaii who led the discovery team. “It’s like a volume control.” The finding, while preliminary, is already raising hopes for the development of warning signals that would alert whales, dolphins and other sea mammals to auditory danger. Peter Madsen, a professor of marine biology at Aarhus University in Denmark, said he applauded the Hawaiian team for its “elegant study” and the promise of innovative ways of “getting at some of the noise problems.” But he cautioned against letting the discovery slow global efforts to reduce the oceanic roar, which would aid the beleaguered sea mammals more directly. © 2012 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17048 - Posted: 07.17.2012

People who are born deaf process the sense of touch differently than people who are born with normal hearing, according to research funded by the National Institutes of Health. The finding reveals how the early loss of a sense — in this case hearing — affects brain development. It adds to a growing list of discoveries that confirm the impact of experiences and outside influences in molding the developing brain. The study is published in the July 11 online issue of The Journal of Neuroscience. The researchers, Christina M. Karns, Ph.D., a postdoctoral research associate in the Brain Development Lab at the University of Oregon, Eugene, and her colleagues, show that deaf people use the auditory cortex to process touch stimuli and visual stimuli to a much greater degree than hearing people do. The finding suggests that since the developing auditory cortex of profoundly deaf people is not exposed to sound stimuli, it adapts and takes on additional sensory processing tasks. "This research shows how the brain is capable of rewiring in dramatic ways," said James F. Battey, Jr., M.D., Ph.D., director of the NIDCD. "This will be of great interest to other researchers who are studying multisensory processing in the brain." Previous research, including studies performed by the lab's director, Helen Neville, Ph.D., has shown that people who are born deaf are better at processing peripheral vision and motion. Deaf people may process vision using many different brain regions, especially auditory areas, including the primary auditory cortex. However, no one had examined whether vision and touch together are processed differently in deaf people, primarily because in experimental settings it is difficult to produce the kind of precise tactile stimuli needed to answer this question.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 8: General Principles of Sensory Processing, Touch, and Pain
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 5: The Sensorimotor System
Link ID: 17035 - Posted: 07.12.2012

by Elizabeth Preston It's 20 million years ago in the forests of Argentina, and Homunculus patagonicus is on the move. The monkey travels quickly, swinging between tree branches as it goes. Scientists have a good idea of how Homunculus got around thanks to a new fossil analysis of its ear canals and those of 15 other ancient primates. These previously hidden passages reveal some surprises about the locomotion of extinct primates—including hints that our own ancestors spent their lives moving at a higher velocity than today's apes. Wherever skeletons of ancient primates exist, anthropologists have minutely analyzed arm, leg, and foot bones to learn about the animals' locomotion. Some of these primates seem to have had bodies built for leaping. Others look like they moved more deliberately. But in species such as H. patagonicus, there's hardly anything to go on aside from skulls. That's where the inner ear canals come in. "The semicircular canals function essentially as angular accelerometers for the head," helping an animal keep its balance while its head jerks around, says Timothy Ryan, an anthropologist at Pennsylvania State University, University Park. In the new study, he and colleagues used computed tomography scans to peer inside the skulls of 16 extinct primates, spanning 35 million years of evolution, and reconstruct the architecture of their inner ears. © 2010 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 16951 - Posted: 06.23.2012