Links for Keyword: Hearing




Elizabeth Pennisi Dolphins and bats don't have much in common, but they share a superpower: Both hunt their prey by emitting high-pitched sounds and listening for the echoes. Now, a study shows that this ability arose independently in each group of mammals from the same genetic mutations. The work suggests that evolution sometimes arrives at new traits through the same sequence of steps, even in very different animals. The research also implies that this convergent evolution is common—and hidden—within genomes, potentially complicating the task of deciphering some evolutionary relationships between organisms. Nature is full of examples of convergent evolution, wherein very distantly related organisms wind up looking alike or having similar skills and traits: Birds, bats, and insects all have wings, for example. Biologists have assumed that these novelties were devised, on a genetic level, in fundamentally different ways. That was also the case for two kinds of bats and for toothed whales (a group that includes dolphins and certain whales), which have converged on a specialized hunting strategy called echolocation. Until recently, biologists had thought that different genes drove each instance of echolocation and that the relevant proteins could change in innumerable ways to take on new functions. But in 2010, Stephen Rossiter, an evolutionary biologist at Queen Mary, University of London, and his colleagues determined that both types of echolocating bats, as well as dolphins, had the same mutations in a particular protein called prestin, which affects the sensitivity of hearing. Looking at other genes known to be involved in hearing, they and other researchers found several others whose proteins were similarly changed in these mammals. © 2012 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18608 - Posted: 09.05.2013

by Jennifer Viegas Goldfish not only listen to music, but they also can distinguish one composer from another, a new study finds. The paper adds to the growing body of evidence that many different animals understand music. For the study, published in the journal Behavioural Processes, Kazutaka Shinozuka and colleagues Haruka Ono and Shigeru Watanabe played two pieces of classical music near goldfish in a tank. The pieces were Toccata and Fugue in D minor by Johann Sebastian Bach and The Rite of Spring by Igor Stravinsky. The scientists trained the fish to gnaw on a little bead hanging on a filament in the water. Half of the fish were trained with food to gnaw whenever Bach played and the other half were taught to gnaw whenever Stravinsky music was on. The goldfish aced the test, easily distinguishing the two composers and getting a belly full of food in the process. The fish were more interested in the vittles than the music, but earlier studies on pigeons and songbirds suggest that Bach is the preferred choice, at least for birds. “These pieces can be classified as classical (Bach) and modern (Stravinsky) music,” Shinozuka explained. “Previously we demonstrated that Java sparrows preferred classical over modern music. Also, we demonstrated Java sparrows could discriminate between consonance and dissonance.” © 2013 Discovery Communications, LLC.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 18600 - Posted: 09.03.2013

by Michael Marshall Life is tough when you're small. It's not just about getting trodden on by bigger animals. Some of the tiniest creatures struggle to make their bodies work properly. This leads to problems that we great galumphing humans will never experience. For instance, the smallest frogs are prone to drying out because water evaporates so quickly from their skin. Miniature animals can't have many offspring, because there is no room in their bodies to grow them. One tiny spider has even had to let its brain spill into its legs, because its head is too small to accommodate it. Gardiner's Seychelles frog is one of the smallest vertebrates known to exist, at just 11 millimetres long. Its tiny head is missing parts of its ears, which means it shouldn't be able to hear anything. It can, though, and that is thanks to its big mouth. One of only two species in the genus Sechellophryne, and one of just four in its family, Gardiner's Seychelles frog is a true rarity. It is confined to a few square kilometres of two islands in the Seychelles, and even if you visit its habitat you're unlikely to see it. That's because the frog spends most of its time in moist leaf litter, so that it doesn't dry out. It eats tiny insects and other invertebrates. When it comes to hearing, it is sadly under-equipped. Unlike most frogs, it doesn't have an external eardrum. Inside its head, it does have the amphibian equivalent of a cochlea, which is the bit that actually detects sounds. But it doesn't have a middle ear to transmit the sound to the cochlea, and is also missing a bone called the columella that would normally help carry the sound. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18599 - Posted: 09.03.2013

By Dina Fine Maron Using sensors tucked inside the ears of live gerbils, researchers from Columbia University are providing critical insights into how the ear processes sound. In particular, the researchers have uncovered new evidence on how the cochlea, a coiled portion of the inner ear, processes and amplifies sound. The findings could lay the initial building blocks for better hearing aids and implants. The research could also help settle a long-simmering debate: Do the inner workings of the ear function somewhat passively, with sound waves traveling into the cochlea, bouncing along sensory tissue, and slowing as they encounter resistance until they are boosted and processed into sound? Or does the cochlea actively amplify sound waves? The study, published in Biophysical Journal, suggests the latter is the case. The team, led by Elizabeth Olson, a biomedical engineer at Columbia University, used sensors that simultaneously measured small pressure fluctuations and cell-generated voltages within the ear. The sensors allowed the researchers to pick up phase shifts—a change in the alignment of the vibrations of the sound waves within the ear—suggesting that some part of the ear was amplifying sound. What causes that phase shift is still unclear, although the researchers think the power behind the phase shift comes from the outer hair cells. Apparently the hair cells’ movement serves to localize and sharpen the frequency region of amplification. The researchers wrote that the mechanism appears to be akin to a child swinging on the playground. If somebody pushes a swing just once, the oscillations will eventually die out. If a child pumps her legs at certain times, however, it will put energy into the oscillations—that is power amplification at work. © 2013 Scientific American
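The swing analogy is standard resonance physics: energy injected in phase with the motion offsets damping and raises the steady-state amplitude. A minimal numerical sketch of that idea (with hypothetical parameters, not the authors' cochlear model) compares a passive damped oscillator driven at resonance with one that also receives a velocity-proportional push, the role attributed here to the outer hair cells:

```python
import math

def peak_amplitude(active_gain=0.0, steps=20000, dt=1e-4):
    """Drive a damped oscillator at resonance; return its peak displacement.

    `active_gain` injects force in phase with velocity, like a child
    pumping a swing at the right moments (or, in the analogy, outer
    hair cells feeding energy back into the vibration).
    """
    omega = 2 * math.pi * 50.0     # natural frequency, rad/s (hypothetical)
    damping = 2 * 0.05 * omega     # passive damping coefficient
    x, v, peak = 0.0, 0.0, 0.0
    for i in range(steps):
        drive = math.sin(omega * i * dt)   # external "sound" forcing
        a = drive + (active_gain - damping) * v - omega ** 2 * x
        v += a * dt                        # semi-implicit Euler step
        x += v * dt
        peak = max(peak, abs(x))
    return peak

passive = peak_amplitude()                 # no energy injection
active = peak_amplitude(active_gain=15.7)  # cancels about half the damping
# Cancelling half the damping roughly doubles the settled amplitude:
# pushing in phase with the motion is power amplification at work.
```

Because the forcing is at the resonant frequency, the steady-state amplitude scales inversely with the effective damping, so halving the damping about doubles the response.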

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18544 - Posted: 08.22.2013

By C. CLAIBORNE RAY. A widely discussed 2006 study of transit noise in New York City, measuring noise on buses as well as in subway cars and on platforms, was described at the time as the first such formal study published since the 1930s. Done by scientists at the Mailman School of Public Health at Columbia University and published in The Journal of Urban Health, the study concluded that noise levels at subway and bus stops could easily exceed recognized public health recommendations and had the potential to damage hearing, given sufficient exposure. For example, guidelines from the Environmental Protection Agency and the World Health Organization set a limit of 45 minutes’ exposure to 85 decibels, the mean noise level measured on subway platforms. And nearly 60 percent of the platform measurements exceeded that level. The maximum noise levels inside subway cars were even higher than those on the platforms, with one-fifth exceeding 100 decibels and more than two-thirds exceeding 90 decibels. The study recommended properly fitted earplugs and earmuff-type protectors in loud transit environments, saying they could cut noise levels significantly at the eardrum. And it warned that personal listening devices only increased the total noise and risk. © 2013 The New York Times Company
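Guidelines like these trade sound level against time. Under the widely used 3-decibel "exchange rate" (the NIOSH occupational convention, which is not necessarily the exact EPA/WHO method the study cites, and is more permissive), every 3 dB increase in level halves the allowable exposure time. A sketch of that arithmetic:

```python
def safe_exposure_minutes(level_db, ref_db=85.0, ref_minutes=480.0,
                          exchange_db=3.0):
    """Permissible exposure time under an equal-energy exchange rate.

    Reference: the NIOSH criterion allows 85 dB for 8 hours (480 min);
    each `exchange_db` above the reference halves the allowed time.
    """
    return ref_minutes / 2 ** ((level_db - ref_db) / exchange_db)

# 85 dB (the mean platform level in the study) -> 480 min under NIOSH;
# 100 dB (the loudest subway cars) -> just 15 min. The WHO/EPA guideline
# cited in the article is stricter still, allowing only 45 min at 85 dB.
```

The halving rule explains why the in-car maxima above 100 decibels are the real concern: a 15 dB rise over the platform mean cuts the permissible time by a factor of 32.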

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18495 - Posted: 08.13.2013

by Tanya Lewis, LiveScience In waters from Florida to the Caribbean, dolphins are showing up stranded or entangled in fishing gear with an unusual problem: They can't hear. More than half of stranded bottlenose dolphins are deaf, one study suggests. The causes of hearing loss in dolphins aren't always clear, but aging, shipping noise and side effects from antibiotics could play roles. "We're at a stage right now where we're determining the extent of hearing loss [in dolphins], and figuring out all the potential causes," said Judy St. Leger, director of pathology and research at SeaWorld in San Diego. "The better we understand that, the better we have a sense of what we should be doing [about it]." Whether the hearing loss is causing the dolphin strandings -- for instance, by steering the marine mammals in the wrong direction or preventing them from finding food -- is also still an open question. Dolphins are a highly social species. They use echolocation to orient themselves by bouncing high-pitched sound waves off of objects in their environment. They also "speak" to one another in a language of clicks and buzzing sounds. Because hearing is so fundamental to dolphins' survival, losing it can be detrimental. A 2010 study found that more than half of stranded bottlenose dolphins and more than a third of stranded rough-toothed dolphins had severe hearing loss. The animals' hearing impairment may have been a critical factor in their strandings, and all rescued cetaceans should be tested, the researchers said in the study, detailed in the journal PLOS ONE. © 2013 Discovery Communications, LLC.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18447 - Posted: 08.03.2013

Researchers have found in mice that supporting cells in the inner ear, once thought to serve only a structural role, can actively help repair damaged sensory hair cells, the functional cells that turn vibrations into the electrical signals that the brain recognizes as sound. The study in the July 25, 2013 online edition of the Journal of Clinical Investigation reveals the rescuing act that supporting cells and a chemical they produce called heat shock protein 70 (HSP70) appear to play in protecting damaged hair cells from death. Finding a way to jumpstart this process in supporting cells offers a potential pathway to prevent hearing loss caused by certain drugs, and possibly by exposure to excess noise. The study was led by scientists at the National Institutes of Health. Over half a million Americans experience hearing loss every year from ototoxic drugs — drugs that can damage hair cells in the inner ear. These include some antibiotics and the chemotherapy drug cisplatin. In addition, about 15 percent of Americans between the ages of 20 and 69 have noise-induced hearing loss, which also results from damage to the sensory hair cells. Once destroyed or damaged by noise or drugs, sensory hair cells in the inner ears of humans don’t grow back or self-repair, unlike the sensory hair cells of other animals such as birds and amphibians. This has made exploring potential pathways to protect or regrow hair cells in humans a major focus of hearing research.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18411 - Posted: 07.27.2013

By Emma Tracey BBC News, Ouch An online magazine for the deaf community, Limping Chicken, recently ran an item on how deaf and hearing people sneeze differently. The article by partially deaf journalist Charlie Swinbourne got readers talking - and the cogs started turning at Ouch too. Swinbourne observes that deaf people don't make the "achoo!" sound when they sneeze, while hearing people seem to do it all the time - in fact, he put it in his humorous list, The Top 10 Annoying Habits of Hearing People. Nor is "achoo" universal - it's what English-speaking sneezers say. The French sneeze "atchoum". In Japan, it's "hakashun" and in the Philippines, they say "ha-ching". Inserting words into sneezes - and our responses such as "bless you" - are cultural habits we pick up along the way. So it's not surprising that British deaf people, particularly users of sign language, don't think to add the English word "achoo" to this most natural of actions. For deaf people, "a sneeze is what it should be... something that just happens", says Swinbourne in his article. He even attempts to describe what an achoo-free deaf sneeze sounds like: "[There is] a heavy breath as the deep pre-sneeze breath is taken, then a sharper, faster sound of air being released." Very little deaf-sneeze research exists, but a study has been done on deaf people and their laughter. BBC © 2013

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18355 - Posted: 07.08.2013

by Helen Fields When a bat moves in for the kill, some moths jiggle their genitals. Researchers made the observation by studying three species of hawk moths—big moths that can hover—in Malaysia. They snared the insects with bright lights, tied tiny leashes around their waists, and let them fly while bat attack sounds played. All three species responded to the noises with ultrasound—which they made by shaking their private parts, the team reports online today in Biology Letters. Males have a structure they use for hanging onto females when they mate; to make the sound, they scrape a patch of large scales on the structure against the very end of their abdomen, letting out two bursts of rapid clicks. Females also make a sound, but the researchers aren't sure how. The scientists don't know exactly what the sounds are for, either. The noise may warn the bats that they're trying to mess with a fast-moving, hard-to-catch piece of prey, or it might jam the bat's ultrasound signals. Either way, the racy display may save their lives. © 2010 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 8: Hormones and Sex
Link ID: 18343 - Posted: 07.03.2013

Melissa Dahl TODAY The video will melt your heart: A deaf little boy is stunned when he hears his father’s voice for the first time after receiving an auditory brainstem implant. “Daddy loves you,” Len Clamp tells his 3-year-old son, Grayson, in a video that was recorded May 21 but is going viral today. (He signs the words, too, to be sure the boy would understand.) Grayson was born without cochlear nerves, the “bridge” that carries auditory information from the inner ear to the brain. He’s now among the first children in the U.S. to receive an auditory brainstem implant in a surgery done at the University of North Carolina in Chapel Hill, N.C., led by UNC head and neck surgeon Dr. Craig Buchman. The device is already being used in adults, but is now being tested in children at UNC as part of an FDA-approved trial. It’s similar to a cochlear implant, but instead of sending electrical stimulation to the cochlea, the electrodes are placed on the brainstem itself. Brain surgery is required to implant the device. "Our hope is, because we're putting it into a young child, that their brain is plastic enough that they'll be able to take the information and run with it," Buchman told NBCNews.com.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 18302 - Posted: 06.24.2013

By NICHOLAS BAKALAR Hearing loss in older adults increases the risk for hospitalization and poor health, a new study has found, even taking into account other risk factors. Researchers analyzed data on 529 men and women over 70 with normal hearing, comparing them with 1,140 whose hearing was impaired, most with mild or moderate hearing loss. The data were gathered in a large national health survey in 2005-6 and again in 2009-10. The results appeared in The Journal of the American Medical Association. After adjusting for race, sex, education, hypertension, diabetes, stroke, cardiovascular disease and other risks, the researchers found that people with poor hearing were 32 percent more likely to be hospitalized, 36 percent more likely to report poor physical health and 57 percent more likely to report poor emotional or mental health. The authors acknowledge that this is an association only, and that there may be unknown factors that could have affected the result. “There has been a belief that hearing loss is an inconsequential part of aging,” said the lead author, Dr. Frank R. Lin, an associate professor of otolaryngology at Johns Hopkins. “But it’s probably not. Everyone knows someone with hearing loss, and as we think about health costs, we have to take its effects into account.” Copyright 2013 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18267 - Posted: 06.13.2013

A team of NIH-supported researchers is the first to show, in mice, an unexpected two-step process that happens during the growth and regeneration of inner ear tip links. Tip links are extracellular tethers that link stereocilia, the tiny sensory projections on inner ear hair cells that convert sound into electrical signals, and play a key role in hearing. The discovery offers a possible mechanism for potential interventions that could preserve hearing in people whose hearing loss is caused by genetic disorders related to tip link dysfunction. The work was supported by the National Institute on Deafness and Other Communication Disorders (NIDCD), a component of the National Institutes of Health. Stereocilia are bristly projections that extend from the tops of sensory cells, called hair cells, in the inner ear. Each bundle of stereocilia is arranged in three neat rows that rise from lowest to highest like stair steps. Tip links are tiny thread-like strands that link the tip of a shorter stereocilium to the side of the taller one behind it. When sound vibrations enter the inner ear, the stereocilia, connected by the tip links, all lean to the same side and open special channels, called mechanotransduction channels. These pore-like openings allow potassium and calcium ions to enter the hair cell and kick off an electrical signal that eventually travels to the brain where it is interpreted as sound. The findings build on a number of recent discoveries in laboratories at NIDCD and elsewhere that have carefully plotted the structure and function of tip links and the proteins that make them up. Earlier studies had shown that tip links are made up of two proteins — cadherin-23 (CDH23) and protocadherin-15 (PCDH15) — that join to make the link, with PCDH15 at the bottom of the tip link at the site of the mechanotransduction channel, and CDH23 on the upper end. Scientists assumed that the assembly was static and stable once the two proteins bonded.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18265 - Posted: 06.13.2013

By ROBERT J. ZATORRE and VALORIE N. SALIMPOOR MUSIC is not tangible. You can’t eat it, drink it or mate with it. It doesn’t protect against the rain, wind or cold. It doesn’t vanquish predators or mend broken bones. And yet humans have always prized music — or well beyond prized, loved it. In the modern age we spend great sums of money to attend concerts, download music files, play instruments and listen to our favorite artists whether we’re in a subway or salon. But even in Paleolithic times, people invested significant time and effort to create music, as the discovery of flutes carved from animal bones would suggest. So why does this thingless “thing” — at its core, a mere sequence of sounds — hold such potentially enormous intrinsic value? The quick and easy explanation is that music brings a unique pleasure to humans. Of course, that still leaves the question of why. But for that, neuroscience is starting to provide some answers. More than a decade ago, our research team used brain imaging to show that music that people described as highly emotional engaged the reward system deep in their brains — activating subcortical nuclei known to be important in reward, motivation and emotion. Subsequently we found that listening to what might be called “peak emotional moments” in music — that moment when you feel a “chill” of pleasure to a musical passage — causes the release of the neurotransmitter dopamine, an essential signaling molecule in the brain. When pleasurable music is heard, dopamine is released in the striatum — an ancient part of the brain found in other vertebrates as well — which is known to respond to naturally rewarding stimuli like food and sex and which is artificially targeted by drugs like cocaine and amphetamine. © 2013 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 15: Language and Our Divided Brain
Link ID: 18251 - Posted: 06.10.2013

by Kim Krieger The song of the cicada has been romanticized in mariachi music, used to signify summer in Japanese cinematography, and cursed by many an American suburbanite wishing for peace and quiet. Despite the bugs' ubiquity, scientists haven't uncovered how they sing so loudly—some are as noisy as a jet engine—and why they don't expend much energy doing it. But researchers reported in Montreal yesterday at the 21st International Congress on Acoustics that they now have the answer. The detailed mechanism of the cicada's song is far from fully understood, says Paulo Fonseca, an animal acoustician at the University of Lisbon who was not involved in the project. The work by the researchers "is innovative and paves our way to a better understanding of this complex system allowing such small animals to produce such powerful sound." Cicadas aren't just a natural curiosity. Small devices that produce extremely loud noises while requiring very little power appeal to the U.S. Navy, which uses sonar for underwater exploration and communication. Derke Hughes, a research engineer at the Naval Undersea Warfare Center in Newport, Rhode Island, says that the loudest cicadas can make a noise 20 to 40 dB louder than the compact off-the-shelf RadioShack speaker in his office using the same amount of power. Intrigued, he and his colleagues used microcomputed tomography—a kind of CT scan that picks up details as small as a micron—to image a cicada's tymbal, which helps the insect make its deafening chirp. © 2010 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 8: Hormones and Sex
Link ID: 18234 - Posted: 06.05.2013

Zoe Cormier A study of two ancient hominins from South Africa suggests that changes in the shape and size of the middle ear occurred early in our evolution. Such alterations could have profoundly changed what our ancestors could hear — and perhaps how they could communicate. Palaeoanthropologist Rolf Quam of Binghamton University in New York state and his colleagues recovered and analysed a complete set of the three tiny middle-ear bones, or ossicles, from a 1.8-million-year-old specimen of Paranthropus robustus and an incomplete set of ossicles from Australopithecus africanus, which lived from about 3.3 million to around 2.1 million years ago. The ossicles are the smallest bones in the human body, and are rarely preserved intact in hominin fossils, Quam says. In both specimens, the team found that the malleus (the first in the chain of the three middle-ear bones) was human-like — smaller in proportion compared to the ones in our ape relatives. Its size would also imply a smaller eardrum. The similarity between the two species points to a “deep and ancient origin” of this feature, Quam says. “This could be like bipedalism: a defining characteristic of hominins.” It is hard to draw conclusions about hearing just from the shape of the middle-ear bones because the process involves so many different ear structures, as well as the brain itself. However, some studies have shown that the relative sizes of the middle-ear bones do affect what primates can hear. Genomic comparisons with gorillas have indicated that changes in the genes that code for these structures might also demarcate humans from apes. © 2013 Nature Publishing Group

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18151 - Posted: 05.14.2013

by Michael Balter Researchers debate when language first evolved, but one thing is sure: Language requires us not only to talk but also to listen. A team of scientists now reports recovering the earliest known complete set of the three tiny middle ear bones—the malleus ("hammer"), incus ("anvil"), and stapes ("stirrup")—in a 2.0-million-year-old skull of Paranthropus robustus, a distant human relative found in South Africa (see photo). Reporting online today in the Proceedings of the National Academy of Sciences, the researchers found that the malleus of P. robustus, as well as one found earlier in the early human relative Australopithecus africanus, is similar to that of modern humans, whereas the two other ear bones most closely resemble existing African and Asian great apes. The team is not entirely sure what this precocious appearance of a human-like malleus means. But since the malleus is attached directly to the eardrum, the researchers suggest that it might be an early sign of the high human sensitivity to middle-range acoustic frequencies between 2 and 4 kilohertz—frequencies critical to spoken language, but which apes and other primates are much less sensitive to. © 2010 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18150 - Posted: 05.14.2013

Ed Yong Many moths have evolved sensitive hearing that can pick up the ultrasonic probes of bats that want to eat them. But one species comes pre-adapted for anything that bats might bring to this evolutionary arms race. Even though its ears are extremely simple — a pair of eardrums on its flanks that each vibrate four receptor cells — it can sense frequencies up to 300 kilohertz, well beyond the range of any other animal and higher than any bat can squeak. “A lot of previous work has suggested that some bats have evolved calls that are out of the hearing range of the moths they are hunting. But this moth can hear the calls of any bat,” says James Windmill, an acoustical engineer at the University of Strathclyde, UK, who discovered the ability in the greater wax moth (Galleria mellonella). His study is published in Biology Letters. Windmill's collaborator Hannah Moir, a bioacoustician now at the University of Leeds, UK, played sounds of varying frequencies to immobilized wax moths. As the insects “listened”, Moir used a laser to measure the vibrations of their eardrums, and electrodes to record the activity of their auditory nerves. The moths were most sensitive to frequencies of around 80 kilohertz, the average frequency of their courtship calls. But when exposed to 300 kilohertz, the highest level that the team tested, the insects' eardrums still vibrated and their neurons still fired. © 2013 Nature Publishing Group

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18137 - Posted: 05.09.2013

By Jill U. Adams Urban living may be harmful to your ears. That’s the takeaway from a study that found that more than eight in 10 New Yorkers were exposed to enough noise to damage their hearing. Perhaps more surprising was that so much of the city dwellers’ noise exposure was related not to noisy occupations but rather to voluntary activities such as listening to music. Which makes it hard for me not to worry that when my 16-year-old son is sitting nearby with his earbuds in, I can hear his music. There’s a pretty good chance that he’s got the volume up too loud — loud enough to potentially damage the sensory cells deep in his ear and eventually lead to permanent hearing loss. That’s according to Christopher Chang, an ear, nose and throat doctor at Fauquier ENT Consultants in Warrenton, who sees patients every day with hearing-related issues. “What he’s hearing is way too loud, because it’s concentrated directly into the ear itself,” he says of my son, adding that the anatomy of the ear magnifies sound as it travels through the ear canal. Listening to music through earbuds or headphones is just one way that many of us are routinely exposed to excessive noise. Mowing the lawn, going to a nightclub, riding the Metro, using a power drill, working in a factory, playing in a band and riding a motorcycle are activities that can lead to hearing problems. Aging is the primary cause of hearing loss; noise is second, says Brian Fligor, who directs the diagnostic audiology program at Boston Children’s Hospital, and it’s usually the culprit when the condition affects younger people. Approximately 15 percent of American adults between the ages of 20 and 69 have high-frequency hearing loss, probably the result of noise exposure, according to the National Institute on Deafness and Other Communication Disorders. © 1996-2013 The Washington Post

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 18038 - Posted: 04.15.2013

By Meghan Rosen Whether you’re rocking out to Britney Spears or soaking up Beethoven’s classics, you may be enjoying music because it stimulates a guessing game in your brain. This mental puzzling explains why humans like music, a new study suggests. By looking at activity in just one part of the brain, researchers could predict roughly how much volunteers dug a new song. When people hear a new tune they like, a clump of neurons deep within their brains bursts into excited activity, researchers report April 12 in Science. The blueberry-sized cluster of cells, called the nucleus accumbens, helps make predictions and sits smack-dab in the “reward center” of the brain — the part that floods with feel-good chemicals when people eat chocolate or have sex. The berry-sized bit acts with three other regions in the brain to judge new jams, MRI scans showed. One region looks for patterns, another compares new songs to sounds heard before, and the third checks for emotional ties. As our ears pick up the first strains of a new song, our brains hustle to make sense of the music and figure out what’s coming next, explains coauthor Valorie Salimpoor, who is now at the Baycrest Rotman Research Institute in Toronto. And when the brain’s predictions are right (or pleasantly surprising), people get a little jolt of pleasure. All four brain regions work overtime when people listen to new songs they like, report the researchers, including Robert Zatorre of the Montreal Neurological Institute at McGill University. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Link ID: 18030 - Posted: 04.13.2013

By Andrew Grant A simple plastic shell has cloaked a three-dimensional object from sound waves for the first time. With some improvements, a similar cloak could eventually be used to reduce noise pollution and to allow ships and submarines to evade enemy detection. The experiments appear March 20 in Physical Review Letters. “This paper implements a simplified version of invisibility using well-designed but relatively simple materials,” says Steven Cummer, an electrical engineer at Duke University, who was not involved in the study. Cummer proposed the concept of a sound cloak in 2007. Scientists’ recent efforts to render objects invisible to the eye are based on the fact that our perception of the world depends on the scattering of waves. We can see objects because waves of light strike them and scatter. Similarly, the Navy can detect faraway submarines because they scatter sound waves (sonar) that hit them. So for the last several years scientists have been developing cloaks that prevent scattering by steering light or sound waves around an object. The drawback of this approach, however, is that it requires complex synthetic materials that are difficult to produce. © Society for Science & the Public 2000 - 2013

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 17966 - Posted: 03.30.2013