Links for Keyword: Hearing



Links 1 - 20 of 586

By Maggie Koerth-Baker Q: I want to hear what the loudest thing in the world is! — Kara Jo, age 5 No. No, you really don’t. See, there’s this thing about sound that even we grown-ups tend to forget — it’s not some glitter rainbow floating around with no connection to the physical world. Sound is mechanical. A sound is a shove — just a little one, a tap on the tightly stretched membrane of your ear drum. The louder the sound, the heavier the knock. If a sound is loud enough, it can rip a hole in your ear drum. If a sound is loud enough, it can plow into you like a linebacker and knock you flat on your butt. When the shock wave from a bomb levels a house, that’s sound tearing apart bricks and splintering glass. Sound can kill you. Consider this piece of history: On the morning of Aug. 27, 1883, ranchers on a sheep camp outside Alice Springs, Australia, heard a sound like two shots from a rifle. What they were hearing was the Indonesian volcanic island of Krakatoa blowing itself to bits 2,233 miles away. Scientists think this is probably the loudest sound humans have ever accurately measured. Not only are there records of people hearing the sound of Krakatoa thousands of miles away, there is also physical evidence that the sound of the volcano’s explosion traveled all the way around the globe multiple times. Now, nobody heard Krakatoa in England or Toronto. There wasn’t a “boom” audible in St. Petersburg. Instead, what those places recorded were spikes in atmospheric pressure — the very air tensing up and then releasing with a sigh, as the waves of sound from Krakatoa passed through. There are two important lessons about sound in there: One, you don’t have to be able to see the loudest thing in the world in order to hear it. Two, just because you can’t hear a sound doesn’t mean it isn’t there. Sound is powerful and pervasive and it surrounds us all the time, whether we’re aware of it or not.
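The distances in the Krakatoa story are large enough to check with simple arithmetic. A minimal sketch (assuming sound travels at roughly 343 m/s, its speed in air at about 20 °C; the actual speed varies with temperature and altitude, and the variable names here are illustrative):

```python
SPEED_OF_SOUND_M_S = 343.0        # dry air at ~20 °C (assumed; varies with conditions)
METERS_PER_MILE = 1609.344
EARTH_CIRCUMFERENCE_M = 40_075_000

def sound_travel_hours(miles: float) -> float:
    """Hours for an airborne sound wave to cover the given distance."""
    return miles * METERS_PER_MILE / SPEED_OF_SOUND_M_S / 3600

# Krakatoa to Alice Springs: the blast's sound needed roughly three hours to arrive.
krakatoa_to_alice_springs = sound_travel_hours(2233)                    # ≈ 2.9 hours

# The pressure wave recorded circling the globe took more than a day per lap.
lap_of_the_globe = EARTH_CIRCUMFERENCE_M / SPEED_OF_SOUND_M_S / 3600    # ≈ 32 hours
```

Either number underlines the article's point: sound is a mechanical wave pushing through air, not an instantaneous signal.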

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22453 - Posted: 07.19.2016

Paula Span An estimated one zillion older people have a problem like mine. First: We notice age-related hearing loss. A much-anticipated report on hearing health from the National Academies of Sciences, Engineering and Medicine last month put the prevalence at more than 45 percent of those aged 70 to 74, and more than 80 percent among those over 85. Then: We do little or nothing about it. Fewer than 20 percent of those with hearing loss use hearing aids. I’ve written before about the reasons. High prices ($2,500 and up for a decent hearing aid, and most people need two). Lack of Medicare reimbursement, because the original 1965 law creating Medicare prohibits coverage. Time and hassle. Stigma. Both the National Academies and the influential President’s Council of Advisors on Science and Technology have proposed pragmatic steps to make hearing technology more accessible and affordable. But until there’s progress on those, many of us with mild to moderate hearing loss may consider a relatively inexpensive alternative: personal sound amplification products, or P.S.A.P.s. They offer some promise — and some perils, too. Unlike for a hearing aid, you don’t need an audiologist to obtain a P.S.A.P. You see these gizmos advertised on the back pages of magazines or on sale at drugstore chains. You can buy them online. © 2016 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22449 - Posted: 07.16.2016

Ramin Skibba Is Justin Bieber a musical genius or a talentless hack? What you 'belieb' depends on your cultural experiences. Some people like to listen to the Beatles, while others prefer Gregorian chants. When it comes to music, scientists find that nurture can trump nature. Musical preferences seem to be mainly shaped by a person’s cultural upbringing and experiences rather than biological factors, according to a study published on 13 July in Nature. “Our results show that there is a profound cultural difference” in the way people respond to consonant and dissonant sounds, says Josh McDermott, a cognitive scientist at the Massachusetts Institute of Technology in Cambridge and lead author of the paper. This suggests that other cultures hear the world differently, he adds. The study is one of the first to put an age-old argument to the test. Some scientists believe that the way people respond to music has a biological basis, because pitches that people often like have particular interval ratios. They argue that this would trump any cultural shaping of musical preferences, effectively making them a universal phenomenon. Ethnomusicologists and music composers, by contrast, think that such preferences are more a product of one’s culture. If a person’s upbringing shapes their preferences, then they are not a universal phenomenon. © 2016 Macmillan Publishers Limited

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 11: Emotions, Aggression, and Stress
Link ID: 22439 - Posted: 07.14.2016

By Michael Price The blind comic book star Daredevil has a highly developed sense of hearing that allows him to “see” his environment with his ears. But you don’t need to be a superhero to pull a similar stunt, according to a new study. Researchers have identified the neural architecture used by the brain to turn subtle sounds into a mind’s-eye map of your surroundings. The study appears to be “very solid work,” says Lore Thaler, a psychologist at Durham University in the United Kingdom who studies echolocation, the ability of bats and other animals to use sound to locate objects. Everyone has an instinctive sense of the world around them—even if they can’t always see it, says Santani Teng, a postdoctoral researcher at the Massachusetts Institute of Technology (MIT) in Cambridge who studies auditory perception in both blind and sighted people. “We all kind of have that intuition,” says Teng over the phone. “For instance, you can tell I’m not in a gymnasium right now. I’m in a smaller space, like an office.” That office belongs to Aude Oliva, principal research scientist for MIT’s Computational Perception & Cognition laboratory. She and Teng, along with two other colleagues, wanted to quantify how well people can use sounds to judge the size of the room around them, and whether that ability could be detected in the brain. © 2016 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22427 - Posted: 07.12.2016

By Brian Platzer It started in 2010 when I smoked pot for the first time since college. It was cheap, gristly weed I’d had in my freezer for nearly six years, but four hours after taking one hit I was still so dizzy I couldn’t stand up without holding on to the furniture. The next day I was still dizzy, and the next, and the next, but it tapered off gradually until about a month later I was mostly fine. Over the following year I got married, started teaching seventh and eighth grade, and began work on a novel. Every week or so the disequilibrium sneaked up on me. The feeling was one of disorientation as much as dizziness, with some cloudy vision, light nausea and the sensation of being overwhelmed by my surroundings. During one eighth-grade English class, when I turned around to write on the blackboard, I stumbled and couldn’t stabilize myself. I fell in front of my students and was too disoriented to stand. My students stared at me slumped on the floor until I mustered enough focus to climb up to a chair and did my best to laugh it off. I was only 29, but my father had had a benign brain tumor around the same age, so I had a brain scan. My brain appeared to be fine. A neurologist recommended I see an ear, nose and throat specialist. A technician flooded my ear canal with water to see if my acoustic nerve reacted properly. The doctor suspected either benign positional vertigo (dizziness caused by a small piece of bonelike calcium stuck in the inner ear) or Ménière’s disease (which leads to dizziness from pressure). Unfortunately, the test showed my inner ear was most likely fine. But just as the marijuana had triggered the dizziness the year before, the test itself catalyzed the dizziness now. In spite of the negative results, doctors still believed I had an inner ear problem. They prescribed exercises to unblock crystals, and salt pills and then prednisone to fight Ménière’s disease. All this took months, and I continued to be dizzy, all day, every day. 
It felt as though I woke up every morning having already drunk a dozen beers — some days, depending on how active and stressful my day was, it felt like much more. Most days ended with me in tears. © 2016 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Link ID: 22318 - Posted: 06.14.2016

Meghan Rosen SALT LAKE CITY — In the Indian Ocean off the coast of Sri Lanka, pygmy blue whales are changing their tune — and they might be doing it on purpose. From 2002 to 2012, the frequency of one part of the whales’ calls steadily fell, marine bioacoustician Jennifer Miksis-Olds reported May 25 at a meeting of the Acoustical Society of America. But unexpectedly, another part of the whales’ call stayed the same, she found. “I’ve never seen results like this before,” says marine bioacoustician Leanna Matthews of Syracuse University in New York, who was not involved with the work. Miksis-Olds’ findings add a new twist to current theories about blue whale vocalizations and spark all sorts of questions about what the animals are doing, Matthews said. “It’s a huge mystery.” Over the last 40 to 50 years, the calls of blue whales around the world have been getting deeper. Researchers have reported frequency drops in blue whale populations from the Arctic Ocean to the North Pacific. Some researchers think that blue whales are just getting bigger, said Miksis-Olds, of the University of New Hampshire in Durham. Whaling isn’t as common as it used to be, so whales have been able to grow larger — and larger whales have deeper calls. Another theory blames whales’ changing calls on an increasingly noisy ocean. Whales could be automatically adjusting their calls to be heard better, kind of like a person raising their voice to speak at a party, she said. If the whales were just getting bigger, you’d expect all components of the calls to be deeper, said acoustics researcher Pasquale Bottalico at Michigan State University in East Lansing. But the new data don’t support that, he said. © Society for Science & the Public 2000 - 2016.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 22280 - Posted: 06.04.2016

Amy McDermott Giant pandas have better ears than people — and polar bears. Pandas can hear surprisingly high frequencies, conservation biologist Megan Owen of the San Diego Zoo and colleagues report in the April Global Ecology and Conservation. The scientists played a range of tones for five zoo pandas trained to nose a target in response to sound. Training, which took three to six months for each animal, demanded serious focus and patience, says Owen, who called the effort “a lot to ask of a bear.” Both males and females heard into the range of a “silent” ultrasonic dog whistle. Polar bears, the only other bears scientists have tested, are less sensitive to sounds at or above 14 kilohertz. Researchers still don’t know why pandas have ultrasonic hearing. The bears are a vocal bunch, but their chirps and other calls have never been recorded at ultrasonic levels, Owen says. Great hearing may be a holdover from the bears’ ancient past. Citation: M.A. Owen et al. Hearing sensitivity in context: Conservation implications for a highly vocal endangered species. Global Ecology and Conservation. Vol. 6, April 2016, p. 121. doi: 10.1016/j.gecco.2016.02.007. © Society for Science & the Public 2000 - 2016.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22269 - Posted: 06.01.2016

by Helen Thompson In hunting down delicious fish, Flipper may have a secret weapon: snot. Dolphins emit a series of quick, high-frequency sounds — probably by forcing air over tissues in the nasal passage — to find and track potential prey. “It’s kind of like making a raspberry,” says Aaron Thode of the Scripps Institution of Oceanography in San Diego. Thode and colleagues tweaked a human speech modeling technique to reproduce dolphin sounds and discern the intricacies of their unique style of sound production. He presented the results on May 24 in Salt Lake City at the annual meeting of the Acoustical Society of America. Dolphin chirps have two parts: a thump and a ring. Their model worked on the assumption that lumps of tissue bumping together produce the thump, and those tissues pulling apart produce the ring. But to match the high frequencies of live bottlenose dolphins, the researchers had to make the surfaces of those tissues sticky. That suggests that mucus lining the nasal passage tissue is crucial to dolphin sonar. The vocal model also successfully mimicked whistling noises used to communicate with other dolphins and faulty clicks that probably result from inadequate snot. Such techniques could be adapted to study sound production or echolocation in sperm whales and other dolphin relatives. © Society for Science & the Public 2000 - 2016.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22244 - Posted: 05.25.2016

By Linda Zajac For nearly 65 million years, bats and tiger moths have been locked in an aerial arms race: Bats echolocate to detect and capture tiger moths, and tiger moths evade them with flight maneuvers and their own ultrasonic sounds. Scientists have long wondered why certain species emit these high-frequency clicks that sound like rapid squeaks from a creaky floorboard. Does the sound jam bat sonar or does it warn bats that the moths are toxic? To find out, scientists collected two types of tiger moths: red-headed moths (pictured above) and Martin’s lichen moths. They then removed the soundmaking organs from some of the insects. In a grassy field in Arizona they set up infrared video cameras, ultrasonic microphones, and ultraviolet lights, the last of which they used to attract bats. In darkness, they released one tiger moth at a time and recorded the moth-bat interactions. They found that the moths rarely produced ultrasonic clicks fast enough to jam bat sonar. They also discovered that without sound organs, 64% of the red-headed moths and 94% of the Martin’s lichen moths were captured and spit out. Together, these findings reported late last month in PLOS ONE suggest that instead of jamming sonar like some tiger moths, these species act tough, flexing their soundmaking organs to warn predators of their toxin. © 2016 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 22185 - Posted: 05.07.2016

By NATALIE ANGIER Whether to enliven a commute, relax in the evening or drown out the buzz of a neighbor’s recreational drone, Americans listen to music nearly four hours a day. In international surveys, people consistently rank music as one of life’s supreme sources of pleasure and emotional power. We marry to music, graduate to music, mourn to music. Every culture ever studied has been found to make music, and among the oldest artistic objects known are slender flutes carved from mammoth bone some 43,000 years ago — 24,000 years before the cave paintings of Lascaux. Given the antiquity, universality and deep popularity of music, many researchers had long assumed that the human brain must be equipped with some sort of music room, a distinctive piece of cortical architecture dedicated to detecting and interpreting the dulcet signals of song. Yet for years, scientists failed to find any clear evidence of a music-specific domain through conventional brain-scanning technology, and the quest to understand the neural basis of a quintessential human passion foundered. Now researchers at the Massachusetts Institute of Technology have devised a radical new approach to brain imaging that reveals what past studies had missed. By mathematically analyzing scans of the auditory cortex and grouping clusters of brain cells with similar activation patterns, the scientists have identified neural pathways that react almost exclusively to the sound of music — any music. It may be Bach, bluegrass, hip-hop, big band, sitar or Julie Andrews. A listener may relish the sampled genre or revile it. No matter. When a musical passage is played, a distinct set of neurons tucked inside a furrow of a listener’s auditory cortex will fire in response. Other sounds, by contrast — a dog barking, a car skidding, a toilet flushing — leave the musical circuits unmoved. Nancy Kanwisher and Josh H. McDermott, professors of neuroscience at M.I.T., and their postdoctoral colleague Sam Norman-Haignere reported their results in the journal Neuron. The findings offer researchers a new tool for exploring the contours of human musicality. © 2016 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 21873 - Posted: 02.09.2016

By Elizabeth Pennisi PACIFIC GROVE, CALIFORNIA—Bats have an uncanny ability to track and eat insects on the fly with incredible accuracy. But some moths make these agile mammals miss their mark. Tiger moths, for example, emit ultrasonic clicks that jam bat sonar. Now, scientists have shown that hawk moths (above) and other species have also evolved this behavior. The nocturnal insects—which are toxic to bats—issue an ultrasonic “warning” whenever a bat is near. After a few nibbles, the bat learns to avoid the noxious species altogether. The researchers shot high-speed videos of bat chases in eight countries over 4 years. Their studies found that moths with an intact sound-producing apparatus—typically located at the tip of the genitals—were spared, whereas those silenced by the researchers were readily caught. As the video shows, when the moths hear the bat’s clicks intensifying as it homes in, they emit their own signal, causing the bat to veer off at the last second. It could be that, like the tiger moths, the hawk moths are jamming the bat’s signal. But, because most moth signals are not the right type to interfere with the bat’s, the researchers say it’s more likely that the bat recognizes the signal and avoids the target on its own. Presenting here last week at a meeting of the American Society of Naturalists, the researchers say this signaling ability has evolved three times in hawk moths and about a dozen more times overall among other moths. © 2016 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 21810 - Posted: 01.23.2016

By Geoffrey Giller The experience of seeing a lightning bolt before hearing its associated thunder some seconds later provides a fairly obvious example of the differential speeds of light and sound. But most intervals between linked visual and auditory stimuli are so brief as to be imperceptible. A new study has found that we can glean distance information from these minimally discrepant arrival times nonetheless. In a pair of experiments at the University of Rochester, 12 subjects were shown projected clusters of dots. When a sound was played about 40 or 60 milliseconds after the dots appeared (too short to be detected consciously), participants judged the clusters to be farther away than clusters with simultaneous or preceding sounds. Philip Jaekl, the lead author of the study and a postdoctoral fellow in cognitive neuroscience, says it makes sense that the brain would use all available sensory information for calculating distance. “Distance is something that's very difficult to compute,” he explains. The study was recently published in the journal PLOS ONE. Aaron Seitz, a professor of psychology and neuroscience at the University of California, Riverside, who was not involved in the work, says the results may be useful clinically, such as by helping people with amblyopia (lazy eye) improve their performance when training to see with both eyes. And there might be other practical applications, including making virtual-reality environments more realistic. “Adding in a delay,” says Nick Whiting, a VR engineer for Epic Games, “can be another technique in our repertoire in creating believable experiences.” © 2016 Scientific American.
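The 40- and 60-millisecond lags used in the experiment correspond to physically plausible source distances. A quick sketch (assuming sound at roughly 343 m/s in air; light covers room-scale distances in nanoseconds, so its travel time can be ignored, and the function name is illustrative):

```python
SPEED_OF_SOUND_M_S = 343.0   # air at ~20 °C (assumed)

def implied_source_distance_m(audio_lag_s: float) -> float:
    """Distance at which a sound would naturally arrive audio_lag_s after
    the light from the same event; light's travel time is negligible here."""
    return audio_lag_s * SPEED_OF_SOUND_M_S

d_40ms = implied_source_distance_m(0.040)   # ≈ 13.7 m
d_60ms = implied_source_distance_m(0.060)   # ≈ 20.6 m
```

So a lag too brief to notice consciously still matches the audio-visual timing of an event some 14 to 21 metres away, exactly the kind of cue the brain could exploit when judging distance.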

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 21807 - Posted: 01.21.2016

Bret Stetka In June of 2001 musician Peter Gabriel flew to Atlanta to make music with two apes. The jam went surprisingly well. At each session Gabriel, a known dabbler in experimental music and a founding member of the band Genesis, would riff with a small group of musicians. The bonobos – one named Panbanisha, the other Kanzi — were trained to play in response on keyboards and showed a surprising, if rudimentary, awareness of melody and rhythm. Since then Gabriel has been working with scientists to help better understand animal cognition, including musical perception. Plenty of related research has explored whether or not animals other than humans can recognize what we consider to be music – whether they can find coherence in a series of sounds that could otherwise transmit as noise. Many do, to a degree. And it's not just apes that respond to song. Parrots reportedly demonstrate some degree of "entrainment," or the syncing up of brainwave patterns with an external rhythm; dolphins may — and I stress may — respond to Radiohead; and certain styles of music reportedly influence dog behavior (Wagner supposedly honed his operas based on the response of his Cavalier King Charles Spaniel). But most researchers agree that fully appreciating what we create and recognize as music is a primarily human phenomenon. Recent research hints at how the human brain is uniquely able to recognize and enjoy music — how we render simple ripples of vibrating air into visceral, emotional experiences. It turns out, the answer has a lot to do with timing. The work also reveals why your musician friends are sometimes more tolerant of really boring music. © 2015 npr

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 11: Emotions, Aggression, and Stress
Link ID: 21713 - Posted: 12.19.2015

When we hear speech, electrical waves in our brain synchronise to the rhythm of the syllables, helping us to understand what’s being said. This happens when we listen to music too, and now we know some brains are better at syncing to the beat than others. Keith Doelling at New York University and his team recorded the brain waves of musicians and non-musicians while they listened to music, and found that both groups synchronised two types of low-frequency brain waves, known as delta and theta, to the rhythm of the music. Synchronising our brain waves to music helps us decode it, says Doelling. The electrical waves collect the information from continuous music and break it into smaller chunks that we can process. But for particularly slow music, the non-musicians were less able to synchronise, with some volunteers saying they couldn’t keep track of these slower rhythms. Rather than natural talent, Doelling thinks musicians are more comfortable with slower tempos because of their musical training. As part of his own musical education, he remembers being taught to break down tempo into smaller subdivisions. He suggests that grouping shorter beats together in this way is what helps musicians to process slow music better. One theory is that musicians have heard and played much more music, allowing them to acquire “meta-knowledge”, such as a better understanding of how composers structure pieces. This could help them detect a broader range of tempos, says Usha Goswami of the University of Cambridge. © Copyright Reed Business Information Ltd.
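One common way to quantify this kind of synchronisation is a phase-locking value: phases that stay aligned with a beat give a value near 1, while drifting phases give a value near 0. A toy sketch with synthetic phase traces (this is a standard measure in the field, not necessarily the analysis Doelling's team used, and all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
t = np.arange(n)

def phase_locking_value(phase_a: np.ndarray, phase_b: np.ndarray) -> float:
    """|mean of exp(i*(a - b))|: 1.0 for perfectly locked phases, near 0 for unrelated ones."""
    return float(np.abs(np.mean(np.exp(1j * (phase_a - phase_b)))))

beat_phase = 2 * np.pi * 0.002 * t                    # a steady "beat" oscillation
entrained = beat_phase + rng.normal(0, 0.2, n)        # tracks the beat with small jitter
drifting = np.cumsum(rng.normal(0, 0.5, n))           # random-walk phase, no locking

plv_entrained = phase_locking_value(entrained, beat_phase)   # close to 1
plv_drifting = phase_locking_value(drifting, beat_phase)     # much smaller
```

Loosely speaking, the study found that musicians' delta- and theta-band activity behaved more like the first trace even at slow tempos, while some non-musicians' drifted like the second.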

Related chapters from BP7e: Chapter 17: Learning and Memory; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 13: Memory, Learning, and Development; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 21574 - Posted: 10.27.2015

Using a sensitive new technology called single-cell RNA-seq on cells from mice, scientists have created the first high-resolution gene expression map of the newborn mouse inner ear. The findings provide new insight into how epithelial cells in the inner ear develop and differentiate into specialized cells that serve critical functions for hearing and maintaining balance. Understanding how these important cells form may provide a foundation for the potential development of cell-based therapies for treating hearing loss and balance disorders. The research was conducted by scientists at the National Institute on Deafness and Other Communication Disorders (NIDCD), part of the National Institutes of Health. In a companion study led by NIDCD-supported scientists at the University of Maryland School of Medicine and scientists at the Sackler School of Medicine at Tel Aviv University, researchers used a similar technique to identify a family of proteins critical for the development of inner ear cells. Both studies were published online on October 15 in the journal Nature Communications. “Age-related hearing loss occurs gradually in most of us as we grow older. It is one of the most common conditions among older adults, affecting half of people over age 75,” said James F. Battey, Jr., M.D., Ph.D., director of the NIDCD. “These new findings may lead to new regenerative treatments for this critical public health issue.” Specialized sensory epithelial cells in the inner ear include hair cells and supporting cells, which provide the hair cells with crucial structural and functional support. Hair cells and supporting cells located in the cochlea — the snail-shaped structure in the inner ear — work together to detect sound, thus enabling us to hear. In contrast, hair cells and supporting cells in the utricle, a fluid-filled pouch near the cochlea, play a critical role in helping us maintain our balance.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 21516 - Posted: 10.16.2015

Music can be a transformative experience, especially for your brain. Musicians’ brains respond more symmetrically to the music they listen to. And the size of the effect depends on which instrument they play. People who learn to play musical instruments can expect their brains to change in structure and function. When people are taught to play a piece of piano music, for example, the part of their brains that represents their finger movements gets bigger. Musicians are also better at identifying pitch and speech sounds – brain imaging studies suggest that this is because their brains respond more quickly and strongly to sound. Other research has found that the corpus callosum – the strip of tissue that connects the left and right hemisphere of the brain – is also larger in musicians. Might this mean that the two halves of a musician’s brain are better at communicating with each other compared with non-musicians? To find out, Iballa Burunat at the University of Jyväskylä in Finland and her colleagues used an fMRI scanner to look at the brains of 18 musicians and 18 people who have never played professionally. The professional musicians – all of whom had a degree in music – included cellists, violinists, keyboardists and bassoon and trombone players. While they were in the scanner, all of the participants were played three different pieces of music – prog rock, an Argentinian tango and some Stravinsky. Burunat recorded how their brains responded to the music, and used software to compare the activity of the left and right hemispheres of each person’s brain. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 21465 - Posted: 10.01.2015

By Jane E. Brody Mark Hammel’s hearing was damaged in his 20s by machine gun fire when he served in the Israeli Army. But not until decades later, at 57, did he receive his first hearing aids. “It was very joyful, but also very sad, when I contemplated how much I had missed all those years,” Dr. Hammel, a psychologist in Kingston, N.Y., said in an interview. “I could hear well enough sitting face to face with someone in a quiet room, but in public, with background noise, I knew people were talking, but I had no idea what they were saying. I just stood there nodding my head and smiling. “Eventually, I stopped going to social gatherings. Even driving, I couldn’t hear what my daughter was saying in the back seat. I live in the country, and I couldn’t hear the birds singing. “People with hearing loss often don’t realize what they’re missing,” he said. “So much of what makes us human is social contact, interaction with other human beings. When that’s cut off, it comes with a very high cost.” And the price people pay is much more than social. As Dr. Hammel now realizes, “the capacity to hear is so essential to overall health.” Hearing loss is one of the most common conditions affecting adults, and the most common among older adults. An estimated 30 million to 48 million Americans have hearing loss that significantly diminishes the quality of their lives — academically, professionally and medically as well as socially. One person in three older than 60 has life-diminishing hearing loss, but most older adults wait five to 15 years before they seek help, according to a 2012 report in Healthy Hearing magazine. And the longer the delay, the more one misses of life and the harder it can be to adjust to hearing aids. © 2015 The New York Times Company

Related chapters from BP7e: Chapter 8: General Principles of Sensory Processing, Touch, and Pain; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 21449 - Posted: 09.28.2015

Bill McQuay and Christopher Joyce Acoustic biologists who have learned to tune their ears to the sounds of life know there's a lot more to animal communication than just, "Hey, here I am!" or "I need a mate." From insects to elephants to people, we animals all use sound to function and converse in social groups — especially when the environment is dark, or underwater or heavily forested. "We think that we really know what's going on out there," says Dartmouth College biologist Laurel Symes, who studies crickets. But there's a cacophony all around us, she says, that's full of information still to be deciphered. "We're getting this tiny slice of all of the sound in the world." Recently scientists have pushed the field of bioacoustics even further, to record whole environments, not just the animals that live there. Some call this "acoustic ecology" — listening to the rain, streams, wind through the trees. A deciduous forest sounds different from a pine forest, for example, and that soundscape changes seasonally. Neuroscientist Seth Horowitz, author of the book The Universal Sense: How Hearing Shapes the Mind, is especially interested in the ways all these sounds, which are essentially vibrations, have shaped the evolution of the human brain. "Vibration sensitivity is found in even the most primitive life forms," Horowitz says — even bacteria. "It's so critical to your environment, knowing that something else is moving near you, whether it's a predator or it's food. Everywhere you go there is vibration and it tells you something." © 2015 NPR

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 21394 - Posted: 09.10.2015

By Gary Stix A decline in hearing acuity is not something that happens only in the aged. An article in the August Scientific American by M. Charles Liberman, a professor of otology and laryngology at Harvard Medical School and director of the Eaton-Peabody Laboratories at Massachusetts Eye and Ear, focuses on relatively recent discoveries showing that the din of a concert or high-decibel machine noise is enough to cause some level of hearing damage. After reading the article, check out this video by medical illustrator Brandon Pletsch and its narrated animation explaining how the sensory system that detects sound functions. © 2015 Scientific American

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 21250 - Posted: 08.02.2015

Chris Woolston A study that did not find cognitive benefits of musical training for young children triggered a “media firestorm”. Researchers often complain about inaccurate science stories in the popular press, but few air their grievances in a journal. Samuel Mehr, a PhD student at Harvard University in Cambridge, Massachusetts, discussed in a Frontiers in Psychology article1 some examples of media missteps from his own field — the effects of music on cognition. The opinion piece gained widespread attention online, including from Arseny Khakhalin, a neuroscientist at Bard College in Annandale-on-Hudson, New York. Mehr gained first-hand experience of the media as the first author of a 2013 study in PLoS ONE2. The study involved two randomized, controlled trials with a total of 74 four-year-olds. For children who did six weeks of music classes, there was no sign that musical activities improved scores on specific cognitive tests compared with children who did six weeks of art projects or took part in no organized activities. The authors cautioned, however, that the lack of effect of the music classes could have been a result of how they did the studies. The intervention in the trials was brief and not especially intensive — the children mainly sang songs and played with rhythm instruments — and older children might have responded differently than the four-year-olds. There are many possible benefits of musical training, Mehr said in an interview, but finding them was beyond the scope of the study. © 2015 Nature Publishing Group

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 21216 - Posted: 07.25.2015