Links for Keyword: Hearing



Links 1 - 20 of 577

By NATALIE ANGIER Whether to enliven a commute, relax in the evening or drown out the buzz of a neighbor’s recreational drone, Americans listen to music nearly four hours a day. In international surveys, people consistently rank music as one of life’s supreme sources of pleasure and emotional power. We marry to music, graduate to music, mourn to music. Every culture ever studied has been found to make music, and among the oldest artistic objects known are slender flutes carved from mammoth bone some 43,000 years ago — 24,000 years before the cave paintings of Lascaux. Given the antiquity, universality and deep popularity of music, many researchers had long assumed that the human brain must be equipped with some sort of music room, a distinctive piece of cortical architecture dedicated to detecting and interpreting the dulcet signals of song. Yet for years, scientists failed to find any clear evidence of a music-specific domain through conventional brain-scanning technology, and the quest to understand the neural basis of a quintessential human passion foundered. Now researchers at the Massachusetts Institute of Technology have devised a radical new approach to brain imaging that reveals what past studies had missed. By mathematically analyzing scans of the auditory cortex and grouping clusters of brain cells with similar activation patterns, the scientists have identified neural pathways that react almost exclusively to the sound of music — any music. It may be Bach, bluegrass, hip-hop, big band, sitar or Julie Andrews. A listener may relish the sampled genre or revile it. No matter. When a musical passage is played, a distinct set of neurons tucked inside a furrow of a listener’s auditory cortex will fire in response. Other sounds, by contrast — a dog barking, a car skidding, a toilet flushing — leave the musical circuits unmoved. Nancy Kanwisher and Josh H. McDermott, professors of neuroscience at M.I.T., and their postdoctoral colleague Sam Norman-Haignere reported their results in the journal Neuron. The findings offer researchers a new tool for exploring the contours of human musicality. © 2016 The New York Times Company
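The method described above amounts to grouping voxels whose response profiles across many natural sounds look alike and then asking whether any group responds mainly to music. The sketch below illustrates that general idea with k-means clustering on synthetic data; it is not the study's actual analysis (the authors used a custom decomposition of fMRI responses), and every number in it (voxel count, clip count, cluster count, the planted "music-selective" group) is invented for illustration.

```python
# Toy illustration only: cluster synthetic voxel response profiles and look for a
# group that responds more to "music" clips than to other sounds. Not the MIT
# team's actual method; all sizes and labels below are made up.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_voxels, n_sounds = 500, 165            # hypothetical voxels x natural-sound clips
music_idx = np.arange(52)                # pretend the first 52 clips are music

responses = rng.normal(size=(n_voxels, n_sounds))
responses[:100, music_idx] += 3.0        # plant 100 "music-selective" voxels

# Normalize each voxel's profile, then group profiles by similarity.
profiles = responses - responses.mean(axis=1, keepdims=True)
profiles /= profiles.std(axis=1, keepdims=True)
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(profiles)

# A cluster whose mean response is much higher for music clips than for the
# remaining sounds plays the role of the "music-selective" population.
for k in range(6):
    cluster = responses[labels == k]
    music_mean = cluster[:, music_idx].mean()
    other_mean = np.delete(cluster, music_idx, axis=1).mean()
    print(f"cluster {k}: music {music_mean:.2f} vs other {other_mean:.2f} ({len(cluster)} voxels)")
```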

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 21873 - Posted: 02.09.2016

By Elizabeth Pennisi PACIFIC GROVE, CALIFORNIA—Bats have an uncanny ability to track and eat insects on the fly with incredible accuracy. But some moths make these agile mammals miss their mark. Tiger moths, for example, emit ultrasonic clicks that jam bat sonar. Now, scientists have shown that hawk moths and other species have also evolved this behavior. The nocturnal insects—which are toxic to bats—issue an ultrasonic “warning” whenever a bat is near. After a few nibbles, the bat learns to avoid the noxious species altogether. The researchers shot high-speed videos of bat chases in eight countries over 4 years. Their studies found that moths with an intact sound-producing apparatus—typically located at the tip of the genitals—were spared, whereas those silenced by the researchers were readily caught. As the video shows, when the moths hear the bat’s clicks intensifying as it homes in, they emit their own signal, causing the bat to veer off at the last second. It could be that, like the tiger moths, the hawk moths are jamming the bat’s signal. But, because most moth signals are not the right type to interfere with the bat’s, the researchers say it’s more likely that the bat recognizes the signal and avoids the target on its own. Presenting here last week at a meeting of the American Society of Naturalists, the researchers say this signaling ability has evolved three times in hawk moths and about a dozen more times overall among other moths. © 2016 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 21810 - Posted: 01.23.2016

By Geoffrey Giller The experience of seeing a lightning bolt before hearing its associated thunder some seconds later provides a fairly obvious example of the differential speeds of light and sound. But most intervals between linked visual and auditory stimuli are so brief as to be imperceptible. A new study has found that we can glean distance information from these minimally discrepant arrival times nonetheless. In a pair of experiments at the University of Rochester, 12 subjects were shown projected clusters of dots. When a sound was played about 40 or 60 milliseconds after the dots appeared (too short to be detected consciously), participants judged the clusters to be farther away than clusters with simultaneous or preceding sounds. Philip Jaekl, the lead author of the study and a postdoctoral fellow in cognitive neuroscience, says it makes sense that the brain would use all available sensory information for calculating distance. “Distance is something that's very difficult to compute,” he explains. The study was recently published in the journal PLOS ONE. Aaron Seitz, a professor of psychology and neuroscience at the University of California, Riverside, who was not involved in the work, says the results may be useful clinically, such as by helping people with amblyopia (lazy eye) improve their performance when training to see with both eyes. And there might be other practical applications, including making virtual-reality environments more realistic. “Adding in a delay,” says Nick Whiting, a VR engineer for Epic Games, “can be another technique in our repertoire in creating believable experiences.” © 2016 Scientific American,
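The underlying arithmetic is simple: at room or street scales light arrives essentially instantaneously, while sound travels at roughly 343 m/s, so every extra metre of distance adds about 3 milliseconds of audio lag. The sketch below (speed of sound assumed; the study's 40-60 ms offsets were injected artificially rather than produced by real distance) shows the distances those lags would imply.

```python
# Rough delay-to-distance arithmetic for the audiovisual cue described above.
# Assumes sound travels ~343 m/s in air and treats light as instantaneous.
SPEED_OF_SOUND_M_S = 343.0

def implied_extra_distance_m(delay_s: float) -> float:
    """Extra source distance that would produce this much extra audio lag."""
    return delay_s * SPEED_OF_SOUND_M_S

for delay_ms in (40, 60):
    d = implied_extra_distance_m(delay_ms / 1000)
    print(f"{delay_ms} ms of extra audio lag ~ a source about {d:.0f} m farther away")
```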

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 21807 - Posted: 01.21.2016

Bret Stetka In June of 2001 musician Peter Gabriel flew to Atlanta to make music with two apes. The jam went surprisingly well. At each session Gabriel, a known dabbler in experimental music and a founding member of the band Genesis, would riff with a small group of musicians. The bonobos – one named Panbanisha, the other Kanzi — were trained to play in response on keyboards and showed a surprising, if rudimentary, awareness of melody and rhythm. Since then Gabriel has been working with scientists to help better understand animal cognition, including musical perception. Plenty of related research has explored whether or not animals other than humans can recognize what we consider to be music – whether they can find coherence in a series of sounds that could otherwise transmit as noise. Many do, to a degree. And it's not just apes that respond to song. Parrots reportedly demonstrate some degree of "entrainment," or the syncing up of brainwave patterns with an external rhythm; dolphins may — and I stress may — respond to Radiohead; and certain styles of music reportedly influence dog behavior (Wagner supposedly honed his operas based on the response of his Cavalier King Charles Spaniel). But most researchers agree that fully appreciating what we create and recognize as music is a primarily human phenomenon. Recent research hints at how the human brain is uniquely able to recognize and enjoy music — how we render simple ripples of vibrating air into visceral, emotional experiences. It turns out, the answer has a lot to do with timing. The work also reveals why your musician friends are sometimes more tolerant of really boring music. © 2015 npr

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 11: Emotions, Aggression, and Stress
Link ID: 21713 - Posted: 12.19.2015

When we hear speech, electrical waves in our brain synchronise to the rhythm of the syllables, helping us to understand what’s being said. This happens when we listen to music too, and now we know some brains are better at syncing to the beat than others. Keith Doelling at New York University and his team recorded the brain waves of musicians and non-musicians as they listened to music, and found that both groups synchronised two types of low-frequency brain waves, known as delta and theta, to the rhythm of the music. Synchronising our brain waves to music helps us decode it, says Doelling. The electrical waves collect the information from continuous music and break it into smaller chunks that we can process. But for particularly slow music, the non-musicians were less able to synchronise, with some volunteers saying they couldn’t keep track of these slower rhythms. Doelling thinks musicians are more comfortable with slower tempos because of their musical training rather than natural talent. As part of his own musical education, he remembers being taught to break down tempo into smaller subdivisions. He suggests that grouping shorter beats together in this way is what helps musicians to process slow music better. One theory is that musicians have heard and played much more music, allowing them to acquire “meta-knowledge”, such as a better understanding of how composers structure pieces. This could help them detect a broader range of tempos, says Usha Goswami of the University of Cambridge. © Copyright Reed Business Information Ltd.
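For readers curious how "syncing to the beat" is usually quantified: a neural signal is band-pass filtered into a low-frequency band such as delta (roughly 1-4 Hz), and the consistency of its phase relative to the stimulus rhythm is summarized as a phase-locking value. The sketch below runs that calculation on a synthetic, made-up signal and a 2 Hz beat; it is a generic illustration of the technique, not the study's data or exact pipeline.

```python
# Hedged, generic sketch of measuring phase locking between a (fake) neural
# signal and a slow musical beat. All signals and parameters are synthetic.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 250                                        # sampling rate, Hz
t = np.arange(0, 30, 1 / fs)
beat_hz = 2.0                                   # a slow 2 Hz pulse
stimulus = np.sin(2 * np.pi * beat_hz * t)      # idealized stimulus rhythm

rng = np.random.default_rng(1)
# Fake recording: a component entrained to the beat (with a phase lag) plus noise.
neural = 0.6 * np.sin(2 * np.pi * beat_hz * t - 0.8) + rng.normal(scale=1.0, size=t.size)

# Band-pass the recording into the delta band (~1-4 Hz).
sos = butter(4, [1.0, 4.0], btype="bandpass", fs=fs, output="sos")
neural_delta = sosfiltfilt(sos, neural)

# Phase-locking value: 1 = perfectly consistent phase difference, 0 = none.
phase_diff = np.angle(hilbert(neural_delta)) - np.angle(hilbert(stimulus))
plv = np.abs(np.mean(np.exp(1j * phase_diff)))
print(f"Delta-band phase locking to the beat: {plv:.2f}")
```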

Related chapters from BP7e: Chapter 17: Learning and Memory; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 13: Memory, Learning, and Development; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 21574 - Posted: 10.27.2015

Using a sensitive new technology called single-cell RNA-seq on cells from mice, scientists have created the first high-resolution gene expression map of the newborn mouse inner ear. The findings provide new insight into how epithelial cells in the inner ear develop and differentiate into specialized cells that serve critical functions for hearing and maintaining balance. Understanding how these important cells form may provide a foundation for the potential development of cell-based therapies for treating hearing loss and balance disorders. The research was conducted by scientists at the National Institute on Deafness and Other Communication Disorders (NIDCD), part of the National Institutes of Health. In a companion study led by NIDCD-supported scientists at the University of Maryland School of Medicine and scientists at the Sackler School of Medicine at Tel Aviv University, researchers used a similar technique to identify a family of proteins critical for the development of inner ear cells. Both studies were published online on October 15 in the journal Nature Communications. “Age-related hearing loss occurs gradually in most of us as we grow older. It is one of the most common conditions among older adults, affecting half of people over age 75,” said James F. Battey, Jr., M.D., Ph.D., director of the NIDCD. “These new findings may lead to new regenerative treatments for this critical public health issue.” Specialized sensory epithelial cells in the inner ear include hair cells and supporting cells, which provide the hair cells with crucial structural and functional support. Hair cells and supporting cells located in the cochlea — the snail-shaped structure in the inner ear — work together to detect sound, thus enabling us to hear. In contrast, hair cells and supporting cells in the utricle, a fluid-filled pouch near the cochlea, play a critical role in helping us maintain our balance.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 21516 - Posted: 10.16.2015

Music can be a transformative experience, especially for your brain. Musicians’ brains respond more symmetrically to the music they listen to. And the size of the effect depends on which instrument they play. People who learn to play musical instruments can expect their brains to change in structure and function. When people are taught to play a piece of piano music, for example, the part of their brains that represents their finger movements gets bigger. Musicians are also better at identifying pitch and speech sounds – brain imaging studies suggest that this is because their brains respond more quickly and strongly to sound. Other research has found that the corpus callosum – the strip of tissue that connects the left and right hemisphere of the brain – is also larger in musicians. Might this mean that the two halves of a musician’s brain are better at communicating with each other compared with non-musicians? To find out, Iballa Burunat at the University of Jyväskylä in Finland and her colleagues used an fMRI scanner to look at the brains of 18 musicians and 18 people who have never played professionally. The professional musicians – all of whom had a degree in music – included cellists, violinists, keyboardists and bassoon and trombone players. While they were in the scanner, all of the participants were played three different pieces of music – prog rock, an Argentinian tango and some Stravinsky. Burunat recorded how their brains responded to the music, and used software to compare the activity of the left and right hemispheres of each person’s brain. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 21465 - Posted: 10.01.2015

By Jane E. Brody Mark Hammel’s hearing was damaged in his 20s by machine gun fire when he served in the Israeli Army. But not until decades later, at 57, did he receive his first hearing aids. “It was very joyful, but also very sad, when I contemplated how much I had missed all those years,” Dr. Hammel, a psychologist in Kingston, N.Y., said in an interview. “I could hear well enough sitting face to face with someone in a quiet room, but in public, with background noise, I knew people were talking, but I had no idea what they were saying. I just stood there nodding my head and smiling. “Eventually, I stopped going to social gatherings. Even driving, I couldn’t hear what my daughter was saying in the back seat. I live in the country, and I couldn’t hear the birds singing. “People with hearing loss often don’t realize what they’re missing,” he said. “So much of what makes us human is social contact, interaction with other human beings. When that’s cut off, it comes with a very high cost.” And the price people pay is much more than social. As Dr. Hammel now realizes, “the capacity to hear is so essential to overall health.” Hearing loss is one of the most common conditions affecting adults, and the most common among older adults. An estimated 30 million to 48 million Americans have hearing loss that significantly diminishes the quality of their lives — academically, professionally and medically as well as socially. One person in three older than 60 has life-diminishing hearing loss, but most older adults wait five to 15 years before they seek help, according to a 2012 report in Healthy Hearing magazine. And the longer the delay, the more one misses of life and the harder it can be to adjust to hearing aids. © 2015 The New York Times Company

Related chapters from BP7e: Chapter 8: General Principles of Sensory Processing, Touch, and Pain; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 21449 - Posted: 09.28.2015

Bill McQuay and Christopher Joyce Acoustic biologists who have learned to tune their ears to the sounds of life know there's a lot more to animal communication than just, "Hey, here I am!" or "I need a mate." From insects to elephants to people, we animals all use sound to function and converse in social groups — especially when the environment is dark, or underwater or heavily forested. "We think that we really know what's going on out there," says Dartmouth College biologist Laurel Symes, who studies crickets. But there's a cacophony all around us, she says, that's full of information still to be deciphered. "We're getting this tiny slice of all of the sound in the world." Recently scientists have pushed the field of bioacoustics even further, to record whole environments, not just the animals that live there. Some call this "acoustic ecology" — listening to the rain, streams, wind through the trees. A deciduous forest sounds different from a pine forest, for example, and that soundscape changes seasonally. Neuroscientist Seth Horowitz, author of the book The Universal Sense: How Hearing Shapes the Mind, is especially interested in the ways all these sounds, which are essentially vibrations, have shaped the evolution of the human brain. "Vibration sensitivity is found in even the most primitive life forms," Horowitz says — even bacteria. "It's so critical to your environment, knowing that something else is moving near you, whether it's a predator or it's food. Everywhere you go there is vibration and it tells you something." © 2015 NPR

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 21394 - Posted: 09.10.2015

By Gary Stix A decline in hearing acuity is not something that happens only to the aged. An article in the August Scientific American by M. Charles Liberman, a professor of otology and laryngology at Harvard Medical School and director of the Eaton-Peabody Laboratories at Massachusetts Eye and Ear, focuses on relatively recent discoveries that show the din of a concert or high-decibel machine noise is enough to cause some level of hearing damage. After reading the article, check out this video by medical illustrator Brandon Pletsch, a narrated animation explaining how the sensory system that detects sound functions. © 2015 Scientific American

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 21250 - Posted: 08.02.2015

Chris Woolston A study that did not find cognitive benefits of musical training for young children triggered a “media firestorm”. Researchers often complain about inaccurate science stories in the popular press, but few air their grievances in a journal. Samuel Mehr, a PhD student at Harvard University in Cambridge, Massachusetts, discussed in a Frontiers in Psychology article some examples of media missteps from his own field — the effects of music on cognition. The opinion piece gained widespread attention online; Arseny Khakhalin, a neuroscientist at Bard College in Annandale-on-Hudson, New York, was among those who tweeted about it. Mehr gained first-hand experience of the media as the first author of a 2013 study in PLoS ONE. The study involved two randomized, controlled trials of a total of 74 four-year-olds. For children who did six weeks of music classes, there was no sign that musical activities improved scores on specific cognitive tests compared to children who did six weeks of art projects or took part in no organized activities. The authors cautioned, however, that the lack of effect of the music classes could have been a result of how they did the studies. The intervention in the trials was brief and not especially intensive — the children mainly sang songs and played with rhythm instruments — and older children might have had a different response than the four-year-olds. There are many possible benefits of musical training, Mehr said in an interview, but finding them was beyond the scope of the study. © 2015 Nature Publishing Group

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 21216 - Posted: 07.25.2015

By Hanae Armitage Playing an instrument is good for your brain. Compared to nonmusicians, young children who strum a guitar or blow a trombone become better readers with better vocabularies. A new study shows that the benefits extend to teenagers as well. Neuroscientists compared two groups of high school students over 3 years: One began learning their first instrument in band class, whereas the other focused on physical fitness in Junior Reserve Officers’ Training Corps (JROTC). At the end of 3 years, those students who had played instruments were better at detecting speech sounds, like syllables and words that rhyme, than their JROTC peers, the team reports online today in the Proceedings of the National Academy of Sciences. Researchers know that as children grow up, their ability to soak up new information, especially language, starts to diminish. These findings suggest that musical training could keep that window open longer. But the benefits of music aren’t just for musicians; taking up piano could be the difference between an A and a B in Spanish class. © 2015 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 21194 - Posted: 07.21.2015

By C. CLAIBORNE RAY Q. Can you hear without an intact eardrum? A. “When the eardrum is not intact, there is usually some degree of hearing loss until it heals,” said Dr. Ashutosh Kacker, an ear, nose and throat specialist at NewYork-Presbyterian Hospital and a professor at Weill Cornell Medical College, “but depending on the size of the hole, you may still be able to hear almost normally.” Typically, Dr. Kacker said, the larger an eardrum perforation is, the more severe the hearing loss it will cause. The eardrum, or tympanic membrane, is a thin, cone-shaped, pearly gray tissue separating the outer ear canal from the middle ear, he explained. Sound waves hit the eardrum, which in turn vibrates the bones of the middle ear. The bones pass the vibration to the cochlea, which leads to a signal cascade culminating in the sound being processed by the brain and being heard. There are several ways an eardrum can be ruptured, Dr. Kacker said, including trauma, exposure to sudden or very loud noises, foreign objects inserted deeply into the ear canal, and middle-ear infection. “Usually, the hole will heal by itself and hearing will improve within about two weeks to a few months, especially in cases where the hole is small,” he said. Sometimes, when the hole is larger or does not heal well, surgery will be required to repair the eardrum. Most such operations are done by placing a patch over the hole to allow it to heal, and the surgery is usually very successful in restoring hearing, Dr. Kacker said. © 2015 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 21187 - Posted: 07.20.2015

Jon Hamilton It's almost impossible to ignore a screaming baby. And now scientists think they know why. "Screams occupy their own little patch of the soundscape that doesn't seem to be used for other things," says David Poeppel, a professor of psychology and neuroscience at New York University and director of the Department of Neuroscience at the Max Planck Institute in Frankfurt. And when people hear the unique sound characteristics of a scream — from a baby or anyone else — it triggers fear circuits in the brain, Poeppel and a team of researchers report in Current Biology. The team also found that certain artificial sounds, like alarms, trigger the same circuits. "That's why you want to throw your alarm clock on the floor," Poeppel says. The researchers in Poeppel's lab decided to study screams in part because they are a primal form of communication found in every culture. And there was another reason. "Many of the postdocs in my lab are in the middle of having kids and, of course, screams are very much on their mind," Poeppel says. "So it made perfect sense for them to be obsessed with this topic." The team started by trying to figure out "what makes a scream a scream," Poeppel says. Answering that question required creating a large database of recorded screams — from movies, from the Internet and from volunteers who agreed to step into a sound booth. A careful analysis of these screams found that they're not like any other sound that people make, including other loud, high-pitched vocalizations. The difference is something called the amplitude modulation rate, which is how often the loudness of a sound changes. © 2015 NPR
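The "amplitude modulation rate" mentioned above is simply how fast a sound's loudness envelope fluctuates. One standard way to estimate it, sketched below on a synthetic tone whose loudness wobbles 70 times per second (a made-up stand-in, not one of the study's recorded screams), is to extract the envelope with a Hilbert transform and find the strongest fluctuation frequency in its spectrum.

```python
# Hedged sketch: estimate a sound's amplitude modulation rate from its loudness
# envelope. The "scream-like" signal is synthetic; real analyses use recordings.
import numpy as np
from scipy.signal import hilbert

fs = 16000                                  # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
am_rate_hz = 70.0                           # hypothetical modulation rate
carrier_hz = 400.0
signal = (1 + 0.8 * np.sin(2 * np.pi * am_rate_hz * t)) * np.sin(2 * np.pi * carrier_hz * t)

# Loudness envelope via the Hilbert transform, then the envelope's spectrum.
envelope = np.abs(hilbert(signal))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, d=1 / fs)
print(f"Estimated amplitude modulation rate: {freqs[spectrum.argmax()]:.1f} Hz")  # ~70 Hz
```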

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 21183 - Posted: 07.18.2015

By Sarah Schwartz In a possible step toward treating genetic human deafness, scientists have used gene therapy to partially restore hearing in deaf mice. Some mice with genetic hearing loss could sense and respond to noises after receiving working copies of their faulty genes, researchers report July 8 in Science Translational Medicine. Because the mice’s mutated genes closely correspond to those responsible for some hereditary human deafness, the scientists hope the results will inform future human therapies. “I would call this a really exciting big step,” says otolaryngologist Lawrence Lustig of Columbia University Medical Center. The ear’s sound-sensing hair cells convert noises into information the brain can process. Hair cells need specific proteins to work properly, and alterations in the genetic blueprints for these proteins can cause deafness. To combat the effects of two such mutations, the scientists injected viruses containing healthy genes into the ears of deaf baby mice. The virus infected some hair cells, giving them working genes. The scientists tried this therapy on two different deafness-causing mutations. Within a month, around half the mice with one mutation showed brainwave activity consistent with hearing and jumped when exposed to loud noises. Treated mice with the other mutation didn’t respond to noises, but the gene therapy helped their hair cells — which normally die off quickly due to the mutation — survive. All of the untreated mice remained deaf. © Society for Science & the Public 2000 - 2015

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 21152 - Posted: 07.09.2015

By Sarah Lewin Evolutionary biologists have long wondered why the eardrum—the membrane that relays sound waves to the inner ear—in humans and other mammals looks remarkably like the one in reptiles and birds. Did the membrane and therefore the ability to hear in these groups evolve from a common ancestor? Or did the auditory systems evolve independently to perform the same function, a phenomenon called convergent evolution? A recent set of experiments performed at the University of Tokyo and the RIKEN Evolutionary Morphology Laboratory in Japan resolves the issue. When the scientists genetically inhibited lower jaw development in both fetal mice and chickens, the mice formed neither eardrums nor ear canals. In contrast, the birds grew two upper jaws, from which two sets of eardrums and ear canals sprouted. The results, published in Nature Communications, confirm that the middle ear grows out of the lower jaw in mammals but emerges from the upper jaw in birds—all supporting the hypothesis that the similar anatomy evolved independently in mammals and in reptiles and birds. (Scientific American is part of Springer Nature.) Fossils of auditory bones had supported this conclusion as well, but eardrums do not fossilize and so could not be examined directly. © 2015 Scientific American

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 21098 - Posted: 06.27.2015

By Rachel Feltman [Video: "Here's why you hate the sound of your own voice" (1:07), Pamela Kirkland/The Washington Post] Whether you've heard yourself talking on the radio or just gabbing in a friend's Instagram video, you probably know the sound of your own voice -- and chances are pretty good that you hate it. As the accompanying video explains, your voice as you hear it when you speak out loud is very different from the voice the rest of the world perceives. That's because it comes to you via a different channel than everyone else. When sound waves from the outside world -- someone else's voice, for example -- hit the outer ear, they're siphoned straight through the ear canal to hit the ear drum, creating vibrations that the brain will translate into sound. When we talk, our ear drums and inner ears vibrate from the sound waves we're putting out into the air. But they also have another source of vibration -- the movements caused by the production of the sound. Our vocal cords and airways are trembling, too, and those vibrations make their way over to auditory processing as well. Your body is better at carrying low, rich tones than the air is. So when those two sources of sound get combined into one perception of your own voice, it sounds lower and richer. That's why hearing the way your voice sounds without all the body vibes can be off-putting -- it's unfamiliar -- or even unpleasant, because of the relative tinniness.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 21062 - Posted: 06.17.2015

How echolocation really works By Dwayne Godwin and Jorge Cham © 2015 Scientific American

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 21017 - Posted: 06.06.2015

Lauren Silverman Jiya Bavishi was born deaf. For five years, she couldn't hear and she couldn't speak at all. But when I first meet her, all she wants to do is say hello. The 6-year-old is bouncing around the room at her speech therapy session in Dallas. She's wearing a bright pink top; her tiny gold earrings flash as she waves her arms. "Hi," she says, and then uses sign language to ask who I am and talk about the ice cream her father bought for her. Jiya is taking part in a clinical trial testing a new hearing technology. At 12 months, she was given a cochlear implant. These surgically implanted devices send signals directly to the nerves used to hear. But cochlear implants don't work for everyone, and they didn't work for Jiya. [Photo: A schoolboy with a cochlear implant listens to his teacher during lessons at a school for the hearing impaired in Germany. The implants have dramatically changed the way deaf children learn and transition out of schools for the deaf and into classrooms with non-disabled students.] "The physician was able to get all of the electrodes into her cochlea," says Linda Daniel, a certified auditory-verbal therapist and rehabilitative audiologist with HEAR, a rehabilitation clinic in Dallas. Daniel has been working with Jiya since she was a baby. "However, you have to have a sufficient or healthy auditory nerve to connect the cochlea and the electrodes up to the brainstem." But Jiya's connection between the cochlea and the brainstem was too thin. There was no way for sounds to make that final leg of the journey and reach her brain. © 2015 NPR

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 21005 - Posted: 06.01.2015

By Meeri Kim The dangers of concussions, which are caused by traumatic stretching and damage to nerve cells in the brain and lead to dizziness, nausea and headache, have been well documented. But ear damage that is sometimes caused by a head injury has symptoms so similar to the signs of a concussion that doctors may misdiagnose it and administer the wrong treatment. A perilymph fistula is a tear or defect in the small, thin membranes that normally separate the air-filled middle ear from the inner ear, which is filled with a fluid called perilymph. When a fistula forms, tiny amounts of this fluid leak out of the inner ear, an organ crucial not only for hearing but also for balance. Losing even a few small drops of perilymph leaves people disoriented, nauseous and often with a splitting headache, vertigo and memory loss. While most people with a concussion recover within a few days, a perilymph fistula can leave a person disabled for months. There is some controversy around perilymph fistula because it is difficult to diagnose — the leak is not directly observable, but rather identified by its symptoms. However, it is generally accepted as a real condition by otolaryngologists and sports physicians, and is typically known to follow a traumatic event. But concussions — as well as post-concussion syndrome, which is marked by dizziness, headache and other symptoms that can last even a year after the initial blow — also occur as the result of such an injury.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 20968 - Posted: 05.23.2015