Chapter 6. Hearing, Balance, Taste, and Smell
By Hanae Armitage Playing an instrument is good for your brain. Compared to nonmusicians, young children who strum a guitar or blow a trombone become better readers with better vocabularies. A new study shows that the benefits extend to teenagers as well. Neuroscientists compared two groups of high school students over 3 years: One began learning their first instrument in band class, whereas the other focused on physical fitness in Junior Reserve Officers’ Training Corps (JROTC). At the end of 3 years, those students who had played instruments were better at detecting speech sounds, like syllables and words that rhyme, than their JROTC peers, the team reports online today in the Proceedings of the National Academy of Sciences. Researchers know that as children grow up, their ability to soak up new information, especially language, starts to diminish. These findings suggest that musical training could keep that window open longer. But the benefits of music aren’t just for musicians; taking up piano could be the difference between an A and a B in Spanish class. © 2015 American Association for the Advancement of Science
By C. CLAIBORNE RAY Q. Can you hear without an intact eardrum? A. “When the eardrum is not intact, there is usually some degree of hearing loss until it heals,” said Dr. Ashutosh Kacker, an ear, nose and throat specialist at NewYork-Presbyterian Hospital and a professor at Weill Cornell Medical College, “but depending on the size of the hole, you may still be able to hear almost normally.” Typically, Dr. Kacker said, the larger an eardrum perforation is, the more severe the hearing loss it will cause. The eardrum, or tympanic membrane, is a thin, cone-shaped, pearly gray tissue separating the outer ear canal from the middle ear, he explained. Sound waves hit the eardrum, which in turn vibrates the bones of the middle ear. The bones pass the vibration to the cochlea, which leads to a signal cascade culminating in the sound being processed by the brain and being heard. There are several ways an eardrum can be ruptured, Dr. Kacker said, including trauma, exposure to sudden or very loud noises, foreign objects inserted deeply into the ear canal, and middle-ear infection. “Usually, the hole will heal by itself and hearing will improve within about two weeks to a few months, especially in cases where the hole is small,” he said. Sometimes, when the hole is larger or does not heal well, surgery will be required to repair the eardrum. Most such operations are done by placing a patch over the hole to allow it to heal, and the surgery is usually very successful in restoring hearing, Dr. Kacker said. © 2015 The New York Times Company
Jon Hamilton It's almost impossible to ignore a screaming baby. And now scientists think they know why. "Screams occupy their own little patch of the soundscape that doesn't seem to be used for other things," says David Poeppel, a professor of psychology and neuroscience at New York University and director of the Department of Neuroscience at the Max Planck Institute in Frankfurt. And when people hear the unique sound characteristics of a scream — from a baby or anyone else — it triggers fear circuits in the brain, Poeppel and a team of researchers report in Current Biology. The team also found that certain artificial sounds, like alarms, trigger the same circuits. "That's why you want to throw your alarm clock on the floor," Poeppel says. The researchers in Poeppel's lab decided to study screams in part because they are a primal form of communication found in every culture. And there was another reason. "Many of the postdocs in my lab are in the middle of having kids and, of course, screams are very much on their mind," Poeppel says. "So it made perfect sense for them to be obsessed with this topic." The team started by trying to figure out "what makes a scream a scream," Poeppel says. Answering that question required creating a large database of recorded screams — from movies, from the Internet and from volunteers who agreed to step into a sound booth. A careful analysis of these screams found that they're not like any other sound that people make, including other loud, high-pitched vocalizations. The difference is something called the amplitude modulation rate, which is how often the loudness of a sound changes. © 2015 NPR
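The article doesn't give the team's analysis pipeline, but the quantity it describes, how often a sound's loudness changes, can be estimated from the signal's amplitude envelope. Here is a minimal NumPy sketch; the function name and the Hilbert-transform approach are illustrative assumptions, not the study's actual code:

```python
import numpy as np

def modulation_rate(signal, sample_rate):
    """Estimate the dominant amplitude-modulation rate (Hz) of a signal.

    The amplitude envelope is taken as the magnitude of the analytic
    signal (built here from the FFT so the example needs only NumPy);
    the envelope's dominant fluctuation frequency approximates how
    often the loudness changes.
    """
    n = len(signal)
    spectrum = np.fft.fft(signal)
    # Standard Hilbert-transform weighting: keep DC and Nyquist,
    # double positive frequencies, zero out negative frequencies.
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    envelope = np.abs(np.fft.ifft(spectrum * h))
    # Dominant frequency of the envelope, ignoring the DC component.
    env_spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    return freqs[np.argmax(env_spec)]
```

Fed a 1 kHz tone whose loudness wobbles 50 times per second, the function returns roughly 50 Hz; ordinary speech envelopes fluctuate far more slowly than the rates the article attributes to screams.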
That song really is stuck in your head. The experience of hearing tunes in your mind appears to be linked to physical differences in brain structure. The study is the first to look at the neural basis for “involuntary musical imagery” – or “earworms”. They aren’t just a curiosity, says study co-author Lauren Stewart at Goldsmiths, University of London, but could have a biological function. Stewart, a music psychologist, was first inspired to study earworms by a regular feature on the radio station BBC 6Music, in which listeners would write in with songs they had woken up with in their heads. There was a lot of interest from the public in what earworms are and where they come from, but there was little research on the topic, she says. Once Stewart and her team started researching earworms, it became clear that some people are affected quite severely: one person even wrote to them saying he had lost his job because of an earworm. To find out what makes some people more susceptible to the phenomenon, the team asked 44 volunteers about how often they got earworms and how they were affected by them. Then they used MRI scans to measure the thickness of volunteers’ cerebral cortices and the volume of their grey matter in various brain areas. People who suffered earworms more frequently had thicker cortices in areas involved in auditory perception and pitch discrimination. © Copyright Reed Business Information Ltd.
By Sarah Schwartz In a possible step toward treating genetic human deafness, scientists have used gene therapy to partially restore hearing in deaf mice. Some mice with genetic hearing loss could sense and respond to noises after receiving working copies of their faulty genes, researchers report July 8 in Science Translational Medicine. Because the mice’s mutated genes closely correspond to those responsible for some hereditary human deafness, the scientists hope the results will inform future human therapies. “I would call this a really exciting big step,” says otolaryngologist Lawrence Lustig of Columbia University Medical Center. The ear’s sound-sensing hair cells convert noises into information the brain can process. Hair cells need specific proteins to work properly, and alterations in the genetic blueprints for these proteins can cause deafness. To combat the effects of two such mutations, the scientists injected viruses containing healthy genes into the ears of deaf baby mice. The virus infected some hair cells, giving them working genes. The scientists tried this therapy on two different deafness-causing mutations. Within a month, around half the mice with one mutation showed brainwave activity consistent with hearing and jumped when exposed to loud noises. Treated mice with the other mutation didn’t respond to noises, but the gene therapy helped their hair cells — which normally die off quickly due to the mutation — survive. All of the untreated mice remained deaf. © Society for Science & the Public 2000 - 2015
By SINDYA N. BHANOO It may be possible to diagnose autism by giving children a sniff test, a new study suggests. Most people instinctively take a big whiff when they encounter a pleasant smell and limit their breathing when they encounter a foul smell. Children with autism spectrum disorder don’t make this natural adjustment, said Liron Rozenkrantz, a neuroscientist at the Weizmann Institute of Science in Israel and one of the researchers involved with the study. She and her colleagues report their findings in the journal Current Biology. They presented 18 children who had an autism diagnosis and 18 typically developing children with pleasant and unpleasant odors and measured their sniff responses. The pleasant smells were rose and soap, and the unpleasant smells were sour milk and rotten fish. Typically developing children adjusted their sniffing almost immediately — within about 305 milliseconds. Children with autism did not respond as rapidly. As they were exposed to the smells, the children were watching a cartoon or playing a video game. “It’s a semi-automated response,” Ms. Rozenkrantz said. “It does not require the subject’s attention.” Using the sniff test alone, the researchers, who had not been told which children had autism, were able to correctly identify those with autism 81 percent of the time. They also found that the farther removed an autistic child’s sniff response was from the average for typically developing children, the more severe the child’s social impairments were. © 2015 The New York Times Company
By Victoria Gill Science reporter, BBC News Cat v mouse: it is probably the most famous predator-prey pairing, enshrined in idioms and a well-known cartoon. And cats, it turns out, even have chemical warfare in their anti-mouse arsenal - contained in their urine. Researchers found that when very young mice were exposed to a chemical in cat urine, they were less likely to avoid the scent of cats later in life. The findings were presented at the Society for Experimental Biology's annual meeting in Prague. The researchers, from the A.N. Severtsov Institute of Ecology and Evolution in Moscow, had previously found that the compound - aptly named felinine - causes pregnant mice to abort. Dr Vera Voznessenskaya explained that mice have a physiological response to this cat-specific compound. Chemical-sensing neurons in the mouse's brain pick up the scent, triggering a reaction which includes an increase in the levels of stress hormones. "It's something that has existed in cats and mice for thousands of years," said Dr Voznessenskaya. This new study revealed that baby mice exposed to the compound during a "critical period" in their development would, as adults, react quite differently to their arch enemy's smell. The team exposed one-month-old mice to the chemical over two weeks. When they were tested later for their reaction, they were much less likely to flee the same scent. "Their physical sensitivity [to the chemical] was actually much higher," Dr Voznessenskaya explained. "More of their receptors detect the compound and they produce higher levels of stress hormone." Despite this, though, mice raised around the unmistakable scent of cat pee are less inclined to show signs of fear, or to flee, when they sniff it out. © 2015 BBC.
by Sarah Zielinski Seabirds called shearwaters manage to navigate across long stretches of open water to islands where the birds breed. It’s not been clear how the birds do this, but there have been some clues. When scientists magnetically disturbed Cory’s shearwaters, the birds still managed to find their way. But when deprived of their sense of smell, the shearwaters had trouble homing in on their final destination. Smell wouldn’t seem to be all that useful out over the ocean, especially with winds and other atmospheric disturbances playing havoc with any scents wafting through the air. But now researchers say they have more evidence that shearwaters are using olfactory cues to navigate. Andrew Reynolds of Rothamsted Research in Harpenden, England, and colleagues make their case June 30 in the Proceedings of the Royal Society B. Messing with Cory’s shearwaters or other seabirds, as researchers did in earlier studies, wasn’t a good option, the researchers say, because there are conservation concerns when it comes to these species. Instead, they attached tiny GPS loggers to 210 shearwaters belonging to three species: Cory’s shearwaters, Scopoli’s shearwaters and Cape Verde shearwaters. But how would the birds’ path reveal how they were navigating? If they were using olfactory cues, the team reasoned, the birds wouldn’t take a straight path to their target. Instead, they would fly straight for a time, guided in that direction by a particular smell. When they lost that scent, their direction would change, until they picked up another scent that could guide them. And only when a bird got close would it use landmarks, other birds and the odor of the breeding colony as guides. If the birds were using some other method of navigation — or randomly searching for where to go — their paths would look much different. © Society for Science & the Public 2000 - 2015
By Christopher Intagliata Two decades ago, Swiss researchers had women smell the tee shirts that various men had slept in for two nights. Turned out that if women liked the aroma of a particular shirt, the guy who’d worn it was likely to have genetically coded immunity that was unlike the woman’s. Well the effect isn't just limited to sweaty shirts. Turns out we all smell things a little differently—you pick up a note of cloves, say, where I smell something more soapy—and that too gives clues to our degree of genetic similarity. Researchers tried that test with 89 people—having them sniff a couple dozen samples, and label each one using terms like lemony, coconut, fishy and floral. And each volunteer classified the scents differently enough that the researchers could single them out in subsequent tests, based on what they called each subject’s "olfactory fingerprint." Researchers then repeated that sniff test on another 130 subjects. But this time they did a blood test, too, to figure out each person's HLA type—an immune factor that determines whether you'll reject someone's organ, for example. They found that people who perceived smells similarly also had similar HLA types. Study author Lavi Secundo, a neuroscientist at the Weizmann Institute of Science in Israel, says the smell test could have real-world applications. "For organ donation you can think of this method as a quick, maybe a quick and dirty, method to sift between the best and the rest." He and his colleagues say it might even eliminate the need for 30 percent of the HLA tests done today. The work appears in the Proceedings of the National Academy of Sciences. [Lavi Secundo et al, Individual olfactory perception reveals meaningful nonolfactory genetic information] © 2015 Scientific American
By Sarah Lewin Evolutionary biologists have long wondered why the eardrum—the membrane that relays sound waves to the inner ear—looks remarkably similar in humans and other mammals to the one in reptiles and birds. Did the membrane, and therefore the ability to hear, in these groups evolve from a common ancestor? Or did the auditory systems evolve independently to perform the same function, a phenomenon called convergent evolution? A recent set of experiments performed at the University of Tokyo and the RIKEN Evolutionary Morphology Laboratory in Japan resolves the issue. When the scientists genetically inhibited lower jaw development in both fetal mice and chickens, the mice formed neither eardrums nor ear canals. In contrast, the birds grew two upper jaws, from which two sets of eardrums and ear canals sprouted. The results, published in Nature Communications, confirm that the middle ear grows out of the lower jaw in mammals but emerges from the upper jaw in birds—all supporting the hypothesis that the similar anatomy evolved independently in mammals and in reptiles and birds. (Scientific American is part of Springer Nature.) Fossils of auditory bones had supported this conclusion as well, but eardrums do not fossilize and so could not be examined directly. © 2015 Scientific American
Sarah Schwartz A person’s sense of smell may reveal a lot about his or her identity. A new test can distinguish individuals based upon their perception of odors, possibly reflecting a person’s genetic makeup, scientists report online June 22 in Proceedings of the National Academy of Sciences. Most humans perceive a given odor similarly. But the genes for the molecular machinery that humans use to detect scents are about 30 percent different in any two people, says neuroscientist Noam Sobel of the Weizmann Institute of Science in Rehovot, Israel. This variation means that nearly every person’s sense of smell is subtly different. Nobody had ever developed a way to test this sensory uniqueness, Sobel says. Sobel and his colleagues designed a sensitive scent test they call the “olfactory fingerprint.” In an experiment, test subjects rated how strongly 28 odors such as clove or compost matched 54 adjectives such as “nutty” or “pleasant.” An olfactory fingerprint describes individuals’ perceptions of odors’ similarities, not potentially subjective scent descriptions. All 89 subjects in the study had distinct olfactory fingerprints. The researchers calculated that just seven odors and 11 descriptors could have identified each individual in the group. With 34 odors, 35 descriptors, and around five hours of testing per person, the scientists estimate they could individually identify about 7 billion different people, roughly the entire human population. © Society for Science & the Public 2000 - 2015.
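The authors' actual statistics aren't reproduced here, but the core idea can be sketched: a fingerprint built from a subject's pattern of perceived odor similarities (rather than the raw, word-dependent descriptor ratings), with subjects then matched by whichever stored fingerprint lies closest. The function names and the Euclidean distance measure below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def olfactory_fingerprint(ratings):
    """ratings: (n_odors, n_descriptors) array of one subject's scores.

    Returns the upper triangle of the odor-by-odor correlation matrix:
    how similar each pair of odors smelled to this subject, independent
    of how the subject happened to use the descriptor words themselves.
    """
    corr = np.corrcoef(ratings)           # odor-by-odor similarity
    iu = np.triu_indices_from(corr, k=1)  # unique odor pairs only
    return corr[iu]

def identify(probe, gallery):
    """Return the index of the gallery fingerprint closest to the probe."""
    dists = [np.linalg.norm(probe - g) for g in gallery]
    return int(np.argmin(dists))
```

With 28 odors there are 378 odor pairs per fingerprint, which gives a sense of why a handful of odors and descriptors can already separate dozens of subjects: each added odor grows the fingerprint by many pairwise comparisons.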
By Rachel Feltman Whether you've heard yourself talking on the radio or just gabbing in a friend's Instagram video, you probably know the sound of your own voice -- and chances are pretty good that you hate it. Your voice as you hear it when you speak out loud is very different from the voice the rest of the world perceives. That's because it comes to you via a different channel than everyone else. When sound waves from the outside world -- someone else's voice, for example -- hit the outer ear, they're siphoned straight through the ear canal to hit the ear drum, creating vibrations that the brain will translate into sound. When we talk, our ear drums and inner ears vibrate from the sound waves we're putting out into the air. But they also have another source of vibration -- the movements caused by the production of the sound. Our vocal cords and airways are trembling, too, and those vibrations make their way over to auditory processing as well. Your body is better at carrying low, rich tones than the air is. So when those two sources of sound get combined into one perception of your own voice, it sounds lower and richer. That's why hearing the way your voice sounds without all the body vibes can be off-putting -- it's unfamiliar -- or even unpleasant, because of the relative tinniness.
By Brian Handwerk When it comes to mating, female mice must follow their noses. For the first time, scientists have shown that hormones in mice hijack smell receptors in the nose to drive behavior, while leaving the brain completely out of the loop. According to the study, appearing this week in Cell, female mice can smell attractant male pheromones during their reproductive periods. But during periods of diestrus, when the animals are unable to reproduce, the hormone progesterone prompts nasal sensory cells to block male pheromone signals so that they don't reach a female's brain. During this time, female mice display indifference or even hostility toward males. The same sensors functioned normally with regard to other smells, like cat urine, showing they are selective for male pheromones. When ovulation begins, progesterone levels drop, enabling the females to once more smell male pheromones. In short, the system "blinds" female mice to potential mates when the animals are not in estrus. The finding that the olfactory system usurped the brain's role shocked the research team, says lead author Lisa Stowers of the Scripps Research Institute. “The sensory systems are just supposed to sort of suck up everything they can in the environment and pass it all on to the brain. The result just seems wacky to us,” Stowers says. “Imagine this occurring in your visual system," she adds. "If you just ate a big hamburger and then saw a buffet, you might see things like the table and some people and maybe some fruit—but you simply wouldn't see the hamburgers anymore. That's kind of what happens here. Based on this female's internal-state change, she's missing an entire subset of the cues being passed on to her brain.”
How echolocation really works By Dwayne Godwin and Jorge Cham © 2015 Scientific American
By Emily DeMarco For owners of picky cats, that disdainful sniff—signaling the refusal of yet another Friskies flavor—can be soul-crushing. Some cats are notoriously finicky eaters, but the reasons behind such fussy behavior remain fuzzy. Previous research has shown that cats can’t taste sweet flavors, but little is known about how they perceive bitter tastes. Now, researchers in the pet food industry have identified two bitter taste receptors in domestic cats, which could help explain why some felines are so choosy when it comes to their chow. In the study, published today in BMC Neuroscience, the scientists used cell-based experiments to see how the two cat taste receptors, known as Tas2r38 and Tas2r43, responded to bitter compounds such as phenylthiocarbamide (PTC) and 6-n-propylthiouracil (PROP)—which have molecular structures similar to ones in Brussels sprouts and broccoli—as well as aloin (from the aloe plant) and denatonium (used to prevent inadvertent ingestion of some chemicals). When compared with the human versions of these receptors, the researchers found that the cat bitter receptor Tas2r38 was less sensitive to PTC and did not respond to PROP, whereas Tas2r43 was less sensitive to aloin but more sensitive to denatonium, leading the researchers to conclude that cats taste different, and perhaps more narrow, ranges of bitter flavors than humans. The research could help pharmaceutical and pet food manufacturers create compounds that block or inhibit these bitter taste receptors, the team says, potentially leading to more appetizing medicines (if such a thing exists) and foods for our feline companions. © 2015 American Association for the Advancement of Science
Lauren Silverman Jiya Bavishi was born deaf. For five years, she couldn't hear and she couldn't speak at all. But when I first meet her, all she wants to do is say hello. The 6-year-old is bouncing around the room at her speech therapy session in Dallas. She's wearing a bright pink top; her tiny gold earrings flash as she waves her arms. "Hi," she says, and then uses sign language to ask who I am and talk about the ice cream her father bought for her. Jiya is taking part in a clinical trial testing a new hearing technology. At 12 months, she was given a cochlear implant. These surgically implanted devices send signals directly to the nerves used to hear. But cochlear implants don't work for everyone, and they didn't work for Jiya. "The physician was able to get all of the electrodes into her cochlea," says Linda Daniel, a certified auditory-verbal therapist and rehabilitative audiologist with HEAR, a rehabilitation clinic in Dallas. Daniel has been working with Jiya since she was a baby. "However, you have to have a sufficient or healthy auditory nerve to connect the cochlea and the electrodes up to the brainstem." But Jiya's connection between the cochlea and the brainstem was too thin. There was no way for sounds to make that final leg of the journey and reach her brain. © 2015 NPR
By Meeri Kim The dangers of concussions, caused by traumatic stretching and damage to nerve cells in the brain that lead to dizziness, nausea and headache, have been well documented. But ear damage that is sometimes caused by a head injury has symptoms so similar to the signs of a concussion that doctors may misdiagnose it and administer the wrong treatment. A perilymph fistula is a tear or defect in the small, thin membranes that normally separate the air-filled middle ear from the inner ear, which is filled with a fluid called perilymph. When a fistula forms, tiny amounts of this fluid leak out of the inner ear, an organ crucial not only for hearing but also for balance. Losing even a few small drops of perilymph leaves people disoriented, nauseous and often with a splitting headache, vertigo and memory loss. While most people with a concussion recover within a few days, a perilymph fistula can leave a person disabled for months. There is some controversy around perilymph fistula due to the difficulty of diagnosing it — the leak is not directly observable, but rather identified by its symptoms. However, it is generally accepted as a real condition by otolaryngologists and sports physicians, and typically known to follow a traumatic event. But concussions — as well as post-concussion syndrome, which is marked by dizziness, headache and other symptoms that can last even a year after the initial blow — also occur as the result of such an injury.
By SINDYA N. BHANOO Male Java sparrows are songbirds — and, scientists reported on Wednesday, natural percussionists. The sparrows click their bills against a hard surface while singing. That clicking is done in coordination with the song, much as a percussion instrument accompanies a melody. Researchers at Hokkaido University in Japan observed the birds producing clicks frequently toward the beginning of their songs and around specific notes. Birds that were related produced similar percussive patterns, but whether this behavior is learned or innate is unclear. Next the scientists, who described their findings on Wednesday in the journal PLOS One, would like to know whether male sparrows use bill clicks during courtship communication. © 2015 The New York Times Company
Jon Hamilton When Sam Swiller used hearing aids, his musical tastes ran to AC/DC and Nirvana – loud bands with lots of drums and bass. But after Swiller got a cochlear implant in 2005, he found that sort of music less appealing. "I was getting pushed away from sounds I used to love," he says, "but also being more attracted to sounds that I never appreciated before." So he began listening to folk and alternative music, including the Icelandic singer Bjork. There are lots of stories like this among people who get cochlear implants. And there's a good reason. A cochlear implant isn't just a fancy hearing aid. "A hearing aid is really just an amplifier," says Jessica Phillips-Silver, a neuroscience researcher at Georgetown University. "The cochlear implant is actually bypassing the damaged part of the ear and delivering electrical impulses directly to the auditory nerve." As a result, the experience of listening to music or any other sound through the ear, with or without a hearing aid, can be completely unlike the experience of listening through a cochlear implant. "You're basically remapping the audio world," Swiller says. Swiller is 39 years old and lives in Washington, D.C. He was born with an inherited disorder that caused him to lose much of his hearing by his first birthday. That was in the 1970s, and cochlear implants were still considered experimental devices. So Swiller got hearing aids. They helped, but Swiller still wasn't hearing what other people were. © 2015 NPR
By Virginia Morell Like humans, dolphins, and a few other animals, North Atlantic right whales (Eubalaena glacialis) have distinctive voices. The usually docile cetaceans utter about half a dozen different calls, but the way in which each one does so is unique. To find out just how unique, researchers from Syracuse University in New York analyzed the “upcalls” of 13 whales whose vocalizations had been collected from suction cup sensors attached to their backs. An upcall is a contact vocalization that lasts about 1 to 2 seconds and rises in frequency, sounding somewhat like a deep-throated cow’s moo. Researchers think the whales use the calls to announce themselves and to “touch base” with others of their kind, they explained in a poster presented today at the Meeting of the Acoustical Society of America in Pittsburgh, Pennsylvania. After analyzing the duration and harmonic frequency of these upcalls, as well as the rate at which the frequencies changed, the scientists found that they could distinguish the voices of each of the 13 whales. They think their discovery will provide a new tool for tracking and monitoring the critically endangered whales, which number about 450 and range primarily from Florida to Newfoundland. © 2015 American Association for the Advancement of Science.