Links for Keyword: Hearing



Links 1 - 20 of 625

By Susan Milius. Sonar pings from a hungry bat closing in can inspire hawkmoths to get their genitals trilling. The ultrasonic “eeeee” of scraping moth sex organs may serve as a last-second acoustic defense, says behavioral ecologist Jesse Barber of Boise State University in Idaho. In theory, the right squeak could jam bats’ targeting sonar, remind them of a noisy moth that tasted terrible or just startle them enough for the hawkmoth to escape. Males of at least three hawkmoth species in Malaysia squeak in response to recorded echolocation sounds of the final swoop in a bat attack, Barber and Akito Kawahara of the University of Florida in Gainesville report July 3 in Biology Letters. Female hawkmoths are hard to catch, but the few Barber and Kawahara have tested squeak too. Although they’re the same species as the males, they use their genitals in a different way to make ultrasound. Squeak power may have arisen during courtship and later proved useful during attacks. Until now, researchers knew of only two insect groups that talk back to bats: some tiger moths and tiger beetles. Neither is closely related to hawkmoths, so Barber speculates that anti-bat noises might be widespread among insects. Slowed-down video shows first the male and then the female hawkmoth creating ultrasonic trills at the tips of their abdomens. Males use a pair of claspers that grasp females in mating. To sound off, these quickly slide in and out of the abdomen, rasping specialized scales against the sides. Females rub the left and right sides of their abdominal structures together. J. Barber and A.Y. Kawahara. Hawkmoths produce anti-bat ultrasound. Biology Letters. Posted July 3, 2013. doi: 10.1098/rsbl.2013.0161. © Society for Science & the Public 2000 - 2017.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23864 - Posted: 07.24.2017

By Aylin Woodward. See, hear. Our eardrums appear to move to shift our hearing in the same direction as our eyes are looking. Why this happens is unclear, but it may help us work out which objects we see are responsible for the sounds we can hear. Jennifer Groh at Duke University in Durham, North Carolina, and her team have been using microphones inserted into people’s ears to study how their eardrums change during saccades – the movements that occur when we shift visual focus from one place to another. You won’t notice it, but our eyes go through several saccades a second to take in our surroundings. Examining 16 people, the team detected changes in ear canal pressure that were probably caused by middle-ear muscles tugging on the eardrum. These pressure changes indicate that when we look left, for example, the drum of our left ear gets pulled further into the ear and that of our right ear pushed out, before they both swing back and forth a few times. These changes to the eardrums began as early as 10 milliseconds before the eyes even started to move, and continued for a few tens of milliseconds after the eyes stopped. “We think that before actual eye movement occurs, the brain sends a signal to the ear to say ‘I have commanded the eyes to move 12 degrees to the right’,” says Groh. The eardrum movements that follow the change in focus may prepare our ears to hear sounds from a particular direction. © Copyright New Scientist Ltd.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 7: Vision: From Eye to Brain
Link ID: 23860 - Posted: 07.22.2017

By Nicola Davis. People who experience hearing loss could be at greater risk of memory and thinking problems later in life than those without auditory issues, research suggests. The study focused on people who were at risk of Alzheimer’s disease, revealing that those who were diagnosed with hearing loss had a higher risk of “mild cognitive impairment” four years later. “It’s really not mild,” said Clive Ballard, professor of age-related disease at the University of Exeter. “They are in the lowest 5% of cognitive performance and about 50% of those individuals will go on to develop dementia.” In research presented at the Alzheimer’s Association International Conference in London, a US team examined the memory and thinking skills of 783 cognitively healthy participants in late middle age, more than two-thirds of whom had at least one parent who had been diagnosed with Alzheimer’s disease. The team carried out a range of cognitive tests on the participants over a four-year period, aimed at probing memory and mental processing. Once a variety of other risk factors were taken into account, those who had hearing loss at the start of the study were more than twice as likely as those with no auditory problems to be found to have mild cognitive impairment four years later. Taylor Fields, a PhD student at the University of Wisconsin who led the research, said the findings suggest hearing loss could be an early warning sign that an individual might be at greater risk of future cognitive impairment, but added that more research was necessary to unpick the link. “There is something here and it should be looked into,” she said. © 2017 Guardian News and Media Limited

Related chapters from BP7e: Chapter 7: Life-Span Development of the Brain and Behavior; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 13: Memory, Learning, and Development; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23840 - Posted: 07.17.2017

By Mo Costandi. You can’t teach an old dog new tricks—or can you? Textbooks tell us that early infancy offers a narrow window of opportunity during which sensory experience shapes the way neuronal circuits wire up to process sound and other inputs. A lack of proper stimulation during this “critical period” has a permanent and detrimental effect on brain development. But new research shows the auditory system in the adult mouse brain can be induced to revert to an immature state similar to that in early infancy, improving the animals’ ability to learn new sounds. The findings, published Thursday in Science, suggest potential new ways of restoring brain function in human patients with neurological diseases—and of improving adults’ ability to learn languages and musical instruments. In mice, a critical period occurs during which neurons in a portion of the brain’s wrinkled outer surface, the cortex, are highly sensitized to processing sound. This state of plasticity allows them to strengthen certain connections within brain circuits, fine-tuning their auditory responses and enhancing their ability to discriminate between different tones. In humans, a comparable critical period may mark the beginning of language acquisition. But heightened plasticity declines rapidly, and this decline continues throughout life, making it increasingly difficult to learn. In 2011 Jay Blundon, a developmental neurobiologist at St. Jude Children's Research Hospital, and his colleagues reported that the critical periods for circuits connecting the auditory cortex and the thalamus occur at about the same time. © 2017 Scientific American

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 23793 - Posted: 06.30.2017

By Elizabeth Hellmuth Margulis. Whether tapping a foot to samba or weeping at a ballad, the human response to music seems almost instinctual. Yet few can articulate how music works. How do strings of sounds trigger emotion, inspire ideas, even define identities? Cognitive scientists, anthropologists, biologists and musicologists have all taken a crack at that question (see go.nature.com/2sdpcb5), and it is into this line that Adam Ockelford steps. Comparing Notes draws on his experience as a composer, pianist, music researcher and, most notably, a music educator working for decades with children who have visual impairments or are on the autistic spectrum, many with extraordinary musical abilities. Through this “prism of the overtly remarkable”, Ockelford seeks to shed light on music perception and cognition in all of us. Existing models based on neurotypical children could overlook larger truths about the human capacity to learn and make sense of music, he contends. Some of the children described in Comparing Notes might (for a range of reasons) have trouble tying their shoelaces or carrying on a basic conversation. Yet before they hit double digits in age, they can hear a complex composition for the first time and immediately play it on the piano, their fingers flying to the correct notes. This skill, Ockelford reminds us, eludes many adults with whom he studied at London's Royal Academy of Music. Weaving together the strands that let these children perform such stunning feats, Ockelford constructs an argument for rethinking conventional wisdom on music education. He positions absolute pitch (AP) as central to these abilities to improvise, listen and play. © 2017 Macmillan Publishers Limited

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23744 - Posted: 06.15.2017

By Paula Span. A few years hence, when you’ve finally tired of turning up the TV volume and making dinner reservations at 5:30 p.m. because any later and the place gets too loud, you may go shopping. Perhaps you’ll head to a local boutique called The Hear Better Store, or maybe Didja Ear That? (Reader nominees for kitschy names invited.) Maybe you’ll opt for a big-box retailer or a kiosk at your local pharmacy. If legislation now making its way through Congress succeeds, these places will all offer hearing aids. You’ll try out various models — they’ll all meet newly established federal requirements — to see what seems to work and feel best. Your choices might include products from big consumer electronics specialists like Apple, Samsung and Bose. If you want assistance, you might pay an audiologist to provide customized services, like adjusting frequencies or amplification levels. But you won’t need to go through an audiologist-gatekeeper, as you do now, to buy hearing aids. The best part of this over-the-counter scenario: Instead of spending an average of $1,500 to $2,000 per device (and nearly everyone needs two), you’ll find that the price has plummeted. You might pay $300 per ear, maybe even less. So many people will be using these new over-the-counter hearing aids — along with the hordes wearing earbuds for other reasons — that you won’t feel self-conscious. You’ll blend right in. That, at least, represents the future envisioned by supporters of the Over-the-Counter Hearing Aid Act of 2017, which would give the Food and Drug Administration three years to create a regulatory category for such devices and to establish standards for safety, effectiveness and labeling.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23736 - Posted: 06.13.2017

By Lore Thaler and Liam Norman. Echolocation is probably most associated with bats and dolphins. These animals emit bursts of sounds and listen to the echoes that bounce back to detect objects in their environment and to perceive properties of the objects (e.g. location, size, material). Bats, for example, can tell the distance of objects with high precision using the time delay between emission and echo, and are able to determine a difference in distance as small as one centimeter. This is needed for them to be able to catch insects in flight. People, remarkably, can also echolocate. By making mouth clicks, for example, and listening for the returning echoes, they can perceive their surroundings. Humans, of course, cannot hear ultrasound, which may put them at a disadvantage. Nonetheless, some people have trained themselves to an extraordinary level. Daniel Kish, who is blind and is a well-known expert echolocator, is able to ride his bicycle, hike in unfamiliar terrain, and travel in unfamiliar cities on his own. Daniel is the founder and president of World Access for the Blind, a non-profit charity in the US that offers training in echolocation alongside training in other mobility techniques such as the long cane. Since 2011, scientific interest in human echolocation has gained momentum. For example, technical advances have made it feasible to scan people’s brains while they echolocate. This research has shown that people who are blind and have expertise in echolocation use ‘visual’ parts of their brain to process information from echoes. It has also been found that anyone with normal hearing can learn to use echoes to determine the sizes, locations, or distances of objects, or to avoid obstacles during walking. Remarkably, both blind and sighted people can improve their ability to interpret and use sound echoes within a session or two. © 2017 Scientific American
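As an aside, the timing precision behind echolocation is easy to appreciate with a quick back-of-the-envelope calculation (a sketch added for illustration, using the standard figure of roughly 343 m/s for the speed of sound in air; the code is not from the article):

```python
# Round-trip delay between emitting a sonar call and hearing its echo,
# assuming sound travels at about 343 m/s in air (~20 degrees C).
SPEED_OF_SOUND_M_S = 343.0

def echo_delay_s(distance_m: float) -> float:
    """Seconds from call emission to echo return for a target
    distance_m metres away (the sound travels out and back)."""
    return 2.0 * distance_m / SPEED_OF_SOUND_M_S

# A one-centimetre difference in target distance shifts the echo
# delay by only tens of microseconds -- the timing scale a bat,
# or a trained human echolocator, has to work with.
delta_s = echo_delay_s(1.01) - echo_delay_s(1.00)
print(f"{delta_s * 1e6:.1f} microseconds")  # ~58.3
```

The factor of two is the only subtlety: the call must reach the target and the echo must return, so each centimetre of distance costs two centimetres of travel.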

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23570 - Posted: 05.04.2017

By Chris Baraniuk. Bat-detecting drones could help us find out what the animals get up to when flying. Ultrasonic detectors on drones in the air and on the water are listening in on bat calls, in the hope of discovering more about the mammals’ lives beyond the reach of ground-based monitoring devices. Drone-builder Tom Moore and bat enthusiast Tom August have developed three different drones to listen for bat calls while patrolling a pre-planned route. Since launching the scheme, known as Project Erebus, in 2014, they have experimented with two flying drones and one motorised boat, all equipped with ultrasonic detectors. The pair’s latest tests have demonstrated the detection capabilities of the two airborne drone models: a quadcopter and a fixed-wing drone. Last month, the quadcopter successfully followed a predetermined course and picked up simulated bat calls produced by an ultrasonic transmitter. Moore says one of the major hurdles is detecting the call of bats over the noise of the drones’ propellers, which emit loud ultrasonic frequencies. They overcame this with the quadcopter by dangling the detector underneath the body and rotors of the drone. This is not such a problem for the water-based drone. Last year, Moore and August tested a remote-controlled boat in Oxfordshire, UK, and picked up bat calls thought to belong to common pipistrelle and Daubenton’s bats. The different species often emit different ultrasonic frequencies. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23524 - Posted: 04.22.2017

By C. Claiborne Ray. The yellow stuff in the outer part of the ear canal, scientifically named cerumen, is only partly a waxy substance, according to the National Institute on Deafness and Other Communication Disorders. The rest of the so-called wax is an accretion of some dust and lots of dead skin cells, which normally collect in the passage as they are shed. The waxy part, which holds the compacted waste together and smooths the way for it to leave the ear, comes from the ceruminous glands, which secrete lipids and other substances. They are specialized sweat glands just under the surface of the skin in the outer part of the canal. Besides lubricating the skin of the canal while keeping it dry, the lipids also help maintain a protective acidic coating, which helps kill bacteria and fungi that can cause infection and irritation. The normal working of muscles in the head, especially those that move the jaw, helps guide the wax outward along the ear canal. The ceruminous glands commonly shrink in old age, producing less of the lipids and making it harder for waste to leave the ear. Excess wax buildup can usually be safely softened with warm olive or almond oil or irrigated with warm water, though specialized softening drops are also sold. Take care not to compress the buildup further with cotton swabs or other tools. If it cannot be safely removed, seek medical help. © 2017 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23472 - Posted: 04.11.2017

By David Owen. When my mother’s mother was in her early twenties, a century ago, a suitor took her duck hunting in a rowboat on a lake near Austin, Texas, where she grew up. He steadied his shotgun by resting the barrel on her right shoulder—she was sitting in the bow—and when he fired he not only missed the duck but also permanently damaged her hearing, especially on that side. The loss became more severe as she got older, and by the time I was in college she was having serious trouble with telephones. (“I’m glad it’s not raining!” I’d shout, for the third or fourth time, while my roommates snickered.) Her deafness probably contributed to one of her many eccentricities: ending phone conversations by suddenly hanging up. I’m a grandparent myself now, and lots of people I know have hearing problems. A guy I played golf with last year came close to making a hole in one, then complained that no one in our foursome had complimented him on his shot—even though, a moment before, all three of us had complimented him on his shot. (We were walking behind him.) The man who cuts my wife’s hair began wearing two hearing aids recently, to compensate for damage that he attributes to years of exposure to professional-quality blow-dryers. My sister has hearing aids, too. She traces her problem to repeatedly listening at maximum volume to Anne’s Angry and Bitter Breakup Song Playlist, which she created while going through a divorce. My ears ring all the time—a condition called tinnitus. I blame China, because the ringing started, a decade ago, while I was recovering from a monthlong cold that I’d contracted while breathing the filthy air in Beijing, and whose symptoms were made worse by changes in cabin pressure during the long flight home. Tinnitus is almost always accompanied by hearing loss. My internist ordered an MRI, to make sure I didn’t have a brain tumor, and held up a vibrating tuning fork and asked me to tell him when I could no longer hear it.
After a while, he leaned forward to make sure the tuning fork was still humming, since he himself could no longer hear it. (We’re about the same age.) © 2017 Condé Nast.

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23434 - Posted: 03.31.2017

By Catherine Offord. Recognizing when you’re singing the right notes is a crucial skill for learning a melody, whether you’re a human practicing an aria or a bird rehearsing a courtship song. But just how the brain executes this sort of trial-and-error learning, which involves comparing performances to an internal template, is still something of a mystery. “It’s been an important question in the field for a long time,” says Vikram Gadagkar, a postdoctoral neurobiologist in Jesse Goldberg’s lab at Cornell University. “But nobody’s been able to find out how this actually happens.” Gadagkar suspected, as others had hypothesized, that internally driven learning might rely on neural mechanisms similar to traditional reward learning, in which an animal learns to anticipate a treat based on a particular stimulus. When an unexpected outcome occurs (such as receiving no treat when one was expected), the brain takes note via changes in dopamine signaling. So Gadagkar and his colleagues investigated dopamine signaling in a go-to system for studying vocal learning, male zebra finches. First, the researchers used electrodes to record the activity of dopaminergic neurons in the ventral tegmental area (VTA), a brain region important in reward learning. Then, to mimic singing errors, they used custom-written software to play over, and thus distort, certain syllables of that finch’s courtship song while the bird practiced. “Let’s say the bird’s song is ABCD,” says Gadagkar. “We distort one syllable, so it sounds like something between ABCD and ABCB.” © 1986-2017 The Scientist

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 8: Hormones and Sex
Link ID: 23426 - Posted: 03.30.2017

By Tim Falconer (House of Anansi, May 2016). I’ve spent my career bothering people. As a journalist and author, I hang around and watch what folks do, and I ask too many questions, some better than others. Later, I have follow-up queries and clarification requests, and I bug them for those stats they promised to provide me. But something different happened when I started researching congenital amusia, the scientific term for tone deafness present at birth, for my new book, Bad Singer. The scientists were as interested in me as I was in them. My idea was to learn to sing and then write about the experience as a way to explore the science of singing. After my second voice lesson, I went to the Université de Montréal’s International Laboratory for Brain, Music, and Sound Research (BRAMS). I fully expected Isabelle Peretz, a pioneer in amusia research, to say I was just untrained. Instead, she diagnosed me as amusic. “So this means what?” I asked. “We would love to test you more.” The BRAMS researchers weren’t alone. While still at Harvard’s Music and Neuroimaging Lab, Psyche Loui—who now leads Wesleyan University’s Music, Imaging, and Neural Dynamics (MIND) Lab—identified a neural pathway called the arcuate fasciculus as the culprit of congenital amusia. So I emailed her to set up an interview. She said sure—and asked if I’d be willing to undergo an fMRI scan. And I’d barely started telling my story to Frank Russo, who runs Ryerson University’s Science of Music, Auditory Research, and Technology (SMART) Lab in Toronto, before he blurted out, “Sorry, I’m restraining myself from wanting to sign you up for all kinds of research and figuring what we can do with you.” © 1986-2017 The Scientist

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23425 - Posted: 03.30.2017

By Bob Grant. In the past decade, some bat species have been added to the ranks of “singing” animals, with complex, mostly ultrasonic vocalizations that, when slowed down, rival the tunes of some songbirds. Like birds, bats broadcast chirps, warbles, and trills to attract mates and defend territories. There are about 1,300 known bat species, and the social vocalizations of about 50 have been studied. Of those, researchers have shown that about 20 species seem to be singing, with songs that are differentiated from simpler calls by both their structural complexity and their function. Bats don’t sound like birds to the naked ear; most singing species broadcast predominantly in the ultrasonic range, undetectable by humans. And in contrast to the often lengthy songs of avian species, the flying mammals sing in repeated bursts of only a few hundred milliseconds. Researchers must first slow down the bat songs—so that their frequencies drop into the audible range—to hear the similarities. Kirsten Bohn, a behavioral biologist at Johns Hopkins University, first heard Brazilian free-tailed bats (Tadarida brasiliensis) sing more than 10 years ago, when she was a postdoc in the lab of Mike Smotherman at Texas A&M University. “I started hearing a couple of these songs slowed down,” she recalls. “And it really was like, ‘Holy moly—that’s a song! That sounds like a bird.’” The neural circuitry used to learn and produce song may also share similarities between bats and birds. Bohn and Smotherman say they’ve gathered some tantalizing evidence that bats use some of the same brain regions—namely, the basal ganglia and prefrontal cortex—that birds rely upon to produce, process, and perhaps even learn songs. “We have an idea of how the neural circuits control vocalizing in the bats and how they might be adapted to produce song,” Smotherman says. © 1986-2017 The Scientist

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23369 - Posted: 03.17.2017

By Catherine Offord. A few years ago, UK composer and technology reporter LJ Rich participated in a music technology competition as part of a project with the BBC. The 24-hour event brought together various musicians, and entailed staying awake into the wee hours trying to solve technical problems related to music. Late into the night, during a break from work, Rich thought of a way to keep people’s spirits up. “At about four in the morning, I remember playing different tastes to people on a piano in the room we were working in,” she says. For instance, “to great amusement, during breakfast I played people the taste of eggs.” It didn’t take long before Rich learned, for the first time, that food’s association with music was not as universally appreciated as she had assumed. “You realize everybody else doesn’t perceive the world that way,” she says. “For me, it was quite a surprise to find that people didn’t realize that certain foods had different keys.” Rich had long known she had absolute pitch—the ability to identify a musical note, such as B flat, without any reference. But that night, she learned she also has what’s known as synesthesia, a little-understood mode of perception that links senses such as taste and hearing in unusual ways, and is thought to be present in around 4 percent of the general population. It’s a difficult phenomenon to get to the bottom of. Like Rich, many synesthetes are unaware their perception is atypical; what’s more, detecting synesthesia usually relies on self-reported experiences—an obstacle for standardized testing. But a growing body of evidence suggests that Rich is far from being alone in possessing both absolute pitch and synesthesia. © 1986-2017 The Scientist

Related chapters from BP7e: Chapter 8: General Principles of Sensory Processing, Touch, and Pain; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 5: The Sensorimotor System; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23353 - Posted: 03.14.2017

By Diana Kwon. Deep in the Amazon rainforests of Bolivia live the Tsimane’, a tribe that has remained relatively untouched by Western civilization. Tsimane’ people possess a unique characteristic: they do not cringe at musical tones that sound discordant to Western ears. The vast majority of Westerners prefer consonant chords to dissonant ones, based on the intervals between the musical notes that compose the chords. One particularly notable example of this is the Devil’s Interval, or flatted fifth, which received its name in the Middle Ages because the sound it produced was deemed so unpleasant that people associated it with sinister forces. The flatted fifth later became a staple of numerous jazz, blues, and rock-and-roll songs. Over the years, scientists have gathered compelling evidence to suggest that an aversion to dissonance is innate. In 1996, in a letter to Nature, Harvard psychologists Marcel Zentner and Jerome Kagan reported on a study suggesting that four-month-old infants preferred consonant intervals to dissonant ones. Researchers subsequently replicated these results: one lab discovered the same effect in two-month-olds and another in two-day-old infants of both deaf and hearing parents. Some scientists even found these preferences in certain animals, such as young chimpanzees and baby chickens. “Of course the ambiguity is [that] even young infants have quite a bit of exposure to typical Western music,” says Josh McDermott, a researcher who studies auditory cognition at MIT. “So the counter-argument is that they get early exposure, and that shapes their preference.” © 1986-2017 The Scientist

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 11: Emotions, Aggression, and Stress
Link ID: 23345 - Posted: 03.11.2017

By Aylin Woodward. Noise is everywhere, but that’s OK. Your brain can still keep track of a conversation in the face of revving motorcycles, noisy cocktail parties or screaming children – in part by predicting what’s coming next and filling in any blanks. New data suggests that these insertions are processed as if the brain had really heard the parts of the word that are missing. “The brain has evolved a way to overcome interruptions that happen in the real world,” says Matthew Leonard at the University of California, San Francisco. We’ve known since the 1970s that the brain can “fill in” inaudible sections of speech, but understanding how it achieves this phenomenon – termed perceptual restoration – has been difficult. To investigate, Leonard’s team played volunteers words that were partially obscured or inaudible to see how their brains responded. The experiment involved people who already had hundreds of electrodes implanted into their brain to monitor their epilepsy. These electrodes detect seizures, but can also be used to record other types of brain activity. The team played the volunteers recordings of a word that could either be “faster” or “factor”, with the middle sound replaced by noise. Data from the electrodes showed that their brains responded as if they had actually heard the missing “s” or “c” sound. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23344 - Posted: 03.11.2017

By Catherine Offord. Getting to Santa María, Bolivia, is no easy feat. Home to a farming and foraging society, the village is located deep in the Amazon rainforest and is accessible only by river. The area lacks electricity and running water, and the Tsimane’ people who live there make contact with the outside world only occasionally, during trips to neighboring towns. But for auditory researcher Josh McDermott, this remoteness was central to the community’s scientific appeal. In 2015, the MIT scientist loaded a laptop, headphones, and a gasoline generator into a canoe and pushed off from the Amazonian town of San Borja, some 50 kilometers downriver from Santa María. Together with collaborator Ricardo Godoy, an anthropologist at Brandeis University, McDermott planned to carry out experiments to test whether the Tsimane’ could discern certain combinations of musical tones, and whether they preferred some over others. The pair wanted to address a long-standing question in music research: Are the features of musical perception seen across cultures innate, or do similarities in preferences observed around the world mirror the spread of Western culture and its (much-better-studied) music? “Particular musical intervals are used in Western music and in other cultures,” McDermott says. “They don’t appear to be random—some are used more commonly than others. The question is: What’s the explanation for that?” © 1986-2017 The Scientist

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23332 - Posted: 03.09.2017

By Jia Naqvi He loves dancing to songs such as Michael Jackson's "Beat It" and the "Macarena," but he can't listen to music in the usual way. He laughs whenever someone takes his picture with a camera flash, which is the only intensity of light he can perceive. He loves trying to balance himself, but his legs don't allow him to walk without support. He is one in a million, literally. Born deaf-blind and with a condition, osteopetrosis, that makes bones both dense and fragile, 6-year-old Orion Theodore Withrow is among an unknown number of children with a newly identified genetic disorder that researchers are just beginning to decipher. It goes by an acronym, COMMAD, that gives little away until each letter is explained, revealing an array of problems that also affect eye formation and pigmentation in the eyes, skin and hair. The rare disorder severely impairs a person's ability to communicate. Children such as Orion, who are born to genetically deaf parents, are at higher risk, according to a recent study published in the American Journal of Human Genetics. The finding has important implications for the deaf community, said its senior author, Brian Brooks, clinical director and chief of the Pediatric, Developmental and Genetic Ophthalmology Section at the National Eye Institute. "It is relatively common for folks in the deaf community to marry each other," he said, and what's key is whether each member of the couple carries a specific genetic "misspelling" that causes a syndrome called Waardenburg 2A. If so, the child may inherit the mutation from both parents. The result, researchers found, is COMMAD. © 1996-2017 The Washington Post

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory, Learning, and Development
Link ID: 23317 - Posted: 03.06.2017

By Lenny Bernstein Forty million American adults have lost some hearing because of noise, and half of them suffered the damage outside the workplace, from everyday exposure to leaf blowers, sirens, rock concerts and other loud sounds, the Centers for Disease Control and Prevention reported Tuesday. A quarter of people ages 20 to 69 were suffering some hearing deficits, the CDC reported in its Morbidity and Mortality Weekly Report, even though the vast majority of the people in the study claimed to have good or excellent hearing. The researchers found that 24 percent of adults had “audiometric notches” — a deterioration in the softest sound a person can hear — in one or both ears. The data came from 3,583 people who had undergone hearing tests and reported the results in the 2011-2012 National Health and Nutrition Examination Survey (NHANES). The review's more surprising finding — which the CDC had not previously studied — was that 53 percent of those people said they had no regular exposure to loud noise at work. That means the hearing loss was caused by other environmental factors, including listening to music through headphones with the volume turned up too high. “Noise is damaging hearing before anyone notices or diagnoses it,” said Anne Schuchat, the CDC's acting director. “Because of that, the start of hearing loss is underrecognized.” The study revealed that 19 percent of people between the ages of 20 and 29 had some hearing loss, a finding that Schuchat called alarming. © 1996-2017 The Washington Post

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23197 - Posted: 02.08.2017

By JANE E. BRODY Dizziness is not a disease but rather a symptom that can result from a huge variety of underlying disorders or, in some cases, no disorder at all. Readily determining its cause and how best to treat it — or whether to let it resolve on its own — can depend on how well patients are able to describe exactly how they feel during a dizziness episode and the circumstances under which it usually occurs. For example, I recently experienced a rather frightening attack of dizziness, accompanied by nausea, at a food and beverage tasting event where I ate much more than I usually do. Suddenly feeling that I might faint at any moment, I lay down on a concrete balcony for about 10 minutes until the disconcerting sensations passed, after which I felt completely normal. The next morning I checked the internet for my symptom — dizziness after eating — and discovered the condition had a name: Postprandial hypotension, a sudden drop in blood pressure when too much blood is diverted to the digestive tract, leaving the brain relatively deprived. The condition most often affects older adults who may have an associated disorder like diabetes, hypertension or Parkinson’s disease that impedes the body’s ability to maintain a normal blood pressure. Fortunately, I am thus far spared any disorder linked to this symptom, but I’m now careful to avoid overeating lest it happen again. “An essential problem is that almost every disease can cause dizziness,” say two medical experts who wrote a comprehensive new book, “Dizziness: Why You Feel Dizzy and What Will Help You Feel Better.” Although the vast majority of patients seen at dizziness clinics do not have a serious health problem, the authors, Dr. Gregory T. Whitman and Dr. Robert W. Baloh, emphasize that doctors must always “be on the alert for a serious disease presenting as ‘dizziness,’” like “stroke, transient ischemic attacks, multiple sclerosis and brain tumors.” © 2017 The New York Times Company

Related chapters from BP7e: Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 23196 - Posted: 02.07.2017