Chapter 9. Hearing, Vestibular Perception, Taste, and Smell

By Bob Holmes As soon as I decided to write a book on the science of flavor, I knew I wanted to have myself genotyped. Every one of us, I learned through my preliminary research for Flavor: The Science of Our Most Neglected Sense, probably has a unique set of genes for taste and odor receptors. So each person lives in their own flavor world. I wanted to know what my genes said about my own world. Sure enough, there was a lesson there—but not the one I expected. Our senses of smell and taste detect chemicals in the environment as they bind to receptors on the olfactory epithelium in the nose or on taste buds studding the mouth. From these two inputs, plus a few others, the brain assembles the compound perception we call flavor. Taste is pretty simple: basically, one receptor type each for sweet, sour, salty, and the savory taste called umami, and a family of maybe 20 or more bitter receptors, each of which is sensitive to different chemicals. Smell, on the other hand, relies on more than 400 different odor receptor types, the largest gene family in the human genome. Variation in any of these genes—and, probably, many other genes that affect the pathways involved in taste or smell—should affect how we perceive the flavors of what we eat and drink. Hence the genotyping. One April morning a few years ago, I drooled into a vial and sent that DNA sample off to the Monell Chemical Senses Center in Philadelphia, home to what is likely the world’s biggest research group dedicated to the basic science of flavor. A few months later, I visited Monell to take a panel of perceptual tests and compare the results to my genetic profile. © 1986-2017 The Scientist

Keyword: Chemical Senses (Smell & Taste)
Link ID: 23652 - Posted: 05.24.2017

By Lindzi Wessel You’ve probably heard that your sense of smell isn’t that great. After all, compared with a dog or even a mouse, the human olfactory system doesn’t take up that much space. And when was the last time you went sniffing the ground alongside your canine companion? But now, in a new review published today in Science, neuroscientist John McGann of Rutgers University in New Brunswick, New Jersey, argues that the myth of the nonessential nose is a huge mistake—one that has led scientists to neglect research in a critical and mysterious part of our minds. Science checked in with McGann to learn more about why he thinks our noses know more than we realize. Q: Many of us assume our sense of smell is terrible, especially compared with other animals. Where did this idea come from? A: I traced part of this history back to 19th century [anatomist and] anthropologist Paul Broca, who was interested in comparing brains across lots of different animals. Compared to the olfactory bulbs [the first stop for smell signals in the brain], the rest of the human brain is very large. So if you look at whole brains, the bulbs look like these tiny afterthoughts; if you look at a mouse or a rat, the olfactory bulb seems quite big. You can almost forgive Broca for thinking that they didn't matter because they look so small comparatively. Broca believed that a key part of having free will was not being forced to do things by odors. And he thought of smell as this almost dirty, animalistic thing that compelled behaviors—it compelled animals to have sex with each other and things like that. So he put humans in the nonsmeller category—not because they couldn't smell, but because we had free will and could decide how to respond to smells. The idea also got picked up by Sigmund Freud, who then thought of smell as an animalistic thing that had to be left behind as a person grew into a rational adult. So you had in psychology, philosophy, and anthropology all these different pathways leading to the presumption that humans didn't have a good sense of smell. © 2017 American Association for the Advancement of Science

Keyword: Chemical Senses (Smell & Taste)
Link ID: 23604 - Posted: 05.12.2017

By Kerry Grens In June of 2014, Pablo Meyer went to Rockefeller University in New York City to give a talk about open data. He leads the Translational Systems Biology and Nanobiotechnology group at IBM Research and also guides so-called DREAM challenges, or Dialogue for Reverse Engineering Assessments and Methods. These projects crowdsource the development of algorithms from open data to make predictions for all manner of medical and biological problems—for example, prostate cancer survival or how quickly ALS patients’ symptoms will progress. Andreas Keller, a neuroscientist at Rockefeller, was in the audience that day, and afterward he emailed Meyer with an offer and a request. “He said, ‘We have this data set, and we don’t model,’” recalls Meyer. “‘Do you think you could organize a competition?’” The data set Keller had been building was far from ordinary. It was the largest collection of odor perceptions of its kind—dozens of volunteers, each having made 10 visits to the lab, described 476 different smells using 19 descriptive words (including sweet, urinous, sweaty, and warm), along with the pleasantness and intensity of the scent. Before Keller’s database, the go-to catalog at researchers’ disposal was a list of 10 odor compounds, described by 150 participants using 146 words, which had been developed by pioneering olfaction scientist Andrew Dravnieks more than three decades earlier. Meyer was intrigued, so he asked Keller for the data. Before launching a DREAM challenge, Meyer has to ensure that the raw data provided to competitors do indeed reflect some biological phenomenon. In this case, he needed to be sure that algorithms could determine what a molecule might smell like when only its chemical characteristics were fed in. There were more than 4,800 molecular features for each compound, including structural properties, functional groups, chemical compositions, and the like. “We developed a simple linear model just to see if there’s a signal there,” Meyer says. “We were very, very surprised we got a result. We thought there was a bug.” © 1986-2017 The Scientist
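
The sanity check Meyer describes, fitting a simple linear model to see whether a molecule's chemical features carry any signal about its perceived smell, can be sketched roughly as follows. This is only an illustrative Python sketch: the synthetic data, the choice of four descriptors, and the ridge variant of a linear model are assumptions, not the actual DREAM challenge code.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Sizes quoted in the article: 476 smelled compounds, over 4,800 molecular features each.
    n_molecules, n_features = 476, 4800
    descriptors = ["sweet", "urinous", "sweaty", "warm"]   # 4 of the 19 rating words

    X = rng.normal(size=(n_molecules, n_features))                 # placeholder chemical features
    y = rng.uniform(0, 100, size=(n_molecules, len(descriptors)))  # placeholder perceptual ratings

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # One linear mapping per descriptor; held-out R^2 clearly above chance would suggest a real signal.
    model = Ridge(alpha=10.0).fit(X_train, y_train)
    print("held-out R^2:", model.score(X_test, y_test))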

Keyword: Chemical Senses (Smell & Taste); Robotics
Link ID: 23592 - Posted: 05.09.2017

Natalie Jacewicz Sometimes people develop strange eating habits as they age. For example, Amy Hunt, a stay-at-home mom in Austin, Texas, says her grandfather cultivated some unusual taste preferences in his 80s. "I remember teasing him because he literally put ketchup or Tabasco sauce on everything," says Hunt. "When we would tease him, he would shrug his shoulders and just say he liked it." But Hunt's father, a retired registered nurse, had a theory: Her grandfather liked strong flavors because of his old age and its effects on taste. When people think about growing older, they may worry about worsening vision and hearing. But they probably don't think to add taste and smell to the list. "You lose all your senses as you get older, except hopefully not your sense of humor," says Steven Parnes, an ENT-otolaryngologist (ear, nose and throat doctor) working in Albany, N.Y. To understand how aging changes taste, a paean to the young tongue might be appropriate. The average person is born with roughly 9,000 taste buds, according to Parnes. Each taste bud is a bundle of sensory cells, grouped together like the tightly clumped petals of a flower bud. These taste buds cover the tongue and send taste signals to the brain through nerves. Taste buds vary in their sensitivity to different kinds of tastes. Some will be especially good at sensing sweetness, while others will be especially attuned to bitter flavors, and so on. © 2017 npr

Keyword: Chemical Senses (Smell & Taste); Development of the Brain
Link ID: 23577 - Posted: 05.05.2017

By Lore Thaler, Liam Norman Echolocation is probably most associated with bats and dolphins. These animals emit bursts of sounds and listen to the echoes that bounce back to detect objects in their environment and to perceive properties of the objects (e.g. location, size, material). Bats, for example, can tell the distance of objects with high precision using the time delay between emission and echo, and are able to determine a difference in distance as small as one centimeter. This is needed for them to be able to catch insects in flight. People, remarkably, can also echolocate. By making mouth clicks, for example, and listening for the returning echoes, they can perceive their surroundings. Humans, of course, cannot hear ultrasound, which may put them at a disadvantage. Nonetheless, some people have trained themselves to an extraordinary level. Daniel Kish, who is blind and is a well-known expert echolocator, is able to ride his bicycle, hike in unfamiliar terrain, and travel in unfamiliar cities on his own. Daniel is the founder and president of World Access for the Blind, a non-profit charity in the US that offers training in echolocation alongside training in other mobility techniques such as the long cane. Since 2011, the scientific interest in human echolocation has gained momentum. For example, technical advances have made it feasible to scan people’s brains while they echolocate. This research has shown that people who are blind and have expertise in echolocation use ‘visual’ parts of their brain to process information from echoes. It has also been found that anyone with normal hearing can learn to use echoes to determine the sizes, locations, or distances of objects, or to avoid obstacles during walking. Remarkably, both blind and sighted people can improve their ability to interpret and use sound echoes within a session or two. © 2017 Scientific American
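
The one-centimeter resolution mentioned above follows from the round-trip geometry of an echo: the delay equals twice the distance divided by the speed of sound. A minimal worked sketch in Python, assuming sound in air at roughly 343 m/s:

    SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C (assumed value)

    def echo_range(delay_s):
        """Distance to a reflecting object, given the round-trip delay between emission and echo."""
        return SPEED_OF_SOUND * delay_s / 2.0

    print(echo_range(0.010))            # a 10 ms delay puts the object about 1.7 m away
    print(2 * 0.01 / SPEED_OF_SOUND)    # resolving a 1 cm change in distance means resolving ~58 microseconds of delay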

Keyword: Hearing
Link ID: 23570 - Posted: 05.04.2017

By Chris Baraniuk Bat-detecting drones could help us find out what the animals get up to when flying. Ultrasonic detectors on drones in the air and on the water are listening in on bat calls, in the hope of discovering more about the mammals’ lives beyond the reach of ground-based monitoring devices. Drone-builder Tom Moore and bat enthusiast Tom August have developed three different drones to listen for bat calls while patrolling a pre-planned route. Since launching the scheme, known as Project Erebus, in 2014, they have experimented with two flying drones and one motorised boat, all equipped with ultrasonic detectors. The pair’s latest tests have demonstrated the detection capabilities of the two airborne drone models: a quadcopter and a fixed-wing drone. Last month, the quadcopter successfully followed a predetermined course and picked up simulated bat calls produced by an ultrasonic transmitter. Moore says one of the major hurdles is detecting the call of bats over the noise of the drones’ propellers, which emit loud ultrasonic frequencies. They overcame this with the quadcopter by dangling the detector underneath the body and rotors of the drone. This is not such a problem for the water-based drone. Last year, Moore and August tested a remote-controlled boat in Oxfordshire, UK, and picked up bat calls thought to belong to common pipistrelle and Daubenton’s bats. The different species often emit different ultrasonic frequencies. © Copyright Reed Business Information Ltd.

Keyword: Hearing; Animal Migration
Link ID: 23524 - Posted: 04.22.2017

By C. Claiborne Ray. The yellow stuff in the outer part of the ear canal, scientifically named cerumen, is only partly a waxy substance, according to the National Institute on Deafness and Other Communication Disorders. The rest of the so-called wax is an accretion of some dust and lots of dead skin cells, which normally collect in the passage as they are shed. The waxy part, which holds the compacted waste together and smooths the way for it to leave the ear, comes from the ceruminous glands, which secrete lipids and other substances. They are specialized sweat glands just under the surface of the skin in the outer part of the canal. Besides lubricating the skin of the canal while keeping it dry, the lipids also help maintain a protective acidic coating, which helps kill bacteria and fungi that can cause infection and irritation. The normal working of muscles in the head, especially those that move the jaw, helps guide the wax outward along the ear canal. The ceruminous glands commonly shrink in old age, producing less of the lipids and making it harder for waste to leave the ear. Excess wax buildup can usually be safely softened with warm olive or almond oil or irrigated with warm water, though specialized softening drops are also sold. Take care not to compress the buildup further with cotton swabs or other tools. If it cannot be safely removed, seek medical help. © 2017 The New York Times Company

Keyword: Hearing
Link ID: 23472 - Posted: 04.11.2017

Jon Hamilton The U.S. military is trying to figure out whether certain heavy weapons are putting U.S. troops in danger. The concern centers on the possibility of brain injuries from shoulder-fired weapons like the Carl Gustaf, a recoilless rifle that resembles a bazooka and is powerful enough to blow up a tank. A single round for the Carl Gustaf can weigh nearly 10 pounds. The shell leaves the gun's barrel at more than 500 miles per hour. And as the weapon fires, it directs an explosive burst of hot gases out of the back of the barrel. For safety reasons, troops are trained to take positions to the side of weapons like this. Even so, they get hit by powerful blast waves coming from both the muzzle and breech. "It feels like you get punched in your whole body," is the way one Army gunner described the experience in a military video made in Afghanistan. "The blast bounces off the ground and it overwhelms you." During the wars in Iraq and Afghanistan, the military recognized that the blast from a roadside bomb could injure a service member's brain without leaving a scratch. Hundreds of thousands of U.S. troops sustained this sort of mild traumatic brain injury, which has been linked to long-term problems ranging from memory lapses to post-traumatic stress disorder. Also during those wars, the military began to consider the effects on the brain of repeated blasts from weapons like the Carl Gustaf. And some members of Congress became concerned. © 2017 npr

Keyword: Brain Injury/Concussion; Hearing
Link ID: 23451 - Posted: 04.05.2017

By David Owen When my mother’s mother was in her early twenties, a century ago, a suitor took her duck hunting in a rowboat on a lake near Austin, Texas, where she grew up. He steadied his shotgun by resting the barrel on her right shoulder—she was sitting in the bow—and when he fired he not only missed the duck but also permanently damaged her hearing, especially on that side. The loss became more severe as she got older, and by the time I was in college she was having serious trouble with telephones. (“I’m glad it’s not raining! ” I’d shout, for the third or fourth time, while my roommates snickered.) Her deafness probably contributed to one of her many eccentricities: ending phone conversations by suddenly hanging up. I’m a grandparent myself now, and lots of people I know have hearing problems. A guy I played golf with last year came close to making a hole in one, then complained that no one in our foursome had complimented him on his shot—even though, a moment before, all three of us had complimented him on his shot. (We were walking behind him.) The man who cuts my wife’s hair began wearing two hearing aids recently, to compensate for damage that he attributes to years of exposure to professional-quality blow-dryers. My sister has hearing aids, too. She traces her problem to repeatedly listening at maximum volume to Anne’s Angry and Bitter Breakup Song Playlist, which she created while going through a divorce. My ears ring all the time—a condition called tinnitus. I blame China, because the ringing started, a decade ago, while I was recovering from a monthlong cold that I’d contracted while breathing the filthy air in Beijing, and whose symptoms were made worse by changes in cabin pressure during the long flight home. Tinnitus is almost always accompanied by hearing loss. My internist ordered an MRI, to make sure I didn’t have a brain tumor, and held up a vibrating tuning fork and asked me to tell him when I could no longer hear it. After a while, he leaned forward to make sure the tuning fork was still humming, since he himself could no longer hear it. (We’re about the same age.) © 2017 Condé Nast.

Keyword: Hearing
Link ID: 23434 - Posted: 03.31.2017

By Catherine Offord | Recognizing when you’re singing the right notes is a crucial skill for learning a melody, whether you’re a human practicing an aria or a bird rehearsing a courtship song. But just how the brain executes this sort of trial-and-error learning, which involves comparing performances to an internal template, is still something of a mystery. “It’s been an important question in the field for a long time,” says Vikram Gadagkar, a postdoctoral neurobiologist in Jesse Goldberg’s lab at Cornell University. “But nobody’s been able to find out how this actually happens.” Gadagkar suspected, as others had hypothesized, that internally driven learning might rely on neural mechanisms similar to traditional reward learning, in which an animal learns to anticipate a treat based on a particular stimulus. When an unexpected outcome occurs (such as receiving no treat when one was expected), the brain takes note via changes in dopamine signaling. So Gadagkar and his colleagues investigated dopamine signaling in a go-to system for studying vocal learning, male zebra finches. First, the researchers used electrodes to record the activity of dopaminergic neurons in the ventral tegmental area (VTA), a brain region important in reward learning. Then, to mimic singing errors, they used custom-written software to play over, and thus distort, certain syllables of that finch’s courtship song while the bird practiced. “Let’s say the bird’s song is ABCD,” says Gadagkar. “We distort one syllable, so it sounds like something between ABCD and ABCB.” © 1986-2017 The Scientist

Keyword: Hearing; Sexual Behavior
Link ID: 23426 - Posted: 03.30.2017

By Tim Falconer I’ve spent my career bothering people. As a journalist and author, I hang around and watch what folks do, and I ask too many questions, some better than others. Later, I have follow-up queries and clarification requests, and I bug them for those stats they promised to provide me. But something different happened when I started researching congenital amusia, the scientific term for tone deafness present at birth, for my new book, Bad Singer. The scientists were as interested in me as I was in them. My idea was to learn to sing and then write about the experience as a way to explore the science of singing. After my second voice lesson, I went to the Université de Montréal’s International Laboratory for Brain, Music, and Sound Research (BRAMS). I fully expected Isabelle Peretz, a pioneer in amusia research, to say I was just untrained. Instead, she diagnosed me as amusic. “So this means what?” I asked. “We would love to test you more.” The BRAMS researchers weren’t alone. While still at Harvard’s Music and Neuroimaging Lab, Psyche Loui—who now leads Wesleyan University’s Music, Imaging, and Neural Dynamics (MIND) Lab—identified a neural pathway called the arcuate fasciculus as the culprit of congenital amusia. So I emailed her to set up an interview. She said sure—and asked if I’d be willing to undergo an fMRI scan. And I’d barely started telling my story to Frank Russo, who runs Ryerson University’s Science of Music, Auditory Research, and Technology (SMART) Lab in Toronto, before he blurted out, “Sorry, I’m restraining myself from wanting to sign you up for all kinds of research and figuring what we can do with you.” © 1986-2017 The Scientist

Keyword: Hearing
Link ID: 23425 - Posted: 03.30.2017

By Kate Yandell In a 1971 paper published in Science, biologist Roger Payne, then at Rockefeller University, and Scott McVay, then an administrator at Princeton University, described the “surprisingly beautiful sounds” made by humpback whales (Megaptera novaeangliae; Science, 173:585-97). Analyzing underwater recordings made by a Navy engineer, the duo found that these whale sounds were intricately repetitive. “Because one of the characteristics of bird songs is that they are fixed patterns of sounds that are repeated, we call the fixed patterns of humpback sounds ‘songs,’” they wrote. It’s now clear that, in addition to simpler calls, several baleen whale species—including blue, fin, and bowhead—make series of sounds known as song. Humpback song is the most complex and by far the best studied. Units of humpback songs form phrases, series of similar phrases form themes, and multiple themes form songs. All the males in a given population sing the same song, which evolves over time. When whale groups come into contact, songs can spread. But why do whales sing? “The short answer is, we don’t know,” says Alison Stimpert, a bioacoustician at Moss Landing Marine Laboratories in California. Humpback songs are only performed by males and are often heard on breeding grounds, so the dominant hypothesis is that these songs are a form of courtship. The quality of a male’s performance could be a sign of his fitness, for example. But female whales do not tend to approach singing males. Alternatively, whale researchers have proposed that the male whales sing to demarcate territory or to form alliances with other males during mating season. © 1986-2017 The Scientist

Keyword: Sexual Behavior; Hearing
Link ID: 23419 - Posted: 03.29.2017

By Erin Blakemore It’s a scientific canard so old it’s practically cliché: When people lose their sight, other senses heighten to compensate. But are there really differences between the senses of blind and sighted people? It’s been hard to prove, until now. As George Dvorsky reports for Gizmodo, new research shows that blind people’s brains are structurally different than those of sighted people. In a new study published in the journal PLOS One, researchers reveal that the brains of people who are born blind or went blind in early childhood are wired differently than people born with their sight. The study is the first to look at both structural and functional differences between blind and sighted people. Researchers used MRI scanners to peer at the brains of 12 people born with “early profound blindness”—that is, people who were either born without sight or lost it by age three, reports Dvorsky. Then they compared the MRI images to images of the brains of 16 people who were born with sight and who had normal vision (either alone or with corrective help from glasses). The comparisons showed marked differences between the brains of those born with sight and those born without. Essentially, the brains of blind people appeared to be wired differently when it came to things like structure and connectivity. The researchers noticed enhanced connections between some areas of the brain, too—particularly the occipital and frontal cortex areas, which control working memory. There was decreased connectivity between some areas of the brain, as well.

Keyword: Vision; Hearing
Link ID: 23410 - Posted: 03.27.2017

By Bob Grant In the past decade, some bat species have been added to the ranks of “singing” animals, with complex, mostly ultrasonic vocalizations that, when slowed down, rival the tunes of some songbirds. Like birds, bats broadcast chirps, warbles, and trills to attract mates and defend territories. There are about 1,300 known bat species, and the social vocalizations of about 50 have been studied. Of those, researchers have shown that about 20 species seem to be singing, with songs that are differentiated from simpler calls by both their structural complexity and their function. Bats don’t sound like birds to the naked ear; most singing species broadcast predominately in the ultrasonic range, undetectable by humans. And in contrast to the often lengthy songs of avian species, the flying mammals sing in repeated bursts of only a few hundred milliseconds. Researchers must first slow down the bat songs—so that their frequencies drop into the audible range—to hear the similarities. Kirsten Bohn, a behavioral biologist at Johns Hopkins University, first heard Brazilian free-tailed bats (Tadarida brasiliensis) sing more than 10 years ago, when she was a postdoc in the lab of Mike Smotherman at Texas A&M University. “I started hearing a couple of these songs slowed down,” she recalls. “And it really was like, ‘Holy moly—that’s a song! That sounds like a bird.’” The neural circuitry used to learn and produce song may also share similarities between bats and birds. Bohn and Smotherman say they’ve gathered some tantalizing evidence that bats use some of the same brain regions—namely, the basal ganglia and prefrontal cortex—that birds rely upon to produce, process, and perhaps even learn songs. “We have an idea of how the neural circuits control vocalizing in the bats and how they might be adapted to produce song,” Smotherman says. © 1986-2017 The Scientist
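
The slowing-down step described here works because replaying a recording at a fraction of its original speed divides every frequency by the same factor (assuming plain slowed playback rather than pitch-preserving time stretching). A small illustrative sketch; the 50 kHz example call is an assumed figure:

    def slowed_frequency(freq_hz, playback_factor):
        """Frequency heard when a recording is replayed at playback_factor times its original speed."""
        return freq_hz * playback_factor

    # An assumed 50 kHz ultrasonic bat note replayed at one-tenth speed is heard at 5 kHz,
    # comfortably inside the human audible range.
    print(slowed_frequency(50_000, 0.1))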

Keyword: Hearing; Language
Link ID: 23369 - Posted: 03.17.2017

By Jenny Rood To human ears, the trilling of birdsong ranks among nature’s most musical sounds. That similarity to human music is now inspiring researchers to apply music theory to avian vocalizations. For example, zebra finch neurobiologist Ofer Tchernichovski of the City University of New York, together with musician and musicologist Hollis Taylor, recently analyzed the song of the Australian pied butcherbird (Cracticus nigrogularis) and found an inverse relationship between motif complexity and repetition that paralleled patterns found in human music (R Soc Open Sci, 3:160357, 2016). Tchernichovski’s work also suggests that birds can perceive rhythm and change their calls in response. Last year, he and colleague Eitan Globerson, a symphony conductor at the Jerusalem Academy of Music and Dance as well as a neurobiologist at Bar Ilan University in Israel, demonstrated that zebra finches, a vocal learning species, adapt their innate calls—as opposed to learned song—to avoid overlapping with unusual rhythmic patterns produced by a vocal robot (Curr Biol, 26:309-18, 2016). The researchers also found that both males and females use the brain’s song system to do this, although females do not learn song. But these complexities of birdsong might be more comparable to human speech than to human music, says Henkjan Honing, a music cognition scientist at the University of Amsterdam. Honing’s research suggests that some birds don’t discern rhythm well. Zebra finches, for example, seem to pay attention to pauses between notes on short time scales but have trouble recognizing overarching rhythmic patterns—one of the key skills thought necessary for musical perception (Front Psychol, doi:10.3389/fpsyg.2016.00730, 2016). © 1986-2017 The Scientist

Keyword: Animal Communication; Hearing
Link ID: 23368 - Posted: 03.17.2017

By Catherine Offord A few years ago, UK composer and technology reporter LJ Rich participated in a music technology competition as part of a project with the BBC. The 24-hour event brought together various musicians, and entailed staying awake into the wee hours trying to solve technical problems related to music. Late into the night, during a break from work, Rich thought of a way to keep people’s spirits up. “At about four in the morning, I remember playing different tastes to people on a piano in the room we were working in,” she says. For instance, “to great amusement, during breakfast I played people the taste of eggs.” It didn’t take long before Rich learned, for the first time, that food’s association with music was not as universally appreciated as she had assumed. “You realize everybody else doesn’t perceive the world that way,” she says. “For me, it was quite a surprise to find that people didn’t realize that certain foods had different keys.” Rich had long known she had absolute pitch—the ability to identify a musical note, such as B flat, without any reference. But that night, she learned she also has what’s known as synesthesia, a little-understood mode of perception that links senses such as taste and hearing in unusual ways, and is thought to be present in around 4 percent of the general population. It’s a difficult phenomenon to get to the bottom of. Like Rich, many synesthetes are unaware their perception is atypical; what’s more, detecting synesthesia usually relies on self-reported experiences—an obstacle for standardized testing. But a growing body of evidence suggests that Rich is far from being alone in possessing both absolute pitch and synesthesia. © 1986-2017 The Scientist

Keyword: Hearing
Link ID: 23353 - Posted: 03.14.2017

By Diana Kwon Deep in the Amazon rainforests of Bolivia live the Tsimane’, a tribe that has remained relatively untouched by Western civilization. Tsimane’ people possess a unique characteristic: they do not cringe at musical tones that sound discordant to Western ears. The vast majority of Westerners prefer consonant chords to dissonant ones, based on the intervals between the musical notes that compose the chords. One particularly notable example of this is the Devil’s Interval, or flatted fifth, which received its name in the Middle Ages because the sound it produced was deemed so unpleasant that people associated it with sinister forces. The flatted fifth later became a staple of numerous jazz, blues, and rock-and-roll songs. Over the years, scientists have gathered compelling evidence to suggest that an aversion to dissonance is innate. In 1996, in a letter to Nature, Harvard psychologists Marcel Zentner and Jerome Kagan reported on a study suggesting that four-month-old infants preferred consonant intervals to dissonant ones. Researchers subsequently replicated these results: one lab discovered the same effect in two-month-olds and another in two-day-old infants of both deaf and hearing parents. Some scientists even found these preferences in certain animals, such as young chimpanzees and baby chickens. “Of course the ambiguity is [that] even young infants have quite a bit of exposure to typical Western music,” says Josh McDermott, a researcher who studies auditory cognition at MIT. “So the counter-argument is that they get early exposure, and that shapes their preference.” © 1986-2017 The Scientist
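
The interval arithmetic behind the Devil's Interval can be made concrete: in twelve-tone equal temperament each semitone multiplies frequency by the twelfth root of two, so the flatted fifth (six semitones) lands on a ratio of about 1.414, noticeably off the simple 3:2 ratio of the consonant perfect fifth. A short worked sketch:

    def interval_ratio(semitones):
        """Frequency ratio of an equal-tempered interval spanning the given number of semitones."""
        return 2 ** (semitones / 12)

    print(interval_ratio(6))   # flatted fifth (tritone): ~1.4142
    print(interval_ratio(7))   # perfect fifth: ~1.4983, close to the simple 3:2 ratio
    print(3 / 2)               # 1.5, for comparison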

Keyword: Hearing; Emotions
Link ID: 23345 - Posted: 03.11.2017

By Aylin Woodward Noise is everywhere, but that’s OK. Your brain can still keep track of a conversation in the face of revving motorcycles, noisy cocktail parties or screaming children – in part by predicting what’s coming next and filling in any blanks. New data suggests that these insertions are processed as if the brain had really heard the parts of the word that are missing. “The brain has evolved a way to overcome interruptions that happen in the real world,” says Matthew Leonard at the University of California, San Francisco. We’ve known since the 1970s that the brain can “fill in” inaudible sections of speech, but understanding how it achieves this phenomenon – termed perceptual restoration – has been difficult. To investigate, Leonard’s team played volunteers words that were partially obscured or inaudible to see how their brains responded. The experiment involved people who already had hundreds of electrodes implanted into their brain to monitor their epilepsy. These electrodes detect seizures, but can also be used to record other types of brain activity. The team played the volunteers recordings of a word that could either be “faster” or “factor”, with the middle sound replaced by noise. Data from the electrodes showed that their brains responded as if they had actually heard the missing “s” or “c” sound. © Copyright Reed Business Information Ltd.

Keyword: Hearing; Attention
Link ID: 23344 - Posted: 03.11.2017

By Catherine Offord Getting to Santa María, Bolivia, is no easy feat. Home to a farming and foraging society, the village is located deep in the Amazon rainforest and is accessible only by river. The area lacks electricity and running water, and the Tsimane’ people who live there make contact with the outside world only occasionally, during trips to neighboring towns. But for auditory researcher Josh McDermott, this remoteness was central to the community’s scientific appeal. In 2015, the MIT scientist loaded a laptop, headphones, and a gasoline generator into a canoe and pushed off from the Amazonian town of San Borja, some 50 kilometers downriver from Santa María. Together with collaborator Ricardo Godoy, an anthropologist at Brandeis University, McDermott planned to carry out experiments to test whether the Tsimane’ could discern certain combinations of musical tones, and whether they preferred some over others. The pair wanted to address a long-standing question in music research: Are the features of musical perception seen across cultures innate, or do similarities in preferences observed around the world mirror the spread of Western culture and its (much-better-studied) music? “Particular musical intervals are used in Western music and in other cultures,” McDermott says. “They don’t appear to be random—some are used more commonly than others. The question is: What’s the explanation for that?” © 1986-2017 The Scientist

Keyword: Hearing
Link ID: 23332 - Posted: 03.09.2017

By Lindzi Wessel You may have seen the ads: Just spray a bit of human pheromone on your skin, and you’re guaranteed to land a date. Scientists have long debated whether humans secrete chemicals that alter the behavior of other people. A new study throws more cold water on the idea, finding that two pheromones that proponents have long contended affect human attraction to each other have no such impact on the opposite sex—and indeed experts are divided about whether human pheromones even exist. The study, published today in Royal Society Open Science, asked heterosexual participants to rate opposite-sex faces on attractiveness while being exposed to two steroids that are putative human pheromones. One is androstadienone (AND), found in male sweat and semen, whereas the second, estratetraenol (EST), is in women’s urine. Researchers also asked participants to judge gender-ambiguous, or “neutral,” faces, created by merging images of men and women together. The authors reasoned that if the steroids were pheromones, female volunteers given AND would see gender-neutral faces as male, and male volunteers given EST would see gender-neutral faces as female. They also theorized that the steroids corresponding to the opposite sex would lead the volunteers to rate opposite sex faces as more attractive. That didn’t happen. The researchers found no effects of the steroids on any behaviors and concluded that the label of “putative human pheromone” for AND and EST should be dropped. “I’ve convinced myself that AND and EST are not worth pursuing,” says the study’s lead author, Leigh Simmons, an evolutionary biologist at the University of Western Australia in Crawley. © 2017 American Association for the Advancement of Science.

Keyword: Chemical Senses (Smell & Taste); Hormones & Behavior
Link ID: 23327 - Posted: 03.08.2017