Chapter 9. Hearing, Vestibular Perception, Taste, and Smell

By JONAH ENGEL BROMWICH I have a big, dumb, deep, goofy voice. But I’m reminded of it only when I hear a recording of myself while playing back an interview — or when friends do impressions of me, lowering their voices several octaves. My high school classmate Walter Suskind has one of the deepest voices I’ve ever heard in person. His experience has been similar to mine. “My voice sounds pretty normal in my head,” he said. “It’s when I catch the echo on the back of the phone or when I hear myself when it’s been taped that I realize how deep it is. Also, when people come up to me and, to imitate my voice, go as deep as they possibly can and growl in my face.” He added, “I’ve been told that the one advantage to voices like ours is we make really good hostage negotiators.” (Here’s Walter on an episode of “Radiolab.” His segment starts at about 12:20, and the host immediately comments on his voice.) Many people have heard their recorded voices and reeled in disgust (“Do I really sound like that?”). Others are surprised by how high their voices sound. The indie musician Mitski Miyawaki, who has earned praise for exceptional control over her singing voice, said that she, too, is often unpleasantly surprised by her speaking voice, which she perceives as “lower, more commanding” than it sounds to others. “And then I listen to a radio interview and I’m like ‘uuuch,’ ” she said, making a disgusted noise. “I listen to my voice and I go, ‘Oh it sounds exactly like a young girl.’ ” There’s an easy explanation for experiences like Ms. Miyawaki’s, said William Hartmann, a physics professor at Michigan State University who specializes in acoustics and psychoacoustics. There are two pathways through which we perceive our own voice when we speak, he explained. One is the route through which we perceive most other sounds. Waves travel from the air through the chain of our hearing systems, traversing the outer, middle and inner ear. © 2017 The New York Times Company

Keyword: Hearing
Link ID: 23107 - Posted: 01.16.2017

Susan Milius NEW ORLEANS — The self-cleaning marvel known as earwax may turn the dust particles it traps into agents of their own disposal. Earwax, secreted in the ear canal, protects ears from building up dunes of debris from particles wafting through the air. The wax creates a sticky particle-trapper inside the canal, explained Zac Zachow on January 6 at the annual meeting of the Society for Integrative and Comparative Biology. The goo coats hairs and haphazardly pastes them into a loose net. Then, by a process not yet fully understood, bits of particle-dirtied wax leave the ear, taking their burden of debris with them. Earwax may accomplish such a feat because trapping more and more dust turns it from gooey to crumbly, Zachow said. Working with Alexis Noel in David Hu’s lab at Georgia Tech in Atlanta, he filmed a rough demonstration of this idea: Mixing flour into a gob of pig’s earwax eventually turned the lump from stickier to drier, with crumbs fraying away at the edges. Jaw motions might help shake loose these crumbs, Zachow said. A video inside the ear of someone eating a doughnut showed earwax bucking and shifting. This dust-to-crumb scenario needs more testing, but Noel points out that earwax might someday inspire new ways of reducing dust buildup in machinery such as home air-filtration systems. Z. Zachow, A. Noel and D.L. Hu. Earwax has properties like paint, enabling self-cleaning. Annual meeting of the Society for Integrative and Comparative Biology, New Orleans, January 6, 2017. © Society for Science & the Public 2000 - 2017

Keyword: Hearing
Link ID: 23100 - Posted: 01.14.2017

Alison Abbott Bats have brain cells that keep track of their angle and distance to a target, researchers have discovered. The neurons, called ‘vector cells’, are a key piece of the mammalian brain’s complex navigation system — and something that neuroscientists have been seeking for years. Our brain’s navigation system has many types of cells, but a lot of them seem designed to keep track of where we are. Researchers know of ‘place’ cells, for example, which fire when animals are in a particular location, and ‘head direction’ cells that fire in response to changes in the direction the head is facing. Bats also have a kind of neuronal compass that enables them to orient themselves as they fly. The vector cells, by contrast, keep spatial track of where we are going. They are in the brain’s hippocampus, which is also where ‘place’ and ‘head-direction’ cells were discovered. That’s a surprise, considering how well this area has been studied by researchers, says Nachum Ulanovsky, who led the team at the Weizmann Institute of Science in Rehovot, Israel, that discovered the new cells. His team published their findings in Science on 12 January. Finding the cells “was one of those very rare discovery moments in a researcher’s life,” says Ulanovsky. “My heart raced, I started jumping around.” The trick to finding them was a simple matter of experimental design, he says. © 2017 Macmillan Publishers Limited
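
To make concrete what such a ‘vector cell’ would have to encode, the sketch below computes an animal’s distance and egocentric bearing to a goal from its position and heading. This illustrates only the geometry, not the study’s analysis; the function and coordinates are invented for the example.

```python
import math

def goal_vector(x, y, heading, gx, gy):
    """Distance and egocentric bearing from an animal at (x, y) with the
    given heading (radians) to a goal at (gx, gy): the two quantities a
    goal-vector cell is reported to encode."""
    distance = math.hypot(gx - x, gy - y)
    # World-frame angle to the goal, expressed relative to the heading
    # and wrapped into [-pi, pi).
    bearing = (math.atan2(gy - y, gx - x) - heading + math.pi) % (2 * math.pi) - math.pi
    return distance, bearing

# A bat at the origin heading due east (0 rad), with a target at (3, 4):
d, b = goal_vector(0.0, 0.0, 0.0, 3.0, 4.0)
print(f"distance = {d:.1f} m, bearing = {math.degrees(b):.0f} degrees")  # 5.0 m, 53 degrees
```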

Keyword: Learning & Memory; Hearing
Link ID: 23097 - Posted: 01.13.2017

By Joshua Rapp Learn The Vietnamese pygmy dormouse is as blind as a bat—and it navigates just like one, too. Scientists have found that the small, nimble brown rodent (Typhlomys cinereus chapensis), native to Vietnam and parts of China, uses sound waves to get a grip on its environment. Measurements of the dormice in the Moscow Zoo revealed that the species can't see objects because of a folded retina and a low number of neurons capable of collecting visual information, among other things. When researchers recorded the animals, they discovered they make ultrasonic noises similar to those used by some bat species, and videos showed they made the sounds at a much greater pulse rate when moving than while resting. These sound waves bounce off objects, allowing the rodent to sense its surroundings—an ability known as echolocation, or biological sonar. The find makes the dormouse the only tree-climbing mammal known to use ultrasonic echolocation, the team reports in Integrative Zoology. The authors suggest that an extinct ancestor of these dormice was likely a leaf bed–dwelling animal that lost the ability to see in the darkness in which it is active. As the animals began to move up into the trees over time, they likely developed the ultrasonic echolocation abilities to help them deal with a new acrobatic lifestyle. The discovery lends support to the idea that bats may have evolved echolocation before the ability to fly. © 2017 American Association for the Advancement of Science
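
The ranging principle behind biological sonar is simple arithmetic: the round-trip delay of an echo, multiplied by the speed of sound and halved, gives the distance to the reflecting object. A minimal sketch of that calculation (an illustration of the principle, not the study’s method):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def echo_distance(round_trip_delay_s):
    """Distance to a reflecting object given the round-trip echo delay.
    The pulse travels out and back, hence the division by 2."""
    return SPEED_OF_SOUND * round_trip_delay_s / 2.0

# An echo arriving 2 ms after the pulse implies an obstacle about 34 cm away:
print(f"{echo_distance(0.002) * 100:.0f} cm")  # 34 cm
```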

Keyword: Hearing
Link ID: 23080 - Posted: 01.11.2017

By Michael Price We’ve all heard the stories about humans losing their jobs to robots. But what about man’s best friend? A new study suggests that drug-sniffing dogs may soon have a competitor in the workplace: an insect-piloted robotic vehicle that could help scientists build better odor-tracking robots to find disaster victims, detect illicit drugs or explosives, and sense leaks of hazardous materials. The robotic car’s driver is a silkworm moth (Bombyx mori) tethered in a tiny cockpit so that its legs can move freely over an air-supported ball, a bit like an upside-down computer mouse trackball. Using optical sensors, the car follows the ball’s movement and moves in the same direction. With its odor-sensitive antennae, the moth senses a target smell—in this case, female silkworm sex pheromones—and walks toward it along the trackball, driving the robotic car. Across seven trials with seven different drivers, the insects piloted the vehicle consistently toward the pheromones, nearly as well as 10 other silkworm moths that could walk freely on the ground toward the smells, the researchers reported last month in the Journal of Visualized Experiments. On average, the driving moths reached their target about 2 seconds behind the walking moths, although their paths were more circuitous. The researchers say their findings could help roboticists better integrate biologically inspired odor detection systems into their robots. Engineers might even be able to develop more powerful and maneuverable versions of the study’s robot car that could be driven by silkworms genetically modified to detect a wide variety of smells to help with sniffing tasks traditionally done by trained animals. Time to start polishing up those résumés, pooches. © 2016 American Association for the Advancement of Science

Keyword: Chemical Senses (Smell & Taste)
Link ID: 23051 - Posted: 01.04.2017

Lisa Vincenz-Donnelly A test that records the way the brain processes sound might provide a simple and reliable measure of concussion, a small study suggests. If the method works, it could help scientists work out how best to treat the poorly understood brain injury. In a paper published on 22 December in Scientific Reports, neuroscientist Nina Kraus of Northwestern University in Evanston, Illinois, and other researchers say that they have found that a particular signal in neural activity, recorded with electrodes placed on the head as children listen to 'da' sounds from a speech synthesizer, can objectively distinguish concussed children from a healthy control group. The research was done on just 40 people — a tiny group — and will have to be repeated in larger samples. But other researchers are still excited by the report, because concussion is hard to diagnose, particularly in children. The study “may for the first time offer a simple and objective biomarker to measure the severity of brain injuries”, says Thomas Wisniewski, a neurologist at New York University’s Langone Medical Center. There is intense interest in finding a clear-cut biological signature for concussion, he says. “We have been crying out for a reliable method.” Millions of people enter hospitals every year with blows to the head, and some of them have concussion, a minor brain injury that can betoken more serious damage. To diagnose it, physicians rely on subjective complaints of dizziness, coordination tests and sometimes more involved procedures, such as magnetic resonance imaging (MRI) or computed tomography (CT) scans. But there’s no single objective way to detect concussion and measure its severity — and no simple test that can be administered regularly to determine when someone has recovered, a particularly important issue for athletes keen to be allowed back on the field. © 2016 Macmillan Publishers Limited
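
Responses like the one Kraus’s team measured are typically extracted by averaging many stimulus-locked EEG epochs, so that noise uncorrelated with the stimulus cancels while the evoked signal remains. The sketch below demonstrates that principle on synthetic data; the sampling rate, epoch count, and signal shape are assumptions for illustration, not the study’s parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 8000                                # assumed sampling rate, Hz
t = np.arange(int(0.05 * fs)) / fs       # a 50 ms window after each 'da' onset

# Synthetic frequency-following response: a faint 100 Hz component
# (tracking the syllable's fundamental) buried in far larger noise.
true_response = 0.1 * np.sin(2 * np.pi * 100 * t)
epochs = true_response + rng.normal(0.0, 1.0, size=(2000, t.size))

# Averaging N stimulus-locked epochs shrinks uncorrelated noise by sqrt(N),
# letting a response far below the single-trial noise floor emerge.
average = epochs.mean(axis=0)
print("single-epoch noise std:", round(float(epochs[0].std()), 2))
print("residual noise after averaging:", round(float((average - true_response).std()), 3))
```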

Keyword: Brain Injury/Concussion; Hearing
Link ID: 23014 - Posted: 12.23.2016

By Victoria Gill Science reporter, BBC News Direct recordings have revealed what is happening in our brains as we make sense of speech in a noisy room. Focusing on one conversation in a loud, distracting environment is called "the cocktail party effect". It is a common festive phenomenon and of interest to researchers seeking to improve speech recognition technology. Neuroscientists recorded from people's brains during a test that recreated the moment when unintelligible speech suddenly makes sense. A team measured people's brain activity as the words of a previously unintelligible sentence suddenly became clear when a subject was told the meaning of the "garbled speech". The findings are published in the journal Nature Communications. Lead researcher Christopher Holdgraf from the University of California, Berkeley, and his colleagues were able to work with epilepsy patients, who had had a portion of their skull removed and electrodes placed on the brain surface to track their seizures. First, the researchers played a very distorted, garbled sentence to each subject, which almost no-one was able to understand. They then played a normal, easy-to-understand version of the same sentence and immediately repeated the garbled version. "After hearing the intact sentence," the researchers explained in their paper, all the subjects understood the subsequent "noisy version". The brain recordings showed this moment of recognition as activity patterns in areas of the brain known to be associated with processing sound and understanding speech. When the subjects heard the very garbled sentence, the scientists reported that they saw little activity in those parts of the brain. Hearing the clearly understandable sentence then triggered patterns of activity in those brain areas. © 2016 BBC.

Keyword: Attention; Hearing
Link ID: 23004 - Posted: 12.22.2016

By Sara Reardon The din of what sounds like a high-pitched cocktail party fills the lab of neuroscientist Xiaoqin Wang at Johns Hopkins University in Baltimore. But the primates making the racket are dozens of marmosets, squirrel-sized monkeys with patterned coats and white puffs of fur on either side of their heads. The animals chatter to each other, stopping to tilt their heads and consider their visitors with inquisitive expressions. Common marmosets (Callithrix jacchus) are social and communicative in captivity, unlike the macaque that is more commonly used as a model primate. And in January, Wang and his colleagues revealed that marmosets are also the only non-human animals that can hear different pitches, such as those found in music and tonal languages like Chinese, in the same way people can. This makes the marmoset the closest proxy researchers have to the human brain when it comes to hearing and speech, says Qian-Jie Fu, an auditory researcher at the University of California, Los Angeles, who was not involved with the paper. Until recently, researchers have relied on songbirds for such work, but the birds’ brains are so different from human ones that the insights they provide are limited. Wang hopes that marmosets will improve researchers’ understanding of the evolution of communication and help them refine devices such as cochlear implants for deaf people. © 2016 Scientific American

Keyword: Hearing
Link ID: 23000 - Posted: 12.20.2016

By Catherine Matacic Have you ever wondered why a strange piece of music can feel familiar—how it is, for example, that you can predict the next beat even though you’ve never heard the song before? Music everywhere seems to share some “universals,” from the scales it uses to the rhythms it employs. Now, scientists have shown for the first time that people without any musical training also create songs using predictable musical beats, suggesting that humans are hardwired to respond to—and produce—certain features of music. “This is an excellent and elegant paper,” says Patrick Savage, an ethnomusicologist at the Tokyo University of the Arts who was not involved in the study. “[It] shows that even musical evolution obeys some general rules [similar] to the kind that govern biological evolution.” Last year, Savage and colleagues traced that evolution by addressing a fundamental question: What aspects of music are consistent across cultures? They analyzed hundreds of musical recordings from around the world and identified 18 features that were widely shared across nine regions, including six related to rhythm. These “rhythmic universals” included a steady beat, two- or three-beat rhythms (like those in marches and waltzes), a preference for two-beat rhythms, regular weak and strong beats, a limited number of beat patterns per song, and the use of those patterns to create motifs, or riffs. “That was a really remarkable job they did,” says Andrea Ravignani, a cognitive scientist at the Vrije Universiteit Brussel in Belgium. “[It convinced me that] the time was ripe to investigate this issue of music evolution and music universals in a more empirical way.” © 2016 American Association for the Advancement of Science.

Keyword: Hearing
Link ID: 22997 - Posted: 12.20.2016

By GINA KOLATA As concern rises over the effect of continuous use of headphones and earbuds on hearing, a new paper by federal researchers has found something unexpected. The prevalence of hearing loss in Americans of working age has declined. The paper, published on Thursday in the journal JAMA Otolaryngology — Head & Neck Surgery, used data from the National Health and Nutrition Examination Survey, which periodically administers health tests to a representative sample of the population. The investigators, led by Howard J. Hoffman, the director of the epidemiology and statistics program at the National Institute on Deafness and Other Communication Disorders, compared data collected between 1999 and 2004 with data from 2011 and 2012, the most recent available. Hearing loss in this study meant that a person could not hear, in at least one ear, a sound about as loud as rustling leaves. The researchers reported that while 15.9 percent of the population studied in the earlier period had problems hearing, just 14.1 percent of the more recent group had hearing loss. The good news is part of a continuing trend — Americans’ hearing has gotten steadily better since 1959. Most surprising to Mr. Hoffman, a statistician, was that even though the total population of 20- to 69-year-olds grew by 20 million over the time period studied — and the greatest growth was in the oldest people, a group most likely to have hearing problems — the total number of people with hearing loss fell, from 28 million to 27.7 million. Hearing experts who were not associated with the study said they were utterly convinced by the results. “It’s a fantastic paper,” said Brian Fligor, an audiologist with Lantos Technologies of Wakefield, Mass., which develops custom earpieces to protect ears from noise. “I totally believe them.” © 2016 The New York Times Company
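
A quick consistency check shows the reported figures hang together: dividing the affected totals by the prevalence rates recovers the underlying population sizes, which differ by roughly the reported 20 million (approximate, since the published numbers are rounded).

```python
# Implied population sizes from the reported prevalence and affected totals.
early_affected, early_rate = 28.0e6, 0.159   # 1999-2004: 15.9% with hearing loss
late_affected, late_rate = 27.7e6, 0.141     # 2011-2012: 14.1% with hearing loss

early_pop = early_affected / early_rate
late_pop = late_affected / late_rate

print(f"implied population, early period: {early_pop / 1e6:.0f} million")  # ~176
print(f"implied population, later period: {late_pop / 1e6:.0f} million")   # ~196
print(f"implied growth: {(late_pop - early_pop) / 1e6:.0f} million")       # ~20
```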

Keyword: Hearing
Link ID: 22996 - Posted: 12.17.2016

Betsy Mason With virtual reality finally hitting the consumer market this year, VR headsets are bound to make their way onto a lot of holiday shopping lists. But new research suggests these gifts could also give some of their recipients motion sickness — especially if they’re women. In a test of people playing one virtual reality game using an Oculus Rift headset, more than half felt sick within 15 minutes, a team of scientists at the University of Minnesota in Minneapolis reports online December 3 in Experimental Brain Research. Among women, nearly four out of five felt sick. So-called VR sickness, also known as simulator sickness or cybersickness, has been recognized since the 1980s, when the U.S. military noticed that flight simulators were nauseating its pilots. In recent years, anecdotal reports began trickling in about the new generation of head-mounted virtual reality displays making people sick. Now, with VR making its way into people’s homes, there’s a steady stream of claims of VR sickness. “It's a high rate of people that you put in [VR headsets] that are going to experience some level of symptoms,” says Eric Muth, an experimental psychologist at Clemson University in South Carolina with expertise in motion sickness. “It’s going to mute the ‘Wheee!’ factor.” Oculus, which Facebook bought for $2 billion in 2014, released its Rift headset in March. The company declined to comment on the new research but says it has made progress in making the virtual reality experience comfortable for most people, and that developers are getting better at creating VR content. All approved games and apps get a comfort rating based on things like the type of movements involved, and Oculus recommends starting slow and taking breaks. But still some users report getting sick. © Society for Science & the Public 2000 - 2016.

Keyword: Vision
Link ID: 22962 - Posted: 12.07.2016

By CATHERINE SAINT LOUIS These days, even 3-year-olds wear headphones, and as the holidays approach, retailers are well stocked with brands that claim to be “safe for young ears” or to deliver “100 percent safe listening.” The devices limit the volume at which sound can be played; parents rely on them to prevent children from blasting, say, Rihanna at hazardous levels that could lead to hearing loss. But a new analysis by The Wirecutter, a product recommendations website owned by The New York Times, has found that half of 30 sets of children’s headphones tested did not restrict volume to the promised limit. The worst headphones produced sound so loud that it could be hazardous to ears in minutes. “These are terribly important findings,” said Cory Portnuff, a pediatric audiologist at the University of Colorado Hospital, who was not involved in the analysis. “Manufacturers are making claims that aren’t accurate.” The new analysis should be a wake-up call to parents who thought volume-limiting technology offered adequate protection, said Dr. Blake Papsin, the chief otolaryngologist at the Hospital for Sick Children in Toronto. “Headphone manufacturers aren’t interested in the health of your child’s ears,” he said. “They are interested in selling products, and some of them are not good for you.” Half of 8- to 12-year-olds listen to music daily, and nearly two-thirds of teenagers do, according to a 2015 report with more than 2,600 participants. Safe listening is a function of both volume and duration: The louder a sound, the less time you should listen to it. It’s not a linear relationship. Eighty decibels is twice as loud as 70 decibels, and 90 decibels is four times as loud. Exposure to 100 decibels, about the volume of noise caused by a power lawn mower, is safe for just 15 minutes; noise at 108 decibels, however, is safe for less than three minutes. © 2016 The New York Times Company
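
The exposure times quoted match a NIOSH-style damage-risk criterion: 85 decibels is considered safe for 8 hours, and each additional 3 decibels halves the permissible time. A small sketch of that rule (the function and its defaults are illustrative, not taken from the article):

```python
def safe_minutes(level_db, reference_db=85.0, reference_minutes=480.0,
                 exchange_rate_db=3.0):
    """Permissible daily exposure under a NIOSH-style criterion: 85 dB is
    allowed for 8 hours (480 min), and every 3 dB added halves the time."""
    return reference_minutes / 2 ** ((level_db - reference_db) / exchange_rate_db)

for level in (85, 100, 108):
    print(f"{level} dB -> {safe_minutes(level):.1f} minutes")
# 85 dB -> 480.0, 100 dB -> 15.0, 108 dB -> 2.4,
# consistent with the article's 15 minutes and "less than three minutes".
```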

Keyword: Hearing
Link ID: 22953 - Posted: 12.06.2016

Emily Conover A bird in laser goggles has helped scientists discover a new phenomenon in the physics of flight. Swirling vortices appear in the flow of air that follows a bird’s wingbeat. But for slowly flying birds, these vortices were unexpectedly short-lived, researchers from Stanford University report December 6 in Bioinspiration and Biomimetics. The results could help scientists better understand how animals fly, and could be important for designing flying robots (SN: 2/7/15, p. 18). To study the complex air currents produced by birds’ flapping wings, the researchers trained a Pacific parrotlet, a small species of parrot, to fly through laser light — with the appropriate eye protection, of course. Study coauthor Eric Gutierrez, who recently graduated from Stanford, built tiny, 3-D-printed laser goggles for the bird, named Obi. Gutierrez and colleagues tracked the air currents left in Obi’s wake by spraying a fine liquid mist in the air, and illuminating it with a laser spread out into a two-dimensional sheet. High-speed cameras recorded the action at 1,000 frames per second. The vortex produced by the bird “explosively breaks up,” says mechanical engineer David Lentink, a coauthor of the study. “The flow becomes very complex, much more turbulent.” Comparing three standard methods for calculating the lift produced by flapping wings showed that predictions didn’t match reality, thanks to the unexpected vortex breakup. © Society for Science & the Public 2000 - 2016

Keyword: Miscellaneous
Link ID: 22952 - Posted: 12.06.2016

Barbara J. King Birdsong is music to human ears. It has inspired famous composers. For the rest of us, it may uplift the spirit and improve attention or simply be a source of delight, fun and learning. But have you ever wondered what birds themselves hear when they sing? After all, we know that other animals' perceptions don't always match ours. Anyone who lives with a dog has probably experienced their incredibly acute hearing and smell. Psychologists Robert J. Dooling and Nora H. Prior think they've found an answer to that question — for, at least, some birds. In an article published online last month in the journal Animal Behaviour, they conclude that "there is an acoustic richness in bird vocalizations that is available to birds but likely out of reach for human listeners." Dooling and Prior explain that most scientific investigations of birdsong focus on things like pitch, tempo, complexity, structural organization and the presence of stereotypy. They instead focused on what's called temporal fine structure and its perception by zebra finches. Temporal fine structure, they write, "is generally defined as rapid variations in amplitude within the more slowly varying envelope of sound." Struggling to fully grasp that definition, I contacted Robert Dooling by email. In his response, he suggested that I think of temporal fine structure as "roughly the difference between voices when they are the same pitch and loudness." Temporal fine structure is akin, then, to timbre, sometimes defined as "tone color" or, in Dooling's words, the feature that's "left between two complex sounds when the pitch and level are equalized." © 2016 npr
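
In signal-processing terms, Dooling's definition corresponds to the standard decomposition of a sound into a slow envelope and the rapid fluctuations beneath it, which can be separated with the Hilbert transform. A minimal sketch on a synthetic tone (the carrier and modulation frequencies are arbitrary choices for illustration):

```python
import numpy as np
from scipy.signal import hilbert

fs = 44100
t = np.arange(int(0.2 * fs)) / fs

# A synthetic "syllable": a 3 kHz carrier under a 20 Hz amplitude envelope.
signal = (1.0 + 0.8 * np.sin(2 * np.pi * 20 * t)) * np.sin(2 * np.pi * 3000 * t)

# The analytic signal separates the slowly varying envelope from the rapid
# fluctuations within it, i.e. the temporal fine structure.
analytic = hilbert(signal)
envelope = np.abs(analytic)                  # swings between roughly 0.2 and 1.8
fine_structure = np.cos(np.angle(analytic))  # unit-amplitude carrier

print("envelope min/max:", envelope.min().round(2), envelope.max().round(2))
```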

Keyword: Sexual Behavior; Hearing
Link ID: 22942 - Posted: 12.03.2016

By Torah Kachur A dog's nose is an incredible scent detector. This ability has been used to train bomb-sniffing dogs, narcotics and contraband sniffers as well as tracking hounds. But even the best electronic scent-detection devices — which use the dog's nose as their gold standard — have never been able to quite live up to their canine competition. Now, new research — which took a plastic dog nose and strapped it to a bomb-sniffing device — might change that. Dogs have almost 300 million smell receptors in their noses, compared to the meagre six million we humans have: their sense of smell is more than 40 times better than ours. But those smell receptors are just part of the puzzle. Matthew Staymates, lead author on a new paper published Thursday, figured that the canine sniffing skill also has something to do with the anatomy of a dog's nose. A former roommate of his had done his PhD in dog nose anatomy and actually had a computer model of a dog's nose and entire head. So Staymates used a 3D printer, printed out a dog's nose, and attached it to an electronic detector. "Sure enough, a week or two later, I had a fully functioning, anatomically correct dog's nose that sniffs like a real dog." From that, he worked with something called a schlieren imager to watch air go in and out of a nose when the dog is snuffling around the ground. ©2016 CBC/Radio-Canada.

Keyword: Chemical Senses (Smell & Taste); Robotics
Link ID: 22941 - Posted: 12.03.2016

By Virginia Morell If you’ve ever watched ants, you’ve probably noticed their tendency to “kiss,” quickly pressing their mouths together in face-to-face encounters. That’s how they feed each other and their larvae. Now, scientists report that the insects are sharing much more than food. They are also communicating—talking via chemical cocktails designed to shape each other and the colonies they live in. The finding suggests that saliva exchange could play yet-undiscovered roles in many other animals, from birds to humans, says Adria LeBoeuf, an evolutionary biologist at the University of Lausanne in Switzerland, and the study’s lead author. “We’ve paid little attention to what besides direct nutrition is being transmitted” in ants or other species, adds Diana Wheeler, an evolutionary biologist at the University of Arizona in Tucson, who was not involved with the work. Social insects—like ants, bees, and wasps—have long been known to pass food to one another through mouth-to-mouth exchange, a behavior known as trophallaxis. They store liquid food in “social stomachs,” or crops, from which they can regurgitate it later. It’s how nutrients are passed from foraging ants to nurse ants, and from nurses to the larvae in a colony. Other research has suggested that ants also use trophallaxis to spread the colony’s odor, helping them identify their own nest mates. © 2016 American Association for the Advancement of Science

Keyword: Chemical Senses (Smell & Taste)
Link ID: 22928 - Posted: 11.29.2016

By Ben Andrew Henry As a graduate student in the field of olfactory neuroscience, conducting what his former mentor describes as ambitiously clever research, Jason Castro felt something was missing. “I wanted to use science to make a connection with people,” he says, not just to churn out results. In 2012, the 34-year-old Castro accepted a faculty position at Bates College, a small liberal arts school in Maine, in order to “do the science equivalent of running a mom-and-pop—a small operation, working closely with students, and staying close to the data and the experiments myself,” he says. Students who passed through his lab or his seminars recall Castro as a dedicated mentor. “He spent hours with me just teaching me how to code,” recalled Torben Noto, a former student who went on to earn a PhD in neuroscience. After he arrived at Bates, Castro, along with two computational scientists, enlisted big-data methodologies to search for the olfactory equivalent of primary colors: essential building blocks of the odors we perceive. Their results, based on a classic set of data in which thousands of participants described various odors, identify 10 basic odor categories. Castro launched another project a few months later, when a paper published in Science reported that humans could discriminate between at least a trillion different odors. A friend from grad school, Rick Gerkin, smelled something fishy about the findings and gave Castro a call. “We became obsessed with the topic,” says Gerkin, now at Arizona State University. The researchers spent almost two years pulling apart the statistical methods of the study, finding that little tweaks to parameters such as the number of test subjects created large swings in the final estimate—a sign that the results were not robust. This August, the original study’s authors published a correction in Science. © 1986-2016 The Scientist

Keyword: Chemical Senses (Smell & Taste)
Link ID: 22922 - Posted: 11.29.2016

By Yasemin Saplakoglu Even if you don’t have rhythm, your pupils do. In a new study, neuroscientists played drumming patterns from Western music, including beats typical in pop and rock, while asking volunteers to focus on computer screens for an unrelated fast-paced task that involved pressing the space bar as quickly as possible in response to a signal on the screen. Unbeknownst to the participants, the music omitted strong and weak beats at random times. Eye scanners tracked the dilations of the subjects’ pupils as the music played. Their pupils enlarged when the rhythms dropped certain beats, even though the participants weren’t paying attention to the music. The biggest dilations matched the omissions of the beats in the most prominent locations in the music, usually the important first beat in a repeated set of notes. The results suggest that we may have an automatic sense of “hierarchical meter”—a pattern of strong and weak beats—that governs our expectations of music, the researchers write in the February 2017 issue of Brain and Cognition. Perhaps, the authors say, our eyes reveal clues into the importance that music and rhythm plays in our lives. © 2016 American Association for the Advancement of Science

Keyword: Attention; Hearing
Link ID: 22920 - Posted: 11.29.2016

By Andy Coghlan It may sound like a healthy switch, but sometimes people who drink diet soft drinks put on more weight and develop chronic disorders like diabetes. This has puzzled nutritionists, but experiments in mice now suggest that in some cases, this could partly be down to the artificial sweetener aspartame. Artificial sweeteners that contain no calories are synthetic alternatives to sugar that can taste up to 20,000 times sweeter. They are often used in products like low or zero-calorie drinks and sugar-free desserts, and are sometimes recommended for people who have type 2 diabetes. But mouse experiments now suggest that when aspartame breaks down in the gut, it may disrupt processes that are vital for neutralising harmful toxins from the bacteria that live there. By interfering with a crucial enzyme, these toxins seem to build up, irritating the gut lining and causing the kinds of low-level inflammation that can ultimately cause chronic diseases. “Our results are providing a mechanism for why aspartame may not always work to keep people thin, or even cause problems like obesity, heart disease, diabetes and metabolic syndrome,” says Richard Hodin at Massachusetts General Hospital in Boston. © Copyright Reed Business Information Ltd.

Keyword: Obesity; Chemical Senses (Smell & Taste)
Link ID: 22908 - Posted: 11.25.2016

Ramin Skibba Bats sing just as birds and humans do. But how they learn their melodies is a mystery — one that scientists will try to solve by sequencing the genomes of more than 1,000 bat species. The project, called Bat 1K, was announced on 14 November at the annual meeting of the Society for Neuroscience in San Diego, California. Its organizers also hope to learn more about the flying mammals’ ability to navigate in the dark through echolocation, their strong immune systems that can shrug off Ebola and their relatively long lifespans. “The genomes of all these other species, like birds and mice, are well-understood,” says Sonja Vernes, a neurogeneticist at the Max Planck Institute for Psycholinguistics in Nijmegen, Netherlands, and co-director of the project. “But we don’t know anything about bat genes yet.” Some bats show babbling behaviour, including barks, chatter, screeches, whistles and trills, says Mirjam Knörnschild, a behavioural ecologist at Free University Berlin, Germany. Young bats learn the songs and sounds from older male tutors. They use these sounds during courtship and mating, when they retrieve food and as they defend their territory against rivals. Scientists have studied the songs of only about 50 bat species so far, Knörnschild says, and they know much less about bat communication than about birds’. Four species of bats have so far been found to learn songs from each other, their fathers and other adult males, just as a child gradually learns how to speak from its parents. © 2016 Macmillan Publishers Limited,

Keyword: Hearing; Sexual Behavior
Link ID: 22888 - Posted: 11.19.2016