Chapter 6. Hearing, Balance, Taste, and Smell
By Greta Keenan

The ocean might seem like a quiet place, but listen carefully and you might just hear the sounds of the fish choir. Most of this underwater music comes from soloist fish, repeating the same calls over and over. But when the calls of different fish overlap, they form a chorus. Robert McCauley and colleagues at Curtin University in Perth, Australia, recorded vocal fish in the coastal waters off Port Hedland in Western Australia over an 18-month period, and identified seven distinct fish choruses, happening at dawn and at dusk.

Three of the choruses stand out. The low “foghorn” call is made by the Black Jewfish (Protonibea diacanthus), while the grunting call that researcher Miles Parsons compares to the “buzzer in the Operation board game” comes from a species of Terapontid. The third chorus is a quieter batfish that makes a “ba-ba-ba” call. “I’ve been listening to fish squawks, burble and pops for nearly 30 years now, and they still amaze me with their variety,” says McCauley, who led the research.

Sound plays an important role in various fish behaviours such as reproduction, feeding and territorial disputes. Nocturnal predatory fish use calls to stay together to hunt, while fish that are active during the day use sound to defend their territory. “You get the dusk and dawn choruses like you would with the birds in the forest,” says Steve Simpson, a marine biologist at the University of Exeter, UK.

© Copyright Reed Business Information Ltd.
By Robert F. Service

Predicting color is easy: Shine a light with a wavelength of 510 nanometers, and most people will say it looks green. Yet figuring out exactly how a particular molecule will smell is much tougher. Now, 22 teams of computer scientists have unveiled a set of algorithms able to predict the odor of different molecules based on their chemical structure. It remains to be seen how broadly useful such programs will be, but one hope is that such algorithms may help fragrance makers and food producers design new odorants with precisely tailored scents.

This latest smell prediction effort began with a recent study by olfactory researcher Leslie Vosshall and colleagues at The Rockefeller University in New York City, in which 49 volunteers rated the smell of 476 vials of pure odorants. For each one, the volunteers labeled the smell with one of 19 descriptors, including “fish,” “garlic,” “sweet,” or “burnt.” They also rated each odor’s pleasantness and intensity, creating a massive database of more than 1 million data points for all the odorant molecules in their study.

When computational biologist Pablo Meyer learned of the Rockefeller study 2 years ago, he saw an opportunity to test whether computer scientists could use it to predict how people would assess smells. Besides working at IBM’s Thomas J. Watson Research Center in Yorktown Heights, New York, Meyer heads the DREAM challenges, contests that ask teams of computer scientists to solve outstanding biomedical problems, such as predicting the outcome of prostate cancer treatment based on clinical variables or detecting breast cancer from mammogram data. “I knew from graduate school that olfaction was still one of the big unknowns,” Meyer says. Even though researchers have discovered some 400 separate odor receptors in humans, he adds, just how they work together to distinguish different smells remains largely a mystery.

© 2017 American Association for the Advancement of Science
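At its core, the challenge described above is a supervised regression problem: learn a mapping from numeric descriptors of a molecule's structure to human ratings of its smell, and score the model on molecules it has never seen. The sketch below mimics that setup with synthetic data; the feature counts, variable names, and simple least-squares model are illustrative assumptions, not the contestants' actual code.

```python
# Minimal sketch of the regression task posed by the smell-prediction
# challenge: predict a perceptual rating (e.g. intensity) from numeric
# chemical descriptors. All data here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)

n_molecules, n_features = 476, 20  # 476 odorants, as in the Rockefeller study
X = rng.normal(size=(n_molecules, n_features))        # stand-in chemical descriptors
w_true = rng.normal(size=n_features)                  # hidden structure-to-smell mapping
y = X @ w_true + rng.normal(scale=0.1, size=n_molecules)  # noisy "intensity" ratings

# Hold out the last 96 molecules; fit ordinary least squares on the rest
X_tr, y_tr, X_te, y_te = X[:380], y[:380], X[380:], y[380:]
w_hat, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

# R^2 on held-out molecules: how much rating variance the model explains
pred = X_te @ w_hat
r2 = 1.0 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
```

Real entries used far richer molecular descriptors and more flexible models, but the evaluation idea is the same: a model only counts as predicting odor if it scores well on molecules that were never in its training set.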
By Rachael Lallensack

Goats know who their real friends are. A study published today in Royal Society Open Science shows that the animals can recognize what other goats look like and sound like, but only those they are closest with. Up until the late 1960s, the overwhelming assumption was that only humans could mentally keep track of how other individuals look, smell, and sound—what scientists call cross-modal recognition. We now know that many different kinds of animals can do this, including horses, lions, crows, dogs, and certain primates.

Instead of a lab, these researchers settled into Buttercups Sanctuary for Goats in Boughton Monchelsea, U.K., to find out whether goats had the ability to recognize each other. To do so, they first recorded the calls of individual goats. Then, they set up three pens in the shape of a triangle in the sanctuary’s pasture. Equidistant from the two pens at the base of the triangle was a stereo speaker, camouflaged so as not to distract the goat participants. A “watcher” goat stood at the peak of the triangle, and the two remaining corners were filled with the watcher’s “stablemate” (the two share a stall at night) and a random herd member. Then, the team would play either the stablemate’s or the random goat’s call over the speaker and time how long it took for the watcher to match the call with the correct goat. They repeated the test with two random goats.

The researchers found that the watcher goat would look at the goat that matched the call quickly and for a longer time, but only in the test that included their stablemate. The results indicate that goats are not only capable of cross-modal recognition, but that they might also be able to use inferential reasoning, in other words, process of elimination. Think back to the test: Perhaps when the goat heard a call that it knew was not its pal, it inferred that it must have been the other one.

© 2017 American Association for the Advancement of Science.
By Lenny Bernstein

Forty million American adults have lost some hearing because of noise, and half of them suffered the damage outside the workplace, from everyday exposure to leaf blowers, sirens, rock concerts and other loud sounds, the Centers for Disease Control and Prevention reported Tuesday. A quarter of people ages 20 to 69 were suffering some hearing deficits, the CDC reported in its Morbidity and Mortality Weekly Report, even though the vast majority of the people in the study claimed to have good or excellent hearing.

The researchers found that 24 percent of adults had “audiometric notches” — a deterioration in the softest sound a person can hear — in one or both ears. The data came from 3,583 people who had undergone hearing tests and reported the results in the 2011-2012 National Health and Nutrition Examination Survey (NHANES). The review's more surprising finding — which the CDC had not previously studied — was that 53 percent of those people said they had no regular exposure to loud noise at work. That means the hearing loss was caused by other environmental factors, including listening to music through headphones with the volume turned up too high.

“Noise is damaging hearing before anyone notices or diagnoses it,” said Anne Schuchat, the CDC's acting director. “Because of that, the start of hearing loss is underrecognized.” The study revealed that 19 percent of people between the ages of 20 and 29 had some hearing loss, a finding that Schuchat called alarming.

© 1996-2017 The Washington Post
By Jane E. Brody

Dizziness is not a disease but rather a symptom that can result from a huge variety of underlying disorders or, in some cases, no disorder at all. Readily determining its cause and how best to treat it — or whether to let it resolve on its own — can depend on how well patients are able to describe exactly how they feel during a dizziness episode and the circumstances under which it usually occurs.

For example, I recently experienced a rather frightening attack of dizziness, accompanied by nausea, at a food and beverage tasting event where I ate much more than I usually do. Suddenly feeling that I might faint at any moment, I lay down on a concrete balcony for about 10 minutes until the disconcerting sensations passed, after which I felt completely normal. The next morning I checked the internet for my symptom — dizziness after eating — and discovered the condition had a name: postprandial hypotension, a sudden drop in blood pressure when too much blood is diverted to the digestive tract, leaving the brain relatively deprived. The condition most often affects older adults who may have an associated disorder like diabetes, hypertension or Parkinson’s disease that impedes the body’s ability to maintain a normal blood pressure. Fortunately, I am thus far spared any disorder linked to this symptom, but I’m now careful to avoid overeating lest it happen again.

“An essential problem is that almost every disease can cause dizziness,” say two medical experts who wrote a comprehensive new book, “Dizziness: Why You Feel Dizzy and What Will Help You Feel Better.” Although the vast majority of patients seen at dizziness clinics do not have a serious health problem, the authors, Dr. Gregory T. Whitman and Dr. Robert W. Baloh, emphasize that doctors must always “be on the alert for a serious disease presenting as ‘dizziness,’” like “stroke, transient ischemic attacks, multiple sclerosis and brain tumors.”

© 2017 The New York Times Company
By James Gallagher, health and science reporter, BBC News website

Deaf mice have been able to hear a tiny whisper after being given a "landmark" gene therapy by US scientists. They say restoring near-normal hearing in the animals paves the way for similar treatments for people "in the near future". Studies, published in Nature Biotechnology, corrected errors that led to the sound-sensing hairs in the ear becoming defective. The researchers used a synthetic virus to nip in and correct the defect. "It's unprecedented, this is the first time we've seen this level of hearing restoration," said researcher Dr Jeffrey Holt, from Boston Children's Hospital.

Hair defect

About half of all forms of deafness are due to an error in the instructions for life - DNA. In the experiments at Boston Children's Hospital and Harvard Medical School, the mice had a genetic disorder called Usher syndrome. It means there are inaccurate instructions for building microscopic hairs inside the ear. In healthy ears, sets of outer hair cells magnify sound waves and inner hair cells then convert sounds to electrical signals that go to the brain. The hairs normally form neat V-shaped rows. But in Usher syndrome they become disorganised - severely affecting hearing. The researchers developed a synthetic virus that was able to "infect" the ear with the correct instructions for building hair cells.

© 2017 BBC.
By Tiffany O'Callaghan

Imagine feeling angry or upset whenever you hear a certain everyday sound. It’s a condition called misophonia, and we know little about its causes. Now there’s evidence that misophonics show distinctive brain activity whenever they hear their trigger sounds, a finding that could help devise coping strategies and treatments.

Olana Tansley-Hancock knows misophonia’s symptoms only too well. From the age of about 7 or 8, she experienced feelings of rage and discomfort whenever she heard the sound of other people eating. By adolescence, she was eating many of her meals alone. As time wore on, many more sounds would trigger her misophonia. Rustling papers and tapping toes on train journeys constantly forced her to change seats and carriages. Clacking keyboards in the office meant she was always making excuses to leave the room. Finally, she went to a doctor for help. “I got laughed at,” she says.

“People who suffer from misophonia often have to make adjustments to their lives, just to function,” says Miren Edelstein at the University of California, San Diego. “Misophonia seems so odd that it’s difficult to appreciate how disabling it can be,” says her colleague, V. S. Ramachandran. The condition was first given the name misophonia in 2000, but until 2013, there had only been two case studies published. More recently, clear evidence has emerged that misophonia isn’t a symptom of other conditions, such as obsessive compulsive disorder, nor is it a matter of being oversensitive to other people’s bad manners. Some studies, including work by Ramachandran and Edelstein, have found that trigger sounds spur a full fight-or-flight response in people with misophonia.

© Copyright Reed Business Information Ltd.
By Erin Hare

One chilly day in February 1877, a British cotton baron named Joseph Sidebotham heard what he thought was a canary warbling near his hotel window. He was vacationing with his family in France, and soon realized the tune wasn’t coming from outside. “The singing was in our salon,” he wrote of the incident in Nature. “The songster was a mouse.” The family fed the creature bits of biscuit, and it quickly became comfortable enough to climb onto the warm hearth at night and regale them with songs. It would sing for hours. Clearly, Sidebotham concluded, this was no ordinary mouse.

More than a century later, however, scientists discovered he was wrong. It turns out that all mice chitter away to each other. Their language is usually just too high-pitched for human ears to detect. Today, mouse songs are no mere curiosity. Researchers are able to engineer mice to express genetic mutations associated with human speech disorders, and then measure the changes in the animals’ songs. They’re leveraging these beautifully complex vocalizations to uncover the mysteries of human speech.

Anecdotal accounts of singing mice date back to 1843. In the journal The Zoologist, the British entomologist and botanist Edward Newman wrote that the song of a rare “murine Orpheus” sounds as “if the mouth of a canary were carefully closed, and the bird, in revenge, were to turn ventriloquist, and sing in the very centre of his stomach.”

© 2017 by The Atlantic Monthly Group.
By Matt Blois

Some of the signals animals use to communicate are obvious. Birds sing. Lions roar. But there’s a whole category of signals in the natural world that humans rarely notice. Researchers have found that one species of cichlid uses urine to send chemical signals to rivals during aggressive displays.

The team separated large fish from small fish with a transparent divider. Half the dividers contained holes to allow water to flow back and forth. The scientists then injected the fish with a violet dye, turning their urine bright blue. When the animals saw each other, they raised their fins and rushed toward the divider. They also changed the way they peed. Fish separated by a solid barrier couldn’t detect their opponent’s urine. In an attempt to get their message across, they urinated even more. Without the chemical cues provided by the urine, smaller fish often tried to attack their larger opponents, the team reports this month in Behavioral Ecology and Sociobiology.

Humans could be missing other signals as well, the researchers contend. In addition to chemical signals, animals use seismic vibrations, electricity, and ultraviolet light to communicate. Visual signals might be more obvious, but this research stresses the importance of looking for less noticeable forms of communication, the authors say.

© 2017 American Association for the Advancement of Science.
By Marcy Cuttler, CBC News

Imagine waking up suddenly deaf in one ear. Musician and composer Richard Einhorn has lived through it. In June 2010, the 64-year-old New Yorker awoke to his ears ringing. "The first thing you think of, of course, is a brain tumour or a stroke," he said. At the time, he was in upstate Massachusetts, far from help. So he called a cab and went to the closest hospital.

Doctors eventually told him it was sudden sensorineural hearing loss (SSHL) — a little-known and not well understood condition that affects one person per 5,000 every year, according to the U.S. National Institutes of Health. What doctors do know: most people diagnosed with it are between the ages of 40 and 60; men and women can be equally afflicted; and it usually only impacts one ear. Einhorn, who couldn't hear well in his other ear due to a pre-existing condition, was left completely deaf. "It was incredibly difficult to communicate with anybody ... we were doing it with notes," he said. "I wouldn't recommend it on my worst enemy. It was really, really terrible."

Dr. James Bonaparte says if you wake up with ringing in your ears that continues throughout the day, or if you notice a drop in hearing on one side — and you don't have a cold at the time — get checked.

©2017 CBC/Radio-Canada
By Natalie Angier

Whether personally or professionally, Daniel Kronauer of Rockefeller University is the sort of biologist who leaves no stone unturned. Passionate about ants and other insects since kindergarten, Dr. Kronauer says he still loves flipping over rocks “just to see what’s crawling around underneath.” In an amply windowed fourth-floor laboratory on the east side of Manhattan, he and his colleagues are assaying the biology, brain, genetics and behavior of a single species of ant in ambitious, uncompromising detail.

The researchers have painstakingly hand-decorated thousands of clonal raider ants, Cerapachys biroi, with bright dots of pink, blue, red and lime-green paint, a color-coded system that allows computers to track the ants’ movements 24 hours a day — and makes them look like walking jelly beans. The scientists have manipulated the DNA of these ants, creating what Dr. Kronauer says are the world’s first transgenic ants. Among the surprising results is a line of Greta Garbo types that defy the standard ant preference for hypersociality and instead just want to be left alone. The researchers also have identified the molecular and neural cues that spur ants to act like nurses and feed the young, or to act like queens and breed more young, or to serve as brutal police officers, capturing upstart nestmates, spread-eagling them on the ground and reducing them to so many chitinous splinters.

Dr. Kronauer, who was born and raised in Germany and just turned 40, is tall, sandy-haired, blue-eyed and married to a dentist. He is amiable and direct, and his lab’s ambitions are both lofty and pragmatic. “Our ultimate goal is to have a fundamental understanding of how a complex biological system works,” Dr. Kronauer said. “I use ants as a model to do this.” As he sees it, ants in a colony are like cells in a multicellular organism, or like neurons in the brain: their fates joined, their labor synchronized, the whole an emergent force to be reckoned with.
© 2017 The New York Times Company
By Jonah Engel Bromwich

I have a big, dumb, deep, goofy voice. But I’m reminded of it only when I hear a recording of myself while playing back an interview — or when friends do impressions of me, lowering their voices several octaves. My high school classmate Walter Suskind has one of the deepest voices I’ve ever heard in person. His experience has been similar to mine. “My voice sounds pretty normal in my head,” he said. “It’s when I catch the echo on the back of the phone or when I hear myself when it’s been taped that I realize how deep it is. Also, when people come up to me and, to imitate my voice, go as deep as they possibly can and growl in my face.” He added, “I’ve been told that the one advantage to voices like ours is we make really good hostage negotiators.” (Here’s Walter on an episode of “Radiolab.” His segment starts at about 12:20, and the host immediately comments on his voice.)

Many people have heard their recorded voices and reeled in disgust (“Do I really sound like that?”). Others are surprised by how high their voices sound. The indie musician Mitski Miyawaki, who has earned praise for exceptional control over her singing voice, said that she, too, is often unpleasantly surprised by her speaking voice, which she perceives as “lower, more commanding,” than it sounds to others. “And then I listen to a radio interview and I’m like ‘uuuch,’ ” she said, making a disgusted noise. “I listen to my voice and I go, ‘Oh it sounds exactly like a young girl.’ ”

There’s an easy explanation for experiences like Ms. Miyawaki’s, said William Hartmann, a physics professor at Michigan State University who specializes in acoustics and psychoacoustics. There are two pathways through which we perceive our own voice when we speak, he explained. One is the route through which we perceive most other sounds: waves travel from the air through the chain of our hearing systems, traversing the outer, middle and inner ear.

© 2017 The New York Times Company
By Susan Milius

NEW ORLEANS — The self-cleaning marvel known as earwax may turn the dust particles it traps into agents of their own disposal. Earwax, secreted in the ear canal, protects ears from building up dunes of debris from particles wafting through the air. The wax creates a sticky particle-trapper inside the canal, explained Zac Zachow January 6 at the annual meeting of the Society for Integrative and Comparative Biology. The goo coats hairs and haphazardly pastes them into a loose net. Then, by a process not yet fully understood, bits of particle-dirtied wax leave the ear, taking their burden of debris with them.

Earwax may accomplish such a feat because trapping more and more dust turns it from gooey to crumbly, Zachow said. Working with Alexis Noel in David Hu’s lab at Georgia Tech in Atlanta, he filmed a rough demonstration of this idea: mixing flour into a gob of pig’s earwax eventually turned the lump from stickier to drier, with crumbs fraying away at the edges. Jaw motions might help shake loose these crumbs, Zachow said. A video inside the ear of someone eating a doughnut showed earwax bucking and shifting. This dust-to-crumb scenario needs more testing, but Noel points out that earwax might someday inspire new ways of reducing dust buildup in machinery such as home air-filtration systems.

Z. Zachow, A. Noel and D.L. Hu. Earwax has properties like paint, enabling self-cleaning. Annual meeting of the Society for Integrative and Comparative Biology, New Orleans, January 6, 2017.

© Society for Science & the Public 2000 - 2017
By Alison Abbott

Bats have brain cells that keep track of their angle and distance to a target, researchers have discovered. The neurons, called ‘vector cells’, are a key piece of the mammalian brain’s complex navigation system — and something that neuroscientists have been seeking for years.

Our brain’s navigation system has many types of cells, but a lot of them seem designed to keep track of where we are. Researchers know of ‘place’ cells, for example, which fire when animals are in a particular location, and ‘head direction’ cells that fire in response to changes in the direction the head is facing. Bats also have a kind of neuronal compass that enables them to orient themselves as they fly. The vector cells, by contrast, keep spatial track of where we are going. They are in the brain’s hippocampus, which is also where ‘place’ and ‘head direction’ cells were discovered. That’s a surprise, considering how well this area has been studied by researchers, says Nachum Ulanovsky, who led the team at the Weizmann Institute of Science in Rehovot, Israel, that discovered the new cells. His team published their findings in Science on 12 January.

Finding the cells “was one of those very rare discovery moments in a researcher’s life,” says Ulanovsky. “My heart raced, I started jumping around.” The trick to finding them was a simple matter of experimental design, he says.

© 2017 Macmillan Publishers Limited
By Joshua Rapp Learn

The Vietnamese pygmy dormouse is as blind as a bat—and it navigates just like one, too. Scientists have found that the small, nimble brown rodent (Typhlomys cinereus chapensis), native to Vietnam and parts of China, uses sound waves to get a grip on its environment. Measurements of the animals in the Moscow Zoo revealed that the species can't see objects because of a folded retina and a low number of neurons capable of collecting visual information, among other things.

When researchers recorded the animals, they discovered they make ultrasonic noises similar to those used by some bat species, and videos showed they made the sounds at a much greater pulse rate when moving than while resting. These sound waves bounce off objects, allowing the rodent to sense its surroundings—an ability known as echolocation, or biological sonar. The find makes the dormouse the only tree-climbing mammal known to use ultrasonic echolocation, the team reports in Integrative Zoology.

The authors suggest that an extinct ancestor of these dormice was likely a leaf bed–dwelling animal that lost the ability to see in the darkness in which it is active. As the animals began to move up into the trees over time, they likely developed ultrasonic echolocation to help them deal with a new acrobatic lifestyle. The discovery lends support to the idea that bats may have evolved echolocation before the ability to fly.

© 2017 American Association for the Advancement of Science
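The principle behind this kind of biological sonar is simple to state: the animal emits an ultrasonic pulse, and the delay of the returning echo encodes the distance to whatever reflected it. A back-of-envelope sketch (the function name and example numbers are illustrative, not taken from the study):

```python
# Echolocation's core computation: the pulse travels out and back, so
# distance = speed of sound x round-trip echo time / 2.
SPEED_OF_SOUND_AIR = 343.0  # m/s in air at roughly 20 degrees C

def echo_range(round_trip_s: float, speed: float = SPEED_OF_SOUND_AIR) -> float:
    """Distance (m) to a reflecting object, given the echo's round-trip time (s)."""
    return speed * round_trip_s / 2.0

# An echo returning ~5.8 ms after the pulse implies an object about 1 m away
distance_m = echo_range(0.0058)
```

The dormouse's much higher pulse rate while moving fits this picture: more frequent pulses mean more frequent distance updates, which matters most when the animal's surroundings are changing quickly.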
By Michael Price

We’ve all heard the stories about humans losing their jobs to robots. But what about man’s best friend? A new study suggests that drug-sniffing dogs may soon have a competitor in the workplace: an insect-piloted robotic vehicle that could help scientists build better odor-tracking robots to find disaster victims, detect illicit drugs or explosives, and sense leaks of hazardous materials.

The robotic car’s driver is a silkworm moth (Bombyx mori) tethered in a tiny cockpit so that its legs can move freely over an air-supported ball, a bit like an upside-down computer mouse trackball. Using optical sensors, the car follows the ball’s movement and moves in the same direction. With its odor-sensitive antennae, the moth senses a target smell—in this case, female silkworm sex pheromones—and walks toward it along the trackball, driving the robotic car. Across seven trials with seven different drivers, the insects piloted the vehicle consistently toward the pheromones, nearly as well as 10 other silkworm moths that could walk freely on the ground toward the smells, the researchers reported last month in the Journal of Visualized Experiments. On average, the driving moths reached their target about 2 seconds behind the walking moths, although their paths were more circuitous.

The researchers say their findings could help roboticists better integrate biologically inspired odor detection systems into their robots. Engineers might even be able to develop more powerful and maneuverable versions of the study’s robot car that could be driven by silkworms genetically modified to detect a wide variety of smells to help with sniffing tasks traditionally done by trained animals. Time to start polishing up those résumés, pooches.

© 2016 American Association for the Advancement of Science
By Lisa Vincenz-Donnelly

A test that records the way the brain processes sound might provide a simple and reliable measure of concussion, a small study suggests. If the method works, it could help scientists work out how best to treat the poorly understood brain injury.

In a paper published on 22 December in Scientific Reports, neuroscientist Nina Kraus of Northwestern University in Evanston, Illinois, and other researchers say that they have found that a particular signal in neural activity, recorded with electrodes placed on the head as children listen to 'da' sounds from a speech synthesizer, can objectively demarcate concussed children from a healthy control group. The research was done on just 40 people — a tiny group — and will have to be repeated in larger samples. But other researchers are still excited by the report, because concussion is hard to diagnose, particularly in children. The study “may for the first time offer a simple and objective biomarker to measure the severity of brain injuries”, says Thomas Wisniewski, a neurologist at New York University’s Langone Medical Center. There is intense interest in finding a clear-cut biological signature for concussion, he says. “We have been crying out for a reliable method.”

Millions of people enter hospitals every year with blows to the head, and some of them have concussion, a minor brain injury that can betoken more serious damage. To diagnose it, physicians rely on subjective complaints of dizziness, coordination tests and sometimes more involved procedures, such as magnetic resonance imaging (MRI) or computed tomography (CT) scans. But there’s no single objective way to detect concussion and measure its severity — and no simple test that can be administered regularly to determine when someone has recovered, a particularly important issue for athletes keen to be allowed back on the field.

© 2016 Macmillan Publishers Limited
By Victoria Gill, science reporter, BBC News

Direct recordings have revealed what is happening in our brains as we make sense of speech in a noisy room. Focusing on one conversation in a loud, distracting environment is called "the cocktail party effect". It is a common festive phenomenon and of interest to researchers seeking to improve speech recognition technology. Neuroscientists recorded from people's brains during a test that recreated the moment when unintelligible speech suddenly makes sense. A team measured people's brain activity as the words of a previously unintelligible sentence suddenly became clear when a subject was told the meaning of the "garbled speech". The findings are published in the journal Nature Communications.

Lead researcher Christopher Holdgraf from the University of California, Berkeley, and his colleagues were able to work with epilepsy patients, who had had a portion of their skull removed and electrodes placed on the brain surface to track their seizures. First, the researchers played a very distorted, garbled sentence to each subject, which almost no-one was able to understand. They then played a normal, easy-to-understand version of the same sentence and immediately repeated the garbled version. "After hearing the intact sentence", the researchers explained in their paper, all the subjects understood the subsequent "noisy version".

The brain recordings showed this moment of recognition as brain activity patterns in the areas of the brain that are known to be associated with processing sound and understanding speech. When the subjects heard the very garbled sentence, the scientists reported that they saw little activity in those parts of the brain. Hearing the clearly understandable sentence then triggered patterns of activity in those brain areas.

© 2016 BBC.
By Sara Reardon

The din of what sounds like a high-pitched cocktail party fills the lab of neuroscientist Xiaoqin Wang at Johns Hopkins University in Baltimore. But the primates making the racket are dozens of marmosets, squirrel-sized monkeys with patterned coats and white puffs of fur on either side of their heads. The animals chatter to each other, stopping to tilt their heads and consider their visitors with inquisitive expressions.

Common marmosets (Callithrix jacchus) are social and communicative in captivity, unlike the macaque that is more commonly used as a model primate. And in January, Wang and his colleagues revealed that marmosets are also the only non-human animal that can hear different pitches, such as those found in music and tonal languages like Chinese, in the same way people can. This makes the marmoset the closest proxy researchers have to the human brain when it comes to hearing and speech, says Qian-Jie Fu, an auditory researcher at the University of California, Los Angeles, who was not involved with the paper. Until recently, researchers have relied on songbirds for such work, but the birds’ brains are so different from human ones that the insights they provide are limited. Wang hopes that marmosets will improve researchers’ understanding of the evolution of communication and help them refine devices such as cochlear implants for deaf people.

© 2016 Scientific American
By Catherine Matacic

Have you ever wondered why a strange piece of music can feel familiar—how it is, for example, that you can predict the next beat even though you’ve never heard the song before? Music everywhere seems to share some “universals,” from the scales it uses to the rhythms it employs. Now, scientists have shown for the first time that people without any musical training also create songs using predictable musical beats, suggesting that humans are hardwired to respond to—and produce—certain features of music.

“This is an excellent and elegant paper,” says Patrick Savage, an ethnomusicologist at the Tokyo University of the Arts who was not involved in the study. “[It] shows that even musical evolution obeys some general rules [similar] to the kind that govern biological evolution.”

Last year, Savage and colleagues traced that evolution by addressing a fundamental question: What aspects of music are consistent across cultures? They analyzed hundreds of musical recordings from around the world and identified 18 features that were widely shared across nine regions, including six related to rhythm. These “rhythmic universals” included a steady beat, two- or three-beat rhythms (like those in marches and waltzes), a preference for two-beat rhythms, regular weak and strong beats, a limited number of beat patterns per song, and the use of those patterns to create motifs, or riffs. “That was a really remarkable job they did,” says Andrea Ravignani, a cognitive scientist at the Vrije Universiteit Brussel in Belgium. “[It convinced me that] the time was ripe to investigate this issue of music evolution and music universals in a more empirical way.”

© 2016 American Association for the Advancement of Science.