Chapter 6. Hearing, Balance, Taste, and Smell
By Chris Baraniuk. Bat-detecting drones could help us find out what the animals get up to when flying. Ultrasonic detectors on drones in the air and on the water are listening in on bat calls, in the hope of discovering more about the mammals’ lives beyond the reach of ground-based monitoring devices. Drone-builder Tom Moore and bat enthusiast Tom August have developed three different drones to listen for bat calls while patrolling a pre-planned route. Since launching the scheme, known as Project Erebus, in 2014, they have experimented with two flying drones and one motorised boat, all equipped with ultrasonic detectors. The pair’s latest tests have demonstrated the detection capabilities of the two airborne drone models: a quadcopter and a fixed-wing drone. Last month, the quadcopter successfully followed a predetermined course and picked up simulated bat calls produced by an ultrasonic transmitter.

The bat signal

Moore says one of the major hurdles is detecting the call of bats over the noise of the drones’ propellers, which emit loud ultrasonic frequencies. They overcame this with the quadcopter by dangling the detector underneath the body and rotors of the drone. This is not such a problem for the water-based drone. Last year, Moore and August tested a remote-controlled boat in Oxfordshire, UK, and picked up bat calls thought to belong to common pipistrelle and Daubenton’s bats. The different species often emit different ultrasonic frequencies. © Copyright Reed Business Information Ltd.
By C. CLAIBORNE RAY. The yellow stuff in the outer part of the ear canal, scientifically named cerumen, is only partly a waxy substance, according to the National Institute on Deafness and Other Communication Disorders. The rest of the so-called wax is an accretion of some dust and lots of dead skin cells, which normally collect in the passage as they are shed. The waxy part, which holds the compacted waste together and smooths the way for it to leave the ear, comes from the ceruminous glands, which secrete lipids and other substances. They are specialized sweat glands just under the surface of the skin in the outer part of the canal. Besides lubricating the skin of the canal while keeping it dry, the lipids also help maintain a protective acidic coating, which helps kill bacteria and fungi that can cause infection and irritation. The normal working of muscles in the head, especially those that move the jaw, help guide the wax outward along the ear canal. The ceruminous glands commonly shrink in old age, producing less of the lipids and making it harder for waste to leave the ear. Excess wax buildup can usually be safely softened with warm olive or almond oil or irrigated with warm water, though specialized softening drops are also sold. Take care not to compress the buildup further with cotton swabs or other tools. If it cannot be safely removed, seek medical help. © 2017 The New York Times Company
Link ID: 23472 - Posted: 04.11.2017
Jon Hamilton The U.S. military is trying to figure out whether certain heavy weapons are putting U.S. troops in danger. The concern centers on the possibility of brain injuries from shoulder-fired weapons like the Carl Gustaf, a recoilless rifle that resembles a bazooka and is powerful enough to blow up a tank. A single round for the Carl Gustaf can weigh nearly 10 pounds. The shell leaves the gun's barrel at more than 500 miles per hour. And as the weapon fires, it directs an explosive burst of hot gases out of the back of the barrel. For safety reasons, troops are trained to take positions to the side of weapons like this. Even so, they get hit by powerful blast waves coming from both the muzzle and breech. "It feels like you get punched in your whole body," is the way one Army gunner described the experience in a military video made in Afghanistan. "The blast bounces off the ground and it overwhelms you." During the wars in Iraq and Afghanistan, the military recognized that the blast from a roadside bomb could injure a service member's brain without leaving a scratch. Hundreds of thousands of U.S. troops sustained this sort of mild traumatic brain injury, which has been linked to long-term problems ranging from memory lapses to post-traumatic stress disorder. Also during those wars, the military began to consider the effects on the brain of repeated blasts from weapons like the Carl Gustaf. And some members of Congress became concerned. © 2017 npr
By David Owen When my mother’s mother was in her early twenties, a century ago, a suitor took her duck hunting in a rowboat on a lake near Austin, Texas, where she grew up. He steadied his shotgun by resting the barrel on her right shoulder—she was sitting in the bow—and when he fired he not only missed the duck but also permanently damaged her hearing, especially on that side. The loss became more severe as she got older, and by the time I was in college she was having serious trouble with telephones. (“I’m glad it’s not raining! ” I’d shout, for the third or fourth time, while my roommates snickered.) Her deafness probably contributed to one of her many eccentricities: ending phone conversations by suddenly hanging up. I’m a grandparent myself now, and lots of people I know have hearing problems. A guy I played golf with last year came close to making a hole in one, then complained that no one in our foursome had complimented him on his shot—even though, a moment before, all three of us had complimented him on his shot. (We were walking behind him.) The man who cuts my wife’s hair began wearing two hearing aids recently, to compensate for damage that he attributes to years of exposure to professional-quality blow-dryers. My sister has hearing aids, too. She traces her problem to repeatedly listening at maximum volume to Anne’s Angry and Bitter Breakup Song Playlist, which she created while going through a divorce. My ears ring all the time—a condition called tinnitus. I blame China, because the ringing started, a decade ago, while I was recovering from a monthlong cold that I’d contracted while breathing the filthy air in Beijing, and whose symptoms were made worse by changes in cabin pressure during the long flight home. Tinnitus is almost always accompanied by hearing loss. My internist ordered an MRI, to make sure I didn’t have a brain tumor, and held up a vibrating tuning fork and asked me to tell him when I could no longer hear it. 
After a while, he leaned forward to make sure the tuning fork was still humming, since he himself could no longer hear it. (We’re about the same age.) © 2017 Condé Nast.
Link ID: 23434 - Posted: 03.31.2017
By Catherine Offord | Recognizing when you’re singing the right notes is a crucial skill for learning a melody, whether you’re a human practicing an aria or a bird rehearsing a courtship song. But just how the brain executes this sort of trial-and-error learning, which involves comparing performances to an internal template, is still something of a mystery. “It’s been an important question in the field for a long time,” says Vikram Gadagkar, a postdoctoral neurobiologist in Jesse Goldberg’s lab at Cornell University. “But nobody’s been able to find out how this actually happens.” Gadagkar suspected, as others had hypothesized, that internally driven learning might rely on neural mechanisms similar to traditional reward learning, in which an animal learns to anticipate a treat based on a particular stimulus. When an unexpected outcome occurs (such as receiving no treat when one was expected), the brain takes note via changes in dopamine signaling. So Gadagkar and his colleagues investigated dopamine signaling in a go-to system for studying vocal learning, male zebra finches. First, the researchers used electrodes to record the activity of dopaminergic neurons in the ventral tegmental area (VTA), a brain region important in reward learning. Then, to mimic singing errors, they used custom-written software to play over, and thus distort, certain syllables of that finch’s courtship song while the bird practiced. “Let’s say the bird’s song is ABCD,” says Gadagkar. “We distort one syllable, so it sounds like something between ABCD and ABCB.” © 1986-2017 The Scientist
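The reward-learning mechanism the passage describes, in which an outcome is compared against an internal expectation and the mismatch drives learning, is commonly modeled as a prediction-error update. Below is a minimal sketch of that idea; the scalar "song quality" values and the learning rate are illustrative assumptions, not figures from the study:

```python
def prediction_error_update(expected, outcome, learning_rate=0.1):
    """Return the dopamine-like error signal and the updated expectation."""
    error = outcome - expected                    # negative when worse than expected
    expected = expected + learning_rate * error   # expectation drifts toward outcomes
    return error, expected

# A finch expecting a clean rendition (expected quality 1.0) hears a
# distorted syllable (outcome 0.0): the error is negative, analogous to
# a dip in dopaminergic firing when performance misses the template.
expected = 1.0
error, expected = prediction_error_update(expected, outcome=0.0)
```

In reinforcement-learning terms this is the delta rule; the twist in internally driven vocal learning is that the "outcome" is the bird's own evaluation of its performance rather than an external treat.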
By Tim Falconer | House of Anansi, May 2016

I’ve spent my career bothering people. As a journalist and author, I hang around and watch what folks do, and I ask too many questions, some better than others. Later, I have follow-up queries and clarification requests, and I bug them for those stats they promised to provide me. But something different happened when I started researching congenital amusia, the scientific term for tone deafness present at birth, for my new book, Bad Singer. The scientists were as interested in me as I was in them. My idea was to learn to sing and then write about the experience as a way to explore the science of singing. After my second voice lesson, I went to the Université de Montréal’s International Laboratory for Brain, Music, and Sound Research (BRAMS). I fully expected Isabelle Peretz, a pioneer in amusia research, to say I was just untrained. Instead, she diagnosed me as amusic. “So this means what?” I asked. “We would love to test you more.” The BRAMS researchers weren’t alone. While still at Harvard’s Music and Neuroimaging Lab, Psyche Loui—who now leads Wesleyan University’s Music, Imaging, and Neural Dynamics (MIND) Lab—identified a neural pathway called the arcuate fasciculus as the culprit of congenital amusia. So I emailed her to set up an interview. She said sure—and asked if I’d be willing to undergo an fMRI scan. And I’d barely started telling my story to Frank Russo, who runs Ryerson University’s Science of Music, Auditory Research, and Technology (SMART) Lab in Toronto, before he blurted out, “Sorry, I’m restraining myself from wanting to sign you up for all kinds of research and figuring what we can do with you.” © 1986-2017 The Scientist
Link ID: 23425 - Posted: 03.30.2017
By Kate Yandell. In a 1971 paper published in Science, biologist Roger Payne, then at Rockefeller University, and Scott McVay, then an administrator at Princeton University, described the “surprisingly beautiful sounds” made by humpback whales (Megaptera novaeangliae; Science, 173:585-97). Analyzing underwater recordings made by a Navy engineer, the duo found that these whale sounds were intricately repetitive. “Because one of the characteristics of bird songs is that they are fixed patterns of sounds that are repeated, we call the fixed patterns of humpback sounds ‘songs,’” they wrote.

OCEAN SONGS: Humpback whales make diverse, broadband sounds that travel miles through the ocean. Their function, however, remains somewhat murky. (PLOS ONE, dx.doi.org/10.1371/journal.pone.0079422, 2013)

It’s now clear that, in addition to simpler calls, several baleen whale species—including blue, fin, and bowhead—make series of sounds known as song. Humpback song is the most complex and by far the best studied. Units of humpback songs form phrases, series of similar phrases form themes, and multiple themes form songs. All the males in a given population sing the same song, which evolves over time. When whale groups come into contact, songs can spread. But why do whales sing? “The short answer is, we don’t know,” says Alison Stimpert, a bioacoustician at Moss Landing Marine Laboratories in California. Humpback songs are only performed by males and are often heard on breeding grounds, so the dominant hypothesis is that these songs are a form of courtship. The quality of a male’s performance could be a sign of his fitness, for example. But female whales do not tend to approach singing males. Alternatively, whale researchers have proposed that the male whales sing to demarcate territory or to form alliances with other males during mating season. © 1986-2017 The Scientist
By Erin Blakemore. It’s a scientific canard so old it’s practically a cliché: When people lose their sight, other senses heighten to compensate. But are there really differences between the senses of blind and sighted people? It’s been hard to prove, until now. As George Dvorsky reports for Gizmodo, new research shows that blind people’s brains are structurally different from those of sighted people. In a new study published in the journal PLOS One, researchers reveal that the brains of people who are born blind or went blind in early childhood are wired differently than those of people born with their sight. The study is the first to look at both structural and functional differences between blind and sighted people. Researchers used MRI scanners to peer at the brains of 12 people born with “early profound blindness”—that is, people who were either born without sight or lost it by age three, reports Dvorsky. Then they compared the MRI images to images of the brains of 16 people who were born with sight and who had normal vision (either alone or with corrective help from glasses). The comparisons showed marked differences between the brains of those born with sight and those born without. Essentially, the brains of blind people appeared to be wired differently when it came to things like structure and connectivity. The researchers noticed enhanced connections between some areas of the brain, too—particularly the occipital and frontal cortex areas, which control working memory. There was decreased connectivity between some areas of the brain, as well.
By Bob Grant In the past decade, some bat species have been added to the ranks of “singing” animals, with complex, mostly ultrasonic vocalizations that, when slowed down, rival the tunes of some songbirds. Like birds, bats broadcast chirps, warbles, and trills to attract mates and defend territories. There are about 1,300 known bat species, and the social vocalizations of about 50 have been studied. Of those, researchers have shown that about 20 species seem to be singing, with songs that are differentiated from simpler calls by both their structural complexity and their function. Bats don’t sound like birds to the naked ear; most singing species broadcast predominately in the ultrasonic range, undetectable by humans. And in contrast to the often lengthy songs of avian species, the flying mammals sing in repeated bursts of only a few hundred milliseconds. Researchers must first slow down the bat songs—so that their frequencies drop into the audible range—to hear the similarities. Kirsten Bohn, a behavioral biologist at Johns Hopkins University, first heard Brazilian free-tailed bats (Tadarida brasiliensis) sing more than 10 years ago, when she was a postdoc in the lab of Mike Smotherman at Texas A&M University. “I started hearing a couple of these songs slowed down,” she recalls. “And it really was like, ‘Holy moly—that’s a song! That sounds like a bird.’” The neural circuitry used to learn and produce song may also share similarities between bats and birds. Bohn and Smotherman say they’ve gathered some tantalizing evidence that bats use some of the same brain regions—namely, the basal ganglia and prefrontal cortex—that birds rely upon to produce, process, and perhaps even learn songs. “We have an idea of how the neural circuits control vocalizing in the bats and how they might be adapted to produce song,” Smotherman says. © 1986-2017 The Scientist
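The slowing-down step the researchers describe, often called time expansion, works because playing a recording back more slowly divides every frequency in it by the same factor. A quick sketch of that relationship, with illustrative numbers:

```python
def time_expanded_frequency(freq_hz, slowdown_factor):
    """Frequency heard when a recording plays slowdown_factor times slower."""
    return freq_hz / slowdown_factor

# An ultrasonic 50 kHz bat syllable, played back 10x slower, comes out
# at 5 kHz, well inside the human hearing range (roughly 20 Hz to 20 kHz).
# Durations stretch by the same factor: a 200 ms burst would last 2 s.
audible_hz = time_expanded_frequency(50_000, 10)
```

This is also why the sub-second bat songs sound bird-like only after expansion: both the pitch and the tempo scale together into a range human listeners can follow.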
By Jenny Rood To human ears, the trilling of birdsong ranks among nature’s most musical sounds. That similarity to human music is now inspiring researchers to apply music theory to avian vocalizations. For example, zebra finch neurobiologist Ofer Tchernichovski of the City University of New York, together with musician and musicologist Hollis Taylor, recently analyzed the song of the Australian pied butcherbird (Cracticus nigrogularis) and found an inverse relationship between motif complexity and repetition that paralleled patterns found in human music (R Soc Open Sci, 3:160357, 2016). Tchernichovski’s work also suggests that birds can perceive rhythm and change their calls in response. Last year, he and colleague Eitan Globerson, a symphony conductor at the Jerusalem Academy of Music and Dance as well as a neurobiologist at Bar Ilan University in Israel, demonstrated that zebra finches, a vocal learning species, adapt their innate calls—as opposed to learned song—to avoid overlapping with unusual rhythmic patterns produced by a vocal robot (Curr Biol, 26:309-18, 2016). The researchers also found that both males and females use the brain’s song system to do this, although females do not learn song. But these complexities of birdsong might be more comparable to human speech than to human music, says Henkjan Honing, a music cognition scientist at the University of Amsterdam. Honing’s research suggests that some birds don’t discern rhythm well. Zebra finches, for example, seem to pay attention to pauses between notes on short time scales but have trouble recognizing overarching rhythmic patterns—one of the key skills thought necessary for musical perception (Front Psychol, doi:10.3389/fpsyg.2016.00730, 2016). © 1986-2017 The Scientist
By Catherine Offord A few years ago, UK composer and technology reporter LJ Rich participated in a music technology competition as part of a project with the BBC. The 24-hour event brought together various musicians, and entailed staying awake into the wee hours trying to solve technical problems related to music. Late into the night, during a break from work, Rich thought of a way to keep people’s spirits up. “At about four in the morning, I remember playing different tastes to people on a piano in the room we were working in,” she says. For instance, “to great amusement, during breakfast I played people the taste of eggs.” It didn’t take long before Rich learned, for the first time, that food’s association with music was not as universally appreciated as she had assumed. “You realize everybody else doesn’t perceive the world that way,” she says. “For me, it was quite a surprise to find that people didn’t realize that certain foods had different keys.” Rich had long known she had absolute pitch—the ability to identify a musical note, such as B flat, without any reference. But that night, she learned she also has what’s known as synesthesia, a little-understood mode of perception that links senses such as taste and hearing in unusual ways, and is thought to be present in around 4 percent of the general population. It’s a difficult phenomenon to get to the bottom of. Like Rich, many synesthetes are unaware their perception is atypical; what’s more, detecting synesthesia usually relies on self-reported experiences—an obstacle for standardized testing. But a growing body of evidence suggests that Rich is far from being alone in possessing both absolute pitch and synesthesia. © 1986-2017 The Scientist
Link ID: 23353 - Posted: 03.14.2017
By Diana Kwon. Deep in the Amazon rainforests of Bolivia live the Tsimane’, a tribe that has remained relatively untouched by Western civilization. Tsimane’ people possess a unique characteristic: they do not cringe at musical tones that sound discordant to Western ears. The vast majority of Westerners prefer consonant chords to dissonant ones, based on the intervals between the musical notes that compose the chords. One particularly notable example of this is the Devil’s Interval, or flatted fifth, which received its name in the Middle Ages because the sound it produced was deemed so unpleasant that people associated it with sinister forces. The flatted fifth later became a staple of numerous jazz, blues, and rock-and-roll songs. Over the years, scientists have gathered compelling evidence to suggest that an aversion to dissonance is innate. In 1996, in a letter to Nature, Harvard psychologists Marcel Zentner and Jerome Kagan reported on a study suggesting that four-month-old infants preferred consonant intervals to dissonant ones. Researchers subsequently replicated these results: one lab discovered the same effect in two-month-olds and another in two-day-old infants of both deaf and hearing parents. Some scientists even found these preferences in certain animals, such as young chimpanzees and baby chickens. “Of course the ambiguity is [that] even young infants have quite a bit of exposure to typical Western music,” says Josh McDermott, a researcher who studies auditory cognition at MIT. “So the counter-argument is that they get early exposure, and that shapes their preference.” © 1986-2017 The Scientist
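The consonance hierarchy at issue here is usually described in terms of frequency ratios: consonant intervals approximate simple ratios (the perfect fifth is close to 3:2), while the flatted fifth, or tritone, corresponds to the irrational ratio √2. A small sketch in equal temperament, where each semitone multiplies frequency by 2^(1/12):

```python
SEMITONE = 2 ** (1 / 12)  # equal-temperament frequency step

def interval_ratio(semitones):
    """Frequency ratio spanned by an interval of the given width in semitones."""
    return SEMITONE ** semitones

tritone = interval_ratio(6)   # the "Devil's Interval": sqrt(2), about 1.4142
fifth = interval_ratio(7)     # about 1.4983, very close to the consonant 3:2
```

The tritone's ratio cannot be expressed as a ratio of small whole numbers, which is one classical account of why Western listeners hear it as rough; the Tsimane’ findings test whether that reaction is innate or learned.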
By Aylin Woodward Noise is everywhere, but that’s OK. Your brain can still keep track of a conversation in the face of revving motorcycles, noisy cocktail parties or screaming children – in part by predicting what’s coming next and filling in any blanks. New data suggests that these insertions are processed as if the brain had really heard the parts of the word that are missing. “The brain has evolved a way to overcome interruptions that happen in the real world,” says Matthew Leonard at the University of California, San Francisco. We’ve known since the 1970s that the brain can “fill in” inaudible sections of speech, but understanding how it achieves this phenomenon – termed perceptual restoration – has been difficult. To investigate, Leonard’s team played volunteers words that were partially obscured or inaudible to see how their brains responded. The experiment involved people who already had hundreds of electrodes implanted into their brain to monitor their epilepsy. These electrodes detect seizures, but can also be used to record other types of brain activity. The team played the volunteers recordings of a word that could either be “faster” or “factor”, with the middle sound replaced by noise. Data from the electrodes showed that their brains responded as if they had actually heard the missing “s” or “c” sound. © Copyright Reed Business Information Ltd.
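The restoration effect measured in this experiment can be thought of as probabilistic inference: the listener scores candidate words by how well they fit the audible parts, with context breaking ties. A toy sketch of that idea follows; the phoneme spellings and the prior probabilities are invented for illustration and are not from the study:

```python
def restore(heard, candidates):
    """Pick the candidate whose phonemes best fit the audible input.

    heard: tuple of phonemes, with "?" marking segments masked by noise.
    candidates: list of (phonemes, context_prior) pairs, where the prior
    stands in for how strongly the surrounding sentence predicts each word.
    """
    def score(entry):
        phonemes, prior = entry
        audible_matches = sum(h == p for h, p in zip(heard, phonemes) if h != "?")
        return (audible_matches, prior)
    return max(candidates, key=score)[0]

# "faster" and "factor" differ (roughly) in just the middle consonant,
# which is exactly the segment the experiment replaced with noise.
candidates = [
    (("f", "ae", "s", "t", "er"), 0.7),  # "faster", favoured by context here
    (("f", "ae", "k", "t", "er"), 0.3),  # "factor"
]
heard = ("f", "ae", "?", "t", "er")      # middle consonant masked by noise
restored = restore(heard, candidates)
```

Because both candidates fit the audible phonemes equally well, the context prior decides, mirroring the finding that listeners report confidently hearing one specific word rather than a gap.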
By Catherine Offord Getting to Santa María, Bolivia, is no easy feat. Home to a farming and foraging society, the village is located deep in the Amazon rainforest and is accessible only by river. The area lacks electricity and running water, and the Tsimane’ people who live there make contact with the outside world only occasionally, during trips to neighboring towns. But for auditory researcher Josh McDermott, this remoteness was central to the community’s scientific appeal. In 2015, the MIT scientist loaded a laptop, headphones, and a gasoline generator into a canoe and pushed off from the Amazonian town of San Borja, some 50 kilometers downriver from Santa María. Together with collaborator Ricardo Godoy, an anthropologist at Brandeis University, McDermott planned to carry out experiments to test whether the Tsimane’ could discern certain combinations of musical tones, and whether they preferred some over others. The pair wanted to address a long-standing question in music research: Are the features of musical perception seen across cultures innate, or do similarities in preferences observed around the world mirror the spread of Western culture and its (much-better-studied) music? “Particular musical intervals are used in Western music and in other cultures,” McDermott says. “They don’t appear to be random—some are used more commonly than others. The question is: What’s the explanation for that?” © 1986-2017 The Scientist
Link ID: 23332 - Posted: 03.09.2017
By Lindzi Wessel You may have seen the ads: Just spray a bit of human pheromone on your skin, and you’re guaranteed to land a date. Scientists have long debated whether humans secrete chemicals that alter the behavior of other people. A new study throws more cold water on the idea, finding that two pheromones that proponents have long contended affect human attraction to each other have no such impact on the opposite sex—and indeed experts are divided about whether human pheromones even exist. The study, published today in Royal Society Open Science, asked heterosexual participants to rate opposite-sex faces on attractiveness while being exposed to two steroids that are putative human pheromones. One is androstadienone (AND), found in male sweat and semen, whereas the second, estratetraenol (EST), is in women’s urine. Researchers also asked participants to judge gender-ambiguous, or “neutral,” faces, created by merging images of men and women together. The authors reasoned that if the steroids were pheromones, female volunteers given AND would see gender-neutral faces as male, and male volunteers given EST would see gender-neutral faces as female. They also theorized that the steroids corresponding to the opposite sex would lead the volunteers to rate opposite sex faces as more attractive. That didn’t happen. The researchers found no effects of the steroids on any behaviors and concluded that the label of “putative human pheromone” for AND and EST should be dropped. “I’ve convinced myself that AND and EST are not worth pursuing,” says the study’s lead author, Leigh Simmons, an evolutionary biologist at the University of Western Australia in Crawley. © 2017 American Association for the Advancement of Science.
By Jia Naqvi He loves dancing to songs, such as Michael Jackson’s "Beat It" and the "Macarena," but he can't listen to music in the usual way. He laughs whenever someone takes his picture with a camera flash, which is the only intensity of light he can perceive. He loves trying to balance himself, but his legs don't allow him to walk without support. He is one in a million, literally. Born deaf-blind and with a condition, osteopetrosis, that makes bones both dense and fragile, 6-year-old Orion Theodore Withrow is among an unknown number of children with a newly identified genetic disorder that researchers are just beginning to decipher. It goes by an acronym, COMMAD, that gives little away until each letter is explained, revealing an array of problems that also affect eye formation and pigmentation in eyes, skin and hair. The rare disorder severely impairs the person's ability to communicate. Children such as Orion, who are born to genetically deaf parents, are at a higher risk, according to a recent study published in the American Journal of Human Genetics. The finding has important implications for the deaf community, said its senior author, Brian Brooks, clinical director and chief of the Pediatric, Developmental and Genetic Ophthalmology Section at the National Eye Institute. “It is relatively common for folks in deaf community to marry each other,” he said, and what's key is whether each of the couple has a specific genetic "misspelling" that causes a syndrome called Waardenburg 2A. If yes, there's the likelihood of a child inheriting the mutation from both parents. The result, researchers found, is COMMAD. © 1996-2017 The Washington Post
By Steve Mirsky To conserve water, members of my household abide by the old aphorism “If it's yellow, let it mellow.” You're in a state of ignorance about that wizened phrase? If so, it recommends that one not flush the toilet after each relatively innocent act of micturition. But there's one exception to the rule: after asparagus, it's one and done—because those delicious stalks make urine smell like hell. To me and mine, anyway. The digestion of asparagus produces methanethiol and S-methyl thioesters, chemical compounds containing stinky sulfur, also known as brimstone. Hey, when I said that postasparagus urine smells like hell, I meant it literally. Methanethiol is the major culprit in halitosis and flatus, which covers both ends of that discussion. And although thioesters can also grab your nostrils by the throat, they might have played a key role in the origin of life. So be glad they were there stinking up the abiotic Earth. But does a compound reek if nobody is there to sniff it? Less philosophically, does it reek if you personally can't smell it? For only some of us are genetically gifted enough to fully appreciate the distinctive scents of postasparagus urine. The rest wander around unaware of their own olfactory offenses. Recently researchers dove deep into our DNA to determine, although we've all dealt it, exactly who smelt it. Their findings can be found in a paper entitled “Sniffing Out Significant ‘Pee Values’: Genome Wide Association Study of Asparagus Anosmia.” Asparagus anosmia refers to the inability “to smell the metabolites of asparagus in urine,” the authors helpfully explain. They don't bother to note that their bathroom humor plays on the ubiquity in research papers of the p-value, a statistical evaluation of the data that assesses whether said data look robust or are more likely the stuff that should never be allowed to mellow. © 2017 Scientific American,
By Greta Keenan. The ocean might seem like a quiet place, but listen carefully and you might just hear the sounds of the fish choir. Most of this underwater music comes from soloist fish, repeating the same calls over and over. But when the calls of different fish overlap, they form a chorus. Robert McCauley and colleagues at Curtin University in Perth, Australia, recorded vocal fish in the coastal waters off Port Hedland in Western Australia over an 18-month period, and identified seven distinct fish choruses, happening at dawn and at dusk. The low “foghorn” call is made by the Black Jewfish (Protonibea diacanthus), while the grunting call that researcher Miles Parsons compares to the “buzzer in the Operation board game” comes from a species of Terapontid. A third chorus is a quieter batfish that makes a “ba-ba-ba” call. “I’ve been listening to fish squawks, burble and pops for nearly 30 years now, and they still amaze me with their variety,” says McCauley, who led the research. Sound plays an important role in various fish behaviours such as reproduction, feeding and territorial disputes. Nocturnal predatory fish use calls to stay together to hunt, while fish that are active during the day use sound to defend their territory. “You get the dusk and dawn choruses like you would with the birds in the forest,” says Steve Simpson, a marine biologist at the University of Exeter, UK. © Copyright Reed Business Information Ltd.
By Robert F. Service. Predicting color is easy: Shine a light with a wavelength of 510 nanometers, and most people will say it looks green. Yet figuring out exactly how a particular molecule will smell is much tougher. Now, 22 teams of computer scientists have unveiled a set of algorithms able to predict the odor of different molecules based on their chemical structure. It remains to be seen how broadly useful such programs will be, but one hope is that such algorithms may help fragrance makers and food producers design new odorants with precisely tailored scents. This latest smell prediction effort began with a recent study by olfactory researcher Leslie Vosshall and colleagues at The Rockefeller University in New York City, in which 49 volunteers rated the smell of 476 vials of pure odorants. For each one, the volunteers labeled the smell with one of 19 descriptors, including “fish,” “garlic,” “sweet,” or “burnt.” They also rated each odor’s pleasantness and intensity, creating a massive database of more than 1 million data points for all the odorant molecules in their study. When computational biologist Pablo Meyer learned of the Rockefeller study 2 years ago, he saw an opportunity to test whether computer scientists could use it to predict how people would assess smells. Besides working at IBM’s Thomas J. Watson Research Center in Yorktown Heights, New York, Meyer heads the DREAM challenges, contests that ask teams of computer scientists to solve outstanding biomedical problems, such as predicting the outcome of prostate cancer treatment based on clinical variables or detecting breast cancer from mammogram data. “I knew from graduate school that olfaction was still one of the big unknowns,” Meyer says. Even though researchers have discovered some 400 separate odor receptors in humans, he adds, just how they work together to distinguish different smells remains largely a mystery. © 2017 American Association for the Advancement of Science
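At its core, the challenge the teams faced is a supervised-learning problem: map a vector of chemical-structure features to perceptual ratings. Below is a deliberately tiny nearest-neighbour sketch of that setup; the feature encodings and ratings are made up for illustration and have nothing to do with the actual challenge data or the winning algorithms:

```python
# Toy "training set": (feature vector, perceived intensity rating).
# Features might encode molecular weight, sulfur-atom count, ester groups;
# every number here is invented.
train = [
    ((180.0, 1, 0), 6.5),  # hypothetical sulfur-bearing compound: intense
    ((120.0, 0, 1), 3.0),  # hypothetical ester: mild
    ((300.0, 0, 0), 1.5),  # heavy molecule: nearly odourless
]

def predict_intensity(features, train, k=1):
    """Predict intensity as the mean rating of the k nearest training odorants."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda item: distance(item[0], features))[:k]
    return sum(rating for _, rating in nearest) / k

# A new molecule resembling the sulfurous example inherits its high rating.
rating = predict_intensity((175.0, 1, 0), train)
```

Real entries used far richer molecular descriptors and more sophisticated models, but the input-output contract, structure in and 19 perceptual descriptors plus pleasantness and intensity out, is the same.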
By Rachael Lallensack. Goats know who their real friends are. A study published today in Royal Society Open Science shows that the animals can recognize what other goats look like and sound like, but only those they are closest with. Up until the late 1960s, the overwhelming assumption was that only humans could mentally keep track of how other individuals look, smell, and sound—what scientists call cross-modal recognition. We now know that many different kinds of animals can do this, including horses, lions, crows, dogs, and certain primates. Instead of a lab, these researchers settled into Buttercups Sanctuary for Goats in Boughton Monchelsea, U.K., to find out whether goats had the ability to recognize each other. To do so, they first recorded the calls of individual goats. Then, they set up three pens in the shape of a triangle in the sanctuary’s pasture. Equidistant between the two pens at the base of the triangle was a stereo speaker, camouflaged so as not to distract the goat participants. A “watcher” goat stood at the peak of the triangle, and the two remaining corners were filled with the watcher’s “stablemate” (they share a stall at night) and a random herd member. Then, the team would play either the stablemate’s or the random goat’s call over the speaker and time how long it took for the watcher to match the call with the correct goat. They repeated this test again, but with two random goats. The researchers found that the watcher goat would look at the goat that matched the call quickly and for a longer time, but only in the test that included their stablemate. The results indicate that goats are not only capable of cross-modal recognition, but that they might also be able to use inferential reasoning: in other words, the process of elimination. Think back to the test: Perhaps when the goat heard a call that it knew was not its pal, it inferred that it must have been the other one. © 2017 American Association for the Advancement of Science.