Chapter 19. Language and Hemispheric Asymmetry
By RUSSELL GOLDMAN There’s an elephant at a zoo outside Seoul that speaks Korean. — You mean, it understands some Korean commands, the way a dog can be trained to understand “sit” or “stay”? No, I mean it can actually say Korean words out loud. — Pics or it didn’t happen. Here, watch the video. To be fair, the elephant, a 26-year-old Asian male named Koshik, doesn’t really speak Korean, any more than a parrot can speak Korean (or English or Klingon). But parrots are supposed to, well, parrot — and elephants are not. And Koshik knows how to say at least five Korean words, which are about five more than I do. The really amazing part is how he does it. Koshik places his trunk inside his mouth and uses it to modulate the tone and pitch of the sounds his voice makes, a bit like a person putting his fingers in his mouth to whistle. In this way, Koshik is able to emulate human speech “in such detail that Korean native speakers can readily understand and transcribe the imitations,” according to the journal Current Biology. What’s in his vocabulary? Things he hears all the time from his keepers: the Korean words for hello, sit down, lie down, good and no. Lest you think this is just another circus trick that any Jumbo, Dumbo or Babar could pull off, the team of international scientists who wrote the journal article say Koshik’s skills represent “a wholly novel method of vocal production and formant control in this or any other species.” Like many innovations, Koshik’s may have been born of sad necessity. Researchers say he started to imitate his keepers’ sounds only after he was separated from other elephants at the age of 5 — and that his desire to speak like a human arose from sheer loneliness. © 2016 The New York Times Company
By JOHN BRANCH When the N.F.L. agreed in 2012 to donate tens of millions of dollars to concussion research overseen by the National Institutes of Health, it was widely seen as a positive turning point in football’s long history of playing down the long-term effects of brain injuries on players. At the time, the league said that it would have no influence over how the money was used. But the league and its head, neck and spine committee worked to improperly influence the government research, trying to steer the study toward a doctor with ties to the league, according to a study conducted by a congressional committee and released on Monday. “Our investigation has shown that while the N.F.L. had been publicly proclaiming its role as funder and accelerator of important research, it was privately attempting to influence that research,” the study concluded. “The N.F.L. attempted to use its ‘unrestricted gift’ as leverage to steer funding away from one of its critics.” The N.F.L., in a statement, said it rejected the accusations laid out in the study, which was conducted by Democratic members of the House Committee on Energy and Commerce. “There is no dispute that there were concerns raised about both the nature of the study in question and possible conflicts of interest,” the league said. “These concerns were raised for review and consideration through the appropriate channels.” It is the latest in a long history of instances in which the N.F.L. has been found to mismanage concussion research, dating to the league’s first exploration of the crisis when it used deeply flawed data to produce a series of studies. In this case, some of the characters are the same, including Dr. Elliot Pellman, who led the league’s concussion committee for years before he was discredited for his questionable credentials and his role as a longtime denier of the effects of concussions on players. © 2016 The New York Times Company
Keyword: Brain Injury/Concussion
Link ID: 22241 - Posted: 05.24.2016
By Christie Aschwanden When concussions make the news, it’s usually about football. But head injuries happen in other sports too, and not just to men. During a congressional hearing on concussions in youth sports on Friday, Dawn Comstock, an epidemiologist who studies sports injuries, told a House Energy and Commerce subcommittee that in sports like soccer and basketball in which girls and boys play by the same rules, with the same equipment and the same facilities, “girls have higher concussion rates than boys.” Comstock, a researcher at the Colorado School of Public Health, is the first author of a 2015 study published in JAMA Pediatrics that quantified concussions in high school soccer and found that they were about one and a half times more common in girls than in boys. When U.S. Rep. Diana DeGette, D-Colo., asked whether more data was needed to show that girls have higher concussion rates, Comstock replied, “We already have the data that’s consistently shown this gender difference.” What we don’t have, she said, is a proven explanation for the discrepancy. Some researchers have wondered whether women and girls are simply more likely to report their symptoms than men and boys are. “It’s a sexist way to say that they’re not as tough,” said Katherine Price Snedaker, executive director of Pink Concussions, an organization that is seeking answers to how concussions affect women and girls. The group recently held a summit on female concussion and traumatic brain injuries at Georgetown University, and one of the speakers was Shannon Bauman, a sports physician who presented data from 207 athletes — both male and female — who’d been evaluated at her specialty concussion clinic in Barrie, Ontario, between September 2014 and January 2016.
By Matthew Hutson Last week, Nature, the world’s most prestigious science journal, published a beautiful picture of a brain on its cover. The computer-generated image, taken from a paper in the issue, showed the organ’s outer layer almost completely covered with sprinkles of colorful words. The paper presents a “semantic map” revealing which parts of the brain’s cortex—meaning its outer layer, the one responsible for higher thought—respond to various spoken words. The study has generated widespread interest, receiving coverage from newspapers and websites around the world. The paper was also accompanied by an online interactive model that allowed users to explore exactly how words are mapped in our brains. The combination yielded a popular frenzy, one prompting the question: Why are millions of people suddenly so interested in the neuroanatomical distribution of linguistic representations? Have they run out of cat videos? The answer, I think, is largely the same as the answer to why “This Is Your Brain on X” (where X = food, politics, sex, podcasts, whatever) is a staple of news headlines, often residing above an fMRI image of a brain lit up in fascinating, mysterious patterns: People have a fundamental misunderstanding of the field of neuroscience and what it can tell us. But before explaining why people shouldn’t be excited about this research, let’s look at what the research tells us and why we should be excited. Different parts of the brain process different elements of thought, and some regions of the cortex are organized into “maps” such that the distance between different locations corresponds to the physical and/or conceptual distance between what they represent.
By BENEDICT CAREY Listening to music may make the daily commute tolerable, but streaming a story through the headphones can make it disappear. You were home; now you’re at your desk: What happened? Storytelling happened, and now scientists have mapped the experience of listening to podcasts, specifically “The Moth Radio Hour,” using a scanner to track brain activity. In a paper published Wednesday by the journal Nature, a research team from the University of California, Berkeley, laid out a detailed map of the brain as it absorbed and responded to a story. Widely dispersed sensory, emotional and memory networks were humming, across both hemispheres of the brain; no story was “contained” in any one part of the brain, as some textbooks have suggested. The team, led by Alexander Huth, a postdoctoral researcher in neuroscience, and Jack Gallant, a professor of psychology, had seven volunteers listen to episodes of “The Moth” — first-person stories of love, loss, betrayal, flight from an abusive husband, and more — while recording brain activity with an M.R.I. machine. Using novel computational methods, the group broke down the stories into units of meaning: social elements, for example, like friends and parties, as well as locations and emotions. They found that these concepts fell into 12 categories that tended to cause activation in the same parts of people’s brains at the same points throughout the stories. They then retested that model by seeing how it predicted M.R.I. activity while the volunteers listened to another Moth story. Would related words like mother and father, or times, dates and numbers trigger the same parts of people’s brains? The answer was yes. © 2016 The New York Times Company
By Andy Coghlan “I’ve become resigned to speaking like this,” he says. The 17-year-old boy’s mother tongue is Dutch, but for his whole life he has spoken with what sounds like a French accent. “This is who I am and it’s part of my personality,” says the boy, who lives in Belgium – where Dutch is an official language – and prefers to remain anonymous. “It has made me stand out as a person.” No matter how hard he tries, his speech sounds French. About 140 cases of foreign accent syndrome (FAS) have been described in scientific studies, but most of these people developed the condition after having a stroke. In the UK, for example, a woman in Newcastle who’d had a stroke in 2006 woke up with a Jamaican accent. Other British cases include a woman who developed a Chinese accent, and another who acquired a pronounced French-like accent overnight following a bout of cerebral vasculitis. But the teenager has had the condition from birth, sparking the interest of Jo Verhoeven of City University London and his team. Scans revealed that, compared with controls, the flow of blood to two parts of the boy’s brain was significantly reduced. One of these was the prefrontal cortex of the left hemisphere – a finding unsurprising to the team, as that region is known to be associated with planning actions, including speech. © Copyright Reed Business Information Ltd.
Link ID: 22161 - Posted: 04.30.2016
Ian Sample Science editor Scientists have created an “atlas of the brain” that reveals how the meanings of words are arranged across different regions of the organ. Like a colourful quilt laid over the cortex, the atlas displays in rainbow hues how individual words and the concepts they convey can be grouped together in clumps of white matter. “Our goal was to build a giant atlas that shows how one specific aspect of language is represented in the brain, in this case semantics, or the meanings of words,” said Jack Gallant, a neuroscientist at the University of California, Berkeley. No single brain region holds one word or concept. A single brain spot is associated with a number of related words. And each single word lights up many different brain spots. Together they make up networks that represent the meanings of each word we use: life and love; death and taxes; clouds, Florida and bra. All light up their own networks. Described as a “tour de force” by one researcher who was not involved in the study, the atlas demonstrates how modern imaging can transform our knowledge of how the brain performs some of its most important tasks. With further advances, the technology could have a profound impact on medicine and other fields. “It is possible that this approach could be used to decode information about what words a person is hearing, reading, or possibly even thinking,” said Alexander Huth, the first author on the study. One potential use would be a language decoder that could allow people silenced by motor neurone disease or locked-in syndrome to speak through a computer. © 2016 Guardian News and Media Limited
Jon Hamilton People who sustain a concussion or a more severe traumatic brain injury are likely to have sleep problems that continue for at least a year and a half. A study of 31 patients with this sort of brain injury found that 18 months afterward, they were still getting, on average, an hour more sleep each night than similar healthy people were getting. And despite the extra sleep, 67 percent showed signs of excessive daytime sleepiness. Only 19 percent of healthy people had that problem. Surprisingly, most of these concussed patients had no idea that their sleep patterns had changed. "If you ask them, they say they are fine," says Dr. Lukas Imbach, the study's first author and a senior physician at the University Hospital Zurich in Zurich. When Imbach confronts patients with their test results, "they are surprised," he says. The results, published Thursday in the online edition of the journal Neurology, suggest there could be a quiet epidemic of sleep disorders among people with traumatic brain injuries. The injuries are diagnosed in more than 2 million people a year in the United States. Common causes include falls, motor vehicle incidents and assaults. Previous studies have found that about half of all people who sustain sudden trauma to the brain experience sleep problems. But it has been unclear how long those problems persist. "Nobody actually had looked into that in detail," Imbach says. A sleep disorder detected 18 months after an injury will linger for at least two years, and probably much longer, the researchers say. © 2016 npr
Laura Sanders Away from home, people sleep with one ear open. In unfamiliar surroundings, part of the left hemisphere keeps watch while the rest of the brain is deeply asleep, scientists report April 21 in Current Biology. The results help explain why the first night in a hotel isn’t always restful. Some aquatic mammals and birds sleep with half a brain at a time, a trick called unihemispheric sleep. Scientists have believed that humans, however, did not show any such asymmetry in their slumber. Study coauthor Yuka Sasaki of Brown University in Providence, R.I., and colleagues looked for signs of asymmetry on the first night that young, healthy people came into their sleep lab. Usually, scientists toss the data from the inaugural night because the sleep is so disturbed, Sasaki says. But she and her team thought that some interesting sleep patterns might lurk within that fitful sleep. “It was a little bit of a crazy hunch,” she says, “but we did it anyway.” On the first night in a sleep lab, people with more “awake” left hemispheres took longer to fall asleep. This asymmetry was largely gone on the second night, and people fell asleep more quickly. During a deep sleep stage known as slow-wave sleep, a network of nerve cells in the left side of the brain showed less sleep-related activity than the corresponding network on the right side. Those results suggest that the left side of the brain is a lighter sleeper. “It looked like the left hemisphere and the right hemisphere did not show the same degree of sleep,” Sasaki says. This imbalance disappeared on the second night of sleep. © Society for Science & the Public 2000 - 2016
Cassie Martin The grunts, moans and wobbles of gelada monkeys, a chatty species residing in Ethiopia’s northern highlands, obey a universal mathematical principle seen until now only in human language. The new research, published online April 18 in the Proceedings of the National Academy of Sciences, sheds light on the evolution of primate communication and complex human language, the researchers say. “Human language is like an onion,” says Simone Pika, head of the Humboldt Research Group at the Max Planck Institute for Ornithology in Seewiesen, Germany, who was not involved in the study. “When you peel back the layers, you find that it is based on these underlying mechanisms, many of which were already present in animal communication. This research neatly shows there is another ability already present.” One of those mechanisms is known as Menzerath’s law, a mathematical principle that states that the longer a construct, the shorter its components. In human language, for instance, longer sentences tend to comprise shorter words. In the geladas, as the number of individual calls in a vocal sequence increases, the duration of those calls tends to decrease, in keeping with the law. The gelada study is the first to observe this law in the vocalizations of a nonhuman species. “There are aspects of communication and language that aren’t as unique as we think,” says study coauthor Morgan Gustison of the University of Michigan in Ann Arbor. © Society for Science & the Public 2000 - 2016
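The relationship the gelada study tests is easy to make concrete. Menzerath’s law predicts a negative correlation between the number of calls in a sequence and the mean duration of those calls. The sketch below illustrates that test on entirely hypothetical call-duration data (the numbers are invented for illustration, not taken from the study):

```python
# A minimal sketch of testing Menzerath's law on vocal sequences.
# Each sequence is a list of call durations in seconds; the data
# below are hypothetical, not from the gelada study.
from statistics import mean

sequences = [
    [0.62],
    [0.55, 0.58],
    [0.41, 0.47, 0.44],
    [0.35, 0.38, 0.33, 0.36],
    [0.30, 0.28, 0.31, 0.29, 0.27],
]

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

lengths = [len(seq) for seq in sequences]          # calls per sequence
mean_durations = [mean(seq) for seq in sequences]  # mean call duration

r = pearson_r(lengths, mean_durations)
print(f"correlation between sequence length and mean call duration: {r:.2f}")
# A strongly negative r is what Menzerath's law predicts: longer
# sequences are built from shorter calls.
```

The real analysis is more involved (it must control for call type and repeated measures from the same animal), but the core prediction is just this sign of the correlation.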
By Catherine Matacic Simi Etedgi leans forward as she tells her story for the camera: The year was 1963, and she was just 15 as she left Morocco for Israel, one person among hundreds of thousands leaving for the new state. But her forward lean isn’t a casual gesture. Etedgi, now 68, is one of about 10,000 signers of Israeli Sign Language (ISL), a language that emerged only 80 years ago. Her lean has a precise meaning, signaling that she wants to get in an aside before finishing her tale. Her eyes sparkle as she explains that the signs used in the Morocco of her childhood are very different from those she uses now in Israel. In fact, younger signers of ISL use a different gesture to signal an aside—and they have different ways to express many other meanings as well. A new study presented at the Evolution of Language meeting here last month shows that the new generation has come up with richer, more grammatically complex utterances that use ever more parts of the body for different purposes. Most intriguing for linguists: These changes seem to happen in a predictable order from one generation to the next. That same order has been seen in young sign languages around the world, showing in visible fashion how linguistic complexity unfolds. This leads some linguists to think that they may have found a new model for the evolution of language. “This is a big hypothesis,” says cognitive scientist Ann Senghas of Barnard College in New York City, who has spent her life studying Nicaraguan Sign Language (NSL). “It makes a lot of predictions and tries to pull a lot of facts together into a single framework.” Although it’s too early to know what the model will reveal, linguists say it already may have implications for understanding how quickly key elements of language, from complex words to grammar, have evolved. © 2016 American Association for the Advancement of Science.
Link ID: 22130 - Posted: 04.23.2016
By JEFFREY M. ZACKS and REBECCA TREIMAN OUR favorite Woody Allen joke is the one about taking a speed-reading course. “I read ‘War and Peace’ in 20 minutes,” he says. “It’s about Russia.” The promise of speed reading — to absorb text several times faster than normal, without any significant loss of comprehension — can indeed seem too good to be true. Nonetheless, it has long been an aspiration for many readers, as well as the entrepreneurs seeking to serve them. And as the production rate for new reading matter has increased, and people read on a growing array of devices, the lure of speed reading has only grown stronger. The first popular speed-reading course, introduced in 1959 by Evelyn Wood, was predicated on the idea that reading was slow because it was inefficient. The course focused on teaching people to make fewer back-and-forth eye movements across the page, taking in more information with each glance. Today, apps like SpeedRead With Spritz aim to minimize eye movement even further by having a digital device present you with a stream of single words one after the other at a rapid rate. Unfortunately, the scientific consensus suggests that such enterprises should be viewed with suspicion. In a recent article in Psychological Science in the Public Interest, one of us (Professor Treiman) and colleagues reviewed the empirical literature on reading and concluded that it’s extremely unlikely you can greatly improve your reading speed without missing out on a lot of meaning. Certainly, readers are capable of rapidly scanning a text to find a specific word or piece of information, or to pick up a general idea of what the text is about. But this is skimming, not reading. We can definitely skim, and it may be that speed-reading systems help people skim better. Some speed-reading systems, for example, instruct people to focus only on the beginnings of paragraphs and chapters. This is probably a good skimming strategy. 
Participants in a 2009 experiment read essays that had half the words covered up — either the beginning of the essay, the end of the essay, or the beginning or end of each individual paragraph. Reading half-paragraphs led to better performance on a test of memory for the passage’s meaning than did reading only the first or second half of the text, and it worked as well as skimming under time pressure. © 2016 The New York Times Company
By David Shultz Mice supposedly don't speak, so they can't stutter. But by tinkering with a gene that appears to be involved in human speech, researchers have created transgenic mice whose pups produce altered vocalizations in a way that is similar to stuttering in humans. The mice could make a good model for understanding stuttering; they could also shed more light on how mutations in the gene, called Gnptab, cause the speech disorder. Stuttering is one of the most common speech disorders in the world, affecting nearly one out of 100 adults in the United States. But the cause of the stammering, fragmented speech patterns remains unclear. Several years ago, researchers discovered that stutterers often have mutations in a gene called Gnptab. Like a dispatcher directing garbage trucks, Gnptab encodes a protein that helps to direct enzymes into the lysosome—a compartment in animal cells that breaks down waste and recycles old cellular machinery. Mutations to other genes in this system are known to lead to the buildup of cellular waste products and often result in debilitating diseases, such as Tay-Sachs. How mutations in Gnptab cause stuttered speech remains a mystery, however. To get to the bottom of things, neuroscientist Terra Barnes and her team at Washington University in St. Louis produced mice with a mutation in the Gnptab gene and studied whether it affected the ultrasonic vocalizations that newly born mouse pups emit when separated from their mothers. Determining whether a mouse is stuttering is no easy task; as Barnes points out, it can even be difficult to tell whether people are stuttering if they’re speaking a foreign language. So the team designed a computer program that listens for stuttering vocalization patterns independent of language. © 2016 American Association for the Advancement of Science.
Link ID: 22110 - Posted: 04.16.2016
By Robin Wylie Bottlenose dolphins have been observed chattering while cooperating to solve a tricky puzzle – a feat that suggests they have a type of vocalisation dedicated to cooperating on problem solving. Holli Eskelinen of Dolphins Plus research institute in Florida and her colleagues at the University of Southern Mississippi presented a group of six captive dolphins with a locked canister filled with food. The canister could only be opened by simultaneously pulling on a rope at either end. The team conducted 24 canister trials, during which all six dolphins were present. Only two of the dolphins ever managed to crack the puzzle and get to the food. The successful pair was prolific, though: in 20 of the trials, the same two adult males worked together to open the food canister in a matter of minutes. In the other four trials, one of the dolphins managed to solve the problem on its own, but this was much trickier and took longer to execute. But the real surprise came from recordings of the vocalisations the dolphins made during the experiment. The team found that when the dolphins worked together to open the canister, they made around three times more vocalisations than they did while opening the canister alone, or when there was either no canister present or no interaction with the canister in the pool. © Copyright Reed Business Information Ltd.
By Frank McGurty More than 40 percent of retired NFL players tested with advanced scanning technology showed signs of traumatic brain injury, a much higher rate than in the general population, according to a new study of the long-term risks of playing American football. The research, presented at an American Academy of Neurology meeting that began in Vancouver on Monday, is one of the first to provide "objective evidence" of traumatic brain injury in a large sample of National Football League veterans while they are living, said Dr. Francis X. Conidi, one of the study's authors. Conidi, a neurologist at the Florida Center for Headache and Sports Neurology and a faculty member at the Florida State University College of Medicine, said traumatic brain injury was often a "precursor" to CTE, a degenerative brain disease. "What we do know is that players with traumatic brain injury have a high incidence of going on to develop neurological degenerative disease later on in life," Conidi told Reuters. CTE, or chronic traumatic encephalopathy, has been found in dozens of the NFL's top players after they died. At present, a CTE diagnosis is only possible after death. The brain tissue of 59 of 62 deceased former NFL players examined by Boston University's CTE Center has tested positive for CTE, according to its website. The disease, which can lead to aggression and dementia, may have led to the suicides of several NFL athletes, including Hall of Famer Junior Seau. In the new study, the largest of its kind, 40 living former players were given sensitive brain scans, known as diffusion tensor imaging (DTI), as well as thinking and memory tests. © 2016 Scientific American,
By Catherine Matacic How does sign language develop? A new study shows that it takes less than five generations for people to go from simple, unconventional pantomimes—essentially telling a story with your hands—to stable signs. Researchers asked a group of volunteers to invent their own signs for a set of 24 words in four separate categories: people, locations, objects, and actions. Examples included “photographer,” “darkroom,” and “camera.” After an initial group made up the signs—pretending to shoot a picture with an old-fashioned camera for “photographer,” for example—they taught the signs to a new generation of learners. That generation then played a game where they tried to guess what sign another player in their group was making. When they got the answer right, they taught that sign to a new generation of volunteers. After a few generations, the volunteers stopped acting out the words with inconsistent gestures and started making them in ways that were more systematic and efficient. What’s more, they added markers for the four categories—pointing to themselves if the category were “person” or making the outline of a house if the category were “location,” for example—and they stopped repeating gestures, the researchers reported last month at the Evolution of Language conference in New Orleans, Louisiana. In video from the experiment, the first version of “photographer” is unpredictable and long compared with the final version, which uses the person marker and takes just half the time. The researchers say their finding supports the work of researchers in the field, who have found similar patterns of development in newly emerging sign languages. The results also suggest that learning and social interaction are crucial to this development. © 2016 American Association for the Advancement of Science
Link ID: 22084 - Posted: 04.09.2016
Laura Sanders NEW YORK — Lip-readers’ minds seem to “hear” the words their eyes see being formed. And the better a person is at lipreading, the more neural activity there is in the brain’s auditory cortex, scientists reported April 4 at the annual meeting of the Cognitive Neuroscience Society. Earlier studies have found that auditory brain areas are active during lipreading. But most of those studies focused on small bits of language — simple sentences or even single words, said study coauthor Satu Saalasti of Aalto University in Finland. In contrast, Saalasti and colleagues studied lipreading in more natural situations. Twenty-nine people read the silent lips of a person who spoke Finnish for eight minutes in a video. “We can all lip-read to some extent,” Saalasti said, and the participants, who had no lipreading experience, varied widely in their comprehension of the eight-minute story. In the best lip-readers, activity in the auditory cortex was quite similar to that evoked when the story was read aloud, brain scans revealed. The results suggest that lipreading success depends on a person’s ability to “hear” the words formed by moving lips, Saalasti said. Citation: J. Alho et al. Similar brain responses to lip-read, read and listened narratives. Cognitive Neuroscience Society annual meeting, New York City, April 4, 2016. © Society for Science & the Public 2000 - 2016.
Link ID: 22077 - Posted: 04.07.2016
Laura Sanders NEW YORK — Cells in a brain structure known as the hippocampus are known to be cartographers, drawing mental maps of physical space. But new studies show that this seahorse-shaped hook of neural tissue can also keep track of social space, auditory space and even time, deftly mapping these various types of information into their proper places. Neuroscientist Rita Tavares described details of one of these new maps April 2 at the annual meeting of the Cognitive Neuroscience Society. Brain scans had previously revealed that activity in the hippocampus was linked to movement through social space. In an experiment reported last year in Neuron, people went on a virtual quest to find a house and job by interacting with a cast of characters. Through these social interactions, the participants formed opinions about how much power each character held, and how kindly they felt toward him or her. These judgments put each character in a position on a “social space” map. Activity in the hippocampus was related to this social mapmaking, Tavares and colleagues found. It turns out that this social map depends on the traits of the person who is drawing it, says Tavares, of the Icahn School of Medicine at Mount Sinai in New York City. People with more social anxiety tended to give more power to characters they interacted with. What’s more, these people’s social space maps were smaller overall, suggesting that they explored social space less, Tavares says. Tying these behavioral traits to the hippocampus may lead to a greater understanding of social behavior — and how this social mapping may go awry in psychiatric conditions, Tavares said. © Society for Science & the Public 2000 - 2016.
Keyword: Learning & Memory
Link ID: 22076 - Posted: 04.06.2016
By BENEDICT CAREY Some scientists studying the relationship between contact sports and memory or mood problems later in life argue that cumulative exposure to hits that cause a snap of the head — not an athlete’s number of concussions — is the most important risk factor. That possibility is particularly worrisome in football, in which frequent “subconcussive” blows are unavoidable. On Thursday, researchers based at Boston University reported the most rigorous evidence to date that overall exposure to contact in former high school and college football players could predict their likelihood of experiencing problems like depression, apathy or memory loss years later. The finding, appearing in The Journal of Neurotrauma, is not conclusive, the authors wrote. Such mental problems can stem from a variety of factors in any long life. Yet the paper represents researchers’ first attempt to precisely calculate cumulative lifetime exposure to contact in living players, experts said. Previous estimates had relied in part on former players’ memories of concussions, or number of years played. The new paper uses more objective measures, including data from helmet accelerometer studies, and provides a glimpse of where the debate over the risk of contact sports may next play out, the experts said. “They used a much more refined and quantitative approach to estimate exposure than I’ve seen in this area,” said John Meeker, a professor of environmental health sciences at the University of Michigan School of Public Health, who was not a part of the research team. But he added, “Their methods will have to be validated in much larger studies; this is very much a preliminary finding.” The study did not address the risk of chronic traumatic encephalopathy, or C.T.E., a degenerative scarring in the brain tied to head blows, which can be diagnosed only after death. © 2016 The New York Times Company
Keyword: Brain Injury/Concussion
Link ID: 22060 - Posted: 04.01.2016
By Elizabeth Pennisi The “brrreeet” sound this broadbill makes comes not from its beak, but from its wings. Charles Darwin marveled at the “instrumental music” of birds—from the rattled quills of peacocks to the wing-drumming of grouse and the wing “booming” of nightjars. But those percussive noises are no match for the definitive tones generated by the three Smithornis broadbills (S. rufolateralis, S. capensis, and S. sharpei) that live in remote forests in sub-Saharan Africa. One bird acoustics specialist was so intrigued by a 1986 recording of this “song” that he vowed to hear it for himself. More than 2 years ago, he and his colleagues tracked two of these species down in the wild. Synchronized high-speed video and acoustic recordings revealed that the downstroke of the wings produces the tones as the bird flies in a meter-wide oval from its perch and back again. At first the researchers thought the outermost flight feathers flutter to make the sounds, but studies of a wing and of the feathers themselves in a wind tunnel showed that the inner flight feathers are “singing” the most, the team reports today in the Journal of Experimental Biology. The tones may scale with the species’ body and feather size, with the bigger ones producing deeper tones, the researchers suggest. The wing tones seem to have replaced vocal singing, they note, and are likely unique to this group of birds. Audible 100 meters away in dense forest, they represent yet another innovation for communicating with one’s peers. © 2016 American Association for the Advancement of Science