Chapter 19. Language and Hemispheric Asymmetry




By Amina Zafar, When Susan Robertson's fingers and left arm felt funny while she was Christmas shopping, they were signs of a stroke she experienced at age 36. The stroke survivor is now concerned about her increased risk of dementia. The link between stroke and dementia is stronger than many Canadians realize, the Heart and Stroke Foundation says. The group's annual report, released Thursday, is titled "Mind the connection: preventing stroke and dementia." Stroke happens when blood stops flowing to parts of the brain. Robertson, 41, of Windsor, Ont., said her short-term memory, word-finding and organizational skills were impaired after her 2011 stroke. She's extremely grateful to have recovered the ability to speak and walk after doctors found clots had damaged her brain's left parietal lobe. "I knew what was happening, but I couldn't say it," the occupational nurse recalled. A stroke more than doubles the risk of dementia, said Dr. Rick Swartz, a spokesman for the foundation and a stroke neurologist in Toronto. Raising awareness about the link is meant not to scare people, but to show how controlling blood pressure, not smoking or quitting if you do, eating a balanced diet and being physically active reduce the risk to individuals and could make a difference at a societal level, Swartz said. While aging is a common risk factor in stroke and dementia, evidence in Canada and other developed countries shows younger people are also increasingly affected. ©2016 CBC/Radio-Canada.

Keyword: Stroke; Alzheimers
Link ID: 22302 - Posted: 06.09.2016

By Karin Brulliard Think about how most people talk to babies: Slowly, simply, repetitively, and with an exaggerated tone. It’s one way children learn the uses and meanings of language. Now scientists have found that some adult birds do that when singing to chicks — and it helps the baby birds better learn their song. The subjects of the new study, published last week in the journal Proceedings of the National Academy of Sciences, were zebra finches. They’re good for this because they breed well in a lab environment, and “they’re just really great singers. They sing all the time,” said McGill University biologist and co-author Jon Sakata. The males, he means — they’re the singers, and they do it for fun and when courting ladies, as well as around baby birds. Never mind that their melody is more “tinny,” according to Sakata, than pretty. Birds in general are helpful for vocal acquisition studies because they, like humans, are among the few species that actually have to learn how to make their sounds, Sakata said. Cats, for example, are born knowing how to meow. But just as people pick up speech and bats learn their calls, birds also have to figure out how to sing their special songs. Sakata and his colleagues were interested in how social interactions between adult zebra finches and chicks influence that learning process. Is face-to-face — or, as it may be, beak-to-beak — learning better? Does simply hearing an adult sing work as well as watching it do so? Do daydreaming baby birds learn as well as their more focused peers? © 1996-2016 The Washington Post

Keyword: Language; Evolution
Link ID: 22286 - Posted: 06.06.2016

By Andy Coghlan People once dependent on wheelchairs after having a stroke are walking again since receiving injections of stem cells into their brains. Participants in the small trial also saw improvements in their speech and arm movements. “One 71-year-old woman could only move her left thumb at the start of the trial,” says Gary Steinberg, a neurosurgeon at Stanford University who performed the procedure on some of the 18 participants. “She can now walk and lift her arm above her head.” Run by SanBio of Mountain View, California, this trial is the second to test whether stem cell injections into patients’ brains can help ease disabilities resulting from stroke. Patients in the first, carried out by UK company ReNeuron, also showed measurable reductions in disability a year after receiving their injections and beyond. All patients in the latest trial showed improvements. Their scores on a 100-point scale for evaluating mobility – with 100 being completely mobile – improved on average by 11.4 points, a margin considered to be clinically meaningful for patients. “The most dramatic improvements were in strength, coordination, ability to walk, the ability to use hands and the ability to communicate, especially in those whose speech had been damaged by the stroke,” says Steinberg. In both trials, the patients’ recovery had plateaued before treatment; their strokes had occurred between six months and three years earlier. © Copyright Reed Business Information Ltd

Keyword: Stroke; Stem Cells
Link ID: 22281 - Posted: 06.04.2016

Meghan Rosen SALT LAKE CITY — In the Indian Ocean off the coast of Sri Lanka, pygmy blue whales are changing their tune — and they might be doing it on purpose. From 2002 to 2012, the frequency of one part of the whales’ calls steadily fell, marine bioacoustician Jennifer Miksis-Olds reported May 25 at a meeting of the Acoustical Society of America. But unexpectedly, another part of the whales’ call stayed the same, she found. “I’ve never seen results like this before,” says marine bioacoustician Leanna Matthews of Syracuse University in New York, who was not involved with the work. Miksis-Olds’ findings add a new twist to current theories about blue whale vocalizations and spark all sorts of questions about what the animals are doing, Matthews said. “It’s a huge mystery.” Over the last 40 to 50 years, the calls of blue whales around the world have been getting deeper. Researchers have reported frequency drops in blue whale populations from the Arctic Ocean to the North Pacific. Some researchers think that blue whales are just getting bigger, said Miksis-Olds, of the University of New Hampshire in Durham. Whaling isn’t as common as it used to be, so whales have been able to grow larger — and larger whales have deeper calls. Another theory blames whales’ changing calls on an increasingly noisy ocean. Whales could be automatically adjusting their calls to be heard better, kind of like a person raising their voice to speak at a party, she said. If the whales were just getting bigger, you’d expect all components of the calls to be deeper, said acoustics researcher Pasquale Bottalico at Michigan State University in East Lansing. But the new data don’t support that, he said. © Society for Science & the Public 2000 - 2016.

Keyword: Hearing; Animal Communication
Link ID: 22280 - Posted: 06.04.2016

By RUSSELL GOLDMAN There’s an elephant at a zoo outside Seoul that speaks Korean. — You mean, it understands some Korean commands, the way a dog can be trained to understand “sit” or “stay”? No, I mean it can actually say Korean words out loud. — Pics or it didn’t happen. Here, watch the video. To be fair, the elephant, a 26-year-old Asian male named Koshik, doesn’t really speak Korean, any more than a parrot can speak Korean (or English or Klingon). But parrots are supposed to, well, parrot — and elephants are not. And Koshik knows how to say at least five Korean words, which are about five more than I do. The really amazing part is how he does it. Koshik places his trunk inside his mouth and uses it to modulate the tone and pitch of the sounds his voice makes, a bit like a person putting his fingers in his mouth to whistle. In this way, Koshik is able to emulate human speech “in such detail that Korean native speakers can readily understand and transcribe the imitations,” according to the journal Current Biology. What’s in his vocabulary? Things he hears all the time from his keepers: the Korean words for hello, sit down, lie down, good and no. Lest you think this is just another circus trick that any Jumbo, Dumbo or Babar could pull off, the team of international scientists who wrote the journal article say Koshik’s skills represent “a wholly novel method of vocal production and formant control in this or any other species.” Like many innovations, Koshik’s may have been born of sad necessity. Researchers say he started to imitate his keepers’ sounds only after he was separated from other elephants at the age of 5 — and that his desire to speak like a human arose from sheer loneliness. © 2016 The New York Times Company

Keyword: Language; Animal Communication
Link ID: 22253 - Posted: 05.26.2016

By JOHN BRANCH When the N.F.L. agreed in 2012 to donate tens of millions of dollars to concussion research overseen by the National Institutes of Health, it was widely seen as a positive turning point in football’s long history of playing down the long-term effects of brain injuries on players. At the time, the league said that it would have no influence over how the money was used. But the league and its head, neck and spine committee worked to improperly influence the government research, trying to steer the study toward a doctor with ties to the league, according to a study conducted by a congressional committee and released on Monday. “Our investigation has shown that while the N.F.L. had been publicly proclaiming its role as funder and accelerator of important research, it was privately attempting to influence that research,” the study concluded. “The N.F.L. attempted to use its ‘unrestricted gift’ as leverage to steer funding away from one of its critics.” The N.F.L., in a statement, said it rejected the accusations laid out in the study, which was conducted by Democratic members of the House Committee on Energy and Commerce. “There is no dispute that there were concerns raised about both the nature of the study in question and possible conflicts of interest,” the league said. “These concerns were raised for review and consideration through the appropriate channels.” It is the latest in a long history of instances in which the N.F.L. has been found to mismanage concussion research, dating to the league’s first exploration of the crisis when it used deeply flawed data to produce a series of studies. In this case, some of the characters are the same, including Dr. Elliot Pellman, who led the league’s concussion committee for years before he was discredited for his questionable credentials and his role as a longtime denier of the effects of concussions on players. © 2016 The New York Times Company

Keyword: Brain Injury/Concussion
Link ID: 22241 - Posted: 05.24.2016

By Christie Aschwanden When concussions make the news, it’s usually about football. But head injuries happen in other sports too, and not just to men. During a congressional hearing on concussions in youth sports on Friday, Dawn Comstock, an epidemiologist who studies sports injuries, told a House Energy and Commerce subcommittee that in sports like soccer and basketball in which girls and boys play by the same rules, with the same equipment and the same facilities, “girls have higher concussion rates than boys.” Comstock, a researcher at the Colorado School of Public Health, is the first author of a 2015 study published in JAMA Pediatrics that quantified concussions in high school soccer and found that they were about one and a half times more common in girls than in boys. When U.S. Rep. Diana DeGette, D-Colo., asked whether more data was needed to show that girls have higher concussion rates, Comstock replied, “We already have the data that’s consistently shown this gender difference.” What we don’t have, she said, is a proven explanation for the discrepancy. Some researchers have wondered whether women and girls are simply more likely to report their symptoms than men and boys are. “It’s a sexist way to say that they’re not as tough,” said Katherine Price Snedaker, executive director of Pink Concussions, an organization that is seeking answers to how concussions affect women and girls. The group recently held a summit on female concussion and traumatic brain injuries at Georgetown University, and one of the speakers was Shannon Bauman, a sports physician who presented data from 207 athletes — both male and female — who’d been evaluated at her specialty concussion clinic in Barrie, Ontario, between September 2014 and January 2016.

Keyword: Brain Injury/Concussion; Sexual Behavior
Link ID: 22229 - Posted: 05.19.2016

By Matthew Hutson Last week, Nature, the world’s most prestigious science journal, published a beautiful picture of a brain on its cover. The computer-generated image, taken from a paper in the issue, showed the organ’s outer layer almost completely covered with sprinkles of colorful words. The paper presents a “semantic map” revealing which parts of the brain’s cortex—meaning its outer layer, the one responsible for higher thought—respond to various spoken words. The study has generated widespread interest, receiving coverage from newspapers and websites around the world. The paper was also accompanied by an online interactive model that allowed users to explore exactly how words are mapped in our brains. The combination yielded a popular frenzy, one prompting the question: Why are millions of people suddenly so interested in the neuroanatomical distribution of linguistic representations? Have they run out of cat videos? The answer, I think, is largely the same as the answer to why “This Is Your Brain on X” (where X = food, politics, sex, podcasts, whatever) is a staple of news headlines, often residing above an fMRI image of a brain lit up in fascinating, mysterious patterns: People have a fundamental misunderstanding of the field of neuroscience and what it can tell us. But before explaining why people shouldn’t be excited about this research, let’s look at what the research tells us and why we should be excited. Different parts of the brain process different elements of thought, and some regions of the cortex are organized into “maps” such that the distance between different locations corresponds to the physical and/or conceptual distance between what they represent.

Keyword: Brain imaging; Language
Link ID: 22186 - Posted: 05.07.2016

By BENEDICT CAREY Listening to music may make the daily commute tolerable, but streaming a story through the headphones can make it disappear. You were home; now you’re at your desk: What happened? Storytelling happened, and now scientists have mapped the experience of listening to podcasts, specifically “The Moth Radio Hour,” using a scanner to track brain activity. In a paper published Wednesday by the journal Nature, a research team from the University of California, Berkeley, laid out a detailed map of the brain as it absorbed and responded to a story. Widely dispersed sensory, emotional and memory networks were humming, across both hemispheres of the brain; no story was “contained” in any one part of the brain, as some textbooks have suggested. The team, led by Alexander Huth, a postdoctoral researcher in neuroscience, and Jack Gallant, a professor of psychology, had seven volunteers listen to episodes of “The Moth” — first-person stories of love, loss, betrayal, flight from an abusive husband, and more — while recording brain activity with an M.R.I. machine. Using novel computational methods, the group broke down the stories into units of meaning: social elements, for example, like friends and parties, as well as locations and emotions. They found that these concepts fell into 12 categories that tended to cause activation in the same parts of people’s brains at the same points throughout the stories. They then retested that model by seeing how it predicted M.R.I. activity while the volunteers listened to another Moth story. Would related words like mother and father, or times, dates and numbers trigger the same parts of people’s brains? The answer was yes. © 2016 The New York Times Company
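
As a rough illustration of the modeling logic described above (not the Berkeley team's actual code or feature set), the sketch below fits a regularized linear "encoding model" that predicts each voxel's response from semantic features of a story, then validates it on a held-out story. The data are synthetic, and the array shapes and the 12-category feature space are assumptions made for demonstration.

```python
# A minimal sketch of a semantic "encoding model" in the spirit of the
# study described above; synthetic data, illustrative shapes only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_test = 300, 100   # fMRI volumes for training/held-out stories
n_features = 12              # e.g., the 12 semantic categories reported
n_voxels = 50                # a small set of cortical voxels

true_w = rng.standard_normal((n_features, n_voxels))  # unknown in reality

def simulate(n):
    """Semantic features of a story over time, plus noisy voxel responses."""
    X = rng.standard_normal((n, n_features))
    Y = X @ true_w + 0.5 * rng.standard_normal((n, n_voxels))
    return X, Y

X_train, Y_train = simulate(n_train)
X_test, Y_test = simulate(n_test)

model = Ridge(alpha=1.0).fit(X_train, Y_train)

# Validation mirrors the study's logic: predict responses to a *new*
# story and ask how well the predictions track the measured activity.
Y_pred = model.predict(X_test)
r = [np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1] for v in range(3)]
print("per-voxel prediction correlations (first 3):", np.round(r, 2))
```

If the model has captured real structure, the per-voxel correlations on the held-out story are well above zero, which is the sense in which "the answer was yes" in the passage above.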

Keyword: Language; Brain imaging
Link ID: 22162 - Posted: 04.30.2016

By Andy Coghlan “I’ve become resigned to speaking like this,” he says. The 17-year-old boy’s mother tongue is Dutch, but for his whole life he has spoken with what sounds like a French accent. “This is who I am and it’s part of my personality,” says the boy, who lives in Belgium – where Dutch is an official language – and prefers to remain anonymous. “It has made me stand out as a person.” No matter how hard he tries, his speech sounds French. About 140 cases of foreign accent syndrome (FAS) have been described in scientific studies, but most of these people developed the condition after having a stroke. In the UK, for example, a woman in Newcastle who’d had a stroke in 2006 woke up with a Jamaican accent. Other British cases include a woman who developed a Chinese accent, and another who acquired a pronounced French-like accent overnight following a bout of cerebral vasculitis. But the teenager has had the condition from birth, sparking the interest of Jo Verhoeven of City University London and his team. Scans revealed that, compared with controls, the flow of blood to two parts of the boy’s brain was significantly reduced. One of these was the prefrontal cortex of the left hemisphere – a finding unsurprising to the team, as it is known to be associated with planning actions including speech. © Copyright Reed Business Information Ltd.

Keyword: Language
Link ID: 22161 - Posted: 04.30.2016

Ian Sample Science editor Scientists have created an “atlas of the brain” that reveals how the meanings of words are arranged across different regions of the organ. Like a colourful quilt laid over the cortex, the atlas displays in rainbow hues how individual words and the concepts they convey can be grouped together in clumps of white matter. “Our goal was to build a giant atlas that shows how one specific aspect of language is represented in the brain, in this case semantics, or the meanings of words,” said Jack Gallant, a neuroscientist at the University of California, Berkeley. No single brain region holds one word or concept. A single brain spot is associated with a number of related words. And each single word lights up many different brain spots. Together they make up networks that represent the meanings of each word we use: life and love; death and taxes; clouds, Florida and bra. All light up their own networks. Described as a “tour de force” by one researcher who was not involved in the study, the atlas demonstrates how modern imaging can transform our knowledge of how the brain performs some of its most important tasks. With further advances, the technology could have a profound impact on medicine and other fields. “It is possible that this approach could be used to decode information about what words a person is hearing, reading, or possibly even thinking,” said Alexander Huth, the first author on the study. One potential use would be a language decoder that could allow people silenced by motor neurone disease or locked-in syndrome to speak through a computer. © 2016 Guardian News and Media Limited

Keyword: Language; Brain imaging
Link ID: 22157 - Posted: 04.28.2016

Jon Hamilton People who sustain a concussion or a more severe traumatic brain injury are likely to have sleep problems that continue for at least a year and a half. A study of 31 patients with this sort of brain injury found that 18 months afterward, they were still getting, on average, an hour more sleep each night than similar healthy people were getting. And despite the extra sleep, 67 percent showed signs of excessive daytime sleepiness. Only 19 percent of healthy people had that problem. Surprisingly, most of these concussed patients had no idea that their sleep patterns had changed. "If you ask them, they say they are fine," says Dr. Lukas Imbach, the study's first author and a senior physician at the University Hospital Zurich in Zurich. When Imbach confronts patients with their test results, "they are surprised," he says. The results, published Thursday in the online edition of the journal Neurology, suggest there could be a quiet epidemic of sleep disorders among people with traumatic brain injuries. The injuries are diagnosed in more than 2 million people a year in the United States. Common causes include falls, motor vehicle incidents and assaults. Previous studies have found that about half of all people who sustain sudden trauma to the brain experience sleep problems. But it has been unclear how long those problems persist. "Nobody actually had looked into that in detail," Imbach says. A sleep disorder detected 18 months after an injury will linger for at least two years, and probably much longer, the researchers say. © 2016 npr

Keyword: Brain Injury/Concussion; Sleep
Link ID: 22155 - Posted: 04.28.2016

Laura Sanders Away from home, people sleep with one ear open. In unfamiliar surroundings, part of the left hemisphere keeps watch while the rest of the brain is deeply asleep, scientists report April 21 in Current Biology. The results help explain why the first night in a hotel isn’t always restful. Some aquatic mammals and birds sleep with half a brain at a time, a trick called unihemispheric sleep. Scientists have believed that humans, however, did not show any such asymmetry in their slumber. Study coauthor Yuka Sasaki of Brown University in Providence, R.I., and colleagues looked for signs of asymmetry on the first night that young, healthy people came into their sleep lab. Usually, scientists toss the data from the inaugural night because the sleep is so disturbed, Sasaki says. But she and her team thought that some interesting sleep patterns might lurk within that fitful sleep. “It was a little bit of a crazy hunch,” she says, “but we did it anyway.” On the first night in a sleep lab, people with more “awake” left hemispheres took longer to fall asleep. This asymmetry was largely gone on the second night, and people fell asleep more quickly. During a deep sleep stage known as slow-wave sleep, a network of nerve cells in the left side of the brain showed less sleep-related activity than the corresponding network on the right side. Those results suggest that the left side of the brain is a lighter sleeper. “It looked like the left hemisphere and the right hemisphere did not show the same degree of sleep,” Sasaki says. This imbalance disappeared on the second night of sleep. © Society for Science & the Public 2000 - 2016

Keyword: Sleep; Laterality
Link ID: 22134 - Posted: 04.23.2016

Cassie Martin The grunts, moans and wobbles of gelada monkeys, a chatty species residing in Ethiopia’s northern highlands, observe a universal mathematical principle seen until now only in human language. The new research, published online April 18 in the Proceedings of the National Academy of Sciences, sheds light on the evolution of primate communication and complex human language, the researchers say. “Human language is like an onion,” says Simone Pika, head of the Humboldt Research Group at the Max Planck Institute for Ornithology in Seewiesen, Germany, who was not involved in the study. “When you peel back the layers, you find that it is based on these underlying mechanisms, many of which were already present in animal communication. This research neatly shows there is another ability already present.” One of those mechanisms is known as Menzerath’s law, a mathematical principle that states that the longer a construct, the shorter its components. In human language, for instance, longer sentences tend to comprise shorter words. In geladas, as the number of individual calls in a vocal sequence increases, the duration of the calls tends to decrease. The gelada study is the first to observe this law in the vocalizations of a nonhuman species. “There are aspects of communication and language that aren’t as unique as we think,” says study coauthor Morgan Gustison of the University of Michigan in Ann Arbor. © Society for Science & the Public 2000 - 2016
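
To see the statistical pattern concretely, here is a small, self-contained sketch (with made-up data rather than the gelada recordings) of how Menzerath's law shows up as a negative correlation between the number of calls in a sequence and their average duration. The duration model and all parameters are invented for illustration.

```python
# A minimal sketch of the pattern the study reports (Menzerath's law):
# in longer call sequences, individual calls tend to be shorter.
import numpy as np

rng = np.random.default_rng(1)

sequences = []
for _ in range(200):
    n_calls = int(rng.integers(2, 26))  # calls in this vocal sequence
    # Simulate the law: mean call duration shrinks as sequences lengthen.
    durations = rng.gamma(shape=4.0, scale=0.1 / n_calls**0.3, size=n_calls)
    sequences.append(durations)

lengths = np.array([len(s) for s in sequences])
mean_durs = np.array([s.mean() for s in sequences])

# Menzerath's law predicts a negative correlation between sequence
# length and mean call duration.
r = np.corrcoef(lengths, mean_durs)[0, 1]
print(f"correlation(sequence length, mean call duration) = {r:.2f}")  # r < 0
```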

Keyword: Language; Evolution
Link ID: 22131 - Posted: 04.23.2016

By Catherine Matacic Simi Etedgi leans forward as she tells her story for the camera: The year was 1963, and she was just 15 as she left Morocco for Israel, one person among hundreds of thousands leaving for the new state. But her forward lean isn’t a casual gesture. Etedgi, now 68, is one of about 10,000 signers of Israeli Sign Language (ISL), a language that emerged only 80 years ago. Her lean has a precise meaning, signaling that she wants to get in an aside before finishing her tale. Her eyes sparkle as she explains that the signs used in the Morocco of her childhood are very different from those she uses now in Israel. In fact, younger signers of ISL use a different gesture to signal an aside—and they have different ways to express many other meanings as well. A new study presented at the Evolution of Language meeting here last month shows that the new generation has come up with richer, more grammatically complex utterances that use ever more parts of the body for different purposes. Most intriguing for linguists: These changes seem to happen in a predictable order from one generation to the next. That same order has been seen in young sign languages around the world, showing in visible fashion how linguistic complexity unfolds. This leads some linguists to think that they may have found a new model for the evolution of language. “This is a big hypothesis,” says cognitive scientist Ann Senghas of Barnard College in New York City, who has spent her life studying Nicaraguan Sign Language (NSL). “It makes a lot of predictions and tries to pull a lot of facts together into a single framework.” Although it’s too early to know what the model will reveal, linguists say it already may have implications for understanding how quickly key elements of language, from complex words to grammar, have evolved. © 2016 American Association for the Advancement of Science.

Keyword: Language
Link ID: 22130 - Posted: 04.23.2016

By JEFFREY M. ZACKS and REBECCA TREIMAN OUR favorite Woody Allen joke is the one about taking a speed-reading course. “I read ‘War and Peace’ in 20 minutes,” he says. “It’s about Russia.” The promise of speed reading — to absorb text several times faster than normal, without any significant loss of comprehension — can indeed seem too good to be true. Nonetheless, it has long been an aspiration for many readers, as well as the entrepreneurs seeking to serve them. And as the production rate for new reading matter has increased, and people read on a growing array of devices, the lure of speed reading has only grown stronger. The first popular speed-reading course, introduced in 1959 by Evelyn Wood, was predicated on the idea that reading was slow because it was inefficient. The course focused on teaching people to make fewer back-and-forth eye movements across the page, taking in more information with each glance. Today, apps like SpeedRead With Spritz aim to minimize eye movement even further by having a digital device present you with a stream of single words one after the other at a rapid rate. Unfortunately, the scientific consensus suggests that such enterprises should be viewed with suspicion. In a recent article in Psychological Science in the Public Interest, one of us (Professor Treiman) and colleagues reviewed the empirical literature on reading and concluded that it’s extremely unlikely you can greatly improve your reading speed without missing out on a lot of meaning. Certainly, readers are capable of rapidly scanning a text to find a specific word or piece of information, or to pick up a general idea of what the text is about. But this is skimming, not reading. We can definitely skim, and it may be that speed-reading systems help people skim better. Some speed-reading systems, for example, instruct people to focus only on the beginnings of paragraphs and chapters. This is probably a good skimming strategy. Participants in a 2009 experiment read essays that had half the words covered up — either the beginning of the essay, the end of the essay, or the beginning or end of each individual paragraph. Reading half-paragraphs led to better performance on a test of memory for the passage’s meaning than did reading only the first or second half of the text, and it worked as well as skimming under time pressure. © 2016 The New York Times Company

Keyword: Language; Attention
Link ID: 22113 - Posted: 04.18.2016

By David Shultz Mice supposedly don't speak, so they can't stutter. But by tinkering with a gene that appears to be involved in human speech, researchers have created transgenic mice whose pups produce altered vocalizations in a way that is similar to stuttering in humans. The mice could make a good model for understanding stuttering; they could also shed more light on how mutations in the gene, called Gnptab, cause the speech disorder. Stuttering is one of the most common speech disorders in the world, affecting nearly one out of 100 adults in the United States. But the cause of the stammering, fragmented speech patterns remains unclear. Several years ago, researchers discovered that stutterers often have mutations in a gene called Gnptab. Like a dispatcher directing garbage trucks, Gnptab encodes a protein that helps to direct enzymes into the lysosome—a compartment in animal cells that breaks down waste and recycles old cellular machinery. Mutations to other genes in this system are known to lead to the buildup of cellular waste products and often result in debilitating diseases, such as Tay-Sachs. How mutations in Gnptab cause stuttered speech remains a mystery, however. To get to the bottom of things, neuroscientist Terra Barnes and her team at Washington University in St. Louis in Missouri produced mice with a mutation in the Gnptab gene and studied whether it affected the ultrasonic vocalizations that newly born mouse pups emit when separated from their mothers. Determining whether a mouse is stuttering is no easy task; as Barnes points out, it can even be difficult to tell whether people are stuttering if they’re speaking a foreign language. So the team designed a computer program that listens for stuttering vocalization patterns independent of language. © 2016 American Association for the Advancement of Science.
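
The article does not spell out how the team's detector works, so the following is only a hypothetical sketch of one language-independent approach: flag a vocal bout as stutter-like when it contains an unusually long run of near-identical syllables. The function names, thresholds, and example durations are all invented for illustration, not taken from the paper.

```python
# Hypothetical sketch of a language-independent "stutter-like" detector:
# look for long runs of consecutive syllables with nearly equal durations.
def longest_repetition_run(syllable_durations, tol=0.15):
    """Length of the longest run of consecutive syllables whose durations
    differ by less than `tol` as a fraction of the previous duration."""
    best = run = 1
    for prev, cur in zip(syllable_durations, syllable_durations[1:]):
        if abs(cur - prev) / max(prev, 1e-9) < tol:
            run += 1
            best = max(best, run)
        else:
            run = 1
    return best

def looks_stutter_like(syllable_durations, min_run=5):
    return longest_repetition_run(syllable_durations) >= min_run

# Fluent-style bout: variable syllables. Stutter-like bout: a long run
# of nearly identical ones. (Durations in seconds, made up.)
fluent = [0.08, 0.15, 0.11, 0.22, 0.09, 0.18]
stuttered = [0.10, 0.10, 0.11, 0.10, 0.10, 0.10, 0.21]
print(looks_stutter_like(fluent))     # False
print(looks_stutter_like(stuttered))  # True
```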

Keyword: Language
Link ID: 22110 - Posted: 04.16.2016

By Robin Wylie Bottlenose dolphins have been observed chattering while cooperating to solve a tricky puzzle – a feat that suggests they have a type of vocalisation dedicated to cooperating on problem solving. Holli Eskelinen of Dolphins Plus research institute in Florida and her colleagues at the University of Southern Mississippi presented a group of six captive dolphins with a locked canister filled with food. The canister could only be opened by simultaneously pulling on a rope at either end. The team conducted 24 canister trials, during which all six dolphins were present. Only two of the dolphins ever managed to crack the puzzle and get to the food. The successful pair was prolific, though: in 20 of the trials, the same two adult males worked together to open the food canister in a matter of minutes. In the other four trials, one of the dolphins managed to solve the problem on its own, but this was much trickier and took longer to execute. But the real surprise came from recordings of the vocalisations the dolphins made during the experiment. The team found that when the dolphins worked together to open the canister, they made around three times more vocalisations than they did while opening the canister on their own or when there was either no canister present or no interaction with the canister in the pool. © Copyright Reed Business Information Ltd.

Keyword: Language; Evolution
Link ID: 22107 - Posted: 04.16.2016

By Frank McGurty More than 40 percent of retired NFL players tested with advanced scanning technology showed signs of traumatic brain injury, a much higher rate than in the general population, according to a new study of the long-term risks of playing American football. The research, presented at an American Academy of Neurology meeting that began in Vancouver on Monday, is one of the first to provide "objective evidence" of traumatic brain injury in a large sample of National Football League veterans while they are living, said Dr. Francis X. Conidi, one of the study's authors. Conidi, a neurologist at the Florida Center for Headache and Sports Neurology and a faculty member at the Florida State University College of Medicine, said traumatic brain injury was often a "precursor" to CTE, a degenerative brain disease. "What we do know is that players with traumatic brain injury have a high incidence of going on to develop neurological degenerative disease later on in life," Conidi told Reuters. CTE, or chronic traumatic encephalopathy, has been found in dozens of the NFL's top players after they died. At present, a CTE diagnosis is only possible after death. The brain tissue of 59 of 62 deceased former NFL players examined by Boston University's CTE Center has tested positive for CTE, according to its website. The disease, which can lead to aggression and dementia, may have led to the suicides of several NFL athletes, including Hall of Famer Junior Seau. In the new study, the largest of its kind, 40 living former players were given sensitive brain scans, known as diffusion tensor imaging (DTI), as well as thinking and memory tests. © 2016 Scientific American,

Keyword: Brain Injury/Concussion; Brain imaging
Link ID: 22102 - Posted: 04.13.2016

By Catherine Matacic How does sign language develop? A new study shows that it takes less than five generations for people to go from simple, unconventional pantomimes—essentially telling a story with your hands—to stable signs. Researchers asked a group of volunteers to invent their own signs for a set of 24 words in four separate categories: people, locations, objects, and actions. Examples included “photographer,” “darkroom,” and “camera.” After an initial group made up the signs—pretending to shoot a picture with an old-fashioned camera for “photographer,” for example—they taught the signs to a new generation of learners. That generation then played a game where they tried to guess what sign another player in their group was making. When they got the answer right, they taught that sign to a new generation of volunteers. After a few generations, the volunteers stopped acting out the words with inconsistent gestures and started making them in ways that were more systematic and efficient. What’s more, they added markers for the four categories—pointing to themselves if the category were “person” or making the outline of a house if the category were “location,” for example—and they stopped repeating gestures, the researchers reported last month at the Evolution of Language conference in New Orleans, Louisiana. So in the video above, the first version of “photographer” is unpredictable and long, compared with the final version, which uses the person marker and takes just half the time. The researchers say their finding supports the work of researchers in the field, who have found similar patterns of development in newly emerging sign languages. The results also suggest that learning and social interaction are crucial to this development. © 2016 American Association for the Advancement of Science

Keyword: Language
Link ID: 22084 - Posted: 04.09.2016