Chapter 19. Language and Hemispheric Asymmetry




Fork-tailed drongos, glossy black African songbirds with ruby-colored eyes, are the avian kingdom’s masters of deception. They mimic the alarm calls of other species to scare animals away and then swipe their dupes’ dinner. But like the boy who cried wolf, drongos can raise the alarm once too often. Now, scientists have discovered that when one false alarm no longer works, the birds switch to another species’ warning cry, a tactic that usually does the trick. “The findings are astounding,” says John Marzluff, a wildlife biologist at the University of Washington, Seattle, who was not involved in the work. “Drongos are exceedingly deceptive; their vocabularies are immense; and they match their deception to both the target animal and [its] past response. This level of sophistication is incredible.” Since 2008, Tom Flower, an evolutionary biologist at the University of Cape Town, has followed drongos in the Kuruman River Reserve in the Kalahari Desert. He’s habituated and banded about 200 of the robin-sized birds, and, using food rewards, has trained individuals to come to him when he calls. After getting its snack, the drongo quickly returns to its natural behavior—catching insects and following other bird species or meerkats—while Flower tags along. Drongos also keep an eye out for raptors and other predators. When they spot one, they utter metallic alarm cries. Meerkats and pied babblers, a highly social bird, pay attention to the drongos and dash for cover when the drongos raise an alarm—just as they do when one of their own calls out a warning. Studies have shown that having drongos around benefits animals of other species, which don’t have to be as vigilant and can spend more time foraging. But there’s a trade-off: The drongos’ cries aren’t always honest. When a meerkat has caught a fat grub or gecko, a drongo is apt to change from trustworthy sentinel to wily deceiver. © 2014 American Association for the Advancement of Science.

Keyword: Animal Communication
Link ID: 19563 - Posted: 05.03.2014

Brian Owens If you think you know what you just said, think again. People can be tricked into believing they have just said something they did not, researchers report this week. The dominant model of how speech works is that it is planned in advance — speakers begin with a conscious idea of exactly what they are going to say. But some researchers think that speech is not entirely planned, and that people know what they are saying in part through hearing themselves speak. So cognitive scientist Andreas Lind and his colleagues at Lund University in Sweden wanted to see what would happen if someone said one word, but heard themselves saying another. “If we use auditory feedback to compare what we say with a well-specified intention, then any mismatch should be quickly detected,” he says. “But if the feedback is instead a powerful factor in a dynamic, interpretative process, then the manipulation could go undetected.” In Lind’s experiment, participants took a Stroop test — in which a person is shown, for example, the word ‘red’ printed in blue and is asked to name the colour of the type (in this case, blue). During the test, participants heard their responses through headphones. The responses were recorded so that Lind could occasionally play back the wrong word, giving participants auditory feedback of their own voice saying something different from what they had just said. Lind chose the words ‘grey’ and ‘green’ (grå and grön in Swedish) to switch, as they sound similar but have different meanings. © 2014 Nature Publishing Group

Keyword: Language
Link ID: 19562 - Posted: 05.03.2014

Does reading faster mean reading better? That’s what speed-reading apps claim, promising to boost not just the number of words you read per minute, but also how well you understand a text. There’s just one problem: The same thing that speeds up reading actually gets in the way of comprehension, according to a new study. When you read at your natural pace, your eyes move back and forth across a sentence, rather than plowing straight through to the end. Apps like Spritz or the aptly named Speed Read are built around the idea that these eye movements, called saccades, are a redundant waste of time. It’s more efficient, their designers claim, to present words one at a time in a fixed spot on a screen, discouraging saccades and helping you get through a text more quickly. This method, called rapid serial visual presentation (RSVP), has been controversial since the 1980s, when tests showed it impaired comprehension, though researchers weren’t quite sure why. With a new crop of speed-reading products on the market, psychologists decided to dig a bit more and uncovered a simple explanation for RSVP’s flaw: Every so often, we need to scan backward and reread for a better grasp of the material. Researchers demonstrated that need by presenting 40 college students with ambiguous, unpunctuated sentences ("While the man drank the water that was clear and cold overflowed from the toilet”) while following their subjects’ gaze with an eye-tracking camera. Half the time, the team crossed out words participants had already read, preventing them from rereading (“xxxxx xxx xxx drank the water …”). Following up with basic yes-no questions about each sentence’s content, they found that comprehension dropped by about 25% in trials that blocked rereading versus those that didn’t, the researchers report online this month in Psychological Science. Crucially, the drop was about the same when subjects could, but simply hadn’t, reread parts of a sentence. Nor did the results differ much when using ambiguous sentences or their less confusing counterparts (“While the man slept the water …”). Turns out rereading isn’t a waste of time—it’s essential for understanding. © 2014 American Association for the Advancement of Science.
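To make the mechanics concrete, here is a minimal sketch of RSVP-style presentation, the technique the apps and the experiment are built around. It is an illustration only, not code from Spritz or from the study; the word rate and display width are arbitrary assumptions. It flashes each word of a sentence at one fixed spot in the terminal, which is exactly the condition that makes backward glances, and hence rereading, impossible.

```python
import sys
import time

def rsvp(text, wpm=300):
    """Flash each word of `text` at a fixed position, RSVP-style.
    wpm = words per minute; 300 is an arbitrary demo rate."""
    delay = 60.0 / wpm  # seconds each word remains visible
    for word in text.split():
        # Overwrite the same spot on the line, so the eyes never have to move.
        sys.stdout.write("\r" + word.center(30))
        sys.stdout.flush()
        time.sleep(delay)
    sys.stdout.write("\n")

if __name__ == "__main__":
    # The ambiguous garden-path sentence used in the study.
    rsvp("While the man drank the water that was clear and cold "
         "overflowed from the toilet")
```

Running this at 300 or 600 words per minute makes the trade-off described above easy to feel: the stream is fast, but there is no way to glance back at "drank the water" once the sentence turns out to be a garden path.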

Keyword: Language
Link ID: 19534 - Posted: 04.26.2014

By Linda Carroll A college education may do a lot more than provide better job opportunities — it may also make brains more resilient to trauma, a new study suggests. The more years of education people have, the more likely they are to recover from a traumatic brain injury, according to the study published Wednesday in Neurology. In fact, one year after a traumatic brain injury, people with a college education were nearly four times as likely as those who hadn’t finished high school to return to work or school with no disability. Earlier studies had shown that education might have a protective effect when it comes to degenerative brain diseases like Alzheimer’s. Scientists have theorized that education leads to greater “cognitive reserve,” which allows people to overcome or compensate for brain damage. So if there are two people with the same degree of damage from Alzheimer’s, the more highly educated one will show fewer symptoms. The assumption is that education changes and expands the brain, leaving it better able to cope with challenges. “Added capacity allows us to either work around the damaged areas or to adapt,” said Eric B. Schneider, an assistant professor of surgery at the Johns Hopkins School of Medicine. Schneider and his colleagues suspected that cognitive reserve might play as important a role in helping people recover from acute brain damage that results from falls, car crashes and other accidents as it does in Alzheimer’s disease.

Keyword: Brain Injury/Concussion
Link ID: 19530 - Posted: 04.24.2014

I am a sociologist by training. I come from the academic world, reading scholarly articles on topics of social import, but they're almost always boring, dry and quickly forgotten. Yet I can't count how many times I've gone to a movie or a theater production, or read a novel, and been jarred into seeing something differently, learned something new, felt deep emotions and retained the insights gained. I know from both my research and casual conversations with people in daily life that my experiences are echoed by many. The arts can tap into issues that are otherwise out of reach and engage people in meaningful ways. This realization brought me to arts-based research (ABR). Arts-based research is an emergent paradigm whereby researchers across the disciplines adapt the tenets of the creative arts in their social research projects. Arts-based research, a term coined by Eliot Eisner at Stanford University in the early 90s, is based on the assumption that art can teach us in ways that other forms cannot. Scholars can take interview or survey research, for instance, and represent it through art. I've written two novels based on sociological interview research. Sometimes researchers use the arts during data collection, involving research participants in the art-making process, such as drawing their response to a prompt rather than speaking. The turn by many scholars to arts-based research is most simply explained by my opening example of comparing the experience of consuming jargon-filled and inaccessible academic articles to that of experiencing artistic works. While most people know on some level that the arts can reach and move us in unique ways, there is actually science behind this. ©2014 TheHuffingtonPost.com, Inc

Keyword: Language
Link ID: 19528 - Posted: 04.24.2014

BY Ellen Rolfes Rebecca Kamen’s sculptures appear as delicate as the brain itself. Thin, green branches stretch from a colorful mass of vein-like filaments. The branches, made from pieces of translucent mylar and stained with diluted acrylic paint, are so delicate that they sway slightly when mounted to the wall. Perched on various parts of the sculpture are mylar butterflies, whose wings also move, as if fluttering. One of Kamen's artistic influences is the writing of Santiago Ramon y Cajal, who is called the "father of modern neuroscience." Cajal once said: “Like the entomologist in search of colorful butterflies, my attention has chased in the gardens of the grey matter cells with delicate and elegant shapes, the mysterious butterflies of the soul, whose beating of wings may one day reveal to us the secrets of the mind." The work, called “Butterflies of the Soul,” was inspired by Cajal, who won the 1906 Nobel Prize for his groundbreaking work on the human nervous system. Kamen’s sculpture is a nod to his work and the development of modern neuroscience. Cajal’s observation of the cells under the microscope radically changed how scientists study the brain and its functions, Kamen said. And the butterflies in her sculpture represent Cajal’s drawings of Purkinje cells, which are found in the cerebellar cortex at the base of the brain. Purkinje cells play an important role in motor control and in certain cognitive functions, such as attention and language. And attention and language are skills of great interest to Kamen, who has dyslexia. Her fascination with the brain and its structure deepened when she discovered that she was dyslexic later in life. © 1996 - 2014 MacNeil / Lehrer Productions.

Keyword: Dyslexia
Link ID: 19503 - Posted: 04.17.2014

By KATHERINE BOUTON Like almost all newborns in this country, Alex Justh was given a hearing test at birth. He failed, but his parents were told not to worry: He was a month premature and there was mucus in his ears. A month later, an otoacoustic emission test, which measures the response of hair cells in the inner ear, came back normal. Alex was the third son of Lydia Denworth and Mark Justh (pronounced Just), and at first they “reveled at what a sweet and peaceful baby he was,” Ms. Denworth writes in her new book, “I Can Hear You Whisper: An Intimate Journey Through the Science of Sound and Language,” being published this week by Dutton. But Alex began missing developmental milestones. He was slow to sit up, slow to stand, slow to walk. His mother felt a “vague uneasiness” at every delay. He seemed not to respond to questions, the kind one asks a baby: “Can you show me the cow?” she’d ask, reading “Goodnight, Moon.” Nothing. No response. At 18 months Alex unequivocally failed a hearing test, but there was still fluid in his ears, so the doctor recommended a second test. It wasn’t until 2005, when Alex was 2 ½, that they finally realized he had moderate to profound hearing loss in both ears. This is very late to detect deafness in a child; the ideal time is before the first birthday. Alex’s parents took him to Dr. Simon Parisier, an otolaryngologist at New York Eye and Ear Infirmary, who recommended a cochlear implant as soon as possible. “Age 3 marked a critical juncture in the development of language,” Ms. Denworth writes. “I began to truly understand that we were not just talking about Alex’s ears. We were talking about his brain.” © 2014 The New York Times Company

Keyword: Hearing
Link ID: 19485 - Posted: 04.15.2014

By Daisy Yuhas Greetings from Boston, where the 21st annual meeting of the Cognitive Neuroscience Society is underway. Saturday and Sunday were packed with symposia, lectures and more than 400 posters. Here are just a few of the highlights. The bilingual brain has been a hot topic at the meeting this year, particularly as researchers grapple with the benefits and challenges of language learning. In news that will make many college language majors happy, a group of researchers led by Harriet Wood Bowden of the University of Tennessee-Knoxville has demonstrated that years of language study alter a person’s brain processing to be more like a native speaker’s brain. They found that native English-speaking students with about seven semesters of study in Spanish show very similar brain activation to native speakers when processing spoken Spanish grammar. The study used electroencephalography, or EEG, in which electrodes are placed along the scalp to pick up and measure the electrical activity of neurons in the brain below. By contrast, students who have more recently begun studying Spanish show markedly different processing of these elements of the language. The study focused on the recognition of noun-adjective agreement, particularly in gender and number. Accents, however, can remain harder to master. Columbia University researchers worked with native Spanish speakers to study the difficulties encountered in hearing and reproducing English vowel sounds that are not used in Spanish. The research focused on the distinction between the extended o sound in “dock” and the soft u sound in “duck,” which is not part of spoken Spanish. The scientists used electroencephalograms to measure the brain responses to these vowel sounds in native English and native Spanish speakers. © 2014 Scientific American

Keyword: Language
Link ID: 19467 - Posted: 04.10.2014

By Deborah Serani Sometimes I work with children and adults who can’t put words to their feelings and thoughts. It’s not that they don’t want to – it’s more that they don’t know how. The clinical term for this experience is alexithymia, defined as the inability to recognize emotions and their subtleties and textures [1]. Alexithymia throws a monkey wrench into a person’s ability to know their own self-experience or understand the intricacies of what others feel and think. Here are a few examples of what those with alexithymia experience: difficulty identifying different types of feelings; limited understanding of what causes feelings; difficulty expressing feelings; difficulty recognizing facial cues in others; limited or rigid imagination; a constricted style of thinking; hypersensitivity to physical sensations; and a detached or tentative connection to others. Alexithymia was first mentioned as a psychological construct in 1976 and was viewed as a deficit in emotional awareness [2]. Research suggests that approximately 8% of males and 2% of females experience alexithymia, and that it can come in mild, moderate and severe intensities [3]. Studies also show that alexithymia has two dimensions – a cognitive dimension, where a child or adult struggles to identify, interpret and verbalize feelings (the “thinking” part of our emotional experience), and an affective dimension, where difficulties arise in reacting, expressing, feeling and imagining (the “experiencing” part of our emotional experience) [4]. © 2014 Scientific American

Keyword: Emotions
Link ID: 19449 - Posted: 04.05.2014

by Bob Holmes People instinctively organise a new language according to a logical hierarchy, not simply by learning which words go together, as computer translation programs do. The finding may add further support to the notion that humans possess a "universal grammar", or innate capacity for language. The existence of a universal grammar has been in hot dispute among linguists ever since Noam Chomsky first proposed the idea half a century ago. If the theory is correct, this innate structure should leave some trace in the way people learn languages. To test the idea, Jennifer Culbertson, a linguist at George Mason University in Fairfax, Virginia, and her colleague David Adger of Queen Mary University of London, constructed an artificial "nanolanguage". They presented English-speaking volunteers with two-word phrases, such as "shoes blue" and "shoes two", which were supposed to belong to a new language somewhat like English. They then asked the volunteers to choose whether "shoes two blue" or "shoes blue two" would be the correct three-word phrase. In making this choice, the volunteers – who hadn't been exposed to any three-word phrases – would reveal their innate bias in language-learning. Would they rely on familiarity ("two" usually precedes "blue" in English), or would they follow a semantic hierarchy and put "blue" next to "shoe" (because it modifies the noun more tightly than "two", which merely counts how many)? © Copyright Reed Business Information Ltd.

Keyword: Language
Link ID: 19433 - Posted: 04.01.2014

by Hal Hodson Software has performed the first real-time translation of a dolphin whistle – and better data tools are giving fresh insights into primate communication, too. It was late August 2013 and Denise Herzing was swimming in the Caribbean. The dolphin pod she had been tracking for the past 25 years was playing around her boat. Suddenly, she heard one of them say, "Sargassum". "I was like whoa! We have a match. I was stunned," says Herzing, who is the director of the Wild Dolphin Project. She was wearing a prototype dolphin translator called Cetacean Hearing and Telemetry (CHAT) and it had just translated a live dolphin whistle for the first time. It detected a whistle for sargassum, or seaweed, which she and her team had invented to use when playing with the dolphin pod. They hoped the dolphins would adopt the whistles, which are easy to distinguish from their own natural whistles – and they were not disappointed. When the computer picked up the sargassum whistle, Herzing heard her own recorded voice saying the word into her ear. As well as boosting our understanding of animal behaviour, the moment hints at the potential for using algorithms to analyse any activity where information is transmitted – including our daily activities. "It sounds like a fabulous observation, one you almost have to resist speculating on. It's provocative," says Michael Coen, a biostatistician at the University of Wisconsin-Madison. © Copyright Reed Business Information Ltd.

Keyword: Animal Communication
Link ID: 19418 - Posted: 03.27.2014

Imagine you’re calling a stranger—a possible employer, or someone you’ve admired from a distance—on the telephone for the first time. You want to make a good impression, and you’ve rehearsed your opening lines. What you probably don’t realize is that the person you’re calling is going to size you up the moment you utter “hello.” Psychologists have discovered that the simple, two-syllable sound carries enough information for listeners to draw conclusions about the speaker’s personality, such as how trustworthy he or she is. The discovery may help improve computer-generated and voice-activated technologies, experts say. “They’ve confirmed that people do make snap judgments when they hear someone’s voice,” says Drew Rendall, a psychologist at the University of Lethbridge in Canada. “And the judgments are made on very slim evidence.” Psychologists have shown that we can determine a great deal about someone’s personality by listening to them. But these researchers looked at what others hear in someone’s voice when listening to a lengthy speech, says Phil McAleer, a psychologist at the University of Glasgow in the United Kingdom and the lead author of the new study. No one had looked at how short a sentence we need to hear before making an assessment, although other studies had shown that we make quick judgments about people’s personalities from a first glance at their faces. “You can pick up clues about how dominant and trustworthy someone is within the first few minutes of meeting a stranger, based on visual cues,” McAleer says. To find out if there is similar information in a person’s voice, he and his colleagues decided to test “one of the quickest and shortest of sociable words, ‘Hello.’ ” © 2014 American Association for the Advancement of Science.

Keyword: Language
Link ID: 19358 - Posted: 03.13.2014

Matt Kaplan Humans are among the very few animals that constitute a threat to elephants. Yet not all people are a danger — and elephants seem to know it. The giants have shown a remarkable ability to use sight and scent to distinguish between African ethnic groups that have a history of attacking them and groups that do not. Now a study reveals that they can even discern these differences from words spoken in the local tongues. Biologists Karen McComb and Graeme Shannon at the University of Sussex in Brighton, UK, guessed that African elephants (Loxodonta africana) might be able to listen to human speech and make use of what they heard. To tease out whether this was true, they recorded the voices of men from two Kenyan ethnic groups calmly saying, “Look, look over there, a group of elephants is coming,” in their native languages. One of these groups was the semi-nomadic Maasai, some of whom periodically kill elephants during fierce competition for water or cattle-grazing space. The other was the Kamba, a crop-farming group that rarely has violent encounters with elephants. The researchers played the recordings to 47 elephant family groups at Amboseli National Park in Kenya and monitored the animals' behaviour. The differences were remarkable. When the elephants heard the Maasai, they were much more likely to cautiously smell the air or huddle together than when they heard the Kamba. Indeed, the animals bunched together nearly twice as tightly when they heard the Maasai. “We knew elephants could distinguish the Maasai and Kamba by their clothes and smells, but that they can also do so by their voices alone is really interesting,” says Fritz Vollrath, a zoologist at the University of Oxford, UK. © 2014 Nature Publishing Group

Keyword: Intelligence
Link ID: 19349 - Posted: 03.11.2014

By ALBERT SUN On a frigid night recently in Randolph, N.J., the Jersey Wildcats junior hockey team flew across the home rink during practice at Aspen Ice Arena, sending ice into the air. Hockey is known for its collisions, and concussions aren’t unusual, but the players didn’t seem particularly worried. On the backs of their heads were flashing green lights, signifying that all was well. “We’ll be behind the bench, and as soon as a player comes back we can look right down and it’ll be a nice light,” said the coach, Justin Stanlick. If the light changes color, “we can know that player needs to go see a trainer to get cleared.” The light is part of a head impact sensor called the Checklight, made by Reebok. The device is a black skullcap with an electronic strip and three lights on the back. It blinks green when a player has sustained no head impact on the ice, yellow after a moderate impact and red after a severe one. The Checklight relies on an accelerometer and a gyroscope to measure the force of an impact. Coaches and parents have only to look to see if a player has taken a serious blow. And because the sensors are objective, Reebok executives say, they may lessen the pressure on young athletes to project toughness and play through a concussion. Gage Malinowski, a 19-year-old defenseman for the Wildcats, recently returned to practice after suffering the latest in a series of concussions during a game in February. “There’s not a game where I don’t have at least 10 hits,” he said. © 2014 The New York Times Company
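As a rough illustration of the logic such a sensor implements, here is a sketch that maps a measured peak head acceleration to the Checklight's three light states. The threshold values, and the use of linear acceleration alone, are assumptions made for the example: Reebok has not published the device's actual cutoffs, and the real Checklight also incorporates gyroscope (rotational) data.

```python
from enum import Enum

class Light(Enum):
    GREEN = "no significant impact"
    YELLOW = "moderate impact"
    RED = "severe impact"

# Hypothetical cutoffs on peak linear acceleration, in g.
# Placeholder values for illustration, not Reebok's specification.
YELLOW_THRESHOLD_G = 40.0
RED_THRESHOLD_G = 80.0

def classify_impact(peak_g: float) -> Light:
    """Map a measured head impact to a Checklight-style indicator."""
    if peak_g >= RED_THRESHOLD_G:
        return Light.RED
    if peak_g >= YELLOW_THRESHOLD_G:
        return Light.YELLOW
    return Light.GREEN

if __name__ == "__main__":
    for g in (12.0, 55.0, 95.0):
        print(f"{g:5.1f} g -> {classify_impact(g).name}")
```

The appeal of such a design, as the article notes, is that the readout is objective: the decision to check a player rests on a threshold crossing rather than on the athlete's willingness to report symptoms.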

Keyword: Brain Injury/Concussion
Link ID: 19343 - Posted: 03.11.2014

by Graham Lawton In August 2013, professional rugby union player Andy Hazell received a massive blow to the head while playing for his club Gloucester. Six "horrendous" months later he retired from the game, stricken by dizziness, mood swings and a sense of detachment. Hazell isn't the first rugby player to experience concussion during a game, and probably won't be the last to have to retire as a result. According to a campaign launched this week, rugby union players don't know enough about the risks of concussion – and the governing bodies aren't doing enough to prevent it. The problem isn't so much one-off blows like the one that ended Hazell's career, but long-term damage caused by repeated concussions over many years. Studies of boxers and American footballers have shown that these can lead to a degenerative brain disease called Chronic Traumatic Encephalopathy (CTE). CTE leads to memory problems, personality change and slowness of movement. It usually shows up in middle age, long after a sporting career is over. CTE has been an issue in American Football for years. Thousands of ex-professionals sued the National Football League alleging that it knew about the risks but covered them up. Last year the NFL offered a $765 million settlement package. Neurologists have long suspected that other contact sports might also lead to CTE – particularly rugby union because of its emphasis on high-speed "hits". Concussion is the fourth most common injury in the professional game. © Copyright Reed Business Information Ltd.

Keyword: Brain Injury/Concussion
Link ID: 19330 - Posted: 03.08.2014

By JOHN BRANCH Chronic traumatic encephalopathy, the degenerative brain disease linked to repeated blows to the head, has been found posthumously in a 29-year-old former soccer player, the strongest indication yet that the condition is not limited to athletes who played sports known for violent collisions, like football and boxing. Researchers at Boston University and the VA Boston Healthcare System, who have diagnosed scores of cases of C.T.E., said the player, Patrick Grange of Albuquerque, was the first named soccer player found to have C.T.E. On a four-point scale of severity, his disease was considered Stage 2. Soccer is a physical game but rarely a violent one. Players sometimes collide or fall to the ground, but the most repeated blows to the head may come from the act of heading an airborne ball — to redirect it purposely — in games and practices. Grange, who died in April after being found to have amyotrophic lateral sclerosis, was especially proud of his ability to head the ball, said his parents, Mike and Michele. They recalled him as a 3-year-old, endlessly tossing a soccer ball into the air and heading it into a net, a skill that he continued to practice and display in college and in top-level amateur and semiprofessional leagues in his quest to play Major League Soccer. Grange sustained a few memorable concussions, his parents said — falling hard as a toddler, being knocked unconscious in a high school game and once receiving 17 stitches in his head after an on-field collision in college. “He had very extensive frontal lobe damage,” said Dr. Ann McKee, the neuropathologist who performed the brain examination on Grange. “We have seen other athletes in their 20s with this level of pathology, but they’ve usually been football players.” © 2014 The New York Times Company

Keyword: Brain Injury/Concussion
Link ID: 19300 - Posted: 02.27.2014

National Institutes of Health researchers have identified gene variants that cause a rare syndrome of sporadic fevers, skin rashes and recurring strokes, beginning early in childhood. The team’s discovery coincides with findings by an Israeli research group that identified an overlapping set of variants of the same gene in patients with a similar type of blood vessel inflammation. The NIH group first encountered a patient with the syndrome approximately 10 years ago. The patient, then 3 years old, experienced fevers, skin rash and strokes that left her severely disabled. Because there was no history of a similar illness in the family, the NIH group did not at first suspect a genetic cause, and treated the patient with immunosuppressive medication. However, when the NIH team evaluated a second patient with similar symptoms two years ago — a child who had experienced recurrent fevers and six strokes by her sixth birthday — they began to suspect a common genetic cause and embarked on a medical odyssey that has led not only to a diagnosis, but to fundamental new insights into blood vessel disease. In their study, which appears in the Feb. 19, 2014, advance online edition of the New England Journal of Medicine, the researchers describe how next-generation genome sequencing, only recently available, facilitated a molecular diagnosis for their patients. The researchers found that harmful variants in the CECR1 gene impede production of a protein vital to the integrity of healthy blood vessel walls. They showed that the faulty variants in their patients’ DNA cause a loss of the CECR1 gene’s ability to produce an enzyme called adenosine deaminase 2 (ADA2). Without this enzyme, abnormalities and inflammation in blood vessel walls result. The researchers call the new syndrome deficiency of ADA2, or DADA2.

Keyword: Stroke
Link ID: 19277 - Posted: 02.22.2014

Adrienne LaFrance For the better part of the past decade, Mark Kirby has been pouring drinks and booking gigs at the 55 Bar in New York City's Greenwich Village. The cozy dive bar is a neighborhood staple for live jazz that opened on the eve of Prohibition in 1919. It was the year Congress agreed to give American women the right to vote, and jazz was still in its infancy. Nearly a century later, the den-like bar is an anchor to the past in a city that's always changing. For Kirby, every night of work offers the chance to hear some of the liveliest jazz improvisation in Manhattan, an experience that's a bit like overhearing a great conversation. "There is overlapping, letting the other person say their piece, then you respond," Kirby told me. "Threads are picked up then dropped. There can be an overall mood and going off on tangents." The idea that jazz can be a kind of conversation has long been an area of interest for Charles Limb, an otolaryngological surgeon at Johns Hopkins. So Limb, a musician himself, decided to map what was happening in the brains of musicians as they played. He and a team of researchers conducted a study that involved putting a musician in a functional MRI machine with a keyboard, and having him play a memorized piece of music and then a made-up piece of music as part of an improvisation with another musician in a control room. What researchers found: The brains of jazz musicians who are engaged with other musicians in spontaneous improvisation show robust activation in the same brain areas traditionally associated with spoken language and syntax. In other words, improvisational jazz conversations "take root in the brain as a language," Limb said. At the same time, brain areas linked to meaning shut down during these improvisational interactions: this music is syntactic, not semantic. © 2014 by The Atlantic Monthly Group

Keyword: Hearing
Link ID: 19275 - Posted: 02.20.2014

CHICAGO, ILLINOIS—Chances are, your baby won’t respond to questions like, “How was your day, honey?” Or, “What do you want to be when you grow up?” But just because infants can’t form sentences until toddlerhood doesn’t mean that they don’t benefit from early conversations with their parents. It’s long been observed that the better children perform in school and the more successful their careers, the higher the socioeconomic status (SES) of their family—and, according to Stanford University’s Anne Fernald, this has a lot to do with how parents of different SES speak to their babies. Those babies that are spoken to frequently in an engaging and nurturing way—generally from a higher SES—tend to develop faster word-processing skills, or the ability to follow a sentence from one object or setting to another. This word processing speed, in turn, directly relates to the development not just of vocabulary and language skills, but also memory and nonverbal cognitive abilities. In a new study, Fernald and colleagues measured parent-baby banter from round-the-clock recordings in babies’ homes, then tested those babies’ word-processing speed using retinal-following experiments that tracked how long it took them to follow a prompt to an image like a dog or juice. The researchers found that the differences in word-processing speed between high and low SES were stark: By 2 years of age, high SES children were 6 months ahead of their low SES counterparts; and by age 3, the differences in processing abilities were highly predictive of later performance in and out of school, the team reported here today at the annual meeting of AAAS, which publishes Science. Fernald hopes that this research will lead to interventions that help to shrink the language gap between kids on either side of the income gap. © 2014 American Association for the Advancement of Science

Keyword: Development of the Brain
Link ID: 19249 - Posted: 02.15.2014

By Dina Fine Maron Concussions are a major problem in football. But brain injury is a growing concern in soccer, too, usually resulting from heading the ball or collisions. A meta-analysis of existing studies finds that concussions accounted for between 6 and 9 percent of all injuries sustained on soccer fields. Most of those concussions come when two players make for the ball, often when a player’s elbow, arm or hand inadvertently makes contact with another player’s head. But we’re not just talking about injuries to professionals. One study shows some 63 percent of all varsity soccer players have sustained concussions—yet only 19 percent realized it. And another says girls’ soccer can be particularly brutal, accounting for 8 percent of all sports-related concussions among high school girls. The findings are in the journal Brain Injury. [Monica E. Maher et al., Concussions and heading in soccer: A review of the evidence of incidence, mechanisms, biomarkers and neurocognitive outcomes] Professional players who reported extensive heading of the ball during their careers did the poorest in tests of verbal and visual memory compared with other players. Goalies and defenders were most likely to get concussions. So if you want to bend it like Beckham, maybe focus on playing midfield or offense. Padding the goal posts would also be a heads-up policy. © 2014 Scientific American

Keyword: Brain Injury/Concussion
Link ID: 19245 - Posted: 02.13.2014