Chapter 19. Language and Hemispheric Asymmetry
By Lucy Wallis, BBC News. Abby and Brittany Hensel are conjoined twins determined to live the normal, active life of outgoing 20-somethings anywhere. They have been to university, they travel, they have jobs. But how easy is it for two people to inhabit one body? Like most 23-year-olds, Abby and Brittany Hensel love spending time with their friends, going on holiday, driving, playing sports such as volleyball and living life to the full. The identical, conjoined twins from Minnesota, in the United States, have graduated from Bethel University and are setting out on their careers as primary school teachers with an emphasis on maths. Although they have two teaching licences, there is one practical difference when it comes to the finances. "Obviously right away we understand that we are going to get one salary because we're doing the job of one person," says Abby. "As maybe experience comes in we'd like to negotiate a little bit, considering we have two degrees and because we are able to give two different perspectives or teach in two different ways." "One can be teaching and one can be monitoring and answering questions," says Brittany. "So in that sense we can do more than one person." Their friend Cari Jo Hohncke has always admired the sisters' teamwork. "They are two different girls, but yet they are able to work together to do the basic functions that I do every day that I take for granted," says Hohncke. BBC © 2013
Link ID: 18072 - Posted: 04.25.2013
By Meghan Holohan Need to remember some important facts for that big presentation at work? Clench your right hand while preparing to remember. When giving that talk, ball up your left hand and you’ll call to mind those details, no problem. That’s the finding from a new study authored by Ruth Propper, an associate professor and director of the cerebral lateralization laboratory at Montclair State University. Propper has long been intrigued by how body movements impact how the brain works. While most people realize that the brain influences the body (the brain tells your arm there is an itch, and you feel it), less is understood about how the body sways the brain. Past research suggests that clenching our hands can evoke emotions. When people ball up their right hands, for example, the left sides of their brains become more active, causing what’s known as “approach emotions,” feelings such as happiness or excitement. By squeezing the left hand, people engage the right side of the brain, which controls “withdrawal emotions” such as introversion, fear, or anxiety. (It probably seems like these might be less useful, but they come in handy in dangerous situations.) Propper theorized that if clenching hands impacted feelings, these gestures might influence the brain in other ways. © 2013 NBCNews.com
By Tanya Lewis. The lip-smacking vocalizations gelada monkeys make are surprisingly similar to human speech, a new study finds. Many nonhuman primates demonstrate lip-smacking behavior, but geladas are the only ones known to make undulating sounds, known as "wobbles," at the same time. (The wobbling sounds a little like a human hum would sound if the volume were being turned on and off rapidly.) The findings show that lip-smacking could have been an important step in the evolution of human speech, researchers say. "Our finding provides support for the lip-smacking origins of speech because it shows that this evolutionary pathway is at least plausible," Thore Bergman of the University of Michigan in Ann Arbor, author of the study published today (April 8) in the journal Current Biology, said in a statement. "It demonstrates that nonhuman primates can vocalize while lip-smacking to produce speechlike sounds." Lip-smacking -- rapidly opening and closing the mouth and lips -- shares some of the features of human speech, such as rapid fluctuations in pitch and volume. Bergman first noticed the similarity while studying geladas in the remote mountains of Ethiopia. He would often hear vocalizations that sounded like human voices, but they were actually coming from the geladas, he said. He had never come across other primates that made these sounds. But then he read a 2012 study on macaques revealing that facial movements during lip-smacking were very speech-like, hinting that lip-smacking might be an initial step toward human speech. © 2013 Discovery Communications, LLC.
By Janice Lynch Schuster. My grandmother, who is 92, recently reported that she’d seen three giraffes in her Midwest back yard. She is otherwise sharp (and also kind and funny), but the giraffe episode was further evidence of the mild cognitive impairment that has been slowly creeping into her life. The question for my family has become: How should we respond? One of my sisters tried humor. (“Grandmom, I didn’t know you drank in the middle of the day!”) My father suggested that they were deer (to which she replied, “I’m 92 years old, and I know a giraffe when I see one.”) I tried to learn more about what, exactly, the giraffes were doing out there. (She didn’t seem to know, saying only that “the light shimmered.”) Communicating with a family member who has cognitive impairment can be frustrating and disheartening, even downright depressing, for patient and caregiver alike. And it’s a problem faced by a growing number of Americans. According to a report published last week, about 4.1 million Americans have dementia. Alzheimer’s, one of the many forms of dementia, is the most expensive disease in the United States, costing $157 billion to $215 billion a year — more than heart disease and cancer, according to the study, which was sponsored by the National Institute on Aging. As baby boomers reach old age, these numbers are expected to increase dramatically. A number of techniques can not only reduce the frustration but also create new ways of connecting. Among the most effective, and one that experts favor, is the “validation method,” a practice pioneered by geriatric social worker and researcher Naomi Feil in the 1980s. © 1996-2013 The Washington Post
By Bruce Bower Babies take a critical step toward learning to speak before they can say a word or even babble. By 3 months of age, infants flexibly use three types of sounds — squeals, growls and vowel-like utterances — to express a range of emotions, from positive to neutral to negative, researchers say. Attaching sounds freely to different emotions represents a basic building block of spoken language, say psycholinguist D. Kimbrough Oller of the University of Memphis in Tennessee and his colleagues. Any word or phrase can signal any mental state, depending on context and pronunciation. Infants’ flexible manipulation of sounds to signal how they feel lays the groundwork for word learning, the scientists conclude April 1 in the Proceedings of the National Academy of Sciences. Language evolution took off once this ability emerged in human babies, Oller proposes. Ape and monkey researchers have mainly studied vocalizations that have one meaning, such as distress calls. “At this point, the conservative conclusion is that the human infant at 3 months is already vocally freer than has been demonstrated for any other primate at any age,” Oller says. Oller’s group videotaped infants playing and interacting with their parents in a lab room equipped with toys and furniture. Acoustic analyses identified nearly 7,000 utterances made by infants up to 1 year of age that qualified as laughs, cries, squeals, growls or vowel-like sounds. © Society for Science & the Public 2000 - 2013
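The acoustic classification described above can be illustrated with a toy sketch. This is not the authors' analysis pipeline; the features (mean pitch, pitch range, a "harshness" score) and every threshold below are invented purely for illustration of how utterances might be sorted into the study's categories.

```python
def classify_utterance(mean_pitch_hz: float, pitch_range_hz: float,
                       harshness: float) -> str:
    """Toy rule-based labeler for infant vocalization types.

    All thresholds are hypothetical, not values from the study:
    growls are treated as acoustically harsh, squeals as high-pitched
    and highly modulated, everything else as vowel-like.
    """
    if harshness > 0.7:                              # rough, noisy phonation
        return "growl"
    if mean_pitch_hz > 600 and pitch_range_hz > 300:  # high and swooping
        return "squeal"
    return "vowel-like"                              # neutral phonation

# A few made-up utterances and their labels
print(classify_utterance(800, 400, 0.1))   # squeal
print(classify_utterance(300, 100, 0.9))   # growl
print(classify_utterance(350, 120, 0.2))   # vowel-like
```

The point of the sketch is only that the same acoustic measurements can map onto distinct vocal categories; the study's claim is the further step that infants attach any of these categories to any emotion.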
By Sid Perkins. The electric fields that build up on honey bees as they fly, flutter their wings, or rub body parts together may allow the insects to talk to each other, a new study suggests. Tests show that the electric fields, which can be quite strong, deflect the bees' antennae, which, in turn, provide signals to the brain through specialized organs at their bases. Scientists have long known that flying insects gain an electrical charge when they buzz around. That charge, typically positive, accumulates as the wings zip through the air—much as electrical charge accumulates on a person shuffling across a carpet. And because an insect's exoskeleton has a waxy surface that acts as an electrical insulator, that charge isn't easily dissipated, even when the insect lands on objects, says Randolf Menzel, a neurobiologist at the Free University of Berlin in Germany. Although researchers have suspected for decades that such electrical fields aid pollination by helping tiny pollen grains stick to insects visiting a flower, only more recently have they investigated how insects sense and respond to such fields. Just last month, for example, a team reported that bumblebees may use electrical fields to distinguish flowers recently visited by other insects from those that may still hold lucrative stores of nectar and pollen. A flower that a bee had recently landed on might have an altered electrical field, the researchers speculated. Now, in a series of lab tests, Menzel and colleagues have studied how honey bees respond to electrical fields. In experiments conducted in small chambers with conductive walls that isolated the bees from external electrical fields, the researchers showed that a small, electrically charged wand brought close to a honey bee can cause its antennae to bend.
Other tests, using antennae removed from honey bees, indicated that electrically induced deflections triggered reactions in a group of sensory cells, called the Johnston's organ, located near the base of the antennae. In yet other experiments, honey bees learned that a sugary reward was available when they detected a particular pattern of electrical field. © 2010 American Association for the Advancement of Science
Michael Corballis, professor of cognitive neuroscience and psychology at the University of Auckland in New Zealand, responds: Although teaching people to become ambidextrous has been popular for centuries, this practice does not appear to improve brain function, and it may even harm our neural development. Calls for ambidexterity were especially prominent in the late 19th and early 20th centuries. For instance, in the early 20th century English propagandist John Jackson established the Ambidextral Culture Society in pursuit of universal ambidexterity and “two-brainedness” for the betterment of society. This hype died down in the mid-20th century as benefits of being ambidextrous failed to materialize. Given that handedness is apparent early in life and the vast majority of people are right-handed, we are almost certainly dextral by nature. Recent evidence even associated being ambidextrous from birth with developmental problems, including reading disability and stuttering. A study of 11-year-olds in England showed that those who are naturally ambidextrous are slightly more prone to academic difficulties than either left- or right-handers. Research in Sweden found ambidextrous children to be at a greater risk for developmental conditions such as attention-deficit hyperactivity disorder. Another study, which my colleagues and I conducted, revealed that ambidextrous children and adults both performed worse than left- or right-handers on a range of skills, especially in math, memory retrieval and logical reasoning. © 2013 Scientific American
By ANAHAD O'CONNOR Slurred and incoherent speech is one of the classic signs of a stroke. But new research finds that another symptom may be garbled and disjointed text messages, which could provide early clues to the onset of a stroke. In Detroit, doctors encountered a 40-year-old patient who had no trouble reading, writing or understanding language. His only consistent problem was that he had lost the ability to type coherent text messages on his phone. An imaging scan showed that he had suffered a mild ischemic stroke, caused by a clot or blockage in his brain. The case represents at least the second instance of what doctors are calling “dystextia.” In December, a report in The Archives of Neurology described a 25-year-old pregnant woman whose husband grew concerned after she sent him a series of incoherent text messages. Doctors found that the woman had also been experiencing weakness in her right arm and leg, and that she had earlier had difficulty filling out an intake form at her obstetrician’s office. The case in Detroit was particularly unusual because garbled texting appeared to be the only conspicuous problem, at least initially, said Dr. Omran Kaskar, a senior neurology resident at Henry Ford Hospital who treated the patient in late 2011. “Stroke patients usually present with multiple neurologic deficits,” he said. The findings suggest that text messaging may be a unique form of language controlled by a distinct part of the brain. And because texts are time-stamped, they may potentially be useful as a way of helping doctors determine precisely when a patient’s stroke symptoms began. The patient was a businessman who had traveled to southeast Michigan one evening for a work trip. Shortly after midnight, the man sent text messages to his wife that were disjointed and nonsensical – and not because he was using shorthand. Copyright 2013 The New York Times Company
Coaches should immediately pull athletes with a suspected head injury from play until a health professional trained in concussions checks them out, according to new medical guidelines. The American Academy of Neurology updated its guidelines on Monday for evaluating and managing athletes with concussion, the group's first update since 1997. Concussion risk is likely greater for female athletes playing soccer, according to the new guidelines. (Keith Srakocic/Associated Press) "If in doubt, sit it out," said Dr. Jeffrey Kutcher of the University of Michigan Medical School in Ann Arbor, a member of the academy, in a release. "Being seen by a trained professional is extremely important after a concussion. If headaches or other symptoms return with the start of exercise, stop the activity and consult a doctor. You only get one brain; treat it well." Players should return to the rink, field or pitch slowly, and only after acute signs and symptoms, such as headache, sensitivity to light and sound or changes in memory and judgment, are gone. For ice hockey, the guidelines said bodychecking is likely to increase the risk of sport-related concussion. In peewee hockey, bodychecking is likely to be a risk factor for a more severe concussion that prolongs the return to play. © CBC 2013
Keyword: Brain Injury/Concussion
Link ID: 17917 - Posted: 03.19.2013
By Jon Lieff. Traditionally, we have understood the immune system and the nervous system as two distinct and unrelated entities. The former fights disease by responding to pathogens and stimulating inflammation and other responses. The latter directs sensation, movement, cognition and the functions of the internal organs. For some, therefore, the recent discovery that left-sided brain lesions correlate with an increased rate of hospital infections is difficult to understand. However, other recent research into the extremely close relationship between these two systems makes this finding comprehensible. A study, published in the March 2013 issue of Archives of Physical Medicine and Rehabilitation, examined more than 2,000 hospital patients with brain lesions from either stroke or traumatic brain injury, tracking how many of these brain-injured patients contracted infections within 2 to 3 days of admission. Of those patients who developed infections, 60% had left-sided lesions. The authors concluded that an unknown left-sided brain/immune network might influence infections. But why would the left side of the brain affect immunity? The nervous and immune systems are quite different in their speed and mode of action. The two major immune systems, innate and adaptive, are both wireless—they communicate through cell-to-cell contact, secreted signals, and antigen-antibody reactions. The innate system is the first responder, followed by the slower adaptive response. The nervous system, on the other hand, is wired for much more rapid communication throughout the body. It turns out that the two work surprisingly closely together. © 2013 Scientific American
By Dwayne Godwin and Jorge Cham Dwayne Godwin is a neuroscientist at the Wake Forest University School of Medicine. Jorge Cham draws the comic strip Piled Higher and Deeper at www.phdcomics.com. © 2013 Scientific American
SAN FRANCISCO (AP) — The future is unclear for a heart device aimed at preventing strokes in people at high risk of them because of an irregular heartbeat. Early results from a key study of the device, Boston Scientific’s Watchman, suggested it is safer than previous testing found, but may not be better than a drug that is used to prevent strokes, heart-related deaths and blood clots in people with atrial fibrillation in the long term. Atrial fibrillation, a common heart arrhythmia that affects millions of Americans, causes blood to pool in a small pouch. Clots can form and travel to the brain, causing a stroke. The usual treatment is blood thinners like warfarin, sold as Coumadin and other brands. But they have their own problems and some are very expensive. The Watchman is intended to be a permanent solution that would not require people to take medication for the rest of their lives. It is a tiny expandable umbrella that plugs the pouch of blood, and is inserted without surgery, via a tube pushed into a vein. A study four years ago indicated the device was at least as good at preventing strokes as warfarin, but the procedure to implant it led to strokes in some patients. The Food and Drug Administration required another test of its safety and effectiveness. The new study was led by Dr. David Holmes Jr. of the Mayo Clinic in Rochester, Minn. He and the clinic have a financial stake in the device. © 2013 The New York Times Company
Link ID: 17886 - Posted: 03.11.2013
by Lizzie Wade With its complex interweaving of symbols, structure, and meaning, human language stands apart from other forms of animal communication. But where did it come from? A new paper suggests that researchers look to bird songs and monkey calls to understand how human language might have evolved from simpler, preexisting abilities. One reason that human language is so unique is that it has two layers, says Shigeru Miyagawa, a linguist at the Massachusetts Institute of Technology (MIT) in Cambridge. First, there are the words we use, which Miyagawa calls the lexical structure. "Mango," "Amanda," and "eat" are all components of the lexical structure. The rules governing how we put those words together make up the second layer, which Miyagawa calls the expression structure. Take these three sentences: "Amanda eats the mango," "Eat the mango, Amanda," and "Did Amanda eat the mango?" Their lexical structure—the words they use—is essentially identical. What gives the sentences different meanings is the variation in their expression structure, or the different ways those words fit together. The more Miyagawa studied the distinction between lexical structure and expression structure, "the more I started to think, 'Gee, these two systems are really fundamentally different,' " he says. "They almost seem like two different systems that just happen to be put together," perhaps through evolution. One preliminary test of his hypothesis, Miyagawa knew, would be to show that the two systems exist separately in nature. So he started studying the many ways that animals communicate, looking for examples of lexical or expressive structures. © 2010 American Association for the Advancement of Science.
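Miyagawa's two-layer distinction can be sketched with a toy example: the same lexical items, combined under different expression templates, yield different sentence types. The representation below (a stop-word list and a crude suffix-stripping "stemmer") is invented for illustration and is not a serious linguistic analysis.

```python
import re

# Three sentence types from the article, sharing one lexical structure
# but differing in expression structure.
templates = {
    "declarative": "Amanda eats the mango",
    "imperative": "Eat the mango, Amanda",
    "interrogative": "Did Amanda eat the mango?",
}

def content_words(sentence: str) -> set[str]:
    """Strip function words and inflection to recover the lexical layer.

    The stop-word list and the trailing-'s' stemmer are toy choices,
    sufficient only for these example sentences.
    """
    stop = {"the", "did", "does"}
    words = re.findall(r"[a-z]+", sentence.lower())
    return {w.rstrip("s") for w in words if w not in stop}

# All three sentence types collapse to the same lexical structure;
# only the expression structure (word order, auxiliaries) differs.
assert all(content_words(s) == {"amanda", "eat", "mango"}
           for s in templates.values())
```

The invariant the assertion checks is exactly Miyagawa's observation: meaning differences across the three sentences live entirely in the expression layer.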
By Bruce Bower Children with dyslexia may read better after playing action video games that stress mayhem, not literacy, a contested study suggests. Playing fast-paced Wii video games for 12 hours over two weeks markedly increased the reading speed of 7- to 13-year-old kids with dyslexia, with no loss of reading accuracy, says a team led by psychologist Andrea Facoetti of the University of Padua, Italy. Reading gains lasted at least two months after the video game sessions. The gains matched or exceeded previously reported effects of reading-focused programs for dyslexia, the researchers report online February 28 in Current Biology. “These results are clear enough to say that action video games are able to improve reading abilities in children with dyslexia,” Facoetti says. Although the new study includes only 20 children with dyslexia, its results build on earlier evidence that many poor readers have difficulty focusing on items within arrays, Facoetti holds. By strengthening the ability to monitor central and peripheral objects in chaotic scenes, he says, action video games give kids with dyslexia a badly needed tool for tracking successive letters in written words. But evidence for Facoetti’s conclusions is shaky, asserts psychologist Nicola Brunswick of Middlesex University in London. The researchers tested word reading ability two months later but failed to test reading comprehension, she says. What’s more, they did so with a mere six of 10 kids who played the action video games. © Society for Science & the Public 2000 - 2013
By JEFF Z. KLEIN For the last two seasons, concussions and hits to the head were frequent talking points in the N.H.L., with the Pittsburgh Penguins star Sidney Crosby serving as the catalyst. As the lockout dragged on for more than four months, though, the conversation shifted from player safety to revenue percentages and competitive balance. The first few weeks of the shortened 48-game season passed without much talk of concussions. But in the past two weeks, 11 N.H.L. players are believed to have sustained them, among them Crosby’s teammate and the reigning most valuable player, Evgeni Malkin, thrusting the issue of head injuries back into the spotlight. Concussions continue to plague the league, despite its increased emphasis on reducing them. For the second season, the N.H.L. is playing under its broadened version of Rule 48, which penalizes hits that target an opponent’s head or make the head the principal point of contact. But many of the recent injuries, including Malkin’s, were not caused by hits deemed worthy of fines or suspensions. Last season, according to CBC network estimates, about 90 players missed games because of concussions, about 13 percent of N.H.L. players on active rosters on a given night. Crosby missed 60 games while recovering from a concussion he sustained in the 2011 Winter Classic. Malkin, who has 4 goals and 17 assists in 18 games this season, received a concussion diagnosis Sunday, two days after he fell awkwardly into the end boards following a routine shove from Florida’s Erik Gudbranson. Malkin slid back-first into the boards, causing his head to snap sharply backward and strike the boards. © 2013 The New York Times Company
Keyword: Brain Injury/Concussion
Link ID: 17855 - Posted: 02.27.2013
Regina Nuzzo. Despite having brains that are still largely under construction, babies born up to three months before full term can already distinguish between spoken syllables in much the same way that adults do, an imaging study has shown [1]. Full-term babies — those born after 37 weeks' gestation — display remarkable linguistic sophistication soon after they are born: they recognize their mother’s voice [2], can tell apart two languages they’d heard before birth [3] and remember short stories read to them while in the womb [4]. But exactly how these speech-processing abilities develop has been a point of contention. “The question is: what is innate, and what is due to learning immediately after birth?” asks neuroscientist Fabrice Wallois of the University of Picardy Jules Verne in Amiens, France. To answer that, Wallois and his team needed to peek at neural processes already taking place before birth. It is tough to study fetuses, however, so they turned to their same-age peers: babies born 2–3 months premature. At that point, neurons are still migrating to their final destinations; the first connections between upper brain areas are snapping into place; and links have just been forged between the inner ear and cortex. To test these neural pathways, the researchers played soft voices to premature babies while they were asleep in their incubators a few days after birth, then monitored their brain activity using a non-invasive optical imaging technique called functional near-infrared spectroscopy. They were looking for the tell-tale signals of surprise that brains display — for example, when they suddenly hear male and female voices intermingled after hearing a long run of simply female voices. © 2013 Nature Publishing Group
By Athena Andreadis. Genes are subject to multiple layers of regulation. An early regulatory point is transcription. During this process, regulatory proteins bind to DNA regions (promoters and enhancers) that direct gene expression. These DNA/protein complexes attract the transcription apparatus, which docks next to the complex and proceeds linearly downstream, producing the heterogeneous nuclear RNA (hnRNA) that is encoded by the gene linked to the promoter. The hnRNA is then spliced and either becomes structural/regulatory RNA or is translated into protein. Transcription factors are members of large clans that arose from ancestral genes that went through successive duplications and then diverged to fit specific niches. One such family of about fifty members is called FOX. Their DNA binding portion is shaped like a butterfly, which has given this particular motif the monikers of forkhead box or winged helix. The activities of the FOX proteins extend widely in time and region. One of the FOX family members is FOXP2, as notorious as Fox News – except for different reasons: FOXP2 has become entrenched in popular consciousness as “the language gene”. As is the case with all such folklore, there is some truth in this; but as is the case with everything in biology, reality is far more complex. FOXP2, the first gene found to “affect language” (more on this anon), was discovered in 2001 by several converging observations and techniques. The clincher was a large family (code name KE), some of whose members had severe articulation and grammatical deficits with no accompanying sensory or cognitive impairment. The inheritance is autosomal dominant: one copy of the mutated gene is sufficient to confer the trait. When the researchers definitively identified the FOXP2 gene, they found that the version of FOXP2 carried by the affected KE members has a single point mutation that alters an invariant residue in its forkhead domain, thereby influencing the protein’s binding to its DNA targets.
© 2013 Scientific American
By James Gallagher, health and science reporter, BBC News. A part of the brain's ability to shield itself from the destructive damage caused by a stroke has been explained by researchers. It has been known for more than 85 years that some brain cells could withstand being starved of oxygen. Scientists, writing in the journal Nature Medicine, have shown how these cells switch into survival mode. They hope one day to find a drug which uses the same trick to protect the whole brain. Treating a stroke is a race against time. Clots that block the blood supply prevent the flow of oxygen and sugar to brain cells, which then rapidly die. But in 1926, it was noticed that some cells in the hippocampus, the part of the brain involved in memory, did not follow this rule. "They're staying alive when the prediction would say that they should die," said Prof Alastair Buchan of Oxford University, who has investigated how they survive. Experiments on rats showed that these surviving cells started producing a protein called hamartin - which forces cells to conserve energy. They stop producing new proteins and break down existing ones to access the raw materials. When the researchers prevented the cells from producing hamartin, they died just like other cells. BBC © 2013
Link ID: 17844 - Posted: 02.25.2013
Regina Nuzzo. Say the word 'rutabaga', and you have just performed a complex dance with many body parts — lips, tongue, jaw and larynx — in a flash of time. Yet little is known about how the brain coordinates these vocal-tract movements to keep even the clumsiest of us from constantly tripping over our own tongues. A study of unprecedented detail now provides a glimpse into the neural codes that control the production of smooth speech. The results help to clarify how the brain uses muscles to organize sounds and hint at why tongue twisters are so tricky. The work is published today in Nature [1]. Most neural information about the vocal tract has come from watching people with brain damage or from non-invasive imaging methods, neither of which provides detailed data in time or space [2, 3]. A team of US researchers has now collected brain-activity data on a scale of millimetres and milliseconds. The researchers recorded brain activity in three people with epilepsy using electrodes that had been implanted in the patients' cortices as part of routine presurgical electrophysiological sessions. They then watched to see what happened when the patients articulated a series of syllables. Sophisticated multi-dimensional statistical procedures enabled the researchers to sift through the huge amounts of data and uncover how basic neural building blocks — patterns of neurons firing in different places over time — combine to form the speech sounds of American English. The patterns for consonants were quite different from those for vowels, even though both classes of sounds “use the exact same parts of the vocal tract”, says author Edward Chang, a neuroscientist at the University of California, San Francisco. © 2013 Nature Publishing Group
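The summary does not say which "multi-dimensional statistical procedures" the team used, but the general idea of extracting a few recurring spatiotemporal building blocks from an electrode-by-time activity matrix can be sketched with a principal component analysis. Everything below (array shapes, the number of latent patterns, the data itself) is synthetic and purely illustrative.

```python
import numpy as np

# Hypothetical recording: 64 electrodes x 200 time samples during one
# syllable. We build it from 3 latent temporal "building block" patterns
# mixed with per-electrode weights, plus a little noise.
rng = np.random.default_rng(0)
n_electrodes, n_times, n_patterns = 64, 200, 3

true_patterns = rng.random((n_patterns, n_times))     # temporal patterns
true_weights = rng.random((n_electrodes, n_patterns))  # electrode loadings
activity = true_weights @ true_patterns + 0.01 * rng.random((n_electrodes, n_times))

# PCA via SVD: decompose centered activity into spatial components
# (columns of U) paired with temporal patterns (rows of Vt).
centered = activity - activity.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

# Fraction of variance each component explains; because the synthetic
# data was built from 3 patterns, the first 3 components dominate.
var_explained = s**2 / np.sum(s**2)
print(var_explained[:4])
```

Recovering a handful of dominant components from thousands of trials is the flavor of analysis that lets distinct consonant and vowel patterns emerge from raw electrode traces; the study's actual method may differ.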
Link ID: 17838 - Posted: 02.23.2013
By Sara Reardon. Like the musicians in an orchestra, our lips, tongue and vocal cords coordinate with one another to pronounce sounds in speech. A map of the brain regions that conduct the process shows how each is carefully controlled – and how mistakes can slip into our speech. It's long been thought that the brain coordinates our speech by simultaneously controlling the movement of these "articulators". In the 1860s, Alexander Melville Bell proposed that speech could be broken down in this way and designed a writing system for deaf people based on the principle. But brain imaging had not had the resolution to see how neurons control these movements – until now. Using electrodes implanted in the brains of three people to treat their epilepsy, Edward Chang and his colleagues at the University of California, San Francisco, mapped brain activity in each volunteer's motor cortex as they pronounced words in American English. The team had expected that each speech sound would be controlled by a unique collection of neurons, and so each would map to a different part of the brain. Instead, they found that the same groups of neurons were activated for all sounds. Each group controls muscles in the tongue, lips, jaw and larynx. The neurons – in the sensorimotor cortex – coordinated with one another to fire in different combinations. Each combination resulted in a very precise placing of the articulators to generate a given sound. Surprisingly, although each articulator can theoretically take on an almost limitless range of shapes, the neurons imposed strict limits on the range of possibilities. © Copyright Reed Business Information Ltd.
Link ID: 17837 - Posted: 02.23.2013