Chapter 19. Language and Lateralization


By Darcey Steinke The J in “juice” was the first letter-sound, according to my mother, that I repeated in staccato, going off like a skipping record. This was when I was 3, before my stutter was stigmatized as shameful. In those earliest years my relationship to language was uncomplicated: I assumed my voice was more like a bird’s or a squirrel’s than my playmates’. This seemed exciting. I imagined, unlike fluent children, I might be able to converse with wild creatures, I’d learn their secrets, tell them mine and forge friendships based on interspecies intimacy. School put an end to this fantasy. Throughout elementary school I stuttered every time a teacher called on me and whenever I was asked to read out loud. In the third grade the humiliation of being forced to read a few paragraphs about stewardesses in the Weekly Reader still burns. The ST is hard for stutterers. What would have taken a fluent child five minutes took me an excruciating 25. It was around this time that I started separating the alphabet into good letters, V as well as M, and bad letters, S, F and T, plus the terrible vowel sounds, open and mysterious and nearly impossible to wrangle. Each letter had a degree of difficulty that changed depending upon its position in the sentence. Much later when I read that Nabokov as a child assigned colors to letters, it made sense to me that the hard G looked like “vulcanized rubber” and the R, “a sooty rag being ripped.” My beloved V, in the Nabokovian system, was a jewel-like “rose quartz.” My mother, knowing that kids ridiculed me — she once found a book, “The Mystery of the Stuttering Parrot,” that had been tossed onto our lawn — wanted to eradicate my speech impediment. She encouraged me to practice the strategies taught to me by a string of therapists, bouncing off an easy sound to a harder one and unclenching my throat, trying to slide out of a stammer. 
When I was 13 she got me a scholarship to a famous speech therapy program at a college near our house in Virginia. © 2019 The New York Times Company

Keyword: Language
Link ID: 26313 - Posted: 06.10.2019

By Malin Fezehai Muazzez Kocek, 46, is considered one of the best whistlers in Kuşköy, a village tucked away in the picturesque Pontic Mountains in Turkey’s northern Giresun province. Her whistle can be heard over the area’s vast tea fields and hazelnut orchards, several miles farther than a person’s voice. When President Recep Tayyip Erdogan of Turkey visited Kuşköy in 2012, she greeted him and proudly whistled, “Welcome to our village!” She uses kuş dili, or “bird language,” which transforms the full Turkish vocabulary into varied-pitch frequencies and melodic lines. For hundreds of years, this whistled form of communication has been critical for the farming community in the region, allowing complex conversations over long distances and facilitating animal herding. Today, about 10,000 people in the larger region speak it, but because of the increased use of cellphones, which remove the need for a voice to carry over great distances, that number is dwindling. The language is at risk of dying out. Ms. Kocek began learning bird language at six years old, by working in the fields with her father. She has tried to pass the tradition on to her three daughters; though all of them understand it, only her middle child, Kader, 14, knows how to speak it, and can whistle Turkey’s national anthem. Turkey is one of a handful of countries in the world where whistled languages exist. Similar ways of communicating are known to have been used in the Canary Islands, Greece, Mexico and Mozambique. They fascinate researchers and linguists because they suggest that the brain structures that process language are not as fixed as once thought. There is a long-held belief that language interpretation occurs mostly in the left hemisphere, and melody, rhythm and singing on the right. 
But a study that biopsychologist Onur Güntürkün conducted in Kuşköy suggests that whistled language is processed in both hemispheres. © 2019 The New York Times Company

Keyword: Language
Link ID: 26279 - Posted: 05.30.2019

Laura Sanders Advantages of speaking a second language are obvious: easier logistics when traveling, wider access to great literature and, of course, more people to talk with. Some studies have also pointed to the idea that polyglots have stronger executive functioning skills, brain abilities such as switching between tasks and ignoring distractions. But a large study of bilingual children in the U.S. finds scant evidence of those extra bilingual brain benefits. Bilingual children performed no better in tests measuring such thinking skills than children who knew just one language, researchers report May 20 in Nature Human Behaviour. To look for a relationship between bilingualism and executive function, researchers relied on a survey of U.S. adolescents called the ABCD study. From data collected at 21 research sites across the country, researchers identified 4,524 kids ages 9 and 10. Of these children, 1,740 spoke English and a second language (mostly Spanish, though 40 second languages were represented). On three tests that measured executive function, such as the ability to ignore distractions or quickly switch between tasks with different rules, the bilingual children performed similarly to children who spoke only English, the researchers found. “We really looked,” says study coauthor Anthony Dick, a developmental cognitive neuroscientist at Florida International University in Miami. “We didn’t find anything.” |© Society for Science & the Public 2000 - 2019.

Keyword: Language
Link ID: 26265 - Posted: 05.24.2019

By Michelle Roberts Health editor, BBC News online Patients who have had a stroke caused by bleeding in the brain can safely take aspirin to cut their risk of future strokes and heart problems, according to a new study. Aspirin thins the blood and so doctors have been cautious about giving it, fearing it could make bleeds worse. But The Lancet research suggests it does not increase the risk of new brain bleeds, and may even lower it. Experts say the "strong indication" needs confirming with more research. Only take daily aspirin if your doctor recommends it, they advise. Aspirin benefits and risks: Aspirin is best known as a painkiller and is sometimes also taken to help bring down a fever. But daily low-dose (75mg) aspirin is used to make the blood less sticky and can help to prevent heart attacks and stroke. Most strokes are caused by clots in the blood vessels of the brain but some are caused by bleeds. Because aspirin thins the blood, it can sometimes make the patient bleed more easily. And aspirin isn't safe for everyone. It can also cause indigestion and, more rarely, lead to stomach ulcers. Never give aspirin to children under the age of 16 (unless their doctor prescribes it). It can make children more likely to develop a very rare but serious illness called Reye's syndrome (which can cause liver and brain damage). The study: The research involved 537 people from across the UK who had had a brain bleed while taking anti-platelet medicines (which stop blood clotting), including aspirin, dipyridamole or another drug called clopidogrel. Half of the patients were chosen at random to continue on their medicine (following a short pause immediately after their brain bleed), while the other half were told to stop taking it. Over the five years of the study, 12 of those who kept taking the tablets suffered a brain bleed, compared with 23 of those who stopped. © 2019 BBC

Keyword: Stroke
Link ID: 26263 - Posted: 05.23.2019

By David Grimm CORVALLIS, OREGON—Carl the cat was born to beat the odds. Abandoned on the side of the road in a Rubbermaid container, the scrawny black kitten—with white paws, white chest, and a white, skunklike stripe down his nose—was rescued by Kristyn Vitale, a postdoc at Oregon State University here who just happens to study the feline mind. Now, Vitale hopes Carl will pull off another coup, by performing a feat of social smarts researchers once thought was impossible. In a stark white laboratory room, Vitale sits against the back wall, flanked by two overturned cardboard bowls. An undergraduate research assistant kneels a couple of meters away, holding Carl firmly. "Carl!" Vitale calls, and then points to one of the bowls. The assistant lets go. Toddlers pass this test easily. They know that when we point at something, we're telling them to look at it—an insight into the intentions of others that will become essential as children learn to interact with people around them. Most other animals, including our closest living relative, chimpanzees, fail the experiment. But about 20 years ago, researchers discovered something surprising: Dogs pass the test with flying colors. The finding shook the scientific community and led to an explosion of studies into the canine mind. Cats like Carl were supposed to be a contrast. Like dogs, cats have lived with us in close quarters for thousands of years. But unlike our canine pals, cats descend from antisocial ancestors, and humans have spent far less time aggressively molding them into companions. So researchers thought cats couldn't possibly share our brain waves the way dogs do. © 2019 American Association for the Advancement of Science

Keyword: Learning & Memory; Evolution
Link ID: 26226 - Posted: 05.10.2019

By Elizabeth Svoboda As he neared his 50s, Anthony Andrews realized that living inside his own head felt different than it used to. The signs were subtle at first. “My wife started noticing that I wasn’t getting through things,” Andrews says. Every so often, he’d experience what he calls “cognitive voids,” where he’d get dizzy and blank out for a few seconds. Over time, Andrews’ issues became more pronounced. It wasn’t just that he would lose track of things, as if the thought bubble over his head had popped. A dense calm had descended on him like a weighted blanket. “I felt like I was walking through the swamp,” says Andrews, now 54. He had to play internet chess each morning to penetrate the mental murk. In 2016, Andrews and his wife, Mona, were told he likely had CTE, a neurodegenerative disorder caused by repeated head impacts. With Mona by his side, Andrews went to doctor after doctor, racking up psychiatric diagnoses. One told him he had ADHD. Another thought he was depressed, and another said he had bipolar disorder. But the drugs and therapies they prescribed didn’t seem to help. “After a month,” Andrews recalls of these treatments, “I knew it’s not for me.” Copyright 2019 Undark

Keyword: Learning & Memory; Brain Injury/Concussion
Link ID: 26215 - Posted: 05.07.2019

By Sayuri Hayakawa, Viorica Marian As Emperor Akihito steps down from the Chrysanthemum Throne in Japan’s first abdication in 200 years, Naruhito officially becomes the new Emperor on May 1, 2019, ushering in a new era called Reiwa (令和; “harmony”). Japan’s tradition of naming eras reflects the ancient belief in the divine spirit of language. Kotodama (言霊; “word spirit”) is the idea that words have an almost magical power to alter physical reality. Through its pervasive impact on society, including its influence on superstitions and social etiquette, traditional poetry and modern pop songs, the word kotodama has, in a way, provided proof of its own concept. For centuries, many cultures have believed in the spiritual force of language. Over time, these ideas have extended from the realm of magic and mythology to become a topic of scientific investigation—ultimately leading to the discovery that language can indeed affect the physical world, for example, by altering our physiology. Our bodies evolve to adapt to our environments, not only over millions of years but also over the days and years of an individual’s life. For instance, off the coast of Thailand, there are children who can “see like dolphins.” Cultural and environmental factors have shaped how these sea nomads of the Moken tribe conduct their daily lives, allowing them to adjust their pupils underwater in a way that most of us cannot. © 2019 Scientific American

Keyword: Language
Link ID: 26190 - Posted: 05.01.2019

By Benedict Carey “In my head, I churn over every sentence ten times, delete a word, add an adjective, and learn my text by heart, paragraph by paragraph,” wrote Jean-Dominique Bauby in his memoir, “The Diving Bell and the Butterfly.” In the book, Mr. Bauby, a journalist and editor, recalled his life before and after a paralyzing stroke that left him virtually unable to move a muscle; he tapped out the book letter by letter, by blinking an eyelid. Thousands of people are reduced to similarly painstaking means of communication as a result of injuries suffered in accidents or combat, of strokes, or of neurodegenerative disorders such as amyotrophic lateral sclerosis, or A.L.S., that disable the ability to speak. Now, scientists are reporting that they have developed a virtual prosthetic voice, a system that decodes the brain’s vocal intentions and translates them into mostly understandable speech, with no need to move a muscle, even those in the mouth. (The physicist and author Stephen Hawking used a muscle in his cheek to type keyboard characters, which a computer synthesized into speech.) “It’s formidable work, and it moves us up another level toward restoring speech” by decoding brain signals, said Dr. Anthony Ritaccio, a neurologist and neuroscientist at the Mayo Clinic in Jacksonville, Fla., who was not a member of the research group. Researchers have developed other virtual speech aids. Those work by decoding the brain signals responsible for recognizing letters and words, the verbal representations of speech. But those approaches lack the speed and fluidity of natural speaking. The new system, described on Wednesday in the journal Nature, deciphers the brain’s motor commands guiding vocal movement during speech — the tap of the tongue, the narrowing of the lips — and generates intelligible sentences that approximate a speaker’s natural cadence. © 2019 The New York Times Company

Keyword: Language; Robotics
Link ID: 26174 - Posted: 04.25.2019

By Karen Weintraub Stroke, amyotrophic lateral sclerosis and other medical conditions can rob people of their ability to speak. Their communication is limited to the speed at which they can move a cursor with their eyes (just eight to 10 words per minute), in contrast with the natural spoken pace of 120 to 150 words per minute. Now, although still a long way from restoring natural speech, researchers at the University of California, San Francisco, have generated intelligible sentences from the thoughts of people without speech difficulties. The work provides a proof of principle that it should one day be possible to turn imagined words into understandable, real-time speech circumventing the vocal machinery, Edward Chang, a neurosurgeon at U.C.S.F. and co-author of the study published Wednesday in Nature, said Tuesday in a news conference. “Very few of us have any real idea of what’s going on in our mouth when we speak,” he said. “The brain translates those thoughts of what you want to say into movements of the vocal tract, and that’s what we want to decode.” But Chang cautions that the technology, which has only been tested on people with typical speech, might be much harder to make work in those who cannot speak—and particularly in people who have never been able to speak because of a movement disorder such as cerebral palsy. Chang also emphasized that his approach cannot be used to read someone’s mind—only to translate words the person wants to say into audible sounds. “Other researchers have tried to look at whether or not it’s actually possible to decode essentially just thoughts alone,” he says. “It turns out it’s a very difficult and challenging problem. That’s only one reason of many that we focus on what people are trying to say.” © 2019 Scientific American

Keyword: Brain imaging; Language
Link ID: 26170 - Posted: 04.24.2019

By Benedict Carey More than 3 million Americans live with disabling brain injuries. The vast majority of these individuals are lost to the medical system soon after their initial treatment, to be cared for by family or to fend for themselves, managing fatigue, attention and concentration problems with little hope of improvement. On Saturday, a team of scientists reported a glimmer of hope. Using an implant that stimulates activity in key areas of the brain, they restored near-normal levels of brain function to a middle-aged woman who was severely injured in a car accident 18 years ago. Experts said the woman was a test case, and that it was far from clear whether the procedure would prompt improvements for others like her. That group includes an estimated 3 million to 5 million people, many of them veterans of the wars in Iraq and Afghanistan, with disabilities related to traumatic brain injuries. “This is a pilot study,” said Dr. Steven R. Flanagan, the chairman of the department of rehabilitation medicine at NYU Langone Health, who was not part of the research team. “And we certainly cannot generalize from it. But I think it’s a very promising start, and there is certainly more to come in this work.” The woman, now in her early 40s, was a student when the accident occurred. She soon recovered sufficiently to live independently. But she suffered from persistent fatigue and could not read or concentrate for long, leaving her unable to hold a competitive job, socialize much, or resume her studies. “Her life has changed,” said Dr. Nicholas Schiff, a professor of neurology and neuroscience at Weill Cornell Medicine and a member of the study team. “She is much less fatigued, and she’s now reading novels. The next patient might not do as well. But we want to keep going to see what happens.” © 2019 The New York Times Company

Keyword: Brain Injury/Concussion
Link ID: 26142 - Posted: 04.15.2019

By Ken Belson and Benedict Carey Experimental brain scans of more than two dozen former N.F.L. players found that the men had abnormal levels of the protein linked to chronic traumatic encephalopathy, the degenerative brain disease associated with repeated hits to the head. Using positron emission tomography, or PET, scans, the researchers found “elevated amounts of abnormal tau protein” in the parts of the brain associated with the disease, known as C.T.E., compared to men of similar age who had not played football. The authors of the study and outside experts stressed that such tau imaging is far from a diagnostic test for C.T.E., which is likely years away and could include other markers, from blood and spinal fluid. The results of the study, published in The New England Journal of Medicine on Wednesday, are considered preliminary, but constitute a first step toward developing a clinical test to determine the presence of C.T.E. in living players, as well as early signs and potential risk. Thus far, pathologists have been able to confirm the diagnosis only posthumously, by identifying the tau signature in donated brains. Previous studies had reported elevated levels of the tau signature in single cases. The new study is the first to compare the brains of a group of former players to a control group, using an imaging approach that specifically picks up tau and not other proteins in the brain. “What makes this exciting is that it’s a great first step for imaging C.T.E. in the living, not just looking at single instances, but comparing averages and looking for patterns by comparing groups,” said Kevin Bieniek, director of the Biggs Institute Brain Bank Core at the University of Texas Health Science Center in San Antonio. © 2019 The New York Times Company

Keyword: Brain Injury/Concussion
Link ID: 26129 - Posted: 04.11.2019

By Lydia Denworth The vast majority of neuroscientific studies contain three elements: a person, a cognitive task and a high-tech machine capable of seeing inside the brain. That simple recipe can produce powerful science. Such studies now routinely yield images that a neuroscientist used to only dream about. They allow researchers to delineate the complex neural machinery that makes sense of sights and sounds, processes language and derives meaning from experience. But something has been largely missing from these studies: other people. We humans are innately social, yet even social neuroscience, a field explicitly created to explore the neurobiology of human interaction, has not been as social as you would think. Just one example: no one has yet captured the rich complexity of two people’s brain activity as they talk together. “We spend our lives having conversation with each other and forging these bonds,” neuroscientist Thalia Wheatley of Dartmouth College says. “[Yet] we have very little understanding of how it is people actually connect. We know almost nothing about how minds couple.” That is beginning to change. A growing cadre of neuroscientists is using sophisticated technology—and some very complicated math—to capture what happens in one brain, two brains, or even 12 or 15 at a time when their owners are engaged in eye contact, storytelling, joint attention focused on a topic or object, or any other activity that requires social give and take. Although the field of interactive social neuroscience is in its infancy, the hope remains that identifying the neural underpinnings of real social exchange will change our basic understanding of communication and ultimately improve education or inform treatment of the many psychiatric disorders that involve social impairments. © 2019 Scientific American

Keyword: Brain imaging
Link ID: 26128 - Posted: 04.11.2019

Laura Sanders Brains have long been star subjects for neuroscientists. But the typical “brain in a jar” experiments that focus on one subject in isolation may be missing a huge part of what makes us human — our social ties. “There’s this assumption that we can understand how the mind works by just looking at individual minds, and not looking at them in interactions,” says social neuroscientist Thalia Wheatley of Dartmouth College. “I think that’s wrong.” To answer some of the thorniest questions about the human brain, scientists will have to study the mind as it actually exists: steeped in social connections that involve rich interplay among family, friends and strangers, Wheatley argues. To illustrate her point, she asked the audience at a symposium in San Francisco on March 26, during the annual meeting of the Cognitive Neuroscience Society, how many had talked to another person that morning. Nearly everybody in the crowd of about 100 raised a hand. Everyday social interactions may seem inconsequential. But recent work on those who have been isolated, such as elderly people and prisoners in solitary confinement, suggests otherwise: Brains deprived of social interaction stop working well (SN: 12/8/18, p. 11). “That’s a hint that it’s not just that we like interaction,” Wheatley says. “It’s important to keep us healthy and sane.” |© Society for Science & the Public 2000 - 2019

Keyword: Learning & Memory
Link ID: 26122 - Posted: 04.09.2019

Nicola Davis A low level of alcohol consumption does not protect against stroke, new research suggests, in the latest blow to the idea that a few drinks can be beneficial to health. At least 100,000 people have strokes in the UK every year, according to recent figures. It had been thought that low levels of alcohol consumption might have a protective effect against stroke, as well as other diseases and conditions. Now researchers say that in the case of stroke, even low levels of alcohol consumption are bad news. “Moderate drinking of about one or two drinks a day does not protect against stroke,” said Dr Iona Millwood, co-author of the study from the University of Oxford. The results chime with a major study released last year which concluded there is no healthy level of drinking. Writing in the Lancet, researchers from the UK and China described how they examined the impact of alcohol on stroke using a type of natural experiment. About a third of people from east Asia have genetic variants that affect the way alcohol is broken down in the body, which can make drinking an unpleasant experience and lead to flushed skin. People with these genetic variants are known to drink less – a situation confirmed by the latest study – but who has these genetic variants is random, meaning they can appear in people regardless of their social situation or health. As a result, the team were able to look at the impact of drinking on the risk of stroke without many of the other issues that can muddy the waters. © 2019 Guardian News & Media Limited

Keyword: Stroke; Drug Abuse
Link ID: 26118 - Posted: 04.06.2019

Helen Thompson Whether practical, dramatical or pragmatical, domestic cats appear to recognize the familiar sound of their own names and can distinguish them from other words, researchers report April 4 in Scientific Reports. While dog responses to human behavior and speech have received much attention (SN: 10/1/16, p. 11), researchers are just scratching the surface of human-cat interactions. Research has shown that domestic cats (Felis catus) appear to respond to human facial expressions, and can distinguish between different human voices. But can cats recognize their own names? “I think many cat owners feel that cats know their names, or the word ‘food,’” but until now, there was no scientific evidence to back that up, says Atsuko Saito, a psychologist at Sophia University in Tokyo and a cat owner. So Saito and her colleagues pounced on that research question. They asked cat owners to say four nouns of similar length followed by the cat’s name. Cats gradually lost interest with each noun, but then reacted strongly to their names — moving their ears, head or tail, shifting their hind paw position or, of course, meowing. The results held up with cats living alone, with other cats and at a cat café, where customers can hang out with cats. And when someone other than the owner said the name, the cats still responded to their names more than to other nouns. One finding did give the team pause. Cat café cats almost always reacted to their names and those of other cats living there. Housecats did so much less frequently. Lots of humans visit cat cafés, and cats’ names are frequently called together, so it may be harder for cats to associate their own names with positive reinforcement in these environments, the researchers write. As for whether or not a cat understands what a name is, well, only the cat knows that. |© Society for Science & the Public 2000 - 2019

Keyword: Animal Communication
Link ID: 26115 - Posted: 04.06.2019

By C. Claiborne Ray Q. How do bees find the flowers in the container garden on the fourth-floor deck of my city apartment? A. Foraging bees use the same methods to find nectar and pollen four floors up that they use at ground level. Honeybees routinely fly two miles from their hives in their search for raw material for honey; it doesn’t require much extra energy to fly several stories up. It takes only one scout to report a promising garden to the rest of the hive with the famous waggle dance. The scout relies on its sophisticated eyes, which are tuned to a variety of wavelengths, including ultraviolet color patterns in flowers that are invisible to people. When the bees get closer to flowers, smell receptors begin transmitting information. And it has recently been discovered that both bumblebees and honeybees can detect and discriminate among weak electrostatic fields emanating from flowers. The bees accumulate a positive charge, while the flowers have a negative charge. The interaction between the fields is detected by antennae or sensitive hairs on the body. The electrical field helps bees to recognize pollen-rich blooms and perhaps even to transfer the pollen. © 2019 The New York Times Company

Keyword: Animal Migration; Animal Communication
Link ID: 26102 - Posted: 04.02.2019

By Malia Wollan “If you’re talking to a puppy, increase the pitch of your voice and slow the tempo,” says Mario Gallego-Abenza, a cognitive biologist and an author of a recent study analyzing canine response to human speech. People tend to use that high-register, baby-talk tone with all dogs, but it’s really only puppies under a year old that seem to like it. “With older dogs, just use your normal voice,” he says. Dogs can learn words. One well-studied border collie named Rico knew 200 objects by name and, like a toddler, could infer the names of novel objects by excluding things with labels he already knew. Use facial expressions, gestures and possibly food treats while you talk. “Maintain eye contact,” Gallego-Abenza says. Research shows that even wolves are attuned to the attention of human faces and that dogs are particularly receptive to your gaze and pointing gestures. Scientists disagree about whether dogs are capable of full-blown empathy, but studies suggest canines feel at least a form of primitive empathy known as “emotional contagion.” In one study, dogs that heard recordings of infants crying experienced the same spike in cortisol levels and alertness as their human counterparts. You might find yourself wondering: Is this dog even listening to me? Does it care? Look for the sorts of social cues you would seek in an attentive human listener. “Is the dog looking at you?” Gallego-Abenza says. “Is it getting closer?” You are a social animal; connection with other social animals can make you feel better about the world. Gallego-Abenza, no longer studying dogs, is now working on a doctorate at the University of Vienna focused on vocalizations between ravens. Last year, a couple contacted him, sure that they were able to converse with the birds in their garden. “Humans have this rich language, and we really want to communicate,” he says. 
“We think that every other animal is the same, but they’re not.” Go ahead and talk to dogs, but consider letting wild creatures alone to their own intraspecies squeaks, howls and whispers. © 2019 The New York Times Company

Keyword: Animal Communication
Link ID: 26077 - Posted: 03.26.2019

By Sandra G. Boodman “She never cried loudly enough to bother us,” recalled Natalia Weil of her daughter, who was born in 2011. Although Vivienne babbled energetically in her early months, her vocalizing diminished around the time of her first birthday. So did the quality of her voice, which dwindled from normal to raspy to little more than a whisper. Vivienne also was a late talker: She didn’t begin speaking until she was 2. Her suburban Maryland pediatrician initially suspected that a respiratory infection was to blame for the toddler’s hoarseness, and counseled patience. But after the problem persisted, the doctor diagnosed acid reflux and prescribed a drug to treat the voice problems reflux can cause. But Vivienne’s problem turned out to be far more serious — and unusual — than excess stomach acid. The day she learned what was wrong ranks among the worst of Weil’s life. “I had never heard of it,” said Weil, now 33, of her daughter’s diagnosis. “Most people haven’t.” The chronic illness seriously damaged Vivienne’s voice, though the 8-year-old has blossomed recently after a new treatment restored it; her mother says she is eagerly making new friends and has become “a happy, babbly little girl.” At first, Natalia, a statistician, and her husband, Jason, a photographer, were reassured by the pediatrician, who blamed a respiratory infection for their daughter’s voice problem. Her explanation sounded logical: Toddlers get an average of seven or eight colds annually. © 1996-2019 The Washington Post

Keyword: Language
Link ID: 26075 - Posted: 03.25.2019

By Emilia Clarke Just when all my childhood dreams seemed to have come true, I nearly lost my mind and then my life. I’ve never told this story publicly, but now it’s time. It was the beginning of 2011. I had just finished filming the first season of “Game of Thrones,” a new HBO series based on George R. R. Martin’s “A Song of Ice and Fire” novels. With almost no professional experience behind me, I’d been given the role of Daenerys Targaryen, also known as Khaleesi of the Great Grass Sea, Lady of Dragonstone, Breaker of Chains, Mother of Dragons. As a young princess, Daenerys is sold in marriage to a musclebound Dothraki warlord named Khal Drogo. It’s a long story—eight seasons long—but suffice to say that she grows in stature and in strength. She becomes a figure of power and self-possession. Before long, young girls would dress in platinum wigs and flowing robes to be Daenerys Targaryen for Halloween. The show’s creators, David Benioff and D. B. Weiss, have said that my character is a blend of Napoleon, Joan of Arc, and Lawrence of Arabia. And yet, in the weeks after we finished shooting the first season, despite all the looming excitement of a publicity campaign and the series première, I hardly felt like a conquering spirit. I was terrified. Terrified of the attention, terrified of a business I barely understood, terrified of trying to make good on the faith that the creators of “Thrones” had put in me. I felt, in every way, exposed. In the very first episode, I appeared naked, and, from that first press junket onward, I always got the same question: some variation of “You play such a strong woman, and yet you take off your clothes. Why?” In my head, I’d respond, “How many men do I need to kill to prove myself?” © 2019 Condé Nast

Keyword: Stroke
Link ID: 26068 - Posted: 03.23.2019

Jef Akst A robot interacting with young honey bees in Graz, Austria, exchanged information with a robot swimming with zebrafish in Lausanne, Switzerland, and the robots’ communication influenced the behavior of each animal group, according to a study published in Science Robotics today (March 20). “It’s the first time that people are using this kind of technology to have two different species communicate with each other,” says Simon Garnier, a complex systems biologist at New Jersey Institute of Technology who did not participate in the study. “It’s a proof of concept that you can have robots mediate interactions between distant groups.” He adds, however, that the specific applications of such a setup remain to be seen. As robotics technology has advanced, biologists have sought to harness it, building robots that look and behave like animals. This has allowed researchers to control one side of social interactions in studies of animal behavior. Robots that successfully integrate into animal populations also provide scientists with a means to influence the groups’ behavior. “The next step, we were thinking . . . [is] adding features to the group that the animals cannot do because they don’t have the capabilities to do so,” José Halloy, a physicist at Paris Diderot University who has been working on developing robots to interact intelligently with animals for more than a decade, writes in an email. “The simple and striking thing is that robots can use telecommunication or the Internet and animals cannot do that.” © 1986 - 2019 The Scientist.

Keyword: Animal Communication; Language
Link ID: 26060 - Posted: 03.22.2019