Chapter 19. Language and Lateralization




By Benedict Carey “In my head, I churn over every sentence ten times, delete a word, add an adjective, and learn my text by heart, paragraph by paragraph,” wrote Jean-Dominique Bauby in his memoir, “The Diving Bell and the Butterfly.” In the book, Mr. Bauby, a journalist and editor, recalled his life before and after a paralyzing stroke that left him virtually unable to move a muscle; he tapped out the book letter by letter, by blinking an eyelid. Thousands of people are reduced to similarly painstaking means of communication as a result of injuries suffered in accidents or combat, of strokes, or of neurodegenerative disorders such as amyotrophic lateral sclerosis, or A.L.S., that disable the ability to speak. Now, scientists are reporting that they have developed a virtual prosthetic voice, a system that decodes the brain’s vocal intentions and translates them into mostly understandable speech, with no need to move a muscle, even those in the mouth. (The physicist and author Stephen Hawking used a muscle in his cheek to type keyboard characters, which a computer synthesized into speech.) “It’s formidable work, and it moves us up another level toward restoring speech” by decoding brain signals, said Dr. Anthony Ritaccio, a neurologist and neuroscientist at the Mayo Clinic in Jacksonville, Fla., who was not a member of the research group. Researchers have developed other virtual speech aids. Those work by decoding the brain signals responsible for recognizing letters and words, the verbal representations of speech. But those approaches lack the speed and fluidity of natural speaking. The new system, described on Wednesday in the journal Nature, deciphers the brain’s motor commands guiding vocal movement during speech — the tap of the tongue, the narrowing of the lips — and generates intelligible sentences that approximate a speaker’s natural cadence. © 2019 The New York Times Company

Keyword: Language; Robotics
Link ID: 26174 - Posted: 04.25.2019

By Karen Weintraub Stroke, amyotrophic lateral sclerosis and other medical conditions can rob people of their ability to speak. Their communication is limited to the speed at which they can move a cursor with their eyes (just eight to 10 words per minute), in contrast with the natural spoken pace of 120 to 150 words per minute. Now, although still a long way from restoring natural speech, researchers at the University of California, San Francisco, have generated intelligible sentences from the thoughts of people without speech difficulties. The work provides a proof of principle that it should one day be possible to turn imagined words into understandable, real-time speech, circumventing the vocal machinery, Edward Chang, a neurosurgeon at U.C.S.F. and co-author of the study published Wednesday in Nature, said Tuesday in a news conference. “Very few of us have any real idea of what’s going on in our mouth when we speak,” he said. “The brain translates those thoughts of what you want to say into movements of the vocal tract, and that’s what we want to decode.” But Chang cautions that the technology, which has only been tested on people with typical speech, might be much harder to make work in those who cannot speak—and particularly in people who have never been able to speak because of a movement disorder such as cerebral palsy. Chang also emphasized that his approach cannot be used to read someone’s mind—only to translate words the person wants to say into audible sounds. “Other researchers have tried to look at whether or not it’s actually possible to decode essentially just thoughts alone,” he says. “It turns out it’s a very difficult and challenging problem. That’s only one reason of many that we focus on what people are trying to say.” © 2019 Scientific American

Keyword: Brain imaging; Language
Link ID: 26170 - Posted: 04.24.2019

By Benedict Carey More than 3 million Americans live with disabling brain injuries. The vast majority of these individuals are lost to the medical system soon after their initial treatment, to be cared for by family or to fend for themselves, managing fatigue, attention and concentration problems with little hope of improvement. On Saturday, a team of scientists reported a glimmer of hope. Using an implant that stimulates activity in key areas of the brain, they restored near-normal levels of brain function to a middle-aged woman who was severely injured in a car accident 18 years ago. Experts said the woman was a test case, and that it was far from clear whether the procedure would prompt improvements for others like her. That group includes an estimated 3 million to 5 million people, many of them veterans of the wars in Iraq and Afghanistan, with disabilities related to traumatic brain injuries. “This is a pilot study,” said Dr. Steven R. Flanagan, the chairman of the department of rehabilitation medicine at NYU Langone Health, who was not part of the research team. “And we certainly cannot generalize from it. But I think it’s a very promising start, and there is certainly more to come in this work.” The woman, now in her early 40s, was a student when the accident occurred. She soon recovered sufficiently to live independently. But she suffered from persistent fatigue and could not read or concentrate for long, leaving her unable to hold a competitive job, socialize much, or resume her studies. “Her life has changed,” said Dr. Nicholas Schiff, a professor of neurology and neuroscience at Weill Cornell Medicine and a member of the study team. “She is much less fatigued, and she’s now reading novels. The next patient might not do as well. But we want to keep going to see what happens.” © 2019 The New York Times Company

Keyword: Brain Injury/Concussion
Link ID: 26142 - Posted: 04.15.2019

By Ken Belson and Benedict Carey Experimental brain scans of more than two dozen former N.F.L. players found that the men had abnormal levels of the protein linked to chronic traumatic encephalopathy, the degenerative brain disease associated with repeated hits to the head. Using positron emission tomography, or PET, scans, the researchers found “elevated amounts of abnormal tau protein” in the parts of the brain associated with the disease, known as C.T.E., compared to men of similar age who had not played football. The authors of the study and outside experts stressed that such tau imaging is far from a diagnostic test for C.T.E., which is likely years away and could include other markers, from blood and spinal fluid. The results of the study, published in The New England Journal of Medicine on Wednesday, are considered preliminary, but constitute a first step toward developing a clinical test to determine the presence of C.T.E. in living players, as well as early signs and potential risk. Thus far, pathologists have been able to confirm the diagnosis only posthumously, by identifying the tau signature in donated brains. Previous studies had reported elevated levels of the tau signature in single cases. The new study is the first to compare the brains of a group of former players to a control group, using an imaging approach that specifically picks up tau and not other proteins in the brain. “What makes this exciting is that it’s a great first step for imaging C.T.E. in the living, not just looking at single instances, but comparing averages and looking for patterns by comparing groups,” said Kevin Bieniek, director of the Biggs Institute Brain Bank Core at the University of Texas Health Science Center in San Antonio. © 2019 The New York Times Company

Keyword: Brain Injury/Concussion
Link ID: 26129 - Posted: 04.11.2019

By Lydia Denworth The vast majority of neuroscientific studies contain three elements: a person, a cognitive task and a high-tech machine capable of seeing inside the brain. That simple recipe can produce powerful science. Such studies now routinely yield images that a neuroscientist used to only dream about. They allow researchers to delineate the complex neural machinery that makes sense of sights and sounds, processes language and derives meaning from experience. But something has been largely missing from these studies: other people. We humans are innately social, yet even social neuroscience, a field explicitly created to explore the neurobiology of human interaction, has not been as social as you would think. Just one example: no one has yet captured the rich complexity of two people’s brain activity as they talk together. “We spend our lives having conversation with each other and forging these bonds,” neuroscientist Thalia Wheatley of Dartmouth College says. “[Yet] we have very little understanding of how it is people actually connect. We know almost nothing about how minds couple.” That is beginning to change. A growing cadre of neuroscientists is using sophisticated technology—and some very complicated math—to capture what happens in one brain, two brains, or even 12 or 15 at a time when their owners are engaged in eye contact, storytelling, joint attention focused on a topic or object, or any other activity that requires social give and take. Although the field of interactive social neuroscience is in its infancy, the hope remains that identifying the neural underpinnings of real social exchange will change our basic understanding of communication and ultimately improve education or inform treatment of the many psychiatric disorders that involve social impairments. © 2019 Scientific American

Keyword: Brain imaging
Link ID: 26128 - Posted: 04.11.2019

Laura Sanders Brains have long been star subjects for neuroscientists. But the typical “brain in a jar” experiments that focus on one subject in isolation may be missing a huge part of what makes us human — our social ties. “There’s this assumption that we can understand how the mind works by just looking at individual minds, and not looking at them in interactions,” says social neuroscientist Thalia Wheatley of Dartmouth College. “I think that’s wrong.” To answer some of the thorniest questions about the human brain, scientists will have to study the mind as it actually exists: steeped in social connections that involve rich interplay among family, friends and strangers, Wheatley argues. To illustrate her point, she asked the audience at a symposium in San Francisco on March 26, during the annual meeting of the Cognitive Neuroscience Society, how many had talked to another person that morning. Nearly everybody in the crowd of about 100 raised a hand. Everyday social interactions may seem inconsequential. But recent work on those who have been isolated, such as elderly people and prisoners in solitary confinement, suggests otherwise: Brains deprived of social interaction stop working well (SN: 12/8/18, p. 11). “That’s a hint that it’s not just that we like interaction,” Wheatley says. “It’s important to keep us healthy and sane.” © Society for Science & the Public 2000 - 2019

Keyword: Learning & Memory
Link ID: 26122 - Posted: 04.09.2019

Nicola Davis A low level of alcohol consumption does not protect against stroke, new research suggests, in the latest blow to the idea that a few drinks can be beneficial to health. At least 100,000 people have strokes in the UK every year, according to recent figures. It had been thought that low levels of alcohol consumption might have a protective effect against stroke, as well as other diseases and conditions. Now researchers say that in the case of stroke, even low levels of alcohol consumption are bad news. “Moderate drinking of about one or two drinks a day does not protect against stroke,” said Dr Iona Millwood, co-author of the study from the University of Oxford. The results chime with a major study released last year which concluded there is no healthy level of drinking. Writing in the Lancet, researchers from the UK and China described how they examined the impact of alcohol on stroke using a type of natural experiment. About a third of people from east Asia have genetic variants that affect the way alcohol is broken down in the body, which can make drinking an unpleasant experience and lead to flushed skin. People with these genetic variants are known to drink less – a situation confirmed by the latest study – but who has these genetic variants is random, meaning they can appear in people regardless of their social situation or health. As a result, the team were able to look at the impact of drinking on the risk of stroke without many of the other issues that can muddy the waters. © 2019 Guardian News & Media Limited

Keyword: Stroke; Drug Abuse
Link ID: 26118 - Posted: 04.06.2019

Helen Thompson Whether practical, dramatical or pragmatical, domestic cats appear to recognize the familiar sound of their own names and can distinguish them from other words, researchers report April 4 in Scientific Reports. While dog responses to human behavior and speech have received much attention (SN: 10/1/16, p. 11), researchers are just scratching the surface of human-cat interactions. Research has shown that domestic cats (Felis catus) appear to respond to human facial expressions, and can distinguish between different human voices. But can cats recognize their own names? “I think many cat owners feel that cats know their names, or the word ‘food,’” but until now, there was no scientific evidence to back that up, says Atsuko Saito, a psychologist at Sophia University in Tokyo and a cat owner. So Saito and her colleagues pounced on that research question. They asked cat owners to say four nouns of similar length followed by the cat’s name. Cats gradually lost interest with each noun, but then reacted strongly to their names — moving their ears, head or tail, shifting their hind paw position or, of course, meowing. The results held up with cats living alone, with other cats and at a cat café, where customers can hang out with cats. And when someone other than the owner said the name, the cats still responded to their names more than to other nouns. One finding did give the team pause. Cat café cats almost always reacted to their names and those of other cats living there. Housecats did so much less frequently. Lots of humans visit cat cafés, and cats’ names are frequently called together, so it may be harder for cats to associate their own names with positive reinforcement in these environments, the researchers write. As for whether or not a cat understands what a name is, well, only the cat knows that. © Society for Science & the Public 2000 - 2019

Keyword: Animal Communication
Link ID: 26115 - Posted: 04.06.2019

By C. Claiborne Ray Q. How do bees find the flowers in the container garden on the fourth-floor deck of my city apartment? A. Foraging bees use the same methods to find nectar and pollen four floors up that they use at ground level. Honeybees routinely fly two miles from their hives in their search for raw material for honey; it doesn’t require much extra energy to fly several stories up. It takes only one scout to report a promising garden to the rest of the hive with a famous waggle dance. The scout relies on its sophisticated eyes, which are tuned to a variety of wavelengths, including ultraviolet color patterns in flowers that are invisible to people. When the bees get closer to flowers, smell receptors begin transmitting information. And it has recently been discovered that both bumblebees and honeybees can detect and discriminate among weak electrostatic fields emanating from flowers. The bees accumulate a positive charge, while the flowers have a negative charge. The interaction between the fields is detected by antennae or sensitive hairs on the body. The electrical field helps bees to recognize pollen-rich blooms and perhaps even to transfer the pollen. © 2019 The New York Times Company

Keyword: Animal Migration; Animal Communication
Link ID: 26102 - Posted: 04.02.2019

By Malia Wollan “If you’re talking to a puppy, increase the pitch of your voice and slow the tempo,” says Mario Gallego-Abenza, a cognitive biologist and an author of a recent study analyzing canine response to human speech. People tend to use that high-register, baby-talk tone with all dogs, but it’s really only puppies under a year old that seem to like it. “With older dogs, just use your normal voice,” he says. Dogs can learn words. One well-studied border collie named Rico knew 200 objects by name and, like a toddler, could infer the names of novel objects by excluding things with labels he already knew. Use facial expressions, gestures and possibly food treats while you talk. “Maintain eye contact,” Gallego-Abenza says. Research shows that even wolves are attuned to the attention of human faces and that dogs are particularly receptive to your gaze and pointing gestures. Scientists disagree about whether dogs are capable of full-blown empathy, but studies suggest canines feel at least a form of primitive empathy known as “emotional contagion.” In one study, dogs that heard recordings of infants crying experienced the same spike in cortisol levels and alertness as their human counterparts. You might find yourself wondering: Is this dog even listening to me? Does it care? Look for the sorts of social cues you would seek in an attentive human listener. “Is the dog looking at you?” Gallego-Abenza says. “Is it getting closer?” You are a social animal; connection with other social animals can make you feel better about the world. Gallego-Abenza, no longer studying dogs, is now working on a doctorate at the University of Vienna focused on vocalizations between ravens. Last year, a couple contacted him, sure that they were able to converse with the birds in their garden. “Humans have this rich language, and we really want to communicate,” he says. 
“We think that every other animal is the same, but they’re not.” Go ahead and talk to dogs, but consider leaving wild creatures to their own intraspecies squeaks, howls and whispers. © 2019 The New York Times Company

Keyword: Animal Communication
Link ID: 26077 - Posted: 03.26.2019

By Sandra G. Boodman “She never cried loudly enough to bother us,” recalled Natalia Weil of her daughter, who was born in 2011. Although Vivienne babbled energetically in her early months, her vocalizing diminished around the time of her first birthday. So did the quality of her voice, which dwindled from normal to raspy to little more than a whisper. Vivienne also was a late talker: She didn’t begin speaking until she was 2. Her suburban Maryland pediatrician initially suspected that a respiratory infection was to blame for the toddler’s hoarseness, and counseled patience. But after the problem persisted, the doctor diagnosed acid reflux and prescribed a drug to treat the voice problems reflux can cause. But Vivienne’s problem turned out to be far more serious — and unusual — than excess stomach acid. The day she learned what was wrong ranks among the worst of Weil’s life. “I had never heard of it,” said Weil, now 33, of her daughter’s diagnosis. “Most people haven’t.” [Photo caption: The chronic illness seriously damaged Vivienne Weil’s voice. The 8-year-old has blossomed recently after a new treatment restored it. Her mother says she is eagerly making new friends and has become “a happy, babbly little girl.” (Natalia Weil)] At first, Natalia, a statistician, and her husband, Jason, a photographer, were reassured by the pediatrician, who blamed a respiratory infection for their daughter’s voice problem. Her explanation sounded logical: Toddlers get an average of seven or eight colds annually. © 1996-2019 The Washington Post

Keyword: Language
Link ID: 26075 - Posted: 03.25.2019

By Emilia Clarke Just when all my childhood dreams seemed to have come true, I nearly lost my mind and then my life. I’ve never told this story publicly, but now it’s time. It was the beginning of 2011. I had just finished filming the first season of “Game of Thrones,” a new HBO series based on George R. R. Martin’s “A Song of Ice and Fire” novels. With almost no professional experience behind me, I’d been given the role of Daenerys Targaryen, also known as Khaleesi of the Great Grass Sea, Lady of Dragonstone, Breaker of Chains, Mother of Dragons. As a young princess, Daenerys is sold in marriage to a musclebound Dothraki warlord named Khal Drogo. It’s a long story—eight seasons long—but suffice to say that she grows in stature and in strength. She becomes a figure of power and self-possession. Before long, young girls would dress in platinum wigs and flowing robes to be Daenerys Targaryen for Halloween. The show’s creators, David Benioff and D. B. Weiss, have said that my character is a blend of Napoleon, Joan of Arc, and Lawrence of Arabia. And yet, in the weeks after we finished shooting the first season, despite all the looming excitement of a publicity campaign and the series première, I hardly felt like a conquering spirit. I was terrified. Terrified of the attention, terrified of a business I barely understood, terrified of trying to make good on the faith that the creators of “Thrones” had put in me. I felt, in every way, exposed. In the very first episode, I appeared naked, and, from that first press junket onward, I always got the same question: some variation of “You play such a strong woman, and yet you take off your clothes. Why?” In my head, I’d respond, “How many men do I need to kill to prove myself?” © 2019 Condé Nast

Keyword: Stroke
Link ID: 26068 - Posted: 03.23.2019

Jef Akst A robot interacting with young honey bees in Graz, Austria, exchanged information with a robot swimming with zebrafish in Lausanne, Switzerland, and the robots’ communication influenced the behavior of each animal group, according to a study published in Science Robotics today (March 20). “It’s the first time that people are using this kind of technology to have two different species communicate with each other,” says Simon Garnier, a complex systems biologist at New Jersey Institute of Technology who did not participate in the study. “It’s a proof of concept that you can have robots mediate interactions between distant groups.” He adds, however, that the specific applications of such a setup remain to be seen. As robotics technology has advanced, biologists have sought to harness it, building robots that look and behave like animals. This has allowed researchers to control one side of social interactions in studies of animal behavior. Robots that successfully integrate into animal populations also provide scientists with a means to influence the groups’ behavior. “The next step, we were thinking . . . [is] adding features to the group that the animals cannot do because they don’t have the capabilities to do so,” José Halloy, a physicist at Paris Diderot University who has been working on developing robots to interact intelligently with animals for more than a decade, writes in an email. “The simple and striking thing is that robots can use telecommunication or the Internet and animals cannot do that.” © 1986 - 2019 The Scientist.

Keyword: Animal Communication; Language
Link ID: 26060 - Posted: 03.22.2019

Bruce Bower Humankind’s gift of gab is not set in stone, and farming could help to explain why. Over the last 6,000 years or so, farming societies increasingly have substituted processed dairy and grain products for tougher-to-chew game meat and wild plants common in hunter-gatherer diets. Switching to those diets of softer, processed foods altered people’s jaw structure over time, rendering certain sounds like “f” and “v” easier to utter, and changing languages worldwide, scientists contend. People who regularly chew tough foods such as game meat experience a jaw shift that removes a slight overbite from childhood. But individuals who grow up eating softer foods retain that overbite into adulthood, say comparative linguist Damián Blasi of the University of Zurich and his colleagues. Computer simulations suggest that adults with an overbite are better able to produce certain sounds that require touching the lower lip to the upper teeth, the researchers report in the March 15 Science. Linguists classify those speech sounds, found in about half of the world’s languages, as labiodentals. And when Blasi and his team reconstructed language change over time among Indo-European tongues (SN: 11/25/17, p. 16), currently spoken from Iceland to India, the researchers found that the likelihood of using labiodentals in those languages rose substantially over the past 6,000 to 7,000 years. That was especially true when foods such as milled grains and dairy products started appearing (SN: 2/1/03, p. 67). “Labiodental sounds emerged recently in our species, and appear more frequently in populations with long traditions of eating soft foods,” Blasi said at a March 12 telephone news conference. © Society for Science & the Public 2000 - 2019

Keyword: Language; Evolution
Link ID: 26037 - Posted: 03.15.2019

Aimee Cunningham How active a person’s immune system is soon after a stroke may be tied to later mental declines, a new study finds. Researchers took blood samples from 24 stroke patients up to nine times over the course of a year. Twelve of the patients also completed a mental-skills test at four points during that time. Patients who had highly active immune cells on the second day after a stroke were more likely to see their test scores decline a year later, researchers report online March 12 in Brain. “The people who either got better on the task or stayed the same had less of an immune response at day 2 [after the stroke], and the people who had more of an immune response at day 2 were more likely to decline and do worse later,” says study coauthor Marion Buckwalter, a neuroscientist at Stanford University School of Medicine. A stroke occurs when the brain loses oxygen, due to a blocked or burst blood vessel. Buckwalter and her colleagues used a technique called mass cytometry that analyzes thousands of immune cells and their signaling molecules — which indicate how active a cell is — from blood samples of patients who had suffered a stroke. The researchers also tested patients’ memory, concentration, language skills and other thinking skills using the Montreal Cognitive Assessment. It’s unclear why some patients have a more active immune response than others in the days after a stroke. But with more research, it’s possible that the response may be a way to predict which patients will fare worse after a stroke, the researchers say. © Society for Science & the Public 2000 - 2019

Keyword: Stroke; Neuroimmunology
Link ID: 26026 - Posted: 03.13.2019

Laura Sanders In the understory of Central American cloud forests, musical mice trill songs to one another. Now a study of the charismatic creatures reveals how their brains orchestrate these rapid-fire duets. The results, published in the March 1 Science, show that the brains of singing mice split up the musical work. One brain system directs the patterns of notes that make up songs, while another coordinates duets with another mouse, which are carried out with split-second precision. The study suggests that “a quirky animal from the cloud forest of Costa Rica could give us a brand new insight,” into the rapid give-and-take in people’s conversations, says study coauthor Michael Long, a neuroscientist at New York University’s School of Medicine. Quirks abound in these mice, known as Alston’s singing mice (Scotinomys teguina). Like famous singers with extreme green room demands, these mice are “kind of divas,” Long says, requiring larger terrariums, exercise equipment and a very special diet. In the lab, standard mouse chow doesn’t cut it; instead, singing mice feast on fresh meal worm, dry cat food and fresh fruits and berries, says Bret Pasch. The biologist at Northern Arizona University in Flagstaff has studied these singing mice for years but wasn’t involved in this study. The mice are also, of course, loud. “They’re very vocal,” particularly in the confines of a lab, Pasch says. “Once an animal calls, it’s like a symphony that goes off,” with repeating calls. In the wild, these duets are thought to attract mates and stake out territory. © Society for Science & the Public 2000 - 2019.

Keyword: Animal Communication; Sexual Behavior
Link ID: 25998 - Posted: 03.01.2019

By Karen Weintraub A widely criticized experiment last year saw a researcher in China delete a gene in twin girls at the embryonic stage in an attempt to protect them from HIV. A new study suggests that using a drug to delete the same gene in people with stroke or traumatic brain injuries could help improve their recovery. The new work shows the benefits of turning off the gene in stroke-induced mice by using the drug, already approved as an HIV treatment. It also focuses on a sample of people who were naturally born without the gene. People without the gene recover faster and more completely from stroke than the general population does, the researchers found. The combined results suggest the drug might boost recovery in humans after a stroke or traumatic brain injury, says S. Thomas Carmichael, the study’s senior researcher and a neurologist at the University of California, Los Angeles, David Geffen School of Medicine. His team has started a follow-up human study to test the drug’s efficacy. The combination of mouse research and leveraging of people’s genetic data to confirm the relevance of drug targets makes the new research a “landmark paper,” says Jin-Moo Lee, co-director of the Barnes–Jewish Hospital and Washington University Stroke and Cerebrovascular Center in Saint Louis who was not involved with the work. © 2019 Scientific American

Keyword: Stroke
Link ID: 25981 - Posted: 02.22.2019

Jules Howard It’s a bit garbled but you can definitely hear it in the mobile phone footage. As the chimpanzees arrange their branches into a makeshift ladder and one of them makes its daring escape from its Belfast zoo enclosure, some words ring out loud and clear: “Don’t escape, you bad little gorilla!” a child onlooker shouts from the crowd. And … POP … with that a tiny explosion goes off inside my head. Something knocks me back about this sentence. It’s a “kids-say-the-funniest things” kind of sentence, and in any other situation I’d offer a warm smile and a chuckle of approval. But not this time. This statement has brought out the pedant in me. At this point, you may wonder if I’m capable of fleshing out a 700-word article chastising a toddler for mistakenly referring to a chimpanzee as a gorilla. The good news is that, though I am more than capable of such a callous feat, I don’t intend to write about this child’s naive zoological error. In fact, this piece isn’t really about the (gorgeous, I’m sure) child. It’s about us. You and me, and the words we use. So let’s repeat it. That sentence, I mean. “Don’t escape, you bad little gorilla!” the child shouted. The words I’d like to focus on in this sentence are the words “you” and “bad”. The words “you” and “bad” are nice examples of a simple law of nearly all human languages. They are examples of Zipf’s law of abbreviation, where more commonly used words in a language tend to be shorter. It’s thought that this form of information-shortening allows the transmission of more complex information in a shorter amount of time, and it’s why one in four words you and I write or say is likely to be something of the “you, me, us, the, to” variety. © 2019 Guardian News & Media Limited
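Zipf's law of abbreviation, as the piece describes it, makes a checkable prediction: the most frequently used words in a text should be shorter, on average, than the rarer ones. A minimal, illustrative sketch (the sample sentence, the function name, and the cutoff of five "common" words are my own choices, not from the article):

```python
# Illustrative sketch of Zipf's law of abbreviation: frequent words
# tend to be shorter. We count word frequencies in a sample text and
# compare the average length of the top-ranked words with the rest.
from collections import Counter

def avg_length_by_frequency(text, top_n=5):
    """Return (avg length of the top_n most frequent words,
    avg length of all remaining words)."""
    words = text.lower().split()
    ranked = [w for w, _ in Counter(words).most_common()]
    common, rare = ranked[:top_n], ranked[top_n:]
    avg = lambda ws: sum(map(len, ws)) / len(ws)
    return avg(common), avg(rare)

sample = ("the cat sat on the mat and the dog sat on the rug "
          "while the elephant wandered across the meadow")
common_len, rare_len = avg_length_by_frequency(sample)
# Short function words ("the", "on") dominate the frequent group,
# so its average length comes out smaller.
print(common_len < rare_len)  # prints True
```

On a toy sentence this is only suggestive; the regularity the article refers to is established across large corpora and across nearly all human languages.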

Keyword: Language; Evolution
Link ID: 25971 - Posted: 02.18.2019

By Virginia Morell It’s hard to imagine a teen asking their mother for approval on anything. But a new study shows that male zebra finches—colorful songbirds with complex songs—learn their father’s tune better when mom “fluffs up” to signal her approval. This is the first time the songbirds, thought to be mere memorization machines, have been shown to use social cues for learning—putting them in an elite club that includes cowbirds, marmosets, and humans. The finding suggests other songbirds might also learn their tunes this way, and that zebra finches are better models for studying language development than thought. “Female zebra finches play an important role in male learning, in some ways even rivaling that of the male tutors,” says Karl Berg, an avian ecologist at the University of Texas in Brownsville, who was not involved in the new study. Previously, scientists knew only that the nonsinging females played some role in song acquisition, because males raised with deaf females develop incorrect songs. Researchers have long known that female brown-headed cowbirds make quick, lateral wing strokes to approve the songs of juvenile males (as in finches, only male cowbirds learn to sing). Most scientists discounted the cowbirds’ social cues as an isolated oddity, because the birds are brood parasites. But cowbirds’ similarities to zebra finches—both are highly social and use their songs to attract mates rather than claim territories—led Cornell University developmental psychobiologists Samantha Carouso-Peck and Michael Goldstein to wonder whether female finches also use social cues to help young males learn the best, mate-attracting songs. © 2018 American Association for the Advancement of Science.

Keyword: Animal Communication; Sexual Behavior
Link ID: 25922 - Posted: 02.01.2019

Hannah Devlin Science correspondent People who stutter are being given electrical brain stimulation in a clinical trial aimed at improving fluency without the need for gruelling speech training. If shown to be effective, the technique – which involves passing an almost imperceptible current through the brain – could be routinely offered by speech therapists. “Stuttering can have serious effects on individuals in terms of their choice of career, what they can get out of education, their earning potential and personal life,” said Prof Kate Watkins, the trial’s principal investigator and a neuroscientist at the University of Oxford. About one in 20 young children go through a phase of stuttering, but most grow out of it. It is estimated that stuttering affects about one in 100 adults, with men about four times more likely to stutter than women. In the film The King’s Speech, a speech therapist uses a barrage of techniques to help King George VI, played by Colin Firth, to overcome his stutter, including breathing exercises and speaking without hearing his own voice. The royal client also learns that he can sing without stuttering, a common occurrence in people with the impediment. Speech therapy has advanced since the 1930s, but some of the most effective programmes for improving fluency still require intensive training and involve lengthy periods of using unnatural-sounding speech. © 2019 Guardian News and Media Limited

Keyword: Language
Link ID: 25901 - Posted: 01.26.2019