Chapter 15. Brain Asymmetry, Spatial Cognition, and Language
By Karen Russell In late October, when the Apple TV was relaunched, Bandit’s Shark Showdown was among the first apps designed for the platform. The game stars a young dolphin with anime-huge eyes, who battles hammerhead sharks with bolts of ruby light. There is a thrilling realism to the undulance of the sea: each movement a player makes in its midnight-blue canyons unleashes a web of fluming consequences. Bandit’s tail is whiplash-fast, and the sharks’ shadows glide smoothly over rocks. Every shark, fish, and dolphin is rigged with an invisible skeleton, their cartoonish looks belied by the programming that drives them—coding deeply informed by the neurobiology of action. The game’s design seems suspiciously sophisticated when compared with that of apps like Candy Crush Soda Saga and Dude Perfect 2. Bandit’s Shark Showdown’s creators, Omar Ahmad, Kat McNally, and Promit Roy, work for the Johns Hopkins School of Medicine, and made the game in conjunction with a neuroscientist and neurologist, John Krakauer, who is trying to radically change the way we approach stroke rehabilitation. Ahmad told me that their group has two ambitions: to create a successful commercial game and to build “artistic technologies to help heal John’s patients.” A sister version of the game is currently being played by stroke patients with impaired arms. Using a robotic sling, patients learn to sync the movements of their arms to the leaping, diving dolphin; that motoric empathy, Krakauer hopes, will keep patients engaged in the immersive world of the game for hours, contracting their real muscles to move the virtual dolphin.
Angus Chen English bursts with consonants. We have words that string one after another, like angst, diphthong and catchphrase. But other languages keep more vowels and open sounds. And that variability might be because they evolved in different habitats. Consonant-heavy syllables don't carry very well in places like windy mountain ranges or dense rainforests, researchers say. "If you have a lot of tree cover, for example, [sound] will reflect off the surface of leaves and trunks. That will break up the coherence of the transmitted sound," says Ian Maddieson, a linguist at the University of New Mexico. That can be a real problem for complicated consonant-rich sounds like "spl" in "splice" because of the series of high-frequency noises. In this case, there's a hiss, a sudden stop and then a pop. Where a simple, steady vowel sound like "e" or "a" can cut through thick foliage or the cacophony of wildlife, these consonant-heavy sounds tend to get scrambled. Hot climates might wreck a word's coherence as well, since sunny days create pockets of warm air that can punch into a sound wave. "You disrupt the way it was originally produced, and it becomes much harder to recognize what sound it was," Maddieson says. "In a more open, temperate landscape, prairies in the Midwest of the United States [or in Georgia] for example, you wouldn't have that. So the sound would be transmitted with fewer modifications." © 2015 npr
Paul Ibbotson and Michael Tomasello The natural world is full of wondrous adaptations such as camouflage, migration and echolocation. In one sense, the quintessentially human ability to use language is no more remarkable than these other talents. However, unlike these other adaptations, language seems to have evolved just once, in one out of 8.7 million species on earth today. The hunt is on to explain the foundations of this ability and what makes us different from other animals. The intellectual most closely associated with trying to pin down that capacity is Noam Chomsky. He proposed a universal grammatical blueprint that was unique to humans. This blueprint operated like a computer program. Instead of running Windows or Excel, this program performed “operations” on language – any language. Regardless of which of the 6000+ human languages that this code could be exposed to, it would guide the learner to the correct adult grammar. It was a bold claim: despite the surface variations we hear between Swahili, Japanese and Latin, they are all run on the same piece of underlying software. As ever, remarkable claims require remarkable evidence, and in the 50 years since some of these ideas were laid out, history has not been kind. First, it turned out that it is really difficult to state what is “in” universal grammar in a way that does justice to the sheer diversity of human languages. Second, it looks as if kids don’t learn language in the way predicted by a universal grammar; rather, they start with small pockets of reliable patterns in the language they hear, such as Where’s the X?, I wanna X, More X, It’s a X, I’m X-ing it, Put X here, Mommy’s X-ing it, Let’s X it, Throw X, X gone, I X-ed it, Sit on the X, Open X, X here, There’s a X, X broken … and gradually build their grammar on these patterns, from the “bottom up”. © 2015 Guardian News and Media Limited
Nicole Fisher We know that the brain is neuroplastic — it adapts to changes in behavior, environment, thinking and emotions — and may even rewire itself in certain ways. Life experience also teaches us that the tongue is a learning tool that shapes our brain. During early development, babies test everything by placing it in their mouths. As children age they stick out their tongues when concentrating on tasks such as drawing. Even as adults we let our tongue tell us about the world around us through eating, drinking and kissing. During basketball games, some players stick out their tongues while shooting. Now, knowing that there is such a rich nerve connection to the brain, scientists and doctors are turning to the tongue as a way to possibly stimulate the brain for neural retraining and rehabilitation after traumatic injuries or disease. The team at Helius Medical Technologies believes that combining physical therapy with stimulation of the tongue may improve impairment of brain function and associated symptoms of injury. “We have already seen that stimulation of various nerves can improve symptoms of a range of neurological diseases. However, we believe the tongue is a much more elegant and direct pathway for stimulating brain structures and inducing neuroplasticity. We are focused on investigating the tongue as a gateway to the brain to hopefully ease the disease of brain injury,” said Dr. Jonathan Sackier, CMO at Helius. Some have argued that the era of the small molecule is over. Instead, the recognition that the entire body is a closed electrical circuit is leading to new therapeutic modalities known in certain circles as “electroceuticals.”
Looking at brain tissue from mice, monkeys and humans, scientists have found that a molecule known as growth and differentiation factor 10 (GDF10) is a key player in repair mechanisms following stroke. The findings suggest that GDF10 may be a potential therapy for recovery after stroke. The study, published in Nature Neuroscience, was supported by the National Institute of Neurological Disorders and Stroke (NINDS), part of the National Institutes of Health. “These findings help to elucidate the mechanisms of repair following stroke. Identifying this key protein further advances our knowledge of how the brain heals itself from the devastating effects of stroke, and may help to develop new therapeutic strategies to promote recovery,” said Francesca Bosetti, Ph.D., stroke program director at NINDS. Stroke can occur when a brain blood vessel becomes blocked, preventing nearby tissue from getting essential nutrients. When brain tissue is deprived of oxygen and nutrients, it begins to die. Once this occurs, repair mechanisms, such as axonal sprouting, are activated as the brain attempts to overcome the damage. During axonal sprouting, healthy neurons send out new projections (“sprouts”) that re-establish some of the connections lost or damaged during the stroke and form new ones, resulting in partial recovery. Before this study, it was unknown what triggered axonal sprouting. Previous studies suggested that GDF10 was involved in the early stages of axonal sprouting, but its exact role in the process was unclear. S. Thomas Carmichael, M.D., Ph.D., and his colleagues at the David Geffen School of Medicine at the University of California Los Angeles took a closer look at GDF10 to identify how it may contribute to axonal sprouting.
By Bob Grant Scientists delving into the neurological underpinnings of traumatic brain injuries (TBI) are finding that there may be crucial differences in the long-term effects of the events that depend not only on the insult, but also on the victim. “No two brain injuries are identical,” University of Pennsylvania neuroscientist Akiva Cohen said during a press conference held at the Society for Neuroscience (SfN) annual meeting in Chicago on Monday (October 19). “Brain injury, like many pathologies these days, constitutes a spectrum.” In addition to a severity spectrum that spans mild to severe, brain injuries may differ in terms of how male and female animals respond to them, according to Ramesh Raghupathi, a neurobiologist at Drexel University. Raghupathi and his colleagues have found that young male mice suffer more depressive behaviors than female mice at both four and eight weeks after mild TBI, and females display more headache-like symptoms after similar insults, which can include concussion. “All of these animals at these times after injury are cognitively normal,” Raghupathi told reporters. “And they do not have any movement problems.” Raghupathi and his colleagues also found molecular differences that may underlie the sex differences in TBI response that they observed. “In the male mice,” he said, “there is a dramatic difference in dopamine transmission compared to the uninjured mice.” Researchers have previously linked impaired dopamine signaling to depression. Raghupathi’s team tested for the lingering effects of TBI in mice by subjecting the animals to certain swimming tests—which are accepted as proxies for depression—and by using a thin filament to touch the faces of the rodents and recording their sensitivity as a measure of headache-like behaviors.
By Jonathan Webb Science reporter, BBC News Crocodiles can sleep with one eye open, according to a study from Australia. In doing so they join a list of animals with this ability, which includes some birds, dolphins and other reptiles. Writing in the Journal of Experimental Biology, the researchers say the crocs are probably sleeping with one brain hemisphere at a time, leaving one half of the brain active and on the lookout. Consistent with this idea, the crocs in the study were more likely to leave one eye open in the presence of a human. They also kept that single eye trained directly on the interloper, said senior author John Lesku. "They definitely monitored the human when they were in the room. But even after the human left the room, the animal still kept its open eye… directed towards the location where the human had been - suggesting that they were keeping an eye out for potential threats." The experiments were done in an aquarium lined with infrared cameras, to monitor juvenile crocodiles day and night. "These animals are not particularly amenable to handling; they are a little snippy. So we had to limit all of our work to juvenile crocodiles, about 40-50cm long," said Dr Lesku, from La Trobe University in Melbourne. As well as placing a human in the room for certain periods, the team tested the effect of having other young crocs around. Sure enough, these also tended to attract the gaze of any reptiles dozing with only one eye. This matches what is known of "unihemispheric sleep" in aquatic mammals, such as walruses and dolphins, which seem to use one eye to make sure they stick together in a group. © 2015 BBC.
By Hanae Armitage About 70 million people worldwide stutter when they speak, and it turns out humans aren’t the only ones susceptible to verbal hiccups. Scientists at this year’s Society for Neuroscience Conference in Chicago, Illinois, show that mice, too, can stumble in their vocalizations. In humans, stuttering has long been linked to a mutation in the “housekeeping” gene Gnptab, which maintains basic levels of cellular function. To cement this curious genetic link, researchers decided to induce the Gnptab “stutter mutation” in mice. They suspected the change would trigger a mouse version of stammering. But deciphering stuttered squeaks is no easy task, so researchers set up a computerized model to register stutters through a statistical analysis of vocalizations. After applying the model to human speech, researchers boiled the verbal impediment down to two basic characteristics—fewer vocalizations in a given period of time and longer gaps in between each vocalization. For example, in 1 minute, stuttering humans made just 90 vocalizations compared with 125 for non-stutterers. Using these parameters to evaluate mouse vocalizations, researchers were able to identify stuttering mice over a 3.5-minute period. As expected, the mice carrying the mutated gene had far fewer vocalizations, with longer gaps between “speech” compared with their unmodified littermates—Gnptab mutant mice had about 80 vocalizations compared with 190 in the nonmutant mice. The findings not only supply evidence for Gnptab’s role in stuttering, but they also show that its function remains relatively consistent across multiple species. Scientists say the genetic parallel could help reveal the neural mechanisms behind stuttering, be it squeaking or speaking. © 2015 American Association for the Advancement of Science.
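The two-feature analysis described above is simple enough to sketch in code. The following is an illustrative reconstruction, not the researchers' actual model: it reduces a recording to vocalizations per minute and the mean gap between vocalizations, then flags stutter-like recordings. The cutoff values are hypothetical, loosely anchored to the human figures quoted in the article (90 vocalizations per minute for stutterers versus 125 for non-stutterers).

```python
# Illustrative sketch of a two-feature stutter classifier. The cutoffs
# (rate_cutoff, gap_cutoff) are invented for illustration, not taken
# from the study.

def vocalization_features(timestamps, duration_s):
    """Return (vocalizations per minute, mean gap in seconds)."""
    rate = len(timestamps) / (duration_s / 60.0)
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean_gap = sum(gaps) / len(gaps) if gaps else 0.0
    return rate, mean_gap

def looks_like_stutter(timestamps, duration_s,
                       rate_cutoff=100.0, gap_cutoff=0.6):
    """Flag a recording as stutter-like: few vocalizations, long gaps."""
    rate, mean_gap = vocalization_features(timestamps, duration_s)
    return rate < rate_cutoff and mean_gap > gap_cutoff
```

With these hypothetical cutoffs, a 60-second recording containing 90 evenly spaced vocalizations is flagged as stutter-like, while one containing 125 is not.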
By Christopher Intagliata
"Babies come prepared to learn any of the world's languages." Alison Bruderer, a cognitive scientist at the University of British Columbia. "Which means no matter where they're growing up in the world, their brains are prepared to pick up the language they're listening to around them."
And listen they do. But another key factor to discerning a language’s particular sounds may be for babies to move their tongues as they listen. Bruderer and her colleagues tested that notion by sitting 24 six-month-olds in front of a video screen and displaying a checkerboard pattern, while they played one of two tracks: a single, repeated "D" sound in Hindi, …
Music can be a transformative experience, especially for your brain. Musicians’ brains respond more symmetrically to the music they listen to. And the size of the effect depends on which instrument they play. People who learn to play musical instruments can expect their brains to change in structure and function. When people are taught to play a piece of piano music, for example, the part of their brains that represents their finger movements gets bigger. Musicians are also better at identifying pitch and speech sounds – brain imaging studies suggest that this is because their brains respond more quickly and strongly to sound. Other research has found that the corpus callosum – the strip of tissue that connects the left and right hemisphere of the brain – is also larger in musicians. Might this mean that the two halves of a musician’s brain are better at communicating with each other compared with non-musicians? To find out, Iballa Burunat at the University of Jyväskylä in Finland and her colleagues used an fMRI scanner to look at the brains of 18 musicians and 18 people who have never played professionally. The professional musicians – all of whom had a degree in music – included cellists, violinists, keyboardists and bassoon and trombone players. While they were in the scanner, all of the participants were played three different pieces of music – prog rock, an Argentinian tango and some Stravinsky. Burunat recorded how their brains responded to the music, and used software to compare the activity of the left and right hemispheres of each person’s brain. © Copyright Reed Business Information Ltd.
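Burunat's actual analysis used specialized neuroimaging tools, but the core idea of quantifying how symmetrically the two hemispheres respond can be sketched with a simple correlation between the activity time series of a region and its mirror-image counterpart in the opposite hemisphere. Everything below (the function names, the region pairing, the data) is an assumption for illustration; real fMRI analyses operate on preprocessed brain volumes.

```python
# Minimal sketch: hemispheric symmetry as the mean Pearson correlation
# across homologous left/right region pairs. Hypothetical helper names;
# the input is a list of activity time series per hemisphere.
import math

def pearson(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def hemispheric_symmetry(left_regions, right_regions):
    """Mean correlation across paired left/right region time series."""
    rs = [pearson(l, r) for l, r in zip(left_regions, right_regions)]
    return sum(rs) / len(rs)
```

A score near 1 would mean the paired regions rise and fall together (the more symmetric response reported for musicians); a score near 0 would mean the hemispheres respond independently.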
Neuroscientist Dr. Charles Tator has asked the family of former NHL enforcer Todd Ewen to donate Ewen's brain so he can study it. This week, Ewen's death was ruled a suicide and Tator wants to examine his brain to determine whether it has signs of degeneration. In particular, he's interested in what Ewen's brain may have in common with the other brains of athletes he's studying as part of the Canadian Sports Concussion Project. Brent Bambury speaks with Dr. Tator about how concussions can affect athletes and what big unanswered questions remain when it comes to the links between concussions, brain injury and self-harm. This conversation has been edited for clarity and length. Brent Bambury: You and your team already have examined the brains of eighteen former professional athletes. What do you hope to learn by looking at Todd Ewen's brain? Dr. Charles Tator: Well we want to know if he had C.T.E. In other words, was this the cause of his decline in terms of depression, for example. BB: What is C.T.E.? CT: Well C.T.E. is chronic traumatic encephalopathy which is a specific type of brain degeneration that occurs after repetitive trauma like multiple concussions. BB: Is that something that you can only determine by examining the brain from a cadaver? CT: Unfortunately, even though we are getting clues about it from other tests like M.R.I., at this point in 2015, you have to do an autopsy to be sure that it's C.T.E. So with the Todd Ewen donation, if we're fortunate enough to have that opportunity to examine his brain, we would want to see if there were any manifestations of these previous concussions that he had in his career. ©2015 CBC/Radio-Canada
By Jane E. Brody Mark Hammel’s hearing was damaged in his 20s by machine gun fire when he served in the Israeli Army. But not until decades later, at 57, did he receive his first hearing aids. “It was very joyful, but also very sad, when I contemplated how much I had missed all those years,” Dr. Hammel, a psychologist in Kingston, N.Y., said in an interview. “I could hear well enough sitting face to face with someone in a quiet room, but in public, with background noise, I knew people were talking, but I had no idea what they were saying. I just stood there nodding my head and smiling. “Eventually, I stopped going to social gatherings. Even driving, I couldn’t hear what my daughter was saying in the back seat. I live in the country, and I couldn’t hear the birds singing. “People with hearing loss often don’t realize what they’re missing,” he said. “So much of what makes us human is social contact, interaction with other human beings. When that’s cut off, it comes with a very high cost.” And the price people pay is much more than social. As Dr. Hammel now realizes, “the capacity to hear is so essential to overall health.” Hearing loss is one of the most common conditions affecting adults, and the most common among older adults. An estimated 30 million to 48 million Americans have hearing loss that significantly diminishes the quality of their lives — academically, professionally and medically as well as socially. One person in three older than 60 has life-diminishing hearing loss, but most older adults wait five to 15 years before they seek help, according to a 2012 report in Healthy Hearing magazine. And the longer the delay, the more one misses of life and the harder it can be to adjust to hearing aids. © 2015 The New York Times Company
Now hear this. Anthropologists have estimated the hearing abilities of early hominins – reconstructing a human ancestor’s sensory perception. Rolf Quam from Binghamton University in New York and his colleagues studied skulls and ear bones from Australopithecus africanus and Paranthropus robustus, two species that lived between 1 million and 3 million years ago, as well as modern humans and chimpanzees. Using CT scans of the bones, they built 3D reconstructions of the ear of each species. Then they fed a series of anatomical measurements into a computer model to predict their hearing abilities. The results for humans and chimpanzees fitted well with laboratory data, suggesting the model aligned well with real performance. For each species, they then estimated the frequency range it could hear best. Modern humans and chimpanzees perform similarly below 3 kilohertz, but humans have better hearing than chimps in the 3-5 kHz range. The early hominins had a similar sensitive range to chimpanzees, but shifted slightly towards that of modern humans, so they had better hearing than chimps do for 3-4 kHz sounds. Australopithecus and Paranthropus are not believed to have been capable of language, but they almost certainly communicated vocally as other primates do, says Quam. Quam thinks this shift in hearing sensitivity would have helped them communicate in open environments, such as African savannahs, where human ancestors are thought to have evolved bipedalism. © Copyright Reed Business Information Ltd.
Linda Geddes Jack struggled in regular school. Diagnosed with dyslexia and the mathematical equivalent, dyscalculia, as well as the movement disorder dyspraxia, Jack (not his real name) often misbehaved and played the class clown. So the boy’s parents were relieved when he was offered a place at Fairley House in London, which specializes in helping children with learning difficulties. Fairley is also possibly the first school in the world to have offered pupils the chance to undergo electrical brain stimulation. The stimulation was done as part of an experiment in which twelve eight- to ten-year-olds, including Jack, wore an electrode-equipped cap while they played a video game. Neuroscientist Roi Cohen Kadosh of the University of Oxford, UK, who led the pilot study in 2013, is one of a handful of researchers across the world who are investigating whether small, specific areas of a child’s brain can be safely stimulated to overcome learning difficulties. “It would be great to be able to understand how to deliver effective doses of brain stimulation to kids’ brains, so that we can get ahead of developmental conditions before they really start to hold children back in their learning,” says psychologist Nick Davis of Swansea University, UK. The idea of using magnets or electric currents to treat psychiatric or learning disorders — or just to enhance cognition — has generated a flurry of excitement over the past ten years. The technique is thought to work by activating neural circuits or by making it easier for neurons to fire. The research is still in its infancy, but at least 10,000 adults have undergone such stimulation, and it seems to be safe — at least in the short term. One version of the technology, called transcranial magnetic stimulation (TMS), has been approved by the US Food and Drug Administration to treat migraine and depression in adults. © 2015 Nature Publishing Group,
by Laura Sanders Like every other person who carries around a smartphone, I take a lot of pictures, mostly of my kids. I thought I was bad with a few thousand snaps filling my phone’s memory. But then I talked to MIT researcher Deb Roy. For three years, Roy and a small group of researchers recorded every waking moment of Roy’s son’s life at home, amassing over 200,000 hours of video and audio recordings. Roy’s intention wasn’t to prove he was the proudest parent of all time. Instead, he wanted to study how babies learn to say words. As a communication and machine learning expert, Roy and his wife Rupal Patel, also a speech researcher, recognized that having a child would be a golden research opportunity. The idea to amass this gigantic dataset “was kicking around and something we thought about for years,” Roy says. So after a pregnancy announcement and lots of talking and planning and “fascinating conversations” with the university administration in charge of approving human experiments, the researchers decided to go for it. To the delight of his parents, a baby boy arrived in 2005. When Roy and Patel brought their newborn home, the happy family was greeted by 11 cameras and 14 microphones, tucked up into the ceiling. From that point on, cameras rolled whenever the baby was awake. © Society for Science & the Public 2000 - 2015
By JAMES GORMAN Among the deep and intriguing phenomena that attract intense scientific interest are the birth and death of the universe, the intricacies of the human brain and the way dogs look at humans. That gaze — interpreted as loving or slavish, inquisitive or dumb — can cause dog lovers to melt, cat lovers to snicker, and researchers in animal cognition to put sausage into containers and see what wolves and dogs will do to get at it. More than one experiment has made some things pretty clear. Dogs look at humans much more than wolves do. Wolves tend to put their nose to the Tupperware and keep at it. This evidence has led to the unsurprising conclusion that dogs are more socially connected to humans and wolves more self-reliant. Once you get beyond the basics, however, agreement is elusive. In order to assess the latest bit of research, published in Biology Letters Tuesday by Monique Udell at Oregon State University, some context can be drawn from an earlier experiment that got a lot of attention more than a decade ago. In a much publicized paper in 2003, Adam Miklosi, now director of the Family Dog Project, at Eotvos Lorand University in Budapest, described work in which dogs and wolves who were raised by humans learned to open a container to get food. Then they were presented with the same container, modified so that it could not be opened. Wolves persisted, trying to solve the unsolvable problem, while dogs looked back at nearby humans. At first glance it might seem to a dog lover that the dogs were brilliant, saying, in essence, “Can I get some help here? You closed it; you open it.” But Dr. Miklosi didn’t say that. He concluded that dogs have a genetic predisposition to look at humans, which could have been the basis for the intense but often imperfect communication that dogs and people engage in. © 2015 The New York Times Company
By CLYDE HABERMAN Perhaps no crime staggers the mind, or turns the stomach, more than the murder of a baby, and so it is not a surprise when law enforcement comes down hard on the presumed killers. Often enough, these are men and women accused of having succumbed to sudden rage or simmering frustration and literally shaken the life out of a helpless infant who would not stop crying or would not fall asleep. Shaken baby syndrome has been a recognized diagnosis for several decades, though many medical professionals now prefer the term abusive head trauma. It is defined by a constellation of symptoms known as the triad: brain swelling, bleeding on the surface of the brain and bleeding behind the eyes. For years, those three symptoms by themselves were uniformly accepted as evidence that a crime had been committed, even in the absence of bruises, broken bones or other signs of abuse. While many doctors, maybe most, still swear by the diagnosis, a growing number have lost faith. Not that they doubt that some babies have been abused. But these skeptics assert that factors other than shaking, and having nothing to do with criminal behavior, may sometimes explain the triad. Has the syndrome been diagnosed too liberally? Are some innocent parents and other caretakers being wrongly sent to prison? Those questions, at the complex intersection of medicine and the law, can stir strong emotions among doctors, parents and prosecutors. They shape this first installment in a new series of Retro Report, video documentaries that explore major news stories of the past and their enduring consequences. The video’s starting point is a Massachusetts criminal case that introduced the concept of shaken baby syndrome to many Americans: the 1997 murder trial of Louise Woodward, an 18-year-old British au pair accused of having shaken an 8-month-old boy, Matthew Eappen, so aggressively that he died. Matthew also had injuries that may have predated Ms. Woodward’s joining the Eappen family in Newton, outside Boston. The focus, however, was on the triad of symptoms. To prosecution witnesses, they proved that the baby had been shaken violently, his head hitting some hard surface. © 2015 The New York Times Company
By Emily Chung, Whadd'ya at? Ow ya goin'? If you were at a picnic with a bunch of Newfoundlanders or Australians, those are the greetings you might fling around. Similarly, scientists who eavesdrop on sperm whales – Moby Dick's species — have found they also have distinct "dialects." And a new study suggests like human dialects, they arise through cultural learning. "Cultural transmission seems key to the partitioning of sperm whales into… clans," the researchers wrote in a paper published today in the journal Nature Communications. Sperm whales live around the world, mainly in deeper waters far offshore. The solitary males live in colder areas, and roam in Canadian waters in areas where the ocean depth is more than 1000 metres, says Mauricio Cantor, the Dalhousie University Ph.D. student who led the new study with Hal Whitehead, a Dalhousie biology professor. The females live in warmer, more southern waters, in loose family groups of around seven to 12 whales – sisters, aunts, grandmothers, cousins, and the occasional unrelated friend and their calves. Sometimes, they meet up with other families for gatherings of up to 200 whales, similar to human picnics or festivals. These can last from a few hours to a few days. The whales that gather in these groups, called clans, have distinct "dialects" of patterns of clicks called codas that are distinct from the clicks they use in echolocation when they're hunting for food. They use codas to talk to each other when they surface between dives. ©2015 CBC/Radio-Canada.
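As a rough illustration of how coda "dialects" might be compared quantitatively (this is not the study's actual method), a coda can be reduced to its normalized inter-click intervals and matched against a template rhythm for each clan. The clan names and template values below are invented for the example.

```python
# Hypothetical sketch: assign a coda to the clan whose characteristic
# click rhythm it most closely matches. A coda is represented by the
# times of its clicks; its "rhythm" is the sequence of gaps between
# clicks, normalized to sum to 1.

def rhythm(click_times):
    """Normalized inter-click intervals for one coda."""
    gaps = [b - a for a, b in zip(click_times, click_times[1:])]
    total = sum(gaps)
    return [g / total for g in gaps]

def closest_clan(click_times, clan_templates):
    """Return the clan whose template rhythm best matches this coda."""
    obs = rhythm(click_times)

    def dist(template):
        # Squared difference between observed and template rhythms.
        return sum((a - b) ** 2 for a, b in zip(obs, template))

    return min(clan_templates, key=lambda clan: dist(clan_templates[clan]))
```

For instance, a coda of five evenly spaced clicks would match an invented "regular" clan template of four equal intervals rather than a template with a lengthened first interval.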
By Michael Balter If you find yourself along the Atlantic coastal border between Spain and France, here are some phrases that might come in handy: Urte askotarako! (“Pleased to meet you!”), Eskerrik asko! (“Thank you!”), and Non daude komunak? (“Where is the toilet?”). Welcome to Basque Country, where many people speak a musical language that has no known relationship to any other tongue. Many researchers have assumed that Basque must represent a “relic language” spoken by the hunter-gatherers who occupied Western Europe before farmers moved in about 7500 years ago. But a new study contradicts that idea and concludes that the Basques are descended from a group of those early farmers that kept to itself as later waves of migration swept through Europe. The great majority of Europeans speak languages belonging to the Indo-European family, which includes such diverse tongues as German, Greek, Spanish, and French; a smaller number speak Uralic languages like Finnish, Hungarian, and Estonian. But Basque stands truly alone: what linguists call a “language isolate.” This uniqueness is a source of pride among the nearly 700,000 Basque speakers, some of whom have called for the creation of an independent nation separate from Spain and France. For scientists, however, Basque is a major unsolved mystery. In the 19th century, some anthropologists claimed that Basques had differently shaped skulls than other Europeans. Yet although that racial idea had been discredited by the 20th century, researchers have been able to show that the Basques have a striking number of genetic differences that set them apart from other Europeans. Variations in their immune cells and proteins include a higher-than-normal frequency of Rh-negative blood types, for example. Those findings led to the hypothesis that the Basques descended from early hunter-gatherers who had somehow avoided being genetically overwhelmed when farming spread into Europe from the Near East.
But some recent studies have questioned just how genetically distinct the Basques really are. © 2015 American Association for the Advancement of Science.
By Amy Ellis Nutt There may finally be an explanation for why men are often less verbally adept than women at expressing themselves. It's the testosterone. Scientists have long known that language development is different between boys and girls. But in scanning the brains of 18 individuals before and after undergoing hormone treatment for female-to-male sex reassignment, Austrian and Dutch researchers found evidence of specific brain structure differences. In particular, they found two areas in the left hemispheres of the transgender men that lost gray matter volume during high-testosterone treatment over a period of four weeks: Broca's area, which is involved in the production of language, and Wernicke's area, which processes language. All of which suggests, according to the study (presented this week at the European College of Neuropsychopharmacology Congress), why verbal abilities are often stronger in women than in men. "In more general terms, these findings may suggest that a genuine difference between the brains of women and men is substantially attributable to the effects of circulating hormones," said one of the researchers at the conference, Rupert Lanzenberger from Vienna. "Moreover, the hormonal influence on human brain structure goes beyond the early developmental phase and is still present in adulthood." Previous research has shown that higher testosterone is linked to smaller vocabulary in children and also that verbal fluency skills seemed to decrease after female-to-male sex reassignment testosterone treatment.