Chapter 15. Language and Our Divided Brain
By Lauran Neergaard, New research suggests it may be possible to predict which preschoolers will struggle to read — and it has to do with how the brain deciphers speech when it's noisy. Scientists are looking for ways to tell, as young as possible, when children are at risk for later learning difficulties so they can get early interventions. There are some simple pre-reading assessments for preschoolers. But Northwestern University researchers went further and analyzed brain waves of children as young as three. How well youngsters' brains recognize specific sounds — consonants — amid background noise can help identify who is more likely to have trouble with reading development, the team reported Tuesday in the journal PLOS Biology. If the approach pans out, it may provide "a biological looking glass," said study senior author Nina Kraus, director of Northwestern's Auditory Neuroscience Laboratory. "If you know you have a three-year-old at risk, you can as soon as possible begin to enrich their life in sound so that you don't lose those crucial early developmental years." Connecting sound to meaning is a key foundation for reading. For example, preschoolers who can match sounds to letters earlier go on to read more easily. Auditory processing is part of that pre-reading development: If your brain is slower to distinguish a "D" from a "B" sound, for example, then recognizing words and piecing together sentences could be affected, too. What does noise have to do with it? It stresses the system, as the brain has to tune out competing sounds to selectively focus, in just fractions of milliseconds. And consonants are more vulnerable to noise than vowels, which tend to be louder and longer, Kraus explained. ©2015 CBC/Radio-Canada
By Gretchen Reynolds Would soccer be safer if young players were not allowed to head the ball? According to a new study of heading and concussions in youth soccer, the answer to that question is not the simple yes that many of us might have hoped. Soccer parents — and nowadays we are legion — naturally worry about head injuries during soccer, whether our child’s head is hitting the ball or another player. The resounding head-to-head collision between Alexandra Popp of Germany and Morgan Brian of the United States during the recent Women’s World Cup sent shivers down many of our spines. People’s concerns about soccer heading and concussions have grown so insistent in the past year or so that some doctors, parents and former professional players have begun to call for banning the practice outright among younger boys and girls, up to about age 14, and curtailing it at other levels of play. Ridding youth soccer of heading, many of these advocates say, would virtually rid the sport of severe head injuries. But Dawn Comstock, for one, was skeptical when she heard about the campaign. An associate professor of public health at the University of Colorado in Denver and an expert on youth sports injuries, she is also, she said, “a believer in evidence-based decision making.” And she said she wasn’t aware of any studies showing that heading causes the majority of concussions in the youth game. In fact, she and her colleagues could not find any large-scale studies examining the causes of concussions in youth soccer at all. So, for a study being published this week in JAMA Pediatrics, she and her colleagues decided to investigate the issue themselves. © 2015 The New York Times Company
Keyword: Brain Injury/Concussion
Link ID: 21172 - Posted: 07.15.2015
Gretchen Cuda Kroen When Kate Klein began working as a nurse in the Cleveland Clinic's Neurointensive Care Unit, one of the first things she noticed was that her patients spent a lot of time in bed. She knew patients with other injuries benefitted from getting up and moving early on, and she wondered why not patients with brain injuries. "I asked myself that question. I asked my colleagues that question," Klein says. "Why aren't these patients getting out of bed? Is there something unique about patients with neurologic injury?" Doctors have long encouraged their surgical patients to get out of bed as soon as it's safe to do so. Movement increases circulation, reduces swelling, inflammation and the risk of blood clots, and it speeds healing. But that wasn't the thinking with brain injuries, explains Edward Manno, director of the Neurointensive Care Unit at the Cleveland Clinic and one of the neurologists who works with Klein. "The predominant thinking was that rest was better suited for the brain," Manno says. Often the damaged brain is susceptible to lack of blood flow. Increased activity may make things worse if initiated too quickly, Manno says. "So many of us thought for quite some time that we needed to put the brain to rest after the initial insult of stroke or other neurologic injury." © 2015 NPR
Keyword: Brain Injury/Concussion
Link ID: 21136 - Posted: 07.06.2015
By BARRY MEIER and DANIELLE IVORY In a small brick building across the street from a Taco Bell in Marrero, La., patients enter a clear plastic capsule and breathe pure oxygen. The procedure, known as hyperbaric oxygen therapy, uses a pressurized chamber to help scuba divers overcome the bends and to aid people sickened by toxic gases. But Dr. Paul G. Harch, who operates the clinic there on the outskirts of New Orleans, offers it as a concussion treatment. One patient, Rashada Parks, said that she had struggled with neck pain, mood swings and concentration problems ever since she fell and hit her head more than three years ago. Narcotic painkillers hadn’t helped her, nor had antidepressants. But after 40 hourlong treatments, or dives, in a hyperbaric chamber, her symptoms have subsided. “I have hope now,” Ms. Parks said. “It’s amazing.” Three studies run at a taxpayer cost of about $70 million have all come to a far different conclusion. They found that the benefits of hyperbaric oxygen reported by patients like Ms. Parks may have resulted from a placebolike effect, not the therapy’s supposed ability to repair and regenerate brain cells. But undeterred, advocates of the treatment recently persuaded lawmakers to spend even more public money investigating whether the three studies were flawed. A growing industry has developed around concussions, with entrepreneurs, academic institutions and doctors scrambling to find ways to detect, prevent and treat head injuries. An estimated 1.7 million Americans are treated every year after suffering concussions from falls, car accidents, sports injuries and other causes. While the vast majority quickly recover with rest, a small percentage of patients experience lingering effects a year or longer afterward. Along with memory issues, symptoms can include headaches, dizziness and vision and balance problems. © 2015 The New York Times Company
Keyword: Brain Injury/Concussion
Link ID: 21132 - Posted: 07.04.2015
Henry Nicholls Andy Russell had entered the lecture hall late and stood at the back, listening to the close of a talk by Marta Manser, an evolutionary biologist at the University of Zurich who works on animal communication. Manser was explaining some basic concepts in linguistics to her audience, how humans use meaningless sounds or “phonemes” to generate a vast dictionary of meaningful words. In English, for instance, just 40 different phonemes can be resampled into a rich vocabulary of some 200,000 words. But, explained Manser, this linguistic trick of reorganising the meaningless to create new meaning had not been demonstrated in any non-human animal. This was back in 2012. Russell’s “Holy shit, man” excitement was because he was pretty sure he had evidence for phoneme structuring in the chestnut-crowned babbler, a bird he’s been studying in the semi-arid deserts of south-east Australia for almost a decade. After the talk, Russell (a behavioural ecologist at the University of Exeter) travelled to Zurich to present his evidence to Manser’s colleague Simon Townsend, whose research explores the links between animal communication systems and human language. The fruits of their collaboration are published today in PLoS Biology. One of Russell’s students, Jodie Crane, had been recording the calls of the chestnut-crowned babbler for her PhD. The PLoS Biology paper focuses on two of these calls, which appear to be made up of two identical elements, just arranged in a different way. © 2015 Guardian News and Media Limited
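Manser's linguistic point, that a small inventory of meaningless units can be recombined into a vast vocabulary, is easy to verify with a little arithmetic. A minimal sketch (the function and sequence lengths are illustrative, not from the study):

```python
# Illustrative sketch: a small set of meaningless units can be recombined
# into a huge number of distinct ordered sequences. English's ~40 phonemes
# and ~200,000-word vocabulary are the figures cited in the article; the
# sequence lengths below are chosen only for illustration.

def sequence_count(inventory_size: int, max_length: int) -> int:
    """Number of distinct ordered sequences of length 1..max_length."""
    return sum(inventory_size ** n for n in range(1, max_length + 1))

# With 40 phonemes, even very short sequences rival the English lexicon:
print(sequence_count(40, 3))  # 40 + 1,600 + 64,000 = 65640
print(sequence_count(40, 4))  # 2625640 -- far more than 200,000 words
```

Even capping "words" at four phonemes, the combinatorial space dwarfs the actual vocabulary, which is why recombination is such a powerful trick.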
Emma Bowman In a small, sparse makeshift lab, Melissa Malzkuhn practices her range of motion in a black, full-body unitard dotted with light-reflecting nodes. She's strapped on a motion capture, or mocap, suit. Infrared cameras that line the room will capture her movement and translate it into a 3-D character, or avatar, on a computer. But she's not making a Disney animated film. Three-dimensional motion capture has developed quickly in the last few years, most notably as a Hollywood production tool for computer animation in films like Planet of the Apes and Avatar. Behind the scenes though, leaders in the deaf community are taking on the technology to create and improve bilingual learning tools in American Sign Language. Malzkuhn has suited up to record a simple nursery rhyme. Being deaf herself, she spoke with NPR through an interpreter. "I know in English there's just a wealth of nursery rhymes available, but we really don't see as much in ASL," she says. "So we're gonna be doing some original work here in developing nursery rhymes." That's because sound-based rhymes don't cross over well into the visual language of ASL. Malzkuhn heads the Motion Light Lab, or ML2. It's the newest hub of the National Science Foundation Science of Learning Center, Visual Language and Visual Learning (VL2) at Gallaudet University, the premier school for deaf and hard of hearing students. © 2015 NPR
Link ID: 21107 - Posted: 06.29.2015
By Sarah C. P. Williams Parrots, like the one in the video above, are masters of mimicry, able to repeat hundreds of unique sounds, including human phrases, with uncanny accuracy. Now, scientists say they have pinpointed the neurons that turn these birds into copycats. The discovery could not only illuminate the origins of bird-speak, but might shed light on how new areas of the brain arise during evolution. Parrots, songbirds, and hummingbirds—which can all chirp different dialects, pick up new songs, and mimic sound—all have a “song nucleus” in their brain: a group of interconnected neurons that synchronizes singing and learning. But the exact boundaries of that region are fuzzy; some researchers define it as larger or smaller than others do, depending on what criteria they use to outline the area. And differences between the song nuclei of parrots—which can better imitate complex sounds—and other birds are hard to pinpoint. Neurobiologist Erich Jarvis of Duke University in Durham, North Carolina, was studying the activation of PVALB—a gene that had been previously found in songbirds—within the brains of parrots when he noticed something strange. Stained sections of deceased parrot brains revealed that the gene was turned on at distinct levels within two distinct areas of what he thought was the song nucleus of the birds’ brains. Sometimes, the gene was activated in a spherical central core of the nucleus. But other times, it was only active in an outer shell of cells surrounding that core. When he and collaborators looked more closely, they found that the inner core and the outer shell—like the chocolate and surrounding candy shell of an M&M—varied in many more ways as well.
by Kate Solomon Jean-Dominique Bauby famously wrote The Diving Bell and The Butterfly by blinking as an assistant read out the alphabet, but locked-in patients could soon have a much easier way to communicate. For the first time, scientists have successfully transcribed brainwaves as text, which could mean that those unable to speak could use the system to "talk" via a computer. The work, carried out by a group of informatics, neuroscience and medical researchers at Albany Medical Centre, identified the brainwaves relating to speech by using electrocorticographic (ECoG) technology to monitor the frontal and temporal lobes of seven epileptic volunteers. ECoG records signals via electrode grids placed directly on the surface of the brain; it's an invasive procedure requiring an incision through the skull. The participants then read aloud from a sample text while machine learning algorithms pulled out the most likely word sequence from the signals recorded by the ECoG. Existing speech-to-text tools then transcribed the continuous speech directly from the brain activity. Error rates were as low as 25 percent during the study, which suggests the system has real potential. The findings could offer locked-in and mute patients a valuable communication method, but it also means humans could one day communicate directly with a computer without needing any intermediary equipment.
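The "25 percent error rate" reported for systems like this is typically a word error rate (WER): the minimum number of word substitutions, insertions and deletions needed to turn the decoded text into what was actually said, divided by the number of words actually said. A minimal sketch of the standard calculation (not the researchers' own code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    computed with the standard Levenshtein dynamic program over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word in four gives a WER of 25 percent:
print(word_error_rate("the quick brown fox", "the quick brown dog"))  # 0.25
```

So a 25 percent WER means roughly one word in four was decoded incorrectly, which is striking for text recovered from brain activity alone.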
by Sarah Zielinski Last year in Australia, I visited Featherdale Wildlife Park where, in a couple of areas, kangaroos and wallabies hop amongst the tourists. For a dollar, you can buy an ice cream cone full of grass for the marsupials to eat. But if you’re not careful, an animal will quickly grab the cone out of your hand and feed itself. Now I’m wishing that I had paid more attention to that grabbing motion. Kangaroos are lefties, scientists report June 18 in Current Biology. And the preference for one hand over the other may be linked to the ability to walk on two legs. Humans show a definite preference for one hand over the other, usually the right. This handedness had been considered a distinctly human trait. But scientists have found more and more evidence that other species have such preferences as well. Female domestic cats, for instance, tend to use their right paws and males their left. Andrey Giljov of Saint Petersburg State University in Russia and colleagues were curious about the evolution of handedness and looked to marsupials, since these animals are an early offshoot of the mammal lineage. They observed four species in the wild — red kangaroos, eastern gray kangaroos, red-necked wallabies and Goodfellow’s tree kangaroos — performing tasks such as grooming and feeding. © Society for Science & the Public 2000 - 2015.
Link ID: 21075 - Posted: 06.20.2015
Tom Bawden The mystery behind the nightingale’s beautiful song has been revealed, with scientists finding that male birds sing complex notes to prove to females that they would be a good father to their children. Nightingales use their songs to advertise their family values, according to new research which finds that the better the singer, the more support they are likely to offer their young family by feeding and defending them from predators. But while the beauty comes from the complexity of the song, the effect it has on the females is based on something far more mundane – the amount of effort the singer has put into his performance. Researchers at the Freie Universitat Berlin found that complicated song arrangements are much harder to sing, especially those featuring frequent long buzzing sounds, and require the bird to be in good physical condition. “We don’t think the female is concerned with the beauty of the song but rather the information encoded in the song that tells her about the singer’s characteristics – his age, where he was raised, the strength of his immune system and how motivated he is to contribute to bringing up the young,” Professor Silke Kipper, one of the report’s authors, told The Independent. “The songs can also be a good indication of the bird’s ability to learn, which is another important characteristic of a good parent.” © independent.co.uk
by Meghan Rosen When we brought Baby S home from the hospital six months ago, his big sister, B, was instantly smitten. She leaned her curly head over his car seat, tickled his toes and cooed like a pro — in a voice squeakier than Mickey Mouse’s. B’s voice — already a happy toddler squeal — sounded as if she'd sucked in some helium. My husband and I wondered about her higher pitch. Are humans hardwired to chitchat squeakily to babies, or did B pick up vocal cues from us? (I don’t sound like that, do I?) If I’m like other mothers, I probably do. American English-speaking moms dial up their pitch drastically when talking to their children. But dads’ voices tend to stay steady, researchers reported May 19 in Pittsburgh at the 169th Meeting of the Acoustical Society of America. “Dads talk to kids like they talk to adults,” says study coauthor Mark VanDam, a speech scientist at Washington State University. But that doesn’t mean fathers are doing anything wrong, he says. Rather, they may be doing something right: offering their kids a kind of conversational bridge to the outside world. Scientists have studied infant- or child-directed speech (often called “motherese” or “parentese”) for decades. In American English, this type of babytalk typically uses high pitch, short utterances, repetition, loud volume and slowed-down speech. Mothers who speak German, Japanese, French, and other languages also tweak their pitch and pace when talking to children. But no one had really studied dads, VanDam says. © Society for Science & the Public 2000 - 2015.
The virtual reality arm appears to move faster and more accurately than the real arm Virtual reality could help stroke patients recover by "tricking" them into thinking their affected limb is more accurate than it really is. Researchers in Spain found that making the affected limb appear more effective on screen increased the chance the patient would use it in real life. This is important because stroke victims often underuse their affected limbs, making them even weaker. A stroke charity welcomed the study and called for more research. In the study of 20 stroke patients, researchers sometimes enhanced the virtual representation of the patient's affected limb, making it seem faster and more accurate, but without the patient's knowledge. After the episodes in which the limbs were made to seem more effective, the patients then went on to use them more, according to lead researcher Belen Rubio. "Surprisingly, only 10 minutes of enhancement was enough to induce significant changes in the amount of spontaneous use of the affected limb," said Mrs Rubio from the Laboratory of Synthetic, Perceptive, Emotive and Cognitive Systems at Pompeu Fabra University in Spain. "This therapy could create a virtuous circle of recovery, in which positive feedback, spontaneous arm use and motor performance can reinforce each other. Engaging patients in this ongoing cycle of spontaneous arm use, training and learning could produce a remarkable impact on their recovery process," she said. © 2015 BBC
Link ID: 21030 - Posted: 06.09.2015
By Lisa Sanders On Thursday, we challenged Well readers to figure out why a previously healthy 31-year-old woman suddenly began having strokes. I thought this was a particularly tough case – all the more so since I had never heard of the disease she was ultimately diagnosed with. Apparently I was not alone. Only a few dozen of the 400-plus readers who wrote in were able to make this difficult diagnosis. The correct diagnosis is: Susac’s syndrome The first person to identify this rare neurological disorder was Errol Levine, a retired radiologist from South Africa, now living in Santa Fe, N.M. The location of the stroke shown — in a part of the brain known as the corpus callosum — was a subtle clue, and Dr. Levine recalled reading of an autoimmune disease characterized by strokes in this unusual area of the brain. This is Dr. Levine’s second win. Well done, sir! Susac’s syndrome is a rare disorder first described in 1979 by Dr. John Susac, a neurologist in Winter Haven, Fla. Dr. Susac described two women, one 26 years old, the other 40, whom he encountered within weeks of one another. Both had the same unusual triad of psychiatric symptoms suggestive of some type of brain inflammation, hearing loss, and patchy vision loss caused by blockages of the tiniest vessels of the retina known as branch retinal arteries. A few years later, Dr. Susac encountered two more cases and presented one of these at a meeting as a mystery diagnosis. The doctor who figured it out called the disorder Susac’s syndrome, and the name stuck. Seen primarily in young women, Susac’s is thought to be an autoimmune disorder in which antibodies, the foot soldiers of the immune system, mistakenly attack tissues in some of the smallest arteries in the brain. The inflammation of these small vessels blocks the flow of blood, causing tiny strokes. © 2015 The New York Times Company
by Clare Wilson A new study has discredited the theory that dyslexia is caused by visual problems. So what does cause the condition and how can it be treated? What kind of visual problems are claimed to cause dyslexia? A huge variety. They include difficulties in merging information from both eyes, problems with glare from white pages or the text blurring or "dancing" on the page. A host of products claim to relieve this so-called visual stress, especially products that change the background colour of the page, such as tinted glasses and coloured overlays. Others advise eye exercises that supposedly help people with dyslexia track words on the page. Despite lack of evidence that these approaches work, some people with dyslexia say they help – more than half of university students with dyslexia have used such products. What are the new findings? That there's no evidence visual stress is linked with dyslexia. Nearly 6000 UK children aged between 7 and 9 had their reading abilities tested and performed a battery of visual tests. About 3 per cent of them had serious dyslexia, in line with the national average. But in the visual tests, the differences between the students with and without dyslexia were minimal. In two of the 11 tests, about 16 per cent of the children with dyslexia scored poorly, compared with 10 per cent for children with normal reading abilities. But that small difference could be caused by the fact that they read less, says author Alexandra Creavin of the University of Bristol, UK. And more importantly, the 16 per cent figure is so low, it can't be the main explanation for dyslexia. © Copyright Reed Business Information Ltd.
By Jason G. Goldman In 1970 child welfare authorities in Los Angeles discovered that a 14-year-old girl referred to as “Genie” had been living in nearly total social isolation from birth. An unfortunate participant in an unintended experiment, Genie proved interesting to psychologists and linguists, who wondered whether she could still acquire language despite her lack of exposure to it. Genie did help researchers better define the critical period for learning speech—she quickly acquired a vocabulary but did not gain proficiency with grammar—but thankfully, that kind of case study comes along rarely. So scientists have turned to surrogates for isolation experiments. The approach is used extensively with parrots, songbirds and hummingbirds, which, like us, learn how to verbally communicate over time; those abilities are not innate. Studying most vocal-learning mammals—for example, elephants, whales, sea lions—is not practical, so Tel Aviv University zoologists Yosef Prat, Mor Taub and Yossi Yovel turned to the Egyptian fruit bat, a vocal-learning species that babbles before mastering communication, as a child does. The results of their study, the first to raise bats in a vocal vacuum, were published this spring in the journal Science Advances. Five bat pups were reared by their respective mothers in isolation, so the pups heard no adult conversations. After weaning, the juveniles were grouped together and exposed to adult bat chatter through a speaker. A second group of five bats was raised in a colony, hearing their species' vocal interactions from birth. Whereas the group-raised bats eventually swapped early babbling for adult communication, the isolated bats stuck with their immature vocalizations well into adolescence. © 2015 Scientific American
By Meeri Kim The dangers of concussions, caused by traumatic stretching and damage to nerve cells in the brain that lead to dizziness, nausea and headache, have been well documented. But ear damage that is sometimes caused by a head injury has symptoms so similar to the signs of a concussion that doctors may misdiagnose it and administer the wrong treatment. A perilymph fistula is a tear or defect in the small, thin membranes that normally separate the air-filled middle ear from the inner ear, which is filled with a fluid called perilymph. When a fistula forms, tiny amounts of this fluid leak out of the inner ear, an organ crucial not only for hearing but also for balance. Losing even a few small drops of perilymph leaves people disoriented, nauseous and often with a splitting headache, vertigo and memory loss. While most people with a concussion recover within a few days, a perilymph fistula can leave a person disabled for months. There is some controversy around perilymph fistula due to its difficulty of diagnosis — the leak is not directly observable, but rather identified by its symptoms. However, it is generally accepted as a real condition by otolaryngologists and sports physicians, and typically known to follow a traumatic event. But concussions — as well as post-concussion syndrome, which is marked by dizziness, headache and other symptoms that can last even a year after the initial blow — also occur as the result of such an injury.
Athletes who lose consciousness after concussions may be at greater risk for memory loss later in life, a small study of retired National Football League players suggests. Researchers compared memory tests and brain scans for former NFL players and a control group of people who didn't play college or pro football. After concussions that resulted in lost consciousness, the football players were more likely to have mild cognitive impairment and brain atrophy years later. "Our results do suggest that players with a history of concussion with a loss of consciousness may be at greater risk for cognitive problems later in life," senior study author Munro Cullum, chief of neuropsychology at the University of Texas Southwestern Medical Center in Dallas, said by email. "We are at the early stages of understanding who is actually at risk at the individual level." Cullum and colleagues recruited 28 retired NFL players living in Texas: eight who were diagnosed with mild cognitive impairment and 20 who didn't appear to have any memory problems. They ranged in age from 36 to 79, and were an average of about 58 years old. All but three former athletes experienced at least one concussion, and they typically had more than three. Researchers compared these men to 27 people who didn't play football but were similar in age, education, and mental capacity to the retired athletes, including six with cognitive impairment. These men were 41 to 77 years old, and about 59 on average. ©2015 CBC/Radio-Canada
by Bas den Hond Watch your language. Words mean different things to different people – so the brainwaves they provoke could be a way to identify you. Blair Armstrong of the Basque Center on Cognition, Brain, and Language in Spain and his team recorded the brain signals of 45 volunteers as they read a list of 75 acronyms – such as FBI or DVD – then used computer programs to spot differences between individuals. The participants' responses varied enough that the programs could identify the volunteers with about 94 per cent accuracy when the experiment was repeated. The results hint that such brainwaves could be a way for security systems to verify individuals' identity. While the 94 per cent accuracy seen in this experiment would not be secure enough to guard, for example, a room or computer full of secrets, Armstrong says it's a promising start. Techniques for identifying people based on the electrical signals in their brain have been developed before. A desirable advantage of such techniques is that they could be used to verify someone's identity continuously, whereas passwords or fingerprints only provide a tool for one-off identification. Continuous verification – by face or ear recognition, or perhaps by monitoring brain activity – could in theory allow someone to interact with many computer systems simultaneously, or even with a variety of intelligent objects, without having to repeatedly enter passwords for each device. © Copyright Reed Business Information Ltd
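The matching step such a system needs, comparing a fresh recording against each enrolled person's typical response, can be sketched as a nearest-centroid classifier. Everything below (the feature vectors, the names, the three-number features) is invented for illustration; the study's actual programs are not described in this excerpt:

```python
import math

# Toy sketch: identify a person by matching a new "brain response" vector
# against each enrolled person's average response. Real systems would use
# measured features (e.g. response amplitudes per acronym); these numbers
# are made up.

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    return [sum(xs) / len(xs) for xs in zip(*vectors)]

def identify(sample, enrolled):
    """Return the enrolled identity whose centroid is closest to sample."""
    return min(enrolled, key=lambda name: math.dist(sample, enrolled[name]))

enrolled = {
    "volunteer_a": centroid([[0.9, 0.1, 0.4], [1.0, 0.2, 0.5]]),
    "volunteer_b": centroid([[0.2, 0.8, 0.9], [0.3, 0.7, 1.0]]),
}
print(identify([0.95, 0.15, 0.45], enrolled))  # volunteer_a
```

The continuous-verification idea in the article follows naturally: the same matching can be rerun on each new response as it arrives, rather than once at login.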
By Virginia Morell Like humans, dolphins, and a few other animals, North Atlantic right whales (Eubalaena glacialis) have distinctive voices. The usually docile cetaceans utter about half a dozen different calls, but the way in which each one does so is unique. To find out just how unique, researchers from Syracuse University in New York analyzed the “upcalls” of 13 whales whose vocalizations had been collected from suction cup sensors attached to their backs. An upcall is a contact vocalization that lasts about 1 to 2 seconds and rises in frequency, sounding somewhat like a deep-throated cow’s moo. Researchers think the whales use the calls to announce themselves and to “touch base” with others of their kind, they explained in a poster presented today at the Meeting of the Acoustical Society of America in Pittsburgh, Pennsylvania. After analyzing the duration and harmonic frequency of these upcalls, as well as the rate at which the frequencies changed, the scientists found that they could distinguish the voices of each of the 13 whales. They think their discovery will provide a new tool for tracking and monitoring the critically endangered whales, which number about 450 and range primarily from Florida to Newfoundland. © 2015 American Association for the Advancement of Science.
Haroon Siddique Long-term depression in people over 50 could more than double their risk of suffering a stroke, with the risk remaining significantly higher even after the depression abates, research suggests. The US study of more than 16,000 people, which documented 1,192 strokes, found that onset of recent depression was not associated with higher stroke risk, suggesting the damage is done by depressive symptoms accumulating over time. The study’s lead author, Paola Gilsanz, from Harvard University’s TH Chan School of Public Health, said: “Our findings suggest that depression may increase stroke risk over the long term. Looking at how changes in depressive symptoms over time may be associated with strokes allowed us to see if the risk of stroke increases after elevated depressive symptoms start or if risk goes away when depressive symptoms do. We were surprised that changes in depressive symptoms seem to take more than two years to protect against or elevate stroke risk.” The research, published on Wednesday in the Journal of the American Heart Association, used data from between 1998 and 2010 from the Health and Retirement Study, which interviews a panel of representative Americans aged over 50 every two years, on their depressive symptoms, history of stroke, and stroke risk factors. Gilsanz, with colleagues from universities in Washington, California and Minnesota, and Bronx Partners for Healthy Communities, found that people with high depressive symptoms at two consecutive interviews had a 114% higher risk of suffering a first stroke, compared with people without depression at either interview. Those who had depressive symptoms at one interview but not at the next had a 66% higher risk. © 2015 Guardian News and Media Limited