Chapter 14. Attention and Consciousness
by Clare Wilson

Once only possible in an MRI scanner, vibrating pads and electrode caps could soon help locked-in people communicate on a day-to-day basis.

YOU wake up in hospital unable to move, to speak, to twitch so much as an eyelid. You hear doctors telling your relatives you are in a vegetative state – unaware of everything around you – and you have no way of letting anyone know this is not the case. Years go by, until one day, you're connected to a machine that allows you to communicate through your brain waves. It only allows yes or no answers, but it makes all the difference – now you can tell your carers if you are thirsty, if you'd like to sit up, even which TV programmes you want to watch.

In recent years, breakthroughs in mind-reading technology have brought this story close to reality for a handful of people who may have a severe type of locked-in syndrome, previously diagnosed as being in a vegetative state. So far, most work has required a lab and a giant fMRI scanner. Now two teams are developing devices that are portable enough to be taken out to homes, to help people communicate on a day-to-day basis. The technology might also be able to identify people who have been misdiagnosed.

People with "classic" locked-in syndrome are fully conscious but completely paralysed apart from eye movements. Adrian Owen of Western University in London, Canada, fears that there is another form of the condition where the paralysis is total. He thinks that a proportion of people diagnosed as being in a vegetative state – in which people are thought to have no mental awareness at all – are actually aware but unable to let anyone know. "The possibility is that we are missing people with some sort of complete locked-in syndrome," he says. © Copyright Reed Business Information Ltd.
Alison Abbott If you have to make a complex decision, will you do a better job if you absorb yourself in, say, a crossword puzzle instead of ruminating about your options? The idea that unconscious thought is sometimes more powerful than conscious thought is attractive, and echoes ideas popularized by books such as writer Malcolm Gladwell’s best-selling Blink. But within the scientific community, the ‘unconscious-thought advantage’ (UTA) has been controversial. Now Dutch psychologists have carried out the most rigorous study yet of UTA — and find no evidence for it. Their conclusion, published this week in Judgment and Decision Making, is based on a large experiment that they designed to provide the best chance of capturing the effect should it exist, along with a sophisticated statistical analysis of previously published data [1]. The report adds to broader concerns about the quality of psychology studies and to an ongoing controversy about the extent to which unconscious thought in general can influence behaviour. “The bigger debate is about how clever our unconscious is,” says cognitive psychologist David Shanks of University College London. “This carefully constructed paper makes a great contribution.” Shanks published a review last year that questioned research claiming that various unconscious influences, including UTA, affect decision making [2]. © 2015 Nature Publishing Group
By Christof Koch Faces are the glue that holds us together and that gives us our identity. All of us but the visually impaired and blind are experts at recognizing people's identity, gender, age and ethnicity from looking at their faces. First impressions of attractiveness or competence take but a brief glimpse of somebody's face. Newly born infants already tend to fixate on faces. This bias also turns up in art. Paintings and movies are filled with faces staring at the viewer. Who can forget the endless close-ups of the feuding husband and wife in Ingmar Bergman's Cimmerian masterpiece Scenes from a Marriage? Because recognizing a face is so vital to our social lives, it comes as no surprise that a lot of real estate in the cerebral cortex—the highly convoluted region that makes up the bulk of our brain—is devoted to processing faces and their identity. We note whether someone looks our way or not. We discern emotional expressions, whether they register joy, fear or anger. Indeed, functional brain imaging has identified a set of adjacent regions, referred to as the fusiform face area (FFA), that are situated on the left and the right sides of the brain, at the bottom of the temporal lobe of the cerebral cortex. The FFA turns up its activity when subjects look at portraits or close-ups of faces or even when they just think about these images. Two just-published studies of the brain's visual networks, including the FFA, enlarge what we know about the physical basis of face perception. Both explore the unique access to the brain afforded by patients whose epileptic seizures have proved resistant to drugs. Surgical treatment requires finding the location in the brain where the hypersynchronized activity that characterizes a seizure begins, before it spreads from its point of origin to engulf one or sometimes both hemispheres. If a single point—a focus where the seizure begins—can be found, it can be removed.
After this procedure, a patient usually has significantly fewer seizures—and some remain seizure-free. To triangulate the location of the focus, neurosurgeons insert electrodes into the brain to monitor electrical activity that occurs during a seizure. © 2015 Scientific American
Simon Parkin A few months before she died, my grandmother made a decision. Bobby, as her friends called her (theirs is a generation of nicknames), was a farmer’s wife who not only survived World War II but also found in it justification for her natural hoarding talent. ‘Waste not, want not’ was a principle she lived by long after England recovered from a war that left it buckled and wasted. So she kept old envelopes and bits of cardboard cereal boxes for note taking and lists. She kept frayed blankets and musty blouses from the 1950s in case she needed material to mend. By extension, she was also a meticulous chronicler. She kept albums of photographs of her family members. She kept the airmail love letters my late grandfather sent her while he travelled the world with the merchant navy in a box. Her home was filled with the debris of her memories. Yet in the months leading up to her death, the emphasis shifted from hoarding to sharing. Every time I visited my car would fill with stuff: unopened cartons of orange juice, balls of fraying wool, damp, antique books, empty glass jars. All things she needed to rehome now she faced her mortality. The memories too began to move out. She sent faded photographs to her children, grandchildren and friends, as well as letters containing vivid paragraphs detailing some experience or other. On 9 April, the afternoon before the night she died, she posted a letter to one of her late husband’s old childhood friends. In the envelope she enclosed some photographs of my grandfather and his friend playing as young children. “You must have them,” she wrote to him. It was a demand but also a plea, perhaps, that these things not be lost or forgotten when, a few hours later, she slipped away in her favourite armchair. © 2015 BBC
Oliver Burkeman One spring morning in Tucson, Arizona, in 1994, an unknown philosopher named David Chalmers got up to give a talk on consciousness, by which he meant the feeling of being inside your head, looking out – or, to use the kind of language that might give a neuroscientist an aneurysm, of having a soul. Though he didn’t realise it at the time, the young Australian academic was about to ignite a war between philosophers and scientists, by drawing attention to a central mystery of human life – perhaps the central mystery of human life – and revealing how embarrassingly far they were from solving it. The scholars gathered at the University of Arizona – for what would later go down as a landmark conference on the subject – knew they were doing something edgy: in many quarters, consciousness was still taboo, too weird and new agey to take seriously, and some of the scientists in the audience were risking their reputations by attending. Yet the first two talks that day, before Chalmers’s, hadn’t proved thrilling. “Quite honestly, they were totally unintelligible and boring – I had no idea what anyone was talking about,” recalled Stuart Hameroff, the Arizona professor responsible for the event. “As the organiser, I’m looking around, and people are falling asleep, or getting restless.” He grew worried. “But then the third talk, right before the coffee break – that was Dave.” With his long, straggly hair and fondness for all-body denim, the 27-year-old Chalmers looked like he’d got lost en route to a Metallica concert. “He comes on stage, hair down to his butt, he’s prancing around like Mick Jagger,” Hameroff said. “But then he speaks. And that’s when everyone wakes up.”
Closing your eyes when trying to recall events increases the chances of accuracy, researchers at the University of Surrey suggest. Scientists tested people's ability to remember details of films showing fake crime scenes. They hope the studies will help witnesses recall details more accurately when questioned by police. They say establishing a rapport with the person asking the questions can also help boost memory. Writing in the journal Legal and Criminological Psychology, scientists tested 178 participants in two separate experiments. In the first, they asked volunteers to watch a film showing an electrician entering a property, carrying out work and then stealing a number of items. Volunteers were then questioned in one of four groups. People were either asked questions with their eyes open or closed, and after a sense of rapport had been built with the interviewer or no attempt had been made to create a friendly introduction. People who had some rapport with their interviewer and had their eyes shut throughout questioning answered three-quarters of the 17 questions correctly. But those who did not have a friendly introduction with the interviewer and had their eyes open answered 41% correctly. The analysis showed that eye closing had the strongest impact on remembering details correctly, but that feeling comfortable during the interview also helped. In the second experiment, people were asked to remember details of what they had heard during a mock crime scene. © 2015 BBC
By Melissa Healy

A pill may help those whose out-of-control eating is a cause of extreme distress. An ADHD drug may offer hope for a different psychiatric disorder.

Binge eating disorder, a newly recognized condition in which bouts of voracious eating lead to guilt, shame and often obesity, may yield to lisdexamfetamine (marketed as Vyvanse), a medication that has been used for several years to treat attention deficit and hyperactivity disorder in children and adults. In an 11-week clinical trial that tested a range of Vyvanse dosages, researchers found that, compared to those taking a placebo pill, subjects diagnosed with binge eating disorder who took a daily 50 or 70 mg dose of the ADHD drug had fewer binge eating episodes, were more likely to cease binge eating for a four-week period, reported greater improvement in their functioning, and lost substantially more weight. The findings, published online early in the journal JAMA Psychiatry on Wednesday, offer early evidence that patients whose consumption patterns are punctuated by episodes of out-of-control eating may be helped by some medication. The disorder, which has in recent years won wider recognition by the psychiatric establishment, has traditionally been treated with psychotherapy. It has proved a difficult condition to treat. Among those getting lisdexamfetamine, side effects were similar to those experienced by adults who take the medication to treat symptoms of ADHD, including dry mouth, difficulty falling asleep, increased heart rate and headaches. Adverse events prompted six of 196 subjects in the active arm of treatment to withdraw from the study.
By Peter Holley "Lynchian," according to David Foster Wallace, "refers to a particular kind of irony where the very macabre and the very mundane combine in such a way as to reveal the former's perpetual containment within the latter." Perhaps no other word better describes the onetime fate of Martin Pistorius, a South African man who spent more than a decade trapped inside his own body involuntarily watching "Barney" reruns day after day. "I cannot even express to you how much I hated Barney," Martin told NPR during the first episode of a new program on human behavior, "Invisibilia." The rest of the world thought Pistorius was a vegetable, according to NPR. Doctors had told his family as much after he'd fallen into a mysterious coma as a healthy 12-year-old before emerging several years later completely paralyzed, unable to communicate with the outside world. The nightmarish condition, which can be caused by stroke or an overdose of medication, is known as "total locked-in syndrome," and it has no cure, according to the National Institute of Neurological Disorders and Stroke. In a first-person account for the Daily Mail, Pistorius described the period after he slipped into a coma: "I was completely unresponsive. I was in a virtual coma but the doctors couldn’t diagnose what had caused it." When he finally did awaken in the early 1990s, around the age of 14 or 15, Pistorius emerged in a dreary fog as his mind gradually rebooted itself.
Ewen Callaway The ability to recognize oneself in a mirror has been touted as a hallmark of higher cognition — present in humans and only the most intelligent of animals — and the basis for empathy. A study published this week in Current Biology controversially reports that macaques can be trained to pay attention to themselves in a mirror, the first such observation in any monkey species [1]. Yet the finding raises as many questions as it answers — not only about the cognitive capacity of monkeys, but also about mirror self-recognition as a measure of animal intelligence. “Simply because you’re acting as if you recognize yourself in a mirror doesn’t necessarily mean you’ve achieved self-recognition,” says Gordon Gallup, an evolutionary psychologist at the State University of New York in Albany, who in 1970 was the first to demonstrate mirror self-recognition in captive chimpanzees [2]. When most animals encounter their reflections in a mirror, they act as if they have seen another creature. They lash out aggressively, belt out loud calls and display other social behaviours. This is how chimps first acted when Gallup placed a full-length mirror next to their cages. But after a couple of days, their attitudes changed and they started examining themselves, says Gallup. “They’d look at the inside of their mouths; they’d watch their tongue move.” This convinced him that the chimps recognized themselves in the mirror. He knew other scientists would be sceptical, so he developed a test of mirror self-recognition. After chimps started acting as if they saw themselves in the mirror, after about 10 days, he anaesthetized them and applied an odour-free red mark to a location on their faces they could not see, such as above the brow ridge. © 2015 Nature Publishing Group
By Stephen L. Macknik and Susana Martinez-Conde To a neuroscientist, the trouble with cocktail parties is not that we do not love cocktails or parties (many neuroscientists do). Instead what we call “the cocktail party problem” is the mystery of how anyone can have a conversation at a cocktail party at all. Consider a typical scene: You have a dozen or more lubricated and temporarily uninhibited adults telling loud, improbable stories at increasing volumes. Interlocutors guffaw and slap backs. Given the decibel level, it is a minor neural miracle that any one of these revelers can hear and parse one word from any other. The alcohol does not help, but it is not the main source of difficulties. The cocktail party problem is that there is just too much going on at once: How can our brain filter out the noise to focus on the wanted information? This problem is a central one for perceptual neuroscience—and not just during cocktail parties. The entire world we live in is quite literally too much to take in. Yet the brain does gather all of this information somehow and sorts it in real time, usually seamlessly and correctly. Whereas the physical reality consists of comparable amounts of signal and noise for many of the sounds and sights around you, your perception is that the conversation or object that interests you remains in clear focus. So how does the brain accomplish this feat? One critical component is that our neural circuits simplify the problem by actively ignoring—suppressing—anything that is not task-relevant. Our brain picks its battles. It stomps out irrelevant information so that the good stuff has a better chance of rising to awareness. This process, colloquially called attention, is how the brain sorts the wheat from the chaff. © 2014 Scientific American
By ADAM FRANK In the endless public wars between science and religion, Buddhism has mostly been given a pass. The genesis of this cultural tolerance began with the idea, popular in the 1970s, that Buddhism was somehow in harmony with the frontiers of quantum physics. While the silliness of “quantum spirituality” is apparent enough these days, the possibility that Eastern traditions might have something to say to science did not disappear. Instead, a more natural locus for that encounter was found in the study of the mind. Spurred by the Dalai Lama’s remarkable engagement with scientists, interest in Buddhist attitudes toward the study of the mind has grown steadily. But within the Dalai Lama’s cheerful embrace lies a quandary whose resolution could shake either tradition to its core: the true relationship between our material brains and our decidedly nonmaterial minds. More than evolution, more than inexhaustible arguments over God’s existence, the real fault line between science and religion runs through the nature of consciousness. Carefully unpacking that contentious question, and exploring what Buddhism offers its investigation, is the subject of Evan Thompson’s new book, “Waking, Dreaming, Being.” A professor of philosophy at the University of British Columbia, Thompson is in a unique position to take up the challenge. In addition to a career built studying cognitive science’s approach to the mind, he is intimate with the long history of Buddhist and Vedic commentary on the mind too. He also happens to be the son of the maverick cultural historian William Irwin Thompson, whose Lindisfarne Association proposed the “study and realization of a new planetary culture” (a goal that reveals a lot about its strengths and weaknesses). Growing up in this environment, the younger Thompson managed to pick up an enthusiasm for non-Western philosophical traditions and a healthy skepticism for their spiritualist assumptions. © 2014 The New York Times Company
By Quassim Cassam Most people wonder at some point in their lives how well they know themselves. Self-knowledge seems a good thing to have, but hard to attain. To know yourself would be to know such things as your deepest thoughts, desires and emotions, your character traits, your values, what makes you happy and why you think and do the things you think and do. These are all examples of what might be called “substantial” self-knowledge, and there was a time when it would have been safe to assume that philosophy had plenty to say about the sources, extent and importance of self-knowledge in this sense. Not any more. With few exceptions, philosophers of self-knowledge nowadays have other concerns. Here’s an example of the sort of thing philosophers worry about: suppose you are wearing socks and believe you are wearing socks. How do you know that that’s what you believe? Notice that the question isn’t: “How do you know you are wearing socks?” but rather “How do you know you believe you are wearing socks?” Knowledge of such beliefs is seen as a form of self-knowledge. Other popular examples of self-knowledge in the philosophical literature include knowing that you are in pain and knowing that you are thinking that water is wet. For many philosophers the challenge is to explain how these types of self-knowledge are possible. This is usually news to non-philosophers. Most certainly imagine that philosophy tries to answer the Big Questions, and “How do you know you believe you are wearing socks?” doesn’t sound much like one of them. If knowing that you believe you are wearing socks qualifies as self-knowledge at all — and even that isn’t obvious — it is self-knowledge of the most trivial kind. Non-philosophers find it hard to figure out why philosophers would be more interested in trivial than in substantial self-knowledge. © 2014 The New York Times Company
By Piercarlo Valdesolo Google “successful Thanksgiving” and you will get a lot of different recommendations. Most you’ve probably heard before: plan ahead, get help, follow certain recipes. But according to new research from Florida State University, enjoying your holiday also requires a key ingredient that few guests consider as they wait to dive face first into the turkey: a belief in free will. What does free will have to do with whether or not Aunt Sally leaves the table in a huff? These researchers argue that belief in free will is essential to experiencing the emotional state that makes Thanksgiving actually about giving thanks: gratitude. Previous research has shown that our level of gratitude for an act depends on three things: 1) the cost to the benefactor (in time, effort or money), 2) the value of the act to the beneficiary, and 3) the sincerity of the benefactor’s intentions. For example, last week my 4-year-old daughter gave me a drawing of our family. This act was costly (she spent time and effort), valuable (I love the way she draws herself bigger than everyone else in the family), and sincere (she drew it because she knew I would like it). But what if I thought that she drew it for a different reason? What if I thought that she was being coerced by my wife? Or if I thought that this was just an assignment at her pre-school? In other words, what if I thought she had no choice but to draw it? I wouldn’t have defiantly thrown it back in her face, but I surely would have felt differently about the sincerity of the action. It would have diminished my gratitude. © 2014 Scientific American
By Christof Koch Point to any one organ in the body, and doctors can tell you something about what it does and what happens if that organ is injured by accident or disease or is removed by surgery—whether it be the pituitary gland, the kidney or the inner ear. Yet like the blank spots on maps of Central Africa from the mid-19th century, there are structures whose functions remain unknown despite whole-brain imaging, electroencephalographic recordings that monitor the brain's cacophony of electrical signals and other advanced tools of the 21st century. Consider the claustrum. It is a thin, irregular sheet of cells, tucked below the neocortex, the gray matter that allows us to see, hear, reason, think and remember. It is surrounded on all sides by white matter—the tracts, or wire bundles, that interconnect cortical regions with one another and with other brain regions. The claustra—for there are two of them, one on the left side of the brain and one on the right—lie below the general region of the insular cortex, underneath the temples, just above the ears. They assume a long, thin wisp of a shape that is easily overlooked when inspecting the topography of a brain image. Advanced brain-imaging techniques that look at the white matter fibers coursing to and from the claustrum reveal that it is a neural Grand Central Station. Almost every region of the cortex sends fibers to the claustrum. These connections are reciprocated by other fibers that extend back from the claustrum to the originating cortical region. Neuroanatomical studies in mice and rats reveal a unique asymmetry—each claustrum receives input from both cortical hemispheres but only projects back to the overlying cortex on the same side. Whether or not this is true in people is not known. Curiouser and curiouser, as Alice would have said. © 2014 Scientific American
Kate Szell “I once asked Clara who she was. It was so embarrassing, but she’d had a haircut, so how was I to know?” That’s Rachel, she’s 14 and counts Clara as one of her oldest and best friends. There’s nothing wrong with Rachel’s sight, yet she struggles to recognise others. Why? Rachel is face blind. Most of us take for granted the fact that we recognise someone after a quick glance at their face. We don’t realise we’re doing something very different when we look at a face compared with when we look at anything else. To get a feeling of how peculiar facial recognition is, try recognising people by looking at their hands, instead of their faces. Tricky? That’s exactly how Rachel feels – only she’s not looking at hands, she’s looking straight into someone’s eyes. Specific areas of the brain process facial information. Damage to those areas gives rise to prosopagnosia or “face blindness”: an inability or difficulty with recognising faces. While brain damage-induced prosopagnosia is rare, prosopagnosia itself is not. Studies suggest around 2% of the population could have some form of prosopagnosia. These “developmental” prosopagnosics seem to be born without the ability to recognise faces and don’t acquire it, relying instead on all manner of cues, from gait to hairstyles, to tell people apart. Kirsten Dalrymple from the University of Minnesota is one of a handful of researchers looking into developmental prosopagnosia. Her particular interest is in prosopagnosic children. “Some seem to cope without much of a problem but, for others, it’s a totally different story,” she says. “They can become very socially withdrawn and can also be at risk of walking off with strangers.” © 2014 Guardian News and Media Limited
By CLYDE HABERMAN The notion that a person might embody several personalities, each of them distinct, is hardly new. The ancient Romans had a sense of this and came up with Janus, a two-faced god. In the 1880s, Robert Louis Stevenson wrote “Strange Case of Dr. Jekyll and Mr. Hyde,” a novella that provided us with an enduring metaphor for good and evil corporeally bound. Modern comic books are awash in divided personalities like the Hulk and Two-Face in the Batman series. Even heroic Superman has his alternating personas. But few instances of the phenomenon captured Americans’ collective imagination quite like “Sybil,” the study of a woman said to have had not two, not three (like the troubled figure in the 1950s’ “Three Faces of Eve”), but 16 different personalities. Alters, psychiatrists call them, short for alternates. As a mass-market book published in 1973, “Sybil” sold in the millions. Tens of millions watched a 1976 television movie version. The story had enough juice left in it for still another television film in 2007. Sybil Dorsett, a pseudonym, became the paradigm of a psychiatric diagnosis once known as multiple personality disorder. These days, it goes by a more anodyne label: dissociative identity disorder. Either way, the strange case of the woman whose real name was Shirley Ardell Mason made itself felt in psychiatrists’ offices across the country. Pre-"Sybil,” the diagnosis was rare, with only about 100 cases ever having been reported in medical journals. Less than a decade after “Sybil” made its appearance, in 1980, the American Psychiatric Association formally recognized the disorder, and the numbers soared into the thousands. People went on television to tell the likes of Jerry Springer and Leeza Gibbons about their many alters. One woman insisted that she had more than 300 identities within her (enough, if you will, to fill the rosters of a dozen major-league baseball teams). 
Even “Eve,” whose real name is Chris Costner Sizemore, said in the mid-1970s that those famous three faces were surely an undercount. It was more like 22, she said. © 2014 The New York Times Company
By MAX BEARAK MUMBAI, India — The young man sat cross-legged atop a cushioned divan on an ornately decorated stage, surrounded by other Jain monks draped in white cloth. His lip occasionally twitched, his hands lay limp in his lap, and for the most part his eyes were closed. An announcer repeatedly chastised the crowd for making even the slightest noise. From daybreak until midafternoon, members of the audience approached the stage, one at a time, to show the young monk a random object, pose a math problem, or speak a word or phrase in one of at least six different languages. He absorbed the miscellany silently, letting it slide into his mind, as onlookers in their seats jotted everything down on paper. After six hours, the 500th and last item was uttered — it was the number 100,008. An anxious hush descended over the crowd. And the monk opened his eyes and calmly recalled all 500 items, in order, detouring only once to fill in a blank he had momentarily set aside. When he was done, and the note-keepers in the audience had confirmed his achievement, the tense atmosphere dissolved and the announcer led the crowd in a series of triumphant chants. The opportunity to witness the feat of memory drew a capacity crowd of 6,000 to the Sardar Vallabhbhai Patel stadium in Mumbai on Sunday. The exhibition was part of a campaign to encourage schoolchildren to use meditation to build brainpower, as Jain monks have done for centuries in India, a country drawn both toward ancient religious practices and more recent ambitions. But even by Jain standards, the young monk — Munishri Ajitchandrasagarji, 24 — is something special. His guru, P. P. Acharya Nayachandrasagarji, said no other monk in many years had come close to his ability. © 2014 The New York Times Company
James Gorman Evidence has been mounting for a while that birds and other animals can count, particularly when the things being counted are items of food. But most of the research is done under controlled conditions. In a recent experiment with New Zealand robins, Alexis Garland and Jason Low at Victoria University of Wellington tested the birds in a natural setting, giving them no training and no rewards, and showed that they knew perfectly well when a scientist had showed them two mealworms in a box, but then delivered only one. The researchers reported the work this fall in the journal Behavioural Processes. The experiment is intriguing to watch, partly because it looks like a child’s magic trick. The apparatus used is a wooden box that has a sliding drawer. After clearly showing a robin that she was dropping two mealworms in a circular well in the box, Dr. Garland would slide in the drawer. It covered the two worms with an identical-looking circular well containing only one worm. When the researcher moved away and the robin flew down and lifted off a cover, it would find only one worm. The robins pecked intensely at the box, behavior they didn’t show if they found the two worms they were expecting. Earlier experiments had also shown the birds to be good at counting, and Dr. Garland said that one reason might be that they are inveterate thieves. Mates, in particular, steal from one another’s food caches, where they hide perishable prey like worms or insects. “If you’ve got a mate that steals 50 or more percent of your food,” she said, you’d better learn how to keep track of how many mealworms you’ve got. © 2014 The New York Times Company
By Anna North Do you devour the latest neuroscience news, eager to learn more about how your brain works? Or do you click past it to something else, something more applicable to your life? If you’re in the latter camp, you may be in the majority. A new study suggests that many people just don’t pay that much attention to brain science, and its findings may raise a question: Is “neuro-literacy” really necessary? At Wired, Christian Jarrett writes, “It feels to me like interest in the brain has exploded.” He cites the prevalence of the word “brain” in headlines as well as “the emergence of new fields such as neuroleadership, neuroaesthetics and neuro-law.” But as a neuroscience writer, he notes, he may be “heavily biased” — and in fact, some research “suggests neuroscience has yet to make an impact on most people’s everyday lives.” For instance, he reports, Cliodhna O’Connor and Helene Joffe recently interviewed 48 Londoners about brain science for a paper published in the journal Science Communication. Anyone who thinks we live in an era of neuro-fixation may find the results a bit of a shock. Said one participant in the research: “Science of the brain? I haven’t a clue. Nothing at all. I’d be lying if I said there was.” Another: “Brain research I understand, an image of, I don’t know, a monkey or a dog with like the top of their head off and electrodes and stuff on their brain.” And another: “I might have seen it on the news or something, you know, some report of some description. But because they probably mentioned the word ‘science,’ or ‘We’re going to go now to our science correspondent Mr. 
Lala,’ that’s probably when I go, okay, it’s time for me to make a cup of tea.” According to the study authors, 71 percent of respondents “took pains to convey that neuroscience was not salient in their day-to-day life: it was ‘just not really on my radar.’” Some respondents associated brain research with scientists in white coats or with science classes (asked to free-associate about the term “brain research,” one respondent drew a mean-faced stick figure labeled “cross teacher”). And 42 percent saw science as something alien to them, removed from their own lives. © 2014 The New York Times Company
By ALAN SCHWARZ CONCORD, Calif. — Every time Matthias is kicked out of a school or day camp for defying adults and clashing with other children, his mother, Joelle Kendle, inches closer to a decision she dreads. With each morning of arm-twisting and leg-flailing as she tries to get him dressed and out the door for first grade, the temptation intensifies. Ms. Kendle is torn over whether to have Matthias, just 6 and already taking the stimulant Adderall for attention deficit hyperactivity disorder, go on a second and more potent medication: the antipsychotic Risperdal. Her dilemma is shared by a steadily rising number of American families who are using multiple psychotropic drugs — stimulants, antipsychotics, antidepressants and others — to temper their children’s troublesome behavior, even though many doctors who mix such medications acknowledge that little is known about the overall benefits and risks for children. In 2012, about one in 54 youngsters ages 6 through 17 covered by private insurance was taking at least two psychotropic medications — a rise of 44 percent in four years, according to Express Scripts, which processes prescriptions for 85 million Americans. Academic studies of children covered by Medicaid have also found higher rates and growth. Combined, the data suggest that about one million children are currently taking various combinations of psychotropics. The risks of antipsychotics alone, for example, are known to include substantial weight gain and diabetes. Stimulants can cause appetite suppression, insomnia and, far more infrequently, hallucinations. Some combinations of medication classes, like antipsychotics and antidepressants, have shown improved benefits (for psychotic depression) but also heightened risks (for heart rhythm disturbances). 
But this knowledge has been derived substantially from studies in adults — children are rarely studied because of concerns about safety and ethics — leaving many experts worried that the use of multiple psychotropics in youngsters has not been explored fully. There is also debate over whether the United States Food and Drug Administration’s database of patients’ adverse drug reactions reliably monitors the hazards of psychotropic drug combinations, primarily because only a small fraction of cases are ever reported. Some clinicians are left somewhat queasy about relying mostly on anecdotal reports of benefit and harm. © 2014 The New York Times Company