Chapter 14. Attention and Consciousness
Ian Sample Science editor Children who are born very prematurely are at greater risk of developing mental health and social problems that can persist well into adulthood, according to one of the largest reviews of evidence. Those with an extremely low birth weight, at less than a kilogram, are more likely to have attention disorders and social difficulties as children, and feel more shyness, anxiety and depression as adults, than those born a healthy weight. The review draws on findings from 41 published studies over the past 26 years and highlights the need for doctors to follow closely how children born very prematurely fare as they become teenagers and adults. “It is important that families and doctors be aware of the potential for these early-emerging mental health problems in children born at extremely low birth weight, since at least some of them endure into adulthood,” said Karen Mathewson, a psychologist at McMaster University in Ontario. Improvements in neonatal care in the past two decades mean that more children who are born very prematurely now survive. In a healthy pregnancy, a baby can reach 1kg (a little more than 2lbs) within 27 weeks, or the end of the second trimester. The study, which involves data from 13,000 children in 12 different countries, follows previous research that found a greater tendency for very low birth weight children to have lower IQs and autism and more trouble with relationships and careers as they reach adulthood and venture into the world. © 2017 Guardian News and Media Limited
By Virginia Morell Strange as it might seem, not all animals can immediately recognize themselves in a mirror. Great apes, dolphins, Asian elephants, and Eurasian magpies can do this—as can human kids around age 2. Now, some scientists are welcoming another creature to this exclusive club: carefully trained rhesus monkeys. The findings suggest that with time and teaching, other animals can learn how mirrors work, and thus learn to recognize themselves—a key test of cognition. “It’s a really interesting paper because it shows not only what the monkeys can’t do, but what it takes for them to succeed,” says Diana Reiss, a cognitive psychologist at Hunter College in New York City, who has given the test to dolphins and Asian elephants in other experiments. The mirror self-recognition test (MSR) is revered as a means of testing self-awareness. A scientist places a colored, odorless mark on an animal where it can’t see it, usually the head or shoulder. If the animal looks in the mirror and spontaneously rubs the mark, it passes the exam. Successful species are said to understand the concept of “self” versus “other.” But some researchers wonder whether failure is simply a sign that the exam itself is inadequate, perhaps because some animals can’t understand how mirrors work. Some animals—like rhesus monkeys, dogs, and pigs—don’t recognize themselves in mirrors, but can use them to find food. That discrepancy puzzled Mu-ming Poo, a neurobiologist at the Shanghai Institutes for Biological Sciences in China, and one of the study’s authors. “There must be some transition between that simple mirror use and recognizing yourself,” he says. © 2017 American Association for the Advancement of Science.
By Bethany Brookshire Gender bias works in subtle ways, even in the scientific process. The latest illustration of that: Scientists recommend women less often than men as reviewers for scientific papers, a new analysis shows. That seemingly minor oversight is yet another missed opportunity for women that might end up having an impact on hiring, promotions and more. Peer review is one of the bricks in the foundation supporting science. A researcher’s results don’t get published in a journal until they successfully pass through a gauntlet of scientific peers, who scrutinize the paper for faulty findings, gaps in logic or less-than-meticulous methods. The scientist submitting the paper gets to suggest names for those potential reviewers. Scientific journal editors may contact some of the recommended scientists, and then reach out to a few more. But peer review isn’t just about the paper (and scientist) being examined. Being the one doing the reviewing “has a number of really positive benefits,” says Brooks Hanson, an earth scientist and director of publications at the American Geophysical Union in Washington, D.C. “You read papers differently as a reviewer than you do as a reader or author. You look at issues differently. It’s a learning experience in how to write papers and how to present research.” Serving as a peer reviewer can also be a networking tool for scientific collaborations, as reviewers seek out authors whose work they admired. And of course, scientists put the journals they review for on their resumes when they apply for faculty positions, research grants and awards. © Society for Science & the Public 2000 - 2017.
By Tiffany O'Callaghan Imagine feeling angry or upset whenever you hear a certain everyday sound. It’s a condition called misophonia, and we know little about its causes. Now there’s evidence that misophonics show distinctive brain activity whenever they hear their trigger sounds, a finding that could help devise coping strategies and treatments. Olana Tansley-Hancock knows misophonia’s symptoms only too well. From the age of about 7 or 8, she experienced feelings of rage and discomfort whenever she heard the sound of other people eating. By adolescence, she was eating many of her meals alone. As time wore on, many more sounds would trigger her misophonia. Rustling papers and tapping toes on train journeys constantly forced her to change seats and carriages. Clacking keyboards in the office meant she was always making excuses to leave the room. Finally, she went to a doctor for help. “I got laughed at,” she says. “People who suffer from misophonia often have to make adjustments to their lives, just to function,” says Miren Edelstein at the University of California, San Diego. “Misophonia seems so odd that it’s difficult to appreciate how disabling it can be,” says her colleague, V. S. Ramachandran. The condition was first given the name misophonia in 2000, but until 2013, there had only been two case studies published. More recently, clear evidence has emerged that misophonia isn’t a symptom of other conditions, such as obsessive compulsive disorder, nor is it a matter of being oversensitive to other people’s bad manners. Some studies, including work by Ramachandran and Edelstein, have found that trigger sounds spur a full fight-or-flight response in people with misophonia. © Copyright Reed Business Information Ltd.
Ian Sample Science editor Doctors have used a brain-reading device to hold simple conversations with “locked-in” patients in work that promises to transform the lives of people who are too disabled to communicate. The groundbreaking technology allows the paralysed patients – who have not been able to speak for years – to answer “yes” or “no” to questions by detecting telltale patterns in their brain activity. Three women and one man, aged 24 to 76, were trained to use the system more than a year after they were diagnosed with completely locked-in syndrome, or CLIS. The condition was brought on by amyotrophic lateral sclerosis, or ALS, a progressive neurodegenerative disease which leaves people totally paralysed but still aware and able to think. “It’s the first sign that completely locked-in syndrome may be abolished forever, because with all of these patients, we can now ask them the most critical questions in life,” said Niels Birbaumer, a neuroscientist who led the research at the University of Tübingen. “This is the first time we’ve been able to establish reliable communication with these patients and I think that is important for them and their families,” he added. “I can say that after 30 years of trying to achieve this, it was one of the most satisfying moments of my life when it worked.” © 2017 Guardian News and Media Limited
Noah Charney The Chinese government just arrested a group of people associated with a sham tourist attraction that had lured hundreds of sight-seers to a fake Terracotta Warriors exhibit, comprised entirely of modern replicas. Sotheby’s recently hired Jamie Martin of Orion Analytical, a forensic specialist at testing art, who then discovered that a Parmigianino painting recently sold is actually a modern forgery (Sotheby’s returned the buyer’s money and then sued the person for whom they sold it). And the Ringling Museum in Sarasota, Florida, is hoping that a painting of Philip IV of Spain in their collection will be definitively determined to be by Velazquez, and not a copy in the style of Velazquez. And that’s just in the last week or so. Art forgery and authenticity seems to be in the news just about every week (to my publicist’s delight). But I’m on a bit of a brainstorm. After my interview with Nobel Prize winner Dr. Eric Kandel on the neuroscience behind how we humans understand art, I’ve developed a keen interest in art and the mind. I tackled selfies, self-portraits and facial recognition recently, as well as what happens when the brain fails to function properly and neglects to recognize the value of art. Since my last book was a history of forgery, it was perhaps inevitable that I would wonder about the neurology of the recognition of originals versus copies. But while I looked into forgery from a wide variety of angles for the book, neuroscience was not one of them. © 2017 Salon Media Group, Inc.
By ADAM BEAR and JOSHUA KNOBE What’s normal? Perhaps the answer seems obvious: What’s normal is what’s typical — what is average. But in a recent paper in the journal Cognition, we argue that the situation is more complicated than that. After conducting a series of experiments that examined how people decide whether something is normal or not, we found that when people think about what is normal, they combine their sense of what is typical with their sense of what is ideal. Normal, in other words, turns out to be a blend of statistical and moral notions. Our key finding can be illustrated with a simple example. Ask yourself, “What is the average number of hours of TV that people watch in a day?” Then ask yourself a question that might seem very similar: “What is the normal number of hours of TV for a person to watch in a day?” If you are like most of our experimental participants, you will not give the same answer to the second question that you give to the first. Our participants said the “average” number was about four hours and the “normal” number was about three hours. In addition, they said that the “ideal” number was about 2.5 hours. This has an interesting implication. It suggests that people’s conception of the normal deviates from the average in the direction of what they think ought to be so. Our studies found this same pattern in numerous other cases: the normal grandmother, the normal salad, the normal number of students to be bullied in a middle school. Again and again, our participants did not take the normal to be the same as the average. Instead, what people picked out as the “normal thing to do” or a “normal such-and-such” tended to be intermediate between what they thought was typical and what they thought was ideal. © 2017 The New York Times Company
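The arithmetic in the TV example can be made explicit. Treating the reported "normal" judgment as a weighted blend of the reported "average" and "ideal" judgments (a back-of-the-envelope sketch, not the model the authors actually fit), the participants' numbers imply a weight of about two-thirds on the ideal:

```python
def implied_ideal_weight(typical, ideal, normal):
    """Solve normal = (1 - w) * typical + w * ideal for the weight w."""
    return (typical - normal) / (typical - ideal)

# The TV-watching judgments reported in the article, in hours per day:
w = implied_ideal_weight(typical=4.0, ideal=2.5, normal=3.0)
print(round(w, 3))  # 0.667: "normal" sits two-thirds of the way toward the ideal
```

On this toy reading, any judgment strictly between the typical and the ideal corresponds to a weight between 0 and 1, which is just the paper's claim that "normal" is intermediate between the two.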
Nicola Davis Girls as young as six years old believe that brilliance is a male trait, according to research into gender stereotypes. The US-based study also found that, unlike boys, girls do not believe that achieving good grades in school is related to innate abilities. Andrei Cimpian, a co-author of the research from New York University, said that the work highlights how even young children can absorb and be influenced by gender stereotypes – such as the idea that brilliance or giftedness is more common in men. “Because these ideas are present at such an early age, they have so much time to affect the educational trajectories of boys and girls,” he said. Writing in the journal Science, researchers from three US universities describe how they carried out a range of tests with 400 children, half of whom were girls, to probe the influence of gender stereotypes on children’s notions of intelligence and ability. In the first test, a group of 96 boys and girls of ages five, six and seven were read a story about a highly intelligent person, and were asked to guess the person’s gender. They were then presented with a series of pictures showing pairs of adults, some same-sex, some opposite sex, and were asked to pick which they thought was highly intelligent. Finally, the children were asked to match certain objects and traits, such as “being smart”, to pictures of men and women. © 2017 Guardian News and Media Limited
By Carl Bialik A woman has never come closer to the presidency than Hillary Clinton did in winning the popular vote in November. Yet as women march in Washington on Saturday, many of them to protest the presidency of Donald Trump, an important obstacle to the first woman president remains: the hidden, internalized bias many people hold against career advancement by women. And perhaps surprisingly, there is evidence that women hold more of this bias, on average, than men do. There has been lots of discussion of the role that overt sexism played in both Trump’s campaign and at the ballot box. A YouGov survey conducted two weeks before the election, for example, found that Trump voters had much higher levels of sexism, on average, than Clinton voters, as measured by their level of agreement with statements such as “women seek to gain power by getting control over men.” An analysis of the survey found that sexism played a big role in explaining people’s votes, after controlling for other factors, including gender and political ideology. Other research has reached similar conclusions. Two recent studies of voters, however, suggest that another, subtler form of bias may also have been a factor in the election. These studies looked at what’s known as “implicit bias,” the unconscious tendency to associate certain qualities with certain groups — in this case, the tendency to associate men with careers and women with family. Researchers have found that this kind of bias is stronger on average in women than in men, and, among women, it is particularly strong among political conservatives. And at least according to one study, this unconscious bias was especially strong among one group in 2016: women who supported Trump.
By Jordan Axt Imagine playing a game where you’re seated in front of four decks of cards. On the back of two decks are pictures of puppies; on the other two are pictures of spiders. Each deck has some cards that win points and others that lose points. In general, the puppy decks are “good” in that they win you more points than they lose, while the spider decks are “bad” in that they lose you more points than they win. You repeatedly select cards in hopes of winning as many points as possible. This game seems pretty easy—and it is. Most players favor the puppy decks from the start and quickly learn to continue favoring them because they produce more points. However, if the pictures on the decks are reversed, the game becomes a little harder. People may have a tougher time initially favoring spider decks because it’s difficult to learn that something people fear like spiders brings positive outcomes and something people enjoy like puppies brings negative outcomes. Performance on this learning task is best when one’s attitudes and motivations are aligned. For instance, when puppies earn you more points than spiders, people’s preference for puppies can lead people to select more puppies initially, and a motivation to earn as many points as possible leads people to select more and more puppies over time. But when spiders earn you more points than puppies, people have to overcome their initial aversion to spiders in order to perform well. © 2017 Scientific American
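The aligned/misaligned asymmetry has the flavor of a simple value-learning problem, and a toy simulation shows the predicted lag. Everything below—the payoffs, the initial "attitude" values, and the learning rate—is invented for illustration and is not taken from the study:

```python
def deck_picks(initial_values, rewards, alpha=0.5, n_trials=20):
    """Greedy agent: picks the deck type with the higher estimated value,
    then nudges that estimate toward the reward it observed."""
    values = dict(initial_values)
    picks = []
    for _ in range(n_trials):
        deck = max(values, key=values.get)  # greedy choice
        picks.append(deck)
        values[deck] += alpha * (rewards[deck] - values[deck])
    return picks

attitude = {"puppy": 0.5, "spider": -0.5}  # liking puppies, fearing spiders

# Aligned condition: the liked decks really do pay off.
aligned = deck_picks(attitude, {"puppy": 1.0, "spider": -1.0})
# Misaligned condition: the feared decks pay off, so the prior must be unlearned.
misaligned = deck_picks(attitude, {"puppy": -1.0, "spider": 1.0})

print(aligned.count("spider"), misaligned.count("puppy"))  # 0 2
```

With the prior aligned to the payoffs, the agent never touches a bad deck; with it misaligned, the agent wastes two trials unlearning its preference before settling on the rewarding decks—the "overcoming initial aversion" cost described above.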
By Rachael Lallensack A video game is helping researchers learn more about how tiny European starlings keep predators at bay. Their massive flocks, consisting of hundreds to thousands of birds, fly together in a mesmerizing, pulsating pattern called a murmuration. For a long time, researchers have suspected that the bigger the flock, the harder it is for predators like falcons and hawks to take down any one member, something known as “confusion effect.” Now, researchers have analyzed that effect—in human hunters. Using the first 3D computer program to simulate a murmuration, scientists tested how well 25 players, acting as flying predators, could target and pursue virtual starlings, whose movements were simulated based on data from real starling flocks (see video above). The team’s findings reaffirmed the confusion effect: The larger the simulated flocks, the harder it was for the “predators” to single out and catch individual prey, the researchers report this week in Royal Society Open Science. So maybe sometimes, it’s not so bad to get lost in a crowd. © 2017 American Association for the Advancement of Science.
By Alan Burdick Some nights—more than I like, lately—I wake to the sound of the bedside clock. The room is dark, without detail, and it expands in such a way that it seems as if I’m outdoors, under an empty sky, or underground, in a cavern. I might be falling through space. I might be dreaming. I could be dead. Only the clock moves, its tick steady, unhurried. At these moments I have the most chilling understanding that time moves in only one direction. I’m tempted to look at the clock, but I already know that it’s the same time it always is: 4 A.M., or 4:10 A.M., or once, for a disconcerting stretch of days, 4:27 A.M. Even without looking, I could deduce the time from the ping of the bedroom radiator gathering steam in winter or the infrequency of the cars passing by on the street outside. In 1917, the psychologist Edwin G. Boring and his wife, Lucy, described an experiment in which they woke people at intervals to see if they knew what time it was; the average estimate was accurate to within fifty minutes, although almost everyone thought it was later than it actually was. They found that subjects were relying on internal or external signals: their degree of sleepiness or indigestion (“The dark brown taste in your mouth is never bad when you have been asleep only a short time”), the moonlight, “bladder cues,” the sounds of cars or roosters. “When a man is asleep, he has in a circle round him the chain of the hours, the sequence of the years, the order of the heavenly bodies,” Proust wrote. “Instinctively he consults them when he awakes, and in an instant reads off his own position on the earth’s surface and the time that has elapsed during his slumbers.” © 2017 Condé Nast.
By Maggie Koerth-Baker “The president can’t have a conflict of interest,” Donald Trump told The New York Times in November. He appears to have meant that in the legal sense — the president isn’t bound by the same conflict-of-interest laws that loom over other executive branch officials and employees. But that doesn’t mean the president’s interests can’t be in conflict. When he takes office Jan. 20, Trump will be tangled in a wide array of situations in which his personal connections and business coffers are pulling him in one direction while the interests of the American presidency and people pull him in another. For example, Trump is the president of a vineyard in Virginia that’s requesting foreign worker visas from the government he’ll soon lead. He’s also involved in an ongoing business partnership with the Philippines’ diplomatic trade envoy — a relationship that could predispose Trump to accepting deals that are more favorable to that country than he otherwise might. Once he’s in office, he will appoint some members of the labor board that could hear disputes related to his hotels. Neither Trump nor his transition team replied to interview requests for this article, but his comments to the Times suggest that he genuinely believes he can be objective and put the country first, despite financial and social pressures to do otherwise. Unfortunately, science says he’s probably wrong.
By Ellen Hendriksen Pop quiz: what’s the first thing that comes to mind when I say “ADHD”?
a. Getting distracted
b. Ants-in-pants
c. Elementary school boys
d. Women and girls
Most likely, you didn’t pick D. If that’s the case, you’re not alone. For most people, ADHD conjures a mental image of school-aged boys squirming at desks or bouncing off walls, not a picture of adults, girls, or especially adult women. Both scientists and society have long pinned ADHD on males, even though girls and women may be just as likely to suffer from this neurodevelopmental disorder. Back in 1987, the American Psychiatric Association stated that the male to female ratio for ADHD was 9 to 1. Twenty years later, however, an epidemiological study of almost 4,000 kids found the ratio was more like 1 to 1—half girls, half boys. © 2017 Scientific American
By Drake Baer Philosophers have been arguing about the nature of will for at least 2,000 years. It’s at the core of blockbuster social-psychology findings, from delayed gratification to ego depletion to grit. But it’s only recently, thanks to the tools of brain imaging, that the act of willing is starting to be captured at a mechanistic level. A primary example is “cognitive control,” or how the brain selects goal-serving behavior from competing processes like so many unruly third-graders with their hands in the air. It’s the rare neuroscience finding that’s immediately applicable to everyday life: By knowing the way the brain is disposed to behaving or misbehaving in accordance with your goals, it’s easier to get the results you’re looking for, whether it’s avoiding the temptation of chocolate cookies or the pull of darkly ruminative thoughts. Jonathan Cohen, who runs a neuroscience lab dedicated to cognitive control at Princeton, says that it underlies just about every other flavor of cognition that’s thought to “make us human,” whether it’s language, problem solving, planning, or reasoning. “If I ask you not to scratch the mosquito bite that you have, you could comply with my request, and that’s remarkable,” he says. Every other species — ape, dog, cat, lizard — will automatically indulge in the scratching of the itch. (Why else would a pup need a post-surgery cone?) It’s plausible that a rat or monkey could be taught not to scratch an itch, he says, but that would probably take thousands of trials. But any psychologically and physically able human has the capacity to do so. “It’s a hardwired reflex that is almost certainly coded genetically,” he says. “But with three words — don’t scratch it — you can override those millions of years of evolution. That’s cognitive control.” © 2017, New York Media LLC.
By Michael Price As we age, we get progressively better at recognizing and remembering someone’s face, eventually reaching peak proficiency at about 30 years old. A new study suggests that’s because brain tissue in a region dedicated to facial recognition continues to grow and develop throughout childhood and into adulthood, a process known as proliferation. The discovery may help scientists better understand the social evolution of our species, as speedy recollection of faces let our ancestors know at a glance whether to run, woo, or fight. The results are surprising because most scientists have assumed that brain development throughout one’s life depends almost exclusively on “synaptic pruning,” or the weeding out of unnecessary connections between neurons, says Brad Duchaine, a psychologist at Dartmouth College who was not involved with the study. “I expect these findings will lead to much greater interest in the role of proliferation in neural development.” Ten years ago, Kalanit Grill-Spector, a psychologist at Stanford University in Palo Alto, California, first noticed that several parts of the brain’s visual cortex, including a segment known as the fusiform gyrus that’s known to be involved in facial recognition, appeared to develop at different rates after birth. To get more detailed information on how the size of certain brain regions changes over time, she turned to a recently developed brain imaging technology known as quantitative magnetic resonance imaging (qMRI). The technique tracks how long it takes for protons, excited by the imaging machine’s strong magnetic field, to calm down. Like a top spinning on a crowded table, these protons will slow down more quickly if they’re surrounded by a lot of molecules—a proxy for measuring volume. © 2017 American Association for the Advancement of Science
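The "spinning top" measurement can be sketched with the standard exponential-recovery description from MRI physics: after excitation, magnetization relaxes back toward equilibrium with a tissue-dependent time constant, and denser surroundings mean faster relaxation. The time constants below are invented for illustration; the study's actual parameters are not given here:

```python
import math

def recovered_fraction(t, t1):
    """Fraction of longitudinal magnetization recovered a time t after
    excitation, for tissue with relaxation time constant t1 (same units as t)."""
    return 1.0 - math.exp(-t / t1)

# Illustrative only: tissue crowded with macromolecules -> shorter relaxation
# time, so its protons "calm down" sooner after the magnetic pulse.
dense_t1, sparse_t1 = 0.8, 1.4   # seconds, made-up values
t = 1.0
print(recovered_fraction(t, dense_t1) > recovered_fraction(t, sparse_t1))  # True
```

This is the sense in which relaxation timing can serve as a proxy for how much material surrounds the protons in a given patch of cortex.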
Alexander Fornito, The human brain is an extraordinarily complex network, comprising an estimated 86 billion neurons connected by 100 trillion synapses. A connectome is a comprehensive map of these links—a wiring diagram of the brain. With current technology, it is not possible to map a network of this size at the level of every neuron and synapse. Instead researchers use techniques such as magnetic resonance imaging to map connections between areas of the human brain that span several millimeters and contain many thousands of neurons. At this macroscopic scale, each area comprises a specialized population of neurons that work together to perform particular functions that contribute to cognition. For example, different parts of your visual cortex contain cells that process specific types of information, such as the orientation of a line and the direction in which it moves. Separate brain regions process information from your other senses, such as sound, smell and touch, and other areas control your movements, regulate your emotional responses, and so on. These specialized functions are not processed in isolation but are integrated to provide a unitary and coherent experience of the world. This integration is hypothesized to occur when different populations of cells synchronize their activity. The fiber bundles that connect different parts of the brain—the wires of the connectome—provide the substrate for this communication. These connections ensure that brain activity unfolds through time as a rhythmic symphony rather than a disordered cacophony. © 2017 Scientific American
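The "wiring diagram" idea maps directly onto a weighted graph: regions as nodes, fiber bundles as weighted edges. A minimal sketch, with region names and connection weights that are purely illustrative rather than taken from any real connectome data:

```python
# Toy macroscale "connectome": four regions and the strengths of the
# fiber bundles linking them (all values invented for illustration).
regions = ["visual", "auditory", "motor", "prefrontal"]
edges = {
    ("visual", "prefrontal"): 0.6,
    ("auditory", "prefrontal"): 0.5,
    ("motor", "prefrontal"): 0.7,
    ("visual", "auditory"): 0.2,
}

def adjacency(regions, edges):
    """Build a symmetric weight matrix (connections treated as undirected)."""
    index = {r: i for i, r in enumerate(regions)}
    n = len(regions)
    matrix = [[0.0] * n for _ in range(n)]
    for (a, b), w in edges.items():
        matrix[index[a]][index[b]] = w
        matrix[index[b]][index[a]] = w
    return matrix

A = adjacency(regions, edges)
# Summed connection weight ("node strength") picks out hub-like regions,
# the kind of integrative areas the passage describes:
strength = {r: sum(A[i]) for i, r in enumerate(regions)}
print(max(strength, key=strength.get))  # prefrontal is the hub in this toy graph
```

Real connectome analyses work the same way in outline, just with hundreds of regions and weights estimated from diffusion or functional MRI rather than hand-picked numbers.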
By Susana Martinez-Conde Our perceptual and cognitive systems like to keep things simple. We describe the line drawings below as a circle and a square, even though their imagined contours consist—in reality—of discontinuous line segments. The Gestalt psychologists of the 19th and early 20th century branded this perceptual legerdemain as the Principle of Closure, by which we tend to recognize shapes and concepts as complete, even in the face of fragmentary information. Now at the end of the year, it is tempting to seek a cognitive kind of closure: we want to close the lid on 2016, wrap it with a bow and start a fresh new year from a blank slate. Of course, it’s just an illusion, the Principle of Closure in one of its many incarnations. The end of the year is just as arbitrary as the end of the month, or the end of the week, or any other date we choose to highlight in the earth’s recurrent journey around the sun. But it feels quite different. That’s why we have lists of New Year’s resolutions, or why we start new diets or exercise regimes on Mondays rather than Thursdays. Researchers have also found that, even though we measure time in a continuous scale, we assign special meaning to idiosyncratic milestones such as entering a new decade. What should we do about our brain’s oversimplification tendencies concerning the New Year—if anything? One strategy would be to fight our feelings of closure and rebirth as we (in truth) seamlessly move from the last day of 2016 to the first day of 2017. But that approach is likely to fail. Try as we might, the Principle of Closure is just too ingrained in our perceptual and cognitive systems. In fact, if you already have the feeling that the beginning of the year is somewhat special (hey, it only happens once a year!), you might as well decide that resistance is futile, and not just embrace the illusion, but do your best to channel it. © 2017 Scientific American
Perry Link People who study other cultures sometimes note that they benefit twice: first by learning about the other culture and second by realizing that certain assumptions of their own are arbitrary. In reading Colin McGinn’s fine recent piece, “Groping Toward the Mind,” in The New York Review, I was reminded of a question I had pondered in my 2013 book Anatomy of Chinese: whether some of the struggles in Western philosophy over the concept of mind—especially over what kind of “thing” it is—might be rooted in Western language. The puzzles are less puzzling in Chinese. Indo-European languages tend to prefer nouns, even when talking about things for which verbs might seem more appropriate. The English noun inflation, for example, refers to complex processes that were not a “thing” until language made them so. Things like inflation can even become animate, as when we say “we need to combat inflation” or “inflation is killing us at the check-out counter.” Modern cognitive linguists like George Lakoff at Berkeley call inflation an “ontological metaphor.” (The inflation example is Lakoff’s.) When I studied Chinese, though, I began to notice a preference for verbs. Modern Chinese does use ontological metaphors, such as fāzhǎn (literally “emit and unfold”) to mean “development” or xìnxīn (“believe mind”) for “confidence.” But these are modern words that derive from Western languages (mostly via Japanese) and carry a Western flavor with them. “I firmly believe that…” is a natural phrase in Chinese; you can also say “I have a lot of confidence that…” but the use of a noun in such a phrase is a borrowing from the West. © 1963-2016 NYREV, Inc
By Susana Martinez-Conde, Stephen L. Macknik We think we know what we want—but do we, really? In 2005 Lars Hall and Petter Johansson, both at Lund University in Sweden, ran an experiment that transformed how cognitive scientists think about choice. The experimental setup looked deceptively simple. A study participant and researcher faced each other across a table. The scientist offered two photographs of young women deemed equally attractive by an independent focus group. The subject then had to choose which portrait he or she found more appealing. Next, the experimenter turned both pictures over, moved them toward the subjects and asked them to pick up the photo they just chose. Subjects complied, unaware that the researcher had just performed a swap using a sleight-of-hand technique known to conjurers as black art. Because your visual neurons are built to detect and enhance contrast, it is very hard to see black on black: a magician dressed in black against a black velvet backdrop can look like a floating head. Hall and Johansson deliberately used a black tabletop in their experiment. The first photos their subjects saw all had black backs. Behind those, however, they hid a second picture of the opposite face with a red back. When the experimenter placed the first portrait face down on the table, he pushed the second photo toward the subject. When participants picked up the red-backed photos, the black-backed ones stayed hidden against the table's black surface—that is, until the experimenter could surreptitiously sweep them into his lap. © 2016 Scientific American