Chapter 18. Attention and Higher Cognition
By Thomas MacMillan “Time” is the most common noun in the English language, Dean Buonomano tells us on the first page of his new book, Your Brain Is a Time Machine: The Neuroscience and Physics of Time. But despite our fixation with time, and its obvious centrality in our lives, we still struggle to fully understand it. From a psychology perspective, for instance, time seems to flow by, sometimes slowly — like when we’re stuck in line at the DMV — and sometimes quickly — like when we’re lost in an engrossing novel. But from a physics perspective, time may be simply another dimension in the universe, like length, height, or width. Buonomano, a professor of neuroscience at UCLA, lays out the latest, best theories about how we understand time, illuminating a fundamental aspect of being human. The human brain, he writes, is a time machine that allows us to mentally travel backward and forward, to plan for the future and agonizingly regret the past like no other animal. And, he argues, our brains are time machines like clocks are time machines: constantly tracking the passage of time, whether it’s circadian rhythms that tell us when to go to sleep, or microsecond calculations that allow us to hear the difference between “They gave her cat-food” and “They gave her cat food.” In an interview with Science of Us, Buonomano spoke about planning for the future as a basic human activity, the limits of be-here-now mindfulness, and the inherent incompatibility between physicists’ and neuroscientists’ understanding of the nature of time. I finished reading your book late last night and went to bed sort of planning our interview today, and then woke up at about 3:30 a.m. ready to do the interview, with my head full of insistent thoughts about questions that I should ask you. So was that my brain being a — maybe malfunctioning — time machine? I think this is consistent with the notion that the brain is an organ that’s future-oriented.
As far as survival goes, the evolutionary value of the brain is to act in the present to ensure survival in the future, whether that means figuring out a good place to get food, or doing an interview, I suppose. © New York Media LLC
By Daniel Barron Earlier this month, JAMA Psychiatry published a groundbreaking editorial. A group of psychiatrists led by David Ross described how and why post-traumatic stress disorder (PTSD) should be clinically evaluated from a neuroscience framework. The fact that this editorial was published in one of psychiatry’s leading journals is no small feat. Psychiatry houses a large and powerful contingent that argues neuroscience has little clinical relevance. The relevance of neuroscience to psychiatry was the subject of a recent Op-Ed debate in the New York Times: “There’s Such a Thing as Too Much Neuroscience” was rebutted with “More Neuroscience, Not Less.” This specific debate—and the dense politics as a whole—exists because competing frameworks are vying for the same funding, a conflict that pre-dates Freud’s departure from neurology. That the relevance of neuroscience to psychiatry is still questioned is blatantly outlandish: what organ do psychiatrists treat if not the brain? And what framework could possibly be more relevant than neuroscience to understanding brain dysfunction? In his editorial, Ross tactfully presented his case for neuroscience, describing the obvious choice for a clinical framework as one “perspective,” making a delicate intellectual curtsey while supporting his case with data. Ross discussed five “key neuroscience themes” (read: lines of evidence from burgeoning sub-fields) relevant to understanding and treating PTSD: fear conditioning, dysregulated circuits, memory reconsolidation, and epigenetic and genetic considerations. Each theme accounts for the diverse biological, psychological and social factors involved in PTSD—which is to say, these factors all have some effect on the brain mechanisms. Most importantly, Ross describes how a mechanistic approach allows clinicians to trace the specific causes of PTSD to specific treatments that can target those causes. © 2017 Scientific American.
By Cormac McCarthy I call it the Kekulé Problem because among the myriad instances of scientific problems solved in the sleep of the inquirer Kekulé’s is probably the best known. He was trying to arrive at the configuration of the benzene molecule and not making much progress when he fell asleep in front of the fire and had his famous dream of a snake coiled in a hoop with its tail in its mouth—the ouroboros of mythology—and woke exclaiming to himself: “It’s a ring. The molecule is in the form of a ring.” Well. The problem of course—not Kekulé’s but ours—is that since the unconscious understands language perfectly well or it would not understand the problem in the first place, why doesnt it simply answer Kekulé’s question with something like: “Kekulé, it’s a bloody ring.” To which our scientist might respond: “Okay. Got it. Thanks.” Why the snake? That is, why is the unconscious so loath to speak to us? Why the images, metaphors, pictures? Why the dreams, for that matter. A logical place to begin would be to define what the unconscious is in the first place. To do this we have to set aside the jargon of modern psychology and get back to biology. The unconscious is a biological system before it is anything else. To put it as pithily as possible—and as accurately—the unconscious is a machine for operating an animal. All animals have an unconscious. If they didnt they would be plants. We may sometimes credit ours with duties it doesnt actually perform. Systems at a certain level of necessity may require their own mechanics of governance. Breathing, for instance, is not controlled by the unconscious but by the pons and the medulla oblongata, two systems located in the brainstem. Except of course in the case of cetaceans, who have to breathe when they come up for air. An autonomous system wouldnt work here. The first dolphin anesthetized on an operating table simply died. (How do they sleep? With half of their brain alternately.)
But the duties of the unconscious are beyond counting. Everything from scratching an itch to solving math problems. © 2017 NautilusThink Inc.
Tara García Mathewson You saw the pictures in science class—a profile view of the human brain, sectioned by function. The piece at the very front, right behind where a forehead would be if the brain were actually in someone’s head, is the prefrontal cortex. It handles problem-solving, goal-setting, and task execution. And it works with the limbic system, which is connected to it and sits closer to the center of the brain. The limbic system processes emotions and triggers emotional responses, in part because it stores long-term memory. When a person lives in poverty, a growing body of research suggests, the limbic system is constantly sending fear and stress messages to the prefrontal cortex, overloading its ability to solve problems, set goals, and complete tasks in the most efficient ways. This happens to everyone at some point, regardless of social class. The overload can be prompted by any number of things, including an overly stressful day at work or a family emergency. People in poverty, however, have the added burden of ever-present stress. They are constantly struggling to make ends meet and often bracing themselves against class bias that adds extra strain or even trauma to their daily lives. And the science is clear—when brain capacity is used up on these worries and fears, there simply isn’t as much bandwidth for other things. Economic Mobility Pathways, or EMPath, has built its whole service-delivery model around this science, which it described in its 2014 report, “Using Brain Science to Design New Pathways Out of Poverty.” The Boston nonprofit started out as Crittenton Women’s Union, a merger of two of the city’s oldest women-serving organizations, both of which focused on improving the economic self-sufficiency of families. It continues that work with a new name and a burgeoning focus on intergenerational mobility. © 2017 by The Atlantic Monthly Group.
Aimee Cunningham Taking antidepressants during pregnancy does not increase the risk of autism or attention-deficit/hyperactivity disorder, two new large studies suggest. Genetic or environmental influences, rather than prenatal exposure to the drugs, may have a greater influence on whether a child will develop these disorders. The studies are published online April 18 in JAMA. Clinically, the message is “quite reassuring for practitioners and for mothers needing to make a decision about antidepressant use during pregnancy,” says psychiatrist Simone Vigod, a coauthor of one of the studies. Past research has questioned the safety of expectant moms taking antidepressants (SN: 6/5/10, p. 22). “A mother’s mood disturbances during pregnancy are a big public health issue — they impact the health of mothers and their children,” says Tim Oberlander, a developmental pediatrician at the University of British Columbia in Vancouver. About one in 10 women develop a major depressive episode during pregnancy. “All treatment options should be explored. Nontreatment is never an option,” says Oberlander, who coauthored a commentary, also published in JAMA. Untreated depression during pregnancy creates risks for the child, including poor fetal growth, preterm birth and developmental problems. Some women may benefit from psychotherapy alone. A more serious illness may require antidepressants. “Many of us have started to look at longer term child outcomes related to antidepressant exposure because mothers want to know about that in the decision-making process,” says Vigod, of Women’s College Hospital in Toronto. © Society for Science & the Public 2000 - 2017.
Angelo Young Billionaire magnate Elon Musk is trying to fill the world with electric cars and solar panels while at the same time aiming to deploy reusable rockets to eventually colonize Mars. As if that weren’t enough for his plate, Musk recently announced the launch of Neuralink, a neuroscience startup seeking to create a way to interface human brains with computers. This, Musk says, would help guard humanity against what he considers a threat from the rise of artificial intelligence. He envisions a lattice of electrodes implanted into the human skull that could allow people to download and upload thoughts as well as treat brain conditions such as epilepsy or bipolar disorder. Musk’s proposition seems as outlandish and unlikely as his vision for the Hyperloop rapid transport system, but like his other big ideas, there’s real science behind it. Figuring out what’s really involved in efforts to sync brains with computers was part of what inspired Adam Piore to write “The Body Builders: Inside the Science of the Engineered Human,” which was released last month by HarperCollins. Written in plain language that gives nonscientists a way to separate the science from the sensational, “The Body Builders” is a fascinating dive into what’s happening right now in bioengineering research — from brain-computer interfaces to bionic limbs — that will redefine human-machine interactions in the years to come. Piore, an award-winning journalist who has written extensively about scientific advances, spoke to Salon recently about just how close we are to being able to read one another’s thoughts through electrodes and the processing power of modern computers. © 2017 Salon Media Group, Inc.
By Neuroskeptic In a thought-provoking new paper called What are neural correlates neural correlates of?, NYU sociologist Gabriel Abend argues that neuroscientists need to pay more attention to philosophy, social science, and the humanities. Abend’s main argument is that if we are to study the neural correlates or neural basis of a certain phenomenon, we must first define that phenomenon and know how to identify instances of it. Sometimes, this identification is straightforward: in a study of brain responses to the taste of sugar, say, there is little room for confusion because we all agree what sugar is. However, if a neuroscientist wants to study the neural correlates of, say, love, they will need to decide what love is, and this is something that philosophers and others have been debating for a long time. Abend argues that cognitive neuroscientists “cannot avoid taking sides in philosophical and social science controversies” in studying phenomena, such as love or morality, which have no neutral, universally accepted definition. In choosing a particular set of stimuli in order to experimentally evoke something, neuroscientists are aligning themselves with a certain theory of what that thing is. For example, the field of “moral neuroscience” makes heavy use of a family of hypothetical dilemmas called trolley problems. The classic trolley problem asks us to choose between allowing a runaway trolley to hit and kill five people, or throwing one person in front of the trolley, killing them but saving the other five.
By John Horgan I’m writing a book on the mind-body problem, and one theme is that mind-theorists’ views are shaped by emotionally traumatic experiences, like mental illness, the death of a child and the breakup of a marriage. David Chalmers is a striking counter-example. He seems remarkably well adjusted and rational, especially for a philosopher. I’ve tracked his career since I heard him call consciousness “the hard problem” in 1994. Although I often disagree with him—about, for example, whether information theory can help solve consciousness—I’ve always found him an admirably clear thinker, who doesn’t oversell his ideas (unlike Daniel Dennett when he insists that consciousness is an “illusion”). Just in the last couple of years, Chalmers's writings, talks and meetings have helped me understand integrated information theory, Bayesian brains, ethical implications of artificial intelligence and philosophy’s lack of progress, among other topics. Last year I interviewed Chalmers at his home in a woody suburb of New York City. My major takeaway: Although he has faith that consciousness can be scientifically solved, Chalmers doesn’t think we’re close to a final theory, and if we find such a theory, consciousness might remain as philosophically confusing as, say, quantum mechanics. In other words, Chalmers is a philosophical hybrid, who fuses optimism with mysterianism, the position that consciousness is intractable. Below are edited excerpts from our conversation. Chalmers, now 50, was born and raised in Australia. His parents split up when he was five. “My father is a medical researcher, a pretty successful scientist and administrator in medicine in Australia… My mother is I would say a spiritual thinker.” “So if you want an historical story, I guess I end up halfway between my father and mother… My father is a reductionist, and my mother is very much a non-reductionist. 
I’m a non-reductionist with a tolerance for ideas that might look a bit crazy to some people, like the idea that there’s consciousness everywhere, consciousness is not reducible to something physical. That said, the tradition I’m working in is very much in the western scientific and analytic tradition.” © 2017 Scientific American
Richard A. Friedman I was doing KenKen, a math puzzle, on a plane recently when a fellow passenger asked why I bothered. I said I did it for the beauty. O.K., I’ll admit it’s a silly game: You have to make the numbers within the grid obey certain mathematical constraints, and when they do, all the pieces fit nicely together and you get this rush of harmony and order. Still, it makes me wonder what it is about mathematical thinking that is so elegant and aesthetically appealing. Is it the internal logic? The unique mix of simplicity and explanatory power? Or perhaps just its pure intellectual beauty? I’ve loved math since I was a kid because it felt like a big game and because it seemed like the laziest thing you could do mentally. After all, how many facts do you need to remember to do math? Later in college, I got excited by physics, which I guess you could say is just a grand exercise in applying math to understand the universe. My roommate, a brainy math major, used to bait me, saying that I never really understood the math I was using. I would counter that he never understood what on Earth the math he studied was good for. We were both right, but he’d be happy to know that I’ve come around to his side: Math is beautiful on a purely abstract level, quite apart from its ability to explain the world. We all know that art, music and nature are beautiful. They command the senses and incite emotion. Their impact is swift and visceral. How can a mathematical idea inspire the same feelings? Well, for one thing, there is something very appealing about the notion of universal truth — especially at a time when people entertain the absurd idea of alternative facts. The Pythagorean theorem still holds, and pi is a transcendental number that will describe all perfect circles for all time. © 2017 The New York Times Company
Nicola Davis Apes are on a par with human infants in being able to tell when people have an accurate belief about a situation or are actually mistaken, researchers say. While previous work has shown that great apes understand the goals, desires and perceptions of others, scientists say the latest finding reveals an important cognitive ability. “For the last 30 or more years people thought that belief understanding is the key marker of humans and really differentiates us from other species – and this does not seem to be the case,” said David Buttelmann, co-author of the research from the Max Planck Institute for Evolutionary Anthropology in Germany. The results follow on the heels of a study published last year which also suggests that apes understand the concept of false beliefs – after research that used eye-tracking technology to monitor the gaze of apes exposed to various pranks carried out by an actor dressed in a King Kong suit. But the new study, says Buttelmann, is an important step forward, showing that apes not only understand false belief in others, but apply that understanding to their own actions. Writing in the journal Plos One, Buttelmann and colleagues described exploring the understanding of false belief in 34 great apes, including bonobos, chimpanzees and orangutans, using a test that can be passed by human infants at one to two years of age. © 2017 Guardian News and Media Limited
Rebecca Hersher Do you pop up from your seat during meetings and finish other people's sentences? And maybe you also procrastinate, or find yourself zoning out in the middle of one-on-one conversations? It's possible you have adult ADHD. Six simple questions can reliably identify adults with attention-deficit/hyperactivity disorder, according to a World Health Organization advisory group working with two additional psychiatrists. The questions are:

How often do you have difficulty concentrating on what people say to you, even when they are speaking to you directly?
How often do you leave your seat in meetings and other situations in which you are expected to remain seated?
How often do you have difficulty unwinding and relaxing when you have time to yourself?
When you're in a conversation, how often do you find yourself finishing the sentences of the people you are talking to before they can finish them themselves?
How often do you put things off until the last minute?
How often do you depend on others to keep your life in order and attend to details?

The response options are "never," "rarely," "sometimes," "often" or "very often." "It's very important to look at the questions in their totality, not each individual symptom," says Dr. David Goodman, an assistant professor of psychiatry at Johns Hopkins School of Medicine who was not involved in the study. "No single question stands out as indicating ADHD." © 2017 npr
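The excerpt above amounts to a fixed six-item questionnaire with a five-point frequency scale. As a minimal sketch of representing and tallying it, one could write the following; note that the article gives no scoring rule or diagnostic cutoff, so the tally function is purely illustrative and is not the WHO instrument's actual scoring method.

```python
# The six screening questions quoted above, with the stated response scale.
# NOTE: the article gives no scoring rule or diagnostic cutoff, so the
# tally below is illustrative only -- it is NOT the WHO scoring method.
QUESTIONS = [
    "How often do you have difficulty concentrating on what people say "
    "to you, even when they are speaking to you directly?",
    "How often do you leave your seat in meetings and other situations "
    "in which you are expected to remain seated?",
    "How often do you have difficulty unwinding and relaxing when you "
    "have time to yourself?",
    "When you're in a conversation, how often do you find yourself "
    "finishing the sentences of the people you are talking to before "
    "they can finish them themselves?",
    "How often do you put things off until the last minute?",
    "How often do you depend on others to keep your life in order and "
    "attend to details?",
]
RESPONSES = ("never", "rarely", "sometimes", "often", "very often")

def tally_frequent(answers):
    """Count how many of the six items were answered 'often' or 'very often'."""
    if len(answers) != len(QUESTIONS):
        raise ValueError("expected one answer per question")
    for a in answers:
        if a not in RESPONSES:
            raise ValueError(f"unknown response: {a!r}")
    return sum(a in ("often", "very often") for a in answers)
```

Exposing only an aggregate count, rather than per-item flags, mirrors Goodman's caveat in the excerpt that the questions must be read in their totality.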
Marcelo Gleiser The idea that neuroscience is rediscovering the soul is, to most scientists and philosophers, nothing short of outrageous. Of course it is not. But the widespread, adverse, knee-jerk attitude presupposes the old-fashioned definition of the soul — the ethereal, immaterial entity that somehow encapsulates your essence. Surely, this kind of supernatural mumbo-jumbo has no place in modern science. And I agree. The Cartesian separation of body and soul, the res extensa (matter stuff) vs. res cogitans (mind stuff) has long been discarded as untenable in a strictly materialistic description of natural phenomena. After all, how would something immaterial interact with something material without any exchange of energy? And how would something immaterial — whatever that means — somehow maintain the essence of who you are beyond your bodily existence? So, this kind of immaterial soul really presents problems for science, although, as pointed out here recently by Adam Frank, the scientific understanding of matter is not without its challenges. But what if we revisit the definition of soul, abandoning its canonical meaning as the "spiritual or immaterial part of a human being or animal, regarded as immortal" for something more modern? What if we consider your soul as the sum total of your neurocognitive essence, your very specific brain signature, the unique neuronal connections, synapses, and flow of neurotransmitters that makes you you? © 2017 npr
Bruce Bower Kids can have virtual out-of-body experiences as early as age 6. Oddly enough, the ability to inhabit a virtual avatar signals a budding sense that one’s self is located in one’s own body, researchers say. Grade-schoolers were stroked on their backs with a stick while viewing virtual versions of themselves undergoing the same touch. Just after the session ended, the children often reported that they had felt like the virtual body was their actual body, says psychologist Dorothy Cowie of Durham University in England. This sense of being a self in a body, which can be virtually manipulated via sight and touch, gets stronger and more nuanced throughout childhood, the scientists report March 22 in Developmental Science. By around age 10, individuals start to report feeling the touch of a stick stroking a virtual body, denoting a growing integration of sensations with the experience of body ownership, Cowie’s team finds. A year after that, youngsters still don’t display all the elements of identifying self with body observed in adults. During virtual reality trials, only adults perceived their actual bodies as physically moving through space toward virtual bodies receiving synchronized touches. This first-of-its-kind study opens the way to studying how a sense of self develops from childhood on, says cognitive neuroscientist Olaf Blanke of the Swiss Federal Institute of Technology in Lausanne. “The new data clearly show that kids at age 6 have brain mechanisms that generate an experience of being a self located inside one’s own body.” He suspects that a beginner’s version of “my body is me” emerges by age 4. © Society for Science & the Public 2000 - 2017.
By Anna Buckley BBC Science Radio Unit In an infamous memo written in 1965, the philosopher Hubert Dreyfus stated that humans would always beat computers at chess because machines lacked intuition. Daniel Dennett disagreed. A few years later, Dreyfus rather embarrassingly found himself in checkmate against a computer. And in May 1997 the IBM computer Deep Blue defeated the world chess champion Garry Kasparov. Many who were unhappy with this result then claimed that chess was a boringly logical game. Computers didn't need intuition to win. The goalposts shifted. Daniel Dennett has always believed our minds are machines. For him the question is not whether computers can be human, but whether humans are really that clever. In an interview with BBC Radio 4's The Life Scientific, Dennett says there's nothing special about intuition. "Intuition is simply knowing something without knowing how you got there." Dennett blames the philosopher Rene Descartes for permanently polluting our thinking about how we think about the human mind. Descartes couldn't imagine how a machine could be capable of thinking, feeling and imagining. Such talents must be God-given. He was writing in the 17th century, when machines were made of levers and pulleys, not CPUs and RAM, so perhaps we can forgive him. Our brains are made of a hundred billion neurons. If you were to count all the neurons in your brain at a rate of one a second, it would take more than 3,000 years. Our minds are made of molecular machines, otherwise known as brain cells. And if you find this depressing then you lack imagination, says Dennett. "Do you know the power of a machine made of a trillion moving parts?", he asks. "We're not just robots," he says. "We're robots, made of robots, made of robots." Our brain cells are robots that respond to chemical signals. The motor proteins they create are robots. And so it goes on. © 2017 BBC.
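The neuron-counting figure quoted above is easy to verify. A quick back-of-the-envelope script (my check, not the programme's) confirms that tallying 100 billion neurons at one per second takes on the order of three millennia:

```python
# Sanity-check the claim: counting ~100 billion neurons at one per
# second would take more than 3,000 years.
NEURONS = 100_000_000_000                  # ~10^11, the figure quoted
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60   # ~31.6 million seconds

years = NEURONS / SECONDS_PER_YEAR
print(f"about {years:,.0f} years")         # roughly 3,169 years
```

So "more than 3,000 years" is, if anything, a slight understatement.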
by Emilie Reas Alzheimer’s disease (AD) has been characterized as a “complete loss of self.” Early on when memory begins to fade, the victim has difficulty recalling names, their grocery list or where they put their keys. As the disease progresses, they have trouble staying focused, planning and performing basic daily activities. From the exterior, dementia appears to ravage one’s intellect and personality; yet as mere observers, it’s impossible to ascertain how consciousness of the self and environment is transformed by dementia. The celebrated late neurologist Oliver Sacks once suggested that, “Style, neurologically, is the deepest part of one’s being and may be preserved, almost to the last, in dementia.” Is this remaining neurological “style” sufficient to preserve consciousness? Is the AD patient aware of their deteriorating cognition, retaining a sense of identity or morality, or can they still connect with friends and loved ones? Emerging advances in neuroscience have enabled researchers to more precisely probe the AD brain, suggesting that although some aspects of consciousness are compromised by dementia, others are remarkably spared. Scientists are beginning to piece together how the selective loss of some functions, but the preservation of others, alters consciousness in AD. A recent study found that the severity of cognitive impairment strongly relates to “meta-cognition” (reflecting on one’s own condition), moral judgments and thinking about the future, whereas basic personal identity and body awareness remain. Perhaps the most widely observed deficit in consciousness is “anosognosia,” impaired awareness of one’s own illness; whereas individuals with mild cognitive impairment (MCI; considered a precursor to full AD) are aware of their declining memory, AD patients may be unaware of their impairments. These behavioral signs suggest that only some aspects of consciousness and self-awareness are truly lost in AD.
By George Johnson Who knows what Arturo the polar bear was thinking as he paced back and forth in the dark, air-conditioned chamber behind his artificial grotto? Just down the pathway Cecilia sat quietly in her cage, contemplating whatever chimpanzees contemplate. The idea that something resembling a subjective, contemplative mind exists in other animals has become mainstream — and not just for apes. In recent years, both creatures, inhabitants of the Mendoza Zoological Park in Argentina, have been targets of an international campaign challenging the morality of holding animals captive as living museum exhibits. The issue is not so much physical abuse as mental abuse — the effect confinement has on the inhabitants’ minds. Last July, a few months after I visited the zoo, Arturo, promoted by animal rights activists as “the world’s saddest polar bear,” died of what his keepers said were complications of old age. (His mantle has now been bestowed on Pizza, a polar bear on display at a Chinese shopping mall.) But Cecilia (the “loneliest chimp,” some sympathizers have called her) has been luckier, if luck is a concept a chimpanzee can understand. In November, Judge María Alejandra Mauricio of the Third Court of Guarantees in Mendoza decreed that Cecilia is a “nonhuman person” — one that was being denied “the fundamental right” of all sentient beings “to be born, to live, grow, and die in the proper environment for their species.” Copyright 2017 Undark
Laurel Hamers SAN FRANCISCO — Girls and boys with attention-deficit/hyperactivity disorder don’t just behave differently. Parts of their brains look different, too. Now, researchers can add the cerebellum to that mismatch. For boys, symptoms of the disorder tend to include poor impulse control and disruptive behavior. Girls are more likely to have difficulty staying focused on one task. Studies show that those behavioral differences are reflected in brain structure. Boys with ADHD, for example, are more likely than girls to display abnormalities in premotor and primary motor circuits, pediatric neurologist Stewart Mostofsky of Kennedy Krieger Institute in Baltimore has reported previously. Now, Mostofsky and colleagues have looked at the cerebellum, which plays a role in coordinating movement. He reported the new findings March 25 at the Cognitive Neuroscience Society’s annual meeting in San Francisco. Girls ages 8 to 12 with ADHD showed differences in the volume of various regions of their cerebellum compared with girls without the condition, MRI scans revealed. A similar comparison of boys showed abnormalities, too. But those differences didn’t match the ones seen in girls, preliminary analyses suggest. So far, researchers have looked at 18 subjects in each of the four groups, but plan to quintuple that number in the coming months. © Society for Science & the Public 2000 - 2017
Laurel Hamers SAN FRANCISCO — When faced with simple math problems, people who get jittery about the subject may rely more heavily on certain brain circuitry than math-savvy people do. The different mental approach could help explain why people with math anxiety struggle on more complicated problems, researchers reported March 25 at the Cognitive Neuroscience Society’s annual meeting. While in fMRI machines, adults with and without math anxiety evaluated whether simple arithmetic problems, such as 9+2=11, were correct or incorrect. Both groups had similar response times and accuracy on the problems, but brain scans turned up differences. Specifically, in people who weren’t anxious about math, lower activation of the frontoparietal attention network was linked to better performance. That brain network is involved in working memory and problem solving. Math-anxious people showed no correlation between performance and frontoparietal network activity. People who used the circuit less were probably getting ahead by automating simple arithmetic, said Hyesang Chang, a cognitive neuroscientist at the University of Chicago. Because math-anxious people showed more variable brain activity overall, Chang speculated that they might instead be using a variety of computationally demanding strategies. This scattershot approach works fine for simple math, she said, but might get maxed out when the math is more challenging. Citation: H. Chang et al. Simple arithmetic: Not so simple for highly math anxious individuals. Cognitive Neuroscience Society Annual Meeting, San Francisco, March 25, 2017. © Society for Science & the Public 2000 - 2017.
By Scott Barry Kaufman Rarely do I read a scientific paper that overwhelms me with so much excitement, awe, and reverence. Well, a new paper in Psychological Science has really got me revved up, and I am bursting to share their findings with you! Most research on mind-wandering and daydreaming draws on one of two methods: strict laboratory conditions that ask people to complete boring, cognitive tasks, and retrospective surveys that ask people to recall how often they daydream in daily life. It has been rather difficult to compare these results to each other; laboratory tasks aren't representative of how we normally go about our day, and surveys are prone to memory distortion. In this new, exciting study, Michael Kane and colleagues directly compared laboratory mind-wandering with real-life mind-wandering within the same person, and used an important methodology called "experience-sampling" that allows the researcher to capture people's ongoing stream of consciousness. For 7 days, 8 times a day, the researchers randomly asked 274 undergraduates at the University of North Carolina at Greensboro whether they were mind-wandering and about the quality of their daydreams. They also asked them to engage in a range of tasks in the laboratory that assessed their rates of mind-wandering, the contents of their off-task thoughts, and their "executive functioning" (a set of skills that helps keep things in memory despite distractions and focus on the relevant details). What did they find? © 2017 Scientific American
Laura Sanders Not too long ago, the internet was stationary. Most often, we’d browse the Web from a desktop computer in our living room or office. If we were feeling really adventurous, maybe we’d cart our laptop to a coffee shop. Looking back, those days seem quaint. Today, the internet moves through our lives with us. We hunt Pokémon as we shuffle down the sidewalk. We text at red lights. We tweet from the bathroom. We sleep with a smartphone within arm’s reach, using the device as both lullaby and alarm clock. Sometimes we put our phones down while we eat, but usually faceup, just in case something important happens. Our iPhones, Androids and other smartphones have led us to effortlessly adjust our behavior. Portable technology has overhauled our driving habits, our dating styles and even our posture. Despite the occasional headlines claiming that digital technology is rotting our brains, not to mention what it’s doing to our children, we’ve welcomed this alluring life partner with open arms and swiping thumbs. Scientists suspect that these near-constant interactions with digital technology influence our brains. Small studies are turning up hints that our devices may change how we remember, how we navigate and how we create happiness — or not. Somewhat limited, occasionally contradictory findings illustrate how science has struggled to pin down this slippery, fast-moving phenomenon. Laboratory studies hint that technology, and its constant interruptions, may change our thinking strategies. Like our husbands and wives, our devices have become “memory partners,” allowing us to dump information there and forget about it — an off-loading that comes with benefits and drawbacks. Navigational strategies may be shifting in the GPS era, a change that might be reflected in how the brain maps its place in the world. Constant interactions with technology may even raise anxiety in certain settings. © Society for Science & the Public 2000 - 2017