Chapter 18. Attention and Higher Cognition
By Scott Barry Kaufman Rarely do I read a scientific paper that overwhelms me with so much excitement, awe, and reverence. Well, a new paper in Psychological Science has really got me revved up, and I am bursting to share its findings with you! Most research on mind-wandering and daydreaming draws on one of two methods: strict laboratory conditions that ask people to complete boring cognitive tasks, and retrospective surveys that ask people to recall how often they daydream in daily life. It has been rather difficult to compare these results to each other; laboratory tasks aren't representative of how we normally go about our day, and surveys are prone to memory distortion. In this new, exciting study, Michael Kane and colleagues directly compared laboratory mind-wandering with real-life mind-wandering within the same person, and used an important methodology called "experience-sampling" that allows the researcher to capture people's ongoing stream of consciousness. For 7 days, 8 times a day, the researchers randomly asked 274 undergraduates at the University of North Carolina at Greensboro whether they were mind-wandering and about the quality of their daydreams. They also asked them to engage in a range of tasks in the laboratory that assessed their rates of mind-wandering, the contents of their off-task thoughts, and their "executive functioning" (a set of skills that helps us keep things in memory despite distractions and focus on the relevant details). What did they find? © 2017 Scientific American
Link ID: 23409 - Posted: 03.27.2017
Laura Sanders Not too long ago, the internet was stationary. Most often, we’d browse the Web from a desktop computer in our living room or office. If we were feeling really adventurous, maybe we’d cart our laptop to a coffee shop. Looking back, those days seem quaint. Today, the internet moves through our lives with us. We hunt Pokémon as we shuffle down the sidewalk. We text at red lights. We tweet from the bathroom. We sleep with a smartphone within arm’s reach, using the device as both lullaby and alarm clock. Sometimes we put our phones down while we eat, but usually faceup, just in case something important happens. Our iPhones, Androids and other smartphones have led us to effortlessly adjust our behavior. Portable technology has overhauled our driving habits, our dating styles and even our posture. Despite the occasional headlines claiming that digital technology is rotting our brains, not to mention what it’s doing to our children, we’ve welcomed this alluring life partner with open arms and swiping thumbs. Scientists suspect that these near-constant interactions with digital technology influence our brains. Small studies are turning up hints that our devices may change how we remember, how we navigate and how we create happiness — or not. Somewhat limited, occasionally contradictory findings illustrate how science has struggled to pin down this slippery, fast-moving phenomenon. Laboratory studies hint that technology, and its constant interruptions, may change our thinking strategies. Like our husbands and wives, our devices have become “memory partners,” allowing us to dump information there and forget about it — an off-loading that comes with benefits and drawbacks. Navigational strategies may be shifting in the GPS era, a change that might be reflected in how the brain maps its place in the world. Constant interactions with technology may even raise anxiety in certain settings. © Society for Science & the Public 2000 - 2017
By Christof Koch We moderns take it for granted that consciousness is intimately tied up with the brain. But this assumption did not always hold. For much of recorded history, the heart was considered the seat of reason, emotion, valor and mind. Indeed, the first step in mummification in ancient Egypt was to scoop out the brain through the nostrils and discard it, whereas the heart, the liver and other internal organs were carefully extracted and preserved. The pharaoh would then have access to everything he needed in his afterlife. Everything except for his brain! Several millennia later Aristotle, one of the greatest of all biologists, taxonomists, embryologists and the first evolutionist, had this to say: “And of course, the brain is not responsible for any of the sensations at all. The correct view [is] that the seat and source of sensation is the region of the heart.” He argued consistently that the primary function of the wet and cold brain is to cool the warm blood coming from the heart. Another set of historical texts is no more insightful on this question. The Old and the New Testaments are filled with references to the heart but entirely devoid of any mentions of the brain. Debate about what the brain does grew ever more intense over ensuing millennia. The modern embodiment of these arguments seeks to identify the precise areas within the three-pound cranial mass where consciousness arises. What follows is an attempt to size up the past and present of this transmillennial journey. The field has scored successes in delineating a brain region that keeps the neural engine humming. Switched on, you are awake and conscious. In another setting, your body is asleep, yet you still have experiences—you dream. In a third position, you are deeply asleep, effectively off-line. © 2017 Scientific American
Link ID: 23361 - Posted: 03.16.2017
By Nicole Mortillaro, CBC News Have you ever witnessed an event with a friend, only to find that the two of you had different accounts of what occurred? This is known as perception bias. Our views and beliefs can cloud the way we perceive things — and perception bias can take many forms. New research published in the Journal of Personality and Social Psychology found that people tend to perceive young black men as larger, stronger and more threatening than white men of the same size. This, the authors say, could place them at risk in situations with police. The research was prompted by recent police shootings of black men in the United States — particularly those involving descriptions of the men that didn't correspond with reality. Take, for example, the case of Dontre Hamilton. In 2014, the unarmed Hamilton was shot 14 times and killed by police in Milwaukee. The officer involved testified that he believed he would have been easily overpowered by Hamilton, whom he described as having a muscular build. But the autopsy report found that Hamilton was just five foot seven and weighed 169 pounds. Looking at the Hamilton case, as well as many other examples, the researchers sought to determine whether people hold psychologically driven preconceptions about black men compared with white men. ©2017 CBC/Radio-Canada.
Laurel Hamers Mistakes can be learning opportunities, but the brain needs time for lessons to sink in. When facing a fast and furious stream of decisions, even the momentary distraction of noting an error can decrease accuracy on the next choice, researchers report in the March 15 Journal of Neuroscience. “We have a brain region that monitors and says ‘you messed up’ so that we can correct our behavior,” says psychologist George Buzzell, now at the University of Maryland in College Park. But sometimes, that monitoring system can backfire, distracting us from the task at hand and causing us to make another error. “There does seem to be a little bit of time for people, after mistakes, where you’re sort of offline,” says Jason Moser, a psychologist at Michigan State University in East Lansing, who wasn’t part of the study. To test people’s response to making mistakes, Buzzell and colleagues at George Mason University in Fairfax, Va., monitored 23 participants’ brain activity while they worked through a challenging task. Concentric circles flashed briefly on a screen, and participants had to respond with one hand if the two circles were the same color and the other hand if the circles were subtly different shades. After making a mistake, participants generally answered the next question correctly if they had a second or so to recover. But when the next challenge came very quickly after an error, as little as 0.2 seconds, accuracy dropped by about 10 percent. Electrical activity recorded from the visual cortex showed that participants paid less attention to the next trial if they had just made a mistake than if they had responded correctly. © Society for Science & the Public 2000 - 2017
By Warren Cornwall The number of years someone spends behind bars can hinge on whether they were clearly aware that they were committing a crime. But how is a judge or jury to know for sure? A new study suggests brain scans can distinguish between hardcore criminal intent and simple reckless behavior, but the approach is far from being ready for the courtroom. The study is unusual because it looks directly at the brains of people while they are engaged in illicit activity, says Liane Young, a Boston College psychologist who was not involved in the work. Earlier research, including work by her, has instead generally looked at the brains of people only observing immoral activity. Researchers led by Read Montague, a neuroscientist at Virginia Tech Carilion Research Institute in Roanoke and at University College London, used functional magnetic resonance imaging (fMRI), which can measure brain activity based on blood flow. They analyzed the brains of 40 people—a mix of men and women mostly in their 20s and 30s—as they went through scenarios that simulated trying to smuggle something through a security checkpoint. In some cases, the people knew for certain they had contraband in a suitcase. In other cases, they chose from between two and five suitcases, with only one containing contraband (and thus they weren’t sure they were carrying contraband). The risk of getting caught also varied based on how many of the 10 security checkpoints had a guard stationed there. The results showed distinctive patterns of brain activity for when the person knew for certain the suitcase had contraband and when they only knew there was a chance of it, the team reports today in the Proceedings of the National Academy of Sciences. But there was an unexpected twist. Those differing brain patterns only showed up when people were first shown how many security checkpoints were guarded, and then offered the suitcases.
In that case, a computer analysis of the fMRI images correctly classified people as knowing or reckless between 71% and 80% of the time. © 2017 American Association for the Advancement of Science
Is there life after death for our brains? It depends. Loretta Norton, a doctoral student at Western University in Canada, was curious, so she and her collaborators asked critically ill patients and their families if they could record brain activity in the half hour before and after life support was removed. They ended up recording four patients with electroencephalography, better known as EEG, which uses small electrodes attached to a person’s head to measure electrical activity in the brain. In three patients, the EEG showed brain activity stopping up to 10 minutes before the person’s heart stopped beating. But in a fourth, the EEG picked up so-called delta wave bursts up to 10 minutes after the person’s heart stopped. Delta waves are associated with deep sleep, also known as slow-wave sleep. In living people, neuroscientists consider slow-wave sleep to be a key process in consolidating memories. The study also raises questions about the exact moment when death occurs. Here’s Neuroskeptic: Another interesting finding was that the actual moment at which the heart stopped was not associated with any abrupt change in the EEG. The authors found no evidence of the large “delta blip” (the so-called “death wave”), an electrical phenomenon that has been observed in rats following decapitation. With only four patients, it’s difficult to draw any sort of broad conclusion from this study. But it does suggest that death may be a gradual process as opposed to a distinct moment in time. © 1996-2017 WGBH Educational Foundation
Link ID: 23348 - Posted: 03.13.2017
By Aylin Woodward Noise is everywhere, but that’s OK. Your brain can still keep track of a conversation in the face of revving motorcycles, noisy cocktail parties or screaming children – in part by predicting what’s coming next and filling in any blanks. New data suggests that these insertions are processed as if the brain had really heard the parts of the word that are missing. “The brain has evolved a way to overcome interruptions that happen in the real world,” says Matthew Leonard at the University of California, San Francisco. We’ve known since the 1970s that the brain can “fill in” inaudible sections of speech, but understanding how it achieves this phenomenon – termed perceptual restoration – has been difficult. To investigate, Leonard’s team played volunteers words that were partially obscured or inaudible to see how their brains responded. The experiment involved people who already had hundreds of electrodes implanted into their brain to monitor their epilepsy. These electrodes detect seizures, but can also be used to record other types of brain activity. The team played the volunteers recordings of a word that could either be “faster” or “factor”, with the middle sound replaced by noise. Data from the electrodes showed that their brains responded as if they had actually heard the missing “s” or “c” sound. © Copyright Reed Business Information Ltd.
By JESS BIDGOOD SALEM, Mass. — A few years ago, Bevil Conway, then a neuroscientist at Wellesley College, got an interesting request: Could he give a lecture to the curators and other staff at the Peabody Essex Museum, the art and culture museum here? So Mr. Conway gathered his slides and started from the beginning, teaching the basics of neuroscience — “How neurons work, how neurons talk to each other, issues of evolutionary biology,” Mr. Conway said — to people who run an institution best known for its venerable collections of maritime and Asian art. It was an early step in what has become a galvanizing mission for the museum’s director, Dan L. Monroe: harnessing the lessons of brain science to make the museum more engaging as attendance is falling around the country. “If one’s committed to creating more meaningful and impactful art experiences, it seems a good idea to have a better idea about how our brains work,” he said. “That was the original line of thinking that started us down this path.” The museum, known as P.E.M., has been looking at neuroscience to incorporate its lessons into exhibitions ever since. In an effort to build shows that engage the brain, it has tried breaking up exhibition spaces into smaller pieces; posting questions and quotes on the wall, instead of relying only on explanatory wall text; and experimenting with elements like smell and sound in visual exhibitions. And those efforts are about to increase. The museum recently received a $130,000 grant from the Barr Foundation, a Boston-based philanthropic organization, to bring a neuroscience researcher on staff, add three neuroscientists to the museum as advisers and publish a guide that will help other museums incorporate neuroscience into their exhibition planning. “A lot of what we’re seeing in museums right now is the interpretation of pieces, or artwork,” said E. San San Wong, a senior program officer with the foundation. 
“What this is looking at is: How do we more actively engage people with art, in multiple senses?” © 2017 The New York Times Company
By Jackie Snow Last month, Facebook announced software that could simply look at a photo and tell, for example, whether it was a picture of a cat or a dog. A related program identifies cancerous skin lesions as well as trained dermatologists can. Both technologies are based on neural networks, sophisticated computer algorithms at the cutting edge of artificial intelligence (AI)—but even their developers aren’t sure exactly how they work. Now, researchers have found a way to "look" at neural networks in action and see how they draw conclusions. Neural networks, also called neural nets, are loosely based on the brain’s use of layers of neurons working together. Like the human brain, they aren't hard-wired to produce a specific result—they “learn” on training sets of data, making and reinforcing connections between multiple inputs. A neural net might have a layer of neurons that look at pixels and a layer that looks at edges, like the outline of a person against a background. After being trained on thousands or millions of data points, a neural network algorithm will come up with its own rules on how to process new data. But it's unclear what the algorithm is using from those data to come to its conclusions. “Neural nets are fascinating mathematical models,” says Wojciech Samek, a researcher at the Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute, in Berlin. “They outperform classical methods in many fields, but are often used in a black box manner.” © 2017 American Association for the Advancement of Science.
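The layered picture sketched above (pixels feeding into intermediate features, which feed into a final classification) can be illustrated with a toy two-layer network. This is a hypothetical, untrained sketch for illustration only; the dimensions, weights, and function names are invented and bear no relation to the systems Facebook or the researchers actually built.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity applied between layers; zeroes out negative activations
    return np.maximum(0.0, x)

def forward(pixels, w1, w2):
    # Layer 1: raw pixel inputs -> intermediate features (e.g., "edges")
    hidden = relu(pixels @ w1)
    # Layer 2: intermediate features -> class scores (e.g., cat vs. dog)
    scores = hidden @ w2
    # Softmax converts scores into probabilities that sum to 1
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

# Toy dimensions: 64 "pixels", 16 hidden units, 2 output classes.
# Training would adjust w1 and w2; here they are random, so the
# output is meaningless except as a demonstration of the data flow.
w1 = rng.standard_normal((64, 16)) * 0.1
w2 = rng.standard_normal((16, 2)) * 0.1
probs = forward(rng.standard_normal(64), w1, w2)
print(probs)  # two class probabilities
```

The "black box" problem the article describes is visible even in this sketch: the output probabilities come from thousands of multiply-adds over learned weights, and nothing in the arithmetic says which input pixels mattered.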
By PHILIP FERNBACH and STEVEN SLOMAN How can so many people believe things that are demonstrably false? The question has taken on new urgency as the Trump administration propagates falsehoods about voter fraud, climate change and crime statistics that large swaths of the population have bought into. But collective delusion is not new, nor is it the sole province of the political right. Plenty of liberals believe, counter to scientific consensus, that G.M.O.s are poisonous, and that vaccines cause autism. The situation is vexing because it seems so easy to solve. The truth is obvious if you bother to look for it, right? This line of thinking leads to explanations of the hoodwinked masses that amount to little more than name calling: “Those people are foolish” or “Those people are monsters.” Such accounts may make us feel good about ourselves, but they are misguided and simplistic: They reflect a misunderstanding of knowledge that focuses too narrowly on what goes on between our ears. Here is the humbler truth: On their own, individuals are not well equipped to separate fact from fiction, and they never will be. Ignorance is our natural state; it is a product of the way the mind works. What really sets human beings apart is not our individual mental capacity. The secret to our success is our ability to jointly pursue complex goals by dividing cognitive labor. Hunting, trade, agriculture, manufacturing — all of our world-altering innovations — were made possible by this ability. Chimpanzees can surpass young children on numerical and spatial reasoning tasks, but they cannot come close on tasks that require collaborating with another individual to achieve a goal. Each of us knows only a little bit, but together we can achieve remarkable feats. © 2017 The New York Times Company
Link ID: 23316 - Posted: 03.06.2017
By Ruth Williams Scientists at New York University’s School of Medicine have probed the deepest layers of the cerebral cortices of mice to record the activities of inhibitory interneurons when the animals are alert and perceptive. The team’s findings reveal that these cells exhibit different activities depending on the cortical layer they occupy, suggesting a level of complexity not previously appreciated. In their paper published in Science today (March 2), the researchers also described the stimulatory and inhibitory inputs that regulate these cells, adding further details to the picture of interneuron operations within the cortical circuitry. “It is an outstanding example of circuit analysis and a real experimental tour de force,” said neuroscientist Massimo Scanziani of the University of California, San Diego, who was not involved in the work. Christopher Moore of Brown University in Providence, Rhode Island, who also did not participate in the research, echoed Scanziani’s sentiments. “It’s just a beautiful paper,” he said. “They do really hard experiments and come up with what seem to be really valid [observations]. It’s a well-done piece of work.” The mammalian cerebral cortex is a melting pot of information, where signals from sensory inputs, emotions, and memories are combined and processed to produce a coherent perception of the world. Excitatory cells are the most abundant type of cortical neurons and are thought to be responsible for the relay and integration of this information, while the rarer interneurons inhibit the excitatory cells to suppress information flow. Interneurons are “a sort of gatekeeper in the cortex,” said Scanziani. © 1986-2017 The Scientist
Link ID: 23314 - Posted: 03.04.2017
By Catherine Caruso When a football player clocks an opponent on the field, it often does not look so bad—until we see it in slow motion. Suddenly, a clean, fair tackle becomes a dirty play, premeditated to maim (as any bar full of indignant fans will loudly confirm). But why? A study published last August in the Proceedings of the National Academy of Sciences USA suggests that slow motion leads us to believe that the people involved were acting with greater intent. Researchers designed experiments based on a place where slow-motion video comes up a lot: the courtroom. They asked subjects to imagine themselves as jurors and watch a video of a convenience store robbery and shooting, either in slow motion or in real time. Those who watched the slow-motion video reported thinking the robber had more time to act and was acting with greater intent. The effect persisted even when the researchers displayed a timer on the screen to emphasize exactly how much time was passing, and it was reduced yet still present when subjects watched a combination of real-time and slow-motion videos of the crime (as they might in an actual courtroom). Participants also ascribed greater intent to a football player ramming an opponent when they viewed the play in slow motion. Werner Helsen, a kinesiologist at the University of Leuven in Belgium, who was not involved in the study, says the findings are in line with his own research on perception and decision making in crime scene interventions and violent soccer plays. One possible explanation for this slo-mo effect stems from our sense of time, which author Benjamin Converse, a psychologist at the University of Virginia, describes as “quite malleable.” He explains that when we watch footage in slow motion, we cannot help but assume that because we as viewers have more time to think through the events as they unfold, the same holds true for the people in the video. © 2017 Scientific American
Link ID: 23311 - Posted: 03.04.2017
By Drake Baer If you’re going to get any sort of science done, an experiment needs a control group: the unaffected, possibly placebo-ed population that didn’t take part in whatever intervention it is you’re trying to study. Back in the earlier days of cognitive neuroscience, the control condition was intuitive enough: Just let the person in the brain scanner lie in repose, awake yet quiet, contemplating the tube they’re inside of. But in 1997, 2001, and beyond, studies kept coming out saying that it wasn’t much of a control at all. When the brain is “at rest,” it’s doing anything but resting. When you don’t give its human anything to do, brain areas related to processing emotions, recalling memory, and thinking about what’s to come become quietly active. These self-referential streams of thought are so pervasive that in a formative paper Marcus Raichle, a Washington University neurologist who helped found the field, declared it to be “the default mode of brain function,” and the constellation of brain areas that carry it out are the default mode network, or DMN. Because when given nothing else to do, the brain defaults to thinking about the person it’s embedded in. Since then, the DMN has been implicated in everything from depression to creativity. People who daydream more tend to have a more active DMN; relatedly, dreaming itself appears to be an amplified version of mind-wandering. In Buddhist traditions, this chattering described by neuroscientists as the default mode is a dragon to be tamed, if not slain. Chögyam Trungpa, who was instrumental in bringing Tibetan Buddhism to the U.S., said the meditation practice is “necessary generally because our thinking pattern, our conceptualized way of conducting our life in the world, is either too manipulative, imposing itself upon the world, or else runs completely wild and uncontrolled,” he wrote in Cutting Through Spiritual Materialism.
“Therefore, our meditation practice must begin with ego’s outermost layer, the discursive thoughts which continually run through our minds, our mental gossip.” © 2017, New York Media LLC.
Sara Reardon Like ivy plants that send runners out searching for something to cling to, the brain’s neurons send out shoots that connect with other neurons throughout the organ. A new digital reconstruction method shows three neurons that branch extensively throughout the brain, including one that wraps around its entire outer layer. The finding may help to explain how the brain creates consciousness. Christof Koch, president of the Allen Institute for Brain Science in Seattle, Washington, explained his group’s new technique at a 15 February meeting of the Brain Research through Advancing Innovative Neurotechnologies initiative in Bethesda, Maryland. He showed how the team traced three neurons from a small, thin sheet of cells called the claustrum — an area that Koch believes acts as the seat of consciousness in mice and humans. Tracing all the branches of a neuron using conventional methods is a massive task. Researchers inject individual cells with a dye, slice the brain into thin sections and then trace the dyed neuron’s path by hand. Very few have been able to trace a neuron through the entire organ. This new method is less invasive and scalable, saving time and effort. Koch and his colleagues engineered a line of mice so that a certain drug activated specific genes in claustrum neurons. When the researchers fed the mice a small amount of the drug, only a handful of neurons received enough of it to switch on these genes. That resulted in production of a green fluorescent protein that spread throughout the entire neuron. The team then took 10,000 cross-sectional images of the mouse brain and used a computer program to create a 3D reconstruction of just three glowing cells. © 2017 Macmillan Publishers Limited
By Alice Klein The proof is in the packaging. Making all cigarette packets look the same reduces the positive feelings smokers associate with specific brands and encourages quitting, Australian research shows. The findings come ahead of the UK and Ireland introducing plain tobacco packaging in May. Australia was the first nation to introduce such legislation in December 2012. Since then, all cigarettes have been sold in plain olive packets with standard fonts and graphic health warnings. The primary goal was to make cigarettes less appealing so that people would not take up smoking in the first place. But an added bonus has been the number of existing smokers who have ditched the habit. Between 2010 and 2013, the proportion of daily smokers in Australia dropped from 15.1 to 12.8 per cent – a record decline. The number of calls to quit helplines also increased by 78 per cent after the policy change. This drop in smoking popularity can be partly explained by a loss of brand affinity, says Hugh Webb at the Australian National University in Canberra. People derive a sense of belonging and identity from brands, he says. For example, you may see yourself as a “Mac person” or a “PC person” and feel connected to other people who choose that brand. “Marketers are extremely savvy about cultivating these brand identities.” © Copyright Reed Business Information Ltd
By Kerry Grens Brain scans of 3,242 volunteers aged four to 63 revealed that those diagnosed with attention deficit hyperactivity disorder (ADHD)—roughly half of the group—had smaller tissue volumes in five brain regions. Because the differences were largest among children, the researchers concluded that ADHD likely involves a delay in brain maturation. The study, published in The Lancet Psychiatry on February 15, is the largest of its kind to date, and the authors hope it will change public perception of the disorder. “I think most scientists in the field already know that the brains of people with ADHD show differences, but I now hope to have shown convincing evidence … that will reach the general public and show that it has [a basis in the brain] just like other psychiatric disorders,” geneticist and coauthor Martine Hoogman of Radboud University in the Netherlands told The Washington Post. “We know that ADHD deals with stigma, but we also know that increasing knowledge will reduce stigma.” Most pronounced among the brain differences between those with and without ADHD was the amygdala, important for emotional processing. “The amygdala is heavily connected to other brain regions. It is a kind of hub for numerous kinds of signaling around salience and significance of events,” Joel Nigg, a psychiatry professor at Oregon Health & Science University School of Medicine who was not part of the study, told CNN. “The bigger story here is that alterations in amygdala have not been widely accepted as part of ADHD, so seeing that effect emerge here is quite interesting.” © 1986-2017 The Scientist
By JOANNA KLEIN The good news is, the human brain is flexible and efficient. This helps us make sense of the world. But the bad news is, the human brain is flexible and efficient. This means the brain can sometimes make mistakes. You can watch this tension play out when the brain tries to connect auditory and visual speech. It’s why we may find a poorly dubbed kung fu movie hard to believe, and why we love believing the gibberish in those Bad Lip Reading Videos on YouTube. “By dubbing speech that is reasonably consistent with the available mouth movements, we can utterly change the meaning of what the original talker was saying,” said John Magnotti, a neuroscientist at Baylor College of Medicine in Texas. “Sometimes we can detect that something is a little off, but the illusion is usually quite compelling.” In a study published Thursday in PLOS Computational Biology, Dr. Magnotti and Michael Beauchamp, also a neuroscientist at Baylor College of Medicine, tried to pin down why our brains are susceptible to these kinds of perceptual mistakes by looking at a well-known speech illusion called the McGurk effect. By comparing mathematical models for how the brain integrates senses important in detecting speech, they found that the brain uses vision, hearing and experience when making sense of speech. If the mouth and voice are likely to come from the same person, the brain combines them; otherwise, they are kept separate. “You may think that when you’re talking to someone you’re just listening to their voice,” said Dr. Beauchamp, who led the study. “But it turns out that what their face is doing is actually profoundly influencing what you are perceiving.” © 2017 The New York Times Company
By Sam Wong Can a mouse be mindful? Researchers believe they have created the world’s first mouse model of meditation by using light to trigger brain activity similar to what meditation induces. The mice involved appeared less anxious, too. Human experiments show that meditation reduces anxiety, lowers levels of stress hormones and improves attention and cognition. In one study of the effects of two to four weeks of meditation training, Michael Posner of the University of Oregon and colleagues discovered changes in the white matter in volunteers’ brains, related to the efficiency of communication between different brain regions. The changes, picked up in scans, were particularly noticeable between the anterior cingulate cortex (ACC) and other areas. Since the ACC regulates activity in the amygdala, a region that controls fearful responses, Posner’s team concluded that the changes in white matter could be responsible for meditation’s effects on anxiety. The mystery was how meditation could alter the white matter in this way. Posner’s team figured that it was related to changes in theta brainwaves, measured using electrodes on the scalp. Meditation increases theta wave activity, even when people are no longer meditating. To test the theory, the team used optogenetics – they genetically engineered certain cells to be switched on by light. In this way, they were able to use pulses of light on mice to stimulate theta brainwave-like activity in the ACC. © Copyright Reed Business Information Ltd.
Link ID: 23260 - Posted: 02.21.2017
By Matthew Hutson, Veronique Greenwood For some things, such as deciding whether to take a new job or nab your opponent's rook in chess, you're better off thinking long and hard. For others, such as judging your interviewer's or opponent's emotional reactions, first instincts are best—or so traditional wisdom suggests. But new research finds that careful reflection actually makes us better at assessing others' feelings. The findings could improve how we deal with bosses, spouses, friends and, especially, strangers. We would have trouble getting through the day or even a conversation if we couldn't tell how other people were feeling. And yet this ability, called empathic accuracy, eludes introspection. “We don't think too hard about the exact processes we engage in when we do it,” says Christine Ma-Kellams, a psychologist at the University of La Verne in California, “and we don't necessarily know how accurate we are.” Recently Ma-Kellams and Jennifer Lerner of Harvard University conducted four studies, all published in 2016. In one experiment, participants imagined coaching an employee for a particular job. When told to help the employee get better at reading others' emotions, most people recommended thinking “in an intuitive and instinctive way” as opposed to “in an analytic and systematic way.” When told to make employees worse at the task, the participants recommended the opposite. And yet later experiments suggested this coaching was off base. For instance, in another experiment, professionals in an executive-education program took a “cognitive reflection test” to measure how much they relied on intuitive versus systematic thinking. The most reflective thinkers were most accurate at interpreting their partners' moods during mock interviews. Systematic thinkers also outperformed intuiters at guessing the emotions expressed in photographs of eyes. © 2017 Scientific American