Chapter 14. Attention and Consciousness
A new study by investigators at Brigham and Women's Hospital, in collaboration with researchers at the Universities of York and Leeds in the UK and the MD Anderson Cancer Center in Texas, puts to the test anecdotes about experienced radiologists' ability to sense when a mammogram is abnormal. In a paper published August 29 in the Proceedings of the National Academy of Sciences, visual attention researchers showed radiologists mammograms for half a second and found that they could identify abnormal mammograms at better than chance levels. They further tested this ability through a series of experiments to explore what signal may alert radiologists to the presence of a possible abnormality, in the hopes of using these insights to improve breast cancer screening and early detection.

"Radiologists can have 'hunches' after a first look at a mammogram. We found that these hunches are based on something real in the images. It's really striking that in the blink of an eye, an expert can pick up on something about that mammogram that indicates abnormality," said Jeremy Wolfe, PhD, senior author of the study and director of the Visual Attention Laboratory at BWH. "Not only that, but they can detect something abnormal in the other breast, the breast that does not contain a lesion."

In the clinic, radiologists carefully evaluate mammograms and may use computer-automated systems to help screen the images. Although they would never assess an image in half a second in the clinic, the ability of experts to extract the "gist" of an image quickly suggests that there may be detectable signs of breast cancer that radiologists are rapidly picking up. Copyright 2016 ScienceDaily
By STEVE SILBERMAN

In the late 1930s, Charles Bradley, the director of a home for "troublesome" children in Rhode Island, had a problem. The field of neuroscience was still in its infancy, and one of the few techniques available to allow psychiatrists like Bradley to ponder the role of the brain in emotional disorders was a procedure that required replacing a volume of cerebrospinal fluid in the patient's skull with air. This painstaking process allowed any irregularities to stand out clearly in X-ray images, but many patients suffered excruciating headaches that lasted for weeks afterward.

Meanwhile, a pharmaceutical company called Smith, Kline & French was facing a different sort of problem. The firm had recently acquired the rights to sell a powerful stimulant then called "benzedrine sulfate" and was trying to create a market for it. Toward that end, the company made quantities of the drug available at no cost to doctors who volunteered to run studies on it.

Bradley was a firm believer that struggling children needed more than a handful of pills to get better; they also needed psychosocial therapy and the calming and supportive environment that he provided at the home. But he took up the company's offer, hoping that the drug might eliminate his patients' headaches. It did not. But the Benzedrine did have an effect that was right in line with Smith, Kline & French's aspirations for its new product: the drug seemed to boost the children's eagerness to learn in the classroom while making them more amenable to following the rules. It also seemed to calm their mood swings, allowing them to become, in the words of their therapists, more "attentive" and "serious," able to complete their schoolwork and behave. Bradley was amazed that Benzedrine, a forerunner of Ritalin and Adderall, was such a great normalizer, turning typically hard-to-manage kids into models of compliance and decorum.
But even after marveling at the effects of the drug, he maintained that medication should be considered for children only in addition to other forms of therapy. © 2016 The New York Times Company
By Usha Lee McFarling @ushamcfarling LOS ANGELES — A team of physicians and neuroscientists on Wednesday reported the successful use of ultrasound waves to “jump start” the brain of a 25-year-old man recovering from coma — and plan to launch a much broader test of the technique, in hopes of finding a way to help at least some of the tens of thousands of patients in vegetative states. The team, based at the University of California, Los Angeles, cautions that the evidence so far is thin: They have no way to know for sure whether the ultrasound stimulation made the difference for their young patient, or whether he spontaneously recovered by coincidence shortly after the therapy. But the region of the brain they targeted with the ultrasound — the thalamus — has previously been shown to be important in restoring consciousness. In 2007, a 38-year-old man who had been minimally conscious for six years regained some functions after electrodes were implanted in his brain to stimulate the thalamus. The ultrasound technique is a “good idea” that merits further study, said Dr. Nicholas Schiff, a pioneer in the field of using brain stimulation to restore consciousness who conducted the 2007 study. “It’s intriguing and it’s an interesting possibility,” said Schiff, a neuroscientist at Weill Cornell Medicine. The UCLA procedure used an experimental device, about the size of a teacup saucer, to focus ultrasonic waves on the thalamus, two walnut-sized bulbs in the center of the brain that serve as a critical hub for information flow and help regulate consciousness and sleep.
Laura Sanders Brain scientists Eric Jonas and Konrad Kording had grown skeptical. They weren't convinced that the sophisticated, big data experiments of neuroscience were actually accomplishing anything. So they devised a devilish experiment. Instead of studying the brain of a person, or a mouse, or even a lowly worm, the two used advanced neuroscience methods to scrutinize the inner workings of another information processor — a computer chip. The unorthodox experimental subject, the MOS 6502, is the same chip that dazzled early tech junkies and kids alike in the 1980s by powering Donkey Kong, Space Invaders and Pitfall, as well as the Apple I and II computers. Of course, these experiments were rigged. The scientists already knew everything about how the 6502 works. "The beauty of the microprocessor is that unlike anything in biology, we understand it on every level," says Jonas, of the University of California, Berkeley. Using a simulation of the MOS 6502, Jonas and Kording, of Northwestern University in Chicago, studied the behavior of electricity-moving transistors, along with aspects of the chip's connections and its output, to reveal how it handles information. Since they already knew what the outcomes should be, they were actually testing the methods. By the end of their experiments, Jonas and Kording had discovered almost nothing. © Society for Science & the Public 2000 - 2016
By Jessica Hamzelou Feel like you've read this before? Most of us have experienced the eerie familiarity of déjà vu, and now the first brain scans of this phenomenon have revealed why – it's a sign of our brain checking its memory. Déjà vu was thought to be caused by the brain making false memories, but research by Akira O'Connor at the University of St Andrews, UK, and his team now suggests this is wrong. Exactly how déjà vu works has long been a mystery, partly because its fleeting and unpredictable nature makes it difficult to study. To get around this, O'Connor and his colleagues developed a way to trigger the sensation of déjà vu in the lab. The team's technique uses a standard method to trigger false memories. It involves telling a person a list of related words – such as bed, pillow, night, dream – but not the key word linking them together, in this case, sleep. When the person is later quizzed on the words they've heard, they tend to believe they have also heard "sleep" – a false memory. To create the feeling of déjà vu, O'Connor's team first asked people if they had heard any words beginning with the letter "s". The volunteers replied that they hadn't. This meant that when they were later asked if they had heard the word sleep, they were able to remember that they couldn't have, but at the same time, the word felt familiar. "They report having this strange experience of déjà vu," says O'Connor. © Copyright Reed Business Information Ltd.
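The word-list procedure described above is a variant of the classic false-memory (DRM) method, and its logic can be sketched in a few lines. This is a minimal toy sketch: the word list, the lure word, and the function names are illustrative choices, not taken from the study itself.

```python
# Toy sketch of the false-memory word-list procedure described above.
# Participants study words related to an unpresented "lure" word; the
# pre-quiz letter check lets them correctly reject the lure later, even
# though it still feels familiar (the lab-induced déjà vu).

STUDY_LIST = ["bed", "pillow", "night", "dream"]  # related words presented
LURE = "sleep"                                    # linking word, never presented

def was_presented(word: str) -> bool:
    """Ground truth: was this word actually in the study list?"""
    return word in STUDY_LIST

def any_word_starts_with(letter: str) -> bool:
    """The pre-quiz check: did any studied word begin with this letter?"""
    return any(w.startswith(letter) for w in STUDY_LIST)

# Participants first (correctly) report hearing no "s" words...
print(any_word_starts_with("s"))   # False
# ...so when later probed with "sleep" they know it cannot have been
# presented, yet the word still feels familiar.
print(was_presented(LURE))         # False
print(was_presented("pillow"))     # True
```

The key design point mirrored here is the letter check: without it, participants tend to falsely remember hearing the lure; with it, the memory system's rejection of the lure collides with its felt familiarity, producing the déjà vu report.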
By MIKE SACKS You’ve seen me. I know you have. I’m the guy wearing gloves on the subway in October. Or even into April. Perhaps I’m wearing just one glove, allowing my naked hand to turn the pages of a book. No big deal. Just another one-gloved commuter, heading home. If it’s crowded, you may have noticed me doing my best to “surf,” sans contact, until the car comes to a stop, in which case I may knock into a fellow passenger. Aboveground you may have seen me acting the gentleman, opening doors for others with a special paper towel I carry in my front left pocket for just such a momentous occasion. No? How about that guy walking quickly ahead of you, the one impishly avoiding sidewalk cracks? Or perhaps you’ve noticed a stranger who turns and makes eye contact with you for seemingly no reason. You may have asked, “You got a problem?” Oh, I definitely have a problem. But it has nothing to do with you, sir or madam. (And, yes, even in my thoughts I refer to you as “sir” and “madam.”) The problem here is what multiple doctors have diagnosed as obsessive-compulsive disorder. You may refer to it by its kicky abbreviation, O.C.D. I prefer to call it Da Beast. Da Beast is a creature I have lived with since I was 11, a typical age for O.C.D. to snarl into one’s life without invitation or warning. According to the International O.C.D. Foundation, roughly one in 100 adults suffers from the disorder. Each of us has his or her own obsessive thoughts and fears to contend with. My particular beast of burden is a fear of germs and sickness. It’s a popular one, perhaps the most common. © 2016 The New York Times Company
By ABBY GOODNOUGH TUSCALOOSA, Ala. — Roslyn Lewis was at work at a dollar store here in Tuscaloosa, pushing a heavy cart of dog food, when something popped in her back: an explosion of pain. At the emergency room the next day, doctors gave her Motrin and sent her home. Her employer paid for a nerve block that helped temporarily, numbing her lower back, but she could not afford more injections or physical therapy. A decade later, the pain radiates to her right knee and remains largely unaddressed, so deep and searing that on a recent day she sat stiffly on her couch, her curtains drawn, for hours. The experience of African-Americans, like Ms. Lewis, and other minorities illustrates a problem as persistent as it is complex: Minorities tend to receive less treatment for pain than whites, and suffer more disability as a result. While an epidemic of prescription opioid abuse has swept across the United States, African-Americans and Hispanics have been affected at much lower rates than whites. Researchers say minority patients use fewer opioids, and they offer a thicket of possible explanations, including a lack of insurance coverage and a greater reluctance among members of minority groups to take opioid painkillers even if they are prescribed. But the researchers have also found evidence of racial bias and stereotyping in recognizing and treating pain among minorities, particularly black patients. “We’ve done a good job documenting that these disparities exist,” said Salimah Meghani, a pain researcher at the University of Pennsylvania. “We have not done a good job doing something about them.” Dr. Meghani’s 2012 analysis of 20 years of published research found that blacks were 34 percent less likely than whites to be prescribed opioids for conditions such as backaches, abdominal pain and migraines, and 14 percent less likely to receive opioids for pain from traumatic injuries or surgery. © 2016 The New York Times Company
By EUGENE M. CARUSO, ZACHARY C. BURNS and BENJAMIN A. CONVERSE Watching slow-motion footage of an event can certainly improve our judgment of what happened. But can it also impair judgment? This question arose in the 2009 murder trial of a man named John Lewis, who killed a police officer during an armed robbery of a Dunkin’ Donuts in Philadelphia. Mr. Lewis pleaded guilty; the only question for the jury was whether the murder resulted from a “willful, deliberate and premeditated” intent to kill or — as Mr. Lewis argued — from a spontaneous, panicked reaction to seeing the officer enter the store unexpectedly. The key piece of evidence was a surveillance video of the shooting, which the jury saw both in real time and in slow motion. The jury found that Mr. Lewis had acted with premeditation, and he was sentenced to death. Mr. Lewis appealed the decision, arguing that the slow-motion video was prejudicial. Specifically, he claimed that watching the video in slow motion artificially stretched the relevant time period and created a “false impression of premeditation.” Did it? We recently conducted a series of experiments whose results are strikingly consistent with that claim. Our studies, published this week in the Proceedings of the National Academy of Sciences, show that seeing replays of an action in slow motion leads viewers to believe that the actor had more time to think before acting than he actually did. The result is that slow motion makes actions seem more intentional, more premeditated. In one of our studies, participants watched surveillance video of a fatal shooting that occurred outside a convenience store during an armed robbery. We gave them a set of instructions similar to those given to the jurors in Mr. Lewis’s case, asking them to decide whether the crime was premeditated or not. We assigned half our participants to watch the video in slow motion and the other half to watch it at regular speed. © 2016 The New York Times Company
By BENEDICT CAREY Solving a hairy math problem might send a shudder of exultation along your spinal cord. But scientists have historically struggled to deconstruct the exact mental alchemy that occurs when the brain successfully leaps the gap from “Say what?” to “Aha!” Now, using an innovative combination of brain-imaging analyses, researchers have captured four fleeting stages of creative thinking in math. In a paper published in Psychological Science, a team led by John R. Anderson, a professor of psychology and computer science at Carnegie Mellon University, demonstrated a method for reconstructing how the brain moves from understanding a problem to solving it, including the time the brain spends in each stage. The imaging analysis found four stages in all: encoding (downloading), planning (strategizing), solving (performing the math), and responding (typing out an answer). “I’m very happy with the way the study worked out, and I think this precision is about the limit of what we can do” with the brain imaging tools available, said Dr. Anderson, who wrote the report with Aryn A. Pyke and Jon M. Fincham, both also at Carnegie Mellon. To capture these quicksilver mental operations, the team first taught 80 men and women how to interpret a set of math symbols and equations they had not seen before. The underlying math itself wasn’t difficult, mostly addition and subtraction, but manipulating the newly learned symbols required some thinking. The research team could vary the problems to burden specific stages of the thinking process — some were hard to encode, for instance, while others extended the length of the planning stage. The scientists used two techniques of M.R.I. data analysis to sort through what the participants’ brains were doing. One technique tracked the neural firing patterns during the solving of each problem; the other identified significant shifts from one kind of mental state to another. 
The subjects solved 88 problems each, and the research team analyzed the imaging data from those solved successfully. © 2016 The New York Times Company
By ERICA GOODE You are getting sleepy. Very sleepy. You will forget everything you read in this article. Hypnosis has become a common medical tool, used to reduce pain, help people stop smoking and cure them of phobias. But scientists have long argued about whether the hypnotic “trance” is a separate neurophysiological state or simply a product of a hypnotized person’s expectations. A study published on Thursday by Stanford researchers offers some evidence for the first explanation, finding that some parts of the brain function differently under hypnosis than during normal consciousness. The study was conducted with functional magnetic resonance imaging, a scanning method that measures blood flow in the brain. It found changes in activity in brain areas that are thought to be involved in focused attention, the monitoring and control of the body’s functioning, and the awareness and evaluation of a person’s internal and external environments. “I think we have pretty definitive evidence here that the brain is working differently when a person is in hypnosis,” said Dr. David Spiegel, a professor of psychiatry and behavioral sciences at Stanford who has studied the effectiveness of hypnosis. Functional imaging is a blunt instrument and the findings can be difficult to interpret, especially when a study is looking at activity levels in many brain areas. Still, Dr. Spiegel said, the findings might help explain the intense absorption, lack of self-consciousness and suggestibility that characterize the hypnotic state. © 2016 The New York Times Company
By ANNA WEXLER EARLIER this month, in the journal Annals of Neurology, four neuroscientists published an open letter to practitioners of do-it-yourself brain stimulation. These are people who stimulate their own brains with low levels of electricity, largely for purposes like improved memory or learning ability. The letter, which was signed by 39 other researchers, outlined what is known and unknown about the safety of such noninvasive brain stimulation, and asked users to give careful consideration to the risks. For the last three years, I have been studying D.I.Y. brain stimulators. Their conflict with neuroscientists offers a fascinating case study of what happens when experimental tools normally kept behind the closed doors of academia — in this case, transcranial direct current stimulation — are appropriated for use outside them. Neuroscientists began experimenting in earnest with transcranial direct current stimulation about 15 years ago. In such stimulation, electric current is administered at levels that are hundreds of times less than those used in electroconvulsive therapy. To date, more than 1,000 peer-reviewed studies of the technique have been published. Studies have suggested, among other things, that the stimulation may be beneficial for treating problems like depression and chronic pain as well as enhancing cognition and learning in healthy individuals. The device scientists use for stimulation is essentially a nine-volt battery attached to two wires that are connected to electrodes placed at various spots on the head. A crude version can be constructed with just a bit of electrical know-how. Consequently, as reports of the effects of the technique began to appear in scientific journals and in newspapers, people began to build their own devices at home. By late 2011 and early 2012, diagrams, schematics and videos began to appear online. © 2016 The New York Times Company
Rachel Ehrenberg The brain doesn't really go out like a light when anesthesia kicks in. Nor does neural activity gradually dim, a new study in monkeys reveals. Rather, intermittent flickers of brain activity appear as the effects of an anesthetic take hold. Some synchronized networks of brain activity fall out of step as the monkeys gradually drift from wakefulness, the study showed. But those networks resynchronized when deep unconsciousness set in, researchers reported in the July 20 Journal of Neuroscience. That the two networks behave so differently during the drifting-off stage is surprising, says study coauthor Yumiko Ishizawa of Harvard Medical School and Massachusetts General Hospital. It isn't clear what exactly is going on, she says, except that the anesthetic's effects are a lot more complex than previously thought. Most studies examining how anesthesia works use electroencephalograms, or EEGs, which record brain activity using electrodes on the scalp. The new study offers unprecedented surveillance by eavesdropping via electrodes implanted inside macaque monkeys' brains. This new view provides clues to how the brain loses and gains consciousness. "It's a very detailed description of something we know very little about," says cognitive neuroscientist Tristan Bekinschtein of the University of Cambridge, who was not involved with the work. Although the study is elegant, it isn't clear what to make of the findings, he says. "These are early days." © Society for Science & the Public 2000 - 2016.
James M. Broadway “Where did the time go?” middle-aged and older adults often remark. Many of us feel that time passes more quickly as we age, a perception that can lead to regrets. According to psychologist and BBC columnist Claudia Hammond, “the sensation that time speeds up as you get older is one of the biggest mysteries of the experience of time.” Fortunately, our attempts to unravel this mystery have yielded some intriguing findings. In 2005, for instance, psychologists Marc Wittmann and Sandra Lenhoff, both then at Ludwig Maximilian University of Munich, surveyed 499 participants, ranging in age from 14 to 94 years, about the pace at which they felt time moving—from “very slowly” to “very fast.” For shorter durations—a week, a month, even a year—the subjects' perception of time did not appear to increase with age. Most participants felt that the clock ticked by quickly. But for longer durations, such as a decade, a pattern emerged: older people tended to perceive time as moving faster. When asked to reflect on their lives, the participants older than 40 felt that time elapsed slowly in their childhood but then accelerated steadily through their teenage years into early adulthood. There are good reasons why older people may feel that way. When it comes to how we perceive time, humans can estimate the length of an event from two very different perspectives: a prospective vantage, while an event is still occurring, or a retrospective one, after it has ended. In addition, our experience of time varies with whatever we are doing and how we feel about it. In fact, time does fly when we are having fun. Engaging in a novel exploit makes time appear to pass more quickly in the moment. But if we remember that activity later on, it will seem to have lasted longer than more mundane experiences. © 2016 Scientific American,
Michael Egnor The most intractable question in modern neuroscience and philosophy of the mind is often phrased "What is consciousness?" The problem has been summed up nicely by philosopher David Chalmers as what he calls the Hard Problem of consciousness: How is it that we are subjects, and not just objects? Chalmers contrasts this hard question with what he calls the Easy Problem of consciousness: What are the neurobiological substrates underlying such things as wakefulness, alertness, attention, and arousal? Chalmers doesn't mean, of course, that the neurobiology of arousal is easy. He merely means to show that even if we can understand arousal from a neurobiological standpoint, we haven't yet solved the hard problem: the problem of subjective experience. Why am I an I, and not an it? Chalmers's point is a good one, and I think that it has a rather straightforward solution. First, some historical background is necessary. "What is consciousness?" is a modern question. It wasn't asked before the 17th century, because no one before Descartes thought that the mind was particularly mysterious. The problem of consciousness was created by moderns. The scholastic philosophers, following Aristotle and Aquinas, understood the soul as the animating principle of the body. In a human being, the powers of the soul -- intellect, will, memory, perception, appetite, and such -- were no more mysterious than its other powers, such as respiration and circulation. Of course, biology in the Middle Ages wasn't as advanced as it is today, so there was much they didn't understand about human physiology, but in principle the mind was just another aspect of human biology, not inherently mysterious. In modern parlance, the scholastics saw the mind as the Easy Problem, no more intractable than understanding how breathing or circulation work.
Jon Hamilton Letting mice watch Orson Welles movies may help scientists explain human consciousness. At least that's one premise of the Allen Brain Observatory, which launched Wednesday and lets anyone with an Internet connection study a mouse brain as it responds to visual information. "Think of it as a telescope, but a telescope that is looking at the brain," says Christof Koch, chief scientific officer of the Allen Institute for Brain Science, which created the observatory. The hope is that thousands of scientists and would-be scientists will look through that telescope and help solve one of the great mysteries of human consciousness, Koch says. "You look out at the world and there's a picture in your head," he says. "You see faces, you see your wife, you see something on TV." But how does the brain create those images from the chaotic stream of visual information it receives? "That's the mystery," Koch says. There's no easy way to study a person's brain as it makes sense of visual information. So the observatory has been gathering huge amounts of data on mice, which have a visual system that is very similar to the one found in people. The data come from mice that run on a wheel as still images and movies appear on a screen in front of them. For the mice, it's a lot like watching TV on a treadmill at the gym. But these mice have been genetically altered in a way that allows a computer to monitor the activity of about 18,000 neurons as they respond to different images. "We can look at those neurons and from that decode literally what goes through the mind of the mouse," Koch says. Those neurons were pretty active when the mice watched the first few minutes of Orson Welles' film noir classic Touch of Evil. The film is good for mouse experiments because "It's black and white and it has nice contrasts and it has a long shot without having many interruptions," Koch says. © 2016 npr
Not much is definitively proven about consciousness, the awareness of one's existence and surroundings, other than that it's somehow linked to the brain. But theories as to how, exactly, grey matter generates consciousness are challenged when a fully conscious man is found to be missing most of his brain. Several years ago, a 44-year-old Frenchman went to the hospital complaining of mild weakness in his left leg. It was discovered then that his skull was filled largely by fluid, leaving just a thin perimeter of actual brain tissue. And yet the man was a married father of two and a civil servant with an IQ of 75, below average in his intelligence but not mentally disabled. Doctors believe the man's brain slowly eroded over 30 years due to a buildup of fluid in the brain's ventricles, a condition known as "hydrocephalus." His hydrocephalus was treated with a shunt, which drains the fluid into the bloodstream, when he was an infant. But it was removed when he was 14 years old. Over the following decades, the fluid accumulated, leaving less and less space for his brain. While this may seem medically miraculous, it also poses a major challenge for cognitive psychologists, says Axel Cleeremans of the Université Libre de Bruxelles.
By SUNITA SAH A POPULAR remedy for a conflict of interest is disclosure — informing the buyer (or the patient, etc.) of the potential bias of the seller (or the doctor, etc.). Disclosure is supposed to act as a warning, alerting consumers to their adviser’s stake in the matter so they can process the advice accordingly. But as several recent studies I conducted show, there is an underappreciated problem with disclosure: It often has the opposite of its intended effect, not only increasing bias in advisers but also making advisees more likely to follow biased advice. When I worked as a physician, I witnessed how bias could arise from numerous sources: gifts or sponsorships from the pharmaceutical industry; compensation for performing particular procedures; viewing our own specialties as delivering more effective treatments than others’ specialties. Although most physicians, myself included, tend to believe that we are invulnerable to bias, thus making disclosures unnecessary, regulators insist on them, assuming that they work effectively. To some extent, they do work. Disclosing a conflict of interest — for example, a financial adviser’s commission or a physician’s referral fee for enrolling patients into clinical trials — often reduces trust in the advice. But my research has found that people are still more likely to follow this advice because the disclosure creates increased pressure to follow the adviser’s recommendation. It turns out that people don’t want to signal distrust to their adviser or insinuate that the adviser is biased, and they also feel pressure to help satisfy their adviser’s self-interest. Instead of functioning as a warning, disclosure can become a burden on advisees, increasing pressure to take advice they now trust less. © 2016 The New York Times Company
By Emily Rosenzweig Life deals most of us a consistent stream of ego blows, be they failures at work, social slights, or unrequited love. Social psychology has provided decades of insight into just how adept we are at defending ourselves against these psychic threats. We discount negative feedback, compare ourselves favorably to those who are worse off than us, attribute our failures to others, place undue value on our own strengths, and devalue opportunities denied to us, all in service of protecting and restoring our sense of self-worth. As a group, this array of motivated mental processes that support mood repair and ego defense has been called the "psychological immune system." Particularly striking to social psychologists is our ability to remain blind to our use of these motivated strategies, even when it is apparent to others just how biased we are. However, there are times when we either cannot remain blind to our own psychological immune processes, or when we may find ourselves consciously wanting to use them expressly for the purpose of restoring our ego or our mood. What then? Can we believe a conclusion we reach even when we know that we arrived at it in a biased way? For example, imagine you've recently gone through a breakup and want to get over your ex. You decide to make a mental list of all of their character flaws in an effort to feel better about the relationship ending. A number of prominent social psychologists have suggested you're out of luck: knowing that you're focusing only on your ex's worst qualities prevents you from believing the conclusion you've come to that you're better off without him or her. In essence, they argue that we must remain blind to our own biased mental processes in order to reap their ego-restoring benefits. And in many ways this closely echoes the position that philosophers like Alfred Mele have taken about the possibility of agentic self-deception. © 2016 Scientific American
George Johnson A paper in The British Medical Journal in December reported that cognitive behavioral therapy — a means of coaxing people into changing the way they think — is as effective as Prozac or Zoloft in treating major depression. In ways no one understands, talk therapy reaches down into the biological plumbing and affects the flow of neurotransmitters in the brain. Other studies have found similar results for “mindfulness” — Buddhist-inspired meditation in which one’s thoughts are allowed to drift gently through the head like clouds reflected in still mountain water. Findings like these have become so commonplace that it’s easy to forget their strange implications. Depression can be treated in two radically different ways: by altering the brain with chemicals, or by altering the mind by talking to a therapist. But we still can’t explain how mind arises from matter or how, in turn, mind acts on the brain. This longstanding conundrum — the mind-body problem — was succinctly described by the philosopher David Chalmers at a recent symposium at The New York Academy of Sciences. “The scientific and philosophical consensus is that there is no nonphysical soul or ego, or at least no evidence for that,” he said. Descartes’s notion of dualism — mind and body as separate things — has long receded from science. The challenge now is to explain how the inner world of consciousness arises from the flesh of the brain. © 2016 The New York Times Company
Mo Costandi There's much more to visual perception than meets the eye. What we see is not merely a matter of patterns of light falling on the retina, but rather is heavily influenced by so-called 'top-down' brain mechanisms, which can alter the visual information, and other types of sensory information, that enters the brain before it even reaches our conscious awareness. A striking example of this is a phenomenon called inattentional blindness, whereby narrowly focusing one's attention on one visual stimulus makes us oblivious to other stimuli, even though they otherwise may be glaringly obvious, as demonstrated by the famous 'Invisible Gorilla' study. Now researchers say they have discovered another extreme form of blindness, in which people fail to notice an unexpected image, even when it is shown by itself and staring them in the face. Marjan Persuh and Robert Melara of the City University of New York designed two experiments to investigate whether people's prior expectations could block their awareness of meaningful and important visual stimuli. In the first, they recruited 20 student volunteers and asked them to perform a visual discrimination task. They were shown a series of images, consisting of successive pairs of faces, each of which was presented for half a second on a computer screen, and asked to indicate whether each pair showed faces of people of the same or different sex. Towards the end of each session, the participants were presented with a simple shape, which flashed onto the screen for one tenth of a second. They were then asked if they had seen anything new and, after replying, were told that a shape had indeed appeared, and asked to select the correct one from a display of four. This shape recognition task was then repeated in one final control trial. © 2016 Guardian News and Media Limited