Chapter 18. Attention and Higher Cognition
By David Shultz We still may not know what causes consciousness in humans, but scientists are at least learning how to detect its presence. A new application of a common clinical test, the positron emission tomography (PET) scan, seems to be able to differentiate between minimally conscious brains and those in a vegetative state. The work could help doctors figure out which brain trauma patients are the most likely to recover—and even shed light on the nature of consciousness. “This is really cool what these guys did here,” says neuroscientist Nicholas Schiff at Weill Cornell Medical College, who was not involved in the study. “We’re going to make great use of it.” PET scans work by introducing a small amount of radionuclides into the body. These radioactive compounds act as a tracer and naturally emit subatomic particles called positrons over time, and the gamma rays indirectly produced by this process can be detected by imaging equipment. The most common PET scan uses fluorodeoxyglucose (FDG) as the tracer in order to show how glucose concentrations change in tissue over time—a proxy for metabolic activity. Compared with other imaging techniques, PET scans are relatively cheap and easy to perform, and are routinely used to survey for cancer, heart problems, and other diseases. In the new study, researchers used FDG-PET scans to analyze the resting cerebral metabolic rate—the amount of energy being used by the tissue—of 131 patients with a so-called disorder of consciousness and 28 healthy controls. Disorders of consciousness span a spectrum, from full-blown coma to a minimally conscious state in which patients may experience brief periods where they can communicate and follow instructions. Between these two extremes, patients may be said to be in a vegetative state or exhibit unresponsive wakefulness, characterized by open eyes and basic reflexes, but no signs of awareness.
Most disorders of consciousness result from head trauma, and where someone falls on the consciousness continuum is typically determined by the severity of the injury. © 2016 American Association for the Advancement of Science
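The excerpt's point that a PET tracer "naturally emit[s] positrons over time" comes down to simple exponential decay: the fluorine-18 label in FDG has a half-life of roughly 110 minutes, so the injected activity falls by half about every 110 minutes. A minimal sketch of that arithmetic (the 370 MBq starting dose is only an illustrative figure, not a value from the study):

```python
import math

F18_HALF_LIFE_MIN = 109.8  # fluorine-18 half-life in minutes (physical constant)

def remaining_activity(initial_mbq: float, minutes: float) -> float:
    """Tracer activity left after `minutes` of decay: A(t) = A0 * 2^(-t / t_half)."""
    return initial_mbq * 2 ** (-minutes / F18_HALF_LIFE_MIN)

# After one half-life, half of an illustrative 370 MBq dose remains.
print(round(remaining_activity(370.0, 109.8), 1))  # → 185.0
```

This is why FDG-PET scanning has to happen within a few hours of injection: after five or six half-lives, almost no detectable activity is left.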
By Amina Zafar. Tragically Hip frontman Gord Downie's resilience and openness about his terminal glioblastoma and his plans to tour could help to reduce stigma and improve awareness, some cancer experts say. Tuesday's news revealed that the singer has an aggressive form of cancer that originated in his brain. An MRI scan last week showed the tumour has responded well to surgery, radiation and chemotherapy, doctors said. "I was quickly impressed by Gord's resilience and courage," Downie's neuro-oncologist, Dr. James Perry of Sunnybrook Health Sciences Centre in Toronto, told a news conference. Perry said it's daunting for many of his patients to reveal the diagnosis to their family, children and co-workers. "The news today, while sad, also creates for us in brain tumour research an unprecedented opportunity to create awareness and to create an opportunity for fundraising for research that's desperately needed to improve the odds for all people with this disease," Perry said. "Gord's courage in coming forward with his diagnosis will be a beacon for all patients with glioblastoma in Canada. They will see a survivor continuing with his craft despite its many challenges." ©2016 CBC/Radio-Canada.
Link ID: 22251 - Posted: 05.26.2016
Dean Burnett A recent report by the National Obesity Forum stated that official advice about low-fat diets is wrong. As ever, there’s now heated debate over how valid/accurate this claim is. But let’s step back a moment and ask a revealing question: why do official government dietary guidelines even exist? Why are they necessary? From an entirely logical position, eating food fulfils several requirements. It provides the energy to do things, helps us build up stores of energy for when needed, and provides the materials required to build and maintain our bodies. Therefore, the human body requires a regular intake of nutrients, vitamins and calories to maintain day-to-day functioning. As a result, the human body has developed an intricate digestive system to monitor and regulate our food intake. The digestive system is quite cool. It has a sophisticated nervous system that can operate pretty much independently, so is often regarded as separate from the main one, leading some to describe it as a “second brain”, there to encourage, monitor and process the consumption and digestion of food. It also utilises hormones, namely leptin and ghrelin, which decrease and increase appetite respectively depending on how much food the body has/needs. It’s a painstakingly complex and precise system that’s evolved over aeons to make sure we eat what and when we need to, and get the most out of our food. However, at some point the human brain got involved, then everything went to hell. This is why we can now be presented with foodstuffs we’re repeatedly told are unhealthy, even dangerous, and say “Thanks. Extra chilli sauce on mine, please”.
By Lisa Rapaport (Reuters Health) - Attention deficit hyperactivity disorder (ADHD), usually diagnosed in children, may show up for the first time in adulthood, two recent studies suggest. And not only can ADHD appear for the first time after childhood, but the symptoms for adult-onset ADHD may be different from symptoms experienced by kids, the researchers found. “Although the nature of symptoms differs somewhat between children and adults, all age groups show impairments in multiple domains – school, family and friendships for kids and school, occupation, marriage and driving for adults,” said Stephen Faraone, a psychiatry researcher at SUNY Upstate Medical University in Syracuse, New York and author of an editorial accompanying the two studies in JAMA Psychiatry. Faraone cautions, however, that some newly diagnosed adults might have had undetected ADHD as children. Support from parents and teachers or high intelligence, for example, might prevent ADHD symptoms from emerging earlier in life. It’s not clear whether study participants “were completely free of psychopathology prior to adulthood,” Faraone said in an email. One of the studies, from Brazil, tracked more than 5,200 people born in 1993 until they were 18 or 19 years old. © 2016 Scientific American
Stephen Cave For centuries, philosophers and theologians have almost unanimously held that civilization as we know it depends on a widespread belief in free will—and that losing this belief could be calamitous. Our codes of ethics, for example, assume that we can freely choose between right and wrong. In the Christian tradition, this is known as “moral liberty”—the capacity to discern and pursue the good, instead of merely being compelled by appetites and desires. The great Enlightenment philosopher Immanuel Kant reaffirmed this link between freedom and goodness. If we are not free to choose, he argued, then it would make no sense to say we ought to choose the path of righteousness. Today, the assumption of free will runs through every aspect of American politics, from welfare provision to criminal law. It permeates the popular culture and underpins the American dream—the belief that anyone can make something of themselves no matter what their start in life. As Barack Obama wrote in The Audacity of Hope, American “values are rooted in a basic optimism about life and a faith in free will.” So what happens if this faith erodes? The sciences have grown steadily bolder in their claim that all human behavior can be explained through the clockwork laws of cause and effect. This shift in perception is the continuation of an intellectual revolution that began about 150 years ago, when Charles Darwin first published On the Origin of Species. Shortly after Darwin put forth his theory of evolution, his cousin Sir Francis Galton began to draw out the implications: If we have evolved, then mental faculties like intelligence must be hereditary. But we use those faculties—which some people have to a greater degree than others—to make decisions. So our ability to choose our fate is not free, but depends on our biological inheritance. © 2016 by The Atlantic Monthly Group.
Link ID: 22228 - Posted: 05.18.2016
George Johnson At the Science of Consciousness conference last month in Tucson, I was faced with a quandary: Which of eight simultaneous sessions should I attend? In one room, scientists and philosophers were discussing the physiology of brain cells and how they might generate the thinking mind. In another, the subject was free will — real or an illusion? Next door was a session on panpsychism, the controversial (to say the least) idea that everything — animal, vegetable and mineral — is imbued at its subatomic roots with mindlike qualities. Running on parallel tracks were sessions titled “Phenomenal Consciousness,” the “Neural Correlates of Consciousness” and the “Extended Mind.” For much of the 20th century, the science of consciousness was widely dismissed as an impenetrable mystery, a morass of a problem that could be safely pursued only by older professors as they thought deep thoughts in their endowed chairs. Beginning in the 1990s, the field slowly became more respectable. There is, after all, a gaping hole in science. The human mind has plumbed the universe, concluding that it is precisely 13.8 billion years old. With particle accelerators like the Large Hadron Collider at CERN, scientists have discovered the vanishingly tiny particles, like the Higgs boson, that underpin reality. But there is no scientific explanation for consciousness — without which none of these discoveries could have been made. © 2016 The New York Times Company
Link ID: 22227 - Posted: 05.18.2016
By John Horgan Speakers at the 2016 Tucson consciousness conference suggested that “temporal nonlocality” or other quantum effects in the brain could account for free will. But what happens when the brain is immersed in a hot tub? This is the second of four posts on “The Science of Consciousness” in Tucson, Arizona, which lasted from April 26 to April 30. (See Further Reading for links to other posts.) Once again, I’m trying to answer the question: What is it like to be a skeptical journalist at a consciousness conference? -- John Horgan DAY 2, THURSDAY, APRIL 28. HOT TUBS AND QUANTUM INCOHERENCE Breakfast on the patio with Stuart Kauffman, who has training in… almost everything. Philosophy, medicine, science. We’ve bumped heads in the past, but we’re friendly now. In his mid-70s, Stu is still obsessed with--and hacking away at--the biggest mysteries. We talk about… almost everything. Quantum mechanics, the origin of life, materialism, free will, God, the birth and death of his daughter, the death of his wife, his re-marriage, predictability versus possibility. As Stu speaks, his magnificent, weathered face looks happy/sad, arrogant/anxious. Superposition of emotions. He tells me about his brand-new book, Humanity in a Creative Universe, in which he outlines a perspective that can help lift us out of our spiritual crisis. Who saves the savior? I scoot to a morning session, “Consciousness and Free Will.” I hope it will supply me with ammo for my defenses of free will. I can do without God, but not free will. © 2016 Scientific American, a Division of Nature America, Inc.
Link ID: 22216 - Posted: 05.16.2016
By Daniel Barron No matter where we call home, where we were raised, or what we ate for breakfast, our brains process information pretty much the same as anyone else's in the world. Which makes sense—our genomes are 99.6-99.9% identical, which makes our brains nearly so. Look at a landscape or cityscape and your brain performs much the same computations as that of someone from another background or country. Zhangjiajie National Forest Park, China. Credit: Chensiyuan, via Wikimedia Commons under GFDL Consider my recent walk through China’s Zhangjiajie National Forest Park, an inspiration for James Cameron’s Avatar. Some of our first steps into the park involved a 1,070-foot ascent in the Bailong elevator, the world’s tallest outdoor elevator. Crammed within the carriage were travelers from Japan, India, China, the U.S.A., and Korea. No matter our origin, the Wulingyuan landscape didn’t disappoint: the towering red and green rock formations stretched towards the sky as they defied gravity. Gasps and awes were our linguistic currency while our visual cortices gleefully fired away. The approximately 3,000 quartzite sandstone pillars, with their unusual red and green contrasts, mesmerized our visual centers, demanding our attention. One of the brain’s earliest visual processing centers, V1, lies at the middle of the back of our head. V1 identifies simple forms like vertical, horizontal, and diagonal edges of contrasting intensities, or lines. Look at a vertical line, and neurons that are sensitive to vertical lines will fire more quickly; look at a horizontal line, and our horizontal neurons buzz away. © 2016 Scientific American
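The last sentences describe orientation tuning: a V1 neuron fires fastest for edges near its preferred angle, and its rate falls off as the stimulus rotates away, wrapping around every 180 degrees (a line tilted 179° is nearly vertical again). A toy tuning-curve model of that behavior; the 50 spikes/s peak and 30-degree width are illustrative numbers, not measurements:

```python
import math

def v1_response(stimulus_deg: float, preferred_deg: float,
                max_rate: float = 50.0, width_deg: float = 30.0) -> float:
    """Firing rate (spikes/s) of an idealized orientation-tuned V1 neuron,
    modeled as a Gaussian bump over the 180-degree orientation cycle."""
    # Smallest angular difference on the 180-degree orientation circle.
    diff = abs(stimulus_deg - preferred_deg) % 180.0
    diff = min(diff, 180.0 - diff)
    return max_rate * math.exp(-(diff ** 2) / (2 * width_deg ** 2))

print(v1_response(90.0, preferred_deg=90.0))  # → 50.0 (vertical edge, vertical neuron)
print(v1_response(0.0, preferred_deg=90.0))   # near zero: horizontal edge barely drives it
```

Decoding which orientation is on screen then amounts to comparing the rates of a population of such neurons with different preferred angles.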
By John Horgan Scientists trying to explain consciousness are entitled to be difficult, but what’s philosophers’ excuse? Don’t they have a moral duty to be comprehensible to non-specialists? I recently attended “The Science of Consciousness,” the legendary inquest held every two years in Tucson, Arizona. I reported on the first meeting in 1994 and wanted to see how it’s evolved since then. This year’s shindig lasted from April 26 to April 30 and featured hundreds of presenters, eminent and obscure. I arrived on the afternoon of April 27 and stayed through the closing “End-of-Consciousness Party.” The only event I regret missing is a chat between philosopher David Chalmers, who loosed his “hard problem of consciousness” meme here in Tucson in 1994, and Deepak Chopra, the New Age mogul and a sponsor of this year’s meeting. I feel obliged to post something fast, because conference organizer and quantum-consciousness advocate Stuart Hameroff complained that most reporters “come for free, drink our booze and don’t write anything.” Hameroff also generously allowed me to give a talk, “The Quest to Solve Consciousness: A Skeptic’s View,” even though I teased him in my 1994 article for Scientific American, calling him an “aging hipster.” What follows is a highly subjective account of my first day at the meeting. I’d call this a “stream-of-consciousness report on consciousness,” but that would be pretentious. I'm just trying to answer this question: What is it like to be a skeptical journalist at a consciousness conference? I’ll post on the rest of the meeting soon. -- John Horgan DAY 1, WEDNESDAY, APRIL 27. THE HORROR A bullet-headed former New York fireman picks me up at the Tucson airport. Driving to the Loews Ventana Canyon Resort, he argues strenuously that President Trump will make us great again. As we approach the resort, he back-pedals a bit, no doubt worried about his tip. I tip him well, to show how tolerant I am.
Everyone’s entitled to an irrational belief or two. © 2016 Scientific American
Link ID: 22193 - Posted: 05.09.2016
By Sarah Kaplan Scientists have known for a while that stereotypes warp our perceptions of things. Implicit biases — those unconscious assumptions that worm their way into our brains, without our full awareness and sometimes against our better judgment — can influence grading choices from teachers, split-second decisions by police officers and outcomes in online dating. We can't even see the world without filtering it through the lens of our assumptions, scientists say. In a study published Monday in the journal Nature Neuroscience, psychologists report that the neurons that respond to things such as sex, race and emotion are linked by stereotypes, distorting the way we perceive people's faces before that visual information even reaches our conscious brains. "The moment we actually glimpse another person ... [stereotypes] are biasing that processing in a way that conforms to our already existing expectations," said Jonathan Freeman, a psychology professor at New York University and one of the authors of the report. Responsibility lies in two far-flung regions of the brain: the orbital frontal cortex, which rests just above the eyes and is responsible for rapid visual predictions and categorizations, and the fusiform cortex, which sits in the back of the brain and is involved in recognizing faces. When Freeman and his co-author, Ryan Stolier, had 43 participants look at images of faces in a brain scanner, they noticed that neurons seemed to be firing in similar patterns in both parts of the brain, suggesting that information from each part was influencing the other.
By Scott Barry Kaufman "Just because a diagnosis [of ADHD] can be made does not take away from the great traits we love about Calvin and his imaginary tiger friend, Hobbes. In fact, we actually love Calvin BECAUSE of his ADHD traits. Calvin’s imagination, creativity, energy, lack of attention, and view of the world are the gifts that Mr. Watterson gave to this character." -- The Dragonfly Forest In his 2004 book "Creativity is Forever", Gary Davis reviewed the creativity literature from 1961 to 2003 and identified 22 recurring personality traits of creative people. This included 16 "positive" traits (e.g., independent, risk-taking, high energy, curiosity, humor, artistic, emotional) and 6 "negative" traits (e.g., impulsive, hyperactive, argumentative). In her own review of the creativity literature, Bonnie Cramond found that many of these same traits overlap to a substantial degree with behavioral descriptions of Attention Deficit Hyperactivity Disorder (ADHD)-- including higher levels of spontaneous idea generation, mind wandering, daydreaming, sensation seeking, energy, and impulsivity. Research since then has supported the notion that people with ADHD characteristics are more likely to reach higher levels of creative thought and achievement than people without these characteristics. Recent research by Darya Zabelina and colleagues has found that real-life creative achievement is associated with the ability to broaden attention and have a “leaky” mental filter-- something in which people with ADHD excel. © 2016 Scientific American
Link ID: 22166 - Posted: 05.02.2016
By Adam Bear It happens hundreds of times a day: We press snooze on the alarm clock, we pick a shirt out of the closet, we reach for a beer in the fridge. In each case, we conceive of ourselves as free agents, consciously guiding our bodies in purposeful ways. But what does science have to say about the true source of this experience? In a classic paper published almost 20 years ago, the psychologists Dan Wegner and Thalia Wheatley made a revolutionary proposal: The experience of intentionally willing an action, they suggested, is often nothing more than a post hoc causal inference that our thoughts caused some behavior. The feeling itself, however, plays no causal role in producing that behavior. This could sometimes lead us to think we made a choice when we actually didn’t or think we made a different choice than we actually did. But there’s a mystery here. Suppose, as Wegner and Wheatley propose, that we observe ourselves (unconsciously) perform some action, like picking out a box of cereal in the grocery store, and then only afterwards come to infer that we did this intentionally. If this is the true sequence of events, how could we be deceived into believing that we had intentionally made our choice before the consequences of this action were observed? This explanation for how we think of our agency would seem to require supernatural backwards causation, with our experience of conscious will being both a product and an apparent cause of behavior. In a study just published in Psychological Science, Paul Bloom and I explore a radical—but non-magical—solution to this puzzle. © 2016 Scientific American
Link ID: 22158 - Posted: 04.30.2016
Yuki Noguchi Hey! Wake up! Need another cup of coffee? Join the club. Apparently about a third of Americans are sleep-deprived. And their employers are probably paying for it, too, in the form of mistakes, productivity loss, accidents and increased health insurance costs. A recent Robert Wood Johnson Foundation report found a third of Americans get less sleep than the recommended seven hours. Another survey by Accountemps, an accounting services firm, put that number at nearly 75 percent in March. Bill Driscoll, Accountemps' regional president in the greater Boston area, says some sleepy accountants even admitted it caused them to make costly mistakes. "One person deleted a project that took 1,000 hours to put together," Driscoll says. "Another person missed a decimal point on an estimated payment and the client overpaid by $1 million. Oops." William David Brown, a sleep psychologist at the University of Texas Southwestern Medical School and author of Sleeping Your Way To The Top, says Americans are sacrificing more and more sleep every year. Fatigue is cumulative, he says, and missing the equivalent of one night's sleep is like having a blood alcohol concentration of about .1 — above the legal limit to drive. "About a third of your employees in any big company are coming to work with an equivalent impairment level of being intoxicated," Brown says. © 2016 npr
By Matthew A. Scult My heart pounds as I sprint to the finish line. Thousands of spectators cheer as a sense of elation washes over me. I savor the feeling. But then, the image slowly fades away and my true surroundings come into focus. I am lying in a dark room with my head held firmly in place, inside an MRI scanner. While this might typically be unpleasant, I am a willing research study participant and am eagerly anticipating what comes next. I hold my breath as I stare at the bar on the computer screen representing my brain activity. Then the bar jumps. My fantasy of winning a race had caused the “motivation center” of my brain to surge with activity. I am participating in a study about neurofeedback, a diverse and fascinating area of research that combines neuroscience and technology to monitor and modulate brain activity in real time. My colleagues, Katie Dickerson and Jeff MacInnes, in the Adcock Lab at Duke University, are studying whether people can train themselves to increase brain activity in a tiny region of the brain called the VTA. Notably, the VTA is thought to be involved in motivation—the desire to get something that you want. For example, if I told you that by buying a lottery ticket you would be guaranteed to win $1,000,000, you would probably be very motivated to buy the ticket and would have a spike in brain activity in this region of your brain. But while studies have shown that motivation for external rewards (like money) activate the VTA, until now, we didn’t know whether people could internally generate a motivational state that would activate this brain region. To see if people can self-activate the VTA, my colleagues are using neurofeedback, which falls under the broader umbrella of biofeedback. © 2016 Scientific American
By JAMES GORMAN Bees find nectar and tell their hive-mates; flies evade the swatter; and cockroaches seem to do whatever they like wherever they like. But who would believe that insects are conscious, that they are aware of what’s going on, not just little biobots? Neuroscientists and philosophers apparently. As scientists lean increasingly toward recognizing that nonhuman animals are conscious in one way or another, the question becomes: Where does consciousness end? Andrew B. Barron, a cognitive scientist, and Colin Klein, a philosopher, at Macquarie University in Sydney, Australia, propose in Proceedings of the National Academy of Sciences that insects have the capacity for consciousness. This does not mean that a honeybee thinks, “Why am I not the queen?” or even, “Oh, I like that nectar.” But, Dr. Barron and Dr. Klein wrote in a scientific essay, the honeybee has the capacity to feel something. Their claim stops short of some others. Christof Koch, the president and chief scientific officer of the Allen Institute for Brain Science in Seattle, and Giulio Tononi, a neuroscientist and psychiatrist at the University of Wisconsin, have proposed that consciousness is nearly ubiquitous and can be present, to varying degrees, even in nonliving arrangements of matter. They say that rather than wonder how consciousness arises, one should look at where we know it exists and go from there to where else it might exist. They conclude that it is an inherent property of physical systems in which information moves around in a certain way — and that could include some kinds of artificial intelligence and even naturally occurring nonliving matter. © 2016 The New York Times Company
Link ID: 22118 - Posted: 04.19.2016
By Stephen L. Macknik, Susana Martinez-Conde The renowned Slydini holds up an empty box for all to see. It is not really a box—just four connected cloth-covered cardboard walls, forming a floppy parallelogram with no bottom or top. Yet when the magician sets it down on a table, it looks like an ordinary container. Now he begins to roll large yellow sheets of tissue paper into balls. He claps his hands—SMACK!—as he crumples each new ball in a fist and then straightens his arm, wordlessly compelling the audience to gaze after his closed hand. He opens it, and ... the ball is still there. Nothing happened. Huh. Slydini's hand closes once more around the tissue, and it starts snaking around, slowly and gracefully, like a belly dancer's. The performance is mesmerizing. With his free hand, he grabs an imaginary pinch of pixie dust from the box to sprinkle on top of the other hand. This time he opens his hand to reveal that the tissue is gone! Four balls disappear in this fashion. Then, for the finale, Slydini tips the box forward and shows the impossible: all four balls have mysteriously reappeared inside. Slydini famously performed this act on The Dick Cavett Show in 1978. It was one of his iconic tricks. Despite the prestidigitator's incredible showmanship, though, the sleight only works because your brain cannot multitask. © 2016 Scientific American,
Link ID: 22114 - Posted: 04.19.2016
By JEFFREY M. ZACKS and REBECCA TREIMAN OUR favorite Woody Allen joke is the one about taking a speed-reading course. “I read ‘War and Peace’ in 20 minutes,” he says. “It’s about Russia.” The promise of speed reading — to absorb text several times faster than normal, without any significant loss of comprehension — can indeed seem too good to be true. Nonetheless, it has long been an aspiration for many readers, as well as the entrepreneurs seeking to serve them. And as the production rate for new reading matter has increased, and people read on a growing array of devices, the lure of speed reading has only grown stronger. The first popular speed-reading course, introduced in 1959 by Evelyn Wood, was predicated on the idea that reading was slow because it was inefficient. The course focused on teaching people to make fewer back-and-forth eye movements across the page, taking in more information with each glance. Today, apps like SpeedRead With Spritz aim to minimize eye movement even further by having a digital device present you with a stream of single words one after the other at a rapid rate. Unfortunately, the scientific consensus suggests that such enterprises should be viewed with suspicion. In a recent article in Psychological Science in the Public Interest, one of us (Professor Treiman) and colleagues reviewed the empirical literature on reading and concluded that it’s extremely unlikely you can greatly improve your reading speed without missing out on a lot of meaning. Certainly, readers are capable of rapidly scanning a text to find a specific word or piece of information, or to pick up a general idea of what the text is about. But this is skimming, not reading. We can definitely skim, and it may be that speed-reading systems help people skim better. Some speed-reading systems, for example, instruct people to focus only on the beginnings of paragraphs and chapters. This is probably a good skimming strategy. 
Participants in a 2009 experiment read essays that had half the words covered up — either the beginning of the essay, the end of the essay, or the beginning or end of each individual paragraph. Reading half-paragraphs led to better performance on a test of memory for the passage’s meaning than did reading only the first or second half of the text, and it worked as well as skimming under time pressure. © 2016 The New York Times Company
By Matthew Hutson Bad news for believers in clairvoyance. Our brains appear to rewrite history so that the choices we make after an event seem to precede it. In other words, we add loops to our mental timeline that let us feel we can predict things that in reality have already happened. Adam Bear and Paul Bloom at Yale University conducted some simple tests on volunteers. In one experiment, subjects looked at white circles and silently guessed which one would turn red. Once one circle had changed colour, they reported whether or not they had predicted correctly. Over many trials, their reported accuracy was significantly better than the 20 per cent expected by chance, indicating that the volunteers either had psychic abilities or had unwittingly played a mental trick on themselves. The researchers’ study design helped explain what was really going on. They placed different delays between the white circles’ appearance and one of the circles turning red, ranging from 50 milliseconds to one second. Participants’ reported accuracy was highest – surpassing 30 per cent – when the delays were shortest. That’s what you would expect if the appearance of the red circle was actually influencing decisions still in progress. This suggests it’s unlikely that the subjects were merely lying about their predictive abilities to impress the researchers. The mechanism behind this behaviour is still unclear. It’s possible, the researchers suggest, that we perceive the order of events correctly – one circle changes colour before we have actually made our prediction – but then we subconsciously swap the sequence in our memories so the prediction seems to come first. Such a switcheroo could be motivated by a desire to feel in control of our lives. © Copyright Reed Business Information Ltd.
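The pattern in the study — reported accuracy well above the 20 per cent chance level at short delays, falling back toward chance at long ones — is easy to reproduce with a toy Monte Carlo model of postdiction. Every parameter below (the exponential decision-time distribution, its 400 ms mean, the 15 per cent chance that memory rewrites the guess) is a made-up illustration, not a value fitted to the experiment:

```python
import random

def reported_accuracy(delay_ms: float, trials: int = 100_000, n_circles: int = 5,
                      mean_decision_ms: float = 400.0, p_rewrite: float = 0.15,
                      seed: int = 0) -> float:
    """Fraction of trials a simulated subject reports as 'predicted correctly'.
    If the red circle appears before the guess is internally finalized, memory
    may quietly rewrite the guess to match the outcome; otherwise the guess is
    an honest 1-in-n_circles shot."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        decision_ms = rng.expovariate(1.0 / mean_decision_ms)  # when the guess settles
        if delay_ms < decision_ms and rng.random() < p_rewrite:
            hits += 1  # outcome arrived first: postdiction makes the guess "correct"
        elif rng.randrange(n_circles) == 0:
            hits += 1  # genuine guess among n circles, correct 1/n of the time
    return hits / trials

for delay in (50, 250, 1000):
    print(delay, round(reported_accuracy(delay), 3))
```

With these illustrative numbers, reported accuracy sits around 30 per cent at the 50 ms delay and drifts back toward 20 per cent as the delay grows, mirroring the qualitative result without any backwards causation.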
Link ID: 22109 - Posted: 04.16.2016
By Simon Makin Everyone's brain is different. Until recently neuroscience has tended to gloss this over by averaging results from many brain scans in trying to elicit general truths about how the organ works. But in a major development within the field researchers have begun documenting how brain activity differs between individuals. Such differences had been largely thought of as transient and uninteresting but studies are starting to show that they are innate properties of people's brains, and that knowing them better might ultimately help treat neurological disorders. The latest study, published April 8 in Science, found that the brain activity of individuals who were just biding their time in a brain scanner contained enough information to predict how their brains would function during a range of ordinary activities. The researchers used these at-rest signatures to predict which regions would light up—which groups of brain cells would switch on—during gambling, reading and other tasks they were asked to perform in the scanner. The technique might be used one day to assess whether certain areas of the brains of people who are paralyzed or in a comatose state are still functional, the authors say. The study capitalizes on a relatively new method of brain imaging that looks at what is going on when a person essentially does nothing. The technique stems from the mid-1990s work of biomedical engineer Bharat Biswal, now at New Jersey Institute of Technology. Biswal noticed that scans he had taken while participants were resting in a functional magnetic resonance imaging (fMRI) scanner displayed orderly, low-frequency oscillations. He had been looking for ways to remove background noise from fMRI signals but quickly realized these oscillations were not noise. His work paved the way for a new approach known as resting-state fMRI. © 2016 Scientific American
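The core idea — that a person's at-rest signature carries enough information to predict which regions will light up during a task — can be illustrated with a toy regression on synthetic data. Everything here is fabricated for illustration (the region and feature counts, the linear ground truth, the noise level); the actual study fit models to real resting-state features and validated across individuals:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated stand-in: each of 200 brain regions has 10 resting-state features,
# and task activation is (by construction) a noisy linear readout of them.
n_regions, n_features = 200, 10
rest_features = rng.normal(size=(n_regions, n_features))
true_weights = rng.normal(size=n_features)
task_map = rest_features @ true_weights + 0.1 * rng.normal(size=n_regions)

# Fit the readout on half the regions, then predict the held-out half.
w, *_ = np.linalg.lstsq(rest_features[:100], task_map[:100], rcond=None)
pred = rest_features[100:] @ w
r = np.corrcoef(pred, task_map[100:])[0, 1]
print(round(float(r), 2))  # high correlation: rest features carry the task signature
```

The point of the sketch is only the logic of the approach: if such a mapping can be learned from people who can perform tasks, it could in principle be applied to the resting scans of patients who cannot.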
Zoe Cormier Researchers have published the first images showing the effects of LSD on the human brain, as part of a series of studies to examine how the drug causes its characteristic hallucinogenic effects. David Nutt, a neuropsychopharmacologist at Imperial College London who has previously examined the neural effects of mind-altering drugs such as the hallucinogen psilocybin, found in magic mushrooms, was one of the study's leaders. He tells Nature what the research revealed, and how he hopes LSD (lysergic acid diethylamide) might ultimately be useful in therapies. Why study the effects of LSD on the brain? For brain researchers, studying how psychedelic drugs such as LSD alter the ‘normal’ brain state is a way to study the biological phenomenon that is consciousness. We ultimately would also like to see LSD deployed as a therapeutic tool. The idea has old roots. In the 1950s and 60s thousands of people took LSD for alcoholism; in 2012, a retrospective analysis of some of these studies suggested that it helped cut down on drinking. Since the 1970s there have been lots of studies with LSD on animals, but not on the human brain. We need that data to validate the trial of this drug as a potential therapy for addiction or depression. Why hasn’t anyone done brain scans before? Before the 1960s, LSD was studied for its potential therapeutic uses, as were other hallucinogens. But the drug was heavily restricted in the UK, the United States and around the world after 1967 — in my view, due to unfounded hysteria over its potential dangers. The restrictions vary worldwide, but in general, countries have insisted that LSD has ‘no medical value’, making it tremendously difficult to work with. © 2016 Nature Publishing Group