Links for Keyword: Consciousness



Links 1 - 20 of 142

George Paxinos Many people today believe they possess a soul. While conceptions of the soul differ, many would describe it as an “invisible force that appears to animate us”. It’s often believed the soul can survive death and is intimately associated with a person’s memories, passions and values. Some argue the soul has no mass, takes no space and is localised nowhere. But as a neuroscientist and psychologist, I have no use for the soul. On the contrary, all functions attributable to this kind of soul can be explained by the workings of the brain. Psychology is the study of behaviour. To carry out their work of modifying behaviour, such as in treating addiction, phobia, anxiety and depression, psychologists do not need to assume people have souls. For psychologists, it is not so much that souls do not exist as that there is no need for them. It is said psychology lost its soul in the 1930s. By then the discipline had fully become a science, relying on experimentation and control rather than introspection. What is the soul? It is not only religious thinkers who have proposed that we possess a soul. Some of the most notable proponents have been philosophers, such as Plato (424-348 BCE) and René Descartes in the 17th century. Plato believed we do not learn new things but recall things we knew before birth. For this to be so, he concluded, we must have a soul. Centuries later, Descartes wrote his thesis Passions of the Soul, where he argued there was a distinction between the mind, which he described as a “thinking substance”, and the body, “the extended substance”. He wrote: © 2010–2016, The Conversation US, Inc.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22692 - Posted: 09.26.2016

By Usha Lee McFarling @ushamcfarling LOS ANGELES — A team of physicians and neuroscientists on Wednesday reported the successful use of ultrasound waves to “jump-start” the brain of a 25-year-old man recovering from coma — and plan to launch a much broader test of the technique, in hopes of finding a way to help at least some of the tens of thousands of patients in vegetative states. The team, based at the University of California, Los Angeles, cautions that the evidence so far is thin: They have no way to know for sure whether the ultrasound stimulation made the difference for their young patient, or whether he spontaneously recovered by coincidence shortly after the therapy. But the region of the brain they targeted with the ultrasound — the thalamus — has previously been shown to be important in restoring consciousness. In 2007, a 38-year-old man who had been minimally conscious for six years regained some functions after electrodes were implanted in his brain to stimulate the thalamus. The ultrasound technique is a “good idea” that merits further study, said Dr. Nicholas Schiff, a pioneer in the field of using brain stimulation to restore consciousness who conducted the 2007 study. “It’s intriguing and it’s an interesting possibility,” said Schiff, a neuroscientist at Weill Cornell Medicine. The UCLA procedure used an experimental device, about the size of a teacup saucer, to focus ultrasonic waves on the thalamus, two walnut-sized bulbs in the center of the brain that serve as a critical hub for information flow and help regulate consciousness and sleep.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22606 - Posted: 08.27.2016

Rachel Ehrenberg The brain doesn’t really go out like a light when anesthesia kicks in. Nor does neural activity gradually dim, a new study in monkeys reveals. Rather, intermittent flickers of brain activity appear as the effects of an anesthetic take hold. Some synchronized networks of brain activity fall out of step as the monkeys gradually drift from wakefulness, the study showed. But those networks resynchronized when deep unconsciousness set in, researchers reported in the July 20 Journal of Neuroscience. That the two networks behave so differently during the drifting-off stage is surprising, says study coauthor Yumiko Ishizawa of Harvard Medical School and Massachusetts General Hospital. It isn’t clear what exactly is going on, she says, except that the anesthetic’s effects are a lot more complex than previously thought. Most studies examining how anesthesia works use electroencephalograms, or EEGs, which record brain activity using electrodes on the scalp. The new study offers unprecedented surveillance by eavesdropping via electrodes implanted inside macaque monkeys’ brains. This new view provides clues to how the brain loses and gains consciousness. “It’s a very detailed description of something we know very little about,” says cognitive neuroscientist Tristan Bekinschtein of the University of Cambridge, who was not involved with the work. Although the study is elegant, it isn’t clear what to make of the findings, he says. “These are early days.” |© Society for Science & the Public 2000 - 2016.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 14: Biological Rhythms, Sleep, and Dreaming
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 10: Biological Rhythms and Sleep
Link ID: 22457 - Posted: 07.20.2016

Michael Egnor The most intractable question in modern neuroscience and philosophy of the mind is often phrased "What is consciousness?" The problem has been summed up nicely by philosopher David Chalmers as what he calls the Hard Problem of consciousness: How is it that we are subjects, and not just objects? Chalmers contrasts this hard question with what he calls the Easy Problem of consciousness: What are the neurobiological substrates underlying such things as wakefulness, alertness, attention, arousal, etc.? Chalmers doesn't mean, of course, that the neurobiology of arousal is easy. He merely means to show that even if we can understand arousal from a neurobiological standpoint, we haven't yet solved the hard problem: the problem of subjective experience. Why am I an I, and not an it? Chalmers's point is a good one, and I think that it has a rather straightforward solution. First, some historical background is necessary. "What is consciousness?" is a modern question. It wasn't asked before the 17th century, because no one before Descartes thought that the mind was particularly mysterious. The problem of consciousness was created by moderns. The scholastic philosophers, following Aristotle and Aquinas, understood the soul as the animating principle of the body. In a human being, the powers of the soul -- intellect, will, memory, perception, appetite, and such -- were no more mysterious than the other powers of the soul, such as respiration, circulation, etc. Of course, biology in the Middle Ages wasn't as advanced as it is today, so there was much they didn't understand about human physiology, but in principle the mind was just another aspect of human biology, not inherently mysterious. In modern parlance, the scholastics saw the mind as the Easy Problem, no more intractable than understanding how breathing or circulation work.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22441 - Posted: 07.15.2016

George Johnson A paper in The British Medical Journal in December reported that cognitive behavioral therapy — a means of coaxing people into changing the way they think — is as effective as Prozac or Zoloft in treating major depression. In ways no one understands, talk therapy reaches down into the biological plumbing and affects the flow of neurotransmitters in the brain. Other studies have found similar results for “mindfulness” — Buddhist-inspired meditation in which one’s thoughts are allowed to drift gently through the head like clouds reflected in still mountain water. Findings like these have become so commonplace that it’s easy to forget their strange implications. Depression can be treated in two radically different ways: by altering the brain with chemicals, or by altering the mind by talking to a therapist. But we still can’t explain how mind arises from matter or how, in turn, mind acts on the brain. This longstanding conundrum — the mind-body problem — was succinctly described by the philosopher David Chalmers at a recent symposium at The New York Academy of Sciences. “The scientific and philosophical consensus is that there is no nonphysical soul or ego, or at least no evidence for that,” he said. Descartes’s notion of dualism — mind and body as separate things — has long receded from science. The challenge now is to explain how the inner world of consciousness arises from the flesh of the brain. © 2016 The New York Times Company

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22397 - Posted: 07.05.2016

By Clare Wilson People who meditate are more aware of their unconscious brain activity – or so a new take on a classic “free will” experiment suggests. The results hint that the feeling of conscious control over our actions can vary – and provide more clues to understanding the complex nature of free will. The famous experiment that challenged our notions of free will was first done in 1983 by neuroscientist Benjamin Libet. It involved measuring electrical activity in someone’s brain while asking them to press a button whenever they liked, as they watched a special clock that let them note the time precisely. Typically people feel like they decide to press the button about 200 milliseconds before their finger moves – but the electrodes reveal activity in the part of their brain that controls movement occurs a further 350 milliseconds before they feel they make that decision. This suggests that in fact it is the unconscious brain that “decides” when to press the button. In the new study, a team at the University of Sussex in Brighton, UK, did a slimmed-down version of the experiment (omitting the brain electrodes), with 57 volunteers, 11 of whom regularly practised mindfulness meditation. The meditators had a longer gap in time between when they felt like they decided to move their finger and when it physically moved – 149 compared with 68 milliseconds for the other people. © Copyright Reed Business Information Ltd.
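To make the timing relationships in the Libet paradigm concrete, here is a minimal sketch in Python that lays the figures quoted above on a single timeline, measured relative to the moment the finger moves. The variable names are our own; only the millisecond values come from the reporting.

```python
# Libet-style timeline, in milliseconds relative to the finger movement (t = 0).
FELT_DECISION = -200                      # subjects report deciding ~200 ms before moving
BRAIN_ACTIVITY = FELT_DECISION - 350      # motor-cortex activity begins a further 350 ms earlier

print(f"brain activity begins: {BRAIN_ACTIVITY} ms")   # -550 ms
print(f"felt decision to move: {FELT_DECISION} ms")    # -200 ms
print("finger moves:             0 ms")

# The Sussex version (no electrodes) measured only the decision-to-movement gap:
MEDITATORS, OTHERS = -149, -68            # ms before movement, as reported in the study
print(f"felt decision: meditators {MEDITATORS} ms, others {OTHERS} ms")
```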

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22369 - Posted: 06.28.2016

Michael Graziano Ever since Charles Darwin published On the Origin of Species in 1859, evolution has been the grand unifying theory of biology. Yet one of our most important biological traits, consciousness, is rarely studied in the context of evolution. Theories of consciousness come from religion, from philosophy, from cognitive science, but not so much from evolutionary biology. Maybe that’s why so few theories have been able to tackle basic questions such as: What is the adaptive value of consciousness? When did it evolve and what animals have it? The Attention Schema Theory (AST), developed over the past five years, may be able to answer those questions. The theory suggests that consciousness arises as a solution to one of the most fundamental problems facing any nervous system: Too much information constantly flows in to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others, and in the AST, consciousness is the ultimate result of that evolutionary sequence. If the theory is right—and that has yet to be determined—then consciousness evolved gradually over the past half billion years and is present in a range of vertebrate species. Even before the evolution of a central brain, nervous systems took advantage of a simple computing trick: competition. Neurons act like candidates in an election, each one shouting and trying to suppress its fellows. At any moment only a few neurons win that intense competition, their signals rising up above the noise and impacting the animal’s behavior. This process is called selective signal enhancement, and without it, a nervous system can do almost nothing. © 2016 by The Atlantic Monthly Group
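The neuronal "election" Graziano describes is what computational models usually implement as winner-take-all dynamics driven by lateral inhibition. Below is a minimal sketch of that computing trick, a toy of our own construction rather than anything from the article: each unit amplifies itself while suppressing its rivals, and after a few iterations only the strongest input remains above zero.

```python
import numpy as np

def selective_signal_enhancement(inputs, self_gain=1.1, inhibition=0.2,
                                 steps=30, ceiling=10.0):
    """Toy winner-take-all via lateral inhibition: each unit boosts its own
    activity and suppresses every other unit; the strongest signal survives."""
    a = np.asarray(inputs, dtype=float)
    for _ in range(steps):
        a = self_gain * a - inhibition * (a.sum() - a)  # inhibition from all rivals
        a = np.clip(a, 0.0, ceiling)  # rates can't go negative or grow without bound
    return a

signals = [0.9, 0.85, 0.3, 0.2, 0.1]          # five neurons "shouting" at once
print(selective_signal_enhancement(signals))   # only the strongest remains nonzero
```

The gain and inhibition values are arbitrary; the point is only that mutual suppression turns a graded field of signals into a sparse few that "rise above the noise", as the passage puts it.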

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22306 - Posted: 06.09.2016

By Hanoch Ben-Yami Adam Bear opens his article, "What Neuroscience Says about Free Will," by mentioning a few cases such as pressing snooze on the alarm clock or picking a shirt out of the closet. He continues with an assertion about these cases, and with a question: In each case, we conceive of ourselves as free agents, consciously guiding our bodies in purposeful ways. But what does science have to say about the true source of this experience? This is a bad start. To be aware of ourselves as free agents is not to have an experience. There’s no special tickle that tells you you’re free, no "freedom itch." Rather, to be aware of the fact that you acted freely is, among other things, to know that had you preferred to do something else in those circumstances, you would have done it. And in many circumstances we clearly know that this is the case, so in many circumstances we are aware that we act freely. No experience is involved, and so far there’s no question in Bear’s article for science to answer. Continuing with his alleged experience, Bear writes: …the psychologists Dan Wegner and Thalia Wheatley made a revolutionary proposal: The experience of intentionally willing an action, they suggested, is often nothing more than a post hoc causal inference that our thoughts caused some behavior. More than a revolutionary proposal, this is an additional confusion. What might "intentionally willing an action" mean? Is it to be contrasted with non-intentionally willing an action? But what could this stand for? © 2016 Scientific American

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22282 - Posted: 06.04.2016

By David Shultz We still may not know what causes consciousness in humans, but scientists are at least learning how to detect its presence. A new application of a common clinical test, the positron emission tomography (PET) scan, seems to be able to differentiate between minimally conscious brains and those in a vegetative state. The work could help doctors figure out which brain trauma patients are the most likely to recover—and even shed light on the nature of consciousness. “This is really cool what these guys did here,” says neuroscientist Nicholas Schiff at Cornell University, who was not involved in the study. “We’re going to make great use of it.” PET scans work by introducing a small amount of radionuclides into the body. These radioactive compounds act as a tracer and naturally emit subatomic particles called positrons over time, and the gamma rays indirectly produced by this process can be detected by imaging equipment. The most common PET scan uses fluorodeoxyglucose (FDG) as the tracer in order to show how glucose concentrations change in tissue over time—a proxy for metabolic activity. Compared with other imaging techniques, PET scans are relatively cheap and easy to perform, and are routinely used to survey for cancer, heart problems, and other diseases. In the new study, researchers used FDG-PET scans to analyze the resting cerebral metabolic rate—the amount of energy being used by the tissue—of 131 patients with a so-called disorder of consciousness and 28 healthy controls. Disorders of consciousness can refer to a wide range of problems, ranging from a full-blown coma to a minimally conscious state in which patients may experience brief periods where they can communicate and follow instructions. Between these two extremes, patients may be said to be in a vegetative state or exhibit unresponsive wakefulness, characterized by open eyes and basic reflexes, but no signs of awareness. Most disorders of consciousness result from head trauma, and where someone falls on the consciousness continuum is typically determined by the severity of the injury. © 2016 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 22260 - Posted: 05.28.2016

Stephen Cave For centuries, philosophers and theologians have almost unanimously held that civilization as we know it depends on a widespread belief in free will—and that losing this belief could be calamitous. Our codes of ethics, for example, assume that we can freely choose between right and wrong. In the Christian tradition, this is known as “moral liberty”—the capacity to discern and pursue the good, instead of merely being compelled by appetites and desires. The great Enlightenment philosopher Immanuel Kant reaffirmed this link between freedom and goodness. If we are not free to choose, he argued, then it would make no sense to say we ought to choose the path of righteousness. Today, the assumption of free will runs through every aspect of American politics, from welfare provision to criminal law. It permeates the popular culture and underpins the American dream—the belief that anyone can make something of themselves no matter what their start in life. As Barack Obama wrote in The Audacity of Hope, American “values are rooted in a basic optimism about life and a faith in free will.” So what happens if this faith erodes? The sciences have grown steadily bolder in their claim that all human behavior can be explained through the clockwork laws of cause and effect. This shift in perception is the continuation of an intellectual revolution that began about 150 years ago, when Charles Darwin first published On the Origin of Species. Shortly after Darwin put forth his theory of evolution, his cousin Sir Francis Galton began to draw out the implications: If we have evolved, then mental faculties like intelligence must be hereditary. But we use those faculties—which some people have to a greater degree than others—to make decisions. So our ability to choose our fate is not free, but depends on our biological inheritance. © 2016 by The Atlantic Monthly Group.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22228 - Posted: 05.18.2016

George Johnson At the Science of Consciousness conference last month in Tucson, I was faced with a quandary: Which of eight simultaneous sessions should I attend? In one room, scientists and philosophers were discussing the physiology of brain cells and how they might generate the thinking mind. In another, the subject was free will — real or an illusion? Next door was a session on panpsychism, the controversial (to say the least) idea that everything — animal, vegetable and mineral — is imbued at its subatomic roots with mindlike qualities. Running on parallel tracks were sessions titled “Phenomenal Consciousness,” the “Neural Correlates of Consciousness” and the “Extended Mind.” For much of the 20th century, the science of consciousness was widely dismissed as an impenetrable mystery, a morass of a problem that could be safely pursued only by older professors as they thought deep thoughts in their endowed chairs. Beginning in the 1990s, the field slowly became more respectable. There is, after all, a gaping hole in science. The human mind has plumbed the universe, concluding that it is precisely 13.8 billion years old. With particle accelerators like the Large Hadron Collider at CERN, scientists have discovered the vanishingly tiny particles, like the Higgs boson, that underpin reality. But there is no scientific explanation for consciousness — without which none of these discoveries could have been made. © 2016 The New York Times Company

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22227 - Posted: 05.18.2016

By John Horgan Speakers at the 2016 Tucson consciousness conference suggested that “temporal nonlocality” or other quantum effects in the brain could account for free will. But what happens when the brain is immersed in a hot tub? This is the second of four posts on “The Science of Consciousness” in Tucson, Arizona, which lasted from April 26 to April 30. (See Further Reading for links to other posts.) Once again, I’m trying to answer the question: What is it like to be a skeptical journalist at a consciousness conference? -- John Horgan DAY 2, THURSDAY, APRIL 28. HOT TUBS AND QUANTUM INCOHERENCE Breakfast on the patio with Stuart Kauffman, who has training in… almost everything. Philosophy, medicine, science. We’ve bumped heads in the past, but we’re friendly now. In his mid-70s, Stu is still obsessed with--and hacking away at--the biggest mysteries. We talk about… almost everything. Quantum mechanics, the origin of life, materialism, free will, God, the birth and death of his daughter, the death of his wife, his re-marriage, predictability versus possibility. As Stu speaks, his magnificent, weathered face looks happy/sad, arrogant/anxious. Superposition of emotions. He tells me about his brand-new book, Humanity in a Creative Universe, in which he outlines a perspective that can help lift us out of our spiritual crisis. Who saves the savior? I scoot to a morning session, “Consciousness and Free Will.” I hope it will supply me with ammo for my defenses of free will. I can do without God, but not free will. © 2016 Scientific American, a Division of Nature America, Inc.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22216 - Posted: 05.16.2016

By John Horgan Scientists trying to explain consciousness are entitled to be difficult, but what’s philosophers’ excuse? Don’t they have a moral duty to be comprehensible to non-specialists? I recently attended “The Science of Consciousness,” the legendary inquest held every two years in Tucson, Arizona. I reported on the first meeting in 1994 and wanted to see how it’s evolved since then. This year’s shindig lasted from April 26 to April 30 and featured hundreds of presenters, eminent and obscure. I arrived on the afternoon of April 27 and stayed through the closing “End-of-Consciousness Party.” The only event I regret missing is a chat between philosopher David Chalmers, who loosed his “hard problem of consciousness” meme here in Tucson in 1994, and Deepak Chopra, the New Age mogul and a sponsor of this year’s meeting. I feel obliged to post something fast, because conference organizer and quantum-consciousness advocate Stuart Hameroff complained that most reporters “come for free, drink our booze and don’t write anything.” Hameroff also generously allowed me to give a talk, “The Quest to Solve Consciousness: A Skeptic’s View,” even though I teased him in my 1994 article for Scientific American, calling him an “aging hipster.” What follows is a highly subjective account of my first day at the meeting. I’d call this a “stream-of-consciousness report on consciousness,” but that would be pretentious. I'm just trying to answer this question: What is it like to be a skeptical journalist at a consciousness conference? I’ll post on the rest of the meeting soon. -- John Horgan DAY 1, WEDNESDAY, APRIL 27. THE HORROR A bullet-headed former New York fireman picks me up at the Tucson airport. Driving to the Loews Ventana Canyon Resort, he argues strenuously that President Trump will make us great again. As we approach the resort, he back-pedals a bit, no doubt worried about his tip. I tip him well, to show how tolerant I am. Everyone’s entitled to an irrational belief or two. © 2016 Scientific American

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22193 - Posted: 05.09.2016

By Adam Bear It happens hundreds of times a day: We press snooze on the alarm clock, we pick a shirt out of the closet, we reach for a beer in the fridge. In each case, we conceive of ourselves as free agents, consciously guiding our bodies in purposeful ways. But what does science have to say about the true source of this experience? In a classic paper published almost 20 years ago, the psychologists Dan Wegner and Thalia Wheatley made a revolutionary proposal: The experience of intentionally willing an action, they suggested, is often nothing more than a post hoc causal inference that our thoughts caused some behavior. The feeling itself, however, plays no causal role in producing that behavior. This could sometimes lead us to think we made a choice when we actually didn’t or think we made a different choice than we actually did. But there’s a mystery here. Suppose, as Wegner and Wheatley propose, that we observe ourselves (unconsciously) perform some action, like picking out a box of cereal in the grocery store, and then only afterwards come to infer that we did this intentionally. If this is the true sequence of events, how could we be deceived into believing that we had intentionally made our choice before the consequences of this action were observed? This explanation for how we think of our agency would seem to require supernatural backwards causation, with our experience of conscious will being both a product and an apparent cause of behavior. In a study just published in Psychological Science, Paul Bloom and I explore a radical—but non-magical—solution to this puzzle. © 2016 Scientific American

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22158 - Posted: 04.30.2016

By JAMES GORMAN Bees find nectar and tell their hive-mates; flies evade the swatter; and cockroaches seem to do whatever they like wherever they like. But who would believe that insects are conscious, that they are aware of what’s going on, not just little biobots? Neuroscientists and philosophers apparently. As scientists lean increasingly toward recognizing that nonhuman animals are conscious in one way or another, the question becomes: Where does consciousness end? Andrew B. Barron, a cognitive scientist, and Colin Klein, a philosopher, at Macquarie University in Sydney, Australia, propose in Proceedings of the National Academy of Sciences that insects have the capacity for consciousness. This does not mean that a honeybee thinks, “Why am I not the queen?” or even, “Oh, I like that nectar.” But, Dr. Barron and Dr. Klein wrote in a scientific essay, the honeybee has the capacity to feel something. Their claim stops short of some others. Christof Koch, the president and chief scientific officer of the Allen Institute for Brain Science in Seattle, and Giulio Tononi, a neuroscientist and psychiatrist at the University of Wisconsin, have proposed that consciousness is nearly ubiquitous and can be present, to varying degrees, even in nonliving arrangements of matter. They say that rather than wonder how consciousness arises, one should look at where we know it exists and go from there to where else it might exist. They conclude that it is an inherent property of physical systems in which information moves around in a certain way — and that could include some kinds of artificial intelligence and even naturally occurring nonliving matter. © 2016 The New York Times Company

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22118 - Posted: 04.19.2016

By Matthew Hutson Bad news for believers in clairvoyance. Our brains appear to rewrite history so that the choices we make after an event seem to precede it. In other words, we add loops to our mental timeline that let us feel we can predict things that in reality have already happened. Adam Bear and Paul Bloom at Yale University conducted some simple tests on volunteers. In one experiment, subjects looked at white circles and silently guessed which one would turn red. Once one circle had changed colour, they reported whether or not they had predicted correctly. Over many trials, their reported accuracy was significantly better than the 20 per cent expected by chance, indicating that the volunteers either had psychic abilities or had unwittingly played a mental trick on themselves. The researchers’ study design helped explain what was really going on. They placed different delays between the white circles’ appearance and one of the circles turning red, ranging from 50 milliseconds to one second. Participants’ reported accuracy was highest – surpassing 30 per cent – when the delays were shortest. That’s what you would expect if the appearance of the red circle was actually influencing decisions still in progress. This suggests it’s unlikely that the subjects were merely lying about their predictive abilities to impress the researchers. The mechanism behind this behaviour is still unclear. It’s possible, the researchers suggest, that we perceive the order of events correctly – one circle changes colour before we have actually made our prediction – but then we subconsciously swap the sequence in our memories so the prediction seems to come first. Such a switcheroo could be motivated by a desire to feel in control of our lives. © Copyright Reed Business Information Ltd.
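A small simulation makes the postdiction account concrete. What follows is our own toy model, not the authors' analysis, and every timing parameter in it is an assumption: on each trial the subject's silent guess "locks in" at a random time, and if the red circle happens to appear just before lock-in, the still-malleable choice drifts to the circle that actually turned red.

```python
import random

def reported_accuracy(delay_ms, n_trials=100_000, n_circles=5,
                      mean_lock_ms=300, swap_window_ms=60):
    """Toy postdiction model (parameters are assumptions, not from the study):
    a guess locks in at an exponentially distributed time; if the red circle
    appears within swap_window_ms before lock-in, the subject unknowingly
    reports the red circle as their 'prediction'."""
    hits = 0
    for _ in range(n_trials):
        red = random.randrange(n_circles)             # circle that turns red
        lock = random.expovariate(1 / mean_lock_ms)   # ms until the guess locks in
        if 0 < lock - delay_ms < swap_window_ms:
            guess = red                               # answer leaks into the choice
        else:
            guess = random.randrange(n_circles)       # honest guess: 20% by chance
        hits += (guess == red)
    return hits / n_trials

for delay in (50, 200, 1000):
    print(f"delay {delay:4d} ms -> reported accuracy {reported_accuracy(delay):.0%}")
```

With these made-up parameters, reported accuracy falls from roughly 30 per cent at the 50-millisecond delay toward the 20 per cent chance level at one second, reproducing the qualitative pattern the study describes.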

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22109 - Posted: 04.16.2016

By Daniel Barron It’s unnerving when someone with no criminal record commits a disturbingly violent crime. Perhaps he stabs his girlfriend 40 times and dumps her body in the desert. Perhaps he climbs to the top of a clock tower and guns down innocent passers-by. Or perhaps he climbs out of a car at a stoplight and nearly decapitates an unsuspecting police officer with 26 rounds from an assault rifle. Perhaps he even drowns his own children. Or shoots the President of the United States. The shock is palpable (NB: those are all actual cases). The very notion that someone—our neighbor, the guy ahead of us in the check-out line, we (!)—could do something so terrible rubs at our minds. We wonder, “What happened? What in this guy snapped?” After all, for the last 20 years, the accused went home to his family after work—why did he go rob that liquor store? What made him pull that trigger? The subject hit home for me this week when I was called to jury duty. As I made my way to the county courthouse, I wondered whether I would be asked to decide a capital murder case like the ones above. As a young neuroscientist, the prospect made me uneasy. At the trial, the accused’s lawyers would probably argue that, at the time of the crime, he had diminished capacity to make decisions, that somehow he wasn’t entirely free to choose whether or not to commit the crime. They might cite some form of neuroscientific evidence to argue that, at the time of the crime, his brain wasn’t functioning normally. And the jury and judge have to decide what to make of it. © 2016 Scientific American

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22024 - Posted: 03.24.2016

Giant manta rays have been filmed checking out their reflections in a way that suggests they are self-aware. Only a small number of animals, mostly primates, have passed the mirror test, widely used as a tentative test of self-awareness. “This new discovery is incredibly important,” says Marc Bekoff, of the University of Colorado in Boulder. “It shows that we really need to expand the range of animals we study.” But not everyone is convinced that the new study proves conclusively that manta rays, which have the largest brains of any fish, can do this – or indeed, that the mirror test itself is an appropriate measure of self-awareness. Csilla Ari, of the University of South Florida in Tampa, filmed two giant manta rays in a tank, with and without a mirror inside. The fish changed their behaviour in a way that suggested that they recognised the reflections as themselves as opposed to another manta ray. They did not show signs of social interaction with the image, which is what you would expect if they perceived it to be another individual. Instead, the rays repeatedly moved their fins and circled in front of the mirror. This suggests they could see whether their reflection moved when they moved. The frequency of these movements was much higher when the mirror was in the tank than when it was not. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22015 - Posted: 03.22.2016

By BARBARA K. LIPSKA As the director of the human brain bank at the National Institute of Mental Health, I am surrounded by brains, some floating in jars of formalin and others icebound in freezers. As part of my work, I cut these brains into tiny pieces and study their molecular and genetic structure. My specialty is schizophrenia, a devastating disease that often makes it difficult for the patient to discern what is real and what is not. I examine the brains of people with schizophrenia whose suffering was so acute that they committed suicide. I had always done my work with great passion, but I don’t think I really understood what was at stake until my own brain stopped working. In the first days of 2015, I was sitting at my desk when something freakish happened. I extended my arm to turn on the computer, and to my astonishment realized that my right hand disappeared when I moved it to the right lower quadrant of the keyboard. I tried again, and the same thing happened: The hand disappeared completely as if it were cut off at the wrist. It felt like a magic trick — mesmerizing, and totally inexplicable. Stricken with fear, I kept trying to find my right hand, but it was gone. I had battled breast cancer in 2009 and melanoma in 2012, but I had never considered the possibility of a brain tumor. I knew immediately that this was the most logical explanation for my symptoms, and yet I quickly dismissed the thought. Instead I headed to a conference room. My colleagues and I had a meeting scheduled to review our new data on the molecular composition of schizophrenia patients’ frontal cortex, a brain region that shapes who we are — our thoughts, emotions, memories. But I couldn’t focus on the meeting because the other scientists’ faces kept vanishing. Thoughts about a brain tumor crept quietly into my consciousness again, then screamed for attention. © 2016 The New York Times Company

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 7: Vision: From Eye to Brain
Link ID: 21984 - Posted: 03.14.2016

David H. Wells Take a theory of consciousness that calculates how aware any information-processing network is – be it a computer or a brain. Trouble is, it takes a supercomputer billions of years to verify its predictions. Add a maverick cosmologist, and what do you get? A way to make the theory useful within our lifetime. Integrated information theory (IIT) is one of our best descriptions of consciousness. Developed by neuroscientist Giulio Tononi of the University of Wisconsin at Madison, it’s based on the observation that each moment of awareness is unified. When you contemplate a bunch of flowers, say, it’s impossible to be conscious of the flowers’ colour independently of their fragrance because the brain has integrated the sensory data. Tononi argues that for a system to be conscious, it must integrate information in such a way that the whole contains more information than the sum of its parts. The measure of how a system integrates information is called phi. One way of calculating phi involves dividing a system into two and calculating how dependent each part is on the other. One cut would be the “cruellest”, creating two parts that are the least dependent on each other. If the parts of the cruellest cut are completely independent, then phi is zero, and the system is not conscious. The greater their dependency, the greater the value of phi and the greater the degree of consciousness of the system. Finding the cruellest cut, however, is almost impossible for any large network. For the human brain, with its 100 billion neurons, calculating phi like this would take “longer than the age of our universe”, says Max Tegmark, a cosmologist at the Massachusetts Institute of Technology. © Copyright Reed Business Information Ltd.
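To see why the cruellest cut is so expensive, here is a toy sketch, entirely our own construction: it uses plain mutual information between the two halves as the dependency measure (IIT proper defines phi differently, over cause-effect structure) and brute-forces every bipartition of a small binary system. With n nodes there are 2^(n-1) - 1 bipartitions to check, which is the combinatorial explosion that makes the brain-scale calculation hopeless.

```python
import itertools
import math
import random
from collections import Counter

def mutual_information(samples, part_a, part_b):
    """Estimate I(A;B) in bits between two groups of node indices."""
    n = len(samples)
    def dist(part):
        return Counter(tuple(s[i] for i in part) for s in samples)
    pa, pb, pab = dist(part_a), dist(part_b), dist(part_a + part_b)
    mi = 0.0
    for ab, count in pab.items():
        a, b = ab[:len(part_a)], ab[len(part_a):]
        p = count / n
        mi += p * math.log2(p / ((pa[a] / n) * (pb[b] / n)))
    return mi

def toy_phi(samples, n_nodes):
    """Dependency across the 'cruellest cut': the bipartition whose two
    halves are least dependent. Zero means the system splits cleanly."""
    nodes = list(range(n_nodes))
    best = float("inf")
    for k in range(1, n_nodes // 2 + 1):        # every bipartition size
        for part_a in itertools.combinations(nodes, k):
            part_b = [i for i in nodes if i not in part_a]
            best = min(best, mutual_information(samples, list(part_a), part_b))
    return best

random.seed(0)
# (a) integrated: all four nodes are noisy copies of one hidden source.
integrated = []
for _ in range(5000):
    src = random.randint(0, 1)
    integrated.append(tuple(src if random.random() < 0.9 else 1 - src
                            for _ in range(4)))
# (b) modular: two independent pairs, so one cut is informationally free.
modular = [(x, x, y, y) for x, y in
           ((random.randint(0, 1), random.randint(0, 1)) for _ in range(5000))]

print(f"toy phi, integrated system: {toy_phi(integrated, 4):.3f}")  # clearly > 0
print(f"toy phi, modular system:    {toy_phi(modular, 4):.3f}")     # near 0
```

Even this toy scales badly: each extra node roughly doubles the number of cuts to test, which is exactly the blow-up that Tegmark's shortcut, described in the article, is meant to tame.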

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 21903 - Posted: 02.17.2016