Links for Keyword: Consciousness



Links 21 - 40 of 172

By Christof Koch We moderns take it for granted that consciousness is intimately tied up with the brain. But this assumption did not always hold. For much of recorded history, the heart was considered the seat of reason, emotion, valor and mind. Indeed, the first step in mummification in ancient Egypt was to scoop out the brain through the nostrils and discard it, whereas the heart, the liver and other internal organs were carefully extracted and preserved. The pharaoh would then have access to everything he needed in his afterlife. Everything except for his brain! Several millennia later Aristotle, one of the greatest of all biologists, taxonomists, embryologists and the first evolutionist, had this to say: “And of course, the brain is not responsible for any of the sensations at all. The correct view [is] that the seat and source of sensation is the region of the heart.” He argued consistently that the primary function of the wet and cold brain is to cool the warm blood coming from the heart. Another set of historical texts is no more insightful on this question. The Old and the New Testaments are filled with references to the heart but entirely devoid of any mentions of the brain. Debate about what the brain does grew ever more intense over ensuing millennia. The modern embodiment of these arguments seeks to identify the precise areas within the three-pound cranial mass where consciousness arises. What follows is an attempt to size up the past and present of this transmillennial journey. The field has scored successes in delineating a brain region that keeps the neural engine humming. Switched on, you are awake and conscious. In another setting, your body is asleep, yet you still have experiences—you dream. In a third position, you are deeply asleep, effectively off-line. © 2017 Scientific American

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23361 - Posted: 03.16.2017

Is there life after death for our brains? It depends. Loretta Norton, a doctoral student at Western University in Canada, was curious, so she and her collaborators asked critically ill patients and their families if they could record brain activity in the half hours before and after life support was removed. They ended up recording four patients with electroencephalography, better known as EEG, which uses small electrodes attached to a person’s head to measure electrical activity in the brain. In three patients, the EEG showed brain activity stopping up to 10 minutes before the person’s heart stopped beating. But in a fourth, the EEG picked up so-called delta wave bursts up to 10 minutes after the person’s heart stopped. Delta waves are associated with deep sleep, also known as slow-wave sleep. In living people, neuroscientists consider slow-wave sleep to be a key process in consolidating memories. The study also raises questions about the exact moment when death occurs. Here’s Neuroskeptic: Another interesting finding was that the actual moment at which the heart stopped was not associated with any abrupt change in the EEG. The authors found no evidence of the large “delta blip” (the so-called “death wave”), an electrical phenomenon which has been observed in rats following decapitation. With only four patients, it’s difficult to draw any sort of broad conclusion from this study. But it does suggest that death may be a gradual process as opposed to a distinct moment in time. © 1996-2017 WGBH Educational Foundation

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23348 - Posted: 03.13.2017

Sara Reardon Like ivy plants that send runners out searching for something to cling to, the brain’s neurons send out shoots that connect with other neurons throughout the organ. A new digital reconstruction method shows three neurons that branch extensively throughout the brain, including one that wraps around its entire outer layer. The finding may help to explain how the brain creates consciousness. Christof Koch, president of the Allen Institute for Brain Science in Seattle, Washington, explained his group’s new technique at a 15 February meeting of the Brain Research through Advancing Innovative Neurotechnologies initiative in Bethesda, Maryland. He showed how the team traced three neurons from a small, thin sheet of cells called the claustrum — an area that Koch believes acts as the seat of consciousness in mice and humans. Tracing all the branches of a neuron using conventional methods is a massive task. Researchers inject individual cells with a dye, slice the brain into thin sections and then trace the dyed neuron’s path by hand. Very few have been able to trace a neuron through the entire organ. This new method is less invasive and scalable, saving time and effort. Koch and his colleagues engineered a line of mice so that a certain drug activated specific genes in claustrum neurons. When the researchers fed the mice a small amount of the drug, only a handful of neurons received enough of it to switch on these genes. That resulted in production of a green fluorescent protein that spread throughout the entire neuron. The team then took 10,000 cross-sectional images of the mouse brain and used a computer program to create a 3D reconstruction of just three glowing cells. © 2017 Macmillan Publishers Limited

Related chapters from BN8e: Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 2: Cells and Structures: The Anatomy of the Nervous System; Chapter 14: Attention and Consciousness
Link ID: 23283 - Posted: 02.25.2017

By Virginia Morell Strange as it might seem, not all animals can immediately recognize themselves in a mirror. Great apes, dolphins, Asian elephants, and Eurasian magpies can do this—as can human kids around age 2. Now, some scientists are welcoming another creature to this exclusive club: carefully trained rhesus monkeys. The findings suggest that with time and teaching, other animals can learn how mirrors work, and thus learn to recognize themselves—a key test of cognition. “It’s a really interesting paper because it shows not only what the monkeys can’t do, but what it takes for them to succeed,” says Diana Reiss, a cognitive psychologist at Hunter College in New York City, who has given the test to dolphins and Asian elephants in other experiments. The mirror self-recognition test (MSR) is revered as a means of testing self-awareness. A scientist places a colored, odorless mark on an animal where it can’t see it, usually the head or shoulder. If the animal looks in the mirror and spontaneously rubs the mark, it passes the exam. Successful species are said to understand the concept of “self” versus “other.” But some researchers wonder whether failure is simply a sign that the exam itself is inadequate, perhaps because some animals can’t understand how mirrors work. Some animals—like rhesus monkeys, dogs, and pigs—don’t recognize themselves in mirrors, but can use them to find food. That discrepancy puzzled Mu-ming Poo, a neurobiologist at the Shanghai Institutes for Biological Sciences in China, and one of the study’s authors. “There must be some transition between that simple mirror use and recognizing yourself,” he says. © 2017 American Association for the Advancement of Science.

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23224 - Posted: 02.14.2017

Ian Sample Science editor Doctors have used a brain-reading device to hold simple conversations with “locked-in” patients in work that promises to transform the lives of people who are too disabled to communicate. The groundbreaking technology allows the paralysed patients – who have not been able to speak for years – to answer “yes” or “no” to questions by detecting telltale patterns in their brain activity. Three women and one man, aged 24 to 76, were trained to use the system more than a year after they were diagnosed with completely locked-in syndrome, or CLIS. The condition was brought on by amyotrophic lateral sclerosis, or ALS, a progressive neurodegenerative disease which leaves people totally paralysed but still aware and able to think. “It’s the first sign that completely locked-in syndrome may be abolished forever, because with all of these patients, we can now ask them the most critical questions in life,” said Niels Birbaumer, a neuroscientist who led the research at the University of Tübingen. “This is the first time we’ve been able to establish reliable communication with these patients and I think that is important for them and their families,” he added. “I can say that after 30 years of trying to achieve this, it was one of the most satisfying moments of my life when it worked.” © 2017 Guardian News and Media Limited

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 23176 - Posted: 02.01.2017

By Drake Baer Philosophers have been arguing about the nature of will for at least 2,000 years. It’s at the core of blockbuster social-psychology findings, from delayed gratification to ego depletion to grit. But it’s only recently, thanks to the tools of brain imaging, that the act of willing is starting to be captured at a mechanistic level. A primary example is “cognitive control,” or how the brain selects goal-serving behavior from competing processes like so many unruly third-graders with their hands in the air. It’s the rare neuroscience finding that’s immediately applicable to everyday life: By knowing the way the brain is disposed to behaving or misbehaving in accordance with your goals, it’s easier to get the results you’re looking for, whether it’s avoiding the temptation of chocolate cookies or the pull of darkly ruminative thoughts. Jonathan Cohen, who runs a neuroscience lab dedicated to cognitive control at Princeton, says that it underlies just about every other flavor of cognition that’s thought to “make us human,” whether it’s language, problem solving, planning, or reasoning. “If I ask you not to scratch the mosquito bite that you have, you could comply with my request, and that’s remarkable,” he says. Every other species — ape, dog, cat, lizard — will automatically indulge in the scratching of the itch. (Why else would a pup need a post-surgery cone?) It’s plausible that a rat or monkey could be taught not to scratch an itch, he says, but that would probably take thousands of trials. But any psychologically and physically able human has the capacity to do so. “It’s a hardwired reflex that is almost certainly coded genetically,” he says. “But with three words — don’t scratch it — you can override those millions of years of evolution. That’s cognitive control.” © 2017, New York Media LLC.

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23067 - Posted: 01.07.2017

Perry Link People who study other cultures sometimes note that they benefit twice: first by learning about the other culture and second by realizing that certain assumptions of their own are arbitrary. In reading Colin McGinn’s fine recent piece, “Groping Toward the Mind,” in The New York Review, I was reminded of a question I had pondered in my 2013 book Anatomy of Chinese: whether some of the struggles in Western philosophy over the concept of mind—especially over what kind of “thing” it is—might be rooted in Western language. The puzzles are less puzzling in Chinese. Indo-European languages tend to prefer nouns, even when talking about things for which verbs might seem more appropriate. The English noun inflation, for example, refers to complex processes that were not a “thing” until language made them so. Things like inflation can even become animate, as when we say “we need to combat inflation” or “inflation is killing us at the check-out counter.” Modern cognitive linguists like George Lakoff at Berkeley call inflation an “ontological metaphor.” (The inflation example is Lakoff’s.) When I studied Chinese, though, I began to notice a preference for verbs. Modern Chinese does use ontological metaphors, such as fāzhǎn (literally “emit and unfold”) to mean “development” or xìnxīn (“believe mind”) for “confidence.” But these are modern words that derive from Western languages (mostly via Japanese) and carry a Western flavor with them. “I firmly believe that…” is a natural phrase in Chinese; you can also say “I have a lot of confidence that…” but the use of a noun in such a phrase is a borrowing from the West. © 1963-2016 NYREV, Inc

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition; Chapter 19: Language and Lateralization
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23031 - Posted: 12.28.2016

By Susana Martinez-Conde, Stephen L. Macknik We think we know what we want—but do we, really? In 2005 Lars Hall and Petter Johansson, both at Lund University in Sweden, ran an experiment that transformed how cognitive scientists think about choice. The experimental setup looked deceptively simple. A study participant and researcher faced each other across a table. The scientist offered two photographs of young women deemed equally attractive by an independent focus group. The subject then had to choose which portrait he or she found more appealing. Next, the experimenter turned both pictures over, moved them toward the subject and asked them to pick up the photo they had just chosen. Subjects complied, unaware that the researcher had just performed a swap using a sleight-of-hand technique known to conjurers as black art. Because your visual neurons are built to detect and enhance contrast, it is very hard to see black on black: a magician dressed in black against a black velvet backdrop can look like a floating head. Hall and Johansson deliberately used a black tabletop in their experiment. The first photos their subjects saw all had black backs. Behind those, however, they hid a second picture of the opposite face with a red back. When the experimenter placed the first portrait face down on the table, he pushed the second photo toward the subject. When participants picked up the red-backed photos, the black-backed ones stayed hidden against the table's black surface—that is, until the experimenter could surreptitiously sweep them into his lap. © 2016 Scientific American

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 23021 - Posted: 12.26.2016

Amanda Gefter As we go about our daily lives, we tend to assume that our perceptions—sights, sounds, textures, tastes—are an accurate portrayal of the real world. Sure, when we stop and think about it—or when we find ourselves fooled by a perceptual illusion—we realize with a jolt that what we perceive is never the world directly, but rather our brain’s best guess at what that world is like, a kind of internal simulation of an external reality. Still, we bank on the fact that our simulation is a reasonably decent one. If it weren’t, wouldn’t evolution have weeded us out by now? The true reality might be forever beyond our reach, but surely our senses give us at least an inkling of what it’s really like. Not so, says Donald D. Hoffman, a professor of cognitive science at the University of California, Irvine. Hoffman has spent the past three decades studying perception, artificial intelligence, evolutionary game theory and the brain, and his conclusion is a dramatic one: The world presented to us by our perceptions is nothing like reality. What’s more, he says, we have evolution itself to thank for this magnificent illusion, as it maximizes evolutionary fitness by driving truth to extinction. Getting at questions about the nature of reality, and disentangling the observer from the observed, is an endeavor that straddles the boundaries of neuroscience and fundamental physics. On one side you’ll find researchers scratching their chins raw trying to understand how a three-pound lump of gray matter obeying nothing more than the ordinary laws of physics can give rise to first-person conscious experience. This is the aptly named “hard problem.”

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22937 - Posted: 12.01.2016

In his memoir Do No Harm, Henry Marsh confesses to the uncertainties he's dealt with as a surgeon and reflects on the enigmas of the brain and consciousness. Originally broadcast May 26, 2015. DAVE DAVIES, HOST: This is FRESH AIR. I'm Dave Davies, sitting in for Terry Gross. Our guest has opened heads and cut into brains, performing delicate and risky surgery on the part of the body that controls everything - breathing, movement, memory, and consciousness. In his work as a neurosurgeon, Dr. Henry Marsh has fixed aneurysms and spinal problems and spent many years operating on brain tumors. In his memoir, Dr. Marsh discusses some of his most challenging cases, triumphs and failures and confesses to the fears and uncertainties he's dealt with. He explains the surgical instruments he uses and how procedures have changed since he started practicing. And he reflects on the state of his profession and the mysteries of the brain and consciousness. Last year, he retired as the senior consulting neurosurgeon at St. George's Hospital in London, where he practiced for 28 years. He was the subject of the Emmy Award-winning 2007 documentary "The English Surgeon," which followed him in Ukraine, trying to help patients and improve conditions at a rundown hospital. Marsh's book, "Do No Harm," is now out in paperback. Terry spoke to him when it was published in hardback. © 2016 npr

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22732 - Posted: 10.08.2016

George Paxinos Many people today believe they possess a soul. While conceptions of the soul differ, many would describe it as an “invisible force that appears to animate us”. It’s often believed the soul can survive death and is intimately associated with a person’s memories, passions and values. Some argue the soul has no mass, takes no space and is localised nowhere. But as a neuroscientist and psychologist, I have no use for the soul. On the contrary, all functions attributable to this kind of soul can be explained by the workings of the brain. Psychology is the study of behaviour. To carry out their work of modifying behaviour, such as in treating addiction, phobia, anxiety and depression, psychologists do not need to assume people have souls. For the psychologists, it is not so much that souls do not exist, it is that there is no need for them. It is said psychology lost its soul in the 1930s. By this time, the discipline fully became a science, relying on experimentation and control rather than introspection. What is the soul? It is not only religious thinkers who have proposed that we possess a soul. Some of the most notable proponents have been philosophers, such as Plato (424-348 BCE) and René Descartes in the 17th century. Plato believed we do not learn new things but recall things we knew before birth. For this to be so, he concluded, we must have a soul. Centuries later, Descartes wrote his thesis Passions of the Soul, where he argued there was a distinction between the mind, which he described as a “thinking substance”, and the body, “the extended substance”. He wrote: © 2010–2016, The Conversation US, Inc.

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22692 - Posted: 09.26.2016

By Usha Lee McFarling @ushamcfarling LOS ANGELES — A team of physicians and neuroscientists on Wednesday reported the successful use of ultrasound waves to “jump start” the brain of a 25-year-old man recovering from coma — and plan to launch a much broader test of the technique, in hopes of finding a way to help at least some of the tens of thousands of patients in vegetative states. The team, based at the University of California, Los Angeles, cautions that the evidence so far is thin: They have no way to know for sure whether the ultrasound stimulation made the difference for their young patient, or whether he spontaneously recovered by coincidence shortly after the therapy. But the region of the brain they targeted with the ultrasound — the thalamus — has previously been shown to be important in restoring consciousness. In 2007, a 38-year-old man who had been minimally conscious for six years regained some functions after electrodes were implanted in his brain to stimulate the thalamus. The ultrasound technique is a “good idea” that merits further study, said Dr. Nicholas Schiff, a pioneer in the field of using brain stimulation to restore consciousness who conducted the 2007 study. “It’s intriguing and it’s an interesting possibility,” said Schiff, a neuroscientist at Weill Cornell Medicine. The UCLA procedure used an experimental device, about the size of a teacup saucer, to focus ultrasonic waves on the thalamus, two walnut-sized bulbs in the center of the brain that serve as a critical hub for information flow and help regulate consciousness and sleep.

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22606 - Posted: 08.27.2016

Rachel Ehrenberg The brain doesn’t really go out like a light when anesthesia kicks in. Nor does neural activity gradually dim, a new study in monkeys reveals. Rather, intermittent flickers of brain activity appear as the effects of an anesthetic take hold. Some synchronized networks of brain activity fall out of step as the monkeys gradually drift from wakefulness, the study showed. But those networks resynchronized when deep unconsciousness set in, researchers reported in the July 20 Journal of Neuroscience. That the two networks behave so differently during the drifting-off stage is surprising, says study coauthor Yumiko Ishizawa of Harvard Medical School and Massachusetts General Hospital. It isn’t clear what exactly is going on, she says, except that the anesthetic’s effects are a lot more complex than previously thought. Most studies examining how anesthesia works use electroencephalograms, or EEGs, which record brain activity using electrodes on the scalp. The new study offers unprecedented surveillance by eavesdropping via electrodes implanted inside macaque monkeys’ brains. This new view provides clues to how the brain loses and gains consciousness. “It’s a very detailed description of something we know very little about,” says cognitive neuroscientist Tristan Bekinschtein of the University of Cambridge, who was not involved with the work. Although the study is elegant, it isn’t clear what to make of the findings, he says. “These are early days.” |© Society for Science & the Public 2000 - 2016.

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition; Chapter 14: Biological Rhythms, Sleep, and Dreaming
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 10: Biological Rhythms and Sleep
Link ID: 22457 - Posted: 07.20.2016

Michael Egnor The most intractable question in modern neuroscience and philosophy of the mind is often phrased "What is consciousness?" The problem has been summed up nicely by philosopher David Chalmers as what he calls the Hard Problem of consciousness: How is it that we are subjects, and not just objects? Chalmers contrasts this hard question with what he calls the Easy Problem of consciousness: What are the neurobiological substrates underlying such things as wakefulness, alertness, attention, arousal, etc. Chalmers doesn't mean of course that the neurobiology of arousal is easy. He merely means to show that even if we can understand arousal from a neurobiological standpoint, we haven't yet solved the hard problem: the problem of subjective experience. Why am I an I, and not an it? Chalmers's point is a good one, and I think that it has a rather straightforward solution. First, some historical background is necessary. "What is consciousness?" is a modern question. It wasn't asked before the 17th century, because no one before Descartes thought that the mind was particularly mysterious. The problem of consciousness was created by moderns. The scholastic philosophers, following Aristotle and Aquinas, understood the soul as the animating principle of the body. In a human being, the powers of the soul -- intellect, will, memory, perception, appetite, and such -- were no more mysterious than the other powers of the soul, such as respiration, circulation, etc. Of course, biology in the Middle Ages wasn't as advanced as it is today, so there was much they didn't understand about human physiology, but in principle the mind was just another aspect of human biology, not inherently mysterious. In modern parlance, the scholastics saw the mind as the Easy Problem, no more intractable than understanding how breathing or circulation work.

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22441 - Posted: 07.15.2016

George Johnson A paper in The British Medical Journal in December reported that cognitive behavioral therapy — a means of coaxing people into changing the way they think — is as effective as Prozac or Zoloft in treating major depression. In ways no one understands, talk therapy reaches down into the biological plumbing and affects the flow of neurotransmitters in the brain. Other studies have found similar results for “mindfulness” — Buddhist-inspired meditation in which one’s thoughts are allowed to drift gently through the head like clouds reflected in still mountain water. Findings like these have become so commonplace that it’s easy to forget their strange implications. Depression can be treated in two radically different ways: by altering the brain with chemicals, or by altering the mind by talking to a therapist. But we still can’t explain how mind arises from matter or how, in turn, mind acts on the brain. This longstanding conundrum — the mind-body problem — was succinctly described by the philosopher David Chalmers at a recent symposium at The New York Academy of Sciences. “The scientific and philosophical consensus is that there is no nonphysical soul or ego, or at least no evidence for that,” he said. Descartes’s notion of dualism — mind and body as separate things — has long receded from science. The challenge now is to explain how the inner world of consciousness arises from the flesh of the brain. © 2016 The New York Times Company

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22397 - Posted: 07.05.2016

By Clare Wilson People who meditate are more aware of their unconscious brain activity – or so a new take on a classic “free will” experiment suggests. The results hint that the feeling of conscious control over our actions can vary – and provide more clues to understanding the complex nature of free will. The famous experiment that challenged our notions of free will was first done in 1983 by neuroscientist Benjamin Libet. It involved measuring electrical activity in someone’s brain while asking them to press a button, whenever they like, while they watch a special clock that allows them to note the time precisely. Typically people feel like they decide to press the button about 200 milliseconds before their finger moves – but the electrodes reveal activity in the part of their brain that controls movement occurs a further 350 milliseconds before they feel they make that decision. This suggests that in fact it is the unconscious brain that “decides” when to press the button. In the new study, a team at the University of Sussex in Brighton, UK, did a slimmed-down version of the experiment (omitting the brain electrodes), with 57 volunteers, 11 of whom regularly practised mindfulness meditation. The meditators had a longer gap in time between when they felt like they decided to move their finger and when it physically moved – 149 compared with 68 milliseconds for the other people. © Copyright Reed Business Information Ltd.
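The timing relationships reported above are easier to see laid out on a single timeline. This is an illustrative sketch only (the variable names are mine; the millisecond figures are the ones quoted in the text), placing each event relative to the moment the finger moves at t = 0:

```python
# Illustrative timeline of the classic Libet result, with t = 0 ms at the
# moment the finger actually moves. All figures come from the text above.

movement = 0                                 # finger moves
felt_decision = movement - 200               # decision is felt ~200 ms earlier
readiness_potential = felt_decision - 350    # motor-area activity begins 350 ms before that

# Unconscious motor-area activity therefore leads the movement by:
lead_time = movement - readiness_potential
print(lead_time, "ms")  # 550 ms

# Sussex follow-up: gap between felt decision and movement, per group
gap_meditators_ms = 149
gap_others_ms = 68
print(gap_meditators_ms - gap_others_ms, "ms")  # meditators report deciding 81 ms earlier
```

The arithmetic makes the headline claim concrete: the recorded brain activity precedes the movement by roughly half a second, well before the subject feels any decision at all.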

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22369 - Posted: 06.28.2016

Michael Graziano Ever since Charles Darwin published On the Origin of Species in 1859, evolution has been the grand unifying theory of biology. Yet one of our most important biological traits, consciousness, is rarely studied in the context of evolution. Theories of consciousness come from religion, from philosophy, from cognitive science, but not so much from evolutionary biology. Maybe that’s why so few theories have been able to tackle basic questions such as: What is the adaptive value of consciousness? When did it evolve and what animals have it? The Attention Schema Theory (AST), developed over the past five years, may be able to answer those questions. The theory suggests that consciousness arises as a solution to one of the most fundamental problems facing any nervous system: Too much information constantly flows in to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others, and in the AST, consciousness is the ultimate result of that evolutionary sequence. If the theory is right—and that has yet to be determined—then consciousness evolved gradually over the past half billion years and is present in a range of vertebrate species. Even before the evolution of a central brain, nervous systems took advantage of a simple computing trick: competition. Neurons act like candidates in an election, each one shouting and trying to suppress its fellows. At any moment only a few neurons win that intense competition, their signals rising up above the noise and impacting the animal’s behavior. This process is called selective signal enhancement, and without it, a nervous system can do almost nothing. © 2016 by The Atlantic Monthly Group
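The neuronal “election” Graziano describes is commonly modeled as winner-take-all competition via lateral inhibition. The sketch below is not from the article; the update rule, parameters, and function name are my own illustrative choices. Each unit is driven by its input and inhibited in proportion to the pooled activity of its rivals, so after a few iterations only the strongest signal survives:

```python
# Minimal winner-take-all sketch of "selective signal enhancement".
# Hypothetical model, not from the article: each unit receives a fixed
# input and is suppressed by the summed activity of every other unit;
# activities are clamped at zero.

def compete(inputs, inhibition=1.0, steps=20):
    acts = list(inputs)
    for _ in range(steps):
        total = sum(acts)
        acts = [max(0.0, inp - inhibition * (total - a))
                for a, inp in zip(acts, inputs)]
    return acts

# Three "candidate" signals of different strengths: the strongest rises
# above the noise while its rivals are driven to zero.
print(compete([1.0, 0.6, 0.3]))  # -> [1.0, 0.0, 0.0]
```

Lowering the `inhibition` parameter softens the competition so that a few of the strongest units can stay active at once, which is closer to the article’s picture of “a few neurons” winning at any moment.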

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 22306 - Posted: 06.09.2016

By Hanoch Ben-Yami Adam Bear opens his article, “What Neuroscience Says about Free Will,” by mentioning a few cases such as pressing snooze on the alarm clock or picking a shirt out of the closet. He continues with an assertion about these cases, and with a question: In each case, we conceive of ourselves as free agents, consciously guiding our bodies in purposeful ways. But what does science have to say about the true source of this experience? This is a bad start. To be aware of ourselves as free agents is not to have an experience. There’s no special tickle which tells you you’re free, no "freedom itch." Rather, to be aware of the fact that you acted freely is, among other things, to know that had you preferred to do something else in those circumstances, you would have done it. And in many circumstances we clearly know that this is the case, so in many circumstances we are aware that we act freely. No experience is involved, and so far there’s no question in Bear’s article for science to answer. Continuing with his alleged experience, Bear writes: …the psychologists Dan Wegner and Thalia Wheatley made a revolutionary proposal: The experience of intentionally willing an action, they suggested, is often nothing more than a post hoc causal inference that our thoughts caused some behavior. More than a revolutionary proposal, this is an additional confusion. What might "intentionally willing an action" mean? Is it to be contrasted with non-intentionally willing an action? But what could this stand for? © 2016 Scientific American

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 22282 - Posted: 06.04.2016

By David Shultz We still may not know what causes consciousness in humans, but scientists are at least learning how to detect its presence. A new application of a common clinical test, the positron emission tomography (PET) scan, seems to be able to differentiate between minimally conscious brains and those in a vegetative state. The work could help doctors figure out which brain trauma patients are the most likely to recover—and even shed light on the nature of consciousness. “This is really cool what these guys did here,” says neuroscientist Nicholas Schiff at Cornell University, who was not involved in the study. “We’re going to make great use of it.” PET scans work by introducing a small amount of radionuclides into the body. These radioactive compounds act as tracers and naturally emit subatomic particles called positrons over time, and the gamma rays indirectly produced by this process can be detected by imaging equipment. The most common PET scan uses fluorodeoxyglucose (FDG) as the tracer in order to show how glucose concentrations change in tissue over time—a proxy for metabolic activity. Compared with other imaging techniques, PET scans are relatively cheap and easy to perform, and are routinely used to screen for cancer, heart problems, and other diseases. In the new study, researchers used FDG-PET scans to analyze the resting cerebral metabolic rate—the amount of energy being used by the tissue—of 131 patients with a so-called disorder of consciousness and 28 healthy controls. Disorders of consciousness cover a wide range of problems, from a full-blown coma to a minimally conscious state in which patients may experience brief periods during which they can communicate and follow instructions. Between these two extremes, patients may be said to be in a vegetative state or to exhibit unresponsive wakefulness, characterized by open eyes and basic reflexes but no signs of awareness.
Most disorders of consciousness result from head trauma, and where someone falls on the consciousness continuum is typically determined by the severity of the injury. © 2016 American Association for the Advancement of Science

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 22260 - Posted: 05.28.2016

Stephen Cave For centuries, philosophers and theologians have almost unanimously held that civilization as we know it depends on a widespread belief in free will—and that losing this belief could be calamitous. Our codes of ethics, for example, assume that we can freely choose between right and wrong. In the Christian tradition, this is known as “moral liberty”—the capacity to discern and pursue the good, instead of merely being compelled by appetites and desires. The great Enlightenment philosopher Immanuel Kant reaffirmed this link between freedom and goodness. If we are not free to choose, he argued, then it would make no sense to say we ought to choose the path of righteousness. Today, the assumption of free will runs through every aspect of American politics, from welfare provision to criminal law. It permeates the popular culture and underpins the American dream—the belief that anyone can make something of themselves no matter what their start in life. As Barack Obama wrote in The Audacity of Hope, American “values are rooted in a basic optimism about life and a faith in free will.” So what happens if this faith erodes? The sciences have grown steadily bolder in their claim that all human behavior can be explained through the clockwork laws of cause and effect. This shift in perception is the continuation of an intellectual revolution that began about 150 years ago, when Charles Darwin first published On the Origin of Species. Shortly after Darwin put forth his theory of evolution, his cousin Sir Francis Galton began to draw out the implications: If we have evolved, then mental faculties like intelligence must be hereditary. But we use those faculties—which some people have to a greater degree than others—to make decisions. So our ability to choose our fate is not free, but depends on our biological inheritance. © 2016 by The Atlantic Monthly Group.

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 22228 - Posted: 05.18.2016