Links for Keyword: Consciousness



Links 1 - 20 of 158

By Neuroskeptic In a thought-provoking new paper called “What are neural correlates neural correlates of?”, NYU sociologist Gabriel Abend argues that neuroscientists need to pay more attention to philosophy, social science, and the humanities. Abend’s main argument is that if we are to study the neural correlates or neural basis of a certain phenomenon, we must first define that phenomenon and know how to identify instances of it. Sometimes, this identification is straightforward: in a study of brain responses to the taste of sugar, say, there is little room for confusion, because we all agree on what sugar is. However, if a neuroscientist wants to study the neural correlates of, say, love, they will need to decide what love is, and this is something that philosophers and others have been debating for a long time. Abend argues that cognitive neuroscientists “cannot avoid taking sides in philosophical and social science controversies” when studying phenomena, such as love or morality, that have no neutral, universally accepted definition. In choosing a particular set of stimuli to experimentally evoke something, neuroscientists are aligning themselves with a certain theory of what that thing is. For example, the field of “moral neuroscience” makes heavy use of a family of hypothetical dilemmas called trolley problems. In the classic version, we must choose whether to divert a runaway trolley onto a side track, where it will kill one person instead of five; in the well-known “footbridge” variant, the only way to save the five is to push one person into the trolley’s path, killing them but saving the other five.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 23501 - Posted: 04.18.2017

By John Horgan I’m writing a book on the mind-body problem, and one theme is that mind-theorists’ views are shaped by emotionally traumatic experiences, like mental illness, the death of a child and the breakup of a marriage. David Chalmers is a striking counter-example. He seems remarkably well adjusted and rational, especially for a philosopher. I’ve tracked his career since I heard him call consciousness “the hard problem” in 1994. Although I often disagree with him—about, for example, whether information theory can help solve consciousness—I’ve always found him an admirably clear thinker who doesn’t oversell his ideas (unlike Daniel Dennett when he insists that consciousness is an “illusion”). Just in the last couple of years, Chalmers’s writings, talks and meetings have helped me understand integrated information theory, Bayesian brains, the ethical implications of artificial intelligence and philosophy’s lack of progress, among other topics. Last year I interviewed Chalmers at his home in a woody suburb of New York City. My major takeaway: Although he has faith that consciousness can be scientifically solved, Chalmers doesn’t think we’re close to a final theory, and even if we find such a theory, consciousness might remain as philosophically confusing as, say, quantum mechanics. In other words, Chalmers is a philosophical hybrid who fuses optimism with mysterianism, the position that consciousness is intractable. Below are edited excerpts from our conversation. Chalmers, now 50, was born and raised in Australia. His parents split up when he was five. “My father is a medical researcher, a pretty successful scientist and administrator in medicine in Australia… My mother is I would say a spiritual thinker.” “So if you want an historical story, I guess I end up halfway between my father and mother… My father is a reductionist, and my mother is very much a non-reductionist. I’m a non-reductionist with a tolerance for ideas that might look a bit crazy to some people, like the idea that there’s consciousness everywhere, consciousness is not reducible to something physical. That said, the tradition I’m working in is very much in the western scientific and analytic tradition.” © 2017 Scientific American

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 23499 - Posted: 04.17.2017

Marcelo Gleiser The idea that neuroscience is rediscovering the soul is, to most scientists and philosophers, nothing short of outrageous. Of course it is not. But the widespread, adverse, knee-jerk attitude presupposes the old-fashioned definition of the soul — the ethereal, immaterial entity that somehow encapsulates your essence. Surely, this kind of supernatural mumbo-jumbo has no place in modern science. And I agree. The Cartesian separation of body and soul, the res extensa (matter stuff) vs. res cogitans (mind stuff), has long been discarded as untenable in a strictly materialistic description of natural phenomena. After all, how would something immaterial interact with something material without any exchange of energy? And how would something immaterial — whatever that means — somehow maintain the essence of who you are beyond your bodily existence? So, this kind of immaterial soul really presents problems for science, although, as pointed out here recently by Adam Frank, the scientific understanding of matter is not without its challenges. But what if we revisit the definition of soul, abandoning its canonical meaning as the "spiritual or immaterial part of a human being or animal, regarded as immortal" for something more modern? What if we consider your soul as the sum total of your neurocognitive essence, your very specific brain signature, the unique neuronal connections, synapses, and flow of neurotransmitters that makes you you? © 2017 npr

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 23454 - Posted: 04.06.2017

Bruce Bower Kids can have virtual out-of-body experiences as early as age 6. Oddly enough, the ability to inhabit a virtual avatar signals a budding sense that one’s self is located in one’s own body, researchers say. Grade-schoolers were stroked on their backs with a stick while viewing virtual versions of themselves undergoing the same touch. Just after the session ended, the children often reported that they had felt like the virtual body was their actual body, says psychologist Dorothy Cowie of Durham University in England. This sense of being a self in a body, which can be virtually manipulated via sight and touch, gets stronger and more nuanced throughout childhood, the scientists report March 22 in Developmental Science. By around age 10, individuals start to report feeling the touch of a stick stroking a virtual body, denoting a growing integration of sensations with the experience of body ownership, Cowie’s team finds. A year after that, youngsters still don’t display all the elements of identifying self with body observed in adults. During virtual reality trials, only adults perceived their actual bodies as physically moving through space toward virtual bodies receiving synchronized touches. This first-of-its-kind study opens the way to studying how a sense of self develops from childhood on, says cognitive neuroscientist Olaf Blanke of the Swiss Federal Institute of Technology in Lausanne. “The new data clearly show that kids at age 6 have brain mechanisms that generate an experience of being a self located inside one’s own body.” He suspects that a beginner’s version of “my body is me” emerges by age 4. © Society for Science & the Public 2000 - 2017.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 13: Memory, Learning, and Development
Link ID: 23446 - Posted: 04.04.2017

By Anna Buckley BBC Science Radio Unit In an infamous memo written in 1965, the philosopher Hubert Dreyfus stated that humans would always beat computers at chess because machines lacked intuition. Daniel Dennett disagreed. A few years later, Dreyfus rather embarrassingly found himself in checkmate against a computer. And in May 1997, the IBM computer Deep Blue defeated the world chess champion Garry Kasparov. Many who were unhappy with this result then claimed that chess was a boringly logical game. Computers didn't need intuition to win. The goalposts shifted. Daniel Dennett has always believed our minds are machines. For him, the question is not “Can computers be human?” but “Are humans really that clever?” In an interview with BBC Radio 4's The Life Scientific, Dennett says there's nothing special about intuition. “Intuition is simply knowing something without knowing how you got there.” Dennett blames the philosopher Rene Descartes for permanently polluting our thinking about the human mind. Descartes couldn't imagine how a machine could be capable of thinking, feeling and imagining. Such talents must be God-given. He was writing in the 17th century, when machines were made of levers and pulleys, not CPUs and RAM, so perhaps we can forgive him. Our brains are made of a hundred billion neurons. If you were to count all the neurons in your brain at a rate of one a second, it would take more than 3,000 years. Our minds are made of molecular machines, otherwise known as brain cells. And if you find this depressing, then you lack imagination, says Dennett. “Do you know the power of a machine made of a trillion moving parts?” he asks. “We're not just robots,” he says. “We're robots, made of robots, made of robots.” Our brain cells are robots that respond to chemical signals. The motor proteins they create are robots. And so it goes on. © 2017 BBC.
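
That 3,000-year figure survives a quick back-of-the-envelope check, taking the round count of one hundred billion neurons at face value:

```latex
\[
\frac{10^{11}\ \text{neurons}}{1\ \text{neuron per second}}
  = 10^{11}\ \text{s}
  \approx \frac{10^{11}\ \text{s}}{3.16 \times 10^{7}\ \text{s per year}}
  \approx 3{,}170\ \text{years}.
\]
```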

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 23445 - Posted: 04.04.2017

By George Johnson Who knows what Arturo the polar bear was thinking as he paced back and forth in the dark, air-conditioned chamber behind his artificial grotto? Just down the pathway Cecilia sat quietly in her cage, contemplating whatever chimpanzees contemplate. In recent years, both creatures, inhabitants of the Mendoza Zoological Park in Argentina, have been targets of an international campaign challenging the morality of holding animals captive as living museum exhibits. The issue is not so much physical abuse as mental abuse — the effect confinement has on the inhabitants’ minds. The idea that something resembling a subjective, contemplative mind exists in other animals has become mainstream — and not just for apes. Last July, a few months after I visited the zoo, Arturo, promoted by animal rights activists as “the world’s saddest polar bear,” died of what his keepers said were complications of old age. (His mantle has now been bestowed on Pizza, a polar bear on display at a Chinese shopping mall.) But Cecilia (the “loneliest chimp,” some sympathizers have called her) has been luckier, if luck is a concept a chimpanzee can understand. In November, Judge María Alejandra Mauricio of the Third Court of Guarantees in Mendoza decreed that Cecilia is a “nonhuman person” — one that was being denied “the fundamental right” of all sentient beings “to be born, to live, grow, and die in the proper environment for their species.” Copyright 2017 Undark

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 1: Biological Psychology: Scope and Outlook
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 1: An Introduction to Brain and Behavior
Link ID: 23437 - Posted: 04.01.2017

By Christof Koch We moderns take it for granted that consciousness is intimately tied up with the brain. But this assumption did not always hold. For much of recorded history, the heart was considered the seat of reason, emotion, valor and mind. Indeed, the first step in mummification in ancient Egypt was to scoop out the brain through the nostrils and discard it, whereas the heart, the liver and other internal organs were carefully extracted and preserved. The pharaoh would then have access to everything he needed in his afterlife. Everything except for his brain! Several millennia later, Aristotle, one of the greatest of all biologists, taxonomists and embryologists, and the first evolutionist, had this to say: “And of course, the brain is not responsible for any of the sensations at all. The correct view [is] that the seat and source of sensation is the region of the heart.” He argued consistently that the primary function of the wet and cold brain is to cool the warm blood coming from the heart. Another set of historical texts is no more insightful on this question. The Old and the New Testaments are filled with references to the heart but entirely devoid of any mentions of the brain. Debate about what the brain does grew ever more intense over the ensuing millennia. The modern embodiment of these arguments seeks to identify the precise areas within the three-pound cranial mass where consciousness arises. What follows is an attempt to size up the past and present of this transmillennial journey. The field has scored successes in delineating a brain region that keeps the neural engine humming. Switched on, you are awake and conscious. In another setting, your body is asleep, yet you still have experiences—you dream. In a third position, you are deeply asleep, effectively off-line. © 2017 Scientific American

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 23361 - Posted: 03.16.2017

Is there life after death for our brains? It depends. Loretta Norton, a doctoral student at Western University in Canada, was curious, so she and her collaborators asked critically ill patients and their families if they could record brain activity in the half hour before and after life support was removed. They ended up recording four patients with electroencephalography, better known as EEG, which uses small electrodes attached to a person’s head to measure electrical activity in the brain. In three patients, the EEG showed brain activity stopping up to 10 minutes before the person’s heart stopped beating. But in a fourth, the EEG picked up so-called delta wave bursts up to 10 minutes after the person’s heart stopped. Delta waves are associated with deep sleep, also known as slow-wave sleep. In living people, neuroscientists consider slow-wave sleep to be a key process in consolidating memories. The study also raises questions about the exact moment when death occurs. Here’s Neuroskeptic: Another interesting finding was that the actual moment at which the heart stopped was not associated with any abrupt change in the EEG. The authors found no evidence of the large “delta blip” (the so-called “death wave”), an electrical phenomenon which has been observed in rats following decapitation. With only four patients, it’s difficult to draw any sort of broad conclusion from this study. But it does suggest that death may be a gradual process as opposed to a distinct moment in time. © 1996-2017 WGBH Educational Foundation
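
In signal terms, “delta waves” are EEG activity in roughly the 0.5–4 Hz band. The sketch below illustrates one conventional way to isolate that band from a raw trace with a band-pass filter and track its power over time. It is a generic illustration, not the pipeline Norton's team used; the 250 Hz sampling rate, the filter order, and the synthetic burst are assumptions made for demonstration.

```python
# Minimal sketch: isolate delta-band (0.5-4 Hz) EEG activity with a
# band-pass filter and track its power over time. Illustrative only:
# the 250 Hz sampling rate and the synthetic signal are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # assumed sampling rate, samples per second

def delta_power(eeg, fs=FS):
    """Return the instantaneous delta-band power of a 1-D EEG trace."""
    nyquist = fs / 2
    b, a = butter(4, [0.5 / nyquist, 4.0 / nyquist], btype="bandpass")
    delta = filtfilt(b, a, eeg)  # zero-phase band-pass filtering
    return delta ** 2

# Synthetic 10-second trace: noise, plus a 2 Hz "burst" from 4 s to 6 s
t = np.arange(0, 10, 1 / FS)
eeg = 0.5 * np.random.randn(t.size)
eeg[4 * FS:6 * FS] += np.sin(2 * np.pi * 2.0 * t[4 * FS:6 * FS])

power = delta_power(eeg)
print(f"mean delta power during burst: {power[4 * FS:6 * FS].mean():.3f}")
print(f"mean delta power elsewhere:    {power[:4 * FS].mean():.3f}")
```

Zero-phase filtering with filtfilt is a common choice in this setting because it does not shift the timing of events such as bursts.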

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 23348 - Posted: 03.13.2017

Sara Reardon Like ivy plants that send runners out searching for something to cling to, the brain’s neurons send out shoots that connect with other neurons throughout the organ. A new digital reconstruction method shows three neurons that branch extensively throughout the brain, including one that wraps around its entire outer layer. The finding may help to explain how the brain creates consciousness. Christof Koch, president of the Allen Institute for Brain Science in Seattle, Washington, explained his group’s new technique at a 15 February meeting of the Brain Research through Advancing Innovative Neurotechnologies initiative in Bethesda, Maryland. He showed how the team traced three neurons from a small, thin sheet of cells called the claustrum — an area that Koch believes acts as the seat of consciousness in mice and humans. Tracing all the branches of a neuron using conventional methods is a massive task. Researchers inject individual cells with a dye, slice the brain into thin sections and then trace the dyed neuron’s path by hand. Very few have been able to trace a neuron through the entire organ. This new method is less invasive and more scalable, saving time and effort. Koch and his colleagues engineered a line of mice so that a certain drug activated specific genes in claustrum neurons. When the researchers fed the mice a small amount of the drug, only a handful of neurons received enough of it to switch on these genes. That resulted in production of a green fluorescent protein that spread throughout the entire neuron. The team then took 10,000 cross-sectional images of the mouse brain and used a computer program to create a 3D reconstruction of just three glowing cells. © 2017 Macmillan Publishers Limited
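
The final computational step described here, turning thousands of cross-sectional images into a 3D rendering of the labeled cells, can be pictured with a toy version of the computation. The sketch below is a hypothetical illustration, not the Allen Institute's actual software; the image sizes, the brightness threshold, and the green-channel convention are all assumptions.

```python
# Toy sketch of fluorescence-based reconstruction: stack serial section
# images into a 3D volume, then keep only voxels where the green
# (fluorescent-label) channel is bright. Hypothetical, illustrative only.
import numpy as np

def reconstruct_labeled_voxels(sections, threshold=0.99):
    """sections: ordered iterable of RGB images (H x W x 3), front to back.
    Returns a 3D boolean mask marking brightly labeled voxels."""
    green = np.stack([s[..., 1] for s in sections], axis=0)  # (N, H, W)
    return green > threshold

# Synthetic stand-in for the real stack of section images
rng = np.random.default_rng(0)
sections = [rng.random((64, 64, 3)) for _ in range(100)]
mask = reconstruct_labeled_voxels(sections)
print(mask.shape, int(mask.sum()), "voxels flagged as labeled")
```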

Related chapters from BP7e: Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 2: Cells and Structures: The Anatomy of the Nervous System; Chapter 14: Attention and Consciousness
Link ID: 23283 - Posted: 02.25.2017

By Virginia Morell Strange as it might seem, not all animals can immediately recognize themselves in a mirror. Great apes, dolphins, Asian elephants, and Eurasian magpies can do this—as can human kids around age 2. Now, some scientists are welcoming another creature to this exclusive club: carefully trained rhesus monkeys. The findings suggest that with time and teaching, other animals can learn how mirrors work, and thus learn to recognize themselves—a key test of cognition. “It’s a really interesting paper because it shows not only what the monkeys can’t do, but what it takes for them to succeed,” says Diana Reiss, a cognitive psychologist at Hunter College in New York City, who has given the test to dolphins and Asian elephants in other experiments. The mirror self-recognition test (MSR) is revered as a means of testing self-awareness. A scientist places a colored, odorless mark on an animal where it can’t see it, usually the head or shoulder. If the animal looks in the mirror and spontaneously rubs the mark, it passes the exam. Successful species are said to understand the concept of “self” versus “other.” But some researchers wonder whether failure is simply a sign that the exam itself is inadequate, perhaps because some animals can’t understand how mirrors work. Some animals—like rhesus monkeys, dogs, and pigs—don’t recognize themselves in mirrors, but can use them to find food. That discrepancy puzzled Mu-ming Poo, a neurobiologist at the Shanghai Institutes for Biological Sciences in China, and one of the study’s authors. “There must be some transition between that simple mirror use and recognizing yourself,” he says. © 2017 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 23224 - Posted: 02.14.2017

Ian Sample Science editor Doctors have used a brain-reading device to hold simple conversations with “locked-in” patients in work that promises to transform the lives of people who are too disabled to communicate. The groundbreaking technology allows the paralysed patients – who have not been able to speak for years – to answer “yes” or “no” to questions by detecting telltale patterns in their brain activity. Three women and one man, aged 24 to 76, were trained to use the system more than a year after they were diagnosed with completely locked-in syndrome, or CLIS. The condition was brought on by amyotrophic lateral sclerosis, or ALS, a progressive neurodegenerative disease which leaves people totally paralysed but still aware and able to think. “It’s the first sign that completely locked-in syndrome may be abolished forever, because with all of these patients, we can now ask them the most critical questions in life,” said Niels Birbaumer, a neuroscientist who led the research at the University of Tübingen. “This is the first time we’ve been able to establish reliable communication with these patients and I think that is important for them and their families,” he added. “I can say that after 30 years of trying to achieve this, it was one of the most satisfying moments of my life when it worked.” © 2017 Guardian News and Media Limited

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 23176 - Posted: 02.01.2017

By Drake Baer Philosophers have been arguing about the nature of will for at least 2,000 years. It’s at the core of blockbuster social-psychology findings, from delayed gratification to ego depletion to grit. But it’s only recently, thanks to the tools of brain imaging, that the act of willing is starting to be captured at a mechanistic level. A primary example is “cognitive control,” or how the brain selects goal-serving behavior from competing processes like so many unruly third-graders with their hands in the air. It’s the rare neuroscience finding that’s immediately applicable to everyday life: By knowing the way the brain is disposed to behaving or misbehaving in accordance with your goals, it’s easier to get the results you’re looking for, whether it’s avoiding the temptation of chocolate cookies or the pull of darkly ruminative thoughts. Jonathan Cohen, who runs a neuroscience lab dedicated to cognitive control at Princeton, says that it underlies just about every other flavor of cognition that’s thought to “make us human,” whether it’s language, problem solving, planning, or reasoning. “If I ask you not to scratch the mosquito bite that you have, you could comply with my request, and that’s remarkable,” he says. Every other species — ape, dog, cat, lizard — will automatically indulge in the scratching of the itch. (Why else would a pup need a post-surgery cone?) It’s plausible that a rat or monkey could be taught not to scratch an itch, he says, but that would probably take thousands of trials. But any psychologically and physically able human has the capacity to do so. “It’s a hardwired reflex that is almost certainly coded genetically,” he says. “But with three words — don’t scratch it — you can override those millions of years of evolution. That’s cognitive control.” © 2017, New York Media LLC.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 23067 - Posted: 01.07.2017

Perry Link People who study other cultures sometimes note that they benefit twice: first by learning about the other culture and second by realizing that certain assumptions of their own are arbitrary. In reading Colin McGinn’s fine recent piece, “Groping Toward the Mind,” in The New York Review, I was reminded of a question I had pondered in my 2013 book Anatomy of Chinese: whether some of the struggles in Western philosophy over the concept of mind—especially over what kind of “thing” it is—might be rooted in Western language. The puzzles are less puzzling in Chinese. Indo-European languages tend to prefer nouns, even when talking about things for which verbs might seem more appropriate. The English noun inflation, for example, refers to complex processes that were not a “thing” until language made them so. Things like inflation can even become animate, as when we say “we need to combat inflation” or “inflation is killing us at the check-out counter.” Modern cognitive linguists like George Lakoff at Berkeley call inflation an “ontological metaphor.” (The inflation example is Lakoff’s.) When I studied Chinese, though, I began to notice a preference for verbs. Modern Chinese does use ontological metaphors, such as fāzhǎn (literally “emit and unfold”) to mean “development” or xìnxīn (“believe mind”) for “confidence.” But these are modern words that derive from Western languages (mostly via Japanese) and carry a Western flavor with them. “I firmly believe that…” is a natural phrase in Chinese; you can also say “I have a lot of confidence that…” but the use of a noun in such a phrase is a borrowing from the West. © 1963-2016 NYREV, Inc

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 19: Language and Hemispheric Asymmetry
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 23031 - Posted: 12.28.2016

By Susana Martinez-Conde, Stephen L. Macknik We think we know what we want—but do we, really? In 2005 Lars Hall and Petter Johansson, both at Lund University in Sweden, ran an experiment that transformed how cognitive scientists think about choice. The experimental setup looked deceptively simple. A study participant and researcher faced each other across a table. The scientist offered two photographs of young women deemed equally attractive by an independent focus group. The subject then had to choose which portrait he or she found more appealing. Next, the experimenter turned both pictures over, moved them toward the subject and asked him or her to pick up the photo just chosen. Subjects complied, unaware that the researcher had just performed a swap using a sleight-of-hand technique known to conjurers as black art. Because your visual neurons are built to detect and enhance contrast, it is very hard to see black on black: a magician dressed in black against a black velvet backdrop can look like a floating head. Hall and Johansson deliberately used a black tabletop in their experiment. The first photos their subjects saw all had black backs. Behind those, however, they hid a second picture of the opposite face with a red back. When the experimenter placed the first portrait face down on the table, he pushed the second photo toward the subject. When participants picked up the red-backed photos, the black-backed ones stayed hidden against the table's black surface—that is, until the experimenter could surreptitiously sweep them into his lap. © 2016 Scientific American

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 23021 - Posted: 12.26.2016

Amanda Gefter As we go about our daily lives, we tend to assume that our perceptions—sights, sounds, textures, tastes—are an accurate portrayal of the real world. Sure, when we stop and think about it—or when we find ourselves fooled by a perceptual illusion—we realize with a jolt that what we perceive is never the world directly, but rather our brain’s best guess at what that world is like, a kind of internal simulation of an external reality. Still, we bank on the fact that our simulation is a reasonably decent one. If it wasn’t, wouldn’t evolution have weeded us out by now? The true reality might be forever beyond our reach, but surely our senses give us at least an inkling of what it’s really like. Not so, says Donald D. Hoffman, a professor of cognitive science at the University of California, Irvine. Hoffman has spent the past three decades studying perception, artificial intelligence, evolutionary game theory and the brain, and his conclusion is a dramatic one: The world presented to us by our perceptions is nothing like reality. What’s more, he says, we have evolution itself to thank for this magnificent illusion, as it maximizes evolutionary fitness by driving truth to extinction. Getting at questions about the nature of reality, and disentangling the observer from the observed, is an endeavor that straddles the boundaries of neuroscience and fundamental physics. On one side you’ll find researchers scratching their chins raw trying to understand how a three-pound lump of gray matter obeying nothing more than the ordinary laws of physics can give rise to first-person conscious experience. This is the aptly named “hard problem.”

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 22937 - Posted: 12.01.2016

In his memoir Do No Harm, Henry Marsh confesses to the uncertainties he's dealt with as a surgeon and reflects on the enigmas of the brain and consciousness. Originally broadcast May 26, 2015. DAVE DAVIES, HOST: This is FRESH AIR. I'm Dave Davies, sitting in for Terry Gross. Our guest has opened heads and cut into brains, performing delicate and risky surgery on the part of the body that controls everything - breathing, movement, memory, and consciousness. In his work as a neurosurgeon, Dr. Henry Marsh has fixed aneurysms and spinal problems and spent many years operating on brain tumors. In his memoir, Dr. Marsh discusses some of his most challenging cases, triumphs and failures and confesses to the fears and uncertainties he's dealt with. He explains the surgical instruments he uses and how procedures have changed since he started practicing. And he reflects on the state of his profession and the mysteries of the brain and consciousness. Last year, he retired as the senior consulting neurosurgeon at St. George's Hospital in London, where he practiced for 28 years. He was the subject of the Emmy Award-winning 2007 documentary "The English Surgeon," which followed him in Ukraine, trying to help patients and improve conditions at a rundown hospital. Marsh's book, "Do No Harm," is now out in paperback. Terry spoke to him when it was published in hardback. © 2016 npr

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 22732 - Posted: 10.08.2016

George Paxinos Many people today believe they possess a soul. While conceptions of the soul differ, many would describe it as an “invisible force that appears to animate us”. It’s often believed the soul can survive death and is intimately associated with a person’s memories, passions and values. Some argue the soul has no mass, takes no space and is localised nowhere. But as a neuroscientist and psychologist, I have no use for the soul. On the contrary, all functions attributable to this kind of soul can be explained by the workings of the brain. Psychology is the study of behaviour. To carry out their work of modifying behaviour, such as in treating addiction, phobia, anxiety and depression, psychologists do not need to assume people have souls. For psychologists, it is not so much that souls do not exist; it is that there is no need for them. It is said psychology lost its soul in the 1930s. By that time, the discipline had fully become a science, relying on experimentation and control rather than introspection. What is the soul? It is not only religious thinkers who have proposed that we possess a soul. Some of the most notable proponents have been philosophers, such as Plato (424-348 BCE) and René Descartes in the 17th century. Plato believed we do not learn new things but recall things we knew before birth. For this to be so, he concluded, we must have a soul. Centuries later, Descartes wrote his thesis Passions of the Soul, where he argued there was a distinction between the mind, which he described as a “thinking substance”, and the body, “the extended substance”. He wrote: … © 2010–2016, The Conversation US, Inc

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 22692 - Posted: 09.26.2016

By Usha Lee McFarling @ushamcfarling LOS ANGELES — A team of physicians and neuroscientists on Wednesday reported the successful use of ultrasound waves to “jump start” the brain of a 25-year-old man recovering from coma — and plan to launch a much broader test of the technique, in hopes of finding a way to help at least some of the tens of thousands of patients in vegetative states. The team, based at the University of California, Los Angeles, cautions that the evidence so far is thin: They have no way to know for sure whether the ultrasound stimulation made the difference for their young patient, or whether he spontaneously recovered by coincidence shortly after the therapy. But the region of the brain they targeted with the ultrasound — the thalamus — has previously been shown to be important in restoring consciousness. In 2007, a 38-year-old man who had been minimally conscious for six years regained some functions after electrodes were implanted in his brain to stimulate the thalamus. The ultrasound technique is a “good idea” that merits further study, said Dr. Nicholas Schiff, a pioneer in the field of using brain stimulation to restore consciousness who conducted the 2007 study. “It’s intriguing and it’s an interesting possibility,” said Schiff, a neuroscientist at Weill Cornell Medicine. The UCLA procedure used an experimental device, about the size of a teacup saucer, to focus ultrasonic waves on the thalamus, two walnut-sized bulbs in the center of the brain that serve as a critical hub for information flow and help regulate consciousness and sleep.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 22606 - Posted: 08.27.2016

Rachel Ehrenberg The brain doesn’t really go out like a light when anesthesia kicks in. Nor does neural activity gradually dim, a new study in monkeys reveals. Rather, intermittent flickers of brain activity appear as the effects of an anesthetic take hold. Some synchronized networks of brain activity fell out of step as the monkeys gradually drifted from wakefulness, the study showed. But those networks resynchronized when deep unconsciousness set in, researchers reported in the July 20 Journal of Neuroscience. That the two networks behave so differently during the drifting-off stage is surprising, says study coauthor Yumiko Ishizawa of Harvard Medical School and Massachusetts General Hospital. It isn’t clear what exactly is going on, she says, except that the anesthetic’s effects are a lot more complex than previously thought. Most studies examining how anesthesia works use electroencephalograms, or EEGs, which record brain activity using electrodes on the scalp. The new study offers unprecedented surveillance by eavesdropping via electrodes implanted inside macaque monkeys’ brains. This new view provides clues to how the brain loses and gains consciousness. “It’s a very detailed description of something we know very little about,” says cognitive neuroscientist Tristan Bekinschtein of the University of Cambridge, who was not involved with the work. Although the study is elegant, it isn’t clear what to make of the findings, he says. “These are early days.” © Society for Science & the Public 2000 - 2016.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 14: Biological Rhythms, Sleep, and Dreaming
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 10: Biological Rhythms and Sleep
Link ID: 22457 - Posted: 07.20.2016

Michael Egnor The most intractable question in modern neuroscience and philosophy of the mind is often phrased "What is consciousness?" The problem has been summed up nicely by philosopher David Chalmers as what he calls the Hard Problem of consciousness: How is it that we are subjects, and not just objects? Chalmers contrasts this hard question with what he calls the Easy Problem of consciousness: What are the neurobiological substrates underlying such things as wakefulness, alertness, attention, arousal, etc. Chalmers doesn't mean, of course, that the neurobiology of arousal is easy. He merely means to show that even if we can understand arousal from a neurobiological standpoint, we haven't yet solved the hard problem: the problem of subjective experience. Why am I an I, and not an it? Chalmers's point is a good one, and I think that it has a rather straightforward solution. First, some historical background is necessary. "What is consciousness?" is a modern question. It wasn't asked before the 17th century, because no one before Descartes thought that the mind was particularly mysterious. The problem of consciousness was created by moderns. The scholastic philosophers, following Aristotle and Aquinas, understood the soul as the animating principle of the body. In a human being, the powers of the soul—intellect, will, memory, perception, appetite, and such—were no more mysterious than the other powers of the soul, such as respiration, circulation, etc. Of course, biology in the Middle Ages wasn't as advanced as it is today, so there was much they didn't understand about human physiology, but in principle the mind was just another aspect of human biology, not inherently mysterious. In modern parlance, the scholastics saw the mind as the Easy Problem, no more intractable than understanding how breathing or circulation work.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 22441 - Posted: 07.15.2016