Links for Keyword: Consciousness
by Caroline Williams When it comes to making decisions, it seems that the conscious mind is the last to know. We already had evidence that it is possible to detect brain activity associated with movement before someone is aware of making a decision to move. Work presented this week at the British Neuroscience Association (BNA) conference in London not only extends it to abstract decisions, but suggests that it might even be possible to pre-emptively reverse a decision before a person realises they've made it. In 2011, Gabriel Kreiman of Harvard University measured the activity of individual neurons in 12 people with epilepsy, using electrodes already implanted into their brain to help identify the source of their seizures. The volunteers took part in the "Libet" experiment, in which they press a button whenever they like and remember the position of a second hand on a clock at the moment of decision. Kreiman discovered that electrical activity in the supplementary motor area, involved in initiating movement, and in the anterior cingulate cortex, which controls attention and motivation, appeared up to 5 seconds before a volunteer was aware of deciding to press the button (Neuron, doi.org/btkcpz). This backed up earlier fMRI studies by John-Dylan Haynes of the Bernstein Center for Computational Neuroscience in Berlin, Germany, that had traced the origins of decisions to the prefrontal cortex a whopping 10 seconds before awareness (Nature Neuroscience, doi.org/cs3rzv). "It's always nice when two lines of research converge and to know that what we see with fMRI is actually there in the neurons," says Haynes. © Copyright Reed Business Information Ltd.
Kerri Smith The experiment helped to change John-Dylan Haynes's outlook on life. In 2007, Haynes, a neuroscientist at the Bernstein Center for Computational Neuroscience in Berlin, put people into a brain scanner in which a display screen flashed a succession of random letters. He told them to press a button with either their right or left index fingers whenever they felt the urge, and to remember the letter that was showing on the screen when they made the decision. The experiment used functional magnetic resonance imaging (fMRI) to reveal brain activity in real time as the volunteers chose to use their right or left hands. The results were quite a surprise. "The first thought we had was 'we have to check if this is real'," says Haynes. "We came up with more sanity checks than I've ever seen in any other study before." The conscious decision to push the button was made about a second before the actual act, but the team discovered that a pattern of brain activity seemed to predict that decision by as many as seven seconds. Long before the subjects were even aware of making a choice, it seems, their brains had already decided. As humans, we like to think that our decisions are under our conscious control — that we have free will. Philosophers have debated that concept for centuries, and now Haynes and other experimental neuroscientists are raising a new challenge. They argue that consciousness of a decision may be a mere biochemical afterthought, with no influence whatsoever on a person's actions. According to this logic, they say, free will is an illusion. "We feel we choose, but we don't," says Patrick Haggard, a neuroscientist at University College London. © 2013 Nature Publishing Group
by Julia Sklar IT IS a nightmare situation. A person diagnosed as being in a vegetative state has an operation without anaesthetic because they cannot feel pain. Except, maybe they can. Alexandra Markl at the Schön clinic in Bad Aibling, Germany, and colleagues studied people with unresponsive wakefulness syndrome (UWS) – also known as vegetative state – and identified activity in brain areas involved in the emotional aspects of pain. People with UWS can make reflex movements but can't show subjective awareness. There are two distinct neural networks that work together to create the sensation of pain. The more basic of the two – the sensory-discriminative network – identifies the presence of an unpleasant stimulus. It is the affective network that attaches emotions and subjective feelings to the experience. Crucially, without the activity of the emotional network, your brain detects pain but won't interpret it as unpleasant. Using PET scans, previous studies have detected activation in the sensory-discriminative network in people with UWS but their findings were consistent with a lack of subjective awareness, the hallmark of the condition. Now Markl and her colleagues have found evidence of activation in the affective or emotional network too (Brain and Behavior, doi.org/kfs). © Copyright Reed Business Information Ltd.
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 8: General Principles of Sensory Processing, Touch, and Pain
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 5: The Sensorimotor System
Link ID: 17839 - Posted: 02.23.2013
By Mark Fischetti Various scholars have tried to explain consciousness in long articles and books, but one neuroscience pioneer has just released an unusual video blog to get the point across. In the sharply filmed and edited production, Joseph LeDoux, a renowned expert on the emotional brain at New York University, interrogates his NYU colleague Ned Block on the nature of consciousness. Block is a professor of philosophy, psychology and neural science and is considered a leading thinker on the subject. The interview ends with a transition into a music video performed by LeDoux’s longstanding band, the Amygdaloids. The whole exercise is a bit quirky, yet it succeeds in explaining consciousness in simple, even entertaining terms. LeDoux intends to produce a series of these video blogs to explore other intriguing aspects of the mind and brain, and he is giving Scientific American the chance to post them first on our Web site. LeDoux has already interviewed Michael Gazzaniga at the University of California, Santa Barbara, on free will and Nobel Prize winner Eric Kandel at Columbia University on mapping the mind. The video is not a quick hit, like most on the Net these days. The interview runs about 10 minutes, followed by the four-minute music video. The idea is for viewers to sit back and actually think along with the expert as his or her explanation unfolds. Yet video producer Alexis Gambis has generated some compelling imagery to keep our visual attention as Block unwraps his subject. Gambis directs the Imagine Science Film Festival, is about to complete his graduate degree in film and has a doctorate in molecular biology. © 2013 Scientific American
By ISABEL KERSHNER JERUSALEM — A brain scan performed on Ariel Sharon, the former Israeli prime minister who had a devastating stroke seven years ago and is presumed to be in a vegetative state, revealed significant brain activity in response to external stimuli, raising the chances that he is able to hear and understand, a scientist involved in the test said Sunday. Scientists showed Mr. Sharon, 84, pictures of his family, had him listen to a recording of the voice of one of his sons and used tactile stimulation to assess the extent of his brain’s response. “We were surprised that there was activity in the proper parts of the brain,” said Prof. Alon Friedman, a neuroscientist at Ben-Gurion University of the Negev and a member of the team that carried out the test. “It raises the chances that he hears and understands, but we cannot be sure. The test did not prove that.” The activity in specific regions of the brain indicated appropriate processing of the stimulations, according to a statement from Ben-Gurion University, but additional tests to assess Mr. Sharon’s level of consciousness were less conclusive. “While there were some encouraging signs, these were subtle and not as strong,” the statement added. The test was carried out last week at the Soroka University Medical Center in the southern Israeli city of Beersheba using a state-of-the-art M.R.I. machine and methods recently developed by Prof. Martin M. Monti of the University of California, Los Angeles. Professor Monti took part in the test, which lasted approximately two hours. © 2013 The New York Times Company
Doctors should resist the temptation to use an inexpensive tool that probes the brain's electrical activity when evaluating vegetative patients who can't communicate. Drs. Adrian Owen and Damian Cruse of the Centre for Brain and Mind in London, Ont., promoted the use of electroencephalography or EEG that can be used at a patient's bedside to determine if there's neurological activity in people in a vegetative state — those who are unresponsive in traditional tests of awareness. In a letter published in Thursday's issue of the medical journal The Lancet, Dr. Jonathan Victor of Weill Cornell Medical College in New York and his co-authors reanalyzed data shared from Owen's 2011 paper in the same journal. "I think we'd be very, very cautious about using this technology as it stands now," said Victor. Both groups agree the use of EEG technology remains promising to evaluate patients. The challenge, Victor said, is researchers can't be certain about their interpretations when faced with families trying to communicate with their loved ones, including for end-of-life discussions. The critique casts doubt on the original statistical approach and assumptions, which didn't hold when analyzed with a different model. In a rebuttal, Owen's team defended its approach as the only way to draw valid conclusions from vegetative patients and account for their variations. "There are few 'known truths' when attempting to detect covert awareness," Owen's team wrote. "Some are likely to be truly vegetative, while others may appear to be vegetative behaviorally, but are in fact, covertly aware." © CBC 2013
By John Horgan We’re approaching the end of one year and the beginning of another, when people resolve to quit smoking, swill less booze, gobble less ice cream, jog every day, or every other day, work harder, or less hard, be nicer to kids, spouses, ex-spouses, co-workers, read more books, watch less TV, except Homeland, which is awesome. In other words, it’s a time when people seek to alter their life trajectories by exercising their free will. Some mean-spirited materialists deny that free will exists, and this specious claim—not mere physiological processes in my brain—motivates me to reprint a defense of free will that I wrote for The New York Times 10 years ago: When I woke this morning, I stared at the ceiling above my bed and wondered: To what extent will my rising really be an exercise of my free will? Let’s say I got up right . . . now. Would my subjective decision be the cause? Or would computations unfolding in a subconscious neural netherworld actually set off the muscular twitches that slide me out of the bed, quietly, so as not to wake my wife (not a morning person), and propel me toward the door? One of the risks of science journalism is that occasionally you encounter research that threatens something you cherish. Free will is something I cherish. I can live with the idea of science killing off God. But free will? That’s going too far. And yet a couple of books I’ve been reading lately have left me brooding over the possibility that free will is as much a myth as divine justice. © 2012 Scientific American
By Ferris Jabr The computer, smartphone or other electronic device on which you may be reading this article, tracking the weather or checking your e-mail has a kind of rudimentary brain. It has highly organized electrical circuits that store information and behave in specific, predictable ways, just like the interconnected cells in your brain. On the most fundamental level, electrical circuits and neurons are made of the same stuff—atoms and their constituent elementary particles—but whereas the human brain is conscious of itself, man-made gadgets do not know they exist. Consciousness, most scientists would argue, is not a shared property of all matter in the universe. Rather consciousness is restricted to a subset of animals with relatively complex brains. The more scientists study animal behavior and brain anatomy, however, the more universal consciousness seems to be. A brain as complex as a human's is definitely not necessary for consciousness. On July 7 of this year, a group of neuroscientists convening at the University of Cambridge signed a document entitled “The Cambridge Declaration on Consciousness in Non-Human Animals,” officially declaring that nonhuman animals, “including all mammals and birds, and many other creatures, including octopuses,” are conscious. Humans are more than just conscious; they are also self-aware. Scientists differ on how they distinguish between consciousness and self-awareness, but here is one common distinction: consciousness is awareness of your body and your environment; self-awareness is recognition of that consciousness—not only understanding that you exist but further comprehending that you are aware of your existence. Another way of considering it: to be conscious is to think; to be self-aware is to realize that you are a thinking being and to think about your thoughts. Presumably human infants are conscious—they perceive and respond to people and things around them—but they are not yet self-aware. 
In their first years of life, children develop a sense of self, learning to recognize themselves in the mirror and to distinguish between their own point of view and the perspectives of other people. © 2012 Scientific American
By Tanya Lewis A coma patient’s chances of surviving and waking up could be predicted by changes in the brain’s ability to discriminate sounds, new research suggests. Recovery from coma has been linked to auditory function before, but it wasn’t clear whether function depended on the time of assessment. Whereas previous studies tested patients several days or weeks after comas set in, a new study looks at the critical phase during the first 48 hours. At early stages, comatose brains can still distinguish between different sound patterns. How this ability progresses over time can predict whether a coma patient will survive and ultimately awaken, researchers report. “It’s a very promising tool for prognosis,” says neurologist Mélanie Boly of the Belgian National Fund for Scientific Research, who was not involved with the study. “For the family, it’s very important to know if someone will recover or not.” A team led by neuroscientist Marzia De Lucia of the University of Lausanne in Switzerland studied 30 coma patients who had experienced heart attacks that deprived their brains of oxygen. All the patients underwent therapeutic hypothermia, a standard treatment to minimize brain damage, in which their bodies were cooled to 33° Celsius for 24 hours. De Lucia and colleagues played sounds for the patients and recorded their brain activity using scalp electrodes — once in hypothermic conditions during the first 24 hours of coma, and again a day later at normal body temperature. The sounds were a series of pure tones interspersed with sounds of different pitch, duration or location. The brain signals revealed how well patients could discriminate the sounds, compared with five healthy subjects. © Society for Science & the Public 2000 - 2012
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 17546 - Posted: 11.27.2012
By Courtney Humphries A. Fainting, also called syncope, is a sudden and brief loss of consciousness followed by a spontaneous return to wakefulness — people who “black out” and then “come to” on their own without outside intervention. During the faint, they’re in danger of falls and injuries if they lose muscle control. There are several possible causes of fainting, but they all stem from a temporary decrease in blood flow to the brain. The typical Victorian-era swoon is one of the most common forms, called vasovagal syncope. Lewis Lipsitz, a geriatrician at Beth Israel Deaconess Medical Center and Hebrew SeniorLife, explains that it’s caused by a reflexive response to a stimulus, such as stress, a sudden shock, or the sight of blood. Fainting without an obvious trigger can be a sign of an underlying health problem, such as an irregular heart rhythm, heart disease, or severe dehydration. “The elderly have syncope more commonly than any other group,” Lipsitz says, which can put them at risk of falls and fractures. Often the spells are caused by actions as simple as changing position or eating a meal. When we stand up, Lipsitz says, “about half a liter of blood immediately goes to the legs and the lower abdomen,” and eating also pulls blood from the brain to the gut. Our bodies compensate by raising the heart rate to get blood to the brain. But elderly people can’t always restore their blood flow, and dehydration or certain medications can exacerbate the problem. © Copyright 2012 Globe Newspaper Company.
by Douglas Heaven Where does the mind reside? It's a question that's occupied the best brains for thousands of years. Now, a patient who is self-aware – despite lacking three regions of the brain thought to be essential for self-awareness – demonstrates that the mind remains as elusive as ever. The finding suggests that mental functions might not be tied to fixed brain regions. Instead, the mind might be more like a virtual machine running on distributed computers, with brain resources allocated in a flexible manner, says David Rudrauf at the University of Iowa in Iowa City, who led the study of the patient. Recent advances in functional neuroimaging – a technique that measures brain activity in the hope of finding correlations between mental functions and specific regions of the brain – have led to a wealth of studies that map particular functions onto regions. Previous neuroimaging studies had suggested that three regions – the insular cortex, anterior cingulate cortex and medial prefrontal cortex – are critical for self-awareness. But for Rudrauf the question wasn't settled. So when his team heard about patient R, who had lost brain tissue including the chunks of the three 'self-awareness' regions following a viral infection, they immediately thought he could help set the record straight. © Copyright Reed Business Information Ltd.
by Anil Ananthaswamy Advocates of free will can rest easy, for now. A 30-year-old classic experiment that is often used to argue against free will might have been misinterpreted. In the early 1980s, Benjamin Libet, a neuroscientist at the University of California in San Francisco, used electroencephalography (EEG) to record the brain activity of volunteers who had been told to make a spontaneous movement. With the help of a precise timer that the volunteers were asked to read at the moment they became aware of the urge to act, Libet found there was a 200 millisecond delay, on average, between this urge and the movement itself. But the EEG recordings also revealed a signal that appeared in the brain even earlier, 550 milliseconds, on average, before the action. Called the readiness potential, this has been interpreted as a blow to free will, as it suggests that the brain prepares to act well before we are conscious of the urge to move. This conclusion assumes that the readiness potential is the signature of the brain planning and preparing to move. "Even people who have been critical of Libet's work, by and large, haven't challenged that assumption," says Aaron Schurger of the National Institute of Health and Medical Research in Saclay, France. One attempt to do so came in 2009. Judy Trevena and Jeff Miller of the University of Otago in Dunedin, New Zealand, asked volunteers to decide, after hearing a tone, whether or not to tap on a keyboard. The readiness potential was present regardless of their decision, suggesting that it did not represent the brain preparing to move. Exactly what it did mean, though, still wasn't clear. © Copyright Reed Business Information Ltd.
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 5: The Sensorimotor System
Link ID: 17131 - Posted: 08.07.2012
By Giulio Tononi When is an entity one entity? How can multiple elements be a single thing? A question simple enough— but one, thought Galileo, that had not yet been answered. Or perhaps, it had not been asked. The sensor of the digital camera certainly had a large repertoire of states— it could take any possible picture. But was it a single entity? You use the camera as a single entity, you grasp it with your hands as one. You watch the photograph as a single entity. But that is within your own consciousness. If it were not for you, the observer, would it still be a single entity? And what exactly would that mean? While musing such matters, Galileo was startled by a voice. J., a man with the forehead of an ancient god, addressed him in a polished tone: “Take a sentence of a dozen words, and take twelve men, and tell to each one word. Then stand the men in a row or jam them in a bunch, and let each think of his word as intently as he will; nowhere will there be a consciousness of the whole sentence. Or take a word of a dozen letters, and let each man think of his letter as intently as he will; nowhere will there be a consciousness of the whole word,” J. said. Or take a picture of one million dots, and take one million photodiodes, and show each photodiode its own dot. Then stand the photodiodes well ordered on a square array, and let each tell light from dark for its own dot, as precisely as it will; nowhere will there be a consciousness of the whole picture, said Galileo. “So you see that, Galileo,” J. continued. “There is no such thing as the spirit of the age, the sentiment of the people, or public opinion. The private minds do not agglomerate into a higher compound mind. They say the whole is more than the sum of its parts; they say, but how can it be so?” © 2012 Scientific American, Copyright © 2012 by Giulio Tononi
by Michael S. Gazzaniga We humans think we make all our decisions to act consciously and willfully. We all feel we are wonderfully unified, coherent mental machines and that our underlying brain structure must reflect this overpowering sense. It doesn’t. No command center keeps all other brain systems hopping to the instructions of a five-star general. The brain has millions of local processors making important decisions. There is no one boss in the brain. You are certainly not the boss of your brain. Have you ever succeeded in telling your brain to shut up already and go to sleep? Even though we know that the organization of the brain is made up of a gazillion decision centers, that neural activities going on at one level of organization are inexplicable at another level, and that there seems to be no boss, our conviction that we have a “self” making all the decisions is not dampened. It is a powerful illusion that is almost impossible to shake. In fact, there is little or no reason to shake it, for it has served us well as a species. There is, however, a reason to try to understand how it all comes about. If we understand why we feel in charge, we will understand why and how we make errors of thought and perception. When I was a kid, I spent a lot of time in the desert of Southern California—out in the desert scrub and dry bunchgrass, surrounded by purple mountains, creosote bush, coyotes, and rattlesnakes. The reason I am still here today is because I have nonconscious processes that were honed by evolution. © 2012, Kalmbach Publishing Co.
By JOHN MONTEROSSO and BARRY SCHWARTZ ARE you responsible for your behavior if your brain “made you do it”? Often we think not. For example, research now suggests that the brain’s frontal lobes, which are crucial for self-control, are not yet mature in adolescents. This finding has helped shape attitudes about whether young people are fully responsible for their actions. In 2005, when the Supreme Court ruled that the death penalty for juveniles was unconstitutional, its decision explicitly took into consideration that “parts of the brain involved in behavior control continue to mature through late adolescence.” Similar reasoning is often applied to behavior arising from chemical imbalances in the brain. It is possible, when the facts emerge, that the case of James E. Holmes, the suspect in the Colorado shootings, will spark debate about neurotransmitters and culpability. Whatever the merit of such cases, it’s worth stressing an important point: as a general matter, it is always true that our brains “made us do it.” Each of our behaviors is always associated with a brain state. If we view every new scientific finding about brain involvement in human behavior as a sign that the behavior was not under the individual’s control, the very notion of responsibility will be threatened. So it is imperative that we think clearly about when brain science frees someone from blame — and when it doesn’t. © 2012 The New York Times Company
By Michael Shermer Where is the experience of red in your brain? The question was put to me by Deepak Chopra at his Sages and Scientists Symposium in Carlsbad, Calif., on March 3. A posse of presenters argued that the lack of a complete theory by neuroscientists regarding how neural activity translates into conscious experiences (such as redness) means that a physicalist approach is inadequate or wrong. The idea that subjective experience is a result of electrochemical activity remains a hypothesis, Chopra elaborated in an e-mail. It is as much of a speculation as the idea that consciousness is fundamental and that it causes brain activity and creates the properties and objects of the material world. Where is Aunt Millie's mind when her brain dies of Alzheimer's? I countered to Chopra. Aunt Millie was an impermanent pattern of behavior of the universe and returned to the potential she emerged from, Chopra rejoined. In the philosophic framework of Eastern traditions, ego identity is an illusion and the goal of enlightenment is to transcend to a more universal nonlocal, nonmaterial identity. The hypothesis that the brain creates consciousness, however, has vastly more evidence for it than the hypothesis that consciousness creates the brain. Damage to the fusiform gyrus of the temporal lobe, for example, causes face blindness, and stimulation of this same area causes people to see faces spontaneously. Stroke-caused damage to the visual cortex region called V1 leads to loss of conscious visual perception. Changes in conscious experience can be directly measured by functional MRI, electroencephalography and single-neuron recordings. Neuroscientists can predict human choices from brain-scanning activity before the subject is even consciously aware of the decisions made. Using brain scans alone, neuroscientists have even been able to reconstruct, on a computer screen, what someone is seeing. © 2012 Scientific American
By Tom Siegfried Arguably, and it would be a tough argument to win if you took the other side, computers have had a greater impact on civilization than any other machine since the wheel. Sure, there was the steam engine, the automobile and the airplane, the printing press and the mechanical clock. Radios and televisions also made their share of societal waves. But look around. Computers do everything TVs and radios ever did. And computers tell time, control cars and planes, and have rendered printing presses pretty darn near obsolete. Computers have invaded every realm of life, from work to entertainment to medicine to education: Reading, writing and arithmetic are now all computer-centric activities. Every nook and cranny of human culture is controlled, colored or monitored by the digital computer. Even though, merely 100 years ago, no such machine existed. In 1912, the word computer referred to people (typically women) using pencils and paper or adding machines. Coincidentally, that was the year that Alan Turing was born. If you don’t like the way computers have taken over the world, you could blame him. No one did more to build the foundation of computer science than Turing. In a paper published in 1936, he described the principle behind all of today’s computing devices, sketching out the theoretical blueprint for a machine able to implement instructions for making any calculation. Turing didn’t invent the idea of a computer, of course. Charles Babbage had grand plans for a computing machine a century earlier (and even he had precursors). George Boole, not long after Babbage, developed the underlying binary mathematics (originally conceived much earlier by Gottfried Leibniz) that modern digital computers adopted. © Society for Science & the Public 2000 - 2012
By Laura Sanders A mysterious kind of nerve cell that has been linked to empathy, self-awareness, and even consciousness resides in Old World monkeys. The finding, published May 10 in Neuron, extends the domain of the neurons beyond humans, great apes and other large-brained creatures and will now allow scientists to study the habits of a neuron that may be key to human self-awareness. “People have been reluctant to say, but want to believe, that these neurons might be the neural correlate of consciousness,” says neuroscientist and psychiatrist Hugo Critchley of the University of Sussex in England. Finding the neurons in macaques, which can be studied in laboratories, “opens up the possibility to study directly the role of these cells,” he says. An earlier study saw no signs of the cells, called von Economo neurons, in macaques. But while carefully scrutinizing a small piece of a macaque brain for a different experiment, anatomist Henry Evrard of the Max Planck Institute for Biological Cybernetics in Tübingen, Germany, stumbled across the rare, distinctive cells. About three times bigger than other nerve cells, von Economo neurons have long, fat bodies and tufts of message-receiving dendrites at each end. Evrard compares the first sighting to seeing the tip of an iceberg. After many additional tests, he and his colleagues concluded that the cells, though smaller and sparser than their human counterparts, were indeed the elusive von Economo neurons. © Society for Science & the Public 2000 - 2012
By JAMES GORMAN The puzzle of consciousness is so devilish that scientists and philosophers are still struggling with how to talk about it, let alone figure out what it is and where it comes from. One problem is that the word has more than one meaning. Trying to plumb the nature of self-awareness or self-consciousness leads down one infamous rabbit hole. But what if the subject is simply the difference in brain activity between being conscious and being unconscious? Scientists and doctors certainly know how to knock people out. Michael T. Alkire at the University of California, Irvine, put it this way in an article in Science in 2008: “How consciousness arises in the brain remains unknown,” he wrote. “Yet, for nearly two centuries our ignorance has not hampered the use of general anesthesia for routinely extinguishing consciousness during surgery.” And a good thing, too. Setting aside what philosophers call “the hard problem” (self-awareness), a lot has been learned about the boundary between being awake and alert and being unconscious since ether was used in 1846 to put a patient under for surgery. Researchers have used anesthesia, recently in combination with brain scans, as a tool to see what happens in the brain when people fade in and out of consciousness — which parts turn on and which turn off. For instance, in a recent study, investigators showed that a person could respond to a simple command to open his eyes (the subjects were all right-handed men) when the higher parts of the brain were not yet turned on. The finding may be useful in deciding how to measure the effects of anesthetics, and it adds another data point to knowledge of what’s going on in the brain. © 2012 The New York Times Company
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 14: Biological Rhythms, Sleep, and Dreaming
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 10: Biological Rhythms and Sleep
Link ID: 16645 - Posted: 04.14.2012
Tim Parks “There are no images.” This was the first time I noticed Riccardo Manzotti. It was a conference on art and neuroscience. Someone had spoken about the images we keep in our minds. Manzotti seemed agitated. The girl sitting next to me explained that he built robots, was a genius. “There are no images and no representations in our minds,” he insisted. “Our visual experience of the world is a continuum between see-er and seen united in a shared process of seeing.” I was curious, if only because, as a novelist, I’d always supposed I was dealing in images, imagery. This stuff might have implications. So we had a beer together. Manzotti has a degree in engineering and another in philosophy. He teaches in the psychology department at IULM University, Milan. The move from engineering to philosophy was prompted by conceptual problems he’d run into when first seeking to build robots. What does it mean that a subject sees an object? “People say the robot stores images of the world through its video camera. It doesn’t, it stores digital data. It has no images.” Manzotti is what they call a radical externalist: for him consciousness is not safely confined within a brain whose neurons select and store information received from a separate world, appropriating, segmenting, and manipulating various forms of input. Instead, he offers a model he calls Spread Mind: consciousness is a process shared between various otherwise distinct processes which, for convenience’s sake, we have separated out and stabilized in the words subject and object. Language, or at least our modern language, thus encourages a false account of experience. © 1963-2012 NYREV, Inc.