Links for Keyword: Consciousness
By ISABEL KERSHNER JERUSALEM — A brain scan performed on Ariel Sharon, the former Israeli prime minister who had a devastating stroke seven years ago and is presumed to be in a vegetative state, revealed significant brain activity in response to external stimuli, raising the chances that he is able to hear and understand, a scientist involved in the test said Sunday. Scientists showed Mr. Sharon, 84, pictures of his family, had him listen to a recording of the voice of one of his sons and used tactile stimulation to assess the extent of his brain’s response. “We were surprised that there was activity in the proper parts of the brain,” said Prof. Alon Friedman, a neuroscientist at Ben-Gurion University of the Negev and a member of the team that carried out the test. “It raises the chances that he hears and understands, but we cannot be sure. The test did not prove that.” The activity in specific regions of the brain indicated appropriate processing of the stimulations, according to a statement from Ben-Gurion University, but additional tests to assess Mr. Sharon’s level of consciousness were less conclusive. “While there were some encouraging signs, these were subtle and not as strong,” the statement added. The test was carried out last week at the Soroka University Medical Center in the southern Israeli city of Beersheba using a state-of-the-art M.R.I. machine and methods recently developed by Prof. Martin M. Monti of the University of California, Los Angeles. Professor Monti took part in the test, which lasted approximately two hours. © 2013 The New York Times Company
Doctors should resist the temptation to use an inexpensive tool that probes the brain's electrical activity when evaluating vegetative patients who can't communicate. Drs. Adrian Owen and Damian Cruse of the Centre for Brain and Mind in London, Ont., promoted the use of electroencephalography or EEG that can be used at a patient's bedside to determine if there's neurological activity in people in a vegetative state — those who are unresponsive in traditional tests of awareness. In a letter published in Thursday's issue of the medical journal The Lancet, Dr. Jonathan Victor of Weill Cornell Medical College in New York and his co-authors reanalyzed data shared from Owen's 2011 paper in the same journal. "I think we'd be very, very cautious about using this technology as it stands now," said Victor. Both groups agree the use of EEG technology remains promising to evaluate patients. The challenge, Victor said, is researchers can't be certain about their interpretations when faced with families trying to communicate with their loved ones, including for end-of-life discussions. The critique casts doubt on the original statistical approach and assumptions, which didn't hold when analyzed with a different model. In a rebuttal, Owen's team defended its approach as the only way to draw valid conclusions from vegetative patients and account for their variations. "There are few 'known truths' when attempting to detect covert awareness," Owen's team wrote. "Some are likely to be truly vegetative, while others may appear to be vegetative behaviorally, but are in fact, covertly aware." © CBC 2013
By John Horgan We’re approaching the end of one year and the beginning of another, when people resolve to quit smoking, swill less booze, gobble less ice cream, jog every day, or every other day, work harder, or less hard, be nicer to kids, spouses, ex-spouses, co-workers, read more books, watch less TV, except Homeland, which is awesome. In other words, it’s a time when people seek to alter their life trajectories by exercising their free will. Some mean-spirited materialists deny that free will exists, and this specious claim—not mere physiological processes in my brain—motivates me to reprint a defense of free will that I wrote for The New York Times 10 years ago: When I woke this morning, I stared at the ceiling above my bed and wondered: To what extent will my rising really be an exercise of my free will? Let’s say I got up right . . . now. Would my subjective decision be the cause? Or would computations unfolding in a subconscious neural netherworld actually set off the muscular twitches that slide me out of the bed, quietly, so as not to wake my wife (not a morning person), and propel me toward the door? One of the risks of science journalism is that occasionally you encounter research that threatens something you cherish. Free will is something I cherish. I can live with the idea of science killing off God. But free will? That’s going too far. And yet a couple of books I’ve been reading lately have left me brooding over the possibility that free will is as much a myth as divine justice. © 2012 Scientific American
By Ferris Jabr The computer, smartphone or other electronic device on which you may be reading this article, tracking the weather or checking your e-mail has a kind of rudimentary brain. It has highly organized electrical circuits that store information and behave in specific, predictable ways, just like the interconnected cells in your brain. On the most fundamental level, electrical circuits and neurons are made of the same stuff—atoms and their constituent elementary particles—but whereas the human brain is conscious of itself, man-made gadgets do not know they exist. Consciousness, most scientists would argue, is not a shared property of all matter in the universe. Rather consciousness is restricted to a subset of animals with relatively complex brains. The more scientists study animal behavior and brain anatomy, however, the more universal consciousness seems to be. A brain as complex as a human's is definitely not necessary for consciousness. On July 7 of this year, a group of neuroscientists convening at the University of Cambridge signed a document entitled “The Cambridge Declaration on Consciousness in Non-Human Animals,” officially declaring that nonhuman animals, “including all mammals and birds, and many other creatures, including octopuses,” are conscious. Humans are more than just conscious; they are also self-aware. Scientists differ on how they distinguish between consciousness and self-awareness, but here is one common distinction: consciousness is awareness of your body and your environment; self-awareness is recognition of that consciousness—not only understanding that you exist but further comprehending that you are aware of your existence. Another way of considering it: to be conscious is to think; to be self-aware is to realize that you are a thinking being and to think about your thoughts. Presumably human infants are conscious—they perceive and respond to people and things around them—but they are not yet self-aware. 
In their first years of life, children develop a sense of self, learning to recognize themselves in the mirror and to distinguish between their own point of view and the perspectives of other people. © 2012 Scientific American
By Tanya Lewis A coma patient’s chances of surviving and waking up could be predicted by changes in the brain’s ability to discriminate sounds, new research suggests. Recovery from coma has been linked to auditory function before, but it wasn’t clear whether function depended on the time of assessment. Whereas previous studies tested patients several days or weeks after comas set in, a new study looks at the critical phase during the first 48 hours. At early stages, comatose brains can still distinguish between different sound patterns. How this ability progresses over time can predict whether a coma patient will survive and ultimately awaken, researchers report. “It’s a very promising tool for prognosis,” says neurologist Mélanie Boly of the Belgian National Fund for Scientific Research, who was not involved with the study. “For the family, it’s very important to know if someone will recover or not.” A team led by neuroscientist Marzia De Lucia of the University of Lausanne in Switzerland studied 30 coma patients who had experienced heart attacks that deprived their brains of oxygen. All the patients underwent therapeutic hypothermia, a standard treatment to minimize brain damage, in which their bodies were cooled to 33° Celsius for 24 hours. De Lucia and colleagues played sounds for the patients and recorded their brain activity using scalp electrodes — once in hypothermic conditions during the first 24 hours of coma, and again a day later at normal body temperature. The sounds were a series of pure tones interspersed with sounds of different pitch, duration or location. The brain signals revealed how well patients could discriminate the sounds, compared with five healthy subjects. © Society for Science & the Public 2000 - 2012
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 17546 - Posted: 11.27.2012
By Courtney Humphries A. Fainting, also called syncope, is a sudden and brief loss of consciousness followed by a spontaneous return to wakefulness — people who “black out” and then “come to” on their own without outside intervention. During the faint, they’re in danger of falls and injuries if they lose muscle control. There are several possible causes of fainting, but they all stem from a temporary decrease in blood flow to the brain. The typical Victorian-era swoon is one of the most common forms, called vasovagal syncope. Lewis Lipsitz, a geriatrician at Beth Israel Deaconess Medical Center and Hebrew SeniorLife, explains that it’s caused by a reflexive response to a stimulus, such as stress, a sudden shock, or the sight of blood. Fainting without an obvious trigger can be a sign of an underlying health problem, such as an irregular heart rhythm, heart disease, or severe dehydration. “The elderly have syncope more commonly than any other group,” Lipsitz says, which can put them at risk of falls and fractures. Often the spells are caused by actions as simple as changing position or eating a meal. When we stand up, Lipsitz says, “about half a liter of blood immediately goes to the legs and the lower abdomen,” and eating also pulls blood from the brain to the gut. Our bodies compensate by raising the heart rate to get blood to the brain. But elderly people can’t always restore their blood flow, and dehydration or certain medications can exacerbate the problem. © Copyright 2012 Globe Newspaper Company.
by Douglas Heaven Where does the mind reside? It's a question that's occupied the best brains for thousands of years. Now, a patient who is self-aware – despite lacking three regions of the brain thought to be essential for self-awareness – demonstrates that the mind remains as elusive as ever. The finding suggests that mental functions might not be tied to fixed brain regions. Instead, the mind might be more like a virtual machine running on distributed computers, with brain resources allocated in a flexible manner, says David Rudrauf at the University of Iowa in Iowa City, who led the study of the patient. Recent advances in functional neuroimaging – a technique that measures brain activity in the hope of finding correlations between mental functions and specific regions of the brain – have led to a wealth of studies that map particular functions onto regions. Previous neuroimaging studies had suggested that three regions – the insular cortex, anterior cingulate cortex and medial prefrontal cortex – are critical for self-awareness. But for Rudrauf the question wasn't settled. So when his team heard about patient R, who had lost brain tissue including chunks of the three 'self-awareness' regions following a viral infection, they immediately thought he could help set the record straight. © Copyright Reed Business Information Ltd.
by Anil Ananthaswamy Advocates of free will can rest easy, for now. A 30-year-old classic experiment that is often used to argue against free will might have been misinterpreted. In the early 1980s, Benjamin Libet, a neuroscientist at the University of California in San Francisco, used electroencephalography (EEG) to record the brain activity of volunteers who had been told to make a spontaneous movement. With the help of a precise timer that the volunteers were asked to read at the moment they became aware of the urge to act, Libet found there was a 200 millisecond delay, on average, between this urge and the movement itself. But the EEG recordings also revealed a signal that appeared in the brain even earlier, 550 milliseconds, on average, before the action. Called the readiness potential, this has been interpreted as a blow to free will, as it suggests that the brain prepares to act well before we are conscious of the urge to move. This conclusion assumes that the readiness potential is the signature of the brain planning and preparing to move. "Even people who have been critical of Libet's work, by and large, haven't challenged that assumption," says Aaron Schurger of the National Institute of Health and Medical Research in Saclay, France. One attempt to do so came in 2009. Judy Trevena and Jeff Miller of the University of Otago in Dunedin, New Zealand, asked volunteers to decide, after hearing a tone, whether or not to tap on a keyboard. The readiness potential was present regardless of their decision, suggesting that it did not represent the brain preparing to move. Exactly what it did mean, though, still wasn't clear. © Copyright Reed Business Information Ltd.
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 5: The Sensorimotor System
Link ID: 17131 - Posted: 08.07.2012
By Giulio Tononi When is an entity one entity? How can multiple elements be a single thing? A question simple enough—but one, thought Galileo, that had not yet been answered. Or perhaps, it had not been asked. The sensor of the digital camera certainly had a large repertoire of states—it could take any possible picture. But was it a single entity? You use the camera as a single entity, you grasp it with your hands as one. You watch the photograph as a single entity. But that is within your own consciousness. If it were not for you, the observer, would it still be a single entity? And what exactly would that mean? While musing on such matters, Galileo was startled by a voice. J., a man with the forehead of an ancient god, addressed him in a polished tone: “Take a sentence of a dozen words, and take twelve men, and tell to each one word. Then stand the men in a row or jam them in a bunch, and let each think of his word as intently as he will; nowhere will there be a consciousness of the whole sentence. Or take a word of a dozen letters, and let each man think of his letter as intently as he will; nowhere will there be a consciousness of the whole word,” J. said. Or take a picture of one million dots, and take one million photodiodes, and show each photodiode its own dot. Then stand the photodiodes well ordered on a square array, and let each tell light from dark for its own dot, as precisely as it will; nowhere will there be a consciousness of the whole picture, said Galileo. “So you see that, Galileo,” J. continued. “There is no such thing as the spirit of the age, the sentiment of the people, or public opinion. The private minds do not agglomerate into a higher compound mind. They say the whole is more than the sum of its parts; they say, but how can it be so?” © 2012 Scientific American, Copyright © 2012 by Giulio Tononi
by Michael S. Gazzaniga We humans think we make all our decisions to act consciously and willfully. We all feel we are wonderfully unified, coherent mental machines and that our underlying brain structure must reflect this overpowering sense. It doesn’t. No command center keeps all other brain systems hopping to the instructions of a five-star general. The brain has millions of local processors making important decisions. There is no one boss in the brain. You are certainly not the boss of your brain. Have you ever succeeded in telling your brain to shut up already and go to sleep? Even though we know that the organization of the brain is made up of a gazillion decision centers, that neural activities going on at one level of organization are inexplicable at another level, and that there seems to be no boss, our conviction that we have a “self” making all the decisions is not dampened. It is a powerful illusion that is almost impossible to shake. In fact, there is little or no reason to shake it, for it has served us well as a species. There is, however, a reason to try to understand how it all comes about. If we understand why we feel in charge, we will understand why and how we make errors of thought and perception. When I was a kid, I spent a lot of time in the desert of Southern California—out in the desert scrub and dry bunchgrass, surrounded by purple mountains, creosote bush, coyotes, and rattlesnakes. The reason I am still here today is because I have nonconscious processes that were honed by evolution. © 2012, Kalmbach Publishing Co.
By JOHN MONTEROSSO and BARRY SCHWARTZ ARE you responsible for your behavior if your brain “made you do it”? Often we think not. For example, research now suggests that the brain’s frontal lobes, which are crucial for self-control, are not yet mature in adolescents. This finding has helped shape attitudes about whether young people are fully responsible for their actions. In 2005, when the Supreme Court ruled that the death penalty for juveniles was unconstitutional, its decision explicitly took into consideration that “parts of the brain involved in behavior control continue to mature through late adolescence.” Similar reasoning is often applied to behavior arising from chemical imbalances in the brain. It is possible, when the facts emerge, that the case of James E. Holmes, the suspect in the Colorado shootings, will spark debate about neurotransmitters and culpability. Whatever the merit of such cases, it’s worth stressing an important point: as a general matter, it is always true that our brains “made us do it.” Each of our behaviors is always associated with a brain state. If we view every new scientific finding about brain involvement in human behavior as a sign that the behavior was not under the individual’s control, the very notion of responsibility will be threatened. So it is imperative that we think clearly about when brain science frees someone from blame — and when it doesn’t. © 2012 The New York Times Company
By Michael Shermer Where is the experience of red in your brain? The question was put to me by Deepak Chopra at his Sages and Scientists Symposium in Carlsbad, Calif., on March 3. A posse of presenters argued that the lack of a complete theory by neuroscientists regarding how neural activity translates into conscious experiences (such as redness) means that a physicalist approach is inadequate or wrong. The idea that subjective experience is a result of electrochemical activity remains a hypothesis, Chopra elaborated in an e-mail. It is as much of a speculation as the idea that consciousness is fundamental and that it causes brain activity and creates the properties and objects of the material world. Where is Aunt Millie's mind when her brain dies of Alzheimer's? I countered to Chopra. Aunt Millie was an impermanent pattern of behavior of the universe and returned to the potential she emerged from, Chopra rejoined. In the philosophic framework of Eastern traditions, ego identity is an illusion and the goal of enlightenment is to transcend to a more universal nonlocal, nonmaterial identity. The hypothesis that the brain creates consciousness, however, has vastly more evidence for it than the hypothesis that consciousness creates the brain. Damage to the fusiform gyrus of the temporal lobe, for example, causes face blindness, and stimulation of this same area causes people to see faces spontaneously. Stroke-caused damage to the visual cortex region called V1 leads to loss of conscious visual perception. Changes in conscious experience can be directly measured by functional MRI, electroencephalography and single-neuron recordings. Neuroscientists can predict human choices from brain-scanning activity before the subject is even consciously aware of the decisions made. Using brain scans alone, neuroscientists have even been able to reconstruct, on a computer screen, what someone is seeing. © 2012 Scientific American
By Tom Siegfried Arguably, and it would be a tough argument to win if you took the other side, computers have had a greater impact on civilization than any other machine since the wheel. Sure, there was the steam engine, the automobile and the airplane, the printing press and the mechanical clock. Radios and televisions also made their share of societal waves. But look around. Computers do everything TVs and radios ever did. And computers tell time, control cars and planes, and have rendered printing presses pretty darn near obsolete. Computers have invaded every realm of life, from work to entertainment to medicine to education: Reading, writing and arithmetic are now all computer-centric activities. Every nook and cranny of human culture is controlled, colored or monitored by the digital computer. Even though, merely 100 years ago, no such machine existed. In 1912, the word computer referred to people (typically women) using pencils and paper or adding machines. Coincidentally, that was the year that Alan Turing was born. If you don’t like the way computers have taken over the world, you could blame him. No one did more to build the foundation of computer science than Turing. In a paper published in 1936, he described the principle behind all of today’s computing devices, sketching out the theoretical blueprint for a machine able to implement instructions for making any calculation. Turing didn’t invent the idea of a computer, of course. Charles Babbage had grand plans for a computing machine a century earlier (and even he had precursors). George Boole, not long after Babbage, developed the underlying binary mathematics (originally conceived much earlier by Gottfried Leibniz) that modern digital computers adopted. © Society for Science & the Public 2000 - 2012
By Laura Sanders A mysterious kind of nerve cell that has been linked to empathy, self-awareness, and even consciousness resides in Old World monkeys. The finding, published May 10 in Neuron, extends the domain of the neurons beyond humans, great apes and other large-brained creatures and will now allow scientists to study the habits of a neuron that may be key to human self-awareness. “People have been reluctant to say, but want to believe, that these neurons might be the neural correlate of consciousness,” says neuroscientist and psychiatrist Hugo Critchley of the University of Sussex in England. Finding the neurons in macaques, which can be studied in laboratories, “opens up the possibility to study directly the role of these cells,” he says. An earlier study saw no signs of the cells, called von Economo neurons, in macaques. But while carefully scrutinizing a small piece of a macaque brain for a different experiment, anatomist Henry Evrard of the Max Planck Institute for Biological Cybernetics in Tübingen, Germany, stumbled across the rare, distinctive cells. About three times bigger than other nerve cells, von Economo neurons have long, fat bodies and tufts of message-receiving dendrites at each end. Evrard compares the first sighting to seeing the tip of an iceberg. After many additional tests, he and his colleagues concluded that the cells, though smaller and sparser than their human counterparts, were indeed the elusive von Economo neurons. © Society for Science & the Public 2000 - 2012
By JAMES GORMAN The puzzle of consciousness is so devilish that scientists and philosophers are still struggling with how to talk about it, let alone figure out what it is and where it comes from. One problem is that the word has more than one meaning. Trying to plumb the nature of self-awareness or self-consciousness leads down one infamous rabbit hole. But what if the subject is simply the difference in brain activity between being conscious and being unconscious? Scientists and doctors certainly know how to knock people out. Michael T. Alkire at the University of California, Irvine, put it this way in an article in Science in 2008: “How consciousness arises in the brain remains unknown,” he wrote. “Yet, for nearly two centuries our ignorance has not hampered the use of general anesthesia for routinely extinguishing consciousness during surgery.” And a good thing, too. Setting aside what philosophers call “the hard problem” (self-awareness), a lot has been learned about the boundary between being awake and alert and being unconscious since ether was used in 1846 to put a patient under for surgery. Researchers have used anesthesia, recently in combination with brain scans, as a tool to see what happens in the brain when people fade in and out of consciousness — which parts turn on and which turn off. For instance, in a recent study, investigators showed that a person could respond to a simple command to open his eyes (the subjects were all right-handed men) when the higher parts of the brain were not yet turned on. The finding may be useful in deciding how to measure the effects of anesthetics, and it adds another data point to knowledge of what’s going on in the brain. © 2012 The New York Times Company
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 14: Biological Rhythms, Sleep, and Dreaming
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 10: Biological Rhythms and Sleep
Link ID: 16645 - Posted: 04.14.2012
Tim Parks “There are no images.” This was the first time I noticed Riccardo Manzotti. It was a conference on art and neuroscience. Someone had spoken about the images we keep in our minds. Manzotti seemed agitated. The girl sitting next to me explained that he built robots, was a genius. “There are no images and no representations in our minds,” he insisted. “Our visual experience of the world is a continuum between see-er and seen united in a shared process of seeing.” I was curious, if only because, as a novelist, I’d always supposed I was dealing in images, imagery. This stuff might have implications. So we had a beer together. Manzotti has a degree in engineering and another in philosophy. He teaches in the psychology department at IULM University, Milan. The move from engineering to philosophy was prompted by conceptual problems he’d run into when first seeking to build robots. What does it mean that a subject sees an object? “People say the robot stores images of the world through its video camera. It doesn’t, it stores digital data. It has no images.” Manzotti is what they call a radical externalist: for him consciousness is not safely confined within a brain whose neurons select and store information received from a separate world, appropriating, segmenting, and manipulating various forms of input. Instead, he offers a model he calls Spread Mind: consciousness is a process shared between various otherwise distinct processes which, for convenience’s sake, we have separated out and stabilized in the words subject and object. Language, or at least our modern language, thus encourages a false account of experience. © 1963-2012 NYREV, Inc.
By Christof Koch What is the relation between selective attention and consciousness? When you strain to listen to the distant baying of coyotes over the sound of a campsite conversation, you do so by attending to the sound and becoming conscious of their howls. When you attend to your sparring opponent out of the corner of your eye, you become hyperaware of his smallest gestures. Because of the seemingly intimate relation between attention and consciousness, most scholars conflate the two processes. Indeed, when I came out of the closet to give public talks on the mind-body problem in the early 1990s (at that time, it wouldn’t do for a young professor in biology or engineering who had not even yet attained the holy state of tenure to talk about consciousness: it was considered too fringy), some of my colleagues insisted that I replace the incendiary “consciousness” with the more neutral “attention” because the two concepts could not be distinguished and were probably the same thing anyway. Two decades later a number of experiments prove that the two are not the same. Stage magicians are superb at manipulating the audience’s attention. By misdirecting your gaze using their hands or a beautiful, bikini-clad assistant, you look but don’t see, inverting Yogi Berra’s famous witticism, “You can observe a lot just by watching.” Scientists can do the same, sans the sexy woman. I described a psychophysical technique called continuous flash suppression in an earlier column [see “Rendering the Visible Invisible,” October/November 2008], in which a faint image in one eye—say, an angry face in the left eye—becomes invisible by flashing a series of colorful overlaid rectangles into the other eye. As long as you keep both eyes open, you see only the flashed pictures. Attention is drawn to the rapidly changing images, effectively camouflaging the angry face. As soon as you wink with the right eye, however, you see the face. 
This technique has been used to great effect both to hide things from consciousness—such as a naked man or woman—and to demonstrate that the brain will still attend to them. © 2012 Scientific American
By John Horgan I met Christof Koch in 1994 at the first of a series of big conferences on consciousness held in Tucson, Ariz. A professor at Caltech, Koch had helped popularize consciousness as a topic for serious scientific investigation—instead of windy philosophical supposition—through his collaboration with the great Francis Crick, who had already cracked the genetic code and now wanted to solve the riddle of mind as well. In Tucson Koch outlined a theory, jointly fashioned by him and Crick, that 40-hertz brain waves might be a key to consciousness. Although I was skeptical of that particular theory, I liked the hard-nosed, materialist, reductionist approach that Koch and Crick took toward consciousness. I also liked the quirky intensity that Koch brought to his scientific work. This trait was on display in Tucson during an encounter between Koch and the philosopher David Chalmers, who proposed that consciousness is such a “hard problem” that it needs new approaches, such as one incorporating ideas from information theory. Confronting Chalmers at a cocktail party, Koch declared that Chalmers’s information-based theory of consciousness was untestable and therefore useless. “Why don’t you just say that when you have a brain the Holy Ghost comes down and makes you conscious!” Koch exclaimed. Such a theory was unnecessarily complicated, Chalmers responded dryly, and it would not accord with his own subjective experience. “But how do I know that your subjective experience is the same as mine?” Koch retorted. “How do I even know you’re conscious?” © 2012 Scientific American
Sumit Paul-Choudhury, editor LOOKING at your own brain is a humbling and slightly unnerving experience. Mine, depicted in a freshly acquired MRI scan, is startlingly intricate, compact - and baffling. This is as much of a portrait of my own mind as I am ever likely to see. But to my ignorant eyes (which, by way of an eerie bonus, are now looking at their own cross-sections) it looks pretty much like any other brain. Apparently a more expert eye wouldn't help. "Whilst all my participants get very excited about seeing their brain for the first time after being scanned, and I frequently get asked 'What can you tell me about my brain?', the reality is that the brain will for a long time yet remain a mysterious mass," says the neuroscientist who scanned my brain for research purposes. "We must be content with knowing that the 'I' is constructed in its intricacies, but we cannot explain how.” The hope of closing the gap between the physical and mental is presumably what gets neuroscientists up in the morning, but it’s frustrating for a layperson like me. Avowed materialist though I am, I nonetheless rebel against the knowledge that the impassive blob on screen is "me". This cognitive dissonance was what I took with me to the opening of Brains, a new show at London’s Wellcome Collection, whose subtitle, "The Mind as Matter", suggests that its curators sympathise with my materialist perspective. “The neurosciences hold out the prospect of an objective account of consciousness - the soul or mind as nothing more than intricately connected flesh,” reads the introduction. But the bulk of the exhibition is dedicated to whole brains, brain collectors and anatomical paraphernalia, with little explicit reference to the brain’s fine structure, or how it might give rise to thought. © Copyright Reed Business Information Ltd.
Related chapters from BP7e: Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 2: Cells and Structures: The Anatomy of the Nervous System; Chapter 14: Attention and Consciousness
Link ID: 16594 - Posted: 03.31.2012
Robert Stickgold The psychologist Stuart Sutherland wrote that it is impossible to define consciousness “except in terms that are unintelligible without a grasp of what consciousness means ... Nothing worth reading has been written about it.” It is arguable whether Christof Koch's Consciousness provides such a definition, but the book is definitely worth reading. Koch, chief scientific officer at the Allen Institute for Brain Science in Seattle, Washington, is perhaps best known for his work with the late Francis Crick, searching for the neurobiological 'correlates of consciousness'. Here, he succinctly lays out the story of that quest. Focusing on how the brain might produce the mind, Koch mixes descriptions of major experiments with self-reflection and warnings of the inherent danger of the exercise. From Koch's collaborations with Crick, whom he seems to idolize, to his struggles with religion and free will, this is an engaging mixture of personal anecdote, scientific fact and pure speculation. It is often charming: Chapter 2, for instance, is entitled, 'In which I write about the wellsprings of my inner conflict between religion and reason, why I grew up wanting to be a scientist, why I wear a lapel pin of Professor Calculus, and how I acquired a second mentor late in life'. For many, the richest parts of the book will be Koch's lucid descriptions of experiments such as his work with Itzhak Fried, a neurosurgeon who implanted electrodes into the hippocampi of people with epilepsy. In one patient, Fried found a single neuron that responded only to the name or pictures of Saddam Hussein; in another, he found one that responded only to pictures of the actress Jennifer Aniston. In a descriptive tour de force, Koch explains that although Fried dubbed these cells concept neurons, we can think of them as “the cellular substrate of the Platonic Ideal of Jennifer Aniston”. © 2012 Nature Publishing Group,