Links for Keyword: Consciousness
By Adam Bear It happens hundreds of times a day: We press snooze on the alarm clock, we pick a shirt out of the closet, we reach for a beer in the fridge. In each case, we conceive of ourselves as free agents, consciously guiding our bodies in purposeful ways. But what does science have to say about the true source of this experience? In a classic paper published almost 20 years ago, the psychologists Dan Wegner and Thalia Wheatley made a revolutionary proposal: The experience of intentionally willing an action, they suggested, is often nothing more than a post hoc causal inference that our thoughts caused some behavior. The feeling itself, however, plays no causal role in producing that behavior. This could sometimes lead us to think we made a choice when we actually didn’t or think we made a different choice than we actually did. But there’s a mystery here. Suppose, as Wegner and Wheatley propose, that we observe ourselves (unconsciously) perform some action, like picking out a box of cereal in the grocery store, and then only afterwards come to infer that we did this intentionally. If this is the true sequence of events, how could we be deceived into believing that we had intentionally made our choice before the consequences of this action were observed? This explanation for how we think of our agency would seem to require supernatural backwards causation, with our experience of conscious will being both a product and an apparent cause of behavior. In a study just published in Psychological Science, Paul Bloom and I explore a radical—but non-magical—solution to this puzzle. © 2016 Scientific American
By JAMES GORMAN Bees find nectar and tell their hive-mates; flies evade the swatter; and cockroaches seem to do whatever they like wherever they like. But who would believe that insects are conscious, that they are aware of what’s going on, not just little biobots? Neuroscientists and philosophers apparently. As scientists lean increasingly toward recognizing that nonhuman animals are conscious in one way or another, the question becomes: Where does consciousness end? Andrew B. Barron, a cognitive scientist, and Colin Klein, a philosopher, at Macquarie University in Sydney, Australia, propose in Proceedings of the National Academy of Sciences that insects have the capacity for consciousness. This does not mean that a honeybee thinks, “Why am I not the queen?” or even, “Oh, I like that nectar.” But, Dr. Barron and Dr. Klein wrote in a scientific essay, the honeybee has the capacity to feel something. Their claim stops short of some others. Christof Koch, the president and chief scientific officer of the Allen Institute for Brain Science in Seattle, and Giulio Tononi, a neuroscientist and psychiatrist at the University of Wisconsin, have proposed that consciousness is nearly ubiquitous and can be present, to varying degrees, even in nonliving arrangements of matter. They say that rather than wonder how consciousness arises, one should look at where we know it exists and go from there to where else it might exist. They conclude that it is an inherent property of physical systems in which information moves around in a certain way — and that could include some kinds of artificial intelligence and even naturally occurring nonliving matter. © 2016 The New York Times Company
By Matthew Hutson Bad news for believers in clairvoyance. Our brains appear to rewrite history so that the choices we make after an event seem to precede it. In other words, we add loops to our mental timeline that let us feel we can predict things that in reality have already happened. Adam Bear and Paul Bloom at Yale University conducted some simple tests on volunteers. In one experiment, subjects looked at white circles and silently guessed which one would turn red. Once one circle had changed colour, they reported whether or not they had predicted correctly. Over many trials, their reported accuracy was significantly better than the 20 per cent expected by chance, indicating that the volunteers either had psychic abilities or had unwittingly played a mental trick on themselves. The researchers’ study design helped explain what was really going on. They placed different delays between the white circles’ appearance and one of the circles turning red, ranging from 50 milliseconds to one second. Participants’ reported accuracy was highest – surpassing 30 per cent – when the delays were shortest. That’s what you would expect if the appearance of the red circle was actually influencing decisions still in progress. This suggests it’s unlikely that the subjects were merely lying about their predictive abilities to impress the researchers. The mechanism behind this behaviour is still unclear. It’s possible, the researchers suggest, that we perceive the order of events correctly – one circle changes colour before we have actually made our prediction – but then we subconsciously swap the sequence in our memories so the prediction seems to come first. Such a switcheroo could be motivated by a desire to feel in control of our lives. © Copyright Reed Business Information Ltd.
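One way to see how such “postdictive” choices could inflate reported accuracy is a toy simulation. Assuming five circles (consistent with the 20 per cent chance rate) and letting the red circle occasionally bias a decision still in progress, reported accuracy climbs above chance in the way the study reports at short delays. All parameters and names below are illustrative assumptions, not taken from the study:

```python
import random

def reported_accuracy(n_trials, p_leak, n_circles=5, seed=1):
    """Fraction of trials scored 'correct' when, with probability p_leak,
    the red circle's appearance biases a choice still in progress."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        red = rng.randrange(n_circles)
        if rng.random() < p_leak:
            guess = red  # the outcome leaks into the unfinished 'prediction'
        else:
            guess = rng.randrange(n_circles)  # an honest guess at chance level
        hits += guess == red
    return hits / n_trials

# Short delay: room for the outcome to shape the guess; long delay: none.
print(reported_accuracy(100_000, p_leak=0.15))  # roughly 0.32, well above chance
print(reported_accuracy(100_000, p_leak=0.0))   # roughly 0.20, the chance rate
```

A leak probability of 0.15 predicts accuracy of 0.15 + 0.85 × 0.20 = 0.32, close to the “surpassing 30 per cent” figure reported at the shortest delays; the simulation is only a sketch of the hypothesis, not a model of the actual data.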
By Daniel Barron It’s unnerving when someone with no criminal record commits a disturbingly violent crime. Perhaps he stabs his girlfriend 40 times and dumps her body in the desert. Perhaps he climbs to the top of a clock tower and guns down innocent passers-by. Or perhaps he climbs out of a car at a stoplight and nearly decapitates an unsuspecting police officer with 26 rounds from an assault rifle. Perhaps he even drowns his own children. Or shoots the President of the United States. The shock is palpable (NB: those are all actual cases). The very notion that someone—our neighbor, the guy ahead of us in the check-out line, we (!)—could do something so terrible rubs at our minds. We wonder, “What happened? What in this guy snapped?” After all, for the last 20 years, the accused went home to his family after work—why did he go rob that liquor store? What made him pull that trigger? The subject hit home for me this week when I was called to jury duty. As I made my way to the county courthouse, I wondered whether I would be asked to decide a capital murder case like the ones above. As a young neuroscientist, the prospect made me uneasy. At the trial, the accused’s lawyers would probably argue that, at the time of the crime, he had diminished capacity to make decisions, that somehow he wasn’t entirely free to choose whether or not to commit the crime. They might cite some form of neuroscientific evidence to argue that, at the time of the crime, his brain wasn’t functioning normally. And the jury and judge have to decide what to make of it. © 2016 Scientific American
Giant manta rays have been filmed checking out their reflections in a way that suggests they are self-aware. Only a small number of animals, mostly primates, have passed the mirror test, widely used as a tentative test of self-awareness. “This new discovery is incredibly important,” says Marc Bekoff, of the University of Colorado in Boulder. “It shows that we really need to expand the range of animals we study.” But not everyone is convinced that the new study proves conclusively that manta rays, which have the largest brains of any fish, can do this – or indeed, that the mirror test itself is an appropriate measure of self-awareness. Csilla Ari, of the University of South Florida in Tampa, filmed two giant manta rays in a tank, with and without a mirror inside. The fish changed their behaviour in a way that suggested that they recognised the reflections as themselves as opposed to another manta ray. They did not show signs of social interaction with the image, which is what you would expect if they perceived it to be another individual. Instead, the rays repeatedly moved their fins and circled in front of the mirror. This suggests they could see whether their reflection moved when they moved. The frequency of these movements was much higher when the mirror was in the tank than when it was not. © Copyright Reed Business Information Ltd.
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 22015 - Posted: 03.22.2016
By BARBARA K. LIPSKA As the director of the human brain bank at the National Institute of Mental Health, I am surrounded by brains, some floating in jars of formalin and others icebound in freezers. As part of my work, I cut these brains into tiny pieces and study their molecular and genetic structure. My specialty is schizophrenia, a devastating disease that often makes it difficult for the patient to discern what is real and what is not. I examine the brains of people with schizophrenia whose suffering was so acute that they committed suicide. I had always done my work with great passion, but I don’t think I really understood what was at stake until my own brain stopped working. In the first days of 2015, I was sitting at my desk when something freakish happened. I extended my arm to turn on the computer, and to my astonishment realized that my right hand disappeared when I moved it to the right lower quadrant of the keyboard. I tried again, and the same thing happened: The hand disappeared completely as if it were cut off at the wrist. It felt like a magic trick — mesmerizing, and totally inexplicable. Stricken with fear, I kept trying to find my right hand, but it was gone. I had battled breast cancer in 2009 and melanoma in 2012, but I had never considered the possibility of a brain tumor. I knew immediately that this was the most logical explanation for my symptoms, and yet I quickly dismissed the thought. Instead I headed to a conference room. My colleagues and I had a meeting scheduled to review our new data on the molecular composition of schizophrenia patients’ frontal cortex, a brain region that shapes who we are — our thoughts, emotions, memories. But I couldn’t focus on the meeting because the other scientists’ faces kept vanishing. Thoughts about a brain tumor crept quietly into my consciousness again, then screamed for attention. © 2016 The New York Times Company
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 7: Vision: From Eye to Brain
Link ID: 21984 - Posted: 03.14.2016
Take a theory of consciousness that calculates how aware any information-processing network is – be it a computer or a brain. Trouble is, it takes a supercomputer billions of years to verify its predictions. Add a maverick cosmologist, and what do you get? A way to make the theory useful within our lifetime. Integrated information theory (IIT) is one of our best descriptions of consciousness. Developed by neuroscientist Giulio Tononi of the University of Wisconsin at Madison, it’s based on the observation that each moment of awareness is unified. When you contemplate a bunch of flowers, say, it’s impossible to be conscious of the flowers’ colour independently of their fragrance because the brain has integrated the sensory data. Tononi argues that for a system to be conscious, it must integrate information in such a way that the whole contains more information than the sum of its parts. The measure of how a system integrates information is called phi. One way of calculating phi involves dividing a system into two and calculating how dependent each part is on the other. One cut would be the “cruellest”, creating two parts that are the least dependent on each other. If the parts of the cruellest cut are completely independent, then phi is zero, and the system is not conscious. The greater their dependency, the greater the value of phi and the greater the degree of consciousness of the system. Finding the cruellest cut, however, is almost impossible for any large network. For the human brain, with its 100 billion neurons, calculating phi like this would take “longer than the age of our universe”, says Max Tegmark, a cosmologist at the Massachusetts Institute of Technology. © Copyright Reed Business Information Ltd.
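The combinatorial problem is easy to see in miniature. The sketch below is a toy, not Tononi’s actual formalism (which is defined over perturbations of the system rather than observed correlations): it uses mutual information between the two halves of a cut as a stand-in for their dependency, and brute-forces every bipartition to find the cruellest one. The function names and the tiny three-unit example are illustrative only.

```python
import itertools
import math

def entropy(probs):
    """Shannon entropy (bits) of a distribution given as state -> probability."""
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

def mutual_information(states, part_a, part_b):
    """Mutual information between two groups of units, estimated from an
    equally weighted list of observed joint states (tuples of 0/1)."""
    n = len(states)
    pa, pb, pab = {}, {}, {}
    for s in states:
        a = tuple(s[i] for i in part_a)
        b = tuple(s[i] for i in part_b)
        pa[a] = pa.get(a, 0) + 1 / n
        pb[b] = pb.get(b, 0) + 1 / n
        pab[(a, b)] = pab.get((a, b), 0) + 1 / n
    return entropy(pa) + entropy(pb) - entropy(pab)

def cruellest_cut(states, n_units):
    """Brute-force every bipartition and return the minimum dependency:
    the 'cruellest' cut is the one whose halves need each other least."""
    units = range(n_units)
    best = float("inf")
    for k in range(1, n_units // 2 + 1):
        for part_a in itertools.combinations(units, k):
            part_b = tuple(i for i in units if i not in part_a)
            best = min(best, mutual_information(states, part_a, part_b))
    return best

# Three binary units: units 0 and 1 always agree; unit 2 varies independently.
states = [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1)]
print(cruellest_cut(states, 3))  # cutting off unit 2 severs nothing -> 0.0
```

Even this crude version must examine on the order of 2^(n-1) bipartitions, which is why exact calculation is hopeless for anything brain-sized and why shortcuts like Tegmark’s matter.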
By Christian Jarrett Back in the 1980s, the American scientist Benjamin Libet made a surprising discovery that appeared to rock the foundations of what it means to be human. He recorded people’s brain waves as they made spontaneous finger movements while looking at a clock, with the participants telling researchers the time at which they decided to waggle their fingers. Libet’s revolutionary finding was that the timing of these conscious decisions was consistently preceded by several hundred milliseconds of background preparatory brain activity (known technically as “the readiness potential”). The implication was that the decision to move was made nonconsciously, and that the subjective feeling of having made this decision is tagged on afterward. In other words, the results implied that free will as we know it is an illusion — after all, how can our conscious decisions be truly free if they come after the brain has already started preparing for them? For years, various research teams have tried to pick holes in Libet’s original research. It’s been pointed out, for example, that it’s pretty tricky for people to accurately report the time that they made their conscious decision. But, until recently, the broad implications of the finding have weathered these criticisms, at least in the eyes of many hard-nosed neuroscientists, and over the last decade or so his basic result has been replicated and built upon with ever more advanced methods such as fMRI and the direct recording of neuronal activity using implanted electrodes. © 2016, New York Media LLC
By David Shultz Is my yellow the same as your yellow? Does your pain feel like my pain? The question of whether the human consciousness is subjective or objective is largely philosophical. But the line between consciousness and unconsciousness is a bit easier to measure. In a new study of how anesthetic drugs affect the brain, researchers suggest that our experience of reality is the product of a delicate balance of connectivity between neurons—too much or too little and consciousness slips away. “It’s a very nice study,” says neuroscientist Melanie Boly at the University of Wisconsin, Madison, who was not involved in the work. “The conclusions that they draw are justified.” Previous studies of the brain have revealed the importance of “cortical integration” in maintaining consciousness, meaning that the brain must process and combine multiple inputs from different senses at once. Our experience of an orange, for example, is made up of sight, smell, taste, touch, and the recollection of our previous experiences with the fruit. The brain merges all of these inputs—photons, aromatic molecules, etc.—into our subjective experience of the object in that moment. “There is new meaning created by the interaction of things,” says Enzo Tagliazucchi, a physicist at the Institute for Medical Psychology in Kiel, Germany. Consciousness ascribes meaning to the pattern of photons hitting your retina, thus differentiating you from a digital camera. Although the brain still receives these data when we lose consciousness, no coherent sense of reality can be assembled. © 2016 American Association for the Advancement of Science.
It seems like the ultimate insult, but getting people with brain injuries to do maths may lead to better diagnoses. A trial of the approach has found two people in an apparent vegetative state who may be conscious but “locked-in”. People who are in a vegetative state are awake but have lost all cognitive function. Occasionally, people diagnosed as being in this state are actually minimally conscious with fleeting periods of awareness, or even locked-in. This occurs when they are totally aware but unable to move any part of their body. It can be very difficult to distinguish between each state, which is why a team of researchers in China have devised a brain-computer interface that tests whether people with brain injuries can perform mental arithmetic – a clear sign of conscious awareness. The team, led by Yuanqing Li at South China University of Technology and Jiahui Pan at the South China Normal University in Guangzhou, showed 11 people with various diagnoses a maths problem on a screen. This was followed by two possible answers flickering at frequencies designed to evoke different patterns of brain activity. Frames around each number also flashed several times. The participants were asked to focus on the correct answer and count the number of times its frame flashed. The brain patterns from the flickering answers, together with the detection of another kind of brain signal that occurs when someone counts, enabled a computer to tell which answer, if any, the person was focusing on. © Copyright Reed Business Information Ltd.
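The flickering answers exploit what is often called frequency tagging: attending to a stimulus that flickers at, say, 7.5 Hz evokes brain activity at that same frequency, so the computer only has to ask which candidate frequency dominates the recording. Here is a minimal sketch with synthetic data standing in for EEG; the frequencies, sampling rate, and function names are illustrative assumptions, not details from the study:

```python
import cmath
import math
import random

def band_power(signal, fs, freq):
    """Magnitude of the single-frequency Fourier projection of the signal."""
    return abs(sum(x * cmath.exp(-2j * math.pi * freq * i / fs)
                   for i, x in enumerate(signal)))

def dominant_flicker(eeg, fs, candidates):
    """Pick the candidate flicker frequency carrying the most power,
    a rough stand-in for the interface's frequency-tagging step."""
    return max(candidates, key=lambda f: band_power(eeg, fs, f))

# Synthetic 4-second trace at 250 Hz: a 7.5 Hz response buried in noise.
fs = 250
rng = random.Random(0)
eeg = [math.sin(2 * math.pi * 7.5 * i / fs) + rng.gauss(0, 1.0)
       for i in range(4 * fs)]
print(dominant_flicker(eeg, fs, [6.0, 7.5, 10.0]))  # 7.5
```

The study additionally detects a second kind of brain signal tied to silently counting the frame flashes; that part is not modelled here.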
Laura Sanders Signals in the brain can hint at whether a person undergoing anesthesia will slip under easily or fight the drug, a new study suggests. The results, published January 14 in PLOS Computational Biology, bring scientists closer to being able to tailor doses of the powerful drugs for specific patients. Drug doses are often given with a one-size-fits-all attitude, says bioengineer and neuroscientist Patrick Purdon of Massachusetts General Hospital and Harvard Medical School. But the new study finds clear differences in people’s brain responses to similar doses of an anesthetic drug, Purdon says. “To me, that’s the key and interesting point.” Cognitive neuroscientist Tristan Bekinschtein of the University of Cambridge and colleagues recruited 20 people to receive low doses of the general anesthetic propofol. The low dose wasn’t designed to knock people out, but to instead dial down their consciousness until they teetered on the edge of awareness — a point between being awake and alert and being drowsy and nonresponsive. While the drug was being delivered, participants repeatedly heard either a buzzing sound or a noise and were asked each time which they heard, an annoying question designed to gauge awareness. Of the 20 people, seven were sidelined by the propofol and they began to respond less. Thirteen other participants, however, kept right on responding, “fighting the drug,” Bekinschtein says. © Society for Science & the Public 2000 - 2016.
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 14: Biological Rhythms, Sleep, and Dreaming
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 10: Biological Rhythms and Sleep
Link ID: 21793 - Posted: 01.16.2016
Don’t blame impulsive people for their poor decisions. It’s not necessarily their fault. Impulsivity could result from not having enough time to veto our own actions. At least that is the implication of a twist on a classic experiment on free will. In 1983, neuroscientist Benjamin Libet performed an experiment to test whether we have free will. Participants were asked to voluntarily flex a finger while watching a clock-face with a rotating dot. They had to note the position of the dot as soon as they became aware of their intention to act. As they were doing so, Libet recorded their brain activity via EEG electrodes attached to the scalp. He found that a spike in brain activity called the readiness potential, which precedes a voluntary action, occurred about 350 milliseconds before the volunteers became consciously aware of their intention to act. The readiness potential is thought to signal the brain preparing for movement. Libet interpreted his results to mean that free will is an illusion. But we’re not complete slaves to our neurons, he reasoned, as there was a 200-millisecond gap between conscious awareness of our intention and the initiation of movement. Libet argued that this was enough time to consciously veto the action, or exert our “free won’t”. While Libet’s interpretations have remained controversial, this hasn’t stopped scientists carrying out variations of his experiment. Among other things, this has revealed that people with Tourette’s syndrome, who have uncontrollable tics, experience a shorter veto window than people without the condition, as do those with schizophrenia. © Copyright Reed Business Information Ltd.
By Melissa Healy A new study finds that policies on defining brain death vary from hospital to hospital and could result in serious errors. Since 2010, neurologists have had a clear set of standards and procedures to distinguish a brain-dead patient from one who might emerge from an apparent coma. But when profoundly unresponsive patients are rushed to hospitals around the nation, the physicians who make the crucial call are not always steeped in the diagnostic fine points of brain death and the means of identifying it with complete confidence. State laws governing the diagnosis of brain death vary widely. Some states allow any physician to make the diagnosis, while others dictate the level of specialty a physician making the call must have. Some require that a second physician confirm the diagnosis or that a given period of time elapse. Others make no such demands. Given these situations, hospital policies can be invaluable guides for physicians, hospital administrators and patients’ families. In the absence of consistent physician expertise or legal requirements, hospital protocols can translate a scientific consensus into a step-by-step checklist. That would help ensure that no one who is not brain-dead is denied further care or considered a potential organ donor and that the deceased and their families would have every opportunity to donate organs.
By KARL OVE KNAUSGAARD I arrived in Tirana, Albania, on a Sunday evening in late August, on a flight from Istanbul. The sun had set while the plane was midflight, and as we landed in the dark, images of fading light still filled my mind. The man next to me, a young, red-haired American wearing a straw hat, asked me if I knew how to get into town from the airport. I shook my head, put the book I had been reading into my backpack, got up, lifted my suitcase out of the overhead compartment and stood waiting in the aisle for the door up ahead to open. That book was the reason I had come. It was called “Do No Harm,” and it was written by the British neurosurgeon Henry Marsh. His job is to slice into the brain, the most complex structure we know of in the universe, where everything that makes us human is contained, and the contrast between the extremely sophisticated and the extremely primitive — all of that work with knives, drills and saws — fascinated me deeply. I had sent Marsh an email, asking if I might meet him in London to watch him operate. He wrote a cordial reply saying that he seldom worked there now, but he was sure something could be arranged. In passing, he mentioned that he would be operating in Albania in August and in Nepal in September, and I asked hesitantly whether I could join him in Albania. Now I was here. Tense and troubled, I stepped out of the door of the airplane, having no idea what lay ahead. I knew as little about Albania as I did about brain surgery. The air was warm and stagnant, the darkness dense. A bus was waiting with its engine running. Most of the passengers were silent, and the few who chatted with one another spoke a language I didn’t know. It struck me that 25 years ago, when this was among the last remaining Communist states in Europe, I would not have been allowed to enter; then, the country was closed to the outside world, almost like North Korea today. 
Now the immigration officer barely glanced at my passport before stamping it. She dully handed it back to me, and I entered Albania. © 2015 The New York Times Company
Scientists showed that they could alter brain activity of rats and either wake them up or put them in an unconscious state by changing the firing rates of neurons in the central thalamus, a region known to regulate arousal. The study, published in eLife, was partially funded by the National Institutes of Health. “Our results suggest the central thalamus works like a radio dial that tunes the brain to different states of activity and arousal,” said Jin Hyung Lee, Ph.D., assistant professor of neurology, neurosurgery and bioengineering at Stanford University, and a senior author of the study. Located deep inside the brain, the thalamus acts as a relay station sending neural signals from the body to the cortex. Damage to neurons in the central part of the thalamus may lead to problems with sleep, attention, and memory. Previous studies suggested that stimulation of thalamic neurons may awaken patients who have suffered a traumatic brain injury from minimally conscious states. Dr. Lee’s team flashed laser pulses onto light-sensitive central thalamic neurons of sleeping rats, which caused the cells to fire. High frequency stimulation of 40 or 100 pulses per second woke the rats. In contrast, low frequency stimulation of 10 pulses per second sent the rats into a state reminiscent of absence seizures that caused them to stiffen and stare before returning to sleep. “This study takes a big step towards understanding the brain circuitry that controls sleep and arousal,” said Yejun (Janet) He, Ph.D., program director at NIH’s National Institute of Neurological Disorders and Stroke (NINDS).
The road map of conscious awareness has been deciphered. Now that we know which brain pathways control whether someone is awake or unconscious, we may be able to rouse people from a vegetative or minimally conscious state. In 2007, researchers used deep brain stimulation to wake a man from a minimally conscious state. “It was quite remarkable,” says Jin Hyung Lee at Stanford University in California. The 38-year-old had suffered a severe brain injury in a street mugging six years earlier. Before his treatment he was unable to communicate and had no voluntary control over his limbs. When doctors stimulated his thalamus – a central hub that sends signals all around the brain – his speech and movement gradually returned. However, attempts to treat other people in a similar way have failed. The problem lies with the crudeness of the technique. “Deep brain stimulation is done without much knowledge of how it actually alters the circuits in the brain,” says Lee. The technique involves attaching electrodes to the brain and using them to stimulate the tissue beneath. Unfortunately, the electrodes can also stimulate unintended areas, which means it is hard to work out exactly what is happening in people’s brains. “There are a lot of fibres and different cells in the thalamus and working out what was going on in the brain was very difficult,” says Lee. “So we wanted to figure it out.” © Copyright Reed Business Information Ltd.
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 21695 - Posted: 12.12.2015
By John Horgan How does matter make mind? More specifically, how does a physical object generate subjective experiences like those you are immersed in as you read this sentence? How does stuff become conscious? This is called the mind-body problem, or, by philosopher David Chalmers, the “hard problem.” I expressed doubt that the hard problem can be solved--a position called mysterianism--in The End of Science. I argue in a new edition that my pessimism has been justified by the recent popularity of panpsychism. This ancient doctrine holds that consciousness is a property not just of brains but of all matter, like my table and coffee mug. Panpsychism strikes me as self-evidently foolish, but non-foolish people—notably Chalmers and neuroscientist Christof Koch—are taking it seriously. How can that be? What’s compelling their interest? Have I dismissed panpsychism too hastily? These questions lured me to a two-day workshop on integrated information theory at New York University last month. Conceived by neuroscientist Giulio Tononi (who trained under the late, great Gerald Edelman), IIT is an extremely ambitious theory of consciousness. It applies to all forms of matter, not just brains, and it implies that panpsychism might be true. Koch and others are taking panpsychism seriously because they take IIT seriously. © 2015 Scientific American
By Virginia Morell Was that fish on your plate once a sentient being? Scientists have long believed that the animals aren’t capable of the same type of conscious thought we are because they fail the “emotional fever” test. When researchers expose birds, mammals (including humans), and at least one species of lizard to new environments, they experience a slight rise in body temperature of 1°C to 2°C that lasts a while; it’s a true fever, as if they were responding to an infection. The fever is linked to the emotions because it’s triggered by an outside stimulus, yet produces behavioral and physiological changes that can be observed. Some scientists argue that these only occur in animals with sophisticated brains that sense and are conscious of what’s happening to them. Previous tests suggested that toads and fish don’t respond this way. Now, a new experiment that gave the fish more choices shows the opposite. Researchers took 72 zebrafish and either did nothing with them or placed them alone in a small net hanging inside a chamber in their tank with water of about 27°C; zebrafish prefer water of about 28°C. After 15 minutes in the net, the team released the confined fish. They could then freely swim among the tank’s five other chambers, each heated to a different temperature along a gradient from 17.92°C to 35°C. (The previous study used a similar setup but gave goldfish a choice between only two chambers, both at higher temperatures.) The stressed fish spent more time—between 4 and 8 hours—in the warmer waters than did the control fish, and raised their body temperatures about 2°C to 4°C, showing an emotional fever, the scientists report online today in the Proceedings of the Royal Society B. Thus, their study upends a key argument against consciousness in fish, they say. © 2015 American Association for the Advancement of Science.
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 11: Emotions, Aggression, and Stress
Link ID: 21657 - Posted: 11.25.2015
Alva Noë For some time now, I've been skeptical about the neuroscience of consciousness. Not so much because I doubt that consciousness is affected by neural states and processes, but because of the persistent tendency on the part of some neuroscientists to think of consciousness itself as a neural phenomenon. Nothing epitomizes this tendency better than Francis Crick's famous claim — he called it his "astonishing hypothesis" — that you are your brain. At an interdisciplinary conference at Brown not so long ago, I heard a prominent neuroscientist blandly assert, as if voicing well-established scientific fact, that thoughts, feelings and beliefs are specific constellations of matter that are located (as it happens) inside the head. My own view — I laid this out in a book I wrote a few years back called Out of Our Heads — is that the brain is only part of the story, and that we can only begin to understand how the brain makes us conscious by realizing that the brain functions only in the setting of our bodies and our broader environmental (including our social and cultural) situation. The skull is not a magical membrane, my late collaborator, friend and teacher Susan Hurley used to say. And there is no reason to think the processes supporting consciousness are confined to what happens only on one side (the inside) of that boundary. There is a nice interview on the Oxford University Press website with Anil Seth, the editor of a new Oxford journal Neuroscience of Consciousness. It's an informative discussion and makes the valuable point that the study of consciousness is interdisciplinary. © 2015 npr
Doubts are emerging about one of our leading models of consciousness. It seems that brain signals thought to reflect consciousness are also generated during unconscious activity. A decade of studies have lent credence to the global neuronal workspace theory of consciousness, which states that when something is perceived unconsciously, or subliminally, that information is processed locally in the brain. In contrast, conscious perception occurs when the information is broadcast to a “global workspace”, or assemblies of neurons distributed across various brain regions, leading to activity over the entire network. Proponents of this idea, Stanislas Dehaene at France’s national institute for health in Gif-sur-Yvette, and his colleagues, discovered that when volunteers view stimuli that either enter conscious awareness or don’t, their brains show identical EEG activity for the first 270 milliseconds. Then, if perception of the stimuli is subliminal, the brain activity peters out. However, when volunteers become conscious of the stimuli, there is a sudden burst of widespread brain activity 300 ms after the stimulus. This activity is characterised by an EEG signal called P3b, and has been called a neural correlate of consciousness. Brian Silverstein and Michael Snodgrass at the University of Michigan in Ann Arbor, and colleagues wondered if P3b could be detected during unconscious processing of stimuli. © Copyright Reed Business Information Ltd.