Links for Keyword: Consciousness



Links 1 - 20 of 96

By Jerry Adler Smithsonian Magazine | In London, Benjamin Franklin once opened a bottle of fortified wine from Virginia and poured out, along with the refreshment, three drowned flies, two of which revived after a few hours and flew away. Ever the visionary, he wondered about the possibility of incarcerating himself in a wine barrel for future resurrection, “to see and observe the state of America a hundred years hence.” Alas, he wrote to a friend in 1773, “we live in an age too early . . . to see such an art brought in our time to its perfection.”

If Franklin were alive today he would find a kindred spirit in Ken Hayworth, a neuroscientist who also wants to be around in 100 years but recognizes that, at 43, he’s not likely to make it on his own. Nor does he expect to get there preserved in alcohol or a freezer; despite the claims made by advocates of cryonics, he says, the ability to revivify a frozen body “isn’t really on the horizon.” So Hayworth is hoping for what he considers the next best thing. He wishes to upload his mind—his memories, skills and personality—to a computer that can be programmed to emulate the processes of his brain, making him, or a simulacrum, effectively immortal (as long as someone keeps the power on).

Hayworth’s dream, which he is pursuing as president of the Brain Preservation Foundation, is one version of the “technological singularity.” It envisions a future of “substrate-independent minds,” in which human and machine consciousness will merge, transcending biological limits of time, space and memory. “This new substrate won’t be dependent on an oxygen atmosphere,” says Randal Koene, who works on the same problem at his organization, Carboncopies.org. “It can go on a journey of 1,000 years, it can process more information at a higher speed, it can see in the X-ray spectrum if we build it that way.”

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 5: The Sensorimotor System
Link ID: 20841 - Posted: 04.25.2015

by Anil Ananthaswamy HOLD that thought. When it comes to consciousness, the brain may be doing just that. It now seems that conscious perception requires brain activity to hold steady for hundreds of milliseconds. This signature in the pattern of brainwaves can be used to distinguish between levels of impaired consciousness in people with brain injury. The new study by Aaron Schurger at the Swiss Federal Institute of Technology in Lausanne doesn't explain the so-called "hard problem of consciousness" – how roughly a kilogram of nerve cells is responsible for the miasma of sensations, thoughts and emotions that make up our mental experience. However, it does chip away at it, and support the idea that it may one day be explained in terms of how the brain processes information. Neuroscientists think that consciousness requires neurons to fire in such a way that they produce a stable pattern of brain activity. The exact pattern will depend on what the sensory information is, but once information has been processed, the idea is that the brain should hold a pattern steady for a short period of time – almost as if it needs a moment to read out the information. In 2009, Schurger tested this theory by scanning 12 people's brains with fMRI machines. The volunteers were shown two images simultaneously, one for each eye. One eye saw a red-on-green line drawing and the other eye saw green-on-red. This confusion caused the volunteers to sometimes consciously perceive the drawing and sometimes not. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 20777 - Posted: 04.10.2015

By Christof Koch In the Dutch countryside, a tall, older man, dressed in a maroon sports coat, his back slightly stooped, stands out because of his height and a pair of extraordinarily bushy eyebrows. His words, inflected by a British accent, are directed at a middle-aged man with long, curly brown hair, penetrating eyes and a dark, scholarly gown, who talks in only a halting English that reveals his native French origins. Their strangely clashing styles of speaking and mismatched clothes do not seem to matter to them as they press forward, with Eyebrows peering down intently at the Scholar. There is something distinctly odd about the entire meeting—a crossing of time, place and disciplines.

Eyebrows: So I finally meet the man who doubts everything.

The Scholar: (not missing a beat) At this time, I admit nothing that is not necessarily true. I'm famous for that!

Eyebrows: Is there anything that you are certain of? (sotto voce) Besides your own fame?

The Scholar: (evading the sarcastic jibe) I can't be certain of my fame. Indeed, I can't even be certain that there is a world out there, for I could be dreaming or hallucinating it. I can't be certain about the existence of my own body, its shape and extension, its corporality, for again I might be fooling myself. But now what am I, when I suppose that there is some supremely powerful and, if I may be permitted to say so, malicious deceiver who deliberately tries to fool me in any way he can? Given this evil spirit, how do I know that my sensations about the outside world—that is, it looks, feels and smells in a particular way—are not illusions, conjured up by Him to deceive me? It seems to me that therefore I can never know anything truly about the world. Nothing, rien du tout. I have to doubt everything. © 2015 Scientific American

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 20640 - Posted: 03.03.2015

By Neuroskeptic In an interesting short paper just published in Trends in Cognitive Sciences, Caltech neuroscientist Ralph Adolphs offers his thoughts on The Unsolved Problems of Neuroscience. Here’s Adolphs’ list of the top 23 questions (including 3 “meta” issues), which, he says, was inspired by Hilbert’s famous set of 23 mathematical problems:

Problems that are solved, or soon will be:
I. How do single neurons compute?
II. What is the connectome of a small nervous system, like that of Caenorhabditis elegans (300 neurons)?
III. How can we image a live brain of 100,000 neurons at cellular and millisecond resolution?
IV. How does sensory transduction work?

Problems that we should be able to solve in the next 50 years:
V. How do circuits of neurons compute?
VI. What is the complete connectome of the mouse brain (70,000,000 neurons)?
VII. How can we image a live mouse brain at cellular and millisecond resolution?
VIII. What causes psychiatric and neurological illness?
IX. How do learning and memory work?
X. Why do we sleep and dream?
XI. How do we make decisions?
XII. How does the brain represent abstract ideas?

Problems that we should be able to solve, but who knows when:
XIII. How does the mouse brain compute?
XIV. What is the complete connectome of the human brain (80,000,000,000 neurons)?
XV. How can we image a live human brain at cellular and millisecond resolution?
XVI. How could we cure psychiatric and neurological diseases?
XVII. How could we make everybody’s brain function best?

Problems we may never solve:
XVIII. How does the human brain compute?
XIX. How can cognition be so flexible and generative?
XX. How and why does conscious experience arise?

Meta-questions:
XXI. What counts as an explanation of how the brain works? (and which disciplines would be needed to provide it?)
XXII. How could we build a brain? (how do evolution and development do it?)
XXIII. What are the different ways of understanding the brain? (what is function, algorithm, implementation?)
Adolphs R (2015). The unsolved problems of neuroscience. Trends in Cognitive Sciences. PMID: 25703689

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 20637 - Posted: 03.02.2015

by Clare Wilson Once only possible in an MRI scanner, vibrating pads and electrode caps could soon help locked-in people communicate on a day-to-day basis YOU wake up in hospital unable to move, to speak, to twitch so much as an eyelid. You hear doctors telling your relatives you are in a vegetative state – unaware of everything around you – and you have no way of letting anyone know this is not the case. Years go by, until one day, you're connected to a machine that allows you to communicate through your brain waves. It only allows yes or no answers, but it makes all the difference – now you can tell your carers if you are thirsty, if you'd like to sit up, even which TV programmes you want to watch. In recent years, breakthroughs in mind-reading technology have brought this story close to reality for a handful of people who may have a severe type of locked-in syndrome, previously diagnosed as being in a vegetative state. So far, most work has required a lab and a giant fMRI scanner. Now two teams are developing devices that are portable enough to be taken out to homes, to help people communicate on a day-to-day basis. The technology might also be able to identify people who have been misdiagnosed. People with "classic" locked-in syndrome are fully conscious but completely paralysed apart from eye movements. Adrian Owen of Western University in London, Canada, fears that there is another form of the condition where the paralysis is total. He thinks that a proportion of people diagnosed as being in a vegetative state – in which people are thought to have no mental awareness at all – are actually aware but unable to let anyone know. "The possibility is that we are missing people with some sort of complete locked-in syndrome," he says. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 20537 - Posted: 01.31.2015

Simon Parkin A few months before she died, my grandmother made a decision. Bobby, as her friends called her (theirs is a generation of nicknames), was a farmer’s wife who not only survived World War II but also found in it justification for her natural hoarding talent. ‘Waste not, want not’ was a principle she lived by long after England recovered from a war that left it buckled and wasted. So she kept old envelopes and bits of cardboard cereal boxes for note taking and lists. She kept frayed blankets and musty blouses from the 1950s in case she needed material to mend. By extension, she was also a meticulous chronicler. She kept albums of photographs of her family members. In a box she kept the airmail love letters my late grandfather sent her while he travelled the world with the merchant navy. Her home was filled with the debris of her memories. Yet in the months leading up to her death, the emphasis shifted from hoarding to sharing. Every time I visited my car would fill with stuff: unopened cartons of orange juice, balls of fraying wool, damp, antique books, empty glass jars. All things she needed to rehome now she faced her mortality. The memories too began to move out. She sent faded photographs to her children, grandchildren and friends, as well as letters containing vivid paragraphs detailing some experience or other. On 9 April, the afternoon before the night she died, she posted a letter to one of her late husband’s old childhood friends. In the envelope she enclosed some photographs of my grandfather and his friend playing as young children. “You must have them,” she wrote to him. It was a demand but also a plea, perhaps, that these things not be lost or forgotten when, a few hours later, she slipped away in her favourite armchair. © 2015 BBC

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 20515 - Posted: 01.26.2015

Oliver Burkeman One spring morning in Tucson, Arizona, in 1994, an unknown philosopher named David Chalmers got up to give a talk on consciousness, by which he meant the feeling of being inside your head, looking out – or, to use the kind of language that might give a neuroscientist an aneurysm, of having a soul. Though he didn’t realise it at the time, the young Australian academic was about to ignite a war between philosophers and scientists, by drawing attention to a central mystery of human life – perhaps the central mystery of human life – and revealing how embarrassingly far they were from solving it. The scholars gathered at the University of Arizona – for what would later go down as a landmark conference on the subject – knew they were doing something edgy: in many quarters, consciousness was still taboo, too weird and new agey to take seriously, and some of the scientists in the audience were risking their reputations by attending. Yet the first two talks that day, before Chalmers’s, hadn’t proved thrilling. “Quite honestly, they were totally unintelligible and boring – I had no idea what anyone was talking about,” recalled Stuart Hameroff, the Arizona professor responsible for the event. “As the organiser, I’m looking around, and people are falling asleep, or getting restless.” He grew worried. “But then the third talk, right before the coffee break – that was Dave.” With his long, straggly hair and fondness for all-body denim, the 27-year-old Chalmers looked like he’d got lost en route to a Metallica concert. “He comes on stage, hair down to his butt, he’s prancing around like Mick Jagger,” Hameroff said. “But then he speaks. And that’s when everyone wakes up.”

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 20503 - Posted: 01.21.2015

Ewen Callaway The ability to recognize oneself in a mirror has been touted as a hallmark of higher cognition — present in humans and only the most intelligent of animals — and the basis for empathy. A study published this week in Current Biology controversially reports that macaques can be trained to pay attention to themselves in a mirror, the first such observation in any monkey species. Yet the finding raises as many questions as it answers — not only about the cognitive capacity of monkeys, but also about mirror self-recognition as a measure of animal intelligence. “Simply because you’re acting as if you recognize yourself in a mirror doesn’t necessarily mean you’ve achieved self-recognition,” says Gordon Gallup, an evolutionary psychologist at the State University of New York in Albany, who in 1970 was the first to demonstrate mirror self-recognition in captive chimpanzees. When most animals encounter their reflections in a mirror, they act as if they have seen another creature. They lash out aggressively, belt out loud calls and display other social behaviours. This is how chimps first acted when Gallup placed a full-length mirror next to their cages. But after a couple of days, their attitudes changed and they started examining themselves, says Gallup. “They’d look at the inside of their mouths; they’d watch their tongue move.” This convinced him that the chimps recognized themselves in the mirror. He knew other scientists would be sceptical, so he developed a test of mirror self-recognition. After chimps started acting as if they saw themselves in the mirror, after about 10 days, he anaesthetized them and applied an odour-free red mark to a location on their faces they could not see, such as above the brow ridge. © 2015 Nature Publishing Group

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 20467 - Posted: 01.10.2015

By ADAM FRANK In the endless public wars between science and religion, Buddhism has mostly been given a pass. The genesis of this cultural tolerance began with the idea, popular in the 1970s, that Buddhism was somehow in harmony with the frontiers of quantum physics. While the silliness of “quantum spirituality” is apparent enough these days, the possibility that Eastern traditions might have something to say to science did not disappear. Instead, a more natural locus for that encounter was found in the study of the mind. Spurred by the Dalai Lama’s remarkable engagement with scientists, interest in Buddhist attitudes toward the study of the mind has grown steadily. But within the Dalai Lama’s cheerful embrace lies a quandary whose resolution could shake either tradition to its core: the true relationship between our material brains and our decidedly nonmaterial minds. More than evolution, more than inexhaustible arguments over God’s existence, the real fault line between science and religion runs through the nature of consciousness. Carefully unpacking that contentious question, and exploring what Buddhism offers its investigation, is the subject of Evan Thompson’s new book, “Waking, Dreaming, Being.” A professor of philosophy at the University of British Columbia, Thompson is in a unique position to take up the challenge. In addition to a career built studying cognitive science’s approach to the mind, he is intimate with the long history of Buddhist and Vedic commentary on the mind too. He also happens to be the son of the maverick cultural historian William Irwin Thompson, whose Lindisfarne Association proposed the “study and realization of a new planetary culture” (a goal that reveals a lot about its strengths and weaknesses). Growing up in this environment, the younger Thompson managed to pick up an enthusiasm for non-Western philosophical traditions and a healthy skepticism for their spiritualist assumptions. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 20430 - Posted: 12.20.2014

By Quassim Cassam Most people wonder at some point in their lives how well they know themselves. Self-knowledge seems a good thing to have, but hard to attain. To know yourself would be to know such things as your deepest thoughts, desires and emotions, your character traits, your values, what makes you happy and why you think and do the things you think and do. These are all examples of what might be called “substantial” self-knowledge, and there was a time when it would have been safe to assume that philosophy had plenty to say about the sources, extent and importance of self-knowledge in this sense. Not any more. With few exceptions, philosophers of self-knowledge nowadays have other concerns. Here’s an example of the sort of thing philosophers worry about: suppose you are wearing socks and believe you are wearing socks. How do you know that that’s what you believe? Notice that the question isn’t: “How do you know you are wearing socks?” but rather “How do you know you believe you are wearing socks?” Knowledge of such beliefs is seen as a form of self-knowledge. Other popular examples of self-knowledge in the philosophical literature include knowing that you are in pain and knowing that you are thinking that water is wet. For many philosophers the challenge is to explain how these types of self-knowledge are possible. This is usually news to non-philosophers. Most people certainly imagine that philosophy tries to answer the Big Questions, and “How do you know you believe you are wearing socks?” doesn’t sound much like one of them. If knowing that you believe you are wearing socks qualifies as self-knowledge at all — and even that isn’t obvious — it is self-knowledge of the most trivial kind. Non-philosophers find it hard to figure out why philosophers would be more interested in trivial than in substantial self-knowledge. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 20402 - Posted: 12.08.2014

By Piercarlo Valdesolo Google “successful Thanksgiving” and you will get a lot of different recommendations. Most you’ve probably heard before: plan ahead, get help, follow certain recipes. But according to new research from Florida State University, enjoying your holiday also requires a key ingredient that few guests consider as they wait to dive face first into the turkey: a belief in free will. What does free will have to do with whether or not Aunt Sally leaves the table in a huff? These researchers argue that belief in free will is essential to experiencing the emotional state that makes Thanksgiving actually about giving thanks: gratitude. Previous research has shown that our level of gratitude for an act depends on three things: 1) the cost to the benefactor (in time, effort or money), 2) the value of the act to the beneficiary, and 3) the sincerity of the benefactor’s intentions. For example, last week my 4-year-old daughter gave me a drawing of our family. This act was costly (she spent time and effort), valuable (I love the way she draws herself bigger than everyone else in the family), and sincere (she drew it because she knew I would like it). But what if I thought that she drew it for a different reason? What if I thought that she was being coerced by my wife? Or if I thought that this was just an assignment at her pre-school? In other words, what if I thought she had no choice but to draw it? I wouldn’t have defiantly thrown it back in her face, but I surely would have felt differently about the sincerity of the action. It would have diminished my gratitude. © 2014 Scientific American

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 11: Emotions, Aggression, and Stress
Link ID: 20360 - Posted: 11.26.2014

By Christof Koch Point to any one organ in the body, and doctors can tell you something about what it does and what happens if that organ is injured by accident or disease or is removed by surgery—whether it be the pituitary gland, the kidney or the inner ear. Yet like the blank spots on maps of Central Africa from the mid-19th century, there are structures whose functions remain unknown despite whole-brain imaging, electroencephalographic recordings that monitor the brain's cacophony of electrical signals and other advanced tools of the 21st century. Consider the claustrum. It is a thin, irregular sheet of cells, tucked below the neocortex, the gray matter that allows us to see, hear, reason, think and remember. It is surrounded on all sides by white matter—the tracts, or wire bundles, that interconnect cortical regions with one another and with other brain regions. The claustra—for there are two of them, one on the left side of the brain and one on the right—lie below the general region of the insular cortex, underneath the temples, just above the ears. They assume a long, thin wisp of a shape that is easily overlooked when inspecting the topography of a brain image. Advanced brain-imaging techniques that look at the white matter fibers coursing to and from the claustrum reveal that it is a neural Grand Central Station. Almost every region of the cortex sends fibers to the claustrum. These connections are reciprocated by other fibers that extend back from the claustrum to the originating cortical region. Neuroanatomical studies in mice and rats reveal a unique asymmetry—each claustrum receives input from both cortical hemispheres but only projects back to the overlying cortex on the same side. Whether or not this is true in people is not known. Curiouser and curiouser, as Alice would have said. © 2014 Scientific American

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 20350 - Posted: 11.24.2014

By CLYDE HABERMAN The notion that a person might embody several personalities, each of them distinct, is hardly new. The ancient Romans had a sense of this and came up with Janus, a two-faced god. In the 1880s, Robert Louis Stevenson wrote “Strange Case of Dr. Jekyll and Mr. Hyde,” a novella that provided us with an enduring metaphor for good and evil corporeally bound. Modern comic books are awash in divided personalities like the Hulk and Two-Face in the Batman series. Even heroic Superman has his alternating personas. But few instances of the phenomenon captured Americans’ collective imagination quite like “Sybil,” the study of a woman said to have had not two, not three (like the troubled figure in the 1950s’ “Three Faces of Eve”), but 16 different personalities. Alters, psychiatrists call them, short for alternates. As a mass-market book published in 1973, “Sybil” sold in the millions. Tens of millions watched a 1976 television movie version. The story had enough juice left in it for still another television film in 2007. Sybil Dorsett, a pseudonym, became the paradigm of a psychiatric diagnosis once known as multiple personality disorder. These days, it goes by a more anodyne label: dissociative identity disorder. Either way, the strange case of the woman whose real name was Shirley Ardell Mason made itself felt in psychiatrists’ offices across the country. Pre-“Sybil,” the diagnosis was rare, with only about 100 cases ever having been reported in medical journals. Less than a decade after “Sybil” made its appearance, in 1980, the American Psychiatric Association formally recognized the disorder, and the numbers soared into the thousands. People went on television to tell the likes of Jerry Springer and Leeza Gibbons about their many alters. One woman insisted that she had more than 300 identities within her (enough, if you will, to fill the rosters of a dozen major-league baseball teams).
Even “Eve,” whose real name is Chris Costner Sizemore, said in the mid-1970s that those famous three faces were surely an undercount. It was more like 22, she said. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 20346 - Posted: 11.24.2014

By Katy Waldman How much control do you have over how much control you think you have? The researchers Michael R. Ent and Roy F. Baumeister have been studying what makes a person more or less likely to believe in free will. Is it a deep connection to the philosophy of David Hume? An abiding faith in divine omnipotence? Try a really, really full bladder. In an online survey, 81 adults ages 18 to 70 reported the extent to which they felt hungry, tired, desirous of sex, and desirous of a toilet. They then rated the extent to which they considered themselves in command of their destinies. People experiencing intense physical needs were less likely to say they believed in free will. People who were not inexplicably taking an online survey while desperately holding in their pee (or starving, or wanting sex, or trying to stay awake) mostly claimed that the universe had handed them the keys to their lives. Also, people who brought their laptops with them into the bathroom to fill out the survey reported that they were God. (I kid on that last part.) Ent and Baumeister also used a survey to take the free will temperature of 23 people with panic disorder, 16 people with epilepsy, and 35 healthy controls. Those suffering from the two conditions—both of which can unpredictably plunge the mind into chaos—tended to put less stock in the notion of mental autonomy. There was a third experiment, too. I said earlier that people not taking an online survey while jonesing for various creature comforts mostly claimed that they wore the metaphysical pants. However, despite robust results for horniness, fatigue, and needing-to-go-ness, Ent and Baumeister didn’t initially see much correlation between people’s philosophical visions and their hunger levels. So they re-administered the survey to 112 new volunteers, some of whom were dieting and some of whom were not. © 2014 The Slate Group LLC.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 13: Homeostasis: Active Regulation of the Internal Environment
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 9: Homeostasis: Active Regulation of the Internal Environment
Link ID: 20294 - Posted: 11.10.2014

By Dwayne Godwin and Jorge Cham © 2014 Scientific American

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 20287 - Posted: 11.08.2014

David DiSalvo @neuronarrative One of the lively debates spawned from the neuroscience revolution has to do with whether humans possess free will, or merely feel as if we do. If we truly possess free will, then we each consciously control our decisions and actions. If we feel as if we possess free will, then our sense of control is a useful illusion—one that neuroscience will increasingly dispel as it gets better at predicting how brain processes yield decisions. For those in the free-will-as-illusion camp, the subjective experience of decision ownership is not unimportant, but it is predicated on neural dynamics that are scientifically knowable, traceable and—in time—predictable. One piece of evidence supporting this position has come from neuroscience research showing that brain activity underlying a given decision occurs before a person consciously apprehends the decision. In other words, thought patterns leading to conscious awareness of what we’re going to do are already in motion before we know we’ll do it. Without conscious knowledge of why we’re choosing as we’re choosing, the argument follows, we cannot claim to be exercising “free” will. Those supporting a purer view of free will argue that whether or not neuroscience can trace brain activity underlying decisions, making the decision still resides within the domain of an individual’s mind. In this view, parsing unconscious and conscious awareness is less important than the ultimate outcome – a decision, and subsequent action, emerging from a single mind. If free will is drained of its power by scientific determinism, free-will supporters argue, then we’re moving down a dangerous path where people can’t be held accountable for their decisions, since those decisions are triggered by neural activity occurring outside of conscious awareness. Consider how this might play out in a courtroom in which neuroscience evidence is marshalled to defend a murderer on grounds that he couldn’t know why he acted as he did.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 20232 - Posted: 10.23.2014

By Smitha Mundasad Health reporter, BBC News Scientists have uncovered hidden signatures in the brains of people in vegetative states that suggest they may have a glimmer of consciousness. Doctors normally consider these patients - who have severe brain injuries - to be unaware of the world around them although they appear awake. Researchers hope their work will help identify those who are actually conscious, but unable to communicate. Their report appears in PLoS Computational Biology. After catastrophic brain injuries, for example due to car crashes or major heart attacks, some people can appear to wake up yet do not respond to events around them. Doctors describe these patients as being in a vegetative state. Patients typically open their eyes and look around, but cannot react to commands or make any purposeful movements. Some people remain in this state for many years. But a handful of recent studies have questioned this diagnosis - suggesting some patients may actually be aware of what is going on around them, but unable to communicate. A team of scientists at Cambridge University studied 13 patients in vegetative states, mapping the electrical activity of their nerves using a mesh of electrodes applied to their scalps. The electrical patterns and connections they recorded were then compared with healthy volunteers. The study reveals four of the 13 patients had an electrical signature that was very similar to those seen in the volunteers. Dr Srivas Chennu, who led the research, said: "This suggests some of the brain networks that support consciousness in healthy adults may be well-preserved in a number of people in persistent vegetative state too." BBC © 2014

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 20217 - Posted: 10.18.2014

Daniel Cressey Mirrors are often used to elicit aggression in animal behavioural studies, on the assumption that creatures unable to recognize themselves will react as if encountering a rival. But research suggests that such work may simply reflect what scientists expect to see, and not actual aggression. For most people, looking in a mirror does not trigger a bout of snarling hostility at the face staring back. But many animals do seem to react aggressively to their mirror image, and for years mirrors have been used to trigger such responses for behavioural research on species ranging from birds to fish. “There’s been a very long history of using a mirror as it’s just so handy,” says Robert Elwood, an animal-behaviour researcher at Queen’s University in Belfast, UK. Using a mirror radically simplifies aggression experiments, cutting down the number of animals required and providing the animal being observed with an ‘opponent’ perfectly matched in size and weight. But in a study just published in Animal Behaviour, Elwood and his team add to evidence that many mirror studies are flawed. The researchers looked at how convict cichlid fish (Amatitlania nigrofasciata) reacted both to mirrors and to real fish of their own species. This species prefers to display its right side in aggression displays, which means that two fish end up alongside each other in a head-to-tail configuration. It is impossible for a fish to achieve this with its own reflection, but Elwood reasoned that fish faced with a mirror would attempt it, and flip from side to side as they tried to present an aggressive display. On the other hand, if the reflection did not trigger an aggressive reaction, the fish would not display such behaviour as frequently. © 2014 Nature Publishing Group

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 11: Emotions, Aggression, and Stress
Link ID: 20202 - Posted: 10.13.2014

By MICHAEL S. A. GRAZIANO OF the three most fundamental scientific questions about the human condition, two have been answered. First, what is our relationship to the rest of the universe? Copernicus answered that one. We’re not at the center. We’re a speck in a large place. Second, what is our relationship to the diversity of life? Darwin answered that one. Biologically speaking, we’re not a special act of creation. We’re a twig on the tree of evolution. Third, what is the relationship between our minds and the physical world? Here, we don’t have a settled answer. We know something about the body and brain, but what about the subjective life inside? Consider that a computer, if hooked up to a camera, can process information about the wavelength of light and determine that grass is green. But we humans also experience the greenness. We have an awareness of information we process. What is this mysterious aspect of ourselves? Many theories have been proposed, but none has passed scientific muster. I believe a major change in our perspective on consciousness may be necessary, a shift from a credulous and egocentric viewpoint to a skeptical and slightly disconcerting one: namely, that we don’t actually have inner feelings in the way most of us think we do. Imagine a group of scholars in the early 17th century, debating the process that purifies white light and rids it of all colors. They’ll never arrive at a scientific answer. Why? Because despite appearances, white is not pure. It’s a mixture of colors of the visible spectrum, as Newton later discovered. The scholars are working with a faulty assumption that comes courtesy of the brain’s visual system. The scientific truth about white (i.e., that it is not pure) differs from how the brain reconstructs it. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 20196 - Posted: 10.11.2014

By Clare Wilson If you’re facing surgery, this may well be your worst nightmare: waking up while under the knife without medical staff realizing. The biggest-ever study of this phenomenon is shedding light on what such an experience feels like and is causing debate about how best to prevent it. For a one-year period starting in 2012, an anesthetist at every hospital in the United Kingdom and Ireland recorded every case in which a patient told a staff member that they had been awake during surgery. Prompted by these reports, the researchers investigated 300 cases, interviewing the patient and doctors involved. One of the most striking findings, says the study’s lead author, Jaideep Pandit of Oxford University Hospitals, was that pain was not generally the worst part of the experience: It was paralysis. For some operations, paralyzing drugs are given to relax muscles and stop reflex movements. “Pain was something they understood, but very few of us have experienced what it’s like to be paralyzed,” Pandit says. “They thought they had been buried alive.” “I thought I was about to die,” says Sandra, who regained consciousness but was unable to move during a dental operation when she was 12 years old. “It felt as though nothing would ever work again — as though the anesthetist had removed everything apart from my soul.”

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 20168 - Posted: 10.07.2014