Links for Keyword: Consciousness
Don’t blame impulsive people for their poor decisions. It’s not necessarily their fault. Impulsivity could result from not having enough time to veto our own actions. At least that is the implication of a twist on a classic experiment on free will. In 1983, neuroscientist Benjamin Libet performed an experiment to test whether we have free will. Participants were asked to voluntarily flex a finger while watching a clock face with a rotating dot. They had to note the position of the dot as soon as they became aware of their intention to act. As they were doing so, Libet recorded their brain activity via EEG electrodes attached to the scalp. He found that a spike in brain activity called the readiness potential, which precedes a voluntary action, occurred about 350 milliseconds before the volunteers became consciously aware of their intention to act. The readiness potential is thought to signal the brain preparing for movement. Libet interpreted his results to mean that free will is an illusion. But we’re not complete slaves to our neurons, he reasoned, as there was a 200-millisecond gap between conscious awareness of our intention and the initiation of movement. Libet argued that this was enough time to consciously veto the action, or exert our “free won’t”. While Libet’s interpretations have remained controversial, this hasn’t stopped scientists carrying out variations of his experiment. Among other things, this has revealed that people with Tourette’s syndrome, who have uncontrollable tics, experience a shorter veto window than people without the condition, as do those with schizophrenia. © Copyright Reed Business Information Ltd.
By Melissa Healy A new study finds that policies on defining brain death vary from hospital to hospital and could result in serious errors. Since 2010, neurologists have had a clear set of standards and procedures to distinguish a brain-dead patient from one who might emerge from an apparent coma. But when profoundly unresponsive patients are rushed to hospitals around the nation, the physicians who make the crucial call are not always steeped in the diagnostic fine points of brain death and the means of identifying it with complete confidence. State laws governing the diagnosis of brain death vary widely. Some states allow any physician to make the diagnosis, while others dictate the level of specialty a physician making the call must have. Some require that a second physician confirm the diagnosis or that a given period of time elapse. Others make no such demands. Given these situations, hospital policies can be invaluable guides for physicians, hospital administrators and patients’ families. In the absence of consistent physician expertise or legal requirements, hospital protocols can translate a scientific consensus into a step-by-step checklist. That would help ensure that no one who is not brain-dead is denied further care or considered a potential organ donor and that the deceased and their families would have every opportunity to donate organs.
By KARL OVE KNAUSGAARD I arrived in Tirana, Albania, on a Sunday evening in late August, on a flight from Istanbul. The sun had set while the plane was midflight, and as we landed in the dark, images of fading light still filled my mind. The man next to me, a young, red-haired American wearing a straw hat, asked me if I knew how to get into town from the airport. I shook my head, put the book I had been reading into my backpack, got up, lifted my suitcase out of the overhead compartment and stood waiting in the aisle for the door up ahead to open. That book was the reason I had come. It was called “Do No Harm,” and it was written by the British neurosurgeon Henry Marsh. His job is to slice into the brain, the most complex structure we know of in the universe, where everything that makes us human is contained, and the contrast between the extremely sophisticated and the extremely primitive — all of that work with knives, drills and saws — fascinated me deeply. I had sent Marsh an email, asking if I might meet him in London to watch him operate. He wrote a cordial reply saying that he seldom worked there now, but he was sure something could be arranged. In passing, he mentioned that he would be operating in Albania in August and in Nepal in September, and I asked hesitantly whether I could join him in Albania. Now I was here. Tense and troubled, I stepped out of the door of the airplane, having no idea what lay ahead. I knew as little about Albania as I did about brain surgery. The air was warm and stagnant, the darkness dense. A bus was waiting with its engine running. Most of the passengers were silent, and the few who chatted with one another spoke a language I didn’t know. It struck me that 25 years ago, when this was among the last remaining Communist states in Europe, I would not have been allowed to enter; then, the country was closed to the outside world, almost like North Korea today. 
Now the immigration officer barely glanced at my passport before stamping it. She dully handed it back to me, and I entered Albania. © 2015 The New York Times Company
Scientists showed that they could alter brain activity of rats and either wake them up or put them in an unconscious state by changing the firing rates of neurons in the central thalamus, a region known to regulate arousal. The study, published in eLife, was partially funded by the National Institutes of Health. “Our results suggest the central thalamus works like a radio dial that tunes the brain to different states of activity and arousal,” said Jin Hyung Lee, Ph.D., assistant professor of neurology, neurosurgery and bioengineering at Stanford University, and a senior author of the study. Located deep inside the brain, the thalamus acts as a relay station sending neural signals from the body to the cortex. Damage to neurons in the central part of the thalamus may lead to problems with sleep, attention, and memory. Previous studies suggested that stimulation of thalamic neurons may awaken patients who have suffered a traumatic brain injury from minimally conscious states. Dr. Lee’s team flashed laser pulses onto light-sensitive central thalamic neurons of sleeping rats, which caused the cells to fire. High-frequency stimulation of 40 or 100 pulses per second woke the rats. In contrast, low-frequency stimulation of 10 pulses per second sent the rats into a state reminiscent of absence seizures that caused them to stiffen and stare before returning to sleep. “This study takes a big step towards understanding the brain circuitry that controls sleep and arousal,” said Yejun (Janet) He, Ph.D., program director at NIH’s National Institute of Neurological Disorders and Stroke (NINDS).
The road map of conscious awareness has been deciphered. Now that we know which brain pathways control whether someone is awake or unconscious, we may be able to rouse people from a vegetative or minimally conscious state. In 2007, researchers used deep brain stimulation to wake a man from a minimally conscious state. “It was quite remarkable,” says Jin Lee at Stanford University in California. The 38-year-old had suffered a severe brain injury in a street mugging six years earlier. Before his treatment he was unable to communicate and had no voluntary control over his limbs. When doctors stimulated his thalamus – a central hub that sends signals all around the brain – his speech and movement gradually returned. However, attempts to treat other people in a similar way have failed. The problem lies with the crudeness of the technique. “Deep brain stimulation is done without much knowledge of how it actually alters the circuits in the brain,” says Lee. The technique involves attaching electrodes to the brain and using them to stimulate the tissue beneath. Unfortunately, the electrodes can also stimulate unintended areas, which means it is hard to work out exactly what is happening in people’s brains. “There are a lot of fibres and different cells in the thalamus and working out what was going on in the brain was very difficult,” says Lee. “So we wanted to figure it out.” © Copyright Reed Business Information Ltd.
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 21695 - Posted: 12.12.2015
By John Horgan How does matter make mind? More specifically, how does a physical object generate subjective experiences like those you are immersed in as you read this sentence? How does stuff become conscious? This is called the mind-body problem, or, by philosopher David Chalmers, the “hard problem.” I expressed doubt that the hard problem can be solved--a position called mysterianism--in The End of Science. I argue in a new edition that my pessimism has been justified by the recent popularity of panpsychism. This ancient doctrine holds that consciousness is a property not just of brains but of all matter, like my table and coffee mug. Panpsychism strikes me as self-evidently foolish, but non-foolish people—notably Chalmers and neuroscientist Christof Koch—are taking it seriously. How can that be? What’s compelling their interest? Have I dismissed panpsychism too hastily? These questions lured me to a two-day workshop on integrated information theory at New York University last month. Conceived by neuroscientist Giulio Tononi (who trained under the late, great Gerald Edelman), IIT is an extremely ambitious theory of consciousness. It applies to all forms of matter, not just brains, and it implies that panpsychism might be true. Koch and others are taking panpsychism seriously because they take IIT seriously. © 2015 Scientific American
By Virginia Morell Was that fish on your plate once a sentient being? Scientists have long believed that fish aren’t capable of the same type of conscious thought we are because they fail the “emotional fever” test. When researchers expose birds, mammals (including humans), and at least one species of lizard to new environments, they experience a slight rise in body temperature of 1°C to 2°C that lasts a while; it’s a true fever, as if they were responding to an infection. The fever is linked to the emotions because it’s triggered by an outside stimulus, yet produces behavioral and physiological changes that can be observed. Some scientists argue that these only occur in animals with sophisticated brains that sense and are conscious of what’s happening to them. Previous tests suggested that toads and fish don’t respond this way. Now, a new experiment that gave the fish more choices shows the opposite. Researchers took 72 zebrafish and either did nothing with them or placed them alone in a small net hanging inside a chamber in their tank with water of about 27°C; zebrafish prefer water of about 28°C. After 15 minutes in the net, the team released the confined fish. They could then freely swim among the tank’s five other chambers, each heated to a different temperature along a gradient from 17.92°C to 35°C. (The previous study used a similar setup but gave goldfish a choice between only two chambers, both at higher temperatures.) The stressed fish spent more time—between 4 and 8 hours—in the warmer waters than did the control fish, and raised their body temperatures about 2°C to 4°C, showing an emotional fever, the scientists report online today in the Proceedings of the Royal Society B. Thus, their study upends a key argument against consciousness in fish, they say. © 2015 American Association for the Advancement of Science.
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 11: Emotions, Aggression, and Stress
Link ID: 21657 - Posted: 11.25.2015
Alva Noë For some time now, I've been skeptical about the neuroscience of consciousness. Not so much because I doubt that consciousness is affected by neural states and processes, but because of the persistent tendency on the part of some neuroscientists to think of consciousness itself as a neural phenomenon. Nothing epitomizes this tendency better than Francis Crick's famous claim — he called it his "astonishing hypothesis" — that you are your brain. At an interdisciplinary conference at Brown not so long ago, I heard a prominent neuroscientist blandly assert, as if voicing well-established scientific fact, that thoughts, feelings and beliefs are specific constellations of matter that are located (as it happens) inside the head. My own view — I laid this out in a book I wrote a few years back called Out of Our Heads — is that the brain is only part of the story, and that we can only begin to understand how the brain makes us conscious by realizing that the brain functions only in the setting of our bodies and our broader environmental (including our social and cultural) situation. The skull is not a magical membrane, my late collaborator, friend and teacher Susan Hurley used to say. And there is no reason to think the processes supporting consciousness are confined to what happens only on one side (the inside) of that boundary. There is a nice interview on the Oxford University Press website with Anil Seth, the editor of a new Oxford journal Neuroscience of Consciousness. It's an informative discussion and makes the valuable point that the study of consciousness is interdisciplinary. © 2015 npr
Doubts are emerging about one of our leading models of consciousness. It seems that brain signals thought to reflect consciousness are also generated during unconscious activity. A decade of studies have lent credence to the global neuronal workspace theory of consciousness, which states that when something is perceived unconsciously, or subliminally, that information is processed locally in the brain. In contrast, conscious perception occurs when the information is broadcast to a “global workspace”, or assemblies of neurons distributed across various brain regions, leading to activity over the entire network. Proponents of this idea, Stanislas Dehaene at France’s national institute for health in Gif-sur-Yvette, and his colleagues, discovered that when volunteers view stimuli that either enter conscious awareness or don’t, their brains show identical EEG activity for the first 270 milliseconds. Then, if perception of the stimuli is subliminal, the brain activity peters out. However, when volunteers become conscious of the stimuli, there is a sudden burst of widespread brain activity 300 ms after the stimulus. This activity is characterised by an EEG signal called P3b, and has been called a neural correlate of consciousness. Brian Silverstein and Michael Snodgrass at the University of Michigan in Ann Arbor, and colleagues wondered if P3b could be detected during unconscious processing of stimuli. © Copyright Reed Business Information Ltd.
Neel V. Patel The concept of the insanity defense dates back to ancient Greece and the Roman Empire. The idea has always been the same: Protect individuals from being held accountable for behavior they couldn’t control. Yet there have been more than a few historical and recent instances of a judge or jury issuing a controversial “by reason of…” verdict. What was intended as a human rights effort has become a last-ditch way to save killers (though it didn’t work for James Holmes). The question that hangs in the air at these sorts of proceedings has always been the same: Is there a way to make determinations more scientific and less traditionally judicial? Adam Shniderman, a criminal justice researcher at Texas Christian University, has been studying the role of neuroscience in the court system for several years now. He explains that neurological data and explanations don’t easily translate into the world of lawyers and legal text. Inverse spoke with Shniderman to learn more about how neuroscience is used in today’s insanity defenses, and whether this is likely to change as the technology used to observe the brain gets better and better. Can you give me a quick overview of how the role of neuroscience in the courts has changed over the years? Especially in the last few decades with new advances in technology. Obviously, [neuroscientific evidence] has become more widely used as brain-scanning technology has gotten better. Some of the scanning technology we use now, like functional MRI that measures blood oxygenation as a proxy for neurological activity, is relatively new within the last 20 years or so. The nature of brain scanning has changed, but the knowledge that the brain influences someone’s actions is not new.
Related chapters from BP7e: Chapter 16: Psychopathology: Biological Basis of Behavior Disorders; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 12: Psychopathology: The Biology of Behavioral Disorders; Chapter 14: Attention and Consciousness
Link ID: 21397 - Posted: 09.11.2015
By James Gallagher Health editor, BBC News website Close your eyes and imagine walking along a sandy beach and then gazing over the horizon as the Sun rises. How clear is the image that springs to mind? Most people can readily conjure images inside their head - known as their mind's eye. But this year scientists have described a condition, aphantasia, in which some people are unable to visualise mental images. Niel Kenmuir, from Lancaster, has always had a blind mind's eye. He knew he was different even in childhood. "My stepfather, when I couldn't sleep, told me to count sheep, and he explained what he meant, I tried to do it and I couldn't," he says. "I couldn't see any sheep jumping over fences, there was nothing to count." Our memories are often tied up in images, think back to a wedding or first day at school. As a result, Niel admits, some aspects of his memory are "terrible", but he is very good at remembering facts. And, like others with aphantasia, he struggles to recognise faces. Yet he does not see aphantasia as a disability, but simply a different way of experiencing life. Take the aphantasia test It is impossible to see what someone else is picturing inside their head. Psychologists use the Vividness of Visual Imagery Questionnaire, which asks you to rate different mental images, to test the strength of the mind's eye. The University of Exeter has developed an abridged version that lets you see how your mind compares. © 2015 BBC.
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 7: Vision: From Eye to Brain
Link ID: 21347 - Posted: 08.27.2015
By Simon Worrall, National Geographic How do we know we exist? What is the self? These are some of the questions science writer Anil Ananthaswamy asks in his thought-provoking new book, The Man Who Wasn’t There: Investigations Into the Strange New Science of the Self. The answers, he says, may lie in medical conditions like Cotard’s syndrome, Alzheimer’s or body integrity identity disorder, which causes some people to try to amputate their own limbs. Speaking from Berkeley, California, he explains why Antarctic explorer Ernest Shackleton fell victim to the doppelgänger effect; how neuroscience is rewriting our ideas about identity; and how a song by George Harrison of the Beatles offers a critique of the Western view of the self. You dedicate the book to “those of us who want to let go but wonder, who is letting go and of what?” Explain that statement. We always hear within popular culture that we have to “let go,” as a way of dealing with certain situations in our lives. And in some sense you have to wonder about that statement because the person or thing doing the letting go is also probably what has to be let go. In the book, I am trying to get behind the whole issue of what the self is that has to do the letting go; and what aspects of the self have to be let go of. You start your book with Alzheimer’s. Tell us about the origin of the condition and what it tells us about “the autobiographical self.” Alzheimer’s is a very severe condition, especially during the mid- to late stages, which starts robbing people of their ability to remember anything that’s happening to them. They also start forgetting the people they are close to. © 1996-2015 National Geographic Society
By NINA STROHMINGER and SHAUN NICHOLS WHEN does the deterioration of your brain rob you of your identity, and when does it not? Alzheimer’s, the neurodegenerative disease that erodes old memories and the ability to form new ones, has a reputation as a ruthless plunderer of selfhood. People with the disease may no longer seem like themselves. Neurodegenerative diseases that target the motor system, like amyotrophic lateral sclerosis, can lead to equally devastating consequences: difficulty moving, walking, speaking and eventually, swallowing and breathing. Yet they do not seem to threaten the fabric of selfhood in quite the same way. Memory, it seems, is central to identity. And indeed, many philosophers and psychologists have supposed as much. This idea is intuitive enough, for what captures our personal trajectory through life better than the vault of our recollections? But maybe this conventional wisdom is wrong. After all, the array of cognitive faculties affected by neurodegenerative diseases is vast: language, emotion, visual processing, personality, intelligence, moral behavior. Perhaps some of these play a role in securing a person’s identity. The challenge in trying to determine what parts of the mind contribute to personal identity is that each neurodegenerative disease can affect many cognitive systems, with the exact constellation of symptoms manifesting differently from one patient to the next. For instance, some Alzheimer’s patients experience only memory loss, whereas others also experience personality change or impaired visual recognition. The only way to tease apart which changes render someone unrecognizable is to compare all such symptoms, across multiple diseases. And that’s just what we did, in a study published this month in Psychological Science. © 2015 The New York Times Company
By John Danaher Discoveries in neuroscience, and the science of behaviour more generally, pose a challenge to the existence of free will. But this all depends on what is meant by ‘free will’. The term means different things to different people. Philosophers focus on two conditions that seem to be necessary for free will: (i) the alternativism condition, according to which having free will requires the ability to do otherwise; and (ii) the sourcehood condition, according to which having free will requires that you (your ‘self’) be the source of your actions. A scientific and deterministic worldview is often said to threaten the first condition. Does it also threaten the second? That is what Christian List and Peter Menzies’ article “My brain made me do it: The exclusion argument against free will and what’s wrong with it” tries to figure out. As you might guess from the title, the authors think that the scientific worldview, in particular the advances in neuroscience, does not necessarily threaten the sourcehood condition. I discussed their main argument in the previous post. To briefly recap, they critiqued an argument from physicalism against free will. According to this argument, the mental states which constitute the self do not cause our behaviour because they are epiphenomenal: they supervene on the physical brain states that do all the causal work. List and Menzies disputed this by appealing to a difference-making account of causation. This allowed for the possibility of mental states causing behaviour (being the ‘difference makers’) even if they were supervenient upon underlying physical states.
By John Danaher Consider the following passage from Ian McEwan’s novel Atonement. It concerns one of the novel’s characters (Briony) as she philosophically reflects on the mystery of human action: She raised one hand and flexed its fingers and wondered, as she had sometimes done before, how this thing, this machine for gripping, this fleshy spider on the end of her arm, came to be hers, entirely at her command. Or did it have some little life of its own? She bent her finger and straightened it. The mystery was in the instant before it moved, the dividing moment between not moving and moving, when her intention took effect. It was like a wave breaking. If she could only find herself at the crest, she thought, she might find the secret of herself, that part of her that was really in charge. Is Briony’s quest forlorn? Will she ever find herself at the crest of the wave? The contemporary scientific understanding of human action seems to cast this into some doubt. A variety of studies in the neuroscience of action paint an increasingly mechanistic and subconscious picture of human behaviour. According to these studies, our behaviour is not the product of our intentions or desires or anything like that. It is the product of our neural networks and systems, a complex soup of electrochemical interactions, oftentimes operating beneath our conscious awareness. In other words, our brains control our actions; our selves (in the philosophically important sense of the word ‘self’) do not. This discovery — that our brains ‘make us do it’ and that ‘we’ don’t — is thought to have a number of significant social implications, particularly for our practices of blame and punishment.
By Neuroskeptic According to British biochemist Donald R. Forsdyke in a new paper in Biological Theory, the existence of people who seem to be missing most of their brain tissue calls into question some of the “cherished assumptions” of neuroscience. I’m not so sure. Forsdyke discusses the disease called hydrocephalus (‘water on the brain’). Some people who suffer from this condition as children are cured thanks to prompt treatment. Remarkably, in some cases, these post-hydrocephalics turn out to have grossly abnormal brain structure: huge swathes of their brain tissue are missing, replaced by fluid. Even more remarkably, in some cases, these people have normal intelligence and display no obvious symptoms, despite their brains being mostly water. This phenomenon was first noted by a British pediatrician called John Lorber. Lorber never published his observations in a scientific journal, although a documentary was made about them. However, his work was famously discussed in Science in 1980 by Lewin in an article called “Is Your Brain Really Necessary?“. There have been a number of other more recent published cases. Forsdyke argues that such cases pose a problem for mainstream neuroscience. If a post-hydrocephalic brain can store the same amount of information as a normal brain, he says, then “brain size does not scale with information quantity”, therefore, “it would seem timely to look anew at possible ways our brains might store their information.”
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 13: Memory, Learning, and Development
Link ID: 21227 - Posted: 07.29.2015
Carl Zimmer Certain people, researchers have discovered, can’t summon up mental images — it’s as if their mind’s eye is blind. This month in the journal Cortex, the condition received a name: aphantasia, based on the Greek word phantasia, which Aristotle used to describe the power that presents visual imagery to our minds. I find research like this irresistible. It coaxes me to think about ways to experience life that are radically different from my own, and it offers clues to how the mind works. And in this instance, I played a small part in the discovery. In 2005, a 65-year-old retired building inspector paid a visit to the neurologist Adam Zeman at the University of Exeter Medical School. After a minor surgical procedure, the man — whom Dr. Zeman and his colleagues refer to as MX — suddenly realized he could no longer conjure images in his mind. Dr. Zeman couldn’t find any description of such a condition in medical literature. But he found MX’s case intriguing. For decades, scientists had debated how the mind’s eye works, and how much we rely on it to store memories and to make plans for the future. MX agreed to a series of examinations. He proved to have a good memory for a man of his age, and he performed well on problem-solving tests. His only unusual mental feature was an inability to see mental images. Dr. Zeman and his colleagues then scanned MX’s brain as he performed certain tasks. First, MX looked at faces of famous people and named them. The scientists found that certain regions of his brain became active, the same ones that become active in other people who look at faces. © 2015 The New York Times Company
The structure of the living cell is defined by the difference between what’s inside and what’s not. Biologists have taken great pains over the years to document the minute workings of the openings in cell membranes that allow hydrogen, sodium, calcium and other ions to make their way inside across the barrier that envelops the cell and its contents. Five scholars of the brain have built upon these observations to suggest that these activities may provide a foundation for a badly needed theory to understand consciousness and some of the cognitive processes that underlie it. They contend that when animal cells open and close themselves to the outside world, these actions can be construed as more than just responses to external stimuli. In fact, they constitute the basis for perception, cognition and movement in the animal kingdom—and may underlie consciousness itself. Read about what the five have to say and then continue to Koch’s reply. The five authors are NYU neurology professor Oliver Sacks; Antonio Damasio and Gil B. Carvalho from the University of Southern California; Norman D. Cook from the faculty of Kansai University in Osaka, Japan; and Harry T. Hunt from Brock University in Ontario. They have framed their ideas in the form of an open letter to Christof Koch, president of the Allen Institute for Brain Science, a Scientific American MIND columnist (Consciousness Redux), and a member of Scientific American’s board of advisers.
By Sandra G. Boodman When B. Paul Turpin was admitted to a Tennessee hospital in January, the biggest concern was whether the 69-year-old endocrinologist would survive. But as he battled a life-threatening infection, Turpin developed terrifying hallucinations, including one in which he was performing on a stage soaked with blood. Doctors tried to quell his delusions with increasingly large doses of sedatives, which only made him more disoriented. Nearly five months later, Turpin’s infection has been routed, but his life is upended. Delirious and too weak to go home after his hospital discharge, he spent months in a rehab center, where he fell twice, once hitting his head. Until recently he did not remember where he lived and believed he had been in a car wreck. “I tell him it’s more like a train wreck,” said his wife, Marylou Turpin. “They kept telling me in the hospital, ‘Everybody does this,’ and that his confusion would disappear,” she said. Instead, her once astute husband has had great difficulty “getting past the scramble.” Turpin’s experience illustrates the consequences of delirium, a sudden disruption of consciousness and cognition marked by vivid hallucinations, delusions and an inability to focus that affects 7 million hospitalized Americans annually. The disorder can occur at any age — it has been seen in preschoolers — but disproportionately affects people older than 65 and is often misdiagnosed as dementia. While delirium and dementia can coexist, they are distinctly different illnesses. Dementia develops gradually and worsens progressively, while delirium occurs suddenly and typically fluctuates during the course of a day. Some patients with delirium are agitated and combative, while others are lethargic and inattentive.
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Link ID: 21010 - Posted: 06.02.2015
by Helen Thomson Imagine a world where you think of something and it happens. For instance, what if the moment you realise you want a cup of tea, the kettle starts boiling? That reality is on the cards, now that a brain implant has been developed that can decode a person's intentions. It has already allowed a man paralysed from the neck down to control a robotic arm with unprecedented fluidity. But the implications go far beyond prosthetics. By placing an implant in the area of the brain responsible for intentions, scientists are investigating whether brain activity can give away future decisions – before a person is even aware of making them. Such a result may even alter our understanding of free will. Fluid movement "These are exciting times," says Pedro Lopes, who works at the human-computer interaction lab at Hasso Plattner Institute in Potsdam, Germany. "These developments give us a glimpse of an exciting future where devices will understand our intentions as a means of adapting to our plans." The implant was designed for Erik Sorto, who was left unable to move his limbs after a spinal cord injury 12 years ago. The idea was to give him the ability to move a stand-alone robotic arm by recording the activity in his posterior parietal cortex – a part of the brain used in planning movements. "We thought this would allow us to decode brain activity associated with the overall goal of a movement – for example, 'I want to pick up that cup'," Richard Andersen at the California Institute of Technology in Pasadena told delegates at the NeuroGaming Conference in San Francisco earlier this month. © Copyright Reed Business Information Ltd
Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 5: The Sensorimotor System
Link ID: 20992 - Posted: 05.28.2015