Links for Keyword: Consciousness




Subhash Kak

Many advanced artificial intelligence projects say they are working toward building a conscious machine, based on the idea that brain functions merely encode and process multisensory information. The assumption, then, is that once brain functions are properly understood, it should be possible to program them into a computer. Microsoft recently announced that it would spend US$1 billion on a project to do just that. So far, though, attempts to build supercomputer brains have not even come close. A multi-billion-dollar European project that began in 2013 is now largely understood to have failed. That effort has shifted to look more like a similar but less ambitious project in the U.S., developing new software tools for researchers to study brain data, rather than simulating a brain.

Some researchers continue to insist that simulating neuroscience with computers is the way to go. Others, like me, view these efforts as doomed to failure because we do not believe consciousness is computable. Our basic argument is that brains integrate and compress multiple components of an experience, including sight and smell – which simply can’t be handled in the way today’s computers sense, process and store data.

Brains don’t operate like computers

Living organisms store experiences in their brains by adapting neural connections in an active process between the subject and the environment. By contrast, a computer records data in short-term and long-term memory blocks. That difference means the brain’s information handling must also be different from how computers work. © 2010–2019, The Conversation US, Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26714 - Posted: 10.17.2019

Patricia Churchland

Three myths about morality remain alluring: only humans act on moral emotions, moral precepts are divine in origin, and learning to behave morally goes against our thoroughly selfish nature. Converging data from many sciences, including ethology, anthropology, genetics, and neuroscience, have challenged all three of these myths.

First, self-sacrifice, given the pressing needs of close kin or conspecifics to whom they are attached, has been documented in many mammalian species—wolves, marmosets, dolphins, and even rodents. Birds display it too. In sharp contrast, reptiles show no hint of this impulse. Second, until very recently, hominins lived in small groups with robust social practices fostering well-being and survival in a wide range of ecologies. The idea of a divine lawgiver likely played no part in their moral practices for some two million years, emerging only with the advent of agriculture and larger communities where not everyone knew everyone else. The divine lawgiver idea is still absent from some large-scale religions, such as Confucianism and Buddhism. Third, it is part of our genetic heritage to care for kith and kin. Although self-sacrifice is common in termites and bees, the altruistic behavior of mammals and birds is vastly more flexible, variable, and farsighted. Attachment to others, mediated by powerful brain hormones, is the biological platform for morality. © 1986–2019 The Scientist.

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 11: Emotions, Aggression, and Stress
Link ID: 26678 - Posted: 10.08.2019

By Benedict Carey

For more than a decade, doctors have been using brain-stimulating implants to treat severe depression in people who do not benefit from medication, talk therapy or electroshock sessions. The treatment is controversial — any psychosurgery is, given its checkered history — and the results have been mixed. Two major trials testing stimulating implants for depression were halted because of disappointing results, and the approach is not approved by federal health regulators.

Now, a team of psychiatric researchers has published the first long-term results, reporting Friday on patients who had stimulating electrodes implanted as long ago as eight years. The individuals have generally fared well, maintaining their initial improvements. The study, appearing in the American Journal of Psychiatry, was small, with just 28 subjects. Even so, experts said the findings were likely to extend interest in a field that has struggled. “The most impressive thing here is the sustained response,” Dr. Darin Dougherty, director of neurotherapeutics at Massachusetts General Hospital, said. “You do not see that for anything in this severe depression. The fact that they had this many people doing well for that long, that’s a big deal.”

The implant treatment is known as deep brain stimulation, or D.B.S., and doctors have performed it for decades to help people control the tremors of Parkinson’s disease. In treating depression, surgeons thread an electrode into an area of the brain that sits beneath the crown of the head and is known to be especially active in people with severe depression. Running electrical current into that region, known as Brodmann Area 25, effectively shuts down its activity, resulting in relief of depression symptoms in many patients. The electrode is connected to a battery that is embedded in the chest. The procedure involves a single surgery; the implant provides continuous current from then on. © 2019 The New York Times Company

Related chapters from BN: Chapter 16: Psychopathology: Biological Basis of Behavior Disorders; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM: Chapter 12: Psychopathology: The Biology of Behavioral Disorders; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 26673 - Posted: 10.04.2019

Alison Abbott

A prominent German neuroscientist committed scientific misconduct in research in which he claimed to have developed a brain-monitoring technique able to read certain thoughts of paralysed people, Germany’s main research agency, the DFG, has found. The DFG’s investigation into Niels Birbaumer’s high-profile work found that data in two papers were incomplete and that the scientific analysis was flawed — although it did not comment on whether the approach was valid. In a 19 September statement, the agency, which funded some of the work, said it was imposing some of its most severe sanctions on Birbaumer, who has positions at the University of Tübingen in Germany and the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland. The DFG has banned Birbaumer from applying for its grants and from serving as a DFG evaluator for five years. The agency has also recommended the retraction of the two papers published in PLoS Biology, and says that it will ask him to return the grant money that he used to generate the data underpinning the papers.

“The DFG has found scientific misconduct on my part and has imposed sanctions. I must therefore accept that I was unable to refute the allegations made against me,” Birbaumer said in a statement e-mailed to Nature in response to the DFG’s findings. In a subsequent phone conversation with Nature, Birbaumer added that he could not comment further on the findings because the DFG has not yet provided him with specific details on the reasoning behind the decisions. Birbaumer says he stands by his studies, which, he says, “show that it is possible to communicate with patients who are completely paralysed, through computer-based analysis of blood flow and brain currents”. © 2019 Springer Nature Limited

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 1: Cells and Structures: The Anatomy of the Nervous System
Link ID: 26636 - Posted: 09.23.2019

By Kenneth Shinozuka

What is consciousness? In one sense, this is one of the greatest mysteries in the universe; yet in another, it’s not an enigma at all. If we define consciousness as the feeling of what it’s like to subjectively experience something, then there is nothing more deeply familiar. Most of us know what it’s like to feel the pain of a headache, to empathize with another human being, to see the color blue, to hear the soaring melodies of a symphony, and so on. In fact, as philosopher Galen Strawson insightfully pointed out in a New York Times opinion piece, consciousness is “the only thing in the universe whose ultimate intrinsic nature we can claim to know.”

This is a crucial point. We don’t have direct access to the outer world. Instead we experience it through the filter of our consciousness. We have no idea what the color blue really looks like “out there,” only how it appears to us “in here.” Furthermore, as some cognitive scientists like Donald Hoffman have argued in recent years, external reality is likely to be far different from our perceptions of it. The human brain has been optimized, through the process of evolution, to model reality in the way that’s most conducive to its survival, not in the way that most faithfully represents the world. Science has produced an outstandingly accurate description of the outer world, but it has told us very little, if anything, about our internal consciousness. With sufficient knowledge of physics, I can calculate all the forces acting on the chair in front of me, but I don’t know what “forces” or “laws” are giving rise to my subjective experience of the chair. © 2019 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26607 - Posted: 09.13.2019

Bahar Gholipour

The death of free will began with thousands of finger taps. In 1964, two German scientists monitored the electrical activity of a dozen people’s brains. Each day for several months, volunteers came into the scientists’ lab at the University of Freiburg to get wires fixed to their scalp from a showerhead-like contraption overhead. The participants sat in a chair, tucked neatly in a metal tollbooth, with only one task: to flex a finger on their right hand at whatever irregular intervals pleased them, over and over, up to 500 times a visit.

The purpose of this experiment was to search for signals in the participants’ brains that preceded each finger tap. At the time, researchers knew how to measure brain activity that occurred in response to events out in the world—when a person hears a song, for instance, or looks at a photograph—but no one had figured out how to isolate the signs of someone’s brain actually initiating an action.

The experiment’s results came in squiggly, dotted lines, a representation of changing brain waves. In the milliseconds leading up to the finger taps, the lines showed an almost undetectably faint uptick: a wave that rose for about a second, like a drumroll of firing neurons, then ended in an abrupt crash. This flurry of neuronal activity, which the scientists called the Bereitschaftspotential, or readiness potential, was like a gift of infinitesimal time travel. For the first time, they could see the brain readying itself to create a voluntary movement.
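
The key analytical move in that experiment was back-averaging: aligning many EEG epochs to the moment of movement and averaging them, so that random background activity cancels while any consistent pre-movement signal survives. Below is a minimal sketch of that idea on synthetic data; the sampling rate, amplitudes, and ramp shape are invented for illustration and are not taken from the 1964 study.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                    # sampling rate in Hz (assumed)
n_taps = 500                                # self-paced finger taps, as in the study
t = np.arange(-2 * fs, int(0.5 * fs)) / fs  # epoch from 2 s before to 0.5 s after each tap

# Synthetic single-trial EEG: broadband noise plus a faint negative ramp that
# builds over the final second before movement onset (t = 0).
ramp = np.where((t >= -1.0) & (t < 0), (1.0 + t) * -5e-6, 0.0)  # volts
trials = rng.normal(0.0, 20e-6, size=(n_taps, t.size)) + ramp

# Back-averaging: time-lock every epoch to its tap and average across trials.
# Noise shrinks roughly as 1/sqrt(n_taps); the readiness potential remains.
rp = trials.mean(axis=0)

pre = rp[np.argmin(np.abs(t - (-0.01)))]    # just before movement
base = rp[0]                                # 2 s before movement
print(f"amplitude just before movement: {pre * 1e6:+.1f} uV")
print(f"amplitude 2 s before movement:  {base * 1e6:+.1f} uV")
```

The ramp is invisible in any single trial, which is why the finding required hundreds of taps per volunteer.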

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26602 - Posted: 09.11.2019

By Tam Hunt

How do you know your dog is conscious? Well, she wags her tail when she’s happy, bounces around like a young human child when excited, and yawns when sleepy—among many other examples of behaviors that convince us (most of us, at least) that dogs are quite conscious in ways that are similar to, but not the same as, human consciousness. Most of us are okay attributing emotions, desires, pain and pleasure—which is what I mean by consciousness in this context—to dogs and many other pets. What about further down the chain? Is a mouse conscious? We can apply similar tests for “behavioral correlates of consciousness” like those I’ve just mentioned, but, for some of us, the behaviors observed will be considerably less convincing than for dogs as evidence of an inner life for the average mouse. What about an ant? What behaviors do ants engage in that might make us think an individual ant is at least a little bit conscious? Or is it not conscious at all?

Let me now turn the questions around: how do I know you, my dear reader, are conscious? If we met, I’d probably introduce myself and hear you say your name and respond to my questions and various small talk. You might be happy to meet me and smile or shake my hand vigorously. Or you might get a little anxious at meeting someone new and behave awkwardly. All of these behaviors would convince me that you are in fact conscious much like I am, and not just faking it! Now here’s the broader question: how can we know anybody or any animal or any thing is actually conscious and not just faking it? The nature of consciousness makes it by necessity a wholly private affair. The only consciousness I can know with certainty is my own. Everything else is inference. © 2019 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26557 - Posted: 08.31.2019

By Anil K. Seth

On the 10th of April this year Pope Francis, President Salva Kiir of South Sudan and former rebel leader Riek Machar sat down together for dinner at the Vatican. They ate in silence, the start of a two-day retreat aimed at reconciliation from a civil war that has killed some 400,000 people since 2013. At about the same time in my laboratory at the University of Sussex in England, Ph.D. student Alberto Mariola was putting the finishing touches to a new experiment in which volunteers experience being in a room that they believe is there but that is not. In psychiatry clinics across the globe, people arrive complaining that things no longer seem “real” to them, whether it is the world around them or their own selves. In the fractured societies in which we live, what is real—and what is not—seems to be increasingly up for grabs. Warring sides may experience and believe in different realities. Perhaps eating together in silence can help because it offers a small slice of reality that can be agreed on, a stable platform on which to build further understanding.

We need not look to war and psychosis to find radically different inner universes. In 2015 a badly exposed photograph of a dress tore across the Internet, dividing the world into those who saw it as blue and black (me included) and those who saw it as white and gold (half my lab). Those who saw it one way were so convinced they were right—that the dress truly was blue and black or white and gold—that they found it almost impossible to believe that others might perceive it differently.

We all know that our perceptual systems are easy to fool. The popularity of visual illusions is testament to this phenomenon. Things seem to be one way, and they are revealed to be another: two lines appear to be different lengths, but when measured they are exactly the same; we see movement in an image we know to be still. The story usually told about illusions is that they exploit quirks in the circuitry of perception, so that what we perceive deviates from what is there. Implicit in this story, however, is the assumption that a properly functioning perceptual system will render to our consciousness things precisely as they are. © 2019 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 7: Vision: From Eye to Brain
Link ID: 26549 - Posted: 08.29.2019

By John Horgan

At the beginning of my book Mind-Body Problems, I describe one of my earliest childhood memories: I am walking near a river on a hot summer day. My left hand grips a fishing rod, my right a can of worms. One friend walks in front of me, another behind. We’re headed to a spot on the river where we can catch perch, bullheads and large-mouth bass. Weeds bordering the path block my view of the river, but I can smell its dank breath and feel its chill on my skin. The seething of cicadas builds to a crescendo. I stop short. I’m me, I say. My friends don’t react, so I say, louder, I’m me. The friend before me glances over his shoulder and keeps walking, the friend behind pushes me. I resume walking, still thinking, I’m me, I’m me. I feel lonely, scared, exhilarated, bewildered.

That moment was when I first became self-conscious, aware of myself as something weird, distinct from the rest of the world, demanding explanation. Or so I came to believe when I recalled the incident in subsequent decades. I never really talked about it, because it was hard to describe. It meant a lot to me, but I doubted it would mean much to anyone else. Then I learned that others have had similar experiences. One is Rebecca Goldstein, the philosopher and novelist, whom I profiled in Mind-Body Problems. Before interviewing Goldstein, I read her novel 36 Arguments for the Existence of God, and I came upon a passage in which the hero, Cass, a psychologist, recalls a recurrent “metaphysical seizure” or “vertigo” that struck him in childhood. Lying in bed, he was overcome by the improbability that he was just himself and no one else. “The more he tried to get a fix on the fact of being Cass here,” Goldstein writes, “the more the whole idea of it just got away from him.” Even as an adult, Cass kept asking himself, “How can it be that, of all things, one is this thing, so that one can say, astonishingly, ‘Here I am’”? © 2019 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 4: Development of the Brain
Link ID: 26510 - Posted: 08.17.2019

Meredith Fore

A long-standing controversy in neuroscience centers on a simple question: How do neurons in the brain share information? Sure, it’s well-known that neurons are wired together by synapses and that when one of them fires, it sends an electrical signal to other neurons connected to it. But that simple model leaves a lot of unanswered questions—for example, where exactly in neurons’ firing is information stored? Resolving these questions could help us understand the physical nature of a thought.

Two theories attempt to explain how neurons encode information: the rate code model and the temporal code model. In the rate code model, the rate at which neurons fire is the key feature. Count the number of spikes in a certain time interval, and that number gives you the information. In the temporal code model, the relative timing between firings matters more—information is stored in the specific pattern of intervals between spikes, vaguely like Morse code.

But the temporal code model faces a difficult question: If a gap is "longer" or "shorter," it has to be longer or shorter relative to something. For the temporal code model to work, the brain needs to have a kind of metronome, a steady beat to allow the gaps between firings to hold meaning. Every computer has an internal clock to synchronize its activities across different chips. If the temporal code model is right, the brain should have something similar. Some neuroscientists posit that the clock is in the gamma rhythm, a semiregular oscillation of brain waves. But it doesn’t stay consistent. It can speed up or slow down depending on what a person experiences, such as a bright light. Such a fickle clock didn't seem like the full story for how neurons synchronize their signals, leading to ardent disagreements in the field about whether the gamma rhythm meant anything at all. © 2018 Condé Nast.
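
Concretely, the two models read the same spike train in different ways: a rate code reduces it to one number (spikes per unit of time), while a temporal code reads the pattern of gaps between spikes, which only means something relative to a reference clock such as the gamma rhythm. A toy sketch of the distinction; the spike times and the 40 Hz "clock" period are made up for illustration.

```python
import numpy as np

# Spike timestamps (seconds) for one neuron over a 1-second window (made up).
spikes = np.array([0.050, 0.120, 0.135, 0.300, 0.310, 0.320, 0.700, 0.950])

# Rate code: only the count in the window matters; the pattern is irrelevant.
window = 1.0
rate = len(spikes) / window                # spikes per second

# Temporal code: information lives in the gaps between spikes. But "long" and
# "short" gaps only mean something relative to a reference clock; here we use
# one cycle of a hypothetical 40 Hz gamma rhythm as that metronome.
isis = np.diff(spikes)                     # inter-spike intervals
gamma_period = 1.0 / 40.0                  # 0.025 s
symbols = ["long" if isi > gamma_period else "short" for isi in isis]

print(f"rate code reads:     {rate:.0f} spikes/s")
print(f"temporal code reads: {symbols}")
```

The rate readout collapses the train to "8 spikes/s," while the temporal readout yields a Morse-like sequence of long and short gaps; if the reference rhythm drifts, that sequence changes meaning, which is exactly the problem the article describes.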

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 14: Biological Rhythms, Sleep, and Dreaming
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 10: Biological Rhythms and Sleep
Link ID: 26494 - Posted: 08.13.2019

Hope Reese

Patricia S. Churchland is a key figure in the field of neurophilosophy, which employs a multidisciplinary lens to examine how neurobiology contributes to philosophical and ethical thinking. In her new book, “Conscience: The Origins of Moral Intuition,” Churchland makes the case that neuroscience, evolution, and biology are essential to understanding moral decision-making and how we behave in social environments. In the past, “philosophers thought it was impossible that neuroscience would ever be able to tell us anything about the nature of the self, or the nature of decision-making,” the author says.

The way we reach moral conclusions, Churchland asserts, has a lot more to do with our neural circuitry than we realize. We are fundamentally hard-wired to form attachments, for instance, which greatly influence our moral decision-making. Also, our brains are constantly using reinforcement learning — observing consequences after specific actions and adjusting our behavior accordingly. Churchland, who teaches philosophy at the University of California, San Diego, also presents research showing that our individual neuro-architecture is heavily influenced by genetics: political attitudes, for instance, are 40 to 50 percent heritable, recent scientific studies suggest. Copyright 2019 Undark

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 11: Emotions, Aggression, and Stress
Link ID: 26480 - Posted: 08.02.2019

By Christof Koch

“Consciousness” refers to any subjective experience — the delectable taste of Nutella, the sharp sting of an infected tooth, the slow passage of time when bored, the sense of vitality and anxiety just before a competitive event. In the words of the philosopher Thomas Nagel, consciousness exists in a human or other subject whenever “there is something it is like to be that organism.” The concept has inspired countless philosophical theories since antiquity and much laboratory work over the past century, but it has also given rise to many misunderstandings.

Myth No. 1: Humans have a unique brain.

There’s a long history of scientists thinking they have identified a particular feature to explain our advanced consciousness (and planetary dominance). In a popular TED talk, the neuroscientist Suzana Herculano-Houzel argues that the human brain’s distinctiveness lies in the huge number of neurons that make up the outermost layer of the organ, the cerebral cortex (or neocortex): 16 billion, out of some 86 billion total neurons. “That’s the simplest explanation for our remarkable cognitive abilities,” she says. Other suppositions have included special brain regions or nerve cells found only (or primarily) in humans — spindle or von Economo neurons, for example. Or perhaps the human brain consumes more calories than the brains of other species? After close to two centuries of brain research, however, no single feature of the human brain truly stands out. We certainly do not possess the largest brain — elephants and whales trounce us. Recent research has revealed that pilot whales, a type of dolphin, have 37 billion cortical neurons, undermining Herculano-Houzel’s hypothesis. And researchers have found that whales, elephants and other large-brained animals (not just great apes and humans) also have von Economo neurons. New research shows that humans and mice have about the same number of categories of brain cells. The fact is, there is no simple brain-centric explanation for why humans sit atop the cognitive hill. © 1996-2019 The Washington Post

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26467 - Posted: 07.30.2019

By Susana Martinez-Conde

Is your mind—every sensation, feeling, and memory you’ve ever had—completely traceable to your brain? If so, does it mean that you are a mere machine, and that all meaning and purpose are illusory? About a year ago, I joined Raymond Tallis, author of Aping Mankind, and Markus Gabriel, German philosopher and author of I Am Not a Brain, to discuss these issues at the How the Light Gets In Festival, hosted by the Institute of Art and Ideas. The video of the debate, which you can watch below, went live last month. My co-panelists and I were tasked to start the debate with short pitches stating our positions on whether our minds and consciousness are no more than matter and mechanism. Specifically, we were charged with answering three questions at the outset, in 4 minutes or less: Are our minds just our brains? Has neuroscience led philosophy astray? Do we need to create new concepts, or abandon old ones, to understand why we feel a sense of meaning? The script that I prepared to address them follows below—but make sure to check out the full video for alternative views, and the discussion that ensued!

A lot of the research we do in my lab focuses on understanding the neural bases of illusory perception. About 10 years ago, this led to my becoming interested in the neuroscience of stage magic, and beginning a research program about why magic works in the brain. Along the way, I decided to take magic lessons myself, to get a better understanding of magic: not only as a scientist looking in from the outside, but from the perspective of the magician. This was not only a good research investment, but also a whole lot of fun. But when I tell people about it, a question I often get is: do I still enjoy magic shows, or do they now feel mundane and unmagical? I always answer that I now enjoy magic a lot more than before I started studying it. © 2019 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26396 - Posted: 07.08.2019

By Matthew Shaer

A few years ago, a scientist named Nenad Sestan began throwing around an idea for an experiment so obviously insane, so “wild” and “totally out there,” as he put it to me recently, that at first he told almost no one about it: not his wife or kids, not his bosses in Yale’s neuroscience department, not the dean of the university’s medical school. Like everything Sestan studies, the idea centered on the mammalian brain. More specifically, it centered on the tree-shaped neurons that govern speech, motor function and thought — the cells, in short, that make us who we are.

In the course of his research, Sestan, an expert in developmental neurobiology, regularly ordered slices of animal and human brain tissue from various brain banks, which shipped the specimens to Yale in coolers full of ice. Sometimes the tissue arrived within three or four hours of the donor’s death. Sometimes it took more than a day. Still, Sestan and his team were able to culture, or grow, active cells from that tissue — tissue that was, for all practical purposes, entirely dead. In the right circumstances, they could actually keep the cells alive for several weeks at a stretch.

When I met with Sestan this spring, at his lab in New Haven, he took great care to stress that he was far from the only scientist to have noticed the phenomenon. “Lots of people knew this,” he said. “Lots and lots.” And yet he seems to have been one of the few to take these findings and push them forward: If you could restore activity to individual post-mortem brain cells, he reasoned to himself, what was to stop you from restoring activity to entire slices of post-mortem brain? © 2019 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26380 - Posted: 07.02.2019

By Max Bertolero and Danielle S. Bassett

Networks pervade our lives. Every day we use intricate networks of roads, railways, maritime routes and skyways traversed by commercial flights. They exist even beyond our immediate experience. Think of the World Wide Web, the power grid and the universe, of which the Milky Way is an infinitesimal node in a seemingly boundless network of galaxies. Few such systems of interacting connections, however, match the complexity of the one underneath our skull.

Neuroscience has gained a higher profile in recent years, as many people have grown familiar with splashily colored images that show brain regions “lighting up” during a mental task. There is, for instance, the temporal lobe, the area by your ear, which is involved with memory, and the occipital lobe at the back of your head, which dedicates itself to vision. What has been missing from this account of human brain function is how all these distinct regions interact to give rise to who we are. Our laboratory and others have borrowed a language from a branch of mathematics called graph theory that allows us to parse, probe and predict complex interactions of the brain that bridge the seemingly vast gap between frenzied neural electrical activity and an array of cognitive tasks—sensing, remembering, making decisions, learning a new skill and initiating movement.

This new field of network neuroscience builds on and reinforces the idea that certain regions of the brain carry out defined activities. In the most fundamental sense, what the brain is—and thus who we are as conscious beings—is, in fact, defined by a sprawling network of 100 billion neurons with at least 100 trillion connecting points, or synapses. © 2019 Scientific American
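
In graph-theoretic terms, brain regions become nodes, measured interactions become edges, and standard graph statistics then describe hubs and communities. The sketch below shows the flavor of such an analysis using the networkx library; the region names and connections are invented for illustration, not real connectivity data.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy brain network: nodes are regions, edges are measured interactions.
G = nx.Graph()
G.add_edges_from([
    ("V1", "V2"), ("V2", "MT"), ("V1", "MT"),      # a visual cluster
    ("HC", "EC"), ("EC", "PRC"), ("HC", "PRC"),    # a memory cluster
    ("PFC", "MT"), ("PFC", "HC"),                  # a frontal node bridging both
])

# Hubs: regions with disproportionately many connections.
centrality = nx.degree_centrality(G)
hub = max(centrality, key=centrality.get)

# Modules: groups of regions more densely connected to each other
# than to the rest of the network.
modules = greedy_modularity_communities(G)

print(f"most connected region: {hub}")
print("modules:", [sorted(m) for m in modules])
```

Even this toy graph recovers the two clusters and flags the bridging regions, which is the same logic network neuroscience applies, at vastly larger scale, to connectivity estimated from brain imaging.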

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26379 - Posted: 07.02.2019

Tam Hunt

How can you know that any animal, other human beings, or anything that seems conscious, isn’t just faking it? Does it enjoy an internal subjective experience, complete with sensations and emotions like hunger, joy, or sadness? After all, the only consciousness you can know with certainty is your own. Everything else is inference. The nature of consciousness makes it by necessity a wholly private affair.

These questions are more than philosophical. As intelligent digital assistants, self-driving cars and other robots start to proliferate, are these AIs actually conscious or do they just seem like it? Or what about patients in comas – how can doctors know with any certainty what kind of consciousness is or is not present, and prescribe treatment accordingly? In my work, often with psychologist Jonathan Schooler at the University of California, Santa Barbara, we’re developing a framework for thinking about the many different ways to possibly test for the presence of consciousness. There is a small but growing field looking at how to assess the presence and even quantity of consciousness in various entities. I’ve divided possible tests into three broad categories that I call the measurable correlates of consciousness.

First, you can look for brain activity that occurs at the same time as reported subjective states. Second, you can look for physical actions that seem to be accompanied by subjective states. Finally, you can look for the products of consciousness, like artwork or music, or this article I’ve written, that can be separated from the entity that created them to infer the presence – or not – of consciousness. © 2010–2019, The Conversation US, Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26378 - Posted: 07.02.2019

By Benedict Carey

Doctors have known for years that some patients who become unresponsive after a severe brain injury nonetheless retain a “covert consciousness,” a degree of cognitive function that is important to recovery but is not detectable by standard bedside exams. As a result, a profound uncertainty often haunts the wrenching decisions that families must make when an unresponsive loved one needs life support, an uncertainty that also amplifies national debates over how to determine when a patient in this condition can be declared beyond help.

Now, scientists report the first large-scale demonstration of an approach that can identify this hidden brain function right after injury, using specialized computer analysis of routine EEG recordings from the skull. The new study, published Wednesday in the New England Journal of Medicine, found that 15 percent of otherwise unresponsive patients in one intensive care unit had covert brain activity in the days after injury. Moreover, these patients were nearly four times more likely to achieve partial independence over the next year with rehabilitation, compared to patients with no activity.

The EEG approach will not be widely available for some time, due in part to the technical expertise required, which most I.C.U.’s don’t yet have. And doctors said the test would not likely resolve the kind of high-profile cases that have taken on religious and political dimensions, like that of Terri Schiavo, the Florida woman whose condition touched off an ethical debate in the mid-2000s, or Karen Ann Quinlan, a New Jersey woman whose case stirred similar sentiments in the 1970s. Those debates centered less on recovery than on the definition of life and the right to die; the new analysis presumes some resting level of EEG, and that signal in both women was virtually flat. © 2019 The New York Times Company
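
The general logic behind this kind of analysis is to give an unresponsive patient alternating spoken commands (for example, to keep moving a hand versus to stop) and test whether a classifier can tell the two conditions apart from the EEG better than chance. Below is a minimal, generic sketch of that logic with scikit-learn, on synthetic features standing in for EEG band power; it illustrates the recipe, not the study's actual pipeline, and every number in it is invented.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for EEG features (e.g., band power per channel per trial),
# recorded while the patient hears "keep moving your hand" vs. "stop moving".
n_trials, n_features = 80, 16
X_move = rng.normal(0.3, 1.0, size=(n_trials, n_features))  # small shift = covert response
X_rest = rng.normal(0.0, 1.0, size=(n_trials, n_features))
X = np.vstack([X_move, X_rest])
y = np.array([1] * n_trials + [0] * n_trials)

# Cross-validated decoding: accuracy reliably above chance (50%) suggests the
# brain distinguishes the two commands even though the body never moves.
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```

The clinical inference rests entirely on that above-chance margin, which is why the approach demands careful statistics and the technical expertise the article mentions.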

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 1: Cells and Structures: The Anatomy of the Nervous System
Link ID: 26363 - Posted: 06.27.2019

Paul Gabrielsen

For everyone who's looked into an infant's sparkling eyes and wondered what goes on in its little fuzzy head, there's now an answer. New research shows that babies display glimmers of consciousness and memory as early as 5 months old. For decades, neuroscientists have been searching for an unmistakable signal of consciousness in electrical brain activity. Such a sign could determine whether minimally conscious or anesthetized adults are aware—and when consciousness begins in babies.

Studies on adults show a particular pattern of brain activity: When your senses detect something, such as a moving object, the vision center of your brain activates, even if the object goes by too fast for you to notice. But if the object remains in your visual field for long enough, the signal travels from the back of the brain to the prefrontal cortex, which holds the image in your mind long enough for you to notice. Scientists see a spike in brain activity when the senses pick something up, and another signal, the "late slow wave," when the prefrontal cortex gets the message. The whole process takes less than one-third of a second.

Researchers in France wondered if such a two-step pattern might be present in infants. The team monitored infants' brain activity through caps fitted with electrodes. More than 240 babies participated, but two-thirds were too squirmy for the movement-sensitive caps. The remaining 80 (ages 5 months, 12 months, or 15 months) were shown a picture of a face on a screen for a fraction of a second. © 2018 Condé Nast

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 4: Development of the Brain
Link ID: 26338 - Posted: 06.19.2019

By John Horgan

I can live without God, but I need free will. Without free will, life makes no sense; it lacks meaning. So I’m always on the lookout for strong, clear arguments for free will. Christian List, a philosopher at the London School of Economics, provides such arguments in his succinct new book Why Free Will Is Real (Harvard 2019). I met List in 2015 when I decided to attend, after much deliberation, a workshop on consciousness at NYU. I recently freely chose to send him some questions, which he freely chose to answer. –John Horgan

Horgan: Why philosophy? Was your choice pre-determined?

List: I don’t think it was. As a teenager, I wanted to become a computer scientist or mathematician. It was only during my last couple of years at high school that I developed an interest in philosophy, and then I studied mathematics and philosophy as an undergraduate. For my doctorate, I chose political science, because I wanted to do something more applied, but I ended up working on mathematical models of collective decision-making and their implications for philosophical questions about democracy. Can majority voting produce rational collective outcomes? Are there truths to be found in politics? So, I was drawn back into philosophy. But the fact that I now teach philosophy is due to contingent events, especially meeting some philosophers who encouraged me.

Horgan: Free-will denial seems to be on the rise. Why do you think that is?

List: The free-will denial we are now seeing appears to be a by-product of the growing popularity of a reductionistic worldview in which everything is thought to be reducible to physical processes. If we look at the world solely through the lens of fundamental physics, for instance, then we will see only particles, fields, and forces, and there seems no room for human agency and free will. People then look like bio-physical machines. My response is that this kind of reductionism is mistaken. I want to embrace a scientific worldview, but reject reductionism. In fact, many scientists reject the sort of reductionism that is often mistakenly associated with science. © 2019 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26318 - Posted: 06.10.2019

By John Horgan

In a previous post I summarized my remarks at “Souls or Selfish Genes,” a conversation at Stevens Institute of Technology about religious versus scientific views of humanity. I represented the agnostic position, and David Lahti, a biologist and philosopher at the City University of New York, represented a position more friendly to theism. Below is Lahti’s summary of his opening comments. –John Horgan

I’ve been asked to deal with the question of “Souls vs. Selfish Genes.” And whereas I am sure this is a false dichotomy, I’m not quite sure how exactly to fit the two parts of the truth together. But I’ll give you a few thoughts I’ve had about it, which can at least start us off. First, selfish genes. This of course is a reference to Richard Dawkins’ 1976 book of the same name, which is a popular and sensational description of a revolution in our understanding of the way evolution by natural selection operates. Briefly, we discovered in the 1960s-70s that the organismic individual was generally the most important level at which natural selection operates, meaning that evolution by natural selection proceeds primarily via certain individuals in a population reproducing more successfully than others. In fact, this is too simplistic. Hamilton’s theory of kin selection showed that it’s actually below the level of the individual where we really have to concentrate in order to explain certain traits, such as the self-sacrificial stinging of bees and the fact that some young male birds help their mother raise her next brood instead of looking for a mate. Those individuals are not being as selfish as we might predict. © 2019 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26250 - Posted: 05.20.2019