Chapter 18. Attention and Higher Cognition




By Ann Gibbons Louise hadn’t seen her sister or nephew for 26 years. Yet the moment she spotted them on a computer screen, she recognized them, staring hard at their faces. The feat might have been impressive enough for a human, but Louise is a bonobo—one who had spent most of her life at a separate sanctuary from these relatives. The discovery, published today in the Proceedings of the National Academy of Sciences, reveals that our closest primate cousins can remember the faces of friends and family for years, and sometimes even decades. The study, experts say, shows that the capability for long-term social memory is not unique to people, as was long believed. “It’s a remarkable finding,” says Frans de Waal, a primatologist at Emory University who was not involved with the work. “I’m not even sure we humans remember most individuals we haven’t seen for 2 decades.” The research, he says, raises the possibility that other animals can also do this and may remember far more than we give them credit for. Trying to figure out whether nonhuman primates remember a face isn’t simple. You can’t just ask them. So in the new study, comparative psychologist Christopher Krupenye at Johns Hopkins University and colleagues used eye trackers, infrared cameras that noninvasively map a subject’s gaze as the subject looks at images of people or objects. The scientists worked with 26 chimpanzees and bonobos living in three zoos or sanctuaries in Europe and Japan. The team showed the animals photos of the faces of two apes placed side by side on the screen for 3 seconds. Some images were of complete strangers; some were of close friends, foes, or family members who had once lived in the same social groups, but whom they hadn’t seen in years.

Keyword: Attention; Learning & Memory
Link ID: 29058 - Posted: 12.19.2023

By Jaimie Seaton It’s not uncommon for Veronica Smith to be looking at her partner’s face when suddenly she sees his features changing—his eyes moving closer together and then farther apart, his jawline getting wider and narrower, and his skin moving and shimmering. Smith, age 32, has experienced this phenomenon when looking at faces since she was four or five years old, and while it’s intermittent when she’s viewing another person’s face, it’s more constant when she views her own. “I almost always experience it when I look at my own face in the mirror, which makes it really hard to get ready because I’ll think that I look weird,” Smith explains. “I can more easily tell that I’m experiencing distortions when I’m looking at other people because I know what they look like.” Smith has a rare condition called prosopometamorphopsia (PMO), in which faces appear distorted in shape, texture, position or color. (PMO is related to Alice in Wonderland syndrome, or AIWS, which distorts the size perception of objects or one’s own body.) PMO has fascinated many scientists. The late neurologist and writer Oliver Sacks co-wrote a paper on the condition that was published in 2014, the year before he died. Brad Duchaine, a professor of psychological and brain sciences at Dartmouth College, explains that some people with it see distortions that affect the whole face (bilateral PMO) while others see only the left or right half of a face as distorted (hemi-PMO). “Not surprisingly, people with PMO find the distortions extremely distressing. Over the last century, approximately 75 cases have been reported in the literature. However, little is known about the condition because cases with face distortions have usually been documented by neurologists who don’t have expertise in visual neuroscience or the time to study the cases in depth,” Duchaine says. For 25 years Duchaine’s work has focused on prosopagnosia (face blindness), but after co-authoring a study on hemi-PMO that was published in 2020, Duchaine shifted much of his lab’s work to PMO.

Keyword: Attention; Vision
Link ID: 29051 - Posted: 12.16.2023

By Oshan Jarow Sometimes when I’m looking out across the northern meadow of Brooklyn’s Prospect Park, or even the concrete parking lot outside my office window, I wonder if someone like Shakespeare or Emily Dickinson could have taken in the same view and seen more. I don’t mean making out blurry details or more objects in the scene. But through the lens of their minds, could they encounter the exact same world as me and yet have a richer experience? One way to answer that question, at least as a thought experiment, could be to compare the electrical activity inside our brains while gazing out upon the same scene, and run some statistical analysis designed to tell us whose brain activity indicates more richness. But that’s just a loopy thought experiment, right? Not exactly. One of the newest frontiers in the science of the mind is the attempt to measure consciousness’s “complexity,” or how diverse and integrated electrical activity is across the brain. Philosophers and neuroscientists alike hypothesize that more complex brain activity signifies “richer” experiences. The idea of measuring complexity stems from information theory — a mathematical approach to understanding how information is stored, communicated, and processed — which doesn’t provide wonderfully intuitive examples of what more richness actually means. Unless you’re a computer person. “If you tried to upload the content onto a hard drive, it’s how much memory you’d need to be able to store the experience you’re having,” Adam Barrett, a professor of machine learning and data science at the University of Sussex, told me. Another approach to understanding richness is to look at how it changes in different mental states. Recent studies have found that measures of complexity are lowest in patients under general anesthesia, higher in ordinary wakefulness, and higher still in psychedelic trips, which can notoriously turn even the most mundane experiences — say, my view of the parking lot outside my office window — into profound and meaningful encounters.
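The “complexity” measures in studies like these are often variants of Lempel-Ziv compressibility applied to binarized brain recordings: the more distinct patterns a signal contains, the less it compresses. Here is a minimal Python sketch of that idea, assuming a generic single-channel signal and a simplified dictionary-based Lempel-Ziv parse (the function name, the median threshold, and the toy data are illustrative, not taken from the studies above):

    import numpy as np

    def lempel_ziv_complexity(bits: str) -> int:
        """Count distinct phrases seen while scanning left to right
        (a simplified dictionary-based Lempel-Ziv parse)."""
        seen, count, phrase = set(), 0, ""
        for b in bits:
            phrase += b
            if phrase not in seen:
                seen.add(phrase)
                count += 1
                phrase = ""
        return count + (1 if phrase else 0)

    # Binarize a toy "EEG channel" around its median, a common first step.
    rng = np.random.default_rng(0)
    signal = rng.normal(size=2000)
    bits = "".join("1" if x > np.median(signal) else "0" for x in signal)
    print(lempel_ziv_complexity(bits))

On this view, a richer conscious state corresponds to a signal with more distinct patterns and therefore a higher phrase count; the regular, synchronized activity typical of anesthesia compresses well and scores low.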

Keyword: Consciousness
Link ID: 29049 - Posted: 12.16.2023

By Amitha Kalaichandran In May, I was invited to take part in a survey by the National Academies of Sciences, Engineering, and Medicine to help refine how long Covid is described and diagnosed as part of The National Research Action Plan on Long Covid. The survey included several questions about definitions and diagnostic criteria, such as the “brain fog” often experienced by those with long Covid. My curiosity was piqued, and I began to wonder about the similarities between these neurological symptoms and those experienced by people with attention-deficit/hyperactivity disorder, or ADHD. As a medical journalist with clinical and epidemiological experience, I found the possible connection and its implications impossible to ignore. We know that three years of potential exposure to SARS-CoV-2, in combination with the shift in social patterns (including work-from-home and social isolation), have affected several aspects of neurocognition, as detailed in a recent report from the Substance Abuse and Mental Health Services Administration. A 2021 systematic review found persistent neuropsychiatric symptoms in Covid-19 survivors, and a 2021 paper in the journal JAMA Network Open found that executive functioning, processing speed, memory, and recall were impaired in patients hospitalized with Covid-19. Long Covid may indeed be linked to chronic neurocognitive issues and may even accelerate dementia. The virus might affect the frontal lobe, the area that governs executive function — which involves how we make decisions and plan, use our working memory, and control impulses. In October, a paper in Cell reported that long Covid brain fog could be traced to serotonin depletion driven by immune system proteins called interferons, which are produced in response to the virus. Similarly, the symptoms of ADHD are believed to be rooted structurally in the frontal lobe and possibly in naturally low levels of the neurotransmitter dopamine, with contributions from norepinephrine, serotonin, and GABA. This helps explain why people with ADHD, who experience inattention, hyperactivity, and impulsivity, among other symptoms, may seek higher levels of stimulation: to activate the release of dopamine. However, a deficit in serotonin can also trigger ADHD. The same neurotransmitter, when depleted, may be responsible for brain fog in long Covid.

Keyword: ADHD
Link ID: 29038 - Posted: 12.09.2023

By Amanda Gefter On a February morning in 1935, a disoriented homing pigeon flew into the open window of an unoccupied room at the Hotel New Yorker. It had a band around its leg, but where it came from, or was meant to be headed, no one could say. While management debated what to do, a maid rushed to the 33rd floor and knocked at the door of the hotel’s most infamous denizen: Nikola Tesla. The 78-year-old inventor quickly volunteered to take in the homeless pigeon. “Dr. Tesla … dropped work on a new electrical project, lest his charge require some little attention,” reported The New York Times. “The man who recently announced the discovery of an electrical death-beam, powerful enough to destroy 10,000 airplanes at a swoop, carefully spread towels on his window ledge and set down a little cup of seed.” Nikola Tesla—the Serbian-American scientist famous for designing the alternating current motor and the Tesla coil—had, for years, regularly been spotted skulking through the nighttime streets of midtown Manhattan, feeding the birds at all hours. In the dark, he’d sound a low whistle, and from the gloom, hordes of pigeons would flock to the old man, perching on his outstretched arms. He was known to keep baskets in his room as nests, along with caches of homemade seed mix, and to leave his windows perpetually open so the birds could come and go. Once, he was arrested for trying to lasso an injured homing pigeon in the plaza of St. Patrick’s Cathedral, and, from his holding cell in the 34th Street precinct, had to convince the officers that he was—or had been—one of the most famous inventors in the world. It had been years since he’d produced a successful invention. He was gaunt and broke—living off of debt and good graces—having been kicked out of a string of hotels, a trail of pigeon droppings and unpaid rent in his wake. He had no family or close friends, except for the birds.

Keyword: Consciousness
Link ID: 29034 - Posted: 12.09.2023

By Emily Cataneo It’s 1922. You’re a scientist presented with a hundred youths who, you’re told, will grow up to lead conventional adult lives — with one exception. In 40 years, one of the one hundred is going to become impulsive and criminal. You run blood tests on the subjects and discover nothing that indicates that one of them will go off the rails in four decades. And yet sure enough, 40 years later, one bad egg has started shoplifting and threatening strangers. With no physical evidence to explain his behavior, you conclude that this man has chosen to act out of his own free will. Now, imagine the same experiment starting in 2022. This time, you use the blood samples to sequence everyone’s genome. In one, you find a mutation in the gene that codes for a brain protein called tau, and you realize that this individual will become a criminal in 40 years not out of choice, but due to dementia. It turns out he did not shoplift out of free will, but because of physical forces beyond his control. Now, take the experiment a step further. If a man opens fire in an elementary school and kills scores of children and teachers, should he be held responsible? Should he be reviled and punished? Or should observers, even the mourning families, accept that under the right circumstances, that shooter could have been them? Does the shooter have free will while the man with dementia does not? Can you explain why? These provocative, even disturbing questions about similar scenarios underlie two new books about whether humans have control over our personalities, opinions, actions, and fates. “Free Agents: How Evolution Gave Us Free Will,” by professor of genetics and neuroscience Kevin J. Mitchell, and “Determined: A Science of Life Without Free Will,” by biology and neurology professor Robert M. Sapolsky, both undertake the expansive task of using the tools of science to probe the question of whether we possess free will, a question with stark moral and existential implications for the way we structure human society.

Keyword: Consciousness
Link ID: 29009 - Posted: 11.18.2023

By Francesca Paris There are more Americans who say they have serious cognitive problems — with remembering, concentrating or making decisions — than at any time in the last 15 years, data from the Census Bureau shows. The increase started with the pandemic: The number of working-age adults reporting “serious difficulty” thinking has climbed by an estimated one million people. About as many adults ages 18 to 64 now report severe cognitive issues as report trouble walking or taking the stairs, for the first time since the bureau started asking the questions each month in the 2000s. The sharp increase captures the effects of long Covid for a small but significant portion of younger adults, researchers say, most likely in addition to other effects of the pandemic, including psychological distress. But they also say it’s not yet possible to fully dissect all the reasons behind the increase. Richard Deitz, an economist at the Federal Reserve Bank of New York, analyzed the data and attributed much of the increase to long Covid. “These numbers don’t do this — they don’t just start suddenly increasing sharply like this,” he said. In its monthly Current Population Survey, the census asks a sample of Americans whether they have serious problems with their memory and concentration. It defines them as disabled if they answer yes to that question or one of five others about limitations on their daily activities. The questions are unrelated to disability applications, so respondents don’t have a financial incentive to answer one way or another. At the start of 2020, the survey estimated there were fewer than 15 million Americans ages 18 to 64 with any kind of disability. That rose to about 16.5 million by September 2023. Nearly two-thirds of that increase was made up of people who had newly reported limitations on their thinking. There were also increases in census estimates of the number of adults with a vision disability or serious difficulty doing basic errands. For older working-age Americans, the pandemic ended a yearslong decline in reported rates of disability.

Keyword: Attention
Link ID: 29003 - Posted: 11.13.2023

By Yasemin Saplakoglu More than 150 years ago, the economist and philosopher William Stanley Jevons discovered something curious about the number 4. While musing about how the mind conceives of numbers, he tossed a handful of black beans into a cardboard box. Then, after a fleeting glance, he guessed how many there were, before counting them to record the true value. After more than 1,000 trials, he saw a clear pattern. When there were four or fewer beans in the box, he always guessed the right number. But for five beans or more, his quick estimations were often incorrect. Jevons’ description of his self-experiment, published in Nature in 1871, set the “foundation of how we think about numbers,” said Steven Piantadosi, a professor of psychology and neuroscience at the University of California, Berkeley. It sparked a long-lasting and ongoing debate about why there seems to be a limit on the number of items we can accurately judge to be present in a set. Now, a new study in Nature Human Behaviour has edged closer to an answer by taking an unprecedented look at how human brain cells fire when presented with certain quantities. Its findings suggest that the brain uses a combination of two mechanisms to judge how many objects it sees. One estimates quantities. The second sharpens the accuracy of those estimates — but only for small numbers. It’s “very exciting” that the findings connect long-debated ideas to their neural underpinnings, said Piantadosi, who was not involved in the study. “There’s not many things in cognition where people have been able to pinpoint very plausible biological foundations.” Although the new study does not end the debate, the findings start to untangle the biological basis for how the brain judges quantities, which could inform bigger questions about memory, attention and even mathematics.
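The two-mechanism account can be made concrete with a toy simulation in Python: an exact mechanism handles up to four items, while a noisy magnitude estimator (whose error grows with the count) handles the rest. This is a sketch, assuming an illustrative limit of 4 and a made-up noise level rather than parameters from the study:

    import numpy as np

    rng = np.random.default_rng(42)

    def glance_estimate(n: int, limit: int = 4, noise: float = 0.15) -> int:
        """Exact 'subitizing' up to the limit; noisy scalar estimation above it."""
        if n <= limit:
            return n                      # the sharpening mechanism is exact
        guess = rng.normal(n, noise * n)  # estimation error scales with magnitude
        return max(1, round(guess))

    # Reproduce the shape of Jevons' result: accuracy by true count.
    for n in range(1, 11):
        acc = np.mean([glance_estimate(n) == n for _ in range(1000)])
        print(f"{n:2d} items: {acc:5.1%} correct")

Run this and accuracy sits at 100 percent through four items, then falls off steadily, which is the qualitative pattern Jevons recorded with his beans.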

Keyword: Attention
Link ID: 29000 - Posted: 11.11.2023

By Caren Chesler In 2019, Debra Halsch was diagnosed with smoldering multiple myeloma, a rare blood and bone marrow disorder that can develop into a type of blood cancer. Her doctors recommended chemotherapy, she said, but she feared the taxing side effects the drugs might wreak on her body. Instead, the life coach from Piermont, New York, tried meditation. A friend had told Halsch, now 57, about Joe Dispenza, who holds week-long meditation retreats that regularly attract thousands of people and carry a $2,299 price tag. Halsch signed up for one in Cancun, Mexico, and soon became a devotee. She now meditates for at least two hours a day and says her health has improved as a result. Dispenza, a chiropractor who has written various self-help books, has said he believes the mind can heal the body. After all, he says he healed himself back in 1986, when a truck hit him while he was bicycling, breaking six vertebrae. Instead of surgery, Dispenza says he spent hours each day recreating his spine in his mind, visualizing it healthy and healed. After 11 weeks, the story goes, he was back on his feet. Halsch said she believes she can do the same for her illness. “If our thoughts and emotions can make our bodies sick, they can make us well, too,” she said. In an email to Undark, Rhadell Hovda, chief operating officer for Dispenza’s parent company, Encephalon, Inc., emphasized that Dispenza does not claim meditation can treat or cure cancer. However, he does “follow the evidence when it is presented,” and has encountered people at workshops and retreats “who claimed to have healed from many conditions.” For more than two decades, various studies have suggested that meditation and mindfulness — that is, being aware of the present moment — can help reduce pain and improve pain management, lending some credence to the notion that the brain can affect the body. Such results have helped the field grow into a multibillion-dollar industry, populated by meditation apps, guided workshops, and upscale retreats.

Keyword: Attention; Stress
Link ID: 28990 - Posted: 11.08.2023

By Catherine Offord Close your eyes and picture yourself running an errand across town. You can probably imagine the turns you’d need to take and the landmarks you’d encounter. This ability to conjure such scenarios in our minds is thought to be crucial to humans’ capacity to plan ahead. But it may not be uniquely human: Rats also seem to be able to “imagine” moving through mental environments, researchers report today in Science. Rodents trained to navigate within a virtual arena could, in return for a reward, activate the same neural patterns they’d shown while navigating—even when they were standing still. That suggests rodents can voluntarily access mental maps of places they’ve previously visited. “We know humans carry around inside their heads representations of all kinds of spaces: rooms in your house, your friends’ houses, shops, libraries, neighborhoods,” says Sean Polyn, a psychologist at Vanderbilt University who was not involved in the research. “Just by the simple act of reminiscing, we can place ourselves in these spaces—to think that we’ve got an animal analog of that very human imaginative act is very impressive.” Researchers think humans’ mental maps are encoded in the hippocampus, a brain region involved in memory. As we move through an environment, cells in this region fire in particular patterns depending on our location. When we later revisit—or simply think about visiting—those locations, the same hippocampal signatures are activated. Rats also encode spatial information in the hippocampus. But it’s been impossible to establish whether they have a similar capacity for voluntary mental navigation because of the practical challenges of getting a rodent to think about a particular place on cue, says study author Chongxi Lai, who conducted the work while a graduate student and later a postdoc at the Howard Hughes Medical Institute’s Janelia Research Campus. In their new study, Lai, along with Janelia neuroscientist Albert Lee and colleagues, found a way around this problem by developing a brain-machine interface that rewarded rats for navigating their surroundings using only their thoughts.

Keyword: Learning & Memory; Attention
Link ID: 28989 - Posted: 11.04.2023

By Laura Sanders Like tiny, hairy Yodas raising X-wings from a swamp, rats can lift digital cubes and drop them near a target. But these rats aren’t using the Force. Instead, they are using their imagination. This telekinetic trick, described in the Nov. 3 Science, provides hints about how brains imagine new scenarios and remember past ones. “This is fantastic research,” says Mayank Mehta, a neurophysicist at UCLA. “It opens up a lot of exciting possibilities.” A deeper scientific understanding of the brain area involved in the feat could, for instance, help researchers diagnose and treat memory disorders, he says. Neuroscientist Albert Lee and his colleagues study how brains can go back in time by revisiting memories and jump ahead to imagine future scenarios. Those processes, sometimes called “mental time travel,” are “part of what makes our inner mental lives quite rich and interesting,” says Lee, who did the new study while at Howard Hughes Medical Institute’s Janelia Research Campus in Ashburn, Va. To dip into these complex questions, the researchers began with a simpler one: “Can you be in one place and think about another place?” says Lee, who is now an HHMI investigator at Beth Israel Deaconess Medical Center in Boston. “The rat isn’t doing anything fancier than that. We’re not asking them to recall their summer vacation.” Neuroscientist and engineer Chongxi Lai, also now at Beth Israel Deaconess, Lee and colleagues trained rats to move on a spherical treadmill in the midst of a 3-D virtual world projected onto a surrounding screen. While the rats poked around their virtual world, electrodes recorded signals from nerve cells in the rats’ hippocampi, brain structures known to hold complex spatial information, among other things (SN: 10/6/14). In this way, researchers matched patterns of brain activity with spots in the virtual world.
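That matching step can be thought of as a population-vector decoder: each spot in the arena gets a template of firing rates recorded during real navigation, and new activity is assigned to the most similar template. Below is a bare-bones nearest-template sketch in Python with synthetic numbers (the cosine-similarity rule and all values here are illustrative assumptions, not the study’s actual decoder):

    import numpy as np

    rng = np.random.default_rng(1)
    n_cells, n_places = 50, 8

    # Template activity for each place, learned while the rat navigates.
    templates = rng.gamma(shape=2.0, scale=1.0, size=(n_places, n_cells))

    def decode_place(activity: np.ndarray) -> int:
        """Return the place whose template best matches (cosine similarity)."""
        sims = templates @ activity / (
            np.linalg.norm(templates, axis=1) * np.linalg.norm(activity) + 1e-12)
        return int(np.argmax(sims))

    # Imagined activity resembling place 3's template decodes to place 3,
    # which in the experiment would move the cube or deliver the reward.
    imagined = templates[3] + rng.normal(scale=0.3, size=n_cells)
    print(decode_place(imagined))

A brain-machine interface built this way never needs the rat to move: if the hippocampal population merely reproduces a stored place pattern, the decoder reports that place.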

Keyword: Attention
Link ID: 28988 - Posted: 11.04.2023

By Dan Falk You’re thirsty, so you reach for a glass of water. It’s either a freely chosen action or the inevitable result of the laws of nature, depending on who you ask. Do we have free will? The question is ancient—and vexing. Everyone seems to have pondered it, and many seem quite certain of the answer, which is typically either “yes” or “absolutely not.” One scientist in the “absolutely not” camp is Robert Sapolsky. In his new book, Determined: A Science of Life Without Free Will, the primatologist and Stanford professor of neurology spells out why we can’t possibly have free will. Why do we behave one way and not another? Why do we choose Brand A over Brand B, or vote for Candidate X over Candidate Y? Not because we have free will, but because every act and thought is the product of “cumulative biological and environmental luck.” Sapolsky tells readers that the “biology over which you had no control, interacting with the environment over which you had no control, made you you.” That is to say, “everything in your childhood, starting with how you were mothered within minutes of birth, was influenced by culture, which means as well by the centuries of ecological factors that influenced what kind of culture your ancestors invented, and by the evolutionary pressures that molded the species you belong to.” As Sapolsky has put it: “I have spent forever trying to understand where behavior comes from. And what you see is there’s absolutely no room for free will.” Sapolsky brings the same combination of earthy directness and literary flourish that marked his earlier books, including Why Zebras Don’t Get Ulcers, about the biology of stress, to this latest work. To summarize his point of view in Determined, he writes, “Or as Maria sings in The Sound of Music, ‘Nothing comes from nothing, nothing ever could.’” The affable, bushy-bearded Sapolsky is now in his mid-60s. During our recent interview over Zoom, I was on the lookout for any inconsistency; anything that might suggest that deep down he admits we really do make decisions, as many of us surely feel. But he was prepared and stuck to his guns.

Keyword: Consciousness
Link ID: 28987 - Posted: 11.04.2023

By Clay Risen William E. Pelham Jr., a child psychologist who challenged how his field approached attention deficit hyperactivity disorder in children, arguing for a therapy-based regimen that used drugs like Ritalin and Adderall as an optional supplement, died on Oct. 21 in Miami. He was 75. His son, William E. Pelham III, who is also a child psychologist, confirmed the death, in a hospital, but did not provide a cause. Dr. Pelham began his career in the mid-1970s, when the modern understanding of mental health was emerging and psychologists were only just beginning to understand A.D.H.D. — and with it a new generation of medication to treat it. Through the 1980s and ’90s, doctors and many parents embraced A.D.H.D. drugs like Ritalin and Adderall as miracle medications, though some, including Dr. Pelham, raised concerns about their efficacy and side effects. Dr. Pelham was not opposed to medication. He recognized that drugs were effective at rapidly addressing the symptoms of A.D.H.D., like fidgeting, impulsiveness and lack of concentration. But in a long string of studies and papers, he argued that for most children, behavioral therapy, combined with parental intervention techniques, should be the first line of attack, followed by low doses of drugs, if necessary. And yet, as he pointed out repeatedly, the reality was far different: The Centers for Disease Control and Prevention reported in 2016 that while six in 10 children diagnosed with A.D.H.D. were on medication, fewer than half received behavioral therapy. In one major study, which he published in 2016 along with Susan Murphy, a statistician at the University of Michigan, he demonstrated the importance of treatment sequencing — that behavioral therapy should come first, then medication. He and Dr. Murphy split a group of 146 children with A.D.H.D., from ages 5 to 12, into two groups. One group received a low dose of generic Ritalin; the other received nothing, but their parents were given instruction in behavioral-modification techniques. After two months, children from both groups who showed no improvement were arranged into four new groups: The children given generic Ritalin received either more medication or behavioral modification therapy, and the children given behavioral modification therapy received either more intense therapy or a dose of medication.

Keyword: ADHD; Drug Abuse
Link ID: 28984 - Posted: 11.04.2023

By Darren Incorvaia The idea of a chicken running around with its head cut off, inspired by a real-life story, may make it seem like the bird doesn’t have much going on upstairs. But Sonja Hillemacher, an animal behavior researcher at the University of Bonn in Germany, always knew that chickens were more than mindless sources of wings and nuggets. “They are way smarter than you think,” Ms. Hillemacher said. Now, in a study published in the journal PLOS One on Wednesday, Ms. Hillemacher and her colleagues say they have found evidence that roosters can recognize themselves in mirrors. In addition to shedding new light on chicken intellect, the researchers hope that their experiment can prompt re-evaluations of the smarts of other animals. The mirror test is a common, but contested, test of self-awareness. It was introduced by the psychologist Gordon Gallup in 1970. He housed chimpanzees with mirrors and then marked their faces with red dye. The chimps didn’t seem to notice until they could see their reflections, and then they began inspecting and touching the marked spot on their faces, suggesting that they recognized themselves in the mirror. The mirror test has since been used to assess self-recognition in many other species. But only a few — such as dolphins and elephants — have passed. After being piloted on primates, the mirror test was “somehow sealed in a nearly magical way as sacred,” said Onur Güntürkün, a neuroscientist at Ruhr University Bochum in Germany and an author of the study who worked with Ms. Hillemacher and Inga Tiemann, also at the University of Bonn. But different cognitive processes are active in different situations, and there’s no reason to think that the mirror test is accurate for animals with sensory abilities and social systems vastly different from those of chimps. The roosters failed the classic mirror test. When the team marked them with pink powder, the birds showed no inclination to inspect or touch the smudge in front of the mirror the way that Dr. Gallup’s chimps did. As an alternative, the team tested rooster self-awareness in a more fowl-friendly way.

Keyword: Consciousness; Intelligence
Link ID: 28978 - Posted: 10.28.2023

By George Musser They call it the hard problem of consciousness, but a better term might be the impossible problem of consciousness. The whole point is that the qualitative aspects of our conscious experience, or “qualia,” are inexplicable. They slip through the explanatory framework of science, which is reductive: It explains things by breaking them down into parts and describing how they fit together. Subjective experience has an intrinsic je ne sais quoi that can’t be decomposed into parts or explained by relating one thing to another. Qualia can’t be grasped intellectually. They can only be experienced firsthand. For the past five years or so, I’ve been trying to untangle the cluster of theories that attempt to explain consciousness, traveling the world to interview neuroscientists, philosophers, artificial-intelligence researchers, and physicists—all of whom have something to say on the matter. Most duck the hard problem, either bracketing it until neuroscientists explain brain function more fully or accepting that consciousness has no deeper explanation and must be wired into the base level of reality. Although I made it a point to maintain an outsider’s view of science in my reporting, staying out of academic debates and finding value in every approach, I find both positions defensible but dispiriting. I cling to the intuition that consciousness must have some scientific explanation that we can achieve. But how? It’s hard to imagine how science could possibly expand its framework to accommodate the redness of red or the awfulness of fingernails on a chalkboard. But there is another option: to suppose that we are misconstruing our experience in some way. We think that it has intrinsic qualities, but maybe on closer inspection it doesn’t. Not that this is an easy position to take. Two leading theories of consciousness take a stab at it. Integrated Information Theory (IIT) says that the neural networks in our head are conscious since neurons act together in harmony—they form collective structures with properties beyond those of the individual cells. If so, subjective experience isn’t primitive and unanalyzable; in principle, you could follow the network’s transitions and read its mind. “What IIT tries to do is completely avoid any intrinsic quality in the traditional sense,” the father of IIT, Giulio Tononi, told me.

Keyword: Consciousness
Link ID: 28970 - Posted: 10.25.2023

By Hope Reese There is no free will, according to Robert Sapolsky, a biologist and neurologist at Stanford University and a recipient of the MacArthur Foundation “genius” grant. Dr. Sapolsky worked for decades as a field primatologist before turning to neuroscience, and he has spent his career investigating behavior across the animal kingdom and writing about it in books including “Behave: The Biology of Humans at Our Best and Worst” and “Monkeyluv: And Other Essays on Our Lives as Animals.” In his latest book, “Determined: A Science of Life Without Free Will,” Dr. Sapolsky confronts and refutes the biological and philosophical arguments for free will. He contends that we are not free agents, but that biology, hormones, childhood and life circumstances coalesce to produce actions that we merely feel were ours to choose. It’s a provocative claim, he concedes, but he would be content if readers simply began to question the belief, which is embedded in our cultural conversation. Getting rid of free will “completely strikes at our sense of identity and autonomy and where we get meaning from,” Dr. Sapolsky said, and this makes the idea particularly hard to shake. There are major implications, he notes: Absent free will, no one should be held responsible for their behavior, good or bad. Dr. Sapolsky sees this as “liberating” for most people, for whom “life has been about being blamed and punished and deprived and ignored for things they have no control over.” He spoke in a series of interviews about the challenges that free will presents and how he stays motivated without it. These conversations were edited and condensed for clarity. To most people, free will means being in charge of our actions. What’s wrong with that outlook? It’s a completely useless definition. When most people think they’re discerning free will, what they mean is somebody intended to do what they did: Something has just happened; somebody pulled the trigger. They understood the consequences and knew that alternative behaviors were available. But that doesn’t remotely begin to touch it, because you’ve got to ask: Where did that intent come from? That’s what happened a minute before, in the years before, and everything in between.

Keyword: Consciousness; Attention
Link ID: 28967 - Posted: 10.17.2023

By Linda Geddes The former Premier League goalkeeper Brad Friedel once said that to be able to work well in the box, you have to be able to think outside the box. Now scientific data supports the idea that goalies’ brains really do perceive the world differently – their brains appear able to merge signals from the different senses more quickly, possibly underpinning their unique abilities on the football pitch. Goalkeeping is the most specialised position in football, with the primary objective of stopping the opposition from scoring. But while previous studies have highlighted differences in physiological and performance profiles between goalkeepers and other players, far less was known about whether they have different perceptual or cognitive abilities. “Unlike other football players, goalkeepers are required to make thousands of very fast decisions based on limited or incomplete sensory information,” said Michael Quinn, a former goalkeeper in the Irish Premiership, who is now studying for a master’s degree in behavioural neuroscience at University College Dublin. Suspecting that this ability might hinge on an enhanced capacity to combine information from different senses, Quinn and researchers at Dublin City University and University College Dublin recruited 60 professional goalkeepers, outfield players and age-matched non-players to do a series of tests, looking for differences in their ability to distinguish sounds and flashes as separate from one another. Doing so enabled them to estimate volunteers’ temporal binding windows – the timeframe in which different sensory signals are fused together in the brain. The study, published in Current Biology, found that goalkeepers had a narrower temporal binding window relative to outfielders and non-soccer players.
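A temporal binding window is usually estimated by varying the lag between a flash and a beep and fitting a curve to the proportion of trials judged “simultaneous”; the width of that curve summarizes how far apart the two signals can be and still fuse. Here is a Python sketch with synthetic data, assuming the common Gaussian-fit approach (not necessarily the exact method of the Current Biology paper):

    import numpy as np
    from scipy.optimize import curve_fit

    # Audio-visual lags in ms (negative = sound first) and the fraction
    # of trials judged "simultaneous" at each lag (synthetic data).
    soa = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], float)
    p_sim = np.array([0.05, 0.20, 0.60, 0.85, 0.95, 0.90, 0.70, 0.30, 0.10])

    def gaussian(x, amp, mu, sigma):
        return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    (amp, mu, sigma), _ = curve_fit(gaussian, soa, p_sim, p0=[1.0, 0.0, 100.0])

    # Summarize the window as the fit's full width at half maximum.
    print(f"binding window ~ {2.355 * abs(sigma):.0f} ms wide, "
          f"centered at {mu:.0f} ms")

On this kind of measure, a goalkeeper with a narrower window would show a steeper, tighter curve than the outfielders and non-players in the study.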

Keyword: Attention; Vision
Link ID: 28954 - Posted: 10.10.2023

By Mariana Lenharo For more than a century, researchers have known that people are generally very good at eyeballing quantities of four or fewer items. But performance at sizing up numbers drops markedly — becoming slower and more prone to error — in the face of larger numbers. Now scientists have discovered why: the human brain uses one mechanism to assess four or fewer items and a different one when there are five or more. The findings, obtained by recording the neuron activity of 17 human participants, settle a long-standing debate on how the brain estimates how many objects a person sees. The results were published in Nature Human Behaviour on 2 October. The finding is relevant to the understanding of the nature of thinking, says psychologist Lisa Feigenson, the co-director of the Johns Hopkins University Laboratory for Child Development in Baltimore, Maryland. “Fundamentally, the question is one of mental architecture: what are the building blocks that give rise to human thought?” The limits of the human ability to estimate large quantities have puzzled many generations of scientists. In an 1871 Nature article, economist and logician William Stanley Jevons described his investigations into his own counting skills and concluded “that the number five is beyond the limit of perfect discrimination, by some persons at least”. Some researchers have argued that the brain uses a single estimation system, one that is simply less precise for higher numbers. Others hypothesize that the performance discrepancy arises from there being two separate neuronal systems to quantify objects. But experiments have failed to determine which model is correct. Then, a team of researchers had a rare opportunity to record the activity of individual neurons inside the brains of people who were awake. All were being treated for seizures at the University Hospital Bonn in Germany, and had microelectrodes inserted in their brains in preparation for surgery.

Keyword: Attention
Link ID: 28953 - Posted: 10.10.2023

By Marco Giancotti I’m lying down in a white cylinder barely wider than my body, surrounded on all sides by a mass of sophisticated machinery the size of a small camper van. It’s an fMRI machine, one of the technological marvels of modern neuroscience. Two small inflatable cushions squeeze my temples, keeping my head still. “We are ready to begin the next batch of exercises,” I hear Dr. Horikawa’s gentle voice saying. We’re underground, in one of the laboratories of Tokyo University’s Faculty of Medicine, Hongo Campus. “Do you feel like proceeding?” “Yes, let’s go,” I answer. The machine sets in motion again. A powerful current grows inside the cryogenically cooled wires that coil around me, showering my head with radio waves, knocking the hydrogen atoms inside my head off their original spin axis, and measuring the rate at which the axis recovers afterward. To the sensors around me, I’m now as transparent as a glass of water. Every tiny change of blood flow anywhere inside my brain is being watched and recorded in 3-D. A few seconds pass, then a synthetic female voice speaks into my ears over the electronic clamor: “top hat.” I close my eyes and I imagine a top hat. A few seconds later a beep tells me I should rate the quality of my mental picture, which I do with a controller in my hand. The voice speaks again: “fire extinguisher,” and I repeat the routine. Next is “butterfly,” then “camel,” then “snowmobile,” and so on, for about 10 minutes, while the system monitors the activation of my brain synapses. For most people, this should be a rather simple exercise, perhaps even satisfying. For me, it’s a considerable strain, because I don’t “see” any of those things. For each and every one of the prompts, I rate my mental image “0” on a 0 to 5 scale, because as soon as I close my eyes, what I see are not everyday objects, animals, and vehicles, but the dark underside of my eyelids. I can’t willingly form the faintest of images in my mind. And, although it isn’t the subject of the current experiment, I also can’t conjure sounds, smells, or any other kind of sensory stimulation inside my head. I have what is called “aphantasia,” the absence of voluntary imagination of the senses. I know what a top hat is. I can describe its main characteristics. I can even draw an above-average impression of one on a piece of paper for you. But I can’t visualize it mentally. What’s wrong with me?

Keyword: Consciousness; Attention
Link ID: 28945 - Posted: 10.05.2023

By Anil Seth Earlier this month, the consciousness science community erupted into chaos. An open letter, signed by 124 researchers—some specializing in consciousness and others not—made the provocative claim that one of the most widely discussed theories in the field, Integrated Information Theory (IIT), should be considered “pseudoscience.” The uproar that followed sent consciousness social media into a doom spiral of accusation and recrimination, with the fallout covered in Nature, New Scientist, and elsewhere. Calling something pseudoscience is pretty much the strongest criticism one can make of a theory. It’s a move that should never be taken lightly, especially when more than 100 influential scientists and philosophers do it all at once. The open letter justified the charge primarily on the grounds that IIT has “commitments” to panpsychism—the idea that consciousness is fundamental and ubiquitous—and that the theory “as a whole” may not be empirically testable. A subsequent piece by one of the lead authors of the letter, Hakwan Lau, reframed the charge somewhat: that the claims made for IIT by its proponents and the wider media are not supported by empirical evidence. The brainchild of neuroscientist Giulio Tononi, IIT has been around for quite some time. Back in the late 1990s, Tononi published a paper in Science with the Nobel Laureate Gerald Edelman, linking consciousness to mathematical measures of complexity. This paper, which made a lasting impression on me, sowed the seeds of what later became IIT. Tononi published his first outline of the theory itself in 2004 and it has been evolving ever since, with the latest version—IIT 4.0—appearing earlier this year. The theory’s counterintuitive and deeply mathematical nature has always attracted controversy and criticism—including from myself and my colleagues—but it has certainly become prominent in consciousness science. A survey conducted at the main conference in the field—the annual meeting of the Association for the Scientific Study of Consciousness—found that nearly half of respondents considered it “definitely promising” or “probably promising,” and researchers in the field regularly identify it as one of four main theoretical approaches to consciousness. (The philosopher Tim Bayne did just this in our recent review paper on theories of consciousness for Nature Reviews Neuroscience.)

Keyword: Consciousness; Attention
Link ID: 28936 - Posted: 09.29.2023