Links for Keyword: Attention
By Laura Sanders

Growing up, Roberto S. Luciani had hints that his brain worked differently than most people’s. He didn’t relate when people complained about a movie character looking different than what they’d pictured from the book, for instance. But it wasn’t until he was a teenager that things finally clicked. His mother had just woken up and was telling him about a dream she had. “Movielike” is how she described it.

“Up until then, I assumed that cartoon depictions of imagination were exaggerated,” Luciani says. “I asked her what she meant and quickly realized my visual imagery was not functioning like hers.”

That’s because Luciani has a condition called aphantasia — an inability to picture objects, people and scenes in his mind. When he was growing up, the term didn’t even exist. But now, Luciani, a cognitive scientist at the University of Glasgow in Scotland, and other scientists are getting a clearer picture of how some brains work, including those with a blind mind’s eye.

In a recent study, Luciani and colleagues explored the connections between the senses, in this case, hearing and seeing. In most of our brains, these two senses collaborate. Auditory information influences activity in brain areas that handle vision. But in people with aphantasia, this connection isn’t as strong, researchers report November 4 in Current Biology. While in a brain scanner, blindfolded people listened to three sound scenes: a forest full of birds, a crowd of people, and a street bustling with traffic. In 10 people without aphantasia, these auditory scenes created reliable neural hallmarks in parts of the visual cortex. But in 23 people with aphantasia, these hallmarks were weaker.

© Society for Science & the Public 2000–2024.
Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29566 - Posted: 11.20.2024
By Yasemin Saplakoglu

Around 2,500 years ago, Babylonian traders in Mesopotamia impressed two slanted wedges into clay tablets. The shapes represented a placeholder digit, squeezed between others, to distinguish numbers such as 50, 505 and 5,005. An elementary version of the concept of zero was born.

Hundreds of years later, in seventh-century India, zero took on a new identity. No longer a placeholder, the digit acquired a value and found its place on the number line, before 1. Its invention went on to spark historic advances in science and technology. From zero sprang the laws of the universe, number theory and modern mathematics.

“Zero is, by many mathematicians, definitely considered one of the greatest — or maybe the greatest — achievement of mankind,” said the neuroscientist Andreas Nieder, who studies animal and human intelligence at the University of Tübingen in Germany. “It took an eternity until mathematicians finally invented zero as a number.”

Perhaps that’s no surprise given that the concept can be difficult for the brain to grasp. It takes children longer to understand and use zero than other numbers, and it takes adults longer to read it than other small numbers. That’s because to understand zero, our mind must create something out of nothing. It must recognize absence as a mathematical object. “It’s like an extra level of abstraction away from the world around you,” said Benjy Barnett, who is completing graduate work on consciousness at University College London. Nonzero numbers map onto countable objects in the environment: three chairs, each with four legs, at one table. With zero, he said, “we have to go one step further and say, ‘OK, there wasn’t anything there. Therefore, there must be zero of them.’”

© 2024 Simons Foundation
Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29523 - Posted: 10.19.2024
By Miryam Naddaf

Neurons in the hippocampus help to pick out patterns in the flood of information pouring through the brain. Credit: Arthur Chien/Science Photo Library

The human brain is constantly picking up patterns in everyday experiences — and can do so without conscious thought, finds a study of neuronal activity in people who had electrodes implanted in their brain tissue for medical reasons. The study shows that neurons in key brain regions combine information on what occurs and when, allowing the brain to pick out the patterns in events as they unfold over time. That helps the brain to predict coming events, the authors say. The work was published today in Nature.

“The brain does a lot of things that we are not consciously aware of,” says Edvard Moser, a neuroscientist at the Norwegian University of Science and Technology in Trondheim. “This is no exception.”

To make sense of the world around us, the brain must process an onslaught of information on what happens, where it happens and when it happens. The study’s authors wanted to explore how the brain organizes this information over time — a crucial step in learning and memory.

The team studied 17 people who had epilepsy and had electrodes implanted in their brains in preparation for surgical treatment. These electrodes allowed the authors to directly capture the activity of individual neurons in multiple brain regions. Among those regions were the hippocampus and entorhinal cortex, which are involved in memory and navigation. These areas contain time and place cells that act as the body’s internal clock and GPS system, encoding time and locations. “All the external world coming into our brain has to be filtered through that system,” says study co-author Itzhak Fried, a neurosurgeon and neuroscientist at the University of California, Los Angeles.

© 2024 Springer Nature Limited
Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 13: Memory and Learning
Link ID: 29497 - Posted: 09.28.2024
By Angie Voyles Askham

Nathaniel Daw has never touched a mouse. As professor of computational and theoretical neuroscience at Princeton University, he mainly works with other people’s data to construct models of the brain’s decision-making process. So when a collaborator came to him a few years ago with confusing data from mice that had performed a complex decision-making task in the lab, Daw says his best advice was just to fit the findings to the tried-and-true model of reward prediction error (RPE). That model relies on the idea that dopaminergic activity in the midbrain reflects discrepancies between expected and received rewards.

Daw’s collaborator, Ben Engelhard, had measured the activity of dopamine neurons in the ventral tegmental area (VTA) of mice as they were deciding how to navigate a virtual environment. And although the virtual environment was more complex than what a mouse usually experiences in the real world, an RPE-based model should have held, Daw assumed. “It was obvious to me that there was this very simple story that was going to explain his data,” Daw says.

But it didn’t. The neurons exhibited a wide range of responses, with some activated by visual cues and others by movement or cognitive tasks. The classic RPE model, it turned out, could not explain such heterogeneity. Daw, Engelhard and their colleagues published the findings in 2019.

That was a wake-up call, Daw says, particularly after he watched videos of what the mice actually experienced in the maze. “It’s just so much more complicated, and high dimensional, and richer” than expected, he says. The idea that this richness could be reduced to such a simple model seems ludicrous now, he adds. “I was just so blinded.”

© 2024 Simons Foundation
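For readers unfamiliar with the reward prediction error idea described above, it is usually formalized as a simple delta rule: the error signal is the difference between the reward received and the reward expected, and that error nudges the expectation for the next encounter. The sketch below is a minimal, generic illustration of that textbook idea, not the model or code used in the Daw and Engelhard study; the function name, learning rate and trial values are assumptions chosen for exposition.

```python
# Minimal illustrative sketch of a classic reward-prediction-error (RPE) update.
# Names, learning rate and trial values are illustrative assumptions, not taken
# from the study described above.

def rpe_update(expected_value, reward, learning_rate=0.1):
    """Return the prediction error and the updated reward expectation."""
    prediction_error = reward - expected_value            # dopamine-like RPE signal
    new_expected_value = expected_value + learning_rate * prediction_error
    return prediction_error, new_expected_value

# Example: an animal repeatedly receives a reward of 1.0 it did not initially expect.
value = 0.0
for trial in range(5):
    error, value = rpe_update(value, reward=1.0)
    print(f"trial {trial}: RPE = {error:.3f}, expected value -> {value:.3f}")
# The error is large at first and shrinks as the reward becomes predicted —
# the behavior the classic RPE account attributes to midbrain dopamine activity.
```

In this textbook account, the dopamine-like signal is large when a reward is surprising and close to zero once it is fully predicted, which is why a single scalar error term could not capture the heterogeneous VTA responses the study reports.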
Related chapters from BN: Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 4: Development of the Brain; Chapter 14: Attention and Higher Cognition
Link ID: 29479 - Posted: 09.14.2024
By Sneha Khedkar

About 10 years ago, when Michael Yartsev set up the NeuroBat Lab, he built a new windowless cave of sorts: a fully automated bat flight room. Equipped with cameras and other recording devices, the remote-controlled space has enabled his team to study the neuronal basis of navigation, acoustic and social behavior in Egyptian fruit bats without having any direct interaction with the animals. “In our lab, there’s never a human involved in the experiments,” says Yartsev, associate professor of bioengineering at the University of California, Berkeley.

The impetus to create the space was clear. The setup, paired with wireless electrodes inserted in the bats’ hippocampus, has helped the team demonstrate, for example, that place cells encode a flying bat’s current, past and future locations. Also, a mountain of evidence suggests that the identity, sex and stress levels of human experimenters can influence the behavior of and brain circuit activity in other lab animals, such as mice and rats.

Now Yartsev and his team have proved that “experimenter effects” hold true for bats, too, according to a new study published last month in Nature Neuroscience. The presence of human experimenters changed hippocampal neuronal activity in bats both at rest and during flight—and exerted an even stronger influence than another fruit bat, the study shows. The team expected that humans would influence neural activity, Yartsev says, “but we did not expect it to be so profound.”

© 2024 Simons Foundation
Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 9: Hearing, Balance, Taste, and Smell
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 29430 - Posted: 08.13.2024
By Maya L. Kapoor

Six years ago, while shopping at a supermarket, Sadie Dingfelder spied her husband selecting a store-branded peanut butter jar. “Since when do you buy generic?” she asked, grabbing the jar from the cart. To her surprise, the frightened man turned out to be a total stranger. As usual, Dingfelder quickly began rewriting the unsettling interaction in her mind as a funny story, but a stark thought struck her this time: “Other people do not make this kind of mistake.”

Dingfelder, a freelance science journalist, has prosopagnosia, or face blindness. It’s extremely difficult for her to recognize faces: She has gotten into cars with the wrong people; she has made plans with friends and then been surprised by who came. She once had to ask filmmaker John Waters, who met her at a museum for an interview, to help identify the museum staffer who had just introduced them — she couldn’t pick her out from a crowd of his fans.

In “Do I Know You? A Faceblind Reporter’s Journey Into the Science of Sight, Memory, and Imagination,” Dingfelder begins coming to terms with her neurodivergence, weaving together science reporting — including brain scans, computerized tests, and assessments by medical researchers — and personal memoir in order to understand herself better. Ultimately, “Do I Know You?” is a question Dingfelder seems to be asking herself. By the end of the book, the answer feels like a firm yes.

The term prosopagnosia, a portmanteau of the Greek words for “face” and “not knowing,” was coined by Joachim Bodamer, a psychiatrist and neurologist in Nazi Germany. Bodamer had encountered German soldiers with head traumas who had lost the ability to recognize people, including one soldier who blithely passed by his own mother at a train station.
Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29423 - Posted: 08.11.2024
By Yasemin Saplakoglu

Two years ago, Sarah Shomstein realized she didn’t have a mind’s eye. The vision scientist was sitting in a seminar room, listening to a scientific talk, when the presenter asked the audience to imagine an apple. Shomstein closed her eyes and did so. Then, the presenter asked the crowd to open their eyes and rate how vividly they saw the apple in their mind.

Saw the apple? Shomstein was confused. She didn’t actually see an apple. She could think about an apple: its taste, its shape, its color, the way light might hit it. But she didn’t see it. Behind her eyes, “it was completely black,” Shomstein recalled. And yet, “I imagined an apple.”

Most of her colleagues reacted differently. They reported actually seeing an apple, some vividly and some faintly, floating like a hologram in front of them. In that moment, Shomstein, who’s spent years researching perception at George Washington University, realized she experienced the world differently than others. She is part of a subset of people — thought to be about 1% to 4% of the general population — who lack mental imagery, a phenomenon known as aphantasia. Though it was described more than 140 years ago, the term “aphantasia” was coined only in 2015. It immediately drew the attention of anyone interested in how the imagination works. That included neuroscientists.

So far, they’re finding that aphantasia is not a disorder — it’s a different way of experiencing the world. Early studies have suggested that differences in the connections between brain regions involved in vision, memory and decision-making could explain variations in people’s ability to form mental images. Because many people with aphantasia dream in images and can recognize objects and faces, it seems likely that their minds store visual information — they just can’t access it voluntarily or can’t use it to generate the experience of imagery. That’s just one explanation for aphantasia. In reality, people’s subjective experiences vary dramatically, and it’s possible that different subsets of aphantasics have their own neural explanations. Aphantasia and hyperphantasia, the opposite phenomenon in which people report mental imagery as vivid as reality, are in fact two ends of a spectrum, sandwiching an infinite range of internal experiences between them.

© 2024 the Simons Foundation.
Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 7: Vision: From Eye to Brain
Link ID: 29417 - Posted: 08.02.2024
By Olivia Gieger

Three pioneers in face-perception research have won the 2024 Kavli Prize in Neuroscience. Nancy Kanwisher, professor of cognitive neuroscience at the Massachusetts Institute of Technology; Winrich Freiwald, professor of neurosciences and behavior at Rockefeller University; and Doris Tsao, professor of neurobiology at the University of California, Berkeley, will share the $1 million Kavli Prize for their discoveries of the regions—in both the human and monkey brains—responsible for identifying and recognizing faces.

“This is work that’s very classic and very elegant, not only in face-processing and face-recognition work, but the impact it’s had on how we think about brain organization in general is huge,” says Alexander Cohen, assistant professor of neurology at Harvard Medical School, who studies face recognition in autistic people. The Norwegian Academy of Science and Letters awards the prize every two years.

Kanwisher says she long suspected that something special happens in the brain when we look at faces, because people with prosopagnosia—the inability to recognize faces—maintain the ability to recognize nearly all other objects. What’s more, it is harder to recognize an upside-down face than most other inverted objects, studies have shown.

To get to the root of face processing, Kanwisher spent hours as a young researcher lying still in an MRI machine as images of faces and objects flashed before her. A spot in the bottom right of the cerebral cortex lit up when she and others looked at faces, according to functional MRI (fMRI) scans, she and her colleagues reported in a seminal 1997 paper. They called the region the fusiform face area.

© 2024 Simons Foundation
Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29356 - Posted: 06.13.2024
By Betsy Mason

To help pay for his undergraduate education, Elias Garcia-Pelegrin had an unusual summer job: cruise ship magician. “I was that guy who comes out at dinnertime and does random magic for you,” he says. But his latest magic gig is even more unusual: performing for Eurasian jays at Cambridge University’s Comparative Cognition Lab.

Birds can be harder to fool than tourists. And to do magic for the jays, he had to learn to do sleight-of-hand tricks with a live, wriggling waxworm instead of the customary coin or ball. But performing in an aviary does have at least one advantage over performing on a cruise ship: The birds aren’t expecting to be entertained. “You don’t have to worry about impressing anybody, or tell a joke,” Garcia-Pelegrin says. “So you just do the magic.”

In just the last few years, researchers have become interested in what they can learn about animal minds by studying what does and doesn’t fool them. “Magic effects can reveal blind spots in seeing and roadblocks in thinking,” says Nicky Clayton, who heads the Cambridge lab and, with Garcia-Pelegrin and others, cowrote an overview of the science of magic in the Annual Review of Psychology.

What we visually perceive about the world is a product of how our brains interpret what our eyes see. Humans and other animals have evolved to handle the immense amount of visual information we’re exposed to by prioritizing some types of information, filtering out things that are usually less relevant and filling in gaps with assumptions. Many magic effects exploit these cognitive shortcuts in humans, and comparing how well these same tricks work on other species may reveal something about how their minds operate. Clayton and her colleagues have used magic tricks with both jays and monkeys to reveal differences in how these animals experience the world. Now they are hoping to expand to more species and inspire other researchers to try magic to explore big questions about complex mental abilities and how they evolved.
Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29345 - Posted: 06.06.2024
By Mariana Lenharo

Crows know their numbers. An experiment has revealed that these birds can count their own calls, showcasing a numerical skill previously only seen in people.

Investigating how animals understand numbers can help scientists to explore the biological origins of humanity’s numerical abilities, says Giorgio Vallortigara, a neuroscientist at the University of Trento in Rovereto, Italy. Being able to produce a deliberate number of vocalizations on cue, as the birds in the experiment did, “is actually a very impressive achievement”, he notes.

Andreas Nieder, an animal physiologist at the University of Tübingen in Germany and a co-author of the study published 23 May in Science, says it was amazing to see how cognitively flexible these corvids are. “They have a reputation of being very smart and intelligent, and they proved this once again.”

The researchers worked with three carrion crows (Corvus corone) that had already been trained to caw on command. Over the next several months, the birds were taught to associate visual cues — a screen showing the digits 1, 2, 3 or 4 — with the number of calls they were supposed to produce. They were later also introduced to four auditory cues that were each associated with a distinct number. During the experiment, the birds stood in front of the screen and were presented with a visual or auditory cue. They were expected to produce the number of vocalizations associated with the cue and to peck at an ‘enter key’ on the touchscreen monitor when they were done. If they got it right, an automated feeder delivered bird-seed pellets and mealworms as a reward. They were correct most of the time. “Their performance was way beyond chance and highly significant,” says Nieder.

© 2024 Springer Nature Limited
Related chapters from BN: Chapter 6: Evolution of the Brain and Behavior; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29326 - Posted: 05.25.2024
By Meghan Willcoxon

In the summer of 1991, the neuroscientist Vittorio Gallese was studying how movement is represented in the brain when he noticed something odd. He and his research adviser, Giacomo Rizzolatti, at the University of Parma were tracking which neurons became active when monkeys interacted with certain objects. As the scientists had observed before, the same neurons fired when the monkeys either noticed the objects or picked them up. But then the neurons did something the researchers didn’t expect.

Before the formal start of the experiment, Gallese grasped the objects to show them to a monkey. At that moment, the activity spiked in the same neurons that had fired when the monkey grasped the objects. It was the first time anyone had observed neurons encode information for both an action and another individual performing that action.

Those neurons reminded the researchers of a mirror: Actions the monkeys observed were reflected in their brains through these peculiar motor cells. In 1992, Gallese and Rizzolatti first described the cells in the journal Experimental Brain Research and then in 1996 named them “mirror neurons” in Brain.

The researchers knew they had found something interesting, but nothing could have prepared them for how the rest of the world would respond. Within 10 years of the discovery, the idea of a mirror neuron had become the rare neuroscience concept to capture the public imagination. From 2002 to 2009, scientists across disciplines joined science popularizers in sensationalizing these cells, attributing more properties to them to explain such complex human behaviors as empathy, altruism, learning, imitation, autism, and speech. Then, nearly as quickly as mirror neurons caught on, scientific doubts about their explanatory power crept in. Within a few years, these celebrity cells were filed away in the drawer of over-promised, under-delivered discoveries.

© 2024 NautilusNext Inc.
Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 5: The Sensorimotor System
Link ID: 29316 - Posted: 05.21.2024
By Lilly Tozer

How the brain processes visual information — and its perception of time — is heavily influenced by what we’re looking at, a study has found. In the experiment, participants perceived the amount of time they had spent looking at an image differently depending on how large, cluttered or memorable the contents of the picture were. They were also more likely to remember images that they thought they had viewed for longer. The findings, published on 22 April in Nature Human Behaviour, could offer fresh insights into how people experience and keep track of time.

“For over 50 years, we’ve known that objectively longer-presented things on a screen are better remembered,” says study co-author Martin Wiener, a cognitive neuroscientist at George Mason University in Fairfax, Virginia. “This is showing for the first time, a subjectively experienced longer interval is also better remembered.”

Research has shown that humans’ perception of time is intrinsically linked to our senses. “Because we do not have a sensory organ dedicated to encoding time, all sensory organs are in fact conveying temporal information,” says Virginie van Wassenhove, a cognitive neuroscientist at the University of Paris–Saclay in Essonne, France.

Previous studies found that basic features of an image, such as its colours and contrast, can alter people’s perceptions of time spent viewing the image. In the latest study, researchers set out to investigate whether higher-level semantic features, such as memorability, can have the same effect.

© 2024 Springer Nature Limited
Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 7: Vision: From Eye to Brain
Link ID: 29269 - Posted: 04.24.2024
By Meghan Willcoxon

In the summer of 1991, the neuroscientist Vittorio Gallese was studying how movement is represented in the brain when he noticed something odd. He and his research adviser, Giacomo Rizzolatti, at the University of Parma were tracking which neurons became active when monkeys interacted with certain objects. As the scientists had observed before, the same neurons fired when the monkeys either noticed the objects or picked them up. But then the neurons did something the researchers didn’t expect.

Before the formal start of the experiment, Gallese grasped the objects to show them to a monkey. At that moment, the activity spiked in the same neurons that had fired when the monkey grasped the objects. It was the first time anyone had observed neurons encode information for both an action and another individual performing that action.

Those neurons reminded the researchers of a mirror: Actions the monkeys observed were reflected in their brains through these peculiar motor cells. In 1992, Gallese and Rizzolatti first described the cells in the journal Experimental Brain Research and then in 1996 named them “mirror neurons” in Brain.

The researchers knew they had found something interesting, but nothing could have prepared them for how the rest of the world would respond. Within 10 years of the discovery, the idea of a mirror neuron had become the rare neuroscience concept to capture the public imagination. From 2002 to 2009, scientists across disciplines joined science popularizers in sensationalizing these cells, attributing more properties to them to explain such complex human behaviors as empathy, altruism, learning, imitation, autism and speech. Then, nearly as quickly as mirror neurons caught on, scientific doubts about their explanatory power crept in. Within a few years, these celebrity cells were filed away in the drawer of over-promised, under-delivered discoveries.
Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 7: Vision: From Eye to Brain
Link ID: 29242 - Posted: 04.04.2024
By Emily Makowski

I spend my days surrounded by thousands of written words, and sometimes I feel as though there’s no escape. That may not seem particularly unusual. Plenty of people have similar feelings. But no, I’m not just talking about my job as a copy editor here at Scientific American, where I edit and fact-check an endless stream of science writing. This constant flow of text is all in my head. My brain automatically translates spoken words into written ones in my mind’s eye. I “see” subtitles that I can’t turn off whenever I talk or hear someone else talking. This same speech-to-text conversion even happens for the inner dialogue of my thoughts. This mental closed-captioning has accompanied me since late toddlerhood, almost as far back as my earliest childhood memories. And for a long time, I thought that everyone could “read” spoken words in their head the way I do.

What I experience goes by the name of ticker-tape synesthesia. It is not a medical condition—it’s just a distinctive way of perceiving the surrounding world that relatively few people share. Not much is known about the neurophysiology or psychology of this phenomenon, sometimes called “ticker taping,” even though a reference to it first appeared in the scientific literature in the late 19th century.

Ticker taping is considered a form of synesthesia, an experience in which the brain reroutes one kind of incoming sensory information so that it is processed as another. For example, sounds might be perceived as touch, allowing the affected person to “feel” them as tactile sensations. As synesthesia goes, ticker taping is relatively uncommon. “There are varieties of synesthesia which really have just been completely under the radar..., and ticker tape is really one of those,” says Mark Price, a cognitive psychologist at the University of Bergen in Norway.

The name “ticker-tape synesthesia” itself evokes the concept’s late 19th-century origins. At that time stock prices transmitted by telegraph were printed on long paper strips, which would be torn into tiny bits and thrown from building windows during parades.

© 2024 SCIENTIFIC AMERICAN.
Related chapters from BN: Chapter 8: General Principles of Sensory Processing, Touch, and Pain; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 15: Language and Lateralization
Link ID: 29238 - Posted: 04.04.2024
By Marta Zaraska

The renowned Polish piano duo Marek and Wacek didn’t use sheet music when playing live concerts. And yet onstage the pair appeared perfectly in sync. On adjacent pianos, they playfully picked up various musical themes, blended classical music with jazz and improvised in real time. “We went with the flow,” said Marek Tomaszewski, who performed with Wacek Kisielewski until Wacek’s death in 1986. “It was pure fun.”

The pianists seemed to read each other’s minds by exchanging looks. It was, Marek said, as if they were on the same wavelength. A growing body of research suggests that might have been literally true.

Dozens of recent experiments studying the brain activity of people performing and working together — duetting pianists, card players, teachers and students, jigsaw puzzlers and others — show that their brain waves can align in a phenomenon known as interpersonal neural synchronization, also known as interbrain synchrony. “There’s now a lot of research that shows that people interacting together display coordinated neural activities,” said Giacomo Novembre, a cognitive neuroscientist at the Italian Institute of Technology in Rome, who published a key paper on interpersonal neural synchronization last summer. The studies have come out at an increasing clip over the past few years — one as recently as last week — as new tools and improved techniques have honed the science and theory.

They’re finding that synchrony between brains has benefits. It’s linked to better problem-solving, learning and cooperation, and even with behaviors that help others at a personal cost. What’s more, recent studies in which brains were stimulated with an electric current hint that synchrony itself might cause the improved performance observed by scientists.

© 2024 the Simons Foundation.
Related chapters from BN: Chapter 17: Learning and Memory; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 5: The Sensorimotor System
Link ID: 29229 - Posted: 03.30.2024
By Ingrid Wickelgren

You see a woman on the street who looks familiar—but you can’t remember how you know her. Your brain cannot attach any previous experiences to this person. Hours later, you suddenly recall the party at a friend’s house where you met her, and you realize who she is.

In a new study in mice, researchers have discovered the place in the brain that is responsible for both types of familiarity—vague recognition and complete recollection. Both, moreover, are represented by two distinct neural codes. The findings, which appeared on February 20 in Neuron, showcase the use of advanced computer algorithms to understand how the brain encodes concepts such as social novelty and individual identity, says study co-author Steven Siegelbaum, a neuroscientist at the Mortimer B. Zuckerman Mind Brain Behavior Institute at Columbia University.

The brain’s signature for strangers turns out to be simpler than the one used for old friends—which makes sense, Siegelbaum says, given the vastly different memory requirements for the two relationships. “Where you were, what you were doing, when you were doing it, who else [was there]—the memory of a familiar individual is a much richer memory,” Siegelbaum says. “If you’re meeting a stranger, there’s nothing to recollect.”

The action occurs in a small sliver of a brain region called the hippocampus, known for its importance in forming memories. The sliver in question, known as CA2, seems to specialize in a certain kind of memory used to recall relationships. “[The new work] really emphasizes the importance of this brain area to social processing,” at least in mice, says Serena Dudek, a neuroscientist at the National Institute of Environmental Health Sciences, who was not involved in the study.

© 2024 SCIENTIFIC AMERICAN.
Related chapters from BN: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 14: Attention and Higher Cognition
Link ID: 29222 - Posted: 03.28.2024
By Robert D. Hershey Jr.

Daniel Kahneman, who never took an economics course but who pioneered a psychologically based branch of that field that led to a Nobel in economic science in 2002, died on Wednesday. He was 90. His death was confirmed by his partner, Barbara Tversky. She declined to say where he died.

Professor Kahneman, who was long associated with Princeton University and lived in Manhattan, employed his training as a psychologist to advance what came to be called behavioral economics. The work, done largely in the 1970s, led to a rethinking of issues as far-flung as medical malpractice, international political negotiations and the evaluation of baseball talent, all of which he analyzed, mostly in collaboration with Amos Tversky, a Stanford cognitive psychologist who did groundbreaking work on human judgment and decision-making. (Ms. Tversky, also a professor of psychology at Stanford, had been married to Professor Tversky, who died in 1996. She and Professor Kahneman became partners several years ago.)

As opposed to traditional economics, which assumes that human beings generally act in fully rational ways and that any exceptions tend to disappear as the stakes are raised, the behavioral school is based on exposing hard-wired mental biases that can warp judgment, often with counterintuitive results.

“His central message could not be more important,” the Harvard psychologist and author Steven Pinker told The Guardian in 2014, “namely, that human reason left to its own devices is apt to engage in a number of fallacies and systematic errors, so if we want to make better decisions in our personal lives and as a society, we ought to be aware of these biases and seek workarounds. That’s a powerful and important discovery.”

© 2024 The New York Times Company
Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29218 - Posted: 03.28.2024
By Anna Gibbs

Imagine a person’s face. Now imagine that whenever you looked at that face, there was a chance it would appear distorted. That’s what life is like for a person with prosopometamorphopsia, or PMO. Now, thanks to a new study, you can see through the eyes of someone with this rare condition.

Relying on feedback from a 58-year-old man who has had PMO for nearly three years, researchers at Dartmouth College altered photos of faces to mimic the “demonic” distortions he experienced. This is believed to be the first time that images have been created to so closely replicate what a patient with the condition is seeing, psychologist Antônio Mello and colleagues report in the March 23 Lancet.

“We hope this has a big impact in the way people think about PMO, especially for them to be able to understand how severe PMO can be,” Mello says. For instance, he says, this particular patient didn’t like to go to the store because fellow shoppers looked like “an army of demons.”

PMO is poorly understood, with fewer than 100 cases cited since 1904. Patients report a wide variety of facial distortions. While the patient in this study sees extremely stretched features with deep grooves on the face, others may see distortions that cause features to move position or change size. Because of that, this visualization is patient-specific and wouldn’t apply for everyone with PMO, says Jason Barton, a neurologist at the University of British Columbia in Vancouver who has worked with the researchers before but was not involved in this study. Still, “I think it’s helpful for people to understand the kinds of distortions people can see.”

© Society for Science & the Public 2000–2024.
Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29211 - Posted: 03.23.2024
By Meghan Rosen

Leakiness in the brain could explain the memory and concentration problems linked to long COVID. In patients with brain fog, MRI scans revealed signs of damaged blood vessels in their brains, researchers reported February 22 in Nature Neuroscience. In these people, dye injected into the bloodstream leaked into their brains and pooled in regions that play roles in language, memory, mood and vision.

It’s the first time anyone’s shown that long COVID patients can have leaky blood-brain barriers, says study coauthor Matthew Campbell, a geneticist at Trinity College Dublin in Ireland. That barrier, tightly knit cells lining blood vessels, typically keeps riffraff out of the brain, like bouncers guarding a nightclub. If the barrier breaks down, bloodborne viruses, cells and other interlopers can sneak into the brain’s tissues and wreak havoc, says Avindra Nath, a neurologist at the National Institutes of Health in Bethesda, Md.

It’s too early to say definitively whether that’s happening in people with long COVID, but the new study provides evidence that “brain fog has a biological basis,” says Nath, who wasn’t involved with the work. That alone is important for patients, he says, because their symptoms may be otherwise discounted by physicians.

For some people, brain fog can feel like a slowdown in thinking or difficulty recalling short-term memories, Campbell says. For example, “patients will go for a drive, and forget where they’re driving to.” That might sound trivial, he says, but it actually pushes people into panic mode.

© Society for Science & the Public 2000–2024.
Related chapters from BN: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 14: Attention and Higher Cognition
Link ID: 29192 - Posted: 03.16.2024
By Pam Belluck

Long Covid may lead to measurable cognitive decline, especially in the ability to remember, reason and plan, a large new study suggests. Cognitive testing of nearly 113,000 people in England found that those with persistent post-Covid symptoms scored the equivalent of 6 I.Q. points lower than people who had never been infected with the coronavirus, according to the study, published Wednesday in The New England Journal of Medicine. People who had been infected and no longer had symptoms also scored slightly lower than people who had never been infected, by the equivalent of 3 I.Q. points, even if they were ill for only a short time.

The differences in cognitive scores were relatively small, and neurological experts cautioned that the results did not imply that being infected with the coronavirus or developing long Covid caused profound deficits in thinking and function. But the experts said the findings are important because they provide numerical evidence for the brain fog, focus and memory problems that afflict many people with long Covid.

“These emerging and coalescing findings are generally highlighting that yes, there is cognitive impairment in long Covid survivors — it’s a real phenomenon,” said James C. Jackson, a neuropsychologist at Vanderbilt Medical Center, who was not involved in the study. He and other experts noted that the results were consistent with smaller studies that have found signals of cognitive impairment.

The new study also found reasons for optimism, suggesting that if people’s long Covid symptoms ease, the related cognitive impairment might, too: People who had experienced long Covid symptoms for months and eventually recovered had cognitive scores similar to those who had experienced a quick recovery, the study found.

© 2024 The New York Times Company
Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 13: Memory and Learning
Link ID: 29171 - Posted: 02.29.2024