Links for Keyword: Attention

Links 101–120 of 726

As we get older, we become more easily distracted, but it isn't always a disadvantage, according to researchers. Tarek Amer, a psychology postdoctoral research fellow at Columbia University, says that although our ability to focus our attention on specific things worsens as we get older, our ability to take in broad swaths of information remains strong. So in general, older adults are able to retain information that a more focused person could not. For the last few years, Amer's research has focused mainly on cognitive control, a loose term that describes a person's ability to focus attention. His work at the University of Toronto, where he received his PhD in 2018, looked specifically at older adults aged 60 to 80. Amer joined Spark host Nora Young to discuss his research and how it could be applied in practical ways.
What happens to our ability to concentrate as we get older?
There's a lot of research that shows this ability tends to decline or is reduced with age. So essentially, what we see is that relative to younger adults, older adults have a harder time focusing on one thing while ignoring distractions. This distraction can come from the external world. It can also be internally based, such as our own thoughts, which are usually not related to the task at hand. With respect to mind wandering specifically, the literature is ... mixed. [The] typical finding is that older adults tend to, at least in lab-based tasks, mind wander less.
So I know that you've been looking, in your own research, at concentration and memory formation. What exactly are you studying?
One of the things I was interested in is whether this [decline in the ability to concentrate] could be associated with any benefits in old age. For example, one thing that we showed is that when older and younger adults perform a task that includes both task-relevant as well as task-irrelevant information, older adults are actually processing both types of information. So if we give them a memory task at the end that actually is testing memory for the irrelevant information … we see that older adults actually outperform younger adults. © 2020 CBC/Radio-Canada.

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 4: Development of the Brain
Link ID: 27116 - Posted: 03.14.2020

Dori Grijseels In 2016, three neuroscientists wrote a commentary article arguing that, to truly understand the brain, neuroscience needed to change. From that paper, the International Brain Laboratory (IBL) was born. The IBL, now a collaboration among 22 labs across the world, is unique in biology: it is modeled on physics collaborations, like the ATLAS experiment at CERN, where thousands of scientists work together on a common problem, sharing data and resources during the process. This was in response to the main criticism that the paper’s authors, Zachary Mainen, Michael Häusser and Alexandre Pouget, had about existing neuroscience collaborations: labs came together to discuss generalities, but all the experiments were done separately. They wanted to create a collaboration in which scientists worked together throughout the process, even though their labs are distributed all over the globe. The IBL decided to focus on one brain function only: decision-making. Decision-making engages the whole brain, since it requires using both input from the senses and information about previous experiences. If someone is thinking about bringing a sweater when they go out, they will use their senses to determine whether it looks and feels cold outside, but they might also remember that, yesterday, they were cold without a sweater. For its first experiment, published as a preprint, seven of the 22 collaborating IBL labs tested 101 mice on their decision-making ability. The mice saw a black and white grating either to their right or to their left. They then had to twist a little Lego wheel to move the grating to the middle. Rewarded with sugary water whenever they did the task correctly, the mice gradually learned. It is easy for them to decide which way to twist the wheel if the grating has a high contrast, because it stands out against the background of their visual field. However, the mice were also presented with more ambiguously patterned gratings not easily distinguishable from the background, making the decision of which way to turn the wheel more difficult. In some cases, the grating was even indistinguishable from the background. Across all seven labs – which were spread over three countries – the mice completed this task three million times. © 2017 – 2019 Massive Science Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 13: Memory and Learning
Link ID: 27102 - Posted: 03.07.2020

By Liz Langley It might be time to reconsider what it means to call someone a “rat.” Previous research has shown the much-maligned rodents assist comrades in need, as well as remember individual rats that have helped them—and return the favor. Now, a new study builds on this evidence of empathy, revealing that domestic rats will avoid harming other rats. In the study, published March 5 in the journal Current Biology, rats were trained to pull levers to get a tasty sugar pellet. If the lever delivered a mild shock to a neighbor, several of the rats stopped pulling that lever and switched to another. Harm aversion, as it's known, is a well-documented human trait regulated by a part of the brain called the anterior cingulate cortex (ACC). Further experiments showed the ACC controls this behavior in rats, too. This is the first time scientists have found the ACC is necessary for harm aversion in a non-human species. This likeness between human and rat brains is “super-exciting for two reasons,” says study co-author Christian Keysers, of the Netherlands Institute for Neuroscience. For one, it suggests that preventing harm to others is rooted deep in mammals' evolutionary history. What’s more, the finding could have a real impact on people suffering from psychiatric disorders such as psychopathy and sociopathy, whose anterior cingulate cortices are impaired. “We currently have no effective drugs to reduce violence in antisocial populations,” Keysers says, and figuring out how to increase such patients’ aversion to hurting others could be a powerful tool.

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 11: Emotions, Aggression, and Stress
Link ID: 27101 - Posted: 03.07.2020

Jordana Cepelewicz Decisions, decisions. All of us are constantly faced with conscious and unconscious choices. Not just about what to wear, what to eat or how to spend a weekend, but about which hand to use when picking up a pencil, or whether to shift our weight in a chair. To make even trivial decisions, our brains sift through a pile of “what ifs” and weigh the hypotheticals. Even for choices that seem automatic — jumping out of the way of a speeding car, for instance — the brain can very quickly extrapolate from past experiences to make predictions and guide behavior. In a paper published last month in Cell, a team of researchers in California peered into the brains of rats on the cusp of making a decision and watched their neurons rapidly play out the competing choices available to them. The mechanism they described might underlie not just decision-making, but also animals’ ability to envision more abstract possibilities — something akin to imagination. The group, led by the neuroscientist Loren Frank of the University of California, San Francisco, investigated the activity of cells in the hippocampus, the seahorse-shaped brain region known to play crucial roles both in navigation and in the storage and retrieval of memories. They gave extra attention to neurons called place cells, nicknamed “the brain’s GPS” because they mentally map an animal’s location as it moves through space. Place cells have been shown to fire very rapidly in particular sequences as an animal moves through its environment. The activity corresponds to a sweep in position from just behind the animal to just ahead of it. (Studies have demonstrated that these forward sweeps also contain information about the locations of goals or rewards.) These patterns of neural activity, called theta cycles, repeat roughly eight times per second in rats and represent a constantly updated virtual trajectory for the animals. All Rights Reserved © 2020

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 27070 - Posted: 02.25.2020

By Sue Halpern During the 2016 Presidential primary, SPARK Neuro, a company that uses brain waves and other physiological signals to delve into the subliminal mind, decided to assess people’s reactions to the Democratic candidates. The company had not yet launched, but its C.E.O., Spencer Gerrol, was eager to refine its technology. In a test designed to uncover how people are actually feeling, as opposed to how they say they are feeling, SPARK Neuro observed, among other things, that the cadence of Bernie Sanders’s voice grabbed people’s attention, while Hillary Clinton’s measured tones were a bore. A few months later, Katz Media Group, a radio-and-television-ad representative firm, hired Gerrol’s group to study a cohort of undecided voters in Florida and Pennsylvania. The company’s chief marketing officer, Stacey Schulman, picked SPARK Neuro because its algorithm took into account an array of neurological and physiological signals. “Subconscious emotion underlies conscious decision-making, which is interesting for the marketing world but critically important in the political realm,” Schulman told me. “This measures how the body is responding, and it happens before you articulate it.” Neuromarketing—gauging consumers’ feelings and beliefs by observing and measuring spontaneous, unmediated physiological responses to an ad or a sales pitch—is not new. “For a while, using neuroscience to do marketing was something of a fad, but it has been applied to commerce for a good ten years now,” Schulman said. Nielsen, the storied media-insight company, has a neuromarketing division. Google has been promoting what it calls “emotion analytics” to advertisers. A company called Realeyes claims to have trained artificial intelligence to “read emotions” through Webcams; another called Affectiva says that it “provides deep insight into unfiltered and unbiased consumer emotional response to brand content” through what it calls “facial coding.” Similarly, ZimGo Polling, a South Korean company that operates in the United States, has paired facial-recognition technology with “automated emotion understanding” and natural language processing to give “insights into how people feel about real-time issues,” and “thereby enables a virtual 24/7 town hall meeting with citizens.” This is crucial, according to the C.E.O. of ZimGo’s parent company, because “people vote on emotion.” © 2020 Condé Nast

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 11: Emotions, Aggression, and Stress
Link ID: 27017 - Posted: 02.04.2020

Roger E. Beaty, Ph.D. When we think about creativity, the arts often come to mind. Most people would agree that writers, painters, and actors are all creative. This is what psychologists who study the subject refer to as Big-C creativity: publicly recognizable, professional-level performance. But what about creativity on a smaller scale? This is what researchers refer to as little-c creativity, and it is something that we all possess and express in our daily lives, from inventing new recipes to performing a do-it-yourself project to thinking of clever jokes to entertain the kids. One way psychologists measure creative thinking is by asking people to think of uncommon uses for common objects, such as a cup or a cardboard box. Their responses can be analyzed on different dimensions, such as fluency (the total number of ideas) and originality. Surprisingly, many people struggle with this seemingly simple task, only suggesting uses that closely resemble the typical uses for the object. The same happens in other tests that demand ideas that go beyond what we already know (i.e., “thinking outside the box”). Such innovation tasks assess just one aspect of creativity. Many new tests are being developed that tap into other creative skills, from visuospatial abilities essential for design (like drawing) to scientific abilities important for innovation and discovery. But where do creative ideas come from, and what makes some people more creative than others? Contrary to romantic notions of a purely spontaneous process, increasing evidence from psychology and neuroscience experiments indicates that creativity requires cognitive effort—in part, to overcome the distraction and “stickiness” of prior knowledge (remember how people think of common uses when asked to devise creative ones). In light of these findings, we can consider general creative thinking as a dynamic interplay between the brain’s memory and control systems. Without memory, our minds would be a blank slate—not conducive to creativity, which requires knowledge and expertise. But without mental control, we wouldn’t be able to push thinking in new directions and avoid getting stuck on what we already know. © 2020 The Dana Foundation
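The fluency and originality scoring described above is easy to make concrete. Below is a minimal sketch with hypothetical responses and a simplified scoring rule (not the authors' actual instrument): fluency is the count of a participant's ideas, and originality is approximated by how rare each idea is across the whole sample, a common convention in alternate-uses research.

```python
from collections import Counter

# Hypothetical "uncommon uses for a brick" responses, one list per participant.
responses = {
    "p1": ["doorstop", "paperweight", "bookend"],
    "p2": ["doorstop", "sculpture material", "bug squasher", "bed warmer"],
    "p3": ["doorstop", "paperweight"],
}

# How often each idea appears across the whole sample.
idea_counts = Counter(idea for ideas in responses.values() for idea in ideas)
n = len(responses)

for pid, ideas in responses.items():
    fluency = len(ideas)  # total number of ideas
    # Originality: mean rarity of the participant's ideas, where an idea
    # given by everyone scores 0 and a unique idea scores close to 1.
    originality = sum(1 - idea_counts[i] / n for i in ideas) / fluency
    print(pid, "fluency:", fluency, "originality:", round(originality, 2))
```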

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26979 - Posted: 01.22.2020

Matthew Schafer and Daniela Schiller How do animals, from rats to humans, intuit shortcuts when moving from one place to another? Scientists have discovered mental maps in the brain that help animals picture the best routes from an internalized model of their environments. Physical space is not all that is tracked by the brain's mapmaking capacities. Cognitive models of the environment may be vital to mental processes, including memory, imagination, making inferences and engaging in abstract reasoning. Most intriguing is the emerging evidence that maps may be involved in tracking the dynamics of social relationships: how distant or close individuals are to one another and where they reside within group hierarchies. We are often told that there are no shortcuts in life. But the brain—even the brain of a rat—is wired in a way that completely ignores this kind of advice. The organ, in fact, epitomizes a shortcut-finding machine. The first indication that the brain has a knack for finding alternative routes was described in 1948 by Edward Tolman of the University of California, Berkeley. Tolman performed a curious experiment in which a hungry rat ran across an unpainted circular table into a dark, narrow corridor. The rat turned left, then right, and then took another right and scurried to the far end of a well-lit narrow strip, where, finally, a cup of food awaited. There were no choices to be made. The rat had to follow the one available winding path, and so it did, time and time again, for four days. On the fifth day, as the rat once again ran straight across the table into the corridor, it hit a wall—the path was blocked. The animal went back to the table and started looking for alternatives. Overnight, the circular table had turned into a sunburst arena. Instead of one track, there were now 18 radial paths to explore, all branching off from the sides of the table. After venturing out a few inches on a few different paths, the rat finally chose to run all the way down path number six, the one leading directly to the food. © 2020 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26961 - Posted: 01.15.2020

By Sarah Bate Alice is six years old. She struggles to make friends at school and often sits alone in the playground. She loses her parents in the supermarket and approaches strangers at pickup. Once she became separated from her family on a trip to the zoo, and she now has an intense fear of crowded places. Alice has a condition called face blindness, also known as prosopagnosia. This difficulty in recognising facial identity affects 2 percent of the population. Like Alice, most of these people are born with the condition, although a small number acquire face-recognition difficulties after brain injury or illness. Unfortunately, face blindness seems largely resistant to improvement. Yet a very recent study offers more promising findings: children’s face-recognition skills substantially improved after they played a modified version of the game Guess Who? over a two-week period. In the traditional version of Guess Who?, two players see an array of 24 cartoon faces, and each selects a target. Both then take turns asking yes/no questions about the appearance of their opponent’s chosen face, typically inquiring about eye color, hairstyle and accessories such as hats or spectacles. The players use the answers to eliminate faces in the array; when only one remains, they can guess the identity of their opponent’s character. The experimental version of the game preserved this basic setup but used lifelike faces that differed only in the size or spacing of the eyes, nose or mouth. That is, the hairstyle and outer face shape were identical, and children had to read the faces solely on the basis of small differences between the inner features. This manipulation is thought to reflect a key processing strategy that underlies human face recognition: the ability to account not only for the size and shape of features but also the spacing between them. Evidence suggests this ability to process faces “holistically” is impaired in face blindness. The Guess Who? training program aimed to capitalize on this link. Children progressed through 10 levels of the game, with differences between the inner features becoming progressively less obvious. Children played for half an hour per day on any 10 days over a two-week period, advancing to the next level when they won the game on two consecutive rounds. © 2019 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26921 - Posted: 12.27.2019

By Gretchen Reynolds Top athletes’ brains are not as noisy as yours and mine, according to a fascinating new study of elite competitors and how they process sound. The study finds that the brains of fit, young athletes dial down extraneous noise and attend to important sounds better than those of other young people, suggesting that playing sports may change brains in ways that alter how well people sense and respond to the world around them. For most of us with normal hearing, of course, listening to and processing sounds are such automatic mental activities that we take them for granted. But “making sense of sound is actually one of the most complex jobs we ask of our brains,” says Nina Kraus, a professor and director of the Auditory Neuroscience Laboratory at Northwestern University in Evanston, Ill., who oversaw the new study. Sound processing also can be a reflection of broader brain health, she says, since it involves so many interconnected areas of the brain that must coordinate to decide whether any given sound is familiar, what it means, if the body should respond and how a particular sound fits into the broader orchestration of other noises that constantly bombard us. For some time, Dr. Kraus and her collaborators have been studying whether some people’s brains perform this intricate task more effectively than others. By attaching electrodes to people’s scalps and then playing a simple sound, usually the spoken syllable “da,” at irregular intervals, they have measured and graphed electrical brain wave activity in people’s sound-processing centers. © 2019 The New York Times Company
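Measurements like these rest on a standard trick: averaging many stimulus-locked recordings makes the repeatable evoked response stand out, and what is left over trial by trial can be treated as the background neural noise the study says athletes suppress. Here is a minimal sketch of that logic with synthetic data; the waveform, trial count and noise level are invented for illustration and are not Kraus's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stimulus-locked response to the syllable "da": 100 trials,
# each a fixed evoked waveform plus trial-specific background noise.
t = np.linspace(0, 0.2, 400)                  # 200 ms epoch
evoked = np.sin(2 * np.pi * 100 * t) * np.exp(-t / 0.05)
noise_level = 0.8                             # smaller in a "quieter" brain
trials = evoked + rng.normal(0, noise_level, (100, t.size))

average = trials.mean(axis=0)                 # averaging suppresses the noise
noise = trials - average                      # residual = ongoing background activity
snr = average.var() / noise.var()
print("signal-to-noise ratio of the averaged response:", round(snr, 3))
```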

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 5: The Sensorimotor System
Link ID: 26901 - Posted: 12.18.2019

By Virginia Morell Dogs may not be able to count to 10, but even the untrained ones have a rough sense of how many treats you put in their food bowl. That’s the finding of a new study, which reveals that our canine pals innately understand quantities in much the same way we do. The study is “compelling and exciting,” says Michael Beran, a psychologist at Georgia State University in Atlanta who was not involved in the research. “It further increases our confidence that [these representations of quantity in the brain] are ancient and widespread among species.” The ability to rapidly estimate the number of sheep in a flock or ripened fruits on a tree is known as the “approximate number system.” Previous studies have suggested monkeys, fish, bees, and dogs have this talent. But much of this research has used trained animals that receive multiple tests and rewards. That leaves open the question of whether the ability is innate in these species, as it is in humans. In the new study, Gregory Berns, a neuroscientist at Emory University in Atlanta, and colleagues recruited 11 dogs from various breeds, including border collies, pitbull mixes, and Labrador golden retriever mixes, to see whether they could find brain activity associated with a sensitivity to numbers. The team, which pioneered canine brain scanning (by getting dogs to voluntarily enter a functional magnetic resonance imaging scanner and remain motionless), had their subjects enter the scanner, rest their heads on a block, and fix their eyes on a screen at the opposite end. On the screen was an array of light gray dots on a black background whose number changed every 300 milliseconds. If dogs, like humans and nonhuman primates, have a dedicated brain region for representing quantities, their brains should show more activity there when the number of dots was dissimilar (three small dots versus 10 large ones) than when it was constant (four small dots versus four large dots). © 2019 American Association for the Advancement of Science.

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26900 - Posted: 12.18.2019

By Zeynep Tufekci More than a billion people around the world have smartphones, almost all of which come with some kind of navigation app such as Google or Apple Maps or Waze. This raises the age-old question we encounter with any technology: What skills are we losing? But also, crucially: What capabilities are we gaining? Talking with people who are good at finding their way around or adept at using paper maps, I often hear a lot of frustration with digital maps. North/south orientation gets messed up, and you can see only a small section at a time. And unlike with paper maps, one loses a lot of detail after zooming out. I can see all that and sympathize that it may be quite frustrating for the already skilled to be confined to a small phone screen. (Although map apps aren’t really meant to be replacements for paper maps, which appeal to our eyes, but are actually designed to be heard: “Turn left in 200 feet. Your destination will be on the right.”) But consider what digital navigation aids have meant for someone like me. Despite being a frequent traveler, I’m so terrible at finding my way that I still use Google Maps almost every day in the small town where I have lived for many years. What looks like an inferior product to some has been a significant expansion of my own capabilities. I’d even call it life-changing. Part of the problem is that reading paper maps requires a specific skill set. There is nothing natural about them. In many developed nations, including the U.S., one expects street names and house numbers to be meaningful referents, and instructions such as “go north for three blocks and then west” make sense to those familiar with these conventions. In Istanbul, in contrast, where I grew up, none of those hold true. For one thing, the locals rarely use street names. Why bother when a government or a military coup might change them—again. House and apartment numbers often aren’t sequential either because after buildings 1, 2 and 3 were built, someone squeezed in another house between 1 and 2, and now that’s 4. But then 5 will maybe get built after 3, and 6 will be between 2 and 3. Good luck with 1, 4, 2, 6, 5, and so on, sometimes into the hundreds, in jumbled order. Besides, the city is full of winding, ancient alleys that intersect with newer avenues at many angles. © 2019 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 13: Memory and Learning
Link ID: 26768 - Posted: 10.30.2019

Ian Sample Science editor Warning: this story is about death. You might want to click away now. That’s because, researchers say, our brains do their best to keep us from dwelling on our inevitable demise. A study found that the brain shields us from existential fear by categorising death as an unfortunate event that only befalls other people. “The brain does not accept that death is related to us,” said Yair Dor-Ziderman, at Bar Ilan University in Israel. “We have this primal mechanism that means when the brain gets information that links self to death, something tells us it’s not reliable, so we shouldn’t believe it.” Being shielded from thoughts of our future death could be crucial for us to live in the present. The protection may switch on in early life as our minds develop and we realise death comes to us all. “The moment you have this ability to look into your own future, you realise that at some point you’re going to die and there’s nothing you can do about it,” said Dor-Ziderman. “That goes against the grain of our whole biology, which is helping us to stay alive.” To investigate how the brain handles thoughts of death, Dor-Ziderman and colleagues developed a test that involved producing signals of surprise in the brain. They asked volunteers to watch faces flash up on a screen while their brain activity was monitored. The person’s own face or that of a stranger flashed up on screen several times, followed by a different face. On seeing the final face, the brain flickered with surprise because the image clashed with what it had predicted. © 2019 Guardian News & Media Limited

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 11: Emotions, Aggression, and Stress
Link ID: 26721 - Posted: 10.19.2019

Jon Hamilton Too much physical exertion appears to make the brain tired. That's the conclusion of a study of triathletes published Thursday in the journal Current Biology. Researchers found that after several weeks of overtraining, athletes became more likely to choose immediate gratification over long-term rewards. At the same time, brain scans showed the athletes had decreased activity in an area of the brain involved in decision-making. The finding could explain why some elite athletes see their performance decline when they work out too much — a phenomenon known as overtraining syndrome. The distance runner Alberto Salazar, for example, experienced a mysterious decline after winning the New York Marathon three times and the Boston Marathon once in the early 1980s. Salazar's times fell off even though he was still in his mid-20s and training more than ever. "Probably [it was] something linked to his brain and his cognitive capacities," says Bastien Blain, an author of the study and a postdoctoral fellow at University College London. (Salazar didn't respond to an interview request for this story.) Blain was part of a team that studied 37 male triathletes who volunteered to take part in a special training program. "They were strongly motivated to be part of this program, at least at the beginning," Blain says. Half of the triathletes were instructed to continue their usual workouts. The rest were told to increase their weekly training by 40%. The result was a training program so intense that these athletes began to perform worse on tests of maximal output. After three weeks, all the participants were put in a brain scanner and asked a series of questions designed to reveal whether a person is more inclined to choose immediate gratification or a long-term reward. "For example, we ask, 'Do you prefer $10 now or $60 in six months,' " Blain says. © 2019 npr
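Questions like "$10 now or $60 in six months" are typically summarized by fitting a discounting curve to many such choices. The sketch below uses hyperbolic discounting, V = A / (1 + kD), the textbook model of delay discounting, in which a larger k means steeper devaluation of delayed rewards; the study's exact analysis may differ.

```python
def discounted_value(amount, delay_months, k):
    """Hyperbolic discounting: subjective value of a delayed reward."""
    return amount / (1 + k * delay_months)

def choose(now, later, delay_months, k):
    """Pick whichever option has the higher subjective value."""
    return "now" if now >= discounted_value(later, delay_months, k) else "later"

# A patient decision-maker (small k) waits for the larger reward;
# a steep discounter (large k) takes the immediate one.
for k in (0.1, 2.0):
    print(f"k={k}: $10 now vs $60 in six months ->", choose(10, 60, 6, k))
```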

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 5: The Sensorimotor System
Link ID: 26656 - Posted: 09.28.2019

Salvatore Domenic Morgera The human brain sends hundreds of billions of neural signals each second. It’s an extraordinarily complex feat. A healthy brain must establish an enormous number of correct connections and ensure that they remain accurate for the entire period of the information transfer – that can take seconds, which in “brain time” is pretty long. How does each signal get to its intended destination? The challenge for your brain is similar to what you’re faced with when trying to engage in conversation at a noisy cocktail party. You’re able to focus on the person you’re talking to and “mute” the other discussions. This phenomenon is selective hearing – what’s called the cocktail party effect. When everyone at a large, crowded party talks at roughly the same loudness, the average sound level of the person you’re speaking with is about equal to the average level of all the other partygoers’ chatter combined. If it were a satellite TV system, this roughly equal balance of desired signal and background noise would result in poor reception. Nevertheless, this balance is good enough to let you understand conversation at a bustling party. How does the human brain do it, distinguishing among billions of ongoing “conversations” within itself and locking on to a specific signal for delivery? My team’s research into the neurological networks of the brain shows there are two activities that support its ability to establish reliable connections in the presence of significant biological background noise. Although the brain’s mechanisms are quite complex, these two activities act as what an electrical engineer calls a matched filter – a processing element used in high-performance radio systems, and now known to exist in nature. © 2010–2019, The Conversation US, Inc.
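For readers unfamiliar with the engineering term: a matched filter detects a known waveform buried in noise by correlating the incoming signal against a template of that waveform, the optimal linear detector for this job. Below is a minimal numerical sketch with a synthetic signal and template, purely illustrative of the radio-engineering idea rather than of any neural mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known template ("the voice you are listening for"): a short chirp.
t = np.linspace(0, 1, 200)
template = np.sin(2 * np.pi * (5 + 10 * t) * t)

# Background whose average power matches the template's, mimicking the
# party where the speaker and the chatter are about equally loud.
signal = rng.normal(0, template.std(), 2000)
onset = 1200
signal[onset:onset + template.size] += template  # bury the template in noise

# Matched filtering: correlate the received signal with the template
# (equivalently, convolve with a time-reversed copy of it).
detection = np.correlate(signal, template, mode="valid")
print("true onset:", onset, "estimated onset:", int(np.argmax(detection)))
```

Even though the template is no louder than the noise at any single sample, correlating over its full length concentrates the evidence, so the peak of the detector output lands at the true onset.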

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26604 - Posted: 09.12.2019

Bruce Bower Monkeys can keep strings of information in order by using a simple kind of logical thought. Rhesus macaque monkeys learned the order of items in a list with repeated exposure to pairs of items plucked from the list, say psychologist Greg Jensen of Columbia University and colleagues. The animals drew basic logical conclusions about pairs of listed items, akin to assuming that if A comes before B and B comes before C, then A comes before C, the scientists conclude July 30 in Science Advances. Importantly, rewards given to monkeys didn’t provide reliable guidance to the animals about whether they had correctly ordered pairs of items. Monkeys instead worked out the approximate order of images in the list, and used that knowledge to make choices in experiments about which of two images from the list followed the other, Jensen’s group says. Previous studies have suggested that a variety of animals, including monkeys, apes, pigeons, rats and crows, can discern the order of a list of items (SN: 7/5/08, p. 13). But debate persists about whether nonhuman creatures do so only with the prodding of rewards for correct responses or, at least sometimes, by consulting internal knowledge acquired about particular lists. Jensen’s group designed experimental sessions in which four monkeys completed as many as 600 trials to determine the order of seven images in a list. Images included a hot air balloon, an ear of corn and a zebra. Monkeys couldn’t rely on rewards to guide their choices. In some sessions, animals usually received a larger reward for correctly identifying which of two images came later in the list and a smaller reward for an incorrect response. In other sessions, incorrect responses usually yielded a larger reward than correct responses. Rewards consisted of larger or smaller gulps of water delivered through tubes to the moderately thirsty primates. © Society for Science & the Public 2000 - 2019
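The inference pattern described here, chaining "A before B" and "B before C" into "A before C", is transitive inference, and its reach is easy to verify mechanically. Below is a small sketch under simplified assumptions: a hypothetical seven-item list and training on adjacent pairs only, as in classic transitive-inference designs.

```python
from itertools import combinations

items = list("ABCDEFG")               # hypothetical 7-item list
trained = set(zip(items, items[1:]))  # experienced pairs: (A,B), (B,C), ...

# Transitive closure: keep chaining "a before b" and "b before c"
# into "a before c" until nothing new can be inferred.
known = set(trained)
grew = True
while grew:
    inferred = {(a, d) for (a, b) in known for (c, d) in known if b == c}
    grew = not inferred <= known
    known |= inferred

# Every untrained (novel) pairing is now correctly ordered by inference alone.
novel = [p for p in combinations(items, 2) if p not in trained]
print(all(p in known for p in novel))  # True
```

Six trained adjacencies suffice to order all 15 novel pairings, which is why performance on untested pairs is taken as evidence of internal knowledge rather than reward tracking.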

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26475 - Posted: 08.01.2019

By Jocelyn Kaiser U.S. scientists who challenged a new rule that would require them to register their basic studies of the human brain and behavior in a federal database of clinical trials have won another reprieve. The National Institutes of Health (NIH) in Bethesda, Maryland, says it now understands why some of that kind of research won’t easily fit the format of ClinicalTrials.gov, and the agency has delayed the reporting requirements for another 2 years. The controversy dates back to 2017, when behavioral and cognitive researchers realized that new requirements for registering and reporting results from NIH-funded clinical studies would cover even basic studies of human subjects, experiments that did not test drugs or other potential treatments. The scientists protested that including such studies would confuse the public and create burdensome, unnecessary paperwork. A year ago, NIH announced it would delay the requirement until September and seek further input. The responses prompted NIH staff to examine published papers from scientists conducting basic research. They agreed it would be hard to fit some of these studies into the rigid informational format used by ClinicalTrials.gov—for example, because the authors didn’t specify the outcome they expected before the study began, or they reported results for individuals and not the whole group. In other cases, the authors did several preliminary studies to help them design their experiment. © 2019 American Association for the Advancement of Science

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 13: Memory and Learning
Link ID: 26450 - Posted: 07.25.2019

Maria Temming A new analysis of brain scans may explain why hyperrealistic androids and animated characters can be creepy. By measuring people’s neural activity as they viewed pictures of humans and robots, researchers identified a region of the brain that seems to underlie the “uncanny valley” effect — the unsettling sensation sometimes caused by robots or animations that look almost, but not quite, human (SN Online: 11/22/13). Better understanding the neural circuitry that causes this feeling may help designers create less unnerving androids. In research described online July 1 in the Journal of Neuroscience, neuroscientist Fabian Grabenhorst and colleagues took functional MRI scans of 21 volunteers during two activities. In each activity, participants viewed pictures of humans, humanoid robots of varying realism and — to simulate the appearance of hyperrealistic robots — “artificial humans,” pictures of people whose features were slightly distorted through plastic surgery and photo editing. In the first activity, participants rated each picture on likability and how humanlike the figures appeared. Next, participants chose between pairs of these pictures, based on which subject they would rather receive a gift from. In line with the uncanny valley effect, participants generally rated more humanlike candidates as more likable, but this trend broke down for artificial humans — the most humanlike of the nonhuman options. A similar uncanny valley trend emerged in participants’ judgments about which figures were more trustworthy gift-givers. © Society for Science & the Public 2000 - 2019.

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 11: Emotions, Aggression, and Stress
Link ID: 26387 - Posted: 07.04.2019

By Nathan Dunne I would stare at my hands and think, “I’m not me.” No matter where I was, in the middle of a busy street or at my dining table at home, the condition would be the same. It was like looking at my hands through a plate of glass. Although I could feel the skin on my palms, it did not feel like my own. Half of myself would move through the day while the other half watched. I was split in two. Nothing I did would relieve the condition. I went to see an ophthalmologist, convinced I had cataracts. The verdict was near-perfect vision. I tried taking time off work, talking with family and writing notes about how my life had become a simulation. Each morning I would stare at the mirror in an attempt to recognize myself, but the distance between my body and this new, outer eye only grew larger. I began to believe I was becoming psychotic and would soon be in a psychiatric ward. I was a 28-year-old, working as a copywriter while pursuing a PhD in art history, and I felt my life was nearing its end. One evening in April 2008, as I contemplated another helpless night trapped beyond my body, full-blown panic set in. I took up the phone, ready to dial for emergency, when suddenly music began to play from downstairs. It was a nauseating pop song that my neighbor played incessantly, but something about the melody gave me pause. The next day I began a series of frustrating doctor’s visits. First with my physician, then a neurologist, gastroenterologist and chiropractor. I said that I had never taken drugs or drunk alcohol excessively. While I was fatigued from my doctoral study, I didn’t think this qualified me for the split in the self that had occurred. © 1996-2019 The Washington Post

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26372 - Posted: 07.01.2019

By Bret Stetka The hippocampus is a small curl of brain, which nests beneath each temple. It plays a crucial role in memory formation, taking our experiences and interactions and setting them in the proverbial stone by creating new connections among neurons. A report published on June 27 in Science reveals how the hippocampus learns and hard-wires certain experiences into memory. The authors show that following a particular behavior, the hippocampus replays that behavior repeatedly until it is internalized. They also report on how the hippocampus tracks our brain’s decision-making centers to remember our past choices. Previous research has shown that the rodent hippocampus replays or revisits past experiences during sleep or periods of rest. While a rat navigates a maze, for example, so-called place cells are activated and help the animal track its position. Following their journey through the maze, those same cells are reactivated in the exact same pattern. What previously happened is mentally replayed again. The authors of the new study were curious whether this phenomenon only applies to previous encounters with a particular location or if perhaps this hippocampal replay also applies to memory more generally, including mental and nonspatial memories. It turns out it does. In the study, 33 participants were presented with a series of images containing both a face and a house. They had to judge the age of either one or the other. If, during the second trial, the age of the selected option remained the same, the judged category also did not change in the subsequent trial. If the ages differed, the judged category flipped to the other option in the next round. © 2019 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 13: Memory and Learning
Link ID: 26367 - Posted: 06.28.2019

By Susana Martinez-Conde and Stephen L. Macknik The man and the woman sat down, facing each other in the dimly illuminated room. This was the first time the two young people had met, though they were about to become intensely familiar with each other—in an unusual sort of way. The researcher informed them that the purpose of the study was to understand “the perception of the face of another person.” The two participants were to gaze at each other’s eyes for 10 minutes straight, while maintaining a neutral facial expression, and pay attention to their partner’s face. After giving these instructions, the researcher stepped back and sat on one side of the room, away from the participants’ lines of sight. The two volunteers settled in their seats and locked eyes—feeling a little awkward at first, but suppressing uncomfortable smiles to comply with the scientist’s directions. Ten minutes had seemed like a long stretch to look deeply into the eyes of a stranger, but time started to lose its meaning after a while. Sometimes, the young couple felt as if they were looking at things from outside their own bodies. Other times, it seemed as if each moment contained a lifetime. Throughout their close encounter, each member of the duo experienced their partner’s face as ever-changing. Human features became animal traits, transmogrifying into grotesqueries. There were eyeless faces, and faces with too many eyes. The semblances of dead relatives materialized. Monstrosities abounded. The bizarre perceptual phenomena that the pair witnessed were manifestations of the “strange face illusion,” first described by the psychologist Giovanni Caputo of the University of Urbino, Italy. Caputo’s original study, published in 2010, reported a new type of illusion, experienced by people looking at themselves in the mirror in low-light conditions. © 2019 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 7: Vision: From Eye to Brain
Link ID: 26230 - Posted: 05.14.2019