Links for Keyword: Attention



Links 81 - 100 of 699

By Sarah Bate Alice is six years old. She struggles to make friends at school and often sits alone in the playground. She loses her parents in the supermarket and approaches strangers at pickup. Once, she became separated from her family on a trip to the zoo, and she now has an intense fear of crowded places. Alice has a condition called face blindness, also known as prosopagnosia. This difficulty in recognising facial identity affects 2 percent of the population. Like Alice, most of these people are born with the condition, although a small number acquire face-recognition difficulties after brain injury or illness. Unfortunately, face blindness seems largely resistant to improvement. Yet a very recent study offers more promising findings: children’s face-recognition skills substantially improved after they played a modified version of the game Guess Who? over a two-week period. In the traditional version of Guess Who?, two players see an array of 24 cartoon faces, and each selects a target. Both then take turns asking yes/no questions about the appearance of their opponent’s chosen face, typically inquiring about eye color, hairstyle and accessories such as hats or spectacles. The players use the answers to eliminate faces in the array; when only one remains, they can guess the identity of their opponent’s character. The experimental version of the game preserved this basic setup but used lifelike faces that differed only in the size or spacing of the eyes, nose or mouth. That is, the hairstyle and outer face shape were identical, and children had to read the faces solely on the basis of small differences between the inner features. This manipulation is thought to reflect a key processing strategy that underlies human face recognition: the ability to account not only for the size and shape of features but also for the spacing between them. Evidence suggests this ability to process faces “holistically” is impaired in face blindness. The Guess Who? training program aimed to capitalize on this link. Children progressed through 10 levels of the game, with differences between the inner features becoming progressively less obvious. Children played for half an hour per day on any 10 days over a two-week period, advancing to the next level when they won the game on two consecutive rounds. © 2019 Scientific American
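
To make the game mechanics concrete, here is a minimal Python sketch of the elimination logic described in the excerpt; the attribute names and the three-face array are illustrative stand-ins, not the study's actual 24-face stimuli.

    # Minimal sketch of the Guess Who? elimination logic described above.
    # Attribute names and faces are illustrative, not the study's stimuli.
    candidates = {
        "face_1": {"glasses": True,  "hat": False, "wide_set_eyes": True},
        "face_2": {"glasses": False, "hat": True,  "wide_set_eyes": True},
        "face_3": {"glasses": False, "hat": False, "wide_set_eyes": False},
    }

    def eliminate(remaining, attribute, answer):
        """Keep only the faces consistent with a yes/no answer about one attribute."""
        return {name: feats for name, feats in remaining.items()
                if feats[attribute] == answer}

    # One round: the opponent answers "yes" to "Does your character wear glasses?"
    remaining = eliminate(candidates, "glasses", True)
    if len(remaining) == 1:
        print("Guess:", next(iter(remaining)))   # only face_1 is left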

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26921 - Posted: 12.27.2019

By Gretchen Reynolds Top athletes’ brains are not as noisy as yours and mine, according to a fascinating new study of elite competitors and how they process sound. The study finds that the brains of fit, young athletes dial down extraneous noise and attend to important sounds better than those of other young people, suggesting that playing sports may change brains in ways that alter how well people sense and respond to the world around them. For most of us with normal hearing, of course, listening to and processing sounds are such automatic mental activities that we take them for granted. But “making sense of sound is actually one of the most complex jobs we ask of our brains,” says Nina Kraus, a professor and director of the Auditory Neuroscience Laboratory at Northwestern University in Evanston, Ill., who oversaw the new study. Sound processing also can be a reflection of broader brain health, she says, since it involves so many interconnected areas of the brain that must coordinate to decide whether any given sound is familiar, what it means, if the body should respond and how a particular sound fits into the broader orchestration of other noises that constantly bombard us. For some time, Dr. Kraus and her collaborators have been studying whether some people’s brains perform this intricate task more effectively than others. By attaching electrodes to people’s scalps and then playing a simple sound, usually the spoken syllable “da,” at irregular intervals, they have measured and graphed electrical brain wave activity in people’s sound-processing centers. © 2019 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 5: The Sensorimotor System
Link ID: 26901 - Posted: 12.18.2019

By Virginia Morell Dogs may not be able to count to 10, but even the untrained ones have a rough sense of how many treats you put in their food bowl. That’s the finding of a new study, which reveals that our canine pals innately understand quantities in much the same way we do. The study is “compelling and exciting,” says Michael Beran, a psychologist at Georgia State University in Atlanta who was not involved in the research. “It further increases our confidence that [these representations of quantity in the brain] are ancient and widespread among species.” The ability to rapidly estimate the number of sheep in a flock or ripened fruits on a tree is known as the “approximate number system.” Previous studies have suggested monkeys, fish, bees, and dogs have this talent. But much of this research has used trained animals that receive multiple tests and rewards. That leaves open the question of whether the ability is innate in these species, as it is in humans. In the new study, Gregory Berns, a neuroscientist at Emory University in Atlanta, and colleagues recruited 11 dogs from various breeds, including border collies, pitbull mixes, and Labrador golden retriever mixes, to see whether they could find brain activity associated with a sensitivity to numbers. The team, which pioneered canine brain scanning (by getting dogs to voluntarily enter a functional magnetic resonance imaging scanner and remain motionless), had their subjects enter the scanner, rest their heads on a block, and fix their eyes on a screen at the opposite end (see video, above). On the screen was an array of light gray dots on a black background whose number changed every 300 milliseconds. If dogs, like humans and nonhuman primates, have a dedicated brain region for representing quantities, their brains should show more activity there when the number of dots was dissimilar (three small dots versus 10 large ones) than when they were constant (four small dots versus four large dots). © 2019 American Association for the Advancement of Science.
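
A rough sketch of the two scanner conditions the excerpt describes may help: a "dissimilar" stream in which numerosity changes from frame to frame, and a "constant" stream in which only dot size changes. Only the 300-millisecond frame duration, the 3-versus-10 contrast and the 4-dot constant example come from the summary; the rest of this Python sketch is assumed for illustration.

    # Illustrative sketch of the two dot-array conditions described above.
    FRAME_MS = 300   # each array was shown for 300 milliseconds

    def dissimilar_stream(n_frames):
        # numerosity alternates (e.g., three small dots vs. ten large ones)
        return [{"n_dots": 3, "dot_size": "small"} if i % 2 == 0
                else {"n_dots": 10, "dot_size": "large"}
                for i in range(n_frames)]

    def constant_stream(n_frames):
        # numerosity stays at four; only dot size varies
        return [{"n_dots": 4, "dot_size": "small" if i % 2 == 0 else "large"}
                for i in range(n_frames)]

    # A number-sensitive region should respond more to the first stream, where
    # numerosity changes, than to the second, where only dot size changes.
    print(dissimilar_stream(2))
    print(constant_stream(2))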

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26900 - Posted: 12.18.2019

By Zeynep Tufekci More than a billion people around the world have smartphones, almost all of which come with some kind of navigation app such as Google or Apple Maps or Waze. This raises the age-old question we encounter with any technology: What skills are we losing? But also, crucially: What capabilities are we gaining? Talking with people who are good at finding their way around or adept at using paper maps, I often hear a lot of frustration with digital maps. North/south orientation gets messed up, and you can see only a small section at a time. And unlike with paper maps, one loses a lot of detail after zooming out. I can see all that and sympathize that it may be quite frustrating for the already skilled to be confined to a small phone screen. (Although map apps aren’t really meant to be replacements for paper maps, which appeal to our eyes, but are actually designed to be heard: “Turn left in 200 feet. Your destination will be on the right.”) But consider what digital navigation aids have meant for someone like me. Despite being a frequent traveler, I’m so terrible at finding my way that I still use Google Maps almost every day in the small town where I have lived for many years. What looks like an inferior product to some has been a significant expansion of my own capabilities. I’d even call it life-changing. Part of the problem is that reading paper maps requires a specific skill set. There is nothing natural about them. In many developed nations, including the U.S., one expects street names and house numbers to be meaningful referents, and instructions such as “go north for three blocks and then west” make sense to those familiar with these conventions. In Istanbul, in contrast, where I grew up, none of those hold true. For one thing, the locals rarely use street names. Why bother when a government or a military coup might change them—again. House and apartment numbers often aren’t sequential either because after buildings 1, 2 and 3 were built, someone squeezed in another house between 1 and 2, and now that’s 4. But then 5 will maybe get built after 3, and 6 will be between 2 and 3. Good luck with 1, 4, 2, 6, 5, and so on, sometimes into the hundreds, in jumbled order. Besides, the city is full of winding, ancient alleys that intersect with newer avenues at many angles. © 2019 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 13: Memory and Learning
Link ID: 26768 - Posted: 10.30.2019

Ian Sample Science editor Warning: this story is about death. You might want to click away now. That’s because, researchers say, our brains do their best to keep us from dwelling on our inevitable demise. A study found that the brain shields us from existential fear by categorising death as an unfortunate event that only befalls other people. “The brain does not accept that death is related to us,” said Yair Dor-Ziderman, at Bar Ilan University in Israel. “We have this primal mechanism that means when the brain gets information that links self to death, something tells us it’s not reliable, so we shouldn’t believe it.” Being shielded from thoughts of our future death could be crucial for us to live in the present. The protection may switch on in early life as our minds develop and we realise death comes to us all. “The moment you have this ability to look into your own future, you realise that at some point you’re going to die and there’s nothing you can do about it,” said Dor-Ziderman. “That goes against the grain of our whole biology, which is helping us to stay alive.” To investigate how the brain handles thoughts of death, Dor-Ziderman and colleagues developed a test that involved producing signals of surprise in the brain. They asked volunteers to watch faces flash up on a screen while their brain activity was monitored. The person’s own face or that of a stranger flashed up on screen several times, followed by a different face. On seeing the final face, the brain flickered with surprise because the image clashed with what it had predicted. © 2019 Guardian News & Media Limited

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 11: Emotions, Aggression, and Stress
Link ID: 26721 - Posted: 10.19.2019

Jon Hamilton Too much physical exertion appears to make the brain tired. That's the conclusion of a study of triathletes published Thursday in the journal Current Biology. Researchers found that after several weeks of overtraining, athletes became more likely to choose immediate gratification over long-term rewards. At the same time, brain scans showed the athletes had decreased activity in an area of the brain involved in decision-making. The finding could explain why some elite athletes see their performance decline when they work out too much — a phenomenon known as overtraining syndrome. The distance runner Alberto Salazar, for example, experienced a mysterious decline after winning the New York Marathon three times and the Boston Marathon once in the early 1980s. Salazar's times fell off even though he was still in his mid-20s and training more than ever. "Probably [it was] something linked to his brain and his cognitive capacities," says Bastien Blain, an author of the study and a postdoctoral fellow at University College London. (Salazar didn't respond to an interview request for this story.) Blain was part of a team that studied 37 male triathletes who volunteered to take part in a special training program. "They were strongly motivated to be part of this program, at least at the beginning," Blain says. Half of the triathletes were instructed to continue their usual workouts. The rest were told to increase their weekly training by 40%. The result was a training program so intense that these athletes began to perform worse on tests of maximal output. After three weeks, all the participants were put in a brain scanner and asked a series of questions designed to reveal whether a person is more inclined to choose immediate gratification or a long-term reward. "For example, we ask, 'Do you prefer $10 now or $60 in six months,' " Blain says. © 2019 npr
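
The "$10 now or $60 in six months" item is a standard intertemporal-choice probe. One common way to formalize such choices is hyperbolic discounting, V = A / (1 + kD); the Python sketch below uses that textbook model purely as an illustration, not as the analysis the study's authors actually ran, and the discount rates are made up.

    # Generic hyperbolic-discounting illustration of "$10 now or $60 in six months".
    # Textbook model, not the study's own analysis; the k values are arbitrary.
    def discounted_value(amount, delay_months, k):
        """Subjective present value of a delayed reward: A / (1 + k * D)."""
        return amount / (1.0 + k * delay_months)

    def prefers_immediate(immediate, delayed, delay_months, k):
        return immediate >= discounted_value(delayed, delay_months, k)

    # A steep discounter (large k) takes the $10 now; a patient chooser waits.
    print(prefers_immediate(10, 60, 6, k=2.0))   # True  -> immediate gratification
    print(prefers_immediate(10, 60, 6, k=0.1))   # False -> long-term reward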

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 5: The Sensorimotor System
Link ID: 26656 - Posted: 09.28.2019

Salvatore Domenic Morgera The human brain sends hundreds of billions of neural signals each second. It’s an extraordinarily complex feat. A healthy brain must establish an enormous number of correct connections and ensure that they remain accurate for the entire period of the information transfer – that can take seconds, which in “brain time” is pretty long. How does each signal get to its intended destination? The challenge for your brain is similar to what you’re faced with when trying to engage in conversation at a noisy cocktail party. You’re able to focus on the person you’re talking to and “mute” the other discussions. This phenomenon is selective hearing – what’s called the cocktail party effect. When everyone at a large, crowded party talks at roughly the same loudness, the average sound level of the person you’re speaking with is about equal to the average level of all the other partygoers’ chatter combined. If it were a satellite TV system, this roughly equal balance of desired signal and background noise would result in poor reception. Nevertheless, this balance is good enough to let you understand conversation at a bustling party. How does the human brain do it, distinguishing among billions of ongoing “conversations” within itself and locking on to a specific signal for delivery? My team’s research into the neurological networks of the brain shows there are two activities that support its ability to establish reliable connections in the presence of significant biological background noise. Although the brain’s mechanisms are quite complex, these two activities act as what an electrical engineer calls a matched filter - a processing element used in high-performance radio systems, and now known to exist in nature. © 2010–2019, The Conversation US, Inc.
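
For readers unfamiliar with the term, a matched filter detects a known waveform by correlating the incoming signal with a stored template and looking for a peak. The NumPy sketch below is a generic signal-processing illustration of that idea, not the authors' neural model; the template shape, noise level and embedding position are all made up.

    # Generic matched-filter illustration: correlate a noisy recording with a
    # known template and find the peak where the template is buried.
    import numpy as np

    rng = np.random.default_rng(0)
    template = np.sin(np.linspace(0, 2 * np.pi, 50))   # known waveform to detect
    recording = rng.normal(0.0, 1.0, 1000)             # background "party" noise
    recording[400:450] += 3 * template                 # bury the waveform at sample 400

    # Matched filtering: correlate the recording with the template
    # (equivalently, convolve with the time-reversed template).
    output = np.correlate(recording, template, mode="valid")
    print("template detected near sample", int(np.argmax(output)))   # ~400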

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26604 - Posted: 09.12.2019

Bruce Bower Monkeys can keep strings of information in order by using a simple kind of logical thought. Rhesus macaque monkeys learned the order of items in a list with repeated exposure to pairs of items plucked from the list, say psychologist Greg Jensen of Columbia University and colleagues. The animals drew basic logical conclusions about pairs of listed items, akin to assuming that if A comes before B and B comes before C, then A comes before C, the scientists conclude July 30 in Science Advances. Importantly, rewards given to monkeys didn’t provide reliable guidance to the animals about whether they had correctly ordered pairs of items. Monkeys instead worked out the approximate order of images in the list, and used that knowledge to make choices in experiments about which of two images from the list followed the other, Jensen’s group says. Previous studies have suggested that a variety of animals, including monkeys, apes, pigeons, rats and crows, can discern the order of a list of items (SN: 7/5/08, p. 13). But debate persists about whether nonhuman creatures do so only with the prodding of rewards for correct responses or, at least sometimes, by consulting internal knowledge acquired about particular lists. Jensen’s group designed experimental sessions in which four monkeys completed as many as 600 trials to determine the order of seven images in a list. Images included a hot air balloon, an ear of corn and a zebra. Monkeys couldn’t rely on rewards to guide their choices. In some sessions, animals usually received a larger reward for correctly identifying which of two images came later in the list and a smaller reward for an incorrect response. In other sessions, incorrect responses usually yielded a larger reward than correct responses. Rewards consisted of larger or smaller gulps of water delivered through tubes to the moderately thirsty primates. |© Society for Science & the Public 2000 - 2019
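
The inference at issue ("if A comes before B and B comes before C, then A comes before C") is easy to make concrete. The Python sketch below shows a classic transitive-inference setup in which only adjacent pairs are ever presented, yet untrained pairs can still be ordered; the excerpt does not specify exactly which pairs the monkeys saw, and only three of the seven item names below come from it.

    # Transitive inference over a seven-item list: train on adjacent pairs only,
    # then order a pair never shown together. Three item names come from the
    # excerpt; the other four are placeholders.
    true_order = ["balloon", "corn", "zebra", "item4", "item5", "item6", "item7"]
    adjacent_pairs = list(zip(true_order, true_order[1:]))   # the trained pairs

    def comes_before(a, b, pairs):
        """Chain known 'x before y' links to decide whether a precedes b."""
        frontier, visited = {a}, set()
        while frontier:
            x = frontier.pop()
            visited.add(x)
            for p, q in pairs:
                if p == x:
                    if q == b:
                        return True
                    if q not in visited:
                        frontier.add(q)
        return False

    # "balloon" and "zebra" were never paired during training, yet their order follows:
    print(comes_before("balloon", "zebra", adjacent_pairs))   # True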

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26475 - Posted: 08.01.2019

By Jocelyn Kaiser U.S. scientists who challenged a new rule that would require them to register their basic studies of the human brain and behavior in a federal database of clinical trials have won another reprieve. The National Institutes of Health (NIH) in Bethesda, Maryland, says it now understands why some of that kind of research won’t easily fit the format of ClinicalTrials.gov, and the agency has delayed the reporting requirements for another 2 years. The controversy dates back to 2017, when behavioral and cognitive researchers realized that new requirements for registering and reporting results from NIH-funded clinical studies would also cover even basic studies of human subjects, experiments that did not test drugs or other potential treatments. The scientists protested that including such studies would confuse the public and create burdensome, unnecessary paperwork. A year ago, NIH announced it would delay the requirement until September and seek further input. The responses prompted NIH staff to examine published papers from scientists conducting basic research. They agreed it would be hard to fit some of these studies into the rigid informational format used by ClinicalTrials.gov—for example, because the authors didn’t specify the outcome they expected before the study began, or they reported results for individuals and not the whole group. In other cases, the authors did several preliminary studies to help them design their experiment. © 2019 American Association for the Advancement of Science

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 13: Memory and Learning
Link ID: 26450 - Posted: 07.25.2019

Maria Temming A new analysis of brain scans may explain why hyperrealistic androids and animated characters can be creepy. By measuring people’s neural activity as they viewed pictures of humans and robots, researchers identified a region of the brain that seems to underlie the “uncanny valley” effect — the unsettling sensation sometimes caused by robots or animations that look almost, but not quite, human (SN Online: 11/22/13). Better understanding the neural circuitry that causes this feeling may help designers create less unnerving androids. In research described online July 1 in the Journal of Neuroscience, neuroscientist Fabian Grabenhorst and colleagues took functional MRI scans of 21 volunteers during two activities. In each activity, participants viewed pictures of humans, humanoid robots of varying realism and — to simulate the appearance of hyperrealistic robots — “artificial humans,” pictures of people whose features were slightly distorted through plastic surgery and photo editing. In the first activity, participants rated each picture on likability and how humanlike the figures appeared. Next, participants chose between pairs of these pictures, based on which subject they would rather receive a gift from. In line with the uncanny valley effect, participants generally rated more humanlike candidates as more likable, but this trend broke down for artificial humans — the most humanlike of the nonhuman options. A similar uncanny valley trend emerged in participants’ judgments about which figures were more trustworthy gift-givers. |© Society for Science & the Public 2000 - 2019.

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 11: Emotions, Aggression, and Stress
Link ID: 26387 - Posted: 07.04.2019

By Nathan Dunne I would stare at my hands and think, “I’m not me.” No matter where I was, in the middle of a busy street or at my dining table at home, the condition would be the same. It was like looking at my hands through a plate of glass. Although I could feel the skin on my palms, it did not feel like my own. Half of myself would move through the day while the other half watched. I was split in two. Nothing I did would relieve the condition. I went to see an ophthalmologist, convinced I had cataracts. The verdict was near-perfect vision. I tried taking time off work, talking with family and writing notes about how my life had become a simulation. Each morning I would stare at the mirror in an attempt to recognize myself, but the distance between my body and this new, outer eye only grew larger. I began to believe I was becoming psychotic and would soon be in a psychiatric ward. I was a 28-year-old, working as a copywriter while pursuing a PhD in art history, and I felt my life was nearing its end. One evening in April 2008, as I contemplated another helpless night trapped beyond my body, full-blown panic set in. I took up the phone, ready to dial for emergency, when suddenly music began to play from downstairs. It was a nauseating pop song that my neighbor played incessantly, but something about the melody gave me pause. The next day I began a series of frustrating doctor’s visits. First with my physician, then a neurologist, gastroenterologist and chiropractor. I said that I had never taken drugs or drunk alcohol excessively. While I was fatigued from my doctoral study, I didn’t think this qualified me for the split in the self that had occurred. © 1996-2019 The Washington Post

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26372 - Posted: 07.01.2019

By Bret Stetka The hippocampus is a small curl of brain, which nests beneath each temple. It plays a crucial role in memory formation, taking our experiences and interactions and setting them in the proverbial stone by creating new connections among neurons. A report published on June 27 in Science reveals how the hippocampus learns and hard-wires certain experiences into memory. The authors show that following a particular behavior, the hippocampus replays that behavior repeatedly until it is internalized. They also report on how the hippocampus tracks our brain’s decision-making centers to remember our past choices. Previous research has shown that the rodent hippocampus replays or revisits past experiences during sleep or periods of rest. While a rat navigates a maze, for example, so-called place cells are activated and help the animal track its position. Following their journey through the maze, those same cells are reactivated in the exact same pattern. What previously happened is mentally replayed again. The authors of the new study were curious whether this phenomenon only applies to previous encounters with a particular location or if perhaps this hippocampal replay also applies to memory more generally, including mental and nonspatial memories. It turns out it does. In the study, 33 participants were presented with a series of images containing both a face and a house. They had to judge the age of either one or the other. If, during the second trial, the age of the selected option remained the same, the judged category also did not change in the subsequent trial. If the ages differed, the judged category flipped to the other option in the next round. © 2019 Scientific American
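
The stay/switch rule in that face-house task is easier to follow written out. The Python sketch below encodes the rule exactly as summarized (repeat the judged category when the attended item's age repeats, flip categories when it changes); it illustrates the rule only and is not the study's task code.

    # Trial-to-trial rule as summarized above: stay with the same category
    # (face or house) when the judged item's age repeats; otherwise switch.
    def next_category(current, age_repeated):
        if age_repeated:
            return current                                   # stay
        return "house" if current == "face" else "face"      # switch

    print(next_category("face", age_repeated=True))    # face  (stay)
    print(next_category("face", age_repeated=False))   # house (switch)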

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 13: Memory and Learning
Link ID: 26367 - Posted: 06.28.2019

By Susana Martinez-Conde and Stephen L. Macknik The man and the woman sat down, facing each other in the dimly illuminated room. This was the first time the two young people had met, though they were about to become intensely familiar with each other—in an unusual sort of way. The researcher informed them that the purpose of the study was to understand “the perception of the face of another person.” The two participants were to gaze at each other’s eyes for 10 minutes straight, while maintaining a neutral facial expression, and pay attention to their partner’s face. After giving these instructions, the researcher stepped back and sat on one side of the room, away from the participants’ lines of sight. The two volunteers settled in their seats and locked eyes—feeling a little awkward at first, but suppressing uncomfortable smiles to comply with the scientist’s directions. Ten minutes had seemed like a long stretch to look deeply into the eyes of a stranger, but time started to lose its meaning after a while. Sometimes, the young couple felt as if they were looking at things from outside their own bodies. Other times, it seemed as if each moment contained a lifetime. Throughout their close encounter, each member of the duo experienced their partner’s face as ever-changing. Human features became animal traits, transmogrifying into grotesqueries. There were eyeless faces, and faces with too many eyes. The semblances of dead relatives materialized. Monstrosities abounded. The bizarre perceptual phenomena that the pair witnessed were manifestations of the “strange face illusion,” first described by the psychologist Giovanni Caputo of the University of Urbino, Italy. Caputo’s original study, published in 2010, reported a new type of illusion, experienced by people looking at themselves in the mirror in low light conditions. © 2019 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 7: Vision: From Eye to Brain
Link ID: 26230 - Posted: 05.14.2019

By Stephen L. Macknik, Susana Martinez-Conde We were very sad to learn that Johnny Thompson (aka The Great Tomsoni) passed away on March 9, 2019, at the age of 84. We first met Johnny in 2007, when he spoke at the ‘Magic of Consciousness’ Symposium that we organized at the annual meeting of the Association for the Scientific Study of Consciousness, in Las Vegas. Johnny Thompson, along with Mac King, Teller, Apollo Robbins, and James Randi, talked to an academic audience of neuroscientists, psychologists and philosophers about his impressions of the psychologically puzzling aspects of magic, and helped jumpstart ‘neuromagic’ as a field of scientific enquiry. Johnny Thompson and his co-presenters inspired us, among many other investigators, to conduct research into the neuroscientific bases of magic. Dozens of papers by labs around the world have been published in the intervening decade as a result. Johnny himself co-authored an academic review with us, on the intersection of magic and neuroscience, published in Nature Reviews Neuroscience in 2008. Our later book Sleights of Mind: What the Neuroscience of Magic Reveals About Our Everyday Deceptions, drew significantly from our extensive conversations with Johnny and his keen insights. Thompson was regarded as a deeply knowledgeable magician’s magician and magic theorist. He was generous and kind with his wisdom and is widely recognized for having served as consultant to numerous world-renowned magic acts. Though his contributions to the neuroscience of magic are less well known than his magic artistry, they have led to significant advances in the science of attention and misdirection, too. Among the magic aphorisms we have heard over the years, one of our favorites is Johnny’s assertion that “when the audience laughs, time stops,” allowing the magician, at that precise moment, to get away with magical murder. © 2019 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26119 - Posted: 04.08.2019

By: Kevin P. Madore, Ph.D., and Anthony D. Wagner, Ph.D. As you go about your day, you may barely notice that you are frequently multitasking. It may be driving to work while listening to a radio program or talking to a loved one on the phone (putting yourself and others at risk), or perusing Facebook while texting a friend, or switching back and forth between a high-level project like compiling a report and a routine chore like scheduling an appointment. Multitasking means trying to perform two or more tasks concurrently, which typically leads to repeatedly switching between tasks (i.e., task switching) or leaving one task unfinished in order to do another. The scientific study of multitasking over the past few decades has revealed important principles about the operations, and processing limitations, of our minds and brains. One critical finding to emerge is that we inflate our perceived ability to multitask: there is little correlation with our actual ability. In fact, multitasking is almost always a misnomer, as the human mind and brain lack the architecture to perform two or more tasks simultaneously. By architecture, we mean the cognitive and neural building blocks and systems that give rise to mental functioning. We have a hard time multitasking because of the ways that our building blocks of attention and executive control inherently work. To this end, when we attempt to multitask, we are usually switching between one task and another. The human brain has evolved to single task. Together with studies of patients who have suffered focal neural injuries, functional neuroimaging studies indicate that key brain systems involved in executive control and sustained attention determine our ability to multitask. These include the frontoparietal control network, dorsal attention network, and ventral attention network. © 2019 The Dana Foundation

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26117 - Posted: 04.06.2019

Nicola Davis “Acting is the least mysterious of all crafts,” Marlon Brando once said. But for scientists, working out what is going on in an actor’s head has always been something of a puzzle. Now, researchers have said thespians show different patterns of brain activity depending on whether they are in character or not. Dr Steven Brown, the first author of the research from McMaster University in Canada, said: “It looks like when you are acting, you are suppressing yourself; almost like the character is possessing you.” Writing in the journal Royal Society Open Science, Brown and colleagues report how 15 method actors, mainly theatre students, were trained to take on a Shakespeare role – either Romeo or Juliet – in a theatre workshop, and were asked various questions, to which they responded in character. They were then invited into the laboratory, where their brains were scanned in a series of experiments. Once inside the MRI scanner, the actors were asked to think about their response to a number of fresh conundrums that flashed up on screen, and which might well have occurred to the star-crossed lovers, such as: would they gatecrash a party? And would they tell their parents that they had fallen in love? Each actor was asked to respond to different questions, based on four different premises assigned in a random order. In one, they were asked for their own perspective; in another, they were asked to say how they thought a particular close friend would react, while in a third, they were asked to respond as though they were either Romeo or Juliet. © 2019 Guardian News & Media Limited

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 1: Cells and Structures: The Anatomy of the Nervous System
Link ID: 26029 - Posted: 03.13.2019

By Max Evans BBC News A stranger once waved at Boo James on a bus. She did not think any more of it - until it later emerged it was her mother. She has a relatively rare condition called face blindness, which means she cannot recognise the faces of her family, friends, or even herself. Scientists have now launched a study they hope could help train people like Boo to recognise people better. Boo said for many years she thought she was "from another planet". "It is immensely stressful and very emotionally upsetting to sit and dwell upon so I try not to do that," she said. "It's very hard work. It can be physically and emotionally exhausting to spend a day out in public constantly wondering whether you should have spoken to someone." For most of her life, she didn't know she had the condition - also known as prosopagnosia - and blamed herself for the "social awkwardness" caused when she failed to recognise people. "I had to try and find a way to explain that. I really couldn't very well, except to think that I was just the one to blame for not being bothered to remember who people were. "[Like it was] some sort of laziness: I didn't want to know them, obviously I wasn't interested enough to remember them, so that was some kind of deficiency, perhaps, in me." But the penny dropped in her early 40s when she saw a news item about the condition on television. "I then knew that the only reason I wasn't recognising that person was because my brain physically wasn't able to do it," she said. "I could immediately engage more self-understanding and forgive myself and try to approach things from a different angle." [Image caption: Boo has developed techniques to try to help her cope, including remembering what people wear.] She said her childhood was punctuated by "traumatic experiences" with fellow children, childminders and teachers she could not recognise. © 2019 BBC

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26011 - Posted: 03.06.2019

Bruce Bower WASHINGTON — Beliefs among some university professors that intelligence is fixed, rather than capable of growth, contribute to a racial achievement gap in STEM courses, a new study suggests. Those professors may subtly communicate stereotypes about blacks, Hispanics and Native Americans allegedly being less intelligent than Asians and whites, say psychologist Elizabeth Canning of Indiana University in Bloomington and her colleagues. In turn, black, Hispanic and Native American undergraduates may respond by becoming less academically motivated and more anxious about their studies, leading to lower grades. Even small dips in STEM grades — especially for students near pass/fail cutoffs — can accumulate across the 15 or more science, technology, engineering and math classes needed to become a physician or an engineer, Canning says. That could jeopardize access to financial aid and acceptance to graduate programs. “Our work suggests that academic benefits could accrue over time if all students, and particularly underrepresented minority students, took STEM classes with faculty who endorse a growth mind-set,” Canning says. Underrepresented minority students’ reactions to professors with fixed or flexible beliefs about intelligence have yet to be studied. But over a two-year period, the disparity in grade point averages separating Asian and white STEM students from black, Hispanic and Native American peers was nearly twice as large in courses taught by professors who regarded intelligence as set in stone, versus malleable, Canning’s team reports online February 15 in Science Advances. |© Society for Science & the Public 2000 - 2019.

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 13: Memory and Learning
Link ID: 25970 - Posted: 02.18.2019

By Alex Fox If math is the language of the universe, bees may have just uttered their first words. New research suggests these busybodies of the insect world are capable of addition and subtraction—using colors in the place of plus and minus symbols. In the animal kingdom, the ability to count—or at least distinguish between differing quantities—isn’t unusual: It has been seen in frogs, spiders, and even fish. But solving equations using symbols is rare air, so far only achieved by famously brainy animals such as chimpanzees and African grey parrots. Enter the honey bee (Apis mellifera). Building on prior research that says the social insects can count to four and understand the concept of zero, researchers wanted to test the limits of what their tiny brains can do. Scientists trained 14 bees to link the colors blue and yellow to addition and subtraction, respectively. They placed the bees at the entrance of a Y-shaped maze, where they were shown several shapes in either yellow or blue. If the shapes were blue, bees got a reward if they went to the end of the maze with one more blue shape (the other end had one less blue shape); if the shapes were yellow, they got a reward if they went to the end of the maze with one less yellow shape. © 2018 American Association for the Advancement of Science
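
Spelled out, the color rule works as follows. This Python sketch illustrates the task design as described (blue means one more element, yellow means one fewer), not anything about how the bees compute it; the counts are example values.

    # Y-maze rule as described above: blue signals "one more", yellow "one fewer"
    # than the sample display. Sketch of the design, not the bees' computation.
    def correct_count(sample_count, color):
        if color == "blue":
            return sample_count + 1      # blue -> addition
        if color == "yellow":
            return sample_count - 1      # yellow -> subtraction
        raise ValueError("color must be 'blue' or 'yellow'")

    def rewarded_arm(sample_count, color, left_count, right_count):
        """Which arm of the Y-maze shows the rewarded number of shapes."""
        target = correct_count(sample_count, color)
        return "left" if left_count == target else "right"

    print(rewarded_arm(3, "blue",   left_count=4, right_count=2))   # left
    print(rewarded_arm(3, "yellow", left_count=4, right_count=2))   # right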

Related chapters from BN: Chapter 6: Evolution of the Brain and Behavior; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 25938 - Posted: 02.08.2019

By Benedict Carey The world’s most common digital habit is not easy to break, even in a fit of moral outrage over the privacy risks and political divisions Facebook has created, or amid concerns about how the habit might affect emotional health. Although four in 10 Facebook users say they have taken long breaks from it, the digital platform keeps growing. A recent study found that the average user would have to be paid $1,000 to $2,000 to be pried away for a year. So what happens if you actually do quit? A new study, the most comprehensive to date, offers a preview. Expect the consequences to be fairly immediate: More in-person time with friends and family. Less political knowledge, but also less partisan fever. A small bump in one’s daily moods and life satisfaction. And, for the average Facebook user, an extra hour a day of downtime. The study, by researchers at Stanford University and New York University, helps clarify the ceaseless debate over Facebook’s influence on the behavior, thinking and politics of its active monthly users, who number some 2.3 billion worldwide. The study was posted recently on the Social Science Research Network, an open access site. “For me, Facebook is one of those compulsive things,” said Aaron Kelly, 23, a college student in Madison, Wis. “It’s really useful, but I always felt like I was wasting time on it, distracting myself from study, using it whenever I got bored.” Mr. Kelly, who estimated that he spent about an hour a day on the platform, took part in the study “because it was kind of nice to have an excuse to deactivate and see what happened,” he said. Well before news broke that Facebook had shared users’ data without consent, scientists and habitual users debated how the platform had changed the experience of daily life. A cadre of psychologists has argued for years that the use of Facebook and other social media is linked to mental distress, especially in adolescents. Others have likened habitual Facebook use to a mental disorder, comparing it to drug addiction and even publishing magnetic-resonance images of what Facebook addiction “looks like in the brain.” © 2019 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 16: Psychopathology: Biological Basis of Behavior Disorders
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 12: Psychopathology: The Biology of Behavioral Disorders
Link ID: 25919 - Posted: 01.31.2019