Links for Keyword: Attention




By DOLLY CHUGH, KATHERINE L. MILKMAN and MODUPE AKINOLA

In the world of higher education, we professors like to believe that we are free from the racial and gender biases that afflict so many other people in society. But is this self-conception accurate? To find out, we conducted an experiment.

A few years ago, we sent emails to more than 6,500 randomly selected professors from 259 American universities. Each email was from a (fictional) prospective out-of-town student whom the professor did not know, expressing interest in the professor’s Ph.D. program and seeking guidance. These emails were identical and written in impeccable English, varying only in the name of the student sender. The messages came from students with names like Meredith Roberts, Lamar Washington, Juanita Martinez, Raj Singh and Chang Huang, names that earlier research participants consistently perceived as belonging to either a white, black, Hispanic, Indian or Chinese student. In total, we used 20 different names in 10 different race-gender categories (e.g. white male, Hispanic female).

On a Monday morning, the emails went out — one email per professor — and then we waited to see which professors would write back to which students. We understood, of course, that some professors would naturally be unavailable or uninterested in mentoring. But we also knew that the average treatment of any particular type of student should not differ from that of any other — unless professors were deciding (consciously or not) which students to help on the basis of their race and gender. (This “audit” methodology has long been used to study intentional and unintentional bias in real-world decision-making, as it allows researchers to standardize much about the decision environment.)

What did we discover? First comes the fairly good news, which we reported in a paper in Psychological Science.
Despite not knowing the students, 67 percent of the faculty members responded to the emails, and remarkably, 59 percent of the responders even agreed to meet on the proposed date with a student about whom they knew little and who did not even attend their university. (We immediately wrote back to cancel those meetings.) © 2014 The New York Times Company
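The logic of the audit design (identical emails, varying only the sender's name) is that any reliable gap in response rates between name groups points to bias, which can be checked with a standard two-proportion comparison. Here is a minimal sketch in Python; the counts are hypothetical illustrations, not the study's actual data, and the article does not say which test the authors used.

```python
import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """z statistic for the difference between two response rates."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p_pool = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts, not the study's data: 230 of 325 emails answered
# for one name group vs. 195 of 325 for another.
z = two_proportion_z(230, 325, 195, 325)
print(round(z, 2))  # about 2.89, beyond the conventional 1.96 threshold
```

Because every email was identical apart from the name, a z statistic this far from zero could not be explained by differences in writing quality or topic, which is precisely what the audit design buys the researchers.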

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 11: Emotions, Aggression, and Stress
Link ID: 19604 - Posted: 05.12.2014

By Diana Kwon

Would you rather have $50 now or $100 two weeks from now? Even though the $100 is obviously the better choice, many people will opt for the $50. Both humans and animals show this tendency to place lower value on later rewards, a behavior known as temporal discounting. High rates of temporal discounting can lead to impulsive behavior, and at its worst, too much of this “now bias” is associated with pathological gambling, attention deficit hyperactivity disorder and drug addiction.

What determines if you’ll be an impulsive decision-maker? New evidence suggests that for women, estrogen levels might be a factor. In a recent study published in the Journal of Neuroscience, Charlotte Boettiger and her team at the University of North Carolina revealed that greater increases in estrogen levels across the menstrual cycle led to less impulsive decision making.

The researchers tested the “now bias” in 87 women between the ages of 18 and 40 at two different points in their menstrual cycle – in the menstrual phase, when estrogen levels are low, and the follicular phase, when estrogen levels are high. Participants were given a delay-discounting task in which they had to choose between two options: a certain sum of money at a later date or a discounted amount immediately (e.g. $100 in one week or $70 today). Subjects showed a greater bias toward the immediate choice during the menstrual phase of the cycle, when estrogen levels were low.

Estrogen levels vary between women and can change with factors like stress and age. When the researchers measured amounts of estradiol (the dominant form of estrogen) from the saliva of a subset of the participants at the two points in their menstrual cycles, they found that not all of them showed a detectable increase. Only those with a measurable rise in estradiol showed a significant change in impulsive decision-making. © 2014 Scientific American
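The trade-off in a delay-discounting task is conventionally modeled with hyperbolic discounting, in which a delayed reward's subjective value is V = A / (1 + kD); the steeper a person's discount rate k, the stronger the "now bias." A minimal sketch, with the caveat that the k values below are illustrative and are not estimates from the study:

```python
def discounted_value(amount, delay_days, k):
    """Hyperbolic discounting: subjective value of a reward received
    after a delay. Larger k means steeper discounting (stronger now bias)."""
    return amount / (1 + k * delay_days)

def prefers_immediate(immediate, delayed, delay_days, k):
    """True when the smaller immediate reward subjectively beats the
    larger delayed one."""
    return immediate > discounted_value(delayed, delay_days, k)

# The article's example choice: $70 today vs. $100 in one week.
# These k values are illustrative, not taken from the paper.
print(prefers_immediate(70, 100, 7, k=0.10))  # steep discounter -> True
print(prefers_immediate(70, 100, 7, k=0.01))  # shallow discounter -> False
```

In this framing, the study's result amounts to the same participant behaving as if her k were higher in the low-estrogen menstrual phase than in the follicular phase.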

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 12: Sex: Evolutionary, Hormonal, and Neural Bases
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 8: Hormones and Sex
Link ID: 19589 - Posted: 05.08.2014

By Felicity Muth

Imagine that you walk into a room where three people are sitting, facing you. Their faces are oriented towards you, but all three of them have their eyes directed towards the left side of the room. You would probably follow their gaze to the point where they were looking (if you weren’t too unnerved to take your eyes off these odd people). As a social species, we are particularly attuned to social cues like following others’ gazes.

We’re not the only animals that follow the gazes of members of our own species, though: great apes, monkeys, lemurs, dogs, goats, birds and even tortoises follow each other’s gazes too. But we don’t all follow gazes to the same extent. One species of macaque monkey (the stumptailed macaque) follows gazes a lot more than other macaque species do, bonobos do it more than chimpanzees, and human children follow gazes a lot more than other great ape species do.

Species also differ in their understanding of what the other animal is looking at. For example, if we saw a person gazing at a point, and between them and this point was a barrier, whether the barrier was solid or transparent would affect how far we followed their gaze. This is because we imagine ourselves in their physical position and consider what they might be able to see. Bonobos and chimpanzees can also do this, but orang-utans cannot. Like us, great apes and old world monkeys will also follow a gaze but then look back at the individual gazing if they don’t see what that individual is gazing at (‘are you going crazy, or am I just not seeing what you’re seeing?’). Capuchin and spider monkeys don’t seem to do this.

So, even though a lot of animals are capable of following the gazes of others, there is a lot of variation in the extent and flexibility of this behaviour. A recent study looked to see whether chimpanzees, bonobos, orang-utans and humans would be more likely to follow the gazes of their own species than those of another species. © 2014 Scientific American

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 19581 - Posted: 05.06.2014

by Bethany Brookshire

When you are waiting with a friend to cross a busy intersection, car engines running, horns honking and the city humming all around you, your brain is busy processing all those sounds. Somehow, though, the human auditory system can filter out the extraneous noise and allow you to hear what your friend is telling you. But if you tried to ask your iPhone a question, Siri might have a tougher time.

A new study shows how the mammalian brain can distinguish the signal from the noise. Brain cells in the primary auditory cortex can both turn down the noise and increase the gain on the signal. The results show how the brain processes sound in noisy environments, and might eventually help in the development of better voice recognition devices, including improvements to cochlear implants for those with hearing loss. Not to mention getting Siri to understand you on a chaotic street corner.

Nima Mesgarani and colleagues at the University of Maryland in College Park were interested in how mammalian brains separate speech from background noise. Ferrets have an auditory system that is extremely similar to that of humans, so the researchers looked at the A1 area of the ferret cortex, which corresponds to our own primary auditory region. Equipped with carefully implanted electrodes, the alert ferrets listened to both ferret sounds and parts of human speech. The ferret sounds and speech were presented alone, against a background of white noise, against pink noise (noise with equal energy at each octave, which sounds lower in pitch than white noise) and against reverberation. The researchers then took the neural signals recorded from the electrodes and used a computer simulation to reconstruct the sounds the animal was hearing.

In results published April 21 in Proceedings of the National Academy of Sciences, the researchers show that the ferret brain is quite good at detecting both ferret sounds and speech in all three noisy conditions.
“We found that the noise is drastically decreased, as if the brain of the ferret filtered it out and recovered the cleaned speech,” Mesgarani says. © Society for Science & the Public 2000 - 2013.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 19553 - Posted: 04.30.2014

Intelligence is hard to test, but one aspect of being smart is self-control, and a version of the old shell game, adapted to work for many species, suggests that brain size is a strong predictor of it.

When it comes to animal intelligence, says Evan MacLean, co-director of Duke University’s Canine Cognition Center, don’t ask which species is smarter. “Smarter at what?” is the right question. Many different tasks, requiring many different abilities, are given to animals to measure cognition, and narrowing the question takes on particular importance when the comparisons are across species.

So Dr. MacLean, Brian Hare and Charles Nunn, also Duke scientists who study animal cognition, organized a worldwide effort by 58 scientists to test 36 species on a single ability: self-control. This capacity is thought to be part of thinking because it enables animals to override a strong, nonthinking impulse and to solve a problem that requires some analysis of the situation in front of them. The testing program, which took several international meetings to arrange and about seven years to complete, looked at two common tasks that are accepted ways to judge self-control. It then tried to correlate how well the animals did on the tests with other measures, like brain size, diet and the size of their normal social groups.

Unsurprisingly, the great apes did very well. Dogs and baboons did pretty well. And squirrel monkeys, marmosets and some birds were among the worst performers. Surprisingly, absolute brain size turned out to be a much better predictor of success than relative brain size, which has long been thought to be a good indication of intelligence. Social group size was not significant, but variety of diet was.

The paper, published last week in the journal Proceedings of the National Academy of Sciences, is accompanied online by videos showing the animals doing what looks for all the world like the shell game in which a player has to guess where the pea is. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 19551 - Posted: 04.29.2014

By LAURENCE STEINBERG

I’m not sure whether it’s a badge of honor or a mark of shame, but a paper I published a few years ago is now ranked No. 8 on a list of studies that other psychologists would most like to see replicated. Good news: People find the research interesting. Bad news: They don’t believe it.

The paper in question, written with my former student Margo Gardner, appeared in the journal Developmental Psychology in July 2005. It described a study in which we randomly assigned subjects to play a video driving game, either alone or with two same-age friends watching them. The mere presence of peers made teenagers take more risks and crash more often, but no such effect was observed among adults.

I find my colleagues’ skepticism surprising. Most people recall that as teenagers, they did far more reckless things when with their friends than when alone. Data from the Federal Bureau of Investigation indicate that many more juvenile crimes than adult crimes are committed in groups. And driving statistics conclusively show that having same-age passengers in the car substantially increases the risk of a teen driver’s crashing but has no similar impact when an adult is behind the wheel.

Then again, I’m aware that our study challenged many psychologists’ beliefs about the nature of peer pressure, for it showed that the influence of peers on adolescent risk taking doesn’t rely solely on explicit encouragement to behave recklessly. Our findings also undercut the popular idea that the higher rate of real-world risk taking in adolescent peer groups is a result of reckless teenagers’ being more likely to surround themselves with like-minded others.

My colleagues and I have replicated our original study of peer influences on adolescent risk taking several times since 2005. We have also shown that the reason teenagers take more chances when their peers are around is partly because of the impact of peers on the adolescent brain’s sensitivity to rewards.
In a study of people playing our driving game, my colleague Jason Chein and I found that when teens were with people their own age, their brains’ reward centers became hyperactivated, which made them more easily aroused by the prospect of a potentially pleasurable experience. This, in turn, inclined teenagers to pay more attention to the possible benefits of a risky choice than to the likely costs, and to make risky decisions rather than play it safe. Peers had no such effect on adults’ reward centers, though. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 11: Emotions, Aggression, and Stress
Link ID: 19544 - Posted: 04.28.2014

Forget cellphones; rambunctious friends may be the riskiest driver distraction for teens, according to a new study. Researchers installed video and G-force recorders in the vehicles of 52 newly licensed high school students for 6 months. They found that certain distractions, such as fiddling with the car’s controls and eating, were not strongly related to serious incidents, which included collisions and evasive maneuvers. However, when passengers in the car were engaged in loud conversation, teen drivers were six times more likely to have a serious incident. What’s more, horseplay increased risk by a factor of three whereas cellphone use only doubled it, the team reported online this week in the Journal of Adolescent Health. Forty-three states restrict newly licensed drivers from having more than one other teen in the car, and the study authors say their data suggest that's good policy. © 2014 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 13: Memory, Learning, and Development
Link ID: 19514 - Posted: 04.22.2014

Anne Trafton | MIT News Office

Picking out a face in the crowd is a complicated task: Your brain has to retrieve the memory of the face you’re seeking, then hold it in place while scanning the crowd, paying special attention to finding a match.

A new study by MIT neuroscientists reveals how the brain achieves this type of focused attention on faces or other objects: A part of the prefrontal cortex known as the inferior frontal junction (IFJ) controls visual processing areas that are tuned to recognize a specific category of objects, the researchers report in the April 10 online edition of Science.

Scientists know much less about this type of attention, known as object-based attention, than about spatial attention, which involves focusing on what’s happening in a particular location. However, the new findings suggest that these two types of attention have similar mechanisms involving related brain regions, says Robert Desimone, the Doris and Don Berkey Professor of Neuroscience, director of MIT’s McGovern Institute for Brain Research, and senior author of the paper.

“The interactions are surprisingly similar to those seen in spatial attention,” Desimone says. “It seems like it’s a parallel process involving different areas.” In both cases, the prefrontal cortex — the control center for most cognitive functions — appears to take charge of the brain’s attention and control relevant parts of the visual cortex, which receives sensory input. For spatial attention, that involves regions of the visual cortex that map to a particular area within the visual field.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 19478 - Posted: 04.12.2014

In an op-ed in the Sunday edition of this newspaper, Barbara Ehrenreich, card-carrying liberal rationalist, writes about her own mystical experiences (the subject of her new book), and argues that the numinous deserves more cutting-edge scientific study. I appreciate the spirit (if you will) of this argument, but I am very doubtful as to its application.

The trouble is that in its current state, cognitive science has a great deal of difficulty explaining “what happens” when “those wires connect” for non-numinous experience, which is why mysterian views of consciousness remain so potent even among thinkers whose fundamental commitments are atheistic and materialistic. (I’m going to link to the internet’s sharpest far-left scold for a good recent polemic on this front.) That is to say, even in contexts where it’s very easy to identify the physical correlative of a given mental state, and to get the kind of basic repeatability that the scientific method requires — show someone an apple, ask them to describe it; tell them to bite into it, ask them to describe the taste; etc. — there is no scientific or philosophical agreement on what is actually happening to produce the conscious experience of the color “red,” the conscious experience of the crisp McIntosh taste, etc.

So if we can’t say how this “normal” conscious experience works, even when we can easily identify the physical stimuli that produce it, it seems exponentially harder to scientifically investigate the invisible, maybe-they-exist and maybe-they-don’t stimuli — be they divine, alien, or panpsychic — that Ehrenreich hypothesizes might produce more exotic forms of conscious experience. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 11: Emotions, Aggression, and Stress
Link ID: 19468 - Posted: 04.10.2014

If you know only one thing about violins, it is probably this: A 300-year-old Stradivarius supposedly possesses mysterious tonal qualities unmatched by modern instruments. However, even elite violinists cannot tell a Stradivarius from a top-quality modern violin, a new double-blind study suggests.

Like the sound of coughing during the delicate second movement of Beethoven's violin concerto, the finding seems sure to annoy some people, especially dealers who broker the million-dollar sales of rare old Italian fiddles. But it may come as a relief to the many violinists who cannot afford such prices. "There is nothing magical [about old Italian violins], there is nothing that is impossible to reproduce," says Olivier Charlier, a soloist who participated in the study and who plays a fiddle made by Carlo Bergonzi (1683 to 1747). However, Yi-Jia Susanne Hou, a soloist who participated in the study and who until recently played a violin by Bartolomeo Giuseppe Antonio Guarneri "del Gesù" (1698 to 1744), questions whether the test was fair. "Whereas I believe that [the researchers] assembled some of the finest contemporary instruments, I am quite certain that they didn't have some of the finest old instruments that exist," she says.

The study marks the latest round in the debate over the "secret of Stradivarius." Some violinists, violinmakers, and scientists have thought that Antonio Stradivari (1644 to 1737) and his contemporaries in Cremona, Italy, possessed some secret—perhaps in the varnish or the wood they used—that enabled them to make instruments of unparalleled quality. Yet for decades researchers have failed to identify a single physical characteristic that distinguishes the old Italians from other top-notch violins. The varnish is varnish; the wood (spruce and maple) isn't unusual. Moreover, for decades tests have shown that listeners cannot tell an old Italian from a modern violin. © 2014 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 19460 - Posted: 04.08.2014

By ANA GANTMAN and JAY VAN BAVEL

Take a close look at your breakfast. Is that Jesus staring out at you from your toast? Such apparitions can be as lucrative as they are seemingly miraculous. In 2004, a Florida woman named Diane Duyser sold a decade-old grilled cheese sandwich that bore a striking resemblance to the Virgin Mary. She got $28,000 for it on eBay.

The psychological phenomenon of seeing something significant in an ambiguous stimulus is called pareidolia. Virgin Mary grilled cheese sandwiches and other pareidolia remind us that almost any object is open to multiple interpretations. Less understood, however, is what drives some interpretations over others. In a forthcoming paper in the journal Cognition, we hope to shed some light on that question.

In a series of experiments, we examined whether awareness of perceptually ambiguous stimuli was enhanced by the presence of moral content. We quickly flashed strings of letters on a computer screen and asked participants to indicate whether they believed each string formed a word or not. To ensure that the letter strings were perceptually ambiguous, we flashed them for between approximately 40 and 70 milliseconds. (When they were presented for too long, people easily saw all the letter strings and demonstrated close to 100 percent accuracy. When they were presented too quickly, people were unable to see the words and performed “at chance,” around 50 percent accuracy.)

Some of the strings of letters we flashed were words, others were not. Importantly, some of the words we flashed had moral content (virtue, steal, God) and others did not (virtual, steel, pet). Over the course of three experiments, we found that participants correctly identified strings of letters as words more often when they formed moral words (69 percent accuracy) than when they formed nonmoral words (65 percent accuracy). This suggested that moral content gave a “boost” to perceptually ambiguous stimuli — a shortcut to conscious awareness.
We call this phenomenon the “moral pop-out effect.” © 2014 The New York Times Company

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 19453 - Posted: 04.07.2014

By BRAYDEN KING and JERRY KIM

This season Major League Baseball is allowing its officiating crews to use instant replay to review certain critical calls, including home runs, force plays and foul balls. But the calling of the strike zone — determining whether a pitch that is not swung at is a ball or a strike — will still be left completely to the discretion of the officials. This might seem an odd exception, since calling the strike zone may be the type of officiating decision most subject to human foible.

In research soon to be published in the journal Management Science, we studied umpires’ strike-zone calls using pitch-location data compiled by the high-speed cameras introduced by Major League Baseball several years ago in an effort to measure, monitor and reward umpires’ accuracy. After analyzing more than 700,000 pitches thrown during the 2008 and 2009 seasons, we found that umpires frequently made errors behind the plate — about 14 percent of non-swinging pitches were called erroneously.

Some of those errors occurred in fairly predictable ways. We found, for example, that umpires tended to favor the home team by expanding the strike zone, calling a strike when the pitch was actually a ball 13.3 percent of the time for home-team pitchers versus 12.7 percent of the time for visitors.

Other errors were more surprising. Contrary to the expectation (or hope) that umpires would be more accurate in important situations, we found that they were, in fact, more likely to make mistakes when the game was on the line. For example, our analyses suggest that umpires were 13 percent more likely to miss an actual strike in the bottom of the ninth inning of a tie game than in the top of the first inning, on the first pitch. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 19426 - Posted: 03.31.2014

Andreas von Bubnoff

People who are unable to recognize faces can still learn to distinguish between other types of very similar objects, researchers report. The finding provides fresh support for the idea that the brain mechanisms that process face images are specialized for that task. It also offers evidence against an 'expertise' hypothesis, in which the same mechanisms are responsible for recognition of faces and other highly similar objects we have learned to tell apart — the way bird watchers can recognize birds after years of training.

Constantin Rezlescu, a psychologist at Harvard University in Cambridge, Massachusetts, and his colleagues worked with two volunteers nicknamed Florence and Herschel, who had acquired prosopagnosia, or face blindness, following brain damage. The condition renders people unable to recognize and distinguish between faces — in some cases, even those of their own family members.

The team trained Florence and Herschel to recognize greebles, computer-generated objects that differ from one another in ways similar to how faces differ. The two volunteers spent eight training sessions of up to one hour learning to recognize 20 different greebles. The objects belonged to five different ‘families’ that were easier to distinguish between than were the individual greebles, and initially the participants took longer to discriminate individual greebles within the same family than they did the different families. But by the end of the training, they could tell individual greebles apart just as quickly — a sign that they had become experts in recognizing them, just as dog trainers can recognize individual dogs as easily as different breeds of dog. The study appears in Proceedings of the National Academy of Sciences. © 2014 Nature Publishing Group

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 19408 - Posted: 03.25.2014

By Nathan Collins

A car detects when a driver starts to nod off and gently pulls over. A tablet or laptop senses its user is confused and offers assistance. Such interventions seem futuristic, but in fact they may not require any technological breakthroughs: a recent study suggests that with the aid of a standard camera, a simple computer program can learn to read people's eye movements to determine what they are doing and perhaps how they are feeling.

Psychologists at the University of South Carolina were curious whether a computer could figure out what a person was up to based on their eye movements. They first had 12 people engage in four tasks, including reading lines of text and searching photographs for a specific printed letter. Each person repeated the tasks 35 to 50 times while a camera recorded how their eyes moved. Using a subset of those data, the team trained a simple computer program, called a naive Bayes classifier, to identify which of the four tasks each person was doing. In the remaining trials, the classifier correctly determined which task the person was working on 75 percent of the time, well above the 25 percent expected by chance.

Because the computer program is based on a flexible algorithm that is simple but powerful, this setup could most likely be used to identify emotions or mental states such as confusion or fatigue, the researchers suggest in the paper, which appeared in September 2013 in PLOS ONE. With only a brief training period, a car's onboard computer—existing models are more than powerful enough—could learn how a driver's gaze changed as he or she became more exhausted. Further work, the authors suggest, could lead to devices capable of identifying and aiding people in need of assistance in a variety of situations. © 2014 Scientific American
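A naive Bayes classifier of the kind the study trained assumes each feature is independent given the class and picks the class with the highest posterior probability. A minimal Gaussian version in plain Python; note that the feature values below are invented for illustration, since the article does not list the study's actual eye-movement features.

```python
import math
from collections import defaultdict

class GaussianNB:
    """Minimal Gaussian naive Bayes: per-class priors plus per-feature
    means and variances, combined as independent log-likelihoods."""

    def fit(self, X, y):
        self.stats = {}
        by_class = defaultdict(list)
        for features, label in zip(X, y):
            by_class[label].append(features)
        n = len(X)
        for label, rows in by_class.items():
            prior = len(rows) / n
            means = [sum(col) / len(rows) for col in zip(*rows)]
            vars_ = [max(sum((v - m) ** 2 for v in col) / len(rows), 1e-9)
                     for col, m in zip(zip(*rows), means)]
            self.stats[label] = (prior, means, vars_)
        return self

    def predict(self, x):
        def log_post(label):
            prior, means, vars_ = self.stats[label]
            lp = math.log(prior)
            for v, m, var in zip(x, means, vars_):
                lp += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
            return lp
        return max(self.stats, key=log_post)

# Invented features for illustration: [mean fixation duration (ms),
# mean saccade length (px)] for two of the four tasks.
X = [[210, 40], [220, 38], [190, 160], [185, 155]]
y = ["reading", "reading", "search", "search"]
model = GaussianNB().fit(X, y)
print(model.predict([215, 39]))  # prints "reading"
```

Fitting amounts to computing summary statistics per class, which is why the study could get away with such a brief training period: no iterative optimization is needed.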

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 7: Vision: From Eye to Brain
Link ID: 19379 - Posted: 03.19.2014

Sara Reardon

Two monkeys sit at computer screens, eyeing one another as they wait for a promised reward: apple juice. Each has a choice — it can either select a symbol that results in juice being shared equally, or pick one that delivers most of the juice to itself. But being selfish is risky. If its partner also chooses not to share, neither gets much juice.

This game, the ‘prisoner’s dilemma’, is a classic test of strategy that involves the simultaneous evaluation of an opponent’s thinking. Researchers have now discovered — and manipulated — specific brain circuits in rhesus macaques (Macaca mulatta) that seem to be involved in the animals’ choices, and in their assessments of their partners’ choices. Investigating the connections could shed light on how social context affects decision-making in humans, and how disorders that affect social skills, such as autism spectrum disorder, disrupt brain circuitry. “Once we have identified that there are particular neural signals necessary to drive the processes, we can begin to tinker,” says Michael Platt, a neurobiologist at Duke University in Durham, North Carolina.

Neurobiologists Keren Haroush and Ziv Williams of Harvard Medical School in Boston, Massachusetts, zoomed in on neural circuits in rhesus macaques by implanting electrode arrays into a brain area called the dorsal anterior cingulate cortex (dACC), which is associated with rewards and decision-making. The arrays recorded the activity of hundreds of individual neurons. When the monkeys played the prisoner’s dilemma (see ‘A juicy experiment’) against a computer program, they rarely chose to cooperate. But when they played with another monkey that they could see, they were several times more likely to choose to share the juice. © 2014 Nature Publishing Group
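What makes the game a genuine dilemma is its payoff ordering T > R > P > S: keeping the juice dominates for each monkey individually, yet mutual sharing beats mutual keeping. A minimal sketch with hypothetical juice amounts; the article does not report the study's actual reward sizes.

```python
# Hypothetical juice payoffs in arbitrary units (T=5 > R=3 > P=1 > S=0),
# the classic prisoner's dilemma ordering; not the study's actual amounts.
PAYOFF = {
    ("share", "share"): (3, 3),  # mutual cooperation: both get R
    ("share", "keep"):  (0, 5),  # cooperator gets S, defector gets T
    ("keep",  "share"): (5, 0),
    ("keep",  "keep"):  (1, 1),  # mutual defection: both get P
}

def payoff(choice_a, choice_b):
    """Juice for monkey A and monkey B given their simultaneous choices."""
    return PAYOFF[(choice_a, choice_b)]

# Whatever B does, A earns more juice by keeping...
for b in ("share", "keep"):
    assert payoff("keep", b)[0] > payoff("share", b)[0]
# ...yet both animals do better under mutual sharing than mutual keeping.
assert payoff("share", "share") > payoff("keep", "keep")
```

Against the computer the monkeys converged on the dominant "keep" strategy; the interesting finding is that seeing a live partner pushed them toward the cooperative outcome instead.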

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 11: Emotions, Aggression, and Stress
Link ID: 19297 - Posted: 02.26.2014

Want to read someone’s mind? Look at their pupils. A person about to answer “yes” to a question, especially if they are more used to answering “no,” will have more enlarged pupils than someone about to answer “no,” according to a new study. Normally, pupils dilate when a person is in a darkened environment to let more light into the eye and allow better vision. But pupil size can also be altered by levels of signaling chemicals naturally produced by the brain. In the study, published online this week in the Proceedings of the National Academy of Sciences, scientists observed the pupils of 29 people as they pressed a “yes” or “no” button to indicate whether they’d seen a difficult-to-detect visual cue on a screen in front of them. When a person was deciding how to answer—in the seconds before pressing a button—their pupils grew larger. And if a person was normally biased toward answering “no” when they weren’t sure on the visual cue, then the pupil change was even more profound in the decision-making seconds before a “yes” answer. The finding could lead to new ways to detect people’s intrinsic biases and how confident they are in an answer given, important variables in many sociological and psychological studies. © 2014 American Association for the Advancement of Science.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 7: Vision: From Eye to Brain
Link ID: 19166 - Posted: 01.25.2014

By DAN HURLEY Two and a half millenniums ago, a prince named Siddhartha Gautama traveled to Bodh Gaya, India, and began to meditate beneath a tree. Forty-nine days of continuous meditation later, tradition tells us, he became the Buddha — the enlightened one. More recently, a psychologist named Amishi Jha traveled to Hawaii to train United States Marines to use the same technique for shorter sessions to achieve a much different purpose: mental resilience in a war zone. “We found that getting as little as 12 minutes of meditation practice a day helped the Marines to keep their attention and working memory — that is, the added ability to pay attention over time — stable,” said Jha, director of the University of Miami’s Contemplative Neuroscience, Mindfulness Research and Practice Initiative. “If they practiced less than 12 minutes or not at all, they degraded in their functioning.” Jha, whose program has received a $1.7 million, four-year grant from the Department of Defense, described her results at a bastion of scientific conservatism, the New York Academy of Sciences, during a meeting on “The Science of Mindfulness.” Yet mindfulness hasn’t long been part of serious scientific discourse. She first heard another scientist mention the word “meditation” during a lecture in 2005. “I thought, I can’t believe he just used that word in this audience, because it wasn’t something I had ever heard someone utter in a scientific context,” Jha said. Although pioneers like Jon Kabat-Zinn, now emeritus professor at the University of Massachusetts Medical Center, began teaching mindfulness meditation as a means of reducing stress as far back as the 1970s, all but a dozen or so of the nearly 100 randomized clinical trials have been published since 2005. And the most recent studies of mindfulness — the simple, nonjudgmental observation of a person’s breath, body or just about anything else — are taking the practice in directions that might have shocked the Buddha. 
In addition to military fitness, scientists are now testing brief stints of mindfulness training as a means to improve scores on standardized tests and lay down new connections between brain cells. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 19142 - Posted: 01.16.2014

By Emilie Reas “Come on. Get out of the express checkout lane! That’s way more than twelve items, lady.” Without having to count, you can make a good guess at how many purchases the shopper in front of you is making. She may think she’s pulling a fast one, but thanks to the brain’s refined sense for quantity, she’s not fooling anyone. This ability to perceive numerosity – or number of items – does more than help prevent express lane fraud; it also builds the foundation for our arithmetic skills, the economic system and our concept of value. Until recently, it’s remained a puzzle how the brain allows us to so quickly and accurately judge quantity. Neuroscientists believe that neural representations of most high-level cognitive concepts – for example, those involved in memory, language or decision-making – are distributed, in a relatively disorganized manner, throughout the brain. In contrast, highly organized, specialized brain regions have been identified that represent most lower-level sensory information, such as sights, sounds, or physical touch. Such areas resemble maps, in that sensory information is arranged in a logical, systematic spatial layout. Notably, this type of neural topography has only previously been observed for the basic senses, but never for a high-level cognitive function. Researchers from the Netherlands may have discovered an exception to this rule, as reported in their recently published Science paper: a small brain area that represents numerosity along a continuous “map.” Just as we organize numbers along a mental “number line,” with one at the left, increasing in magnitude to the right, so is quantity mapped onto space in the brain. One side of this brain region responds to small numbers, the adjacent region to larger numbers, and so on, with numeric representations increasing to the far end. © 2014 Scientific American.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 19135 - Posted: 01.15.2014

by Helen Thomson DRAW a line across a page, then write on it what you had for dinner yesterday and what you plan to eat tomorrow. If you are a native English speaker, or hail from pretty much any European country, you no doubt wrote last night's meal to the left of tomorrow night's. That's because we construct mental timelines to represent and reason about time, and most people in the West think of the past as on the left, and the future as on the right. Arnaud Saj at the University of Geneva, Switzerland, and his colleagues wondered whether the ability to conjure up a mental timeline is a necessary part of reasoning about events in time. To investigate, they recruited seven Europeans with what's called left hemispatial neglect. That means they have damage to parts of the right side of their brain, limiting their ability to detect, identify and interact with objects in the left-hand side of space. They may eat from only the right side of a plate, shave just the right side of their face, and ignore numbers on the left side of a clock. The team also recruited seven volunteers who had damage to the right side of their brain but didn't have hemispatial neglect, and seven people with undamaged brains. All the volunteers took part in a variety of memory tests. First, they learned about a fictional man called David. They were shown pictures of what David liked to eat 10 years ago, and what he might like to eat in 10 years' time. Participants were then shown drawings of 10 of David's favourite foods, plus four food items they hadn't seen before. Participants had to say whether it was a food that David liked in the past or might like in future. The tests were repeated with items in David's apartment, and his favourite clothes. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 19095 - Posted: 01.04.2014

Associated Press A sophisticated, real-world study confirms that dialing, texting or reaching for a cell phone while driving raises the risk of a crash or near-miss, especially for younger drivers. But the research also produced a surprise: Simply talking on the phone did not prove dangerous, as it has in other studies. This one did not distinguish between handheld and hands-free devices - a major weakness. And even though talking doesn't require drivers to take their eyes off the road, it's hard to talk on a phone without first reaching for it or dialing a number - things that raise the risk of a crash, researchers note. Earlier work with simulators, test tracks and cell phone records suggests that risky driving increases when people are on cell phones, especially teens. The 15- to 20-year-old age group accounts for 6 percent of all drivers but 10 percent of traffic deaths and 14 percent of police-reported crashes with injuries. For the new study, researchers at the Virginia Tech Transportation Institute installed video cameras, global positioning systems, lane trackers, gadgets to measure speed and acceleration, and other sensors in the cars of 42 newly licensed drivers 16 or 17 years old, and 109 adults with an average of 20 years behind the wheel. © 2014 Hearst Communications, Inc.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Consciousness
Link ID: 19091 - Posted: 01.04.2014