Links for Keyword: Attention
By Bruce Bower Even 6-month-old babies can rapidly estimate approximate numbers of items without counting. But surprisingly, an apparently inborn sense for numbers doesn’t top out until around age 30. Number sense precision gradually declines after that, generally falling to preteen levels by about age 70, say psychologist Justin Halberda of Johns Hopkins University in Baltimore and his colleagues. They report the findings, based on Internet testing of more than 10,000 volunteers ages 11 to 85, online the week of June 25 in the Proceedings of the National Academy of Sciences. “I expected to see some improvement in number sense into preschool or maybe early elementary school, but not up to age 30,” Halberda says. Evidence of critical mental abilities peaking after young adulthood is rare but has been reported for face memory (SN: 1/1/11, p. 16). Participants in the new study completed a game that tested the precision of their number sense, or how accurately they could assess quantities. Volunteers saw a series of images showing mixes of blue and yellow dots and judged which color dot was more numerous. Each dot array appeared for a fraction of a second. In some dot arrays, one color greatly outnumbered the other. In other arrays, one color slightly outnumbered the other. Test-takers of the same age showed large differences in how accurately they could assess the dots, with the highest average scores coming around age 30, the researchers report. Teens and adults with a robust number sense reported doing moderately better at math in school and on the math portion of the SAT than those with a weak number sense. © Society for Science & the Public 2000 - 2012
Related chapters from BP6e: Chapter 18: Attention and Higher Cognition; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 13: Memory, Learning, and Development
Link ID: 16974 - Posted: 06.27.2012
By ALEX STONE PINCH a coin at its edge between the thumb and first fingers of your right hand and begin to place it in your left palm, without letting go. Begin to close the fingers of the left hand. The instant the coin is out of sight, extend the last three digits of your right hand and secretly retract the coin. Make a fist with your left — as if holding the coin — as your right hand palms the coin and drops to the side. You’ve just performed what magicians call a retention vanish: a false transfer that exploits a lag in the brain’s perception of motion, called persistence of vision. When done right, the spectator will actually see the coin in the left palm for a split second after the hands separate. This bizarre afterimage results from the fact that visual neurons don’t stop firing once a given stimulus (here, the coin) is no longer present. As a result, our perception of reality lags behind reality by about one one-hundredth of a second. Magicians have long used such cognitive biases to their advantage, and in recent years scientists have been following in their footsteps, borrowing techniques from the conjurer’s playbook in an effort not to mystify people but to study them. Magic may seem an unlikely tool, but it’s already yielded several widely cited results. Consider the work on choice blindness — people’s lack of awareness when evaluating the results of their decisions. In one study, shoppers in a blind taste test of two types of jam were asked to choose the one they preferred. They were then given a second taste from the jar they picked. Unbeknown to them, the researchers swapped the flavors before the second spoonful. The containers were two-way jars, lidded at both ends and rigged with a secret compartment that held the other jam on the opposite side — a principle that’s been used to bisect countless showgirls. 
This seems like the sort of thing that wouldn’t scan, yet most people failed to notice that they were tasting the wrong jam, even when the two flavors were fairly dissimilar, like grapefruit and cinnamon-apple. © 2012 The New York Times Company
By ARIEL KAMINER YOU could drive past the hulking warehouse on the rough patch of waterfront in Sunset Park, Brooklyn, several times without ever figuring it for the latest frontier of neurological thrill-seeking. But that’s where Yehuda Duenyas, 38, who calls himself “a creator of innovative experiences,” was camped out last week, along with his team of scrappy young technical wizards and a quarter-million dollars’ worth of circuitry, theatrical lighting and optimism called “The Ascent.” Part art installation, part adventure ride, part spiritual journey, “The Ascent” claims to let users harness their brain’s own electrical impulses, measured through EEG readings, to levitate themselves. During its brief stay in New York, it welcomed representatives from cultural organizations like PS 122 and Lincoln Center, event promoters and friends of the team. In the shadowy vastness of the warehouse, “The Ascent” looked spare and heroic, like the setting for the final showdown between good and evil. Up high, a large circular track of lights and equipment hung from the ceiling. Down on the floor, another circle mirrored the one above, with incandescent bulbs illuminating transient puffs of smoke and casting the apparatus in a ghostly light. In the 30 feet between the lights above and the lights below, the air seemed heavy with magic and danger. An assistant outfitted me with a harness around my middle and a couple of EEG sensors across my forehead. Another assistant led me to the center of the circle and snapped me into the two hanging cables. For one long and mysterious moment, I stood alone in silence. Then the fun began. © 2012 The New York Times Company
by Carl Zimmer I dig a knife into a cardboard box, slit it open, and lift a plastic bottle of bright red fluid from inside. I set it down on my kitchen table, next to my coffee and eggs. The drink, called NeuroSonic, is labeled with a cartoon silhouette of a head, with a red circle where its brain should be. A jagged line—presumably the trace of an EKG—crosses the circle. And down at the very bottom of the bottle, it reads, “Mental performance in every bottle.” My office is full of similar boxes: Dream Water (“Dream Responsibly”), Brain Toniq (“The clean and intelligent think drink”), iChill (“helps you relax, reduce stress, sleep better”), and Nawgan (“What to Drink When You Want to Think”). These products contain mixtures of neurotransmitters, hormones, and neuroactive amino acids, but you don’t need a prescription to buy them. I ordered mine on Amazon, and you can even find them in many convenience stores. I unscrew the cap from one of them and take a gulp. NeuroSonic tastes like cherry and aluminum. I wait for my neurons to light up. While I wait I call nutrition scientist Chris Noonan, who serves as adviser to Neuro, the company that makes NeuroSonic and a line of other elixirs for the brain. The inspiration for NeuroSonic came from the huge success of energy drinks, the caffeine-loaded potions now earning over $6 billion a year in the United States. The company’s founder, Diana Jenkins, posed a question: “Instead of just having a regular caffeinated energy drink, could we also include nutrients for cognitive enhancement and cognitive health?” Her team searched the scientific literature for compounds, eventually zeroing in on L-theanine, an amino acid found in green tea. © 2012, Kalmbach Publishing Co.
Related chapters from BP6e: Chapter 4: The Chemical Bases of Behavior: Neurotransmitters and Neuropharmacology; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology; Chapter 14: Attention and Consciousness
Link ID: 16922 - Posted: 06.16.2012
David Cyranoski Adrian Owen still gets animated when he talks about patient 23. The patient was only 24 years old when his life was devastated by a car accident. Alive but unresponsive, he had been languishing in what neurologists refer to as a vegetative state for five years, when Owen, a neuroscientist then at the University of Cambridge, UK, and his colleagues at the University of Liège in Belgium, put him into a functional magnetic resonance imaging (fMRI) machine and started asking him questions. Incredibly, he provided answers. A change in blood flow to certain parts of the man's injured brain convinced Owen that patient 23 was conscious and able to communicate. It was the first time that anyone had exchanged information with someone in a vegetative state. Patients in these states have emerged from a coma and seem awake. Some parts of their brains function, and they may be able to grind their teeth, grimace or make random eye movements. They also have sleep–wake cycles. But they show no awareness of their surroundings, and doctors have assumed that the parts of the brain needed for cognition, perception, memory and intention are fundamentally damaged. They are usually written off as lost. Owen's discovery, reported in 2010, caused a media furore. Medical ethicist Joseph Fins and neurologist Nicholas Schiff, both at Weill Cornell Medical College in New York, called it a “potential game changer for clinical practice”. The University of Western Ontario in London, Canada, soon lured Owen away from Cambridge with Can$20 million (US$19.5 million) in funding to make the techniques more reliable, cheaper, more accurate and more portable — all of which Owen considers essential if he is to help some of the hundreds of thousands of people worldwide in vegetative states. © 2012 Nature Publishing Group
By Mariette DiChristina “I see you have a watch with a buckle.” Standing at my side, Apollo Robbins held my wrist lightly as he turned my hand over and back. I knew exactly what was coming but I fell for it anyway. “Yes,” I said, trying to keep an eye on him, “that looks pretty easy for you to take off, but my rings would be harder.” He agreed, politely, while looking down at my hands and then up into my eyes: “Which one do you think would be hardest to remove?” While I considered the answer, he had already removed my watch and put it on his own wrist behind his back, unseen. He isn’t called “The Gentleman Thief” for nothing. Robbins had just skillfully managed my attentional spotlight—that is, the focus of awareness at any given moment. To conceal his pilfering, Robbins had employed what is generally called “misdirection”: he got me to attend to the wrong things, added to my brain’s cognitive load with his humorous patter, created a distracting internal dialogue in me by giving me a question to answer, and generally flummoxed me all the while by pressing here and there on a shoulder or wrist. Adding insult to injury, Robbins had just described what he does—and shown his techniques while swiftly lifting another watch and emptying the pockets of the amiable Flip Phillips of Skidmore College. Still, I never stood a chance. My response to being fooled so easily? I laughed out loud. © 2012 Scientific American
By Brian Fung We often like to think of our brains as a single device, the unitary executive governing the republic of our limbs and thoughts. While there's some truth to that, the reality is much more complex. In fact, not only do different parts of the brain perform different functions, but many of our basic activities -- such as quoting a song lyric or calculating a waiter's tip -- actually activate multiple regions of the brain that fire in perfect coordination with one another. When these otherwise independent parts of the brain work together, they operate in what's called a brain network: Large scale brain network research suggests that cognitive functioning is the result of interactions or communication between different brain systems distributed throughout the brain. That is, when performing a particular task, just one isolated brain area is not working alone. Instead, different areas of the brain, often far apart from each other within the geographic space of the brain, are communicating through a fast-paced synchronized set of brain signals. These networks can be considered preferred pathways for sending signals back and forth to perform a specific set of cognitive or motor behaviors. With all that the brain has to process over the course of a day, you might expect the various networks' signals to interfere with one another, much as overloading a cell phone tower might result in a dropped call or two. © 2012 by The Atlantic Monthly Group
By Bruce Bower It’s prime time in social psychology for studying primes, a term for cues that go unnoticed but still sway people’s attitudes and behavior. Primes have been reported to influence nearly every facet of social life, at least in lab experiments. Subtle references to old age can cause healthy college students to slow their walking pace without realizing it. Cunningly presented cues about money nudge people to become more self-oriented and less helpful to others. And people holding a hot cup of coffee are more apt to judge strangers as having warm personalities. Over the last 15 years, many social psychologists have come to regard the triggering of personal tendencies by unnoticed cues as an established phenomenon. Priming may even inspire innovative mental health treatments, some argue. Yale University psychologist John Bargh likens primes to whistles that only mental butlers can hear. Once roused by primes, these silent inner servants dutifully act on a person’s preexisting tendencies and preferences without making a conscious commotion. Many animals reflexively take appropriate actions in response to fleeting smells and sounds associated with predators or potential mates, suggesting an ancient evolutionary heritage for priming, Bargh says. People can pursue actions on their own initiative, but mental butlers strive to ease the burden on the conscious lord of the manor. © Society for Science & the Public 2000 - 2012
John von Radowitz Slackers may have brains that are wired for under-achievement, a study suggests. Scientists have identified neural pathways that appear to influence an individual's willingness to work hard to earn money. Scans showed differences between "go-getters" and "slackers" in three specific areas of the brain. People prepared to work hard for rewards had more of the nerve-signalling chemical dopamine in two brain regions called the striatum and ventromedial prefrontal cortex. Both are known to play an important role in behaviour-changing reward sensations and motivation. But "slackers", who were less willing to work hard for reward, had higher dopamine levels in the anterior insula. This is a brain region involved in emotion and risk perception. Dopamine is a "neurotransmitter" that helps nerves "talk" to each other by sending chemical signals across connection points called synapses. Psychologist Michael Treadway, from Vanderbilt University in Nashville, US, who co-led the research, said: "Past studies in rats have shown that dopamine is crucial for reward motivation. But this study provides new information about how dopamine determines individual differences in the behaviour of human reward-seekers." The findings are reported in the latest issue of the Journal of Neuroscience. © independent.co.uk
Related chapters from BP6e: Chapter 18: Attention and Higher Cognition; Chapter 4: The Chemical Bases of Behavior: Neurotransmitters and Neuropharmacology
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Link ID: 16736 - Posted: 05.02.2012
by Jane J. Lee Companies and health organizations spend millions of dollars on surveys, polls, and focus groups trying to suss out what people will like, buy, or do. But research shows that these techniques aren't all that accurate. Can brain scans do any better? It's possible, according to a new study that finds that neural activity predicts people's responses to a public service ad about cigarette smoking better than simply asking a focus group. Researchers led by neuroscientist Emily Falk at the University of Michigan, Ann Arbor, and Matthew Lieberman, a social neuroscientist at the University of California, Los Angeles, focused on the medial prefrontal cortex (MPFC), located at the front of the brain. Of the many roles its neurons play, scientists were most interested in the ones related to self-reflection, thinking of what you value, and identity. Activity in this region increases when people identify with what they see or try to determine the value of something as it relates to them. A previous study by Falk found that MPFC activity that was recorded while people viewed slides with messages urging regular sunscreen use predicted which individuals were most likely to comply. But Lieberman and Falk wanted to go a step further and see if activity in the MPFC in one group of people could predict the behavior of a much bigger population. They looked at the effectiveness of three ad campaigns aimed at getting smokers to call the National Cancer Institute's quit hotline. The researchers took functional magnetic resonance imaging scans of brain activity in 30 heavy smokers who intended to quit, evenly split between men and women and ranging from 28 to 69 years old, as they watched three ad campaigns. Then scientists asked participants to rank the campaigns according to how effective they thought they'd be for the public. © 2010 American Association for the Advancement of Science.
Related chapters from BP6e: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 11: Emotions, Aggression, and Stress
Link ID: 16723 - Posted: 04.28.2012
by Anil Ananthaswamy Is our ability to map numbers onto a physical space – such as along a line – a cultural invention rather than an innate ability? Members of a remote tribe in Papua New Guinea understand the concept of numbers but do not map them along a line, which suggests that the 'number line' must be learned. Researchers have long thought that the human brain is hardwired to associate numbers with physical space. The idea received a boost in 2002, when it was discovered that people with brain damage who were unable to fully perceive one side of their body had trouble interpreting the number line – they claimed, for example, that five is halfway between three and six (Nature, DOI: 10.1038/417138a). In 2008, Stanislas Dehaene of the National Institute of Health and Medical Research (INSERM) in Saclay, France, and colleagues found a subtle variation of the concept in the Mundurucu, an indigenous group in the Amazon with little or no formal education. The Mundurucu map numbers on to a line, but use a logarithmic scale rather than a typical linear scale – they allow plenty of room for small numbers but scrunch larger numbers together at the far end of the line. The finding suggested that the linear number line is a cultural invention, but the number line itself remained intact as an intuition shared by all humanity (Science, DOI: 10.1126/science.1156540). © Copyright Reed Business Information Ltd
By DAN HURLEY Early on a drab afternoon in January, a dozen third graders from the working-class suburb of Chicago Heights, Ill., burst into the Mac Lab on the ground floor of Washington-McKinley School in a blur of blue pants, blue vests and white shirts. Minutes later, they were hunkered down in front of the Apple computers lining the room’s perimeter, hoping to do what was, until recently, considered impossible: increase their intelligence through training. Games based on N-back tests require players to remember the location of a symbol or the sound of a particular letter presented just before (1-back), the time before last (2-back), the time before that (3-back) and so on. Some researchers say that playing games like this may actually make us smarter. “Can somebody raise their hand,” asked Kate Wulfson, the instructor, “and explain to me how you get points?” On each of the children’s monitors, there was a cartoon image of a haunted house, with bats and a crescent moon in a midnight blue sky. Every few seconds, a black cat appeared in one of the house’s five windows, then vanished. The exercise was divided into levels. On Level 1, the children earned a point by remembering which window the cat was just in. Easy. But the game is progressive: the cats keep coming, and the kids have to keep watching and remembering. © 2012 The New York Times Company
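The n-back mechanic described above can be sketched in a few lines of code. This is purely illustrative: the function name, the window indices, and the example sequences are invented here, not taken from the training program in the article.

```python
def nback_hits(stimuli, responses, n):
    """Count correct recalls in an n-back task.

    stimuli:   sequence of presented items (e.g. which of five
               windows the cat appeared in, coded 0-4)
    responses: the player's answer at each step -- their recollection
               of the item shown n steps earlier (None before step n,
               when no answer is possible yet)
    """
    hits = 0
    # A response at step i is correct if it matches the stimulus
    # presented n steps before (1-back: previous item; 2-back: the
    # item before that; and so on).
    for i in range(n, len(stimuli)):
        if responses[i] == stimuli[i - n]:
            hits += 1
    return hits

# Example: the cat appears in windows 2, 4, 1, 4, 0. A perfect
# 1-back player reports the previous window at every step.
stimuli = [2, 4, 1, 4, 0]
responses = [None, 2, 4, 1, 4]
print(nback_hits(stimuli, responses, 1))  # 4 correct recalls
```

Raising n is what makes the game progressive: the player must hold n items in working memory at once while new stimuli keep arriving.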
Related chapters from BP6e: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 13: Memory, Learning, and Development
Link ID: 16690 - Posted: 04.23.2012
By Julian De Freitas and Brandon Liverence Notice that, even as you fixate on the screen in front of you, you can still shift your attention to different regions in your peripheries. For decades, cognitive scientists have conceptualized attention as akin to a shifting spotlight that “illuminates” regions it shines upon, or as a zoom lens, focusing on things so that we see them in finer detail. These metaphors are commonplace because they capture the intuition that attention illuminates or sharpens things, and thus, enhances our perception of them. Some of the important early studies to directly confirm this intuition were conducted by NYU psychologist Marisa Carrasco and colleagues, who showed that attention enhances the perceived sharpness of attended patterns. In their experiment, participants saw two textured patterns presented side-by-side on a computer screen, and judged which of the two patterns looked sharper. However, just before the patterns appeared, an attention-attracting cue was flashed at the upcoming location of one of the patterns. They found that attended patterns were perceived as sharper than physically identical unattended patterns. In other words, attention may make physically blurry (or otherwise degraded) images appear sharper – much like a zoom lens on a camera. Subsequent studies by Carrasco’s group and others found that attention also enhances perception of other features – for example, color saturation, orientation, and speed. This research suggests that attention causes incoming sensory information from attended locations to be processed more fully, without changing the information itself. © 2012 Scientific American
Related chapters from BP6e: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 7: Vision: From Eye to Brain
Link ID: 16672 - Posted: 04.19.2012
By Laura Sanders The brain’s power to focus can make a single voice seem like the only sound in a room full of chatter, a new study shows. The results help explain how people can pick out a speaker from a jumbled stream of incoming sounds. A deeper understanding of this feat could help scientists better treat people who can’t sort out sound signals effectively, an ability that can decline with age. “I think this is a truly outstanding study, which has deep implications for the way we think about the auditory brain,” says auditory neuroscientist Christophe Micheyl of the University of Minnesota, who was not involved in the new research. For the project, engineer Nima Mesgarani and neurosurgeon Edward Chang, both of the University of California, San Francisco, studied what happens in the brains of people who are trying to follow one of two talkers, a scenario known to scientists as the cocktail party problem. Electrodes placed under the skulls of three people for an epilepsy treatment picked up signs of brain signals called high gamma waves produced by groups of nerve cells. The pattern and strength of these signals reflect which sounds people are paying attention to. “We are able to assess what someone is actually hearing — not just what’s coming in through their ears,” Chang says. Volunteers listened to two speakers, one female and one male, saying nonsense sentences such as “Ready tiger go to red two now.” The participants had to report the color and number spoken by the person who said one of two call signs (“ringo” or “tiger”). © Society for Science & the Public 2000 - 2012
Related chapters from BP6e: Chapter 18: Attention and Higher Cognition; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 16671 - Posted: 04.19.2012
By Maria Konnikova When I was seven years old, my mom took me to see Curly Sue. Though I don’t remember much of the movie, two scenes made quite the impression: the first, when James Belushi asks Alisan Porter to hit him on the head with a baseball bat, and the second, when Bill, Sue, and Grey sit in the 3-D movie theater. At first glance, that second one doesn’t seem to pack quite the same punch–insert pun grimace here–as a little girl swinging a huge bat at a man’s forehead. But I found it irresistible. A wide shot of the entire movie theater, and all of the faces—in 3-D glasses, of course—moving and reacting in perfect unison. Heads swerve left. Heads swerve right. Gasps. Ducks. Frowns. All in a beautifully choreographed synchronicity. What made the scene so memorable to me? I’m not entirely sure, but I can only imagine that it was awe at the realization that, at certain moments, we can all be made to experience the same emotions in similar fashion. I don’t think I ever understood before that when I watched a movie, it wasn’t just me watching and reacting. Everyone else was watching and reacting along with me. And chances are, they were doing it in much the same way. Twenty years later, researchers are finally beginning to understand what it is that makes the present-day film experience so binding on a profound level—and why it’s often difficult for older movies to keep up. It seems that filmmakers have over the years perfected the way to best capture—and keep—viewers’ attention. Through trial, error, and instinct, Hollywood has figured out how best to cater to the natural dynamic of our attention and how to capitalize on our naïve assumptions about the continuity of space, time, and action. © 2012 Scientific American
Related chapters from BP6e: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 11: Emotions, Aggression, and Stress
Link ID: 16653 - Posted: 04.16.2012
By Jonah Lehrer Eric Kandel is a titan of modern neuroscience. He won the Nobel Prize in 2000 not simply for discovering a new set of scientific facts (although he has discovered plenty of those), but for pioneering a new scientific approach. As he recounts in his memoir In Search of Memory, Kandel demonstrated that reductionist techniques could be applied to the brain, so that even something as mysterious as memory might be studied in sea slugs, as a function of kinase enzymes and synaptic proteins. (The memories in question involved the “habituation” of the slugs to a poke; they basically got bored of being prodded.) Because natural selection is a deeply conservative process – evolution doesn’t mess with success – it turns out that humans rely on almost all of the same neural ingredients as those invertebrates. Memory has a nearly universal chemistry. But Kandel is not just one of the most important scientists of our time – he’s also an omnivorous public intellectual, deeply knowledgeable about everything from German art to the history of psychoanalysis. In his marvelous new book, The Age of Insight, Kandel puts this learning on display. He dives into the cultural ferment of 19th century Vienna, seeking to understand why the city was such a fount of new ideas, but he also explores the neuroscience of aesthetics, attempting to explain why some works of art, such as Klimt’s “Adele Bloch-Bauer I,” continue to haunt us. In many respects, the book imitates those famous Viennese salons, in which artists, scientists and doctors exchanged ideas and gave birth to a new way of thinking about the mind. (The city was a case-study in consilience.) If you’re interested in the intersection of art and science, the book is a must-read. © 2012 Condé Nast.
Related chapters from BP6e: Chapter 15: Emotions, Aggression, and Stress; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 11: Emotions, Aggression, and Stress; Chapter 13: Memory, Learning, and Development
Link ID: 16622 - Posted: 04.09.2012
By Laura Sanders CHICAGO — As any high school senior staring down the SAT knows, when the stakes are high, some test-takers choke. A new study finds that activity in distinct parts of the brain can predict whether a person will remain cool or crumble under pressure. The results, presented April 1 at the annual meeting of the Cognitive Neuroscience Society, offer some great new clues that may help scientists understand how the brain copes with stressful situations, says psychologist Thomas Carr of Michigan State University in East Lansing. “Sometimes you come across a study you wish you'd done yourself,” he says. “This is such a study.” In the study, Andrew Mattarella-Micke and Sian Beilock, both of the University of Chicago, had volunteers perform math problems, some easy, some hard, while undergoing a functional MRI scan. These two-step calculations were designed to tap into a person’s working memory: Participants had to hold an intermediate number in mind to correctly calculate the final answer. After volunteers had performed about 25 minutes of low-stakes math, the researchers ratcheted up the pressure. Participants were told that their performance had been monitored the whole time, and if they improved, they would get 60 bucks instead of the 30 they had been promised. In addition to raising the financial stakes, the researchers added social pressure, too. They told volunteers that if the participants failed to improve, a teammate would lose money. © Society for Science & the Public 2000 - 2012
By SANDRA BLAKESLEE If you wear a white coat that you believe belongs to a doctor, your ability to pay attention increases sharply. But if you wear the same white coat believing it belongs to a painter, you will show no such improvement. So scientists report after studying a phenomenon they call enclothed cognition: the effects of clothing on cognitive processes. It is not enough to see a doctor’s coat hanging in your doorway, said Adam D. Galinsky, a professor at the Kellogg School of Management at Northwestern University, who led the study. The effect occurs only if you actually wear the coat and know its symbolic meaning — that physicians tend to be careful, rigorous and good at paying attention. The findings, on the Web site of The Journal of Experimental Social Psychology, are a twist on a growing scientific field called embodied cognition. We think not just with our brains but with our bodies, Dr. Galinsky said, and our thought processes are based on physical experiences that set off associated abstract concepts. Now it appears that those experiences include the clothes we wear. “I love the idea of trying to figure out why, when we put on certain clothes, we might more readily take on a role and how that might affect our basic abilities,” said Joshua I. Davis, an assistant professor of psychology at Barnard College and expert on embodied cognition who was not involved with the study. This study does not fully explain how this comes about, he said, but it does suggest that it will be worth exploring various ideas.
Graham Lawton, deputy magazine editor FOR such a big topic this is an awfully short book. But don't blame neuroscientist Sam Harris for being brief. He had no choice. In a brisk 66 pages Harris explains why we don't have free will, points out why that doesn't matter as much as it might appear to - and then simply stops in order to hammer home his point. Free will touches everything we value - law, politics, relationships, morality and more. And yet it is an illusion. We either live in a deterministic universe where the future is set, or an indeterminate one where thoughts and actions happen at random. Neither is compatible with free will. Having laid this out, Harris tries to salvage something from the wreckage. In the process he ends up rowing back to a position not unlike the "compatibilists" who argue that free will can be reconciled with the laws of physics, a notion he has earlier attacked. Harris starts his rescue mission by pointing out that, even in the absence of free will, there is still a distinction between voluntary action and mere accidents. Imagine, he says, that while he is writing his book somebody outside fires up a leaf blower. He ignores the sound by attending to his work. The decision feels like the exercise of free will, but isn't. © Copyright Reed Business Information Ltd.
What makes people creative? What gives some of us the ability to create work that captivates the eyes, minds and hearts of others? Jonah Lehrer, a writer specializing in neuroscience, addresses that question in his new book, Imagine: How Creativity Works. Lehrer defines creativity broadly, considering everything from the invention of masking tape to breakthroughs in mathematics; from memorable ad campaigns to Shakespearean tragedies. He finds that the conditions that favor creativity — our brains, our times, our buildings, our cities — are equally broad. Lehrer joins NPR's Robert Siegel to talk about the creative process — where great ideas come from, how to foster them, and what to do when you inevitably get stuck. On comparing Shakespeare with the inventor of masking tape "I think we absolutely can lump them all together. I think one of the mistakes we've made in talking about creativity is we've assumed it's a single verb — that when people are creative they're just doing one particular kind of thinking. But looking at creativity from the perspective of the brain, we can see that creativity is actually a bundle of distinct mental processes. "... Whether you're writing a Shakespearean tragedy, or trying to come up with a new graphic design or writing a piece of software, how we think about the problem should depend on the problem itself. Creativity is really a catch-all term for a variety of very different kinds of thinking." Copyright 2012 NPR