Chapter 18. Attention and Higher Cognition
Children don’t usually have the words to communicate even the darkest of thoughts. As a result, some children aged 5 to 11 take their own lives. It’s a rare and often overlooked phenomenon—and one that scientists are only just beginning to understand.

A study published today in the journal Pediatrics reveals that attention deficit disorder (A.D.D.), not depression, may be the most common mental health diagnosis among children who die by suicide. By contrast, the researchers found that two-thirds of the 606 early adolescents studied (aged 12 to 14) had suffered from depression. While the finding isn’t necessarily causal, it does suggest that impulsive behavior might contribute to the incidence of child suicide. Alternatively, some of these cases could be attributed to early-onset bipolar disorder, misdiagnosed as A.D.D. or A.D.H.D.

Here’s Catherine Saint Louis, reporting for The New York Times:

Suicide prevention has focused on identifying children struggling with depression; the new study provides an early hint that this strategy may not help the youngest suicide victims. “Maybe in young children, we need to look at behavioral markers,” said Jeffrey Bridge, the paper’s senior author and an epidemiologist at the Research Institute at Nationwide Children’s Hospital in Columbus, Ohio.

Jill Harkavy-Friedman, the vice president of research at the American Foundation for Suicide Prevention, agreed. “Not everybody who is at risk for suicide has depression,” even among adults, said Dr. Harkavy-Friedman, who was not involved in the new research.

© 1996-2016 WGBH Educational Foundation
Rosie Mestel

The 2016 US election was a powerful reminder that beliefs tend to come in packages: socialized medicine is bad, gun ownership is a fundamental right, and climate change is a myth — or the other way around. Stances that may seem unrelated can cluster because they have become powerful symbols of membership of a group, says Dan Kahan, who teaches law and psychology at Yale Law School in New Haven, Connecticut. And the need to keep believing can further distort people’s perceptions and their evaluation of evidence. Here, Kahan tells Nature about the real-world consequences of group affinity and cognitive bias, and about research that may point to remedies. This interview has been edited for length and clarity.

One measure is how individualistic or communitarian people are, and how egalitarian or hierarchical. Hierarchical and individualistic people tend to have confidence in markets and industry: those represent human ingenuity and power. People who are egalitarian and communitarian are suspicious of markets and industry. They see them as responsible for social disparity. It’s natural to see things you consider honourable as good for society, and things that are base, as bad. Such associations will motivate people’s assessment of evidence.

Can you give an example?

In a study, we showed people data from gun-control experiments and varied the results. People who were high in numeracy always saw when a study supported their view. If it didn’t support their view, they didn’t notice — or argued their way out of it.

© 2016 Macmillan Publishers Limited
By Alison Howell

What could once only be imagined in science fiction is now increasingly coming to fruition: Drones can be piloted by human brain signals. Pharmaceuticals can help soldiers forget traumatic experiences or produce feelings of trust to encourage confession in interrogation. DARPA-funded research is working on everything from implanting brain chips to "neural dust" in an effort to alleviate the effects of traumatic experience in war. Invisible microwave beams produced by military contractors and tested on U.S. prisoners can produce the sensation of burning at a distance.

What all these techniques and technologies have in common is that they're recent neuroscientific breakthroughs propelled by military research, within a broader context of rapid neuroscientific development driven by massive government-funded projects in both the United States and the European Union. Even while much about the brain remains mysterious, this research has contributed to the rapid and startling development of neuroscientific technology.

And while we might marvel at these developments, they also raise significant ethical questions. What is the proper role – if any – of neuroscience in national defense or war efforts? My research addresses these questions in the broader context of looking at how international relations, and specifically warfare, are shaped by scientific and medical expertise and technology.

© 2016 U.S. News & World Report L.P.
Amanda Gefter

As we go about our daily lives, we tend to assume that our perceptions—sights, sounds, textures, tastes—are an accurate portrayal of the real world. Sure, when we stop and think about it—or when we find ourselves fooled by a perceptual illusion—we realize with a jolt that what we perceive is never the world directly, but rather our brain’s best guess at what that world is like, a kind of internal simulation of an external reality. Still, we bank on the fact that our simulation is a reasonably decent one. If it wasn’t, wouldn’t evolution have weeded us out by now? The true reality might be forever beyond our reach, but surely our senses give us at least an inkling of what it’s really like.

Not so, says Donald D. Hoffman, a professor of cognitive science at the University of California, Irvine. Hoffman has spent the past three decades studying perception, artificial intelligence, evolutionary game theory and the brain, and his conclusion is a dramatic one: The world presented to us by our perceptions is nothing like reality. What’s more, he says, we have evolution itself to thank for this magnificent illusion, as it maximizes evolutionary fitness by driving truth to extinction.

Getting at questions about the nature of reality, and disentangling the observer from the observed, is an endeavor that straddles the boundaries of neuroscience and fundamental physics. On one side you’ll find researchers scratching their chins raw trying to understand how a three-pound lump of gray matter obeying nothing more than the ordinary laws of physics can give rise to first-person conscious experience. This is the aptly named “hard problem.”
By Melissa Dahl

Considering its origin story, it’s not so surprising that hypnosis and serious medical science have often seemed at odds. The man typically credited with creating hypnosis, albeit in a rather primitive form, is Franz Mesmer, a doctor in 18th-century Vienna. (Mesmer, mesmerize. Get it?) Mesmer developed a general theory of disease he called “animal magnetism,” which held that every living thing carries within it an internal magnetic force, in liquid form. Illness arises when this fluid becomes blocked, and can be cured if it can be coaxed to flow again, or so Mesmer’s thinking went. To get that fluid flowing, as science journalist Jo Marchant describes in her recent book, Cure, Mesmer “simply waved his hands to direct it through his patients’ bodies” — the origin of those melodramatic hand motions that stage hypnotists use today.

After developing a substantial following — “mesmerism” became “the height of fashion” in late 1780s Paris, writes Marchant — Mesmer became the subject of what was essentially the world’s first clinical trial. King Louis XVI pulled together a team of the world’s top scientists, including Benjamin Franklin, who tested mesmerism and found its capacity to “cure” was, essentially, a placebo effect. “Not a shred of evidence exists for any fluid,” Franklin wrote. “The practice … is the art of increasing the imagination by degrees.”

Maybe so. But that doesn’t mean it doesn’t work.

© 2016, New York Media LLC.
By Yasemin Saplakoglu

Even if you don’t have rhythm, your pupils do. In a new study, neuroscientists played drumming patterns from Western music, including beats typical in pop and rock, while asking volunteers to focus on computer screens for an unrelated fast-paced task that involved pressing the space bar as quickly as possible in response to a signal on the screen. Unbeknownst to the participants, the music omitted strong and weak beats (bass and hi-hat hits, for example) at random times. Eye scanners tracked the dilations of the subjects’ pupils as the music played.

Their pupils enlarged when the rhythms dropped certain beats, even though the participants weren’t paying attention to the music. The biggest dilations matched the omissions of the beats in the most prominent locations in the music, usually the important first beat in a repeated set of notes. The results suggest that we may have an automatic sense of “hierarchical meter”—a pattern of strong and weak beats—that governs our expectations of music, the researchers write in the February 2017 issue of Brain and Cognition. Perhaps, the authors say, our eyes reveal clues into the importance that music and rhythm play in our lives.

© 2016 American Association for the Advancement of Science
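The core analysis described above — averaging pupil traces time-locked to omitted beats — is a standard event-locked average. The sketch below illustrates the idea on synthetic data; the sampling rate, window lengths, and function names are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def event_locked_average(pupil, event_times, fs, pre=0.2, post=1.0):
    """Average baseline-corrected pupil epochs around event times (in seconds)."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for t in event_times:
        i = int(t * fs)
        if i - n_pre < 0 or i + n_post > len(pupil):
            continue  # skip events too close to the recording edges
        seg = pupil[i - n_pre:i + n_post].astype(float)
        seg -= seg[:n_pre].mean()  # subtract the pre-event baseline
        epochs.append(seg)
    return np.mean(epochs, axis=0)

# Synthetic demo: a flat pupil trace with a small dilation after each "omitted beat"
fs = 60  # samples per second (a typical eye-tracker rate; assumed here)
t = np.arange(0, 30, 1 / fs)
pupil = np.zeros_like(t)
omissions = [5.0, 12.0, 21.0]  # hypothetical omission times, in seconds
for ev in omissions:
    idx = int(ev * fs)
    pupil[idx:idx + fs] += np.hanning(fs)  # a 1-second dilation bump
avg = event_locked_average(pupil, omissions, fs)  # peaks after time zero
```

In a real dataset one would also compare omissions at strong versus weak metrical positions, which is where the "hierarchical meter" effect reported in the study would show up.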
Ian Sample, science editor

Scientists have raised hopes for a radical new therapy for phobias and post-traumatic stress disorder (PTSD) with a procedure that can dampen down fears linked to painful memories. The advance holds particular promise for patients because in early tests, researchers found they could reduce anxieties triggered by specific memories without asking people to think about them consciously. That could make it more appealing than exposure therapy, which aims to help patients overcome their phobias by making them confront their fears in a safe environment, for example by encouraging them to handle spiders or snakes in the clinic.

The new technique, called fMRI decoded neurofeedback (DecNef), was developed by scientists at the ATR Computational Neuroscience Lab in Japan. Mitsuo Kawato, who worked with researchers in the UK and the US on the latest study, said he wanted to find an alternative to exposure therapy, which has a 40% drop-out rate among PTSD patients. “We always thought this was ambitious, but it worked the way we hoped it would,” said Ben Seymour, a clinical neuroscientist and member of the team at Cambridge University. “We don’t completely erase the fear memory, but it is substantially reduced.”

The procedure uses a computer algorithm to analyse a patient’s brain activity in real time and pinpoint moments when their fears can be overwritten by giving them a reward. In the latest study, the reward was a small amount of money.

© 2016 Guardian News and Media Limited
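The closed-loop logic described in the last paragraph — decode the current brain state in real time, reward the participant when the target pattern appears — can be caricatured in a few lines. Everything below (the logistic decoder, the threshold, the reward size, the random "volumes") is a hypothetical stand-in for illustration, not the ATR group's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained linear decoder: one weight per voxel
n_voxels = 50
w = rng.normal(size=n_voxels)

def decode(volume):
    """Decoded probability that the target (fear-associated) pattern is present."""
    return 1 / (1 + np.exp(-w @ volume))  # logistic readout of voxel activity

def feedback_session(volumes, threshold=0.6, max_reward=0.10):
    """Return the monetary reward ($) delivered on each scanner volume."""
    rewards = []
    for v in volumes:
        p = decode(v)
        # Reward only when the decoded likelihood exceeds the threshold,
        # scaled by how strongly the pattern is expressed.
        rewards.append(max_reward * p if p > threshold else 0.0)
    return rewards

volumes = rng.normal(size=(20, n_voxels))  # 20 simulated fMRI volumes
rewards = feedback_session(volumes)
```

The point of the design is that the participant never has to consciously recall the feared memory; the pairing of the decoded pattern with reward happens implicitly.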
By R. Douglas Fields

SAN DIEGO—A wireless device that decodes brain waves has enabled a woman paralyzed by locked-in syndrome to communicate from the comfort of her home, researchers announced this week at the annual meeting of the Society for Neuroscience.

The 59-year-old patient, who prefers to remain anonymous but goes by the initials HB, is “trapped” inside her own body, with full mental acuity but completely paralyzed by a disease that struck in 2008 and attacked the neurons that make her muscles move. Unable to breathe on her own, she relies on a tube in her neck to pump air into her lungs, and she requires round-the-clock assistance from caretakers. Thanks to the latest advance in brain–computer interfaces, however, HB has at least regained some ability to communicate. The new wireless device enables her to select letters on a computer screen using her mind alone, spelling out words at a rate of one letter every 56 seconds, to share her thoughts. “This is a significant achievement. Other attempts on such an advanced case have failed,” says neuroscientist Andrew Schwartz of the University of Pittsburgh, who was not involved in the study, published in The New England Journal of Medicine.

HB’s mind is intact and the part of her brain that controls her bodily movements operates perfectly, but the signals from her brain no longer reach her muscles because the motor neurons that relay them have been damaged by amyotrophic lateral sclerosis (ALS), says neuroscientist Erick Aarnoutse, who designed the new device and was responsible for the technical aspects of the research. He is part of a team of physicians and scientists led by neuroscientist Nick Ramsey at Utrecht University in the Netherlands.

Previously, the only way HB could communicate was via a system that uses an infrared camera to track her eye movements. But that device is awkward to set up and use for someone who cannot move, and it does not function well in many situations, such as in bright sunlight.

© 2016 Scientific American
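The reported rate of one letter every 56 seconds works out as follows (a back-of-envelope calculation; the six-characters-per-word figure is a common convention, assumed here, of five letters plus a space).

```python
seconds_per_letter = 56
letters_per_minute = 60 / seconds_per_letter                   # about 1.07 letters per minute
letters_per_word = 6                                           # assumed: 5 letters + 1 space
minutes_per_word = seconds_per_letter * letters_per_word / 60  # about 5.6 minutes per word
```

Slow by conversational standards, but for someone whose only alternative is an eye tracker that fails in bright light, a reliable channel at any speed is a meaningful gain in independence.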
Laurence O'Dwyer

Until as late as 2013, a joint (or comorbid) diagnosis of autism and attention deficit hyperactivity disorder (ADHD) was not permitted by the most influential psychiatric handbook, the Diagnostic and Statistical Manual of Mental Disorders (DSM). The DSM is an essential tool in psychiatry as it allows clinicians and researchers to use a standard framework for classifying mental disorders. Health insurance companies and drug regulation agencies also use the DSM, so its definition of what does or doesn’t constitute a particular disorder can have far-reaching consequences.

One of the reasons for the prohibition of a comorbid diagnosis of autism and ADHD was that the severity of autism placed it above ADHD in the diagnostic hierarchy, so the inattention that is normally present in autism did not seem to merit an additional diagnosis. Nevertheless, that was an odd state of affairs, as any clinician working in the field would be able to quote studies that point to anything from 30% to 80% of patients with autism also having ADHD. More problematic still is the fact that patients with both sets of symptoms may respond poorly to standard ADHD treatments or have increased side effects.

The fifth edition of the DSM opened the way for a more detailed look at this overlap, and just a year after the new guidelines were adopted, a consortium (which I am a part of) at Radboud University in Nijmegen, the Netherlands, called NeuroIMAGE published a paper which showed that autistic traits in ADHD participants could be predicted by complex interactions between grey and white matter volumes in the brain.

© 2016 Guardian News and Media Limited
Ian Sample, science editor

US military scientists have used electrical brain stimulators to enhance the mental skills of staff, in research that aims to boost the performance of air crews, drone operators and others in the armed forces’ most demanding roles. The successful tests of the devices pave the way for servicemen and women to be wired up at critical times of duty, so that electrical pulses can be beamed into their brains to improve their effectiveness in high-pressure situations.

The brain stimulation kits use five electrodes to send weak electric currents through the skull and into specific parts of the cortex. Previous studies have found evidence that by helping neurons to fire, these minor brain zaps can boost cognitive ability. The technology is seen as a safer alternative to prescription drugs, such as modafinil and Ritalin, both of which have been used off-label as performance-enhancing drugs in the armed forces.

But while electrical brain stimulation appears to have no harmful side effects, some experts say its long-term safety is unknown, and raise concerns about staff being forced to use the equipment if it is approved for military operations. Others are worried about the broader implications for the general workforce of an advancing, unregulated technology.

© 2016 Guardian News and Media Limited
By Chelsea Whyte

FACING a big problem and finding it hard to decide what to do? A sprinkling of disgust might boost your confidence. Common sense suggests that our confidence in the decisions we make comes down to the quality of the information available – the clearer that information, the more confident we feel. But it seems that the state of our body also guides us.

Micah Allen at University College London and his colleagues showed 29 people a screen of dots moving in varied directions. They asked the volunteers which direction most of the spots were moving in, and how confident they were in their decisions. Before each task, the participants briefly saw a picture of a face on the screen. It was either twisted in disgust or had a neutral expression. Although this happened too quickly for the faces to be consciously perceived, the volunteers’ bodies reacted. Seeing disgust, which is a powerful evolutionary sign of danger, boosted the volunteers’ alertness, pushing up their heart rates and dilating their pupils.

“When you induce disgust, high confidence becomes lower and low confidence becomes higher”

When shown a neutral face, the volunteers became less confident as the task got more difficult. As the movement of the dots became more varied, they were less sure of the main direction. But when they were shown the disgusted face, they reacted differently. In easy tasks, in which people were previously confident, they became more doubtful of their decisions. In more difficult tasks, their confidence grew. Neither face made any difference to the accuracy of their answers (eLife, doi.org/bsgd).

© Copyright Reed Business Information Ltd.
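The dot display described above is a standard random-dot motion task, where task difficulty is controlled by "coherence" — the fraction of dots moving in the same direction. A minimal sketch of how such a stimulus assigns dot directions (the function name and parameter values are illustrative, not taken from Allen's study):

```python
import numpy as np

rng = np.random.default_rng(1)

def dot_directions(n_dots=100, coherence=0.5, main_direction=90.0):
    """Direction (degrees) for each dot in one frame: a coherent fraction
    all move in main_direction; the rest move in random directions."""
    n_coherent = int(n_dots * coherence)
    coherent = np.full(n_coherent, main_direction)
    noise = rng.uniform(0.0, 360.0, n_dots - n_coherent)
    return np.concatenate([coherent, noise])

# Low coherence (hard trial): only 30% of 200 dots share the main direction
dirs = dot_directions(n_dots=200, coherence=0.3)
```

Lowering `coherence` makes "the movement of the dots more varied," which is exactly the manipulation the article says made the task harder and the volunteers less confident.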
Hannah Devlin

The human brain is predisposed to learn negative stereotypes, according to research that offers clues as to how prejudice emerges and spreads through society. The study found that the brain responds more strongly to information about groups who are portrayed unfavourably, adding weight to the view that the negative depiction of ethnic or religious minorities in the media can fuel racial bias.

Hugo Spiers, a neuroscientist at University College London, who led the research, said: “The newspapers are filled with ghastly things people do ... You’re getting all these news stories and the negative ones stand out. When you look at Islam, for example, there’s so many more negative stories than positive ones and that will build up over time.”

The scientists also uncovered a characteristic brain signature seen when participants were told a member of a “bad” group had done something positive, an observation that is likely to tally with the subjective experience of minorities. “Whenever someone from a really bad group did something nice they were like, ‘Oh, weird,’” said Spiers.

Previous studies have identified brain areas involved in gender or racial stereotyping, but this is the first attempt to investigate how the brain learns to link undesirable traits to specific groups and how this is converted into prejudice over time.

© 2016 Guardian News and Media Limited
Laura Sanders

The eyes may reveal whether the brain’s internal stopwatch runs fast or slow. Pupil size predicted whether a monkey would over- or underestimate a second, scientists report in the Nov. 2 Journal of Neuroscience.

Scientists knew that pupils get bigger when a person is paying attention. They also knew that paying attention can influence how people perceive the passage of time. Using monkeys, the new study links pupil size and timing directly. “What they’ve done here is connect those dots,” says neuroscientist Thalia Wheatley of Dartmouth College. More generally, the study shows how the eyes are windows into how the brain operates. “There’s so much information coming out of the eyes,” Wheatley says.

Neuroscientist Masaki Tanaka of Hokkaido University School of Medicine in Japan and colleagues trained three Japanese macaques to look at a spot on a computer screen after precisely one second had elapsed. The study measured the monkeys’ subjective timing abilities: The monkeys had to rely on themselves to count the milliseconds. Just before each trial, the researchers measured pupil diameters. When the monkeys underestimated a second by looking too soon, their pupil sizes were slightly larger than in trials in which the monkeys overestimated a second, the researchers found. That means that when pupils were large, the monkeys felt time zoom by. But when pupils were small, time felt slower.

© Society for Science & the Public 2000 - 2016.
Nicola Davis

The proficiency of elite football referees could be down to their eagle eyes, say researchers. A study of elite and sub-elite referees has found that a greater tendency to predict and watch contact zones between players contributes to the greater accuracy of top-level referees. “Over the years they develop so much experience that they now can anticipate, very well, future events so that they can already direct their attention to those pieces of information where they expect something to happen,” said lead author Werner Helsen from the University of Leuven.

Keith Hackett, a former football referee and former general manager of the Professional Game Match Officials Limited, said the research chimed with his own experiences. “In working with elite referees for a number of years I have recognised their ability to see, recognise, think and then act in a seamless manner,” he said. “They develop skill sets that enable them to see, and this means good game-reading and cognitive skills to be in the right place at the right time.”

Mistakes, he believes, often come down to poor visual perception. “Last week, we saw an elite referee fail to detect the violent act of [Moussa] Sissoko using his arm/elbow, putting his opponent’s safety at risk,” he said. “The review panel, having received confirmation from the referee that he failed to see the incident despite looking in the direction of the foul challenge, were able to act.”

Writing in the journal Cognitive Research, researchers from the University of Leuven in Belgium and Brunel University in west London say they recruited 39 referees, 20 of whom were elite referees and 19 of whom were experienced but had never refereed at a professional level.

© 2016 Guardian News and Media Limited
By Jesse Singal

For a long time, the United States’ justice system has been notorious for its proclivity for imprisoning children. Because of laws that grant prosecutors and judges discretion to bump juveniles up to the category of “adult” when they commit crimes deemed serious enough by the authorities, the U.S. is an outlier in locking up kids, with some youthful defendants even getting life sentences. Naturally, this has attracted a great deal of outrage and advocacy from human-rights organizations, who argue that kids, by virtue of lacking certain judgment, foresight, and decision-making abilities, should be treated a bit more leniently.

Writing for the Marshall Project and drawing on some interesting brain science, Dana Goldstein takes the argument about youth incarceration even further: We should also rethink our treatment of offenders who are young adults. As Goldstein explains, the more researchers study the brain, the more they realize that it takes decades for the organ to develop fully and to impart to its owners their full, adult capacities for reasoning. “Altogether,” she writes, “the research suggests that brain maturation continues into one’s twenties and even thirties.”

Many of these insights come from the newest generation of neuroscience research. “Everyone has always known that there are behavioral changes throughout the lifespan,” Catherine Lebel, an assistant professor of radiology at the University of Calgary who has conducted research into brain development, told Goldstein. “It’s only with new imaging techniques over the last 15 years that we’ve been able to get at some of these more subtle changes.”

© 2016, New York Media LLC.
By Diana Kwon

Can you feel your heart beating? Most people cannot, unless they are agitated or afraid. The brain masks the sensation of the heart in a delicate balancing act—we need to be able to feel our pulse racing occasionally as an important signal of fear or excitement, but most of the time the constant rhythm would be distracting or maddening. A growing body of research suggests that because of the way the brain compensates for our heartbeat, it may be vulnerable to perceptual illusions—if they are timed just right.

In a study published in May in the Journal of Neuroscience, a team at the Swiss Federal Institute of Technology in Lausanne conducted a series of studies on 143 participants and found that subjects took longer to identify a flashing object when it appeared in sync with the rhythm of their heartbeats. Using functional MRI, they also found that activity in the insula, a brain area associated with self-awareness, was suppressed when people viewed these synchronized images.

The authors suggest that the flashing object was suppressed by the brain because it got lumped in with all the other bodily changes that occur with each heartbeat—the eyes make tiny movements, eye pressure changes slightly, the chest expands and contracts. “The brain knows that the heartbeat is coming from the self, so it doesn't want to be bothered by the sensory consequences of these signals,” says Roy Salomon, one of the study's co-authors.

© 2016 Scientific American
By Melissa Dahl

A rule that spans time and space and morning routines: It is entirely too easy to underestimate the time it takes to get to work. Maybe once — one time — it took just 20 minutes to get to work, but it typically takes 25 to 30, and you know that, but still you leave late and, thus, arrive late. It’s dumb. It is also, maybe, human nature. As Christian Jarrett at BPS Research Digest reports, a team of neuroscientists has just uncovered a very handy if rather complicated excuse for tardiness — it seems people tend to underestimate how long it will take to travel familiar routes. The laws of time and space do not actually bend in order to transport you to work or school more quickly, but at least part of you believes that they will.

And yet the oddest part of this new study, published in the journal Hippocampus, is that the participants tended to overestimate the physical length of those routes, even as they underestimated how long it would take to travel them. It does make a certain amount of sense that people would exaggerate the breadth of familiar distances, because the level of detail you’ve stored about them matters to your memory. If you remember every Starbucks and street corner you pass on the way you usually walk to school, for instance, the walking route will likely feel longer when you recall it than one you don’t know as well. As Jarrett explains, the researchers “thought a more detailed neural representation would make that space seem larger.” And when they asked a group of students — all of whom had been living in the same building in London for nine months — to draw a little map of their neighborhood, this is indeed what they found. The students exaggerated the physical distance of the routes they walked the most, drawing their maps a little bigger than they should have.

© 2016, New York Media LLC.
By Michael-Paul Schallmo and Scott Murray

Most people do not associate autism with visual problems. It’s not obvious how atypical vision might be related to core features of autism such as social and language difficulties and repetitive behaviors. Yet examining how autism affects vision holds tremendous promise for understanding this condition at a neural level. Over the past 50 years, we have learned more about the visual parts of the brain than any other areas, and we have a solid understanding of how neural activity leads to visual perception in a typical brain.

Differences in neuronal processing in autism are likely to be widespread, and may be similar across brain regions. So pinpointing these differences in visual areas might reveal important details about processing in brain regions related to social functioning and language, which are not as well understood.

Studying vision in autism may also help connect studies of people to those of animal models. Working with animals allows neuroscientists to study neural processing at many different levels—from specific genes and single neurons to small neural networks and brain regions that control functions such as movement or hearing. But animals do not display the complexity and diversity in language and social functioning that people do. By contrast, visual brain processes are similar between people and animals. We can use our rich knowledge of how neurons in animals process visual information to bridge the gap between animals and people. We can also use it to test hypotheses about how autism alters neural functioning in the brain.

© 2016 Scientific American
By KATE MURPHY

Eavesdrop on any conversation or pay close attention to your own and you’ll hear laughter. From explosive bursts to muffled snorts, some form of laughter punctuates almost all verbal communication. Electronic communication, too, LOL. You’ll probably also notice that, more often than not, the laughter is in response to something that wasn’t very funny — or wasn’t funny at all. Observational studies suggest this is the case 80 percent to 90 percent of the time. Take Hillary Clinton’s strategic laughter during heated exchanges with Donald J. Trump during the presidential debates. Or Jimmy Fallon’s exaggerated laughter when interviewing guests on “The Tonight Show.” Or employees at Fox News reporting that they tried to “laugh off” unwanted sexual advances by Roger Ailes and others within the organization.

How laughter went from a primal signal of safety (the opposite of a menacing growl) to an odd assortment of vocalizations that smooth as much as confuse social interactions is poorly understood. But researchers who study laughter say reflecting on when and why you titter, snicker or guffaw is a worthy exercise, given that laughter can harm as much as help you. “It’s a hall of mirrors of inferences and intentions every time you encounter laughter,” said Sophie Scott, a neuroscientist at University College London who studies how the brain produces and processes laughter. “You think it’s so simple. It’s just jokes and ha-ha but laughter is really sophisticated and complicated.”

Laughter at its purest and most spontaneous is affiliative and bonding. To our forebears it meant, “We’re not going to kill each other! What a relief!” But as we’ve developed as humans so has our repertoire of laughter, unleashed to achieve ends quite apart from its original function of telling friend from foe. Some of it is social lubrication — the warm chuckles we give one another to be amiable and polite.
Darker manifestations include dismissive laughter, which makes light of something someone said sincerely, and derisive laughter, which shames. © 2016 The New York Times Company
By Kensy Cooperrider and Rafael Núñez

“What is the difference between yesterday and tomorrow?” The Yupno man we were interviewing, Danda, paused to consider his answer. A group of us sat on a hillside in the Yupno Valley, a remote nook high in the mountains of Papua New Guinea. Only days earlier we had arrived on a single-engine plane. After a steep hike from the grass airstrip, we found ourselves in the village of Gua, one of about 20 Yupno villages dotting the rugged terrain. We came all the way here because we are interested in time—in how Yupno people understand concepts such as past, present and future. Are these ideas universal, or are they products of our language, our culture and our environment?

As we interviewed Danda and others in the village, we listened to what they said about time, but we paid even closer attention to what they did with their hands as they spoke. Gestures can be revealing. Ask English speakers about the difference between yesterday and tomorrow, and they might thrust a hand over the shoulder when referring to the past and then forward when referring to the future. Such unreflective movements reveal a fundamental way of thinking in which the past is at our backs, something that we “leave behind,” and the future is in front of us, something to “look forward” to. Would a Yupno speaker do the same?

Danda was making just the kinds of gestures we were hoping for. As he explained the Yupno word for “yesterday,” his hand swept backward; as he mentioned “tomorrow,” it leaped forward. We all sat looking up a steep slope toward a jagged ridge, but as the light faded, we changed the camera angle, spinning around so that we and Danda faced in the opposite direction, downhill. With our backs now to the ridge, we looked over the Yupno River meandering toward the Bismarck Sea. “Let's go over that one more time,” we suggested.

© 2016 Scientific American