Chapter 18. Attention and Higher Cognition


By Christof Koch A future where the thinking capabilities of computers approach our own is quickly coming into view. We feel ever more powerful machine-learning (ML) algorithms breathing down our necks. Rapid progress in coming decades will bring about machines with human-level intelligence capable of speech and reasoning, with a myriad of contributions to economics, politics and, inevitably, warcraft. The birth of true artificial intelligence will profoundly affect humankind’s future, including whether it has one. The following quotes provide a case in point:

“From the time the last great artificial intelligence breakthrough was reached in the late 1940s, scientists around the world have looked for ways of harnessing this ‘artificial intelligence’ to improve technology beyond what even the most sophisticated of today’s artificial intelligence programs can achieve.”

“Even now, research is ongoing to better understand what the new AI programs will be able to do, while remaining within the bounds of today’s intelligence. Most AI programs currently programmed have been limited primarily to making simple decisions or performing simple operations on relatively small amounts of data.”

These two paragraphs were written by GPT-2, a language bot I tried last summer. Developed by OpenAI, a San Francisco–based institute that promotes beneficial AI, GPT-2 is an ML algorithm with a seemingly idiotic task: presented with some arbitrary starter text, it must predict the next word. The network isn’t taught to “understand” prose in any human sense. Instead, during its training phase, it adjusts the internal connections in its simulated neural networks to best anticipate the next word, the word after that, and so on. Trained on eight million Web pages, its innards contain more than a billion connections that emulate synapses, the connecting points between neurons. When I entered the first few sentences of the article you are reading, the algorithm spewed out two paragraphs that sounded like a freshman’s effort to recall the gist of an introductory lecture on machine learning during which she was daydreaming. The output contains all the right words and phrases—not bad, really! Primed with the same text a second time, the algorithm comes up with something different. © 2019 Scientific American
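The next-word task described above is easy to make concrete. Here is a minimal sketch, assuming the publicly released GPT-2 weights accessed through the Hugging Face transformers library; the article does not say how the model was run, so the checkpoint name and the prompt are illustrative assumptions.

```python
# Minimal sketch: ask GPT-2 for its probability distribution over the
# next token, given a starter text. Assumes the public "gpt2" checkpoint
# from the Hugging Face `transformers` library (an assumption; the
# article does not specify how the author accessed the model).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "A future where the thinking capabilities of computers approach our own"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The model's entire "task" is this distribution over the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r}  p={float(prob):.3f}")
```

Sampling from this distribution, appending the chosen token, and predicting again is all the "writing" there is, which also explains why priming the model with the same text a second time yields a different continuation.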

Keyword: Consciousness; Robotics
Link ID: 26894 - Posted: 12.12.2019

By Steve Taylor In the second half of the 19th century, scientific discoveries—in particular, Darwin’s theory of evolution—meant that Christian beliefs were no longer feasible as a way of explaining the world. The authority of the Bible as an explanatory text was fatally damaged. The new findings of science could be utilized to provide an alternative conceptual system to make sense of the world—a system that insisted that nothing existed apart from basic particles of matter, and that all phenomena could be explained in terms of the organization and the interaction of these particles. One of the most fervent of late 19th century materialists, T.H. Huxley, described human beings as “conscious automata” with no free will. As he explained in 1874, “Volitions do not enter into the chain of causation…. The feeling that we call volition is not the cause of a voluntary act, but the symbol of that state of the brain which is the immediate cause.” This was a very early formulation of an idea that has become commonplace amongst modern scientists and philosophers who hold similar materialist views: that free will is an illusion. According to Daniel Wegner, for instance, “The experience of willing an act arises from interpreting one’s thought as the cause of the act.” In other words, our sense of making choices or decisions is just an awareness of what the brain has already decided for us. When we become aware of the brain’s actions, we think about them and falsely conclude that our intentions have caused them. You could compare it to a king who believes he is making all his own decisions, but is constantly being manipulated by his advisors and officials, who whisper in his ear and plant ideas in his head. © 2019 Scientific American

Keyword: Consciousness
Link ID: 26881 - Posted: 12.07.2019

By Viorica Marian Psycholinguistics is a field at the intersection of psychology and linguistics, and one of its recent discoveries is that the languages we speak influence our eye movements. For example, English speakers who hear candle often look at a candy because the two words share their first syllable. Research with speakers of different languages revealed that bilingual speakers not only look at words that share sounds in one language but also at words that share sounds across their two languages. When Russian-English bilinguals hear the English word marker, they also look at a stamp, because the Russian word for stamp is marka. Even more stunning, speakers of different languages differ in their patterns of eye movements when no language is used at all. In a simple visual search task in which people had to find a previously seen object among other objects, their eyes moved differently depending on what languages they knew. For example, when looking for a clock, English speakers also looked at a cloud. Spanish speakers, on the other hand, when looking for the same clock, looked at a present, because the Spanish names for clock and present—reloj and regalo—overlap at their onset. The story doesn’t end there. Not only do the words we hear activate other, similar-sounding words—and not only do we look at objects whose names share sounds or letters even when no language is heard—but the translations of those names in other languages become activated as well in speakers of more than one language. For example, when Spanish-English bilinguals hear the word duck in English, they also look at a shovel, because the translations of duck and shovel—pato and pala, respectively—overlap in Spanish. © 2019 Scientific American

Keyword: Language; Attention
Link ID: 26875 - Posted: 12.06.2019

By Gaby Maimon What is the biological basis of thought? How do brains store memories? Questions like these have intrigued humanity for millennia, but the answers still remain largely elusive. You might think that the humble fruit fly, Drosophila melanogaster, has little to add here, but since the 1970s, scientists have actually been studying the neural basis of higher brain functions, like memory, in these insects. Classic work—performed by several labs, including those of Martin Heisenberg and Seymour Benzer—focused on studying the behavior of wild-type and genetically mutated Drosophila in simple learning and memory tasks, ultimately leading to the discovery of several key molecules and other underlying mechanisms. However, because one could not peer into the brain of behaving flies to eavesdrop on neurons in action, this field, in its original form, could only go so far in helping to explain the mechanisms of cognition. In 2010, when I was a postdoctoral researcher in the lab of Michael Dickinson, we developed the first method for measuring electrical activity of neurons in behaving Drosophila. A similar method was developed in parallel by Johannes Seelig and Vivek Jayaraman. In these approaches, one glues a fly to a custom plate that allows one to carefully remove the cuticle over the brain and measure neural activity via electrodes or fluorescence microscopy. Even though the fly is glued in place, the animal can still flap her wings in tethered flight or walk on an air-cushioned ball, which acts like a spherical treadmill beneath her legs. These technical achievements attracted the attention of the Drosophila neurobiology community, but should anyone really care about seeing a fly brain in action beyond this small, venerable group of arthropod-loving nerds (of which I'm honored to be a member)? In other words, will these methods help to reveal anything of general relevance beyond flies? Increasingly, the answer looks to be yes. © 2019 Scientific American

Keyword: Learning & Memory
Link ID: 26871 - Posted: 12.04.2019

By John Williams Let’s get right to the overwhelming question: What does it mean to experience something? Chances are that when you recount a great meal to a friend, you don’t say that it really lit up your nucleus of the solitary tract. But chances are equally good, if you adhere to conventional scientific and philosophical wisdom, that you believe the electrical activity in that part of your brain is what actually accounts for the sensation when you dine. In “Out of My Head,” the prolific British writer Tim Parks adds to the very long shelf of books about what he calls the “deep puzzle of minute-by-minute perception.” The vast majority of us — and this is undoubtedly for the best, life being hard enough — don’t get tripped up by the ontological mysteries of our minute-to-minute perceiving. We just perceive. But partly because it remains a stubborn philosophical problem, rather than a neatly explained scientific one, consciousness — our awareness, our self-awareness, our self — makes for an endlessly fascinating subject. And Parks, though not a scientist or professional philosopher, proves to be a companionable guide, even if his book is more an appetizer than a main course. He wants “simply” to ask, he writes, whether “we ordinary folks” can say anything useful about consciousness. But he also wants to poke skeptically at the “now standard view of conscious experience as something locked away in the head,” the “dominant internalist model which assumes the brain is some kind of supercomputer.” “Out of My Head” was inspired, in large part, by the theories of Riccardo Manzotti, a philosopher, roboticist and friend of Parks with whom the author has had “one of the most intense and extended conversations of my life.” (A good chunk of that conversation appeared as a 15-part dialogue on the website of The New York Review of Books, whose publishing arm has released “Out of My Head.”) © 2019 The New York Times Company

Keyword: Consciousness
Link ID: 26842 - Posted: 11.22.2019

By Gabriel Finkelstein Unlike Charles Darwin and Claude Bernard, who endure as heroes in England and France, Emil du Bois-Reymond is generally forgotten in Germany — no streets bear his name, no stamps portray his image, no celebrations are held in his honor, and no collections of his essays remain in print. Most Germans have never heard of him, and if they have, they generally assume that he was Swiss. But it wasn’t always this way. Du Bois-Reymond was once lauded as “the foremost naturalist of Europe,” “the last of the encyclopedists,” and “one of the greatest scientists Germany ever produced.” Contemporaries celebrated him for his research in neuroscience and his addresses on science and culture; in fact, the poet Jules Laforgue reported seeing his picture hanging for sale in German shop windows alongside those of the Prussian royal family. Those familiar with du Bois-Reymond generally recall his advocacy of understanding biology in terms of chemistry and physics, but during his lifetime he earned recognition for a host of other achievements. He pioneered the use of instruments in neuroscience, discovered the electrical transmission of nerve signals, linked structure to function in neural tissue, and posited the improvement of neural connections with use. He served as a professor, as dean, and as rector at the University of Berlin, directed the first institute of physiology in Prussia, was secretary of the Prussian Academy of Sciences, established the first society of physics in Germany, helped found the Berlin Society of Anthropology, oversaw the Berlin Physiological Society, edited the leading German journal of physiology, supervised dozens of researchers, and trained an army of physicians. © 2019 Scientific American

Keyword: Consciousness
Link ID: 26811 - Posted: 11.11.2019

By Dan Falk At the moment, you’re reading these words and, presumably, thinking about what the words and sentences mean. Or perhaps your mind has wandered, and you’re thinking about dinner, or looking forward to bingeing the latest season of “The Good Place.” But you’re definitely experiencing something. How is that possible? Every part of you, including your brain, is made of atoms, and each atom is as lifeless as the next. Your atoms certainly don’t know or feel or experience anything, and yet you — a conglomeration of such atoms — have a rich mental life in which a parade of experiences unfolds one after another. The puzzle of consciousness has, of course, occupied the greatest minds for millennia. The philosopher David Chalmers has called the central mystery the “hard problem” of consciousness. Why, he asks, does looking at a red apple produce the experience of seeing red? And more generally: Why do certain arrangements of matter experience anything? Anyone who has followed the recent debates over the nature of consciousness will have been struck by the sheer variety of explanations on offer. Many prominent neuroscientists, cognitive scientists, philosophers, and physicists have put forward “solutions” to the puzzle — all of them wildly different from, and frequently contradicting, each other. “‘You,’ your joys and your sorrows, your memories and ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules.”

Keyword: Consciousness
Link ID: 26802 - Posted: 11.08.2019

By Erica Tennenhouse Live in the urban jungle long enough, and you might start to see things—in particular humanmade objects like cars and furniture. That’s what researchers found when they melded photos of artificial items with images of animals and asked 20 volunteers what they saw. The people, all of whom lived in cities, overwhelmingly noticed the manufactured objects whereas the animals faded into the background. To find out whether built environments can alter people’s perception, the researchers gathered hundreds of photos of animals and artificial objects such as bicycles, laptops, or benches. Then, they superimposed them to create hybrid images—like a horse combined with a table or a rhinoceros combined with a car. As volunteers watched the hybrids flash by on a screen, they categorized each as a small animal, a big animal, a small humanmade object, or a big humanmade object. Overall, volunteers showed a clear bias toward the humanmade objects, especially when they were big, the researchers report today in the Proceedings of the Royal Society B. The bias itself was a measure of how much the researchers had to visually “amp up” an image before participants saw it instead of its partner image. That bias suggests people’s perceptions are fundamentally altered by their environments, the researchers say. Humans often rely on past experiences to process new information—the classic example is mistaking a snake for a garden hose. But in this case, living in industrialized nations—where you are exposed to fewer “natural” objects—could change the way you view the world. © 2019 American Association for the Advancement of Science
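A sketch of how such hybrid stimuli can be built may help. The blending below uses the Pillow imaging library; the file names, image size, and 50/50 starting weight are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: superimpose an animal photo on a humanmade-object
# photo with an adjustable weight. Raising weight_b "amps up" image B
# until a viewer reports seeing it instead of image A; the weight at
# that crossover is one way to quantify perceptual bias.
from PIL import Image

def make_hybrid(path_a: str, path_b: str, weight_b: float) -> Image.Image:
    """Blend two images; weight_b=0.0 gives pure A, 1.0 gives pure B."""
    a = Image.open(path_a).convert("L").resize((512, 512))
    b = Image.open(path_b).convert("L").resize((512, 512))
    return Image.blend(a, b, alpha=weight_b)

# Hypothetical file names, for illustration only.
hybrid = make_hybrid("table.png", "horse.png", weight_b=0.5)
hybrid.save("hybrid_table_horse.png")
```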

Keyword: Vision; Attention
Link ID: 26793 - Posted: 11.06.2019

By Christof Koch “And death shall have no dominion”—Dylan Thomas, 1933 You will die, sooner or later. We all will. For everything that has a beginning has an end, an ineluctable consequence of the second law of thermodynamics. Few of us like to think about this troubling fact. But once birthed, the thought of oblivion can’t be completely erased. It lurks in the unconscious shadows, ready to burst forth. In my case, it was only as a mature man that I became fully mortal. I had wasted an entire evening playing an addictive, first-person shooter video game—running through subterranean halls, flooded corridors, nightmarishly turning tunnels, and empty plazas under a foreign sun, firing my weapons at hordes of aliens relentlessly pursuing me. I went to bed and fell asleep easily but awoke abruptly a few hours later. Abstract knowledge had turned to felt reality—I was going to die! Not right there and then but eventually. Evolution equipped our species with powerful defense mechanisms to deal with this foreknowledge—in particular, psychological suppression and religion. The former prevents us from consciously acknowledging or dwelling on such uncomfortable truths while the latter reassures us by promising never-ending life in a Christian heaven, an eternal cycle of Buddhist reincarnations or an uploading of our mind to the Cloud, the 21st-century equivalent of rapture for nerds. Death has no such dominion over nonhuman animals. Although they can grieve for dead offspring and companions, there is no credible evidence that apes, dogs, crows and bees have minds sufficiently self-aware to be troubled by the insight that one day they will be no more. Thus, these defense mechanisms must have arisen in recent hominin evolution, in less than 10 million years. © 2019 Scientific American

Keyword: Consciousness
Link ID: 26780 - Posted: 11.01.2019

By Zeynep Tufekci More than a billion people around the world have smartphones, almost all of which come with some kind of navigation app such as Google or Apple Maps or Waze. This raises the age-old question we encounter with any technology: What skills are we losing? But also, crucially: What capabilities are we gaining? Talking with people who are good at finding their way around or adept at using paper maps, I often hear a lot of frustration with digital maps. North/south orientation gets messed up, and you can see only a small section at a time. And unlike with paper maps, one loses a lot of detail after zooming out. I can see all that and sympathize that it may be quite frustrating for the already skilled to be confined to a small phone screen. (Although map apps aren’t really meant to be replacements for paper maps, which appeal to our eyes, but are actually designed to be heard: “Turn left in 200 feet. Your destination will be on the right.”) But consider what digital navigation aids have meant for someone like me. Despite being a frequent traveler, I’m so terrible at finding my way that I still use Google Maps almost every day in the small town where I have lived for many years. What looks like an inferior product to some has been a significant expansion of my own capabilities. I’d even call it life-changing. Part of the problem is that reading paper maps requires a specific skill set. There is nothing natural about them. In many developed nations, including the U.S., one expects street names and house numbers to be meaningful referents, and instructions such as “go north for three blocks and then west” make sense to those familiar with these conventions. In Istanbul, in contrast, where I grew up, none of those hold true. For one thing, the locals rarely use street names. Why bother when a government or a military coup might change them—again. House and apartment numbers often aren’t sequential either because after buildings 1, 2 and 3 were built, someone squeezed in another house between 1 and 2, and now that’s 4. But then 5 will maybe get built after 3, and 6 will be between 2 and 3. Good luck with 1, 4, 2, 6, 5, and so on, sometimes into the hundreds, in jumbled order. Besides, the city is full of winding, ancient alleys that intersect with newer avenues at many angles. © 2019 Scientific American

Keyword: Attention; Learning & Memory
Link ID: 26768 - Posted: 10.30.2019

By Emily Anthes The Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, launched by the U.S. National Institutes of Health (NIH) in 2013, has a lofty goal: to unravel the cellular basis of cognition and behavior. Since the initiative’s launch, the NIH has doled out about $1 billion to researchers who are developing tools and technologies to map, measure and observe the brain’s neural circuits. Along the way, the agency has also tried to explore the ethical implications of this research. Khara Ramos, who directs the neuroethics program at the NIH’s National Institute of Neurological Disorders and Stroke, described the emerging field of neuroethics today at the 2019 Society for Neuroscience annual meeting in Chicago, Illinois.

Spectrum: Was discussion about ethics part of the BRAIN Initiative from the beginning?

Khara Ramos: We knew that we needed to do something with neuroethics, but it took time for us to figure out what exactly, in part because neuroethics is a relatively new field. Bioethics is a broad field that covers all aspects of biomedicine, but there isn’t specialization of bioethics in kidney research or pulmonary research the way there is in neuroscience research, and that’s really because the brain is so intimately connected with who we are. Neuroscience research raises these unique ethical questions, such as: How might new neurotechnologies alter fundamental notions of agency or autonomy or identity? We’re starting to focus on data sharing and privacy from a philosophical, conceptual perspective: Is there something unique about brain data that is different from, for instance, genetic data? How do researchers themselves feel about data sharing and privacy? And how does the public view it? For instance, is my social security number more or less sensitive than the kinds of neural data that somebody might be able to get if I were participating in a clinical trial? © 2019 Simons Foundation

Keyword: Autism; Attention
Link ID: 26725 - Posted: 10.21.2019

Ian Sample Science editor Warning: this story is about death. You might want to click away now. That’s because, researchers say, our brains do their best to keep us from dwelling on our inevitable demise. A study found that the brain shields us from existential fear by categorising death as an unfortunate event that only befalls other people. “The brain does not accept that death is related to us,” said Yair Dor-Ziderman, at Bar Ilan University in Israel. “We have this primal mechanism that means when the brain gets information that links self to death, something tells us it’s not reliable, so we shouldn’t believe it.” Being shielded from thoughts of our future death could be crucial for us to live in the present. The protection may switch on in early life as our minds develop and we realise death comes to us all. “The moment you have this ability to look into your own future, you realise that at some point you’re going to die and there’s nothing you can do about it,” said Dor-Ziderman. “That goes against the grain of our whole biology, which is helping us to stay alive.” To investigate how the brain handles thoughts of death, Dor-Ziderman and colleagues developed a test that involved producing signals of surprise in the brain. They asked volunteers to watch faces flash up on a screen while their brain activity was monitored. The person’s own face or that of a stranger flashed up on screen several times, followed by a different face. On seeing the final face, the brain flickered with surprise because the image clashed with what it had predicted. © 2019 Guardian News & Media Limited

Keyword: Attention; Emotions
Link ID: 26721 - Posted: 10.19.2019

By Sara Reardon Brain scientists can watch neurons fire and communicate. They can map how brain regions light up during sensation, decision-making, and speech. What they can't explain is how all this activity gives rise to consciousness. Theories abound, but their advocates often talk past each other and interpret the same set of data differently. "Theories are very flexible," says Christof Koch, president of the Allen Institute for Brain Science in Seattle, Washington. "Like vampires, they're very difficult to slay." Now, the Templeton World Charity Foundation (TWCF), a nonprofit best known for funding research at the intersection of science and religion, hopes to narrow the debate with experiments that directly pit theories of consciousness against each other. The first phase of the $20 million project, launched this week at the Society for Neuroscience meeting in Chicago, Illinois, will compare two theories of consciousness by scanning the brains of participants during cleverly designed tests. Proponents of each theory have agreed to admit it is flawed if the outcomes go against them. Head-to-head contests are rare in basic science. "It's a really outlandish project," says principal investigator Lucia Melloni, a neuroscientist at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany. But understanding consciousness has become increasingly important for researchers seeking to communicate with locked-in patients, determine whether artificial intelligence systems can become conscious, or explore whether animals experience consciousness the way humans do. To winnow the theories, TWCF took inspiration from a 1919 experiment in which physicist Arthur Eddington pitted Albert Einstein's theory of general relativity against Isaac Newton's gravitational theory. Eddington measured how the Sun's gravity caused light from nearby stars to shift during a solar eclipse—and Einstein won. © 2019 American Association for the Advancement of Science

Keyword: Consciousness
Link ID: 26715 - Posted: 10.17.2019

Subhash Kak Many advanced artificial intelligence projects say they are working toward building a conscious machine, based on the idea that brain functions merely encode and process multisensory information. The assumption goes, then, that once brain functions are properly understood, it should be possible to program them into a computer. Microsoft recently announced that it would spend US$1 billion on a project to do just that. So far, though, attempts to build supercomputer brains have not even come close. A multi-billion-dollar European project that began in 2013 is now largely understood to have failed. That effort has shifted to look more like a similar but less ambitious project in the U.S., developing new software tools for researchers to study brain data, rather than simulating a brain. Some researchers continue to insist that simulating neuroscience with computers is the way to go. Others, like me, view these efforts as doomed to failure because we do not believe consciousness is computable. Our basic argument is that brains integrate and compress multiple components of an experience, including sight and smell – which simply can’t be handled in the way today’s computers sense, process and store data.

Brains don’t operate like computers

Living organisms store experiences in their brains by adapting neural connections in an active process between the subject and the environment. By contrast, a computer records data in short-term and long-term memory blocks. That difference means the brain’s information handling must also be different from how computers work. © 2010–2019, The Conversation US, Inc.

Keyword: Consciousness
Link ID: 26714 - Posted: 10.17.2019

Patricia Churchland Three myths about morality remain alluring: only humans act on moral emotions, moral precepts are divine in origin, and learning to behave morally goes against our thoroughly selfish nature. Converging data from many sciences, including ethology, anthropology, genetics, and neuroscience, have challenged all three of these myths. First, self-sacrifice, given the pressing needs of close kin or conspecifics to whom they are attached, has been documented in many mammalian species—wolves, marmosets, dolphins, and even rodents. Birds display it too. In sharp contrast, reptiles show no hint of this impulse. Second, until very recently, hominins lived in small groups with robust social practices fostering well-being and survival in a wide range of ecologies. The idea of a divine lawgiver likely played no part in their moral practices for some two million years, emerging only with the advent of agriculture and larger communities where not everyone knew everyone else. The divine lawgiver idea is still absent from some large-scale religions, such as Confucianism and Buddhism. Third, it is part of our genetic heritage to care for kith and kin. Although self-sacrifice is common in termites and bees, the altruistic behavior of mammals and birds is vastly more flexible, variable, and farsighted. Attachment to others, mediated by powerful brain hormones, is the biological platform for morality. © 1986–2019 The Scientist.

Keyword: Consciousness; Emotions
Link ID: 26678 - Posted: 10.08.2019

Alex Smith When children are diagnosed with attention deficit hyperactivity disorder, stimulant medications like Ritalin or Adderall are usually the first line of treatment. The American Academy of Pediatrics issued new guidelines on Monday that uphold the central role of medication, accompanied by behavioral therapy, in ADHD treatment. However, some parents, doctors and researchers who study kids with ADHD say they are disappointed that the new guidelines don't recommend behavioral treatment first for more children, as some recent research has suggested might lead to better outcomes. When 6-year-old Brody Knapp of Kansas City, Mo., was diagnosed with ADHD last year, his father, Brett, was skeptical. Brett didn't want his son taking pills. "You hear of losing your child's personality, and they become a shell of themselves, and they're not that sparkling little kid that you love," Brett says. "I didn't want to lose that with Brody, because he's an amazing kid." Brody's mother, Ashley, had other ideas. She's a school principal and has ADHD herself. "I was all for stimulants at the very, very beginning," Ashley says, "just because I know what they can do to help a neurological issue such as ADHD." More and more families have been facing the same dilemma. The prevalence of diagnosed ADHD has shot up in the U.S. in the past two decades; 1 in 10 children now has that diagnosis. The updated guidelines from the AAP recommend that children with ADHD should also be screened for other conditions, and monitored closely. But the treatment recommendations regarding medication are essentially unchanged from the previous guidelines, which were published in 2011. © 2019 npr

Keyword: ADHD; Drug Abuse
Link ID: 26657 - Posted: 10.01.2019

Jon Hamilton Too much physical exertion appears to make the brain tired. That's the conclusion of a study of triathletes published Thursday in the journal Current Biology. Researchers found that after several weeks of overtraining, athletes became more likely to choose immediate gratification over long-term rewards. At the same time, brain scans showed the athletes had decreased activity in an area of the brain involved in decision-making. The finding could explain why some elite athletes see their performance decline when they work out too much — a phenomenon known as overtraining syndrome. The distance runner Alberto Salazar, for example, experienced a mysterious decline after winning the New York Marathon three times and the Boston Marathon once in the early 1980s. Salazar's times fell off even though he was still in his mid-20s and training more than ever. "Probably [it was] something linked to his brain and his cognitive capacities," says Bastien Blain, an author of the study and a postdoctoral fellow at University College London. (Salazar didn't respond to an interview request for this story.) Blain was part of a team that studied 37 male triathletes who volunteered to take part in a special training program. "They were strongly motivated to be part of this program, at least at the beginning," Blain says. Half of the triathletes were instructed to continue their usual workouts. The rest were told to increase their weekly training by 40%. The result was a training program so intense that these athletes began to perform worse on tests of maximal output. After three weeks, all the participants were put in a brain scanner and asked a series of questions designed to reveal whether a person is more inclined to choose immediate gratification or a long-term reward. "For example, we ask, 'Do you prefer $10 now or $60 in six months,' " Blain says. © 2019 npr
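The "$10 now or $60 in six months" question is a standard intertemporal-choice probe. The article does not say how the researchers scored the answers, but a common model in this literature is Mazur's hyperbolic discount function, V = A / (1 + kD); the sketch below illustrates how a single impulsivity parameter k can flip a choice toward immediate gratification.

```python
# Hyperbolic discounting sketch (Mazur's model): the subjective present
# value V of an amount A delayed by D days is V = A / (1 + k*D).
# The k values below are illustrative, not estimates from the study.
def discounted_value(amount: float, delay_days: float, k: float) -> float:
    """Subjective present value of a delayed reward."""
    return amount / (1.0 + k * delay_days)

# A patient chooser (small k) still prefers the delayed $60 ...
patient_later = discounted_value(60, 182, k=0.01)    # ~21.3, beats $10 now
# ... while an impulsive chooser (large k) takes the $10 now.
impulsive_later = discounted_value(60, 182, k=0.05)  # ~5.9, loses to $10 now
for label, later in [("patient", patient_later), ("impulsive", impulsive_later)]:
    choice = "now" if 10 > later else "in six months"
    print(f"{label}: delayed $60 feels like ${later:.2f} -> choose {choice}")
```

Fitting k from a series of such choices gives a per-participant impulsivity estimate that can be compared before and after a period of overtraining.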

Keyword: Attention
Link ID: 26656 - Posted: 09.28.2019

Alison Abbott A prominent German neuroscientist committed scientific misconduct in research in which he claimed to have developed a brain-monitoring technique able to read certain thoughts of paralysed people, Germany’s main research agency has found. The investigation by the agency, the Deutsche Forschungsgemeinschaft (DFG), into Niels Birbaumer’s high-profile work found that data in two papers were incomplete and that the scientific analysis was flawed — although it did not comment on whether the approach was valid. In a 19 September statement, the agency, which funded some of the work, said it was imposing some of its most severe sanctions on Birbaumer, who has positions at the University of Tübingen in Germany and the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland. The DFG has banned Birbaumer from applying for its grants and from serving as a DFG evaluator for five years. The agency has also recommended the retraction of the two papers, published in PLoS Biology, and says that it will ask him to return the grant money that he used to generate the data underpinning the papers. “The DFG has found scientific misconduct on my part and has imposed sanctions. I must therefore accept that I was unable to refute the allegations made against me,” Birbaumer said in a statement e-mailed to Nature in response to the DFG’s findings. In a subsequent phone conversation with Nature, Birbaumer added that he could not comment further on the findings because the DFG has not yet provided him with specific details on the reasoning behind the decisions. Birbaumer says he stands by his studies, which he says, “show that it is possible to communicate with patients who are completely paralysed, through computer-based analysis of blood flow and brain currents”. © 2019 Springer Nature Limited

Keyword: Consciousness; Brain imaging
Link ID: 26636 - Posted: 09.23.2019

The mysterious ailments experienced by some 40 Canadian and U.S. diplomats and their families while stationed in Cuba may have had nothing to do with sonic "attacks" identified in earlier studies. According to a new Canadian study, obtained exclusively by Radio-Canada's investigative TV program Enquête, the cause could instead be neurotoxic agents used in pesticide fumigation. A number of Canadians and Americans living in Havana fell victim to an unexplained illness starting in late 2016, complaining of concussion-like symptoms, including headaches, dizziness, nausea and difficulty concentrating. Some described hearing a buzzing or high-pitched sounds before falling sick. In the wake of the health problems experienced over the past three years, Global Affairs Canada commissioned a clinical study by a team of multidisciplinary researchers in Halifax, affiliated with the Brain Repair Centre, Dalhousie University and the Nova Scotia Health Authority. "The working hypothesis actually came only after we had most of the results," Dr. Alon Friedman, the study's lead author, said in an interview. The researchers identified a damaged region of the brain that is responsible for memory, concentration and sleep-and-wake cycle, among other things, and then looked at how this region could come to be injured. "There are very specific types of toxins that affect these kinds of nervous systems ... and these are insecticides, pesticides, organophosphates — specific neurotoxins," said Friedman. "So that's why we generated the hypothesis that we then went to test in other ways." Twenty-six individuals participated in the study, including a control group of people who never lived in Havana. ©2019 CBC/Radio-Canada

Keyword: Neurotoxins; Attention
Link ID: 26627 - Posted: 09.20.2019

By Kenneth Shinozuka What is consciousness? In a sense, this is one of the greatest mysteries in the universe. Yet in another, it’s not an enigma at all. If we define consciousness as the feeling of what it’s like to subjectively experience something, then there is nothing more deeply familiar. Most of us know what it’s like to feel the pain of a headache, to empathize with another human being, to see the color blue, to hear the soaring melodies of a symphony, and so on. In fact, as philosopher Galen Strawson insightfully pointed out in a New York Times opinion piece, consciousness is “the only thing in the universe whose ultimate intrinsic nature we can claim to know.” This is a crucial point. We don’t have direct access to the outer world. Instead we experience it through the filter of our consciousness. We have no idea what the color blue really looks like “out there,” only how it appears to us “in here.” Furthermore, as some cognitive scientists like Donald Hoffman have argued in recent years, external reality is likely to be far different from our perceptions of it. The human brain has been optimized, through the process of evolution, to model reality in the way that’s most conducive to its survival, not in the way that most faithfully represents the world. Science has produced an outstandingly accurate description of the outer world, but it has told us very little, if anything, about our internal consciousness. With sufficient knowledge of physics, I can calculate all the forces acting on the chair in front of me, but I don’t know what “forces” or “laws” are giving rise to my subjective experience of the chair. © 2019 Scientific American

Keyword: Consciousness
Link ID: 26607 - Posted: 09.13.2019