Chapter 18. Attention and Higher Cognition
By Simon Makin

Everyone's brain is different. Until recently, neuroscience tended to gloss this over by averaging results from many brain scans in an effort to elicit general truths about how the organ works. But in a major development within the field, researchers have begun documenting how brain activity differs between individuals. Such differences had largely been thought of as transient and uninteresting, but studies are starting to show that they are innate properties of people's brains, and that knowing them better might ultimately help treat neurological disorders.

The latest study, published April 8 in Science, found that the brain activity of individuals who were just biding their time in a brain scanner contained enough information to predict how their brains would function during a range of ordinary activities. The researchers used these at-rest signatures to predict which regions would light up (which groups of brain cells would switch on) during gambling, reading and other tasks participants were asked to perform in the scanner. The technique might one day be used to assess whether certain areas of the brains of people who are paralyzed or comatose are still functional, the authors say.

The study capitalizes on a relatively new method of brain imaging that looks at what is going on when a person essentially does nothing. The technique stems from the mid-1990s work of biomedical engineer Bharat Biswal, now at the New Jersey Institute of Technology. Biswal noticed that scans he had taken while participants were resting in a functional magnetic resonance imaging (fMRI) scanner displayed orderly, low-frequency oscillations. He had been looking for ways to remove background noise from fMRI signals but quickly realized these oscillations were not noise. His work paved the way for a new approach known as resting-state fMRI.

© 2016 Scientific American
Zoe Cormier

Researchers have published the first images showing the effects of LSD on the human brain, as part of a series of studies examining how the drug causes its characteristic hallucinogenic effects [1]. David Nutt, a neuropsychopharmacologist at Imperial College London who has previously examined the neural effects of mind-altering drugs such as psilocybin, the hallucinogen found in magic mushrooms, was one of the study's leaders. He tells Nature what the research revealed, and how he hopes LSD (lysergic acid diethylamide) might ultimately be useful in therapies.

Why study the effects of LSD on the brain?

For brain researchers, studying how psychedelic drugs such as LSD alter the ‘normal’ brain state is a way to study the biological phenomenon that is consciousness. We would ultimately also like to see LSD deployed as a therapeutic tool. The idea has old roots: in the 1950s and '60s, thousands of people took LSD for alcoholism, and in 2012 a retrospective analysis of some of these studies suggested that it helped cut down on drinking. Since the 1970s there have been many studies of LSD in animals, but not in the human brain. We need that data to validate the trial of this drug as a potential therapy for addiction or depression.

Why hasn’t anyone done brain scans before?

Before the 1960s, LSD was studied for its potential therapeutic uses, as were other hallucinogens. But the drug was heavily restricted in the UK, the United States and around the world after 1967, in my view due to unfounded hysteria over its potential dangers. The restrictions vary worldwide, but in general, countries have insisted that LSD has ‘no medical value’, making it tremendously difficult to work with.

© 2016 Nature Publishing Group
By Sandhya Somashekhar

African Americans are routinely under-treated for pain compared with whites, according to research, and a study released Monday sheds some disturbing light on why that might be the case. Researchers at the University of Virginia quizzed white medical students and residents to see how many believed inaccurate and at times "fantastical" claims about differences between the two races: for example, that blacks have less sensitive nerve endings than whites, or that black people's blood coagulates more quickly. They found that fully half thought at least one of the false statements presented was possibly, probably or definitely true. Moreover, those who held false beliefs often rated black patients' pain as lower than that of white patients and made less appropriate recommendations about how they should be treated.

The study, published in the Proceedings of the National Academy of Sciences, could help illuminate one of the most vexing problems in pain treatment today: whites are more likely than blacks to be prescribed strong pain medications for equivalent ailments. A 2000 study out of Emory University found that at a hospital emergency department in Atlanta, 74 percent of white patients with bone fractures received painkillers, compared with 50 percent of black patients. Similarly, a paper last year found that black children with appendicitis were less likely to receive pain medication than their white counterparts, and a 2007 study found that physicians were more likely to underestimate the pain of black patients compared with other patients.
Noah Smith

How do human beings behave in response to risk? That is one of the most fundamental unanswered questions of our time. A general theory of decision-making amid uncertainty would be the kind of scientific advance that comes only a few times a century. Risk is central to financial and insurance markets. It affects the consumption, saving and business investment that move the global economy. Understanding human behavior in the face of risk would let us reduce accidents, retire more comfortably, get cheaper health insurance and maybe even avoid recessions.

A number of our smartest scientists have tried to develop a general theory of risk behavior. John von Neumann, the pioneering mathematician and physicist, took a crack at it back in 1944, when he developed the theory of expected utility with Oskar Morgenstern. According to this simple theory, people value a possible outcome by weighting its utility by the probability that it happens, and they choose the option with the highest probability-weighted sum. This beautiful idea underlies much of modern economic theory, but unfortunately it doesn't work well in most situations.

Alternative theories have been developed for specific applications. The psychologist Daniel Kahneman won a Nobel Prize for the creation of prospect theory, which says, among other things, that people measure outcomes relative to a reference point. That theory does a great job of explaining the behavior of subjects in certain lab experiments, and can help account for the actions of some inexperienced consumers. But it is very difficult to apply generally, because the reference points are hard to predict in advance and may shift in unpredictable ways.
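Von Neumann and Morgenstern's idea can be stated in a few lines of Python. In this sketch, a gamble is a list of (probability, outcome) pairs, and its value is the probability-weighted sum of the utility of each outcome. The concave square-root utility function is an illustrative assumption (the theory allows any increasing utility function); concavity is what produces risk aversion here.

```python
import math

def expected_utility(gamble, utility=math.sqrt):
    """Value a gamble, given as a list of (probability, outcome) pairs
    whose probabilities sum to 1, under a chosen utility function."""
    return sum(p * utility(x) for p, x in gamble)

# A sure $100 versus a 50/50 shot at $200 or nothing.
sure_thing = [(1.0, 100)]
coin_flip = [(0.5, 200), (0.5, 0)]

# With a concave (risk-averse) utility, the sure thing is preferred even
# though both options have the same expected monetary value of $100.
print(expected_utility(sure_thing) > expected_utility(coin_flip))  # True
```

Swapping in a convex utility function reverses the preference, which is one way the framework models risk-seeking behavior.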
By PAM BELLUCK

When people make risky decisions, like doubling down in blackjack or investing in volatile stocks, what happens in the brain? Scientists have long tried to understand what makes some people risk-averse and others risk-taking. Answers could have implications for how to treat, curb or prevent destructively risky behavior, like pathological gambling or drug addiction.

Now, a study by Dr. Karl Deisseroth, a prominent Stanford neuroscientist and psychiatrist, and his colleagues gives some clues. The study, published Wednesday in the journal Nature, reports that a specific type of neuron, or nerve cell, in a certain brain region helps determine whether or not a risky choice is made. The study was conducted in rats, but experts said it built on research suggesting the findings could be similar in humans. If so, they said, it could inform approaches to addiction, which involves some of the same neurons and brain areas, as well as treatments for Parkinson’s disease, because one class of Parkinson’s medications turns some patients into problem gamblers.

In a series of experiments led by Kelly Zalocusky, a doctoral student, researchers found that a risk-averse rat made decisions based on whether its previous choice involved a loss (in this case, of food). Rats whose previous decision netted them less food were prompted to behave conservatively the next time by signals from certain receptors in a brain region called the nucleus accumbens, the scientists discovered. These receptors, which are proteins attached to neurons, are part of the dopamine system, a neurochemical system important to emotion, movement and thinking. In risk-taking rats, however, those receptors sent a much fainter signal, so the rats kept making high-stakes choices even when they lost out. But by employing optogenetics, a technique that uses light to manipulate neurons, the scientists stimulated brain cells bearing those receptors, heightening the “loss” signal and turning risky rats into safer rats.
© 2016 The New York Times Company
By Daniel Barron

It’s unnerving when someone with no criminal record commits a disturbingly violent crime. Perhaps he stabs his girlfriend 40 times and dumps her body in the desert. Perhaps he climbs to the top of a clock tower and guns down innocent passers-by. Or perhaps he climbs out of a car at a stoplight and nearly decapitates an unsuspecting police officer with 26 rounds from an assault rifle. Perhaps he even drowns his own children. Or shoots the President of the United States. The shock is palpable (NB: those are all actual cases).

The very notion that someone (our neighbor, the guy ahead of us in the check-out line, even we ourselves) could do something so terrible rubs at our minds. We wonder: “What happened? What in this guy snapped?” After all, for the last 20 years the accused went home to his family after work; why did he go rob that liquor store? What made him pull that trigger?

The subject hit home for me this week when I was called to jury duty. As I made my way to the county courthouse, I wondered whether I would be asked to decide a capital murder case like the ones above. As a young neuroscientist, the prospect made me uneasy. At the trial, the accused’s lawyers would probably argue that, at the time of the crime, he had diminished capacity to make decisions, that somehow he wasn’t entirely free to choose whether or not to commit the crime. They might cite some form of neuroscientific evidence to argue that, at the time of the crime, his brain wasn’t functioning normally. And the jury and judge would have to decide what to make of it.

© 2016 Scientific American
Giant manta rays have been filmed checking out their reflections in a way that suggests they are self-aware. Only a small number of animals, mostly primates, have passed the mirror test, which is widely used as a tentative measure of self-awareness. “This new discovery is incredibly important,” says Marc Bekoff of the University of Colorado in Boulder. “It shows that we really need to expand the range of animals we study.”

But not everyone is convinced that the new study proves conclusively that manta rays, which have the largest brains of any fish, can do this, or indeed that the mirror test itself is an appropriate measure of self-awareness. Csilla Ari, of the University of South Florida in Tampa, filmed two giant manta rays in a tank, with and without a mirror inside. The fish changed their behaviour in a way that suggested they recognised the reflections as themselves, as opposed to another manta ray. They did not show signs of social interaction with the image, which is what you would expect if they perceived it to be another individual. Instead, the rays repeatedly moved their fins and circled in front of the mirror, which suggests they could see whether their reflection moved when they moved. The frequency of these movements was much higher when the mirror was in the tank than when it was not.

© Copyright Reed Business Information Ltd.
By BARBARA K. LIPSKA

As the director of the human brain bank at the National Institute of Mental Health, I am surrounded by brains, some floating in jars of formalin and others icebound in freezers. As part of my work, I cut these brains into tiny pieces and study their molecular and genetic structure. My specialty is schizophrenia, a devastating disease that often makes it difficult for the patient to discern what is real and what is not. I examine the brains of people with schizophrenia whose suffering was so acute that they committed suicide. I had always done my work with great passion, but I don’t think I really understood what was at stake until my own brain stopped working.

In the first days of 2015, I was sitting at my desk when something freakish happened. I extended my arm to turn on the computer, and to my astonishment realized that my right hand disappeared when I moved it to the right lower quadrant of the keyboard. I tried again, and the same thing happened: The hand disappeared completely as if it were cut off at the wrist. It felt like a magic trick: mesmerizing, and totally inexplicable. Stricken with fear, I kept trying to find my right hand, but it was gone.

I had battled breast cancer in 2009 and melanoma in 2012, but I had never considered the possibility of a brain tumor. I knew immediately that this was the most logical explanation for my symptoms, and yet I quickly dismissed the thought. Instead I headed to a conference room. My colleagues and I had a meeting scheduled to review our new data on the molecular composition of schizophrenia patients’ frontal cortex, a brain region that shapes who we are: our thoughts, emotions, memories. But I couldn’t focus on the meeting because the other scientists’ faces kept vanishing. Thoughts about a brain tumor crept quietly into my consciousness again, then screamed for attention.

© 2016 The New York Times Company
How is the brain able to use past experiences to guide decision-making? A few years ago, researchers supported by the National Institutes of Health discovered in rats that awake mental replay of past experiences is critical for learning and making informed choices. Now the team has discovered key secrets of the underlying brain circuitry, including a unique system that encodes location during inactive periods.

“Advances such as these in understanding cellular and circuit-level processes underlying such basic functions as executive function, social cognition, and memory fit into NIMH’s mission of discovering the roots of complex behaviors,” said NIMH acting director Bruce Cuthbert, Ph.D.

While a rat is moving through a maze, or just mentally replaying the experience, an area in the brain’s memory hub, or hippocampus, specialized for locations, called CA1, communicates with a decision-making area in the executive hub, or prefrontal cortex (PFC). A distinct subset of PFC neurons excited during mental replay of the experience is also activated during movement, while another distinct subset, less engaged during movement in the maze (and therefore potentially distracting), is inhibited during replay.

“Such strongly coordinated activity within this CA1-PFC circuit during awake replay is likely to optimize the brain’s ability to consolidate memories and use them to decide on future action,” explained Shantanu Jadhav, Ph.D., now an assistant professor at Brandeis University in Waltham, Mass., the study’s co-first author. His contributions to this line of research were made possible, in part, by a Pathway to Independence award from the Office of Research Training and Career Development of the NIH’s National Institute of Mental Health (NIMH).
By KJ Dell’Antonia

New research shows that the youngest students in a classroom are more likely than the oldest to be given a diagnosis of attention deficit hyperactivity disorder. The findings raise questions about how we regard those wiggly children who just can’t seem to sit still, and who also happen to be the youngest in their class.

Researchers in Taiwan looked at data from 378,881 children ages 4 to 17 and found that students born in August, the cut-off month for school entry in that country, were more likely to be given diagnoses of A.D.H.D. than students born in September. The children born in September would have missed the previous year’s cut-off date for school entry, and thus had nearly a full extra year to mature before entering school. The findings were published Thursday in The Journal of Pediatrics.

While few dispute that A.D.H.D. is a legitimate disability that can impede a child’s personal and school success, and that treatment can be effective, “our findings emphasize the importance of considering the age of a child within a grade when diagnosing A.D.H.D. and prescribing medication for treating A.D.H.D.,” the authors concluded.

Dr. Mu-Hong Chen, a member of the department of psychiatry at Taipei Veterans General Hospital in Taiwan and the lead author of the study, hopes that a better understanding of the data linking relative age at school entry to an A.D.H.D. diagnosis will encourage parents, teachers and clinicians to give the youngest children in a grade enough time and help to prove their ability.

Other research has shown similar results. An earlier study in the United States, for example, found that roughly 8.4 percent of children born in the month before their state’s cutoff date for kindergarten eligibility are given A.D.H.D. diagnoses, compared with 5.1 percent of children born in the month immediately afterward.

© 2016 The New York Times Company
By Daniel Engber

Nearly 20 years ago, the psychologists Roy Baumeister and Dianne Tice, a married couple at Case Western Reserve University, devised a foundational experiment on self-control. “Chocolate chip cookies were baked in the room in a small oven,” they wrote in a paper that has been cited more than 3,000 times. “As a result, the laboratory was filled with the delicious aroma of fresh chocolate and baking.”

Here’s how the experiment worked. Baumeister and Tice stacked their fresh-baked cookies on a plate, beside a bowl of red and white radishes, and brought in a parade of student volunteers. They told some of the students to hang out for a while unattended, eating only from the bowl of radishes, while another group ate only cookies. Afterward, each volunteer tried to solve a puzzle, one that was designed to be impossible to complete.

Baumeister and Tice timed the students in the puzzle task, to see how long it took them to give up. They found that the ones who’d eaten chocolate chip cookies kept working on the puzzle for 19 minutes, on average, about as long as people in a control condition who hadn’t snacked at all. The students who noshed on radishes flubbed the puzzle test, lasting just eight minutes before they quit in frustration. The authors called this effect “ego depletion” and said it revealed a fundamental fact about the human mind: we all have a limited supply of willpower, and it decreases with overuse.

© 2016 The Slate Group LLC.
Angus Chen

We know we should put the cigarettes away or make use of that gym membership, but in the moment we just don't do it. There is, however, a cluster of neurons in our brain critical for motivation. What if you could hack them to motivate yourself?

These neurons are located in the middle of the brain, in a region called the ventral tegmental area. A paper published Thursday in the journal Neuron suggests that we can activate the region with a little bit of training. The researchers put 73 people into an fMRI scanner, which can detect what part of the brain is most active, and focused on that area associated with motivation. When the researchers said "motivate yourself and make this part of your brain light up," people couldn't really do it. "They weren't that reliable when we said, 'Go! Get psyched. Turn on your VTA,'" says Dr. Alison Adcock, a psychiatrist at Duke and senior author on the paper.

That changed when the participants were allowed to watch a neurofeedback meter that displayed activity in their ventral tegmental area: when activity ramps up, the participants see the meter heat up while they're in the fMRI tube. "Your whole mind is allowed to speak to a specific part of your brain in a way you never imagined before. Then you get feedback that helps you discover how to turn that part of the brain up or down," says John Gabrieli, a neuroscientist at the Massachusetts Institute of Technology who was not involved with the work.

© 2016 npr
Monya Baker

Is psychology facing a ‘replication crisis’? Last year, a crowdsourced effort that was able to validate fewer than half of 98 published findings [1] rang alarm bells about the reliability of psychology papers. Now a team of psychologists has reassessed the study and says that it provides no evidence for a crisis. “Our analysis completely invalidates the pessimistic conclusions that many have drawn from this landmark study,” says Daniel Gilbert, a psychologist at Harvard University in Cambridge, Massachusetts, and a co-author of the reanalysis, published on 2 March in Science [2]. But a response [3] in the same issue of Science counters that the reanalysis itself depends on selective assumptions, and others say that psychology still urgently needs to improve its research practices.

Statistical criticism

In August 2015, a team of 270 researchers reported the largest-ever single-study audit of the scientific literature. Led by Brian Nosek, executive director of the Center for Open Science in Charlottesville, Virginia, the Reproducibility Project attempted to replicate studies in 100 psychology papers. (It ended up with 100 replication attempts for 98 papers because of problems assigning teams to two papers.) According to one of several measures of reproducibility, just 36% could be confirmed; by another statistical measure, 47% could [1]. Either way, the results looked worryingly feeble.

“Both optimistic and pessimistic conclusions about reproducibility are possible, and neither are yet warranted.”

Not so fast, says Gilbert.

© 2016 Nature Publishing Group
By Christian Jarrett

Most of us like to think that we’re independent-minded: we tell ourselves we like Adele’s latest album because it suits our taste, not because millions of other people bought it, or that we vote Democrat because we’re so enlightened, not because all our friends vote that way. The reality, of course, is that humans are swayed in all sorts of ways, some of them quite subtle, by other people’s beliefs and expectations. Our preferences don’t form in a vacuum, but rather in something of a social pressure-cooker.

This has been demonstrated over and over, perhaps most famously in the classic Asch conformity studies from the 1950s. In those experiments, many participants went along with a blatantly wrong majority judgment about the lengths of different lines, simply, it seems, to fit in. (Although the finding is frequently exaggerated, the basic point about the power of social influence holds true.) But that doesn’t mean all humans are susceptible to peer pressure in the same way. You only have to look at your own friends and family to know that some people always seem to roll with the crowd, while others are much more independent-minded. What accounts for these differences?

A new study in Frontiers in Human Neuroscience led by Dr. Juan Dominguez of Monash University in Melbourne, Australia, offers the first hint that part of the answer may come down to certain neural mechanisms. In short, the study suggests that people have a network in their brains that is attuned to disagreement with other people. When this network is activated, it makes us feel uncomfortable (we experience “cognitive dissonance,” to use the psychological jargon), and avoiding this state motivates us to bring our views into line as much as possible. The network appears to be more sensitive in some people than in others, which might account for varying degrees of pushover-ness.

© 2016, New York Media LLC.
By Meeri Kim

Teenagers tend to have a bad reputation in our society, and perhaps rightly so. Compared with children or adults, adolescents are more likely to engage in binge drinking, drug use, unprotected sex, criminal activity and reckless driving. Risk-taking is like second nature to youth of a certain age, leading health experts to cite preventable and self-inflicted causes as the biggest threats to adolescent well-being in industrialized societies.

But before going off on a tirade about groups of reckless young hooligans, consider that a recent study may have revealed a silver lining to all that misbehavior. While adolescents will take more risks in the presence of their peers than when alone, it turns out that peers can also encourage them to learn faster and engage in more exploratory acts.

A group of 101 late-adolescent males were randomly assigned to play the Iowa Gambling Task, a psychological game used to assess decision making, either alone or observed by their peers. The task involves four decks of cards: two are “lucky” decks that will generate long-term gain if the player continues to draw from them, while the other two are “unlucky” decks that have the opposite effect. The player chooses to play or pass cards drawn from one of these decks, eventually catching on to which of the decks are lucky or unlucky, and subsequently playing only from the lucky ones.
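The structure of the task, and how a learner comes to favor the lucky decks, can be sketched in a few lines of Python. The payoff values, probabilities and the simple explore/exploit learning rule below are all illustrative assumptions, not details of the actual task or study; the real Iowa Gambling Task uses a fixed payoff schedule.

```python
import random

def make_deck(lucky, rng):
    """Return a draw function for a deck. Lucky decks pay modestly but
    reliably (positive expected value); unlucky decks pay big but lose
    bigger (negative expected value). Values are illustrative only."""
    if lucky:
        return lambda: 50 if rng.random() < 0.9 else -50
    return lambda: 100 if rng.random() < 0.5 else -250

rng = random.Random(0)
decks = {"A": make_deck(False, rng), "B": make_deck(False, rng),
         "C": make_deck(True, rng), "D": make_deck(True, rng)}

# A simple learner: track each deck's average payoff so far and mostly
# draw from the best-looking deck, with occasional exploration.
totals = {d: 0.0 for d in decks}
counts = {d: 1 for d in decks}  # start at 1 to avoid division by zero
bank = 0
for trial in range(200):
    if rng.random() < 0.1:
        choice = rng.choice(list(decks))  # explore a random deck
    else:
        choice = max(decks, key=lambda d: totals[d] / counts[d])  # exploit
    payoff = decks[choice]()
    totals[choice] += payoff
    counts[choice] += 1
    bank += payoff

# Over many trials the learner typically shifts its draws toward the
# lucky decks C and D, mirroring how players "catch on" to the task.
print({d: counts[d] for d in decks})
```

Raising the exploration rate is one crude way to model the study's suggestion that peer observation pushes adolescents toward more exploratory choices, and hence faster learning about which decks pay off.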
By David Z. Hambrick

We all make stupid mistakes from time to time. History is replete with examples. Legend has it that the Trojans accepted the Greeks’ “gift” of a huge wooden horse, which turned out to be hollow and filled with a crack team of Greek commandos. The Tower of Pisa started to lean even before construction was finished, and it is not even the world’s farthest-leaning tower. NASA taped over the original recordings of the moon landing, and operatives for Richard Nixon’s re-election committee were caught breaking into a Watergate office, setting in motion the greatest political scandal in U.S. history. More recently, the French government spent $15 billion on a fleet of new trains, only to discover that they were too wide for some 1,300 station platforms.

We readily recognize these incidents as stupid mistakes: epic blunders. On a more mundane level, we invest in get-rich-quick schemes, drive too fast, and post things on social media that we later regret. But what, exactly, drives our perception of these actions as stupid mistakes, as opposed to bad luck? Their seeming mindlessness? The severity of the consequences? The responsibility of the people involved?

Science can help us answer these questions. In a study just published in the journal Intelligence, Balazs Aczel and his colleagues compiled a collection of stories describing stupid mistakes from sources such as The Huffington Post and TMZ, using search terms such as “stupid thing to do”. One story described a thief who broke into a house, stole a TV and later returned for the remote; another described burglars who intended to steal cell phones but instead stole GPS tracking devices that were turned on and gave police their exact location. The researchers then had a sample of university students rate each story on the responsibility of the people involved, the influence of the situation, the seriousness of the consequences and other factors.

© 2016 Scientific American
Alison Abbott

More than 50 years after a controversial psychologist shocked the world with studies revealing people’s willingness to harm others on command, a team of cognitive scientists has carried out an updated version of the iconic ‘Milgram experiments’. Their findings may offer some explanation for Stanley Milgram's uncomfortable revelations: when following commands, they say, people genuinely feel less responsibility for their actions, whether they are told to do something evil or benign.

“If others can replicate this, then it is giving us a big message,” says neuroethicist Walter Sinnott-Armstrong of Duke University in Durham, North Carolina, who was not involved in the work. “It may be the beginning of an insight into why people can harm others if coerced: they don’t see it as their own action.”

The study may feed into a long-running legal debate about the balance of personal responsibility between someone acting under instruction and their instructor, says Patrick Haggard, a cognitive neuroscientist at University College London, who led the work, published on 18 February in Current Biology [1]. Milgram’s original experiments were motivated by the trial of the Nazi Adolf Eichmann, who famously argued that he was ‘just following orders’ when he sent Jews to their deaths. The new findings don’t legitimize harmful actions, Haggard emphasizes, but they do suggest that the ‘only obeying orders’ excuse betrays a deeper truth about how a person feels when acting under command.

© 2016 Nature Publishing Group
By BENEDICT CAREY

Children with attention-deficit problems improve faster when the first treatment they receive is behavioral, like instruction in basic social skills, than when they start immediately on medication, a new study has found. Beginning with behavioral therapy is also a less expensive option over time, according to a related analysis.

Experts said the efficacy of this behavior-first approach, if replicated in larger studies, could change standard medical practice, which favors stimulants like Adderall and Ritalin as first-line treatments for the more than four million children and adolescents in the United States with a diagnosis of attention deficit hyperactivity disorder, or A.D.H.D.

The new research, published in two papers in the Journal of Clinical Child & Adolescent Psychology, found that stimulants were most effective as a supplemental, second-line treatment for those who needed it, and often at doses lower than normally prescribed. The study is thought to be the first of its kind in the field to evaluate the effect of altering the types of treatment midcourse: adding a drug to behavior therapy, for example, or vice versa.

“We showed that the sequence in which you give treatments makes a big difference in outcomes,” said William E. Pelham of Florida International University, a leader of the study with Susan Murphy of the University of Michigan. “The children who started with behavioral modification were doing significantly better than those who began with medication by the end, no matter what treatment combination they ended up with.”

Other experts cautioned that the study tracked behavior but not other abilities that medication can quickly improve, like attention and academic performance, and said that drugs remained the first-line treatment for those core issues.

© 2016 The New York Times Company
Allison Aubrey

It's no secret that stimulant medications such as Adderall, prescribed to treat symptoms of ADHD, are sometimes used as "study drugs" aimed at boosting cognitive performance. And emergency room visits linked to misuse of the drug are on the rise, according to a study published Tuesday in the Journal of Clinical Psychiatry.

"Young adults in the 18- to 25-year age range are most likely to misuse these drugs," says Dr. Ramin Mojtabai, a professor at the Johns Hopkins Bloomberg School of Public Health and senior author of the study. A common scenario is this: a person who has been prescribed ADHD drugs gives or diverts pills to a friend or family member who may be looking for a mental boost, perhaps to cram for a final or prepare a report. And guess what? This is illegal.

Overall, the study found that nonmedical use of Adderall and generic versions of the drug increased by 67 percent among adults between 2006 and 2011. The findings are based on data from the National Survey on Drug Use and Health. The number of emergency room visits involving Adderall misuse increased from 862 visits in 2006 to 1,489 in 2011, according to data from the Drug Abuse Warning Network.

© 2016 npr
David H. Wells

Take a theory of consciousness that calculates how aware any information-processing network is, be it a computer or a brain. Trouble is, it takes a supercomputer billions of years to verify its predictions. Add a maverick cosmologist, and what do you get? A way to make the theory useful within our lifetime.

Integrated information theory (IIT) is one of our best descriptions of consciousness. Developed by neuroscientist Giulio Tononi of the University of Wisconsin at Madison, it’s based on the observation that each moment of awareness is unified. When you contemplate a bunch of flowers, say, it’s impossible to be conscious of the flowers’ colour independently of their fragrance, because the brain has integrated the sensory data. Tononi argues that for a system to be conscious, it must integrate information in such a way that the whole contains more information than the sum of its parts.

The measure of how well a system integrates information is called phi. One way of calculating phi involves dividing a system into two and calculating how dependent each part is on the other. One cut would be the “cruellest”, creating the two parts that are least dependent on each other. If the parts of the cruellest cut are completely independent, then phi is zero and the system is not conscious. The greater their dependency, the greater the value of phi and the greater the degree of consciousness of the system.

Finding the cruellest cut, however, is almost impossible for any large network. For the human brain, with its 100 billion neurons, calculating phi like this would take “longer than the age of our universe”, says Max Tegmark, a cosmologist at the Massachusetts Institute of Technology.

© Copyright Reed Business Information Ltd.
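The "cruellest cut" search described above can be illustrated with a toy calculation. The sketch below is emphatically not IIT's real phi, which is a much more involved causal measure over a system's mechanisms; it simply uses mutual information between the two halves of each bipartition as a stand-in for "dependency", which is enough to show why the search over cuts explodes combinatorially as the network grows.

```python
from itertools import combinations, product
from math import log2

def mutual_information(joint, part_a, part_b):
    """Mutual information between two groups of bit positions under a
    joint distribution given as {state_tuple: probability}."""
    def marginal(positions):
        m = {}
        for state, p in joint.items():
            key = tuple(state[i] for i in positions)
            m[key] = m.get(key, 0.0) + p
        return m
    pa, pb = marginal(part_a), marginal(part_b)
    mi = 0.0
    for state, p in joint.items():
        if p > 0:
            ka = tuple(state[i] for i in part_a)
            kb = tuple(state[i] for i in part_b)
            mi += p * log2(p / (pa[ka] * pb[kb]))
    return mi

def toy_phi(joint, n):
    """Minimum dependency over every bipartition of n units: the cut with
    the least dependency is the 'cruellest', and its score is our toy phi."""
    best = float("inf")
    units = range(n)
    for size in range(1, n // 2 + 1):
        for part_a in combinations(units, size):
            part_b = tuple(i for i in units if i not in part_a)
            best = min(best, mutual_information(joint, part_a, part_b))
    return best

# Two perfectly correlated bits: any cut separates dependent parts,
# so the toy phi is positive (1 bit of shared information).
correlated = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent fair coins: the cut between them costs nothing.
independent = {s: 0.25 for s in product((0, 1), repeat=2)}
print(toy_phi(correlated, 2), toy_phi(independent, 2))  # 1.0 0.0
```

Even in this toy version, the number of bipartitions grows roughly as 2^(n-1), which is the combinatorial wall behind Tegmark's "longer than the age of our universe" estimate for a brain-sized network.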