Links for Keyword: Consciousness

Links 41 - 59 of 59

By Giulio Tononi When is an entity one entity? How can multiple elements be a single thing? A question simple enough—but one, thought Galileo, that had not yet been answered. Or perhaps, it had not been asked. The sensor of the digital camera certainly had a large repertoire of states—it could take any possible picture. But was it a single entity? You use the camera as a single entity, you grasp it with your hands as one. You watch the photograph as a single entity. But that is within your own consciousness. If it were not for you, the observer, would it still be a single entity? And what exactly would that mean? While musing such matters, Galileo was startled by a voice. J., a man with the forehead of an ancient god, addressed him in a polished tone: “Take a sentence of a dozen words, and take twelve men, and tell to each one word. Then stand the men in a row or jam them in a bunch, and let each think of his word as intently as he will; nowhere will there be a consciousness of the whole sentence. Or take a word of a dozen letters, and let each man think of his letter as intently as he will; nowhere will there be a consciousness of the whole word,” J. said. Or take a picture of one million dots, and take one million photodiodes, and show each photodiode its own dot. Then stand the photodiodes well ordered on a square array, and let each tell light from dark for its own dot, as precisely as it will; nowhere will there be a consciousness of the whole picture, said Galileo. “So you see that, Galileo,” J. continued. “There is no such thing as the spirit of the age, the sentiment of the people, or public opinion. The private minds do not agglomerate into a higher compound mind. They say the whole is more than the sum of its parts; they say, but how can it be so?” © 2012 Scientific American, Copyright © 2012 by Giulio Tononi

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 17129 - Posted: 08.06.2012

by Michael S. Gazzaniga We humans think we make all our decisions to act consciously and willfully. We all feel we are wonderfully unified, coherent mental machines and that our underlying brain structure must reflect this overpowering sense. It doesn’t. No command center keeps all other brain systems hopping to the instructions of a five-star general. The brain has millions of local processors making important decisions. There is no one boss in the brain. You are certainly not the boss of your brain. Have you ever succeeded in telling your brain to shut up already and go to sleep? Even though we know that the organization of the brain is made up of a gazillion decision centers, that neural activities going on at one level of organization are inexplicable at another level, and that there seems to be no boss, our conviction that we have a “self” making all the decisions is not dampened. It is a powerful illusion that is almost impossible to shake. In fact, there is little or no reason to shake it, for it has served us well as a species. There is, however, a reason to try to understand how it all comes about. If we understand why we feel in charge, we will understand why and how we make errors of thought and perception. When I was a kid, I spent a lot of time in the desert of Southern California—out in the desert scrub and dry bunchgrass, surrounded by purple mountains, creosote bush, coyotes, and rattlesnakes. The reason I am still here today is because I have nonconscious processes that were honed by evolution. © 2012, Kalmbach Publishing Co.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 17121 - Posted: 08.04.2012

By JOHN MONTEROSSO and BARRY SCHWARTZ ARE you responsible for your behavior if your brain “made you do it”? Often we think not. For example, research now suggests that the brain’s frontal lobes, which are crucial for self-control, are not yet mature in adolescents. This finding has helped shape attitudes about whether young people are fully responsible for their actions. In 2005, when the Supreme Court ruled that the death penalty for juveniles was unconstitutional, its decision explicitly took into consideration that “parts of the brain involved in behavior control continue to mature through late adolescence.” Similar reasoning is often applied to behavior arising from chemical imbalances in the brain. It is possible, when the facts emerge, that the case of James E. Holmes, the suspect in the Colorado shootings, will spark debate about neurotransmitters and culpability. Whatever the merit of such cases, it’s worth stressing an important point: as a general matter, it is always true that our brains “made us do it.” Each of our behaviors is always associated with a brain state. If we view every new scientific finding about brain involvement in human behavior as a sign that the behavior was not under the individual’s control, the very notion of responsibility will be threatened. So it is imperative that we think clearly about when brain science frees someone from blame — and when it doesn’t. © 2012 The New York Times Company

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 17093 - Posted: 07.28.2012

By Michael Shermer Where is the experience of red in your brain? The question was put to me by Deepak Chopra at his Sages and Scientists Symposium in Carlsbad, Calif., on March 3. A posse of presenters argued that the lack of a complete theory by neuroscientists regarding how neural activity translates into conscious experiences (such as redness) means that a physicalist approach is inadequate or wrong. The idea that subjective experience is a result of electrochemical activity remains a hypothesis, Chopra elaborated in an e-mail. It is as much of a speculation as the idea that consciousness is fundamental and that it causes brain activity and creates the properties and objects of the material world. Where is Aunt Millie's mind when her brain dies of Alzheimer's? I countered to Chopra. Aunt Millie was an impermanent pattern of behavior of the universe and returned to the potential she emerged from, Chopra rejoined. In the philosophic framework of Eastern traditions, ego identity is an illusion and the goal of enlightenment is to transcend to a more universal nonlocal, nonmaterial identity. The hypothesis that the brain creates consciousness, however, has vastly more evidence for it than the hypothesis that consciousness creates the brain. Damage to the fusiform gyrus of the temporal lobe, for example, causes face blindness, and stimulation of this same area causes people to see faces spontaneously. Stroke-caused damage to the visual cortex region called V1 leads to loss of conscious visual perception. Changes in conscious experience can be directly measured by functional MRI, electroencephalography and single-neuron recordings. Neuroscientists can predict human choices from brain-scanning activity before the subject is even consciously aware of the decisions made. Using brain scans alone, neuroscientists have even been able to reconstruct, on a computer screen, what someone is seeing. © 2012 Scientific American

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 16979 - Posted: 06.28.2012

By Tom Siegfried Arguably, and it would be a tough argument to win if you took the other side, computers have had a greater impact on civilization than any other machine since the wheel. Sure, there was the steam engine, the automobile and the airplane, the printing press and the mechanical clock. Radios and televisions also made their share of societal waves. But look around. Computers do everything TVs and radios ever did. And computers tell time, control cars and planes, and have rendered printing presses pretty darn near obsolete. Computers have invaded every realm of life, from work to entertainment to medicine to education: Reading, writing and arithmetic are now all computer-centric activities. Every nook and cranny of human culture is controlled, colored or monitored by the digital computer. Even though, merely 100 years ago, no such machine existed. In 1912, the word computer referred to people (typically women) using pencils and paper or adding machines. Coincidentally, that was the year that Alan Turing was born. If you don’t like the way computers have taken over the world, you could blame him. No one did more to build the foundation of computer science than Turing. In a paper published in 1936, he described the principle behind all of today’s computing devices, sketching out the theoretical blueprint for a machine able to implement instructions for making any calculation. Turing didn’t invent the idea of a computer, of course. Charles Babbage had grand plans for a computing machine a century earlier (and even he had precursors). George Boole, not long after Babbage, developed the underlying binary mathematics (originally conceived much earlier by Gottfried Leibniz) that modern digital computers adopted. © Society for Science & the Public 2000 - 2012
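
The abstract machine Turing described in that 1936 paper is simple enough to sketch in a few lines of code. The toy simulator below is only an illustration (the function name, rule table and unary-increment example are mine, not the article's): an unbounded tape of symbols, a read/write head, and a finite table of state-transition rules. With the right rule table, such a machine can in principle carry out any computation a modern computer can.

# A minimal, hypothetical sketch of Turing's abstract machine: a tape,
# a read/write head, and a finite table of state-transition rules.

def run_turing_machine(tape, rules, state="start", head=0, blank="_", max_steps=1000):
    """Run a one-tape Turing machine until it enters the 'halt' state.

    rules maps (state, symbol) -> (next_state, symbol_to_write, move),
    where move is -1 (left), +1 (right) or 0 (stay).
    """
    cells = dict(enumerate(tape))      # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))

# Example rule table: unary increment (append one '1' to a block of '1's).
increment_rules = {
    ("start", "1"): ("start", "1", +1),  # skip over the existing 1s
    ("start", "_"): ("halt", "1", 0),    # write a 1 on the first blank, then halt
}

print(run_turing_machine("111", increment_rules))  # prints 1111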

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 16914 - Posted: 06.16.2012

By Laura Sanders A mysterious kind of nerve cell that has been linked to empathy, self-awareness, and even consciousness resides in Old World monkeys. The finding, published May 10 in Neuron, extends the domain of the neurons beyond humans, great apes and other large-brained creatures and will now allow scientists to study the habits of a neuron that may be key to human self-awareness. “People have been reluctant to say, but want to believe, that these neurons might be the neural correlate of consciousness,” says neuroscientist and psychiatrist Hugo Critchley of the University of Sussex in England. Finding the neurons in macaques, which can be studied in laboratories, “opens up the possibility to study directly the role of these cells,” he says. An earlier study saw no signs of the cells, called von Economo neurons, in macaques. But while carefully scrutinizing a small piece of a macaque brain for a different experiment, anatomist Henry Evrard of the Max Planck Institute for Biological Cybernetics in Tübingen, Germany, stumbled across the rare, distinctive cells. About three times bigger than other nerve cells, von Economo neurons have long, fat bodies and tufts of message-receiving dendrites at each end. Evrard compares the first sighting to seeing the tip of an iceberg. After many additional tests, he and his colleagues concluded that the cells, though smaller and sparser than their human counterparts, were indeed the elusive von Economo neurons. © Society for Science & the Public 2000 - 2012

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 16776 - Posted: 05.10.2012

By JAMES GORMAN The puzzle of consciousness is so devilish that scientists and philosophers are still struggling with how to talk about it, let alone figure out what it is and where it comes from. One problem is that the word has more than one meaning. Trying to plumb the nature of self-awareness or self-consciousness leads down one infamous rabbit hole. But what if the subject is simply the difference in brain activity between being conscious and being unconscious? Scientists and doctors certainly know how to knock people out. Michael T. Alkire at the University of California, Irvine, put it this way in an article in Science in 2008: “How consciousness arises in the brain remains unknown,” he wrote. “Yet, for nearly two centuries our ignorance has not hampered the use of general anesthesia for routinely extinguishing consciousness during surgery.” And a good thing, too. Setting aside what philosophers call “the hard problem” (self-awareness), a lot has been learned about the boundary between being awake and alert and being unconscious since ether was used in 1846 to put a patient under for surgery. Researchers have used anesthesia, recently in combination with brain scans, as a tool to see what happens in the brain when people fade in and out of consciousness — which parts turn on and which turn off. For instance, in a recent study, investigators showed that a person could respond to a simple command to open his eyes (the subjects were all right-handed men) when the higher parts of the brain were not yet turned on. The finding may be useful in deciding how to measure the effects of anesthetics, and it adds another data point to knowledge of what’s going on in the brain. © 2012 The New York Times Company

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 14: Biological Rhythms, Sleep, and Dreaming
Related chapters from MM: Chapter 14: Attention and Consciousness; Chapter 10: Biological Rhythms and Sleep
Link ID: 16645 - Posted: 04.14.2012

Tim Parks “There are no images.” This was the first time I noticed Riccardo Manzotti. It was a conference on art and neuroscience. Someone had spoken about the images we keep in our minds. Manzotti seemed agitated. The girl sitting next to me explained that he built robots, was a genius. “There are no images and no representations in our minds,” he insisted. “Our visual experience of the world is a continuum between see-er and seen united in a shared process of seeing.” I was curious, if only because, as a novelist I’d always supposed I was dealing in images, imagery. This stuff might have implications. So we had a beer together. Manzotti has a degree in engineering and another in philosophy. He teaches in the psychology department at IULM University, Milan. The move from engineering to philosophy was prompted by conceptual problems he’d run into when first seeking to build robots. What does it mean that a subject sees an object? “People say the robot stores images of the world through its video camera. It doesn’t, it stores digital data. It has no images.” Manzotti is what they call a radical externalist: for him consciousness is not safely confined within a brain whose neurons select and store information received from a separate world, appropriating, segmenting, and manipulating various forms of input. Instead, he offers a model he calls Spread Mind: consciousness is a process shared between various otherwise distinct processes which, for convenience’s sake we have separated out and stabilized in the words subject and object. Language, or at least our modern language, thus encourages a false account of experience. © 1963-2012 NYREV, Inc.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 16634 - Posted: 04.12.2012

By Christof Koch What is the relation between selective attention and consciousness? When you strain to listen to the distant baying of coyotes over the sound of a campsite conversation, you do so by attending to the sound and becoming conscious of their howls. When you attend to your sparring opponent out of the corner of your eye, you become hyperaware of his smallest gestures. Because of the seemingly intimate relation between attention and consciousness, most scholars conflate the two processes. Indeed, when I came out of the closet to give public talks on the mind-body problem in the early 1990s (at that time, it wouldn’t do for a young professor in biology or engineering who had not even yet attained the holy state of tenure to talk about consciousness: it was considered too fringy), some of my colleagues insisted that I replace the incendiary “consciousness” with the more neutral “attention” because the two concepts could not be distinguished and were probably the same thing anyway. Two decades later a number of experiments prove that the two are not the same. Stage magicians are superb at manipulating the audience’s attention. By misdirecting your gaze using their hands or a beautiful, bikini-clad assistant, you look but don’t see, inverting Yogi Berra’s famous witticism, “You can observe a lot just by watching.” Scientists can do the same, sans the sexy woman. I described a psychophysical technique called continuous flash suppression in an earlier column [see “Rendering the Visible Invisible,” October/November 2008], in which a faint image in one eye—say, an angry face in the left eye—becomes invisible by flashing a series of colorful overlaid rectangles into the other eye. As long as you keep both eyes open, you see only the flashed pictures. Attention is drawn to the rapidly changing images, effectively camouflaging the angry face. As soon as you wink with the right eye, however, you see the face. This technique has been used to great effect both to hide things from consciousness—such as a naked man or woman—and to demonstrate that the brain will still attend to them. © 2012 Scientific American

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 16612 - Posted: 04.05.2012

By John Horgan I met Christof Koch in 1994 at the first of a series of big conferences on consciousness held in Tucson, Ariz. A professor at Caltech, Koch had helped popularize consciousness as a topic for serious scientific investigation—instead of windy philosophical supposition—through his collaboration with the great Francis Crick, who had already cracked the genetic code and now wanted to solve the riddle of mind as well. In Tucson Koch outlined a theory, jointly fashioned by him and Crick, that 40-hertz brain waves might be a key to consciousness. Although I was skeptical of that particular theory, I liked the hard-nosed, materialist, reductionist approach that Koch and Crick took toward consciousness. I also liked the quirky intensity that Koch brought to his scientific work. This trait was on display in Tucson during an encounter between Koch and the philosopher David Chalmers, who proposed that consciousness is such a “hard problem” that it needs new approaches, such as one incorporating ideas from information theory. Confronting Chalmers at a cocktail party, Koch declared that Chalmers’s information-based theory of consciousness was untestable and therefore useless. “Why don’t you just say that when you have a brain the Holy Ghost comes down and makes you conscious!” Koch exclaimed. Such a theory was unnecessarily complicated, Chalmers responded dryly, and it would not accord with his own subjective experience. “But how do I know that your subjective experience is the same as mine?” Koch retorted. “How do I even know you’re conscious?” © 2012 Scientific American

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 16599 - Posted: 04.04.2012

Sumit Paul-Choudhury, editor LOOKING at your own brain is a humbling and slightly unnerving experience. Mine, depicted in a freshly acquired MRI scan, is startlingly intricate, compact - and baffling. This is as much of a portrait of my own mind as I am ever likely to see. But to my ignorant eyes (which, by way of an eerie bonus, are now looking at their own cross-sections) it looks pretty much like any other brain. Apparently a more expert eye wouldn’t help. “Whilst all my participants get very excited about seeing their brain for the first time after being scanned, and I frequently get asked ‘What can you tell me about my brain?’, the reality is that the brain will for a long time yet remain a mysterious mass,” says the neuroscientist who scanned my brain, for research purposes. “We must be content with knowing that the ‘I’ is constructed in its intricacies, but we cannot explain how.” The hope of closing the gap between the physical and mental is presumably what gets neuroscientists up in the morning, but it’s frustrating for a layperson like me. Avowed materialist though I am, I nonetheless rebel against the knowledge that the impassive blob on screen is “me”. This cognitive dissonance was what I took with me to the opening of Brains, a new show at London’s Wellcome Collection, whose subtitle, “The Mind as Matter”, suggests that its curators sympathise with my materialist perspective. “The neurosciences hold out the prospect of an objective account of consciousness - the soul or mind as nothing more than intricately connected flesh,” reads the introduction. But the bulk of the exhibition is dedicated to whole brains, brain collectors and anatomical paraphernalia, with little explicit reference to the brain’s fine structure, or how it might give rise to thought. © Copyright Reed Business Information Ltd.

Related chapters from BP7e: Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 2: Cells and Structures: The Anatomy of the Nervous System; Chapter 14: Attention and Consciousness
Link ID: 16594 - Posted: 03.31.2012

Robert Stickgold The psychologist Stuart Sutherland wrote that it is impossible to define consciousness “except in terms that are unintelligible without a grasp of what consciousness means ... Nothing worth reading has been written about it.” It is arguable whether Christof Koch's Consciousness provides such a definition, but the book is definitely worth reading. Koch, chief scientific officer at the Allen Institute for Brain Science in Seattle, Washington, is perhaps best known for his work with the late Francis Crick, searching for the neurobiological 'correlates of consciousness'. Here, he succinctly lays out the story of that quest. Focusing on how the brain might produce the mind, Koch mixes descriptions of major experiments with self-reflection and warnings of the inherent danger of the exercise. From Koch's collaborations with Crick, whom he seems to idolize, to his struggles with religion and free will, this is an engaging mixture of personal anecdote, scientific fact and pure speculation. It is often charming: Chapter 2, for instance, is entitled, 'In which I write about the wellsprings of my inner conflict between religion and reason, why I grew up wanting to be a scientist, why I wear a lapel pin of Professor Calculus, and how I acquired a second mentor late in life'. For many, the richest parts of the book will be Koch's lucid descriptions of experiments such as his work with Itzhak Fried, a neurosurgeon who implanted electrodes into the hippocampi of people with epilepsy. In one patient, Fried found a single neuron that responded only to the name or pictures of Saddam Hussein; in another, he found one that responded only to pictures of the actress Jennifer Aniston. In a descriptive tour de force, Koch explains that although Fried dubbed these cells concept neurons, we can think of them as “the cellular substrate of the Platonic Ideal of Jennifer Aniston”. © 2012 Nature Publishing Group,

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 16512 - Posted: 03.15.2012

Barry Smith Human beings are part of nature. They are made of flesh and blood, brain and bone; but for much of the time they are also conscious. The puzzling thing is how the intricate sequences of nerve cells and tissue that make up a person's brain and body can generate the special subjective feel of conscious experience. Consciousness creates, in each of us, an inner life where we think and feel; a realm where we experience the sights, sounds, feels, tastes and smells that inform us of the world around us. To many philosophers the central problem of consciousness is, how can the facts of conscious mental life be part of the world of facts described by the natural sciences? The 17th-century philosopher, René Descartes, thought they couldn't and argued that, in addition to our physical makeup, creatures like us had a non-material mind, or soul, in which thinking took place. For Descartes, only humans were subjects of experience. Animals were mere mechanisms. When they squealed with what we mistakenly took to be pain, it was just air escaping from their lungs. Today we take other animals to be conscious; although we are not sure how far down the phylogenetic scale consciousness extends. Most problematically of all, if consciousness was immaterial, how could the immaterial soul move the physical body, or feel pain in response to physical injury? © 2012 Guardian News and Media Limited

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 16473 - Posted: 03.06.2012

By Laura Sanders As a scientist, Giulio Tononi’s goal is as lofty as it gets: He wants to understand how the brain generates consciousness. In his hunt, he and colleagues at the University of Wisconsin–Madison routinely use state-of-the-art brain scanners to produce torrents of information that stream into sophisticated computer programs describing various aspects of brain function. But Tononi’s most profound insight didn’t spring from this huge cache of scientific data. It came instead from a moment of quiet reflection. When he stepped away from his scanners and data and the hustle of the lab and thought — deeply — about what it was like to be conscious, he realized something: Each split second of awareness is a unified, holistic experience, completely different from any experience before or after it. From that observation alone, Tononi intuited a powerful new theory of consciousness, a theory based on the flow of information. He and others believe that mathematics — in particular, a set of equations describing how bits of data move through the brain — is the key to explaining how the mind knits together an experience. Because of its clarity, this informational intuition has resonated with other researchers, inspiring a new way to see the consciousness problem. “This insight was very important to me,” says Anil Seth of the Sackler Centre for Consciousness Science at the University of Sussex in Brighton, England. “I thought, there’s something right about all this.” © Society for Science & the Public 2000 - 2012
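
The equations Tononi and his colleagues work with all turn on a single quantity: how much information a system generates as a whole, over and above what its parts generate on their own. As a rough sketch, loosely following the 2004 formulation of integrated information theory (the excerpt does not spell out the math, and later versions of the theory define these terms differently), the quantity looks like this:

\[
\mathrm{EI}(A \rightleftharpoons B) = \mathrm{MI}\bigl(A^{H^{\max}};B\bigr) + \mathrm{MI}\bigl(B^{H^{\max}};A\bigr)
\]
\[
\Phi(S) = \mathrm{EI}\bigl(\mathrm{MIB}(S)\bigr), \qquad
\mathrm{MIB}(S) = \arg\min_{\{A,B\}} \frac{\mathrm{EI}(A \rightleftharpoons B)}{\min\bigl\{H^{\max}(A),\, H^{\max}(B)\bigr\}}
\]

Here EI is the effective information across a bipartition of the system (mutual information MI measured while one half is driven by maximum-entropy noise), and MIB is the minimum information bipartition, the system's informational weakest link. A high Φ means there is no way to cut the system in two without losing a great deal of information, which is the mathematical counterpart of the unified, holistic experience described above.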

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 16426 - Posted: 02.25.2012

By Laura Sanders In one of science’s most iconic moments, Isaac Newton’s eye caught the red glint of an apple as it plunged toward the ground. He heard the leaves rustle in the light breeze and felt the warmth of the tea he was drinking at the time. These sensory inputs streamed into his brain, where they met his vast stores of knowledge, his internal musings, his peculiar brand of curiosity and perhaps even a fond recollection of escaping the ground’s hold while climbing a tree as a boy. All at once, sights, sounds, emotions and memories converged to form a whole, rich experience in the garden that day. It was this fortuitous experience — perfectly ripe for a big idea — that (legend has it) caused Newton to wonder why the apple fell not sideways or even upward, but straight down. Inspiration struck, ushering in a new understanding of gravity. Newton gets the glory for figuring out that the same mysterious force pulls planets toward the sun and apples toward Earth, but how he did it hinges on an even deeper mystery: How his brain created a single, seamless experience from a chaotic flux of internal and external messages. And that mystery isn’t confined to brains like Newton’s. In all conscious people, the brain somehow gives meaning to the external environment, allowing for thought, self-reflection and discovery. “It’s not that conscious experience is one little interesting phenomenon,” says neuroscientist Ralph Adolphs of Caltech. “It’s literally the whole world.” © Society for Science & the Public 2000 - 2012

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 16366 - Posted: 02.11.2012

By Antonio Damasio, Special to CNN (CNN) -- How do living organisms become conscious of what is happening to them and around them? How is it that I as well as you, reader of these words, can be conscious of our respective existences and of what is going on in our minds — in my case, ideas about how the brain generates consciousness, about the fact that I was asked to prepare this particular text for a specific deadline, along with the fact that I happen to be in Paris, at the moment, not Los Angeles, and that I am writing this on a cold January day. The biological mechanisms behind the phenomena of consciousness remain unclear, although it is fair to say that recently our understanding has made remarkable progress. What are we certain of understanding and where is it that our understanding fails? On the side of understanding, we can point to the process of sensory representation as an important part of consciousness. Most of what we are conscious of (conceivably all that we are conscious of) consists of representations of objects and events in the sensory modalities in which our brains trade, for example, vision, hearing, touching, smelling, taste, sensing the state of our body’s interior. Mapping, in other words. Our brains, at all the levels of their organization, are inveterate makers of maps, simple and not so simple, and as far as I can gather, we only become conscious of the things and actions that the sensory systems help us map. © 2012 Cable News Network.

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 16343 - Posted: 02.06.2012

Christof Koch We moderns believe that our momentary, subjective experience is intimately linked to events in the brain. One set of neurons fires, and we perceive an apple's colour, while a different population of cells gives rise to its taste. Yet the self is also stable: turn the brain off, as happens during heart surgery when the body is cooled to frigid temperatures, and on recovery, the patient's character, personality, habits and long-term memories remain intact. It is these stable aspects of the self, rather than the ebb and flow of our thoughts and percepts, that physicist-turned-neuroscientist Sebastian Seung seeks to explain in Connectome. Seung argues intelligently and powerfully that the self lies in the totality of the brain's wiring — the eponymous 'connectome', the word used by neuroscientists to denote all the fibre bundles (the white matter) of the human brain. These insulated nerve axons have a total length of around 150,000 kilometres. Seung hails a new science, 'connectomics', as the key to understanding the brain and its pathologies. This view is grounded in an older doctrine known as connectionism, which postulates that neurons are simple devices and that their connections determine their functions. Cataloguing the links among neurons therefore charts the mind. The heart of Connectome deals with how nervous systems can be reconstructed using electron microscopy. To do this, neural tissue is cut into slices 40–50 nanometres thick, and then imaged to a resolution of a few nanometres. Imaging 1 cubic millimetre of cortex generates 1 petabyte of data, or about a billion photo images from a typical digital camera. © 2012 Nature Publishing Group,
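
That last data figure is easy to sanity-check. The back-of-the-envelope script below is my own illustration: the 50 nm section thickness comes from the review, while the 5 nm pixel size and 1 byte per voxel are assumptions standing in for “a resolution of a few nanometres”.

# Rough check of the review's figure of about 1 petabyte per cubic millimetre.
# Assumptions (not from the review): 5 nm pixels, 1 byte of image data per voxel.
nm_per_mm = 1e6
pixels_per_side = nm_per_mm / 5   # 200,000 pixels across a 1 mm face
sections = nm_per_mm / 50         # 20,000 sections through 1 mm of depth
total_bytes = pixels_per_side ** 2 * sections
print(f"{total_bytes / 1e15:.1f} PB")                        # ~0.8 PB
print(f"{total_bytes / 1e6 / 1e9:.1f} billion 1 MB photos")  # ~0.8 billion

Both numbers land in the same range as the petabyte and billion-photo figures Koch quotes.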

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 16337 - Posted: 02.04.2012

By Laura Sanders Humankind’s sharpest minds have figured out some of nature’s deepest secrets. Why the sun shines. How humans evolved from single-celled life. Why an apple falls to the ground. Humans have conceived and built giant telescopes that glimpse galaxies billions of light-years away and microscopes that illuminate the contours of a single atom. Yet the peculiar quality that enabled such flashes of scientific insight and grand achievements remains a mystery: consciousness. Though in some ways deeply familiar, consciousness is at the same time foreign to those in its possession. Deciphering the cryptic machinations of the brain — and how they create a mind — poses one of the last great challenges facing the scientific world. For a long time, the very question was considered to be in poor taste, acceptable for philosophical musing but outside the bounds of real science. Whispers of the C-word were met with scorn in polite scientific society. Toward the end of the last century, though, sentiment shifted as some respectable scientists began saying the C-word out loud. Initially these discussions were tantalizing but hazy: Like kids parroting a dirty word without knowing what it means, scientists speculated on what consciousness is without any real data. After a while, though, researchers developed ways to turn their instruments inward to study the very thing that was doing the studying. © Society for Science & the Public 2000 - 2012

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 16306 - Posted: 01.28.2012

By Tom Siegfried When Francis Crick decided to embark on a scientific research career, he chose his specialty by applying the “gossip test.” He’d noticed that he liked to gossip about two especially hot topics in the 1940s — the molecular basis for heredity and the mysteries of the brain. He decided to tackle biology’s molecules first. By 1953, with collaborator James Watson (and aided by data from competitor Rosalind Franklin), Crick had identified the structure of the DNA molecule, establishing the foundation for modern genetics. A quarter century later, he decided it was time to try the path not taken and turn his attention to the brain — in particular, the enigma of consciousness. At first, Crick believed the mysteries of consciousness would be solved with a striking insight, similar to the way the DNA double helix structure explained heredity’s mechanisms. But after a while he realized that consciousness posed a much tougher problem. Understanding DNA was easier because it appeared in life’s history sooner; the double helix template for genetic replication marked the beginning of evolution as we know it. Consciousness, on the other hand, represented evolution’s pinnacle, the outcome of eons of ever growing complexity in biochemical information processing. “The simplicity of the double helix … probably goes back to near the origin of life when things had to be simple,” Crick said in a 1998 interview. “It isn’t clear there will be a similar thing in the brain.” © Society for Science & the Public 2000 - 2012

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Consciousness
Link ID: 16305 - Posted: 01.28.2012