Chapter 7. Vision: From Eye to Brain
By JAMES GORMAN

H. Sebastian Seung is a prophet of the connectome, the wiring diagram of the brain. In a popular book, debates and public talks, he has argued that in that wiring lies each person’s identity. By wiring, Dr. Seung means the connections from one brain cell to another, seen at the level of the electron microscope. For a human, that would mean some 85 billion brain cells, with up to 10,000 connections for each one. The amount of information in a three-dimensional representation of the whole connectome at that level of detail would equal a zettabyte, a term invented only recently, when the amount of digital data accumulating in the world required new words. It equals about a trillion gigabytes, or, as one calculation framed it, 75 billion 16-gigabyte iPads.

When he speaks publicly, he tells his audiences, “I am my connectome.” But he is also a realist, and he can be brutally frank about the limitations of neuroscience. “We’ve failed to answer simple questions,” he said. “People want to know, ‘What is consciousness?’ And they think that neuroscience is up to understanding that. They want us to figure out schizophrenia and we can’t even figure out why this neuron responds to one direction and not the other.”

This mix of intoxicating ideas, and the profound difficulty of testing them, defines not only Dr. Seung’s career but also the current state of neuroscience itself. He is one of the stars of the field, and yet his latest achievement, in a paper published this month, is not one that will set the world on fire. He and his M.I.T. colleagues have proposed an explanation of how a nerve cell in the mouse retina, the starburst amacrine cell, detects the direction of motion. If he’s right, this is significant work. But it may not be what the public expects, and what they have been led to expect, from the current push to study the brain. © 2014 The New York Times Company
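The storage arithmetic above is easy to verify. A quick back-of-envelope sketch (decimal SI units are assumed here; the article's "75 billion iPads" figure presumably rounds differently or uses binary units):

```python
# Back-of-envelope check of the connectome storage figures quoted above.
# Decimal (SI) units are assumed: 1 zettabyte = 10**21 bytes.
ZETTABYTE = 10**21  # bytes
GIGABYTE = 10**9    # bytes

gb_per_zb = ZETTABYTE // GIGABYTE
print(gb_per_zb)  # 1000000000000, i.e. "about a trillion gigabytes"

# Dividing by a 16 GB iPad gives the order of magnitude the article cites;
# the exact count depends on rounding and on decimal vs. binary units.
ipads = gb_per_zb / 16
print(f"{ipads:.2e}")  # 6.25e+10, i.e. tens of billions of iPads
```

Either way, the point stands: a whole-brain human connectome at electron-microscope resolution is data on a scale for which new vocabulary had to be coined.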
By JAMES GORMAN

Crowd-sourced science has exploded in recent years. Foldit enlists users to help solve scientific puzzles such as how proteins are put together. Zooniverse hosts dozens of projects, including searching for planets and identifying images of animals caught on automatic cameras. Eyewire, which came out of H. Sebastian Seung’s lab at the Massachusetts Institute of Technology about a year and a half ago, is neuroscience’s entry into the field.

The EyeWirers, as the players are called, have scored their first scientific success, contributing to a paper in the May 4 issue of Nature by Dr. Seung and his M.I.T. colleagues that offers a solution to a longstanding problem in how motion is detected.

Anyone can sign up online, learn to use the software and start working on what Amy Robinson, the creative director of Eyewire, calls a “3-D coloring book.” The task is something like tracing one piece of yarn through an extremely tangled ball. More than 130,000 players in 145 countries, at last count, work on a cube that represents a bit of retinal tissue 4.5 microns on a side. The many branches of neurons are densely packed within. A micron is 0.00004 inches or, in Eyewire’s calculus, about one-hundredth the width of a human hair. Some of the players spend upward of 40 hours a week on Eyewire.

These cubes are created by an automated process in which electron microscopes make images of ultrathin slices of brain tissue. Computers then analyze and compile the data to create a three-dimensional representation of a cube of tissue with every neuron and connection visible. © 2014 The New York Times Company
By KATE MURPHY

The baseball hurtles toward the batter, and he must decide from its rotation whether it’s a fastball worth a swing or a slider about to drop out of the strike zone. Running full speed, the wide receiver tracks both the football flying through the air and the defensive back on his heels. Golfers must rapidly shift visual focus in order to drive the ball at their feet toward a green in the distance.

Many athletes need excellent vision to perform well in their sports, and now many are adding something new to their practice regimens: vision training. The idea has been around for years, but only recently have studies hinted that it might really work: that it might be possible to train yourself to see better without resorting to glasses or surgery. “Vision training has been out there for a long time,” said Mark Blumenkranz, a professor of ophthalmology at Stanford University Medical School. “But it’s being made more respectable lately thanks to the attention it’s been getting from psychophysicists, vision scientists, neurologists and optometrists.”

Vision training actually has little to do with improving eyesight. The techniques, a form of perceptual learning, are intended to improve the ability to process what is seen. The idea is that if visual sensory neurons are repeatedly activated, they increase their ability to send electrical signals from one cell to another across connecting synapses. If neurons are not used, over time these transmissions are weakened. “With sensory neurons, just like muscles, it’s use it or lose it,” said Dr. Bernhard Sabel, a neuroscientist at Otto von Guericke University in Magdeburg, Germany, who studies plasticity in the brain. “This applies both to athletes and the partially blind.”

Vision training may involve simple strategies: for instance, focusing sequentially on beads knotted at intervals on a length of string with one end held at the tip of the nose.
This is said to improve convergence (the inward turning of the eyes to maintain binocular vision) and the ability to focus near and far. © 2014 The New York Times Company
By Ariel Van Brummelen

The presence of light may do more for us than merely allow for sight. A study by Gilles Vandewalle and his colleagues at the University of Montreal suggests that light affects important brain functions, even in the absence of vision. Previous studies have found that certain photoreceptor cells located in the retina can detect light even in people who do not have the ability to see. Yet most studies suggested that at least 30 minutes of light exposure is needed to significantly affect cognition via these nonvisual pathways. Vandewalle’s study, which involved three completely blind participants, found that just a few seconds of light altered brain activity, as long as the brain was engaged in active processing rather than at rest.

First the experimenters asked their blind subjects whether a blue light was on or off, and the subjects answered correctly at a rate significantly higher than chance, even though they confirmed they had no conscious perception of the light. Using functional MRI, the researchers then determined that less than a minute of blue light exposure triggered changes in activity in regions of the brain associated with alertness and executive function. Finally, the scientists found that if the subjects received simultaneous auditory stimulation, a mere two seconds of blue light was enough to modify brain activity. The researchers think the noise engaged active sensory processing, which allowed the brain to respond to the light much more quickly than in previous studies, in which subjects rested while being exposed to light.

The results confirm that the brain can detect light in the absence of working vision. They also suggest that light can quickly alter brain activity through pathways unrelated to sight. The researchers posit that this nonvisual light sensing may aid in regulating many aspects of human brain function, including sleep/wake cycles and threat detection. © 2014 Scientific American
Mo Costandi

A vast project to map neural connections in the mouse retina may have answered the long-standing question of how the eyes detect motion. With the help of volunteers who played an online brain-mapping game, researchers showed that pairs of neurons positioned along a given direction together cause a third neuron to fire in response to images moving in that direction.

It is sometimes said that we see with the brain rather than the eyes, but this is not entirely true. People can only make sense of visual information once it has been interpreted by the brain, but some of this information is processed partly by neurons in the retina. In particular, 50 years ago researchers discovered that the mammalian retina is sensitive to the direction and speed of moving images. This showed that motion perception begins in the retina, but researchers struggled to explain how.

When light enters the eye, it is captured by photoreceptor cells, which convert the information into electrical impulses and transmit them to deeper layers of the retina. Individual photoreceptors are not sensitive to the direction in which an object may be moving, so neuroscientist Jinseop Kim, of the Massachusetts Institute of Technology (MIT) in Cambridge, and his colleagues wanted to test whether the answer to the puzzle could lie in the way various types of cells in the retina are connected.

Photoreceptors relay their signals via ‘bipolar neurons’, so named because they have two stems that jut out of the cell body in opposite directions. The signal then transits through ‘starburst amacrine cells’, whose filaments, or dendrites, extend in all directions like rays from a star, before reaching the cells that form the optic nerve, which relays the signals to the brain. © 2014 Nature Publishing Group
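The wiring scheme described above, in which spatially offset inputs converge on a downstream cell, is in the spirit of classic delay-and-correlate (Reichardt-type) motion detectors. The toy model below illustrates that general principle only; it is not the circuit from the paper, and all stimulus values are invented:

```python
# Toy delay-and-correlate motion detector: two inputs sample neighbouring
# points in space, one input's signal is delayed, and the two are multiplied.
# The product is large only when a stimulus moves from the delayed input
# toward the undelayed one, so the unit responds to one direction of motion.

def detector_response(stimulus, delay=1):
    """stimulus: list of frames, each a list of light intensities."""
    response = 0.0
    for t in range(delay, len(stimulus)):
        for x in range(len(stimulus[0]) - 1):
            # delayed signal from position x times current signal at x + 1
            response += stimulus[t - delay][x] * stimulus[t][x + 1]
    return response

def moving_bar(n=8, step=1):
    """A bright spot sweeping across n positions; step=+1 rightward, -1 leftward."""
    frames = []
    positions = range(n) if step > 0 else range(n - 1, -1, -1)
    for p in positions:
        frame = [0.0] * n
        frame[p] = 1.0
        frames.append(frame)
    return frames

rightward = detector_response(moving_bar(step=+1))
leftward = detector_response(moving_bar(step=-1))
print(rightward, leftward)  # prints 7.0 0.0: the unit prefers rightward motion
```

The Kim et al. result can be read as an anatomical version of this idea: bipolar cells with different response lags contact starburst dendrites at different positions, so motion in one direction lines the delays up constructively.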
By Melissa Hogenboom, Science reporter, BBC Radio Science

Neuroscience is a fast-growing and popular field, but despite advances, when an area of the brain "lights up" it does not tell us as much as we'd like about the inner workings of the mind. Many of us have seen the pictures and read the stories: a beautiful picture of the brain in which an area is highlighted and found to be fundamental for processes like fear, disgust or impaired social ability. There are so many stories that it can be easy to be swayed into thinking much more of the brain's mystery has been solved than is the case.

The technology is impressive, but one of the most popular scanning methods, functional magnetic resonance imaging (fMRI), actually measures regional changes in blood flow to areas of the brain, not our neurons directly. Researchers use it when they want to understand what part of the brain is involved in a particular task. They can place a person in a brain scanner and see which areas become active. The areas that light up are then inferred to be important for that task, but the resulting images and the phrase "lighting up the brain" can lead to over-interpretation.

Neuroscientist Molly Crockett of University College London explains that while fMRI is extremely useful, we are still very far from being able to read an individual's mind from a scan. "There's a misconception that's still rather common that you can look at someone's brain imaging data and be able to read off what they're thinking and feeling. This is certainly not the case," Dr Crockett told the BBC's Inside Science programme. "A study will have been done which tells us something about the brain, but what [the public] really want to do is make the leap and understand the mind." She cites an article with the headline "You love your iPhone, literally". In this case a team saw that an area previously associated with love, the insula, was active when participants watched videos of a ringing iPhone.
BBC © 2014
By Greg Miller As a journalist who writes about neuroscience, I’ve gotten a lot of super enthusiastic press releases touting a new breakthrough in using brain scans to read people’s minds. They usually come from a major university or a prestigious journal. They make it sound like a brave new future has suddenly arrived, a future in which brain scans advance the cause of truth and justice and help doctors communicate with patients whose minds are still active despite their paralyzed bodies. Amazing, right? Drop everything and write a story! Well, not so fast. Whenever I read these papers and talk to the scientists, I end up feeling conflicted. What they’ve done–so far, anyway–really doesn’t live up to what most people have in mind when they think about mind reading. Then again, the stuff they actually can do is pretty amazing. And they’re getting better at it, little by little. In pop culture, mind reading usually looks something like this: Somebody wears a goofy-looking cap with lots of wires and blinking lights while guys in white lab coats huddle around a monitor in another room to watch the movie that’s playing out in the person’s head, complete with cringe-inducing internal monologue. We are not there yet. “We can decode mental states to a degree,” said John-Dylan Haynes, a cognitive neuroscientist at Charité-Universitätsmedizin Berlin. “But we are far from a universal mind reading machine. For that you would need to be able to (a) take an arbitrary person, (b) decode arbitrary mental states and (c) do so without long calibration.” © 2014 Condé Nast.
by Helen Thomson

A 22-year-old man has been instantaneously transported to his family's pizzeria and his local railway station – by having his brain zapped. These fleeting visual hallucinations have helped researchers pinpoint places where the brain stores visual location information.

Pierre Mégevand at the Feinstein Institute for Medical Research in Manhasset, New York, and his colleagues wanted to discover just where in the brain we store and retrieve information about locations and places. They sought the help of a 22-year-old man being treated for epilepsy, because the treatment involved implanting electrodes into his brain that would record his neural activity.

Mégevand and his colleagues scanned the volunteer's brain using functional MRI while he looked at pictures of different objects and scenes. They then recorded activity from the implanted electrodes as he looked at a similar set of pictures. In both situations, a specific area of the cortex around the hippocampus responded to images of places, but not to images of other kinds of objects, such as body parts or tools. "There are these little spots of tissue that seem to care about houses and places more than any other class of object," says research team member Ashesh Mehta, also at the Feinstein Institute.

Next, the team used the implanted electrodes to stimulate the brain in this area – a move that the volunteer said triggered a series of complex visual hallucinations. First he described seeing a railway station in the neighbourhood where he lives. Stimulation of a nearby area elicited another hallucination, this time of a staircase and a blue closet in his home. When stimulation of these areas was repeated, the same scenes arose. © Copyright Reed Business Information Ltd.
I am a sociologist by training. I come from the academic world, where I read scholarly articles on topics of social import, but they're almost always boring, dry and quickly forgotten. Yet I can't count how many times I've gone to a movie or a theater production, or read a novel, and been jarred into seeing something differently, learned something new, felt deep emotions and retained the insights gained. I know from both my research and casual conversations with people in daily life that my experiences are echoed by many. The arts can tap into issues that are otherwise out of reach, and engage people in meaningful ways. This realization brought me to arts-based research (ABR).

Arts-based research is an emergent paradigm whereby researchers across the disciplines adapt the tenets of the creative arts in their social research projects. Arts-based research, a term coined by Eliot Eisner at Stanford University in the early 1990s, is based on the assumption that art can teach us in ways that other forms cannot. Scholars can take interview or survey research, for instance, and represent it through art. I've written two novels based on sociological interview research. Sometimes researchers use the arts during data collection, involving research participants in the art-making process, such as having them draw their response to a prompt rather than speak it.

The turn by many scholars to arts-based research is most simply explained by my opening comparison between consuming jargon-filled, inaccessible academic articles and experiencing artistic works. While most people know on some level that the arts can reach and move us in unique ways, there is actually science behind this. ©2014 TheHuffingtonPost.com, Inc
Associated Press

Diagnosed with retinitis pigmentosa as a teenager, Pontz has been almost completely blind for years. Now, thanks to a high-tech procedure that involved the surgical implantation of a "bionic eye," he has regained enough of his eyesight to catch small glimpses of his wife, grandson and cat. "It's awesome. It's exciting - seeing something new every day," Pontz said during a recent appointment at the University of Michigan Kellogg Eye Center. The 55-year-old former competitive weightlifter and factory worker is one of four people in the U.S. to receive an artificial retina since the Food and Drug Administration signed off on its use last year. The facility in Ann Arbor has been the site of all four such surgeries since FDA approval. A fifth is scheduled for next month.

Retinitis pigmentosa is an inherited disease that causes slow but progressive vision loss due to a gradual loss of the light-sensitive retinal cells called rods and cones. Patients experience loss of side vision and night vision, then central vision, which can result in near blindness. Not all of the 100,000 or so people in the U.S. with retinitis pigmentosa can benefit from the bionic eye. An estimated 10,000 have vision low enough, said Dr. Brian Mech, an executive with Second Sight Medical Products Inc., the Sylmar (Los Angeles County) company that makes the device. Of those, about 7,500 are eligible for the surgery. The artificial implant in Pontz's left eye is part of a system developed by Second Sight that includes a small video camera and transmitter housed in a pair of glasses. © 2014 Hearst Communications, Inc.
By Stephen L. Macknik and Susana Martinez-Conde

The Best Illusion of the Year Contest brings scientific and popular attention to perceptual oddities. Anyone can submit an illusion to next year's contest; see http://illusionoftheyear.com/submission-instructions for the rules.

Decked out in a mask, cape and black spandex, a fit young man leaps onto the stage, one hand raised high, and bellows, “I am Japaneeeese Bat-Maaaaaan!” in a thick accent. The performer is neither actor nor acrobat. He is a mathematician named Jun Ono, hailing from Meiji University in Japan. Ono's single bound, front and center, at the Philharmonic Center for the Arts in Naples, Fla. (now called Artis-Naples), was the opening act of the ninth Best Illusion of the Year Contest, held May 13, 2013. Four words into the event, we knew Ono had won.

Aside from showcasing new science, the contest celebrates our brain's wonderful and mistaken sense that we can accurately see, smell, hear, taste and touch the world around us. In reality, accuracy is not the brain's forte, as the illusion creators competing each year will attest. Yes, there is a real world out there, and you do perceive (some of) the events that occur around you, but you have never actually lived in reality. Instead your brain gathers pieces of data from your sensory systems, some of which are quite subjective or frankly wrong, and builds a simulation of the world. This simulation, which some call consciousness, becomes the universe in which you live. It is the only thing you have ever perceived. Your brain uses incomplete and flawed information to build this mental model and relies on quirky neural algorithms to often, but not always, obviate the flaws.

Let us take a spin through some of the world's top illusions and their contributions to the science of perception. (To see videos of these illusions, see ScientificAmerican.com/may2014/illusions.) © 2014 Scientific American
Mégevand P et al., Journal of Neuroscience (2014)

Close your eyes and imagine home. Sharp details, such as the shape of the front doorknob, the height of the windows, or the paint color, assemble in your mind with a richness that seems touchable. A new study has found where this mental projection lives in the brain by inducing hallucinations in an epilepsy patient.

A 22-year-old man was receiving deep brain stimulation to isolate where his daily seizures originated. His disorder appeared after he caught West Nile virus at the age of 10 and subsequently suffered from brain inflammation. His episodes were always preceded by intense déjà vu, suggesting a visual component to his disease, but he had no history of hallucinations. Brain scans revealed a shrunken spot near his hippocampus, the brain’s memory center. Studies had shown that this region, known as the parahippocampal place area (PPA), was involved in recognizing scenes and places. Doctors confirmed this by showing the patient pictures of a house and seeing the PPA light up on brain scans with functional magnetic resonance imaging (in the accompanying images, yellow indicates stronger activation than red). Thin wire electrodes, less than 2 mm thick, placed in the PPA recorded similar brain activity after he viewed these pictures.

To assess whether the PPA was ground zero for his seizures, the doctors used a routine procedure that involves shooting soft jolts of electricity into the region and seeing if the patient senses an oncoming seizure. Rather than experiencing déjà vu, the patient found his surroundings suddenly changed as he hallucinated places familiar to him. In one instance, the doctors appeared to morph into the Italians from his local pizza place. Zapping a nearby cluster of neurons produced a vision of his subway station.
The findings, published on 16 April in The Journal of Neuroscience, confirm that this small corner of the brain is not only responsible for recognizing places, but is also crucial to recalling a mental vision of that place. © 2014 American Association for the Advancement of Science
By GRETCHEN REYNOLDS Age-related vision loss is common and devastating. But new research suggests that physical activity might protect our eyes as we age. There have been suggestions that exercise might reduce the risk of macular degeneration, which occurs when neurons in the central part of the retina deteriorate. The disease robs millions of older Americans of clear vision. A 2009 study of more than 40,000 middle-aged distance runners, for instance, found that those covering the most miles had the least likelihood of developing the disease. But the study did not compare runners to non-runners, limiting its usefulness. It also did not try to explain how exercise might affect the incidence of an eye disease. So, more recently, researchers at Emory University in Atlanta and the Atlanta Veterans Administration Medical Center in Decatur, Ga., took up that question for a study published last month in The Journal of Neuroscience. Their interest was motivated in part by animal research at the V.A. medical center. That work had determined that exercise increases the levels of substances known as growth factors in the animals’ bloodstream and brains. These growth factors, especially one called brain-derived neurotrophic factor, or B.D.N.F., are known to contribute to the health and well-being of neurons and consequently, it is thought, to improvements in brain health and cognition after regular exercise. But the brain is not the only body part to contain neurons, as the researchers behind the new study knew. The retina does as well, and the researchers wondered whether exercise might raise levels of B.D.N.F. there, too, potentially affecting retinal health and vision. © 2014 The New York Times Company
Visual illusions, such as the rabbit-duck and the café wall, are fascinating because they remind us of the discrepancy between perception and reality. But our knowledge of such illusions has been largely limited to studying humans. That is now changing: there is mounting evidence that other animals can fall prey to the same illusions. Understanding whether these illusions arise in different brains could help us understand how evolution shapes visual perception.

For neuroscientists and psychologists, illusions not only reveal how visual scenes are interpreted and mentally reconstructed, they also highlight constraints in our perception. They can take hundreds of different forms and can affect our perception of size, motion, colour, brightness, 3D form and much more. Artists, architects and designers have used illusions for centuries to distort our perception.

Some of the most common types of illusory percepts are those that affect the impression of size, length or distance. For example, Ancient Greek architects designed columns for buildings so that they tapered and narrowed towards the top, creating the impression of a taller building when viewed from the ground. This type of illusion is called forced perspective, commonly used in ornamental gardens and stage design to make scenes appear larger or smaller.

As visual processing needs to be both rapid and generally accurate, the brain constantly uses shortcuts and makes assumptions about the world that can, in some cases, be misleading. For example, the brain uses assumptions and the visual information surrounding an object (such as light level and presence of shadows) to adjust the perception of colour accordingly. © 2014 Guardian News and Media Limited
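The forced-perspective trick described above comes down to visual angle: from a fixed viewpoint, the size of an object's image on the retina depends on the angle it subtends, not on its physical size. A small illustrative calculation (the sizes and distances here are made up, not from the article):

```python
import math

# Visual angle subtended by an object of a given size at a given distance:
# theta = 2 * atan(size / (2 * distance)).
def visual_angle_deg(size_m, distance_m):
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

# A 1 m object at 10 m subtends exactly the same angle as a 2 m object at
# 20 m, so from a single viewpoint the two are indistinguishable in size.
# A column that tapers with height exploits this to mimic greater height.
near = visual_angle_deg(1.0, 10.0)
far = visual_angle_deg(2.0, 20.0)
print(round(near, 3), round(far, 3))  # the two angles are identical
```

This is the shortcut the brain relies on when it infers size from context, and the one that Greek architects, garden designers and stage designers turned against us.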
Neuroscientist Bevil Conway thinks about color for a living. An artist since youth, Conway now spends much of his time studying vision and perception at Wellesley College and Harvard Medical School. His science remains strongly linked to art--in 2004 he and Margaret Livingstone famously reported that Rembrandt may have suffered from flawed vision--and in recent years Conway has focused his research almost entirely on the neural machinery behind color. "I think it's a very powerful system," he tells Co.Design, "and it's completely underexploited." Conway's research into the brain's color systems has clear value for designers and artists like himself. It stands to reason, after all, that someone who understands how the brain processes color will be able to present it to others in a more effective way. But the neuroscience of color carries larger implications for the rest of us. In fact, Conway thinks his insights into color processing may ultimately shed light on some fundamental questions about human cognition. Step back for a moment to one of Conway's biggest findings, which came while examining how monkeys process color. Using a brain scanner, he and some collaborators found "globs" of specialized cells that detect distinct hues--suggesting that some areas of the primate brain are encoded for color. Interestingly, not all colors are given equal glob treatment. The largest neuron cluster was tuned to red, followed by green then blue; a small cell collection also cared about yellow. © 2014 Mansueto Ventures, LLC.
By Nathan Collins

A car detects when a driver starts to nod off and gently pulls over. A tablet or laptop senses its user is confused and offers assistance. Such interventions seem futuristic, but in fact they may not require any technological breakthroughs: a recent study suggests that with the aid of a standard camera, a simple computer program can learn to read people's eye movements to determine what they are doing and perhaps how they are feeling.

Psychologists at the University of South Carolina were curious whether a computer could figure out what a person was up to based on his or her eye movements. They first had 12 people engage in four tasks, including reading lines of text and searching photographs for a specific printed letter. Each person repeated the tasks 35 to 50 times while a camera recorded how their eyes moved. Using a subset of those data, the team trained a simple computer program, called a naive Bayes classifier, to identify which of the four tasks each person was doing. In the remaining trials, the classifier correctly determined which task the person was working on 75 percent of the time, well above the 25 percent expected by chance.

Because the computer program is based on a flexible algorithm that is simple but powerful, this setup could most likely be used to identify emotions or mental states such as confusion or fatigue, the researchers suggest in the paper, which appeared in September 2013 in PLOS ONE. With only a brief training period, a car's onboard computer (existing models are more than powerful enough) could learn how a driver's gaze changed as he or she became more exhausted. Further work, the authors suggest, could lead to devices capable of identifying and aiding people in need of assistance in a variety of situations. © 2014 Scientific American
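A naive Bayes classifier of the kind the study used is simple enough to sketch from scratch. The version below is trained on synthetic, invented "eye movement" features (mean fixation duration in milliseconds, mean saccade amplitude in degrees) for two hypothetical tasks; the real study extracted its features from camera recordings of 12 participants across four tasks:

```python
import math
import random

# Gaussian naive Bayes: model each feature, per task label, as an
# independent normal distribution, then pick the label that makes the
# observed features most likely.

def fit(samples):
    """samples: {label: [(f1, f2), ...]} -> per-label feature (mean, variance)."""
    model = {}
    for label, rows in samples.items():
        stats = []
        for col in zip(*rows):
            mean = sum(col) / len(col)
            var = sum((v - mean) ** 2 for v in col) / len(col) + 1e-6
            stats.append((mean, var))
        model[label] = stats
    return model

def log_gaussian(x, mean, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def predict(model, features):
    """Return the label with the highest (equal-prior) log-likelihood."""
    best, best_ll = None, -math.inf
    for label, stats in model.items():
        ll = sum(log_gaussian(x, m, v) for x, (m, v) in zip(features, stats))
        if ll > best_ll:
            best, best_ll = label, ll
    return best

# Invented training data: reading tends toward long fixations and short
# saccades; visual search toward shorter fixations and larger saccades.
rng = random.Random(0)
def make(n, fix_ms, sacc_deg):
    return [(rng.gauss(fix_ms, 20), rng.gauss(sacc_deg, 1)) for _ in range(n)]

train = {"reading": make(50, 220, 2.0), "searching": make(50, 180, 6.0)}
model = fit(train)
print(predict(model, (225, 2.1)))  # classified as "reading"
print(predict(model, (175, 5.8)))  # classified as "searching"
```

The "naive" part is the independence assumption between features, which keeps training to little more than computing per-class means and variances; that is why only a brief calibration period would be needed in a car or tablet.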
by Tania Lombrozo

St. Patrick's Day is my excuse to present you with the following illusion in green, courtesy of a psychology professor at Ritsumeikan University in Japan. In this perceptual illusion, the two spirals appear to be different shades of green. In fact, they are the same.

The image includes two spirals in different shades of green, one a yellowish light green and the other a darker turquoise green. Right? Wrong. At least, that's not what the pixel color values on your monitor will tell you, or what you'd find if you used a photometer to measure the distribution of light waves bouncing back from the green-looking regions of either spiral. In fact, the two spirals are the very same shade of green.

If you don't believe me, here's a trick to make the illusion go away: replace the yellow and blue surrounding the green segments with a uniform background. Replace the blue with black, then remove the yellow as well, and, tada, the very same green appears in both spirals.

The fact that the illusion disappears when the surrounding colors are replaced with a uniform background illustrates an important feature of color perception. Our experience of color for a given region of space isn't just a consequence of the wavelengths of light reaching our retinas from that region. Instead, the context matters a lot! ©2014 NPR
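The "context matters" point can be caricatured with a toy normalization computation, a common ingredient in models of lightness and colour perception (this is an illustration of the general idea, not a model from the piece): if a region's signal is judged relative to its local surround, identical centre values yield different relative responses on different surrounds.

```python
# Toy divisive normalization: the "response" to a patch is its value
# divided by the mean of the patch plus its surround. All values are
# arbitrary illustrative units, not measurements from the illusion.

def normalized_response(center, surround_values):
    local_mean = (center + sum(surround_values)) / (1 + len(surround_values))
    return center / local_mean

GREEN = 120.0  # the identical centre value used in both "spirals"
on_yellow = normalized_response(GREEN, [200.0] * 8)  # bright surround
on_blue = normalized_response(GREEN, [60.0] * 8)     # dark surround

print(on_yellow < on_blue)  # True: same input, different relative signal
```

The same physical green looks dimmer against the bright yellow surround and brighter against the dark blue one, which is roughly the direction of the effect in the spiral illusion; swap both surrounds for one uniform background and the difference vanishes.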
By Gary Marcus and Christof Koch What would you give for a retinal chip that let you see in the dark or for a next-generation cochlear implant that let you hear any conversation in a noisy restaurant, no matter how loud? Or for a memory chip, wired directly into your brain's hippocampus, that gave you perfect recall of everything you read? Or for an implanted interface with the Internet that automatically translated a clearly articulated silent thought ("the French sun king") into an online search that digested the relevant Wikipedia page and projected a summary directly into your brain? Science fiction? Perhaps not for very much longer. Brain implants today are where laser eye surgery was several decades ago. They are not risk-free and make sense only for a narrowly defined set of patients—but they are a sign of things to come. Unlike pacemakers, dental crowns or implantable insulin pumps, neuroprosthetics—devices that restore or supplement the mind's capacities with electronics inserted directly into the nervous system—change how we perceive the world and move through it. For better or worse, these devices become part of who we are. Neuroprosthetics aren't new. They have been around commercially for three decades, in the form of the cochlear implants used in the ears (the outer reaches of the nervous system) of more than 300,000 hearing-impaired people around the world. Last year, the Food and Drug Administration approved the first retinal implant, made by the company Second Sight. ©2014 Dow Jones & Company, Inc.
by Kat Arney

Feeling dopey? Refresh your "circadian eye" with a burst of orange light. Light is a powerful wake-up call, enhancing alertness and activity. Its effect is controlled by a group of photoreceptor cells in the eyeball that make the light-sensing pigment melanopsin. These cells, which work separately from the rods and cones needed for vision, are thought to help reset animals' body clocks, or circadian rhythms. Studies with people who are blind suggest this also happens in humans, although the evidence isn't conclusive.

To find out how melanopsin wakes up the brain, Gilles Vandewalle at the University of Liège, Belgium, and his team gave 16 people a 10-minute blast of blue or orange light while they performed a memory test in an fMRI scanner. They were then blindfolded for 70 minutes before being retested under a green light. People initially exposed to orange light had greater brain activity in several regions related to alertness and cognition when they were retested, compared with those pre-exposed to blue light.

Vandewalle thinks that melanopsin is acting as a kind of switch, sending different signals to the brain depending on its state. Orange light, which has the longer wavelength, is known to make the pigment more light-sensitive, but blue light has the opposite effect. Green light lies somewhere in the middle. The findings suggest that pre-exposure to orange light pushes the balance towards the more light-sensitive form of melanopsin, enhancing the response in the brain. © Copyright Reed Business Information Ltd.