Links for Keyword: Vision



Links 21 - 40 of 817

Simon Makin Running helps mice to recover from a type of blindness caused by sensory deprivation early in life, researchers report. The study, published on 26 June in eLife, also illuminates processes underlying the brain’s ability to rewire itself in response to experience — a phenomenon known as plasticity, which neuroscientists believe is the basis of learning. More than 50 years ago, neurophysiologists David Hubel and Torsten Wiesel cracked the 'code' used to send information from the eyes to the brain. They also showed that the visual cortex develops properly only if it receives input from both eyes early in life. If one eye is deprived of sight during this ‘critical period’, the result is amblyopia, or ‘lazy eye’, a state of near blindness. This can happen to someone born with a droopy eyelid, cataract or other defect not corrected in time. If the eye is opened in adulthood, recovery can be slow and incomplete. In 2010, neuroscientists Christopher Niell and Michael Stryker, both at the University of California, San Francisco (UCSF), showed that running more than doubled the response of mice's visual cortex neurons to visual stimulation (see 'Neuroscience: Through the eyes of a mouse'). Stryker says that it is probably more important, and taxing, to keep track of the environment when navigating it at speed, and that lower responsiveness at rest may have evolved to conserve energy in less-demanding situations. “It makes sense to put the visual system in a high-gain state when you’re moving through the environment, because vision tells you about far away things, whereas touch only tells you about things that are close,” he says. © 2014 Nature Publishing Group

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain; Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 7: Vision: From Eye to Brain; Chapter 5: The Sensorimotor System
Link ID: 19779 - Posted: 07.01.2014

by Sarah Zielinski Would you recognize a stop sign if it were a different shape, though still red and white? Probably, though there might be a bit of a delay. After all, your brain has long been trained to expect a red-and-white octagon to mean “stop.” The animal and plant world also uses colorful signals. And it would make sense if a species always used the same pattern to signal the same thing — like how we can identify western black widows by the distinctive red hourglass found on the adult spiders’ backs. But that doesn’t always happen. Even with really important signals, such as the ones that tell a predator, “Don’t eat me — I’m poisonous.” Consider the dyeing dart frog (Dendrobates tinctorius), which is found in lowland forests of the Guianas and Brazil. The backs of the 5-centimeter-long frogs are covered with a yellow-and-black pattern, which warns of their poisonous nature. But that pattern isn’t the same from frog to frog. Some are decorated with an elongated pattern; others have more complex, sometimes interrupted patterns. The difference in patterns should make it harder for predators to recognize the warning signal. So why is there such variety? Because the patterns aren’t always viewed on a static frog, and the different ways that the frogs move affect how predators see the amphibians, according to a study published June 18 in Biology Letters. Bibiana Rojas of Deakin University in Geelong, Australia, and colleagues studied the frogs in a nature reserve in French Guiana from February to July 2011. They found 25 female and 14 male frogs, following each for two hours from about 2.5 meters away, where the frog wouldn’t notice a scientist. As a frog moved, a researcher would follow, recording how far it went and in what direction. Each frog was then photographed. © Society for Science & the Public 2000 - 2013.

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 19767 - Posted: 06.25.2014

By HELENE STAPINSKI A few months ago, my 10-year-old daughter, Paulina, was suffering from a bad headache right before bedtime. She went to lie down and I sat beside her, stroking her head. After a few minutes, she looked up at me and said, “Everything in the room looks really small.” And I suddenly remembered: When I was young, I too would “see things far away,” as I once described it to my mother — as if everything in the room were at the wrong end of a telescope. The episodes could last anywhere from a few minutes to an hour, but they eventually faded as I grew older. I asked Paulina if this was the first time she had experienced such a thing. She shook her head and said it happened every now and then. When I was a little girl, I told her, it would happen to me when I had a fever or was nervous. I told her not to worry and that it would go away on its own. Soon she fell asleep, and I ran straight to my computer. Within minutes, I discovered that there was an actual name for what turns out to be a very rare affliction — Alice in Wonderland Syndrome. Episodes usually include micropsia (objects appear small) or macropsia (objects appear large). Some sufferers perceive their own body parts to be larger or smaller. For me, and Paulina, furniture a few feet away seemed small enough to fit inside a dollhouse. Dr. John Todd, a British psychiatrist, gave the disorder its name in a 1955 paper, noting that the misperceptions resemble Lewis Carroll’s descriptions of what happened to Alice. It’s also known as Todd’s Syndrome. Alice in Wonderland Syndrome is not an optical problem or a hallucination. Instead, it is most likely caused by a change in a portion of the brain, likely the parietal lobe, that processes perceptions of the environment. Some specialists consider it a type of aura, a sensory warning preceding a migraine. And the doctors confirmed that it usually goes away by adulthood. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 7: Vision: From Eye to Brain
Link ID: 19766 - Posted: 06.24.2014

By Gary Stix James DiCarlo: We all have this intuitive feel for what object recognition is. It’s the ability to discriminate your face from other faces, a car from other cars, a dog from a camel, that ability we all intuitively feel. But making progress in understanding how our brains are able to accomplish that is a very challenging problem and part of the reason is that it’s challenging to define what it is and isn’t. We take this problem for granted because it seems effortless to us. However, a computer vision person would tell you that this is an extremely challenging problem, because each object presents an essentially infinite number of images to your retina so you essentially never see the same image of each object twice. SA: It seems like object recognition is actually one of the big problems both in neuroscience and in the computational science of machine learning? DiCarlo: That’s right, and not only in machine learning but also in psychology or cognitive science, because the objects that we see are the sources in the world of what we use to build higher cognition, things like memory and decision-making. Should I reach for this, should I avoid it? Our brains can’t do what you would call higher cognition without these foundational elements that we often take for granted. SA: Maybe you can talk about what’s actually happening in the brain during this process. DiCarlo: It’s been known for several decades that there’s a portion of the brain, the temporal lobe down the sides of our head, that, when lost or damaged in humans and non-human primates, leads to deficits of recognition. So we had clues that that’s where these algorithms for object recognition are living. But just saying that part of your brain solves the problem is not really specific. It’s still a very large piece of tissue. Anatomy tells us that there’s a whole network of areas that exist there, and now the tools of neurophysiology and still more advanced tools allow us to go in and look more closely at the neural activity, especially in non-human primates. We can then begin to decipher the actual computations to the level that an engineer might, for instance, in order to emulate what’s going on in our heads. © 2014 Scientific American

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 7: Vision: From Eye to Brain; Chapter 14: Attention and Consciousness
Link ID: 19754 - Posted: 06.21.2014

By Adam Brimelow Health Correspondent, BBC News Researchers from Oxford University say they've made a breakthrough in developing smart glasses for people with severe sight loss. The glasses project enhanced images of nearby people and objects onto the lenses, providing a much clearer sense of surroundings. They have allowed some people to see their guide dogs for the first time. The Royal National Institute of Blind People says they could be "incredibly important". Lyn Oliver has a progressive eye disease which means she has very limited vision. Now 70, she was diagnosed with retinitis pigmentosa in her early twenties. She can spot movement but describes her sight as "smudged and splattered". Her guide dog Jess helps her find her way around - avoiding most obstacles and hazards - but can't convey other information about her surroundings. Lyn is one of nearly two million people in the UK with a sight problem which seriously affects their daily lives. Most, though, have at least some residual sight. Researchers at Oxford University have developed a way to enhance this - using smart glasses. They are fitted with a specially adapted 3D camera. (Image caption: dark spots across the retina, at the back of the eye, correspond with the extent of vision loss in retinitis pigmentosa.) The images are processed by computer and projected in real time onto the lenses, so people and objects nearby become bright and clearly defined. Lyn Oliver has tried some of the early prototypes, but the latest model marks a key stage in the project, offering greater clarity and detail than ever before. Dr Stephen Hicks, from the University of Oxford, who has led the project, says they are now ready to be taken from the research setting to be used in the home. BBC © 2014

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 19738 - Posted: 06.17.2014

By EVAN FLEISCHER In two labs some 50 miles apart in Israel, computer scientists and engineers are refining devices that employ tiny cameras as translators of sorts. For both teams, the goal is to give blind people a form of sight — or at least an experience analogous to sight. At Bar-Ilan University near Tel Aviv, where Zeev Zalevsky is head of the electro-optics program, these efforts have taken shape in the form of a smart contact lens. The device begins with a camera mounted on a pair of glasses, and the contact lens, Dr. Zalevsky explained, is embedded with an electrode that will produce an image of what is before the camera directly on the cornea. The image would be experienced in one of two ways: If an apple is placed before the camera, it could be “seen” either as the contour of an apple or as a Braille-like shape that a trained user would recognize as a representation of an apple. (Video by Reuters: contact lens could open new vistas for the blind.) Yevgeny Beiderman, a graduate student who worked with Dr. Zalevsky in testing the prototype, said: “The first time, the usage of the glasses feels strange. It takes at least a few attempts to start using it.” The image captured by Dr. Zalevsky’s device is 110 by 110 pixels — hardly photograph-quality resolution, but Dr. Zalevsky said by email that the camera captures several images in time, and the compressed and encoded result “is enough to allow functionality to the blind person (for example: Braille contains only six points and is enough for reading.)” Dr. Zalevsky is awaiting permission from a hospital to test the electrode lens on people, so in the meantime he has conducted preliminary trials using lenses that apply air pressure to the cornea instead. He has also conducted tests in which participants identified various shapes based on electrical stimulation of the tongue, after the same sort of training that would let someone wearing his lens “see” an apple as a Braille-like pattern. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 19723 - Posted: 06.12.2014

By C. CLAIBORNE RAY Q. Does the slit shape of a cat’s pupil confer any advantages over the more rounded pupils of other animals? A. “There are significant advantages,” said Dr. Richard E. Goldstein, chief medical officer of the Animal Medical Center in New York City. “A cat can quickly adjust to different lighting conditions, control the amount of light that reaches the eye and see in almost complete darkness,” he said. “Moreover, the slit shape protects the sensitive retina in daylight.” The slit-shaped pupil found in many nocturnal animals, including the domestic cat, presumably allows more effective control of how much light reaches the retina, in terms of both speed and completeness. “A cat has the capacity to alter the intensity of light falling on its retina 135-fold, compared to tenfold in a human, with a circular pupil,” Dr. Goldstein said. “A cat’s eye has a large cornea, which allows more light into the eye, and a slit pupil can dilate more than a round pupil, allowing more light to enter in dark conditions.” Cats have other visual advantages as well, Dr. Goldstein said. A third eyelid, between the regular eyelids and the cornea, protects the globe and also has a gland at the bottom that produces extra tears. The eyes’ location, facing forward in the front of the skull, gives cats a large area of binocular vision, providing depth perception and helping them to catch prey. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 19706 - Posted: 06.07.2014

By Christie Nicholson Conventional wisdom once had it that each brain region is responsible for a specific task. And so we have the motor cortex for handling movements, and the visual cortex for processing sight. And scientists thought that such regions remained fixed for those tasks beyond the age of three. But within the past decade researchers have realized that some brain regions can pinch hit for other regions, for example, after a damaging stroke. And now new research finds that the visual cortex is constantly doing double duty—it has a role in processing not just sight, but sound. When we hear [siren sound], we see a siren. In the study, scientists scanned the brains of blindfolded participants as the subjects listened to three sounds: [audio of birds, audio of traffic, audio of a talking crowd.] And the scientists could tell what specific sounds the subjects were hearing just by analyzing the brain activity in the visual cortex. [Petra Vetter, Fraser W. Smith and Lars Muckli, Decoding Sound and Imagery Content in Early Visual Cortex, in Current Biology] The next step is to determine why the visual cortex is horning in on the audio action. The researchers think the additional role conferred an evolutionary advantage: having a visual system primed by sound to see the source of that sound could have given humans an extra step in the race for survival. © 2014 Scientific American
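Telling which sound a subject heard from visual-cortex activity is, in general terms, a pattern-classification (decoding) problem: a classifier is trained on labeled activity patterns and tested on held-out data. The sketch below is a generic illustration of that idea on synthetic data only; it is not the study's actual analysis pipeline, and all numbers, names and parameters are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials_per_sound, n_voxels, n_sounds = 30, 200, 3

# Synthetic "visual cortex" data: each sound category (e.g. birds, traffic,
# crowd) evokes its own weak but consistent spatial pattern, buried in noise.
prototypes = rng.normal(size=(n_sounds, n_voxels))
X = np.vstack([0.5 * prototypes[k] + rng.normal(size=(n_trials_per_sound, n_voxels))
               for k in range(n_sounds)])
y = np.repeat(np.arange(n_sounds), n_trials_per_sound)

# Cross-validated decoding: accuracy reliably above chance (1/3) means the
# activity patterns carry information about which sound was heard.
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance = {1 / n_sounds:.2f})")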

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain; Chapter 9: Hearing, Vestibular Perception, Taste, and Smell
Related chapters from MM:Chapter 7: Vision: From Eye to Brain; Chapter 6: Hearing, Balance, Taste, and Smell
Link ID: 19685 - Posted: 06.03.2014

By Susana Martinez-Conde Expanding and contracting circles, mutating colors, and false image matches dominated the 2014 Best Illusion of the Year Contest, held on May 18th at the TradeWinds Island Grand in St. Petersburg, FL. One thousand perceptual scientists joined artists and the general public to determine the TOP THREE illusion masters from a pre-selected group of TOP TEN finalists, chosen by an international committee of judges. Each winner took home a trophy designed by the acclaimed Italian sculptor Guido Moretti: the trophies are visual illusions themselves. It was the 10th annual edition of the contest, which draws numerous accolades from attendees as well as international media coverage each year. Las Vegas magician Mac King was master of ceremonies for the event, hosted by the Neural Correlate Society, a non-profit organization whose mission is to promote public awareness of neuroscience research and discovery, and sponsored by Scientific American. Each of the 10 presenters displayed and described their creations for 5 minutes, to the sounds of music and confetti cannons, in an event unlike anything else in science. Afterwards, the audience voted on their favorite illusion while Mac King performed some of his signature magic tricks. The First Prize winner of the contest, an illusion by Christopher Blair, Gideon Caplovitz and Ryan Mruczek from the University of Nevada, Reno, took the classical Ebbinghaus illusion, where the perceived size of a central circle varies with the size of surrounding circles, and put it on steroids by making it into an ever-changing dynamic display. Blair rhymed his 5-minute presentation Dr. Seuss-style. Second Prize went to Mark Vergeer, Stuart Anstis and Rob van Lier from the University of Leuven, UC San Diego and Radboud University Nijmegen, for showing that a single colored image can produce several different color perceptions depending on the position of black outlines over the image. © 2014 Scientific American

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 19664 - Posted: 05.28.2014

By JAMES GORMAN H. Sebastian Seung is a prophet of the connectome, the wiring diagram of the brain. In a popular book, debates and public talks he has argued that in that wiring lies each person’s identity. By wiring, Dr. Seung means the connections from one brain cell to another, seen at the level of the electron microscope. For a human, that would be 85 billion brain cells, with up to 10,000 connections for each one. The amount of information in the three-dimensional representation of the whole connectome at that level of detail would equal a zettabyte, a term only recently invented when the amount of digital data accumulating in the world required new words. It equals about a trillion gigabytes, or as one calculation framed it, 75 billion 16-gigabyte iPads. He is also a realist. When he speaks publicly, he tells his audiences, “I am my connectome.” But he can be brutally frank about the limitations of neuroscience. “We’ve failed to answer simple questions,” he said. “People want to know, ‘What is consciousness?’ And they think that neuroscience is up to understanding that. They want us to figure out schizophrenia and we can’t even figure out why this neuron responds to one direction and not the other.” This mix of intoxicating ideas, and the profound difficulties of testing them, not only defines Dr. Seung’s career but the current state of neuroscience itself. He is one of the stars of the field, and yet his latest achievement, in a paper published this month, is not one that will set the world on fire. He and his M.I.T. colleagues have proposed an explanation of how a nerve cell in the mouse retina — the starburst amacrine cell — detects the direction of motion. If he’s right, this is significant work. But it may not be what the public expects, and what they have been led to expect, from the current push to study the brain. © 2014 The New York Times Company
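The storage figures quoted above are easy to sanity-check. Below is a minimal back-of-envelope sketch in Python; it assumes the decimal definitions (1 zettabyte = 10^21 bytes, 1 gigabyte = 10^9 bytes), and the article's "75 billion iPads" presumably rounds from a somewhat larger size estimate, so only the order of magnitude matters here.

# Back-of-envelope check of the connectome storage figures quoted above.
# Assumes decimal units: 1 zettabyte = 1e21 bytes, 1 gigabyte = 1e9 bytes.
neurons = 85e9                  # roughly 85 billion neurons
synapses = neurons * 10_000     # up to 10,000 connections per neuron
zettabyte = 1e21                # bytes
gigabyte = 1e9                  # bytes
ipad = 16 * gigabyte            # a 16-gigabyte iPad

print(f"connections: about {synapses:.1e}")                     # ~8.5e14 synapses
print(f"gigabytes in a zettabyte: {zettabyte / gigabyte:.0e}")  # 1e12, i.e. a trillion
print(f"16-GB iPads per zettabyte: {zettabyte / ipad:.1e}")     # ~6.2e10, tens of billions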

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 7: Vision: From Eye to Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 19663 - Posted: 05.26.2014

By JAMES GORMAN Crowd-sourced science has exploded in recent years. Foldit enlists users to help solve scientific puzzles such as how proteins are put together. Zooniverse hosts dozens of projects, including searching for planets and identifying images of animals caught on automatic cameras. Eyewire, which came out of H. Sebastian Seung’s lab at the Massachusetts Institute of Technology about a year and a half ago, is neuroscience’s entry into the field. The EyeWirers, as the players are called, have scored their first scientific success, contributing to a paper in the May 4 issue of Nature by Dr. Seung and his M.I.T. colleagues that offers a solution to a longstanding problem in how motion is detected. Anyone can sign up online, learn to use the software and start working on what Amy Robinson, the creative director of Eyewire, calls a “3-D coloring book.” The task is something like tracing one piece of yarn through an extremely tangled ball. More than 130,000 players in 145 countries, at last count, work on a cube that represents a bit of retinal tissue 4.5 microns on a side. The many branches of neurons are densely packed within. A micron is .00004 inches or, in Eyewire’s calculus, about one-tenth the width of a human hair. Some of the players spend upward of 40 hours a week on Eyewire. These cubes are created by an automated process in which electron microscopes make images of ultrathin slices of brain tissue. Computers then analyze and compile the data to create a three-dimensional representation of a cube of tissue with every neuron and connection visible. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain; Chapter 2: Functional Neuroanatomy: The Nervous System and Behavior
Related chapters from MM:Chapter 7: Vision: From Eye to Brain; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 19662 - Posted: 05.26.2014

By KATE MURPHY The baseball hurtles toward the batter, and he must decide from its rotation whether it’s a fastball worth a swing or a slider about to drop out of the strike zone. Running full speed, the wide receiver tracks both the football flying through the air and the defensive back on his heels. Golfers must rapidly shift visual focus in order to drive the ball at their feet toward a green in the distance. Many athletes need excellent vision to perform well in their sports, and now many are adding something new to their practice regimens: vision training. The idea has been around for years, but only recently have studies hinted that it might really work — that it might be possible to train yourself to see better without resorting to glasses or surgery. “Vision training has been out there for a long time,” said Mark Blumenkranz, a professor of ophthalmology at Stanford University Medical School. “But it’s being made more respectable lately thanks to the attention it’s been getting from psychophysicists, vision scientists, neurologists and optometrists.” Vision training actually has little to do with improving eyesight. The techniques, a form of perceptual learning, are intended to improve the ability to process what is seen. The idea is that if visual sensory neurons are repeatedly activated, they increase their ability to send electrical signals from one cell to another across connecting synapses. If neurons are not used, over time these transmissions are weakened. “With sensory neurons, just like muscles, it’s use or lose it,” said Dr. Bernhard Sabel, a neuroscientist at Otto von Guericke University in Magdeburg, Germany, who studies plasticity in the brain. “This applies both to athletes and the partially blind.” Vision training may involve simple strategies — for instance, focusing sequentially on beads knotted at intervals on a length of string with one end held at the tip of the nose. This is said to improve convergence (inward turning of the eye to maintain binocular vision) and the ability to focus near and far. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 7: Vision: From Eye to Brain; Chapter 13: Memory, Learning, and Development
Link ID: 19659 - Posted: 05.26.2014

By Ariel Van Brummelen The presence of light may do more for us than merely allow for sight. A study by Gilles Vandewalle and his colleagues at the University of Montreal suggests that light affects important brain functions—even in the absence of vision. Previous studies have found that certain photoreceptor cells located in the retina can detect light even in people who do not have the ability to see. Yet most studies suggested that at least 30 minutes of light exposure is needed to significantly affect cognition via these nonvisual pathways. Vandewalle's study, which involved three completely blind participants, found that just a few seconds of light altered brain activity, as long as the brain was engaged in active processing rather than at rest. First the experimenters asked their blind subjects whether a blue light was on or off, and the subjects answered correctly at a rate significantly higher than random chance—even though they confirmed they had no conscious perception of the light. Using functional MRI, the researchers then determined that less than a minute of blue light exposure triggered changes in activity in regions of their brain associated with alertness and executive function. Finally, the scientists found that if the subjects received simultaneous auditory stimulation, a mere two seconds of blue light was enough to modify brain activity. The researchers think the noise engaged active sensory processing, which allowed the brain to respond to the light much more quickly than in previous studies when subjects rested while being exposed to light. The results confirm that the brain can detect light in the absence of working vision. They also suggest that light can quickly alter brain activity through pathways unrelated to sight. The researchers posit that this nonvisual light sensing may aid in regulating many aspects of human brain function, including sleep/wake cycles and threat detection. © 2014 Scientific American

Related chapters from BP7e: Chapter 14: Biological Rhythms, Sleep, and Dreaming; Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 10: Biological Rhythms and Sleep; Chapter 7: Vision: From Eye to Brain
Link ID: 19577 - Posted: 05.06.2014

Mo Costandi A vast project to map neural connections in the mouse retina may have answered the long-standing question of how the eyes detect motion. With the help of volunteers who played an online brain-mapping game, researchers showed that pairs of neurons positioned along a given direction together cause a third neuron to fire in response to images moving in the same direction. It is sometimes said that we see with the brain rather than the eyes, but this is not entirely true. People can only make sense of visual information once it has been interpreted by the brain, but some of this information is processed partly by neurons in the retina. In particular, 50 years ago researchers discovered that the mammalian retina is sensitive to the direction and speed of moving images. This showed that motion perception begins in the retina, but researchers struggled to explain how. When light enters the eye, it is captured by photoreceptor cells, which convert the information into electrical impulses and transmit them to deeper layers of the retina. Individual photoreceptors are not sensitive to the direction in which an object may be moving, so neuroscientist Jinseop Kim, of the Massachusetts Institute of Technology (MIT) in Cambridge, and his colleagues wanted to test whether the answer to the puzzle could lie in the way various types of cells in the retina are connected. Photoreceptors relay their signals via ‘bipolar neurons’, named this way because they have two stems that jut out of the cell's body in opposite directions. The signal then transits through ‘starburst amacrine cells’ — which have filaments, or dendrites, that extend in all directions similarly to light rays out of a star — before reaching the cells that form the optic nerve, which relays them into the brain. © 2014 Nature Publishing Group
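The mechanism sketched above, in which paired inputs offset along a direction converge on a third cell that fires for motion in that direction, follows the same general logic as the classic "delay-and-correlate" model of motion detection. Below is a minimal, illustrative Python sketch of that general principle only; it is not the specific bipolar-cell and starburst-amacrine-cell wiring reported in the paper, and all names and parameters are assumptions.

import numpy as np

def moving_bar(n_positions=40, n_steps=40, direction=+1):
    # Binary "movie" of a bright bar sweeping across a 1-D strip of photoreceptors,
    # one position per time step, moving rightward (+1) or leftward (-1).
    movie = np.zeros((n_steps, n_positions))
    for t in range(n_steps):
        pos = t if direction > 0 else n_positions - 1 - t
        movie[t, pos] = 1.0
    return movie

def correlator_response(movie, x1=10, x2=12, delay=2):
    # Delay the signal from the photoreceptor at x1, multiply it by the undelayed
    # signal at x2, and sum over time. Motion from x1 toward x2 at the matching
    # speed makes the two signals coincide, so the summed product is large;
    # motion the other way produces little or no response.
    a = movie[:, x1]
    b = movie[:, x2]
    delayed_a = np.concatenate([np.zeros(delay), a[:-delay]])
    return float(np.sum(delayed_a * b))

rightward = moving_bar(direction=+1)
leftward = moving_bar(direction=-1)
print("preferred-direction response:", correlator_response(rightward))  # large (here 1.0)
print("null-direction response:     ", correlator_response(leftward))   # zero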

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 19576 - Posted: 05.05.2014

Associated Press Diagnosed with retinitis pigmentosa as a teenager, Pontz has been almost completely blind for years. Now, thanks to a high-tech procedure that involved the surgical implantation of a "bionic eye," he has regained enough of his eyesight to catch small glimpses of his wife, grandson and cat. "It's awesome. It's exciting - seeing something new every day," Pontz said during a recent appointment at the University of Michigan Kellogg Eye Center. The 55-year-old former competitive weightlifter and factory worker is one of four people in the U.S. to receive an artificial retina since the Food and Drug Administration signed off on its use last year. The facility in Ann Arbor has been the site of all four such surgeries since FDA approval. A fifth is scheduled for next month. Retinitis pigmentosa is an inherited disease that causes slow but progressive vision loss due to a gradual loss of the light-sensitive retinal cells called rods and cones. Patients experience loss of side vision and night vision, then central vision, which can result in near blindness. Not all of the 100,000 or so people in the U.S. with retinitis pigmentosa can benefit from the bionic eye. An estimated 10,000 have vision low enough, said Dr. Brian Mech, an executive with Second Sight Medical Products Inc., the Sylmar (Los Angeles County) company that makes the device. Of those, about 7,500 are eligible for the surgery. The artificial implant in Pontz's left eye is part of a system developed by Second Sight that includes a small video camera and transmitter housed in a pair of glasses. © 2014 Hearst Communications, Inc.

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 19527 - Posted: 04.24.2014

By Stephen L. Macknik and Susana Martinez-Conde The Best Illusion of the Year Contest brings scientific and popular attention to perceptual oddities. Anyone can submit an illusion to next year's contest; see http://illusionoftheyear.com/submission-instructions for the rules. Decked out in a mask, cape and black spandex, a fit young man leaps onto the stage, one hand raised high, and bellows, “I am Japaneeeese Bat-Maaaaaan!” in a thick accent. The performer is neither actor nor acrobat. He is a mathematician named Jun Ono, hailing from Meiji University in Japan. Ono's single bound, front and center, at the Philharmonic Center for the Arts in Naples, Fla. (now called Artis-Naples), was the opening act of the ninth Best Illusion of the Year Contest, held May 13, 2013. Four words into the event, we knew Ono had won. Aside from showcasing new science, the contest celebrates our brain's wonderful and mistaken sense that we can accurately see, smell, hear, taste and touch the world around us. In reality, accuracy is not the brain's forte, as the illusion creators competing each year will attest. Yes, there is a real world out there, and you do perceive (some of) the events that occur around you, but you have never actually lived in reality. Instead your brain gathers pieces of data from your sensory systems—some of which are quite subjective or frankly wrong—and builds a simulation of the world. This simulation, which some call consciousness, becomes the universe in which you live. It is the only thing you have ever perceived. Your brain uses incomplete and flawed information to build this mental model and relies on quirky neural algorithms to often—but not always—obviate the flaws. Let us take a spin through some of the world's top illusions and their contributions to the science of perception. (To see videos of these illusions, see ScientificAmerican.com/may2014/illusions.) © 2014 Scientific American

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 7: Vision: From Eye to Brain; Chapter 14: Attention and Consciousness
Link ID: 19525 - Posted: 04.23.2014

Mégevand P et al., Journal of Neuroscience (2014) Close your eyes and imagine home. Sharp details—such as the shape of the front doorknob, the height of the windows, or the paint color—assemble in your mind with a richness that seems touchable. A new study has found where this mental projection lives in the brain by inducing hallucinations in an epilepsy patient. A 22-year-old male was receiving deep brain stimulation to isolate where his daily seizures originated. His disorder appeared after he caught West Nile virus at the age of 10 and subsequently suffered from brain inflammation. His episodes were always preceded by intense déjà vu, suggesting a visual component of his disease, but he had no history of hallucinations. Brain scans revealed a shrunken spot near his hippocampus—the brain’s memory center. Studies had shown that this region—known as the parahippocampal place area (PPA)—was involved in recognizing scenes and places. Doctors reconfirmed this by showing the patient pictures of a house and seeing the PPA light up on brain scans with functional magnetic resonance imaging. Thin wire electrodes—less than 2 mm thick—placed in the PPA recorded similar brain activity while the patient viewed these pictures. To assess whether the PPA was ground zero for seizures, the doctors used a routine procedure that involves shooting soft jolts of electricity into the region and seeing if the patient senses an oncoming seizure. Rather than have déjà vu, the patient’s surroundings suddenly changed as he hallucinated places familiar to him. In one instance, the doctors morphed into the Italians from his local pizza place. Zapping a nearby cluster of neurons produced a vision of his subway station. The findings, published on 16 April in The Journal of Neuroscience, confirm that this small corner of the brain is not only responsible for recognizing places, but is also crucial to recalling a mental vision of that place. © 2014 American Association for the Advancement of Science

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 19499 - Posted: 04.17.2014

By Ariel Van Brummelen The presence of light may do more for us than merely allow for sight. A study by Gilles Vandewalle and his colleagues at the University of Montreal suggests that light affects important brain functions—even in the absence of vision. Previous studies have found that certain photoreceptor cells located in the retina can detect light even in people who do not have the ability to see. Yet most studies suggested that at least 30 minutes of light exposure is needed to significantly affect cognition via these nonvisual pathways. Vandewalle's study, which involved three completely blind participants, found that just a few seconds of light altered brain activity, as long as the brain was engaged in active processing rather than at rest. First the experimenters asked their blind subjects whether a blue light was on or off, and the subjects answered correctly at a rate significantly higher than random chance—even though they confirmed they had no conscious perception of the light. Using functional MRI, the researchers then determined that less than a minute of blue light exposure triggered changes in activity in regions of their brain associated with alertness and executive function. Finally, the scientists found that if the subjects received simultaneous auditory stimulation, a mere two seconds of blue light was enough to modify brain activity. The researchers think the noise engaged active sensory processing, which allowed the brain to respond to the light much more quickly than in previous studies when subjects rested while being exposed to light. The results confirm that the brain can detect light in the absence of working vision. They also suggest that light can quickly alter brain activity through pathways unrelated to sight. The researchers posit that this nonvisual light sensing may aid in regulating many aspects of human brain function, including sleep/wake cycles and threat detection. © 2014 Scientific American

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain; Chapter 14: Biological Rhythms, Sleep, and Dreaming
Related chapters from MM:Chapter 7: Vision: From Eye to Brain; Chapter 10: Biological Rhythms and Sleep
Link ID: 19482 - Posted: 04.14.2014

By GRETCHEN REYNOLDS Age-related vision loss is common and devastating. But new research suggests that physical activity might protect our eyes as we age. There have been suggestions that exercise might reduce the risk of macular degeneration, which occurs when neurons in the central part of the retina deteriorate. The disease robs millions of older Americans of clear vision. A 2009 study of more than 40,000 middle-aged distance runners, for instance, found that those covering the most miles had the least likelihood of developing the disease. But the study did not compare runners to non-runners, limiting its usefulness. It also did not try to explain how exercise might affect the incidence of an eye disease. So, more recently, researchers at Emory University in Atlanta and the Atlanta Veterans Administration Medical Center in Decatur, Ga., took up that question for a study published last month in The Journal of Neuroscience. Their interest was motivated in part by animal research at the V.A. medical center. That work had determined that exercise increases the levels of substances known as growth factors in the animals’ bloodstream and brains. These growth factors, especially one called brain-derived neurotrophic factor, or B.D.N.F., are known to contribute to the health and well-being of neurons and consequently, it is thought, to improvements in brain health and cognition after regular exercise. But the brain is not the only body part to contain neurons, as the researchers behind the new study knew. The retina does as well, and the researchers wondered whether exercise might raise levels of B.D.N.F. there, too, potentially affecting retinal health and vision. © 2014 The New York Times Company

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 19451 - Posted: 04.07.2014

Visual illusions, such as the rabbit-duck and the café wall, are fascinating because they remind us of the discrepancy between perception and reality. But our knowledge of such illusions has been largely limited to studying humans. That is now changing. There is mounting evidence that other animals can fall prey to the same illusions. Understanding whether these illusions arise in different brains could help us understand how evolution shapes visual perception. For neuroscientists and psychologists, illusions not only reveal how visual scenes are interpreted and mentally reconstructed, but also highlight constraints in our perception. They can take hundreds of different forms and can affect our perception of size, motion, colour, brightness, 3D form and much more. Artists, architects and designers have used illusions for centuries to distort our perception. Some of the most common types of illusory percepts are those that affect the impression of size, length or distance. For example, Ancient Greek architects designed columns for buildings so that they tapered and narrowed towards the top, creating the impression of a taller building when viewed from the ground. This type of illusion is called forced perspective, commonly used in ornamental gardens and stage design to make scenes appear larger or smaller. As visual processing needs to be both rapid and generally accurate, the brain constantly uses shortcuts and makes assumptions about the world that can, in some cases, be misleading. For example, the brain uses assumptions and the visual information surrounding an object (such as light level and presence of shadows) to adjust the perception of colour accordingly. © 2014 Guardian News and Media Limited

Related chapters from BP7e: Chapter 10: Vision: From Eye to Brain
Related chapters from MM:Chapter 7: Vision: From Eye to Brain
Link ID: 19398 - Posted: 03.22.2014