Chapter 10. Vision: From Eye to Brain



By Mitch Leslie Like many animals, you couldn’t see without proteins called opsins, which dwell in the light-sensitive cells of your eyes. A new study reveals for the first time that fruit flies can also use some of these proteins, nestled at the tip of their nose, to taste noxious molecules in their food. Opsins in our bodies could also serve the same function, researchers speculate. The results are “paradigm shifting,” says sensory biologist Phyllis Robinson of the University of Maryland, Baltimore County, who wasn’t connected to the research. The most famous opsin forms the backbone of rhodopsin, the pigment in eye cells known as rods that allow you to see in low light. Your cone cells, which permit vision in bright light, harbor different opsins. Altogether, researchers have uncovered about 1000 other varieties of the proteins in various animals and microbes since rhodopsin was discovered more than 150 years ago. But the opsin molecular family still offers some surprises, notes neuroscientist Craig Montell of the University of California, Santa Barbara. A handful of studies, including one in 2011 by Montell and his team, have implicated opsins in hearing, touch, and temperature detection. Montell and colleagues wanted to determine whether any opsins play a role in taste—specifically, whether flies use them to detect a bitter molecule they are known to dislike. The researchers set up a taste test for unmodified Drosophila melanogaster fruit flies and for seven strains that had been genetically altered to each lack a different opsin. All of the flies had the choice between two sugar solutions, one of which was spiked with the bitter compound. © 2020 American Association for the Advancement of Science

Keyword: Vision; Chemical Senses (Smell & Taste)
Link ID: 27166 - Posted: 04.03.2020

Richard Masland The eye is something like a camera, but there is a whole lot more to vision than that. One profound difference is that our vision, like the rest of our senses, is malleable and modifiable by experience. Take the commonplace observation that people deprived of one sense may have a compensatory increase in others — for example, that blind people have heightened senses of hearing and touch. A skeptic could say that this was just a matter of attention, concentration and practice at the task, rather than a true sensory improvement. Indeed, experiments show that a person’s sensory acuity can achieve major improvement with practice. Yet with modern methodologies, neuroscientists have conclusively shown that brain circuits do physically change. Our senses are malleable because the sensory centers of the brain rewire themselves to strike a useful balance between the capacities of the available neural resources and the demands put on them by incoming sensory impressions. Studies of this phenomenon are revealing that some sensory areas have innate tendencies toward certain functions, but they show just as powerfully the plasticity of the developing brain. Take a rat that has been deprived of vision since birth — let’s say because of damage to both retinas. When the rat grows up, you train that rat to run a maze. Then you damage the visual cortex slightly. You ask the rat to run the maze again and compare its time before the operation and after. In principle, damaging the visual cortex should not do anything to the maze-running ability of that blind rat. But the classic experimental finding made decades ago by Karl Lashley of Yerkes Laboratories of Primate Biology and others is that the rat’s performance gets worse, suggesting that the visual cortex in the blind rat was contributing something, although we do not know what it was. All Rights Reserved © 2020

Keyword: Vision; Development of the Brain
Link ID: 27143 - Posted: 03.25.2020

By Alex Fox Deposits at the back of the eye of a mineral normally found in tooth enamel could be hastening the progression of age-related macular degeneration, the leading cause of deteriorating eyesight in people over 50. Now researchers have identified a protein called amelotin that experiments suggest is involved in producing the mineral deposits that are the hallmark of “dry” age-related macular degeneration, the more common of the disease’s two forms. Age-related macular degeneration, or AMD, affects about 3 million people in the United States. While the “wet” form of AMD, which comprises up to 30 percent of AMD cases, can be treated with injections, there are currently no treatments for dry AMD. But the new finding, if confirmed, could change that. “Finding amelotin in these deposits makes it a target to try to slow the progression of mineralization, which, if it’s borne out, could result in new therapies,” says Imre Lengyel, an ophthalmologist at Queen’s University Belfast in Northern Ireland who was not involved in the research. These deposits, first documented in 2015, are made of a type of mineralized calcium called hydroxyapatite and appear beneath the retinal pigment epithelium — a layer of cells just outside the retina that keeps its light-sensing rods and cones happy and healthy. The deposits may worsen vision by blocking the flow of oxygen and nutrients needed to nourish those light-sensitive cells of the retina. By contrast, in wet AMD abnormal blood vessels intrude into the retina and often leak. Both types of AMD distort a person’s central vision — the focused, detailed sight needed for reading and recognizing faces — which can make independent living difficult. © Society for Science & the Public 2000–2020

Keyword: Vision
Link ID: 27136 - Posted: 03.24.2020

A protein that normally deposits mineralized calcium in tooth enamel may also be responsible for calcium deposits in the back of the eye in people with dry age-related macular degeneration (AMD), according to a study from researchers at the National Eye Institute (NEI). This protein, amelotin, may turn out to be a therapeutic target for the blinding disease. The findings were published in the journal Translational Research. NEI is part of the National Institutes of Health. “Using a simple cell culture model of retinal pigment epithelial cells, we were able to show that amelotin gets turned on by a certain kind of stress and causes formation of a particular kind of calcium deposit also seen in bones and teeth. When we looked in human donor eyes with dry AMD, we saw the same thing,” said Graeme Wistow, Ph.D., chief of the NEI Section on Molecular Structure and Functional Genomics, and senior author of the study. There are two forms of AMD – wet and dry. While there are treatments that can slow the progression of wet AMD, there are currently no treatments for dry AMD, also called geographic atrophy. In dry AMD, deposits of cholesterol, lipids, proteins, and minerals accumulate at the back of the eye. Some of these deposits are called soft drusen and have a specific composition, different from deposits found in wet AMD. Drusen form under the retinal pigment epithelium (RPE), a layer of cells that transports nutrients from the blood vessels below to support the light-sensing photoreceptors of the retina above them. As the drusen develop, the RPE and eventually the photoreceptors die, leading to blindness. The photoreceptors cannot grow back, so the blindness is permanent.

Keyword: Vision
Link ID: 27118 - Posted: 03.14.2020

Heidi Ledford A person with a genetic condition that causes blindness has become the first to receive a CRISPR–Cas9 gene therapy administered directly into their body. The treatment is part of a landmark clinical trial to test the ability of CRISPR–Cas9 gene-editing techniques to remove mutations that cause a rare condition called Leber’s congenital amaurosis 10 (LCA10). No treatment is currently available for the disease, which is a leading cause of blindness in childhood. For the latest trial, the components of the gene-editing system, encoded in the genome of a virus, are injected directly into the eye, near photoreceptor cells. By contrast, previous CRISPR–Cas9 clinical trials have used the technique to edit the genomes of cells that have been removed from the body. The material is then infused back into the patient. “It’s an exciting time,” says Mark Pennesi, a specialist in inherited retinal diseases at Oregon Health & Science University in Portland. Pennesi is collaborating with the pharmaceutical companies Editas Medicine of Cambridge, Massachusetts, and Allergan of Dublin to conduct the trial, which has been named BRILLIANCE. This is not the first time gene editing has been tried in the body: an older gene-editing system, called zinc-finger nucleases, has already been administered directly into people participating in clinical trials. Sangamo Therapeutics of Brisbane, California, has tested a zinc-finger-based treatment for a metabolic condition called Hunter’s syndrome. The technique inserts a healthy copy of the affected gene into a specific location in the genome of liver cells. Although it seems to be safe, early results suggest it might do little to ease the symptoms of Hunter’s syndrome. © 2020 Springer Nature Limited

Keyword: Vision; Genes & Behavior
Link ID: 27100 - Posted: 03.06.2020

By Simon Makin Neuroscientists understand much about how the human brain is organized into systems specialized for recognizing faces or scenes or for other specific cognitive functions. The questions that remain relate to how such capabilities arise. Are these networks—and the regions comprising them—already specialized at birth? Or do they develop these sensitivities over time? And how might structure influence the development of function? “This is an age-old philosophical question of how knowledge is organized,” says psychologist Daniel Dilks of Emory University. “And where does it come from? What are we born with, and what requires experience?” Dilks and his colleagues addressed these questions in an investigation of neural connectivity in the youngest humans studied in this context to date: 30 infants ranging from six to 57 days old (with an average age of 27 days). Their findings suggest that circuit wiring precedes, and thus may guide, regional specialization, shedding light on how knowledge systems emerge in the brain. Further work along these lines may provide insight into neurodevelopmental disorders such as autism. In the study, published Monday in Proceedings of the National Academy of Sciences USA, the researchers looked at two of the best-studied brain networks dedicated to a particular visual function—one that underlies face recognition and another that processes scenes. The occipital face area and fusiform face area selectively respond to faces and are highly connected in adults, suggesting they constitute a face-recognition network. The same description applies to the parahippocampal place area and retrosplenial complex but for scenes. All four of these areas are in the inferior temporal cortex, which is behind the ear in humans. © 2020 Scientific American,

Keyword: Development of the Brain; Vision
Link ID: 27088 - Posted: 03.03.2020

By Sara Reardon To many people’s eyes, artist Mark Rothko’s enormous paintings are little more than swaths of color. Yet a Rothko can fetch nearly $100 million. Meanwhile, Pablo Picasso’s warped faces fascinate some viewers and terrify others. Why do our perceptions of beauty differ so widely? The answer may lie in our brain networks. Researchers have now developed an algorithm that can predict art preferences by analyzing how a person’s brain breaks down visual information and decides whether a painting is “good.” The findings show for the first time how intrinsic features of a painting combine with human judgment to give art value in our minds. Most people—including researchers—consider art preferences to be all over the map, says Anjan Chatterjee, a neurologist and cognitive neuroscientist at the University of Pennsylvania who was not involved in the study. Many preferences are rooted in biology: sugary foods, for instance, help us survive. And people tend to share similar standards of beauty when it comes to human faces and landscapes. But when it comes to art, “There are relatively arbitrary things we seem to care about and value,” Chatterjee says. To figure out how the brain forms value judgments about art, computational neuroscientist Kiyohito Iigaya and his colleagues at the California Institute of Technology first asked more than 1300 volunteers on the crowdsourcing website Amazon Mechanical Turk to rate a selection of 825 paintings from four Western genres: impressionism, cubism, abstract art, and color field painting. Volunteers were all over the age of 18, but researchers didn’t specify their familiarity with art or their ethnic or national origin. © 2020 American Association for the Advancement of Science
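The article does not reproduce the team's actual model, but the core idea it describes, that a painting's subjective value can be predicted as a weighted combination of its visual features, can be sketched with ordinary least squares. Everything in this example beyond the 825-painting count (the four feature columns, the hidden weights, the random data) is an invented stand-in for illustration, not the study's data or code.

```python
import numpy as np

# Hypothetical setup: each of the 825 paintings is summarized by a few
# low-level visual features (stand-ins; the study's real features are
# not reproduced here). One volunteer's ratings are modeled as a
# weighted sum of those features.
rng = np.random.default_rng(0)
features = rng.random((825, 4))             # 825 paintings x 4 features
hidden_w = np.array([0.2, -0.5, 0.8, 0.4])  # the volunteer's unknown weights
ratings = features @ hidden_w               # observed ratings (noiseless toy)

# Recover the weights by least squares; the fitted model can then
# predict the same person's preference for unseen paintings.
w, residuals, rank, sv = np.linalg.lstsq(features, ratings, rcond=None)
predictions = features @ w
```

With real, noisy ratings the recovered weights would only approximate the hidden ones, and the interesting scientific question is which features carry most of the weight across people.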

Keyword: Vision; Attention
Link ID: 27062 - Posted: 02.21.2020

By Viviane Callier In 1688 Irish philosopher William Molyneux wrote to his colleague John Locke with a puzzle that continues to draw the interest of philosophers and scientists to this day. The idea was simple: Would a person born blind, who has learned to distinguish objects by touch, be able to recognize them purely by sight if he or she regained the ability to see? The question, known as Molyneux’s problem, probes whether the human mind has a built-in concept of shapes that is so innate that such a blind person could immediately recognize an object with restored vision. The alternative is that the concepts of shapes are not innate but have to be learned by exploring an object through sight, touch and other senses, a process that could take a long time when starting from scratch. An attempt was made to resolve this puzzle a few years ago by testing Molyneux's problem in children who were congenitally blind but then regained their sight, thanks to cataract surgery. Although the children were not immediately able to recognize objects, they quickly learned to do so. The results were equivocal. Some learning was needed to identify an object, but it appeared that the study participants were not starting completely from scratch. Lars Chittka of Queen Mary University of London and his colleagues have taken another stab at finding an answer, this time using another species. To test whether bumblebees can form an internal representation of objects, Chittka and his team first trained the insects to discriminate spheres and cubes using a sugar reward. The bees were trained in the light, where they could see but not touch the objects that were isolated inside a closed petri dish. Then they were tested in the dark, where they could touch but not see the spheres or cubes. 
The researchers found that the invertebrates spent more time in contact with the shape they had been trained to associate with the sugar reward, even though they had to rely on touch rather than sight to discriminate the objects. © 2020 Scientific American

Keyword: Development of the Brain; Vision
Link ID: 27061 - Posted: 02.21.2020

Fergus Walsh Medical correspondent A new gene therapy has been used to treat patients with a rare inherited eye disorder which causes blindness. It's hoped the NHS treatment will halt sight loss and even improve vision. Matthew Wood, 48, one of the first patients to receive the injection, told the BBC: "I value the remaining sight I have so if I can hold on to that it would be a big thing for me." The treatment costs around £600,000 but NHS England has agreed a discounted price with the manufacturer Novartis. Luxturna (voretigene neparvovec) has been approved by the National Institute for Health and Care Excellence (NICE), which estimates that just under 90 people in England will be eligible for the treatment. The gene therapy is for patients who have retinal dystrophy as a result of inheriting a faulty copy of the RPE65 gene from both parents. The gene is important for providing the pigment that light sensitive cells need to absorb light. Initially this affects night vision but eventually, as the cells die, it can lead to complete blindness. An injection is made into the back of the eye - this delivers working copies of the RPE65 gene. These are contained inside a harmless virus, which enables them to penetrate the retinal cells. Once inside the nucleus, the gene provides the instructions to make the RPE65 protein, which is essential for healthy vision. © 2020 BBC

Keyword: Vision; Genes & Behavior
Link ID: 27046 - Posted: 02.18.2020

By Veronique Greenwood You might mistake jewel wings for their colorful cousins, dragonflies. New research shows that these two predators share something more profound than their appearance, however. In a paper published this month in Current Biology, Dr. Gonzalez-Bellido and colleagues reveal that the neural systems behind jewel wings’ vision are shared with dragonflies, with whom they have a common ancestor that lived before the dinosaurs. But over the eons, this brain wiring has adapted itself in different ways in each creature, enabling radically different hunting strategies. For flying creatures, instantaneous, highly accurate vision is crucial to survival. Recent research showed that birds of prey that fly faster also see changes in their field of vision more quickly, demonstrating the link between speed on the wing and speed in the brain. But the group of insects that includes jewel wings and dragonflies took to the air long before birds were even on the evolutionary horizon, and their vision is swifter than any vertebrate’s studied thus far, said Dr. Gonzalez-Bellido. Researchers looking to understand how their vision, flight and hunting abilities are connected are thus particularly interested in the neurons that send visual information to the wings. But recordings made in the lab by Dr. Gonzalez-Bellido and her colleagues confirmed that dragonflies rise up in a straight line to seize unsuspecting insects from below, almost like their prey had stepped on a land mine. This eerie climb may contribute to their startling success rate: Dragonflies snag their prey 97 percent of the time. The difference in hunting behavior may be linked to the placement of the insects’ eyes. Jewel wings’ eyes are on either side of the head, facing forward. 
The eyes of these dragonflies — the species Sympetrum vulgatum, also known as the vagrant darter — encase the top of the insect’s head in an iridescent dome, with a thin line running down the middle the only visible reminder that they may have once been separate. © 2020 The New York Times Company

Keyword: Vision; Evolution
Link ID: 27008 - Posted: 01.29.2020

By Stephen L. Macknik The year 2015 will go down in the annals of vision research as a watershed moment, in which the internet discovered an entirely new visual phenomenon—a dress that half of the world saw as black/blue and the other half as white/gold. Had it not been for social media and its particular way of framing conversations around shared crowd-sourced images, this peculiar visual puzzle might have remained unknown. The idea that an object could look one color under one set of lighting conditions, and another color under another set of lighting conditions, was not new. What was unique about The Dress was that the same image, under the same exact viewing conditions, looked very different to different people. The color ambiguity only became evident when half of the viewers disagreed with the other half, which is probably why social media was so pivotal in its discovery. Vision scientists went bananas. Was it an artifact of different device screens? Did it have to do with gender, culture, education, or some other categorization of brain and persona? How many people—exactly—saw the image one way or the other? This was a dress that sailed a thousand ships. The vision science field eventually verified that the phenomenon was definitely real and not an artifact of viewing conditions, though the precise underlying mechanisms remain unknown even now. Similarly ambiguous color images followed the dress, but a main obstacle to figuring out how and why such effects existed was that all of the images were flukes. They were accidental happy snaps created by internet picture-posters. Scientists could not intentionally create new and carefully controlled examples for deep study in the lab. Until now. © 2020 Scientific American

Keyword: Vision
Link ID: 26996 - Posted: 01.27.2020

By Veronique Greenwood The cuttlefish hovers in the aquarium, its fins rippling and large, limpid eyes glistening. When a scientist drops a shrimp in, this cousin of the squid and octopus pauses, aims and shoots its tentacles around the prize. There’s just one unusual detail: The diminutive cephalopod is wearing snazzy 3-D glasses. Putting 3-D glasses on a cuttlefish is not the simplest task ever performed in the service of science. “Some individuals will not wear them no matter how much I try,” said Trevor Wardill, a sensory neuroscientist at the University of Minnesota, who with other colleagues managed to gently lift the cephalopods from an aquarium, dab them between the eyes with a bit of glue and some Velcro and fit the creatures with blue-and-red specs. The whimsical eyewear was part of an attempt to tell whether cuttlefish see in 3-D, using the distance between their two eyes to generate depth perception like humans do. It was inspired by research in which praying mantises in 3-D glasses helped answer a similar question. The team’s results, published Wednesday in the journal Science Advances, suggest that, contrary to what scientists believed in the past, cuttlefish really can see in three dimensions. Octopuses and squid, despite being savvy hunters, don’t seem to have 3-D vision like ours. Previous work, more than 50 years ago, had found that one-eyed cuttlefish could still catch prey, suggesting they might be similar. But cuttlefish eyes often focus in concert when they’re hunting, and there is significant overlap in what each eye sees, a promising combination for generating 3-D vision. Dr. Wardill, Rachael Feord, a graduate student at the University of Cambridge, and the team decided to give the glasses a try during visits to the Marine Biological Lab in Woods Hole, Mass. 
The logic went like this: With each eye covered by a different colored lens, two different-colored versions of a scene, just slightly offset from each other, should pop out into a three-dimensional image. By playing a video on the tank wall of a scuttling pair of shrimp silhouettes, each a different color and separated from each other by varying amounts, the researchers could make a shrimp seem closer to the cuttlefish or farther away. If, that is, the cuttlefish experienced 3-D vision like ours. © 2020 The New York Times Company
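The geometry behind that manipulation is the standard similar-triangles calculation of stereopsis. The function below is a generic sketch of that calculation, not code or numbers from the study; the eye separation, screen distance, and offsets in the example are purely illustrative.

```python
def perceived_distance(eye_separation, screen_distance, disparity):
    """Distance at which a fused binocular target appears, by similar
    triangles. `disparity` is the on-screen offset between the two
    eyes' images: positive (crossed) pulls the target in front of the
    screen, negative (uncrossed) pushes it behind. All arguments and
    the result are in the same unit of length.
    """
    return eye_separation * screen_distance / (eye_separation + disparity)

# No offset: the shrimp silhouette appears on the screen itself.
on_screen = perceived_distance(1.0, 10.0, 0.0)   # 10.0

# A crossed offset brings the target closer than the screen...
closer = perceived_distance(1.0, 10.0, 0.25)     # 8.0

# ...and an uncrossed offset pushes it farther away.
farther = perceived_distance(1.0, 10.0, -0.5)    # 20.0
```

Varying the silhouette separation while the screen stays fixed is what lets the experimenters move the apparent shrimp toward or away from the cuttlefish; a strike aimed at the predicted 3-D location, rather than at the screen, is the behavioral evidence for stereo vision.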

Keyword: Vision; Evolution
Link ID: 26945 - Posted: 01.09.2020

A cousin of the starfish that resides in the coral reefs of the Caribbean and Gulf of Mexico lacks eyes but can still see, according to scientists who studied the creature. Researchers said on Thursday that the red brittle star, called Ophiocoma wendtii, joins a species of sea urchin as the only creatures known to be able to see without having eyes — known as extraocular vision. The red brittle star possesses this exotic capability thanks to light-sensing cells, called photoreceptors, covering its body and pigment cells, called chromatophores, that move during the day to facilitate the animal's dramatic colour change from a deep reddish-brown in daytime to a striped beige at night. Brittle stars, with five radiating arms extending from a central disk, are related to starfish (also called sea stars), sea cucumbers, sea urchins and others in a group of marine invertebrates called echinoderms. They have a nervous system but no brain. Looking for a safe hiding place The red brittle star — which measure up to about 35 centimetres (14 inches) from arm tip to arm tip — lives in bright and complex habitats, with high predation threats from reef fish. It stays hidden during daytime — making the ability to spot a safe place to hide critical — and comes out at night to feed on detritus. Its photoreceptors are surrounded during daytime by chromatophores that narrow the field of the light being detected, making each photoreceptor like the pixel of a computer image that, when combined with other pixels, makes a whole image. The visual system does not work at night, when the chromatophores contract. "If our conclusions about the chromatophores are correct, this is a beautiful example of innovation in evolution," said Lauren Sumner-Rooney, a research fellow at Oxford University Museum of Natural History, who led the study published in the journal Current Biology. ©2020 CBC/Radio-Canada.

Keyword: Vision; Evolution
Link ID: 26929 - Posted: 01.04.2020

By Sharon Begley @sxbegle The filmgoers didn’t flinch at the scene of the dapper man planting a time bomb in the trunk of the convertible, or tense up as the unsuspecting driver and his beautiful blonde companion drove slowly through the town teeming with pedestrians, or jump out of their seats when the bomb exploded in fiery carnage. And they sure as heck weren’t wowed by the technical artistry of this famous opening shot of Orson Welles’ 1958 noir masterpiece, “Touch of Evil,” a single three-minute take that ratchets up the suspense to 11 on a scale of 1 to 10. In fairness, lab mice aren’t cineastes. But where the rodents fell short as film critics they more than delivered as portals into the brain. As the mice watched the film clip, scientists eavesdropped on each one’s visual cortex. By the end of the study, the textbook understanding of how the brain “sees” had been as badly damaged as the “Touch of Evil” convertible, scientists reported on Monday. The new insights into the workings of the visual cortex, they said, could improve technologies as diverse as self-driving cars and brain prostheses to let the blind see. “Neuroscience lets us make better object recognition systems” for, say, self-driving cars and artificial intelligence-based diagnostics, said Joel Zylberberg of York University, an expert on machine learning and neuroscience who was not involved in the new research. “But computer vision has been hampered by an insufficient understanding of visual processing in the brain.” The “unprecedented” findings in the new study, he said, promise to change that. The textbook understanding of how the brain sees, starting with streams of photons landing on the retina, reflects research from the 1960s that won two of its pioneers a Nobel prize in medicine in 1981. It basically holds that neurons in the primary visual cortex, where the signals go first, respond to edges: vertical edges, horizontal edges, and every edge orientation in between, moving and static. 
We see a laptop screen because of how its edges abut what’s behind it, sidewalks because of where their edges touch the curb. Higher-order brain systems take these rudimentary perceptions and process them into the perception of a scene or object. © 2019 STAT

Keyword: Vision
Link ID: 26910 - Posted: 12.21.2019

By Susana Martinez-Conde The many evils of social media notwithstanding, millions of users agree that some of its most delightful aspects include viral illusions and cute cat videos. The potential for synergy was vast in retrospect—but only realized in 2013, when Rasmus Bååth, a cognitive scientist from Lund University in Sweden, blended both elements in a YouTube video of his kitten attacking a printed version of Akiyoshi Kitaoka’s famous “Rotating Snakes” illusion. The clip, which has been viewed more than 6 million times as of this writing, led to subsequent empirical research and an internet survey of cat owners, where 29% of respondents answered that their pets reacted to the Rotating Snakes. The results, published in the journal Psychology in 2014, indicated—though not conclusively—that cats experience illusory motion when they look at the Rotating Snakes pattern, much as most humans do. Now, a team of researchers from University of Padova, Italy, Queen Mary University of London in the UK, and the Parco Natura Viva—Garda Zoological Park in Bussolengo, Italy, has collected additional evidence that cats—in this case, big cats—find the Rotating Snakes Illusion fascinating. Intrigued by the earlier study on house cats, Christian Agrillo of the University of Padova and his collaborators set out to determine whether lions at Parco Natura Viva were similarly susceptible to motion illusions, as well as explore the possibility that such patterns might serve as a source of visual enrichment for zoo animals. Their findings were published last month in Frontiers in Psychology. © 2019 Scientific American,

Keyword: Vision; Evolution
Link ID: 26826 - Posted: 11.18.2019

By Erica Tennenhouse Live in the urban jungle long enough, and you might start to see things—in particular humanmade objects like cars and furniture. That’s what researchers found when they melded photos of artificial items with images of animals and asked 20 volunteers what they saw. The people, all of whom lived in cities, overwhelmingly noticed the manufactured objects whereas the animals faded into the background. To find out whether built environments can alter people’s perception, the researchers gathered hundreds of photos of animals and artificial objects such as bicycles, laptops, or benches. Then, they superimposed them to create hybrid images—like a horse combined with a table (above, top left) or a rhinoceros combined with a car (above, bottom right). As volunteers watched the hybrids flash by on a screen, they categorized each as a small animal, a big animal, a small humanmade object, or a big humanmade object. Overall, volunteers showed a clear bias toward the humanmade objects, especially when they were big, the researchers report today in the Proceedings of the Royal Society B. The bias itself was a measure of how much the researchers had to visually “amp up” an image before participants saw it instead of its partner image. That bias suggests people’s perceptions are fundamentally altered by their environments, the researchers say. Humans often rely on past experiences to process new information—the classic example is mistaking a snake for a garden hose. But in this case, living in industrialized nations—where you are exposed to fewer “natural” objects—could change the way you view the world. © 2019 American Association for the Advancement of Science
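The paper's exact stimulus-generation procedure is not given in the summary above. A plausible minimal version of the "melding" step, and of measuring bias as how far the 50/50 report point sits from a physical 50/50 blend, might look like the following; the blending rule, function names, and all numbers here are assumptions for illustration, not the study's method.

```python
import numpy as np

def meld(animal, manmade, alpha):
    """Hybrid image as a pixelwise weighted blend of two equal-sized
    images. alpha = 0.5 gives both equal physical strength; raising
    alpha 'amps up' the animal relative to the humanmade object."""
    return alpha * animal + (1.0 - alpha) * manmade

def perceptual_bias(alpha_at_equality):
    """Deviation of the point where viewers report both images equally
    often from a physical 50/50 blend. Positive means the animal had
    to be amplified to be seen, i.e. a bias toward the object."""
    return alpha_at_equality - 0.5

# Toy stand-ins for real photographs (uniform 4x4 grayscale patches).
horse = np.full((4, 4), 0.8)
table = np.full((4, 4), 0.2)
hybrid = meld(horse, table, 0.6)   # animal slightly amped up
```

On this account, the reported city-dweller bias would correspond to a positive `perceptual_bias`: the animal half of the hybrid needed extra weight before volunteers stopped seeing the car or bench.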

Keyword: Vision; Attention
Link ID: 26793 - Posted: 11.06.2019

By Kelly Servick CHICAGO, ILLINOIS—In 2014, U.S. regulators approved a futuristic treatment for blindness. The device, called Argus II, sends signals from a glasses-mounted camera to a roughly 3-by-5-millimeter grid of electrodes at the back of the eye. Its job: Replace signals from light-sensing cells lost in the genetic condition retinitis pigmentosa. The implant’s maker, Second Sight, estimates that about 350 people in the world now use it. Argus II offers a relatively crude form of artificial vision; users see diffuse spots of light called phosphenes. “None of the patients gave up their white cane or guide dog,” says Daniel Palanker, a physicist who works on visual prostheses at Stanford University in Palo Alto, California. “It’s a very low bar.” But it was a start. He and others are now aiming to raise the bar with more precise ways of stimulating cells in the eye or brain. At the annual meeting of the Society for Neuroscience here last week, scientists shared progress from several such efforts. Some have already advanced to human trials—“a real, final test,” Palanker says. “It’s exciting times.” Several common disorders steal vision by destroying photoreceptors, the first cells in a relay of information from the eye to the brain. The other players in the relay often remain intact: the so-called bipolar cells, which receive photoreceptors’ signals; the retinal ganglion cells, which form the optic nerve and carry those signals to the brain; and the multilayered visual cortex at the back of the brain, which organizes the information into meaningful sight. Because adjacent points in space project onto adjacent points on the retina, and eventually activate neighboring points in an early processing area of the visual cortex, a visual scene can be mapped onto a spatial pattern of signals. But this spatial mapping gets more complex along the relay, so some researchers aim to activate cells as close to the start as possible.
© 2019 American Association for the Advancement of Science

Keyword: Vision; Robotics
Link ID: 26779 - Posted: 11.01.2019
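The retinotopic mapping described in the Argus II excerpt — adjacent points in the scene driving adjacent electrodes — can be sketched as a simple downsampling step. The grid size and the block-averaging scheme below are illustrative assumptions, not Second Sight's actual signal chain:

```python
def phosphene_pattern(image, rows=3, cols=5):
    """Map a grayscale 'camera' image (a 2-D list of 0-255 values)
    onto a coarse electrode grid by averaging pixel blocks.
    Each output value stands in for one electrode's stimulation level,
    preserving the scene's rough spatial layout."""
    h, w = len(image), len(image[0])
    bh, bw = h // rows, w // cols  # pixels per electrode, vertically/horizontally
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            block = [image[y][x]
                     for y in range(r * bh, (r + 1) * bh)
                     for x in range(c * bw, (c + 1) * bw)]
            row.append(sum(block) / len(block))  # mean brightness of the block
        grid.append(row)
    return grid

# A toy 6x10 "image": bright on the left half, dark on the right.
img = [[255] * 5 + [0] * 5 for _ in range(6)]
print(phosphene_pattern(img))  # left electrodes bright, right electrodes dark
```

The point of the sketch is only that a coarse grid keeps neighborhood relations intact, which is why a handful of electrodes can still convey where a bright region sits in the visual field.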

By Stephen L. Macknik Most of us look at a bird and see its avian shape in perfect alignment with the colors of its beautiful plumage. How could it be any other way? The shape and color are derived from the same object and so the brain must process shape and color together as a unified percept. Right? Wrong. The brain processes forms and color in separate neural circuits, but because these brain regions communicate with each other, our perception appears unified. To understand how this all works, let’s do an experiment on ourselves to separate (and then recombine in our brains) the colors and shapes from an image. We will use an illusion called the Color Assimilation Grid, developed by Øyvind Kolås, a digital media artist and software developer. First, let’s apply a screen to the birds (above) so that we can sample their colors but get rid of most of their shape information. We simply take the original image and blur it (to break down the shape information, not shown) and then multiply the resultant pixels in a step-by-step, pixel-by-pixel fashion with a grid screen of the same size as the original image. In the screen, white pixels equal 1 and the gray regions equal zero, so the result is a blurry colored plaid sample of the birds’ colors. Now that we have diminished the shape information and sampled the colors, we need to do the opposite: sample the shapes after diminishing the color information, so that we can later mix the two to see how shape and color assimilate in the brain. To create the shape-only image we first turn the original image to grayscale (above) and then we apply the inverse of the screen we used in the color sampling. The result is a grayscale image of the birds with the shape information preserved, superimposed with a tiny grid of empty spaces where we can later add color information without altering the rest of the image. © 2019 Scientific American

Keyword: Vision
Link ID: 26760 - Posted: 10.28.2019
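The screen-multiplication step behind Kolås's Color Assimilation Grid can be sketched in a few lines. This is a simplified illustration, not his implementation: it uses vertical lines rather than a full grid, omits the blur step, and all function names are invented for the example:

```python
def grid_screen(h, w, spacing=2):
    """1 where the screen's 'white' lines fall, 0 in the 'gray' gaps."""
    return [[1 if x % spacing == 0 else 0 for x in range(w)] for _ in range(h)]

def apply_screen(image, screen):
    """Pixel-by-pixel multiply: keep pixels where screen == 1, zero the rest."""
    return [[tuple(ch * s for ch in px) for px, s in zip(row, srow)]
            for row, srow in zip(image, screen)]

def to_gray(image):
    """Luma-weighted grayscale of an RGB image."""
    return [[tuple([int(0.299 * r + 0.587 * g + 0.114 * b)] * 3) for r, g, b in row]
            for row in image]

def invert_screen(screen):
    return [[1 - s for s in row] for row in screen]

def assimilation_composite(image, spacing=2):
    """Color sampled through the screen, plus grayscale through the inverse
    screen -- the two images the article describes, recombined."""
    h, w = len(image), len(image[0])
    screen = grid_screen(h, w, spacing)
    color = apply_screen(image, screen)                           # colored lines
    shape = apply_screen(to_gray(image), invert_screen(screen))   # gray elsewhere
    return [[tuple(a + b for a, b in zip(c, g)) for c, g in zip(crow, grow)]
            for crow, grow in zip(color, shape)]

# A tiny all-red "image": alternating columns come out colored and gray.
result = assimilation_composite([[(200, 0, 0)] * 4 for _ in range(2)])
```

In the real illusion the brain then "fills in" the grayscale regions with the nearby sampled color, which is why the composite still looks fully colored at a glance.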

By Tanya Lewis The lenses in human eyes lose some ability to focus as they age. Monovision—a popular fix for this issue—involves prescription contacts (or glasses) that focus one eye for near-vision tasks such as reading and the other for far-vision tasks such as driving. About 10 million people in the U.S. currently use this form of correction, but a new study finds it may cause a potentially dangerous optical illusion. Nearly a century ago German physicist Carl Pulfrich described a visual phenomenon now known as the Pulfrich effect: When one eye sees either a darker or a lower-contrast image than the other, an object moving side to side (such as a pendulum) appears to travel in a three-dimensional arc. This is because the brain processes the darker or lower-contrast image more slowly than the lighter or higher-contrast one, creating a lag the brain perceives as 3-D motion. Johannes Burge, a psychologist at the University of Pennsylvania, and his colleagues recently found that monovision can cause a reverse Pulfrich effect. They had participants look through a device showing a different image to each eye—one blurry and one in focus—of an object moving side to side. The researchers found that viewers processed the blurrier image a couple of milliseconds faster than the sharper one, making the object seem to arc in front of the display screen. It appeared closer to the viewer as it moved to the right (if the left eye saw the blurry image) or to the left (if the right eye did). “That does not sound like a very big deal,” Burge says, but it is enough for a driver at an intersection to misjudge the location of a moving cyclist by about the width of a narrow street lane (graphic). © 2019 Scientific American

Keyword: Vision
Link ID: 26758 - Posted: 10.28.2019
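The size of the (reverse) Pulfrich illusion can be estimated with a back-of-envelope small-angle calculation: the lagging eye sees a laterally moving target displaced by speed × delay, and the brain reads that displacement as binocular disparity. The numbers and the formula below are illustrative, not the model from Burge's study:

```python
def pulfrich_depth_error(speed, distance, delay, ipd=0.064):
    """Small-angle estimate of the perceived depth shift (meters) when one
    eye's signal lags the other by `delay` seconds.

    The lagging eye sees the target offset by speed * delay meters, read as
    a disparity angle of (speed * delay) / distance; converting disparity
    to depth at viewing distance `distance` with interpupillary distance
    `ipd` gives:
        depth_shift ≈ speed * delay * distance / ipd
    """
    return speed * delay * distance / ipd

# Illustrative numbers (not from the study): a cyclist crossing at 5 m/s,
# 8 m away, with a 3-millisecond interocular processing lag.
print(round(pulfrich_depth_error(5.0, 8.0, 0.003), 2))  # depth error in meters
```

Even a few milliseconds of lag yields a depth error on the order of a meter in this toy calculation, which is consistent with the article's point that a driver could misjudge a cyclist's position by roughly a lane width.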

By Evan Cooper Early one summer morning, I was awakened by a hammering on the inside of my skull. It felt as if a prisoner were trying to Shawshank it out through my left eye socket. When I sat up in bed to reach for the Advil on my nightstand, I became panic-stricken. Both eyes were open, but I could see through only one. I’d been known to leap to worst-case scenarios at the first sign of any physical discomfort. (Pain in my abdomen? Appendicitis! Headache? Definitely a brain tumor.) But this was different: I wasn’t paranoid, I was blind in my left eye. At the ophthalmologist’s office later that morning, I tried not to panic. I was nearly 20 years old, midway through my studies at U.C.L.A. Everything is fine, I told myself. You’re FINE. Like a mantra, I repeated this over and over, determined that, for once, I was not going to catastrophize. I briefly thought I might be imagining it all, conjuring up some drama for attention. Once when I was 11, I called my dad, who lived 3,000 miles away in Los Angeles, and begged him to send an ambulance to my house in Cleveland because I was certain that I had a collapsed lung and my mom was refusing to take me to the hospital. But the doctor I saw told me with some urgency that I needed to see a specialist, immediately. I overheard his assistant quietly consider potential diagnoses: “multiple sclerosis, lupus, another autoimmune disease?” I closed my eyes and imagined myself on the sort of carnival ride where you stick to the wall as you spin round and round until the floor falls away. Beyond disbelief and dread, however, I also felt a familiar swell of self-loathing. Of course I have an incurable, degenerative disease, I reprimanded myself. This is my fault. After all, up until this point, I had lived as if an internal army of drill sergeants were commanding me to eat less, exercise harder, study more, stand out, be The Best. No achievement was ever good enough. 
And what is an autoimmune disease if not the Self waging a war upon the Self? © 2019 The New York Times Company

Keyword: Vision
Link ID: 26680 - Posted: 10.08.2019