Chapter 10. Vision: From Eye to Brain



By Simon Makin Neuroscientists understand much about how the human brain is organized into systems specialized for recognizing faces or scenes or for other specific cognitive functions. The questions that remain relate to how such capabilities arise. Are these networks—and the regions comprising them—already specialized at birth? Or do they develop these sensitivities over time? And how might structure influence the development of function? “This is an age-old philosophical question of how knowledge is organized,” says psychologist Daniel Dilks of Emory University. “And where does it come from? What are we born with, and what requires experience?” Dilks and his colleagues addressed these questions in an investigation of neural connectivity in the youngest humans studied in this context to date: 30 infants ranging from six to 57 days old (with an average age of 27 days). Their findings suggest that circuit wiring precedes, and thus may guide, regional specialization, shedding light on how knowledge systems emerge in the brain. Further work along these lines may provide insight into neurodevelopmental disorders such as autism. In the study, published Monday in Proceedings of the National Academy of Sciences USA, the researchers looked at two of the best-studied brain networks dedicated to a particular visual function—one that underlies face recognition and another that processes scenes. The occipital face area and fusiform face area selectively respond to faces and are highly connected in adults, suggesting they constitute a face-recognition network. The same description applies to the parahippocampal place area and retrosplenial complex but for scenes. All four of these areas are in the inferior temporal cortex, which is behind the ear in humans. © 2020 Scientific American,

Keyword: Development of the Brain; Vision
Link ID: 27088 - Posted: 03.03.2020

By Sara Reardon To many people’s eyes, artist Mark Rothko’s enormous paintings are little more than swaths of color. Yet a Rothko can fetch nearly $100 million. Meanwhile, Pablo Picasso’s warped faces fascinate some viewers and terrify others. Why do our perceptions of beauty differ so widely? The answer may lie in our brain networks. Researchers have now developed an algorithm that can predict art preferences by analyzing how a person’s brain breaks down visual information and decides whether a painting is “good.” The findings show for the first time how intrinsic features of a painting combine with human judgment to give art value in our minds. Most people—including researchers—consider art preferences to be all over the map, says Anjan Chatterjee, a neurologist and cognitive neuroscientist at the University of Pennsylvania who was not involved in the study. Many preferences are rooted in biology–sugary foods, for instance, help us survive. And people tend to share similar standards of beauty when it comes to human faces and landscapes. But when it comes to art, “There are relatively arbitrary things we seem to care about and value,” Chatterjee says. To figure out how the brain forms value judgments about art, computational neuroscientist Kiyohito Iigaya and his colleagues at the California Institute of Technology first asked more than 1300 volunteers on the crowdsourcing website Amazon Mechanical Turk to rate a selection of 825 paintings from four Western genres including impressionism, cubism, abstract art, and color field painting. Volunteers were all over the age of 18, but researchers didn’t specify their familiarity with art or their ethnic or national origin. © 2020 American Association for the Advancement of Science
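
The article does not spell out how the algorithm works, but a common baseline for this kind of "preference from image features" prediction is a simple linear model: score each painting on a handful of visual features, then fit per-feature weights to one viewer's ratings. The sketch below illustrates only that generic approach, not the authors' actual model; the feature names and the tiny data set are invented for illustration.

```python
# Minimal sketch (not the study's actual model): predict a viewer's painting
# ratings as a weighted sum of simple image features. Feature names and the
# tiny data set here are invented for illustration.
import numpy as np

# Each row is one painting: [mean brightness, color saturation, edge density, concreteness]
features = np.array([
    [0.62, 0.80, 0.10, 0.2],   # color-field-like painting
    [0.45, 0.30, 0.55, 0.9],   # impressionist landscape
    [0.50, 0.40, 0.70, 0.6],   # cubist portrait
    [0.35, 0.25, 0.40, 0.1],   # abstract composition
])
ratings = np.array([3.0, 4.5, 2.5, 3.5])   # one viewer's 1-5 "liking" ratings

# Least-squares fit of per-feature weights (plus an intercept).
X = np.hstack([features, np.ones((len(features), 1))])
weights, *_ = np.linalg.lstsq(X, ratings, rcond=None)

# Predict how much this viewer would like a new, unseen painting.
new_painting = np.array([0.55, 0.35, 0.60, 0.8, 1.0])
print("fitted ratings:", X @ weights)
print("predicted rating for new painting:", new_painting @ weights)
```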

Keyword: Vision; Attention
Link ID: 27062 - Posted: 02.21.2020

By Viviane Callier In 1688 Irish philosopher William Molyneux wrote to his colleague John Locke with a puzzle that continues to draw the interest of philosophers and scientists to this day. The idea was simple: Would a person born blind, who has learned to distinguish objects by touch, be able to recognize them purely by sight if he or she regained the ability to see? The question, known as Molyneux’s problem, probes whether the human mind has a built-in concept of shapes that is so innate that such a blind person could immediately recognize an object with restored vision. The alternative is that the concepts of shapes are not innate but have to be learned by exploring an object through sight, touch and other senses, a process that could take a long time when starting from scratch. An attempt was made to resolve this puzzle a few years ago by testing Molyneux's problem in children who were congenitally blind but then regained their sight, thanks to cataract surgery. Although the children were not immediately able to recognize objects, they quickly learned to do so. The results were equivocal. Some learning was needed to identify an object, but it appeared that the study participants were not starting completely from scratch. Lars Chittka of Queen Mary University of London and his colleagues have taken another stab at finding an answer, this time using another species. To test whether bumblebees can form an internal representation of objects, Chittka and his team first trained the insects to discriminate spheres and cubes using a sugar reward. The bees were trained in the light, where they could see but not touch the objects that were isolated inside a closed petri dish. Then they were tested in the dark, where they could touch but not see the spheres or cubes. The researchers found that the invertebrates spent more time in contact with the shape they had been trained to associate with the sugar reward, even though they had to rely on touch rather than sight to discriminate the objects. © 2020 Scientific American

Keyword: Development of the Brain; Vision
Link ID: 27061 - Posted: 02.21.2020

Fergus Walsh, Medical correspondent A new gene therapy has been used to treat patients with a rare inherited eye disorder that causes blindness. It's hoped the NHS treatment will halt sight loss and even improve vision. Matthew Wood, 48, one of the first patients to receive the injection, told the BBC: "I value the remaining sight I have so if I can hold on to that it would be a big thing for me." The treatment costs around £600,000 but NHS England has agreed a discounted price with the manufacturer Novartis. Luxturna (voretigene neparvovec) has been approved by the National Institute for Health and Care Excellence (NICE), which estimates that just under 90 people in England will be eligible for the treatment. The gene therapy is for patients who have retinal dystrophy as a result of inheriting a faulty copy of the RPE65 gene from both parents. The gene is important for providing the pigment that light-sensitive cells need to absorb light. Initially this affects night vision but eventually, as the cells die, it can lead to complete blindness. An injection is made into the back of the eye - this delivers working copies of the RPE65 gene. These are contained inside a harmless virus, which enables them to penetrate the retinal cells. Once inside the nucleus, the gene provides the instructions to make the RPE65 protein, which is essential for healthy vision. © 2020 BBC

Keyword: Vision; Genes & Behavior
Link ID: 27046 - Posted: 02.18.2020

By Veronique Greenwood You might mistake jewel wings for their colorful cousins, dragonflies. New research shows that these two predators share something more profound than their appearance, however. In a paper published this month in Current Biology, Dr. Gonzalez-Bellido and colleagues reveal that the neural systems behind jewel wings’ vision are shared with dragonflies, with whom they have a common ancestor that lived before the dinosaurs. But over the eons, this brain wiring has adapted itself in different ways in each creature, enabling radically different hunting strategies. For flying creatures, instantaneous, highly accurate vision is crucial to survival. Recent research showed that birds of prey that fly faster also see changes in their field of vision more quickly, demonstrating the link between speed on the wing and speed in the brain. But the group of insects that includes jewel wings and dragonflies took to the air long before birds were even on the evolutionary horizon, and their vision is swifter than any vertebrate’s studied thus far, said Dr. Gonzalez-Bellido. Researchers looking to understand how their vision, flight and hunting abilities are connected are thus particularly interested in the neurons that send visual information to the wings. But recordings made in the lab by Dr. Gonzalez-Bellido and her colleagues confirmed that dragonflies rise up in a straight line to seize unsuspecting insects from below, almost like their prey had stepped on a land mine. This eerie climb may contribute to their startling success rate: Dragonflies snag their prey 97 percent of the time. The difference in hunting behavior may be linked to the placement of the insects’ eyes. Jewel wings’ eyes are on either side of the head, facing forward. The eyes of these dragonflies — the species Sympetrum vulgatum, also known as the vagrant darter — encase the top of the insect’s head in an iridescent dome, with a thin line running down the middle the only visible reminder that they may have once been separate. © 2020 The New York Times Company

Keyword: Vision; Evolution
Link ID: 27008 - Posted: 01.29.2020

By Stephen L. Macknik The year 2015 will go down in the annals of vision research as a watershed moment in which the internet discovered an entirely new visual phenomenon—a dress that half of the world saw as black/blue and the other half as white/gold. Had it not been for social media and its particular way of framing conversations around shared crowd-sourced images, this peculiar visual puzzle might have remained unknown. The idea that an object could look one color under one set of lighting conditions, and another color under another set of lighting conditions, was not new. What was unique about The Dress was that the same image, under the same exact viewing conditions, looked very different to different people. The color ambiguity only became evident when half of the viewers disagreed with the other half, which is probably why social media was so pivotal in its discovery. Vision scientists went bananas. Was it an artifact of different device screens? Did it have to do with gender, culture, education, or some other categorization of brain and persona? How many people—exactly—saw the image one way or the other? This was a dress that sailed a thousand ships. The vision science field eventually verified that the phenomenon was definitely real and not an artifact of viewing conditions, though the precise underlying mechanisms remain unknown even now. Similarly ambiguous color images followed the dress, but a main obstacle to figuring out how and why such effects existed was that all of the images were flukes. They were accidental happy snaps created by internet picture-posters. Scientists could not intentionally create new and carefully controlled examples for deep study in the lab. Until now. © 2020 Scientific American

Keyword: Vision
Link ID: 26996 - Posted: 01.27.2020

By Veronique Greenwood The cuttlefish hovers in the aquarium, its fins rippling and large, limpid eyes glistening. When a scientist drops a shrimp in, this cousin of the squid and octopus pauses, aims and shoots its tentacles around the prize. There’s just one unusual detail: The diminutive cephalopod is wearing snazzy 3-D glasses. Putting 3-D glasses on a cuttlefish is not the simplest task ever performed in the service of science. “Some individuals will not wear them no matter how much I try,” said Trevor Wardill, a sensory neuroscientist at the University of Minnesota, who with other colleagues managed to gently lift the cephalopods from an aquarium, dab them between the eyes with a bit of glue and some Velcro and fit the creatures with blue-and-red specs. The whimsical eyewear was part of an attempt to tell whether cuttlefish see in 3-D, using the distance between their two eyes to generate depth perception like humans do. It was inspired by research in which praying mantises in 3-D glasses helped answer a similar question. The team’s results, published Wednesday in the journal Science Advances, suggest that, contrary to what scientists believed in the past, cuttlefish really can see in three dimensions. Octopuses and squid, despite being savvy hunters, don’t seem to have 3-D vision like ours. Previous work, more than 50 years ago, had found that one-eyed cuttlefish could still catch prey, suggesting they might be similar. But cuttlefish eyes often focus in concert when they’re hunting, and there is significant overlap in what each eye sees, a promising combination for generating 3-D vision. Dr. Wardill, Rachael Feord, a graduate student at the University of Cambridge, and the team decided to give the glasses a try during visits to the Marine Biological Lab in Woods Hole, Mass. The logic went like this: With each eye covered by a different colored lens, two different-colored versions of a scene, just slightly offset from each other, should pop out into a three-dimensional image. By playing a video on the tank wall of a scuttling pair of shrimp silhouettes, each a different color and separated from each other by varying amounts, the researchers could make a shrimp seem closer to the cuttlefish or farther away. If, that is, the cuttlefish experienced 3-D vision like ours. © 2020 The New York Times Company
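
The stimulus logic described above is simple to reproduce: draw the same silhouette once in red and once in blue, and the horizontal offset between the two copies sets the apparent depth once each eye sees only one color through its filter. The Pillow sketch below is only an illustration of that general anaglyph principle; it is not the researchers' stimulus code, and the shapes, sizes, and offset are arbitrary.

```python
# Minimal sketch of a red/blue anaglyph stimulus: the same silhouette drawn in
# two colors with a horizontal offset. Viewed through matching filters, the
# offset is read by the brain as binocular disparity, i.e., depth.
# Not the researchers' code; sizes and offsets are arbitrary.
from PIL import Image, ImageDraw

W, H = 400, 300
frame = Image.new("RGB", (W, H), "black")
draw = ImageDraw.Draw(frame)

def shrimp_silhouette(d, x, color):
    """Crude stand-in for a shrimp silhouette: an ellipse plus a 'tail'."""
    d.ellipse([x, 130, x + 60, 160], fill=color)
    d.polygon([(x + 60, 145), (x + 80, 135), (x + 80, 155)], fill=color)

disparity = 12  # pixels of horizontal offset between the two eyes' images
shrimp_silhouette(draw, 160, (255, 0, 0))              # left eye's view (red)
shrimp_silhouette(draw, 160 + disparity, (0, 0, 255))  # right eye's view (blue)

frame.save("anaglyph_shrimp.png")  # flipping the sign of the offset flips near vs. far
```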

Keyword: Vision; Evolution
Link ID: 26945 - Posted: 01.09.2020

A cousin of the starfish that resides in the coral reefs of the Caribbean and Gulf of Mexico lacks eyes but can still see, according to scientists who studied the creature. Researchers said on Thursday that the red brittle star, called Ophiocoma wendtii, joins a species of sea urchin as the only creatures known to be able to see without having eyes, an ability called extraocular vision. The red brittle star possesses this exotic capability thanks to light-sensing cells, called photoreceptors, covering its body and pigment cells, called chromatophores, that move during the day to facilitate the animal's dramatic colour change from a deep reddish-brown in daytime to a striped beige at night. Brittle stars, with five radiating arms extending from a central disk, are related to starfish (also called sea stars), sea cucumbers, sea urchins and others in a group of marine invertebrates called echinoderms. They have a nervous system but no brain. The red brittle star — which measures up to about 35 centimetres (14 inches) from arm tip to arm tip — lives in bright and complex habitats, with high predation threats from reef fish. It stays hidden during daytime — making the ability to spot a safe place to hide critical — and comes out at night to feed on detritus. Its photoreceptors are surrounded during daytime by chromatophores that narrow the field of the light being detected, making each photoreceptor like the pixel of a computer image that, when combined with other pixels, makes a whole image. The visual system does not work at night, when the chromatophores contract. "If our conclusions about the chromatophores are correct, this is a beautiful example of innovation in evolution," said Lauren Sumner-Rooney, a research fellow at Oxford University Museum of Natural History, who led the study published in the journal Current Biology. © 2020 CBC/Radio-Canada.

Keyword: Vision; Evolution
Link ID: 26929 - Posted: 01.04.2020

By Sharon Begley @sxbegle The filmgoers didn’t flinch at the scene of the dapper man planting a time bomb in the trunk of the convertible, or tense up as the unsuspecting driver and his beautiful blonde companion drove slowly through the town teeming with pedestrians, or jump out of their seats when the bomb exploded in fiery carnage. And they sure as heck weren’t wowed by the technical artistry of this famous opening shot of Orson Welles’ 1958 noir masterpiece, “Touch of Evil,” a single three-minute take that ratchets up the suspense to 11 on a scale of 1 to 10. In fairness, lab mice aren’t cineastes. But where the rodents fell short as film critics they more than delivered as portals into the brain. As the mice watched the film clip, scientists eavesdropped on each one’s visual cortex. By the end of the study, the textbook understanding of how the brain “sees” had been as badly damaged as the “Touch of Evil” convertible, scientists reported on Monday. The new insights into the workings of the visual cortex, they said, could improve technologies as diverse as self-driving cars and brain prostheses to let the blind see. “Neuroscience lets us make better object recognition systems” for, say, self-driving cars and artificial intelligence-based diagnostics, said Joel Zylberberg of York University, an expert on machine learning and neuroscience who was not involved in the new research. “But computer vision has been hampered by an insufficient understanding of visual processing in the brain.” The “unprecedented” findings in the new study, he said, promise to change that. The textbook understanding of how the brain sees, starting with streams of photons landing on the retina, reflects research from the 1960s that won two of its pioneers a Nobel prize in medicine in 1981. It basically holds that neurons in the primary visual cortex, where the signals go first, respond to edges: vertical edges, horizontal edges, and every edge orientation in between, moving and static. We see a laptop screen because of how its edges abut what’s behind it, sidewalks because of where their edges touch the curb’s. Higher-order brain systems take these rudimentary perceptions and process them into the perception of a scene or object. © 2019 STAT
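
The "textbook understanding" the article describes is usually modeled with banks of oriented Gabor filters, in which each simulated primary-visual-cortex neuron responds best to an edge at one particular orientation. The snippet below is a generic sketch of that classical edge-detector model (it is unrelated to the new study's recordings); the filter size and tuning parameters are arbitrary.

```python
# Classic "V1 as oriented edge detectors" sketch: a bank of Gabor filters,
# one per preferred orientation, convolved with an image. Purely illustrative
# of the textbook model mentioned in the article, not the study's analysis.
import numpy as np
from scipy.signal import convolve2d

def gabor(size=21, wavelength=8.0, sigma=4.0, theta=0.0):
    """Return a size x size Gabor filter tuned to orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength)
    return envelope * carrier

# Toy "image": a vertical edge (dark left half, bright right half).
image = np.zeros((64, 64))
image[:, 32:] = 1.0

for theta_deg in (0, 45, 90, 135):
    filt = gabor(theta=np.deg2rad(theta_deg))
    response = convolve2d(image, filt, mode="same")
    print(f"{theta_deg:3d} deg filter, peak response: {np.abs(response).max():.2f}")
```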

Keyword: Vision
Link ID: 26910 - Posted: 12.21.2019

By Susana Martinez-Conde The many evils of social media notwithstanding, millions of users agree that some of its most delightful aspects include viral illusions and cute cat videos. The potential for synergy was vast in retrospect—but only realized in 2013, when Rasmus Bååth, a cognitive scientist from Lund University in Sweden, blended both elements in a YouTube video of his kitten attacking a printed version of Akiyoshi Kitaoka's famous "Rotating Snakes" illusion. The clip, which has been viewed more than 6 million times as of this writing, led to subsequent empirical research and an internet survey of cat owners, where 29% of respondents answered that their pets reacted to the Rotating Snakes. The results, published in the journal Psychology in 2014, indicated—though not conclusively—that cats experience illusory motion when they look at the Rotating Snakes pattern, much as most humans do. Now, a team of researchers from the University of Padova in Italy, Queen Mary University of London in the UK, and the Parco Natura Viva—Garda Zoological Park in Bussolengo, Italy, has collected additional evidence that cats—in this case, big cats—find the Rotating Snakes Illusion fascinating. Intrigued by the earlier study on house cats, Christian Agrillo of the University of Padova and his collaborators set out to determine whether lions at Parco Natura Viva were similarly susceptible to motion illusions, as well as explore the possibility that such patterns might serve as a source of visual enrichment for zoo animals. Their findings were published last month in Frontiers in Psychology. © 2019 Scientific American

Keyword: Vision; Evolution
Link ID: 26826 - Posted: 11.18.2019

By Erica Tennenhouse Live in the urban jungle long enough, and you might start to see things—in particular humanmade objects like cars and furniture. That's what researchers found when they melded photos of artificial items with images of animals and asked 20 volunteers what they saw. The people, all of whom lived in cities, overwhelmingly noticed the manufactured objects whereas the animals faded into the background. To find out whether built environments can alter people's perception, the researchers gathered hundreds of photos of animals and artificial objects such as bicycles, laptops, or benches. Then, they superimposed them to create hybrid images—like a horse combined with a table or a rhinoceros combined with a car. As volunteers watched the hybrids flash by on a screen, they categorized each as a small animal, a big animal, a small humanmade object, or a big humanmade object. Overall, volunteers showed a clear bias toward the humanmade objects, especially when they were big, the researchers report today in the Proceedings of the Royal Society B. The bias itself was a measure of how much the researchers had to visually "amp up" an image before participants saw it instead of its partner image. That bias suggests people's perceptions are fundamentally altered by their environments, the researchers say. Humans often rely on past experiences to process new information—the classic example is mistaking a snake for a garden hose. But in this case, living in industrialized nations—where you are exposed to fewer "natural" objects—could change the way you view the world. © 2019 American Association for the Advancement of Science

Keyword: Vision; Attention
Link ID: 26793 - Posted: 11.06.2019

By Kelly Servick CHICAGO, ILLINOIS—In 2014, U.S. regulators approved a futuristic treatment for blindness. The device, called Argus II, sends signals from a glasses-mounted camera to a roughly 3-by-5-millimeter grid of electrodes at the back of the eye. Its job: Replace signals from light-sensing cells lost in the genetic condition retinitis pigmentosa. The implant's maker, Second Sight, estimates that about 350 people in the world now use it. Argus II offers a relatively crude form of artificial vision; users see diffuse spots of light called phosphenes. "None of the patients gave up their white cane or guide dog," says Daniel Palanker, a physicist who works on visual prostheses at Stanford University in Palo Alto, California. "It's a very low bar." But it was a start. He and others are now aiming to raise the bar with more precise ways of stimulating cells in the eye or brain. At the annual meeting of the Society for Neuroscience here last week, scientists shared progress from several such efforts. Some have already advanced to human trials—"a real, final test," Palanker says. "It's exciting times." Several common disorders steal vision by destroying photoreceptors, the first cells in a relay of information from the eye to the brain. The other players in the relay often remain intact: the so-called bipolar cells, which receive photoreceptors' signals; the retinal ganglion cells, which form the optic nerve and carry those signals to the brain; and the multilayered visual cortex at the back of the brain, which organizes the information into meaningful sight. Because adjacent points in space project onto adjacent points on the retina, and eventually activate neighboring points in an early processing area of the visual cortex, a visual scene can be mapped onto a spatial pattern of signals. But this spatial mapping gets more complex along the relay, so some researchers aim to activate cells as close to the start as possible. © 2019 American Association for the Advancement of Science

Keyword: Vision; Robotics
Link ID: 26779 - Posted: 11.01.2019

By Stephen L. Macknik Most of us look at a bird and see its avian shape in perfect alignment with the colors of its beautiful plumage. How could it be any other way? The shape and color are derived from the same object and so the brain must process shape and color together as a unified percept. Right? Wrong. The brain processes forms and color in separate neural circuits, but because these brain regions communicate with each other, our perception appears unified. To understand how this all works, let’s do an experiment on ourselves to separate (and then recombine in our brains) the colors and shapes from an image. We will use an illusion called the Color Assimilation Grid, developed by Øyvind Kolås, a digital media artist and software developer. First, let’s apply a screen to the birds (above) so that we can sample their colors but get rid of most of their shape information. We simply take the original image and blur it (to break down the shape information, not shown) and then multiply the resultant pixels in a step-by-step, pixel-by-pixel fashion with a grid screen of the same size as the original image. In the screen, white pixels equal 1 and the gray regions equal zero, so the result is a blurry colored plaid sample of the birds’ colors. Now that we have diminished the shape information and sampled the colors, we need to do the opposite: sample the shapes after diminishing the color information, so that we can later mix the two to see how shape and color assimilate in the brain. To create the shape-only image we first turn the original image to grayscale (above) and then we apply the inverse of the screen we used in the color sampling. The result is a grayscale image of the birds with the shape information preserved, superimposed with a tiny grid of empty spaces where we can later add color information without altering the rest of the image. © 2019 Scientific American
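
Kolås's recipe, as described above, translates almost directly into a few lines of image arithmetic: blur the original to keep its colors but not its shapes, sample the blur through a thin grid, and lay that colored grid over a grayscale copy that keeps the shapes. The NumPy/Pillow sketch below follows that description; the input file name, grid spacing, line width, and blur radius are illustrative guesses rather than Kolås's actual parameters.

```python
# Sketch of the Color Assimilation Grid recipe described above: blur the
# original to keep color but discard shape, sample it through a grid of thin
# lines, and overlay that colored grid on a grayscale copy that keeps shape.
# File name, grid spacing, line width, and blur radius are illustrative guesses.
import numpy as np
from PIL import Image, ImageFilter

original = Image.open("birds.jpg").convert("RGB")   # any color photo will do
w, h = original.size

# Grid screen: 1.0 on thin grid lines, 0.0 elsewhere.
spacing, line_width = 12, 2
yy, xx = np.mgrid[0:h, 0:w]
screen = (((xx % spacing) < line_width) | ((yy % spacing) < line_width)).astype(float)

# Color sample: blur (destroys shape), then keep only the grid lines.
blurred = np.asarray(original.filter(ImageFilter.GaussianBlur(radius=10)), float)
color_plaid = blurred * screen[..., None]

# Shape sample: grayscale image with the grid positions emptied out.
gray = np.asarray(original.convert("L"), float)
shape_only = np.repeat(gray[..., None], 3, axis=2) * (1.0 - screen[..., None])

# Recombine: grayscale shape plus colored grid; the brain "fills in" the color.
illusion = np.clip(color_plaid + shape_only, 0, 255).astype(np.uint8)
Image.fromarray(illusion).save("color_assimilation_grid.png")
```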

Keyword: Vision
Link ID: 26760 - Posted: 10.28.2019

By Tanya Lewis The lenses in human eyes lose some ability to focus as they age. Monovision—a popular fix for this issue—involves prescription contacts (or glasses) that focus one eye for near-vision tasks such as reading and the other for far-vision tasks such as driving. About 10 million people in the U.S. currently use this form of correction, but a new study finds it may cause a potentially dangerous optical illusion. Nearly a century ago German physicist Carl Pulfrich described a visual phenomenon now known as the Pulfrich effect: When one eye sees either a darker or a lower-contrast image than the other, an object moving side to side (such as a pendulum) appears to travel in a three-dimensional arc. This is because the brain processes the darker or lower-contrast image more slowly than the lighter or higher-contrast one, creating a lag the brain perceives as 3-D motion. Johannes Burge, a psychologist at the University of Pennsylvania, and his colleagues recently found that monovision can cause a reverse Pulfrich effect. They had participants look through a device showing a different image to each eye—one blurry and one in focus—of an object moving side to side. The researchers found that viewers processed the blurrier image a couple of milliseconds faster than the sharper one, making the object seem to arc in front of the display screen. It appeared closer to the viewer as it moved to the right (if the left eye saw the blurry image) or to the left (if the right eye did). "That does not sound like a very big deal," Burge says, but it is enough for a driver at an intersection to misjudge the location of a moving cyclist by about the width of a narrow street lane. © 2019 Scientific American
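
The size of the reverse-Pulfrich error follows from simple stereo geometry: an interocular delay makes a laterally moving target appear displaced in the slower eye's image by roughly speed times delay, and small-angle stereo geometry turns that offset into a depth error of about the offset multiplied by viewing distance over interocular separation. The numbers below are a rough back-of-the-envelope check of that logic, not figures from the study.

```python
# Back-of-the-envelope check of the (reverse) Pulfrich geometry described
# above. All numbers are illustrative assumptions, not figures from the study.
viewing_distance = 10.0   # m, e.g. a cyclist crossing in front of a stopped driver
interocular = 0.065       # m, typical separation between human eyes
cyclist_speed = 5.0       # m/s of lateral (side-to-side) motion
delay = 0.003             # s, a "couple of milliseconds" of interocular lag

# The slower eye sees the cyclist where it was `delay` seconds ago, so the two
# eyes' images are offset laterally by:
lateral_offset = cyclist_speed * delay                       # ~1.5 cm at the cyclist

# Angular disparity (small-angle), then depth error from stereo geometry:
disparity = lateral_offset / viewing_distance                # radians
depth_error = disparity * viewing_distance**2 / interocular  # = lateral_offset * distance / interocular

print(f"lateral offset at the cyclist: {lateral_offset * 100:.1f} cm")
print(f"apparent depth error: {depth_error:.1f} m")          # roughly a narrow lane width
```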

Keyword: Vision
Link ID: 26758 - Posted: 10.28.2019

By Evan Cooper Early one summer morning, I was awakened by a hammering on the inside of my skull. It felt as if a prisoner were trying to Shawshank it out through my left eye socket. When I sat up in bed to reach for the Advil on my nightstand, I became panic-stricken. Both eyes were open, but I could see through only one. I’d been known to leap to worst-case scenarios at the first sign of any physical discomfort. (Pain in my abdomen? Appendicitis! Headache? Definitely a brain tumor.) But this was different: I wasn’t paranoid, I was blind in my left eye. At the ophthalmologist’s office later that morning, I tried not to panic. I was nearly 20 years old, midway through my studies at U.C.L.A. Everything is fine, I told myself. You’re FINE. Like a mantra, I repeated this over and over, determined that, for once, I was not going to catastrophize. I briefly thought I might be imagining it all, conjuring up some drama for attention. Once when I was 11, I called my dad, who lived 3,000 miles away in Los Angeles, and begged him to send an ambulance to my house in Cleveland because I was certain that I had a collapsed lung and my mom was refusing to take me to the hospital. But the doctor I saw told me with some urgency that I needed to see a specialist, immediately. I overheard his assistant quietly consider potential diagnoses: “multiple sclerosis, lupus, another autoimmune disease?” I closed my eyes and imagined myself on the sort of carnival ride where you stick to the wall as you spin round and round until the floor falls away. Beyond disbelief and dread, however, I also felt a familiar swell of self-loathing. Of course I have an incurable, degenerative disease, I reprimanded myself. This is my fault. After all, up until this point, I had lived as if an internal army of drill sergeants were commanding me to eat less, exercise harder, study more, stand out, be The Best. No achievement was ever good enough. And what is an autoimmune disease if not the Self waging a war upon the Self? © 2019 The New York Times Company

Keyword: Vision
Link ID: 26680 - Posted: 10.08.2019

Joe Palca It's hard for doctors to do a thorough eye exam on infants. They tend to wiggle around — the babies, that is, not the doctors. But a new smartphone app takes advantage of parents' fondness for snapping pictures of their children to look for signs that a child might be developing a serious eye disease. The app is the culmination of one father's five-year quest to find a way to catch the earliest signs of eye disease and prevent devastating loss of vision. Five years ago, NPR reported the story of Bryan Shaw's son Noah, and how he lost an eye to cancer. Doctors diagnosed Noah Shaw's retinoblastoma when he was 4 months old. To make the diagnosis, the doctors shined a light into Noah's eye, and got a pale reflection from the back of the eyeball, an indication that there were tumors there. Noah's father Bryan is a scientist. He wondered if he could see that same pale reflection in flash pictures his wife was always taking of his baby son. Sure enough, he saw the reflection or glow, which doctors call "white eye," in a picture taken right after Noah was born. "We had white eye showing up in pictures at 12 days old," Shaw said at the time, months before Noah's ultimate diagnosis. Shaw is a chemist, not an eye doctor nor a computer scientist, but he decided to create software that could scan photos for signs of this reflection. © 2019 npr
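
The article does not say how the software works internally, but the core idea (scan ordinary flash photos for an unusually pale pupil) can be sketched with OpenCV's stock eye detector. The toy example below is only an illustration of the concept, not the actual app's algorithm; the input file name and the brightness and saturation thresholds are arbitrary guesses, and nothing here is a diagnostic criterion.

```python
# Toy sketch of the "scan photos for white-eye" idea: find eye regions with
# OpenCV's stock Haar cascade and flag any whose central patch is unusually
# bright and low-saturation (a pale reflection). Not the actual app's method;
# thresholds are arbitrary and this is NOT a diagnostic tool.
import cv2

def flag_pale_pupils(path, brightness_thresh=180, saturation_thresh=60):
    image = cv2.imread(path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

    eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
    flagged = []
    for (x, y, w, h) in eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        # Look at the central third of the detected eye box, roughly the pupil area.
        cx, cy, cw, ch = x + w // 3, y + h // 3, w // 3, h // 3
        patch_v = hsv[cy:cy + ch, cx:cx + cw, 2]   # brightness channel
        patch_s = hsv[cy:cy + ch, cx:cx + cw, 1]   # saturation channel
        if patch_v.mean() > brightness_thresh and patch_s.mean() < saturation_thresh:
            flagged.append((x, y, w, h))
    return flagged

print(flag_pale_pupils("family_photo.jpg"))   # hypothetical input file
```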

Keyword: Vision
Link ID: 26679 - Posted: 10.08.2019

By James Gorman When the moon hits your eye like a big pizza pie, it may not be amore at all, but a ghostly white barn owl about to kill and eat you. If you're a vole, that is. Voles are a favorite meal for barn owls, which come in two shades, reddish brown and white. When the moon is new, both have equal success hunting for their young, snagging about five voles in a night. But when the moon is full and bright, the reddish owls do poorly, dropping to three a night. Barn owls with white faces and breasts do as well as ever, however, even though they should be more easily spotted than their reddish relatives when the lunar light reflects off their feathers. They may well be more easily seen, but it doesn't matter because of the behavior of their prey. Voles have two responses to owl sightings. They freeze, and hope the owl doesn't see them. Or they run. But when they see a white owl in bright moonlight, the terrified rodents act like deer caught in headlights and freeze up to five seconds longer than they do for a reddish brown barn owl. This is not what Luis M. San-Jose and Alexandre Roulin, both of the University of Lausanne in Switzerland, expected. They and other scientists reported in Nature Ecology and Evolution on Monday that they had expected the white owls to do worse. "The study is a fascinating new look at an old question: How does moonlight affect the plumage of nocturnal predators?" said Richard Prum, an evolutionary biologist and ornithologist at Yale University, who has studied how coloration evolved in birds. He added that the authors used "a remarkable array of technologies and methods" to investigate the effect of the variation. Dr. San-Jose, who researches animal coloration, said that there has been little study of color in nocturnal animals in the past, but that has begun to change, producing many surprises in recent years. "Many nocturnal species actually see color at night," he said. Voles probably don't. For them, the owls probably appear in shades of gray. © 2019 The New York Times Company

Keyword: Vision; Biological Rhythms
Link ID: 26567 - Posted: 09.03.2019

David Cyranoski A Japanese woman in her forties has become the first person in the world to have her cornea repaired using reprogrammed stem cells. At a press conference on 29 August, ophthalmologist Kohji Nishida from Osaka University, Japan, said the woman has a disease in which the stem cells that repair the cornea, a transparent layer that covers and protects the eye, are lost. The condition makes vision blurry and can lead to blindness. To treat the woman, Nishida says his team created sheets of corneal cells from induced pluripotent stem (iPS) cells. These are made by reprogramming adult skin cells from a donor into an embryonic-like state from which they can transform into other cell types, such as corneal cells. Nishida said that the woman's cornea remained clear and her vision had improved since the transplant a month ago. Currently people with damaged or diseased corneas are generally treated using tissue from donors who have died, but there is a long waiting list for such tissue in Japan. Japan has been ahead of the curve in approving the clinical use of iPS cells, which were discovered by stem-cell biologist Shinya Yamanaka at Kyoto University, who won a Nobel prize for the work. Japanese physicians have also used iPS cells to treat spinal cord injury, Parkinson's disease and another eye disease. © 2019 Springer Nature Publishing AG

Keyword: Vision; Stem Cells
Link ID: 26564 - Posted: 09.03.2019

By Michelle Roberts Health editor, BBC News online Experts are warning about the risks of extreme fussy eating after a teenager who lived on a diet of chips and crisps developed permanent sight loss. Eye doctors in Bristol cared for the 17-year-old after his vision had deteriorated to the point of blindness. Since leaving primary school, the teen had been eating only French fries, Pringles and white bread, as well as an occasional slice of ham or a sausage. Tests revealed he had severe vitamin deficiencies and malnutrition damage. The adolescent, who cannot be named, had seen his GP at the age of 14 because he had been feeling tired and unwell. At that time he was diagnosed with vitamin B12 deficiency and put on supplements, but he did not stick with the treatment or improve his poor diet. Three years later, he was taken to the Bristol Eye Hospital because of progressive sight loss, the journal Annals of Internal Medicine reports. Dr Denize Atan, who treated him at the hospital, said: "His diet was essentially a portion of chips from the local fish and chip shop every day. He also used to snack on crisps - Pringles - and sometimes slices of white bread and occasional slices of ham, and not really any fruit and vegetables. He explained this as an aversion to certain textures of food that he really could not tolerate, and so chips and crisps were really the only types of food that he wanted and felt that he could eat." Dr Atan and her colleagues rechecked the young man's vitamin levels and found he was low in B12 as well as some other important vitamins and minerals - copper, selenium and vitamin D. He was not over or underweight, but was severely malnourished from his eating disorder, avoidant/restrictive food intake disorder. "He had lost minerals from his bone, which was really quite shocking for a boy of his age." He was put on vitamin supplements and referred to a dietitian and a specialist mental health team. In terms of his sight loss, he met the criteria for being registered blind. "He had blind spots right in the middle of his vision," said Dr Atan. "That means he can't drive and would find it really difficult to read, watch TV or discern faces." © 2019 BBC.

Keyword: Vision
Link ID: 26563 - Posted: 09.03.2019

By Stephen L. Macknik In normal vision, light falls on the retinas inside the eyes, and is immediately transduced into electrochemical signals before being uploaded to the brain through the optic nerves. So you do not see light itself, but the brain's interpretation of electrochemical signals in the visual parts of the brain. It follows that, if your eyes do not work, but your brain is stimulated just so, your visual neurons will activate (and you will be able to see) just the same as if your eyes were in perfect condition. Sounds easy, but can we do that? Building on decades of research in visual neuroscience, my lab, in collaboration with Susana Martinez-Conde's, has now conducted some of the studies that validate this idea, completing some of the most important preliminary steps towards a new kind of visual prosthetic. Francis Collins, the Director of the National Institutes of Health, has just posted a blog that highlights our approach. He took notice of our work when we first presented it at this year's meeting for the Principal Investigators of the BRAIN Initiative—the NIH-led government funding initiative meant to spur research on topics like brain implants. The BRAIN Initiative funds work through several agencies, including the NIH and the National Science Foundation, which kindly funded the grant driving our research thus far. Our starting premise is that vision is fundamentally a thumbnail sketch. If 99.9% of your retina works fine but the central 1/1000th of your visual field is broken, you will be legally blind. That central 0.1% of your visual field is about the same size as your thumbnail held up at arm's length. Because that central 0.1% of the retina is the visual sweet spot, it is the place where the visual magic happens. In fact, the main job of much of the remaining 99.9% of the retina is to help you detect where to move your eyes next. This means that we need to restore central vision in the blind, or we are not really restoring functional vision at all. © 2019 Scientific American

Keyword: Vision; Robotics
Link ID: 26553 - Posted: 08.29.2019