Chapter 10. Vision: From Eye to Brain
By Lily Hay Newman

When I was growing up, I had a lazy eye. I had to wear a patch over my stronger eye for many years so that good-for-nothing, freeloading, lazy eye could learn some responsibility and toughen up. Wearing a patch was really lousy, though, because people would ask me about it all the time and say things like, "What's wrong with you?" Always fun to hear.

I would have much preferred to treat my condition, which is also called amblyopia, by playing video games. Who wouldn't? And it seems that dream may become a possibility. On Tuesday, developer Ubisoft announced Dig Rush, a game that uses stereoscopic glasses and blue and red figures in varying contrasts to attempt to treat amblyopia. Working in collaboration with McGill University and the eye-treatment startup Amblyotech, Ubisoft created a world where controlling a mole character to mine precious metals is really training patients' brains to coordinate their eyes.

When patients wear a patch, they may force their lazy eye to toughen up, but they aren't doing anything to teach their eyes how to work together. This lack of coordination, called strabismus, is another important factor that the game makers hope can be addressed better by Dig Rush than by "patching" alone. Amblyotech CEO Joseph Koziak said in a statement, “[This] electronic therapy has been tested clinically to significantly increase the visual acuity of both children and adults who suffer from this condition without the use of an eye patch.” One advantage of Dig Rush, he noted, is that it's easier to measure compliance with video games.
Link ID: 20667 - Posted: 03.09.2015
by Sarah Zielinski

Before they grow wings and fly, young praying mantises have to rely on leaps to move around. But these little mantises are really good at jumping. Unlike most insects, which tend to spin uncontrollably and sometimes crash land, juvenile praying mantises make precision leaps with perfect landings. But how do they do that?

To find out, Malcolm Burrows of the University of Cambridge in England and colleagues filmed 58 juvenile Stagmomantis theophila praying mantises making 381 targeted jumps. The results of their study appear March 5 in Current Biology. For each test leap, the researchers put a young insect on a ledge with a black rod placed one to two body lengths away. A jump to the rod was fast — only 80 milliseconds, faster than a blink of an eye — but high-speed video captured every move at 1,000 frames per second.

That let the scientists see what was happening: First, the insect shook its head from side to side, scanning its path. Then it rocked backwards and curled up its abdomen, readying itself to take a leap. With a push of its legs, the mantis was off. In the air, it rotated its abdomen, hind legs and front legs, but its body stayed level until it hit the target and landed on all four limbs.

“The abdomen, front legs and hind legs performed a series of clockwise and anticlockwise rotations during which they exchanged angular momentum at different times and in different combinations,” the researchers write. “The net result … was that the trunk of the mantis spun by 50° relative to the horizontal with a near-constant angular momentum, aligning itself perfectly for landing with the front and hind legs ready to grasp the target.”

© Society for Science & the Public 2000 - 2015
Link ID: 20663 - Posted: 03.07.2015
By Jonathan Webb, Science reporter, BBC News, San Antonio

Physicists have pinned down precisely how pipe-shaped cells in our retina filter the incoming colours. These cells, which sit in front of the ones that actually sense light, play a major role in our colour vision that was only recently confirmed. They funnel crucial red and green light into cone cells, leaving blue to spill over and be sensed by rod cells - which are responsible for our night vision. Key to this process, researchers now say, is the exact shape of the pipes.

The long, thin cells are known as "Müller glia" and they were originally thought to play more of a supporting role in the retina. They clear debris, store energy and generally keep the conditions right for other cells - like the rods and cones behind them - to turn light into electrical signals for the brain. But a study published last year confirmed the idea, proposed in earlier simulations, that Müller cells also function rather like optical fibres.

[Image: 3D scans revealed the pipe-like structure of the Müller cells (in red) sitting above the photoreceptor cells (in blue)]

And more than just piping light to the back of the retina, where the rods and cones sit, they selectively send red and green light - the most important for human colour vision - to the cone cells, which handle colour. Meanwhile, they leave 85% of blue light to spill over and reach nearby rod cells, which specialise in those wavelengths and give us the mostly black-and-white vision that gets us by in dim conditions. © 2015 BBC.
By Felicity Muth

Visual illusions are fun: we know with our rational mind that, for example, these lines are parallel to each other, yet they don’t appear that way. Similarly, I could swear that squares A and B are different colours. But they are not. This becomes clearer when a connecting block is drawn between the two squares (see the image below).

Illusions aren’t just fun tricks for us to play with; they can also tell us something about our minds. Things in the world look to us a certain way, but that doesn’t mean that they are that way in reality. Rather, our brain represents the world to us in a particular way, one that has been selected over evolutionary time. Having such a system means that, for example, we can see some animals running but not others; we couldn’t see a mouse moving from a mile away like a hawk could. This is because there haven’t been the evolutionary selective pressures on our visual system to be able to do such a thing, whereas there have on the hawk’s. We can also see a range of wavelengths of light, represented as particular colours in our brain, while not being able to see other wavelengths (that, for example, bees and birds can see).

Having a system limited by what evolution has given us means that there are many things we are essentially blind to (and wouldn’t know about if it weren’t for technology). It also means that sometimes our brain misrepresents physical properties of the external world in a way that can be confusing once our rational mind realises it.

Of course, all animals have their own representation of the world. How a dog visually perceives the world will be different to how we perceive it. But how can we know how other animals perceive the world? What is their reality? One way we can try to get at this is through visual illusions. © 2015 Scientific American
By JONATHAN MAHLER

The mother of the bride wore white and gold. Or was it blue and black? From a photograph of the dress the bride posted online, there was broad disagreement. A few days after the wedding last weekend on the Scottish island of Colonsay, a member of the wedding band was so frustrated by the lack of consensus that she posted a picture of the dress on Tumblr and asked her followers for feedback. “I was just looking for an answer because it was messing with my head,” said Caitlin McNeill, a 21-year-old singer and guitarist.

Within a half-hour, her post attracted some 500 likes and shares. The photo soon migrated to Buzzfeed and Facebook and Twitter, setting off a social media conflagration that few were able to resist. As the debate caught fire across the Internet — even scientists could not agree on what was causing the discrepancy — media companies rushed to get articles online. Less than a half-hour after Ms. McNeill’s original Tumblr post, Buzzfeed posted a poll: “What Colors Are This Dress?” As of Friday afternoon, it had been viewed more than 28 million times. (White and gold was winning handily.) At its peak, more than 670,000 people were simultaneously viewing Buzzfeed’s post. Between that and the rest of Buzzfeed’s blanket coverage of the dress Thursday night, the site easily smashed its previous records for traffic. So did Tumblr.

Everyone, it seems, had an opinion. And everyone was convinced that he, or she, was right. “I don’t understand this odd dress debate and I feel like it’s a trick somehow,” Taylor Swift wrote on Twitter. “PS it’s OBVIOUSLY BLUE AND BLACK.” “IT’S A BLUE AND BLACK DRESS!” wrote Mindy Kaling. “ARE YOU KIDDING ME,” she continued, including an unprintable modifier for emphasis. © 2015 The New York Times Company
Link ID: 20635 - Posted: 02.28.2015
By Pascal Wallisch

If you are just encountering The Dress for the first time, you might first want to click here to see what all the fuss was about.

The brain lives in a bony shell. The completely light-tight nature of the skull renders this home a place of complete darkness, so the brain relies on the eyes to supply an image of the outside world. But there are many processing steps between the translation of light energy into electrical impulses that happens in the eye and the neural activity that corresponds to a conscious perception of the outside world. In other words, the brain is playing a game of telephone and — contrary to popular belief — our perception corresponds to the brain’s best guess of what is going on in the outside world, not necessarily to the way things actually are. This has been recognized for at least 150 years, since the time of Hermann von Helmholtz. This week, it was recognized by masses of people on the Internet, who have been debating furiously over what should be a simple question: What color is this dress?

Many parts of the brain contribute to any given perception, and it should not be surprising that different people can reconstruct the outside world in different ways. This is true for many perceptual qualities, including form and motion. While this guessing game is going on all the time, it is possible to demonstrate it clearly by generating impoverished stimulus displays that are consistent with different, mutually exclusive interpretations. That means the brain will not necessarily commit to one interpretation, but will switch back and forth. These are known as ambiguous or bi-stable stimuli, and they illustrate the point that the brain is ultimately only guessing when perceiving the world. It usually just has more information to disambiguate the interpretation. © 2014 The Slate Group LLC. All rights reserved.
Link ID: 20634 - Posted: 02.28.2015
Carmen Fishwick

It’s not every day that fashion and science come together to polarise the world. Tumblr blogger Caitlin posted a photograph of what is now known as #TheDress — a layered lace dress and jacket that was causing much distress among her friends. The distress spread rapidly across social media, with Taylor Swift admitting she was “confused and scared”. The internet is now divided into two camps: the white and gold, and the blue and black — with each thinking the other is completely wrong.

But Ron Chrisley, director of the Centre for Research in Cognitive Science at the University of Sussex, believes that the problem mainly lies in the fact that everyone has forgotten we are dealing with a colour illusion. Chrisley said: “The first step in reaching a truce in the dress war is to construct a demonstration that can show to the white-and-gold crowd how the very same dress can also look blue and black under different conditions.” The image below, tweeted by @namin3485, demonstrates that even though the right-hand side of each image is the same, in the context of the two different left halves, the right is interpreted as being either white and gold, or blue and black.

So does this mean people who are less self-confident are more likely to be able to see both, at least eventually? Chrisley said: “My guess is it’s not to do with self-confidence. It’s a perceptual issue. I could imagine someone that’s open minded could still see it only one way. This is below the level of us trying to understand other people’s views. It’s more physiological than that.” © 2015 Guardian News and Media Limited
Link ID: 20633 - Posted: 02.28.2015
By Adam Rogers

The fact that a single image could polarize the entire Internet into two aggressive camps is, let’s face it, just another Thursday. But for the past half-day, people across social media have been arguing about whether a picture depicts a perfectly nice bodycon dress as blue with black lace fringe or white with gold lace fringe. And neither side will budge. This fight is about more than just social media — it’s about primal biology and the way human eyes and brains have evolved to see color in a sunlit world.

Light enters the eye through the lens — different wavelengths corresponding to different colors. The light hits the retina in the back of the eye, where pigments fire up neural connections to the visual cortex, the part of the brain that processes those signals into an image. Critically, though, that first burst of light is made of whatever wavelengths are illuminating the world, reflecting off whatever you’re looking at. Without you having to worry about it, your brain figures out what color light is bouncing off the thing your eyes are looking at, and essentially subtracts that color from the “real” color of the object.

“Our visual system is supposed to throw away information about the illuminant and extract information about the actual reflectance,” says Jay Neitz, a neuroscientist at the University of Washington. “But I’ve studied individual differences in color vision for 30 years, and this is one of the biggest individual differences I’ve ever seen.” (Neitz sees white-and-gold.)

Usually that system works just fine. This image, though, hits some kind of perceptual boundary. That might be because of how people are wired. Human beings evolved to see in daylight, but daylight changes color. WIRED.com © 2015 Condé Nast
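The "subtracting the illuminant" computation Neitz describes can be sketched as a toy von Kries-style white-balance step: divide each colour channel by an estimate of the illuminant. This is only a minimal illustration of the idea, not the brain's actual algorithm; the pixel and illuminant values below are invented for the example, not measured from the photo of the dress.

```python
# Toy illustration of discounting the illuminant (von Kries-style scaling).
# All RGB values here are made-up numbers chosen to show the effect.

def white_balance(pixel, illuminant):
    """Scale each channel so the assumed illuminant maps to neutral grey."""
    return tuple(p / i for p, i in zip(pixel, illuminant))

# The same measured pixel (a muted blue-grey)...
pixel = (0.45, 0.43, 0.50)

# ...interpreted under two different assumed illuminants:
bluish_daylight = (0.80, 0.85, 1.00)   # viewer assumes cool, shadowed light
warm_indoor     = (1.00, 0.90, 0.75)   # viewer assumes warm, artificial light

seen_if_bluish = white_balance(pixel, bluish_daylight)  # comes out warmer: toward "white and gold"
seen_if_warm   = white_balance(pixel, warm_indoor)      # comes out cooler: toward "blue and black"

print(seen_if_bluish)
print(seen_if_warm)
```

The point of the sketch: one and the same pixel lands on opposite sides of neutral depending on which illuminant the visual system assumes, which is the mechanism the article proposes for the disagreement.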
By Esther Landhuis

Whereas cholesterol levels measured in a routine blood test can serve as a red flag for heart disease, there’s no simple screen for impending Alzheimer’s. A new Silicon Valley health start-up hopes to change that.

A half million Americans die of Alzheimer’s disease each year. Most are diagnosed after a detailed medical workup and extensive neurological and psychological tests that gauge mental function and rule out other causes of dementia. Yet things begin going awry some 10 to 15 years before symptoms show. Spinal fluid analyses and positron emission tomography (PET) scans can detect a key warning sign — buildup of amyloid-beta protein in the brain. Studies suggest that adults with high brain amyloid have elevated risk for Alzheimer’s and stand the best chance of benefiting from treatments should they become available.

Getting Alzheimer’s drugs to market requires long and costly clinical studies, which some experts say have failed thus far because experimental drugs were tested too late in the disease process. By the time people show signs of dementia, their brains have lost neurons and no current therapy can revive dead cells. That is why drug trials are looking to recruit seniors with preclinical Alzheimer’s who are on the verge of decline but otherwise look healthy. This poses a tall order. Spinal taps are cumbersome and PET costs $3,000 per scan. “There’s no cheap, fast, noninvasive test that can accurately identify people at risk of Alzheimer’s,” says Brad Dolin, chief technology officer of Neurotrack. The company is developing a computerized visual test that might fit the bill. © 2015 Scientific American
by Jacob Aron

Ever struggled to tell the difference between two shades of paint? When it comes to colour, one person's peach is another's puce, but there are 11 basic colours that we all agree on. Now it seems two more should be in the mix: lilac and turquoise.

In 1969, two researchers looked at 100 languages and found that all had words for black, white, red, green, yellow, blue, brown, purple, pink, orange and grey. These terms pass a number of tests: they refer to easily distinguishable colours, are widely used and are single words. We might quibble over which shade is cream or peach, for example, but everyone knows yellow when they see it. There are exceptions - Russian and Greek speakers have separate words for light and dark blue.

[Image: The chart divided into basic colours (Image: D.Mylonas/L.MacDonald)]

Now Dimitris Mylonas of Queen Mary University of London and Lindsay MacDonald of University College London say the same applies to two more colours, in the case of English-speakers, at least. For the past seven years, they've been running an online test in which people name a range of shades – you can try it for yourself. Results from 330 participants were analysed to pick out basic names. These were ranked in a number of ways, such as how often each colour name came up and whether the name was unique to one shade or common to many. Lilac and turquoise came ninth and tenth overall, beating white, red and orange. The only measure turquoise didn't score highly on was the time it took people to enter an answer, says Mylonas. "Our observers had problems spelling it correctly." © Copyright Reed Business Information Ltd.
Link ID: 20550 - Posted: 02.05.2015
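The ranking the colour-naming study describes can be illustrated with a toy tally: count how often each name is used across all responses, and how concentrated a name's uses are on a single shade. The responses below are invented for illustration; the actual study's data and scoring are more elaborate.

```python
# Toy tally of colour-naming survey responses: (swatch shown, name typed).
# These responses are invented; they are not data from the Mylonas/MacDonald study.
from collections import Counter, defaultdict

responses = [
    ("swatch_01", "turquoise"), ("swatch_01", "turquoise"), ("swatch_01", "blue"),
    ("swatch_02", "lilac"),     ("swatch_02", "lilac"),
    ("swatch_03", "blue"),      ("swatch_04", "blue"),
]

# How often each name came up, across all swatches.
frequency = Counter(name for _, name in responses)

# How many distinct swatches each name was applied to.
swatches_per_name = defaultdict(set)
for swatch, name in responses:
    swatches_per_name[name].add(swatch)

def consensus(name):
    """Fraction of a name's uses that fall on its single most common swatch."""
    per_swatch = Counter(s for s, n in responses if n == name)
    return max(per_swatch.values()) / frequency[name]

for name in frequency:
    print(name, frequency[name], len(swatches_per_name[name]), round(consensus(name), 2))
```

In this tiny dataset "blue" is frequent but spread across three swatches, while "turquoise" and "lilac" are used less often but stick to one shade each, which is the kind of distinction the study's ranking measures are after.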
By Viviane Callier

In the deep sea, where light is dim and blue, animals with bigger eyes see better — but bigger eyes are more conspicuous to predators. In response, the small (10 mm to 17 mm), transparent crustacean Paraphronima gracilis has evolved a unique eye structure.

Researchers collected the animals from 200- to 500-meter-deep waters in California’s Monterey Bay using a remote-operated vehicle. They then characterized the pair of compound eyes, discovering that each one was composed of a single row of 12 distinct red retinas. Reporting online on 15 January in Current Biology, the researchers hypothesize that each retina captures an image that is transmitted to the crustacean’s brain, which integrates the 12 images to increase brightness and contrast sensitivity, adapting to changing light levels. Future work will focus on how images are processed by the neural connections between the retinas and the brain. © 2015 American Association for the Advancement of Science.
By Matthew H. Schneps

Many of the etchings by artist M. C. Escher appeal because they depict scenes that defy logic. His famous “Waterfall” shows a waterwheel powered by a cascade pouring down from a brick flume. Water turns the wheel and is redirected uphill back to the mouth of the flume, where it can once again pour over the wheel in an endless cycle. The drawing shows us an impossible situation that violates nearly every law of physics.

In 2003 a team of psychologists led by Catya von Károlyi of the University of Wisconsin–Eau Claire made a discovery using such images. When the researchers asked people to pick out impossible figures from similarly drawn illustrations, they found that participants with dyslexia were among the fastest at this task.

Dyslexia is often called a learning disability. And it can indeed present learning challenges. Although its effects vary widely, some children with dyslexia read so slowly that it would typically take them months to read the same number of words that their peers read in a day. Therefore, the fact that people with this difficulty were so adept at rapidly picking out the impossible figures puzzled von Károlyi. The researchers had stumbled on a potential upside to dyslexia, one that investigators have just begun to understand.

Scientists had long suspected dyslexia might be linked to creativity, but laboratory evidence for this was rare. In the years to follow, sociologist Julie Logan of Cass Business School in London showed that there is a higher incidence of dyslexia among entrepreneurs than in the general population. Meanwhile cognitive scientist Gadi Geiger of the Massachusetts Institute of Technology found that people with dyslexia could attend to multiple auditory inputs at once. © 2015 Scientific American
By Karen Hopkin

Sometimes it’s hard to see the light. Especially if it lies outside the visible spectrum, like x-rays or ultraviolet radiation. But if you long to see the unseeable, you might be interested to hear that under certain conditions people can catch a glimpse of usually invisible infrared light. That’s according to a study in the Proceedings of the National Academy of Sciences. [Grazyna Palczewska et al., Human infrared vision is triggered by two-photon chromophore isomerization]

Our eyes are sensitive to elementary particles called photons that have sufficient energy to excite light-sensitive receptor proteins in our retinas. But the photons in infrared radiation don’t have enough oomph. We can detect these lower-energy photons using what are sometimes called night-vision goggles or cameras, but the naked eye is usually blind to infrared radiation.

Recently, however, researchers in a laser lab noticed that they sometimes saw flashes of light while working with devices that emitted brief infrared pulses. So they filled a test tube with retinal cells and zapped it with their lasers. When the light pulses rapidly enough, the receptors can get hit with two photons at the same time — which supplies enough energy to excite the receptor. This double dose makes the infrared visible.

One application of the finding is that it could give doctors a new tool to diagnose diseases of the retina, letting them eyeball trouble before it might otherwise be seen.
Link ID: 20458 - Posted: 01.08.2015
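The energy argument behind the two-photon explanation above is simple arithmetic with the photon-energy formula E = hc/λ: two photons at around 1,000 nm arriving together carry roughly the energy of one visible photon at 500 nm. A quick check, using standard physical constants (the round-number wavelengths are illustrative):

```python
# Photon energy E = h*c / wavelength, with standard physical constants.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

def photon_energy_joules(wavelength_nm):
    return h * c / (wavelength_nm * 1e-9)

e_ir      = photon_energy_joules(1000)  # one infrared photon
e_visible = photon_energy_joules(500)   # one green photon, well inside the visible range

# Energy is inversely proportional to wavelength, so two 1000 nm photons
# together match one 500 nm photon.
print(2 * e_ir / e_visible)
```

This is why the effect only appears with pulsed lasers: the pulses must be intense and brief enough that two infrared photons hit the same pigment molecule at essentially the same moment.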
By SINDYA N. BHANOO

That bats use echolocation to navigate and to find food is well known. But some blind people use the technique, too, clicking their tongues and snapping fingers to help identify objects. Now, a study reports that human echolocators can experience illusions, just as sighted individuals do.

Gavin Buckingham, a psychology lecturer at Heriot-Watt University in Scotland, and his colleagues at the University of Western Ontario asked 10 study subjects to pick up strings attached to three boxes of identical weight but different sizes. Overwhelmingly, the sighted individuals succumbed to what is known as the “size-weight illusion”: the bigger boxes felt lighter to them. Blind study subjects who picked up each of the three strings did not experience the illusion. They correctly surmised that the boxes were of equal weight. But blind participants who relied on echolocation to get a sense of the box sizes before picking up the strings fell into the same trap as the sighted subjects and misjudged the weights.

The research, published in the journal Psychological Science, supports other research suggesting that echolocation techniques may stimulate the brain in ways that resemble visual input. “It does mean this is more than a functional tool,” Dr. Buckingham said. Echolocation “doesn’t help them appreciate art or tell the difference between the color red or color blue, but it’s a step in that direction.” © 2015 The New York Times Company
By Stephanie M. Lee

Decorating the house has always been challenging for Sheila Carter. Like other color-blind people, she limits her wardrobe to a few bold hues that can be easily mixed and matched, like blue and black. But a new pair of glasses she recently started wearing, she said, has changed her worldview.

Carter owns high-tech eyewear made by EnChroma, a Berkeley startup that wants to help people with color deficiency see the full spectrum of the rainbow. Carter is among an estimated 32 million Americans who are color-blind, either from birth or as a result of some condition, like head trauma. The condition is most prevalent among people of Northern European descent, affecting 8 percent of men and 0.5 percent of women.

EnChroma makes color-enhancing sunglasses for the vast majority of such people, who have trouble seeing red or green due to a genetic defect. The company has sold more than 1,000 pairs in two years. Last month, it introduced glasses with polycarbonate lenses for children, athletes and prescription- and nonprescription-wearers at prices ranging from $325 to $450. The proprietary lens contains a filter that blocks a portion of the spectrum where the overlap between the responses of the two cone types occurs and restores the separation between them. “It’s essentially taking out that stuff that’s confusing the signal,” said Andy Schmeder, vice president of technology.
Link ID: 20450 - Posted: 01.01.2015
By Stephen L. Macknik and Susana Martinez-Conde

To a neuroscientist, the trouble with cocktail parties is not that we do not love cocktails or parties (many neuroscientists do). Instead what we call “the cocktail party problem” is the mystery of how anyone can have a conversation at a cocktail party at all. Consider a typical scene: You have a dozen or more lubricated and temporarily uninhibited adults telling loud, improbable stories at increasing volumes. Interlocutors guffaw and slap backs. Given the decibel level, it is a minor neural miracle that any one of these revelers can hear and parse one word from any other. The alcohol does not help, but it is not the main source of difficulties. The cocktail party problem is that there is just too much going on at once: How can our brain filter out the noise to focus on the wanted information?

This problem is a central one for perceptual neuroscience — and not just during cocktail parties. The entire world we live in is quite literally too much to take in. Yet the brain does gather all of this information somehow and sorts it in real time, usually seamlessly and correctly. Whereas the physical reality consists of comparable amounts of signal and noise for many of the sounds and sights around you, your perception is that the conversation or object that interests you remains in clear focus.

So how does the brain accomplish this feat? One critical component is that our neural circuits simplify the problem by actively ignoring — suppressing — anything that is not task-relevant. Our brain picks its battles. It stomps out irrelevant information so that the good stuff has a better chance of rising to awareness. This process, colloquially called attention, is how the brain sorts the wheat from the chaff. © 2014 Scientific American
by Andy Coghlan

To catch agile prey on the wing, dragonflies rely on the same predictive powers we use to catch a ball: anticipating by sight where the ball will go and readying body and hand to snatch it from mid-air. Until now, dragonflies were thought to catch their prey without this predictive skill, instead blindly copying every steering movement made by their prey, which can include flies and bees.

Now, sophisticated laboratory experiments have tracked the independent body and eye movements of dragonflies as they pursue prey, showing for the first time that dragonflies second-guess where their prey will fly to next and then steer their flight accordingly. Throughout the pursuit, they lock on to their target visually while they orient their bodies and flight path for ultimate interception, rather than copying each little deviation in their prey's flight path in the hope of ultimately catching up with it.

"The dragonfly lines up its body axis in the flight direction of the prey, but keeps the eyes in its head firmly fixed on the prey," says Anthony Leonardo of the Howard Hughes Medical Institute in Ashburn, Virginia. "It enables the dragonfly to catch the prey from beneath and behind, the prey's blind spot," he says. © Copyright Reed Business Information Ltd.
Link ID: 20412 - Posted: 12.13.2014
Jia You

Ever wonder how cockroaches scurry around in the dark while you fumble to switch on the kitchen light? Scientists know the insect navigates with its senses of touch and smell, but now they have found a new piece to the puzzle: a roach can also see its environment in pitch darkness, by pooling visual signals from the thousands of light-sensitive cells, known as photoreceptors, in each of its compound eyes.

To test the sensitivity of roach vision, researchers created a virtual reality system for the bugs, knowing that when the environment around a roach rotates, the insect spins in the same direction to stabilize its vision. First, they placed the roach on a trackball, where it couldn’t navigate with its mouthpart or antennae. Then the scientists spun black and white gratings around the insect, illuminated by light at intensities ranging from a brightly lit room to a moonless night.

The roach responded to its rotating environment in light as dim as 0.005 lux, when each of its photoreceptors was picking up only one photon every 10 seconds, the researchers report online today in The Journal of Experimental Biology. They suggest that the cockroach must rely on unknown neural processing in the deep ganglia, an area in the base of the brain involved in coordinating movements, to process such complex visual information. Understanding this mechanism could help scientists design better imaging systems for night vision. © 2014 American Association for the Advancement of Science.
Link ID: 20389 - Posted: 12.04.2014
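The benefit of the pooling described in the cockroach study can be sketched with Poisson photon statistics: summing signals across N independent receptors multiplies the mean photon count by N, while the shot noise grows only as the square root of the count, so signal-to-noise improves by a factor of sqrt(N). Only the one-photon-every-10-seconds rate comes from the article; the pool size and integration window below are assumed illustrative numbers.

```python
# Back-of-envelope pooling arithmetic under Poisson photon statistics.
import math

rate_per_receptor = 0.1    # photons per second (one every 10 s, per the article)
n_receptors = 10_000       # assumed pool size; roach eyes have thousands of photoreceptors
window_s = 1.0             # assumed integration window

mean_single = rate_per_receptor * window_s   # 0.1 photons: essentially nothing to work with
mean_pooled = n_receptors * mean_single      # 1000 photons across the whole pool

# For a Poisson count, SNR = mean / sqrt(mean) = sqrt(mean).
snr_single = math.sqrt(mean_single)
snr_pooled = math.sqrt(mean_pooled)

print(snr_pooled / snr_single)   # improves by sqrt(n_receptors)
```

The sketch shows why a signal that is invisible at the level of one receptor can become reliable once thousands of receptors are summed, which is the computation the researchers attribute to downstream neural processing.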
Katharine Sanderson

Although we do not have X-ray vision like Superman, we have what could seem to be another superpower: we can see infrared light — beyond what was traditionally considered the visible spectrum. A series of experiments now suggests that this little-known, puzzling effect could occur when pairs of infrared photons simultaneously hit the same pigment protein in the eye, providing enough energy to set in motion chemical changes that allow us to see the light.

Received wisdom, and the known chemistry of vision, say that human eyes can see light with wavelengths between 400 (blue) and 720 nanometres (red). Although this range is still officially known as the 'visible spectrum', the advent of lasers with very specific infrared wavelengths brought reports that people were seeing laser light with wavelengths above 1,000 nm as white, green and other colours. Krzysztof Palczewski, a pharmacologist at Case Western Reserve University in Cleveland, Ohio, says that he has seen light of 1,050 nm from a low-energy laser. “You see it with your own naked eye,” he says.

To find out whether this ability is common or a rare occurrence, Palczewski scanned the retinas of 30 healthy volunteers with a low-energy beam of light, and changed its wavelength. As the wavelength increased into the infrared (IR), participants found the light at first harder to detect, but at around 1,000 nm the light became easier to see. How humans can do this has puzzled scientists for years. Palczewski wanted to test two leading hypotheses to explain infrared vision. © 2014 Nature Publishing Group
Link ID: 20388 - Posted: 12.03.2014
By Amy Ellis Nutt

[Photo caption: Scientists say the "outdoor effect" on nearsighted children is real: natural light is good for the eyes. (Photo by Bill O'Leary/The Washington Post)]

It's long been thought kids are more at risk of nearsightedness, or myopia, if they spend hours and hours in front of computer screens or fiddling with tiny hand-held electronic devices. Not true, say scientists. But now there is research that suggests that children who are genetically predisposed to the visual deficit can improve their chances of avoiding eyeglasses just by stepping outside. Yep, sunshine is all they need -- more specifically, the natural light of outdoors -- and 14 hours a week of outdoor light should do it.

Why this is the case is not exactly clear. "We don't really know what makes outdoor time so special," said Donald Mutti, the lead researcher of the study from Ohio State University College of Optometry, in a press release. "If we knew, we could change how we approach myopia." What is known is that UVB light (invisible ultraviolet B rays) plays a role in the cellular production of vitamin D, which is believed to help the eyes focus light on the retina.

However, the Ohio State researchers think there is another possibility. "Between the ages of five and nine, a child's eye is still growing," said Mutti. "Sometimes this growth causes the distance between the lens and the retina to lengthen, leading to nearsightedness. We think these different types of outdoor light may help preserve the proper shape and length of the eye during that growth period."