Chapter 10. Vision: From Eye to Brain
By C. CLAIBORNE RAY Q. What’s the No. 1 cause of blindness in seniors in the United States? A. “It sounds like a simple question, but there’s no perfect answer,” said Dr. Susan Vitale, a research epidemiologist at the National Eye Institute of the National Institutes of Health. “It depends on age, how blindness is measured and how statistics are collected.” For example, some studies have relied on the self-reported answer to the vague question: “Do you have vision problems?” The best available estimates, she said, come from a 2004 paper aggregating many other studies, some in the United States and some in other countries, updated by applying later census data. This paper and others have found striking differences by age and by racial and socioeconomic groups, Dr. Vitale said. In white people, she said, the major cause of blindness at older ages is usually age-related macular degeneration, progressive damage to the central portion of the retina. In older black people, the major causes are likely to be glaucoma or cataracts. In people of working age, from their 40s to their 60s, the major cause, regardless of race, is diabetic retinopathy, damage to the retina as a result of diabetes. Many studies have shown that white people are more likely to have age-related macular degeneration, Dr. Vitale said. As for cataracts, for which blindness is preventable by surgery, there are questions about access to health care and whether those affected can get the needed surgery. It is not known why black people are at higher risk of glaucoma. There are also some gender differences, she said, with white women more likely than white men to become blind. Studies have not found the same difference by gender in black and Hispanic people. Because many of the causes of blindness at all ages are preventable, Dr. Vitale said, it is essential to have regular eye checkups, even if there are no obvious symptoms. © 2016 The New York Times Company
Link ID: 21958 - Posted: 03.07.2016
By Susana Martinez-Conde, Stephen L. Macknik In the forests of Australia and New Guinea lives a pigeon-sized creature that is not only a master builder but a clever illusionist, too. The great bowerbird (Chlamydera nuchalis)—a cousin of crows and jays—has an elaborate mating ritual that relies on the male's ability to conjure forced perspective. Throughout the year he painstakingly builds and maintains his bower: a 60-centimeter-long corridor made of twigs, leading to a courtyard decorated with gray and white pebbles, shells and bones. Some species also add flowers, fruits, feathers, bottle caps, acorns, abandoned toys—whatever colorful knickknacks they can find. The male takes great care to arrange the objects according to size so that the smallest pieces are closest to the bower's entrance and the largest items are farthest away. The elaborate structure is not a nest. Its sole purpose is to attract a female for mating. Once construction is complete, the male performs in the courtyard for a visiting female, who—poised like a critical American Idol judge—evaluates the routine from the middle of the corridor. He sings, dances and prances, tossing around a few select trinkets to impress his potential mate. Her viewpoint is very narrow, and so she perceives objects paving the courtyard as being uniform in size. This forced perspective makes the choice offerings appear grander and therefore all the more enticing. The offerings, and the male himself, appear larger than life because of an effect that visual scientists call the Ebbinghaus illusion, which causes an object to look bigger if it is surrounded by smaller objects. © 2016 Scientific American
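The size effect the bowerbird exploits is easy to demonstrate on a screen. Below is a minimal Python sketch (our illustration, not anything from the article; all radii, counts and positions are arbitrary choices) that draws the classic Ebbinghaus arrangement: two identical center circles, one ringed by larger circles and one by smaller ones, much like the bower's graded courtyard.

```python
# Minimal sketch of the Ebbinghaus illusion; radii and positions are
# arbitrary illustrative choices, not measurements from real bowers.
import numpy as np
import matplotlib.pyplot as plt

def draw_group(ax, center, center_r, ring_r, ring_dist, n_ring):
    """Draw one center circle surrounded by n_ring circles of radius ring_r."""
    ax.add_patch(plt.Circle(center, center_r, color="orange"))
    for theta in np.linspace(0, 2 * np.pi, n_ring, endpoint=False):
        x = center[0] + ring_dist * np.cos(theta)
        y = center[1] + ring_dist * np.sin(theta)
        ax.add_patch(plt.Circle((x, y), ring_r, color="gray"))

fig, ax = plt.subplots(figsize=(8, 4))
ax.set_aspect("equal")
ax.axis("off")
draw_group(ax, (-2.5, 0), center_r=0.5, ring_r=1.0, ring_dist=2.0, n_ring=6)
draw_group(ax, (2.5, 0), center_r=0.5, ring_r=0.25, ring_dist=1.0, n_ring=8)
ax.set_xlim(-5.5, 5.5)
ax.set_ylim(-3, 3)
plt.show()  # both center circles have r = 0.5, but the right one looks larger
```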
Floaters, those small dots or cobweb-shaped patches that move or “float” through the field of vision, can be alarming. Though many are harmless, if you develop a new floater, “you need to be seen pretty quickly” by an eye doctor in order to rule out a retinal tear or detachment, said Dr. Rebecca Taylor, a spokeswoman for the American Academy of Ophthalmology. Floaters are caused by clumping of the vitreous humor, the gel-like fluid that fills the inside of the eye. Normally, the vitreous gel is anchored to the back of the eye. But as you age, it tends to thin out and may shrink and pull away from the inside surface of the eye, causing clumps or strands of connective tissue to become lodged in the jelly, much as “strands of thread fray when a button comes off on your coat,” Dr. Taylor said. The strands or clumps cast shadows on the retina, appearing as specks, dots, clouds or spider webs in your field of vision. Such changes may occur at younger ages, too, particularly if you are nearsighted or have had a head injury or eye surgery. There is no treatment for floaters, though they usually fade with time. But it’s still important to see a doctor if new floaters arise because the detaching vitreous gel can pull on the retina, causing it to tear, which can lead to retinal detachment, a serious condition. The pulling or tugging on the retina may be perceived as lightning-like flashes, “like a strobe light off to the side of your vision,” Dr. Taylor said. See an eye doctor within 24 to 48 hours if you have a new floater, experience a sudden “storm” of floaters, see a gray curtain or shadow move across your field of vision, or have a sudden decrease in vision. © 2016 The New York Times Company
Link ID: 21868 - Posted: 02.08.2016
By Susana Martinez-Conde Take a look at the red chips on the two Rubik’s Cubes below. They are actually orange on the left and purple on the right, if you look at them in isolation. They only appear more or less equally red across the images because your brain is interpreting them as red chips lit by either yellow or blue light. This kind of misperception is an example of perceptual constancy, the mechanism that allows you to recognize an object as being the same in different environments, and under very diverse lighting conditions. Constancy illusions are adaptive: consider what would have happened if your ancestors thought a friend became a foe whenever a cloud hid the sun, or if they lost track of their belongings, and even their own children, every time they stepped out of the cave and into the sunlight. Why, they might have even eaten their own kids! You are here because the perceptual systems of your predecessors were resistant to annoying changes in physical reality, as is your own (adult) perception. There are many indications that constancy effects must have helped us survive (and continue to do so). One such clue is that we are not born with perceptual constancy, but develop it many months after birth. So at first we see all differences, and then we learn to ignore certain types of differences so that we can recognize the same object as unchanging in many varied scenarios. When perceptual constancy arises, we lose the ability to detect multiple contradictions that are nevertheless highly noticeable to young babies. © 2016 Scientific American
by Laura Sanders Young babies get a bad rap. They’re helpless, fickle and noisy. And even though they allegedly sleep for 16 hours a day, those hours come in 20-minute increments. Yet hidden in the chaos of a young infant’s life are some truly magnificent skills — perceptual feats that put adults to shame. So next time your baby loses it because she can’t get her thumb into her mouth, keep in mind that her strengths lie elsewhere. Six-month-old babies can spot subtle differences between two monkey faces easy as pie. But 9-month-olds — and adults — are blind to the differences. In a 2002 study of facial recognition, scientists pitted 30 6-month-old babies against 30 9-month-olds and 11 adults. First, the groups got familiar with a series of monkey and human faces that flashed on a screen. Then new faces showed up, interspersed with already familiar faces. The idea is that the babies would spend more time looking at new faces than ones they had already seen. When viewing human faces, all of the observers, babies and adults alike, did indeed spend more time looking at the new people, showing that they could easily pick out familiar human faces. But when it came to recognizing monkey faces, the youngsters blew the competition out of the water. Six-month-old babies recognized familiar monkey faces and stared at the newcomers longer. But both adults and 9-month-old babies were flummoxed, and looked at the new and familiar monkey faces for about the same amount of time. © Society for Science & the Public 2000 - 2015
By Ana Swanson Earlier this year, the famous blue-and-black (or white-and-gold) dress captivated the Internet, serving as a reminder that color is truly in the eye of the beholder. The dress was also a lesson in the power of social media, the science of shifting colors, and the fun of optical illusions. Here we present a visual story from February 27 that rounded up some of the best-known optical illusions on the Web. The Internet erupted in an energetic debate yesterday about whether an ugly dress was blue and black or white and gold, with celebrities from Anna Kendrick (white) to Taylor Swift (black) weighing in. (For the record, I’m with Taylor – never a bad camp to be in.) It sounds inane, but the dress question was actually tricky: Some declared themselves firmly in the blue and black camp, only to have the dress appear white and gold when they looked back a few hours later. Wired had the best explanation of the science behind the dress’s shifting colors. When your brain tries to figure out what color something is, it essentially subtracts the lighting and background colors around it, or as the neuroscientist interviewed by Wired says, tries to “discount the chromatic bias of the daylight axis.” This is why you can identify an apple as red whether you see it at noon or at dusk. The dress is on some kind of perceptual boundary, with a pretty even mix of blue, red and green. (Frankly, it’s just a terrible, washed out photo.) So for those who see it as white, your eyes may be subtracting the wrong background and lighting.
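The "subtract the lighting" account that Wired describes corresponds, in its simplest textbook form, to von Kries-style chromatic adaptation: estimate the illuminant, then rescale each color channel by it. Here is a hedged sketch using the common gray-world assumption that a scene averages out to neutral gray (our illustration; this is not code from the article or from Wired):

```python
# A minimal sketch of "discounting the illuminant" via gray-world
# white balance (von Kries-style channel scaling). Illustrative only.
import numpy as np

def gray_world_correct(image):
    """Estimate the illuminant as the mean RGB of the scene and
    rescale each channel so the scene average becomes neutral gray."""
    image = image.astype(float)
    illuminant = image.reshape(-1, 3).mean(axis=0)  # per-channel mean
    gray = illuminant.mean()                        # target neutral level
    corrected = image * (gray / illuminant)         # rescale each channel
    return np.clip(corrected, 0, 255).astype(np.uint8)

# A toy "dress photo": every pixel carries an exaggerated blue cast.
bluish = np.tile(np.array([100, 110, 180], dtype=np.uint8), (4, 4, 1))
print(gray_world_correct(bluish)[0, 0])  # roughly equal channels, i.e. gray
```

If the visual system applies the wrong correction of this kind, as the article suggests some viewers' brains did, the same pixels come out "white and gold" instead of "blue and black."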
Link ID: 21742 - Posted: 01.02.2016
by Laura Sanders There’s only so much brainpower to go around, and when the eyes hog it all, the ears suffer. When challenged with a tough visual task, people are less likely to perceive a tone, scientists report in the Dec. 9 Journal of Neuroscience. The results help explain what parents of screen-obsessed teenagers already know. For the study, people heard a tone while searching for a letter on a computer screen. When the letter was easy to find, participants were pretty good at identifying a tone. But when the search got harder, people were less likely to report hearing the sound, a phenomenon called inattentional deafness. Neural responses to the tone were blunted when people worked on a hard visual task, but not when the visual task was easy, researchers found. By showing that a demanding visual job can siphon resources away from hearing, the results suggest that perceptual overload can jump between senses. © Society for Science & the Public 2000 - 2015
By John Bohannon It may sound like a bird-brained idea, but scientists have trained pigeons to spot cancer in images of biopsied tissue. Individually, the avian analysts can't quite match the accuracy of professional pathologists. But as a flock, they did as well as trained humans, according to a new study appearing this week in PLOS ONE. Cancer diagnosis often begins as a visual challenge: Does this lumpy spot in a mammogram image justify a biopsy? And do cells in biopsy slides look malignant or benign? Training doctors and medical technicians to tell the difference is expensive and time-consuming, and computers aren't yet up to the task. To see whether a different type of trainee could do better, a team led by Richard Levenson, a pathologist and technologist at the University of California, Davis, and Edward Wasserman, a psychologist at the University of Iowa, in Iowa City, turned to pigeons. In spite of their limited intellect, the bobble-headed birds have certain advantages. They have excellent visual systems, similar to, if not better than, a human's. They sense five different colors as opposed to our three, and they don’t “fill in” the gaps like we do when expected shapes are missing. However, training animals to do a sophisticated task is tricky. Animals can pick up on unintentional cues from their trainers and other humans that may help them correctly solve problems. For example, a famous 20th century horse named Clever Hans was purportedly able to do simple arithmetic, but was later shown to be observing the reactions of his human audience. And although animals can perform extremely well on tasks that are confined to limited circumstances, overtraining on one set of materials can lead to total inaccuracy when the same information is conveyed slightly differently. © 2015 American Association for the Advancement of Science
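The flock result is a textbook ensemble effect: pooling several imperfect but partly independent judges outperforms any single judge. The study pooled the birds' responses; the sketch below uses simple majority voting over an odd-sized flock with a made-up 85 percent individual accuracy (both the method and the number are our simplifications, not the paper's):

```python
# Sketch of "flock-sourcing": majority vote over independent judges.
# The 0.85 individual accuracy is hypothetical, not the study's figure,
# and the real study averaged responses rather than counting votes.
import random

def flock_accuracy(n_birds, individual_acc, n_trials=100_000):
    """Fraction of trials on which a majority of the flock is correct."""
    correct = 0
    for _ in range(n_trials):
        votes = sum(random.random() < individual_acc for _ in range(n_birds))
        if votes > n_birds / 2:  # odd flock sizes avoid ties
            correct += 1
    return correct / n_trials

random.seed(1)
print(flock_accuracy(1, 0.85))  # ~0.85: a lone pigeon
print(flock_accuracy(5, 0.85))  # ~0.97: the flock beats any individual
```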
Susan Milius Certain species of the crawling lumps of mollusk called chitons polka-dot their armor-plated backs with hundreds of tiny black eyes. But mixing protection and vision can come at a price. The lenses are rocky nuggets formed mostly of aragonite, the same mineral that pearls and abalone shells are made of. New analyses of these eyes support previous evidence that they form rough images instead of just sensing overall lightness or darkness, says materials scientist Ling Li of Harvard University. Adding eyes to armor does introduce weak spots in the shell. Yet the positioning of the eyes and their growth habits show how chitons compensate for that, Li and his colleagues report in the November 20 Science. Li and coauthor Christine Ortiz of MIT have been studying such trade-offs in biological materials that serve multiple functions. Human designers often need substances that multitask, and the researchers have turned to evolution’s solutions in chitons and other organisms for inspiration. Biologists had known that dozens of chiton species sprinkle their armored plates with simple-seeming eye spots. (The armor has other sensory organs: pores even tinier than the eyes.) But in 2011, a research team showed that the eyes of the West Indian fuzzy chiton (Acanthopleura granulata) were much more remarkable than anyone had realized. Their unusual aragonite lens can detect the difference between a looming black circle and a generally gray field of vision. Researchers could tell because chitons clamped their shells defensively to the bottom when a scary circle appeared but not when an artificial sky turned overall shadowy. © Society for Science & the Public 2000 - 2015
Angus Chen If you peek into classrooms around the world, a bunch of bespectacled kids peek back at you. In some countries such as China, as much as 80 percent of children are nearsighted. As those kids grow up, their eyesight gets worse, requiring stronger and thicker eyeglasses. But a diluted daily dose of an ancient drug might slow that process. The drug is atropine, one of the toxins in deadly nightshade and jimsonweed. In the 19th and early 20th centuries, atropine was known as belladonna, and fancy Parisian ladies used it to dilate their pupils, since big pupils were considered alluring at the time. A few decades later, people started using atropine to treat amblyopia, or lazy eye, since it blurs the stronger eye's vision and forces the weaker eye to work harder. As early as the 1990s, doctors had some evidence that atropine can slow the progression of nearsightedness. In some countries, notably in Asia, a 1 percent solution of atropine eyedrops is commonly prescribed to children with myopia. It's not entirely clear how atropine works. Because people become nearsighted when their eyeballs get too elongated, it's generally thought that atropine must be interfering with that unwanted growth. But as Parisians discovered long ago, the drug can have some inconvenient side effects. © 2015 npr
A clinical trial funded by the National Institutes of Health has found that the drug ranibizumab (Lucentis) is highly effective in treating proliferative diabetic retinopathy. The trial, conducted by the Diabetic Retinopathy Clinical Research Network (DRCR.net), compared Lucentis with a type of laser therapy called panretinal or scatter photocoagulation, which has remained the gold standard for proliferative diabetic retinopathy since the mid-1970s. The findings demonstrate the first major therapy advance in nearly 40 years. “These latest results from the DRCR Network provide crucial evidence for a safe and effective alternative to laser therapy against proliferative diabetic retinopathy,” said Paul A. Sieving, M.D., Ph.D., director of NIH’s National Eye Institute (NEI), which funded the trial. The results were published online today in the Journal of the American Medical Association. Treating abnormal retinal blood vessels with laser therapy became the standard treatment for proliferative diabetic retinopathy after the NEI announced results of the Diabetic Retinopathy Study in 1976. Although laser therapy effectively preserves central vision, it can damage night and side vision, so researchers have sought therapies that work as well or better than laser but without such side effects. A complication of diabetes, diabetic retinopathy can damage blood vessels in the light-sensitive retina in the back of the eye. As the disease worsens, blood vessels may swell, become distorted and lose their ability to function properly. Diabetic retinopathy becomes proliferative when lack of blood flow in the retina increases production of a substance called vascular endothelial growth factor, which can stimulate the growth of new, abnormal blood vessels.
Link ID: 21637 - Posted: 11.17.2015
By Kelli Whitlock Burton More than half of Americans over the age of 70 have cataracts, caused by clumps of proteins collecting in the eye lens. The only way to remove them is surgery, an unavailable or unaffordable option for many of the 20 million people worldwide who are blinded by the condition. Now, a new study in mice suggests eye drops made with a naturally occurring steroid could reverse cataracts by teasing apart the protein clumps. “This is a game changer in the treatment of cataracts,” says Roy Quinlan, a molecular biologist at Durham University in the United Kingdom who was not part of the study. “It takes decades for the cataracts to get to that point, so if you can reverse that by a few drops in the eye over a couple of weeks, that’s amazing.” The proteins that make up the human lens are among the oldest in the body, forming at about 4 weeks after fertilization. The majority are crystallins, a family of proteins that allow the eye to focus and keep the lens clear. Two of the most abundant crystallins, CRYAA and CRYAB, are produced in response to stress or injury. They act as chaperones, identifying and binding to damaged and misfolded proteins in the lens, preventing them from aggregating. But over the years, as damaged proteins accumulate in the lens, these chaperones become overwhelmed. These proteins then clump together, blocking light and producing the tell-tale cloudiness of cataracts. © 2015 American Association for the Advancement of Science
Link ID: 21611 - Posted: 11.06.2015
Nancy Shute In September, we reported on a charming little study that found people who feel blue after watching sad videos have a harder time perceiving colors on the blue-yellow axis. Now the researchers may be feeling blue themselves. On Thursday they retracted their study, saying that errors in how they structured the experiment skewed the results. Shortly after the study was published online, commenters started looking skeptically at the results. And because the researchers had posted their data online, those commenters were able to run the numbers themselves. They didn't like what they found. As one blogger wrote: "A major problem is that the authors are claiming that they've found an interaction between video condition and color axis, but they haven't actually tested this interaction, they've just done a pair of independent t-tests and found different results." As the indefatigable crew at the Retraction Watch blog points out, it's not the first time scientists have messed this up. "This exact experimental oversight occurs all too often, according to a 2011 paper in Nature Neuroscience, which found that the same number of papers performed the procedure incorrectly as did it correctly." And there were other problems, too, such as not testing participants' color perception before the study. © 2015 npr
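The blogger's point is a classic statistical one: a significant effect in one condition and a nonsignificant effect in another does not establish that the two conditions differ; the interaction itself has to be tested. A hedged sketch of the correct analysis on simulated data (the numbers are invented for illustration, not the retracted study's), using statsmodels:

```python
# The error at issue: two separate t-tests (one per video condition)
# cannot establish an interaction. Test the interaction term directly.
# All data below are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 50  # participants per cell
df = pd.DataFrame({
    "video": np.repeat(["sad", "neutral"], 2 * n),
    "axis": np.tile(np.repeat(["blue_yellow", "red_green"], n), 2),
})
# Simulated color-perception accuracy with a small true interaction:
# sad videos slightly impair blue-yellow perception only.
effect = ((df.video == "sad") & (df.axis == "blue_yellow")) * -0.05
df["accuracy"] = 0.8 + effect + rng.normal(0, 0.1, len(df))

# Correct approach: one model with a video-by-axis interaction term.
model = smf.ols("accuracy ~ video * axis", data=df).fit()
print(model.summary().tables[1])  # inspect the video:axis coefficient
```

Two independent t-tests can easily come out "significant" and "not significant" by chance even when the interaction coefficient above is indistinguishable from zero, which is exactly the trap the commenters flagged.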
By Christof Koch Artificial intelligence has been much in the news lately, driven by ever cheaper computer processing power that has become effectively a near universal commodity. The excitement swirls around mathematical abstractions called deep convolutional neural networks, or ConvNets. Applied to photographs and other images, the algorithms that implement ConvNets identify individuals from their faces, classify objects into one of 1,000 distinct categories (cheetah, husky, strawberry, catamaran, and so on)—and can describe whether they see “two pizzas sitting on top of a stove top oven” or “a red motorcycle parked on the side of the road.” All of this happens without human intervention. Researchers looking under the hood of these powerful algorithms are surprised, puzzled and entranced by the beauty of what they find. How do ConvNets work? Conceptually they are but one or two generations removed from the artificial neural networks developed by engineers and learning theorists in the 1980s and early 1990s. These, in turn, are abstracted from the circuits neuroscientists discovered in the visual system of laboratory animals. Already in the 1950s a few pioneers had found cells in the retinas of frogs that responded vigorously to small, dark spots moving on a stationary background, the famed “bug detectors.” Recording from the part of the brain's outer surface that receives visual information, the primary visual cortex, Torsten Wiesel and the late David H. Hubel, both then at Harvard University, found in the early 1960s a set of neurons they called “simple” cells. These neurons responded to a dark or a light bar of a particular orientation in a specific region of the visual field of the animal. © 2015 Scientific American
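In ConvNet terms, a Hubel-and-Wiesel "simple cell" is one filter in the first convolutional layer: a small patch of weights that responds wherever the image contains a bar or edge at its preferred orientation and position. A toy sketch (our illustration; the weights are hand-picked, not taken from any trained network):

```python
# A "simple cell" as an oriented filter: strong output where its
# receptive field spans a vertical light/dark edge. Toy weights only.
import numpy as np
from scipy.signal import correlate2d  # ConvNets compute cross-correlation

vertical_edge_filter = np.array([[-1, 0, 1],
                                 [-1, 0, 1],
                                 [-1, 0, 1]])

image = np.zeros((5, 5))
image[:, 2:] = 1.0  # dark on the left, light on the right

response = correlate2d(image, vertical_edge_filter, mode="valid")
print(response)
# Large values where the 3x3 receptive field straddles the edge, zero
# over uniform regions; a horizontally tuned filter would stay silent.
```

A ConvNet stacks layers of such filters, learned rather than hand-picked, which is why the analogy to the visual cortex recurs throughout this line of research.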
By Jessica Schmerler Young brains are plastic, meaning their circuitry can be easily rewired to promote learning. By adulthood, however, the brain has lost much of its plasticity and can no longer readily recover lost function after, say, a stroke. Now scientists have successfully restored full youthful plasticity in adult mice by transplanting young neurons into their brain—curing their severe visual impairments in the process. In a groundbreaking study published in May in Neuron, a team of neuroscientists led by Sunil Gandhi of the University of California, Irvine, transplanted embryonic mouse stem cells into the brains of other mice. The cells were primed to become inhibitory neurons, which tamp down brain activity. Prior to this study, “it was widely doubted that the adult brain would allow these cells to disperse, integrate and reactivate plasticity,” says Melissa Davis, first author of the study. Scientists have been attempting such a feat for years, refining their methods along the way, and the Irvine team finally saw success: the cells were integrated in the brain and caused large-scale rewiring, restoring the high-level plasticity of early development. In visually impaired mice, the transplant allowed for the restoration of normal vision, as demonstrated by tests of visual nerve signals and a swimming maze test. The scientists have not yet tested the transplanting technique for other neurological disorders, but they believe the technique has potential for many conditions and injuries depending on how, exactly, the new neurons restore plasticity. It is not yet known whether the proliferation of the transplanted cells accounts for the restored plasticity or if the new cells trigger plasticity in existing neurons. If the latter, the treatment could spur the rewiring and healing of the brain following traumatic brain injury or stroke. © 2015 Scientific American
By Karen Weintraub The short answer is: not yet, but treatments are getting better. Getting older is the leading risk factor for age-related macular degeneration, the leading cause of vision loss in the United States. Macular degeneration comes in two forms: dry and wet. The dry form is milder and usually has no symptoms, but it can degenerate into the wet form, which is characterized by the growth of abnormal blood vessels in the back of the eye, potentially causing blurriness or vision loss in the center of the field of vision. The best treatment for wet macular degeneration is prevention, said Dr. Rahul N. Khurana, a clinical spokesman for the American Academy of Ophthalmology and a retina specialist practicing in Mountain View, Calif. Not smoking, along with eating dark green vegetables and at least two servings of fish a week, may help reduce the risk of macular degeneration, he said. An annual eye exam can catch macular degeneration while it is still in the dry form, Dr. Khurana said, and vitamins can help prevent it from progressing into the wet form, the main cause of vision loss. Dr. Joan W. Miller, chief of ophthalmology at Massachusetts Eye and Ear, said anyone with a family history of the disease should get a retina check at age 50. People should also get an eye exam if they notice problems like trouble adjusting to the dark or needing more light to read. The federally funded Age-Related Eye Disease Study, published in 2001 and updated in 2013, found that people at high risk for advanced age-related macular degeneration could cut that risk by about 25 percent by taking a supplement that included 500 milligrams of vitamin C, 400 I.U.s of vitamin E, 10 milligrams of lutein, 2 milligrams of zeaxanthin, 80 milligrams of zinc, and 2 milligrams of copper. © 2015 The New York Times Company
Link ID: 21551 - Posted: 10.23.2015
By Hanae Armitage CHICAGO, ILLINOIS—Aside from a few animals—like pythons and vampire bats—that can sense infrared light, the world of this particular electromagnetic radiation has been off-limits to most creatures. But now, researchers have engineered rodents to see infrared light by implanting sensors in their visual cortex—a first-ever feat announced here yesterday at the annual meeting of the Society for Neuroscience. Before they wired rats to see infrared light, Duke University neuroscientist Miguel Nicolelis and his postdoc Eric Thomson engineered them to feel it. In 2013, they surgically implanted a single infrared-detecting electrode into an area of the rat’s brain that processes touch called the somatosensory cortex. The other end of the sensor, outside the rat’s head, surveyed the environment for infrared light. When it picked up infrared, the sensor sent electrical messages to the rats’ brains that seemed to give them a physical sensation. At first, the rats would groom and rub their whiskers repeatedly whenever the light went on. But after a short while, they stopped fidgeting. They even learned to associate infrared with a reward-based task in which they followed the light to a bowl of water. In the new experiment, the team inserted three additional electrodes, spaced out equally so that the rats could have 360 degrees of infrared perception. When they were primed to perform the same water-reward task, they learned it in just 4 days, compared to 40 days with the single implant. “Frankly, this was a surprise,” Thomson says. “I thought it would be really confusing for [the rats] to have so much stimulation all over their brain, rather than [at] one location.” © 2015 American Association for the Advancement of Science.
By Kerry Grens Eric Altschuler has been staring at mirrors. Specifically, those of van Eyck, Caravaggio, Parmigianino, Escher, and other painters. The Temple University professor and his colleague V.S. Ramachandran of the University of California, San Diego, are on the hunt for novel ways that artists have presented reflections, as a means of seeking out potentially new modes of therapy. Ramachandran and Altschuler have pioneered methods of using a mirror to alleviate phantom limb pain and other conditions. A patient sits at the side of the mirror with, say, his right arm reflected in front of the glass. The patient peeks around the corner to view the reflection as if he were looking at his left arm—a setup Ramachandran and Altschuler call the parasagittal reflection. In their cataloging of mirrors in art, presented as a poster at the Society for Neuroscience (SfN) meeting held in Chicago this week, Altschuler and Ramachandran found that for 500 or more years, painters presented frontal plane reflections (a straight-on view in the mirror). It wasn’t until 1946 that something different—the parasagittal view, in particular—appeared in fine art: in M.C. Escher’s lithograph, “Magic Mirror,” Altschuler and Ramachandran reported at SfN. The viewer has an angled view at a ball reflected in a mirror, with an identical ball positioned symmetrically behind the mirror—very similar to the concept of mirror therapy. “Magic Mirror” was produced 50 years before Ramachandran first published on mirror therapy, and even then Ramachandran was unaware of the artwork. “Escher was very clever,” Altschuler told The Scientist, noting that perhaps there are other novel approaches just waiting to be discovered in paintings. © 1986-2015 The Scientist
Link ID: 21543 - Posted: 10.22.2015
Gene therapy preserved vision in a study involving dogs with naturally occurring, late-stage retinitis pigmentosa, according to research funded by the National Eye Institute (NEI), part of the National Institutes of Health. The findings contribute to the groundwork needed to move gene therapy forward into clinical trials for people with the blinding eye disorder, for which there is currently no cure. Scientists from the University of Pennsylvania and the University of Florida, Gainesville, also determined for the first time that gene therapy may be of potential benefit even after there has been significant loss of cells in the eye. Up to this point, animal studies had shown benefits from gene therapy only when it was used in the earliest stages of the disease. “The study shows that a corrective gene can stop the loss of photoreceptors in the retina, and provides good proof of concept for gene therapy at the intermediate stage of the disease, thus widening the therapeutic window,” said Neeraj Agarwal, Ph.D., a program director at NEI. Retinitis pigmentosa is the most common inherited disease that causes degeneration of the retina, the light-sensitive tissue lining the back of the eye. Roughly 1 in 4,000 people are affected, and about 10 to 20 percent have a particularly severe form called X-linked retinitis pigmentosa, which predominantly affects males, causing night blindness by age 10 and progressive loss of the visual field by age 45. About 70 percent of people with the X-linked form carry mutations that cause loss of function of the retinitis pigmentosa GTPase regulator (RPGR) gene, which encodes a protein important for maintaining the health of photoreceptors.
Link ID: 21510 - Posted: 10.14.2015
By Ariana Eunjung Cha When it comes to studies on birth order, first-borns tend to make out pretty well. Research says they tend to be smarter and more outgoing, and to exhibit more leadership qualities. Unfortunately, it's not all good news. A new paper published in JAMA Ophthalmology shows that first-borns are also about 10 percent more likely to be near-sighted and 20 percent more likely to have severe myopia than their siblings. In fact, the risk for myopia appeared to be progressively lower the later your position in the birth order. The researchers from Cardiff University suggested that the cause was “parental investment in education” because parents may have a tendency to put more pressure on first-borns. They theorized that parents may be more demanding that first-borns do more "near" activities, such as reading, which may affect their eyesight. Previous studies have shown a strong link between time spent outdoors and a diminished risk of myopia, and it may stand to reason that children who spend more time on their studies spend less time outdoors. Jeremy Guggenheim and colleagues wrote that while there's no way to make a definitive causal link, their study found that when they adjusted for a proxy for educational exposure — the highest educational degree or age at completion of full-time education — they saw a less dramatic association between near-sightedness and birth order.
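Figures like "10 percent more likely" in epidemiological papers of this kind are typically odds ratios (here, roughly 1.10). As a worked illustration of how the unadjusted version of such a figure comes out of a 2x2 table (the counts below are invented for illustration, not the study's data):

```python
# How an odds ratio like "first-borns are 10% more likely to be myopic"
# is computed. The counts are hypothetical, not the JAMA Ophthalmology
# study's data, and real analyses adjust for covariates like education.
def odds_ratio(exposed_cases, exposed_noncases,
               unexposed_cases, unexposed_noncases):
    odds_exposed = exposed_cases / exposed_noncases
    odds_unexposed = unexposed_cases / unexposed_noncases
    return odds_exposed / odds_unexposed

# Hypothetical: 300 of 1,000 first-borns myopic vs. 280 of 1,000 later-borns.
print(round(odds_ratio(300, 700, 280, 720), 2))  # ~1.10: "10% higher odds"
```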
Link ID: 21497 - Posted: 10.10.2015