Chapter 10. Vision: From Eye to Brain
Strobe lighting at music festivals can increase the risk of epileptic seizures, researchers have warned. The Dutch team said even people who have not been diagnosed with epilepsy might be affected. Their study was prompted by the case of a 20-year-old, with no history of epilepsy, who suddenly collapsed and had a fit at a festival. The Epilepsy Society said festivals should limit lighting to the recommended levels.

Epilepsy is a condition that affects the brain. There are many types, and it can start at any age. Around 3% of people with epilepsy are photosensitive, which means their seizures are triggered by flashing or flickering lights, or by patterns. The Health and Safety Executive recommends strobe lighting should be kept to a maximum of four hertz (four flashes per second) in clubs and at public events.

The researchers studied electronic dance music festivals because they often use strobe lighting. They looked at data on people who needed medical care among the 400,000 visitors to 28 day and night-time dance music festivals across the Netherlands in 2015. The figures included 241,000 people who were exposed to strobe lights at night-time festivals. Thirty people at night-time events with strobe lighting had a seizure, compared with nine attending daytime events.

The team, led by Newel Salet of the VU Medical Centre in Amsterdam, writing in BMJ Open, said other factors could increase the risk of seizures. But they added: "Regardless of whether stroboscopic lights are solely responsible or whether sleep deprivation and/or substance abuse also play a role, the appropriate interpretation is that large [electronic dance music] festivals, especially during the night-time, probably cause at least a number of people per event to suffer epileptic seizures."
They advise anyone with photosensitive epilepsy to either avoid such events or to take precautionary measures, such as getting enough sleep and not taking drugs, not standing close to the stage, and leaving quickly if they experience any "aura" effects. © 2019 BBC
Keyword: Epilepsy; Vision
Link ID: 26323 - Posted: 06.12.2019
Children can keep full visual perception — the ability to process and understand visual information — after brain surgery for severe epilepsy, according to a study funded by the National Eye Institute (NEI), part of the National Institutes of Health. While brain surgery can halt seizures, it carries significant risks, including an impairment in visual perception. However, a new report by Carnegie Mellon University, Pittsburgh, researchers from a study of children who had undergone epilepsy surgery suggests that the lasting effects on visual perception can be minimal, even among children who lost tissue in the brain’s visual centers. Normal visual function requires not just information sent from the eye (sight), but also processing in the brain that allows us to understand and act on that information (perception). Signals from the eye are first processed in the early visual cortex, a region at the back of the brain that is necessary for sight. They then travel through other parts of the cerebral cortex, enabling recognition of patterns, faces, objects, scenes, and written words. In adults, even if their sight is still present, injury or removal of even a small area of the brain’s vision processing centers can lead to dramatic, permanent loss of perception, making them unable to recognize faces, locations, or to read, for example. But in children, who are still developing, this part of the brain appears able to rewire itself, a process known as plasticity. “Although there are studies of the memory and language function of children who have parts of the brain removed surgically for the treatment of epilepsy, there have been rather few studies that examine the impact of the surgery on the visual system of the brain and the resulting perceptual behavior,” said Marlene Behrmann, Ph.D., senior author of the study. “We aimed to close this gap.”
Keyword: Development of the Brain; Vision
Link ID: 26303 - Posted: 06.05.2019
Nicole Karlis

There is no way Leonardo da Vinci could have predicted that the Mona Lisa would remain one of the most widely debated works of art in the modern day — thanks in no small part to her intriguing expression. Indeed, as one of the most famous paintings in the world, the Mona Lisa's facial expression continues to beguile both commoners and academics.

A 2017 study published in the journal Scientific Reports (part of the Nature family of journals) proclaimed that Mona Lisa’s smile did indeed depict genuine happiness, according to the study's subjects, who compared it with subtly manipulated facial expressions. Now, a new study published in the neuroscience journal Cortex says that her smile is non-genuine. In other words, she's faking it.

The three neuroscience and cognition researchers who penned the article fixated on the asymmetry of Mona Lisa’s smile. Some historical theories suggest the facial asymmetry is due to the loss of the subject's anterior teeth, while others have speculated it could have been related to Bell’s palsy. The Cortex article's authors note that as the upper part of her face does not appear to be active, it is possible to interpret her smile as “non-genuine.” This would relate to theories of emotion neuropsychology, which characterizes the behavioral changes that follow a neurological condition. © 2018 Salon Media Group, Inc
Keyword: Emotions; Vision
Link ID: 26300 - Posted: 06.05.2019
By Susana Martinez-Conde and Stephen L. Macknik

The man and the woman sat down, facing each other in the dimly illuminated room. This was the first time the two young people had met, though they were about to become intensely familiar with each other—in an unusual sort of way. The researcher informed them that the purpose of the study was to understand “the perception of the face of another person.” The two participants were to gaze at each other’s eyes for 10 minutes straight, while maintaining a neutral facial expression, and pay attention to their partner’s face. After giving these instructions, the researcher stepped back and sat on one side of the room, away from the participants’ lines of sight.

The two volunteers settled in their seats and locked eyes—feeling a little awkward at first, but suppressing uncomfortable smiles to comply with the scientist’s directions. Ten minutes had seemed like a long stretch to look deeply into the eyes of a stranger, but time started to lose its meaning after a while. Sometimes, the young couple felt as if they were looking at things from outside their own bodies. Other times, it seemed as if each moment contained a lifetime. Throughout their close encounter, each member of the duo experienced their partner’s face as ever-changing. Human features became animal traits, transmogrifying into grotesqueries. There were eyeless faces, and faces with too many eyes. The semblances of dead relatives materialized. Monstrosities abounded.

The bizarre perceptual phenomena that the pair witnessed were manifestations of the “strange face illusion,” first described by the psychologist Giovanni Caputo of the University of Urbino, Italy. Caputo’s original study, published in 2010, reported a new type of illusion, experienced by people looking at themselves in the mirror in low light conditions. © 2019 Scientific American
Keyword: Attention; Vision
Link ID: 26230 - Posted: 05.14.2019
By Elizabeth Pennisi

When the ancestors of cave fish and certain crickets moved into pitch-black caverns, their eyes virtually disappeared over generations. But fish that ply the sea at depths sunlight cannot penetrate have developed super-vision, highly attuned to the faint glow and twinkle given off by other creatures. They owe this power, evolutionary biologists have learned, to an extraordinary increase in the number of genes for rod opsins, retinal proteins that detect dim light. Those extra genes have diversified to produce proteins capable of capturing every possible photon at multiple wavelengths—which could mean that despite the darkness, the fish roaming the deep ocean actually see in color.

The finding "really shakes up the dogma of deep-sea vision," says Megan Porter, an evolutionary biologist studying vision at the University of Hawaii in Honolulu who was not involved in the work. Researchers had observed that the deeper a fish lives, the simpler its visual system is, a trend they assumed would continue to the bottom. "That [the deepest dwellers] have all these opsins means there's a lot more complexity in the interplay between light and evolution in the deep sea than we realized," Porter says.

At a depth of 1000 meters, the last glimmer of sunlight is gone. But over the past 15 years, researchers have realized that the depths are pervaded by a faint bioluminescence from flashing shrimp, octopus, bacteria, and even fish. Most vertebrate eyes could barely detect this subtle shimmer. To learn how fish can see it, a team led by evolutionary biologist Walter Salzburger from the University of Basel in Switzerland studied deep-sea fishes' opsin proteins. Variation in the opsins' amino acid sequences changes the wavelength of light detected, so multiple opsins make color vision possible. One opsin, RH1, works well in low light. Found in the eye's rod cells, it enables humans to see in the dark—but only in black and white.
© 2019 American Association for the Advancement of Science
Keyword: Vision
Link ID: 26224 - Posted: 05.10.2019
Maria Temming

New artwork created by artificial intelligence does weird things to the primate brain. When shown to macaques, AI-generated images purposefully caused nerve cells in the monkeys’ brains to fire more than pictures of real-world objects. The AI could also design patterns that activated specific neurons while suppressing others, researchers report in the May 3 Science. This unprecedented control over neural activity using images may lead to new kinds of neuroscience experiments or treatments for mental disorders. The AI’s ability to play the primate brain like a fiddle also offers insight into how closely AIs can emulate brain function.

The AI responsible for the new mind-bending images is an artificial neural network — a computer model composed of virtual neurons — modeled after the ventral stream. This is a neural pathway in the brain involved in vision (SN Online: 8/12/09). The AI learned to “see” by studying a library of about 1.3 million labeled images. Researchers then instructed the AI to design pictures that would affect specific ventral stream neurons in the brain.

Viewing any image triggers some kind of neural activity in a brain. But neuroscientist Kohitij Kar of MIT and colleagues wanted to see whether the AI’s deliberately designed images could induce specific neural responses of the team’s choosing. The researchers showed these images to three macaques fitted with neuron-monitoring microelectrodes. © Society for Science & the Public 2000 - 2019.
Keyword: Vision
Link ID: 26206 - Posted: 05.03.2019
Ruth Williams Showing monkeys a series of computer-generated images and simultaneously recording the animals’ brain cell activities enables deep machine learning systems to generate new images that ramp up the cells’ excitation, according to two papers published today (May 2) in Cell and Science. “It’s exciting because it’s bridging the fields of deep learning and neuroscience . . . to try and understand what is represented in different parts of the brain,” says neuroscientist Andreas Tolias of Baylor College of Medicine who was not involved with either of the studies, but has carried out similar experiments in mice. “I think these methods and their further development could provide a more systematic way for us to open the black box of the brain,” he says. It’s a goal of sensory neuroscience to understand exactly which stimuli activate which brain cells. In the primate visual system, certain neurons in the visual cortex and inferior temporal cortex (two key vision areas) are known to respond preferentially to certain stimuli—such as colors, specific directions of motion, curves, and even faces. But, says neuroscientist Carlos Ponce of Washington University School of Medicine in St. Louis, who co-authored the Cell paper, “the problem is, we’ve never quite known whether, in our selection of pictures, we have the secret true image that the cell really is encoding.” Maybe, he suggests, a cell isn’t responding to a face, but to an arrangement of features and shapes found in a face that may also be found in other images. And with countless available images, “it’s impossible to test all of them,” he says. In short, it has been impossible to determine the exact visual stimulus that would maximally activate a given neuron. © 1986–2019 The Scientist.
Keyword: Vision
Link ID: 26205 - Posted: 05.03.2019
By Susana Martinez-Conde

Human night vision is not as precise as day vision. That’s why getting up barefoot in the middle of the night comes with a much higher risk of stepping on painful Lego pieces than walking along the same path during the day. I have three kids aged twelve and under, so I know. But the specific ways in which our night vision is worse than our day vision are surprisingly counterintuitive to most of us. I remember learning in college that night vision is achromatic (meaning that we only see in grayscale at night) and not really believing it. It took some careful night-time observation to conclude that my professor was right: objects that were colorful during the day had no hue at night. Most shocking of all was the realization that, though I had always suffered from night-time color blindness (as all of us do), I had never been aware of my deficiency.

A recent study by Alejandro Gloriani and Alexander Schütz, from the University of Marburg, Germany, published earlier this month in Current Biology, shows that our night vision self-delusion is even more pervasive than previously thought.

To appreciate Gloriani and Schütz’s discovery, the first thing to understand is that day and night vision rely on the activity of different types of photoreceptors (these are the retinal cells that convert light energy into electrical signals, which your brain can then process). ‘Cones’ are active during the day (or when you turn the lights on at night). ‘Rods’ are active during the night (or at very dim light levels). © 2019 Scientific American
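The rod-versus-cone division the article builds on can be caricatured in a few lines of code. The sketch below is my own illustration, not anything from the study: it collapses a daylight (cone) color to the single intensity channel that rods provide, using standard Rec. 601 luma weights as an admittedly rough stand-in for actual rod spectral sensitivity.

```python
# An illustrative caricature (not from the article): rods provide a
# single-channel signal, so scotopic (night) vision collapses color
# to shades of gray. The Rec. 601 luma weights below are a stand-in
# for a rod-like spectral response, not real rod sensitivities.

def scotopic(rgb):
    """Collapse an (R, G, B) color to a gray (I, I, I) triple."""
    r, g, b = rgb
    intensity = round(0.299 * r + 0.587 * g + 0.114 * b)
    return (intensity, intensity, intensity)

# A red object seen in daylight (cone vision) ...
day_color = (200, 50, 50)
# ... keeps its brightness at night but loses its hue entirely.
print(scotopic(day_color))
```

Whatever colorful input goes in, a gray triple comes out — which is the sense in which night vision is "achromatic."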
Keyword: Vision
Link ID: 26181 - Posted: 04.29.2019
National Institutes of Health scientists studying the progression of inherited and infectious eye diseases that can cause blindness have found that microglia, a type of nervous system cell suspected to cause retinal damage, surprisingly had no damaging role during prion disease in mice. In contrast, the study findings indicated that microglia might delay disease progression. The discovery could apply to studies of inherited photoreceptor degeneration diseases in people, known as retinitis pigmentosa. In retinitis pigmentosa cases, scientists find an influx of microglia near the photoreceptors, which led to the belief that microglia contribute to retina damage. These inherited diseases appear to damage the retina similarly to prion diseases. Prion diseases are slow degenerative diseases of the central nervous system that occur in people and various other mammals. No vaccines or treatments are available, and the diseases are almost always fatal. Prion diseases primarily involve the brain but also can affect the retina and other tissues. Expanding on work published in 2018, scientists at NIH’s National Institute of Allergy and Infectious Diseases (NIAID) used an experimental drug to eliminate microglia in prion-infected mice. They studied prion disease progression in the retina to see if they could discover additional details that might be obscured in the more complex structure of the brain. When the scientists examined their prion-infected study mice, they found that photoreceptor damage still occurred – even somewhat faster – despite the absence of microglia. They also observed early signs of new prion disease in the photoreceptor cells, which may provide clues as to how prions damage photoreceptors. Their work appears in Acta Neuropathologica Communications.
David Cyranoski

A Japanese committee has provisionally approved the use of reprogrammed stem cells to treat diseased or damaged corneas. Researchers are now waiting for final approval from the health ministry to test the treatment in people with corneal blindness, which affects millions of people around the world.

The cornea, a transparent layer that covers and protects the eye, contains stem cells that repair it when damaged. But these can be destroyed by disease or by trauma from chemicals or burns, which can result in patients losing their vision. Currently, cornea transplants from donors who have died are used to treat damaged or diseased corneas, but good-quality tissue is scarce.

A team led by ophthalmologist Kohji Nishida at Osaka University plans to treat damaged corneas using sheets of tissue made from induced pluripotent stem cells. These are created by reprogramming cells from a donor into an embryonic-like state that can then transform into other tissue, such as corneal cells. Nishida’s team plans to lay 0.05-millimetre-thick sheets of corneal cells across patients’ eyes. Animal studies have shown that this can save or restore vision.

The health ministry is expected to decide soon. If Nishida and his team receive approval, they will treat four people, whom they will then monitor for a year to check the safety and efficacy of the treatment. The first treatment is planned to take place before the end of July. Other Japanese researchers have carried out clinical studies using induced pluripotent stem cells to treat spinal cord injury, Parkinson's disease and another eye disease. © 2019 Springer Nature Publishing AG
Keyword: Vision; Stem Cells
Link ID: 26045 - Posted: 03.18.2019
Liam Drew

A mouse scurries down a hallway, past walls lined with shifting monochrome stripes and checks. But the hallway isn’t real. It’s part of a simulation that the mouse is driving as it runs on a foam wheel, mounted inside a domed projection screen. While the mouse explores its virtual world, neuroscientist Aman Saleem watches its brain cells at work. Light striking the mouse’s retinas triggers electrical pulses that travel to neurons in its primary visual cortex, where Saleem has implanted electrodes. Textbooks say that these neurons each respond to a specific stimulus, such as a horizontal or vertical line, so that identical patterns of inputs should induce an identical response. But that’s not what happens. When the mouse encounters a repeat of an earlier scene, its neurons fire in a different pattern.

“Five years ago, if you’d told me that, I’d have been like, ‘No, that’s not true. That’s not possible’,” says Saleem, in whose laboratory at University College London we are standing. His results, published last September, show that cells in the hippocampus that track where the mouse has run along the hallway are somehow changing how cells in the visual cortex fire. In other words, the mouse’s neural representation of two identical scenes differs, depending on where it perceives itself to be.

It’s no surprise that an animal’s experiences change how it sees the world: all brains learn from experience and combine multiple streams of information to construct perceptions of reality. But researchers once thought that at least some areas in the brain — those that are the first to process inputs from the sense organs — create relatively faithful representations of the outside world. According to this model, these representations then travel to ‘association’ areas, where they combine with memories and expectations to produce perceptions.
Keyword: Learning & Memory; Vision
Link ID: 26039 - Posted: 03.15.2019
Matthew Warren

Cue the super-mouse. Scientists have engineered mice that can see infrared light normally invisible to mammals — including humans. To do so, they injected into the rodents’ eyes nanoparticles that convert infrared light into visible wavelengths.

Humans and mice, like other mammals, cannot see infrared light, which has wavelengths slightly longer than red light — between 700 nanometres and 1 millimetre. But Tian Xue, a neuroscientist at the University of Science and Technology of China in Hefei, and his colleagues developed nanoparticles that convert infrared wavelengths into visible light. The nanoparticles absorb photons at wavelengths of around 980 nanometres and emit them at shorter wavelengths, around 535 nanometres, corresponding to green light.

Xue’s team attached the nanoparticles to proteins that bind to photoreceptors — the cells in the eye that convert light into electrical impulses — and then injected them into mice. The researchers showed that the nanoparticles successfully attached to the photoreceptors, which in turn responded to infrared light by producing electrical signals and activating the visual-processing areas of the brain. The team conducted experiments to show that the mice did actually detect and respond to infrared light.
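The wavelengths in the article allow a quick sanity check of the physics. The sketch below is my own back-of-the-envelope calculation, not the authors': it compares photon energies at the absorbed (980 nm) and emitted (535 nm) wavelengths.

```python
# A back-of-the-envelope check (mine, not from the paper): converting
# 980 nm infrared light into 535 nm green light is "upconversion".
# Each emitted green photon carries more energy than one absorbed
# infrared photon (E = h*c/wavelength), so the particles must combine
# the energy of more than one infrared photon per green photon.

PLANCK = 6.626e-34      # Planck's constant, joule-seconds
LIGHT_SPEED = 2.998e8   # speed of light, metres per second

def photon_energy_joules(wavelength_nm):
    """Energy of a single photon at the given wavelength, in joules."""
    return PLANCK * LIGHT_SPEED / (wavelength_nm * 1e-9)

e_infrared = photon_energy_joules(980)
e_green = photon_energy_joules(535)

print(e_green > e_infrared)       # True: green photons are more energetic
print(e_green < 2 * e_infrared)   # True: pooling two infrared photons suffices
```

Since a 535 nm photon carries about 1.8 times the energy of a 980 nm photon, the nanoparticles must pool the energy of at least two infrared photons for each green photon emitted — the hallmark of upconversion materials.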
Keyword: Vision
Link ID: 25999 - Posted: 03.01.2019
By Tom Avril Smoking cigarettes has long been known for its ability to damage eyesight, on top of the harm it causes to the lungs, heart and other organs. But a new study suggests that smoking can impair vision far earlier than is commonly thought. Heavy smokers with an average age of 35 were markedly worse than nonsmokers at distinguishing colors as well as the contrast between different shades of gray, the study authors said. Previous research has linked smoking with macular degeneration and cataracts, which tend to occur decades later. The new results, published in Psychiatry Research, do not indicate how smoking damages perception of color and contrast. But the broad nature of the impairment suggests that it is not the result of damage to specific kinds of light-sensitive cells, such as rods or cones, said co-author Steven Silverstein, a professor of psychiatry and ophthalmology at Rutgers Robert Wood Johnson Medical School. Instead, cigarette use probably harms a more general aspect of vision biology, such as blood vessels or nerve cells. “There is probably some more widespread problem like overall blood flow in the eye that is compromised due to all the toxic chemicals in cigarettes,” said Silverstein, who collaborated with authors from the Perception, Neuroscience and Behavior Laboratory in Joao Pessoa, Brazil. © 1996-2019 The Washington Post
Keyword: Drug Abuse; Vision
Link ID: 25985 - Posted: 02.26.2019
Nicola Davis The mystery of how the zebra got its stripes might have been solved: researchers say the pattern appears to confuse flies, discouraging them from touching down for a quick bite. The study, published in the journal Plos One, involved horses, zebras, and horses dressed as zebras. The team said the research not only supported previous work suggesting stripes might act as an insect deterrent, but helped unpick why, revealing the patterns only produced an effect when the flies got close. Dr Martin How, co-author of the research from the University of Bristol, said: “The flies seemed to be behaving relatively naturally around both [zebras and horses], until it comes to landing. “We saw that these horseflies were coming in quite fast and almost turning away or sometimes even colliding with the zebra, rather than doing a nice, controlled flight.” Researchers made their discovery by spending more than 16 hours standing in fields and noting how horseflies interacted with nine horses and three zebras – including one somewhat bemusingly called Spot. While horseflies circled or touched the animals at similar rates, landing was a different matter, with a lower rate seen for zebras than horses. To check the effect was not caused by a different smell of zebras and horses, for example, the researchers put black, white and zebra-striped coats on seven horses in turn. While there was no difference in the rate at which the flies landed on the horses’ exposed heads, they touched and landed on the zebra coat far less often than either the black or white garment. © 2019 Guardian News & Media Limited
Keyword: Vision; Evolution
Link ID: 25977 - Posted: 02.21.2019
Fergus Walsh, medical correspondent

A woman from Oxford has become the first person in the world to have gene therapy to try to halt the most common form of blindness in the Western world. Surgeons injected a synthetic gene into the back of Janet Osborne's eye in a bid to prevent more cells from dying. It is the first treatment to target the underlying genetic cause of age-related macular degeneration (AMD). About 600,000 people in the UK are affected by AMD, most of whom are severely sight impaired.

Janet Osborne told BBC News: "I find it difficult to recognise faces with my left eye because my central vision is blurred - and if this treatment could stop that getting worse, it would be amazing."

The treatment was carried out under local anaesthetic last month at Oxford Eye Hospital by Robert MacLaren, professor of ophthalmology at the University of Oxford. He told BBC News: "A genetic treatment administered early on to preserve vision in patients who would otherwise lose their sight would be a tremendous breakthrough in ophthalmology and certainly something I hope to see in the near future."

Mrs Osborne, 80, is the first of 10 patients with AMD taking part in a trial of the gene therapy treatment, manufactured by Gyroscope Therapeutics and funded by Syncona, the investment firm founded by the Wellcome Trust.

The macula is part of the retina and is responsible for central vision and fine detail. In age-related macular degeneration, the retinal cells die and are not renewed. The risk of getting AMD increases with age. © 2019 BBC.
Keyword: Vision
Link ID: 25974 - Posted: 02.19.2019
By Zoe Dubno When I was 12, I became part of the very select group of people who have had a life-changing experience at a fondue restaurant. After repeatedly grabbing my brother’s green fondue fork and eating his steak from the broth pot, I found myself accused of elder-sibling entitlement. But my father, who is colorblind, said I had done nothing wrong; like me, he was unable to see any difference between my brother’s green fork and my orange one. The Ishihara color-vision test he administered on his computer later that night confirmed that I was among those few women with red-green colorblindness. He was excited that I saw “correctly” — which is to say, like him. Back then, the ability to understand his frame of reference was mostly limited to other people barred from becoming astronauts. Now there’s an app for it. Colorblindness can be sort of a fun affliction. Sometimes I see my own private colors, and objects lose their prescribed meanings. Someone’s fashionable, Instagram-friendly sand-colored apartment might become, just for me, a garish baby-food green. The English scientist John Dalton described something similar in “Extraordinary Facts Relating to the Vision of Colours” (1794), the first known scientific study of anomalous color vision: He would often earnestly ask people whether a flower was blue or pink “but was generally considered to be in jest.” I attended a liberal-arts college, so I know full well that philosophizing about the subjective experience of color is best done barefoot in a field while listening to Alice Coltrane music. Biologically, though, the mechanics are relatively straightforward. Humans are trichromats: We see color because three sets of cones inside the eye absorb light at different wavelengths, from red to blue. Colorblindness is, typically, a congenital weakness in one set or another. 
The cones in my eyes that are meant to detect long red wavelengths are abnormal; I may see red and orange, but they’re dim and green-tinted, their energy registering partly on the cones that detect medium-length green wavelengths. (For some colorblind people, the entire season of “autumn” must feel like an elaborate prank.) Those with no working cones in one group — dichromats — experience almost total blindness of that color. Red becomes black. Orange, now redless, becomes yellow. © 2019 The New York Times Company
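The red-becomes-black example lends itself to a tiny, deliberately naive sketch. This is my own illustration, not the essayist's: real colorblindness simulations transform colors through LMS cone space, but simply deleting the long-wavelength channel is enough to reproduce the first example.

```python
# A deliberately crude illustration (mine, not from the essay): a
# protanope lacks working L (long-wavelength) cones, so only the two
# remaining cone classes carry color information. Treating each RGB
# channel as one cone class is a drastic simplification -- proper
# simulations work in LMS cone space -- but it captures the essay's
# first example: pure red, visible only to L cones, collapses to black.

def protanope_view(rgb):
    """Zero out the long-wavelength channel to mimic missing L cones."""
    r, g, b = rgb
    return (0, g, b)

print(protanope_view((255, 0, 0)))   # pure red -> (0, 0, 0), i.e. black
```

The orange-becomes-yellow example needs the fuller LMS treatment, since in this crude model the red energy simply vanishes rather than registering partly on the green-sensitive cones, as the essay describes.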
Keyword: Vision
Link ID: 25957 - Posted: 02.13.2019
Using a novel patient-specific stem cell-based therapy, researchers at the National Eye Institute (NEI) prevented blindness in animal models of geographic atrophy, the advanced "dry" form of age-related macular degeneration (AMD), which is a leading cause of vision loss among people age 65 and older. The protocols established by the animal study, published January 16 in Science Translational Medicine (STM), set the stage for a first-in-human clinical trial testing the therapy in people with geographic atrophy, for which there is currently no treatment. "If the clinical trial moves forward, it would be the first ever to test a stem cell-based therapy derived from induced pluripotent stem cells (iPSC) for treating a disease," said Kapil Bharti, Ph.D., a Stadtman Investigator and head of the NEI Unit on Ocular and Stem Cell Translational Research. Bharti was the lead investigator for the animal-model study published in STM. The NEI is part of the National Institutes of Health. The therapy involves taking a patient’s blood cells and, in a lab, converting them into iPS cells, which can become any type of cell in the body. The iPS cells are programmed to become retinal pigment epithelial cells, the type of cell that dies early in the geographic atrophy stage of macular degeneration. RPE cells nurture photoreceptors, the light-sensing cells in the retina. In geographic atrophy, once RPE cells die, photoreceptors eventually also die, resulting in blindness. The therapy is an attempt to shore up the health of remaining photoreceptors by replacing dying RPE with iPSC-derived RPE.
Keyword: Vision
Link ID: 25873 - Posted: 01.17.2019
Kohske Takahashi

We report a novel illusion, the curvature blindness illusion: a wavy line is perceived as a zigzag line. The following conditions are required for this illusion to occur. First, the luminance contrast polarity of the wavy line against the background must reverse at the turning points. Second, the curvature of the wavy line must be somewhat low; a right angle is too steep to produce the illusion. The illusion implies that, in order to perceive a gentle curve, more conditions must be satisfied (constant contrast polarity) than to perceive an obtuse corner. Notably, observers actually “see” an illusory zigzag line in place of a physically wavy line, rather than merely having impaired perception. We propose that the underlying mechanisms for gentle-curve perception and for obtuse-corner perception compete with each other in an imbalanced way, and that corner percepts may be dominant in the visual system.

Perception of contour and shape is one of the basic functions of vision. To this end, the visual system processes information hierarchically: first it extracts local orientations, then it integrates the local orientations into intermediate representations of contour, and finally it forms global shape percepts (Loffler, 2008). The intermediate representation of contour includes concavity, convexity, corner angle, curvature, and so forth. Although the physical shape is determined by the combination of local orientations, perceptual shape is susceptible to several factors. Accordingly, as visual illusions demonstrate, percepts are not necessarily veridical. For example, the café wall illusion (Pierce, 1898) demonstrates that parallel horizontal lines are perceived as lying at different angles to each other.
Keyword: Vision
Link ID: 25827 - Posted: 12.28.2018
By Nicole Wetsman

If a sign tells you to follow the purple skull to your destination, the answer seems simple: veer left. But isolate the stripes that make up the skulls, and you’ll find neither skull has purple bones; in fact, all the bones are the same color. Slot them back into the banded setting, and they shift to purple and orange. The pigments morph because of the Munker-White illusion, which shifts the perception of two identical color tones when they’re placed against different surrounding hues.

No one knows for sure, but the illusion probably results from what David Novick, a computer scientist at the University of Texas at El Paso, calls the color-completion effect. The phenomenon causes an image to skew toward the color of the objects that surround it. In a black-and-white image, a gray element would appear lighter when it’s striped with white, and darker when banded with black. Many neuroscientists think that neural signals in charge of relaying information about the pigments in our visual field get averaged—creating a color somewhere in the middle.

Here, one skull is covered by blue stripes in the foreground and the other with yellow ones. When the original skulls take on the characteristics of the separate surroundings, they look like different colors entirely. Don’t be fooled: follow both skulls by going straight. Copyright © 2018 Popular Science
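The averaging account of the color-completion effect can be written down as a toy model. The sketch below is my own illustration, not Novick's model (the `pull` parameter and the example colors are free assumptions): it simply blends a target color toward the color of the stripes laid over it.

```python
# A toy model (my own sketch, not Novick's) of the color-completion
# idea described above: the perceived color of an element is pulled
# toward the colors of the stripes overlaying it, modeled here as a
# plain weighted average of target and surround.

def assimilated(target_rgb, surround_rgb, pull=0.5):
    """Blend a target color toward its surround; `pull` is the fraction
    of the surround mixed in (a free parameter of this toy model)."""
    return tuple(
        round((1 - pull) * t + pull * s)
        for t, s in zip(target_rgb, surround_rgb)
    )

skull = (200, 120, 60)           # the same bone color in both skulls
blue_stripes = (40, 60, 200)
yellow_stripes = (230, 210, 40)

print(assimilated(skull, blue_stripes))    # drifts toward purple
print(assimilated(skull, yellow_stripes))  # stays warm, toward orange
```

With a blue surround the blend drifts toward purple; with a yellow surround it stays warm and orange — matching the two apparent skull colors even though the starting bone color is identical.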
Keyword: Vision
Link ID: 25806 - Posted: 12.21.2018
Scientists at the National Eye Institute (NEI) have found that neurons in the superior colliculus, an ancient midbrain structure found in all vertebrates, are key players in allowing us to detect visual objects and events. This structure doesn’t help us recognize what the specific object or event is; instead, it’s the part of the brain that decides something is there at all. By comparing brain activity recorded from the right and left superior colliculi at the same time, the researchers were able to predict whether an animal was seeing an event. The findings were published today in the journal Nature Neuroscience. NEI is part of the National Institutes of Health. Perceiving objects in our environment requires not just the eyes, but also the brain’s ability to filter information, classify it, and then understand or decide that an object is actually there. Each step is handled by different parts of the brain, from the eye’s light-sensing retina to the visual cortex and the superior colliculus. For events or objects that are difficult to see (a gray chair in a dark room, for example), small changes in the amount of visual information available and recorded in the brain can be the difference between tripping over the chair or successfully avoiding it. This new study shows that this process – deciding that an object is present or that an event has occurred in the visual field – is handled by the superior colliculus. “While we’ve known for a long time that the superior colliculus is involved in perception, we really wanted to know exactly how this part of the brain controls the perceptual choice, and find a way to describe that mechanism with a mathematical model,” said James Herman, Ph.D., lead author of the study.
Keyword: Vision
Link ID: 25723 - Posted: 11.27.2018