Chapter 17. Learning and Memory
By Jason G. Goldman In the summer of 2015 University of Oxford zoologists Antone Martinho III and Alex Kacelnik began quite the cute experiment—one involving ducklings and blindfolds. They wanted to see how the baby birds imprinted on their mothers depending on which eye was available. Why? Because birds lack a part of the brain humans take for granted. Suspended between the left and right hemispheres of our brains sits the corpus callosum, a thick bundle of nerves. It acts as an information bridge, allowing the left and right sides to rapidly communicate and act as a coherent whole. Although the hemispheres of a bird's brain are not entirely separated, the animals do not enjoy the benefits of this pathway. This quirk of avian neuroanatomy sets up a natural experiment. “I was in St. James's Park in London, and I saw some ducklings with their parents in the lake,” Martinho says. “It occurred to me that we could look at the instantaneous transfer of information through imprinting.” The researchers covered one eye of each of 64 ducklings and then presented a fake red or blue adult duck. This colored duck became “Mom,” and the ducklings followed it around. But when some of the ducklings' blindfolds were swapped so they could see out of only the other eye, they did not seem to recognize their “parent” anymore. Instead the ducklings in this situation showed equal affinity for both the red and blue ducks. It took three hours before any preferences began to emerge. Meanwhile ducklings with eyes that were each imprinted to a different duck did not show any parental preferences when allowed to use both eyes at once. The study was recently published in the journal Animal Behaviour. © 2017 Scientific American
By David Wiegand I just did something great for my brain and you can do the same, when the documentary “My Love Affair With the Brain: The Life and Science of Dr. Marian Diamond” airs on KQED on Wednesday, March 22. According to the UC Berkeley professor emerita, the five things that contribute to the continued development of the brain at any age are: diet, exercise, newness, challenge and love. You can check off three of those elements for the day by watching the film by Catherine Ryan and Gary Weimberg. No matter how smart you are, even about anatomy and neuroscience, you will find newness in the information about the miraculous human brain, how it works, and how it keeps on working no matter how old you are. That’s one of the fundamentals of modern neuroscience, of which Diamond is one of the founders. You will also be challenged to consider your own brain, to consider how Diamond’s favorite expression — “use it or lose it” — applies to your brain and your life. You will be challenged to consider what Diamond means when she says brain plasticity (its ability to keep developing by forming new connections between its cells) makes us “the masters of our own minds. We literally create our own masterpiece.” Before Diamond and her colleagues proved otherwise, the prevailing thought was that brains developed according to a genetically determined pattern, hit a high point and then essentially began to deteriorate. Bushwa: A brain can grow — i.e., learn — at any age, and you can teach an old dog new tricks. © 2017 Hearst Corporation
Keyword: Learning & Memory
Link ID: 23392 - Posted: 03.23.2017
By Mo Costandi

This map of London shows how many other streets are connected to each street, with blue representing simple streets with few connecting streets and red representing complex streets with many connecting streets. Credit: Joao Pinelo Silva

The brain contains a built-in GPS that relies on memories of past navigation experiences to simulate future ones. But how does it represent new environments in order to determine how to navigate them successfully? And what happens in the brain when we enter a new space, or use satellite navigation (SatNav) technology to help us find our way around? Research published Tuesday in Nature Communications reveals two distinct brain regions that cooperate to simulate the topology of one’s environment and plan future paths through it when one is actively navigating. In addition, the research suggests both regions become inactive when people follow SatNav instructions instead of using their spatial memories. In a previous study researchers at University College London took participants on a guided tour through the streets of London’s Soho district and then used functional magnetic resonance imaging (fMRI) to scan their brains as they watched 10 different simulations of navigating those streets. Some of the movies required them to decide at intersections which way would be the shortest path to a predetermined destination; others came with instructions about which way to go at each junction. © 2017 Scientific American,
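The "connectivity" pictured in the map is simply the number of other streets joining each street. As a minimal sketch (the street names and links below are made up for illustration, not taken from the researchers' data), the measure amounts to counting neighbors in a street graph:

```python
# Hypothetical street graph: each street maps to the streets that join it.
street_links = {
    "Broadwick St": ["Berwick St", "Wardour St", "Poland St"],
    "Berwick St":   ["Broadwick St", "Peter St"],
    "Wardour St":   ["Broadwick St", "Old Compton St", "Brewer St"],
}

def connectivity(streets):
    """Return, for each street, how many other streets connect to it."""
    return {name: len(links) for name, links in streets.items()}

# Streets with many connections would be "red" (complex) on the map,
# streets with few would be "blue" (simple).
print(connectivity(street_links))
# → {'Broadwick St': 3, 'Berwick St': 2, 'Wardour St': 3}
```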
Keyword: Learning & Memory
Link ID: 23391 - Posted: 03.22.2017
Laura Sanders Not too long ago, the internet was stationary. Most often, we’d browse the Web from a desktop computer in our living room or office. If we were feeling really adventurous, maybe we’d cart our laptop to a coffee shop. Looking back, those days seem quaint. Today, the internet moves through our lives with us. We hunt Pokémon as we shuffle down the sidewalk. We text at red lights. We tweet from the bathroom. We sleep with a smartphone within arm’s reach, using the device as both lullaby and alarm clock. Sometimes we put our phones down while we eat, but usually faceup, just in case something important happens. Our iPhones, Androids and other smartphones have led us to effortlessly adjust our behavior. Portable technology has overhauled our driving habits, our dating styles and even our posture. Despite the occasional headlines claiming that digital technology is rotting our brains, not to mention what it’s doing to our children, we’ve welcomed this alluring life partner with open arms and swiping thumbs. Scientists suspect that these near-constant interactions with digital technology influence our brains. Small studies are turning up hints that our devices may change how we remember, how we navigate and how we create happiness — or not. Somewhat limited, occasionally contradictory findings illustrate how science has struggled to pin down this slippery, fast-moving phenomenon. Laboratory studies hint that technology, and its constant interruptions, may change our thinking strategies. Like our husbands and wives, our devices have become “memory partners,” allowing us to dump information there and forget about it — an off-loading that comes with benefits and drawbacks. Navigational strategies may be shifting in the GPS era, a change that might be reflected in how the brain maps its place in the world. Constant interactions with technology may even raise anxiety in certain settings. © Society for Science & the Public 2000 - 2017
Ian Sample Science editor Researchers have overcome one of the major stumbling blocks in artificial intelligence with a program that can learn one task after another using skills it acquires on the way. Developed by Google’s AI company, DeepMind, the program has taken on a range of different tasks and performed almost as well as a human. Crucially, and uniquely, the AI does not forget how it solved past problems, and uses the knowledge to tackle new ones. The AI is not capable of the general intelligence that humans draw on when they are faced with new challenges; its use of past lessons is more limited. But the work shows a way around a problem that had to be solved if researchers are ever to build so-called artificial general intelligence (AGI) machines that match human intelligence. “If we’re going to have computer programs that are more intelligent and more useful, then they will have to have this ability to learn sequentially,” said James Kirkpatrick at DeepMind. The ability to remember old skills and apply them to new tasks comes naturally to humans. A regular rollerblader might find ice skating a breeze because one skill helps the other. But recreating this ability in computers has proved a huge challenge for AI researchers. AI programs are typically one-trick ponies that excel at one task, and one task only.
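The article does not spell out the mechanism, but the core idea of DeepMind's approach (published as "elastic weight consolidation") can be sketched as a penalty that discourages changing the weights an old task relied on while learning a new one. The numbers below are illustrative stand-ins, not values from the paper:

```python
import numpy as np

def penalized_loss(new_task_loss, weights, old_weights, importance, lam=1.0):
    """New-task loss plus a quadratic penalty anchoring weights that
    were important for the previously learned task. `importance` plays
    the role of the per-weight Fisher information in the actual method."""
    penalty = 0.5 * lam * np.sum(importance * (weights - old_weights) ** 2)
    return new_task_loss + penalty

old_w = np.array([1.0, -2.0, 0.5])   # weights after learning task A
imp   = np.array([10.0, 0.1, 5.0])   # how strongly task A depends on each weight
new_w = np.array([1.1, 3.0, 0.4])    # candidate weights while learning task B

# Moving a low-importance weight (the middle one) is cheap; moving a
# high-importance weight is expensive, so old skills are preserved.
print(penalized_loss(0.0, new_w, old_w, imp))
```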
Laurel Hamers Mistakes can be learning opportunities, but the brain needs time for lessons to sink in. When facing a fast and furious stream of decisions, even the momentary distraction of noting an error can decrease accuracy on the next choice, researchers report in the March 15 Journal of Neuroscience. “We have a brain region that monitors and says ‘you messed up’ so that we can correct our behavior,” says psychologist George Buzzell, now at the University of Maryland in College Park. But sometimes, that monitoring system can backfire, distracting us from the task at hand and causing us to make another error. “There does seem to be a little bit of time for people, after mistakes, where you're sort of offline,” says Jason Moser, a psychologist at Michigan State University in East Lansing, who wasn’t part of the study. To test people’s response to making mistakes, Buzzell and colleagues at George Mason University in Fairfax, Va., monitored 23 participants’ brain activity while they worked through a challenging task. Concentric circles flashed briefly on a screen, and participants had to respond with one hand if the two circles were the same color and the other hand if the circles were subtly different shades. After making a mistake, participants generally answered the next question correctly if they had a second or so to recover. But when the next challenge came very quickly after an error, as little as 0.2 seconds, accuracy dropped by about 10 percent. Electrical activity recorded from the visual cortex showed that participants paid less attention to the next trial if they had just made a mistake than if they had responded correctly. © Society for Science & the Public 2000 - 2017
There is widespread interest among teachers in the use of neuroscientific research findings in educational practice. However, there are also misconceptions and myths that are supposedly based on sound neuroscience that are prevalent in our schools. We wish to draw attention to this problem by focusing on an educational practice supposedly based on neuroscience that lacks sufficient evidence and so we believe should not be promoted or supported. Generally known as “learning styles”, it is the belief that individuals can benefit from receiving information in their preferred format, based on a self-report questionnaire. This belief has much intuitive appeal because individuals are better at some things than others and ultimately there may be a brain basis for these differences. Learning styles promises to optimise education by tailoring materials to match the individual’s preferred mode of sensory information processing. There are, however, a number of problems with the learning styles approach. First, there is no coherent framework of preferred learning styles. Usually, individuals are categorised into one of three preferred styles of auditory, visual or kinesthetic learners based on self-reports. One study found that there were more than 70 different models of learning styles including among others, “left v right brain,” “holistic v serialists,” “verbalisers v visualisers” and so on. The second problem is that categorising individuals can lead to the assumption of a fixed or rigid learning style, which can impair motivation to apply oneself or adapt. Finally, and most damning, systematic studies of the effectiveness of learning styles have consistently found either no evidence or very weak evidence to support the hypothesis that matching or “meshing” material to an individual’s learning style is selectively more effective for educational attainment.
Students will improve if they think about how they learn but not because material is matched to their supposed learning style.
Keyword: Learning & Memory
Link ID: 23352 - Posted: 03.14.2017
By Knvul Sheikh As we get older, we start to think a little bit more slowly, we are less able to multitask and our ability to remember things gets a little wobblier. This cognitive transformation is linked to a steady, widespread thinning of the cortex, the brain's outermost layer. Yet the change is not inevitable. So-called super agers retain their good memory and thicker cortex as they age, a recent study suggests. Researchers believe that studying what makes super agers different could help unlock the secrets to healthy brain aging and improve our understanding of what happens when that process goes awry. “Looking at successful aging could provide us with biomarkers for predicting resilience and for things that might go wrong in people with age-related diseases like Alzheimer's and dementia,” says study co-author Alexandra Touroutoglou, a neuroscientist at Harvard Medical School. Touroutoglou and her team gave standard recall tests to a group of 40 participants between the ages of 60 and 80 and 41 participants aged 18 to 35. Among the older participants, 17 performed as well as or better than adults four to five decades younger. When the researchers looked at MRI scans of the super agers' brains, they found that their brains not only functioned more like young brains, they also looked very similar. Two brain networks in particular seemed to be protected from shrinking: the default mode network, which helps to store and recall new information, and the salience network, which is associated with directing attention and identifying important details. In fact, the thicker these regions were, the better the super agers' memory was. © 2017 Scientific American,
By Torah Kachur A simple, non-invasive, non-medicinal, safe and cheap way to get a better night's sleep is to play some pink noise, according to a study published on Wednesday in the journal Frontiers in Human Neuroscience. Pink noise has more lower octaves than typical white noise and is hardly soothing. For example, it can be one-second pulses of the sound of a rushing waterfall. The short pieces of quick, quiet sounds would be really annoying if you were trying to fall asleep. But the pink noise isn't trying to get you to fall asleep; it's trying to keep you in a very deep sleep where you have slow brainwaves. This is one of our deepest forms of sleep and, in particular, seems to decline in aging adults. "When you play the pulses at particular times during deep sleep, it actually leads to an enhancement of the electrical signal. So it leads to essentially more of a synchronization of the neurons," said Nelly Papalambros, a PhD student at Northwestern University and the first author on the work. The pulses are timed to coincide with your entry into slow wave sleep. They pulse to the same beat as your brainwaves, and they seem to increase the effectiveness of your very valuable and very elusive deep sleep. That slow wave sleep is critical for memory consolidation or, basically, your ability to incorporate new material learned that day with old material and memories. ©2017 CBC/Radio-Canada.
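Pink noise itself is easy to approximate in code. One common recipe (an illustration of what "more lower octaves" means, not necessarily how the study generated its stimuli) is to take white noise and scale its spectrum by 1/sqrt(f), so that power falls off as 1/f toward higher frequencies:

```python
import numpy as np

def pink_noise(n, rng=None):
    """Approximate pink (1/f) noise by spectrally shaping white noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                       # avoid dividing by zero at DC
    pink = np.fft.irfft(spectrum / np.sqrt(freqs), n)
    return pink / np.max(np.abs(pink))        # normalize to the range [-1, 1]

burst = pink_noise(44100)                     # a one-second burst at 44.1 kHz
print(burst.shape)
```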
Mo Costandi To many of us, having to memorize a long list of items feels like a chore. But for others, it is more like a sport. Every year, hundreds of these ‘memory athletes’ compete with one another in the World Memory Championships, memorizing hundreds of words, numbers, or other pieces of information within minutes. The current world champion is Alex Mullen, who beat his competitors by memorizing a string of more than 550 digits in under 5 minutes. You may think that such prodigious mental feats are linked to having an unusual brain, or to being extraordinarily clever. But they are not. New research published in the journal Neuron shows that you, too, can be a super memorizer with just six weeks of intensive mnemonic training, and also reveals the long-lasting changes to brain structure and function that occur as a result of such training. Martin Dresler of Radboud University in the Netherlands and his colleagues recruited 23 memory athletes, all of whom are currently in the top 50 of the memory sports world rankings, and a group of control participants, who had no previous experience of memory training, and who were carefully selected to match the group of champions in age, sex, and IQ. © 2017 Guardian News and Media Limited
Keyword: Learning & Memory
Link ID: 23333 - Posted: 03.09.2017
By Jackie Snow Last month, Facebook announced software that could simply look at a photo and tell, for example, whether it was a picture of a cat or a dog. A related program identifies cancerous skin lesions as well as trained dermatologists can. Both technologies are based on neural networks, sophisticated computer algorithms at the cutting edge of artificial intelligence (AI)—but even their developers aren’t sure exactly how they work. Now, researchers have found a way to "look" at neural networks in action and see how they draw conclusions. Neural networks, also called neural nets, are loosely based on the brain’s use of layers of neurons working together. Like the human brain, they aren't hard-wired to produce a specific result—they “learn” on training sets of data, making and reinforcing connections between multiple inputs. A neural net might have a layer of neurons that look at pixels and a layer that looks at edges, like the outline of a person against a background. After being trained on thousands or millions of data points, a neural network algorithm will come up with its own rules on how to process new data. But it's unclear what the algorithm is using from those data to come to its conclusions. “Neural nets are fascinating mathematical models,” says Wojciech Samek, a researcher at Fraunhofer Institute for Telecommunications at the Heinrich Hertz Institute in Berlin. “They outperform classical methods in many fields, but are often used in a black box manner.” © 2017 American Association for the Advancement of Science.
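The layered structure described above can be made concrete with a toy forward pass: each layer multiplies its inputs by a matrix of weights and applies a nonlinearity. The weights below are random stand-ins rather than a trained network, so this sketches the architecture only, not the learned "rules":

```python
import numpy as np

rng = np.random.default_rng(42)

def layer(x, weights):
    """One layer of 'neurons': weighted sums followed by a ReLU nonlinearity."""
    return np.maximum(0.0, x @ weights)

pixels = rng.random((1, 64))                # a tiny 8x8 "image", flattened
w_edges = rng.standard_normal((64, 16))     # first layer: pixel combinations ("edges")
w_out = rng.standard_normal((16, 2))        # output layer: e.g. cat-vs-dog scores

hidden = layer(pixels, w_edges)
scores = layer(hidden, w_out)
print(scores.shape)  # (1, 2)
```

Training would adjust w_edges and w_out from data; the opacity the article describes comes from those millions of adjusted numbers having no obvious human-readable meaning.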
Laura Spinney The misinformation was swiftly corrected, but some historical myths have proved difficult to erase. Since at least 2010, for example, an online community has shared the apparently unshakeable recollection of Nelson Mandela dying in prison in the 1980s, despite the fact that he lived until 2013, leaving prison in 1990 and going on to serve as South Africa's first black president. Memory is notoriously fallible, but some experts worry that a new phenomenon is emerging. “Memories are shared among groups in novel ways through sites such as Facebook and Instagram, blurring the line between individual and collective memories,” says psychologist Daniel Schacter, who studies memory at Harvard University in Cambridge, Massachusetts. “The development of Internet-based misinformation, such as recently well-publicized fake news sites, has the potential to distort individual and collective memories in disturbing ways.” Collective memories form the basis of history, and people's understanding of history shapes how they think about the future. The fictitious terrorist attacks, for example, were cited to justify a travel ban on the citizens of seven “countries of concern”. Although history has frequently been interpreted for political ends, psychologists are now investigating the fundamental processes by which collective memories form, to understand what makes them vulnerable to distortion. They show that social networks powerfully shape memory, and that people need little prompting to conform to a majority recollection — even if it is wrong. Not all the findings are gloomy, however. Research is pointing to ways of dislodging false memories or preventing them from forming in the first place. © 2017 Macmillan Publishers Limited,
Keyword: Learning & Memory
Link ID: 23324 - Posted: 03.07.2017
By Clare Wilson The repeated thoughts and urges of obsessive compulsive disorder (OCD) may be caused by an inability to learn to distinguish between safe and risky situations. A brain-scanning study has found that the part of the brain that sends out safety signals seems to be less active in people with the condition. People with OCD feel they have to carry out certain actions, such as washing their hands again and again, checking the oven has been turned off, or repeatedly going over religious thoughts. Those worst affected may spend hours every day on these compulsive “rituals”. To find out more about why this happens, Naomi Fineberg of Hertfordshire Partnership University NHS Foundation Trust in the UK and her team trained 78 people to fear a picture of an angry face. The team did it by sometimes giving the volunteers an electric shock to the wrist when they saw the picture while they were lying in an fMRI brain scanner. About half the group had OCD. The team then tried to “detrain” the volunteers, by showing them the same picture many times, but without any shocks. Judging by how much the volunteers sweated in response to seeing the picture, the team found that people without OCD soon learned to stop associating the face with the shock, but people with the condition remained scared. © Copyright Reed Business Information Ltd.
Amanda Montañez A couple of weeks ago I listened to an excellent podcast series on poverty in America. One message that stuck with me is just how many factors the poor have working against them—factors that, if you’re not poor, are all too easy to deny, disregard, or simply fail to notice. In the March issue of Scientific American, neuroscientist Kimberly Noble highlights one such invisible, yet very real, element of poverty: its effect on brain development in children. When considering such a complex topic, any sort of data-driven approach can feel mired in confounding factors and variables. After all, it’s not as if money itself has any impact on the structure or function of one’s brain; rather, it is likely to be an amalgamation of environmental and/or genetic influences accompanying poverty, which results in an overall trend of relatively low achievement among poor children. By definition, this is a multifaceted problem in which correlation and causation seem virtually impossible to untangle. Nonetheless, Noble’s lab is tackling this challenge using the best scientific tools and methods available. First, it is essential to define the problem: in what specific ways does poverty impact brain function? To address this question, Noble recruited some 150 children from various socioeconomic backgrounds and used standard psychological testing methods to evaluate their abilities in several cognitive areas associated with particular parts of the brain. As outlined in the graphs below, the relationships are clear, especially in terms of language skills. © 2017 Scientific American,
By Victoria Sayo Turner When you want to learn something new, you practice. Once you get the hang of it, you can hopefully do what you learned—whether it’s parallel parking or standing backflips—on the next day, and the next. If not, you fall back to stage one and practice some more. But your brain may have a shortcut that helps you lock in learning. Instead of practicing until you’re decent at something and then taking a siesta, practicing just a little longer could be the fast track to solidifying a skill. “Overlearning” is the process of rehearsing a skill even after you no longer improve. Even though you seem to have already learned the skill, you continue to practice at that same level of difficulty. A recent study suggests that this extra practice could be a handy way to lock in your hard-earned skills. In the experiment, participants were asked to look at a screen and say when they saw a stripe pattern. Then two images were flashed one after the other. The images were noisy, like static on an old TV, and only one contained a hard-to-see stripe pattern. It took about twenty minutes of practice for people to usually recognize the image with stripes in it. The participants then continued to practice for another twenty minutes for the overlearning portion. Next, the participants took a break before spending another twenty minutes learning a similar “competitor” task where the stripes were oriented at a new angle. Under normal circumstances, this second task would compete with the first and actually overwrite that skill, meaning people should now be able to detect the second pattern but no longer see the first. The researchers wanted to see if overlearning could prevent the first skill from disappearing. © 2017 Scientific American
Keyword: Learning & Memory
Link ID: 23293 - Posted: 03.01.2017
Rae Ellen Bichell Initially, Clint Perry wanted to make a vending machine for bumblebees. He wanted to understand how they solve problems. Perry, a cognitive biologist at Queen Mary University of London, is interested in testing the limits of animal intelligence. "I want to know: How does the brain do stuff? How does it make decisions? How does it keep memory?" says Perry. And how big does a brain need to be in order to do all of those things? He decided to test this on bumblebees by presenting the insects with a puzzle that they'd likely never encounter in the wild. He didn't end up building that vending machine, but he did put bees through a similar scenario. Perry and his colleagues wrote Thursday in the journal Science that, despite bees' miniature brains, they can solve new problems quickly just by observing a demonstration. This suggests that bees, which are important crop pollinators, could in time adapt to new food sources if their environment changed. As we have reported on The Salt before, bee populations around the world have declined in recent years. Scientists think a changing environment is at least partly responsible. Perry and colleagues built a platform with a porous ball sitting at the center of it. If a bee went up to the ball, it would find that it could access a reward, sugar water. One by one, bumblebees walked onto the platform, explored a bit, and then slurped up the sugar water in the middle. "Essentially, the first experiment was: Can bees learn to roll a ball?" says Perry. © 2017 npr
Jon Hamilton Researchers have created mice that appear impervious to the lure of cocaine. Even after the genetically engineered animals were given the drug repeatedly, they did not appear to crave it the way typical mice do, a team reports in Nature Neuroscience. "They didn't keep going into the room where they received the cocaine and they seemed to be just as happy exploring all around the cage," says Shernaz Bamji, a professor in the Department of Cellular and Physiological Sciences at the University of British Columbia in Vancouver. "Addiction is a form of learning," Bamji says. And somehow, these mice never learned to associate the pleasurable feelings produced by cocaine with the place where they received the drug. The result was startling because the scientists thought these mice would be especially susceptible to addiction. "We repeated the experiment several times to see if we had made a mistake," Bamji says. The reason for the team's surprise had to do with proteins that affect learning. The animals had been genetically engineered to produce high levels of proteins called cadherins in the brain's "reward circuit," which plays an important role in addiction. And genetic studies have suggested that people with high levels of cadherins are more susceptible to drug addiction. Cadherins act a bit like glue, binding cells together. Usually this glue enhances learning by strengthening the connections, or synapses, between brain cells. © 2017 npr
After A Stroke At 33, A Writer Relies On Journals To Piece Together Her Own Story

On New Year's Eve, 2006, Christine Hyung-Oak Lee developed a splitting headache. She was 33, and her world turned upside down — as in, she literally saw the world upside down. Suddenly, she could hold things in her mind for only 15 minutes at a time. She was a writer who now couldn't recall words or craft sentences. She remembers looking at the phone and thinking to herself: What is the phone number for 911? Days later, she learned she'd had a stroke. "I had a 15-minute short-term memory, like Dory the fish in Finding Nemo," Lee wrote in a Buzzfeed essay chronicling her experience. "My doctors instructed me to log happenings with timestamps in my Moleskine journal. That, they said, would be my working short-term memory. My memento to my mori." Lee used those journals to reconstruct her experience in a new memoir called Tell Me Everything You Don't Remember. She talks with NPR's Scott Simon about the silver linings of memory loss and the unexpected grief that came with her recovery.

Interview Highlights

On what it's like to have a 15-minute memory

You don't even fathom the magnitude of your loss — or at least I didn't. I couldn't plan for the future. I couldn't think of the past. I had no regrets. So it's literally living in the moment. I was experiencing something that people go to yoga and Zen retreats to achieve. So it was quite pleasant. It was not pleasant for the people around me. But in that period of my recovery, where I couldn't remember everything, I think I was incredibly at peace and happy.

On having an "invisible" disability

It was frustrating. On the one hand, you want people to know: Hey, slow down for me. Hey, I'm going through a crisis. On the other hand, I was also privileged to be disabled in a way that wasn't visible. So people also didn't treat me any differently. So it was very isolating. ...
When I told people that I was sick and I needed them to slow down, along with that came this need to explain my position and I ... felt a lot of resentment for having to do that. © 2017 npr
By Catherine Offord As an undergraduate at Auburn University in the early 2000s, Jeremy Day was thinking of becoming an architect. But an opportunity to work on a research project investigating reward learning in rodents changed the course of his career. “It really hooked me,” he says. “It made me immediately wonder what mechanisms were underlying that behavior in the animal’s brain.” It’s a question Day has pursued ever since. In 2004, he enrolled in a PhD program at the University of North Carolina at Chapel Hill and began studying neural reward signaling under the mentorship of neuroscientist Regina Carelli. “He was a stellar student by all accounts,” Carelli recalls. “He was very clear on the type of work he wanted to do, even that early on in his career.” Focusing on the nucleus accumbens, a brain region involved in associative learning, Day measured dopamine levels in rats undergoing stimulus-reward experiments. Although a rat’s brain released dopamine on receipt of a reward early in training, Day found that, as the rodent became accustomed to specific cues predicting those rewards, this dopamine spike shifted to accompany the cues instead, indicating a changing role for the chemical during learning.1 Day completed his PhD in 2009, but realized that to better understand dopamine signaling and errors in the brain’s reward system that lead to addiction, he would need a broader skill set. “I had a strong background in systems neuroscience, but my training in molecular neuroscience was not as strong,” he explains. So he settled on “a field that I knew almost nothing about”—epigenetics—and joined David Sweatt’s lab at the University of Alabama at Birmingham (UAB) as a postdoc. For someone used to a field where “data come in as it’s happening,” Day says, “transitioning to a molecular lab where you might do an assay and you don’t get an answer for a week or two was a culture shock.” © 1986-2017 The Scientist
by Linda Rodriguez McRobbie If you ask Jill Price to remember any day of her life, she can come up with an answer in a heartbeat. What was she doing on 29 August 1980? “It was a Friday, I went to Palm Springs with my friends, twins, Nina and Michelle, and their family for Labour Day weekend,” she says. “And before we went to Palm Springs, we went to get them bikini waxes. They were screaming through the whole thing.” Price was 14 years and eight months old. What about the third time she drove a car? “The third time I drove a car was January 10 1981. Saturday. Teen Auto. That’s where we used to get our driving lessons from.” She was 15 years and two weeks old. The first time she heard the Rick Springfield song Jessie’s Girl? “March 7 1981.” She was driving in a car with her mother, who was yelling at her. She was 16 years and two months old. Price was born on 30 December 1965 in New York City. Her first clear memories start from around the age of 18 months. Back then, she lived with her parents in an apartment across the street from Roosevelt Hospital in Midtown Manhattan. She remembers the screaming ambulances and traffic, how she used to love climbing on the living room couch and staring out of the window down 9th Avenue. When she was five years and three months old, her family – her father, a talent agent with William Morris who counted Ray Charles among his clients; her mother, a former variety show dancer, and her baby brother – moved to South Orange, New Jersey. They lived in a three-storey, red brick colonial house with a big backyard and huge trees, the kind of place people left the city for. Jill loved it.
Keyword: Learning & Memory
Link ID: 23201 - Posted: 02.08.2017