Friday, September 14, 2007

The Memory Code (extended version)

By Joe Z. Tsien

Scientific American, June 17, 2007

Extended version: Researchers are closing in on the rules that the brain uses to lay down memories. Discovery of this memory code could lead to the design of smarter computers and robots and even to new ways to peer into the human mind

Anyone who has ever been in an earthquake has vivid memories of it: the ground shakes, trembles, buckles and heaves; the air fills with sounds of rumbling, cracking and shattering glass; cabinets fly open; books, dishes and knickknacks tumble from shelves. We remember such episodes--with striking clarity and for years afterward--because that is what our brains evolved to do: extract information from salient events and use that knowledge to guide our responses to similar situations in the future. This ability to learn from past experience allows all animals to adapt to a world that is complex and ever changing.

For decades, neuroscientists have attempted to unravel how the brain makes memories. Now, by combining a set of novel experiments with powerful mathematical analyses and an ability to record simultaneously the activity of more than 200 neurons in awake mice, my colleagues and I have discovered what we believe is the basic mechanism the brain uses to draw vital information from experiences and turn that information into memories. Our results add to a growing body of work indicating that a linear flow of signals from one neuron to another is not enough to explain how the brain represents perceptions and memories. Rather, the coordinated activity of large populations of neurons is needed.

Furthermore, our studies indicate that neuronal populations involved in encoding memories also extract the kind of generalized concepts that allow us to transform our daily experiences into knowledge and ideas. Our findings bring biologists closer to deciphering the universal neural code: the rules the brain follows to convert collections of electrical impulses into perception, memory, knowledge and, ultimately, behavior. Such understanding could allow investigators to develop more seamless brain-machine interfaces, design a whole new generation of smart computers and robots, and perhaps even assemble a codebook of the mind that would make it possible to decipher--by monitoring neural activity--what someone remembers and thinks.

My group's research into the brain code grew out of work focused on the molecular basis of learning and memory. In the fall of 1999 we generated a strain of mice engineered to have improved memory. This "smart" mouse--nicknamed Doogie after the brainy young doctor in the early-1990s TV dramedy Doogie Howser, M.D.--learns faster and remembers things longer than wild-type mice. The work generated great interest and debate and even made the cover of Time magazine. But our findings left me asking, What exactly is a memory?

Scientists knew that converting perceptual experiences into long-lasting memories requires a brain region called the hippocampus. And we even knew what molecules are critical to the process, such as the NMDA receptor, which we altered to produce Doogie. But no one knew how, exactly, the activation of nerve cells in the brain represents memory. A few years ago I began to wonder if we could find a way to describe mathematically or physiologically what memory is. Could we identify the relevant neural network dynamic and visualize the activity pattern that occurs when a memory is formed?

For the better part of a century, neuroscientists had been attempting to discover which patterns of nerve cell activity represent information in the brain and how neural circuits process, modify and store information needed to control and shape behavior. Their earliest efforts involved simply trying to correlate neural activity--the frequency at which nerve cells fire--with some sort of measurable physiological or behavioral response. For example, in the mid-1920s Edgar Adrian performed electrical recordings on frog tissue and found that the firing rate of individual stretch nerves attached to a muscle varies with the amount of weight that is put on the muscle. This study was the first to suggest that information (in this case the intensity of a stimulus) can be conveyed by changes in neural activity--work for which he later won a Nobel Prize.

Since then, many researchers using a single electrode to monitor the activity of one neuron at a time have shown that, when stimulated, neurons in different areas of the brain also change their firing rates. For example, pioneering experiments by David H. Hubel and Torsten N. Wiesel demonstrated that the neurons in the primary visual cortex of cats, an area at the back of the brain, respond vigorously to the moving edges of a bar of light. Charles G. Gross of Princeton University and Robert Desimone of the Massachusetts Institute of Technology found that neurons in a different brain region of the monkey (the inferotemporal cortex) can alter their behavior in response to more complex stimuli, such as pictures of faces.

Cells in the hippocampus also change their firing rates in reaction to various types of stimulation. Elegant studies by Richard Thompson of the University of Southern California showed that classical "eye-blink conditioning" modifies neuronal discharges in the hippocampus. In these experiments, Thompson coupled a tone with the administration of a puff of air to a rabbit's eye. The animals learned to blink when they heard the tone--and neurons in the hippocampus changed their firing frequencies in response to the cue. Working with rats, John O'Keefe of University College London discovered "place" cells: neurons in the hippocampus that exhibit heightened firing when an animal runs across a particular spot in a familiar environment.

If a stimulus can alter the activity of specific neurons, researchers reasoned, then perhaps information is encoded entirely by changes in the frequency with which nerve cells fire. This "rate code theory," however, is not sufficient to explain how the brain handles perceptions, thoughts and memories. Time and again, researchers were disappointed to learn that the firing rate of a single neuron could not be relied on to predict whether an animal had received a particular stimulus. A neuron that increases its firing dramatically in response to a face in one trial might respond weakly--or not at all--in subsequent trials. This variability issue also affects place cells, which can behave even more erratically. A place cell, for example, generally increases its firing only when an animal moves through a physical location at a certain speed--but not when the animal sits in that spot. Thus, a researcher cannot simply look at the activity of a particular neuron and declare: yes, the monkey saw the face or yes, the rat visited this part of its cage.

The traditional way neuroscientists deal with the variable response of individual neurons is to average the discharge rate of the neuron in question over repeated trials. But the brain is unlikely to use this strategy. An animal can’t wait until it has experienced something 100 times to figure out what is going on.

Because the absolute neuronal firing rate turned out to be a relatively poor indicator of whether a given stimulus is present, researchers speculated that they must be missing something. Perhaps, they reasoned, it is not the average frequency with which a given neuron spikes that carries the information, but the duration of the intervals between each individual spike that matters. Such a "temporal code" appears to operate in the visual system. Incorporating interspike intervals gives scientists an enhanced ability to decode information carried by neurons in the retina. Keeping track of interspike intervals, however, might be trickier for a neuron in the brain than it is for a lab-coated observer: in the controlled experiments, researchers know exactly when the stimulus is presented, so they know which spike is the first in the series and hence when they need to start counting. The brain would need some other method of determining when a particular neuron begins transmitting the information in its often irregularly timed train of spikes.

Studies that combined the rate code and the temporal code in analyzing the activities of single neurons, however, still failed to predict reliably the occurrence or identity of a stimulus. Such disappointments led to the idea that information about experience is encoded not by the firing of single chains of neurons but by the combined activity of whole populations of nerve cells--by "population coding." Technical advances in the past 10 years have permitted researchers to monitor simultaneously the activities of multiple neurons--and thus to explore whether neuronal populations can convey richer information than individual neurons do.

In the 1980s Apostolos P. Georgopoulos of the University of Minnesota was among the first to apply a population-based method to estimate the direction of arm movement in monkeys. Recording the activities of a handful of neurons, Georgopoulos found that calculating the sum of the contribution of all cells in a population--weighted to account for how heavily each neuron influences the overall activity of the network--can greatly improve prediction of which way the monkey would move its arm. Similarly, Bruce L. McNaughton of the University of Arizona has used recordings from groups of place cells to better predict whether a rat has scampered through a particular location.
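The weighted-sum idea Georgopoulos used can be sketched numerically. The toy code below (all neuron counts, tuning parameters and firing rates are invented for illustration) decodes a movement direction by summing each simulated neuron's preferred-direction vector, weighted by how much its firing rate deviates from baseline:

```python
import numpy as np

# Toy population-vector decoding in the spirit of Georgopoulos's work.
# Every number here is made up; real recordings are far noisier.
rng = np.random.default_rng(0)
n_neurons = 50
preferred = rng.uniform(0, 2 * np.pi, n_neurons)  # preferred angles (radians)

true_direction = np.pi / 4                        # the movement to decode
baseline, gain = 10.0, 8.0
# Cosine tuning: a neuron fires fastest when the movement matches
# its preferred direction.
rates = baseline + gain * np.cos(true_direction - preferred)

# Population vector: sum each neuron's preferred-direction unit vector,
# weighted by its rate modulation relative to baseline.
weights = rates - baseline
pop_vec = np.array([np.sum(weights * np.cos(preferred)),
                    np.sum(weights * np.sin(preferred))])
estimate = np.arctan2(pop_vec[1], pop_vec[0])
print(f"true = {true_direction:.3f} rad, decoded = {estimate:.3f} rad")
```

Even with only 50 simulated neurons, the weighted vector sum lands close to the true direction, whereas no single neuron's rate would pin it down.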

Today neuroscientists widely agree that neural populations convey information better than individual neurons do. But what are the organizing principles that enable the brain to achieve real-time processing of cognitive information--that enable neuronal populations to extract and record the most vital details of a situation or experience? Although different regions of the brain use their resident populations of neurons differently to meet their own specific needs--whether it's recording memories or representing sensory information--my colleagues and I believe the organizing principles that allow these networks to do their duty are most likely universal. Because we are interested in memory, we set out to devise a new approach to exploring its neural code.

The Experiments
First, we needed to design better brain-monitoring equipment. We wanted to continue working with mice, in part so that we could eventually conduct experiments in animals with genetically altered abilities to learn and remember, such as the smart mouse Doogie and mutant mice with impaired memory. Researchers had monitored the activities of hundreds of neurons in awake monkeys, but investigators working with mice had managed at best to record from only 20 or 30 cells at once--mostly because the mouse brain is not much bigger than a peanut. So Longnian Lin, then a postdoctoral fellow in my lab, and I developed a recording device that allowed us to monitor the activities of much larger numbers of individual neurons in the awake, freely behaving mouse.

We then designed experiments that take advantage of what the brain seems to do best: laying down memories of dramatic events that can have profound influences on one's life. Witnessing the 9/11 terrorist attacks, surviving an earthquake or even plummeting 13 stories in Walt Disney World's Tower of Terror are things that are hard to forget. So we developed tests that would mimic this type of emotionally charged, episodic event. Such experiences should produce memories that are long lasting and strong. And encoding such robust memories, we reasoned, might involve a large number of cells in the hippocampus, thus making it more likely that we would be able to find cells activated by the experience and gather enough data to unravel any patterns and organizing principles involved in the process.

The episodic events we chose include a lab version of an earthquake (induced by shaking a small container holding a mouse), a sudden blast of air to the animal's back (meant to mimic an owl attack from the sky) and a brief vertical free fall inside a small "elevator" (which, when we first started doing these experiments, was provided by a cookie jar we had in the lab). Each animal was subjected to seven episodes of each event separated by periods of rest over several hours. During the events--and the intervening rest periods--we recorded activity from as many as 260 cells in the CA1 region of the hippocampus, an area that is key to memory formation in both animals and humans.

After collecting the data, we first attempted to tease out any patterns that might encode memories of these startling events. Remus Osan--another postdoctoral fellow--and I analyzed the recordings using powerful pattern-recognition methods, especially multiple discriminant analysis, or MDA. This mathematical method collapses what would otherwise be a problem with a large number of dimensions (for instance, the activities of 260 neurons before and after an event, which would make 520 dimensions) into a graphical space with only three dimensions. Sadly for classically trained biologists, the axes no longer correspond to any tangible measure of neuronal activity, but they do map out a mathematical subspace capable of discriminating distinct patterns generated by different events.
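The core of this dimensionality reduction can be illustrated with synthetic data. The sketch below is a simplified stand-in for MDA (it projects trials onto the subspace spanned by the class-mean differences and skips the within-class whitening a real MDA performs); the neuron counts and activity patterns are invented:

```python
import numpy as np

# Simplified sketch of collapsing high-dimensional ensemble activity into
# a 3-D discriminant subspace. With four event classes, the directions
# separating the class means span at most three dimensions.
rng = np.random.default_rng(1)
n_neurons = 260
events = ["rest", "quake", "airpuff", "drop"]

X, y = [], []
for label in events:
    center = rng.normal(0, 1.0, n_neurons)   # event-specific template
    for _ in range(30):                      # 30 noisy trials per event
        X.append(center + rng.normal(0, 0.5, n_neurons))
        y.append(label)
X = np.array(X)

# Project every trial onto the 3-D subspace spanned by the
# (class mean - grand mean) directions.
grand = X.mean(axis=0)
means = np.array([X[np.array([l == e for l in y])].mean(axis=0) - grand
                  for e in events])
_, _, Vt = np.linalg.svd(means, full_matrices=False)
Z = (X - grand) @ Vt[:3].T                   # (120 trials, 3 dims)
print(Z.shape)
```

Plotting the rows of `Z` on these toy data would show four well-separated clusters, the analogue of the four "bubbles" described below.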

When we projected the collected responses of all recorded neurons from an individual animal into this three-dimensional space, four distinct "bubbles" of network activity popped out: one associated with the resting brain state, one with the earthquake, one with the air puff and one with the elevator drop. Thus, each of our startling episodes resulted in a distinct pattern of activity in the CA1 neural ensembles. The patterns, we believe, represent integrated information about perceptual, emotional and factual aspects of the events.

To see how these patterns evolved dynamically as the animals endured their various experiences, we then applied a "sliding window" technique to hours of recorded data for each animal--moving through the recordings moment by moment and repeating the MDA analysis for each half-second window. As a result, we were able to visualize how the response patterns changed as the animal laid down memories of each event while it happened. In an animal that went through an earthquake, for example, we could watch the ensemble activity begin in the rest bubble, shoot out into the earthquake bubble and then return to the resting state, forming a trajectory with a characteristic triangular shape.
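Mechanically, the sliding-window step amounts to projecting each short stretch of ensemble activity onto a fixed low-dimensional subspace and stringing the points into a trajectory. In this sketch the projection matrix is random just to make the code self-contained (in the experiments it came from the MDA fit), and the bin counts are invented:

```python
import numpy as np

# Toy sliding-window pass over a recording: average each window of
# ensemble activity, project it to 3-D, and collect the trajectory.
rng = np.random.default_rng(2)
n_neurons, n_bins = 260, 600            # e.g. 5 minutes at 0.5 s per bin
recording = rng.normal(size=(n_bins, n_neurons))
projection = rng.normal(size=(n_neurons, 3))  # placeholder for MDA axes

window = 5                               # bins averaged per analysis step
trajectory = np.array([recording[t:t + window].mean(axis=0) @ projection
                       for t in range(n_bins - window + 1)])
print(trajectory.shape)                  # one 3-D point per window position
```

Each row of `trajectory` is one moment's position in the discriminant space; an event would appear as an excursion out of the "rest" region and back.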

This temporal analysis revealed something even more interesting: the activity patterns associated with those startling experiences recurred spontaneously at intervals ranging from seconds to minutes after the actual event. These "replays" showed similar trajectories, including the characteristic geometric shape, but had smaller amplitudes than their original responses. The recurrence of these activation patterns provides evidence that the information traveling through the hippocampal system was inscribed into the brain's memory circuits--and we imagine the replay corresponds to a recollection of the experience after the fact. This ability to qualitatively and quantitatively measure spontaneous reactivations of memory-encoding patterns opens a door to being able to monitor how newly formed memory traces are consolidated into long-lasting memories and to examine how such processes are affected in both smart and learning-impaired mice.

With the patterns indicative of specific memories in hand, we sought to understand how the neurons among those we were "tapping" actually work together to encode these different events. By coupling another mathematical tool called hierarchical clustering analysis with the sequential MDA methods, Osan and I discovered that these overall network-level patterns are generated by distinct subsets of neural populations that we have dubbed "neural cliques." A clique is a group of neurons that respond similarly to a select event and thus operate collectively as a robust coding unit.

Furthermore, we found that each specific event is always represented by a set of neural cliques that encode different features ranging from the general to the specific. Notably, an earthquake episode activates a general startle clique (one that responds to all three startling stimuli) as well as a second clique that responds only to the events involving motion disturbance (both the earthquake and the elevator drop), a third clique that is activated exclusively by shaking and a fourth clique that indicates where the event took place (we put the animal in one of two different containers before each quake). Thus, information about these episodic events is represented by neural clique assemblies that are invariantly organized hierarchically (from general to specific). We think of the hierarchical arrangement as forming a feature-encoding pyramid whose base encodes a general characteristic (such as "startling event") and whose apex represents more specific information (such as "shaking" or "shaking in the black box").
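The pyramid structure can be expressed as plain set operations: each event activates a set of cliques, shared cliques mark the general levels, and event-unique cliques form the apex. The clique and event names below follow the text, but the exact groupings are a hypothetical simplification:

```python
# Hypothetical sketch of the general-to-specific clique pyramid.
EVENT_CLIQUES = {
    "earthquake":    {"startle", "motion", "shake", "shake@black-box"},
    "elevator drop": {"startle", "motion", "drop"},
    "air puff":      {"startle", "puff"},
}

# The clique common to every startling event sits at the pyramid's base...
base = set.intersection(*EVENT_CLIQUES.values())
print(base)

# ...while cliques unique to one event form its apex.
apex = (EVENT_CLIQUES["earthquake"]
        - EVENT_CLIQUES["elevator drop"]
        - EVENT_CLIQUES["air puff"])
print(apex)
```

Intersecting the three sets recovers the general "startle" level, and the set difference isolates the earthquake-specific cliques, mirroring the base-to-apex hierarchy described above.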

The CA1 region of the hippocampus receives inputs from many brain regions and sensory systems, and this feature most likely influences what type of information a given clique encodes. For example, the clique that responds to all three startling events could be integrating information from the amygdala (which processes emotions such as fear or the experience of novelty), thereby encoding that "these events are scary and shocking"; the cliques that are activated by both the earthquake and the elevator drop, on the other hand, could be processing input from the vestibular system (which provides information about motion disturbance), thus encoding that "these events make me lose my balance." Likewise, the cliques that respond only to a particular event occurring at a particular place could be integrating additional input from place cells, thereby encoding that "this earthquake took place in the black container."

Our findings suggest a number of things about the organizing principles that govern the encoding of memory. First, we believe that neural cliques serve as the functional coding units that give rise to memories and that they are robust enough to represent information even if some individual neurons in the ensemble vary somewhat in their activity. Although the idea that memories and perception might be represented by neural populations is not new, we think we have the first experimental data that reveal how such information is actually organized within the neural population. The brain relies on memory-coding cliques to record and extract different features of the same event, and it essentially arranges the information relating to a given event into a pyramid whose levels are arranged hierarchically, from the most general, abstract features to the most specific aspects. We believe, as well, that each such pyramid can be thought of as a component of a polyhedron that represents all events falling into a shared category, such as "all startling events."

This combinatorial, hierarchical approach to memory formation provides a way for the brain to generate an almost unlimited number of unique network-level patterns for representing the infinite number of experiences that an organism might encounter during life--similar to the way that the four "letters" or nucleotides that make up DNA molecules can be combined in a virtually unlimited number of patterns to produce the seemingly infinite variety of organisms on Earth. And because the memory code is categorical and hierarchical, representing new experiences might simply involve substituting the specific cliques that form the tops of the memory pyramids to indicate, for example, that the dog barking behind the hedge this time is a poodle instead of a German shepherd or that the earthquake took place in California rather than in Indonesia.

The fact that each memory-encoding pyramid invariably includes cliques that process rather abstract information also reinforces the idea that the brain is not simply a device that records every detail of a particular event. Instead neural cliques in the memory system allow the brain to encode the key features of specific episodes and, at the same time, to extract from those experiences general information that can be applied to a future situation that may share some essential features but vary in physical detail. This ability to generate abstract concepts and knowledge from daily episodes is the essence of our intelligence and enables us to solve new problems in the ever changing world.

Consider, for instance, the concept of "bed." People can go into any hotel room in the world and immediately recognize the bed, even if they have never seen that particular bed before. It is the structure of our memory-encoding ensembles that enables us to retain not only an image of a specific bed but also a general knowledge of what a bed is. Indeed, my colleagues and I have seen evidence of this in mice. During the course of our experiments, we accidentally discovered a small number of hippocampal neurons that appear to respond to the abstract concept of "nest." These cells react vigorously to all types of nests, regardless of whether they are round or square or triangular or made of cotton or plastic or wood. Place a piece of glass over the nest so the animal can see it but can no longer climb in, and the nest cells cease to react. We conclude that these cells are responding not to the specific physical features of the nest--its appearance or shape or composition--but to its functionality: a nest is someplace to curl up in to sleep.

The categorical and hierarchical organization of neural cliques most likely represents a general mechanism not only for encoding memory but also for processing and representing other types of information in brain areas outside the hippocampus, from sensory perceptions to conscious thoughts. Some evidence suggests this supposition is true. In the visual system, for example, researchers have discovered neurons that respond to "faces," including human faces, monkey faces or even leaves that have the shape of a face. Others have found cells that respond only to a subclass of faces. Back in the hippocampus, researchers studying patients with epilepsy have discovered a subset of cells that increase their firing rates in response to images of famous people. Itzhak Fried of the University of California, Los Angeles, further made the fascinating observation that one particular cell in a patient's hippocampus seemed to respond only to the actress Halle Berry. (Perhaps it was part of a Halle Berry clique!) Together such observations support the notion that the general-to-specific hierarchical organization of information-processing units represents a general organizing principle throughout the brain.

Our work with mice also yielded a way for us to compare patterns from one brain to another--and even to pass information from a brain to a computer. Using a mathematical treatment called matrix inversion, we were able to translate the activities of neural clique assemblies into a string of binary code, where 1 represents an active state and 0 represents an inactive state for each coding unit within a given assembly we examined. For example, the memory of an earthquake might be recorded as "11001," where the first 1 represents activation of the general startle clique, the second 1 represents activation of the clique that responds to a motion disturbance, the first 0 indicates lack of activity in the air-puff clique, the second 0 indicates lack of activity in the elevator-drop clique and the final 1 shows activation of the earthquake clique. We have applied a similar binary code to the neural ensemble activity from four different mice and were able to predict, with up to 99 percent accuracy, which event they had experienced and where it had happened. In other words, by scanning the binary code we could read and compare the animals' minds mathematically.
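The binary readout itself is simple to sketch: each bit reports whether one clique's activity crosses a threshold, and the resulting word can be looked up in a codebook. The bit order and the "11001" word follow the example in the text; the activation levels, threshold and the other codebook entries are invented:

```python
# Hypothetical sketch of reading ensemble activity as a binary word,
# one bit per neural clique, in the order used in the text's example.
CLIQUES = ["startle", "motion", "airpuff", "drop", "quake"]

def encode(clique_activity, threshold=0.5):
    """Map per-clique activation levels to a binary string like '11001'."""
    return "".join("1" if clique_activity[c] > threshold else "0"
                   for c in CLIQUES)

# Invented codebook; only the earthquake entry comes from the text.
CODEBOOK = {
    "11001": "earthquake",
    "10100": "air puff",
    "11010": "elevator drop",
    "00000": "rest",
}

activity = {"startle": 0.9, "motion": 0.8, "airpuff": 0.1,
            "drop": 0.2, "quake": 0.95}
word = encode(activity)
print(word, "->", CODEBOOK.get(word, "unknown"))
```

Comparing such words across animals, or streaming them to a machine, is what makes the code a candidate interface between brains and computers.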

Such a binary code of the brain could also provide a potentially unifying framework for studying cognition, even across animal species, and could greatly facilitate the design of more seamless, real-time brain-to-machine communication. For example, we have arranged a system that converts the neural activity of a mouse experiencing an earthquake into a binary code that instructs an escape hatch to open, allowing the animal to exit the shaking container. We believe our approach provides an alternative, more intuitive decoding method for powering the kinds of devices that have already allowed patients with neural implants to control a cursor on a computer screen or a monkey to move a robotic arm using signals recorded from its motor cortex. Moreover, real-time processing of memory codes in the brain might, one day, lead to downloading of memories directly to a computer for permanent digital storage.

In addition, we and other computer engineers are beginning to apply what we have learned about the organization of the brain's memory system to the design of an entirely new generation of intelligent computers and network-centric systems, because current machines fail miserably at the type of cognitive decision making that humans find easy, such as recognizing a high school classmate even though he has grown a beard and aged 20 years. Someday intelligent computers and machines equipped with sophisticated sensors and with a logical architecture similar to the categorical, hierarchical organization of memory-coding units in the hippocampus might do more than imitate--and perhaps even exceed--our human ability to handle complex cognitive tasks.

For me, our discoveries raise many interesting--and unnerving--philosophical possibilities. If all our memories, emotions, knowledge and imagination can be translated into 1s and 0s, who knows what that would mean for who we are and how we will operate in the future. Could it be that 5,000 years from now, we will be able to download our minds into computers, travel to distant worlds and live forever in the network?
