The most fascinating thing I read recently in the philosophy of memory is the idea that we learn to remember. How cool is that? And how can it be the case that we learn something so basic, so constitutive of ourselves? How can memory not be innate?
Let’s start with something we don’t learn: the scenes that pop into our minds. We are hardwired in such a way that we can picture scenes in our minds. The scenes can be about reality or fantasy; they can be about the future or the past; they can be about your own life or someone else’s life (real or not).
As we grow up, we learn ways of labeling these mental pictures. Our caregivers teach us the logic of labeling the scenes.
For example, suppose that the nerdy little Annie says that she remembers Napoleon arriving in Grenoble in 1815. Sweetly, her mother corrects her by saying that she can’t remember a thing she never saw in the first place. This is the way our caregivers teach us to remember: by teaching us the rules of the game of labeling the scenes that pop into our minds.
This way of thinking about memory is interesting because it cuts against the usual belief that our minds have different places for different things. That common picture of memory and other mental faculties goes (more or less) like this.
If an idea comes from our eyes, ears, skin, internal organs, nose, or tongue, it naturally goes to the box of memory, the box for real things. But if the idea comes from our fantasy, it is stored in the box of imagination, the box for things we think about when, among other occasions, we are trying to escape from reality. On this usual way of thinking, no labeling is required: everything goes to its ‘natural’ box.
But what if our minds have no natural homes for reality and fantasy? What if what happens is that we watch mental movies projected onto our brains’ screens, but the movies still need to be labeled as —say— a drama movie or a documentary, accurate news or fake news?
In this case, language could be used to label our mental movies. That would be a good thing, since it might be the only way of connecting our minds to reality.
The idea that we can learn to remember is gaining significant attention in philosophy. For example, Johannes Mahr and his collaborators say that our ability to tell memory from imagination is a cognitive gadget. Following research from psychologist Cecilia Heyes and her collaborators, they propose that people teach us to look for clues about the origins of our mental movies. This is what they call “discrimination” and “interpretation”.
For example, one learns to fear a bear nearby, and one also learns not to fear the memory of a bear that was nearby. To do that, one needs to learn to distinguish perception from memory. Then, time passes, and we teach others to do the same. This is what they call “broadcasting”. For example, the child grows and teaches other children not to fear their dreams and memories.
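If it helps to see the labeling game mechanically, here is a toy sketch in code. Nothing in it comes from Mahr’s actual papers: the cues, the cutoff values, and the class names are illustrative assumptions, just one way of making “discrimination”, “interpretation”, and “broadcasting” concrete.

```python
# A toy model of Mahr-style "discrimination" and "interpretation".
# Nothing below comes from Mahr's papers: the cue names, the cutoff
# values, and the labels are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class MentalScene:
    content: str
    vividness: float   # 0.0-1.0: how perceptually rich the scene feels
    effort: float      # 0.0-1.0: how much work it took to conjure it up
    witnessed: bool    # did the person actually experience the event?


@dataclass
class Labeler:
    """A person who has learned rules for labeling mental scenes."""
    vividness_cutoff: float = 0.6  # a learned, culturally transmitted threshold

    def interpret(self, scene: MentalScene) -> str:
        # Discrimination: inspect cues about where the scene came from.
        looks_remembered = (scene.vividness > self.vividness_cutoff
                            and scene.effort < 0.5)
        # Interpretation: apply the learned rule to pick a label.
        if looks_remembered and scene.witnessed:
            return "memory"
        return "imagination"

    def broadcast(self) -> "Labeler":
        # Broadcasting: pass the learned criteria on to someone else.
        return Labeler(vividness_cutoff=self.vividness_cutoff)


annie = Labeler()
napoleon = MentalScene("Napoleon arrives in Grenoble, 1815",
                       vividness=0.9, effort=0.1, witnessed=False)
print(annie.interpret(napoleon))  # "imagination", however vivid the scene
```

The moral of the sketch is that the label is applied by a learned rule; it is not read off the scene itself.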
Mahr’s idea makes it hard to believe that memory is different from imagination in the brain. But that is OK: science says the same. Brain scans of a person who is remembering are not so different from brain scans of a person who is imagining.
What is really challenging —and even frightening— about Mahr’s idea is how it makes us think about children who never learned to separate their memories from their dreams. Since there are no separate boxes for these things in their brains, the theory says that they live in a world where the concepts of reality and of past experience make little sense.
The idea that we can and often do learn to remember has other important consequences. For example, some philosophers think that memory is a natural kind: memory is such that it has particular features (such as being a knowledge-like mental simulation) because there is a particular mechanism (some operations of the hippocampus) that makes it the way it is.
But if we learn to remember, then one might say that there is no innate mechanism in our brain. The mechanism is social. (Does it make sense to say that this social mechanism identifies a natural kind? That’s a hard question to answer).
Another consequence is that some vintage theories that describe memory as a kind of success come back into fashion. Remember little Annie. She sees the scene of Napoleon arriving in Grenoble in her mind’s eye. She tells her mother what she visualizes, and her mother replies that what she experienced is not memory, because one cannot successfully remember something one was not there to witness in the first place.
The mother is teaching little Annie a principle that is part of classical theories of memory, such as the empiricist theory of memory and the causal theory of memory: the previous awareness condition.
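Stated as a rule, the previous awareness condition is simply a necessary condition on remembering. Here is a minimal sketch, with invented names (the classical theories state the condition in prose, not in code):

```python
# The previous awareness condition as a bare predicate. The names
# are invented; the classical theories state the condition informally.

def can_count_as_memory(witnessed_event: bool, other_conditions_met: bool) -> bool:
    # Previous awareness is necessary but not sufficient: a mental
    # scene that fails it is disqualified no matter what else holds.
    return witnessed_event and other_conditions_met

# Little Annie never saw Napoleon, so her vivid scene is ruled out.
print(can_count_as_memory(witnessed_event=False, other_conditions_met=True))  # False
```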
Mahr’s ingenious idea has pushed philosophers to explore an important question:
How does our culture shape the way we remember things?
Chris McCarroll and Nikola Andonovski, for example, have suggested that answering this question requires us to think about another consequence of the view that we learn to remember: our memories aren’t just recordings of the past—our social and cultural environment actively shapes them.
To get a sense of this idea, let’s start with something that might surprise you. Think about your earliest childhood memory. Got it? Well, here’s the thing: the way you remember that moment has been profoundly influenced by how your parents talked to you about memories when you were young.
Melissa K. Welch-Ross’s research shows just how important these parent-child conversations about memory are. Welch-Ross found that something remarkable happened when mothers engaged their preschool children in detailed conversations about past events, asking open-ended questions like “What did we do at the park?” and following up on their children’s responses. These conversations didn’t just help children remember better: they helped children better understand how minds work.
Here’s what she found: mothers who elaborated more during memory conversations, particularly about people’s thoughts and feelings during past events, had children who became better at understanding that different people can have different memories and beliefs about the same event.
Think about what this means. When a mother asks her child “What do you think your friend was thinking?”, she’s not just helping her child remember—she’s teaching them that memories are personal experiences that can be different for different people.
But what does this mean for how we actually remember things? It means that different children learn to remember in various ways and, crucially, learn different ways of telling their memories from other thoughts. As Mahr says, different people learn different criteria of “mnemicity”, different ways of tracking whether the movies they watch in their minds are memories, dreams, desires, or fantasies.
Now, the differences are not random: they reflect deeper cultural values and ways of thinking that we absorb from our earliest years. As McCarroll and Andonovski propose, your mental time travel to your personal past, your episodic memory, is shaped by your culture. They call this process “mindshaping”.
Think of mindshaping as a kind of cultural programming that helps us fit into our social groups. When we share memories with others, we’re not just exchanging information: we’re participating in a process that makes our thinking more similar to those around us.
McCarroll and Andonovski call this interest in memory as a cultural gadget a “normative turn” in the philosophy of memory. The idea is that philosophers are now interested in the ways you should think when you interpret a scene in your mind as a memory.
Think about it this way: you’re not just reporting facts when you share a memory with someone. You’re making choices about what details matter and why these details matter. These choices are guided by social and cultural norms about what makes a mental simulation a memory.
Is it something about the origin of the mental scene? Does coherence with your beliefs matter? Depending on how you learned to remember, these criteria can be more or less relevant for you.
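One way to picture this variability is to imagine each learner weighing the criteria differently. The sketch below is purely illustrative: the cues, the weights, and the 0.5 threshold are my assumptions, not anything Mahr, McCarroll, or Andonovski quantify.

```python
# Two learners who acquired different "mnemicity" criteria. The cues,
# weights, and the 0.5 threshold are invented for illustration.

def mnemicity_score(cues: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of cues; higher means 'more memory-like'."""
    return sum(weights[name] * value for name, value in cues.items())

scene_cues = {
    "origin_traceable": 1.0,      # the scene seems to come from past perception
    "coheres_with_beliefs": 0.2,  # but it clashes with what the person believes
}

weights_a = {"origin_traceable": 0.9, "coheres_with_beliefs": 0.1}  # origin matters
weights_b = {"origin_traceable": 0.3, "coheres_with_beliefs": 0.7}  # coherence matters

for name, weights in [("A", weights_a), ("B", weights_b)]:
    score = mnemicity_score(scene_cues, weights)
    label = "memory" if score > 0.5 else "not a memory"
    print(f"Learner {name}: score={score:.2f} -> {label}")
# Learner A: score=0.92 -> memory
# Learner B: score=0.44 -> not a memory
```

Same mental scene, different learned weights, different verdicts: that is the sense in which two people can disagree about whether the very same mental movie is a memory.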
An underexplored branch of this “normative turn” in the philosophy of memory is philosophers’ own creation and broadcasting of improved criteria for telling memory from imagination. For example, Sven Bernecker claims that the notion of false memory is an “oxymoron”. He is not denying that people report, as memories, bizarre situations that never happened. They do. Bernecker’s point is that we should not call such reports “memories”.
Taking Bernecker’s message home: it is good social policy to teach children that they cannot remember what they never experienced in the first place. And, as researchers in the area tell us, this is, in fact, mothers’ usual policy.1

This study was financed in part by the Capes-PrInt Program and the Programme Cofecub.