Steve Fleming’s research is definitely “meta” — a Greek prefix indicating self-reference. He’s a cognitive neuroscientist at University College London who studies metacognition: what we know about what we know, think about what we think, believe about what we believe. While this may seem quite philosophical and well-nigh impossible to study in the lab, he has made it his mission to measure and model it and understand where in the brain it manifests itself.
Fleming explored these issues in his 2021 book, Know Thyself: The Science of Self-Awareness. In the 2024 Annual Review of Psychology, he further examined the link between metacognition and confidence: our sense of whether we have made the right decision, whether we are successful at the tasks presented to us, and whether our worldview is likely correct.
Fleming’s work is casting new light on why some people seem chronically underconfident even when they’re doing just fine, and why others are entirely convinced they’re right about everything, even when there is overwhelming evidence to the contrary. In the following discussion, which has been edited for length and clarity, Fleming shared his thoughts on some of the questions that inevitably come up when our brains assess their own activity.
Metacognition is quite an uncommon research topic. How did you end up studying this?
I studied experimental psychology at Oxford, where I had the opportunity to work with psychologist Paul Azzopardi. He studies blindsight, a condition in which, due to certain types of brain damage, people are subjectively blind but still able to perform various tasks using visual information. This presents a fascinating dissociation between conscious experience and actual functionality.
At that point, I hadn’t figured out how to connect the more philosophical ideas about conscious experience to something we can actually measure and study in the lab. But ever since then, my career has been inching towards achieving the original goal of using mathematical models from psychology to explain aspects of self-awareness. These are things that psychologists and philosophers have always been interested in, but that are quite difficult to pin down in practice.
How do you measure something like metacognition in the lab?
The standard approach is to measure people’s objective performance on a task as well as their subjective assessment of that performance, usually in the form of confidence ratings. For example, we might ask people to judge whether a visual stimulus known as a grating is tilted to the left or to the right, or to compare the brightness of two gratings shown one after the other. That would be a judgment about the outside world. We can then also ask them a metacognitive question: to evaluate their confidence in their decision about the world.
When we have lots of these kinds of judgments over time, we can observe the extent to which confidence is tracking performance, on a trial-by-trial basis. If someone has high confidence when they’re right and lower confidence when they’re wrong, they can be ascribed a high degree of what we call metacognitive efficiency. We can use that as a way of quantifying differences in metacognition between individuals or groups.
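The trial-by-trial logic described above can be sketched in code. The following is an illustrative example, not code from Fleming’s lab: it quantifies how well confidence tracks accuracy using the type-2 AUROC, one common measure of metacognitive sensitivity. The function name and the toy data are my own.

```python
# Type-2 AUROC: the probability that a randomly chosen correct trial
# received a higher confidence rating than a randomly chosen incorrect one.
# 0.5 means confidence carries no information about accuracy; 1.0 means
# confidence perfectly separates correct from incorrect trials.

def type2_auroc(confidence, correct):
    """confidence: per-trial ratings; correct: per-trial True/False."""
    hits = [c for c, ok in zip(confidence, correct) if ok]        # correct trials
    errors = [c for c, ok in zip(confidence, correct) if not ok]  # error trials
    if not hits or not errors:
        raise ValueError("need both correct and incorrect trials")
    # Count pairs where a correct trial outranks an error trial (ties = 0.5)
    wins = sum(1.0 if h > e else 0.5 if h == e else 0.0
               for h in hits for e in errors)
    return wins / (len(hits) * len(errors))

# Hypothetical data: confidence rated 1-4, high when right, low when wrong
conf = [4, 3, 1, 4, 2, 1, 3, 2]
correct = [True, True, False, True, False, False, True, False]
print(type2_auroc(conf, correct))  # → 1.0: confidence perfectly tracks accuracy
```

A participant whose confidence ratings were unrelated to their accuracy would score near 0.5 on this measure even if their task performance itself were excellent, which is what lets researchers separate metacognition from first-order ability.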

Can you link these differences to what is happening in people’s brains?
One popular way of doing this has been to look at differences in brain activity and structure between people, using brain imaging techniques like fMRI and magnetoencephalography to try to find out which aspects of brain function give some people better metacognition than others. But we’ve realized that approach is limited.
So the field has shifted. More recently, we’re instead looking at the relationship between patterns of brain activity and trial-by-trial variation in how confident individual people feel about decisions we ask them to make in experiments.
Essentially, what’s been found is that there are different stages of tracking uncertainty about our own performance when we’re performing a particular task.
For example, if you’re trying to discriminate the orientation of a line, neurons in the part of the brain that are sensitive to different possible line orientations will be firing to different extents, reflecting any uncertainty in what you see. Studies show that if there is conflicting information at that level, that affects people’s confidence estimates in the tests.
There are also data suggesting another, higher-level stage of assessment: Brain areas in the prefrontal cortex signal confidence in a more general fashion, one that is not tied to the specific input we receive when conducting a particular task. This process continues after you’ve made a decision, when the brain is also considering information that wasn’t initially available. It’s as if it is still trying to figure out whether it got it right or wrong.
That seems to happen pretty much automatically. It doesn’t require any external instruction or conscious effort. When we do ask people to consciously engage in metacognition and report how they feel about their performance, they seem to engage yet another stage of processing, which involves the frontopolar areas of the human brain: regions right towards the front of the cortex that are particularly well-developed in humans compared to other primates. These areas are activated when metacognitive estimates are used to communicate with others or to consciously control behavior, as we ask participants to do in these experiments.
What happens if metacognition does not work the way it should?
A pervasive sense of underconfidence has been regularly linked to symptoms of anxiety and depression. We know that individuals who suffer from this general sense of underconfidence are not necessarily performing the tasks any worse than the next person. So one of the puzzles we are interested in trying to solve is why some people are not learning from their own performance. Why is it that they’re unable to realize that they’re actually doing quite well, and then update their beliefs about their skills and abilities appropriately?
What we’ve found is that at a trial-by-trial level, people with anxiety and depression are just as likely as others to show instances of high confidence. But there is an asymmetry in how they learn from these. They sometimes are very confident that they are doing well, but they don’t incorporate those signals into their more global estimates of how well they are doing in these experiments, and presumably daily life as well. At the same time, they are perfectly able to incorporate evidence from trials in which they weren’t very confident about performing well.
Interestingly, this isn’t the case when we give them explicit feedback about their performance. When we tell them that they are right, they realize that they are actually performing quite well.
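The asymmetry described above can be illustrated with a toy model. This is my own sketch under simple assumptions, not Fleming’s published model: a global self-performance belief is nudged toward each trial’s confidence signal, but with a smaller learning rate for positive evidence than for negative evidence, so the global estimate drifts toward underconfidence even when most trials go well.

```python
# Toy model of asymmetric belief updating (illustrative assumption, not the
# authors' actual model). Positive (above-belief) evidence is discounted
# relative to negative evidence, mimicking the pattern described in the text.

def update_belief(belief, trial_confidence, lr_pos=0.05, lr_neg=0.3):
    """Move the global belief toward a trial's confidence signal,
    using a smaller learning rate when the signal exceeds the belief."""
    error = trial_confidence - belief
    lr = lr_pos if error > 0 else lr_neg
    return belief + lr * error

belief = 0.5                                   # neutral starting self-estimate
trials = [0.9, 0.8, 0.2, 0.9, 0.9, 0.3, 0.8]  # mostly high-confidence trials
for c in trials:
    belief = update_belief(belief, c)
print(round(belief, 3))  # → 0.444, well below the trial average of ~0.69
```

Despite five of seven trials carrying high-confidence signals, the final belief sits below the trial average, which is the qualitative signature of the underconfidence pattern Fleming describes.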
How could this be applied to help people who struggle with underconfidence?
In a recent study, we’ve shown that underconfidence in people with greater anxiety symptoms is exacerbated with time. If we probe their confidence immediately after they make a decision, they’ll be a bit underconfident. But if we wait a few seconds, they’re even more underconfident about that previous decision, everything else being equal. And it only gets worse.
What we think is happening is that they’re engaging all these brain mechanisms that I talked about earlier to reflect on their own decisions and actions. Now, as time elapses, if you tend to be a more anxious person, those processes lead you to become even more underconfident than you would otherwise be. You’re spending too much time ruminating on your performance.
So one concrete piece of advice that we can extract out of those findings is that if you know that you are prone to that kind of bias, it’s better not to think too much after you’ve made a choice. If immediately after, you think, “All right, yeah, that was a reasonable thing to do,” leave it be.

What about people who are, perhaps, a bit more confident than they should be? It appears that can be quite helpful in today’s society.
It’s very interesting to think about what is adaptive, on a societal level, for future success. One hypothesis I advance in the book is that if you have a slightly overconfident worldview as well as good metacognitive sensitivity that helps you realize when you’re really wrong, that can be quite a powerful mix. Because, as you say, there is a lot of research suggesting that people who are perhaps a little overconfident do well socially. People tend to like them and want them in positions of power because they seem decisive.
At the same time, you don’t want someone without proper self-awareness to be able to bluff their way to the top and reach a position of power.
So I think there is a sweet spot where you do need to project a bit of overconfidence to be perceived as competent, yet you also want to make sure you’re not too seduced by self-confidence, whether it’s your own or someone else’s.
We’ve found that people with a more open-minded worldview, who are willing to acknowledge that their view might not be the only valid one and believe it’s important to listen to the views of people who disagree with them, also tend to have more accurate metacognition in the kinds of tasks we can study in the lab. Accurate metacognition prompts them to seek out new information and update their beliefs if they might be inaccurate. There is a solid body of evidence to suggest that in this way, these signals can help us, over time, to develop a more accurate worldview.
Might it be possible to train metacognition using these kinds of tasks, and do you think that might help us to reduce the societal tensions we experience today?
I think a lack of metacognition is far from the only reason we see polarization in society today. But our research does offer some tools that we could use to try and cultivate people’s ability to think critically about their own thinking, knowledge and decisions, without getting into politics.
The obvious place to do this would be in education, which I believe has a lot of potential. Parents and teachers implicitly encourage children to be more self-aware, but they rarely do so explicitly.
We don’t teach metacognition in the same way we teach math or history or physics. I think that might be a really powerful way of developing more open-minded ways of thinking.
This article originally appeared in Knowable Magazine, a nonprofit publication dedicated to making scientific knowledge accessible to all.
