Boo Ya, Says The Epistemological Philosopher
Robin McKenna on Why He Crushes It at Parties
Dear Republic,
I asked a few academics to explain their discipline — why they chose it, what they love about it, why everybody else should love it as much as they do. Robin McKenna — the most debonair epistemologist I know — was the first to write in, but needn’t be the last. If you are an academic and want to take up this prompt, write in to republic.of.letters.substack@gmail.com with “Discipline” in the subject line. Actually, that goes for everyone. Please feel free to write in about what the hell it is that you do, why it rules, and why everybody should bow down before everyone in your profession.
-The Editor
BOO YA, SAYS THE EPISTEMOLOGICAL PHILOSOPHER
I am an academic philosopher. This makes me a great hit at parties. Usually, telling someone what I do ends the conversation—that’s nice, let’s talk about something else. Every so often, though, I meet someone who seems genuinely interested. They want to know more. What does a philosopher do? Do you just read the old dead philosophers—Plato, Aristotle, Descartes?
When I’m in the mood, I try to explain my field of research: epistemology. Epistemology, I tell them, is about two things: knowledge, and how to get knowledge. There is a difference between knowing something and just thinking it. If you know that the Liverpool-Arsenal game starts at 8pm, then you’re not just guessing or assuming. You have some evidence to back it up—you looked online, you spoke to a friend who is going to the game. Epistemologists don’t stop here, though. They want to know what counts as good evidence, and how much evidence you need to have knowledge. Some of them worry that we rarely, if ever, have enough evidence for knowledge.
That’s knowledge. What about how to get it? Some ways of figuring out what to think about the world are better than others. Imagine you’re a detective. When investigating a crime, you don’t just write down a list of suspects and choose one at random. You gather evidence, interview witnesses, try to construct an explanation of what happened. You use procedures that transform any hunches you started with into a conclusion that might stand up in court. Similarly, when medical researchers are looking for a new treatment, they don’t just try it on a couple of patients, see they are getting better, and conclude that it works. They run large clinical trials, track outcomes against placebos, and subject their work to scrutiny from other researchers.
Epistemologists don’t tell detectives or medical researchers how to do their jobs. They take a step back from the particular procedures that detectives and researchers employ and try to extract general lessons about how to turn initial hunches and hypotheses into knowledge. They ask questions like “what are good ways of getting knowledge?” and “what procedures for forming beliefs should we employ?”
As an explanation of what epistemology is, this is ok; it does the job. But it doesn’t explain why I’m particularly interested in epistemology. This may be a failing on my part. Some of my epistemology friends think questions like “what is evidence?” or “what is knowledge?” are interesting questions in their own right—the sorts of things worth devoting a career to. A lot of them are smarter than me, and in any case, I have no desire to tell other philosophers what (not) to do.
What interests me is how epistemology plays out in our collective knowledge-building practices and in the political sphere. I don’t like the phrase “collective knowledge-building practices,” but I’m using it deliberately. I’m interested in how communities of non-specialists—in particular, groups of patient activists—can band together to produce something akin to scientific knowledge despite lacking the usual credentials and markers of expertise. This is particularly important when it comes to medical conditions that aren’t well understood by specialists, such as myalgic encephalomyelitis, fibromyalgia, or long COVID. Understanding how these patient groups can mimic the ways in which scientific research communities produce knowledge is crucial because it reveals the shortcomings of common narratives about expertise.
A common narrative says that ordinary people—people who lack relevant medical or other qualifications—should simply defer to “the experts.” But it is a mistake to equate a social status (possessing relevant academic credentials) with an epistemological status (possessing relevant skills and knowledge). Someone may possess the relevant skills and knowledge without possessing the relevant credentials.
It is also a mistake to think that defending the authority of science and scientific knowledge requires propping up these simplifying narratives. Indeed, the opposite is the case: the basis of scientific authority is the social structures by which scientific knowledge is produced, not the credentials that scientists may possess. If communities form outside of institutionalised science that mimic or parallel those social structures, then we should expect those communities to also produce genuine insight and knowledge.
More generally, I am interested in what is called “political epistemology.” Political epistemology looks at the politics of knowledge and knowledge claims. As I see it, political epistemology is quite distinct from “traditional” epistemology. It is one thing to say, in an abstract way, what the difference is between knowledge and mere belief or opinion. It is quite another to adjudicate the rival claims to knowledge that are made by different political factions, or to try and understand where those claims come from, or why debates about them tend to be so intractable.
Let me give an example. We are all familiar with the claim that we live in a “post-truth” world that is awash with fake news, misinformation, and other forms of deceit, bullshit, and deception. It is, I think, an open question whether this is true—or at least whether it is any more true in 2025 than it was in, say, 1939, 1848 or 1789. But, even if it is true, it is not a simple matter to determine what is (and what is not) misinformation.
The difficulty is not with defining misinformation. While theorists will quibble about the details, the basic definition is clear: misinformation is false or inaccurate information. (Disinformation is then false or inaccurate information spread with intent to deceive). The difficulty is with determining when something—a news story, an academic paper, a tweet—is false or inaccurate. Was it “misinformation” to raise concerns about the efficacy of masks, especially cloth masks, in limiting the transmission of Covid-19? What about those who put forward the “lab leak” theory about the origins of Covid-19 before this became a serious hypothesis—were they peddling misinformation?
There are two ways in which you might approach these questions. The first is as someone committed to the truth of certain claims, which you see as supported by the relevant body of scientific knowledge. This is the way in which many academics tend to approach misinformation and related issues. They know what is true and misinformation is simply purported “information” that goes against what they know is true. The “problem” of misinformation is simply explaining why people are taken in by it.
The second way is as an impartial but critical observer. The observer is impartial because they don’t start from what they know or presume to be true. Rather, they look at how people on all sides of a debate go about forming their beliefs, highlighting commonalities. It may be, for example, that people on all sides adopt beliefs that fit with their political identities and values, or that the different sides in the debate start with very different background beliefs. But the observer is also critical in that they don’t think that all ways of forming beliefs are on a par. There are good and bad ways of forming beliefs, and if you form beliefs in bad ways then it is fair to describe those beliefs as irrational. But you can only say that someone or some group is forming their beliefs in a bad, irrational way once you understand how they are forming those beliefs, as well as the social context in which they are forming them.
The political epistemologist—or at least my sort of political epistemologist—approaches these questions in the second way. Rather than simply declaring certain views to be the product of “misinformation”, they try to understand those views—where they come from, why people hold them, what reasons they might have for them. They are open to the possibility that some people have views that are just irrational, but this is the result of their analysis, not the starting point. Most importantly, they recognise the need to extend this interpretive charity to views they are inclined to regard as false as well as to views they think are true.
You might be worried that the—or at least this—political epistemologist is one of these annoying postmodern relativists you have heard about. Am I saying that truth is subjective or socially constructed, that all viewpoints are equally valid? No. Some things are true irrespective of what you, I, or anyone else happens to think about them.
But our attempts to grasp those truths—at least, the sorts of truths that are relevant in politics—are mediated by social practices and our external environment. If you want to understand the views people form about the world, you need to understand those mediation processes. It is not postmodern relativism to say that people view the world differently. Indeed, if we want to better understand the heated political disagreements going on around us, we need to better understand how the participants in these disagreements view the world, and why they view it that way. I can’t say that the future of the world depends on it, but at least trying to understand is better than refusing to understand your political opponents because you think they are victims of a “misinformation crisis.”
Robin McKenna is a Senior Lecturer in Philosophy at the University of Liverpool and a Senior Research Associate at the University of Johannesburg. He is interested in the politics of knowledge, the psychology of belief formation, and the ways in which our beliefs are shaped by social and cultural forces. His first book, Non-Ideal Epistemology, was published by Oxford University Press in 2023. Here is his website, X account, Bluesky account, and Substack.
Interesting. I wonder which epistemological branch is closest to Machine Learning (ML). It seems to me ML implicitly adopts a minimalist epistemological stance, perhaps closest to a Bayesian or probabilistic epistemology, if that's a thing? (ChatGPT tells me it is.) As far as I can see, in ML knowledge amounts to knowing a joint probability function, either as a density (p.d.f.) or as a cumulative distribution (c.d.f.). Illustrating with a joint p.d.f. next.
This joint p.d.f. (or, in discrete cases, a table of normalised co-occurrence counts) records how often different observed events occur together. Generally, observations are N-dimensional vectors rather than single scalars. Time is the special dimension in the real world where counting occurs (mathematically it can be treated as just another dimension).
Usually we partition our observations Z into two sets: Z = (X, Y). X are things we can directly observe, whereas Y are things we care about but cannot observe directly. Hence we need to observe X, and to know the joint relationship $f_{X,Y}(x,y)$, in order to learn about Y.
Before observing X, all we know about Y is its marginal (prior) distribution $f_Y(y)$, obtained by marginalising out X from the joint distribution: $f_Y(y) = \int f_{X,Y}(x,y)\,dx$. (If X is discrete, the integral becomes a summation.)
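To make that concrete, here's a minimal NumPy sketch of the marginalisation step, using a made-up discrete joint (a 3×2 table of normalised co-counts, so sums stand in for integrals):

```python
import numpy as np

# Toy joint p.m.f. over (X, Y): rows index x in {0, 1, 2}, columns index y in {0, 1}.
# Entries are co-occurrence counts normalised to sum to 1 (numbers invented).
joint = np.array([[0.10, 0.05],
                  [0.20, 0.25],
                  [0.15, 0.25]])

# Marginal (prior) of Y: sum the joint over all x, the discrete analogue
# of f_Y(y) = integral of f_{X,Y}(x, y) dx.
f_Y = joint.sum(axis=0)
print(f_Y)  # [0.45 0.55]
```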
After observing a specific value of X, say x = a, we gain more information about Y. Geometrically, we intersect the joint distribution $f_{X,Y}(x,y)$ with the plane $x = a$, yielding a slice $f_{X,Y}(x=a,y)$. However, this slice alone isn't yet a proper p.d.f. because it doesn't integrate to 1. To correct this, we normalise it by dividing by the marginal distribution of X at $x = a$, i.e., $f_X(a) = \int f_{X,Y}(a,y)\,dy$. This gives us the conditional distribution $f_{Y|X}(y|x=a) = \frac{f_{X,Y}(x=a,y)}{f_X(x=a)}$.
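The slice-and-normalise step, continuing with the same invented table (again just a sketch):

```python
import numpy as np

joint = np.array([[0.10, 0.05],
                  [0.20, 0.25],
                  [0.15, 0.25]])

a = 1  # the observed value x = a

# Slice the joint at x = a: not yet a proper distribution, it sums to 0.45.
slice_at_a = joint[a, :]

# Normalise by the marginal of X at a: f_X(a) = sum over y of f_{X,Y}(a, y).
f_X_a = slice_at_a.sum()

# Conditional distribution f_{Y|X}(y | x = a); this does sum to 1.
f_Y_given_a = slice_at_a / f_X_a
print(f_Y_given_a)  # [0.444... 0.555...]
```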
(Notice this relationship's Bayesian structure. We've got $f_{X,Y}(x,y) = f_{Y|X}(y|x) f_X(x)$. Marginalising to find $f_Y(y) = \int f_{Y|X}(y|x) f_X(x)\,dx$ amounts to averaging over all possible conditionals, weighted by the probability $f_X(x)$ of each observation. Bayesian updating embedded in ML?)
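That identity is easy to check numerically on the same toy table; this sketch just verifies that the marginal of Y is the $f_X$-weighted mixture of the conditionals:

```python
import numpy as np

joint = np.array([[0.10, 0.05],
                  [0.20, 0.25],
                  [0.15, 0.25]])

f_X = joint.sum(axis=1)             # marginal of X
f_Y = joint.sum(axis=0)             # marginal of Y
f_Y_given_X = joint / f_X[:, None]  # row x holds f_{Y|X}(y | x)

# f_Y(y) = sum_x f_{Y|X}(y | x) f_X(x): a mixture of conditionals weighted by f_X.
reconstructed = (f_Y_given_X * f_X[:, None]).sum(axis=0)
assert np.allclose(reconstructed, f_Y)
```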
Once we have this conditional p.d.f. $f_{Y|X}(y|x=a)$, it encodes all our updated knowledge about Y given the observation $x = a$. We can then use it for forecasting: picking a point estimate, or an interval (two points), or using the whole distribution to weight outcomes in decision-making, and so on.
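For instance, a point forecast could be the conditional mode or mean, and an interval forecast a smallest set of y values whose conditional mass reaches some level. A sketch with invented numbers:

```python
import numpy as np

# Conditional over a few y values given some observed x = a (numbers invented).
y_values = np.array([0, 1, 2, 3])
f_Y_given_a = np.array([0.10, 0.50, 0.30, 0.10])

point_mode = y_values[np.argmax(f_Y_given_a)]  # most probable y: 1
point_mean = (y_values * f_Y_given_a).sum()    # expected y: 1.4

# Crude 80% "interval": add y values in decreasing probability order
# until their total mass reaches 0.8.
order = np.argsort(f_Y_given_a)[::-1]
mass, chosen = 0.0, []
for i in order:
    chosen.append(y_values[i])
    mass += f_Y_given_a[i]
    if mass >= 0.8 - 1e-9:
        break
print(point_mode, point_mean, np.sort(chosen))  # 1 1.4 [1 2]
```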
What I love about this is how it flips the usual take on ‘misinformation.’ Instead of starting with who’s wrong, it starts with how people come to believe what they believe. That shift, from judging to understanding, feels like something we badly need, especially when everyone’s convinced they’re the only ones being rational. Honestly, more political debates would go somewhere if we cared less about being right and more about why we think we're right.