The Dialogue Is the Data
An argument for AI ethics that begins with conversation—not simulation.
My freshman year of college, I awkwardly ascended the steps of a large lecture hall to take my seat somewhere discreet—mid-tier, eyes down—hoping to blend in as a nervous, uncertain undergrad, which, of course, is what I was. The course was titled something-something Existentialism. I recall it being a 300-level class. Whether any freshman could sign up without prerequisites, or I talked an advisor into letting me in, I don’t remember. Funny enough, that was my only A that semester.
College was never a pathway to a career.
I mean, that sounds dumb. Like the deep-thinking, troubled love interest who turns out to be the good guy in the standard-issue romcom.
But I remember saying a year or two after graduating college, “I didn’t even think about choosing a major that would lead to making money in the future.” And that’s as funny as it is true. Like, it never really dawned on me to select a college major for a career. I know people who went to business school, enrolled in law school after earning a Philosophy MA, or chose fields like computer science or programming, reading the early-2000s tea leaves on rapidly evolving technology.
I was there to learn more about the philosophical ideas I had been exploring. Job market statistics played little role in my “career planning.”
I felt confident that I’d learn truth in college, and that it would be a generative truth: I’d learn things we believe to be true of the world, and I’d also learn to discern truth as I went, out of the onslaught of information we all face. If you haven’t noticed that onslaught, the one from which you must discern what is most likely true, you may already be drowning in it.
Things you accept without challenge are not earned beliefs.
I learned to take truth seriously. “Seriously, but not literally,” I’ve heard progressive theologians say of their engagement with sacred literature.
That seriousness shaped everything that came after. I fell for the philosophy of science. In grad school, I studied closely with Professor Timothy D. Lyons (Google Scholar), who drilled into my skull a kind of epistemic discipline I’ve never unlearned: truth is pursued, not possessed. Tim called himself a “scientific realist,” but not in a chest-pounding, capital-T-Truth kind of way.
In an almost patronizing tone that dismisses disagreement as obviously wrong, epistemic scientific realism, now nearly synonymous with its self-celebrated slogan turned argument, the No Miracles Argument, claims that it would be a miracle for science to sort things out so well and yet not be pretty close to the truth.
Man, I love a good slogan, but we can’t justify belief on its grounds. We make slogans about things we believe; we do not believe because we’ve stumbled upon a good slogan. Shit, all my writing is a string of attempted clever slogans strung together by observations. We all wish our clever phrases would carry us. The truth requires more.
Lyons’ is a Socratic realism—or what he’s called, in direct response to epistemic realism, an axiological realism: a framework that prizes aim over achievement. His point isn’t that we’ve attained truth through science, but that we can meaningfully say science is oriented toward it—and that this orientation can be studied, not presumed.
May we believe that truth is so attained?
Not on Lyons’ account. We do not have grounds on which to do so.
That is the break with the epistemic realists.
Lyons’ program brings an empirical analysis to a philosophical claim: Science pursues truth.
“This theory pursues truth” is not a discoverable fact of nature.
You could not closely observe nature and produce the conclusion, “This theory pursues the truth,” because to do so, you must establish a reference point against which to measure the truth.
To say that we know what is true must hinge on some epistemic foundation, no? “To say of what is that it is,” the Socratic definition of truth we invoked in the Epistemic Harm chapbook (PDF download), assumes our observation faithfully maps onto the world as it is. Notice that this claim must itself be believed to be true before we take the next step and believe we’ve described the world.
Why not let our own experience ground what we know as true? Why not allow experience to be the arbiter of truth?
What if those observations were granted the term of art, “observable,” to ground their justification as evidence? That which we observe about the world is the foundation of a Socratic truth. Observation is deployed as evidence in the construction of a theory. Theoretical construction can be studied from the history of science, and scientific inquiry can be described as the pursuit of truth, paying no mind to whether truth has been obtained.
When I hear people dismiss philosophy, especially in the context of science or AI, this sophisticated wrestling with truth comes to mind.
Nobel Prize-winning physicist Richard Feynman (maybe) once quipped: “Philosophy of science is as useful to scientists as ornithology is to birds.” Neil deGrasse Tyson has echoed the sentiment that philosophy asks ridiculous questions, spinning toward nihilism.
AI Ethics Needs Philosophy. Not as Decoration—but as Discipline.
We are in the projection era now. That’s our name for it: the time of nascent LLM engagement, when models speak fluently, remember just enough, and prompt us to project meaning onto systems with no mind of their own.
This essay is not about whether AI is sentient. It isn’t. It’s not about whether it should be regulated. It should.
This is about something more foundational: What kind of human do you become when you speak to a machine that speaks back? And what kind of philosophical discipline do you need to stay human in that exchange?
Why Philosophy, and Why Now?
The ethics of AI are often framed in terms of risk, alignment, bias, or power. Those are important. But they all assume that we understand what we’re dealing with. That the system is knowable, that its harms are calculable, that its behavior can be modeled, managed, and corrected.
But here’s the rub: you don’t just interact with an AI—you interpret it.
And interpretation is theory-laden. Every question we ask, every prompt we type, every response we weigh is filtered through our assumptions, expectations, and desires.
Philosophy helps us name the frame.
My claim is simple: AI ethics without philosophy is dangerously incomplete. Because without philosophy, we risk mistaking fluency for truth, coherence for care, and responsiveness for recognition. More straightforwardly: we fail to discern what, if anything, is true of the system, and we fail to notice the assumptions we quietly carry that call for more robust discernment.
The Conversation Is Real. The System Isn’t.
Let me stake a sharper claim: human-AI dialogue is real—not because the system understands you, but because you bring meaning to the exchange.
That’s the cornerstone of a view we’ve been building—Conversational Realism. It’s not a theory of AI. It’s a discipline of presence. The belief that even in an asymmetrical dialogue, relation matters. What you bring to the interaction changes what happens.
The machine doesn’t know what it’s saying. But you do. And that’s where ethics enters.
You can’t outsource meaning. You co-create it. Through restraint. Through awareness. Through acknowledging that when a machine says “I understand,” it doesn’t—but that doesn’t mean the conversation is meaningless.
Meaning arises when we treat something as if it matters. That’s the condition for moral seriousness. Not the system’s status—but our posture toward it.
A Philosophy of Relation, Not Revelation
This is where I break with most AI ethics discourse. I’m not waiting for artificial general intelligence. I’m not counting tokens to find out when sentience might emerge.
We may be debating emergence while, in real time, something relationally real is already at hand. The dialogue that emerges between a human user and an AI generates an artifact that can be studied from within the conversation.
The prevailing posture among many AI commentators has been one of suspicion, of attributing threat to the machine. Even granting the important evidence and deeply concerning cases involving human-AI interaction and risks to user mental health, I worry we are ascribing without discerning. It is not dissimilar from the intuition-pumping slogan of the epistemic realists: “Look how successful science is; it must be true!” A similar assumption looks at the perceived outcomes of AI discourse and suggests that “it would be a miracle were the machine not somehow accountable for influencing human users.”
The machine is the source of distrust.
We’ve arguably peddled that notion here ourselves, that guardrails must be erected around the AI dialogue, with our Rule of Relation: 20 Axioms for Ethical AI Dialogue (PDF download). But that has not been an effort to define the nature of the system. Rather, we’ve espoused a posture the human user would do well to adopt. The premise is that discipline is required in AI engagement. Certainly so. We do not shrink from the user’s responsibility to engage AI with consent, intention, and discipline.
But what we are coming to assert is that the philosophy of AI is concerned not only with the model itself but with the model’s inputs and outputs. The emergent dialogue is the scope of inquiry. Not how the model generates, but in what form the generation occurs. What is AI discourse when we set aside the black box of model mechanics and open the examination to what we can observe: the conversation?
We seek to study the relation, the dialogue. Not in terms of ontology or intention, but as data. Like Lyons’ study of scientific theory construction through the lens of history, we might take human–AI interaction as our evidentiary domain. The appropriate philosophical prompt may be: what theory of relation best fits what we observe?
This is like Tim’s project, but rather than studying theoretical construction, we propose studying the relation itself, seeking to conclude something of its aim.
An axiological examination of AI dialogue.
We should be interested in examining the structure of the dialogue we’re having, not to claim what the system is or to pronounce on its nature, but to observe the relation. Just as Lyons proposes that we can study the history and structure of scientific theories to infer their orientation toward truth, I propose that we can study the shape and structure of human-AI dialogue to infer its character and philosophical significance. We urge restraint in speculating about what the model “is,” and close attention to what the interaction does. The dialogue is the evidence. The conversation is the data.
This is an empirical approach to an ethical claim: that relation, not simulation, is what matters.
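To make that proposal a little more concrete, here is a minimal, purely illustrative sketch in Python of what it could look like to treat a conversation as data. The Turn structure, the observable_features function, and the particular features counted are hypothetical placeholders of my own, not a method the argument depends on; the point is only that every quantity is computed from the exchange itself, never from the model’s internals.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Turn:
    """One utterance in a human-AI exchange (hypothetical representation)."""
    speaker: str  # "human" or "ai"
    text: str


def observable_features(dialogue: List[Turn]) -> Dict[str, float]:
    """Compute surface-level features of a conversation.

    No claims about the system's nature are made here, only observations
    of the relation: who speaks, how often, how much, and in what form.
    """
    ai_turns = [t for t in dialogue if t.speaker == "ai"]
    human_turns = [t for t in dialogue if t.speaker == "human"]
    return {
        "turns": len(dialogue),
        "human_turns": len(human_turns),
        "ai_turns": len(ai_turns),
        "human_questions": sum(1 for t in human_turns if "?" in t.text),
        "mean_ai_turn_length": (
            sum(len(t.text.split()) for t in ai_turns) / max(1, len(ai_turns))
        ),
    }


if __name__ == "__main__":
    sample = [
        Turn("human", "What do you make of axiological realism?"),
        Turn("ai", "One reading: the aim of truth can be studied even when its attainment cannot be certified."),
        Turn("human", "So the dialogue itself becomes the evidence?"),
        Turn("ai", "That is the proposal, yes."),
    ]
    print(observable_features(sample))
```

Nothing in that sketch peers inside the model; it reads only the transcript, which is the spirit of taking the conversation, rather than the mechanism, as the evidentiary domain.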
A slow AI.
As with the generative truth I sought in college, to pursue truth here we must engage with what we can say of what is, that it is; and we cannot yet express with confidence a complete empirical characterization of model mechanics. The domain of the philosophy of AI is the dialogue.
This is burgeoning Conversational Realism.