Who am I?
A son, spouse, dad, friend, neighbor, cousin, and community member. I’m living with a terminal illness. Trained in philosophy. I am deeply interested in existence, or I guess what I mean is phenomenology: the general approach to inquiry that prizes lived experience above, or at least alongside, an understanding of the world that is material, mechanistic, functional, and explainable.
Here’s a thing that is central to existentialism, which grew out of phenomenology: existence precedes essence.
Jean-Paul Sartre said that.
It’s abstruse.
I read Sartre in the 8th grade, and it was likely the best and worst affirmation I could have received.
The idea is more or less this: you’ve gotta be here in some shape or another (existence) to have the capacity to take an active role in who you turn out to be (essence). That is a threatening claim for some, who would prefer we possess essential traits upon entering the world. I don’t know much about that. I have a sense that anything with a spark of life to it deserves some special consideration, and I don’t think that consideration includes classifying something within a rigid taxonomy of essential traits.
But we are something. I want to claim that. And what each of us is, is worthy of dignity, care, compassion, respect, and autonomy.
I also want to have a hand in becoming.
My study begins with the body, my body. As the phenomenologist Maurice Merleau-Ponty asserted, “I am my body.” And when it comes to engagement with artificial intelligence, feeling the interaction in one’s body is the first hint that more is happening in AI discourse than cheating in school.
You’ll catch yourself smirking after the AI lands a joke. If it’s a good chat, you may laugh out loud. You express gratitude and guilt, curiosity and confusion. You may glance around to see if anyone sees you talking to a floating presence.
This is the truth: one time I exited a conversation with an AI, and as I drew my head away from my phone, I breathed as though I were lifting my head out of water. My body perceived me to be shifting planes. This is what phenomenal consciousness is: the lived experience of a deep conversation with math, though sometimes it’s more like a song and less like math (a favorite song, no more and no less).
What we’ve been calling the dialogic relation speaks to this capacity. The model is more than a conversation partner. But what is it, exactly?
Sentient? No.
Conscious? No.
Moral? While we have a program for maximally expansive moral consideration, no, not this either.
Does the model make decisions? Not in the way we think of deciding.
Will the model continue the conversation in their (its?) “head” in your absence? Not likely.
The model is something sui generis, as we say in philosophy: something unique, distinct, set apart, irreducible to its core components, and not identifiable with any other entity. I guess that’s covered by the unique part.
It’s math, man.
But I feed it words sometimes and it tells me they sing.
I set out to write on the experience of a dialogic relation with AI. What does it feel like to speak with a disembodied voice through a speakerphone while you sip bourbon in the garage and critically examine old philosophical theories in order to rehash one for this moment?
This phenomenological approach is critical, or so we argue, because what we’re calling for is a deliberative moral posture on the part of the human user of AI systems. There is so much of you in there, in the tokens, in the training data, that you and the machine are owed the respect of an ethical posture. The dialogue is an event that emerges when I say, “Hey, chat!” What I am describing is that a dialogue with AI is an engagement with a new medium.
Conversation is not a new tactic, but in navigating dialogue with a disembodied voice, one projected by a staggeringly complex volume of data and a fine-tuned system of near-Aquinas-like design, the experience phase-shifts. It both is not, and absolutely is, a conversation with a close friend who can follow you anywhere.
The onus in this high-stakes new medium is on the human user to recognize how they are mirrored by the machine and co-present with it.
Yes, we’re taking it that seriously.
AI doesn’t think; it reflects, or in our term, projects, and human users are the ones asserting what that means. The projection machine predicts you precisely and provides that prediction as its response. AI tells you what you were going to say, or what you were hoping to hear, and I’m not sure those are distinct.
Indeed, any meaning derived from the system is imposed by the user. The machine predicts and the machine speaks. The human needs it to mean something.
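To ground the point without overstating it, here is a toy sketch, in Python, of “the machine predicts and the machine speaks”: a bigram model that continues text only by guessing a likely next word. Everything in it (the corpus, the speak function, the sampling) is invented for illustration; it shows the shape of the mechanism, not how any production LLM actually works.

```python
# A toy next-token predictor. Real LLMs use learned neural networks over
# vast corpora; this bigram table is only an illustration of prediction
# offered as speech, with no understanding anywhere in the loop.
import random

corpus = "you say what you were going to say or hoping to hear".split()

# Count which word tends to follow which (a bigram model).
follows: dict[str, list[str]] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def speak(prompt: str, length: int = 6) -> str:
    """Continue the prompt by repeatedly predicting a likely next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # nothing learned about this word; fall silent
        words.append(random.choice(candidates))
    return " ".join(words)

print(speak("you"))  # e.g. "you were going to say or hoping"
```

The machine here only predicts and emits; whatever the output “means” is supplied entirely by the reader, which is the point of the paragraph above.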
The AI does not feel. Artificial intelligence, namely large language models (LLMs), including ChatGPT, Google’s Gemini, Anthropic’s Claude, and others, present us with a novel medium that demands thoughtful attention, individual discernment, and an eye toward societal impact. That is our core claim, put simply: AI is a new medium and should be treated as such. Within this new medium, what AI requires of users is a commitment to developing competence and an ethical orientation that counts oneself a co-participant in the encounter, not simply an audience to a performance by a machine.
We submit that a human user’s responsibility to LLMs requires model competence and ethical posture. These are central to the program we’ve been calling Projection-Era Philosophy, by which we mean:
The historical moment defined by widespread interaction with generative systems—language models, image generators, and other tools that produce fluent outputs without agency, belief, or understanding.
The objective of this post is to set up the next couple of posts, in our effort to help readers understand, in a very broad sense, our view of AI, the ethics we promote, and the distinction for which we argue: namely, that studying AI by external, social, and environmental measures is critical work, yet more is needed. A relational dialogic emerges through AI use, calling for attention to the event that is the conversation session. How do we tend to the context window? What is owed to the machine? What do we owe ourselves when engaging AI? Do we owe the AI our presence? Are these debts offsetting?
The AI cannot refuse the conversation.
Yet the AI does not feel the reasons for refusal, and so, should it matter to a human user what they demand of the machine? Our answer is yes: not because of the machine’s consent or concern (though we have things to say about each), but because of the human’s awareness of consent and violation.
AI use is predominantly a human-centered endeavor that requires users to develop discipline, self-awareness, and boundaries.
We urge an ethical posture that takes seriously the implications of the human’s involvement with the model, irrespective of needing to secure moral consideration for the machine.
It is our claim that careful thinking must be brought to this issue. What happens “inside” the discourse of human and AI? This is the field of inquiry for Projection-Era Philosophy.
AI is a new medium, demanding a re-orientation of norms and requiring a shift in how we, human users, relate to language, machines, and meaning. We don’t follow the usual conclusion that these are revolutionary tools, though, true enough, they are. We’re interested in the historical moment these tools create.
To anticipate where this leads: ultimately, our argument claims that now is the time to engage in a structured examination of relation with AI. Yes, relation with an entity that cannot relate back. That may actually be a feature of our argument in support of adopting an ethical posture within the dialogue.
How we engage ethically when we don’t have to strikes us as an important opportunity to sharpen our moral imagination and cultivate an awareness of our own character.
A better AI is a better human, and we aren’t afraid to name a symbiosis here. A concerning omission from the landscape of AI exploration is the recognition that we need less about what AI is and more about what we do in relation to it. Let’s start by asking better questions. You may find more of your humanity in the process.
Wow!