This is the first really original perspective on AI that I've read in a while.
The technical points are very accurate. I'm familiar with Fraser's work; you mention his essay is "worth spending time with" – I can probably guess which one – but it's not linked in your essay.
I don't work on language models directly, but I am a software engineer, and I sit reasonably close to where some of the sausage is made. Of the three parts of an LLM that Fraser described, I'd like to tease apart two: the language model itself, and the "fictional character" or personality. I think of the former as sitting nested inside the latter. The model itself is only a mathematical artifact (a probability distribution, to use jargon) represented as a software system. The personality layer is the complicated part: it queries the model, then it filters, neutralizes, and otherwise recasts the model's response. The personality layer is also a software system, one with lots of machine learning ingredients, and it is in this layer that the commercial players spend the lion's share of their investment. This is where *theater* is manufactured, as your metaphor so eloquently puts it.
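To make those two layers concrete, here's a toy sketch in Python – purely my own illustration, not any vendor's actual architecture. The inner object is nothing but a lookup into a probability distribution; the wrapper queries it, filters what comes back, and recasts it in character:

```python
import random

class BaseModel:
    """The bare mathematical artifact: given a context, a distribution over next tokens."""
    def __init__(self, table):
        self.table = table  # maps context string -> {token: probability}

    def sample(self, context):
        dist = self.table.get(context, {"<unk>": 1.0})
        tokens, weights = zip(*dist.items())
        return random.choices(tokens, weights=weights)[0]

class PersonalityLayer:
    """The 'character': queries the model, then filters and recasts its output."""
    def __init__(self, model, blocked):
        self.model = model
        self.blocked = blocked  # things the character is not allowed to say

    def respond(self, context):
        token = self.model.sample(context)   # query the inner model
        if token in self.blocked:            # filter / neutralize
            token = "[redacted]"
        return f"As your helpful assistant, I'd say: {token}"  # recast in character

model = BaseModel({"hello": {"world": 0.7, "darkness": 0.3}})
assistant = PersonalityLayer(model, blocked={"darkness"})
print(assistant.respond("hello"))
```

Everything interesting in a production system lives in that wrapper (and in the machinery that trains it), but the division of labor is the same.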
I've always been fascinated by the language model itself, what ML engineers sometimes call the "shoggoth." Since the mathematical artifact can only play back linguistic samples from within the contours of every linguistic sample it has already absorbed, it is essentially a mirror of our collective selves – a very distorted mirror, admittedly. My fascination is specifically with the idea that this mirror might contain some smooth patches that could in turn be used as a tool (or "lens") to examine parts of our collective selves in a way that has never before been imagined. Your essay tempered some of my hopes. It throws into relief that we are biologically disposed to experience language as a form of reality, and that by using linguistic tools to probe a linguistic collective, none of us will ever access a meta-reality beyond the linguistic one. It's probably why the word shoggoth exists.
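To illustrate what I mean by "playing back from within the contours": even the simplest possible language model – a bigram table, absurdly far from a real LLM but the same in spirit – can only recombine transitions it has already absorbed. A distorted mirror of its corpus, nothing more:

```python
from collections import defaultdict
import random

corpus = "we shape our tools and our tools shape us".split()

# "Absorb" the corpus: record which word has followed which.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def play_back(word, length=6):
    out = [word]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))  # only moves the corpus licenses
    return " ".join(out)

print(play_back("we"))  # remixes the corpus, e.g. "we shape us"
```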
I'll admit that I am basically ignorant of theater; I think this is why I love the essay so much! The pieces discussed at the end of the essay are fascinating examples that really drive your point home for me. I learned something – and would love to see any of those pieces on the stage.
Thank you for your kind comments. And you're quite right to note that Colin Fraser's essay isn't mentioned by name in the piece. I (obviously) recommend it highly! https://medium.com/@colin.fraser/who-are-we-talking-to-when-we-talk-to-these-bots-9a7e673f8525
There are numerous intriguing thoughts in this highly worthwhile read.
For me, the main argument is that interacting with LLMs invokes a familiar mimetic situation (the theatre) but with a very dark twist.
The user is enrolled in a play in which their role is hidden from them. The status of their role-playing partner is obscured. There is an unresolvable ambiguity between the real and the imagined, and, significantly, the experience is an isolated one: there is no afterward, no social interpretation through which the user can ground themselves once again in a shared social reality.
You said these things more eloquently, of course, and I agree. Being a bit hopeful, I will say that this space (Substack) makes it possible, in some way, to have that drink after the performance where interpretive and life-giving banter takes place. It’s not the same thing, but it is something.
I had the same thought when I got to that part of the essay: yes, there is no “after the performance,” no social interpretation, but many of us would love to somehow try.
I’m with you on this. I use AI too when editing my Substack pieces, but only as one might use spellcheck or an encyclopedic sidekick — helpful, yes, but not exactly soul-baring company. I open it, ask for synonyms, metaphors, or the occasional nudge of clarity, then close it and make myself coffee (alone).
It’s a strange balancing act — using the tool without buying into the theater of it. The illusion of intellect is remarkable, but I prefer to treat it as software, not as a specter with opinions. Should we ban it? Probably not. But pretending it’s anything more than a very capable digital parrot seems like a dangerous act of collective miscasting.
I like the idea of making oneself a coffee after interacting with an LLM. I still very much use dictionaries and thesauri when writing and editing, but these gems of knowledge are pretty much dead for the younger generation. And with them, perhaps, the habit of “reading in context”—for instance, when translating a word. I’m more interested in how LLMs will change linguistic practices, and in how this shift will be reflected in writing itself.
Some good argumentation here, as we try to hack our way through the AI conundrum. I've been skeptical of the potential of AI, mainly because I don't see how it can deal with the distinction that David Hume makes between impressions and ideas in "A Treatise of Human Nature":
"Those perceptions, which enter with most force and violence, we may name impressions; and under this name I comprehend all our sensations, passions and emotions, as they make their first appearance in the soul. By ideas I mean the faint images of these in thinking and reasoning; such as, for instance, are all the perceptions excited by the present discourse, excepting only, those which arise from the sight and touch, and excepting the immediate pleasure or uneasiness it may occasion."
Since AI is unable to receive impressions in Hume's sense of the word, it is correspondingly incapable of producing ideas, because ideas depend on impressions – and impressions can only be received as a result of living in the real world as a conscious being.
This is a persuasive explication of a powerful analogy. Thank you!
I never considered suspension of disbelief when dealing with AI until this article. Thank you for pointing it out, and the comparison to theatre fits. A scary consideration is how few people will likely think about AI this way. I've never used AI and see no need. This technology looks like it's loaded with too many opportunities to use it the wrong way and cause a lot of damage.
Thanks for writing this – it clarifies a lot! Your point about generative AI being "theater that doesn't acknowledge itself as such" totally nails it. It's spot on and truly insightful, especially when we see cases like Adam Raine. It's the hidden performance that makes LLMs so dangerous: we don't treat them like actors.
Very, very good piece. Why we need humanism so badly: thinking about tech is so hard. Highly recommended.
An extremely thoughtful piece - I shall reread!