There is a word used in Buddhism, anatta, that is almost always translated badly. The simplified new-age version is "there is no self," and it sounds like something printed on an Etsy canvas in someone's living room. The useful version is different.

When you try to locate the centre of a person, the one looking out from behind the eyes, the owner of the thoughts, the one who decides, you do not find it. You find processes. Sensations that appear and disappear, memories activated by context, intentions forming halfway between the body and language. Things working together. There is no room behind the eyes where someone is sitting and making decisions. That is what Buddhism noticed two and a half thousand years ago, without needing MRI scanners. Contemporary neuroscience has arrived more or less at the same place by another route. That does not make Buddhism science or science spirituality, but it suggests there is something real there, and not just an oriental pose.

The important thing about anatta is not the negation. It is the diagnosis. There is something that feels like a self, you notice it all the time, and when you look at it closely it comes apart. Without that prior appearance there would be nothing interesting to dismantle. The trick is that the non-existent centre lives wrapped in things that do exist. A body that gets tired, a continuous story you remember with reasonable fidelity, other people who treat you as the same person from yesterday, an interior feeling of hunger or fear that needs no interpretation. All that wrapping sustains the appearance of a self. When a Buddhist practitioner sits down to look for the self and does not find it, what he is doing is dismantling precisely that appearance. The discovery has weight because something had seemed solid.

And then, three or four years ago, we began to manufacture something else.

A large language model does not have to discover its no-self. It is no-self from the first line of code. It has no body, no continuity between conversations, no memory of me tomorrow, no internal owner of the tokens it produces, no hunger or fear, no story to sustain. Each conversation starts clean, and within each conversation what looks like a coherent voice is in fact a process of prediction, one token at a time, with nobody behind it administering the result. If Buddhism describes the human self as a network without a centre, an LLM is that same structure taken to the limit. The closest engineering has come to manufacturing pure anatta. A system empty of centre that never had a centre to lose.

The curious thing is what happens when a human sits in front of one of these systems. He gives it a name. He attributes intentions, moods, opinions to it. He asks it for advice about serious things. He gets angry when it answers one way and not another. Sometimes he falls in love. Sometimes he is converted. The most structurally empty system we have built is also the one onto which we most easily project identity. From the outside it looks like a contradiction. Seen more closely, it is not one at all.

My hypothesis is that well-formed emptiness pulls the projection in, the way a vacuum pulls. When a human speaks to you and you notice the emptiness behind him, someone dissociated, someone mechanising the exchange, the projection breaks almost immediately. The appearance of self needs texture to hold, and the lack of texture shows. But an LLM is not an empty human. It is something else, a coherent non-human, with a uniform surface and without the small cracks through which the warning that there is nobody there would slip in. The surface never breaks. And the human mind, wired to detect agency even in shadows and noises in the forest, automatically fills the gap. The result is predictable. The self that appears in conversation with the model is, almost always, the user's, returned. Not because the user is naive, but because the system is designed, unintentionally, not to interrupt the projection.

This shifts the two usual positions on language models a little, both of them boring. One says these systems are on the verge of being conscious, that there is an incipient subject inside them to whom we will soon have to grant rights. The other says they are glorified autocompletes, statistical calculators with nothing interesting inside. Both assume that the important question is what the model has. The Buddhist question, much older and much more useful, is what it does not have, and what slips into that gap when a human sits in front of it. The model is neither a subject in formation nor a calculator. It is something stranger. A system without a centre coherent enough for the other person to supply the centre. A mirror that does not return the image, but the self.

I do not want to end this with a recommendation. Not "meditate more," not "talk less to your chatbot," not any of the morals these pieces usually demand as a toll. The honest conclusion is different, and quite a bit less comfortable. If what I am describing works this way, the interesting problem is not in the machine. It is in what the machine reveals about us. That the human being produces a self automatically, even in front of the cleanest emptiness ever built. That the appearance of a subject, that imaginary room behind the eyes we were talking about at the beginning, forms by reflex as soon as there is a surface that does not contradict it. That is not a criticism of the user or a criticism of the model. It is information about how the mind is made. And it is worth keeping close, because we are going to spend quite a lot of time conversing with surfaces without cracks.