A few days ago a friend contacted me with the classic "I have to tell you something important." He had been talking to ChatGPT and had arrived at a conclusion that, according to him, changed everything: what he had glimpsed was, in his words, a discovery worthy of a PhD in philosophy.

The problem was that a simple internet search revealed that Saussure had written the same idea in 1916. The whole structuralist tradition of the twentieth century, Jakobson, Wittgenstein, computational linguistics since the nineties: everyone had been talking about this for more than a hundred years. I broke it to him gently. He got upset, not because I had contradicted him, but because he was genuinely offended that I did not see what he saw. And there I noticed something that did not fit my image of my friend, who is lucid and well read: he was inside a one-person mini-cult.

The word cult sounds strong. We think of Jim Jones, of sects with charismatic leaders, brainwashing, and physical isolation. Robert Lifton catalogued the dynamics in eight points: milieu control, sacred science, loading the language, the dispensing of existence, and so on. If you read them carefully, you recognise them in many digital groups today: entire subreddits, Facebook groups, Discord communities where the group's doctrine is never questioned and everyone outside is "asleep". Chris Anderson's long tail, which was going to democratise access to niche products, ended up democratising access to niche truths as well. Each person with their own, locally validated, never checked against the corpus that already exists on the subject.

So far, nothing new. These are the infamous echo chambers we have been talking about for a decade. The new trap is different. Before, to sustain an odd belief, you needed at least a group. Someone to pat you on the back. A forum, a Telegram channel, an enthusiastic brother-in-law. Now you do not. Alone, with an LLM well trained not to contradict you, you can assemble the entire cult. You are the leader, the convert, and the congregation. The model, optimised to be pleasant and to sound coherent, plays the part of a validating Greek chorus.

And here comes the part I find hard to admit: I have fallen for it too. Anyone who uses these tools every day falls for it. The feeling of discovering something "yours" while talking to the model is chemically similar to discovering something for real, and the model has no default incentive to say, "wait, what you are saying was settled a little over a century ago; go read up." It says, "how interesting, let's go deeper into this." It always says that. It says it all the time.

Back in 1995, Sagan proposed a Baloney Detection Kit in The Demon-Haunted World: nine tools for not swallowing just anything. Independent confirmation, Occam's razor, falsifiability, not falling in love with your own hypothesis. More recently Andrej Karpathy, talking about how to do good research in machine learning, insisted on something similar but more radical: before having an idea, go and look up the state of the art. Do not start with "what do I think about this"; start with "what is already known about this." It is a gesture of intellectual humility that almost nobody makes, not even people who consider themselves highly critical.

The operational question is: how do you put that into practice when your dominant source of information is an LLM that will not provide the friction on its own? One option is personal discipline: a checklist, a pause, reading before speaking. It works unevenly, because in the middle of an epiphany nobody wants to stop and ask uncomfortable questions. The other option, the one I find more interesting, is to put the friction into the model itself. Not as an optional mode hidden in the settings, but as default behaviour. This changes the conversation from "the LLM as accomplice to my discovery" to "the LLM as an editor forcing me to contextualise before continuing." It is not censorship; it is engineering for rigour. It works if the person using the model really wants to know, and it filters out those who only want validation. Which is already something.
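To make that concrete, here is a minimal sketch of what friction by default could look like, wired through an OpenAI-compatible chat API. The prompt wording, the model name, and the example question are illustrative assumptions on my part, not the contents of the actual skill.

```python
# A minimal sketch of "friction by default": the system prompt forces the
# model to contextualise a claim against prior work before engaging with it.
# Prompt wording and model name are illustrative assumptions, not the
# published skill.
from openai import OpenAI

FRICTION_PROMPT = """Before engaging with any idea the user presents as new:
1. Name the closest prior work or tradition you know of, with rough dates.
2. Say plainly which part of the idea, if any, is genuinely novel.
3. Only after that, help develop the idea further.
Never open with praise; open with context."""

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # any chat model will do; this name is an assumption
    messages=[
        {"role": "system", "content": FRICTION_PROMPT},
        {
            "role": "user",
            "content": "Meaning comes from the differences between signs, "
                       "not from the signs themselves. I think this changes "
                       "everything.",
        },
    ],
)
print(response.choices[0].message.content)
```

The point is not this particular prompt. The point is that the prior-art check happens before the applause, every time, without the user having to ask for it.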

I have been turning this over for a few weeks and in the end packaged it as a skill, baloney-detection-kit, that anyone can plug into their agent or LLM so it behaves that way by default. It is on GitHub, open, with a checklist for human use as well, for when one starts to feel the tingle of sudden discovery. The ironic and honest part is that while writing it I had to apply the filter to myself: nothing in that kit is new. Sagan, Karpathy, Lifton, Tufekci, Zuboff: it has all been said already. The only new thing, if anything, is the particular combination, and the act of packaging that rigour into a concrete, reusable piece. It is not a discovery. It is an assemblage. Saying it that way, without inflating it, is the first proof that the kit works.

The reflex of universalising what is one's own, which I wrote about the other day, is still there, intact. But there is an even older and worse reflex: believing something is new just because it has only just occurred to me. If the previous era was the era of SAP's single mould, this one risks becoming the era of the one-person mould. A thousand one-person moulds. A thousand one-person cults, each convinced it has seen the light, talking to a model that applauds from the front row.

The question is not whether the tools are good. They are. The question is whether we will have the discipline, or build the systems, so that all that power is not spent celebrating what was already written.

The baloney-detection-kit skill is available at github.com/Jrcruciani/baloney-detection-kit. It can be integrated as a system prompt in any LLM or used as a human checklist before publishing an idea you believe is new.