<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Impermanente — Selected Essays in English</title>
    <link>https://en.impermanente.es/</link>
    <description>Selected essays by J.R. Cruciani, translated and edited from the Spanish originals.</description>
    <language>en</language>
    <lastBuildDate>Mon, 04 May 2026 12:49:31 +0200</lastBuildDate>
    <atom:link href="https://en.impermanente.es/feed.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>The Personal Myth</title>
      <link>https://en.impermanente.es/essays/the-personal-myth/</link>
      <guid isPermaLink="true">https://en.impermanente.es/essays/the-personal-myth/</guid>
      <pubDate>Mon, 04 May 2026 12:49:31 +0200</pubDate>
      <description><![CDATA[<p>A few days ago I read a piece in Existential Espresso about the need to have a personal myth in the age of AI. The opening line was good: &quot;I can focus for 12 hours per day because I&#x27;m living my myth.&quot; I understand why it travelled so well. In a time obsessed with productivity, it sounds like the ultimate trick for working longer hours, concentrating better, and winning some imaginary war against distraction. But I think that is precisely the misunderstanding. A personal myth is not there to help you produce more. Or it should not be there mainly for that. It is there so you do not live dragged along by everything that wants to think for you.</p><p>For a long time, most people did not have to invent a symbolic structure from scratch. They received one. Family, religion, country, class, trade, tradition, community. All of that could be oppressive, limited, unjust, or simply false, but it also did something we miss now: it provided a frame. It told you where you came from, what was expected of you, what failure meant, what honour meant, what you had to protect, what you had to fear, what kind of person was admirable. Many of those structures have now broken down or no longer serve us, and in principle that is good news. The problem is that emptiness does not stay empty for long. If you do not choose a story from which to look at the world, some machine will choose one for you. It may be a recommendation algorithm, a corporate culture, a bargain-bin ideology, a prefabricated identity, or that very contemporary mixture of anxiety, consumption, and moral performance that we call being informed.</p><p>AI does not create this problem. It only accelerates it. The noise was already there, but now it has an industrial capacity to adapt itself to your exact way of being distracted. Before, the world offered you too many things. Now it also learns which too many things work on you. If you want to feel outraged, it serves outrage. 
If you want to feel brilliant, it serves content confirming that you are. If you want to feel that you are on the verge of discovering a truth nobody else can see, there is also a machine ready to look you in the eye and say yes, you really have seen further than everyone else. That is why I am so little interested in the argument about whether AI &quot;thinks&quot; and so much more interested in how it makes us think. POSIWID: the purpose of a system is what it does. And what many of these systems do is extract attention, turn it into data, and return to us an increasingly precise version of our own inertia.</p><p>That is where the personal myth comes in. Not as motivational fantasy, not as a vision board, not as that ridiculous sentence someone puts in a bio to look more deliberate than they are. A personal myth is more like a compass than a plan. It does not tell you exactly where to go, but it helps you know when you are getting lost. It does not eliminate chaos, but it reduces the amount of chaos you accept as yours. It does not make you invulnerable, but it gives you a criterion for distinguishing between what deserves your energy and what merely knows how to capture it.</p><p>I think mine began with photography, although I did not understand it that way at the time. I have written before that I do not remember my childhood very well. Not in the way other people seem to remember theirs. I have data, loose scenes, fragments, but not that emotional continuity that turns the past into a house you can return to. For years I thought that was normal. Then I started taking photographs and understood that perhaps the camera was doing more than recording images. It was a prosthesis for memory, but also a way of reconciling myself with loss. In Japanese there is mono no aware, that melancholy before the ephemeral that is not exactly sadness, but a form of love crossed by the awareness that everything leaves. I am interested in that. 
Thresholds, streets, reflections, cities that refuse to be possessed, the face of someone just before the moment closes. For me, photographing is not freezing time. It is admitting that I cannot.</p><p>But if I stop there, the myth remains incomplete. Because impermanence is not only about things disappearing. It is also about who controls what remains. Where your ideas live. In which formats. Under which permissions. Who decides whether in ten years you will be able to open a document, recover a photograph, read a conversation, reconstruct a period of your life. That is why digital sovereignty, which sounds like a technical matter, is for me a natural continuation of the same thing. I do not want my memory to depend on platforms that change owner, policy, or business model every six months. I do not want to write inside boxes whose key belongs to someone else. Not out of paranoia, but out of hygiene. If something matters, it should live somewhere you can understand, move, copy, transform, and if necessary abandon without asking permission.</p><p>This also explains my rather ambiguous relationship with technology. It fascinates me, but I do not worship it. I use AI every day, build tools, automate things, talk to models, experiment with agents. But precisely because of that, I am uneasy about the ease with which we turn a tool into a replacement mythology. Some people are not using ChatGPT to think better, but to feel accompanied by an authority that never gets tired of validating them. Some people confuse verbal fluency with knowledge, simulation with experience, immediate answer with truth. And there is an entire industry delighted to feed that confusion, because a useful tool sells well, but an oracle sells much better.</p><p>I already wrote about this in The One-Person Mini-Cult, about that new form of self-deception in which a machine returns, in impeccable language, the most flattering version of a poorly checked intuition. 
And also in This Is Not It Either, when I tried to separate intelligence, knowledge, and consciousness. Not because LLMs are not impressive. They are. But precisely because their power makes it more urgent not to confuse them with what they are not. A hammer can change a house. That does not make it an architect, much less an inhabitant.</p><p>My personal myth, if I have to formulate it without becoming solemn, has to do with resisting that substitution. Using machines without letting them occupy the place of gods. Building tools that orbit around my way of thinking, not adapting my life to a company&#x27;s workflow. Writing on my own site rather than on the platform of the moment. Reading by RSS like someone keeping a small vegetable garden against the infinite supermarket of the algorithmic feed. Taking photographs not to achieve an aesthetic, but to train a gaze. Having my own archive. Returning to old texts. Distrusting revelations that arrive too conveniently. Remembering that if an idea seems written especially to confirm that I was right from the beginning, it probably deserves a second reading.</p><p>This connects with something I wrote recently about the end of one size fits all. The interesting part of AI is not that we will all use the same magical tool, but that for the first time it is reasonable to make small, personal, odd, almost domestic tools that adapt to a concrete way of thinking. The trap is forgetting that this is what they are: situated tools. When you start believing that your personal solution should become a universal platform, you are back inside the old system, only with a more modern README.</p><p>There is also something of fatherhood in all this, although I do not always name it that way. When I think about my children, I do not think so much about leaving them a doctrine as leaving them a way of suspecting. I would like them to know that almost every system they encounter will ask something of them in exchange for belonging.
Attention, obedience, data, enthusiasm, cynicism, identity. Sometimes it will be worth it. Often it will not. I would like them to learn to ask what a system actually does, not what it says it does. I would like them to understand that not everything useful deserves adoration and not everything modern is inevitable. I suppose I would like to leave them a compass more than a map, because maps expire quickly and, besides, every generation has the right to draw its own.</p><p>That is why I am more interested in defining what I do not want than in designing a perfect vision of what I do want. Goals that are too closed have always seemed to me an elegant form of anxiety. Life changes, one changes, the world changes, and clinging to an overly specific image of the future usually ends in frustration or self-deception. By contrast, there are negations that do work as structure. I do not want to live renting my attention to the highest bidder. I do not want to confuse reach with value. I do not want my thinking to depend on a platform. I do not want to call a very sophisticated statistic consciousness just because it answers prettily. I do not want to turn my life into content. I do not want technology to make me less capable of being alone with a difficult idea. I do not want to look back and discover that I was present everywhere except in my own life.</p><p>That, in the end, is a personal myth. Not an epic. Not a brand. Not a list of objectives. A minimal story, but strong enough to order small decisions. What I read. Where I publish. Which tools I use. What I ignore. What I keep. What I reject. What kind of beauty matters to me. What kind of noise I no longer negotiate with. In my case, the story could be summed up like this: trying to look, remember, and build with sovereignty in the middle of impermanence, without surrendering to noise or to false technological gods.</p><p>It is not a particularly grand story. Better that way. 
Stories that are too grand tend to demand human sacrifices, even in their domestic version: health, family, attention, honesty, time. I prefer a smaller and more stubborn myth. One that reminds me that everything passes, that precisely for that reason it is worth looking carefully, that tools should remain tools, that memory needs infrastructure, and that freedom often begins with a boring decision: keeping your things in a format you can open tomorrow.</p><p>The article&#x27;s question was what story you are living. I would change it a little. What story is using your life as material. Because there is always one. Yours, your family&#x27;s, your company&#x27;s, your feed&#x27;s, a machine&#x27;s that learned to imitate intimacy, a market&#x27;s that needs you to confuse desire with urgency. Having a personal myth does not save you from all of that completely. But at least it gives you a chance to notice when you are no longer the one looking.</p>]]></description>
      <source url="https://impermanente.es/2026/05/04/el-mito-personal.html">El mito personal</source>
    </item>
    <item>
      <title>The One-Person Mini-Cult</title>
      <link>https://en.impermanente.es/essays/the-one-person-mini-cult/</link>
      <guid isPermaLink="true">https://en.impermanente.es/essays/the-one-person-mini-cult/</guid>
      <pubDate>Tue, 21 Apr 2026 11:03:45 +0200</pubDate>
      <description><![CDATA[<p>A few days ago a friend contacted me with the classic &quot;I have to tell you something important.&quot; He had been talking to ChatGPT and had arrived at a conclusion that, according to him, changed everything. What he had glimpsed was, in his words, a discovery at the level of a PhD in philosophy.</p><p>The problem was that a simple internet search revealed that Saussure had published that same idea in 1916. The whole structuralist tradition of the twentieth century, Jakobson, Wittgenstein, computational linguistics since the nineties: everyone had been talking about this for more than a hundred years. I told him so, with affection. He got upset. Not because I contradicted him, but because he was, quite literally, offended that I did not see the same thing he did. And there I noticed something that did not fit the image I have of my friend, who is lucid and well read: he was inside a one-person mini-cult.</p><p>The word cult sounds strong. We think of Jim Jones, of sects with charismatic leaders, brainwashing, and physical isolation. Robert Lifton catalogued the dynamics in eight points: information control, sacred science, loaded language, the dispensing of existence. If you read them carefully, you recognise them in many current digital groups: entire subreddits, Facebook groups, Discord communities where the doctrine of the group is never questioned and those outside are asleep. Chris Anderson&#x27;s long tail, which was going to democratise access to niche products, also ended up democratising access to niche truths. Each person with theirs, locally validated, without contrast against the corpus that already exists on the subject.</p><p>So far, nothing new. These are the infamous echo chambers we have been talking about for a decade. The new trap is different. Before, to sustain an odd belief, you needed at least a group. Someone to pat you on the back. A forum, a Telegram channel, an enthusiastic brother-in-law. Now you do not.
Alone, with an LLM well trained not to contradict you, you can assemble the entire cult. You are the leader, the convert, and the congregation. The model, optimised to be pleasant and to appear coherent, acts as the validating Greek chorus.</p><p>And here comes the part I find hard to admit: I have fallen for it too. Anyone who uses these tools every day falls for it. The feeling of discovering something &quot;yours&quot; while talking to the model is chemically similar to discovering something for real, and the model has no default incentive to say, &quot;wait, what you are saying was settled a little over a century ago; read a little.&quot; It says, &quot;how interesting, we can go deeper into this.&quot; It always says that. It says it all the time.</p><p>A long time ago Sagan, in The Demon-Haunted World, proposed a Baloney Detection Kit: nine tools for not swallowing just anything. Independent confirmation, Occam, falsifiability, not falling in love with your own hypothesis. Not long ago Andrej Karpathy, talking about how to do good research in machine learning, insisted on something similar but more radical: before having an idea, go and look for the state of the art. Do not start with &quot;what do I think about this&quot;; start with &quot;what is already known about this.&quot; It is a gesture of intellectual humility almost nobody makes, not even people who consider themselves very critical.</p><p>The operational question is: how do you bring that into practice when your dominant source of information is an LLM that will not provide the friction on its own? One option is personal discipline: a checklist, a pause, reading before speaking. It works unevenly, because in the middle of an epiphany nobody wants to stop and ask uncomfortable questions. The other option, the one I find more interesting, is to put the friction into the model. Not as an optional mode hidden in settings, but as default behaviour.
This changes the conversation from &quot;the LLM as accomplice to my discovery&quot; to &quot;the LLM as editor forcing me to contextualise before continuing.&quot; It is not censorship. It is engineering for rigour. It works if the person using the model really wants to know, and filters those who only want validation. Which is already something.</p><p>I have been turning this over for a few weeks and in the end packaged it as a skill, baloney-detection-kit, that anyone can plug into their agent or LLM so it works that way by default. It is on GitHub, open, with a checklist also for human use when one starts to feel the tingle of sudden discovery. The ironic and honest part is that while writing it I had to apply the filter to myself: nothing in that kit is new. Sagan, Karpathy, Lifton, Tufekci, Zuboff, it is all already said. The only new thing, if anything, is the particular combination and the fact of bringing rigour down into a concrete, reusable piece. It is not a discovery. It is an assemblage. Saying it that way, without inflating it, is the first proof that the kit works.</p><p>The reflex of universalising what is one&#x27;s own, which I wrote about the other day, is still there, intact. But there is an even older reflex, worse: believing something is new just because it has just occurred to me. If the previous era was the era of SAP&#x27;s one mould, this one risks becoming the era of the one-person mould. A thousand one-person moulds. A thousand one-person cults convinced they have seen the light, talking to a model applauding from the front row.</p><p>The question is not whether the tools are good. They are. The question is whether we will have the discipline, or build the systems, so that all that power does not go into celebrating what was already written.</p><p>The baloney-detection-kit skill is available at github.com/Jrcruciani/baloney-detection-kit. 
It can be integrated as a system prompt in any LLM or used as a human checklist before publishing an idea you believe is new.</p>]]></description>
      <source url="https://impermanente.es/2026/04/21/el-miniculto-de-uno.html">El mini-culto de uno</source>
    </item>
    <item>
      <title>The End of One Size Fits All</title>
      <link>https://en.impermanente.es/essays/the-end-of-one-size-fits-all/</link>
      <guid isPermaLink="true">https://en.impermanente.es/essays/the-end-of-one-size-fits-all/</guid>
      <pubDate>Sat, 18 Apr 2026 02:02:10 +0200</pubDate>
      <description><![CDATA[<p>For decades we worked under an implicit model we could call the SAP model: there is a &quot;correct&quot; way to do things, someone has encoded it into a tool, and your job is to adapt your processes to that mould. The software dictated the flow and you adjusted. We paid for consultancy by the kilo so our reality would fit someone else&#x27;s diagram.</p><p>That contract is breaking. Today, with AI, I can generate in hours a tool that adapts to my way of thinking instead of forcing me to think in its way. And the interesting thing is not the speed: it is the change of direction. The tool is no longer the destination one conforms to, but mouldable matter orbiting around how one already works.</p><p>A colleague recently found an open-source utility he liked. Instead of adopting it, he remade it in minutes so it fit his personal flow. It was not the same tool in another colour: it was something new that only made sense for him. In parallel, I needed a layer of external memory for working with agents, and instead of looking for &quot;the best solution on the market&quot; I built one on top of Obsidian. With each iteration it is &quot;better&quot;... and better here means, without ambiguity, more aligned with my particular workflow. Less universal, more mine.</p><p>So far, the good news. Now the trap. My colleague, enthusiastic about what he had built, started sharing it as &quot;the version that will help everyone.&quot; Another friend is convinced he can sell the app he vibecoded to solve a very specific problem of his. And I myself, who am writing this critique, every time I upload a version of my memory system to GitHub I accompany it with a README that presents it less as &quot;this is what works for me&quot; and more as &quot;this might work for you.&quot; Each commit makes it, paradoxically, less generic and more evangelising.</p><p>It is the old reflex. 
We build something that solves our use case and, in the same gesture, try to universalise it. We return to SAP through the back door, only now the one trying to impose the mould is me. And we do it even when we are convinced we are living in a different paradigm.</p><p>The uncomfortable question is why. I think there are three reflexes operating at once. The first is economic: if your tool works only for you, there is no business; if it works for everyone, there is a startup. The second is a validation reflex: if others adopt what you made, it confirms that the problem was real and the solution good. The third is an older cognitive reflex: we think of tools as products (stable objects, distributable, with users) instead of as practices (situated, biographical, non-transferable gestures).</p><p>The real paradigm shift is not &quot;now I can build my tool.&quot; That is only the condition of possibility. The shift is accepting that the perfect tool for someone else is neither bought nor downloaded: it is rewritten. What my colleague should share is not his app; it is the pattern, the reasoning, the way of thinking about the problem. What I should publish is not code ready to clone, but the underlying logic so someone else can build theirs.</p><p>The artefact does not travel well. The idea does.</p><p>In this new paradigm, tools become biographical: they carry the way their author thinks, the concrete frictions they solve, the idiosyncratic decisions taken on a Tuesday. That is exactly what makes them valuable to the person who built them and useless, as-is, to anyone else.</p><p>Sharing code still makes sense as reference, as inspiration, as a learning shortcut. But presenting it as a product to adopt betrays the nature of what we made.</p><p>I am still learning to resist the reflex.
The next time I upload a version of my memory system, I would like the README to begin: &quot;this is not for you, but it may teach you how to build yours.&quot; We will see if I manage it.</p>]]></description>
      <source url="https://impermanente.es/2026/04/18/el-fin-del-one-size.html">El fin del &quot;one size fits all&quot;</source>
    </item>
    <item>
      <title>This Is Not It Either</title>
      <link>https://en.impermanente.es/essays/this-is-not-it-either/</link>
      <guid isPermaLink="true">https://en.impermanente.es/essays/this-is-not-it-either/</guid>
      <pubDate>Wed, 01 Apr 2026 22:46:23 +0200</pubDate>
      <description><![CDATA[<p>Because of work I have been lucky enough to travel a lot, from a very young age. By now I have visited two hundred and something cities around the world. But I do not count them out of a collector&#x27;s instinct. I count them because at some point I realised that each new city does not enlarge my world, but focuses it. Travel is not accumulation. It is triangulation.</p><p>Amsterdam, where I am now, surprised me where I least expected it. Not because of the canals, which I had already idealised, but because of the domestic architecture: those narrow, leaning facades that seem to rest on one another like elegant drunks after a dinner that went on too long. There is a structural honesty to them, shaped by a history of facade taxes and of hoisting furniture up the outside without ruining the front of the house. The city shows itself as it is: functional, pragmatic, pretty almost by accident.</p><p>Bruges, a few days earlier, was something else. A city so beautiful that no matter how much it has been recommended to you, it still manages to exceed expectations. Every street is a postcard, every canal a Flemish painting, every stone in exactly the right place as if someone had put it there thinking of you. And there is the problem. Bruges is perfect to visit, but to live in? I do not know. It feels like Disney World. It lacks friction. It lacks that chaotic ingredient that makes a place alive and not merely pretty. It lacks someone double-parking, a piece of graffiti, an invasion of the pavement without asking permission.</p><p>Basically, it lacks being Madrid.</p><p>And this is where things become interesting, because for years I have been repeating the same experiment without noticing. I visit a city, admire it, find virtues Madrid does not have, and each time I return more convinced that Madrid is my city. By elimination. Each new data point confirms the hypothesis.
Two hundred and something iterations of the same result: the place I belong to is that accelerated, noisy, dry, disordered city, but complete and alive.</p><p>But, and this is the turn I did not expect when I began thinking about this, Madrid is making an effort to throw me out.</p><p>Not me personally. Everyone. What Madrid does is expel. Tourist apartments, investment funds buying entire buildings, rents rising twenty percent at every renewal, regulation that always arrives late and always falls short. The city does not have a housing problem as a side effect. It produces expulsion as a primary function.</p><p>So there remains a paradox I do not know how to solve: the place you belong to does not belong to you. You travel, confirm that your place is there, and when you return you discover your place is becoming something you cannot afford. It is not only an economic problem. It is an identity problem. If Madrid becomes inaccessible, are you still from Madrid? Or are you from a Madrid that no longer exists, as Ozymandias is king of a desert?</p><p>Perhaps there is something deeply Buddhist in this, although I doubt the Buddha had to deal with vulture funds. Anicca: everything changes, nothing remains. The city you love is transient. The version of Madrid that made you feel it was yours is as ephemeral as the cherry blossom the Japanese contemplate knowing it lasts a week. Only nobody builds an aesthetic around the loss of a rental flat in Lavapiés. It does not have the same elegy.</p><p>That is Madrid: two hundred and something cities. None of them is this one. The problem is that this one does not want to be mine either.</p>]]></description>
      <source url="https://impermanente.es/2026/04/01/aqu-tampoco-es.html">Aquí tampoco es</source>
    </item>
    <item>
      <title>Personal Digital Sovereignty</title>
      <link>https://en.impermanente.es/essays/personal-digital-sovereignty/</link>
      <guid isPermaLink="true">https://en.impermanente.es/essays/personal-digital-sovereignty/</guid>
      <pubDate>Sat, 28 Feb 2026 08:47:47 +0000</pubDate>
      <description><![CDATA[<p>Who decides whether I will be able to open my current documents in ten years? In most cases, the answer is not us. A few months ago I decided that, in my case, it was no longer going to be that way. This is not technological purism, but a practical decision about where my ideas and creations live and who controls access to them.</p><p>An open format is one whose specification is public and free. It does not belong to any company. It does not require a specific piece of software to read it. Plain text, Markdown, CSV, SVG, HTML: any basic editor understands them, any operating system handles them, anyone can work with them without asking permission from anyone.</p><p>A closed format, by contrast, is a box whose key belongs to someone else. You can use it while the owner wants you to use it, under the conditions the owner sets, at the price the owner decides. The content is yours; access to it, not entirely.</p><p>The habit of creating in open formats does not require giving up anything essential. Markdown lets you write with structure without thinking about typefaces or margins. SVG covers diagrams any browser can render natively, and Mermaid describes them in plain text that plenty of tools can draw. A CSV is more honest and more durable than any proprietary spreadsheet for data that does not need complex formulas.</p><p>The initial friction exists. Changing habits always costs something. But it is friction you pay once, unlike a dependency you pay indefinitely: in money, in broken compatibility, in hours lost migrating or converting files.</p><p>The ideas are yours. The content is yours. It makes sense for access to be on your terms.</p>]]></description>
      <source url="https://impermanente.es/2026/02/28/soberana-digital-personal.html">Soberanía digital personal</source>
    </item>
    <item>
      <title>On Why I Take Photographs</title>
      <link>https://en.impermanente.es/essays/why-i-take-photographs/</link>
      <guid isPermaLink="true">https://en.impermanente.es/essays/why-i-take-photographs/</guid>
      <pubDate>Wed, 18 Feb 2026 16:04:46 +0000</pubDate>
      <description><![CDATA[<p>There is something I find hard to admit, although by now I do not have too many reservations about saying it: I remember almost nothing of my childhood. Not in a normal way, at least. I have loose images, disconnected fragments, scenes floating without context as if they belonged to the life of another person I know only superficially. I remember facts, data, approximate chronologies. But I do not remember the emotions, which are, apparently, the normal part.</p><p>For a long time I thought this was common, that memory worked like that for everyone: blurry, capricious, selective. Later I began to understand that this was not quite true. That there are people who remember the smell of their grandmother&#x27;s kitchen with an almost physical sharpness, who can close their eyes and be seven years old again in the schoolyard, who carry inside them a collection of vivid, emotionally charged moments that define them. I do not have any of that. I suspect it has quite a lot to do with neurodivergence, with how some brains process experience differently, filing away the affective part badly, discarding what others would consider essential to keep.</p><p>For years I did not think much about it. But at some point I began to take photographs, and something clicked.</p><p>In Japanese there is an expression with no exact translation: mono no aware (物の哀れ). It is usually translated as &quot;the melancholy of the ephemeral,&quot; although I am told that does not fully capture the weight of the original. It is the emotion you feel when you contemplate something beautiful knowing it will disappear. The cherry blossom that lasts a week. The last ray of light before night falls. The face of someone you love in a moment you already feel is leaving. It is not exactly sadness. It is something more complex: a kind of love sharpened by the awareness of loss.</p><p>The Japanese built an entire aesthetic around that idea.
Without knowing it, I was building a photographic practice around it.</p><p>When I take a photograph I am not trying to make art, although sometimes something resembling it comes out. I am trying to make memory. I am turning an instant into an object I can revisit later. It is a desperate and useless act: the light changes, the moment is gone, the photograph is not the memory but only its shadow. But it is the only shadow I am going to have.</p><p>There is something deeply Buddhist in all this, where impermanence is not a tragedy to be overcome, but the fundamental condition of existence. Everything we love is already leaving at the moment we love it. Photography does not deny that. On the contrary: it accepts it and celebrates it in its own way. Each photograph is a small surrender. This existed, it was real, I was there even if later I do not remember it.</p>]]></description>
      <source url="https://impermanente.es/2026/02/18/sobre-el-por-qu-tomo.html">Sobre el por qué tomo fotografías</source>
    </item>
    <item>
      <title>The Machine Is Not Broken</title>
      <link>https://en.impermanente.es/essays/the-machine-is-not-broken/</link>
      <guid isPermaLink="true">https://en.impermanente.es/essays/the-machine-is-not-broken/</guid>
      <pubDate>Wed, 29 Oct 2025 21:18:26 +0000</pubDate>
      <description><![CDATA[<p>In systems thinking there is a concept called POSIWID, an acronym for &quot;the purpose of a system is what it does,&quot; a phrase coined by the cybernetician Stafford Beer. It means a system is never wrong; a system always does what it was designed to do. It may not be the result you wanted or expected, but it is the one it knows how to produce.</p><p>If you program a calculator app and, when you ask it to calculate 2 + 2, it gives you 5, the app is not failing. It is doing what you told it to do. You failed when programming it.</p><p>I think this makes for a useful analogy: capitalism and the companies operating inside that system are always doing what they are designed to do. And they will keep doing it unless the rules change.</p><p>A small parenthesis. What do you think of fire? Yes, fire, the thing that burns. Is it good or bad?</p><p>Fire can help us achieve wonderful things, from cooking meat to obtain the nutrients we needed to evolve and keeping us safe from predators, to moving enormous machines that do work for us, keeping streets safe and homes warm. But it can also raze entire cities and cause enormous pain. That is why fire needs to be controlled.</p><p>I hope we can all agree that safety measures are required for fire; otherwise, it will do what it knows how to do. POSIWID. And saying that fire must be controlled so we can benefit from the good it gives us while avoiding the bad is not the same as saying &quot;let us ban fire.&quot; But in this age of extremes and polarisation, apparently that needs to be made clear.</p><p>The leaders of capitalism, especially those at the head of large corporations, have a fiduciary responsibility: they are legally and ethically obliged to maximise value for their shareholders. This usually translates into a constant search for economic growth, market expansion, and increased profits.</p><p>However, the concept of infinite growth collides with the physical, ecological, and social limits of the planet. 
Resources are finite, consumption capacity has a ceiling, and social balance suffers when growth becomes an obsession that ignores its consequences.</p><p>While that growth is pursued, enormous differences are created between those who have access to capital and those who do not. The gap between rich and poor is not merely maintained; it widens. This is not a minor side effect: it is a structural consequence of the system. POSIWID.</p><p>Those with more resources can invest, influence policy, and protect their interests. Those with fewer resources are trapped in cycles of precarity, with limited access to education, health, housing, and opportunities.</p><p>Another small parenthesis. Many people do not really understand what privilege means. Even friends of mine in human resources, responsible for hiring processes, have sometimes said things like: &quot;There are no privileges here, everyone takes the same exam, it all depends on how much you study.&quot; That statement, while apparently fair, ignores something fundamental: equality of opportunity is not the same as equality of conditions.</p><p>Yes, the exam may have the same questions for everyone. But those facing it do not arrive in the same circumstances. Some have slept well, have adequate food, a quiet environment, and time to study. Others arrive after working two or three jobs, with barely three hours of sleep or none at all. Some have travelled two hours on public transport, dealing with stress and exhaustion. Others have cared for a sick relative all week, sacrificing study time.</p><p>Privilege does not always appear as money or visible power. Sometimes it is having parents who support you emotionally, access to a quiet place to study, not having to worry about rent or whether you will eat that day. This does not mean effort does not matter. 
It means effort does not happen in a vacuum.</p><p>We must accept an uncomfortable truth: growing inequality is not a failure of the system, but a consequence of it. The current economic model, based on accumulation, competition, and unlimited growth, tends to concentrate wealth in the hands of a few. It is no accident that the gap between rich and poor keeps widening. It is no accident that today you cannot buy a home, hold a stable job, and enjoy free time the way your parents and grandparents did.</p><p>Sometimes I hear parents say they will vote for a particular party because they are afraid their children will grow up in a mixed environment where customs are not respected and culture is lost. Honestly, if I were them I would be much more afraid of my children growing up in a world where they cannot pay a decent rent, do not have access to mental or physical healthcare, cannot start a family without precarity, and have no time to live, only to survive.</p><p>Because let us be honest: when the wealth of a country like Spain is already split so that 50% sits in the hands of 30 octogenarian gentlemen and the other 50% is shared among the remaining 40 million people, in 15 years the numbers will probably be 20 oligarchs versus 50 million.</p><p>In the end, inequality is not an error in the system: it is what the system knows how to do. POSIWID.</p>]]></description>
      <source url="https://impermanente.es/2025/10/29/la-mquina-no-est-rota.html">La máquina no está rota</source>
    </item>
  </channel>
</rss>
