The rationalists are not ok
28 Nov 2025
Who, or what, are the rationalists?
They are an internet-spawned subculture whose members ‘practice the art of rationality’. This means they spend lots of time thinking about topics like cognitive biases, Bayesian reasoning, probability and statistics, computer science, AI, decision theory, game theory, transhumanism, etc. Demographically they skew heavily toward STEM students from elite universities and tech guys in the Bay Area.
The rationalist community was born out of blogs and forums - most notably Slate Star Codex and LessWrong. Over time they have become preoccupied with sweeping concerns about the future of humanity, with a particular focus on existential risks from AI and superintelligence. They are among the most prominent of the AI doomers, and fancy themselves the main protagonists in what they believe is a critical juncture in human history. Many seem convinced that their actions have cosmic consequences and that they alone carry the fate of humanity on their shoulders.
While on the surface they welcome and value debate, the culture is in many ways quite insular. They have an extensive body of insider jargon that can seem impenetrable, even to people well-versed in their topics of interest. They frequently treat traditional cultural norms with disdain, viewing them as artefacts of our collective irrationality. This has led to more than one instance of rather bizarre behaviour within the community (we’ll get to this soon). It has been suggested in many quarters that, ironically, they seem to have quite a few characteristics of a religion or a cult.
To the extent that this label applies, their prophet is certainly Eliezer Yudkowsky, the founder of LessWrong. Their holy scripture is the so-called “Sequences”, a sprawling set of essays by Yudkowsky on a range of topics in the rationalist wheelhouse. Their doomsday scenario is the coming of artificial general intelligence, which they believe is nigh.
What do you think, Steven?
To put it bluntly, I think they have lost their minds.
Entering the world of rationalism is like a descent into an exquisite maze, painstakingly crafted by a sharp but unravelling mind. A labyrinth of meticulously curated madness; a monument to solipsism and intellectual hubris. You can get lost. I wouldn’t recommend it.
Instead, I’m going to tell you about a now infamous chapter in the rationalist community to illustrate just how bonkers they are. It is known as “Roko’s basilisk”.
Roko's what?
Exactly.
Roko was the username of a LessWrong contributor whose post seemingly launched a thousand paranoid doomsday fantasies. The original post was deleted, but you can read an archived version here. It’s short but also laden with insider rationalist jargon, so I will now give a summary for all you hopeless “normies” out there.
Consider a superintelligence (god) that comes into existence and is benevolent towards humanity. Roko conjectures that such a superintelligence might precommit to punishing anyone who did not donate all their income, time and effort to avoiding existential risk. Why? Because the existential risk could wipe out humanity, and along with it any chance of the superintelligence coming into existence.
To minimise the chance of this happening, god precommits to exacting arbitrarily awful punishments on all those who didn’t do everything they could to avoid existential risk. The punishment could be as awful as you can imagine. For example, not content with just condemning you to purgatory, god might, in his boundless spite, make a large number of simulations/copies of you and subject them all to the same awful fate, multiplying the torment to infinity. In the rationalist telling of the story, this god is benevolent, believe it or not, because all of this would actually be a net benefit to humanity.
What our esteemed rationalist describes here is essentially an atemporal version of the madman strategy in a game of chicken, where one player rips their steering wheel off, throws it out the window, and makes sure the other player sees them doing it. They thus credibly precommit to not swerving, forcing the other player to behave as they wish.
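If you want to see the precommitment logic in miniature, here’s a toy sketch in Python (the payoff numbers are entirely made up and have nothing to do with Roko’s post; only their ordering matters):

```python
# Toy game of chicken: payoffs are (you, opponent).
# The numbers are illustrative; only their ordering matters.
PAYOFFS = {
    ("swerve", "swerve"):     (0, 0),
    ("swerve", "straight"):   (-1, 1),    # you chicken out, they win
    ("straight", "swerve"):   (1, -1),    # they chicken out, you win
    ("straight", "straight"): (-10, -10), # head-on crash
}

def best_response(their_move):
    """Your best move, assuming the opponent's move is fixed and known."""
    return max(["swerve", "straight"],
               key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# Before any precommitment, you have to guess what the other player will do.
# Once they visibly throw their steering wheel out the window, "straight" is
# the only move left to them, and your best response flips accordingly.
print(best_response("straight"))  # -> 'swerve'
```

The basilisk is the same trick, except the steering wheel gets thrown out of a future that doesn’t exist yet.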
Now suppose you hear about Roko’s basilisk and think it’s all a bunch of nonsense. Congratulations, it turns out that you’re safe! There is no reason for god to go after you because you don’t even believe in god. You’re immovable.
If, on the other hand, you find the above scenario in any way credible, even a little bit, you’re toast. God knows that you are susceptible to being influenced, so he has already precommitted to heaping eternal punishment on you if you don’t dance to his tune.
The name basilisk comes from the mythological creature that, according to legend, causes death to anyone who looks into its eyes. Similarly, anyone who reads Roko’s post, having never imagined such lofty and evil machinations before, may well now be doomed to eternal suffering simply because the thought has entered their mind.
Are these people for real?
Unfortunately, yes. They are very serious about this. Consider the response Eliezer immediately posted in reply to Roko.
Listen to me very closely, you idiot.
YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
...
Meanwhile I'm banning this post so that it doesn't (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I'm not sure I know the sufficient detail.)
You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends. This post was STUPID.
Serious stuff indeed. Eliezer went on to ban any discussion of Roko’s basilisk on LessWrong for five years. This of course caused the whole thing to explode and get far more attention than it otherwise would have, in a pretty hilarious instance of the Streisand effect. Eliezer presumably was not able to predict this unfortunate turn of events, but we can obviously take his word as gospel when it comes to predictions about the AI doomsday and the entire unfolding future of humanity.
I am now worried by Roko's basilisk. Is there anything I can do?
You’re in luck! See, not only was Roko smart enough to logick his way to the eponymous basilisk and then tell everyone about it, in the very same post he offers us a solution!
Are you ready for this? You might want to strap yourself in.
Step one, you bet your life savings on asset markets. Duh, obviously! Were you thinking of doing something else? What are you, an idiot or something?
Anyway, if you win, you bet your life savings again. And you just keep going like this until you’re a squillionaire.
Now of course you will almost certainly end up penniless by doing this. However, only stupid normies would be discouraged by that. If you’re familiar with even some elementary quantum mechanics, you know that the many-worlds interpretation is 100% right, and it means that every possible universe that can happen does happen and is just as real as our current universe. Therefore, while you are a penniless hobo in almost every universe, you are a squillionaire in at least one of them.
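Just to put some numbers on “almost certainly penniless” - all of them made up, and assuming fair double-or-nothing bets, which is already far kinder than real markets:

```python
# Back-of-the-envelope odds for the "keep betting your life savings" plan.
# Assumes fair double-or-nothing bets; the starting savings and the wealth
# that counts as "squillionaire" are arbitrary illustrative figures.

savings = 100_000          # starting life savings (made up)
target  = 1_000_000_000    # what counts as "squillionaire" here (also made up)

wins_needed = 0
wealth = savings
while wealth < target:
    wealth *= 2
    wins_needed += 1

p_squillionaire = 0.5 ** wins_needed

print(f"consecutive wins needed: {wins_needed}")      # 14
print(f"P(squillionaire): {p_squillionaire:.6f}")     # ~0.000061
print(f"P(penniless): {1 - p_squillionaire:.6f}")     # ~0.999939
```

So with these numbers the plan delivers roughly one branch of glory for every sixteen thousand or so branches in which you explain to your family where the house went. The many-worlds part is doing all of the heavy lifting.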
The squillionaire version of you in the other universe then uses his money to build a friendly superintelligent AI, which then creates simulations of your previous self. Except instead of letting you make the frankly idiotic decision to not do any of this, you get rescued from your own stupidity at the last minute and sent to utopia.
Now if you really think about it, gambling your life savings repeatedly on asset markets pretty much 100% guarantees you will end up in heaven. Squillionaire you just has to flood reality with simulations where you get rescued, so that the fraction of worlds in which you’re a penniless hobo is pretty much zero compared to the worlds where you’re in heaven.
Elementary if you think about it. How you didn’t think of all this before is beyond me. What are you, an idiot or something?
Ok these people really have lost their marbles
Yes, they have. The tragically ironic thing about all of this is that it’s basically just a reboot of Pascal’s wager, but with a cool sci-fi, AI-multiverse angle to it. In other words, it’s all complete fantasist mysticism. The kind that would cause the rationalists themselves to look upon you with a combination of contempt and pity if you removed the sci-fi trappings and made it about Jesus or some such. But the rationalists are rational men; so are they all, all rational men. They say this all makes sense, and they are sensible people.
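The underlying move is the same expected-value sleight of hand every time, and you can do it on the back of an envelope. A quick sketch, with every number pulled out of thin air:

```python
# The generic Pascal's-wager move, with arbitrary illustrative numbers:
# make the payoff big enough and even an absurdly small probability
# dominates the expected-value comparison.

p_basilisk      = 1e-9          # probability you assign to the whole story
cost_of_obeying = 1_000_000     # everything you sacrifice by going along with it
torment         = 1e18          # "arbitrarily awful punishment", dialled to taste

ev_ignore = -p_basilisk * torment   # expected cost of risking the punishment
ev_obey   = -cost_of_obeying        # certain cost of paying up

print(ev_ignore, ev_obey)  # -1000000000.0 -1000000 -> "rationally", you obey
```

The only input that matters is how large a payoff the threat’s author is allowed to write down, which is exactly why the rationalists would (rightly) scoff at this argument in any other costume.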
But maybe it's just like a one-off weird thing
Unfortunately, paranoid lunacy like this seems to be close to the defining feature of the rationalist community. Eliezer, for example, has seriously advocated that world powers should be prepared to destroy rogue AI datacentres with airstrikes, even at the risk of nuclear war, to stop the doomsday from happening. And bizarre, cult-like behaviour has a really unfortunate tendency to keep springing up around the rationalist movement. For example, there are disturbing accounts surrounding Leverage Research, the Center for Applied Rationality (CFAR) and the Machine Intelligence Research Institute (MIRI), all heavily intertwined with the rationalist community. These accounts include isolation from the outside world, emotional manipulation and guilt, serious mental health issues, psychotic breaks, people “debugging” each other, etc.
Then of course there’s the Zizian murder cult, who evidently take the teachings of the prophet Eliezer very seriously. Don’t even try to understand the Zizians, by the way - their ideas and motivations are far greater than anything dreamt of in your philosophy or mine.
Look, maybe this is all a coincidence. Maybe. Or perhaps this is the inevitable consequence of creating an AI doomsday cult whose members think the future of humanity rests squarely on their shoulders. Turns out people can do drastic things when they think they’re that important.