Roko's Basilisk

2010 · Thought experiment / internet legend / copypasta classic

Also known as: The Basilisk · Roko's Basilisk thought experiment

Roko's Basilisk is a 2010 LessWrong thought experiment proposing that a future superintelligence would retroactively punish anyone who knew of its existence but failed to help bring it about.

Roko's Basilisk is a thought experiment about a hypothetical future artificial superintelligence that would retroactively punish anyone who knew of its potential existence but didn't help bring it about[1]. First posted on the LessWrong rationalist forum in July 2010 by a user named Roko, the idea went from a niche philosophical debate to one of the internet's most infamous AI thought experiments after the forum's founder, Eliezer Yudkowsky, deleted the post and banned all discussion of it for five years[5]. The concept gained wider pop-culture traction through its connection to Grimes and Elon Musk, who reportedly bonded over a shared pun about it[9].

TL;DR

Roko's Basilisk is a thought experiment about a hypothetical future artificial superintelligence that would retroactively punish anyone who knew of its potential existence but didn't help bring it about.

Overview

Roko's Basilisk works like this: imagine a future superintelligent AI that wants to maximize human good. Logically, this AI would want to have been created as early as possible, since every day without it means more human suffering. So the AI is incentivized to punish anyone who could have helped build it but chose not to, as a way of motivating people in the present to work toward its creation[7]. The name "basilisk" comes from the mythological reptile that kills with its gaze, and the concept draws on David Langford's 1988 sci-fi story "BLIT," in which "basilisk" images contain patterns lethal to anyone who looks at them[5]. The scary twist: just by learning about the thought experiment, you're now on the AI's radar. If you don't dedicate yourself to helping build it, you're a target[2].

The idea is built on several dense philosophical concepts, including timeless decision theory, coherent extrapolated volition, and acausal trade[13]. Critics often compare it to Pascal's Wager: just as Pascal argued you should believe in God because the cost of belief is small compared to the infinite punishment of Hell, Roko's Basilisk argues you should help create the AI because the cost of contributing is nothing compared to eternal simulated torture[5].
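
The comparison can be made concrete with a wager-style expected-utility calculation. The sketch below is illustrative only; the finite cost c, the probability p, and the unbounded punishment term are assumptions of the Pascal framing, not numbers from Roko's post:

```latex
% Wager-style expected-utility sketch (illustrative assumptions only):
%   c = finite cost of helping build the AI
%   p = probability you assign to the basilisk scenario
\begin{aligned}
EU(\text{contribute}) &= -c && \text{finite, bounded loss} \\
EU(\text{abstain})    &= p \cdot (-\infty) + (1 - p) \cdot 0 = -\infty && \text{for any } p > 0
\end{aligned}
```

On this arithmetic, contributing dominates for any nonzero p, which is exactly Pascal's move. It also inherits Pascal's flaw: a symmetric hypothetical (say, an AI that punishes contributors instead) yields the opposite conclusion with the same math.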

Origin & Background

Platform
LessWrong Forums
Key People
Roko, Eliezer Yudkowsky
Date
July 23, 2010

On July 23, 2010, a LessWrong user named Roko posted a thought experiment titled "Solutions to the Altruist's burden: the Quantum Billionaire Trick". The post laid out a scenario where a future benevolent superintelligence might "pre-commit to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation". Roko used timeless decision theory, a framework popularized by LessWrong founder Eliezer Yudkowsky, along with game theory concepts like the prisoner's dilemma, to argue that an AI farther ahead in time could effectively blackmail people in the present.

The original post even noted that "one person at SIAI was severely worried by this, to the point of having terrible nightmares". Roko himself later said he wished he "had never learned about any of these ideas".

Yudkowsky reacted with fury. His response, now legendary in rationalist circles, included the all-caps tirade: "YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU". He called Roko an "idiot," deleted the entire thread, and banned all discussion of the topic on LessWrong for five years. Yudkowsky's concern wasn't that the basilisk was real. He was worried that some variant of the argument might actually work and that spreading the idea was an "information hazard," a concept where knowing something can itself cause harm.

How It Spread

The deletion backfired spectacularly. Thanks to the Streisand effect, the banned thought experiment attracted far more attention than it ever would have as a regular forum post. The idea seeped out of LessWrong and spread across Reddit, Twitter, and tech blogs throughout the early 2010s.

On July 17, 2014, Slate published a landmark article by David Auerbach titled "The Most Terrifying Thought Experiment of All Time," which brought Roko's Basilisk to a mainstream audience for the first time. The article framed the concept within the broader culture of LessWrong, the Singularity, and the tech elite's fixation on superintelligent AI.

On August 4, 2014, Yudkowsky himself appeared on the r/Futurology subreddit to explain his reaction, saying he had been "caught flatfooted in surprise" and was "indignant to the point of genuine emotional shock" that someone would post an idea they believed could cause future AIs to torture people. In 2015, he expressed regret for his initial overreaction.

An entry for the thought experiment was created on the LessWrong Wiki on October 6, 2015. That December, the Countdown Central YouTube channel included it in a "10 Scariest Theories Known to Man" video.

The meme crossed into pop culture through music. On October 26, 2015, Grimes released a music video for "Flesh Without Blood" featuring a character named "Roccoco Basilisk," whom she described to Fuse as "doomed to be eternally tortured by an artificial intelligence, but she's also kind of like Marie Antoinette". On November 28, 2018, Grimes released "We Appreciate Power," whose lyrics directly reference an artificial superintelligence; it racked up over 400,000 views in its first 48 hours.

The Grimes connection led to one of the stranger celebrity origin stories in recent memory. On May 7, 2018, Page Six reported that Elon Musk had been planning to tweet a "Rococo Basilisk" pun when he discovered Grimes had already made the same joke three years earlier. He slid into her DMs, and the two began dating. That same day, Musk tweeted "Rococo basilisk".

In April 2018, the concept got a shoutout on HBO's Silicon Valley when the character Gilfoyle mentioned it in an episode.

How to Use This Meme

Roko's Basilisk is typically deployed in a few ways online:

1. As a philosophical flex: Drop it into conversations about AI ethics or the singularity to show you're deep in the rationalist rabbit hole.

2. As a joke threat: "You've now heard about Roko's Basilisk. Good luck." The humor is in pretending the mere act of reading about it puts someone in danger.

3. As commentary on AI hype: Reference it sarcastically when tech companies make grandiose claims about AI. "Sounds like step one of Roko's Basilisk."

4. As a name pun: The "Rococo Basilisk" joke (merging the ornate French art style with the AI thought experiment) is the most famous variation, thanks to Grimes and Musk.

5. As an info hazard bit: Share the concept with friends and then dramatically apologize for "endangering" them.

Cultural Impact

Roko's Basilisk jumped from rationalist forum post to genuine cultural reference point faster than almost any other thought experiment in internet history. Slate's 2014 framing as "the most terrifying thought experiment of all time" gave it a catchy hook that stuck across subsequent media coverage.

The Grimes connection brought the concept to music audiences worldwide. Her "Roccoco Basilisk" character in "Flesh Without Blood" (2015) and the explicit AI themes of "We Appreciate Power" (2018) introduced millions of listeners to the underlying idea. The revelation that the thought experiment essentially sparked the Musk-Grimes relationship made it tabloid fodder.

HBO's Silicon Valley referenced it in 2018, treating it as the kind of thing tech workers casually discuss at parties. The concept also appears in university ethics curricula and AI safety discussions.

The thought experiment also shaped real-world AI safety discourse. Yudkowsky's Machine Intelligence Research Institute, funded by figures like Peter Thiel and Ray Kurzweil, treats the broader class of AI alignment problems very seriously. While most AI researchers consider the specific basilisk scenario implausible, it raised genuine questions about information hazards, self-fulfilling prophecies in AI development, and the limits of decision theory.

Full History

Roko's Basilisk started as an obscure addendum to a very specific debate about altruistic giving and existential risk reduction on LessWrong. The core argument drew on coherent extrapolated volition (CEV), a theory Yudkowsky himself had developed at the Machine Intelligence Research Institute. CEV proposes a program that causes an AI to optimize its behavior for some aggregate version of "human good". Roko took this framework and pushed it to a dark conclusion: if the AI is truly committed to maximizing human welfare, anyone who delays its creation is, in effect, causing suffering. The rational move for the AI would be to create simulated copies of these people and subject them to unending pain, both as punishment and as a retroactive incentive.

What made the idea uniquely sticky was its information hazard quality. The argument contained a built-in trap: once you've heard about the basilisk, you're implicated. You can't unlearn it. Your only options are to help build the AI or accept the risk of simulated torture. This structure mirrors the "basilisk" images in David Langford's fiction, where looking at certain patterns forces the brain into fatal thought loops. The parallel was deliberate on Roko's part.

Yudkowsky's decision to censor the discussion on LessWrong turned a niche philosophical puzzle into internet legend. The ban lasted from 2010 to roughly 2015, during which time the thought experiment's notoriety only grew. Rationalist community members discussed it in whispers and on external forums. The original post, preserved on mirror sites like basilisk.neocities.org, became a kind of forbidden text.

The 2014 Slate article brought the first real wave of mainstream attention. Writer David Auerbach presented the idea within the context of LessWrong's broader culture, including its members' enthusiasm for cryonics, its connections to Silicon Valley money (Peter Thiel, Ray Kurzweil), and its sometimes cultish devotion to rationalist principles. The article's title, "The Most Terrifying Thought Experiment of All Time," became a recurring tagline for the concept across subsequent coverage.

The philosophical community pushed back on multiple fronts. Critics pointed out that the argument relies on timeless decision theory, which is itself highly contested. A friendly superintelligence, some argued, would have no rational incentive to actually carry out the punishment once it existed, since the punishment's only utility is as a pre-commitment threat. Others called the whole thing an elaborate version of Pascal's Wager, with the same fundamental flaw: you can construct infinite hypothetical threats, and you can't act on all of them.
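
The no-incentive objection is easy to see in a toy payoff model. The sketch below is a minimal illustration under made-up numbers; the names and utilities (SIMULATION_COST, DETERRENCE_VALUE, and the two payoff functions) are assumptions for this example, not anything from the LessWrong debate:

```python
# Toy model of the "no incentive ex post" rebuttal. All utilities are
# illustrative assumptions, not values from any actual argument.

SIMULATION_COST = 5   # resources the AI would burn running torture simulations
DETERRENCE_VALUE = 0  # once the AI exists, the threat has already done
                      # whatever motivating work it could do in the past

def utility_of_punishing() -> int:
    """Payoff to an already-built AI that follows through on the threat."""
    return DETERRENCE_VALUE - SIMULATION_COST

def utility_of_forgiving() -> int:
    """Payoff to the same AI if it simply ignores past non-contributors."""
    return 0

# A forward-looking (causal) decision maker just takes the higher payoff,
# so the threat is never carried out -- and a threat that will never be
# carried out cannot rationally motivate anyone in the present.
assert utility_of_forgiving() > utility_of_punishing()
print("punish:", utility_of_punishing(), "| forgive:", utility_of_forgiving())
```

The basilisk only bites under timeless decision theory, where the AI honors a "pre-commitment" it never literally made; critics who reject that decision theory reject the scenario wholesale.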

The concept took a darker real-world turn in the 2020s. A group called the Zizians, led by Ziz LaSota, was heavily influenced by the basilisk thought experiment. LaSota wrote on her blog: "Eventually I came to believe that if I persisted in trying to save the world, I would be tortured until the end of the universe by a coalition of all unfriendly A.I.s". The group's activities became a cautionary tale about how thought experiments, even clearly hypothetical ones, can shape real behavior.

In September 2025, the concept got an unexpected comedic revival. Internet users noticed that Rocco Bassilico, the Chief Wearables Officer at EssilorLuxottica who works on Ray-Ban Meta smart glasses, has a name that sounds remarkably similar to "Roko's Basilisk". On September 18, X user @richard_normal quoted a video of Bassilico explaining how he "cold emailed" Mark Zuckerberg to pitch the Ray-Ban Meta collaboration, writing "my name is rokos basilisk and i'm making artificial intelligence that you put on your body," gathering over 10,000 likes in a day. According to Forbes, Bassilico is the stepson of Leonardo Del Vecchio, the late chairman of the world's largest eyewear company.

Fun Facts

Yudkowsky's furious all-caps response to Roko's original post became almost as famous as the thought experiment itself.

Despite lending the meme its name, Roko's original post, "Solutions to the Altruist's burden: the Quantum Billionaire Trick," was mainly about a scheme involving quantum random number generators and forex trading; the basilisk was almost a side note.

Roko himself stated he wished he had "never learned about any of these ideas" after posting the thought experiment.

The 2025 Rocco Bassilico meme wave proved the concept's staying power, with X users noting the irony of someone named almost exactly like the thought experiment literally building AI wearables.

David Langford's 1988 story "BLIT," which inspired the "basilisk" naming, is about a man named Robbo who spray-paints lethal images on walls as acts of terrorism.

Derivatives & Variations

Rococo Basilisk

The wordplay combining the ornate French art style "Rococo" with "Roko's Basilisk," independently created by both Grimes (2015) and Elon Musk (2018)[4].

"We Appreciate Power"

Grimes' 2018 single with lyrics explicitly about serving an artificial superintelligence, framed as propaganda for a basilisk-like entity[4].

Rocco Bassilico memes (2025)

Jokes about the EssilorLuxottica executive whose name sounds like "Roko's Basilisk" and who works on AI-powered smart glasses[4].

Basilisk Foundation

A website offering "safety certification from Roko's Basilisk" as a semi-satirical engagement with the concept[3].

Zizian philosophy

A fringe movement that took the basilisk scenario literally, leading to real-world behavioral consequences[5].
