I was watching a video of a keynote speech at the Consumer Electronics Show for the Rabbit R1, an AI gadget that promises to act as a sort of personal assistant, when a feeling of doom took hold of me.
It wasn’t just that Rabbit’s CEO Jesse Lyu radiates the energy of a Kirkland-brand Steve Jobs. And it wasn’t even Lyu’s awkward demonstration of how the Rabbit’s camera can recognize a photo of Rick Astley and Rickroll the owner — even though that segment was so cringe it caused me chest pains.
No, the real foreboding came during a segment when Lyu breathlessly explained how the Rabbit could order pizza for you, telling it “the most-ordered option is fine,” leaving his choice of dinner up to the Pizza Hut website. After that, he proceeded to have the Rabbit plan an entire trip to London for him. The device very clearly just pulled a bunch of sights to see from some top-10 list on the internet, one that was very likely AI-generated itself.
Most of the Rabbit’s capabilities were well in line with existing voice-activated products, like Amazon Alexa. Its claim to being something special rests on its ability to create a “digital twin” of the user, one that can directly operate all of your apps so that you, the person, don’t have to. It can even use Midjourney to generate AI images for you, removing yet another level of human involvement and driving us all deeper into the uncanny valley.
We know very little about how the Rabbit will actually interact with all of these apps, or how secure your data will be, but the first 10,000 preorder units sold out at CES the instant they were announced. It was the most talked-about product at the show, and I heard whispers about it wherever I went. Among the early adopter set, people couldn’t wait for the chance to hand over more of their agency to a glorified chatbot. This is where the feeling of doom started building in my gut.
“I think everybody has a Copilot. Everybody’s making a Copilot. That’s just a great way to accelerate us as humans, right?”
Not long after watching this keynote, I found myself at a panel on deepfakes and “synthetic information” (the fancy term for AI-generated slop) hosted by the consulting firm Deloitte. One of the panelists was Bartley Richardson, an AI infrastructure manager at the tech company NVIDIA. He opened the panel by announcing his love of Microsoft’s AI assistant, Copilot. Microsoft brags Copilot can do everything from finding you the best-reviewed coffee grinder to answering “Where should I travel if I want to have a spiritual experience?”
Bartley seemed to be interested in Copilot as a sort of digital replacement for his time and effort. He told the panel, “I think everybody has a Copilot. Everybody’s making a Copilot. Everybody wants a Copilot, right? There’s going to be a Bartley Copilot, maybe in the future.… That’s just a great way to accelerate us as humans, right?”
While I find the idea of “accelerating” humanity via glorified Clippy unsettling, the comment felt truly unhinged in light of something I heard at another Deloitte panel, this one on “governing” AI risk, from one of Bartley’s co-workers, NVIDIA in-house counsel Nikki Pope: She cited internal research showing that consumers trust brands less when those brands use AI.
This gels with research published last December that found only around 25 percent of customers trust decisions made by AI over those made by people. One might think an executive with access to this data would hesitate to admit to using a product that makes people trust them less. Or perhaps they felt losing a little trust was worth yielding some of their responsibility to a machine.
It was clear Lyu viewed himself as a new Steve Jobs, just as it was clear executives like Bartley didn’t want to miss getting ahead on the next big thing. But as I watched the hype cycle unfold, my mind wasn’t drawn to old memories of Apple keynotes or the shimmering excitement of the first dotcom boom. Instead, I thought about cults. Specifically, about a term first defined by psychiatrist Robert Jay Lifton in his early writing on cult dynamics: “voluntary self-surrender.” This is what happens when people hand over their agency and the power to make decisions about their own lives to a guru.
Cult members are often depicted in the media as weak-willed and foolish. But the Church of Scientology — long accused of being a cult, an allegation it has endlessly denied — recruits heavily among the rich and powerful. The Finders, a D.C.-area cult that started in the 1970s, included a wealthy oil-company owner and multiple members with Ivy League degrees. All of them agreed to pool their money and hand over to their cult leader control of where they worked and how they raised their children. Haruki Murakami wrote that Aum Shinrikyo members, many of whom were doctors or engineers, “actively sought to be controlled.”
Perhaps this feels like a reach. But the deeper you dive into the people and subcultures pushing AI forward, the more cult dynamics you begin to notice.
I should offer a caveat here: There’s nothing wrong with the basic technology we call “AI.” That wide banner term includes tools as varied as text- or facial-recognition programs, chatbots, and of course sundry tools to clone voices and generate deepfakes or rights-free images with odd numbers of fingers. CES featured some real products that harnessed the promise of machine learning (I was particularly impressed by a telescope that used AI to clean up light pollution in images). But the good stuff lived alongside nonsense like “ChatGPT for dogs” (really just an app to read your dog’s body language) and an AI-assisted fleshlight for premature ejaculators.
And, of course, bad ideas and irrational exuberance are par for the course at CES. Since 1967, the tech industry’s premier trade show has provided anyone paying attention with a preview of how Big Tech talks about itself, and our shared future. But what I saw this year and last year, from both excited futurist fanboys and titans of industry, is a kind of unhinged messianic fervor that compares better to Scientology than to the iPhone.
I mean that literally.
“We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.”
MARC ANDREESSEN IS THE CO-FOUNDER of Netscape and the venture-capital firm Andreessen Horowitz. He is one of the most influential investors in tech history, and has put more money into AI start-ups than almost anyone else. Last year, he published something called the “Techno-Optimist Manifesto” on the Andreessen Horowitz website.
On the surface it’s a paean to AI and an exhortation to embrace the promise of technology and disregard pessimism. Plenty of people have called the piece out for its logical fallacies (it ignores that much tech pessimism stems from real harm caused by some of the companies Andreessen invested in, like Facebook). What has attracted less attention are the messianic overtones of Andreessen’s beliefs:
“We believe Artificial Intelligence can save lives – if we let it. Medicine, among many other fields, is in the stone age compared to what we can achieve with joined human and machine intelligence working on new cures. There are scores of common causes of death that can be fixed with AI, from car crashes to pandemics to wartime friendly-fire.”
As I type this, the nation of Israel is using an AI program called the Gospel to assist its airstrikes, which have been widely condemned for their high level of civilian casualties. Everything else Andreessen brings up here is largely theoretical (the promise of self-driving cars has already proven somewhat overstated). AI does hold promise for improving our ability to analyze the large data sets used in many kinds of scientific research (as well as in the development of novel bioweapons), but we have all seen recently that you can’t stop a pandemic with medicine alone. You must grapple with disinformation every step of the way, and AI makes it easier to spread lies at scale.
Andreessen has no time for doubters. In fact, doubting the benefits of artificial general intelligence (AGI), the industry term for a truly sentient AI, is the only sin of his religion.
“We believe any deceleration of AI will cost lives,” his manifesto states. “Deaths that were preventable by the AI that was prevented from existing is a form of murder.”
And murder is a sin. The more you dig into Andreessen’s theology, the more it starts to seem like a form of technocapitalist Christianity. AI is the savior, and in the case of devices like the Rabbit, it might literally become our own, personal Jesus. And who, you might ask, is God?
“We believe the market economy is a discovery machine, a form of intelligence — an exploratory, evolutionary, adaptive system,” Andreessen writes.
This is the prism through which these capitalists see artificial intelligence. This is why they are choosing to bring AGI into being. All of the jobs lost, all of the incoherent flotsam choking our internet, all of the Amazon drop shippers using ChatGPT to write product descriptions: these are but the market expressing its will. Artists must be plagiarized and children presented with hours of procedurally generated slop and lies on YouTube so that we can, one day, reach the promised land: code that can outthink a human being.
AGI is treated as an inevitability by people like Sam Altman of OpenAI, who needs it to be at least perceived as inevitable so his company can command the highest possible stock price when it goes public. This messianic fervor has also been adopted by squadrons of less-influential tech executives who simply need AI to be real because it solves a financial problem.
Venture capital funding for Big Tech collapsed in the months before ChatGPT hit public consciousness. The reason CES was so packed with random “AI”-branded products was that sticking those two letters to a new company is seen as something of a talisman, a ritual to bring back the rainy season. Outside of that, laptop makers see adding AI programs, like Microsoft’s Copilot, as a way to reverse the past few years of tumbling sales.
The terminology these tech executives use around AI is more grounded than Andreessen’s prophesying, but just as irrational.
Every AI benefit was touted in vague terms: It’ll make your company more “nimble” and “efficient.” Harms were discussed less often, but with terrible specificity that stood out next to the vagueness. Early in the deepfake panel Ben Colman, CEO of a company named Reality Defender that detects artificially generated media, claimed his company expects half a trillion dollars in fraud worldwide this year, just from voice-cloning AI.
His numbers are in line with what other researchers expect. This horrified me. Last year brought us the story of a mother getting phone calls from what sounded like her kidnapped daughter but was, in fact, a scammer using AI. At CES, as in the Substacks and newsletters of AI cultists, there is no time to dwell on such horrors. Full steam ahead is the only serious suggestion these people make.
“You should all be excited,” Google’s VP of Engineering Beshad Singh tells us, during a panel discussion with a McDonald’s executive. If we’re not using AI, Beshad warns, we’re missing out. I hear variations of this same sentiment over and over. Not just “This stuff is great,” but “You’re kind of doomed if you don’t start using it.”
“If we create AI that disparately treats one group tremendously in favor of another group, the group that is disadvantaged or disenfranchised, that’s an existential threat to that group.”
NIKKI POPE WAS THE SOLE quasi-skeptic allowed a speaking role at CES. During a discussion over “governing” AI risks with Adobe VP Alexandru Costin, she urged the audience to think about the direct harm algorithmic bias does to marginalized communities. God- (or devil-) like AI may come some day, maybe. But the systems that exist today, here in the real world, are already fucking people over.
“If we create AI that disparately treats one group tremendously in favor of another group,” Pope said, “the group that is disadvantaged or disenfranchised, that’s an existential threat to that group.”
Costin claimed the biggest risk with generative AI wasn’t fraud or plagiarism, but failing to use it. He expressed his belief that this was as big an innovation as the internet, and added, “I think humanity will find a way to tame it to our best interest. Hopefully.”
The whole week was like that: specific and devastating harms paired with vague claims of benefits touted as the salve to all of mankind’s ills.
I don’t think every leader trying to profit from AI in tech believes in Andreessen’s messianic robot god. OpenAI’s Altman, for instance, is much more cynical. Last year, he was happy to warn that AI might kill us all and declared that AGI would likely arrive within the next decade. At Davos, just days ago, he was much more subdued, saying, “I don’t think anybody agrees anymore what AGI means.” A consummate businessman, Altman is happy to lean into that old-time religion when he wants to gin up buzz in the media, but among his fellow plutocrats, he treats AI like any other profitable technology.
Most of the executives hoping to profit off AI are in a similar state of mind. All the free money right now is going to AI businesses. They know the best way to chase that money is to throw logic to the wind and promise the masses that if we just let this technology run roughshod over every field of human endeavor it’ll be worth it in the end.
This is rational for them, because they’ll make piles of money. But it is an irrational thing for us to let them do. Why would we want to put artists and illustrators out of a job? Why would we accept a world where it’s impossible to talk to a human when you have a problem, and you’re instead thrown to a churning swarm of chatbots? Why would we let Altman hoover up the world’s knowledge and resell it back to us?
We wouldn’t, and we won’t, unless he can convince us doing so is the only way to solve every problem that terrifies us. Climate change, the cure for cancer, an end to war or, at least, to the fear that we’ll be victimized by crime or terrorism: all of these have been touted as benefits of the coming AI age. If only we can reach the AGI promised land.
This is the logic behind Silicon Valley’s latest subculture: effective accelerationism, or e/acc. The gist of this movement fits with Andreessen’s manifesto: AI development must be accelerated without restriction, no matter the cost. Altman signaled his sympathy with the ideology in a response on Twitter to one of its chief thought leaders: “You cannot out-accelerate me.”
E/acc has been covered by a number of journalists, but most of that coverage misses how very … spiritual some of it seems. “Beff Jezos,” the pseudonym of a former Google engineer who popularized the e/acc movement, said in a Jan. 21 Twitter post, “If your product isn’t amenable to spontaneously producing a cult, it’s probably not impactful enough.”
One of the inaugural documents of the entire belief system opens with “Accelerationism is simply the self-awareness of capitalism, which has scarcely begun.” Again, we see the claim that AI is somehow enmeshed with capitalism, and that capitalism is itself in some way intelligent, capable of knowing itself. How else are we to interpret this but as belief in a god built by atheists who love money?
The argument continues that nothing matters more than extending the “light of consciousness” into the stars, a belief Elon Musk himself has championed. AI is the force the market will use to do this, and “This force cannot be stopped.” This is followed by wild claims that “next-generation lifeforms” will be created, inevitably. And then, a few sentences down, you get the kicker:
“Those who are the first to usher in and control the hyper-parameters of AI/technocapital have immense agency over the future of consciousness.”
AI is not just a god, but a god we can build, and thus we can shape the future of reality to our own peculiar whims. There’s another Beff Jezos post for this idea as well: “If you help the homo-techno-capital machine build the grander future it wants, you will be included in it.”
“Accelerationism is simply the self-awareness of capitalism, which has scarcely begun.”
Attempting to slow this process down has “risks,” of course. They stop short of lobbing threats at those who might seek to slow AI development, but like Andreessen, they imply moral culpability in horrific crimes for skeptics who get their way.
As I listened to PR people try to sell me on an AI-powered fake vagina, I thought back to Andreessen’s claims that AI will fix car crashes and pandemics and myriad other terrors. In particular, I thought about his claim that because of this, halting AI development was akin to murder. It reminded me of another wealthy self-described futurist with a plan to save the world.
The Church of Scientology, founded by the science-fiction writer L. Ron Hubbard and based upon a series of practices his disciples call “tech,” claims that their followers will “rid the planet of insanity, war and crime, and in its place create a civilization in which sanity and peace exist.” Scientology “tech” is so important for mankind’s future that threats against it justify their infamous “fair game” policy. A person declared fair game “may be deprived of property or injured by any means by any Scientologist.…”
Sinners must be punished, after all.
PERHAPS THE MOST AMUSING part of all this is that a segment of the AI-believing community has created not just a potential god, but a hell. One of the online subcultures that influenced the birth of e/acc is the Rationalists. They formed in the early aughts around a series of blog posts by a man named Eliezer Yudkowsky.
A self-proclaimed autodidact, Yudkowsky didn’t attend high school or college and instead made a name for himself blogging about game theory and logic. His online community, LessWrong, became a hub of early AI discussion. Over time, Yudkowsky fashioned himself into an artificial-intelligence researcher and philosopher. For a time, he was seen as something of a guru among certain tech and finance types (former Alameda Research CEO Caroline Ellison loves his 660,000-word Harry Potter fanfic).
In recent years, Yudkowsky has become a subject of ridicule to many tech movers and shakers. The e/acc people find him particularly absurd. This is because he shares their view of AI as a potential deity, but he believes AGI will inevitably kill everyone. Thus, he argues, we must be willing to bomb data centers.
One of Yudkowsky’s early followers even created the AI equivalent of Pascal’s Wager. In 2010, a LessWrong user named Roko posed this question: What if an otherwise benevolent AI decided it had to torture any human who failed to work to bring it into existence?
The logic behind his answer was based on the prisoner’s dilemma, a concept in game theory. It’s not worth explaining because it’s stupid, but Roko’s conclusion was that an AI that felt this way would logically punish its apostates for eternity by creating a VR hell where their consciousness would dwell forevermore.
Silly as it sounds, people believed in what became known as Roko’s Basilisk strongly enough that some reported nightmares and extreme anxiety. Yudkowsky rejected it as obviously absurd — and it is — but discussion of the concept remains influential. Elon Musk and Grimes allegedly first connected over a joke about Roko’s Basilisk.
This is relevant for us because it is one more datapoint showing that people who take AI seriously as a real intelligence can’t seem to help turning it into a religion. Perhaps all beliefs rooted so firmly in faith follow similar patterns. And it is wise to remember that the promise of truly intelligent, self-aware AI is still a matter of pure faith.
In an article published by Frontiers in Ecology and Evolution, a research journal, Dr. Andreas Roli and colleagues argue that “AGI is not achievable in the current algorithmic frame of AI research.” One point they make is that intelligent organisms can both want things and improvise, capabilities no extant model has demonstrated. They argue that algorithmic AI cannot make that jump.
What we call AI lacks agency, the ability to make dynamic decisions of its own accord, choices that are “not purely reactive, not entirely determined by environmental conditions.” Midjourney can read a prompt and return with art it calculates will fit the criteria. Only a living artist can choose to seek out inspiration and technical knowledge, then produce the art that Midjourney digests and regurgitates.
Roli’s article will not be the last word on whether AGI is possible, or whether our present algorithmic frame can reach that lofty goal. My point is that the goals Andreessen and the e/acc crew champion right now are based in faith, not fact. The kind of faith that makes a man a murderer for doubting it.
Andreessen’s manifesto claims, “Our enemies are not bad people — but rather bad ideas.” I wonder where that leaves me, in his eyes. Or Dr. Roli for that matter. We have seen many times in history what happens when members of a faith decide someone of another belief system is their enemy. We have already seen artists and copyright holders treated as “fair game” by the legal arm of the AI industry.
Who will be the next heretic?
I decided to make myself one before the end of the trade show, at a panel on “The AI Driven Restaurant and Retail Experience.” Beshad Singh (a VP at Google) had claimed AI might be the equivalent of gaining a million extra employees. Michelle Gansle, chief data and analytics officer for McDonald’s, had bragged that her company had used AI to help it stop $50 million in fraud in a single month.
I told them I suspected most of that $50 million in fraud had also been committed with AI’s help, and that a million extra employees for Google would be at least equaled by a million new “employees” in the hands of disinfo merchants, fraudsters, and other bad actors.
“What are the odds that these gains are offset by the costs?” I asked them both.
Singh agreed those were problems and said, “I think that’s why I guess things should be regulated.” He was sure the benefits would outweigh the harms. Gansle agreed with Singh, and brought up a 1999 interview with David Bowie on the future of the internet. (She’d said earlier that she felt Bowie’s decades-old hopes for the internet fit the promise of AI even better.)
It was hard for me not to draw comparisons between this and a recent AI-generated George Carlin routine. Both essentially put words in the mouth of a dead man for the sake of making a buck. This put me in a sour mood, but then, right after me, someone in the audience asked if either of them thought blockchain, the big tech craze of a few years earlier, had a role to play in AI. They could not say no fast enough.
And that actually brought me a bit of hope. Perhaps we’ll get Marc Andreessen’s benevolent AI god or Eliezer Yudkowsky’s silicon devil. Or perhaps, in the end, we heretics will persevere.
From Rolling Stone US