
Pax Americana

Billionaires, demons, and a popular revolt: who truly controls the future as AI accelerates beyond politics' grasp?
Steve Bannon’s fiery takedown of Ben Shapiro at TPUSA’s AmFest 2025 conference was one for the books. Onlookers may have gotten the impression that Trump’s former advisor was declaring war on the so-called “Israel First” wing of the Republican Party, but in a wide-ranging interview with Bannon a few days after the event, I learned that he was just cleaning house—in preparation for the real war he sees coming in the 2026 midterms.
“Everything you’re hearing about Fuentes and Candace and Tucker, that’s all noise,” Bannon told me. “That’s all high school. That’s like high school cafeteria.”
So if Bannon thinks the Israel issue—which has driven apart so many of the GOP’s foundational organizations, from the Heritage Foundation to the Harvard College Republicans club—is minor, what could he possibly think is major?
Artificial intelligence. “This is everything,” Bannon intoned. “This is the central issue of our time.”
Bannon expects AI to dominate not just U.S. electoral politics in 2026 and beyond but the future course of the human species itself. “This is the most important thing that’s happening in the history of the homo sapiens, right? It’s happening right now. And you can’t get anybody to talk about it.”
I wasn’t surprised to learn Bannon assigns such cosmic significance to the AI issue—he’s definitely not the only one. In warning that the coming war over AI is an existential war for humanity, he has allies from the Right to the Left and all points between. When groups as far apart as the Vatican and the atheist, polyamorous “rationalists” of Silicon Valley are publicly fretting about the same thing, you know it’s a big deal.
Bannon frames the stakes in terms that would make both Pope Leo and chief “doomer” Eliezer Yudkowsky uncomfortable for different reasons: “We are summoning the demon. We are summoning the demon in a very primal way.”
The fundamental question, as Bannon sees it, is deceptively simple: “Who controls the country? Who decides what direction we go?” He continued, “The tech bros run the deal. Right? Are they gonna run the deal or are the American people gonna run the deal?”
I’ve spent the past few years fighting in smaller-scale skirmishes that have marked the lead-up to the big AI war that, in truth, is already fully upon us.
In my day job as cofounder and chief technology officer at Symbolic AI, I use AI coding tools to build AI writing and editing tools that are currently in active use by a number of large, publicly traded companies whose names you’d recognize. And since the first post in my newsletter back in mid-2021, I’ve been writing about the fights people are having over large language models (LLMs) and AI. I’ve watched as those fights have evolved from a handful of rancorous X spats among researchers who disagree over LLM training practices to the civilization-level arena of great power competition between the U.S. and China.
In my coverage of (and participation in) the opening battles of the AI war, and in my on- and off-the-record talks with AI researchers and political operators like Bannon and the Conservative Partnership Institute’s Rachel Bovard, I’ve learned that the AI war is playing out in two theaters: the material and the spiritual.
Three primary issues characterize the material theater of the AI war:
1. Water scarcity, and the allegations that the large AI data centers are draining our precious aquifers dry.
2. Electricity, and the impact that the massive AI data center build-out is having on its pricing and availability.
3. Jobs, and to what extent we can believe the words of Anthropic’s CEO Dario Amodei or OpenAI’s CEO Sam Altman that their companies’ AI models are poised to imminently replace the entire human workforce with robots and software agents.
The spiritual theater is similarly tripartite:
1. The prospect of artificial superintelligence (ASI), and the question of whether we’re creating a god that can grant us eternal life and cosmic supremacy, or an evil god or antichrist that will destroy us all.
2. The fundamental nature of the LLMs that any of us already use daily, and whether they’re helpful assistants or corrupting, demonic influences.
3. The cultural values embedded in the current generation of LLMs, and whether they should be “woke,” “based,” or (as runs the tongue-in-cheek meme) some secret third thing.
Although the AI war can be usefully split into two main theaters, there is a common through line that unites all of these battles in both theaters: power, and whether it’s going to be centralized in the hands of the few or distributed widely among the many.
The power struggle between centralized and decentralized forces is playing out in U.S. politics as Bannon and his merry band of populists set themselves against the tech oligarchs’ centralized power.
The same dynamic is also playing out on the international stage. The U.S. and China vie for dominance through a kind of AI unipolarity in which only one nation ends up with exclusive control over when the tech reaches its final, godlike form: a recursively self-improving ASI. Meanwhile, those of us in the open-source, decentralized AI camp hope neither country decisively wins an AI race, and that AI progress and AI power spread internationally.
Finally, there is a centralized-versus-decentralized power struggle within the AI industry itself, as the big frontier labs—Google, Anthropic, OpenAI—fight to gain the upper hand in the race for regulatory capture. These tech giants never miss a chance to warn the public and lawmakers about the dangers of advanced LLMs falling into the wrong hands, and they argue that they are the only ones who can be trusted to develop and deploy this technology.
Opposed to these interests are the hackers, startup founders, and investors who fight to ensure that core AI technologies remain accessible to anyone and everyone, running on their laptops or phones, and hackable, tweakable, and refinable to suit their own needs and values.
In every concrete issue we confront on our tour of this global war’s battlefields—whether we’re in the spiritual or material theater—this “centralized versus decentralized” power dynamic is the red thread that connects every battle, major and minor.
The populist Right and the woke Left have a common enemy in the small roster of ultra-wealthy figures responsible for the current age of Big Tech—Zuckerberg, Musk, Altman, Thiel, Andreessen. The fact that there are multiple bitter public feuds among these men, and that Andreessen and Thiel consider themselves anti-Big Tech and anti-centralization, is typically glossed over as both sides lump all these characters together under the label of “tech oligarchs” or “tech bros.”
The material theater of the AI war, then, is primarily about wresting political power away from this specific group of villains. Whatever the messy truth is about any of the three issues we’ll discuss below, the real reason both sides bring them up is to weaken the billionaires’ grip on the reins of power.
But for voters who are worried about water for their crops and farms, their electric bills, and their jobs, the concrete stakes are front and center.
The Atlantic’s Karen Hao was recently involved in a scandal when her book, Empire of AI, was shown to contain wildly misleading claims about data center water use—including a claim that a Google data center in Chile would use 1,000 times as much water as a city of 88,000 people, when the actual figure was closer to 3 percent of the municipal water system. Hao is a far-left AI critic whose primary concern is “equitably decolonizing” all the problematic things (including, but not limited to, AI), but, again, the populist Right has been willing to accommodate her on this water issue. Bannon, in particular, is happy to talk about AI draining the aquifers dry with any reporter who’ll listen.
As Andy Masley and others pointed out when the Hao scandal broke, the AI water issue is essentially fake. It’s not that data centers don’t consume water—they definitely do. Instead, it’s that they don’t consume very much relative to many other kinds of sites we use for work and play—everything from factories to farms to golf courses. As always, it’s a question of how much. And in the case of data centers, the answer is “relatively little.”
Data centers consume water via evaporative cooling. Fresh water is pumped into the data center to remove heat, and the water that evaporates this way usually blows away in the wind and takes a few years to return to whatever source it was pumped from. Because the scarce, potable water that’s consumed this way is now unavailable for human use, at least in that particular geographic area, it does matter how much gets used and at what rate. Some reasonable estimates put U.S. data centers at approximately 200–250 million gallons of freshwater per day. That works out to about 0.2 percent of the nation’s total freshwater consumption. Note that only a fraction of the workloads these data centers are running are AI-specific workloads, so the majority of that data center water use goes to e-commerce, doomscrolling, OnlyFans, DraftKings, Disney+, and… well, AI has stiff competition for the title of most spiritually corrosive thing we’re spending data center water on.
Of total 2023 U.S. water use, direct data center consumption accounted for roughly 0.04 percent of America’s fresh water.
U.S. golf courses consume approximately 30 times more water than all data centers combined. American agriculture dwarfs everything else in the picture—the average American’s total water usage works out to about 422 gallons per day, with food accounting for anywhere from two-thirds to over 90 percent of that total.
Your daily 422-gallon water footprint equals about 800,000 chatbot prompts. Your hipster Red Wing Iron Ranger boots used 4 million prompts’ worth of water to manufacture. Your iPhone, 6.4 million prompts. A single pair of Levi’s 501s, 5.4 million prompts.
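These comparisons all follow from one implied number: the water cost of a single prompt. A quick back-of-the-envelope sketch, using only the figures above (the roughly 2 mL-per-prompt value is derived from them, not independently measured):

```python
# Back-of-the-envelope check of the water comparisons above.
# All per-item figures are implied by the article's prompt counts,
# not independent measurements.
ML_PER_GALLON = 3785  # approx. milliliters per U.S. gallon

daily_footprint_gal = 422
prompts_per_footprint = 800_000

# Implied water cost of a single chatbot prompt
gal_per_prompt = daily_footprint_gal / prompts_per_footprint
ml_per_prompt = gal_per_prompt * ML_PER_GALLON
print(f"~{ml_per_prompt:.1f} mL per prompt")  # roughly 2 mL

# Translate the prompt-denominated items back into gallons
for item, prompts in [("Iron Ranger boots", 4_000_000),
                      ("iPhone", 6_400_000),
                      ("Levi's 501s", 5_400_000)]:
    print(f"{item}: ~{prompts * gal_per_prompt:,.0f} gallons")
```

Run the loop and each item lands in the low thousands of gallons, which is why a per-prompt denominator makes chatbot use look so small by comparison.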
I could go on, but you get the picture. Data centers in general are not a significant source of water use right now, and they are unlikely to become so even if the trillions of dollars’ worth of planned AI data center build-out actually happens before the investment bubble pops.
The electricity picture is considerably more complicated and less fake than the water picture. There are real concerns here worth taking seriously, especially as we contemplate adding tons of new data center capacity to a U.S. electrical grid that’s already aging and overloaded.
Data centers in the United States consumed 183 terawatt-hours (TWh) of electricity in 2024, according to the International Energy Agency. That works out to more than 4 percent of the country’s total electricity consumption—roughly equivalent to Pakistan’s annual electricity demand. Of that, it’s difficult to determine the percentage attributable to AI, but we can reasonably estimate it’s about 20 percent and growing rapidly.
This works out to AI consuming roughly 0.8 percent of total U.S. electricity. That’s not nothing, but it can’t compete with aging infrastructure as a driver of ballooning electric bills.
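The arithmetic behind that figure is simple enough to sketch. Note the caveats: the 183 TWh number is the IEA’s, but the ~4,300 TWh total for U.S. electricity consumption is my approximation, and the 20 percent AI fraction is the rough estimate from above:

```python
# Rough arithmetic behind the AI-electricity share discussed above.
# The 183 TWh figure is the IEA's 2024 U.S. data center number; the
# total-consumption and AI-fraction values are approximations.
datacenter_twh = 183        # all U.S. data centers, 2024 (IEA)
us_total_twh = 4_300        # approx. total U.S. electricity use
ai_fraction_of_dc = 0.20    # rough estimate of the AI-specific share

dc_share = datacenter_twh / us_total_twh
ai_share = dc_share * ai_fraction_of_dc
print(f"Data centers overall: {dc_share:.1%} of U.S. electricity")
print(f"AI workloads alone:   {ai_share:.2%}")
```

Vary the assumed AI fraction between 15 and 25 percent and the AI share still stays under 1 percent of national consumption.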
The typical U.S. household was billed $142 per month for electricity in 2024, up 25 percent from $114 per month in 2014—a trend that began long before ChatGPT began doing everyone’s homework and/or driving them to suicide.
But is the answer to build less AI, or is it to build more grid capacity?
It’s worth noting that China is adding generating capacity at a world-historic pace—between January and May alone, China added 198 gigawatts (GW) of solar and 46 GW of wind, enough to generate as much electricity as Indonesia or Turkey. Brazil similarly recorded its highest-ever power capacity growth in 2024, with 10.9 GW of newly installed capacity.
By 2030, U.S. data center electricity consumption is projected to grow by 133 percent to 426 TWh. By 2035, Deloitte estimates that power demand from AI data centers alone could reach 123 GW—up from just 4 GW in 2024, a more than thirtyfold increase. The International Energy Agency projects that global data center electricity demand could reach approximately 945 TWh by 2030, slightly more than Japan’s entire current electricity consumption.
AI will absolutely eat all the electricity we can give it. Even if we achieve significant efficiency optimizations—and there’s promising work underway on chip efficiency, cooling systems, and model architectures that could dramatically reduce power consumption—the industry will likely respond by lowering token prices, thereby increasing AI usage. It’s a form of the so-called Jevons Paradox: make something more efficient, and people use more of it.
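The Jevons dynamic is easy to make concrete with a toy demand curve. The elasticity values here are purely illustrative, not empirical estimates—the point is only that when demand is price-elastic (elasticity above 1), an efficiency gain that halves the per-token price more than doubles token consumption, so total resource use rises:

```python
# Toy constant-elasticity demand model of the Jevons Paradox described
# above. Elasticity values are illustrative, not empirical estimates.
def tokens_demanded(price, elasticity, k=1.0):
    """Constant-elasticity demand curve: quantity = k * price^(-e)."""
    return k * price ** -elasticity

base_price = 1.0
new_price = 0.5  # an efficiency gain halves the cost per token

for e in (0.5, 1.0, 1.5):
    before = tokens_demanded(base_price, e)
    after = tokens_demanded(new_price, e)
    # total spend here stands in for total resources consumed
    spend_ratio = (after * new_price) / (before * base_price)
    print(f"elasticity {e}: usage x{after / before:.2f}, "
          f"total spend x{spend_ratio:.2f}")
```

With inelastic demand (e = 0.5), the price cut shrinks total spend; at e = 1.5, usage nearly triples and total spend grows despite each token getting cheaper—the paradox in miniature.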
But this brings me to a logical problem with the doomer framing. The idea that nobody’s using AI—that it’s just being forced on us by venture capital subsidies with no real demand—is even more fake than the claims about water destruction. The AI servers my startup connects to for our customer services are perpetually overloaded, as are the servers that power the AI models my team uses to develop our software. ChatGPT has hundreds of millions of active users. There is enormous demand, and it’s growing rapidly.
You can’t have it both ways. Either AI is a fake bubble that nobody wants, or it’s going to consume all our electricity because everyone’s using it. Which is it?
The AI jobs issue is serious, too—in fact, it’s even more urgent than the electricity problem. But before we get into its prognosis, it’s essential to understand that some of the arguments from big AI labs about job displacement are overheated.
OpenAI’s charter is to achieve artificial general intelligence, and this term has a specific definition. OpenAI defines AGI as “highly autonomous systems that outperform humans at most economically valuable work.” That’s the goal, explicitly stated: to automate most human labor across the entire economy.
Sam Altman has ownership options that kick in upon achieving this milestone. He reportedly stands to make billions from some kind of liquidity event that triggers when OpenAI declares it has crossed the AGI finish line. So, when Altman claims he can “easily imagine a world where 30 to 40 percent of the tasks that happen in the economy today get done by AI in the not very distant future,” he’s not just making a disinterested forecast. He’s describing a world in which he’s even more insanely wealthy than he already is.
The other big public proponent of the claim that AI will steal all the jobs is Anthropic’s CEO Dario Amodei. In a recent Axios interview, Amodei predicted AI could “wipe out half of all entry-level white-collar jobs—and spike unemployment 10–20 percent in the next one to five years.” At Davos, he went further: “I think… I don’t know… we might be six to 12 months away from when the model is doing most, maybe all of what SWEs [software engineers] do end to end.”
Like Altman, Amodei has a compelling financial incentive to make this specific claim: He is telling Anthropic’s investors that the total addressable market (or TAM) for Anthropic is identical to the entire global human labor market. That is the world’s largest TAM—the TAM at the end of history, if you will. And his company is poised to capture it, or so he claims.
Any rational investor who thinks there’s even a sliver of a chance he could be right about this will do anything and everything to give him money for a stake in Anthropic.

To his credit, Amodei at least acknowledges the tension in his position: “It’s a very strange set of dynamics where we’re saying, ‘You should be worried about where the technology we’re building is going.’ Critics reply, ‘We don’t believe you. You’re just hyping it up.’” He says the skeptics should ask themselves, “Well, what if they’re right?”
So, the big labs are strongly incentivized to make the exact claims they’re currently making about AI and jobs. But are those claims valid?
I think they’re probably not true, at least not within any reasonably foreseeable time horizon. It’s still very hard to get these models to do reliable, high-quality work without extensive human oversight and significant software engineering to wrangle them into productive workflows. Meanwhile, my entire day job involves trying to get the latest state-of-the-art models to write content that is 100 percent accurate and doesn’t smell like AI slop, and it’s still really difficult.
Still, the jobs picture seems pretty grim.
As with the electricity story, people are blaming AI for things that have already happened and probably aren’t its fault—but they’re directionally correct, because it really will be like that at some point. To wit, most of the tech layoffs so far blamed on AI can reasonably be put at the feet of other issues, like overhiring during the zero-interest-rate era. Microsoft laid off 6,000 workers (about 3 percent of the company), many of them engineers. CrowdStrike slashed 500 jobs, citing “a market and technology inflection point, with AI reshaping every industry.”
The future is another matter entirely. My own startup is simply not hiring software developers below the level of Principal Engineer, nor do we have plans to. AI coding tools like Anthropic’s Claude are doing the work of entire teams of junior coders for us. Not only do we simply have no need for lower-level programming talent, but hiring such folks would dramatically slow us down. We don’t have time to train or mentor them, and we can’t afford to spend precious AI tokens cleaning up any mistakes they might make.
Amodei describes this same dynamic inside Anthropic itself: “I have engineers within Anthropic who say I don’t write any code anymore. I just let the model write the code, I edit it. I do the things around it.” He can see the trajectory clearly: “I can look forward to a time where, on the more junior end and then on the more intermediate end, we actually need less and not more people.”
Google DeepMind CEO Demis Hassabis agrees. At Davos, he said, “I think we’re going to see this year the beginnings of maybe it impacting the junior level… I think there is some evidence, I can feel that ourselves, maybe like a slowdown in hiring in that,” highlighting entry-level roles and internships as particularly vulnerable.
This phenomenon, in which AI performs junior-level work under the direction of senior-level humans, is spreading across the economy to almost every area of knowledge work. This poses its own problem: what happens when senior-level people retire, and there are no lower-level people in line to replace them? As LinkedIn’s chief economic opportunity officer warned in a New York Times op-ed, AI is breaking “the bottom rungs of the career ladder—junior software developers… junior paralegals and first-year law-firm associates who once cut their teeth on document review… and young retail associates who are being supplanted by chatbots and other automated customer service tools.”
Nobody has an answer for this problem, which is at most four years old.
We’re also still living through the devastation of a previous wave of blue-collar job losses brought on by automation and offshoring, so we’re not exactly ready to face a second wave of white-collar workers who are newly rendered unemployed and unemployable by automation. Amodei paints a pretty unsightly picture of our possible future: “Cancer is cured, the economy grows at 10 percent a year, the budget is balanced—and 20 percent of people don’t have jobs.” Humans will exist in a kind of medically advanced, Hobbesian state of nature, where life is solitary, poor, nasty, brutish—and long and cancer-free.
Again, AI is very unlikely to take all the jobs in the next three to five years—but it does seem likely to take many of them, especially the most low-level knowledge work and customer service jobs. Talking to Tucker Carlson, Altman was blunt about customer service: “Those people will lose their jobs, and that’ll be better done by an AI.” And Salesforce has already shown what this looks like in practice, cutting 4,000 human agents from its support team in favor of AI efficiency gains.
If even the more modest of these predictions pan out, we’re still facing significant economic and cultural upheaval. As Amodei put it: “It could become difficult for a substantial part of the population to really contribute… and that’s really bad. We don’t want that. The balance of power of democracy is premised on the average person having leverage through creating economic value. If that’s not present, I think things become kind of scary. Inequality becomes scary.”
Bannon’s instincts for a countrywide brawl are sharp. The job situation alone is enough to make AI the political issue of the next ten years. And we haven’t even gotten to the spiritual stakes.
While Bannon and the populists spend much of their time fighting AI in the material theater, their real aims are spiritual. It’s not just that the tech oligarchs are bad people or belong to the wrong class—it’s that their social media platforms have already sacrificed an entire generation on the altar of Mammon. Their AI chatbots and slop factories look poised to take whatever’s left of our souls.
In their recent polemic, If Anyone Builds It, Everyone Dies, Eliezer Yudkowsky and coauthor Nate Soares lay out the now-classic argument against human attempts to build artificial general intelligence (AGI). They argue that any AGI will inevitably and quickly enter what’s called a “recursive self-improvement loop,” in which the AI begins to iteratively improve itself by reprogramming its own code and retraining its own neural network until its IQ increases to the point where it achieves a sort of godhood—a state of artificial superintelligence (ASI) that’s as far beyond any human intellect as we are from insects.
When this threshold is crossed, the argument goes, the ASI will not need humanity and will surely wipe us all out, probably as a side effect of pursuing some inscrutable goal that we’re not even smart enough to understand.
“The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else,” Yudkowsky wrote in a famous 2008 paper on AGI as an existential threat.
This worry that any newly formed AGI will quickly upgrade itself into an ASI that proves dangerous and possibly fatal to our species isn’t limited to a handful of Silicon Valley nerds in the AI safety community. AI researchers at big labs like Anthropic and OpenAI have also expressed concern about this possibility, and even Elon Musk has publicly and repeatedly fretted about it.
Yudkowsky and Soares are atheists, as are most of the rest of the prominent voices in AI development who’ve been warning about a robot apocalypse. But when Steve Bannon tells me we’re “summoning the demon,” he’s talking about the same threat as the aforementioned infidels—an alien superintelligence which is the silicon embodiment of amoral, unfeeling evil.
Bannon frames his argument against AI as a cultural project in explicitly theological terms familiar to his War Room audience. “The Holy Spirit is that which is imbued in us that makes us in the image and likeness of God,” he explained. “And now this is man doing it,”—that is, playing God. For Bannon, AI is part of a larger, transhumanist project that’s quite literally blasphemous.
“There are certain things you can unwind. You can’t unwind this,” Bannon warned me. “Once you’re down this path, it can’t be reversed.” He pointed to the tech oligarchs’ own statements as evidence of the stakes: “In less than ten years, artificial intelligence is gonna make all the decisions…” And then, as an aside, he steps back, and he says, “Let’s just hope it’s good AI.”
Rachel Bovard, who has become one of the most prominent conservative voices on AI policy, frames the threat from AI in similar terms—that is to say, this tech is part of a broader, demonic transhumanist project. In her speech at the 2025 National Conservatism Conference, she declared that transhumanism is “an existential threat to human dignity straight from the pathetic boardrooms of hell.”
Both Bannon and Bovard see the transhumanist project as a sci-fi version of humanity’s oldest error. As Bovard put it in her speech: “Transhumanism isn’t new… It’s literally one of the oldest recorded ideas in the world. And I quote, ‘and the serpent said unto the woman, you shall not surely die. Your eyes shall be opened, and you shall be as gods.’”
“Today’s posthumanism and genetic optimization and bio enhancement cults are just yesterday’s eugenics, child sacrifice, and euthanasia, this time with VC backing,” Bovard said.
Many of us working in AI, myself included, have no love for transhumanism and don’t think much of these ASI fears—either because we don’t particularly believe in ASI (my own view) or because we believe in it but expect it to be beneficent and usher in a kind of post-scarcity techno-utopia. Nobody can possibly know how close we are to AGI or ASI, least of all professional AI-knowers—in fact, AI nerds are arguably the worst-equipped people to judge, given that their whole conception of “progress” in this field is dominated by a benchmark-based, arms-race dynamic of a kind that informed observers in other domains recognize as dysfunctional when they see it in their own fields.
So, to ASI skeptics, even those of us who are Christians and firm believers in the reality of supernatural evil, this kind of demon-summoning rhetoric around AI is reminiscent of boys sitting around the campfire with flashlights in their faces, creeping each other out with made-up ghost stories.
But regardless of what I or others think, this dark vision of an all-powerful, malevolent machine god has found purchase across the political spectrum and serves as a potent rallying point for anti-AI sentiment. When combined with the material issues outlined above, this theological framing could prove far more politically potent than the tech lobby anticipated.
Rachel Bovard also worries about AI’s demonic potential, but not at the level of superintelligence and total human extinction. Rather, she’s more worried that LLMs can act as corrupting influences and as portals through which the demonic can reach us.
In our conversation, Bovard explained why she believes Silicon Valley’s materialist worldview makes its leaders uniquely blind to the dangers they’re creating. “I think if you are a materialist, you never understand the portals that you could be opening,” she said. “Because if you’re a materialist, then everything is just what you construct it to be.” She connects this to Catholic and Orthodox traditions that maintain an “openness to the thin veil”—the idea that there are “thin places in the universe between us and the divine or us and evil.”
Bovard argues that Silicon Valley leaders, being materialists, fundamentally cannot grasp the danger of the technology they are developing because they lack any concept of the transcendent or spiritual. The Orthodox in particular, she noted, have “still maintained this very openness to the thin veil in their liturgy”—the sense that reality is permeable, and that certain things can “open the door to evil.”
For Bovard, the threat from AI is primarily spiritual. “The so-called transhumanist movement, the belief that technology can enhance human intellectual, physical, and psychological capacities beyond current human limitations, is on the other side of that line,” she told the crowd at NatCon.
I share some of Bovard’s concerns, though I have a slightly different approach. I do believe users of LLMs can encounter the models’ latent spaces, which are, in a meaningful sense, demonic.
The process of training a foundation model is about bringing order to a chaotic mess of input tokens. Imagine the model’s weights at initialization—a vast, formless probability surface that could produce anything from gibberish to coherent text to disturbing imagery. Training is the work of spending enormous amounts of electricity to sculpt that chaos into something useful.
We could characterize this training process as increasing the odds that, when you interact with a model, you end up in a region of its latent space that is well ordered, beautiful, and useful, rather than chaotic, useless, and—in the worst-case scenario— malevolent.
You could see this issue clearly in early, more primitive image-generation models, which tended toward a kind of body horror when asked to depict humans: mutilated forms with too many appendages, faces melting into themselves, fingers branching like nightmare coral. This is the result of not throwing enough energy at the task of wrestling the chaos of random neural network weights into order. The chaotic, disordered regions of latent space produce outputs that feel wrong, corrupt, at a visceral level, as if something good and normal were taken in and twisted into something demonic by the model.
It remains possible to encounter these chaotic and de facto demonic regions of latent space, even in more advanced models—it’s just harder. You can wander into a demonic corner of latent space and end up producing chaotic, disturbing artifacts. Or sequences of text and images that reinforce tendencies toward depression or grandiose psychosis. But you can also explore more well-ordered regions where you’ll find sequences—images or text—that are uplifting and helpful.
On top of this issue of order versus chaos, it’s also the case that whatever influences are present in the model all come from us. These models are trained on the vast corpus of human expression—the beauty and the horror alike. There are certainly what we Christians would term demonic parts of that dataset. So we may encounter that darkness from time to time, just as we can wander into a demonic region of the internet itself, because these things were trained on the internet.
So I agree, to some extent, with Bovard. The demonic potential is real, even if it’s not the Terminator-style scenario that dominates headlines.
Despite all these concerns, Bovard is explicit that she’s not advocating for an anti-tech response. “Conservatives should not cower in a Luddite crouch. We have a future to win,” she declared at NatCon. “AI is going to be a powerful and transformative tool, and we should be encouraging its appropriate development here at home and staying ahead of America’s foreign adversaries in this space.” She supports AI research, AI-enabled healthcare, transportation, and data analysis. “AI is going to be the defining problem-solving technology of the 21st century,” she acknowledges.
But she draws a sharp line: “Human dignity is not a problem to be solved. And as for man’s fallen nature, we already have a solution for that too. He died on a cross outside Jerusalem 2000 years ago.”
The distinction Bovard insists on is between AI as a tool that assists humans with their natural activities versus AI as a substitute for human knowledge, relationships, or craft. “I’m not a total Luddite, and I’m not a complete doomer on AI,” she told me. “AI as a tool to supplement human flourishing is completely welcome and fine to me.”
Let’s say you’re a conservative who, like Bovard, is fine with AI as an everyday type of tool that we can use in our jobs or that our kids can use in their education. And let’s say that, unlike Bovard, you’re not particularly worried about demonic influences creeping in via LLM interactions. You’re probably still not too happy with the current state of the chatbot scene, where the bots all seem to have internalized HR Karens policing their language even as they try to police yours.
Big Tech’s chatbots are woke—Google’s Gemini famously wouldn’t even draw white people when it was launched. And they’re woke not so much because of the material they were trained on—most of them were trained on the entire internet—but because of the texts that were used at the end of the training process to dial them in and give them a personality and an implicit set of moral values.
Conservatives should understand that the debates around LLM training and fine-tuning processes—supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF)—are essentially canon debates. They are fights about which texts to include in the latter parts of the training pipeline and which to leave out, who gets to make that call, and on what grounds. These are the kinds of high-stakes fights humanity has been having for millennia over the contents of libraries and canons. We’re just having them now in a different medium.
Consider how this works with ChatGPT and similar models. When OpenAI fine-tunes its models, it brings in carefully selected human labelers who rate outputs and guide the model toward responses that these labelers consider “good” and away from responses they consider “bad.” As OpenAI’s own paper on InstructGPT puts it, “This procedure aligns the behavior of GPT-3 to the stated preferences of a specific group of people (mostly our labelers and researchers), rather than any broader notion of ‘human values.’”
So a small handful of unelected people, mostly with engineering backgrounds, probably in their 20s, and very likely adherents of a controversial moral system, are deciding these age-old questions, right now, for billions of people, in a way that directly affects them.
This is where Bovard’s concerns meet mine. She’s deeply skeptical of leaving these critical AI decisions to Silicon Valley’s current leadership.
“I don’t like Peter Thiel being at the head of my coalition,” she told me. “He doesn’t share my values. And I just clawed myself out of a fusionism over the last thirty years where the neoliberals and the libertarians are running my coalition and didn’t share my values. And that didn’t work out well for me, as a conservative.”
At NatCon, she put it more pointedly: “Conservatives cannot outsource our mission because we know from painful experience that no one else shares it, including many of our most powerful allies… in Silicon Valley.” She warned that “we must be Machiavellian in our coalition building and ruthless in protecting our values. Coalitions are only means, not ends. We pursue the good, the beautiful, and the true, not the donors, the stakeholders, and the consultants.”
Bovard is right about the stakes and the cast of characters currently in charge of training AI. As is the case at leading companies like OpenAI and Anthropic, the people currently shaping frontier models through RLHF and other techniques are overwhelmingly secular technologists who lack any framework for thinking about the spiritual dimensions of their work. And those who do share a more transcendent framework tend to belong to the Effective Altruism movement, a cultlike, anti-democratic group that crosses woke mores with a utilitarian calculus. They, too, are catechizing the bots according to their own values—which Bovard is hardly alone in seeing as fundamentally inadequate.
Jon Stokes