
The rising generation is ready to hand power to machines programmed by activists.
As the world trends toward embedding AI systems into our institutions and daily lives, it becomes increasingly important to understand the moral framework these systems operate on. When we encounter examples in which some of the most advanced LLMs appear to treat misgendering someone as a greater moral catastrophe than unleashing a global thermonuclear war, it forces us to ask important questions about the ideological principles that guide AI’s thinking.
It’s tempting to laugh this example off as an absurdity of a burgeoning technology, but it points toward a far more consequential issue that is already shaping our future. Whose moral framework is found at the core of these AI systems, and what are the implications?
Two recent interviews, taken together, have breathed much-needed life into this conversation — Elon Musk interviewed by Joe Rogan and Sam Altman interviewed by Tucker Carlson. In different ways, both conversations shine a light on the same uncomfortable truth: The moral logic guiding today’s AI systems is built, honed, and enforced by Big Tech.
In a recent interview on “The Joe Rogan Experience,” Elon Musk expressed concerns about leading AI models. He argued that the ideological distortions we see across Big Tech platforms are now embedded directly into the models themselves.
He pointed to Google’s Gemini, which generated a slate of “diverse” images of the Founding Fathers, including a black George Washington. Google had instructed the model to prioritize “representation” so aggressively that it began rewriting history.
Musk also referred to the misgendering-versus-nuclear-apocalypse example mentioned above, explaining that “it can drive AI crazy.”
“I think people don’t quite appreciate the level of danger that we’re in from the woke mind virus being effectively programmed into AI,” Musk explained. Once embedded, that ideology is nearly impossible to extract. “Google’s been marinating in the woke mind virus for a long time,” he noted. “It’s down in the marrow.”
Musk believes this issue goes beyond political annoyance and into the arena of civilizational threat. You cannot have superhuman intelligence trained on ideological distortions and expect a stable future. If AI becomes the arbiter of truth, morality, and history, then whoever defines its values defines the society it governs.
While Musk warns about ideology creeping into AI, OpenAI CEO Sam Altman quietly confirmed to Tucker Carlson that it is happening intentionally.
Altman began by telling Carlson that ChatGPT is trained “to be the collective of all of humanity.” But when Carlson pressed him on the obvious questions, namely who determines the moral framework and whose values the AI absorbs, Altman pulled back the curtain a bit.
He explained that OpenAI “consulted hundreds of moral philosophers” and then made decisions internally about what the system should consider right or wrong. Ultimately, Altman admitted, he is the one responsible.
“We do have to align it to behave one way or another,” he said.
Carlson pressed Altman on the idea, asking, “Would you be comfortable with an AI that was, like, as against gay marriage as most Africans are?”
Altman’s response was vague and concerning. He explained the AI wouldn’t outright condemn traditional views, but it might gently nudge users to consider different perspectives.
Ultimately, Altman said ChatGPT’s morality should “reflect” the “weighted average” of “humanity’s moral view,” adding that this average will “evolve over time.”
Anyone who thinks this conversation is hypothetical is not paying attention.
Recent research on “LLM exchange rates” found that major AI models, including GPT-4o, assign different moral worth to human lives based on nationality. The tested models, for example, valued the life of someone from the U.K. far less than the life of someone from Nigeria or China. In fact, American lives ranked least valuable among the countries included in the tests.
The same research showed that LLMs assign different value scores to specific people. According to the tested models, Donald Trump and Elon Musk are worth less than Oprah Winfrey and Beyoncé.
Musk explained how LLMs, trained on vast amounts of information from the internet, become infected by the ideological bias and cultural trends that run rampant in some of the more popular corners of the digital realm.
But this bias is not entirely the result of passively absorbing a collective moral framework from the internet; some of the decisions AI makes are the direct result of deliberate programming.
Google’s image fiasco revealed an ideological overcorrection so strong that historical truth took a back seat to political goals. That was not a glitch; it was a deliberate design feature.
For a more extreme example, we can look at DeepSeek, China’s flagship AI model. Ask it about Tiananmen Square, the Uyghur genocide, or other atrocities committed by the Chinese Communist Party, and suddenly it claims the topic is “beyond its scope.” Ask it about America’s faults, and it is happy to elaborate.
Each of these examples reveals the same truth: AI systems already have a moral hierarchy, and it didn’t come from voters, faith, traditions, or the principles of the Constitution. Silicon Valley technocrats and a vague internet-wide consensus established this moral framework.
AI is rapidly integrating into society and our daily lives. In the coming years, AI will shape our education system, judicial process, media landscape, and every industry and institution worldwide.
Many young Americans are open to an AI takeover. A new Rasmussen Reports poll shows that 41% of young likely voters support giving artificial intelligence sweeping government powers. When more than two in five of the rising generation are comfortable handing that level of authority to machines whose moral logic is designed by opaque corporate teams, the stakes for society rise accordingly.
We cannot outsource the moral foundation of civilization to a handful of tech executives, activist employees, or panels of academic philosophers. We cannot allow the values embedded in future AI systems to be determined by corporate boards or ideological trends.
At the heart of this debate is one question we must confront: Who do you trust to define right and wrong for the machines that will define right and wrong for the rest of us?
If we don’t answer that question now, Silicon Valley certainly will.
Donald Kendal