© 2026 Blaze Media LLC. All rights reserved.
The next fight over freedom will run through AI models

These systems will shape thought, speech, and incentives, which makes constitutional limits more urgent than ever.

When it comes to artificial intelligence, the Trump administration has made its position clear: America will not choke innovation with red tape.

That instinct is understandable and, in many ways, correct. AI is moving fast, and heavy-handed regulation could do real damage. If the United States cripples its own companies, China will gladly take the advantage. And no one on the right wants blue-state politicians using AI rules to smuggle “woke” ideology into the next generation of powerful models.

As White House AI adviser David Sacks recently put it, “We don’t like seeing blue states trying to insert their woke ideology in AI models, and we really want to try and stop that.”

Fair enough.

But what happens when resistance to bad regulation hardens into resistance to any regulation at all?

That question is now surfacing in Utah, where the White House is reportedly opposing a Republican-sponsored AI transparency bill. The fight may sound parochial, but it raises a much larger question: Do conservatives have the discipline to protect constitutional liberty in the AI age?

Utah isn’t California

The Utah proposal is not a European-style crackdown. It would not impose speech codes, mandate ideological compliance, or try to centrally plan the AI economy.

At its core, the bill focuses on transparency and accountability. It would require frontier AI companies to disclose serious risks, plan for safety in advance, report major problems, and protect whistleblowers who raise alarms.

That’s far from radical.

If the administration’s AI strategy is to stop progressive states from embedding political orthodoxy into algorithms, Utah’s bill does not belong in that category. The measure is about making sure the companies building extraordinarily powerful systems acknowledge the risks up front and take responsibility when things go wrong.

Treating that effort as if it were blue-state social engineering confuses two very different problems. There is a real difference between using AI regulation to enforce ideology and asking powerful firms to level with the public about systems that could reshape society.

The myth of an ‘unregulated’ AI market

Another uncomfortable truth lurks beneath this debate: AI is not operating in anything like a free-market vacuum.

The European Union has already enacted its sweeping AI Act. That regulatory regime will not stop at Europe’s borders. American companies that operate globally will feel its force, and American users will feel the downstream effects.

If the United States adopts a posture of total federal non-engagement, it will not preserve a neutral market. It will hand the regulatory initiative to Brussels.

That would be a serious mistake. Europe does not regulate with American constitutional principles in mind. It regulates through a bureaucratic worldview that prizes centralized control over freedom. If Washington refuses to establish clear guardrails rooted in our own constitutional tradition, foreign regulators and multinational firms will fill the void.

Power without constitutional guardrails

AI is quickly becoming part of the infrastructure of modern life. These systems increasingly shape how information flows, how public opinion forms, and how daily choices get nudged.

That is power.

We have already watched major corporations use private power to shape public life. Social-media companies moderated, suppressed, and curated speech in ways that tilted public debate. Large firms adopted ESG frameworks that embedded political priorities into lending, hiring, and investment. In both cases, powerful institutions pushed ideological outcomes without a vote being cast or a law being passed.

Nothing suggests AI will escape those pressures.


The companies building frontier systems carry their own assumptions, incentives, and cultural biases. If those assumptions get baked into foundational models — and those models then get integrated into education, finance, media, hiring, and governance — ideological influence will move from the margins to the infrastructure of society.

Yes, clumsy central planning would hurt innovation and weaken America’s position against China. But the answer cannot be blind faith that market incentives alone will protect liberty. That asks a great deal of institutions that have already shown a willingness to steer political and cultural outcomes in their preferred direction.

The real challenge is making sure extraordinary technological power develops inside a framework that respects constitutional rights, individual liberty, and personal autonomy.

A pro-liberty AI framework

The Trump administration is right to resist ideological manipulation in AI models and to oppose sweeping regimes that would handicap American innovation while China races ahead.

But someone will shape the boundaries of this technology. The only real question is whether those boundaries reflect American constitutional principles or the preferences of foreign regulators and corporate boards.

Red states such as Utah should be treated as allies in that effort, not obstacles. They can serve as proving grounds for approaches that protect transparency, due process, free expression, and individual autonomy without strangling innovation.

Artificial intelligence will shape the next century more than any single statute. Total non-engagement may sound pro-growth, but in practice it leaves the foundational rules of the AI era to someone else.

The goal should be straightforward: Build an American AI future in which freedom is embedded from the start, and constitutional guardrails shape the systems that will increasingly shape us.

Donald Kendal

Donald Kendal is the director of the Emerging Issues Center at the Heartland Institute.