
Does anyone think we’re up to the task of controlling AI?

An ailing society is apt to tap out of its biggest responsibilities.

So many slide decks and white papers promise a future of AI under human control, a project framed not as a technological sprint but as a long journey. The language is meant to reassure, a steady hand on the shoulder of a jittery public. Yet the very premise of the journey implies a certain departure, a recognition that the systems we are building now operate at a speed and complexity that have outstripped our capacity to oversee them. One might nervously wonder whether the center will hold.

One answer to this predicament is “interpretability,” a technique for examining an AI model to figure out why it did what it did. It’s the equivalent of reading a plane’s flight recorder after a crash. But a system making thousands of autonomous decisions a second offers no time for such leisurely forensics. A failure may not be an event but a condition, a constant state of potential deviation.

The new thinking, then, is to move from forensics to architecture. The goal is to build oversight in, to treat governance not as a secondary analysis but as a foundational requirement: an immutable audit trail that logs not just a model’s output but its entire lineage — the data it was fed, the model version that made the call, the key inputs that shaped its rationale. We are no longer merely watching the machine; we are building a watchtower.
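
To make that concrete, here is a minimal sketch, in Python, of what such a lineage record might look like. The class names, fields, and the loan-decision example are invented for illustration; they describe the shape of the idea, not any particular product. Each decision is appended with its model version, key inputs, and a pointer to its training data, and the entries are hash-chained so that quietly rewriting history breaks the chain.

```python
# Minimal sketch of an append-only audit trail for model decisions.
# All names and data here are illustrative, not drawn from a real library or system.
import hashlib
import json
import time
from dataclasses import dataclass, asdict, field


@dataclass
class AuditRecord:
    model_version: str       # which model made the call
    input_summary: dict      # the key inputs that shaped the decision
    training_data_ref: str   # pointer to the dataset snapshot the model learned from
    output: str              # what the model actually decided
    timestamp: float = field(default_factory=time.time)


class AuditLog:
    """Append-only log; each entry hashes the previous one, so tampering is detectable."""

    def __init__(self):
        self._entries = []
        self._last_hash = "genesis"

    def append(self, record: AuditRecord) -> str:
        payload = json.dumps(asdict(record), sort_keys=True)
        entry_hash = hashlib.sha256((self._last_hash + payload).encode()).hexdigest()
        self._entries.append({"hash": entry_hash, "prev": self._last_hash, "record": payload})
        self._last_hash = entry_hash
        return entry_hash

    def verify(self) -> bool:
        prev = "genesis"
        for entry in self._entries:
            if hashlib.sha256((prev + entry["record"]).encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True


log = AuditLog()
log.append(AuditRecord(
    model_version="loan-model-v3.2",
    input_summary={"income": 48_000, "credit_history_years": 7},
    training_data_ref="s3://datasets/loans-2024-q4-snapshot",
    output="denied",
))
assert log.verify()  # any edit to a past entry would make this check fail
```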

In the loop?

At the heart of this new architecture is the “human-in-the-loop,” a concept whose neatness belies the anxiety beneath it. The human, we are told, will shift from passive reviewer to active designer, engaging in a continuous loop of governance that sets the boundaries and defines the goals. But the very act of depending on these systems can engender a state of cognitive offloading, a subtle atrophy of our own critical faculties. We are asked to be the system’s ultimate arbiter at the very moment the system is eroding the instincts required for the job.

The friction is everywhere. We see it in the laboratory, when a researcher at the University of Washington uses deep learning to design functional proteins that have never existed in nature, opening doors to novel medicines and biosensors. We see it in the game of Go, when a machine makes a move that defies centuries of human wisdom, a move of startling, alien creativity. The promise is one of discovery, of accelerating the scientific method. The possible reality is a “theory glut,” a condition in which the bottleneck shifts from ideation to validation. We find ourselves in a world that can generate hypotheses at a superhuman rate, but our capacity to test them, to ground them in the physical world, remains stubbornly, irreducibly human. We might drown in brilliant answers to questions we have not yet learned how to ask.

This dissonance echoes in the most intimate spaces of our lives. We are offered “digital twins,” virtual replicas of our own physiology, updated in real time, upon which a surgeon can rehearse a procedure in a risk-free environment. We are told that AI copilots will save the legal profession a great number of hours per year, freeing lawyers from the drudgery of document review to focus on the higher arts of deepening client relationships.


Free and fragile

The narrative is one of liberation, of efficiency begetting connection. And yet, this reclaimed time exists within a system of escalating expectations. The Jevons paradox, a 19th-century economic observation, finds its modern footing here: when efficiency makes something cheaper to do, demand for it often rises rather than falls. The two hours a sales professional saves each day are not banked for leisure; they are reinvested into the pursuit of higher quotas. The freedom from menial tasks does not lead to rest but to the creation of new, more complex work.
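
The arithmetic is easy to sketch with invented numbers: halve the time a task takes, let the quota more than double, and the total hours worked go up, not down.

```python
# Illustrative arithmetic for the Jevons paradox; every number here is made up.
hours_per_task_before = 1.0   # one hour of document review per contract, pre-copilot
tasks_per_week_before = 40    # weekly quota

hours_per_task_after = 0.5    # the copilot halves the per-task effort...
tasks_per_week_after = 90     # ...and the quota more than doubles

total_before = hours_per_task_before * tasks_per_week_before  # 40.0 hours
total_after = hours_per_task_after * tasks_per_week_after     # 45.0 hours

print(total_before, total_after)  # 40.0 45.0: efficiency rises, and so does the workload
```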

And beneath it all, there is a persistent hum of vulnerability. The very transparency we engineer for control becomes a new attack surface. An adversary can engage in “data poisoning,” slipping malicious information into a training set to warp a model’s output in subtle, insidious ways. The system built for auditability becomes uniquely susceptible to a kind of attack that leaves no obvious trace, a hidden vulnerability that could lie dormant for years in a system that guides autonomous vehicles or calibrates antibiotic dosages. The solution, it turns out, has problems of its own.
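
A toy example shows the mechanism. The classifier below is a deliberately simple nearest-centroid “spam filter” with invented numbers, not any real system: a handful of mislabeled examples slipped into the training set drags one class’s center of gravity toward the other, and a message that was caught before now slips through, with nothing in the output to flag the change.

```python
# Toy illustration of data poisoning; all data is invented for this sketch.

def centroid(points):
    # Average each coordinate across the training points.
    return [sum(xs) / len(points) for xs in zip(*points)]

def classify(x, spam_points, ham_points):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return "spam" if dist2(x, centroid(spam_points)) < dist2(x, centroid(ham_points)) else "ham"

# Clean training data: each message is (link_count, exclamation_count).
spam = [(8, 6), (9, 7), (7, 5)]
ham = [(1, 0), (0, 1), (2, 1)]

borderline = (6, 4)
print(classify(borderline, spam, ham))           # "spam" on clean data

# Poisoning: spam-like examples are mislabeled as ham, shifting the ham centroid.
poisoned_ham = ham + [(7, 6), (8, 5), (9, 6)]
print(classify(borderline, spam, poisoned_ham))  # now "ham" -- the message slips through
```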

The long journey points not toward a destination but toward a state of perpetual negotiation. The most critical constraint is not hardware or networking or power. It is talent. The crisis is human. The gap between the demand for people who can manage these systems and the available supply of them is the true bottleneck. The government may frame this challenge as a matter of national security, an imperative to maintain a competitive advantage. But it seems to be something more fundamental. A controllable AI future is not about building smarter machines. It is about the far more complex and uncertain project of building a more resilient and healthy human society, one capable of managing the strange and brilliant weather of its own creation.

Stephen Pimentel

Stephen Pimentel is an engineer and essayist in the San Francisco Bay Area, interested in the classics, political philosophy, governance futurism, and AI.