Never has the imagination of so few mattered so much.
Many in the AI field believe that the future is inevitable, a destination arrived at through the brute application of electricity and capital. This prevailing faith, known as the scaling hypothesis, posits that if one feeds enough data into enough GPUs, AI will emerge as a matter of course. It is a comforting determinism, suggesting that the machine evolves under its own logic, provided the resources are sufficient.
However, if we observe the actual dynamics of this revolution, we notice that the machinery is useless without a very specific, rare kind of human intervention. Servers may hum in their air-conditioned vastness, but the architectures they run do not emerge spontaneously from the chips. They are crafted, often painfully, by a handful of individuals. As the entrepreneur Naveen Rao observed, there are perhaps “only a couple hundred people in the world” who possess the deep expertise required to train cutting-edge models.
Progress relies on the spark of insight that only a human mind can provide.
The leaders of the industry are betting that a brilliant mind can unlock more progress than an extra few billion parameters can. Scaling provides the clay; human genius does the shaping. This scarcity has precipitated what Elon Musk called the “craziest talent war” he had ever seen. Companies are not merely hiring; they are offering seven-figure salaries to lure researchers away from rivals, regarding these individual experts as the ultimate competitive edge.
There is a historical resonance here, a recurring pattern in which the movement of a few minds alters the geopolitical trajectory. We saw it in the 20th century, when the United States imported Wernher von Braun and his team under Project Paperclip, a move that enabled America’s achievements during the space race. We saw the inverse when the U.S., concerned with communist espionage, deported the Caltech-trained scientist Qian Xuesen to China, an act later described by a U.S. official as the “stupidest thing this country ever did.” Qian returned to China to orchestrate its missile and space programs, proving that the loss of talent can be a strategic error that capital cannot fix.
We are now witnessing a diffusion of genius that belies the American assumption of dominance. For years, American discourse failed to see the rise of Chinese AI, lulled by the belief that innovation was a function of Silicon Valley’s unique ecosystem. Then came DeepSeek. In early 2025, this Hangzhou-based lab released a model that rivaled the best American systems, trained at a fraction of the cost.
The shock was palpable. It was described as a “Sputnik moment.” DeepSeek did not achieve this by out-spending the Americans; it did it by out-thinking them. It utilized architectural efficiencies to achieve frontier capability with about a tenth of the computing power of its competitors. It demonstrated that elite technical talent can compensate for, and optimize around, resource constraints. Brains had outsmarted brawn.
This dynamic is reshaping the cultural geography of the field. Talent is no longer content to sit in the monolithic campuses around the San Francisco Bay Area. Consider the exodus from Meta. Of the 14 authors who wrote the original LLaMA paper, 11 had departed by 2025. They did not vanish; they circulated. Many resurfaced in Paris, founding Mistral AI, which quickly raised over €100 million on the promise of making AI accessible through open-source models.
The shift is from institutional loyalty to intellectual nomadism. Some researchers are driven by an open-source ethos, preferring to publish their model weights and invite global collaboration rather than lock their work behind corporate walls. Openness can be a strategy to harness talent. When Alibaba’s Qwen team or DeepSeek release their models, they are not just releasing code; they are signaling to the global community of mathematicians and engineers that the work is happening there, outside the confines of the American giants.
The scaling hypothesis suggests a kind of inevitability, that any sufficiently funded lab would eventually reach the same breakthroughs. The history of the field suggests, rather, that the Transformer architecture, the very backbone of modern AI, might not have appeared in 2017 had Ashish Vaswani and his collaborators not been in the room to imagine it. These shifts are not guaranteed by external conditions: They require advocates, mavericks, particular minds capable of the conceptual leap.
Michael Polanyi spoke of tacit knowledge, the ineffable know-how that cannot be written down but resides in the intuition of the expert. With neural networks, this tacit knowledge is the feel for tuning a loss function, the aesthetic judgment required to guide a model’s learning. To build machines that behave intelligently, we are dependent on the rarest and most distinctively human forms of creativity.
The models are getting larger. The data centers consume the power of small nations. However, the direction of this juggernaut is still determined by a very small number of people. The scaling hypothesis was only ever half the story. The other half is the talent hypothesis, the stubborn fact that progress relies on the spark of insight that only a human mind can provide.
The intelligence we are so desperate to manufacture is not a commodity we can mine from the earth but a reflection of the people who build it. Without the elite engineers to imagine what to do with the compute, the ambitious visions of artificial intelligence remain just that — visions, waiting for a mind to bring them to life. The servers may be loud, but it is the quiet work of these few hundred people that will determine what they are saying.
Stephen Pimentel