Tech by Blaze Media

© 2025 Blaze Media LLC. All rights reserved.
Why each new controversy around Sam Altman’s OpenAI is crazier than the last

Mass resignations, disgusting allegations, and a laundry list of lawsuits. Can anyone clean up the industry-leading company?

Last week, two independent nonprofits, the Midas Project and the Tech Oversight Project, released the fruits of a year-long investigation: a massive file collecting and presenting evidence of a panoply of deeply suspect actions, mainly on the part of Altman but also attributable to OpenAI as a corporate entity.

It’s damning stuff — so much so that, if you’re only acquainted with the hype and rumors surrounding the company or perhaps its ChatGPT product, the time has come for you to take a deeper dive.


Most recently, iyO Audio alleged OpenAI made attempts at wholesale design theft and outright trademark infringement. A quick look at other recent headlines suggests an alarming pattern:

  • Altman is said to have claimed no equity in OpenAI despite backdoor investments through Y Combinator, among others;
  • Altman owns 7.5% of Reddit, which, after its still-expanding partnership with OpenAI, shot his net worth up $50 million;
  • OpenAI is reportedly restructuring its corporate form yet again — with a 7% stake, Altman stands to be $20 billion richer under the new structure;
  • Former OpenAI executives, including Mira Murati, the Amodei siblings, and Ilya Sutskever, have all described pathological mistreatment and behavioral malfeasance on Altman’s part.

The list goes on. Many other serious transgressions are cataloged in the OpenAI Files excoriation. At the time of this writing, Sam Altman and/or OpenAI have been the subject of no fewer than eight serious, high-stakes lawsuits. Accusations range from incestuous sexual abuse to racketeering, breach of contract, and copyright infringement.

None of these accusations, including heinous crimes of a sexual nature, have done much of anything to dent the OpenAI brand or its ongoing upward valuation.

Tech's game of thrones

The company’s trajectory has outlined a Silicon Valley game of thrones unlike any seen elsewhere. Since its 2015 inception — when Elon Musk, Sam Altman, Ilya Sutskever, and Greg Brockman convened to found OpenAI — the Janus-faced organization has been a tier-one player in the AI sphere. In addition to cutting-edge tech, it’s also generated near-constant turmoil. The company churns out rumors, upsets, expulsions, shady reversals, and controversy at about the same rate as it advances AI research, innovation, and products.

RELATED: Mark Zuckerberg's multibillion-dollar midlife crisis

Sean M. Haffey/Getty Images

At the company’s founding, Amazon, Peter Thiel, and other investors pledged it $1 billion up front, but the money was late to arrive. Right away, Altman and Musk clashed over the ultimate direction of the organization. By early 2018, Elon was out — an exit that spiked investor uncertainty and required another fast shot of capital.

New investors, Reid Hoffman of LinkedIn fame among them, stepped up — and OpenAI rode on. Under the full direction of Sam Altman, the company pushed its reinforcement learning products, OpenAI Gym and Universe, to market.

To many at the time, including Musk, OpenAI was lagging behind Google in the race to AI dominance — a problem for the likes of Musk, who had originally conceived the organization as a serious counterweight to what many experts and laypeople saw as an extinction-level threat: the centralized, “closed” development of AI to the point of dominance over all of society.

That’s why OpenAI began as a nonprofit, ostensibly human-based, decentralized, and open-source. In Silicon Valley’s heady (if degenerate) years prior to the COVID panic, there was a sense that AI was simply going to happen — it was inevitable, and it would be preferable that decent, smart people, perhaps not so eager to align themselves with the military industrial complex or simply the sheer and absolute logic of capital, be in charge of steering the outcome.

But by 2019, OpenAI had altered its corporate structure from nonprofit to something called a “capped-profit model.” Money was tight. Microsoft invested $1 billion, and early versions of the LLM GPT-2 were released to substantial fanfare and fawning appreciation from the experts.

Life after Elon

In 2020, the now for-limited-profit company dropped its API, which allowed developers to access GPT-3. Its image generator, DALL-E, was released in 2021, a move that has since seemed to define, to some limited but significant extent, the direction in which OpenAI wants to progress. The spirit of cooperation and sharing, if not enshrined at the company, was at least in the air, and by late 2022 ChatGPT had garnered millions of users, well on the way to becoming a household name. The company’s valuation soon climbed into the tens of billions of dollars.

After Musk’s dissatisfied departure — he now publicly lambastes "ClosedAI" and "Scam Altman" — the restructuring with ideologically diffuse investors solidified a new model: build an ecosystem of products designed to dovetail or interface with other companies and software. (Palantir has taken a somewhat similar, though much more focused, approach to the problem of capturing AI.) The thinking here seems to be: attack the problem from all directions, converge on “intelligence,” and get paid along the way.

And so, at present, in addition to the aforementioned products, OpenAI now offers — deep breath — CLIP for connecting images and text, Jukebox for music generation, Shap-E for 3D object generation, Sora for generating video content, Operator for automating workflows with AI agents, Canvas for AI-assisted content creation, and a smattering of similar, almost modular, products. It’s striking how many of these are aimed at creative industries — an approach capped off most recently by the sensational hire of Apple’s former chief design officer Jony Ive, whose io deal with the company is the target of iyO’s litigation.

But we shouldn’t give short shrift to the “o series” (o1 through o4) of products, which are said to be reasoning models. Reasoning, of course, is the crown jewel of AI. These products are curious, because while they don’t make up a hardcore package of premium-grade plug-and-play tools for industrial and military efficiency (the Palantir approach), they suggest a very clever route into the heart of the technical problems involved in “solving” for “artificial reasoning” — assuming the contested point that such a thing can ever really exist. Is part of the OpenAI ethos, even if only by default, to approach the crown jewel of “reasoning” by way of the creative, intuitive, and generative — as opposed to tracing a line of pure efficiency as others in the field have done?

Gut check time

Wrapped up in the latest OpenAI controversy is a warning that’s impossible to ignore: Perhaps humans just can’t be trusted to build or wield “real” AI of the sort Altman wants — the kind he can prompt to decide for itself what to do with all his money and all his computers.

Ask yourself: Does any of the human behavior evidenced along the way in the OpenAI saga seem, shall we say, stable — much less morally grounded enough that Americans, or any people, would rest easy putting the future in the hands of Altman and company? Are these individuals worth the $20 million to $100 million a year they command on the hot AI market?

Or are we — as a people, a society, a civilization — in danger of becoming strung out, hitting a wall of self-delusion and frenzied acquisitiveness? What do we have to show so far for the power, money, and special privileges thrown at Altman for promising a world remade? And he’s just getting started. Who among us feels prepared for what’s next?

Andrew Edwards

Andrew Edwards was born and raised in Portland, Oregon, and now lives near Ashland with his wife, three sons, and two dogs. In addition to writing novels and short non-fiction and producing "The Warhorse Podcast," he has worked as a ranch hand, wilderness survival/tracking instructor, and private driver.
@goldengoatguild