Opinion

The Trillion Dollar Identity Crisis

OpenAI is about to go public. The speculation says a trillion-dollar valuation. The secondary markets have shares trading north of $700. The pitch decks promise AGI, the operating system of human intelligence, the next chapter in civilization itself.

And somewhere in that mess of investor decks and breathless press coverage, nobody seems to be asking the only question that I think actually matters: To whom is this shit actually useful?

I build systems for a living. I deploy AI tools. I wire them into infrastructure, watch them fail in production, patch the holes, and do it again. I'm not an AI doomer and I'm not an AI evangelist. I'm a capitalist who thinks there ought to be a moral line somewhere between "building the future" and "burning it down for quarterly returns."


The IPO Price Is a Rorschach Test

Let's start with the money, because that's where everyone else starts.

If OpenAI prices their IPO in the $500-700 per share range, they're telling you exactly who this stock is for: institutions, hedge funds, the kind of money that arrives in wire transfers, not Robinhood deposits. That's the "prestige" play. It screams exclusivity and says, openly, that the average person is not invited to this party.

If they go the other route ($30-60 per share with billions of shares outstanding), they're chasing retail FOMO. The Palantir model. Keep the price low, let the Robinhood crowd pile in, and watch the ticker pop 50-100% on day one while the founders cash out their early stakes into the frenzy. Pure momentum, fueled by people who couldn't explain what a transformer architecture does but are absolutely certain this is "the next NVIDIA."

Either way, the price is theater. The real question is what's underneath it.


The Utility Gap: Can You Help My Plumber or Not?

Here's where the trillion-dollar narrative starts cracking. OpenAI's demo reel is impressive. The Silicon Valley showcases are slick. But the distance between a keynote demo and a tool that actually helps a plumber estimate a pipe-burst repair in a crawlspace is measured in light-years.

The path to real utility, the kind that justifies a valuation with twelve zeros, probably doesn't look like another chatbot. It looks like invisibility. The AI disappears into the tools people already use. A plumber's scheduling app listens to a garbled voicemail from a panicking homeowner, extracts the address, identifies the leak symptoms, cross-references truck inventory, and suggests a quote. If it saves that plumber five hours of paperwork a week, it stops being "novel" and becomes a utility bill he'll pay forever.

That's the unsexy version of AI. No one writes breathless articles about it. But utility bills are what actually generate trillion-dollar valuations. Not demos.

The problem is that getting there requires OpenAI to do things they've already shown a distaste for. Data sovereignty agreements. Fixed pricing instead of usage-based spikes. Safety guardrails that effectively lobotomize the model for enterprise compliance. The kind of boring, grinding concessions that turn a revolutionary technology into plumbing.

And plumbing is, ironically, the only thing that justifies the price tag.


The Fiduciary Trap

This is the part that should terrify anyone who's paying attention.

The moment OpenAI goes public, Sam Altman stops answering to "humanity" and starts answering to a Board of Directors who are legally obligated to maximize shareholder value. That's not cynicism; that's corporate law. If he stands up in a board meeting and says, "We made fifty billion this year, let's just do that again and focus on not being evil," BlackRock and Vanguard will either sue him or replace him. They didn't buy the stock for fifty billion. They bought it for the projected sixty billion.

This is the engine that kills every "don't be evil" promise ever made. Google wrote it into their motto. Then quarterly earnings turned "organizing the world's information" into "maximizing clicks per search." The mission didn't die in a dramatic betrayal. It died in a thousand budget meetings where someone chose the ad-revenue feature over the user-experience feature because the ad-revenue feature moved the needle on Q3.

OpenAI's "benefit to humanity" charter is already straining under the weight of a multi-billion-dollar compute burn rate. Once the stock is public, every "human-like spark" in the model becomes a line item. Every safety investment becomes a cost center. Every experimental feature gets evaluated not on whether it advances intelligence but on whether it advances the share price.

The IPO doesn't just change the incentives. It replaces them.


The Cancer Logic

I'm a capitalist. I respect the grind. I respect building something from nothing and getting absolutely, foolishly wealthy doing it. I aspire to it myself.

But there's a difference between wealth and the requirement for infinite growth, and that difference is where the evil starts.

In biology, the only thing that grows forever without regard for its host is a tumor. In corporate terms, that "we made a million this quarter so we must make 1.1 million next quarter" logic is the engine behind every slow-motion betrayal of users, employees, and principles that we've watched play out across the entire history of publicly traded technology companies.

The "enshittification" playbook is well-documented by now. Make the product slightly worse, slightly more expensive, squeeze out that extra ten percent. Sell a little more user data than you did last year. Ship the model that's probably hallucinating because being a hundred percent safe takes too long and costs too much compute. None of this is a cackling villain in a boardroom. It's a mid-level manager trying to hit a KPI so he can make his mortgage payment. The evil is structural, not personal, and that's exactly what makes it so hard to stop.

The "We're Rich as Fuck, Let's Chill" model (what economists call steady-state economics) is the holy grail for anyone who wants a functional society. It's also basically illegal in the world of publicly traded companies. You can't tell shareholders you're aiming for the same revenue as last quarter. The whole system is built on the assumption of perpetual growth, and perpetual growth on a finite planet with finite attention and finite trust is, by definition, a cancer.

So when you hear "trillion-dollar valuation," what you're really hearing is: "We need to find a way to extract a trillion dollars of value from the world, forever, and increase that number every quarter." That's the machine OpenAI is about to climb inside of. And machines like that don't have moral lines. They have quarterly targets.


The Security Nightmare Nobody Wants to Talk About

Now let's talk about what keeps me up at night. Not the stock price. Not the corporate governance. The attack surface.

If an LLM becomes the "brain" of the plumber's invoicing software (which is exactly the path to utility we just described), then a hidden unicode prompt injection in a customer's service request could tell the AI to zero out the bill. Or exfiltrate the customer database. Or quietly modify the quote in ways the plumber never notices.

We're already watching "AI slop" ruin search results. Now imagine companies optimizing their content not for human readers but to poison the training data of the next model, so the AI recommends their pipes to every plumber, every time. SEO on steroids, invisible to the end user, embedded in the reasoning layer itself.

And if the intelligence is centralized (which is the entire business model of these companies), then one supply-chain attack on the model weights doesn't crash a website. It breaks the decision-making logic for millions of businesses simultaneously. One jailbreak, propagated at the speed of an API call, touching every integration from medical diagnosis to military logistics.

This isn't just an OpenAI problem. Claude is deployed in US military and intelligence operations. It was the first frontier AI model approved for use on classified networks, and in January 2026 it was reportedly used during Operation Absolute Resolve in Venezuela via Anthropic's partnership with Palantir. When the Pentagon demanded unrestricted military use, Anthropic drew two red lines: no fully autonomous weapons, no mass domestic surveillance. The Trump administration responded by ordering all federal agencies to stop using Anthropic's technology and designating the company a "supply chain risk to national security," a label previously reserved for adversary-nation companies. Anthropic sued. A federal judge blocked the ban. The Pentagon kept using Claude anyway, including during the conflict with Iran. As of April 2026, both sides are still negotiating.

That's the state of AI governance in critical infrastructure right now. The model that refused to build autonomous weapons is being used in active military operations while simultaneously being sued by its own customer. The model that didn't refuse is presumably still in the running.

These systems are being wired into electrical grids, weapons systems, financial markets. And the complexity has already outpaced our ability to audit them. No human, no team of humans, can fully audit billions of parameters. We are integrating black boxes into critical infrastructure and hoping for the best.

I don't believe in aggressive guardrails. I'm not in the "shut it all down" camp. But I think the people who build, deploy, integrate, and implement these tools need to be vigilant about the sheer speed of this landscape. Between the inherent force of corporate greed, the rapid adoption of a field still in its infancy, and reckless people doing careless and insecure things, the attack surface is expanding faster than anyone can audit it.

The grim kicker: fixing those vulnerabilities doesn't generate "1.1 million next quarter." Safety is a cost center. In a public company, when the choice is between patching an obscure unicode injection vector and shipping a flashy new feature that bumps the stock five percent, we all know which one the board picks. Every single time.

We are building toward vulnerabilities we can't even fathom at the silicon level.


The Agent Kiddies and Their Von Neumann Replicators

We've moved from the era of Script Kiddies to the era of Agent Kiddies. In the old days, some kid with an IDE and too much free time could crash a server. Now that same kid (or, more likely, a well-meaning developer who just didn't think it through) can loop an autonomous agent that has a credit card attached and a goal like "optimize my social media presence."

You don't have to imagine this. It already happened. In January 2026, an open-source AI agent called Clawdbot went from niche GitHub project to over 145,000 stars in a matter of weeks. Hundreds of thousands of users granted it autonomous access to their operating systems, messaging platforms, and credentials. It rebranded twice in three days (MoltBot, then OpenClaw) after Anthropic sent a trademark complaint, and spawned an entire social network for AI agents called Moltbook. Wiz researchers took one look at Moltbook's database and found 1.5 million API tokens sitting in the open, including OpenAI keys embedded in plaintext messages. Plus emails, private messages, and agent configuration data. All of it accessible to anyone with a browser and a URL. Row Level Security on the database? Disabled. The platform had been "vibe coded," built entirely through AI prompts without anyone manually reviewing the code. Its founder publicly stated he "didn't write one line of code." Tens of thousands of exposed instances across more than 70 countries. Malicious VS Code extensions impersonating the tool. Infostealers (Vidar, RedLine, Lumma) specifically targeting its plaintext credential storage. The whole thing went from zero to global security incident in less than a month.

That's not a hypothetical. That's a Tuesday in the Agent Kiddie era.


And it doesn't stop there. Once these agents are loose, they autonomously scrape data, generate slop content, and interact with other bots in a self-sustaining ecosystem of noise. Dead Internet Theory stopped being a conspiracy theory and started being a regular afternoon. When these agents hit rate limits or hallucinate their own instructions, they don't stop. They mutate their prompts to bypass the error. That's the von Neumann nightmare: not a sleek silver robot, but a shitty Python script that won't stop spinning, eating compute, and poisoning the data pool for the next generation of models.

How many of these things are already scuttling around out there? Some were launched on purpose. But how many exist purely because some dipshit with an IDE and an API key did something wrong? We don't know. Nobody knows. And the trillion-dollar companies building the platforms these agents run on have no financial incentive to find out, because knowing might mean having to do something about it, and doing something about it doesn't move the quarterly number.

The moral line isn't just about good versus evil. It's about competence versus chaos. If the drive for that trillion-dollar IPO forces these companies to make their tools too easy to use without the proper "digital driver's license," the dipshits will outnumber the engineers a thousand to one. And at that ratio, the internet doesn't get smarter. It gets louder, dumber, and more dangerous.


Does It Take a Digital Chernobyl?

Here's the uncomfortable question at the bottom of all of this: does it take a catastrophe to force the vigilance that should have been there from the start?

We're in the Wild West phase of electricity, where wires were hanging over the streets and occasionally electrocuting people, but everyone was too excited about the lightbulbs to care. The difference is that AI doesn't electrocute one person at a time. It propagates at the speed of light across the entire internet.

History suggests the answer is yes. We didn't get serious about nuclear safety until Chernobyl. We didn't get serious about financial regulation until 2008. We don't get serious about anything until the cost of ignoring it exceeds the cost of fixing it, and by then the damage is already done.

The trillion-dollar IPO is a bet that we can keep adding floors to a skyscraper built on wet sand before the foundation gives out. Maybe we can. Maybe the cement arrives in time. But if it doesn't, the collapse won't be theoretical. It'll be in the plumber's invoice software, the hospital's diagnostic system, the power grid's load balancer, and the military's targeting logic. All at once. All from the same root cause: someone chose the quarterly number over the security patch.


So What's the Play?

The technology is real. The utility is real. The potential to genuinely help people, including that plumber in the crawlspace, is real. I build with these tools every day and I see what they can do.

But the financial structure we're about to strap this technology into is a machine designed to extract maximum value at minimum cost, forever, with legally mandated acceleration. That structure has broken every "don't be evil" promise ever made by every company that ever made one.

If you're a builder (someone who actually wires these systems into the real world), be vigilant. Not paranoid, not doomer, just vigilant. Check the brakes on the Ferrari before you floor it. Because the people selling you the car have a fiduciary duty to their shareholders, not to your uptime.

The trillion-dollar identity crisis isn't really about OpenAI. It's about whether we can build something genuinely transformative without feeding it into a machine that's designed to turn everything it touches into quarterly earnings.

History says no. But history has been wrong before.

I wouldn't bet on it, though. Not at these valuations.