A few months ago I noticed I was having two completely different conversations about AI depending on who I was talking to.

With people who build things, the conversations are specific. What works, what doesn't. The debugging is brutal. Costs add up faster than expected. But there's genuine progress happening in narrow areas.

With everyone else, it's either apocalyptic fear or breathless optimism. AI will take all jobs. AI will solve climate change. AI will destroy humanity. AI will cure cancer.

I don't find either version particularly useful. What follows isn't a set of predictions or a position. Just what I notice when I pay attention.

What I Notice in Organizations

There's a strange paradox happening right now. Personal AI tools work. Enterprise AI mostly doesn't.

An MIT study from last summer found that 95% of organizations investing in generative AI report "zero return." At the same time, over 40% of knowledge workers use AI tools personally and find them helpful.

Same people. Different outcomes depending on context.

I've noticed this pattern in my own work. Someone uses ChatGPT to draft emails, summarize documents, brainstorm ideas. It works fine. Then the same organization tries to deploy AI for something more structured and it falls apart.

Why?

The organizations that actually get value from AI tend to have something in common. They redesigned their workflows instead of bolting AI onto existing processes. They constrained what the AI was supposed to do and kept humans in the loop at decision points.

The ones that fail usually ask "How do we add AI to what we're already doing?"

I've found that the first framing of a problem is usually wrong. Organizations asking "how do we add AI?" might be asking the wrong question entirely. Maybe the question is "what are we actually trying to do?" and AI is one possible tool among many.

Or maybe the problem isn't AI capability at all. Maybe it's that most organizations struggle to implement any new technology well. Historical success rates for large enterprise tech deployments hover around 10%. AI fits right into that pattern.

Probably both interpretations are partially true.

The Money Question

Something about the economics makes me uncomfortable. I can't quite articulate why, but let me try.

Michael Burry has been writing about this. He's the investor who called the housing crash before 2008 (you probably know him from The Big Short). His thesis is that hyperscalers are understating depreciation by assuming chips will last 5-6 years when the actual product cycle is closer to 18-24 months. He projects about $176 billion in understated depreciation between now and 2028.

I don't know if he's right. But the math is worth thinking about.
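The core of the claim is simple straight-line arithmetic: stretch the assumed useful life and the annual expense shrinks. A minimal sketch, using an illustrative capex figure (not any company's actual numbers):

```python
# Back-of-envelope: straight-line depreciation under two useful-life assumptions.
# The $100B capex figure is hypothetical, chosen only to make the gap visible.

def annual_depreciation(cost: float, useful_life_years: float) -> float:
    """Straight-line depreciation expense per year."""
    return cost / useful_life_years

gpu_capex = 100e9  # hypothetical $100B of accelerator hardware

booked = annual_depreciation(gpu_capex, 6)  # books assume a ~6-year life
cycle = annual_depreciation(gpu_capex, 2)   # product cycle closer to ~2 years

print(f"Booked expense:        ${booked / 1e9:.1f}B/yr")
print(f"Cycle-implied expense: ${cycle / 1e9:.1f}B/yr")
print(f"Gap:                   ${(cycle - booked) / 1e9:.1f}B/yr")
```

On these made-up numbers, the gap is roughly $33 billion a year on $100 billion of hardware. Scale that across every hyperscaler's capex and Burry's $176 billion figure stops looking outlandish, whether or not his exact inputs are right.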

OpenAI generated $4.3 billion in revenue in the first half of 2025 while burning through $2.5 billion in cash.

Sam Altman himself said it earlier this year: "Are we in a phase where investors as a whole are overexcited about AI? My opinion is yes." He compared it to the dot-com bubble. Smart people getting overexcited about a kernel of truth.

The kernel of truth part matters. The internet was transformative. Most dot-com investments still lost money. Both things can be true at once.

I keep thinking about what happens if the current spending levels prove unsustainable. Amazon, Google, Meta, and Microsoft are collectively spending around $400 billion on AI infrastructure this year. By some estimates, the sector needs $2 trillion in annual revenue by 2030 just to justify current spending.

Does AI development slow down if the money dries up? Or does it just consolidate into fewer, larger players?

What I notice is that most AI commentary focuses on capabilities and ignores the economics entirely. That seems like a gap.

The Bigger Picture

AI development isn't just a technical question. It's embedded in systems that determine what gets built.

The shorthand version is that America innovates, China replicates, and Europe regulates.

Private AI investment in the US hit about $109 billion in 2024. That's roughly twelve times China's figure. The US treats AI compute and chips as strategic assets, using export controls to maintain advantage.

China has a different model. State-centric, focused on building its own domestic AI industry. It rose from 23rd to 6th in AI readiness rankings between 2024 and 2025. It's catching up faster than anyone expected.

Europe is betting on being the rule-maker rather than the first mover. The AI Act entered into force in 2024. The EU launched OpenEuroLLM to develop models across European languages. There's a lot of talk about "technological sovereignty" and "buy European."

What interests me is how these different approaches create different incentives. If you're in the US model, you optimize for speed and scale. If you're in the EU model, you optimize for compliance and risk management. If you're in the China model, you optimize for state priorities.

The technology that emerges from each model will be different. Not just in capability, but in what it's designed to do.

I wrote about sovereign AI a few months ago, focused on smaller nations trying to build their own capacity. But the same logic applies at the superpower level. AI is becoming infrastructure. And infrastructure is political.

Here's what I keep wondering. If the future of AI depends on which economic and regulatory model wins, are we even having the right conversations about it?

Most AI discourse focuses on what the technology can do. Much less attention goes to who controls it, under what rules, for whose benefit.

What I'm Left With

I don't have a unified theory. I'm not sure anyone does.

Organizations struggle with AI, but maybe they struggle with all new technology. The economics look unsustainable, but they looked unsustainable for Amazon too and that turned out fine. The geopolitics will shape what gets built, but predicting geopolitics is a fool's game.

What I keep coming back to is that the AI conversation happens at two extremes. Either it's technical discussions about scaling laws and model architectures, or it's sweeping claims about civilization-level transformation.

There's not much in between. Not much asking what's actually working today. Whether the money math is sustainable. What systems this technology is embedded in.

Those feel like the questions that matter. I don't have answers. But I'm paying attention.