I've spent the last two years watching the artificial intelligence industry engage in what can only be described as a beautiful, terrifying suicide race. Companies are burning through billions of dollars, slashing prices to near zero, and pushing their infrastructure to breaking points, all while Wall Street questions whether they'll ever turn a profit. And you know what? I think this chaos might be exactly what we need.

The conventional wisdom says that pressure creates diamonds. That competition breeds innovation. That the invisible hand of the market will guide us toward the optimal outcome. But as I watch OpenAI, Anthropic, Google, and a dozen other players locked in an existential battle for AI dominance, I'm not entirely sure whether we're witnessing the birth of a new technological era or the spectacular implosion of an entire industry. What I am sure of is this: the stakes have never been higher, and the outcome will determine whether artificial intelligence becomes a public good or a corporate monopoly.
The Economics of Insanity
Let me be blunt about something that doesn't get discussed enough in polite tech circles: the current AI business model is completely unsustainable. OpenAI is reportedly losing money on every ChatGPT conversation. Anthropic is burning through venture capital at an eye-watering rate to keep Claude running. Google is subsidizing Gemini access through its advertising empire. Microsoft is eating infrastructure costs that would make most CFOs weep into their spreadsheets.

I've covered enough tech cycles to recognize a pattern when I see one, and this one looks disturbingly familiar. We've seen this movie before: with ride-sharing apps hemorrhaging money to undercut each other, with streaming services spending themselves into oblivion for market share, with social media platforms giving away services for free while figuring out monetization later. Sometimes these strategies work. Often, they don't.

But here's where AI diverges from those previous battles, and why I find myself oddly optimistic despite the financial carnage: the infrastructure being built right now has permanent value. When Uber and Lyft were subsidizing rides, they were essentially buying temporary loyalty. When Netflix was licensing content at unsustainable rates, it was renting someone else's assets. But when OpenAI builds a data center, when Anthropic develops a new training technique, when Google optimizes its tensor processing units, that investment doesn't evaporate when the subsidy wars end. It becomes the foundation for everything that comes next.
I spoke recently with a venture capitalist who has invested in three different AI companies, and she told me something that's stuck with me: "We're not funding businesses right now. We're funding the future's infrastructure. Whether any individual company survives is almost beside the point." That's a chilling admission, but I think there's profound truth in it. The question isn't whether the current spending levels are sustainable; they obviously aren't. The question is whether the innovation being forced by this competition will create enough value to justify the investment.
OpenAI: A Case Study in Pressure
If you want to understand how pressure shapes an industry, look no further than OpenAI. This is a company that went from research lab to household name in less than two years, raised billions in funding, achieved a valuation that rivals some small countries, and is now facing an existential crisis about its identity, governance, and future.

The pressure on OpenAI comes from every conceivable direction. There's competitive pressure from Anthropic, Google, and a dozen hungry startups. There's financial pressure to justify its astronomical valuation. There's regulatory pressure as governments wake up to the implications of artificial general intelligence. There's internal pressure as key researchers depart and philosophical differences about safety versus speed create organizational fractures. And perhaps most importantly, there's the pressure of expectations: OpenAI has positioned itself as the leader in the race toward AGI, which means anything less than transformative progress feels like failure.

I've watched companies buckle under far less pressure. The question that keeps me up at night is whether OpenAI will channel this pressure into innovation or fracture under the weight of competing demands. The optimistic scenario is that pressure creates focus. OpenAI has already demonstrated a remarkable ability to iterate rapidly, shipping GPT-4, then GPT-4 Turbo, then vision capabilities, then voice, then a developer platform, then custom GPTs, all while maintaining competitive pricing and expanding access. That's the kind of velocity you only see when a company has no choice but to move fast.
But there's a darker scenario that worries me. Pressure can also create shortcuts. It can incentivize shipping products before they're ready. It can lead to cutting corners on safety in the race to market. It can force companies to pivot away from their foundational values when those values conflict with survival. OpenAI's transition from non-profit to capped-profit to what increasingly looks like a conventional tech company suggests that pressure isn't always noble in its effects.

Here's what I believe: OpenAI won't be destroyed by pressure, but it may be transformed beyond recognition. The question isn't survival; the technology is too valuable and the talent too concentrated for the company to simply disappear. The question is whether the OpenAI that emerges from this crucible will still resemble the research organization that set out to ensure artificial general intelligence benefits all of humanity, or whether it will become just another player in a corporate oligopoly.
The Democracy of Intelligence
This brings me to what I consider the most important aspect of this entire discussion: access. The reason I desperately hope platforms like ChatGPT and Claude survive isn't because I'm emotionally attached to particular brands or because I think any individual company deserves to succeed. It's because these platforms represent something genuinely revolutionary: the democratization of cognitive tools that were, until very recently, the exclusive province of experts.

I remember the internet before Google. I remember research before Wikipedia. I remember communication before email. Each of these technologies faced skepticism, criticism, and predictions of failure. Each of them fundamentally changed what it meant to be informed, connected, and capable. AI assistants feel like they're in that same category of transformation, and I worry that if the current players collapse under financial pressure before the technology matures, we'll lose a critical window of accessibility.
Let me give you a concrete example of what I mean. I recently spoke with a high school teacher in rural Tunisia who's using ChatGPT to create customized lesson plans for students with learning disabilities. She doesn't have access to expensive educational software. She doesn't have training in special education. But she has internet access and a free ChatGPT account, and that's enough to transform her classroom. That's not a hypothetical benefit; it's happening right now, replicated in thousands of variations around the world.

This is what I mean by "spreading intelligence among people." Not making people smarter in some fundamental way, but giving everyone access to tools that amplify their existing capabilities. A small business owner who can now generate marketing copy. A programmer who can debug code at midnight without waiting for senior developers. A parent helping their child with math homework they don't understand themselves. These aren't Silicon Valley use cases; they're human use cases, and they're only possible because the current competitive dynamics have driven prices toward zero and availability toward universal.
But here's the tension that keeps me awake: this accessibility is being subsidized by unsustainable business models. OpenAI can only afford to offer free ChatGPT access because they're backed by Microsoft's capital. Anthropic can only afford competitive pricing because they're venture-backed. What happens when that subsidy ends? Do we see prices rise to sustainable levels, putting these tools out of reach for most of the world? Do we see consolidation, where only the most well-capitalized players survive? Or do we see a genuine breakthrough in business models that allows both profitability and accessibility?
The Innovation Acceleration
Here's where I become genuinely optimistic, even enthusiastic. Whatever the financial carnage, whatever the uncertainty about individual company survival, the pace of innovation we're witnessing is extraordinary. I've been covering technology for over a decade, and I've never seen anything move this fast.

Consider what's happened in just the last eighteen months. We've gone from text generation to multimodal models that understand images and sound. We've gone from limited context windows to models that can process entire books. We've gone from English-dominant systems to genuinely multilingual capabilities. We've gone from cloud-only access to local models that run on consumer hardware. We've gone from AI as novelty to AI as infrastructure, embedded in everything from search engines to operating systems to productivity suites.

That acceleration is a direct result of competitive pressure. When Anthropic releases Claude with a larger context window, OpenAI responds with GPT-4 Turbo. When Google demonstrates Gemini's multimodal capabilities, everyone else scrambles to match them. When open-source models like Llama threaten to commoditize the technology, the commercial players are forced to add value through speed, reliability, and ease of use. This is competition working exactly as economic theory predicts: driving innovation, reducing prices, and expanding access.

I spoke with a researcher at a major AI lab who told me something revealing: "We're shipping things now that we would have spent another year refining in a different environment. The pressure to compete means we're learning in public, iterating based on real-world usage rather than theoretical considerations." That's simultaneously terrifying and exciting. It means more bugs, more unexpected behaviors, more public failures. But it also means faster improvement, more rapid adaptation to actual human needs, and a development process that's responsive rather than ivory-tower.
The counterargument, which I take seriously, is that this pressure could lead to catastrophic mistakes. That racing to deploy AI systems before we fully understand their capabilities or limitations could result in harm that could have been prevented with more careful development. That the competitive pressure to cut costs could lead to reduced investment in safety research, alignment work, and robust testing. These aren't hypothetical concerns; they're based on how every other technology industry has evolved under similar pressure.

But here's what I've come to believe after watching this space intensely: the alternative to fast, competitive development isn't careful, considered development by a single responsible actor. The alternative is fragmented development happening in dozens of labs around the world, with varying standards, minimal transparency, and no competitive pressure to democratize access. I'll take the chaos of competition over the opacity of monopoly any day.
The Infrastructure Investment
Let me return to something I mentioned earlier, because I think it's the key to understanding why this period of seemingly irrational competition might actually be rational in the long term: infrastructure investment. The billions being spent right now aren't disappearing into thin air; they're being converted into compute capacity, research capabilities, and technical knowledge that will persist regardless of which individual companies survive.

Think about what's being built. Data centers optimized for AI workloads. Custom silicon designed specifically for transformer architectures. Training pipelines that can handle trillions of parameters. Distributed systems that can serve millions of simultaneous users with millisecond latency. Research into more efficient training methods, better alignment techniques, and more capable architectures. This isn't money being spent on marketing or customer acquisition; this is foundational investment in a new technological paradigm.

I compare it to the railroad boom of the nineteenth century. Thousands of railroad companies were founded. Most of them failed. Investors lost fortunes. But the railroads that were built remained, creating the infrastructure for an entirely new economy. Or consider the dot-com boom. Hundreds of companies collapsed in the bust. Billions in market value evaporated. But the fiber optic cables that were laid, the data centers that were built, the technical expertise that was developed: all of that persisted and enabled the internet economy we live in today.
I believe we're in an analogous moment with AI. Yes, the current competitive dynamics are unsustainable. Yes, many current players will fail, be acquired, or radically transform their business models. But the infrastructure being built right now will be the foundation for AI integration across every sector of the economy for decades to come. The question isn't whether this investment is worthwhile; it obviously is. The question is who will ultimately control and benefit from that infrastructure.
The Monopoly Risk
This brings me to my greatest concern, and it's not the one most people expect. I'm not particularly worried about AI causing job displacement; that's a real issue, but it's a gradual transition we can manage with policy. I'm not even particularly worried about AI safety in the near term; the current systems, while capable, are far from the science fiction scenarios that dominate headlines.

What worries me is consolidation. What worries me is that the financial pressure currently driving innovation will eventually result in a handful of survivors who then have no incentive to maintain competitive pricing or universal access. What worries me is that we'll look back on 2023 and 2024 as a golden age of AI accessibility, before the market "matured" and costs rose to sustainable levels that price out most of the world.

I've seen this pattern before. The early internet was wonderfully chaotic and democratic. Then consolidation happened, and now five companies control most of what we see and do online. The early mobile ecosystem had dozens of competitors. Now it's a duopoly. Early streaming meant fragmented but accessible content. Now it means expensive subscriptions to multiple services to see what used to be available in one place.

The risk with AI is that we end up in a similar place: a handful of major players, probably some combination of Microsoft/OpenAI, Google, and Anthropic, dividing the market and charging whatever the market will bear. The competitive pressure that's currently unsustainable would be replaced by oligopoly pricing that's very sustainable indeed. And the democratization of intelligence that I find so exciting would become the privatization of intelligence, where access is determined by ability to pay.
Here's my professional opinion, offered with appropriate humility about my ability to predict the future: we need the competition to continue long enough for the technology to become genuinely open. Not open-source in the naive sense where anyone can download and run frontier models; the compute requirements make that impractical for the most capable systems. But open in the sense that there are multiple viable competitors, that switching costs remain low, that standards and interoperability prevent lock-in, and that the basic benefits of AI assistance remain accessible regardless of economic status.
The Path Forward
So what do I think should happen? What policies, practices, and priorities would maximize the chance that this period of intense competition results in broadly beneficial outcomes rather than either financial collapse or corporate consolidation?

First, I believe we need to separate infrastructure from services. The data centers, compute capacity, and basic research should be treated as public goods or utilities, with appropriate investment from governments and international organizations. The services built on top of that infrastructure can remain competitive and private. This would reduce the capital burden on individual companies while ensuring that the foundational technology remains accessible.

Second, we need interoperability standards. Right now, every AI platform is a silo. You learn ChatGPT's interface, Claude's quirks, Gemini's strengths. That creates switching costs and lock-in. We need standards that allow users to move between platforms easily, that let developers build applications that work across multiple AI backends, and that prevent any single company from capturing users through proprietary formats.
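For readers who build on these platforms, the switching-cost point can be made concrete. The sketch below is purely illustrative: the class and method names are hypothetical, invented for this example, and no real vendor SDK is involved. It shows the kind of thin, provider-neutral interface that interoperability standards would make routine, where application code never names a vendor and swapping providers is a one-line change.

```python
from typing import Protocol


class ChatBackend(Protocol):
    """Anything that can turn a prompt into a reply (hypothetical interface)."""

    def complete(self, prompt: str) -> str: ...


class VendorABackend:
    """Stand-in for one provider; a real version would call its API here."""

    def complete(self, prompt: str) -> str:
        return f"[vendor-a] {prompt}"


class VendorBBackend:
    """Stand-in for a competing provider with the same small interface."""

    def complete(self, prompt: str) -> str:
        return f"[vendor-b] {prompt}"


def answer(backend: ChatBackend, prompt: str) -> str:
    # Application logic depends only on the neutral interface,
    # so no vendor can capture users through a proprietary surface.
    return backend.complete(prompt)


if __name__ == "__main__":
    print(answer(VendorABackend(), "hello"))
    print(answer(VendorBBackend(), "hello"))
```

The design choice is the whole argument in miniature: the narrower and more standard the interface, the lower the cost of walking away, and the less pricing power any single provider holds.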
Third, we need honest conversations about sustainability. The current pricing is subsidized. Eventually, someone has to pay the actual costs. Rather than pretending this isn't true, we should be discussing what sustainable pricing looks like, how to ensure basic access remains affordable or free, and how to prevent price increases from recreating the digital divide we've spent decades trying to close.

Fourth, we need to maintain multiple viable competitors. This probably means accepting that not every company will be profitable, that some level of subsidization is acceptable to prevent monopoly, and that antitrust authorities need to be vigilant about preventing consolidation. The benefits of competition are so substantial that they justify significant policy intervention to maintain it.
Why I Remain Optimistic
After laying out all these concerns, all these risks, all these uncertainties, you might expect me to be pessimistic about the future of AI accessibility. I'm not. Despite everything, I remain fundamentally optimistic, and here's why.

The genie is out of the bottle. Hundreds of millions of people have now experienced what it's like to have access to AI assistance. They've felt the capability it provides. They've integrated it into their workflows, their education, their daily lives. That creates political and economic pressure that didn't exist before. Any attempt to restrict access, to price it out of reach, to consolidate it into monopoly control will face resistance from a now-informed user base.
Moreover, the technology itself continues to advance in ways that reduce costs. More efficient architectures mean less compute per query. Better training methods mean less time and energy required to develop new capabilities. Hardware improvements mean more performance per dollar. The trend line is toward cheaper, more accessible AI, not more expensive and exclusive, regardless of business model pressures.

And perhaps most importantly, the cat-and-mouse game between closed and open models continues. Every time proprietary models make advances, open-source alternatives follow within months. Those open models may not match the absolute frontier of capability, but they're good enough for most use cases, and they establish a floor under pricing that prevents unlimited exploitation by commercial players.
I think about that teacher in Tunisia again, and the thousands like her around the world who are finding ways to use these tools that none of us anticipated. I think about the small businesses, the students, the creators, the researchers, the activists who now have access to capabilities that were unimaginable a few years ago. That's not going away. The question is whether it expands or contracts, whether it becomes more accessible or less, whether the benefits continue to flow broadly or become concentrated.

The pressure these companies are under, the financial pressure, the competitive pressure, the regulatory pressure, the pressure of expectations: that pressure is uncomfortable to watch. It feels unstable. It feels unsustainable. And in many ways, it is. But it's also the force that's keeping prices low, driving innovation, and preventing premature consolidation. It's the chaos that's allowing a thousand flowers to bloom, some of which will turn out to be genuinely transformative.
So yes, I hope ChatGPT and Claude continue. Not because I'm loyal to brands or because I think any particular company deserves to succeed, but because their continued existence, their continued competition, their continued pressure on each other is what's keeping AI accessible. It's what's preventing monopoly. It's what's driving the innovation that will determine whether artificial intelligence becomes a tool that empowers everyone or a privilege reserved for those who can afford it.

The coming years will determine which path we take. The financial models will either find sustainability or they won't. The competition will either continue or consolidate. The accessibility we currently enjoy will either persist or become the nostalgic memory of a brief golden age. I can't predict with certainty which outcome we'll see. But I can tell you this: the pressure isn't going away. The competition isn't slowing down. The innovation isn't stopping. And that gives me hope that we might just navigate this transition into a future where intelligence, augmented by artificial capability, really does get spread among all people rather than concentrated in the hands of a few.

The pressure that's making companies uncomfortable right now is the same pressure that's making the technology accessible to that teacher in Tunisia. It's the same pressure that's driving prices down and capabilities up. It's the same pressure that's preventing any single company from dictating terms to the market.
And in the end, that pressure might be exactly what we need. Not to destroy OpenAI or any other company, but to forge them into something better. Something more sustainable. Something more aligned with the original promise of artificial intelligence as a tool for human flourishing rather than corporate profit maximization.

The race continues. The pressure intensifies. The stakes couldn't be higher. And I, for one, am grateful to be watching and writing about one of the most consequential technology transitions of our lifetime.

