
The Structure of the New Resource War

In the era of AI, what is the most valuable resource? When I think about it, the first things that come to mind are “GPUs” and “data.”

At the same time, the idea that these are important resources has already become common sense. The real question is what kind of resources they are.

In the industrial era, oil was the resource that moved nations. It supported industrial production, transportation, and even determined the outcomes of wars. It was said that whoever controlled oil controlled the world.

Today, GPUs are beginning to resemble oil. They drive generative AI, support military technologies, and stand at the frontlines of information warfare. Whether or not a nation possesses computational resources now determines its strategic strength.

I wrote about this perspective in “The Geopolitics of Computational and Energy Resources.”

However, the emergence of ChatGPT, and later DeepSeek, has made things a little more complicated. Owning massive GPU fleets and datasets is no longer an absolute prerequisite. With the right model design and training strategy, even limited computational resources can produce disruptive results.

In other words, GPUs, once similar to oil, are now also beginning to resemble “currency.”

It’s not just about how much you have. It’s about where, when, and how you use it. Liquidity and strategic deployment determine outcomes. Mere accumulation is meaningless. Value is created by circulation and optimized utilization.

Given this, I believe the coming resource wars will have a two-layer structure.

One layer will resemble the traditional oil wars. Nations will hoard GPUs, dominate supply chains, and treat computational resources as strategic reserves.

The other layer will be more flexible and dynamic, akin to currency wars. Teams will compete on model design, data engineering, and chip architecture optimization—on how much performance they can extract from limited resources.

DeepSeek exemplified the second path. Without access to cutting-edge GPUs, the team optimized its software and its talent to extract performance rivaling the world’s top models.

In short, simply possessing computational resources will no longer be enough. It will be essential to customize, optimize, and maximize the efficiency of what you have.

It’s not about “who has the most.” It’s about “who can use it best.”

I believe this is the structure of the new resource war in the AI era.


Why Didn’t Google Build ChatGPT?

When OpenAI released ChatGPT, I believe the company that was most shocked was Google.

They had DeepMind. They had Demis Hassabis. By all accounts, Google had some of the best researchers in the world. So why couldn’t they build something like ChatGPT, or release it first?

Google also had more data than anyone else. So why didn’t that help? Perhaps because they had too much big data: so much of it was optimized for search and advertising that it became a liability in the new paradigm of language generation. Data that had once been a strategic asset was now too noisy and too structurally biased to be ideal for training modern AI.

Having a large amount of data is no longer the precondition for innovation. Instead, what matters now is a small amount of critical data and a team with a clear objective for the model’s output. That’s what makes today’s AI work.

That’s exactly what OpenAI demonstrated. In its early days, the company didn’t have access to massive GPU clusters; the multibillion-dollar Microsoft investments came later. It launched something that moved the world with minimal resources and a lot of design and training ingenuity. It wasn’t about quantity of data, but quality. Not about how much compute you had, but how you structured your model. That was the disruptive innovation.

And what did Big Tech do in response? They began buying up GPUs. To preempt competition. They secured more computing power than they could even use, just to prevent others from accessing it.

It was a logical move to block future disruptions before they could even begin. The same logic extended to data. In language generation AI especially, platforms like Twitter and Facebook, where raw, unfiltered human expression is abundant, hold the most valuable data. These are spaces full of emotion, contradiction, and cultural nuance. Unlike LinkedIn, which reflects structured, formalized communication, these platforms capture what it means to be human.

That’s why the data war began. Twitter’s privatization wasn’t just a media shakeup. Although never explicitly stated, Twitter’s non-public data has reportedly been used in xAI’s LLM training. The acquisition likely aimed to keep that “emotional big data” away from competitors. Cutting off the API and changing the domain were visible consequences of that decision.

And just as Silicon Valley was closing in—hoarding data and GPUs—DeepSeek emerged from an entirely unexpected place.

A player from China, operating under constraints, choosing architectures that didn’t rely on cutting-edge chips, yet still managing to compete in performance. That was disruptive innovation in its purest form.

What Google had, OpenAI didn’t. What OpenAI had, Google didn’t. That difference now seems to signal the future shape of our digital world.
