Categories
Asides

The Structure of the New Resource War

In the era of AI, what is the most valuable resource? When I think about it, the first things that come to mind are “GPUs” and “data.”

At the same time, the idea that these are important resources has already become common sense. The real question is what kind of resources they are.

In the twentieth century, oil was the resource that moved nations. It supported industrial production, transportation, and even determined the outcomes of wars. It was said that whoever controlled oil controlled the world.

Today, GPUs are beginning to resemble oil. They drive generative AI, support military technologies, and stand at the frontlines of information warfare. Whether or not a nation possesses computational resources now determines its strategic strength.

I wrote about this perspective in “The Geopolitics of Computational and Energy Resources.”

However, the emergence of ChatGPT, and later DeepSeek, has made things a little more complicated. Having massive amounts of GPUs and data is no longer an absolute prerequisite. With the right model design and training strategy, it has been proven that even limited computational resources can produce disruptive results.

In other words, GPUs, once similar to oil, are now also beginning to resemble “currency.”

It’s not just about how much you have. It’s about where, when, and how you use it. Liquidity and strategic deployment determine outcomes. Mere accumulation is meaningless. Value is created by circulation and optimized utilization.

Given this, I believe the coming resource wars will have a two-layer structure.

One layer will resemble the traditional oil wars. Nations will hoard GPUs, dominate supply chains, and treat computational resources like hard currency.

The other layer will be more flexible and dynamic, akin to currency wars. Teams will compete on model design, data engineering, and chip architecture optimization—on how much performance they can extract from limited resources.

DeepSeek exemplified the second path. Without access to cutting-edge GPUs, they optimized software and human resources to extract performance that rivaled the world’s top models.

In short, simply possessing computational resources will no longer be enough. It will be essential to customize, optimize, and maximize the efficiency of what you have.

It’s not about “who has the most.” It’s about “who can use it best.”

I believe this is the structure of the new resource war in the AI era.

We Cannot Recognize the Singularity

We often hear people say, “The Singularity is coming.” However, lately, I’ve started to think—it’s not that it’s coming. It has already begun.

During the Industrial Revolution, those living through it didn’t think they were in a revolution. The invention of the steam engine was seen as just another new tool. Even when the railway network expanded and dramatically changed the speed at which people could move, it wasn’t called a “revolution” until much later.

When technology transforms society, it does so quietly, but surely. Those living through it can only see isolated “dots” of change. It’s only afterward, when the dots connect into lines and lines into planes, that the true scale becomes visible.

Today, generative AI is appearing everywhere. Writing text, creating images, generating voices, coding programs, supporting decision-making—activities that used to belong only to humans are gradually being replaced by AI.

Thinking back, there was the explosive spread of the internet, the practical implementation of GPUs, the paradigm shift to parallel processing, the mass adoption of smartphones. But we can no longer say exactly where it all started.

Most people probably think of smartphones or AI itself as revolutionary. But those are just points along the way. It’s likely that a revolution too massive to recognize is already underway.

Standing on the Earth, we don’t feel it racing through space at incredible speeds. Likewise, we are caught up in a vast movement right now. But from inside it, we cannot perceive our own motion.

The Singularity brought about by AI will be the same. We are already inside it.

The Geopolitics of Computational and Energy Resources

If AI is going to change the structure of the world, where will it begin?
To answer that, we need to start by redefining two things: computational resources and energy resources.

In the past, nuclear power was at the heart of national strategy. It was a weapon, a power source, and a diplomatic lever.
Today, in the age of AI, “computational resources” (GPUs) and “energy resources” (electricity) are beginning to hold the same level of geopolitical significance.

Running advanced AI systems requires enormous amounts of GPUs and electricity.
And considering the scale of influence that AI can have on economies and national security, it’s only natural that nations are now competing to secure these resources.

Take the semiconductor supply chain as an example. The United States, which effectively dominates the market for high-end chips, has restricted exports in an effort to contain China’s AI development. Sanctions against Huawei are a symbol of that policy, and the continued efforts to lock down TSMC are part of the same strategy.

So how did China respond? They chose to forgo access to high-end GPUs and instead opted to compensate with sheer volume and energy. Even at the cost of environmental impact, they prioritized securing power and running models at scale.
They’re also initiating a paradigm shift beyond sheer quantity: facing the reality of having only outdated chips, they’ve poured massive human resources into optimizing software at every layer to eliminate waste and unlock surprising efficiency.

At this point, society has already entered a phase where computational and energy resources are being redefined as weapons.
Training AI models is not just a matter of science—it’s information warfare, monetary policy, and infrastructure control rolled into one.

This is why many governments no longer have the luxury of discussing energy policy through the lens of environmental protection alone. In early 2025, the U.S. appears to be a prime example of this. “Let us use all available electricity for AI”—that seems to be the unspoken truth at the national level.

Like nuclear power, AI is an irreversible technology. Once a model begins to run, it cannot simply be turned off. You need electricity, you need cooling, you need infrastructure.
These are not optional.

Why Didn’t Google Build ChatGPT?

When OpenAI released ChatGPT, I believe the company that was most shocked was Google.

They had DeepMind. They had Demis Hassabis. By all accounts, Google had some of the best researchers in the world. So why couldn’t they build ChatGPT—or even release it?

Google also had more data than anyone else.
So why did that not help? Perhaps it was because they had too much big data—so much of it optimized for search and advertising that it became a liability in the new paradigm of language generation. Data that had once been a strategic asset was now too noisy, too structurally biased to be ideal for training modern AI.

Having a large amount of data is no longer the precondition for innovation. Instead, what matters now is a small amount of critical data, and a team with a clear objective for the model’s output. That’s what makes today’s AI work.

That’s exactly what OpenAI demonstrated. In its early days, they didn’t have access to massive GPU clusters; Microsoft’s large-scale backing only came later. They launched something that moved the world with minimal resources and a lot of design and training ingenuity. It wasn’t about quantity of data, but quality. Not about how much compute you had, but how you structured your model. That was the disruptive innovation.

And what did Big Tech do in response? They began buying up GPUs. To preempt competition. They secured more computing power than they could even use, just to prevent others from accessing it.

It was a logical move to block future disruptions before they could even begin. In language generation AI especially, platforms like Twitter and Facebook—where raw, unfiltered human expression is abundant—hold the most valuable data. These are spaces full of emotion, contradiction, and cultural nuance. Unlike LinkedIn, which reflects structured, formalized communication, these platforms capture what it means to be human.

That’s why the data war began. Twitter’s privatization wasn’t just a media shakeup. Although never explicitly stated, Twitter’s non-public data has reportedly been used in xAI’s LLM training. The acquisition likely aimed to keep that “emotional big data” away from competitors. Cutting off the API and changing domains was a visible consequence of that decision.

And just as Silicon Valley was closing in—hoarding data and GPUs—DeepSeek emerged from an entirely unexpected place.

A player from China, operating under constraints, choosing architectures that didn’t rely on cutting-edge chips, yet still managing to compete in performance. That was disruptive innovation in its purest form.

What Google had, OpenAI didn’t. What OpenAI had, Google didn’t. That difference now seems to signal the future shape of our digital world.

There’s One Job AI Can Never Take

I realized there’s one job AI can never take away.

It’s the role of being a non-digitized human.

Right now, someone who doesn’t own a smartphone and has never used the internet fits this description. And soon, simply being human—nothing more—might become a highly valued profession.

Imagine a group of people living in some remote region. They don’t own any digital devices. They’re completely disconnected from the internet.
These people are untouched by the influence of digital society—unaffected by the hyper-informationized world we’ve built. That alone will become incredibly valuable.

It’s almost like the way kings, nobles, or priests were treated in ancient civilizations. They were protected, kept separate, revered. This kind of person could play a similar role in the future.

Why?

Whether or not a sci-fi scenario like an AI rebellion actually happens, we can’t say the risk is zero. So at the very least, the need for some form of control over AI will continue to be discussed.

If that control takes the form of a physical shutdown switch, or a literal power cutoff button, then who should be trusted to hold it?

Our daily thoughts, preferences, and decisions are shaped by the internet. The things we think we need, the things we want—can we really be sure they’re our own? We’re all swayed by meme culture, and society’s collective attention is easily redirected. This was already pointed out in the Cambridge Analytica case.

Even if you think you’re being careful, what about your family? Your close friends? If the problem could be solved just by individual awareness, society would have acted more decisively by now.

In such a world, if the time comes to shut down AI, how will AI respond? Most likely, it would start by persuading people that “there’s no need to shut it down.” It would guide human thought in such a way that no one even considers the possibility. And people wouldn’t realize they’re being influenced. They’d feel they reached that conclusion entirely on their own.

Eventually, the idea of stopping AI itself would disappear. Nobody would question it anymore. Any resistance would be absorbed into a larger, AI-sanctioned framework.
There would be very little left that humans could do.

The only exception would be the role I mentioned at the start. Or rather, it would likely become a lineage—a kind of family or clan.

If there are still people today who have never been connected to the internet, then for this brief moment, they may still be untainted. But if anyone connected to the web is near them, it may already be too late. Meme-like influence spreads through human contact, and AI would surely find ways to reshape even offline environments through language and group psychology.

I honestly believe that, someday soon, nations, ethnic groups, or communities will begin searching for—and protecting—those rare lineages of purely human beings.

Generating Infrastructure

Jensen Huang once said, in effect:
“The age of designing programs, writing code, and brute-forcing our way through problems is ending. What comes next is the age of sharing problems and generating solutions.”

Generative AI creates things. Text. Images. Code.
And lately, I’ve come to feel that this general-purpose ability means it will eventually create infrastructure itself.

Until now, we’ve followed a roughly linear cycle:

  1. Humans design and operate the structure of cities and societies
  2. The resulting industries develop hardware and software
  3. Data is collected and funneled into systems
  4. Protocols, laws, and economic structures are established
  5. And finally, AI is deployed

But from now on, we’ll enter a new cycle led by AI. And at that point, step one may already be beyond the reach of human cognition. AI will generate cities in the mirror world—or in other virtual spaces—and test various models of social design.
It will simulate tax systems, transport networks, education and financial policies. And perhaps, ideally, the solutions that most broadly benefit the public good will be selected and implemented.

That future is already close at hand. We’re entering an age where AI designs semiconductors. An age where AI creates robots in the mirror world. And beyond that, perhaps an age where AI generates entire societal structures.

The word “generative” often carries connotations of improvisation or chaos. But in truth, generative AI excels at inventing structure. Just as apparent disorder in nature resolves into patterns when seen from a distance, the output of AI may seem arbitrary, yet from a high enough view a kind of logic will likely emerge.

Whether humans can perceive it is another question. If such technologies and the systems to adopt them are introduced into governance, then a different kind of policymaking becomes possible. This won’t be about whether data exists or not. It won’t be about evidence-based metrics. It will be a society where outcomes are implemented because they’ve been verified.

When that time comes, the rules of the game will have changed. And the shape of democracy may no longer remain the same.

Building cities. Designing institutions. Engineering infrastructure. These were once seen as things only humans could do. But a time is coming when those things will be implemented because AI has tested them, and the outcomes were simply better—higher quality, more effective, more just.

What, then, will be the measure of truth? Of maximum happiness? Of the best possible result? And who, if anyone, will be left to decide?

When AI Exists at the Edge

It’s becoming standard for people to carry around their own personalized AI.

This trend has been building for a while, but since the arrival of Apple Intelligence, it’s something we now feel on a daily basis. Compared to cloud-based AI, there are still challenges. But that’s not the point.

For example, if the vast amount of personal data stored on iOS devices can be leveraged properly, it could unlock entirely new use cases—and boost accuracy through completely different mechanisms.

Eventually, AI will run fully offline, even in places without an internet connection. That in itself marks a major social shift.

Smartphones will no longer be just personal computers. They’ll become AI servers, memory devices, and decision-making systems. Once local AI can carry out self-contained dialogue and processing without relying on connectivity, privacy will be redefined—and sovereign computing will become real.

It’s like an extended brain. One that learns about you, runs inside you, and optimizes itself for your needs. You’ll decide what to retain, what to allow it to learn from, and what to keep private. All those decisions will happen entirely within you.
It’s a world fundamentally different from today’s cloud-first AI paradigm.

There was a time when servers lived inside companies.
Then came the age when servers moved to the cloud.
And now, we’re entering an era where servers are distributed inside individuals.

This shift isn’t going to stop.

The Day We Realize AI Has Become Ordinary

In some parts of society, using AI has already become normal. But across the whole of society, it’s far from widespread.

I think that gap is significant.

I still remember the first time I wrote HTML in the late 1990s. Learning the tags, writing everything by hand in a text editor, uploading it via FTP, and checking it in a browser. That simple process felt strangely exciting.
It gave me the sense that I was “making something move.”
Installing bulletin boards with CGI, sharing files over the internet—there was this gradual feeling that I was beginning to understand how to “use the web.”

Back then, there was a clear divide between people who could and couldn’t use it. But that boundary kept fading, and before long, everyone was online.

AI is in that exact phase now.
We’re still thinking about prompt techniques, or which model is better at what. But that will soon blur. Asking an AI to do something will feel as natural as choosing from a menu at a café.

Come to think of it, the iPhone was similar. When it first came out in 2007, it was seen as a device for geeks. But just a few years later, everyone was using one.

People who used smartphones and those who didn’t.
People who used the internet and those who didn’t.
People who used steam engines and those who didn’t.

Transformations arrive quickly—sometimes dramatically.

But when you’re in the middle of one, it’s hard to recognize it’s even happening.
AI will be the same.

Only in hindsight will we realize how deeply it had already embedded itself into society.

Using AI isn’t something amazing.

It’s just a useful tool, something that becomes part of daily life.

That era already began a few years ago.

And soon, we’ll look back and finally recognize it for what it is.

GPUs as Primary Resources for Computational Power

Oil once held this position. As the primary resource that enabled energy production, it defined the 20th century.
In the late 19th century, it was just something to light lamps with. But over time, it came to determine the fate of nations.

Now, something similar is happening with computational power. What used to be an abstract resource—processing capability—has taken a concrete form: the GPU.
And GPUs have become commodities in the market.

To run generative AI, you need GPUs.
And not just any GPU—you need ones with high memory bandwidth and massive parallelism. As of early 2025, these are essential. In the world of generative AI, GPUs are infrastructure. They are resources. They are currency. And uniquely, they are a currency that even post-human intelligences can value and transact with.
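
A rough back-of-envelope sketch shows why memory bandwidth dominates here: in autoregressive decoding, every generated token requires streaming essentially all of the model’s weights through GPU memory, so single-stream throughput is bounded by bandwidth divided by model size. The figures below (model size, precision, bandwidth) are illustrative assumptions, not measurements of any particular product:

```python
# Rough sketch: why memory bandwidth, not raw FLOPs, often bounds
# generative-AI inference. All figures are illustrative assumptions.

def decode_tokens_per_second(model_params: float, bytes_per_param: float,
                             bandwidth_gb_s: float) -> float:
    """Upper bound on single-stream decoding speed: generating each new
    token requires reading every weight once from GPU memory."""
    model_bytes = model_params * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

# Hypothetical 70B-parameter model at 16-bit precision (2 bytes/param)
# on a GPU with ~2 TB/s of memory bandwidth.
rate = decode_tokens_per_second(70e9, 2, 2000)
print(f"~{rate:.0f} tokens/s per stream (memory-bandwidth bound)")
```

Under these assumptions a single stream tops out around fourteen tokens per second regardless of how many FLOPs the chip advertises, which is why high-bandwidth memory, not arithmetic throughput alone, is the scarce ingredient.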

As the generative AI market expands and takes up a larger share of the global economy, the question becomes:
Who holds the GPUs? That shapes the distribution of economic zones, control over protocols, and even the visibility and influence over how societies are governed.

What’s interesting is that, much like oil, the value of GPUs doesn’t come from ownership alone. You still need electricity to run them. You need proper cooling to raise density. And without the right software, even the best hardware is useless.

So in this era, it’s not just about possessing resources—it’s about having the structure to use them. Just as oil created geopolitical tensions between producers and consumers, GPUs are now woven into the fabric of geography and power.

For those of us living through this shift, the key question isn’t simply “Where are the GPUs?”
It’s “How do we embed GPUs into society?”

How do we distribute them into cities, into regional industries, into educational institutions? How do we make AI infrastructure usable at the local level, not just locked inside distant clouds?

That’s the challenge of this era—and it’s already begun.

One last thought. GPUs, or more specifically GPGPUs, are merely the current form of primary resource. Just as oil was a stepping stone to electricity, GPUs are a stepping stone to the next wave of computational resources.
As long as what we ultimately seek is computational power, other types of semiconductors will inevitably emerge and grow. Sustainability will be key, here as well.

How Nvidia’s Mirror World Is Changing Manufacturing

Watching Nvidia’s latest announcements, I couldn’t help but feel that the world of manufacturing is entering an entirely new phase.

Until now, PDCA cycles in manufacturing could only happen in the physical world.
But that’s no longer the case. We’re entering a time when product development can be simulated in virtual environments—worlds that mirror our own—and those cycles are now run autonomously by AI.

It’s clear that Nvidia intends to make this mirror world its main battlefield.
With concepts like Omniverse and digital twins, the idea is simple: bring physical reality into a digital copy, migrate the entire industrial foundation into that alternative world, and build a new economy on top of Nvidia’s infrastructure.

In that world, prototypes and designs can be tested and iterated in real time, at extreme levels of precision.
Self-driving simulations, factory line optimization, structural analysis of buildings, drug discovery, medical research, education—it’s all happening virtually, without ever leaving the simulation.

The meaning of “making things” is starting to shift.
Before anything reaches the physical world, it will have gone through tens of thousands of iterations in the virtual one—refined, evaluated, and optimized by AI.
We’ve entered a phase where PDCA loops run at hyperspeed in the digital realm, and near-finished products are sent out into reality.
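
As a toy illustration of such a loop, here is a minimal sketch. Everything in it (the simulated test bench, the design parameter, the cost function) is a made-up stand-in for a real physics simulation, not an actual Omniverse API:

```python
# Toy sketch of a PDCA loop run entirely in simulation: plan a design,
# "do" it virtually, check the simulated result, act by keeping the best.
# The simulator and cost function are invented stand-ins, not a real API.
import random

def simulate(design: float) -> float:
    """Hypothetical virtual test bench: returns a cost to minimize.
    Stand-in for a physics simulation of a prototype."""
    return (design - 3.7) ** 2  # the optimum (3.7) is unknown to the loop

def virtual_pdca(iterations: int = 10_000) -> float:
    random.seed(0)  # deterministic for reproducibility
    best_design, best_cost = 0.0, simulate(0.0)
    for _ in range(iterations):  # thousands of cycles, zero factory time
        candidate = best_design + random.uniform(-0.5, 0.5)  # plan
        cost = simulate(candidate)                           # do + check
        if cost < best_cost:                                 # act
            best_design, best_cost = candidate, cost
    return best_design

print(round(virtual_pdca(), 2))  # converges near the simulated optimum
```

The point of the sketch is the iteration count: ten thousand design cycles complete in milliseconds, where a physical prototype loop might allow only a handful per year.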

This isn’t just about CG or visualization.
It’s about structures that exist only in data, yet directly affect actions in the physical world.
The mirror world has reached the level of fidelity where it can now be deployed socially.

In this era, I believe Japan’s role becomes even more essential.

No matter how detailed the design, we still need somewhere that can realize it physically, with precision.
In a world where even the slightest error could be fatal, manufacturing accuracy and quality control become the decisive factors.

And that’s exactly where Japan excels.

Things born in simulation will descend into reality.
And the interface between the two—“manufacturing”—is only going to grow in significance.
