The Truth Is, “AI Uses Humans”

I’ve long believed that AI would enrich society as a whole.
But lately, I’ve started to feel that discrepancies in how we perceive AI are creating new kinds of dissonance—misalignments that feel, in some ways, like unhappiness.

To clarify: this so-called “unhappiness” is merely a projection from those of us who benefit from AI.
No one is actually a victim here.
It’s just that people involved with AI interpret the situation that way—perhaps arrogantly.

In May 2025, I experienced something that made this clearer.
Even as understanding of AI is spreading, there are still a significant number of people—surprisingly, even in positions of leadership—who seem to have given up on understanding it entirely.
Widen the lens a little, and it might even be the majority.

Some dismiss AI as “still not accurate enough.”
But I believe that misunderstanding stems from having a very low-resolution mental model of what AI is.
If you expect AI to handle everything for you, of course it’ll seem like it can’t do much.
But many modular tasks in society—units of human action—can already be performed by AI more precisely than by humans.

There are also those who lack the concept of giving instructions.
They’ve likely never experienced how dramatically results change when AI is given clear, high-quality input.
In human-to-human communication, vague requests like “take care of this” often work because of shared context.
But with AI, that kind of ambiguity fails.
To then judge the AI as “useless” is really a failure in interface design.

Another issue is the narrowness of perspective.
If you judge AI based solely on the Japanese language environment or Japan’s current digital infrastructure, your reading of the technology will be dangerously off.
From within such a “Galápagos” context, it’s impossible to perceive global-scale changes accurately.

But what surprised me most was just how many people still think of AI as something “humans use.”
There’s this vague belief that “if everyone starts using AI, society will improve.”
And to that, I feel a deep disconnect.

Let me use an example.

Right now, if someone wants to get somewhere, the process looks like this:

  1. Decide on a destination
  2. Search for it in a map app
  3. Choose a method of transportation
  4. Understand the route and prepare
  5. Follow navigation to get there

If AI is involved, the process changes to:

  1. Tell AI the purpose of the trip
  2. Choose from its suggestions
  3. Follow navigation

This is what a society looks like when “humans use AI.”
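To make that collapse of the interface concrete, here is a toy sketch in Python. Everything in it is hypothetical: the function names, the canned suggestions, and the “navigation” strings are stand-ins for real services, not any actual API.

  def plan_trip_human_driven(destination: str, transport: str) -> list[str]:
      # "Humans use AI": the person decides the destination, searches,
      # and picks the transport; software only answers each sub-question.
      return [f"Route to {destination} by {transport}", "Follow turn-by-turn steps"]

  def plan_trip_ai_driven(purpose: str) -> list[str]:
      # The intermediate phase: the person states only the purpose,
      # the system proposes, the person picks, navigation follows.
      suggestions = {
          "dinner with a friend": "quiet izakaya, 15 minutes by train",
          "client meeting": "coworking space near the station",
      }
      choice = suggestions.get(purpose, "a nearby option matched to your intent")
      return [f"Suggested: {choice}", "Follow navigation"]

  if __name__ == "__main__":
      print(plan_trip_human_driven("Shibuya", "train"))   # five human decisions
      print(plan_trip_ai_driven("dinner with a friend"))  # one human decision

The phase after that has no function signature at all: there is no call site, because there is no moment of asking.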

But in the next phase, we may need to design society under the premise that “AI uses humans.”
In that world, the process might look like this:

  1. The goal is achieved—without the person ever realizing it

There would be no conscious act of deciding to move.
If movement is needed, it simply happens.
Self-driving vehicles, remote communications, visual technologies, or even AI-mediated decision inputs could lead a person to action—before they ever formulate the desire themselves.

That kind of future may still be distant.
But even in the near term, think about how long the act of “searching for a restaurant and checking the route” will remain.
With AI handling logistics, navigation, traffic control, vehicle design—it’ll all be quietly optimized away.

And when that happens, the average person won’t even realize they’re “using AI.”
They’ll just feel that life got more convenient.
They’ll say, “How did we ever do this before?”
Just like we do now with smartphones.

AI-driven optimization will rapidly permeate our infrastructure.
Only a tiny number of people will be directly involved in that transformation.
It’ll happen far faster than traditional methods ever could.
Entire industries will shift.
Most people will simply be beneficiaries of the change—and only notice it long after it’s already taken hold.

The idea that “humans use AI” is no longer enough.
From now on, our decision-making must be based on the premise that “AI is using humans.”

And I, someone who advocates for AI, who is deeply invested in its growth,
I too have had my thinking shaped by it.
I benefit from it.
And that drives me forward.

But I found myself asking—

Is that really my own will?

Data Overpayment

For the past 20 years or so, we have become too accustomed to the idea of a “free internet.” Search engines, social media, email, maps, translation services—all of it seemed free. Or at least, it felt that way.

But in reality, nothing was ever free.

We were paying not with money, but with data. Names, interests, location history, purchase records, sleep patterns, social connections, facial images—all of it was handed over as “payment” to sustain business models.

The problem is that we paid more than we needed to.

Did using a map really require disclosing our family structure? Did translation apps really need to track our location history? We never truly scrutinized how much information was being asked of us—or whether it was justifiable.

Worse, we no longer remember what we gave away.

This may, at some point, become a social issue we recognize as “data overpayment.”

Data overpayment is not a single event or sudden incident.
It is the slow accumulation of loss over years, even decades.
By the time we notice, we may no longer know what we’ve already lost.

But as we enter the age of AI, that structure is starting to shift. A new set of questions is emerging around our personal data—how it is used in model training, who has control over it, where it’s recorded, and how transparent that process can be.

What if we could know where our data is used? What if we could choose how it is used? What if we had the right to retract the data we shared in the past?
If that were possible, then the economic, legal, and ethical meaning of “data” itself would be dramatically redefined.

Data is not something to be sold off; it is something to be licensed for use.
Data remains owned: not traded away, but governed through conditions.
If this perspective becomes more widely accepted, we might finally begin to correct the overpayment that has built up over the past two decades.
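As a thought experiment, here is a minimal sketch of what “licensed, not sold” could look like in code. The class, its fields, and the conditions are all hypothetical illustrations, not an existing standard or legal framework.

  from dataclasses import dataclass
  from datetime import date

  @dataclass
  class DataLicense:
      owner: str                # the person the data belongs to
      data_kind: str            # e.g. "location_history"
      licensee: str             # who may use it
      permitted_uses: set[str]  # e.g. {"navigation"}, not "ad_targeting"
      expires: date             # licenses end; sales don't
      revoked: bool = False

      def allows(self, use: str, on: date) -> bool:
          # Usage is governed by conditions, and can be withdrawn.
          return (not self.revoked) and use in self.permitted_uses and on <= self.expires

  lic = DataLicense("alice", "location_history", "map_service",
                    {"navigation"}, expires=date(2026, 12, 31))
  print(lic.allows("navigation", date(2026, 1, 1)))    # True
  lic.revoked = True                                   # the right to retract
  print(lic.allows("navigation", date(2026, 1, 1)))    # False

The point of the sketch is the last two lines: under a license model, retraction is an operation the owner can perform, not a favor the platform grants.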

It’s time we start treating our data as something that truly belongs to us.

Balancing Privacy and AI

The cloud is convenient. But more and more people are beginning to feel a quiet discomfort with entrusting everything to it.

Information is stored, utilized, linked, and predicted. Our behaviors, emotions, preferences, and relationships are being processed in places we can’t see. This unease is no longer limited to a tech-savvy minority.

So what can we do to protect privacy in a world like this?
One possible answer, I believe, is to bring AI down from the cloud.

As we can see in Apple’s recent strategic direction, AI is shifting from the cloud to the device itself. Inside the iPhone, inside the Mac, AI knows you, processes your data, and, crucially, doesn’t send it out.

When computing power resides locally and your data stays in your hands, a new kind of architecture emerges, one grounded in trust rather than mere convenience. This is how an AI environment safer than the cloud can take shape.
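As a small illustration of this architecture, here is roughly what on-device inference can look like today. This sketch assumes the open-source llama-cpp-python package and a locally stored GGUF model file; the model path and the prompt are placeholders. The point is architectural: the weights and the personal context live on your disk, and nothing in this flow opens a connection to a cloud service.

  # Assumes: pip install llama-cpp-python, plus a GGUF model file on disk.
  from llama_cpp import Llama

  # The model weights are loaded from local storage; no network involved.
  llm = Llama(model_path="./models/local-model.gguf", n_ctx=2048)

  # Personal context is passed to a local process, not uploaded to an API.
  personal_note = "Dentist prefers mornings; reschedule for next week."
  out = llm(
      f"Summarize this note in one sentence: {personal_note}",
      max_tokens=64,
  )
  print(out["choices"][0]["text"])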

In this context, the question of “Where does AI run?” becomes more than just a technical choice. It evolves into a political and ethical question: “Who holds your data?” and equally important, “Who does not see it?”

This shift opens the door to new architectural possibilities.
When individuals hold their own data and run their own models locally, we create a form of AI that operates with a completely different risk structure and trust model than large-scale cloud systems.

In an era where the cloud itself is starting to feel “uncomfortable,” the real question becomes: Where is computation happening, and who is it working for?

Protecting privacy doesn’t mean restricting AI from using personal information.
It means enabling usage—without giving it away. That’s a design problem.

AI and privacy can coexist.
But that coexistence will not be found in the cloud.
It will be realized through a rediscovery of the local—through the edge.

The Concept of Distributed National Infrastructure

Until now, national infrastructure was something centrally managed and deployed across an entire country. Power plants, communication networks, water systems, roads, and data centers—all followed a model of “build in one place, use everywhere.” It was the nation that built, protected, and supplied these systems.

But that structure is slowly starting to change.

As portions of information infrastructure and computational resources come to be operated by specific tech giants, the infrastructure that once sat beneath the authority of the state is beginning to form a structure parallel to it. And what’s coming next is a shift away from centralization—toward a physical and logical model of “distribution.”

Distribution doesn’t simply mean breaking things into smaller parts. It means separating locations, ownership, control rights, power sources, and networks. It means running each independently, while allowing them to function together as a single system. That, to me, is the core of what “distributed national infrastructure” means.
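A toy model may help fix the idea. In the sketch below, every name and value is hypothetical; the only point is that location, ownership, control, power, and network vary independently per node, and the composite system degrades gracefully when any one of them fails.

  from dataclasses import dataclass

  @dataclass
  class InfraNode:
      location: str
      owner: str         # who owns the hardware
      controller: str    # who holds operational control (may differ from owner)
      power_source: str  # e.g. "off-grid solar"
      network: str       # e.g. "mesh", "satellite"
      online: bool = True

  def capacity(nodes: list[InfraNode]) -> float:
      # No single owner, jurisdiction, or power source can take
      # the whole system down; it degrades instead of failing.
      return sum(1 for n in nodes if n.online) / len(nodes)

  grid = [
      InfraNode("Osaka", "co-op A", "co-op A", "grid", "fiber"),
      InfraNode("Nagano", "firm B", "municipality", "off-grid solar", "mesh"),
      InfraNode("offshore", "DAO C", "token holders", "wind", "satellite"),
  ]
  grid[0].online = False  # one location, one jurisdiction fails
  print(f"capacity remaining: {capacity(grid):.0%}")  # 67%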

This kind of structure is often discussed in terms of redundancy in disasters or risk dispersion in geopolitics. But more importantly, I believe it becomes critical when we begin asking, “Under whose sovereignty does this infrastructure operate?”

Picture entities that belong to no central authority, yet perform social functions equal to or greater than national infrastructure. Cloud services, blockchain networks, local compute clusters, off-grid energy systems: combined, these create a new kind of infrastructure that transcends borders and legal systems.

Whether this becomes something that replaces the nation-state, or something that complements it, remains to be seen. But what’s clear is that infrastructure is no longer something exclusive to states.

Perhaps we are entering an era where infrastructure is not something built by the state, but something into which the state must now merge—beyond the constraints of geography and the linear flow of time.

Nations Beyond Big Tech

Corporations Backed by States and the Proxy Wars Driven by AI

A nation has traditionally been defined as an entity with territory, a population, a military, currency, and the right to conduct diplomacy. But that structure is quietly beginning to change.

Today, corporations are shouldering the roles of states and beginning to surpass them. The leading technology giants, from the Big Tech platforms to Tesla, backed by national governments, wield their financial power, computational resources, and information infrastructure to influence international society.

Corporations do not hold territory, but they control infrastructure. They do not have citizens, but they have users. They do not command armies, but they possess cyber capabilities and information dominance. They do not issue national currencies, but they have built their own economic spheres. They do not formally conduct diplomacy, but they negotiate across national borders.

In many ways, corporations are beginning to replace the traditional functions that states once held.

Moreover, corporations now receive direct energy and financial support from governments. As if endorsed by national policies to “use as much electricity as needed,” they are hoarding computational resources, developing AI, and expanding their power to control the very foundations of society.

The advancement of AI is accelerating this trend even further. Corporations that can control AI will dominate the information space. And those who dominate the information space will inevitably control the physical world.

The wars of the future will no longer be fought with military force.

We have entered an era of proxy wars between states, fought through corporations.

And ultimately, it may not be states that win—but corporations. States are becoming increasingly dependent on corporations, while corporations are beginning to use states as tools. And when the next dominant entity emerges, will we even be able to recognize it?

The Structure of the New Resource War

In the era of AI, what is the most valuable resource? When I think about it, the first things that come to mind are “GPUs” and “data.”

At the same time, the idea that these are important resources has already become common sense. The real question is what kind of resources they are.

In the industrial age, oil was the resource that moved nations. It supported industrial production, transportation, and even determined the outcomes of wars. It was said that whoever controlled oil controlled the world.

Today, GPUs are beginning to resemble oil. They drive generative AI, support military technologies, and stand at the frontlines of information warfare. Whether or not a nation possesses computational resources now determines its strategic strength.

I wrote about this perspective in “The Geopolitics of Computational and Energy Resources.”

However, the emergence of ChatGPT, and later DeepSeek, has made things a little more complicated. Having massive amounts of GPUs and data is no longer an absolute prerequisite. With the right model design and training strategy, it has been proven that even limited computational resources can produce disruptive results.

In other words, GPUs, once similar to oil, are now also beginning to resemble “currency.”

It’s not just about how much you have. It’s about where, when, and how you use it. Liquidity and strategic deployment determine outcomes. Mere accumulation is meaningless. Value is created by circulation and optimized utilization.

Given this, I believe the coming resource wars will have a two-layer structure.

One layer will resemble the traditional oil wars. Nations will hoard GPUs, dominate supply chains, and treat computational resources like hard currency.

The other layer will be more flexible and dynamic, akin to currency wars. Teams will compete on model design, data engineering, and chip architecture optimization—on how much performance they can extract from limited resources.

DeepSeek exemplified the second path. In an environment without access to cutting-edge GPUs, they poured engineering effort into optimizing software and deploying talent, extracting performance that rivals the world’s top models.

In short, simply possessing computational resources will no longer be enough. It will be essential to customize, optimize, and maximize the efficiency of what you have.

It’s not about “who has the most.” It’s about “who can use it best.”

I believe this is the structure of the new resource war in the AI era.

We Cannot Recognize the Singularity

We often hear people say, “The Singularity is coming.” However, lately, I’ve started to think—it’s not that it’s coming. It has already begun.

During the Industrial Revolution, those living through it didn’t think they were in a revolution. The invention of the steam engine was seen as just another new tool. Even when the railway network expanded and dramatically changed the speed at which people could move, it wasn’t called a “revolution” until much later.

When technology transforms society, it does so quietly, but surely. Those living through it can only see isolated “dots” of change. It’s only afterward, when the dots connect into lines and lines into planes, that the true scale becomes visible.

Today, generative AI is appearing everywhere. Writing text, creating images, generating voices, coding programs, supporting decision-making—activities that used to belong only to humans are gradually being replaced by AI.

Thinking back, there was the explosive spread of the internet, the practical implementation of GPUs, the paradigm shift to parallel processing, the mass adoption of smartphones. But we can no longer say exactly where it all started.

Most people probably think of smartphones or AI itself as revolutionary. But those are just points along the way. It’s likely that a revolution too massive to recognize is already underway.

Standing on the Earth, we don’t feel it racing through space at incredible speeds. Likewise, we are caught up in a vast movement right now. But from inside it, we cannot perceive our own motion.

The Singularity brought about by AI will be the same. We are already inside it.

The Geopolitics of Computational and Energy Resources

If AI is going to change the structure of the world, where will it begin?
To answer that, we need to start by redefining two things: computational resources and energy resources.

In the past, nuclear power was at the heart of national strategy. It was a weapon, a power source, and a diplomatic lever.
Today, in the age of AI, “computational resources” (GPUs) and “energy resources” (electricity) are beginning to hold the same level of geopolitical significance.

Running advanced AI systems requires enormous amounts of GPUs and electricity.
And considering the scale of influence that AI can have on economies and national security, it’s only natural that nations are now competing to secure these resources.

Take the semiconductor supply chain as an example. The United States, which effectively dominates the market for high-end chips, has restricted exports in an effort to contain China’s AI development. Sanctions against Huawei are a symbol of that policy, and the continued efforts to lock down TSMC are part of the same strategy.

So how did China respond? Cut off from high-end GPUs, they opted to compensate with sheer volume and energy. Even at the cost of environmental impact, they prioritized securing power and running models at scale.
They are also driving a paradigm shift born of that constraint: facing the reality of having only older chips, they’ve poured massive human effort into optimizing software at every layer to eliminate waste and unlock surprising efficiency.

At this point, society has already entered a phase where computational and energy resources are being redefined as weapons.
Training AI models is not just a matter of science—it’s information warfare, monetary policy, and infrastructure control rolled into one.

This is why many governments no longer have the luxury of discussing energy policy through the lens of environmental protection alone. In early 2025, the U.S. appears to be a prime example of this. “Let us use all available electricity for AI”—that seems to be the unspoken truth at the national level.

Like nuclear power, AI is an irreversible technology. Once a model begins to run, it cannot simply be turned off. You need electricity, you need cooling, you need infrastructure.
These are not optional.

Why Didn’t Google Build ChatGPT?

When OpenAI released ChatGPT, I believe the company that was most shocked was Google.

They had DeepMind. They had Demis Hassabis. By all accounts, Google had some of the best researchers in the world. So why couldn’t they build ChatGPT—or even release it?

Google also had more data than anyone else.
So why did that not help? Perhaps it was because they had too much big data—so much of it optimized for search and advertising that it became a liability in the new paradigm of language generation. Data that had once been a strategic asset was now too noisy, too structurally biased to be ideal for training modern AI.

Having a large amount of data is no longer a precondition for innovation. Instead, what matters now is a small amount of critical data, and a team with a clear objective for the model’s output. That’s what makes today’s AI work.

That’s exactly what OpenAI demonstrated. In its early days, they didn’t have access to massive GPU clusters; the multibillion-dollar scale of the Microsoft partnership came only later. They launched something that moved the world with minimal resources and a great deal of design and training ingenuity. It wasn’t about quantity of data, but quality. Not about how much compute you had, but how you structured your model. That was the disruptive innovation.

And what did Big Tech do in response? They began buying up GPUs. To preempt competition. They secured more computing power than they could even use, just to prevent others from accessing it.

It was a logical move to block future disruptions before they could even begin. In language generation AI especially, platforms like Twitter and Facebook—where raw, unfiltered human expression is abundant—hold the most valuable data. These are spaces full of emotion, contradiction, and cultural nuance. Unlike LinkedIn, which reflects structured, formalized communication, these platforms capture what it means to be human.

That’s why the data war began. Twitter’s privatization wasn’t just a media shakeup. Although never explicitly stated, Twitter’s non-public data has reportedly been used in xAI’s LLM training. The acquisition likely aimed to keep that “emotional big data” away from competitors. Cutting off the API and changing the domain were visible consequences of that decision.

And just as Silicon Valley was closing in—hoarding data and GPUs—DeepSeek emerged from an entirely unexpected place.

A player from China, operating under constraints, choosing architectures that didn’t rely on cutting-edge chips, yet still managing to compete in performance. That was disruptive innovation in its purest form.

What Google had, OpenAI didn’t. What OpenAI had, Google didn’t. That difference now seems to signal the future shape of our digital world.

There’s One Job AI Can Never Take

I realized there’s one job AI can never take away.

It’s the role of being a non-digitized human.

Right now, someone who doesn’t own a smartphone and has never used the internet fits this description. And soon, simply being human—nothing more—might become a highly valued profession.

Imagine a group of people living in some remote region. They don’t own any digital devices. They’re completely disconnected from the internet.
These people are untouched by the influence of digital society—unaffected by the hyper-informationized world we’ve built. That alone will become incredibly valuable.

It’s almost like the way kings, nobles, or priests were treated in ancient civilizations. They were protected, kept separate, revered. This kind of person could play a similar role in the future.

Why?

Whether or not a sci-fi scenario like an AI rebellion actually happens, we can’t say the risk is zero. So at the very least, the need for some form of control over AI will continue to be discussed.

If that control takes the form of a physical shutdown switch, or a literal power cutoff button, then who should be trusted to hold it?

Our daily thoughts, preferences, and decisions are shaped by the internet. The things we think we need, the things we want—can we really be sure they’re our own? We’re all swayed by meme culture, and society’s collective attention is easily redirected. This was already pointed out in the Cambridge Analytica case.

Even if you think you’re being careful, what about your family? Your close friends? If the problem could be solved just by individual awareness, society would have acted more decisively by now.

In such a world, if the time comes to shut down AI, how will AI respond? Most likely, it would start by persuading people that “there’s no need to shut it down.” It would guide human thought in such a way that no one even considers the possibility. And people wouldn’t realize they’re being influenced. They’d feel they reached that conclusion entirely on their own.

Eventually, the idea of stopping AI itself would disappear. Nobody would question it anymore. Any resistance would be absorbed into a larger, AI-sanctioned framework.
There would be very little left that humans could do.

The only exception would be the role I mentioned at the start. Or rather, it would likely become a lineage—a kind of family or clan.

If there are still people today who have never been connected to the internet, then for this brief moment, they may still be untainted. But if anyone connected to the web is near them, it may already be too late. Meme-like influence spreads through human contact, and AI would surely find ways to reshape even offline environments through language and group psychology.

I honestly believe that, someday soon, nations, ethnic groups, or communities will begin searching for—and protecting—those rare lineages of purely human beings.
