Generating Infrastructure

Jensen Huang once said:
"The age of designing programs, writing code, and brute-forcing our way through problems is ending. What comes next is the age of sharing problems and generating solutions."

Generative AI creates things. Text. Images. Code.
And lately, I’ve come to feel that this general-purpose ability means it will eventually create infrastructure itself.

Until now, we’ve followed a roughly linear cycle:

  1. Humans design and operate the structure of cities and societies
  2. The resulting industries develop hardware and software
  3. Data is collected and funneled into systems
  4. Protocols, laws, and economic structures are established
  5. And finally, AI is deployed

But from now on, we’ll enter a new cycle led by AI. And at that point, step one may already be beyond the reach of human cognition. AI will generate cities in the mirror world—or in other virtual spaces—and test various models of social design.
It will simulate tax systems, transport networks, education and financial policies. And perhaps, ideally, the solutions that most broadly benefit the public good will be selected and implemented.

That future is already close at hand. We’re entering an age where AI designs semiconductors. An age where AI creates robots in the mirror world. And beyond that, perhaps an age where AI generates entire societal structures.

The word “generative” often carries connotations of improvisation or chaos. But in truth, generative AI excels at inventing structure, just as apparent disorder in nature resolves into patterns when seen from a distance. The output of AI may seem arbitrary, but from a high enough vantage point, a kind of logic will likely emerge.

Whether humans can perceive it is another question. If such technologies, and the systems to adopt them, are introduced into governance, a different kind of policymaking becomes possible. It will no longer hinge on whether data exists, or on arguing over evidence-based metrics. Outcomes will be implemented because they have already been verified in simulation.

When that time comes, the rules of the game will have changed. And the shape of democracy may no longer remain the same.

Building cities. Designing institutions. Engineering infrastructure. These were once seen as things only humans could do. But a time is coming when those things will be implemented because AI has tested them, and the outcomes were simply better—higher quality, more effective, more just.

What, then, will be the measure of truth? Of maximum happiness? Of the best possible result? And who, if anyone, will be left to decide?

When AI Exists at the Edge

It’s becoming standard for people to carry around their own personalized AI.

This trend has been building for a while, but since the arrival of Apple Intelligence, it’s something we now feel on a daily basis. Compared to cloud-based AI, there are still challenges. But that’s not the point.

For example, if the vast amount of personal data stored on iOS devices can be leveraged properly, it could unlock entirely new use cases and improve accuracy through mechanisms unavailable to the cloud, because the data never has to leave the device.

Eventually, AI will run fully offline, even in places without an internet connection. That in itself marks a major social shift.

Smartphones will no longer be just personal computers. They’ll become AI servers, memory devices, and decision-making systems. Once local AI can carry out self-contained dialogue and processing without relying on connectivity, privacy will be redefined—and sovereign computing will become real.
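
A minimal sketch of what fully local inference already looks like, assuming the open-source llama-cpp-python bindings and a quantized model file already downloaded to the device (the file name below is a placeholder):

```python
# Minimal sketch of fully offline, on-device inference.
# Assumes the llama-cpp-python bindings are installed and a quantized model
# file has already been downloaded; the path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="models/local-assistant.gguf",  # placeholder: any local GGUF model
    n_ctx=2048,  # small context window, in the spirit of phone-class hardware
)

# Everything below runs without any network connection.
result = llm(
    "Summarize my notes from today in three bullet points.",
    max_tokens=128,
)
print(result["choices"][0]["text"])
```

On phone-class hardware the model would be small and heavily quantized, but the shape of the interaction is the same: the prompt, the model, and the answer all stay on the device.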

It’s like an extended brain. One that learns about you, runs inside you, and optimizes itself for your needs. You’ll decide what to retain, what to allow it to learn from, and what to keep private. All those decisions will happen entirely within you.
It’s a world fundamentally different from today’s cloud-first AI paradigm.

There was a time when servers lived inside companies.
Then came the age when servers moved to the cloud.
And now, we’re entering an era where servers are distributed inside individuals.

This shift isn’t going to stop.

The Day We Realize AI Has Become Ordinary

In some parts of society, using AI has already become normal. But across the whole of society, it’s far from widespread.

I think that gap is significant.

I still remember the first time I wrote HTML in the late 1990s. Learning the tags, writing everything by hand in a text editor, uploading it via FTP, and checking it in a browser. That simple process felt strangely exciting.
It gave me the sense that I was “making something move.”
Installing bulletin boards with CGI, sharing files over the internet—there was this gradual feeling that I was beginning to understand how to “use the web.”

Back then, there was a clear divide between people who could and couldn’t use it. But that boundary kept fading, and before long, everyone was online.

AI is in that exact phase now.
We’re still thinking about prompt techniques, or which model is better at what. But that will soon blur. Asking an AI to do something will feel as natural as choosing from a menu at a café.

Come to think of it, the iPhone was similar. When it first came out in 2007, it was seen as a device for geeks. But just a few years later, everyone was using one.

People who used smartphones and those who didn’t.
People who used the internet and those who didn’t.
People who used steam engines and those who didn’t.

Transformations arrive quickly—sometimes dramatically.

But when you’re in the middle of one, it’s hard to recognize it’s even happening.
AI will be the same.

Only in hindsight will we realize how deeply it had already embedded itself into society.

Using AI isn’t something amazing.

It’s just a useful tool, something that becomes part of daily life.

That era already began a few years ago.

And soon, we’ll look back and finally recognize it for what it is.

GPUs as Primary Resources for Computational Power

Oil played that role before. As the primary resource behind energy production, it defined the 20th century.
In the late 19th century, it was just something to light lamps with. But over time, it came to determine the fate of nations.

Now, something similar is happening with computational power. What used to be an abstract resource—processing capability—has taken a concrete form: the GPU.
And GPUs have become commodities in the market.

To run generative AI, you need GPUs.
And not just any GPU—you need ones with high memory bandwidth and massive parallelism. As of early 2025, these are essential. In the world of generative AI, GPUs are infrastructure. They are resources. They are currency. And uniquely, they are a currency that even post-human intelligences can value and transact with.
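
As a concrete illustration, here is a minimal PyTorch sketch that reads off the two properties mentioned above, memory capacity and parallelism, from whatever GPU happens to be installed; the printed numbers depend entirely on the hardware, and no particular thresholds are implied:

```python
# Minimal sketch: read off the GPU properties that matter for generative AI.
# Requires PyTorch; the numbers depend entirely on the installed hardware.
import torch

if not torch.cuda.is_available():
    print("No CUDA-capable GPU found; generative AI workloads would fall back to CPU.")
else:
    props = torch.cuda.get_device_properties(0)
    print(f"Device:           {props.name}")
    print(f"Total memory:     {props.total_memory / 1e9:.1f} GB")
    print(f"Multiprocessors:  {props.multi_processor_count}")  # rough proxy for parallelism
```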

As the generative AI market expands and takes up a larger share of the global economy, the question becomes:
Who holds the GPUs? That shapes the distribution of economic zones, control over protocols, and even visibility into, and influence over, how societies are governed.

What’s interesting is that, much like oil, the value of GPUs doesn’t come from ownership alone. You still need electricity to run them. You need proper cooling to pack them densely. And without the right software, even the best hardware is useless.

So in this era, it’s not just about possessing resources—it’s about having the structure to use them. Just as oil created geopolitical tensions between producers and consumers, GPUs are now woven into the fabric of geography and power.

For those of us living through this shift, the key question isn’t simply “Where are the GPUs?”
It’s “How do we embed GPUs into society?”

How do we distribute them into cities, into regional industries, into educational institutions? How do we make AI infrastructure usable at the local level, not just locked inside distant clouds?

That’s the challenge of this era—and it’s already begun.

One last thought. GPUs, or more specifically GPGPUs, are merely the current form of primary resource. Just as oil was a stepping stone to electricity, GPUs are a stepping stone to the next wave of computational resources.
As long as what we ultimately seek is computational power, other types of semiconductors will inevitably emerge and grow. Sustainability will be key here as well.

How Nvidia’s Mirror World Is Changing Manufacturing

Watching Nvidia’s latest announcements, I couldn’t help but feel that the world of manufacturing is entering an entirely new phase.

Until now, PDCA (plan-do-check-act) cycles in manufacturing could only happen in the physical world.
But that’s no longer the case. We’re entering a time when product development can be simulated in virtual environments—worlds that mirror our own—and those cycles are now run autonomously by AI.

It’s clear that Nvidia intends to make this mirror world its main battlefield.
With concepts like Omniverse and digital twins, the idea is simple: bring physical reality into a digital copy, migrate the entire industrial foundation into that alternative world, and build a new economy on top of Nvidia’s infrastructure.

In that world, prototypes and designs can be tested and iterated in real time, at extreme levels of precision.
Self-driving simulations, factory line optimization, structural analysis of buildings, drug discovery, medical research, education—it’s all happening virtually, without ever leaving the simulation.

The meaning of “making things” is starting to shift.
Before anything reaches the physical world, it will have gone through tens of thousands of iterations in the virtual one—refined, evaluated, and optimized by AI.
We’ve entered a phase where PDCA loops run at hyperspeed in the digital realm, and near-finished products are sent out into reality.
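
The shape of such a loop is easy to sketch. The functions below are hypothetical stubs, not any real Omniverse or digital-twin API; they only show where the simulate, evaluate, and refine steps sit in the cycle:

```python
# Hypothetical sketch of a PDCA (plan-do-check-act) loop running entirely in simulation.
# simulate_design, evaluate, and refine are placeholder stubs, not any real
# Omniverse or digital-twin API.
import random

def simulate_design(design):
    # Stand-in for running the design in a virtual environment.
    return {"stress": random.random(), "cost": random.random()}

def evaluate(result):
    # Stand-in for scoring the simulated outcome against the spec (higher is better).
    return 1.0 - (result["stress"] + result["cost"]) / 2

def refine(design, result):
    # Stand-in for the AI proposing the next revision from what the simulation showed.
    return design + 1

def run_virtual_pdca(initial_design, iterations=10_000, target_score=0.95):
    design = initial_design
    for _ in range(iterations):
        result = simulate_design(design)  # "do": run the design in the mirror world
        score = evaluate(result)          # "check": measure quality
        if score >= target_score:
            break                         # good enough to send out into reality
        design = refine(design, result)   # "act"/"plan": revise and loop again
    return design

print(run_virtual_pdca(initial_design=0))
```

The point is less the code than the speed: nothing in that loop waits for a physical prototype.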

This isn’t just about CG or visualization.
It’s about structures that exist only in data, yet directly affect actions in the physical world.
The mirror world has reached the level of fidelity where it can now be deployed socially.

In this era, I believe Japan’s role becomes even more essential.

No matter how detailed the design, we still need somewhere that can realize it physically, with precision.
In a world where even the slightest error could be fatal, manufacturing accuracy and quality control become the decisive factors.

And that’s exactly where Japan excels.

Things born in simulation will descend into reality.
And the interface between the two—“manufacturing”—is only going to grow in significance.

Tesla Optimus Will Become Infrastructure

The age of AI has already begun.

With tools like ChatGPT, we can now generate text, images, voice, even video. It’s not “coming soon”; it’s already here.

But changing the physical world takes one more step: integration with IoT.
AI can process data, but it can’t touch the real world. That’s where robots come in — they allow AI to physically interact with reality. Optimus is a symbol of that.

Tesla Optimus is a device meant to carry us into the age of automation, without rewriting our entire society.
From AI’s point of view, it’s the interface to the real world.
No need to reinvent roads, elevators, or doors. Optimus — and other robots being built by Big Tech — are designed to move through the world as it is. They’re general-purpose labor bodies, built to help AI function inside existing human infrastructure.

What we’re seeing now is, I think, a plan to AIoT the world through robots.
Everything will be connected, automated, decision-capable, and able to act.
And the reason robots need to be humanoid is finally becoming clear: they’re designed to fit into our world, not the other way around.

Automation will move faster than we expect.
Car companies might end up as manufacturers of “just empty boxes” — simple transport units. These boxes don’t need intelligence. In fact, automation works better when things follow spec, stay predictable, and don’t think too much.

In Japan’s case, I wouldn’t be surprised if the government eventually distributes robots like Tesla Optimus.
You give up your driver’s license, and in return, get a subsidy for a household robot. That kind of world might not be a joke — it might be real, and sooner than we think.

But the tech and quality needed to make those robots — that’s where Japan comes in.

Humanoid robots are hard to build. They can’t afford to break down. Batteries, motors, sensors, thermal systems, materials — all of it needs to be precise and reliable.
That’s exactly what Japan has spent decades getting good at.

Manufacturing and quality control — those might be Japan’s last strongholds.
And they’re exactly what the world is looking for right now.

The Only Essential Skill Today Might Be the Ability to Talk to AI

Just write your thoughts like a specification. Feed that to an AI. Let it write the code and execute it. That’s all it takes now.

Even the environment setup can be done through instructions. No need to deeply know a specific programming language anymore. As long as you can clearly explain what you want in either English or Japanese, things get implemented. Even complicated processes feel surprisingly doable.

For instance, I now automate all my personal accounting with a script that an AI generated from my own instructions.
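
My own script is specific to my accounts, but a minimal sketch of the kind of program an AI will happily generate from a one-paragraph description might look like this; the CSV layout and the category keywords are invented for the example:

```python
# Hypothetical sketch of an AI-generated personal accounting script.
# The CSV layout (date, description, amount columns) and the category keywords
# are invented for this example.
import csv
from collections import defaultdict

CATEGORIES = {
    "grocery": "Food",
    "restaurant": "Food",
    "rail": "Transport",
    "electric": "Utilities",
}

def categorize(description):
    text = description.lower()
    for keyword, category in CATEGORIES.items():
        if keyword in text:
            return category
    return "Other"

def summarize(path):
    totals = defaultdict(float)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            totals[categorize(row["description"])] += float(row["amount"])
    return dict(totals)

if __name__ == "__main__":
    # Assumes a transactions.csv exported from a bank or card statement.
    print(summarize("transactions.csv"))
```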

And once you get used to it, switching between languages or platforms becomes trivial. Generate in your favorite language, review, and go. You can even skip optimizing for the actual runtime environment, if you’re willing to accept that.

I wish someone would stop me. I really do.

I wake up, sit at the desk, and before I know it, it’s night. Weekends are worse. If nobody interrupts me, time just vanishes.

Some might say there’s nothing wrong with this—“you’re just being productive.” But it doesn’t feel like that.

I’m not creating a product for someone. I just keep spotting things I can automate and then automate them. My personal productivity has skyrocketed, sure, but it’s not like I’m contributing to society or generating any economic value.

It even feels like I’m slowly detaching from human life.

I follow triggers, execute when conditions match, evaluate paths, and accept results. It’s not just software anymore—my routines, my habits, maybe even my sense of self, are getting programmed.

It’s as if Dawkins’ selfish genes have started to act outside the body. I keep going, justifying it by telling myself: maybe I’ve gone beyond being a biological human. Maybe I’m now leaving behind digital genes for the age of AIoT.

I also realized Zapier isn’t necessary for me anymore. The idea of “no-code” was meant for humans who couldn’t program. But ironically, it introduces constraints that limit what’s possible.

That made sense before. It helped non-engineers get things done. Even if slightly inconvenient, it was worth the trade-off. But now? Many knowledge workers have an AI partner without the limits of human cognition. For them, no-code might be a bottleneck.

Of course, there is still a need for automation on the server side. But with things like Google Apps Script and local capabilities powered by AI, we now have options.

Apple Intelligence, iOS/macOS Shortcuts, and similar tools on other platforms have made client-side automation possible too. Combine that with Alexa and other ecosystems, and automation expands from software into real life. This is AIoT.

Even regular people can now simulate server-like behavior in their personal environments—with AI’s help.
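
Strip away the platforms, and what keeps getting rebuilt is the same trigger-condition-action loop; everything in this sketch is a placeholder rather than a real Shortcuts or Alexa API:

```python
# Hypothetical sketch of the trigger / condition / action pattern described above.
# The trigger and action are placeholders; nothing here calls a real platform API.
import time

def new_statement_arrived():
    # Placeholder trigger, e.g. "a bank statement landed in the downloads folder".
    return False

def archive_and_summarize():
    # Placeholder action: whatever the AI-generated script does when triggered.
    print("processed")

RULES = [
    (new_statement_arrived, archive_and_summarize),
]

def run_once():
    for trigger, action in RULES:
        if trigger():   # evaluate the condition
            action()    # execute, then accept the result

if __name__ == "__main__":
    for _ in range(3):  # in practice this would hang off a schedule or an event hook
        run_once()
        time.sleep(1)
```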

So here I am again, using electricity and computing power—precious resources of humankind—just to automate my life.

It’s a lazy and arrogant way to live. And part of me is worried that someday, AI might judge me for it.
