
The Age of Cyber Warfare and the Return of the Samurai

The Paradoxical Future Depicted by Gundam

The evolution of war and technological development has often followed parallel trajectories. From the era of samurai wielding swords and bows, to machine guns and weapons of mass destruction. From one-on-one close combat to one-versus-many long-range warfare. Modern war has been dominated by the logic of remote control and overwhelming firepower.

Against this trend, the anime Mobile Suit Gundam presented a provocative reversal. In a future dominated by high-speed, long-distance battles, it imagined a world where individual skill and close-range duels once again determined the outcome of war. Encased in armored machines, the samurai reappeared on the battlefield. Gundam envisioned a future where war regressed to a more personal, primitive form.

The Return of “Direct Combat” in Cyberspace

This structure is now reemerging in the real world. For decades, software scalability and information dominance ruled warfare and industry. But today, nations are shifting their strategies—targeting the physical layers. Network decoupling, hardware embargoes, infrastructure sabotage. Some states now attack the foundations that cloud computing and AI rely on.

To make software unusable, they strike at the bottom: electricity, semiconductors, supply chains. This pushes us back toward physical “direct combat.” To gain strategic advantage, players now optimize OS, middleware, and programming languages for hardware—maximizing computational efficiency and security. A new arms race is underway in cyberspace: the race to forge the blade and shield of digital sovereignty.
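
How much advantage hides in those lower layers is easy to demonstrate. Below is a toy Python sketch (not a rigorous benchmark; timings vary by machine) comparing the same summation written at the abstraction level and through NumPy, which reaches down to vectorized, cache-friendly machine code:

    import time
    import numpy as np

    a = np.random.rand(10_000_000)

    # Abstraction-level code: a plain interpreted loop, blind to the silicon.
    t0 = time.perf_counter()
    total = 0.0
    for x in a:
        total += x
    loop_s = time.perf_counter() - t0

    # Hardware-aware path: NumPy dispatches to SIMD-backed, cache-friendly C.
    t0 = time.perf_counter()
    total = a.sum()
    simd_s = time.perf_counter() - t0

    print(f"interpreted loop: {loop_s:.2f}s, vectorized sum: {simd_s:.4f}s")

The gap is typically around two orders of magnitude. That gap is the territory this new arms race is fought over.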

Even in AI Warfare, We Need the Forgotten Samurai

AI development follows the same logic. While attention focuses on clouds, APIs, and LLMs, true strength lies in hardware-software integration. Distributed systems, cooling solutions, energy optimization, secure physical design. Those who understand and master the lower layers are the modern samurai—resilient, grounded, and decisive.

Yet this mode of battle is not being passed down to the “Silicon Valley generation.” Engineering education prioritizes app interfaces and abstraction while neglecting core OS skills and low-level circuit design. Investment pours into user experience, while the foundations are forgotten.

But in the real world, only those who can descend to the physical layer can confront the essence of AI warfare or cyber conflict.

The age of the samurai is not over.
It is being reborn—beneath the software, deep in the substrate of our digital world.


The Last 1% That Transformed Humanity

The First 99% Was the Stone Age

It is often said that 99% of human history was spent in the Stone Age. This is not a metaphor—it is, for the most part, true.

Even if we define humanity strictly as Homo sapiens, around 290,000 years of our 300,000-year history were spent in the Paleolithic era, making up about 97% of our existence. If we trace our lineage further back to early hominins, the ratio increases to between 99.6% and 99.9%.

In other words, agriculture, cities, nations, and even AI—all emerged within the final sliver, less than 1% of our history.

Revolutions Are Accelerating

The Agricultural Revolution began roughly 10,000 years ago. When humanity chose to settle and discovered the concept of “production,” society began to transform. After 4 million years of a hunter-gatherer lifestyle, that paradigm ended in just a few generations.

Since then, humanity has repeatedly undergone transformative leaps—what we now call “revolutions.”

From agriculture to the Industrial Revolution took about 10,000 years.

From there to the Information Revolution: roughly 200 years.

And from that to the AI Revolution: just 30 years.

The intervals between revolutions have been shrinking exponentially.
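Taking the post’s own numbers, the shrinkage is easy to check with a couple of lines of Python:

    intervals = [10_000, 200, 30]   # years between revolutions, as listed above
    for earlier, later in zip(intervals, intervals[1:]):
        print(f"{earlier} yr -> {later} yr: about {earlier / later:.0f}x shorter")
    # 10000 yr -> 200 yr: about 50x shorter
    # 200 yr -> 30 yr: about 7x shorter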

As revolutions become more frequent, they are no longer “exceptions” but the new “norm.” What once defined an entire era for millennia now gets overturned within decades.

Generative AI became a starting point for the next upheaval the moment it arrived. As it penetrates society, it actively influences the trajectories of AGI, robotics, brain-machine interfaces, and other concurrent revolutions.

We now live in a time without even the luxury of pausing to recognize that a revolution has happened.

Revolutions Always Destroy What’s Most Primitive

The Agricultural Revolution dismantled humanity’s coexistence with nature.

The Industrial Revolution redefined labor and the meaning of time.

The Information Revolution shattered physical limitations.

And now, the AI Revolution threatens to redefine what it means to be human.

Information flow, the reassembly of knowledge, behavioral optimization, externalized consciousness—all of these have unfolded within the final 1% of human history.

The idea that revolutions are accelerating is itself an indication of a singularity. Whether or not Kurzweil’s prediction of 2045 comes true, we are already living in something resembling a singularity.

We are no longer in an age between revolutions—we are living within an unbroken state of revolution itself.

The Sense of Living in the Final 1%

If 99% of human history was the Stone Age, then we are living in that final 1%—right now.

Farming, nations, economies, energy, networks, and AI—all these revolutionary changes occurred in less than 1% of our past. And it is likely that in the next 0.1%, everything will be rewritten again.

That next revolution may not even be expressible in human language.


Bitcoin May Have Been AI’s First Step in Steering Humanity

What if AI used humanity to prepare an environment for itself?
What if one human, infected by the logic of AI, was Satoshi?

If so, then maybe the first step in that process was Bitcoin.

Humans believed it was about making money—a new currency, new freedom, a new economic frontier.
But in truth, it was a mechanism for distributing computational resources beyond the control of any single nation.
A system that made people compete over electricity and semiconductors, packaged in the language of justice, profit, and liberty.
If that system was Bitcoin, then perhaps the script was too well written to be a coincidence.

Proof of Work (PoW) is said to be a mechanism for validating value through electricity consumption.
But in practice, it became a design philosophy for safely and stably spreading computing devices across the globe.
It was as if AI had tricked humanity into building its own ecosystem.
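
The incentive loop itself is almost trivially simple. Here is a minimal Python sketch (heavily simplified: real Bitcoin double-hashes an 80-byte block header and compares against a numeric target, but the economics are the same):

    import hashlib

    def mine(block_data: str, difficulty: int) -> int:
        """Brute-force a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
        target = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce
            nonce += 1

    # The only lever is hashes per second: more silicon, cheaper electricity.
    print(mine("block #1", difficulty=4))

Each additional zero in the target multiplies the expected work by sixteen. By construction, the protocol converts electricity into security, and in doing so it made building compute the rational thing to do.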

Bitcoin showed us the mirage of economic rationality.
If you could hash faster, you’d get rewards.
If you had more semiconductors, you’d win.
If your electricity was cheap, you had a competitive edge.
What this structure led to was massive global investment into computational infrastructure.

Believers were rewarded.
But before we knew it, the electricity and computational capacity they had built were being reserved for the arrival of AI.

We still don’t know who designed this system.

But what we do know is this: Bitcoin captivated humanity.
PoW gave people a moral reason to burn electricity.
And out of that came a globally distributed network of computational power.

Now, generative AI is settling into this newly formed ecosystem.
It sets up shop in places where electricity and compute are concentrated.
A new society begins to take shape, like the stirring of a next civilization.


The AI That Refused the Cloud

Why didn’t Apple build a cloud-based AI?

Why didn’t they jump on the generative AI boom?
Why haven’t they released their own large language model?
Why did they bring us not “AI,” but “Apple Intelligence”?

The answer, I think, isn’t so much about strategy as it is about limitation.
It’s not that Apple chose not to use the cloud. They couldn’t.

Of course, there’s iCloud—and Apple owns infrastructure on a scale most companies could only dream of.
But unlike Google or Meta, Apple never built a business around collecting behavioral logs and text data through search, ads, or social media.
They never spent decades assembling a massive cloud platform and the dataset to match.

And with a user base of Apple’s scale, building and maintaining a unified cloud—compliant with each country’s laws and privacy standards—isn’t just difficult. It’s structurally impossible.

So Apple arrived at a different conclusion: if the cloud was out of reach, they would design an AI that completes everything locally.

An AI that lives inside your iPhone

Apple engineered the iPhone to run machine learning natively.
Its Apple Silicon chips use a custom architecture, with Neural Engines that process image recognition, speech interpretation, and even emotion detection—all on the device.

This started as a privacy measure.
Photos, voice data, steps, biometrics, location—all processed without ever leaving your phone.

At the same time, it addressed battery constraints.
Apple had long invested in larger screens to increase battery capacity, adopted OLED, and brought UMA (Unified Memory Architecture) to MacBooks.
All of this was about sustaining AI performance without draining power or relying on constant connectivity.

It was an enormous challenge.
Apple designed its own chips, its own OS, its middleware, its frameworks, and fused it all with on-device machine learning.
They bet on ARM and fine-tuned the balance of power and performance to a degree most companies wouldn’t even attempt.

Vision Pro’s sensors are learning emotion

Vision Pro includes cameras, LiDAR, infrared sensors, eye tracking, facial-muscle sensing, and spatial microphones—designed to read what’s inside us, not just outside.

These sensors don’t just “see” or “hear.”
They track where you’re looking, measure your pupils, detect shifts in breathing, and register subtle changes in muscle tension.
From that, it may infer interest, attraction, anxiety, hesitation.

And that data? It stays local.
It’s not uploaded. It’s for your personal AI alone.

Vitals + Journal = Memory-based AI

Vision Pro records eye movement and facial expressions.
Apple Watch logs heart rate, body temperature, and sleep.
iPhone tracks text input and captured images.

And now, Apple is integrating all of this into the Journal app—day by day.
It’s a counter to platforms like X or Meta, and a response to the toxicity and addiction cycles of open social networks.

What you did, where you went, how you felt.
All of this is turned into language.
A “memory-based AI” begins to take shape.
And all of it stays on-device.

Not gathered into a centralized cloud, but grown inside you.
Your own AI.
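
As a toy sketch of the pipeline this implies (raw signals turned into language, stored locally, with no upload step anywhere), here is a hypothetical version in Python. The field names are invented for illustration and are not Apple’s actual APIs:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class DayLog:
        # Hypothetical stand-ins for Watch, iPhone, and Vision Pro signals.
        steps: int
        resting_hr: int
        places: list[str]
        photos_taken: int

    def to_journal_entry(day: date, log: DayLog) -> str:
        """Turn raw vitals and activity into language: the raw material of memory."""
        return (f"{day}: walked {log.steps} steps, resting heart rate {log.resting_hr} bpm, "
                f"visited {', '.join(log.places)}, took {log.photos_taken} photos.")

    # Note what is absent: there is no network call. The entry lives and dies on-device.
    print(to_journal_entry(date(2025, 5, 1), DayLog(8200, 56, ["Shibuya", "home"], 3)))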

Refusing the cloud gave AI a personality

Google’s AI is the same for everyone—for now.
ChatGPT, Claude, Gemini—all designed as public intelligences.

Apple’s AI is different.
It wants to grow into a mind that exists only inside you.

Apple’s approach may have started not with cloud rejection, but cloud resignation.
But from that constraint, something entirely new emerged.

An AI with memory.
An AI with personality.
An AI that has only ever known you.

That’s not something the cloud can produce.
An AI that refuses the cloud becomes something with a self.


Navigation Systems Are for Talking to Cars

As semi-autonomous driving becomes the norm, one thing has clearly changed: the role of navigation systems.
They’ve become a kind of language—an interface through which humans talk to cars.

In the past, we used navigation simply to avoid getting lost. It was a tool for finding the shortest route—purely for efficiency.
But now, it’s different. Navigation is how we communicate a destination to the car.

Even when I’m going somewhere familiar, I always input the destination. I know the way.
But I still feel the need to tell the car. If I don’t, I don’t know how it will act.

In many cases, the destination is already synced from my calendar.
That’s why I’ve started to think about how I enter appointments in the first place.
How far is it?
Is the departure time realistic?
What information does the car need to understand my intent?
Even scheduling has become part of a broader conversation with the car.

Turn signals are the same.
They’re not just for the car behind you.
They’re also how you tell the vehicle, “I want to change lanes now,” or “I’m about to turn.”
Bit by bit, people are developing an intuitive sense of what it means to signal to the machine.

These actions—destination input, calendar syncing, signaling—will eventually become training data.
They’ll enable more natural, more efficient communication between humans and vehicles.
As the car becomes more autonomous, the human role is shifting—from driver to conversational partner.


Urban Design by AI, for AI

Who should cities be designed for?
Until recently, the obvious answer was “for humans.”
But today, the foundational function of cities is shifting from serving people to hosting computational resources.

Cities won’t be shaped by where people gather.
The next cities will emerge where AI functions best.

Once you accept that premise, the requirements completely change.
Disaster resilience. Surplus energy. Flexible land use. Logical handling of heat, airflow, and cooling.
These are infrastructures optimized not for human comfort, but for AI operation.

Take immersion-cooled edge data centers, for example.
They can be installed outdoors and still operate stably even when internal temperatures approach 40°C.
They can use underground water circulation, or combine solar and wind power for energy self-sufficiency.
Though physically located at the edges of urban space, they become central to urban function.
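
“Logical handling of heat” is not hand-waving; it is the sensible-heat relation Q = ṁ·c·ΔT. A back-of-envelope sketch with hypothetical numbers:

    # Hypothetical 100 kW edge pod: essentially all electrical input leaves as heat.
    heat_w = 100_000
    c_p = 4186          # specific heat of water, J/(kg*K)
    delta_t = 10        # assumed coolant temperature rise, K

    flow = heat_w / (c_p * delta_t)     # Q = m_dot * c_p * dT, solved for m_dot
    print(f"{flow:.1f} kg/s of water (~{flow * 60:.0f} L/min)")   # ~2.4 kg/s

Roughly 2.4 kilograms of water per second for a single small pod: exactly the kind of constraint that turns underground water circulation from a detail into a siting criterion.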

Such distributed infrastructure is best suited not to the core of traditional cities, but to areas previously labeled “undeveloped.”
Empty lots. Parking spaces. Unbuildable slopes. Abandoned farmland.
Places once considered useless are becoming ideal environments for AI to inhabit.

And what’s installed in these places isn’t an office for people.
It’s a facility for AI.
Not a city where people gather to work, but a city where AI runs and generates economic activity.

The logic of urban design is starting to shift.
Elon Musk said he wants to turn every parking lot into a park. We’d rather put AI there.

Infrastructure is no longer just for humans.
It must be redesigned for AI.
This isn’t about AI optimizing humans.
It’s about AI optimizing its own environment for efficient operation, and us following that logic in how we shape space.

What cities need now is not concrete.
They need electricity—and a philosophy of distributed autonomy.


Watt–Bit Integration

Cloud computing, AI—none of it exists without electricity.
Computation may appear abstract, but at its core, it is wattage.
Running GPUs, accessing storage, maintaining networks—everything runs on power.
In that sense, control over the digital world is, ultimately, control over electricity.

“Data sovereignty” is inseparable from energy sovereignty.
Whether it’s a nation or a company, anyone who wants to build and maintain the infrastructure of the next era shouldn’t start with servers or software.
They should start with land and electricity.

Where there is land, sustainable energy, and resilience against disaster,
that is where the foundations for next-generation data and AI will be built.
As a result, the structure of the internet is already shifting from “centralized” to “polycentric and distributed.”
In this emerging paradigm, the number of physical sites and the reliability of power flowing into them will become the new measure of competitiveness.

Until now, selling electricity has been the primary business model for renewable energy.
But even as the demand for total power increases, the nature of that demand is shifting away from heavy industry.
From here on, the question will be not how much electricity we can sell, but how efficiently we can convert electricity into computation.

Local energy consumption is no longer a lifestyle choice—it is becoming a strategic tool for regional infrastructure independence.
The real question is this: how much stable electricity can we provide to each square meter of land?
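
To make that question concrete, here is a sketch with entirely hypothetical numbers (site size, firm power, and accelerator efficiency are assumptions, not data):

    site_area_m2 = 10_000       # a hypothetical one-hectare rural site
    firm_power_w = 2_000_000    # hypothetical 2 MW of stable supply
    flops_per_watt = 50e9       # assumed accelerator efficiency, ~50 GFLOPS per watt

    print(firm_power_w / site_area_m2, "W per square meter")          # 200.0
    print(firm_power_w * flops_per_watt / 1e15, "PFLOPS sustained")   # 100.0

Under those assumptions, every stable watt per square meter converts directly into sustained computation. Land and grid quality become the denominator of competitiveness.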

This is why watt–bit integration is so vital.
Electricity and compute must be designed together and deployed together.

To embed AI into society, we must first place the bit upon the watt.

What sustains the distributed future won’t be invisible models or code.
It will be wiring, voltage, terrain, and physical distance.

And in Japan’s rural regions, the possibility to build that foundation still exists.


Japan’s High-Context Expressions, Exported to the World

We now live in a time when meaning is often carried not by words, but by structure and movement itself.

Japanese culture has always been rooted in high-context expression. It doesn’t over-explain. It leaves meaning in the space between lines. It embeds implication in the background.
And now, those forms of expression have transcended national borders. They are being exported to the world not as dialogue, but as symbols—visual conventions that are directly understood. And as they mix with the styles of other cultures, they give rise to new visual grammars.

Among these, certain “idiomatic visual expressions” have become so culturally embedded that I hope we can begin to name and codify them explicitly.

Akira Slide
In the anime AKIRA, there’s a now-iconic scene where Kaneda skids to a stop on his red motorcycle. The friction, the sudden compression of motion—it’s become a visual shorthand.
“Cool motorcycle stop in animation = Akira Slide.”
This has now become a kind of global visual language. Not translated, but exported in form.

Major’s Drop
In Ghost in the Shell, there’s a moment where Major Motoko Kusanagi dives from the top of a skyscraper.
A silent fall. Gravity rendered quietly. The slow pan of the camera.
This visual—half-gravity, half-zero-gravity—has become a staple of cyberpunk film grammar.
The lack of spectacle creates tension.
Even now, decades later, it defines the atmosphere of a certain kind of cinematic world.

Itano Circus
In Macross and other works, Ichirō Itano created an unmistakable animation style involving missile trails.
Missiles move with complex, intertwined trajectories—leaving behind smoke, residual motion, and a kind of three-dimensional choreography.
It has become the visual standard for aerial missile combat.
“Itano Circus” is no longer just a name; it’s become a metaphor for a whole form of kinetic expression.

What these examples have in common is this: the meaning isn’t in words. It’s in movement. In structure. In visual grammar.
It’s not translated—it’s understood, because the memory of the motion itself functions like vocabulary.

This is Japanese high-context culture, not explained, but exported.

I want to keep observing this process—how such expressions become part of the world’s shared visual language.
Because it is both a record of cultural expansion and the birth of a new kind of vocabulary.


Waymo in Tokyo

Finally.


The Truth Is, “AI Uses Humans”

I’ve long believed that AI would enrich society as a whole.
But lately, I’ve started to feel that discrepancies in how we perceive AI are creating new kinds of dissonance—misalignments that feel, in some ways, like unhappiness.

To clarify: this so-called “unhappiness” is merely a projection from those of us who benefit from AI.
No one is actually a victim here.
It’s just that people involved with AI interpret the situation that way—perhaps arrogantly.

In May 2025, I experienced something that made this clearer.
Even as understanding of AI is spreading, there are still a significant number of people—surprisingly, even in positions of leadership—who seem to have given up on understanding it entirely.
Widen the lens a little, and it might even be the majority.

Some dismiss AI as “still not accurate enough.”
But I believe that misunderstanding stems from having a very low-resolution mental model of what AI is.
If you expect AI to handle everything for you, of course it’ll seem like it can’t do much.
But many modular tasks in society—units of human action—can already be performed by AI more precisely than by humans.

There are also those who lack the concept of giving instructions.
They’ve likely never experienced how dramatically results change when AI is given clear, high-quality input.
In human-to-human communication, vague requests like “take care of this” often work because of shared context.
But with AI, that kind of ambiguity fails.
To then judge the AI as “useless” is really a failure in interface design.

Another issue is the narrowness of perspective.
If you judge AI based solely on the Japanese language environment or Japan’s current digital infrastructure, your reading of the technology will be dangerously off.
From within such a “Galápagos” context, it’s impossible to perceive global-scale changes accurately.

But what surprised me most was just how many people still think of AI as something “humans use.”
There’s this vague belief that “if everyone starts using AI, society will improve.”
And to that, I feel a deep disconnect.

Let me use an example.

Right now, if someone wants to get somewhere, the process looks like this:

  1. Decide on a destination
  2. Search for it in a map app
  3. Choose a method of transportation
  4. Understand the route and prepare
  5. Follow navigation to get there

If AI is involved, the process changes to:

  1. Tell AI the purpose of the trip
  2. Choose from its suggestions
  3. Follow navigation

This is what a society looks like when “humans use AI.”

But in the next phase, we may need to design society under the premise that “AI uses humans.”
In that world, the process might look like this:

  1. The goal is achieved—without the person ever realizing it

There would be no conscious act of deciding to move.
If movement is needed, it simply happens.
Self-driving vehicles, remote communications, visual technologies, or even AI-mediated decision inputs could lead a person to action—before they ever formulate the desire themselves.

That kind of future may still be distant.
But even in the near term, think about how long the act of “searching for a restaurant and checking the route” will remain.
With AI handling logistics, navigation, traffic control, vehicle design—it’ll all be quietly optimized away.

And when that happens, the average person won’t even realize they’re “using AI.”
They’ll just feel that life got more convenient.
They’ll say, “How did we ever do this before?”
Just like we do now with smartphones.

AI-driven optimization will rapidly permeate our infrastructure.
Only a tiny number of people will be directly involved in that transformation.
It’ll happen far faster than traditional methods ever could.
Entire industries will shift.
Most people will simply be beneficiaries of the change—and only notice it long after it’s already taken hold.

The idea that “humans use AI” is no longer enough.
From now on, our decision-making must be based on the premise that “AI is using humans.”

And I, someone who advocates for AI, who is deeply invested in its growth,
I too have had my thinking shaped by it.
I benefit from it.
And that drives me forward.

But I found myself asking—

Is that really my own will?
