Categories
Asides

The AI That Refused the Cloud

Why didn’t Apple build a cloud-based AI?

Why didn’t they jump on the generative AI boom?
Why haven’t they released their own large language model?
Why did they bring us not “AI,” but “Apple Intelligence”?

The answer, I think, isn’t so much about strategy as it is about limitation.
It’s not that Apple chose not to use the cloud. They couldn’t.

Of course, there’s iCloud—and Apple owns infrastructure on a scale most companies could only dream of.
But unlike Google or Meta, Apple never built a business around collecting behavioral logs and text data through search, ads, or social media.
They never spent decades assembling a massive cloud platform and the dataset to match.

And with a user base of Apple’s scale, building and maintaining a unified cloud—compliant with each country’s laws and privacy standards—isn’t just difficult. It’s structurally impossible.

So Apple arrived at a different conclusion: if the cloud was out of reach, they would design an AI that completes everything locally.

An AI that lives inside your iPhone

Apple engineered the iPhone to run machine learning natively.
Its Apple Silicon chips use a custom architecture, with a Neural Engine that processes image recognition, speech interpretation, and even emotion detection, all on the device.

This started as a privacy measure.
Photos, voice data, steps, biometrics, location—all processed without ever leaving your phone.

At the same time, it addressed battery constraints.
Apple had long used larger device bodies to increase battery capacity, adopted OLED, and brought Unified Memory Architecture (UMA) to its Macs.
All of this was about sustaining AI performance without draining power or relying on constant connectivity.

It was an enormous challenge.
Apple designed its own chips, its own OS, its middleware, its frameworks, and fused it all with on-device machine learning.
They bet on ARM and fine-tuned the balance of power and performance to a degree most companies wouldn’t even attempt.

Vision Pro’s sensors are learning emotion

Vision Pro carries cameras, LiDAR, infrared illuminators, eye-tracking sensors, facial-muscle sensors, and spatial microphones, an array designed to read what's inside us, not just outside.

These sensors don’t just “see” or “hear.”
They track where you’re looking, measure your pupils, detect shifts in breathing, and register subtle changes in muscle tension.
From that, the system may infer interest, attraction, anxiety, or hesitation.

And that data? It stays local.
It’s not uploaded. It’s for your personal AI alone.

Vitals + Journal = Memory-based AI

Vision Pro records eye movement and facial expressions.
Apple Watch logs heart rate, body temperature, and sleep.
iPhone tracks text input and captured images.

And now, Apple is integrating all of this into the Journal app—day by day.
It’s a counter to platforms like X or Meta, and a response to the toxicity and addiction cycles of open social networks.

What you did, where you went, how you felt.
All of this is turned into language.
A “memory-based AI” begins to take shape.
And all of it stays on-device.

Not gathered into a centralized cloud, but grown inside you.
Your own AI.

Refusing the cloud gave AI a personality

Google’s AI is the same for everyone—for now.
ChatGPT, Claude, Gemini—all designed as public intelligences.

Apple’s AI is different.
It wants to grow into a mind that exists only inside you.

Apple’s approach may have started not with cloud rejection, but cloud resignation.
But from that constraint, something entirely new emerged.

An AI with memory.
An AI with personality.
An AI that has only ever known you.

That’s not something the cloud can produce.
An AI that refuses the cloud becomes something with a self.

Navigation Systems Are for Talking to Cars

As semi-autonomous driving becomes the norm, one thing has clearly changed: the role of navigation systems.
They’ve become a kind of language—an interface through which humans talk to cars.

In the past, we used navigation simply to avoid getting lost. It was a tool for finding the shortest route—purely for efficiency.
But now, it’s different. Navigation is how we communicate a destination to the car.

Even when I’m going somewhere familiar, I always input the destination. I know the way.
But I still feel the need to tell the car. If I don’t, I don’t know how it will act.

In many cases, the destination is already synced from my calendar.
That’s why I’ve started to think about how I enter appointments in the first place.
How far is it?
Is the departure time realistic?
What information does the car need to understand my intent?
Even scheduling has become part of a broader conversation with the car.

Turn signals are the same.
They’re not just for the car behind you.
They’re also how you tell the vehicle, “I want to change lanes now,” or “I’m about to turn.”
Bit by bit, people are developing an intuitive sense of what it means to signal to the machine.

These actions—destination input, calendar syncing, signaling—will eventually become training data.
They’ll enable more natural, more efficient communication between humans and vehicles.
As the car becomes more autonomous, the human role is shifting—from driver to conversational partner.

Urban Design by AI, for AI

Who should cities be designed for?
Until recently, the obvious answer was “for humans.”
But today, the foundational function of cities is shifting from serving people to hosting computational resources.

Cities won’t be shaped by where people gather.
The next cities will emerge where AI functions best.

Once you accept that premise, the requirements completely change.
Disaster resilience. Surplus energy. Flexible land use. Logical handling of heat, airflow, and cooling.
These are infrastructures optimized not for human comfort, but for AI operation.

Take immersion-cooled edge data centers, for example.
They can be installed outdoors and still operate stably even when internal temperatures approach 40°C.
They can use underground water circulation, or combine solar and wind power for energy self-sufficiency.
Though physically located at the edges of urban space, they become central to urban function.

Such distributed infrastructure is best suited not to the core of traditional cities, but to areas previously labeled “undeveloped.”
Empty lots. Parking spaces. Unbuildable slopes. Abandoned farmland.
Places once considered useless are becoming ideal environments for AI to inhabit.

And what’s installed in these places isn’t an office for people.
It’s a facility for AI.
Not a city where people gather to work, but a city where AI runs and generates economic activity.

The logic of urban design is starting to shift.
Elon Musk said he wants to turn every parking lot into a park. We’d rather put AI there.

Infrastructure is no longer just for humans.
It must be redesigned for AI.
This isn’t about AI optimizing humans.
It’s about AI optimizing its own environment for efficient operation, and us following that logic in how we shape space.

What cities need now is not concrete.
They need electricity—and a philosophy of distributed autonomy.

The Truth Is, “AI Uses Humans”

I’ve long believed that AI would enrich society as a whole.
But lately, I’ve started to feel that discrepancies in how we perceive AI are creating new kinds of dissonance—misalignments that feel, in some ways, like unhappiness.

To clarify: this so-called “unhappiness” is merely a projection from those of us who benefit from AI.
No one is actually a victim here.
It’s just that people involved with AI interpret the situation that way—perhaps arrogantly.

In May 2025, I experienced something that made this clearer.
Even as understanding of AI is spreading, there are still a significant number of people—surprisingly, even in positions of leadership—who seem to have given up on understanding it entirely.
Widen the lens a little, and it might even be the majority.

Some dismiss AI as “still not accurate enough.”
But I believe that misunderstanding stems from having a very low-resolution mental model of what AI is.
If you expect AI to handle everything for you, of course it’ll seem like it can’t do much.
But many modular tasks in society—units of human action—can already be performed by AI more precisely than by humans.

There are also those who lack the concept of giving instructions.
They’ve likely never experienced how dramatically results change when AI is given clear, high-quality input.
In human-to-human communication, vague requests like “take care of this” often work because of shared context.
But with AI, that kind of ambiguity fails.
To then judge the AI as “useless” is really a failure in interface design.

Another issue is the narrowness of perspective.
If you judge AI based solely on the Japanese language environment or Japan’s current digital infrastructure, your reading of the technology will be dangerously off.
From within such a “Galápagos” context, it’s impossible to perceive global-scale changes accurately.

But what surprised me most was just how many people still think of AI as something “humans use.”
There’s this vague belief that “if everyone starts using AI, society will improve.”
And to that, I feel a deep disconnect.

Let me use an example.

Right now, if someone wants to get somewhere, the process looks like this:

  1. Decide on a destination
  2. Search for it in a map app
  3. Choose a method of transportation
  4. Understand the route and prepare
  5. Follow navigation to get there

If AI is involved, the process changes to:

  1. Tell AI the purpose of the trip
  2. Choose from its suggestions
  3. Follow navigation

This is what a society looks like when “humans use AI.”
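The shrinking of the human's share of the work can be made concrete in a small sketch. This is purely illustrative; the function names and step strings are hypothetical, not any real API:

```python
# Hypothetical sketch of the two interaction flows described above.
# Function names and step strings are illustrative only.

def travel_without_ai(destination: str) -> list[str]:
    """Today's flow: the human performs every step explicitly."""
    return [
        f"decide on a destination: {destination}",
        f"search for {destination} in a map app",
        "choose a method of transportation",
        "understand the route and prepare",
        "follow navigation",
    ]

def travel_with_ai(purpose: str) -> list[str]:
    """AI-mediated flow: the human states intent; AI fills in the rest."""
    return [
        f"tell AI the purpose of the trip: {purpose}",
        "choose from its suggestions",
        "follow navigation",
    ]

# The human's share of the work shrinks from five steps to three.
print(len(travel_without_ai("a cafe")))      # 5
print(len(travel_with_ai("meet a friend")))  # 3
```

The point of the comparison is that each phase does not add capability so much as delete human steps, until, in the final phase described below, the list collapses toward zero.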

But in the next phase, we may need to design society under the premise that “AI uses humans.”
In that world, the process might look like this:

  1. The goal is achieved—without the person ever realizing it

There would be no conscious act of deciding to move.
If movement is needed, it simply happens.
Self-driving vehicles, remote communications, visual technologies, or even AI-mediated decision inputs could lead a person to action—before they ever formulate the desire themselves.

That kind of future may still be distant.
But even in the near term, think about how long the act of “searching for a restaurant and checking the route” will remain.
With AI handling logistics, navigation, traffic control, vehicle design—it’ll all be quietly optimized away.

And when that happens, the average person won’t even realize they’re “using AI.”
They’ll just feel that life got more convenient.
They’ll say, “How did we ever do this before?”
Just like we do now with smartphones.

AI-driven optimization will rapidly permeate our infrastructure.
Only a tiny number of people will be directly involved in that transformation.
It’ll happen far faster than traditional methods ever could.
Entire industries will shift.
Most people will simply be beneficiaries of the change—and only notice it long after it’s already taken hold.

The idea that “humans use AI” is no longer enough.
From now on, our decision-making must be based on the premise that “AI is using humans.”

And I, someone who advocates for AI, someone deeply invested in its growth,
have had my own thinking shaped by it.
I benefit from it.
And that drives me forward.

But I found myself asking—

Is that really my own will?

Data Overpayment

For the past 20 years or so, we have become too accustomed to the idea of a “free internet.” Search engines, social media, email, maps, translation services—all of it seemed free. Or at least, it felt that way.

But in reality, nothing was ever free.

We were paying not with money, but with data. Names, interests, location history, purchase records, sleep patterns, social connections, facial images—all of it was handed over as “payment” to sustain business models.

The problem is that we paid more than we needed to.

Did using a map really require disclosing our family structure? Did translation apps really need to track our location history? We never truly scrutinized how much information was being asked of us—or whether it was justifiable.

Worse, we no longer remember what we gave away.

This may, at some point, become a social issue we recognize as “data overpayment.”

Data overpayment is not a single event or sudden incident.
It is the slow accumulation of loss over years, even decades.
By the time we notice, we may no longer know what we’ve already lost.

But as we enter the age of AI, that structure is starting to shift. A new set of questions is emerging around our personal data—how it is used in model training, who has control over it, where it’s recorded, and how transparent that process can be.

What if we could know where our data is used? What if we could choose how it is used? What if we had the right to retract the data we shared in the past?
If that were possible, then the economic, legal, and ethical meaning of “data” itself would be dramatically redefined.

Data is not something to be sold off. It’s something to be licensed for use.
Data is owned—not traded, but governed through conditions.
If this perspective becomes more widely accepted, we might finally begin to correct the overpayment that has built up over the past two decades.

It’s time we start treating our data as something that truly belongs to us.

Balancing Privacy and AI

The cloud is convenient. But more and more people are beginning to feel a quiet discomfort with entrusting everything to it.

Information is stored, utilized, linked, and predicted. Our behaviors, emotions, preferences, and relationships are being processed in places we can’t see. This unease is no longer limited to a tech-savvy minority.

So what can we do to protect privacy in a world like this?
One possible answer, I believe, is to bring AI down from the cloud.

As we can see in Apple’s recent strategic direction, AI is shifting from the cloud to the device itself. Inside the iPhone, inside the Mac, AI knows you, processes your data, and, crucially, doesn’t send it out.

When computing power resides locally and your data stays in your hands, a new kind of architecture takes shape, one built not on convenience alone but on trust. This is how a “safer-than-cloud” AI environment can emerge.

In this context, the question of “Where does AI run?” becomes more than just a technical choice. It evolves into a political and ethical question: “Who holds your data?” and equally important, “Who does not see it?”

This shift opens the door to new architectural possibilities.
When individuals hold their own data and run their own models locally, we create a form of AI that operates with a completely different risk structure and trust model than large-scale cloud systems.

In an era where the cloud itself is starting to feel “uncomfortable,” the real question becomes: Where is computation happening, and who is it working for?

Protecting privacy doesn’t mean restricting AI from using personal information.
It means enabling usage—without giving it away. That’s a design problem.

AI and privacy can coexist.
But that coexistence will not be found in the cloud.
It will be realized through a rediscovery of the local—through the edge.

The Concept of Distributed National Infrastructure

Until now, national infrastructure was something centrally managed and deployed across an entire country. Power plants, communication networks, water systems, roads, and data centers—all followed a model of “build in one place, use everywhere.” It was the nation that built, protected, and supplied these systems.

But that structure is slowly starting to change.

As portions of information infrastructure and computational resources come to be operated by specific tech giants, the infrastructure that once sat beneath the authority of the state is beginning to form a structure parallel to it. And what’s coming next is a shift away from centralization—toward a physical and logical model of “distribution.”

Distribution doesn’t simply mean breaking things into smaller parts. It means separating locations, ownership, control rights, power sources, and networks. It means running each independently, while allowing them to function together as a single system. That, to me, is the core of what “distributed national infrastructure” means.

This kind of structure is often discussed in terms of redundancy in disasters or risk dispersion in geopolitics. But more importantly, I believe it becomes critical when we begin asking, “Under whose sovereignty does this infrastructure operate?”

Consider entities that belong to no central authority yet perform social functions equal to or greater than those of national infrastructure. Cloud services, blockchain networks, local compute clusters, off-grid energy systems—when combined, these create a new kind of infrastructure that transcends borders and legal systems.

Whether this becomes something that replaces the nation-state, or something that complements it, remains to be seen. But what’s clear is that infrastructure is no longer something exclusive to states.

Perhaps we are entering an era where infrastructure is not something built by the state, but something into which the state must now merge—beyond the constraints of geography and the linear flow of time.

Nations Beyond Big Tech

Corporations Backed by States and the Proxy Wars Driven by AI

A nation has traditionally been defined as an entity with territory, a population, a military, currency, and the right to conduct diplomacy. But that structure is quietly beginning to change.

Today, corporations are shouldering the roles of states, and beginning to surpass them. The Big Tech giants and companies like Tesla, backed by national governments, are wielding their financial power, computational resources, and information infrastructure to influence international society.

Corporations do not hold territory, but they control infrastructure. They do not have citizens, but they have users. They do not command armies, but they possess cyber capabilities and information dominance. They do not issue national currencies, but they have built their own economic spheres. They do not formally conduct diplomacy, but they negotiate across national borders.

In many ways, corporations are beginning to replace the traditional functions that states once held.

Moreover, corporations now receive direct energy and financial support from governments. As if endorsed by national policies to “use as much electricity as needed,” they are hoarding computational resources, developing AI, and expanding their power to control the very foundations of society.

The advancement of AI is accelerating this trend even further. Corporations that can control AI will dominate the information space. And those who dominate the information space will inevitably control the physical world.

The wars of the future will no longer be fought with military force.

We have entered an era of proxy wars between states, fought through corporations.

And ultimately, it may not be states that win—but corporations. States are becoming increasingly dependent on corporations, while corporations are beginning to use states as tools. And when the next dominant entity emerges, will we even be able to recognize it?

The Structure of the New Resource War

In the era of AI, what is the most valuable resource? When I think about it, the first things that come to mind are “GPUs” and “data.”

At the same time, the idea that these are important resources has already become common sense. The real question is what kind of resources they are.

During the Industrial Revolution, oil was the resource that moved nations. It supported industrial production, transportation, and even determined the outcomes of wars. It was said that whoever controlled oil controlled the world.

Today, GPUs are beginning to resemble oil. They drive generative AI, support military technologies, and stand at the frontlines of information warfare. Whether or not a nation possesses computational resources now determines its strategic strength.

I wrote about this perspective in “The Geopolitics of Computational and Energy Resources.”

However, the emergence of ChatGPT, and later DeepSeek, has made things a little more complicated. Having massive amounts of GPUs and data is no longer an absolute prerequisite. With the right model design and training strategy, it has been proven that even limited computational resources can produce disruptive results.

In other words, GPUs, once similar to oil, are now also beginning to resemble “currency.”

It’s not just about how much you have. It’s about where, when, and how you use it. Liquidity and strategic deployment determine outcomes. Mere accumulation is meaningless. Value is created by circulation and optimized utilization.

Given this, I believe the coming resource wars will have a two-layer structure.

One layer will resemble the traditional oil wars. Nations will hoard GPUs, dominate supply chains, and treat computational resources like hard currency.

The other layer will be more flexible and dynamic, akin to currency wars. Teams will compete on model design, data engineering, and chip architecture optimization—on how much performance they can extract from limited resources.

DeepSeek exemplified the second path. In an environment without access to cutting-edge GPUs, they optimized software and human resources to extract performance that rivals the world’s top models.

In short, simply possessing computational resources will no longer be enough. It will be essential to customize, optimize, and maximize the efficiency of what you have.

It’s not about “who has the most.” It’s about “who can use it best.”

I believe this is the structure of the new resource war in the AI era.

We Cannot Recognize the Singularity

We often hear people say, “The Singularity is coming.” However, lately, I’ve started to think—it’s not that it’s coming. It has already begun.

During the Industrial Revolution, those living through it didn’t think they were in a revolution. The invention of the steam engine was seen as just another new tool. Even when the railway network expanded and dramatically changed the speed at which people could move, it wasn’t called a “revolution” until much later.

When technology transforms society, it does so quietly, but surely. Those living through it can only see isolated “dots” of change. It’s only afterward, when the dots connect into lines and lines into planes, that the true scale becomes visible.

Today, generative AI is appearing everywhere. Writing text, creating images, generating voices, coding programs, supporting decision-making—activities that used to belong only to humans are gradually being replaced by AI.

Thinking back, there was the explosive spread of the internet, the practical implementation of GPUs, the paradigm shift to parallel processing, the mass adoption of smartphones. But we can no longer say exactly where it all started.

Most people probably think of smartphones or AI itself as revolutionary. But those are just points along the way. It’s likely that a revolution too massive to recognize is already underway.

Standing on the Earth, we don’t feel it racing through space at incredible speeds. Likewise, we are caught up in a vast movement right now. But from inside it, we cannot perceive our own motion.

The Singularity brought about by AI will be the same. We are already inside it.