Categories
Asides

Watt–Bit Integration

Cloud computing, AI—none of it exists without electricity.
Computation may appear abstract, but at its core, it is wattage.
Running GPUs, accessing storage, maintaining networks—everything runs on power.
In that sense, control over the digital world is, ultimately, control over electricity.

“Data sovereignty” is inseparable from energy sovereignty.
Whether it’s a nation or a company, anyone who wants to build and maintain the infrastructure of the next era shouldn’t start with servers or software.
They should start with land and electricity.

Where there is land, sustainable energy, and resilience against disaster,
that is where the foundations for next-generation data and AI will be built.
As a result, the structure of the internet is already shifting from “centralized” to “polycentric and distributed.”
In this emerging paradigm, the number of physical sites and the reliability of power flowing into them will become the new measure of competitiveness.

Until now, selling electricity has been the primary business model for renewable energy.
But even as the demand for total power increases, the nature of that demand is shifting away from heavy industry.
From here on, the question will be not how much electricity we can sell, but how efficiently we can convert electricity into computation.

Local energy consumption is no longer a lifestyle choice—it is becoming a strategic tool for regional infrastructure independence.
The real question is this: how much stable electricity can we provide to each square meter of land?

This is why watt–bit integration is so vital.
Electricity and compute must be designed together and deployed together.

To embed AI into society, we must first place the bit upon the watt.

What sustains the distributed future won’t be invisible models or code.
It will be wiring, voltage, terrain, and physical distance.

And in Japan’s rural regions, the possibility to build that foundation still exists.


Japan’s High-Context Expressions, Exported to the World

We now live in a time when meaning is often carried not by words, but by structure and movement itself.

Japanese culture has always been rooted in high-context expression. It doesn’t over-explain. It leaves meaning in the space between lines. It embeds implication in the background.
And now, those forms of expression have transcended national borders. They are being exported to the world not as dialogue, but as symbols—visual conventions that are directly understood. And as they mix with the styles of other cultures, they give rise to new visual grammars.

Among these, certain “idiomatic visual expressions” have become so culturally embedded that I hope we can begin to name and codify them explicitly.

Akira Slide
In the anime AKIRA, there’s a now-iconic scene where Kaneda skids to a stop on his red motorcycle. The friction, the sudden compression of motion—it’s become a visual shorthand.
“Cool motorcycle stop in animation = Akira Slide.”
This has now become a kind of global visual language. Not translated, but exported in form.

Major’s Drop
In Ghost in the Shell, there’s a moment where Major Motoko Kusanagi dives from the top of a skyscraper.
A silent fall. Gravity rendered quietly. The slow pan of the camera.
This visual—half-gravity, half-zero-gravity—has become a staple of cyberpunk film grammar.
The lack of spectacle creates tension.
Even now, decades later, it defines the atmosphere of a certain kind of cinematic world.

Itano Circus
In Macross and other works, Ichirō Itano created an unmistakable animation style involving missile trails.
Missiles move with complex, intertwined trajectories—leaving behind smoke, residual motion, and a kind of three-dimensional choreography.
It has become the visual standard for aerial missile combat.
“Itano Circus” is no longer just a name; it’s become a metaphor for a whole form of kinetic expression.

What these examples have in common is this: the meaning isn’t in words. It’s in movement. In structure. In visual grammar.
It’s not translated—it’s understood, because the memory of the motion itself functions like vocabulary.

This is Japanese high-context culture, not explained, but exported.

I want to keep observing this process—how such expressions become part of the world’s shared visual language.
Because it is both a record of cultural expansion and the birth of a new kind of vocabulary.


The Truth Is, “AI Uses Humans”

I’ve long believed that AI would enrich society as a whole.
But lately, I’ve started to feel that discrepancies in how we perceive AI are creating new kinds of dissonance—misalignments that feel, in some ways, like unhappiness.

To clarify: this so-called “unhappiness” is merely a projection from those of us who benefit from AI.
No one is actually a victim here.
It’s just that people involved with AI interpret the situation that way—perhaps arrogantly.

In May 2025, I experienced something that made this clearer.
Even as understanding of AI is spreading, there are still a significant number of people—surprisingly, even in positions of leadership—who seem to have given up on understanding it entirely.
Widen the lens a little, and it might even be the majority.

Some dismiss AI as “still not accurate enough.”
But I believe that misunderstanding stems from having a very low-resolution mental model of what AI is.
If you expect AI to handle everything for you, of course it’ll seem like it can’t do much.
But many modular tasks in society—units of human action—can already be performed by AI more precisely than by humans.

There are also those who lack the concept of giving instructions.
They’ve likely never experienced how dramatically results change when AI is given clear, high-quality input.
In human-to-human communication, vague requests like “take care of this” often work because of shared context.
But with AI, that kind of ambiguity fails.
To then judge the AI as “useless” is really a failure in interface design.

Another issue is the narrowness of perspective.
If you judge AI based solely on the Japanese language environment or Japan’s current digital infrastructure, your reading of the technology will be dangerously off.
From within such a “Galápagos” context, it’s impossible to perceive global-scale changes accurately.

But what surprised me most was just how many people still think of AI as something “humans use.”
There’s this vague belief that “if everyone starts using AI, society will improve.”
And to that, I feel a deep disconnect.

Let me use an example.

Right now, if someone wants to get somewhere, the process looks like this:

  1. Decide on a destination
  2. Search for it in a map app
  3. Choose a method of transportation
  4. Understand the route and prepare
  5. Follow navigation to get there

If AI is involved, the process changes to:

  1. Tell AI the purpose of the trip
  2. Choose from its suggestions
  3. Follow navigation

This is what a society looks like when “humans use AI.”
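The structural difference between the two workflows can be sketched in code. Everything here is illustrative: the function names, step strings, and arguments are hypothetical, meant only to make the collapse of explicit steps visible.

```python
# Hypothetical sketch: the same goal ("get somewhere") under two paradigms.
# Function names and step descriptions are illustrative, not a real API.

def travel_today(destination: str) -> list[str]:
    """Today's five-step process, entirely human-driven."""
    return [
        f"decide on a destination: {destination}",
        "search for it in a map app",
        "choose a method of transportation",
        "understand the route and prepare",
        "follow navigation to get there",
    ]

def travel_human_uses_ai(purpose: str) -> list[str]:
    """The person still drives the process; AI compresses the steps."""
    return [
        f"tell AI the purpose of the trip: {purpose}",
        "choose from its suggestions",
        "follow navigation",
    ]

# Five explicit steps collapse to three; the trend line points toward zero.
print(len(travel_today("Kyoto")))          # 5
print(len(travel_human_uses_ai("lunch")))  # 3
```

The next phase described below, where the count of conscious steps reaches one or zero, is exactly what no longer fits a function the person calls themselves.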

But in the next phase, we may need to design society under the premise that “AI uses humans.”
In that world, the process might look like this:

  1. The goal is achieved—without the person ever realizing it

There would be no conscious act of deciding to move.
If movement is needed, it simply happens.
Self-driving vehicles, remote communications, visual technologies, or even AI-mediated decision inputs could lead a person to action—before they ever formulate the desire themselves.

That kind of future may still be distant.
But even in the near term, think about how long the act of “searching for a restaurant and checking the route” will remain.
With AI handling logistics, navigation, traffic control, vehicle design—it’ll all be quietly optimized away.

And when that happens, the average person won’t even realize they’re “using AI.”
They’ll just feel that life got more convenient.
They’ll say, “How did we ever do this before?”
Just like we do now with smartphones.

AI-driven optimization will rapidly permeate our infrastructure.
Only a tiny number of people will be directly involved in that transformation.
It’ll happen far faster than traditional methods ever could.
Entire industries will shift.
Most people will simply be beneficiaries of the change—and only notice it long after it’s already taken hold.

The idea that “humans use AI” is no longer enough.
From now on, our decision-making must be based on the premise that “AI is using humans.”

And I, someone who advocates for AI, who is deeply invested in its growth,
I too have had my thinking shaped by it.
I benefit from it.
And that drives me forward.

But I found myself asking—

Is that really my own will?


What I Truly Wanted to Say

In Japanese culture, there is a tradition called the jisei, or death poem.
But for us living today, it’s difficult to understand its meaning without explanation.

Haiku is inherently high-context. And when it comes to jisei, you need to understand the poet’s historical background and life context as well. That’s why explanation becomes necessary.

But did the poet really think explanation was needed? Perhaps they believed that, with enough cultural literacy, their words would be understood without saying more.

Not long ago, I had an experience where I realized something I’d been trying to say hadn’t actually gotten through.
It was a concept I thought I’d explained many times, over many years. Then one day, someone said, “I finally understand. Is this what you meant?”
Their understanding was accurate.
But at the same time, I realized that the core premise I thought I had conveyed had never been shared to begin with.

I’d assumed it had already been communicated. That I’d laid the foundation and was building on it. But in fact, the foundation wasn’t even there.

That moment made me pause.
Maybe this wasn’t the only time. Maybe many other things I’ve said over the years haven’t truly been heard.
Maybe I’ve just been assuming I was being understood, when in reality, nothing had reached the listener.

The way I communicated was likely at fault.
If the result wasn’t there, the responsibility lies with the one speaking.

But I also wondered—was there really a need to say it in the first place?
Maybe I’d been trying to communicate things no one had asked for. Driven by the assumption that they needed to be said.

I don’t think it’s necessary to explain everything or be perfectly understood. That’s impossible.
I’m not trying to pass down some legacy.

When action and outcome are what matter, communication is just a means to an end. The act of telling shouldn’t become the purpose itself.

No matter how beautiful the image rendered by the GPU, it’s meaningless if the monitor lacks resolution.
The limits of output are defined by the monitor—by me.
That means I needed to improve the resolution of my own expression.

In this case, the shift happened because of timing.
The cultural moment had changed. A real, painful experience gave the listener additional context.
So when I said the exact same thing again, it finally came through—smoothly, effortlessly.

The listener’s eyes were open. Their focus was aligned.
All the timing was right.
And in that moment, all I had to do was present the same image again—at the correct resolution, with the right context.
Without reading the situation well, that never would’ve worked.

There’s something else.
Maybe the reason my words hadn’t landed before was because they didn’t contain any specific action or outcome.
Strictly speaking, there was only one thing I’d been trying to achieve all along.

In the manga Orb: On the Movements of the Earth (Chi), there’s a scene where someone asks Yorenta, “What are you even talking about?”
And she replies:

“You don’t understand? I’m desperately trying to share my wonder.”

That’s it.
All this time, I’d only wanted to share a sense of wonder.
I thought that was what I was meant to do. That it was everything.

If that wonder doesn’t come across, people won’t move. Society won’t listen.
So I don’t think what I’ve done has been meaningless.
But I’ve also realized that wonder alone isn’t enough.

That’s why I’ve decided to change how I communicate.


The Beauty of Design and the Difficulty of Execution

There is often a wide gap between imagining a perfect design and executing it exactly as envisioned.
This is especially true for projects involving many people, such as hardware or software development.
Things rarely go as planned. Assumptions change, environments shift, and unforeseen variables inevitably emerge.

There’s a book called How Big Things Get Done, released in Japan under the title Big Things.
It examines large-scale projects that succeeded despite tremendous complexity.
What stuck with me most were two principles highlighted as key to those successes:

  1. Design carefully
  2. Execute quickly

The longer it takes to complete something, the more the variables will change.
That’s why we must design with care—but move swiftly when it comes time to implement.
The goal is to lock in the core structure before external conditions have a chance to shift.
To do that, modularize. Work in the smallest possible units that are least likely to be affected by change.

Yesterday, while walking through Kyoto, I was reminded of this.

There’s a local culture of using intersecting street names to describe destinations.
When giving directions to a taxi, saying “the corner of X Street and Y Street” immediately places you on a shared mental map—a kind of two-dimensional coordinate system understood intuitively by locals.
It felt as if the entire city, including its inhabitants, shared a built-in protocol for spatial communication.
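That convention can be read as a tiny addressing protocol: two street names jointly identify one point. A minimal sketch, using real Kyoto street names but purely illustrative coordinate values:

```python
# Minimal sketch of intersection-based addressing, as in Kyoto's grid.
# Street names are real; the coordinate values are illustrative.

# North-south and east-west streets each map onto one axis.
ns_streets = {"Karasuma": 0, "Kawaramachi": 1}   # x-axis
ew_streets = {"Shijo": 0, "Sanjo": 1}            # y-axis

def locate(ns: str, ew: str) -> tuple[int, int]:
    """Resolve 'the corner of X Street and Y Street' to a grid coordinate."""
    return (ns_streets[ns], ew_streets[ew])

# "The corner of Karasuma and Shijo" resolves unambiguously:
print(locate("Karasuma", "Shijo"))  # (0, 0)
```

The design insight is that neither name alone locates anything; the pair does, and every local carries both lookup tables in their head.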

That level of coordination must be the result of brilliant design.
Urban planning is vastly more complex than hardware design.
It spans generations, involves countless stakeholders, and is never truly complete.
And yet, the structure has endured.
Not because it was executed exactly as intended, but perhaps because it was designed to absorb change—or because its core ideas have continued to resonate.

A city like Kyoto wasn’t built at breakneck speed.
Perhaps the most fundamental layer was established rapidly at the outset, anchoring the rest.
Or perhaps there was a mechanism in place to prevent deviations at the implementation stage.
Maybe it was shared values—almost ideological—that compelled each generation to honor the original intent, reinforcing it through intrinsic motivation at the ground level.

I didn’t spend much more time thinking about Kyoto itself.
But the ideal relationship between design and execution—that’s something I’ve been rethinking ever since.


Data Overpayment

For the past 20 years or so, we have become too accustomed to the idea of a “free internet.” Search engines, social media, email, maps, translation services—all of it seemed free. Or at least, it felt that way.

But in reality, nothing was ever free.

We were paying not with money, but with data. Names, interests, location history, purchase records, sleep patterns, social connections, facial images—all of it was handed over as “payment” to sustain business models.

The problem is that we paid more than we needed to.

Did using a map really require disclosing our family structure? Did translation apps really need to track our location history? We never truly scrutinized how much information was being asked of us—or whether it was justifiable.

Worse, we no longer remember what we gave away.

This may, at some point, become a social issue we recognize as “data overpayment.”

Data overpayment is not a single event or sudden incident.
It is the slow accumulation of loss over years, even decades.
By the time we notice, we may no longer know what we’ve already lost.

But as we enter the age of AI, that structure is starting to shift. A new set of questions is emerging around our personal data—how it is used in model training, who has control over it, where it’s recorded, and how transparent that process can be.

What if we could know where our data is used? What if we could choose how it is used? What if we had the right to retract the data we shared in the past?
If that were possible, then the economic, legal, and ethical meaning of “data” itself would be dramatically redefined.

Data is not something to be sold off. It’s something to be licensed for use.
Data is owned—not traded, but governed through conditions.
If this perspective becomes more widely accepted, we might finally begin to correct the overpayment that has built up over the past two decades.
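One way to make “governed through conditions” concrete is to model a data grant as a revocable license rather than a transfer. The fields and checks below are hypothetical, a sketch of the idea rather than any existing standard:

```python
from dataclasses import dataclass, field

@dataclass
class DataLicense:
    """A grant of use over personal data: conditional and revocable, not a sale."""
    owner: str
    data_kind: str                          # e.g. "location history"
    allowed_purposes: set = field(default_factory=set)
    revoked: bool = False

    def permits(self, purpose: str) -> bool:
        """Use is allowed only while the license stands and the purpose matches."""
        return not self.revoked and purpose in self.allowed_purposes

    def revoke(self) -> None:
        """The owner can withdraw consent at any time, even retroactively."""
        self.revoked = True

lic = DataLicense("me", "location history", {"navigation"})
print(lic.permits("navigation"))    # True
print(lic.permits("ad targeting"))  # False
lic.revoke()
print(lic.permits("navigation"))    # False
```

The point of the sketch is the asymmetry it encodes: purposes outside the grant fail by default, and revocation is a right the owner holds, not a favor the holder extends.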

It’s time we start treating our data as something that truly belongs to us.


Balancing Privacy and AI

The cloud is convenient. But more and more people are beginning to feel a quiet discomfort with entrusting everything to it.

Information is stored, utilized, linked, and predicted. Our behaviors, emotions, preferences, and relationships are being processed in places we can’t see. This unease is no longer limited to a tech-savvy minority.

So what can we do to protect privacy in a world like this?
One possible answer, I believe, is to bring AI down from the cloud.

As we can see in Apple’s recent strategic direction, AI is shifting from the cloud to the device itself. Inside the iPhone, inside the Mac, AI knows you, processes your data, and crucially—doesn’t send it out.

When computing power resides locally and your data stays in your hands, that convergence creates a new kind of architecture—one where security isn’t just about convenience, but trust. This is how a “safer-than-cloud” AI environment can emerge.

In this context, the question of “Where does AI run?” becomes more than just a technical choice. It evolves into a political and ethical question: “Who holds your data?” and equally important, “Who does not see it?”

This shift opens the door to new architectural possibilities.
When individuals hold their own data and run their own models locally, we create a form of AI that operates with a completely different risk structure and trust model than large-scale cloud systems.

In an era where the cloud itself is starting to feel “uncomfortable,” the real question becomes: Where is computation happening, and who is it working for?

Protecting privacy doesn’t mean restricting AI from using personal information.
It means enabling usage—without giving it away. That’s a design problem.

AI and privacy can coexist.
But that coexistence will not be found in the cloud.
It will be realized through a rediscovery of the local—through the edge.


Digital Inbound

Until now, the word “inbound” has mostly been used in the context of tourism. People come from overseas. Products are sold. Culture is shared. Inbound meant creating systems that welcomed people, goods, and money into the country.

But today, a new kind of inbound is beginning to take shape.
Not people—but data—is coming.
In other words, we’re entering an era in which “information processing” crosses borders and comes to Japan.

Startups and research institutions from around the world are beginning to choose Japan as the place to train and deploy their AI models—not despite the regulations, but because of them. Because the legal frameworks are stable. Because the power supply is consistent. Because the local infrastructure is safe. And above all, because Japan is seen as a place where things can run in peace. There’s also the institutional integrity—data won’t leak even if someone attempts to subvert the system.

What’s happening here isn’t outsourcing or delegation.
What’s coming is not people, but computation, processing, information itself, and the use of infrastructure.
This is not tourism. It is the use of Japan’s physical infrastructure.

I believe this is a phenomenon we should call digital inbound.

Within this structure, Japan’s greatest value is in being a trustworthy foundation.
It’s not just about computing power, power grid reliability, or legal frameworks.
It’s about confidence that data won’t be extracted without permission.
Stability, knowing that rules won’t suddenly change.
Trust, that when something goes wrong, someone will be there to respond.
A proven track record of resilience in the face of disasters.
These intangible layers are beginning to define the value of Japan as a digital territory.

In the financial world, places like Manhattan, Hong Kong, and later Singapore once played similar roles.
They became “locations” where information and capital gathered—not because people were already there, but because the systems in place made it safe for people and information to arrive.

Now, the world no longer revolves around cities with growing populations.
AI doesn’t need crowds.
IoT doesn’t require human presence.
In fact, the very absence of people may make certain environments ideal for IoT.
Where there is land, energy, and social calm, AI and IoT will come to live.

In places once dismissed as “worthless because no one lives there,” we may soon see a new logic emerge—“valuable precisely because no one is there.”

Land that’s comfortable for AI.
Legal systems that are gentle on data.
Energy infrastructure with minimal friction.
Taken together, these factors are already starting to shift how Japan is being reevaluated by the world.


What Kind of Literacy Is Required of Citizens in the Democratic Age of Computational Resources

Democracy, at its core, is built on the premise that sovereignty belongs to the people. But as we’ve passed through the information age and entered the age of AI, the very question of what sovereignty means is beginning to shift.

In today’s world—where computational resources, electricity, and data can influence the fate of nations and the direction of society—how can citizens, as sovereign actors, recognize and exercise their sovereignty?

In the information age, sovereignty meant choosing which sources to trust, which platforms to participate in, and which algorithms to entrust with our attention. But in the age of AI, that definition requires a deeper level of inquiry.

For example, we now have to ask: which computational resources processed the information that underpins our decisions?
Where were the models trained? Under what national legal frameworks and ethical principles were they built?
Where does the electricity come from, and who controls the compute processes?
All of these questions are directly linked to how and what we think.
It increasingly feels as if computational resources are becoming the new foundation of sovereignty.

In this era, having the right to vote may no longer be enough to be a true sovereign.
We also need to understand where our data is stored, under what nation’s rules our cloud operates, and which computational infrastructures are supporting our decision-making.
That ability to understand and choose is what I would call the literacy required of sovereign citizens in the era of computational resources.

If we entrust everything to Big Tech, we are, often without realizing it, relinquishing our sovereignty.
Which compute environments can we access?
To which computational infrastructures do we submit our data?
These may now be political rights in their own right.

So what kind of literacy do we need in this age?

Not just technical understanding, but literacy that spans systems, energy, ethics, and the meaning of decentralization.
Knowing which computational ecosystem we live upon may be one of the most important forms of awareness we can have.
That, I believe, will be a new prerequisite for democracy in the age of AI.


Who Owns the Cloud

The cloud was once seen as belonging to no one—or at least, that’s how it felt.

Despite being built and operated by someone, we’ve long used it freely, entrusted our data to it, and become dependent on it, without treating it like “land” that can be owned. The cloud exists physically on some server somewhere, yet where it is has never seemed important.

In that sense, “cloud” was a triumph of branding.

But now that AI has become foundational to everything, and computational resources have emerged as the new currency of power, the cloud is once again under scrutiny.
Whose is it?
Who owns it?
Who has the right to use it?
Who controls access?

Just like land, water, or energy once did, the cloud now wavers between being public and private.

Today, decentralized data centers—what could be called distributed cloud infrastructures—are starting to appear in various regions. These are not provided by governments, nor should they be monopolized by any single corporation. Ideally, they should be owned by communities, used by schools and hospitals, and joined by citizens. These networks of computational resources could function as part of the societal infrastructure, much like waterworks or power grids once did.

Of course, this may be inefficient. It might be costly. Integration with existing infrastructure won’t be easy.
But between a future where everything is entrusted to one massive compute environment somewhere far away, and a society where small, reliable pockets of compute capacity exist across regions—surely the latter deserves more attention and discussion.

Beyond technical concerns, the cloud also needs diversity—politically and culturally.
This diversity means freedom of computation, freedom of thought, and freedom of choice.

So who owns the cloud?
I believe that should be decided by its users.
Perhaps it’s time to shift from a model where we’re merely “allowed to use” the cloud, to one where we “own it together.”
