Categories
Asides

What I Truly Wanted to Say

In Japanese culture, there is a tradition called the jisei, or death poem.
But for those of us living today, its meaning is difficult to grasp without explanation.

Haiku is inherently high-context. And when it comes to jisei, you need to understand the poet’s historical background and life context as well. That’s why explanation becomes necessary.

But did the poet really think explanation was needed? Perhaps they believed that, with enough cultural literacy, their words would be understood without saying more.

Not long ago, I had an experience where I realized something I’d been trying to say hadn’t actually gotten through.
It was a concept I thought I’d explained many times, over many years. Then one day, someone said, “I finally understand. Is this what you meant?”
Their understanding was accurate.
But at the same time, I realized that the core premise I thought I had conveyed had never been shared to begin with.

I’d assumed it had already been communicated. That I’d laid the foundation and was building on it. But in fact, the foundation wasn’t even there.

That moment made me pause.
Maybe this wasn’t the only time. Maybe many other things I’ve said over the years haven’t truly been heard.
Maybe I’ve just been assuming I was being understood, when in reality, nothing had reached the listener.

The way I communicated was likely at fault.
If the message didn’t land, the responsibility lies with the speaker.

But I also wondered—was there really a need to say it in the first place?
Maybe I’d been trying to communicate things no one had asked for. Driven by the assumption that they needed to be said.

I don’t think it’s necessary to explain everything or be perfectly understood. That’s impossible.
I’m not trying to pass down some legacy.

When action and outcome are what matter, communication is just a means to an end. The act of telling shouldn’t become the purpose itself.

No matter how beautiful the image rendered by the GPU, it’s meaningless if the monitor lacks resolution.
The limits of output are defined by the monitor—by me.
That means I needed to improve the resolution of my own expression.

In this case, the shift happened because of timing.
The cultural moment had changed. A real, painful experience gave the listener additional context.
So when I said the exact same thing again, it finally came through—smoothly, effortlessly.

The listener’s eyes were open. Their focus was aligned.
All the timing was right.
And in that moment, all I had to do was present the same image again—at the correct resolution, with the right context.
Without reading the situation well, that never would’ve worked.

There’s something else.
Maybe the reason my words hadn’t landed before was because they didn’t contain any specific action or outcome.
Strictly speaking, there was only one thing I’d been trying to achieve all along.

In the manga Chi: About the Movement of the Earth, there’s a scene where someone asks Yorenta, “What are you even talking about?”
And she replies:

“You don’t understand? I’m desperately trying to share my wonder.”

That’s it.
All this time, I’d only wanted to share a sense of wonder.
I thought that was what I was meant to do. That it was everything.

If that wonder doesn’t come across, people won’t move. Society won’t listen.
So I don’t think what I’ve done has been meaningless.
But I’ve also realized that wonder alone isn’t enough.

That’s why I’ve decided to change how I communicate.

The Beauty of Design and the Difficulty of Execution

There is often a wide gap between imagining a perfect design and executing it exactly as envisioned.
This is especially true for projects involving many people, such as hardware or software development.
Things rarely go as planned. Assumptions change, environments shift, and unforeseen variables inevitably emerge.

There’s a book called How Big Things Get Done.
It introduces large-scale architectural projects that succeeded despite tremendous complexity.
What stuck with me most were two principles highlighted as key to those successes:

  1. Design carefully
  2. Execute quickly

The longer it takes to complete something, the more the variables will change.
That’s why we must design with care—but move swiftly when it comes time to implement.
The goal is to lock in the core structure before external conditions have a chance to shift.
To do that, modularize. Work in the smallest possible units that are least likely to be affected by change.

Yesterday, while walking through Kyoto, I was reminded of this.

There’s a local culture of using intersecting street names to describe destinations.
When giving directions to a taxi, saying “the corner of X Street and Y Street” immediately places you on a shared mental map—a kind of two-dimensional coordinate system understood intuitively by locals.
It felt as if the entire city, including its inhabitants, shared a built-in protocol for spatial communication.
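That shared protocol can be sketched as a toy lookup. The street names below are real Kyoto streets, but the grid positions and the `locate` function are invented purely for illustration; this is a sketch of the idea, not a map.

```python
# A minimal sketch of intersection-based addressing: two street names
# act as coordinates on a shared grid. The positions assigned here are
# hypothetical placeholders, not actual geography.

# North-south streets indexed by east-west position (x),
# east-west streets indexed by north-south position (y).
NS_STREETS = {"Karasuma": 0, "Kawaramachi": 1, "Horikawa": -1}
EW_STREETS = {"Shijo": 0, "Sanjo": 1, "Gojo": -1}

def locate(ns_street: str, ew_street: str) -> tuple[int, int]:
    """Resolve 'the corner of X Street and Y Street' to a shared map point."""
    return (NS_STREETS[ns_street], EW_STREETS[ew_street])

# "The corner of Karasuma and Shijo" pins down a unique point
# for anyone who carries the same mental table.
print(locate("Karasuma", "Shijo"))  # (0, 0)
```

The design insight is that two small shared dictionaries are enough: neither party needs the full map, only the convention for combining the two names.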

That level of coordination must be the result of brilliant design.
Urban planning is vastly more complex than hardware design.
It spans generations, involves countless stakeholders, and is never truly complete.
And yet, the structure has endured.
Not because it was executed exactly as intended, but perhaps because it was designed to absorb change—or because its core ideas have continued to resonate.

A city like Kyoto wasn’t built at breakneck speed.
Perhaps the most fundamental layer was established rapidly at the outset, anchoring the rest.
Or perhaps there was a mechanism in place to prevent deviations at the implementation stage.
Maybe it was shared values—almost ideological—that compelled each generation to honor the original intent, reinforcing it through intrinsic motivation at the ground level.

I didn’t spend much more time thinking about Kyoto itself.
But the ideal relationship between design and execution—that’s something I’ve been rethinking ever since.

Data Overpayment

For the past 20 years or so, we have become too accustomed to the idea of a “free internet.” Search engines, social media, email, maps, translation services—all of it seemed free. Or at least, it felt that way.

But in reality, nothing was ever free.

We were paying not with money, but with data. Names, interests, location history, purchase records, sleep patterns, social connections, facial images—all of it was handed over as “payment” to sustain business models.

The problem is that we paid more than we needed to.

Did using a map really require disclosing our family structure? Did translation apps really need to track our location history? We never truly scrutinized how much information was being asked of us—or whether it was justifiable.

Worse, we no longer remember what we gave away.

This may, at some point, become a social issue we recognize as “data overpayment.”

Data overpayment is not a single event or sudden incident.
It is the slow accumulation of loss over years, even decades.
By the time we notice, we may no longer know what we’ve already lost.

But as we enter the age of AI, that structure is starting to shift. A new set of questions is emerging around our personal data—how it is used in model training, who has control over it, where it’s recorded, and how transparent that process can be.

What if we could know where our data is used? What if we could choose how it is used? What if we had the right to retract the data we shared in the past?
If that were possible, then the economic, legal, and ethical meaning of “data” itself would be dramatically redefined.

Data is not something to be sold off. It’s something to be licensed for use.
Data is owned—not traded, but governed through conditions.
If this perspective becomes more widely accepted, we might finally begin to correct the overpayment that has built up over the past two decades.

It’s time we start treating our data as something that truly belongs to us.

Balancing Privacy and AI

The cloud is convenient. But more and more people are beginning to feel a quiet discomfort with entrusting everything to it.

Information is stored, utilized, linked, and predicted. Our behaviors, emotions, preferences, and relationships are being processed in places we can’t see. This unease is no longer limited to a tech-savvy minority.

So what can we do to protect privacy in a world like this?
One possible answer, I believe, is to bring AI down from the cloud.

As we can see in Apple’s recent strategic direction, AI is shifting from the cloud to the device itself. Inside the iPhone, inside the Mac, AI knows you, processes your data, and crucially—doesn’t send it out.

When computing power resides locally and your data stays in your hands, a new kind of architecture becomes possible, one built not only on convenience but on trust. This is how an AI environment that is "safer than the cloud" can emerge.

In this context, the question of “Where does AI run?” becomes more than just a technical choice. It evolves into a political and ethical question: “Who holds your data?” and equally important, “Who does not see it?”

This shift opens the door to new architectural possibilities.
When individuals hold their own data and run their own models locally, we create a form of AI that operates with a completely different risk structure and trust model than large-scale cloud systems.

In an era where the cloud itself is starting to feel “uncomfortable,” the real question becomes: Where is computation happening, and who is it working for?

Protecting privacy doesn’t mean restricting AI from using personal information.
It means enabling usage—without giving it away. That’s a design problem.

AI and privacy can coexist.
But that coexistence will not be found in the cloud.
It will be realized through a rediscovery of the local—through the edge.

Digital Inbound

Until now, the word “inbound” has mostly been used in the context of tourism. People come from overseas. Products are sold. Culture is shared. Inbound meant creating systems that welcomed people, goods, and money into the country.

But today, a new kind of inbound is beginning to take shape.
Not people—but data—is coming.
In other words, we’re entering an era in which “information processing” crosses borders and comes to Japan.

Startups and research institutions from around the world are beginning to choose Japan as the place to train and deploy their AI models—not despite the regulations, but because of them. Because the legal frameworks are stable. Because the power supply is consistent. Because the local infrastructure is safe. And above all, because Japan is seen as a place where things can run in peace. There is also institutional integrity: confidence that data won't leak, even if someone attempts to subvert the system.

What’s happening here isn’t outsourcing or delegation.
What's coming is not people, but computation, processing, and information itself.
This is not tourism. It is the use of Japan's physical infrastructure.

I believe this is a phenomenon we should call digital inbound.

Within this structure, Japan’s greatest value is in being a trustworthy foundation.
It’s not just about computing power, power grid reliability, or legal frameworks.
It’s about confidence that data won’t be extracted without permission.
Stability, knowing that rules won’t suddenly change.
Trust that, when something goes wrong, someone will be there to respond.
A proven track record of resilience in the face of disasters.
These intangible layers are beginning to define the value of Japan as a digital territory.

In the financial world, places like Manhattan, Hong Kong, and later Singapore once played similar roles.
They became “locations” where information and capital gathered—not because people were already there, but because the systems in place made it safe for people and information to arrive.

Now, the world no longer revolves around cities with growing populations.
AI doesn’t need crowds.
IoT doesn’t require human presence.
In fact, the very absence of people may make certain environments ideal for IoT.
Where there is land, energy, and social calm, AI and IoT will come to live.

In places once dismissed as “worthless because no one lives there,” we may soon see a new logic emerge—“valuable precisely because no one is there.”

Land that’s comfortable for AI.
Legal systems that are gentle on data.
Energy infrastructure with minimal friction.
Taken together, these factors are already starting to shift how Japan is being reevaluated by the world.

What Kind of Literacy Is Required of Citizens in the Democratic Age of Computational Resources

Democracy, at its core, is built on the premise that sovereignty belongs to the people. But as we’ve passed through the information age and entered the age of AI, the very question of what sovereignty means is beginning to shift.

In today’s world—where computational resources, electricity, and data can influence the fate of nations and the direction of society—how can citizens, as sovereign actors, recognize and exercise their sovereignty?

In the information age, sovereignty meant choosing which sources to trust, which platforms to participate in, and which algorithms to entrust with our attention. But in the age of AI, that definition requires a deeper level of inquiry.

For example, we now have to ask: which computational resources processed the information that underpins our decisions?
Where were the models trained? Under what national legal frameworks and ethical principles were they built?
Where does the electricity come from, and who controls the compute processes?
All of these questions are directly linked to how and what we think.
It increasingly feels as if computational resources are becoming the new foundation of sovereignty.

In this era, having the right to vote may no longer be enough to be a true sovereign.
We also need to understand where our data is stored, under what nation’s rules our cloud operates, and which computational infrastructures are supporting our decision-making.
That ability to understand and choose is what I would call the literacy required of sovereign citizens in the era of computational resources.

If we entrust everything to Big Tech, we are, often without realizing it, relinquishing our sovereignty.
Which compute environments can we access?
To which computational infrastructures do we submit our data?
These choices may now be political rights in their own right.

So what kind of literacy do we need in this age?

Not just technical understanding, but literacy that spans systems, energy, ethics, and the meaning of decentralization.
Knowing which computational ecosystem we live upon may be one of the most important forms of awareness we can have.
That, I believe, will be a new prerequisite for democracy in the age of AI.

Who Owns the Cloud

The cloud was once seen as belonging to no one—or at least, that’s how it felt.

Despite being built and operated by someone, we’ve long used it freely, entrusted our data to it, and become dependent on it, without treating it like “land” that can be owned. The cloud exists physically on some server somewhere, yet where it is has never seemed important.

In that sense, “cloud” was a triumph of branding.

But now that AI has become foundational to everything, and computational resources have emerged as the new currency of power, the cloud is once again under scrutiny.
Whose is it?
Who owns it?
Who has the right to use it?
Who controls access?

Just as land, water, and energy once did, the cloud now wavers between being public and private.

Today, decentralized data centers—what could be called distributed cloud infrastructures—are starting to appear in various regions. These are not provided by governments, nor should they be monopolized by any single corporation. Ideally, they should be owned by communities, used by schools and hospitals, and joined by citizens. These networks of computational resources could function as part of the societal infrastructure, much like waterworks or power grids once did.

Of course, this may be inefficient. It might be costly. Integration with existing infrastructure won’t be easy.
But between a future where everything is entrusted to one massive compute environment somewhere far away, and a society where small, reliable pockets of compute capacity exist across regions—surely the latter deserves more attention and discussion.

Beyond technical concerns, the cloud also needs diversity—politically and culturally.
This diversity means freedom of computation, freedom of thought, and freedom of choice.

So who owns the cloud?
I believe that should be decided by its users.
Perhaps it’s time to shift from a model where we’re merely “allowed to use” the cloud, to one where we “own it together.”

Japan as a Choice

As information infrastructure becomes tied to national strategy, and both cloud and AI are increasingly framed within the context of geopolitics, nations are now faced with a decision: which information network to connect to, and on which compute infrastructure to build their society.

Many countries have effectively left that decision to Big Tech. The American cloud, or the Chinese cloud—not so much a matter of choosing, but of being absorbed into one or the other. In parts of Europe, there are now efforts to build “sovereign” systems, but even those often amount to little more than a reshuffling of dependencies.
This is something I felt directly, through discussions I had at CERN.

In this context, I’ve been thinking about the potential of a third option: Japan.

Not because Japan is technologically superior. In terms of compute resources, latent energy reserves, software competitiveness—Japan may in fact be at a relative disadvantage.

Even so, Japan holds a unique kind of value: neutrality, transparency, and trust—layers that aren’t easily quantified.

It is a rule-of-law nation with high disaster resilience, a cautious stance toward global-scale data usage, and a strong layer of social security. These form the foundation of what might be called national-level "assurance."

In training AI models, it’s no longer just about how much you can compute. Where the data is processed, and under what ethical standards, now directly impacts long-term value. Ethics itself has become part of the infrastructure.

That’s why I believe that choosing Japan—specifically, the combination of its compute infrastructure and its legal framework—may increasingly hold structural significance. As companies, organizations, and even individual developers begin to consider “where to run” their projects, Japan may come to be seen as a politically and culturally “acceptable” nation.

Just as Switzerland, New York, Hong Kong, and Singapore once played such roles in the world of finance, Japan, or more precisely its regional cities, could become a new center.
Perhaps the world is already beginning to seek out this option called Japan.

The Age of a Compute-Backed Economy Where Semiconductors Anchor Trust

In the past, the foundation of the economy was gold.
Under the gold standard, currency was backed by physical assets.
The rarity of gold itself directly reflected the credibility of a nation and the value of its currency. That was the world we once lived in.

After a long phase of economic expansion unmoored from tangible assets, we are now entering a world where computational capacity is beginning to take the place once held by gold.

AI has become the foundation of all economic activity.
Industries are run by models. Decisions are made by computation.
In such a society, value is no longer created by labor—but by computational resources themselves.

And what are computational resources?
They are, in the physical sense, electricity, compute devices, cooling infrastructure, policy-governed access, and above all, semiconductors.

In the coming world, a nation’s credibility will be determined by how much it can compute.
National power will increasingly reflect the total computational capacity it controls.
A country that possesses semiconductor design and fabrication capabilities—and the energy and infrastructure to operate them—will be able to anchor its currency with computational resources.

This represents a transition into a compute-backed economic system.

Where once nations signaled their monetary credibility with gold reserves, they may soon point to the total number of GPGPUs they own, the strength of their AI training infrastructure, or the volume of high-quality data they control.
We may enter a world where it is reasonable to say, “Our currency is stable because we possess sufficient compute.”
It’s possible that compute has already rewritten the very concept of military power.

Computational resources are invisible. And their value is fluid.
Electricity prices, cooling efficiency, software optimization, algorithmic efficiency, and data quality—all of these dynamically affect the credibility of a currency.
This is a real-time economic foundation, so dynamic that humans alone may be unable to grasp it.
It presupposes communication between AIs.
And for any nation that lacks the computational resources to participate in that communication, the end may already be near.

Until now, the economy has been driven by “intangible trust.”
But in the age of AI, it is “the total executable compute” that becomes the final form of trust.
And at the core of that trust lies the hard fact of how much compute a nation possesses—and governs—within its borders.

Semiconductors, electricity, and data are no longer merely parts of industrial structure.
They underpin currency and sovereignty.
And the nation that supports them will be the one that holds the next global reserve currency.

How to Turn Forgotten Resources into Infrastructure

There are resources in society that are no longer in use. They once had value, but over time, as structures changed, they were forgotten, left behind, and left untouched. Vast tracts of land abandoned due to natural disasters or depopulation. Decommissioned power infrastructure. Obsolete telecom stations. Remote plots of land and tunnels no one visits anymore. These are resources left behind by shifting industrial structures—forgotten, but not gone.

If the structure changes again, these resources may take on new meaning. Especially in an era driven by computational power, these physical infrastructures can function as “foundations for computation.” There’s electricity. Land that can dissipate heat. Environments with high tolerance for noise. Cheap land and municipalities open to collaboration. Water sources and climates ideal for cooling. From a different perspective, these may have always been “ideal infrastructures.” It’s not that they lacked value—they simply hadn’t been redefined yet.

When urban depopulation accelerates and rural populations decline, we tend to assume that the value of those regions is lost. But I believe that’s a human-centered—and deeply arrogant—assumption. For AI and IoT, the presence of people is not essential. What matters is whether data can be collected, electricity is available, and there’s access to the internet. For them, the optimal environment isn’t necessarily the city. In fact, rural areas—less interference, more available power and space, and infrastructure that can be redesigned from scratch—might be “natural” habitats for AI and IoT. Just as wildflowers grow where human hands do not reach, it is in these quiet places that the information infrastructure of the future may take root.

In finance, Manhattan once served as a hub, and later, Singapore did too—each backed by policy, tax regimes, and geopolitical positioning. If “geographic advantage” takes on new meaning, then Japan’s rural regions still have a chance. Japan is a rule-of-law country with stable power infrastructure and a high degree of safety. From the standpoint of human-centric life, it may appear resource-poor, but if we look across the country with localized renewable energy in mind, these “low-value” areas could become ideal foundations for the next generation of infrastructure.

Modern computational infrastructure no longer needs to be concentrated in urban centers. In fact, to avoid the shortages of electricity, space, and cooling found in cities, it will spread outward—to the periphery, to rural regions. As this trend continues, the logic that “unused means worthless” will flip. Places once dismissed may now be rediscovered as the foundational base for computational resources—valuable precisely because no one else is using them.

And it’s not just resources being redefined. Entire regions can reclaim purpose by changing the scale of evaluation. Turning forgotten resources into assets is not simply about buildings or machinery—it marks a quiet update to the structure of society itself.
