The word “education” may be too broad. Here, I want to focus strictly on the act of acquiring knowledge, not on values or character formation. From that perspective, the emergence of generative AI has begun to reshape the very structure of learning itself.
Since generative AI became widespread, my own learning across many fields has clearly accelerated. This is not limited to professional topics; it applies equally to hobbies and peripheral areas of interest. It is not simply that answers arrive faster, but that the process of learning has fundamentally changed.
A concrete example is learning Rubik’s Cube algorithms. After moving beyond basic memorization and into the phase of solving more efficiently, I found an overwhelming amount of information on the web and on YouTube. What appeared there, however, were methods and sequences optimized for someone else. Determining what was optimal for me took considerable time. Each source operated on a different set of assumptions and context, leaving the burden of organizing and reconciling those differences entirely on the learner.
Even a single symbol could cause confusion. Which face does “R” refer to, and in which direction is it rotated? What exact sequence does “SUNE” represent? Because these premises were not aligned, explanations often progressed without shared grounding, making understanding fragile and fragmented.
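To make this concrete, here is a minimal sketch, in Python, of the kind of alignment that has to happen before any explanation is useful: pinning down one convention for the face letters and writing a named algorithm out as an explicit move list. The convention shown (plain letter for a clockwise quarter turn seen head-on, apostrophe for counterclockwise, 2 for a half turn) and the Sune sequence R U R' U R U2 R' are the common ones, but treat the details as assumptions, since sources differ.

```python
# Minimal sketch of one common Rubik's Cube notation convention.
# Assumption: letters name faces, a plain letter is a clockwise quarter turn
# of that face viewed head-on, "'" is counterclockwise, and "2" is a half turn.
# Different sources use different conventions, which is exactly the problem.

FACES = {"R": "right", "L": "left", "U": "up", "D": "down", "F": "front", "B": "back"}

def describe(move: str) -> str:
    """Spell out one move token, e.g. R' means a counterclockwise quarter turn of the right face."""
    face = FACES[move[0]]
    if move.endswith("'"):
        return f"{face} face, counterclockwise quarter turn"
    if move.endswith("2"):
        return f"{face} face, half turn"
    return f"{face} face, clockwise quarter turn"

# The "Sune" algorithm written as an explicit move list (commonly given as R U R' U R U2 R').
SUNE = ["R", "U", "R'", "U", "R", "U2", "R'"]

if __name__ == "__main__":
    for move in SUNE:
        print(f"{move:>3}: {describe(move)}")
```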
When AI enters the loop, this situation changes dramatically. The task of organizing information shifts to the AI, which can align definitions, symbols, and concepts before explaining them. It can propose an optimal learning path based on one’s current understanding and recalibrate the level of detail as needed. As a result, learning efficiency improves to an extraordinary degree.
Key points can be reinforced repeatedly, and review can be structured with awareness of the forgetting curve. Questions that arise mid-process can be fact-checked immediately. Beyond that, a meta-learning perspective becomes available: reflecting on how one learns, identifying synergies with other knowledge areas, and continuously refining learning methods themselves.
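As one illustration of review "structured with awareness of the forgetting curve," the sketch below uses the standard exponential-decay model of retention, R(t) = exp(-t/S), and schedules the next review whenever predicted retention would drop below a threshold. The 0.8 threshold and the 2.5 stability growth factor are arbitrary assumptions chosen for illustration, not a description of how any particular AI tool schedules review.

```python
import math

# Illustrative sketch: schedule reviews with the exponential forgetting-curve
# model R(t) = exp(-t / S), where S ("stability") grows after each successful
# review. The growth factor and the 0.8 retention threshold are assumptions.

def next_review_day(stability_days: float, threshold: float = 0.8) -> float:
    """Days until predicted retention falls to the threshold: solve exp(-t/S) = threshold."""
    return -stability_days * math.log(threshold)

def schedule(reviews: int, initial_stability: float = 1.0, growth: float = 2.5) -> list[float]:
    """Cumulative review days, assuming stability multiplies by `growth` after each review."""
    days, elapsed, stability = [], 0.0, initial_stability
    for _ in range(reviews):
        elapsed += next_review_day(stability)
        days.append(round(elapsed, 1))
        stability *= growth
    return days

if __name__ == "__main__":
    # Prints cumulative review days at increasing intervals, e.g. [0.2, 0.8, 2.2, 5.7, 14.4]
    print(schedule(5))
```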
There are, of course, drawbacks. The final responsibility for judging truth still lies with the human. When learning veers in the wrong direction, AI does not provide an inherent ethical brake or value-based correction. In areas such as conspiracy theories, this can accelerate misunderstanding rather than resolve it, potentially deepening social division.
This style of learning also depends heavily on intrinsic motivation. Without actively asking questions and engaging in dialogue, AI offers little value. We have not yet reached a stage where knowledge can simply be installed. The trigger remains firmly on the human side.
Even so, one point is clear. For the act of learning, generative AI is becoming an exceptionally powerful tool. The central question is no longer how to deliver knowledge, but how to arrive at understanding. On that question, AI has already begun to offer practical answers.
Messengers that operate on mesh networks using P2P communication already exist. Under the right conditions, they can function independently of existing communication infrastructure and offer strong resistance to censorship and shutdowns. They feel like products that intuitively point toward the future of communication.
At the same time, this approach has clear limitations. Communication only works reliably if a sufficient number of devices act as relay nodes, which means stability is limited to closed spaces or short periods when many people are densely gathered. When considered as everyday, wide-area communication infrastructure, instability remains a fundamental issue.
A very different and more practical answer to this constraint emerged in the form of messaging systems that ensure communication continuity while maintaining full end-to-end encryption. Signal is a representative example. Signal did not achieve security by eliminating central servers. Instead, it chose to accept the existence of central servers while removing them from the trust model altogether.
Signal’s servers temporarily relay encrypted messages and store them only while recipient devices are offline. They handle minimal tasks such as distributing public keys and triggering push notifications, but they cannot read message contents or decrypt past communications. Central servers exist, yet they function strictly as relays that cannot see or alter what passes through them.
This structure is supported by the Signal Protocol. Initial key exchange is completed entirely between devices, and encryption keys are updated with every message. Even if a single key were compromised, neither past nor future messages could be decrypted. Even if servers stored all communications, the data itself would be meaningless.
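The per-message key update can be illustrated with a bare-bones symmetric ratchet: each message key is derived from a chain key, and the chain key is then advanced through a one-way function, so capturing one message key reveals nothing about earlier states. This is a conceptual sketch using HMAC-SHA256, not the actual Signal Protocol, which layers a Diffie-Hellman ratchet on top of this kind of symmetric chain.

```python
import hmac
import hashlib

# Conceptual sketch of a symmetric key ratchet (not the real Signal Protocol).
# Each step derives a one-time message key and then advances the chain key
# through a one-way function, so compromising one message key does not
# expose earlier chain states or earlier message keys.

def ratchet_step(chain_key: bytes) -> tuple[bytes, bytes]:
    """Return (next_chain_key, message_key) derived from the current chain key."""
    next_chain_key = hmac.new(chain_key, b"chain", hashlib.sha256).digest()
    message_key = hmac.new(chain_key, b"message", hashlib.sha256).digest()
    return next_chain_key, message_key

if __name__ == "__main__":
    # Placeholder for the secret established by the initial device-to-device key exchange.
    chain_key = hashlib.sha256(b"shared secret from the initial key exchange").digest()
    for i in range(3):
        chain_key, message_key = ratchet_step(chain_key)
        print(f"message {i}: key {message_key.hex()[:16]}...")
```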
What matters most is that “trust” is not assumed at any point in this design. Signal does not rely on the goodwill of its operators. Client software is open source, cryptographic specifications are publicly documented, and reproducible builds make tampering verifiable. The principle of “don’t trust, verify” is embedded directly into the system.
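In practice, "verify" often comes down to comparing cryptographic digests: rebuild the client from source and check that the hash of your artifact matches the one the project publishes. The file name and expected digest below are placeholders, not real Signal release values.

```python
import hashlib

# Minimal sketch of hash-based verification. The file path and expected digest
# are placeholders; in a real reproducible-build check you compare your own
# rebuilt artifact against the hash published by the project.

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

EXPECTED = "0" * 64  # placeholder for the published digest

if __name__ == "__main__":
    digest = sha256_of("my-rebuilt-client.apk")  # hypothetical file name
    print("match" if digest == EXPECTED else "mismatch", digest)
```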
This design avoids the extremes of both pure P2P and centralized control. It does not accept the instability inherent in full P2P networks, nor does it allow the surveillance and control risks that centralized systems introduce. Central relays are permitted, but they are rendered untrustworthy by design. It is a highly pragmatic compromise achieved through cryptography.
Meanwhile, new approaches are emerging that extend communication infrastructure into space itself. Satellite-based networks like Starlink bypass traditional telephone networks and terrestrial infrastructure altogether. This shift has implications not only for business models, but also for national security, privacy, and sovereignty. When the physical layer of communication changes, the rules that sit above it inevitably change as well.
Since the invention of the telephone, communication has evolved many times. It has repeatedly moved back and forth between centralization and decentralization, searching for workable compromises between technology and society. Neither absolute freedom nor absolute control has ever proven viable in reality.
That is why the question today is not “which model is correct,” but “where should the practical balance be placed.” By embedding trust into cryptography and treating central infrastructure as a necessary but constrained component, it becomes possible to preserve both freedom and stability. Communication continues to evolve, once again searching for its next form somewhere between these two forces.
The ideals that Web3 put forward were, in many ways, beautiful.
A future where individuals—not platforms—controlled their data and assets.
A world connected without borders, without gatekeepers.
Blockchain, cryptocurrencies, DAOs—all emerged under the banner of “decentralization,” carrying with them the promise of a new social architecture.
Yet ten years have passed.
Looking back, the movement resembles a kind of guerrilla warfare—pressing against the edges of the existing internet, searching for cracks in the dominant platforms, attempting to implement ideals through tactical advances rather than structural reform.
Guerrilla strategies can spread an idea, but they rarely rewrite society’s rules.
Why did technology alone fail to change the world?
One reason is that decentralization and fragmentation were often conflated.
The “decentralization” Web3 called for was meant to be a structural design: a system that prevents trust and power from concentrating at a single point.
But in practice, communities and factions splintered, independent economic zones emerged, and incompatible rules proliferated.
Instead of decentralization, what emerged was fragmentation—parallel micro-worlds with little connective tissue.
Fragmentation weakens information sharing and destroys interoperability.
And eventually, it invites the rise of new central authorities.
Indeed, even within Web3, entities that claimed to be “decentralized” created exchanges and platforms that wielded overwhelming influence.
What was meant to decentralize inadvertently produced another form of centralization.
So what should we take from this decade?
The key lesson is that decentralization must be understood not as a structure but as a way of handling trust.
“How should trust be implemented in society?”
This is the most valuable question Web3 posed.
More important than blockchain itself is how to reduce the cost of verifying truth—and how individuals and society can mutually confirm authenticity in the digital world.
This question stretches far beyond Web3; it touches the future of the internet, AI, IoT, and next-generation infrastructure.
Consider the ideas that remain relevant today:
privacy with transparency,
data self-sovereignty,
interoperability and standards,
and the redefinition of authentication through decentralized identity.
These are not failures—they are intellectual assets left behind by Web3’s struggles.
Another critical lesson is that decentralization cannot exist without distributed power and compute.
No matter how ideal an algorithm is, if the electricity and computational capacity required to operate it are concentrated, the architecture will inevitably drift back toward centralization.
This is why countries like Japan—where local regions possess energy resources and land—have the potential to become experimental grounds for truly decentralized infrastructure.
Here, the theme of local cities holding computational power connects naturally.
The ten years of Web3 demonstrated that technology alone cannot move the world.
But they also forced society to confront a deeper question:
How should trust be handled in the digital age?
Decentralization is not about breaking the world apart; it is about finding a form of trust that keeps the world connected without centralizing authority.
Over the next decade, what answers will we craft?
The shift must be away from fragmentation and toward decentralization for the sake of connection.
That implementation will sit at the core of infrastructure design in the AI era.
For decades, Japanese manufacturing has been synonymous with “quality.” Precision, durability, craftsmanship, and trust have defined the country’s industrial identity.
Yet in an era shaped by AI and IoT, quality can no longer be understood solely as physical robustness. Hardware itself has become a target, and Japan’s machines, components, and devices now operate within a fundamentally new risk environment where cyberspace and the physical world are directly connected.
Until recently, cyberattacks focused primarily on digital systems: servers, networks, authentication layers.
Today, however, attackers aim at physical devices—automotive ECUs, robot actuators, factory control systems, medical equipment, communication modules.
If the internal control of these systems is compromised, the consequences extend far beyond data breaches: accidents, shutdowns, and physical malfunctions become real possibilities.
This shift carries particular weight for Japan.
Japanese hardware underpins a vast range of global equipment—precision machinery, automotive systems, robotics, and embedded components.
A single vulnerability in a Japanese-made part could serve as an entry point for attackers into systems around the world.
Conversely, if Japan succeeds in securing these layers, it becomes a crucial pillar of global cyber resilience.
The core issue is that traditional concepts of manufacturing quality are no longer synchronized with modern cyber risk.
Manufacturing evaluates safety and reliability on long time horizons; cyber threats evolve on the scale of days or hours.
Physical and digital timelines were once independent, but AIoT has merged them—forcing hardware and cybersecurity to be designed within the same conceptual layer.
In other words, manufacturing and cybersecurity can no longer be separated.
The idea of “adding security later” no longer fits the reality of interconnected devices.
Security must be integrated across every stage: the component level, assembly level, device level, and network integration.
The definition of quality must expand from “does not break” to “cannot be broken, even under attack.”
Globally, a culture of testing and attacking hardware is emerging.
Vehicles, industrial machinery, and critical infrastructure control panels are publicly examined, and specialists search for vulnerabilities that lead to corrective improvements.
This trend mirrors the evolution from software bug bounties toward hardware-level security assessment.
Such environments—where offensive and defensive testing coexist—directly contribute to elevating industrial standards.
Yet awareness of hardware security remains uneven across nations.
In Japan, the reputation for robust and safe manufacturing often leads to complacency: devices are assumed secure because they are well-made.
Paradoxically, this confidence can obscure the need for systematic vulnerability testing, turning manufacturing strengths into latent cyber risks.
To maintain global trust in the years ahead, Japan must design manufacturing and security as a unified discipline.
The production process itself must function simultaneously as a security process.
A country known for its hardware must also be capable of guaranteeing the safety of that hardware—this dual responsibility will define Japan’s competitive position.
Japan today carries responsibility not only for manufacturing the world relies on, but also for ensuring the cybersecurity of that manufactured world.
Manufacturers, infrastructure operators, telecom providers, local governments, research institutions—each must coordinate to secure the nation’s industrial foundation.
Cultivating a perspective that connects manufacturing with cyber defense is essential.
It is this integration that will sustain global confidence in Japanese technology and define the next evolution of “Japan Quality.”
Financial markets once had clear centers of gravity—New York, London, Hong Kong, Singapore. Each era had its “world’s number-one market,” a place where capital, people, and rules converged. But today’s financial world is fragmented. Regulation and geopolitics have dispersed activity, and the idea of a single location one must watch has nearly disappeared.
If the world seeks a new center, what will it be? I believe the answer is the “information market.”
By information market, I do not mean a marketplace for trading data. It is a composite system: computational power, data, algorithms, the infrastructure that runs them, the people who operate them, and the rules that guarantee trust. When the choice of where to train an AI model—and under which legal and cultural framework to operate it—becomes a source of significant economic value, the information market will rival or surpass the importance of financial markets.
From this perspective, Japan cannot be excluded.
It is a stable rule-of-law nation with minimal risk of arbitrary seizures or retroactive regulations. Its power grid is remarkably reliable, with extremely low outage rates. Natural disasters occur, yet recovery is fast—earning Japan a reputation as a place where “things return to normal.” Additionally, Japan still retains a manufacturing foundation capable of designing and producing hardware, including semiconductors.
Taken together, these characteristics make Japan uniquely qualified as a place to “entrust information.”
Viewed through the lens of an information market, Japan has a legitimate claim to stand at the "center." Its position, aligned with neither the United States nor China, can be a geopolitical weakness, but it becomes a strength when Japan acts as a neutral infrastructure provider. Japan also has the institutional calm needed to redesign rules around data ownership and privacy. The challenge is that this potential remains constrained by a Tokyo-centric mindset.
A Japanese information market cannot be built by focusing on Tokyo alone.
What is required is a shift in assumptions: local cities must hold computational power. Until now, the role of local regions was to attract people and companies. From this point forward, they must be reframed as entities that attract computation and data. This is not a competition for population but a competition for information and processing.
Japan has many regions with renewable energy, surplus electricity, and land. Many of them enjoy cooler climates and access to water, which are favorable for cooling infrastructure. With proper planning for disaster risk, these regions can host mid-scale data centers and edge nodes—allowing each locality to own computational power.
This would create a distributed domestic information market that exists alongside, not beneath, Tokyo-centric cloud structures.
For local cities, possessing computational power is not merely about installing servers.
Services such as autonomous driving, drone logistics, and remote medicine depend on ultra-low latency and local trust. Japan's regions, with their low population density, stable infrastructure, and defined geography, are ideal as real-world testbeds. If the computational layer behind these services resides locally, then each region becomes a site of the information market.
A similar structure appears at the level of individual homes. As I wrote in the 3LDDK article, the idea of embedding small-scale generation and computing into houses transforms residential units into local nodes. When aggregated at the town level, these nodes form clusters; when interconnected across municipalities, they become regional clouds.
Rather than relying entirely on centralized hyperscale clouds, local cities gain autonomy through computational power.
Financial history offers a useful analogy. Financial centers were places where capital, talent, and rules concentrated. Future information markets will concentrate computational power, data, and governance. But unlike finance, information markets will be physically distributed.
Networks of data centers in local cities—linked through invisible wiring—will collectively form a single “Japan Market.” From abroad, this appears not as a dispersed system but as a coherent, trustworthy platform.
The critical question is not “Where should we place data centers?” but “How should we design the system?”
Merely placing servers in local regions is insufficient. Market design must weave together electricity, land, and data flows while clarifying revenue distribution, risk ownership, and governance. Only then can Japan move from being a location for data centers to being the rule-maker of the information market itself.
Japan as an information market, and local cities as holders of computational power—these two visions are, in truth, one picture.
A system in which regions contribute their own compute and their own data, forming a market through federation rather than centralization. Whether Japan can articulate and implement this structure will determine the country’s position over the next decade.
That, I believe, is the question now placed before us.