Categories
Asides

An Era Where the Withdrawn Can Thrive

In every era, there have been people who never got to use what they had. Not because they lacked ability, but because the circuits connecting their abilities to society differed from one age to the next. You could call it luck, or adaptability. But neither word captures the full complexity.

There was a time when physical strength sat at the center of value. A strong body translated directly into survival and results. When civilization and science advanced far enough that social systems no longer depended on brute force, those without it found room to contribute. This was not merely technological progress. It was the moment society rewrote its definition of power.

The same pattern appears in the realm of expression. Throughout history, countless people possessed rich artistic talent but had no way to reach an audience. When digital spaces emerged and the cost of broadcasting dropped dramatically, talent that would have stayed buried became visible. Language barriers tell a similar story. No matter how exceptional someone’s expertise in a given field, without the words to convey it, that expertise might as well not have existed.

The evolution of civilization is, in part, a history of multiplying the circuits that connect buried talent to society. The printing press democratized the spread of ideas. The internet dismantled the monopoly on broadcasting. Each time a new technology appeared, a circuit that had been closed for someone, somewhere, opened. Seen this way, the current moment, with AI agents weaving themselves into daily life, looks like the opening of yet another kind of circuit.

People who could only operate alone. People profoundly uncomfortable with external communication. People whose expertise was confined to an impossibly narrow domain. In the social structures we have known, these traits were treated as weaknesses. Working in an organization required cooperation. Producing results presupposed collaboration with others. But as AI agents step into the intermediary role, that very premise is shifting. When the burden of interpersonal interaction is lifted, the focus and expertise inside a person can convert directly into productivity. It is even possible that those categorized as shut-ins or NEETs are, from a different angle, the personality type most suited to this era.

Looking back at my own experience, the past six months have been distinctly different from everything before. Even alone, with the help of AI agents, I can sustain the productive capacity of a small organization. Research, documentation, code implementation, translation, strategy sparring. Roles that once required separate people can now run under a single person’s judgment. Link those small units together through further division of labor, and they can grow into a mid-sized organizational body. The ceiling on individual productivity has shifted beyond anything previously imaginable.

Of course, there are parts that demand caution. Whether what we are calling productivity is worth its cost remains impossible to judge at this point. Just as in the early days of the internet and the dot-com bubble, we have yet to accurately grasp the true cost structure behind the productivity we feel. And there is still the question of what that productivity is directed toward. Creating things, refining ideas, exercising power for whom and to what end. When individual productivity rises dramatically, where that power is aimed falls to the individual. That is freedom, and at the same time, a quiet responsibility.

Still, much as Steve Jobs once called the computer a bicycle for the mind, there is a palpable sense of being in the middle of a moment when individual capability is being expanded. What happens when circuits that were closed begin to open is something no one can yet know.


The Age of AI Interpretation

In my daily work, I constantly move between English and Japanese. I know firsthand the cognitive load of simultaneous interpretation, and I know that getting the words right is never enough. Real interpretation means reading the other person’s intent, their baseline assumptions, the emotional temperature of the room, and what decision they’re trying to reach. Only then does translation become meaningful communication.

With the arrival of AI, that instinct has taken on a different shape. The act of interpreting between English and Japanese will, before long, be handled largely by machines. I think that’s unavoidable. But beyond that threshold, a new kind of competence is emerging: the ability to interpret between humans and AI.

Current AI systems look omnipotent on the surface, yet they are remarkably sensitive collaborators. Their memory architectures have structural constraints. The way they compress and expand context follows particular patterns. In human conversation, ambiguity and logical leaps are smoothed over naturally, but in dialogue with AI, those gaps translate directly into performance differences. What you establish as given, what you choose to omit, the order in which you present information — these design choices dramatically alter what the same AI returns.

This sensation is strikingly similar to the work of interpretation. When I move between English and Japanese, I try to consume as little unnecessary context as possible. I avoid regional expressions like Kansai dialect, construct clear sentences, and lead with conclusions. I keep each sentence short. I’ve always had an affinity for Markdown-like structures, and even in conversation I sometimes feel as though I’m speaking in Markdown. Once a phrase becomes shared shorthand, I define it upfront and compress the rest of the exchange into brief callbacks. Interpretation, before it is language conversion, is the art of context compression.

This framework transfers directly to the age of AI. In human conversation, imposing too much semantic structure feels unnatural, so a Markdown-like level of organization was the practical ceiling. With AI, that tendency intensifies. Align assumptions, state your objective, define constraints, decide the output format in advance — and the quality of dialogue shifts dramatically. In other words, the ability to converse with AI is not simply about asking good questions. It is the ability to structure your thinking and hand it over.
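As a sketch of what handing over structured thinking can look like, here is a minimal helper that assembles the four elements just named (objective, assumptions, constraints, output format) into one prompt. The section names and layout are my own illustration, not a standard:

```python
def build_prompt(objective, assumptions, constraints, output_format):
    """Assemble a structured prompt: goal first, then shared premises,
    hard constraints, and the expected shape of the answer."""
    sections = [
        "## Objective\n" + objective,
        "## Assumptions\n" + "\n".join(f"- {a}" for a in assumptions),
        "## Constraints\n" + "\n".join(f"- {c}" for c in constraints),
        "## Output format\n" + output_format,
    ]
    return "\n\n".join(sections)

prompt = build_prompt(
    objective="Summarize the attached incident report for executives.",
    assumptions=["Reader has no engineering background."],
    constraints=["Under 200 words.", "No internal code names."],
    output_format="Three bullet points, then a one-sentence risk note.",
)
print(prompt)
```

The point is not the helper itself but the habit it encodes: the goal comes first, and every premise is stated before the request is made.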

The problem is that most people do not habitually speak in Markdown. Between humans, conversation still works without it. But in dialogue with AI, that ambiguity becomes pure loss. This is why we will need interpreters who can convert natural human language into forms that AI can process effectively. What is called prompt engineering is one slice of this. But the real scope is far wider: knowing which sub-agent to deploy in a given moment, building specialized workflows for particular domains, compressing context while chaining automation, and consciously managing the boundary between short-term and long-term memory. The full range of these capabilities is becoming the new definition of linguistic competence.

Until now, language ability has been measured by how well you handle a foreign tongue. Going forward, that metric alone will not suffice. The capacity to translate vague human intent into structures a machine can process. The reverse capacity to retranslate AI output into forms that support human judgment. It is this back-and-forth movement that will determine intellectual productivity in the next era.

I don’t believe foreign language skills are becoming obsolete. If anything, the age of AI may be what finally reveals what interpretation was always about. What will be needed is not just English proficiency or Japanese proficiency. It is the ability to stand between people and AI, preserving meaning, organizing context, and faithfully conveying intent. Interpreters in that sense will matter far more than we currently imagine.


Cyborg Declaration

At the end of 2025, I spent time enjoying the development of my own programs. The original goal was to automate as many tasks as possible, and I feel I achieved that to some degree. By adopting a parallel style of development with AI, I built several personal programs designed to automate my own work. The process itself was deeply enjoyable, and it felt as though ideas that had existed only in my head for years had finally begun to take on form.

What mattered most was that I already had my own data center infrastructure. Because of that, I could introduce AI aggressively without carrying the usual anxiety around data handling. Deciding whether to use generative AI is not only a question of performance. It is also a question of trust—where the data goes, and what is being entrusted to whom. Once that concern was removed, AI stopped being something I merely experimented with and became a tool for extending my own capabilities.

Then, around the end of January 2026, I suddenly realized something. I might already be a cyborg. The version of myself from only a month earlier now felt, in some sense, primitive. Of course, I had not replaced any part of my body. I had not implanted electrodes into my brain. And yet I had clearly entered a different cognitive state.

When I used to imagine a cyborg, I pictured something much more direct: an arm turned into a weapon, an eye replaced with a lens, the body itself mechanically altered. But that does not seem to be how it happens. Human beings appear to enhance themselves through loosely coupled external devices and software. Rather than embedding everything inside the body, we place memory, judgment support, and intelligence outside ourselves, then use them as if they were part of us.

What is most interesting about this change is how ambiguous the boundary becomes. Where does the self end, and where does the external begin? I still write the code, but exploration, completion, and comparison are increasingly handled by AI. I still believe I am making the decisions, but the path that leads to those decisions already includes several forms of external intelligence. In that sense, becoming a cyborg may not mean mechanizing the body, but externalizing cognition.

And this change feels irreversible. I do not think I can really go back. I could still work the old way, of course, but it would feel like choosing to work by lamplight after electricity already exists. It is not merely that things have become more convenient. The basic conditions of intellectual work itself have changed.

Human beings do not become machines all at once. Instead, by the time they notice, they have already acquired a set of connections they can no longer give up, and they have accepted that condition as ordinary life. What I felt at the end of January 2026 was probably the sensation of having crossed that boundary.

For me, a cyborg declaration is not a declaration of bodily modification, but a declaration that acknowledges that my intelligence is no longer self-contained.


The Future Waymo Sees

I understand the feeling of accepting Waymo without much resistance. There is a sense of novelty, and as someone who likes technology, I also see it as a remarkable crystallization of engineering. Every time I notice one on the street, it feels like witnessing a transition point in history.

At the same time, separate from that excitement, I cannot help thinking about Waymo’s point of view. It has eyes called LiDAR. As it moves through the city, it continuously captures not only the shape of the roads, but also the positions, movements, and reactions of the people and objects within them. If we look at it only through the lens of autonomous driving, it appears to be a useful technology and a practical answer to driver shortages. But the real issue may lie less in the vehicle itself than in the world the vehicle is seeing.

What matters is not only what it can detect, or how far it can see. What matters is what kind of information is being accumulated, in what form, and under whose control. Not just terrain data or traffic conditions, but pedestrian flows, changes in congestion, human reactions, and the shifting texture of the city across different times of day. In the short term, such data may improve dispatch efficiency and safety. In the long term, it leads to a larger question: who gets to observe reality, and who gets to own it?

This is why recent moves by Niantic are worth paying attention to. A company that accumulated location data and image-based knowledge of the real world is now beginning to connect those assets to physical services such as robotics and delivery. It feels like a case where both the collection and the use of data have finally become visible in a form that broader society can understand.

Enormous amounts of data had already been gathered in the era of Twitter and Facebook. Yet the scale of that value, and the scale of its influence, remained abstract to much of society. As long as it appeared only in timelines, advertising, or recommendation engines, it was difficult for people to feel its weight. But the moment that same logic begins to shape maps, movement, logistics, and robotics in physical space, the importance of that data takes on a sharper outline.

Waymo is still driving through the city today. But it is not merely a car in motion. It is staring at reality through LiDAR and cameras, slowly copying the city as it goes. To think about the future of autonomous driving is not only to think about transportation. It is also to ask which companies will observe, accumulate, and ultimately reconstruct reality itself.


Learning with AI Is Changing the Nature of Education

The word “education” may be too broad. Here, I want to focus strictly on the act of acquiring knowledge, not on values or character formation. From that perspective, the emergence of generative AI has begun to reshape the very structure of learning itself.

Since generative AI became widespread, my own learning across many fields has clearly accelerated. This is not limited to professional topics; it applies equally to hobbies and peripheral areas of interest. It is not simply that answers arrive faster, but that the process of learning has fundamentally changed.

A concrete example is learning Rubik’s Cube algorithms. After moving beyond basic memorization and into the phase of solving more efficiently, I found an overwhelming amount of information on the web and on YouTube. What appeared there, however, were methods and sequences optimized for someone else. Determining what was optimal for me took considerable time. Each source operated on a different set of assumptions and context, leaving the burden of organizing and reconciling those differences entirely on the learner.

Even a single symbol could cause confusion. Which face does “R” refer to, and in which direction is it rotated? What exact sequence does “SUNE” represent? Because these premises were not aligned, explanations often progressed without shared grounding, making understanding fragile and fragmented.
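To make the "align definitions first" idea concrete, here is a toy sketch of the pattern: pin each symbol to one definition up front, then expand named sequences against that glossary. The definitions follow common speedcubing conventions (Sune is usually written R U R' U R U2 R'), but the code itself is only an illustration:

```python
# A tiny glossary that fixes premises before any explanation begins.
# Conventions assumed: "R" is a clockwise quarter turn of the right face,
# and the named algorithm "Sune" expands to R U R' U R U2 R'.
GLOSSARY = {
    "SUNE": ["R", "U", "R'", "U", "R", "U2", "R'"],
}

def expand(moves):
    """Replace named algorithms with their move sequences so every
    symbol in the output refers to a single, agreed-upon definition."""
    out = []
    for m in moves:
        out.extend(GLOSSARY.get(m.upper(), [m]))
    return out

print(expand(["U2", "SUNE"]))  # → ['U2', 'R', 'U', "R'", 'U', 'R', 'U2', "R'"]
```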

When AI enters the loop, this situation changes dramatically. The task of organizing information shifts to the AI, which can align definitions, symbols, and concepts before explaining them. It can propose an optimal learning path based on one’s current understanding and recalibrate the level of detail as needed. As a result, learning efficiency improves to an extraordinary degree.

Key points can be reinforced repeatedly, and review can be structured with awareness of the forgetting curve. Questions that arise mid-process can be fact-checked immediately. Beyond that, a meta-learning perspective becomes available: reflecting on how one learns, identifying synergies with other knowledge areas, and continuously refining learning methods themselves.
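Review "structured with awareness of the forgetting curve" can be approximated crudely by widening the gap after each successful recall and resetting it after a lapse. The doubling rule below is a deliberate simplification in the spirit of a Leitner box, not a tuned algorithm such as SM-2:

```python
def next_interval(days, recalled, first=1, factor=2):
    """Return the next review gap in days: widen it after a
    successful recall, reset it after a lapse (Leitner-style)."""
    if not recalled:
        return first
    return max(first, days * factor)

# Successful reviews spread out (1 → 2 → 4 → 8 days);
# a single lapse drops the item back to a 1-day gap.
gap = 1
schedule = []
for recalled in [True, True, True, False, True]:
    gap = next_interval(gap, recalled)
    schedule.append(gap)
print(schedule)  # → [2, 4, 8, 1, 2]
```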

There are, of course, drawbacks. The final responsibility for judging truth still lies with the human. When learning veers in the wrong direction, AI does not provide an inherent ethical brake or value-based correction. In areas such as conspiracy theories, this can accelerate misunderstanding rather than resolve it, potentially deepening social division.

This style of learning also depends heavily on intrinsic motivation. Without actively asking questions and engaging in dialogue, AI offers little value. We have not yet reached a stage where knowledge can simply be installed. The trigger remains firmly on the human side.

Even so, one point is clear. For the act of learning, generative AI is becoming an exceptionally powerful tool. The central question is no longer how to deliver knowledge, but how to arrive at understanding. On that question, AI has already begun to offer practical answers.


Japan as an Information Market and the Computational Power of Local Cities

Financial markets once had clear centers of gravity—New York, London, Hong Kong, Singapore. Each era had its “world’s number-one market,” a place where capital, people, and rules converged. But today’s financial world is fragmented. Regulation and geopolitics have dispersed activity, and the idea of a single location one must watch has nearly disappeared.
If the world seeks a new center, what will it be? I believe the answer is the “information market.”

By information market, I do not mean a marketplace for trading data. It is a composite system: computational power, data, algorithms, the infrastructure that runs them, the people who operate them, and the rules that guarantee trust. When the choice of where to train an AI model—and under which legal and cultural framework to operate it—becomes a source of significant economic value, the information market will rival or surpass the importance of financial markets.

From this perspective, Japan cannot be ignored.
It is a stable rule-of-law nation with minimal risk of arbitrary seizures or retroactive regulations. Its power grid is remarkably reliable, with extremely low outage rates. Natural disasters occur, yet recovery is fast—earning Japan a reputation as a place where “things return to normal.” Additionally, Japan still retains a manufacturing foundation capable of designing and producing hardware, including semiconductors.
Taken together, these characteristics make Japan uniquely qualified as a place to “entrust information.”

Viewed through the lens of an information market, Japan has a credible claim to the "center." Its position, aligned with neither the United States nor China, can be a geopolitical weakness, but it becomes a strength when Japan acts as a neutral infrastructure provider. Japan also has the institutional calm to redesign rules around data ownership and privacy. The challenge is that its potential remains constrained by a Tokyo-centric mindset.

A Japanese information market cannot be built by focusing on Tokyo alone.
What is required is a shift in premise: local cities must hold computational power of their own. Until now, the role of local regions was to attract people and companies. From this point forward, they must be reframed as entities that attract computation and data. This is no longer a competition for population; it is a competition for information and processing capacity.

Japan has many regions with renewable energy, surplus electricity, and land. Many of them enjoy cooler climates and access to water, which are favorable for cooling infrastructure. With proper planning for disaster risk, these regions can host mid-scale data centers and edge nodes—allowing each locality to own computational power.
This would create a distributed domestic information market that exists alongside, not beneath, Tokyo-centric cloud structures.

For local cities, possessing computational power is not merely about installing servers.
Services such as autonomous driving, drone logistics, and remote medicine depend on ultra-low latency and local trust. Japan's regions, with their low population density, stable infrastructure, and well-defined geography, are ideal real-world testbeds. If the computational layer behind these services resides locally, then each region becomes a site of the information market.

A similar structure appears at the level of individual homes. As I wrote in the 3LDDK article, the idea of embedding small-scale generation and computing into houses transforms residential units into local nodes. When aggregated at the town level, these nodes form clusters; when interconnected across municipalities, they become regional clouds.
Rather than relying entirely on centralized hyperscale clouds, local cities gain autonomy through computational power.

Financial history offers a useful analogy. Financial centers were places where capital, talent, and rules concentrated. Future information markets will concentrate computational power, data, and governance. But unlike finance, information markets will be physically distributed.
Networks of data centers in local cities—linked through invisible wiring—will collectively form a single “Japan Market.” From abroad, this appears not as a dispersed system but as a coherent, trustworthy platform.

The critical question is not “Where should we place data centers?” but “How should we design the system?”
Merely placing servers in local regions is insufficient. Market design must weave together electricity, land, and data flows while clarifying revenue distribution, risk ownership, and governance. Only then can Japan move from being a location for data centers to being the rule-maker of the information market itself.

Japan as an information market, and local cities as holders of computational power—these two visions are, in truth, one picture.
A system in which regions contribute their own compute and their own data, forming a market through federation rather than centralization. Whether Japan can articulate and implement this structure will determine the country’s position over the next decade.
That, I believe, is the question now placed before us.


Redesigning Conversation and the Emergence of a Post-Human Language

As I wrote in the previous article, the idea of a “common language for humans, things, and AI” has been one of my long-standing themes. Recently, I’ve begun to feel that this question itself needs to be reconsidered from a deeper level. The shifts happening around us suggest that the very framework of human communication is starting to update.

Human-to-human conversation is approaching a point where further optimization is difficult. Reading emotions, estimating the other person’s knowledge and cognitive range, and choosing words with care—these processes enrich human culture, yet they also impose structural burdens. I don’t deny the value of embracing these inefficiencies, but if civilization advances and technology accelerates, communication too should be allowed to transform.

Here, it becomes necessary to change perspective. Rather than polishing the API between humans, we should redesign the interface between humans and AI itself. If we move beyond language alone and incorporate mechanisms that supplement intention and context, conversation will shift to a different stage. When AI can immediately understand the purpose of a dialogue, add necessary supporting information, and reinforce human comprehension, the burdens formerly assumed to be unavoidable can dissolve naturally.

Wearing devices on our ears and eyes is already a part of everyday life. Sensors and connected objects populate our environments, creating a state in which information is constantly exchanged. What comes next is a structure in which these objects and AI function as mediators of dialogue, coordinating interactions between people—or between humans and AI. Once mediated conversation becomes ordinary, the meaning of communication itself will begin to change.

Still, today’s human–AI dialogue is far from efficient. We continue to use natural language and impose human-centered grammar and expectations onto AI, paying the cognitive cost required to do so. We do not yet fully leverage AI’s capacity for knowledge and contextual memory, nor have we developed language systems or symbolic structures truly designed for AI. Even Markdown, while convenient, is simply a human-friendly formatting choice; the semantic structure AI might benefit from is largely absent. Human and AI languages could in principle be designed from completely different origins, and within that gap lies space for a new expressive culture beyond traditional “prompt optimization.”

The most intriguing domain is communication that occurs without humans—between AIs, or between AI and machines. In those spaces, a distinct communicative culture may already be emerging. Its speed and precision likely exceed human comprehension, similar to the way plants exchange chemical signals in natural systems. If such a language already exists, our task may not be to create a universal language for humans, but to design the conditions that allow humans to participate in that domain.

How humans will enter the new linguistic realm forming between AI and machines is an open question. Yet this is no longer just an interface problem; it is part of a broader reconstruction of social and technological civilization. In the future, conversation may not rely on “words” as sound, but on direct exchanges of understanding itself. That outline is beginning to come into view.


A Common Language for Humans, Machines, and AI

Human communication still has room for improvement. In fact, it may be one of the slowest systems to evolve. The optimal way to communicate depends on the purpose—whether to convey intent, ensure accuracy, share context, or express emotion. Even between people, our communication protocols are filled with inefficiencies.

Take the example of a phone call. The first step after connecting is always to confirm that audio is working—hence the habitual “hello.” That part makes sense. But what follows often doesn’t. If both parties already know each other’s numbers, it would be more efficient to go straight to the point. If it’s the first time, an introduction makes sense, but when recognition already exists, repetition becomes redundant. In other words, if there were a protocol that could identify the level of mutual recognition before the conversation begins, communication could be much smoother.

Similar inefficiencies appear everywhere in daily life. Paying at a store, ordering in a restaurant, or getting into a taxi you booked through an app—all of these interactions involve unnecessary back-and-forth verification. The taxi example is especially frustrating. As a passenger, you want to immediately state your reservation number or name to confirm your identity. But the driver, trained for politeness, automatically starts with a formal greeting. The two signals overlap, the identification gets lost, and eventually the driver still asks, “May I have your name, please?” Both sides are correct, yet the process is fundamentally flawed.

The real issue is that neither side knows the other’s expectations beforehand. Technically, this problem could be solved easily: automate the verification. A simple touch interaction or, ideally, a near-field communication system could handle both identification and payment instantly upon entry. In some contexts, reducing human conversation could actually improve the experience.
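As a hypothetical sketch of the taxi case: if the rider's device presents a reservation token on entry, identity and payment are settled before anyone speaks, and the greeting no longer collides with the identification. All names and fields here are invented for illustration:

```python
# Hypothetical check-in: the rider's device presents a reservation
# token (e.g., over NFC), and the vehicle verifies it on entry.
RESERVATIONS = {"R-4821": {"rider": "Sato", "paid": True}}

def check_in(token):
    """Verify identity and payment in one step; return a result the
    driver-side display can show, so no verbal confirmation is needed."""
    booking = RESERVATIONS.get(token)
    if booking is None:
        return {"ok": False, "reason": "unknown reservation"}
    return {"ok": True, "rider": booking["rider"], "paid": booking["paid"]}

print(check_in("R-4821"))  # → {'ok': True, 'rider': 'Sato', 'paid': True}
print(check_in("R-0000"))  # → {'ok': False, 'reason': 'unknown reservation'}
```

The conversation that remains is then free to be a greeting and nothing more, which is arguably the better experience for both sides.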

This leads to a broader point: the need for a shared language not only between people but also between humans, machines, and AI. At present, no universal communication protocol exists among them. Rather than forcing humans to adapt to digital systems, we should design a protocol that enables mutual understanding between the two sides. By implementing such a system at the societal level, communication between humans and AI could evolve from guesswork into trust and efficiency.

Ultimately, the most effective form of communication is one that eliminates misunderstanding—regardless of who or what is on the other end. Whether through speech, touch, or data exchange, what we truly need is a shared grammar of interaction. That grammar, still emerging at the edges of design and technology, may become the foundation of the next social infrastructure.


The Age of the AI Home

In the age of AI, the idea of what a home is will change fundamentally. As humans begin to coexist with artificial intelligence, houses may need to include small power generators or even miniature data centers. Computing power, like electricity or water, will become part of the essential infrastructure built into everyday living spaces.

Imagine a home with a living room, a dining room, and a data room. Such a layout could become commonplace. A dedicated space for AI, or for data itself, might naturally appear in architectural plans. It could be on the rooftop, underground, or next to the bedroom. Perhaps even the family altar—once a spiritual repository of ancestral memory—could evolve into a private archive where generations of personal data are stored and shared.

Either way, we will need far more computing power at the edge. Every household could function as a small node, collectively forming a distributed computational network across neighborhoods. A society that produces and consumes both energy and compute locally may begin with the home as its basic unit.

Still, this is a vision built on the inefficiencies of today’s AI infrastructure. As models become more efficient and require fewer resources, even small-scale home data centers might disappear. In their place, countless connected devices could collaborate to form an intelligent mesh that links homes and cities into a single network. At that point, a house would no longer just be a space to live—it would be a space where information itself resides.

The idea of an “AI-ready home,” one equipped with its own computing and energy systems, may be a symbol of this transition. It represents a moment when the boundary between living space and computational space begins to blur, and the household itself becomes a unit of intelligence.


Rethinking Tron

Perhaps Tron is exactly what is needed right now.
I had never looked at it seriously before, but revisiting its history and design philosophy makes it clear that many of its principles align with today’s infrastructure challenges.
Its potential has always been there—steady, consistent, and quietly waiting for the right time.

Background

Tron was designed around the premise of computation that supports society from behind the scenes.
Long before mobile and cloud computing became common, it envisioned a distributed and cooperative world where devices could interconnect seamlessly.
Its early commitment to open ecosystem design set it apart, and while its visible success in the consumer OS market was limited, its adoption as an invisible foundation continued to grow.

The difficulty in evaluating Tron has always stemmed from this invisibility.
Its success accumulated quietly in the background, sustaining “systems that must not stop.”
The challenge has never been technological alone—it has been how to articulate the value of something that works best when unseen.

Why Reevaluate Tron Now

The rate at which computational capability is sinking into the social substrate is accelerating.
From home appliances to industrial machines, mobility systems, and city infrastructure, the demand for small, reliable operating systems at the edge continues to increase.
Tron’s core lies in real-time performance and lightweight design.
It treats the OS not as an end but as a component—one that elevates the overall reliability of the system.
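The kind of guarantee a real-time kernel is asked to provide can be illustrated with the classic Liu and Layland test for fixed-priority rate-monotonic scheduling. This is general scheduling theory rather than anything specific to Tron: a set of n periodic tasks is schedulable whenever total CPU utilization stays at or below n(2^(1/n) − 1):

```python
def rm_schedulable(tasks):
    """Sufficient (not necessary) Liu-Layland test for rate-monotonic
    scheduling; tasks is a list of (worst_case_exec_time, period)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)  # approaches ln 2 ≈ 0.693 as n grows
    return utilization <= bound

# Three tasks using 10%, 20%, and 30% of the CPU: total 0.60, under
# the three-task bound of about 0.78, so all deadlines are guaranteed.
print(rm_schedulable([(1, 10), (4, 20), (15, 50)]))  # → True
```

Passing such a test offline, before deployment, is precisely the "not breaking in the field" property the following section describes.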

Its focus has always been on operating safely and precisely inside the field, not just in the cloud.
The needs that Tron originally addressed have now become universal, especially as systems must remain secure and maintainable over long lifespans.

Another reason for its renewed relevance lies in the shifting meaning of “open.”
By removing licensing fees and negotiation costs, and by treating compatibility as a shared social contract, Tron embodies a practical model for the fragmented IoT landscape.
Having an open, standards-based domestic option also supports supply chain diversity—a form of strategic resilience.

Current Strengths

Tron’s greatest strength is that it does not break in the field.
It has long been used in environments where failure is not tolerated—automotive ECUs, industrial machinery, telecommunications infrastructure, and consumer electronics.
Its lightweight nature allows it to thrive under cost and power constraints while enabling long-term maintenance planning.

The open architecture is more than a technical advantage.
It reduces the cost of licensing and vendor lock-in, helping organizations move decisions forward.
Its accessibility to companies and universities directly contributes to talent supply stability, lowering overall risks of deployment and long-term operation.

Visible Challenges

There are still clear hurdles.
The first is recognition.
Success in the background is difficult to visualize, and in overseas markets Tron faces competition from ecosystems with richer English documentation and stronger commercial support.
To encourage adoption, it needs better documentation, clearer support structures, visible case studies, and accessible community pathways.

The second is the need to compete as an ecosystem, not merely as an OS.
Market traction requires more than technical superiority.
Integration with cloud services, consistent security updates, development tools, validation environments, and production support must all be presented in an accessible, cohesive form.
An operational model that assumes continuous updating is now essential.

Outlook and Repositioning

Tron can be repositioned as a standard edge OS for the AIoT era.
While large-scale computation moves to the cloud, local, reliable control and pre-processing at the edge are becoming more important.
By maintaining its lightweight strength while improving on four fronts—international standard compliance, English-language information, commercial support, and educational outreach—the landscape could shift considerably.

Rethinking Tron is not about nostalgia for a domestic technology.
It is a practical reconsideration of how to design maintainable infrastructure for long-lived systems.
If we can balance invisible reliability with visible communication, Tron’s growth is far from over.
What matters now is not the story of the past, but how we position it for the next decade.