Categories
Asides

The Time I Can Perceive

Everyone experiences time differently. I thought I understood that. But it was only recently that I realized just how far my own sense of time deviates from the norm.

For me, the boundary of perceivable time sits at around three minutes. Past and future alike. Four minutes ago and an hour ago feel roughly the same distance away. Five minutes from now and tomorrow feel equally far. In either direction, the moment something crosses the three-minute threshold, its edges dissolve. But within three minutes, time suddenly becomes tangible — visible, felt, still here, or almost here. Between four minutes and three, there is a perceptual cliff.

This is probably connected to short-term memory, working memory, something along those lines. But putting a number on it changes things. Only the inside of that three-minute window feels vividly real; everything beyond it fades to the same flat distance. Once I saw that structure, the way I relate to time started to make a lot more sense.

I wish I had known sooner. Understanding how my sense of time is shaped is understanding the shape of my own cognition — not to correct it, but to design around it.

An Era Where the Withdrawn Can Thrive

In every era, there have been people who never got to use what they had. Not because they lacked ability, but because the circuits connecting their abilities to society differed from one age to the next. You could call it luck, or adaptability. But neither word captures the full complexity.

There was a time when physical strength sat at the center of value. A strong body translated directly into survival and results. When civilization and science advanced far enough that social systems no longer depended on brute force, those without it found room to contribute. This was not merely technological progress. It was the moment society rewrote its definition of power.

The same pattern appears in the realm of expression. Throughout history, countless people possessed rich artistic talent but had no way to reach an audience. When digital spaces emerged and the cost of broadcasting dropped dramatically, talent that would have stayed buried became visible. Language barriers tell a similar story. No matter how exceptional someone’s expertise in a given field, without the words to convey it, that expertise might as well not have existed.

The evolution of civilization is, in part, a history of multiplying the circuits that connect buried talent to society. The printing press democratized the spread of ideas. The internet dismantled the monopoly on broadcasting. Each time a new technology appeared, a circuit that had been closed for someone, somewhere, opened. Seen this way, the current moment, with AI agents weaving themselves into daily life, looks like the opening of yet another kind of circuit.

People who could only operate alone. People profoundly uncomfortable with external communication. People whose expertise was confined to an impossibly narrow domain. In the social structures we have known, these traits were treated as weaknesses. Working in an organization required cooperation. Producing results presupposed collaboration with others. But as AI agents step into the intermediary role, that very premise is shifting. When the burden of interpersonal interaction is lifted, the focus and expertise inside a person can convert directly into productivity. It is even possible that those categorized as shut-ins or NEETs are, from a different angle, the personality type most suited to this era.

Looking back at my own experience, the past six months have been distinctly different from everything before. Even alone, with the help of AI agents, I can sustain the productive capacity of a small organization. Research, documentation, code implementation, translation, strategy sparring. Roles that once required separate people can now run under a single person’s judgment. Link those small units together through further division of labor, and they can grow into a mid-sized organizational body. The ceiling on individual productivity has shifted beyond anything previously imaginable.

Of course, there are parts that demand caution. Whether what we are calling productivity is worth its cost remains impossible to judge at this point. Just as in the early days of the internet and the dot-com bubble, we have yet to accurately grasp the true cost structure behind the productivity we feel. And there is still the question of what that productivity is directed toward. Creating things, refining ideas, exercising power for whom and to what end. When individual productivity rises dramatically, where that power is aimed falls to the individual. That is freedom, and at the same time, a quiet responsibility.

Still, much as Steve Jobs once called the computer a bicycle for the mind, there is a palpable sense of being in the middle of a moment when individual capability is being expanded. What happens when circuits that were closed begin to open is something no one can yet know.

Why Tesla Won’t Support CarPlay

Many dismiss Tesla’s long refusal to support Apple CarPlay as mere stubbornness on the part of Elon Musk. But behind this decision lies an awareness that the revenue structure of the auto industry itself is shifting. Tesla’s answer to the question of what comes after building and delivering a good product was, in part, to reject CarPlay.

Sony’s image sensors are inside every iPhone shipped worldwide. That fact alone is proof of Sony’s engineering excellence and quality. Yet what users ultimately touch is the iPhone as a product, iOS as software, iCloud as a cloud service, and the experience Apple designed around all of it. No matter how exceptional a component supplier may be, only the company that controls the final layer of experience can build a lasting relationship with the customer. This pattern has repeated across every industry.

What is CarPlay, exactly? From the user’s perspective, it is simply a convenient way to bring the familiar iPhone experience into the car. For anyone who has suffered through an outdated car navigation interface, it feels like a rescue. But from an automaker’s standpoint, adopting CarPlay means handing control of the in-car experience to a smartphone maker. In the short term, it helps with customer satisfaction. A familiar interface is a selling point, and CarPlay compatibility alone can tip a purchase decision. Over the long term, however, every point of contact with the customer becomes Apple’s. Music plays through Apple Music, navigation runs on Apple Maps, notifications and calls come through the iPhone. The car becomes a rolling shell, and the experience is absorbed into Apple’s layer.

Tesla rejected this structure from the start. It built its own infotainment system and kept navigation, music, and every interface under its own roof. Even listening to Apple Music requires a Tesla Premium Connectivity subscription. The company chose to sacrifice user convenience rather than surrender any part of the experience layer.

What Tesla was looking at, beyond this decision, was a model of ongoing software revenue. Its FSD (Full Self-Driving) subscription shifted to a monthly-only plan in February 2026, with over a million users now enrolled. Rather than selling a car and moving on, Tesla collects driving data, trains its AI, improves its software, and sells those improvements back as a subscription. More drivers mean more data. More data means better autonomous driving. Better driving means a more valuable subscription. To sustain this cycle, Tesla needed to own both the in-car experience and the data it generates.

This dynamic extends well beyond a single feature called CarPlay. In an era when cars are becoming rolling computers, the question of whether the company that builds the hardware or the one that designs the software gets to own the customer relationship is fundamental to the structure of the industry. It mirrors what has happened to Japanese manufacturers, who built excellent products and shipped them around the world, only to find the experience layer captured by someone else.

In the future that lies beyond CarPlay’s continued evolution, automakers risk becoming Apple’s equivalent of sensor manufacturers. Their engineering and hardware quality may be recognized, but the interface customers touch every day will be one Apple designed. Tesla’s refusal was, at its core, a refusal to become a parts maker.

Whether Tesla’s bet was the right one remains unclear. Holding onto the experience layer comes at the cost of user frustration. Calls for CarPlay support never went away. Whether sheer will can hold out against market demand is a separate question entirely.

In the end, there is no clean answer. You can perfect the components and let someone else own the experience. Or you can design the entire experience yourself and absorb the friction with your customers. Both paths have costs. The one thing that seems certain is that whoever controls the experience layer gets to define the revenue structure and the shape of the customer relationship that follows. Whether or not to support CarPlay is not a technology question. It is an answer to what you believe you are selling.

The Age of AI Interpretation

In my daily work, I constantly move between English and Japanese. I know firsthand the cognitive load of simultaneous interpretation, and I know that getting the words right is never enough. Real interpretation means reading the other person’s intent, their baseline assumptions, the emotional temperature of the room, and what decision they’re trying to reach. Only then does translation become meaningful communication.

With the arrival of AI, that instinct has taken on a different shape. The act of interpreting between English and Japanese will, before long, be handled largely by machines. I think that’s unavoidable. But beyond that threshold, a new kind of competence is emerging: the ability to interpret between humans and AI.

Current AI systems look omnipotent on the surface, yet they are remarkably sensitive collaborators. Their memory architectures have structural constraints. The way they compress and expand context follows particular patterns. In human conversation, ambiguity and logical leaps are smoothed over naturally, but in dialogue with AI, those gaps translate directly into performance differences. What you establish as given, what you choose to omit, the order in which you present information — these design choices dramatically alter what the same AI returns.

This sensation is strikingly similar to the work of interpretation. When I move between English and Japanese, I try to consume as little unnecessary context as possible. I avoid regional expressions like Kansai dialect, construct clear sentences, and lead with conclusions. I keep each sentence short. I’ve always had an affinity for Markdown-like structures, and even in conversation I sometimes feel as though I’m speaking in Markdown. Once a phrase becomes shared shorthand, I define it upfront and compress the rest of the exchange into brief callbacks. Interpretation, before it is language conversion, is the art of context compression.

This framework transfers directly to the age of AI. In human conversation, imposing too much semantic structure feels unnatural, so a Markdown-like level of organization was the practical ceiling. With AI, that tendency intensifies. Align assumptions, state your objective, define constraints, decide the output format in advance — and the quality of dialogue shifts dramatically. In other words, the ability to converse with AI is not simply about asking good questions. It is the ability to structure your thinking and hand it over.
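
The habit described above can be made concrete. The sketch below is my own illustration, not an established convention: the helper name and section headings are hypothetical, but they show what it means to force every prompt through the same Markdown-like shape, with assumptions, objective, constraints, and output format stated up front.

```python
# Illustrative sketch: assemble a structured, Markdown-like prompt so that
# nothing is left implicit. All names and headings here are hypothetical.
def build_prompt(assumptions, objective, constraints, output_format):
    """Render assumptions, objective, constraints, and the expected
    output format as a single Markdown-style prompt string."""
    sections = [
        ("## Assumptions", assumptions),
        ("## Objective", [objective]),
        ("## Constraints", constraints),
        ("## Output format", [output_format]),
    ]
    lines = []
    for heading, items in sections:
        lines.append(heading)
        lines.extend(f"- {item}" for item in items)
        lines.append("")  # blank line between sections
    return "\n".join(lines).rstrip()

prompt = build_prompt(
    assumptions=["Audience is bilingual (EN/JA)", "Dates follow ISO 8601"],
    objective="Summarize the meeting notes below in English",
    constraints=["Max 5 bullet points", "Lead with the conclusion"],
    output_format="Markdown bullet list",
)
print(prompt)
```

The point is not the code but the discipline it encodes: the same questions get answered before every exchange, so the model never has to guess the ground rules.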

The problem is that most people do not habitually speak in Markdown. Between humans, conversation still works without it. But in dialogue with AI, that ambiguity becomes pure loss. This is why we will need interpreters who can convert natural human language into forms that AI can process effectively. What is called prompt engineering is one slice of this. But the real scope is far wider: knowing which sub-agent to deploy in a given moment, building specialized workflows for particular domains, compressing context while chaining automation, and consciously managing the boundary between short-term and long-term memory. The full range of these capabilities is becoming the new definition of linguistic competence.

Until now, language ability has been measured by how well you handle a foreign tongue. Going forward, that metric alone will not suffice. The capacity to translate vague human intent into structures a machine can process. The reverse capacity to retranslate AI output into forms that support human judgment. It is this back-and-forth movement that will determine intellectual productivity in the next era.

I don’t believe foreign language skills are becoming obsolete. If anything, the age of AI may be what finally reveals what interpretation was always about. What will be needed is not just English proficiency or Japanese proficiency. It is the ability to stand between people and AI, preserving meaning, organizing context, and faithfully conveying intent. Interpreters in that sense will matter far more than we currently imagine.

Cyborg Declaration

At the end of 2025, I spent time enjoying building programs of my own. The original goal was to automate as many tasks as possible, and I feel I achieved that to some degree. By adopting a parallel style of development with AI, I built several personal programs designed to automate my own work. The process itself was deeply enjoyable, and it felt as though ideas that had existed only in my head for years had finally begun to take on form.

What mattered most was that I already had my own data center infrastructure. Because of that, I could introduce AI aggressively without carrying the usual anxiety around data handling. Deciding whether to use generative AI is not only a question of performance. It is also a question of trust—where the data goes, and what is being entrusted to whom. Once that concern was removed, AI stopped being something I merely experimented with and became a tool for extending my own capabilities.

Then, around the end of January 2026, I suddenly realized something. I might already be a cyborg. The version of myself from only a month earlier now felt, in some sense, primitive. Of course, I had not replaced any part of my body. I had not implanted electrodes into my brain. And yet I had clearly entered a different cognitive state.

When I used to imagine a cyborg, I pictured something much more direct: an arm turned into a weapon, an eye replaced with a lens, the body itself mechanically altered. But that does not seem to be how it happens. Human beings appear to enhance themselves through loosely coupled external devices and software. Rather than embedding everything inside the body, we place memory, judgment support, and intelligence outside ourselves, then use them as if they were part of us.

What is most interesting about this change is how ambiguous the boundary becomes. Where does the self end, and where does the external begin? I still write the code, but exploration, completion, and comparison are increasingly handled by AI. I still believe I am making the decisions, but the path that leads to those decisions already includes several forms of external intelligence. In that sense, becoming a cyborg may not mean mechanizing the body, but externalizing cognition.

And this change feels irreversible. I do not think I can really go back. I could still work the old way, of course, but it would feel like choosing to work by lamplight after electricity already exists. It is not merely that things have become more convenient. The basic conditions of intellectual work itself have changed.

Human beings do not become machines all at once. Instead, by the time they notice, they have already acquired a set of connections they can no longer give up, and they have accepted that condition as ordinary life. What I felt at the end of January 2026 was probably the sensation of having crossed that boundary.

For me, a cyborg declaration is not a declaration of bodily modification, but a declaration that acknowledges that my intelligence is no longer self-contained.

The Future Waymo Sees

I understand the feeling of accepting Waymo without much resistance. There is a sense of novelty, and as someone who likes technology, I also see it as a remarkable crystallization of engineering. Every time I notice one on the street, it feels like witnessing a transition point in history.

At the same time, separate from that excitement, I cannot help thinking about Waymo’s point of view. It has eyes called LiDAR. As it moves through the city, it continuously captures not only the shape of the roads, but also the positions, movements, and reactions of the people and objects within them. If we look at it only through the lens of autonomous driving, it appears to be a useful technology and a practical answer to driver shortages. But the real issue may lie less in the vehicle itself than in the world the vehicle is seeing.

What matters is not only what it can detect, or how far it can see. What matters is what kind of information is being accumulated, in what form, and under whose control. Not just terrain data or traffic conditions, but pedestrian flows, changes in congestion, human reactions, and the shifting texture of the city across different times of day. In the short term, such data may improve dispatch efficiency and safety. In the long term, it leads to a larger question: who gets to observe reality, and who gets to own it?

This is why recent moves by Niantic are worth paying attention to. A company that accumulated location data and image-based knowledge of the real world is now beginning to connect those assets to physical services such as robotics and delivery. It feels like a case where both the collection and the use of data have finally become visible in a form that broader society can understand.

Enormous amounts of data had already been gathered in the era of Twitter and Facebook. Yet the scale of that value, and the scale of its influence, remained abstract to much of society. As long as it appeared only in timelines, advertising, or recommendation engines, it was difficult for people to feel its weight. But the moment that same logic begins to shape maps, movement, logistics, and robotics in physical space, the importance of that data takes on a sharper outline.

Waymo is still driving through the city today. But it is not merely a car in motion. It is staring at reality through LiDAR and cameras, slowly copying the city as it goes. To think about the future of autonomous driving is not only to think about transportation. It is also to ask which companies will observe, accumulate, and ultimately reconstruct reality itself.

Learning with AI Is Changing the Nature of Education

The word “education” may be too broad. Here, I want to focus strictly on the act of acquiring knowledge, not on values or character formation. From that perspective, the emergence of generative AI has begun to reshape the very structure of learning itself.

Since generative AI became widespread, my own learning across many fields has clearly accelerated. This is not limited to professional topics; it applies equally to hobbies and peripheral areas of interest. It is not simply that answers arrive faster, but that the process of learning has fundamentally changed.

A concrete example is learning Rubik’s Cube algorithms. After moving beyond basic memorization and into the phase of solving more efficiently, I found an overwhelming amount of information on the web and on YouTube. What appeared there, however, were methods and sequences optimized for someone else. Determining what was optimal for me took considerable time. Each source operated on a different set of assumptions and context, leaving the burden of organizing and reconciling those differences entirely on the learner.

Even a single symbol could cause confusion. Which face does “R” refer to, and in which direction is it rotated? What exact sequence does “SUNE” represent? Because these premises were not aligned, explanations often progressed without shared grounding, making understanding fragile and fragmented.
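
As an illustration of how much ambiguity disappears once notation is pinned down, here is a small sketch in Singmaster notation, the standard cube notation. The `invert` helper is my own; with the symbols defined, even "undo this algorithm" becomes mechanical, and the inverse of the Sune turns out to be the sequence commonly called the Anti-Sune.

```python
# Singmaster notation, stated up front so nothing is ambiguous:
#   "R"  = clockwise quarter turn of the Right face
#   "R'" = counter-clockwise quarter turn
#   "R2" = half turn (self-inverse)
SUNE = "R U R' U R U2 R'"  # the sequence commonly referred to as "Sune"

def invert(moves):
    """Return the sequence that undoes `moves`: reverse the order and
    flip the direction of each quarter turn (half turns stay as-is)."""
    inverted = []
    for move in reversed(moves.split()):
        if move.endswith("'"):
            inverted.append(move[:-1])   # R' -> R
        elif move.endswith("2"):
            inverted.append(move)        # R2 -> R2
        else:
            inverted.append(move + "'")  # R  -> R'
    return " ".join(inverted)

print(invert(SUNE))  # → R U2 R' U' R U' R'
```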

When AI enters the loop, this situation changes dramatically. The task of organizing information shifts to the AI, which can align definitions, symbols, and concepts before explaining them. It can propose an optimal learning path based on one’s current understanding and recalibrate the level of detail as needed. As a result, learning efficiency improves to an extraordinary degree.

Key points can be reinforced repeatedly, and review can be structured with awareness of the forgetting curve. Questions that arise mid-process can be fact-checked immediately. Beyond that, a meta-learning perspective becomes available: reflecting on how one learns, identifying synergies with other knowledge areas, and continuously refining learning methods themselves.
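
The forgetting-curve idea can be sketched numerically. Assuming Ebbinghaus-style exponential retention, exp(-t/s), and the simplifying and entirely hypothetical assumption that each successful review multiplies memory stability by a fixed factor, a spaced review schedule falls out directly:

```python
import math

def next_review_delay(stability_hours, threshold=0.9):
    """Hours until predicted retention exp(-t/s) decays to `threshold`,
    i.e. solve exp(-t/s) = threshold for t."""
    return -stability_hours * math.log(threshold)

def review_schedule(initial_stability=1.0, growth=2.5, reviews=5):
    """Cumulative review times (hours), under the simplifying assumption
    that each successful review multiplies stability by `growth`."""
    times, t, s = [], 0.0, initial_stability
    for _ in range(reviews):
        t += next_review_delay(s)
        times.append(round(t, 2))
        s *= growth
    return times

print(review_schedule())  # intervals widen as stability grows
```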

There are, of course, drawbacks. The final responsibility for judging truth still lies with the human. When learning veers in the wrong direction, AI does not provide an inherent ethical brake or value-based correction. In areas such as conspiracy theories, this can accelerate misunderstanding rather than resolve it, potentially deepening social division.

This style of learning also depends heavily on intrinsic motivation. Unless the learner actively asks questions and engages in dialogue, AI offers little value. We have not yet reached a stage where knowledge can simply be installed. The trigger remains firmly on the human side.

Even so, one point is clear. For the act of learning, generative AI is becoming an exceptionally powerful tool. The central question is no longer how to deliver knowledge, but how to arrive at understanding. On that question, AI has already begun to offer practical answers.

Rethinking the Practical Balance Between Decentralized Communication and Central Relays

Messengers that operate on mesh networks using P2P communication already exist. Under the right conditions, they can function independently of existing communication infrastructure and offer strong resistance to censorship and shutdowns. They feel like products that intuitively point toward the future of communication.

At the same time, this approach has clear limitations. Communication only works reliably if a sufficient number of devices act as relay nodes, which means stability is limited to closed spaces or short periods when many people are densely gathered. When considered as everyday, wide-area communication infrastructure, instability remains a fundamental issue.

A very different and more practical answer to this constraint emerged in the form of messaging systems that ensure communication continuity while maintaining full end-to-end encryption. Signal is a representative example. Signal did not achieve security by eliminating central servers. Instead, it chose to accept the existence of central servers while removing them from the trust model altogether.

Signal’s servers temporarily relay encrypted messages and store them only while recipient devices are offline. They handle minimal tasks such as distributing public keys and triggering push notifications, but they cannot read message contents or decrypt past communications. Central servers exist, yet they function strictly as relays that cannot see or alter what passes through them.

This structure is supported by the Signal Protocol. Initial key exchange is completed entirely between devices, and encryption keys are updated with every message. Even if a single key were compromised, neither past nor future messages could be decrypted. Even if servers stored all communications, the data itself would be meaningless.
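
The per-message key update can be illustrated with a minimal symmetric hash ratchet. This is a simplification, not the actual Signal Protocol, which uses the Double Ratchet and interleaves Diffie-Hellman steps with a chain like this one. But it shows the core property: because each step is a one-way function, capturing the current chain key reveals nothing about earlier message keys.

```python
import hashlib
import hmac

def ratchet(chain_key):
    """Derive a one-time message key and the next chain key from the
    current chain key. The HMAC step is one-way, so earlier message
    keys cannot be recomputed from any later chain key."""
    message_key = hmac.new(chain_key, b"msg", hashlib.sha256).digest()
    next_chain = hmac.new(chain_key, b"chain", hashlib.sha256).digest()
    return message_key, next_chain

# Hypothetical starting point: a secret established between the two
# devices during the initial key exchange.
chain = hashlib.sha256(b"shared secret from initial key exchange").digest()
keys = []
for _ in range(3):
    mk, chain = ratchet(chain)
    keys.append(mk)
# Every message gets a distinct key; none is recoverable from `chain`.
```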

What matters most is that “trust” is not assumed at any point in this design. Signal does not rely on the goodwill of its operators. Client software is open source, cryptographic specifications are publicly documented, and reproducible builds make tampering verifiable. The principle of “don’t trust, verify” is embedded directly into the system.

This design avoids the extremes of both pure P2P and centralized control. It does not accept the instability inherent in full P2P networks, nor does it allow the surveillance and control risks that centralized systems introduce. Central relays are permitted, but they are rendered untrustworthy by design. It is a highly pragmatic compromise achieved through cryptography.

Meanwhile, new approaches are emerging that extend communication infrastructure into space itself. Satellite-based networks like Starlink bypass traditional telephone networks and terrestrial infrastructure altogether. This shift has implications not only for business models, but also for national security, privacy, and sovereignty. When the physical layer of communication changes, the rules that sit above it inevitably change as well.

Since the invention of the telephone, communication has evolved many times. It has repeatedly moved back and forth between centralization and decentralization, searching for workable compromises between technology and society. Neither absolute freedom nor absolute control has ever proven viable in reality.

That is why the question today is not “which model is correct,” but “where should the practical balance be placed.” By embedding trust into cryptography and treating central infrastructure as a necessary but constrained component, it becomes possible to preserve both freedom and stability. Communication continues to evolve, once again searching for its next form somewhere between these two forces.

What We Learned from Ten Years of Web3: Between Decentralization and Fragmentation

The ideals that Web3 put forward were, in many ways, beautiful.

A future where individuals—not platforms—controlled their data and assets.
A world connected without borders, without gatekeepers.
Blockchain, cryptocurrencies, DAOs—all emerged under the banner of “decentralization,” carrying with them the promise of a new social architecture.

Yet ten years have passed.
Looking back, the movement resembles a kind of guerrilla warfare—pressing against the edges of the existing internet, searching for cracks in the dominant platforms, attempting to implement ideals through tactical advances rather than structural reform.
Guerrilla strategies can spread an idea, but they rarely rewrite society’s rules.

Why did technology alone fail to change the world?
One reason is that decentralization and fragmentation were often conflated.

The “decentralization” Web3 called for was meant to be a structural design: a system that prevents trust and power from concentrating at a single point.
But in practice, communities and factions splintered, independent economic zones emerged, and incompatible rules proliferated.
Instead of decentralization, what emerged was fragmentation—parallel micro-worlds with little connective tissue.

Fragmentation weakens information sharing and destroys interoperability.
And eventually, it invites the rise of new central authorities.
Indeed, even within Web3, entities that claimed to be “decentralized” created exchanges and platforms that wielded overwhelming influence.
What was meant to decentralize inadvertently produced another form of centralization.

So what should we take from this decade?
The key lesson is that decentralization must be understood not as a structure but as a method of operating trust.

“How should trust be implemented in society?”
This is the most valuable question Web3 posed.
More important than blockchain itself is how to reduce the cost of verifying truth—and how individuals and society can mutually confirm authenticity in the digital world.
This question stretches far beyond Web3; it touches the future of the internet, AI, IoT, and next-generation infrastructure.
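
One concrete mechanism Web3 did leave behind for cheap verification is the Merkle tree: a verifier can confirm that one item belongs to a large set without trusting, or even holding, the whole set. A minimal sketch of computing the root:

```python
import hashlib

def h(data):
    """SHA-256 digest of raw bytes."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash each leaf, then fold the level pairwise until one root
    remains; an odd node at any level is carried up unchanged."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(h(level[i] + level[i + 1]))
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    return level[0]

root = merkle_root([b"tx1", b"tx2", b"tx3", b"tx4"])
# Proving one leaf against `root` needs only log2(n) sibling hashes,
# not the full data set — verification cost stays small as n grows.
```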

Consider the ideas that remain relevant today:
privacy with transparency,
data self-sovereignty,
interoperability and standards,
and the redefinition of authentication through decentralized identity.
These are not failures—they are intellectual assets left behind by Web3’s struggles.

Another critical lesson is that decentralization cannot exist without distributed power and compute.
No matter how ideal an algorithm is, if the electricity and computational capacity required to operate it are concentrated, the architecture will inevitably drift back toward centralization.
This is why countries like Japan—where local regions possess energy resources and land—have the potential to become experimental grounds for truly decentralized infrastructure.
Here, the theme of local cities holding computational power connects naturally.

The ten years of Web3 demonstrated that technology alone cannot move the world.
But they also forced society to confront a deeper question:
How should trust be handled in the digital age?
Decentralization is not about breaking the world apart; it is about finding a form of trust that keeps the world connected without centralizing authority.

Over the next decade, what answers will we craft?
The shift must be away from fragmentation and toward decentralization for the sake of connection.
That implementation will sit at the core of infrastructure design in the AI era.

Japan’s Manufacturing and Its Responsibility in Cybersecurity

For decades, Japanese manufacturing has been synonymous with “quality.” Precision, durability, craftsmanship, and trust have defined the country’s industrial identity.
Yet in an era shaped by AI and IoT, quality can no longer be understood solely as physical robustness. Hardware itself has become a target, and Japan’s machines, components, and devices now operate within a fundamentally new risk environment where cyberspace and the physical world are directly connected.

Until recently, cyberattacks focused primarily on digital systems: servers, networks, authentication layers.
Today, however, attackers aim at physical devices—automotive ECUs, robot actuators, factory control systems, medical equipment, communication modules.
If the internal control of these systems is compromised, the consequences extend far beyond data breaches: accidents, shutdowns, and physical malfunctions become real possibilities.

This shift carries particular weight for Japan.
Japanese hardware underpins a vast range of global equipment—precision machinery, automotive systems, robotics, and embedded components.
A single vulnerability in a Japanese-made part could serve as an entry point for attackers into systems around the world.
Conversely, if Japan succeeds in securing these layers, it becomes a crucial pillar of global cyber resilience.

The core issue is that traditional concepts of manufacturing quality are no longer synchronized with modern cyber risk.
Manufacturing evaluates safety and reliability on long time horizons; cyber threats evolve on the scale of days or hours.
Physical and digital timelines were once independent, but AIoT has merged them—forcing hardware and cybersecurity to be designed within the same conceptual layer.

In other words, manufacturing and cybersecurity can no longer be separated.
The idea of “adding security later” no longer fits the reality of interconnected devices.
Security must be integrated across every stage: the component level, assembly level, device level, and network integration.
The definition of quality must expand from “does not break” to “cannot be broken, even under attack.”
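
What "security integrated at the component level" can mean in practice is that every firmware image carries an integrity tag that is verified before the device will run it. The sketch below uses an HMAC for brevity; real secure-boot chains use asymmetric signatures (e.g. ECDSA) so the verification key need not stay secret. All names and values here are hypothetical.

```python
import hashlib
import hmac

# Hypothetical symmetric key provisioned into the device at manufacture.
DEVICE_KEY = b"key provisioned at manufacture"

def sign_firmware(image, key=DEVICE_KEY):
    """Factory step: compute an integrity tag over the firmware image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_firmware(image, tag, key=DEVICE_KEY):
    """Boot step: accept the image only if its tag matches, using a
    constant-time comparison to resist timing attacks."""
    return hmac.compare_digest(sign_firmware(image, key), tag)

image = b"\x7fELF hypothetical-firmware-v1.2"
tag = sign_firmware(image)
assert verify_firmware(image, tag)                  # untouched image boots
assert not verify_firmware(image + b"\x00", tag)    # tampered image rejected
```

The design point is that the check happens inside the device, at every boot, making "cannot be broken, even under attack" a testable property rather than a slogan.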

Globally, a culture of testing and attacking hardware is emerging.
Vehicles, industrial machinery, and critical infrastructure control panels are publicly examined, and specialists search for vulnerabilities that lead to corrective improvements.
This trend mirrors the evolution from software bug bounties toward hardware-level security assessment.
Such environments—where offensive and defensive testing coexist—directly contribute to elevating industrial standards.

Yet awareness of hardware security remains uneven across nations.
In Japan, the reputation for robust and safe manufacturing often leads to complacency: devices are assumed secure because they are well-made.
Paradoxically, this confidence can obscure the need for systematic vulnerability testing, turning manufacturing strengths into latent cyber risks.

To maintain global trust in the years ahead, Japan must design manufacturing and security as a unified discipline.
The production process itself must function simultaneously as a security process.
A country known for its hardware must also be capable of guaranteeing the safety of that hardware—this dual responsibility will define Japan’s competitive position.

Japan today carries responsibility not only for manufacturing the world relies on, but also for ensuring the cybersecurity of that manufactured world.
Manufacturers, infrastructure operators, telecom providers, local governments, research institutions—each must coordinate to secure the nation’s industrial foundation.
Cultivating a perspective that connects manufacturing with cyber defense is essential.
It is this integration that will sustain global confidence in Japanese technology and define the next evolution of “Japan Quality.”