Categories
Asides

Japan’s Manufacturing and Its Responsibility in Cybersecurity

For decades, Japanese manufacturing has been synonymous with “quality.” Precision, durability, craftsmanship, and trust have defined the country’s industrial identity.
Yet in an era shaped by AI and IoT, quality can no longer be understood solely as physical robustness. Hardware itself has become a target, and Japan’s machines, components, and devices now operate within a fundamentally new risk environment where cyberspace and the physical world are directly connected.

Until recently, cyberattacks focused primarily on digital systems: servers, networks, authentication layers.
Today, however, attackers aim at physical devices—automotive ECUs, robot actuators, factory control systems, medical equipment, communication modules.
If the internal control of these systems is compromised, the consequences extend far beyond data breaches: accidents, shutdowns, and physical malfunctions become real possibilities.

This shift carries particular weight for Japan.
Japanese hardware underpins a vast range of global equipment—precision machinery, automotive systems, robotics, and embedded components.
A single vulnerability in a Japanese-made part could serve as an entry point for attackers into systems around the world.
Conversely, if Japan succeeds in securing these layers, it becomes a crucial pillar of global cyber resilience.

The core issue is that traditional concepts of manufacturing quality are no longer synchronized with modern cyber risk.
Manufacturing evaluates safety and reliability on long time horizons; cyber threats evolve on the scale of days or hours.
Physical and digital timelines were once independent, but AIoT has merged them—forcing hardware and cybersecurity to be designed within the same conceptual layer.

In other words, manufacturing and cybersecurity can no longer be separated.
The idea of “adding security later” no longer fits the reality of interconnected devices.
Security must be integrated across every stage: the component level, assembly level, device level, and network integration.
The definition of quality must expand from “does not break” to “cannot be broken, even under attack.”

Globally, a culture of testing and attacking hardware is emerging.
Vehicles, industrial machinery, and critical infrastructure control panels are publicly examined, and specialists search for vulnerabilities that lead to corrective improvements.
This trend mirrors the evolution from software bug bounties toward hardware-level security assessment.
Such environments—where offensive and defensive testing coexist—directly contribute to elevating industrial standards.

Yet awareness of hardware security remains uneven across nations.
In Japan, the reputation for robust and safe manufacturing often leads to complacency: devices are assumed secure because they are well-made.
Paradoxically, this confidence can obscure the need for systematic vulnerability testing, turning manufacturing strengths into latent cyber risks.

To maintain global trust in the years ahead, Japan must design manufacturing and security as a unified discipline.
The production process itself must function simultaneously as a security process.
A country known for its hardware must also be capable of guaranteeing the safety of that hardware—this dual responsibility will define Japan’s competitive position.

Japan today carries responsibility not only for manufacturing the world relies on, but also for ensuring the cybersecurity of that manufactured world.
Manufacturers, infrastructure operators, telecom providers, local governments, research institutions—each must coordinate to secure the nation’s industrial foundation.
Cultivating a perspective that connects manufacturing with cyber defense is essential.
It is this integration that will sustain global confidence in Japanese technology and define the next evolution of “Japan Quality.”

Redesigning Conversation and the Emergence of a Post-Human Language

As I wrote in the previous article, the idea of a “common language for humans, things, and AI” has been one of my long-standing themes. Recently, I’ve begun to feel that this question itself needs to be reconsidered from a deeper level. The shifts happening around us suggest that the very framework of human communication is starting to update.

Human-to-human conversation is approaching a point where further optimization is difficult. Reading emotions, estimating the other person’s knowledge and cognitive range, and choosing words with care—these processes enrich human culture, yet they also impose structural burdens. I don’t deny the value of embracing these inefficiencies, but if civilization advances and technology accelerates, communication too should be allowed to transform.

Here, it becomes necessary to change perspective. Rather than polishing the API between humans, we should redesign the interface between humans and AI itself. If we move beyond language alone and incorporate mechanisms that supplement intention and context, conversation will shift to a different stage. When AI can immediately understand the purpose of a dialogue, add necessary supporting information, and reinforce human comprehension, the burdens formerly assumed to be unavoidable can dissolve naturally.

Wearing devices on our ears and eyes is already a part of everyday life. Sensors and connected objects populate our environments, creating a state in which information is constantly exchanged. What comes next is a structure in which these objects and AI function as mediators of dialogue, coordinating interactions between people—or between humans and AI. Once mediated conversation becomes ordinary, the meaning of communication itself will begin to change.

Still, today’s human–AI dialogue is far from efficient. We continue to use natural language and impose human-centered grammar and expectations onto AI, paying the cognitive cost required to do so. We do not yet fully leverage AI’s capacity for knowledge and contextual memory, nor have we developed language systems or symbolic structures truly designed for AI. Even Markdown, while convenient, is simply a human-friendly formatting choice; the semantic structure AI might benefit from is largely absent. Human and AI languages could in principle be designed from completely different origins, and within that gap lies space for a new expressive culture beyond traditional “prompt optimization.”
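
One way to picture a language designed for AI rather than for human eyes is to carry intent and assumed context as explicit fields instead of leaving them implicit in prose. The sketch below is purely hypothetical: the `Utterance` fields and the `missing_context` helper are invented for illustration, not an existing format.

```python
from dataclasses import dataclass, field

@dataclass
class Utterance:
    """A hypothetical message where intent and context travel
    alongside the words, instead of being inferred from them."""
    text: str                               # the natural-language surface form
    intent: str                             # e.g. "request", "inform", "confirm"
    assumes: list = field(default_factory=list)  # context the speaker treats as shared
    wants: list = field(default_factory=list)    # information the speaker expects back

def missing_context(msg: Utterance, shared: set) -> list:
    # What a mediating AI would need to supply before the listener
    # can understand the message as the speaker intended it.
    return [a for a in msg.assumes if a not in shared]
```

Under this kind of structure, "prompt optimization" becomes unnecessary: the mediator can see directly which assumptions are not yet shared and fill the gap itself.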

The most intriguing domain is communication that occurs without humans—between AIs, or between AI and machines. In those spaces, a distinct communicative culture may already be emerging. Its speed and precision likely exceed human comprehension, similar to the way plants exchange chemical signals in natural systems. If such a language already exists, our task may not be to create a universal language for humans, but to design the conditions that allow humans to participate in that domain.

How humans will enter the new linguistic realm forming between AI and machines is an open question. Yet this is no longer just an interface problem; it is part of a broader reconstruction of social and technological civilization. In the future, conversation may not rely on “words” as sound, but on direct exchanges of understanding itself. That outline is beginning to come into view.

A Common Language for Humans, Machines, and AI

Human communication still has room for improvement. In fact, it may be one of the slowest systems to evolve. The optimal way to communicate depends on the purpose—whether to convey intent, ensure accuracy, share context, or express emotion. Even between people, our communication protocols are filled with inefficiencies.

Take the example of a phone call. The first step after connecting is always to confirm that audio is working—hence the habitual “hello.” That part makes sense. But what follows often doesn’t. If both parties already know each other’s numbers, it would be more efficient to go straight to the point. If it’s the first time, an introduction makes sense, but when recognition already exists, repetition becomes redundant. In other words, if there were a protocol that could identify the level of mutual recognition before the conversation begins, communication could be much smoother.
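
The "protocol that could identify the level of mutual recognition" might be sketched as a tiny pre-call handshake. Everything here is a hypothetical illustration, not an existing standard: the `Recognition` levels and the `negotiate` function are invented for this sketch.

```python
from enum import Enum

class Recognition(Enum):
    UNKNOWN = 0   # first contact: a full introduction is needed
    KNOWN = 1     # mutual recognition exists: skip the ritual

def negotiate(caller_contacts: set, callee_contacts: set,
              caller_id: str, callee_id: str) -> Recognition:
    """Agree on a recognition level before the first word is spoken."""
    if callee_id in caller_contacts and caller_id in callee_contacts:
        return Recognition.KNOWN
    return Recognition.UNKNOWN

def opening_line(level: Recognition) -> str:
    # The protocol, not habit, decides how much greeting is still needed.
    if level is Recognition.UNKNOWN:
        return "Hello, let me introduce myself."
    return "Hello - straight to the point."
```

If both phones exchanged this one bit of state before ringing, the habitual preamble would only occur when it actually carries information.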

Similar inefficiencies appear everywhere in daily life. Paying at a store, ordering in a restaurant, or getting into a taxi you booked through an app—all of these interactions involve unnecessary back-and-forth verification. The taxi example is especially frustrating. As a passenger, you want to immediately state your reservation number or name to confirm your identity. But the driver, trained for politeness, automatically starts with a formal greeting. The two signals overlap, the identification gets lost, and eventually the driver still asks, “May I have your name, please?” Both sides are correct, yet the process is fundamentally flawed.

The real issue is that neither side knows the other’s expectations beforehand. Technically, this problem could be solved easily: automate the verification. A simple touch interaction or, ideally, a near-field communication system could handle both identification and payment instantly upon entry. In some contexts, reducing human conversation could actually improve the experience.

This leads to a broader point: the need for a shared language not only between people but also between humans, machines, and AI. At present, no universal communication protocol exists among them. Rather than forcing humans to adapt to digital systems, we should design a protocol that enables mutual understanding between the two. By implementing such a system at the societal level, communication between humans and AI could evolve from guesswork into trust and efficiency.

Ultimately, the most effective form of communication is one that eliminates misunderstanding—regardless of who or what is on the other end. Whether through speech, touch, or data exchange, what we truly need is a shared grammar of interaction. That grammar, still emerging at the edges of design and technology, may become the foundation of the next social infrastructure.

Rethinking Tron

Perhaps Tron is exactly what is needed right now.
I had never looked at it seriously before, but revisiting its history and design philosophy makes it clear that many of its principles align with today’s infrastructure challenges.
Its potential has always been there—steady, consistent, and quietly waiting for the right time.

Background

Tron was designed around the premise of computation that supports society from behind the scenes.
Long before mobile and cloud computing became common, it envisioned a distributed and cooperative world where devices could interconnect seamlessly.
Its early commitment to open ecosystem design set it apart, and while its visible success in the consumer OS market was limited, its adoption as an invisible foundation continued to grow.

The difficulty in evaluating Tron has always stemmed from this invisibility.
Its success accumulated quietly in the background, sustaining “systems that must not stop.”
The challenge has never been technological alone—it has been how to articulate the value of something that works best when unseen.

Why Reevaluate Tron Now

The rate at which computational capability is sinking into the social substrate is accelerating.
From home appliances to industrial machines, mobility systems, and city infrastructure, the demand for small, reliable operating systems at the edge continues to increase.
Tron’s core lies in real-time performance and lightweight design.
It treats the OS not as an end but as a component—one that elevates the overall reliability of the system.

Its focus has always been on operating safely and precisely inside the field, not just in the cloud.
The needs that Tron originally addressed have now become universal, especially as systems must remain secure and maintainable over long lifespans.

Another reason for its renewed relevance lies in the shifting meaning of “open.”
By removing licensing fees and negotiation costs, and by treating compatibility as a shared social contract, Tron embodies a practical model for the fragmented IoT landscape.
Having an open, standards-based domestic option also supports supply chain diversity—a form of strategic resilience.

Current Strengths

Tron’s greatest strength is that it does not break in the field.
It has long been used in environments where failure is not tolerated—automotive ECUs, industrial machinery, telecommunications infrastructure, and consumer electronics.
Its lightweight nature allows it to thrive under cost and power constraints while enabling long-term maintenance planning.

The open architecture is more than a technical advantage.
It reduces the cost of licensing and vendor lock-in, helping organizations move decisions forward.
Its accessibility to companies and universities directly contributes to talent supply stability, lowering overall risks of deployment and long-term operation.

Visible Challenges

There are still clear hurdles.
The first is recognition.
Success in the background is difficult to visualize, and in overseas markets Tron faces competition from ecosystems with richer English documentation and stronger commercial support.
To encourage adoption, it needs better documentation, clearer support structures, visible case studies, and accessible community pathways.

The second is the need to compete as an ecosystem, not merely as an OS.
Market traction requires more than technical superiority.
Integration with cloud services, consistent security updates, development tools, validation environments, and production support must all be presented in an accessible, cohesive form.
An operational model that assumes continuous updating is now essential.

Outlook and Repositioning

Tron can be repositioned as a standard edge OS for the AIoT era.
While large-scale computation moves to the cloud, local, reliable control and pre-processing at the edge are becoming more important.
By maintaining its lightweight strength while improving on four fronts—international standard compliance, English-language information, commercial support, and educational outreach—the landscape could shift considerably.

Rethinking Tron is not about nostalgia for a domestic technology.
It is a practical reconsideration of how to design maintainable infrastructure for long-lived systems.
If we can balance invisible reliability with visible communication, Tron’s growth is far from over.
What matters now is not the story of the past, but how we position it for the next decade.

Could the Human Eye Receive Optical Communication through IoT Integration?

I wondered if humans could ever become compatible with IOWN.

When vision is seen as an entry point for information, the human eye is already a highly advanced sensor for receiving light. If communication functionality could be layered onto it, the human body itself might become a node within the information network.

Of course, in reality, there are significant challenges involving freedom of movement and safety. Directly receiving optical signals—through Li-Fi or fiber-based communication—would place biological strain on the eye, making practical implementation difficult. Yet if even a part of the human body could receive data through optical communication, the relationship between humans and networks would be fundamentally transformed.

Reframing vision not as an organ for seeing but as a port for communication shifts the gateway of information from the brain to the body itself.
I would like to imagine a future where humans become the terminal devices of IOWN.

The Need for a Self-Driving License

After AT-only licenses, the next step we may need is a “self-driving license.”

Recently, I rented a gasoline-powered car for the first time in a while. It was an automatic model, but because I was unfamiliar with both the vehicle and the driving environment, the experience was far more stressful than I expected. Having become used to driving an EV equipped with autonomous features, I found the act of operating everything manually—with my own judgment and physical input—strangely primitive.

When the gear is shifted to drive, the car starts moving on its own. A handbrake must be engaged separately, and the accelerator must be pressed continuously to keep moving. Every stop requires the brake, every start requires a shift of the foot back to the accelerator, and even the turn signal must be turned off manually. I was reminded that this entire system is designed around the assumption that the human body functions as the car’s control mechanism.

I also found myself confused by actions that used to be second nature—starting the engine, locking and unlocking the door with a key. What once seemed natural now feels unnecessary. There are simply too many steps required before a car can even move. Press a button, pull a lever, step on a pedal, turn a wheel. The process feels less like operating a machine and more like performing a ritual.

From a UX perspective, this reflects a design philosophy stuck between eras. The dashboard is filled with switches and meters whose meanings are not immediately clear. Beyond speed and fuel levels, how much information does a driver actually need? The system relies on human judgment, but in doing so, it also introduces confusion.

When driving shifted from manual to automatic, the clutch became obsolete. People were freed from unnecessary complexity, and driving became accessible to anyone. In the same way, in an age where autonomous driving becomes the norm, pressing pedals or turning a steering wheel will seem like relics of a bygone era. We are moving from a phase where machines adapt to humans to one where humans no longer need to touch the machines at all.

Yet driver licensing systems have not caught up with this change. Until now, a license has certified one’s ability to operate a vehicle. But in the future, what will matter is the ability to interact with the car, to understand its systems, and to intervene safely when needed. It will no longer be about physical control, but about comprehension—of AI behavior, of algorithmic decision-making, and of how to respond when something goes wrong.

When AT-only licenses were introduced, many drivers were skeptical about removing the clutch. But over time, that became the standard, and manual transmissions turned into a niche skill. Likewise, if a “self-driving license” is introduced in the near future, pressing pedals may come to be viewed as a legacy form of driving—something from another era.

The evolution of driving technology is, at its core, the gradual separation of humans from machines. A self-driving license would not be a qualification to control a vehicle, but a literacy certificate for coexisting with technology. It would mark the shift from moving the car to moving with the car. Such a change in licensing might define how transportation itself evolves in the next generation.

Infrastructure That Makes Corporate Trust Irrelevant by Making Lies Impossible

Every time a “fabrication of inspection data” scandal surfaces in the manufacturing world, I can’t help but think—it’s not just about one company’s wrongdoing. It’s a structural issue embedded in society itself.

We operate in systems where lying becomes necessary. In fact, the system often rewards dishonesty. As long as this remains true, we shouldn’t view misconduct as a matter of individual or corporate ethics—but as a systemic design flaw.

That’s exactly why, when a technology appears that makes lying technically impossible, we should adopt it without hesitation.

Why can large corporations charge higher prices and still sell their products? Because they’ve earned trust over time. Their price tags are justified not just by quality and performance, but by accumulated history—reputation, consistency, and customer confidence.

But once that trust is broken, price advantages crumble quickly. Worse, the damage ripples through the entire supply chain. The longer the history, the broader the impact.

Startups, on the other hand, often compete on price. Without a long track record, they offer lower prices and slowly build trust through repeated delivery. That’s been the only way—until now.

Today, things are different. Imagine a startup that records its manufacturing processes and inspection data in real time, using an immutable system that prevents tampering. That alone is enough to objectively prove that their data—and by extension, their product—is authentic.

In other words, trust is no longer a function of time. We’re shifting from trust built over decades to trust guaranteed by infrastructure.

So for startups entering the manufacturing space, here’s what I believe they should keep in mind from day one:

Leave a tamper-proof audit trail for all quality and inspection data.
That record becomes a competitive weapon the moment a legacy competitor is caught in a data scandal.
The value of that audit trail grows over time, so it’s critical to start now—not later.
Use data to visualize trust, and build competitive advantage beyond price.
It’s an asset that can’t be recreated retroactively.
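
A minimal sketch of such a tamper-evident audit trail is a hash chain: each entry commits to the hash of the entry before it, so a retroactive edit invalidates every later link. This is an illustrative toy, not a real product; the class and field names are assumptions.

```python
import hashlib
import json

def _digest(record: dict) -> str:
    # Canonical JSON keeps the hash stable regardless of key order.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class AuditTrail:
    """Append-only log of inspection records. Each entry stores the hash
    of the previous entry, so editing old data breaks the chain."""
    def __init__(self):
        self.entries = []

    def append(self, inspection_data: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"data": inspection_data, "prev": prev_hash}
        entry["hash"] = _digest({"data": inspection_data, "prev": prev_hash})
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            recomputed = _digest({"data": e["data"], "prev": e["prev"]})
            if e["prev"] != prev or e["hash"] != recomputed:
                return False
            prev = e["hash"]
        return True
```

The point of the exercise: once a measurement is appended, "fixing" it later is detectable by anyone who replays the chain, which is exactly the property that turns the record into proof rather than testimony.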

For large enterprises that already enjoy the benefits of trust, this shift isn’t someone else’s problem. It’s happening now.

Your ability to charge premium prices is built on the trust you’ve accumulated—but that trust can vanish with a single scandal. The older the company, the greater the surface area for reputational risk. And the more complex the supply chain, the faster that risk spreads.

What we need now is an environment where dishonesty is structurally impossible.
Real-time auditing of inspection data.
End-to-end transparency across the supply chain.
With that infrastructure in place, even if a problem does occur, it can be detected early and contained before it spreads.

This isn’t just risk mitigation—it’s a form of psychological safety for the people on the ground. If they don’t have to lie, they can focus on doing honest work. If mistakes happen, they can report them. And if they’re reported, they can be fixed.

In short, this isn’t a system to prevent fraud—it’s a system to cultivate trust.

With such a system, a company can prove it hasn’t cheated. And society will begin to evaluate who to work with based on those records. In a world where data is an asset, an immutable audit trail is the ultimate proof of integrity.

But here’s the thing: you can only start building that record now. Not tomorrow.

An audit trail means you never had to lie in the first place. It’s a record of your honesty—and a source of competitive strength for the future.

How Nvidia’s Mirror World Is Changing Manufacturing

Watching Nvidia’s latest announcements, I couldn’t help but feel that the world of manufacturing is entering an entirely new phase.

Until now, PDCA cycles in manufacturing could only happen in the physical world.
But that’s no longer the case. We’re entering a time when product development can be simulated in virtual environments—worlds that mirror our own—and those cycles are now run autonomously by AI.

It’s clear that Nvidia intends to make this mirror world its main battlefield.
With concepts like Omniverse and digital twins, the idea is simple: bring physical reality into a digital copy, migrate the entire industrial foundation into that alternative world, and build a new economy on top of Nvidia’s infrastructure.

In that world, prototypes and designs can be tested and iterated in real time, at extreme levels of precision.
Self-driving simulations, factory line optimization, structural analysis of buildings, drug discovery, medical research, education—it’s all happening virtually, without ever leaving the simulation.

The meaning of “making things” is starting to shift.
Before anything reaches the physical world, it will have gone through tens of thousands of iterations in the virtual one—refined, evaluated, and optimized by AI.
We’ve entered a phase where PDCA loops run at hyperspeed in the digital realm, and near-finished products are sent out into reality.

This isn’t just about CG or visualization.
It’s about structures that exist only in data, yet directly affect actions in the physical world.
The mirror world has reached the level of fidelity where it can now be deployed socially.

In this era, I believe Japan’s role becomes even more essential.

No matter how detailed the design, we still need somewhere that can realize it physically, with precision.
In a world where even the slightest error could be fatal, manufacturing accuracy and quality control become the decisive factors.

And that’s exactly where Japan excels.

Things born in simulation will descend into reality.
And the interface between the two—“manufacturing”—is only going to grow in significance.

Tesla Optimus Will Become Infrastructure

The age of AI has already begun.

With generative AI tools like ChatGPT, we can now produce text, images, voice, even video. It’s not “coming soon” — it’s already here.

But changing the physical world takes one more step: integration with IoT.
AI can process data, but it can’t touch the real world. That’s where robots come in — they allow AI to physically interact with reality. Optimus is a symbol of that.

Tesla Optimus is a device meant to carry us into the age of automation, without rewriting our entire society.
From AI’s point of view, it’s the interface to the real world.
No need to reinvent roads, elevators, or doors. Optimus — and other robots being built by Big Tech — are designed to move through the world as it is. They’re general-purpose labor bodies, built to help AI function inside existing human infrastructure.

What we’re seeing now is, I think, a robotics plan to bring AIoT to the entire physical world.
Everything will be connected, automated, decision-capable, and able to act.
And the reason robots need to be humanoid is finally becoming clear: they’re designed to fit into our world, not the other way around.

Automation will move faster than we expect.
Car companies might end up as manufacturers of “just empty boxes” — simple transport units. These boxes don’t need intelligence. In fact, automation works better when things follow spec, stay predictable, and don’t think too much.

In Japan’s case, I wouldn’t be surprised if the government eventually distributes robots like Tesla Optimus.
You give up your driver’s license, and in return, get a subsidy for a household robot. That kind of world might not be a joke — it might be real, and sooner than we think.

But the tech and quality needed to make those robots — that’s where Japan comes in.

Humanoid robots are hard to build. They can’t afford to break down. Batteries, motors, sensors, thermal systems, materials — all of it needs to be precise and reliable.
That’s exactly what Japan has spent decades getting good at.

Manufacturing and quality control — those might be Japan’s last strongholds.
And they’re exactly what the world is looking for right now.
