
Could the Human Eye Receive Optical Communication through IoT Integration?

I wondered if humans could ever become compatible with IOWN, NTT’s Innovative Optical and Wireless Network initiative.

When vision is seen as an entry point for information, the human eye is already a highly advanced sensor for receiving light. If communication functionality could be layered onto it, the human body itself might become a node within the information network.

Of course, in reality, there are significant challenges involving freedom of movement and safety. Directly receiving optical signals—through Li-Fi or fiber-based communication—would place biological strain on the eye, making practical implementation difficult. Yet if even a part of the human body could receive data through optical communication, the relationship between humans and networks would be fundamentally transformed.

Reframing vision not as an organ for seeing but as a port for communication shifts the gateway of information from the brain to the body itself.
I would like to imagine a future where humans become the terminal devices of IOWN.


The Need for a Self-Driving License

After AT-only licenses, the next step we may need is a “self-driving license.”

Recently, I rented a gasoline-powered car for the first time in a while. It was an automatic model, but because I was unfamiliar with both the vehicle and the driving environment, the experience was far more stressful than I expected. Having become used to driving an EV equipped with autonomous features, I found the act of operating everything manually—with my own judgment and physical input—strangely primitive.

When the gear is shifted to drive, the car starts moving on its own. A handbrake must be engaged separately, and the accelerator must be pressed continuously to keep moving. Every stop requires the brake, every start requires a shift of the foot back to the accelerator, and even the turn signal must be turned off manually. I was reminded that this entire system is designed around the assumption that the human body functions as the car’s control mechanism.

I also found myself confused by actions that used to be second nature—starting the engine, locking and unlocking the door with a key. What once seemed natural now feels unnecessary. There are simply too many steps required before a car can even move. Press a button, pull a lever, step on a pedal, turn a wheel. The process feels less like operating a machine and more like performing a ritual.

From a UX perspective, this reflects a design philosophy stuck between eras. The dashboard is filled with switches and meters whose meanings are not immediately clear. Beyond speed and fuel levels, how much information does a driver actually need? The system relies on human judgment, but in doing so, it also introduces confusion.

When driving shifted from manual to automatic, the clutch became obsolete. People were freed from unnecessary complexity, and driving became accessible to anyone. In the same way, in an age when autonomous driving is the norm, pressing pedals or turning a steering wheel will seem like relics of a bygone era. We are moving from a phase where machines adapt to humans to one where humans no longer need to touch the machines at all.

Yet driver licensing systems have not caught up with this change. Until now, a license has certified one’s ability to operate a vehicle. But in the future, what will matter is the ability to interact with the car, to understand its systems, and to intervene safely when needed. It will no longer be about physical control, but about comprehension—of AI behavior, of algorithmic decision-making, and of how to respond when something goes wrong.

When AT-only licenses were introduced, many drivers were skeptical about removing the clutch. But over time, that became the standard, and manual transmissions turned into a niche skill. Likewise, if a “self-driving license” is introduced in the near future, pressing pedals may come to be viewed as a legacy form of driving—something from another era.

The evolution of driving technology is, at its core, the gradual separation of humans from machines. A self-driving license would not be a qualification to control a vehicle, but a literacy certificate for coexisting with technology. It would mark the shift from moving the car to moving with the car. Such a change in licensing might define how transportation itself evolves in the next generation.


Will SoftBank Acquire Intel?

I once wondered about this question. Later, SoftBank announced an investment in Intel, confirming that the company was indeed moving in that direction. I want to record here what I was thinking before that announcement, so that I can later compare those thoughts with what actually happens.

The Background and Current State of the ARM Acquisition

SoftBank’s 2016 acquisition of ARM was a clear declaration that the company intended to anchor itself at the upstream of the semiconductor value chain — intellectual property for chip design. ARM’s licensing model, built on neutrality and scalability, expanded its reach from mobile devices to IoT, servers, and even supercomputers. While promising to maintain ARM’s neutrality, SoftBank began to tighten integration, emphasizing subsystem offerings and deeper involvement in the server domain. Even after ARM’s re-listing in 2023, it remains the group’s most important asset and the central hub connecting its other investments.

The Next Step: Building Around ARM

SoftBank’s acquisitions have sought to elevate ARM from a mere IP provider into the driving force of an entire ecosystem. The acquisition of Graphcore gave it a foothold in AI accelerators, and the purchase of Ampere brought a practical server-CPU operation under the same umbrella. The combination of ARM’s low-power design philosophy and the data-center scale-out trend offers an alternative optimal point to the traditional x86-centric server market. This configuration directly connects to the later thought experiment concerning Intel.

The Distance Between SoftBank and Nvidia

SoftBank was once a major Nvidia shareholder, forging a close relationship before the AI boom. The subsequent sale, which forfeited a massive appreciation opportunity, shifted the relationship to one of both collaboration and competition. While joint projects in Japan’s AI and telecommunications infrastructure continue, SoftBank’s push to cultivate multiple in-house AI-chip initiatives can be read as an attempt to challenge Nvidia’s dominance. Nvidia, for its part, is reinforcing its own vertical integration with ARM-based CPUs and NVLink interconnects. The two paths intersect but ultimately lead toward different goals.

The AI Investment Strategy Centered on OpenAI

SoftBank’s massive commitment to OpenAI, its infrastructure partnerships with Oracle and others, and joint ventures in Japan all signal a plan to bring the software core of AI under its orbit while pre-securing compute resources. In the AI era, supremacy converges not on algorithms but on the ability to govern and interconnect power, semiconductors, and capital. SoftBank aims to tie the scale of AI itself to its balance sheet, controlling both design IP and the physical data-center layer.

The Intel Hypothesis

How might Intel fit into this circuitry? Market stagnation, restructuring pressures, and the separation of manufacturing from products have fueled repeated speculation about acquisitions and equity partnerships. Reports suggested that ARM showed interest in Intel’s product division but talks fell through, and negotiations over AI-chip manufacturing also collapsed over production-capacity terms. There is no evidence of a formal buyout attempt, but traces of exploratory engagement remain. The core question is simple: why would SoftBank want to absorb Intel, and through what realistic path could it happen?

Examining Strategic Alignment

ARM is an IP-driven entity without manufacturing. Intel possesses vast fabrication capacity and an x86 franchise but lags in mobile and power-efficient contexts. Combined, they could span both CPU architectures, integrating from data centers to edge devices with comprehensive design and supply capabilities. Within the AI infrastructure stack, they could encompass CPUs, AI accelerators, memory, interconnects, and fabs. The logic is elegant — and access to CHIPS Act subsidies and advanced fabrication would offset reliance on external foundries.

Yet elegant logic does not guarantee practical feasibility. For foreign capital to take control of Intel — an American strategic asset — would run headlong into political and regulatory barriers. As the U.S. Steel precedent showed, national interest can override regulatory clearance. On antitrust grounds, even the perception that ARM’s neutrality might erode would provoke fierce resistance. The industry views ARM as common infrastructure; any integration skewed toward a single group’s advantage would meet opposition from all sides. Add financial strain and the operational burden of running manufacturing, and a full acquisition becomes implausible.

Pragmatic Alternatives

If full control is closed off, distributed strategies remain. Partial equity participation, co-design projects, long-term manufacturing contracts, and multinational consortiums all represent workable routes. ARM can enhance its relevance through subsystem design and joint optimization; Ampere and Graphcore can bring their products to market; Rapidus and overseas foundries can diversify manufacturing access. Rather than outright control, strengthening its role as a hub connecting specifications, capital, and power supply aligns with SoftBank’s pragmatic style.

Re-Examining the Risks

A U.S. Steel–type political blockade is entirely plausible. Cross-border semiconductor investments fall squarely within national-security and industrial-policy oversight, entangling legislators, unions, and state governments. Antitrust risks are also significant. If ARM’s neutrality were questioned, Apple, Qualcomm, Microsoft, Amazon, Google, and Nvidia would all lobby against the deal. Conflicts with existing players would be inevitable: Nvidia is consolidating independence across CPUs and GPUs, while Apple closely monitors ARM’s trajectory, vital to its own SoC strategy. The practical route to conflict avoidance lies in incentive structures that distribute value across stakeholders and in maintaining transparent, non-discriminatory licensing.

Japan’s Policy Landscape and Points of Contact

SBI’s domestic memory initiative has shifted focus from a failed PSMC alliance toward cooperation with the SK group. Subsidy frameworks remain, and Japan continues exploring ways to restore local memory capacity. With domestic AI firms such as PFN in the mix, a new ecosystem centered on AI-specific memory demand could emerge. Meanwhile, Rapidus aims for 2-nm logic mass production and is collaborating with Tenstorrent to capture edge-AI demand. SoftBank, a shareholder, holds the option to align ARM or Ampere designs with domestic manufacturing. The interplay between national and private capital thus serves SoftBank as both risk hedge and policy alignment mechanism.

Managing Relationships with Nvidia and Apple

Nvidia represents both partner and competitor. Joint efforts in Japan’s AI and 5G infrastructure coexist with SoftBank’s independent AI-chip initiatives and ARM’s expansion, both of which could alter long-term market dynamics. For Apple, ARM’s neutrality and licensing stability are paramount. Any perception that ARM’s roadmap tilts toward proprietary advantage could chill relations. Maintaining openness in software toolchains, transparency in roadmaps, and a balance between differentiation and neutrality will be key.

The Question That Remains

Even if an acquisition is unrealistic, why does the idea keep resurfacing? The answer is simple: in the AI era, value creation is migrating toward the convergence of compute resources, power, and capital. CPU architectures, advanced fabs, AI accelerators, memory, interconnects, cloud infrastructure, and generative AI platforms — whoever orchestrates these elements will define the next decade. SoftBank holds capital, IP, and market reach, but lacks proprietary access to manufacturing and power. That is why Intel enters the frame. Yet being in view and being within reach are two different things.

Conclusion

Even if the path to a full Intel acquisition is closed, SoftBank still has room to build equivalent capability through distributed partnerships. The real question is how to integrate power sources, manufacturing ecosystems, architectures, and capital structures into a coherent design. This is no longer about a one-time transaction but about the ability to interlink policy, capital, and technology. When revisited years from now, this speculation may not look like a rumor but rather an early thought experiment on the reconfiguration of power in the age of compute sovereignty.


The Strategic Value of Compute Resources in the OpenAI–AMD Partnership

The expansion of generative AI has entered a stage where progress is determined not by model novelty but by the ability to secure and operate compute resources. The multi-year, multi-generation alliance between OpenAI and AMD clearly reflects this structure. It is no longer a simple transactional deal but a framework that integrates capital, supply, power, and implementation layers into a mechanism for mutual growth—signaling a shift toward scale as a built-in assumption.

Forecasting Power Demand

The backbone of this partnership is gigawatt-class compute capacity. An initial 1 GW, scaling to several gigawatts, links data-center construction directly to regional grid planning rather than individual projects. The key factors are not only peak power draw but sustained supply reliability and effective PUE including heat rejection. AI training workloads behave as constant loads rather than spikes, making grid stability and redundancy in auxiliary systems critical bottlenecks.

Model evolution continues to expand overall electricity demand, offsetting gains in performance per watt. Even as semiconductor generations improve efficiency, larger parameter counts, bigger datasets, and multimodal preprocessing and inference push consumption upward. Consequently, capital investment shifts its center of gravity from racks to civil-engineering and electrical domains that include cooling infrastructure.
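
To make the orders of magnitude concrete, here is a minimal back-of-the-envelope sketch. The facility size, PUE, utilization, and per-server draw below are illustrative assumptions of mine, not figures disclosed by either company.

```python
# Rough sizing of a gigawatt-class AI campus.
# All inputs are illustrative assumptions, not disclosed figures.

FACILITY_POWER_MW = 1_000   # assumed facility capacity: 1 GW
PUE = 1.3                   # assumed effective PUE, including cooling and heat rejection
SERVER_POWER_KW = 14.3      # assumed draw of one 8-GPU training server
UTILIZATION = 0.9           # training behaves as a near-constant load

def it_power_mw(facility_mw: float, pue: float) -> float:
    """Share of facility power left for IT equipment after overhead."""
    return facility_mw / pue

def sustainable_servers(facility_mw: float, pue: float, server_kw: float, util: float) -> int:
    """Rough count of GPU servers the campus can run continuously."""
    return int(it_power_mw(facility_mw, pue) * 1_000 * util / server_kw)

print(f"IT power: ~{it_power_mw(FACILITY_POWER_MW, PUE):.0f} MW of {FACILITY_POWER_MW} MW")
print(f"Sustainable GPU servers: ~{sustainable_servers(FACILITY_POWER_MW, PUE, SERVER_POWER_KW, UTILIZATION):,}")
```

The point of the exercise is that the grid connection, not the chip order, is the binding variable; everything scales from how much power can actually be delivered and cooled.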

Structural Issues in the Compute Market

Even with AMD expanding deployment options, the NVIDIA-dominated market faces other bottlenecks—optical interconnects, advanced HBM, and CoWoS packaging capacity among them. Rising rack-level heat density makes the shift from air to liquid cooling irreversible, tightening location constraints for data centers. The result is a conversion lag: capital cannot instantly be turned into usable compute capacity.

A further concern is geopolitical risk. Heightened global tensions and export controls can fragment manufacturing and deployment chains, triggering cascading delays and redesigns.

OpenAI’s Challenges

The first challenge for OpenAI is absorbing and smoothing exponentially growing compute demand. Running research, productization, and APIs concurrently complicates capacity planning across training and inference clusters, making the balance between model renewal and existing services a critical task.

The second is diversification away from a single vendor. Heavy reliance on NVIDIA has caused supply bottlenecks and eroded pricing flexibility. Sharing the roadmap with AMD therefore carries both optimization and procurement significance.

The third lies in capital structure and governance. While drawing in vast external commitments, OpenAI must maintain neutrality and research agility, requiring careful contract architecture to coordinate partnerships. Its past internal split is a reminder that when capital providers bring divergent decision criteria, aligning research agendas becomes difficult.

AMD’s Challenges

AMD’s bottlenecks are manufacturing capacity and the software ecosystem. Its latest designs can compete technically, but to offer a developer experience rivaling the PyTorch/CUDA world, it must advance runtimes, compilers, kernels, and distributed-training toolchains. Hardware aspects such as HBM supply, packaging yield, and thermal management will define both delivery schedules and operational stability.

A second challenge is converting the co-developed results with OpenAI into broader market value. If collaboration remains confined to a single project or product, dependency risk increases. Generalizing and scaling the gains to other markets will be essential.

Strategic Intent of the Partnership

At the surface, the intent is clear: OpenAI seeks secure and diversified compute resources, while AMD seeks simultaneous credibility and demand. Structurally, however, there is a deeper layer—integrating models, data, compute, and capital into a unified flow; accelerating GPU design and supply cycles; and locking in diversified power and site portfolios early. In effect, both sides embed their respective challenges into a forward-loaded roadmap that reduces uncertainty in supply and financing.

Scheme Design

The distinctive feature is clause design that firmly enforces reciprocal commitment. Large take-or-pay volumes and facility milestones are tied to capital returns, linking hardware success directly to customer benefit. For suppliers, it secures quantity certainty and pricing floors, easing investment decisions. For buyers, it strengthens influence over technical specifications and workload fit. Financially, it helps smooth extreme swings in cash flow.

Difference from NVIDIA’s Model

Where NVIDIA’s massive deal channels capital from supplier to buyer—who then spends it back on the supplier—the AMD structure grants equity options from supplier to buyer, while the buyer guarantees long-term procurement. Both align incentives, but the direction of capital flow and degree of governance leverage differ.

NVIDIA’s model gives suppliers greater control and restricts buyers through capital conditions. AMD’s allows buyers to become future shareholders, giving them indirect influence over the supplier’s technical priorities.

Compute-ism

In the AI era, the value model ultimately converges on a single question: who can operate how much compute, on what power, at what efficiency, and under what governance. Partnerships with Microsoft, NVIDIA, AMD, and Oracle all stem from that premise. Compute capacity has become currency, conduit, and foundation of sovereignty. The choice of compute space—including power source, jurisdiction, ethical stance, and data lineage—extends from corporate strategy into institutional design.

From this viewpoint, true competitiveness lies in projects that integrate long-term cloud commitments, dedicated power and cooling, secured land, and supply-chain finance. Price or FLOPS comparisons alone no longer define advantage.

Impact on the Hardware and Technology Roadmap

Meeting the insatiable demand for compute requires clear priorities: larger memory space, lower latency, more efficient cooling, higher energy performance. GPUs will continue evolving accordingly—scaling HBM capacity and bandwidth, advancing interconnects, and optimizing storage and data-loading paths. Opportunities for improvement remain endless.

On the software side, the question is how close AMD’s compilers and runtimes can come to zero friction while remaining compatible with existing PyTorch and JAX code. In an expanding market, feeding operational feedback into architecture along the shortest path will decide generational performance gaps. Even abundant hardware fails to convert into market value without matching software optimization.

Power, cooling, and site strategy should also be treated as integral parts of the roadmap. Layouts premised on liquid immersion, integration of heat recovery with district systems, hybridization of renewables and storage, and adaptive scheduling to power demand—all these “Watt and Bit” linkages define the real unit cost of compute. Chip miniaturization alone will not sustain the next decade.

Conclusion

The OpenAI–AMD partnership marks the arrival of an era where capital, supply, power, and software are designed as a single system around compute resources. Under compute-ism, victory depends not on individual products but on ecosystem maturity. Market velocity will accelerate, yet the fundamentals remain simple: which power, in which place, on which chip, through which code, under which governance. The alliances that design these layers early, deeply, and broadly will draw the next map of the AI age.


How Much Power Does a GPU Consume?

I tried to compare the power consumption of GPUs in a way that makes it easier to imagine.
This is not a precise comparison, and since it only looks at power consumption, it may lead to misunderstandings regarding heat generation or efficiency.
Still, to get an intuitive sense of how much energy today’s GPUs consume, this kind of simplification can be useful.

Let’s start with something familiar — a household heater.
A typical ceramic or electric heater consumes about 0.3 kilowatts on low and roughly 1.2 kilowatts on high.
We can use this 1.2 kilowatts as a reference point — “one heater running at full power.”

When you compare household appliances and server hardware in the same units, the scale difference becomes more tangible.
The goal here is to visualize that difference.

Power Consumption (Approximate)

Household Heater (High): ~1.2 kW
Server Rack (Conventional): ~10 kW
Server Rack (AI-Ready): 20–50 kW
NVIDIA H200 (Server): ~10.2 kW
Next-Generation GPU (Estimated): ~14.3 kW

A household heater represents the level of power used by common home heating devices.
A conventional server rack, typical through the 2010s, was designed for air-cooled operation with around 10 kilowatts per rack.
In contrast, modern AI-ready racks are built for liquid or direct cooling and can deliver 20–50 kilowatts per rack.
The NVIDIA H200’s figure reflects the official specification of a current-generation GPU server, while the next-generation GPU is a projection based on industry reports.

Next, let’s convert this into something more relatable — how many heaters’ worth of electricity does a GPU server consume?
This household-based comparison helps make the scale more intuitive.

Heater Equivalent (Assuming One Heater = ~1.2 kW)

NVIDIA H200 (Server): ~8.5 heaters
Next-Generation GPU (Estimated): ~12 heaters

Until the 2010s, a standard data center rack typically supplied around 10 kilowatts of power — near the upper limit for air-cooled systems.
However, the rise of AI workloads has changed this landscape.
High-density racks designed for liquid cooling now reach 20–50 kilowatts per rack.
Under this assumption, a single GPU server would nearly fill an entire legacy rack’s capacity, and even an AI-ready rack could accommodate only a few GPU servers, roughly one to four depending on the generation.

  • NVIDIA H200 (Current Model)

    • Per Chip: up to 0.7 kW
    • Per Server (8 GPUs + NVSwitch): ~10.2 kW
    • Equivalent to about 8.5 household heaters
    • Nearly fills a conventional 10 kW rack
    • Fits roughly 2–4 servers per AI-ready rack
  • Next-Generation GPU (Estimated)

    • Per Chip: around 1.0 kW (based on reported estimates)
    • Per Server (8 GPUs + NVSwitch assumed): ~14.3 kW
    • Equivalent to about 12 household heaters
    • Exceeds the capacity of conventional racks
    • Fits roughly 1–3 servers per AI-ready rack
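
The arithmetic behind these bullet points is simple enough to script. A minimal sketch using the approximate figures above (one heater at 1.2 kW, a 10 kW legacy rack, and 50 kW as the upper end of an AI-ready rack):

```python
# Heater-equivalent and rack-fit arithmetic using the rough figures above.
HEATER_KW = 1.2          # household heater on "high"
LEGACY_RACK_KW = 10.0    # conventional air-cooled rack
AI_RACK_MAX_KW = 50.0    # upper end of an AI-ready rack (20-50 kW)

servers_kw = {
    "NVIDIA H200 server (8 GPUs + NVSwitch)": 10.2,   # approximate spec
    "Next-generation GPU server (estimate)": 14.3,    # reported estimate
}

for name, kw in servers_kw.items():
    print(name)
    print(f"  ~{kw / HEATER_KW:.1f} household heaters")
    print(f"  {kw / LEGACY_RACK_KW:.0%} of a conventional 10 kW rack")
    print(f"  up to {int(AI_RACK_MAX_KW // kw)} servers per AI-ready rack")
```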

Looking at these comparisons, the difference between a household heater and a GPU server becomes strikingly clear.
A GPU is no longer just an electronic component — it’s effectively part of the power infrastructure itself.

If you imagine running ten household heaters at once, you start to grasp the weight of a single GPU server.
As AI models continue to scale, their power demands are rising exponentially, forcing data center design to evolve around power delivery and cooling systems.
Enhancing computational capability now also means confronting how we handle energy itself, as the evolution of GPUs continues to blur the line between information technology and the energy industry.


Reconsidering NFTs and the Architecture of Trust in the Generative Era

NFTs were once treated as the symbol of digital art. Their mechanism of proving “uniqueness” seemed like a fresh counterbalance to the infinite reproducibility of digital data. The idea that the proof of existence itself could hold value, rather than the artwork alone, was indeed revolutionary.

However, NFTs were quickly consumed by commercial frenzy. The market expanded without a real understanding of the underlying concept, and countless pieces of digital waste were created. Detached from the artistic context, endless collections were generated and forgotten. That phenomenon reflected not the flaw of the technology itself, but the fragility of human desire driven by trends.

Perhaps the era simply arrived too early. Yet now, with the rise of generative AI, the situation feels different. Images, voices, and videos are produced from minimal prompts, and distinguishing authenticity has become increasingly difficult. In this age where the boundary between the real and the synthetic is fading, the need to verify who created what, and when, is once again growing.

AI-generated content is closer to a generation log than a traditional work of authorship. To trace its countless derivatives, to record their origins and transformations, we need a new system—and the foundational structure of NFTs fits naturally there. Immutable verification, decentralized ownership, traceable history. These can be redefined not as artistic features, but as mechanisms for ensuring information reliability.
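
As a thought experiment, the “generation log” framing could be made concrete with a provenance record that hashes each output and points back at the work it was derived from. The sketch below is purely illustrative; the field names and structure are my own assumptions, not an existing NFT or provenance standard.

```python
import hashlib, json, time
from dataclasses import dataclass, asdict

@dataclass
class ProvenanceRecord:
    """Illustrative 'generation log' entry: who or what produced the content, and from what."""
    content_hash: str          # hash of the generated artifact itself
    creator: str               # human, model, or both
    model: str | None          # generator used, if any
    parent_hash: str | None    # hash of the work this one was derived from
    created_at: float

def record_generation(data: bytes, creator: str, model: str | None = None,
                      parent_hash: str | None = None) -> ProvenanceRecord:
    return ProvenanceRecord(
        content_hash=hashlib.sha256(data).hexdigest(),
        creator=creator,
        model=model,
        parent_hash=parent_hash,
        created_at=time.time(),
    )

# A derivative work points back at its origin, so the lineage stays traceable
# even after many rounds of regeneration.
original = record_generation(b"original prompt output", creator="model", model="hypothetical-video-model")
remix = record_generation(b"edited derivative", creator="human+model",
                          model="hypothetical-video-model", parent_hash=original.content_hash)
print(json.dumps([asdict(original), asdict(remix)], indent=2))
```

Anchoring such records on a shared ledger would supply the immutable, decentralized properties described above; the record itself is the easy part.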

Watching models like Sora 2 makes that necessity clear. When generated outputs become so real that human-made and AI-made works are indistinguishable, society begins to search again for a sense of uniqueness—not in aesthetic terms, but in informational and social terms. NFTs may quietly return, not as speculative art tokens, but as the infrastructure of provenance and trust.

The meaning of a technology always changes with its context. NFTs did not end as a mere symbol of the bubble. They might have been the first structural answer to the question that defines the AI era: what does it mean for something to be genuine? Now may be the time to revisit that architecture and see it anew.


A Society Where APIs Become Unnecessary

Looking back over the past few months, I realize just how deeply I’ve fused my daily life with AI. Most of my routine tasks are already handled alongside it. Research, small repetitive work, even writing code can now be delegated. The most striking change is that tools built solely for my own efficiency are now fully automated by AI.

What’s especially fascinating is that even complex tasks—such as online banking operations—can now be automated in ways tailored specifically to an individual’s needs. For example, importing bank statements, categorizing them based on personal rules, and restructuring them as accounting data. What once required compliance with the frameworks imposed by financial institutions or accounting software can now be achieved simply by giving natural language instructions to AI.
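
What such automation boils down to is often very ordinary glue code. Here is a minimal sketch of the kind of personal, rule-based categorizer I have in mind; the file name, column names, rules, and categories are hypothetical examples, not any particular bank’s format.

```python
import csv
from pathlib import Path

# Personal rules that would never justify a product feature,
# but are trivial to encode for one person's workflow.
RULES = [
    ("AWS", "cloud-infrastructure"),
    ("JR EAST", "travel"),
    ("STARBUCKS", "meetings"),
]

def categorize(description: str) -> str:
    for keyword, category in RULES:
        if keyword.lower() in description.lower():
            return category
    return "uncategorized"

def restructure(statement_csv: Path) -> list[dict]:
    """Turn a downloaded bank statement into rows ready for accounting import."""
    rows = []
    with statement_csv.open(newline="", encoding="utf-8") as f:
        for record in csv.DictReader(f):
            rows.append({
                "date": record["date"],
                "amount": record["amount"],
                "memo": record["description"],
                "category": categorize(record["description"]),
            })
    return rows

if __name__ == "__main__":
    for row in restructure(Path("statement.csv")):
        print(row)
```

The point is not the code itself but that none of it has to fit anyone else’s schema.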

The key here is that, unlike commercial products, there’s no need to satisfy “universality” for everyone. Personal quirks, rules that only I understand, exceptions that would never justify engineering resources for a mass-market service—these can all be captured and executed by AI. What used to be dismissed as “too niche” is now fully realizable at the individual level. Being freed from the constraints of general-purpose design has enormous value.

Even more revolutionary is the fact that APIs are no longer necessary. Traditionally, automation was possible only when a service explicitly exposed external connections. Now, AI can interact with data the same way a human would—through a browser or app interface. This means services don’t need to be designed to “export data.” AI can naturally capture it and fold it into personal workflows. From the user’s perspective, this allows data to flow freely, regardless of the provider’s intentions.

As I noted in my piece about Tesla Optimus, replacing parts of society without changing the interface itself will become a defining trend. AI exemplifies this by liberating usage from the design logic of providers and putting it back into the hands of users.

This structure leads to a reversal of power. Until now, providers dictated how services could be used. With AI as the intermediary, users can decide how they want to handle data. Whether or not a provider offers an API becomes irrelevant—users can route data into their own automation circuits regardless. At that moment, control shifts fully to the user side.

And this isn’t limited to banking. Any workflow once constrained by a provider’s convenience can now be redesigned by individuals. Subtle personal needs can be incorporated, complexity erased, external restrictions bypassed. The balance of power over data—long held by providers—is starting to wobble.

Of course, AI itself is still transitional. “AI” is not one thing but many, each with distinct characteristics. At present, people must choose and balance among them: cloud-hosted AI, private local AI, or in-house AI running on proprietary data centers. Each has strengths and weaknesses, and from the perspective of data sovereignty, careful selection is essential.

Still, living with multiple AIs simultaneously brings a sense of parallelization to daily work. Different tasks with different contexts can be run side by side, allowing me to stay focused on the most important decisions. Yet at the same time, because AI increasingly performs the research feeding those decisions, the line between my own will and AI’s influence grows blurred. That ambiguity is part of what makes this fusion fascinating—and also why the health of AI systems and the handling of personal data have become more critical than ever before.


Infrastructure That Makes Corporate Trust Irrelevant by Making Lies Impossible

Every time a “fabrication of inspection data” scandal surfaces in the manufacturing world, I can’t help but think—it’s not just about one company’s wrongdoing. It’s a structural issue embedded in society itself.

We operate in systems where lying becomes necessary. In fact, the system often rewards dishonesty. As long as this remains true, we shouldn’t view misconduct as a matter of individual or corporate ethics—but as a systemic design flaw.

That’s exactly why, when a technology appears that makes lying technically impossible, we should adopt it without hesitation.

Why can large corporations charge higher prices and still sell their products? Because they’ve earned trust over time. Their price tags are justified not just by quality and performance, but by accumulated history—reputation, consistency, and customer confidence.

But once that trust is broken, price advantages crumble quickly. Worse, the damage ripples through the entire supply chain. The longer the history, the broader the impact.

Startups, on the other hand, often compete on price. Without a long track record, they offer lower prices and slowly build trust through repeated delivery. That’s been the only way—until now.

Today, things are different. Imagine a startup that records its manufacturing processes and inspection data in real time, using an immutable system that prevents tampering. That alone is enough to objectively prove that their data—and by extension, their product—is authentic.

In other words, trust is no longer a function of time. We’re shifting from trust built over decades to trust guaranteed by infrastructure.
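
One concrete way to picture “trust guaranteed by infrastructure” is an append-only, hash-chained log of inspection results, where every record commits to everything recorded before it. The sketch below is a simplified illustration under my own assumptions; a real deployment would add digital signatures, trusted timestamps, and external anchoring.

```python
import hashlib, json, time

def _digest(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class AuditTrail:
    """Append-only log where each record commits to the previous one."""

    def __init__(self):
        self.entries: list[dict] = []

    def append(self, measurement: dict) -> dict:
        entry = {
            "measurement": measurement,
            "timestamp": time.time(),
            "prev_hash": self.entries[-1]["hash"] if self.entries else None,
        }
        entry["hash"] = _digest({k: entry[k] for k in ("measurement", "timestamp", "prev_hash")})
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any retroactive edit breaks it."""
        prev = None
        for e in self.entries:
            body = {"measurement": e["measurement"], "timestamp": e["timestamp"], "prev_hash": e["prev_hash"]}
            if e["prev_hash"] != prev or e["hash"] != _digest(body):
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.append({"lot": "A-102", "tensile_strength_mpa": 512.3, "result": "pass"})
trail.append({"lot": "A-103", "tensile_strength_mpa": 498.7, "result": "pass"})
print(trail.verify())                                 # True
trail.entries[0]["measurement"]["result"] = "fail"    # tamper with history
print(trail.verify())                                 # False
```

Because each hash depends on the previous one, quietly rewriting an old inspection result breaks every later entry, which is exactly the property that makes the record worth showing to customers and auditors.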

So for startups entering the manufacturing space, here’s what I believe they should keep in mind from day one:

  • Leave a tamper-proof audit trail for all quality and inspection data.
  • That record becomes a competitive weapon the moment a legacy competitor is caught in a data scandal.
  • The value of that audit trail grows over time, so it’s critical to start now—not later.
  • Use data to visualize trust, and build competitive advantage beyond price.
  • It’s an asset that can’t be recreated retroactively.

For large enterprises that already enjoy the benefits of trust, this shift isn’t someone else’s problem. It’s happening now.

Your ability to charge premium prices is built on the trust you’ve accumulated—but that trust can vanish with a single scandal. The older the company, the greater the surface area for reputational risk. And the more complex the supply chain, the faster that risk spreads.

What we need now is an environment where dishonesty is structurally impossible.
Real-time auditing of inspection data.
End-to-end transparency across the supply chain.
With that infrastructure in place, even if a problem does occur, it can be detected early and contained before it spreads.

This isn’t just risk mitigation—it’s a form of psychological safety for the people on the ground. If they don’t have to lie, they can focus on doing honest work. If mistakes happen, they can report them. And if they’re reported, they can be fixed.

In short, this isn’t a system to prevent fraud—it’s a system to cultivate trust.

With such a system, a company can prove it hasn’t cheated. And society will begin to evaluate who to work with based on those records. In a world where data is an asset, an immutable audit trail is the ultimate proof of integrity.

But here’s the thing: you can only start building that record now. Not tomorrow.

An audit trail means you never had to lie in the first place. It’s a record of your honesty—and a source of competitive strength for the future.


The Limits of Amazon (or, Alexa Spellcasting 101)

I can’t help but feel something is fundamentally wrong—specifically with the behavior of Amazon Echo, or rather, Alexa.

To be clear, I’m a huge Alexa fan. I’ve always made it a point to respect innovators who break new ground, and when it comes to home automation and smart speakers, I’ve stayed fully committed to Amazon’s ecosystem. I use my HomePod purely as a speaker. I never speak to Siri.

Much like the structure of the internet itself, the Alexa ecosystem is thoroughly centralized—for better or worse. In the early days, that was perfectly fine. Centralizing all personal data with Amazon felt safe, and it offered real value in return. From reminders to order consumables to voice-activated purchases, Alexa embodied the promise of the Amazon ecosystem.

But it simply hasn’t evolved. It feels like every one of Amazon’s weaknesses in the AI and IoT space is on full display here. Sure, the hardware lineup has expanded, and prices have dropped dramatically. That’s great. But the direction of progress feels completely disconnected from what I, as a user, had hoped for.

Amazon understands consumer behavior better than anyone, so I’m sure their decisions are data-driven and correct in aggregate. They’re probably giving most people what they want. Still, it doesn’t feel like the future of smart speakers.

Even Kindle, the market-dominating reading device, hasn’t shaken off its outdated software and infrastructure. That same legacy mindset seems embedded deep in Alexa, too.

Let me give a concrete example of what I consider a fatal flaw.

Managing multiple locations breaks everything. I currently use Alexa to control four different sites—my home, office, and two others—with over ten Echo devices. In this setup, saying something as simple as “I’m home” could trigger lights across all locations. And the Japanese word “denki” (電気) is ambiguous: colloquially it means “the lights,” but literally it means electric power in general. So asking Alexa to “turn off the electricity” might shut off everything everywhere.

Each Echo is clearly assigned to a location and a room, and each smart device is linked properly. Yet Alexa easily crosses those boundaries, overstepping permissions and doing far too much.

The only fix is to assign a unique name to every device and create unambiguous commands tailored to every location and room. In other words, you have to build a shared language between yourself and Alexa.

That process feels more like programming—or casting spells.

Alexa + room name + device/group + intended action

Once you master that grammar, you can start designing commands. But first, you need a consistent naming rule. Without it, you’ll constantly forget what you’re addressing.

There’s also an advanced technique where frequently used spells can be assigned shorthand triggers. Alexa’s “routines” let you chain multiple actions together, similar to calling a function—albeit without arguments.

Alexa + keyword

This is convenient for bundling multiple spells or shortening your incantations. But beware: you can’t use common terms or reserved keywords.

Try using “I’m leaving,” and Alexa might just say goodbye.

So what do you do? Design your spells like actual magic.

Here are some real examples of spells I use daily. Note: each location has its own context, so spell behavior varies by house or room.

Alexa + バルス (“Balse”)
Turns off all lights in the specified house and starts cleaning, sweeping away the garbage like so much garbage.

Alexa + 領域展開 (“Domain Expansion”)
Same as バルス but for a different house. Adds an ending song.

Alexa + 簡易領域展開 (“Simplified Domain Expansion”)
A simpler version of the above. No cleaning.

Alexa + エンペラーモード (“Emperor Mode”)
Changes lighting to focus mode, puts iPhone/Mac into Do Not Disturb, and activates an external indicator to show I’m deep in concentration.

Alexa + (x)号機 (“Unit x”) + 出撃 (“sortie”) or 撤退 (“retreat”)
Turns a specific air conditioner on or off. Can also launch or recall all units.
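
To keep dozens of spells consistent, it helps to treat the naming rule itself as data: each trigger phrase is bound to one location’s context and expands into a fixed list of actions, much like a parameterless function. The sketch below is illustrative only; the locations and actions are placeholders, not my actual Alexa configuration.

```python
# Each "spell" is effectively a parameterless function: a trigger phrase
# bound to a fixed sequence of actions within one location's context.
SPELLBOOK = {
    ("home", "バルス"): [
        "turn off every light in this house",
        "start the robot vacuum",
    ],
    ("office", "領域展開"): [
        "turn off every light in this house",
        "start the robot vacuum",
        "play the ending song",
    ],
    ("office", "エンペラーモード"): [
        "set lights to focus scene",
        "enable do-not-disturb on iPhone and Mac",
        "turn on the busy indicator",
    ],
}

def cast(location: str, phrase: str) -> list[str]:
    """Resolve a spell within its own location so it never crosses boundaries."""
    return SPELLBOOK.get((location, phrase), [f"unknown spell: {phrase} @ {location}"])

for action in cast("office", "エンペラーモード"):
    print(action)
```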

I’ve set up dozens of such spells. Honestly, I know it sounds ridiculous. But without them, Alexa wouldn’t understand my commands, and I’d be stuck saying long, convoluted incantations.

This alone should convey how far Alexa’s behavior falls short of what I expect.

I’ve given up on accurate voice recognition, especially in Japanese. It’s not really Amazon’s fault; it’s a limitation of the language itself. Still, I wish Alexa supported using English and Japanese simultaneously. Then again, even Google hasn’t solved that problem, so perhaps Japanese speakers simply draw the short straw.

There are other issues too—like account unification. Amazon’s data and authentication infrastructure causes persistent problems when trying to merge accounts across countries. But that’s a story for another time.

Where Amazon still shines is in its understanding of the consumer market, and of course, in its unmatched backend: AWS. That’s why Alexa keeps working, avoids misidentification, and supports remote control.

But even those strengths are starting to feel outdated. Like a textbook case of the innovator’s dilemma, Amazon is lagging in edge computing, decentralized authentication, and privacy-by-design. In those areas, Apple is now the one leading.

If a generative AI layer ever lands on the Echo side, many of these problems might be solved. But if Amazon chooses to process that on the cloud—true to form—the computation costs will soar. Doing it on the edge would require more expensive devices and abandoning the current ecosystem.

Will we ever be freed from spellcasting?


Branding for Non-Human Audiences in the AIoT Era

Around 2024, Tesla began phasing out its T logo. Part of this may have been to emphasize the text logo for brand recognition, but recently it seems even that text is disappearing. It feels like the company is moving toward the next stage of its brand design.

Ultimately, the text will vanish, and the shape alone will be enough for people to recognize it. In consumer products, this is the highest-level approach—an ultimate form of branding that only a few winners can achieve.

I’m reminded of a story from the Macintosh era, when Steve Jobs reportedly instructed Apple to reduce the use of the apple logo everywhere. As a result, today anyone can recognize a MacBook or iPhone from its silhouette alone. The form itself has become the brand, to the point where imitators copy it.

A brand, at its core, is a mark meant to differentiate; the word originally referred to a mark literally burned onto livestock. It’s about being efficiently recognized by people, conveying “this is it” without conscious thought. One effective way is to tap into instincts humans have developed through coexistence with nature, subtly hacking the brain’s recognition process. Even Apple and Tesla, which have built inorganic brand images, have incorporated such subconscious triggers into product design and interface development, shaping the value they hold today.

But will this still be effective going forward?

The number of humans is tiny compared to the number of AI and IoT devices. For now, because humans are the ones paying, the market focuses on maximizing value for them. That will remain true to some extent. But perhaps there is a kind of branding that will become more important than human recognition.

Seen in this light, Apple, Tesla, and other Big Tech companies already seem to hold tickets to the next stage. By adopting new communication standards like UWB chips, or shaping products to optimize for optical recognition, they are working to be more efficiently recognized by non-human entities. Even something like Google’s SEO meta tags or Amazon’s shipping boxes fits into this picture.

In the past, unique identification and authentication through internet protocols were either impossible, expensive, or bound to centralized authority. But advances in semiconductors, sensor technology, and cryptography—along with better energy efficiency—are changing that. The physical infrastructure for mesh networks is also in place, and branding is on the verge of entering its next phase.

The essence of branding is differentiation and the creation of added value. The aim is to efficiently imprint recognition in the human brain, often by leveraging universal contexts and metaphors, or by overwriting existing ones through repeated exposure. I’m not a marketing expert, but that’s how I currently understand it.

And if that’s correct, the question becomes: must the target still be humans?
Will humans continue to be the primary decision-makers?
Does it even make sense to compete for differentiation in such a small market?

At this moment, branding to humans still has meaning. But moving beyond that, as Apple products adopt a uniform design and Tesla moves toward minimalistic, abstract forms, branding may evolve toward maximizing value by being efficiently recognized within limited computational resources. Uniformity could make device recognition more efficient and reduce cognitive load for humans as well.

We should design future branding without being bound by the assumption that humans will always be the ones making the decisions.
