Categories
Asides

Reconsidering NFTs and the Architecture of Trust in the Generative Era

NFTs were once treated as the symbol of digital art. Their mechanism of proving “uniqueness” seemed like a fresh counterbalance to the infinite reproducibility of digital data. The idea that the proof of existence itself could hold value, rather than the artwork alone, was indeed revolutionary.

However, NFTs were quickly consumed by commercial frenzy. The market expanded without a real understanding of the underlying concept, and countless pieces of digital waste were created. Detached from the artistic context, endless collections were generated and forgotten. That phenomenon reflected not the flaw of the technology itself, but the fragility of human desire driven by trends.

Perhaps the era simply arrived too early. Yet now, with the rise of generative AI, the situation feels different. Images, voices, and videos are produced from minimal prompts, and distinguishing authenticity has become increasingly difficult. In this age where the boundary between the real and the synthetic is fading, the need to verify who created what, and when, is once again growing.

AI-generated content is closer to a generation log than a traditional work of authorship. To trace its countless derivatives, to record their origins and transformations, we need a new system—and the foundational structure of NFTs fits naturally there. Immutable verification, decentralized ownership, traceable history. These can be redefined not as artistic features, but as mechanisms for ensuring information reliability.
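To make that concrete, here is a minimal sketch of what such a provenance record could look like, independent of any particular blockchain. The field names and the chaining scheme are illustrative assumptions of mine, and the anchoring of these records (whether minted as NFTs or registered elsewhere) is deliberately left out.

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def provenance_record(content: bytes, model: str, prompt: str,
                      parent_record_hash: str | None = None) -> dict:
    """Build one provenance entry for a generated artifact.

    The record ties the artifact's content hash to its origin
    (model, prompt, timestamp) and to the record of the work it was
    derived from, forming a traceable chain of derivations.
    """
    record = {
        "content_hash": sha256_hex(content),
        "model": model,
        "prompt": prompt,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "parent": parent_record_hash,  # None for an original work
    }
    # Hash of the record itself, so derivatives can reference it.
    record["record_hash"] = sha256_hex(
        json.dumps(record, sort_keys=True).encode("utf-8")
    )
    return record

# An original generation, then a derivative that points back to it.
original = provenance_record(b"<image bytes>", "some-model", "a harbor at dawn")
derived = provenance_record(b"<edited image bytes>", "some-model",
                            "the same harbor, but at night",
                            parent_record_hash=original["record_hash"])
```

Everything interesting lives outside this sketch, in where such records are stored and who is allowed to write them, which is exactly where the NFT-style structure of immutable, decentralized, traceable registries comes back in.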

Watching models like Sora 2 makes that necessity clear. When generated outputs become so real that human-made and AI-made works are indistinguishable, society begins to search again for a sense of uniqueness—not in aesthetic terms, but in informational and social terms. NFTs may quietly return, not as speculative art tokens, but as the infrastructure of provenance and trust.

The meaning of a technology always changes with its context. NFTs did not end as a mere symbol of the bubble. They might have been the first structural answer to the question that defines the AI era: what does it mean for something to be genuine? Now may be the time to revisit that architecture and see it anew.

Categories
Asides

A Society Where APIs Become Unnecessary

Looking back over the past few months, I realize just how deeply I’ve fused my daily life with AI. Most of my routine tasks are already handled alongside it. Research, small repetitive work, even writing code can now be delegated. The most striking change is that tools built solely for my own efficiency are now fully automated by AI.

What’s especially fascinating is that even complex tasks—such as online banking operations—can now be automated in ways tailored specifically to an individual’s needs. For example, importing bank statements, categorizing them based on personal rules, and restructuring them as accounting data. What once required compliance with the frameworks imposed by financial institutions or accounting software can now be achieved simply by giving natural language instructions to AI.
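As a tiny sketch of what I mean by personal rules: the rules and field names below are made up, not any bank's actual export format, and in practice I simply describe rules like these in natural language and let the AI write and run the equivalent.

```python
# Rule-based categorization of bank statement lines, personal-rules style.
RULES = [
    ("AWS", "infrastructure"),
    ("JR EAST", "travel"),
    ("STARBUCKS", "meetings"),
]

def categorize(description: str) -> str:
    """Return the first matching category, or leave the line for review."""
    for keyword, category in RULES:
        if keyword in description.upper():
            return category
    return "uncategorized"

statements = [
    {"date": "2025-06-02", "description": "AWS EMEA", "amount": -12800},
    {"date": "2025-06-03", "description": "JR East Mobile Suica", "amount": -3200},
]

ledger = [dict(s, category=categorize(s["description"])) for s in statements]
for entry in ledger:
    print(entry["date"], entry["category"], entry["amount"])
```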

The key here is that, unlike commercial products, there’s no need to satisfy “universality” for everyone. Personal quirks, rules that only I understand, exceptions that would never justify engineering resources for a mass-market service—these can all be captured and executed by AI. What used to be dismissed as “too niche” is now fully realizable at the individual level. Being freed from the constraints of general-purpose design has enormous value.

Even more revolutionary is the fact that APIs are no longer necessary. Traditionally, automation was possible only when a service explicitly exposed external connections. Now, AI can interact with data the same way a human would—through a browser or app interface. This means services don’t need to be designed to “export data.” AI can naturally capture it and fold it into personal workflows. From the user’s perspective, this allows data to flow freely, regardless of the provider’s intentions.
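For the browser-level part, here is a sketch of the mechanics using Playwright as one example. The URL, credentials, and selectors are placeholders, and in practice the AI agent improvises these steps itself rather than following a fixed script.

```python
# Drive the same interface a human would use, with no API involved.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://bank.example.com/login")   # placeholder URL
    page.fill("#user-id", "my-user-id")            # placeholder selectors
    page.fill("#password", "my-password")
    page.click("button[type=submit]")
    page.wait_for_selector("table.statements")     # wait for the statement table
    rows = page.inner_text("table.statements")     # scrape what a human would read
    browser.close()

print(rows)  # hand the raw text to the categorization step above
```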

As I noted in my piece about Tesla Optimus, replacing parts of society without changing the interface itself will become a defining trend. AI exemplifies this by liberating usage from the design logic of providers and putting it back into the hands of users.

This structure leads to a reversal of power. Until now, providers dictated how services could be used. With AI as the intermediary, users can decide how they want to handle data. Whether or not a provider offers an API becomes irrelevant—users can route data into their own automation circuits regardless. At that moment, control shifts fully to the user side.

And this isn’t limited to banking. Any workflow once constrained by a provider’s convenience can now be redesigned by individuals. Subtle personal needs can be incorporated, complexity erased, external restrictions bypassed. The balance of power over data—long held by providers—is starting to wobble.

Of course, AI itself is still transitional. “AI” is not one thing but many, each with distinct characteristics. At present, people must choose and balance among them: cloud-hosted AI, private local AI, or in-house AI running on proprietary data centers. Each has strengths and weaknesses, and from the perspective of data sovereignty, careful selection is essential.

Still, living with multiple AIs simultaneously brings a sense of parallelization to daily work. Different tasks with different contexts can be run side by side, allowing me to stay focused on the most important decisions. Yet at the same time, because AI increasingly performs the research feeding those decisions, the line between my own will and AI’s influence grows blurred. That ambiguity is part of what makes this fusion fascinating—and also why the health of AI systems and the handling of personal data have become more critical than ever before.

Categories
Asides

Infrastructure That Makes Corporate Trust Irrelevant by Making Lies Impossible

Every time a “fabrication of inspection data” scandal surfaces in the manufacturing world, I can’t help but think—it’s not just about one company’s wrongdoing. It’s a structural issue embedded in society itself.

We operate in systems where lying becomes necessary. In fact, the system often rewards dishonesty. As long as this remains true, we shouldn’t view misconduct as a matter of individual or corporate ethics—but as a systemic design flaw.

That’s exactly why, when a technology appears that makes lying technically impossible, we should adopt it without hesitation.

Why can large corporations charge higher prices and still sell their products? Because they’ve earned trust over time. Their price tags are justified not just by quality and performance, but by accumulated history—reputation, consistency, and customer confidence.

But once that trust is broken, price advantages crumble quickly. Worse, the damage ripples through the entire supply chain. The longer the history, the broader the impact.

Startups, on the other hand, often compete on price. Without a long track record, they offer lower prices and slowly build trust through repeated delivery. That’s been the only way—until now.

Today, things are different. Imagine a startup that records its manufacturing processes and inspection data in real time, using an immutable system that prevents tampering. That alone is enough to objectively prove that their data—and by extension, their product—is authentic.

In other words, trust is no longer a function of time. We’re shifting from trust built over decades to trust guaranteed by infrastructure.
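As a minimal sketch of what "trust guaranteed by infrastructure" can mean at the smallest scale, here is a hash-chained, append-only log. The field names are illustrative, and a real system would also anchor the latest hash somewhere outside the company's own control, such as a timestamping service or a distributed ledger, which this sketch omits.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Tamper-evident, append-only log: each entry commits to the hash
    of the previous one, so a silent edit anywhere breaks the chain."""

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "recorded_at": datetime.now(timezone.utc).isoformat(),
            "record": record,
            "prev_hash": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode("utf-8")
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode("utf-8")
            ).hexdigest()
            if entry["prev_hash"] != prev_hash or recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

log = AuditLog()
log.append({"lot": "A-1024", "test": "tensile strength", "value": 512, "unit": "MPa"})
log.append({"lot": "A-1024", "test": "hardness", "value": 61, "unit": "HRC"})
assert log.verify()
```

If every inspection result is appended the moment it is measured, falsifying one later means rewriting every hash that follows, and that property is exactly what makes the record worth showing to a customer.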

So for startups entering the manufacturing space, here’s what I believe they should keep in mind from day one:

Leave a tamper-proof audit trail for all quality and inspection data.
That record becomes a competitive weapon the moment a legacy competitor is caught in a data scandal.
The value of that audit trail grows over time, so it’s critical to start now—not later.
Use data to visualize trust, and build competitive advantage beyond price.
It’s an asset that can’t be recreated retroactively.

For large enterprises that already enjoy the benefits of trust, this shift isn’t someone else’s problem. It’s happening now.

Your ability to charge premium prices is built on the trust you’ve accumulated—but that trust can vanish with a single scandal. The older the company, the greater the surface area for reputational risk. And the more complex the supply chain, the faster that risk spreads.

What we need now is an environment where dishonesty is structurally impossible.
Real-time auditing of inspection data.
End-to-end transparency across the supply chain.
With that infrastructure in place, even if a problem does occur, it can be detected early and contained before it spreads.

This isn’t just risk mitigation—it’s a form of psychological safety for the people on the ground. If they don’t have to lie, they can focus on doing honest work. If mistakes happen, they can report them. And if they’re reported, they can be fixed.

In short, this isn’t a system to prevent fraud—it’s a system to cultivate trust.

With such a system, a company can prove it hasn’t cheated. And society will begin to evaluate who to work with based on those records. In a world where data is an asset, an immutable audit trail is the ultimate proof of integrity.

But here’s the thing: you can only start building that record now. Not tomorrow.

An audit trail means you never had to lie in the first place. It’s a record of your honesty—and a source of competitive strength for the future.

Categories
Asides

The Limits of Amazon (or, Alexa Spellcasting 101)

I can’t help but feel something is fundamentally wrong—specifically with the behavior of Amazon Echo, or rather, Alexa.

To be clear, I’m a huge Alexa fan. I’ve always made it a point to respect innovators who break new ground, and when it comes to home automation and smart speakers, I’ve stayed fully committed to Amazon’s ecosystem. I use my HomePod purely as a speaker. I never speak to Siri.

Much like the structure of the internet itself, the Alexa ecosystem is thoroughly centralized—for better or worse. In the early days, that was perfectly fine. Centralizing all personal data with Amazon felt safe, and it offered real value in return. From reminders to order consumables to voice-activated purchases, Alexa embodied the promise of the Amazon ecosystem.

But it simply hasn’t evolved. It feels like every one of Amazon’s weaknesses in the AI and IoT space is on full display here. Sure, the hardware lineup has expanded, and prices have dropped dramatically. That’s great. But the direction of progress feels completely disconnected from what I, as a user, had hoped for.

Amazon understands consumer behavior better than anyone, so I’m sure their decisions are data-driven and correct in aggregate. They’re probably giving most people what they want. Still, it doesn’t feel like the future of smart speakers.

Even Kindle, the market-dominating reading device, hasn’t shaken off its outdated software and infrastructure. That same legacy mindset seems embedded deep in Alexa, too.

Let me give a concrete example of what I consider a fatal flaw.

Managing multiple locations breaks everything. I currently use Alexa to control four different sites—my home, office, and two others—with over ten Echo devices. In this setup, saying something as simple as “I’m home” could trigger lights across all locations. And in Japanese, “denki” (“electricity”) doesn’t mean just “lights”—it can mean “power.” So asking Alexa to “turn off the electricity” might shut off everything everywhere.

Each Echo is clearly assigned to a location and a room, and each smart device is linked properly. Yet Alexa easily crosses those boundaries, overstepping permissions and doing far too much.

The only fix is to assign a unique name to every device and create unambiguous commands tailored to every location and room. In other words, you have to build a shared language between yourself and Alexa.

That process feels more like programming—or casting spells.

Alexa + room name + device/group + intended action

Once you master that grammar, you can start designing commands. But first, you need a consistent naming rule. Without it, you’ll constantly forget what you’re addressing.
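As a toy sketch of what I mean by a consistent naming rule (not how Alexa works internally, and not my actual device list), every name and command is composed from the same site, room, device, and action grammar so that nothing collides:

```python
# Illustrative sites, rooms, and devices; the point is the composition rule.
SITES = {
    "ginza": {"study": ["ceiling light", "desk light"], "hall": ["aircon"]},
    "office": {"meeting room": ["ceiling light"], "lab": ["aircon"]},
}

def device_name(site: str, room: str, device: str) -> str:
    """Unambiguous device name: site + room + device."""
    return f"{site} {room} {device}"

def command(site: str, room: str, device: str, action: str) -> str:
    # "Alexa + room name + device/group + intended action"
    return f"Alexa, {action} the {device_name(site, room, device)}"

for site, rooms in SITES.items():
    for room, devices in rooms.items():
        for device in devices:
            print(command(site, room, device, "turn off"))
```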

There’s also an advanced technique where frequently used spells can be assigned shorthand triggers. Alexa’s “routines” let you chain multiple actions together, similar to calling a function—albeit without arguments.

Alexa + keyword

This is convenient for bundling multiple spells or shortening your incantations. But beware: you can’t use common terms or reserved keywords.

Try using “I’m leaving,” and Alexa might just say goodbye.

So what do you do? Design your spells like actual magic.

Here are some real examples of spells I use daily. Note: each location has its own context, so spell behavior varies by house or room.

Alexa + バルス (“Balse”)
Turns off all lights in the specified house and starts cleaning. Sweeps the garbage away, like garbage.

Alexa + 領域展開 (“Domain Expansion”)
Same as バルス, but for a different house. Adds an ending song.

Alexa + 簡易領域展開 (“Simplified Domain Expansion”)
A simpler version of the above. No cleaning.

Alexa + エンペラーモード (“Emperor Mode”)
Changes the lighting to focus mode, puts my iPhone and Mac into Do Not Disturb, and activates an external indicator to show I’m deep in concentration.

Alexa + (x)号機 (“Unit x”) + 出撃 (“sortie”) or 撤退 (“retreat”)
Turns a specific air conditioner on or off. Can also launch or recall all units.

I’ve set up dozens of such spells. Honestly, I know it sounds ridiculous. But without them, Alexa wouldn’t understand my commands, and I’d be stuck saying long, convoluted incantations.

This alone should convey how far Alexa’s behavior falls short of expectations.

I’ve given up on accurate voice recognition, especially in Japanese. It’s not really Amazon’s fault; it’s a limitation of the language. Still, I wish Alexa supported using both English and Japanese simultaneously. Then again, even Google hasn’t solved that problem, so perhaps it’s simply a disadvantage that comes with Japanese.

There are other issues too—like account unification. Amazon’s data and authentication infrastructure causes persistent problems when trying to merge accounts across countries. But that’s a story for another time.

Where Amazon still shines is in its understanding of the consumer market, and of course, in its unmatched backend: AWS. That’s why Alexa keeps working, avoids misidentification, and supports remote control.

But even those strengths are starting to feel outdated. Like a textbook case of the innovator’s dilemma, Amazon is lagging in edge computing, decentralized authentication, and privacy-by-design. In those areas, Apple is now the one leading.

If a generative AI layer ever lands on the Echo side, many of these problems might be solved. But if Amazon chooses to process that on the cloud—true to form—the computation costs will soar. Doing it on the edge would require more expensive devices and abandoning the current ecosystem.

Will we ever be freed from spellcasting?

Categories
Asides

Branding for Non-Human Audiences in the AIoT Era

Around 2024, Tesla began phasing out its T logo. Part of this may have been to emphasize the text logo for brand recognition, but recently it seems even that text is disappearing. It feels like the company is moving toward the next stage of its brand design.

Ultimately, the text will vanish, and the shape alone will be enough for people to recognize it. In consumer products, this is the highest-level approach—an ultimate form of branding that only a few winners can achieve.

I’m reminded of a story from the Macintosh era, when Steve Jobs reportedly instructed Apple to reduce the use of the apple logo everywhere. As a result, today anyone can recognize a MacBook or iPhone from its silhouette alone. The form itself has become the brand, to the point where imitators copy it.

A brand, at its core, is a mark—originally a mark literally burned into goods or livestock—meant to differentiate. It’s about being efficiently recognized by people, conveying “this is it” without conscious thought. One effective way is to tap into instincts humans have developed through coexistence with nature, subtly hacking the brain’s recognition process. Even Apple and Tesla, which have built inorganic brand images, have incorporated such subconscious triggers into product design and interface development, shaping the value they hold today.

But will this still be effective going forward?

The number of humans is tiny compared to the number of AI and IoT devices. For now, because humans are the ones paying, the market focuses on maximizing value for them. That will remain true to some extent. But perhaps there is a kind of branding that will become more important than human recognition.

Seen in this light, Apple, Tesla, and other Big Tech companies already seem to hold tickets to the next stage. By adopting new communication standards like UWB chips, or shaping products to optimize for optical recognition, they are working to be more efficiently recognized by non-human entities. Even something like Google’s SEO meta tags or Amazon’s shipping boxes fits into this picture.

In the past, unique identification and authentication through internet protocols were either impossible, expensive, or bound to centralized authority. But advances in semiconductors, sensor technology, and cryptography—along with better energy efficiency—are changing that. The physical infrastructure for mesh networks is also in place, and branding is on the verge of entering its next phase.

The essence of branding is differentiation and the creation of added value. The aim is to efficiently imprint recognition in the human brain, often by leveraging universal contexts and metaphors, or by overwriting existing ones through repeated exposure. I’m not a marketing expert, but that’s how I currently understand it.

And if that’s correct, the question becomes: must the target still be humans?
Will humans continue to be the primary decision-makers?
Does it even make sense to compete for differentiation in such a small market?

At this moment, branding to humans still has meaning. But moving beyond that, as Apple products adopt a uniform design and Tesla moves toward minimalistic, abstract forms, branding may evolve toward maximizing value by being efficiently recognized within limited computational resources. Uniformity could make device recognition more efficient and reduce cognitive load for humans as well.

We should design future branding without being bound by the assumption that humans will always be the ones making the decisions.

Categories
Asides

Ride-Sharing Stations Paving the Way for the Autonomous Driving Era

When Uber first appeared, it brought many innovations, but the greatest of them was freedom.
No complicated procedures, and most importantly, the ability to be picked up and dropped off anywhere — that was the real revolution of ride-sharing.

Unlike trains, there was no need to travel to a station; you could call a car to wherever you were. The convenience of that was an experience traditional taxis could never offer.

However, the disruptive convenience of ride-sharing inevitably clashed with the taxi industry. Perhaps as a result, many major facilities now designate specific pick-up and drop-off points, and the initial sense of freedom has been lost. In many cases, taxis occupy the more convenient spots. It’s likely a measure to protect the taxi industry, but as a user, it’s nothing short of disappointing.

It’s like if Uber Eats required you to pick up your food only from a hotel lobby — it would lose much of its appeal.

Right now, it’s as if commercial facilities and transport hubs are using ride-sharing infrastructure to create their own private stations. These are clearly separated from taxi stands, and a new kind of station is appearing every day. As long as there’s a road, they can be set up relatively easily, meaning that in urban planning, their number could grow indefinitely through private initiative.

Ride-sharing fares are higher than other public transport, so it’s not for everyone. It also can’t carry large numbers of people at once, making it unsuitable for major facilities. These are issues that building more ride-sharing stations won’t solve. But building a new train or bus station is something neither an individual nor a single company can easily do — it takes enormous budgets and years of time.

In the Tokyo Ginza area, where I’ve been based for the past few years, even taxis are restricted to certain boarding points depending on the time of day. I already consider that an inefficient station. On the other hand, I’ve recently seen more Waymo vehicles on the streets. If that’s the case, I wish they’d just turn those points into stations for autonomous vehicles.

And that’s when it hit me.

What will happen when autonomous taxis become more common?
What if autonomous taxis evolve into large, articulated buses like those in London?

That could create enormous value in the future — because it would actively leverage road infrastructure to intervene in the flow of people and goods. With the right approach, even areas far from expensive city centers could attract significant traffic and activity.

In other words, now is the time to start building ride-sharing stations. They don’t exist yet in ride-sharing–barren Japan, but future commercial facilities should absolutely include them.

Otherwise, such places will become locations where neither people, nor humanoid robots, nor drones will ever come close.

Categories
Photos

Def Con 33

As AKATSUKI.

Categories
Asides

Can Cloudflare’s “Pay per Crawl” Solve the Problem of Data Overpayment?

The Emergence of a New Trend

Cloudflare’s recently announced “Pay per Crawl” is a system that enables content providers to charge AI crawlers on a per-request basis. Until now, site administrators only had two options when dealing with AI crawlers: fully block them or allow unrestricted access. This model introduces a third option — conditional access with payment.

Data has value. It should not be exploited unilaterally. A technical solution was needed to enable ownership and appropriate compensation. This move may upend how companies like Google handle information and monetize the web. It also presents an intriguing use case for micropayments.

How the Crawl Control API Works

At the heart of this model is HTTP status code 402 Payment Required. When an AI crawler accesses a web page, Cloudflare first checks whether the request includes payment intent. If it does, the page is returned as usual with HTTP 200. If not, a 402 response is returned along with pricing information. If the crawler agrees to the terms, it re-sends the request with a payment header and receives the content.
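From the crawler's side, the exchange might look roughly like the sketch below. The header names reflect my reading of the announcement and should be treated as placeholders rather than Cloudflare's published specification, and crawler authentication is omitted entirely.

```python
import requests

MAX_PRICE_USD = 0.01  # what this crawler is willing to pay per request

def fetch(url: str) -> str | None:
    resp = requests.get(url)
    if resp.status_code == 200:
        return resp.text                      # free, or already permitted
    if resp.status_code == 402:               # Payment Required: price quoted
        price = float(resp.headers.get("crawler-price", "inf"))
        if price <= MAX_PRICE_USD:
            # Retry, declaring intent to pay exactly the quoted price.
            paid = requests.get(url, headers={"crawler-exact-price": str(price)})
            if paid.status_code == 200:
                return paid.text              # Cloudflare settles the charge
    return None                               # blocked, or too expensive

page = fetch("https://example.com/article")
```

Whether a quoted price is worth paying becomes an ordinary engineering decision inside the crawler, which is precisely the shift from courtesy-based norms to a contract-based economy.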

Cloudflare mediates the entire transaction, including payment processing and crawler authentication. The system essentially functions as an HTTP access API with built-in payment. It’s a well-designed solution.

The key differences from existing robots.txt or meta tag-based controls lie in enforceability and economic exchange. Since the control is enforced at the network level, access can be physically denied when requested. And with micropayments, permission becomes conditional — shifting the model from a courtesy-based norm to a contract-based economy.

In some ways, this reflects the type of society blockchain and smart contracts aspired to create. Yet again, private innovation is leading the charge toward real-world implementation.

Rebuilding the Data Economy and Its Reach in Japan

In the traditional web, value was derived from human readership. Monetization — through ads or subscriptions — depended on people visiting your content.

But in the age of generative AI, information is being used without ever being read by a human. AI models crawl and learn from massive amounts of data, yet the content creators receive nothing in return. Pay per Crawl introduces a mechanism to monetize this “unread but used” data — laying the foundation for a new information economy.

In Japan, local newspapers, niche media, and expert blogs have struggled to monetize via ads. Now, AI crawlers represent a new type of “reader.” As long as AI systems require niche data, those who provide it will hold value. Going forward, the strategy will shift from merely increasing readership to optimizing content for AI consumption.

For AI developers, this introduces a shift in cost structure. Whereas they previously harvested public information for free, they will now incur costs per data unit. This marks a shift: data, like electricity or compute resources, will be treated as a resource that must be paid for.

The role of data centers will also grow more significant. Companies like Cloudflare — which control both the network and the payment rails — will become central hubs of information flow. As with energy distribution in the “Watt–Bit” framework, control over information infrastructure will once again become a source of economic power.

Addressing Data Overpayment and Establishing Information Sovereignty

The greatest societal significance of Pay per Crawl lies in correcting the imbalance of data overpayment. Many websites, public institutions, educational bodies, and individuals have provided content for years — often without knowing that AI systems were using it freely.

Pay per Crawl introduces a negotiable structure: “If you want to use it, pay for it.” This represents a reclaiming of informational self-determination — a step toward what could be called “information sovereignty.”

With micropayments on a per-request basis, the monetization model will also diversify. Previously, revenue depended on going viral. Now, simply having high-quality niche information may generate revenue. This marks a shift from volume-based value to quality-based value.

As the ecosystem expands to include universities, municipalities, and individual bloggers, we’ll see a new era where overlooked information can be formally traded and fairly compensated.

Pay per Crawl is not just traffic control technology. It is an attempt to create a new rulebook for how information is controlled and monetized in the generative AI era.

The system is still in its infancy, but there is no doubt that it will influence Japan’s media industry and data governance. Establishing a healthy economic relationship between creators and users of information — that is the kind of infrastructure we need in the age of AI.

Categories
Asides

You Can’t Take Your Eyes Off Tenstorrent and Jim Keller

The name “Tenstorrent” has become increasingly visible in Japan, especially following its partnership with Rapidus.

Tenstorrent is not just another startup. Rather, I believe it’s one of the most noteworthy collectives aiming beyond the GPU era. And above all, it has Jim Keller.

Keller is a man who has walked through the very history of CPU architecture. AMD, Apple, Tesla, Intel—line up the projects he’s been involved with, and you essentially get the history of modern processor design itself. When he joined Tenstorrent as CTO and President, it was already clear this wasn’t an ordinary company. Now, he’s their CEO.

Tenstorrent’s vision is to modularize components like AI chips and build a distributed computing platform within an open ecosystem. Instead of relying on a single, massive, closed GPU-centric chip, they aim to create a world where computing functions can be structured in optimal configurations as needed.
This marks a shift in design philosophy—and a democratization of hardware.

Since 2023, Tenstorrent has made a full-scale entry into the Japanese market, working with Rapidus to develop a 2nm-generation edge AI chip.
They also play a key role in Japan’s government-backed semiconductor talent development programs, running the advanced course that sends dozens of Japanese engineers to the company’s U.S. headquarters for hands-on, on-the-job training. This isn’t just technical support or a supplier-client relationship. It’s a level of collaboration that could be described as integration.
Few American tech companies have entered a national initiative in Japan so deeply, and even fewer have respected Japan’s autonomy to this extent while openly sharing their technology.

Tenstorrent is sometimes positioned as a competitor to NVIDIA, but I think it occupies a more nuanced space.

In terms of physical deployment of AI chips, NVIDIA’s massive platform will likely remain dominant for some time.
However, Tenstorrent’s strategy is built on an entirely different dimension—heterogeneous integration with general-purpose CPUs, application-specific optimization, and the scalability of distributed AI systems.
Rather than challenging NVIDIA head-on, they seem to be targeting all the areas NVIDIA isn’t addressing.

They are also actively promoting open-source software stacks and the adoption of RISC-V. In that sense, their approach diverges significantly from ARM as well.
Tenstorrent operates across hardware and software, development and education, design and manufacturing. Their very presence puts pressure on the status quo of hardware design, introducing a kind of freedom—freedom to choose, to combine, to transform.

Companies like Tenstorrent defy simple classification. It’s hard to predict whether they’ll end up being competitors or collaborators in any given domain.
But one thing is clear: they chose Japan as a key field of engagement and have embedded themselves here at an unprecedented depth.

That alone is a fact worth paying attention to.

Categories
Asides

Digital Deficit

I just finished reading the Ministry of Economy, Trade and Industry’s (METI) PIVOT Project report.

For years I have argued that electricity and computational capacity resources are becoming the new basis of value for nations and companies alike. The METI report, Digital Economy Report 2025, visualizes the same issue through the statistical fact of “digital deficit.” The critical takeaway is clear: we haven’t been earning in the very domains where value is generated.

The report, grounded in SDX – Software Defined Everything, also warns that the export competitiveness of automobiles and industrial machinery will depend increasingly on software. Confronting the “hidden digital deficit” of the SDX era and acting early with a long-term strategy is indispensable.

One concrete idea is to recapture industry standards through innovation at the lower layers of the tech stack. We must avoid a future in which entire platforms—and therefore choices—are controlled by others. The fact that an official policy document now shares this sense of urgency is significant.

The report calls for action. Our own initiatives—edge data centers × renewable energy × overseas joint ventures—represent one possible answer. We hold computational capacity resources, sharpen our strengths, and take them to market, not as a purely domestic play but as an exportable Japanese model. The business roadmap we have spent the past few years drawing up aligns closely with the report’s prescription.

Our path remains unchanged; the report simply reaffirms its necessity.

“The future has already begun to move—quietly, yet inexorably.”

Those were the very words that opened ENJIN. Today, we continue to build step by step, but with unshakable conviction.