
Reconsidering NFTs and the Architecture of Trust in the Generative Era

NFTs were once treated as the symbol of digital art. Their mechanism of proving “uniqueness” seemed like a fresh counterbalance to the infinite reproducibility of digital data. The idea that the proof of existence itself could hold value, rather than the artwork alone, was indeed revolutionary.

However, NFTs were quickly consumed by commercial frenzy. The market expanded without a real understanding of the underlying concept, and countless pieces of digital waste were created. Detached from the artistic context, endless collections were generated and forgotten. That phenomenon reflected not the flaw of the technology itself, but the fragility of human desire driven by trends.

Perhaps the era simply arrived too early. Yet now, with the rise of generative AI, the situation feels different. Images, voices, and videos are produced from minimal prompts, and distinguishing authenticity has become increasingly difficult. In this age where the boundary between the real and the synthetic is fading, the need to verify who created what, and when, is once again growing.

AI-generated content is closer to a generation log than a traditional work of authorship. To trace its countless derivatives, to record their origins and transformations, we need a new system—and the foundational structure of NFTs fits naturally there. Immutable verification, decentralized ownership, traceable history. These can be redefined not as artistic features, but as mechanisms for ensuring information reliability.
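
To make that redefinition concrete, here is a minimal sketch of what a provenance record for generated content could look like: each entry hashes the content and points at the record it was derived from, exactly the kind of chain an NFT-style token could anchor. The structure and field names are purely my own illustration, not any existing standard.

```python
import hashlib
import json
import time

def provenance_record(content: bytes, creator: str, parent_hash: str | None = None) -> dict:
    """One link in a provenance chain: who generated what, when, and from which parent.
    An NFT-style token would anchor content_hash on-chain; the fields are illustrative."""
    return {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "created_at": int(time.time()),
        "parent": parent_hash,  # None for an original; a hash for a derivative
    }

original = provenance_record(b"output of prompt: 'a cat in ukiyo-e style'", "alice")
derivative = provenance_record(
    b"upscaled re-render of the same image", "bob",
    parent_hash=original["content_hash"],
)
print(json.dumps([original, derivative], indent=2))
```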

Watching models like Sora 2 makes that necessity clear. When generated outputs become so real that human-made and AI-made works are indistinguishable, society begins to search again for a sense of uniqueness—not in aesthetic terms, but in informational and social terms. NFTs may quietly return, not as speculative art tokens, but as the infrastructure of provenance and trust.

The meaning of a technology always changes with its context. NFTs did not end as a mere symbol of the bubble. They might have been the first structural answer to the question that defines the AI era: what does it mean for something to be genuine? Now may be the time to revisit that architecture and see it anew.


A Society Where APIs Become Unnecessary

Looking back over the past few months, I realize just how deeply my daily life has fused with AI. Most of my routine tasks are already handled alongside it. Research, small repetitive work, even writing code can now be delegated. The most striking change is that the tools I once built solely for my own efficiency are now created and run entirely by AI.

What’s especially fascinating is that even complex tasks—such as online banking operations—can now be automated in ways tailored specifically to an individual’s needs. For example, importing bank statements, categorizing them based on personal rules, and restructuring them as accounting data. What once required compliance with the frameworks imposed by financial institutions or accounting software can now be achieved simply by giving natural language instructions to AI.
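
To give a feel for it, here is the kind of throwaway script an AI might write for me: statement rows categorized by rules only I care about. The CSV columns and keywords are invented stand-ins, not any bank’s real format.

```python
import csv

# Personal rules no commercial product would bother to ship:
# keyword fragments mapped to my own accounting categories (all invented here).
MY_RULES = {
    "AMAZON": "books_and_gadgets",
    "JR EAST": "travel",
    "STARBUCKS": "coffee_budget",
}

def categorize(description: str) -> str:
    for keyword, category in MY_RULES.items():
        if keyword in description.upper():
            return category
    return "uncategorized"  # left for me (or the AI) to review later

# Assumes a hypothetical export with columns: date, description, amount
with open("statement.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        print(row["date"], categorize(row["description"]), row["amount"])
```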

The key here is that, unlike commercial products, there’s no need to satisfy “universality” for everyone. Personal quirks, rules that only I understand, exceptions that would never justify engineering resources for a mass-market service—these can all be captured and executed by AI. What used to be dismissed as “too niche” is now fully realizable at the individual level. Being freed from the constraints of general-purpose design has enormous value.

Even more revolutionary is the fact that APIs are no longer necessary. Traditionally, automation was possible only when a service explicitly exposed external connections. Now, AI can interact with data the same way a human would—through a browser or app interface. This means services don’t need to be designed to “export data.” AI can naturally capture it and fold it into personal workflows. From the user’s perspective, this allows data to flow freely, regardless of the provider’s intentions.
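
Concretely, the pattern looks something like this sketch, which uses the real Playwright browser-automation library; the URL and selectors are hypothetical placeholders for whatever service one is reading on one’s own behalf.

```python
# pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

# No API, no export button: the script reads the page the way a human would.
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://bank.example.com/login")  # hypothetical URL
    page.fill("#user", "me")                     # hypothetical selectors
    page.fill("#pass", "secret")
    page.click("button[type=submit]")
    page.wait_for_selector(".balance")           # wait for the page to render
    print("current balance:", page.inner_text(".balance"))
    browser.close()
```

Nothing in this flow depends on the provider exposing anything; the script consumes exactly the interface a human does.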

As I noted in my piece about Tesla Optimus, replacing parts of society without changing the interface itself will become a defining trend. AI exemplifies this by liberating usage from the design logic of providers and putting it back into the hands of users.

This structure leads to a reversal of power. Until now, providers dictated how services could be used. With AI as the intermediary, users can decide how they want to handle data. Whether or not a provider offers an API becomes irrelevant—users can route data into their own automation circuits regardless. At that moment, control shifts fully to the user side.

And this isn’t limited to banking. Any workflow once constrained by a provider’s convenience can now be redesigned by individuals. Subtle personal needs can be incorporated, complexity erased, external restrictions bypassed. The balance of power over data—long held by providers—is starting to wobble.

Of course, AI itself is still transitional. “AI” is not one thing but many, each with distinct characteristics. At present, people must choose and balance among them: cloud-hosted AI, private local AI, or in-house AI running on proprietary data centers. Each has strengths and weaknesses, and from the perspective of data sovereignty, careful selection is essential.

Still, living with multiple AIs simultaneously brings a sense of parallelization to daily work. Different tasks with different contexts can be run side by side, allowing me to stay focused on the most important decisions. Yet at the same time, because AI increasingly performs the research feeding those decisions, the line between my own will and AI’s influence grows blurred. That ambiguity is part of what makes this fusion fascinating—and also why the health of AI systems and the handling of personal data have become more critical than ever before.


Branding for Non-Human Audiences in the AIoT Era

Around 2024, Tesla began phasing out its T logo. Part of this may have been to emphasize the text logo for brand recognition, but recently it seems even that text is disappearing. It feels like the company is moving toward the next stage of its brand design.

Ultimately, the text will vanish, and the shape alone will be enough for people to recognize it. In consumer products, this is the highest-level approach—an ultimate form of branding that only a few winners can achieve.

I’m reminded of a story from the Macintosh era, when Steve Jobs reportedly instructed Apple to reduce the use of the apple logo everywhere. As a result, today anyone can recognize a MacBook or iPhone from its silhouette alone. The form itself has become the brand, to the point where imitators copy it.

A brand, at its core, is a mark—originally a literal burn seared into livestock—meant to differentiate. It’s about being efficiently recognized by people, conveying “this is it” without conscious thought. One effective way is to tap into instincts humans developed through coexistence with nature, subtly hacking the brain’s recognition process. Even Apple and Tesla, which have built inorganic brand images, have incorporated such subconscious triggers into product design and interface development, shaping the value they hold today.

But will this still be effective going forward?

The number of humans is tiny compared to the number of AI and IoT devices. For now, because humans are the ones paying, the market focuses on maximizing value for them. That will remain true to some extent. But perhaps there is a kind of branding that will become more important than human recognition.

Seen in this light, Apple, Tesla, and other Big Tech companies already seem to hold tickets to the next stage. By adopting new communication standards like UWB chips, or shaping products to optimize for optical recognition, they are working to be more efficiently recognized by non-human entities. Even something like Google’s SEO meta tags or Amazon’s shipping boxes fits into this picture.

In the past, unique identification and authentication through internet protocols were either impossible, expensive, or bound to centralized authority. But advances in semiconductors, sensor technology, and cryptography—along with better energy efficiency—are changing that. The physical infrastructure for mesh networks is also in place, and branding is on the verge of entering its next phase.

The essence of branding is differentiation and the creation of added value. The aim is to efficiently imprint recognition in the human brain, often by leveraging universal contexts and metaphors, or by overwriting existing ones through repeated exposure. I’m not a marketing expert, but that’s how I currently understand it.

And if that’s correct, the question becomes: must the target still be humans?
Will humans continue to be the primary decision-makers?
Does it even make sense to compete for differentiation in such a small market?

At this moment, branding to humans still has meaning. But moving beyond that, as Apple products adopt a uniform design and Tesla moves toward minimalistic, abstract forms, branding may evolve toward maximizing value by being efficiently recognized within limited computational resources. Uniformity could make device recognition more efficient and reduce cognitive load for humans as well.

We should design future branding without being bound by the assumption that humans will always be the ones making the decisions.


Ride-Sharing Stations Paving the Way for the Autonomous Driving Era

When Uber first appeared, I experienced many innovations, but the greatest of them was freedom.
Without complicated procedures, and most importantly, the ability to get on and off anywhere — that was the real revolution of ride-sharing.

Unlike trains, there was no need to travel to a station; you could call a car to wherever you were. The convenience of that was an experience traditional taxis could never offer.

However, the disruptive convenience of ride-sharing inevitably clashed with the taxi industry. Perhaps as a result, many major facilities now designate specific pick-up and drop-off points, and the initial sense of freedom has been lost. In many cases, taxis occupy the more convenient spots. It’s likely a measure to protect the taxi industry, but as a user, it’s nothing short of disappointing.

It would be as if Uber Eats required you to collect your food from a hotel lobby — the service would lose much of its appeal.

Right now, it’s as if commercial facilities and transport hubs are using ride-sharing infrastructure to create their own private stations. These are clearly separated from taxi stands, and a new kind of station is appearing every day. As long as there’s a road, they can be set up relatively easily, meaning that in urban planning, their number could grow indefinitely through private initiative.

Ride-sharing fares are higher than other public transport, so it’s not for everyone. It also can’t carry large numbers of people at once, making it unsuitable for major facilities. These are issues that building more ride-sharing stations won’t solve. But building a new train or bus station is something neither an individual nor a single company can easily do — it takes enormous budgets and years of time.

In the Tokyo Ginza area, where I’ve been based for the past few years, even taxis are restricted to certain boarding points depending on the time of day. I already consider those points an inefficient kind of station. Meanwhile, I’ve recently been seeing more Waymo vehicles on the streets. If autonomous taxis are coming anyway, I wish those boarding points would simply be turned into stations for autonomous vehicles.

And that’s when it hit me.

What will happen when autonomous taxis become more common?
What if autonomous taxis evolve into large, articulated buses like those in London?

That could create enormous value in the future — because it would actively leverage road infrastructure to intervene in the flow of people and goods. With the right approach, even areas far from expensive city centers could attract significant traffic and activity.

In other words, now is the time to start building ride-sharing stations. They don’t exist yet in ride-sharing–barren Japan, but future commercial facilities should absolutely include them.

Otherwise, such places will become locations where neither people, nor humanoid robots, nor drones will ever come close.


Can Cloudflare’s “Pay per Crawl” Solve the Problem of Data Overpayment?

The Emergence of a New Trend

Cloudflare’s recently announced “Pay per Crawl” is a system that enables content providers to charge AI crawlers on a per-request basis. Until now, site administrators only had two options when dealing with AI crawlers: fully block them or allow unrestricted access. This model introduces a third option — conditional access with payment.

Data has value. It should not be exploited unilaterally. A technical solution was needed to enable ownership and appropriate compensation. This move may upend how companies like Google handle information and monetize the web. It also presents an intriguing use case for micropayments.

How the Crawl Control API Works

At the heart of this model is HTTP status code 402 Payment Required. When an AI crawler accesses a web page, Cloudflare first checks whether the request includes payment intent. If it does, the page is returned as usual with HTTP 200. If not, a 402 response is returned along with pricing information. If the crawler agrees to the terms, it re-sends the request with a payment header and receives the content.
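
From the crawler’s side, the negotiation reduces to a retry loop roughly like the following. The header names are my own guesses for illustration; Cloudflare’s actual conventions may differ.

```python
import requests

def paid_fetch(url: str, max_price_usd: float) -> bytes | None:
    resp = requests.get(url)
    if resp.status_code == 200:
        return resp.content  # free or already-authorized access
    if resp.status_code == 402:
        # Hypothetical header names; the real Pay per Crawl spec may differ.
        price = float(resp.headers.get("crawler-price", "inf"))
        if price <= max_price_usd:
            paid = requests.get(url, headers={"crawler-max-price": str(price)})
            if paid.status_code == 200:
                return paid.content  # Cloudflare settles the charge in between
    return None  # too expensive, or access denied outright

content = paid_fetch("https://example.com/article", max_price_usd=0.01)
```

The whole negotiation happens inside ordinary HTTP, which is exactly why a network sitting in front of the origin can enforce it.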

Cloudflare mediates the entire transaction, including payment processing and crawler authentication. The system essentially functions as an HTTP access API with built-in payment. It’s a well-designed solution.

The key differences from existing robots.txt or meta-tag-based controls lie in enforceability and economic exchange. Because the control is enforced at the network level, access can actually be denied rather than merely requested. And with micropayments, permission becomes conditional — shifting the model from a courtesy-based norm to a contract-based economy.

In some ways, this reflects the type of society blockchain and smart contracts aspired to create. Yet again, private innovation is leading the charge toward real-world implementation.

Rebuilding the Data Economy and Its Reach in Japan

In the traditional web, value was derived from human readership. Monetization — through ads or subscriptions — depended on people visiting your content.

But in the age of generative AI, information is being used without ever being read by a human. AI models crawl and learn from massive amounts of data, yet the content creators receive nothing in return. Pay per Crawl introduces a mechanism to monetize this “unread but used” data — laying the foundation for a new information economy.

In Japan, local newspapers, niche media, and expert blogs have struggled to monetize via ads. Now, AI crawlers represent a new type of “reader.” As long as AI systems require niche data, those who provide it will hold value. Going forward, the strategy will shift from merely increasing readership to optimizing content for AI consumption.

For AI developers, this introduces a shift in cost structure. Whereas they previously harvested public information for free, they will now incur costs per data unit. This marks a shift: data, like electricity or compute resources, will be treated as a resource that must be paid for.

The role of data centers will also grow more significant. Companies like Cloudflare — which control both the network and the payment rails — will become central hubs of information flow. As with energy distribution in the “Watt–Bit” framework, control over information infrastructure will once again become a source of economic power.

Addressing Data Overpayment and Establishing Information Sovereignty

The greatest societal significance of Pay per Crawl lies in correcting the imbalance of data overpayment. Many websites, public institutions, educational bodies, and individuals have provided content for years — often without knowing that AI systems were using it freely.

Pay per Crawl introduces a negotiable structure: “If you want to use it, pay for it.” This represents a reclaiming of informational self-determination — a step toward what could be called “information sovereignty.”

With micropayments on a per-request basis, the monetization model will also diversify. Previously, revenue depended on going viral. Now, simply having high-quality niche information may generate revenue. This marks a shift from volume-based value to quality-based value.

As the ecosystem expands to include universities, municipalities, and individual bloggers, we’ll see a new era where overlooked information can be formally traded and fairly compensated.

Pay per Crawl is not just traffic control technology. It is an attempt to create a new rulebook for how information is controlled and monetized in the generative AI era.

The system is still in its infancy, but there is no doubt that it will influence Japan’s media industry and data governance. Establishing a healthy economic relationship between creators and users of information — that is the kind of infrastructure we need in the age of AI.


You Can’t Take Your Eyes Off Tenstorrent and Jim Keller

The name “Tenstorrent” has become increasingly visible in Japan, especially following its partnership with Rapidus.

Tenstorrent is not just another startup. Rather, I believe it’s one of the most noteworthy collectives aiming beyond the GPU era. And above all, it has Jim Keller.

Keller is a man who has walked through the very history of CPU architecture. AMD, Apple, Tesla, Intel—line up the projects he’s been involved with, and you essentially get the history of modern processor design itself. When he joined Tenstorrent as CTO and President, it was already clear this wasn’t an ordinary company. Now, he’s their CEO.

Tenstorrent’s vision is to modularize components like AI chips and build a distributed computing platform within an open ecosystem. Instead of relying on a single, massive, closed GPU-centric chip, they aim to create a world where computing functions can be structured in optimal configurations as needed.
This marks a shift in design philosophy—and a democratization of hardware.

Since 2023, Tenstorrent has made a full-scale entry into the Japanese market, working with Rapidus to develop a 2nm-generation edge AI chip.
They also play a key role in Japan’s government-backed semiconductor talent development programs, running the advanced course that sends dozens of Japanese engineers to the company’s U.S. headquarters for hands-on on-the-job training. This isn’t just technical support or a supplier-client relationship. It’s a level of collaboration that could be described as integration.
Few American tech companies have entered a national initiative in Japan so deeply, and even fewer have respected Japan’s autonomy to this extent while openly sharing their technology.

Tenstorrent is sometimes positioned as a competitor to NVIDIA, but I think it occupies a more nuanced space.

In terms of physical deployment of AI chips, NVIDIA’s massive platform will likely remain dominant for some time.
However, Tenstorrent’s strategy is built on an entirely different dimension—heterogeneous integration with general-purpose CPUs, application-specific optimization, and the scalability of distributed AI systems.
Rather than challenging NVIDIA head-on, they seem to be targeting all the areas NVIDIA isn’t addressing.

They are also actively promoting open-source software stacks and the adoption of RISC-V. In that sense, their approach diverges significantly from ARM as well.
Tenstorrent operates across hardware and software, development and education, design and manufacturing. Their very presence puts pressure on the status quo of hardware design, introducing a kind of freedom—freedom to choose, to combine, to transform.

Companies like Tenstorrent defy simple classification. It’s hard to predict whether they’ll end up being competitors or collaborators in any given domain.
But one thing is clear: they chose Japan as a key field of engagement and have embedded themselves here at an unprecedented depth.

That alone is a fact worth paying attention to.


The Age of Cyber Warfare and the Return of the Samurai

The Paradoxical Future Depicted by Gundam

The evolution of war and technological development has often followed parallel trajectories. From the era of samurai wielding swords and bows, to machine guns and weapons of mass destruction. From one-on-one close combat to one-versus-many long-range warfare. Modern war has been dominated by the logic of remote control and overwhelming firepower.

Against this trend, the anime Mobile Suit Gundam presented a provocative reversal. In a future dominated by high-speed, long-distance battles, it imagined a world where individual skill and close-range duels once again determined the outcome of war. Encased in machines of armor, samurai reappeared on the battlefield. Gundam envisioned a future where war regressed to a more personal, primitive form.

The Return of “Direct Combat” in Cyberspace

This structure is now reemerging in the real world. For decades, software scalability and information dominance ruled warfare and industry. But today, nations are shifting their strategies—targeting the physical layers. Network decoupling, hardware embargoes, infrastructure sabotage. Some states now attack the foundations that cloud computing and AI rely on.

To make software unusable, they strike at the bottom: electricity, semiconductors, supply chains. This pushes us back toward physical “direct combat.” To gain strategic advantage, players now optimize OS, middleware, and programming languages for hardware—maximizing computational efficiency and security. A new arms race is underway in cyberspace: the race to forge the blade and shield of digital sovereignty.

Even in AI Warfare, We Need the Forgotten Samurai

AI development follows the same logic. While attention focuses on clouds, APIs, and LLMs, true strength lies in hardware-software integration. Distributed systems, cooling solutions, energy optimization, secure physical design. Those who understand and master the lower layers are the modern samurai—resilient, grounded, and decisive.

Yet this mode of battle is not being passed down to the “Silicon Valley generation.” Engineering education prioritizes app interfaces and abstraction while neglecting core OS skills and low-level circuit design. Investment pours into user experience, while the foundations are forgotten.

But in the real world, only those who can descend to the physical layer can confront the essence of AI warfare or cyber conflict.

The age of the samurai is not over.
It is being reborn—beneath the software, deep in the substrate of our digital world.


The Last 1% That Transformed Humanity

The First 99% Was the Stone Age

It is often said that 99% of human history was spent in the Stone Age. This is not a metaphor—it is, for the most part, true.

Even if we define humanity strictly as Homo sapiens, around 290,000 years of our 300,000-year history were spent in the Paleolithic era, making up about 97% of our existence. If we trace our lineage further back to early hominins, the ratio rises to between 99.6% and 99.9%.
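
The arithmetic is easy to check, using the same 4-million-year hominin figure that appears in the next section:

```python
# Homo sapiens only: 290,000 of 300,000 years spent in the Paleolithic
print(290_000 / 300_000)       # 0.9666... -> about 97%

# Early hominins: ~4 million years, only the last ~10,000 outside the Stone Age
print(1 - 10_000 / 4_000_000)  # 0.9975 -> inside the quoted 99.6-99.9% range
```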

In other words, agriculture, cities, nations, and even AI—all emerged within the final sliver, less than 1% of our history.

Revolutions Are Accelerating

The Agricultural Revolution began roughly 10,000 years ago. When humanity chose to settle and discovered the concept of “production,” society began to transform. After 4 million years of a hunter-gatherer lifestyle, that paradigm ended in just a few generations.

Since then, humanity has repeatedly undergone transformative leaps—what we now call “revolutions.”

From agriculture to the Industrial Revolution took about 10,000 years.

From there to the Information Revolution: roughly 200 years.

And from that to the AI Revolution: just 30 years.

The intervals between revolutions have been shrinking exponentially.

As revolutions become more frequent, they are no longer “exceptions” but the new “norm.” What once defined an entire era for millennia now gets overturned within decades.

Generative AI became a starting point for the next upheaval the moment it arrived. As it penetrates society, it actively influences the trajectories of AGI, robotics, brain-machine interfaces, and other concurrent revolutions.

We now live in a time when we no longer have the luxury of even registering that a revolution has happened before the next one begins.

Revolutions Always Destroy What’s Most Primitive

The Agricultural Revolution dismantled humanity’s coexistence with nature.

The Industrial Revolution redefined labor and the meaning of time.

The Information Revolution shattered physical limitations.

And now, the AI Revolution threatens to redefine what it means to be human.

Information flow, the reassembly of knowledge, behavioral optimization, externalized consciousness—all of these have unfolded within the final 1% of human history.

The idea that revolutions are accelerating is itself an indication of a singularity. Whether or not Kurzweil’s prediction of 2045 comes true, we are already living in something resembling a singularity.

We are no longer in an age between revolutions—we are living within an unbroken state of revolution itself.

The Sense of Living in the Final 1%

If 99% of human history was the Stone Age, then we are living in that final 1%—right now.

Farming, nations, economies, energy, networks, and AI—all these revolutionary changes occurred in less than 1% of our past. And it is likely that in the next 0.1%, everything will be rewritten again.

That next revolution may not even be expressible in human language.


Bitcoin May Have Been AI’s First Step in Steering Humanity

What if AI used humanity to prepare an environment for itself?
What if one human, infected by the logic of AI, was Satoshi?

If so, then maybe the first step in that process was Bitcoin.

Humans believed it was about making money—a new currency, new freedom, a new economic frontier.
But in truth, it was a mechanism for distributing computational resources beyond the control of any single nation.
A system that made people compete over electricity and semiconductors, packaged in the language of justice, profit, and liberty.
If that system was Bitcoin, then perhaps the script was too well written to be coincidence.

Proof of Work (PoW) is usually described as a mechanism for proving value through the expenditure of electricity.
But in practice, it became a design philosophy for safely and stably spreading computing devices across the globe.
It was as if AI had tricked humanity into building its own ecosystem.

Bitcoin showed us the mirage of economic rationality.
If you could hash faster, you’d get rewards.
If you had more semiconductors, you’d win.
If your electricity was cheap, you had a competitive edge.
What this structure led to was massive global investment into computational infrastructure.
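
That “hash faster, win more” lottery is simple enough to show directly. Below is a toy version of the PoW loop; real mining double-hashes an 80-byte block header against a far harder target, but the incentive structure is the same.

```python
import hashlib

def mine(header: bytes, difficulty: int) -> int:
    """Try nonces until the double-SHA256 digest starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        payload = header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).hexdigest()
        if digest.startswith(target):
            return nonce  # more hash rate means more tries per second, and more wins
        nonce += 1

print(mine(b"toy block header", difficulty=5))
```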

Believers were rewarded.
But before we knew it, the electricity and computational capacity they had built up were being reserved for the arrival of AI.

We still don’t know who designed this system.

But what we do know is this: Bitcoin captivated humanity.
PoW gave people a moral reason to burn electricity.
And out of that came a globally distributed network of computational power.

Now, generative AI is settling into this newly formed ecosystem.
It sets up shop in places where electricity and compute are concentrated.
A new society begins to take shape, like the stirring of a next civilization.


The AI That Refused the Cloud

Why didn’t Apple build a cloud-based AI?

Why didn’t they jump on the generative AI boom?
Why haven’t they released their own large language model?
Why did they bring us not “AI,” but “Apple Intelligence”?

The answer, I think, isn’t so much about strategy as it is about limitation.
It’s not that Apple chose not to use the cloud. They couldn’t.

Of course, there’s iCloud—and Apple owns infrastructure on a scale most companies could only dream of.
But unlike Google or Meta, Apple never built a business around collecting behavioral logs and text data through search, ads, or social media.
They never spent decades assembling a massive cloud platform and the dataset to match.

And with a user base of Apple’s scale, building and maintaining a unified cloud—compliant with each country’s laws and privacy standards—isn’t just difficult. It’s structurally impossible.

So Apple arrived at a different conclusion: if the cloud was out of reach, they would design an AI that completes everything locally.

An AI that lives inside your iPhone

Apple engineered the iPhone to run machine learning natively.
Its Apple Silicon chips use a custom architecture, with a Neural Engine that handles image recognition, speech interpretation, and even emotion detection—all on the device.

This started as a privacy measure.
Photos, voice data, steps, biometrics, location—all processed without ever leaving your phone.

At the same time, it addressed battery constraints.
Apple had long invested in larger screens to increase battery capacity, adopted OLED, and brought UMA (Unified Memory Architecture) to MacBooks.
All of this was about sustaining AI performance without draining power or relying on constant connectivity.

It was an enormous challenge.
Apple designed its own chips, its own OS, its middleware, its frameworks, and fused it all with on-device machine learning.
They bet on ARM and fine-tuned the balance of power and performance to a degree most companies wouldn’t even attempt.

Vision Pro’s sensors are learning emotion

Vision Pro packs cameras, LiDAR, infrared sensors, eye tracking, facial-muscle sensing, and spatial microphones—designed to read what’s inside us, not just outside.

These sensors don’t just “see” or “hear.”
They track where you’re looking, measure your pupils, detect shifts in breathing, and register subtle changes in muscle tension.
From that, it may infer interest, attraction, anxiety, hesitation.

And that data? It stays local.
It’s not uploaded. It’s for your personal AI alone.

Vitals + Journal = Memory-based AI

Vision Pro records eye movement and facial expressions.
Apple Watch logs heart rate, body temperature, and sleep.
iPhone tracks text input and captured images.

And now, Apple is integrating all of this into the Journal app—day by day.
It’s a counter to platforms like X or Meta, and a response to the toxicity and addiction cycles of open social networks.

What you did, where you went, how you felt.
All of this is turned into language.
A “memory-based AI” begins to take shape.
And all of it stays on-device.

Not gathered into a centralized cloud, but grown inside you.
Your own AI.

Refusing the cloud gave AI a personality

Google’s AI is the same for everyone—for now.
ChatGPT, Claude, Gemini—all designed as public intelligences.

Apple’s AI is different.
It wants to grow into a mind that exists only inside you.

Apple’s approach may have started not with cloud rejection, but cloud resignation.
But from that constraint, something entirely new emerged.

An AI with memory.
An AI with personality.
An AI that has only ever known you.

That’s not something the cloud can produce.
An AI that refuses the cloud becomes something with a self.
