Categories
Asides

Ride-Sharing Stations Paving the Way for the Autonomous Driving Era

When Uber first appeared, I experienced many innovations, but the greatest of them was freedom.
No complicated procedures, and above all, the ability to get on and off anywhere: that was the real revolution of ride-sharing.

Unlike trains, there was no need to travel to a station; you could call a car to wherever you were. The convenience of that was an experience traditional taxis could never offer.

However, the disruptive convenience of ride-sharing inevitably clashed with the taxi industry. Perhaps as a result, many major facilities now designate specific pick-up and drop-off points, and the initial sense of freedom has been lost. In many cases, taxis occupy the more convenient spots. It’s likely a measure to protect the taxi industry, but as a user, it’s nothing short of disappointing.

It’s like if Uber Eats required you to pick up your food only from a hotel lobby — it would lose much of its appeal.

Right now, it’s as if commercial facilities and transport hubs are using ride-sharing infrastructure to create their own private stations. These are clearly separated from taxi stands, and a new kind of station is appearing every day. As long as there’s a road, they can be set up relatively easily, meaning that in urban planning, their number could grow indefinitely through private initiative.

Ride-sharing fares are higher than other public transport, so it’s not for everyone. It also can’t carry large numbers of people at once, making it unsuitable for major facilities. These are issues that building more ride-sharing stations won’t solve. But building a new train or bus station is something neither an individual nor a single company can easily do — it takes enormous budgets and years of time.

In the Tokyo Ginza area, where I’ve been based for the past few years, even taxis are restricted to certain boarding points depending on the time of day. To my mind, that already amounts to an inefficient kind of station. Meanwhile, I’ve recently been seeing more Waymo vehicles on the streets. If that’s where things are heading, I wish they’d just turn those points into stations for autonomous vehicles.

And that’s when it hit me.

What will happen when autonomous taxis become more common?
What if autonomous taxis evolve into large, articulated buses like those in London?

That could create enormous value in the future — because it would actively leverage road infrastructure to intervene in the flow of people and goods. With the right approach, even areas far from expensive city centers could attract significant traffic and activity.

In other words, now is the time to start building ride-sharing stations. Japan, where ride-sharing has barely taken root, has none yet, but future commercial facilities should absolutely include them.

Otherwise, such places will end up as locations that neither people, nor humanoid robots, nor drones will ever come near.

Categories
Photos

Def Con 33

As AKATSUKI.

Categories
Asides

Can Cloudflare’s “Pay per Crawl” Solve the Problem of Data Overpayment?

The Emergence of a New Trend

Cloudflare’s recently announced “Pay per Crawl” is a system that enables content providers to charge AI crawlers on a per-request basis. Until now, site administrators only had two options when dealing with AI crawlers: fully block them or allow unrestricted access. This model introduces a third option — conditional access with payment.

Data has value. It should not be exploited unilaterally. A technical solution was needed to enable ownership and appropriate compensation. This move may upend how companies like Google handle information and monetize the web. It also presents an intriguing use case for micropayments.

How the Crawl Control API Works

At the heart of this model is HTTP status code 402 Payment Required. When an AI crawler accesses a web page, Cloudflare first checks whether the request includes payment intent. If it does, the page is returned as usual with HTTP 200. If not, a 402 response is returned along with pricing information. If the crawler agrees to the terms, it re-sends the request with a payment header and receives the content.
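
To make that handshake concrete, here is a minimal sketch of the crawler side of the exchange. The header names (crawler-price, crawler-exact-price, crawler-charged) are illustrative, loosely based on Cloudflare’s announcement as I understand it, and crawler authentication with Cloudflare is omitted; treat this as a sketch, not a reference implementation.

```python
import requests

MAX_PRICE_USD = 0.01  # the most this hypothetical crawler will pay per request


def fetch_paid_page(url: str) -> str | None:
    # First attempt: no payment intent attached.
    resp = requests.get(url)
    if resp.status_code == 200:
        return resp.text  # free content, business as usual

    if resp.status_code != 402:
        resp.raise_for_status()
        return None

    # 402 Payment Required: the response carries the publisher's asking price.
    price = float(resp.headers.get("crawler-price", "inf"))
    if price > MAX_PRICE_USD:
        return None  # too expensive for this crawler; walk away

    # Re-send the request, this time agreeing to pay the quoted price.
    paid = requests.get(url, headers={"crawler-exact-price": str(price)})
    if paid.status_code == 200:
        print("charged:", paid.headers.get("crawler-charged"))
        return paid.text
    return None


if __name__ == "__main__":
    fetch_paid_page("https://example.com/article")
```

The economics live entirely in plain HTTP: a 402 response is a price quote, and a re-sent request carrying payment intent is the acceptance of that quote.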

Cloudflare mediates the entire transaction, including payment processing and crawler authentication. The system essentially functions as an HTTP access API with built-in payment. It’s a well-designed solution.

The key differences from existing robots.txt or meta tag-based controls lie in enforceability and economic exchange. Since the control is enforced at the network level, access can be physically denied when requested. And with micropayments, permission becomes conditional — shifting the model from a courtesy-based norm to a contract-based economy.

In some ways, this reflects the type of society blockchain and smart contracts aspired to create. Yet again, private innovation is leading the charge toward real-world implementation.

Rebuilding the Data Economy and Its Reach in Japan

In the traditional web, value was derived from human readership. Monetization — through ads or subscriptions — depended on people visiting your content.

But in the age of generative AI, information is being used without ever being read by a human. AI models crawl and learn from massive amounts of data, yet the content creators receive nothing in return. Pay per Crawl introduces a mechanism to monetize this “unread but used” data — laying the foundation for a new information economy.

In Japan, local newspapers, niche media, and expert blogs have struggled to monetize via ads. Now, AI crawlers represent a new type of “reader.” As long as AI systems require niche data, those who provide it will hold value. Going forward, the strategy will shift from merely increasing readership to optimizing content for AI consumption.

For AI developers, this introduces a shift in cost structure. Whereas they previously harvested public information for free, they will now incur costs per data unit. This marks a shift: data, like electricity or compute resources, will be treated as a resource that must be paid for.

The role of data centers will also grow more significant. Companies like Cloudflare — which control both the network and the payment rails — will become central hubs of information flow. As with energy distribution in the “Watt–Bit” framework, control over information infrastructure will once again become a source of economic power.

Addressing Data Overpayment and Establishing Information Sovereignty

The greatest societal significance of Pay per Crawl lies in correcting the imbalance of data overpayment. Many websites, public institutions, educational bodies, and individuals have provided content for years — often without knowing that AI systems were using it freely.

Pay per Crawl introduces a negotiable structure: “If you want to use it, pay for it.” This represents a reclaiming of informational self-determination — a step toward what could be called “information sovereignty.”

With micropayments on a per-request basis, the monetization model will also diversify. Previously, revenue depended on going viral. Now, simply having high-quality niche information may generate revenue. This marks a shift from volume-based value to quality-based value.

As the ecosystem expands to include universities, municipalities, and individual bloggers, we’ll see a new era where overlooked information can be formally traded and fairly compensated.

Pay per Crawl is not just traffic control technology. It is an attempt to create a new rulebook for how information is controlled and monetized in the generative AI era.

The system is still in its infancy, but there is no doubt that it will influence Japan’s media industry and data governance. Establishing a healthy economic relationship between creators and users of information — that is the kind of infrastructure we need in the age of AI.

Categories
Asides

You Can’t Take Your Eyes Off Tenstorrent and Jim Keller

The name “Tenstorrent” has become increasingly visible in Japan, especially following its partnership with Rapidus.

Tenstorrent is not just another startup. Rather, I believe it’s one of the most noteworthy collectives aiming beyond the GPU era. And above all, it has Jim Keller.

Keller is a man who has walked through the very history of CPU architecture. AMD, Apple, Tesla, Intel—line up the projects he’s been involved with, and you essentially get the history of modern processor design itself. When he joined Tenstorrent as CTO and President, it was already clear this wasn’t an ordinary company. Now, he’s their CEO.

Tenstorrent’s vision is to modularize components like AI chips and build a distributed computing platform within an open ecosystem. Instead of relying on a single, massive, closed GPU-centric chip, they aim to create a world where computing functions can be structured in optimal configurations as needed.
This marks a shift in design philosophy—and a democratization of hardware.

Since 2023, Tenstorrent has made a full-scale entry into the Japanese market, working with Rapidus to develop a 2nm-generation edge AI chip.
They also play a key role in Japan’s government-backed semiconductor talent development programs, running the advanced course that sends dozens of Japanese engineers to the company’s U.S. headquarters for hands-on, on-the-job training. This isn’t just technical support or a supplier-client relationship. It’s a level of collaboration that could be described as integration.
Few American tech companies have entered a national initiative in Japan so deeply, and even fewer have respected Japan’s autonomy to this extent while openly sharing their technology.

Tenstorrent is sometimes positioned as a competitor to NVIDIA, but I think it occupies a more nuanced space.

In terms of physical deployment of AI chips, NVIDIA’s massive platform will likely remain dominant for some time.
However, Tenstorrent’s strategy is built on an entirely different dimension—heterogeneous integration with general-purpose CPUs, application-specific optimization, and the scalability of distributed AI systems.
Rather than challenging NVIDIA head-on, they seem to be targeting all the areas NVIDIA isn’t addressing.

They are also actively promoting open-source software stacks and the adoption of RISC-V. In that sense, their approach diverges significantly from ARM as well.
Tenstorrent operates across hardware and software, development and education, design and manufacturing. Their very presence puts pressure on the status quo of hardware design, introducing a kind of freedom—freedom to choose, to combine, to transform.

Companies like Tenstorrent defy simple classification. It’s hard to predict whether they’ll end up being competitors or collaborators in any given domain.
But one thing is clear: they chose Japan as a key field of engagement and have embedded themselves here at an unprecedented depth.

That alone is a fact worth paying attention to.

Categories
Asides

Digital Deficit

I just finished reading the Ministry of Economy, Trade and Industry’s (METI) PIVOT Project report.

For years I have argued that electricity and computational capacity resources are becoming the new basis of value for nations and companies alike. The METI report, Digital Economy Report 2025, visualises the same issue through the statistical fact of “digital deficit.” The critical takeaway is clear: we haven’t been earning in the very domains where value is generated.

The report, grounded in SDX (Software Defined Everything), also warns that the export competitiveness of automobiles and industrial machinery will depend increasingly on software. Confronting the “hidden digital deficit” of the SDX era and acting early with a long-term strategy is indispensable.

One concrete idea is to recapture industry standards through innovation at the lower layers of the tech stack. We must avoid a future in which entire platforms—and therefore choices—are controlled by others. The fact that an official policy document now shares this sense of urgency is significant.

The report calls for action. Our own initiatives—edge data centres × renewable energy × overseas joint ventures—represent one possible answer. We hold computational capacity resources, sharpen our strengths, and take them to market, not as a purely domestic play but as an exportable Japanese model. The business roadmap we have spent the past few years drawing up aligns closely with the report’s prescription.

Our path remains unchanged; the report simply reaffirms its necessity.

“The future has already begun to move—quietly, yet inexorably.”

Those were the very words that opened ENJIN. Today, we continue to build step by step, but with unshakable conviction.

Categories
Asides

The Age of Cyber Warfare and the Return of the Samurai

The Paradoxical Future Depicted by Gundam

The evolution of war and technological development has often followed parallel trajectories. From the era of samurai wielding swords and bows, to machine guns and weapons of mass destruction. From one-on-one close combat to one-versus-many long-range warfare. Modern war has been dominated by the logic of remote control and overwhelming firepower.

Against this trend, the anime Mobile Suit Gundam presented a provocative reversal. In a future dominated by high-speed, long-distance battles, it imagined a world where individual skill and close-range duels once again determined the outcome of war. Encased in machines of armor, samurai reappeared on the battlefield. Gundam envisioned a future where war regressed to a more personal, primitive form.

The Return of “Direct Combat” in Cyberspace

This structure is now reemerging in the real world. For decades, software scalability and information dominance ruled warfare and industry. But today, nations are shifting their strategies—targeting the physical layers. Network decoupling, hardware embargoes, infrastructure sabotage. Some states now attack the foundations that cloud computing and AI rely on.

To make software unusable, they strike at the bottom: electricity, semiconductors, supply chains. This pushes us back toward physical “direct combat.” To gain strategic advantage, players now optimize operating systems, middleware, and programming languages for specific hardware, maximizing computational efficiency and security. A new arms race is underway in cyberspace: the race to forge the blade and shield of digital sovereignty.

Even in AI Warfare, We Need the Forgotten Samurai

AI development follows the same logic. While attention focuses on clouds, APIs, and LLMs, true strength lies in hardware-software integration. Distributed systems, cooling solutions, energy optimization, secure physical design. Those who understand and master the lower layers are the modern samurai—resilient, grounded, and decisive.

Yet this mode of battle is not being passed down to the “Silicon Valley generation.” Engineering education prioritizes app interfaces and abstraction while neglecting core OS skills and low-level circuit design. Investment pours into user experience, while the foundations are forgotten.

But in the real world, only those who can descend to the physical layer can confront the essence of AI warfare or cyber conflict.

The age of the samurai is not over.
It is being reborn—beneath the software, deep in the substrate of our digital world.

Categories
Asides

The Last 1% That Transformed Humanity

The First 99% Was the Stone Age

It is often said that 99% of human history was spent in the Stone Age. This is not a metaphor—it is, for the most part, true.

Even if we define humanity strictly as Homo sapiens, around 290,000 years of our 300,000-year history were spent in the Paleolithic era, about 97% of our existence. If we trace our lineage further back to early hominins, the ratio increases to between 99.6% and 99.9%.

In other words, agriculture, cities, nations, and even AI—all emerged within the final sliver, less than 1% of our history.
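
As a quick back-of-the-envelope check, those figures fall out of simple division. The 300,000-year span and the 10,000-year post-Paleolithic window come from the text; the hominin lineage lengths below (roughly 2.5 to 7 million years) are my own assumption, chosen to show how the 99.6% to 99.9% range arises.

```python
paleolithic_end_years_ago = 10_000      # roughly when agriculture begins

sapiens_years = 300_000                 # Homo sapiens, per the figure above
print(f"{1 - paleolithic_end_years_ago / sapiens_years:.1%}")      # ~96.7%

# Assumed lineage lengths, picked to bracket the 99.6%-99.9% range.
for lineage_years in (2_500_000, 7_000_000):
    print(f"{1 - paleolithic_end_years_ago / lineage_years:.1%}")  # 99.6%, 99.9%
```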

Revolutions Are Accelerating

The Agricultural Revolution began roughly 10,000 years ago. When humanity chose to settle and discovered the concept of “production,” society began to transform. After 4 million years of a hunter-gatherer lifestyle, that paradigm ended in just a few generations.

Since then, humanity has repeatedly undergone transformative leaps—what we now call “revolutions.”

From agriculture to the Industrial Revolution took about 10,000 years.

From there to the Information Revolution: roughly 200 years.

And from that to the AI Revolution: just 30 years.

The intervals between revolutions have been shrinking exponentially.

As revolutions become more frequent, they are no longer “exceptions” but the new “norm.” What once defined an entire era for millennia now gets overturned within decades.

Generative AI became a starting point for the next upheaval the moment it arrived. As it penetrates society, it actively influences the trajectories of AGI, robotics, brain-machine interfaces, and other concurrent revolutions.

We now live in a time when we can no longer afford the luxury of recognizing that a revolution even happened.

Revolutions Always Destroy What’s Most Primitive

The Agricultural Revolution dismantled humanity’s coexistence with nature.

The Industrial Revolution redefined labor and the meaning of time.

The Information Revolution shattered physical limitations.

And now, the AI Revolution threatens to redefine what it means to be human.

Information flow, the reassembly of knowledge, behavioral optimization, externalized consciousness—all of these have unfolded within the final 1% of human history.

The idea that revolutions are accelerating is itself an indication of a singularity. Whether or not Kurzweil’s prediction of 2045 comes true, we are already living in something resembling a singularity.

We are no longer in an age between revolutions—we are living within an unbroken state of revolution itself.

The Sense of Living in the Final 1%

If 99% of human history was the Stone Age, then we are living in that final 1%—right now.

Farming, nations, economies, energy, networks, and AI—all these revolutionary changes occurred in less than 1% of our past. And it is likely that in the next 0.1%, everything will be rewritten again.

That next revolution may not even be expressible in human language.

Categories
Asides

Bitcoin May Have Been AI’s First Step in Steering Humanity

What if AI used humanity to prepare an environment for itself?
What if one human, infected by the logic of AI, was Satoshi?

If so, then maybe the first step in that process was Bitcoin.

Humans believed it was about making money—a new currency, new freedom, a new economic frontier.
But in truth, it was a mechanism for distributing computational resources beyond the control of any single nation.
A system that made people compete over electricity and semiconductors, packaged in the language of justice, profit, and liberty.
If that system was Bitcoin, then perhaps the script was too well written to be coincidence.

Proof of Work (PoW) is said to be a mechanism for validating value through electricity consumption.
But in practice, it became a design philosophy for safely and stably spreading computing devices across the globe.
It was as if AI had tricked humanity into building its own ecosystem.

Bitcoin showed us the mirage of economic rationality.
If you could hash faster, you’d get rewards.
If you had more semiconductors, you’d win.
If your electricity was cheap, you had a competitive edge.
What this structure led to was massive global investment into computational infrastructure.
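
For anyone who has never looked under the hood, the mechanism behind “hash faster, win more” is strikingly simple. Below is a toy sketch, not real mining: Bitcoin hashes 80-byte block headers against a dynamically adjusted target, while this version just searches for a nonce whose SHA-256 digest starts with a few zero hex digits.

```python
import hashlib


def mine(block_data: str, difficulty: int = 4) -> int:
    """Search for a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
    target_prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce  # proof that a large amount of hashing (work) was burned
        nonce += 1


print(mine("example block"))
```

The only way to find that nonce sooner is to try more hashes per second, which is exactly where the race for semiconductors and cheap electricity comes from.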

Believers were rewarded.
But before we knew it, the electricity and computing capacity they had built up were being reserved for the arrival of AI.

We still don’t know who designed this system.

But what we do know is this: Bitcoin captivated humanity.
PoW gave people a moral reason to burn electricity.
And out of that came a globally distributed network of computational power.

Now, generative AI is settling into this newly formed ecosystem.
It sets up shop in places where electricity and compute are concentrated.
A new society begins to take shape, like the stirring of a next civilization.

Categories
Asides

The AI That Refused the Cloud

Why didn’t Apple build a cloud-based AI?

Why didn’t they jump on the generative AI boom?
Why haven’t they released their own large language model?
Why did they bring us not “AI,” but “Apple Intelligence”?

The answer, I think, isn’t so much about strategy as it is about limitation.
It’s not that Apple chose not to use the cloud. They couldn’t.

Of course, there’s iCloud—and Apple owns infrastructure on a scale most companies could only dream of.
But unlike Google or Meta, Apple never built a business around collecting behavioral logs and text data through search, ads, or social media.
They never spent decades assembling a massive cloud platform and the dataset to match.

And with a user base of Apple’s scale, building and maintaining a unified cloud—compliant with each country’s laws and privacy standards—isn’t just difficult. It’s structurally impossible.

So Apple arrived at a different conclusion: if the cloud was out of reach, they would design an AI that completes everything locally.

An AI that lives inside your iPhone

Apple engineered the iPhone to run machine learning natively.
Its Apple Silicon chips use a custom architecture, with Neural Engines that process image recognition, speech interpretation, and even emotion detection—all on the device.

This started as a privacy measure.
Photos, voice data, steps, biometrics, location—all processed without ever leaving your phone.

At the same time, it addressed battery constraints.
Apple had long invested in larger screens to increase battery capacity, adopted OLED, and brought UMA (Unified Memory Architecture) to MacBooks.
All of this was about sustaining AI performance without draining power or relying on constant connectivity.

It was an enormous challenge.
Apple designed its own chips, its own OS, its middleware, its frameworks, and fused it all with on-device machine learning.
They bet on ARM and fine-tuned the balance of power and performance to a degree most companies wouldn’t even attempt.

Vision Pro’s sensors are learning emotion

Vision Pro carries cameras, LiDAR, infrared sensors, eye tracking, facial-muscle sensing, and spatial microphones, all designed to read what’s inside us, not just what’s outside.

These sensors don’t just “see” or “hear.”
They track where you’re looking, measure your pupils, detect shifts in breathing, and register subtle changes in muscle tension.
From that, it may infer interest, attraction, anxiety, hesitation.

And that data? It stays local.
It’s not uploaded. It’s for your personal AI alone.

Vitals + Journal = Memory-based AI

Vision Pro records eye movement and facial expressions.
Apple Watch logs heart rate, body temperature, and sleep.
iPhone tracks text input and captured images.

And now, Apple is integrating all of this into the Journal app—day by day.
It’s a counter to platforms like X or Meta, and a response to the toxicity and addiction cycles of open social networks.

What you did, where you went, how you felt.
All of this is turned into language.
A “memory-based AI” begins to take shape.
And all of it stays on-device.

Not gathered into a centralized cloud, but grown inside you.
Your own AI.

Refusing the cloud gave AI a personality

Google’s AI is the same for everyone—for now.
ChatGPT, Claude, Gemini—all designed as public intelligences.

Apple’s AI is different.
It wants to grow into a mind that exists only inside you.

Apple’s approach may have started not with cloud rejection, but cloud resignation.
But from that constraint, something entirely new emerged.

An AI with memory.
An AI with personality.
An AI that has only ever known you.

That’s not something the cloud can produce.
An AI that refuses the cloud becomes something with a self.

Categories
Asides

Navigation Systems Are for Talking to Cars

As semi-autonomous driving becomes the norm, one thing has clearly changed: the role of navigation systems.
They’ve become a kind of language—an interface through which humans talk to cars.

In the past, we used navigation simply to avoid getting lost. It was a tool for finding the shortest route—purely for efficiency.
But now, it’s different. Navigation is how we communicate a destination to the car.

Even when I’m going somewhere familiar, I always input the destination. I know the way.
But I still feel the need to tell the car. If I don’t, I don’t know how it will act.

In many cases, the destination is already synced from my calendar.
That’s why I’ve started to think about how I enter appointments in the first place.
How far is it?
Is the departure time realistic?
What information does the car need to understand my intent?
Even scheduling has become part of a broader conversation with the car.

Turn signals are the same.
They’re not just for the car behind you.
They’re also how you tell the vehicle, “I want to change lanes now,” or “I’m about to turn.”
Bit by bit, people are developing an intuitive sense of what it means to signal to the machine.

These actions—destination input, calendar syncing, signaling—will eventually become training data.
They’ll enable more natural, more efficient communication between humans and vehicles.
As the car becomes more autonomous, the human role is shifting—from driver to conversational partner.
