Learning with AI Is Changing the Nature of Education

The word “education” may be too broad. Here, I want to focus strictly on the act of acquiring knowledge, not on values or character formation. From that perspective, the emergence of generative AI has begun to reshape the very structure of learning itself.

Since generative AI became widespread, my own learning across many fields has clearly accelerated. This is not limited to professional topics; it applies equally to hobbies and peripheral areas of interest. It is not simply that answers arrive faster, but that the process of learning has fundamentally changed.

A concrete example is learning Rubik’s Cube algorithms. After moving beyond basic memorization and into the phase of solving more efficiently, I found an overwhelming amount of information on the web and on YouTube. What appeared there, however, were methods and sequences optimized for someone else. Determining what was optimal for me took considerable time. Each source operated on a different set of assumptions and context, leaving the burden of organizing and reconciling those differences entirely on the learner.

Even a single symbol could cause confusion. Which face does “R” refer to, and in which direction is it rotated? What exact sequence does “SUNE” represent? Because these premises were not aligned, explanations often progressed without shared grounding, making understanding fragile and fragmented.
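
To make the alignment problem concrete, here is a minimal sketch in Python of the kind of definition-fixing an AI can do up front: state what each symbol means and expand a named sequence into explicit moves. In standard notation, a bare letter such as R is a clockwise quarter turn of that face, an apostrophe reverses it, and a trailing 2 doubles it; Sune is commonly written as R U R' U R U2 R'. The dictionary and helper below are illustrative only.

```python
# Minimal sketch: pin down notation before any explanation begins.
# Standard notation: a bare letter is a clockwise quarter turn of that face
# (R = right face), an apostrophe marks the counter-clockwise turn (R'),
# and a trailing 2 marks a half turn (U2).

NAMED_ALGORITHMS = {
    "Sune": ["R", "U", "R'", "U", "R", "U2", "R'"],
    "Anti-Sune": ["R", "U2", "R'", "U'", "R", "U'", "R'"],
}

def expand(name: str) -> str:
    """Return the explicit move sequence behind a named algorithm."""
    moves = NAMED_ALGORITHMS.get(name)
    if moves is None:
        raise KeyError(f"Unknown algorithm name: {name}")
    return " ".join(moves)

print(expand("Sune"))  # R U R' U R U2 R'
```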

When AI enters the loop, this situation changes dramatically. The task of organizing information shifts to the AI, which can align definitions, symbols, and concepts before explaining them. It can propose an optimal learning path based on one’s current understanding and recalibrate the level of detail as needed. As a result, learning efficiency improves to an extraordinary degree.

Key points can be reinforced repeatedly, and review can be structured with awareness of the forgetting curve. Questions that arise mid-process can be fact-checked immediately. Beyond that, a meta-learning perspective becomes available: reflecting on how one learns, identifying synergies with other knowledge areas, and continuously refining learning methods themselves.
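
As a rough illustration of scheduling review with the forgetting curve in mind, the sketch below uses the classic exponential decay model, where estimated retention is e^(-t/s) for elapsed time t and a stability value s that grows with each successful review. Both the decay model and the doubling of stability are simplifying assumptions, not a recommendation of any particular system.

```python
import math

def retention(days_since_review: float, stability: float) -> float:
    """Estimated recall probability under a simple exponential forgetting curve."""
    return math.exp(-days_since_review / stability)

def days_until_due(stability: float, target: float = 0.5) -> float:
    """Days until estimated retention decays to the target level."""
    return -stability * math.log(target)

# Toy schedule: assume each successful review roughly doubles stability.
stability = 2.0
for review in range(1, 5):
    print(f"review {review}: next review in {days_until_due(stability):.1f} days")
    stability *= 2.0
```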

There are, of course, drawbacks. The final responsibility for judging truth still lies with the human. When learning veers in the wrong direction, AI does not provide an inherent ethical brake or value-based correction. In areas such as conspiracy theories, this can accelerate misunderstanding rather than resolve it, potentially deepening social division.

This style of learning also depends heavily on intrinsic motivation. Without actively asking questions and engaging in dialogue, AI offers little value. We have not yet reached a stage where knowledge can simply be installed. The trigger remains firmly on the human side.

Even so, one point is clear. For the act of learning, generative AI is becoming an exceptionally powerful tool. The central question is no longer how to deliver knowledge, but how to arrive at understanding. On that question, AI has already begun to offer practical answers.

Japan as an Information Market and the Computational Power of Local Cities

Financial markets once had clear centers of gravity—New York, London, Hong Kong, Singapore. Each era had its “world’s number-one market,” a place where capital, people, and rules converged. But today’s financial world is fragmented. Regulation and geopolitics have dispersed activity, and the idea of a single location one must watch has nearly disappeared.
If the world seeks a new center, what will it be? I believe the answer is the “information market.”

By information market, I do not mean a marketplace for trading data. It is a composite system: computational power, data, algorithms, the infrastructure that runs them, the people who operate them, and the rules that guarantee trust. When the choice of where to train an AI model—and under which legal and cultural framework to operate it—becomes a source of significant economic value, the information market will rival or surpass the importance of financial markets.

From this perspective, Japan cannot be excluded.
It is a stable rule-of-law nation with minimal risk of arbitrary seizures or retroactive regulations. Its power grid is remarkably reliable, with extremely low outage rates. Natural disasters occur, yet recovery is fast—earning Japan a reputation as a place where “things return to normal.” Additionally, Japan still retains a manufacturing foundation capable of designing and producing hardware, including semiconductors.
Taken together, these characteristics make Japan uniquely qualified as a place to “entrust information.”

Viewed through the lens of an information market, Japan has a legitimate claim to stand at the “center.” Its position, aligned with neither the United States nor China, can be a geopolitical weakness, but it becomes a strength when Japan acts as a neutral infrastructure provider. Japan also has the institutional calm needed to redesign rules around data ownership and privacy. The challenge is that this potential remains constrained by a Tokyo-centric mindset.

A Japanese information market cannot be built by focusing on Tokyo alone.
What is required is a shift in premise: local cities themselves must hold computational power. Until now, the role of local regions was to attract people and companies. From here on, they must be reframed as entities that attract computation and data. This is no longer a competition for population but a competition for information and processing capacity.

Japan has many regions with renewable energy, surplus electricity, and land. Many of them enjoy cooler climates and access to water, which are favorable for cooling infrastructure. With proper planning for disaster risk, these regions can host mid-scale data centers and edge nodes—allowing each locality to own computational power.
This would create a distributed domestic information market that exists alongside, not beneath, Tokyo-centric cloud structures.

For local cities, possessing computational power is not merely about installing servers.
Services such as autonomous driving, drone logistics, and remote medicine depend on ultra-low latency and local trust. With their low population density, stable infrastructure, and well-defined geography, Japan’s regions are ideal real-world testbeds. If the computational layer behind these services resides locally, then each region becomes a site of the information market.

A similar structure appears at the level of individual homes. As I wrote in the 3LDDK article, the idea of embedding small-scale generation and computing into houses transforms residential units into local nodes. When aggregated at the town level, these nodes form clusters; when interconnected across municipalities, they become regional clouds.
Rather than relying entirely on centralized hyperscale clouds, local cities gain autonomy through computational power.

Financial history offers a useful analogy. Financial centers were places where capital, talent, and rules concentrated. Future information markets will concentrate computational power, data, and governance. But unlike finance, information markets will be physically distributed.
Networks of data centers in local cities—linked through invisible wiring—will collectively form a single “Japan Market.” From abroad, this appears not as a dispersed system but as a coherent, trustworthy platform.

The critical question is not “Where should we place data centers?” but “How should we design the system?”
Merely placing servers in local regions is insufficient. Market design must weave together electricity, land, and data flows while clarifying revenue distribution, risk ownership, and governance. Only then can Japan move from being a location for data centers to being the rule-maker of the information market itself.

Japan as an information market, and local cities as holders of computational power—these two visions are, in truth, one picture.
A system in which regions contribute their own compute and their own data, forming a market through federation rather than centralization. Whether Japan can articulate and implement this structure will determine the country’s position over the next decade.
That, I believe, is the question now placed before us.

Redesigning Conversation and the Emergence of a Post-Human Language

As I wrote in the previous article, the idea of a “common language for humans, things, and AI” has been one of my long-standing themes. Recently, I’ve begun to feel that this question itself needs to be reconsidered from a deeper level. The shifts happening around us suggest that the very framework of human communication is starting to update.

Human-to-human conversation is approaching a point where further optimization is difficult. Reading emotions, estimating the other person’s knowledge and cognitive range, and choosing words with care—these processes enrich human culture, yet they also impose structural burdens. I don’t deny the value of embracing these inefficiencies, but if civilization advances and technology accelerates, communication too should be allowed to transform.

Here, it becomes necessary to change perspective. Rather than polishing the API between humans, we should redesign the interface between humans and AI itself. If we move beyond language alone and incorporate mechanisms that supplement intention and context, conversation will shift to a different stage. When AI can immediately understand the purpose of a dialogue, add necessary supporting information, and reinforce human comprehension, the burdens formerly assumed to be unavoidable can dissolve naturally.

Wearing devices on our ears and eyes is already a part of everyday life. Sensors and connected objects populate our environments, creating a state in which information is constantly exchanged. What comes next is a structure in which these objects and AI function as mediators of dialogue, coordinating interactions between people—or between humans and AI. Once mediated conversation becomes ordinary, the meaning of communication itself will begin to change.

Still, today’s human–AI dialogue is far from efficient. We continue to use natural language and impose human-centered grammar and expectations onto AI, paying the cognitive cost required to do so. We do not yet fully leverage AI’s capacity for knowledge and contextual memory, nor have we developed language systems or symbolic structures truly designed for AI. Even Markdown, while convenient, is simply a human-friendly formatting choice; the semantic structure AI might benefit from is largely absent. Human and AI languages could in principle be designed from completely different origins, and within that gap lies space for a new expressive culture beyond traditional “prompt optimization.”
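
As a purely hypothetical illustration of what machine-oriented structure could look like, compare a free-form prompt with a message that carries intent, constraints, and context explicitly. Every field name below is invented for the example; no existing standard is implied.

```python
import json

# A free-form, human-style request: the AI must infer intent, constraints,
# and context from prose alone.
prose_prompt = "Could you look over this paragraph and make it a bit clearer?"

# The same request with intent and context made explicit. Every field name
# is hypothetical; the point is that structure, not wording, carries meaning.
structured_request = {
    "intent": "revise_text",
    "constraints": {"preserve_meaning": True, "max_length_change": 0.1},
    "context": {"audience": "general readers", "register": "blog"},
    "payload": "the paragraph to revise goes here",
}

print(prose_prompt)
print(json.dumps(structured_request, ensure_ascii=False, indent=2))
```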

The most intriguing domain is communication that occurs without humans—between AIs, or between AI and machines. In those spaces, a distinct communicative culture may already be emerging. Its speed and precision likely exceed human comprehension, similar to the way plants exchange chemical signals in natural systems. If such a language already exists, our task may not be to create a universal language for humans, but to design the conditions that allow humans to participate in that domain.

How humans will enter the new linguistic realm forming between AI and machines is an open question. Yet this is no longer just an interface problem; it is part of a broader reconstruction of social and technological civilization. In the future, conversation may not rely on “words” as sound, but on direct exchanges of understanding itself. That outline is beginning to come into view.

A Common Language for Humans, Machines, and AI

Human communication still has room for improvement. In fact, it may be one of the slowest systems to evolve. The optimal way to communicate depends on the purpose—whether to convey intent, ensure accuracy, share context, or express emotion. Even between people, our communication protocols are filled with inefficiencies.

Take the example of a phone call. The first step after connecting is always to confirm that audio is working—hence the habitual “hello.” That part makes sense. But what follows often doesn’t. If both parties already know each other’s numbers, it would be more efficient to go straight to the point. If it’s the first time, an introduction makes sense, but when recognition already exists, repetition becomes redundant. In other words, if there were a protocol that could identify the level of mutual recognition before the conversation begins, communication could be much smoother.

Similar inefficiencies appear everywhere in daily life. Paying at a store, ordering in a restaurant, or getting into a taxi you booked through an app—all of these interactions involve unnecessary back-and-forth verification. The taxi example is especially frustrating. As a passenger, you want to immediately state your reservation number or name to confirm your identity. But the driver, trained for politeness, automatically starts with a formal greeting. The two signals overlap, the identification gets lost, and eventually the driver still asks, “May I have your name, please?” Both sides are correct, yet the process is fundamentally flawed.

The real issue is that neither side knows the other’s expectations beforehand. Technically, this problem could be solved easily: automate the verification. A simple touch interaction or, ideally, a near-field communication system could handle both identification and payment instantly upon entry. In some contexts, reducing human conversation could actually improve the experience.
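
A minimal sketch of that idea, with entirely hypothetical message types: a short handshake establishes who each party is and why the interaction is happening before a single word is spoken, so the conversation can begin from shared state. No real payment system or NFC stack is implied.

```python
from dataclasses import dataclass

# Hypothetical handshake: the rider's device presents a booking reference,
# the vehicle confirms the match, and payment authorization rides on the
# same exchange. No real protocol, NFC stack, or payment API is implied.

@dataclass
class Hello:
    booking_ref: str
    rider_token: str  # placeholder for whatever credential the device holds

@dataclass
class Ack:
    matched: bool
    payment_authorized: bool

def vehicle_handle(hello: Hello, expected_ref: str) -> Ack:
    matched = hello.booking_ref == expected_ref
    # A real system would run payment authorization as its own step.
    return Ack(matched=matched, payment_authorized=matched)

ack = vehicle_handle(Hello(booking_ref="R-1234", rider_token="token-placeholder"),
                     expected_ref="R-1234")
print(ack)  # Ack(matched=True, payment_authorized=True)
```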

This leads to a broader point: the need for a shared language not only between people but also between humans, machines, and AI. At present, no universal communication protocol exists among them. Rather than forcing humans to adapt to digital systems, we should design a protocol that enables mutual understanding between humans and those systems. By implementing such a system at the societal level, communication between humans and AI could evolve from guesswork into trust and efficiency.

Ultimately, the most effective form of communication is one that eliminates misunderstanding—regardless of who or what is on the other end. Whether through speech, touch, or data exchange, what we truly need is a shared grammar of interaction. That grammar, still emerging at the edges of design and technology, may become the foundation of the next social infrastructure.

The Age of the AI Home

In the age of AI, the idea of what a home is will change fundamentally. As humans begin to coexist with artificial intelligence, houses may need to include small power generators or even miniature data centers. Computing power, like electricity or water, will become part of the essential infrastructure built into everyday living spaces.

Imagine a home with a living room, a dining room, and a data room. Such a layout could become commonplace. A dedicated space for AI, or for data itself, might naturally appear in architectural plans. It could be on the rooftop, underground, or next to the bedroom. Perhaps even the family altar—once a spiritual repository of ancestral memory—could evolve into a private archive where generations of personal data are stored and shared.

Either way, we will need far more computing power at the edge. Every household could function as a small node, collectively forming a distributed computational network across neighborhoods. A society that produces and consumes both energy and compute locally may begin with the home as its basic unit.

Still, this is a vision built on the inefficiencies of today’s AI infrastructure. As models become more efficient and require fewer resources, even small-scale home data centers might disappear. In their place, countless connected devices could collaborate to form an intelligent mesh that links homes and cities into a single network. At that point, a house would no longer just be a space to live—it would be a space where information itself resides.

The idea of an “AI-ready home,” one equipped with its own computing and energy systems, may be a symbol of this transition. It represents a moment when the boundary between living space and computational space begins to blur, and the household itself becomes a unit of intelligence.

Rethinking Tron

Perhaps Tron is exactly what is needed right now.
I had never looked at it seriously before, but revisiting its history and design philosophy makes it clear that many of its principles align with today’s infrastructure challenges.
Its potential has always been there—steady, consistent, and quietly waiting for the right time.

Background

Tron was designed around the premise of computation that supports society from behind the scenes.
Long before mobile and cloud computing became common, it envisioned a distributed and cooperative world where devices could interconnect seamlessly.
Its early commitment to open ecosystem design set it apart, and while its visible success in the consumer OS market was limited, its adoption as an invisible foundation continued to grow.

The difficulty in evaluating Tron has always stemmed from this invisibility.
Its success accumulated quietly in the background, sustaining “systems that must not stop.”
The challenge has never been technological alone—it has been how to articulate the value of something that works best when unseen.

Why Reevaluate Tron Now

The rate at which computational capability is sinking into the social substrate is accelerating.
From home appliances to industrial machines, mobility systems, and city infrastructure, the demand for small, reliable operating systems at the edge continues to increase.
Tron’s core lies in real-time performance and lightweight design.
It treats the OS not as an end but as a component—one that elevates the overall reliability of the system.

Its focus has always been on operating safely and precisely in the field, not just in the cloud.
The needs that Tron originally addressed have now become universal, especially as systems must remain secure and maintainable over long lifespans.

Another reason for its renewed relevance lies in the shifting meaning of “open.”
By removing licensing fees and negotiation costs, and by treating compatibility as a shared social contract, Tron embodies a practical model for the fragmented IoT landscape.
Having an open, standards-based domestic option also supports supply chain diversity—a form of strategic resilience.

Current Strengths

Tron’s greatest strength is that it does not break in the field.
It has long been used in environments where failure is not tolerated—automotive ECUs, industrial machinery, telecommunications infrastructure, and consumer electronics.
Its lightweight nature allows it to thrive under cost and power constraints while enabling long-term maintenance planning.

The open architecture is more than a technical advantage.
It reduces the cost of licensing and vendor lock-in, helping organizations move decisions forward.
Its accessibility to companies and universities directly contributes to talent supply stability, lowering overall risks of deployment and long-term operation.

Visible Challenges

There are still clear hurdles.
The first is recognition.
Success in the background is difficult to visualize, and in overseas markets Tron faces competition from ecosystems with richer English documentation and stronger commercial support.
To encourage adoption, it needs better documentation, clearer support structures, visible case studies, and accessible community pathways.

The second is the need to compete as an ecosystem, not merely as an OS.
Market traction requires more than technical superiority.
Integration with cloud services, consistent security updates, development tools, validation environments, and production support must all be presented in an accessible, cohesive form.
An operational model that assumes continuous updating is now essential.

Outlook and Repositioning

Tron can be repositioned as a standard edge OS for the AIoT era.
While large-scale computation moves to the cloud, local, reliable control and pre-processing at the edge are becoming more important.
By maintaining its lightweight strength while improving on four fronts—international standard compliance, English-language information, commercial support, and educational outreach—the landscape could shift considerably.

Rethinking Tron is not about nostalgia for a domestic technology.
It is a practical reconsideration of how to design maintainable infrastructure for long-lived systems.
If we can balance invisible reliability with visible communication, Tron’s growth is far from over.
What matters now is not the story of the past, but how we position it for the next decade.

The Need for a Self-Driving License

After AT-only licenses, the next step we may need is a “self-driving license.”

Recently, I rented a gasoline-powered car for the first time in a while. It was an automatic model, but because I was unfamiliar with both the vehicle and the driving environment, the experience was far more stressful than I expected. Having become used to driving an EV equipped with autonomous features, I found the act of operating everything manually—with my own judgment and physical input—strangely primitive.

When the gear is shifted to drive, the car starts moving on its own. A handbrake must be engaged separately, and the accelerator must be pressed continuously to keep moving. Every stop requires the brake, every start requires a shift of the foot back to the accelerator, and even the turn signal must be turned off manually. I was reminded that this entire system is designed around the assumption that the human body functions as the car’s control mechanism.

I also found myself confused by actions that used to be second nature—starting the engine, locking and unlocking the door with a key. What once seemed natural now feels unnecessary. There are simply too many steps required before a car can even move. Press a button, pull a lever, step on a pedal, turn a wheel. The process feels less like operating a machine and more like performing a ritual.

From a UX perspective, this reflects a design philosophy stuck between eras. The dashboard is filled with switches and meters whose meanings are not immediately clear. Beyond speed and fuel levels, how much information does a driver actually need? The system relies on human judgment, but in doing so, it also introduces confusion.

When driving shifted from manual to automatic, the clutch became obsolete. People were freed from unnecessary complexity, and driving became accessible to anyone. In the same way, in an age where autonomous driving becomes the norm, pressing pedals or turning a steering wheel will seem like relics of a bygone era. We are moving from a phase where machines adapt to humans to one where humans no longer need to touch the machines at all.

Yet driver licensing systems have not caught up with this change. Until now, a license has certified one’s ability to operate a vehicle. But in the future, what will matter is the ability to interact with the car, to understand its systems, and to intervene safely when needed. It will no longer be about physical control, but about comprehension—of AI behavior, of algorithmic decision-making, and of how to respond when something goes wrong.

When AT-only licenses were introduced, many drivers were skeptical about removing the clutch. But over time, that became the standard, and manual transmissions turned into a niche skill. Likewise, if a “self-driving license” is introduced in the near future, pressing pedals may come to be viewed as a legacy form of driving—something from another era.

The evolution of driving technology is, at its core, the gradual separation of humans from machines. A self-driving license would not be a qualification to control a vehicle, but a literacy certificate for coexisting with technology. It would mark the shift from moving the car to moving with the car. Such a change in licensing might define how transportation itself evolves in the next generation.

The Strategic Value of Compute Resources in the OpenAI–AMD Partnership

The expansion of generative AI has entered a stage where progress is determined not by model novelty but by the ability to secure and operate compute resources. The multi-year, multi-generation alliance between OpenAI and AMD clearly reflects this structure. It is no longer a simple transactional deal but a framework that integrates capital, supply, power, and implementation layers into a mechanism for mutual growth—signaling a shift toward scale as a built-in assumption.

Forecasting Power Demand

The backbone of this partnership is gigawatt-class compute capacity. At an initial 1 GW, scaling to several gigawatts, data-center construction becomes a matter of regional grid planning rather than of individual projects. The key factors are not only peak power draw but sustained supply reliability and effective PUE, including heat rejection. AI training workloads behave as constant loads rather than spikes, which makes grid stability and redundancy in auxiliary systems the critical bottlenecks.
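
To make the scale concrete, a back-of-the-envelope calculation with assumed numbers: a sustained 1 GW of IT load at an effective PUE of 1.3 implies roughly 1.3 GW of facility draw and on the order of 11 TWh per year, before any redundancy margin.

```python
# Back-of-the-envelope sizing for a sustained AI training load.
# All inputs are assumptions for illustration, not figures from the deal.

it_load_gw = 1.0          # sustained IT load in gigawatts
pue = 1.3                 # effective PUE, including cooling and heat rejection
hours_per_year = 8760

facility_gw = it_load_gw * pue                      # PUE = facility power / IT power
annual_twh = facility_gw * hours_per_year / 1000.0  # GWh to TWh

print(f"Facility draw: {facility_gw:.2f} GW")
print(f"Annual energy: {annual_twh:.1f} TWh (constant-load assumption)")
```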

Model evolution continues to expand overall electricity demand, offsetting gains in performance per watt. Even as semiconductor generations improve efficiency, larger parameter counts, bigger datasets, and multimodal preprocessing and inference push consumption upward. Consequently, capital investment shifts its center of gravity from racks to civil-engineering and electrical domains that include cooling infrastructure.

Structural Issues in the Compute Market

Even with AMD expanding deployment options, the NVIDIA-dominated market faces other bottlenecks—optical interconnects, advanced HBM, and CoWoS packaging capacity among them. Rising rack-level heat density makes the shift from air to liquid cooling irreversible, tightening location constraints for data centers. The result is a conversion lag: capital cannot instantly be turned into usable compute capacity.

A further concern is geopolitical risk. Heightened global tensions and export controls can fragment manufacturing and deployment chains, triggering cascading delays and redesigns.

OpenAI’s Challenges

The first challenge for OpenAI is absorbing and smoothing exponentially growing compute demand. Running research, productization, and APIs concurrently complicates capacity planning across training and inference clusters, making the balance between model renewal and existing services a critical task.

The second is diversification away from a single vendor. Heavy reliance on NVIDIA has caused supply bottlenecks and eroded pricing flexibility. Sharing the roadmap with AMD therefore carries both optimization and procurement significance.

The third lies in capital structure and governance. While drawing in vast external commitments, OpenAI must maintain neutrality and research agility, requiring careful contract architecture to coordinate partnerships. The episode of its past internal split serves as a reminder: when capital providers bring divergent decision criteria, alignment of research agendas becomes a challenge.

AMD’s Challenges

AMD’s bottlenecks are manufacturing capacity and the software ecosystem. Its latest designs can compete technically, but to offer a developer experience rivaling the PyTorch/CUDA world, it must advance runtimes, compilers, kernels, and distributed-training toolchains. Hardware aspects such as HBM supply, packaging yield, and thermal management will define both delivery schedules and operational stability.

A second challenge is converting the co-developed results with OpenAI into broader market value. If collaboration remains confined to a single project or product, dependency risk increases. Generalizing and scaling the gains to other markets will be essential.

Strategic Intent of the Partnership

At the surface, the intent is clear: OpenAI seeks secure and diversified compute resources, while AMD seeks simultaneous credibility and demand. Structurally, however, there is a deeper layer—integrating models, data, compute, and capital into a unified flow; accelerating GPU design and supply cycles; and locking in diversified power and site portfolios early. In effect, both sides embed their respective challenges into a forward-loaded roadmap that reduces uncertainty in supply and financing.

Scheme Design

The distinctive feature is clause design that firmly enforces reciprocal commitment. Large take-or-pay volumes and facility milestones are tied to capital returns, linking hardware success directly to customer benefit. For suppliers, it secures quantity certainty and pricing floors, easing investment decisions. For buyers, it strengthens influence over technical specifications and workload fit. Financially, it helps smooth extreme swings in cash flow.

Difference from NVIDIA’s Model

Where NVIDIA’s massive deal channels capital from supplier to buyer—who then spends it back on the supplier—the AMD structure grants equity options from supplier to buyer, while the buyer guarantees long-term procurement. Both align incentives, but the direction of capital flow and degree of governance leverage differ.

NVIDIA’s model gives suppliers greater control and restricts buyers through capital conditions. AMD’s allows buyers to become future shareholders, giving them indirect influence over the supplier’s technical priorities.

Compute-ism

In the AI era, the value model ultimately converges on a single question: who can operate how much compute, on what power, at what efficiency, and under what governance. Partnerships with Microsoft, NVIDIA, AMD, and Oracle all stem from that premise. Compute capacity has become currency, conduit, and foundation of sovereignty. The choice of compute space—including power source, jurisdiction, ethical stance, and data lineage—extends from corporate strategy into institutional design.

From this viewpoint, true competitiveness lies in projects that integrate long-term cloud commitments, dedicated power and cooling, secured land, and supply-chain finance. Price or FLOPS comparisons alone no longer define advantage.

Impact on the Hardware and Technology Roadmap

Meeting the insatiable demand for compute requires clear priorities: larger memory capacity, lower latency, more efficient cooling, and better performance per watt. GPUs will continue evolving accordingly, scaling HBM capacity and bandwidth, advancing interconnects, and optimizing storage and data-loading paths. Opportunities for improvement remain endless.

On the software side, the question is how close AMD’s compilers and runtimes can come to zero-friction while preserving backward compatibility with PyTorch and JAX. In an expanding market, feeding operational feedback into architecture along the shortest path will decide generational performance gaps. Even abundant hardware fails to convert into market value without matching software optimization.
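
One concrete meaning of low friction is that existing training code should not care which vendor's accelerator it runs on. Below is a minimal sketch of device-agnostic PyTorch, relying on the fact that ROCm builds of PyTorch expose AMD GPUs through the familiar torch.cuda interface; the model and sizes are placeholders.

```python
import torch
import torch.nn as nn

# Device-agnostic setup: ROCm builds of PyTorch expose AMD GPUs through the
# same torch.cuda interface, so one code path can cover both vendors.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(1024, 1024).to(device)   # placeholder model
x = torch.randn(8, 1024, device=device)    # placeholder batch

with torch.no_grad():
    y = model(x)

print(f"Ran on {device}, output shape {tuple(y.shape)}")
```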

Power, cooling, and site strategy should also be treated as integral parts of the roadmap. Layouts premised on liquid immersion, integration of heat recovery with district systems, hybridization of renewables and storage, and adaptive scheduling to power demand—all these “Watt and Bit” linkages define the real unit cost of compute. Chip miniaturization alone will not sustain the next decade.

Conclusion

The OpenAI–AMD partnership marks the arrival of an era where capital, supply, power, and software are designed as a single system around compute resources. Under compute-ism, victory depends not on individual products but on ecosystem maturity. Market velocity will accelerate, yet the fundamentals remain simple: which power, in which place, on which chip, through which code, under which governance. The alliances that design these layers early, deeply, and broadly will draw the next map of the AI age.

Reconsidering NFTs and the Architecture of Trust in the Generative Era

NFTs were once treated as the symbol of digital art. Their mechanism of proving “uniqueness” seemed like a fresh counterbalance to the infinite reproducibility of digital data. The idea that the proof of existence itself could hold value, rather than the artwork alone, was indeed revolutionary.

However, NFTs were quickly consumed by commercial frenzy. The market expanded without a real understanding of the underlying concept, and countless pieces of digital waste were created. Detached from the artistic context, endless collections were generated and forgotten. That phenomenon reflected not the flaw of the technology itself, but the fragility of human desire driven by trends.

Perhaps the era simply arrived too early. Yet now, with the rise of generative AI, the situation feels different. Images, voices, and videos are produced from minimal prompts, and distinguishing authenticity has become increasingly difficult. In this age where the boundary between the real and the synthetic is fading, the need to verify who created what, and when, is once again growing.

AI-generated content is closer to a generation log than a traditional work of authorship. To trace its countless derivatives, to record their origins and transformations, we need a new system—and the foundational structure of NFTs fits naturally there. Immutable verification, decentralized ownership, traceable history. These can be redefined not as artistic features, but as mechanisms for ensuring information reliability.
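
A minimal sketch of that provenance idea, independent of any particular chain or token standard: hash the content, record who generated it, when, and what it was derived from, and let the record rather than the file carry the claim of origin. The record format below is invented for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: bytes, creator: str, derived_from: str | None = None) -> dict:
    """Build a minimal, chain-agnostic provenance record for a piece of content."""
    return {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "derived_from": derived_from,  # hash of the parent generation, if any
    }

original = provenance_record(b"generated video bytes", creator="model:example-v1")
remix = provenance_record(b"edited video bytes", creator="user:alice",
                          derived_from=original["content_hash"])

print(json.dumps([original, remix], indent=2))
```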

Watching models like Sora 2 makes that necessity clear. When generated outputs become so real that human-made and AI-made works are indistinguishable, society begins to search again for a sense of uniqueness—not in aesthetic terms, but in informational and social terms. NFTs may quietly return, not as speculative art tokens, but as the infrastructure of provenance and trust.

The meaning of a technology always changes with its context. NFTs did not end as a mere symbol of the bubble. They might have been the first structural answer to the question that defines the AI era: what does it mean for something to be genuine? Now may be the time to revisit that architecture and see it anew.

A Society Where APIs Become Unnecessary

Looking back over the past few months, I realize just how deeply I’ve fused my daily life with AI. Most of my routine tasks are already handled alongside it. Research, small repetitive work, even writing code can now be delegated. The most striking change is that tools built solely for my own efficiency are now fully automated by AI.

What’s especially fascinating is that even complex tasks—such as online banking operations—can now be automated in ways tailored specifically to an individual’s needs. For example, importing bank statements, categorizing them based on personal rules, and restructuring them as accounting data. What once required compliance with the frameworks imposed by financial institutions or accounting software can now be achieved simply by giving natural language instructions to AI.
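
As a sketch of what such personal rules can look like in practice, assuming a hypothetical CSV export with date, description, and amount columns; the keywords and categories are deliberately idiosyncratic.

```python
import csv

# Personal categorization rules, keyword to category. Deliberately
# idiosyncratic; nothing here needs to generalize beyond one person.
RULES = {
    "grocery": "food",
    "rail": "transport",
    "cloud": "infrastructure",
}

def categorize(description: str) -> str:
    lowered = description.lower()
    for keyword, category in RULES.items():
        if keyword in lowered:
            return category
    return "uncategorized"

# Assumed layout of a hypothetical export: date, description, amount columns.
with open("statement.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        print(row["date"], categorize(row["description"]), row["amount"])
```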

The key here is that, unlike commercial products, there’s no need to satisfy “universality” for everyone. Personal quirks, rules that only I understand, exceptions that would never justify engineering resources for a mass-market service—these can all be captured and executed by AI. What used to be dismissed as “too niche” is now fully realizable at the individual level. Being freed from the constraints of general-purpose design has enormous value.

Even more revolutionary is the fact that APIs are no longer necessary. Traditionally, automation was possible only when a service explicitly exposed external connections. Now, AI can interact with data the same way a human would—through a browser or app interface. This means services don’t need to be designed to “export data.” AI can naturally capture it and fold it into personal workflows. From the user’s perspective, this allows data to flow freely, regardless of the provider’s intentions.
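
As one hedged illustration of interacting with a service the way a human would, the sketch below drives a browser with Playwright instead of calling an API. The URL and selectors are placeholders, a real site would require a login flow, and this should only be done where the provider's terms allow it.

```python
from playwright.sync_api import sync_playwright

# Sketch: drive the same web interface a human would use, instead of an API.
# URL and selectors are placeholders; real sites require a login flow and
# should only be automated where their terms of service permit it.
with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com/statements")
    page.click("text=Download CSV")        # placeholder selector
    rows = page.locator("table tbody tr")  # placeholder selector
    print(f"Found {rows.count()} statement rows")
    browser.close()
```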

As I noted in my piece about Tesla Optimus, replacing parts of society without changing the interface itself will become a defining trend. AI exemplifies this by liberating usage from the design logic of providers and putting it back into the hands of users.

This structure leads to a reversal of power. Until now, providers dictated how services could be used. With AI as the intermediary, users can decide how they want to handle data. Whether or not a provider offers an API becomes irrelevant—users can route data into their own automation circuits regardless. At that moment, control shifts fully to the user side.

And this isn’t limited to banking. Any workflow once constrained by a provider’s convenience can now be redesigned by individuals. Subtle personal needs can be incorporated, complexity erased, external restrictions bypassed. The balance of power over data—long held by providers—is starting to wobble.

Of course, AI itself is still transitional. “AI” is not one thing but many, each with distinct characteristics. At present, people must choose and balance among them: cloud-hosted AI, private local AI, or in-house AI running on proprietary data centers. Each has strengths and weaknesses, and from the perspective of data sovereignty, careful selection is essential.

Still, living with multiple AIs simultaneously brings a sense of parallelization to daily work. Different tasks with different contexts can be run side by side, allowing me to stay focused on the most important decisions. Yet at the same time, because AI increasingly performs the research feeding those decisions, the line between my own will and AI’s influence grows blurred. That ambiguity is part of what makes this fusion fascinating—and also why the health of AI systems and the handling of personal data have become more critical than ever before.
