Categories
Asides

Nvidia Is Copying the Earth

Eric Schmidt of Google once said it would take 300 years to crawl and index all the digital information in the world. Thirty years later, Google has collected, structured, and ranked the planet’s data, establishing itself as the central hub of global information.
This process has been one of humanity’s long attempts to digitally capture the sum of its knowledge.

Around the same time, Facebook began copying humanity itself. It targeted not only personal attributes and relationships but even private exchanges, mapping them into a social graph that visualized how people are connected.
If Google drew the “map of knowledge,” Facebook drew the “map of human relationships.”

AI has bloomed on top of these vast copies. What AI seeks is not mere volume of data, but the ability to analyze accumulated information and transform it into insight. Value lies in that process of interpretation. For this reason, possessing more data no longer guarantees advantage—what matters now is the ability to understand and utilize it.

So, what becomes the next battleground?
After the maps of knowledge and human connection, what is the next domain to be replicated? One emerging answer lies in Nvidia’s current approach.

Nvidia is attempting to copy the Earth itself. Whether we call it a Digital Twin or a Mirror World, the company is trying to reconstruct the planet’s structure and dynamics within its own ecosystem.
It aims to simulate the movements of the physical world and overlay them with digital laws. This marks a departure from the information-based replication of earlier internet companies, moving instead toward the duplication of reality itself.

What lies ahead is a complete digital copy of Earth—and a new industrial ecosystem built upon it. In Nvidia’s envisioned world, cities, climates, and economies all become entities that can be simulated. Within that digital Earth, AI learns, reasons, and reconstructs. Humanity has moved from understanding the planet to recreating it.

Yet if we wish to honor diversity and generate more possibilities in parallel, what we will need are not one, but countless “worlds.” Rather than imitating a single correct reality, AI could generate multiple “world lines” that diverge under different conditions. We can imagine a future where AI compares these world lines and derives the optimal outcome. Such a vision would require an immense foundation of computational power.

This is no longer a contest of information processing alone but a struggle over resources themselves. The question becomes how efficiently we can transform energy into computation. The industries that produce semiconductors and the infrastructures that generate and distribute energy will form the next field of competition.
Nvidia’s challenge is not about data but about the “replication of worlds”—a new scale of technological struggle, an attempt to rewrite civilization with the Earth itself as the stage.


Will SoftBank Acquire Intel?

I once wondered about this question. Later, SoftBank announced a new round of investment, confirming that the company was indeed moving in that direction. I want to record here what I was thinking before that announcement. My purpose is to leave a trace that will allow me to compare those thoughts with what eventually happens.

The Background and Current State of the ARM Acquisition

SoftBank’s 2016 acquisition of ARM was a clear declaration that the company intended to anchor itself at the upstream of the semiconductor value chain — intellectual property for chip design. ARM’s licensing model, built on neutrality and scalability, expanded its reach from mobile devices to IoT, servers, and even supercomputers. While promising to maintain ARM’s neutrality, SoftBank began to tighten integration, emphasizing subsystem offerings and deeper involvement in the server domain. Even after ARM’s re-listing in 2023, it remains the group’s most important asset and the central hub connecting its other investments.

The Next Step: Building Around ARM

SoftBank’s acquisitions have sought to elevate ARM from a mere IP provider into the driving force of an entire ecosystem. The acquisition of Graphcore gave it a foothold in AI accelerators, and the purchase of Ampere brought a practical server-CPU operation under the same umbrella. The combination of ARM’s low-power design philosophy and the data-center scale-out trend offers an alternative optimal point to the traditional x86-centric server market. This configuration directly connects to the later thought experiment concerning Intel.

The Distance Between SoftBank and Nvidia

SoftBank was once a major Nvidia shareholder, forging a close relationship before the AI boom. The subsequent sale, which forfeited a massive appreciation opportunity, shifted the relationship to one of both collaboration and competition. While joint projects in Japan’s AI and telecommunications infrastructure continue, SoftBank’s push to cultivate multiple in-house AI-chip initiatives can be read as an attempt to challenge Nvidia’s dominance. Nvidia, for its part, is reinforcing its own vertical integration with ARM-based CPUs and NVLink interconnects. The two paths intersect but ultimately lead toward different goals.

The AI Investment Strategy Centered on OpenAI

SoftBank’s massive commitment to OpenAI, its infrastructure partnerships with Oracle and others, and joint ventures in Japan all signal a plan to bring the software core of AI under its orbit while pre-securing compute resources. In the AI era, supremacy converges not on algorithms but on the ability to govern and interconnect power, semiconductors, and capital. SoftBank aims to tie the scale of AI itself to its balance sheet, controlling both design IP and the physical data-center layer.

The Intel Hypothesis

How might Intel fit into this circuitry? Market stagnation, restructuring pressures, and the separation of manufacturing from products have fueled repeated speculation about acquisitions and equity partnerships. Reports suggested that ARM showed interest in Intel’s product division but talks fell through, and negotiations over AI-chip manufacturing also collapsed over production-capacity terms. There is no evidence of a formal buyout attempt, but traces of exploratory engagement remain. The core question is simple: why would SoftBank want to absorb Intel, and through what realistic path could it happen?

Examining Strategic Alignment

ARM is an IP-driven entity without manufacturing. Intel possesses vast fabrication capacity and an x86 franchise but lags in mobile and power-efficient contexts. Combined, they could span both CPU architectures, integrating from data centers to edge devices with comprehensive design and supply capabilities. Within the AI infrastructure stack, they could encompass CPUs, AI accelerators, memory, interconnects, and fabs. The logic is elegant — and access to CHIPS Act subsidies and advanced fabrication would offset reliance on external foundries.

Yet elegant logic does not guarantee practical feasibility. For foreign capital to take control of Intel — an American strategic asset — would run headlong into political and regulatory barriers. As the U.S. Steel precedent showed, national interest can override regulatory clearance. On antitrust grounds, even the perception that ARM’s neutrality might erode would provoke fierce resistance. The industry views ARM as common infrastructure; any integration skewed toward a single group’s advantage would meet opposition from all sides. Add financial strain and the operational burden of running manufacturing, and a full acquisition becomes implausible.

Pragmatic Alternatives

If full control is closed off, distributed strategies remain. Partial equity participation, co-design projects, long-term manufacturing contracts, and multinational consortiums all represent workable routes. ARM can enhance its relevance through subsystem design and joint optimization; Ampere and Graphcore can bring their products to market; Rapidus and overseas foundries can diversify manufacturing access. Rather than outright control, strengthening its role as a hub connecting specifications, capital, and power supply aligns with SoftBank’s pragmatic style.

Re-Examining the Risks

A U.S. Steel–type political blockade is entirely plausible. Cross-border semiconductor investments fall squarely within national-security and industrial-policy oversight, entangling legislators, unions, and state governments. Antitrust risks are also significant. If ARM’s neutrality were questioned, Apple, Qualcomm, Microsoft, Amazon, Google, and Nvidia would all lobby against the deal. Conflicts with existing players would be inevitable: Nvidia is consolidating independence across CPUs and GPUs, while Apple closely monitors ARM’s trajectory, vital to its own SoC strategy. The practical route to conflict avoidance lies in incentive structures that distribute value across stakeholders and in maintaining transparent, non-discriminatory licensing.

Japan’s Policy Landscape and Points of Contact

SBI’s domestic memory initiative has shifted focus from a failed PSMC alliance toward cooperation with the SK group. Subsidy frameworks remain, and Japan continues exploring ways to restore local memory capacity. With domestic AI firms such as PFN in the mix, a new ecosystem centered on AI-specific memory demand could emerge. Meanwhile, Rapidus aims for 2-nm logic mass production and is collaborating with Tenstorrent to capture edge-AI demand. SoftBank, a shareholder, holds the option to align ARM or Ampere designs with domestic manufacturing. The interplay between national and private capital thus serves SoftBank as both risk hedge and policy alignment mechanism.

Managing Relationships with Nvidia and Apple

Nvidia represents both partner and competitor. Joint efforts in Japan’s AI and 5G infrastructure coexist with SoftBank’s independent AI-chip initiatives and ARM’s expansion, both of which could alter long-term market dynamics. For Apple, ARM’s neutrality and licensing stability are paramount. Any perception that ARM’s roadmap tilts toward proprietary advantage could chill relations. Maintaining openness in software toolchains, transparency in roadmaps, and a balance between differentiation and neutrality will be key.

The Question That Remains

Even if an acquisition is unrealistic, why does the idea keep resurfacing? The answer is simple: in the AI era, value creation is migrating toward the convergence of compute resources, power, and capital. CPU architectures, advanced fabs, AI accelerators, memory, interconnects, cloud infrastructure, and generative AI platforms — whoever orchestrates these elements will define the next decade. SoftBank holds capital, IP, and market reach, but lacks proprietary access to manufacturing and power. That is why Intel enters the frame. Yet being in view and being within reach are two different things.

Conclusion

Even if the path to a full Intel acquisition is closed, SoftBank still has room to build equivalent capability through distributed partnerships. The real question is how to integrate power sources, manufacturing ecosystems, architectures, and capital structures into a coherent design. This is no longer about a one-time transaction but about the ability to interlink policy, capital, and technology. When revisited years from now, this speculation may not look like a rumor but rather an early thought experiment on the reconfiguration of power in the age of compute sovereignty.


The Strategic Value of Compute Resources in the OpenAI–AMD Partnership

The expansion of generative AI has entered a stage where progress is determined not by model novelty but by the ability to secure and operate compute resources. The multi-year, multi-generation alliance between OpenAI and AMD clearly reflects this structure. It is no longer a simple transactional deal but a framework that integrates capital, supply, power, and implementation layers into a mechanism for mutual growth—signaling a shift toward scale as a built-in assumption.

Forecasting Power Demand

The backbone of this partnership is gigawatt-class compute capacity. An initial 1 GW, scaling to several gigawatts, links data-center construction directly to regional grid planning rather than individual projects. The key factors are not only peak power draw but sustained supply reliability and effective PUE including heat rejection. AI training workloads behave as constant loads rather than spikes, making grid stability and redundancy in auxiliary systems critical bottlenecks.
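As a rough illustration of the scale involved, the arithmetic behind gigawatt-class planning fits in a few lines. The figures below (a 1 GW campus, a PUE of 1.3) are hypothetical placeholders, not terms of the deal:

```python
def annual_energy_twh(facility_gw: float, utilization: float = 1.0) -> float:
    """Annual electricity consumption in TWh for a near-constant load."""
    hours_per_year = 8760
    return facility_gw * utilization * hours_per_year / 1000


def it_power_gw(facility_gw: float, pue: float) -> float:
    """Power actually available to IT equipment, after facility overhead (PUE)."""
    return facility_gw / pue


# Hypothetical figures: a 1 GW campus at PUE 1.3, run as a constant load.
print(round(annual_energy_twh(1.0), 2))  # 8.76 TWh per year
print(round(it_power_gw(1.0, 1.3), 2))   # ~0.77 GW left for compute itself
```

Because training behaves as a constant load, the utilization factor stays near 1.0, which is precisely why grid planning, rather than project-level provisioning, becomes the binding constraint.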

Model evolution continues to expand overall electricity demand, offsetting gains in performance per watt. Even as semiconductor generations improve efficiency, larger parameter counts, bigger datasets, and multimodal preprocessing and inference push consumption upward. Consequently, capital investment shifts its center of gravity from racks to civil-engineering and electrical domains that include cooling infrastructure.

Structural Issues in the Compute Market

Even with AMD expanding deployment options, the NVIDIA-dominated market faces other bottlenecks—optical interconnects, advanced HBM, and CoWoS packaging capacity among them. Rising rack-level heat density makes the shift from air to liquid cooling irreversible, tightening location constraints for data centers. The result is a conversion lag: capital cannot instantly be turned into usable compute capacity.

A further concern is geopolitical risk. Heightened global tensions and export controls can fragment manufacturing and deployment chains, triggering cascading delays and redesigns.

OpenAI’s Challenges

The first challenge for OpenAI is absorbing and smoothing exponentially growing compute demand. Running research, productization, and APIs concurrently complicates capacity planning across training and inference clusters, making the balance between model renewal and existing services a critical task.

The second is diversification away from a single vendor. Heavy reliance on NVIDIA has caused supply bottlenecks and eroded pricing flexibility. Sharing the roadmap with AMD therefore carries both optimization and procurement significance.

The third lies in capital structure and governance. While drawing in vast external commitments, OpenAI must maintain neutrality and research agility, which requires careful contract architecture to coordinate its partnerships. Its past internal split serves as a reminder: when capital providers bring divergent decision criteria, aligning research agendas becomes a challenge.

AMD’s Challenges

AMD’s bottlenecks are manufacturing capacity and the software ecosystem. Its latest designs can compete technically, but to offer a developer experience rivaling the PyTorch/CUDA world, it must advance runtimes, compilers, kernels, and distributed-training toolchains. Hardware aspects such as HBM supply, packaging yield, and thermal management will define both delivery schedules and operational stability.

A second challenge is converting the co-developed results with OpenAI into broader market value. If collaboration remains confined to a single project or product, dependency risk increases. Generalizing and scaling the gains to other markets will be essential.

Strategic Intent of the Partnership

On the surface, the intent is clear: OpenAI seeks secure and diversified compute resources, while AMD seeks simultaneous credibility and demand. Structurally, however, there is a deeper layer—integrating models, data, compute, and capital into a unified flow; accelerating GPU design and supply cycles; and locking in diversified power and site portfolios early. In effect, both sides embed their respective challenges into a forward-loaded roadmap that reduces uncertainty in supply and financing.

Scheme Design

The distinctive feature is clause design that firmly enforces reciprocal commitment. Large take-or-pay volumes and facility milestones are tied to capital returns, linking hardware success directly to customer benefit. For suppliers, it secures quantity certainty and pricing floors, easing investment decisions. For buyers, it strengthens influence over technical specifications and workload fit. Financially, it helps smooth extreme swings in cash flow.
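A minimal sketch of the take-or-pay mechanic described above, with entirely hypothetical volumes and prices, shows how the arrangement gives the supplier a revenue floor while the buyer still pays for any overage:

```python
def take_or_pay_cost(committed_units: float, used_units: float,
                     unit_price: float) -> float:
    """Buyer pays for at least the committed volume, regardless of actual usage."""
    return max(committed_units, used_units) * unit_price


def supplier_revenue_floor(committed_units: float, unit_price: float) -> float:
    """Minimum revenue the supplier can book, easing its investment decision."""
    return committed_units * unit_price


# Hypothetical terms: 100,000 GPU-years committed at $20,000 per GPU-year.
print(take_or_pay_cost(100_000, 60_000, 20_000))   # under-use: pay the floor anyway
print(take_or_pay_cost(100_000, 130_000, 20_000))  # over-use: pay for actual draw
```

The floor is what converts a customer contract into something a supplier can finance against; the equity options and facility milestones layered on top then share the upside back with the buyer.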

Difference from NVIDIA’s Model

Where NVIDIA’s massive deal channels capital from supplier to buyer—who then spends it back on the supplier—the AMD structure grants equity options from supplier to buyer, while the buyer guarantees long-term procurement. Both align incentives, but the direction of capital flow and degree of governance leverage differ.

NVIDIA’s model gives suppliers greater control and restricts buyers through capital conditions. AMD’s allows buyers to become future shareholders, giving them indirect influence over the supplier’s technical priorities.

Compute-ism

In the AI era, the value model ultimately converges on a single question: who can operate how much compute, on what power, at what efficiency, and under what governance. Partnerships with Microsoft, NVIDIA, AMD, and Oracle all stem from that premise. Compute capacity has become currency, conduit, and foundation of sovereignty. The choice of compute space—including power source, jurisdiction, ethical stance, and data lineage—extends from corporate strategy into institutional design.

From this viewpoint, true competitiveness lies in projects that integrate long-term cloud commitments, dedicated power and cooling, secured land, and supply-chain finance. Price or FLOPS comparisons alone no longer define advantage.

Impact on the Hardware and Technology Roadmap

Meeting the insatiable demand for compute requires clear priorities: larger memory space, lower latency, more efficient cooling, higher energy performance. GPUs will continue evolving accordingly—scaling HBM capacity and bandwidth, advancing interconnects, and optimizing storage and data-loading paths. Opportunities for improvement remain endless.

On the software side, the question is how close AMD’s compilers and runtimes can come to zero-friction while preserving backward compatibility with PyTorch and JAX. In an expanding market, feeding operational feedback into architecture along the shortest path will decide generational performance gaps. Even abundant hardware fails to convert into market value without matching software optimization.

Power, cooling, and site strategy should also be treated as integral parts of the roadmap. Layouts premised on liquid immersion, integration of heat recovery with district systems, hybridization of renewables and storage, and adaptive scheduling to power demand—all these “Watt and Bit” linkages define the real unit cost of compute. Chip miniaturization alone will not sustain the next decade.

Conclusion

The OpenAI–AMD partnership marks the arrival of an era where capital, supply, power, and software are designed as a single system around compute resources. Under compute-ism, victory depends not on individual products but on ecosystem maturity. Market velocity will accelerate, yet the fundamentals remain simple: which power, in which place, on which chip, through which code, under which governance. The alliances that design these layers early, deeply, and broadly will draw the next map of the AI age.


Generating Infrastructure

Jensen Huang once said:
The age of designing programs, writing code, and brute-forcing our way through problems is ending. What comes next is the age of sharing problems and generating solutions.

Generative AI creates things. Text. Images. Code.
And lately, I’ve come to feel that this general-purpose ability means it will eventually create infrastructure itself.

Until now, we’ve followed a roughly linear cycle:

  1. Humans design and operate the structure of cities and societies
  2. The resulting industries develop hardware and software
  3. Data is collected and funneled into systems
  4. Protocols, laws, and economic structures are established
  5. And finally, AI is deployed

But from now on, we’ll enter a new cycle led by AI. And at that point, step one may already be beyond the reach of human cognition. AI will generate cities in the mirror world—or in other virtual spaces—and test various models of social design.
It will simulate tax systems, transport networks, education and financial policies. And perhaps, ideally, the solutions that most broadly benefit the public good will be selected and implemented.

That future is already close at hand. We’re entering an age where AI designs semiconductors. An age where AI creates robots in the mirror world. And beyond that, perhaps an age where AI generates entire societal structures.

The word “generative” often carries connotations of improvisation or chaos. But in truth, generative AI excels at inventing structure. Just as in nature, apparent disorder gives way to patterns when seen from a distance. The output of AI may seem arbitrary, but from a high enough view, a kind of logic will likely emerge.

Whether humans can perceive it is another question. If such technologies and the systems to adopt them are introduced into governance, then a different kind of policymaking becomes possible. This won’t be about whether data exists or not. It won’t be about evidence-based metrics. It will be a society where outcomes are implemented because they’ve been verified.

When that time comes, the rules of the game will have changed. And the shape of democracy may no longer remain the same.

Building cities. Designing institutions. Engineering infrastructure. These were once seen as things only humans could do. But a time is coming when those things will be implemented because AI has tested them, and the outcomes were simply better—higher quality, more effective, more just.

What, then, will be the measure of truth? Of maximum happiness? Of the best possible result? And who, if anyone, will be left to decide?


How Nvidia’s Mirror World Is Changing Manufacturing

Watching Nvidia’s latest announcements, I couldn’t help but feel that the world of manufacturing is entering an entirely new phase.

Until now, PDCA cycles in manufacturing could only happen in the physical world.
But that’s no longer the case. We’re entering a time when product development can be simulated in virtual environments—worlds that mirror our own—and those cycles are now run autonomously by AI.

It’s clear that Nvidia intends to make this mirror world its main battlefield.
With concepts like Omniverse and digital twins, the idea is simple: bring physical reality into a digital copy, migrate the entire industrial foundation into that alternative world, and build a new economy on top of Nvidia’s infrastructure.

In that world, prototypes and designs can be tested and iterated in real time, at extreme levels of precision.
Self-driving simulations, factory line optimization, structural analysis of buildings, drug discovery, medical research, education—it’s all happening virtually, without ever leaving the simulation.

The meaning of “making things” is starting to shift.
Before anything reaches the physical world, it will have gone through tens of thousands of iterations in the virtual one—refined, evaluated, and optimized by AI.
We’ve entered a phase where PDCA loops run at hyperspeed in the digital realm, and near-finished products are sent out into reality.

This isn’t just about CG or visualization.
It’s about structures that exist only in data, yet directly affect actions in the physical world.
The mirror world has reached the level of fidelity where it can now be deployed socially.

In this era, I believe Japan’s role becomes even more essential.

No matter how detailed the design, we still need somewhere that can realize it physically, with precision.
In a world where even the slightest error could be fatal, manufacturing accuracy and quality control become the decisive factors.

And that’s exactly where Japan excels.

Things born in simulation will descend into reality.
And the interface between the two—“manufacturing”—is only going to grow in significance.
