
How Much Power Does a GPU Consume?

I tried to compare the power consumption of GPUs in terms that are easier to picture.
This is not a precise comparison, and since it only looks at power consumption, it may lead to misunderstandings regarding heat generation or efficiency.
Still, to get an intuitive sense of how much energy today’s GPUs consume, this kind of simplification can be useful.

Let’s start with something familiar — a household heater.
A typical ceramic or electric heater consumes about 0.3 kilowatts on low and roughly 1.2 kilowatts on high.
We can use this 1.2 kilowatts as a reference point — “one heater running at full power.”

When you compare household appliances and server hardware in the same units, the scale difference becomes more tangible.
The goal here is to visualize that difference.

Power Consumption (Approximate)

Item                               Power Consumption
Household Heater (High)            ~1.2 kW
Server Rack (Conventional)         ~10 kW
Server Rack (AI-Ready)             20–50 kW
NVIDIA H200 (Server)               ~10.2 kW
Next-Generation GPU (Estimated)    ~14.3 kW

A household heater represents the level of power used by common home heating devices.
A conventional server rack, typical through the 2010s, was designed for air-cooled operation with around 10 kilowatts per rack.
In contrast, modern AI-ready racks are built for liquid or direct cooling and can deliver 20–50 kilowatts per rack.
The NVIDIA H200’s figure reflects the official specification of a current-generation GPU server, while the next-generation GPU is a projection based on industry reports.

Next, let’s convert this into something more relatable — how many heaters’ worth of electricity does a GPU server consume?
This household-based comparison helps make the scale more intuitive.

Heater Equivalent (Assuming One Heater = ~1.2 kW)

Item                               Equivalent Number of Heaters
NVIDIA H200 (Server)               ~8.5
Next-Generation GPU (Estimated)    ~12
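
For anyone who wants to check the arithmetic, here is a minimal Python sketch of the conversion. The wattages are the approximate figures from the tables above, not measured values.

```python
# Heater-equivalent arithmetic using the approximate figures from the tables.
HEATER_KW = 1.2  # one household heater running on its high setting

servers_kw = {
    "NVIDIA H200 (8-GPU server)": 10.2,
    "Next-generation GPU server (estimated)": 14.3,
}

for name, kw in servers_kw.items():
    # Divide server power by one heater's power to get the heater equivalent.
    print(f"{name}: {kw} kW ≈ {kw / HEATER_KW:.1f} heaters")
```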

Until the 2010s, a standard data center rack typically supplied around 10 kilowatts of power — near the upper limit for air-cooled systems.
However, the rise of AI workloads has changed this landscape.
High-density racks designed for liquid cooling now reach 20–50 kilowatts per rack.
Given these rack budgets, a single GPU server nearly exhausts an entire legacy rack's capacity, and even an AI-ready rack can accommodate only a handful of GPU servers, roughly one to four depending on its power budget.
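
As a rough sanity check on those rack figures, here is a small Python sketch that divides each rack's power budget by the server figures above. Real deployments also reserve power for networking, storage, and cooling, so treat the results as upper bounds; note that an H200 server slightly exceeds a conventional 10 kW budget, which is why the text describes it as nearly filling one.

```python
# Upper-bound estimate of GPU servers per rack, ignoring networking,
# storage, and cooling overhead. Figures are the approximations used above.
rack_budget_kw = {
    "Conventional rack": 10,
    "AI-ready rack (low end)": 20,
    "AI-ready rack (high end)": 50,
}
server_kw = {
    "H200 server": 10.2,
    "Next-gen server (estimated)": 14.3,
}

for rack, budget in rack_budget_kw.items():
    for server, kw in server_kw.items():
        # Integer division: how many whole servers fit within the power budget.
        print(f"{rack}: fits {int(budget // kw)} × {server}")
```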

  • NVIDIA H200 (Current Model)

    • Per Chip: up to 0.7 kW
    • Per Server (8 GPUs + NVSwitch): ~10.2 kW
    • Equivalent to about 8.5 household heaters
    • Nearly fills a conventional 10 kW rack
    • Fits roughly 2–4 servers per AI-ready rack
  • Next-Generation GPU (Estimated)

    • Per Chip: around 1.0 kW (based on reported estimates)
    • Per Server (8 GPUs + NVSwitch assumed): ~14.3 kW
    • Equivalent to about 12 household heaters
    • Exceeds the capacity of conventional racks
    • Fits roughly 1–3 servers per AI-ready rack

Looking at these comparisons, the difference between a household heater and a GPU server becomes strikingly clear.
A GPU is no longer just an electronic component — it’s effectively part of the power infrastructure itself.

If you imagine running ten household heaters at once, you start to grasp the weight of a single GPU server.
As AI models continue to scale, their power demands are rising exponentially, forcing data center design to evolve around power delivery and cooling systems.
Enhancing computational capability now also means confronting how we handle energy itself, as the evolution of GPUs continues to blur the line between information technology and the energy industry.


The Age of a Compute-Backed Economy Where Semiconductors Anchor Trust

In the past, the foundation of the economy was gold.
Under the gold standard, currency was backed by physical assets.
The rarity of gold itself directly reflected the credibility of a nation and the value of its currency. That was the world we once lived in.

After a long phase of economic expansion unmoored from tangible assets, we are now entering a world where computational capacity is beginning to take the place once held by gold.

AI has become the foundation of all economic activity.
Industries are run by models. Decisions are made by computation.
In such a society, value is no longer created by labor, but by computational resources themselves.

And what are computational resources?
They are, in concrete terms, electricity, compute devices, cooling infrastructure, policy-governed access to all of these, and above all, semiconductors.

In the coming world, a nation’s credibility will be determined by how much it can compute.
National power will increasingly reflect the total computational capacity it controls.
A country that possesses semiconductor design and fabrication capabilities—and the energy and infrastructure to operate them—will be able to anchor its currency with computational resources.

This represents a transition into a compute-backed economic system.

Where once nations signaled their monetary credibility with gold reserves, they may soon point to the total number of GPGPUs they own, the strength of their AI training infrastructure, or the volume of high-quality data they control.
We may enter a world where it is reasonable to say, “Our currency is stable because we possess sufficient compute.”
It’s possible that compute has already rewritten the very concept of military power.

Computational resources are invisible. And their value is fluid.
Electricity prices, cooling efficiency, software optimization, algorithmic efficiency, and data quality—all of these dynamically affect the credibility of a currency.
This is a real-time economic foundation, so dynamic that humans alone may be unable to grasp it.
It presupposes communication between AIs.
And for any nation without computational resources to participate in that communication, the end may already be near.

Until now, the economy has been driven by “intangible trust.”
But in the age of AI, it is “the total executable compute” that becomes the final form of trust.
And at the core of that trust lies the hard fact of how much compute a nation possesses—and governs—within its borders.

Semiconductors, electricity, and data are no longer merely parts of industrial structure.
They underpin currency and sovereignty.
And the nation that supports them will be the one that holds the next global reserve currency.


The Structure of the New Resource War

In the era of AI, what is the most valuable resource? When I think about it, the first things that come to mind are “GPUs” and “data.”

At the same time, the idea that these are important resources has already become common sense. The real question is what kind of resources they are.

In the twentieth century, oil was the resource that moved nations. It supported industrial production, transportation, and even determined the outcomes of wars. It was said that whoever controlled oil controlled the world.

Today, GPUs are beginning to resemble oil. They drive generative AI, support military technologies, and stand at the frontlines of information warfare. Whether or not a nation possesses computational resources now determines its strategic strength.

I wrote about this perspective in “The Geopolitics of Computational and Energy Resources.”

However, the emergence of ChatGPT, and later DeepSeek, has made things a little more complicated. Possessing massive numbers of GPUs and massive amounts of data is no longer an absolute prerequisite. With the right model design and training strategy, it has been shown that even limited computational resources can produce disruptive results.

In other words, GPUs, once similar to oil, are now also beginning to resemble “currency.”

It’s not just about how much you have. It’s about where, when, and how you use it. Liquidity and strategic deployment determine outcomes. Mere accumulation is meaningless. Value is created by circulation and optimized utilization.

Given this, I believe the coming resource wars will have a two-layer structure.

One layer will resemble the traditional oil wars. Nations will hoard GPUs, dominate supply chains, and treat computational resources like hard currency.

The other layer will be more flexible and dynamic, akin to currency wars. Teams will compete on model design, data engineering, and chip architecture optimization—on how much performance they can extract from limited resources.

DeepSeek exemplified the second path. In an environment without access to cutting-edge GPUs, they optimized software and human resources to extract performance that rivals the world’s top models.

In short, simply possessing computational resources will no longer be enough. It will be essential to customize, optimize, and maximize the efficiency of what you have.

It’s not about “who has the most.” It’s about “who can use it best.”

I believe this is the structure of the new resource war in the AI era.


The Geopolitics of Computational and Energy Resources

If AI is going to change the structure of the world, where will it begin?
To answer that, we need to start by redefining two things: computational resources and energy resources.

In the past, nuclear power was at the heart of national strategy. It was a weapon, a power source, and a diplomatic lever.
Today, in the age of AI, “computational resources” (GPUs) and “energy resources” (electricity) are beginning to hold the same level of geopolitical significance.

Running advanced AI systems requires enormous numbers of GPUs and enormous amounts of electricity.
And considering the scale of influence that AI can have on economies and national security, it’s only natural that nations are now competing to secure these resources.

Take the semiconductor supply chain as an example. The United States, which effectively dominates the market for high-end chips, has restricted exports in an effort to contain China’s AI development. Sanctions against Huawei are a symbol of that policy, and the continued efforts to lock down TSMC are part of the same strategy.

So how did China respond? Denied access to high-end GPUs, they opted to compensate with sheer volume and energy. Even at the cost of environmental impact, they prioritized securing power and running models at scale.
They are also driving a shift from raw quantity toward efficiency: facing the reality of having only older chips, they have poured massive human resources into optimizing software at every layer to eliminate waste and unlock surprising efficiency.

At this point, society has already entered a phase where computational and energy resources are being redefined as weapons.
Training AI models is not just a matter of science—it’s information warfare, monetary policy, and infrastructure control rolled into one.

This is why many governments no longer have the luxury of discussing energy policy through the lens of environmental protection alone. In early 2025, the U.S. appears to be a prime example of this. “Let us use all available electricity for AI”—that seems to be the unspoken truth at the national level.

Like nuclear power, AI is an irreversible technology. Once a model begins to run, it cannot simply be turned off. You need electricity, you need cooling, you need infrastructure.
These are not optional.


GPUs as Primary Resources for Computational Power

That was the case with oil. As a primary resource that enabled energy production, oil defined the 20th century.
In the late 19th century, it was just something to light lamps with. But over time, it came to determine the fate of nations.

Now, something similar is happening with computational power. What used to be an abstract resource—processing capability—has taken a concrete form: the GPU.
And GPUs have become commodities in the market.

To run generative AI, you need GPUs.
And not just any GPU—you need ones with high memory bandwidth and massive parallelism. As of early 2025, these are essential. In the world of generative AI, GPUs are infrastructure. They are resources. They are currency. And uniquely, they are a currency that even post-human intelligences can value and transact with.

As the generative AI market expands and takes up a larger share of the global economy, the question becomes:
Who holds the GPUs? The answer shapes the distribution of economic zones, control over protocols, and even visibility into, and influence over, how societies are governed.

What’s interesting is that, much like oil, the value of GPUs doesn’t come from ownership alone. You still need electricity to run them. You need proper cooling to raise density. And without the right software, even the best hardware is useless.

So in this era, it’s not just about possessing resources—it’s about having the structure to use them. Just as oil created geopolitical tensions between producers and consumers, GPUs are now woven into the fabric of geography and power.

For those of us living through this shift, the key question isn’t simply “Where are the GPUs?”
It’s “How do we embed GPUs into society?”

How do we distribute them into cities, into regional industries, into educational institutions? How do we make AI infrastructure usable at the local level, not just locked inside distant clouds?

That’s the challenge of this era—and it’s already begun.

One last thought. GPUs, or more specifically GPGPUs, are merely the current form of primary resource. Just as oil was a stepping stone to electricity, GPUs are a stepping stone to the next wave of computational resources.
As long as what we ultimately seek is computational power, other types of semiconductors will inevitably emerge and grow. Sustainability will be key here as well.