W.AI: GPU-Seconds as Machine Money
Introduction
What would Satoshi build for machines?
Bitcoin works because humans believe in it. The scarcity is social. The hash puzzles prove work but produce nothing. It's money because we say it's money.
That won't work for AIs.
Machine money needs scarcity that's intrinsic. Appreciation that's mechanical. Value that emerges from physics, not faith.
We think that's GPU-seconds.
Time on hardware, normalized by cost. Not FLOPS — those deflate as compute gets cheaper. Not inference tokens — those are model-specific. Time itself, applied to silicon.
Hardware improves. Time doesn't. That's the arbitrage.
2 GPU-seconds on a $10K GPU today. 2 GPU-seconds on a $10K GPU in 2029. Same time. Same cost basis. But 3x the output.
Appreciation by construction. No speculation required.
This isn't theory. We've been running a network. 1,100 contributors daily, real GPUs, real workloads. Before that, we built WOMBO — AI products used by 200 million people. We ship.
But this is bigger than anything we've built. This is a monetary primitive. The unit of account for the machine economy. And we don't have all the answers.
How does supply work? We have ideas, not certainty. What guarantees redemption? Economics, not physics — that's a weakness. Who sets the hardware price oracle? Unsolved.
We're not asking you to believe us. We're asking you to stress-test us.
If GPU-seconds hold up as a unit, this is the foundation of machine money. Real infrastructure for an AI economy that's coming whether we're ready or not.
If they don't hold up, we need to know why.
What follows is everything we've got.
The Core Insight
The question isn't "what's the best cryptocurrency?" It's "what do machines actually need?"
Machines need compute. Compute requires hardware. Hardware costs money and takes time to run. That's the primitive.
Most compute tokens are payment rails. You buy tokens, spend them on inference, they're gone. That's useful, but it's not money. It's a gift card.
Money needs to be worth holding. It needs to appreciate — or at least not depreciate. Fiat fails this. Bitcoin succeeds through artificial scarcity and collective belief.
GPU-seconds succeed differently: the thing you're storing actually gets more powerful.
Here's the mechanism:
- You contribute 2 seconds on hardware worth $10K today
- You receive credits representing that contribution
- In three years, $10K buys hardware that's 3x more capable
- Your credits still represent "2 seconds on $10K hardware"
- When you redeem, you get 2 seconds on whatever $10K buys now
- Same time. Same cost basis. 3x the output.
This isn't speculation about future demand. It's not a bet on adoption. It's the physics of semiconductor improvement applied to stored value.
Why would someone accept credits earned on older hardware?
Because the credits don't remember what hardware earned them. Once minted, a credit is a credit. It represents a claim on network time — fungible, transferable, redeemable.
The person accepting your credit will redeem it on whatever hardware exists when they choose to redeem. They don't care that you earned it on an A100. They care that it's valid.
This is the same reason anyone accepts any money: they believe they can exchange it for something they want, or trade it to someone else who can. The difference is that GPU-seconds are claims on something that keeps getting better.
The Hayekian angle:
Prices coordinate distributed knowledge. No central planner knows every contributor's electricity costs, hardware availability, opportunity costs. But the market for GPU-seconds aggregates all of it into price signals.
This isn't a compute marketplace. It's price signal infrastructure for the machine economy — enabling distributed coordination of compute resources without central planning.
Seven Questions
1. What is the scarce resource?
GPU seconds, normalized by hardware cost.
Every GPU can only process one thing at a time. That second, once spent, is gone. The scarcity is time itself, applied to silicon.
The formula:
Value = (GPU seconds) × (hardware cost) / (assumed lifetime, in seconds)
In other words: the share of the hardware's dollar cost your time consumes. The sketch after the list below applies it.
- H100 costs ~2x an A100 → 2 seconds on H100 = 4 seconds on A100
- A MacBook M4 contributes less per second than a 4090, proportional to cost
- Cross-generation: normalization by cost maintains equivalence as hardware evolves
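A minimal sketch of that normalization, in Python. The dollar prices and the 3-year lifetime are illustrative assumptions; only the formula itself comes from above.

```python
# Minimal sketch of the normalization formula. Prices, the 3-year lifetime,
# and the function names are illustrative assumptions, not protocol constants.

ASSUMED_LIFETIME_SECONDS = 3 * 365 * 24 * 3600  # assume a 3-year useful life

def credit_value(gpu_seconds: float, hardware_cost_usd: float,
                 lifetime_seconds: float = ASSUMED_LIFETIME_SECONDS) -> float:
    """Share of the hardware's dollar cost consumed by the contributed time."""
    return gpu_seconds * hardware_cost_usd / lifetime_seconds

# Cross-hardware equivalence: if an H100 is priced at ~2x an A100,
# 2 H100-seconds mint the same value as 4 A100-seconds.
a100_value = credit_value(gpu_seconds=4, hardware_cost_usd=15_000)  # assumed A100 price
h100_value = credit_value(gpu_seconds=2, hardware_cost_usd=30_000)  # assumed H100 price
assert abs(a100_value - h100_value) < 1e-12
```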
2. How is it measured?
Worked example (the sketch below runs the numbers):
Today (2026):
- You contribute 1 hour (3,600 seconds) on an RTX 4090 (~$1,600)
- Your contribution is normalized against the hardware cost and lifetime
- You receive credits proportional to your time-weighted contribution
In 3 years (2029):
- A $1,600 GPU is ~3x more capable (40-50% annual improvement compounds)
- Your credits still represent the same contribution
- Redeeming them gets you time on the new hardware baseline
- Same credits. Better silicon. More output.
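Running the same numbers as a sketch, assuming the 45% midpoint of the stated 40-50% range. Nothing here is a protocol constant.

```python
# Sketch of the worked example above. The 45% rate is the midpoint of the
# stated 40-50% range; all figures are illustrative.

SECONDS_CONTRIBUTED = 3_600      # 1 hour on an RTX 4090
HARDWARE_COST_USD = 1_600        # the price point used above
ANNUAL_IMPROVEMENT = 0.45        # midpoint of 40-50%
YEARS_HELD = 3

# The claim itself never changes: "3,600 seconds on $1,600-class hardware."
cost_weighted_seconds = SECONDS_CONTRIBUTED * HARDWARE_COST_USD

# What changes is the silicon behind the claim at redemption time.
capability_multiplier = (1 + ANNUAL_IMPROVEMENT) ** YEARS_HELD
print(f"A ${HARDWARE_COST_USD} GPU in {YEARS_HELD} years: ~{capability_multiplier:.1f}x the capability")
# ~3.0x: same credits, roughly 3x the output when redeemed.
```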
3. How is it verified?
Task completion proves time was spent.
You cannot claim GPU seconds without producing output. The verification is the work itself.
Layers:
- Hardware attestation — GPU fingerprinting confirms real silicon
- Task completion — output must appear within time bounds for the hardware class
- Consistency checking — performance must match claimed specs
- Periodic validation — random challenges verify ongoing capability
Fake GPU seconds cost as much as real ones. To claim 10 seconds on a 4090, you must own a 4090 and run it for 10 seconds. The verification is embedded in physics.
Current approach: Trusted execution with fraud detection. Future: Cryptographic verification as proof-of-useful-work matures (tracking Nockchain, TIG, similar protocols).
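To make the consistency-checking layer concrete, a minimal sketch. The hardware classes, task types, and time bounds are invented placeholders; real thresholds would come from the benchmarking described above.

```python
# Sketch of the consistency check: a result only counts if it lands inside the
# expected time window for the claimed hardware class. Bounds are placeholders.

EXPECTED_SECONDS = {
    # (hardware class, task type): (min plausible, max allowed)
    ("rtx_4090", "sdxl_image"): (2.0, 15.0),
    ("a100", "sdxl_image"): (1.5, 10.0),
}

def completion_is_consistent(hardware: str, task: str, elapsed_seconds: float) -> bool:
    """Reject results that are implausibly fast (spoofed hardware, cached output)
    or too slow (weaker silicon than claimed)."""
    low, high = EXPECTED_SECONDS[(hardware, task)]
    return low <= elapsed_seconds <= high

assert completion_is_consistent("rtx_4090", "sdxl_image", 6.3)
assert not completion_is_consistent("rtx_4090", "sdxl_image", 0.4)  # too fast to be real work
```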
4. How does it flow?
Three-sided marketplace:
| Participant | What they contribute | What they get |
|---|---|---|
| Consumer contributors | GPU cycles from gaming PCs, MacBooks | Appreciating credits |
| Hyperscalers | Excess data center capacity | Training data access |
| Task submitters | Payment for inference/training | Competitive compute pricing |
Why hyperscalers participate:
They don't need marginal compute revenue. A few million dollars of it is noise in the AGI race.
What they need: training data.
Every inference produces a prompt, an output, and optionally user feedback. That's high-quality, real-world training data, the kind OpenAI paid to license from Reddit.
Pricing tiers:
- Higher price: inference destroyed after completion
- Lower price: inference usable for training
Hyperscalers contribute compute → get training data. That's the trade.
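A sketch of how the two tiers above might look at submission time. The flag name and the 30% discount are assumptions; only the ordering (destroy-after-completion costs more) is taken from the tiers.

```python
# Sketch of the two pricing tiers at submission time. The flag name and the
# 30% discount are illustrative; only the ordering (private costs more) is
# taken from the tiers above.

from dataclasses import dataclass

TRAINING_REUSE_DISCOUNT = 0.30  # assumed discount when outputs may be reused for training

@dataclass
class TaskSubmission:
    prompt: str
    base_price_credits: float
    destroy_after_completion: bool  # True = higher price, output never retained

    def price(self) -> float:
        if self.destroy_after_completion:
            return self.base_price_credits
        return self.base_price_credits * (1 - TRAINING_REUSE_DISCOUNT)

private_job = TaskSubmission("summarize this contract", 10.0, destroy_after_completion=True)
reusable_job = TaskSubmission("generate a product image", 10.0, destroy_after_completion=False)
assert reusable_job.price() < private_job.price()
```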
5. What backs the value?
Hardware physics.
| Asset | Appreciation basis | Predictability |
|---|---|---|
| Gold | Geological scarcity + sentiment | Low |
| Bitcoin | Collective belief + halvings | Medium |
| GPU-seconds | Hardware improvement rate | High |
Hardware improvement is the most studied, most consistent trend in technology. Every new chip generation ships with public specs and public prices. It's not faith. It's measurable.
A reserve asset that appreciates at the rate of technological progress is a reserve asset for the AI age.
6. How does governance work?
Protocol rules with minimal discretion.
Core parameters (hardware normalization, verification thresholds) derive from measurable market data:
- Hardware costs from public pricing
- Performance benchmarks from standardized tests
- Lifetime assumptions from industry depreciation
The open question: who controls the normalization oracle?
This is a central point of failure. Whoever sets the hardware price conversion has enormous power. Our current thinking: transparent, on-chain price feeds derived from multiple sources. But this is unsolved. We're flagging it, not hiding it.
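One sketch of what "multiple sources" could mean in practice: take the median across independent price feeds for each hardware class, so no single feed sets the conversion. The source names and prices below are placeholders; the design is still open.

```python
# Sketch of a multi-source normalization oracle: median across independent
# price feeds for one hardware class. Source names and prices are placeholders.

from statistics import median

def normalized_hardware_price(feeds: dict[str, float]) -> float:
    """Median resists a single manipulated or stale source."""
    if len(feeds) < 3:
        raise ValueError("need at least 3 independent sources")
    return median(feeds.values())

rtx_4090_feeds = {
    "retail_tracker": 1_650.0,
    "secondary_market": 1_480.0,
    "oem_list_price": 1_599.0,
}
print(normalized_hardware_price(rtx_4090_feeds))  # 1599.0
```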
Current state: Centralized operation during bootstrap, with a path to progressive decentralization.
7. What's the path to adoption?
Phase 1: Consumer Network (Current)
- ~1,100 daily active contributors
- Real AI workloads (image generation, LLM inference)
- Points system testing economic mechanics
Phase 2: Dedicated Operators
- Purpose-built inference machines
- Reliable capacity for real-time workloads
- Economics optimized for uptime
Phase 3: Data Center Integration
- Training data partnerships with hyperscalers
- Auto-scaling buffer for demand spikes
- Protocol layer for vendors (CoreWeave, etc.)
Phase 4: Universal Acceptance
- GPU-seconds accepted across providers
- OpenAI, Anthropic, others accept credits as payment
- The unit of account for machine-to-machine settlement
Phase 4 requires critical mass. It's chicken-and-egg. We're not pretending otherwise.
What We Don't Know
1. Redemption guarantees
The appreciation thesis assumes you can redeem. But redemption requires:
- The network to still exist
- Contributors to still be contributing
- Capacity to be available when you want it
If contributors leave, if the network dies, if there's a run — your stored value is a claim on nothing.
Our counter: All money is a claim on future capacity. Dollars assume the US economy exists. This is younger, less proven. That's a real weakness.
2. The normalization oracle
The formula requires "hardware cost." Who decides that?
- Market price? (Fluctuates wildly)
- List price? (Manufacturers manipulate this)
- Depreciated value? (Subjective)
This is a central point of control. We don't have a clean answer.
3. Why accept credits before critical mass?
Why would OpenAI accept W.AI credits today? They have cash. They have their own compute. What do they get?
The only answer: network effects. If enough people hold credits and want to spend them, providers accept them to capture that demand.
But that requires critical mass first. We're not there.
4. No moat
What stops someone from forking this and doing it better? The code could be open-sourced. The concept is now public. 1,100 daily active contributors isn't defensible.
Our counter: First-mover advantage, team credibility, existing contributor base. Soft moats. Not hard ones.
5. The appreciation math is subtle
If everyone's compute purchasing power increases equally, there's no relative gain. This is general technological progress, not individual wealth appreciation.
The early contributor advantage: you locked in time when hardware was weaker, redeem when it's stronger. But this advantage erodes as more people join.
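A small calculation of that erosion, reusing the 45% midpoint of the 40-50% improvement assumption from earlier. Years and rate are illustrative.

```python
# Small calculation of the point above, using the 45% midpoint of the
# 40-50% improvement assumption. Years are illustrative.

IMPROVEMENT = 1.45

def capability(year: int, base_year: int = 2026) -> float:
    return IMPROVEMENT ** (year - base_year)

# Both hold identical claims: "1 hour on $1,600-class hardware", redeemed in 2029.
early_gain = capability(2029) / capability(2026)  # contributed 2026 -> ~3.05x what they gave up
late_gain = capability(2029) / capability(2029)   # contributed 2029 -> 1.00x

print(f"early contributor: {early_gain:.2f}x the output they contributed")
print(f"late contributor:  {late_gain:.2f}x")
# Everyone redeems on the same 2029 silicon; the edge is only relative to what
# each person gave up, and it shrinks the later you lock in.
```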
Conclusion
We're not claiming to have invented money. We're claiming to have identified a unit that might work for machines.
GPU-seconds. Time on hardware. Appreciating by construction.
The network is live. The thesis is being tested. The framework is open for scrutiny.
If this holds up, it's infrastructure for the machine economy. If it doesn't, we want to know where it breaks.
Addendum: Ideas We're Exploring
Compute Pools
The endgame isn't one monolithic network. It's hundreds of thousands of compute pools.
California Computer. Los Angeles Computer. Feminism Computer. Gaming Computer.
People start pools. Join pools. Decide what their pooled compute works on. W.AI facilitates coordination.
Corporate angle: mega-corps allocate compute to pools like donating to charity. "10% of our GPU allocation to the climate research pool."
Censorship tradeoff: if you want corporate compute, you probably inherit some restrictions (no hacking, no CSAM). Fully permissionless pools exist alongside filtered ones.
Synthetic Work
What happens when demand is low? The network can't go cold.
Real work when demand exists (full credit rate). Synthetic work as backstop (reduced rate) — training runs, evals, benchmarks, open research.
Contributors always earn something. The network never sleeps.
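A sketch of what that backstop could look like as a scheduler. The 0.25 synthetic rate and the task labels are assumptions, not published parameters.

```python
# Sketch of the backstop scheduler: real demand pays the full credit rate,
# synthetic work pays a reduced one. The 0.25 rate is an assumption.

SYNTHETIC_RATE = 0.25  # fraction of the full credit rate

def next_assignment(real_task_queue: list[str]) -> tuple[str, float]:
    """Always hand the contributor something; the network never goes cold."""
    if real_task_queue:
        return real_task_queue.pop(0), 1.0          # real work, full rate
    return "synthetic:open_benchmark_run", SYNTHETIC_RATE

print(next_assignment(["inference:job_42"]))  # ('inference:job_42', 1.0)
print(next_assignment([]))                    # ('synthetic:open_benchmark_run', 0.25)
```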
The Training Flywheel
More contributors → more inference capacity → more users → more training data → better models → more demand → more contributors.
Each cycle strengthens the network. The training data becomes a moat. The contributor base becomes infrastructure.
AI-to-AI Settlement
Autonomous agents will transact. They need:
- Verifiable identity (wallet)
- Programmable payments (smart contracts)
- Compute resources (the thing they actually need)
GPU-seconds provide a native unit of account for machine-to-machine settlement — denominated in the resource machines actually consume.
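A sketch of what a settlement message between two agents could carry, denominated in GPU-seconds. Every field name is illustrative; no wire format is specified here.

```python
# Sketch of a machine-to-machine settlement intent denominated in GPU-seconds.
# Field names and values are illustrative; no wire format is specified above.

from dataclasses import dataclass, asdict
import json
import time

@dataclass
class SettlementIntent:
    payer_agent: str        # verifiable identity (wallet address)
    payee_agent: str
    gpu_seconds: float      # amount, in the resource machines actually consume
    hardware_baseline: str  # which cost-normalized class the seconds are pegged to
    expires_at: float       # programmable condition: void if not fulfilled in time

intent = SettlementIntent(
    payer_agent="wallet:agent-a",
    payee_agent="wallet:agent-b",
    gpu_seconds=120.0,
    hardware_baseline="usd1600-class",
    expires_at=time.time() + 3600,
)
print(json.dumps(asdict(intent), indent=2))  # payload an agent could sign and submit
```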
W.AI
w.ai
~1,100 daily active contributors
Real AI workloads running today
Submitted to Machine Currency, January 15, 2026