Pricing

Simple, transparent pricing

Pay only for what you use. No hidden fees, no long-term contracts. Estimated pre-launch rates.

GPU compute rates

GPU          VRAM     Price      Best for
RTX 4090     24GB     $0.40/hr   Inference & Rendering
A100 40GB    40GB     $1.80/hr   Model Training
A100 80GB    80GB     $2.00/hr   Large Models
H100 80GB    80GB     $3.50/hr   Confidential Compute
B200         192GB    $5.00/hr   Next-Gen AI

All prices are per GPU-hour. Volume discounts available.
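For a quick estimate, a job's cost is simply rate x GPU count x hours. A minimal sketch using the rates above (`HOURLY_RATES` and `estimate_cost` are illustrative names, not part of any official SDK):

```python
# Illustrative cost arithmetic for the GPU rates above.
# HOURLY_RATES and estimate_cost() are hypothetical helpers, not an official API.
HOURLY_RATES = {
    "RTX 4090": 0.40,
    "A100 40GB": 1.80,
    "A100 80GB": 2.00,
    "H100 80GB": 3.50,
    "B200": 5.00,
}

def estimate_cost(gpu: str, gpus: int, hours: float) -> float:
    """USD cost: per-GPU-hour rate x GPU count x hours."""
    return round(HOURLY_RATES[gpu] * gpus * hours, 2)

print(estimate_cost("H100 80GB", gpus=8, hours=12))  # 336.0
```

For example, an 8x H100 training run for 12 hours works out to $336 at the listed rate, before any volume discount.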

AI inference pricing

Free

$0, forever
  • 1M tokens/mo
  • Shared GPUs
  • Best-effort latency
  • Community models

Pro (Most popular)

$0.50 per 1M tokens
  • Dedicated GPUs
  • Low-latency SLA
  • All models
  • Priority support

Enterprise

Custom pricing
  • Private deployment
  • ZK-verified inference
  • Custom SLAs
  • Dedicated account manager
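Pro-tier billing is linear in token usage, so a monthly bill is easy to estimate from the $0.50 per 1M tokens rate above. A sketch (`monthly_inference_cost` is a hypothetical helper, not a published API):

```python
# Illustrative Pro-tier token billing at $0.50 per 1M tokens.
# monthly_inference_cost() is a hypothetical helper, not an official SDK call.
PRO_RATE_PER_MILLION = 0.50  # USD, from the Pro plan above

def monthly_inference_cost(tokens: int) -> float:
    """USD cost for a month of Pro-tier token usage."""
    return tokens / 1_000_000 * PRO_RATE_PER_MILLION

print(monthly_inference_cost(250_000_000))  # 125.0
```

A workload of 250M tokens per month would come to about $125 at this rate.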

Earn as a provider

Provide GPU compute and earn 80% of fees, the highest revenue share in the industry.

  • 80% revenue share
  • 24/7 automated payouts
  • <1hr settlement
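The payout math is a flat 80/20 split of each job's fee. A minimal sketch (`provider_payout` is an illustrative name, not a contract or SDK call):

```python
# Illustrative 80/20 fee split for providers; names are hypothetical.
PROVIDER_SHARE = 0.80  # provider keeps 80% of each job's compute fee

def provider_payout(job_fee_usd: float) -> float:
    """USD amount settled to the provider's wallet after a job completes."""
    return round(job_fee_usd * PROVIDER_SHARE, 2)

print(provider_payout(100.00))  # 80.0
```

A $100 job fee settles $80 to the provider, with the remaining 20% retained by the network.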

Frequently asked questions

How does billing work?

You're billed per GPU-hour of compute used. Jobs are metered in real time. No minimum commitment, no upfront payments.

What does the free tier include?

1 million tokens per month on shared GPUs with best-effort latency. No credit card required. Upgrade anytime.

Are volume discounts available?

Yes. Teams using more than 10,000 GPU-hours per month qualify for custom enterprise pricing. Contact us for details.

How do provider payouts work?

Providers earn 80% of compute fees. Payments settle automatically to your connected wallet within one hour of job completion.

Is there a minimum commitment?

No. Pay-as-you-go with no minimums. Enterprise customers can opt for committed-use discounts with annual agreements.

How is compute verified?

All compute jobs include zero-knowledge proof of correct execution at no additional cost. Proofs are verifiable on-chain.

Need custom pricing?

Enterprise teams, high-volume users, and research institutions qualify for custom rates.

View documentation