Pre-Built Workflow Templates
Get started quickly with templates for common GPU compute patterns. Each workflow includes step-by-step guides, code examples, and cost estimates.
Batch Rendering Workflow
Render 3D scenes, animations, and visual effects across distributed GPUs.
How It Works
1. Upload project files (.blend, .ma, scene)
2. Configure render settings (resolution, format, frames)
3. Jobs distributed to available GPU nodes
4. Frames rendered in parallel with progress tracking
5. Results verified and delivered with proof
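Conceptually, step 3 splits the requested frame range across the available nodes. A minimal, SDK-free sketch of that chunking (the `split_frames` helper below is illustrative, not part of the bitsage SDK):

```python
def split_frames(frame_range: str, num_nodes: int) -> list[range]:
    """Split a frame range like "1-240" into near-equal chunks,
    one per available GPU node."""
    start, end = (int(x) for x in frame_range.split("-"))
    total = end - start + 1
    base, extra = divmod(total, num_nodes)
    chunks = []
    cursor = start
    for i in range(num_nodes):
        # Spread any remainder across the first few nodes
        size = base + (1 if i < extra else 0)
        chunks.append(range(cursor, cursor + size))
        cursor += size
    return chunks

# 240 frames over 4 nodes -> 4 chunks of 60 frames each
chunks = split_frames("1-240", 4)
```

Each chunk can then be rendered independently, which is what makes the per-frame parallelism in step 4 possible.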
Use Cases
Feature film VFX, animation studios, architectural visualization
Estimated cost: $0.35/GPU-hr
Quick Start
Python
from bitsage import BatchJob

# Create a rendering job
job = BatchJob.create(
    type="blender",
    project="animation.blend",
    frames="1-240",
    output_format="exr",
    gpu_type="RTX_4090"
)

# Submit and monitor
job.submit()
job.wait()

# Download results
job.download("./renders/")

AI Training Workflow
Distributed model training with checkpointing and verification.
How It Works
1. Upload training script and dataset
2. Configure distributed training parameters
3. Training distributed across GPU cluster
4. Checkpoints saved and verified
5. Final model delivered with training metrics
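The checkpointing in steps 3 and 4 amounts to saving state every `checkpoint_interval` steps, plus once at the end so no progress is lost. A minimal sketch, independent of the SDK (the `save_checkpoint` callback is hypothetical):

```python
def training_loop(total_steps: int, checkpoint_interval: int, save_checkpoint):
    """Run total_steps optimization steps, invoking the checkpoint
    callback every checkpoint_interval steps and at the final step."""
    saved = []
    for step in range(1, total_steps + 1):
        # ... forward/backward pass and optimizer update would go here ...
        if step % checkpoint_interval == 0 or step == total_steps:
            save_checkpoint(step)
            saved.append(step)
    return saved

# With 3500 steps and interval 1000: checkpoints at 1000, 2000, 3000, 3500
steps = training_loop(3500, 1000, lambda s: None)
```

The final-step checkpoint matters when the step count is not a multiple of the interval, as in the example above.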
Use Cases
LLM fine-tuning, computer vision, reinforcement learning
Estimated cost: $3.20/GPU-hr
Quick Start
Python
from bitsage import TrainingJob

# Configure distributed training
job = TrainingJob.create(
    script="train.py",
    dataset="s3://bucket/data",
    gpus=8,
    gpu_type="H100",
    checkpoint_interval=1000
)

# Start training
job.submit()

# Monitor progress
for update in job.stream():
    print(f"Step {update.step}: loss={update.loss}")

Real-time Inference Workflow
Deploy models for low-latency serving with auto-scaling.
How It Works
1. Upload model artifacts (ONNX, PyTorch, TensorFlow)
2. Configure serving parameters and scaling
3. Model deployed to edge regions
4. Auto-scaling based on request volume
5. Pay only for actual inference time
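The auto-scaling in step 4 can be pictured as choosing a replica count that covers current load, clamped to the configured `min_replicas`/`max_replicas` bounds. A sketch of that decision rule (illustrative only; the actual scaler and the `capacity_per_replica` parameter are assumptions):

```python
import math

def target_replicas(requests_per_sec: float, capacity_per_replica: float,
                    min_replicas: int, max_replicas: int) -> int:
    """Pick the smallest replica count that covers the current request
    rate, clamped to the deployment's configured bounds."""
    needed = math.ceil(requests_per_sec / capacity_per_replica)
    return max(min_replicas, min(max_replicas, needed))

# Assuming ~40 req/s per replica: 250 req/s -> 7 replicas
replicas = target_replicas(250, 40, min_replicas=1, max_replicas=10)
```

Because billing is per inference (step 5), idle deployments scale down toward `min_replicas` rather than paying for unused capacity.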
Use Cases
Chatbots & assistants, content generation, real-time translation
Estimated cost: $0.50/1M tokens
Quick Start
Python
from bitsage import Inference

# Deploy a model
deployment = Inference.deploy(
    model="meta-llama/Llama-3-70B",
    gpu="H100",
    min_replicas=1,
    max_replicas=10,
    regions=["us-east", "eu-west"]
)

# Make requests
response = deployment.generate(
    prompt="Explain quantum computing",
    max_tokens=500
)

Scientific Simulation Workflow
Run molecular dynamics, climate models, and physics simulations.
How It Works
1. Upload simulation configuration and input files
2. Configure compute resources and duration
3. Simulation runs on verified GPU cluster
4. Intermediate results checkpointed
5. Final results verified and delivered
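Long simulations are billed by the GPU-hour, so it is worth estimating cost before submitting (step 2). A back-of-envelope helper, not part of the SDK:

```python
def estimate_cost(gpus: int, hours: float, rate_per_gpu_hr: float) -> float:
    """Total cost = GPUs x wall-clock hours x hourly rate per GPU."""
    return round(gpus * hours * rate_per_gpu_hr, 2)

# e.g. 4 A100s for 12 hours at the $1.80/GPU-hr rate quoted above
cost = estimate_cost(4, 12, 1.80)  # 86.4
```

Checkpointing (step 4) also bounds the cost of a failure: at worst you re-run the interval since the last checkpoint, not the whole simulation.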
Use Cases
Drug discovery, materials science, climate modeling
Estimated cost: $1.80/GPU-hr
Quick Start
Python
from bitsage import SimulationJob

# Configure molecular dynamics simulation
job = SimulationJob.create(
    type="gromacs",
    config="protein.gro",
    topology="topol.top",
    steps=10000000,
    gpus=4,
    gpu_type="A100"
)

# Submit and monitor
job.submit()
trajectory = job.wait()

# Analyze results
trajectory.download("./results/")

ZK Proof Generation Workflow
Generate zero-knowledge proofs with GPU acceleration.
How It Works
1. Submit computation witness and circuit
2. Proof generation distributed across GPUs
3. Proofs verified on-chain
4. Results delivered with attestation
Use Cases
Private transactions, rollup proving, identity verification
Estimated cost: $0.35/GPU-hr
Quick Start
Python
from bitsage import ZKJob

# Generate ZK proof
job = ZKJob.create(
    circuit="transfer.r1cs",
    witness="witness.wtns",
    proving_key="proving.key",
    gpu_type="RTX_4090"
)

# Generate proof
proof = job.generate()

# Verify on-chain
tx = proof.verify_on_chain(network="starknet")

Hybrid Workflow
Combine multiple workflow types in a single pipeline.
How It Works
1. Define pipeline stages (training → inference → verification)
2. Configure dependencies and data flow
3. Pipeline executes across workflow types
4. Results from each stage verified
5. Final output delivered with full provenance
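The data flow in step 2 is expressed with `${stageN.output}` references, which suggests simple placeholder substitution between stages. A minimal sketch of how such resolution could work (illustrative only, not the actual Pipeline implementation):

```python
import re

def resolve_placeholders(config: dict, stage_outputs: dict) -> dict:
    """Replace ${stageN.output} references in a stage's config with the
    recorded output of the earlier stage it names."""
    def substitute(value):
        if not isinstance(value, str):
            return value
        return re.sub(
            r"\$\{(stage\d+)\.output\}",
            lambda m: stage_outputs[m.group(1)],
            value,
        )
    return {key: substitute(val) for key, val in config.items()}

resolved = resolve_placeholders(
    {"model": "${stage1.output}", "gpus": 2},
    {"stage1": "s3://bucket/model.ckpt"},
)
# resolved["model"] == "s3://bucket/model.ckpt"
```

Resolving references this way also makes stage dependencies explicit, which is what lets each stage's output be verified (step 4) before the next stage consumes it.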
Use Cases
ML pipelines, data processing, complex workflows
Estimated cost: varies by stage
Quick Start
Python
from bitsage import Pipeline, TrainingJob, InferenceDeployment, ZKJob

# Define hybrid pipeline
pipeline = Pipeline([
    # Stage 1: Train model
    TrainingJob(script="train.py", gpus=4),
    # Stage 2: Deploy for inference
    InferenceDeployment(model="${stage1.output}"),
    # Stage 3: Generate proofs
    ZKJob(witness="${stage2.output}")
])

# Execute pipeline
results = pipeline.run()

Ready to Build?
Check out our documentation for detailed guides, API references, and SDK downloads.