Clore.ai Python SDK: Automate Your GPU Workflows in 5 Minutes
If you've been renting GPUs through web dashboards — clicking through marketplace listings, manually configuring Docker containers, checking order status in a browser — there's a better way.
The Clore.ai Python SDK lets you do everything programmatically: search the marketplace, filter servers by GPU type and price, create rental orders, monitor running workloads, and tear them down when finished. All from a Python script, a Jupyter notebook, or a CI/CD pipeline.
This tutorial takes you from pip install to a fully automated GPU workflow in about 5 minutes. No prior experience with the Clore.ai API is needed.
Prerequisites
- Python 3.8+ installed on your machine
- A Clore.ai account with funds deposited (sign up here)
- Your API key (get it from your Clore.ai account settings)
That's it. Let's go.
Step 1: Install the SDK
```shell
pip install clore-ai
```
That's the entire installation. The SDK has minimal dependencies and works on Linux, macOS, and Windows.
Verify the installation:
```shell
python -c "import clore_ai; print('SDK installed successfully')"
```
Step 2: Configure Your API Key
The SDK authenticates using your Clore.ai API key. You can pass it directly or set it as an environment variable:
Option A: Environment Variable (Recommended)
```shell
export CLORE_API_KEY="your-api-key-here"
```
Add this to your .bashrc or .zshrc to persist it.
Option B: Pass Directly in Code
```python
from clore_ai import CloreClient

client = CloreClient(api_key="your-api-key-here")
```
Security note: Never hardcode API keys in scripts you commit to git. Use environment variables or a secrets manager.
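A small fail-fast loader makes a missing key obvious at startup instead of surfacing later as a confusing authentication error. A minimal sketch (the helper name is ours, not part of the SDK):

```python
import os

def load_api_key(var="CLORE_API_KEY"):
    """Read the API key from the environment and fail fast if it's unset."""
    key = os.getenv(var)
    if not key:
        raise RuntimeError(f"{var} is not set; run: export {var}=your-api-key")
    return key
```

Call `load_api_key()` once at startup and pass the result to `CloreClient`.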
Step 3: Browse the Marketplace
Let's start by exploring what's available. The marketplace API returns real-time data on all listed servers — their GPUs, pricing, specifications, and availability.
```python
from clore_ai import CloreClient
import os

# Initialize the client
client = CloreClient(api_key=os.getenv("CLORE_API_KEY"))

# Fetch all available servers
servers = client.get_marketplace()

print(f"Total servers on marketplace: {len(servers)}")
print("\nFirst 5 servers:")
for server in servers[:5]:
    gpu_name = server.get("specs", {}).get("gpu", "Unknown")
    price_usd = server.get("price", {}).get("usd", {})
    # Some servers are priced only in BTC/CLORE; show N/A rather than
    # mixing currencies
    on_demand = price_usd.get("on_demand_usd") or "N/A"
    rented = "RENTED" if server.get("rented") else "AVAILABLE"
    reliability = server.get("reliability", 0)
    print(f"  [{rented}] {gpu_name} | ${on_demand}/hr | reliability: {reliability:.2f}")
```
Expected output:

```
Total servers on marketplace: 2580

First 5 servers:
  [AVAILABLE] 8x NVIDIA GeForce RTX 3070 Laptop GPU | $2.0/hr | reliability: 0.99
  [RENTED] 3x NVIDIA GeForce RTX 3070 Ti | $1.15/hr | reliability: 1.00
  [RENTED] 3x NVIDIA GeForce GTX 1660 SUPER | $N/A/hr | reliability: 1.00
  [RENTED] 1x NVIDIA GeForce RTX 4090 | $N/A/hr | reliability: 1.00
  [RENTED] 1x NVIDIA GeForce RTX 4090 | $1.6/hr | reliability: 0.99
```
Step 4: Filter for the GPU You Need
Browsing all 2,500+ servers isn't practical. Let's filter for exactly what we need: an available RTX 4090 with high reliability, under an hourly price ceiling we choose:
```python
def find_best_servers(servers, gpu_filter="RTX 4090", max_price=0.15, min_reliability=0.95):
    """Find available servers matching criteria."""
    results = []
    for server in servers:
        # Skip rented servers
        if server.get("rented"):
            continue

        # Check GPU type
        gpu_name = server.get("specs", {}).get("gpu", "")
        if gpu_filter.lower() not in gpu_name.lower():
            continue

        # Check reliability
        reliability = server.get("reliability", 0)
        if reliability < min_reliability:
            continue

        # Get the price in USD; skip servers priced only in other currencies,
        # since comparing a BTC amount against a USD ceiling is meaningless
        price_usd = server.get("price", {}).get("usd", {})
        on_demand = price_usd.get("on_demand_usd")
        if on_demand is None or on_demand > max_price:
            continue

        # Collect additional specs
        specs = server.get("specs", {})
        results.append({
            "id": server["id"],
            "gpu": gpu_name,
            "gpuram": specs.get("gpuram", "?"),
            "ram": round(specs.get("ram", 0), 1),
            "price_hr": round(on_demand, 4),
            "reliability": round(reliability, 3),
            "country": specs.get("net", {}).get("cc", "??"),
            "net_down": specs.get("net", {}).get("down", 0),
            "rating": server.get("rating", {}).get("avg", 0),
        })

    # Sort by price, then by reliability (descending)
    results.sort(key=lambda x: (x["price_hr"], -x["reliability"]))
    return results

# Find available RTX 4090s
servers = client.get_marketplace()
matches = find_best_servers(servers, gpu_filter="RTX 4090", max_price=3.00)

print(f"Found {len(matches)} matching servers:\n")
for s in matches[:10]:
    print(f"  Server #{s['id']} | {s['gpu']} ({s['gpuram']}GB) | "
          f"${s['price_hr']}/hr | RAM: {s['ram']}GB | "
          f"Reliability: {s['reliability']} | Country: {s['country']}")
```
This gives you a ranked list of the cheapest, most reliable RTX 4090s currently available. You can adjust the filters for any GPU type, price range, or minimum reliability threshold.
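One subtlety in the pricing data: some listings carry no USD on-demand price at all (they're priced in BTC or CLORE only), so it helps to funnel every lookup through one small helper rather than repeating the currency logic in each filter. A minimal sketch, assuming the marketplace dict shape shown above (`usd_price` is an illustrative name and the sample values are made up):

```python
def usd_price(server):
    """Return the on-demand USD price for a server, or None when the
    listing has no USD price (comparing BTC amounts against a USD
    ceiling would silently skew any filter)."""
    return server.get("price", {}).get("usd", {}).get("on_demand_usd")

# Illustrative listings in the marketplace dict shape used above
priced = {"price": {"usd": {"on_demand_usd": 0.24}}}
unpriced = {"price": {}}

print(usd_price(priced))    # 0.24
print(usd_price(unpriced))  # None
```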
Advanced Filtering: PCIe Bandwidth, Disk Speed, Network
For performance-sensitive workloads, filter on hardware details:
```python
def find_premium_servers(servers, gpu_filter="RTX 4090"):
    """Find high-performance servers with proper PCIe, fast disk, and network."""
    results = []
    for server in servers:
        if server.get("rented"):
            continue
        gpu_name = server.get("specs", {}).get("gpu", "")
        if gpu_filter.lower() not in gpu_name.lower():
            continue
        specs = server.get("specs", {})

        # Filter: PCIe Gen 4 x16 (important for a 4090 -- x1 cripples performance)
        pcie_rev = specs.get("pcie_rev", 0)
        pcie_width = specs.get("pcie_width", 0)
        if pcie_rev < 4 or pcie_width < 16:
            continue

        # Filter: fast network (500 Mbps+ for model downloads)
        net_down = specs.get("net", {}).get("down", 0)
        if net_down < 500:
            continue

        # Filter: adequate RAM
        ram = specs.get("ram", 0)
        if ram < 30:
            continue

        # Skip servers without a USD on-demand price
        on_demand = server.get("price", {}).get("usd", {}).get("on_demand_usd")
        if on_demand is None:
            continue

        results.append({
            "id": server["id"],
            "gpu": gpu_name,
            "price_hr": round(on_demand, 4),
            "ram": round(ram, 1),
            "pcie": f"Gen{pcie_rev} x{pcie_width}",
            "net_down": round(net_down, 0),
            "disk_speed": round(specs.get("disk_speed", 0), 0),
        })

    results.sort(key=lambda x: x["price_hr"])
    return results

premium = find_premium_servers(servers, "RTX 4090")
print(f"Found {len(premium)} premium RTX 4090 servers")
for s in premium[:5]:
    print(f"  #{s['id']} | ${s['price_hr']}/hr | {s['ram']}GB RAM | "
          f"{s['pcie']} | {s['net_down']}Mbps | disk: {s['disk_speed']}MB/s")
```
Step 5: Create a Rental Order
Once you've found the right server, creating an order is a single API call:
```python
# Rent server #8835 (the RTX 4090 we found earlier)
order = client.create_order(
    server_id=8835,
    image="pytorch/pytorch:2.3.0-cuda12.1-cudnn8-devel",
    ports={"22": "tcp", "8888": "http"},  # SSH + Jupyter
    env={
        "JUPYTER_TOKEN": "my-secret-token",
    },
    spot=False,  # On-demand (set True for GigaSPOT pricing)
)

print("Order created!")
print(f"  Order ID: {order['id']}")
print(f"  Status: {order['status']}")
print(f"  SSH: {order.get('ssh_command', 'Pending...')}")
```
Order Configuration Options

| Parameter | Description | Example |
|---|---|---|
| `server_id` | Server to rent (from the marketplace) | `8835` |
| `image` | Docker image to deploy | `"pytorch/pytorch:2.3.0-cuda12.1-cudnn8-devel"` |
| `ports` | Ports to expose | `{"22": "tcp", "8080": "http"}` |
| `env` | Environment variables | `{"API_KEY": "xxx"}` |
| `spot` | Use GigaSPOT (cheaper, interruptible) | `True` / `False` |
| `command` | Custom startup command | `"jupyter lab --ip=0.0.0.0"` |
Using GigaSPOT for 30–50% Savings
For experimental workloads where you can tolerate interruptions:
```python
spot_order = client.create_order(
    server_id=8835,
    image="pytorch/pytorch:2.3.0-cuda12.1-cudnn8-devel",
    ports={"22": "tcp"},
    spot=True,  # GigaSPOT: interruptible, much cheaper
)
print("Spot order created at reduced price!")
```
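Since GigaSPOT orders can be reclaimed at any moment, it's worth wrapping them in a resubmit loop. A sketch, assuming the `create_order`/`get_order` methods shown in this tutorial; the terminal status strings (`"completed"`, `"terminated"`, `"error"`) are assumptions to verify against the API docs:

```python
import time

def keep_spot_alive(client, order_spec, poll_seconds=60):
    """Create a spot order and resubmit it whenever it gets interrupted.

    order_spec: kwargs for client.create_order (server_id, image, ports, ...).
    Returns the id of the order that reached the 'completed' state.
    """
    order = client.create_order(spot=True, **order_spec)
    while True:
        state = client.get_order(order["id"]).get("status")
        if state == "completed":
            return order["id"]
        if state in ("terminated", "error"):
            # Interrupted or reclaimed: submit a fresh spot order
            order = client.create_order(spot=True, **order_spec)
        time.sleep(poll_seconds)
```

Note this resubmits from scratch; your job needs to checkpoint to persistent storage to actually resume where it left off.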
Step 6: Monitor Your Running Workload
Check order status, GPU utilization, and logs:
```python
import time

def monitor_order(client, order_id, interval=30):
    """Monitor an active order until completion."""
    while True:
        status = client.get_order(order_id)
        state = status.get("status", "unknown")
        gpu_util = status.get("gpu_utilization", "N/A")
        cost_so_far = status.get("total_cost", 0)
        runtime = status.get("runtime_hours", 0)

        print(f"[{time.strftime('%H:%M:%S')}] Status: {state} | "
              f"GPU: {gpu_util}% | Cost: ${cost_so_far:.4f} | "
              f"Runtime: {runtime:.2f}h")

        if state in ("completed", "terminated", "error"):
            print(f"\nOrder finished with status: {state}")
            break
        time.sleep(interval)

# Monitor our order
monitor_order(client, order["id"])
```
Example output:

```
[14:30:15] Status: running | GPU: 97% | Cost: $0.0234 | Runtime: 0.25h
[14:30:45] Status: running | GPU: 95% | Cost: $0.0312 | Runtime: 0.33h
[14:31:15] Status: running | GPU: 98% | Cost: $0.0390 | Runtime: 0.42h
```
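A variant of this loop can enforce a hard spending cap: poll the order and cancel it once cost crosses a budget. A sketch, assuming the `get_order`/`cancel_order` methods and the `total_cost` field shown above (treat the field name as an assumption to verify):

```python
import time

def monitor_with_budget(client, order_id, max_cost_usd, interval=30):
    """Poll an order and cancel it if accumulated cost reaches max_cost_usd.

    Returns the final order status dict.
    """
    while True:
        status = client.get_order(order_id)
        if status.get("status") in ("completed", "terminated", "error"):
            return status
        if status.get("total_cost", 0) >= max_cost_usd:
            # Hard stop: budget exhausted
            client.cancel_order(order_id)
            return client.get_order(order_id)
        time.sleep(interval)
```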
Step 7: Automate the Full Workflow
Here's a complete script that ties everything together — find the cheapest GPU, rent it, run a training job, and shut it down:
```python
#!/usr/bin/env python3
"""
Automated GPU workflow: find the cheapest RTX 4090, rent it,
run a fine-tuning job, download results, terminate.
"""
from clore_ai import CloreClient
import os
import subprocess
import sys
import time

# --- Configuration ---
API_KEY = os.getenv("CLORE_API_KEY")
TARGET_GPU = "RTX 4090"
MAX_PRICE = 3.00  # USD per hour
MIN_RELIABILITY = 0.95
DOCKER_IMAGE = "pytorch/pytorch:2.3.0-cuda12.1-cudnn8-devel"
TRAINING_SCRIPT = "train.py"

# --- Initialize ---
client = CloreClient(api_key=API_KEY)

# --- Step 1: Find the best server ---
print("🔍 Searching marketplace...")
servers = client.get_marketplace()
available = [s for s in servers if not s.get("rented")]

best = None
for server in available:
    gpu = server.get("specs", {}).get("gpu", "")
    if TARGET_GPU.lower() not in gpu.lower():
        continue
    if server.get("reliability", 0) < MIN_RELIABILITY:
        continue
    # Skip servers without a USD on-demand price
    hourly = server.get("price", {}).get("usd", {}).get("on_demand_usd")
    if hourly is None or hourly > MAX_PRICE:
        continue
    if best is None or hourly < best["price"]:
        best = {"id": server["id"], "gpu": gpu, "price": hourly}

if not best:
    print("❌ No suitable servers found. Try increasing MAX_PRICE.")
    sys.exit(1)

print(f"✅ Found: Server #{best['id']} | {best['gpu']} | ${best['price']}/hr")

# --- Step 2: Create the order ---
print("🚀 Creating order...")
order = client.create_order(
    server_id=best["id"],
    image=DOCKER_IMAGE,
    ports={"22": "tcp"},
    spot=False,
)
order_id = order["id"]
print(f"✅ Order #{order_id} created. Waiting for startup...")

# --- Step 3: Wait until the container is running ---
for _ in range(60):  # Wait up to 5 minutes
    status = client.get_order(order_id)
    if status.get("status") == "running":
        break
    time.sleep(5)
else:
    print("❌ Server didn't start in time. Cancelling...")
    client.cancel_order(order_id)
    sys.exit(1)

ssh_cmd = status.get("ssh_command")
print(f"✅ Server is running! SSH: {ssh_cmd}")

# --- Step 4: Upload data and run training ---
print("📤 Uploading training script...")
# Parse SSH details; ssh_cmd format: "ssh -p <port> root@<host>"
ssh_parts = ssh_cmd.split()
port = ssh_parts[ssh_parts.index("-p") + 1]
host = ssh_parts[-1]

subprocess.run(["scp", "-P", port, TRAINING_SCRIPT, f"{host}:/workspace/"], check=True)
subprocess.run(["scp", "-P", port, "data/training_data.jsonl", f"{host}:/workspace/data/"], check=True)

print("🏋️ Starting training...")
subprocess.run(["ssh", "-p", port, host, f"cd /workspace && python {TRAINING_SCRIPT}"], check=True)

# --- Step 5: Download results ---
print("📥 Downloading model...")
subprocess.run(["scp", "-r", "-P", port, f"{host}:/workspace/outputs/final-model", "./results/"], check=True)

# --- Step 6: Terminate ---
print("🛑 Terminating order...")
client.cancel_order(order_id)

# --- Summary ---
final = client.get_order(order_id)
total_cost = final.get("total_cost", 0)
runtime = final.get("runtime_hours", 0)

print(f"\n{'=' * 50}")
print("✅ DONE!")
print(f"  Runtime: {runtime:.2f} hours")
print(f"  Total cost: ${total_cost:.4f}")
print("  Model saved to: ./results/final-model/")
print(f"{'=' * 50}")
```
Save this as `gpu_workflow.py` and run:

```shell
python gpu_workflow.py
```
The entire workflow — finding a server, renting it, training, downloading results, and shutting down — happens in one script with zero manual intervention.
Real-World Automation Patterns
Pattern 1: Scheduled Training with Cron
Run training jobs automatically at off-peak hours (when GPUs are cheaper):
```shell
# crontab -e
0 3 * * * /usr/bin/python3 /home/user/gpu_workflow.py >> /var/log/gpu-training.log 2>&1
```
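If a run ever outlasts the cron interval, two workflows can end up renting GPUs at once. On Linux, `flock` is a common guard against overlapping runs (the lock and log paths here are illustrative):

```shell
# crontab -e
# flock -n skips this run if the previous one still holds the lock
0 3 * * * /usr/bin/flock -n /tmp/gpu-training.lock /usr/bin/python3 /home/user/gpu_workflow.py >> /var/log/gpu-training.log 2>&1
```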
Pattern 2: Cost-Aware Spot Bidding
Monitor prices and only rent when GPUs drop below your target price:
```python
import time

def wait_for_price(client, gpu_filter, target_price, check_interval=300):
    """Wait until a GPU is available below the target USD price."""
    while True:
        servers = client.get_marketplace()
        for server in servers:
            if server.get("rented"):
                continue
            gpu = server.get("specs", {}).get("gpu", "")
            if gpu_filter.lower() not in gpu.lower():
                continue
            # Only consider servers with a USD on-demand price
            hourly = server.get("price", {}).get("usd", {}).get("on_demand_usd")
            if hourly is not None and hourly <= target_price:
                return server
        print(f"No {gpu_filter} under ${target_price}/hr. Checking again in {check_interval}s...")
        time.sleep(check_interval)

# Wait for an RTX 4090 under $0.08/hr, then rent it
server = wait_for_price(client, "RTX 4090", target_price=0.08)
print(f"Found cheap server #{server['id']}! Renting now...")
```
Pattern 3: Multi-GPU Parallel Experiments
Run hyperparameter sweeps across multiple GPUs simultaneously:
```python
import threading

def run_experiment(client, config):
    """Run a single experiment on a rented GPU."""
    servers = client.get_marketplace()
    # Cheapest available match, reusing find_best_servers from Step 4
    server = find_best_servers(servers, gpu_filter="RTX 4090", max_price=3.00)[0]
    order = client.create_order(
        server_id=server["id"],
        image="pytorch/pytorch:2.3.0-cuda12.1-cudnn8-devel",
        ports={"22": "tcp"},
        env={
            "LEARNING_RATE": str(config["lr"]),
            "BATCH_SIZE": str(config["batch_size"]),
            "EPOCHS": str(config["epochs"]),
        },
    )
    # ... run training, download results, terminate
    return {"config": config, "order_id": order["id"]}

# Launch 4 experiments in parallel
configs = [
    {"lr": 1e-4, "batch_size": 4, "epochs": 3},
    {"lr": 2e-4, "batch_size": 4, "epochs": 3},
    {"lr": 1e-4, "batch_size": 8, "epochs": 3},
    {"lr": 2e-4, "batch_size": 8, "epochs": 5},
]

threads = []
for config in configs:
    t = threading.Thread(target=run_experiment, args=(client, config))
    threads.append(t)
    t.start()

for t in threads:
    t.join()

print("All experiments complete!")
```
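One caveat with the threading sketch: every worker fetches the marketplace independently, so two threads can race to rent the same cheapest server. Assigning distinct servers up front and collecting results through a pool avoids that. A sketch, where `run_one` is a hypothetical callback that rents the given server, trains, and terminates:

```python
from concurrent.futures import ThreadPoolExecutor

def run_sweep(client, configs, server_ids, run_one):
    """Launch one experiment per (config, server_id) pair in parallel.

    Pre-assigning distinct server_ids means no two workers try to rent
    the same machine; run_one(client, config, server_id) does the actual
    rent/train/terminate cycle and returns a result dict.
    """
    with ThreadPoolExecutor(max_workers=len(configs)) as pool:
        futures = [
            pool.submit(run_one, client, cfg, sid)
            for cfg, sid in zip(configs, server_ids)
        ]
        # Collect results in submission order (raises if a worker failed)
        return [f.result() for f in futures]
```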
API Reference Quick Guide
Here's a cheatsheet of the most useful SDK methods:
| Method | Description | Returns |
|---|---|---|
| `client.get_marketplace()` | List all servers with specs and pricing | List of server objects |
| `client.create_order(...)` | Rent a server | Order object with ID |
| `client.get_order(order_id)` | Check order status | Order status object |
| `client.get_orders()` | List all your active orders | List of orders |
| `client.cancel_order(order_id)` | Terminate a rental | Confirmation |
| `client.get_balance()` | Check your account balance | Balance in USD/CLORE |
For the complete API reference, visit the Clore.ai API documentation.
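A balance guard can run before any order is placed so a sweep never half-finishes for lack of funds. A sketch; the shape of the `get_balance()` response (a dict with a `usd` field) is an assumption to verify against the API reference:

```python
def ensure_balance(client, min_usd):
    """Abort early if the account balance is below min_usd (in USD)."""
    balance = client.get_balance().get("usd", 0)
    if balance < min_usd:
        raise RuntimeError(
            f"Balance ${balance:.2f} is below the ${min_usd:.2f} needed; top up first."
        )
    return balance
```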
Tips and Best Practices
- Always use environment variables for API keys — never hardcode credentials
- Add error handling — network issues happen; wrap API calls in try/except
- Set up billing alerts — monitor spending programmatically with `get_balance()`
- Use GigaSPOT for dev/test — save 30–50% on non-critical workloads
- Filter by PCIe bandwidth — an RTX 4090 on PCIe x1 is dramatically slower than on x16
- Check reliability scores — servers above 0.98 rarely have issues
- Use tmux on the remote server — so training survives SSH disconnects
- Download results before terminating — once you cancel the order, the data is gone
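The error-handling tip can be a small reusable wrapper. A sketch (the retry count and backoff are assumptions; tune them for your jobs):

```python
import time

def with_retries(call, attempts=3, backoff=2.0):
    """Run a zero-argument API call, retrying with exponential backoff
    on transient errors (marketplace fetches occasionally time out)."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise  # Out of retries: surface the real error
            time.sleep(backoff * 2 ** attempt)

# Usage: servers = with_retries(lambda: client.get_marketplace())
```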
What You Can Build
The SDK opens up workflows that aren't practical with a web dashboard:
- CI/CD GPU testing — automatically spin up GPUs to test ML pipelines on every commit
- Batch inference pipelines — process thousands of images/documents overnight on spot GPUs
- Model evaluation grids — benchmark your model across different GPU types and quantization levels
- Auto-scaling inference — spin up more GPUs when demand spikes, tear them down when it subsides
- Cost optimization bots — automatically migrate workloads to cheaper servers when prices change
The combination of programmatic access and rock-bottom pricing makes Clore.ai a powerful primitive for AI infrastructure automation.
Get started in 60 seconds. Install the SDK (pip install clore-ai), grab your API key from clore.ai, and start building. Check the API documentation for the full reference, or explore the marketplace to see what's available right now.