SaladCloud Developer Hub
Build reliable, cost-effective, high-performance applications with Salad’s globally distributed cloud.
Check out our developer resources
Everything you need to deploy workloads and get started on SaladCloud
Run popular models or bring your own
Architecture guidance and best practices
Become a SaladCloud expert and build at enormous scale.
An introduction to Salad Container Engine (SCE).
Fully managed container orchestration
Build high-performance apps on the world's largest distributed cloud.
Introduction to distributed compute
Understand the ins and outs of Salad's unique infrastructure.
Architecture guidance
Start integrating directly with SaladCloud’s robust API.
How to use the API
Performing long-running tasks on SaladCloud.
Key considerations
Enable real-time inference on SaladCloud.
Main requirements
Explore our solutions by use-case
How-to guides, tutorials, cookbooks, case studies and more to help you get started.

Deploy an AI image generation service on SaladCloud

How to manage a large number of Stable Diffusion models
Flux.1 Schnell benchmark: 5,243 images per dollar

Civitai powers 10 million AI images per day on Salad

Stable Diffusion 1.5 benchmark: 14,000+ images per dollar

Blend cuts AI inference cost by 85% for 3X more scale


A Stateless and Extendable API for ComfyUI

Docker run on SaladCloud

Fine-tuning Stable Diffusion with Dreambooth from $0.1426 per training job

Fine-Tuning Stable Diffusion XL (SDXL) with interruptible GPUs and LoRA for low cost
Managing long-running tasks on SaladCloud with RabbitMQ

Build high-performance storage solutions

Managing long-running tasks on SaladCloud with SQS

Kelpie API: Long-running jobs on interruptible hardware


OpenMM benchmark on 25 consumer GPUs, 95% less cost

GROMACS benchmark on 30 GPUs, 90+% cost savings
Klyne accelerates AI drug discovery on SaladCloud

Managing long-running tasks on SaladCloud with RabbitMQ

Managing long-running tasks on SaladCloud with SQS

Build high-performance storage solutions


Salad Transcription API overview

How to transcribe YouTube videos with Salad Transcription API
Transcribing 1M hours of YouTube videos with Parakeet TDT 1.1B for $1,260

Transcribe YouTube videos with Salad Transcription API

Migrate from Amazon Transcribe to Salad

Migrate from Assembly AI to Salad

Frequently Asked Questions
All GPUs on SaladCloud belong to Nvidia's RTX/GTX class. Our GPU selection policy is strict: we only onboard AI-enabled, high-performance, compute-capable GPUs to the network.
We have several layers of security to keep your containers safe, encrypting them in transit and at rest. Containers run in an isolated environment on our nodes, keeping your data private and ensuring you have the same compute environment regardless of the machine you're running on.
Since SaladCloud is a compute-share network, our GPUs have longer cold start times than usual and are subject to interruption. The highest vRAM on the network is 24 GB. Workloads requiring extremely low latency are not a fit for our network.
Workloads are deployed to SaladCloud via Docker containers. SCE is a massively scalable orchestration engine, purpose-built to simplify this container deployment. Containerize your model and inference server, choose the hardware, and we take care of the rest.
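The containerization step described above can be sketched with a minimal Dockerfile. The base image, file names, and port below are illustrative assumptions about a Python inference server, not SaladCloud requirements; adapt them to your own stack.

```dockerfile
# Illustrative sketch: package a model and its inference server in one image.
# Base image, files, and port are assumptions for this example.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer caches between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Hypothetical server script that loads your model and serves HTTP requests.
COPY server.py .
EXPOSE 8000

CMD ["python", "server.py"]
```

Build and push the image to a container registry that SaladCloud can pull from, then create a container group, selecting the GPU type and replica count you need.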
GPUs on SaladCloud are similar to spot instances. Some providers share GPUs for 20-22 hours a day; others share them for 1-2 hours per day. Users running workloads select the GPU types and quantity. SaladCloud handles all the orchestration in the backend to deliver the GPU time your workload requires.
Owners earn rewards (in the form of Salad balance) for sharing their compute. Many compute providers earn $30-$200 per month on SaladCloud, which they can exchange for games, gift cards, and more.
Our constant host intrusion detection tests look for operations like folder access and opening a shell. If a host machine tries to access the Linux environment, we automatically implode the environment and blacklist the machine. We're also bringing Falco into our runtime for a more robust set of checks.
We use a proprietary trust rating system to index node performance, forecast availability, and select the optimal hardware configuration for deployment. We also run proprietary tests on every GPU to determine its fit for our network. Salad Container Engine automatically reallocates your workload to another GPU (same type and class) when a resource goes offline.
Still have questions? Read our docs.
SaladCloud Docs
Check out the docs
Get started in minutes. Explore the SaladCloud documentation.
