An easier way to train and deploy generative AI
dstack simplifies training, fine-tuning, and deploying generative AI models, leveraging the open-source ecosystem.
Dev environments
Want to quickly provision a dev environment for interactive development?
Specify a Docker image, your IDE, and required compute resources, then launch it via the CLI.
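For illustration, a minimal dev environment configuration might look like the sketch below. The image, IDE, and GPU values are placeholders, and the exact YAML schema depends on your dstack version.

```yaml
# .dstack.yml: a minimal dev environment sketch
type: dev-environment
# Optional: a custom Docker image; omit it to use dstack's default image
image: nvcr.io/nvidia/pytorch:23.10-py3
ide: vscode
resources:
  gpu: 24GB
```

Depending on the dstack version, the environment is then launched from the repository directory with `dstack run .` or `dstack apply -f .dstack.yml`.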

Tasks
Run a custom training job or batch task with your code.
Specify a Docker image (optional), the commands to run, exposed ports, and compute resources, then schedule the run via the CLI or Python API.
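As a sketch, a batch training task could be described as follows; the training script, requirements file, port, and GPU size are assumptions for illustration, not dstack defaults.

```yaml
# .dstack.yml: a minimal task sketch
type: task
# Optional: uncomment to run inside a custom Docker image
# image: nvcr.io/nvidia/pytorch:23.10-py3
commands:
  - pip install -r requirements.txt
  - python train.py
ports:
  - 6006  # expose TensorBoard, for example
resources:
  gpu: 80GB
```

The same task can also be submitted programmatically through the Python API instead of the CLI.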
Services
Deploy your model using custom code.
Specify a Docker image (optional), the commands to run, and the required compute resources, then deploy the model as a public endpoint via the CLI or Python API.
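For example, a service wrapping an off-the-shelf inference server might be sketched like this; the image, model ID, and port are placeholder values, not dstack requirements.

```yaml
# .dstack.yml: a minimal service sketch
type: service
image: ghcr.io/huggingface/text-generation-inference:latest
env:
  - MODEL_ID=mistralai/Mistral-7B-Instruct-v0.1
commands:
  - text-generation-launcher --port 8000
port: 8000
resources:
  gpu: 80GB
```

Once run via the CLI or Python API, dstack provisions the compute and exposes the service's port as a public endpoint.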
Use our cloud GPUs or bring your own cloud account
dstack Cloud
A managed solution for training and deploying gen AI models
with on-demand cloud GPUs. Compatible with the open-source CLI and API.
H100 (80GB) from $2.10/h
A100 (80GB) from $1.40/h
L40 (48GB) from $1.05/h
A6000 (48GB) from $0.43/h
Open-source
A convenient CLI and API for training and deploying gen AI models,
compatible with any cloud provider; an example backend configuration follows the list below.
Amazon Web Services
Azure
Google Cloud Platform
Lambda
TensorDock
Vast.ai
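To use your own cloud account with the open-source version, you describe backends in the dstack server's configuration file. The sketch below assumes an AWS backend with default credentials; the file location and exact schema depend on the dstack version.

```yaml
# ~/.dstack/server/config.yml: backend configuration sketch
projects:
  - name: main
    backends:
      - type: aws
        creds:
          type: default
```

Other providers, such as Azure, GCP, or Lambda, can be added analogously as entries with their own type and credentials.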