AI container orchestration
for everyone
dstack is an open-source alternative to Kubernetes and Slurm, designed to simplify the development and deployment of AI. It works with the top cloud providers as well as on-prem servers, and supports NVIDIA and AMD GPUs as well as TPUs.
Dev environments
Dev environments allow you to provision a remote machine, set up with your code and your favorite IDE, with just one command.
Dev environments are perfect for interactively running code in your IDE or a notebook before scheduling a task or deploying a service.
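As a rough illustration, a dev environment is described in a small YAML file (the field names below follow dstack's configuration schema; the name and resource values are just examples):

```yaml
type: dev-environment
name: my-dev
# The Python version to pre-install
python: "3.11"
# The IDE to set up on the remote machine
ide: vscode
resources:
  # Request a GPU with at least 24GB of VRAM
  gpu: 24GB
```

Applying a configuration like this provisions the machine and attaches your desktop IDE to it.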
![](https://raw.githubusercontent.com/dstackai/static-assets/main/static-assets/images/dstack-dev-environment.gif)
![](https://raw.githubusercontent.com/dstackai/static-assets/main/static-assets/images/dstack-task.gif)
Tasks
A task allows you to schedule a job or run a web app. It lets you configure dependencies, resources, ports, and more. Tasks can be distributed and run on clusters.
Tasks are ideal for training and fine-tuning jobs or running apps for development purposes.
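A task configuration might look like the sketch below (field names follow dstack's YAML schema; the commands and resources are illustrative):

```yaml
type: task
name: train
python: "3.11"
# Run the task across multiple nodes of a cluster
nodes: 2
commands:
  - pip install -r requirements.txt
  - python train.py
resources:
  gpu: 24GB
```

The `commands` run on each provisioned node; `nodes` turns the task into a distributed run on a cluster.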
Services
Services allow you to deploy web apps or models as private or public auto-scalable endpoints. You can configure dependencies, resources, authorization, auto-scaling rules, etc.
Once deployed, the web app or model can be used by anyone on the team.
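For example, deploying a model behind an auto-scaling endpoint could be sketched as follows (field names follow dstack's YAML schema; the image, model, and scaling values are illustrative):

```yaml
type: service
name: llm-endpoint
# Use an inference server image
image: vllm/vllm-openai:latest
commands:
  - vllm serve meta-llama/Meta-Llama-3.1-8B-Instruct
# The port the app listens on
port: 8000
resources:
  gpu: 24GB
# Scale between 1 and 4 replicas based on load
replicas: 1..4
```

Once applied, the service is exposed at a stable endpoint URL, optionally protected by authorization.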
![](https://raw.githubusercontent.com/dstackai/static-assets/main/static-assets/images/dstack-service-openai.gif)
![](https://raw.githubusercontent.com/dstackai/static-assets/main/static-assets/images/dstack-cloud-ssh-fleet-1.gif)
Fleets
Fleets enable efficient provisioning and management of clusters and instances, both in the cloud and on-prem.
Once a fleet is created, it can be reused by dev environments, tasks, and services.
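A fleet is also defined declaratively. The sketch below shows an on-prem fleet built from SSH-reachable hosts (field names follow dstack's YAML schema; the user, key path, and host addresses are placeholders):

```yaml
type: fleet
name: on-prem-fleet
ssh_config:
  user: ubuntu
  identity_file: ~/.ssh/id_rsa
  hosts:
    - 192.168.1.10
    - 192.168.1.11
```

For cloud fleets, you instead specify the number of nodes and the resources per node, and dstack provisions matching instances with the configured providers.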
Why ML engineers love dstack
![](assets/images/quotes/spott.jpg)
Andrew Spott
ML Engineer at Stealth Startup
Thanks to @dstack, I get the convenience of having a personal Slurm cluster and using budget-friendly cloud GPUs, without paying the super-high premiums charged by the big three.
![](assets/images/quotes/alvarobartt.jpg)
Alvaro Bartolome
ML Engineer at Argilla
With @dstack it's incredibly easy to define a configuration within a repository and run it without worrying about GPU availability. It lets you focus on data and your research.
![](assets/images/quotes/chansung.jpg)
Park Chansung
ML Researcher at ETRI
Thanks to @dstack, I can effortlessly access the top GPU options across different clouds, saving me time and money while pushing my AI work forward.
![](assets/images/quotes/eckart.png)
Eckart Burgwedel
CEO at Uberchord
With @dstack, running LLMs on a cloud GPU is as easy as running a local Docker container. It combines the ease of Docker with the auto-scaling capabilities of K8S.
![](assets/images/quotes/cudopete.png)
Peter Hill
Co-Founder at CUDO Compute
@dstack simplifies infrastructure provisioning and AI development. If your team is on the lookout for an AI platform, I wholeheartedly recommend @dstack.
Get started in under a minute
Have questions or need help?
Talk to us
Discord