Orchestrate GPU workloads across clouds

dstack is an open-source framework for orchestrating GPU workloads across various cloud providers.

It offers a simple cloud-agnostic interface for training, fine-tuning, inference, and development of generative AI models.

Get started Join Discord

Use multiple cloud GPU providers

Training

Pre-train or fine-tune LLMs or other state-of-the-art generative AI models across multiple cloud GPU providers, ensuring data privacy, GPU availability, and cost-efficiency.
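As an illustrative sketch, a training job in dstack is described declaratively in a YAML configuration; the file name, commands, and exact schema below are assumptions for illustration, not taken from this page:

```yaml
# Hypothetical task configuration (e.g. .dstack.yml) — schema is an assumption
type: task
commands:
  - pip install -r requirements.txt
  - python fine_tune.py
resources:
  gpu: 24GB
```

With a configuration like this, dstack can pick whichever configured cloud currently offers a matching GPU at the best price.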

Learn more

Inference

Deploy LLMs and other state-of-the-art generative AI models across multiple cloud GPU providers, ensuring data privacy, GPU availability, and cost-efficiency.
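A deployment can be sketched the same way as a service configuration; the command, port, and schema here are assumptions shown only to illustrate the idea:

```yaml
# Hypothetical service configuration — schema is an assumption
type: service
commands:
  - python serve.py
port: 8000
```

dstack then provisions a cloud GPU instance and exposes the model endpoint on the configured port.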

Learn more

Dev environments

Provision development environments over multiple cloud GPU providers, ensuring data privacy, GPU availability, and cost-efficiency.
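A dev environment can likewise be sketched as a short configuration; the `ide` and `resources` fields below are assumptions for illustration:

```yaml
# Hypothetical dev environment configuration — schema is an assumption
type: dev-environment
ide: vscode
resources:
  gpu: 24GB
```

This gives you an IDE attached to a cloud GPU instance without managing the machine yourself.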

Learn more

Get started in less than a minute

$ pip install "dstack[all]" -U
$ dstack start

The server is available at http://127.0.0.1:3000?token=b934d226-e24a-4eab-eb92b353b10f

Done! Configure clouds, and use the CLI or API to orchestrate GPU workloads.
