Quickstart¶
Prerequisites
Before using dstack, ensure you've installed the server and the CLI.
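If you haven't set them up yet, a minimal local setup typically looks like the following (a sketch assuming a pip-based install and the open-source server; see the installation guide for other options):
$ pip install "dstack[all]" -U
$ dstack server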
Create a fleet¶
Before you can submit your first run, you have to create a fleet.
If you're using cloud providers or Kubernetes clusters and have configured the corresponding backends, create a fleet as follows:
type: fleet
name: default
# Allow provisioning of up to 2 instances
nodes: 0..2
# Deprovision instances above the minimum if they remain idle
idle_duration: 1h
resources:
  # Allow provisioning of up to 8 GPUs
  gpu: 0..8
Pass the fleet configuration to dstack apply:
$ dstack apply -f fleet.dstack.yml
 #  BACKEND  REGION           RESOURCES                 SPOT  PRICE
 1  gcp      us-west4         2xCPU, 8GB, 100GB (disk)  yes   $0.010052
 2  azure    westeurope       2xCPU, 8GB, 100GB (disk)  yes   $0.0132
 3  gcp      europe-central2  2xCPU, 8GB, 100GB (disk)  yes   $0.013248

Create the fleet? [y/n]: y

 FLEET    INSTANCE  BACKEND  RESOURCES  PRICE  STATUS  CREATED
 default  -         -        -          -      -       10:36
If `nodes` is a range that starts above 0, dstack provisions the minimum number of instances up front, while any additional instances are created on demand.
Setting the `nodes` range to start above `0` is supported only for VM-based backends.
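For example (a sketch; only the `nodes` value differs from the configuration above), the following pre-creates one instance and scales up to two on demand:
nodes: 1..2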
If the fleet needs to be a cluster, the `placement` property must be set to `cluster`.
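For example, a minimal cluster fleet might look like this (a sketch; the name is a placeholder, and `placement: cluster` is the only addition relative to a regular fleet):
type: fleet
name: my-cluster
nodes: 2
# Co-locate instances for fast inter-node connectivity
placement: cluster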
If you have a group of on-prem servers accessible via SSH, you can create an SSH fleet as follows:
type: fleet
name: my-fleet
ssh_config:
  user: ubuntu
  identity_file: ~/.ssh/id_rsa
  hosts:
    - 3.255.177.51
    - 3.255.177.52
Pass the fleet configuration to dstack apply:
$ dstack apply -f fleet.dstack.yml
Provisioning...
---> 100%
 FLEET     INSTANCE  GPU             PRICE  STATUS  CREATED
 my-fleet  0         L4:24GB (spot)  $0     idle    3 mins ago
           1         L4:24GB (spot)  $0     idle    3 mins ago
Hosts must have Docker and GPU drivers installed and meet the other requirements.
If the fleet needs to be a cluster, the `placement` property must be set to `cluster`.
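Once a fleet is created, you can inspect it and its instances from the CLI (a sketch assuming the `dstack fleet` command of recent CLI versions; the exact output columns may differ):
$ dstack fleet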
Submit your first run¶
dstack supports three types of run configurations: dev environments, tasks, and services.
A dev environment lets you provision an instance and access it with your desktop IDE.
Create the following run configuration:
type: dev-environment
name: vscode
# If `image` is not specified, dstack uses its default image
python: "3.11"
#image: dstackai/base:py3.13-0.7-cuda-12.1
ide: vscode
# Uncomment to request resources
#resources:
#  gpu: 24GB
Apply the configuration via dstack apply:
$ dstack apply -f .dstack.yml
 #  BACKEND  REGION           RESOURCES                 SPOT  PRICE
 1  gcp      us-west4         2xCPU, 8GB, 100GB (disk)  yes   $0.010052
 2  azure    westeurope       2xCPU, 8GB, 100GB (disk)  yes   $0.0132
 3  gcp      europe-central2  2xCPU, 8GB, 100GB (disk)  yes   $0.013248
Submit the run vscode? [y/n]: y
Launching `vscode`...
---> 100%
To open in VS Code Desktop, use this link:
vscode://vscode-remote/ssh-remote+vscode/workflow
Open the link to access the dev environment using your desktop IDE. Alternatively, you can access it via `ssh <run name>`.
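For example, for the run above (a sketch; dstack updates your SSH config when the run starts, so the run name works as an SSH host):
$ ssh vscode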
A task allows you to schedule a job or run a web app. Tasks can be distributed and can forward ports.
Create the following run configuration:
type: task
name: streamlit
# If `image` is not specified, dstack uses its default image
python: "3.11"
#image: dstackai/base:py3.13-0.7-cuda-12.1
# Commands of the task
commands:
  - pip install streamlit
  - streamlit hello
# Ports to forward
ports:
  - 8501
# Uncomment to request resources
#resources:
#  gpu: 24GB
By default, tasks run on a single instance. To run a distributed task, specify `nodes`, and dstack will run it on a cluster (see the sketch below).
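A minimal sketch of a distributed task (the name and command are placeholders; `nodes` is the only addition relative to a regular task):
type: task
name: train-distributed
# Run the task on 2 instances of a cluster fleet
nodes: 2
python: "3.11"
commands:
  - python train.py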
Run the `streamlit` configuration via `dstack apply`:
$ dstack apply -f task.dstack.yml
 #  BACKEND  REGION           RESOURCES                 SPOT  PRICE
 1  gcp      us-west4         2xCPU, 8GB, 100GB (disk)  yes   $0.010052
 2  azure    westeurope       2xCPU, 8GB, 100GB (disk)  yes   $0.0132
 3  gcp      europe-central2  2xCPU, 8GB, 100GB (disk)  yes   $0.013248
Submit the run streamlit? [y/n]: y
Provisioning `streamlit`...
---> 100%
Welcome to Streamlit. Check out our demo in your browser.
Local URL: http://localhost:8501
If you specified ports, they will be automatically forwarded to localhost for convenient access.
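If you detach from the run, it keeps going; you can re-attach and restore port forwarding (a sketch assuming the `dstack attach` command):
$ dstack attach streamlit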
A service allows you to deploy a model or any web app as an endpoint.
Create the following run configuration:
type: service
name: llama31-service
# If `image` is not specified, dstack uses its default image
python: "3.11"
#image: dstackai/base:py3.13-0.7-cuda-12.1
# Required environment variables
env:
  - HF_TOKEN
commands:
  - pip install vllm
  - vllm serve meta-llama/Meta-Llama-3.1-8B-Instruct --max-model-len 4096
# Expose the vllm server port
port: 8000
# Specify a name if it's an OpenAI-compatible model
model: meta-llama/Meta-Llama-3.1-8B-Instruct
# Required resources
resources:
  gpu: 24GB
Run the configuration via dstack apply:
$ HF_TOKEN=...
$ dstack apply -f service.dstack.yml
 #  BACKEND  REGION     INSTANCE       RESOURCES                    SPOT  PRICE
 1  aws      us-west-2  g5.4xlarge     16xCPU, 64GB, 1xA10G (24GB)  yes   $0.22
 2  aws      us-east-2  g6.xlarge      4xCPU, 16GB, 1xL4 (24GB)     yes   $0.27
 3  gcp      us-west1   g2-standard-4  4xCPU, 16GB, 1xL4 (24GB)     yes   $0.27
Submit the run llama31-service? [y/n]: y
Provisioning `llama31-service`...
---> 100%
Service is published at:
http://localhost:3000/proxy/services/main/llama31-service/
Model meta-llama/Meta-Llama-3.1-8B-Instruct is published at:
http://localhost:3000/proxy/models/main/
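Since the model is OpenAI-compatible, you can query the published endpoint with any OpenAI-style client; here's a curl sketch (the path follows the output above, and `<dstack token>` is a placeholder for your dstack user token):
$ curl http://localhost:3000/proxy/models/main/chat/completions \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer <dstack token>' \
    -d '{
          "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
          "messages": [{"role": "user", "content": "Hello!"}]
        }'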
To enable auto-scaling, rate limits, or a custom domain with HTTPS, set up a gateway before running the service.
`dstack apply` automatically provisions instances from the fleets you've created and runs the workload according to the configuration.
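At any point you can list runs and stop the ones you no longer need (using the run names from this guide):
$ dstack ps
$ dstack stop streamlit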
Troubleshooting¶
Something not working? See the troubleshooting guide.
What's next?¶