dev-environment¶
The dev-environment configuration type allows running dev environments.
Configuration files must be inside the project repo, and their names must end with .dstack.yml (e.g., .dstack.yml or dev.dstack.yml are both acceptable). Any configuration can be run via dstack apply.
Examples¶
Python version¶
If you don't specify image, dstack uses its base Docker image pre-configured with python, pip, conda (Miniforge), and essential CUDA drivers. The python property determines which default Docker image is used.
```yaml
type: dev-environment
# The name is optional, if not specified, generated randomly
name: vscode
# If `image` is not specified, dstack uses its base image
python: "3.10"
ide: vscode
```
nvcc
By default, the base Docker image doesn't include nvcc, which is required for building custom CUDA kernels. If you need nvcc, set the corresponding property to true.
```yaml
type: dev-environment
# The name is optional, if not specified, generated randomly
name: vscode
# If `image` is not specified, dstack uses its base image
python: "3.10"
# Ensure nvcc is installed (req. for Flash Attention)
nvcc: true
ide: vscode
```
Docker¶
If you want, you can specify your own Docker image via image.
```yaml
type: dev-environment
# The name is optional, if not specified, generated randomly
name: vscode
# Any custom Docker image
image: ghcr.io/huggingface/text-generation-inference:latest
ide: vscode
```
Private registry
Use the registry_auth property to provide credentials for a private Docker registry.
```yaml
type: dev-environment
# The name is optional, if not specified, generated randomly
name: vscode
# Any private Docker image
image: ghcr.io/huggingface/text-generation-inference:latest
# Credentials of the private Docker registry
registry_auth:
  username: peterschmidt85
  password: ghp_e49HcZ9oYwBzUbcSk2080gXZOU2hiT9AeSR5
ide: vscode
```
Docker and Docker Compose
All backends except runpod, vastai, and kubernetes also allow using Docker and Docker Compose inside dstack runs.
Resources¶
When specifying memory size, you can either set an explicit size (e.g., 24GB) or a range (e.g., 24GB.., 24GB..80GB, or ..80GB).
```yaml
type: dev-environment
# The name is optional, if not specified, generated randomly
name: vscode
ide: vscode
resources:
  # 200GB or more RAM
  memory: 200GB..
  # 4 GPUs from 40GB to 80GB
  gpu: 40GB..80GB:4
  # Shared memory (required by multi-gpu)
  shm_size: 16GB
  # Disk size
  disk: 500GB
```
The gpu property allows specifying not only memory size but also GPU vendor, names, and their quantity. Examples: nvidia (one NVIDIA GPU), A100 (one A100), A10G,A100 (either A10G or A100), A100:80GB (one A100 of 80GB), A100:2 (two A100), 24GB..40GB:2 (two GPUs between 24GB and 40GB), A100:40GB:2 (two A100 GPUs of 40GB).
Google Cloud TPU
To use TPUs, specify the TPU architecture via the gpu property.
```yaml
type: dev-environment
# The name is optional, if not specified, generated randomly
name: vscode
ide: vscode
resources:
  gpu: v2-8
```
Currently, only 8 TPU cores can be specified, supporting single TPU device workloads. Multi-TPU support is coming soon.
Shared memory
If you are using parallel communicating processes (e.g., dataloaders in PyTorch), you may need to configure shm_size, e.g. set it to 16GB.
Environment variables¶
```yaml
type: dev-environment
# The name is optional, if not specified, generated randomly
name: vscode
# Environment variables
env:
  - HF_TOKEN
  - HF_HUB_ENABLE_HF_TRANSFER=1
ide: vscode
```
If you don't assign a value to an environment variable (see HF_TOKEN above), dstack will require the value to be passed via the CLI or set in the current process. For instance, you can define environment variables in a .envrc file and utilize tools like direnv.
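The root reference below also documents a mapping form for env. A minimal sketch of that form, assuming the mapping sets each value directly (the value shown is illustrative):

```yaml
# env as a mapping instead of a list; value is illustrative
env:
  HF_HUB_ENABLE_HF_TRANSFER: "1"
```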
System environment variables¶
The following environment variables are available in any run and are passed by dstack
by default:
| Name | Description |
|---|---|
| DSTACK_RUN_NAME | The name of the run |
| DSTACK_REPO_ID | The ID of the repo |
| DSTACK_GPUS_NUM | The total number of GPUs in the run |
Spot policy¶
You can choose whether to use spot instances, on-demand instances, or any available type.
```yaml
type: dev-environment
# The name is optional, if not specified, generated randomly
name: vscode
ide: vscode
# Use either spot or on-demand instances
spot_policy: auto
```
The spot_policy accepts spot, on-demand, and auto. The default for dev environments is on-demand.
Backends¶
By default, dstack provisions instances in all configured backends. However, you can specify the list of backends:
```yaml
type: dev-environment
# The name is optional, if not specified, generated randomly
name: vscode
ide: vscode
# Use only listed backends
backends: [aws, gcp]
```
Regions¶
By default, dstack uses all configured regions. However, you can specify the list of regions:
```yaml
type: dev-environment
# The name is optional, if not specified, generated randomly
name: vscode
ide: vscode
# Use only listed regions
regions: [eu-west-1, eu-west-2]
```
Volumes¶
Volumes allow you to persist data between runs.
To attach a volume, specify its name via the volumes property along with where to mount its contents:
```yaml
type: dev-environment
# The name is optional, if not specified, generated randomly
name: vscode
ide: vscode
# Map the name of the volume to any path
volumes:
  - name: my-new-volume
    path: /volume_data
```
Once you run this configuration, the volume will be mounted at /volume_data inside the dev environment, and its contents will persist across runs.
Instance volumes
If data persistence is not a strict requirement, you can also use ephemeral instance volumes.
Limitations
When you're running a dev environment, task, or service with dstack, it automatically mounts the project folder contents to /workflow (and sets that as the current working directory). Right now, dstack doesn't allow you to attach volumes to /workflow or any of its subdirectories.
The dev-environment configuration type supports many other options. See below.
Root reference¶
ide - The IDE to run.¶
version - (Optional) The version of the IDE.¶
init - (Optional) The bash commands to run.¶
name - (Optional) The run name.¶
image - (Optional) The name of the Docker image to run.¶
privileged - (Optional) Run the container in privileged mode.¶
entrypoint - (Optional) The Docker entrypoint.¶
working_dir - (Optional) The path to the working directory inside the container. It's specified relative to the repository directory (/workflow) and should be inside it. Defaults to ".".¶
home_dir - (Optional) The absolute path to the home directory inside the container. Defaults to /root.¶
registry_auth - (Optional) Credentials for pulling a private Docker image.¶
python - (Optional) The major version of Python. Mutually exclusive with image.¶
nvcc - (Optional) Use an image with the NVIDIA CUDA Compiler (NVCC) included. Mutually exclusive with image.¶
env - (Optional) The mapping or the list of environment variables.¶
setup - (Optional) The bash commands to run on boot.¶
resources - (Optional) The resource requirements to run the configuration.¶
volumes - (Optional) The volume mount points.¶
ports - (Optional) Port numbers/mapping to expose.¶
backends - (Optional) The backends to consider for provisioning (e.g., [aws, gcp]).¶
regions - (Optional) The regions to consider for provisioning (e.g., [eu-west-1, us-west4, westeurope]).¶
instance_types - (Optional) The cloud-specific instance types to consider for provisioning (e.g., [p3.8xlarge, n1-standard-4]).¶
spot_policy - (Optional) The policy for provisioning spot or on-demand instances: spot, on-demand, or auto. Defaults to on-demand.¶
retry - (Optional) The policy for resubmitting the run. Defaults to false.¶
retry_policy - (Optional) The policy for resubmitting the run. Deprecated in favor of retry.¶
max_duration - (Optional) The maximum duration of a run (e.g., 2h, 1d, etc). After it elapses, the run is forced to stop. Defaults to off.¶
max_price - (Optional) The maximum instance price per hour, in dollars.¶
pool_name - (Optional) The name of the pool. If not set, dstack will use the default name.¶
instance_name - (Optional) The name of the instance.¶
creation_policy - (Optional) The policy for using instances from the pool. Defaults to reuse-or-create.¶
termination_policy - (Optional) The policy for instance termination. Defaults to destroy-after-idle.¶
termination_idle_time - (Optional) Time to wait before destroying the idle instance. Defaults to 5m for dstack run and to 3d for dstack pool add.¶
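As a sketch, several of the options above could be combined in a single configuration (the values are illustrative, not recommendations):

```yaml
type: dev-environment
name: vscode
ide: vscode
# Force-stop the run after two hours
max_duration: 2h
# Cap the instance price at $1.50 per hour
max_price: 1.5
# Reuse an idle pool instance if available, otherwise create one
creation_policy: reuse-or-create
# Destroy the instance after 30 minutes of idling
termination_idle_time: 30m
```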
resources¶
cpu - (Optional) The number of CPU cores. Defaults to `2..`.¶
memory - (Optional) The RAM size (e.g., 8GB). Defaults to `8GB..`.¶
shm_size - (Optional) The size of shared memory (e.g., 8GB). If you are using parallel communicating processes (e.g., dataloaders in PyTorch), you may need to configure this.¶
gpu - (Optional) The GPU requirements. Can be set to a number, a string (e.g. A100, 80GB:2, etc.), or an object.¶
disk - (Optional) The disk resources.¶
resources.gpu¶
vendor - (Optional) The vendor of the GPU/accelerator, one of: nvidia, amd, google (alias: tpu).¶
name - (Optional) The GPU name or list of names.¶
count - (Optional) The number of GPUs. Defaults to 1.¶
memory - (Optional) The RAM size (e.g., 16GB). Can be set to a range (e.g. `16GB..` or 16GB..80GB).¶
total_memory - (Optional) The total RAM size (e.g., 32GB). Can be set to a range (e.g. `16GB..` or 16GB..80GB).¶
compute_capability - (Optional) The minimum compute capability of the GPU (e.g., 7.5).¶
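Putting the fields above together, the object form of gpu might look like this (the particular values are illustrative):

```yaml
resources:
  gpu:
    # Object form of `gpu`, combining the documented fields
    vendor: nvidia
    name: [A100]
    count: 2
    memory: 40GB..80GB
```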
resources.disk¶
size - The disk size. Can be a string (e.g., 100GB or `100GB..`) or an object.¶
registry_auth¶
username - The username.¶
password - The password or access token.¶
volumes[n]¶
Short syntax
The short syntax for volumes is a colon-separated string in the form of source:destination:
- volume-name:/container/path for network volumes
- /instance/path:/container/path for instance volumes
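Used in a configuration, the short syntax above might look like this (the volume name and instance path are illustrative):

```yaml
volumes:
  # Network volume: <volume name>:<container path>
  - my-new-volume:/volume_data
  # Instance volume: <instance path>:<container path>
  - /mnt/data:/volume_data
```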