dstack offer¶
Displays the available offers (hardware configurations) across the configured backends, or offers that match already provisioned fleets.
The output includes backend, region, instance type, resources, spot availability, and pricing details.
Usage¶
This command accepts most of the same arguments as dstack apply.
$ dstack offer --help
Usage: dstack offer [-h] [--project NAME] [--format {plain,json}] [--json]
[-n RUN_NAME] [--max-offers MAX_OFFERS] [-e KEY[=VALUE]]
[--gpu SPEC] [--disk RANGE] [--profile NAME]
[--max-price PRICE] [--max-duration DURATION] [-b NAME]
[-r NAME] [--instance-type NAME] [--fleet NAME]
[-R | --dont-destroy | --idle-duration IDLE_DURATION]
[--spot | --on-demand | --spot-auto | --spot-policy POLICY]
[--retry | --no-retry | --retry-duration DURATION]
Options:
-h, --help Show this help message and exit
--project NAME The name of the project. Defaults to $DSTACK_PROJECT
--format {plain,json}
Output format (default: plain)
--json Output in JSON format (equivalent to --format json)
Task Options:
-n, --name RUN_NAME The name of the run. If not specified, a random name
is assigned
--max-offers MAX_OFFERS
Number of offers to show in the run plan
-e, --env KEY[=VALUE]
Environment variables
--gpu SPEC Request GPU for the run. The format is
NAME:COUNT:MEMORY (all parts are optional)
--disk RANGE Request the size range of disk for the run. Example
--disk 100GB...
Profile:
--profile NAME The name of the profile. Defaults to $DSTACK_PROFILE
--max-price PRICE The maximum price per hour, in dollars
--max-duration DURATION
The maximum duration of the run
-b, --backend NAME The backends that will be tried for provisioning
-r, --region NAME The regions that will be tried for provisioning
--instance-type NAME The cloud-specific instance types that will be tried
for provisioning
Fleets:
--fleet NAME Consider only instances from the specified fleet(s)
for reuse
-R, --reuse Reuse an existing instance from fleet (do not
provision a new one)
--dont-destroy Do not destroy instance after the run is finished (if
the run provisions a new instance)
--idle-duration IDLE_DURATION
Time to wait before destroying the idle instance (if
the run provisions a new instance)
Spot Policy:
--spot Consider only spot instances
--on-demand Consider only on-demand instances
--spot-auto Consider both spot and on-demand instances
--spot-policy POLICY One of spot, on-demand, auto
Retry Policy:
--retry
--no-retry
--retry-duration DURATION
Examples¶
List GPU offers¶
The --gpu flag accepts the same specification format as the gpu property in dev environment, task, service, and fleet configurations.
The general format is <vendor>:<comma-separated names>:<memory range>:<quantity range>. Each component is optional.
Ranges can be:
- Closed (e.g. 24GB..80GB or 1..8)
- Open (e.g. 24GB.. or 1..)
- Single values (e.g. 1 or 24GB)
Examples:
- --gpu nvidia (any NVIDIA GPU)
- --gpu nvidia:1..8 (from one to eight NVIDIA GPUs)
- --gpu A10,A100 (single NVIDIA A10 or A100 GPU)
- --gpu A100:80GB (single NVIDIA A100 with 80GB VRAM)
- --gpu 24GB..80GB (any GPU with 24GB to 80GB VRAM)
The following example lists offers with one or more H100 GPUs:
$ dstack offer --gpu H100:1.. --max-offers 10
Getting offers...
# BACKEND REGION INSTANCE TYPE RESOURCES SPOT PRICE
1 datacrunch FIN-01 1H100.80S.30V 30xCPU, 120GB, 1xH100 (80GB), 100.0GB (disk) no $2.19
2 datacrunch FIN-02 1H100.80S.30V 30xCPU, 120GB, 1xH100 (80GB), 100.0GB (disk) no $2.19
3 datacrunch FIN-02 1H100.80S.32V 32xCPU, 185GB, 1xH100 (80GB), 100.0GB (disk) no $2.19
4 datacrunch ICE-01 1H100.80S.32V 32xCPU, 185GB, 1xH100 (80GB), 100.0GB (disk) no $2.19
5 runpod US-KS-2 NVIDIA H100 PCIe 16xCPU, 251GB, 1xH100 (80GB), 100.0GB (disk) no $2.39
6 runpod CA NVIDIA H100 80GB HBM3 24xCPU, 251GB, 1xH100 (80GB), 100.0GB (disk) no $2.69
7 nebius eu-north1 gpu-h100-sxm 16xCPU, 200GB, 1xH100 (80GB), 100.0GB (disk) no $2.95
8 runpod AP-JP-1 NVIDIA H100 80GB HBM3 20xCPU, 251GB, 1xH100 (80GB), 100.0GB (disk) no $2.99
9 runpod CA-MTL-1 NVIDIA H100 80GB HBM3 28xCPU, 251GB, 1xH100 (80GB), 100.0GB (disk) no $2.99
10 runpod CA-MTL-2 NVIDIA H100 80GB HBM3 26xCPU, 125GB, 1xH100 (80GB), 100.0GB (disk) no $2.99
...
Shown 10 of 99 offers, $127.816 max
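To check spot availability for the same hardware, the GPU filter can be combined with the spot policy options above. This is only a sketch; which offers appear depends on the backends configured in your project:
$ dstack offer --gpu H100:1.. --spot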
JSON format¶
Use --json to output offers in JSON format.
$ dstack offer --gpu amd --json
{
"project": "main",
"user": "admin",
"resources": {
"cpu": {
"min": 2,
"max": null
},
"memory": {
"min": 8.0,
"max": null
},
"shm_size": null,
"gpu": {
"vendor": "amd",
"name": null,
"count": {
"min": 1,
"max": 1
},
"memory": null,
"total_memory": null,
"compute_capability": null
},
"disk": {
"size": {
"min": 100.0,
"max": null
}
}
},
"max_price": null,
"spot": null,
"reservation": null,
"offers": [
{
"backend": "runpod",
"region": "EU-RO-1",
"instance_type": "AMD Instinct MI300X OAM",
"resources": {
"cpus": 24,
"memory_mib": 289792,
"gpus": [
{
"name": "MI300X",
"memory_mib": 196608,
"vendor": "amd"
}
],
"spot": false,
"disk": {
"size_mib": 102400
},
"description": "24xCPU, 283GB, 1xMI300X (192GB), 100.0GB (disk)"
},
"spot": false,
"price": 2.49,
"availability": "available"
}
],
"total_offers": 1
}
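Since the JSON output is machine-readable, it can be post-processed with standard tools. Below is a minimal sketch using jq (assumed to be installed) that sorts the returned offers by price and prints the three cheapest; the field names follow the schema shown above:
$ dstack offer --gpu H100 --json | jq -r '.offers | sort_by(.price) | .[:3][] | "\(.backend)\t\(.region)\t\(.instance_type)\t$\(.price)"'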