Supporting Hot Aisle AMD AI Developer Cloud¶
As the ecosystem around AMD GPUs matures, developers are looking for easier ways to experiment with ROCm, benchmark new architectures, and run cost-effective workloads—without manual infrastructure setup.
dstack is an open-source orchestrator designed for AI workloads, providing a lightweight, container-native alternative to Kubernetes and Slurm.
Today, we’re excited to announce native integration with Hot Aisle, an AMD-only GPU neocloud offering VMs and clusters at highly competitive on-demand pricing.
About Hot Aisle¶
Hot Aisle is a next-generation GPU cloud built around AMD’s flagship AI accelerators.
Highlights:
- AMD’s flagship AI-optimized accelerators
- On-demand pricing: $1.99/hour for 1-GPU VMs
- No commitment – start and stop when you want
- First AMD-only GPU backend in dstack
While it has already been possible to use Hot Aisle’s 8-GPU MI300X bare-metal clusters via SSH fleets, this integration now enables automated provisioning of VMs, made possible by Hot Aisle’s newly added API for MI300X instances.
Why dstack¶
dstack is a new open-source container orchestrator built specifically for GPU workloads.
It fills the gaps left by Kubernetes and Slurm when it comes to GPU provisioning and orchestration:
- Unlike Kubernetes, dstack offers a high-level, AI-engineer-friendly interface, and GPUs work out of the box, with no need to wrangle custom operators, device plugins, or other low-level setup.
- Unlike Slurm, it’s use-case agnostic: equally suited for training, inference, benchmarking, or even setting up long-running dev environments.
- It works across clouds and on-prem without vendor lock-in.
With the new Hot Aisle backend, you can automatically provision MI300X VMs for any workload, from experiments to production, with a single dstack CLI command.
Getting started¶
Before configuring dstack to use Hot Aisle’s VMs, complete these steps:
- Create a project via ssh admin.hotaisle.app
- Get credits or approve a payment method
- Create an API key
Then, configure the backend in ~/.dstack/server/config.yml:
projects:
- name: main
backends:
- type: hotaisle
team_handle: hotaisle-team-handle
creds:
type: api_key
api_key: 9c27a4bb7a8e472fae12ab34.3f2e3c1db75b9a0187fd2196c6b3e56d2b912e1c439ba08d89e7b6fcd4ef1d3f
Install and start the dstack server:
$ pip install "dstack[server]"
$ dstack server
For more details, see Installation.
Use the dstack CLI to manage dev environments, tasks, and services.
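For instance, a minimal .dstack.yml requesting a single MI300X could look like the sketch below; the run name and IDE choice are placeholders, not part of this announcement:

```yaml
type: dev-environment
# Hypothetical run name for illustration
name: mi300x-dev
ide: vscode

resources:
  # Request one MI300X GPU, matching the VM offers shown below
  gpu: MI300X:1
```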
$ dstack apply -f .dstack.yml
# BACKEND RESOURCES INSTANCE TYPE PRICE
1 hotaisle (us-michigan-1) cpu=13 mem=224GB disk=12288GB MI300X:192GB:1 1x MI300X 13x Xeon Platinum 8470 $1.99
2 hotaisle (us-michigan-1) cpu=8 mem=224GB disk=12288GB MI300X:192GB:1 1x MI300X 8x Xeon Platinum 8470 $1.99
Submit the run? [y/n]:
Currently, dstack supports 1-GPU Hot Aisle VMs. Support for 8-GPU VMs will be added once Hot Aisle offers them.
If you prefer to use Hot Aisle’s bare-metal 8-GPU clusters with dstack, you can create an SSH fleet. This way, you’ll be able to run distributed tasks efficiently across the cluster.
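As a sketch of what such an SSH fleet configuration might look like (the fleet name, SSH user, and host addresses below are placeholders you would replace with your own cluster details):

```yaml
type: fleet
# Hypothetical fleet name for illustration
name: mi300x-fleet

ssh_config:
  # Placeholder SSH user and key; use the credentials for your cluster
  user: hotaisle
  identity_file: ~/.ssh/id_rsa
  hosts:
    # Placeholder addresses of the bare-metal nodes
    - 192.0.2.10
    - 192.0.2.11
```

Applying a configuration like this registers the nodes as a fleet that dstack can schedule distributed tasks onto.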
What's next?¶
- Check Quickstart
- Learn more about Hot Aisle
- Explore dev environments, tasks, services, and fleets
- Join Discord