
2024

Get involved as a community ambassador

As we wrap up an exciting year at dstack, we’re thrilled to introduce our Ambassador Program. This initiative invites AI infrastructure enthusiasts and those passionate about open-source AI to share their knowledge, contribute to the growth of the dstack community, and play a key role in advancing the open AI ecosystem.

Exploring the inference memory saturation effect: H100 vs MI300X

GPU memory plays a critical role in LLM inference, affecting both performance and cost. This benchmark evaluates memory saturation’s impact on inference using NVIDIA's H100 and AMD's MI300X with Llama 3.1 405B FP8.

We examine the effect of limited parallel computational resources on throughput and Time to First Token (TTFT). Additionally, we compare deployment strategies: running two Llama 3.1 405B FP8 replicas on 4xMI300X versus a single replica on 4xMI300X and 8xMI300X.

Finally, we extrapolate performance projections for upcoming GPUs such as NVIDIA's H200 and B200, and AMD's MI325X and MI350X.

This benchmark is made possible through the generous support of our friends at Hot Aisle and Lambda, who provided the high-end hardware.

Introducing instance volumes to persist data on instances

Until now, dstack supported data persistence only via network volumes managed by cloud providers. While network volumes are convenient, sometimes you just want a simple cache on the instance, or need to mount an NFS share to your SSH fleet. To address this, we're introducing instance volumes, which cover both cases.

type: task
name: llama32-task

env:
  - HF_TOKEN
  - MODEL_ID=meta-llama/Llama-3.2-3B-Instruct
commands:
  - pip install vllm
  - vllm serve $MODEL_ID --max-model-len 4096
ports: [8000]

volumes:
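  # Left: a path on the instance; right: where it is mounted in the container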
  - /root/.dstack/cache:/root/.cache

resources:
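  # Any GPU with at least 16GB of memory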
  gpu: 16GB..

Using Docker and Docker Compose inside GPU-enabled containers

To run containers with dstack, you can use your own Docker image (or the default one) without needing to interact with Docker directly. However, some existing code may still require direct use of Docker or Docker Compose. That's why our latest release adds the option to run Docker and Docker Compose inside dstack containers.

type: task
name: chat-ui-task

image: dstackai/dind
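# Privileged mode is required to run the Docker daemon inside the container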
privileged: true

working_dir: examples/misc/docker-compose
commands:
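  # Start the Docker daemon before bringing up the Compose stack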
  - start-dockerd
  - docker compose up
ports: [9000]

resources:
  gpu: 16GB..24GB

Using TPUs for fine-tuning and deploying LLMs

If you’re using or planning to use TPUs with Google Cloud, you can now do so via dstack. Just specify the TPU version and the number of cores (separated by a dash) in the gpu property under resources.
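
For example, here is a minimal sketch of a task requesting a TPU v5e slice (the task name, command, and exact accelerator string are placeholders; pick a TPU version your project has quota for):

type: task
name: tpu-task

commands:
  - python train.py

resources:
  # TPU version and number of cores, separated by a dash
  gpu: v5litepod-4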

Read below to find out how to use TPUs with dstack for fine-tuning and deploying LLMs, leveraging open-source tools like Hugging Face’s Optimum TPU and vLLM.

Supporting AMD accelerators on RunPod

While dstack helps streamline the orchestration of containers for AI, its primary goal is to offer vendor independence and portability, ensuring compatibility across different hardware and cloud providers.

Inspired by the recent MI300X benchmarks, we are pleased to announce that RunPod is the first cloud provider to offer AMD GPUs through dstack, with support for other cloud providers and on-prem servers to follow.
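
For instance, here's a minimal sketch of a task that lands on an MI300X (the task name, image, and command are placeholders; any image that ships the ROCm utilities would do):

type: task
name: amd-smoke-test

image: rocm/dev-ubuntu-22.04
commands:
  # Print the detected AMD accelerators
  - rocm-smi

resources:
  gpu: MI300X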

Using volumes to optimize cold starts on RunPod

Deploying custom models in the cloud often suffers from cold starts: the time it takes to provision a new instance and download the model. This is especially relevant for autoscaled services, where new model replicas need to come online quickly.

Let's explore how dstack optimizes this process using volumes, with an example of deploying a model on RunPod.
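
As a rough sketch of the approach (the volume name, region, size, and cache path below are illustrative), you first define a RunPod-backed network volume:

type: volume
name: model-cache-volume

backend: runpod
region: EU-SE-1
size: 100GB

The run configuration then mounts it by name, so model weights downloaded once are reused by subsequently provisioned replicas:

volumes:
  - name: model-cache-volume
    path: /root/.cache/huggingface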