
Exploring inference memory saturation effect: H100 vs MI300x

GPU memory plays a critical role in LLM inference, affecting both performance and cost. This benchmark evaluates memory saturation’s impact on inference using NVIDIA's H100 and AMD's MI300x with Llama 3.1 405B FP8.
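To put rough numbers on the memory question, here is a back-of-envelope sketch. It assumes ~1 byte per parameter in FP8 and the published per-GPU HBM capacities (H100: 80 GB, MI300x: 192 GB), and it ignores activation and framework overheads:

```python
# Back-of-envelope memory math for Llama 3.1 405B in FP8 (~1 byte/param).
# HBM capacities are published per-GPU figures; overheads are ignored.
PARAMS_B = 405             # billions of parameters
WEIGHT_GB = PARAMS_B * 1   # FP8: ~1 byte per parameter -> ~405 GB

for gpu, hbm_gb, count in [("H100", 80, 8), ("MI300x", 192, 8)]:
    total = hbm_gb * count
    free_for_kv = total - WEIGHT_GB
    print(f"{count}x{gpu}: {total} GB total, ~{free_for_kv} GB left for KV cache")
```

The memory left after loading the weights is what holds the KV cache, which bounds how many requests can be batched before the GPU saturates.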

We examine the effect of limited parallel computational resources on throughput and Time to First Token (TTFT). Additionally, we compare deployment strategies: running two Llama 3.1 405B FP8 replicas (each on 4xMI300x) versus a single replica on 4xMI300x and on 8xMI300x.
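As a rough illustration of how such measurements can be taken, the sketch below times TTFT and output throughput for one streamed request against an OpenAI-compatible endpoint. The endpoint URL, the model name, and the one-chunk-per-token assumption are ours for illustration, not part of the benchmark harness:

```python
import time

import requests

# Hypothetical OpenAI-compatible endpoint serving Llama 3.1 405B FP8
# (e.g., behind vLLM or TGI); adjust the URL and model name for your setup.
ENDPOINT = "http://localhost:8000/v1/completions"
MODEL = "meta-llama/Llama-3.1-405B-Instruct-FP8"

def measure_ttft_and_throughput(prompt: str, max_tokens: int = 256):
    """Time one streamed completion: returns (TTFT seconds, tokens/s)."""
    start = time.perf_counter()
    first_token_at = None
    chunks = 0
    with requests.post(
        ENDPOINT,
        json={"model": MODEL, "prompt": prompt,
              "max_tokens": max_tokens, "stream": True},
        stream=True,
        timeout=600,
    ) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            # Server-sent events: each chunk arrives as a "data: ..." line.
            if not line or not line.startswith(b"data: "):
                continue
            if line[6:] == b"[DONE]":
                break
            if first_token_at is None:
                first_token_at = time.perf_counter()
            chunks += 1
    if first_token_at is None:
        raise RuntimeError("no tokens received")
    # Approximation: one streamed chunk per generated token; the rate
    # includes prefill time, so it slightly understates decode speed.
    return first_token_at - start, chunks / (time.perf_counter() - start)

if __name__ == "__main__":
    ttft, tput = measure_ttft_and_throughput("Explain GPU memory saturation.")
    print(f"TTFT: {ttft:.3f}s, throughput: {tput:.1f} tokens/s")
```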

Finally, we extrapolate performance projections to upcoming GPUs such as NVIDIA's H200 and B200 and AMD's MI325x and MI350x.
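To make the kind of extrapolation concrete: decode in LLM inference is typically memory-bandwidth-bound, so a first-order projection scales throughput with peak HBM bandwidth. The bandwidth figures below are published peak specs; the baseline throughput is a placeholder, not a measured result:

```python
# First-order projection: if decode is memory-bandwidth-bound, output
# throughput scales roughly with peak HBM bandwidth. Bandwidth values
# are published peak specs; the baseline throughput is a placeholder.
HBM_BANDWIDTH_TBPS = {
    "H100": 3.35,
    "H200": 4.8,
    "B200": 8.0,
    "MI300x": 5.3,
    "MI325x": 6.0,
    # MI350x omitted: final memory specs were not yet public.
}

BASELINE_GPU = "MI300x"
BASELINE_TOKENS_PER_S = 100.0  # placeholder, not a measured number

for gpu, bw in HBM_BANDWIDTH_TBPS.items():
    projected = BASELINE_TOKENS_PER_S * bw / HBM_BANDWIDTH_TBPS[BASELINE_GPU]
    print(f"{gpu}: ~{projected:.0f} tokens/s (projected)")
```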

This benchmark is made possible through the generous support of our friends at Hot Aisle and Lambda, who provided the high-end hardware.

Supporting AMD accelerators on RunPod

While dstack helps streamline the orchestration of containers for AI, its primary goal is to offer vendor independence and portability, ensuring compatibility across different hardware and cloud providers.

Inspired by the recent MI300x benchmarks, we are pleased to announce that RunPod is the first cloud provider to offer AMD GPUs through dstack, with support for other cloud providers and on-prem servers to follow.
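As a quick illustration of what this looks like, the sketch below submits a minimal task that lists the visible AMD GPUs via dstack's Python API. The image tag, GPU spec, and exact call signatures are assumptions and may differ between dstack versions, so treat this as a starting point rather than a verified recipe:

```python
from dstack.api import Client, GPU, Resources, Task

# Connects using your local dstack config (server URL, token, project).
client = Client.from_config()

# A minimal task that prints the visible AMD GPUs inside a ROCm container.
# The image tag and GPU spec below are illustrative assumptions.
task = Task(
    image="rocm/dev-ubuntu-22.04",
    commands=["rocm-smi"],
    resources=Resources(gpu=GPU(name="MI300X", count=1)),
)

run = client.runs.submit(run_name="amd-smoke-test", configuration=task)
run.attach()  # stream logs until the task finishes
```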