
Axolotl

This example shows how to use Axolotl with dstack to fine-tune Llama 3 8B using FSDP and QLoRA.

Prerequisites

Once dstack is installed, clone the repo and run dstack init.

$ git clone https://github.com/dstackai/dstack
$ cd dstack
$ dstack init

Training configuration recipe

Axolotl reads the model, LoRA, and dataset arguments, as well as the trainer configuration, from a YAML file. In this example, that file is examples/fine-tuning/axolotl/config.yaml. You can modify it as needed.

Before you proceed with training, make sure to update the hub_model_id in examples/fine-tuning/axolotl/config.yaml with your Hugging Face username.
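For orientation, a QLoRA recipe for Axolotl typically specifies the base model, the quantization and LoRA settings, the dataset, the trainer hyperparameters, and the Hub repository to push the adapter to. The excerpt below is an illustrative sketch in Axolotl's configuration format, not the exact contents of the example's config.yaml; the dataset and hub_model_id values are placeholders.

base_model: meta-llama/Meta-Llama-3-8B
# QLoRA: load the base model in 4-bit and train LoRA adapters on top
load_in_4bit: true
adapter: qlora
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true

# Placeholder dataset in the Alpaca instruction format
datasets:
  - path: tatsu-lab/alpaca
    type: alpaca

# Trainer hyperparameters
sequence_len: 2048
micro_batch_size: 1
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 0.0002

# Shard the model across GPUs with FSDP
fsdp:
  - full_shard
  - auto_wrap

# Where to save and publish the fine-tuned adapter
output_dir: ./outputs/llama3-8b-qlora
hub_model_id: <your-hf-username>/llama3-8b-qlora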

Single-node training

The easiest way to run a training script with dstack is by creating a task configuration file. This file can be found at examples/fine-tuning/axolotl/train.dstack.yml. Below is its content:

type: task
# The name is optional; if not specified, it's generated randomly
name: axolotl-train

# Use the official Axolotl Docker image
image: winglian/axolotl-cloud:main-20240429-py3.11-cu121-2.2.1

# Required environment variables
env:
  - HUGGING_FACE_HUB_TOKEN
  - WANDB_API_KEY
# Commands of the task
commands:
  - accelerate launch -m axolotl.cli.train examples/fine-tuning/axolotl/config.yaml

resources:
  gpu:
    # 24GB or more VRAM
    memory: 24GB..
    # Two or more GPUs
    count: 2..

The task uses the official Axolotl Docker image, which has Axolotl pre-installed.

To run the task, use dstack apply:

$ HUGGING_FACE_HUB_TOKEN=...
$ WANDB_API_KEY=...

$ dstack apply -f examples/fine-tuning/axolotl/train.dstack.yml

Fleets

By default, dstack apply reuses idle instances from one of the existing fleets. If no idle instances meet the requirements, it creates a new fleet using one of the configured backends.

The example folder includes a fleet configuration, examples/fine-tuning/axolotl/fleet.dstack.yml (a single node with a 24GB GPU).

You can update the fleet configuration to change the vRAM size, GPU model, number of GPUs per node, or number of nodes.
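For reference, a minimal fleet configuration along these lines could look like the sketch below. It is an assumption-based illustration rather than the exact contents of fleet.dstack.yml; the name and resource values are placeholders you would adjust.

type: fleet
# The name is optional; if not specified, it's generated randomly
name: axolotl-fleet

# A single node; increase for distributed training
nodes: 1

resources:
  gpu:
    # 24GB or more VRAM per GPU
    memory: 24GB..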

A fleet can be provisioned with dstack apply:

$ dstack apply -f examples/fine-tuning/axolotl/fleet.dstack.yml

Once provisioned, the fleet can run dev environments and fine-tuning tasks. To delete the fleet, use dstack fleet delete.

To ensure dstack apply always reuses an existing fleet, pass --reuse to dstack apply (or set creation_policy to reuse in the task configuration). The default policy is reuse_or_create.
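For example, setting the policy directly in the task configuration could look like the illustrative excerpt below (not part of the shipped train.dstack.yml):

type: task
name: axolotl-train

# Only reuse idle instances from existing fleets; never provision new ones
creation_policy: reuse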

Dev environment

If you'd like to play with the example using a dev environment, run .dstack.yml via dstack apply:

$ dstack apply -f examples/fine-tuning/axolotl/.dstack.yml
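For context, a dev environment configuration for this example could look roughly like the sketch below, assuming the same Axolotl image and GPU requirements are reused; the actual file in the repo may differ.

type: dev-environment
# The name is optional; if not specified, it's generated randomly
name: axolotl-dev

# Use the same Axolotl Docker image so the CLI and dependencies are available
image: winglian/axolotl-cloud:main-20240429-py3.11-cu121-2.2.1

env:
  - HUGGING_FACE_HUB_TOKEN
  - WANDB_API_KEY

# Open the environment in VS Code
ide: vscode

resources:
  gpu:
    memory: 24GB..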

Source code

The source code for this example can be found in examples/fine-tuning/axolotl.

What's next?

  1. Check dev environments, tasks, services, and fleets.
  2. Browse Axolotl.