Server deployment

The dstack server can run anywhere: on your laptop, a dedicated server, or in the cloud. You can run the server either through pip or using Docker.
$ pip install "dstack[all]" -U
$ dstack server
Applying ~/.dstack/server/config.yml...
The admin token is "bbae0f28-d3dd-4820-bf61-8f4bb40815da"
The server is running at http://127.0.0.1:3000/
The server can be set up via pip on Linux, macOS, and Windows (via WSL 2). It requires Git and OpenSSH.

Alternatively, you can run the server using Docker:
$ docker run -p 3000:3000 \
-v $HOME/.dstack/server/:/root/.dstack/server \
dstackai/dstack
Applying ~/.dstack/server/config.yml...
The admin token is "bbae0f28-d3dd-4820-bf61-8f4bb40815da"
The server is running at http://127.0.0.1:3000/
If you'd like to deploy the server to a private AWS VPC, you can use our CloudFormation template. First, ensure you've set up a private VPC with public and private subnets. Create a stack using the template, and specify the VPC and private subnets. Once the stack is created, go to Outputs for the server URL and admin token.

To access the server URL, ensure you're connected to the VPC, e.g. via a VPN client.

If you'd like to adjust anything, the source code of the template can be found at examples/server-deployment/cloudformation/template.yaml.
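For example, creating the stack with the AWS CLI might look like the following sketch. The stack name is arbitrary, and the parameter keys VpcId and SubnetIds are illustrative placeholders, not confirmed names; check the template's Parameters section for the actual keys:

$ aws cloudformation create-stack \
    --stack-name dstack-server \
    --template-body file://template.yaml \
    --capabilities CAPABILITY_IAM \
    --parameters \
        ParameterKey=VpcId,ParameterValue=vpc-0abc1234 \
        ParameterKey=SubnetIds,ParameterValue=subnet-0abc1234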
Backend configuration

To use dstack with your own cloud accounts, create the ~/.dstack/server/config.yml file and configure backends. The server loads this file on startup.

Alternatively, you can configure backends on the project settings page via the control plane's UI.
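As a minimal sketch, a config.yml configuring a single AWS backend might look like this. The project name and credentials are placeholders; see the server reference for all supported backends and options:

$ cat > ~/.dstack/server/config.yml <<'EOF'
projects:
- name: main
  backends:
  - type: aws
    creds:
      type: access_key
      access_key: <AWS access key ID>
      secret_key: <AWS secret access key>
EOF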
For using dstack with on-prem servers, no backend configuration is required. See SSH fleets for more details.
State persistence

By default, the dstack server stores its state locally in ~/.dstack/server using SQLite.
Replicate state to cloud storage

If you'd like, you can configure automatic replication of your SQLite state to cloud object storage using Litestream. To enable Litestream replication, set the following environment variable:

LITESTREAM_REPLICA_URL - The URL of the cloud object storage. Examples: s3://<bucket-name>/<path>, gcs://<bucket-name>/<path>, abs://<storage-account>@<container-name>/<path>, etc.
You also need to configure cloud storage credentials.
AWS S3

To persist state into an AWS S3 bucket, provide the following environment variables:

AWS_ACCESS_KEY_ID - The AWS access key ID
AWS_SECRET_ACCESS_KEY - The AWS secret access key
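Putting it together, replicating state to an S3 bucket might look like this (the bucket name, path, and credentials are placeholders):

$ export LITESTREAM_REPLICA_URL=s3://my-dstack-bucket/state
$ export AWS_ACCESS_KEY_ID=<AWS access key ID>
$ export AWS_SECRET_ACCESS_KEY=<AWS secret access key>
$ dstack server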
GCP Storage

To persist state into a GCP Storage bucket, provide one of the following environment variables:

GOOGLE_APPLICATION_CREDENTIALS - The path to the GCP service account key JSON file
GOOGLE_APPLICATION_CREDENTIALS_JSON - The GCP service account key JSON
Azure Blob Storage

To persist state into Azure Blob Storage, provide the following environment variable:

LITESTREAM_AZURE_ACCOUNT_KEY - The Azure storage account key

For more details on replication options, see the Litestream documentation.
PostgreSQL

To store the state externally, set the DSTACK_DATABASE_URL environment variable.
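For example (the user, password, host, and database name are placeholders):

$ export DSTACK_DATABASE_URL="postgresql+asyncpg://myuser:mypassword@db-host:5432/dstack"
$ dstack server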
Migrate from SQLite to PostgreSQL

You can migrate the existing state from SQLite to PostgreSQL using pgloader:

- Create a new PostgreSQL database.
- Clone the dstack repo and install dstack from source. Ensure you've checked out the tag that corresponds to your server version (e.g. git checkout 0.18.10).
- Apply database migrations to the new database:
  $ cd src/dstack/_internal/server/
  $ export DSTACK_DATABASE_URL="postgresql+asyncpg://..."
  $ alembic upgrade head
- Install pgloader.
- Pass the path to the ~/.dstack/server/data/sqlite.db file to SOURCE_PATH and set TARGET_PATH to the URL of the PostgreSQL database. Example:
  $ cd scripts/
  $ export SOURCE_PATH=sqlite:///Users/me/.dstack/server/data/sqlite.db
  $ export TARGET_PATH=postgresql://postgres:postgres@localhost:5432/postgres
  $ pgloader sqlite_to_psql.load
  The pgloader script will migrate the SQLite data to PostgreSQL. It may emit warnings that are safe to ignore.

If you encounter errors, please submit an issue.
Logs storage

By default, dstack stores workload logs in ~/.dstack/server/projects/<project_name>/logs.
AWS CloudWatch

To store logs in AWS CloudWatch, set the DSTACK_SERVER_CLOUDWATCH_LOG_GROUP and DSTACK_SERVER_CLOUDWATCH_LOG_REGION environment variables. The log group must be created beforehand; dstack won't try to create it.
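For example, you might create the log group with the AWS CLI and point the server at it like this (the group name and region are placeholders):

$ aws logs create-log-group --log-group-name /dstack/server
$ export DSTACK_SERVER_CLOUDWATCH_LOG_GROUP=/dstack/server
$ export DSTACK_SERVER_CLOUDWATCH_LOG_REGION=eu-west-1
$ dstack server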
Required permissions
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DstackLogStorageAllow",
"Effect": "Allow",
"Action": [
"logs:DescribeLogStreams",
"logs:CreateLogStream",
"logs:GetLogEvents",
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:::log-group:<group name>",
"arn:aws:logs:::log-group:<group name>:*"
]
}
]
}
Running multiple replicas of the server

If you'd like to run multiple server replicas, make sure to configure dstack to use PostgreSQL and AWS CloudWatch.
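As a rough sketch, two replicas sharing the same PostgreSQL database and CloudWatch log group could be started like this, assuming the server accepts a --port flag (all values are placeholders, and in practice the replicas would sit behind a load balancer of your choice):

$ export DSTACK_DATABASE_URL="postgresql+asyncpg://myuser:mypassword@db-host:5432/dstack"
$ export DSTACK_SERVER_CLOUDWATCH_LOG_GROUP=/dstack/server
$ export DSTACK_SERVER_CLOUDWATCH_LOG_REGION=eu-west-1
$ dstack server --port 3000 &
$ dstack server --port 3001 &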
Enabling encryption

If you want backend credentials and user tokens to be encrypted, you can set up encryption keys via ~/.dstack/server/config.yml.
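As a rough sketch, an AES key could be added to config.yml roughly as follows. The shape of the encryption section here is an assumption, and the key name and secret are placeholders; consult the config.yml reference for the exact schema:

$ cat >> ~/.dstack/server/config.yml <<'EOF'
encryption:
  keys:
  - type: aes
    name: key1
    secret: <base64-encoded 256-bit key>
EOF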