
Deployment Workflow

Overview

The workflow file still maps branches to two environments, but only develop is currently enabled as a trigger:

| Branch | Trigger enabled? | GitHub Environment | Tag prefix | Deploys docs? |
| --- | --- | --- | --- | --- |
| main | No (temporarily disabled) | Production | prod-* | No |
| develop | Yes | Development | dev-* | Yes |

Every push to develop triggers a GitHub Actions pipeline that:

  1. Derives environment-specific values (tag prefix, environment name, docs flag) from the branch
  2. Detects which apps changed
  3. Builds Docker images only for changed apps
  4. Pushes images to Docker Hub with environment-prefixed tags
  5. Notifies Dokploy via authenticated API calls (compose.deploy / application.deploy) using environment-scoped secrets
  6. Applies a notification policy (independent or all-or-nothing) controlled by DOKPLOY_NOTIFY_MODE

Architecture

flowchart LR
    Push[git push develop] --> GHA[GitHub Actions]
    GHA --> Setup[Setup: derive env]
    Setup --> Detect[Detect Changes]
    Detect --> BuildAPI[Build API Image]
    Detect --> BuildPanel[Build Panel Image]
    Detect -->|develop only| BuildDocs[Build Docs Image]
    BuildAPI --> Hub[Docker Hub]
    BuildPanel --> Hub
    BuildDocs --> Hub
    Hub --> Notify[Dokploy API Notify]
    Notify --> Deploy[Dokploy Redeploy]

Image Naming

daramex25/daramex-{app}:{env}-{identifier}

| App | Image | Production tags | Development tags |
| --- | --- | --- | --- |
| api | daramex25/daramex-api | prod-{sha}, prod-latest | dev-{sha}, dev-latest |
| panel | daramex25/daramex-panel | prod-{sha}, prod-latest | dev-{sha}, dev-latest |
| docs | daramex25/daramex-docs | not built | dev-{sha}, dev-latest |

Change Detection

Each app only rebuilds when its relevant paths change:

| App | Trigger paths |
| --- | --- |
| api | apps/api/**, packages/schemas/**, infra/apps/api.Dockerfile, pnpm-lock.yaml, turbo.json |
| panel | apps/panel/**, packages/schemas/**, packages/api-client/**, infra/apps/panel.Dockerfile, pnpm-lock.yaml, turbo.json |
| docs | apps/docs/**, infra/apps/docs.Dockerfile, pnpm-lock.yaml, turbo.json |

GitHub Workflow Structure (step by step)

Source: .github/workflows/deploy.yml

Trigger and concurrency

yaml
on:
  push:
    branches: [develop]

concurrency:
  group: deploy-${{ github.ref }}
  cancel-in-progress: true
  • The workflow currently runs on every push to develop.
  • concurrency guarantees only one deployment runs at a time per branch. If a new push arrives while a previous deploy is still running, the old run is cancelled. This prevents race conditions where two builds push images simultaneously and Dokploy receives deploy requests out of order.

Global environment variables

yaml
env:
  REGISTRY: docker.io
  IMAGE_PREFIX: daramex25/daramex
  DOKPLOY_NOTIFY_MODE: ${{ vars.DOKPLOY_NOTIFY_MODE || 'independent' }}

Defined once at the top so every job can reference ${{ env.IMAGE_PREFIX }}-api, ${{ env.IMAGE_PREFIX }}-panel, etc. without repeating the Docker Hub org name.

  • DOKPLOY_NOTIFY_MODE controls deploy notification behavior:
    • independent (default): notify each successful app even if another app failed.
    • all-or-nothing: notify only when no app build failed/cancelled in this run.

Job 1: setup

yaml
setup:
  runs-on: ubuntu-latest
  outputs:
    env_name: ${{ steps.vars.outputs.env_name }}
    tag_prefix: ${{ steps.vars.outputs.tag_prefix }}
    deploy_docs: ${{ steps.vars.outputs.deploy_docs }}
  steps:
    - name: Set environment variables
      id: vars
      run: |
        if [ "${{ github.ref_name }}" = "main" ]; then
          echo "env_name=Production" >> "$GITHUB_OUTPUT"
          echo "tag_prefix=prod" >> "$GITHUB_OUTPUT"
          echo "deploy_docs=false" >> "$GITHUB_OUTPUT"
        else
          echo "env_name=Development" >> "$GITHUB_OUTPUT"
          echo "tag_prefix=dev" >> "$GITHUB_OUTPUT"
          echo "deploy_docs=true" >> "$GITHUB_OUTPUT"
        fi

What it does: Derives three values from the branch name that all downstream jobs consume:

| Output | main | develop | Purpose |
| --- | --- | --- | --- |
| env_name | Production | Development | Selects the GitHub Environment for secrets |
| tag_prefix | prod | dev | Prefixes Docker image tags |
| deploy_docs | false | true | Controls whether the docs job runs |

Downstream jobs reference these via needs.setup.outputs.* and set environment: ${{ needs.setup.outputs.env_name }} to scope secrets to the correct GitHub Environment.

Job 2: detect-changes

yaml
detect-changes:
  runs-on: ubuntu-latest
  outputs:
    api: ${{ steps.filter.outputs.api }}
    panel: ${{ steps.filter.outputs.panel }}
    docs: ${{ steps.filter.outputs.docs }}
  steps:
    - uses: actions/checkout@v4
    - uses: dorny/paths-filter@v3
      id: filter
      with:
        filters: |
          api:
            - 'apps/api/**'
            - 'packages/schemas/**'
            - ...

What it does: Compares the pushed commits against the base and outputs true/false for each app. Downstream jobs read these outputs to decide whether to run.

Why each path is included:

| Path | Reason |
| --- | --- |
| apps/api/** | The app's own source code |
| packages/schemas/** | Shared Zod schemas — used by api and panel |
| packages/api-client/** | Generated API client — used by panel |
| infra/apps/*.Dockerfile | The Dockerfile itself changed |
| pnpm-lock.yaml | Dependency versions changed |
| turbo.json | Build pipeline config changed |

The outputs block exposes the results so other jobs can use needs.detect-changes.outputs.api == 'true' as a condition.
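
For completeness, a filter block assembled from the trigger-path table above would look roughly like this (the workflow file remains the source of truth):

yaml
filters: |
  api:
    - 'apps/api/**'
    - 'packages/schemas/**'
    - 'infra/apps/api.Dockerfile'
    - 'pnpm-lock.yaml'
    - 'turbo.json'
  panel:
    - 'apps/panel/**'
    - 'packages/schemas/**'
    - 'packages/api-client/**'
    - 'infra/apps/panel.Dockerfile'
    - 'pnpm-lock.yaml'
    - 'turbo.json'
  docs:
    - 'apps/docs/**'
    - 'infra/apps/docs.Dockerfile'
    - 'pnpm-lock.yaml'
    - 'turbo.json'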

Jobs 3-5: build-api, build-panel, build-docs

All three follow the same pattern. Using build-api as an example:

yaml
build-api:
  runs-on: ubuntu-latest
  needs: [setup, detect-changes] # (1)
  if: needs.detect-changes.outputs.api == 'true' # (2)
  environment: ${{ needs.setup.outputs.env_name }} # (3)
  steps:
    - uses: actions/checkout@v4 # (4)
    - uses: docker/setup-buildx-action@v3 # (5)
    - uses: docker/login-action@v3 # (6)
      with:
        username: ${{ secrets.DOCKERHUB_USERNAME }}
        password: ${{ secrets.DOCKERHUB_TOKEN }}
    - uses: docker/build-push-action@v6 # (7)
      with:
        context: .
        file: infra/apps/api.Dockerfile
        push: true
        tags: |
          ${{ env.IMAGE_PREFIX }}-api:${{ needs.setup.outputs.tag_prefix }}-${{ github.sha }}
          ${{ env.IMAGE_PREFIX }}-api:${{ needs.setup.outputs.tag_prefix }}-latest
        cache-from: type=registry,ref=...:buildcache
        cache-to: type=registry,ref=...:buildcache,mode=max

Step by step:

  1. **needs: [setup, detect-changes]** — waits for both setup (environment values) and change detection.
  2. **if: ... == 'true'** — skips the entire job if this app's files didn't change. The job shows as "skipped" in the Actions UI (not "failed").
  3. **environment** — scopes secrets.* to the correct GitHub Environment (Production or Development). This is how the same secret names can resolve to different Dokploy IDs/credentials per environment.
  4. **actions/checkout** — clones the repo. Needed because each job runs in a fresh VM.
  5. **docker/setup-buildx-action** — sets up Docker Buildx, which enables advanced features: multi-stage caching, registry cache backend, and parallel stage building.
  6. **docker/login-action** — authenticates to Docker Hub using the repo secrets so the next step can push images and read/write cache.
  7. **docker/build-push-action** — the core step:
     • context: . — the entire repo root is the Docker build context (needed because turbo prune reads the full monorepo).
     • file — points to the specific Dockerfile for this app.
     • push: true — pushes the built image to Docker Hub.
     • tags — two tags per image using the dynamic tag_prefix (concrete example after this list):
       • {prefix}-{sha} — immutable, tied to the exact commit. Useful for rollbacks and auditing.
       • {prefix}-latest — mutable, always points to the most recent build. This is what Dokploy pulls.
     • cache-from / cache-to — uses the Docker Hub registry itself as a layer cache (mode=max caches all layers, not just the final image). On subsequent builds, unchanged layers (like pnpm install) are pulled from cache instead of being rebuilt, reducing build time from ~5min to ~1min when only source code changed.
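
Concretely, for a push to develop, the two API image tags expand to:

daramex25/daramex-api:dev-<full commit SHA>   (immutable, per-commit)
daramex25/daramex-api:dev-latest              (mutable, what Dokploy pulls)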

build-docs has an extra condition:

yaml
if: needs.detect-changes.outputs.docs == 'true' && needs.setup.outputs.deploy_docs == 'true'

This ensures docs deploy only for Development. With the current trigger (develop only), docs can deploy when docs paths change; if main is re-enabled later, this guard still prevents docs deploys in Production.

The three build jobs run in parallel, since each depends only on setup and detect-changes, not on the others.

Job 6: notify-dokploy

yaml
notify-dokploy:
  runs-on: ubuntu-latest
  needs: [setup, detect-changes, build-api, build-panel, build-docs]
  if: always() && !cancelled()
  environment: ${{ needs.setup.outputs.env_name }}
  steps:
    - name: Check overall build status
      id: status
      run: |
        if [ "${{ needs.build-api.result }}" == "failure" ] || [ "${{ needs.build-api.result }}" == "cancelled" ] || \
           [ "${{ needs.build-panel.result }}" == "failure" ] || [ "${{ needs.build-panel.result }}" == "cancelled" ] || \
           [ "${{ needs.build-docs.result }}" == "failure" ] || [ "${{ needs.build-docs.result }}" == "cancelled" ]; then
          echo "has_errors=true" >> "$GITHUB_OUTPUT"
        else
          echo "has_errors=false" >> "$GITHUB_OUTPUT"
        fi
    - name: Notify API
      if: >
        needs.build-api.result == 'success' && 
        (env.DOKPLOY_NOTIFY_MODE == 'independent' || steps.status.outputs.has_errors == 'false')
      run: |
        curl -X POST "https://panel.daramex.com/api/compose.deploy" \
          -H "Content-Type: application/json" \
          -H "x-api-key: ${{ secrets.DOKPLOY_API_KEY }}" \
          -d '{
            "composeId": "${{ secrets.DOKPLOY_API_COMPOSE_ID }}"
          }'
    # ... same for panel and docs
  • **needs: [setup, ...]** — waits for all build jobs to finish (or be skipped). Includes setup so env_name is available.
  • **environment** — scopes Dokploy secrets to the correct environment.
  • **if: always() && !cancelled()** — this is critical. By default, if a needs job is skipped (because the app didn't change), all downstream jobs are also skipped. always() overrides this so the notify job always runs. !cancelled() ensures it doesn't run if the entire workflow was cancelled.
  • Check overall build status consolidates failures/cancellations into has_errors.
  • Each notify step checks both:
    • needs.build-{app}.result == 'success'
    • DOKPLOY_NOTIFY_MODE policy (independent always allows successful apps; all-or-nothing requires has_errors == false)
  • API service redeploy uses Dokploy compose.deploy with composeId; Panel/Docs use application.deploy with applicationId.
  • API authentication is done with x-api-key: ${{ secrets.DOKPLOY_API_KEY }}.
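
For reference, the panel notify step follows the same shape, swapping in the application.deploy endpoint and applicationId payload. This is a sketch based on the description above, not the literal workflow step:

yaml
- name: Notify Panel
  if: >
    needs.build-panel.result == 'success' &&
    (env.DOKPLOY_NOTIFY_MODE == 'independent' || steps.status.outputs.has_errors == 'false')
  run: |
    curl -X POST "https://panel.daramex.com/api/application.deploy" \
      -H "Content-Type: application/json" \
      -H "x-api-key: ${{ secrets.DOKPLOY_API_KEY }}" \
      -d '{
        "applicationId": "${{ secrets.DOKPLOY_PANEL_APP_ID }}"
      }'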

Complete execution flow

flowchart TD
    Push[Push to develop] --> Setup[setup: derive env]
    Setup --> DC[detect-changes]
    DC -->|api=true| BA[build-api]
    DC -->|panel=true| BP[build-panel]
    DC -->|docs=true AND develop| BD[build-docs]
    BA -->|push image| Hub[Docker Hub]
    BP -->|push image| Hub
    BD -->|push image| Hub
    BA --> Notify[notify-dokploy]
    BP --> Notify
    BD --> Notify
    Notify -->|API built| WH1[POST compose.deploy]
    Notify -->|Panel built| WH2[POST application.deploy]
    Notify -->|Docs built| WH3[POST application.deploy]
    WH1 --> Dokploy[Dokploy pulls & redeploys]
    WH2 --> Dokploy
    WH3 --> Dokploy

Dockerfile Structure (deep dive)

All three Dockerfiles share the same 4-stage pattern. The stages are designed to maximize Docker layer caching — the most expensive operations (dependency installation) are isolated so they only re-run when package.json or pnpm-lock.yaml changes.

Why turbo prune --docker?

In a monorepo, building one app naively would require copying the entire repo and installing all dependencies for all apps. turbo prune solves this:

turbo prune api --docker

This produces two directories:

| Output | Contents | Used in |
| --- | --- | --- |
| out/json/ | Only package.json files + workspace config for the target app and its dependencies | installer stage (dependency installation) |
| out/full/ | Only the source code for the target app and its dependencies | builder stage (compilation) |
| out/pnpm-lock.yaml | Subset of the lockfile for the pruned workspace | installer stage |

The --docker flag splits the output specifically for multi-stage Docker builds: manifests go in one stage, source code in another.
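
To see what the pruner stage produces, run the same command from the repo root (assuming pnpm is set up; pnpm dlx fetches turbo if it isn't installed locally):

bash
pnpm dlx turbo@^2 prune api --docker
ls out/json   # package.json manifests only
ls out/full   # source for api and its workspace dependencies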

Stage 1: pruner

dockerfile
FROM node:24-alpine AS pruner
RUN npm install -g turbo@^2
WORKDIR /app
COPY . .
RUN turbo prune api --docker
  • Starts from node:24-alpine (minimal Node.js image, ~50MB).
  • Installs turbo globally via npm — this stage only runs turbo (no pnpm needed), so a global install is simpler than pnpm dlx.
  • Copies the entire repo — turbo needs it to resolve the dependency graph.
  • Runs turbo prune which outputs the pruned workspace into /app/out/.

This stage is a throwaway — it only exists to run turbo prune. None of its layers end up in the final image.

Stage 2: installer

dockerfile
FROM node:24-alpine AS installer
RUN corepack enable
WORKDIR /app
COPY --from=pruner /app/out/json/ .
COPY --from=pruner /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
RUN pnpm install --frozen-lockfile
  • Starts fresh (no turbo, no source code).
  • Enables corepack so pnpm is available (Node 24 ships corepack built-in — corepack enable is enough, no explicit corepack prepare needed).
  • Copies only the package.json files and lockfile from the pruner output.
  • Runs pnpm install --frozen-lockfile — installs exact versions from lockfile, fails if lockfile is out of sync.

Caching benefit: Docker caches layers based on the COPY inputs. Since this stage only copies manifests (not source code), the pnpm install layer is cached as long as dependencies haven't changed. Source code changes (the most frequent kind) skip this entire stage.
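
You can observe this locally: change a source file under apps/api/ and rebuild. BuildKit should report the install layer as cached (a sketch; exact step numbering and log format vary):

bash
docker build -f infra/apps/api.Dockerfile -t daramex-api:test .
# After a source-only change, expect something like:
#   CACHED [installer 5/5] RUN pnpm install --frozen-lockfile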

Stage 3: builder

dockerfile
FROM node:24-alpine AS builder
RUN corepack enable
RUN npm install -g turbo@^2
WORKDIR /app
COPY --from=installer /app/ .
COPY --from=pruner /app/out/full/ .
RUN turbo run build --filter=api
  • Enables corepack (pnpm is needed for workspace resolution during build) and installs turbo globally.
  • Copies the installed node_modules from the installer.
  • Overlays the actual source code from out/full/ on top.
  • Runs turbo run build --filter=api which builds the target app and any workspace dependencies it needs (e.g., @repo/schemas).

Docs builder note: The docs Dockerfile adds apk add --no-cache git in this stage because VitePress needs git to compute lastUpdated timestamps from commit history.

Stage 4: runner (differs per app)

API runner

dockerfile
FROM node:24-alpine AS runner
RUN apk add --no-cache dumb-init
WORKDIR /app
COPY --from=builder /app/prod/node_modules ./node_modules
COPY --from=builder /app/apps/api/dist ./dist
ENV PORT=80
EXPOSE 80
USER node
CMD ["dumb-init", "node", "dist/src/main.js"]

Key decisions:

| Instruction | Why |
| --- | --- |
| dumb-init | Acts as PID 1 and properly forwards signals (SIGTERM, SIGINT) to the Node.js process. Without it, node runs as PID 1 and ignores signals, causing slow container shutdowns (Docker waits 10s, then sends SIGKILL). |
| pnpm deploy --prod (in builder) | Extracts only production dependencies into /app/prod/, excluding devDependencies like TypeScript and ESLint. This significantly reduces the final image size. |
| ENV PORT=80 | NestJS reads PORT from the environment via AppConfigService. Port 80 is the standard HTTP port inside the container; Dokploy's reverse proxy handles external routing. |
| USER node | Drops privileges from root. The node user is pre-created in the node:* base images; running as non-root is a security best practice. |
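
A quick local check that signals actually reach the Node.js process (using the test image and env file from the Local Testing section below):

bash
docker run -d --name api-signal-test --env-file apps/api/.env daramex-api:test
time docker stop api-signal-test   # with dumb-init this should return well under the 10s SIGKILL timeout
docker rm api-signal-test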

Panel runner

dockerfile
FROM nginx:1.28.2-alpine AS runner
COPY --from=builder /app/apps/panel/dist /usr/share/nginx/html
COPY infra/apps/nginx/panel.nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
  • Uses nginx:1.28.2-alpine instead of Node.js — a Vite/React app is just static files after building, no runtime needed.
  • The nginx config uses try_files $uri $uri/ /index.html — this is the SPA fallback. When the user navigates to /dashboard/settings, nginx won't find that file on disk and falls back to index.html, where React Router handles the route client-side.
  • The second COPY pulls the nginx config from the build context (not from a previous stage). This works because COPY without --from always reads from the Docker build context (the repo root).
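
A minimal sketch of what panel.nginx.conf contains, based on the try_files behavior described above (the file in infra/apps/nginx/ is the source of truth):

nginx
server {
  listen 80;
  root /usr/share/nginx/html;

  location / {
    # SPA fallback: unknown paths serve index.html so React Router takes over
    try_files $uri $uri/ /index.html;
  }
}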

Docs runner

dockerfile
FROM nginx:1.28.2-alpine AS runner
COPY --from=builder /app/apps/docs/.vitepress/dist /usr/share/nginx/html
COPY infra/apps/nginx/docs.nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
  • Same nginx base, but with a different try_files strategy: $uri $uri.html $uri/ =404.
  • VitePress generates clean URLs — a page at workflows/engineering/deploy-workflow is served from deploy-workflow.html. The .html fallback ($uri.html) handles this without redirects.
  • Unlike the panel, there's no SPA fallback — docs pages are pre-rendered HTML files. A missing page correctly returns 404.
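
The docs counterpart, again sketched from the behavior described above:

nginx
server {
  listen 80;
  root /usr/share/nginx/html;

  location / {
    # Clean URLs: /page resolves to page.html; truly missing pages return 404
    try_files $uri $uri.html $uri/ =404;
  }
}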

Final image size comparison

| App | Base | Contains | Approx. size |
| --- | --- | --- | --- |
| api | node:24-alpine | Node.js runtime + prod node_modules + compiled JS | ~200-300 MB |
| panel | nginx:1.28.2-alpine | nginx + static HTML/JS/CSS | ~30-50 MB |
| docs | nginx:1.28.2-alpine | nginx + static HTML | ~30-50 MB |

The multi-stage build discards everything from the pruner, installer, and builder stages. Only the final runner stage becomes the image.


Required GitHub Secrets

Secrets are scoped to GitHub Environments. The workflow's environment field resolves secrets from the selected environment (Development with current trigger).

| Secret | Scope | Description |
| --- | --- | --- |
| DOCKERHUB_USERNAME | Repo-level | Docker Hub username (daramex25) |
| DOCKERHUB_TOKEN | Repo-level | Docker Hub access token |
| DOKPLOY_API_KEY | Development (+ Production when re-enabled) | Dokploy API key used in the x-api-key header |
| DOKPLOY_API_COMPOSE_ID | Development (+ Production when re-enabled) | Compose ID for API redeploy (/api/compose.deploy) |
| DOKPLOY_PANEL_APP_ID | Development (+ Production when re-enabled) | Application ID for Panel redeploy (/api/application.deploy) |
| DOKPLOY_DOCS_APP_ID | Development only | Application ID for Docs redeploy (/api/application.deploy) |

DOCKERHUB_USERNAME and DOCKERHUB_TOKEN can stay at repo level — GitHub resolves environment-scoped secrets first, then falls back to repo-level. No need to duplicate Docker Hub creds per environment.

Required GitHub Variables

| Variable | Scope | Allowed values | Default | Description |
| --- | --- | --- | --- | --- |
| DOKPLOY_NOTIFY_MODE | Repo or Environment | independent, all-or-nothing | independent | Controls whether successful apps are notified independently or only when all builds succeed |

Environment setup (manual, one-time)

With the current trigger, only Development is required. If main is re-enabled, mirror these in Production with production-specific IDs.

| Secret | Production | Development |
| --- | --- | --- |
| DOKPLOY_API_KEY | prod API key | dev API key |
| DOKPLOY_API_COMPOSE_ID | prod compose ID | dev compose ID |
| DOKPLOY_PANEL_APP_ID | prod panel app ID | dev panel app ID |
| DOKPLOY_DOCS_APP_ID | not needed | dev docs app ID |

Set DOKPLOY_NOTIFY_MODE in GitHub → Settings → Secrets and variables → Actions → Variables (repo-level is usually enough).

Local Testing

Build images locally:

bash
docker build -f infra/apps/api.Dockerfile -t daramex-api:test .
docker build -f infra/apps/panel.Dockerfile -t daramex-panel:test .
docker build -f infra/apps/docs.Dockerfile -t daramex-docs:test .

Run with high ports (avoids conflicts with Grafana/other services):

bash
docker run --rm -p 4510:80 --env-file apps/api/.env daramex-api:test
docker run --rm -p 4520:80 daramex-panel:test
docker run --rm -p 4530:80 daramex-docs:test
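
Quick smoke checks once the containers are up (ports as mapped above):

bash
curl -I http://localhost:4520/            # panel: 200 (index.html)
curl -I http://localhost:4520/dashboard   # panel: 200 via the SPA fallback
curl -I http://localhost:4530/            # docs: 200
curl -I http://localhost:4530/missing     # docs: 404 (no SPA fallback)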

Adding a New App

  1. Create infra/apps/{app}.Dockerfile — copy an existing Dockerfile and update the scope name.
  2. If it's a static site, add an nginx config in infra/apps/nginx/{app}.nginx.conf.
  3. Add the app to .github/workflows/deploy.yml:
    • Add a filter rule in detect-changes.
    • Add a build-{app} job (with needs: [setup, detect-changes] and environment scoping); see the skeleton after this list.
    • Add a notify step in notify-dokploy with the correct endpoint and payload (compose.deploy + composeId or application.deploy + applicationId).
  4. Add the app resource ID secret (for example, DOKPLOY_{APP}_APP_ID) to the target GitHub Environment(s).
  5. Create the Dokploy service and configure it to pull from Docker Hub.
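
A skeleton for the build job in step 3, adapted from the build-api example (replace the {app} placeholders with the new app's name):

yaml
build-{app}:
  runs-on: ubuntu-latest
  needs: [setup, detect-changes]
  if: needs.detect-changes.outputs.{app} == 'true'
  environment: ${{ needs.setup.outputs.env_name }}
  steps:
    - uses: actions/checkout@v4
    - uses: docker/setup-buildx-action@v3
    - uses: docker/login-action@v3
      with:
        username: ${{ secrets.DOCKERHUB_USERNAME }}
        password: ${{ secrets.DOCKERHUB_TOKEN }}
    - uses: docker/build-push-action@v6
      with:
        context: .
        file: infra/apps/{app}.Dockerfile
        push: true
        tags: |
          ${{ env.IMAGE_PREFIX }}-{app}:${{ needs.setup.outputs.tag_prefix }}-${{ github.sha }}
          ${{ env.IMAGE_PREFIX }}-{app}:${{ needs.setup.outputs.tag_prefix }}-latest

Cache settings are omitted here; mirror build-api's cache-from / cache-to lines if you want registry-backed layer caching for the new app.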