Deployment Workflow
Overview
The workflow file still maps branches to two environments, but only develop is currently enabled as a trigger:
| Branch | Trigger enabled? | GitHub Environment | Tag prefix | Deploys docs? |
|---|---|---|---|---|
| main | No (temporarily disabled) | Production | `prod-*` | No |
| develop | Yes | Development | `dev-*` | Yes |
Every push to develop triggers a GitHub Actions pipeline that:
- Derives environment-specific values (tag prefix, environment name, docs flag) from the branch
- Detects which apps changed
- Builds Docker images only for changed apps
- Pushes images to Docker Hub with environment-prefixed tags
- Notifies Dokploy via authenticated API calls (`compose.deploy` / `application.deploy`) using environment-scoped secrets
- Applies a notification policy (`independent` or `all-or-nothing`) controlled by `DOKPLOY_NOTIFY_MODE`
Architecture
```mermaid
flowchart LR
    Push[git push develop] --> GHA[GitHub Actions]
    GHA --> Setup[Setup: derive env]
    Setup --> Detect[Detect Changes]
    Detect --> BuildAPI[Build API Image]
    Detect --> BuildPanel[Build Panel Image]
    Detect -->|develop only| BuildDocs[Build Docs Image]
    BuildAPI --> Hub[Docker Hub]
    BuildPanel --> Hub
    BuildDocs --> Hub
    Hub --> Notify[Dokploy API Notify]
    Notify --> Deploy[Dokploy Redeploy]
```
Image Naming
`daramex25/daramex-{app}:{env}-{identifier}`

| App | Image | Production tags | Development tags |
|---|---|---|---|
| api | daramex25/daramex-api | prod-{sha}, prod-latest | dev-{sha}, dev-latest |
| panel | daramex25/daramex-panel | prod-{sha}, prod-latest | dev-{sha}, dev-latest |
| docs | daramex25/daramex-docs | not built | dev-{sha}, dev-latest |
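As an illustration of how the naming pattern expands, the shell sketch below derives the two tags pushed per build. This is not part of the workflow; the app, prefix, and SHA values are made up.

```bash
# Sketch: expand daramex25/daramex-{app}:{env}-{identifier} for one build.
# IMAGE_PREFIX mirrors the workflow's env block; the rest are illustrative.
IMAGE_PREFIX="daramex25/daramex"
APP="api"            # which app is being built
TAG_PREFIX="dev"     # "prod" on main, "dev" on develop
GIT_SHA="a1b2c3d"    # illustrative commit SHA

# Two tags per build: immutable (sha) and mutable (latest).
SHA_TAG="${IMAGE_PREFIX}-${APP}:${TAG_PREFIX}-${GIT_SHA}"
LATEST_TAG="${IMAGE_PREFIX}-${APP}:${TAG_PREFIX}-latest"

echo "$SHA_TAG"      # daramex25/daramex-api:dev-a1b2c3d
echo "$LATEST_TAG"   # daramex25/daramex-api:dev-latest
```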
Change Detection
Each app only rebuilds when its relevant paths change:
| App | Trigger paths |
|---|---|
| api | `apps/api/**`, `packages/schemas/**`, `infra/apps/api.Dockerfile`, `pnpm-lock.yaml`, `turbo.json` |
| panel | `apps/panel/**`, `packages/schemas/**`, `packages/api-client/**`, `infra/apps/panel.Dockerfile`, `pnpm-lock.yaml`, `turbo.json` |
| docs | `apps/docs/**`, `infra/apps/docs.Dockerfile`, `pnpm-lock.yaml`, `turbo.json` |
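A rough sketch of how those filters classify a single changed path (illustrative only: the workflow uses dorny/paths-filter, and the patterns here are simplified):

```bash
# Map one changed file path to the app(s) it should trigger.
# Patterns mirror the table above; this is NOT the real filter engine.
apps_for_path() {
  path="$1"
  apps=""
  case "$path" in
    apps/api/*|infra/apps/api.Dockerfile) apps="api" ;;
    apps/panel/*|packages/api-client/*|infra/apps/panel.Dockerfile) apps="panel" ;;
    apps/docs/*|infra/apps/docs.Dockerfile) apps="docs" ;;
    packages/schemas/*) apps="api panel" ;;             # shared schemas hit both
    pnpm-lock.yaml|turbo.json) apps="api panel docs" ;; # global inputs hit all
  esac
  echo "$apps"
}

apps_for_path "packages/schemas/src/user.ts"   # -> api panel
apps_for_path "apps/docs/guide/index.md"       # -> docs
```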
GitHub Workflow Structure (step by step)
Source: `.github/workflows/deploy.yml`
Trigger and concurrency
```yaml
on:
  push:
    branches: [develop]
concurrency:
  group: deploy-${{ github.ref }}
  cancel-in-progress: true
```

The workflow currently runs on every push to `develop`. `concurrency` guarantees only one deployment runs at a time per branch: if a new push arrives while a previous deploy is still running, the old run is cancelled. This prevents race conditions where two builds push images simultaneously and Dokploy receives deploy requests out of order.
Global environment variables
```yaml
env:
  REGISTRY: docker.io
  IMAGE_PREFIX: daramex25/daramex
  DOKPLOY_NOTIFY_MODE: ${{ vars.DOKPLOY_NOTIFY_MODE || 'independent' }}
```

Defined once at the top so every job can reference `${{ env.IMAGE_PREFIX }}-api`, `${{ env.IMAGE_PREFIX }}-panel`, etc. without repeating the Docker Hub org name.

`DOKPLOY_NOTIFY_MODE` controls deploy notification behavior:

- `independent` (default): notify each successful app even if another app failed.
- `all-or-nothing`: notify only when no app build failed/cancelled in this run.
Job 1: setup
```yaml
setup:
  runs-on: ubuntu-latest
  outputs:
    env_name: ${{ steps.vars.outputs.env_name }}
    tag_prefix: ${{ steps.vars.outputs.tag_prefix }}
    deploy_docs: ${{ steps.vars.outputs.deploy_docs }}
  steps:
    - name: Set environment variables
      id: vars
      run: |
        if [ "${{ github.ref_name }}" = "main" ]; then
          echo "env_name=Production" >> "$GITHUB_OUTPUT"
          echo "tag_prefix=prod" >> "$GITHUB_OUTPUT"
          echo "deploy_docs=false" >> "$GITHUB_OUTPUT"
        else
          echo "env_name=Development" >> "$GITHUB_OUTPUT"
          echo "tag_prefix=dev" >> "$GITHUB_OUTPUT"
          echo "deploy_docs=true" >> "$GITHUB_OUTPUT"
        fi
```

What it does: Derives three values from the branch name that all downstream jobs consume:
| Output | main | develop | Purpose |
|---|---|---|---|
| `env_name` | Production | Development | Selects the GitHub Environment for secrets |
| `tag_prefix` | prod | dev | Prefixes Docker image tags |
| `deploy_docs` | false | true | Controls whether the docs job runs |
Downstream jobs reference these via `needs.setup.outputs.*` and set `environment: ${{ needs.setup.outputs.env_name }}` to scope secrets to the correct GitHub Environment.
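The same derivation can be reproduced locally with a small shell function (a sketch: in the workflow the branch name comes from `github.ref_name` and the values are written to `$GITHUB_OUTPUT`):

```bash
# Branch -> (env_name, tag_prefix, deploy_docs), matching the setup job.
derive_env() {
  ref_name="$1"
  if [ "$ref_name" = "main" ]; then
    echo "env_name=Production tag_prefix=prod deploy_docs=false"
  else
    echo "env_name=Development tag_prefix=dev deploy_docs=true"
  fi
}

derive_env main      # -> env_name=Production tag_prefix=prod deploy_docs=false
derive_env develop   # -> env_name=Development tag_prefix=dev deploy_docs=true
```

Note the else-branch is the catch-all: any non-main branch (only `develop` can trigger today) gets Development values.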
Job 2: detect-changes
```yaml
detect-changes:
  runs-on: ubuntu-latest
  outputs:
    api: ${{ steps.filter.outputs.api }}
    panel: ${{ steps.filter.outputs.panel }}
    docs: ${{ steps.filter.outputs.docs }}
  steps:
    - uses: actions/checkout@v4
    - uses: dorny/paths-filter@v3
      id: filter
      with:
        filters: |
          api:
            - 'apps/api/**'
            - 'packages/schemas/**'
            - ...
```

What it does: Compares the pushed commits against the base and outputs true/false for each app. Downstream jobs read these outputs to decide whether to run.
Why each path is included:
| Path | Reason |
|---|---|
| `apps/api/**` | The app's own source code |
| `packages/schemas/**` | Shared Zod schemas — used by api and panel |
| `packages/api-client/**` | Generated API client — used by panel |
| `infra/apps/*.Dockerfile` | The Dockerfile itself changed |
| `pnpm-lock.yaml` | Dependency versions changed |
| `turbo.json` | Build pipeline config changed |
The `outputs` block exposes the results so other jobs can use `needs.detect-changes.outputs.api == 'true'` as a condition.
Jobs 3-5: build-api, build-panel, build-docs
All three follow the same pattern. Using build-api as example:
```yaml
build-api:
  runs-on: ubuntu-latest
  needs: [setup, detect-changes]                    # (1)
  if: needs.detect-changes.outputs.api == 'true'    # (2)
  environment: ${{ needs.setup.outputs.env_name }}  # (3)
  steps:
    - uses: actions/checkout@v4                     # (4)
    - uses: docker/setup-buildx-action@v3           # (5)
    - uses: docker/login-action@v3                  # (6)
      with:
        username: ${{ secrets.DOCKERHUB_USERNAME }}
        password: ${{ secrets.DOCKERHUB_TOKEN }}
    - uses: docker/build-push-action@v6             # (7)
      with:
        context: .
        file: infra/apps/api.Dockerfile
        push: true
        tags: |
          ${{ env.IMAGE_PREFIX }}-api:${{ needs.setup.outputs.tag_prefix }}-${{ github.sha }}
          ${{ env.IMAGE_PREFIX }}-api:${{ needs.setup.outputs.tag_prefix }}-latest
        cache-from: type=registry,ref=...:buildcache
        cache-to: type=registry,ref=...:buildcache,mode=max
```

Step by step:

1. **`needs: [setup, detect-changes]`** — waits for both setup (environment values) and change detection.
2. **`if: ... == 'true'`** — skips the entire job if this app's files didn't change. The job shows as "skipped" in the Actions UI (not "failed").
3. **`environment`** — scopes `secrets.*` to the correct GitHub Environment (Production or Development). This is how the same secret names can resolve to different Dokploy IDs/credentials per environment.
4. **`actions/checkout`** — clones the repo. Needed because each job runs in a fresh VM.
5. **`docker/setup-buildx-action`** — sets up Docker Buildx, which enables advanced features: multi-stage caching, registry cache backend, and parallel stage building.
6. **`docker/login-action`** — authenticates to Docker Hub using the repo secrets so the next step can push images and read/write cache.
7. **`docker/build-push-action`** — the core step:
   - `context: .` — the entire repo root is the Docker build context (needed because `turbo prune` reads the full monorepo).
   - `file` — points to the specific Dockerfile for this app.
   - `push: true` — pushes the built image to Docker Hub.
   - `tags` — two tags per image using the dynamic `tag_prefix`:
     - `{prefix}-{sha}` — immutable, tied to the exact commit. Useful for rollbacks and auditing.
     - `{prefix}-latest` — mutable, always points to the most recent build. This is what Dokploy pulls.
   - `cache-from` / `cache-to` — uses the Docker Hub registry itself as a layer cache (`mode=max` caches all layers, not just the final image). On subsequent builds, unchanged layers (like `pnpm install`) are pulled from cache instead of rebuilt, reducing build time from ~5min to ~1min when only source code changed.
build-docs has an extra condition:
```yaml
if: needs.detect-changes.outputs.docs == 'true' && needs.setup.outputs.deploy_docs == 'true'
```

This ensures docs deploy only for Development. With the current trigger (develop only), docs can deploy when docs paths change; if main is re-enabled later, this guard still prevents docs deploys in Production.
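The guard can be sketched as a tiny decision function (illustrative only: the real check is the YAML `if:` expression, and both inputs arrive as the strings "true"/"false"):

```bash
# Both conditions must hold: docs files changed AND the branch allows docs.
docs_should_build() {
  docs_changed="$1"  # needs.detect-changes.outputs.docs
  deploy_docs="$2"   # needs.setup.outputs.deploy_docs
  if [ "$docs_changed" = "true" ] && [ "$deploy_docs" = "true" ]; then
    echo "build"
  else
    echo "skip"
  fi
}

docs_should_build true true    # develop, docs changed -> build
docs_should_build true false   # main (if re-enabled), docs changed -> skip
```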
The three build jobs run in parallel since they all only depend on setup and detect-changes, not on each other.
Job 6: notify-dokploy
```yaml
notify-dokploy:
  runs-on: ubuntu-latest
  needs: [setup, detect-changes, build-api, build-panel, build-docs]
  if: always() && !cancelled()
  environment: ${{ needs.setup.outputs.env_name }}
  steps:
    - name: Check overall build status
      id: status
      run: |
        if [ "${{ needs.build-api.result }}" == "failure" ] || [ "${{ needs.build-api.result }}" == "cancelled" ] || \
           [ "${{ needs.build-panel.result }}" == "failure" ] || [ "${{ needs.build-panel.result }}" == "cancelled" ] || \
           [ "${{ needs.build-docs.result }}" == "failure" ] || [ "${{ needs.build-docs.result }}" == "cancelled" ]; then
          echo "has_errors=true" >> "$GITHUB_OUTPUT"
        else
          echo "has_errors=false" >> "$GITHUB_OUTPUT"
        fi
    - name: Notify API
      if: >
        needs.build-api.result == 'success' &&
        (env.DOKPLOY_NOTIFY_MODE == 'independent' || steps.status.outputs.has_errors == 'false')
      run: |
        curl -X POST "https://panel.daramex.com/api/compose.deploy" \
          -H "Content-Type: application/json" \
          -H "x-api-key: ${{ secrets.DOKPLOY_API_KEY }}" \
          -d '{
            "composeId": "${{ secrets.DOKPLOY_API_COMPOSE_ID }}"
          }'
    # ... same for panel and docs
```

- **`needs: [setup, ...]`** — waits for all build jobs to finish (or be skipped). Includes `setup` so `env_name` is available.
- **`environment`** — scopes Dokploy secrets to the correct environment.
- **`if: always() && !cancelled()`** — this is critical. By default, if a `needs` job is skipped (because the app didn't change), all downstream jobs are also skipped. `always()` overrides this so the notify job always runs. `!cancelled()` ensures it doesn't run if the entire workflow was cancelled.
- **Check overall build status** consolidates failures/cancellations into `has_errors`.
- Each notify step checks both:
  - `needs.build-{app}.result == 'success'`
  - the `DOKPLOY_NOTIFY_MODE` policy (`independent` always allows successful apps; `all-or-nothing` requires `has_errors == false`)
- API service redeploy uses Dokploy `compose.deploy` with `composeId`; Panel/Docs use `application.deploy` with `applicationId`.
- API authentication is done with `x-api-key: ${{ secrets.DOKPLOY_API_KEY }}`.
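The combination of build result, notify mode, and overall status can be sketched as one small function (illustrative: the real gate is the `if:` expression on each notify step):

```bash
# Decide whether to notify Dokploy for one app.
should_notify() {
  result="$1"      # needs.build-{app}.result: success|failure|cancelled|skipped
  mode="$2"        # DOKPLOY_NOTIFY_MODE: independent|all-or-nothing
  has_errors="$3"  # steps.status.outputs.has_errors: true|false
  if [ "$result" != "success" ]; then
    echo "no"      # failed/cancelled/skipped apps are never notified
  elif [ "$mode" = "independent" ] || [ "$has_errors" = "false" ]; then
    echo "yes"
  else
    echo "no"      # all-or-nothing + some failure elsewhere
  fi
}

should_notify success independent true      # panel failed, but api still deploys -> yes
should_notify success all-or-nothing true   # any failure blocks every notify -> no
should_notify skipped independent false     # unchanged app -> no
```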
Complete execution flow
```mermaid
flowchart TD
    Push[Push to develop] --> Setup[setup: derive env]
    Setup --> DC[detect-changes]
    DC -->|api=true| BA[build-api]
    DC -->|panel=true| BP[build-panel]
    DC -->|docs=true AND develop| BD[build-docs]
    BA -->|push image| Hub[Docker Hub]
    BP -->|push image| Hub
    BD -->|push image| Hub
    BA --> Notify[notify-dokploy]
    BP --> Notify
    BD --> Notify
    Notify -->|API built| WH1[POST compose.deploy]
    Notify -->|Panel built| WH2[POST application.deploy]
    Notify -->|Docs built| WH3[POST application.deploy]
    WH1 --> Dokploy[Dokploy pulls & redeploys]
    WH2 --> Dokploy
    WH3 --> Dokploy
```
Dockerfile Structure (deep dive)
All three Dockerfiles share the same 4-stage pattern. The stages are designed to maximize Docker layer caching — the most expensive operations (dependency installation) are isolated so they only re-run when package.json or pnpm-lock.yaml changes.
Why turbo prune --docker?
In a monorepo, building one app naively would require copying the entire repo and installing all dependencies for all apps. turbo prune solves this:
```bash
turbo prune api --docker
```

This produces two directories plus a pruned lockfile:
| Output | Contents | Used in |
|---|---|---|
| `out/json/` | Only package.json files + workspace config for the target app and its dependencies | installer stage (dependency installation) |
| `out/full/` | Only the source code for the target app and its dependencies | builder stage (compilation) |
| `out/pnpm-lock.yaml` | Subset of the lockfile for the pruned workspace | installer stage |
The --docker flag splits the output specifically for multi-stage Docker builds: manifests go in one stage, source code in another.
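As a rough mental model of what that split produces (this is NOT turbo's implementation; the workspace layout below is a toy example):

```bash
# Simulate the json/full split on a throwaway fake workspace.
set -e
WS=$(mktemp -d)
cd "$WS"
mkdir -p apps/api/src packages/schemas/src out/json out/full
echo '{"name":"api"}' > apps/api/package.json
echo 'code' > apps/api/src/main.ts
echo '{"name":"@repo/schemas"}' > packages/schemas/package.json

# out/json/: only the manifests (enough for `pnpm install`).
find apps packages -name package.json | while read -r f; do
  mkdir -p "out/json/$(dirname "$f")"
  cp "$f" "out/json/$f"
done
# out/full/: manifests AND source (enough for `turbo run build`).
cp -r apps packages out/full/

find out/json -type f | sort
```

The point of the split: the installer stage copies `out/json/` only, so its `pnpm install` layer is invalidated by manifest changes, never by source edits.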
Stage 1: pruner
```dockerfile
FROM node:24-alpine AS pruner
RUN npm install -g turbo@^2
WORKDIR /app
COPY . .
RUN turbo prune api --docker
```

- Starts from `node:24-alpine` (minimal Node.js image, ~50MB).
- Installs turbo globally via npm — this stage only runs turbo (no pnpm needed), so a global install is simpler than `pnpm dlx`.
- Copies the entire repo — turbo needs it to resolve the dependency graph.
- Runs `turbo prune` which outputs the pruned workspace into `/app/out/`.

This stage is a throwaway — it only exists to run `turbo prune`. None of its layers end up in the final image.
Stage 2: installer
```dockerfile
FROM node:24-alpine AS installer
RUN corepack enable
WORKDIR /app
COPY --from=pruner /app/out/json/ .
COPY --from=pruner /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
RUN pnpm install --frozen-lockfile
```

- Starts fresh (no turbo, no source code).
- Enables `corepack` so pnpm is available (Node 24 ships corepack built-in — `corepack enable` is enough, no explicit `corepack prepare` needed).
- Copies only the `package.json` files and lockfile from the pruner output.
- Runs `pnpm install --frozen-lockfile` — installs exact versions from the lockfile, and fails if the lockfile is out of sync.

Caching benefit: Docker caches layers based on the COPY inputs. Since this stage only copies manifests (not source code), the `pnpm install` layer is cached as long as dependencies haven't changed. Source code changes (the most frequent kind) skip this entire stage.
Stage 3: builder
```dockerfile
FROM node:24-alpine AS builder
RUN corepack enable
RUN npm install -g turbo@^2
WORKDIR /app
COPY --from=installer /app/ .
COPY --from=pruner /app/out/full/ .
RUN turbo run build --filter=api
```

- Enables corepack (pnpm is needed for workspace resolution during build) and installs turbo globally.
- Copies the installed `node_modules` from the installer.
- Overlays the actual source code from `out/full/` on top.
- Runs `turbo run build --filter=api` which builds the target app and any workspace dependencies it needs (e.g., `@repo/schemas`).

Docs builder note: The docs Dockerfile adds `apk add --no-cache git` in this stage because VitePress needs git to compute `lastUpdated` timestamps from commit history.
Stage 4: runner (differs per app)
API runner
```dockerfile
FROM node:24-alpine AS runner
RUN apk add --no-cache dumb-init
WORKDIR /app
COPY --from=builder /app/prod/node_modules ./node_modules
COPY --from=builder /app/apps/api/dist ./dist
ENV PORT=80
EXPOSE 80
USER node
CMD ["dumb-init", "node", "dist/src/main.js"]
```

Key decisions:

| Line | Why |
|---|---|
| `dumb-init` | Acts as PID 1 and properly forwards signals (SIGTERM, SIGINT) to the Node.js process. Without it, node runs as PID 1 and ignores signals, causing slow container shutdowns (Docker waits 10s then sends SIGKILL). |
| `pnpm deploy --prod` (in builder) | Extracts only production dependencies into `/app/prod/`, excluding devDependencies like TypeScript, ESLint, etc. This significantly reduces the final image size. |
| `ENV PORT=80` | NestJS reads PORT from the environment via `AppConfigService`. Port 80 is the standard HTTP port inside the container — Dokploy's reverse proxy handles external routing. |
| `USER node` | Drops privileges from root. The `node` user is pre-created in the `node:*` base images. Running as non-root is a security best practice. |
Panel runner
```dockerfile
FROM nginx:1.28.2-alpine AS runner
COPY --from=builder /app/apps/panel/dist /usr/share/nginx/html
COPY infra/apps/nginx/panel.nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

- Uses `nginx:1.28.2-alpine` instead of Node.js — a Vite/React app is just static files after building, so no runtime is needed.
- The nginx config uses `try_files $uri $uri/ /index.html` — this is the SPA fallback. When the user navigates to `/dashboard/settings`, nginx won't find that file on disk and falls back to `index.html`, where React Router handles the route client-side.
- The second `COPY` pulls the nginx config from the build context (not from a previous stage). This works because `COPY` without `--from` always reads from the Docker build context (the repo root).
Docs runner
```dockerfile
FROM nginx:1.28.2-alpine AS runner
COPY --from=builder /app/apps/docs/.vitepress/dist /usr/share/nginx/html
COPY infra/apps/nginx/docs.nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

- Same nginx base, but with a different `try_files` strategy: `$uri $uri.html $uri/ =404`.
- VitePress generates clean URLs — a page at `workflows/engineering/deploy-workflow` is served from `deploy-workflow.html`. The `.html` fallback (`$uri.html`) handles this without redirects.
- Unlike the panel, there's no SPA fallback — docs pages are pre-rendered HTML files. A missing page correctly returns 404.
Final image size comparison
| App | Base | Contains | Approx. size |
|---|---|---|---|
| api | node:24-alpine | Node.js runtime + prod node_modules + compiled JS | ~200-300MB |
| panel | nginx:1.28.2-alpine | nginx + static HTML/JS/CSS | ~30-50MB |
| docs | nginx:1.28.2-alpine | nginx + static HTML | ~30-50MB |
The multi-stage build discards everything from the pruner, installer, and builder stages. Only the final runner stage becomes the image.
Required GitHub Secrets
Secrets are scoped to GitHub Environments. The workflow's environment field resolves secrets from the selected environment (Development with current trigger).
| Secret | Scope | Description |
|---|---|---|
| `DOCKERHUB_USERNAME` | Repo-level | Docker Hub username (daramex25) |
| `DOCKERHUB_TOKEN` | Repo-level | Docker Hub access token |
| `DOKPLOY_API_KEY` | Development (+ Production when re-enabled) | Dokploy API key used in the `x-api-key` header |
| `DOKPLOY_API_COMPOSE_ID` | Development (+ Production when re-enabled) | Compose ID for API redeploy (`/api/compose.deploy`) |
| `DOKPLOY_PANEL_APP_ID` | Development (+ Production when re-enabled) | Application ID for Panel redeploy (`/api/application.deploy`) |
| `DOKPLOY_DOCS_APP_ID` | Development only | Application ID for Docs redeploy (`/api/application.deploy`) |
DOCKERHUB_USERNAME and DOCKERHUB_TOKEN can stay at repo level — GitHub resolves environment-scoped secrets first, then falls back to repo-level. No need to duplicate Docker Hub creds per environment.
Required GitHub Variables
| Variable | Scope | Allowed values | Default | Description |
|---|---|---|---|---|
| `DOKPLOY_NOTIFY_MODE` | Repo or Environment | `independent`, `all-or-nothing` | `independent` | Controls whether successful apps are notified independently or only when all builds succeed |
Environment setup (manual, one-time)
With the current trigger, only Development is required. If main is re-enabled, mirror these in Production with production-specific IDs.
| Secret | Production | Development |
|---|---|---|
| `DOKPLOY_API_KEY` | prod API key | dev API key |
| `DOKPLOY_API_COMPOSE_ID` | prod compose ID | dev compose ID |
| `DOKPLOY_PANEL_APP_ID` | prod panel app ID | dev panel app ID |
| `DOKPLOY_DOCS_APP_ID` | not needed | dev docs app ID |
Set DOKPLOY_NOTIFY_MODE in GitHub → Settings → Secrets and variables → Actions → Variables (repo-level is usually enough).
Local Testing
Build images locally:
```bash
docker build -f infra/apps/api.Dockerfile -t daramex-api:test .
docker build -f infra/apps/panel.Dockerfile -t daramex-panel:test .
docker build -f infra/apps/docs.Dockerfile -t daramex-docs:test .
```

Run with high ports (avoids conflicts with Grafana/other services):
```bash
docker run --rm -p 4510:80 --env-file apps/api/.env daramex-api:test
docker run --rm -p 4520:80 daramex-panel:test
docker run --rm -p 4530:80 daramex-docs:test
```

Adding a New App
1. Create `infra/apps/{app}.Dockerfile` — copy an existing Dockerfile and update the scope name.
2. If it's a static site, add an nginx config in `infra/apps/nginx/{app}.nginx.conf`.
3. Add the app to `.github/workflows/deploy.yml`:
   - Add a filter rule in `detect-changes`.
   - Add a `build-{app}` job (with `needs: [setup, detect-changes]` and `environment` scoping).
   - Add a notify step in `notify-dokploy` with the correct endpoint and payload (`compose.deploy` + `composeId` or `application.deploy` + `applicationId`).
4. Add the app resource ID secret (for example, `DOKPLOY_{APP}_APP_ID`) to the target GitHub Environment(s).
5. Create the Dokploy service and configure it to pull from Docker Hub.