apps/web runtime.
Files That Define The Stack
- `apps/web/Dockerfile`
- `docker-compose.web.yml`
- `docker-compose.web.prod.yml`
- `scripts/docker-web.js`
- `scripts/check-docker-web.js`
Supported Commands
| Command | Purpose |
|---|---|
| `bun dev:web:docker` | Run the web dev workflow inside Docker |
| `bun devx:web:docker` | Explicitly start local Supabase, then the Docker dev workflow |
| `bun devrs:web:docker` | Explicitly start and reset local Supabase, then the Docker dev workflow |
| `bun dev:web:docker:down` | Stop the Docker dev workflow |
| `bun serve:web:docker` | Build and run the production web image in-place |
| `bun serve:web:docker:bg` | Blue/green production deploy with health-checked cutover |
| `bun serve:web:docker:bg:watch` | Poll the tracked GitHub branch every 1s and auto-run blue/green after a successful fast-forward pull |
| `bun serve:web:docker:down` | Stop the production Docker stack |
| `bun serve:web:docker:bg:down` | Stop the blue/green stack and clear local runtime state |
| `bun check:docker` | Validate Dockerfile and compose parity rules |
Flags And Implicit Mappings
| Flag | Meaning |
|---|---|
| `--without-redis` | Disable the bundled Redis profile and skip Docker-injected Redis env |
| `--with-supabase` | Start local Supabase before the Docker web flow |
| `--reset-supabase` | Start and reset local Supabase before the Docker web flow |
| `--mode prod` | Use the production compose file instead of the dev stack |
| `--strategy blue-green` | Use blue/green production deployment instead of in-place replacement |
| `--profile redis` | Explicitly enable the Redis profile when calling the helper directly |
| `--build-memory 4g` | Run builds through a capped Buildx builder with a memory ceiling |
| `--build-cpus 2` | Run builds through a capped Buildx builder with an approximate CPU limit |
| `--build-max-parallelism 2` | Limit concurrent BuildKit solve steps for lower build pressure |
| `--build-builder-name platform-web-capped-builder` | Override the throttled Buildx builder name |
| Command | Implicit flags |
|---|---|
| `bun dev:web:docker` | none |
| `bun devx:web:docker` | `--with-supabase` |
| `bun devrs:web:docker` | `--reset-supabase` |
| `bun serve:web:docker` | `--mode prod` |
| `bun serve:web:docker:bg` | `--mode prod --strategy blue-green` |
| `bun dev:web:docker -- --without-redis` | `--without-redis` |
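The implicit mappings above amount to a small lookup from root script name to helper flags. A minimal sketch, assuming a hypothetical `helperArgs` function (the script names and flags come from the tables above; the mapping shape is illustrative, not the real `scripts/docker-web.js` code):

```javascript
// Hypothetical sketch: which flags each root script implies for the
// Docker web helper. Extra flags after `--` are appended as-is.
const IMPLICIT_FLAGS = {
  'dev:web:docker': [],
  'devx:web:docker': ['--with-supabase'],
  'devrs:web:docker': ['--reset-supabase'],
  'serve:web:docker': ['--mode', 'prod'],
  'serve:web:docker:bg': ['--mode', 'prod', '--strategy', 'blue-green'],
};

function helperArgs(script, extraFlags = []) {
  const implicit = IMPLICIT_FLAGS[script] ?? [];
  return [...implicit, ...extraFlags];
}

console.log(helperArgs('serve:web:docker:bg'));
// -> [ '--mode', 'prod', '--strategy', 'blue-green' ]
console.log(helperArgs('dev:web:docker', ['--without-redis']));
// -> [ '--without-redis' ]
```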
Runtime Requirements
- `apps/web/.env.local` must exist for both the build secret and the runtime env file.
- Docker BuildKit must be available. The helper sets `COMPOSE_DOCKER_CLI_BUILD=1` and `DOCKER_BUILDKIT=1`.
- The Docker web flow does not start local Supabase unless you explicitly choose `bun devx:web:docker` or `bun devrs:web:docker`.
- By default the Docker container uses the Supabase URL already configured in `apps/web/.env.local`, which should stay pointed at the cloud project for normal Tuturuuu work.
- If that configured URL explicitly points at host-run local Supabase, the helper rewrites the server-side Supabase URL to `host.docker.internal` while leaving `NEXT_PUBLIC_SUPABASE_URL` alone for browsers.
- Dockerized web commands auto-enable the local Redis companion stack and inject `UPSTASH_REDIS_REST_URL` plus a generated `UPSTASH_REDIS_REST_TOKEN` into the web container.
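The host-local Supabase rewrite can be sketched as follows. This is a minimal illustration, not the real helper code: the server-side env var name `SUPABASE_URL` is an assumption, while the `host.docker.internal` target and the untouched `NEXT_PUBLIC_SUPABASE_URL` come from this doc.

```javascript
// Sketch of the Docker rewrite rule: a server-side Supabase URL that
// points at the host's local Supabase is redirected to
// host.docker.internal; the browser-facing NEXT_PUBLIC_SUPABASE_URL
// is left alone. SUPABASE_URL is an assumed variable name.
const LOCAL_HOSTS = new Set(['localhost', '127.0.0.1']);

function rewriteForDocker(env) {
  const out = { ...env };
  try {
    const url = new URL(env.SUPABASE_URL ?? '');
    if (LOCAL_HOSTS.has(url.hostname)) {
      url.hostname = 'host.docker.internal';
      out.SUPABASE_URL = url.toString();
    }
  } catch {
    // Not a parseable URL; leave the env untouched.
  }
  return out;
}

const rewritten = rewriteForDocker({
  SUPABASE_URL: 'http://127.0.0.1:54321',
  NEXT_PUBLIC_SUPABASE_URL: 'http://127.0.0.1:54321',
});
console.log(rewritten.SUPABASE_URL);             // http://host.docker.internal:54321/
console.log(rewritten.NEXT_PUBLIC_SUPABASE_URL); // http://127.0.0.1:54321 (unchanged)
```

Cloud URLs pass through untouched, matching the default cloud-project behavior above.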
Coolify
Coolify can provide enough default deployment metadata for Tuturuuu's Dockerfile setup to derive the app origin even when you do not manually define the usual app URL variables.

- During Dockerfile builds, `scripts/build-web-docker.js` now derives missing `WEB_APP_URL`, `NEXT_PUBLIC_WEB_APP_URL`, and `NEXT_PUBLIC_APP_URL` values from Coolify's `COOLIFY_URL` or `COOLIFY_FQDN` defaults before running `bun run build:web`.
- During production container startup, `apps/web/docker/prod-entrypoint.js` applies the same Coolify fallback so server-side runtime code sees the same derived values.
- The runtime URL resolvers used by the web proxy, internal API client, and drive export/auto-extract flows also fall back to `COOLIFY_URL` and `COOLIFY_FQDN`.
- Still set explicit Tuturuuu env like `NEXT_PUBLIC_SUPABASE_URL`, `NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY`, `SUPABASE_SECRET_KEY`, and any email or storage secrets yourself.
- You can omit `WEB_APP_URL`, `NEXT_PUBLIC_WEB_APP_URL`, and `NEXT_PUBLIC_APP_URL` if Coolify already injects `COOLIFY_URL` or `COOLIFY_FQDN` for the deployment.
- If you need one specific canonical domain while Coolify exposes multiple domains, set the Tuturuuu app URL variables explicitly instead of relying on the automatic fallback.
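The fallback order described above can be sketched as a small derivation helper. This is an assumed shape, not the real `scripts/build-web-docker.js` logic; in particular, treating `COOLIFY_URL`/`COOLIFY_FQDN` as possibly comma-separated multi-domain values is an assumption.

```javascript
// Sketch of the Coolify app-URL fallback: explicit Tuturuuu variables
// win, then COOLIFY_URL, then COOLIFY_FQDN (prefixed with https:// if
// it is a bare hostname). Comma-splitting is an assumption for
// multi-domain Coolify deployments.
function deriveAppUrl(env) {
  if (env.WEB_APP_URL) return env.WEB_APP_URL;
  if (env.COOLIFY_URL) return env.COOLIFY_URL.split(',')[0].trim();
  if (env.COOLIFY_FQDN) {
    const fqdn = env.COOLIFY_FQDN.split(',')[0].trim();
    return fqdn.startsWith('http') ? fqdn : `https://${fqdn}`;
  }
  return undefined;
}

function withAppUrlFallback(env) {
  const appUrl = deriveAppUrl(env);
  if (!appUrl) return env;
  return {
    ...env,
    WEB_APP_URL: env.WEB_APP_URL ?? appUrl,
    NEXT_PUBLIC_WEB_APP_URL: env.NEXT_PUBLIC_WEB_APP_URL ?? appUrl,
    NEXT_PUBLIC_APP_URL: env.NEXT_PUBLIC_APP_URL ?? appUrl,
  };
}

console.log(withAppUrlFallback({ COOLIFY_FQDN: 'app.example.com' }).WEB_APP_URL);
// -> https://app.example.com
```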
Development Mode
Development mode exists to preserve the normal root script contract while moving the web runtime into containers.

- Container-managed `node_modules` are isolated from the host.
- Package-local `node_modules` and `dist` directories are also isolated so host installs do not shadow container artifacts.
- A host `bun install` is not required just to boot the Dockerized web stack.
Production Mode
The production compose file uses the `runner` target from `apps/web/Dockerfile`.
In-Place
Blue/Green
- Reads the last active color from `tmp/docker-web/prod/active-color`.
- Ignores that state if the corresponding container no longer exists.
- Stops and removes the old inactive-color container, then builds and starts its fresh replacement.
- Waits for the new container healthcheck to pass.
- Keeps the stable `web-proxy` container running in place during ordinary promotions instead of re-running `compose up` against the public `:7803` listener.
- Validates the generated nginx config with `nginx -t`, then reloads nginx inside that existing proxy only after the new color is healthy.
- Immediately verifies the proxy can serve the internal `/__platform/drain-status` endpoint through the newly promoted color before marking the new color active. This avoids false deployment failures from public API middleware or rate limits.
- Polls an internal drain-status endpoint on the old color and waits until it has no in-flight HTTP work left before demoting it to standby. This keeps long-running server actions, route handlers, and other open requests from being cut off mid-flight.
- Falls back to the short fixed drain window only when the old image predates the drain-status endpoint and cannot report its active requests yet.
- Keeps the demoted color online as a warm nginx backup target instead of removing it immediately, so stale keepalive workers and Cloudflare Tunnel connections can still fail over cleanly during the post-promotion window.
- If the demoted standby color is still on the previous revision after 15 minutes, the watcher automatically rebuilds that stale standby in place so both colors converge on the latest checked-out code without flipping the active port or promoting traffic again.
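The drain-wait step above can be sketched as a small polling loop. The `/__platform/drain-status` path comes from this doc; the probe shape (an async function returning the in-flight request count), the response field name, and the timings are assumptions, not the real deploy script's internals.

```javascript
// Sketch: poll an in-flight counter until the old color has drained,
// or give up at a deadline (the caller would then fall back to the
// short fixed drain window mentioned above).
async function waitForDrain(probe, { timeoutMs = 60_000, intervalMs = 1_000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const inFlight = await probe();
    if (inFlight === 0) return true; // old color is safe to demote
    if (Date.now() + intervalMs > deadline) return false; // drain timed out
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

// A real probe might look like (hypothetical URL and response shape):
// const probe = async () =>
//   (await (await fetch('http://web-blue:3000/__platform/drain-status')).json()).inFlight;
```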
The proxy raises its client request-header buffers so oversized auth cookies do not trigger `400 Request Header Or Cookie Too Large` before the active web container sees the request. It now also raises its upstream response-header buffers (`proxy_buffer_size`, `proxy_buffers`, and `proxy_busy_buffers_size`) so larger Supabase auth responses with multiple `Set-Cookie` headers do not fail with `upstream sent too big header while reading response header from upstream`. The proxy uses Docker DNS re-resolution plus a shorter keepalive timeout so promotions are less likely to produce transient `502 Host Error` responses for existing Cloudflare Tunnel connections, while the previous color remains alive as a warm standby.
The proxy keeps both blue and green in the nginx upstream group during steady state, with the active color as the primary upstream and the standby color as a backup. The runtime DNS resolver is defined at the nginx include/http scope, not just inside `server`, so Docker service-name resolution continues to work for the blue/green upstream block at reload time.
Every blue/green deployment also stamps the runtime with `PLATFORM_DEPLOYMENT_STAMP` and `PLATFORM_BLUE_GREEN_COLOR`. Those values are surfaced through both nginx response headers and the web process itself, and the web layout appends the deployment stamp to the service-worker URL with `updateViaCache: 'none'` so new deployments push browsers toward the latest worker instead of lingering on stale cached state.
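The stamped service-worker registration can be sketched as follows. The query-parameter name (`deployment`) is an assumption; the `updateViaCache: 'none'` option is the documented behavior.

```javascript
// Sketch: append the deployment stamp to the service-worker URL so a
// new rollout produces a new worker script URL.
function stampedWorkerUrl(stamp) {
  return `/sw.js?deployment=${encodeURIComponent(stamp)}`;
}

// In the browser layout, roughly (hypothetical wiring):
// navigator.serviceWorker.register(
//   stampedWorkerUrl(deploymentStamp),  // e.g. from PLATFORM_DEPLOYMENT_STAMP
//   { updateViaCache: 'none' }          // never serve the worker script from HTTP cache
// );

console.log(stampedWorkerUrl('2024-abc'));
// -> /sw.js?deployment=2024-abc
```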
The local runtime state lives in:
- `tmp/docker-web/prod/active-color`
- `tmp/docker-web/prod/deployment-stamp`
- `tmp/docker-web/prod/nginx.conf`
Auto-Deploy Watcher
`bun serve:web:docker:bg:watch` locks the current branch/upstream at startup, polls every second, fast-forwards when GitHub has a newer commit, and runs the blue/green deploy flow automatically.
Additional behavior:
- If the watcher script itself changed in the pulled revision, the current watcher process restarts first and the replacement process performs the deploy.
- If blue/green is already live and the standby color remains on an older revision for 15 minutes, the watcher rebuilds only the standby color in place. The active color remains primary for new traffic the whole time.
- That standby catch-up path also stops and removes the stale standby container before rebuilding it, so health checks target the fresh replacement container rather than an outdated standby instance.
- Standby catch-up rebuilds reuse the current deployment stamp so the warm backup matches the latest deployment state instead of serving an older build if nginx needs to fail over.
- The watcher dashboard keeps the latest 5 deployments, and direct manual `bun serve:web:docker:bg` runs are written into that same history so the recent deployments list stays complete even when a rollout did not come from the watcher.
Build Resource Caps
When build and serve run on the same machine, use the Docker web helper's Buildx throttling options instead of letting BuildKit consume the full host. For example:

- `bun serve:web:docker` defaults to `--build-memory 16g --build-cpus 8`
- `bun serve:web:docker:bg` defaults to `--build-memory 16g --build-cpus 8`
The same caps can also be supplied as environment variables instead of flags after `--`, for example:

- `DOCKER_WEB_BUILD_MEMORY=4g`
- `DOCKER_WEB_BUILD_CPUS=2`
- `DOCKER_WEB_BUILD_MAX_PARALLELISM=2`
- `DOCKER_WEB_BUILD_BUILDER_NAME=platform-web-capped-builder`
- The helper creates or reuses a dedicated `docker-container` Buildx builder instead of the default daemon-bound builder.
- `DOCKER_WEB_BUILD_MEMORY` caps the BuildKit container's memory budget.
- `DOCKER_WEB_BUILD_CPUS` is converted to Docker CPU quota settings for the BuildKit container.
- `DOCKER_WEB_BUILD_MAX_PARALLELISM` writes a BuildKit config that limits concurrent solve steps, which is often the most effective way to reduce CPU spikes on smaller machines.
- These caps affect image builds, not the runtime `apps/web` container after it has started.
- If no build caps are configured, the helper continues using Docker's default builder behavior.
- A lower parallelism setting usually trades build speed for host stability.
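How the caps could translate into builder settings can be sketched as below. This is a hedged illustration: the driver-opt names and the exact `buildkitd.toml` key are assumptions about how a capped `docker-container` builder might be configured, not the real `scripts/docker-web.js` logic.

```javascript
// Sketch: render assumed Buildx driver options and a BuildKit config
// fragment from the documented cap values.
function buildxDriverOpts({ memory, cpus } = {}) {
  const opts = [];
  if (memory) opts.push(`memory=${memory}`); // assumed driver-opt name
  // Docker expresses CPU limits as quota microseconds per 100000us period.
  if (cpus) opts.push(`cpu-quota=${Math.round(Number(cpus) * 100_000)}`);
  return opts;
}

function buildkitdConfig(maxParallelism) {
  // Hypothetically rendered to a buildkitd.toml for the capped builder.
  return `[worker.oci]\n  max-parallelism = ${maxParallelism}\n`;
}

console.log(buildxDriverOpts({ memory: '4g', cpus: 2 }));
// -> [ 'memory=4g', 'cpu-quota=200000' ]
```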
Redis Profile
Redis is enabled by default in both dev and production-style Docker web stacks. The helper persists the generated token in `tmp/docker-web/redis-token`.

The stack injects these values into apps/web automatically:

- `UPSTASH_REDIS_REST_URL=http://serverless-redis-http:80`
- `UPSTASH_REDIS_REST_TOKEN=<generated local token>`

With `--without-redis`, the helper omits the `UPSTASH_REDIS_REST_URL` / `UPSTASH_REDIS_REST_TOKEN` variables, so apps/web falls back to its non-Redis behavior cleanly.
Auto-Pull Blue/Green Watcher
For simple self-hosted boxes that deploy directly from a Git branch, the repo also provides a long-running auto-deploy watcher:

- Locks the current local branch and tracked upstream at startup.
- Writes a PID-backed lock file at `tmp/docker-web/watch/blue-green-auto-deploy.lock`.
- Renders a live terminal dashboard with the locked branch, tracked upstream, latest local commit, relative commit age, last check time, next poll time, current blue/green runtime state, and recent watcher events.
- Polls the tracked upstream every `1000ms` by default.
- Auto-clears and redraws the dashboard in place on each state change when attached to a TTY.
- Runs the Git and deploy subprocesses quietly so the dashboard is not disrupted by `git fetch`, `git pull`, or Docker build output during normal watcher operation.
- Skips pulls if the worktree is dirty.
- Uses `git pull --ff-only` only when the local branch is strictly behind the locked upstream.
- Runs `bun serve:web:docker:bg` automatically after a successful fast-forward pull.
- If `scripts/watch-blue-green-deploy.js` itself changed in the pulled revision, the current watcher does not deploy from the old process. It releases its lock, spawns a replacement watcher with the same CLI args, and exits first.
- The replacement watcher refreshes the live `web-proxy` nginx config and workers in place if blue/green is already serving traffic, verifies proxy routing through `/__platform/drain-status`, and only then starts the new blue/green build/promotion.
- Stops immediately if the checked-out branch changes while the watcher is running.
- Shows the current active blue/green color when `web-proxy` is serving live traffic.
- Normal promotions keep the long-lived `web-proxy` container and bound port stable, which avoids transient listener drops for upstreams such as Cloudflare Tunnel that are connected to `:7803`.
- Persists and renders the last 3 auto-deployments as stacked terminal cards that favor vertical scannability over very wide lines.
- As soon as a new commit starts rolling out, the recent deployment section shows it immediately as `DEPLOYING` instead of waiting for the rollout to finish.
- Each deployment block includes:
  - deploy status (`ACTIVE`, `ENDED`, or `FAILED`)
  - build time
  - activation/finish time
  - deployment lifetime while it served traffic
  - total requests served during that deployment window
  - average requests per minute
  - peak requests per minute
  - `day`: requests served on the current day for the active deployment, or the final active day for an ended deployment
  - `davg`: average requests per day across that deployment's serving lifetime
  - `dpeak`: busiest single-day request count across that deployment's serving lifetime
- The live blue/green summary uses the same traffic metrics as the deployment history cards, with consistent color coding for build/lifetime/traffic/age metrics so the dashboard is easier to scan quickly.
- Request counters are derived from the local `web-proxy` container logs, so they stay self-hosted and do not require any external analytics service.
- Internal proxy health checks for `/api/health` and `/__platform/drain-status` are excluded from the request totals so the numbers reflect real served traffic more closely.
- The watcher uses the same Docker runtime env resolution as the real deploy flow, so blue/green status probes still work when the Redis profile is part of the production compose file.
Browser-State 502 Recovery
If some normal browsers still return Cloudflare `502 Host Error` while incognito works, treat it as stale client state or an auth-cookie/header-size problem before assuming the tunnel itself is broken.
How to recognize each failure mode:

- `upstream sent too big header while reading response header from upstream` means nginx response-header buffers were too small for the auth response.
- `web-green could not be resolved` or `web-blue could not be resolved` means a stale nginx worker or keepalive connection still tried to reach a color that no longer existed. The current warm-standby model is designed to avoid that.
- A browser that fails only in regular mode but works in incognito usually has stale Supabase auth cookies, stale service-worker state, or both.
- Send affected users to `https://tuturuuu.com/~recover-browser-state`.
- That route is public and bypasses auth/onboarding middleware.
- It responds with `Clear-Site-Data: "cache", "cookies", "storage", "executionContexts"` and explicitly expires all Supabase auth cookie variants for the current host before redirecting the browser back to `/login?browserStateReset=1`.
- Inspect `X-Platform-Deployment-Stamp`, `X-Platform-Blue-Green-Primary`, and `X-Platform-Blue-Green-Color` response headers to confirm which rollout is currently serving a request.
- If the recovery URL fixes the issue for a user, the likely root cause was stale browser state rather than an active deploy outage.
- If recovery does not help and proxy logs still show `too big header`, focus on auth redirect size or additional cookie bloat.
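A recovery response like the one described above can be sketched with the standard `Headers`/`Response` web APIs. The `Clear-Site-Data` value and the `/login?browserStateReset=1` redirect come from this doc; the cookie names and handler shape are illustrative (real Supabase auth cookie variants differ per project).

```javascript
// Sketch of a browser-state recovery handler: clear site data, expire
// example auth cookies, then redirect back to login.
function recoverBrowserStateResponse(host) {
  const headers = new Headers({
    'Clear-Site-Data': '"cache", "cookies", "storage", "executionContexts"',
    Location: '/login?browserStateReset=1',
  });
  // Also explicitly expire auth cookies, since browsers apply
  // Clear-Site-Data with varying thoroughness. Cookie names here are
  // placeholders, not the exact Supabase variants.
  for (const name of ['sb-access-token', 'sb-refresh-token']) {
    headers.append('Set-Cookie', `${name}=; Path=/; Domain=${host}; Max-Age=0`);
  }
  return new Response(null, { status: 307, headers });
}
```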
- This is intended for clean deployment clones on a server, not for active developer worktrees.
- If the local branch is ahead of or diverged from the tracked upstream, the watcher logs and skips the pull instead of forcing a merge or reset.
- The self-restart path only triggers when the watcher script itself changed in the fetched revision; normal app-code deploys keep the current watcher process alive.
- During that self-restart path, nginx keeps the assigned proxy port up the whole time because the replacement watcher refreshes the existing proxy container in place before it starts the new build.
- The watcher inherits the default blue/green build caps from `bun serve:web:docker:bg`, so the current defaults still apply during auto-deploys.
- Deployment history is watcher-managed. Manual blue/green rollouts still show up in the live runtime status if the stack is active, but they do not backfill the watcher's last-3 deployment list unless they were performed through the watcher itself.
Validation And CI
`docker-setup-check.yaml` now validates all of the following:

- `node scripts/check-docker-web.js`
- `node --test scripts/check-docker-web.test.js scripts/docker-web.test.js`
- `docker compose -f docker-compose.web.yml config`
- `docker compose -f docker-compose.web.yml --profile redis config`
- `docker compose -f docker-compose.web.prod.yml config`
- `docker compose -f docker-compose.web.prod.yml --profile redis config`
- `docker build --target dev -f apps/web/Dockerfile .`
- `docker build --target runner --secret id=web_env,src=apps/web/.env.local -f apps/web/Dockerfile .`
Operator Notes
- Do not paste `docker compose config` output into chat or tickets; it expands env values.
- If you need rebuild-before-restart on a server, use `bun serve:web:docker:bg`.
- If a blue/green deploy is interrupted, rerunning the same command from the intended commit is the normal recovery path.