Overview
apps/web captures server-side API, cron, and infrastructure logs through the internal log drain. The drain preserves normal stdout/stderr output, stores structured events in a dedicated Postgres container, and exposes them in Infrastructure → Monitoring.
The monitoring navigation uses compact labels:
- Overview
- Deployments
- Logs
- Analytics
- Observability
- Cron
- Requests
- Resources
Runtime
Docker web runs a dedicated log-drain-postgres service. It is separate from Supabase and is owned by the Docker web runtime. Compose pins the visible container name to ${COMPOSE_PROJECT_NAME:-platform}-log-drain-postgres-1 so it stays grouped with the rest of the platform stack in Docker Desktop.
Default connection inside Docker:
- PLATFORM_LOG_DRAIN_DATABASE_URL: Postgres connection string.
- PLATFORM_LOG_DRAIN_ENABLED: set to false to disable persistence while preserving stdout/stderr.
- PLATFORM_LOG_DRAIN_RAW_RETENTION_DAYS: raw log retention, default 30.
- PLATFORM_LOG_DRAIN_SUMMARY_RETENTION_DAYS: request, cron, deployment, and usage retention, default 90.
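As a rough illustration of how these defaults compose, a service might resolve the variables above like this. resolveLogDrainConfig is a hypothetical helper for this sketch, not the actual implementation:

```typescript
// Sketch: resolving log-drain settings from the documented env vars.
// Defaults mirror the documentation: enabled, 30-day raw retention,
// 90-day summary retention.
function resolveLogDrainConfig(env: Record<string, string | undefined>) {
  return {
    databaseUrl: env.PLATFORM_LOG_DRAIN_DATABASE_URL ?? null,
    // Only the literal string "false" disables persistence.
    enabled: env.PLATFORM_LOG_DRAIN_ENABLED !== "false",
    rawRetentionDays: Number(env.PLATFORM_LOG_DRAIN_RAW_RETENTION_DAYS ?? 30),
    summaryRetentionDays: Number(env.PLATFORM_LOG_DRAIN_SUMMARY_RETENTION_DAYS ?? 90),
  };
}

const cfg = resolveLogDrainConfig({ PLATFORM_LOG_DRAIN_ENABLED: "false" });
console.log(cfg.enabled, cfg.rawRetentionDays); // false 30
```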
Logging Rules
Server-side code in apps/web API, cron, and infrastructure paths must not add raw console.* calls. Use:
- serverLogger for standard server-side logging.
- withRequestLogDrain(...) or withCronLogDrain(...) when the handler should attach logs to a request or cron run id.
The drain is fail-open: if Postgres is unavailable, requests and cron jobs continue normally and logs still go to stdout/stderr.
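A minimal sketch of the fail-open behavior described above. persistToDrain and drainLog are illustrative names for this sketch, not the real wrapper API:

```typescript
// Hedged sketch of a fail-open drain: stdout/stderr always wins,
// Postgres persistence is best-effort.
type LogEvent = { level: string; message: string };

async function persistToDrain(event: LogEvent): Promise<void> {
  // Stand-in for an INSERT into the log-drain Postgres; it may throw
  // when the database is unavailable.
  throw new Error("postgres unavailable");
}

async function drainLog(event: LogEvent): Promise<void> {
  // Emit to stdout first so output is never lost.
  console.log(`[${event.level}] ${event.message}`);
  try {
    await persistToDrain(event);
  } catch {
    // Fail-open: swallow persistence errors; the request or cron job
    // keeps running as if nothing happened.
  }
}
```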
Legacy Compatibility
The revamped monitoring UI reads both the Postgres drain and the pre-drain blue/green files under tmp/docker-web. Requests and Logs include retained proxy traffic, watcher logs, and request console lines, while deployments are enriched from the watcher snapshot so commit subject, full hash, short hash, stamp, active color, and runtime lane remain visible even before the Postgres drain has a complete history.
Deployment rows are grouped around the git commit identity first, then enriched with deployment stamps, runtime state, build duration, request volume, and errors. This keeps active and standby lanes for the same commit together instead of showing duplicate rows for the same build.
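The commit-first grouping can be sketched as follows; DeploymentRow and groupByCommit are hypothetical names, and the real rows carry more enrichment fields than shown:

```typescript
// Sketch: group deployment lanes by git commit identity so active and
// standby lanes of the same build render as one logical row.
type DeploymentRow = {
  commit: string;
  lane: "active" | "standby";
  requests: number;
};

function groupByCommit(rows: DeploymentRow[]): Map<string, DeploymentRow[]> {
  const groups = new Map<string, DeploymentRow[]>();
  for (const row of rows) {
    const bucket = groups.get(row.commit) ?? [];
    bucket.push(row);
    groups.set(row.commit, bucket);
  }
  return groups;
}
```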
The Requests tab intentionally freezes its result window when opened. New traffic is counted in the background and offered as an explicit “show new” action, while older pages are appended automatically as the operator scrolls. This prevents live traffic from shifting the visible rows during investigation.
Each drained request stores the client IP address and user agent when those headers are available. Server logs emitted inside the request or cron AsyncLocalStorage context are scoped back to the same request id, so Requests can show related console/server lines next to the originating request without relying on terminal access.
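Node's AsyncLocalStorage is the standard mechanism for this kind of request-id scoping; the sketch below shows the general pattern with hypothetical names (requestContext, logScoped), not the actual apps/web code:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Ambient per-request context, set once when the request enters the drain.
const requestContext = new AsyncLocalStorage<{ requestId: string }>();

function logScoped(message: string): void {
  const ctx = requestContext.getStore();
  // Attach the ambient request id so the drain can group console/server
  // lines next to the originating request.
  console.log(JSON.stringify({ requestId: ctx?.requestId ?? null, message }));
}

requestContext.run({ requestId: "req-123" }, () => {
  // Any log emitted inside this callback (including in awaited helpers)
  // is scoped back to "req-123".
  logScoped("handling request");
});
```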
Cron Control
Infrastructure → Monitoring → Cron exposes the native Docker cron runner state, a global enable/disable switch, per-job enable/disable switches, manual run buttons, retained execution rows, and captured response/console output. Runtime overrides are stored in tmp/docker-web/watch/control/cron-control.json; they do not edit apps/web/cron.config.json, so Vercel cron config remains source-controlled while local Docker operations can pause individual jobs safely.
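The override layering can be sketched like this. The CronJob and Overrides shapes are assumptions for illustration; the real cron-control.json schema may differ:

```typescript
// Sketch: runtime overrides layer over the source-controlled cron config
// without mutating it.
type CronJob = { id: string; schedule: string; enabled: boolean };
type Overrides = {
  globalEnabled?: boolean;
  jobs?: Record<string, { enabled?: boolean }>;
};

function applyOverrides(jobs: CronJob[], overrides: Overrides): CronJob[] {
  return jobs.map((job) => ({
    ...job,
    // A global disable pauses everything; otherwise the per-job override
    // wins, falling back to the source-controlled flag.
    enabled:
      (overrides.globalEnabled ?? true) &&
      (overrides.jobs?.[job.id]?.enabled ?? job.enabled),
  }));
}
```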
Cron jobs should show both the raw expression and a natural description such as “Every 15 minutes”, plus the previous and next scheduled run timestamps when the runtime snapshot provides them. Cron expressions are stored as runtime config, while visible daily schedule descriptions and run timestamps are rendered in the viewer’s browser timezone.
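A toy sketch of deriving a natural description from the raw expression; describeCron is hypothetical and only handles minute-interval patterns, whereas a real implementation would cover the full cron grammar:

```typescript
// Sketch: turn a cron expression into a readable description, falling back
// to the raw expression for anything this toy parser does not understand.
function describeCron(expr: string): string {
  const [minute] = expr.split(" ");
  const every = minute.match(/^\*\/(\d+)$/);
  if (every) return `Every ${every[1]} minutes`;
  if (minute === "*") return "Every minute";
  return expr; // show the raw expression when no description applies
}

console.log(describeCron("*/15 * * * *")); // Every 15 minutes
```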
The infrastructure-sample-resources job runs every minute through /api/cron/infrastructure/sample-resources. It is the automated source for retained resource charts; do not rely on opening the Resources page to create samples.
Resources
Infrastructure → Monitoring → Resources reads the Docker runtime snapshot and displays container health, image/service identity, uptime, CPU, memory, compact network ingress/egress, and aggregate service counts. This is the operator view for Docker Desktop resource pressure without opening Docker Desktop directly. The internal resource sampler writes the current Docker snapshot into the log drain usage_events table at most once per minute. The UI charts that history across the supported resource windows: 1 hour, 6 hours, 12 hours, 24 hours, 3 days, and 7 days. If the log drain is disabled or unavailable, the tab still shows the live Docker snapshot and falls back to a single current sample.
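The at-most-once-per-minute write can be sketched as a simple rate gate; makeSampler is an illustrative helper, not the sampler's real API:

```typescript
// Sketch: gate snapshot writes so usage_events receives at most one
// sample per minute, regardless of how often the sampler is invoked.
function makeSampler(writeSample: () => void, now: () => number) {
  let lastWriteMs = -Infinity;
  return (): boolean => {
    if (now() - lastWriteMs >= 60_000) {
      lastWriteMs = now();
      writeSample();
      return true; // sample persisted
    }
    return false; // within the one-minute window; skipped
  };
}
```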
Resource pressure uses the same thresholds as the dashboard: memory under 200 MB is green, 200-500 MB is amber, 500-1024 MB is orange, and anything above 1024 MB is red. CPU under 5% is green, 5-20% is amber, 20-40% is orange, and anything above 40% is red.
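Encoded directly from the thresholds above, the classification is a pair of band functions; the function names here are hypothetical:

```typescript
// Pressure bands as documented: memory in MB, CPU in percent.
type Pressure = "green" | "amber" | "orange" | "red";

function memoryPressure(mb: number): Pressure {
  if (mb < 200) return "green";
  if (mb <= 500) return "amber";
  if (mb <= 1024) return "orange";
  return "red"; // above 1024 MB
}

function cpuPressure(pct: number): Pressure {
  if (pct < 5) return "green";
  if (pct <= 20) return "amber";
  if (pct <= 40) return "orange";
  return "red"; // above 40%
}
```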
Troubleshooting
If the UI is empty:
- Confirm log-drain-postgres is healthy with Docker Compose.
- Confirm PLATFORM_LOG_DRAIN_DATABASE_URL is present in the web container.
- Check that the route or cron handler is using serverLogger or a log-drain wrapper.
- Use Infrastructure → Monitoring → Logs to search by message, route, request id, level, or source.
- For missing cron output, confirm the handler is wrapped in withCronLogDrain(...) and the cron runner is calling the expected web origin.