Documentation Index

Fetch the complete documentation index at: https://docs.tuturuuu.com/llms.txt

Use this file to discover all available pages before exploring further.

Overview

apps/web captures server-side API, cron, and infrastructure logs through the internal log drain. The drain preserves normal stdout/stderr output, stores structured events in a dedicated Postgres container, and exposes them in Infrastructure → Monitoring. The monitoring navigation uses compact labels:
  • Overview
  • Deployments
  • Logs
  • Analytics
  • Observability
  • Cron
  • Requests
  • Resources

Runtime

Docker web runs a dedicated log-drain-postgres service. It is separate from Supabase and is owned by the Docker web runtime. Compose pins the visible container name to ${COMPOSE_PROJECT_NAME:-platform}-log-drain-postgres-1 so it stays grouped with the rest of the platform stack in Docker Desktop. Default connection inside Docker:
postgres://platform_log_drain:platform_log_drain@log-drain-postgres:5432/platform_log_drain
Relevant environment variables:
  • PLATFORM_LOG_DRAIN_DATABASE_URL: Postgres connection string.
  • PLATFORM_LOG_DRAIN_ENABLED: set to false to disable persistence while preserving stdout/stderr.
  • PLATFORM_LOG_DRAIN_RAW_RETENTION_DAYS: raw log retention, default 30.
  • PLATFORM_LOG_DRAIN_SUMMARY_RETENTION_DAYS: request, cron, deployment, and usage retention, default 90.
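
The variables above can be layered over the documented defaults. This is an illustrative sketch, not the actual apps/web implementation; only the environment variable names and default values come from this page.

```typescript
// Illustrative sketch: resolving log-drain settings from the environment.
// Only the env var names and defaults are documented; the shape and the
// function name are hypothetical.
interface LogDrainConfig {
  databaseUrl: string;
  enabled: boolean;
  rawRetentionDays: number;
  summaryRetentionDays: number;
}

function readLogDrainConfig(
  env: Record<string, string | undefined>,
): LogDrainConfig {
  return {
    // Default matches the in-Docker connection string shown above.
    databaseUrl:
      env.PLATFORM_LOG_DRAIN_DATABASE_URL ??
      'postgres://platform_log_drain:platform_log_drain@log-drain-postgres:5432/platform_log_drain',
    // Persistence stays on unless explicitly set to the string "false".
    enabled: env.PLATFORM_LOG_DRAIN_ENABLED !== 'false',
    rawRetentionDays: Number(env.PLATFORM_LOG_DRAIN_RAW_RETENTION_DAYS ?? 30),
    summaryRetentionDays: Number(
      env.PLATFORM_LOG_DRAIN_SUMMARY_RETENTION_DAYS ?? 90,
    ),
  };
}
```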

Logging Rules

Server-side code in apps/web API, cron, and infrastructure paths must not add raw console.* calls. Use:
import { serverLogger } from '@/lib/infrastructure/log-drain';

serverLogger.info('Processed job', { jobId });
serverLogger.error('Job failed', error);
For route or cron handlers, wrap execution with withRequestLogDrain(...) or withCronLogDrain(...) when the handler should attach logs to a request or cron run id. The drain is fail-open: if Postgres is unavailable, requests and cron jobs continue normally and logs still go to stdout/stderr.
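
The real withRequestLogDrain and withCronLogDrain live in '@/lib/infrastructure/log-drain' and their exact signatures are not shown here. The following self-contained sketch only demonstrates the fail-open guarantee described above: a persistence failure never fails the wrapped handler, and stdout output is emitted regardless.

```typescript
// Hypothetical sketch of the fail-open pattern; not the actual
// withRequestLogDrain implementation.
type Handler<T> = () => Promise<T>;
type Persist = (event: { requestId: string; message: string }) => Promise<void>;

async function withFailOpenDrain<T>(
  requestId: string,
  persist: Persist,
  handler: Handler<T>,
): Promise<T> {
  // stdout/stderr output is preserved no matter what the drain does.
  console.log(`[${requestId}] request started`);
  try {
    await persist({ requestId, message: 'request started' });
  } catch {
    // Fail-open: an unavailable Postgres must not fail the request
    // or cron run; the line above already reached stdout.
  }
  return handler();
}
```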

Legacy Compatibility

The revamped monitoring UI reads both the Postgres drain and the pre-drain blue/green files under tmp/docker-web. Requests and Logs include retained proxy traffic, watcher logs, and request console lines, while deployments are enriched from the watcher snapshot so the commit subject, full hash, short hash, stamp, active color, and runtime lane remain visible even before the Postgres drain has a complete history.

Deployment rows are grouped around the git commit identity first, then enriched with deployment stamps, runtime state, build duration, request volume, and errors. This keeps active and standby lanes for the same commit together instead of showing duplicate rows for the same build.

The Requests tab intentionally freezes its result window when opened. New traffic is counted in the background and offered as an explicit “show new” action, while older pages are appended automatically as the operator scrolls. This prevents live traffic from shifting the visible rows during investigation.

Each drained request stores the client IP address and user agent when those headers are available. Server logs emitted inside the request or cron AsyncLocalStorage context are scoped back to the same request id, so Requests can show related console/server lines next to the originating request without relying on terminal access.

Cron Control

Infrastructure → Monitoring → Cron exposes the native Docker cron runner state, a global enable/disable switch, per-job enable/disable switches, manual run buttons, retained execution rows, and captured response/console output.

Runtime overrides are stored in tmp/docker-web/watch/control/cron-control.json; they do not edit apps/web/cron.config.json, so the Vercel cron config remains source-controlled while local Docker operations can pause individual jobs safely.

Cron jobs should show both the raw expression and a natural description such as “Every 15 minutes”, plus the previous and next scheduled run timestamps when the runtime snapshot provides them. Cron expressions are stored as runtime config, while visible schedule descriptions and run timestamps are rendered in the viewer’s browser timezone.

The infrastructure-sample-resources job runs every minute through /api/cron/infrastructure/sample-resources. It is the automated source for retained resource charts; do not rely on opening the Resources page to create samples.
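
The override layering can be sketched as a pure merge: runtime control flags win in memory, and the source-controlled config object is never mutated. The job and control shapes below are hypothetical; the real cron.config.json and cron-control.json schemas are not documented on this page.

```typescript
// Hypothetical shapes: the real cron.config.json / cron-control.json
// schemas may differ. This only illustrates the layering described above.
interface CronJob {
  id: string;
  schedule: string;
  enabled: boolean;
}
interface CronControl {
  globalEnabled?: boolean;
  jobs?: Record<string, { enabled?: boolean }>;
}

// Overrides are applied in memory; the source-controlled config array
// is left untouched so Vercel cron config stays authoritative.
function applyCronOverrides(config: CronJob[], control: CronControl): CronJob[] {
  const globallyEnabled = control.globalEnabled !== false;
  return config.map((job) => ({
    ...job,
    enabled: globallyEnabled && (control.jobs?.[job.id]?.enabled ?? job.enabled),
  }));
}
```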

Resources

Infrastructure → Monitoring → Resources reads the Docker runtime snapshot and displays container health, image/service identity, uptime, CPU, memory, compact network ingress/egress, and aggregate service counts. This is the operator view for Docker Desktop resource pressure without opening Docker Desktop directly.

The internal resource sampler writes the current Docker snapshot into the log drain usage_events table at most once per minute. The UI charts that history across the supported resource windows: 1 hour, 6 hours, 12 hours, 24 hours, 3 days, and 7 days. If the log drain is disabled or unavailable, the tab still shows the live Docker snapshot and falls back to a single current sample.

Resource pressure uses the same thresholds as the dashboard: memory under 200 MB is green, 200-500 MB is amber, 500-1024 MB is orange, and anything above 1024 MB is red. CPU under 5% is green, 5-20% is amber, 20-40% is orange, and anything above 40% is red.
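
The thresholds above translate directly into two small mapping functions. The numbers come from this page; the function names, return values, and the treatment of exact boundary values (200 MB, 5%, etc.) are illustrative assumptions.

```typescript
// Pressure colors as documented above; boundary handling is an assumption.
type Pressure = 'green' | 'amber' | 'orange' | 'red';

// Memory: <200 MB green, 200-500 amber, 500-1024 orange, >1024 red.
function memoryPressure(mb: number): Pressure {
  if (mb < 200) return 'green';
  if (mb <= 500) return 'amber';
  if (mb <= 1024) return 'orange';
  return 'red';
}

// CPU: <5% green, 5-20% amber, 20-40% orange, >40% red.
function cpuPressure(pct: number): Pressure {
  if (pct < 5) return 'green';
  if (pct <= 20) return 'amber';
  if (pct <= 40) return 'orange';
  return 'red';
}
```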

Troubleshooting

If the UI is empty:
  1. Confirm log-drain-postgres is healthy with Docker Compose.
  2. Confirm PLATFORM_LOG_DRAIN_DATABASE_URL is present in the web container.
  3. Check that the route or cron handler is using serverLogger or a log-drain wrapper.
  4. Use Infrastructure → Monitoring → Logs to search by message, route, request id, level, or source.
If logs are missing only for cron jobs, verify the cron route is wrapped with withCronLogDrain(...) and the cron runner is calling the expected web origin.