This document explains the four foundational architectural decisions that define Tuturuuu’s system design. Each section explores the rationale, detailed justification, and the alternatives considered.

1. The Choice of a Microservices Architecture

The system is designed as a collection of distributed microservices rather than a single monolithic application.

Core Rationale

To support long-term growth by enabling organizational scaling, independent deployment, and technological flexibility.

Detailed Justification

Organizational Scaling and Team Autonomy

A monolithic architecture forces all developers to work on a single, large codebase, leading to high coordination overhead, merge conflicts, and slower development cycles as the team grows. The microservices approach aligns with Conway’s Law, allowing us to structure autonomous teams around specific business capabilities (e.g., an “Identity Team,” a “Payments Team”). Each team can develop, test, and deploy their services independently, drastically increasing agility. In Tuturuuu:
  • Web team owns apps/web (main platform)
  • AI team owns apps/rewise (chatbot) and apps/nova (prompt engineering)
  • Productivity team owns apps/calendar, apps/tudo (tasks), apps/tumeet (meetings)
  • Finance team owns apps/finance
  • Infrastructure team owns apps/db and shared packages/*

Independent Scalability

In a monolith, if one feature (e.g., a real-time data processing endpoint) experiences high load, the entire application must be scaled. This is inefficient and costly. Microservices allow for granular scaling. We can scale the high-load DataIngestionService to 50 instances while keeping the low-traffic AdminDashboardService at only 2 instances, optimizing resource utilization. Example from Tuturuuu:
// Illustrative per-app scaling profile (conceptual, not literal Vercel configuration)
// apps/web can scale independently from apps/rewise
{
  "web": { "min": 2, "max": 100 }, // Main app needs high availability
  "rewise": { "min": 1, "max": 20 }, // AI chat less critical
  "finance": { "min": 1, "max": 10 } // Finance moderate scaling
}

Fault Isolation and Resilience

A critical bug or memory leak in a non-essential module of a monolith can bring down the entire application. In a microservices architecture, a failure is contained within the boundary of that service. A crash in the ReportingService will not affect critical user-facing services like AuthenticationService or OrderService, leading to a much more resilient and available system. In Tuturuuu:
  • If apps/finance crashes, users can still access apps/web, apps/calendar, and apps/tudo
  • Supabase Auth isolation means authentication remains available even if application services fail
  • Event-driven background jobs (Trigger.dev) fail independently from user-facing requests
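For example, a dashboard widget in apps/web can degrade gracefully when the finance service is unreachable instead of failing the whole page. A minimal sketch (the component, endpoint URL, and response shape are illustrative, not actual Tuturuuu code):
// apps/web — illustrative server component that tolerates a finance outage
export async function FinanceSummaryWidget({ wsId }: { wsId: string }) {
  try {
    // Hypothetical cross-service call to the finance app
    const res = await fetch(`https://finance.tuturuuu.com/api/summary?ws=${wsId}`, {
      next: { revalidate: 60 }, // cached by Next.js, refreshed every minute
    });
    if (!res.ok) throw new Error(`finance responded with ${res.status}`);
    const summary = await res.json();
    return <pre>{JSON.stringify(summary, null, 2)}</pre>;
  } catch {
    // Finance is down: render a placeholder while calendar, tasks, etc. keep working
    return <p>Finance data is temporarily unavailable.</p>;
  }
}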

Technology Flexibility (Polyglot Architecture)

Microservices communicate over standard protocols (like events or APIs). This allows each service to be built with the technology best suited for its purpose. While the core services may use TypeScript and Next.js, a future machine-learning service could be written in Python, and a high-performance edge service could be written in Go or Rust. This flexibility is impossible in a traditional monolith and future-proofs the system. Current polyglot examples in Tuturuuu:
  • Most apps: TypeScript + Next.js 16 + React
  • apps/discord: Python (Discord bot utilities)
  • Database: PostgreSQL (via Supabase)
  • Background jobs: TypeScript (Trigger.dev)
  • Future: Python ML services, Go performance-critical services

Why Not a Monolith?

While a monolith offers simplicity at the start of a project, it becomes a significant impediment to growth:
| Monolith Limitation | Impact | Microservices Solution |
| --- | --- | --- |
| Tight Coupling | Changes ripple across the entire codebase | Service boundaries isolate changes |
| Slow Deployment | All-or-nothing deploys, high-risk releases | Independent, low-risk deployments |
| All-or-Nothing Scaling | Wasteful resource allocation | Granular, cost-effective scaling |
| Technological Rigidity | Locked into initial technology choices | Freedom to choose best-fit technologies |
| Team Coordination Overhead | Merge conflicts, slow code reviews | Autonomous team workflows |
A monolith is unsuitable for a complex, long-lived enterprise system designed for agility.

2. The Choice of an Event-Driven Architecture (EDA)

The primary communication pattern between core microservices is asynchronous and event-driven, with Trigger.dev filling the message-broker role.

Core Rationale

To achieve true service decoupling, which is the foundation for building a system that is inherently resilient, scalable, and extensible.

Detailed Justification

Ultimate Decoupling

In a synchronous, request-response (REST/gRPC) architecture, the calling service must have direct knowledge of the downstream service’s location and API. This creates tight coupling. In an event-driven model, a service produces an event (e.g., CustomerRegistered) and is completely unaware of which services, if any, are listening. This allows consumers to be added or removed without ever changing the producer, providing unparalleled flexibility. Example from Tuturuuu:
// Producer: User registration in apps/web
await client.sendEvent({
  name: "user.registered",
  payload: { userId, email, workspaceId }
});
// Producer doesn't know that NotificationService, AnalyticsService,
// and OnboardingService are all consuming this event

Inherent Resilience and Asynchronicity

Synchronous API calls create “chains of failure.” If a downstream service is slow or unavailable, the upstream caller is blocked, and the failure can cascade, bringing down the entire user request. With EDA, the message broker acts as a durable buffer. If a consumer service is down, the events are safely persisted, and the producer remains unaffected. The system “heals” itself once the consumer recovers. In Tuturuuu:
  • User can complete registration even if email service is down
  • Events are retried automatically (Trigger.dev handles retries)
  • Failed events go to dead-letter queue for investigation
  • No cascading failures between apps
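A minimal sketch of the producer side under these guarantees (createUserRecord is a hypothetical persistence helper; client is the Trigger.dev client from the examples above). The request path never blocks on, or fails because of, a downstream consumer:
// apps/web — registration completes even if the email or onboarding consumers are offline
export async function registerUser(input: { email: string; workspaceId: string }) {
  const user = await createUserRecord(input); // hypothetical persistence helper

  // Durably queued by Trigger.dev; if a consumer is down, delivery is retried later
  await client.sendEvent({
    name: 'user.registered',
    payload: { userId: user.id, email: input.email, workspaceId: input.workspaceId },
  });

  // The HTTP response does not wait for emails, analytics, or onboarding jobs
  return { userId: user.id };
}

// Hypothetical helpers/types for the sketch
declare const client: { sendEvent(event: { name: string; payload: unknown }): Promise<unknown> };
declare function createUserRecord(input: { email: string; workspaceId: string }): Promise<{ id: string }>;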

Natural Scalability and Load Leveling

A synchronous API can be overwhelmed by sudden traffic spikes. An event-driven architecture naturally handles this by acting as a shock absorber. The broker can ingest massive bursts of events, which are then processed by a pool of consumers at a sustainable rate. Scaling is as simple as adding more consumer instances to process the stream in parallel. Example from Tuturuuu:
// During a viral event, thousands of users register
// Trigger.dev queues all events and processes them at sustainable rate
client.defineJob({
  id: "user-onboarding",
  name: "Process user onboarding",
  version: "1.0.0",
  trigger: eventTrigger({ name: "user.registered" }),
  run: async (payload, io) => {
    // This scales horizontally with Trigger.dev infrastructure
    await io.sendEmail(/* ... */);
    await io.createResources(/* ... */);
  }
});

Extensibility and Future-Proofing

This is a key strategic advantage. EDA allows for new business capabilities to be added by simply deploying new services that listen to existing event streams. For example, a new AuditService can be deployed to listen to all user-related events to build an audit trail, requiring zero modifications to the services that originally produced those events. In Tuturuuu:
  • New analytics features: Just add new Trigger.dev job listening to existing events
  • New compliance requirements: Deploy audit service without touching production apps
  • A/B testing: New experimental service consumes events alongside production service
  • ML models: Train on historical event data, deploy as new event consumer
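For instance, an audit trail can be introduced as a brand-new consumer without touching any producer. A hedged sketch in the same Trigger.dev style as the examples above (the job id and insertAuditRow helper are illustrative):
// New audit consumer: deployed independently; the producing services are never modified
client.defineJob({
  id: "audit-user-events",
  name: "Record user events for auditing",
  version: "1.0.0",
  trigger: eventTrigger({ name: "user.registered" }),
  run: async (payload, io) => {
    // Append-only audit record; the producer has no idea this job exists
    await io.runTask("write-audit-row", async () => {
      await insertAuditRow({ event: "user.registered", payload }); // hypothetical helper
    });
  },
});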

Why Not a Purely Synchronous/REST Architecture?

While synchronous APIs are excellent for user-facing queries and commands that require an immediate response (a pattern supported by our API Gateway), using them for core inter-service communication creates a brittle, tightly coupled system.
| Synchronous Pattern Issue | Impact | Event-Driven Solution |
| --- | --- | --- |
| Tight Coupling | Services must know each other’s APIs | Services only know event schemas |
| Cascading Failures | One slow service blocks entire chain | Failures are isolated by broker |
| Difficult Evolution | API changes break all consumers | New consumers added without changes |
| Poor Load Handling | Traffic spikes overwhelm services | Broker buffers and load-levels |
| Testing Complexity | Must mock all downstream services | Events can be replayed for testing |
Our approach is hybrid: Event-driven for core business workflows and synchronous APIs (tRPC, Next.js API routes) for edge queries and user-facing operations.
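On the synchronous side of that hybrid, a hedged tRPC sketch (the router shape and getWorkspace helper are illustrative, not the actual Tuturuuu API surface). Reads get an immediate, typed response; commands that kick off long-running work emit events as in the producer example above:
import { initTRPC } from '@trpc/server';
import { z } from 'zod';

const t = initTRPC.create();

export const appRouter = t.router({
  // Synchronous query: the UI needs an immediate answer, so an event would be the wrong tool
  workspaceById: t.procedure
    .input(z.object({ id: z.string() }))
    .query(async ({ input }) => getWorkspace(input.id)),
});

export type AppRouter = typeof appRouter;

// Hypothetical data-access helper for the sketch
async function getWorkspace(id: string) {
  return { id, name: 'Example Workspace' };
}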

3. The Choice of Hexagonal Architecture (Ports and Adapters)

Within each microservice, the internal structure is organized using the Hexagonal Architecture pattern.

Core Rationale

To protect the core business logic from technology dependencies, thereby maximizing maintainability and testability.

Detailed Justification

Protection of the Domain Core

In traditional N-Tier architecture, business logic often becomes dependent on infrastructure concerns (e.g., database annotations within domain models). This couples the business rules to the technology. The Hexagonal Architecture inverts this dependency. The core business logic is pure and has zero knowledge of any external technology. It defines “Ports” (interfaces) for the functionality it needs, such as OrderRepository. Example from Tuturuuu:
// Domain layer (packages/types/src/domain/workspace.ts)
export interface WorkspaceRepository {
  findById(id: string): Promise<Workspace | null>;
  save(workspace: Workspace): Promise<void>;
}

// Business logic doesn't know about Supabase
export class WorkspaceService {
  constructor(private repo: WorkspaceRepository) {}

  async activateWorkspace(id: string) {
    const workspace = await this.repo.findById(id);
    if (!workspace) throw new Error(`Workspace ${id} not found`);
    workspace.activate(); // Pure domain logic
    await this.repo.save(workspace);
  }
}

Technology Agnosticism and Interchangeability

External technologies are implemented as “Adapters” that plug into the core’s ports. For example, an OrderRepository port might be satisfied by a PostgresOrderRepositoryAdapter, and an event-publishing port by a KafkaEventPublisherAdapter. If we decide to migrate from PostgreSQL to MongoDB, we only need to write a new MongoOrderRepositoryAdapter and plug it in. The core business logic remains completely unchanged, making technology migrations low-risk and straightforward. Example from Tuturuuu:
// Infrastructure layer (apps/web/src/infrastructure/repositories/supabase-workspace-repository.ts)
import type { SupabaseClient } from '@supabase/supabase-js';

export class SupabaseWorkspaceRepository implements WorkspaceRepository {
  constructor(private client: SupabaseClient) {}

  async findById(id: string): Promise<Workspace | null> {
    const { data } = await this.client
      .from('workspaces')
      .select('*')
      .eq('id', id)
      .single();

    return data ? this.toDomain(data) : null;
  }

  async save(workspace: Workspace): Promise<void> {
    await this.client
      .from('workspaces')
      .upsert(this.toDatabase(workspace));
  }
  // toDomain/toDatabase (not shown) map database rows to and from domain objects
}

// Easy to swap: Could create DrizzleWorkspaceRepository or PrismaWorkspaceRepository
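The adapter is plugged into the port at the composition root, i.e. the outermost layer such as a route handler. A minimal wiring sketch (the route path, import paths, and environment variable names are assumptions):
// apps/web/src/app/api/workspaces/[id]/activate/route.ts (illustrative path)
import { createClient } from '@supabase/supabase-js';
import { WorkspaceService } from '@tuturuuu/types/domain/workspace'; // path assumed
import { SupabaseWorkspaceRepository } from '@/infrastructure/repositories'; // path assumed

export async function POST(_req: Request, { params }: { params: { id: string } }) {
  // Only this outer layer knows about Supabase; the domain core sees just the port
  const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);
  const service = new WorkspaceService(new SupabaseWorkspaceRepository(supabase));

  await service.activateWorkspace(params.id);
  return Response.json({ ok: true });
}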

Superior Testability

Because the domain core is isolated from the outside world, it can be tested completely without a running database, web server, or any other infrastructure. We can use simple, in-memory mock adapters to test complex business rules. This results in tests that are extremely fast, reliable, and easy to write, leading to higher code quality. Example from Tuturuuu:
// tests/workspace.test.ts
class InMemoryWorkspaceRepository implements WorkspaceRepository {
  private workspaces = new Map<string, Workspace>();

  async findById(id: string) {
    return this.workspaces.get(id) || null;
  }

  async save(workspace: Workspace) {
    this.workspaces.set(workspace.id, workspace);
  }
}

describe('WorkspaceService', () => {
  it('activates workspace', async () => {
    const repo = new InMemoryWorkspaceRepository();
    // Seed a workspace to activate (the test factory here is illustrative)
    await repo.save(makeTestWorkspace({ id: 'ws-123', isActive: false }));
    const service = new WorkspaceService(repo);

    // No database, no network calls - tests run in milliseconds
    await service.activateWorkspace('ws-123');

    const workspace = await repo.findById('ws-123');
    expect(workspace?.isActive).toBe(true);
  });
});

Why Not a Traditional Layered/N-Tier Architecture?

The primary weakness of the traditional N-Tier architecture is its tendency to create leaky abstractions and tight coupling between layers.
| N-Tier Limitation | Impact | Hexagonal Solution |
| --- | --- | --- |
| Business Logic Leaks | Domain models have ORM annotations | Pure domain models, infrastructure separate |
| Tight Coupling | Layers depend on concrete implementations | Layers depend on abstractions (ports) |
| Hard to Test | Tests require full infrastructure | Mock adapters, fast unit tests |
| Technology Lock-in | Changing DB means rewriting domain | Swap adapters, domain unchanged |
| Unclear Boundaries | “Service” and “Repository” blur together | Clear separation: domain, application, infrastructure |
Business logic often becomes intertwined with persistence logic, making the system rigid, difficult to test in isolation, and hard to adapt to new technology requirements. The Hexagonal Architecture solves this by enforcing a strict, clean boundary around the application’s core.

4. The Choice of a React Modular Monolith & Headless UI

The frontend is architected as a “Modular Monolith” using React, with components designed using a “Headless UI” approach.

Core Rationale

To balance the agility of a single codebase with the maintainability of a modular design, while ensuring long-term flexibility in presentation.

Detailed Justification

Maintainability at Scale (Modular Monolith)

A standard Single-Page Application (SPA) can quickly become a “big ball of mud” as it grows. A modular monolith enforces logical boundaries between different features or domains (e.g., Authentication, Dashboard, Settings, Finance) within a single codebase. This improves code organization, reduces unintended coupling, and allows teams to work on different features with fewer conflicts. In Tuturuuu:
apps/web/src/
├── app/
│   ├── [locale]/(dashboard)/[wsId]/
│   │   ├── finance/          # Finance module
│   │   ├── calendar/         # Calendar module
│   │   ├── tasks/            # Tasks module
│   │   └── settings/         # Settings module
├── components/
│   ├── finance/              # Finance-specific components
│   ├── calendar/             # Calendar-specific components
│   └── shared/               # Shared components
Each module is self-contained with clear boundaries, yet benefits from shared infrastructure.
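One lightweight way these boundaries show up in code is a barrel file that defines each module’s public surface; other modules import only from the barrel, never from internal files. A hedged sketch (the component and hook names are illustrative):
// apps/web/src/components/finance/index.ts (illustrative)
// Public surface of the finance module; everything not re-exported here is internal
export { InvoiceTable } from './InvoiceTable';
export { useInvoices } from './useInvoices';
// Formatting utilities and private hooks stay unexported so other modules cannot couple to them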

UI and Logic Separation (Headless UI)

In this pattern, React components are split into two parts:
  1. A “headless” hook that manages all logic, state, and accessibility (e.g., useUserDropdown)
  2. A presentation component that simply renders the UI based on the state provided by the hook
This clean separation allows us to completely change the look and feel (the “head”) of a component by swapping out its presentation layer, without rewriting any of the complex business logic. Example from Tuturuuu:
// Headless logic hook (packages/ui/src/hooks/useTaskList.ts)
import { useCallback, useState } from 'react';
import type { Task } from '@tuturuuu/types'; // shared Task type (import path assumed)

export function useTaskList(boardId: string) {
  const [tasks, setTasks] = useState<Task[]>([]);
  const [isLoading, setIsLoading] = useState(false);

  const addTask = useCallback((task: Task) => {
    setTasks(prev => [...prev, task]);
  }, []);

  const removeTask = useCallback((id: string) => {
    setTasks(prev => prev.filter(t => t.id !== id));
  }, []);

  // All logic, accessibility, keyboard shortcuts
  return { tasks, isLoading, addTask, removeTask };
}

// Presentation component (apps/web/src/components/tasks/TaskList.tsx)
export function TaskList({ boardId }: Props) {
  const { tasks, isLoading, addTask, removeTask } = useTaskList(boardId);

  // Only UI rendering, no logic
  return (
    <div className="task-list">
      {tasks.map(task => (
        <TaskCard key={task.id} task={task} onRemove={removeTask} />
      ))}
    </div>
  );
}

Future-Proofing for Multiple Platforms

The headless hooks are pure application logic. This means they can be reused to power completely different UIs. The same useUserProfile hook that powers a web application could later be used to power a React Native mobile app or even a command-line interface, providing maximum code reuse and flexibility. Future possibilities:
// Same hook, different presentations
import { useTaskList } from '@tuturuuu/hooks';

// Web app (current)
<WebTaskList boardId={id} />

// Future: Mobile app
<MobileTaskList boardId={id} />

// Future: Desktop app (Electron/Tauri)
<DesktopTaskList boardId={id} />

// All use the same useTaskList() logic

Why Not Other Frontend Architectures?

Micro-frontends

While powerful, this architecture introduces significant complexity in:
  • Build tooling and module federation
  • Deployment pipelines and version coordination
  • Routing and state management across boundaries
  • Performance overhead of loading multiple bundles
The modular monolith provides many of the same organizational benefits with a fraction of the operational overhead, making it a more pragmatic starting point.

Traditional Coupled (Server-Rendered) Frontends

These architectures tightly couple the frontend presentation to the backend logic, making it difficult to create modern, rich user experiences.
| Server-Rendered Limitation | Impact | React Modular Monolith Solution |
| --- | --- | --- |
| Tight Backend Coupling | Frontend changes require backend deploys | Clean API separation via tRPC |
| Limited Interactivity | Complex UIs are difficult | Full React capabilities |
| Poor Multi-Client Support | Can’t easily support mobile | Headless hooks reusable |
| Slow Iteration | Full page reloads, slower development | Fast HMR, component-level updates |
They do not provide a clean API separation and are not flexible enough to support multiple client types (e.g., web and mobile) from a single backend.

Architectural Advantages and Drawbacks

The chosen architecture provides significant advantages but also introduces tradeoffs that must be managed. Understanding both sides enables informed decision-making.
For a comprehensive comparison of all architectural patterns (N-Tier, Hexagonal, Clean, Onion, Monolithic, Modular Monolith, Microservices, Event-Driven) with detailed pros and cons, see Architectural Patterns Comparison.

Four Key Advantages

1. Superior and Enduring Maintainability through Strong Architectural Boundaries

Architectural Choice: The architecture deliberately enforces separation of concerns at two critical levels: between business domains using Microservices, and between business logic and technology using Hexagonal Architecture.
Impact and Justification: This layered approach to separation is crucial for long-term project health. At the macro level, isolating business domains like “Web Platform” and “Finance” into separate microservices prevents the system from degrading into a “big ball of mud” where changes in one area have unintended consequences in another. At the micro level, the Hexagonal Architecture’s strict isolation of the core domain logic ensures that the most valuable business rules are shielded from technological churn. This means we can upgrade a database, switch a messaging provider, or change a web framework with minimal, predictable impact, drastically reducing the cost and risk of maintenance over the system’s lifespan.
Clarifying Additions: This structure makes responsibilities clearly defined and easier for teams to understand. Changes stay local to the affected boundary instead of leaking across the system. This reduces the likelihood of unintended side effects and simplifies long-term evolution.

2. High Organizational Agility and Independent Deployability

Architectural Choice: The Microservices architecture decomposes the system into a suite of small, autonomous services, each aligned with a specific business capability.
Impact and Justification: This architectural choice is a direct enabler of organizational agility. It allows us to structure development teams to have full ownership of their respective services, from code to deployment (the “you build it, you run it” model). This autonomy eliminates the bottlenecks of a monolithic release process. A team can deploy updates to their service multiple times a day without coordinating with or waiting for other teams. This dramatically accelerates the feature delivery lifecycle and allows the organization to respond more quickly to changing business requirements.
Clarifying Additions: Teams can deliver updates without waiting on other domains. This separation increases overall development speed and lowers coordination overhead. The architecture naturally supports rapid and continuous improvement.

3. Enhanced Fault Isolation and System-Wide Resilience

Architectural Choice: By decomposing the system into separate, independently running Microservices, we contain the “blast radius” of any potential failure.
Impact and Justification: In any complex system, failures are inevitable. A monolithic architecture is fragile because a single critical bug (like a memory leak or an unhandled exception) in a non-essential module can bring the entire application down. Our microservices design ensures that a failure in one service (for instance, the Finance app) is completely isolated. Core services like the main Web Platform and Calendar continue to run unaffected. This transforms a potentially catastrophic failure into a manageable, localized degradation of service, leading to a far more resilient and reliable platform for end-users.
Clarifying Additions: The system remains partially functional rather than failing entirely. Troubleshooting becomes simpler because issues are naturally localized. This architecture improves uptime and user experience during partial disruptions.

4. UI/Logic Separation for Multi-Platform Potential and Design Flexibility

Architectural Choice: A Headless UI pattern is employed on the frontend, cleanly separating the presentation layer (the “head”) from the application logic, state management, and accessibility hooks (the “body”).
Impact and Justification: This choice provides two strategic advantages. First, it offers immense design flexibility. The entire look and feel of the application can be radically redesigned by creating a new presentation layer that plugs into the existing, stable application logic, allowing for rapid rebranding or UX overhauls. Second, and more importantly, it makes the application logic platform-agnostic. The “headless” hooks and state management code can be reused to power completely different frontends in the future, such as a native mobile application or a voice-activated interface, maximizing code reuse and ensuring the system can adapt to new user touchpoints.
Clarifying Additions: Changes to visual design do not risk breaking functional behavior. Functional logic can be reused across new user interfaces with minimal changes. This future-proofs the frontend against new device types or presentation styles.

Four Key Drawbacks

1. Significant Inherent Operational Complexity

Architectural Choice: A distributed Microservices architecture is not a single application but a system of systems, requiring a sophisticated support infrastructure including Service Discovery, an API Gateway, and centralized observability.
Impact and Justification: This introduces a steep increase in operational complexity compared to a monolith. The team must now manage the deployment, networking, and health of multiple independent applications. This requires specialized DevOps expertise and a robust toolchain for logging, metrics, and tracing to understand the system’s behavior. The infrastructure itself becomes a critical product that must be built and maintained, representing a significant investment of time and resources.
Clarifying Additions: More moving parts require greater architectural discipline. Each service adds overhead that must be monitored and understood. Organizations must prepare for the ongoing effort required to operate a distributed system.

2. Challenges of Distributed Data Management and Consistency

Architectural Choice: The principle of decentralized data ownership in a Microservices architecture prohibits the use of traditional, simple ACID transactions across service boundaries.
Impact and Justification: This forces the system to embrace Eventual Consistency. Business processes that span multiple services must be carefully designed using complex patterns like Sagas to handle failures and ensure that the system eventually reaches a consistent state. This is a fundamentally harder paradigm for developers to reason about and requires a shift in mindset away from the guarantees provided by a single relational database.
Clarifying Additions: Teams must think differently about data reliability across boundaries. Data no longer updates everywhere at the same moment, which requires intentional design. This increases cognitive load when building cross-service workflows.
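As a hedged illustration of what that mindset shift looks like (all function names are hypothetical), a saga-style workflow pairs each remote step with an explicit compensating action instead of relying on a single ACID transaction:
// Saga sketch: a cross-service upgrade that compensates on failure
async function upgradeWorkspacePlan(workspaceId: string) {
  const payment = await chargeCustomer(workspaceId); // finance service
  try {
    await grantPlanEntitlements(workspaceId); // web platform service
  } catch (err) {
    // No distributed ACID rollback is available, so compensate explicitly;
    // until the refund completes, the system is only eventually consistent
    await refundCharge(payment.id);
    throw err;
  }
}

// Hypothetical remote calls for the sketch
declare function chargeCustomer(workspaceId: string): Promise<{ id: string }>;
declare function grantPlanEntitlements(workspaceId: string): Promise<void>;
declare function refundCharge(paymentId: string): Promise<void>;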

3. Intrinsic Complexity of Distributed System Debugging and Performance Analysis

Architectural Choice: A single user request can trigger a complex, asynchronous chain of interactions that traverses the API Gateway and fans out to multiple backend services and background jobs.
Impact and Justification: When something goes wrong, diagnosing the root cause is no longer as simple as looking at a single log file or stack trace. It requires correlating logs, metrics, and traces from multiple services to piece together the full story. This necessitates a mature and well-integrated observability stack (e.g., distributed tracing with OpenTelemetry) to make debugging and performance tuning manageable.
Clarifying Additions: Architectural visibility becomes essential for diagnosing the flow of requests. Failures may appear in one service even if the root cause lies elsewhere. This makes system-wide insight a core architectural requirement.
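A hedged sketch of what this looks like with the public @opentelemetry/api surface (the tracer name, attribute key, and activateWorkspace call are illustrative):
import { SpanStatusCode, trace } from '@opentelemetry/api';

const tracer = trace.getTracer('apps-web');

export async function activateWorkspaceTraced(wsId: string) {
  // The span joins the trace started at the API Gateway, so this hop is visible end-to-end
  return tracer.startActiveSpan('workspace.activate', async (span) => {
    span.setAttribute('workspace.id', wsId);
    try {
      await activateWorkspace(wsId);
    } catch (err) {
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end();
    }
  });
}

// Hypothetical service call for the sketch
declare function activateWorkspace(wsId: string): Promise<void>;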

4. Governance Overhead of Maintaining Modularity

Architectural Choice: Both the backend Microservices and the frontend Modular Monolith rely on maintaining strict boundaries to be effective.
Impact and Justification: This requires active architectural governance. Without discipline, developers can easily create inappropriate dependencies, turning the microservices into a distributed monolith or eroding the boundaries of the frontend modules. The architecture requires ongoing vigilance, clear standards, and automated checks (e.g., dependency analysis tools) to prevent architectural decay over time.
Clarifying Additions: Teams must follow agreed-upon boundaries consistently. Regular architecture reviews help identify early signs of erosion. Maintaining clean boundaries becomes an ongoing responsibility rather than a one-time setup.
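One common way to automate that vigilance (the file globs and patterns below are illustrative) is to codify module boundaries with ESLint’s built-in no-restricted-imports rule so that violations fail CI:
// eslint.config.mjs (illustrative excerpt)
export default [
  {
    files: ['apps/web/src/components/finance/**'],
    rules: {
      'no-restricted-imports': [
        'error',
        {
          patterns: [
            // The finance module may use shared packages and its own files,
            // but not the internals of the calendar or tasks modules
            { group: ['**/components/calendar/*', '**/components/tasks/*'] },
          ],
        },
      ],
    },
  },
];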

Decision Summary

| Decision | Problem Solved | Key Benefit | Trade-off Accepted |
| --- | --- | --- | --- |
| Microservices | Monolith scaling limits | Independent deployment & scaling | Increased operational complexity |
| Event-Driven | Tight synchronous coupling | Ultimate decoupling & resilience | Eventual consistency |
| Hexagonal | Technology lock-in | Testability & flexibility | Additional abstraction layers |
| React Modular Monolith | Frontend chaos at scale | Maintainability & reusability | Discipline required for boundaries |

Next Steps