Documentation Index
Fetch the complete documentation index at: https://docs.tuturuuu.com/llms.txt
Use this file to discover all available pages before exploring further.
Prerequisite: You should have followed the Development and
Local Supabase Development setup guides.
Overview
Tuturuuu leverages the Vercel AI SDK to generate structured data from large language models (LLMs). This approach enables type-safe AI responses, improved reliability, and consistent data structures for features like flashcards, quizzes, and learning plans. This guide covers how to use AI structured data generation in the Tuturuuu development workflow.
Key Concepts
What is Structured Data Generation?
While text generation can be useful, many applications require generating structured data. For example, you might want to:
- Extract specific information from text
- Generate quizzes or flashcards from learning material
- Create complex objects like learning plans or task lists
- Ensure AI responses follow a consistent format
The Vercel AI SDK provides the generateObject and streamObject functions for exactly this. You can use Zod schemas to specify the shape of the data that you want, and the AI model will generate data that conforms to that structure.
Architecture in Tuturuuu
Tuturuuu’s AI features follow this high-level architecture:
- Frontend UI - React components that display and interact with AI-generated content
- API Routes - Next.js routes that handle AI requests and responses
- AI SDK - Vercel AI SDK that manages model providers and generates structured data
- Supabase - Backend database for authentication, authorization, and storing AI-generated content
Mira Chat Attachments
Dashboard chat attachments are uploaded to Supabase Storage before the user message is sent. New chats may first place files under {wsId}/chats/ai/resources/temp/{userId} and then move them into {wsId}/chats/ai/resources/{chatId} once the chat exists.
Tools that read those attachments, such as convert_file_to_markdown, should
resolve bare filenames and stale same-workspace attachment paths against the
current chat folder. They must still reject full paths from another workspace.
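A minimal sketch of that resolution rule (the function name and exact layout assumptions are mine, not the actual tool implementation):

```typescript
// Resolve an attachment reference against the current chat's storage folder.
// Accepts bare filenames and stale same-workspace paths; rejects paths that
// point into a different workspace.
function resolveAttachmentPath(
  wsId: string,
  chatId: string,
  ref: string,
): string | null {
  const chatFolder = `${wsId}/chats/ai/resources/${chatId}`;

  // Bare filename: anchor it onto the current chat folder.
  if (!ref.includes('/')) return `${chatFolder}/${ref}`;

  // Full path from another workspace: always reject.
  if (!ref.startsWith(`${wsId}/`)) return null;

  // Stale same-workspace path (e.g. a temp upload folder): re-anchor the
  // filename onto the current chat folder.
  const filename = ref.split('/').pop()!;
  return `${chatFolder}/${filename}`;
}
```

For example, a bare `notes.pdf` in workspace `ws1`, chat `c1` would resolve to `ws1/chats/ai/resources/c1/notes.pdf`, while any `ws2/...` path returns null.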
Mira Dashboard Chat Agent Loop
The dashboard Mira chat keeps the main assistant in fast mode by default. New sessions should ignore stale stored thinking preferences unless the user
manually opts into the toolbar’s deep-check mode for that active session.
Assistant text should stream as soon as it is useful, then tool calls may run
inline, followed by more assistant text in the same response. Keep every
assistant text surface on the shared Streamdown wrapper, including text rendered
inside compact tool UIs, so code blocks, tables, Mermaid, math, and CJK spacing
stay consistent.
Mira enables Streamdown math with singleDollarTextMath because model output
commonly uses $...$ inline LaTeX. Keep @streamdown/math in the Tailwind
source scan alongside the base Streamdown dist files, and keep
katex/dist/katex.min.css imported from the chat renderer path.
While the request is submitted but no assistant text has arrived yet, the chat
should render a lightweight assistant activity bubble with rotating status copy.
Hide that placeholder as soon as real assistant text streams, and keep it
separate from markdown/tool rendering so status cycling does not re-render heavy
message content.
Keep the first optional-tool model step lean. Unless a workflow must force a
specific tool before answering, expose only select_tools and
no_action_needed on the first step so the model can stream direct answers
without carrying every tool schema. Stream smoothing should not add artificial
per-chunk delay on the dashboard chat path; perceived smoothness belongs in the
client activity state, not in delayed server chunks.
Do not force select_tools as the universal first step. Force tool selection
only when the workflow cannot answer safely before a tool runs, such as current
web lookups, workspace context switching, workspace member lookups, file
conversion, or writes. Complex verification, risk review, planning checks, or
conflicting evidence should use run_parallel_checks, which delegates to
bounded ToolLoopAgent subagents in parallel and returns a compact summary to
the main assistant.
Schema Definitions
Schemas define the structure of the data that will be generated by the AI models. In Tuturuuu, these are defined in packages/ai/src/object/types.ts using Zod.
Here are some examples of schemas used in Tuturuuu:
Flashcard Schema
Quiz Schema
Year Plan Schema
Creating an API Endpoint
To create an API endpoint that generates structured data, follow these steps:
1. Create a new route file
Create a new route file in the appropriate Next.js app, for example:
2. Implement authentication and validation
Use Supabase to authenticate the user and validate their permissions:
Fast session auth for Mira and assistant routes
Session-authenticated AI routes may accept the x-tuturuuu-ai-temp-auth header as an optimization before falling back to Supabase getUser(). The browser mints this token through POST /api/ai/temp-auth/token after the normal session, workspace normalization, membership, and selected billing workspace checks succeed. Tokens live only in memory on the client, expire after 60 seconds, and are stored in Redis only as SHA-256 digests.
Revocation is version-based: ai:temp-auth:user-version:{userId} is bumped before logout or account removal, so any token minted under the old version is rejected. Redis is not authoritative for security; when Redis is unavailable or a token is missing/invalid, routes fall back to the existing Supabase session path. A revoked token returns 401 and does not fall back.
Credit availability snapshots are also Redis-backed under ai:credits:snapshot:{billingWsId}:{userId}. They can be used only for UI status and AI preflight when fresh, above the near-exhaustion threshold, model/feature compatible, and no charge marker exists under ai:credits:in-flight:{billingWsId}:{userId}. Actual reservations, deductions, and ledger writes remain in Postgres, and successful commits invalidate the snapshot.
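A sketch of the version-based revocation mechanism described above, with an in-memory Map standing in for Redis (the `ai:temp-auth:user-version:{userId}` key follows the document; the token key, function names, and storage shape are assumptions):

```typescript
import { createHash } from 'node:crypto';

// In-memory stand-in for Redis; real code would use a Redis client.
const store = new Map<string, string>();

const sha256 = (token: string) =>
  createHash('sha256').update(token).digest('hex');

// Minting: store only the SHA-256 digest, tagged with the user's
// current version.
function mintToken(userId: string, token: string) {
  const version = store.get(`ai:temp-auth:user-version:${userId}`) ?? '1';
  store.set(`ai:temp-auth:token:${sha256(token)}`, `${userId}:${version}`);
}

// Bumping the version (e.g. before logout) invalidates every token
// minted under the old version, without enumerating them.
function revokeAll(userId: string) {
  const key = `ai:temp-auth:user-version:${userId}`;
  const current = Number(store.get(key) ?? '1');
  store.set(key, String(current + 1));
}

// Verification: the token's recorded version must match the current one.
function verifyToken(userId: string, token: string): boolean {
  const entry = store.get(`ai:temp-auth:token:${sha256(token)}`);
  const version = store.get(`ai:temp-auth:user-version:${userId}`) ?? '1';
  return entry === `${userId}:${version}`;
}
```

Per the document, this Redis layer is an optimization only: routes must still fall back to the Supabase session path when Redis is unavailable, while an explicitly revoked token returns 401 with no fallback.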
3. Generate structured data
Use the AI SDK to generate structured data based on the schema:
Supported Models
Tuturuuu supports multiple AI models through the Vercel AI SDK. The available models are defined in packages/ai/src/models.ts:
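As an illustrative sketch only (the model ids, providers, and shape below are assumptions, not the contents of packages/ai/src/models.ts):

```typescript
// Hypothetical model registry; see packages/ai/src/models.ts for the
// real definitions.
interface ModelInfo {
  id: string;
  provider: 'google' | 'openai' | 'anthropic';
  label: string;
}

const models: ModelInfo[] = [
  { id: 'gemini-2.0-flash', provider: 'google', label: 'Gemini 2.0 Flash' },
  { id: 'gpt-4o-mini', provider: 'openai', label: 'GPT-4o mini' },
];

const defaultModel = models[0];

// Look up a model by id, e.g. when validating a client-supplied model name.
function findModel(id: string): ModelInfo | undefined {
  return models.find((m) => m.id === id);
}
```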
Calling from the Frontend
To call your AI endpoint from the frontend, you can use the appropriate hooks or fetch API:
Integration with Supabase
Tuturuuu’s AI features are tightly integrated with Supabase for several purposes:
TypeScript Types
Supabase-generated TypeScript types are available at packages/types/src/supabase.ts. These types are automatically generated when you run bun sb:typegen or bun sb:reset and are accessible to all apps that have the @tuturuuu/types package installed.
You can use these types to ensure type safety when working with Supabase data in your AI features:
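For example (the table name and row shape below are illustrative stand-ins; in app code you would import the generated `Database` type from the @tuturuuu/types package instead of defining one locally):

```typescript
// Stand-in for the generated type, so this sketch is self-contained.
// Real code: import type { Database } from '@tuturuuu/types/supabase';
type Database = {
  public: {
    Tables: {
      workspace_ai_prompts: {
        Row: { id: string; ws_id: string; prompt: string };
      };
    };
  };
};

// Row types can be derived directly from the generated schema, so they
// stay in sync with the database after every `bun sb:typegen`.
type AiPromptRow =
  Database['public']['Tables']['workspace_ai_prompts']['Row'];

const example: AiPromptRow = {
  id: '1',
  ws_id: 'ws_123',
  prompt: 'Generate 10 flashcards about photosynthesis',
};
```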
Short-hand Type Access
For more convenient access to common table types in your AI features, Tuturuuu also provides short-hand type definitions in packages/types/src/db.ts. These are easier to use and remember than the full database type paths:
Authentication and Authorization
Before making AI requests, ensure the user is authenticated and authorized to use the feature:
Feature Flags
Use the workspace_secrets table to enable or disable AI features for specific workspaces:
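A sketch of the enable-check (the secret name ENABLE_AI and the row shape are assumptions; in app code the rows would come from a Supabase query against workspace_secrets filtered by workspace id):

```typescript
interface WorkspaceSecret {
  name: string;
  value: string;
}

// Treat the feature as enabled only when the secret exists and is
// explicitly set to 'true'; a missing row means disabled.
function isAiEnabled(secrets: WorkspaceSecret[]): boolean {
  return secrets.some((s) => s.name === 'ENABLE_AI' && s.value === 'true');
}
```

Defaulting to disabled when the secret is absent keeps newly created workspaces safe until the flag is deliberately turned on.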
Storing Results
You can store AI-generated content in Supabase for future use:
Best Practices
Schema Design
When designing schemas for AI-generated content:
- Be specific - Use the .describe() method to provide clear instructions to the AI model
- Keep it simple - Break complex schemas into smaller, nested objects
- Add validations - Use Zod’s validation methods (.min(), .max(), .regex(), etc.)
- Use enums - For fields with a fixed set of values, use .enum()
Error Handling
Implement robust error handling for AI-generated content:
Response Processing
For complex AI-generated content, you may need to post-process the response:
Local Development and Testing
Setting Up API Keys
To test AI features locally, you need to set up the appropriate API keys in your environment:
- Create a .env.local file in the root of your Next.js app
- Add the necessary API keys
- Restart your development server
Testing AI Endpoints
You can test your AI endpoints using tools like Postman or simple cURL commands:
Troubleshooting
Common Issues
- API Key Issues: Ensure your API keys are correctly set in your environment
- Model Unavailability: Some models may be unavailable in certain regions
- Token Limits: Large prompts may exceed token limits
- Schema Validation Errors: The AI might generate content that doesn’t match your schema
Debugging Tips
- Log the prompt: Print the full prompt being sent to the AI model
- Start with simple schemas: Begin with simple schemas and gradually increase complexity
- Check response format: Verify the raw response from the AI model before schema validation