* feat: add OpenTelemetry distributed tracing to letta-web

  Enables end-to-end distributed tracing from letta-web through
  memgpt-server using OpenTelemetry. Traces are exported via OTLP to
  Datadog APM for monitoring request latency across services.

  Key changes:
  - Install OTEL packages: @opentelemetry/sdk-node, auto-instrumentations-node
  - Create apps/web/src/lib/tracing.ts with the full OTEL configuration
  - Initialize tracing in instrumentation.ts (before any other imports)
  - Add the OTEL packages to serverExternalPackages in next.config.js
  - Add OTEL environment variables to the deployment configs:
    - OTEL_EXPORTER_OTLP_ENDPOINT (e.g., http://datadog-agent:4317)
    - OTEL_SERVICE_NAME (letta-web)
    - OTEL_ENABLED (true in production)

  Features enabled:
  - Automatic HTTP/fetch instrumentation with trace context propagation
  - Service metadata (name, version, environment)
  - Trace correlation with logs (getCurrentTraceId helper)
  - Graceful shutdown handling
  - Health check endpoint filtering

  Configuration:
  - Traces sent to the OTLP endpoint (Datadog agent)
  - W3C Trace Context propagation for distributed tracing
  - BatchSpanProcessor for efficient trace export
  - Debug logging in the development environment

  GitHub variables to set:
  - OTEL_EXPORTER_OTLP_ENDPOINT (e.g., http://datadog-agent:4317)
  - OTEL_ENABLED (true)

* feat: add OpenTelemetry distributed tracing to cloud-api

  Completes end-to-end distributed tracing across the full request chain:
  letta-web → cloud-api → memgpt-server (core). All three services now
  export traces via OTLP to Datadog APM.
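  The "health check endpoint filtering" mentioned above is typically done
  by skipping spans for requests whose path matches known probe endpoints.
  A minimal sketch of such a predicate (the IGNORED_PATHS list and the
  shouldIgnoreRequest name are illustrative, not the actual tracing.ts
  code; the real hook plugs into the HTTP auto-instrumentation's
  ignore-incoming-request option):

  ```typescript
  // Paths that should not produce spans (hypothetical list; adjust per service).
  const IGNORED_PATHS = ["/health", "/healthz", "/ready", "/livez"];

  // Return true to skip tracing for this incoming request URL.
  export function shouldIgnoreRequest(url: string): boolean {
    // Strip any query string before comparing the path.
    const path = url.split("?")[0];
    return IGNORED_PATHS.some((p) => path === p || path.startsWith(p + "/"));
  }
  ```

  Filtering these out keeps Kubernetes liveness/readiness probes from
  flooding APM with short, uninteresting traces.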
  Key changes:
  - Install the OTEL packages in cloud-api
  - Create apps/cloud-api/src/instrument-otel.ts with the full OTEL configuration
  - Initialize OTEL tracing in main.ts (before Sentry)
  - Add OTEL environment variables to the deployment configs:
    - OTEL_EXPORTER_OTLP_ENDPOINT (e.g., http://datadog-agent:4317)
    - OTEL_SERVICE_NAME (cloud-api)
    - OTEL_ENABLED (true in production)
    - GIT_HASH (for the service version)

  Features enabled:
  - Automatic HTTP/Express instrumentation
  - Trace context propagation (W3C Trace Context)
  - Service metadata (name, version, environment)
  - Trace correlation with logs (getCurrentTraceId helper)
  - Health check endpoint filtering

  Configuration:
  - Traces sent to the OTLP endpoint (Datadog agent)
  - Seamless trace propagation through the full request chain
  - BatchSpanProcessor for efficient trace export

  Complete trace flow:
  1. letta-web receives a request and starts the root span
  2. letta-web calls cloud-api, propagating the trace context
  3. cloud-api calls memgpt-server, propagating the trace context
  4. All spans are linked by trace ID and appear as a single trace in Datadog

* fix: prevent duplicate OTEL SDK initialization and handle array headers

  Fixes identified by Cursor bugbot:

  1. Added an initialization guard to prevent duplicate SDK initialization
     - Added an isInitialized flag to prevent multiple SDK instances
     - Prevents duplicate SIGTERM handlers from being registered
     - Prevents resource leaks from lost SDK references

  2. Fixed array header value handling
     - HTTP headers can be string | string[] | undefined
     - Now properly handles the array case by taking the first element
     - Prevents passing arrays to span.setAttribute(), which expects strings

  3. Verified the OTEL dependencies are correctly installed
     - Packages are in the root package.json (monorepo structure)
     - Available to all workspace packages (web, cloud-api)
     - Bugbot false positive - the dependencies ARE present

  Applied the fixes to both:
  - apps/web/src/lib/tracing.ts
  - apps/cloud-api/src/instrument-otel.ts

* fix: handle SIGTERM promise rejections and unify initialization pattern

  Fixes identified by Cursor bugbot:

  1. Fixed an unhandled promise rejection in the SIGTERM handlers
     - Changed from an async arrow function to a sync handler with .catch()
     - Prevents unhandled promise rejections during shutdown
     - Logs errors if the OTLP endpoint is unreachable during shutdown
     - Applied to both web and cloud-api

  2. Unified the initialization pattern across services
     - Removed auto-initialization from cloud-api's instrument-otel.ts
     - main.ts now explicitly calls initializeTracing()
     - Matches the web pattern (explicit call in instrumentation.ts)
     - Reduces confusion and maintains consistency

  Both services now follow the same pattern: import the tracing module,
  explicitly call initializeTracing(), and guard against duplicate
  initialization with the isInitialized flag.

  Before (cloud-api):
    import './instrument-otel'; // Auto-initializes

  After (cloud-api):
    import { initializeTracing } from './instrument-otel';
    initializeTracing(); // Explicit call

  SIGTERM handler before:
    process.on('SIGTERM', async () => {
      await shutdownTracing(); // Unhandled rejection!
    });

  SIGTERM handler after:
    process.on('SIGTERM', () => {
      shutdownTracing().catch((error) => {
        console.error('Error during OTEL shutdown:', error);
      });
    });

* feat: add environment differentiation for distributed tracing

  Enables proper environment filtering in Datadog APM by introducing
  LETTA_ENV to distinguish between production, staging, canary, and
  development.

  Problem:
  - NODE_ENV is always 'production' or 'development'
  - No way to differentiate staging, canary, etc. in Datadog
  - All traces appeared under no environment or the same environment
  - Couldn't test with staging traces

  Solution:
  - Added the LETTA_ENV variable (production, staging, canary, development)
  - Set the deployment.environment attribute for Datadog APM filtering
  - Updated all deployment configs (workflows, justfile)
  - Falls back to NODE_ENV if LETTA_ENV is not set

  Changes:
  1. Updated the tracing code (web + cloud-api):
     - Use LETTA_ENV for the environment name
     - Set SEMRESATTRS_DEPLOYMENT_ENVIRONMENT (resolves to deployment.environment)
     - Fallback: LETTA_ENV → NODE_ENV → 'development'

  2. Updated the deployment configs:
     - .github/workflows/deploy-web.yml: LETTA_ENV=production
     - .github/workflows/deploy-cloud-api.yml: LETTA_ENV=production
     - justfile: LETTA_ENV with a default of production

  3. Added comprehensive documentation:
     - OTEL_TRACING.md with a full setup guide
     - How to view environments in Datadog APM
     - How to test with the staging environment
     - Dashboard query examples
     - Troubleshooting guide

  Usage:
    LETTA_ENV=production   # Production
    LETTA_ENV=staging      # Staging
    LETTA_ENV=development  # Local dev

  Datadog APM now shows:
  - env:production (main traffic)
  - env:staging (staging deployments)
  - env:canary (canary deployments)
  - env:development (local testing)

  View in Datadog: APM → Services → Filter by the env dropdown → Select
  production/staging/etc.

* fix: prevent OTEL SDK double shutdown and error handler failures

  Fixes identified by Cursor bugbot:

  1. SDK double shutdown prevention
     - Set sdk = null after a successful shutdown
     - Set isInitialized = false to allow re-initialization
     - Even on a shutdown error, mark as shut down to prevent retry
     - Prevents errors when shutdownTracing() is called multiple times
     - Applied to both web and cloud-api

  2. Error handler using console.error directly (web only)
     - Replaced the dynamic require('./logger') with console.error
     - The logger module may not be loaded during early initialization
     - This code runs in Next.js instrumentation.ts before modules load
     - Prevents masking the original OTEL errors with logger failures
     - cloud-api already correctly used console.error

  Before (bug #1):
    await sdk.shutdown();
    // sdk still references the shut-down SDK;
    // the next call to shutdownTracing() tries to shut down again

  After (bug #1):
    await sdk.shutdown();
    sdk = null;            // ✅ Prevent double shutdown
    isInitialized = false; // ✅ Allow re-init

  Before (bug #2 - web):
    const { logger } = require('./logger'); // ❌ May fail during init
    logger.error('Failed to initialize OTEL', errorInfo);

  After (bug #2 - web):
    console.error('Failed to initialize OTEL:', error); // ✅ Always works

  Scenarios protected:
  - Multiple SIGTERM signals
  - Explicit shutdownTracing() calls
  - Logger initialization failures
  - Circular dependencies during early init

* feat: add environment differentiation to core and staging deployments

  Enables proper environment filtering in Datadog APM for memgpt-server
  (core) and staging deployments by adding the deployment.environment
  resource attribute.

  Problem:
  - Core traces didn't show an environment in Datadog APM
  - The staging workflow had no OTEL configuration
  - Couldn't differentiate staging vs production core traces

  Solution:
  1. Updated the core OTEL resource to include deployment.environment
     - Added the deployment.environment attribute in resource.py
     - Uses settings.environment, which maps to the LETTA_ENVIRONMENT env var
     - Applied .lower() for consistency with web/cloud-api

  2. Added LETTA_ENV to the staging workflow
     - nightly-staging-deploy-test.yaml: LETTA_ENV=staging
     - Added the OTEL_EXPORTER_OTLP_ENDPOINT and OTEL_ENABLED vars
     - Traces from staging will show env:staging in Datadog

  3.
  Added LETTA_ENV to the production core workflow
     - deploy-core.yml: LETTA_ENV=production
     - Added the OTEL configuration at the workflow level
     - Traces from production will show env:production

  4. Updated the justfile for core deployments
     - Set LETTA_ENVIRONMENT from LETTA_ENV with a default of production
     - Maps to the settings.environment field (env_prefix="letta_")

  Environment mapping:
  - Web/cloud-api: use LETTA_ENV directly
  - Core: use LETTA_ENVIRONMENT (Pydantic with the letta_ prefix)
  - Both map to the deployment.environment resource attribute

  Now all services properly tag traces with an environment:
  ✅ letta-web: deployment.environment set
  ✅ cloud-api: deployment.environment set
  ✅ memgpt-server: deployment.environment set

  View in Datadog: APM → Services → Filter by env:production or env:staging

* refactor: unify environment variable to LETTA_ENV across all services

  Simplifies environment configuration by using LETTA_ENV consistently
  across all three services (web, cloud-api, and core) instead of having
  core use LETTA_ENVIRONMENT.

  Problem:
  - Core used LETTA_ENVIRONMENT (due to the Pydantic env_prefix)
  - Web and cloud-api used LETTA_ENV
  - Confusing to have two different variable names
  - The justfile had to map LETTA_ENV → LETTA_ENVIRONMENT

  Solution:
  - Added a validation_alias to core's settings.py
  - The environment field now reads from LETTA_ENV directly
  - Falls back to letta_environment for backwards compatibility
  - Updated the justfile to set LETTA_ENV for core (not LETTA_ENVIRONMENT)
  - Updated the documentation to clarify the consistent naming

  Changes:
  1. apps/core/letta/settings.py
     - Added validation_alias=AliasChoices("LETTA_ENV", "letta_environment")
     - Prioritizes LETTA_ENV, falls back to letta_environment
     - Updated the description to include all environment values

  2. justfile
     - Changed --set secrets.LETTA_ENVIRONMENT to --set secrets.LETTA_ENV
     - Now consistent with the web and cloud-api deployments

  3. OTEL_TRACING.md
     - Added a note that all services use LETTA_ENV consistently
     - Fixed trailing whitespace

  Before:
  - Web: LETTA_ENV
  - Cloud-API: LETTA_ENV
  - Core: LETTA_ENVIRONMENT ❌

  After:
  - Web: LETTA_ENV
  - Cloud-API: LETTA_ENV
  - Core: LETTA_ENV ✅

  All services now use the same environment variable name.

* refactor: standardize on LETTA_ENVIRONMENT across all services

  Unifies environment variable naming to use LETTA_ENVIRONMENT
  consistently across all three services (web, cloud-api, and core).

  Problem:
  - The previous commit tried to use LETTA_ENV everywhere
  - Core already uses Pydantic with env_prefix="letta_"
  - Better to standardize on LETTA_ENVIRONMENT to match core's conventions

  Solution: all services now read from LETTA_ENVIRONMENT
  - Web: process.env.LETTA_ENVIRONMENT
  - Cloud-API: process.env.LETTA_ENVIRONMENT
  - Core: settings.environment (reads LETTA_ENVIRONMENT via the Pydantic prefix)

  Changes:
  1. apps/web/src/lib/tracing.ts
     - Changed LETTA_ENV → LETTA_ENVIRONMENT

  2. apps/cloud-api/src/instrument-otel.ts
     - Changed LETTA_ENV → LETTA_ENVIRONMENT

  3. apps/core/letta/settings.py
     - Removed the validation_alias (not needed)
     - Uses standard Pydantic env_prefix behavior

  4. All workflow files updated:
     - deploy-web.yml: LETTA_ENVIRONMENT=production
     - deploy-cloud-api.yml: LETTA_ENVIRONMENT=production
     - deploy-core.yml: LETTA_ENVIRONMENT=production
     - nightly-staging-deploy-test.yaml: LETTA_ENVIRONMENT=staging
     - stage-web.yaml: LETTA_ENVIRONMENT=staging
     - stage-cloud-api.yaml: LETTA_ENVIRONMENT=staging (added OTEL config)
     - stage-core.yaml: LETTA_ENVIRONMENT=staging (added OTEL config)

  5. justfile
     - Updated all LETTA_ENV → LETTA_ENVIRONMENT
     - Web: --set env.LETTA_ENVIRONMENT
     - Cloud-API: --set env.LETTA_ENVIRONMENT
     - Core: --set secrets.LETTA_ENVIRONMENT

  6. OTEL_TRACING.md
     - All references updated to LETTA_ENVIRONMENT

  Final state:
  ✅ Web: LETTA_ENVIRONMENT
  ✅ Cloud-API: LETTA_ENVIRONMENT
  ✅ Core: LETTA_ENVIRONMENT (via the letta_ prefix)

  All services use the same variable name with proper Pydantic conventions.

* feat: implement split OTEL architecture (Option A)

  Implements Option A: web and cloud-api send traces directly to the
  Datadog Agent, while core keeps its existing OTEL sidecar (which exports
  to ClickHouse + Datadog).

  Architecture:
  - letta-web → Datadog Agent (OTLP:4317) → Datadog APM
  - cloud-api → Datadog Agent (OTLP:4317) → Datadog APM
  - memgpt-server → OTEL sidecar → ClickHouse + Datadog (unchanged)

  Rationale:
  - Core has an existing production sidecar setup (exports to ClickHouse
    for analytics)
  - Web/cloud-api don't need ClickHouse export, only APM
  - Simpler: direct to the Datadog Agent is sufficient
  - Minimal changes to core (already working)
  - Traces still link end-to-end via W3C Trace Context propagation

  Changes:
  1. Helm charts - added OTEL config defaults:
     - helm/letta-web/values.yaml: added the OTEL env vars
     - helm/cloud-api/values.yaml: added the OTEL env vars
     - Default: OTEL_ENABLED="false", overridden in production
     - Endpoint: http://datadog-agent:4317

  2. Production workflows - direct to the Datadog Agent:
     - deploy-web.yml: set OTEL_EXPORTER_OTLP_ENDPOINT to datadog-agent
     - deploy-cloud-api.yml: set OTEL_EXPORTER_OTLP_ENDPOINT to datadog-agent
     - deploy-core.yml: removed the OTEL vars (keep the existing setup)
     - OTEL_ENABLED="true", LETTA_ENVIRONMENT=production

  3. Staging workflows - direct to the Datadog Agent:
     - stage-web.yaml: set OTEL_EXPORTER_OTLP_ENDPOINT to datadog-agent
     - stage-cloud-api.yaml: set OTEL_EXPORTER_OTLP_ENDPOINT to datadog-agent
     - stage-core.yaml: removed the OTEL vars (keep the existing setup)
     - nightly-staging-deploy-test.yaml: removed the OTEL vars
     - OTEL_ENABLED="true", LETTA_ENVIRONMENT=staging

  4. justfile:
     - Removed LETTA_ENVIRONMENT from the core deployment (keep unchanged)
     - Web/cloud-api already correctly pass the OTEL vars from the workflows

  5. Documentation:
     - Completely rewrote OTEL_TRACING.md
     - Added architecture diagrams explaining the split setup
     - Added the Datadog Agent prerequisites
     - Added troubleshooting for the split architecture
     - Explained why we chose this approach

  Prerequisites (must verify before deploying):
  - Datadog Agent deployed with the service name datadog-agent
  - OTLP receiver enabled on port 4317
  - If a different service name/namespace is used, update the workflows

  Next steps:
  - Verify the datadog-agent service exists in the cluster
  - Verify the OTLP receiver is enabled on the Datadog agent
  - Deploy and test trace propagation across services

* refactor: shorten environment names to prod and dev

  Changes the LETTA_ENVIRONMENT values from 'production' to 'prod' and
  from 'development' to 'dev' for consistency and brevity.

  Changes:
  1. Workflows:
     - deploy-web.yml: production → prod
     - deploy-cloud-api.yml: production → prod

  2. Helm charts:
     - letta-web/values.yaml: development → dev
     - cloud-api/values.yaml: development → dev

  3. justfile:
     - Default values: production → prod

  4. Code:
     - apps/web/src/lib/tracing.ts: fallback 'development' → 'dev'
     - apps/cloud-api/src/instrument-otel.ts: fallback 'development' → 'dev'
     - apps/core/letta/settings.py: updated the description

  5. Documentation:
     - OTEL_TRACING.md: updated all examples and the table

  Environment values:
  - prod (was production)
  - staging (unchanged)
  - canary (unchanged)
  - dev (was development)

* refactor: align environment names with codebase patterns

  Changes staging to 'dev' and local development to 'local-test' to match
  existing codebase conventions (like test_temporal_metrics_local.py).
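  The environment-name fallback chain used by web/cloud-api in the commits
  above (explicit variable, then NODE_ENV, then the local fallback) can be
  sketched as a pure function; resolveDeploymentEnv is an illustrative
  name, not the actual tracing.ts code:

  ```typescript
  // Sketch of the fallback chain described above; the local fallback value
  // matches the 'local-test' convention this commit introduces.
  export function resolveDeploymentEnv(
    lettaEnv: string | undefined,
    nodeEnv: string | undefined,
  ): string {
    // Prefer the explicit environment, then NODE_ENV, then local-test.
    return lettaEnv ?? nodeEnv ?? "local-test";
  }
  ```

  The explicit variable wins so deployed clusters can be tagged prod/dev/
  canary regardless of how Node was built.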
  Rationale:
  - 'dev' for staging matches a consistent pattern across the codebase
  - 'local-test' for local development follows the test naming convention
  - Clearer distinction between deployed staging and local testing

  Environment values:
  - prod (production)
  - dev (staging/dev cluster)
  - canary (canary deployments)
  - local-test (local development)

  Changes:
  1. Staging workflows:
     - stage-web.yaml: staging → dev
     - stage-cloud-api.yaml: staging → dev

  2. Helm chart defaults (for local):
     - letta-web/values.yaml: dev → local-test
     - cloud-api/values.yaml: dev → local-test

  3. Code fallbacks:
     - apps/web/src/lib/tracing.ts: 'dev' → 'local-test'
     - apps/cloud-api/src/instrument-otel.ts: 'dev' → 'local-test'
     - apps/core/letta/settings.py: updated the description

  4. Documentation:
     - OTEL_TRACING.md: updated the table, examples, and all references
     - Clarified that dev = staging cluster, local-test = local development

  Datadog APM filters:
  - env:prod (production)
  - env:dev (staging cluster)
  - env:canary (canary)
  - env:local-test (local development)

* fix: update environment checks for lowercase values and add missing configs

  Fixes 4 bugs identified by Cursor bugbot:

  1. Case-sensitive environment checks (5 locations)
     - Updated all checks from "PRODUCTION" to a case-insensitive "prod"
     - Fixed in: resource.py, multi_agent.py, tool_manager.py,
       multi_agent_tool_executor.py, agent_manager_helper.py
     - Now properly filters local-only tools in production
     - Prevents exposing debug tools in production

  2. Device ID leak in production
     - Fixed resource.py to use a case-insensitive check
     - Now correctly excludes device.id (the MAC address) in production
     - Only adds device.id when the env is not "prod"

  3. Missing @opentelemetry/sdk-trace-base in the Next.js externals
     - Added to serverExternalPackages in next.config.js
     - Prevents webpack bundling issues with native dependencies
     - The package is directly imported for BatchSpanProcessor

  4. Missing NEXT_PUBLIC_GIT_HASH in the stage-web workflow
     - Added NEXT_PUBLIC_GIT_HASH: ${{ github.sha }}
     - Now matches the stage-cloud-api.yaml pattern
     - Staging traces will show the correct version instead of 'unknown'
     - Enables correlation of traces with specific deployments

  Changes:
  - apps/core/letta/otel/resource.py: case-insensitive check; add device.id
    only if not prod
  - apps/core/letta/functions/function_sets/multi_agent.py:
    case-insensitive prod check
  - apps/core/letta/services/tool_manager.py: case-insensitive prod check
  - apps/core/letta/services/tool_executor/multi_agent_tool_executor.py:
    case-insensitive prod check
  - apps/core/letta/services/helpers/agent_manager_helper.py:
    case-insensitive prod check
  - apps/web/next.config.js: added @opentelemetry/sdk-trace-base to externals
  - .github/workflows/stage-web.yaml: added NEXT_PUBLIC_GIT_HASH

  All checks now use: settings.environment.lower() == "prod"
  This matches the new convention: prod/dev/canary/local-test

  Also includes: the distributed-tracing skill (created in a /skill session)

* refactor: keep core PRODUCTION but normalize OTEL tags to prod

  Changes the approach to maintain backward compatibility with core
  business logic while standardizing the OTEL environment tags.

  Previous approach:
  - Changed all "PRODUCTION" checks to lowercase "prod"
  - Would break existing core business logic expectations

  New approach:
  - Core continues using "PRODUCTION" (uppercase) for business logic
  - The OTEL resource.py normalizes the environment to lowercase
    abbreviated tags
  - Web/cloud-api use "prod" directly (they have no business logic checks)

  Changes:
  1. Reverted the business logic checks to use "PRODUCTION" (uppercase):
     - multi_agent.py: check for "PRODUCTION" to block tools
     - tool_manager.py: check for "PRODUCTION" to filter local-only tools
     - multi_agent_tool_executor.py: check for "PRODUCTION" to block tools
     - agent_manager_helper.py: check for "PRODUCTION" to filter tools

  2.
  Added environment normalization for the OTEL tags:
     - resource.py: new _normalize_environment_tag() function
     - Maps PRODUCTION → prod, DEV/STAGING → dev
     - Other values (CANARY, etc.) are converted to lowercase
     - The device ID check reverted to != "PRODUCTION"

  3. Updated the core deployments to set PRODUCTION:
     - deploy-core.yml: LETTA_ENVIRONMENT=PRODUCTION
     - stage-core.yaml: LETTA_ENVIRONMENT=DEV
     - justfile: added LETTA_ENVIRONMENT with a default of PRODUCTION

  4. Updated the settings description:
     - Clarifies that the values are uppercase (PRODUCTION, DEV)
     - Notes the normalization to lowercase for OTEL tags

  Result:
  - Core business logic: uses "PRODUCTION" (unchanged, backward compatible)
  - OTEL Datadog tags: show "prod" (normalized, consistent with web/cloud-api)
  - Web/cloud-api: continue using "prod" directly (no change needed)
  - Device ID properly excluded in PRODUCTION environments

* fix: correct Python FastAPI instrumentation and environment normalization

  Fixes 3 bugs identified by Cursor bugbot in the distributed-tracing skill:

  1. Python import typo (line 50)
     - Was: from opentelemetry.instrumentation.fastapi import FastAPIInstrumentatio
     - Now: from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
     - "FastAPIInstrumentatio" is a truncated, nonexistent name; the
       correct class is FastAPIInstrumentor (with the 'or' suffix)

  2. Wrong class name usage (line 151)
     - Was: FastAPIInstrumentation.instrument_app()
     - Now: FastAPIInstrumentor.instrument_app()
     - Fixed to match the correct OpenTelemetry API

  3. Environment tag inconsistency
     - Problem: the Python template used .lower(), which converts
       PRODUCTION -> production, but resource.py normalizes
       PRODUCTION -> prod
     - This would create inconsistent tags in Datadog: 'production' vs 'prod'

     Solution:
     - Added a _normalize_environment_tag() function to the Python template
     - Matches the resource.py normalization logic
     - PRODUCTION -> prod, DEV/STAGING -> dev, others lowercase
     - Updated the comments in the workflows to clarify that normalization
       happens in code

  Changes:
  - .skills/distributed-tracing/templates/python-fastapi-tracing.py:
    - Fixed the import: FastAPIInstrumentor (not FastAPIInstrumentatio)
    - Fixed the usage: FastAPIInstrumentor.instrument_app()
    - Added the _normalize_environment_tag() function
    - Updated the environment handling to use normalization
    - Updated the docstring to clarify the PRODUCTION/DEV -> prod/dev mapping
  - .github/workflows/deploy-core.yml:
    - Clarified the comment: _normalize_environment_tag() converts to "prod"
  - .github/workflows/stage-core.yaml:
    - Clarified the comment: _normalize_environment_tag() converts to "dev"

  Result: all services now consistently show 'prod' (not 'production') in
  Datadog APM, enabling proper filtering and correlation across
  distributed traces.

* fix: add Datadog config to staging workflows and fix justfile backslash

  Fixes 3 issues found in the staging deployment logs:

  1. Missing backslash in the justfile (line 134)
     - Problem: the LETTA_ENVIRONMENT line was missing its backslash,
       which caused all subsequent helm --set flags to be ignored,
       including OTEL_EXPORTER_OTLP_ENDPOINT
     - Symptom: the letta-web and cloud-api logs showed
       "OTEL_EXPORTER_OTLP_ENDPOINT not set"
     - Fixed: --set env.LETTA_ENVIRONMENT=${LETTA_ENVIRONMENT:-prod} \  # Added backslash

  2. Missing Datadog vars in the staging workflows
     - Problem: stage-web.yaml, stage-cloud-api.yaml, and stage-core.yaml
       didn't set DD_SITE, DD_API_KEY, DD_LOGS_INJECTION, etc.
     For web/cloud-api:
     - Added to the top-level env section so the justfile can use them

     For core:
     - Added to the top-level env section
     - Added to the Deploy step env section (so the justfile can pass them
       to helm)
     - The core OTEL collector config reads these from the environment

     (Symptom: the core logs showed "exporters::datadog: api.key is not set")

  3. Wrong environment tag in staging (secondary issue)
     - Problem: the letta-web logs showed 'dd.env":"production"' in staging
     - Cause: the missing backslash broke LETTA_ENVIRONMENT, which
       defaulted to prod
     - Fixed: the backslash fix ensures LETTA_ENVIRONMENT=dev is set

  Changes:
  - justfile: fixed the missing backslash on the LETTA_ENVIRONMENT line
  - .github/workflows/stage-web.yaml: added the DD_* vars to env
  - .github/workflows/stage-cloud-api.yaml: added the DD_* vars to env
  - .github/workflows/stage-core.yaml: added the DD_* vars to env and the
    Deploy step

  After this fix:
  - Web/cloud-api will send traces to the Datadog Agent via OTLP
  - The core OTEL collector will export traces to both ClickHouse and Datadog
  - All staging traces will show the env:dev tag (not env:production)

* fix: move OTEL config from prod helm to dev helm values

  Problem: the OTEL configuration was added to the production helm values
  files (helm/letta-web/values.yaml and helm/cloud-api/values.yaml), but
  those are for production deployments. Staging deployments use the dev
  helm values (helm/dev/<service>/values.yaml).

  Changes:
  - Removed the OTEL vars from helm/letta-web/values.yaml (prod)
  - Removed the OTEL vars from helm/cloud-api/values.yaml (prod)
  - Added the OTEL vars to helm/dev/letta-web/values.yaml (staging)
  - Added the OTEL vars to helm/dev/cloud-api/values.yaml (staging)

  The dev helm values now include:
    OTEL_ENABLED: "true"
    OTEL_SERVICE_NAME: "letta-web" or "cloud-api"
    OTEL_EXPORTER_OTLP_ENDPOINT: "http://datadog-agent.default.svc.cluster.local:4317"
    LETTA_ENVIRONMENT: "dev"

  Note: production deployments override these via workflow env vars, so
  the prod helm values don't need OTEL config. Dev/staging deployments use
  these helm values as defaults.
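  The normalization rule that the earlier commits describe for core
  (_normalize_environment_tag() in resource.py, written in Python) can be
  sketched as follows; this TypeScript version is only an illustration of
  the mapping the commit messages state, not the actual code:

  ```typescript
  // Sketch of the environment-tag normalization described above:
  // business logic keeps uppercase values (PRODUCTION, DEV), while the
  // OTEL resource attribute gets the short lowercase tag.
  export function normalizeEnvironmentTag(env: string): string {
    const upper = env.toUpperCase();
    if (upper === "PRODUCTION") return "prod";
    if (upper === "DEV" || upper === "STAGING") return "dev";
    return env.toLowerCase(); // e.g., CANARY → canary, LOCAL-TEST → local-test
  }
  ```

  Keeping this mapping in one function is what lets Datadog show a single
  env:prod tag even though core's settings carry "PRODUCTION".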
* remove generated doc

* secrets in dev

* totally unrelated changes to tf for runner sizing and scaling

* feat: add DD_ENV tags to staging helm for log correlation

  Problem: logs show 'dd.env":"production"' instead of 'dd.env":"dev"' in
  staging because Datadog's logger injection uses the DD_ENV, DD_SERVICE,
  and DD_VERSION environment variables for tagging.

  Changes:
  - Added DD_ENV, DD_SERVICE, DD_VERSION to helm/dev/letta-web/values.yaml
  - Added DD_ENV, DD_SERVICE, DD_VERSION to helm/dev/cloud-api/values.yaml

  Values:
    DD_ENV: "dev"
    DD_SERVICE: "letta-web" or "cloud-api"
    DD_VERSION: "dev"

  This ensures:
  - Logs show the correct env:dev tag in Datadog
  - Traces and logs are properly correlated
  - Consistent tagging across OTEL traces and DD logs

* feat: enable OTLP receiver in Datadog Agent configurations

  Added an OpenTelemetry Protocol (OTLP) receiver to the Datadog Agent for
  both the dev and prod environments to support distributed tracing from
  services using OpenTelemetry SDKs.

  Changes:
  - helm/dev/datadog/datadog-agent.yaml: added the otlp.receiver configuration
  - helm/datadog/datadog-agent.yaml: added the otlp.receiver configuration

  OTLP configuration:
    otlp:
      receiver:
        protocols:
          grpc:
            enabled: true
            endpoint: "0.0.0.0:4317"
          http:
            enabled: true
            endpoint: "0.0.0.0:4318"

  This enables:
  - Web/cloud-api services to send traces via OTLP (port 4317)
  - The core OTEL collector to export to Datadog via OTLP (port 4317)
  - An alternative HTTP endpoint for OTLP (port 4318)

  When applied, the Datadog Agent service will expose:
  - Port 4317/TCP - OTLP gRPC (for traces)
  - Port 4318/TCP - OTLP HTTP (for traces)
  - Port 8126/TCP - native Datadog APM (existing)
  - Port 8125/UDP - DogStatsD (existing)

  Apply with:
    kubectl apply -f helm/dev/datadog/datadog-agent.yaml  # staging
    kubectl apply -f helm/datadog/datadog-agent.yaml      # production

* feat: use git hash as DD_VERSION for all services

  Changed from static version strings to using the git commit hash as the
  version tag in Datadog APM for better version tracking and correlation.
  Changes:
  1. Workflows - set DD_VERSION to github.sha:
     - .github/workflows/stage-web.yaml: added DD_VERSION: ${{ github.sha }}
     - .github/workflows/stage-cloud-api.yaml: added DD_VERSION: ${{ github.sha }}
     - .github/workflows/stage-core.yaml: added DD_VERSION: ${{ github.sha }}
       (both the top-level env and the Deploy step env)

  2. justfile - pass DD_VERSION to helm:
     - deploy-web: added --set env.DD_VERSION=${DD_VERSION:-unknown}
     - deploy-cloud-api: added --set env.DD_VERSION=${DD_VERSION:-unknown}
     - deploy-core: added --set secrets.DD_VERSION=${DD_VERSION:-unknown}

  3. Helm dev values - removed the hardcoded version:
     - helm/dev/letta-web/values.yaml: removed DD_VERSION: "dev"
     - helm/dev/cloud-api/values.yaml: removed DD_VERSION: "dev"
     - Added comments noting that DD_VERSION is set via the workflow

  Result:
  - Traces in Datadog will show the version as the git commit SHA
    (e.g., "abc123def")
  - Traces can be correlated with specific deployments/commits
  - Consistent with the internal versioning strategy (git hash, not semver)
  - Defaults to "unknown" if DD_VERSION is not set

  Example trace tags after deployment:
    env:dev
    service:letta-web
    version:7eafc5b0c12345...

* feat: add DD_VERSION to production workflows

  Added DD_VERSION to the production deployment workflows for consistent
  version tracking across the staging and production environments.

  Changes:
  - .github/workflows/deploy-web.yml: added DD_VERSION: ${{ github.sha }}
  - .github/workflows/deploy-core.yml: added DD_VERSION: ${{ github.sha }}

  Note: deploy-cloud-api.yml doesn't have DD config yet; it will be added
  when cloud-api gets OTEL enabled in production.

  Context: this was partially flagged by bugbot - it noted that
  NEXT_PUBLIC_GIT_HASH was missing from prod, but that was incorrect
  (line 53 already has it). However, DD_VERSION was indeed missing and is
  needed for Datadog log correlation.
  Result:
  - Production logs will show a version tag matching the git commit SHA
  - Consistent with the staging configuration
  - Better trace/log correlation in Datadog APM

  Staging already has DD_VERSION (added in commit fb1a3eea0).

* feat: add DD tags to memgpt-server dev helm for APM correlation

  Problem: memgpt-server logs show up in Datadog, but traces don't appear
  properly in the APM UI because the DD_ENV, DD_SERVICE, and DD_SITE tags
  were missing. The service was using native Datadog agent instrumentation
  (via LETTA_TELEMETRY_ENABLE_DATADOG), but without proper unified service
  tagging, traces weren't being correlated correctly in the APM interface.

  Changes:
  - helm/dev/memgpt-server/values.yaml:
    - Added DD_ENV: "dev"
    - Added DD_SERVICE: "memgpt-server"
    - Added DD_SITE: "us5.datadoghq.com"
    - Added a comment that DD_VERSION comes from the workflow

  Existing configuration:
  - DD_VERSION already passed via stage-core.yaml (line 215) and the
    justfile (line 272)
  - DD_API_KEY already in the secretsProvider (line 194)
  - LETTA_TELEMETRY_ENABLE_DATADOG: "true" (enables the native DD agent)
  - LETTA_TELEMETRY_DATADOG_AGENT_HOST/PORT (routes to the DD cluster agent)

  Result: after redeployment, memgpt-server traces will show in Datadog
  APM with:
  - env:dev
  - service:memgpt-server
  - version:<git-hash>
  - Proper correlation with logs

* refactor: use image tag for DD_VERSION instead of separate env var

  Changed from passing DD_VERSION separately to deriving it from the
  image.tag that's already set (which contains the git hash).
  This is cleaner because:
  - The image tag is already set to the git hash via the TAG env var
  - Removes the redundant DD_VERSION from the workflows (6 locations)
  - Single source of truth for the version (the deployed image tag)
  - Simpler configuration

  Changes:

  Workflows (removed DD_VERSION):
  - .github/workflows/stage-web.yaml
  - .github/workflows/stage-cloud-api.yaml
  - .github/workflows/stage-core.yaml (2 locations)
  - .github/workflows/deploy-web.yml
  - .github/workflows/deploy-core.yml

  Justfile (use {{TAG}} instead of ${DD_VERSION}):
  - deploy-web: --set env.DD_VERSION={{TAG}}
  - deploy-cloud-api: --set env.DD_VERSION={{TAG}}
  - deploy-core: --set secrets.DD_VERSION={{TAG}}

  Helm values (updated comments):
  - helm/dev/letta-web/values.yaml
  - helm/dev/cloud-api/values.yaml
  - helm/dev/memgpt-server/values.yaml
  - Changed from "set via workflow" to "set from image.tag by justfile"

  Flow:
  1. The workflow sets TAG=${{ github.sha }}
  2. The workflow calls the justfile with the TAG env var
  3. The justfile sets image.tag={{TAG}} and DD_VERSION={{TAG}}
  4. Both use the same git hash value

  Example:
    image.tag: abc123def
    DD_VERSION: abc123def
  Both come from the TAG env var, set to github.sha.

* feat: add Datadog native tracer (dd-trace) to cloud-api for APM

  Problem: cloud-api traces weren't appearing in Datadog APM despite OTEL
  being configured. Investigation revealed that letta-web uses dd-trace
  (Datadog's native tracer) in addition to OTEL, and those traces show up
  perfectly.

  Analysis:
  - letta-web: uses BOTH OTEL + dd-trace → traces visible in APM ✓
  - cloud-api: uses ONLY OTEL → traces NOT visible in APM ✗

  Root cause: while OTEL *should* work, dd-trace provides better
  integration with Datadog's APM backend and is proven to work in
  production.

  Solution: add dd-trace initialization to cloud-api, matching letta-web's
  dual-tracing approach (OTEL + dd-trace).
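  The dd-trace bootstrap described here can be sketched as a guard plus an
  options object. shouldInitDdTrace and ddTraceOptions are illustrative
  names (not the actual instrument-otel.ts code); logInjection,
  runtimeMetrics, and profiling are the dd-trace init options named in
  this commit:

  ```typescript
  // Only attempt dd-trace initialization when DD_API_KEY is configured
  // (it is supplied via helm's secretsProvider in dev).
  export function shouldInitDdTrace(env: Record<string, string | undefined>): boolean {
    return Boolean(env.DD_API_KEY);
  }

  // Options of the shape passed to dd-trace's init().
  export const ddTraceOptions = {
    logInjection: true,   // inject trace IDs into log lines for correlation
    runtimeMetrics: true, // Node.js runtime metrics (event loop, GC, heap)
    profiling: true,      // continuous profiler
  };

  // The real code would then do, guarded by shouldInitDdTrace(process.env):
  //   require('dd-trace').init(ddTraceOptions);
  // wrapped in try/catch so a dd-trace failure falls back to OTEL-only tracing.
  ```

  Keeping the guard separate makes the "graceful fallback" testable without
  the dd-trace package installed.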
Changes:
- apps/cloud-api/src/instrument-otel.ts:
  - Added dd-trace initialization after OTEL setup
  - Checks for DD_API_KEY env var (already configured in helm)
  - Enables logInjection, runtimeMetrics, and profiling
  - Graceful fallback if dd-trace fails to initialize

Dependencies:
- dd-trace@^5.31.0 already available in root package.json

Configuration (already set in helm):
- DD_API_KEY: From secretsProvider ✓
- DD_ENV: "dev" ✓
- DD_SERVICE: "cloud-api" ✓
- DD_LOGS_INJECTION: From workflow ✓

Expected result: After deployment, cloud-api traces will appear in Datadog APM alongside letta-web and letta-server, with proper env:dev service:cloud-api tags.

* tweak vars in staging

* fix: initialize Datadog tracer for memgpt-server APM traces

Problem: memgpt-server (letta-server) shows up in Datadog APM with env:null instead of env:dev, and traces weren't being properly captured.

Root cause: The code was only initializing the Datadog Profiler (for CPU/memory profiling), but NOT the Tracer (for distributed tracing/APM).

Analysis:
- Profiler: Records performance metrics (CPU, memory) - WAS initialized ✓
- Tracer: Records distributed traces/spans for APM - NOT initialized ✗

The existing code (lines 248-256) did:

    from ddtrace.profiling import Profiler  # Only profiler!
    profiler = Profiler(...)
    profiler.start()
    # No tracer initialization!

This explains why:
- letta-server appears in Datadog with env:null (profiling data sent without proper tags)
- Traces don't show proper service/env correlation
- APM service map is incomplete

Solution: Initialize the Datadog tracer with ddtrace.patch_all() to:
1. Auto-instrument FastAPI, HTTP clients, database calls, etc.
2. Send proper distributed traces to Datadog APM
3. Use the DD_ENV, DD_SERVICE env vars already set in helm

Changes:
- apps/core/letta/server/rest_api/app.py:
  - Added import ddtrace
  - Added ddtrace.patch_all() to auto-instrument all libraries
  - Added logging for tracer initialization

Configuration (already set in helm):
- DD_ENV: "dev" ✓
- DD_SERVICE: "memgpt-server" ✓
- DD_SITE: "us5.datadoghq.com" ✓
- DD_VERSION: From image.tag ✓
- DD_AGENT_HOST/PORT: Set by code from settings ✓

Expected result: After redeployment, letta-server will:
- Show as env:dev (not env:null) in Datadog APM
- Send proper distributed traces with full context
- Appear correctly in service maps and trace explorer

* fix: add dd-trace dependency to cloud-api package.json

Problem: cloud-api Docker image doesn't include dd-trace, causing "Cannot find module 'dd-trace'" error at runtime.

Root cause: dd-trace is in root package.json but not in cloud-api's package.json, so it's not included in the Docker build.

Solution: Add dd-trace@^5.31.0 to cloud-api dependencies.

Changes:
- apps/cloud-api/package.json: Added dd-trace dependency

* fix: mark dd-trace as external in cloud-api esbuild config

Problem: esbuild fails when trying to bundle dd-trace because it attempts to bundle optional GraphQL plugin dependencies that aren't installed.

Error:
    Could not resolve "graphql/language/visitor"
    Could not resolve "graphql/language/printer"
    Could not resolve "graphql/utilities"

Root cause: dd-trace has optional plugins for various frameworks (GraphQL, MongoDB, etc.) that it loads conditionally at runtime. esbuild tries to statically analyze and bundle all requires, including these optional deps.

Solution: Add dd-trace to the externals list so it's loaded at runtime instead of being bundled. This is the standard approach for native modules and packages with optional dependencies.
Changes:
- apps/cloud-api/esbuild.config.js: Added 'dd-trace' to externals array

Result:
- Build succeeds ✓
- dd-trace loads at runtime with only the plugins it needs ✓
- No GraphQL dependency required ✓

* add dd-trace

* fix: increase cloud-api memory and make dd-trace profiling configurable

Problem: cloud-api pods crash-loop with out-of-memory errors when dd-trace profiling is enabled:

    FATAL ERROR: JavaScript heap out of memory
    current_heap_limit=268435456 (268MB in 512Mi total)

Root cause: dd-trace profiling is memory-intensive (50-100MB+ overhead) and the original 512Mi limit was too tight.

Solution: Two-part fix:
1. Increase memory limits: 512Mi → 1Gi (gives profiling room to breathe)
2. Make profiling configurable via DD_PROFILING_ENABLED env var

Changes:
- helm/dev/cloud-api/values.yaml:
  - resources.limits.memory: 512Mi → 1Gi
  - resources.requests.memory: 512Mi → 1Gi
  - Added DD_PROFILING_ENABLED: "true"
- apps/cloud-api/src/instrument-otel.ts:
  - Read DD_PROFILING_ENABLED env var
  - Pass to tracer.init({ profiling: profilingEnabled })
  - Log profiling status on initialization

Benefits:
✓ Profiling enabled by default (CPU/heap flame graphs in Datadog)
✓ Can disable via env var if needed (set to "false")
✓ More headroom prevents OOM crashes (1Gi vs 512Mi)
✓ Configurable per environment

Memory breakdown with profiling:
- App baseline: ~300-400MB
- dd-trace profiling: ~50-100MB
- Buffer/headroom: ~500MB
- Total: 1Gi (comfortable margin)
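The tracer/profiler bootstrap these commits describe can be sketched as follows. This is a hedged sketch, not the exact app.py code: `init_datadog` is a hypothetical helper, and it skips gracefully when ddtrace is not installed. The key ordering constraint is that the DD_* variables must be in the environment before ddtrace is imported, because ddtrace reads them at import time:

```python
import importlib.util
import os


def init_datadog(env: str, service: str, version: str) -> bool:
    """Sketch of the Datadog tracer + profiler bootstrap (hypothetical helper)."""
    # ddtrace reads DD_* at import time, so set them before importing it.
    os.environ.setdefault("DD_ENV", env)
    os.environ.setdefault("DD_SERVICE", service)
    os.environ.setdefault("DD_VERSION", version)
    # Profiling is opt-out per environment, mirroring DD_PROFILING_ENABLED above.
    profiling = os.getenv("DD_PROFILING_ENABLED", "true").lower() == "true"

    if importlib.util.find_spec("ddtrace") is None:
        return False  # ddtrace not installed; never fail app startup over telemetry

    import ddtrace

    ddtrace.patch_all()  # auto-instrument FastAPI, HTTP clients, DB drivers
    if profiling:
        from ddtrace.profiling import Profiler

        Profiler(env=env, service=service, version=version).start()
    return True
```

Calling `init_datadog("dev", "memgpt-server", image_tag)` at process start gives both the tracer (spans in APM) and the profiler (flame graphs), tagged consistently.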
763 lines
32 KiB
Python
import faulthandler
import importlib.util
import json
import logging
import os
import platform
import sys
import threading
from contextlib import asynccontextmanager
from functools import partial
from pathlib import Path
from typing import Optional

import uvicorn

# Enable Python fault handler to get stack traces on segfaults
faulthandler.enable()

from fastapi import FastAPI, Request
from fastapi.exceptions import RequestValidationError
from fastapi.responses import JSONResponse
from marshmallow import ValidationError
from sqlalchemy.exc import IntegrityError, OperationalError
from starlette.middleware.cors import CORSMiddleware

from letta.__init__ import __version__ as letta_version
from letta.agents.exceptions import IncompatibleAgentType
from letta.constants import ADMIN_PREFIX, API_PREFIX, OPENAI_API_PREFIX
from letta.errors import (
    AgentExportIdMappingError,
    AgentExportProcessingError,
    AgentFileImportError,
    AgentNotFoundForExportError,
    BedrockPermissionError,
    HandleNotFoundError,
    LettaAgentNotFoundError,
    LettaExpiredError,
    LettaInvalidArgumentError,
    LettaInvalidMCPSchemaError,
    LettaMCPConnectionError,
    LettaMCPTimeoutError,
    LettaServiceUnavailableError,
    LettaToolCreateError,
    LettaToolNameConflictError,
    LettaUnsupportedFileUploadError,
    LettaUserNotFoundError,
    LLMAuthenticationError,
    LLMError,
    LLMProviderOverloaded,
    LLMRateLimitError,
    LLMTimeoutError,
    PendingApprovalError,
)
from letta.helpers.pinecone_utils import get_pinecone_indices, should_use_pinecone, upsert_pinecone_indices
from letta.jobs.scheduler import start_scheduler_with_leader_election
from letta.log import get_logger
from letta.orm.errors import DatabaseTimeoutError, ForeignKeyConstraintViolationError, NoResultFound, UniqueConstraintViolationError
from letta.otel.tracing import get_trace_id
from letta.schemas.letta_message import create_letta_error_message_schema, create_letta_message_union_schema, create_letta_ping_schema
from letta.schemas.letta_message_content import (
    create_letta_assistant_message_content_union_schema,
    create_letta_message_content_union_schema,
    create_letta_user_message_content_union_schema,
)
from letta.server.constants import REST_DEFAULT_PORT
from letta.server.db import db_registry
from letta.server.global_exception_handler import setup_global_exception_handlers

# NOTE(charles): these are extra routes that are not part of v1 but we still need to mount to pass tests
from letta.server.rest_api.auth.index import setup_auth_router  # TODO: probably remove right?
from letta.server.rest_api.interface import StreamingServerInterface
from letta.server.rest_api.middleware import CheckPasswordMiddleware, LoggingMiddleware
from letta.server.rest_api.routers.v1 import ROUTERS as v1_routes
from letta.server.rest_api.routers.v1.organizations import router as organizations_router
from letta.server.rest_api.routers.v1.users import router as users_router  # TODO: decide on admin
from letta.server.rest_api.static_files import mount_static_files
from letta.server.rest_api.utils import SENTRY_ENABLED
from letta.server.server import SyncServer
from letta.settings import settings, telemetry_settings
from letta.validators import PATH_VALIDATORS, PRIMITIVE_ID_PATTERNS

if SENTRY_ENABLED:
    import sentry_sdk

IS_WINDOWS = platform.system() == "Windows"

# NOTE(charles): @ethan I had to add this to get the global at the bottom to work
interface: type = StreamingServerInterface
server = SyncServer(default_interface_factory=lambda: interface())
logger = get_logger(__name__)


def generate_openapi_schema(app: FastAPI):
    # Update the OpenAPI schema
    if not app.openapi_schema:
        app.openapi_schema = app.openapi()

    letta_docs = app.openapi_schema.copy()
    letta_docs["paths"] = {k: v for k, v in letta_docs["paths"].items() if not k.startswith("/openai")}
    letta_docs["info"]["title"] = "Letta API"
    letta_docs["components"]["schemas"]["LettaMessageUnion"] = create_letta_message_union_schema()
    letta_docs["components"]["schemas"]["LettaMessageContentUnion"] = create_letta_message_content_union_schema()
    letta_docs["components"]["schemas"]["LettaAssistantMessageContentUnion"] = create_letta_assistant_message_content_union_schema()
    letta_docs["components"]["schemas"]["LettaUserMessageContentUnion"] = create_letta_user_message_content_union_schema()
    letta_docs["components"]["schemas"]["LettaPing"] = create_letta_ping_schema()
    letta_docs["components"]["schemas"]["LettaErrorMessage"] = create_letta_error_message_schema()

    # Update the app's schema with our modified version
    app.openapi_schema = letta_docs

    for name, docs in [
        (
            "letta",
            letta_docs,
        ),
    ]:
        if settings.cors_origins:
            docs["servers"] = [{"url": host} for host in settings.cors_origins]
        Path(f"openapi_{name}.json").write_text(json.dumps(docs, indent=2))


# middleware that only allows requests to pass through if the user provides a password that's randomly generated and stored in memory
def generate_password():
    import secrets

    return secrets.token_urlsafe(16)


random_password = os.getenv("LETTA_SERVER_PASSWORD") or generate_password()


@asynccontextmanager
async def lifespan(app_: FastAPI):
    """
    FastAPI lifespan context manager with setup before the app starts pre-yield and on shutdown after the yield.
    """
    worker_id = os.getpid()

    # Initialize event loop watchdog
    try:
        import asyncio

        from letta.monitoring.event_loop_watchdog import start_watchdog

        loop = asyncio.get_running_loop()
        start_watchdog(loop, check_interval=5.0, timeout_threshold=15.0)
        logger.info(f"[Worker {worker_id}] Event loop watchdog started")
    except Exception as e:
        logger.warning(f"[Worker {worker_id}] Failed to start watchdog: {e}")

    # Pre-download NLTK data to avoid blocking during requests (fallback if Docker build failed)
    try:
        import asyncio

        import nltk

        logger.info(f"[Worker {worker_id}] Checking NLTK data availability...")
        await asyncio.to_thread(nltk.download, "punkt_tab", quiet=True)
        logger.info(f"[Worker {worker_id}] NLTK data ready")
    except Exception as e:
        logger.warning(f"[Worker {worker_id}] Failed to download NLTK data: {e}")

    # logger.info(f"[Worker {worker_id}] Starting lifespan initialization")
    # logger.info(f"[Worker {worker_id}] Initializing database connections")
    # db_registry.initialize_async()
    # logger.info(f"[Worker {worker_id}] Database connections initialized")

    if should_use_pinecone():
        if settings.upsert_pinecone_indices:
            logger.info(f"[Worker {worker_id}] Upserting pinecone indices: {get_pinecone_indices()}")
            await upsert_pinecone_indices()
            logger.info(f"[Worker {worker_id}] Upserted pinecone indices")
        else:
            logger.info(f"[Worker {worker_id}] Enabled pinecone")
    else:
        logger.info(f"[Worker {worker_id}] Disabled pinecone")

    logger.info(f"[Worker {worker_id}] Starting scheduler with leader election")
    global server
    await server.init_async()
    try:
        await start_scheduler_with_leader_election(server)
        logger.info(f"[Worker {worker_id}] Scheduler initialization completed")
    except Exception as e:
        logger.error(f"[Worker {worker_id}] Scheduler initialization failed: {e}", exc_info=True)
    logger.info(f"[Worker {worker_id}] Lifespan startup completed")
    yield

    # Cleanup on shutdown
    logger.info(f"[Worker {worker_id}] Starting lifespan shutdown")

    try:
        from letta.jobs.scheduler import shutdown_scheduler_and_release_lock

        await shutdown_scheduler_and_release_lock()
        logger.info(f"[Worker {worker_id}] Scheduler shutdown completed")
    except Exception as e:
        logger.error(f"[Worker {worker_id}] Scheduler shutdown failed: {e}", exc_info=True)

    # Cleanup SQLAlchemy instrumentation
    if not settings.disable_tracing and settings.sqlalchemy_tracing:
        try:
            from letta.otel.sqlalchemy_instrumentation_integration import teardown_letta_db_instrumentation

            teardown_letta_db_instrumentation()
            logger.info(f"[Worker {worker_id}] SQLAlchemy instrumentation shutdown completed")
        except Exception as e:
            logger.warning(f"[Worker {worker_id}] SQLAlchemy instrumentation shutdown failed: {e}")

    logger.info(f"[Worker {worker_id}] Lifespan shutdown completed")


def create_application() -> "FastAPI":
    """the application start routine"""
    # global server
    # server = SyncServer(default_interface_factory=lambda: interface())
    print(f"\n[[ Letta server // v{letta_version} ]]")

    if SENTRY_ENABLED:
        sentry_sdk.init(
            dsn=os.getenv("SENTRY_DSN"),
            environment=os.getenv("LETTA_ENVIRONMENT", "undefined"),
            traces_sample_rate=1.0,
            _experiments={
                "continuous_profiling_auto_start": True,
            },
        )

    if telemetry_settings.enable_datadog:
        try:
            dd_env = settings.environment or "development"
            print(f"▶ Initializing Datadog tracing (env={dd_env})")

            # Configure DD_* environment variables; they must be set before ddtrace is imported
            os.environ.setdefault("DD_ENV", dd_env)
            os.environ.setdefault("DD_SERVICE", telemetry_settings.datadog_service_name)
            os.environ.setdefault("DD_VERSION", letta_version)
            os.environ.setdefault("DD_AGENT_HOST", telemetry_settings.datadog_agent_host)
            os.environ.setdefault("DD_TRACE_AGENT_PORT", str(telemetry_settings.datadog_agent_port))
            os.environ.setdefault("DD_PROFILING_ENABLED", str(telemetry_settings.datadog_profiling_enabled).lower())
            os.environ.setdefault("DD_PROFILING_MEMORY_ENABLED", str(telemetry_settings.datadog_profiling_memory_enabled).lower())
            os.environ.setdefault("DD_PROFILING_HEAP_ENABLED", str(telemetry_settings.datadog_profiling_heap_enabled).lower())

            # Note: DD_LOGS_INJECTION, DD_APPSEC_ENABLED, DD_IAST_ENABLED, DD_APPSEC_SCA_ENABLED
            # are set via deployment configs and automatically picked up by ddtrace

            # Initialize Datadog tracer for APM (distributed tracing)
            import ddtrace

            ddtrace.patch_all()  # Auto-instrument FastAPI, HTTP, DB, etc.
            logger.info(
                f"Datadog tracer initialized: env={dd_env}, "
                f"service={telemetry_settings.datadog_service_name}, "
                f"agent={telemetry_settings.datadog_agent_host}:{telemetry_settings.datadog_agent_port}"
            )

            if telemetry_settings.datadog_profiling_enabled:
                from ddtrace.profiling import Profiler

                # Initialize and start profiler
                profiler = Profiler(
                    env=dd_env,
                    service=telemetry_settings.datadog_service_name,
                    version=letta_version,
                )
                profiler.start()

                # Log Git metadata for source code integration
                git_info = ""
                if telemetry_settings.datadog_git_commit_sha:
                    git_info = f", commit={telemetry_settings.datadog_git_commit_sha[:8]}"
                if telemetry_settings.datadog_git_repository_url:
                    git_info += f", repo={telemetry_settings.datadog_git_repository_url}"

                logger.info(
                    f"Datadog profiling enabled: env={dd_env}, "
                    f"service={telemetry_settings.datadog_service_name}, "
                    f"agent={telemetry_settings.datadog_agent_host}:{telemetry_settings.datadog_agent_port}{git_info}"
                )
        except Exception as e:
            logger.error(f"Failed to initialize Datadog tracing/profiling: {e}", exc_info=True)
            if SENTRY_ENABLED:
                sentry_sdk.capture_exception(e)
            # Don't fail application startup if Datadog initialization fails

    debug_mode = "--debug" in sys.argv
    app = FastAPI(
        swagger_ui_parameters={"docExpansion": "none"},
        # openapi_tags=TAGS_METADATA,
        title="Letta",
        summary="Create LLM agents with long-term memory and custom tools 📚🦙",
        version=letta_version,
        debug=debug_mode,  # if True, the stack trace will be printed in the response
        lifespan=lifespan,
    )

    # === Global Exception Handlers ===
    # Set up handlers for exceptions outside of request context (background tasks, threads, etc.)
    setup_global_exception_handlers()

    # === Exception Handlers ===
    # TODO (cliandy): move to separate file

    @app.exception_handler(Exception)
    async def generic_error_handler(request: Request, exc: Exception):
        # Log with structured context
        request_context = {
            "method": request.method,
            "url": str(request.url),
            "path": request.url.path,
        }

        # Extract user context if available
        user_context = {}
        if hasattr(request.state, "user_id"):
            user_context["user_id"] = request.state.user_id
        if hasattr(request.state, "org_id"):
            user_context["org_id"] = request.state.org_id

        logger.error(
            f"Unhandled error: {exc.__class__.__name__}: {str(exc)}",
            extra={
                "exception_type": exc.__class__.__name__,
                "exception_message": str(exc),
                "exception_module": exc.__class__.__module__,
                "request": request_context,
                "user": user_context,
            },
            exc_info=True,
        )

        if SENTRY_ENABLED:
            sentry_sdk.capture_exception(exc)

        return JSONResponse(
            status_code=500,
            content={
                "detail": "An unknown error occurred",
                # Only include error details in debug/development mode
                # "debug_info": str(exc) if settings.debug else None
            },
        )

    # Reasoning for this handler is that the default path validation logic returns a pretty gnarly error message
    # because of the uuid4 pattern. This handler rewrites the error message to be more user-friendly and less intimidating.
    @app.exception_handler(RequestValidationError)
    async def custom_request_validation_handler(request: Request, exc: RequestValidationError):
        """Generalize path `_id` validation messages and include example IDs.

        - Rewrites string pattern/length mismatches to "primitive-{uuid4}"
        - Preserves stringified `detail` and includes `trace_id`
        - Adds top-level `examples` from `PATH_VALIDATORS` for offending params
        """
        errors = exc.errors()
        examples_set: set[str] = set()
        content = {"trace_id": get_trace_id() or ""}
        for err in errors:
            fastapi_error_loc = err.get("loc", [])
            # only rewrite path param validation errors (should expand in future)
            if len(fastapi_error_loc) != 2 or fastapi_error_loc[0] != "path":
                continue

            # re-write the error message
            parameter_name = fastapi_error_loc[1]
            err_type = err.get("type")
            if (
                err_type in {"string_pattern_mismatch", "string_too_short", "string_too_long"}
                and isinstance(parameter_name, str)
                and parameter_name.endswith("_id")
            ):
                primitive = parameter_name[:-3]
                validator = PATH_VALIDATORS.get(primitive)
                if validator:
                    # simplify default error message
                    err["msg"] = f"String should match pattern '{primitive}-{{uuid4}}'"

                    # rewrite as string_pattern_mismatch even if the input length is too short or too long (more intuitive for the user)
                    if err_type in {"string_too_short", "string_too_long"}:
                        # FYI: the pattern is the same as the pattern in the validator object, but for some reason the validator
                        # object doesn't let you access it directly (unless you call into the pydantic layer)
                        err["ctx"] = {"pattern": PRIMITIVE_ID_PATTERNS[primitive].pattern}
                        err["type"] = "string_pattern_mismatch"

                    # collect examples for top-level examples field (prevents duplicates and allows for multiple examples for multiple primitives)
                    # e.g. if there are 2 malformed agent ids, the examples field will contain 2 examples for the agent primitive
                    # e.g. if there is a malformed agent id and a malformed folder id, the examples field will contain both examples, for both the agent and folder primitives
                    try:
                        exs = getattr(validator, "examples", None)
                        if exs:
                            for ex in exs:
                                examples_set.add(ex)
                        else:
                            examples_set.add(f"{primitive}-123e4567-e89b-42d3-8456-426614174000")
                    except Exception:
                        examples_set.add(f"{primitive}-123e4567-e89b-42d3-8456-426614174000")

        # Preserve current API contract: stringified list of errors
        content["detail"] = repr(errors)
        if examples_set:
            content["examples"] = sorted(examples_set)
        return JSONResponse(status_code=422, content=content)

    async def error_handler_with_code(request: Request, exc: Exception, code: int, detail: str | None = None):
        logger.error(f"{type(exc).__name__}", exc_info=exc)

        if not detail:
            detail = str(exc)
        return JSONResponse(
            status_code=code,
            content={"detail": detail},
        )

    _error_handler_400 = partial(error_handler_with_code, code=400)
    _error_handler_404 = partial(error_handler_with_code, code=404)
    _error_handler_404_agent = partial(_error_handler_404, detail="Agent not found")
    _error_handler_404_user = partial(_error_handler_404, detail="User not found")
    _error_handler_408 = partial(error_handler_with_code, code=408)
    _error_handler_409 = partial(error_handler_with_code, code=409)
    _error_handler_410 = partial(error_handler_with_code, code=410)
    _error_handler_415 = partial(error_handler_with_code, code=415)
    _error_handler_422 = partial(error_handler_with_code, code=422)
    _error_handler_500 = partial(error_handler_with_code, code=500)
    _error_handler_503 = partial(error_handler_with_code, code=503)

    # 400 Bad Request errors
    app.add_exception_handler(LettaInvalidArgumentError, _error_handler_400)
    app.add_exception_handler(LettaToolCreateError, _error_handler_400)
    app.add_exception_handler(LettaToolNameConflictError, _error_handler_400)
    app.add_exception_handler(AgentFileImportError, _error_handler_400)
    app.add_exception_handler(ValueError, _error_handler_400)

    # 404 Not Found errors
    app.add_exception_handler(NoResultFound, _error_handler_404)
    app.add_exception_handler(LettaAgentNotFoundError, _error_handler_404_agent)
    app.add_exception_handler(LettaUserNotFoundError, _error_handler_404_user)
    app.add_exception_handler(AgentNotFoundForExportError, _error_handler_404)
    app.add_exception_handler(HandleNotFoundError, _error_handler_404)

    # 410 Expired errors
    app.add_exception_handler(LettaExpiredError, _error_handler_410)

    # 408 Timeout errors
    app.add_exception_handler(LettaMCPTimeoutError, _error_handler_408)
    app.add_exception_handler(LettaInvalidMCPSchemaError, _error_handler_400)

    # 409 Conflict errors
    app.add_exception_handler(ForeignKeyConstraintViolationError, _error_handler_409)
    app.add_exception_handler(UniqueConstraintViolationError, _error_handler_409)
    app.add_exception_handler(IntegrityError, _error_handler_409)
    app.add_exception_handler(PendingApprovalError, _error_handler_409)

    # 415 Unsupported Media Type errors
    app.add_exception_handler(LettaUnsupportedFileUploadError, _error_handler_415)

    # 422 Validation errors
    app.add_exception_handler(ValidationError, _error_handler_422)

    # 500 Internal Server errors
    app.add_exception_handler(AgentExportIdMappingError, _error_handler_500)
    app.add_exception_handler(AgentExportProcessingError, _error_handler_500)

    # 503 Service Unavailable errors
    app.add_exception_handler(OperationalError, _error_handler_503)
    app.add_exception_handler(LettaServiceUnavailableError, _error_handler_503)
    app.add_exception_handler(LLMProviderOverloaded, _error_handler_503)

    @app.exception_handler(IncompatibleAgentType)
    async def handle_incompatible_agent_type(request: Request, exc: IncompatibleAgentType):
        logger.error("Incompatible agent types. Expected: %s, Actual: %s", exc.expected_type, exc.actual_type)
        if SENTRY_ENABLED:
            sentry_sdk.capture_exception(exc)

        return JSONResponse(
            status_code=400,
            content={
                "detail": str(exc),
                "expected_type": exc.expected_type,
                "actual_type": exc.actual_type,
            },
        )

    @app.exception_handler(DatabaseTimeoutError)
    async def database_timeout_error_handler(request: Request, exc: DatabaseTimeoutError):
        logger.error(f"Timeout occurred: {exc}. Original exception: {exc.original_exception}")
        if SENTRY_ENABLED:
            sentry_sdk.capture_exception(exc)

        return JSONResponse(
            status_code=503,
            content={"detail": "The database is temporarily unavailable. Please try again later."},
        )

    @app.exception_handler(BedrockPermissionError)
    async def bedrock_permission_error_handler(request, exc: BedrockPermissionError):
        logger.error("Bedrock permission denied.")

        return JSONResponse(
            status_code=403,
            content={
                "error": {
                    "type": "bedrock_permission_denied",
                    "message": "Unable to access the required AI model. Please check your Bedrock permissions or contact support.",
                    "detail": str(exc),
                }
            },
        )

    @app.exception_handler(LLMTimeoutError)
    async def llm_timeout_error_handler(request: Request, exc: LLMTimeoutError):
        return JSONResponse(
            status_code=504,
            content={
                "error": {
                    "type": "llm_timeout",
                    "message": "The LLM request timed out. Please try again.",
                    "detail": str(exc),
                }
            },
        )

    @app.exception_handler(LLMRateLimitError)
    async def llm_rate_limit_error_handler(request: Request, exc: LLMRateLimitError):
        return JSONResponse(
            status_code=429,
            content={
                "error": {
                    "type": "llm_rate_limit",
                    "message": "Rate limit exceeded for LLM model provider. Please wait before making another request.",
                    "detail": str(exc),
                }
            },
        )

    @app.exception_handler(LLMAuthenticationError)
    async def llm_auth_error_handler(request: Request, exc: LLMAuthenticationError):
        return JSONResponse(
            status_code=401,
            content={
                "error": {
                    "type": "llm_authentication",
                    "message": "Authentication failed with the LLM model provider.",
                    "detail": str(exc),
                }
            },
        )

    @app.exception_handler(LettaMCPConnectionError)
    async def mcp_connection_error_handler(request: Request, exc: LettaMCPConnectionError):
        return JSONResponse(
            status_code=502,
            content={
                "error": {
                    "type": "mcp_connection_error",
                    "message": "Failed to connect to MCP server.",
                    "detail": str(exc),
                }
            },
        )

    @app.exception_handler(LLMError)
    async def llm_error_handler(request: Request, exc: LLMError):
        return JSONResponse(
            status_code=502,
            content={
                "error": {
                    "type": "llm_error",
                    "message": "An error occurred with the LLM request.",
                    "detail": str(exc),
                }
            },
        )

    settings.cors_origins.append("https://app.letta.com")

    if (os.getenv("LETTA_SERVER_SECURE") == "true") or "--secure" in sys.argv:
        print(f"▶ Using secure mode with password: {random_password}")
        app.add_middleware(CheckPasswordMiddleware, password=random_password)

    # Add reverse proxy middleware to handle X-Forwarded-* headers
    # app.add_middleware(ReverseProxyMiddleware, base_path=settings.server_base_path)

    # Add unified logging middleware - enriches log context and logs exceptions
    app.add_middleware(LoggingMiddleware)

    app.add_middleware(
        CORSMiddleware,
        allow_origins=settings.cors_origins,
        allow_credentials=True,
        allow_methods=["*"],
        allow_headers=["*"],
    )

    # Set up OpenTelemetry tracing
    otlp_endpoint = settings.otel_exporter_otlp_endpoint
    if otlp_endpoint and not settings.disable_tracing:
        print(f"▶ Using OTLP tracing with endpoint: {otlp_endpoint}")
        env_name_suffix = os.getenv("ENV_NAME")
        service_name = f"letta-server-{env_name_suffix.lower()}" if env_name_suffix else "letta-server"
        from letta.otel.metrics import setup_metrics
        from letta.otel.tracing import setup_tracing

        setup_tracing(
            endpoint=otlp_endpoint,
            app=app,
            service_name=service_name,
        )
        setup_metrics(endpoint=otlp_endpoint, app=app, service_name=service_name)

        # Set up SQLAlchemy synchronous operation instrumentation
        if settings.sqlalchemy_tracing:
            from letta.otel.sqlalchemy_instrumentation_integration import setup_letta_db_instrumentation

            try:
                setup_letta_db_instrumentation(
                    enable_joined_monitoring=True,  # Monitor joined loading operations
                    sql_truncate_length=1500,  # Longer SQL statements for debugging
                )
                print("▶ SQLAlchemy synchronous operation instrumentation enabled")
            except Exception as e:
                logger.warning(f"Failed to setup SQLAlchemy instrumentation: {e}")
                # Don't fail startup if instrumentation fails

        # Ensure our validation handler overrides tracing's handler when tracing is enabled
        app.add_exception_handler(RequestValidationError, custom_request_validation_handler)

    for route in v1_routes:
        app.include_router(route, prefix=API_PREFIX)
        # this gives undocumented routes for "latest" and bare api calls.
        # we should always tie this to the newest version of the api.
        # app.include_router(route, prefix="", include_in_schema=False)
        app.include_router(route, prefix="/latest", include_in_schema=False)

    # NOTE: ethan these are the extra routes
    # TODO(ethan) remove

    # admin/users
    app.include_router(users_router, prefix=ADMIN_PREFIX)
    app.include_router(organizations_router, prefix=ADMIN_PREFIX)

    # /api/auth endpoints
    app.include_router(setup_auth_router(server, interface, random_password), prefix=API_PREFIX)

    # / static files
    mount_static_files(app)

    no_generation = "--no-generation" in sys.argv

    # Generate OpenAPI schema after all routes are mounted
    if not no_generation:
        generate_openapi_schema(app)

    return app


app = create_application()


def start_server(
|
|
port: Optional[int] = None,
|
|
host: Optional[str] = None,
|
|
debug: bool = False,
|
|
reload: bool = False,
|
|
):
|
|
"""Convenience method to start the server from within Python"""
|
|
if debug:
|
|
from letta.server.server import logger as server_logger
|
|
|
|
# Set the logging level
|
|
server_logger.setLevel(logging.DEBUG)
|
|
# Create a StreamHandler
|
|
stream_handler = logging.StreamHandler()
|
|
# Set the formatter (optional)
|
|
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
|
|
stream_handler.setFormatter(formatter)
|
|
# Add the handler to the logger
|
|
server_logger.addHandler(stream_handler)
|
|
|
|
# Experimental UV Loop Support
|
|
try:
|
|
if settings.use_uvloop:
|
|
print("Running server asyncio loop on uvloop...")
|
|
import asyncio
|
|
|
|
import uvloop
|
|
|
|
asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
|
|
except:
|
|
pass
|
|
|
|

    if (os.getenv("LOCAL_HTTPS") == "true") or "--localhttps" in sys.argv:
        print(f"▶ Server running at: https://{host or 'localhost'}:{port or REST_DEFAULT_PORT}")
        print("▶ View using ADE at: https://app.letta.com/development-servers/local/dashboard\n")
        if importlib.util.find_spec("granian") is not None and settings.use_granian:
            from granian import Granian

            # Experimental Granian engine
            Granian(
                target="letta.server.rest_api.app:app",
                # factory=True,
                interface="asgi",
                address=host or "127.0.0.1",  # Note: the Granian address must be an IP address
                port=port or REST_DEFAULT_PORT,
                workers=settings.uvicorn_workers,
                # runtime_blocking_threads=
                # runtime_threads=
                reload=reload or settings.uvicorn_reload,
                reload_paths=["letta/"],
                reload_ignore_worker_failure=True,
                reload_tick=4000,  # set to 4s to prevent crashing on weird state
                # log_level="info"
                ssl_keyfile="certs/localhost-key.pem",
                ssl_cert="certs/localhost.pem",
            ).serve()
        else:
            uvicorn.run(
                "letta.server.rest_api.app:app",
                host=host or "localhost",
                port=port or REST_DEFAULT_PORT,
                workers=settings.uvicorn_workers,
                reload=reload or settings.uvicorn_reload,
                timeout_keep_alive=settings.uvicorn_timeout_keep_alive,
                ssl_keyfile="certs/localhost-key.pem",
                ssl_certfile="certs/localhost.pem",
            )

    else:
        if IS_WINDOWS:
            # Windows consoles don't render the fancy unicode characters
            print(f"Server running at: http://{host or 'localhost'}:{port or REST_DEFAULT_PORT}")
            print("View using ADE at: https://app.letta.com/development-servers/local/dashboard\n")
        else:
            print(f"▶ Server running at: http://{host or 'localhost'}:{port or REST_DEFAULT_PORT}")
            print("▶ View using ADE at: https://app.letta.com/development-servers/local/dashboard\n")

        if importlib.util.find_spec("granian") is not None and settings.use_granian:
            # Experimental Granian engine
            from granian import Granian

            Granian(
                target="letta.server.rest_api.app:app",
                # factory=True,
                interface="asgi",
                address=host or "127.0.0.1",  # Note: the Granian address must be an IP address
                port=port or REST_DEFAULT_PORT,
                workers=settings.uvicorn_workers,
                # runtime_blocking_threads=
                # runtime_threads=
                reload=reload or settings.uvicorn_reload,
                reload_paths=["letta/"],
                reload_ignore_worker_failure=True,
                reload_tick=4000,  # set to 4s to prevent crashing on weird state
                # log_level="info"
            ).serve()
        else:
            uvicorn.run(
                "letta.server.rest_api.app:app",
                host=host or "localhost",
                port=port or REST_DEFAULT_PORT,
                workers=settings.uvicorn_workers,
                reload=reload or settings.uvicorn_reload,
                timeout_keep_alive=settings.uvicorn_timeout_keep_alive,
            )
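
# Usage sketch (not part of the original module; the argument values below are
# illustrative assumptions, not requirements): besides being launched via the
# CLI, `start_server` can be invoked directly from Python, e.g. to run a
# debug instance with auto-reload on the default port:
#
#     from letta.server.rest_api.app import start_server
#     start_server(debug=True, reload=True)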