* fix(core): catch bare openai.APIError in handle_llm_error fallthrough
An openai.APIError raised during streaming (e.g. OpenRouter credit
exhaustion) is not an APIStatusError, so it skipped the catch-all
at the end and fell through to LLMError("Unhandled"). Now bare
APIErrors that aren't context window overflows are mapped to
LLMBadRequestError.
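A rough sketch of the new fallthrough, assuming simplified stand-ins for handle_llm_error and the Letta exception classes (the overflow check and messages are illustrative):
```python
import openai

class LLMError(Exception): ...
class LLMBadRequestError(LLMError): ...
class ContextWindowExceededError(LLMBadRequestError): ...

def is_context_window_overflow(message: str) -> bool:
    # Illustrative check; the real helper matches provider-specific phrases.
    lowered = message.lower()
    return "context window" in lowered or "maximum context length" in lowered

def handle_llm_error(e: Exception) -> LLMError:
    # Status-bearing errors (401, 429, ...) are assumed to be handled before this point.
    if isinstance(e, openai.APIStatusError):
        return LLMError(f"Unhandled status error: {e}")
    # A bare APIError (e.g. raised mid-stream on credit exhaustion) is not an
    # APIStatusError, so it needs its own catch-all instead of falling through.
    if isinstance(e, openai.APIError):
        message = str(e)
        if is_context_window_overflow(message):
            return ContextWindowExceededError(message)
        return LLMBadRequestError(message)
    return LLMError(f"Unhandled error: {e}")
```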
Datadog: https://us5.datadoghq.com/error-tracking/issue/7a2c356c-0849-11f1-be66-da7ad0900000
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* feat(core): add LLMInsufficientCreditsError for BYOK credit exhaustion
Adds dedicated error type for insufficient credits/quota across all
providers (OpenAI, Anthropic, Google). Returns HTTP 402 with
BYOK-aware messaging instead of generic 400.
- New LLMInsufficientCreditsError class and PAYMENT_REQUIRED ErrorCode
- is_insufficient_credits_message() helper detecting credit/quota strings (sketched below)
- All 3 provider clients detect 402 status + credit keywords
- FastAPI handler returns 402 with "your API key" vs generic messaging
- 5 new parametrized tests covering OpenRouter, OpenAI, and negative case
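A rough sketch of the detection helper and the 402 mapping; the keyword list and the map_provider_error wrapper are illustrative, not the exact code added:
```python
class LLMInsufficientCreditsError(Exception):
    """Raised when the provider reports exhausted credits/quota (mapped to HTTP 402)."""

_CREDIT_KEYWORDS = (
    "insufficient credits",
    "insufficient_quota",
    "exceeded your current quota",
    "credit balance is too low",
)

def is_insufficient_credits_message(message: str) -> bool:
    # Case-insensitive substring match against known credit/quota phrases.
    lowered = message.lower()
    return any(keyword in lowered for keyword in _CREDIT_KEYWORDS)

def map_provider_error(status_code: int | None, message: str) -> Exception:
    # A 402 from the provider, or a credit/quota phrase in the body, becomes the dedicated error.
    if status_code == 402 or is_insufficient_credits_message(message):
        return LLMInsufficientCreditsError(message)
    return Exception(message)
```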
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
---------
Co-authored-by: Letta <noreply@letta.com>
* test(core): strengthen git-memory system prompt stability integration coverage
Switch git-memory HTTP integration tests to OpenAI model handles and add assertions that system prompt content remains stable after normal turns and direct block value updates until explicit recompilation or reset.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): preserve git-memory formatting and enforce lock conflicts
Preserve existing markdown frontmatter formatting on block updates while still ensuring required metadata fields exist, and make post-push git sync propagate memory-repo lock conflicts as 409 responses. Also enable slash-containing core-memory block labels in route params and add regression coverage.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(memfs): fail closed on memory repo lock contention
Make memfs git commits fail closed when the per-agent Redis lock cannot be acquired, return 409 MEMORY_REPO_BUSY from the memfs files write API, and map that 409 back to core MemoryRepoBusyError so API callers receive consistent busy conflicts.
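A sketch of the fail-closed locking, assuming a redis-py lock helper; MemoryRepoBusyError stands in for the real class, and the key format and timeout are made up for illustration:
```python
from contextlib import contextmanager

import redis

class MemoryRepoBusyError(Exception):
    """Surfaced to API callers as 409 MEMORY_REPO_BUSY."""

@contextmanager
def agent_memory_repo_lock(client: redis.Redis, agent_id: str, timeout_s: int = 30):
    # One lock per agent; blocking_timeout=0 means "fail immediately if already held".
    lock = client.lock(f"memfs:repo-lock:{agent_id}", timeout=timeout_s, blocking_timeout=0)
    if not lock.acquire():
        # Fail closed: never commit without holding the lock.
        raise MemoryRepoBusyError(f"memory repo for agent {agent_id} is busy")
    try:
        yield
    finally:
        lock.release()
```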
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* chore(core): minimize git-memory fix scope to memfs lock and frontmatter paths
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* chore: drop unrelated changes and keep memfs-focused scope
Revert branch-only changes that are not required for the memfs lock contention and frontmatter-preservation fix so the PR contains only issue-relevant files.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(memfs): lock push sync path and improve nested sync diagnostics
Serialize memfs push-to-GCS sync with the same per-agent Redis lock key used by API commits, and add targeted post-push nested-block diagnostics plus a focused nested-label sync regression test for _sync_after_push.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
---------
Co-authored-by: Letta <noreply@letta.com>
Google genai.errors.ClientError with code 400 was being caught and
wrapped as LLMBadRequestError but returned to clients as 502 because
no dedicated FastAPI exception handler existed for LLMBadRequestError.
- Add LLMBadRequestError exception handler in app.py returning HTTP 400
- Fix ErrorCode on Google 400 bad requests from INTERNAL_SERVER_ERROR
to INVALID_ARGUMENT
- Route Google API errors through handle_llm_error in stream_async path
Datadog: https://us5.datadoghq.com/error-tracking/issue/4eb3ff3c-d937-11f0-8177-da7ad0900000
🤖 Generated with [Letta Code](https://letta.com)
Co-authored-by: Letta <noreply@letta.com>
Re-apply changes on top of latest main to resolve merge conflicts.
- Add DatabaseLockNotAvailableError custom exception in orm/errors.py
- Catch asyncpg LockNotAvailableError and pgcode 55P03 in _handle_dbapi_error
- Register FastAPI exception handler returning 409 with Retry-After header
🐾 Generated with [Letta Code](https://letta.com)
Co-authored-by: letta-code <248085862+letta-code@users.noreply.github.com>
Co-authored-by: Letta <noreply@letta.com>
* fix(core): handle UTF-8 surrogate characters in API responses
LLM responses or user input can contain surrogate characters (U+D800-U+DFFF),
which are valid in Python strings but illegal in UTF-8. ORJSONResponse rejects
them with "str is not valid UTF-8: surrogates not allowed". Add
SafeORJSONResponse that catches the TypeError and strips surrogates before
retrying serialization.
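A minimal sketch of the retry-on-TypeError approach (the follow-up commit below swaps the inline sanitizer for the shared json_helpers one); the recursive sanitizer here is illustrative:
```python
import re

import orjson
from fastapi.responses import ORJSONResponse

# Matches lone UTF-16 surrogate code points, which orjson refuses to encode.
_SURROGATES = re.compile("[\ud800-\udfff]")

def sanitize_unicode_surrogates(value):
    # Recursively strip surrogates from strings inside nested containers.
    if isinstance(value, str):
        return _SURROGATES.sub("", value)
    if isinstance(value, list):
        return [sanitize_unicode_surrogates(v) for v in value]
    if isinstance(value, dict):
        return {sanitize_unicode_surrogates(k): sanitize_unicode_surrogates(v) for k, v in value.items()}
    return value

class SafeORJSONResponse(ORJSONResponse):
    def render(self, content) -> bytes:
        try:
            return super().render(content)
        except TypeError:
            # "str is not valid UTF-8: surrogates not allowed" -- strip and retry once.
            return orjson.dumps(sanitize_unicode_surrogates(content))
```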
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* refactor: reuse sanitize_unicode_surrogates from json_helpers
Replace the inline _sanitize_surrogates function with the existing
sanitize_unicode_surrogates helper from letta.helpers.json_helpers,
which is already used across all LLM clients.
Co-authored-by: Kian Jones <kianjones9@users.noreply.github.com>
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
---------
Co-authored-by: Letta <noreply@letta.com>
Co-authored-by: letta-code <248085862+letta-code@users.noreply.github.com>
* feat(core): add git-backed memory repos and block manager
Introduce a GCS-backed git repository per agent as the source of truth for core
memory blocks. Add a GitEnabledBlockManager that writes block updates to git and
syncs values back into Postgres as a cache.
Default newly-created memory repos to the `main` branch.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* feat(core): serve memory repos over git smart HTTP
Run dulwich's WSGI HTTPGitApplication on a local sidecar port and proxy
/v1/git/* through FastAPI to support git clone/fetch/push directly against
GCS-backed memory repos.
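Roughly how a dulwich smart-HTTP sidecar can be stood up; the port, the single-repo backend, and serving via wsgiref are assumptions about the setup:
```python
from wsgiref.simple_server import make_server

from dulwich.repo import Repo
from dulwich.server import DictBackend
from dulwich.web import HTTPGitApplication

def serve_memory_repo(repo_path: str, port: int = 8418) -> None:
    # Expose a single on-disk repo at "/" over git smart HTTP.
    backend = DictBackend({"/": Repo(repo_path)})
    app = HTTPGitApplication(backend)
    # FastAPI proxies /v1/git/* to this local port so clients can clone/fetch/push.
    make_server("127.0.0.1", port, app).serve_forever()
```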
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): create memory repos on demand and stabilize git HTTP
- Ensure MemoryRepoManager creates the git repo on first write (instead of 500ing)
and avoids rewriting history by only auto-creating on FileNotFoundError.
- Simplify dulwich-thread async execution and auto-create empty repos on first
git clone.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): make dulwich optional for CI installs
Guard dulwich imports in the git smart HTTP router so the core server can boot
(and CI tests can run) without installing the memory-repo extra.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): guard git HTTP WSGI init when dulwich missing
Avoid instantiating dulwich's HTTPGitApplication at import time when dulwich
isn't installed (common in CI installs).
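A sketch of the guard pattern, deferring instantiation until the git router is actually used; names besides HTTPGitApplication are illustrative:
```python
try:
    from dulwich.web import HTTPGitApplication
except ImportError:  # dulwich extra not installed (common in CI installs)
    HTTPGitApplication = None

_git_app = None

def get_git_wsgi_app():
    # Instantiate lazily so importing the router never requires dulwich.
    global _git_app
    if HTTPGitApplication is None:
        raise RuntimeError("git-state extra is not installed; git smart HTTP is unavailable")
    if _git_app is None:
        from dulwich.server import DictBackend
        _git_app = HTTPGitApplication(DictBackend({}))
    return _git_app
```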
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): avoid masking send_message errors in finally
Initialize `result` before the agent loop so error paths (e.g. approval
validation) don't raise UnboundLocalError in the run-tracking finally block.
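The shape of the fix, with hypothetical names for the step loop and run tracker:
```python
async def run_agent_step(agent, run_tracker):
    result = None  # Initialize up front so the finally block never hits UnboundLocalError.
    try:
        # Approval validation (and other pre-loop checks) may raise before result is assigned.
        agent.validate_approvals()
        result = await agent.step()
        return result
    finally:
        # Runs on both success and error paths; result may legitimately still be None here.
        run_tracker.record(result)
```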
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): stop event loop watchdog on FastAPI shutdown
Ensure the EventLoopWatchdog thread is stopped during FastAPI lifespan
shutdown to avoid daemon threads logging during interpreter teardown (seen in CI
unit tests).
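A sketch of wiring the watchdog into the FastAPI lifespan; this EventLoopWatchdog is a stand-in for the real helper:
```python
import threading
from contextlib import asynccontextmanager

from fastapi import FastAPI

class EventLoopWatchdog:
    # Stand-in for the real watchdog: a daemon thread that periodically checks loop health.
    def __init__(self):
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        while not self._stop.wait(timeout=1.0):
            pass  # the real implementation measures event-loop responsiveness here

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join(timeout=5.0)

@asynccontextmanager
async def lifespan(app: FastAPI):
    watchdog = EventLoopWatchdog()
    watchdog.start()
    try:
        yield
    finally:
        # Stop the daemon thread here so it can't log during interpreter teardown.
        watchdog.stop()

app = FastAPI(lifespan=lifespan)
```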
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* chore(core): remove send_*_message_to_agent from SyncServer
Drop send_message_to_agent and send_group_message_to_agent from SyncServer and
route internal fire-and-forget messaging through send_messages helpers instead.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): backfill git memory repo when tag added
When an agent is updated to include the git-memory-enabled tag, ensure the
git-backed memory repo is created and initialized from the agent's current
blocks. Also support configuring the memory repo object store via
LETTA_OBJECT_STORE_URI.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): preserve block tags on git-enabled updates
When updating a block for a git-memory-enabled agent, keep block tags in sync
with PostgreSQL (tags are not currently stored in the git repo).
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* chore(core): remove git-state legacy shims
- Rename optional dependency extra from memory-repo to git-state
- Drop legacy object-store env aliases and unused region config
- Simplify memory repo metadata to a single canonical format
- Remove unused repo-cache invalidation helper
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): keep PR scope for git-backed blocks
- Revert unrelated change in fire-and-forget multi-agent send helper
- Route agent block updates-by-label through injected block manager only when needed
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
---------
Co-authored-by: Letta <noreply@letta.com>
* feat: add non-streaming option for conversation messages
- Add ConversationMessageRequest with stream=True default (backwards compatible)
- stream=true (default): SSE streaming via StreamingService
- stream=false: JSON response via AgentLoop.load().step()
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* chore: regenerate API schema for ConversationMessageRequest
* feat: add direct ClickHouse storage for raw LLM traces
Adds ability to store raw LLM request/response payloads directly in ClickHouse,
bypassing OTEL span attribute size limits. This enables debugging and analytics
on large LLM payloads (>10MB system prompts, large tool schemas, etc.).
New files:
- letta/schemas/llm_raw_trace.py: Pydantic schema with ClickHouse row helper
- letta/services/llm_raw_trace_writer.py: Async batching writer (fire-and-forget)
- letta/services/llm_raw_trace_reader.py: Reader with query methods
- scripts/sql/clickhouse/llm_raw_traces.ddl: Production table DDL
- scripts/sql/clickhouse/llm_raw_traces_local.ddl: Local dev DDL
- apps/core/clickhouse-init.sql: Local dev initialization
Modified:
- letta/settings.py: Added 4 settings (store_llm_raw_traces, ttl, batch_size, flush_interval)
- letta/llm_api/llm_client_base.py: Integration into request_async_with_telemetry
- compose.yaml: Added ClickHouse service for local dev
- justfile: Added clickhouse, clickhouse-cli, clickhouse-traces commands
Feature disabled by default (LETTA_STORE_LLM_RAW_TRACES=false).
Uses ZSTD(3) compression for 10-30x reduction on JSON payloads.
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: address code review feedback for LLM raw traces
Fixes based on code review feedback:
1. Fix ClickHouse endpoint parsing - default to secure=False for raw host:port
inputs (was defaulting to HTTPS which breaks local dev)
2. Make raw trace writes truly fire-and-forget - use asyncio.create_task()
instead of awaiting, so JSON serialization doesn't block request path
3. Add bounded queue (maxsize=10000) - prevents unbounded memory growth
under load. Drops traces with warning if queue is full.
4. Fix deprecated asyncio usage - get_running_loop() instead of get_event_loop()
5. Add org_id fallback - use _telemetry_org_id if actor doesn't have it
6. Remove unused imports - json import in reader
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: add missing asyncio import and simplify JSON serialization
- Add missing 'import asyncio' that was causing 'name asyncio is not defined' error
- Remove unnecessary clean_double_escapes() function - the JSON is stored correctly,
the clickhouse-client CLI was just adding extra escaping when displaying
- Update just clickhouse-trace to use Python client for correct JSON output
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* test: add clickhouse raw trace integration test
* test: simplify clickhouse trace assertions
* refactor: centralize usage parsing and stream error traces
Use per-client usage helpers for raw trace extraction and ensure streaming errors log requests with error metadata.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* test: exercise provider usage parsing live
Make live OpenAI/Anthropic/Gemini requests with credential gating and validate Anthropic cache usage mapping when present.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* test: fix usage parsing tests to pass
- Use GoogleAIClient with GEMINI_API_KEY instead of GoogleVertexClient
- Update model to gemini-2.0-flash (1.5-flash deprecated in v1beta)
- Add tools=[] for Gemini/Anthropic build_request_data
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* refactor: extract_usage_statistics returns LettaUsageStatistics
Standardize on LettaUsageStatistics as the canonical usage format returned by client helpers. Inline UsageStatistics construction for ChatCompletionResponse where needed.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* feat: add is_byok and llm_config_json columns to ClickHouse traces
Extend llm_raw_traces table with:
- is_byok (UInt8): Track BYOK vs base provider usage for billing analytics
- llm_config_json (String, ZSTD): Store full LLM config for debugging and analysis
This enables queries like:
- BYOK usage breakdown by provider/model
- Config parameter analysis (temperature, max_tokens, etc.)
- Debugging specific request configurations
* feat: add tests for error traces, llm_config_json, and cache tokens
- Update llm_raw_trace_reader.py to query new columns (is_byok,
cached_input_tokens, cache_write_tokens, reasoning_tokens, llm_config_json)
- Add test_error_trace_stored_in_clickhouse to verify error fields
- Add test_cache_tokens_stored_for_anthropic to verify cache token storage
- Update existing tests to verify llm_config_json is stored correctly
- Make llm_config required in log_provider_trace_async()
- Simplify provider extraction to use provider_name directly
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* ci: add ClickHouse integration tests to CI pipeline
- Add use-clickhouse option to reusable-test-workflow.yml
- Add ClickHouse service container with otel database
- Add schema initialization step using clickhouse-init.sql
- Add ClickHouse env vars (CLICKHOUSE_ENDPOINT, etc.)
- Add separate clickhouse-integration-tests job running
integration_test_clickhouse_llm_raw_traces.py
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* refactor: simplify provider and org_id extraction in raw trace writer
- Use model_endpoint_type.value for provider (not provider_name)
- Simplify org_id to just self.actor.organization_id (actor is always pydantic)
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* refactor: simplify LLMRawTraceWriter with _enabled flag
- Check ClickHouse env vars once at init, set _enabled flag
- Early return in write_async/flush_async if not enabled
- Remove ValueError raises (never used)
- Simplify _get_client (no validation needed since already checked)
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: add LLMRawTraceWriter shutdown to FastAPI lifespan
Properly flush pending traces on graceful shutdown via lifespan
instead of relying only on atexit handler.
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* feat: add agent_tags column to ClickHouse traces
Store agent tags as Array(String) for filtering/analytics by tag.
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* cleanup
* fix(ci): fix ClickHouse schema initialization in CI
- Create database separately before loading SQL file
- Remove CREATE DATABASE from SQL file (handled in CI step)
- Add verification step to confirm table was created
- Use -sf flag for curl to fail on HTTP errors
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* refactor: simplify LLM trace writer with ClickHouse async_insert
- Use ClickHouse async_insert for server-side batching instead of manual queue/flush loop
- Sync cloud DDL schema with clickhouse-init.sql (add missing columns)
- Remove redundant llm_raw_traces_local.ddl
- Remove unused batch_size/flush_interval settings
- Update tests for simplified writer
Key changes:
- async_insert=1, wait_for_async_insert=1 for reliable server-side batching (sketched below)
- Simple per-trace retry with exponential backoff (max 3 retries)
- ~150 lines removed from writer
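A sketch of the per-trace write path under these settings, using clickhouse_connect; the table and column names are illustrative:
```python
import random
import time

import clickhouse_connect

def insert_trace(row: list, max_retries: int = 3) -> None:
    # Endpoint/credentials come from settings in practice.
    client = clickhouse_connect.get_client(host="localhost")
    for attempt in range(max_retries + 1):
        try:
            # async_insert batches on the ClickHouse server; wait_for_async_insert=1
            # blocks until the batch is accepted, so write errors surface here.
            client.insert(
                "llm_traces",
                [row],
                column_names=["id", "request_json", "response_json"],
                settings={"async_insert": 1, "wait_for_async_insert": 1},
            )
            return
        except Exception:
            if attempt == max_retries:
                raise
            # Exponential backoff with jitter before retrying this trace.
            time.sleep(2**attempt + random.random())
```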
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* refactor: consolidate ClickHouse direct writes into TelemetryManager backend
- Add clickhouse_direct backend to provider_trace_backends
- Remove duplicate ClickHouse write logic from llm_client_base.py
- Configure via LETTA_TELEMETRY_PROVIDER_TRACE_BACKEND=postgres,clickhouse_direct
The clickhouse_direct backend:
- Converts ProviderTrace to LLMRawTrace
- Extracts usage stats from response JSON
- Writes via LLMRawTraceWriter with async_insert
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* refactor: address PR review comments and fix llm_config bug
Review comment fixes:
- Rename clickhouse_direct -> clickhouse_analytics (clearer purpose)
- Remove ClickHouse from OSS compose.yaml, create separate compose.clickhouse.yaml
- Delete redundant scripts/test_llm_raw_traces.py (use pytest tests)
- Remove unused llm_raw_traces_ttl_days setting (TTL handled in DDL)
- Fix socket description leak in telemetry_manager docstring
- Add cloud-only comment to clickhouse-init.sql
- Update justfile to use separate compose file
Bug fix:
- Fix llm_config not being passed to ProviderTrace in telemetry
- Now correctly populates provider, model, is_byok for all LLM calls
- Affects both request_async_with_telemetry and log_provider_trace_async
DDL optimizations:
- Add secondary indexes (bloom_filter for agent_id, model, step_id)
- Add minmax indexes for is_byok, is_error
- Change model and error_type to LowCardinality for faster GROUP BY
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* refactor: rename llm_raw_traces -> llm_traces
Address review feedback that "raw" is misleading since we denormalize fields.
Renames:
- Table: llm_raw_traces -> llm_traces
- Schema: LLMRawTrace -> LLMTrace
- Files: llm_raw_trace_{reader,writer}.py -> llm_trace_{reader,writer}.py
- Setting: store_llm_raw_traces -> store_llm_traces
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: update workflow references to llm_traces
Missed renaming table name in CI workflow files.
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: update clickhouse_direct -> clickhouse_analytics in docstring
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* chore: remove inaccurate OTEL size limit comments
The 4MB limit is our own truncation logic, not an OTEL protocol limit.
The real benefit is denormalized columns for analytics queries.
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* chore: remove local ClickHouse dev setup (cloud-only feature)
- Delete clickhouse-init.sql and compose.clickhouse.yaml
- Remove local clickhouse just commands
- Update CI to use cloud DDL with MergeTree for testing
clickhouse_analytics is a cloud-only feature. For local dev, use postgres backend.
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: restore compose.yaml to match main
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* refactor: merge clickhouse_analytics into clickhouse backend
Per review feedback - having two separate backends was confusing.
Now the clickhouse backend:
- Writes to llm_traces table (denormalized for cost analytics)
- Reads from OTEL traces table (will cut over to llm_traces later)
Config: LETTA_TELEMETRY_PROVIDER_TRACE_BACKEND=postgres,clickhouse
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: correct path to DDL file in CI workflow
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* chore: add provider index to DDL for faster filtering
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: configure telemetry backend in clickhouse tests
Tests need to set telemetry_settings.provider_trace_backends to include
'clickhouse'; otherwise traces are routed to the default postgres backend.
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: set provider_trace_backend field, not property
provider_trace_backends is a computed property, so the underlying
provider_trace_backend string field must be set instead.
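A minimal reproduction of the pattern with illustrative settings names (assuming pydantic-settings v2):
```python
from pydantic_settings import BaseSettings

class TelemetrySettings(BaseSettings):
    # Comma-separated string field; this is what tests must set.
    provider_trace_backend: str = "postgres"

    @property
    def provider_trace_backends(self) -> list[str]:
        # Computed from the string field; it has no setter, so assigning to it fails.
        return [b.strip() for b in self.provider_trace_backend.split(",") if b.strip()]

settings = TelemetrySettings()
settings.provider_trace_backend = "postgres,clickhouse"  # set the underlying field
assert settings.provider_trace_backends == ["postgres", "clickhouse"]
```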
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: error trace test and error_type extraction
- Add TelemetryManager to error trace test so traces get written
- Fix error_type extraction to check top-level before nested error dict
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: use provider_trace.id for trace correlation across backends
- Pass provider_trace.id to LLMTrace instead of auto-generating
- Log warning if ID is missing (shouldn't happen, helps debug)
- Fallback to new UUID only if not set
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: trace ID correlation and concurrency issues
- Strip "provider_trace-" prefix from ID for UUID storage in ClickHouse
- Add asyncio.Lock to serialize writes (clickhouse_connect not thread-safe)
- Fix Anthropic prompt_tokens to include cached tokens for cost analytics
- Log warning if provider_trace.id is missing
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
---------
Co-authored-by: Letta <noreply@letta.com>
Co-authored-by: Caren Thomas <carenthomas@gmail.com>
**Problem:**
When a user sends a message with an image URL that times out or fails to
fetch, the server returns a 500 Internal Server Error with a generic message.
This is confusing because the user doesn't know what went wrong.
**Root Cause:**
`LettaImageFetchError` was not registered in the exception handlers, so it
bubbled up as an unhandled exception.
**Fix:**
Register `LettaImageFetchError` with the 400 Bad Request handler. Now users
get a clear error message like:
```
Failed to fetch image from https://...: Timeout after 2 attempts
```
This tells users exactly what went wrong so they can retry with a different
image or verify the URL is accessible.
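A sketch of the handler registration; LettaImageFetchError is the class from the commit, while the handler body and app wiring are illustrative:
```python
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

class LettaImageFetchError(Exception):
    """Raised when an image URL in a user message cannot be fetched."""

app = FastAPI()

@app.exception_handler(LettaImageFetchError)
async def handle_image_fetch_error(request: Request, exc: LettaImageFetchError) -> JSONResponse:
    # Surface the fetch failure as a 400 with the original message instead of a generic 500.
    return JSONResponse(status_code=400, content={"detail": str(exc)})
```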
👾 Generated with [Letta Code](https://letta.com)
Co-authored-by: Letta <noreply@letta.com>
* feat(core): add image support in tool returns [LET-7140]
Enable tool_return to support both string and ImageContent content parts,
matching the pattern used for user message inputs. This allows tools
executed client-side to return images back to the agent.
Changes:
- Add LettaToolReturnContentUnion type for text/image content parts
- Update ToolReturn schema to accept Union[str, List[content parts]]
- Update converters for each provider:
- OpenAI Chat Completions: placeholder text for images
- OpenAI Responses API: full image support
- Anthropic: full image support with base64
- Google: placeholder text for images
- Add resolve_tool_return_images() for URL-to-base64 conversion
- Make create_approval_response_message_from_input() async
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): support images in Google tool returns as sibling parts
Following the gemini-cli pattern: images in tool returns are sent as
sibling inlineData parts alongside the functionResponse, rather than
inside it.
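Roughly what the resulting Gemini content parts look like under this pattern; the response payload shape is illustrative:
```python
import base64

def tool_return_to_google_parts(tool_name: str, text: str, png_bytes: bytes) -> list[dict]:
    # The functionResponse part carries only text; each image rides alongside as a
    # sibling inlineData part within the same content turn.
    return [
        {"functionResponse": {"name": tool_name, "response": {"result": text}}},
        {"inlineData": {"mimeType": "image/png", "data": base64.b64encode(png_bytes).decode("ascii")}},
    ]
```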
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* test(core): add integration tests for multi-modal tool returns [LET-7140]
Tests verify that:
- Models with image support (Anthropic, OpenAI Responses API) can see
images in tool returns and identify the secret text
- Models without image support (Chat Completions) get placeholder text
and cannot see the actual image content
- Tool returns with images persist correctly in the database
Uses secret.png test image containing hidden text "FIREBRAWL" that
models must identify to pass the test.
Also fixes a misleading comment about Anthropic only supporting base64
images; they support URLs too, we just pre-resolve for consistency.
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* refactor: simplify tool return image support implementation
Reduce code verbosity while maintaining all functionality:
- Extract _resolve_url_to_base64() helper in message_helper.py (eliminates duplication)
- Add _get_text_from_part() helper for text extraction
- Add _get_base64_image_data() helper for image data extraction
- Add _tool_return_to_google_parts() to simplify Google implementation
- Add _image_dict_to_data_url() for OpenAI Responses format
- Use walrus operator and list comprehensions where appropriate
- Add integration_test_multi_modal_tool_returns.py to CI workflow
Net change: -120 lines while preserving all features and test coverage.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(tests): improve prompt for multi-modal tool return tests
Make prompts more direct to reduce LLM flakiness:
- Simplify tool description: "Retrieves a secret image with hidden text. Call this function to get the image."
- Change user prompt from verbose request to direct command: "Call the get_secret_image function now."
- Apply to both test methods
This reduces ambiguity and makes tool calling more reliable across different LLM models.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix bugs
* test(core): add google_ai/gemini-2.0-flash-exp to multi-modal tests
Add Gemini model to test coverage for multi-modal tool returns. Google AI already supports images in tool returns via sibling inlineData parts.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(ui): handle multi-modal tool_return type in frontend components
Convert the string | LettaToolReturnContentUnion[] union to a string for display:
- ViewRunDetails: Convert array to '[Image here]' placeholder
- ToolCallMessageComponent: Convert array to '[Image here]' placeholder
Fixes TypeScript errors in web, desktop-ui, and docker-ui type-checks.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
---------
Co-authored-by: Letta <noreply@letta.com>
Co-authored-by: Caren Thomas <carenthomas@gmail.com>
* fix: prevent empty reasoning messages in streaming interfaces
Prevents empty "Thinking..." indicators from appearing in clients by
filtering out reasoning messages with no content at the source.
Changes:
- Gemini: Don't emit ReasoningMessage when only thought_signature exists
- Gemini: Only emit reasoning content if text is non-empty
- Anthropic: Don't emit ReasoningMessage for BetaSignatureDelta
- Anthropic: Only emit reasoning content if thinking text is non-empty
This fixes the issue where providers send signature metadata before
actual thinking content, causing empty reasoning blocks to appear
in the UI after responses complete.
Affects: Gemini reasoning, Anthropic extended thinking
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* feat: enable Datadog LLM Observability for memgpt-server
Enables DD_LLMOBS to track LLM calls, prompts, completions, and costs
in production for memgpt-server.
Changes:
- Add DD_LLMOBS_ENABLED=1 and DD_LLMOBS_ML_APP=memgpt-server in:
- .github/workflows/deploy-core.yml (GitHub Actions deployment)
- justfile (Helm deployment secrets)
- apps/core/letta/server/rest_api/app.py (runtime config)
This provides visibility into:
- LLM API calls and latency
- Prompt/completion content and tokens
- Model costs and usage
- Error rates per model/provider
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* dd llmobs
* Revert "fix: prevent empty reasoning messages in streaming interfaces"
This reverts commit a900228b3611de49eb5f740f68dc76a657fc9b14.
---------
Co-authored-by: Letta <noreply@letta.com>
* feat: add OpenTelemetry distributed tracing to letta-web
Enables end-to-end distributed tracing from letta-web through memgpt-server
using OpenTelemetry. Traces are exported via OTLP to Datadog APM for
monitoring request latency across services.
Key changes:
- Install OTEL packages: @opentelemetry/sdk-node, auto-instrumentations-node
- Create apps/web/src/lib/tracing.ts with full OTEL configuration
- Initialize tracing in instrumentation.ts (before any other imports)
- Add OTEL packages to next.config.js serverExternalPackages
- Add OTEL environment variables to deployment configs:
- OTEL_EXPORTER_OTLP_ENDPOINT (e.g., http://datadog-agent:4317)
- OTEL_SERVICE_NAME (letta-web)
- OTEL_ENABLED (true in production)
Features enabled:
- Automatic HTTP/fetch instrumentation with trace context propagation
- Service metadata (name, version, environment)
- Trace correlation with logs (getCurrentTraceId helper)
- Graceful shutdown handling
- Health check endpoint filtering
Configuration:
- Traces sent to OTLP endpoint (Datadog agent)
- W3C Trace Context propagation for distributed tracing
- BatchSpanProcessor for efficient trace export
- Debug logging in development environment
GitHub variables to set:
- OTEL_EXPORTER_OTLP_ENDPOINT (e.g., http://datadog-agent:4317)
- OTEL_ENABLED (true)
* feat: add OpenTelemetry distributed tracing to cloud-api
Completes end-to-end distributed tracing across the full request chain:
letta-web → cloud-api → memgpt-server (core)
All three services now export traces via OTLP to Datadog APM.
Key changes:
- Install OTEL packages in cloud-api
- Create apps/cloud-api/src/instrument-otel.ts with full OTEL configuration
- Initialize OTEL tracing in main.ts (before Sentry)
- Add OTEL environment variables to deployment configs:
- OTEL_EXPORTER_OTLP_ENDPOINT (e.g., http://datadog-agent:4317)
- OTEL_SERVICE_NAME (cloud-api)
- OTEL_ENABLED (true in production)
- GIT_HASH (for service version)
Features enabled:
- Automatic HTTP/Express instrumentation
- Trace context propagation (W3C Trace Context)
- Service metadata (name, version, environment)
- Trace correlation with logs (getCurrentTraceId helper)
- Health check endpoint filtering
Configuration:
- Traces sent to OTLP endpoint (Datadog agent)
- Seamless trace propagation through the full request chain
- BatchSpanProcessor for efficient trace export
Complete trace flow:
1. letta-web receives request, starts root span
2. letta-web calls cloud-api, propagates trace context
3. cloud-api calls memgpt-server, propagates trace context
4. All spans linked by trace ID, visible as single trace in Datadog
* fix: prevent duplicate OTEL SDK initialization and handle array headers
Fixes identified by Cursor bugbot:
1. Added initialization guard to prevent duplicate SDK initialization
- Added isInitialized flag to prevent multiple SDK instances
- Prevents duplicate SIGTERM handlers from being registered
- Prevents resource leaks from lost SDK references
2. Fixed array header value handling
- HTTP headers can be string | string[] | undefined
- Now properly handles array case by taking first element
- Prevents passing arrays to span.setAttribute() which expects strings
3. Verified OTEL dependencies are correctly installed
- Packages are in root package.json (monorepo structure)
- Available to all workspace packages (web, cloud-api)
- Bugbot false positive - dependencies ARE present
Applied fixes to both:
- apps/web/src/lib/tracing.ts
- apps/cloud-api/src/instrument-otel.ts
* fix: handle SIGTERM promise rejections and unify initialization pattern
Fixes identified by Cursor bugbot:
1. Fixed unhandled promise rejection in SIGTERM handlers
- Changed from async arrow function to sync with .catch()
- Prevents unhandled promise rejections during shutdown
- Logs errors if OTLP endpoint is unreachable during shutdown
- Applied to both web and cloud-api
2. Unified initialization pattern across services
- Removed auto-initialization from cloud-api instrument-otel.ts
- Now explicitly calls initializeTracing() in main.ts
- Matches web pattern (explicit call in instrumentation.ts)
- Reduces confusion and maintains consistency
Both services now follow the same pattern:
- Import tracing module
- Explicitly call initializeTracing()
- Guard against duplicate initialization with isInitialized flag
Before (cloud-api):
import './instrument-otel'; // Auto-initializes
After (cloud-api):
import { initializeTracing } from './instrument-otel';
initializeTracing(); // Explicit call
SIGTERM handler before:
process.on('SIGTERM', async () => {
await shutdownTracing(); // Unhandled rejection!
});
SIGTERM handler after:
process.on('SIGTERM', () => {
shutdownTracing().catch((error) => {
console.error('Error during OTEL shutdown:', error);
});
});
* feat: add environment differentiation for distributed tracing
Enables proper environment filtering in Datadog APM by introducing LETTA_ENV
to distinguish between production, staging, canary, and development.
Problem:
- NODE_ENV is always 'production' or 'development'
- No way to differentiate staging, canary, etc. in Datadog
- All traces appeared under no environment or same environment
- Couldn't test with staging traces
Solution:
- Added LETTA_ENV variable (production, staging, canary, development)
- Set deployment.environment attribute for Datadog APM filtering
- Updated all deployment configs (workflows, justfile)
- Falls back to NODE_ENV if LETTA_ENV not set
Changes:
1. Updated tracing code (web + cloud-api):
- Use LETTA_ENV for environment name
- Set SEMRESATTRS_DEPLOYMENT_ENVIRONMENT (resolves to deployment.environment)
- Fallback: LETTA_ENV → NODE_ENV → 'development'
2. Updated deployment configs:
- .github/workflows/deploy-web.yml: LETTA_ENV=production
- .github/workflows/deploy-cloud-api.yml: LETTA_ENV=production
- justfile: LETTA_ENV with default to production
3. Added comprehensive documentation:
- OTEL_TRACING.md with full setup guide
- How to view environments in Datadog APM
- How to test with staging environment
- Dashboard query examples
- Troubleshooting guide
Usage:
# Production
LETTA_ENV=production
# Staging
LETTA_ENV=staging
# Local dev
LETTA_ENV=development
Datadog APM now shows:
- env:production (main traffic)
- env:staging (staging deployments)
- env:canary (canary deployments)
- env:development (local testing)
View in Datadog:
APM → Services → Filter by env dropdown → Select production/staging/etc.
* fix: prevent OTEL SDK double shutdown and error handler failures
Fixes identified by Cursor bugbot:
1. SDK double shutdown prevention
- Set sdk = null after successful shutdown
- Set isInitialized = false to allow re-initialization
- Even on shutdown error, mark as shutdown to prevent retry
- Prevents errors when shutdownTracing() called multiple times
- Applied to both web and cloud-api
2. Error handler using console.error directly (web only)
- Replaced dynamic require('./logger') with console.error
- Logger module may not be loaded during early initialization
- This code runs in Next.js instrumentation.ts before modules load
- Prevents masking original OTEL errors with logger failures
- Cloud-api already correctly used console.error
Before (bug #1):
await sdk.shutdown();
// sdk still references shutdown SDK
// Next call to shutdownTracing() tries to shutdown again
After (bug #1):
await sdk.shutdown();
sdk = null; // ✅ Prevent double shutdown
isInitialized = false; // ✅ Allow re-init
Before (bug #2 - web):
const { logger } = require('./logger'); // ❌ May fail during init
logger.error('Failed to initialize OTEL', errorInfo);
After (bug #2 - web):
console.error('Failed to initialize OTEL:', error); // ✅ Always works
Scenarios protected:
- Multiple SIGTERM signals
- Explicit shutdownTracing() calls
- Logger initialization failures
- Circular dependencies during early init
* feat: add environment differentiation to core and staging deployments
Enables proper environment filtering in Datadog APM for memgpt-server (core)
and staging deployments by adding deployment.environment resource attribute.
Problem:
- Core traces didn't show environment in Datadog APM
- Staging workflow had no OTEL configuration
- Couldn't differentiate staging vs production core traces
Solution:
1. Updated core OTEL resource to include deployment.environment
- Added deployment.environment attribute in resource.py
- Uses settings.environment which maps to LETTA_ENVIRONMENT env var
- Applied .lower() for consistency with web/cloud-api
2. Added LETTA_ENV to staging workflow
- nightly-staging-deploy-test.yaml: LETTA_ENV=staging
- Added OTEL_EXPORTER_OTLP_ENDPOINT and OTEL_ENABLED vars
- Traces from staging will show env:staging in Datadog
3. Added LETTA_ENV to production core workflow
- deploy-core.yml: LETTA_ENV=production
- Added OTEL configuration at workflow level
- Traces from production will show env:production
4. Updated justfile for core deployments
- Set LETTA_ENVIRONMENT from LETTA_ENV with default to production
- Maps to settings.environment field (env_prefix="letta_")
Environment mapping:
- Web/Cloud-API: Use LETTA_ENV directly
- Core: Use LETTA_ENVIRONMENT (Pydantic with letta_ prefix)
- Both map to deployment.environment resource attribute
Now all services properly tag traces with environment:
✅ letta-web: deployment.environment set
✅ cloud-api: deployment.environment set
✅ memgpt-server: deployment.environment set
View in Datadog:
APM → Services → Filter by env:production or env:staging
* refactor: unify environment variable to LETTA_ENV across all services
Simplifies environment configuration by using LETTA_ENV consistently across
all three services (web, cloud-api, and core) instead of having core use
LETTA_ENVIRONMENT.
Problem:
- Core used LETTA_ENVIRONMENT (due to Pydantic env_prefix)
- Web and cloud-api used LETTA_ENV
- Confusing to have two different variable names
- Justfile had to map LETTA_ENV → LETTA_ENVIRONMENT
Solution:
- Added validation_alias to core settings.py
- environment field now reads from LETTA_ENV directly
- Falls back to letta_environment for backwards compatibility
- Updated justfile to set LETTA_ENV for core (not LETTA_ENVIRONMENT)
- Updated documentation to clarify consistent naming
Changes:
1. apps/core/letta/settings.py
- Added validation_alias=AliasChoices("LETTA_ENV", "letta_environment")
- Prioritizes LETTA_ENV, falls back to letta_environment
- Updated description to include all environment values
2. justfile
- Changed --set secrets.LETTA_ENVIRONMENT to --set secrets.LETTA_ENV
- Now consistent with web and cloud-api deployments
3. OTEL_TRACING.md
- Added note that all services use LETTA_ENV consistently
- Fixed trailing whitespace
Before:
- Web: LETTA_ENV
- Cloud-API: LETTA_ENV
- Core: LETTA_ENVIRONMENT ❌
After:
- Web: LETTA_ENV
- Cloud-API: LETTA_ENV
- Core: LETTA_ENV ✅
All services now use the same environment variable name!
* refactor: standardize on LETTA_ENVIRONMENT across all services
Unifies environment variable naming to use LETTA_ENVIRONMENT consistently
across all three services (web, cloud-api, and core).
Problem:
- Previous commit tried to use LETTA_ENV everywhere
- Core already uses Pydantic with env_prefix="letta_"
- Better to standardize on LETTA_ENVIRONMENT to match core conventions
Solution:
- All services now read from LETTA_ENVIRONMENT
- Web: process.env.LETTA_ENVIRONMENT
- Cloud-API: process.env.LETTA_ENVIRONMENT
- Core: settings.environment (reads LETTA_ENVIRONMENT via Pydantic prefix)
Changes:
1. apps/web/src/lib/tracing.ts
- Changed LETTA_ENV → LETTA_ENVIRONMENT
2. apps/cloud-api/src/instrument-otel.ts
- Changed LETTA_ENV → LETTA_ENVIRONMENT
3. apps/core/letta/settings.py
- Removed validation_alias (not needed)
- Uses standard Pydantic env_prefix behavior
4. All workflow files updated:
- deploy-web.yml: LETTA_ENVIRONMENT=production
- deploy-cloud-api.yml: LETTA_ENVIRONMENT=production
- deploy-core.yml: LETTA_ENVIRONMENT=production
- nightly-staging-deploy-test.yaml: LETTA_ENVIRONMENT=staging
- stage-web.yaml: LETTA_ENVIRONMENT=staging
- stage-cloud-api.yaml: LETTA_ENVIRONMENT=staging (added OTEL config)
- stage-core.yaml: LETTA_ENVIRONMENT=staging (added OTEL config)
5. justfile
- Updated all LETTA_ENV → LETTA_ENVIRONMENT
- Web: --set env.LETTA_ENVIRONMENT
- Cloud-API: --set env.LETTA_ENVIRONMENT
- Core: --set secrets.LETTA_ENVIRONMENT
6. OTEL_TRACING.md
- All references updated to LETTA_ENVIRONMENT
Final state:
✅ Web: LETTA_ENVIRONMENT
✅ Cloud-API: LETTA_ENVIRONMENT
✅ Core: LETTA_ENVIRONMENT (via letta_ prefix)
All services use the same variable name with proper Pydantic conventions!
* feat: implement split OTEL architecture (Option A)
Implements Option A: Web and cloud-api send traces directly to Datadog Agent,
while core keeps its existing OTEL sidecar (exports to ClickHouse + Datadog).
Architecture:
- letta-web → Datadog Agent (OTLP:4317) → Datadog APM
- cloud-api → Datadog Agent (OTLP:4317) → Datadog APM
- memgpt-server → OTEL Sidecar → ClickHouse + Datadog (unchanged)
Rationale:
- Core has existing production sidecar setup (exports to ClickHouse for analytics)
- Web/cloud-api don't need ClickHouse export, only APM
- Simpler: Direct to Datadog Agent is sufficient
- Minimal changes to core (already working)
- Traces still link end-to-end via W3C Trace Context propagation
Changes:
1. Helm Charts - Added OTEL config defaults:
- helm/letta-web/values.yaml: Added OTEL env vars
- helm/cloud-api/values.yaml: Added OTEL env vars
- Default: OTEL_ENABLED="false", override in production
- Endpoint: http://datadog-agent:4317
2. Production Workflows - Direct to Datadog Agent:
- deploy-web.yml: Set OTEL_EXPORTER_OTLP_ENDPOINT to datadog-agent
- deploy-cloud-api.yml: Set OTEL_EXPORTER_OTLP_ENDPOINT to datadog-agent
- deploy-core.yml: Removed OTEL vars (keep existing setup)
- OTEL_ENABLED="true", LETTA_ENVIRONMENT=production
3. Staging Workflows - Direct to Datadog Agent:
- stage-web.yaml: Set OTEL_EXPORTER_OTLP_ENDPOINT to datadog-agent
- stage-cloud-api.yaml: Set OTEL_EXPORTER_OTLP_ENDPOINT to datadog-agent
- stage-core.yaml: Removed OTEL vars (keep existing setup)
- nightly-staging-deploy-test.yaml: Removed OTEL vars
- OTEL_ENABLED="true", LETTA_ENVIRONMENT=staging
4. Justfile:
- Removed LETTA_ENVIRONMENT from core deployment (keep unchanged)
- Web/cloud-api already correctly pass OTEL vars from workflows
5. Documentation:
- Completely rewrote OTEL_TRACING.md
- Added architecture diagrams explaining split setup
- Added Datadog Agent prerequisites
- Added troubleshooting for split architecture
- Explained why we chose this approach
Prerequisites (must verify before deploying):
- Datadog Agent deployed with service name: datadog-agent
- OTLP receiver enabled on port 4317
- If different service name/namespace, update workflows
Next Steps:
- Verify datadog-agent service exists in cluster
- Verify OTLP receiver is enabled on Datadog agent
- Deploy and test trace propagation across services
* refactor: shorten environment names to prod and dev
Changes LETTA_ENVIRONMENT values from 'production' to 'prod' and
'development' to 'dev' for consistency and brevity.
Changes:
1. Workflows:
- deploy-web.yml: production → prod
- deploy-cloud-api.yml: production → prod
2. Helm charts:
- letta-web/values.yaml: development → dev
- cloud-api/values.yaml: development → dev
3. Justfile:
- Default values: production → prod
4. Code:
- apps/web/src/lib/tracing.ts: Fallback 'development' → 'dev'
- apps/cloud-api/src/instrument-otel.ts: Fallback 'development' → 'dev'
- apps/core/letta/settings.py: Updated description
5. Documentation:
- OTEL_TRACING.md: Updated all examples and table
Environment values:
- prod (was production)
- staging (unchanged)
- canary (unchanged)
- dev (was development)
* refactor: align environment names with codebase patterns
Changes staging to 'dev' and local development to 'local-test' to match
existing codebase conventions (like test_temporal_metrics_local.py).
Rationale:
- 'dev' for staging matches consistent pattern across codebase
- 'local-test' for local development follows test naming convention
- Clearer distinction between deployed staging and local testing
Environment values:
- prod (production)
- dev (staging/dev cluster)
- canary (canary deployments)
- local-test (local development)
Changes:
1. Staging workflows:
- stage-web.yaml: staging → dev
- stage-cloud-api.yaml: staging → dev
2. Helm chart defaults (for local):
- letta-web/values.yaml: dev → local-test
- cloud-api/values.yaml: dev → local-test
3. Code fallbacks:
- apps/web/src/lib/tracing.ts: 'dev' → 'local-test'
- apps/cloud-api/src/instrument-otel.ts: 'dev' → 'local-test'
- apps/core/letta/settings.py: Updated description
4. Documentation:
- OTEL_TRACING.md: Updated table, examples, and all references
- Clarified dev = staging cluster, local-test = local development
Datadog APM filters:
- env:prod (production)
- env:dev (staging cluster)
- env:canary (canary)
- env:local-test (local development)
* fix: update environment checks for lowercase values and add missing configs
Fixes 4 bugs identified by Cursor bugbot:
1. Case-sensitive environment checks (5 locations)
- Updated all checks from "PRODUCTION" to case-insensitive "prod"
- Fixed in: resource.py, multi_agent.py, tool_manager.py,
multi_agent_tool_executor.py, agent_manager_helper.py
- Now properly filters local-only tools in production
- Prevents exposing debug tools in production
2. Device ID leak in production
- Fixed resource.py to use case-insensitive check
- Now correctly excludes device.id (MAC address) in production
- Only adds device.id when env is not "prod"
3. Missing @opentelemetry/sdk-trace-base in Next.js externals
- Added to serverExternalPackages in next.config.js
- Prevents webpack bundling issues with native dependencies
- Package is directly imported for BatchSpanProcessor
4. Missing NEXT_PUBLIC_GIT_HASH in stage-web workflow
- Added NEXT_PUBLIC_GIT_HASH: ${{ github.sha }}
- Now matches stage-cloud-api.yaml pattern
- Staging traces will show correct version instead of 'unknown'
- Enables correlation of traces with specific deployments
Changes:
- apps/core/letta/otel/resource.py: Case-insensitive check, add device.id only if not prod
- apps/core/letta/functions/function_sets/multi_agent.py: Case-insensitive prod check
- apps/core/letta/services/tool_manager.py: Case-insensitive prod check
- apps/core/letta/services/tool_executor/multi_agent_tool_executor.py: Case-insensitive prod check
- apps/core/letta/services/helpers/agent_manager_helper.py: Case-insensitive prod check
- apps/web/next.config.js: Added @opentelemetry/sdk-trace-base to externals
- .github/workflows/stage-web.yaml: Added NEXT_PUBLIC_GIT_HASH
All checks now use: settings.environment.lower() == "prod"
This matches our new convention: prod/dev/canary/local-test
Also includes: distributed-tracing skill (created in /skill session)
* refactor: keep core PRODUCTION but normalize OTEL tags to prod
Changes approach to maintain backward compatibility with core business logic
while standardizing OTEL environment tags.
Previous approach:
- Changed all "PRODUCTION" checks to lowercase "prod"
- Would break existing core business logic expectations
New approach:
- Core continues using "PRODUCTION" (uppercase) for business logic
- OTEL resource.py normalizes environment to lowercase abbreviated tags
- Web/cloud-api use "prod" directly (they don't have business logic checks)
Changes:
1. Reverted business logic checks to use "PRODUCTION" (uppercase):
- multi_agent.py: Check for "PRODUCTION" to block tools
- tool_manager.py: Check for "PRODUCTION" to filter local-only tools
- multi_agent_tool_executor.py: Check for "PRODUCTION" to block tools
- agent_manager_helper.py: Check for "PRODUCTION" to filter tools
2. Added environment normalization for OTEL tags:
- resource.py: New _normalize_environment_tag() function
- Maps PRODUCTION → prod, DEV/STAGING → dev (sketched below)
- Other values (CANARY, etc.) converted to lowercase
- Device ID check reverted to != "PRODUCTION"
3. Updated core deployments to set PRODUCTION:
- deploy-core.yml: LETTA_ENVIRONMENT=PRODUCTION
- stage-core.yaml: LETTA_ENVIRONMENT=DEV
- justfile: Added LETTA_ENVIRONMENT with default PRODUCTION
4. Updated settings description:
- Clarifies values are uppercase (PRODUCTION, DEV)
- Notes normalization to lowercase for OTEL tags
Result:
- Core business logic: Uses "PRODUCTION" (unchanged, backward compatible)
- OTEL Datadog tags: Shows "prod" (normalized, consistent with web/cloud-api)
- Web/cloud-api: Continue using "prod" directly (no change needed)
- Device ID properly excluded in PRODUCTION environments
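A sketch of the normalization helper described above; handling of values other than PRODUCTION/DEV/STAGING is an assumption:
```python
def _normalize_environment_tag(environment: str) -> str:
    # Business logic keeps uppercase values (PRODUCTION, DEV); OTEL tags get short lowercase names.
    mapping = {"PRODUCTION": "prod", "DEV": "dev", "STAGING": "dev"}
    return mapping.get(environment.upper(), environment.lower())
```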
* fix: correct Python FastAPI instrumentation and environment normalization
Fixes 3 bugs identified by Cursor bugbot in distributed-tracing skill:
1. Python import typo (line 50)
- Was: from opentelemetry.instrumentation.fastapi import FastAPIInstrumentatio
- Now: from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
- Missing final 'n' in Instrumentatio
- Correct class name is FastAPIInstrumentor (with 'or' suffix)
2. Wrong class name usage (line 151)
- Was: FastAPIInstrumentation.instrument_app()
- Now: FastAPIInstrumentor.instrument_app()
- Fixed to match correct OpenTelemetry API
3. Environment tag inconsistency
- Problem: Python template used .lower() which converts PRODUCTION -> production
- But resource.py normalizes PRODUCTION -> prod
- Would create inconsistent tags: 'production' vs 'prod' in Datadog
Solution:
- Added _normalize_environment_tag() function to Python template
- Matches resource.py normalization logic
- PRODUCTION -> prod, DEV/STAGING -> dev, others lowercase
- Updated comments in workflows to clarify normalization happens in code
Changes:
- .skills/distributed-tracing/templates/python-fastapi-tracing.py:
- Fixed import: FastAPIInstrumentor (not FastAPIInstrumentatio)
- Fixed usage: FastAPIInstrumentor.instrument_app()
- Added _normalize_environment_tag() function
- Updated environment handling to use normalization
- Updated docstring to clarify PRODUCTION/DEV -> prod/dev mapping
- .github/workflows/deploy-core.yml:
- Clarified comment: _normalize_environment_tag() converts to "prod"
- .github/workflows/stage-core.yaml:
- Clarified comment: _normalize_environment_tag() converts to "dev"
Result:
All services now consistently show 'prod' (not 'production') in Datadog APM,
enabling proper filtering and correlation across distributed traces.
* fix: add Datadog config to staging workflows and fix justfile backslash
Fixes 3 issues found in staging deployment logs:
1. Missing backslash in justfile (line 134)
Problem: the LETTA_ENVIRONMENT line was missing a backslash, causing all
subsequent helm --set flags to be ignored, including OTEL_EXPORTER_OTLP_ENDPOINT
Result: letta-web and cloud-api logs showed "OTEL_EXPORTER_OTLP_ENDPOINT not set"
Fixed:
--set env.LETTA_ENVIRONMENT=${LETTA_ENVIRONMENT:-prod} \ # Added backslash
2. Missing Datadog vars in staging workflows
Problem: stage-web.yaml, stage-cloud-api.yaml, stage-core.yaml didn't set
DD_SITE, DD_API_KEY, DD_LOGS_INJECTION, etc.
For web/cloud-api:
- Added to top-level env section so justfile can use them
For core:
- Added to top-level env section
- Added to Deploy step env section (so justfile can pass to helm)
- core OTEL collector config reads these from environment
Result: core logs showed "exporters::datadog: api.key is not set"
3. Wrong environment tag in staging (secondary issue)
Problem: letta-web logs showed 'dd.env":"production"' in staging
Cause: Missing backslash broke LETTA_ENVIRONMENT, defaulted to prod
Fixed: Backslash fix ensures LETTA_ENVIRONMENT=dev is set
Changes:
- justfile: Fixed missing backslash on LETTA_ENVIRONMENT line
- .github/workflows/stage-web.yaml: Added DD_* vars to env
- .github/workflows/stage-cloud-api.yaml: Added DD_* vars to env
- .github/workflows/stage-core.yaml: Added DD_* vars to env and Deploy step
After this fix:
- Web/cloud-api will send traces to Datadog Agent via OTLP
- Core OTEL collector will export traces to both ClickHouse and Datadog
- All staging traces will show env:dev tag (not env:production)
* fix: move OTEL config from prod helm to dev helm values
Problem: OTEL configuration was added to production helm values files
(helm/letta-web/values.yaml and helm/cloud-api/values.yaml) but these
are for production deployments. Staging deployments use the dev helm
values (helm/dev/<service>/values.yaml).
Changes:
- Removed OTEL vars from helm/letta-web/values.yaml (prod)
- Removed OTEL vars from helm/cloud-api/values.yaml (prod)
- Added OTEL vars to helm/dev/letta-web/values.yaml (staging)
- Added OTEL vars to helm/dev/cloud-api/values.yaml (staging)
Dev helm values now include:
OTEL_ENABLED: "true"
OTEL_SERVICE_NAME: "letta-web" or "cloud-api"
OTEL_EXPORTER_OTLP_ENDPOINT: "http://datadog-agent.default.svc.cluster.local:4317"
LETTA_ENVIRONMENT: "dev"
Note: Production deployments override these via workflow env vars, so
prod helm values don't need OTEL config. Dev/staging deployments use
these helm values as defaults.
* remove generated doc
* secrets in dev
* totally unrelated changes to tf for runner sizing and scaling
* feat: add DD_ENV tags to staging helm for log correlation
Problem: Logs show 'dd.env":"production"' instead of 'dd.env":"dev"'
in staging because Datadog's logger injection uses DD_ENV, DD_SERVICE,
and DD_VERSION environment variables for tagging.
Changes:
- Added DD_ENV, DD_SERVICE, DD_VERSION to helm/dev/letta-web/values.yaml
- Added DD_ENV, DD_SERVICE, DD_VERSION to helm/dev/cloud-api/values.yaml
Values:
DD_ENV: "dev"
DD_SERVICE: "letta-web" or "cloud-api"
DD_VERSION: "dev"
This ensures:
- Logs show correct env:dev tag in Datadog
- Traces and logs are properly correlated
- Consistent tagging across OTEL traces and DD logs
* feat: enable OTLP receiver in Datadog Agent configurations
Added OpenTelemetry Protocol (OTLP) receiver to Datadog Agent for both
dev and prod environments to support distributed tracing from services
using OpenTelemetry SDKs.
Changes:
- helm/dev/datadog/datadog-agent.yaml: Added otlp.receiver configuration
- helm/datadog/datadog-agent.yaml: Added otlp.receiver configuration
OTLP Configuration:
otlp:
receiver:
protocols:
grpc:
enabled: true
endpoint: "0.0.0.0:4317"
http:
enabled: true
endpoint: "0.0.0.0:4318"
This enables:
- Web/cloud-api services to send traces via OTLP (port 4317)
- Core OTEL collector to export to Datadog via OTLP (port 4317)
- Alternative HTTP endpoint for OTLP (port 4318)
When applied, the Datadog Agent service will expose:
- Port 4317/TCP - OTLP gRPC (for traces)
- Port 4318/TCP - OTLP HTTP (for traces)
- Port 8126/TCP - Native Datadog APM (existing)
- Port 8125/UDP - DogStatsD (existing)
Apply with:
kubectl apply -f helm/dev/datadog/datadog-agent.yaml # staging
kubectl apply -f helm/datadog/datadog-agent.yaml # production
* feat: use git hash as DD_VERSION for all services
Changed from static version strings to using git commit hash as the
version tag in Datadog APM for better version tracking and correlation.
Changes:
1. Workflows - Set DD_VERSION to github.sha:
- .github/workflows/stage-web.yaml: Added DD_VERSION: ${{ github.sha }}
- .github/workflows/stage-cloud-api.yaml: Added DD_VERSION: ${{ github.sha }}
- .github/workflows/stage-core.yaml: Added DD_VERSION: ${{ github.sha }}
(both top-level env and Deploy step env)
2. Justfile - Pass DD_VERSION to helm:
- deploy-web: Added --set env.DD_VERSION=${DD_VERSION:-unknown}
- deploy-cloud-api: Added --set env.DD_VERSION=${DD_VERSION:-unknown}
- deploy-core: Added --set secrets.DD_VERSION=${DD_VERSION:-unknown}
3. Helm dev values - Remove hardcoded version:
- helm/dev/letta-web/values.yaml: Removed DD_VERSION: "dev"
- helm/dev/cloud-api/values.yaml: Removed DD_VERSION: "dev"
- Added comments that DD_VERSION is set via workflow
Result:
- Traces in Datadog will show version as git commit SHA (e.g., "abc123def")
- Can correlate traces with specific deployments/commits
- Consistent with internal versioning strategy (git hash, not semver)
- Defaults to "unknown" if DD_VERSION not set
Example trace tags after deployment:
env:dev
service:letta-web
version:7eafc5b0c12345...
* feat: add DD_VERSION to production workflows
Added DD_VERSION to production deployment workflows for consistent version
tracking across staging and production environments.
Changes:
- .github/workflows/deploy-web.yml: Added DD_VERSION: ${{ github.sha }}
- .github/workflows/deploy-core.yml: Added DD_VERSION: ${{ github.sha }}
Note: deploy-cloud-api.yml doesn't have DD config yet, will add when
cloud-api gets OTEL enabled in production.
Context:
This was partially flagged by bugbot - it noted that NEXT_PUBLIC_GIT_HASH
was missing from prod, but that was incorrect (line 53 already has it).
However, DD_VERSION was indeed missing and needed for Datadog log
correlation.
Result:
- Production logs will show version tag matching git commit SHA
- Consistent with staging configuration
- Better trace/log correlation in Datadog APM
Staging already has DD_VERSION (added in commit fb1a3eea0)
* feat: add DD tags to memgpt-server dev helm for APM correlation
Problem: memgpt-server logs show up in Datadog but traces don't appear
properly in the APM UI because the DD_ENV, DD_SERVICE, and DD_SITE tags were missing.
The service was using native Datadog agent instrumentation (via
LETTA_TELEMETRY_ENABLE_DATADOG) but without proper unified service tagging,
traces weren't being correlated correctly in the APM interface.
Changes:
- helm/dev/memgpt-server/values.yaml:
- Added DD_ENV: "dev"
- Added DD_SERVICE: "memgpt-server"
- Added DD_SITE: "us5.datadoghq.com"
- Added comment that DD_VERSION comes from workflow
Existing configuration:
- DD_VERSION already passed via stage-core.yaml (line 215) and justfile (line 272)
- DD_API_KEY already in secretsProvider (line 194)
- LETTA_TELEMETRY_ENABLE_DATADOG: "true" (enables native DD agent)
- LETTA_TELEMETRY_DATADOG_AGENT_HOST/PORT (routes to DD cluster agent)
Result:
After redeployment, memgpt-server traces will show in Datadog APM with:
- env:dev
- service:memgpt-server
- version:<git-hash>
- Proper correlation with logs
* refactor: use image tag for DD_VERSION instead of separate env var
Changed from passing DD_VERSION separately to deriving it from the
image.tag that's already set (which contains the git hash).
This is cleaner because:
- Image tag is already set to git hash via TAG env var
- Removes redundant DD_VERSION from workflows (6 locations)
- Single source of truth for version (the deployed image tag)
- Simpler configuration
Changes:
Workflows (removed DD_VERSION):
- .github/workflows/stage-web.yaml
- .github/workflows/stage-cloud-api.yaml
- .github/workflows/stage-core.yaml (2 locations)
- .github/workflows/deploy-web.yml
- .github/workflows/deploy-core.yml
Justfile (use {{TAG}} instead of ${DD_VERSION}):
- deploy-web: --set env.DD_VERSION={{TAG}}
- deploy-cloud-api: --set env.DD_VERSION={{TAG}}
- deploy-core: --set secrets.DD_VERSION={{TAG}}
Helm values (updated comments):
- helm/dev/letta-web/values.yaml
- helm/dev/cloud-api/values.yaml
- helm/dev/memgpt-server/values.yaml
- Changed from "set via workflow" to "set from image.tag by justfile"
Flow:
1. Workflow sets TAG=${{ github.sha }}
2. Workflow calls justfile with TAG env var
3. Justfile sets image.tag={{TAG}} and DD_VERSION={{TAG}}
4. Both use same git hash value
Example:
image.tag: abc123def
DD_VERSION: abc123def
Both from TAG env var set to github.sha
* feat: add Datadog native tracer (dd-trace) to cloud-api for APM
Problem: cloud-api traces weren't appearing in Datadog APM despite OTEL
being configured. Investigation revealed letta-web uses dd-trace (Datadog's
native tracer) in addition to OTEL, and those traces show up perfectly.
Analysis:
- letta-web: Uses BOTH OTEL + dd-trace → traces visible in APM ✓
- cloud-api: Uses ONLY OTEL → traces NOT visible in APM ✗
Root cause: While OTEL *should* work, dd-trace provides better integration
with Datadog's APM backend and is proven to work in production.
Solution: Add dd-trace initialization to cloud-api, matching letta-web's
dual-tracing approach (OTEL + dd-trace).
Changes:
- apps/cloud-api/src/instrument-otel.ts:
- Added dd-trace initialization after OTEL setup
- Checks for DD_API_KEY env var (already configured in helm)
- Enables logInjection, runtimeMetrics, and profiling
- Graceful fallback if dd-trace fails to initialize
Dependencies:
- dd-trace@^5.31.0 already available in root package.json
Configuration (already set in helm):
- DD_API_KEY: From secretsProvider ✓
- DD_ENV: "dev" ✓
- DD_SERVICE: "cloud-api" ✓
- DD_LOGS_INJECTION: From workflow ✓
Expected result:
After deployment, cloud-api traces will appear in Datadog APM alongside
letta-web and letta-server, with proper env:dev service:cloud-api tags.
* tweak vars in staging
* fix: initialize Datadog tracer for memgpt-server APM traces
Problem: memgpt-server (letta-server) shows up in Datadog APM with env:null
instead of env:dev, and traces weren't being properly captured.
Root cause: The code was only initializing the Datadog Profiler (for CPU/memory
profiling), but NOT the Tracer (for distributed tracing/APM).
Analysis:
- Profiler: Records performance metrics (CPU, memory) - WAS initialized ✓
- Tracer: Records distributed traces/spans for APM - NOT initialized ✗
The existing code (line 248-256) did:
from ddtrace.profiling import Profiler # Only profiler!
profiler = Profiler(...)
profiler.start()
# No tracer initialization!
This explains why:
- letta-server appears in Datadog with env:null (profiling data sent without proper tags)
- Traces don't show proper service/env correlation
- APM service map is incomplete
Solution: Initialize the Datadog tracer with ddtrace.patch_all() to:
1. Auto-instrument FastAPI, HTTP clients, database calls, etc.
2. Send proper distributed traces to Datadog APM
3. Use the DD_ENV, DD_SERVICE env vars already set in helm
Changes:
- apps/core/letta/server/rest_api/app.py:
- Added import ddtrace
- Added ddtrace.patch_all() to auto-instrument all libraries
- Added logging for tracer initialization
Configuration (already set in helm):
- DD_ENV: "dev" ✓
- DD_SERVICE: "memgpt-server" ✓
- DD_SITE: "us5.datadoghq.com" ✓
- DD_VERSION: From image.tag ✓
- DD_AGENT_HOST/PORT: Set by code from settings ✓
Expected result:
After redeployment, letta-server will:
- Show as env:dev (not env:null) in Datadog APM
- Send proper distributed traces with full context
- Appear correctly in service maps and trace explorer
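For reference, the initialization described above looks roughly like this (an illustrative sketch, not the exact app.py diff):
```python
import logging
import ddtrace
from ddtrace.profiling import Profiler

logger = logging.getLogger(__name__)

# Auto-instrument FastAPI, HTTP clients, database drivers, etc.; spans pick up
# DD_ENV / DD_SERVICE / DD_VERSION from the environment variables set in helm.
ddtrace.patch_all()
logger.info("Datadog tracer initialized via ddtrace.patch_all()")

# The profiler was already being started before this change.
profiler = Profiler()
profiler.start()
```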
* fix: add dd-trace dependency to cloud-api package.json
Problem: cloud-api Docker image doesn't include dd-trace, causing
"Cannot find module 'dd-trace'" error at runtime.
Root cause: dd-trace is in root package.json but not in cloud-api's
package.json, so it's not included in the Docker build.
Solution: Add dd-trace@^5.31.0 to cloud-api dependencies.
Changes:
- apps/cloud-api/package.json: Added dd-trace dependency
* fix: mark dd-trace as external in cloud-api esbuild config
Problem: esbuild fails when trying to bundle dd-trace because it attempts
to bundle optional GraphQL plugin dependencies that aren't installed.
Error:
Could not resolve "graphql/language/visitor"
Could not resolve "graphql/language/printer"
Could not resolve "graphql/utilities"
Root cause: dd-trace has optional plugins for various frameworks (GraphQL,
MongoDB, etc.) that it loads conditionally at runtime. esbuild tries to
statically analyze and bundle all requires, including these optional deps.
Solution: Add dd-trace to the externals list so it's loaded at runtime
instead of being bundled. This is the standard approach for native modules
and packages with optional dependencies.
Changes:
- apps/cloud-api/esbuild.config.js: Added 'dd-trace' to externals array
Result:
- Build succeeds ✓
- dd-trace loads at runtime with only the plugins it needs ✓
- No GraphQL dependency required ✓
* add dd-trace
* fix: increase cloud-api memory and make dd-trace profiling configurable
Problem: cloud-api pods crash looping with out of memory errors when
dd-trace profiling is enabled:
FATAL ERROR: JavaScript heap out of memory
current_heap_limit=268435456 (a 256Mi heap limit inside the 512Mi container limit)
Root cause: dd-trace profiling is memory-intensive (50-100MB+ overhead)
and the original 512Mi limit was too tight.
Solution: Two-part fix:
1. Increase memory limits: 512Mi → 1Gi (gives profiling room to breathe)
2. Make profiling configurable via DD_PROFILING_ENABLED env var
Changes:
helm/dev/cloud-api/values.yaml:
- resources.limits.memory: 512Mi → 1Gi
- resources.requests.memory: 512Mi → 1Gi
- Added DD_PROFILING_ENABLED: "true"
apps/cloud-api/src/instrument-otel.ts:
- Read DD_PROFILING_ENABLED env var
- Pass to tracer.init({ profiling: profilingEnabled })
- Log profiling status on initialization
Benefits:
✓ Profiling enabled by default (CPU/heap flame graphs in Datadog)
✓ Can disable via env var if needed (set to "false")
✓ More headroom prevents OOM crashes (1Gi vs 512Mi)
✓ Configurable per environment
Memory breakdown with profiling:
- App baseline: ~300-400MB
- dd-trace profiling: ~50-100MB
- Buffer/headroom: ~500MB
- Total: 1Gi (comfortable margin)
* Fix event loop blocking in NLTK downloads and Azure model listing
Found via watchdog detecting 61.6s hang during file upload.
**Root causes:**
1. NLTK punkt_tab downloads blocking during file processing
2. Azure model listing using sync requests.get() in async context
**Fixes:**
1. Pre-download NLTK data at Docker build time
2. Async fallback download at startup if build failed
3. Move Azure model fetch to thread pool with asyncio.to_thread()
**Impact:**
- Eliminates 60+ second event loop hangs
- Startup: instant if the data is baked in, ~60s asynchronous download if not
- Requests: never block, all I/O offloaded to threads
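The two async-side fixes amount to roughly the following (a sketch; helper names like ensure_nltk_data and fetch_azure_models are illustrative, not the actual functions in core):
```python
import asyncio
import nltk
import requests

async def ensure_nltk_data() -> None:
    # Startup fallback: only download punkt_tab if the Docker build
    # did not already bake it into the image.
    try:
        nltk.data.find("tokenizers/punkt_tab")
    except LookupError:
        await asyncio.to_thread(nltk.download, "punkt_tab")

async def fetch_azure_models(url: str, api_key: str) -> dict:
    # Run the blocking requests call in a worker thread so the
    # event loop never stalls while listing Azure models.
    def _get() -> dict:
        resp = requests.get(url, headers={"api-key": api_key}, timeout=30)
        resp.raise_for_status()
        return resp.json()

    return await asyncio.to_thread(_get)
```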
* Fix Docker build: ensure /root/nltk_data exists even if download fails
- Create directory before download attempt
- Add verification step to confirm download success
- Directory always exists so COPY won't fail in runtime stage
* Fix: use venv python for NLTK download in Docker build
The builder stage installs NLTK in /app/.venv but we were using
system python which doesn't have NLTK. Now using venv python so
download actually works.
* Use uv run for NLTK download (more idiomatic)
uv run automatically uses the synced venv, cleaner than hardcoding
the venv path.
* Add lightweight event loop watchdog monitoring
- Thread-based watchdog detects event loop hangs >15s
- Runs independently, won't interfere with normal operation
- Disabled in test environments
- Minimal overhead, just heartbeat checks every 5s
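A minimal sketch of this thread-based approach (module layout and names are illustrative):
```python
import asyncio
import logging
import threading
import time

logger = logging.getLogger(__name__)

HEARTBEAT_INTERVAL = 5.0   # heartbeat check cadence from this commit
HANG_THRESHOLD = 15.0      # alert when the loop is blocked longer than this

_last_beat = time.monotonic()

async def _heartbeat() -> None:
    # Runs on the event loop; stops updating whenever the loop is blocked.
    global _last_beat
    while True:
        _last_beat = time.monotonic()
        await asyncio.sleep(HEARTBEAT_INTERVAL)

def _watch() -> None:
    # Independent daemon thread: keeps running even when the loop hangs.
    while True:
        time.sleep(HEARTBEAT_INTERVAL)
        lag = time.monotonic() - _last_beat
        if lag > HANG_THRESHOLD:
            logger.warning("event loop appears blocked for %.1fs", lag)

def start_watchdog(loop: asyncio.AbstractEventLoop) -> None:
    loop.create_task(_heartbeat())
    threading.Thread(target=_watch, name="loop-watchdog", daemon=True).start()
```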
* actually test it
* Add test script to validate watchdog detects hangs
Run with: uv run python test_watchdog_hang.py
Tests:
- Normal operation (no false positives)
- Short blocks under threshold (no alerts)
- Long blocks over threshold (correctly alerts)
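Roughly what the script exercises (an illustrative outline; start_watchdog refers to the watchdog sketch above, not an actual module):
```python
import asyncio
import time

async def main() -> None:
    start_watchdog(asyncio.get_running_loop())

    await asyncio.sleep(10)   # normal operation: no alert expected

    time.sleep(5)             # short synchronous block, under the 15s threshold
    await asyncio.sleep(1)

    time.sleep(20)            # long synchronous block: watchdog should alert
    await asyncio.sleep(1)

asyncio.run(main())
```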
* add memory tracking to core
* move to asyncio from threading.Thread
* remove threading.thread all the way
* delay decorator monitoring initialization until after event loop is registered
* context manager to decorator
* add psutil
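The memory tracking boils down to something like this (an illustrative asyncio task using psutil, not the actual decorator in core):
```python
import asyncio
import logging
import psutil

logger = logging.getLogger(__name__)

async def track_memory(interval: float = 5.0) -> None:
    # Periodically sample the process RSS with psutil from an asyncio task
    # (no threading.Thread involved).
    proc = psutil.Process()
    while True:
        rss_mib = proc.memory_info().rss / (1024 * 1024)
        logger.info("process RSS: %.1f MiB", rss_mib)
        await asyncio.sleep(interval)
```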
* change my PR to match Caren's
* add path parameter validation for agent id first
* remove old import
* remove the old agent_id_pattern regex
* add example and fix max/min calculation to include hyphen
* fix regex string interpolation
* example deprecated in favour of examples
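Taken together, these validation commits amount to roughly this FastAPI pattern (the route, regex, and example value are assumptions, not the actual definitions):
```python
from fastapi import FastAPI, Path

app = FastAPI()

# Hypothetical pattern: "agent-" prefix plus a UUID; the hyphen is counted
# in the character class and the length bounds.
AGENT_ID_PATTERN = r"^agent-[0-9a-fA-F-]{36}$"

@app.get("/v1/agents/{agent_id}")
async def get_agent(
    agent_id: str = Path(
        ...,
        pattern=AGENT_ID_PATTERN,
        examples=["agent-123e4567-e89b-12d3-a456-426614174000"],
    ),
) -> dict:
    return {"agent_id": agent_id}

# A non-matching agent_id fails path validation and FastAPI returns 422.
```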
* openapi autogen
* change template test to expect 422
* fix 422 swallow
* expect 422 or 400
* rewrite error codes
* fix hallucinated uuid
* tweaked error message test
* print docker logs on failure