Both letta_llm_stream_adapter and simple_llm_stream_adapter were
creating ProviderTrace without call_type, causing call_type to appear
as "unknown" in ClickHouse analytics.
🤖 Generated with [Letta Code](https://letta.com)
Co-authored-by: Letta <noreply@letta.com>
Log error traces to ClickHouse when streaming requests fail,
matching the behavior in letta_llm_stream_adapter.
🤖 Generated with [Letta Code](https://letta.com)
Co-authored-by: Letta <noreply@letta.com>
* feat: add non-streaming option for conversation messages
- Add ConversationMessageRequest with stream=True default (backwards compatible)
- stream=true (default): SSE streaming via StreamingService
- stream=false: JSON response via AgentLoop.load().step()
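A minimal sketch of the branching, assuming Pydantic; everything except `stream` and `AgentLoop.load().step()` is illustrative:

```python
from pydantic import BaseModel

class ConversationMessageRequest(BaseModel):
    messages: list[dict]   # hypothetical payload field
    stream: bool = True    # default preserves existing SSE behavior

async def send_conversation_message(request: ConversationMessageRequest):
    if request.stream:
        # stream=true: SSE streaming via StreamingService (assumed interface)
        return streaming_service.create_sse_stream(request.messages)
    # stream=false: buffered JSON response via the agent loop
    return await AgentLoop.load().step(request.messages)
```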
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* chore: regenerate API schema for ConversationMessageRequest
* feat: add direct ClickHouse storage for raw LLM traces
Adds ability to store raw LLM request/response payloads directly in ClickHouse,
bypassing OTEL span attribute size limits. This enables debugging and analytics
on large LLM payloads (>10MB system prompts, large tool schemas, etc.).
New files:
- letta/schemas/llm_raw_trace.py: Pydantic schema with ClickHouse row helper
- letta/services/llm_raw_trace_writer.py: Async batching writer (fire-and-forget)
- letta/services/llm_raw_trace_reader.py: Reader with query methods
- scripts/sql/clickhouse/llm_raw_traces.ddl: Production table DDL
- scripts/sql/clickhouse/llm_raw_traces_local.ddl: Local dev DDL
- apps/core/clickhouse-init.sql: Local dev initialization
Modified:
- letta/settings.py: Added 4 settings (store_llm_raw_traces, ttl, batch_size, flush_interval)
- letta/llm_api/llm_client_base.py: Integration into request_async_with_telemetry
- compose.yaml: Added ClickHouse service for local dev
- justfile: Added clickhouse, clickhouse-cli, clickhouse-traces commands
Feature disabled by default (LETTA_STORE_LLM_RAW_TRACES=false).
Uses ZSTD(3) compression for 10-30x reduction on JSON payloads.
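A sketch of the four settings, assuming pydantic-settings with the LETTA_ env prefix; store_llm_raw_traces and llm_raw_traces_ttl_days are named in this PR, the other two full names and all defaults are illustrative:

```python
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_prefix="LETTA_")

    store_llm_raw_traces: bool = False           # LETTA_STORE_LLM_RAW_TRACES
    llm_raw_traces_ttl_days: int = 30            # retention TTL (default illustrative)
    llm_raw_traces_batch_size: int = 100         # writer batch size (name assumed)
    llm_raw_traces_flush_interval: float = 5.0   # seconds between flushes (name assumed)
```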
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: address code review feedback for LLM raw traces
Fixes based on code review feedback:
1. Fix ClickHouse endpoint parsing - default to secure=False for raw host:port
inputs (was defaulting to HTTPS which breaks local dev)
2. Make raw trace writes truly fire-and-forget - use asyncio.create_task()
instead of awaiting, so JSON serialization doesn't block request path
3. Add bounded queue (maxsize=10000) - prevents unbounded memory growth
under load. Drops traces with warning if queue is full.
4. Fix deprecated asyncio usage - get_running_loop() instead of get_event_loop()
5. Add org_id fallback - use _telemetry_org_id if actor doesn't have it
6. Remove unused imports - json import in reader
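A minimal sketch of the fire-and-forget path from items 2-4, with assumed names (the real code lives in llm_raw_trace_writer.py):

```python
import asyncio
import json
import logging

logger = logging.getLogger(__name__)

class LLMRawTraceWriter:
    def __init__(self):
        # Item 3: bounded queue caps memory growth under load.
        self._queue: asyncio.Queue = asyncio.Queue(maxsize=10000)

    def write(self, trace: dict) -> None:
        # Item 2: fire-and-forget; JSON serialization runs inside the task,
        # not on the request path. (Item 4: any loop access goes through
        # asyncio.get_running_loop(), never the deprecated get_event_loop().)
        asyncio.create_task(self._serialize_and_enqueue(trace))

    async def _serialize_and_enqueue(self, trace: dict) -> None:
        row = json.dumps(trace)  # potentially large payload
        try:
            self._queue.put_nowait(row)
        except asyncio.QueueFull:
            # Item 3: drop with a warning rather than block the caller.
            logger.warning("LLM raw trace queue full; dropping trace")
```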
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: add missing asyncio import and simplify JSON serialization
- Add missing 'import asyncio' that was causing a "name 'asyncio' is not defined" error
- Remove unnecessary clean_double_escapes() function - the JSON is stored correctly,
the clickhouse-client CLI was just adding extra escaping when displaying
- Update just clickhouse-trace to use Python client for correct JSON output
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* test: add clickhouse raw trace integration test
* test: simplify clickhouse trace assertions
* refactor: centralize usage parsing and stream error traces
Use per-client usage helpers for raw trace extraction and ensure streaming errors log requests with error metadata.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* test: exercise provider usage parsing live
Make live OpenAI/Anthropic/Gemini requests with credential gating and validate Anthropic cache usage mapping when present.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* test: fix usage parsing tests to pass
- Use GoogleAIClient with GEMINI_API_KEY instead of GoogleVertexClient
- Update model to gemini-2.0-flash (1.5-flash deprecated in v1beta)
- Add tools=[] for Gemini/Anthropic build_request_data
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* refactor: extract_usage_statistics returns LettaUsageStatistics
Standardize on LettaUsageStatistics as the canonical usage format returned by client helpers. Inline UsageStatistics construction for ChatCompletionResponse where needed.
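A hedged sketch of the standardized helper over an OpenAI-style usage payload; the field set on LettaUsageStatistics here is an assumption:

```python
from pydantic import BaseModel

class LettaUsageStatistics(BaseModel):  # assumed minimal shape
    prompt_tokens: int = 0
    completion_tokens: int = 0
    total_tokens: int = 0

def extract_usage_statistics(response_json: dict) -> LettaUsageStatistics:
    usage = response_json.get("usage") or {}
    return LettaUsageStatistics(
        prompt_tokens=usage.get("prompt_tokens", 0),
        completion_tokens=usage.get("completion_tokens", 0),
        total_tokens=usage.get("total_tokens", 0),
    )
```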
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* feat: add is_byok and llm_config_json columns to ClickHouse traces
Extend llm_raw_traces table with:
- is_byok (UInt8): Track BYOK vs base provider usage for billing analytics
- llm_config_json (String, ZSTD): Store full LLM config for debugging and analysis
This enables queries like:
- BYOK usage breakdown by provider/model
- Config parameter analysis (temperature, max_tokens, etc.)
- Debugging specific request configurations
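An example of the BYOK-breakdown query via clickhouse_connect (the endpoint is illustrative; the table and columns come from this PR's DDL):

```python
import clickhouse_connect

client = clickhouse_connect.get_client(host="localhost")  # illustrative endpoint
result = client.query(
    """
    SELECT provider, model, is_byok, count() AS calls
    FROM llm_raw_traces
    GROUP BY provider, model, is_byok
    ORDER BY calls DESC
    """
)
for row in result.result_rows:
    print(row)
```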
* feat: add tests for error traces, llm_config_json, and cache tokens
- Update llm_raw_trace_reader.py to query new columns (is_byok,
cached_input_tokens, cache_write_tokens, reasoning_tokens, llm_config_json)
- Add test_error_trace_stored_in_clickhouse to verify error fields
- Add test_cache_tokens_stored_for_anthropic to verify cache token storage
- Update existing tests to verify llm_config_json is stored correctly
- Make llm_config required in log_provider_trace_async()
- Simplify provider extraction to use provider_name directly
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* ci: add ClickHouse integration tests to CI pipeline
- Add use-clickhouse option to reusable-test-workflow.yml
- Add ClickHouse service container with otel database
- Add schema initialization step using clickhouse-init.sql
- Add ClickHouse env vars (CLICKHOUSE_ENDPOINT, etc.)
- Add separate clickhouse-integration-tests job running
integration_test_clickhouse_llm_raw_traces.py
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* refactor: simplify provider and org_id extraction in raw trace writer
- Use model_endpoint_type.value for provider (not provider_name)
- Simplify org_id to just self.actor.organization_id (actor is always a Pydantic model)
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* refactor: simplify LLMRawTraceWriter with _enabled flag
- Check ClickHouse env vars once at init, set _enabled flag
- Early return in write_async/flush_async if not enabled
- Remove ValueError raises (never used)
- Simplify _get_client (no validation needed since already checked)
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: add LLMRawTraceWriter shutdown to FastAPI lifespan
Properly flush pending traces on graceful shutdown via lifespan
instead of relying only on atexit handler.
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* feat: add agent_tags column to ClickHouse traces
Store agent tags as Array(String) for filtering/analytics by tag.
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* cleanup
* fix(ci): fix ClickHouse schema initialization in CI
- Create database separately before loading SQL file
- Remove CREATE DATABASE from SQL file (handled in CI step)
- Add verification step to confirm table was created
- Use -sf flag for curl to fail on HTTP errors
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* refactor: simplify LLM trace writer with ClickHouse async_insert
- Use ClickHouse async_insert for server-side batching instead of manual queue/flush loop
- Sync cloud DDL schema with clickhouse-init.sql (add missing columns)
- Remove redundant llm_raw_traces_local.ddl
- Remove unused batch_size/flush_interval settings
- Update tests for simplified writer
Key changes:
- async_insert=1, wait_for_async_insert=1 for reliable server-side batching
- Simple per-trace retry with exponential backoff (max 3 retries)
- ~150 lines removed from writer
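A sketch of the per-trace insert with async_insert and exponential backoff, assuming clickhouse_connect; names mirror this description, not the exact code:

```python
import asyncio
import clickhouse_connect

INSERT_SETTINGS = {"async_insert": 1, "wait_for_async_insert": 1}  # server-side batching

async def insert_trace(client, row: list, columns: list[str]) -> None:
    delay = 0.5
    for attempt in range(3):  # max 3 retries
        try:
            client.insert(
                "llm_raw_traces", [row], column_names=columns,
                settings=INSERT_SETTINGS,
            )
            return
        except Exception:
            if attempt == 2:
                raise
            await asyncio.sleep(delay)
            delay *= 2  # exponential backoff
```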
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* refactor: consolidate ClickHouse direct writes into TelemetryManager backend
- Add clickhouse_direct backend to provider_trace_backends
- Remove duplicate ClickHouse write logic from llm_client_base.py
- Configure via LETTA_TELEMETRY_PROVIDER_TRACE_BACKEND=postgres,clickhouse_direct
The clickhouse_direct backend:
- Converts ProviderTrace to LLMRawTrace
- Extracts usage stats from response JSON
- Writes via LLMRawTraceWriter with async_insert
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* refactor: address PR review comments and fix llm_config bug
Review comment fixes:
- Rename clickhouse_direct -> clickhouse_analytics (clearer purpose)
- Remove ClickHouse from OSS compose.yaml, create separate compose.clickhouse.yaml
- Delete redundant scripts/test_llm_raw_traces.py (use pytest tests)
- Remove unused llm_raw_traces_ttl_days setting (TTL handled in DDL)
- Fix socket description leak in telemetry_manager docstring
- Add cloud-only comment to clickhouse-init.sql
- Update justfile to use separate compose file
Bug fix:
- Fix llm_config not being passed to ProviderTrace in telemetry
- Now correctly populates provider, model, is_byok for all LLM calls
- Affects both request_async_with_telemetry and log_provider_trace_async
DDL optimizations:
- Add secondary indexes (bloom_filter for agent_id, model, step_id)
- Add minmax indexes for is_byok, is_error
- Change model and error_type to LowCardinality for faster GROUP BY
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* refactor: rename llm_raw_traces -> llm_traces
Address review feedback that "raw" is misleading since we denormalize fields.
Renames:
- Table: llm_raw_traces -> llm_traces
- Schema: LLMRawTrace -> LLMTrace
- Files: llm_raw_trace_{reader,writer}.py -> llm_trace_{reader,writer}.py
- Setting: store_llm_raw_traces -> store_llm_traces
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: update workflow references to llm_traces
Missed renaming table name in CI workflow files.
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: update clickhouse_direct -> clickhouse_analytics in docstring
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* chore: remove inaccurate OTEL size limit comments
The 4MB limit is our own truncation logic, not an OTEL protocol limit.
The real benefit is denormalized columns for analytics queries.
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* chore: remove local ClickHouse dev setup (cloud-only feature)
- Delete clickhouse-init.sql and compose.clickhouse.yaml
- Remove local clickhouse just commands
- Update CI to use cloud DDL with MergeTree for testing
clickhouse_analytics is a cloud-only feature. For local dev, use postgres backend.
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: restore compose.yaml to match main
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* refactor: merge clickhouse_analytics into clickhouse backend
Per review feedback - having two separate backends was confusing.
Now the clickhouse backend:
- Writes to llm_traces table (denormalized for cost analytics)
- Reads from OTEL traces table (will cut over to llm_traces later)
Config: LETTA_TELEMETRY_PROVIDER_TRACE_BACKEND=postgres,clickhouse
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: correct path to DDL file in CI workflow
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* chore: add provider index to DDL for faster filtering
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: configure telemetry backend in clickhouse tests
Tests need to set telemetry_settings.provider_trace_backends to include
'clickhouse', otherwise traces are routed to default postgres backend.
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: set provider_trace_backend field, not property
provider_trace_backends is a computed property, so we need to set the
underlying provider_trace_backend string field instead.
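A sketch of why the fixture must set the field, not the property (names from this commit; the structure is an assumption):

```python
from pydantic import computed_field
from pydantic_settings import BaseSettings

class TelemetrySettings(BaseSettings):
    provider_trace_backend: str = "postgres"  # the underlying string field

    @computed_field  # derived; has no setter
    @property
    def provider_trace_backends(self) -> list[str]:
        return [b.strip() for b in self.provider_trace_backend.split(",")]

settings = TelemetrySettings()
settings.provider_trace_backend = "postgres,clickhouse"   # works
# settings.provider_trace_backends = [...]  # AttributeError: no setter
```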
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: error trace test and error_type extraction
- Add TelemetryManager to error trace test so traces get written
- Fix error_type extraction to check top-level before nested error dict
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: use provider_trace.id for trace correlation across backends
- Pass provider_trace.id to LLMTrace instead of auto-generating
- Log warning if ID is missing (shouldn't happen, helps debug)
- Fallback to new UUID only if not set
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: trace ID correlation and concurrency issues
- Strip "provider_trace-" prefix from ID for UUID storage in ClickHouse
- Add asyncio.Lock to serialize writes (clickhouse_connect not thread-safe)
- Fix Anthropic prompt_tokens to include cached tokens for cost analytics
- Log warning if provider_trace.id is missing
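A sketch of the correlation and serialization fixes, with assumed names:

```python
import asyncio

PROVIDER_TRACE_PREFIX = "provider_trace-"

class LLMTraceWriter:  # illustrative shape
    def __init__(self, client):
        self._client = client
        self._lock = asyncio.Lock()  # clickhouse_connect client is not thread-safe

    async def write(self, provider_trace, row: dict) -> None:
        # Strip the Letta ID prefix so the value fits a ClickHouse UUID column.
        row["id"] = provider_trace.id.removeprefix(PROVIDER_TRACE_PREFIX)
        async with self._lock:  # serialize writes through the shared client
            self._client.insert("llm_traces", [list(row.values())],
                                column_names=list(row.keys()))
```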
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
---------
Co-authored-by: Letta <noreply@letta.com>
Co-authored-by: Caren Thomas <carenthomas@gmail.com>
fix: use shared event + .athrow() to properly set stream_was_cancelled flag
**Problem:**
When a run is cancelled via /cancel endpoint, `stream_was_cancelled` remained
False because `RunCancelledException` was raised in the consumer code (wrapper),
which closes the generator from outside. This causes Python to skip the
generator's except blocks and jump directly to finally with the wrong flag value.
**Solution:**
1. Shared `asyncio.Event` registry for cross-layer cancellation signaling
2. `cancellation_aware_stream_wrapper` sets the event when cancellation detected
3. Wrapper uses `.athrow()` to inject exception INTO generator (not consumer-side raise)
4. All streaming interfaces check event in `finally` block to set flag correctly
5. `streaming_service.py` handles `RunCancelledException` gracefully, yields [DONE]
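A self-contained sketch of points 1-3; `RunCancelledException` and the registry name are illustrative, and the real wrapper is `cancellation_aware_stream_wrapper`:

```python
import asyncio

class RunCancelledException(Exception):
    pass

# 1. Shared event registry keyed by run_id, set when cancellation is detected.
cancellation_events: dict[str, asyncio.Event] = {}

async def cancellation_aware_wrapper(run_id: str, gen):
    event = cancellation_events.setdefault(run_id, asyncio.Event())
    try:
        async for chunk in gen:
            if event.is_set():
                # 3. Throw INTO the generator: its except/finally blocks run,
                # so it can set stream_was_cancelled and yield [DONE] itself.
                try:
                    yield await gen.athrow(RunCancelledException())
                except StopAsyncIteration:
                    pass  # generator ended without yielding a final event
                return
            yield chunk
    finally:
        cancellation_events.pop(run_id, None)
```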
**Changes:**
- streaming_response.py: Event registry + .athrow() injection + graceful handling
- openai_streaming_interface.py: 3 classes check event in finally
- gemini_streaming_interface.py: Check event in finally
- anthropic_*.py: Catch RunCancelledException
- simple_llm_stream_adapter.py: Create & pass event to interfaces
- streaming_service.py: Handle RunCancelledException, yield [DONE], skip double-update
- routers/v1/{conversations,runs}.py: Pass event to wrapper
- integration_test_human_in_the_loop.py: New test for approval + cancellation
**Tests:**
- test_tool_call with cancellation (OpenAI models) ✅
- test_approve_with_cancellation (approval flow + concurrent cancel) ✅
**Known cosmetic warnings (pre-existing):**
- "Run already in terminal state" - agent loop tries to update after /cancel
- "Stream ended without terminal event" - background streaming timing race
👾 Generated with [Letta Code](https://letta.com)
Co-authored-by: Letta <noreply@letta.com>
* feat: centralize telemetry logging at LLM client level
Moves telemetry logging from individual adapters to LLMClientBase:
- Add TelemetryStreamWrapper for streaming telemetry on stream close
- Add request_async_with_telemetry() for non-streaming requests
- Add stream_async_with_telemetry() for streaming requests
- Add set_telemetry_context() to configure agent_id, run_id, step_id
Updates adapters and agents to use new pattern:
- LettaLLMAdapter now accepts agent_id/run_id in constructor
- Adapters call set_telemetry_context() before LLM requests
- Removes duplicate telemetry logging from adapters
- Enriches traces with agent_id, run_id, call_type metadata
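Rough shape of the centralized pattern (a sketch; signatures and the helper name are assumed):

```python
class LLMClientBase:
    """Hedged sketch; the real methods live in letta/llm_api/llm_client_base.py."""

    def set_telemetry_context(self, agent_id=None, run_id=None, step_id=None):
        # Stashed once by the adapter so every request can enrich its trace.
        self._telemetry = {"agent_id": agent_id, "run_id": run_id,
                           "step_id": step_id}

    async def request_async_with_telemetry(self, request_data: dict):
        response = await self.request_async(request_data)  # provider call (assumed)
        await self._log_provider_trace(request_data, response,  # assumed helper
                                       **getattr(self, "_telemetry", {}))
        return response
```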
🐙 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: accumulate streaming response content for telemetry
TelemetryStreamWrapper now extracts actual response data from chunks:
- Content text (concatenated from deltas)
- Tool calls (id, name, arguments)
- Model name, finish reason, usage stats
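A sketch of the accumulation over OpenAI-style chunks (delta shapes assumed):

```python
def accumulate(chunks):
    content, tool_calls, model, finish_reason, usage = [], {}, None, None, None
    for chunk in chunks:
        model = chunk.model or model
        usage = chunk.usage or usage  # only the final chunk carries usage
        for choice in chunk.choices:
            delta = choice.delta
            finish_reason = choice.finish_reason or finish_reason
            if delta.content:
                content.append(delta.content)  # concatenate text deltas
            for tc in delta.tool_calls or []:
                slot = tool_calls.setdefault(tc.index, {"id": None, "name": "",
                                                        "arguments": ""})
                slot["id"] = tc.id or slot["id"]
                if tc.function:
                    slot["name"] += tc.function.name or ""
                    slot["arguments"] += tc.function.arguments or ""
    return {"content": "".join(content), "tool_calls": list(tool_calls.values()),
            "model": model, "finish_reason": finish_reason, "usage": usage}
```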
🐙 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* refactor: move streaming telemetry to caller (option 3)
- Remove TelemetryStreamWrapper class
- Add log_provider_trace_async() helper to LLMClientBase
- stream_async_with_telemetry() now just returns raw stream
- Callers log telemetry after processing with rich interface data
Updated callers:
- summarizer.py: logs content + usage after stream processing
- letta_agent.py: logs tool_call, reasoning, model, usage
🐙 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: pass agent_id and run_id to parent adapter class
LettaLLMStreamAdapter was not passing agent_id/run_id to parent,
causing "unexpected keyword argument" errors.
🐙 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
---------
Co-authored-by: Letta <noreply@letta.com>
* feat: add provider trace backend abstraction for multi-backend telemetry
Introduces a pluggable backend system for provider traces:
- Base class with async/sync create and read interfaces
- PostgreSQL backend (existing behavior)
- ClickHouse backend (via OTEL instrumentation)
- Socket backend (writes to Unix socket for crouton sidecar)
- Factory for instantiating backends from config
Refactors TelemetryManager to use backends with support for:
- Multi-backend writes (concurrent via asyncio.gather)
- Primary backend for reads (first in config list)
- Graceful error handling per backend
Config: LETTA_TELEMETRY_PROVIDER_TRACE_BACKEND (comma-separated)
Example: "postgres,socket" for dual-write to Postgres and crouton
🐙 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* feat: add protocol version to socket backend records
Adds PROTOCOL_VERSION constant to socket backend:
- Included in every telemetry record sent to crouton
- Must match ProtocolVersion in apps/crouton/main.go
- Enables crouton to detect and reject incompatible messages
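A sketch of a versioned record; the field name, value, and framing are assumptions, only the constant's role is from this commit:

```python
import json

PROTOCOL_VERSION = 1  # illustrative; must match ProtocolVersion in apps/crouton/main.go

def encode_record(trace: dict) -> bytes:
    record = {"protocol_version": PROTOCOL_VERSION, **trace}
    return json.dumps(record).encode() + b"\n"  # newline-delimited JSON on the socket
```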
🐙 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: remove organization_id from ProviderTraceCreate calls
The organization_id is now handled via the actor parameter in the
telemetry manager, not through ProviderTraceCreate schema. This fixes
validation errors after changing ProviderTraceCreate to inherit from
BaseProviderTrace which forbids extra fields.
🐙 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* consolidate provider trace
* add clickhouse-connect dependency to fix a bug on main
* auto-generated SDK changes, deployment details, clickhouse prefix bug fix, and added fields to the runs trace return API
* consolidate provider trace
* consolidate provider trace bug fix
---------
Co-authored-by: Letta <noreply@letta.com>
* fix(core): sanitize messages sent to Anthropic in the main path the same way (or similarly to how) we do it in the token counter
* fix: also patch poison error in backend by filtering lazily
* fix: remap streaming errors
* fix: dedupe tool calls
* fix: cleanup, removed try/catch
* first hack
* clean up
* first implementation working
* revert package-lock
* remove openai test
* error throw
* typo
* Update integration_test_send_message_v2.py
* Update integration_test_send_message_v2.py
* refine test
* Only make changes for openai non streaming
* Add tests
---------
Co-authored-by: Ari Webb <ari@letta.com>
Co-authored-by: Matt Zhou <mattzh1314@gmail.com>
* wip
* Fix parallel tool calling interface
* wip
* wip adapt using id field
* Integrate new multi tool return schemas into parallel tool calling
* Remove example script
* Reset changes to llm stream adapter since old agent loop should not enable parallel tool calling
* Clean up fallback logic for extracting tool calls
* Remove redundant check
* Simplify logic
* Clean up logic in handle ai response
* Fix tests
* Write Anthropic dict conversion to be backwards compatible
* wip
* Double write tool call id for legacy reasons
* Fix override args failures
* Patch for approvals
* Revert comments
* Remove extraneous prints
* feat: add gemini streaming to new agent loop
* add google as required dependency
* support storing all content parts
* remove extra google references
* feat: squash rebase of OSS PR
* fix: revert changes that weren't on manual rebase
* fix: caught another one
* fix: disable force
* chore: drop print
* fix: just stage-api && just publish-api
* fix: make agent_type consistently an arg in the client
* fix: patch multi-modal support
* chore: put in todo stub
* fix: disable hardcoding for tests
* fix: patch validate agent sync (#4882)
patch validate agent sync
* fix: strip bad merge diff
* fix: revert unrelated diff
* fix: react_v2 naming -> letta_v1 naming
* fix: strip bad merge
---------
Co-authored-by: Kevin Lin <klin5061@gmail.com>
* remove apps/core and apps/fern
* fix precommit
* add submodule updates in workflows
* submodule
* remove core tests
* update core revision
* Add submodules: true to all GitHub workflows
- Ensure all workflows can access git submodules
- Add submodules support to deployment, test, and CI workflows
- Fix YAML syntax issues in workflow files
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* remove core-lint
* upgrade core with latest main of oss
---------
Co-authored-by: Claude <noreply@anthropic.com>