Commit Graph

2072 Commits

Author SHA1 Message Date
cthomas
aeeec41859 feat: new agent id query param for default convo (#9756)
* feat: new agent id query param for default convo

* update stainless
2026-03-03 18:34:15 -08:00
cthomas
416ffc7cd7 Add billing context to LLM telemetry traces (#9745)
* feat: add billing context to LLM telemetry traces

Add billing metadata (plan type, cost source, customer ID) to LLM traces in ClickHouse for cost analytics and attribution.

**Data Flow:**
- Cloud-API: Extract billing info from subscription in rate limiting, set x-billing-* headers
- Core: Parse headers into BillingContext object via dependencies
- Adapters: Flow billing_context through all LLM adapters (blocking & streaming)
- Agent: Pass billing_context to step() and stream() methods
- ClickHouse: Store in billing_plan_type, billing_cost_source, billing_customer_id columns

**Changes:**
- Add BillingContext schema to provider_trace.py
- Add billing columns to llm_traces ClickHouse table DDL
- Update getCustomerSubscription to fetch stripeCustomerId from organization_billing_details
- Propagate billing_context through agent step flow, adapters, and streaming service
- Update ProviderTrace and LLMTrace to include billing metadata
- Regenerate SDK with autogen

**Production Deployment:**
Requires env vars: LETTA_PROVIDER_TRACE_BACKEND=clickhouse, LETTA_STORE_LLM_TRACES=true, CLICKHOUSE_*

🐾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix: add billing_context parameter to agent step methods

- Add billing_context to BaseAgent and BaseAgentV2 abstract methods
- Update LettaAgent, LettaAgentV2, LettaAgentV3 step methods
- Update multi-agent groups: SleeptimeMultiAgentV2, V3, V4
- Fix test_utils.py to include billing header parameters
- Import BillingContext in all affected files

* fix: add billing_context to stream methods

- Add billing_context parameter to BaseAgentV2.stream()
- Add billing_context parameter to LettaAgentV2.stream()
- LettaAgentV3.stream() already has it from previous commit

* fix: exclude billing headers from OpenAPI spec

Mark billing headers as internal (include_in_schema=False) so they don't appear in the public API.
These are internal headers between cloud-api and core, not part of the public SDK.

Regenerated SDK with stage-api - removes 10,650 lines of bloat that was causing OOM during Next.js build.

* refactor: return billing context from handleUnifiedRateLimiting instead of mutating req

Instead of passing req into handleUnifiedRateLimiting and mutating headers inside it:
- Return billing context fields (billingPlanType, billingCostSource, billingCustomerId) from handleUnifiedRateLimiting
- Set headers in handleMessageRateLimiting (middleware layer) after getting the result
- This fixes step-orchestrator compatibility since it doesn't have a real Express req object

* chore: remove extra gencode

* p

---------

Co-authored-by: Letta <noreply@letta.com>
2026-03-03 18:34:13 -08:00
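
A minimal sketch of the header-parsing dependency described above — the exact header and field names are assumptions based on the commit text, which names the BillingContext schema and the x-billing-* headers:

    from typing import Optional

    from fastapi import Header
    from pydantic import BaseModel

    class BillingContext(BaseModel):
        plan_type: Optional[str] = None
        cost_source: Optional[str] = None
        customer_id: Optional[str] = None

    async def get_billing_context(
        # include_in_schema=False keeps these cloud-api-internal headers
        # out of the public OpenAPI spec, as the third commit bullet notes
        x_billing_plan_type: Optional[str] = Header(None, include_in_schema=False),
        x_billing_cost_source: Optional[str] = Header(None, include_in_schema=False),
        x_billing_customer_id: Optional[str] = Header(None, include_in_schema=False),
    ) -> Optional[BillingContext]:
        if not any([x_billing_plan_type, x_billing_cost_source, x_billing_customer_id]):
            return None
        return BillingContext(
            plan_type=x_billing_plan_type,
            cost_source=x_billing_cost_source,
            customer_id=x_billing_customer_id,
        )
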
jnjpng
db9e0f42af fix(core): prevent ModelSettings default max_output_tokens from overriding agent config (#9739)
* fix(core): prevent ModelSettings default max_output_tokens from overriding agent config

When a conversation's model_settings were saved, the Pydantic default
of max_output_tokens=4096 was always persisted to the DB even when the
client never specified it. On subsequent messages, this default would
overwrite the agent's max_tokens (typically None) with 4096, silently
capping output.

Two changes:
1. Use model_dump(exclude_unset=True) when persisting model_settings
   to the DB so Pydantic defaults are not saved.
2. Add model_fields_set guards at all callsites that apply
   _to_legacy_config_params() to skip max_tokens when it was not
   explicitly provided by the caller.

Also conditionally set max_output_tokens in the OpenAI Responses API
request builder so None is not sent as null (which some models treat
as a hard 4096 cap).

* nit

* Fix model_settings serialization to preserve provider_type discriminator

Replace blanket exclude_unset=True with targeted removal of only
max_output_tokens when not explicitly set. The previous approach
stripped the provider_type field (a Literal with a default), which
broke discriminated union deserialization when reading back from DB.
2026-03-03 18:34:02 -08:00
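
The fix turns on Pydantic v2's distinction between field defaults and explicitly set values. A minimal sketch of the final approach (function name hypothetical): drop only the unset max_output_tokens rather than excluding every unset field:

    def serialize_model_settings(settings) -> dict:
        # exclude_unset=True would also strip provider_type (a Literal with
        # a default), breaking discriminated-union deserialization when the
        # settings are read back from the DB
        data = settings.model_dump()
        if "max_output_tokens" not in settings.model_fields_set:
            # the client never set it; don't persist the 4096 default
            data.pop("max_output_tokens", None)
        return data
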
Ari Webb
8335aa0fa0 fix: add some more logging for interrupts (#9733) 2026-03-03 18:34:02 -08:00
Christina Tong
c8ae02a1fb feat(core): sort agents by updated_at [LET-7771] (#9730)
feat(core): sort agents by last_updated_at
2026-03-03 18:34:02 -08:00
amysguan
8e60b73eee fix: minor change in upsert logic for prompt default (#9729)
minor compaction upsert change
2026-03-03 18:34:02 -08:00
amysguan
c28ba77354 Fix: ADE compaction button compacts current conversation (#9720)
* ADE compaction button compacts current conversation, update conversation endpoint

* update name (summerizer --> summarizer), type fixes

* bug fix for conversation + self_compact_sliding_window

* chore: add French translations for AgentSimulatorOptionsMenu

Add missing French translations for the AgentSimulatorOptionsMenu
section to match en.json changes.

Co-authored-by: Christina Tong <christinatong01@users.noreply.github.com>

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* retrigger CI

* error typefix

---------

Co-authored-by: letta-code <248085862+letta-code@users.noreply.github.com>
Co-authored-by: Letta <noreply@letta.com>
2026-03-03 18:34:02 -08:00
Sarah Wooders
a50482e6d3 feat(core): sync skills from SKILL.md into memFS blocks (#9718) 2026-03-03 18:34:02 -08:00
cthomas
ab784e702d feat: support default convo in list and cancel endpoints (#9707)
* feat: support default convo in list and cancel endpoints

* also support in compact endpoint

* api sync
2026-03-03 18:34:02 -08:00
cthomas
39a537a9a5 feat: add default convo support to conversations endpoint (#9706)
* feat: add default convo support to conversations endpoint

* api sync
2026-03-03 18:34:02 -08:00
Sarah Wooders
57e7e0e52b feat(core): reserve skills in memfs sync and list top-level skill directory [LET-7710] (#9691) 2026-03-03 18:34:02 -08:00
cthomas
28a66fa9d7 chore: remove stmt timeout debug logging (#9693) 2026-03-03 18:34:02 -08:00
jnjpng
46971414a4 fix: preserve agent max_tokens when caller doesn't explicitly set it (#9679)
* fix: preserve agent max_tokens when caller doesn't explicitly set it

When updating an agent with convenience fields (model, model_settings)
but without an explicit max_tokens, the server was constructing a fresh
LLMConfig via get_llm_config_from_handle_async. The Pydantic validator
on LLMConfig hardcodes max_tokens=16384 for gpt-5* models, silently
overriding the agent's existing value (e.g. 128000).

This was triggered by reasoning tab-switch in the CLI, which sends
model + model_settings (with reasoning_effort) but no max_tokens.

Now, when request.max_tokens is None we carry forward the agent's
current max_tokens instead of accepting the provider default.

* fix: use correct 128k max_output_tokens defaults for gpt-5.2/5.3

- Update OpenAI provider fallback to return 128000 for gpt-5.2*/5.3*
  models (except -chat variants which are 16k)
- Update LLMConfig Pydantic validator to match
- Update gpt-5.2 default_config factory to use 128000
- Move server-side max_tokens preservation guard into the
  model_settings branch where llm_config is already available
2026-03-03 18:34:01 -08:00
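
A minimal sketch of the carry-forward guard described above, with hypothetical request/agent shapes:

    def resolve_max_tokens(request, agent, fresh_llm_config):
        # keep the agent's existing max_tokens when the caller didn't set
        # one, instead of accepting the validator default (e.g. 16384)
        if request.max_tokens is None and agent.llm_config.max_tokens is not None:
            fresh_llm_config.max_tokens = agent.llm_config.max_tokens
        return fresh_llm_config
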
cthomas
9422b2d993 fix: set otid for all approval request message (#9655) 2026-03-03 18:34:01 -08:00
amysguan
47b0c87ebe Add modes self and self_sliding_window for prompt caching (#9372)
* add self compaction method with proper caching (pass in tools, don't refresh sys prompt beforehand) + sliding fallback

* updated prompts for self compaction

* add tests for self, self_sliding_window modes and w/o refresh messages before compaction

* add cache logging to summarization

* better handling to prevent agent from continuing convo on self modes

* if mode changes via summarize endpoint, will use default prompt for the new mode

---------

Co-authored-by: Amy Guan <amy@letta.com>
2026-02-24 10:55:26 -08:00
cthomas
8ab9d78a23 chore: cleanup (#9602)
* chore: cleanup

* update dependencies
2026-02-24 10:55:26 -08:00
Shubham Naik
f082fd5061 feat: add order_by and order params to /v1/conversations list endpoint [LET-7628] (#9599)
* feat: add order_by and order params to /v1/conversations list endpoint [LET-7628]

Added sorting support to the conversations list endpoint, matching the pattern from /v1/agents.

**API Changes:**
- Added `order` query param: "asc" or "desc" (default: "desc")
- Added `order_by` query param: "created_at" or "last_run_completion" (default: "created_at")

**Implementation:**

**created_at ordering:**
- Simple ORDER BY on ConversationModel.created_at
- No join required, fast query
- Nulls not applicable (created_at always set)

**last_run_completion ordering:**
- LEFT JOIN with runs table using subquery
- Subquery: MAX(completed_at) grouped by conversation_id
- Uses OUTER JOIN so conversations with no runs are included
- Nulls last ordering (conversations with no runs go to end)
- Index on runs.conversation_id ensures performant join

**Pagination:**
- Cursor-based pagination with `after` parameter
- Handles null values correctly for last_run_completion
- For created_at: simple timestamp comparison
- For last_run_completion: complex null-aware cursor logic

**Performance:**
- Existing index: `ix_runs_conversation_id` on runs table
- Subquery with GROUP BY is efficient for this use case
- OUTER JOIN ensures conversations without runs are included

**Follows agents pattern:**
- Same parameter names (order, order_by)
- Same Literal types and defaults
- Converts "asc"/"desc" to ascending boolean internally

🐾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* chore: order

---------

Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:55:26 -08:00
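
A minimal SQLAlchemy sketch of the last_run_completion ordering; RunModel and ConversationModel are assumed stand-ins for the real ORM classes:

    from sqlalchemy import func, nulls_last, select

    # subquery: MAX(completed_at) grouped by conversation_id
    last_run = (
        select(
            RunModel.conversation_id,
            func.max(RunModel.completed_at).label("last_run_completion"),
        )
        .group_by(RunModel.conversation_id)
        .subquery()
    )

    query = (
        select(ConversationModel)
        # OUTER JOIN keeps conversations that have no runs at all
        .outerjoin(last_run, last_run.c.conversation_id == ConversationModel.id)
        # conversations with no runs sort to the end
        .order_by(nulls_last(last_run.c.last_run_completion.desc()))
    )
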
Sarah Wooders
afbc416972 feat(core): add model/model_settings override fields to conversation create/update (#9607) 2026-02-24 10:55:26 -08:00
jnjpng
9155b4fa86 fix: use canonical stop reason mapping in redis stream finalizer (#9600)
fix: derive run status from StopReasonType mapping
2026-02-24 10:55:12 -08:00
github-actions[bot]
ba67621e1b feat: add conversation deletion endpoint (soft delete) [LET-7286] (#9230)
* feat: add conversation deletion endpoint (soft delete) [LET-7286]

- Add DELETE /conversations/{conversation_id} endpoint
- Filter soft-deleted conversations from list operations
- Add check_is_deleted=True to update/delete operations

Co-authored-by: Sarah Wooders <sarahwooders@users.noreply.github.com>

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* feat: add tests, update SDK and stainless for delete conversation

- Add 5 integration tests for DELETE conversation endpoint
- Run stage-api to regenerate OpenAPI spec and SDK
- Add delete method to conversations in stainless.yml

Co-authored-by: Sarah Wooders <sarahwooders@users.noreply.github.com>

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* test: add manager-level tests for conversation soft delete [LET-7286]

- test_delete_conversation_removes_from_list
- test_delete_conversation_double_delete_raises
- test_update_deleted_conversation_raises
- test_delete_conversation_excluded_from_summary_search

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

---------

Co-authored-by: letta-code <248085862+letta-code@users.noreply.github.com>
Co-authored-by: Letta <noreply@letta.com>
Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
2026-02-24 10:55:12 -08:00
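
A minimal sketch of the soft-delete pattern; the is_deleted column and read_async signature are assumptions built around the check_is_deleted flag the commit names:

    async def delete_conversation(self, session, conversation_id: str, actor) -> None:
        convo = await ConversationModel.read_async(
            session, identifier=conversation_id, actor=actor, check_is_deleted=True
        )
        convo.is_deleted = True   # soft delete: the row stays in place
        await session.commit()

    # list paths add a filter so deleted conversations never surface:
    # query = query.where(ConversationModel.is_deleted == False)
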
Shubham Naik
73c824f5d2 feat: make agent_id optional in conversations list endpoint [LET-7612] (#9585)
* feat: make agent_id optional in conversations list endpoint [LET-7612]

Allow listing all conversations without filtering by agent_id.

**Router changes (conversations.py):**
- Changed agent_id from required (`...`) to optional (`None`)
- Updated description to clarify behavior
- Updated docstring to reflect optional filtering

**Manager changes (conversation_manager.py):**
- Updated list_conversations signature: agent_id: str → Optional[str]
- Updated docstring to clarify optional behavior
- Summary search query: conditionally adds agent_id filter only if provided
- Default list logic: passes agent_id (can be None) to list_async

**How it works:**
- Without agent_id: returns all conversations for the user's organization
- With agent_id: returns conversations filtered by that agent
- list_async handles None gracefully via **kwargs pattern

**Use case:**
- Cloud UI can list all user conversations across agents
- Still supports filtering by agent_id when needed

🐾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* chore: update logs

* chore: update logs

---------

Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:55:12 -08:00
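
A minimal sketch of the optional-filter pattern the commit describes, where a None agent_id simply omits the filter via kwargs:

    from typing import Optional

    async def list_conversations(self, actor, agent_id: Optional[str] = None, **paging):
        filters = {"organization_id": actor.organization_id}
        if agent_id is not None:
            filters["agent_id"] = agent_id   # filter only when provided
        return await ConversationModel.list_async(**filters, **paging)
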
jnjpng
257b99923b fix: preserve max_tokens on model_settings updates without max_output_tokens (#9591)
When model_settings is sent without max_output_tokens (e.g. only
changing reasoning_effort), the Pydantic default of 4096 was being
applied via _to_legacy_config_params(), silently overwriting the
agent's existing max_tokens.

Use model_fields_set to detect when max_output_tokens was not
explicitly provided and skip overwriting max_tokens in that case.
Only applied to the update path — on create, letting the default
apply is reasonable since there's no pre-existing value.
2026-02-24 10:55:12 -08:00
cthomas
73c9b14fa9 fix: dont throw error if compaction races (#9576) 2026-02-24 10:55:12 -08:00
amysguan
33969d7190 Default to lightweight compaction model instead of agent's model (#9488)
---------

Co-authored-by: Amy Guan <amy@letta.com>
2026-02-24 10:55:12 -08:00
jnjpng
828c89c76f fix: populate max_tokens when listing LLM models (#9559)
list_llm_models_async was constructing LLMConfig without max_tokens,
causing the GET /models/ endpoint to return null for max_tokens.
Now calls typed_provider.get_default_max_output_tokens() for both
base and BYOK provider paths, matching get_llm_config_from_handle.
2026-02-24 10:55:12 -08:00
Kian Jones
f5c4ab50f4 chore: add ty + pre-commit hook and repeal even more ruff rules (#9504)
* auto fixes

* auto fix pt2 and transitive deps and undefined var checking locals()

* manual fixes (ignored or letta-code fixed)

* fix circular import

* remove all ignores, add FastAPI rules and Ruff rules

* add ty and precommit

* ruff stuff

* ty check fixes

* ty check fixes pt 2

* error on invalid
2026-02-24 10:55:11 -08:00
Kian Jones
25d54dd896 chore: enable F821, F401, W293 (#9503)
* auto fixes

* auto fix pt2 and transitive deps and undefined var checking locals()

* manual fixes (ignored or letta-code fixed)

* fix circular import
2026-02-24 10:55:08 -08:00
Charles Packer
c32d53f8a3 fix(core): remove old static landing page from Docker image (#9369)
The "Experience the new ADE" page was outdated and no longer useful.
Root path now redirects to /docs (FastAPI Swagger UI) instead.

👾 Generated with [Letta Code](https://letta.com)

Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:52:07 -08:00
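
A minimal sketch of the new root behavior, assuming a standard FastAPI app:

    from fastapi import FastAPI
    from fastapi.responses import RedirectResponse

    app = FastAPI()

    @app.get("/", include_in_schema=False)
    async def root() -> RedirectResponse:
        # the static landing page is gone; send visitors to the Swagger UI
        return RedirectResponse(url="/docs")
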
Kian Jones
2f0294165c debug: log statement_timeout + connection pid on session checkout (#9472)
* debug: log statement_timeout + connection pid on every session checkout

Temporary instrumentation to diagnose why some PlanetScale connections
have statement_timeout=5s while others have 0 (disabled).

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* debug: log statement_timeout on every checkout, not just non-zero

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix: rollback implicit transaction from debug query

The SELECT implicitly begins a transaction, causing "A transaction is
already begun" errors for code that calls session.begin() explicitly.

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

---------

Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:52:07 -08:00
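
A minimal sketch of the checkout instrumentation, including the rollback the third commit adds; the engine variable is assumed, and "checkout" is SQLAlchemy's standard pool event:

    import logging

    from sqlalchemy import event

    logger = logging.getLogger(__name__)

    @event.listens_for(engine.sync_engine, "checkout")
    def log_statement_timeout(dbapi_conn, conn_record, conn_proxy):
        cur = dbapi_conn.cursor()
        cur.execute("SELECT current_setting('statement_timeout'), pg_backend_pid()")
        timeout, pid = cur.fetchone()
        cur.close()
        # the SELECT implicitly opened a transaction; roll it back so code
        # that later calls session.begin() doesn't find one already begun
        dbapi_conn.rollback()
        logger.info("checkout: statement_timeout=%s pid=%s", timeout, pid)
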
Kian Jones
80f34f134d fix(core): catch bare openai.APIError in handle_llm_error (#9468)
* fix(core): catch bare openai.APIError in handle_llm_error fallthrough

openai.APIError raised during streaming (e.g. OpenRouter credit
exhaustion) is not an APIStatusError, so it skipped the catch-all
at the end and fell through to LLMError("Unhandled"). Now bare
APIErrors that aren't context window overflows are mapped to
LLMBadRequestError.

Datadog: https://us5.datadoghq.com/error-tracking/issue/7a2c356c-0849-11f1-be66-da7ad0900000

🐾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* feat(core): add LLMInsufficientCreditsError for BYOK credit exhaustion

Adds dedicated error type for insufficient credits/quota across all
providers (OpenAI, Anthropic, Google). Returns HTTP 402 with
BYOK-aware messaging instead of generic 400.

- New LLMInsufficientCreditsError class and PAYMENT_REQUIRED ErrorCode
- is_insufficient_credits_message() helper detecting credit/quota strings
- All 3 provider clients detect 402 status + credit keywords
- FastAPI handler returns 402 with "your API key" vs generic messaging
- 5 new parametrized tests covering OpenRouter, OpenAI, and negative case

🐾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

---------

Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:52:07 -08:00
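
A minimal sketch of the fallthrough fix; the error classes and is_insufficient_credits_message come from the commit text, while map_status_error and is_context_window_overflow are hypothetical stand-ins for the existing logic:

    import openai

    def handle_llm_error(e: Exception) -> Exception:
        if isinstance(e, openai.APIStatusError):
            return map_status_error(e)   # existing status-code-based mapping
        if isinstance(e, openai.APIError):
            # a bare APIError (e.g. raised mid-stream on OpenRouter credit
            # exhaustion) has no status code and used to fall through
            msg = str(e)
            if is_insufficient_credits_message(msg):
                return LLMInsufficientCreditsError(msg)  # surfaced as HTTP 402
            if not is_context_window_overflow(msg):
                return LLMBadRequestError(msg)
        return LLMError(f"Unhandled: {e}")
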
Sarah Wooders
05073ba837 fix(core): preserve git-memory formatting and enforce lock conflicts (#9451)
* test(core): strengthen git-memory system prompt stability integration coverage

Switch git-memory HTTP integration tests to OpenAI model handles and add assertions that system prompt content remains stable after normal turns and direct block value updates until explicit recompilation or reset.

👾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix(core): preserve git-memory formatting and enforce lock conflicts

Preserve existing markdown frontmatter formatting on block updates while still ensuring required metadata fields exist, and make post-push git sync propagate memory-repo lock conflicts as 409 responses. Also enable slash-containing core-memory block labels in route params and add regression coverage.

👾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix(memfs): fail closed on memory repo lock contention

Make memfs git commits fail closed when the per-agent Redis lock cannot be acquired, return 409 MEMORY_REPO_BUSY from the memfs files write API, and map that 409 back to core MemoryRepoBusyError so API callers receive consistent busy conflicts.

👾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* chore(core): minimize git-memory fix scope to memfs lock and frontmatter paths

👾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* chore: drop unrelated changes and keep memfs-focused scope

Revert branch-only changes that are not required for the memfs lock contention and frontmatter-preservation fix so the PR contains only issue-relevant files.

👾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix(memfs): lock push sync path and improve nested sync diagnostics

Serialize memfs push-to-GCS sync with the same per-agent Redis lock key used by API commits, and add targeted post-push nested-block diagnostics plus a focused nested-label sync regression test for _sync_after_push.

👾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

---------

Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:52:07 -08:00
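
A minimal sketch of the fail-closed locking with redis-py's async lock; MemoryRepoBusyError is named in the commit, while the lock key format and helper shape are assumptions:

    import redis.asyncio as aioredis

    async def commit_with_lock(r: aioredis.Redis, agent_id: str, do_commit):
        # the same per-agent key serializes API commits and push-to-GCS sync
        lock = r.lock(f"memfs:repo:{agent_id}", timeout=30, blocking=False)
        if not await lock.acquire():
            # fail closed: never commit without the lock; the write API maps
            # this to 409 MEMORY_REPO_BUSY, and core maps that back to
            # MemoryRepoBusyError
            raise MemoryRepoBusyError(agent_id)
        try:
            return await do_commit()
        finally:
            await lock.release()
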
Sarah Wooders
d7793a4474 fix(core): stabilize system prompt refresh and expand git-memory coverage (#9438)
* fix(core): stabilize system prompt refresh and expand git-memory coverage

Only rebuild system prompts on explicit refresh paths so normal turns preserve prefix-cache stability, including git/custom prompt layouts. Add integration coverage for memory filesystem tree structure and recompile/reset system-message updates via message-id retrieval.

👾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix(core): recompile system prompt around compaction and stabilize source tests

Force system prompt refresh before/after compaction in LettaAgentV3 so repaired system+memory state is used and persisted across subsequent turns. Update source-system prompt tests to explicitly recompile before raw preview assertions instead of assuming automatic rebuild timing.

👾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

---------

Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:52:07 -08:00
Kian Jones
7c65fd77f1 fix(core): return 400 for Google GenAI ClientError bad requests (#9357)
Google genai.errors.ClientError with code 400 was being caught and
wrapped as LLMBadRequestError but returned to clients as 502 because
no dedicated FastAPI exception handler existed for LLMBadRequestError.

- Add LLMBadRequestError exception handler in app.py returning HTTP 400
- Fix ErrorCode on Google 400 bad requests from INTERNAL_SERVER_ERROR
  to INVALID_ARGUMENT
- Route Google API errors through handle_llm_error in stream_async path

Datadog: https://us5.datadoghq.com/error-tracking/issue/4eb3ff3c-d937-11f0-8177-da7ad0900000

🤖 Generated with [Letta Code](https://letta.com)

Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:52:07 -08:00
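
A minimal sketch of the missing handler; the error class is named in the commit, the response shape is assumed:

    from fastapi import Request
    from fastapi.responses import JSONResponse

    @app.exception_handler(LLMBadRequestError)
    async def llm_bad_request_handler(request: Request, exc: LLMBadRequestError):
        # previously these errors fell through and surfaced as 502s
        return JSONResponse(
            status_code=400,
            content={"error": {"code": "INVALID_ARGUMENT", "message": str(exc)}},
        )
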
Kian Jones
3634464251 fix(core): handle anyio.BrokenResourceError for client disconnects (#9358)
Catch BrokenResourceError alongside ClosedResourceError in streaming
response, logging middleware, and app exception handlers so client
disconnects are logged at info level instead of surfacing as 500s.

Datadog: https://us5.datadoghq.com/error-tracking/issue/4f57af0c-d558-11f0-a65d-da7ad0900000

🤖 Generated with [Letta Code](https://letta.com)

Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:52:07 -08:00
Kian Jones
0d42afa151 fix(core): catch LockNotAvailableError and return 409 instead of 500 (#9359)
Re-apply changes on top of latest main to resolve merge conflicts.

- Add DatabaseLockNotAvailableError custom exception in orm/errors.py
- Catch asyncpg LockNotAvailableError and pgcode 55P03 in _handle_dbapi_error
- Register FastAPI exception handler returning 409 with Retry-After header
🐾 Generated with [Letta Code](https://letta.com)

Co-authored-by: letta-code <248085862+letta-code@users.noreply.github.com>
Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:52:07 -08:00
Kian Jones
382e216cbb fix(core): differentiate BYOK vs base provider in all LLM error details (#9425)
Add is_byok flag to every LLMError's details dict returned from
handle_llm_error across all providers (OpenAI, Anthropic, Google,
ChatGPT OAuth). This enables observability into whether errors
originate from Letta's production keys or user-provided BYOK keys.

The rate limit handler in app.py now returns a more helpful message
for BYOK users ("check your provider's rate limits and billing")
versus the generic message for base provider rate limits.

Datadog issues:
- https://us5.datadoghq.com/error-tracking/issue/b711c824-f490-11f0-96e4-da7ad0900000
- https://us5.datadoghq.com/error-tracking/issue/76623036-f4de-11f0-8697-da7ad0900000
- https://us5.datadoghq.com/error-tracking/issue/43e9888a-dfcf-11f0-a645-da7ad0900000

🤖 Generated with [Letta Code](https://letta.com)

Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:52:07 -08:00
Kian Jones
a00270d199 fix(core): handle UTF-8 surrogate characters in API responses (#9422)
* fix(core): handle UTF-8 surrogate characters in API responses

LLM responses or user input can contain surrogate characters (U+D800-U+DFFF)
which are valid Python strings but illegal in UTF-8. ORJSONResponse rejects
them with "str is not valid UTF-8: surrogates not allowed". Add
SafeORJSONResponse that catches the TypeError and strips surrogates before
retrying serialization.

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* refactor: reuse sanitize_unicode_surrogates from json_helpers

Replace the inline _sanitize_surrogates function with the existing
sanitize_unicode_surrogates helper from letta.helpers.json_helpers,
which is already used across all LLM clients.

Co-authored-by: Kian Jones <kianjones9@users.noreply.github.com>

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

---------

Co-authored-by: Letta <noreply@letta.com>
Co-authored-by: letta-code <248085862+letta-code@users.noreply.github.com>
2026-02-24 10:52:07 -08:00
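
A minimal sketch of the retry-on-TypeError response class; sanitize_unicode_surrogates is named in the commit, but its body here is an assumption:

    from fastapi.responses import ORJSONResponse

    def sanitize_unicode_surrogates(obj):
        # lone surrogates (U+D800-U+DFFF) are valid in Python strings but
        # illegal in UTF-8; replace them so serialization can succeed
        if isinstance(obj, str):
            return obj.encode("utf-8", errors="replace").decode("utf-8")
        if isinstance(obj, list):
            return [sanitize_unicode_surrogates(x) for x in obj]
        if isinstance(obj, dict):
            return {k: sanitize_unicode_surrogates(v) for k, v in obj.items()}
        return obj

    class SafeORJSONResponse(ORJSONResponse):
        def render(self, content) -> bytes:
            try:
                return super().render(content)
            except TypeError:
                # "str is not valid UTF-8: surrogates not allowed"
                return super().render(sanitize_unicode_surrogates(content))
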
Shubham Naik
ca32311b9a feat: allow users to specify via query whether to strip messages [LET-7392] (#9411)
* feat: allow users to specify via query whether to strip messages

* chore: regenerate API SDK and OpenAPI spec [LET-7392]

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Ari Webb <AriWebb@users.noreply.github.com>
Co-Authored-By: Letta <noreply@letta.com>

---------

Co-authored-by: letta-code <248085862+letta-code@users.noreply.github.com>
Co-authored-by: Ari Webb <AriWebb@users.noreply.github.com>
Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:52:07 -08:00
Sarah Wooders
0dde155e9a feat: Prefix cache optimization system prompt (#9381) 2026-02-24 10:52:07 -08:00
Kevin Lin
23c94ec6d3 feat: add log probabilities from OpenAI-compatible servers and SGLang native endpoint (#9240)
* Add log probabilities support for RL training

This enables Letta server to request and return log probabilities from
OpenAI-compatible providers (including SGLang) for use in RL training.

Changes:
- LLMConfig: Add return_logprobs and top_logprobs fields
- OpenAIClient: Set logprobs in ChatCompletionRequest when enabled
- LettaLLMAdapter: Add logprobs field and extract from response
- LettaResponse: Add logprobs field to return log probs to client
- LettaRequest: Add return_logprobs/top_logprobs for per-request override
- LettaAgentV3: Store and pass logprobs through to response
- agents.py: Handle request-level logprobs override

Usage:
  response = client.agents.messages.create(
      agent_id=agent_id,
      messages=[...],
      return_logprobs=True,
      top_logprobs=5,
  )
  print(response.logprobs)  # Per-token log probabilities

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* Add multi-turn token tracking for RL training via SGLang native endpoint

- Add TurnTokenData schema to track token IDs and logprobs per turn
- Add return_token_ids flag to LettaRequest and LLMConfig
- Create SGLangNativeClient for /generate endpoint (returns output_ids)
- Create SGLangNativeAdapter that uses native endpoint
- Modify LettaAgentV3 to accumulate turns across LLM calls
- Include turns in LettaResponse when return_token_ids=True

* Fix: Add SGLang native adapter to step() method, not just stream()

* Fix: Handle Pydantic Message objects in SGLang native adapter

* Fix: Remove api_key reference from LLMConfig (not present)

* Fix: Add missing 'created' field to ChatCompletionResponse

* Add full tool support to SGLang native adapter

- Format tools into prompt in Qwen-style format
- Parse tool calls from <tool_call> tags in response
- Format tool results as <tool_response> in user messages
- Set finish_reason to 'tool_calls' when tools are called

* Use tokenizer.apply_chat_template for proper tool formatting

- Add tokenizer caching in SGLang native adapter
- Use apply_chat_template when tokenizer available
- Fall back to manual formatting if not
- Convert Letta messages to OpenAI format for tokenizer

* Fix: Use func_response instead of tool_return for ToolReturn content

* Fix: Get output_token_logprobs from meta_info in SGLang response

* Fix: Allow None in output_token_logprobs (SGLang format includes null)

* chore: remove unrelated files from logprobs branch

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix: add missing call_type param to adapter constructors in letta_agent_v3

The SGLang refactor dropped call_type=LLMCallType.agent_step when extracting
adapter creation into conditional blocks. Restores it for all 3 spots (SGLang
in step, SimpleLLM in step, SGLang in stream).

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* just stage-api && just publish-api

* fix: update expected LLMConfig fields in schema test for logprobs support

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* chore: remove rllm provider references

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* just stage-api && just publish-api

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

---------

Co-authored-by: Ubuntu <ubuntu@ip-172-31-65-206.ec2.internal>
Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:52:07 -08:00
Sarah Wooders
bbc648909b refactor: drop memory/ prefix from git memory repo file paths and update core memory rendering [LET-7356] (#9395) 2026-02-24 10:52:07 -08:00
Charles Packer
b0e16ae50f fix: surface GPT-5.3 Codex for ChatGPT OAuth providers (#9379) 2026-02-24 10:52:07 -08:00
Sarah Wooders
526da4c49b Revert "perf: optimize prefix caching by skipping system prompt rebuild on every step" (#9380)
Revert "perf: optimize prefix caching by skipping system prompt rebuild on every step"

This reverts commit eafa4144c2577a45b7007a177b701863b98d1dfa.
2026-02-24 10:52:07 -08:00
Sarah Wooders
9dbe28e8f1 perf: optimize prefix caching by skipping system prompt rebuild on every step (#9080) 2026-02-24 10:52:07 -08:00
Kian Jones
32d87b70d7 fix(core): return 400 for ContextWindowExceededError instead of 500 (#9348)
SystemPromptTokenExceededError (a subclass of ContextWindowExceededError)
was hitting the generic 500 handler in Datadog. This is a client-side
configuration issue (system prompt too large for context window), so it
should return 400 Bad Request.

Datadog: https://us5.datadoghq.com/error-tracking/issue/3360e252-d5f1-11f0-8467-da7ad0900000
Also covers: https://us5.datadoghq.com/error-tracking/issue/6e50c57c

🤖 Generated with [Letta Code](https://letta.com)

Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:52:07 -08:00
Kian Jones
47aedfa1a7 fix(core): convert MCP ConnectionError to LettaMCPConnectionError for proper HTTP 502 responses (#9364)
MCP server connection failures were raising Python's builtin ConnectionError,
which bypassed the LettaMCPConnectionError FastAPI exception handler and hit
Datadog as unhandled 500 errors. Now all MCP client classes convert
ConnectionError to LettaMCPConnectionError at the source, which the existing
exception handler returns as a user-friendly 502.

Datadog: https://us5.datadoghq.com/error-tracking/issue/93db4a82-fe5a-11f0-85f0-da7ad0900000

🐛 Generated with [Letta Code](https://letta.com)

Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:52:07 -08:00
Kian Jones
cbbb6d776d fix(core): unwrap ExceptionGroup in OAuth stream generator (#9346)
Fixes Datadog issue 5efbb1d4-eec5-11f0-8f8e-da7ad0900000

Add ExceptionGroup unwrapping in OAuth stream exception handler.

The bug was caused by BaseExceptionGroup not being caught by the general
`except Exception` handler, since BaseExceptionGroup is a subclass of
BaseException, not Exception. This caused TaskGroup errors to escape
as unhandled exception groups in Datadog.

The fix adds an explicit ExceptionGroup handler before the general
Exception handler, following the same unwrapping pattern used in
other parts of the codebase (mcp_tool_executor.py, base_client.py).

🐾 Generated with [Letta Code](https://letta.com)

Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:52:07 -08:00
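
A minimal sketch of the unwrapping pattern (generator shape and names assumed); note that catching BaseExceptionGroup requires Python 3.11+:

    import logging

    logger = logging.getLogger(__name__)

    async def oauth_stream():
        try:
            async for event in upstream():   # upstream() is hypothetical
                yield event
        except BaseExceptionGroup as eg:
            # unwrap single-member groups from anyio TaskGroups so the
            # handler below (and FastAPI's) sees the real exception
            if len(eg.exceptions) == 1:
                raise eg.exceptions[0] from eg
            raise
        except Exception:
            logger.exception("OAuth stream failed")
            raise
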
Kian Jones
d592ec3135 fix: handle DBAPIError wrapping asyncpg DeadlockDetectedError (#9355)
SQLAlchemy wraps asyncpg's DeadlockDetectedError in a DBAPIError,
which was falling through to the generic 500 handler. Now detected
at both the ORM level (_handle_dbapi_error) and FastAPI handler level,
returning 409 with Retry-After header.

Datadog: https://us5.datadoghq.com/error-tracking/issue/2f1dc54c-dab6-11f0-a828-da7ad0900000

🐾 Generated with [Letta Code](https://letta.com)

Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:52:07 -08:00
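
A minimal sketch of the ORM-level detection covering both lock errors in these commits (55P03 from #9359, 40P01 here); DatabaseLockNotAvailableError is named in #9359, while DatabaseDeadlockError is a hypothetical sibling:

    from sqlalchemy.exc import DBAPIError

    def _handle_dbapi_error(e: DBAPIError) -> None:
        # SQLAlchemy wraps the asyncpg error; the SQLSTATE lives on e.orig
        pgcode = getattr(e.orig, "pgcode", None) or getattr(e.orig, "sqlstate", None)
        if pgcode == "55P03":   # lock_not_available
            raise DatabaseLockNotAvailableError() from e
        if pgcode == "40P01":   # deadlock_detected
            raise DatabaseDeadlockError() from e
        raise e

    # a FastAPI exception handler then maps these to 409 with Retry-After
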
Sarah Wooders
369cdf72c7 feat(core): store block metadata as YAML frontmatter in .md files (#9365)
* feat(core): store block metadata as YAML frontmatter in .md files

Block .md files in git repos now embed metadata (description, limit,
read_only, metadata dict) as YAML frontmatter instead of a separate
metadata/blocks.json file. Only non-default values are rendered.

Format:
  ---
  description: "Who I am"
  limit: 5000
  ---
  Block value content here...

Changes:
- New block_markdown.py utility (serialize_block / parse_block_markdown)
- Updated all three write/read paths: manager.py, memfs_client.py,
  memfs_client_base.py
- block_manager_git.py now passes description/limit/read_only/metadata
  through to git commits
- Post-push sync (git_http.py) parses frontmatter and syncs metadata
  fields to Postgres
- Removed metadata/blocks.json reads/writes entirely
- Backward compat: files without frontmatter treated as raw value
- Integration test verifies frontmatter in cloned files and metadata
  sync via git push

🐾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix: derive frontmatter defaults from BaseBlock schema, not hardcoded dict

Remove _DEFAULTS dict from block_markdown.py. The core version now
imports BaseBlock and reads field defaults via model_fields. This
fixes the limit default (was 5000, should be CORE_MEMORY_BLOCK_CHAR_LIMIT=20000).

Also:
- memfs-py copy simplified to parse-only (no serialize, no letta imports)
- All hardcoded limit=5000 fallbacks replaced with CORE_MEMORY_BLOCK_CHAR_LIMIT
- Test updated: blocks with all-default metadata correctly have no frontmatter;
  frontmatter verified after setting non-default description via API

🐾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix: always include description and limit in frontmatter

description and limit are always rendered in the YAML frontmatter,
even when at their default values. Only read_only and metadata are
conditional (omitted when at defaults).

🐾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix: resolve read_only from block_update before git commit

read_only was using the old Postgres value instead of the update value
when committing to git. Also adds integration test coverage for
read_only: true appearing in frontmatter after API PATCH, and
verifying it's omitted when false (default).

🐾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* test: add API→git round-trip coverage for description and limit

Verifies that PATCH description/limit via API is reflected in
frontmatter after git pull. Combined with the existing push→API
test (step 6), this gives full bidirectional coverage:
- API edit description/limit → pull → frontmatter updated
- Push frontmatter with description/limit → API reflects changes

🐾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

---------

Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:52:07 -08:00
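
A minimal sketch of the parse side; parse_block_markdown is named in the commit, and this body is an assumption that also shows the backward-compat path for files without frontmatter:

    import yaml

    def parse_block_markdown(text: str) -> tuple[dict, str]:
        """Split optional YAML frontmatter from the block value."""
        if text.startswith("---\n"):
            end = text.find("\n---\n", 4)
            if end != -1:
                meta = yaml.safe_load(text[4:end]) or {}
                return meta, text[end + 5:]
        # backward compat: files without frontmatter are the raw value
        return {}, text
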
Sarah Wooders
93e453ef8f fix(core): transform nested block labels on git memory enable (#9339)
Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:52:07 -08:00