* add compaction settings to ADE, add route to get the default prompt for the updated mode
* update patch to auto-set the prompt on mode change, plus related ADE changes
* reset api and update test
* feat: add compaction configuration translation keys for fr and cn
Add ADE/CompactionConfiguration translation keys to fr.json and cn.json
to match the new keys added in en.json.
Co-authored-by: Christina Tong <christinatong01@users.noreply.github.com>
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* type/translation/etc fixes
* fix typing
* update model selector path w/ change from main
* import mode from sdk
---------
Co-authored-by: letta-code <248085862+letta-code@users.noreply.github.com>
Co-authored-by: Letta <noreply@letta.com>
* load default settings for the summarizer config instead of loading from the agent
* update tests to allow use of get_llm_config_from_handle
* remove nit comment
---------
Co-authored-by: Amy Guan <amy@letta.com>
* auto fixes
* auto fix pt2 and transitive deps and undefined var checking locals()
* manual fixes (ignored or letta-code fixed)
* fix circular import
* remove all ignores, add FastAPI rules and Ruff rules
* add ty and precommit
* ruff stuff
* ty check fixes
* ty check fixes pt 2
* error on invalid
* Add log probabilities support for RL training
This enables the Letta server to request and return log probabilities from
OpenAI-compatible providers (including SGLang) for use in RL training.
Changes:
- LLMConfig: Add return_logprobs and top_logprobs fields
- OpenAIClient: Set logprobs in ChatCompletionRequest when enabled
- LettaLLMAdapter: Add logprobs field and extract from response
- LettaResponse: Add logprobs field to return log probs to client
- LettaRequest: Add return_logprobs/top_logprobs for per-request override
- LettaAgentV3: Store and pass logprobs through to response
- agents.py: Handle request-level logprobs override
Usage:
response = client.agents.messages.create(
    agent_id=agent_id,
    messages=[...],
    return_logprobs=True,
    top_logprobs=5,
)
print(response.logprobs)  # Per-token log probabilities
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* Add multi-turn token tracking for RL training via SGLang native endpoint
- Add TurnTokenData schema to track token IDs and logprobs per turn
- Add return_token_ids flag to LettaRequest and LLMConfig
- Create SGLangNativeClient for /generate endpoint (returns output_ids)
- Create SGLangNativeAdapter that uses native endpoint
- Modify LettaAgentV3 to accumulate turns across LLM calls
- Include turns in LettaResponse when return_token_ids=True
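For illustration, a minimal usage sketch (assuming the client API mirrors the logprobs example above; the `turns` field and its per-turn attributes are assumptions):
```python
# Hypothetical usage; exact response field names may differ.
response = client.agents.messages.create(
    agent_id=agent_id,
    messages=[...],
    return_token_ids=True,  # ask the SGLang native endpoint to return per-turn token IDs
)

# Each accumulated turn is expected to carry the token IDs and logprobs for one LLM call.
for turn in response.turns:
    print(turn.token_ids, turn.logprobs)
```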
* Fix: Add SGLang native adapter to step() method, not just stream()
* Fix: Handle Pydantic Message objects in SGLang native adapter
* Fix: Remove api_key reference from LLMConfig (not present)
* Fix: Add missing 'created' field to ChatCompletionResponse
* Add full tool support to SGLang native adapter
- Format tools into prompt in Qwen-style format
- Parse tool calls from <tool_call> tags in response
- Format tool results as <tool_response> in user messages
- Set finish_reason to 'tool_calls' when tools are called
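For reference, a minimal sketch of what parsing the `<tool_call>` tags can look like (the adapter's actual parsing logic may differ):
```python
import json
import re

# Extract the JSON payload of every Qwen-style <tool_call> block in the model output.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def parse_tool_calls(text: str) -> list[dict]:
    return [json.loads(payload) for payload in TOOL_CALL_RE.findall(text)]

calls = parse_tool_calls('<tool_call>{"name": "search", "arguments": {"q": "letta"}}</tool_call>')
# -> [{'name': 'search', 'arguments': {'q': 'letta'}}]
```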
* Use tokenizer.apply_chat_template for proper tool formatting
- Add tokenizer caching in SGLang native adapter
- Use apply_chat_template when tokenizer available
- Fall back to manual formatting if not
- Convert Letta messages to OpenAI format for tokenizer
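A sketch of the tokenizer-based path (the model name and tool schema below are illustrative, not taken from the adapter):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")  # cached per model in the adapter

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather in SF?"},
]
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
    },
}]

# The model's chat template handles tool formatting; when no tokenizer is
# available, the adapter falls back to the manual formatting described above.
prompt = tokenizer.apply_chat_template(
    messages, tools=tools, tokenize=False, add_generation_prompt=True
)
```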
* Fix: Use func_response instead of tool_return for ToolReturn content
* Fix: Get output_token_logprobs from meta_info in SGLang response
* Fix: Allow None in output_token_logprobs (SGLang format includes null)
* chore: remove unrelated files from logprobs branch
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: add missing call_type param to adapter constructors in letta_agent_v3
The SGLang refactor dropped call_type=LLMCallType.agent_step when extracting
adapter creation into conditional blocks. This restores it in all three spots
(SGLang in step, SimpleLLM in step, SGLang in stream).
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* just stage-api && just publish-api
* fix: update expected LLMConfig fields in schema test for logprobs support
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* chore: remove rllm provider references
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* just stage-api && just publish-api
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
---------
Co-authored-by: Ubuntu <ubuntu@ip-172-31-65-206.ec2.internal>
Co-authored-by: Letta <noreply@letta.com>
* refactor: extract compact logic to shared function
Extract the compaction logic from LettaAgentV3.compact() into a
standalone compact_messages() function that can be shared between
the agent and temporal workflows.
Changes:
- Create apps/core/letta/services/summarizer/compact.py with:
- compact_messages(): Core compaction logic
- build_summarizer_llm_config(): LLM config builder for summarization
- CompactResult: Dataclass for compaction results
- Update LettaAgentV3.compact() to use compact_messages()
- Update temporal summarize_conversation_history activity to use
compact_messages() instead of the old Summarizer class
- Add use_summary_role parameter to SummarizeParams
This ensures consistent summarization behavior across different
execution paths and prevents drift as we improve the implementation.
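A rough sketch of the resulting call sites (only the module and function names come from this change; arguments are omitted):
```python
from letta.services.summarizer.compact import CompactResult, compact_messages

async def agent_compact(agent) -> CompactResult:
    # LettaAgentV3.compact() now delegates to the shared function.
    return await compact_messages(...)

async def summarize_conversation_history(params) -> CompactResult:
    # The temporal activity calls the same function instead of the old Summarizer class.
    return await compact_messages(...)
```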
* chore: clean up verbose comments
* fix: correct CompactionSettings import path
* fix: correct count_tokens import from summarizer_sliding_window
* fix: update test patch path for count_tokens_with_tools
After extracting compact logic to compact.py, the test was patching
the old location. Update the patch path to the new module location.
* fix: update test to use build_summarizer_llm_config from compact.py
The function was moved from LettaAgentV3._build_summarizer_llm_config
to compact.py as a standalone function.
* fix: add early check for system prompt size in compact_messages
Check if the system prompt alone exceeds the context window before
attempting summarization. The system prompt cannot be compacted,
so fail fast with SystemPromptTokenExceededError.
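Roughly, the early check amounts to the following (the helper signature, exception import path, and message are assumptions):
```python
from letta.errors import SystemPromptTokenExceededError  # assumed import path

def check_system_prompt_fits(system_prompt_tokens: int, context_window: int) -> None:
    """Fail fast when the system prompt alone cannot fit in the context window."""
    if system_prompt_tokens >= context_window:
        # The system prompt cannot be compacted away, so summarizing the rest of
        # the conversation can never bring the request under the limit.
        raise SystemPromptTokenExceededError(
            f"System prompt uses {system_prompt_tokens} tokens, "
            f"exceeding the context window of {context_window}."
        )
```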
* fix: properly propagate SystemPromptTokenExceededError from compact
The exception handler in _step() was not setting the correct stop_reason
for SystemPromptTokenExceededError, which caused the finally block to
return early and swallow the exception.
Add special handling to set stop_reason to context_window_overflow_in_system_prompt
when SystemPromptTokenExceededError is caught.
* revert: remove redundant SystemPromptTokenExceededError handling
The special handling in the outer exception handler is redundant because
stop_reason is already set in the inner handler at line 943. The actual
fix for the test was the early check in compact_messages(), not this
redundant handling.
* fix: correctly re-raise SystemPromptTokenExceededError
The inner exception handler was using 'raise e', which re-raised the outer
ContextWindowExceededError instead of the current SystemPromptTokenExceededError.
Changed it to a bare 'raise' to correctly re-raise the current exception. This bug
was pre-existing but masked because _check_for_system_prompt_overflow was
only called as a fallback. The new early check in compact_messages() exposed it.
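A standalone illustration of the 'raise e' vs bare 'raise' difference (generic exception names, not Letta's):
```python
class OuterError(Exception): ...
class InnerError(Exception): ...

def handle(e: OuterError) -> None:
    try:
        raise InnerError("raised while handling the outer error")
    except InnerError:
        raise e  # BUG: re-raises the *outer* exception bound to `e`
        # a bare `raise` here would re-raise InnerError, the exception currently being handled

try:
    try:
        raise OuterError("context window exceeded")
    except OuterError as e:
        handle(e)
except Exception as caught:
    print(type(caught).__name__)  # OuterError with `raise e`; InnerError with a bare `raise`
```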
* revert: remove early check and restore raise e to match main behavior
* fix: set should_continue=False and correctly re-raise exception
- Add should_continue=False in SystemPromptTokenExceededError handler (matching main's _check_for_system_prompt_overflow behavior)
- Fix raise e -> raise to correctly propagate SystemPromptTokenExceededError
Note: test_large_system_prompt_summarization still fails locally but passes on main.
Need to investigate why exception isn't propagating correctly on refactored branch.
* fix: add SystemPromptTokenExceededError handler for post-step compaction
The post-step compaction (line 1066) was missing a SystemPromptTokenExceededError
exception handler. When compact_messages() raised this error, it would be caught
by the outer exception handler which would:
1. Set stop_reason to "error" instead of "context_window_overflow_in_system_prompt"
2. Not set should_continue = False
3. Get swallowed by the finally block (line 1126) which returns early
This caused test_large_system_prompt_summarization to fail because the exception
never propagated to the test.
The fix adds the same exception handler pattern used in the retry compaction flow
(line 941-946), ensuring proper state is set before re-raising.
This issue only affected the refactored code because on main, _check_for_system_prompt_overflow()
was an instance method that set should_continue/stop_reason BEFORE raising. In the refactor,
compact_messages() is a standalone function that cannot set instance state, so the caller
must handle the exception and set the state.
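For reference, the added handler looks roughly like this (simplified; the attribute and exception names follow the commit text, everything else is a sketch):
```python
async def _post_step_compact(self) -> None:
    try:
        await compact_messages(...)
    except SystemPromptTokenExceededError:
        # compact_messages() is a standalone function and cannot touch instance
        # state, so the caller records it before re-raising (same pattern as the
        # retry compaction flow).
        self.stop_reason = "context_window_overflow_in_system_prompt"
        self.should_continue = False
        raise
```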
* feat: add ID format validation to agent and user schemas
Reuse existing validator types (ToolId, SourceId, BlockId, MessageId,
IdentityId, UserId) from letta.validators to enforce ID format validation
at the schema level. This ensures malformed IDs are rejected with a 422
validation error instead of causing 500 database errors.
Changes:
- CreateAgent: validate tool_ids, source_ids, folder_ids, block_ids, identity_ids
- UpdateAgent: validate tool_ids, source_ids, folder_ids, block_ids, message_ids, identity_ids
- UserUpdate: validate id
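An illustrative sketch of the schema-level validation (field subset only; the validated ID types come from letta.validators as described above):
```python
from typing import List, Optional

from pydantic import BaseModel

from letta.validators import BlockId, SourceId, ToolId  # existing validated ID types

class CreateAgent(BaseModel):
    tool_ids: Optional[List[ToolId]] = None
    source_ids: Optional[List[SourceId]] = None
    block_ids: Optional[List[BlockId]] = None
    # A malformed ID such as "tool-not-a-valid-id" now fails Pydantic validation
    # and surfaces as a 422 instead of a 500 from the database layer.
```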
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* chore: regenerate API spec and SDK
* fix: override ID validation in AgentSchema for agent file portability
AgentSchema extends CreateAgent but needs to allow arbitrary short IDs
(e.g., tool-0, block-0) for portable agent files. Override the validated
ID fields to use plain List[str] instead of the validated types.
Also fix test_agent.af to use proper UUID-format IDs.
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* chore: regenerate API spec and SDK
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: revert test_agent.af - short IDs are valid for agent files
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix openapi schema
---------
Co-authored-by: Letta <noreply@letta.com>
* fix: load default provider config when summarizer uses different provider
**Problem:**
Summarization failed when the agent used one provider (e.g., Google AI) but
the summarizer config specified a different provider (e.g., Anthropic):
```python
# Agent LLM config
model_endpoint_type='google_ai', handle='gemini-something/gemini-2.5-pro',
context_window=100000
# Summarizer config
model='anthropic/claude-haiku-4-5-20251001'
# Bug: Resulting summarizer_llm_config mixed Google + Anthropic settings
model='claude-haiku-4-5-20251001', model_endpoint_type='google_ai', # ❌ Wrong endpoint!
context_window=100000 # ❌ Google's context window, not Anthropic's default!
```
This sent Claude requests to Google AI endpoints with incorrect parameters.
**Root Cause:**
`_build_summarizer_llm_config()` always copied the agent's LLM config as the base,
then patched the model/provider fields. But this kept all provider-specific settings
(endpoint, context_window, etc.) from the wrong provider.
**Fix:**
1. Parse provider_name from summarizer handle
2. Check if it matches agent's model_endpoint_type (or provider_name for custom)
3. **If YES** → Use agent config as base, override model/handle (same provider)
4. **If NO** → Load default config via `provider_manager.get_llm_config_from_handle()` (new provider)
**Example Flow:**
```python
# Agent: google_ai/gemini-2.5-pro
# Summarizer: anthropic/claude-haiku
provider_name = "anthropic" # Parsed from handle
provider_matches = ("anthropic" == "google_ai") # False ❌
# Different provider → load default Anthropic config
base = await provider_manager.get_llm_config_from_handle(
    handle="anthropic/claude-haiku",
    actor=self.actor
)
# Returns: model_endpoint_type='anthropic', endpoint='https://api.anthropic.com', etc. ✅
```
**Result:**
- Summarizer with different provider gets correct default config
- No more mixing Google endpoints with Anthropic models
- Same-provider summarizers still inherit agent settings efficiently
👾 Generated with [Letta Code](https://letta.com)
Co-authored-by: Letta <noreply@letta.com>
* feat: add tags support to blocks
* fix: add timestamps and org scoping to blocks_tags
Addresses PR feedback:
1. Migration: Added timestamps (created_at, updated_at), soft delete
(is_deleted), audit fields (_created_by_id, _last_updated_by_id),
and organization_id to blocks_tags table for filtering support.
Follows SQLite baseline pattern (composite PK of block_id+tag, no
separate id column) to avoid insert failures.
2. ORM: Relationship already correct with lazy="raise" to prevent
implicit joins and passive_deletes=True for efficient CASCADE deletes.
3. Schema: Changed normalize_tags() from Any to dict for type safety.
4. SQLite: Added blocks_tags to SQLite baseline schema to prevent
table-not-found errors.
5. Code: Updated all tag row inserts to include organization_id.
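A rough SQLAlchemy sketch of the resulting blocks_tags table (column types and the foreign-key target are assumptions; the column set follows the list above):
```python
from sqlalchemy import Boolean, Column, DateTime, ForeignKey, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class BlocksTags(Base):
    __tablename__ = "blocks_tags"

    # Composite primary key (block_id + tag); no separate id column.
    block_id = Column(String, ForeignKey("block.id"), primary_key=True)  # assumed FK target
    tag = Column(String, primary_key=True)

    # Org scoping for per-organization filtering.
    organization_id = Column(String, nullable=False)

    # Timestamps, soft delete, and audit fields added per the feedback above.
    created_at = Column(DateTime)
    updated_at = Column(DateTime)
    is_deleted = Column(Boolean, default=False)
    _created_by_id = Column(String, nullable=True)
    _last_updated_by_id = Column(String, nullable=True)
```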
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: add ORM columns and update SQLite baseline for blocks_tags
Fixes test failures (CompileError: Unconsumed column names: organization_id):
1. ORM: Added organization_id, timestamps, audit fields to BlocksTags
ORM model to match database schema from migrations.
2. SQLite baseline: Added full column set to blocks_tags (organization_id,
timestamps, audit fields) to match PostgreSQL schema.
3. Test: Added 'tags' to expected Block schema fields.
This ensures SQLite and PostgreSQL have matching schemas and the ORM
can consume all columns that the code inserts.
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* revert change to existing alembic migration
* fix: remove passive_deletes and SQLite support for blocks_tags
1. Removed passive_deletes=True from the Block.tags relationship to match the
AgentsTags pattern (neither has ondelete CASCADE in the DB schema).
2. Removed SQLite branch from _replace_block_pivot_rows_async since
blocks_tags table is PostgreSQL-only (migration skips SQLite).
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* api sync
---------
Co-authored-by: Letta <noreply@letta.com>
* initial commit
* Add database migration for compaction_settings field
This migration adds the compaction_settings column to the agents table
to support customized summarization configuration for each agent.
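The migration amounts to roughly the following Alembic sketch (the column type and nullability are assumptions):
```python
import sqlalchemy as sa
from alembic import op

def upgrade() -> None:
    op.add_column("agents", sa.Column("compaction_settings", sa.JSON(), nullable=True))

def downgrade() -> None:
    op.drop_column("agents", "compaction_settings")
```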
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix
* rename
* update apis
* fix tests
* update web test
---------
Co-authored-by: Letta <noreply@letta.com>
Co-authored-by: Kian Jones <kian@letta.com>
* fix: replace all 'PRODUCTION' references with 'prod' for consistency
Problem: The codebase had 11 references to 'PRODUCTION' (uppercase) that should
use 'prod' (lowercase) for consistency with the deployment workflows and
environment normalization.
Changes across 8 files:
1. Source files (using settings.environment):
- letta/functions/function_sets/multi_agent.py
- letta/services/tool_manager.py
- letta/services/tool_executor/multi_agent_tool_executor.py
- letta/services/helpers/agent_manager_helper.py
All checks changed from: settings.environment == "PRODUCTION"
To: settings.environment == "prod"
2. OTEL resource configuration:
- letta/otel/resource.py
- Updated _normalize_environment_tag() to handle 'prod' directly
- Removed 'PRODUCTION' -> 'prod' mapping (no longer needed)
- Updated device.id check from _env != "PRODUCTION" to _env != "prod"
3. Test files:
- tests/managers/conftest.py
- Fixture parameter changed from "PRODUCTION" to "prod"
- tests/managers/test_agent_manager.py (3 occurrences)
- tests/managers/test_tool_manager.py (2 occurrences)
All test checks changed to use "prod"
Result: Complete consistency across the codebase:
- All environment checks use "prod" instead of "PRODUCTION"
- Normalization function simplified (no special case for PRODUCTION)
- Tests use correct "prod" value
- Matches deployment workflow configuration from PR #6626
This completes the environment naming standardization effort.
* fix: update settings.py environment description to use 'prod' instead of 'PRODUCTION'
The field description still referenced PRODUCTION as an example value.
Updated to use lowercase 'prod' for consistency with actual usage.
Before: "Application environment (PRODUCTION, DEV, CANARY, etc. - normalized to lowercase for OTEL tags)"
After: "Application environment (prod, dev, canary, etc. - lowercase values used for OTEL tags)"
* first hack with test
* remove changes to integration test
* Delete apps/core/tests/sdk_v1/integration/integration_test_send_message_v2.py
* add test
* remove comment
* stage and publish api
* deprecate base level response_schema
* add param to llm_config test
---------
Co-authored-by: Ari Webb <ari@letta.com>