* feat: make agent_id optional in conversations list endpoint [LET-7612]
Allow listing all conversations without filtering by agent_id.
**Router changes (conversations.py):**
- Changed agent_id from required (`...`) to optional (`None`)
- Updated description to clarify behavior
- Updated docstring to reflect optional filtering
**Manager changes (conversation_manager.py):**
- Updated list_conversations signature: agent_id: str → Optional[str]
- Updated docstring to clarify optional behavior
- Summary search query: conditionally adds agent_id filter only if provided
- Default list logic: passes agent_id (can be None) to list_async
**How it works:**
- Without agent_id: returns all conversations for the user's organization
- With agent_id: returns conversations filtered by that agent
- list_async handles None gracefully via **kwargs pattern
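A minimal sketch of the optional-filter pattern described above (the `list_async` call shape and `organization_id` kwarg are assumptions, not the actual code):
```
from typing import Any, Optional

async def list_conversations(manager: Any, actor: Any, agent_id: Optional[str] = None):
    query_kwargs = {}
    if agent_id is not None:
        # Filter only when a value was provided; None means "all agents".
        query_kwargs["agent_id"] = agent_id
    # list_async tolerates the absent key, so None never reaches the query.
    return await manager.list_async(organization_id=actor.organization_id, **query_kwargs)
```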
**Use case:**
- Cloud UI can list all user conversations across agents
- Still supports filtering by agent_id when needed
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* chore: update logs
* chore: update logs
---------
Co-authored-by: Letta <noreply@letta.com>
* auto fixes
* auto fix pt 2: transitive deps and undefined-var checking via locals()
* manual fixes (ignored or letta-code fixed)
* fix circular import
* remove all ignores, add FastAPI rules and Ruff rules
* add ty and precommit
* ruff stuff
* ty check fixes
* ty check fixes pt 2
* error on invalid
* test(core): strengthen git-memory system prompt stability integration coverage
Switch git-memory HTTP integration tests to OpenAI model handles and add assertions that system prompt content remains stable after normal turns and direct block value updates until explicit recompilation or reset.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): preserve git-memory formatting and enforce lock conflicts
Preserve existing markdown frontmatter formatting on block updates while still ensuring required metadata fields exist, and make post-push git sync propagate memory-repo lock conflicts as 409 responses. Also enable slash-containing core-memory block labels in route params and add regression coverage.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(memfs): fail closed on memory repo lock contention
Make memfs git commits fail closed when the per-agent Redis lock cannot be acquired, return 409 MEMORY_REPO_BUSY from the memfs files write API, and map that 409 back to core MemoryRepoBusyError so API callers receive consistent busy conflicts.
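A sketch of the fail-closed pattern (the lock key, TTL, and error-class shape are assumptions beyond what the commit text states):
```
import redis

class MemoryRepoBusyError(Exception):
    """Mapped to an HTTP 409 MEMORY_REPO_BUSY response at the API layer."""

r = redis.Redis()

def commit_with_lock(agent_id: str, do_commit):
    lock = r.lock(f"memfs:agent:{agent_id}", timeout=30)
    if not lock.acquire(blocking=False):
        # Fail closed: never commit without holding the per-agent lock.
        raise MemoryRepoBusyError(f"memory repo busy for agent {agent_id}")
    try:
        return do_commit()
    finally:
        lock.release()
```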
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* chore(core): minimize git-memory fix scope to memfs lock and frontmatter paths
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* chore: drop unrelated changes and keep memfs-focused scope
Revert branch-only changes that are not required for the memfs lock contention and frontmatter-preservation fix so the PR contains only issue-relevant files.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(memfs): lock push sync path and improve nested sync diagnostics
Serialize memfs push-to-GCS sync with the same per-agent Redis lock key used by API commits, and add targeted post-push nested-block diagnostics plus a focused nested-label sync regression test for _sync_after_push.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
---------
Co-authored-by: Letta <noreply@letta.com>
* fix(core): stabilize system prompt refresh and expand git-memory coverage
Only rebuild system prompts on explicit refresh paths so normal turns preserve prefix-cache stability, including git/custom prompt layouts. Add integration coverage for memory filesystem tree structure and recompile/reset system-message updates via message-id retrieval.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): recompile system prompt around compaction and stabilize source tests
Force system prompt refresh before/after compaction in LettaAgentV3 so repaired system+memory state is used and persisted across subsequent turns. Update source-system prompt tests to explicitly recompile before raw preview assertions instead of assuming automatic rebuild timing.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
---------
Co-authored-by: Letta <noreply@letta.com>
* Add log probabilities support for RL training
This enables Letta server to request and return log probabilities from
OpenAI-compatible providers (including SGLang) for use in RL training.
Changes:
- LLMConfig: Add return_logprobs and top_logprobs fields
- OpenAIClient: Set logprobs in ChatCompletionRequest when enabled
- LettaLLMAdapter: Add logprobs field and extract from response
- LettaResponse: Add logprobs field to return log probs to client
- LettaRequest: Add return_logprobs/top_logprobs for per-request override
- LettaAgentV3: Store and pass logprobs through to response
- agents.py: Handle request-level logprobs override
Usage:
    response = client.agents.messages.create(
        agent_id=agent_id,
        messages=[...],
        return_logprobs=True,
        top_logprobs=5,
    )
    print(response.logprobs)  # Per-token log probabilities
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* Add multi-turn token tracking for RL training via SGLang native endpoint
- Add TurnTokenData schema to track token IDs and logprobs per turn
- Add return_token_ids flag to LettaRequest and LLMConfig
- Create SGLangNativeClient for /generate endpoint (returns output_ids)
- Create SGLangNativeAdapter that uses native endpoint
- Modify LettaAgentV3 to accumulate turns across LLM calls
- Include turns in LettaResponse when return_token_ids=True
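A hypothetical shape for TurnTokenData, inferred only from the bullets above (field names are assumptions):
```
from typing import List, Optional
from pydantic import BaseModel

class TurnTokenData(BaseModel):
    # Token IDs for this turn, taken from SGLang's /generate output_ids.
    token_ids: List[int]
    # Per-token log probabilities; SGLang output may contain nulls.
    logprobs: List[Optional[float]]
```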
* Fix: Add SGLang native adapter to step() method, not just stream()
* Fix: Handle Pydantic Message objects in SGLang native adapter
* Fix: Remove api_key reference from LLMConfig (not present)
* Fix: Add missing 'created' field to ChatCompletionResponse
* Add full tool support to SGLang native adapter
- Format tools into prompt in Qwen-style format
- Parse tool calls from <tool_call> tags in response
- Format tool results as <tool_response> in user messages
- Set finish_reason to 'tool_calls' when tools are called
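A sketch of the tag parsing (assumes each <tool_call> body is JSON, as in Qwen's convention; the real parser may differ):
```
import json
import re

TOOL_CALL_RE = re.compile(r"<tool_call>\s*(.*?)\s*</tool_call>", re.DOTALL)

def parse_tool_calls(text: str) -> list:
    calls = []
    for payload in TOOL_CALL_RE.findall(text):
        try:
            calls.append(json.loads(payload))
        except json.JSONDecodeError:
            continue  # skip malformed tool-call payloads
    # Caller sets finish_reason="tool_calls" when this list is non-empty.
    return calls
```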
* Use tokenizer.apply_chat_template for proper tool formatting
- Add tokenizer caching in SGLang native adapter
- Use apply_chat_template when tokenizer available
- Fall back to manual formatting if not
- Convert Letta messages to OpenAI format for tokenizer
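Roughly what the tokenizer path looks like (a sketch against the Hugging Face transformers API; the adapter's caching and fallback wiring are omitted):
```
from transformers import AutoTokenizer

def render_prompt(model_name: str, messages: list, tools: list | None = None) -> str:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    return tokenizer.apply_chat_template(
        messages,                    # OpenAI-format [{"role": ..., "content": ...}]
        tools=tools,                 # rendered into the model's own tool format
        tokenize=False,              # return the prompt string, not token IDs
        add_generation_prompt=True,  # append the assistant turn header
    )
```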
* Fix: Use func_response instead of tool_return for ToolReturn content
* Fix: Get output_token_logprobs from meta_info in SGLang response
* Fix: Allow None in output_token_logprobs (SGLang format includes null)
* chore: remove unrelated files from logprobs branch
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: add missing call_type param to adapter constructors in letta_agent_v3
The SGLang refactor dropped call_type=LLMCallType.agent_step when extracting
adapter creation into conditional blocks. This restores it in all 3 spots
(SGLang in step, SimpleLLM in step, SGLang in stream).
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* just stage-api && just publish-api
* fix: update expected LLMConfig fields in schema test for logprobs support
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* chore: remove rllm provider references
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* just stage-api && just publish-api
🤖 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
---------
Co-authored-by: Ubuntu <ubuntu@ip-172-31-65-206.ec2.internal>
Co-authored-by: Letta <noreply@letta.com>
MCP server connection failures were raising Python's builtin ConnectionError,
which bypassed the LettaMCPConnectionError FastAPI exception handler and hit
Datadog as unhandled 500 errors. Now all MCP client classes convert
ConnectionError to LettaMCPConnectionError at the source, which the existing
exception handler returns as a user-friendly 502.
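The convert-at-source pattern, sketched (the transport callable and error constructor are placeholders):
```
class LettaMCPConnectionError(Exception):
    """Handled by a registered FastAPI exception handler that returns 502."""

async def connect(open_transport):
    try:
        await open_transport()
    except ConnectionError as e:
        # Convert here so the builtin error never escapes as an unhandled 500.
        raise LettaMCPConnectionError(str(e)) from e
```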
Datadog: https://us5.datadoghq.com/error-tracking/issue/93db4a82-fe5a-11f0-85f0-da7ad0900000
🐛 Generated with [Letta Code](https://letta.com)
Co-authored-by: Letta <noreply@letta.com>
Fixes Datadog issue 5efbb1d4-eec5-11f0-8f8e-da7ad0900000
Add ExceptionGroup unwrapping in OAuth stream exception handler.
The bug was caused by TaskGroup failures arriving as exception groups that
the general `except Exception` handler missed: these can surface as
BaseExceptionGroup, which subclasses BaseException rather than Exception.
This let TaskGroup errors escape as unhandled ExceptionGroups in Datadog.
The fix adds an explicit ExceptionGroup handler before the general
Exception handler, following the same unwrapping pattern used in
other parts of the codebase (mcp_tool_executor.py, base_client.py).
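A minimal sketch of that unwrapping pattern (Python 3.11+; `stream_fn` and `handle` are placeholders):
```
async def guarded_stream(stream_fn, handle):
    try:
        await stream_fn()
    except BaseExceptionGroup as eg:
        # TaskGroup failures arrive wrapped; unwrap single-exception groups
        # so the inner error is handled like a bare exception.
        exc = eg.exceptions[0] if len(eg.exceptions) == 1 else eg
        handle(exc)
    except Exception as e:
        handle(e)
```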
🐾 Generated with [Letta Code](https://letta.com)
Co-authored-by: Letta <noreply@letta.com>
* feat(core): store block metadata as YAML frontmatter in .md files
Block .md files in git repos now embed metadata (description, limit,
read_only, metadata dict) as YAML frontmatter instead of a separate
metadata/blocks.json file. Only non-default values are rendered.
Format:
---
description: "Who I am"
limit: 5000
---
Block value content here...
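A parse-side sketch of this format (the real helpers are serialize_block / parse_block_markdown in block_markdown.py; this shape is an assumption):
```
import yaml

def parse_block_markdown(text: str):
    """Return (metadata, value); files without frontmatter are raw value."""
    if text.startswith("---\n"):
        header, _, body = text[4:].partition("\n---\n")
        return yaml.safe_load(header) or {}, body.lstrip("\n")
    # Backward compat: no frontmatter means the whole file is the value.
    return {}, text
```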
Changes:
- New block_markdown.py utility (serialize_block / parse_block_markdown)
- Updated all three write/read paths: manager.py, memfs_client.py,
memfs_client_base.py
- block_manager_git.py now passes description/limit/read_only/metadata
through to git commits
- Post-push sync (git_http.py) parses frontmatter and syncs metadata
fields to Postgres
- Removed metadata/blocks.json reads/writes entirely
- Backward compat: files without frontmatter treated as raw value
- Integration test verifies frontmatter in cloned files and metadata
sync via git push
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: derive frontmatter defaults from BaseBlock schema, not hardcoded dict
Remove _DEFAULTS dict from block_markdown.py. The core version now
imports BaseBlock and reads field defaults via model_fields. This
fixes the limit default (was 5000, should be CORE_MEMORY_BLOCK_CHAR_LIMIT=20000).
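The schema-derived defaults look roughly like this (Pydantic v2 `model_fields`; the BaseBlock import path is an assumption):
```
from letta.schemas.block import BaseBlock  # import path is an assumption

def frontmatter_defaults() -> dict:
    # Read defaults from the schema so a changed constant (e.g.
    # CORE_MEMORY_BLOCK_CHAR_LIMIT) is picked up automatically.
    return {
        name: field.default
        for name, field in BaseBlock.model_fields.items()
        if name in ("description", "limit", "read_only", "metadata")
    }
```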
Also:
- memfs-py copy simplified to parse-only (no serialize, no letta imports)
- All hardcoded limit=5000 fallbacks replaced with CORE_MEMORY_BLOCK_CHAR_LIMIT
- Test updated: blocks with all-default metadata correctly have no frontmatter;
frontmatter verified after setting non-default description via API
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: always include description and limit in frontmatter
description and limit are always rendered in the YAML frontmatter,
even when at their default values. Only read_only and metadata are
conditional (omitted when at defaults).
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: resolve read_only from block_update before git commit
read_only was using the old Postgres value instead of the update value
when committing to git. Also adds integration test coverage for
read_only: true appearing in frontmatter after API PATCH, and
verifying it's omitted when false (default).
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* test: add API→git round-trip coverage for description and limit
Verifies that PATCH description/limit via API is reflected in
frontmatter after git pull. Combined with the existing push→API
test (step 6), this gives full bidirectional coverage:
- API edit description/limit → pull → frontmatter updated
- Push frontmatter with description/limit → API reflects changes
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
---------
Co-authored-by: Letta <noreply@letta.com>
* fix(core): handle ExceptionGroup-wrapped ToolError and McpError in MCP tool execution
Fixes 3 related Datadog bugs (all fastmcp.exceptions.ToolError):
- 75d43daa-ff04-11f0-81b2-da7ad0900000
- 7af6373e-0080-11f1-9855-da7ad0900000
- a322edc8-fffa-11f0-b26c-da7ad0900000
These errors were caused by ToolError and McpError exceptions bubbling up
unhandled from the MCP REST endpoint. This fix combines the approaches from
PRs #9320 and #9321:
1. Handle ExceptionGroup wrapping (Python 3.11+ async TaskGroup)
2. Check for ToolError by class name to handle module variations
3. Convert ToolError to LettaInvalidArgumentError for proper client response
4. Catch McpError and return HTTP 500 with proper error message
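Steps 1-3 combined, as a sketch (the error-class body and exact control flow are assumptions):
```
class LettaInvalidArgumentError(ValueError):
    """Returned to the client as an invalid-argument response."""

def normalize_mcp_error(exc: BaseException) -> BaseException:
    # 1) Unwrap single-exception groups from async TaskGroups (Python 3.11+).
    while isinstance(exc, BaseExceptionGroup) and len(exc.exceptions) == 1:
        exc = exc.exceptions[0]
    # 2) Match ToolError by class name so fastmcp module-path variations
    #    all hit the same branch.
    if type(exc).__name__ == "ToolError":
        # 3) Convert for a proper client-facing response.
        return LettaInvalidArgumentError(str(exc))
    return exc
```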
Issue-IDs: 75d43daa-ff04-11f0-81b2-da7ad0900000, 7af6373e-0080-11f1-9855-da7ad0900000, a322edc8-fffa-11f0-b26c-da7ad0900000
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: return 422 instead of 500 for McpError (user config issue)
* fix: use LettaMCPConnectionError instead of HTTPException for McpError
---------
Co-authored-by: Letta <noreply@letta.com>
When a git push completes, the webhook fires immediately, but the GCS upload
may still be in progress. This causes a KeyError when reading commit objects
that haven't been uploaded yet.
Add retry with exponential backoff (1s, 2s, 4s) to handle this race.
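The retry shape, sketched (`read_fn` stands in for the GCS commit-object read):
```
import time

def read_commit_with_retry(read_fn, delays=(1, 2, 4)):
    for delay in delays:
        try:
            return read_fn()
        except KeyError:
            # Commit object not uploaded yet; back off and retry.
            time.sleep(delay)
    return read_fn()  # final attempt; let KeyError propagate
```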
🐾 Generated with [Letta Code](https://letta.com)
Co-authored-by: Letta <noreply@letta.com>
PR #9309 moved block storage from the blocks/ directory to memory/.
Update memfs_client.py and memfs_client_base.py to match.
🐾 Generated with [Letta Code](https://letta.com)
Co-authored-by: Letta <noreply@letta.com>
* feat: add memfs-py service
* add tf for bucket access and secrets v2 access
* feat(memfs): add helm charts, deploy workflow, and bug fixes
- Add dev helm chart (helm/dev/memfs-py/) with CSI secrets pattern
- Update prod helm chart with CSI secrets and correct service account
- Add GitHub Actions deploy workflow
- Change port from 8284 to 8285 to avoid conflict with core's dulwich sidecar
- Fix chunked transfer encoding issue (strip HTTP_TRANSFER_ENCODING header)
- Fix timestamp parsing to handle both ISO and HTTP date formats
- Fix get_head_sha to raise FileNotFoundError on 404
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
---------
Co-authored-by: Kian Jones <kian@letta.com>
Co-authored-by: Letta <noreply@letta.com>
* fix(core): handle PermissionDeniedError in provider API key validation
Fixed OpenAI PermissionDeniedError being raised as unknown error when
validating provider API keys. The check_api_key methods in OpenAI-based
providers (OpenAI, OpenRouter, Azure, Together) now properly catch and
re-raise PermissionDeniedError as LLMPermissionDeniedError.
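A sketch of the catch-and-re-raise (the probe call used for validation is an assumption):
```
import openai

class LLMPermissionDeniedError(Exception):
    """Letta's typed error for provider permission failures."""

def check_api_key(client: openai.OpenAI) -> None:
    try:
        client.models.list()  # cheap authenticated call; exact probe is assumed
    except openai.PermissionDeniedError as e:
        raise LLMPermissionDeniedError(str(e)) from e
```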
🐛 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): handle Unicode surrogates in OpenAI requests
Sanitize invalid UTF-16 surrogates before sending requests to the OpenAI API.
Fixes UnicodeEncodeError when message content contains unpaired surrogates
from corrupted emoji data or malformed Unicode sequences.
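One common way to do this (whether the commit uses exactly this handler is an assumption):
```
def sanitize_surrogates(text: str) -> str:
    # Unpaired UTF-16 surrogates cannot be encoded as UTF-8; the "replace"
    # handler swaps them for "?" instead of raising UnicodeEncodeError.
    return text.encode("utf-8", errors="replace").decode("utf-8")

print(sanitize_surrogates("ok \ud83d broken"))  # -> "ok ? broken"
```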
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): handle MCP tool schema validation errors gracefully
Catch fastmcp.exceptions.ToolError in the execute_mcp_tool endpoint and
convert it to LettaInvalidArgumentError (400) instead of letting it
propagate as a 500 error. This is an expected user error when tool
arguments don't match the MCP tool's schema.
Fixes Datadog issue 8f2d874a-f8e5-11f0-9b25-da7ad0900000
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): handle ExceptionGroup-wrapped ToolError in MCP executor
When MCP tools fail with validation errors (e.g., missing required parameters),
fastmcp raises ToolError exceptions that may be wrapped in ExceptionGroup by
Python's async TaskGroup. The exception handler now unwraps single-exception
groups before checking if the error should be handled gracefully.
Fixes Calendly API "organization parameter missing" errors being logged to
Datadog instead of returning friendly error messages to users.
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix: handle missing agent in create_conversation to prevent foreign key violation
* Update .gitignore
---------
Co-authored-by: Letta <noreply@letta.com>
* feat: add /agents/{agent_id}/generate endpoint for direct LLM requests
Add new endpoint that makes direct LLM provider requests without agent
context, memory, tools, or state modification. This enables:
- Quick LLM queries without agent overhead
- Testing model configurations
- Simple chat completions using agent's credentials
- Comparing responses across different models
Features:
- Uses agent's LLM config by default
- Supports model override with full provider config resolution
- Non-streaming, stateless operation
- Proper error handling and validation
- Request/response schemas with Pydantic validation
Implementation:
- Add GenerateRequest and GenerateResponse schemas
- Implement generate_completion endpoint handler
- Add necessary imports (LLMError, LLMClient, HandleNotFoundError)
- Include logging and comprehensive error handling
* fix: improve error handling and fix Message construction
- Fix critical bug: use content=[TextContent(text=...)] instead of text=...
- Add explicit error handling for NoResultFound and HandleNotFoundError
- Add error handling for convert_response_to_chat_completion
- Add structured logging for debugging
- Remove unnecessary .get() calls since Pydantic validates messages
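The corrected construction, sketched (import paths are assumptions):
```
from letta.schemas.letta_message_content import TextContent  # assumed path
from letta.schemas.message import Message                    # assumed path

msg = Message(
    role="user",
    content=[TextContent(text="What is 2+2?")],  # not text="..."
)
```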
* refactor: extract generate logic to AgentCompletionService
Move the generate endpoint business logic out of the endpoint handler
into a dedicated AgentCompletionService class for better code organization
and separation of concerns.
Changes:
- Create new AgentCompletionService in services/agent_completion_service.py
- Service handles all business logic: agent validation, LLM config resolution,
message conversion, LLM client creation, and request/response processing
- Integrate service with SyncServer initialization
- Refactor generate_completion endpoint to use the service
- Endpoint now only handles HTTP concerns (auth, error mapping)
Benefits:
- Cleaner endpoint code (reduced from ~140 lines to ~25 lines)
- Better separation of concerns (HTTP vs business logic)
- Service logic can be reused or tested independently
- Follows established patterns in the codebase (AgentManager, etc.)
* feat: simplify generate API to accept just prompt text
Simplify the client interface by accepting a simple prompt string instead
of requiring clients to format messages.
Changes:
- Update GenerateRequest schema:
- Replace 'messages' array with simple 'prompt' string
- Add optional 'system_prompt' for context/instructions
- Keep 'override_model' for model selection
- Update AgentCompletionService to format messages automatically:
- Accepts prompt and optional system_prompt
- Constructs message array internally (system + user messages)
- Simpler API surface for clients
- Update endpoint documentation with new simplified examples
- Regenerate OpenAPI spec and TypeScript SDK
Benefits:
- Much simpler client experience - just send text
- No need to understand message formatting
- Still supports system prompts for context
- Cleaner API that matches common use cases
Example (before):
  {
    "messages": [{"role": "user", "content": "What is 2+2?"}]
  }
Example (after):
  {
    "prompt": "What is 2+2?"
  }
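A raw-HTTP sketch of calling the endpoint (base URL, port, and auth header are assumptions; agent_id is a placeholder):
```
import httpx

agent_id = "agent-00000000-0000-0000-0000-000000000000"  # placeholder
resp = httpx.post(
    f"http://localhost:8283/v1/agents/{agent_id}/generate",
    headers={"Authorization": "Bearer <api-key>"},  # auth scheme assumed
    json={"prompt": "What is 2+2?"},
)
resp.raise_for_status()
print(resp.json())
```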
* test: add comprehensive integration tests for generate endpoint
Add 9 integration tests covering various scenarios:
Happy path tests:
- test_agent_generate_basic: Basic prompt -> response flow
- test_agent_generate_with_system_prompt: System prompt + user prompt
- test_agent_generate_with_model_override: Override model selection
- test_agent_generate_long_prompt: Handle longer prompts
- test_agent_generate_no_persistence: Verify no messages saved to agent
Error handling tests:
- test_agent_generate_empty_prompt_error: Empty prompt validation (422)
- test_agent_generate_invalid_agent_id: Invalid agent ID (404)
- test_agent_generate_invalid_model_override: Invalid model handle (404)
All tests verify:
- Response structure (content, model, usage)
- Proper status codes for errors
- Usage statistics (tokens, counts)
- No side effects on agent state
Tests follow existing test patterns in test_client.py and use the
letta_client SDK (assuming generate_completion method is auto-generated
from the OpenAPI spec).
* openapi
* refactor: rename AgentCompletionService to AgentGenerateCompletionManager
Rename for better clarity and consistency with codebase naming conventions:
- Rename file: agent_completion_service.py → agent_generate_completion_manager.py
- Rename class: AgentCompletionService → AgentGenerateCompletionManager
- Rename attribute: server.agent_completion_service → server.agent_generate_completion_manager
- Update docstrings: 'Service' → 'Manager'
Changes:
- apps/core/letta/services/agent_generate_completion_manager.py (renamed + updated class)
- apps/core/letta/server/server.py (import + initialization)
- apps/core/letta/server/rest_api/routers/v1/agents.py (usage in endpoint)
No functional changes, purely a naming refactor.
* fix: remove invalid Message parameters in generate manager
Remove agent_id=None and user_id=None from Message construction.
The Message model doesn't accept these as None values - only pass
required parameters (role, content).
Fixes validation error:
'Extra inputs are not permitted [type=extra_forbidden, input_value=None]'
This aligns with other Message construction patterns in the codebase
(see tools.py, memory.py examples).
* feat: improve generate endpoint validation and tests
- Add field validator for whitespace-only prompts
- Always include system message (required by Anthropic)
- Use default "You are a helpful assistant." when no system_prompt provided
- Update tests to use direct HTTP calls via httpx
- Fix test issues:
- Use valid agent ID format (agent-{uuid})
- Use available model (openai/gpt-4o-mini)
- Add whitespace validation test
- All 9 integration tests passing
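The whitespace validator, sketched (the real GenerateRequest also carries override_model):
```
from typing import Optional
from pydantic import BaseModel, field_validator

class GenerateRequest(BaseModel):
    prompt: str
    system_prompt: Optional[str] = None  # defaults applied server-side

    @field_validator("prompt")
    @classmethod
    def reject_blank(cls, v: str) -> str:
        if not v.strip():
            raise ValueError("prompt must not be empty or whitespace-only")
        return v
```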
* fix(core): derive dulwich org context from user_id fallback
Make git smart HTTP more robust in prod by:
- normalizing/injecting a single X-Organization-Id header in the FastAPI proxy
- keeping dulwich org contextvar set through WSGI iteration
- falling back to resolving org_id from user_id when X-Organization-Id is missing
- adding opt-in debug logs (env LETTA_GIT_HTTP_DEBUG_LOGS or letta_debug query)
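The fallback order, sketched (the contextvar name and UserManager lookup shape are assumptions):
```
import contextvars
from typing import Any, Optional

org_id_var: contextvars.ContextVar[Optional[str]] = contextvars.ContextVar("org_id", default=None)

def resolve_org_id(headers: dict, user_id: Optional[str], user_manager: Any) -> Optional[str]:
    org_id = headers.get("X-Organization-Id")
    if not org_id and user_id:
        # Fall back to resolving the org from the authenticated user.
        org_id = user_manager.get_user_by_id(user_id).organization_id
    org_id_var.set(org_id)  # stays set through WSGI iteration
    return org_id
```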
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* refactor(core): drop user->org cache in dulwich org fallback
Keep the dulwich org_id fallback simple by resolving org_id from user_id via
UserManager lookup when X-Organization-Id is missing, without maintaining an
in-process cache.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* chore(core): make git HTTP debug logging always-on
Remove opt-in toggles for git HTTP debug logs and log proxy + dulwich request
context for every git smart-HTTP request.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
---------
Co-authored-by: Letta <noreply@letta.com>
* fix(core): pass org_id to dulwich via header for git HTTP
* fix(core): use actor org id for git HTTP org header
Git smart HTTP proxies were reading `organization_id` from AgentState, a
field that is not present there, which caused 500s during clone/push. Use the
authenticated actor's org id while still performing an authorization check on
the agent.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
---------
Co-authored-by: Letta <noreply@letta.com>
* feat(core): add git-backed memory repos and block manager
Introduce a GCS-backed git repository per agent as the source of truth for core
memory blocks. Add a GitEnabledBlockManager that writes block updates to git and
syncs values back into Postgres as a cache.
Default newly-created memory repos to the `main` branch.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* feat(core): serve memory repos over git smart HTTP
Run dulwich's WSGI HTTPGitApplication on a local sidecar port and proxy
/v1/git/* through FastAPI to support git clone/fetch/push directly against
GCS-backed memory repos.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): create memory repos on demand and stabilize git HTTP
- Ensure MemoryRepoManager creates the git repo on first write (instead of 500ing)
and avoids rewriting history by only auto-creating on FileNotFoundError.
- Simplify dulwich-thread async execution and auto-create empty repos on first
git clone.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): make dulwich optional for CI installs
Guard dulwich imports in the git smart HTTP router so the core server can boot
(and CI tests can run) without installing the memory-repo extra.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): guard git HTTP WSGI init when dulwich missing
Avoid instantiating dulwich's HTTPGitApplication at import time when dulwich
isn't installed (common in CI installs).
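The guard pattern, sketched (the availability-check shape is an assumption):
```
try:
    from dulwich.web import HTTPGitApplication  # optional git-state extra
except ImportError:  # dulwich not installed (common in CI installs)
    HTTPGitApplication = None

def git_http_available() -> bool:
    # Routes check this instead of instantiating the WSGI app at import time.
    return HTTPGitApplication is not None
```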
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): avoid masking send_message errors in finally
Initialize `result` before the agent loop so error paths (e.g. approval
validation) don't raise UnboundLocalError in the run-tracking finally block.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): stop event loop watchdog on FastAPI shutdown
Ensure the EventLoopWatchdog thread is stopped during FastAPI lifespan
shutdown to avoid daemon threads logging during interpreter teardown (seen in CI
unit tests).
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* chore(core): remove send_*_message_to_agent from SyncServer
Drop send_message_to_agent and send_group_message_to_agent from SyncServer and
route internal fire-and-forget messaging through send_messages helpers instead.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): backfill git memory repo when tag added
When an agent is updated to include the git-memory-enabled tag, ensure the
git-backed memory repo is created and initialized from the agent's current
blocks. Also support configuring the memory repo object store via
LETTA_OBJECT_STORE_URI.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): preserve block tags on git-enabled updates
When updating a block for a git-memory-enabled agent, keep block tags in sync
with PostgreSQL (tags are not currently stored in the git repo).
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* chore(core): remove git-state legacy shims
- Rename optional dependency extra from memory-repo to git-state
- Drop legacy object-store env aliases and unused region config
- Simplify memory repo metadata to a single canonical format
- Remove unused repo-cache invalidation helper
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* fix(core): keep PR scope for git-backed blocks
- Revert unrelated change in fire-and-forget multi-agent send helper
- Route agent block updates-by-label through injected block manager only when needed
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
---------
Co-authored-by: Letta <noreply@letta.com>
Fixes UnboundLocalError when client disconnects (EndOfStream) during
request processing - the finally block tried to access `result` which
was never assigned.
🤖 Generated with [Letta Code](https://letta.com)
Co-authored-by: Letta <noreply@letta.com>
Initialize `result` and `run_status` variables before the try block
to prevent UnboundLocalError in the finally block when exceptions
occur early in execution (e.g., during AgentLoop.load()).
Previously, if an exception was raised before `result` was assigned
inside the try block, the finally block would fail when trying to
access `result`, masking the original error with an UnboundLocalError.
This fix ensures the finally block can safely check and use these
variables regardless of when/where an exception occurs.
* fix: non-streaming conversation messages endpoint
**Problems:**
1. `AssertionError: run_id is required when enforce_run_id_set is True`
- Non-streaming path didn't create a run before calling `step()`
2. `ResponseValidationError: Unable to extract tag using discriminator 'message_type'`
- `response_model=LettaStreamingResponse` but non-streaming returns `LettaResponse`
**Fixes:**
1. Add run creation before calling `step()` (mirrors agents endpoint)
2. Set run_id in Redis for cancellation support
3. Pass `run_id` to `step()`
4. Change `response_model` from `LettaStreamingResponse` to `LettaResponse`
(streaming returns `StreamingResponse` which bypasses response_model validation)
**Test:**
Added `test_conversation_non_streaming_raw_http` to verify the fix.
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* api sync
---------
Co-authored-by: Letta <noreply@letta.com>
* fix: remove deprecation from agent passages endpoints
The client.agent.passages endpoints (list, create, search, delete) were
incorrectly marked as deprecated. This would break significant amounts
of user code and negatively impact developer experience.
Fixes #9116
Co-authored-by: Ari Webb <AriWebb@users.noreply.github.com>
* stage publish api
---------
Co-authored-by: letta-code <248085862+letta-code@users.noreply.github.com>
Co-authored-by: Ari Webb <AriWebb@users.noreply.github.com>
Co-authored-by: Ari Webb <ari@letta.com>
**Problem:**
Production errors showed runs being marked as failed with stop_reason=None,
which violates LettaStopReason's Pydantic schema (it requires a valid enum
value). This caused cascading validation errors that got stored in metadata.
Example error:
```
Run is already in a terminal state failed with stop reason None, but is being
updated with data {'status': 'failed', 'stop_reason': None, 'metadata':
{'error': "1 validation error for LettaStopReason\nstop_reason Input should
be 'end_turn', 'error', ... [type=enum, input_value=None]"}}
```
**Root Causes:**
1. routers/v1/agents.py had 3 exception handlers creating RunUpdate(status=failed)
without stop_reason
2. Success path assumed result.stop_reason always exists (AttributeError if None)
3. run_manager.py tried to create LettaStopReason(stop_reason=None) when
refreshing result messages
**Fixes:**
1. Added stop_reason=StopReasonType.error to 3 exception handlers
2. Added defensive None checks before accessing result.stop_reason.stop_reason
3. Added fallback to StopReasonType.error when pydantic_run.stop_reason is None
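Fixes 1-3 condensed into one sketch (import paths and RunUpdate fields are assumptions based on the commit text):
```
from letta.schemas.enums import RunStatus, StopReasonType  # assumed paths
from letta.schemas.run import RunUpdate                    # assumed path

def failed_update(result) -> RunUpdate:
    # Defensive: result or its stop_reason may be None on error paths.
    stop_reason = StopReasonType.error
    if result is not None and result.stop_reason is not None:
        stop_reason = result.stop_reason.stop_reason
    return RunUpdate(status=RunStatus.failed, stop_reason=stop_reason)
```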
**Trigger:**
OpenAI BadRequestError for invalid tool schema → exception handlers marked
run as failed without stop_reason → validation error when constructing response
👾 Generated with [Letta Code](https://letta.com)
Co-authored-by: Letta <noreply@letta.com>