* fix: combined tool manager improvements - tracing and redundant fetches
This PR combines improvements from #6530 and #6535:
- Add tracer import to enable proper tracing spans
- Improve update check logic to verify actual field changes before updating
- Return current_tool directly when no update is needed (avoids redundant fetch)
- Add structured tracing spans to update_tool_by_id_async for better observability
- Fix decorator order for better error handling (raise_on_invalid_id before trace_method)
- Remove unnecessary tracing spans in create_or_update_tool_async
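The improved update check can be sketched as follows. This is a minimal illustration, not the real letta-internal code: `needs_update`, `current_tool`, and `tool_update` are hypothetical names standing in for the manager's actual objects.

```python
# Hypothetical sketch of the "verify actual field changes" check; the real
# manager compares a Tool record against a ToolUpdate, modeled here as dicts.
def needs_update(current_tool: dict, tool_update: dict) -> bool:
    """Return True only if some field in the update differs from the stored tool."""
    return any(
        current_tool.get(field) != new_value
        for field, new_value in tool_update.items()
    )
```

When nothing changed, the manager can return `current_tool` directly, skipping both the write and the redundant re-fetch.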
🐾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta <noreply@letta.com>
* revert: remove tracing spans from update_tool_by_id_async
Remove the tracer span additions from update_tool_by_id_async while keeping
all other improvements (decorator order fix, redundant fetch removal, and
improved update check logic).
Co-Authored-By: Letta <noreply@letta.com>
---------
Co-authored-by: Letta Bot <noreply@letta.com>
When a Secret is created from plaintext (was_encrypted=False), the
is_encrypted() heuristic can incorrectly identify long API keys as
encrypted. This causes get_plaintext() to return None when no encryption
key is available, even though the value was explicitly stored as plaintext.
Fix: Check was_encrypted flag before trusting is_encrypted() heuristic.
If was_encrypted=False, trust the cached plaintext value.
This is a port of https://github.com/letta-ai/letta/pull/3078 to letta-cloud.
Co-authored-by: Letta Bot <noreply@letta.com>
* fix: add context prompt to sleeptime agent user message
Previously the sleeptime agent received only the raw conversation
transcript with no context, causing identity confusion where the
agent would believe it was the primary agent.
Now includes a pre-prompt that:
- Uses "sleeptime agent" terminology explicitly
- Clarifies the agent is NOT the primary agent
- Explains message labels (assistant = primary agent)
- States the agent has no prior turns in the transcript
- Describes the memory management role
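The bullets above translate into a context pre-prompt roughly like the following. The exact wording in letta differs; this only shows the shape of the message and which points it covers.

```python
# Hypothetical pre-prompt covering the five points above; not the shipped text.
def build_sleeptime_preamble() -> str:
    return (
        "You are a sleeptime agent, NOT the primary agent. "
        "In the messages below, 'assistant' turns belong to the primary agent; "
        "you have no prior turns of your own in this conversation. "
        "Your role is to manage and update the primary agent's memory."
    )
```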
Co-Authored-By: Letta <noreply@letta.com>
* chore: remove redundant sleeptime pre-prompt line
* chore: add memory_persona reference to sleeptime pre-prompt
* chore: wrap sleeptime pre-prompt in system-reminder tags
* chore: rename transcript to messages in sleeptime pre-prompt
---------
Co-authored-by: Letta <noreply@letta.com>
The docstring incorrectly stated that fetch_webpage uses Jina AI reader.
Updated to accurately describe the actual implementation, which uses:
1. Exa API (if EXA_API_KEY is available)
2. Trafilatura (fallback)
3. Readability + html2text (final fallback)
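The fallback chain can be sketched as below. The three extractor callables are stubbed parameters because the Exa/Trafilatura/Readability APIs are not shown in the source; only the priority order and the `EXA_API_KEY` gate come from the docstring fix.

```python
import os

def fetch_webpage(url: str, exa_fetch, trafilatura_fetch, readability_fetch) -> str:
    """Try extractors in priority order, falling through on failure or empty output."""
    fetchers = []
    if os.environ.get("EXA_API_KEY"):
        fetchers.append(exa_fetch)       # 1. Exa API (only when a key is configured)
    fetchers.append(trafilatura_fetch)   # 2. Trafilatura fallback
    fetchers.append(readability_fetch)   # 3. Readability + html2text, final fallback
    for fetch in fetchers:
        try:
            text = fetch(url)
            if text:
                return text
        except Exception:
            continue
    raise RuntimeError(f"could not extract content from {url}")
```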
Co-authored-by: Letta <noreply@letta.com>
* add regression test for dict content in AssistantMessage
Tests the fix for pydantic validation error when send_message tool
returns dict content like {'tofu': 1, 'mofu': 1, 'bofu': 1}.
The test verifies that dict content is properly serialized to JSON
string before creating AssistantMessage.
* improve type annotation for validate_function_response
Changed return type from Any to str | dict[str, Any] to match actual
behavior. This enables static type checkers (pyright, mypy) to catch
type mismatches like the AssistantMessage bug.
With proper type annotations, pyright would have caught:
error: Argument of type "str | dict[str, Any]" cannot be assigned
to parameter "content" of type "str"
This prevents future bugs where dict is passed to string-only fields.
* add regression test for dict content in AssistantMessage
Moved test into existing test_message_manager.py suite alongside other
message conversion tests.
Tests the fix for pydantic validation error when send_message tool
returns dict content like {'tofu': 1, 'mofu': 1, 'bofu': 1}.
The test verifies that dict content is properly serialized to JSON
string before creating AssistantMessage.
fix AssistantMessage validation error when content is dict
validate_function_response can return either a string or dict, but
AssistantMessage.content expects a string. When a tool returns a dict
like {'tofu': 1, 'mofu': 1, 'bofu': 1}, it needs to be JSON-serialized
before passing to AssistantMessage.
Fixes: pydantic_core._pydantic_core.ValidationError: 2 validation errors for AssistantMessage
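A minimal sketch of the serialization step, under the assumption that the fix lives in a small helper; the helper name is illustrative, not the actual letta function.

```python
import json

# JSON-serialize dict results before they reach AssistantMessage.content,
# which only accepts a string.
def to_assistant_content(function_response) -> str:
    if isinstance(function_response, dict):
        return json.dumps(function_response)
    return function_response
```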
- Add [IMAGE] placeholder when extracting user messages with image content blocks
- Allows LLM to know images were present in conversation history
- LLM can infer context from surrounding text and conversation flow
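A sketch of the extraction with the placeholder, assuming content blocks shaped like typical multimodal message parts (`{"type": "text"|"image", ...}`); the real block schema may differ.

```python
# Replace image content blocks with an [IMAGE] marker so the LLM knows an
# image was present at that point in the history.
def extract_user_text(content_blocks: list[dict]) -> str:
    parts = []
    for block in content_blocks:
        if block.get("type") == "image":
            parts.append("[IMAGE]")
        elif block.get("type") == "text":
            parts.append(block.get("text", ""))
    return " ".join(parts)
```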
Co-authored-by: Letta <noreply@letta.com>
- Add Letta memory capability notice in memory blocks prompt
- Fix duplicate message persistence by only capturing latest user message
- Append memory blocks to end of system prompt instead of prepending
- Skip empty description tags in memory block formatting
Co-authored-by: Letta <noreply@letta.com>
* fix: clear message history no longer deletes messages
* show a toast and make it stay for 8 secs
* fix test
---------
Co-authored-by: Ari Webb <ari@letta.com>
- Forward all incoming headers from client to Anthropic API
- Extract header preparation logic into prepare_anthropic_headers() helper
- Filter out hop-by-hop headers and authorization header
- Only add fallback API key if client doesn't provide one
Co-authored-by: Letta <noreply@letta.com>
- Update memory injection to use Letta XML format with memory_blocks tags
- Extract memory formatting into format_memory_blocks() helper function
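The formatting helper might look like the sketch below; the per-block tag names and the `label`/`value` fields are guesses at the shape, with only the outer `memory_blocks` tags taken from the commit message.

```python
# Hypothetical rendering of memory blocks in the described XML format.
def format_memory_blocks(blocks: list[dict]) -> str:
    rendered = [
        f'<{block["label"]}>\n{block["value"]}\n</{block["label"]}>'
        for block in blocks
    ]
    return "<memory_blocks>\n" + "\n".join(rendered) + "\n</memory_blocks>"
```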
Co-authored-by: Letta <noreply@letta.com>
* trying out gpt-5.1-codex
* add unit test for message content
* try to support multimodal
* remove ValueError and add logging on stream error
* prevent stream termination from api spec implementation errors
* fix: remove final_response references from non-Responses API interfaces
* fix: add diagnostic attributes to SimpleOpenAIResponsesStreamingInterface
* fix: remove final_response from SimpleOpenAIStreamingInterface (Chat Completions API)