fix(core): preserve Gemini thought_signature on function calls in non-streaming path (#9351)

* fix(core): preserve Gemini thought_signature on function calls in non-streaming path

The Google Gemini API requires thought_signature to be echoed back on
function call parts in multi-turn conversations. In the non-streaming
request path, the signature was only captured for subsequent function
calls (else branch) but dropped for the first/only function call (if
branch) in convert_response_to_chat_completion. This caused 400
INVALID_ARGUMENT errors on the next turn.
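The shape of the bug can be sketched as follows. This is an illustrative reconstruction, not the actual Letta source: the part/message shapes and names here are assumptions. The point is that the signature must be captured for every function-call part, including the first one, rather than only on the branch that handles subsequent calls:

```python
def convert_parts(parts):
    """Collect function calls and the Gemini thought_signature from response parts.

    Hypothetical sketch of the fixed logic: previously the signature was only
    read in the branch handling subsequent function calls, so a response with
    a single function call lost it.
    """
    tool_calls = []
    signature = None
    for part in parts:
        fc = part.get("function_call")
        if fc is None:
            continue  # text/reasoning parts carry no function call
        # Fix: capture thought_signature on the first/only call too,
        # not just when tool_calls is already non-empty.
        if signature is None:
            signature = part.get("thought_signature")
        tool_calls.append({"name": fc["name"], "args": fc.get("args", {})})
    return tool_calls, signature
```

With this shape, a single-function-call response keeps its signature, so it can be echoed back on the next turn as the Gemini API requires.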

Additionally, when no ReasoningContent existed to carry the signature
(e.g. Gemini 2.5 Flash with include_thoughts=False), the signature was
lost in the adapter layer. Now it falls through to TextContent.

Datadog: https://us5.datadoghq.com/error-tracking/issue/17c4b114-d596-11f0-bcd6-da7ad0900000

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix(core): preserve Gemini thought_signature in non-temporal agent path

Carry reasoning_content_signature on TextContent in letta_agent.py
at both locations where content falls through from reasoning (same
fix already applied to the adapter and temporal activity paths).
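A minimal sketch of what "carry the signature on TextContent" means at those fallthrough sites (field names are assumptions mirroring the adapter diff below, not the exact letta_agent.py code):

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class TextContent:
    text: str
    signature: Optional[str] = None


def fallthrough_reasoning(
    message_content: str, reasoning_content_signature: Optional[str]
) -> List[TextContent]:
    # When reasoning falls through into plain content (no ReasoningContent
    # object exists to hold it), attach the Gemini thought_signature to the
    # TextContent instead of dropping it.
    return [TextContent(text=message_content, signature=reasoning_content_signature)]
```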

Co-authored-by: Kian Jones <kianjones9@users.noreply.github.com>

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

---------

Co-authored-by: Letta <noreply@letta.com>
Co-authored-by: letta-code <248085862+letta-code@users.noreply.github.com>
Kian Jones
2026-02-06 17:23:06 -08:00
committed by Caren Thomas
parent 32d87b70d7
commit f20fdc73d1
4 changed files with 26 additions and 4 deletions


@@ -66,7 +66,13 @@ class LettaLLMRequestAdapter(LettaLLMAdapter):
             self.reasoning_content = [OmittedReasoningContent()]
         elif self.chat_completions_response.choices[0].message.content:
             # Reasoning placed into content for legacy reasons
-            self.reasoning_content = [TextContent(text=self.chat_completions_response.choices[0].message.content)]
+            # Carry thought_signature on TextContent when ReasoningContent doesn't exist to hold it
+            self.reasoning_content = [
+                TextContent(
+                    text=self.chat_completions_response.choices[0].message.content,
+                    signature=self.chat_completions_response.choices[0].message.reasoning_content_signature,
+                )
+            ]
         else:
             # logger.info("No reasoning content found.")
             self.reasoning_content = None


@@ -81,7 +81,12 @@ class SimpleLLMRequestAdapter(LettaLLMRequestAdapter):
         if self.chat_completions_response.choices[0].message.content:
             # NOTE: big difference - 'content' goes into 'content'
             # Reasoning placed into content for legacy reasons
-            self.content = [TextContent(text=self.chat_completions_response.choices[0].message.content)]
+            # Carry thought_signature on TextContent when ReasoningContent doesn't exist to hold it
+            # (e.g. Gemini 2.5 Flash with include_thoughts=False still returns thought_signature)
+            orphan_sig = (
+                self.chat_completions_response.choices[0].message.reasoning_content_signature if not self.reasoning_content else None
+            )
+            self.content = [TextContent(text=self.chat_completions_response.choices[0].message.content, signature=orphan_sig)]
         else:
             self.content = None