fix: load default provider config when summarizer uses different provider (#9051)
fix: load default provider config when summarizer uses different provider

**Problem:** Summarization failed when the agent used one provider (e.g., Google AI) but the summarizer config specified a different provider (e.g., Anthropic):

```python
# Agent LLM config
model_endpoint_type='google_ai',
handle='gemini-something/gemini-2.5-pro',
context_window=100000

# Summarizer config
model='anthropic/claude-haiku-4-5-20251001'

# Bug: Resulting summarizer_llm_config mixed Google + Anthropic settings
model='claude-haiku-4-5-20251001',
model_endpoint_type='google_ai',  # ❌ Wrong endpoint!
context_window=100000             # ❌ Google's context window, not Anthropic's default!
```

This sent Claude requests to Google AI endpoints with incorrect parameters.

**Root Cause:** `_build_summarizer_llm_config()` always copied the agent's LLM config as the base, then patched the model/provider fields. But this kept all provider-specific settings (endpoint, context_window, etc.) from the wrong provider.

**Fix:**
1. Parse `provider_name` from the summarizer handle
2. Check if it matches the agent's `model_endpoint_type` (or `provider_name` for custom providers)
3. **If YES** → use the agent config as base, override model/handle (same provider)
4. **If NO** → load the default config via `provider_manager.get_llm_config_from_handle()` (new provider)

**Example Flow:**

```python
# Agent: google_ai/gemini-2.5-pro
# Summarizer: anthropic/claude-haiku
provider_name = "anthropic"  # Parsed from handle
provider_matches = ("anthropic" == "google_ai")  # False ❌

# Different provider → load default Anthropic config
base = await provider_manager.get_llm_config_from_handle(
    handle="anthropic/claude-haiku",
    actor=self.actor,
)
# Returns: model_endpoint_type='anthropic', endpoint='https://api.anthropic.com', etc. ✅
```

**Result:**
- Summarizer with a different provider gets the correct default config
- No more mixing Google endpoints with Anthropic models
- Same-provider summarizers still inherit agent settings efficiently

👾 Generated with [Letta Code](https://letta.com)

Co-authored-by: Letta <noreply@letta.com>
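The four fix steps can be condensed into a minimal sketch. The function signature, `provider_manager`, and field names below are assumptions reconstructed from this commit message, not the actual Letta implementation:

```python
# Hypothetical sketch of the branching logic described above -- names are
# assumptions based on the commit message, not the real Letta code.
from copy import deepcopy

async def build_summarizer_llm_config(agent_llm_config, summarizer_config,
                                      provider_manager, actor):
    handle = summarizer_config.model             # e.g. "anthropic/claude-haiku"
    provider_name = handle.split("/", 1)[0]      # step 1: parse provider_name

    # Step 2: does the summarizer's provider match the agent's?
    if provider_name == agent_llm_config.model_endpoint_type:
        # Step 3: same provider -> inherit agent settings, override model/handle
        config = deepcopy(agent_llm_config)
        config.model = handle.split("/", 1)[1]
        config.handle = handle
        return config

    # Step 4: different provider -> load that provider's default config,
    # so endpoint, context_window, etc. come from the right provider
    return await provider_manager.get_llm_config_from_handle(
        handle=handle, actor=actor
    )
```

The key design point is that the branch decides which config is the *base*: same-provider summarizers reuse the agent's tuned settings, while cross-provider summarizers start from the provider's defaults instead of a mismatched copy.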
```diff
@@ -406,8 +406,14 @@ async def test_compaction_settings_model_uses_separate_llm_config_for_summarizat
         tool_rules=None,
     )
 
-    # Use the static helper on LettaAgentV3 to derive summarizer llm_config
-    summarizer_llm_config = LettaAgentV3._build_summarizer_llm_config(
+    # Create a mock agent instance to call the instance method
+    mock_agent = Mock(spec=LettaAgentV3)
+    mock_agent.actor = default_user
+    mock_agent.logger = Mock()
+
+    # Use the instance method to derive summarizer llm_config
+    summarizer_llm_config = await LettaAgentV3._build_summarizer_llm_config(
+        mock_agent,
         agent_llm_config=agent_state.llm_config,
         summarizer_config=agent_state.compaction_settings,
     )
```
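The rewritten test invokes the instance method through the class, passing a `Mock(spec=...)` as `self` so no real agent (and no `__init__`) is needed. A generic illustration of that pattern, using a hypothetical `Agent` class rather than Letta's:

```python
from unittest.mock import Mock

class Agent:
    def __init__(self, actor):
        self.actor = actor  # stands in for expensive setup we want to skip in tests

    def describe(self):
        return f"agent acting as {self.actor}"

# Mock(spec=Agent) builds a stand-in that passes isinstance-style spec checks
# without ever running Agent.__init__
mock_agent = Mock(spec=Agent)
mock_agent.actor = "test-user"

# Calling the method through the class passes the mock as `self`
print(Agent.describe(mock_agent))  # → agent acting as test-user
```

Note that plain `spec` still allows setting extra attributes like `actor` after construction; the stricter `spec_set` would not.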