fix: load default provider config when summarizer uses different provider

**Problem:** Summarization failed when the agent used one provider (e.g., Google AI) but the summarizer config specified a different provider (e.g., Anthropic):

```python
# Agent LLM config
model_endpoint_type='google_ai',
handle='gemini-something/gemini-2.5-pro',
context_window=100000

# Summarizer config
model='anthropic/claude-haiku-4-5-20251001'

# Bug: resulting summarizer_llm_config mixed Google + Anthropic settings
model='claude-haiku-4-5-20251001',
model_endpoint_type='google_ai',  # ❌ Wrong endpoint!
context_window=100000  # ❌ Google's context window, not Anthropic's default!
```

This sent Claude requests to Google AI endpoints with incorrect parameters.

**Root Cause:** `_build_summarizer_llm_config()` always copied the agent's LLM config as the base, then patched the model/provider fields. This kept all provider-specific settings (endpoint, context_window, etc.) from the wrong provider.

**Fix:**

1. Parse `provider_name` from the summarizer handle
2. Check whether it matches the agent's `model_endpoint_type` (or `provider_name` for custom providers)
3. **If YES** → use the agent config as the base and override model/handle (same provider)
4. **If NO** → load the default config via `provider_manager.get_llm_config_from_handle()` (new provider)

**Example Flow:**

```python
# Agent: google_ai/gemini-2.5-pro
# Summarizer: anthropic/claude-haiku
provider_name = "anthropic"  # Parsed from handle
provider_matches = ("anthropic" == "google_ai")  # False ❌

# Different provider → load default Anthropic config
base = await provider_manager.get_llm_config_from_handle(
    handle="anthropic/claude-haiku",
    actor=self.actor,
)
# Returns: model_endpoint_type='anthropic', endpoint='https://api.anthropic.com', etc. ✅
```

**Result:**

- Summarizer with a different provider gets the correct default config
- No more mixing Google endpoints with Anthropic models
- Same-provider summarizers still inherit agent settings efficiently

👾 Generated with [Letta Code](https://letta.com)

Co-authored-by: Letta <noreply@letta.com>
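The branching logic described in steps 1–4 can be sketched as a small standalone function. The `LLMConfig` dataclass, the `DEFAULT_CONFIGS` table (standing in for `provider_manager.get_llm_config_from_handle()`), and the context-window values below are all illustrative assumptions, not Letta's actual internals:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class LLMConfig:
    model: str
    model_endpoint_type: str
    handle: str
    context_window: int

# Hypothetical provider defaults; in Letta these would come from
# provider_manager.get_llm_config_from_handle(). Values are assumptions.
DEFAULT_CONFIGS = {
    "anthropic": LLMConfig(model="", model_endpoint_type="anthropic",
                           handle="", context_window=200000),
    "google_ai": LLMConfig(model="", model_endpoint_type="google_ai",
                           handle="", context_window=100000),
}

def build_summarizer_llm_config(agent_config: LLMConfig,
                                summarizer_handle: str) -> LLMConfig:
    """Choose the base config by comparing the summarizer's provider
    (parsed from its handle) against the agent's provider."""
    provider_name, _, model = summarizer_handle.partition("/")
    if provider_name == agent_config.model_endpoint_type:
        # Same provider: inherit the agent's settings, just swap the model.
        base = agent_config
    else:
        # Different provider: start from that provider's own defaults,
        # so endpoint/context_window come from the right place.
        base = DEFAULT_CONFIGS[provider_name]
    return replace(base, model=model, handle=summarizer_handle)

agent = LLMConfig(model="gemini-2.5-pro", model_endpoint_type="google_ai",
                  handle="google_ai/gemini-2.5-pro", context_window=100000)
cfg = build_summarizer_llm_config(agent, "anthropic/claude-haiku")
# cfg now carries Anthropic's endpoint type and default context window,
# instead of inheriting Google's settings.
```

With the buggy behavior, `cfg` would have kept `model_endpoint_type='google_ai'`; here the cross-provider branch swaps in the Anthropic defaults while the same-provider branch still inherits the agent's config untouched.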
# Letta (formerly MemGPT)
Letta is the platform for building stateful agents: AI with advanced memory that can learn and self-improve over time.
- Letta Code: run agents locally in your terminal
- Letta API: build agents into your applications
## Get started in the CLI

Requires Node.js 18+.

- Install the Letta Code CLI tool: `npm install -g @letta-ai/letta-code`
- Run `letta` in your terminal to launch an agent with memory running on your local computer

When running the CLI tool, your agent can help you code and do any task you can do on your computer.
Letta Code supports skills and subagents, and bundles pre-built skills/subagents for advanced memory and continual learning. Letta is fully model-agnostic, though we recommend Opus 4.5 and GPT-5.2 for best performance (see our model leaderboard for our rankings).
## Get started with the Letta API

Use the Letta API to integrate stateful agents into your own applications. Letta has a full-featured agents API, with Python and TypeScript SDKs (view our API reference).

### Installation

TypeScript / Node.js:

```shell
npm install @letta-ai/letta-client
```

Python:

```shell
pip install letta-client
```

### Hello World example
Below is a quick example of creating a stateful agent and sending it a message (requires a Letta API key). See the full quickstart guide for complete documentation.
TypeScript:

```typescript
import Letta from "@letta-ai/letta-client";

const client = new Letta({ apiKey: process.env.LETTA_API_KEY });

// Create your agent
const agentState = await client.agents.create({
  model: "openai/gpt-5.2",
  memory_blocks: [
    {
      label: "human",
      value:
        "Name: Timber. Status: dog. Occupation: building Letta, infrastructure to democratize self-improving superintelligence",
    },
    {
      label: "persona",
      value:
        "I am a self-improving superintelligence. Timber is my best friend and collaborator.",
    },
  ],
  tools: ["web_search", "fetch_webpage"],
});
console.log("Agent created with ID:", agentState.id);

// Send your agent a message
const response = await client.agents.messages.create(agentState.id, {
  input: "What do you know about me?",
});
for (const message of response.messages) {
  console.log(message);
}
```
Python:

```python
import os

from letta_client import Letta

client = Letta(api_key=os.getenv("LETTA_API_KEY"))

# Create your agent
agent_state = client.agents.create(
    model="openai/gpt-5.2",
    memory_blocks=[
        {
            "label": "human",
            "value": "Name: Timber. Status: dog. Occupation: building Letta, infrastructure to democratize self-improving superintelligence"
        },
        {
            "label": "persona",
            "value": "I am a self-improving superintelligence. Timber is my best friend and collaborator."
        }
    ],
    tools=["web_search", "fetch_webpage"]
)
print(f"Agent created with ID: {agent_state.id}")

# Send your agent a message
response = client.agents.messages.create(
    agent_id=agent_state.id,
    input="What do you know about me?"
)
for message in response.messages:
    print(message)
```
## Contributing
Letta is an open source project built by over a hundred contributors from around the world. There are many ways to get involved in the Letta OSS project!
- Join the Discord: Chat with the Letta devs and other AI developers.
- Chat on our forum: If you're not into Discord, check out our developer forum.
- Follow our socials: Twitter/X, LinkedIn, YouTube
Legal notices: By using Letta and related Letta services (such as the Letta endpoint or hosted service), you are agreeing to our privacy policy and terms of service.
