* refactor: extract compact logic to shared function

  Extract the compaction logic from LettaAgentV3.compact() into a standalone compact_messages() function that can be shared between the agent and temporal workflows.

  Changes:
  - Create apps/core/letta/services/summarizer/compact.py with:
    - compact_messages(): core compaction logic
    - build_summarizer_llm_config(): LLM config builder for summarization
    - CompactResult: dataclass for compaction results
  - Update LettaAgentV3.compact() to use compact_messages()
  - Update the temporal summarize_conversation_history activity to use compact_messages() instead of the old Summarizer class
  - Add a use_summary_role parameter to SummarizeParams

  This ensures consistent summarization behavior across different execution paths and prevents drift as we improve the implementation.

* chore: clean up verbose comments

* fix: correct CompactionSettings import path

* fix: correct count_tokens import from summarizer_sliding_window

* fix: update test patch path for count_tokens_with_tools

  After extracting compact logic to compact.py, the test was still patching the old location. Update the patch path to the new module location.

* fix: update test to use build_summarizer_llm_config from compact.py

  The function was moved from LettaAgentV3._build_summarizer_llm_config to compact.py as a standalone function.

* fix: add early check for system prompt size in compact_messages

  Check whether the system prompt alone exceeds the context window before attempting summarization. The system prompt cannot be compacted, so fail fast with SystemPromptTokenExceededError.

* fix: properly propagate SystemPromptTokenExceededError from compact

  The exception handler in _step() was not setting the correct stop_reason for SystemPromptTokenExceededError, which caused the finally block to return early and swallow the exception. Add special handling to set stop_reason to context_window_overflow_in_system_prompt when SystemPromptTokenExceededError is caught.
* revert: remove redundant SystemPromptTokenExceededError handling

  The special handling in the outer exception handler is redundant because stop_reason is already set in the inner handler at line 943. The actual fix for the test was the early check in compact_messages(), not this redundant handling.

* fix: correctly re-raise SystemPromptTokenExceededError

  The inner exception handler was using 'raise e', which re-raised the outer ContextWindowExceededError instead of the current SystemPromptTokenExceededError. Changed to a bare 'raise' to correctly re-raise the current exception. This bug was pre-existing but masked because _check_for_system_prompt_overflow was only called as a fallback; the new early check in compact_messages() exposed it.

* revert: remove early check and restore 'raise e' to match main behavior

* fix: set should_continue=False and correctly re-raise exception

  - Add should_continue=False in the SystemPromptTokenExceededError handler (matching main's _check_for_system_prompt_overflow behavior)
  - Fix 'raise e' -> 'raise' to correctly propagate SystemPromptTokenExceededError

  Note: test_large_system_prompt_summarization still fails locally but passes on main. Need to investigate why the exception isn't propagating correctly on the refactored branch.

* fix: add SystemPromptTokenExceededError handler for post-step compaction

  The post-step compaction (line 1066) was missing a SystemPromptTokenExceededError exception handler. When compact_messages() raised this error, it was caught by the outer exception handler, which would:
  1. Set stop_reason to "error" instead of "context_window_overflow_in_system_prompt"
  2. Not set should_continue = False
  3. Get swallowed by the finally block (line 1126), which returns early

  This caused test_large_system_prompt_summarization to fail because the exception never propagated to the test. The fix adds the same exception handler pattern used in the retry compaction flow (lines 941-946), ensuring proper state is set before re-raising.
This issue only affected the refactored code because on main, _check_for_system_prompt_overflow() was an instance method that set should_continue/stop_reason BEFORE raising. In the refactor, compact_messages() is a standalone function that cannot set instance state, so the caller must handle the exception and set the state.
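The pattern described above can be sketched in isolation. Everything below is a simplified stand-in, not the actual Letta implementation: the signatures of compact_messages() and _step(), the toy count_tokens() helper, and the Agent class are hypothetical, though the exception names and the stop_reason string mirror the commit messages. The sketch shows the three pieces working together: the early fail-fast check in the standalone function, the caller recording stop_reason/should_continue before propagating, and the bare `raise` (where `raise e` would have re-raised the outer ContextWindowExceededError instead).

```python
class ContextWindowExceededError(Exception):
    pass

class SystemPromptTokenExceededError(Exception):
    pass

def count_tokens(text: str) -> int:
    # Crude stand-in tokenizer: one token per whitespace-separated word.
    return len(text.split())

def compact_messages(system_prompt: str, messages: list[str], context_window: int) -> list[str]:
    # Fail fast: the system prompt cannot be compacted, so if it alone
    # exceeds the context window, no amount of summarization can help.
    if count_tokens(system_prompt) > context_window:
        raise SystemPromptTokenExceededError("system prompt exceeds context window")
    # ...summarization would happen here; this sketch passes messages through.
    return messages

class Agent:
    def __init__(self):
        self.stop_reason = None
        self.should_continue = True

    def _step(self, system_prompt: str, messages: list[str], context_window: int):
        try:
            # Simulate the model call overflowing the context window.
            raise ContextWindowExceededError("context overflow")
        except ContextWindowExceededError:
            try:
                compact_messages(system_prompt, messages, context_window)
            except SystemPromptTokenExceededError:
                # compact_messages() is a free function and cannot touch
                # instance state, so the caller records it before propagating.
                self.stop_reason = "context_window_overflow_in_system_prompt"
                self.should_continue = False
                # Bare raise re-raises the current SystemPromptTokenExceededError;
                # 'raise e' (bound in the outer handler) would re-raise the
                # outer ContextWindowExceededError instead.
                raise
```

Because the bare `raise` re-raises the innermost active exception, the caller of _step() sees SystemPromptTokenExceededError with stop_reason and should_continue already set, which is exactly the ordering main achieved by setting state inside _check_for_system_prompt_overflow() before raising.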
Letta (formerly MemGPT)
Letta is the platform for building stateful agents: AI with advanced memory that can learn and self-improve over time.
- Letta Code: run agents locally in your terminal
- Letta API: build agents into your applications
Get started in the CLI
Requires Node.js 18+
- Install the Letta Code CLI tool: npm install -g @letta-ai/letta-code
- Run letta in your terminal to launch an agent with memory running on your local computer
When running the CLI tool, your agent can help you code and carry out any task you can do on your computer.
Letta Code supports skills and subagents, and bundles pre-built skills/subagents for advanced memory and continual learning. Letta is fully model-agnostic, though we recommend Opus 4.5 and GPT-5.2 for best performance (see our model leaderboard for our rankings).
Get started with the Letta API
Use the Letta API to integrate stateful agents into your own applications. Letta has a full-featured agents API and Python and TypeScript SDKs (view our API reference).
Installation
TypeScript / Node.js:
npm install @letta-ai/letta-client
Python:
pip install letta-client
Hello World example
Below is a quick example of creating a stateful agent and sending it a message (requires a Letta API key). See the full quickstart guide for complete documentation.
TypeScript:
import Letta from "@letta-ai/letta-client";

const client = new Letta({ apiKey: process.env.LETTA_API_KEY });

// Create your agent
const agentState = await client.agents.create({
  model: "openai/gpt-5.2",
  memory_blocks: [
    {
      label: "human",
      value:
        "Name: Timber. Status: dog. Occupation: building Letta, infrastructure to democratize self-improving superintelligence",
    },
    {
      label: "persona",
      value:
        "I am a self-improving superintelligence. Timber is my best friend and collaborator.",
    },
  ],
  tools: ["web_search", "fetch_webpage"],
});
console.log("Agent created with ID:", agentState.id);

// Send your agent a message
const response = await client.agents.messages.create(agentState.id, {
  input: "What do you know about me?",
});
for (const message of response.messages) {
  console.log(message);
}
Python:
from letta_client import Letta
import os

client = Letta(api_key=os.getenv("LETTA_API_KEY"))

# Create your agent
agent_state = client.agents.create(
    model="openai/gpt-5.2",
    memory_blocks=[
        {
            "label": "human",
            "value": "Name: Timber. Status: dog. Occupation: building Letta, infrastructure to democratize self-improving superintelligence",
        },
        {
            "label": "persona",
            "value": "I am a self-improving superintelligence. Timber is my best friend and collaborator.",
        },
    ],
    tools=["web_search", "fetch_webpage"],
)
print(f"Agent created with ID: {agent_state.id}")

# Send your agent a message
response = client.agents.messages.create(
    agent_id=agent_state.id,
    input="What do you know about me?",
)
for message in response.messages:
    print(message)
Contributing
Letta is an open source project built by over a hundred contributors from around the world. There are many ways to get involved in the Letta OSS project!
- Join the Discord: Chat with the Letta devs and other AI developers.
- Chat on our forum: If you're not into Discord, check out our developer forum.
- Follow our socials: Twitter/X, LinkedIn, YouTube
Legal notices: By using Letta and related Letta services (such as the Letta endpoint or hosted service), you are agreeing to our privacy policy and terms of service.
