# Letta Code
Letta Code is a memory-first coding harness, built on top of the Letta API. Instead of working in independent sessions, you work with a persisted agent that learns over time and is portable across models (Claude Sonnet/Opus, GPT-5, Gemini 3 Pro, GLM-4.6, and more).
Read more about how to use Letta Code on the official docs page.
## Get started
Install the package via npm:
```shell
npm install -g @letta-ai/letta-code
```
Navigate to your project directory and run `letta` (see the various command-line options in the docs).
> **Note**
>
> By default, Letta Code connects to the Letta Developer Platform (which includes a free tier); you can authenticate via OAuth or by setting a `LETTA_API_KEY`. You can also connect to a self-hosted Letta server by setting `LETTA_BASE_URL`.
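For example, either connection mode can be configured through environment variables before launching `letta`. This is a minimal sketch; the key value is a placeholder, and `localhost:8283` assumes the Letta server's default port:

```shell
# Option 1: Letta Developer Platform via API key
# (replace with your real key; the value below is a placeholder)
export LETTA_API_KEY="sk-let-example"

# Option 2: self-hosted Letta server
# (localhost:8283 assumes the server's default port; adjust if yours differs)
export LETTA_BASE_URL="http://localhost:8283"
```

With either variable set, running `letta` in your project directory will use that connection.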
## Philosophy
Letta Code is built around long-lived agents that persist across sessions and improve with use. Rather than working in independent sessions, each session is tied to a persisted agent that learns.
### Claude Code / Codex / Gemini CLI (Session-Based)
- Sessions are independent
- No learning between sessions
- Context = messages in the current session + `AGENTS.md`
- Relationship: every conversation is like meeting a new contractor
### Letta Code (Agent-Based)
- Same agent across sessions
- Persistent memory and learning over time
- `/clear` resets the session (clears the current in-context messages), but memory persists
- Relationship: like having a coworker or mentee who learns and remembers
## Agent Memory & Learning
If you’re using Letta Code for the first time, you will likely want to run the `/init` command to initialize the agent’s memory system:
> /init
Over time, the agent will update its memory as it learns. To actively guide your agent’s memory, you can use the `/remember` command:
> /remember [optional instructions on what to remember]
Letta Code works with skills (reusable modules in a `.skills` directory that teach your agent new capabilities), and additionally supports skill learning. You can ask your agent to learn a skill from its current trajectory with the command:
> /skill [optional instructions on what skill to learn]
Read the docs to learn more about skills and skill learning.
Community-maintained packages are available for Arch Linux users on the AUR:

```shell
yay -S letta-code      # release
yay -S letta-code-git  # nightly
yay -S letta-code-bin  # prebuilt release
```
Made with 💜 in San Francisco
