Letta Code
Letta Code is a memory-first coding harness, built on top of the Letta API. Instead of working in independent sessions, you work with a persisted agent that learns over time and is portable across models (Claude Sonnet/Opus 4.5, GPT-5.2-Codex, Gemini 3 Pro, GLM-4.7, and more).
Read more about how to use Letta Code on the official docs page.
Get started
Install the package via npm:
npm install -g @letta-ai/letta-code
Navigate to your project directory and run letta (see the docs for the available command-line options).
Run /connect to configure your own LLM API keys (OpenAI, Anthropic, etc.), and use /model to swap models.
Note
By default, Letta Code will connect to the Letta API. Use /connect to use your own LLM API keys and coding plans (Codex, zAI, Minimax) for free. Set LETTA_BASE_URL to connect to an external Docker server.
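For a self-hosted setup, LETTA_BASE_URL just needs to point at your server's base URL before you launch the CLI. A minimal sketch, assuming a Docker server on a local port (the host and port here are assumptions; use whatever your deployment exposes):

```shell
# Point Letta Code at a self-hosted Letta server instead of the Letta API.
# The host/port below are assumptions; adjust to match your Docker server.
export LETTA_BASE_URL="http://localhost:8283"
```

With the variable exported, running letta in your project directory will target that server.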
Philosophy
Letta Code is built around long-lived agents that persist across sessions and improve with use. Instead of each session starting from scratch, every session is tied to a persisted agent that learns.
Claude Code / Codex / Gemini CLI (Session-Based)
- Sessions are independent
- No learning between sessions
- Context = messages in the current session + AGENTS.md
- Relationship: Every conversation is like meeting a new contractor
Letta Code (Agent-Based)
- Same agent across sessions
- Persistent memory and learning over time
- /clear starts a new conversation (aka "thread" or "session"), but memory persists
- Relationship: Like having a coworker or mentee that learns and remembers
Agent Memory & Learning
If you’re using Letta Code for the first time, you will likely want to run the /init command to initialize the agent’s memory system:
> /init
Over time, the agent will update its memory as it learns. To actively guide your agent's memory, you can use the /remember command:
> /remember [optional instructions on what to remember]
Letta Code works with skills (reusable modules that teach your agent new capabilities in a .skills directory), but additionally supports skill learning. You can ask your agent to learn a skill from its current trajectory with the command:
> /skill [optional instructions on what skill to learn]
Read the docs to learn more about skills and skill learning.
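As a rough sketch of what a hand-written skill might look like on disk: the directory and file names below are illustrative, and the SKILL.md frontmatter follows the common agent-skills convention rather than anything confirmed Letta-specific, so check the docs before relying on this layout:

```shell
# Hypothetical example: a project-local skill the agent can load.
# The skill name and file contents are made up for illustration.
mkdir -p .skills/release-checklist
cat > .skills/release-checklist/SKILL.md <<'EOF'
---
name: release-checklist
description: Steps this project follows when cutting a release.
---
1. Run the full test suite.
2. Bump the version and update the changelog.
3. Tag the commit and push the tag.
EOF
```

Skills learned via /skill are written by the agent itself from its current trajectory, so you normally only hand-author files like this for capabilities you want to seed up front.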
Community-maintained packages are available for Arch Linux users on the AUR:
yay -S letta-code # release
yay -S letta-code-git # nightly
Made with 💜 in San Francisco
