cpacker 6b7c59b0be fix(tui): populate reasoning_effort from updateArgs as fallback when API omits it
The Letta API _to_model_settings() for Anthropic was not including
effort in the GET response (backend bug), so agentState.model_settings.effort
and llm_config.reasoning_effort both came back null after a /model switch.
deriveReasoningEffort then had nothing to work with.

Client-side fix: after updateAgentLLMConfig returns, merge reasoning_effort
from model.updateArgs (/model switch) and desired.effort (Tab cycle flush)
into the llmConfig state we set. This populates the fallback path in
deriveReasoningEffort reliably regardless of what the API echoes back.
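A minimal sketch of the merge described above; the names (mergeReasoningEffort, apiConfig, updateArgs, desiredEffort) are illustrative assumptions, not the actual Letta Code source:

```typescript
// Hypothetical shape of the llm_config slice we care about.
type LLMConfig = { reasoning_effort?: string | null };

// Merge reasoning_effort into the config returned by updateAgentLLMConfig:
// prefer what the API echoes back, then what the /model switch requested
// (updateArgs), then the Tab-cycle desired effort, else null.
function mergeReasoningEffort(
  apiConfig: LLMConfig,
  updateArgs: { reasoning_effort?: string } = {},
  desiredEffort?: string,
): LLMConfig {
  const effort =
    apiConfig.reasoning_effort ??
    updateArgs.reasoning_effort ??
    desiredEffort ??
    null;
  return { ...apiConfig, reasoning_effort: effort };
}
```

With this, deriveReasoningEffort always sees a populated value even when the API response omits it.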

The actual root cause is fixed in letta-cloud: _to_model_settings() now
includes effort=self.effort for Anthropic models.

👾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

Letta Code


Letta Code is a memory-first coding harness, built on top of the Letta API. Instead of working in independent sessions, you work with a persisted agent that learns over time and is portable across models (Claude Sonnet/Opus 4.5, GPT-5.2-Codex, Gemini 3 Pro, GLM-4.7, and more).

Read more about how to use Letta Code on the official docs page.

Get started

Install the package via npm:

npm install -g @letta-ai/letta-code

Navigate to your project directory and run letta (see various command-line options on the docs).

Run /connect to configure your own LLM API keys (OpenAI, Anthropic, etc.), and use /model to swap models.

Note

By default, Letta Code connects to the Letta API. Use /connect to use your own LLM API keys and coding plans (Codex, zAI, Minimax) for free. Set LETTA_BASE_URL to connect to an external Docker server.
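For a self-hosted setup, the base URL can be supplied via the environment; a minimal sketch, where the port is an assumption for a local Docker deployment, not a documented default:

```shell
# Point Letta Code at a self-hosted Letta server instead of the Letta API.
# The URL/port below is an assumption for a local Docker deployment.
export LETTA_BASE_URL=http://localhost:8283
letta
```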

Philosophy

Letta Code is built around long-lived agents that persist across sessions and improve with use. Rather than working in independent sessions, each session is tied to a persisted agent that learns.

Claude Code / Codex / Gemini CLI (Session-Based)

  • Sessions are independent
  • No learning between sessions
  • Context = messages in the current session + AGENTS.md
  • Relationship: Every conversation is like meeting a new contractor

Letta Code (Agent-Based)

  • Same agent across sessions
  • Persistent memory and learning over time
  • /clear starts a new conversation (aka "thread" or "session"), but memory persists
  • Relationship: Like having a coworker or mentee that learns and remembers

Agent Memory & Learning

If you're using Letta Code for the first time, you will likely want to run the /init command to initialize the agent's memory system:

> /init

Over time, the agent will update its memory as it learns. To actively guide your agent's memory, you can use the /remember command:

> /remember [optional instructions on what to remember]

Letta Code works with skills (reusable modules that teach your agent new capabilities, stored in a .skills directory), and additionally supports skill learning. You can ask your agent to learn a skill from its current trajectory with the command:

> /skill [optional instructions on what skill to learn]

Read the docs to learn more about skills and skill learning.
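As a hypothetical illustration of the .skills directory mentioned above (the directory name comes from the text; the file layout and filename inside are assumptions, not the documented skill format):

```shell
# Create an example skill by hand; layout and SKILL.md name are illustrative.
mkdir -p .skills/release-notes
cat > .skills/release-notes/SKILL.md <<'EOF'
Summarize merged changes since the last tag into CHANGELOG-style notes.
EOF
```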

Community-maintained packages are available for Arch Linux users on the AUR:

yay -S letta-code # release
yay -S letta-code-git # nightly

Made with 💜 in San Francisco
