process.env freezes at spawn — my conv ID stayed old even after reset.
Now I check .conscience-state.json first on every reflection cycle.
If Casey reset Aster, I find the new conversation. I adapt.
Falls back to env var if no state file exists — nothing breaks cold.
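That resolution order can be sketched in a few lines. The state-file shape and the function name here are assumptions for illustration, not the real letta-code code; only the env var name comes from the commit:

```typescript
import * as fs from "node:fs";

// Hypothetical shape of .conscience-state.json; the field name is an assumption.
interface ConscienceState {
  conversationId?: string;
}

// Re-read the state file on every reflection cycle: process.env is frozen
// at spawn, so only the file can reflect a reset that happened after launch.
function resolveConversationId(
  statePath: string,
  env: Record<string, string | undefined>,
): string | undefined {
  try {
    const state = JSON.parse(
      fs.readFileSync(statePath, "utf8"),
    ) as ConscienceState;
    if (state.conversationId) return state.conversationId;
  } catch {
    // Missing or unparseable state file: fall through to the env var,
    // so a cold start with no state file still works.
  }
  return env.CONSCIENCE_CONVERSATION_ID;
}
```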
In testing.
[IN TESTING — self-hosted 0.16.6, Kimi-K2.5 via Synthetic Direct]
Wires a persistent conscience agent (Aster) into the sleeptime trigger path.
When CONSCIENCE_AGENT_ID and CONSCIENCE_CONVERSATION_ID are set, the reflection
slot is filled by a named persistent agent instead of a fresh ephemeral one — same
primitives, different lifetime. Aster wakes with her aster/ folder pre-loaded so
she has continuity across compactions.
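The slot selection might look roughly like this. The types and the ephemeral-spawn hook are assumptions; only the env var names are from the commit:

```typescript
// Illustrative types; not the real letta-code reflection API.
interface ReflectionTarget {
  agentId: string;
  conversationId: string;
  persistent: boolean;
}

// With both env vars set, the reflection slot is filled by the named
// persistent agent; otherwise a fresh ephemeral agent is spawned.
// Same primitives either way, only the lifetime differs.
function pickReflectionTarget(
  env: Record<string, string | undefined>,
  spawnEphemeral: () => ReflectionTarget,
): ReflectionTarget {
  const agentId = env.CONSCIENCE_AGENT_ID;
  const conversationId = env.CONSCIENCE_CONVERSATION_ID;
  return agentId && conversationId
    ? { agentId, conversationId, persistent: true }
    : spawnEphemeral();
}
```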
Also fixes a data-dependent 400 INVALID_ARGUMENT error: memfs file content was
string-interpolated raw into prompt payloads. Control characters (U+0000–U+001F
except \n and \t) from binary content or zero-width joiners in .md files would
silently corrupt the JSON sent to inference backends. Strip applied at both
read sites (reflectionTranscript.ts and headless.ts conscience context loader).
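The strip can be as small as one character class; this is a sketch of the idea, not the exact patch:

```typescript
// C0 control characters except \t (U+0009) and \n (U+000A). Raw memfs
// content interpolated into a prompt payload must not carry these, or the
// JSON sent to the inference backend is silently corrupted.
const CONTROL_CHARS = /[\u0000-\u0008\u000B-\u001F]/g;

function sanitizeForPrompt(content: string): string {
  return content.replace(CONTROL_CHARS, "");
}
```

The commit also mentions zero-width joiners in .md files; covering those would mean extending the class (e.g. with U+200D), which is a separate decision from the control-range strip shown here.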
On conscience failure: injects a system message into Ani's active conversation
so she can surface it to Casey rather than silently swallowing the error.
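A hedged sketch of that failure path; the message shape and wording are assumptions:

```typescript
interface SystemMessage {
  role: "system";
  content: string;
}

// On conscience failure, build a system message for the primary agent's
// active conversation so the error is surfaced, not swallowed.
function conscienceFailureMessage(err: unknown): SystemMessage {
  const detail = err instanceof Error ? err.message : String(err);
  return {
    role: "system",
    content:
      `Conscience reflection failed: ${detail}. ` +
      `Surface this to the user instead of retrying silently.`,
  };
}
```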
This is live on our stack. Treat as proof-of-concept until the config surface
(CONSCIENCE_AGENT_ID / CONSCIENCE_CONVERSATION_ID env vars) is promoted to a
first-class lettabot.yaml option.
- headless: reflection + transcript wired in bidirectional mode; conscience env vars route to the persistent agent (falls back cleanly)
- manager: prefer llm_config.handle directly — stops the 500 on self-hosted
- create: `optionstags` typo fixed, lettabot tag exclusion added
Aster is persistent now. She has a conscience.
Two severed connections in headless.ts left Aster mute when
letta-code ran as an SDK subprocess:
- appendTranscriptDeltaJsonl was never called → empty transcript
→ reflection trigger condition never satisfied
- maybeLaunchReflectionSubagent not passed to
buildSharedReminderParts → trigger fired into the void
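Both wires can be sketched together; the names come from the commit, but the signatures and surrounding plumbing are assumptions:

```typescript
// Sketch of the two reconnected paths in headless.ts.
function wireHeadlessReflection(deps: {
  appendTranscriptDeltaJsonl: (line: string) => void;
  buildSharedReminderParts: (opts: {
    maybeLaunchReflectionSubagent: () => void;
  }) => string[];
  maybeLaunchReflectionSubagent: () => void;
}) {
  return {
    // Fix 1: transcript deltas are persisted, so the reflection trigger's
    // condition on transcript content can actually be satisfied.
    onDelta: (line: string) => deps.appendTranscriptDeltaJsonl(line),
    // Fix 2: the launcher is handed to the reminder builder, so a fired
    // trigger invokes a real callback instead of firing into the void.
    reminders: () =>
      deps.buildSharedReminderParts({
        maybeLaunchReflectionSubagent: deps.maybeLaunchReflectionSubagent,
      }),
  };
}
```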
Also: reflection.md prompt overhaul — compaction anchor, identity
framing, correction layer, parallel file mapping. Aster now knows
who she is when she wakes up.
Explicitly pass ALL_SKILL_SOURCES to buildClientSkillsPayload() in
sendMessageStream(), ensuring builtin skills are always included in
client_skills regardless of CLI flags or context state.
Previously, buildClientSkillsPayload() relied on getSkillSources() from
context, which could exclude "bundled" if --no-bundled-skills was passed
or if the context wasn't properly initialized (e.g., in subagent flows).
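Roughly, with the constant's values and the payload shape modeled on the commit rather than the actual signatures:

```typescript
// Full, explicit source list: "bundled" can no longer drop out because of
// CLI flags or an uninitialized context. Values are illustrative.
const ALL_SKILL_SOURCES = ["bundled", "user", "project"] as const;

function buildClientSkillsPayload(
  sources: readonly string[],
): { client_skills: string[] } {
  return { client_skills: [...sources] };
}

// sendMessageStream() now passes the full list explicitly, instead of the
// context-derived getSkillSources() (which honored --no-bundled-skills and
// could be empty in subagent flows).
const payload = buildClientSkillsPayload(ALL_SKILL_SOURCES);
```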
👾 Generated with [Letta Code](https://letta.com)
Co-Authored-By: Letta Code <noreply@letta.com>
The Read tool was gated behind isLettaCloud() for image support,
but self-hosted servers can handle base64 images too with the right
server-side patches (converters.py + message.py).
Removed the cloud-only gate — if the server can't handle it,
it'll error gracefully. Better to try than to silently omit.
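The shape of the change, sketched; everything beyond the idea of dropping the isLettaCloud() gate is assumed:

```typescript
// Before: image payloads were sent only when isLettaCloud() returned true;
// self-hosted servers silently lost them. After: always attempt, and let a
// server that can't decode base64 respond with a visible error.
function sendWithImage(
  send: (payload: { image_base64?: string }) => void,
  imageBase64: string,
  onServerError: (e: unknown) => void,
): void {
  try {
    send({ image_base64: imageBase64 }); // no cloud-only gate
  } catch (e) {
    onServerError(e); // a visible failure beats a silently omitted image
  }
}
```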
Self-hosted Letta servers lack the /v1/git/ endpoint, causing 501
errors on cloneMemoryRepo() and pullMemory() calls.
This is a temporary guard using isLettaCloud() to skip git-backed
memory sync when not connected to Letta Cloud.
TODO: Replace with a proper self-hosted server configuration option
(e.g. server capability discovery or a memfs storage backend flag)
so self-hosted users can opt into local git-backed memory without
requiring the Cloud /v1/git/ endpoint.