feat: update defrag skill to use memfs instead of backup/restore scripts (#732)
Co-authored-by: Letta <noreply@letta.com>
@@ -1,17 +1,18 @@
 ---
 name: defragmenting-memory
-description: Decomposes and reorganizes agent memory blocks into focused, single-purpose components. Use when memory has large multi-topic blocks, redundancy, or poor organization. Backs up memory, uses a subagent to decompose and clean it up, then restores the improved version.
+description: Decomposes and reorganizes agent memory blocks into focused, single-purpose components. Use when memory has large multi-topic blocks, redundancy, or poor organization. Works directly on memfs files — memory sync handles propagation to the API.
 ---

 # Memory Defragmentation Skill

 This skill helps you maintain clean, well-organized memory blocks by:
-1. Dumping current memory to local files and backing up the agent file
-2. Using a subagent to decompose and reorganize the files
-3. Restoring the cleaned files back to memory
+1. Creating a safety backup of the memfs directory
+2. Using a subagent to decompose and reorganize the memory files in-place

 The focus is on **decomposition**—splitting large, multi-purpose blocks into focused, single-purpose components—rather than consolidation.

+Memory files live at `~/.letta/agents/$LETTA_AGENT_ID/memory/` and are synced to API blocks automatically by **memfs sync** on CLI startup. There is no separate backup/restore step needed.
+
 ## When to Use

 - Memory blocks have redundant information
@@ -23,36 +24,61 @@ The focus is on **decomposition**—splitting large, multi-purpose blocks into f
 ## Workflow

-⚠️ **CRITICAL SAFETY REQUIREMENT**: You MUST complete Step 1 (backup) before proceeding to Step 2. The backup is your safety net. Do not spawn the subagent until the backup is guaranteed to have succeeded.
-
-### Step 1: Backup Memory to Files
+### Step 1: Safety Backup
+
+Before the subagent edits files, create a timestamped backup of the memfs directory:

 ```bash
-npx tsx <SKILL_DIR>/scripts/backup-memory.ts $LETTA_AGENT_ID .letta/backups/working
+cp -r ~/.letta/agents/$LETTA_AGENT_ID/memory/ ~/.letta/agents/$LETTA_AGENT_ID/memory-backup-$(date +%Y%m%d-%H%M%S)/
 ```

-This creates:
-- `.letta/backups/<agent-id>/<timestamp>/` - Timestamped memory blocks backup
-- `.letta/backups/working/` - Working directory with editable files
-- Each memory block as a `.md` file: `persona.md`, `human.md`, `project.md`, etc.
+⚠️ **CRITICAL**: You MUST complete the backup before proceeding to Step 2. The backup is your safety net.

-### Step 2: Spawn Subagent to Clean Files
+### Step 2: Spawn Subagent to Edit Memory Files
+
+The memory subagent works directly on the memfs `system/` directory. After it finishes, memfs sync will propagate changes to the API on next CLI startup.

 ```typescript
 Task({
   subagent_type: "memory",
-  description: "Clean up and decompose memory files",
-  prompt: `⚠️ CRITICAL PREREQUISITE: The agent memory blocks MUST be backed up to .letta/backups/working/ BEFORE you begin this task. The main agent must have run backup-memory.ts first. You are ONLY responsible for editing the files in that working directory—the backup is your safety net.
-
-You are decomposing and reorganizing memory block files in .letta/backups/working/ to improve clarity and focus. "Decompose" means take large memory blocks with multiple sections and split them into smaller memory blocks, each with fewer sections and a single focused purpose.
+  description: "Decompose and reorganize memory files",
+  prompt: `You are decomposing and reorganizing memory files in ~/.letta/agents/${LETTA_AGENT_ID}/memory/system/ to improve clarity and focus.
+
+These files ARE the agent's memory — they sync directly to API memory blocks via memfs. Changes you make here will be picked up automatically.
+
+## Directory Structure
+
+~/.letta/agents/<agent-id>/memory/
+├── system/           ← Attached blocks (always loaded in system prompt) — EDIT THESE
+├── user/             ← Detached blocks (on-demand) — can create new files here
+└── .sync-state.json  ← DO NOT EDIT (internal sync tracking)
+
+## Files to Skip (DO NOT edit)
+- memory_filesystem.md (auto-generated tree view)
+- skills.md (auto-generated)
+- loaded_skills.md (system-managed)
+- .sync-state.json (internal)
+
+## What to Edit
+- persona.md → Consider splitting into: persona/identity.md, persona/values.md, persona/approach.md
+- project.md → Consider splitting into: project/overview.md, project/architecture.md, project/conventions.md, etc.
+- human.md → Consider splitting into: human/identity.md, human/preferences.md, etc.
+- Any other non-system blocks present
+
+## How Memfs File ↔ Block Mapping Works
+- File path relative to system/ or user/ becomes the block label
+- Example: system/project/tooling/bun.md → block label "project/tooling/bun"
+- New files you create will become new memory blocks on next sync
+- Files you delete will cause the corresponding blocks to be deleted on next sync
+- YAML frontmatter is supported for metadata (label, description, limit, read_only)

 ## Evaluation Criteria

 1. **DECOMPOSITION** - Split large, multi-purpose blocks into focused, single-purpose components
-   - Example: A "persona" block mixing Git operations, communication style, AND behavioral preferences should become separate blocks like "communication-style.md", "behavioral-preferences.md", "version-control-practices.md"
-   - Example: A "project" block with structure, patterns, rendering, error handling, and architecture should split into specialized blocks like "architecture.md", "patterns.md", "rendering-approach.md", "error-handling.md"
-   - Goal: Each block should have ONE clear purpose that can be described in a short title
-   - Create new files when splitting (e.g., communication-style.md, behavioral-preferences.md)
+   - Example: A "persona" block mixing identity, values, AND approach should become persona/identity.md, persona/values.md, persona/approach.md
+   - Example: A "project" block with overview, architecture, conventions, and gotchas should split into project/overview.md, project/architecture.md, project/conventions.md, project/gotchas.md
+   - Goal: Each block should have ONE clear purpose described by its filename
+   - Use hierarchical / naming (e.g., project/tooling/bun.md, not project-tooling-bun.md)

 2. **STRUCTURE** - Organize content with clear markdown formatting
    - Use headers (##, ###) for subsections
@@ -80,8 +106,7 @@ You are decomposing and reorganizing memory block files in .letta/backups/workin
    - Flag blocks where subtopics could be their own focused blocks

 2. **Decompose** - Split multi-purpose blocks into specialized files
-   - Create new .md files for each focused purpose
-   - Use clear, descriptive filenames (e.g., "keyboard-protocols.md", "error-handling-patterns.md")
+   - Create new files using hierarchical paths (e.g., project/tooling/bun.md)
    - Ensure each new block has ONE primary purpose

 3. **Clean Up** - For remaining blocks (or new focused blocks):
@@ -91,16 +116,10 @@ You are decomposing and reorganizing memory block files in .letta/backups/workin
    - Improve clarity

 4. **Delete** - Remove files only when appropriate
-   - After consolidating into other blocks (rare - most blocks should stay focused)
+   - After moving all content to new decomposed files
    - Never delete a focused, single-purpose block
    - Only delete if a block contains junk/irrelevant data with no value

-## Files to Edit
-- persona.md → Consider splitting into: communication-style.md, behavioral-preferences.md, technical-practices.md
-- project.md → Consider splitting into: architecture.md, patterns.md, rendering.md, error-handling.md, etc.
-- human.md → OK to keep as-is if focused on understanding the user
-- DO NOT edit: skills.md (auto-generated), loaded_skills.md (system-managed)
-
 ## Success Indicators
 - No block tries to cover 2+ distinct topics
 - Each block title clearly describes its single purpose
@@ -118,86 +137,63 @@ Provide a detailed report including:
 ```

 The subagent will:
-- Read the files from `.letta/backups/working/`
-- Edit them to reorganize and consolidate redundancy
-- Merge related blocks together for better organization
+- Read files from `~/.letta/agents/<agent-id>/memory/system/` (and `user/`)
+- Edit them to reorganize and decompose large blocks
+- Create new hierarchically-named files (e.g., `project/overview.md`)
 - Add clear structure with markdown formatting
-- Delete source files after merging their content into other blocks
-- Provide a detailed report of changes (including what was merged where)
+- Delete source files after decomposing their content into focused children
+- Provide a detailed report of changes

-### Step 3: Restore Cleaned Files to Memory
-
-```bash
-npx tsx <SKILL_DIR>/scripts/restore-memory.ts $LETTA_AGENT_ID .letta/backups/working
-```
-
-This will:
-- Compare each file to current memory blocks
-- Update only the blocks that changed
-- Show before/after character counts
-- Skip unchanged blocks
+After the subagent finishes, **memfs sync will automatically propagate changes** to API blocks on the next CLI startup. No manual restore step is needed.

 ## Example Complete Flow

 ```typescript
-// ⚠️ STEP 1 IS MANDATORY: Backup memory to files
-// This MUST complete successfully before proceeding to Step 2
+// Step 1: Safety backup (MANDATORY)
 Bash({
-  command: "npx tsx <SKILL_DIR>/scripts/backup-memory.ts $LETTA_AGENT_ID .letta/backups/working",
-  description: "Backup memory to files (MANDATORY prerequisite)"
+  command: "cp -r ~/.letta/agents/$LETTA_AGENT_ID/memory/ ~/.letta/agents/$LETTA_AGENT_ID/memory-backup-$(date +%Y%m%d-%H%M%S)/",
+  description: "Backup memfs directory before defrag"
 })

-// ⚠️ STEP 2 CAN ONLY BEGIN AFTER STEP 1 SUCCEEDS
-// The subagent works on the backed-up files, with the original memory safe
+// Step 2: Spawn subagent to decompose and reorganize
 Task({
   subagent_type: "memory",
-  description: "Clean up and decompose memory files",
-  prompt: "Decompose and reorganize memory block files in .letta/backups/working/. Be aggressive about splitting large multi-section blocks into many smaller, single-purpose blocks with fewer sections. Prefer creating new focused files over keeping large blocks. Structure with markdown headers and bullets. Remove redundancy and speculation. Resolve contradictions. Organize logically. Each block should have ONE clear purpose. Create new files for decomposed blocks rather than consolidating. Report files created, modified, deleted, before/after character counts, and rationale for changes."
+  description: "Decompose and reorganize memory files",
+  prompt: "Decompose and reorganize memory files in ~/.letta/agents/$LETTA_AGENT_ID/memory/system/. These files sync directly to API blocks via memfs. Be aggressive about splitting large multi-section blocks into many smaller, single-purpose blocks using hierarchical / naming. Skip memory_filesystem.md, skills.md, loaded_skills.md, and .sync-state.json. Structure with markdown headers and bullets. Remove redundancy and speculation. Resolve contradictions. Organize logically. Each block should have ONE clear purpose. Report files created, modified, deleted, before/after character counts, and rationale for changes."
 })

-// Step 3: Restore (only after cleanup is approved)
-// Review the subagent's report before running this
-Bash({
-  command: "npx tsx <SKILL_DIR>/scripts/restore-memory.ts $LETTA_AGENT_ID .letta/backups/working",
-  description: "Restore cleaned memory blocks"
-})
+// No Step 3 needed — memfs sync handles propagation to API blocks
 ```
 ## Rollback

-If something goes wrong, restore from a previous backup:
+If something goes wrong, restore from the safety backup:

 ```bash
-# Find the backup directory
-ls -la .letta/backups/<agent-id>/
+# Find backups
+ls -la ~/.letta/agents/$LETTA_AGENT_ID/memory-backup-*/

-# Restore from specific timestamp
-npx tsx <SKILL_DIR>/scripts/restore-memory.ts $LETTA_AGENT_ID .letta/backups/<agent-id>/<timestamp>
+# Restore from a specific backup (replace the current memory dir)
+rm -rf ~/.letta/agents/$LETTA_AGENT_ID/memory/
+cp -r ~/.letta/agents/$LETTA_AGENT_ID/memory-backup-<TIMESTAMP>/ ~/.letta/agents/$LETTA_AGENT_ID/memory/
 ```

-## Dry Run
-
-Preview changes without applying them:
-
-```bash
-npx tsx <SKILL_DIR>/scripts/restore-memory.ts $LETTA_AGENT_ID .letta/backups/working --dry-run
-```
+On next CLI startup, memfs sync will detect the changes and update API blocks accordingly.

 ## What the Subagent Does

 The subagent focuses on decomposing and cleaning up files. It has full tool access (including Bash) and:
-- Discovers `.md` files in `.letta/backups/working/` (via Glob or Bash)
+- Discovers `.md` files in `~/.letta/agents/<agent-id>/memory/system/` (via Glob or Bash)
 - Reads and examines each file's content
 - Identifies multi-purpose blocks that serve 2+ distinct purposes
-- Splits large blocks into focused, single-purpose components
+- Splits large blocks into focused, single-purpose components with hierarchical naming
 - Modifies/creates .md files for decomposed blocks
 - Improves structure with headers and bullet points
 - Removes redundancy and speculation across blocks
 - Resolves contradictions with clear, concrete guidance
 - Organizes content logically (general to specific, by importance)
 - Provides detailed before/after reports including decomposition rationale
-- Does NOT run backup scripts (main agent does this)
-- Does NOT run restore scripts (main agent does this)
+- Does NOT run any backup or restore scripts

 The focus is on decomposition—breaking apart large monolithic blocks into focused, specialized components rather than consolidating them together.
@@ -212,17 +208,17 @@ The focus is on decomposition—breaking apart large monolithic blocks into focu
 **Decomposition Strategy:**
 - Split blocks that serve 2+ distinct purposes into focused components
 - Create new specialized blocks with clear, single-purpose titles
-- Example: A "persona" mixing communication style + Git practices → split into "communication-style.md" and "version-control-practices.md"
-- Example: A "project" with structure + patterns + rendering → split into "architecture.md", "patterns.md", "rendering.md"
+- Use hierarchical `/` naming: `project/tooling/bun.md`, not `project-bun.md`
+- Create parent index files that reference children
+- Example: A "persona" mixing identity + values + approach → split into `persona/identity.md`, `persona/values.md`, `persona/approach.md`
+- Example: A "project" with overview + architecture + conventions → split into `project/overview.md`, `project/architecture.md`, `project/conventions.md`
 - Add clear headers and bullet points for scannability
 - Group similar information together within focused blocks

 **When to DELETE a file:**
 - Only delete if file contains junk/irrelevant data with no project value
-- Don't delete after decomposing - Each new focused block is valuable
+- Delete source files after fully decomposing content into child files
 - Don't delete unique information just to reduce file count
-- Exception: Delete source files only if consolidating multiple blocks into one (rare)

 **What to preserve:**
 - User preferences (sacred - never delete)
@@ -236,3 +232,4 @@ The focus is on decomposition—breaking apart large monolithic blocks into focu
 - Organize with bullet points
 - Keep related information together
 - Make it scannable at a glance
+- Use `/` hierarchy for discoverability
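
The hierarchical naming and frontmatter support that the updated skill describes can be illustrated with a hypothetical decomposed file, say `system/project/conventions.md` (the frontmatter keys are the ones the skill lists — label, description, limit, read_only; the values here are invented for illustration):

```markdown
---
description: Coding conventions for the project
limit: 5000
read_only: false
---
## Conventions
- Use hierarchical `/` naming for memory files
- Keep each block focused on a single purpose
```

On the next sync this file would become a block labeled `project/conventions`, the label being derived from its path relative to `system/`.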
@@ -1,206 +0,0 @@
#!/usr/bin/env npx tsx
/**
 * Backup Memory Blocks to Local Files
 *
 * Exports all memory blocks from an agent to local files for checkpointing and editing.
 * Creates a timestamped backup directory with:
 * - Individual .md files for each memory block
 * - manifest.json with metadata
 *
 * This script is standalone and can be run outside the CLI process.
 * It reads auth from LETTA_API_KEY env var or ~/.letta/settings.json.
 *
 * Usage:
 *   npx tsx backup-memory.ts <agent-id> [backup-dir]
 *
 * Example:
 *   npx tsx backup-memory.ts agent-abc123
 *   npx tsx backup-memory.ts $LETTA_AGENT_ID .letta/backups/working
 */

import { mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { createRequire } from "node:module";
import { homedir } from "node:os";
import { dirname, join } from "node:path";

// Use createRequire for @letta-ai/letta-client so NODE_PATH is respected
// (ES module imports don't respect NODE_PATH, but require does)
const require = createRequire(import.meta.url);
const Letta = require("@letta-ai/letta-client")
  .default as typeof import("@letta-ai/letta-client").default;
type LettaClient = InstanceType<typeof Letta>;

export interface BackupManifest {
  agent_id: string;
  timestamp: string;
  backup_path: string;
  blocks: Array<{
    id: string;
    label: string;
    filename: string;
    limit: number;
    value_length: number;
  }>;
}

/**
 * Get API key from env var or settings file
 */
function getApiKey(): string {
  if (process.env.LETTA_API_KEY) {
    return process.env.LETTA_API_KEY;
  }

  const settingsPath = join(homedir(), ".letta", "settings.json");
  try {
    const settings = JSON.parse(readFileSync(settingsPath, "utf-8"));
    if (settings.env?.LETTA_API_KEY) {
      return settings.env.LETTA_API_KEY;
    }
  } catch {
    // Settings file doesn't exist or is invalid
  }

  throw new Error(
    "No LETTA_API_KEY found. Set the env var or run the Letta CLI to authenticate.",
  );
}

/**
 * Create a Letta client with auth from env/settings
 */
function createClient(): LettaClient {
  const baseUrl = process.env.LETTA_BASE_URL || "https://api.letta.com";
  return new Letta({ apiKey: getApiKey(), baseUrl });
}

/**
 * Backup memory blocks to local files
 */
async function backupMemory(
  agentId: string,
  backupDir?: string,
): Promise<string> {
  const client = createClient();

  // Create backup directory
  const timestamp = new Date().toISOString().replace(/[:.]/g, "-");
  const defaultBackupDir = join(
    process.cwd(),
    ".letta",
    "backups",
    agentId,
    timestamp,
  );
  const backupPath = backupDir || defaultBackupDir;

  mkdirSync(backupPath, { recursive: true });

  console.log(`Backing up memory blocks for agent ${agentId}...`);
  console.log(`Backup location: ${backupPath}`);

  // Get all memory blocks
  const blocksResponse = await client.agents.blocks.list(agentId);
  const blocks = Array.isArray(blocksResponse)
    ? blocksResponse
    : (blocksResponse as { items?: unknown[] }).items ||
      (blocksResponse as { blocks?: unknown[] }).blocks ||
      [];

  console.log(`Found ${blocks.length} memory blocks`);

  // Export each block to a file
  const manifest: BackupManifest = {
    agent_id: agentId,
    timestamp: new Date().toISOString(),
    backup_path: backupPath,
    blocks: [],
  };

  for (const block of blocks as Array<{
    id: string;
    label?: string;
    value?: string;
    limit?: number;
  }>) {
    const label = block.label || `block-${block.id}`;
    // For hierarchical labels like "A/B", create directory A/ with file B.md
    const filename = `${label}.md`;
    const filepath = join(backupPath, filename);

    // Create parent directories if label contains slashes
    const parentDir = dirname(filepath);
    if (parentDir !== backupPath) {
      mkdirSync(parentDir, { recursive: true });
    }

    // Write block content to file
    const content = block.value || "";
    writeFileSync(filepath, content, "utf-8");

    console.log(`  ✓ ${label} -> ${filename} (${content.length} chars)`);

    // Add to manifest
    manifest.blocks.push({
      id: block.id,
      label,
      filename,
      limit: block.limit || 0,
      value_length: content.length,
    });
  }

  // Write manifest
  const manifestPath = join(backupPath, "manifest.json");
  writeFileSync(manifestPath, JSON.stringify(manifest, null, 2), "utf-8");
  console.log(`  ✓ manifest.json`);

  console.log(`\n✅ Backup complete: ${backupPath}`);
  return backupPath;
}

// CLI Entry Point - check if this file is being run directly
const isMainModule = import.meta.url === `file://${process.argv[1]}`;
if (isMainModule) {
  const args = process.argv.slice(2);

  if (args.length === 0 || args[0] === "--help" || args[0] === "-h") {
    console.log(`
Usage: npx tsx backup-memory.ts <agent-id> [backup-dir]

Arguments:
  agent-id     Agent ID to backup (can use $LETTA_AGENT_ID)
  backup-dir   Optional custom backup directory
               Default: .letta/backups/<agent-id>/<timestamp>

Examples:
  npx tsx backup-memory.ts agent-abc123
  npx tsx backup-memory.ts $LETTA_AGENT_ID
  npx tsx backup-memory.ts agent-abc123 .letta/backups/working
`);
    process.exit(0);
  }

  const agentId = args[0];
  const backupDir = args[1];

  if (!agentId) {
    console.error("Error: agent-id is required");
    process.exit(1);
  }

  backupMemory(agentId, backupDir)
    .then((path) => {
      // Output just the path for easy capture in scripts
      console.log(path);
    })
    .catch((error) => {
      console.error(
        "Error backing up memory:",
        error instanceof Error ? error.message : String(error),
      );
      process.exit(1);
    });
}

export { backupMemory };
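
The deleted script above exported blocks through the API; the updated skill replaces it with a plain `cp` of the memfs directory. The backup-then-edit pattern can be sketched on a throwaway directory (the temp dir here is a stand-in for `~/.letta/agents/$LETTA_AGENT_ID/memory/`; this is an illustrative sketch, not part of the commit):

```shell
set -euo pipefail

# Stand-in for the live memfs directory
MEM=$(mktemp -d)
echo "persona content" > "$MEM/persona.md"

# Timestamped safety backup, mirroring Step 1 of the updated skill
BACKUP="$MEM-backup-$(date +%Y%m%d-%H%M%S)"
cp -r "$MEM" "$BACKUP"

# Simulate the subagent editing memory in-place
echo "decomposed content" > "$MEM/persona.md"

# Rollback path: the backup still holds the original
grep -q "persona content" "$BACKUP/persona.md" && echo "backup intact"
```

Because the backup is an ordinary directory copy, rollback is just copying it back over the memory directory, as the skill's Rollback section shows.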
@@ -1,328 +0,0 @@
#!/usr/bin/env npx tsx
/**
 * Restore Memory Blocks from Local Files
 *
 * Imports memory blocks from local files back into an agent.
 * Reads files from a backup directory and updates the agent's memory blocks.
 *
 * This script is standalone and can be run outside the CLI process.
 * It reads auth from LETTA_API_KEY env var or ~/.letta/settings.json.
 *
 * Usage:
 *   npx tsx restore-memory.ts <agent-id> <backup-dir> [options]
 *
 * Example:
 *   npx tsx restore-memory.ts agent-abc123 .letta/backups/working
 *   npx tsx restore-memory.ts $LETTA_AGENT_ID .letta/backups/working --dry-run
 */

import { readdirSync, readFileSync, statSync } from "node:fs";
import { createRequire } from "node:module";
import { homedir } from "node:os";
import { extname, join, relative } from "node:path";

import { getLettaCodeHeaders } from "../../../../agent/http-headers";
import type { BackupManifest } from "./backup-memory";

// Use createRequire for @letta-ai/letta-client so NODE_PATH is respected
// (ES module imports don't respect NODE_PATH, but require does)
const require = createRequire(import.meta.url);
const Letta = require("@letta-ai/letta-client")
  .default as typeof import("@letta-ai/letta-client").default;
type LettaClient = InstanceType<typeof Letta>;

/**
 * Get API key from env var or settings file
 */
function getApiKey(): string {
  if (process.env.LETTA_API_KEY) {
    return process.env.LETTA_API_KEY;
  }

  const settingsPath = join(homedir(), ".letta", "settings.json");
  try {
    const settings = JSON.parse(readFileSync(settingsPath, "utf-8"));
    if (settings.env?.LETTA_API_KEY) {
      return settings.env.LETTA_API_KEY;
    }
  } catch {
    // Settings file doesn't exist or is invalid
  }

  throw new Error(
    "No LETTA_API_KEY found. Set the env var or run the Letta CLI to authenticate.",
  );
}

/**
 * Create a Letta client with auth from env/settings
 */
function createClient(): LettaClient {
  const baseUrl = process.env.LETTA_BASE_URL || "https://api.letta.com";
  return new Letta({ apiKey: getApiKey(), baseUrl });
}

/**
 * Recursively scan directory for .md files
 * Returns array of relative file paths from baseDir
 */
function scanMdFiles(dir: string, baseDir: string = dir): string[] {
  const results: string[] = [];
  const entries = readdirSync(dir);

  for (const entry of entries) {
    const fullPath = join(dir, entry);
    const stat = statSync(fullPath);

    if (stat.isDirectory()) {
      // Recursively scan subdirectory
      results.push(...scanMdFiles(fullPath, baseDir));
    } else if (stat.isFile() && extname(entry) === ".md") {
      // Convert to relative path from baseDir
      const relativePath = relative(baseDir, fullPath);
      results.push(relativePath);
    }
  }

  return results;
}

/**
 * Restore memory blocks from local files
 */
async function restoreMemory(
  agentId: string,
  backupDir: string,
  options: { dryRun?: boolean } = {},
): Promise<void> {
  const client = createClient();

  console.log(`Restoring memory blocks for agent ${agentId}...`);
  console.log(`Source: ${backupDir}`);

  if (options.dryRun) {
    console.log("⚠️ DRY RUN MODE - No changes will be made\n");
  }

  // Read manifest for metadata only (block IDs)
  const manifestPath = join(backupDir, "manifest.json");
  let manifest: BackupManifest | null = null;

  try {
    const manifestContent = readFileSync(manifestPath, "utf-8");
    manifest = JSON.parse(manifestContent);
  } catch {
    // Manifest is optional
  }

  // Get current agent blocks using direct fetch (SDK may hit wrong server)
  const baseUrl = process.env.LETTA_BASE_URL || "https://api.letta.com";
  const blocksResp = await fetch(
    `${baseUrl}/v1/agents/${agentId}/core-memory`,
    {
      headers: getLettaCodeHeaders(getApiKey()),
    },
  );
  if (!blocksResp.ok) {
    throw new Error(`Failed to list blocks: ${blocksResp.status}`);
  }
  const blocksJson = (await blocksResp.json()) as { blocks: unknown[] };
  const blocksResponse = blocksJson.blocks;
  const currentBlocks = Array.isArray(blocksResponse)
    ? blocksResponse
    : (blocksResponse as { items?: unknown[] }).items ||
      (blocksResponse as { blocks?: unknown[] }).blocks ||
      [];
  const blocksByLabel = new Map(
    (currentBlocks as Array<{ label: string; id: string; value?: string }>).map(
      (b) => [b.label, b],
    ),
  );

  // Always scan directory for .md files (manifest is only used for block IDs)
  const files = scanMdFiles(backupDir);
  console.log(`Scanned ${files.length} .md files\n`);
  const filesToRestore = files.map((relativePath) => {
    // Convert path like "A/B.md" to label "A/B"
    // Replace backslashes with forward slashes (Windows compatibility)
    const normalizedPath = relativePath.replace(/\\/g, "/");
    const label = normalizedPath.replace(/\.md$/, "");
    // Look up block ID from manifest if available
    const manifestBlock = manifest?.blocks.find((b) => b.label === label);
    return {
      label,
      filename: relativePath,
      blockId: manifestBlock?.id,
    };
  });

  // Detect blocks to delete (exist on agent but not in backup)
  const backupLabels = new Set(filesToRestore.map((f) => f.label));
  const blocksToDelete = (
    currentBlocks as Array<{ label: string; id: string }>
  ).filter((b) => !backupLabels.has(b.label));

  // Restore each block
  let updated = 0;
  let created = 0;
  let deleted = 0;

  for (const { label, filename } of filesToRestore) {
    const filepath = join(backupDir, filename);

    try {
      const newValue = readFileSync(filepath, "utf-8");
      const existingBlock = blocksByLabel.get(label);

      if (existingBlock) {
        // Update existing block using block ID (not label, which may contain /)
        if (!options.dryRun) {
          const baseUrl = process.env.LETTA_BASE_URL || "https://api.letta.com";
          const url = `${baseUrl}/v1/blocks/${existingBlock.id}`;
          const resp = await fetch(url, {
            method: "PATCH",
            headers: getLettaCodeHeaders(getApiKey()),
            body: JSON.stringify({ value: newValue }),
          });
          if (!resp.ok) {
            throw new Error(`${resp.status} ${await resp.text()}`);
          }
        }

        const oldLen = existingBlock.value?.length || 0;
        const newLen = newValue.length;
        const unchanged = existingBlock.value === newValue;

        if (unchanged) {
          console.log(`  ✓ ${label} - restored (${newLen} chars, unchanged)`);
        } else {
          const diff = newLen - oldLen;
          const diffStr = diff > 0 ? `+${diff}` : `${diff}`;
          console.log(
            `  ✓ ${label} - restored (${oldLen} -> ${newLen} chars, ${diffStr})`,
          );
        }
        updated++;
      } else {
        // New block - create immediately
        if (!options.dryRun) {
          const createdBlock = await client.blocks.create({
            label,
            value: newValue,
            description: `Memory block: ${label}`,
            limit: 20000,
          });

          if (!createdBlock.id) {
            throw new Error(`Created block ${label} has no ID`);
          }

          await client.agents.blocks.attach(createdBlock.id, {
            agent_id: agentId,
          });
        }
        console.log(`  ✓ ${label} - created (${newValue.length} chars)`);
        created++;
      }
    } catch (error) {
      console.error(
        `  ❌ ${label} - error: ${error instanceof Error ? error.message : String(error)}`,
      );
    }
  }

  // Handle deletions (blocks that exist on agent but not in backup)
  if (blocksToDelete.length > 0) {
    console.log(
      `\n⚠️ Found ${blocksToDelete.length} block(s) that were removed from backup:`,
    );
    for (const block of blocksToDelete) {
      console.log(`  - ${block.label}`);
    }

    if (!options.dryRun) {
      console.log(`\nThese blocks will be DELETED from the agent.`);
      console.log(
        `Press Ctrl+C to cancel, or press Enter to confirm deletion...`,
      );

      // Wait for user confirmation
      await new Promise<void>((resolve) => {
        process.stdin.once("data", () => resolve());
      });

      console.log();
      for (const block of blocksToDelete) {
        try {
          await client.agents.blocks.detach(block.id, {
            agent_id: agentId,
          });
          console.log(`  🗑️ ${block.label} - deleted`);
          deleted++;
        } catch (error) {
          console.error(
            `  ❌ ${block.label} - error deleting: ${error instanceof Error ? error.message : String(error)}`,
          );
        }
      }
    } else {
      console.log(`\n(Would delete these blocks if not in dry-run mode)`);
    }
  }

  console.log(`\n📊 Summary:`);
  console.log(`  Restored: ${updated}`);
  console.log(`  Created: ${created}`);
  console.log(`  Deleted: ${deleted}`);

  if (options.dryRun) {
    console.log(`\n⚠️ DRY RUN - No changes were made`);
    console.log(`  Run without --dry-run to apply changes`);
  } else {
    console.log(`\n✅ Restore complete`);
  }
}

// CLI Entry Point - check if this file is being run directly
const isMainModule = import.meta.url === `file://${process.argv[1]}`;
if (isMainModule) {
  const args = process.argv.slice(2);

  if (args.length === 0 || args[0] === "--help" || args[0] === "-h") {
    console.log(`
Usage: npx tsx restore-memory.ts <agent-id> <backup-dir> [options]

Arguments:
  agent-id     Agent ID to restore to (can use $LETTA_AGENT_ID)
  backup-dir   Backup directory containing memory block files

Options:
  --dry-run    Preview changes without applying them

Examples:
  npx tsx restore-memory.ts agent-abc123 .letta/backups/working
  npx tsx restore-memory.ts $LETTA_AGENT_ID .letta/backups/working
  npx tsx restore-memory.ts agent-abc123 .letta/backups/working --dry-run
`);
    process.exit(0);
  }

  const agentId = args[0];
  const backupDir = args[1];
  const dryRun = args.includes("--dry-run");

  if (!agentId || !backupDir) {
    console.error("Error: agent-id and backup-dir are required");
    process.exit(1);
  }

  restoreMemory(agentId, backupDir, { dryRun }).catch((error) => {
    console.error(
      "Error restoring memory:",
      error instanceof Error ? error.message : String(error),
    );
    process.exit(1);
  });
}

export { restoreMemory };
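
One detail from this removed script worth noting: the file-path to block-label conversion it performed (normalize separators, strip `.md`) is the same mapping the skill now attributes to memfs sync. A minimal sketch of that conversion (the function name is mine; the two `replace` calls mirror the script above):

```typescript
// Convert a backup-relative path like "project/tooling/bun.md"
// (or a Windows-style "project\\tooling\\bun.md") into a block label.
function pathToLabel(relativePath: string): string {
  return relativePath
    .replace(/\\/g, "/") // normalize Windows path separators
    .replace(/\.md$/, ""); // strip the .md extension
}

console.log(pathToLabel("project/tooling/bun.md")); // "project/tooling/bun"
```

This is why hierarchical filenames such as `project/tooling/bun.md` surface as slash-delimited block labels.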