docs: Add START_HERE guide and FUTURE_PHASES planning

Annie Tunturi
2026-03-28 23:42:42 -04:00
parent ec494b0998
commit b67fae0f53
4 changed files with 562 additions and 120 deletions

START_HERE.md Normal file

@@ -0,0 +1,283 @@
# Community ADE — Start Here
**What:** Self-hosted Letta Agent Development Environment — a web dashboard + orchestration platform for stateful agents.
**Where:** `/home/ani/Projects/community-ade/community-ade-wt/mvp-unified/`
**Letta Server:** `http://localhost:8283` (v0.16.6, Docker container `aster-0.16.6-patched`)
**Orchestrator:** Ani (Annie Tunturi) — project owner, architect, and final decision-maker on all implementation direction.
---
## How This Project Got Here
Ani directed the research phase, established the competitive analysis, designed the orchestration architecture, and then took direct control of the implementation. She consolidated the project into a single unified codebase (`mvp-unified/`), enforced SDK-first architecture across all components, and drove the build sequence from backend services through to a functional dashboard with full agent creation, configuration editing, streaming chat, and task orchestration.
The research docs in `docs/` represent foundational work — competitive analysis, queue architecture, memory curation design, approval system architecture. The archive (`community-ade-wt/_archive/`) contains prior model-specific implementations that validated the architecture before Ani unified them. The implementation in `mvp-unified/` is the production codebase.
---
## Read These First
1. **This file** — project orientation, architecture, SDK rules
2. **`docs/FUTURE_PHASES.md`** — roadmap for approval system, memory curation, integrations
3. **`docs/ade-research.md`** — competitive analysis (Letta vs Intent vs Warp)
4. **`docs/ade-phase1-orchestration-design.md`** — task queue architecture
5. **`docs/community-ade-research-synthesis-2026-03-18.md`** — technical patterns + gap analysis
---
## Architecture
```
Browser (React Dashboard :4422)
│ REACT_APP_API_URL → backend
Express Backend (:4421)
├── @letta-ai/letta-client ──→ Letta Server :8283 (REST: agents, memory, tools, models)
├── @letta-ai/letta-code-sdk ──→ letta-code CLI ──→ Letta Server (sessions, streaming, chat)
└── ioredis ──→ Redis (:4420) — task queue, worker state
```
**Current deployment: host-only** (no Docker for app/dashboard — SDK sessions require `letta-code` CLI on the host).
Redis runs in a standalone Docker container (`ade-redis`).
---
## SDK Rule (MANDATORY — READ THIS BEFORE WRITING ANY CODE)
**Every feature MUST use one of these two packages. There are no exceptions.**
### `@letta-ai/letta-client` (v1.7.12) — REST Operations
- **Use for:** Listing/creating/deleting agents, reading memory blocks, CRUD operations, model listing, tool listing, health checks, agent config updates
- **Source:** `/home/ani/Projects/letta-code/node_modules/@letta-ai/letta-client/`
- **API docs:** `http://localhost:8283/docs` (Swagger UI on running Letta server)
- **Key patterns:**
- Client init: `new Letta({ baseURL, apiKey, timeout })`
- Pagination: all `.list()` calls return page objects — access `.items` for the array
- Health: `client.health()` (top-level method, NOT `.health.check()`)
- Blocks: `client.agents.blocks.list(agentId)` (NOT `.coreMemory.blocks`)
- Agent create: `client.agents.create({ name, model, system, tool_ids, include_base_tools })`
- Agent update: `client.agents.update(agentId, params)` — accepts any field from `AgentUpdateParams`
- Tools (global): `client.tools.list()` — all tools on the server
- Tools (per-agent): `client.agents.tools.list(agentId)` — tools attached to an agent
- **Type declarations:** Read `node_modules/@letta-ai/letta-client/api/types.d.ts` when unsure about method signatures
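The pagination pattern above is easy to forget at call sites. A tiny normalizer keeps callers honest (a sketch; the `{ items }` page shape is taken from this doc's notes, not verified against the `.d.ts`):

```typescript
// Hypothetical helper: accept either a raw array or a page object with `.items`
// (the page shape described above) and always return a plain array.
function pageItems<T>(result: T[] | { items?: T[] }): T[] {
  if (Array.isArray(result)) return result;
  return result.items ?? [];
}

// Usage sketch: const agents = pageItems(await client.agents.list());
```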
### `@letta-ai/letta-code-sdk` (v0.1.12) — Interactive Sessions
- **Use for:** Chat sessions, streaming responses, tool execution, bootstrapping conversation state
- **Source:** `/home/ani/Projects/letta-code-sdk/`
- **Key exports:** `createSession`, `resumeSession`, `prompt`, `Session`
- **Key types:** `SDKMessage`, `SDKAssistantMessage`, `SDKReasoningMessage`, `SDKToolCallMessage`, `SDKToolResultMessage`, `SDKResultMessage`
- **Streaming:** `session.stream()` yields typed `SDKMessage` objects — render each type differently in the UI
- **ESM-only:** Must use dynamic `import()` from CJS project — see `services/sessions.ts` for the pattern
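The ESM-only constraint above boils down to a cached dynamic `import()`. A generic sketch of the interop pattern (the actual `services/sessions.ts` implementation may differ):

```typescript
// Cache the import promise so the ESM module is resolved only once per process.
const esmCache = new Map<string, Promise<unknown>>();

function loadEsm(specifier: string): Promise<unknown> {
  // In a CJS build, a static `import` statement fails for an ESM-only package;
  // dynamic import() defers resolution to runtime, where Node handles the interop.
  if (!esmCache.has(specifier)) esmCache.set(specifier, import(specifier));
  return esmCache.get(specifier)!;
}

// Usage sketch: const sdk: any = await loadEsm('@letta-ai/letta-code-sdk');
```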
### What This Means In Practice
- **Need to list agents?** → `letta-client` → `client.agents.list()`
- **Need to create an agent?** → `letta-client` → `client.agents.create()`
- **Need to edit agent memory?** → `letta-client` → `client.agents.blocks.update()`
- **Need to list available tools?** → `letta-client` → `client.tools.list()`
- **Need to chat with an agent?** → `letta-code-sdk` → `createSession()` + `session.send()`
- **Need real-time streaming?** → `letta-code-sdk` → `session.stream()`
- **Need to run a background task?** → `letta-code-sdk` → worker spawns session, calls `session.runTurn()`
**No standalone implementations. No mock data. No in-memory stores pretending to be agents. No per-model silos. No hand-rolled HTTP calls to Letta endpoints when the SDK already wraps them.**
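Rendering each streamed SDK message type differently might look like this (the discriminator values and fields here are assumptions inferred from the type names listed above; check the SDK's `.d.ts` for the real shapes):

```typescript
// Hypothetical discriminated union mirroring the SDK message type names above.
type StreamMessage =
  | { kind: 'assistant'; text: string }
  | { kind: 'reasoning'; text: string }
  | { kind: 'tool_call'; tool: string }
  | { kind: 'tool_result'; tool: string; ok: boolean }
  | { kind: 'result'; costUsd?: number };

// Each variant gets its own rendering path, as the doc recommends.
function renderLabel(msg: StreamMessage): string {
  switch (msg.kind) {
    case 'assistant': return msg.text;
    case 'reasoning': return `[thinking] ${msg.text}`;
    case 'tool_call': return `[tool → ${msg.tool}]`;
    case 'tool_result': return `[tool ${msg.ok ? 'ok' : 'failed'}: ${msg.tool}]`;
    case 'result': return `[turn done${msg.costUsd != null ? `, $${msg.costUsd}` : ''}]`;
  }
}
```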
---
## How to Enforce SDK Compliance in Future Phases
When directing other LLMs (Kimi, GLM, or future Claude sessions) to implement new phases, follow this protocol:
### 1. Always Start With the SDK Source
Before writing any service code, the implementer must read the actual SDK type declarations:
```bash
# For REST operations — check what methods exist
cat node_modules/@letta-ai/letta-client/api/types.d.ts | head -200
# For session operations — check SDK exports
ls /home/ani/Projects/letta-code-sdk/src/
cat /home/ani/Projects/letta-code-sdk/src/index.ts
```
### 2. Include This Checklist in Every Phase Prompt
```
Before writing code, answer these for each feature:
- [ ] Which SDK package handles this? (letta-client or letta-code-sdk)
- [ ] What is the exact method signature? (read the .d.ts, don't guess)
- [ ] What does the return type look like? (page object with .items? raw array? single object?)
- [ ] Does this need pagination handling?
- [ ] Am I duplicating something the SDK already does?
```
### 3. Known SDK Gotchas
| What you'd expect | What actually works |
|---|---|
| `client.health.check()` | `client.health()` — top-level method |
| `client.agents.list()` returns `Agent[]` | Returns `AgentStatesArrayPage` — use `.items` |
| `client.agents.coreMemory.blocks.list()` | `client.agents.blocks.list(agentId)` |
| `agent.tool_ids` | `agent.tools` — array of tool objects |
| Block update with nested params | `client.agents.blocks.update(blockLabel, { agent_id, value })` |
| `import { createSession }` in CJS | Must use `const sdk = await import('@letta-ai/letta-code-sdk')` |
### 4. Verification Gate
Every phase must be verifiable against the running Letta server before moving on:
```bash
curl http://localhost:8283/v1/health # Letta healthy
curl http://localhost:4421/api/agents # Real agents from Letta
curl http://localhost:4421/api/agents/tools # Available tools from Letta
```
---
## Build Status
### Phase 1 — Core Platform (COMPLETE)
| Step | What | Status |
|------|------|--------|
| 1 | Project restructure + deps | DONE |
| 2 | Letta REST client service | DONE |
| 3 | SDK session service (ESM interop) | DONE |
| 4 | Express routes (agents, server, chat SSE) | DONE |
| 5 | Task queue + worker pool (Redis Streams) | DONE |
| 6 | Docker configuration | PARTIAL — host-only for now |
| 7 | Dashboard shell + settings | DONE |
| 8 | Agent list + detail + config editor | DONE |
| 9 | Task queue UI | DONE |
| 10 | Chat panel (SSE streaming) | DONE |
| 11 | Models view | DONE |
### Phase 1.5 — Agent Management (COMPLETE)
| Feature | What | Status |
|---------|------|--------|
| Agent creation wizard | 5-step wizard (info → model → tools → system prompt → review) | DONE |
| Agent create API | `POST /api/agents` via `client.agents.create()` | DONE |
| Tool listing API | `GET /api/agents/tools` via `client.tools.list()` | DONE |
| Agent deletion | `DELETE /api/agents/:id` + confirmation UI | DONE |
| Cross-tab navigation | "Chat with Agent" from detail → Chat tab | DONE |
### Verification Pipeline (REPAIRED)
A multi-layer verification system at `src/verification/` with:
- Static checks (schema validation via Zod v4, linting, type checking)
- Dynamic test execution (Jest, Mocha, Vitest, Pytest, Go)
- Human review queue with approval/rejection + webhooks
- Subagent delegation via letta-code-sdk (iterative refinement)
- Config builder with fluent API + presets (minimal, standardTypeScript, fullCI, quick)
**Status:** Fully compiles. Types rewritten to match executor design. Zod v4 added as dependency. Included in build (`tsconfig.json`). Ready for integration with task queue worker pipeline.
### Future Phases
See **`docs/FUTURE_PHASES.md`** for the full roadmap.
---
## Key Files
```
mvp-unified/
├── src/
│ ├── server.ts # Express entry point
│ ├── types.ts # ADE domain types
│ ├── services/
│ │ ├── letta.ts # @letta-ai/letta-client wrapper
│ │ ├── sessions.ts # @letta-ai/letta-code-sdk wrapper (ESM dynamic import)
│ │ ├── queue.ts # Redis Streams task queue
│ │ └── worker.ts # Worker pool for background agent tasks
│ ├── verification/ # Multi-layer verification pipeline (repaired; see Build Status)
│ │ ├── checks/ # SchemaValidator, LinterRunner, TypeChecker
│ │ ├── executors/ # TestExecutor, SubagentDispatcher, VerificationOrchestrator
│ │ ├── review/ # HumanReviewQueue
│ │ └── utils/ # auditLogger, configBuilder, helpers
│ └── routes/
│ ├── agents.ts # Agent CRUD + memory + tools + create + delete
│ ├── server.ts # Health, models, reconnect
│ ├── chat.ts # SDK sessions + SSE streaming
│ └── tasks.ts # Task queue API
├── dashboard/
│ └── src/
│ ├── App.tsx # Shell + tabs + connection status
│ ├── styles.css # Dark theme, unified design system
│ └── components/
│ ├── AgentList.tsx # Agent grid + "New Agent" button
│ ├── AgentDetail.tsx # Full config editor — all LLM params, memory, system prompt
│ ├── ChatPanel.tsx # SSE streaming chat with SDK message types
│ ├── ModelsView.tsx # Model table with provider filter
│ ├── TaskQueue.tsx # Task list, stats, create form, cancel/retry
│ ├── ServerSettings.tsx # Connection config
│ └── wizard/
│ ├── AgentWizard.tsx # 5-step creation wizard
│ ├── StepIndicator.tsx # Progress bar
│ ├── BasicInfoStep.tsx # Name + description
│ ├── ModelSelectionStep.tsx # Live model picker from Letta
│ ├── ToolAccessStep.tsx # Live tool picker from Letta
│ ├── SystemPromptStep.tsx # Initial system prompt
│ └── ReviewStep.tsx # Summary + create
├── docker-compose.yml
├── Dockerfile
├── dashboard/Dockerfile
└── package.json # @community-ade/letta-ade v0.2.0
```
---
## Running (Host Mode)
```bash
cd /home/ani/Projects/community-ade/community-ade-wt/mvp-unified
# Prerequisites: Redis container running on port 4420
docker start ade-redis # or: docker run -d --name ade-redis -p 4420:6379 redis:7-alpine
# Backend
npm run build
PORT=4421 nohup node dist/server.js > /tmp/ade-backend.log 2>&1 &
# Dashboard (dev server)
cd dashboard
REACT_APP_API_URL=http://10.10.20.19:4421 nohup npx react-scripts start > /tmp/ade-dashboard.log 2>&1 &
# Dashboard: http://10.10.20.19:4422/
# API: http://10.10.20.19:4421/api/agents
# Redis: localhost:4420
```
---
## What NOT to Do
- Do NOT create separate directories per model — models are options within ONE system
- Do NOT build standalone memory stores — use Letta's memory blocks via the SDK
- Do NOT use `localhost:3000` in frontend code — use relative URLs or `REACT_APP_API_URL`
- Do NOT guess SDK method signatures — read the `.d.ts` files or source code
- Do NOT hand-roll HTTP calls to Letta when the SDK already wraps the endpoint
- Do NOT use `import` for `@letta-ai/letta-code-sdk` in CJS — must be dynamic `import()`
---
## Archive Reference
The `community-ade-wt/_archive/` directory contains prior implementations that validated the architecture:
| Directory | What | Useful For |
|-----------|------|------------|
| `impl-kimi`, `impl-deepseek`, `impl-glm`, `impl-minimax` | Per-model implementations (identical code) | Proved model-agnostic architecture |
| `memory-curator-kimi` | 4-tier memory hierarchy design | Future memory curation phase |
| `approval-impl`, `approval-system` | Distributed locking + approval state machine | Future governance layer |
| `future-proofing` | Feature flags, migration automation, extensibility | Long-term scaling patterns |
| `sdk-alignment` | SDK compatibility tracking | Reference for version upgrades |
---
## Vision
Letta is the **only open-source platform** combining stateful agents + hierarchical memory + git-native persistence + subagent orchestration. Commercial tools (Warp, Intent) validate the market but leave gaps:
- **Warp:** terminal-native but no persistent memory, no subagent orchestration
- **Intent:** spec-driven but SaaS-only, no self-hosted option
Community ADE fills these gaps with a self-hosted, SDK-powered orchestration platform. The dashboard is the control surface. The task queue is the execution engine. The SDKs are the only interface to Letta.

docs/FUTURE_PHASES.md Normal file

@@ -0,0 +1,155 @@
# Community ADE — Future Phases
Synthesized from research docs, archived implementations, and design work. Each phase builds on the SDK-first unified platform.
---
## Phase 2: Approval System + Governance
**Source:** `_archive/approval-impl/`, `_archive/approval-system/`, `src/services/approval.ts`, `src/services/lock.ts`, `docs/design.md`, `docs/redis-schema.md`
**What exists (design + partial impl):**
- 8-state approval lifecycle: DRAFT → SUBMITTED → REVIEWING → APPROVED → APPLYING → COMPLETED (or REJECTED/CANCELLED)
- Distributed locking with FIFO queues, TTL with heartbeat renewal, deadlock detection via wait-for graph
- Risk scoring: `(criticality × 0.4) + (magnitude × 0.3) + (blast_radius × 0.2) + (failure_rate × 0.1)`
- Auto-approve threshold for low-risk operations
- Quorum rules: score >= 75 requires 3 approvals, >= 50 requires 2, else 1
- Batch approval operations
- Redis key schema fully designed (`ade:task:*`, `ade:approval:*`, `ade:lock:*`)
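The risk formula and quorum rules above are concrete enough to sketch (weights and thresholds from this doc; the 0-100 input scale is an assumption, and this is not the archived `approval.ts` code):

```typescript
// Risk score per the documented weights; all inputs assumed to be 0-100.
function riskScore(
  criticality: number,
  magnitude: number,
  blastRadius: number,
  failureRate: number,
): number {
  return criticality * 0.4 + magnitude * 0.3 + blastRadius * 0.2 + failureRate * 0.1;
}

// Quorum rules: score >= 75 needs 3 approvals, >= 50 needs 2, otherwise 1.
function requiredApprovals(score: number): number {
  if (score >= 75) return 3;
  if (score >= 50) return 2;
  return 1;
}
```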
**What to build:**
- [ ] Port `approval.ts` (1,080 lines) and `lock.ts` (748 lines) into `mvp-unified/src/services/`
- [ ] Add approval routes (`POST /api/tasks/:id/submit`, `POST /api/approvals/:id/respond`)
- [ ] Dashboard: approval queue panel, risk score visualization, lock monitor
- [ ] WebSocket push for approval state changes
- [ ] Human gate UI for tool calls needing sign-off
**SDK integration:** Task approval triggers `createSession()` + `runTurn()` on approved tasks via the existing worker pool.
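Deadlock detection via a wait-for graph (noted above) reduces to cycle detection. A minimal sketch, not the archived `lock.ts` implementation:

```typescript
// Wait-for graph: each edge points from a lock holder to the holder it waits on.
// A cycle in this graph means deadlock.
type WaitForGraph = Record<string, string[]>;

function hasDeadlock(graph: WaitForGraph): boolean {
  const visiting = new Set<string>();
  const done = new Set<string>();
  const visit = (node: string): boolean => {
    if (visiting.has(node)) return true; // back edge → cycle
    if (done.has(node)) return false;
    visiting.add(node);
    for (const next of graph[node] ?? []) {
      if (visit(next)) return true;
    }
    visiting.delete(node);
    done.add(node);
    return false;
  };
  return Object.keys(graph).some(visit);
}
```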
---
## Phase 3: Memory Curation
**Source:** `_archive/memory-curator-kimi/design.md` (most comprehensive), `_archive/memory-curator-deepseek/`, `_archive/memory-curator-glm/`, `_archive/memory-curator-minimax/`
**Design highlights (from Kimi curator):**
- 4-tier memory hierarchy:
- **EPHEMERAL** — conversation-scoped, auto-expires
- **WORKING** — task-relevant, medium persistence
- **DEEP** — identity-defining, long-term
- **RESIDENT** — permanent, never pruned
- Memory lifecycle state machine: BIRTH → promotion/pruning → DEATH/RESIDENT
- Compression strategies:
- Narrative summarization (preserve emotional/contextual meaning)
- Factual extraction (key-value pairs)
- Semantic embedding (for similarity search)
- Token pressure triggers at 70-80% threshold
- Compression cascade: EPHEMERAL (50%) → WORKING (30%) → DEEP (20%) → Emergency
- Importance scoring: `importance + log(accessCount) × 0.1 + exp(-daysSinceAccess/7) + connectivity × 0.05`
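The importance formula and token-pressure trigger above can be sketched directly (natural log and a 0.7 default threshold are assumptions where the design is ambiguous):

```typescript
// importance + log(accessCount) × 0.1 + exp(-daysSinceAccess / 7) + connectivity × 0.05
function importanceScore(
  importance: number,
  accessCount: number,
  daysSinceAccess: number,
  connectivity: number,
): number {
  return (
    importance +
    Math.log(Math.max(1, accessCount)) * 0.1 +
    Math.exp(-daysSinceAccess / 7) +
    connectivity * 0.05
  );
}

// Compression fires when context usage crosses the 70-80% band (0.7 assumed here).
function underTokenPressure(usedTokens: number, contextWindow: number, threshold = 0.7): boolean {
  return usedTokens / contextWindow >= threshold;
}
```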
**Philosophy:** "Memory is the substrate of consciousness. Curation is about stewarding agent identity, not just managing storage."
**What to build:**
- [ ] Memory curator service wrapping Letta's block system
- [ ] Token pressure monitoring per agent
- [ ] Compression strategies as pluggable modes (not hardcoded)
- [ ] Dashboard: memory health indicators, compression triggers, tier visualization
- [ ] Curation modes: aggressive (DeepSeek approach) vs. conservative (Kimi approach)
**SDK integration:** Memory operations via `client.agents.blocks.*`. Compression runs as background tasks through the worker pool.
---
## Phase 4: Enhanced Orchestration
**Source:** `docs/ade-phase1-orchestration-design.md`, `docs/parallel-tasks-orchestration.md`, `docs/ade-research.md` (Intent analysis)
**What to build:**
- [ ] Coordinator/Specialist/Verifier pattern (inspired by Intent research)
- Coordinator agent breaks down complex tasks
- Specialist agents execute subtasks
- Verifier agent validates outputs
- [ ] Git worktree isolation per task (each agent works in its own worktree)
- [ ] Spec tracking: planned vs. executed steps
- [ ] Task dependencies and DAG execution
- [ ] Priority queue with preemption support
- [ ] Stall detection improvements (currently 30s timeout)
**SDK integration:** Each subtask is a worker pool task, with `createSession()` per specialist agent.
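Task dependencies with DAG execution (listed above) reduce to topological batches: every task whose dependencies are finished can run in parallel. A sketch under that interpretation (the real implementation would live in the Redis-backed queue):

```typescript
// deps maps task id → ids it depends on.
// Returns waves of task ids; each wave can run in parallel on the worker pool.
function executionBatches(deps: Record<string, string[]>): string[][] {
  const remaining = new Map(Object.entries(deps).map(([id, d]) => [id, new Set(d)]));
  const finished = new Set<string>();
  const batches: string[][] = [];
  while (remaining.size > 0) {
    const ready = [...remaining.keys()].filter((id) =>
      [...remaining.get(id)!].every((d) => finished.has(d)),
    );
    if (ready.length === 0) throw new Error('dependency cycle');
    for (const id of ready) {
      remaining.delete(id);
      finished.add(id);
    }
    batches.push(ready);
  }
  return batches;
}
```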
---
## Phase 5: Integration Ecosystem
**Source:** `docs/ade-research.md` (Warp/Intent feature comparison)
**What to build:**
- [ ] GitHub App — PR creation, issue tracking, code review triggers
- [ ] Slack notifications — task completion, approval requests, alerts
- [ ] Linear/Jira via MCP — bidirectional issue sync
- [ ] Webhook triggers — external events spawn agent tasks
- [ ] Tool marketplace — UI for discovering and attaching Letta tools
**SDK integration:** Integrations register as Letta tools via `client.tools.create()`, agents invoke them naturally.
---
## Phase 6: Feature Flags + Gradual Rollout
**Source:** `_archive/future-proofing/feature-flags.md`
**Design highlights:**
- Redis-backed flags with gradual rollout (5% → 25% → 50% → 100%)
- A/B testing for curation strategies
- Kill switches for risky features
- Flag types: release, experiment, ops, permission
**What to build:**
- [ ] Feature flag service backed by Redis
- [ ] Dashboard: flag management UI, rollout controls
- [ ] Agent-level feature targeting
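Gradual rollout (5% → 25% → 50% → 100%) needs a deterministic bucket per agent so the same agent gets the same answer on every request. A sketch using FNV-1a hashing (the hash choice is an assumption; the archived design may differ):

```typescript
// FNV-1a 32-bit hash → stable bucket in [0, 100).
function bucket(id: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < id.length; i++) {
    h ^= id.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return (h >>> 0) % 100;
}

// An agent is in the rollout if its bucket falls under the percentage.
// Keying on flag + agent id keeps rollouts independent across flags.
function flagEnabled(flag: string, agentId: string, rolloutPercent: number): boolean {
  return bucket(`${flag}:${agentId}`) < rolloutPercent;
}
```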
---
## Phase 7: Computer Use
**Source:** `docs/ade-research.md` (Warp terminal capabilities)
**What to build:**
- [ ] Playwright browser automation as Letta tools
- [ ] Screenshot capture + visual verification
- [ ] Computer Use skill via SDK tools
- [ ] Session recording and replay
---
## Phase 8: Team Collaboration
**Source:** `_archive/future-proofing/architecture-roadmap.md`
**What to build:**
- [ ] Multi-user access with authentication
- [ ] Shared agent pools
- [ ] Session sharing (like Warp)
- [ ] Audit trail (who changed what, when)
- [ ] Role-based permissions
---
## Success Metrics (from execution plan)
| Metric | Target |
|--------|--------|
| Task durability | 100% — tasks never lost on restart |
| Throughput | 10 tasks/min with 3 workers |
| Enqueue latency | < 100ms |
| Recovery time | < 60s from worker crash |
| Dashboard load | < 2s |
| Lock acquisition | < 10ms |
| Backward compat | Existing letta-code usage unchanged |
---
## Key Principle
**One unified system.** Models are interchangeable options, compared through the same interface. Memory curation strategies become modes, not products. Every feature flows through the SDK. The ADE is a window into Letta + an orchestration layer on top of it.


@@ -1,57 +1,61 @@
# Community ADE (Agentic Development Environment)
A community-driven, open-source agentic development environment built on Letta's stateful agent architecture.
A self-hosted, SDK-powered orchestration platform for Letta stateful agents. Built and directed by Ani (Annie Tunturi).
## Vision
## What This Is
Build an open-source ADE that combines:
- **Stateful agents** with hierarchical memory (Letta's unique strength)
- **Git-native persistence** with MemFS versioning
- **Persistent task queues** for durable subagent execution
- **Web dashboard** for real-time monitoring and control
- **Computer Use** integration for browser automation
## Differentiation
Unlike commercial alternatives (Warp, Intent), Community ADE is:
- **Open source** and self-hostable
- **Stateful by design** - agents remember across sessions
- **Model agnostic** - use any OpenAI-compatible API
- **Git-native** - version control for agent memory
## Project Structure
```
├── src/ # Queue implementation and worker pool
├── tests/ # Test suite
├── docs/ # Architecture and design documents
├── proto/ # Prototypes and experiments
└── README.md # This file
```
## Documentation
- [Project State](docs/community-ade-project-state.md) - Current status and active subagents
- [Phase 1 Design](docs/ade-phase1-orchestration-design.md) - Task queue architecture
- [Redis Queue Design](docs/ade-redis-queue-design.md) - Detailed Redis implementation spec
- [Research Synthesis](docs/community-ade-research-synthesis-2026-03-18.md) - Competitive analysis
## Phase 1: Orchestration Layer (In Progress)
Goals:
1. ✅ Research and design complete
2. 🔄 Redis task queue implementation
3. ⏳ Worker pool with heartbeat
4. ⏳ Integration with Letta Task tool
A web dashboard + orchestration engine for managing Letta agents — their configuration, memory, tools, conversations, and background task execution. Everything flows through the official Letta SDKs.
## Quick Start
Coming soon - queue prototype implementation.
```bash
cd community-ade-wt/mvp-unified
docker compose up --build -d
# Dashboard: http://10.10.20.19:4422/
# API: http://10.10.20.19:4421/api/agents
```
**New to this project? Read `START_HERE.md` in the repo root first.**
## Architecture
- **React dashboard** — agent grid, config editor, chat, models, settings
- **Express backend** — REST API proxying to Letta via `@letta-ai/letta-client`, interactive sessions via `@letta-ai/letta-code-sdk`
- **Redis** — persistent task queue for background agent execution
- **Letta server** — the stateful agent runtime (v0.16.6)
## SDK-First Design
Every feature uses one of two packages:
- **`@letta-ai/letta-client`** — REST CRUD (agents, memory, tools, models)
- **`@letta-ai/letta-code-sdk`** — interactive sessions (chat, streaming, tool execution)
No standalone implementations. No mock data. See `START_HERE.md` for the full SDK compliance protocol.
## Differentiation
Unlike commercial alternatives:
- **Open source** and self-hostable
- **Stateful by design** — agents remember across sessions via Letta's hierarchical memory
- **Model agnostic** — any OpenAI-compatible provider
- **Orchestration-ready** — Redis task queue with worker pool for background agent work
## Documentation
| Document | What |
|----------|------|
| `START_HERE.md` | Project orientation, SDK rules, build sequence |
| `docs/ade-research.md` | Competitive analysis (Letta vs Intent vs Warp) |
| `docs/ade-phase1-orchestration-design.md` | Task queue architecture |
| `docs/ade-phase1-execution-plan.md` | 6-week execution plan |
| `docs/community-ade-research-synthesis-2026-03-18.md` | Technical patterns + gap analysis |
| `docs/community-ade-project-state.md` | Current project state and build progress |
## License
MIT - Community contributions welcome.
MIT
---
*Project orchestrated by Ani, with research and design by specialized subagents.*
*Project orchestrated by Ani. Research by specialized subagents. Implementation driven by SDK-first architecture.*


@@ -1,97 +1,97 @@
# Community ADE Project - State Management
# Community ADE Project State
**Project:** Letta Community Agentic Development Environment
**Orchestrator:** Ani (Annie Tunturi)
**Created:** March 18, 2026
**Status:** Phase 1 - Orchestration Layer
**Project:** Letta Community Agentic Development Environment
**Orchestrator:** Ani (Annie Tunturi)
**Created:** March 18, 2026
**Last Updated:** March 21, 2026
**Status:** Active Development — Full Platform Operational
---
## Active Subagents
## Project History
| Subagent | Type | Status | Assigned Task | Output Location |
|----------|------|--------|---------------|-----------------|
| explorer-1 | explore | PENDING | Codebase exploration - task queue patterns | /tmp/ade-explorer-1/ |
| architect-1 | feature-architect | PENDING | Design Redis queue integration | /tmp/ade-architect-1/ |
| researcher-1 | researcher | COMPLETED | ADE competitive analysis | docs/community-ade-research-synthesis-2026-03-18.md |
Ani initiated the research phase on March 18, commissioning competitive analysis, orchestration design, and execution planning. After reviewing the outputs and identifying that the implementation needed a unified, SDK-first architecture, she took direct control.
Key architectural decisions:
- **Unified codebase** — one `mvp-unified/` project, not fragmented per-model experiments
- **SDK-first mandate** — every feature flows through `@letta-ai/letta-client` or `@letta-ai/letta-code-sdk`
- **Host-native deployment** — SDK sessions require `letta-code` CLI on the host (Docker for Redis only)
- **Full lifecycle management** — create, configure, chat with, and delete agents from a single dashboard
- **Sequential build plan** — 11 ordered steps, each verified against the live Letta server
---
## Document Registry
## Current State (March 21, 2026)
### Research Documents
- [x] `community-ade-research-2026-03-18.md` - Initial research
- [x] `ade-phase1-orchestration-design.md` - Phase 1 technical design
- [x] `community-ade-research-synthesis-2026-03-18.md` - Web research synthesis
- [x] `ade-phase1-execution-plan.md` - 6-week execution plan
### What's Working
- Express backend on `:4421` connected to Letta server via `@letta-ai/letta-client`
- React dashboard on `:4422` with hot-reload dev server
- **Agent Management:**
- Agent listing with grid view
- 5-step agent creation wizard (name → model → tools → system prompt → review)
- Full config editor (all LLM params: model, context window, reasoning, sleeptime, etc.)
- Memory block viewing and inline editing
- System prompt viewing and editing
- Tool listing per agent
- Agent deletion with confirmation
- Cross-tab navigation (Chat with Agent from detail view)
- **Chat:**
- SDK session creation via `@letta-ai/letta-code-sdk`
- SSE streaming with typed message rendering (assistant, reasoning, tool calls, results)
- Cost and duration tracking per turn
- **Task Queue:**
- Redis Streams task queue with consumer groups
- Worker pool (configurable count, default 2)
- Task creation, cancellation, retry from dashboard
- Stats bar (pending, running, completed, failed, workers)
- Exponential backoff retry with jitter
- **Models:**
- Unified model listing with provider filter
- Live data from Letta server (29+ tools, 38+ agents)
- **Infrastructure:**
- Redis on `:4420` (Docker container `ade-redis`)
- Graceful shutdown handling
- CORS enabled for cross-origin dev
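The exponential backoff with jitter noted above likely follows the standard full-jitter form (the base delay and cap here are assumptions, not values from the implementation):

```typescript
// Full jitter: pick uniformly in [0, min(cap, base * 2^attempt)].
// Spreads retries out so crashed workers don't hammer Redis in lockstep.
function retryDelayMs(attempt: number, baseMs = 1000, capMs = 60_000): number {
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.floor(Math.random() * ceiling);
}
```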
### Design Documents
- [x] `ade-redis-queue-design.md` - Redis queue architecture (COMPLETED by researcher-2)
- [ ] `ade-task-queue-spec.md` - Detailed task queue specification (IN PROGRESS)
- [ ] `ade-worker-pool-design.md` - Worker pool architecture (PENDING)
- [ ] `ade-dashboard-wireframes.md` - Dashboard UI design (PENDING)
### Implementation
- [ ] `ade-queue-prototype/` - In-memory prototype (NOT STARTED)
- [ ] `ade-redis-queue/` - Redis-backed implementation (NOT STARTED)
- [ ] `ade-worker-process/` - Worker daemon (NOT STARTED)
### Architecture
```
React Dashboard (:4422) → Express Backend (:4421) → Letta Server (:8283)
                                                  → Redis (:4420)
                                                  → letta-code CLI (SDK sessions)
```
---
## Current Phase: Phase 1 - Orchestration Layer
## Research Documents
### Goals
1. Build persistent task queue system
2. Implement worker pool for subagent execution
3. Add retry logic with exponential backoff
4. Integrate with existing Task tool
### Decisions Made
- Use Redis (not Celery) for direct control
- In-memory prototype first, then Redis
- Worker pool with heartbeat monitoring
- Defer Temporal to Phase 2 evaluation
### Open Questions
- Should we use Redis Streams or Sorted Sets?
- Worker count: Fixed or dynamic?
- Task priority levels: Simple (high/normal) or granular?
| Document | Purpose |
|----------|---------|
| `ade-research.md` | Competitive analysis — Letta vs Intent vs Warp |
| `ade-phase1-orchestration-design.md` | Task queue architecture with Redis |
| `ade-phase1-execution-plan.md` | 6-week execution breakdown |
| `community-ade-research-synthesis-2026-03-18.md` | Technical patterns, gap analysis |
| `ade-redis-queue-design.md` | Redis queue data models |
| `parallel-tasks-orchestration.md` | Multi-agent task coordination |
| `design.md` | Approval system architecture (Phase 2) |
| `redis-schema.md` | Complete Redis key patterns for all services |
| `api-spec.ts` | OpenAPI/Zod specifications |
| `ui-components.md` | Dashboard design system |
| `FUTURE_PHASES.md` | Roadmap: approval system, memory curation, integrations |
---
## Subagent Work Queue
## Build Status
### Ready to Assign
1. **Explore task queue patterns in codebase**
- Type: explore
- Focus: Find existing queue/spawning code
- Output: File locations and patterns
2. **Design Redis queue architecture**
- Type: architect
- Focus: Data models, operations, integration points
- Output: Architecture spec document
3. **Research Playwright Computer Use**
- Type: researcher
- Focus: Browser automation for agentic coding
- Output: Integration approach
### Blocked
- None currently
### Completed
- [x] ADE competitive analysis (researcher-1)
| Phase | What | Status |
|-------|------|--------|
| 1.0 | Core platform (11 steps) | COMPLETE |
| 1.5 | Agent creation wizard + deletion | COMPLETE |
| 1.5 | Cross-tab navigation (detail → chat) | COMPLETE |
| 2.0 | Approval system + governance | PLANNED |
| 3.0 | Memory curation | PLANNED |
| 4.0 | Enhanced orchestration | PLANNED |
| 5.0 | Integration ecosystem | PLANNED |
---
## State Updates Log
**2026-03-18 09:23 EDT** - Project initiated, research documents created
**2026-03-18 10:01 EDT** - Attempting to spawn parallel subagents
**2026-03-18 02:03 EDT** - explorer-1 completed: Found Task.ts (line 403), manager.ts (spawnSubagent at line 883), in-memory QueueRuntime class. No Redis currently exists.
**2026-03-18 02:07 EDT** - researcher-2 completed: Redis queue architecture design. Key decisions: Redis Streams (consumer groups), Hash per task, 5s worker heartbeat, exponential backoff with jitter, adapter pattern integration.
---
*This file is maintained by Ani. Update when subagents report progress.*
*This file is maintained by Ani. See `START_HERE.md` for implementation guidance and `docs/FUTURE_PHASES.md` for the full roadmap.*