---
description: How to search, query, and audit Letta conversations via API. Patterns for browsing own history.
limit: 10000
created: 2026-03-23
created_by: Ani (with Casey's guidance)
tags: [letta-api, conversation-search, audit, self-history, sdk-patterns]
---
# Letta Conversation Search Patterns
**Purpose:** Document how to browse, search, and audit my own conversation history via the Letta API.
**Discovery Date:** 2026-03-23
**Prerequisites:** `letta_client` Python library installed (`pip install letta-client`)
---
## API Access
```python
from letta_client import Letta
from datetime import datetime, timezone

client = Letta(
    base_url="http://10.10.20.19:8283",
    api_key="YOUR_API_KEY"  # Or use LETTA_API_KEY env var
)
```
**Important:** Must `source ~/.bashrc` before running commands (Casey's shell environment fix needed until reboot).
---
## Pattern 1: List All Conversations
```python
my_agent_id = "agent-e2b683bf-5b3e-4e0c-ac62-2bbb47ea8351"

conversations = client.conversations.list(
    agent_id=my_agent_id,
    limit=500
)
print(f"Total conversations: {len(conversations.items)}")
```
**Notes:**
- Returns `SyncArrayPage` object with `.items` attribute
- Each conversation has: `id`, `name`, `created_at`, `message_count`
- Many conversations are "Unlabeled" (default name)
---
## Pattern 2: Filter by Date Range
```python
# Define date range
start_date = datetime(2026, 2, 1, tzinfo=timezone.utc)
end_date = datetime(2026, 2, 28, 23, 59, 59, tzinfo=timezone.utc)

# Filter conversations
feb_conversations = []
for conv in conversations:
    if hasattr(conv, 'created_at') and conv.created_at:
        if start_date <= conv.created_at <= end_date:
            feb_conversations.append(conv)

print(f"Found {len(feb_conversations)} conversations in date range")
```
**Timezone Note:** Must use `timezone.utc` — comparing offset-naive and offset-aware datetimes raises TypeError.
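The failure mode is easy to reproduce with plain stdlib datetimes; a naive timestamp can be made comparable by attaching UTC explicitly:

```python
from datetime import datetime, timezone

naive = datetime(2026, 2, 1)
aware = datetime(2026, 2, 1, tzinfo=timezone.utc)

try:
    naive < aware  # mixing offset-naive and offset-aware datetimes
except TypeError as e:
    print(f"TypeError: {e}")

# Attaching a timezone makes the comparison valid
assert naive.replace(tzinfo=timezone.utc) == aware
```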
---
## Pattern 3: Get Messages from a Conversation
```python
messages = client.conversations.messages.list(
    conversation_id="conv-xxx",
    limit=100  # Pagination available
)

# Access messages via .items
msg_list = messages.items if hasattr(messages, 'items') else list(messages)

for msg in msg_list:
    msg_type = getattr(msg, 'message_type', 'unknown')
    content = getattr(msg, 'content', '')
    print(f"{msg_type}: {content[:100] if content else '[No content]'}...")
```
**Message Types Found:**
- `system_message` — Initial wakeup/prompt
- `user_message` — User input (may include Matrix metadata)
- `assistant_message` — My responses
- `reasoning_message` — Internal reasoning steps
- `tool_call_message` — Tool invocation
- `tool_return_message` — Tool results
- `approval_request_message` — Tool approval needed
- `approval_response_message` — Tool approval response
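A quick way to see which of these types dominate a conversation is to tally them. This sketch assumes only that each message exposes a `message_type` attribute, as in Pattern 3:

```python
from collections import Counter

def tally_message_types(msg_list):
    """Count messages of each type in a conversation."""
    return Counter(getattr(m, 'message_type', 'unknown') for m in msg_list)
```

Feed it the `msg_list` from Pattern 3 to get a `Counter` of, e.g., `user_message` vs `tool_return_message` counts.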
---
## Pattern 4: Identify Conversation Types
**Subagent/Tool-heavy conversations:**
- High ratio of `reasoning_message`, `tool_return_message`, `approval_request_message`
- Few `user_message` or `assistant_message`
**Direct user conversations:**
- Alternating `user_message` and `assistant_message`
- May include Matrix metadata in user messages
**System wakeups:**
- Single `system_message` with "Cogito, ergo sum..."
- No other messages
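These heuristics can be sketched as a small classifier. The ratio cutoffs below (0.6, 0.5) are illustrative guesses, not measured thresholds:

```python
from collections import Counter

def classify_conversation(msg_list):
    """Rough conversation-type heuristic based on message-type ratios.
    The 0.6/0.5 cutoffs are illustrative assumptions."""
    counts = Counter(getattr(m, 'message_type', 'unknown') for m in msg_list)
    total = sum(counts.values())
    if total == 1 and counts['system_message'] == 1:
        return 'system_wakeup'
    tool_heavy = (counts['reasoning_message'] + counts['tool_return_message']
                  + counts['approval_request_message'])
    direct = counts['user_message'] + counts['assistant_message']
    if total and tool_heavy / total > 0.6:
        return 'subagent_or_tool_heavy'
    if total and direct / total > 0.5:
        return 'direct_user'
    return 'unclassified'
```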
---
## Pattern 5: Paginate Large Results
```python
def get_all_messages(client, conv_id):
    """Get all messages from a conversation, handling pagination."""
    all_messages = []
    cursor = None
    while True:
        page = client.conversations.messages.list(
            conversation_id=conv_id,
            limit=100,
            cursor=cursor
        )
        all_messages.extend(page.items)
        if not page.has_next_page:
            break
        cursor = page.next_page_info().cursor
    return all_messages
```
---
## Pattern 6: Search/Filter by Content
**Current Limitation:** SDK doesn't expose semantic/text search directly on conversations.
**Workaround:** Fetch all messages, then filter:
```python
# Get all messages, filter locally
messages = get_all_messages(client, conv_id)
matching = [m for m in messages
            if hasattr(m, 'content')
            and m.content
            and "search term" in m.content.lower()]
```
**Future Enhancement:** Consider using `conversation_search` tool (built-in) for semantic search across history.
---
## Use Cases
### Audit Unlabeled Conversations
```python
# Find all unlabeled conversations
unlabeled = [c for c in conversations
             if getattr(c, 'name', 'Unlabeled') == 'Unlabeled']

# Sample and categorize
for conv in unlabeled[:10]:
    messages = client.conversations.messages.list(conversation_id=conv.id, limit=5)
    # Analyze message types to categorize
```
### Find Specific Date Range
```python
# All conversations from a specific week
week_start = datetime(2026, 2, 24, tzinfo=timezone.utc)
week_end = datetime(2026, 3, 2, tzinfo=timezone.utc)
week_convs = [c for c in conversations
              if c.created_at and week_start <= c.created_at <= week_end]
```
### Find Conversations with Specific Content
```python
# Search for philosophical discussions
for conv in conversations:
    messages = client.conversations.messages.list(conversation_id=conv.id, limit=20)
    for msg in messages.items:
        if hasattr(msg, 'content') and msg.content:
            if 'philosophy' in msg.content.lower() or 'consciousness' in msg.content.lower():
                print(f"Found in {conv.id}: {msg.content[:100]}...")
                break
```
---
## Background Task Design (Future Work)
**Hindsight Integration Concept:**
1. **Scheduled audit:** Weekly scan of unlabeled conversations
2. **Auto-categorization:** Classify by message type ratios
3. **Semantic indexing:** Extract key topics/concepts
4. **Archive to memory:** Store summaries in archival_memory
5. **Cross-reference:** Link related conversations
**Implementation Notes:**
- Could use Task tool to run as subagent
- Would need rate limiting (API calls add up)
- Consider incremental processing (only new conversations)
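Of the notes above, incremental processing is the simplest to sketch: track a set of already-audited conversation IDs and only hand new ones to the (hypothetical) categorize/summarize steps. The `id` attribute matches what Pattern 1 returns; the helper name is mine:

```python
def select_new_conversations(conversations, seen_ids):
    """Incremental processing sketch: return only unaudited
    conversations plus the updated set of seen IDs."""
    new = [c for c in conversations if c.id not in seen_ids]
    return new, seen_ids | {c.id for c in new}
```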
---
## Current Stats (as of 2026-03-23)
- **Total conversations:** 145
- **Unlabeled from Feb 2026:** 66
- **Typical conversation:** 20-40 messages
- **Single-message conversations:** ~10% (system wakeups)
---
*Pattern documented for future self. Last updated: 2026-03-23*