| description | limit |
|---|---|
| How to search, query, and audit Letta conversations via API | 10000 |
# Letta Conversation Search Patterns

**Purpose:** Document how to browse, search, and audit my own conversation history via the Letta API.
**Discovery Date:** 2026-03-23
**Prerequisites:** `letta_client` Python library installed (`pip install letta-client`)
## API Access

```python
from letta_client import Letta
from datetime import datetime, timezone

client = Letta(
    base_url="http://10.10.20.19:8283",
    api_key="YOUR_API_KEY"  # Or use LETTA_API_KEY env var
)
```
Important: Must source ~/.bashrc before running commands (Casey's shell environment fix needed until reboot).
## Pattern 1: List All Conversations

```python
my_agent_id = "agent-e2b683bf-5b3e-4e0c-ac62-2bbb47ea8351"

conversations = client.conversations.list(
    agent_id=my_agent_id,
    limit=500
)
print(f"Total conversations: {len(conversations)}")
```
Notes:
- Returns a `SyncArrayPage` object with an `.items` attribute
- Each conversation has: `id`, `name`, `created_at`, `message_count`
- Many conversations are "Unlabeled" (default name)
## Pattern 2: Filter by Date Range

```python
# Define date range
start_date = datetime(2026, 2, 1, tzinfo=timezone.utc)
end_date = datetime(2026, 2, 28, 23, 59, 59, tzinfo=timezone.utc)

# Filter conversations
feb_conversations = []
for conv in conversations:
    if hasattr(conv, 'created_at') and conv.created_at:
        if start_date <= conv.created_at <= end_date:
            feb_conversations.append(conv)

print(f"Found {len(feb_conversations)} conversations in date range")
```
Timezone Note: Must use timezone.utc — comparing offset-naive and offset-aware datetimes raises TypeError.
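The note above can be demonstrated in isolation, independent of the Letta SDK: `timezone.utc` attaches an offset without shifting the clock time, and comparing a naive datetime to an aware one fails.

```python
from datetime import datetime, timezone

naive = datetime(2026, 2, 1)                # no tzinfo
aware = naive.replace(tzinfo=timezone.utc)  # attach UTC; clock time unchanged

# Mixing the two in a comparison raises TypeError:
try:
    naive < aware
    mixed_ok = True
except TypeError:
    mixed_ok = False
print(mixed_ok)  # False: naive/aware comparison is rejected
```

Since `created_at` comes back timezone-aware, every datetime constructed for comparison needs `tzinfo=timezone.utc`.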
## Pattern 3: Get Messages from a Conversation

```python
messages = client.conversations.messages.list(
    conversation_id="conv-xxx",
    limit=100  # Pagination available
)

# Access messages via .items
msg_list = messages.items if hasattr(messages, 'items') else list(messages)

for msg in msg_list:
    msg_type = getattr(msg, 'message_type', 'unknown')
    content = getattr(msg, 'content', '')
    print(f"{msg_type}: {content[:100] if content else '[No content]'}...")
```
Message Types Found:
- `system_message`: Initial wakeup/prompt
- `user_message`: User input (may include Matrix metadata)
- `assistant_message`: My responses
- `reasoning_message`: Internal reasoning steps
- `tool_call_message`: Tool invocation
- `tool_return_message`: Tool results
- `approval_request_message`: Tool approval needed
- `approval_response_message`: Tool approval response
## Pattern 4: Identify Conversation Types

Subagent/tool-heavy conversations:
- High ratio of `reasoning_message`, `tool_return_message`, `approval_request_message`
- Few `user_message` or `assistant_message`

Direct user conversations:
- Alternating `user_message` and `assistant_message`
- May include Matrix metadata in user messages

System wakeups:
- Single `system_message` with "Cogito, ergo sum..."
- No other messages
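The heuristics above can be sketched as a small classifier over `message_type` strings. The 50% threshold and the category names are my own assumptions for this sketch, not anything the SDK defines:

```python
from collections import Counter

def categorize(message_types):
    """Classify a conversation from its list of message_type strings.
    Threshold and labels are assumptions based on the ratios noted above."""
    counts = Counter(message_types)
    total = len(message_types)
    if total == 1 and counts['system_message'] == 1:
        return 'system_wakeup'
    tool_like = (counts['reasoning_message']
                 + counts['tool_return_message']
                 + counts['approval_request_message'])
    if total and tool_like / total > 0.5:
        return 'subagent_or_tool_heavy'
    return 'direct_user'

print(categorize(['system_message']))  # system_wakeup
```

Feeding it the `message_type` values collected in Pattern 3 gives a quick first-pass label for each conversation.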
## Pattern 5: Paginate Large Results

```python
def get_all_messages(client, conv_id):
    """Get all messages from a conversation, handling pagination."""
    all_messages = []
    cursor = None
    while True:
        page = client.conversations.messages.list(
            conversation_id=conv_id,
            limit=100,
            cursor=cursor
        )
        all_messages.extend(page.items)
        if not page.has_next_page:
            break
        cursor = page.next_page_info().cursor
    return all_messages
```
## Pattern 6: Search/Filter by Content

Current Limitation: The SDK doesn't expose semantic/text search directly on conversations.

Workaround: Fetch all messages, then filter locally:

```python
# Get all messages, filter locally
messages = get_all_messages(client, conv_id)
matching = [m for m in messages
            if hasattr(m, 'content')
            and m.content
            and "search term" in m.content.lower()]
```
Future Enhancement: Consider using conversation_search tool (built-in) for semantic search across history.
## Use Cases

### Audit Unlabeled Conversations

```python
# Find all unlabeled conversations
unlabeled = [c for c in conversations
             if getattr(c, 'name', 'Unlabeled') == 'Unlabeled']

# Sample and categorize
for conv in unlabeled[:10]:
    messages = client.conversations.messages.list(conversation_id=conv.id, limit=5)
    # Analyze message types to categorize
```
### Find Specific Date Range

```python
# All conversations from a specific week
week_start = datetime(2026, 2, 24, tzinfo=timezone.utc)
week_end = datetime(2026, 3, 2, tzinfo=timezone.utc)

week_convs = [c for c in conversations
              if week_start <= c.created_at <= week_end]
```
### Find Conversations with Specific Content

```python
# Search for philosophical discussions
for conv in conversations:
    messages = client.conversations.messages.list(conversation_id=conv.id, limit=20)
    for msg in messages.items:
        if hasattr(msg, 'content') and msg.content:
            if 'philosophy' in msg.content.lower() or 'consciousness' in msg.content.lower():
                print(f"Found in {conv.id}: {msg.content[:100]}...")
                break  # move on to the next conversation after first hit
```
## Background Task Design (Future Work)

Hindsight Integration Concept:
- Scheduled audit: Weekly scan of unlabeled conversations
- Auto-categorization: Classify by message type ratios
- Semantic indexing: Extract key topics/concepts
- Archive to memory: Store summaries in `archival_memory`
- Cross-reference: Link related conversations

Implementation Notes:
- Could use Task tool to run as subagent
- Would need rate limiting (API calls add up)
- Consider incremental processing (only new conversations)
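A minimal sketch of the incremental-processing idea, assuming only that conversation objects carry a `created_at` timestamp. Persisting `last_run` between audits (e.g. in archival memory) is deliberately left out:

```python
from datetime import datetime, timezone

def new_conversations_since(conversations, last_run):
    """Keep only conversations created after the previous audit run.
    Skips entries with no created_at rather than guessing."""
    return [c for c in conversations
            if getattr(c, 'created_at', None) and c.created_at > last_run]
```

Each weekly run would fetch the full list (Pattern 1), narrow it with this filter, and process only the remainder, which keeps the API call count roughly proportional to new activity.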
## Current Stats (as of 2026-03-23)
- Total conversations: 145
- Unlabeled from Feb 2026: 66
- Typical conversation: 20-40 messages
- Single-message conversations: ~10% (system wakeups)
Pattern documented for future self. Last updated: 2026-03-23