letta-server/fern/pages/ade-guide/simulator.mdx
Cameron Pfiffer fc531ca6de docs: center documentation around current Letta architecture (#5634)
* docs: restructure architecture documentation to sideline legacy agent types

This commit reorganizes the agent architecture documentation to address
confusion around legacy agent types (memgpt_agent, memgpt_v2_agent) and
clarify that users should not specify agent_type for new projects.

The documentation was causing confusion for both users and LLMs:
- References to memgpt_agent, memgpt_v2_agent, and letta_v1_agent were scattered
  throughout main docs
- The naming progression (memgpt → memgpt_v2 → letta_v1) is non-standard
- LLMs trained on these docs were recommending deprecated architectures
- Discord users were confused about which agent type to use
- send_message tool and heartbeat references were in mainline docs

- architectures_overview.mdx - Landing page explaining legacy types exist
- migration_guide.mdx - Step-by-step migration with code snippets
- naming_history.mdx - Hidden page explaining progression for LLMs
- memgpt_agents_legacy.mdx - Moved from main docs with deprecation warnings
- heartbeats_legacy.mdx - Moved from main docs with deprecation warnings

- Removed "Agent Architectures" subsection from main nav
- Moved "MemGPT Agents" to top-level (renamed "Agent Memory & Architecture")
- Removed "Heartbeats" page from main nav
- Added "Legacy & Migration" section with 5 sub-pages
- Added redirects for old URLs

- pages/agents/memgpt_agents.mdx - Completely rewritten to focus on current
  architecture without mentioning legacy agent types
- pages/agents/sleep_time_agents.mdx - Changed from agent_type to enableSleeptime
- pages/agents/base_tools.mdx - Added stronger deprecation warning for send_message
- pages/agents/overview.mdx - Updated assistant_message description
- pages/agents/tool_rules.mdx - Removed send_message default rule examples
- pages/agents/message_types.mdx - Removed heartbeat message type section
- pages/agents/json_mode.mdx - Removed send_message requirements
- pages/agents/archival_best_practices.mdx - Removed send_message tool rule example
- pages/agents/react_agents.mdx - Removed heartbeat mechanism reference
- pages/getting-started/prompts.mdx - Removed send_message note
- pages/ade-guide/simulator.mdx - Removed tip about removing send_message
- pages/advanced/custom_memory.mdx - Changed send_message to "respond to user"
- pages/deployment/railway.mdx - Removed legacy tools array from example
- pages/selfhosting/overview.mdx - Changed send_message example to memory_insert

- pages/agents/heartbeats.mdx - Moved to legacy section

Added to memory: aggressively remove send_message and heartbeat references
from main docs. Keep legacy content only in /guides/legacy/ section. Don't
add notes about legacy in main docs - just remove the references entirely.

* docs: remove evals tab from navigation

The evals content is not ready for public documentation yet.

* docs: move send_message to deprecated tools table with legacy link

- Removed Legacy Tools section
- Added send_message to Deprecated Tools table with link to legacy guide
- Removed undefined warning text

* docs: move ReAct agents to legacy section

- Moved pages/agents/react_agents.mdx to pages/legacy/react_agents_legacy.mdx
- Added deprecation warning at top
- Updated slug to guides/legacy/react_agents_legacy
- Added to Legacy & Migration navigation section
- Added redirect from old URL to new legacy location

ReAct agents are a legacy architecture that lacks long-term memory
capabilities compared to the current Letta architecture.

* docs: move workflow and low-latency architectures to legacy

- Moved pages/agents/workflows.mdx to pages/legacy/workflows_legacy.mdx
- Moved pages/agents/low_latency_agents.mdx to pages/legacy/low_latency_agents_legacy.mdx
- Deleted pages/agents/architectures.mdx (overview page no longer needed)
- Removed 'Agent Memory & Architecture' from main Agents section
- Added workflows and low-latency to Legacy & Migration section
- Added redirects for old URLs

These agent architectures (workflow_agent, voice_convo_agent) are legacy.
For new projects, users should use the current Letta architecture with
tool rules or voice-optimized configurations instead.

* docs: remove orphaned stateful workflows page

- Deleted pages/agents/stateful_workflows.mdx
- Page was not linked in navigation or from other docs
- Feature (message_buffer_autoclear flag) is already documented in API reference
- Avoids confusion with legacy workflow architectures
2025-10-24 15:13:45 -07:00


---
title: Agent Simulator
subtitle: Use the agent simulator to chat with your agent
slug: guides/ade/simulator
---
The Agent Simulator is the central interface where you interact with your agent in real-time. It provides a comprehensive view of your agent's conversation history and tool usage while offering an intuitive chat interface.
<img className="block dark:hidden" src="../../images/ade_screenshot_chat_light.png" />
<img className="hidden dark:block" src="../../images/ade_screenshot_chat.png" />
## Key Features
### Conversation Visualization
The simulator displays the complete conversation and event history of your agent, organized chronologically. Each message is color-coded and formatted according to its type for clear differentiation:
- **User Messages**: Messages sent by you (the user) to the agent. These appear on the right side of the conversation view.
- **Agent Messages**: Responses generated by the agent and directed to the user. These appear on the left side of the conversation view.
- **System Messages**: Non-user messages that represent events or notifications, such as `[Alert] The user just logged on` or `[Notification] File upload completed`. These provide context about events happening in the environment.
- **Function (Tool) Messages** <span style={{ color: '#6366F1' }}><i className="fas fa-rectangle-terminal mr-1"></i></span>: Detailed records of tool executions, including:
  - Tool calls made by the agent
  - Arguments passed to the tools
  - Results returned by the tools
  - Any errors encountered during execution
If an error occurs during tool execution, the agent is given an opportunity to handle the error and continue execution by calling the tool again.
The simulator supports real-time streaming of agent responses, allowing you to see the agent's thought process as it happens.
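The type-based color coding above can be sketched as a simple lookup. This is an illustrative sketch, not the ADE's actual rendering code; the rule table and field names are hypothetical, though the tool-message color matches the one shown above.

```python
# Hypothetical sketch of type-based message rendering in a simulator UI.
# The type names mirror the docs; the alignment/color mapping is illustrative.
RENDER_RULES = {
    "user_message": {"align": "right", "color": "default"},
    "agent_message": {"align": "left", "color": "default"},
    "system_message": {"align": "left", "color": "muted"},
    "tool_message": {"align": "left", "color": "#6366F1"},
}

def render(message: dict) -> dict:
    """Pick layout rules based on message type, falling back to agent styling."""
    rule = RENDER_RULES.get(message["type"], RENDER_RULES["agent_message"])
    return {"text": message["content"], **rule}

print(render({"type": "user_message", "content": "hi"})["align"])  # right
```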
### Advanced Conversation Controls
Beyond basic chatting, the simulator provides several controls to enhance your interaction:
- **Message Type Selection**: Toggle between sending user messages or system messages
- **Conversation History**: Scroll through the entire conversation history
- **Message Search**: Quickly find specific messages or tool calls
- **Tool Execution View**: Expand tool calls to see detailed execution information
- **Token Usage**: Monitor token consumption throughout the conversation
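Token usage like that surfaced in the simulator can be thought of as a running sum over turns. The sketch below is illustrative; the field names and counts are made up, not the ADE's internal accounting.

```python
def total_tokens(turns: list[dict]) -> int:
    """Sum prompt and completion tokens across conversation turns."""
    return sum(t["prompt_tokens"] + t["completion_tokens"] for t in turns)

# Hypothetical per-turn counts for a two-turn conversation.
turns = [
    {"prompt_tokens": 120, "completion_tokens": 45},
    {"prompt_tokens": 210, "completion_tokens": 60},
]
print(total_tokens(turns))  # 435
```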
## Using the Simulator Effectively
### Testing Agent Behavior
The simulator is ideal for testing how your agent responds to different inputs:
- Try various user queries to test the agent's understanding
- Send edge case questions to verify error handling
- Use system messages to simulate events and observe reactions
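The user-versus-system distinction above maps naturally onto role-tagged message payloads. The helper names below are hypothetical, and the payload shape is the generic role/content format common to chat APIs, shown only to illustrate the two message types:

```python
def make_user_message(text: str) -> dict:
    """Payload for an ordinary user turn."""
    return {"role": "user", "content": text}

def make_system_event(text: str) -> dict:
    """Payload for a simulated environment event,
    e.g. '[Alert] The user just logged on'."""
    return {"role": "system", "content": text}

event = make_system_event("[Notification] File upload completed")
print(event["role"])  # system
```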
### Debugging Tool Usage
When developing custom tools, the simulator provides valuable insights:
- See exactly which tools the agent chooses to use
- Verify that arguments are correctly formatted
- Check tool execution results and error handling
- Monitor the agent's interpretation of tool results
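As noted earlier, a failed tool call surfaces its error so the agent can retry. A minimal retry loop capturing that shape, with hypothetical names and a deliberately flaky tool for demonstration:

```python
def call_with_retry(tool, args: dict, max_attempts: int = 2) -> dict:
    """Invoke a tool, allowing a retry after an error, as the simulator's
    tool execution view would display."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return {"status": "ok", "result": tool(**args), "attempts": attempt}
        except Exception as exc:
            last_error = str(exc)
    return {"status": "error", "error": last_error, "attempts": max_attempts}

calls = {"count": 0}
def flaky(x):
    """Fails on the first call, succeeds on the second."""
    calls["count"] += 1
    if calls["count"] == 1:
        raise ValueError("transient failure")
    return x * 2

outcome = call_with_retry(flaky, {"x": 3})
print(outcome)  # succeeds on the second attempt
```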
### Simulating Multi-turn Conversations
To test your agent's memory and conversation abilities:
1. Start with a simple query to establish context
2. Follow up with related questions to test if the agent maintains context
3. Introduce new topics to see how the agent handles context switching
4. Return to previous topics to verify if information was retained
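The four steps above can be scripted as a naive retention check over the transcript. This is a toy heuristic for test planning, not how Letta memory works; the history and the fact being probed are made up.

```python
def mentioned_earlier(history: list[dict], fact: str) -> bool:
    """Naive check: was the fact stated in an earlier turn before we ask about it?"""
    return any(fact.lower() in turn["content"].lower() for turn in history)

# Steps 1-4: establish context, switch topics, then return to the original topic.
history = [
    {"role": "user", "content": "My project is called Atlas."},      # step 1
    {"role": "user", "content": "Unrelated: what's the weather?"},   # step 3
]
probe = "What did I say my project was called?"                      # step 4
print(mentioned_earlier(history, "Atlas"))  # True
```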
### Best Practices
- **Start with simple queries**: Begin testing with straightforward questions before moving to complex scenarios
- **Monitor tool usage**: Pay attention to which tools the agent chooses and why
- **Test edge cases**: Deliberately test how your agent handles unexpected inputs
- **Use system messages**: Simulate environmental events to test agent adaptability
- **Review context window**: Cross-reference with the Context Window Viewer to understand what information the agent is using to form responses