Files
letta-server/letta/agents/base_agent_v2.py
Sarah Wooders 7669896184 feat: allow client-side tools to be specified in request (#8220)
* feat: allow client-side tools to be specified in request

Add `client_tools` field to LettaRequest to allow passing tool schemas
at message creation time without requiring server-side registration.
When the agent calls a client-side tool, execution pauses with
stop_reason=requires_approval for the client to provide tool returns.

- Add ClientToolSchema class for request-level tool schemas
- Merge client tools with agent tools in _get_valid_tools()
- Treat client-side tool calls as requiring approval
- Add integration tests for client-side tools flow
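The merge-and-pause behaviour described above can be sketched roughly as follows. These are simplified stand-ins for illustration only, not Letta's actual classes or the real `_get_valid_tools()` implementation:

```python
# Simplified stand-ins for illustration; not Letta's actual classes.
from dataclasses import dataclass, field


@dataclass
class ClientToolSchema:
    # Flat schema: name / description / parameters, mirroring Tool.json_schema.
    name: str
    description: str = ""
    parameters: dict = field(default_factory=dict)


def get_valid_tools(agent_tools: list[dict], client_tools: list[ClientToolSchema]) -> list[dict]:
    """Merge request-level client tools into the tool list sent to the LLM."""
    merged = list(agent_tools)
    merged.extend(
        {"name": t.name, "description": t.description, "parameters": t.parameters}
        for t in client_tools
    )
    return merged


def needs_client_execution(tool_call_name: str, client_tools: list[ClientToolSchema]) -> bool:
    """A call to a client-side tool pauses the loop with stop_reason=requires_approval."""
    return any(t.name == tool_call_name for t in client_tools)
```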

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* test: add comprehensive end-to-end test for client-side tools

Update integration test to verify the complete flow:
- Agent calls client-side tool and pauses
- Client provides tool return with secret code
- Agent processes and responds
- User asks about the code, agent recalls it
- Validate full conversation history makes sense
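The shape of that conversation, roughly. The message format, tool name, and code value are illustrative stand-ins, not Letta's wire format:

```python
# Illustrative transcript only; "get_secret_code" and the code value are made up.
transcript = [
    {"role": "user", "content": "Use your client tool to fetch the secret code."},
    {"role": "assistant", "tool_call": {"name": "get_secret_code", "arguments": {}}},
    # The server pauses here with stop_reason=requires_approval; the client runs
    # the tool locally and posts the tool return back.
    {"role": "tool", "name": "get_secret_code", "content": "code is 7312"},
    {"role": "assistant", "content": "The secret code is 7312."},
    {"role": "user", "content": "What was the code again?"},
    {"role": "assistant", "content": "The secret code was 7312."},
]
```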

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* update apis

* fix: client-side tools schema format and test assertions

- Use flat schema format for client tools (matching t.json_schema)
- Support both object and dict access for client tools
- Fix stop_reason assertions to access .stop_reason attribute
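For reference, the flat format versus the nested OpenAI-style wrapper. Field values here are assumptions for illustration:

```python
# Flat schema, mirroring Tool.json_schema -- the format client tools use here:
flat_client_tool = {
    "name": "get_secret_code",
    "description": "Fetch the secret code held by the client.",
    "parameters": {"type": "object", "properties": {}, "required": []},
}

# Nested OpenAI chat-completions style -- NOT the format used for client tools:
nested_tool = {"type": "function", "function": flat_client_tool}
```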

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* refactor: simplify client_tools access pattern

ClientToolSchema objects always have a `.name` attribute

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix: add client_tools parameter to LettaAgentV2 for API compatibility

V2 agent doesn't use client_tools but needs the parameter
to match the base class signature.

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* revert: remove client_tools from LettaRequestConfig

Client-side tools don't work with background jobs since
there's no client present to provide tool returns.

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix: add client_tools parameter to SleeptimeMultiAgent classes

Add client_tools to step() and stream() methods in:
- SleeptimeMultiAgentV3
- SleeptimeMultiAgentV4

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* chore: regenerate API specs for client_tools support

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

---------

Co-authored-by: Letta <noreply@letta.com>
2026-01-12 10:57:48 -08:00

82 lines
3.0 KiB
Python

from abc import ABC, abstractmethod
from typing import TYPE_CHECKING, AsyncGenerator

from letta.constants import DEFAULT_MAX_STEPS
from letta.log import get_logger
from letta.schemas.agent import AgentState
from letta.schemas.enums import MessageStreamStatus
from letta.schemas.letta_message import LegacyLettaMessage, LettaMessage, MessageType
from letta.schemas.letta_response import LettaResponse
from letta.schemas.message import MessageCreate
from letta.schemas.user import User

if TYPE_CHECKING:
    from letta.schemas.letta_request import ClientToolSchema


class BaseAgentV2(ABC):
    """
    Abstract base class for the main agent execution loop for Letta agents, handling
    message management, LLM API requests, tool execution, and context tracking.
    """

    def __init__(self, agent_state: AgentState, actor: User):
        self.agent_state = agent_state
        self.actor = actor
        self.logger = get_logger(agent_state.id)

    @abstractmethod
    async def build_request(
        self,
        input_messages: list[MessageCreate],
    ) -> dict:
        """
        Execute the agent loop in dry_run mode, returning just the generated request
        payload sent to the underlying LLM provider.
        """
        raise NotImplementedError

    @abstractmethod
    async def step(
        self,
        input_messages: list[MessageCreate],
        max_steps: int = DEFAULT_MAX_STEPS,
        run_id: str | None = None,
        use_assistant_message: bool = True,
        include_return_message_types: list[MessageType] | None = None,
        request_start_timestamp_ns: int | None = None,
        client_tools: list["ClientToolSchema"] | None = None,
    ) -> LettaResponse:
        """
        Execute the agent loop in blocking mode, returning all messages at once.

        Args:
            client_tools: Optional list of client-side tool schemas. When one of these
                tools is called, execution pauses for the client to provide the tool return.
        """
        raise NotImplementedError

    @abstractmethod
    async def stream(
        self,
        input_messages: list[MessageCreate],
        max_steps: int = DEFAULT_MAX_STEPS,
        stream_tokens: bool = False,
        run_id: str | None = None,
        use_assistant_message: bool = True,
        include_return_message_types: list[MessageType] | None = None,
        request_start_timestamp_ns: int | None = None,
        client_tools: list["ClientToolSchema"] | None = None,
    ) -> AsyncGenerator[LettaMessage | LegacyLettaMessage | MessageStreamStatus, None]:
        """
        Execute the agent loop in streaming mode, yielding chunks as they become available.
        If stream_tokens is True, individual tokens are streamed as they arrive from the LLM,
        providing the lowest-latency experience; otherwise each complete step (reasoning +
        tool call + tool return) is yielded as it completes.

        Args:
            client_tools: Optional list of client-side tool schemas. When one of these
                tools is called, execution pauses for the client to provide the tool return.
        """
        raise NotImplementedError
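For orientation, a toy sketch of how a caller consumes the `stream()` generator described above. The generator below is a stand-in yielding strings, not the real `LettaMessage` objects:

```python
import asyncio


async def fake_stream(stream_tokens: bool = False):
    # Stand-in for BaseAgentV2.stream(): with stream_tokens, individual tokens
    # arrive as they are generated; otherwise each complete step is yielded.
    if stream_tokens:
        for token in ["The", " secret", " code", " is", " 7312."]:
            yield token
    else:
        yield "reasoning + tool call + tool return (one complete step)"
    yield "done"  # stand-in for the terminal MessageStreamStatus marker


async def main() -> None:
    async for chunk in fake_stream(stream_tokens=True):
        print(chunk)


asyncio.run(main())
```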