* feat: add non-streaming option for conversation messages
  - Add ConversationMessageRequest with stream=True default (backwards compatible)
  - stream=true (default): SSE streaming via StreamingService
  - stream=false: JSON response via AgentLoop.load().step()

  🤖 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* chore: regenerate API schema for ConversationMessageRequest

* feat: add direct ClickHouse storage for raw LLM traces

  Adds ability to store raw LLM request/response payloads directly in ClickHouse, bypassing OTEL span attribute size limits. This enables debugging and analytics on large LLM payloads (>10MB system prompts, large tool schemas, etc.).

  New files:
  - letta/schemas/llm_raw_trace.py: Pydantic schema with ClickHouse row helper
  - letta/services/llm_raw_trace_writer.py: Async batching writer (fire-and-forget)
  - letta/services/llm_raw_trace_reader.py: Reader with query methods
  - scripts/sql/clickhouse/llm_raw_traces.ddl: Production table DDL
  - scripts/sql/clickhouse/llm_raw_traces_local.ddl: Local dev DDL
  - apps/core/clickhouse-init.sql: Local dev initialization

  Modified:
  - letta/settings.py: Added 4 settings (store_llm_raw_traces, ttl, batch_size, flush_interval)
  - letta/llm_api/llm_client_base.py: Integration into request_async_with_telemetry
  - compose.yaml: Added ClickHouse service for local dev
  - justfile: Added clickhouse, clickhouse-cli, clickhouse-traces commands

  Feature disabled by default (LETTA_STORE_LLM_RAW_TRACES=false). Uses ZSTD(3) compression for 10-30x reduction on JSON payloads.

  🤖 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* fix: address code review feedback for LLM raw traces

  Fixes based on code review feedback:
  1. Fix ClickHouse endpoint parsing - default to secure=False for raw host:port inputs (was defaulting to HTTPS which breaks local dev)
  2. Make raw trace writes truly fire-and-forget - use asyncio.create_task() instead of awaiting, so JSON serialization doesn't block request path
  3. Add bounded queue (maxsize=10000) - prevents unbounded memory growth under load. Drops traces with warning if queue is full.
  4. Fix deprecated asyncio usage - get_running_loop() instead of get_event_loop()
  5. Add org_id fallback - use _telemetry_org_id if actor doesn't have it
  6. Remove unused imports - json import in reader

  🤖 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* fix: add missing asyncio import and simplify JSON serialization
  - Add missing 'import asyncio' that was causing 'name asyncio is not defined' error
  - Remove unnecessary clean_double_escapes() function - the JSON is stored correctly, the clickhouse-client CLI was just adding extra escaping when displaying
  - Update just clickhouse-trace to use Python client for correct JSON output

  🤖 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* test: add clickhouse raw trace integration test

* test: simplify clickhouse trace assertions

* refactor: centralize usage parsing and stream error traces

  Use per-client usage helpers for raw trace extraction and ensure streaming errors log requests with error metadata.

  👾 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* test: exercise provider usage parsing live

  Make live OpenAI/Anthropic/Gemini requests with credential gating and validate Anthropic cache usage mapping when present.
  👾 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* test: fix usage parsing tests to pass
  - Use GoogleAIClient with GEMINI_API_KEY instead of GoogleVertexClient
  - Update model to gemini-2.0-flash (1.5-flash deprecated in v1beta)
  - Add tools=[] for Gemini/Anthropic build_request_data

  👾 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* refactor: extract_usage_statistics returns LettaUsageStatistics

  Standardize on LettaUsageStatistics as the canonical usage format returned by client helpers. Inline UsageStatistics construction for ChatCompletionResponse where needed.

  👾 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* feat: add is_byok and llm_config_json columns to ClickHouse traces

  Extend llm_raw_traces table with:
  - is_byok (UInt8): Track BYOK vs base provider usage for billing analytics
  - llm_config_json (String, ZSTD): Store full LLM config for debugging and analysis

  This enables queries like:
  - BYOK usage breakdown by provider/model
  - Config parameter analysis (temperature, max_tokens, etc.)
  - Debugging specific request configurations

* feat: add tests for error traces, llm_config_json, and cache tokens
  - Update llm_raw_trace_reader.py to query new columns (is_byok, cached_input_tokens, cache_write_tokens, reasoning_tokens, llm_config_json)
  - Add test_error_trace_stored_in_clickhouse to verify error fields
  - Add test_cache_tokens_stored_for_anthropic to verify cache token storage
  - Update existing tests to verify llm_config_json is stored correctly
  - Make llm_config required in log_provider_trace_async()
  - Simplify provider extraction to use provider_name directly

  🐾 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* ci: add ClickHouse integration tests to CI pipeline
  - Add use-clickhouse option to reusable-test-workflow.yml
  - Add ClickHouse service container with otel database
  - Add schema initialization step using clickhouse-init.sql
  - Add ClickHouse env vars (CLICKHOUSE_ENDPOINT, etc.)
  - Add separate clickhouse-integration-tests job running integration_test_clickhouse_llm_raw_traces.py

  🐾 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* refactor: simplify provider and org_id extraction in raw trace writer
  - Use model_endpoint_type.value for provider (not provider_name)
  - Simplify org_id to just self.actor.organization_id (actor is always pydantic)

  🐾 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* refactor: simplify LLMRawTraceWriter with _enabled flag
  - Check ClickHouse env vars once at init, set _enabled flag
  - Early return in write_async/flush_async if not enabled
  - Remove ValueError raises (never used)
  - Simplify _get_client (no validation needed since already checked)

  🐾 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* fix: add LLMRawTraceWriter shutdown to FastAPI lifespan

  Properly flush pending traces on graceful shutdown via lifespan instead of relying only on atexit handler.

  🐾 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* feat: add agent_tags column to ClickHouse traces

  Store agent tags as Array(String) for filtering/analytics by tag.
  🐾 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* cleanup

* fix(ci): fix ClickHouse schema initialization in CI
  - Create database separately before loading SQL file
  - Remove CREATE DATABASE from SQL file (handled in CI step)
  - Add verification step to confirm table was created
  - Use -sf flag for curl to fail on HTTP errors

  🐾 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* refactor: simplify LLM trace writer with ClickHouse async_insert
  - Use ClickHouse async_insert for server-side batching instead of manual queue/flush loop
  - Sync cloud DDL schema with clickhouse-init.sql (add missing columns)
  - Remove redundant llm_raw_traces_local.ddl
  - Remove unused batch_size/flush_interval settings
  - Update tests for simplified writer

  Key changes:
  - async_insert=1, wait_for_async_insert=1 for reliable server-side batching
  - Simple per-trace retry with exponential backoff (max 3 retries)
  - ~150 lines removed from writer

  🤖 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* refactor: consolidate ClickHouse direct writes into TelemetryManager backend
  - Add clickhouse_direct backend to provider_trace_backends
  - Remove duplicate ClickHouse write logic from llm_client_base.py
  - Configure via LETTA_TELEMETRY_PROVIDER_TRACE_BACKEND=postgres,clickhouse_direct

  The clickhouse_direct backend:
  - Converts ProviderTrace to LLMRawTrace
  - Extracts usage stats from response JSON
  - Writes via LLMRawTraceWriter with async_insert

  🤖 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* refactor: address PR review comments and fix llm_config bug

  Review comment fixes:
  - Rename clickhouse_direct -> clickhouse_analytics (clearer purpose)
  - Remove ClickHouse from OSS compose.yaml, create separate compose.clickhouse.yaml
  - Delete redundant scripts/test_llm_raw_traces.py (use pytest tests)
  - Remove unused llm_raw_traces_ttl_days setting (TTL handled in DDL)
  - Fix socket description leak in telemetry_manager docstring
  - Add cloud-only comment to clickhouse-init.sql
  - Update justfile to use separate compose file

  Bug fix:
  - Fix llm_config not being passed to ProviderTrace in telemetry
  - Now correctly populates provider, model, is_byok for all LLM calls
  - Affects both request_async_with_telemetry and log_provider_trace_async

  DDL optimizations:
  - Add secondary indexes (bloom_filter for agent_id, model, step_id)
  - Add minmax indexes for is_byok, is_error
  - Change model and error_type to LowCardinality for faster GROUP BY

  🤖 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* refactor: rename llm_raw_traces -> llm_traces

  Address review feedback that "raw" is misleading since we denormalize fields.

  Renames:
  - Table: llm_raw_traces -> llm_traces
  - Schema: LLMRawTrace -> LLMTrace
  - Files: llm_raw_trace_{reader,writer}.py -> llm_trace_{reader,writer}.py
  - Setting: store_llm_raw_traces -> store_llm_traces

  🤖 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* fix: update workflow references to llm_traces

  Missed renaming table name in CI workflow files.
  🤖 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* fix: update clickhouse_direct -> clickhouse_analytics in docstring

  🤖 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* chore: remove inaccurate OTEL size limit comments

  The 4MB limit is our own truncation logic, not an OTEL protocol limit. The real benefit is denormalized columns for analytics queries.

  🤖 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* chore: remove local ClickHouse dev setup (cloud-only feature)
  - Delete clickhouse-init.sql and compose.clickhouse.yaml
  - Remove local clickhouse just commands
  - Update CI to use cloud DDL with MergeTree for testing

  clickhouse_analytics is a cloud-only feature. For local dev, use postgres backend.

  🤖 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* fix: restore compose.yaml to match main

  🤖 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* refactor: merge clickhouse_analytics into clickhouse backend

  Per review feedback - having two separate backends was confusing. Now the clickhouse backend:
  - Writes to llm_traces table (denormalized for cost analytics)
  - Reads from OTEL traces table (will cut over to llm_traces later)

  Config: LETTA_TELEMETRY_PROVIDER_TRACE_BACKEND=postgres,clickhouse

  🤖 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* fix: correct path to DDL file in CI workflow

  🤖 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* chore: add provider index to DDL for faster filtering

  🤖 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* fix: configure telemetry backend in clickhouse tests

  Tests need to set telemetry_settings.provider_trace_backends to include 'clickhouse', otherwise traces are routed to the default postgres backend.

  🤖 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* fix: set provider_trace_backend field, not property

  provider_trace_backends is a computed property, so tests need to set the underlying provider_trace_backend string field instead.

  🤖 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* fix: error trace test and error_type extraction
  - Add TelemetryManager to error trace test so traces get written
  - Fix error_type extraction to check top-level before nested error dict

  🤖 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* fix: use provider_trace.id for trace correlation across backends
  - Pass provider_trace.id to LLMTrace instead of auto-generating
  - Log warning if ID is missing (shouldn't happen, helps debug)
  - Fall back to a new UUID only if not set

  🤖 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

* fix: trace ID correlation and concurrency issues
  - Strip "provider_trace-" prefix from ID for UUID storage in ClickHouse
  - Add asyncio.Lock to serialize writes (clickhouse_connect not thread-safe)
  - Fix Anthropic prompt_tokens to include cached tokens for cost analytics
  - Log warning if provider_trace.id is missing

  🤖 Generated with [Letta Code](https://letta.com)
  Co-Authored-By: Letta <noreply@letta.com>

---------

Co-authored-by: Letta <noreply@letta.com>
Co-authored-by: Caren Thomas <carenthomas@gmail.com>
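The sketches below illustrate a few of the mechanisms described in the commits above; they are simplified, hypothetical examples rather than the actual Letta implementation. First, the non-streaming option from the first commit rests on a request schema whose stream flag defaults to True for backwards compatibility. A minimal Pydantic sketch (everything beyond the stream field is illustrative):

from pydantic import BaseModel, Field

class ConversationMessageRequest(BaseModel):
    """Hypothetical sketch of the request schema; only the stream flag is taken from the commit."""
    # stream=True (default): response is delivered as SSE events via StreamingService
    # stream=False: a single JSON response produced by AgentLoop.load().step()
    stream: bool = Field(default=True, description="Stream the response as server-sent events")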
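The backend-consolidation commits configure provider trace destinations through LETTA_TELEMETRY_PROVIDER_TRACE_BACKEND as a comma-separated string (e.g. postgres,clickhouse), exposed as a computed provider_trace_backends list. A rough sketch of that parsing; the helper name and default are assumptions:

import os

def parse_provider_trace_backends(raw: str | None) -> list[str]:
    """Split a comma-separated backend setting (e.g. "postgres,clickhouse") into backend names."""
    if not raw:
        return ["postgres"]  # assumed default when the variable is unset
    return [name.strip() for name in raw.split(",") if name.strip()]

backends = parse_provider_trace_backends(os.getenv("LETTA_TELEMETRY_PROVIDER_TRACE_BACKEND"))
# e.g. ["postgres", "clickhouse"]; the manager would write the same ProviderTrace to each backend.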
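The async_insert refactor replaces the manual queue/flush loop with server-side batching (async_insert=1, wait_for_async_insert=1), per-trace retries with exponential backoff (max 3), and an asyncio.Lock because clickhouse_connect clients are not safe for concurrent use. A minimal sketch of that write path, assuming the clickhouse_connect async client; the class name and table mirror the commits, but the details are illustrative:

import asyncio
import random

import clickhouse_connect

class LLMTraceWriterSketch:
    """Illustrative write path: one insert per trace, batched server-side by ClickHouse."""

    def __init__(self, host: str, username: str, password: str):
        self._client = None
        self._lock = asyncio.Lock()  # serialize writes; the client is not safe for concurrent use
        self._host, self._username, self._password = host, username, password

    async def _get_client(self):
        if self._client is None:
            self._client = await clickhouse_connect.get_async_client(
                host=self._host, username=self._username, password=self._password
            )
        return self._client

    async def write_trace(self, row: list, column_names: list[str], max_retries: int = 3) -> None:
        """Insert one trace row into llm_traces; retry with exponential backoff on failure."""
        for attempt in range(max_retries + 1):
            try:
                async with self._lock:
                    client = await self._get_client()
                    await client.insert(
                        "llm_traces",
                        [row],
                        column_names=column_names,
                        settings={"async_insert": 1, "wait_for_async_insert": 1},
                    )
                return
            except Exception:
                if attempt == max_retries:
                    raise
                await asyncio.sleep((2 ** attempt) + random.random())  # exponential backoff

Callers would schedule write_trace(...) with asyncio.create_task(...) so serialization and the insert stay off the request path, matching the fire-and-forget behavior described above.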
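For trace correlation, the last two commits reuse provider_trace.id across backends, stripping the "provider_trace-" prefix so the remainder can be stored as a ClickHouse UUID, and fall back to a fresh UUID (with a warning) only when the ID is missing. A small sketch of that rule:

import logging
import uuid

logger = logging.getLogger(__name__)

def trace_uuid_from_provider_trace_id(provider_trace_id: str | None) -> str:
    """Derive the UUID stored in ClickHouse from a ProviderTrace ID (illustrative helper)."""
    if not provider_trace_id:
        logger.warning("provider_trace.id missing; generating a new UUID for the ClickHouse row")
        return str(uuid.uuid4())
    return provider_trace_id.removeprefix("provider_trace-")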
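The usage-parsing commits standardize extract_usage_statistics() on LettaUsageStatistics and fold Anthropic cached tokens into prompt_tokens for cost analytics. An illustrative override for an Anthropic-style usage payload, assuming LettaUsageStatistics exposes prompt_tokens, completion_tokens, and total_tokens:

from letta.schemas.usage import LettaUsageStatistics

def extract_anthropic_usage(response_data: dict | None) -> LettaUsageStatistics:
    """Map an Anthropic-style usage block into LettaUsageStatistics (sketch, not the real client code)."""
    usage = (response_data or {}).get("usage") or {}
    cache_read = usage.get("cache_read_input_tokens", 0) or 0
    cache_write = usage.get("cache_creation_input_tokens", 0) or 0
    # Cached tokens are counted toward prompt_tokens so cost analytics see the full input size.
    prompt = (usage.get("input_tokens", 0) or 0) + cache_read + cache_write
    completion = usage.get("output_tokens", 0) or 0
    return LettaUsageStatistics(
        prompt_tokens=prompt,
        completion_tokens=completion,
        total_tokens=prompt + completion,
    )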
Python · 455 lines · 19 KiB
import json
from abc import abstractmethod
from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, Union

import httpx
from anthropic.types.beta.messages import BetaMessageBatch
from openai import AsyncStream, Stream
from openai.types.chat.chat_completion_chunk import ChatCompletionChunk

from letta.errors import ErrorCode, LLMConnectionError, LLMError
from letta.otel.tracing import log_event, trace_method
from letta.schemas.embedding_config import EmbeddingConfig
from letta.schemas.enums import AgentType, ProviderCategory
from letta.schemas.llm_config import LLMConfig
from letta.schemas.message import Message
from letta.schemas.openai.chat_completion_response import ChatCompletionResponse
from letta.schemas.provider_trace import ProviderTrace
from letta.schemas.usage import LettaUsageStatistics
from letta.services.telemetry_manager import TelemetryManager
from letta.settings import settings

if TYPE_CHECKING:
    from letta.orm import User


class LLMClientBase:
    """
    Abstract base class for LLM clients, formatting the request objects,
    handling the downstream request and parsing into chat completions response format
    """

    def __init__(
        self,
        put_inner_thoughts_first: Optional[bool] = True,
        use_tool_naming: bool = True,
        actor: Optional["User"] = None,
    ):
        self.actor = actor
        self.put_inner_thoughts_first = put_inner_thoughts_first
        self.use_tool_naming = use_tool_naming
        self._telemetry_manager: Optional["TelemetryManager"] = None
        self._telemetry_agent_id: Optional[str] = None
        self._telemetry_agent_tags: Optional[List[str]] = None
        self._telemetry_run_id: Optional[str] = None
        self._telemetry_step_id: Optional[str] = None
        self._telemetry_call_type: Optional[str] = None
        self._telemetry_org_id: Optional[str] = None
        self._telemetry_user_id: Optional[str] = None
        self._telemetry_compaction_settings: Optional[Dict] = None
        self._telemetry_llm_config: Optional[Dict] = None

    def set_telemetry_context(
        self,
        telemetry_manager: Optional["TelemetryManager"] = None,
        agent_id: Optional[str] = None,
        agent_tags: Optional[List[str]] = None,
        run_id: Optional[str] = None,
        step_id: Optional[str] = None,
        call_type: Optional[str] = None,
        org_id: Optional[str] = None,
        user_id: Optional[str] = None,
        compaction_settings: Optional[Dict] = None,
        llm_config: Optional[Dict] = None,
    ) -> None:
        """Set telemetry context for provider trace logging."""
        self._telemetry_manager = telemetry_manager
        self._telemetry_agent_id = agent_id
        self._telemetry_agent_tags = agent_tags
        self._telemetry_run_id = run_id
        self._telemetry_step_id = step_id
        self._telemetry_call_type = call_type
        self._telemetry_org_id = org_id
        self._telemetry_user_id = user_id
        self._telemetry_compaction_settings = compaction_settings
        self._telemetry_llm_config = llm_config

    def extract_usage_statistics(self, response_data: Optional[dict], llm_config: LLMConfig) -> LettaUsageStatistics:
        """Provider-specific usage parsing hook (override in subclasses). Returns LettaUsageStatistics."""
        return LettaUsageStatistics()

    async def request_async_with_telemetry(self, request_data: dict, llm_config: LLMConfig) -> dict:
        """Wrapper around request_async that logs telemetry for all requests including errors.

        Call set_telemetry_context() first to set agent_id, run_id, etc.

        Telemetry is logged via TelemetryManager which supports multiple backends
        (postgres, clickhouse, socket, etc.) configured via
        LETTA_TELEMETRY_PROVIDER_TRACE_BACKEND.
        """
        from letta.log import get_logger

        logger = get_logger(__name__)
        response_data = None
        error_msg = None
        error_type = None
        try:
            response_data = await self.request_async(request_data, llm_config)
            return response_data
        except Exception as e:
            error_msg = str(e)
            error_type = type(e).__name__
            raise
        finally:
            # Log telemetry via configured backends
            if self._telemetry_manager and settings.track_provider_trace:
                if self.actor is None:
                    logger.warning(f"Skipping telemetry: actor is None (call_type={self._telemetry_call_type})")
                else:
                    try:
                        pydantic_actor = self.actor.to_pydantic() if hasattr(self.actor, "to_pydantic") else self.actor
                        await self._telemetry_manager.create_provider_trace_async(
                            actor=pydantic_actor,
                            provider_trace=ProviderTrace(
                                request_json=request_data,
                                response_json=response_data if response_data else {"error": error_msg, "error_type": error_type},
                                step_id=self._telemetry_step_id,
                                agent_id=self._telemetry_agent_id,
                                agent_tags=self._telemetry_agent_tags,
                                run_id=self._telemetry_run_id,
                                call_type=self._telemetry_call_type,
                                org_id=self._telemetry_org_id,
                                user_id=self._telemetry_user_id,
                                compaction_settings=self._telemetry_compaction_settings,
                                llm_config=llm_config.model_dump() if llm_config else self._telemetry_llm_config,
                            ),
                        )
                    except Exception as e:
                        logger.warning(f"Failed to log telemetry: {e}")

    async def stream_async_with_telemetry(self, request_data: dict, llm_config: LLMConfig):
        """Returns raw stream. Caller should log telemetry after processing via log_provider_trace_async().

        Call set_telemetry_context() first to set agent_id, run_id, etc.
        After consuming the stream, call log_provider_trace_async() with the response data.
        """
        return await self.stream_async(request_data, llm_config)

    async def log_provider_trace_async(
        self,
        request_data: dict,
        response_json: Optional[dict],
        llm_config: Optional[LLMConfig] = None,
        latency_ms: Optional[int] = None,
        error_msg: Optional[str] = None,
        error_type: Optional[str] = None,
    ) -> None:
        """Log provider trace telemetry. Call after processing LLM response.

        Uses telemetry context set via set_telemetry_context().
        Telemetry is logged via TelemetryManager which supports multiple backends.

        Args:
            request_data: The request payload sent to the LLM
            response_json: The response payload from the LLM
            llm_config: LLMConfig for extracting provider/model info
            latency_ms: Latency in milliseconds (not used currently, kept for API compatibility)
            error_msg: Error message if request failed (not used currently)
            error_type: Error type if request failed (not used currently)
        """
        from letta.log import get_logger

        logger = get_logger(__name__)

        if not self._telemetry_manager or not settings.track_provider_trace:
            return

        if self.actor is None:
            logger.warning(f"Skipping telemetry: actor is None (call_type={self._telemetry_call_type})")
            return

        if response_json is None:
            return

        try:
            pydantic_actor = self.actor.to_pydantic() if hasattr(self.actor, "to_pydantic") else self.actor
            await self._telemetry_manager.create_provider_trace_async(
                actor=pydantic_actor,
                provider_trace=ProviderTrace(
                    request_json=request_data,
                    response_json=response_json,
                    step_id=self._telemetry_step_id,
                    agent_id=self._telemetry_agent_id,
                    agent_tags=self._telemetry_agent_tags,
                    run_id=self._telemetry_run_id,
                    call_type=self._telemetry_call_type,
                    org_id=self._telemetry_org_id,
                    user_id=self._telemetry_user_id,
                    compaction_settings=self._telemetry_compaction_settings,
                    llm_config=llm_config.model_dump() if llm_config else self._telemetry_llm_config,
                ),
            )
        except Exception as e:
            logger.warning(f"Failed to log telemetry: {e}")

    @trace_method
    async def send_llm_request(
        self,
        agent_type: AgentType,
        messages: List[Message],
        llm_config: LLMConfig,
        tools: Optional[List[dict]] = None,  # TODO: change to Tool object
        force_tool_call: Optional[str] = None,
        telemetry_manager: Optional["TelemetryManager"] = None,
        step_id: Optional[str] = None,
        tool_return_truncation_chars: Optional[int] = None,
    ) -> Union[ChatCompletionResponse, Stream[ChatCompletionChunk]]:
        """
        Issues a request to the downstream model endpoint and parses response.
        If stream=True, returns a Stream[ChatCompletionChunk] that can be iterated over.
        Otherwise returns a ChatCompletionResponse.
        """
        request_data = self.build_request_data(
            agent_type,
            messages,
            llm_config,
            tools,
            force_tool_call,
            requires_subsequent_tool_call=False,
            tool_return_truncation_chars=tool_return_truncation_chars,
        )

        try:
            log_event(name="llm_request_sent", attributes=request_data)
            response_data = await self.request_async(request_data, llm_config)
            if step_id and telemetry_manager:
                telemetry_manager.create_provider_trace(
                    actor=self.actor,
                    provider_trace=ProviderTrace(
                        request_json=request_data,
                        response_json=response_data,
                        step_id=step_id,
                    ),
                )
            log_event(name="llm_response_received", attributes=response_data)
        except Exception as e:
            raise self.handle_llm_error(e)

        return await self.convert_response_to_chat_completion(response_data, messages, llm_config)

    @trace_method
    async def send_llm_request_async(
        self,
        request_data: dict,
        messages: List[Message],
        llm_config: LLMConfig,
        telemetry_manager: "TelemetryManager | None" = None,
        step_id: str | None = None,
    ) -> Union[ChatCompletionResponse, AsyncStream[ChatCompletionChunk]]:
        """
        Issues a request to the downstream model endpoint.
        If stream=True, returns an AsyncStream[ChatCompletionChunk] that can be async iterated over.
        Otherwise returns a ChatCompletionResponse.
        """

        try:
            log_event(name="llm_request_sent", attributes=request_data)
            response_data = await self.request_async(request_data, llm_config)
            if settings.track_provider_trace and telemetry_manager:
                await telemetry_manager.create_provider_trace_async(
                    actor=self.actor,
                    provider_trace=ProviderTrace(
                        request_json=request_data,
                        response_json=response_data,
                        step_id=step_id,
                    ),
                )

            log_event(name="llm_response_received", attributes=response_data)
        except Exception as e:
            raise self.handle_llm_error(e)

        return await self.convert_response_to_chat_completion(response_data, messages, llm_config)

    async def send_llm_batch_request_async(
        self,
        agent_type: AgentType,
        agent_messages_mapping: Dict[str, List[Message]],
        agent_tools_mapping: Dict[str, List[dict]],
        agent_llm_config_mapping: Dict[str, LLMConfig],
    ) -> Union[BetaMessageBatch]:
        """
        Issues a batch request to the downstream model endpoint and parses response.
        """
        raise NotImplementedError

    @abstractmethod
    def build_request_data(
        self,
        agent_type: AgentType,
        messages: List[Message],
        llm_config: LLMConfig,
        tools: List[dict],
        force_tool_call: Optional[str] = None,
        requires_subsequent_tool_call: bool = False,
        tool_return_truncation_chars: Optional[int] = None,
    ) -> dict:
        """
        Constructs a request object in the expected data format for this client.

        Args:
            tool_return_truncation_chars: If set, truncates tool return content to this many characters.
                Used during summarization to avoid context window issues.
        """
        raise NotImplementedError

    @abstractmethod
    def request(self, request_data: dict, llm_config: LLMConfig) -> dict:
        """
        Performs underlying request to llm and returns raw response.
        """
        raise NotImplementedError

    @abstractmethod
    async def request_async(self, request_data: dict, llm_config: LLMConfig) -> dict:
        """
        Performs underlying request to llm and returns raw response.
        """
        raise NotImplementedError

    @abstractmethod
    async def request_embeddings(self, texts: List[str], embedding_config: EmbeddingConfig) -> List[List[float]]:
        """
        Generate embeddings for a batch of texts.

        Args:
            texts (List[str]): List of texts to generate embeddings for.
            embedding_config (EmbeddingConfig): Configuration for the embedding model.

        Returns:
            embeddings (List[List[float]]): List of embeddings for the input texts.
        """
        raise NotImplementedError

    @abstractmethod
    async def convert_response_to_chat_completion(
        self,
        response_data: dict,
        input_messages: List[Message],
        llm_config: LLMConfig,
    ) -> ChatCompletionResponse:
        """
        Converts custom response format from llm client into an OpenAI
        ChatCompletionResponse object.
        """
        raise NotImplementedError

    @abstractmethod
    async def stream_async(self, request_data: dict, llm_config: LLMConfig) -> AsyncStream[ChatCompletionChunk]:
        """
        Performs underlying streaming request to llm and returns raw response.
        """
        raise NotImplementedError(f"Streaming is not supported for {llm_config.model_endpoint_type}")

    @abstractmethod
    def is_reasoning_model(self, llm_config: LLMConfig) -> bool:
        """
        Returns True if the model is a native reasoning model.
        """
        raise NotImplementedError

    @abstractmethod
    def handle_llm_error(self, e: Exception) -> Exception:
        """
        Maps provider-specific errors to common LLMError types.
        Each LLM provider should implement this to translate their specific errors.

        Args:
            e: The original provider-specific exception

        Returns:
            An LLMError subclass that represents the error in a provider-agnostic way
        """
        # Handle httpx.RemoteProtocolError which can occur during streaming
        # when the remote server closes the connection unexpectedly
        # (e.g., "peer closed connection without sending complete message body")
        if isinstance(e, httpx.RemoteProtocolError):
            from letta.log import get_logger

            logger = get_logger(__name__)
            logger.warning(f"[LLM] Remote protocol error during streaming: {e}")
            return LLMConnectionError(
                message=f"Connection error during streaming: {str(e)}",
                code=ErrorCode.INTERNAL_SERVER_ERROR,
                details={"cause": str(e.__cause__) if e.__cause__ else None},
            )

        return LLMError(f"Unhandled LLM error: {str(e)}")

    def get_byok_overrides(self, llm_config: LLMConfig) -> Tuple[Optional[str], Optional[str], Optional[str]]:
        """
        Returns the override key for the given llm config.
        Only fetches API key from database for BYOK providers.
        Base providers use environment variables directly.
        """
        api_key = None
        # Only fetch API key from database for BYOK providers
        # Base providers should always use environment variables
        if llm_config.provider_category == ProviderCategory.byok:
            from letta.services.provider_manager import ProviderManager

            api_key = ProviderManager().get_override_key(llm_config.provider_name, actor=self.actor)
            # If we got an empty string from the database, treat it as None
            # so the client can fall back to environment variables or default behavior
            if api_key == "":
                api_key = None

        return api_key, None, None

    async def get_byok_overrides_async(self, llm_config: LLMConfig) -> Tuple[Optional[str], Optional[str], Optional[str]]:
        """
        Returns the override key for the given llm config.
        Only fetches API key from database for BYOK providers.
        Base providers use environment variables directly.
        """
        api_key = None
        # Only fetch API key from database for BYOK providers
        # Base providers should always use environment variables
        if llm_config.provider_category == ProviderCategory.byok:
            from letta.services.provider_manager import ProviderManager

            api_key = await ProviderManager().get_override_key_async(llm_config.provider_name, actor=self.actor)
            # If we got an empty string from the database, treat it as None
            # so the client can fall back to environment variables or default behavior
            if api_key == "":
                api_key = None

        return api_key, None, None

    def _fix_truncated_json_response(self, response: ChatCompletionResponse) -> ChatCompletionResponse:
        """
        Fixes truncated JSON responses by ensuring the content is properly formatted.
        This is a workaround for some providers that may return incomplete JSON.
        """
        if response.choices and response.choices[0].message and response.choices[0].message.tool_calls:
            tool_call_args_str = response.choices[0].message.tool_calls[0].function.arguments
            try:
                json.loads(tool_call_args_str)
            except json.JSONDecodeError:
                try:
                    json_str_end = ""
                    quote_count = tool_call_args_str.count('"')
                    if quote_count % 2 != 0:
                        json_str_end = json_str_end + '"'

                    open_braces = tool_call_args_str.count("{")
                    close_braces = tool_call_args_str.count("}")
                    missing_braces = open_braces - close_braces
                    json_str_end += "}" * missing_braces
                    fixed_tool_call_args_str = tool_call_args_str[: -len(json_str_end)] + json_str_end
                    json.loads(fixed_tool_call_args_str)
                    response.choices[0].message.tool_calls[0].function.arguments = fixed_tool_call_args_str
                except json.JSONDecodeError:
                    pass
        return response
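
For context, a hypothetical caller-side sketch of how the telemetry hooks defined in this file fit together: set_telemetry_context() is called once, then request_async_with_telemetry() logs a ProviderTrace for both successes and failures. The client here is any concrete LLMClientBase subclass and the identifiers are placeholders, not real Letta values:

from letta.schemas.llm_config import LLMConfig

async def run_single_llm_step(client, request_data: dict, llm_config: LLMConfig, telemetry_manager):
    # Attach telemetry context before issuing the request (placeholder identifiers).
    client.set_telemetry_context(
        telemetry_manager=telemetry_manager,
        agent_id="agent-placeholder",
        step_id="step-placeholder",
        call_type="agent_step",  # assumed label
        llm_config=llm_config.model_dump(),
    )
    # Non-streaming path: the finally block inside request_async_with_telemetry writes the
    # ProviderTrace to the configured backends even if the provider call raises.
    return await client.request_async_with_telemetry(request_data, llm_config)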