fix(core): handle MCP errors and API key whitespace (#9306)

* fix: strip whitespace from API keys in LLM client headers

Fixes httpx.LocalProtocolError when API keys contain leading/trailing whitespace.
Strips whitespace from API keys before using them in HTTP headers across:
- OpenAI client (openai.py)
- Mistral client (mistral.py)
- Anthropic client (anthropic_client.py)
- Anthropic schema provider (schemas/providers/anthropic.py)
- Google AI client (google_ai_client.py)
- Proxy helpers (proxy_helpers.py)

🐾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix: handle McpError gracefully in MCP client execute_tool

Return the error as a failed result instead of re-raising, to avoid triggering Datadog alerts for expected user-facing errors such as missing tool arguments.
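A minimal sketch of this pattern, assuming a synchronous wrapper and a `(result, success)` tuple for illustration (the real handler is the async MCP client method shown in the diff below; `McpError` here is a stand-in for the MCP SDK's exception):

```python
class McpError(Exception):
    """Stand-in for the MCP SDK's McpError (assumption for this sketch)."""

def execute_tool(tool_name, call):
    try:
        return call(), True
    except Exception as e:
        if e.__class__.__name__ in ("McpError", "ToolError"):
            # Expected user-facing failure: report it as a failed result
            # instead of raising, so it never trips production alerts.
            return str(e), False
        raise  # unexpected errors still propagate
```

The name-based check (rather than `isinstance`) avoids a hard import dependency on the MCP SDK's exception classes.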

* fix: strip whitespace from API keys before passing to httpx client

Fixes httpx.LocalProtocolError by stripping leading/trailing whitespace
from API keys before passing them to OpenAI/AsyncOpenAI clients. The
OpenAI client library constructs Authorization headers internally, and
invalid header values (like keys with leading spaces) cause protocol
errors.
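A minimal sketch of the failure mode (not the actual client code): HTTP header field values may not begin or end with whitespace, and h11, which httpx uses under the hood, rejects such values with `LocalProtocolError`. Stripping the key before the header is constructed avoids this:

```python
# Hypothetical helper for illustration; the real clients pass the stripped
# key to the OpenAI SDK, which builds the Authorization header internally.
def auth_header(api_key: str) -> dict:
    key = api_key.strip()  # the fix: normalize before header construction
    return {"Authorization": f"Bearer {key}"}
```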

Applied fix to:
- azure_client.py (AzureOpenAI/AsyncAzureOpenAI)
- deepseek_client.py (OpenAI/AsyncOpenAI)
- openai_client.py (OpenAI/AsyncOpenAI via kwargs)
- xai_client.py (OpenAI/AsyncOpenAI)

🐾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix: handle JSONDecodeError in OpenAI client requests

Catches json.JSONDecodeError from OpenAI SDK when API returns invalid
JSON (typically HTML error pages from 500-series errors) and converts
to LLMServerError with helpful details.
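A sketch of the conversion, with `RuntimeError` standing in for the codebase-internal `LLMServerError`: an upstream HTML error page handed to the JSON parser raises `json.JSONDecodeError`, whose `lineno`/`colno` attributes supply the position detail mentioned above.

```python
import json

def parse_body(body: str) -> dict:
    try:
        return json.loads(body)
    except json.JSONDecodeError as e:
        # RuntimeError is a stand-in for LLMServerError in this sketch.
        raise RuntimeError(
            f"invalid JSON response (line {e.lineno} column {e.colno}): {e.msg}"
        ) from e
```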

🐾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix(core): strip API key whitespace at schema level on write/create

Add field_validator to ProviderCreate, ProviderUpdate, and ProviderCheck
schemas to strip whitespace from api_key and access_key fields before
persistence. This ensures keys are clean at the point of entry, preventing
whitespace from being encrypted and stored in the database.

Co-authored-by: Kian Jones <kianjones9@users.noreply.github.com>

* refactor: remove api_key.strip() calls across all LLM clients

Remove redundant .strip() calls on api_key parameters since pydantic models
now handle whitespace trimming at the validation layer. This centralizes
the validation logic and follows DRY principles.

- Updated 13 files across multiple LLM client implementations
- Removed 34 occurrences of api_key.strip()
- Includes: OpenAI, Anthropic, Azure, Google AI, Groq, XAI, DeepSeek, ZAI, Together, Mistral
- Also updated proxy helpers and provider schemas
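The normalization rule that replaced the per-client `.strip()` calls can be sketched in pure Python (the real implementation is the pydantic `@field_validator` shown in the schema diff below):

```python
def strip_whitespace(v):
    # None and non-string values pass through unchanged, mirroring the
    # validator's mode="before" behavior on optional key fields.
    return v.strip() if isinstance(v, str) else v
```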

👾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* refactor: remove redundant ternary operators from api_key parameters

Remove `if api_key else None` ternaries since pydantic validation ensures
api_key is either a valid string or None. The ternary was defensive programming
that's now unnecessary with proper model-level validation.

- Simplified 23 occurrences across 7 files
- Cleaner, more concise client initialization code
- No behavioral change since pydantic already handles this

👾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

---------

Co-authored-by: Letta <noreply@letta.com>
Co-authored-by: letta-code <248085862+letta-code@users.noreply.github.com>
Co-authored-by: Kian Jones <kianjones9@users.noreply.github.com>
Author: Kian Jones
Date: 2026-02-05 18:45:51 -08:00
Committed by: Caren Thomas
Parent: 126d8830b8
Commit: 662ec082cf
3 changed files with 46 additions and 12 deletions


@@ -620,12 +620,20 @@ class OpenAIClient(LLMClientBase):
         client = OpenAI(**self._prepare_client_kwargs(llm_config))
         # Route based on payload shape: Responses uses 'input', Chat Completions uses 'messages'
-        if "input" in request_data and "messages" not in request_data:
-            resp = client.responses.create(**request_data)
-            return resp.model_dump()
-        else:
-            response: ChatCompletion = client.chat.completions.create(**request_data)
-            return response.model_dump()
+        try:
+            if "input" in request_data and "messages" not in request_data:
+                resp = client.responses.create(**request_data)
+                return resp.model_dump()
+            else:
+                response: ChatCompletion = client.chat.completions.create(**request_data)
+                return response.model_dump()
+        except json.JSONDecodeError as e:
+            logger.error(f"[OpenAI] Failed to parse API response as JSON: {e}")
+            raise LLMServerError(
+                message=f"OpenAI API returned invalid JSON response (likely an HTML error page): {str(e)}",
+                code=ErrorCode.INTERNAL_SERVER_ERROR,
+                details={"json_error": str(e), "error_position": f"line {e.lineno} column {e.colno}"},
+            )

     @trace_method
     async def request_async(self, request_data: dict, llm_config: LLMConfig) -> dict:
@@ -638,12 +646,20 @@ class OpenAIClient(LLMClientBase):
         kwargs = await self._prepare_client_kwargs_async(llm_config)
         client = AsyncOpenAI(**kwargs)
         # Route based on payload shape: Responses uses 'input', Chat Completions uses 'messages'
-        if "input" in request_data and "messages" not in request_data:
-            resp = await client.responses.create(**request_data)
-            return resp.model_dump()
-        else:
-            response: ChatCompletion = await client.chat.completions.create(**request_data)
-            return response.model_dump()
+        try:
+            if "input" in request_data and "messages" not in request_data:
+                resp = await client.responses.create(**request_data)
+                return resp.model_dump()
+            else:
+                response: ChatCompletion = await client.chat.completions.create(**request_data)
+                return response.model_dump()
+        except json.JSONDecodeError as e:
+            logger.error(f"[OpenAI] Failed to parse API response as JSON: {e}")
+            raise LLMServerError(
+                message=f"OpenAI API returned invalid JSON response (likely an HTML error page): {str(e)}",
+                code=ErrorCode.INTERNAL_SERVER_ERROR,
+                details={"json_error": str(e), "error_position": f"line {e.lineno} column {e.colno}"},
+            )

     def is_reasoning_model(self, llm_config: LLMConfig) -> bool:
         return is_openai_reasoning_model(llm_config.model)


@@ -263,6 +263,11 @@ class ProviderCreate(ProviderBase):
     base_url: str | None = Field(None, description="Base URL used for requests to the provider.")
     api_version: str | None = Field(None, description="API version used for requests to the provider.")

+    @field_validator("api_key", "access_key", mode="before")
+    @classmethod
+    def strip_whitespace(cls, v: str | None) -> str | None:
+        return v.strip() if isinstance(v, str) else v
+

 class ProviderUpdate(ProviderBase):
     api_key: str = Field(..., description="API key or secret key used for requests to the provider.")

@@ -271,6 +276,11 @@ class ProviderUpdate(ProviderBase):
     base_url: str | None = Field(None, description="Base URL used for requests to the provider.")
     api_version: str | None = Field(None, description="API version used for requests to the provider.")

+    @field_validator("api_key", "access_key", mode="before")
+    @classmethod
+    def strip_whitespace(cls, v: str | None) -> str | None:
+        return v.strip() if isinstance(v, str) else v
+

 class ProviderCheck(BaseModel):
     provider_type: ProviderType = Field(..., description="The type of the provider.")

@@ -279,3 +289,8 @@ class ProviderCheck(BaseModel):
     region: str | None = Field(None, description="Region used for requests to the provider.")
     base_url: str | None = Field(None, description="Base URL used for requests to the provider.")
     api_version: str | None = Field(None, description="API version used for requests to the provider.")
+
+    @field_validator("api_key", "access_key", mode="before")
+    @classmethod
+    def strip_whitespace(cls, v: str | None) -> str | None:
+        return v.strip() if isinstance(v, str) else v


@@ -87,6 +87,9 @@ class AsyncBaseMCPClient:
             # Log at debug level to avoid triggering production alerts for expected failures
             if e.__class__.__name__ in ("McpError", "ToolError"):
                 logger.debug(f"MCP tool '{tool_name}' execution failed: {str(e)}")
+                # Return error message with failure status instead of raising to avoid Datadog alerts
+                return str(e), False
+            # Re-raise unexpected errors
             raise

         parsed_content = []