fix(core): handle MCP errors and API key whitespace (#9306)
* fix: strip whitespace from API keys in LLM client headers

  Fixes httpx.LocalProtocolError when API keys contain leading/trailing whitespace. Strips whitespace from API keys before using them in HTTP headers across:
  - OpenAI client (openai.py)
  - Mistral client (mistral.py)
  - Anthropic client (anthropic_client.py)
  - Anthropic schema provider (schemas/providers/anthropic.py)
  - Google AI client (google_ai_client.py)
  - Proxy helpers (proxy_helpers.py)

  🐾 Generated with [Letta Code](https://letta.com)

  Co-Authored-By: Letta <noreply@letta.com>

* fix: handle McpError gracefully in MCP client execute_tool

  Return the error as a failed result instead of re-raising, to avoid Datadog alerts for expected user-facing errors like missing tool arguments.

* fix: strip whitespace from API keys before passing to httpx client

  Fixes httpx.LocalProtocolError by stripping leading/trailing whitespace from API keys before passing them to OpenAI/AsyncOpenAI clients. The OpenAI client library constructs Authorization headers internally, and invalid header values (like keys with leading spaces) cause protocol errors.

  Applied the fix to:
  - azure_client.py (AzureOpenAI/AsyncAzureOpenAI)
  - deepseek_client.py (OpenAI/AsyncOpenAI)
  - openai_client.py (OpenAI/AsyncOpenAI via kwargs)
  - xai_client.py (OpenAI/AsyncOpenAI)

  🐾 Generated with [Letta Code](https://letta.com)

  Co-Authored-By: Letta <noreply@letta.com>

* fix: handle JSONDecodeError in OpenAI client requests

  Catches json.JSONDecodeError from the OpenAI SDK when the API returns invalid JSON (typically HTML error pages from 500-series errors) and converts it to an LLMServerError with helpful details.

  🐾 Generated with [Letta Code](https://letta.com)

  Co-Authored-By: Letta <noreply@letta.com>

* fix(core): strip API key whitespace at schema level on write/create

  Add a field_validator to the ProviderCreate, ProviderUpdate, and ProviderCheck schemas to strip whitespace from the api_key and access_key fields before persistence. This ensures keys are clean at the point of entry, preventing whitespace from being encrypted and stored in the database.

  Co-authored-by: Kian Jones <kianjones9@users.noreply.github.com>

* refactor: remove api_key.strip() calls across all LLM clients

  Remove redundant .strip() calls on api_key parameters, since the pydantic models now handle whitespace trimming at the validation layer. This centralizes the validation logic and follows DRY principles.
  - Updated 13 files across multiple LLM client implementations
  - Removed 34 occurrences of api_key.strip()
  - Includes: OpenAI, Anthropic, Azure, Google AI, Groq, XAI, DeepSeek, ZAI, Together, Mistral
  - Also updated proxy helpers and provider schemas

  👾 Generated with [Letta Code](https://letta.com)

  Co-Authored-By: Letta <noreply@letta.com>

* refactor: remove redundant ternary operators from api_key parameters

  Remove `if api_key else None` ternaries, since pydantic validation ensures api_key is either a valid string or None. The ternary was defensive programming that is no longer necessary with proper model-level validation.
  - Simplified 23 occurrences across 7 files
  - Cleaner, more concise client initialization code
  - No behavioral change, since pydantic already handles this

  👾 Generated with [Letta Code](https://letta.com)

  Co-Authored-By: Letta <noreply@letta.com>

---------

Co-authored-by: Letta <noreply@letta.com>
Co-authored-by: letta-code <248085862+letta-code@users.noreply.github.com>
Co-authored-by: Kian Jones <kianjones9@users.noreply.github.com>
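The schema-level fix above can be sketched with a pydantic v2 `field_validator`. This is a minimal sketch, assuming pydantic v2 and the field names the commit message mentions (`api_key`, `access_key`); the actual Letta `ProviderCreate` schema has more fields and different defaults.

```python
# Hedged sketch: stripping API-key whitespace at the schema layer, so keys
# are clean at the point of entry, before encryption and persistence.
from typing import Optional

from pydantic import BaseModel, field_validator


class ProviderCreate(BaseModel):
    name: str
    api_key: Optional[str] = None
    access_key: Optional[str] = None

    @field_validator("api_key", "access_key", mode="before")
    @classmethod
    def strip_key_whitespace(cls, v):
        # Strip leading/trailing whitespace; leave None (and any
        # non-string input) for pydantic's normal type validation.
        return v.strip() if isinstance(v, str) else v


provider = ProviderCreate(name="openai", api_key="  sk-example  ")
assert provider.api_key == "sk-example"  # whitespace removed on create
assert provider.access_key is None       # untouched when absent
```

Centralizing the stripping here is what lets the later refactor commits delete the 34 scattered `api_key.strip()` calls from the individual LLM clients.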
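The McpError handling described in the commit message can be sketched as follows. This is a hedged sketch: `McpError` here is a stand-in for the MCP SDK's exception, and the `(result, success)` return shape and `FailingSession` helper are illustrative assumptions, not Letta's actual `execute_tool` signature.

```python
# Hedged sketch: return expected MCP failures as a failed tool result
# instead of re-raising, so user-facing errors (e.g. missing tool
# arguments) do not trigger server-side alerting such as Datadog.
import asyncio


class McpError(Exception):
    """Stand-in for the MCP SDK's McpError."""


async def execute_tool(session, tool_name: str, tool_args: dict):
    try:
        result = await session.call_tool(tool_name, tool_args)
        return str(result), True
    except McpError as e:
        # Expected, user-facing failure: report it to the caller as a
        # failed result rather than propagating the exception.
        return f"MCP error executing tool '{tool_name}': {e}", False


class FailingSession:
    """Illustrative session whose tool call always fails."""

    async def call_tool(self, name, args):
        raise McpError("Invalid arguments: missing 'query'")


result, success = asyncio.run(execute_tool(FailingSession(), "web_search", {}))
assert success is False
assert "Invalid arguments" in result
```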
@@ -620,12 +620,20 @@ class OpenAIClient(LLMClientBase):
         client = OpenAI(**self._prepare_client_kwargs(llm_config))
         # Route based on payload shape: Responses uses 'input', Chat Completions uses 'messages'
-        if "input" in request_data and "messages" not in request_data:
-            resp = client.responses.create(**request_data)
-            return resp.model_dump()
-        else:
-            response: ChatCompletion = client.chat.completions.create(**request_data)
-            return response.model_dump()
+        try:
+            if "input" in request_data and "messages" not in request_data:
+                resp = client.responses.create(**request_data)
+                return resp.model_dump()
+            else:
+                response: ChatCompletion = client.chat.completions.create(**request_data)
+                return response.model_dump()
+        except json.JSONDecodeError as e:
+            logger.error(f"[OpenAI] Failed to parse API response as JSON: {e}")
+            raise LLMServerError(
+                message=f"OpenAI API returned invalid JSON response (likely an HTML error page): {str(e)}",
+                code=ErrorCode.INTERNAL_SERVER_ERROR,
+                details={"json_error": str(e), "error_position": f"line {e.lineno} column {e.colno}"},
+            )

     @trace_method
     async def request_async(self, request_data: dict, llm_config: LLMConfig) -> dict:
@@ -638,12 +646,20 @@ class OpenAIClient(LLMClientBase):
         kwargs = await self._prepare_client_kwargs_async(llm_config)
         client = AsyncOpenAI(**kwargs)
         # Route based on payload shape: Responses uses 'input', Chat Completions uses 'messages'
-        if "input" in request_data and "messages" not in request_data:
-            resp = await client.responses.create(**request_data)
-            return resp.model_dump()
-        else:
-            response: ChatCompletion = await client.chat.completions.create(**request_data)
-            return response.model_dump()
+        try:
+            if "input" in request_data and "messages" not in request_data:
+                resp = await client.responses.create(**request_data)
+                return resp.model_dump()
+            else:
+                response: ChatCompletion = await client.chat.completions.create(**request_data)
+                return response.model_dump()
+        except json.JSONDecodeError as e:
+            logger.error(f"[OpenAI] Failed to parse API response as JSON: {e}")
+            raise LLMServerError(
+                message=f"OpenAI API returned invalid JSON response (likely an HTML error page): {str(e)}",
+                code=ErrorCode.INTERNAL_SERVER_ERROR,
+                details={"json_error": str(e), "error_position": f"line {e.lineno} column {e.colno}"},
+            )

     def is_reasoning_model(self, llm_config: LLMConfig) -> bool:
         return is_openai_reasoning_model(llm_config.model)
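The `details` payload in the handler above relies on `json.JSONDecodeError` exposing `lineno` and `colno`. A minimal standalone illustration of the same pattern, using a fake HTML error page in place of the OpenAI SDK:

```python
# An HTML error page (e.g. from a 502) is not valid JSON; the resulting
# json.JSONDecodeError carries lineno/colno for the details payload.
import json

try:
    json.loads("<html><body>502 Bad Gateway</body></html>")
except json.JSONDecodeError as e:
    details = {
        "json_error": str(e),
        "error_position": f"line {e.lineno} column {e.colno}",
    }

assert details["error_position"] == "line 1 column 1"
```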