fix(core): sanitize messages to anthropic in the main path the same way as (or similar to) how we do it in the token counter (#6044)

* fix(core): sanitize messages to anthropic in the main path the same way as (or similar to) how we do it in the token counter

* fix: also patch poison error in backend by filtering lazily

* fix: remap streaming errors

* fix: dedupe tool calls

* fix: cleanup, removed try/catch
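The "dedupe tool calls" fix can be illustrated with a minimal sketch. The function name, the dict shape, and the `id` key below are assumptions for illustration, not the repository's actual implementation: the idea is to keep the first occurrence of each tool call ID and drop repeats.

```python
# Hypothetical sketch: deduplicate tool calls by their id, keeping the
# first occurrence. The dict shape and key names are illustrative only.
def dedupe_tool_calls(tool_calls: list[dict]) -> list[dict]:
    seen: set = set()
    result: list[dict] = []
    for call in tool_calls:
        call_id = call.get("id")
        if call_id in seen:
            continue  # skip a duplicate of an already-seen tool call
        seen.add(call_id)
        result.append(call)
    return result
```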
Charles Packer
2025-11-07 01:21:26 -08:00
committed by Caren Thomas
parent 363a5c1f92
commit 18029250d0
4 changed files with 245 additions and 4 deletions


@@ -110,8 +110,11 @@ class SimpleLLMStreamAdapter(LettaLLMStreamAdapter):
     # Extract optional parameters
     # ttft_span = kwargs.get('ttft_span', None)
-    # Start the streaming request
-    stream = await self.llm_client.stream_async(request_data, self.llm_config)
+    # Start the streaming request (map provider errors to common LLMError types)
+    try:
+        stream = await self.llm_client.stream_async(request_data, self.llm_config)
+    except Exception as e:
+        raise self.llm_client.handle_llm_error(e)
     # Process the stream and yield chunks immediately for TTFT
     async for chunk in self.interface.process(stream):  # TODO: add ttft span
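The pattern in the hunk above can be sketched in isolation: catch any provider exception and re-raise it as a member of a shared error hierarchy, so downstream stream consumers only need to handle one exception family. The class names and mapping below are assumptions for illustration; the actual `handle_llm_error` in the codebase may map many more provider-specific cases.

```python
# Hypothetical sketch of the error-remapping pattern used in the diff:
# provider exceptions are converted into a common hierarchy before re-raising.
class LLMError(Exception):
    """Common base error surfaced to callers (illustrative)."""

class LLMTimeoutError(LLMError):
    """Raised when the provider request times out (illustrative)."""

def handle_llm_error(e: Exception) -> LLMError:
    # Map known provider exceptions onto the shared hierarchy;
    # anything unrecognized falls back to the base LLMError.
    if isinstance(e, TimeoutError):
        return LLMTimeoutError(str(e))
    return LLMError(str(e))
```

A caller then wraps the streaming request exactly as the hunk does: `try: stream = ... except Exception as e: raise handle_llm_error(e)`, which preserves the original message while normalizing the type.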