fix(core): sanitize messages to anthropic in the main path the same way (or similar) to how we do it in the token counter (#6044)
* fix(core): sanitize messages to Anthropic in the main path the same way (or similar) to how we do it in the token counter
* fix: also patch poison error in backend by filtering lazily
* fix: remap streaming errors (what the fuck)
* fix: dedupe tool calls
* fix: cleanup, removed try/catch
Committed by Caren Thomas
Parent: 363a5c1f92
Commit: 18029250d0
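The "dedupe tool calls" fix is not part of the diff hunk excerpted below, so the sketch here is only a generic illustration of the idea, not the actual patch: drop repeated tool calls while keeping the first occurrence, keyed on the tool-call id. The `dedupe_tool_calls` helper name is hypothetical.

```python
def dedupe_tool_calls(tool_calls):
    # Hypothetical helper: keep the first occurrence of each tool-call id,
    # drop later duplicates (a generic sketch, not Letta's actual code).
    seen = set()
    deduped = []
    for call in tool_calls:
        if call["id"] in seen:
            continue
        seen.add(call["id"])
        deduped.append(call)
    return deduped


calls = [
    {"id": "a", "name": "search"},
    {"id": "a", "name": "search"},  # duplicate, dropped
    {"id": "b", "name": "fetch"},
]
print(dedupe_tool_calls(calls))  # prints the "a" and "b" calls once each
```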
@@ -110,8 +110,11 @@ class SimpleLLMStreamAdapter(LettaLLMStreamAdapter):
     # Extract optional parameters
     # ttft_span = kwargs.get('ttft_span', None)

-    # Start the streaming request
-    stream = await self.llm_client.stream_async(request_data, self.llm_config)
+    # Start the streaming request (map provider errors to common LLMError types)
+    try:
+        stream = await self.llm_client.stream_async(request_data, self.llm_config)
+    except Exception as e:
+        raise self.llm_client.handle_llm_error(e)

     # Process the stream and yield chunks immediately for TTFT
     async for chunk in self.interface.process(stream):  # TODO: add ttft span
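The pattern in the hunk above (wrap the stream call, re-raise provider exceptions through `handle_llm_error`) can be sketched in isolation. `FakeLLMClient`, `ProviderTimeout`, and the `LLMError` shown here are simplified stand-ins for the Letta internals, not the real classes:

```python
import asyncio


class LLMError(Exception):
    """Common error type that provider-specific failures are mapped to."""


class ProviderTimeout(Exception):
    """Stand-in for a raw provider-SDK exception."""


class FakeLLMClient:
    def handle_llm_error(self, e: Exception) -> Exception:
        # Normalize any provider-specific exception into the common type.
        if isinstance(e, LLMError):
            return e
        return LLMError(f"provider error: {e}")

    async def stream_async(self, request_data, llm_config):
        # Simulate a provider failure while opening the stream.
        raise ProviderTimeout("connection timed out")


async def open_stream(client, request_data, llm_config):
    # Same shape as the patched adapter: wrap the stream call and re-raise
    # a normalized error instead of leaking the raw provider exception.
    try:
        return await client.stream_async(request_data, llm_config)
    except Exception as e:
        raise client.handle_llm_error(e)


try:
    asyncio.run(open_stream(FakeLLMClient(), {}, None))
except LLMError as err:
    print(type(err).__name__)  # prints "LLMError", not "ProviderTimeout"
```

Callers then only need to catch `LLMError` regardless of which provider SDK raised underneath.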