e4da78fce7 | cthomas | 2025-06-06 13:13:32 -07:00 | fix: gracefully handle too long responses from llm provider (#2677)
d2252f2953 | Andy Li | 2025-06-05 17:20:14 -07:00 | feat: otel metrics and expanded collecting (#2647)
3354f5fe50 | Sarah Wooders | 2025-05-28 11:35:22 -07:00 | feat: concurrently make embedding request and use async client for OpenAI (#2482) (Co-authored-by: Matthew Zhou <mattzh1314@gmail.com>)
a78abc610e | Andy Li | 2025-05-19 15:50:56 -07:00 | feat: track llm provider traces and tracking steps in async agent loop (#2219)
db6982a4bc | cthomas | 2025-05-06 17:31:36 -07:00 | feat: add provider_category field to distinguish byok (#2038)
c4f603d7b6 | cthomas | 2025-04-30 23:23:01 -07:00 | feat: always add user id to openai requests (#1969)
18db9b9509 | cthomas | 2025-04-30 21:26:50 -07:00 | feat: byok 2.0 (#1963)
ce2e8f5c4d | cthomas | 2025-04-23 16:37:05 -07:00 | feat: add llm config per request (#1866)
dec66f928e | Matthew Zhou | 2025-04-10 10:19:06 -07:00 | feat: Finish step_until_request in new batch agent loop (#1656)
f109259b0b | Matthew Zhou | 2025-04-09 15:56:54 -07:00 | chore: Inject LLM config directly to batch api request func (#1652)
4cb7f576d9 | Matthew Zhou | 2025-04-09 14:58:26 -07:00 | feat: Write batch request on base LLM client (#1646)
3797b0d536 | Matthew Zhou | 2025-04-02 14:26:27 -07:00 | feat: Simplify arguments for LLM clients (#1536)
432961e9c9 | cthomas | 2025-03-30 18:44:55 -07:00 | fix: anthropic system message parse (#1467)
54206ad643 | Matthew Zhou | 2025-03-28 15:13:33 -07:00 | fix: Fix message_id ordering in agent serialization (#1458)
c2f79ac61f | cthomas | 2025-03-27 08:47:54 -07:00 | feat: anthropic class improvements (#1425)
3715b08635 | cthomas | 2025-03-26 09:37:27 -07:00 | chore: migrate anthropic to llm client class (#1409)
2a36af8a5d | cthomas | 2025-03-07 16:34:06 -08:00 | feat: add new llm client framework and migrate google apis (#1209)