243a2b65e0  2025-08-08 16:34:32 -07:00  jnjpng
  fix: gemini 2.5 thinking models fail to call functions if thinking is fully disabled
  Co-authored-by: Jin Peng <jinjpeng@Jins-MacBook-Pro.local>

bde4714294  2025-08-07 22:20:26 -07:00  Sarah Wooders
  chore: merge oss (#3712)

b85f8aa43c  2025-08-07 15:11:57 -07:00  Sarah Wooders
  feat: support opus 4.1 and gpt-5 (#3806)

76c9a58d6f  2025-08-06 15:33:35 -07:00  Matthew Zhou
  feat: Support streaming and move endpoint for letta-free (#3780)

c2b2d976b6  2025-07-31 13:40:26 -07:00  Sarah Wooders
  feat: move ollama to new agent loop (#3615)

c22b9c1af5  2025-07-30 14:10:41 -07:00  Andy Li
  chore: remove excessive warning logging

9d8a122da0  2025-07-29 15:57:20 -07:00  jnjpng
  fix: lmstudio support for qwen and llama
  Co-authored-by: Jin Peng <jinjpeng@Jins-MacBook-Pro.local>
  Co-authored-by: Charles Packer <packercharles@gmail.com>

04511d1ffc  2025-07-28 18:20:58 -07:00  jnjpng
  feat: allow mcp authentication overrides per agent (#3318)
  Co-authored-by: Jin Peng <jinjpeng@Jins-MacBook-Pro.local>

d77eb1230f  2025-07-28 15:30:10 -07:00  Matthew Zhou
  feat: Add ability to disable reasoning (#3594)

58081e3cea  2025-07-22 16:09:50 -07:00  Andy Li
  feat: support for providers

04e9f43220  2025-07-18 09:20:45 -07:00  Andy Li
  chore: strings lint cleanup (#3374)

396f37156c  2025-07-17 11:39:46 -07:00  Eric Ly
  feat: create 'test connection' bedrock api + fix endpoints for test connection (ant, openai, gemini) (#3227)
  Co-authored-by: Eric Ly <lyyeric@letta.com>

12c2b49461  2025-07-06 11:05:31 -07:00  Charles Packer
  fix: add frequency penalty for gpt-4o-mini (#3166)

efca9d8ea0  2025-07-01 13:48:38 -07:00  Matthew Zhou
  feat: Only add suffix on duplication (#3120)

5dccccec21  2025-06-30 14:27:57 -07:00  Matthew Zhou
  fix: Fix constraints and also implement bulk attach (#3107)

aa02da3bb3  2025-06-24 19:07:25 -07:00  Charles Packer
  fix: patch annoying user warning caused by not having sonnet/opus 4 listed (#3017)

734680db81  2025-06-23 16:55:23 -07:00  Andy Li
  feat: timeout configuration for LLM clients + vertex (#2972)

630fe0b067  2025-06-21 21:32:18 -07:00  Sarah Wooders
  fix: remove from mcp so that it works with gemini (#2961)

56493de971  2025-06-19 12:07:00 -07:00  cthomas
  feat: add bedrock client (#2913)

eab5a60311  2025-06-19 10:36:47 -07:00  cthomas
  feat: rename aws env vars for bedrock (#2907)
  Co-authored-by: Andy Li <55300002+cliandy@users.noreply.github.com>

e89164f71b  2025-06-18 16:03:28 -07:00  cthomas
  feat: add bedrock to byok (#2891)

65530e8380  2025-06-16 15:14:40 -07:00  Kevin Lin
  fix: add exceptions to accept_developer_role (#2848)

4df0268674  2025-06-16 14:34:41 -07:00  Matthew Zhou
  fix: Harden string matching for context window exceeded error (#2847)

93c15244ab  2025-06-13 14:54:37 -07:00  Kevin Lin
  feat: add reasoning models to integration_test_send_message (#2710)

c1255dc9d1  2025-06-11 11:36:45 -07:00  cthomas
  feat: make tool calls required for model proxy (#2756)

0399fc8b11  2025-06-10 16:21:27 -07:00  Matthew Zhou
  feat: Add prompting to guide tool rule usage (#2742)

5ecd8a706c  2025-06-10 14:27:01 -07:00  cthomas
  fix: parallel tool calling OpenAI (#2738)

b332ebfa85  2025-06-10 13:36:17 -07:00  cthomas
  feat: support multi content part input (#2717)

b53be62e7a  2025-06-10 13:27:00 -07:00  Matthew Zhou
  fix: Turn parallel tool calling off for OpenAI (#2737)

881506d574  2025-06-10 13:04:20 -07:00  Matthew Zhou
  fix: Turn off parallel tool calling for Claude (#2736)

039f5f70d9  2025-06-10 12:26:45 -07:00  Matthew Zhou
  feat: Remove debug artifacts (#2734)

20e6732f36  2025-06-08 18:28:01 -07:00  cthomas
  feat: add multi-modal input support (#2590)

4554f6168b  2025-06-06 15:48:18 -07:00  cthomas
  fix: incorrect anthropic tool format hack (#2685)

e4da78fce7  2025-06-06 13:13:32 -07:00  cthomas
  fix: gracefully handle too long responses from llm provider (#2677)

d2252f2953  2025-06-05 17:20:14 -07:00  Andy Li
  feat: otel metrics and expanded collecting (#2647)
  (passed tests in last run)

6d094fd196  2025-06-05 13:57:43 -07:00  cthomas
  fix: send message tests (#2656)

470b13f4b9  2025-06-04 17:33:18 -07:00  Matthew Zhou
  feat: Add tools for opening and closing files (#2638)

82b3222a52  2025-06-04 12:57:51 -07:00  Matthew Zhou
  fix: Make OpenAI context window exceeded error more specific (#2624)

ebccd8176a  2025-06-03 20:56:39 -07:00  Matthew Zhou
  fix: Add additional testing for anthropic token counting (#2619)

87f4bcad9a  2025-05-29 11:10:13 -07:00  Matthew Zhou
  feat: Add summarization for more scenarios (#2499)

3354f5fe50  2025-05-28 11:35:22 -07:00  Sarah Wooders
  feat: concurrently make embedding request and use async client for OpenAI (#2482)
  Co-authored-by: Matthew Zhou <mattzh1314@gmail.com>

05e376d521  2025-05-28 09:27:26 -07:00  cthomas
  feat: add property ordering for vertex structured outputs (#2487)

4cc075f1fc  2025-05-27 16:56:30 -07:00  Sarah Wooders
  feat: add more trace methods (#2471)

e813a65351  2025-05-25 19:47:20 -07:00  cthomas
  feat(asyncify): byok in async loop (#2421)

2199d8fdda  2025-05-24 21:34:18 -07:00  Shangyin Tan
  fix: do not pass temperature to request if model is oai reasoning model (#2189)
  Co-authored-by: Charles Packer <packercharles@gmail.com>

eaeac54798  2025-05-24 09:42:34 -07:00  cthomas
  fix: google clients thinking config (#2414)
  Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>

f9d2793caf  2025-05-23 09:07:32 -07:00  cthomas
  fix: set thinking budget for vertex tokens (#2367)

b554171d41  2025-05-22 13:55:32 -07:00  cthomas
  feat: add tracing to llm clients (#2340)

c9aa69d30e  2025-05-21 12:03:50 -07:00  cthomas
  fix: google vertex client errors (#2307)

095a14cd1d  2025-05-20 18:39:27 -07:00  cthomas
  ci: use experimental for send message tests (#2290)
  Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>