Kian Jones
fecf6decfb
chore: migrate to ruff (#4305)
...
* base requirements
* autofix
* Configure ruff for Python linting and formatting
- Set up minimal ruff configuration with basic checks (E, W, F, I)
- Add temporary ignores for common issues during migration
- Configure pre-commit hooks to use ruff with pass_filenames
- This enables gradual migration from black to ruff
* Delete sdj
* autofixed only
* migrate lint action
* more autofixed
* more fixes
* change precommit
* try changing the hook
* try this stuff
2025-08-29 11:11:19 -07:00
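The ruff migration commit above only outlines the setup; as a minimal sketch of what such a configuration typically looks like in pyproject.toml (the E/W/F/I rule selection follows the commit notes, while the line length and the specific temporary ignores shown here are assumptions, not the repository's actual values):

```toml
# pyproject.toml -- minimal ruff setup sketch; values marked as assumed are illustrative,
# not taken from the repository.
[tool.ruff]
line-length = 140                # assumed; would mirror the project's previous black setting

[tool.ruff.lint]
select = ["E", "W", "F", "I"]    # pycodestyle errors/warnings, pyflakes, isort (per the commit notes)
ignore = ["E501", "E722"]        # temporary ignores during migration (assumed examples)
```

The pre-commit side mentioned in the same commit would normally point at the astral-sh/ruff-pre-commit hooks; pass_filenames is a standard pre-commit hook option that controls whether staged file paths are passed to the hook command.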
cthomas
c8b370466e
fix: duplicate message stream error (#3834)
2025-08-11 14:27:35 -07:00
cthomas
db41f01ac2
feat: continue stream processing on client cancel (#3796)
2025-08-07 13:17:36 -07:00
Andy Li
ca6f474c4e
feat: track metrics for runs in db
2025-08-06 15:46:50 -07:00
cthomas
7d33254f5f
feat: log stream cancellation to sentry (#3759)
2025-08-05 16:07:30 -07:00
jnjpng
6b082f0447
fix: manually count tokens for streaming lmstudio models
...
Co-authored-by: Jin Peng <jinjpeng@Jins-MacBook-Pro.local>
Co-authored-by: Charles Packer <packercharles@gmail.com>
2025-07-29 18:12:42 -07:00
Andy Li
33c1f26ab6
feat: support for agent loop job cancellation (#2837)
2025-07-02 14:31:16 -07:00
Kevin Lin
868294533c
feat: add omitted reasoning to streaming openai reasoning (#2846)
...
Co-authored-by: Charles Packer <packercharles@gmail.com>
2025-06-24 18:47:38 -07:00
Sarah Wooders
5fa52a2c38
fix: avoid calling model_dump on stop reason messages twice (#2811)
2025-06-13 18:25:35 -07:00
cthomas
1405464a1c
feat: send stop reason in letta APIs (#2789)
2025-06-13 16:04:48 -07:00
Andy Li
33bfd14017
fix: metric tracking (#2785)
2025-06-13 13:53:10 -07:00
cthomas
605a1f410c
feat: consolidate logic for finish tokens (#2779)
2025-06-12 15:24:06 -07:00
Kevin Lin
58c4448235
fix: patch reasoning models (#2703)
...
Co-authored-by: Charles Packer <packercharles@gmail.com>
2025-06-11 17:20:04 -07:00
Andy Li
d2252f2953
feat: otel metrics and expanded collecting (#2647)
...
(passed tests in last run)
2025-06-05 17:20:14 -07:00
cthomas
22c66da7bc
fix: add temp hack to gracefully handle parallel tool calling (#2654)
2025-06-05 14:43:46 -07:00
Kevin Lin
0d6907c8cf
fix: set openai streaming interface letta_message_id (#2648)
...
Co-authored-by: Caren Thomas <carenthomas@gmail.com>
2025-06-05 12:26:01 -07:00
cthomas
a8f394d675
feat: populate tool call name and id when token streaming (#2639)
2025-06-04 17:06:44 -07:00
Matthew Zhou
7debadb3b9
fix: Change enum to fix composio tests (#2488)
2025-05-28 10:24:22 -07:00
Matthew Zhou
dba4cc9ea0
feat: Add TTFT latency from provider in traces (#2481)
2025-05-28 10:06:16 -07:00
cthomas
871e171b44
feat: add tracing to streaming interface (#2477)
2025-05-27 16:20:05 -07:00
Matthew Zhou
ad6e446849
feat: Asyncify insert archival memories (#2430)
...
Co-authored-by: Caren Thomas <carenthomas@gmail.com>
2025-05-25 22:28:35 -07:00
Shangyin Tan
2199d8fdda
fix: do not pass temperature to request if model is oai reasoning model (#2189)
...
Co-authored-by: Charles Packer <packercharles@gmail.com>
2025-05-24 21:34:18 -07:00
cthomas
095a14cd1d
ci: use experimental for send message tests (#2290)
...
Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
2025-05-20 18:39:27 -07:00
Andy Li
a78abc610e
feat: track llm provider traces and tracking steps in async agent loop (#2219)
2025-05-19 15:50:56 -07:00
cthomas
00914e5308
feat(asyncify): migrate actors(users) endpoints (#2211)
2025-05-16 00:37:08 -07:00
Sarah Wooders
5bd559651b
feat: add OpenAI streaming interface for new agent loop (#2191)
2025-05-15 22:20:08 -07:00