cthomas
4823416af9
feat: default unpack assistant message content [LET-5404] (#5707)
...
feat: default unpack assistant message content
2025-10-24 15:14:20 -07:00
cthomas
73dcc0d4b7
feat: latest hitl + parallel tool call changes (#5565)
2025-10-24 15:12:49 -07:00
Matthew Zhou
643ec8fe2f
fix: Double write tool call deltas [LET-5545] (#5461)
...
* Double write tool call deltas
* Fix
2025-10-24 15:12:11 -07:00
Kevin Lin
08da1a64bb
feat: parse reasoning_content from OAI proxies (e.g. vLLM / OpenRouter) (#5372)
...
* reasoning_content support
* fix
* comment
* fix
* rm comment
---------
Co-authored-by: Charles Packer <packercharles@gmail.com>
2025-10-24 15:11:31 -07:00
Kian Jones
c2e474e03a
feat: refactor logs to parse as a single log line each and filter out 404s from sentry (#5242)
...
* add multiline log auto detect
* implement logger.exception()
* filter out 404
* remove potentially problematic changes
2025-10-24 15:11:31 -07:00
Matthew Zhou
7511b0f4fe
feat: Write anthropic streaming interface that supports parallel tool calling [LET-5355] (#5295)
...
Write anthropic streaming interface that supports parallel tool calling
2025-10-09 15:25:21 -07:00
Matthew Zhou
5593f1450b
feat: Double write to ToolCallMessage's new list tool_calls field (#5268)
...
* Add new tool_calls field to ToolCallMessage
* fern autogen
* Double write to new tool_calls field
* Update straggling instances
2025-10-09 13:20:52 -07:00
cthomas
cc913df27c
feat: add signature to content parts (#5134)
...
* feat: add signature to content parts
* always base64 encode thought signature
* propagate thought signature back to request
2025-10-07 17:50:49 -07:00
cthomas
93d9ff01c6
feat: add gemini native thinking (#5124)
...
* feat: add gemini native thinking
* update test
* revert comments
2025-10-07 17:50:49 -07:00
cthomas
3e17b4289a
feat: gracefully handle gemini empty content parts (#5116)
2025-10-07 17:50:48 -07:00
cthomas
f7755d837a
feat: add gemini streaming to new agent loop (#5109)
...
* feat: add gemini streaming to new agent loop
* add google as required dependency
* support storing all content parts
* remove extra google references
2025-10-07 17:50:48 -07:00
Sarah Wooders
ef07e03ee3
feat: add run_id to input messages and step_id to messages (#5099)
2025-10-07 17:50:48 -07:00
cthomas
a3545110cf
feat: add full responses api support in new agent loop (#5051)
...
* feat: add full responses api support in new agent loop
* update matrix in workflow
* relax check for reasoning messages for high effort gpt 5
* fix indent
* one more relax
2025-10-07 17:50:48 -07:00
cthomas
67f8e46619
feat: add run id to streamed messages (#5037)
2025-10-07 17:50:47 -07:00
cthomas
f235dfb356
feat: add tool call test for new agent loop (#5034)
2025-10-07 17:50:47 -07:00
Charles Packer
a4041879a4
feat: add new agent loop (squash rebase of OSS PR) (#4815)
...
* feat: squash rebase of OSS PR
* fix: revert changes that weren't on manual rebase
* fix: caught another one
* fix: disable force
* chore: drop print
* fix: just stage-api && just publish-api
* fix: make agent_type consistently an arg in the client
* fix: patch multi-modal support
* chore: put in todo stub
* fix: disable hardcoding for tests
* fix: patch validate agent sync (#4882)
patch validate agent sync
* fix: strip bad merge diff
* fix: revert unrelated diff
* fix: react_v2 naming -> letta_v1 naming
* fix: strip bad merge
---------
Co-authored-by: Kevin Lin <klin5061@gmail.com>
2025-10-07 17:50:45 -07:00
Kian Jones
b8e9a80d93
merge this (#4759)
...
* wait I forgot to commit locally
* cp the entire core directory and then rm the .git subdir
2025-09-17 15:47:40 -07:00
Kian Jones
22f70ca07c
chore: officially migrate to submodule (#4502)
...
* remove apps/core and apps/fern
* fix precommit
* add submodule updates in workflows
* submodule
* remove core tests
* update core revision
* Add submodules: true to all GitHub workflows
- Ensure all workflows can access git submodules
- Add submodules support to deployment, test, and CI workflows
- Fix YAML syntax issues in workflow files
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* remove core-lint
* upgrade core with latest main of oss
---------
Co-authored-by: Claude <noreply@anthropic.com>
2025-09-09 12:45:53 -07:00
cthomas
cb7296c81d
fix: approval request for streaming (#4445)
...
* fix: approval request for streaming
* fix: claude code attempt, unit test passing (add-on to #4445) (#4448)
* fix: claude code attempt, unit test passing
* chore: update locks to 0.1.314 from 0.1.312
* chore: just stage-api && just publish-api
* chore: drop dead poetry lock
---------
Co-authored-by: Charles Packer <packercharles@gmail.com>
2025-09-05 17:43:21 -07:00
Charles Packer
264171f327
fix: patch streaming hidden reasoning event [LET-4167] (#4367)
...
* fix: patch streaming hidden reasoning event
* fix: patch reasoning_effort not getting passed to openai
2025-09-02 16:21:18 -07:00
Charles Packer
9d49eff204
fix: patch the streaming issue in the openai client for when inner_thoughts_in_kwargs is off [LET-4146] (#4350)
...
fix: patch the streaming issue in the openai client for when inner_thoughts_in_kwargs is off
2025-09-02 12:44:25 -07:00
Charles Packer
e741f84add
fix: patch bug w/ extended thinking mode involving text leaking into reasoning (#4341)
...
* fix: patch for bad native reasoning behavior w/ sonnet
* fix: cleanup
* fix: cleanup
* fix: another prompt tune for less flaking
2025-09-01 20:26:24 -07:00
cthomas
1edcc13778
feat: support filtering out messages when converting to openai dict (#4337)
...
* feat: support filtering out messages when converting to openai dict
* fix imports
2025-09-01 12:48:45 -07:00
Kian Jones
fecf6decfb
chore: migrate to ruff (#4305)
...
* base requirements
* autofix
* Configure ruff for Python linting and formatting
- Set up minimal ruff configuration with basic checks (E, W, F, I)
- Add temporary ignores for common issues during migration
- Configure pre-commit hooks to use ruff with pass_filenames
- This enables gradual migration from black to ruff
* Delete sdj
* autofixed only
* migrate lint action
* more autofixed
* more fixes
* change precommit
* try changing the hook
* try this stuff
2025-08-29 11:11:19 -07:00
cthomas
c8b370466e
fix: duplicate message stream error (#3834)
2025-08-11 14:27:35 -07:00
cthomas
db41f01ac2
feat: continue stream processing on client cancel (#3796)
2025-08-07 13:17:36 -07:00
Andy Li
ca6f474c4e
feat: track metrics for runs in db
2025-08-06 15:46:50 -07:00
cthomas
7d33254f5f
feat: log stream cancellation to sentry (#3759)
2025-08-05 16:07:30 -07:00
jnjpng
6b082f0447
fix: manually count tokens for streaming lmstudio models
...
Co-authored-by: Jin Peng <jinjpeng@Jins-MacBook-Pro.local>
Co-authored-by: Charles Packer <packercharles@gmail.com>
2025-07-29 18:12:42 -07:00
Andy Li
33c1f26ab6
feat: support for agent loop job cancellation (#2837)
2025-07-02 14:31:16 -07:00
Kevin Lin
868294533c
feat: add omitted reasoning to streaming openai reasoning (#2846)
...
Co-authored-by: Charles Packer <packercharles@gmail.com>
2025-06-24 18:47:38 -07:00
Matthew Zhou
0c2a80d69b
feat: Integrate tool executor into VoiceAgent (#2872)
2025-06-17 14:55:58 -07:00
Sarah Wooders
5fa52a2c38
fix: avoid calling model_dump on stop reason messages twice (#2811)
2025-06-13 18:25:35 -07:00
cthomas
1405464a1c
feat: send stop reason in letta APIs (#2789)
2025-06-13 16:04:48 -07:00
Andy Li
33bfd14017
fix: metric tracking (#2785)
2025-06-13 13:53:10 -07:00
cthomas
605a1f410c
feat: consolidate logic for finish tokens (#2779)
2025-06-12 15:24:06 -07:00
Kevin Lin
58c4448235
fix: patch reasoning models (#2703)
...
Co-authored-by: Charles Packer <packercharles@gmail.com>
2025-06-11 17:20:04 -07:00
Matthew Zhou
881506d574
fix: Turn off parallel tool calling for Claude (#2736)
2025-06-10 13:04:20 -07:00
Andy Li
d2252f2953
feat: otel metrics and expanded collecting (#2647)
...
(passed tests in last run)
2025-06-05 17:20:14 -07:00
cthomas
22c66da7bc
fix: add temp hack to gracefully handle parallel tool calling (#2654)
2025-06-05 14:43:46 -07:00
cthomas
6d094fd196
fix: send message tests (#2656)
2025-06-05 13:57:43 -07:00
cthomas
da49024a5a
fix: inner thoughts kwarg should never be streamed (#2644)
2025-06-05 12:30:55 -07:00
Kevin Lin
0d6907c8cf
fix: set openai streaming interface letta_message_id (#2648)
...
Co-authored-by: Caren Thomas <carenthomas@gmail.com>
2025-06-05 12:26:01 -07:00
cthomas
904ccd65b6
fix: remove separate tool call id in streaming path (#2641)
2025-06-04 17:35:55 -07:00
cthomas
a8f394d675
feat: populate tool call name and id when token streaming (#2639)
2025-06-04 17:06:44 -07:00
Matthew Zhou
7debadb3b9
fix: Change enum to fix composio tests (#2488)
2025-05-28 10:24:22 -07:00
Matthew Zhou
dba4cc9ea0
feat: Add TTFT latency from provider in traces (#2481)
2025-05-28 10:06:16 -07:00
cthomas
871e171b44
feat: add tracing to streaming interface (#2477)
2025-05-27 16:20:05 -07:00
Matthew Zhou
ad6e446849
feat: Asyncify insert archival memories (#2430)
...
Co-authored-by: Caren Thomas <carenthomas@gmail.com>
2025-05-25 22:28:35 -07:00
Shangyin Tan
2199d8fdda
fix: do not pass temperature to request if model is oai reasoning model (#2189)
...
Co-authored-by: Charles Packer <packercharles@gmail.com>
2025-05-24 21:34:18 -07:00