Commit Graph

115 Commits

Author SHA1 Message Date
cthomas
77d1c3365e fix: granular cancellation check (#6540) 2025-12-15 12:02:34 -08:00
cthomas
b67347dff2 fix: remove redundant letta message conversion (#6538) 2025-12-15 12:02:33 -08:00
cthomas
4916d281ce fix: dont let message ids diverge in memory vs db (#6537) 2025-12-15 12:02:33 -08:00
Sarah Wooders
3569721fd4 fix: avoid infinite summarization loops (#6506) 2025-12-15 12:02:33 -08:00
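The summarizer fixes above (#6506, #6465) hinge on bounding the summarization loop so it cannot spin forever. A minimal sketch of such a guard, with all names invented and no claim to match the actual Letta implementation:

```python
def summarize_until_fits(messages, context_limit, count_tokens, summarize, max_passes=3):
    """Bounded summarization loop (illustrative sketch, not Letta's code).

    Two guards prevent an infinite loop: a hard cap on passes, and a
    progress check that bails out if a pass failed to shrink the context.
    """
    for _ in range(max_passes):
        before = count_tokens(messages)
        if before <= context_limit:
            return messages  # already fits; nothing to do
        messages = summarize(messages)
        if count_tokens(messages) >= before:
            break  # summarizer made no progress; stop instead of spinning
    return messages

# Toy harness: each "message" costs 1 token, summarize() drops the oldest two.
msgs = list(range(10))
out = summarize_until_fits(
    msgs,
    context_limit=5,
    count_tokens=len,
    summarize=lambda m: m[2:],
)
print(len(out))  # 4
```

With `max_passes=3` the toy run shrinks 10 → 8 → 6 → 4 and stops, even though a buggy summarizer returning its input unchanged would also terminate via the progress check.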
Sarah Wooders
bd97b23025 feat: fallback to all mode for summarizer if error (#6465) 2025-12-15 12:02:19 -08:00
Sarah Wooders
7fa141273d fix: dont run summarizer if pending approval (#6464) 2025-12-15 12:02:19 -08:00
Sarah Wooders
91e3dd8b3e feat: fix new summarizer code and add more tests (#6461) 2025-12-15 12:02:19 -08:00
Sarah Wooders
f417e53638 fix: fix cancellation issues without making too many changes to message_ids persistence (#6442) 2025-12-15 12:02:19 -08:00
Charles Packer
1f7165afc4 fix: patch counting of tokens for anthropic (#6458)
* fix: patch counting of tokens for anthropic

* fix: patch ui to be simpler

* fix: patch undercounting bug in anthropic when caching is on
2025-12-15 12:02:19 -08:00
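The undercounting bug mentioned in #6458 is characteristic of counting only `input_tokens` when prompt caching is enabled: Anthropic's usage payload reports cached tokens in separate fields. A sketch of the corrected accounting (the field names follow Anthropic's usage schema; the helper itself is hypothetical):

```python
def total_input_tokens(usage: dict) -> int:
    """Sum all input-side token counts from an Anthropic-style usage payload.

    When caching is on, `input_tokens` excludes tokens read from or written
    to the prompt cache, so counting it alone undercounts real usage.
    """
    return (
        usage.get("input_tokens", 0)
        + usage.get("cache_read_input_tokens", 0)
        + usage.get("cache_creation_input_tokens", 0)
    )

usage = {
    "input_tokens": 12,
    "cache_read_input_tokens": 2048,
    "cache_creation_input_tokens": 0,
}
print(total_input_tokens(usage))  # 2060
```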
Sarah Wooders
1939a9d185 feat: patch summarizer without changes to AgentState (#6450) 2025-12-15 12:02:19 -08:00
cthomas
db534836e4 feat: allow follow up user message for approvals LET-6272 (#6392)
* feat: allow follow up user message for approvals

* add tests
2025-11-26 14:39:40 -08:00
jnjpng
32e4caf0d2 fix: stream return sending full message after yielding chunks (#6295)
base

Co-authored-by: Letta Bot <noreply@letta.com>
2025-11-24 19:10:26 -08:00
cthomas
1be2f61f05 feat: add new letta error message stream response type (#6192) 2025-11-24 19:10:11 -08:00
cthomas
1d71468ab2 feat: don't yield tool return message back in hitl [LET-6012] (#6219)
feat: don't yield tool return message back in hitl
2025-11-24 19:10:11 -08:00
jnjpng
52c9abf39b fix: v1 agent message content for anthropic and usage stats tracking [LET-6199] (#6249)
base

Co-authored-by: Letta Bot <noreply@letta.com>
2025-11-24 19:09:33 -08:00
jnjpng
9ffbfa6d67 feat: base letta v1 agent on temporal (#6208)
* base

* another

* parallel

* update

* rename

* naming

---------

Co-authored-by: Letta Bot <noreply@letta.com>
2025-11-24 19:09:33 -08:00
cthomas
41392cdb8a test: make hitl testing pass (#6188) 2025-11-24 19:09:32 -08:00
Charles Packer
2e721ddc62 fix: various hardening to prevent stale state on background mode runs (#6072)
fix: various hardening to prevent stale state on background mode runs
2025-11-13 15:36:56 -08:00
Charles Packer
363a5c1f92 fix: fix poison state from bad approval response (#5979)
* fix: detect and fail on malformed approval responses

* fix: guard against None approvals in utils.py

* fix: add extra warning

* fix: stop silent drops in deserialize_approvals

* fix: patch v3 stream error handling to prevent sending end_turn after an error occurs, and ensures stop_reason is always set when an error occurs

* fix: Prevents infinite client hangs by ensuring a terminal event is ALWAYS sent

* fix: Ensures terminal events are sent even if inner stream generator fails to send them
2025-11-13 15:36:55 -08:00
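The stream-hardening bullets in #5979 describe one pattern: wrap the inner chunk generator so a terminal event is always delivered, and never emit `end_turn` after an error. A minimal sketch of that wrapper, with invented event shapes and no claim to match the real v3 stream code:

```python
import asyncio

async def stream_with_terminal_event(inner):
    """Wrap a chunk stream so a terminal event is ALWAYS delivered.

    On error, the terminal event is an error event (never `end_turn`
    after an error); on a clean run that forgot its terminal event,
    append `end_turn` so the client does not hang forever.
    """
    saw_end_turn = False
    errored = False
    try:
        async for chunk in inner:
            if chunk.get("type") == "end_turn":
                saw_end_turn = True
            yield chunk
    except Exception as exc:  # inner generator died mid-stream
        errored = True
        yield {"type": "error", "stop_reason": "error", "detail": str(exc)}
    finally:
        if not errored and not saw_end_turn:
            yield {"type": "end_turn", "stop_reason": "end_turn"}

async def _demo():
    async def broken():
        yield {"type": "message", "text": "partial"}
        raise RuntimeError("provider dropped connection")
    return [c async for c in stream_with_terminal_event(broken())]

events = asyncio.run(_demo())
print([e["type"] for e in events])  # ['message', 'error']
```

The client always sees exactly one terminal event (`error` or `end_turn`), which is the invariant the commit's last two bullets describe.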
Sarah Wooders
5b9cac08b6 fix: populate stop_reason [LET-6040] (#5955)
fix: populate stop_reason
2025-11-13 15:36:55 -08:00
Charles Packer
52ff51755c fix: move persistence on message_ids to prevent desync [LET-6011] (#5908)
fix: move persistence on message_ids to prevent desync
2025-11-13 15:36:39 -08:00
Charles Packer
468b47bef5 fix(core): patch sse streaming errors (#5906)
* fix: patch sse streaming errors

* fix: don't re-raise, but log explicitly with sentry

* chore: cleanup comments

* fix: revert change from #5907, also make sure to write out a [DONE] to close the stream
2025-11-13 15:36:39 -08:00
Sarah Wooders
ac599145bb fix: various fixes for runs (#5907)
* Fix agent loop continuing after cancellation in letta_agent_v3

Bug: When a run is cancelled, _check_run_cancellation() sets
self.should_continue=False and returns early from _step(), but the outer
for loop (line 245) continues to the next iteration, executing subsequent
steps even though cancellation was requested.

Symptom: User hits cancel during step 1, backend marks run as cancelled,
but agent continues executing steps 2, 3, etc.

Root cause: After the 'async for chunk in response' loop completes (line 255),
there was no check of self.should_continue before continuing to the next
iteration of the outer step loop.

Fix: Added 'if not self.should_continue: break' check after the inner loop
to exit the outer step loop when cancellation is detected. This makes v3
consistent with v2 which already had this check (line 306-307).

🐾 Generated with [Letta Code](https://letta.com)

Co-authored-by: Letta <noreply@letta.com>

* add integration tests

* passing tests

* fix: minor patches

* undo

---------

Co-authored-by: cpacker <packercharles@gmail.com>
Co-authored-by: Letta <noreply@letta.com>
2025-11-13 15:36:39 -08:00
Charles Packer
a6077f3927 fix(core): Fix agent loop continuing after cancellation in letta_agent_v3 [LET-6006] (#5905)
* Fix agent loop continuing after cancellation in letta_agent_v3

Bug: When a run is cancelled, _check_run_cancellation() sets
self.should_continue=False and returns early from _step(), but the outer
for loop (line 245) continues to the next iteration, executing subsequent
steps even though cancellation was requested.

Symptom: User hits cancel during step 1, backend marks run as cancelled,
but agent continues executing steps 2, 3, etc.

Root cause: After the 'async for chunk in response' loop completes (line 255),
there was no check of self.should_continue before continuing to the next
iteration of the outer step loop.

Fix: Added 'if not self.should_continue: break' check after the inner loop
to exit the outer step loop when cancellation is detected. This makes v3
consistent with v2 which already had this check (line 306-307).

🐾 Generated with [Letta Code](https://letta.com)

Co-authored-by: Letta <noreply@letta.com>

* add integration tests

* fix: misc fixes required to get cancellations to work on letta code localhost

---------

Co-authored-by: Letta <noreply@letta.com>
Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
2025-11-13 15:36:39 -08:00
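The cancellation bug described in #5907/#5905 and its one-line fix can be reduced to a small sketch. All names below are illustrative stand-ins, not the real `letta_agent_v3` code; the point is the `if not self.should_continue: break` check after the inner chunk loop:

```python
import asyncio

class AgentLoopSketch:
    """Minimal sketch of the v3 loop shape described in #5905."""

    def __init__(self, cancel_after_step: int):
        self.should_continue = True
        self.cancel_after_step = cancel_after_step
        self.executed_steps = []

    def _check_run_cancellation(self, step: int) -> None:
        # Stand-in for polling the run's status in the database.
        if step >= self.cancel_after_step:
            self.should_continue = False

    async def _step(self, step: int):
        self._check_run_cancellation(step)
        if not self.should_continue:
            return  # early return; without the fix, the OUTER loop kept going
        self.executed_steps.append(step)
        yield {"step": step}

    async def run(self, max_steps: int):
        for step in range(1, max_steps + 1):
            async for chunk in self._step(step):
                pass  # stream chunks to the client
            # The fix: exit the outer step loop once cancellation is seen,
            # instead of executing steps N+1, N+2, ... after cancel.
            if not self.should_continue:
                break

agent = AgentLoopSketch(cancel_after_step=2)
asyncio.run(agent.run(max_steps=5))
print(agent.executed_steps)  # [1]
```

Cancellation is detected at step 2, the inner generator returns without yielding, and the post-loop check breaks out; without it, the sketch would execute all five steps' bodies except the cancelled ones.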
Charles Packer
a44c05040a fix(core): big context overflow handling patch (#5901) 2025-11-13 15:36:39 -08:00
Ari Webb
7427c0998e feat: gemini parallel tool calling non streaming [LET-5993] (#5889)
* first hack

* just test non streaming

* stream_steps should pass too

* clean up

---------

Co-authored-by: Ari Webb <ari@letta.com>
2025-11-13 15:36:39 -08:00
Charles Packer
60ed435727 fix(core): patch summarizer on letta_agent_v3.py (letta_agent_v1 loop) (#5863)
* fix(core): patch summarizer

* fix: misc fixes

* refactor: remove fallbacks, instead throw a warning

* refactor: pull out magic number to constant
2025-11-13 15:36:20 -08:00
Ari Webb
48cc73175b feat: parallel tool calling for openai non streaming [LET-4593] (#5773)
* first hack

* clean up

* first implementation working

* revert package-lock

* remove openai test

* error throw

* typo

* Update integration_test_send_message_v2.py

* Update integration_test_send_message_v2.py

* refine test

* Only make changes for openai non streaming

* Add tests

---------

Co-authored-by: Ari Webb <ari@letta.com>
Co-authored-by: Matt Zhou <mattzh1314@gmail.com>
2025-11-13 15:36:14 -08:00
Sarah Wooders
0c454d6eaf fix: patch unknown error type handling for agent (#5848) 2025-11-13 15:36:14 -08:00
Sarah Wooders
a566900533 chore: add back test_server.py (#5783) 2025-11-13 15:36:00 -08:00
cthomas
e418d7c5bd fix: error handling in agent stream (#5703) 2025-10-24 15:14:20 -07:00
cthomas
01f9194711 feat: downgrade agent loop log level (#5701) 2025-10-24 15:14:20 -07:00
cthomas
06d2cde43d fix: llm error interrupting stream for agent loop (#5696) 2025-10-24 15:14:20 -07:00
cthomas
73dcc0d4b7 feat: latest hitl + parallel tool call changes (#5565) 2025-10-24 15:12:49 -07:00
Matthew Zhou
fc950ecddf feat: Change execution pattern depending on enable_parallel_execution (#5550)
* Change execution pattern depending on

* Increase efficiency
2025-10-24 15:12:11 -07:00
cthomas
f8437d47e2 feat: add support for hitl parallel tool calling (#5549)
* feat: add support for hitl parallel tool calling

* rename to requested_tool_calls
2025-10-24 15:12:11 -07:00
Matthew Zhou
396959da2f feat: Add toggle on llm config for parallel tool calling [LET-5610] (#5542)
* Add parallel tool calling field

* Thread through parallel tool use

* Fern autogen

* Fix send message v2
2025-10-24 15:12:11 -07:00
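The toggle added in #5542 amounts to threading one boolean from the llm config into the provider request. A sketch of that threading for an OpenAI-style payload (`parallel_tool_calls` is a real OpenAI chat-completions parameter; the config and builder names here are invented):

```python
from dataclasses import dataclass

@dataclass
class LLMConfigSketch:
    """Illustrative stand-in for the llm config toggle added in #5542."""
    model: str
    parallel_tool_calls: bool = False

def build_chat_request(config: LLMConfigSketch, messages, tools):
    """Thread the toggle into an OpenAI-style chat completions payload."""
    payload = {"model": config.model, "messages": messages, "tools": tools}
    if tools:
        # The flag is only meaningful when tools are present in the request.
        payload["parallel_tool_calls"] = config.parallel_tool_calls
    return payload

req = build_chat_request(
    LLMConfigSketch(model="gpt-4o", parallel_tool_calls=True),
    messages=[{"role": "user", "content": "weather and time, please"}],
    tools=[{"type": "function", "function": {"name": "get_weather"}}],
)
print(req["parallel_tool_calls"])  # True
```

Keeping the flag on the config (rather than per-request) is what lets later commits like #5550 switch execution patterns based on a single `enable_parallel_execution`-style setting.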
cthomas
5a475fd1a5 feat: add comprehensive testing for client side tool calling (#5539) 2025-10-24 15:12:11 -07:00
cthomas
69b15d606c feat: support approval requests for parallel tool calls (#5538) 2025-10-24 15:12:11 -07:00
cthomas
181a73c333 feat: handle noop early return case first in agent loop (#5536) 2025-10-24 15:12:11 -07:00
cthomas
0efcf6ed95 feat: support passing around multiple tool returns (#5535) 2025-10-24 15:12:11 -07:00
cthomas
507a83a81b feat: rename is_approval var to is_approval_response for clarity (#5534) 2025-10-24 15:12:11 -07:00
cthomas
51426c9c51 feat: integrate new tool call denials object (#5533) 2025-10-24 15:12:11 -07:00
cthomas
62d5ae1828 feat: separate out hitl cases (#5531) 2025-10-24 15:12:11 -07:00
cthomas
a03263aca2 feat: remove single tool call case in new agent loop (#5504)
* feat: remove single tool call case in new agent loop

* fix hitl test
2025-10-24 15:12:11 -07:00
Ari Webb
dfba037226 enforce_run_id in v3 agent loop [LET-5542] (#5455)
enforce_run_id in v3 agent loop

Co-authored-by: Ari Webb <ari@letta.com>
2025-10-24 15:12:11 -07:00
Ari Webb
e7ef73c0b6 Ari/let 5503 error during step processing error code 429 type error error [LET-5503] (#5421)
* letta agent v2 throw exception not error

* warning instead of error or exception

---------

Co-authored-by: Ari Webb <ari@letta.com>
2025-10-24 15:12:11 -07:00
cthomas
128afeb587 feat: fix cancellation bugs and add testing (#5353) 2025-10-24 15:11:31 -07:00
Matthew Zhou
bb8a7889e0 feat: Add parallel tool call streaming for anthropic [LET-4601] (#5225)
* wip

* Fix parallel tool calling interface

* wip

* wip adapt using id field

* Integrate new multi tool return schemas into parallel tool calling

* Remove example script

* Reset changes to llm stream adapter since old agent loop should not enable parallel tool calling

* Clean up fallback logic for extracting tool calls

* Remove redundant check

* Simplify logic

* Clean up logic in handle ai response

* Fix tests

* Write anthropic dict conversion to be back compatible

* wip

* Double write tool call id for legacy reasons

* Fix override args failures

* Patch for approvals

* Revert comments

* Remove extraneous prints
2025-10-24 15:11:31 -07:00
Kian Jones
c2e474e03a feat: refactor logs to parse as a single log line each and filter out 404s from sentry (#5242)
* add multiline log auto detect

* implement logger.exception()

* filter out 404

* remove potentially problematic changes
2025-10-24 15:11:31 -07:00