Sarah Wooders
0b1fe096ec
feat: split up handle and model_settings ( #6022 )
2025-11-13 15:36:56 -08:00
jnjpng
5e35887295
fix: tools page not subscriptable ( #6057 )
...
fix
Co-authored-by: Letta Bot <noreply@letta.com >
2025-11-13 15:36:56 -08:00
jnjpng
a66a8187cd
fix: v1 sdk tests directly subscripting from list endpoints ( #6054 )
...
* base
* fix
* fix
* runs
* skip
---------
Co-authored-by: Letta Bot <noreply@letta.com >
2025-11-13 15:36:56 -08:00
Ari Webb
526a678f8c
fix: fix agent test, returns new data format ( #6039 )
...
fix conftest
Co-authored-by: Ari Webb <ari@letta.com >
2025-11-13 15:36:55 -08:00
Kian Jones
4acda9c80f
feat: global exception middleware ( #6017 )
...
* global exception middleware
* redo both logging middlewares as one
* remove extra middleware files
2025-11-13 15:36:55 -08:00
jnjpng
8ff8ef2102
chore: clean up count agent test ( #6037 )
...
base
Co-authored-by: Letta Bot <noreply@letta.com >
2025-11-13 15:36:55 -08:00
Ari Webb
d2fe64bab4
fix: fix parallel tool calling tests in ci [LET-6043] ( #5950 )
...
* first hack
* test
* fix test for v1, comment out for legacy
* test shows parallel tool calling now happening
* fix test to detect parallel tool calling
* update to use oai too
* uncomment v2 test
---------
Co-authored-by: Ari Webb <ari@letta.com >
2025-11-13 15:36:55 -08:00
Ari Webb
13a77289b9
fix: send message test for gpt-4o ( #6012 )
...
fix
Co-authored-by: Ari Webb <ari@letta.com >
2025-11-13 15:36:55 -08:00
Christina Tong
c76bc9e216
feat: add filters to count_agents endpoint [LET-5380] [LET-5497] ( #6008 )
...
* feat: add filters to count_agents endpoint [LET-5380]
* comment
* update
2025-11-13 15:36:55 -08:00
jnjpng
849d0dc64a
feat: provider-specific model configuration ( #5873 ) ( #5874 )
2025-11-13 15:36:55 -08:00
Sarah Wooders
fd7c8193fe
feat: remove chunking for archival memory [LET-6080] ( #5997 )
...
* feat: remove chunking for archival memory
* add error and tests
2025-11-13 15:36:55 -08:00
jnjpng
46457d3f93
fix: send message integration tests agent loop errors ( #5995 )
...
base
Co-authored-by: Letta Bot <noreply@letta.com >
2025-11-13 15:36:55 -08:00
Kian Jones
361c9a14d8
feat(metrics): Surface custom metrics for temporal workflows and workers ( #5951 )
...
* temporal custom metrics
* Delete apps/core/letta/agents/temporal/PRODUCTION_SETUP.md
* Delete apps/core/letta/agents/temporal/DATADOG_METRICS.md
* add unit testing
2025-11-13 15:36:55 -08:00
Kian Jones
ea3248593c
feat(logs): Enrich logs with context-aware primitive types ( #5949 )
...
* enrich logs with context-aware primitive types
* Delete apps/core/docs/LOG_CONTEXT.md
2025-11-13 15:36:55 -08:00
jnjpng
e2774c07c6
feat: generate otid when using input field on message send ( #5990 )
...
* base
* try this out
* plz
* fix
---------
Co-authored-by: Letta Bot <noreply@letta.com >
2025-11-13 15:36:55 -08:00
Kian Jones
6943b68288
tests: adding unit testing and fix edge case ( #5992 )
...
cursor bugbot suggestion number 2 and adding unit testing
2025-11-13 15:36:55 -08:00
jnjpng
05b359b7f5
chore: add local base 64 url image for send message integration ( #5969 )
...
* base
* update
* clean up
* update
---------
Co-authored-by: Letta Bot <noreply@letta.com >
2025-11-13 15:36:55 -08:00
Christina Tong
881831501a
feat: filter list agents by stop reason [LET-5928] ( #5779 )
...
* feat: add last_stop_reason to AgentState [LET-5911]
* feat: filter list agents by stop reason [LET-5928]
* undo agent loop changes, use update_run_by_id_async
* add run manager test
* add integration tests
* remove comment
* fix duplicate
* fix docs
2025-11-13 15:36:55 -08:00
Ari Webb
395c04c52e
fix: stainless pagination ( #5943 )
...
---------
Co-authored-by: Ari Webb <ari@letta.com >
2025-11-13 15:36:55 -08:00
Christina Tong
ef3df907c5
feat: add last_stop_reason to AgentState [LET-5911] ( #5772 )
...
* feat: add last_stop_reason to AgentState [LET-5911]
* undo agent loop changes, use update_run_by_id_async
* add run manager test
* add integration tests
* remove comment
* remove duplicate test
2025-11-13 15:36:55 -08:00
Christina Tong
7c731feab3
fix: integration tests send message v1 sdk ( #5920 )
...
* fix: integration tests send message v1 sdk
* early cancellation
* fix image
* remove
* update
* update comment
* specific prompt
2025-11-13 15:36:50 -08:00
Ari Webb
ed99d7eb2b
feat: add input option to send message route [LET-4540] ( #5938 )
...
---------
Co-authored-by: Ari Webb <ari@letta.com >
2025-11-13 15:36:50 -08:00
Matthew Zhou
7b3cb0224a
feat: Add parallel tool call streaming for gemini [LET-6027] ( #5913 )
...
* Make changes to gemini streaming interface to support parallel tool calling
* Finish send message integration test
* Add comments
2025-11-13 15:36:39 -08:00
Christina Tong
8468ef3cd7
chore: migrate test sdk client to v1 [LET-5981] ( #5887 )
...
* chore: migrate test sdk client to v1 [LET-5981]
* simplify
* simplify
2025-11-13 15:36:39 -08:00
Charles Packer
a6077f3927
fix(core): Fix agent loop continuing after cancellation in letta_agent_v3 [LET-6006] ( #5905 )
...
* Fix agent loop continuing after cancellation in letta_agent_v3
Bug: When a run is cancelled, _check_run_cancellation() sets
self.should_continue=False and returns early from _step(), but the outer
for loop (line 245) continues to the next iteration, executing subsequent
steps even though cancellation was requested.
Symptom: User hits cancel during step 1, backend marks run as cancelled,
but agent continues executing steps 2, 3, etc.
Root cause: After the 'async for chunk in response' loop completes (line 255),
there was no check of self.should_continue before continuing to the next
iteration of the outer step loop.
Fix: Added 'if not self.should_continue: break' check after the inner loop
to exit the outer step loop when cancellation is detected. This makes v3
consistent with v2 which already had this check (line 306-307).
🐾 Generated with [Letta Code](https://letta.com )
Co-authored-by: Letta <noreply@letta.com >
* add integration tests
* fix: misc fixes required to get cancellations to work on letta code localhost
---------
Co-authored-by: Letta <noreply@letta.com >
Co-authored-by: Sarah Wooders <sarahwooders@gmail.com >
2025-11-13 15:36:39 -08:00
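The commit body above describes the fix precisely: after the inner streaming loop completes, the outer step loop must re-check `self.should_continue` before starting the next step. A minimal toy sketch of that control flow (illustrative names only, not the actual letta_agent_v3 code):

```python
import asyncio


class AgentLoop:
    """Toy agent loop, illustrative of the fix (not the real Letta code)."""

    def __init__(self, cancel_during_step=1, max_steps=5):
        self.cancel_during_step = cancel_during_step
        self.max_steps = max_steps
        self.should_continue = True
        self.steps_executed = 0

    async def _stream(self):
        # Stand-in for the inner 'async for chunk in response' loop.
        yield "chunk"

    async def _step(self, step):
        self.steps_executed += 1
        async for _chunk in self._stream():
            pass
        # Cancellation request arrives mid-step: the flag is set and
        # _step returns early, as _check_run_cancellation() does.
        if step == self.cancel_during_step:
            self.should_continue = False

    async def run(self):
        for step in range(1, self.max_steps + 1):
            await self._step(step)
            # The fix: re-check the flag after the inner loop, so the
            # outer step loop exits instead of executing steps 2, 3, ...
            if not self.should_continue:
                break


agent = AgentLoop()
asyncio.run(agent.run())
```

With the `break` in place the run stops after step 1; without it, the loop would execute all five steps despite the cancellation, which is the symptom the commit describes.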
Ari Webb
7427c0998e
feat: gemini parallel tool calling non streaming [LET-5993] ( #5889 )
...
* first hack
* just test non streaming
* stream_steps should pass too
* clean up
---------
Co-authored-by: Ari Webb <ari@letta.com >
2025-11-13 15:36:39 -08:00
Kian Jones
60115d4931
fix: lettuce import and add unit tests for new run manager function ( #5893 )
...
* fix lettuce import and add unit tests for new run manager function
* fix unit tests
* bump version (unrelated)
2025-11-13 15:36:38 -08:00
Sarah Wooders
57bb051ea4
feat: add tool return truncation to summarization as a fallback [LET-5970] ( #5859 )
2025-11-13 15:36:30 -08:00
Christina Tong
381ca5bde8
chore: migrate built in tools integration test to sdk v1 [LET-5980] ( #5883 )
...
* chore: migrate built in tools integration test to sdk v1
* fix
* remove trailing commas
2025-11-13 15:36:20 -08:00
Christina Tong
255fdfecf2
feat: migrate integration_test_human_in_the_loop to sdk v1 [LET-5979] ( #5878 )
...
* feat: migrate integration_test_human_in_the_loop to sdk v1
* update modify
* parallel tool calling fixes
* fix
* update parallel tool calling
* remove regex matching
2025-11-13 15:36:20 -08:00
Kian Jones
185031882a
fix: prevent huge otel spans causing pods to be OOMKilled ( #5871 )
...
* otel fix
* add unit test
* log the resource id
* iterables support and fix unittest
* fix some edge cases
2025-11-13 15:36:20 -08:00
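The commit above guards against oversized otel spans; one plausible mechanism (an assumption, not confirmed by the commit body) is truncating large attribute values, with recursion for iterables per the "iterables support" bullet. A hedged sketch with illustrative names and limits:

```python
# Hypothetical sketch of a span-attribute guard: very large values are
# truncated before being attached to a span, so one huge payload cannot
# balloon span memory. MAX_ATTR_LEN and safe_attr are illustrative names,
# not the actual Letta helpers.
MAX_ATTR_LEN = 1024


def safe_attr(value, limit=MAX_ATTR_LEN):
    if isinstance(value, (list, tuple, set)):
        # Iterables support: truncate each element recursively.
        return [safe_attr(v, limit) for v in value]
    text = value if isinstance(value, str) else repr(value)
    if len(text) > limit:
        return text[:limit] + f"...[truncated {len(text) - limit} chars]"
    return text
```

Small values pass through unchanged; anything over the limit is clipped with a marker noting how much was dropped.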
Sarah Wooders
cfeed463a9
Revert "feat: provider-specific model configuration" ( #5873 )
...
Revert "feat: provider-specific model configuration (#5774 )"
This reverts commit 34a334949a3ef72cd49ff0ca3da9e85d16daa57c.
2025-11-13 15:36:20 -08:00
Matthew Zhou
ff81f4153b
feat: Support parallel tool calling streaming for OpenAI chat completions [LET-4594] ( #5865 )
...
* Finish chat completions parallel tool calling
* Undo comments
* Add comments
* Remove test file
2025-11-13 15:36:14 -08:00
Christina Tong
599adb4c26
chore: migrate integration test send message to v1 sdk [LET-5940] ( #5794 )
...
* chore: migrate integration test send message to v1 sdk
* new folder
* set up new workflows for integration test
* remove file
* update stainless workflow
* fix import err
* add letta-client version logging
* fix: SDK cache miss should fall back to PyPI instead of failing
When the Stainless SDK cache is not available, the workflow should
fall back to installing the published SDK from PyPI rather than
failing the CI build. The workflow already has this fallback logic
in the "Install Stainless SDK" step, but the "Check SDK cache"
step was failing before it could reach that point.
This change converts the hard failure (exit 1) to a warning message,
allowing the workflow to continue and use the PyPI fallback.
Co-Authored-By: Claude <noreply@anthropic.com >
* force upgrade
* remove frozen
* install before running
* add no sync
* use upgrade instead of upgrade package
* update
* fix llm config
* fix
* update
* update path
* update workflow
* see installed version
* add fallback
* update
* fix mini
* LettaPing
* fix: handle o1 token streaming and LettaPing step_id validation
- Skip LettaPing messages in step_id validation (they don't have step_id)
- Move o1/o3/o4 token streaming check before general assertion in assert_tool_call_response
- o1 reasoning models omit AssistantMessage in token streaming mode (6 messages instead of 7)
---------
Co-authored-by: Kian Jones <kian@letta.com >
Co-authored-by: Claude <noreply@anthropic.com >
2025-11-13 15:36:14 -08:00
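The final fix in the entry above skips LettaPing messages in step_id validation because keepalive pings carry no step_id. A minimal sketch of that check (message shapes and the function name are illustrative, not the actual test helpers):

```python
# Hypothetical sketch: collect step_ids from a streamed message list,
# skipping keepalive pings, which legitimately lack a step_id.
def validate_step_ids(messages):
    step_ids = []
    for msg in messages:
        if msg.get("message_type") == "ping":  # LettaPing-style keepalive
            continue  # pings have no step_id; don't fail validation on them
        assert msg.get("step_id"), f"missing step_id on {msg}"
        step_ids.append(msg["step_id"])
    return step_ids
```

A ping interleaved with real messages no longer trips the assertion, while a non-ping message missing its step_id still fails loudly.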
Sarah Wooders
aaa12a393c
feat: provider-specific model configuration ( #5774 )
...
* initial code updates
* add models
* cleanup
* support overriding
* add apis
* cleanup reasoning interfaces to match models
* update schemas
* update apis
* add new field
* remove parallel
* various fixes
* modify schemas
* fix
* fix
* make model optional
* undo model schema change
* update schemas
* update schemas
* format
* fix tests
* attempt to patch web
* fix docs
* change schemas
* update error
* fix tests
* delete tests
* clean up undefined matching conditional
---------
Co-authored-by: jnjpng <jin@letta.com >
Co-authored-by: Letta Bot <noreply@letta.com >
2025-11-13 15:36:14 -08:00
Ari Webb
48cc73175b
feat: parallel tool calling for openai non streaming [LET-4593] ( #5773 )
...
* first hack
* clean up
* first implementation working
* revert package-lock
* remove openai test
* error throw
* typo
* Update integration_test_send_message_v2.py
* Update integration_test_send_message_v2.py
* refine test
* Only make changes for openai non streaming
* Add tests
---------
Co-authored-by: Ari Webb <ari@letta.com >
Co-authored-by: Matt Zhou <mattzh1314@gmail.com >
2025-11-13 15:36:14 -08:00
Sarah Wooders
6654473514
fix: handle block race conditions ( #5819 )
2025-11-13 15:36:14 -08:00
Shubham Naik
95816b9b28
Shub/let 5962 add performance/duration search to runs [LET-5962] ( #5850 )
...
* feat: add performance/search to list internal runs
* chore: add tests
* chore: fix ui
* feat: support UI for this
* chore: update tests
* chore: update types
---------
Co-authored-by: Shubham Naik <shub@memgpt.ai >
2025-11-13 15:36:14 -08:00
jnjpng
bd61ba85dd
chore: update stainless mcp config ( #5830 )
...
* base
* try
* client no token
* session
* try tests
* fix mcp_servers_test
* remove deprecated test
* remove reference to mcp_serverS
* use fastmcp for mocking
* uncomment
---------
Co-authored-by: Letta Bot <noreply@letta.com >
Co-authored-by: Ari Webb <ari@letta.com >
Co-authored-by: Ari Webb <arijwebb@gmail.com >
2025-11-13 15:36:14 -08:00
Sarah Wooders
24a14490d8
fix: add more error logging and tests for streaming LLM errors ( #5844 )
2025-11-13 15:36:14 -08:00
Ari Webb
9d5fdc6de7
feat: migrate integration test mcp servers.py to use 1.0 client [LET-5945] ( #5814 )
...
* new test first hack, should still break
---------
Co-authored-by: Ari Webb <ari@letta.com >
2025-11-13 15:36:14 -08:00
Ari Webb
0596a66c04
feat: new stainless sdk tests working locally [LET-5939] ( #5793 )
...
* new stainless sdk tests working locally
* Update conftest.py
* Update conftest.py
---------
Co-authored-by: Ari Webb <ari@letta.com >
2025-11-13 15:36:02 -08:00
Sarah Wooders
a566900533
chore: add back test_server.py ( #5783 )
2025-11-13 15:36:00 -08:00
Ari Webb
9cab61fe3f
feat: create sdk_v1 test folder [LET-5937] ( #5790 )
...
create sdk_v1 test folder
Co-authored-by: Ari Webb <ari@letta.com >
2025-11-13 15:35:41 -08:00
Kian Jones
60226ad203
fix: missing cursor kwargs ( #5763 )
...
* accept cursor logic
* add test
2025-11-13 15:35:34 -08:00
Kian Jones
1059452c11
fix: expect raise on detach to deleted agent ( #5770 )
...
just expect raise
2025-11-13 15:35:34 -08:00
Ari Webb
f3a40a41f5
feat: updated backend to not allow minimal for codex [LET-5883] ( #5760 )
...
* updated backend
* add function in openai_client
* remove values before error
* remove test
---------
Co-authored-by: Ari Webb <ari@letta.com >
2025-11-13 15:35:34 -08:00
Sarah Wooders
e7fff12da0
feat: patch model listing to actually match handle [LET-5888] ( #5754 )
2025-11-13 15:35:34 -08:00
Christina Tong
5fddf94ac3
fix: pagination broken in runs table [LET-5790] ( #5759 )
...
* fix: pagination broken in runs table [LET-5790]
* update pagination test
* fix test
2025-11-13 15:35:34 -08:00
jnjpng
6e2c002af3
feat: add stainless pagination for top level arrays with order by [LET-5800] ( #5687 )
...
* base
* revert openapi
* union
* simplify
* stainless
* stainless
* fix
* fix test
* generate
---------
Co-authored-by: Letta Bot <noreply@letta.com >
2025-10-24 15:14:31 -07:00