Commit Graph

41 Commits

Author SHA1 Message Date
Charles Packer
619e81ed1e fix(core): add OpenAI prompt cache key and model-gated 24h retention (#9492)
* fix(core): apply OpenAI prompt cache settings to request payloads

Set prompt_cache_key using agent and conversation context on both Responses and Chat Completions request builders, and enable 24h retention only for supported OpenAI models while excluding OpenRouter paths.
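
A minimal sketch of what that gating might look like, assuming illustrative helper and model names (the real builders live in Letta's request-construction layer); `prompt_cache_key` and `prompt_cache_retention` follow OpenAI's prompt-caching parameters:

```python
# Hedged sketch: derive a prompt_cache_key from agent/conversation context
# and gate 24h retention by model, skipping OpenRouter-routed requests.
# SUPPORTED_24H_MODELS and the payload-builder shape are assumptions.
SUPPORTED_24H_MODELS = {"gpt-4o", "gpt-4o-mini"}  # illustrative set

def apply_prompt_cache(payload: dict, model: str, agent_id: str,
                       conversation_id: str, is_openrouter: bool) -> dict:
    if is_openrouter:
        return payload  # OpenRouter paths are excluded
    payload["prompt_cache_key"] = f"{agent_id}:{conversation_id}"
    if model in SUPPORTED_24H_MODELS:
        payload["prompt_cache_retention"] = "24h"  # model-gated retention
    return payload
```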

👾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix(core): prefix prompt cache key with letta tag

Add a `letta:` prefix to generated OpenAI prompt_cache_key values so cache-related entries are easier to identify in provider-side logs and diagnostics.
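
A sketch of the prefixing, with an assumed key derivation:

```python
# Hedged sketch: generated cache keys carry a "letta:" prefix so they
# stand out in provider-side logs; the id-based derivation is illustrative.
def make_prompt_cache_key(agent_id: str, conversation_id: str) -> str:
    return f"letta:{agent_id}:{conversation_id}"
```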

👾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* add integration test

* skip test

---------

Co-authored-by: Letta <noreply@letta.com>
Co-authored-by: Ari Webb <ari@letta.com>
2026-02-24 10:52:07 -08:00
Ari Webb
d0e25ae471 feat: add glm 5 to core (#9436)
* feat: add glm 5 to core

* test glm 5
2026-02-24 10:52:07 -08:00
Kian Jones
7cc1cd3dc0 feat(ci): self-hosted provider test for lmstudio (#9404)
* add gpu runners and prod memory_repos

* add lmstudio and vllm in model_settings

* fix llm_configs, change variable name in reusable workflow, and change perms for memory_repos to admin in tf

* fix: update self-hosted provider tests to use SDK 1.0 and v2 tests

- Update letta-client from ==0.1.324 to >=1.0.0
- Switch ollama/vllm/lmstudio tests to integration_test_send_message_v2.py

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix: use openai provider_type for self-hosted model settings

ollama/vllm/lmstudio are not valid provider_type values in the SDK
model_settings schema - they use openai-compatible APIs so provider_type
should be openai. The provider routing is determined by the handle prefix.
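
An illustrative model_settings entry under that convention (field names are assumptions based on this message, and LM Studio's default local endpoint is used):

```python
# Hedged sketch: a self-hosted entry keeps provider_type "openai" because
# the server speaks the OpenAI-compatible API; the "lmstudio/" handle
# prefix is what the router keys on.
lmstudio_model = {
    "handle": "lmstudio/qwen3-4b",           # prefix determines routing
    "provider_type": "openai",               # not "lmstudio"
    "base_url": "http://localhost:1234/v1",  # LM Studio default endpoint
}
```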

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix: enable redis for ollama/vllm/lmstudio tests

Background streaming tests require Redis. Add use-redis: true to
self-hosted provider test workflows.

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* prep for lmstudio and vllm

* used lmstudio_openai client

* change tool call parser from hermes to qwen3_xml

* qwen3_xml -> qwen3_coder

* revert to hermes (incompatible with parallel tool calls?) and skip vllm parallel tool call tests

* install uv redis extra

* remove lmstudio

* create lmstudio test

* qwen3-14b on lmstudio

* try with qwen3-4b

* actually update the model config json to use qwen3-4b

* add test_providers::test_lmstudio

* bump timeout from 60 to 120 seconds for slow lmstudio running the model on CPU

* misc vllm changes

---------

Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:52:07 -08:00
Kian Jones
7eb85707b1 feat(tf): gpu runners and prod memory_repos (#9283)
* add gpu runners and prod memory_repos

* add lmstudio and vllm in model_settings

* fix llm_configs, change variable name in reusable workflow, and change perms for memory_repos to admin in tf

* fix: update self-hosted provider tests to use SDK 1.0 and v2 tests

- Update letta-client from ==0.1.324 to >=1.0.0
- Switch ollama/vllm/lmstudio tests to integration_test_send_message_v2.py

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix: use openai provider_type for self-hosted model settings

ollama/vllm/lmstudio are not valid provider_type values in the SDK
model_settings schema - they use openai-compatible APIs so provider_type
should be openai. The provider routing is determined by the handle prefix.

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix: use openai_compat_base_url for ollama/vllm/lmstudio providers

When reconstructing LLMConfig from a model handle lookup, use the
provider's openai_compat_base_url (which includes /v1) instead of
raw base_url. This fixes 404 errors when calling ollama/vllm/lmstudio
since OpenAI client expects /v1/chat/completions endpoint.
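
A short sketch of why the suffix matters, using ollama's default port as an assumed example:

```python
# The OpenAI client appends "chat/completions" to its base_url, so a raw
# host URL 404s; the /v1-suffixed compat URL resolves correctly.
from openai import OpenAI

raw_base_url = "http://localhost:11434"        # assumed ollama root URL
openai_compat_base_url = f"{raw_base_url}/v1"  # what the client expects

client = OpenAI(base_url=openai_compat_base_url, api_key="unused")
# requests now go to http://localhost:11434/v1/chat/completions
```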

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix: enable redis for ollama/vllm/lmstudio tests

Background streaming tests require Redis. Add use-redis: true to
self-hosted provider test workflows.

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* add memfs-py in prod bucket access

* change ollama

* change packer model defaults

* self-hosted provider support

* disable reasoner to match the number of messages in the test case, enable parallel tool calls, and pass embedding configs

* remove reasoning setting not supported for ollama

* add qwen3 to extra assistant message case

* lower temp

* prep for lmstudio and vllm

* used lmstudio_openai client

* skip parallel tool calls on the CPU-run lmstudio provider

* revert downgrade since it's so slow already

* add required flags for tool call parsing etc.

* change tool call parser from hermes to qwen3_xml

* qwen3_xml -> qwen3_coder

* upgrade vllm to latest container

* revert to hermes (incompatible with parallel tool calls?) and skip vllm parallel tool call tests

* install uv redis extra

* remove lmstudio

---------

Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:52:07 -08:00
cthomas
59ffaec8f4 fix: revert test comments (#9161) 2026-01-29 12:44:04 -08:00
cthomas
d992aa0df4 fix: non-streaming conversation messages endpoint (#9159)
* fix: non-streaming conversation messages endpoint

**Problems:**
1. `AssertionError: run_id is required when enforce_run_id_set is True`
   - Non-streaming path didn't create a run before calling `step()`

2. `ResponseValidationError: Unable to extract tag using discriminator 'message_type'`
   - `response_model=LettaStreamingResponse` but non-streaming returns `LettaResponse`

**Fixes:**
1. Add run creation before calling `step()` (mirrors agents endpoint)
2. Set run_id in Redis for cancellation support
3. Pass `run_id` to `step()`
4. Change `response_model` from `LettaStreamingResponse` to `LettaResponse`
   (streaming returns `StreamingResponse` which bypasses response_model validation)
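
A minimal FastAPI sketch of the corrected endpoint shape; `create_run`, `step`, and the response stub stand in for Letta's internals and are not its actual API:

```python
from fastapi import APIRouter
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

router = APIRouter()

class LettaResponse(BaseModel):  # stub for the real schema
    messages: list = []

async def create_run(conversation_id: str) -> str:  # assumed helper
    return f"run-{conversation_id}"  # real code also records this in Redis

async def step(run_id: str) -> LettaResponse:  # stand-in agent loop
    return LettaResponse()

# response_model is LettaResponse, not LettaStreamingResponse: the
# streaming branch returns a StreamingResponse, which FastAPI never
# validates against response_model anyway.
@router.post("/conversations/{cid}/messages", response_model=LettaResponse)
async def send_messages(cid: str, stream: bool = False):
    run_id = await create_run(cid)  # create the run *before* step()
    if stream:
        async def gen():
            yield b"data: ...\n\n"
        return StreamingResponse(gen())
    return await step(run_id=run_id)  # run_id passed through to step()
```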

**Test:**
Added `test_conversation_non_streaming_raw_http` to verify the fix.

👾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* api sync

---------

Co-authored-by: Letta <noreply@letta.com>
2026-01-29 12:44:04 -08:00
Ari Webb
9dbf428c1f feat: enable bedrock for anthropic models (#8847)
* feat: enable bedrock for anthropic models

* parallel tool calls in ade

* attempt add to ci

* update tests

* add env vars

* hardcode region

* get it working

* debugging

* add bedrock extra

* default env var [skip ci]

* run ci

* reasoner model update

* secrets

* clean up log

* clean up
2026-01-19 15:54:44 -08:00
Sarah Wooders
0cbdf452fa fix: temporarily disable structured outputs for anthropic (#8491) 2026-01-12 10:57:49 -08:00
Kian Jones
e60e8ed670 chore: bump wait_for_server from 30 to 60 seconds (#8435)
bump from 30 to 60 seconds
2026-01-12 10:57:49 -08:00
Sarah Wooders
87d920782f feat: add conversation and conversation_messages tables for concurrent messaging (#8182) 2026-01-12 10:57:48 -08:00
Ari Webb
cd45212acb feat: add zai provider support (#7626)
* feat: add zai provider support

* add zai_api_key secret to deploy-core

* add to justfile

* add testing, provider integration skill

* enable zai key

* fix zai test

* clean up skill a little

* small changes
2026-01-12 10:57:19 -08:00
Ari Webb
d4e7428c98 feat: structured outputs for anthropic [LET-6232] (#6410)
feat: structured outputs for anthropic

Co-authored-by: Ari Webb <ari@letta.com>
2025-11-26 14:39:40 -08:00
Ari Webb
89c7ab5f14 feat: structured outputs for openai [LET-6233] (#6363)
* first hack with test

* remove changes integration test

* Delete apps/core/tests/sdk_v1/integration/integration_test_send_message_v2.py

* add test

* remove comment

* stage and publish api

* deprecate base level response_schema

* add param to llm_config test

---------

Co-authored-by: Ari Webb <ari@letta.com>
2025-11-26 14:39:39 -08:00
cthomas
898c0ed83e test: cancellation edge case in test (#6379) 2025-11-26 14:39:39 -08:00
cthomas
7b0bd1cb13 feat: cutover repo to 1.0 sdk client LET-6256 (#6361)
feat: cutover repo to 1.0 sdk client
2025-11-24 19:11:18 -08:00
Ari Webb
c79859f0b0 fix: fix send_message_v2 ci tests (#6240)
* fix send_message_v2

* revert send_message

---------

Co-authored-by: Ari Webb <ari@letta.com>
2025-11-24 19:09:33 -08:00
Kian Jones
d360242307 fix: don't expect stop reason to have a run id (#6083)
don't expect stop reason to have a run id
2025-11-13 15:36:56 -08:00
jnjpng
a66a8187cd fix: v1 sdk tests directly subscripting from list endpoints (#6054)
* base

* fix

* fix

* runs

* skip

---------

Co-authored-by: Letta Bot <noreply@letta.com>
2025-11-13 15:36:56 -08:00
Matthew Zhou
7b3cb0224a feat: Add gemini parallel tool call streaming for gemini [LET-6027] (#5913)
* Make changes to gemini streaming interface to support parallel tool calling

* Finish send message integration test

* Add comments
2025-11-13 15:36:39 -08:00
Ari Webb
7427c0998e feat: gemini parallel tool calling non streaming [LET-5993] (#5889)
* first hack

* just test non streaming

* stream_steps should pass too

* clean up

---------

Co-authored-by: Ari Webb <ari@letta.com>
2025-11-13 15:36:39 -08:00
Matthew Zhou
ff81f4153b feat: Support parallel tool calling streaming for OpenAI chat completions [LET-4594] (#5865)
* Finish chat completions parallel tool calling

* Undo comments

* Add comments

* Remove test file
2025-11-13 15:36:14 -08:00
Ari Webb
48cc73175b feat: parallel tool calling for openai non streaming [LET-4593] (#5773)
* first hack

* clean up

* first implementation working

* revert package-lock

* remove openai test

* error throw

* typo

* Update integration_test_send_message_v2.py

* Update integration_test_send_message_v2.py

* refine test

* Only make changes for openai non streaming

* Add tests

---------

Co-authored-by: Ari Webb <ari@letta.com>
Co-authored-by: Matt Zhou <mattzh1314@gmail.com>
2025-11-13 15:36:14 -08:00
Matthew Zhou
396959da2f feat: Add toggle on llm config for parallel tool calling [LET-5610] (#5542)
* Add parallel tool calling field

* Thread through parallel tool use

* Fern autogen

* Fix send message v2
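
A hedged sketch of what such a toggle and its threading might look like (field and helper names are assumptions, not Letta's exact schema; `parallel_tool_calls` is OpenAI's chat parameter of the same name):

```python
from pydantic import BaseModel

class LLMConfig(BaseModel):
    model: str
    parallel_tool_calls: bool = False  # the new toggle

def to_openai_kwargs(cfg: LLMConfig) -> dict:
    # threads the toggle through to the provider request
    return {"model": cfg.model,
            "parallel_tool_calls": cfg.parallel_tool_calls}
```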
2025-10-24 15:12:11 -07:00
Matthew Zhou
a36bd1118d fix: Fix send message v2 tests [LET-5505] (#5435)
* wip

* Restore comments

* Remove extra prints
2025-10-24 15:12:11 -07:00
Ari Webb
9e94c344b8 using uuid and datetime [LET-5508] (#5430)
* using uuid and datetime

* add run_id

---------

Co-authored-by: Ari Webb <ari@letta.com>
2025-10-24 15:12:11 -07:00
cthomas
5c35be42fb fix: increase delay for responses api to fix flake (#5391) 2025-10-24 15:11:31 -07:00
cthomas
15a4fe3228 test: revert comments (#5384) 2025-10-24 15:11:31 -07:00
Matthew Zhou
25f140bd13 fix: Fix anthropic step parallel tool calling and add tests [LET-5438] (#5379)
* Fix anthropic step parallel tool calling and add tests

* Remove print statements
2025-10-24 15:11:31 -07:00
Matthew Zhou
b466cfdb1f fix: Fix parallel tool calling test for streaming (#5376)
Fix parallel tool calling test
2025-10-24 15:11:31 -07:00
Matthew Zhou
b205acf1f1 fix: Fix send message tests v2 (#5374)
Fix send message tests
2025-10-24 15:11:31 -07:00
Matthew Zhou
10a3d86507 test: Add basic parallel tool calling test to send_message v2 for anthropic [LET-5362] (#5355)
Add basic parallel tool calling test to send_message v2 for anthropic
2025-10-24 15:11:31 -07:00
cthomas
128afeb587 feat: fix cancellation bugs and add testing (#5353) 2025-10-24 15:11:31 -07:00
cthomas
89321ff29a feat: handle flaky reasoning in v2 tests (#5133) 2025-10-07 17:50:49 -07:00
cthomas
93d9ff01c6 feat: add gemini native thinking (#5124)
* feat: add gemini native thinking

* update test

* revert comments
2025-10-07 17:50:49 -07:00
cthomas
a3545110cf feat: add full responses api support in new agent loop (#5051)
* feat: add full responses api support in new agent loop

* update matrix in workflow

* relax check for reasoning messages for high effort gpt 5

* fix indent

* one more relax
2025-10-07 17:50:48 -07:00
cthomas
ad42c886b7 feat: add new agent loop tests to ci (#5049) 2025-10-07 17:50:48 -07:00
cthomas
f235dfb356 feat: add tool call test for new agent loop (#5034) 2025-10-07 17:50:47 -07:00
cthomas
cd900a6f4d feat: check run completion in send message tests (#5030) 2025-10-07 17:50:47 -07:00
cthomas
2d36002fc5 feat: add background mode test for new agent loop (#5025) 2025-10-07 17:50:47 -07:00
cthomas
e248ac27e2 feat: add messages.create_async test for new agent loop (#5024)
feat: add async test for new agent loop
2025-10-07 17:50:47 -07:00
cthomas
2af3130be1 feat: add integration test for new agent loop (#5020) 2025-10-07 17:50:47 -07:00