Commit Graph

9 Commits

Kian Jones
7cc1cd3dc0 feat(ci): self-hosted provider test for lmstudio (#9404)
* add gpu runners and prod memory_repos

* add lmstudio and vllm in model_settings

* fix llm_configs and change variable name in reusable workflow and change perms for memory_repos to admin in tf

* fix: update self-hosted provider tests to use SDK 1.0 and v2 tests

- Update letta-client from ==0.1.324 to >=1.0.0
- Switch ollama/vllm/lmstudio tests to integration_test_send_message_v2.py

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix: use openai provider_type for self-hosted model settings

ollama/vllm/lmstudio are not valid provider_type values in the SDK
model_settings schema - they use openai-compatible APIs so provider_type
should be openai. The provider routing is determined by the handle prefix.

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>
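As an illustration of the fix above, a minimal sketch of the idea (field names and handle format are assumptions drawn from the commit text, not the actual SDK schema):

```python
# Self-hosted OpenAI-compatible backends (ollama/vllm/lmstudio) all
# declare provider_type "openai"; the concrete backend is inferred from
# the handle prefix. Names below are illustrative, not the real schema.
model_settings = {
    "handle": "lmstudio/qwen3-4b",
    "provider_type": "openai",   # not "lmstudio" -- invalid in the schema
}

def backend_from_handle(handle: str) -> str:
    """Route by handle prefix, e.g. 'lmstudio/qwen3-4b' -> 'lmstudio'."""
    return handle.split("/", 1)[0]

print(backend_from_handle(model_settings["handle"]))  # -> lmstudio
```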

* fix: enable redis for ollama/vllm/lmstudio tests

Background streaming tests require Redis. Add use-redis: true to
self-hosted provider test workflows.

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>
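A hedged sketch of what that workflow change might look like; only `use-redis: true` comes from the commit message, and the job name, workflow path, and surrounding structure are assumptions:

```yaml
# Illustrative reusable-workflow call for the self-hosted provider tests.
jobs:
  lmstudio-tests:
    uses: ./.github/workflows/self-hosted-provider-tests.yml
    with:
      use-redis: true   # background streaming tests require Redis
```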

* prep for lmstudio and vllm

* used lmstudio_openai client

* change tool call parser from hermes to qwen3_xml

* qwen3_xml -> qwen3_coder

* revert to hermes (incompatible with parallel tool calls?) and skip vllm tests on parallel tool calls

* install uv redis extra

* remove lmstudio

* create lmstudio test

* qwen3-14b on lmstudio

* try with qwen3-4b

* actually update the model config json to use qwen3-4b

* add test_providers::test_lmstudio

* bump timeout from 60 to 120 for the slow lmstudio-on-CPU model

* misc vllm changes

---------

Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:52:07 -08:00
Kian Jones
7eb85707b1 feat(tf): gpu runners and prod memory_repos (#9283)
* add gpu runners and prod memory_repos

* add lmstudio and vllm in model_settings

* fix llm_configs and change variable name in reusable workflow and change perms for memory_repos to admin in tf

* fix: update self-hosted provider tests to use SDK 1.0 and v2 tests

- Update letta-client from ==0.1.324 to >=1.0.0
- Switch ollama/vllm/lmstudio tests to integration_test_send_message_v2.py

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix: use openai provider_type for self-hosted model settings

ollama/vllm/lmstudio are not valid provider_type values in the SDK
model_settings schema - they use openai-compatible APIs so provider_type
should be openai. The provider routing is determined by the handle prefix.

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* fix: use openai_compat_base_url for ollama/vllm/lmstudio providers

When reconstructing LLMConfig from a model handle lookup, use the
provider's openai_compat_base_url (which includes /v1) instead of
raw base_url. This fixes 404 errors when calling ollama/vllm/lmstudio
since OpenAI client expects /v1/chat/completions endpoint.

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>
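A minimal sketch of the 404 fix described above (function and variable names are illustrative, not the actual Letta internals): the OpenAI client appends `/chat/completions` to its base URL, so the configured base URL must already end in `/v1`.

```python
# Derive an OpenAI-compatible base URL from a raw provider base URL by
# ensuring it ends in /v1, so requests hit /v1/chat/completions.
def openai_compat_base_url(raw_base_url: str) -> str:
    base = raw_base_url.rstrip("/")
    return base if base.endswith("/v1") else base + "/v1"

url = openai_compat_base_url("http://localhost:11434")
print(url + "/chat/completions")
# -> http://localhost:11434/v1/chat/completions
```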

* fix: enable redis for ollama/vllm/lmstudio tests

Background streaming tests require Redis. Add use-redis: true to
self-hosted provider test workflows.

🤖 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* add memfs-py in prod bucket access

* change ollama

* change packer model defaults

* self-hosted provider support

* disable reasoner to match the number of messages in the test case, enable parallel tool calls, and pass embedding configs

* remove reasoning setting not supported for ollama

* add qwen3 to extra assistant message case

* lower temp

* prep for lmstudio and vllm

* used lmstudio_openai client

* skip parallel tool calls on CPU-run provider lmstudio

* revert downgrade since it's so slow already

* add required flags for tool call parsing etc.

* change tool call parser from hermes to qwen3_xml

* qwen3_xml -> qwen3_coder

* upgrade vllm to latest container

* revert to hermes (incompatible with parallel tool calls?) and skip vllm tests on parallel tool calls

* install uv redis extra

* remove lmstudio

---------

Co-authored-by: Letta <noreply@letta.com>
2026-02-24 10:52:07 -08:00
Ari Webb
0bbb9c9bc0 feat: add reasoning zai openrouter (#9189)
* feat: add reasoning zai openrouter

* add openrouter reasoning

* stage + publish api

* openrouter reasoning always on

* revert

* fix

* remove reference

* do
2026-02-24 10:52:06 -08:00
Sarah Wooders
adab8cd9b5 feat: add MiniMax provider support (#9095)
* feat: add MiniMax provider support

Add MiniMax as a new LLM provider using their Anthropic-compatible API.

Key implementation details:
- Uses standard messages API (not beta) - MiniMax supports thinking blocks natively
- Base URL: https://api.minimax.io/anthropic
- Models: MiniMax-M2.1, MiniMax-M2.1-lightning, MiniMax-M2 (all 200K context, 128K output)
- Temperature clamped to valid range (0.0, 1.0]
- All M2.x models treated as reasoning models (support interleaved thinking)

Files added:
- letta/schemas/providers/minimax.py - MiniMax provider schema
- letta/llm_api/minimax_client.py - Client extending AnthropicClient
- tests/test_minimax_client.py - Unit tests (13 tests)
- tests/model_settings/minimax-m2.1.json - Integration test config

🐾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>
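The temperature clamp described above can be sketched as follows; the exact lower bound is an assumption (the commit only states the valid range is (0.0, 1.0], exclusive of zero), and the names are illustrative, not the actual MiniMax client code:

```python
# Clamp a requested sampling temperature into MiniMax's valid range
# (0.0, 1.0]: values above 1.0 are capped, and 0.0 is nudged up to a
# small positive epsilon since the lower bound is exclusive.
_MIN_TEMPERATURE = 0.01  # assumed epsilon; any value > 0.0 would do

def clamp_temperature(temperature: float) -> float:
    return min(max(temperature, _MIN_TEMPERATURE), 1.0)
```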

* chore: regenerate API spec with MiniMax provider

🐾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* chore: use MiniMax-M2.1-lightning for CI tests

Switch to the faster/cheaper lightning model variant for integration tests.

🐾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta <noreply@letta.com>

* chore: add MINIMAX_API_KEY to deploy-core command

Co-authored-by: Sarah Wooders <sarahwooders@users.noreply.github.com>

* chore: regenerate web openapi spec with MiniMax provider

Co-authored-by: Sarah Wooders <sarahwooders@users.noreply.github.com>

🐾 Generated with [Letta Code](https://letta.com)

---------

Co-authored-by: Letta <noreply@letta.com>
Co-authored-by: letta-code <248085862+letta-code@users.noreply.github.com>
Co-authored-by: Sarah Wooders <sarahwooders@users.noreply.github.com>
2026-01-29 12:44:04 -08:00
Ari Webb
9dbf428c1f feat: enable bedrock for anthropic models (#8847)
* feat: enable bedrock for anthropic models

* parallel tool calls in ade

* attempt add to ci

* update tests

* add env vars

* hardcode region

* get it working

* debugging

* add bedrock extra

* default env var [skip ci]

* run ci

* reasoner model update

* secrets

* clean up log

* clean up
2026-01-19 15:54:44 -08:00
Kevin Lin
e5ed8ca0e8 feat: set default temperature to 1.0 [LET-6920] (#8618)
* temp 1

* stage

* update core tests
2026-01-19 15:54:43 -08:00
Ari Webb
cd45212acb feat: add zai provider support (#7626)
* feat: add zai provider support

* add zai_api_key secret to deploy-core

* add to justfile

* add testing, provider integration skill

* enable zai key

* fix zai test

* clean up skill a little

* small changes
2026-01-12 10:57:19 -08:00
cthomas
8e9d85fcc7 test: fix legacy send message tests (#6382)
* test: fix legacy send message tests

* fix reasoner model test

* add hidden reasoning
2025-11-26 14:39:39 -08:00
cthomas
7b0bd1cb13 feat: cutover repo to 1.0 sdk client LET-6256 (#6361)
feat: cutover repo to 1.0 sdk client
2025-11-24 19:11:18 -08:00