Updates multimodal examples to place text content before image content,
which prevents request failures.
Changes:
- Reordered content array in all SDK examples to have text first, then image
- Fixed TypeScript mediaType casing (media_type -> mediaType)
- Applied to both URL-based and base64-encoded image examples
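The reordering above can be sketched as follows. This is an illustrative shape, not the exact Letta SDK schema: the field names follow the Python snake_case convention (`media_type`), whereas the TypeScript SDK uses camelCase (`mediaType`), which is the casing fix noted above.

```python
# Sketch of the reordered multimodal content array: the text part
# comes first, then the image part (base64-encoded variant shown).

def build_multimodal_content(text: str, image_b64: str) -> list[dict]:
    """Return a content array with text before the image."""
    return [
        {"type": "text", "text": text},
        {
            "type": "image",
            "source": {
                "type": "base64",
                "media_type": "image/png",
                "data": image_b64,
            },
        },
    ]

content = build_multimodal_content("Describe this image.", "<base64-data>")
assert content[0]["type"] == "text"  # text must precede the image
```

The same ordering applies to the URL-based variant: only the `source` payload changes, the text part still comes first.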
🐾 Generated with [Letta Code](https://letta.com)
Co-authored-by: Letta <noreply@letta.com>
* chore: migrate integration test send message to v1 sdk
* new folder
* set up new workflows for integration test
* remove file
* update stainless workflow
* fix import err
* add letta-client version logging
* fix: SDK cache miss should fall back to PyPI instead of failing
When the Stainless SDK cache is not available, the workflow should
fall back to installing the published SDK from PyPI rather than
failing the CI build. The workflow already has this fallback logic
in the "Install Stainless SDK" step, but the "Check SDK cache"
step was failing before it could reach that point.
This change converts the hard failure (exit 1) to a warning message,
allowing the workflow to continue and use the PyPI fallback.
Co-Authored-By: Claude <noreply@anthropic.com>
* force upgrade
* remove frozen
* install before running
* add no sync
* use upgrade instead of upgrade package
* update
* fix llm config
* fix
* update
* update path
* update workflow
* see installed version
* add fallback
* update
* fix mini
* lettaping
* fix: handle o1 token streaming and LettaPing step_id validation
- Skip LettaPing messages in step_id validation (they don't have step_id)
- Move o1/o3/o4 token streaming check before general assertion in assert_tool_call_response
- o1 reasoning models omit AssistantMessage in token streaming mode (6 messages instead of 7)
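The step_id fix above can be sketched like this. The message shape and `message_type == "ping"` marker are assumptions standing in for LettaPing, not the exact SDK types:

```python
# Sketch of the validation tweak: keepalive pings carry no step_id,
# so they are skipped instead of tripping the step_id assertion.

def validate_step_ids(messages: list[dict]) -> None:
    for msg in messages:
        if msg.get("message_type") == "ping":
            continue  # LettaPing-style keepalives have no step_id
        assert msg.get("step_id") is not None, f"missing step_id: {msg}"

validate_step_ids([
    {"message_type": "assistant_message", "step_id": "step-1"},
    {"message_type": "ping"},  # skipped, no step_id required
])
```

The o1 token-streaming special case is separate: since those models omit the AssistantMessage, the count check has to branch before the general assertion rather than after it.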
---------
Co-authored-by: Kian Jones <kian@letta.com>
Co-authored-by: Claude <noreply@anthropic.com>
* first hack
* clean up
* first implementation working
* revert package-lock
* remove openai test
* error throw
* typo
* Update integration_test_send_message_v2.py
* Update integration_test_send_message_v2.py
* refine test
* Only make changes for OpenAI non-streaming
* Add tests
---------
Co-authored-by: Ari Webb <ari@letta.com>
Co-authored-by: Matt Zhou <mattzh1314@gmail.com>
Moves Human-in-the-Loop documentation from the Experimental section
to the Tool Use section, as it is a stable feature for tool approval
workflows.
Changes:
- Moved Human-in-the-Loop page to Tool Use section after Tool Variables
- Removed from Experimental section in docs navigation
- Removed experimental warning from human_in_the_loop.mdx
Human-in-the-Loop is now positioned alongside other tool-related
features, making it easier for users to discover when implementing
tool approval workflows.
🐾 Generated with [Letta Code](https://letta.com)
Co-authored-by: Letta <noreply@letta.com>
Fix broken link in prompts documentation that was pointing to
/prompts instead of /letta/prompts in the GitHub repository.
Reported by user feedback at docs.letta.com/prompts
* letta coded
* migrate to stainless from fern
* revert core workflows
* fix if statement
* fix typo
* run on self-hosted ci runners
* add empty check
* change file
* fix upstream renaming and special character escaping
* fix letta-code with opus
* remove random client type import
* remove env localhost
* remove failing tests
* ignore ts for now
* fix caching maybe
* tar.gz -> whl
* retain name metadata
* don't build on cache hit
* add sdk_v1 tests