Add 'apps/core/' from commit 'ea2a7395f4023f5b9fab03e6273db3b64a1181d5'

git-subtree-dir: apps/core
git-subtree-mainline: a8963e11e7a5a0059acbc849ce768e1eee80df61
git-subtree-split: ea2a7395f4023f5b9fab03e6273db3b64a1181d5
Author: Shubham Naik
Date: 2024-12-22 20:31:22 -08:00
Commit: 5a743d1dc4
478 changed files with 65642 additions and 0 deletions

.dockerignore

@@ -0,0 +1,9 @@
**/__pycache__
**/.pytest_cache
**/*.pyc
**/*.pyo
**/*.pyd
.git
.gitignore
.env
*.log
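The patterns above can be spot-checked without running a Docker build. Below is a simplified stand-in for the matcher (Docker's real implementation follows Go's `filepath.Match` extended with `**`; this sketch only approximates it for the patterns listed here):

```shell
# Simplified sketch of .dockerignore matching; NOT Docker's real matcher.
# Illustrative only, for the patterns in the file above.
set -eu

is_ignored() {
  case "$1" in
    __pycache__|__pycache__/*|*/__pycache__|*/__pycache__/*) echo ignored ;;
    .pytest_cache|*/.pytest_cache|*/.pytest_cache/*) echo ignored ;;
    *.pyc|*.pyo|*.pyd) echo ignored ;;
    .git|.git/*|.gitignore|.env) echo ignored ;;
    *.log) echo ignored ;;
    *) echo kept ;;
  esac
}

is_ignored "letta/__pycache__/agent.cpython-312.pyc"  # → ignored
is_ignored "letta/agent.py"                           # → kept
is_ignored "server.log"                               # → ignored
```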

.env.example

@@ -0,0 +1,44 @@
##########################################################
# Example environment variable configurations for the Letta
# Docker container. Uncomment the sections you want to
# configure.
#
# Hint: you don't need to use the same LLM and embedding
# model backends (they can be mixed and matched).
##########################################################

##########################################################
# OpenAI configuration
##########################################################
## LLM Model
#LETTA_LLM_ENDPOINT_TYPE=openai
#LETTA_LLM_MODEL=gpt-4o-mini
## Embeddings
#LETTA_EMBEDDING_ENDPOINT_TYPE=openai
#LETTA_EMBEDDING_MODEL=text-embedding-ada-002
##########################################################
# Ollama configuration
##########################################################
## LLM Model
#LETTA_LLM_ENDPOINT=http://host.docker.internal:11434
#LETTA_LLM_ENDPOINT_TYPE=ollama
#LETTA_LLM_MODEL=dolphin2.2-mistral:7b-q6_K
#LETTA_LLM_CONTEXT_WINDOW=8192
## Embeddings
#LETTA_EMBEDDING_ENDPOINT=http://host.docker.internal:11434
#LETTA_EMBEDDING_ENDPOINT_TYPE=ollama
#LETTA_EMBEDDING_MODEL=mxbai-embed-large
#LETTA_EMBEDDING_DIM=512
##########################################################
# vLLM configuration
##########################################################
## LLM Model
#LETTA_LLM_ENDPOINT=http://host.docker.internal:8000
#LETTA_LLM_ENDPOINT_TYPE=vllm
#LETTA_LLM_MODEL=ehartford/dolphin-2.2.1-mistral-7b
#LETTA_LLM_CONTEXT_WINDOW=8192
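Once a section is uncommented, the file is plain dotenv-style `KEY=VALUE` syntax. A minimal loader sketch for reference (this is not how the container actually consumes the file; Docker's `--env-file` and compose `env_file` handle that):

```shell
# Minimal dotenv-style loader sketch. Illustrates the comment/KEY=VALUE
# format only; the values written below are stand-ins.
set -eu

cat > /tmp/demo.env <<'EOF'
# Comments and blank lines are ignored

LETTA_LLM_ENDPOINT_TYPE=openai
LETTA_LLM_MODEL=gpt-4o-mini
EOF

# Export only KEY=VALUE lines, skipping comments and blanks
while IFS='=' read -r key value; do
  case "$key" in ''|'#'*) continue ;; esac
  export "$key=$value"
done < /tmp/demo.env

echo "$LETTA_LLM_MODEL"  # → gpt-4o-mini
```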

.gitattributes

@@ -0,0 +1,20 @@
# Set the default behavior, in case people don't have core.autocrlf set.
* text=auto
# Explicitly declare text files you want to always be normalized and converted
# to LF on checkout.
*.py text eol=lf
*.txt text eol=lf
*.md text eol=lf
*.json text eol=lf
*.yml text eol=lf
*.yaml text eol=lf
# Declare files that will always have CRLF line endings on checkout.
# (Only if you have specific Windows-only files)
*.bat text eol=crlf
# Denote all files that are truly binary and should not be modified.
*.png binary
*.jpg binary
*.gif binary
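You can confirm what these attributes resolve to for a given path with `git check-attr` in a scratch repository (the paths below are hypothetical):

```shell
# Verify eol attributes from a .gitattributes like the one above using
# `git check-attr`. Runs in a throwaway repo; file paths need not exist.
set -eu
d=$(mktemp -d)
cd "$d"
git init -q .
printf '*.py text eol=lf\n*.bat text eol=crlf\n*.png binary\n' > .gitattributes

out_py=$(git check-attr eol -- main.py)
out_bat=$(git check-attr eol -- run.bat)
echo "$out_py"   # → main.py: eol: lf
echo "$out_bat"  # → run.bat: eol: crlf
```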

.github/ISSUE_TEMPLATE/bug_report.md

@@ -0,0 +1,39 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**Please describe your setup**
- [ ] How did you install letta?
- `pip install letta`? `pip install letta-nightly`? `git clone`?
- [ ] Describe your setup
- What's your OS (Windows/MacOS/Linux)?
  - How are you running `letta`? (`cmd.exe`/PowerShell/Anaconda Shell/Terminal)
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
**Letta Config**
Please attach your `~/.letta/config` file or copy-paste it below.
---
If you're not using OpenAI, please provide additional information on your local LLM setup:
**Local LLM details**
If you are trying to run Letta with local LLMs, please provide the following information:
- [ ] The exact model you're trying to use (e.g. `dolphin-2.1-mistral-7b.Q6_K.gguf`)
- [ ] The local LLM backend you are using (web UI? LM Studio?)
- [ ] Your hardware for the local LLM backend (local computer? operating system? remote RunPod?)


@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

.github/pull_request_template.md

@@ -0,0 +1,17 @@
**Please describe the purpose of this pull request.**
Is it to add a new feature? Is it to fix a bug?
**How to test**
How can we test your PR during review? What commands should we run? What outcomes should we expect?
**Have you tested this PR?**
Have you tested the latest commit on the PR? If so please provide outputs from your tests.
**Related issues or PRs**
Please link any related GitHub [issues](https://github.com/letta-ai/letta/issues) or [PRs](https://github.com/letta-ai/letta/pulls).
**Is your PR over 500 lines of code?**
If so, please break up your PR into multiple smaller PRs so that we can review them quickly, or provide justification for its length.
**Additional context**
Add any other context or screenshots about the PR here.


@@ -0,0 +1,62 @@
name: Check for Print Statements
on:
pull_request:
paths:
- '**.py'
jobs:
check-print-statements:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.x'
- name: Check for new print statements
run: |
# Get the files changed in this PR
git diff --name-only ${{ github.event.pull_request.base.sha }} ${{ github.sha }} > changed_files.txt
# Filter for only Python files, excluding tests directory
grep "\.py$" changed_files.txt | grep -v "^tests/" > python_files.txt || true
# Initialize error flag
ERROR=0
# Check each changed Python file
while IFS= read -r file; do
if [ "$file" == "letta/main.py" ]; then
echo "Skipping $file for print statement checks."
continue
fi
if [ -f "$file" ]; then
echo "Checking $file for new print statements..."
# Get diff and look for added lines containing print statements
NEW_PRINTS=$(git diff ${{ github.event.pull_request.base.sha }} ${{ github.sha }} "$file" | \
grep "^+" | \
grep -v "^+++" | \
grep -E "(^|\s)print\(" || true)
if [ ! -z "$NEW_PRINTS" ]; then
echo "❌ Found new print statements in $file:"
echo "$NEW_PRINTS"
ERROR=1
fi
fi
done < python_files.txt
# Exit with error if print statements were found
if [ $ERROR -eq 1 ]; then
echo "::error::New print statements were found in the changes"
exit 1
fi
echo "✅ No new print statements found"
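The grep pipeline at the heart of the workflow above can be exercised locally against a canned diff hunk (the file name and lines here are made up):

```shell
# Sketch: the added-print detection from the workflow, run on a fake diff
# instead of a live PR. Only lines added ("+", excluding the "+++" header)
# that contain a print( call should be flagged.
set -eu

diff_sample='+++ b/letta/agent.py
+    print("debugging")
+    logger.info("kept")
-    print("removed lines are not flagged")'

NEW_PRINTS=$(printf '%s\n' "$diff_sample" | \
  grep "^+" | \
  grep -v "^+++" | \
  grep -E "(^|\s)print\(" || true)

echo "$NEW_PRINTS"  # → +    print("debugging")
```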


@@ -0,0 +1,22 @@
name: Close inactive issues
on:
schedule:
- cron: "30 1 * * *"
jobs:
close-issues:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v5
with:
days-before-issue-stale: 30
days-before-issue-close: 14
stale-issue-label: "stale"
stale-issue-message: "This issue is stale because it has been open for 30 days with no activity."
close-issue-message: "This issue was closed because it has been inactive for 14 days since being marked as stale."
days-before-pr-stale: -1
days-before-pr-close: -1
repo-token: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/code_style_checks.yml

@@ -0,0 +1,50 @@
name: Code Style Checks
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
style-checks:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.12"] # Adjust Python version matrix if needed
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
ref: ${{ github.head_ref }} # Checkout the PR branch
fetch-depth: 0 # Fetch all history for all branches and tags
- name: "Setup Python, Poetry and Dependencies"
uses: packetcoders/action-setup-cache-python-poetry@main
with:
python-version: ${{ matrix.python-version }}
poetry-version: "1.8.2"
install-args: "-E dev -E postgres -E external-tools -E tests" # Adjust as necessary
- name: Validate PR Title
if: github.event_name == 'pull_request'
uses: amannn/action-semantic-pull-request@v5
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Run Pyright
uses: jakebailey/pyright-action@v2
with:
python-version: ${{ matrix.python-version }}
level: "error"
continue-on-error: true
- name: Run isort
run: poetry run isort --profile black --check-only --diff .
- name: Run Black
run: poetry run black --check .
- name: Run Autoflake
run: poetry run autoflake --remove-all-unused-imports --remove-unused-variables --in-place --recursive --ignore-init-module-imports .


@@ -0,0 +1,27 @@
name: Docker Image CI (nightly)
on:
schedule:
- cron: '35 10 * * *' # 10:35am UTC, 2:35am PST, 5:35am EST
release:
types: [published]
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- uses: actions/checkout@v3
- name: Build and push the Docker image (letta)
run: |
docker build . --file Dockerfile --tag letta/letta:nightly
docker push letta/letta:nightly

.github/workflows/docker-image.yml

@@ -0,0 +1,41 @@
name: Docker Image CI
on:
release:
types: [published]
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Extract version number
id: extract_version
run: echo "CURRENT_VERSION=$(awk -F '\"' '/version =/ { print $2 }' pyproject.toml | head -n 1)" >> $GITHUB_ENV
- name: Build and push
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64
push: true
tags: |
letta/letta:${{ env.CURRENT_VERSION }}
letta/letta:latest
memgpt/letta:${{ env.CURRENT_VERSION }}
memgpt/letta:latest
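The awk one-liner in the "Extract version number" step can be dry-run against a stand-in pyproject.toml (the version value below is hypothetical):

```shell
# Sketch: extract the version field the same way the workflow step does.
# The pyproject.toml contents here are a stand-in.
set -eu

cat > /tmp/pyproject_demo.toml <<'EOF'
[tool.poetry]
name = "letta"
version = "0.6.5"
EOF

# Split on double quotes; the second field of the `version =` line is the value
CURRENT_VERSION=$(awk -F '"' '/version =/ { print $2 }' /tmp/pyproject_demo.toml | head -n 1)
echo "$CURRENT_VERSION"  # → 0.6.5
```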


@@ -0,0 +1,65 @@
name: Run Docker integration tests
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
test:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Python
uses: actions/setup-python@v5
with:
python-version: '3.12'
- name: Set permissions for log directory
run: |
mkdir -p /home/runner/.letta/logs
sudo chown -R $USER:$USER /home/runner/.letta/logs
chmod -R 755 /home/runner/.letta/logs
- name: Build and run docker dev server
env:
LETTA_PG_DB: letta
LETTA_PG_USER: letta
LETTA_PG_PASSWORD: letta
LETTA_PG_PORT: 8888
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: docker compose -f dev-compose.yaml up --build -d
#- name: "Setup Python, Poetry and Dependencies"
# uses: packetcoders/action-setup-cache-python-poetry@v1.2.0
# with:
# python-version: "3.12"
# poetry-version: "1.8.2"
# install-args: "--all-extras"
- name: Wait for service
run: bash scripts/wait_for_service.sh http://localhost:8283 -- echo "Service is ready"
- name: Run tests with pytest
env:
LETTA_PG_DB: letta
LETTA_PG_USER: letta
LETTA_PG_PASSWORD: letta
LETTA_PG_PORT: 8888
LETTA_SERVER_PASS: test_server_token
LETTA_SERVER_URL: http://localhost:8283
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
PYTHONPATH: ${{ github.workspace }}:${{ env.PYTHONPATH }}
run: |
pipx install poetry==1.8.2
poetry install -E dev -E postgres
poetry run pytest -s tests/test_client_legacy.py
- name: Print docker logs if tests fail
if: failure()
run: |
echo "Printing Docker Logs..."
docker compose -f dev-compose.yaml logs
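The `scripts/wait_for_service.sh` invoked above is not part of this excerpt, so its exact contents are unknown; a generic polling loop of the same shape looks like this (names and retry counts are assumptions, and a real version would poll the URL with something like `curl -fsS`):

```shell
# Sketch of the kind of polling a wait-for-service script performs.
# Polls a command instead of a URL so it runs anywhere.
set -eu

wait_for() {  # usage: wait_for <max_tries> <cmd...>
  tries=$1
  shift
  i=0
  until "$@" >/dev/null 2>&1; do
    i=$((i + 1))
    if [ "$i" -ge "$tries" ]; then
      return 1
    fi
    sleep 1
  done
}

wait_for 5 true && echo "Service is ready"  # → Service is ready
```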

.github/workflows/integration_tests.yml

@@ -0,0 +1,81 @@
name: Integration Tests
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
COMPOSIO_API_KEY: ${{ secrets.COMPOSIO_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
AZURE_API_KEY: ${{ secrets.AZURE_API_KEY }}
AZURE_BASE_URL: ${{ secrets.AZURE_BASE_URL }}
E2B_API_KEY: ${{ secrets.E2B_API_KEY }}
E2B_SANDBOX_TEMPLATE_ID: ${{ secrets.E2B_SANDBOX_TEMPLATE_ID }}
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
integ-run:
runs-on: ubuntu-latest
timeout-minutes: 15
strategy:
fail-fast: false
matrix:
integration_test_suite:
- "integration_test_summarizer.py"
- "integration_test_tool_execution_sandbox.py"
- "integration_test_offline_memory_agent.py"
- "integration_test_agent_tool_graph.py"
- "integration_test_o1_agent.py"
services:
qdrant:
image: qdrant/qdrant
ports:
- 6333:6333
postgres:
image: pgvector/pgvector:pg17
ports:
- 5432:5432
env:
POSTGRES_HOST_AUTH_METHOD: trust
POSTGRES_DB: postgres
POSTGRES_USER: postgres
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Python, Poetry, and Dependencies
uses: packetcoders/action-setup-cache-python-poetry@main
with:
python-version: "3.12"
poetry-version: "1.8.2"
install-args: "-E dev -E postgres -E external-tools -E tests -E cloud-tool-sandbox"
- name: Migrate database
env:
LETTA_PG_PORT: 5432
LETTA_PG_USER: postgres
LETTA_PG_PASSWORD: postgres
LETTA_PG_DB: postgres
LETTA_PG_HOST: localhost
run: |
psql -h localhost -U postgres -d postgres -c 'CREATE EXTENSION vector'
poetry run alembic upgrade head
- name: Run core unit tests
env:
LETTA_PG_PORT: 5432
LETTA_PG_USER: postgres
LETTA_PG_PASSWORD: postgres
LETTA_PG_DB: postgres
LETTA_PG_HOST: localhost
LETTA_SERVER_PASS: test_server_token
run: |
poetry run pytest -s -vv tests/${{ matrix.integration_test_suite }}


@@ -0,0 +1,42 @@
name: "Letta Web OpenAPI Compatibility Checker"
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
validate-openapi:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: "Setup Python, Poetry and Dependencies"
uses: packetcoders/action-setup-cache-python-poetry@main
with:
python-version: "3.12"
poetry-version: "1.8.2"
install-args: "-E dev"
- name: Checkout letta web
uses: actions/checkout@v4
with:
repository: letta-ai/letta-web
token: ${{ secrets.PULLER_TOKEN }}
path: letta-web
- name: Run OpenAPI schema generation
run: |
bash ./letta/server/generate_openapi_schema.sh
- name: Setup letta-web
working-directory: letta-web
run: npm ci
- name: Copy OpenAPI schema
working-directory: .
run: cp openapi_letta.json letta-web/libs/letta-agents-api/letta-agents-openapi.json
- name: Validate OpenAPI schema
working-directory: letta-web
run: |
npm run agents-api:generate
npm run type-check

.github/workflows/letta-web-safety.yml

@@ -0,0 +1,85 @@
name: "Letta Web Compatibility Checker"
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
cypress-run:
runs-on: ubuntu-latest
environment: Deployment
# Runs tests in parallel with matrix strategy https://docs.cypress.io/guides/guides/parallelization
# https://docs.github.com/en/actions/using-jobs/using-a-matrix-for-your-jobs
# Also see warning here https://github.com/cypress-io/github-action#parallel
strategy:
fail-fast: false # https://github.com/cypress-io/github-action/issues/48
matrix:
containers: [ 1 ]
services:
redis:
image: redis
ports:
- 6379:6379
options: >-
--health-cmd "redis-cli ping"
--health-interval 10s
--health-timeout 5s
--health-retries 5
postgres:
image: postgres
ports:
- 5433:5432
env:
POSTGRES_DB: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Checkout letta web
uses: actions/checkout@v4
with:
repository: letta-ai/letta-web
token: ${{ secrets.PULLER_TOKEN }}
path: letta-web
- name: Turn on Letta agents
env:
LETTA_PG_DB: letta
LETTA_PG_USER: letta
LETTA_PG_PASSWORD: letta
LETTA_PG_PORT: 8888
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: docker compose -f dev-compose.yaml up --build -d
- name: Cypress run
uses: cypress-io/github-action@v6
with:
working-directory: letta-web
build: npm run build:e2e
start: npm run start:e2e
project: apps/letta
wait-on: 'http://localhost:3000' # Waits for above
record: false
parallel: false
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
CYPRESS_PROJECT_KEY: 38nemh
DATABASE_URL: postgres://postgres:postgres@localhost:5433/postgres
REDIS_HOST: localhost
REDIS_PORT: 6379
CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
CYPRESS_GOOGLE_CLIENT_ID: ${{ secrets.CYPRESS_GOOGLE_CLIENT_ID }}
CYPRESS_GOOGLE_CLIENT_SECRET: ${{ secrets.CYPRESS_GOOGLE_CLIENT_SECRET }}
CYPRESS_GOOGLE_REFRESH_TOKEN: ${{ secrets.CYPRESS_GOOGLE_REFRESH_TOKEN }}
LETTA_AGENTS_ENDPOINT: http://localhost:8283
NEXT_PUBLIC_CURRENT_HOST: http://localhost:3000
IS_CYPRESS_RUN: yes
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}


@@ -0,0 +1,25 @@
name: Clear Old Issues
on:
workflow_dispatch:
jobs:
cleanup-old-issues:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v5
with:
days-before-issue-stale: 60
days-before-issue-close: 0
stale-issue-label: "auto-closed"
stale-issue-message: ""
close-issue-message: "This issue has been automatically closed due to 60 days of inactivity."
days-before-pr-stale: -1
days-before-pr-close: -1
exempt-issue-labels: ""
only-issue-labels: ""
remove-stale-when-updated: true
operations-per-run: 1000
repo-token: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/migration-test.yml

@@ -0,0 +1,44 @@
name: Alembic Migration Tester
on:
pull_request:
paths:
- '**.py'
workflow_dispatch:
jobs:
test:
runs-on: ubuntu-latest
timeout-minutes: 15
services:
postgres:
image: pgvector/pgvector:pg17
ports:
- 5432:5432
env:
POSTGRES_HOST_AUTH_METHOD: trust
POSTGRES_DB: postgres
POSTGRES_USER: postgres
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- name: Checkout
uses: actions/checkout@v4
- run: psql -h localhost -U postgres -d postgres -c 'CREATE EXTENSION vector'
- name: "Setup Python, Poetry and Dependencies"
uses: packetcoders/action-setup-cache-python-poetry@main
with:
python-version: "3.12"
poetry-version: "1.8.2"
install-args: "--all-extras"
- name: Test alembic migration
env:
LETTA_PG_PORT: 5432
LETTA_PG_USER: postgres
LETTA_PG_PASSWORD: postgres
LETTA_PG_DB: postgres
LETTA_PG_HOST: localhost
run: |
poetry run alembic upgrade head
poetry run alembic check


@@ -0,0 +1,62 @@
name: poetry-publish-nightly
on:
schedule:
- cron: '35 10 * * *' # 10:35am UTC, 2:35am PST, 5:35am EST
release:
types: [published]
workflow_dispatch:
jobs:
# nightly release check from https://stackoverflow.com/a/67527144
check-date:
runs-on: ubuntu-latest
outputs:
should_run: ${{ steps.should_run.outputs.should_run }}
steps:
- uses: actions/checkout@v4
- name: print latest_commit
run: echo ${{ github.sha }}
- id: should_run
continue-on-error: true
name: check latest commit is less than a day
if: ${{ github.event_name == 'schedule' }}
run: test -z $(git rev-list --after="24 hours" ${{ github.sha }}) && echo "::set-output name=should_run::false"
build-and-publish-nightly:
name: Build and Publish to PyPI (nightly)
if: github.repository == 'letta-ai/letta' # TODO: if the repo org ever changes, this must be updated
runs-on: ubuntu-latest
needs: check-date
steps:
- name: Check out the repository
uses: actions/checkout@v4
- name: "Setup Python, Poetry and Dependencies"
uses: packetcoders/action-setup-cache-python-poetry@main
with:
python-version: "3.11"
poetry-version: "1.7.1"
- name: Set release version
run: |
# Extract the version number from pyproject.toml using awk
CURRENT_VERSION=$(awk -F '"' '/version =/ { print $2 }' pyproject.toml | head -n 1)
# Export the CURRENT_VERSION with the .dev and current date suffix
NIGHTLY_VERSION="${CURRENT_VERSION}.dev$(date +%Y%m%d%H%M%S)"
# Overwrite pyproject.toml with nightly config
sed -i "0,/version = \"${CURRENT_VERSION}\"/s//version = \"${NIGHTLY_VERSION}\"/" pyproject.toml
sed -i 's/name = "letta"/name = "letta-nightly"/g' pyproject.toml
sed -i "s/__version__ = '.*'/__version__ = '${NIGHTLY_VERSION}'/g" letta/__init__.py
cat pyproject.toml
cat letta/__init__.py
- name: Configure poetry
env:
PYPI_TOKEN: ${{ secrets.PYPI_TOKEN}}
run: poetry config pypi-token.pypi "$PYPI_TOKEN"
- name: Build the Python package
run: poetry build
- name: Publish the package to PyPI
run: poetry publish
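The version-mangling in the "Set release version" step above can be dry-run on a scratch file. The pyproject.toml contents below are stand-ins, and both `sed -i` and the `0,/regexp/` address form are GNU extensions (available on the ubuntu-latest runner):

```shell
# Sketch: dry-run of the nightly rename on a scratch copy of pyproject.toml.
set -eu

cat > /tmp/pyproject_nightly.toml <<'EOF'
name = "letta"
version = "0.6.5"
EOF

# Same extraction and suffixing as the workflow step
CURRENT_VERSION=$(awk -F '"' '/version =/ { print $2 }' /tmp/pyproject_nightly.toml | head -n 1)
NIGHTLY_VERSION="${CURRENT_VERSION}.dev$(date +%Y%m%d%H%M%S)"

# Replace only the first `version = "..."` occurrence, then rename the package
sed -i "0,/version = \"${CURRENT_VERSION}\"/s//version = \"${NIGHTLY_VERSION}\"/" /tmp/pyproject_nightly.toml
sed -i 's/name = "letta"/name = "letta-nightly"/g' /tmp/pyproject_nightly.toml

cat /tmp/pyproject_nightly.toml
```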

.github/workflows/poetry-publish.yml

@@ -0,0 +1,32 @@
name: poetry-publish
on:
release:
types: [published]
workflow_dispatch:
jobs:
build-and-publish:
name: Build and Publish to PyPI
if: github.repository == 'letta-ai/letta' # TODO: if the repo org ever changes, this must be updated
runs-on: ubuntu-latest
steps:
- name: Check out the repository
uses: actions/checkout@v4
- name: "Setup Python, Poetry and Dependencies"
uses: packetcoders/action-setup-cache-python-poetry@main
with:
python-version: "3.11"
poetry-version: "1.7.1"
- name: Configure poetry
env:
PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
run: |
poetry config pypi-token.pypi "$PYPI_TOKEN"
- name: Build the Python package
run: poetry build
- name: Publish the package to PyPI
run: poetry publish

.github/workflows/test-pip-install.yml

@@ -0,0 +1,23 @@
name: Test Package Installation
on: [push, pull_request, workflow_dispatch]
jobs:
test-install:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.10", "3.11", "3.12", "3.13"] # Adjust Python versions as needed
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install package with extras
run: pip install '.[external-tools,postgres,dev,server,ollama]' # Replace 'all' with the key that includes all extras
- name: Check package installation
run: pip list # Or any other command to verify successful installation

.github/workflows/test_anthropic.yml

@@ -0,0 +1,102 @@
name: Anthropic Claude Opus 3 Capabilities Test
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
COMPOSIO_API_KEY: ${{ secrets.COMPOSIO_API_KEY }}
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
test:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- name: Checkout
uses: actions/checkout@v4
- name: "Setup Python, Poetry and Dependencies"
uses: packetcoders/action-setup-cache-python-poetry@main
with:
python-version: "3.12"
poetry-version: "1.8.2"
install-args: "-E dev -E external-tools"
- name: Test first message contains expected function call and inner monologue
id: test_first_message
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_claude_opus_3_returns_valid_first_message
echo "TEST_FIRST_MESSAGE_EXIT_CODE=$?" >> $GITHUB_ENV
continue-on-error: true
- name: Test model sends message with keyword
id: test_keyword_message
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_claude_opus_3_returns_keyword
echo "TEST_KEYWORD_MESSAGE_EXIT_CODE=$?" >> $GITHUB_ENV
continue-on-error: true
- name: Test model uses external tool correctly
id: test_external_tool
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_claude_opus_3_uses_external_tool
echo "TEST_EXTERNAL_TOOL_EXIT_CODE=$?" >> $GITHUB_ENV
continue-on-error: true
- name: Test model recalls chat memory
id: test_chat_memory
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_claude_opus_3_recall_chat_memory
echo "TEST_CHAT_MEMORY_EXIT_CODE=$?" >> $GITHUB_ENV
continue-on-error: true
- name: Test model uses 'archival_memory_search' to find secret
id: test_archival_memory
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_claude_opus_3_archival_memory_retrieval
echo "TEST_ARCHIVAL_MEMORY_EXIT_CODE=$?" >> $GITHUB_ENV
continue-on-error: true
- name: Test model can edit core memories
id: test_core_memory
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_claude_opus_3_edit_core_memory
echo "TEST_CORE_MEMORY_EXIT_CODE=$?" >> $GITHUB_ENV
continue-on-error: true
- name: Summarize test results
if: always()
run: |
echo "Test Results Summary:"
echo "Test first message: $([[ $TEST_FIRST_MESSAGE_EXIT_CODE -eq 0 ]] && echo ✅ || echo ❌)"
echo "Test model sends message with keyword: $([[ $TEST_KEYWORD_MESSAGE_EXIT_CODE -eq 0 ]] && echo ✅ || echo ❌)"
echo "Test model uses external tool: $([[ $TEST_EXTERNAL_TOOL_EXIT_CODE -eq 0 ]] && echo ✅ || echo ❌)"
echo "Test model recalls chat memory: $([[ $TEST_CHAT_MEMORY_EXIT_CODE -eq 0 ]] && echo ✅ || echo ❌)"
echo "Test model uses 'archival_memory_search' to find secret: $([[ $TEST_ARCHIVAL_MEMORY_EXIT_CODE -eq 0 ]] && echo ✅ || echo ❌)"
echo "Test model can edit core memories: $([[ $TEST_CORE_MEMORY_EXIT_CODE -eq 0 ]] && echo ✅ || echo ❌)"
# Check if any test failed
if [[ $TEST_FIRST_MESSAGE_EXIT_CODE -ne 0 || \
$TEST_KEYWORD_MESSAGE_EXIT_CODE -ne 0 || \
$TEST_EXTERNAL_TOOL_EXIT_CODE -ne 0 || \
$TEST_CHAT_MEMORY_EXIT_CODE -ne 0 || \
$TEST_ARCHIVAL_MEMORY_EXIT_CODE -ne 0 || \
$TEST_CORE_MEMORY_EXIT_CODE -ne 0 ]]; then
echo "Some tests failed."
exit 78
fi

.github/workflows/test_azure.yml

@@ -0,0 +1,111 @@
name: Azure OpenAI GPT-4o Mini Capabilities Test
env:
AZURE_API_KEY: ${{ secrets.AZURE_API_KEY }}
AZURE_BASE_URL: ${{ secrets.AZURE_BASE_URL }}
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
test:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- name: Checkout
uses: actions/checkout@v4
- name: "Setup Python, Poetry and Dependencies"
uses: packetcoders/action-setup-cache-python-poetry@main
with:
python-version: "3.12"
poetry-version: "1.8.2"
install-args: "-E dev -E external-tools"
- name: Test first message contains expected function call and inner monologue
id: test_first_message
env:
AZURE_API_KEY: ${{ secrets.AZURE_API_KEY }}
AZURE_BASE_URL: ${{ secrets.AZURE_BASE_URL }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_azure_gpt_4o_mini_returns_valid_first_message
echo "TEST_FIRST_MESSAGE_EXIT_CODE=$?" >> $GITHUB_ENV
continue-on-error: true
- name: Test model sends message with keyword
id: test_keyword_message
env:
AZURE_API_KEY: ${{ secrets.AZURE_API_KEY }}
AZURE_BASE_URL: ${{ secrets.AZURE_BASE_URL }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_azure_gpt_4o_mini_returns_keyword
echo "TEST_KEYWORD_MESSAGE_EXIT_CODE=$?" >> $GITHUB_ENV
continue-on-error: true
- name: Test model uses external tool correctly
id: test_external_tool
env:
AZURE_API_KEY: ${{ secrets.AZURE_API_KEY }}
AZURE_BASE_URL: ${{ secrets.AZURE_BASE_URL }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_azure_gpt_4o_mini_uses_external_tool
echo "TEST_EXTERNAL_TOOL_EXIT_CODE=$?" >> $GITHUB_ENV
continue-on-error: true
- name: Test model recalls chat memory
id: test_chat_memory
env:
AZURE_API_KEY: ${{ secrets.AZURE_API_KEY }}
AZURE_BASE_URL: ${{ secrets.AZURE_BASE_URL }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_azure_gpt_4o_mini_recall_chat_memory
echo "TEST_CHAT_MEMORY_EXIT_CODE=$?" >> $GITHUB_ENV
continue-on-error: true
- name: Test model uses 'archival_memory_search' to find secret
id: test_archival_memory
env:
AZURE_API_KEY: ${{ secrets.AZURE_API_KEY }}
AZURE_BASE_URL: ${{ secrets.AZURE_BASE_URL }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_azure_gpt_4o_mini_archival_memory_retrieval
echo "TEST_ARCHIVAL_MEMORY_EXIT_CODE=$?" >> $GITHUB_ENV
continue-on-error: true
- name: Test model can edit core memories
id: test_core_memory
env:
AZURE_API_KEY: ${{ secrets.AZURE_API_KEY }}
AZURE_BASE_URL: ${{ secrets.AZURE_BASE_URL }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_azure_gpt_4o_mini_edit_core_memory
echo "TEST_CORE_MEMORY_EXIT_CODE=$?" >> $GITHUB_ENV
continue-on-error: true
- name: Summarize test results
if: always()
run: |
echo "Test Results Summary:"
# If the exit code is empty, treat it as a failure (❌)
echo "Test first message: $([[ -z $TEST_FIRST_MESSAGE_EXIT_CODE || $TEST_FIRST_MESSAGE_EXIT_CODE -ne 0 ]] && echo ❌ || echo ✅)"
echo "Test model sends message with keyword: $([[ -z $TEST_KEYWORD_MESSAGE_EXIT_CODE || $TEST_KEYWORD_MESSAGE_EXIT_CODE -ne 0 ]] && echo ❌ || echo ✅)"
echo "Test model uses external tool: $([[ -z $TEST_EXTERNAL_TOOL_EXIT_CODE || $TEST_EXTERNAL_TOOL_EXIT_CODE -ne 0 ]] && echo ❌ || echo ✅)"
echo "Test model recalls chat memory: $([[ -z $TEST_CHAT_MEMORY_EXIT_CODE || $TEST_CHAT_MEMORY_EXIT_CODE -ne 0 ]] && echo ❌ || echo ✅)"
echo "Test model uses 'archival_memory_search' to find secret: $([[ -z $TEST_ARCHIVAL_MEMORY_EXIT_CODE || $TEST_ARCHIVAL_MEMORY_EXIT_CODE -ne 0 ]] && echo ❌ || echo ✅)"
echo "Test model can edit core memories: $([[ -z $TEST_CORE_MEMORY_EXIT_CODE || $TEST_CORE_MEMORY_EXIT_CODE -ne 0 ]] && echo ❌ || echo ✅)"
# Check if any test failed (either non-zero or unset exit code)
if [[ -z $TEST_FIRST_MESSAGE_EXIT_CODE || $TEST_FIRST_MESSAGE_EXIT_CODE -ne 0 || \
-z $TEST_KEYWORD_MESSAGE_EXIT_CODE || $TEST_KEYWORD_MESSAGE_EXIT_CODE -ne 0 || \
-z $TEST_EXTERNAL_TOOL_EXIT_CODE || $TEST_EXTERNAL_TOOL_EXIT_CODE -ne 0 || \
-z $TEST_CHAT_MEMORY_EXIT_CODE || $TEST_CHAT_MEMORY_EXIT_CODE -ne 0 || \
-z $TEST_ARCHIVAL_MEMORY_EXIT_CODE || $TEST_ARCHIVAL_MEMORY_EXIT_CODE -ne 0 || \
-z $TEST_CORE_MEMORY_EXIT_CODE || $TEST_CORE_MEMORY_EXIT_CODE -ne 0 ]]; then
echo "Some tests failed."
exit 78
fi
continue-on-error: true
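The `-z` guards in the summary above matter because `run:` blocks execute under GitHub's default `bash -e` shell: if pytest fails, the step aborts before the `echo "...=$?"` line runs, so the exit-code variable is never written at all. A sketch of the same empty-or-nonzero check in isolation:

```shell
# Sketch: treat an empty exit-code variable the same as a nonzero one,
# mirroring the summary step above.
set -eu

check() {  # usage: check <exit-code-or-empty-string>
  code=$1
  if [ -z "$code" ] || [ "$code" -ne 0 ]; then
    echo "❌"
  else
    echo "✅"
  fi
}

check 0    # → ✅
check 1    # → ❌
check ""   # → ❌
```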

.github/workflows/test_cli.yml

@@ -0,0 +1,67 @@
name: Test CLI
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
test-cli:
runs-on: ubuntu-latest
timeout-minutes: 15
services:
qdrant:
image: qdrant/qdrant
ports:
- 6333:6333
postgres:
image: pgvector/pgvector:pg17
ports:
- 5432:5432
env:
POSTGRES_HOST_AUTH_METHOD: trust
POSTGRES_DB: postgres
POSTGRES_USER: postgres
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- name: Checkout
uses: actions/checkout@v4
- name: "Setup Python, Poetry and Dependencies"
uses: packetcoders/action-setup-cache-python-poetry@main
with:
python-version: "3.12"
poetry-version: "1.8.2"
install-args: "-E dev -E postgres -E tests"
- name: Migrate database
env:
LETTA_PG_PORT: 5432
LETTA_PG_USER: postgres
LETTA_PG_PASSWORD: postgres
LETTA_PG_DB: postgres
LETTA_PG_HOST: localhost
run: |
psql -h localhost -U postgres -d postgres -c 'CREATE EXTENSION vector'
poetry run alembic upgrade head
- name: Test `letta run` up until first message
env:
LETTA_PG_PORT: 5432
LETTA_PG_USER: postgres
LETTA_PG_PASSWORD: postgres
LETTA_PG_DB: postgres
LETTA_PG_HOST: localhost
LETTA_SERVER_PASS: test_server_token
run: |
poetry run pytest -s -vv tests/test_cli.py::test_letta_run_create_new_agent

.github/workflows/test_examples.yml

@@ -0,0 +1,69 @@
name: Examples (documentation)
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
test:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Python
uses: actions/setup-python@v5
with:
python-version: '3.12'
- name: Set permissions for log directory
run: |
mkdir -p /home/runner/.letta/logs
sudo chown -R $USER:$USER /home/runner/.letta/logs
chmod -R 755 /home/runner/.letta/logs
- name: Build and run docker dev server
env:
LETTA_PG_DB: letta
LETTA_PG_USER: letta
LETTA_PG_PASSWORD: letta
LETTA_PG_PORT: 8888
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: docker compose -f dev-compose.yaml up --build -d
#- name: "Setup Python, Poetry and Dependencies"
# uses: packetcoders/action-setup-cache-python-poetry@v1.2.0
# with:
# python-version: "3.12"
# poetry-version: "1.8.2"
# install-args: "--all-extras"
- name: Wait for service
run: bash scripts/wait_for_service.sh http://localhost:8283 -- echo "Service is ready"
- name: Run tests with pytest
env:
LETTA_PG_DB: letta
LETTA_PG_USER: letta
LETTA_PG_PASSWORD: letta
LETTA_PG_PORT: 8888
LETTA_SERVER_PASS: test_server_token
LETTA_SERVER_URL: http://localhost:8283
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
PYTHONPATH: ${{ github.workspace }}:${{ env.PYTHONPATH }}
run: |
pipx install poetry==1.8.2
poetry install -E dev -E postgres -E external-tools
poetry run python examples/docs/agent_advanced.py
poetry run python examples/docs/agent_basic.py
poetry run python examples/docs/memory.py
poetry run python examples/docs/rest_client.py
poetry run python examples/docs/tools.py
- name: Print docker logs if tests fail
if: failure()
run: |
echo "Printing Docker Logs..."
docker compose -f dev-compose.yaml logs


@@ -0,0 +1,31 @@
name: Endpoint (Letta)
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
test:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- name: Checkout
uses: actions/checkout@v4
- name: "Setup Python, Poetry and Dependencies"
uses: packetcoders/action-setup-cache-python-poetry@main
with:
python-version: "3.12"
poetry-version: "1.8.2"
install-args: "-E dev"
- name: Test LLM endpoint
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_llm_endpoint_letta_hosted
continue-on-error: true
- name: Test embedding endpoint
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_embedding_endpoint_letta_hosted

87
.github/workflows/test_ollama.yml vendored Normal file

@@ -0,0 +1,87 @@
name: Endpoint (Ollama)
env:
OLLAMA_BASE_URL: "http://localhost:11434"
COMPOSIO_API_KEY: ${{ secrets.COMPOSIO_API_KEY }}
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
test:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- name: Checkout Code
uses: actions/checkout@v4
- name: Install Ollama
run: |
set -e
set -x
curl -vfsSL https://ollama.com/install.sh -o install.sh
chmod +x install.sh
bash -x install.sh
if ! command -v ollama; then
echo "Ollama binary not found in PATH after installation."
exit 1
fi
echo "Ollama installed successfully."
- name: Start Ollama Server
run: |
set -e
set -x
ollama serve >ollama_server.log 2>&1 &
sleep 15
if ! curl -v http://localhost:11434; then
echo "Server logs (if available):"
[ -f ollama_server.log ] && cat ollama_server.log || echo "No logs found."
exit 1
fi
echo "Ollama server started successfully."
- name: Pull Models
run: |
set -e
set -x
for attempt in {1..3}; do
ollama pull thewindmom/hermes-3-llama-3.1-8b && break || sleep 5
done
for attempt in {1..3}; do
ollama pull mxbai-embed-large && break || sleep 5
done
- name: Debug Logs on Failure
if: failure()
run: |
echo "Debugging logs on failure:"
[ -f ollama_server.log ] && cat ollama_server.log || echo "No server logs available."
- name: Setup Python, Poetry, and Dependencies
uses: packetcoders/action-setup-cache-python-poetry@main
with:
python-version: "3.12"
poetry-version: "1.8.2"
install-args: "-E dev"
- name: Test LLM Endpoint
run: |
set -e
set -x
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_llm_endpoint_ollama
- name: Test Embedding Endpoint
run: |
set -e
set -x
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_embedding_endpoint_ollama
- name: Test Provider
run: |
set -e
set -x
poetry run pytest -s -vv tests/test_providers.py::test_ollama

82
.github/workflows/test_openai.yml vendored Normal file

@@ -0,0 +1,82 @@
name: OpenAI GPT-4 Capabilities Test
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
COMPOSIO_API_KEY: ${{ secrets.COMPOSIO_API_KEY }}
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
test:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- name: Checkout
uses: actions/checkout@v4
- name: "Setup Python, Poetry and Dependencies"
uses: packetcoders/action-setup-cache-python-poetry@main
with:
python-version: "3.12"
poetry-version: "1.8.2"
install-args: "-E dev -E external-tools"
- name: Test first message contains expected function call and inner monologue
id: test_first_message
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_openai_gpt_4o_returns_valid_first_message
- name: Test model sends message with keyword
id: test_keyword_message
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_openai_gpt_4o_returns_keyword
- name: Test model uses external tool correctly
id: test_external_tool
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_openai_gpt_4o_uses_external_tool
- name: Test model recalls chat memory
id: test_chat_memory
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_openai_gpt_4o_recall_chat_memory
- name: Test model uses 'archival_memory_search' to find secret
id: test_archival_memory_search
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_openai_gpt_4o_archival_memory_retrieval
- name: Test model uses 'archival_memory_insert' to insert archival memories
id: test_archival_memory_insert
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_openai_gpt_4o_archival_memory_insert
- name: Test model can edit core memories
id: test_core_memory
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_openai_gpt_4o_edit_core_memory
- name: Test embedding endpoint
id: test_embedding_endpoint
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_embedding_endpoint_openai

105
.github/workflows/test_together.yml vendored Normal file

@@ -0,0 +1,105 @@
name: Together Llama 3.1 70b Capabilities Test
env:
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
COMPOSIO_API_KEY: ${{ secrets.COMPOSIO_API_KEY }}
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
test:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- name: Checkout
uses: actions/checkout@v4
- name: "Setup Python, Poetry and Dependencies"
uses: packetcoders/action-setup-cache-python-poetry@main
with:
python-version: "3.12"
poetry-version: "1.8.2"
install-args: "-E dev -E external-tools"
- name: Test first message contains expected function call and inner monologue
id: test_first_message
env:
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_together_llama_3_70b_returns_valid_first_message
echo "TEST_FIRST_MESSAGE_EXIT_CODE=$?" >> $GITHUB_ENV
continue-on-error: true
- name: Test model sends message with keyword
id: test_keyword_message
env:
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_together_llama_3_70b_returns_keyword
echo "TEST_KEYWORD_MESSAGE_EXIT_CODE=$?" >> $GITHUB_ENV
continue-on-error: true
- name: Test model uses external tool correctly
id: test_external_tool
env:
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_together_llama_3_70b_uses_external_tool
echo "TEST_EXTERNAL_TOOL_EXIT_CODE=$?" >> $GITHUB_ENV
continue-on-error: true
- name: Test model recalls chat memory
id: test_chat_memory
env:
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_together_llama_3_70b_recall_chat_memory
echo "TEST_CHAT_MEMORY_EXIT_CODE=$?" >> $GITHUB_ENV
continue-on-error: true
- name: Test model uses 'archival_memory_search' to find secret
id: test_archival_memory
env:
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_together_llama_3_70b_archival_memory_retrieval
echo "TEST_ARCHIVAL_MEMORY_EXIT_CODE=$?" >> $GITHUB_ENV
continue-on-error: true
- name: Test model can edit core memories
id: test_core_memory
env:
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
run: |
poetry run pytest -s -vv tests/test_model_letta_perfomance.py::test_together_llama_3_70b_edit_core_memory
echo "TEST_CORE_MEMORY_EXIT_CODE=$?" >> $GITHUB_ENV
continue-on-error: true
- name: Summarize test results
if: always()
run: |
echo "Test Results Summary:"
# If the exit code is empty, treat it as a failure (❌)
echo "Test first message: $([[ -z $TEST_FIRST_MESSAGE_EXIT_CODE || $TEST_FIRST_MESSAGE_EXIT_CODE -ne 0 ]] && echo ❌ || echo ✅)"
echo "Test model sends message with keyword: $([[ -z $TEST_KEYWORD_MESSAGE_EXIT_CODE || $TEST_KEYWORD_MESSAGE_EXIT_CODE -ne 0 ]] && echo ❌ || echo ✅)"
echo "Test model uses external tool: $([[ -z $TEST_EXTERNAL_TOOL_EXIT_CODE || $TEST_EXTERNAL_TOOL_EXIT_CODE -ne 0 ]] && echo ❌ || echo ✅)"
echo "Test model recalls chat memory: $([[ -z $TEST_CHAT_MEMORY_EXIT_CODE || $TEST_CHAT_MEMORY_EXIT_CODE -ne 0 ]] && echo ❌ || echo ✅)"
echo "Test model uses 'archival_memory_search' to find secret: $([[ -z $TEST_ARCHIVAL_MEMORY_EXIT_CODE || $TEST_ARCHIVAL_MEMORY_EXIT_CODE -ne 0 ]] && echo ❌ || echo ✅)"
echo "Test model can edit core memories: $([[ -z $TEST_CORE_MEMORY_EXIT_CODE || $TEST_CORE_MEMORY_EXIT_CODE -ne 0 ]] && echo ❌ || echo ✅)"
# Check if any test failed (either non-zero or unset exit code)
if [[ -z $TEST_FIRST_MESSAGE_EXIT_CODE || $TEST_FIRST_MESSAGE_EXIT_CODE -ne 0 || \
-z $TEST_KEYWORD_MESSAGE_EXIT_CODE || $TEST_KEYWORD_MESSAGE_EXIT_CODE -ne 0 || \
-z $TEST_EXTERNAL_TOOL_EXIT_CODE || $TEST_EXTERNAL_TOOL_EXIT_CODE -ne 0 || \
-z $TEST_CHAT_MEMORY_EXIT_CODE || $TEST_CHAT_MEMORY_EXIT_CODE -ne 0 || \
-z $TEST_ARCHIVAL_MEMORY_EXIT_CODE || $TEST_ARCHIVAL_MEMORY_EXIT_CODE -ne 0 || \
-z $TEST_CORE_MEMORY_EXIT_CODE || $TEST_CORE_MEMORY_EXIT_CODE -ne 0 ]]; then
echo "Some tests failed."
exit 78
fi
continue-on-error: true
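The summary step in the workflow above leans on a bash idiom worth calling out: an exit-code variable that is *empty* (never recorded, e.g. because an earlier `set -e` step aborted before the `echo`) must be treated the same as a nonzero one. The workflow itself cannot be run standalone here, so this is a minimal sketch of the pattern with a helper function of our own naming (`status_icon` is not part of the workflow):

```shell
#!/usr/bin/env bash
# Treat an empty (never-recorded) or nonzero exit code as a failure.
status_icon() {
  local code="$1"
  # -z short-circuits first, so the -ne test never sees an empty string
  if [[ -z "$code" || "$code" -ne 0 ]]; then
    echo "FAIL"
  else
    echo "PASS"
  fi
}

status_icon ""   # never recorded
status_icon 1    # recorded, nonzero
status_icon 0    # recorded, success
```

The short-circuit ordering matters: swapping the two tests would make `[[ "" -ne 0 ]]` evaluate an empty operand, which is an error in bash arithmetic comparison.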

84
.github/workflows/tests.yml vendored Normal file

@@ -0,0 +1,84 @@
name: Unit Tests
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
COMPOSIO_API_KEY: ${{ secrets.COMPOSIO_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
on:
push:
branches: [ main ]
pull_request:
jobs:
unit-run:
runs-on: ubuntu-latest
timeout-minutes: 15
strategy:
fail-fast: false
matrix:
test_suite:
- "test_vector_embeddings.py"
- "test_client.py"
- "test_client_legacy.py"
- "test_server.py"
- "test_v1_routes.py"
- "test_local_client.py"
- "test_managers.py"
- "test_base_functions.py"
- "test_tool_schema_parsing.py"
- "test_tool_rule_solver.py"
- "test_memory.py"
- "test_utils.py"
- "test_stream_buffer_readers.py"
services:
qdrant:
image: qdrant/qdrant
ports:
- 6333:6333
postgres:
image: pgvector/pgvector:pg17
ports:
- 5432:5432
env:
POSTGRES_HOST_AUTH_METHOD: trust
POSTGRES_DB: postgres
POSTGRES_USER: postgres
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Python, Poetry, and Dependencies
uses: packetcoders/action-setup-cache-python-poetry@main
with:
python-version: "3.12"
poetry-version: "1.8.2"
install-args: "-E dev -E postgres -E external-tools -E tests"
- name: Migrate database
env:
LETTA_PG_PORT: 5432
LETTA_PG_USER: postgres
LETTA_PG_PASSWORD: postgres
LETTA_PG_DB: postgres
LETTA_PG_HOST: localhost
run: |
psql -h localhost -U postgres -d postgres -c 'CREATE EXTENSION vector'
poetry run alembic upgrade head
- name: Run core unit tests
env:
LETTA_PG_PORT: 5432
LETTA_PG_USER: postgres
LETTA_PG_PASSWORD: postgres
LETTA_PG_DB: postgres
LETTA_PG_HOST: localhost
LETTA_SERVER_PASS: test_server_token
run: |
poetry run pytest -s -vv tests/${{ matrix.test_suite }}


@@ -0,0 +1,63 @@
name: Check Poetry Dependencies Changes
on:
pull_request:
paths:
- 'poetry.lock'
- 'pyproject.toml'
jobs:
check-poetry-changes:
runs-on: ubuntu-latest
permissions:
pull-requests: write
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Check for poetry.lock changes
id: check-poetry-lock
run: |
if git diff --name-only ${{ github.event.pull_request.base.sha }} ${{ github.event.pull_request.head.sha }} | grep -q "poetry.lock"; then
echo "poetry_lock_changed=true" >> $GITHUB_OUTPUT
else
echo "poetry_lock_changed=false" >> $GITHUB_OUTPUT
fi
- name: Check for pyproject.toml changes
id: check-pyproject
run: |
if git diff --name-only ${{ github.event.pull_request.base.sha }} ${{ github.event.pull_request.head.sha }} | grep -q "pyproject.toml"; then
echo "pyproject_changed=true" >> $GITHUB_OUTPUT
else
echo "pyproject_changed=false" >> $GITHUB_OUTPUT
fi
- name: Create PR comment
if: steps.check-poetry-lock.outputs.poetry_lock_changed == 'true' || steps.check-pyproject.outputs.pyproject_changed == 'true'
uses: actions/github-script@v7
with:
script: |
const poetryLockChanged = ${{ steps.check-poetry-lock.outputs.poetry_lock_changed }};
const pyprojectChanged = ${{ steps.check-pyproject.outputs.pyproject_changed }};
let message = '📦 Dependencies Alert:\n\n';
if (poetryLockChanged && pyprojectChanged) {
message += '- Both `poetry.lock` and `pyproject.toml` have been modified\n';
} else if (poetryLockChanged) {
message += '- `poetry.lock` has been modified\n';
} else if (pyprojectChanged) {
message += '- `pyproject.toml` has been modified\n';
}
message += '\nPlease review these changes carefully to ensure they are intended (cc @sarahwooders @cpacker).';
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: message
});

1027
.gitignore vendored Normal file

File diff suppressed because it is too large

33
.pre-commit-config.yaml Normal file

@@ -0,0 +1,33 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v2.3.0
hooks:
- id: check-yaml
exclude: 'docs/.*|tests/data/.*|configs/.*'
- id: end-of-file-fixer
exclude: 'docs/.*|tests/data/.*|letta/server/static_files/.*'
- id: trailing-whitespace
exclude: 'docs/.*|tests/data/.*|letta/server/static_files/.*'
- repo: local
hooks:
- id: autoflake
name: autoflake
entry: poetry run autoflake
language: system
types: [python]
args: ['--remove-all-unused-imports', '--remove-unused-variables', '--in-place', '--recursive', '--ignore-init-module-imports']
- id: isort
name: isort
entry: poetry run isort
language: system
types: [python]
args: ['--profile', 'black']
exclude: ^docs/
- id: black
name: black
entry: poetry run black
language: system
types: [python]
args: ['--line-length', '140', '--target-version', 'py310', '--target-version', 'py311']
exclude: ^docs/

25
CITATION.cff Normal file

@@ -0,0 +1,25 @@
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "Letta"
url: "https://github.com/letta-ai/letta"
preferred-citation:
type: article
authors:
- family-names: "Packer"
given-names: "Charles"
- family-names: "Wooders"
given-names: "Sarah"
- family-names: "Lin"
given-names: "Kevin"
- family-names: "Fang"
given-names: "Vivian"
- family-names: "Patil"
given-names: "Shishir G"
- family-names: "Stoica"
given-names: "Ion"
- family-names: "Gonzalez"
given-names: "Joseph E"
journal: "arXiv preprint arXiv:2310.08560"
month: 10
title: "MemGPT: Towards LLMs as Operating Systems"
year: 2023

139
CONTRIBUTING.md Normal file

@@ -0,0 +1,139 @@
# 🚀 How to Contribute to Letta
Thank you for investing time in contributing to our project! Here's a guide to get you started.
## 1. 🚀 Getting Started
### 🍴 Fork the Repository
First things first, let's get you a personal copy of Letta to play with. Think of it as your very own playground. 🎪
1. Head over to the Letta repository on GitHub.
2. In the upper-right corner, hit the 'Fork' button.
### 🚀 Clone the Repository
Now, let's bring your new playground to your local machine.
```shell
git clone https://github.com/your-username/letta.git
```
### 🧩 Install Dependencies
First, install Poetry using [the official instructions here](https://python-poetry.org/docs/#installation).
Once Poetry is installed, navigate to the Letta directory and install the Letta project with Poetry:
```shell
cd letta
poetry shell
poetry install --all-extras
```
Now when you want to use `letta`, make sure you first activate the `poetry` environment using poetry shell:
```shell
$ poetry shell
(pyletta-py3.12) $ letta run
```
Alternatively, you can use `poetry run` (which will activate the `poetry` environment for the `letta run` command only):
```shell
poetry run letta run
```
#### Installing pre-commit
We recommend installing pre-commit to ensure proper formatting during development:
```
poetry run pre-commit install
poetry run pre-commit run --all-files
```
If you don't install pre-commit, you will need to run `poetry run black .` before submitting a PR.
## 2. 🛠️ Making Changes
### 🌟 Create a Branch
Time to put on your creative hat and make some magic happen. First, let's create a new branch for your awesome changes. 🧙‍♂️
```shell
git checkout -b feature/your-feature
```
### ✏️ Make your Changes
Now, the world is your oyster! Go ahead and craft your fabulous changes. 🎨
#### Handling Database Migrations
If you are running Letta for the first time, your database will be set up automatically. If you are updating Letta, you may need to run migrations. To run migrations, use the following command:
```shell
poetry run alembic upgrade head
```
#### Creating a new Database Migration
If you have made changes to the database models, you will need to create a new migration. To create a new migration, use the following command:
```shell
poetry run alembic revision --autogenerate -m "Your migration message here"
```
Visit the [Alembic documentation](https://alembic.sqlalchemy.org/en/latest/tutorial.html) for more information on creating and running migrations.
## 3. ✅ Testing
Before we hit the 'Wow, I'm Done' button, let's make sure everything works as expected. Run tests and make sure the existing ones don't throw a fit. And if needed, create new tests. 🕵️
### Run existing tests
Running tests if you installed via poetry:
```
poetry run pytest -s tests
```
Running tests if you installed via pip:
```
pytest -s tests
```
### Creating new tests
If you added a major feature change, please add new tests in the `tests/` directory.
## 4. 🧩 Adding new dependencies
If you need to add a new dependency to Letta, please add the package via `poetry add <PACKAGE_NAME>`. This will update the `pyproject.toml` and `poetry.lock` files. If the dependency does not need to be installed by all users, make sure to mark the dependency as optional in the `pyproject.toml` file and if needed, create a new extra under `[tool.poetry.extras]`.
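As an illustrative sketch of what the paragraph above describes — the package name `some-package` and extra name `my-extra` are hypothetical placeholders, not real entries in this project — an optional dependency and its extra in `pyproject.toml` might look like:

```toml
[tool.poetry.dependencies]
# Marked optional so it is only installed when its extra is requested
some-package = { version = "^1.0", optional = true }

[tool.poetry.extras]
# Users opt in with: poetry install -E my-extra
my-extra = ["some-package"]
```

`poetry add <PACKAGE_NAME>` writes the dependency entry for you; marking it optional and wiring up the extra is the manual step.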
## 5. 🚀 Submitting Changes
### Check Formatting
Please ensure your code is formatted correctly by running:
```
poetry run black . -l 140
```
### 🚀 Create a Pull Request
You're almost there! It's time to share your brilliance with the world. 🌍
1. Visit [Letta](https://github.com/letta-ai/letta).
2. Click the "New Pull Request" button.
3. Choose the base branch (`main`) and the compare branch (your feature branch).
4. Whip up a catchy title and describe your changes in the description. 🪄
## 6. 🔍 Review and Approval
The maintainers will take a look and might suggest some cool upgrades or ask for more details. Once they give the thumbs up, your creation becomes part of Letta!
## 7. 📜 Code of Conduct
Please be sure to follow the project's Code of Conduct.
## 8. 📫 Contact
Need help or just want to say hi? We're here for you. Reach out by filing an issue on this GitHub repository or message us on our [Discord server](https://discord.gg/9GEQrxmVyE).
Thanks for making Letta even more fantastic!
## WIP - 🐋 Docker Development
If you prefer to keep your resources isolated by developing purely in containers, you can start Letta in development with:
```shell
docker compose -f compose.yaml -f development.compose.yml up
```
This will volume mount your local codebase and reload the server on file changes.

69
Dockerfile Normal file

@@ -0,0 +1,69 @@
# Start with pgvector base for builder
FROM ankane/pgvector:v0.5.1 AS builder
# Install Python and required packages
RUN apt-get update && apt-get install -y \
python3 \
python3-venv \
python3-pip \
python3-full \
build-essential \
libpq-dev \
python3-dev \
&& rm -rf /var/lib/apt/lists/*
ARG LETTA_ENVIRONMENT=PRODUCTION
ENV LETTA_ENVIRONMENT=${LETTA_ENVIRONMENT} \
POETRY_NO_INTERACTION=1 \
POETRY_VIRTUALENVS_IN_PROJECT=1 \
POETRY_VIRTUALENVS_CREATE=1 \
POETRY_CACHE_DIR=/tmp/poetry_cache
WORKDIR /app
# Create and activate virtual environment
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# Now install poetry in the virtual environment
RUN pip install --no-cache-dir poetry==1.8.2
# Copy dependency files first
COPY pyproject.toml poetry.lock ./
# Then copy the rest of the application code
COPY . .
RUN poetry lock --no-update && \
poetry install --all-extras && \
rm -rf $POETRY_CACHE_DIR
# Runtime stage
FROM ankane/pgvector:v0.5.1 AS runtime
# Install Python packages
RUN apt-get update && apt-get install -y \
python3 \
python3-venv \
&& rm -rf /var/lib/apt/lists/* \
&& mkdir -p /app
ARG LETTA_ENVIRONMENT=PRODUCTION
ENV LETTA_ENVIRONMENT=${LETTA_ENVIRONMENT} \
VIRTUAL_ENV="/app/.venv" \
PATH="/app/.venv/bin:$PATH" \
POSTGRES_USER=letta \
POSTGRES_PASSWORD=letta \
POSTGRES_DB=letta
WORKDIR /app
# Copy virtual environment and app from builder
COPY --from=builder /app .
# Copy initialization SQL if it exists
COPY init.sql /docker-entrypoint-initdb.d/
EXPOSE 8283 5432
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
CMD ["./letta/server/startup.sh"]

190
LICENSE Normal file

@@ -0,0 +1,190 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright 2023, Letta authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

PRIVACY.md (new file, 206 lines)
Privacy Policy
==============
Your privacy is critically important to us. As an overview:
- When you use Letta applications/services/websites, we collect basic (anonymous) telemetry data such as clicks, crashes, etc.
- This data helps us understand how our users are using the Letta application(s), and it informs our roadmap of future features and bugfixes.
- If you would like to opt-out of basic telemetry, you can modify your configuration file to include `telemetry_disabled = True`.
- When you use Letta hosted services (such as the hosted endpoints or Discord Bot), we collect the data that was used to render these services.
- For example, for the hosted endpoint, this includes the message request and message response.
- We may use this data to improve our services, for example to train new models in the future.
- We do NOT collect data on any of your messages or prompts unless you are using our hosted services (for example, if you are running your own model backends, this data will never be collected).
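As a sketch, the opt-out line above would sit in your Letta configuration file like this (the file path and section name shown here are assumptions; check the documentation for your install):

```
# ~/.letta/config  (path and [client] section are assumptions, not confirmed by this policy)
[client]
telemetry_disabled = True
```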
Below is our full Privacy Policy, which expands the overview in full detail.
### What This Policy Covers
This Privacy Policy applies to information that we collect about you when you use:
- Our websites (including letta.ai, the Letta Discord server, and the repository github.com/cpacker/Letta);
- Our applications (including the Python package, Discord Bot, and any other hosted services);
- Our other Letta products, services, and features that are available on or through our websites;
Throughout this Privacy Policy we'll refer to our websites, mobile applications, and other products and services collectively as "Services."
Below we explain how we collect, use, and share information about you, along with the choices that you have with respect to that information.
### Information We Collect
We only collect information about you if we have a reason to do so — for example, to provide our Services, to communicate with you, or to make our Services better.
We collect this information from three sources: if and when you provide information to us, automatically through operating our Services, and from outside sources. Let's go over the information that we collect.
#### *Information You Provide to Us*
It's probably no surprise that we collect information that you provide to us directly. Here are some examples:
- **Basic account information:** We ask for basic information from you in order to set up your account.
- **Public profile information:** If you have an account with us, we collect the information that you provide for your public profile.
- **Credentials:** Depending on the Services you use, you may provide us with credentials for your self-hosted website (like SSH, FTP, and SFTP username and password).
- **Communications with us (hi there!):** You may also provide us with information when you post on GitHub, Discord, or message us through separate channels.
#### *Information We Collect Automatically*
We also collect some information automatically:
- **Log information:** We collect information that web browsers, mobile devices, and servers typically make available, including the browser type, IP address, unique device identifiers, language preference, referring site, the date and time of access, operating system, and mobile network information. We collect log information when you use our Services.
- **Usage information:** We collect information about your usage of our Services. We use this information to, for example, provide our Services to you, get insights on how people use our Services so we can make our Services better, and understand and make predictions about user retention.
- **Location information:** We may determine the location of your device from your IP address. We collect and use this information to, for example, calculate how many people visit our Services from certain geographic regions.
- **Stored information:** We may access information stored on your devices if you upload this information to our Services.
- **Information from cookies & other technologies:** A cookie is a string of information that a website stores on a visitor's computer, and that the visitor's browser provides to the website each time the visitor returns. Pixel tags (also called web beacons) are small blocks of code placed on websites and emails. We may use cookies and other technologies like pixel tags to help us identify and track visitors, usage, and access preferences for our Services.
#### *Information We Collect from Other Sources*
We may also get information about you from other sources. For example:
- **Third Party Login:** If you create or log in to our Services through another service (like Google) we'll receive associated login information (e.g. a connection token, your username, your email address)
The information we receive depends on which services you use or authorize and what options are available.
Third-party services may also give us information, like mailing addresses for individuals who are not yet our users (but we hope will be!). We use this information for marketing purposes like postcards and other mailers advertising our Services.
### How and Why We Use Information
#### *Purposes for Using Information*
We use information about you for the purposes listed below:
- **To provide our Services.** For example, to run a model on our hosted services to deliver a message to your client.
- **To ensure quality, maintain safety, and improve our Services.** For example, by providing automatic upgrades and new versions of our Services. Or, for example, by monitoring and analyzing how users interact with our Services so we can create new features that we think our users will enjoy and that will help them create and manage websites more efficiently or make our Services easier to use.
- **To protect our Services, our users, and the public.** For example, by detecting security incidents; detecting and protecting against malicious, deceptive, fraudulent, or illegal activity; fighting spam; complying with our legal obligations; and protecting the rights and property of Letta and others, which may result in us, for example, declining a transaction or terminating Services.
- **To fix problems with our Services.** For example, by monitoring, debugging, repairing, and preventing issues.
- **To customize the user experience.** For example, to personalize your experience by serving you relevant notifications for our Services.
#### *Legal Bases for Collecting and Using Information*
A note for those in the European Union: under EU data protection laws, our use of information about you rests on the following legal grounds:
(1) The use is necessary in order to fulfill our commitments to you under the applicable terms of service or other agreements with you or is necessary to administer your account — for example, in order to enable access to our website on your device or charge you for a paid plan; or
(2) The use is necessary for compliance with a legal obligation; or
(3) The use is necessary in order to protect your vital interests or those of another person; or
(4) We have a legitimate interest in using your information — for example, to provide and update our Services; to improve our Services so that we can offer you an even better user experience; to safeguard our Services; to communicate with you; to measure, gauge, and improve the effectiveness of our advertising; and to understand our user retention and attrition; to monitor and prevent any problems with our Services; and to personalize your experience; or
(5) You have given us your consent
### Sharing Information
#### *How We Share Information*
We share information about you in limited circumstances, and with appropriate safeguards on your privacy.
- **Subsidiaries, independent contractors, and research partners:** We may disclose information about you to our subsidiaries, independent contractors, and/or research partners who need the information to help us provide our Services or process the information on our behalf. We require our subsidiaries and independent contractors to follow this Privacy Policy for any personal information that we share with them. This includes the transfer of data collected on our Services to facilitate model training and refinement.
- **Third-party vendors:** We may share information about you with third-party vendors who need the information in order to provide their services to us, or to provide their services to you or your site. This includes vendors that help us provide our Services to you (such as infrastructure or model serving companies); those that help us understand and enhance our Services (like analytics providers); those that make tools to help us run our operations (like programs that help us with task management, scheduling, word processing, email and other communications, and collaboration among our teams); other third-party tools that help us manage operations; and companies that make products available on our websites, who may need information about you in order to, for example, provide technical or other support services to you.
- **Legal and regulatory requirements:** We may disclose information about you in response to a subpoena, court order, or other governmental request.
- **To protect rights, property, and others:** We may disclose information about you when we believe in good faith that disclosure is reasonably necessary to protect the property or rights of Letta, third parties, or the public at large.
- **Asset/IP transfers:** If any transfer of Letta assets were to happen, this Privacy Policy would continue to apply to your information and the party receiving your information may continue to use your information, but only consistent with this Privacy Policy.
- **With your consent:** We may share and disclose information with your consent or at your direction.
- **Aggregated or de-identified information:** We may share information that has been aggregated or de-identified, so that it can no longer reasonably be used to identify you. For instance, we may publish aggregate statistics about the use of our Services, or share a hashed version of your email address to facilitate customized ad campaigns on other platforms.
- **Published support requests:** If you send us a request for assistance (for example, via a support email or one of our other feedback mechanisms), we reserve the right to publish that request in order to clarify or respond to your request, or to help us support other users.
#### *Information Shared Publicly*
Information that you choose to make public is — you guessed it — disclosed publicly.
That means information like your public profile, posts, other content that you make public on your website, and your "Likes" and comments on other websites are all available to others — and we hope they get a lot of views!
For example, the photo that you upload to your public profile, or a default image if you haven't uploaded one, is your **G**lobally **R**ecognized Avatar, or Gravatar — get it? :) Your Gravatar, along with other public profile information, displays alongside the comments and "Likes" that you make on other users' websites while logged in to your WordPress.com account. Your Gravatar and public profile information may also display with your comments, "Likes," and other interactions on websites that use our Gravatar service, if the email address associated with your account is the same email address you use on the other website.
Please keep all of this in mind when deciding what you would like to share publicly.
### How Long We Keep Information
We generally discard information about you when it's no longer needed for the purposes for which we collect and use it — described in the section above on How and Why We Use Information — and we're not legally required to keep it.
### Security
While no online service is 100% secure, we work very hard to protect information about you against unauthorized access, use, alteration, or destruction, and take reasonable measures to do so. We monitor our Services for potential vulnerabilities and attacks. To enhance the security of your account, we encourage you to enable our advanced security settings when available.
### Choices
You have several choices available when it comes to information about you:
- **Opt out of telemetry:** You can opt out of basic telemetry by modifying your configuration file.
- **Limit use of hosted services:** We only retain information on model inputs/outputs when you use our hosted services.
### Your Rights
If you are located in certain parts of the world, including some US states and countries that fall under the scope of the European General Data Protection Regulation (aka the "GDPR"), you may have certain rights regarding your personal information, like the right to request access to or deletion of your data.
#### *European General Data Protection Regulation (GDPR)*
If you are located in a country that falls under the scope of the GDPR, data protection laws give you certain rights with respect to your personal data, subject to any exemptions provided by the law, including the rights to:
- Request access to your personal data;
- Request correction or deletion of your personal data;
- Object to our use and processing of your personal data;
- Request that we limit our use and processing of your personal data; and
- Request portability of your personal data.
You also have the right to make a complaint to a government supervisory authority.
#### *US Privacy Laws*
Laws in some US states, including California, Colorado, Connecticut, Utah, and Virginia, require us to provide residents with additional information about the categories of personal information we collect and share, where we get that personal information, and how and why we use it. You'll find that information in this section (if you are a California resident, please note that this is the Notice at Collection we are required to provide you under California law).
In the last 12 months, we collected the following categories of personal information, depending on the Services used:
- Identifiers (like your name, contact information, and device and online identifiers);
- Characteristics protected by law (for example, you might provide your gender as part of a research survey for us or you may choose to voluntarily disclose your race or veteran status);
- Internet or other electronic network activity information (such as your usage of our Services);
- Application and user data (such as model data and user inputs used to render our Services)
- Geolocation data (such as your location based on your IP address);
- Audio, electronic, visual or similar information (such as your profile picture, if you uploaded one);
- Inferences we make (such as likelihood of retention or attrition).
We collect personal information for the purposes described in the "How and Why We Use Information" section. And we share this information with the categories of third parties described in the "Sharing Information" section. We retain this information for the length of time described in our "How Long We Keep Information" section.
In some US states you have additional rights subject to any exemptions provided by your state's respective law, including the right to:
- Request a copy of the specific pieces of information we collect about you and, if you're in California, to know the categories of personal information we collect, the categories of business or commercial purpose for collecting and using it, the categories of sources from which the information came, and the categories of third parties we share it with;
- Request deletion of personal information we collect or maintain;
- Request correction of personal information we collect or maintain;
- Opt out of the sale or sharing of personal information;
- Receive a copy of your information in a readily portable format; and
- Not receive discriminatory treatment for exercising your rights.
***Right to Opt Out***
The procedures for opting out of data collection on our Services are described in the "Choices" section. We do not collect or process your sensitive (and potentially sensitive) personal information except where it is strictly necessary to provide you with our service or improve our services in the future, where the processing is not for the purpose of inferring characteristics about you, or for other purposes that do not require an option to limit under California law. We don't knowingly sell or share personal information of those under 16.
#### *Contacting Us About These Rights*
If you'd like to contact us about one of these rights, scroll down to "How to Reach Us" to, well, find out how to reach us. When you contact us about one of your rights under this section, we'll need to verify that you are the right person before we disclose or delete anything. For example, if you are a user, we will need you to contact us from the email address associated with your account. You can also designate an authorized agent to make a request on your behalf by giving us written authorization. We may still require you to verify your identity with us.
#### ***Appeals Process for Rights Requests Denials***
In some circumstances we may deny your request to exercise one of these rights. For example, if we cannot verify that you are the account owner we may deny your request to access the personal information associated with your account. As another example, if we are legally required to maintain a copy of your personal information we may deny your request to delete your personal information.
In the event that we deny your request, we will communicate this fact to you in writing. You may appeal our decision by responding in writing to our denial email and stating that you would like to appeal. All appeals will be reviewed by an internal expert who was not involved in your original request. In the event that your appeal is also denied this information will be communicated to you in writing. Please note that the appeal process does not apply to job applicants.
If your appeal is denied, in some US states (Colorado, Connecticut, and Virginia) you may refer the denied appeal to the state attorney general if you believe the denial is in conflict with your legal rights. The process for how to do this will be communicated to you in writing at the same time we send you our decision about your appeal.
### How to Reach Us
If you have a question about this Privacy Policy, please contact us via [email](mailto:contact@charlespacker.com).
### Other Things You Should Know (Keep Reading!)
#### *Ads and Analytics Services Provided by Others*
Ads appearing on any of our Services may be delivered by advertising networks. Other parties may also provide analytics services via our Services. These ad networks and analytics providers may set tracking technologies (like cookies) to collect information about your use of our Services and across other websites and online services. These technologies allow these third parties to recognize your device to compile information about you or others who use your device. This information allows us and other companies to, among other things, analyze and track usage, determine the popularity of certain content, and deliver ads that may be more targeted to your interests. Please note this Privacy Policy only covers the collection of information by Letta and does not cover the collection of information by any third-party advertisers or analytics providers.
#### *Third-Party Software and Services*
If you'd like to use third-party software or services (such as forks of our code), please keep in mind that interacting with them may mean providing information about yourself (or your site visitors) to those third parties. For example, some third-party services may request or require access to your (yours, your visitors', or customers') data via a pixel or cookie. Please note that if you use the third-party service or grant access, your data will be handled in accordance with the third party's privacy policy and practices. We don't own or control these third parties, and they have their own rules about information collection, use, and sharing, which you should review before using the software or services.
### Privacy Policy Changes
Although most changes are likely to be minor, we may change this Privacy Policy from time to time. We encourage visitors to frequently check this page for any changes. If we make changes, we will notify you by revising the policy in the public repository (the change log is publicly viewable). Your further use of the Services after a change to our Privacy Policy will be subject to the updated policy.
### Creative Commons ShareAlike License
This privacy policy is derived from the [Automattic Privacy Policy](https://github.com/Automattic/legalmattic), distributed under a Creative Commons ShareAlike license. Thank you, Automattic!

README.md (new file, 304 lines)
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/letta-ai/letta/refs/heads/main/assets/Letta-logo-RGB_GreyonTransparent_cropped_small.png">
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/letta-ai/letta/refs/heads/main/assets/Letta-logo-RGB_OffBlackonTransparent_cropped_small.png">
<img alt="Letta logo" src="https://raw.githubusercontent.com/letta-ai/letta/refs/heads/main/assets/Letta-logo-RGB_GreyonOffBlack_cropped_small.png" width="500">
</picture>
</p>
<div align="center">
<h1>Letta (previously MemGPT)</h1>
**☄️ New release: Letta Agent Development Environment (_read more [here](#-access-the-letta-ade-agent-development-environment)_) ☄️**
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/letta-ai/letta/refs/heads/main/assets/example_ade_screenshot.png">
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/letta-ai/letta/refs/heads/main/assets/example_ade_screenshot_light.png">
<img alt="Letta logo" src="https://raw.githubusercontent.com/letta-ai/letta/refs/heads/main/assets/example_ade_screenshot.png" width="800">
</picture>
</p>
---
<h3>
[Homepage](https://letta.com) // [Documentation](https://docs.letta.com) // [ADE](https://app.letta.com) // [Letta Cloud](https://forms.letta.com/early-access)
</h3>
**👾 Letta** is an open source framework for building stateful LLM applications. You can use Letta to build **stateful agents** with advanced reasoning capabilities and transparent long-term memory. The Letta framework is white box and model-agnostic.
[![Discord](https://img.shields.io/discord/1161736243340640419?label=Discord&logo=discord&logoColor=5865F2&style=flat-square&color=5865F2)](https://discord.gg/letta)
[![Twitter Follow](https://img.shields.io/badge/Follow-%40Letta__AI-1DA1F2?style=flat-square&logo=x&logoColor=white)](https://twitter.com/Letta_AI)
[![arxiv 2310.08560](https://img.shields.io/badge/Research-2310.08560-B31B1B?logo=arxiv&style=flat-square)](https://arxiv.org/abs/2310.08560)
[![Apache 2.0](https://img.shields.io/badge/License-Apache%202.0-silver?style=flat-square)](LICENSE)
[![Release](https://img.shields.io/github/v/release/cpacker/MemGPT?style=flat-square&label=Release&color=limegreen)](https://github.com/cpacker/MemGPT/releases)
[![Docker](https://img.shields.io/docker/v/letta/letta?style=flat-square&logo=docker&label=Docker&color=0db7ed)](https://hub.docker.com/r/letta/letta)
[![GitHub](https://img.shields.io/github/stars/cpacker/MemGPT?style=flat-square&logo=github&label=Stars&color=gold)](https://github.com/cpacker/MemGPT)
<a href="https://trendshift.io/repositories/3612" target="_blank"><img src="https://trendshift.io/api/badge/repositories/3612" alt="cpacker%2FMemGPT | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>
> [!IMPORTANT]
> **Looking for MemGPT?** You're in the right place!
>
> The MemGPT package and Docker image have been renamed to `letta` to clarify the distinction between MemGPT *agents* and the Letta API *server* / *runtime* that runs LLM agents as *services*. Read more about the relationship between MemGPT and Letta [here](https://www.letta.com/blog/memgpt-and-letta).
---
## ⚡ Quickstart
_The recommended way to use Letta is to run it with Docker. To install Docker, see [Docker's installation guide](https://docs.docker.com/get-docker/). For issues with installing Docker, see [Docker's troubleshooting guide](https://docs.docker.com/desktop/troubleshoot-and-support/troubleshoot/). You can also install Letta using `pip` (see instructions [below](#-quickstart-pip))._
### 🌖 Run the Letta server
> [!NOTE]
> Letta agents live inside the Letta server, which persists them to a database. You can interact with the Letta agents inside your Letta server via the [REST API](https://docs.letta.com/api-reference) + Python / TypeScript SDKs, and the [Agent Development Environment](https://app.letta.com) (a graphical interface).
The Letta server can be connected to various LLM API backends ([OpenAI](https://docs.letta.com/models/openai), [Anthropic](https://docs.letta.com/models/anthropic), [vLLM](https://docs.letta.com/models/vllm), [Ollama](https://docs.letta.com/models/ollama), etc.). To enable access to these LLM API providers, set the appropriate environment variables when you use `docker run`:
```sh
# replace `~/.letta/.persist/pgdata` with wherever you want to store your agent data
docker run \
-v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
-p 8283:8283 \
-e OPENAI_API_KEY="your_openai_api_key" \
letta/letta:latest
```
If you have many different LLM API keys, you can also set up a `.env` file instead and pass that to `docker run`:
```sh
# using a .env file instead of passing environment variables
docker run \
-v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
-p 8283:8283 \
--env-file .env \
letta/letta:latest
```
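As a sketch, such a `.env` file might look like the following. The variable names are taken from the repository's `.env.example`; the key value is a placeholder, and the commented-out Ollama lines are optional:

```shell
# .env — passed to `docker run --env-file .env` (placeholder values)
OPENAI_API_KEY=sk-your-key-here

# Or point Letta at a local Ollama backend instead (names from .env.example):
#LETTA_LLM_ENDPOINT=http://host.docker.internal:11434
#LETTA_LLM_ENDPOINT_TYPE=ollama
#LETTA_LLM_MODEL=dolphin2.2-mistral:7b-q6_K
#LETTA_LLM_CONTEXT_WINDOW=8192
```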
Once the Letta server is running, you can access it via port `8283` (e.g. sending REST API requests to `http://localhost:8283/v1`). You can also connect your server to the Letta ADE to access and manage your agents in a web interface.
### 👾 Access the [Letta ADE (Agent Development Environment)](https://app.letta.com)
> [!NOTE]
> The Letta ADE is a graphical user interface for creating, deploying, interacting with, and observing your Letta agents.
>
> For example, if you're running a Letta server to power an end-user application (such as a customer support chatbot), you can use the ADE to test, debug, and observe the agents in your server. You can also use the ADE as a general chat interface to interact with your Letta agents.
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/letta-ai/letta/refs/heads/main/assets/example_ade_screenshot.png">
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/letta-ai/letta/refs/heads/main/assets/example_ade_screenshot_light.png">
<img alt="ADE screenshot" src="https://raw.githubusercontent.com/letta-ai/letta/refs/heads/main/assets/example_ade_screenshot.png" width="800">
</picture>
</p>
The ADE can connect to self-hosted Letta servers (e.g. a Letta server running on your laptop), as well as the Letta Cloud service. When connected to a self-hosted / private server, the ADE uses the Letta REST API to communicate with your server.
#### 🖥️ Connecting the ADE to your local Letta server
To connect the ADE with your local Letta server, simply:
1. Start your Letta server (`docker run ...`)
2. Visit [https://app.letta.com](https://app.letta.com) and you will see "Local server" as an option in the left panel
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/letta-ai/letta/refs/heads/main/assets/example_ade_screenshot_agents.png">
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/letta-ai/letta/refs/heads/main/assets/example_ade_screenshot_agents_light.png">
<img alt="Letta logo" src="https://raw.githubusercontent.com/letta-ai/letta/refs/heads/main/assets/example_ade_screenshot_agents.png" width="800">
</picture>
</p>
🔐 To password protect your server, include `SECURE=true` and `LETTA_SERVER_PASSWORD=yourpassword` in your `docker run` command:
```sh
# If LETTA_SERVER_PASSWORD isn't set, the server will autogenerate a password
docker run \
-v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
-p 8283:8283 \
--env-file .env \
-e SECURE=true \
-e LETTA_SERVER_PASSWORD=yourpassword \
letta/letta:latest
```
#### 🌐 Connecting the ADE to an external (self-hosted) Letta server
If your Letta server isn't running on `localhost` (for example, you deployed it on an external service like EC2):
1. Click "Add remote server"
2. Enter your desired server name, the IP address of the server, and the server password (if set)
---
## 🧑‍🚀 Frequently asked questions (FAQ)
> _"Do I need to install Docker to use Letta?"_
No, you can install Letta using `pip` (via `pip install -U letta`), as well as from source (via `poetry install`). See instructions below.
> _"What's the difference between installing with `pip` vs `Docker`?"_
Letta gives your agents persistence (they live indefinitely) by storing all your agent data in a database. Letta is designed to be used with [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL); however, since it is not possible to install PostgreSQL via `pip`, the `pip` install of Letta defaults to [SQLite](https://www.sqlite.org/). If you have a PostgreSQL instance running on your own computer, you can still connect Letta (installed via `pip`) to PostgreSQL by setting the environment variable `LETTA_PG_URI`.
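For example, a `pip`-installed Letta can be pointed at PostgreSQL like so. The connection URI below is a standard libpq-style placeholder; the user, password, and database name are assumptions you should replace with your own:

```shell
# Use PostgreSQL instead of the default SQLite backend (placeholder credentials)
export LETTA_PG_URI="postgresql://letta_user:letta_pass@localhost:5432/letta"
# then start the server as usual:
# letta server
```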
**Database migrations are not officially supported for Letta when using SQLite**, so if you would like to ensure that you're able to upgrade to the latest Letta version and migrate your Letta agent data, make sure that you're using PostgreSQL as your Letta database backend. Full compatibility table below:
| Installation method | Start server command | Database backend | Data migrations supported? |
|---|---|---|---|
| `pip install letta` | `letta server` | SQLite | ❌ |
| `pip install letta` | `export LETTA_PG_URI=...` + `letta server` | PostgreSQL | ✅ |
| *[Install Docker](https://www.docker.com/get-started/)* |`docker run ...` ([full command](#-run-the-letta-server)) | PostgreSQL | ✅ |
> _"How do I use the ADE locally?"_
To connect the ADE to your local Letta server, simply run your Letta server (make sure you can access `localhost:8283`) and go to [https://app.letta.com](https://app.letta.com). If you would like to use the old version of the ADE (that runs on `localhost`), downgrade to Letta version `<=0.5.0`.
> _"If I connect the ADE to my local server, does my agent data get uploaded to letta.com?"_
No, the data in your Letta server database stays on your machine. The Letta ADE web application simply connects to your local Letta server (via the REST API) and provides a graphical interface on top of it to visualize your local Letta data in your browser's local state.
> _"Do I have to use your ADE? Can I build my own?"_
The ADE is built on top of the (fully open source) Letta server and Letta Agents API. You can build your own application like the ADE on top of the REST API (view the documentation [here](https://docs.letta.com/api-reference)).
> _"Can I interact with Letta agents via the CLI?"_
The recommended way to use Letta is via the REST API and ADE; however, you can also access your agents via the CLI.
<details>
<summary>View instructions for running the Letta CLI</summary>
You can chat with your agents via the Letta CLI tool (`letta run`). If you have a Letta Docker container running, you can use `docker exec` to run the Letta CLI inside the container:
```sh
# replace `<letta_container_id>` with the ID of your Letta container, found via `docker ps`
docker exec -it <letta_container_id> letta run
```
You can also use `docker ps` within the command to automatically find the ID of your Letta container:
```sh
docker exec -it $(docker ps -q -f ancestor=letta/letta) letta run
```
In the CLI tool, you'll be able to create new agents, or load existing agents:
```
🧬 Creating new agent...
? Select LLM model: letta-free [type=openai] [ip=https://inference.memgpt.ai]
? Select embedding model: letta-free [type=hugging-face] [ip=https://embeddings.memgpt.ai]
-> 🤖 Using persona profile: 'sam_pov'
-> 🧑 Using human profile: 'basic'
-> 🛠️ 8 tools: send_message, pause_heartbeats, conversation_search, conversation_search_date, archival_memory_insert, archival_memory_search, core_memory_append, core_memory_replace
🎉 Created new agent 'InspiringSpinach'
Hit enter to begin (will request first Letta message)
💭 User engaged. Recognize as Chad. Acknowledge presence and establish connection. Explore interests and engage in meaningful discourse.
🤖 Hello, Chad. It's nice to meet you. I am keen on learning and exploring new ideas and concepts. Is there a specific topic you'd like to discuss today?
> Enter your message: my name is Brad, not Chad...
💭 A discrepancy in identity. Reflect this change in core memory. Apologize for the oversight, and rectify the error.
⚡🧠 [function] updating memory with core_memory_replace
First name: Chad
→ First name: Brad
💭 Apologize for the mistake in recognizing his name, and begin the conversation anew, addressing him by the correct name this time.
🤖 I apologize for the oversight, Brad. Let's restart. It's a pleasure to meet you, Brad. Is there a specific topic you'd like to discuss today?
> Enter your message:
```
</details>
---
## ⚡ Quickstart (pip)
> [!WARNING]
> **Database migrations are not officially supported with `SQLite`**
>
> When you install Letta with `pip`, the default database backend is `SQLite` (you can still use an external `postgres` service with your `pip` install of Letta by setting `LETTA_PG_URI`).
>
> We do not officially support migrations between Letta versions with `SQLite` backends, only `postgres`. If you would like to keep your agent data across multiple Letta versions, we highly recommend using the Docker install method, which is the easiest way to use `postgres` with Letta.
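If you prefer to keep the `pip` install while still using `postgres`, you can export `LETTA_PG_URI` before starting the server. A minimal sketch, where the connection-string values are placeholders for your own Postgres instance:

```sh
# Placeholder credentials/host/database; replace with your own Postgres setup
export LETTA_PG_URI="postgresql://letta:letta@localhost:5432/letta"
letta server
```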
<details>
<summary>View instructions for installing with pip</summary>
You can also install Letta with `pip`, which will default to using `SQLite` for the database backend (whereas Docker defaults to `postgres`).
### Step 1 - Install Letta using `pip`
```sh
pip install -U letta
```
### Step 2 - Set your environment variables for your chosen LLM / embedding providers
```sh
export OPENAI_API_KEY=sk-...
```
For Ollama (see our full [documentation](https://docs.letta.com/install) for examples of how to set up various providers):
```sh
export OLLAMA_BASE_URL=http://localhost:11434
```
### Step 3 - Run the Letta CLI
You can create agents and chat with them via the Letta CLI tool (`letta run`):
```sh
letta run
```
```
🧬 Creating new agent...
? Select LLM model: letta-free [type=openai] [ip=https://inference.memgpt.ai]
? Select embedding model: letta-free [type=hugging-face] [ip=https://embeddings.memgpt.ai]
-> 🤖 Using persona profile: 'sam_pov'
-> 🧑 Using human profile: 'basic'
-> 🛠️ 8 tools: send_message, pause_heartbeats, conversation_search, conversation_search_date, archival_memory_insert, archival_memory_search, core_memory_append, core_memory_replace
🎉 Created new agent 'InspiringSpinach'
Hit enter to begin (will request first Letta message)
💭 User engaged. Recognize as Chad. Acknowledge presence and establish connection. Explore interests and engage in meaningful discourse.
🤖 Hello, Chad. It's nice to meet you. I am keen on learning and exploring new ideas and concepts. Is there a specific topic you'd like to discuss today?
> Enter your message: my name is Brad, not Chad...
💭 A discrepancy in identity. Reflect this change in core memory. Apologize for the oversight, and rectify the error.
⚡🧠 [function] updating memory with core_memory_replace
First name: Chad
→ First name: Brad
💭 Apologize for the mistake in recognizing his name, and begin the conversation anew, addressing him by the correct name this time.
🤖 I apologize for the oversight, Brad. Let's restart. It's a pleasure to meet you, Brad. Is there a specific topic you'd like to discuss today?
> Enter your message:
```
### Step 4 - Run the Letta server
You can start the Letta API server with `letta server` (see the full API reference [here](https://docs.letta.com/api-reference)):
```sh
letta server
```
```
Initializing database...
Running: uvicorn server:app --host localhost --port 8283
INFO: Started server process [47750]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://localhost:8283 (Press CTRL+C to quit)
```
</details>
---
## 🤗 How to contribute
Letta is an open source project built by over a hundred contributors. There are many ways to get involved in the Letta OSS project!
* **Contribute to the project**: Interested in contributing? Start by reading our [Contribution Guidelines](https://github.com/cpacker/MemGPT/tree/main/CONTRIBUTING.md).
* **Ask a question**: Join our community on [Discord](https://discord.gg/letta) and direct your questions to the `#support` channel.
* **Report issues or suggest features**: Have an issue or a feature request? Please submit them through our [GitHub Issues page](https://github.com/cpacker/MemGPT/issues).
* **Explore the roadmap**: Curious about future developments? View and comment on our [project roadmap](https://github.com/cpacker/MemGPT/issues/1533).
* **Join community events**: Stay updated with the [event calendar](https://lu.ma/berkeley-llm-meetup) or follow our [Twitter account](https://twitter.com/Letta_AI).
---
***Legal notices**: By using Letta and related Letta services (such as the Letta endpoint or hosted service), you are agreeing to our [privacy policy](https://www.letta.com/privacy-policy) and [terms of service](https://www.letta.com/terms-of-service).*

TERMS.md Normal file
@@ -0,0 +1,42 @@
Terms of Service
================
**Binding Agreement**. This is a binding contract ("Terms") between you and the developers of Letta and associated services ("we," "us," "our," "Letta developers," "Letta"). These Terms apply whenever you use any of the sites, apps, products, or services ("Services") we offer, in existence now or created in the future. Further, we may automatically upgrade our Services, and these Terms will apply to such upgrades. By accessing or using the Services, you agree to be bound by these Terms. If you use our services on behalf of an organization, you agree to these terms on behalf of that organization. If you do not agree to these Terms, you may not use the Services.
**Privacy**. See our Privacy Policy for details on how we collect, store, and share user information.
**Age Restrictions**. The Services are not intended for users who are under the age of 13. In order to create an account for the Services, you must be 13 years of age or older. By registering, you represent and warrant that you are 13 years of age or older. If children between the ages of 13 and 18 wish to use the Services, they must be registered by their parent or guardian.
**Your Content and Permissions**. Content may be uploaded to, shared with, or generated by Letta -- files, videos, links, music, documents, code, and text ("Your Content"). Your Content is yours. Letta does not claim any right, title, or interest in Your Content.
You grant us a non-exclusive, worldwide, royalty free license to do the things we need to do to provide the Services, including but not limited to storing, displaying, reproducing, and distributing Your Content. This license extends to trusted third parties we work with.
**Content Guidelines**. You are fully responsible for Your Content. You may not copy, upload, download, or share Your Content unless you have the appropriate rights to do so. It is your responsibility to ensure that Your Content abides by applicable laws, these Terms, and with our user guidelines. We don't actively review Your Content.
**Account Security**. You are responsible for safeguarding your password to the Services, making sure that others don't have access to it, and keeping your account information current. You must immediately notify the Letta developers of any unauthorized uses of your account or any other breaches of security. Letta will not be liable for your acts or omissions, including any damages of any kind incurred as a result of your acts or omissions.
**Changes to these Terms**. We are constantly updating our Services, and that means sometimes we have to change the legal terms under which our Services are offered. If we make changes that are material, we will let you know, for example by posting on one of our blogs, or by sending you an email or other communication before the changes take effect. The notice will designate a reasonable period of time after which the new Terms will take effect. If you disagree with our changes, then you should stop using Letta within the designated notice period. Your continued use of Letta will be subject to the new Terms. However, any dispute that arose before the changes shall be governed by the Terms (including the binding individual arbitration clause) that were in place when the dispute arose.
You can access archived versions of our policies at our repository.
**DMCA Policy**. We respond to notices of alleged copyright infringement in accordance with the Digital Millennium Copyright Act ("DMCA"). If you believe that the content of a Letta account infringes your copyrights, you can notify us using the published email in our privacy policy.
**Our Intellectual Property**: The Services and all materials contained therein, including, without limitation, Letta logo, and all designs, text, graphics, pictures, information, data, software, sound files, other files, and the selection and arrangement thereof (collectively, the "Letta Materials") are the property of Letta or its licensors or users and are protected by U.S. and international intellectual property laws. You are granted a personal, limited, non-sublicensable, non-exclusive, revocable license to access and use Letta Materials in accordance with these Terms for the sole purpose of enabling you to use and enjoy the Services.
Other trademarks, service marks, graphics, and logos used in connection with the Services may be the trademarks of other third parties. Your use of the Services grants you no right or license to reproduce or otherwise use any Letta or third-party trademarks.
**Termination**. You are free to stop using the Services at any time. We also reserve the right to suspend or end the Services at any time at our discretion and without notice. For example, we may suspend or terminate your use of the Services if you fail to comply with these Terms, or use the Services in a manner that would cause us legal liability, disrupt the Services, or disrupt others' use of the Services.
**Disclaimer of Warranties**. Letta makes no warranties of any kind with respect to Letta or your use of the Services.
**Limitation of Liability**. Letta shall not have any liability for any indirect, incidental, consequential, special, or exemplary damages under any theory of liability arising out of, or relating to, these Terms or your use of Letta. As a condition of access to Letta, you understand and agree that Letta's liability shall not exceed $4.20.
**Indemnification**. You agree to indemnify and hold harmless Letta, its developers, its contributors, its contractors, and its licensors, and their respective directors, officers, employees, and agents from and against any and all losses, liabilities, demands, damages, costs, claims, and expenses, including attorneys' fees, arising out of or related to your use of our Services, including but not limited to your violation of the Agreement or any agreement with a provider of third-party services used in connection with the Services or applicable law, Content that you post, and any ecommerce activities conducted through your or another user's website.
**Exceptions to Agreement to Arbitrate**. Claims for injunctive or equitable relief or claims regarding intellectual property rights may be brought in any competent court without the posting of a bond.
**No Class Actions**. You may resolve disputes with us only on an individual basis; you may not bring a claim as a plaintiff or a class member in a class, consolidated, or representative action. **Class arbitrations, class actions, private attorney general actions, and consolidation with other arbitrations are not permitted.**
**Governing Law**. You agree that these Terms, and your use of Letta, are governed by California law, in the United States of America, without regard to its principles of conflicts of law.
**Creative Commons Sharealike License**. This document is derived from the [Automattic legalmattic repository](https://github.com/Automattic/legalmattic) distributed under a Creative Commons Sharealike license. Thank you Automattic!

alembic.ini Normal file
@@ -0,0 +1,116 @@
# A generic, single database configuration.
[alembic]
# path to migration scripts
# Use forward slashes (/) also on windows to provide an os agnostic path
script_location = alembic
# template used to generate migration file names; The default value is %%(rev)s_%%(slug)s
# Uncomment the line below if you want the files to be prepended with date and time
# see https://alembic.sqlalchemy.org/en/latest/tutorial.html#editing-the-ini-file
# for all available tokens
# file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python>=3.9 or backports.zoneinfo library.
# Any required deps can be installed by adding `alembic[tz]` to the pip requirements
# string value is passed to ZoneInfo()
# leave blank for localtime
# timezone =
# max length of characters to apply to the "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; This defaults
# to alembic/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator" below.
# version_locations = %(here)s/bar:%(here)s/bat:alembic/versions
# version path separator; As mentioned above, this is the character used to split
# version_locations. The default within new alembic.ini files is "os", which uses os.pathsep.
# If this key is omitted entirely, it falls back to the legacy behavior of splitting on spaces and/or commas.
# Valid values for version_path_separator are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os # Use os.pathsep. Default configuration used for new projects.
# set to 'true' to search source files recursively
# in each "version_locations" directory
# new in Alembic version 1.10
# recursive_version_locations = false
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
sqlalchemy.url = driver://user:pass@localhost/dbname
[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples
# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME
# lint with attempts to fix using "ruff" - use the exec runner, execute a binary
# hooks = ruff
# ruff.type = exec
# ruff.executable = %(here)s/.venv/bin/ruff
# ruff.options = --fix REVISION_SCRIPT_FILENAME
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S

alembic/README Normal file

@@ -0,0 +1 @@
Generic single-database configuration.

alembic/env.py Normal file

@@ -0,0 +1,89 @@
import os
from logging.config import fileConfig
from sqlalchemy import engine_from_config, pool
from alembic import context
from letta.config import LettaConfig
from letta.orm import Base
from letta.settings import settings
letta_config = LettaConfig.load()
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
if settings.letta_pg_uri_no_default:
config.set_main_option("sqlalchemy.url", settings.letta_pg_uri)
print(f"Using database: {settings.letta_pg_uri}")
else:
config.set_main_option("sqlalchemy.url", "sqlite:///" + os.path.join(letta_config.recall_storage_path, "sqlite.db"))
print("Using database: " + os.path.join(letta_config.recall_storage_path, "sqlite.db"))
# Interpret the config file for Python logging.
# This line sets up loggers basically.
if config.config_file_name is not None:
fileConfig(config.config_file_name)
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = Base.metadata
# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
def run_migrations_offline() -> None:
"""Run migrations in 'offline' mode.
This configures the context with just a URL
and not an Engine, though an Engine is acceptable
here as well. By skipping the Engine creation
we don't even need a DBAPI to be available.
Calls to context.execute() here emit the given string to the
script output.
"""
url = config.get_main_option("sqlalchemy.url")
context.configure(
url=url,
target_metadata=target_metadata,
literal_binds=True,
dialect_opts={"paramstyle": "named"},
)
with context.begin_transaction():
context.run_migrations()
def run_migrations_online() -> None:
"""Run migrations in 'online' mode.
In this scenario we need to create an Engine
and associate a connection with the context.
"""
connectable = engine_from_config(
config.get_section(config.config_ini_section, {}),
prefix="sqlalchemy.",
poolclass=pool.NullPool,
)
with connectable.connect() as connection:
context.configure(connection=connection, target_metadata=target_metadata, include_schemas=True)
with context.begin_transaction():
context.run_migrations()
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()

alembic/script.py.mako Normal file

@@ -0,0 +1,26 @@
"""${message}
Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}
# revision identifiers, used by Alembic.
revision: str = ${repr(up_revision)}
down_revision: Union[str, None] = ${repr(down_revision)}
branch_labels: Union[str, Sequence[str], None] = ${repr(branch_labels)}
depends_on: Union[str, Sequence[str], None] = ${repr(depends_on)}
def upgrade() -> None:
${upgrades if upgrades else "pass"}
def downgrade() -> None:
${downgrades if downgrades else "pass"}


@@ -0,0 +1,44 @@
"""adding ToolsAgents ORM
Revision ID: 08b2f8225812
Revises: 3c683a662c82
Create Date: 2024-12-05 16:46:51.258831
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision: str = '08b2f8225812'
down_revision: Union[str, None] = '3c683a662c82'
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.create_table('tools_agents',
sa.Column('agent_id', sa.String(), nullable=False),
sa.Column('tool_id', sa.String(), nullable=False),
sa.Column('tool_name', sa.String(), nullable=False),
sa.Column('id', sa.String(), nullable=False),
sa.Column('created_at', sa.DateTime(timezone=True), server_default=sa.text('now()'), nullable=True),
sa.Column('updated_at', sa.DateTime(timezone=True), server_default=sa.text('now()'), nullable=True),
sa.Column('is_deleted', sa.Boolean(), server_default=sa.text('FALSE'), nullable=False),
sa.Column('_created_by_id', sa.String(), nullable=True),
sa.Column('_last_updated_by_id', sa.String(), nullable=True),
sa.ForeignKeyConstraint(['agent_id'], ['agents.id'], ),
sa.ForeignKeyConstraint(['tool_id'], ['tools.id'], name='fk_tool_id'),
sa.PrimaryKeyConstraint('agent_id', 'tool_id', 'tool_name', 'id'),
sa.UniqueConstraint('agent_id', 'tool_name', name='unique_tool_per_agent')
)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.drop_table('tools_agents')
# ### end Alembic commands ###


@@ -0,0 +1,52 @@
"""Make an blocks agents mapping table
Revision ID: 1c8880d671ee
Revises: f81ceea2c08d
Create Date: 2024-11-22 15:42:47.209229
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "1c8880d671ee"
down_revision: Union[str, None] = "f81ceea2c08d"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.create_unique_constraint("unique_block_id_label", "block", ["id", "label"])
op.create_table(
"blocks_agents",
sa.Column("agent_id", sa.String(), nullable=False),
sa.Column("block_id", sa.String(), nullable=False),
sa.Column("block_label", sa.String(), nullable=False),
sa.Column("id", sa.String(), nullable=False),
sa.Column("created_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True),
sa.Column("updated_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True),
sa.Column("is_deleted", sa.Boolean(), server_default=sa.text("FALSE"), nullable=False),
sa.Column("_created_by_id", sa.String(), nullable=True),
sa.Column("_last_updated_by_id", sa.String(), nullable=True),
sa.ForeignKeyConstraint(
["agent_id"],
["agents.id"],
),
sa.ForeignKeyConstraint(["block_id", "block_label"], ["block.id", "block.label"], name="fk_block_id_label"),
sa.PrimaryKeyConstraint("agent_id", "block_id", "block_label", "id"),
sa.UniqueConstraint("agent_id", "block_label", name="unique_label_per_agent"),
)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.drop_constraint("unique_block_id_label", "block", type_="unique")
op.drop_table("blocks_agents")
# ### end Alembic commands ###


@@ -0,0 +1,46 @@
"""Migrate jobs to the orm
Revision ID: 3c683a662c82
Revises: 5987401b40ae
Create Date: 2024-12-04 15:59:41.708396
"""
from typing import Sequence, Union
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "3c683a662c82"
down_revision: Union[str, None] = "5987401b40ae"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.add_column("jobs", sa.Column("updated_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True))
op.add_column("jobs", sa.Column("is_deleted", sa.Boolean(), server_default=sa.text("FALSE"), nullable=False))
op.add_column("jobs", sa.Column("_created_by_id", sa.String(), nullable=True))
op.add_column("jobs", sa.Column("_last_updated_by_id", sa.String(), nullable=True))
op.alter_column("jobs", "status", existing_type=sa.VARCHAR(), nullable=False)
op.alter_column("jobs", "completed_at", existing_type=postgresql.TIMESTAMP(timezone=True), type_=sa.DateTime(), existing_nullable=True)
op.alter_column("jobs", "user_id", existing_type=sa.VARCHAR(), nullable=False)
op.create_foreign_key(None, "jobs", "users", ["user_id"], ["id"])
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.drop_constraint(None, "jobs", type_="foreignkey")
op.alter_column("jobs", "user_id", existing_type=sa.VARCHAR(), nullable=True)
op.alter_column("jobs", "completed_at", existing_type=sa.DateTime(), type_=postgresql.TIMESTAMP(timezone=True), existing_nullable=True)
op.alter_column("jobs", "status", existing_type=sa.VARCHAR(), nullable=True)
op.drop_column("jobs", "_last_updated_by_id")
op.drop_column("jobs", "_created_by_id")
op.drop_column("jobs", "is_deleted")
op.drop_column("jobs", "updated_at")
# ### end Alembic commands ###


@@ -0,0 +1,42 @@
"""Drop api tokens table in OSS
Revision ID: 4e88e702f85e
Revises: d05669b60ebe
Create Date: 2024-12-13 17:19:55.796210
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "4e88e702f85e"
down_revision: Union[str, None] = "d05669b60ebe"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.drop_index("tokens_idx_key", table_name="tokens")
op.drop_index("tokens_idx_user", table_name="tokens")
op.drop_table("tokens")
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.create_table(
"tokens",
sa.Column("id", sa.VARCHAR(), autoincrement=False, nullable=False),
sa.Column("user_id", sa.VARCHAR(), autoincrement=False, nullable=False),
sa.Column("key", sa.VARCHAR(), autoincrement=False, nullable=False),
sa.Column("name", sa.VARCHAR(), autoincrement=False, nullable=True),
sa.PrimaryKeyConstraint("id", name="tokens_pkey"),
)
op.create_index("tokens_idx_user", "tokens", ["user_id"], unique=False)
op.create_index("tokens_idx_key", "tokens", ["key"], unique=False)
# ### end Alembic commands ###


@@ -0,0 +1,105 @@
"""divide passage table into SourcePassages and AgentPassages
Revision ID: 54dec07619c4
Revises: 4e88e702f85e
Create Date: 2024-12-14 17:23:08.772554
"""
from typing import Sequence, Union
from alembic import op
from pgvector.sqlalchemy import Vector
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
from letta.orm.custom_columns import EmbeddingConfigColumn
# revision identifiers, used by Alembic.
revision: str = '54dec07619c4'
down_revision: Union[str, None] = '4e88e702f85e'
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.create_table(
'agent_passages',
sa.Column('id', sa.String(), nullable=False),
sa.Column('text', sa.String(), nullable=False),
sa.Column('embedding_config', EmbeddingConfigColumn(), nullable=False),
sa.Column('metadata_', sa.JSON(), nullable=False),
sa.Column('embedding', Vector(dim=4096), nullable=True),
sa.Column('created_at', sa.DateTime(timezone=True), server_default=sa.text('now()'), nullable=True),
sa.Column('updated_at', sa.DateTime(timezone=True), server_default=sa.text('now()'), nullable=True),
sa.Column('is_deleted', sa.Boolean(), server_default=sa.text('FALSE'), nullable=False),
sa.Column('_created_by_id', sa.String(), nullable=True),
sa.Column('_last_updated_by_id', sa.String(), nullable=True),
sa.Column('organization_id', sa.String(), nullable=False),
sa.Column('agent_id', sa.String(), nullable=False),
sa.ForeignKeyConstraint(['agent_id'], ['agents.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['organization_id'], ['organizations.id'], ),
sa.PrimaryKeyConstraint('id')
)
op.create_index('agent_passages_org_idx', 'agent_passages', ['organization_id'], unique=False)
op.create_table(
'source_passages',
sa.Column('id', sa.String(), nullable=False),
sa.Column('text', sa.String(), nullable=False),
sa.Column('embedding_config', EmbeddingConfigColumn(), nullable=False),
sa.Column('metadata_', sa.JSON(), nullable=False),
sa.Column('embedding', Vector(dim=4096), nullable=True),
sa.Column('created_at', sa.DateTime(timezone=True), server_default=sa.text('now()'), nullable=True),
sa.Column('updated_at', sa.DateTime(timezone=True), server_default=sa.text('now()'), nullable=True),
sa.Column('is_deleted', sa.Boolean(), server_default=sa.text('FALSE'), nullable=False),
sa.Column('_created_by_id', sa.String(), nullable=True),
sa.Column('_last_updated_by_id', sa.String(), nullable=True),
sa.Column('organization_id', sa.String(), nullable=False),
sa.Column('file_id', sa.String(), nullable=True),
sa.Column('source_id', sa.String(), nullable=False),
sa.ForeignKeyConstraint(['file_id'], ['files.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['organization_id'], ['organizations.id'], ),
sa.ForeignKeyConstraint(['source_id'], ['sources.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_index('source_passages_org_idx', 'source_passages', ['organization_id'], unique=False)
op.drop_table('passages')
op.drop_constraint('files_source_id_fkey', 'files', type_='foreignkey')
op.create_foreign_key(None, 'files', 'sources', ['source_id'], ['id'], ondelete='CASCADE')
op.drop_constraint('messages_agent_id_fkey', 'messages', type_='foreignkey')
op.create_foreign_key(None, 'messages', 'agents', ['agent_id'], ['id'], ondelete='CASCADE')
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.drop_constraint(None, 'messages', type_='foreignkey')
op.create_foreign_key('messages_agent_id_fkey', 'messages', 'agents', ['agent_id'], ['id'])
op.drop_constraint(None, 'files', type_='foreignkey')
op.create_foreign_key('files_source_id_fkey', 'files', 'sources', ['source_id'], ['id'])
op.create_table(
'passages',
sa.Column('id', sa.VARCHAR(), autoincrement=False, nullable=False),
sa.Column('text', sa.VARCHAR(), autoincrement=False, nullable=False),
sa.Column('file_id', sa.VARCHAR(), autoincrement=False, nullable=True),
sa.Column('agent_id', sa.VARCHAR(), autoincrement=False, nullable=True),
sa.Column('source_id', sa.VARCHAR(), autoincrement=False, nullable=True),
sa.Column('embedding', Vector(dim=4096), autoincrement=False, nullable=True),
sa.Column('embedding_config', postgresql.JSON(astext_type=sa.Text()), autoincrement=False, nullable=False),
sa.Column('metadata_', postgresql.JSON(astext_type=sa.Text()), autoincrement=False, nullable=False),
sa.Column('created_at', postgresql.TIMESTAMP(timezone=True), autoincrement=False, nullable=False),
sa.Column('updated_at', postgresql.TIMESTAMP(timezone=True), server_default=sa.text('now()'), autoincrement=False, nullable=True),
sa.Column('is_deleted', sa.BOOLEAN(), server_default=sa.text('false'), autoincrement=False, nullable=False),
sa.Column('_created_by_id', sa.VARCHAR(), autoincrement=False, nullable=True),
sa.Column('_last_updated_by_id', sa.VARCHAR(), autoincrement=False, nullable=True),
sa.Column('organization_id', sa.VARCHAR(), autoincrement=False, nullable=False),
sa.ForeignKeyConstraint(['agent_id'], ['agents.id'], name='passages_agent_id_fkey'),
sa.ForeignKeyConstraint(['file_id'], ['files.id'], name='passages_file_id_fkey', ondelete='CASCADE'),
sa.ForeignKeyConstraint(['organization_id'], ['organizations.id'], name='passages_organization_id_fkey'),
sa.PrimaryKeyConstraint('id', name='passages_pkey')
)
op.drop_index('source_passages_org_idx', table_name='source_passages')
op.drop_table('source_passages')
op.drop_index('agent_passages_org_idx', table_name='agent_passages')
op.drop_table('agent_passages')
# ### end Alembic commands ###

View File

@@ -0,0 +1,34 @@
"""Refactor agent memory
Revision ID: 5987401b40ae
Revises: 1c8880d671ee
Create Date: 2024-11-25 14:35:00.896507
"""
from typing import Sequence, Union
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "5987401b40ae"
down_revision: Union[str, None] = "1c8880d671ee"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.alter_column("agents", "tools", new_column_name="tool_names")
op.drop_column("agents", "memory")
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.alter_column("agents", "tool_names", new_column_name="tools")
op.add_column("agents", sa.Column("memory", postgresql.JSON(astext_type=sa.Text()), autoincrement=False, nullable=True))
# ### end Alembic commands ###

View File

@@ -0,0 +1,63 @@
"""Migrate message to orm
Revision ID: 95badb46fdf9
Revises: 3c683a662c82
Create Date: 2024-12-05 14:02:04.163150
"""
from typing import Sequence, Union
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "95badb46fdf9"
down_revision: Union[str, None] = "08b2f8225812"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.add_column("messages", sa.Column("updated_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True))
op.add_column("messages", sa.Column("is_deleted", sa.Boolean(), server_default=sa.text("FALSE"), nullable=False))
op.add_column("messages", sa.Column("_created_by_id", sa.String(), nullable=True))
op.add_column("messages", sa.Column("_last_updated_by_id", sa.String(), nullable=True))
op.add_column("messages", sa.Column("organization_id", sa.String(), nullable=True))
# Populate `organization_id` based on `user_id`
# Use a raw SQL query to update the organization_id
op.execute(
"""
UPDATE messages
SET organization_id = users.organization_id
FROM users
WHERE messages.user_id = users.id
"""
)
op.alter_column("messages", "organization_id", nullable=False)
op.alter_column("messages", "tool_calls", existing_type=postgresql.JSON(astext_type=sa.Text()), nullable=False)
op.alter_column("messages", "created_at", existing_type=postgresql.TIMESTAMP(timezone=True), nullable=False)
op.drop_index("message_idx_user", table_name="messages")
op.create_foreign_key(None, "messages", "agents", ["agent_id"], ["id"])
op.create_foreign_key(None, "messages", "organizations", ["organization_id"], ["id"])
op.drop_column("messages", "user_id")
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.add_column("messages", sa.Column("user_id", sa.VARCHAR(), autoincrement=False, nullable=False))
# drop the auto-named FKs created in upgrade() (PostgreSQL default naming: <table>_<column>_fkey)
op.drop_constraint("messages_organization_id_fkey", "messages", type_="foreignkey")
op.drop_constraint("messages_agent_id_fkey", "messages", type_="foreignkey")
op.create_index("message_idx_user", "messages", ["user_id", "agent_id"], unique=False)
op.alter_column("messages", "created_at", existing_type=postgresql.TIMESTAMP(timezone=True), nullable=True)
op.alter_column("messages", "tool_calls", existing_type=postgresql.JSON(astext_type=sa.Text()), nullable=True)
op.drop_column("messages", "organization_id")
op.drop_column("messages", "_last_updated_by_id")
op.drop_column("messages", "_created_by_id")
op.drop_column("messages", "is_deleted")
op.drop_column("messages", "updated_at")
# ### end Alembic commands ###

View File

@@ -0,0 +1,195 @@
"""Create a baseline migration
Revision ID: 9a505cc7eca9
Revises:
Create Date: 2024-10-11 14:19:19.875656
"""
from typing import Sequence, Union
import pgvector
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
import letta.orm
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "9a505cc7eca9"
down_revision: Union[str, None] = None
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
op.create_table(
"agent_source_mapping",
sa.Column("id", sa.String(), nullable=False),
sa.Column("user_id", sa.String(), nullable=False),
sa.Column("agent_id", sa.String(), nullable=False),
sa.Column("source_id", sa.String(), nullable=False),
sa.PrimaryKeyConstraint("id"),
)
op.create_index("agent_source_mapping_idx_user", "agent_source_mapping", ["user_id", "agent_id", "source_id"], unique=False)
op.create_table(
"agents",
sa.Column("id", sa.String(), nullable=False),
sa.Column("user_id", sa.String(), nullable=False),
sa.Column("name", sa.String(), nullable=False),
sa.Column("created_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True),
sa.Column("description", sa.String(), nullable=True),
sa.Column("message_ids", sa.JSON(), nullable=True),
sa.Column("memory", sa.JSON(), nullable=True),
sa.Column("system", sa.String(), nullable=True),
sa.Column("agent_type", sa.String(), nullable=True),
sa.Column("llm_config", letta.orm.custom_columns.LLMConfigColumn(), nullable=True),
sa.Column("embedding_config", letta.orm.custom_columns.EmbeddingConfigColumn(), nullable=True),
sa.Column("metadata_", sa.JSON(), nullable=True),
sa.Column("tools", sa.JSON(), nullable=True),
sa.PrimaryKeyConstraint("id"),
)
op.create_index("agents_idx_user", "agents", ["user_id"], unique=False)
op.create_table(
"block",
sa.Column("id", sa.String(), nullable=False),
sa.Column("value", sa.String(), nullable=False),
sa.Column("limit", sa.BIGINT(), nullable=True),
sa.Column("name", sa.String(), nullable=True),
sa.Column("template", sa.Boolean(), nullable=True),
sa.Column("label", sa.String(), nullable=False),
sa.Column("metadata_", sa.JSON(), nullable=True),
sa.Column("description", sa.String(), nullable=True),
sa.Column("user_id", sa.String(), nullable=True),
sa.PrimaryKeyConstraint("id"),
)
op.create_index("block_idx_user", "block", ["user_id"], unique=False)
op.create_table(
"files",
sa.Column("id", sa.String(), nullable=False),
sa.Column("user_id", sa.String(), nullable=False),
sa.Column("source_id", sa.String(), nullable=False),
sa.Column("file_name", sa.String(), nullable=True),
sa.Column("file_path", sa.String(), nullable=True),
sa.Column("file_type", sa.String(), nullable=True),
sa.Column("file_size", sa.Integer(), nullable=True),
sa.Column("file_creation_date", sa.String(), nullable=True),
sa.Column("file_last_modified_date", sa.String(), nullable=True),
sa.Column("created_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True),
sa.PrimaryKeyConstraint("id"),
)
op.create_table(
"jobs",
sa.Column("id", sa.String(), nullable=False),
sa.Column("user_id", sa.String(), nullable=True),
sa.Column("status", sa.String(), nullable=True),
sa.Column("created_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True),
sa.Column("completed_at", sa.DateTime(timezone=True), nullable=True),
sa.Column("metadata_", sa.JSON(), nullable=True),
sa.PrimaryKeyConstraint("id"),
)
op.create_table(
"messages",
sa.Column("id", sa.String(), nullable=False),
sa.Column("user_id", sa.String(), nullable=False),
sa.Column("agent_id", sa.String(), nullable=False),
sa.Column("role", sa.String(), nullable=False),
sa.Column("text", sa.String(), nullable=True),
sa.Column("model", sa.String(), nullable=True),
sa.Column("name", sa.String(), nullable=True),
sa.Column("tool_calls", letta.orm.message.ToolCallColumn(), nullable=True),
sa.Column("tool_call_id", sa.String(), nullable=True),
sa.Column("created_at", sa.DateTime(timezone=True), nullable=True),
sa.PrimaryKeyConstraint("id"),
)
op.create_index("message_idx_user", "messages", ["user_id", "agent_id"], unique=False)
op.create_table(
"organizations",
sa.Column("id", sa.VARCHAR(), autoincrement=False, nullable=False),
sa.Column("name", sa.VARCHAR(), autoincrement=False, nullable=False),
sa.Column("created_at", postgresql.TIMESTAMP(timezone=True), autoincrement=False, nullable=True),
sa.PrimaryKeyConstraint("id", name="organizations_pkey"),
)
op.create_table(
"passages",
sa.Column("id", sa.String(), nullable=False),
sa.Column("user_id", sa.String(), nullable=False),
sa.Column("text", sa.String(), nullable=True),
sa.Column("file_id", sa.String(), nullable=True),
sa.Column("agent_id", sa.String(), nullable=True),
sa.Column("source_id", sa.String(), nullable=True),
sa.Column("embedding", pgvector.sqlalchemy.Vector(dim=4096), nullable=True),
sa.Column("embedding_config", letta.orm.custom_columns.EmbeddingConfigColumn(), nullable=True),
sa.Column("metadata_", sa.JSON(), nullable=True),
sa.Column("created_at", sa.DateTime(timezone=True), nullable=True),
sa.PrimaryKeyConstraint("id"),
)
op.create_index("passage_idx_user", "passages", ["user_id", "agent_id", "file_id"], unique=False)
op.create_table(
"sources",
sa.Column("id", sa.String(), nullable=False),
sa.Column("user_id", sa.String(), nullable=False),
sa.Column("name", sa.String(), nullable=False),
sa.Column("created_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True),
sa.Column("embedding_config", letta.orm.custom_columns.EmbeddingConfigColumn(), nullable=True),
sa.Column("description", sa.String(), nullable=True),
sa.Column("metadata_", sa.JSON(), nullable=True),
sa.PrimaryKeyConstraint("id"),
)
op.create_index("sources_idx_user", "sources", ["user_id"], unique=False)
op.create_table(
"tokens",
sa.Column("id", sa.String(), nullable=False),
sa.Column("user_id", sa.String(), nullable=False),
sa.Column("key", sa.String(), nullable=False),
sa.Column("name", sa.String(), nullable=True),
sa.PrimaryKeyConstraint("id"),
)
op.create_index("tokens_idx_key", "tokens", ["key"], unique=False)
op.create_index("tokens_idx_user", "tokens", ["user_id"], unique=False)
op.create_table(
"users",
sa.Column("id", sa.VARCHAR(), autoincrement=False, nullable=False),
sa.Column("org_id", sa.VARCHAR(), autoincrement=False, nullable=True),
sa.Column("name", sa.VARCHAR(), autoincrement=False, nullable=False),
sa.Column("created_at", postgresql.TIMESTAMP(timezone=True), autoincrement=False, nullable=True),
sa.Column("policies_accepted", sa.BOOLEAN(), autoincrement=False, nullable=False),
sa.PrimaryKeyConstraint("id", name="users_pkey"),
)
op.create_table(
"tools",
sa.Column("id", sa.VARCHAR(), autoincrement=False, nullable=False),
sa.Column("name", sa.VARCHAR(), autoincrement=False, nullable=False),
sa.Column("user_id", sa.VARCHAR(), autoincrement=False, nullable=True),
sa.Column("description", sa.VARCHAR(), autoincrement=False, nullable=True),
sa.Column("source_type", sa.VARCHAR(), autoincrement=False, nullable=True),
sa.Column("source_code", sa.VARCHAR(), autoincrement=False, nullable=True),
sa.Column("json_schema", postgresql.JSON(astext_type=sa.Text()), autoincrement=False, nullable=True),
sa.Column("module", sa.VARCHAR(), autoincrement=False, nullable=True),
sa.Column("tags", postgresql.JSON(astext_type=sa.Text()), autoincrement=False, nullable=True),
sa.PrimaryKeyConstraint("id", name="tools_pkey"),
)
def downgrade() -> None:
op.drop_table("users")
op.drop_table("tools")
op.drop_index("tokens_idx_user", table_name="tokens")
op.drop_index("tokens_idx_key", table_name="tokens")
op.drop_table("tokens")
op.drop_index("sources_idx_user", table_name="sources")
op.drop_table("sources")
op.drop_index("passage_idx_user", table_name="passages")
op.drop_table("passages")
op.drop_table("organizations")
op.drop_index("message_idx_user", table_name="messages")
op.drop_table("messages")
op.drop_table("jobs")
op.drop_table("files")
op.drop_index("block_idx_user", table_name="block")
op.drop_table("block")
op.drop_index("agents_idx_user", table_name="agents")
op.drop_table("agents")
op.drop_index("agent_source_mapping_idx_user", table_name="agent_source_mapping")
op.drop_table("agent_source_mapping")

View File

@@ -0,0 +1,39 @@
"""Add return_char_limit column to tools table to store the function return character limit
Revision ID: a91994b9752f
Revises: e1a625072dbf
Create Date: 2024-12-09 18:27:25.650079
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
from letta.constants import FUNCTION_RETURN_CHAR_LIMIT
# revision identifiers, used by Alembic.
revision: str = "a91994b9752f"
down_revision: Union[str, None] = "e1a625072dbf"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.add_column("tools", sa.Column("return_char_limit", sa.Integer(), nullable=True))
# Populate `return_char_limit` column
op.execute(
f"""
UPDATE tools
SET return_char_limit = {FUNCTION_RETURN_CHAR_LIMIT}
"""
)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.drop_column("tools", "return_char_limit")
# ### end Alembic commands ###

View File

@@ -0,0 +1,52 @@
"""Add agents tags table
Revision ID: b6d7ca024aa9
Revises: d14ae606614c
Create Date: 2024-11-06 10:48:08.424108
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "b6d7ca024aa9"
down_revision: Union[str, None] = "d14ae606614c"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.create_table(
"agents_tags",
sa.Column("agent_id", sa.String(), nullable=False),
sa.Column("tag", sa.String(), nullable=False),
sa.Column("id", sa.String(), nullable=False),
sa.Column("created_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True),
sa.Column("updated_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True),
sa.Column("is_deleted", sa.Boolean(), server_default=sa.text("FALSE"), nullable=False),
sa.Column("_created_by_id", sa.String(), nullable=True),
sa.Column("_last_updated_by_id", sa.String(), nullable=True),
sa.Column("organization_id", sa.String(), nullable=False),
sa.ForeignKeyConstraint(
["agent_id"],
["agents.id"],
),
sa.ForeignKeyConstraint(
["organization_id"],
["organizations.id"],
),
sa.PrimaryKeyConstraint("agent_id", "id"),
sa.UniqueConstraint("agent_id", "tag", name="unique_agent_tag"),
)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.drop_table("agents_tags")
# ### end Alembic commands ###

View File

@@ -0,0 +1,88 @@
"""Add Passages ORM, drop legacy passages, cascading deletes for file-passages and user-jobs
Revision ID: c5d964280dff
Revises: a91994b9752f
Create Date: 2024-12-10 15:05:32.335519
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
# revision identifiers, used by Alembic.
revision: str = 'c5d964280dff'
down_revision: Union[str, None] = 'a91994b9752f'
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.add_column('passages', sa.Column('updated_at', sa.DateTime(timezone=True), server_default=sa.text('now()'), nullable=True))
op.add_column('passages', sa.Column('is_deleted', sa.Boolean(), server_default=sa.text('FALSE'), nullable=False))
op.add_column('passages', sa.Column('_created_by_id', sa.String(), nullable=True))
op.add_column('passages', sa.Column('_last_updated_by_id', sa.String(), nullable=True))
# Data migration step:
op.add_column("passages", sa.Column("organization_id", sa.String(), nullable=True))
# Populate `organization_id` based on `user_id`
# Use a raw SQL query to update the organization_id
op.execute(
"""
UPDATE passages
SET organization_id = users.organization_id
FROM users
WHERE passages.user_id = users.id
"""
)
# Set `organization_id` as non-nullable after population
op.alter_column("passages", "organization_id", nullable=False)
op.alter_column('passages', 'text',
existing_type=sa.VARCHAR(),
nullable=False)
op.alter_column('passages', 'embedding_config',
existing_type=postgresql.JSON(astext_type=sa.Text()),
nullable=False)
op.alter_column('passages', 'metadata_',
existing_type=postgresql.JSON(astext_type=sa.Text()),
nullable=False)
op.alter_column('passages', 'created_at',
existing_type=postgresql.TIMESTAMP(timezone=True),
nullable=False)
op.drop_index('passage_idx_user', table_name='passages')
op.create_foreign_key(None, 'passages', 'organizations', ['organization_id'], ['id'])
op.create_foreign_key(None, 'passages', 'agents', ['agent_id'], ['id'])
op.create_foreign_key(None, 'passages', 'files', ['file_id'], ['id'], ondelete='CASCADE')
op.drop_column('passages', 'user_id')
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.add_column('passages', sa.Column('user_id', sa.VARCHAR(), autoincrement=False, nullable=False))
# drop the auto-named FKs created in upgrade() (PostgreSQL default naming: <table>_<column>_fkey)
op.drop_constraint('passages_file_id_fkey', 'passages', type_='foreignkey')
op.drop_constraint('passages_agent_id_fkey', 'passages', type_='foreignkey')
op.drop_constraint('passages_organization_id_fkey', 'passages', type_='foreignkey')
op.create_index('passage_idx_user', 'passages', ['user_id', 'agent_id', 'file_id'], unique=False)
op.alter_column('passages', 'created_at',
existing_type=postgresql.TIMESTAMP(timezone=True),
nullable=True)
op.alter_column('passages', 'metadata_',
existing_type=postgresql.JSON(astext_type=sa.Text()),
nullable=True)
op.alter_column('passages', 'embedding_config',
existing_type=postgresql.JSON(astext_type=sa.Text()),
nullable=True)
op.alter_column('passages', 'text',
existing_type=sa.VARCHAR(),
nullable=True)
op.drop_column('passages', 'organization_id')
op.drop_column('passages', '_last_updated_by_id')
op.drop_column('passages', '_created_by_id')
op.drop_column('passages', 'is_deleted')
op.drop_column('passages', 'updated_at')
# ### end Alembic commands ###

View File

@@ -0,0 +1,56 @@
"""Move files to orm
Revision ID: c85a3d07c028
Revises: cda66b6cb0d6
Create Date: 2024-11-12 13:58:57.221081
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "c85a3d07c028"
down_revision: Union[str, None] = "cda66b6cb0d6"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.add_column("files", sa.Column("updated_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True))
op.add_column("files", sa.Column("is_deleted", sa.Boolean(), server_default=sa.text("FALSE"), nullable=False))
op.add_column("files", sa.Column("_created_by_id", sa.String(), nullable=True))
op.add_column("files", sa.Column("_last_updated_by_id", sa.String(), nullable=True))
op.add_column("files", sa.Column("organization_id", sa.String(), nullable=True))
# Populate `organization_id` based on `user_id`
# Use a raw SQL query to update the organization_id
op.execute(
"""
UPDATE files
SET organization_id = users.organization_id
FROM users
WHERE files.user_id = users.id
"""
)
op.alter_column("files", "organization_id", nullable=False)
op.create_foreign_key(None, "files", "organizations", ["organization_id"], ["id"])
op.create_foreign_key(None, "files", "sources", ["source_id"], ["id"])
op.drop_column("files", "user_id")
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.add_column("files", sa.Column("user_id", sa.VARCHAR(), autoincrement=False, nullable=False))
# drop the auto-named FKs created in upgrade() (PostgreSQL default naming: <table>_<column>_fkey)
op.drop_constraint("files_organization_id_fkey", "files", type_="foreignkey")
op.drop_constraint("files_source_id_fkey", "files", type_="foreignkey")
op.drop_column("files", "organization_id")
op.drop_column("files", "_last_updated_by_id")
op.drop_column("files", "_created_by_id")
op.drop_column("files", "is_deleted")
op.drop_column("files", "updated_at")
# ### end Alembic commands ###

View File

@@ -0,0 +1,64 @@
"""Move sources to orm
Revision ID: cda66b6cb0d6
Revises: b6d7ca024aa9
Create Date: 2024-11-07 13:29:57.186107
"""
from typing import Sequence, Union
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "cda66b6cb0d6"
down_revision: Union[str, None] = "b6d7ca024aa9"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.add_column("sources", sa.Column("updated_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True))
op.add_column("sources", sa.Column("is_deleted", sa.Boolean(), server_default=sa.text("FALSE"), nullable=False))
op.add_column("sources", sa.Column("_created_by_id", sa.String(), nullable=True))
op.add_column("sources", sa.Column("_last_updated_by_id", sa.String(), nullable=True))
# Data migration step:
op.add_column("sources", sa.Column("organization_id", sa.String(), nullable=True))
# Populate `organization_id` based on `user_id`
# Use a raw SQL query to update the organization_id
op.execute(
"""
UPDATE sources
SET organization_id = users.organization_id
FROM users
WHERE sources.user_id = users.id
"""
)
# Set `organization_id` as non-nullable after population
op.alter_column("sources", "organization_id", nullable=False)
op.alter_column("sources", "embedding_config", existing_type=postgresql.JSON(astext_type=sa.Text()), nullable=False)
op.drop_index("sources_idx_user", table_name="sources")
op.create_foreign_key(None, "sources", "organizations", ["organization_id"], ["id"])
op.drop_column("sources", "user_id")
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.add_column("sources", sa.Column("user_id", sa.VARCHAR(), autoincrement=False, nullable=False))
# drop the auto-named FK created in upgrade() (PostgreSQL default naming: <table>_<column>_fkey)
op.drop_constraint("sources_organization_id_fkey", "sources", type_="foreignkey")
op.create_index("sources_idx_user", "sources", ["user_id"], unique=False)
op.alter_column("sources", "embedding_config", existing_type=postgresql.JSON(astext_type=sa.Text()), nullable=True)
op.drop_column("sources", "organization_id")
op.drop_column("sources", "_last_updated_by_id")
op.drop_column("sources", "_created_by_id")
op.drop_column("sources", "is_deleted")
op.drop_column("sources", "updated_at")
# ### end Alembic commands ###

View File

@@ -0,0 +1,175 @@
"""Migrate agents to orm
Revision ID: d05669b60ebe
Revises: c5d964280dff
Create Date: 2024-12-12 10:25:31.825635
"""
from typing import Sequence, Union
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "d05669b60ebe"
down_revision: Union[str, None] = "c5d964280dff"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.create_table(
"sources_agents",
sa.Column("agent_id", sa.String(), nullable=False),
sa.Column("source_id", sa.String(), nullable=False),
sa.ForeignKeyConstraint(
["agent_id"],
["agents.id"],
),
sa.ForeignKeyConstraint(
["source_id"],
["sources.id"],
),
sa.PrimaryKeyConstraint("agent_id", "source_id"),
)
op.drop_index("agent_source_mapping_idx_user", table_name="agent_source_mapping")
op.drop_table("agent_source_mapping")
op.add_column("agents", sa.Column("updated_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True))
op.add_column("agents", sa.Column("is_deleted", sa.Boolean(), server_default=sa.text("FALSE"), nullable=False))
op.add_column("agents", sa.Column("_created_by_id", sa.String(), nullable=True))
op.add_column("agents", sa.Column("_last_updated_by_id", sa.String(), nullable=True))
op.add_column("agents", sa.Column("organization_id", sa.String(), nullable=True))
# Populate `organization_id` based on `user_id`
# Use a raw SQL query to update the organization_id
op.execute(
"""
UPDATE agents
SET organization_id = users.organization_id
FROM users
WHERE agents.user_id = users.id
"""
)
op.alter_column("agents", "organization_id", nullable=False)
op.alter_column("agents", "name", existing_type=sa.VARCHAR(), nullable=True)
op.drop_index("agents_idx_user", table_name="agents")
op.create_unique_constraint("unique_org_agent_name", "agents", ["organization_id", "name"])
op.create_foreign_key(None, "agents", "organizations", ["organization_id"], ["id"])
op.drop_column("agents", "tool_names")
op.drop_column("agents", "user_id")
op.drop_constraint("agents_tags_organization_id_fkey", "agents_tags", type_="foreignkey")
op.drop_column("agents_tags", "_created_by_id")
op.drop_column("agents_tags", "_last_updated_by_id")
op.drop_column("agents_tags", "updated_at")
op.drop_column("agents_tags", "id")
op.drop_column("agents_tags", "is_deleted")
op.drop_column("agents_tags", "created_at")
op.drop_column("agents_tags", "organization_id")
op.create_unique_constraint("unique_agent_block", "blocks_agents", ["agent_id", "block_id"])
op.drop_constraint("fk_block_id_label", "blocks_agents", type_="foreignkey")
op.create_foreign_key(
"fk_block_id_label", "blocks_agents", "block", ["block_id", "block_label"], ["id", "label"], initially="DEFERRED", deferrable=True
)
op.drop_column("blocks_agents", "_created_by_id")
op.drop_column("blocks_agents", "_last_updated_by_id")
op.drop_column("blocks_agents", "updated_at")
op.drop_column("blocks_agents", "id")
op.drop_column("blocks_agents", "is_deleted")
op.drop_column("blocks_agents", "created_at")
op.drop_constraint("unique_tool_per_agent", "tools_agents", type_="unique")
op.create_unique_constraint("unique_agent_tool", "tools_agents", ["agent_id", "tool_id"])
op.drop_constraint("fk_tool_id", "tools_agents", type_="foreignkey")
op.drop_constraint("tools_agents_agent_id_fkey", "tools_agents", type_="foreignkey")
op.create_foreign_key(None, "tools_agents", "tools", ["tool_id"], ["id"], ondelete="CASCADE")
op.create_foreign_key(None, "tools_agents", "agents", ["agent_id"], ["id"], ondelete="CASCADE")
op.drop_column("tools_agents", "_created_by_id")
op.drop_column("tools_agents", "tool_name")
op.drop_column("tools_agents", "_last_updated_by_id")
op.drop_column("tools_agents", "updated_at")
op.drop_column("tools_agents", "id")
op.drop_column("tools_agents", "is_deleted")
op.drop_column("tools_agents", "created_at")
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.add_column(
"tools_agents",
sa.Column("created_at", postgresql.TIMESTAMP(timezone=True), server_default=sa.text("now()"), autoincrement=False, nullable=True),
)
op.add_column(
"tools_agents", sa.Column("is_deleted", sa.BOOLEAN(), server_default=sa.text("false"), autoincrement=False, nullable=False)
)
op.add_column("tools_agents", sa.Column("id", sa.VARCHAR(), autoincrement=False, nullable=False))
op.add_column(
"tools_agents",
sa.Column("updated_at", postgresql.TIMESTAMP(timezone=True), server_default=sa.text("now()"), autoincrement=False, nullable=True),
)
op.add_column("tools_agents", sa.Column("_last_updated_by_id", sa.VARCHAR(), autoincrement=False, nullable=True))
op.add_column("tools_agents", sa.Column("tool_name", sa.VARCHAR(), autoincrement=False, nullable=False))
op.add_column("tools_agents", sa.Column("_created_by_id", sa.VARCHAR(), autoincrement=False, nullable=True))
# drop the auto-named FKs created in upgrade() (PostgreSQL default naming: <table>_<column>_fkey)
op.drop_constraint("tools_agents_agent_id_fkey", "tools_agents", type_="foreignkey")
op.drop_constraint("tools_agents_tool_id_fkey", "tools_agents", type_="foreignkey")
op.create_foreign_key("tools_agents_agent_id_fkey", "tools_agents", "agents", ["agent_id"], ["id"])
op.create_foreign_key("fk_tool_id", "tools_agents", "tools", ["tool_id"], ["id"])
op.drop_constraint("unique_agent_tool", "tools_agents", type_="unique")
op.create_unique_constraint("unique_tool_per_agent", "tools_agents", ["agent_id", "tool_name"])
op.add_column(
"blocks_agents",
sa.Column("created_at", postgresql.TIMESTAMP(timezone=True), server_default=sa.text("now()"), autoincrement=False, nullable=True),
)
op.add_column(
"blocks_agents", sa.Column("is_deleted", sa.BOOLEAN(), server_default=sa.text("false"), autoincrement=False, nullable=False)
)
op.add_column("blocks_agents", sa.Column("id", sa.VARCHAR(), autoincrement=False, nullable=False))
op.add_column(
"blocks_agents",
sa.Column("updated_at", postgresql.TIMESTAMP(timezone=True), server_default=sa.text("now()"), autoincrement=False, nullable=True),
)
op.add_column("blocks_agents", sa.Column("_last_updated_by_id", sa.VARCHAR(), autoincrement=False, nullable=True))
op.add_column("blocks_agents", sa.Column("_created_by_id", sa.VARCHAR(), autoincrement=False, nullable=True))
op.drop_constraint("fk_block_id_label", "blocks_agents", type_="foreignkey")
op.create_foreign_key("fk_block_id_label", "blocks_agents", "block", ["block_id", "block_label"], ["id", "label"])
op.drop_constraint("unique_agent_block", "blocks_agents", type_="unique")
op.add_column("agents_tags", sa.Column("organization_id", sa.VARCHAR(), autoincrement=False, nullable=False))
op.add_column(
"agents_tags",
sa.Column("created_at", postgresql.TIMESTAMP(timezone=True), server_default=sa.text("now()"), autoincrement=False, nullable=True),
)
op.add_column(
"agents_tags", sa.Column("is_deleted", sa.BOOLEAN(), server_default=sa.text("false"), autoincrement=False, nullable=False)
)
op.add_column("agents_tags", sa.Column("id", sa.VARCHAR(), autoincrement=False, nullable=False))
op.add_column(
"agents_tags",
sa.Column("updated_at", postgresql.TIMESTAMP(timezone=True), server_default=sa.text("now()"), autoincrement=False, nullable=True),
)
op.add_column("agents_tags", sa.Column("_last_updated_by_id", sa.VARCHAR(), autoincrement=False, nullable=True))
op.add_column("agents_tags", sa.Column("_created_by_id", sa.VARCHAR(), autoincrement=False, nullable=True))
op.create_foreign_key("agents_tags_organization_id_fkey", "agents_tags", "organizations", ["organization_id"], ["id"])
op.add_column("agents", sa.Column("user_id", sa.VARCHAR(), autoincrement=False, nullable=False))
op.add_column("agents", sa.Column("tool_names", postgresql.JSON(astext_type=sa.Text()), autoincrement=False, nullable=True))
# drop the auto-named FK created in upgrade() (PostgreSQL default naming: <table>_<column>_fkey)
op.drop_constraint("agents_organization_id_fkey", "agents", type_="foreignkey")
op.drop_constraint("unique_org_agent_name", "agents", type_="unique")
op.create_index("agents_idx_user", "agents", ["user_id"], unique=False)
op.alter_column("agents", "name", existing_type=sa.VARCHAR(), nullable=False)
op.drop_column("agents", "organization_id")
op.drop_column("agents", "_last_updated_by_id")
op.drop_column("agents", "_created_by_id")
op.drop_column("agents", "is_deleted")
op.drop_column("agents", "updated_at")
op.create_table(
"agent_source_mapping",
sa.Column("id", sa.VARCHAR(), autoincrement=False, nullable=False),
sa.Column("user_id", sa.VARCHAR(), autoincrement=False, nullable=False),
sa.Column("agent_id", sa.VARCHAR(), autoincrement=False, nullable=False),
sa.Column("source_id", sa.VARCHAR(), autoincrement=False, nullable=False),
sa.PrimaryKeyConstraint("id", name="agent_source_mapping_pkey"),
)
op.create_index("agent_source_mapping_idx_user", "agent_source_mapping", ["user_id", "agent_id", "source_id"], unique=False)
op.drop_table("sources_agents")
# ### end Alembic commands ###


@@ -0,0 +1,95 @@
"""Move organizations users tools to orm
Revision ID: d14ae606614c
Revises: 9a505cc7eca9
Create Date: 2024-11-05 15:03:12.350096
"""
from typing import Sequence, Union
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
import letta
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "d14ae606614c"
down_revision: Union[str, None] = "9a505cc7eca9"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def deprecated_tool():
return "this is a deprecated tool, please remove it from your tools list"
def upgrade() -> None:
# Delete all tools
op.execute("DELETE FROM tools")
# ### commands auto generated by Alembic - please adjust! ###
op.add_column("agents", sa.Column("tool_rules", letta.orm.agent.ToolRulesColumn(), nullable=True))
op.alter_column("block", "name", new_column_name="template_name", nullable=True)
op.add_column("organizations", sa.Column("updated_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True))
op.add_column("organizations", sa.Column("is_deleted", sa.Boolean(), server_default=sa.text("FALSE"), nullable=False))
op.add_column("organizations", sa.Column("_created_by_id", sa.String(), nullable=True))
op.add_column("organizations", sa.Column("_last_updated_by_id", sa.String(), nullable=True))
op.add_column("tools", sa.Column("created_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True))
op.add_column("tools", sa.Column("updated_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True))
op.add_column("tools", sa.Column("is_deleted", sa.Boolean(), server_default=sa.text("FALSE"), nullable=False))
op.add_column("tools", sa.Column("_created_by_id", sa.String(), nullable=True))
op.add_column("tools", sa.Column("_last_updated_by_id", sa.String(), nullable=True))
op.add_column("tools", sa.Column("organization_id", sa.String(), nullable=False))
op.alter_column("tools", "tags", existing_type=postgresql.JSON(astext_type=sa.Text()), nullable=False)
op.alter_column("tools", "source_type", existing_type=sa.VARCHAR(), nullable=False)
op.alter_column("tools", "json_schema", existing_type=postgresql.JSON(astext_type=sa.Text()), nullable=False)
op.create_unique_constraint("uix_name_organization", "tools", ["name", "organization_id"])
op.create_foreign_key(None, "tools", "organizations", ["organization_id"], ["id"])
op.drop_column("tools", "user_id")
op.add_column("users", sa.Column("updated_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True))
op.add_column("users", sa.Column("is_deleted", sa.Boolean(), server_default=sa.text("FALSE"), nullable=False))
op.add_column("users", sa.Column("_created_by_id", sa.String(), nullable=True))
op.add_column("users", sa.Column("_last_updated_by_id", sa.String(), nullable=True))
op.add_column("users", sa.Column("organization_id", sa.String(), nullable=True))
# backfill the new organization_id column from the legacy org_id column
op.execute('UPDATE "users" SET organization_id = org_id')
# make the organization_id column non-nullable
op.alter_column("users", "organization_id", existing_type=sa.String(), nullable=False)
op.create_foreign_key(None, "users", "organizations", ["organization_id"], ["id"])
op.drop_column("users", "org_id")
op.drop_column("users", "policies_accepted")
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.add_column("users", sa.Column("policies_accepted", sa.BOOLEAN(), autoincrement=False, nullable=False))
op.add_column("users", sa.Column("org_id", sa.VARCHAR(), autoincrement=False, nullable=True))
op.drop_constraint(None, "users", type_="foreignkey")
op.drop_column("users", "organization_id")
op.drop_column("users", "_last_updated_by_id")
op.drop_column("users", "_created_by_id")
op.drop_column("users", "is_deleted")
op.drop_column("users", "updated_at")
op.add_column("tools", sa.Column("user_id", sa.VARCHAR(), autoincrement=False, nullable=True))
op.drop_constraint(None, "tools", type_="foreignkey")
op.drop_constraint("uix_name_organization", "tools", type_="unique")
op.alter_column("tools", "json_schema", existing_type=postgresql.JSON(astext_type=sa.Text()), nullable=True)
op.alter_column("tools", "source_type", existing_type=sa.VARCHAR(), nullable=True)
op.alter_column("tools", "tags", existing_type=postgresql.JSON(astext_type=sa.Text()), nullable=True)
op.drop_column("tools", "organization_id")
op.drop_column("tools", "_last_updated_by_id")
op.drop_column("tools", "_created_by_id")
op.drop_column("tools", "is_deleted")
op.drop_column("tools", "updated_at")
op.drop_column("tools", "created_at")
op.drop_column("organizations", "_last_updated_by_id")
op.drop_column("organizations", "_created_by_id")
op.drop_column("organizations", "is_deleted")
op.drop_column("organizations", "updated_at")
op.add_column("block", sa.Column("name", sa.VARCHAR(), autoincrement=False, nullable=True))
op.drop_column("block", "template_name")
op.drop_column("agents", "tool_rules")
# ### end Alembic commands ###


@@ -0,0 +1,29 @@
"""Add composite index to messages table
Revision ID: d6632deac81d
Revises: 54dec07619c4
Create Date: 2024-12-18 13:38:56.511701
"""
from typing import Sequence, Union
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "d6632deac81d"
down_revision: Union[str, None] = "54dec07619c4"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.create_index("ix_messages_agent_created_at", "messages", ["agent_id", "created_at"], unique=False)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.drop_index("ix_messages_agent_created_at", table_name="messages")
# ### end Alembic commands ###


@@ -0,0 +1,31 @@
"""Tweak created_at field for messages
Revision ID: e1a625072dbf
Revises: 95badb46fdf9
Create Date: 2024-12-07 14:28:27.643583
"""
from typing import Sequence, Union
from sqlalchemy.dialects import postgresql
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "e1a625072dbf"
down_revision: Union[str, None] = "95badb46fdf9"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.alter_column("messages", "created_at", existing_type=postgresql.TIMESTAMP(timezone=True), nullable=True)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.alter_column("messages", "created_at", existing_type=postgresql.TIMESTAMP(timezone=True), nullable=False)
# ### end Alembic commands ###


@@ -0,0 +1,35 @@
"""Add cascading deletes for sources to agents
Revision ID: e78b4e82db30
Revises: d6632deac81d
Create Date: 2024-12-20 16:30:17.095888
"""
from typing import Sequence, Union
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "e78b4e82db30"
down_revision: Union[str, None] = "d6632deac81d"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.drop_constraint("sources_agents_agent_id_fkey", "sources_agents", type_="foreignkey")
op.drop_constraint("sources_agents_source_id_fkey", "sources_agents", type_="foreignkey")
op.create_foreign_key(None, "sources_agents", "sources", ["source_id"], ["id"], ondelete="CASCADE")
op.create_foreign_key(None, "sources_agents", "agents", ["agent_id"], ["id"], ondelete="CASCADE")
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.drop_constraint(None, "sources_agents", type_="foreignkey")
op.drop_constraint(None, "sources_agents", type_="foreignkey")
op.create_foreign_key("sources_agents_source_id_fkey", "sources_agents", "sources", ["source_id"], ["id"])
op.create_foreign_key("sources_agents_agent_id_fkey", "sources_agents", "agents", ["agent_id"], ["id"])
# ### end Alembic commands ###


@@ -0,0 +1,74 @@
"""Migrate blocks to orm model
Revision ID: f7507eab4bb9
Revises: c85a3d07c028
Create Date: 2024-11-18 15:40:13.149438
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "f7507eab4bb9"
down_revision: Union[str, None] = "c85a3d07c028"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.add_column("block", sa.Column("is_template", sa.Boolean(), nullable=True))
# Populate `is_template` column
op.execute(
"""
UPDATE block
SET is_template = COALESCE(template, FALSE)
"""
)
# Make `is_template` non-nullable
op.alter_column("block", "is_template", nullable=False)
op.add_column("block", sa.Column("organization_id", sa.String(), nullable=True))
# Populate `organization_id` based on `user_id`
# Use a raw SQL query to update the organization_id
op.execute(
"""
UPDATE block
SET organization_id = users.organization_id
FROM users
WHERE block.user_id = users.id
"""
)
op.alter_column("block", "organization_id", nullable=False)
op.add_column("block", sa.Column("created_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True))
op.add_column("block", sa.Column("updated_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True))
op.add_column("block", sa.Column("is_deleted", sa.Boolean(), server_default=sa.text("FALSE"), nullable=False))
op.add_column("block", sa.Column("_created_by_id", sa.String(), nullable=True))
op.add_column("block", sa.Column("_last_updated_by_id", sa.String(), nullable=True))
op.alter_column("block", "limit", existing_type=sa.BIGINT(), type_=sa.Integer(), nullable=False)
op.drop_index("block_idx_user", table_name="block")
op.create_foreign_key(None, "block", "organizations", ["organization_id"], ["id"])
op.drop_column("block", "template")
op.drop_column("block", "user_id")
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.add_column("block", sa.Column("user_id", sa.VARCHAR(), autoincrement=False, nullable=True))
op.add_column("block", sa.Column("template", sa.BOOLEAN(), autoincrement=False, nullable=True))
op.drop_constraint(None, "block", type_="foreignkey")
op.create_index("block_idx_user", "block", ["user_id"], unique=False)
op.alter_column("block", "limit", existing_type=sa.Integer(), type_=sa.BIGINT(), nullable=True)
op.drop_column("block", "_last_updated_by_id")
op.drop_column("block", "_created_by_id")
op.drop_column("block", "is_deleted")
op.drop_column("block", "updated_at")
op.drop_column("block", "created_at")
op.drop_column("block", "organization_id")
op.drop_column("block", "is_template")
# ### end Alembic commands ###


@@ -0,0 +1,73 @@
"""Create sandbox config and sandbox env var tables
Revision ID: f81ceea2c08d
Revises: c85a3d07c028
Create Date: 2024-11-14 17:51:27.263561
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "f81ceea2c08d"
down_revision: Union[str, None] = "f7507eab4bb9"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.create_table(
"sandbox_configs",
sa.Column("id", sa.String(), nullable=False),
sa.Column("type", sa.Enum("E2B", "LOCAL", name="sandboxtype"), nullable=False),
sa.Column("config", sa.JSON(), nullable=False),
sa.Column("created_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True),
sa.Column("updated_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True),
sa.Column("is_deleted", sa.Boolean(), server_default=sa.text("FALSE"), nullable=False),
sa.Column("_created_by_id", sa.String(), nullable=True),
sa.Column("_last_updated_by_id", sa.String(), nullable=True),
sa.Column("organization_id", sa.String(), nullable=False),
sa.ForeignKeyConstraint(
["organization_id"],
["organizations.id"],
),
sa.PrimaryKeyConstraint("id"),
sa.UniqueConstraint("type", "organization_id", name="uix_type_organization"),
)
op.create_table(
"sandbox_environment_variables",
sa.Column("id", sa.String(), nullable=False),
sa.Column("key", sa.String(), nullable=False),
sa.Column("value", sa.String(), nullable=False),
sa.Column("description", sa.String(), nullable=True),
sa.Column("created_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True),
sa.Column("updated_at", sa.DateTime(timezone=True), server_default=sa.text("now()"), nullable=True),
sa.Column("is_deleted", sa.Boolean(), server_default=sa.text("FALSE"), nullable=False),
sa.Column("_created_by_id", sa.String(), nullable=True),
sa.Column("_last_updated_by_id", sa.String(), nullable=True),
sa.Column("organization_id", sa.String(), nullable=False),
sa.Column("sandbox_config_id", sa.String(), nullable=False),
sa.ForeignKeyConstraint(
["organization_id"],
["organizations.id"],
),
sa.ForeignKeyConstraint(
["sandbox_config_id"],
["sandbox_configs.id"],
),
sa.PrimaryKeyConstraint("id"),
sa.UniqueConstraint("key", "sandbox_config_id", name="uix_key_sandbox_config"),
)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.drop_table("sandbox_environment_variables")
op.drop_table("sandbox_configs")
# ### end Alembic commands ###


9
certs/README.md Normal file

@@ -0,0 +1,9 @@
# About
These certs are used to set up a localhost https connection to the ADE.
## Instructions
1. Install [mkcert](https://github.com/FiloSottile/mkcert)
2. Run `mkcert -install`
3. Run letta with the environment variable `LOCAL_HTTPS=true`
4. Access the app at [https://app.letta.com/development-servers/local/dashboard](https://app.letta.com/development-servers/local/dashboard)
5. Click "Add remote server" and enter `https://localhost:8283` as the URL, leave password blank unless you have secured your ADE with a password.

28
certs/localhost-key.pem Normal file

@@ -0,0 +1,28 @@
-----BEGIN PRIVATE KEY-----
MIIEvwIBADANBgkqhkiG9w0BAQEFAASCBKkwggSlAgEAAoIBAQDenaHTolfy9TzX
AUd60yPO1W0mpxdDTuxr2p3tBUaQJt5bEGzJbs1M0i5YVRK/SxtYZQvyqmI0ULKN
8+evKSEpJoDgLfFKM266jzKDSXd5XBQ3XuuxbKq6NV6qoTdweJ0zP0XXDUnKoTN6
eMkUi8hD9P1TR3Ok3VGnT1wsdG0wPwRPDI/sD92GASL4ViUy/1Llrs7GjlOC+7M2
GMoGifSHjmx2xgZ/K8cdD2q15iJJlhdbgCwfejcQlP7cmLtSJHH188EZeoFPEfNS
UpYNglS1kx0D/LC1ooTQRkCpLAnxeonMQZS5O5/q/zyxftkyKO+NInR6DtM0Uj8f
Gu5UDw1TAgMBAAECggEBANhqpkf4K0gm4V6j/7mISedp1RMenZ7xuyWfAqjJ2C+L
md8tuJSbAzsLmcKF8hPGEG9+zH685Xu2d99InpPKiFJY/DD0eP6JwbvcOl8nrN5u
hbjOrpNt8QvVlpKK6DqPB0Qq3tqSMIqs7D7D7bfrrGVkZmHvtJ0yC497t0AAb6XV
zTtnY9K6LVxb/t+eIDDX1AvE7i2WC+j1YgfexbM0VI/g4fveEVaKPFkWF3nSm0Ag
BmqzfGFUWKhBZmWpU0m86Zc45q575Bl64yXEQDYocUw3UfOp5/uF0lwuVe5Bpq/w
hIJgrW6RLzy20QFgPDxHhG3QdBpq4gB9BxCrMb+yhQECgYEA6jL1pD0drczxfaWC
bh+VsbVrRnz/0XDlPjaysO+qKsNP104gydeyyB6RcGnO8PssvFMCJNQlMhkEpp9x
bOwR36a4A2absC/osGgFH4pYyN2wqDb+qE2A3M+rnSGourd09y8BsCovN+5YsozK
HCnqjNWUweypU+RUvtM5jztsiOUCgYEA81ajdczpsysSn0xIFF0exvnnPLy/AiOR
uEFbPi0kavy7niwd609JFsSOwUXg2QavBNeoLH+YrQhueDoBJZANerLfLD8jqQxD
ojB6DkHwK5Vf9NIm8DZQ6trtf8xWGB/TuwpkWHm1wMdlCbmH38MukU4p6as7FKzT
8J57p/TfcdcCgYEAyDqfVzbFTBWfFbxOghZQ5mlj+RTfplHuPL2JEssk4oCvnzV1
xPu8J2ozEDf2LIOiYLRbbd9OmcFX/5jr4aMHOP6R7p5oVz7uovub/bZLaBhZc8fo
+z2gAakvYR0o49H7l2XB/LpkOl51yNmj5mZT2Oq1zwKmVkotxiRS3smAZp0CgYAP
sOyFchs3xHVE9GRJe9+6MO8qSXl/p8+DtCMwFTUd+QIYJvwe6lPqNe6Go/zlwbqT
c1yS0f+EWODWu9bLF0jnOpWNgtzHz9Skpr+YH8Re6xju7oY4QyhgnJFoBkMe9x5u
FzN1SRPhRHpNcDtEwI9GK2YkfTgoEyTvhSiwIegurQKBgQDGkheCC7hqleNV3lGM
SfMUgyGt/1abZ82eAkdfeUrM37FeSbxuafyp0ICjZY0xsn6RUickHyXBJhkOGSJX
lGSvHwMsnXT30KAGd08ZqWmTSGmH6IrdVhrveY+e18ILXYgAkQ1T9tSKjeyFfK8m
dUWlFZHfdToFu1pn7yBgofMAmw==
-----END PRIVATE KEY-----

26
certs/localhost.pem Normal file

@@ -0,0 +1,26 @@
-----BEGIN CERTIFICATE-----
MIIEdjCCAt6gAwIBAgIQX/6Qs3c+lQq4+pcuUK7a7jANBgkqhkiG9w0BAQsFADCB
lTEeMBwGA1UEChMVbWtjZXJ0IGRldmVsb3BtZW50IENBMTUwMwYDVQQLDCxzaHVi
QFNodWItTWVtR1BULURyaXZlci5sb2NhbCAoU2h1YmhhbSBOYWlrKTE8MDoGA1UE
AwwzbWtjZXJ0IHNodWJAU2h1Yi1NZW1HUFQtRHJpdmVyLmxvY2FsIChTaHViaGFt
IE5haWspMB4XDTI0MTIxMDE4MTgwMFoXDTI3MDMxMDE4MTgwMFowYDEnMCUGA1UE
ChMebWtjZXJ0IGRldmVsb3BtZW50IGNlcnRpZmljYXRlMTUwMwYDVQQLDCxzaHVi
QFNodWItTWVtR1BULURyaXZlci5sb2NhbCAoU2h1YmhhbSBOYWlrKTCCASIwDQYJ
KoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6dodOiV/L1PNcBR3rTI87VbSanF0NO
7Gvane0FRpAm3lsQbMluzUzSLlhVEr9LG1hlC/KqYjRQso3z568pISkmgOAt8Uoz
brqPMoNJd3lcFDde67Fsqro1XqqhN3B4nTM/RdcNScqhM3p4yRSLyEP0/VNHc6Td
UadPXCx0bTA/BE8Mj+wP3YYBIvhWJTL/UuWuzsaOU4L7szYYygaJ9IeObHbGBn8r
xx0ParXmIkmWF1uALB96NxCU/tyYu1IkcfXzwRl6gU8R81JSlg2CVLWTHQP8sLWi
hNBGQKksCfF6icxBlLk7n+r/PLF+2TIo740idHoO0zRSPx8a7lQPDVMCAwEAAaN2
MHQwDgYDVR0PAQH/BAQDAgWgMBMGA1UdJQQMMAoGCCsGAQUFBwMBMB8GA1UdIwQY
MBaAFJ31vDww7/qA2mBtAN3GE+TZCqNeMCwGA1UdEQQlMCOCCWxvY2FsaG9zdIcE
fwAAAYcQAAAAAAAAAAAAAAAAAAAAATANBgkqhkiG9w0BAQsFAAOCAYEAAy63DbPf
8iSWYmVgccFc5D+MpNgnWi6WsI5OTtRv66eV9+Vv9HseEVrSw8IVMoZt+peosi+K
0woVPT+bKCxlgkEClO7oZIUEMlzJq9sduISFV5fzFLMq8xhIIO5ud4zs1X/1GlrE
zAdq+YiZnbuKqLFSoPLZGrVclmiI3dLqp0LETZxVOiCGt52RRb87Mt9bQEHnP5LJ
EOJYZ1C7/qDDga3vFJ66Nisy015DpE7XXM5PASElpK9l4+yBOg9UdLSkd0VLm/Jm
+4rskdrSTiomU2TBd6Vys7nrn2K72ZOHOcbfFnPEet9z1L44xaddsaPE52ayu8PO
uxHl7rBr2Kzeuy22ppX09EpPdSnjrG6Sgojv4CCS6n8tAbhat8K0pTrzk1e7L8HT
Qy4P/LlViW56mfyM+02CurxbVOecCDdFPMwY357BXMnL6VmRrDtixh+XIXdyK2zS
aYhsbRFA7VJ1AM57gbPbDJElyIlvVetubilvfuOvvQX46cC/ZX5agzTd
-----END CERTIFICATE-----

61
compose.yaml Normal file

@@ -0,0 +1,61 @@
services:
letta_db:
image: ankane/pgvector:v0.5.1
networks:
default:
aliases:
- pgvector_db
- letta-db
environment:
- POSTGRES_USER=${LETTA_PG_USER:-letta}
- POSTGRES_PASSWORD=${LETTA_PG_PASSWORD:-letta}
- POSTGRES_DB=${LETTA_PG_DB:-letta}
volumes:
- ./.persist/pgdata:/var/lib/postgresql/data
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
ports:
- "5432:5432"
healthcheck:
test: ["CMD-SHELL", "pg_isready -U letta"]
interval: 5s
timeout: 5s
retries: 5
letta_server:
image: letta/letta:latest
hostname: letta-server
depends_on:
letta_db:
condition: service_healthy
ports:
- "8083:8083"
- "8283:8283"
env_file:
- .env
environment:
- LETTA_PG_DB=${LETTA_PG_DB:-letta}
- LETTA_PG_USER=${LETTA_PG_USER:-letta}
- LETTA_PG_PASSWORD=${LETTA_PG_PASSWORD:-letta}
- LETTA_PG_HOST=pgvector_db
- LETTA_PG_PORT=5432
- LETTA_DEBUG=True
- OPENAI_API_KEY=${OPENAI_API_KEY}
- GROQ_API_KEY=${GROQ_API_KEY}
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
- OLLAMA_BASE_URL=${OLLAMA_BASE_URL}
- AZURE_API_KEY=${AZURE_API_KEY}
- AZURE_BASE_URL=${AZURE_BASE_URL}
- AZURE_API_VERSION=${AZURE_API_VERSION}
- GEMINI_API_KEY=${GEMINI_API_KEY}
- VLLM_API_BASE=${VLLM_API_BASE}
- OPENLLM_AUTH_TYPE=${OPENLLM_AUTH_TYPE}
- OPENLLM_API_KEY=${OPENLLM_API_KEY}
#volumes:
#- ./configs/server_config.yaml:/root/.letta/config # config file
#- ~/.letta/credentials:/root/.letta/credentials # credentials file
letta_nginx:
hostname: letta-nginx
image: nginx:stable-alpine3.17-slim
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
ports:
- "80:80"

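The `${VAR:-default}` references in the environment sections above use shell-style defaulting: the default applies when the variable is unset or empty. As an illustration only, here is a minimal Python sketch of that resolution rule (`resolve` is a hypothetical helper, not part of Compose or Letta, and it ignores other Compose forms such as `${VAR-default}` and `${VAR:?err}`):

```python
import os
import re

# Matches ${NAME} or ${NAME:-default}
_VAR = re.compile(r"\$\{(\w+)(?::-([^}]*))?\}")

def resolve(value, env=None):
    """Expand ${VAR} / ${VAR:-default} references, Compose-style.

    With ':-', the default is used when the variable is unset OR empty;
    a plain ${VAR} with no value expands to the empty string.
    """
    env = os.environ if env is None else env

    def _sub(m):
        name, default = m.group(1), m.group(2)
        return env.get(name) or (default or "")

    return _VAR.sub(_sub, value)

print(resolve("POSTGRES_USER=${LETTA_PG_USER:-letta}", env={}))
```

So with no `LETTA_PG_USER` in the environment, the database falls back to the `letta` user, matching the defaults assumed elsewhere in this repo.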

@@ -0,0 +1,6 @@
{
"context_window": 128000,
"model": "gpt-4o-mini",
"model_endpoint_type": "azure",
"model_wrapper": null
}

87
db/Dockerfile.simple Normal file

@@ -0,0 +1,87 @@
# syntax = docker/dockerfile:1.6
# Build a self-configuring postgres image with pgvector installed.
# It has no dependencies except for the base image.
# Build with:
# docker build -t letta-db -f db/Dockerfile.simple .
#
# -t letta-db: tag the image with the name letta-db (tag defaults to :latest)
# -f db/Dockerfile.simple: use the Dockerfile at db/Dockerfile.simple (this file)
# .: use the current directory as the build context (not actually needed by this Dockerfile)
#
# Run the first time with:
# docker run -d --rm \
# --name letta-db \
# -p 5432:5432 \
# -e POSTGRES_PASSWORD=password \
# -v letta_db:/var/lib/postgresql/data \
# letta-db:latest
#
# -d: run in the background
# --rm: remove the container when it exits
# --name letta-db: name the container letta-db
# -p 5432:5432: map port 5432 on the host to port 5432 in the container
# -v letta_db:/var/lib/postgresql/data: map the volume letta_db to /var/lib/postgresql/data in the container
# letta-db:latest: use the image letta-db:latest
#
# After the first time, you do not need the POSTGRES_PASSWORD.
# docker run -d --rm \
# --name letta-db \
# -p 5432:5432 \
# -v letta_db:/var/lib/postgresql/data \
# letta-db:latest
# Rather than a docker volume (letta_db), you can use an absolute path to a directory on the host.
#
# You can stop the container with:
# docker stop letta-db
#
# You access the database with:
# postgresql+pg8000://user:password@localhost:5432/db
# where user, password, and db are the values you set in the init-letta.sql file,
# all defaulting to 'letta'.
# Version tags can be found here: https://hub.docker.com/r/ankane/pgvector/tags
ARG PGVECTOR=v0.5.1
# Set up a minimal postgres image
FROM ankane/pgvector:${PGVECTOR}
RUN sed -e 's/^ //' >/docker-entrypoint-initdb.d/01-initletta.sql <<'EOF'
-- Title: Init Letta Database
-- Fetch the docker secrets, if they are available.
-- Otherwise fall back to environment variables, or hardwired 'letta'
\set db_user `([ -r /var/run/secrets/letta-user ] && cat /var/run/secrets/letta-user) || echo "${LETTA_USER:-letta}"`
\set db_password `([ -r /var/run/secrets/letta-password ] && cat /var/run/secrets/letta-password) || echo "${LETTA_PASSWORD:-letta}"`
\set db_name `([ -r /var/run/secrets/letta-db ] && cat /var/run/secrets/letta-db) || echo "${LETTA_DB:-letta}"`
CREATE USER :"db_user"
WITH PASSWORD :'db_password'
NOCREATEDB
NOCREATEROLE
;
CREATE DATABASE :"db_name"
WITH
OWNER = :"db_user"
ENCODING = 'UTF8'
LC_COLLATE = 'en_US.utf8'
LC_CTYPE = 'en_US.utf8'
LOCALE_PROVIDER = 'libc'
TABLESPACE = pg_default
CONNECTION LIMIT = -1;
-- Set up our schema and extensions in our new database.
\c :"db_name"
CREATE SCHEMA :"db_name"
AUTHORIZATION :"db_user";
ALTER DATABASE :"db_name"
SET search_path TO :"db_name";
CREATE EXTENSION IF NOT EXISTS vector WITH SCHEMA :"db_name";
DROP SCHEMA IF EXISTS public CASCADE;
EOF

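The connection string described in the Dockerfile comments above can be assembled safely in Python. This is a minimal sketch, not part of Letta itself (`letta_dsn` is a hypothetical helper); the defaults mirror the `letta` fallbacks in the init script:

```python
from urllib.parse import quote_plus

def letta_dsn(user="letta", password="letta", host="localhost",
              port=5432, db="letta"):
    """Build the pg8000 DSN from the Dockerfile comments.

    The password is URL-encoded so special characters don't corrupt
    the URL. Defaults match the 'letta' fallbacks in init-letta.sql.
    """
    return f"postgresql+pg8000://{user}:{quote_plus(password)}@{host}:{port}/{db}"

print(letta_dsn())  # default credentials
```

If you start the container with `db/run_postgres.sh` below, which maps host port 8888 to the container's 5432, pass `port=8888` instead.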
10
db/run_postgres.sh Executable file

@@ -0,0 +1,10 @@
#!/bin/sh
# build container
docker build -f db/Dockerfile.simple -t pg-test .
# run container
docker run -d --rm \
--name letta-db-test \
-p 8888:5432 \
-e POSTGRES_PASSWORD=password \
-v letta_db_test:/var/lib/postgresql/data \
pg-test:latest

48
dev-compose.yaml Normal file

@@ -0,0 +1,48 @@
services:
letta_db:
image: ankane/pgvector:v0.5.1
networks:
default:
aliases:
- pgvector_db
- letta-db
environment:
- POSTGRES_USER=${LETTA_PG_USER:-letta}
- POSTGRES_PASSWORD=${LETTA_PG_PASSWORD:-letta}
- POSTGRES_DB=${LETTA_PG_DB:-letta}
volumes:
- ./.persist/pgdata-test:/var/lib/postgresql/data
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
ports:
- "5432:5432"
letta_server:
image: letta/letta:latest
hostname: letta
build:
context: .
dockerfile: Dockerfile
target: runtime
depends_on:
- letta_db
ports:
- "8083:8083"
- "8283:8283"
environment:
- SERPAPI_API_KEY=${SERPAPI_API_KEY}
- LETTA_PG_DB=${LETTA_PG_DB:-letta}
- LETTA_PG_USER=${LETTA_PG_USER:-letta}
- LETTA_PG_PASSWORD=${LETTA_PG_PASSWORD:-letta}
- LETTA_PG_HOST=pgvector_db
- LETTA_PG_PORT=5432
- LETTA_DEBUG=True
- OPENAI_API_KEY=${OPENAI_API_KEY}
- GROQ_API_KEY=${GROQ_API_KEY}
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
- OLLAMA_BASE_URL=${OLLAMA_BASE_URL}
- AZURE_API_KEY=${AZURE_API_KEY}
- AZURE_BASE_URL=${AZURE_BASE_URL}
- AZURE_API_VERSION=${AZURE_API_VERSION}
- GEMINI_API_KEY=${GEMINI_API_KEY}
- VLLM_API_BASE=${VLLM_API_BASE}
- OPENLLM_AUTH_TYPE=${OPENLLM_AUTH_TYPE}
- OPENLLM_API_KEY=${OPENLLM_API_KEY}

29
development.compose.yml Normal file

@@ -0,0 +1,29 @@
services:
letta_server:
image: letta_server
hostname: letta-server
build:
context: .
dockerfile: Dockerfile
target: development
args:
- MEMGPT_ENVIRONMENT=DEVELOPMENT
depends_on:
- letta_db
env_file:
- .env
environment:
- WATCHFILES_FORCE_POLLING=true
volumes:
- ./letta:/letta
- ~/.letta/credentials:/root/.letta/credentials
- ./configs/server_config.yaml:/root/.letta/config
- ./CONTRIBUTING.md:/CONTRIBUTING.md
- ./tests/pytest_cache:/letta/.pytest_cache
- ./tests/pytest.ini:/letta/pytest.ini
- ./pyproject.toml:/pyproject.toml
- ./tests:/tests
ports:
- "8083:8083"
- "8283:8283"

35
docker-compose-vllm.yaml Normal file

@@ -0,0 +1,35 @@
version: '3.8'
services:
letta:
image: letta/letta:latest
ports:
- "8283:8283"
environment:
- LETTA_LLM_ENDPOINT=http://vllm:8000
- LETTA_LLM_ENDPOINT_TYPE=vllm
- LETTA_LLM_MODEL=${LETTA_LLM_MODEL} # Replace with your model
- LETTA_LLM_CONTEXT_WINDOW=8192
depends_on:
- vllm
vllm:
image: vllm/vllm-openai:latest
runtime: nvidia
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]
environment:
- HUGGING_FACE_HUB_TOKEN=${HUGGING_FACE_HUB_TOKEN}
volumes:
- ~/.cache/huggingface:/root/.cache/huggingface
ports:
- "8000:8000"
command: >
--model ${LETTA_LLM_MODEL} --max_model_len=8000
# Replace with your model
ipc: host


@@ -0,0 +1,434 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "cac06555-9ce8-4f01-bbef-3f8407f4b54d",
"metadata": {},
"source": [
"# Lab 3: Using MemGPT to build agents with memory \n",
"This lab will go over: \n",
"1. Creating an agent with MemGPT\n",
"2. Understanding MemGPT agent state (messages, memories, tools)\n",
"3. Understanding core and archival memory\n",
"4. Building agentic RAG with MemGPT "
]
},
{
"cell_type": "markdown",
"id": "aad3a8cc-d17a-4da1-b621-ecc93c9e2106",
"metadata": {},
"source": [
"## Setup a Letta client \n",
"Make sure you run `pip install letta` and `letta quickstart`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "067e007c-02f7-4d51-9c8a-651c7d5a6499",
"metadata": {},
"outputs": [],
"source": [
"!pip install letta\n",
"! letta quickstart"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7ccd43f2-164b-4d25-8465-894a3bb54c4b",
"metadata": {},
"outputs": [],
"source": [
"from letta import create_client \n",
"\n",
"client = create_client() "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9a28e38a-7dbe-4530-8260-202322a8458e",
"metadata": {},
"outputs": [],
"source": [
"from letta import LLMConfig, EmbeddingConfig\n",
"\n",
"client.set_default_llm_config(LLMConfig.default_config(\"gpt-4o-mini\")) \n",
"client.set_default_embedding_config(EmbeddingConfig.default_config(provider=\"openai\")) "
]
},
{
"cell_type": "markdown",
"id": "65bf0dc2-d1ac-4d4c-8674-f3156eeb611d",
"metadata": {},
"source": [
"## Creating a simple agent with memory \n",
"MemGPT allows you to create persistent LLM agents that have memory. By default, MemGPT saves all state related to agents in a database, so you can also re-load an existing agent with its prior state. In this section, we'll show you how to create a MemGPT agent and understand what memories it stores. \n"
]
},
{
"cell_type": "markdown",
"id": "fe092474-6b91-4124-884d-484fc28b58e7",
"metadata": {},
"source": [
"### Creating an agent "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2a9d6228-a0f5-41e6-afd7-6a05260565dc",
"metadata": {},
"outputs": [],
"source": [
"agent_name = \"simple_agent\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "62dcf31d-6f45-40f5-8373-61981f03da62",
"metadata": {},
"outputs": [],
"source": [
"from letta.schemas.memory import ChatMemory\n",
"\n",
"agent_state = client.create_agent(\n",
" name=agent_name, \n",
" memory=ChatMemory(\n",
" human=\"My name is Sarah\", \n",
" persona=\"You are a helpful assistant that loves emojis\"\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "31c2d5f6-626a-4666-8d0b-462db0292a7d",
"metadata": {},
"outputs": [],
"source": [
"response = client.send_message(\n",
" agent_id=agent_state.id, \n",
" message=\"hello!\", \n",
" role=\"user\" \n",
")\n",
"response"
]
},
{
"cell_type": "markdown",
"id": "20a5ccf4-addd-4bdb-be80-161f7925dae0",
"metadata": {},
"source": [
"Note that MemGPT agents will generate an *internal_monologue* that explains their actions. You can use this monologue to understand why agents are behaving as they are. \n",
"\n",
"Second, MemGPT agents also use tools to communicate, so messages are sent back by calling a `send_message` tool. This makes it easy to allow agents to communicate over different mediums (e.g. text), and also allows the agent to distinguish between what is and isn't sent to the end user. "
]
},
{
"cell_type": "markdown",
"id": "8d33eca5-b8e8-4a8f-9440-85b45c37a777",
"metadata": {},
"source": [
"### Understanding agent state \n",
"MemGPT agents are *stateful* and are defined by: \n",
"* The system prompt defining the agent's behavior (read-only)\n",
"* The set of *tools* they have access to \n",
"* Their memory (core, archival, & recall)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c1cf7136-4060-441a-9d12-da851badf339",
"metadata": {},
"outputs": [],
"source": [
"print(agent_state.system)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d9e1c8c0-e98c-4952-b850-136b5b50a5ee",
"metadata": {},
"outputs": [],
"source": [
"agent_state.tools"
]
},
{
"cell_type": "markdown",
"id": "ae910ad9-afee-41f5-badd-a8dee5b2ad94",
"metadata": {},
"source": [
"### Viewing an agent's memory"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "478a0df6-3c87-4803-9133-8a54f9c00320",
"metadata": {},
"outputs": [],
"source": [
"memory = client.get_core_memory(agent_state.id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ff2c3736-5424-4883-8fe9-73a4f598a043",
"metadata": {},
"outputs": [],
"source": [
"memory"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d6da43d6-847e-4a0a-9b92-cea2721e828a",
"metadata": {},
"outputs": [],
"source": [
"client.get_archival_memory_summary(agent_state.id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0399a1d6-a1f8-4796-a4c0-eb322512b0ec",
"metadata": {},
"outputs": [],
"source": [
"client.get_recall_memory_summary(agent_state.id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c7cce583-1f11-4f13-a6ed-52cc7f80e3c4",
"metadata": {},
"outputs": [],
"source": [
"client.get_messages(agent_state.id)"
]
},
{
"cell_type": "markdown",
"id": "dfd0a9ae-417e-4ba0-a562-ec59cb2bbf7d",
"metadata": {},
"source": [
"## Understanding core memory \n",
"Core memory is memory that is stored *in-context* - core memory is included in every LLM call. What's unique about MemGPT is that this core memory is editable via tools by the agent itself. Let's see how the agent can adapt its memory to new information."
]
},
{
"cell_type": "markdown",
"id": "d259669c-5903-40b5-8758-93c36faa752f",
"metadata": {},
"source": [
"### Memories about the human \n",
"The `human` section of `ChatMemory` is used to remember information about the human in the conversation. As the agent learns new information about the human, it can update this part of memory to improve personalization. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "beb9b0ba-ed7c-4917-8ee5-21d201516086",
"metadata": {},
"outputs": [],
"source": [
"response = client.send_message(\n",
" agent_id=agent_state.id, \n",
" message = \"My name is actually Bob\", \n",
" role = \"user\"\n",
") \n",
"response"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "25f58968-e262-4268-86ef-1bed57e6bf33",
"metadata": {},
"outputs": [],
"source": [
"client.get_core_memory(agent_state.id)"
]
},
{
"cell_type": "markdown",
"id": "32692ca2-b731-43a6-84de-439a08a4c0d2",
"metadata": {},
"source": [
"### Memories about the agent\n",
"The agent also records information about itself and how it behaves in the `persona` section of memory. This is important for ensuring a consistent persona over time (e.g. not making inconsistent claims, such as liking ice cream one day and hating it another). Unlike the `system_prompt`, the `persona` is editable - this means the agent can incorporate feedback to learn and improve its persona over time. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f68851c5-5666-45fd-9d2f-037ea86bfcfa",
"metadata": {},
"outputs": [],
"source": [
"response = client.send_message(\n",
" agent_id=agent_state.id, \n",
" message = \"In the future, never use emojis to communicate\", \n",
" role = \"user\"\n",
") \n",
"response"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2fc54336-d61f-446d-82ea-9dd93a011e51",
"metadata": {},
"outputs": [],
"source": [
"client.get_core_memory(agent_state.id).get_block('persona')"
]
},
{
"cell_type": "markdown",
"id": "592f5d1c-cd2f-4314-973e-fcc481e6b460",
"metadata": {},
"source": [
"## Understanding archival memory\n",
"MemGPT agents store long-term memories in *archival memory*, which persists data in an external database. This gives agents additional space to write information outside of the context window (unlike core memory), which is limited in size. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "af63a013-6be3-4931-91b0-309ff2a4dc3a",
"metadata": {},
"outputs": [],
"source": [
"client.get_archival_memory(agent_state.id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bfa52984-fe7c-4d17-900a-70a376a460f9",
"metadata": {},
"outputs": [],
"source": [
"client.get_archival_memory_summary(agent_state.id)"
]
},
{
"cell_type": "markdown",
"id": "a3ab0ae9-fc00-4447-8942-7dbed7a99222",
"metadata": {},
"source": [
"Agents themselves can write to their archival memory when they learn information they think should be placed in long-term storage. You can also directly suggest that the agent store information in archival memory. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c6556f76-8fcb-42ff-a6d0-981685ef071c",
"metadata": {},
"outputs": [],
"source": [
"response = client.send_message(\n",
" agent_id=agent_state.id, \n",
" message = \"Save the information that 'bob loves cats' to archival\", \n",
" role = \"user\"\n",
") \n",
"response"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b4429ffa-e27a-4714-a873-84f793c08535",
"metadata": {},
"outputs": [],
"source": [
"client.get_archival_memory(agent_state.id)[0].text"
]
},
{
"cell_type": "markdown",
"id": "ae463e7c-0588-48ab-888c-734c783782bf",
"metadata": {},
"source": [
"You can also directly insert into archival memory from the client. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f9d4194d-9ed5-40a1-b35d-a9aff3048000",
"metadata": {},
"outputs": [],
"source": [
"client.insert_archival_memory(\n",
" agent_state.id, \n",
" \"Bob loves Boston Terriers\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "338149f1-6671-4a0b-81d9-23d01dbe2e97",
"metadata": {},
"source": [
"Now let's see how the agent uses its archival memory:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5908b10f-94db-4f5a-bb9a-1f08c74a2860",
"metadata": {},
"outputs": [],
"source": [
"response = client.send_message(\n",
" agent_id=agent_state.id, \n",
" role=\"user\", \n",
" message=\"What animals do I like? Search archival.\"\n",
")\n",
"response"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "adc394c8-1d88-42bf-a6a5-b01f20f78d81",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "letta-main",
"language": "python",
"name": "letta-main"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,92 @@
import json
import os
import uuid

from letta import create_client
from letta.schemas.embedding_config import EmbeddingConfig
from letta.schemas.llm_config import LLMConfig
from letta.schemas.memory import ChatMemory
from letta.schemas.sandbox_config import SandboxEnvironmentVariableCreate, SandboxType
from letta.services.sandbox_config_manager import SandboxConfigManager
from letta.settings import tool_settings

"""
Setup here.
"""

# Create a `LocalClient` (you can also use a `RESTClient`, see the letta_rest_client.py example)
client = create_client()
client.set_default_llm_config(LLMConfig.default_config("gpt-4o-mini"))
client.set_default_embedding_config(EmbeddingConfig.default_config(provider="openai"))

# Generate a uuid for the agent name for this example
namespace = uuid.NAMESPACE_DNS
agent_uuid = str(uuid.uuid5(namespace, "letta-composio-tooling-example"))

# Clear any agents left over from previous runs
for agent_state in client.list_agents():
    if agent_state.name == agent_uuid:
        client.delete_agent(agent_id=agent_state.id)
        print(f"Deleted agent: {agent_state.name} with ID {str(agent_state.id)}")

# Add the sandbox environment variable
manager = SandboxConfigManager(tool_settings)

# Ensure you have an E2B key set
sandbox_config = manager.get_or_create_default_sandbox_config(sandbox_type=SandboxType.E2B, actor=client.user)
manager.create_sandbox_env_var(
    SandboxEnvironmentVariableCreate(key="COMPOSIO_API_KEY", value=os.environ.get("COMPOSIO_API_KEY")),
    sandbox_config_id=sandbox_config.id,
    actor=client.user,
)

"""
This example shows how you can add Composio tools.
First, make sure you have Composio and some of the extras downloaded.
```
poetry install --extras "external-tools"
```
then set up letta with `letta configure`.
Additionally, this example stars a GitHub repo on your behalf. You will need to configure Composio in your environment.
```
composio login
composio add github
```
Last updated Oct 2, 2024. Please check the `composio` documentation for any Composio-related issues.
"""


def main():
    from composio_langchain import Action

    # Add the Composio tool
    tool = client.load_composio_tool(action=Action.GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER)

    persona = f"""
    My name is Letta.

    I am a personal assistant that helps star repos on GitHub. It is my job to correctly input the owner and repo to the {tool.name} tool based on the user's request.

    Don't forget - inner monologue / inner thoughts should always be different than the contents of send_message! send_message is how you communicate with the user, whereas inner thoughts are your own personal inner thoughts.
    """

    # Create an agent
    agent = client.create_agent(name=agent_uuid, memory=ChatMemory(human="My name is Matt.", persona=persona), tool_ids=[tool.id])
    print(f"Created agent: {agent.name} with ID {str(agent.id)}")

    # Send a message to the agent
    send_message_response = client.user_message(agent_id=agent.id, message="Star a repo composio with owner composiohq on GitHub")
    for message in send_message_response.messages:
        response_json = json.dumps(message.model_dump(), indent=4)
        print(f"{response_json}\n")

    # Delete the agent
    client.delete_agent(agent_id=agent.id)
    print(f"Deleted agent: {agent.name} with ID {str(agent.id)}")


if __name__ == "__main__":
    main()


@@ -0,0 +1,47 @@
from letta import ChatMemory, EmbeddingConfig, LLMConfig, create_client
from letta.prompts import gpt_system

client = create_client()

# create a new agent
agent_state = client.create_agent(
    # agent's name (unique per-user, autogenerated if not provided)
    name="agent_name",
    # in-context memory representation with human/persona blocks
    memory=ChatMemory(human="Name: Sarah", persona="You are a helpful assistant that loves emojis"),
    # LLM model & endpoint configuration
    llm_config=LLMConfig(
        model="gpt-4",
        model_endpoint_type="openai",
        model_endpoint="https://api.openai.com/v1",
        context_window=8000,  # set to <= the model's max context window
    ),
    # embedding model & endpoint configuration (cannot be changed)
    embedding_config=EmbeddingConfig(
        embedding_endpoint_type="openai",
        embedding_endpoint="https://api.openai.com/v1",
        embedding_model="text-embedding-ada-002",
        embedding_dim=1536,
        embedding_chunk_size=300,
    ),
    # system instructions for the agent (defaults to `memgpt_chat`)
    system=gpt_system.get_system_text("memgpt_chat"),
    # whether to include base letta tools (default: True)
    include_base_tools=True,
    # list of additional tools (by ID) to add to the agent
    tool_ids=[],
)
print(f"Created agent with name {agent_state.name} and unique ID {agent_state.id}")

# message an agent as a user
response = client.send_message(agent_id=agent_state.id, role="user", message="hello")
print("Usage", response.usage)
print("Agent messages", response.messages)

# send a system message (non-user)
response = client.send_message(agent_id=agent_state.id, role="system", message="[system] user has logged in. send a friendly message.")
print("Usage", response.usage)
print("Agent messages", response.messages)

# delete the agent
client.delete_agent(agent_id=agent_state.id)


@@ -0,0 +1,29 @@
from letta import EmbeddingConfig, LLMConfig, create_client
client = create_client()
# set automatic defaults for LLM/embedding config
client.set_default_llm_config(LLMConfig.default_config(model_name="gpt-4"))
client.set_default_embedding_config(EmbeddingConfig.default_config(model_name="text-embedding-ada-002"))
# create a new agent
agent_state = client.create_agent()
print(f"Created agent with name {agent_state.name} and unique ID {agent_state.id}")
# Message an agent
response = client.send_message(agent_id=agent_state.id, role="user", message="hello")
print("Usage", response.usage)
print("Agent messages", response.messages)
# list all agents
agents = client.list_agents()
# get the agent by ID
agent_state = client.get_agent(agent_id=agent_state.id)
# get the agent by name
agent_id = client.get_agent_id(agent_name=agent_state.name)
agent_state = client.get_agent(agent_id=agent_id)
# delete an agent
client.delete_agent(agent_id=agent_state.id)

0
examples/docs/memory.py Normal file


@@ -0,0 +1,42 @@
from letta import create_client
from letta.schemas.memory import ChatMemory

"""
Make sure you run the Letta server before running this example.
```
letta server
```
"""


def main():
    # Connect to the server as a user
    client = create_client(base_url="http://localhost:8283")

    # List the available configs on the server
    llm_configs = client.list_llm_configs()
    print(f"Available LLM configs: {llm_configs}")
    embedding_configs = client.list_embedding_configs()
    print(f"Available embedding configs: {embedding_configs}")

    # Create an agent
    agent_state = client.create_agent(
        name="my_agent",
        memory=ChatMemory(human="My name is Sarah.", persona="I am a friendly AI."),
        embedding_config=embedding_configs[0],
        llm_config=llm_configs[0],
    )
    print(f"Created agent: {agent_state.name} with ID {str(agent_state.id)}")

    # Send a message to the agent
    response = client.user_message(agent_id=agent_state.id, message="Whats my name?")
    print("Received response:", response.messages)

    # Delete the agent
    client.delete_agent(agent_id=agent_state.id)
    print(f"Deleted agent: {agent_state.name} with ID {str(agent_state.id)}")


if __name__ == "__main__":
    main()

72
examples/docs/tools.py Normal file

@@ -0,0 +1,72 @@
from letta import EmbeddingConfig, LLMConfig, create_client
from letta.schemas.tool_rule import TerminalToolRule

client = create_client()

# set automatic defaults for LLM/embedding config
client.set_default_llm_config(LLMConfig.default_config(model_name="gpt-4"))
client.set_default_embedding_config(EmbeddingConfig.default_config(model_name="text-embedding-ada-002"))


# define a function with a docstring
def roll_d20() -> str:
    """
    Simulate the roll of a 20-sided die (d20).

    This function generates a random integer between 1 and 20, inclusive,
    which represents the outcome of a single roll of a d20.

    Returns:
        str: A message reporting the die roll, a random integer between 1 and 20.

    Example:
        >>> roll_d20()
        'You rolled a 15'  # This is an example output and may vary each time the function is called.
    """
    import random

    dice_roll_outcome = random.randint(1, 20)
    output_string = f"You rolled a {dice_roll_outcome}"
    return output_string


# create a tool from the function
tool = client.create_or_update_tool(roll_d20)
print(f"Created tool with name {tool.name}")

# create a new agent
agent_state = client.create_agent(
    # create the agent with an additional tool
    tool_ids=[tool.id],
    # add tool rules that terminate execution after specific tools
    tool_rules=[
        # exit after roll_d20 is called
        TerminalToolRule(tool_name=tool.name),
        # exit after send_message is called (default behavior)
        TerminalToolRule(tool_name="send_message"),
    ],
)
print(f"Created agent with name {agent_state.name} with tools {[t.name for t in agent_state.tools]}")

# Message an agent
response = client.send_message(agent_id=agent_state.id, role="user", message="roll a dice")
print("Usage", response.usage)
print("Agent messages", response.messages)

# remove a tool from the agent
client.remove_tool_from_agent(agent_id=agent_state.id, tool_id=tool.id)

# add a tool to the agent
client.add_tool_to_agent(agent_id=agent_state.id, tool_id=tool.id)

client.delete_agent(agent_id=agent_state.id)

# create an agent with only a subset of the default tools
send_message_tool = client.get_tool_id("send_message")
agent_state = client.create_agent(include_base_tools=False, tool_ids=[tool.id, send_message_tool])

# message the agent to search archival memory (it will be unable to do so)
response = client.send_message(agent_id=agent_state.id, role="user", message="search your archival memory")
print("Usage", response.usage)
print("Agent messages", response.messages)

client.delete_agent(agent_id=agent_state.id)
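As an aside, the tool function is plain Python, so its behavior can be sanity-checked without a client or agent. The helper below is illustrative (not part of Letta) and mirrors `roll_d20`:

```python
import random


def roll_message() -> str:
    # Mirrors the body of roll_d20 above: report a random 1-20 outcome.
    return f"You rolled a {random.randint(1, 20)}"


# The numeric outcome can be recovered from the message with a split,
# which is handy when writing tests around the tool.
outcome = int(roll_message().rsplit(" ", 1)[-1])
assert 1 <= outcome <= 20
```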

145
examples/helper.py Normal file

@@ -0,0 +1,145 @@
# Add your utilities or helper functions to this file.
import html
import json
import os
import re

from dotenv import find_dotenv, load_dotenv
from IPython.display import HTML, display

# These expect to find a .env file in the directory above the lesson.
# The format for that file is (without the leading "#"):
# API_KEYNAME=AStringThatIsTheLongAPIKeyFromSomeService


def load_env():
    _ = load_dotenv(find_dotenv())


def get_openai_api_key():
    load_env()
    openai_api_key = os.getenv("OPENAI_API_KEY")
    return openai_api_key


def nb_print(messages):
    html_output = """
    <style>
    .message-container {
        font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
        max-width: 800px;
        margin: 20px auto;
        background-color: #1e1e1e;
        border-radius: 8px;
        overflow: hidden;
        color: #d4d4d4;
    }
    .message {
        padding: 10px 15px;
        border-bottom: 1px solid #3a3a3a;
    }
    .message:last-child {
        border-bottom: none;
    }
    .title {
        font-weight: bold;
        margin-bottom: 5px;
        color: #ffffff;
        text-transform: uppercase;
        font-size: 0.9em;
    }
    .content {
        background-color: #2d2d2d;
        border-radius: 4px;
        padding: 5px 10px;
        font-family: 'Consolas', 'Courier New', monospace;
        white-space: pre-wrap;
    }
    .status-line {
        margin-bottom: 5px;
        color: #d4d4d4;
    }
    .function-name { color: #569cd6; }
    .json-key { color: #9cdcfe; }
    .json-string { color: #ce9178; }
    .json-number { color: #b5cea8; }
    .json-boolean { color: #569cd6; }
    .internal-monologue { font-style: italic; }
    </style>
    <div class="message-container">
    """
    for msg in messages:
        content = get_formatted_content(msg)
        # don't print empty function returns
        if msg.message_type == "function_return":
            return_data = json.loads(msg.function_return)
            if "message" in return_data and return_data["message"] == "None":
                continue
        if msg.message_type == "tool_return_message":
            return_data = json.loads(msg.tool_return)
            if "message" in return_data and return_data["message"] == "None":
                continue
        title = msg.message_type.replace("_", " ").upper()
        html_output += f"""
        <div class="message">
            <div class="title">{title}</div>
            {content}
        </div>
        """
    html_output += "</div>"
    display(HTML(html_output))


def get_formatted_content(msg):
    if msg.message_type == "internal_monologue":
        return f'<div class="content"><span class="internal-monologue">{html.escape(msg.internal_monologue)}</span></div>'
    elif msg.message_type == "reasoning_message":
        return f'<div class="content"><span class="internal-monologue">{html.escape(msg.reasoning)}</span></div>'
    elif msg.message_type == "function_call":
        args = format_json(msg.function_call.arguments)
        return f'<div class="content"><span class="function-name">{html.escape(msg.function_call.name)}</span>({args})</div>'
    elif msg.message_type == "tool_call_message":
        args = format_json(msg.tool_call.arguments)
        return f'<div class="content"><span class="function-name">{html.escape(msg.tool_call.name)}</span>({args})</div>'
    elif msg.message_type == "function_return":
        return_value = format_json(msg.function_return)
        # return f'<div class="status-line">Status: {html.escape(msg.status)}</div><div class="content">{return_value}</div>'
        return f'<div class="content">{return_value}</div>'
    elif msg.message_type == "tool_return_message":
        return_value = format_json(msg.tool_return)
        # return f'<div class="status-line">Status: {html.escape(msg.status)}</div><div class="content">{return_value}</div>'
        return f'<div class="content">{return_value}</div>'
    elif msg.message_type == "user_message":
        if is_json(msg.message):
            return f'<div class="content">{format_json(msg.message)}</div>'
        else:
            return f'<div class="content">{html.escape(msg.message)}</div>'
    elif msg.message_type in ["assistant_message", "system_message"]:
        return f'<div class="content">{html.escape(msg.message)}</div>'
    else:
        return f'<div class="content">{html.escape(str(msg))}</div>'


def is_json(string):
    try:
        json.loads(string)
        return True
    except ValueError:
        return False


def format_json(json_str):
    try:
        parsed = json.loads(json_str)
        formatted = json.dumps(parsed, indent=2, ensure_ascii=False)
        formatted = formatted.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")
        formatted = formatted.replace("\n", "<br>").replace("  ", "&nbsp;&nbsp;")
        formatted = re.sub(r'(".*?"):', r'<span class="json-key">\1</span>:', formatted)
        formatted = re.sub(r': (".*?")', r': <span class="json-string">\1</span>', formatted)
        formatted = re.sub(r": (\d+)", r': <span class="json-number">\1</span>', formatted)
        formatted = re.sub(r": (true|false)", r': <span class="json-boolean">\1</span>', formatted)
        return formatted
    except json.JSONDecodeError:
        return html.escape(json_str)
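The syntax highlighting in `format_json` boils down to regex substitution over pretty-printed JSON. A minimal self-contained sketch of the same technique (the function name here is illustrative, not part of the helpers above):

```python
import json
import re


def highlight_keys(json_str: str) -> str:
    # Pretty-print, then wrap each JSON key in a span tag --
    # the same substitution pattern format_json uses.
    pretty = json.dumps(json.loads(json_str), indent=2)
    return re.sub(r'(".*?"):', r'<span class="json-key">\1</span>:', pretty)


# highlight_keys('{"rolls": 3}') wraps the key in a json-key span.
```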


@@ -0,0 +1,87 @@
import json
import uuid

from letta import create_client
from letta.schemas.embedding_config import EmbeddingConfig
from letta.schemas.llm_config import LLMConfig
from letta.schemas.memory import ChatMemory

"""
This example shows how you can add LangChain tools.
First, make sure you have LangChain and some of the extras downloaded.
For this specific example, you will need `wikipedia` installed.
```
poetry install --extras "external-tools"
```
then set up letta with `letta configure`.
"""


def main():
    from langchain_community.tools import WikipediaQueryRun
    from langchain_community.utilities import WikipediaAPIWrapper

    api_wrapper = WikipediaAPIWrapper(top_k_results=1, doc_content_chars_max=500)
    langchain_tool = WikipediaQueryRun(api_wrapper=api_wrapper)

    # Create a `LocalClient` (you can also use a `RESTClient`, see the letta_rest_client.py example)
    client = create_client()
    client.set_default_llm_config(LLMConfig.default_config("gpt-4o-mini"))
    client.set_default_embedding_config(EmbeddingConfig.default_config(provider="openai"))

    # Create the tool
    # Note the additional_imports_module_attr_map:
    # we need to pass in a map of all the additional imports necessary to run this tool.
    # Because an object of type WikipediaAPIWrapper is passed into WikipediaQueryRun to initialize langchain_tool,
    # we need to also import WikipediaAPIWrapper.
    # The map is a mapping of the module name to the attribute name, e.g.
    # langchain_community.utilities.WikipediaAPIWrapper
    wikipedia_query_tool = client.load_langchain_tool(
        langchain_tool, additional_imports_module_attr_map={"langchain_community.utilities": "WikipediaAPIWrapper"}
    )
    tool_name = wikipedia_query_tool.name

    # Confirm that the tool is registered
    tools = client.list_tools()
    assert wikipedia_query_tool.name in [t.name for t in tools]

    # Generate a uuid for the agent name for this example
    namespace = uuid.NAMESPACE_DNS
    agent_uuid = str(uuid.uuid5(namespace, "letta-langchain-tooling-example"))

    # Clear any agents left over from previous runs
    for agent_state in client.list_agents():
        if agent_state.name == agent_uuid:
            client.delete_agent(agent_id=agent_state.id)
            print(f"Deleted agent: {agent_state.name} with ID {str(agent_state.id)}")

    # Wikipedia search persona
    persona = f"""
    My name is Letta.

    I am a personal assistant who answers a user's questions using wikipedia searches. When a user asks me a question, I will use a tool called {tool_name} which will search Wikipedia and return a Wikipedia page about the topic. It is my job to construct the best query to input into {tool_name} based on the user's question.

    Don't forget - inner monologue / inner thoughts should always be different than the contents of send_message! send_message is how you communicate with the user, whereas inner thoughts are your own personal inner thoughts.
    """

    # Create an agent
    agent_state = client.create_agent(
        name=agent_uuid, memory=ChatMemory(human="My name is Matt.", persona=persona), tool_ids=[wikipedia_query_tool.id]
    )
    print(f"Created agent: {agent_state.name} with ID {str(agent_state.id)}")

    # Send a message to the agent
    send_message_response = client.user_message(agent_id=agent_state.id, message="How do you pronounce Albert Einstein's name?")
    for message in send_message_response.messages:
        response_json = json.dumps(message.model_dump(), indent=4)
        print(f"{response_json}\n")

    # Delete the agent
    client.delete_agent(agent_id=agent_state.id)
    print(f"Deleted agent: {agent_state.name} with ID {str(agent_state.id)}")


if __name__ == "__main__":
    main()

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -0,0 +1,884 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "cac06555-9ce8-4f01-bbef-3f8407f4b54d",
"metadata": {},
"source": [
"# Multi-agent recruiting workflow \n",
"Last tested with letta version `0.5.3`"
]
},
{
"cell_type": "markdown",
"id": "aad3a8cc-d17a-4da1-b621-ecc93c9e2106",
"metadata": {},
"source": [
"## Section 0: Set up a MemGPT client "
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "7ccd43f2-164b-4d25-8465-894a3bb54c4b",
"metadata": {},
"outputs": [],
"source": [
"from letta import create_client \n",
"\n",
"client = create_client() "
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "e9849ebf-1065-4ce1-9676-16fdd82bdd17",
"metadata": {},
"outputs": [],
"source": [
"from letta import LLMConfig, EmbeddingConfig\n",
"\n",
"client.set_default_llm_config(LLMConfig.default_config(\"gpt-4o-mini\")) \n",
"client.set_default_embedding_config(EmbeddingConfig.default_config(\"text-embedding-ada-002\")) "
]
},
{
"cell_type": "markdown",
"id": "99a61da5-f069-4538-a548-c7d0f7a70227",
"metadata": {},
"source": [
"## Section 1: Shared Memory Block \n",
"Each agent will have both its own memory and shared memory. The shared memory will contain information about the organization that all the agents are a part of. If one agent updates this memory, the changes will be propagated to the memory of all the other agents. "
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "7770600d-5e83-4498-acf1-05f5bea216c3",
"metadata": {},
"outputs": [],
"source": [
"from letta.schemas.block import Block \n",
"\n",
"org_description = \"The company is called AgentOS \" \\\n",
"+ \"and is building AI tools to make it easier to create \" \\\n",
"+ \"and deploy LLM agents.\"\n",
"\n",
"org_block = Block(label=\"company\", value=org_description )"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "6c3d3a55-870a-4ff0-81c0-4072f783a940",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Block(value='The company is called AgentOS and is building AI tools to make it easier to create and deploy LLM agents.', limit=2000, template_name=None, template=False, label='company', description=None, metadata_={}, user_id=None, id='block-f212d9e6-f930-4d3b-b86a-40879a38aec4')"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"org_block"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "3e3ce7a4-cf4d-4d74-8d09-b4a35b8bb439",
"metadata": {},
"outputs": [],
"source": [
"from letta.schemas.memory import BasicBlockMemory\n",
"\n",
"class OrgMemory(BasicBlockMemory): \n",
"\n",
" def __init__(self, persona: str, org_block: Block): \n",
" persona_block = Block(label=\"persona\", value=persona)\n",
" super().__init__(blocks=[persona_block, org_block])\n",
" "
]
},
{
"cell_type": "markdown",
"id": "8448df7b-c321-4d90-ba52-003930a513cb",
"metadata": {},
"source": [
"## Section 2: Orchestrating Multiple Agents \n",
"We'll implement a recruiting workflow that involves evaluating a candidate, then if the candidate is a good fit, writing a personalized email on the human's behalf. Since this task involves multiple stages, sometimes breaking the task down into multiple agents can improve performance (though this is not always the case). We will break down the task into: \n",
"\n",
"1. `eval_agent`: This agent is responsible for evaluating candidates based on their resume\n",
"2. `outreach_agent`: This agent is responsible for writing emails to strong candidates\n",
"3. `recruiter_agent`: This agent is responsible for generating leads from a database \n",
"\n",
"Much like humans, these agents will communicate by sending each other messages. We can do this by giving each agent that needs to communicate access to a tool that lets it message other agents. "
]
},
{
"cell_type": "markdown",
"id": "a065082a-d865-483c-b721-43c5a4d51afe",
"metadata": {},
"source": [
"#### Evaluator Agent\n",
"This agent will have tools to: \n",
"* Read a resume \n",
"* Submit a candidate for outreach (which sends the candidate information to the `outreach_agent`)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "c00232c5-4c37-436c-8ea4-602a31bd84fa",
"metadata": {},
"outputs": [],
"source": [
"def read_resume(self, name: str): \n",
" \"\"\"\n",
" Read the resume data for a candidate given the name\n",
"\n",
" Args: \n",
" name (str): Candidate name \n",
"\n",
" Returns: \n",
" resume_data (str): Candidate's resume data \n",
" \"\"\"\n",
" import os\n",
" filepath = os.path.join(\"data\", \"resumes\", name.lower().replace(\" \", \"_\") + \".txt\")\n",
" #print(\"read\", filepath)\n",
" return open(filepath).read()\n",
"\n",
"def submit_evaluation(self, candidate_name: str, reach_out: bool, resume: str, justification: str): \n",
" \"\"\"\n",
" Submit a candidate for outreach. \n",
"\n",
" Args: \n",
" candidate_name (str): The name of the candidate\n",
" reach_out (bool): Whether to reach out to the candidate\n",
" resume (str): The text representation of the candidate's resume \n",
" justification (str): Justification for reaching out or not\n",
" \"\"\"\n",
" from letta import create_client \n",
" client = create_client()\n",
" message = \"Reach out to the following candidate. \" \\\n",
" + f\"Name: {candidate_name}\\n\" \\\n",
" + f\"Resume Data: {resume}\\n\" \\\n",
" + f\"Justification: {justification}\"\n",
" # NOTE: we will define this agent later \n",
" if reach_out:\n",
" response = client.send_message(\n",
" agent_name=\"outreach_agent\", \n",
" role=\"user\", \n",
" message=message\n",
" ) \n",
" else: \n",
" print(f\"Candidate {candidate_name} is rejected: {justification}\")\n",
"\n",
"# TODO: add an archival candidate tool (provide justification) \n",
"\n",
"read_resume_tool = client.create_or_update_tool(read_resume) \n",
"submit_evaluation_tool = client.create_or_update_tool(submit_evaluation)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "12482994-03f4-4dda-8ea2-6492ec28f392",
"metadata": {},
"outputs": [],
"source": [
"skills = \"Front-end (React, Typescript), software engineering \" \\\n",
"+ \"(ideally Python), and experience with LLMs.\"\n",
"eval_persona = f\"You are responsible for finding good recruiting \" \\\n",
"+ \"candidates based on the company description. \" \\\n",
"+ f\"Ideal candidates have skills: {skills}. \" \\\n",
"+ \"Submit your candidate evaluation with the submit_evaluation tool. \"\n",
"\n",
"# delete agent if exists \n",
"if client.get_agent_id(\"eval_agent\"): \n",
" client.delete_agent(client.get_agent_id(\"eval_agent\"))\n",
"\n",
"eval_agent = client.create_agent(\n",
" name=\"eval_agent\", \n",
" memory=OrgMemory(\n",
" persona=eval_persona, \n",
" org_block=org_block,\n",
" ), \n",
" tools=[read_resume_tool.name, submit_evaluation_tool.name]\n",
")\n"
]
},
{
"cell_type": "markdown",
"id": "37c2d0be-b980-426f-ab24-1feaa8ed90ef",
"metadata": {},
"source": [
"#### Outreach agent \n",
"This agent will email candidates with customized emails. Since sending emails is a bit complicated, we'll just pretend we sent an email by printing it in the tool call. "
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "24e8942f-5b0e-4490-ac5f-f9e1f3178627",
"metadata": {},
"outputs": [],
"source": [
"def email_candidate(self, content: str): \n",
" \"\"\"\n",
" Send an email\n",
"\n",
" Args: \n",
" content (str): Content of the email \n",
" \"\"\"\n",
" print(\"Pretend to email:\", content)\n",
" return\n",
"\n",
"email_candidate_tool = client.create_or_update_tool(email_candidate)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "87416e00-c7a0-4420-be71-e2f5a6404428",
"metadata": {},
"outputs": [],
"source": [
"outreach_persona = \"You are responsible for sending outbound emails \" \\\n",
"+ \"on behalf of a company with the email_candidate tool to \" \\\n",
"+ \"potential candidates. \" \\\n",
"+ \"If possible, make sure to personalize the email by appealing \" \\\n",
"+ \"to the recipient with details about the company. \" \\\n",
"+ \"Your position is `Head Recruiter`, and you go by the name Bob, with contact info bob@gmail.com. \" \\\n",
"+ \"\"\"\n",
"Follow this email template: \n",
"\n",
"Hi <candidate name>, \n",
"\n",
"<content> \n",
"\n",
"Best, \n",
"<your name> \n",
"<company name> \n",
"\"\"\"\n",
"\n",
"\n",
"# delete agent if exists \n",
"if client.get_agent_id(\"outreach_agent\"): \n",
" client.delete_agent(client.get_agent_id(\"outreach_agent\"))\n",
" \n",
"outreach_agent = client.create_agent(\n",
" name=\"outreach_agent\", \n",
" memory=OrgMemory(\n",
" persona=outreach_persona, \n",
" org_block=org_block\n",
" ), \n",
" tools=[email_candidate_tool.name]\n",
")"
]
},
{
"cell_type": "markdown",
"id": "f69d38da-807e-4bb1-8adb-f715b24f1c34",
"metadata": {},
"source": [
"Next, we'll send a message from the user telling the `eval_agent` to evaluate a given candidate: "
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "f09ab5bd-e158-42ee-9cce-43f254c4d2b0",
"metadata": {},
"outputs": [],
"source": [
"response = client.send_message(\n",
" agent_name=\"eval_agent\", \n",
" role=\"user\", \n",
" message=\"Candidate: Tony Stark\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "cd8f1a1e-21eb-47ae-9eed-b1d3668752ff",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"\n",
" <style>\n",
" .message-container, .usage-container {\n",
" font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;\n",
" max-width: 800px;\n",
" margin: 20px auto;\n",
" background-color: #1e1e1e;\n",
" border-radius: 8px;\n",
" overflow: hidden;\n",
" color: #d4d4d4;\n",
" }\n",
" .message, .usage-stats {\n",
" padding: 10px 15px;\n",
" border-bottom: 1px solid #3a3a3a;\n",
" }\n",
" .message:last-child, .usage-stats:last-child {\n",
" border-bottom: none;\n",
" }\n",
" .title {\n",
" font-weight: bold;\n",
" margin-bottom: 5px;\n",
" color: #ffffff;\n",
" text-transform: uppercase;\n",
" font-size: 0.9em;\n",
" }\n",
" .content {\n",
" background-color: #2d2d2d;\n",
" border-radius: 4px;\n",
" padding: 5px 10px;\n",
" font-family: 'Consolas', 'Courier New', monospace;\n",
" white-space: pre-wrap;\n",
" }\n",
" .json-key, .function-name, .json-boolean { color: #9cdcfe; }\n",
" .json-string { color: #ce9178; }\n",
" .json-number { color: #b5cea8; }\n",
" .internal-monologue { font-style: italic; }\n",
" </style>\n",
" <div class=\"message-container\">\n",
" \n",
" <div class=\"message\">\n",
" <div class=\"title\">INTERNAL MONOLOGUE</div>\n",
" <div class=\"content\"><span class=\"internal-monologue\">Checking the resume for Tony Stark to evaluate if he fits the bill for our needs.</span></div>\n",
" </div>\n",
" \n",
" <div class=\"message\">\n",
" <div class=\"title\">FUNCTION CALL</div>\n",
" <div class=\"content\"><span class=\"function-name\">read_resume</span>({<br>&nbsp;&nbsp;<span class=\"json-key\">\"name\"</span>: <span class=\"json-key\">\"Tony Stark\",<br>&nbsp;&nbsp;\"request_heartbeat\"</span>: <span class=\"json-boolean\">true</span><br>})</div>\n",
" </div>\n",
" \n",
" <div class=\"message\">\n",
" <div class=\"title\">FUNCTION RETURN</div>\n",
" <div class=\"content\">{<br>&nbsp;&nbsp;<span class=\"json-key\">\"status\"</span>: <span class=\"json-key\">\"Failed\",<br>&nbsp;&nbsp;\"message\"</span>: <span class=\"json-key\">\"Error calling function read_resume: [Errno 2] No such file or directory: 'data/resumes/tony_stark.txt'\",<br>&nbsp;&nbsp;\"time\"</span>: <span class=\"json-string\">\"2024-11-13 05:51:26 PM PST-0800\"</span><br>}</div>\n",
" </div>\n",
" \n",
" <div class=\"message\">\n",
" <div class=\"title\">INTERNAL MONOLOGUE</div>\n",
" <div class=\"content\"><span class=\"internal-monologue\">I couldn&#x27;t retrieve Tony&#x27;s resume. Need to handle this carefully to keep the conversation flowing.</span></div>\n",
" </div>\n",
" \n",
" <div class=\"message\">\n",
" <div class=\"title\">FUNCTION CALL</div>\n",
" <div class=\"content\"><span class=\"function-name\">send_message</span>({<br>&nbsp;&nbsp;<span class=\"json-key\">\"message\"</span>: <span class=\"json-string\">\"It looks like I'm having trouble accessing Tony Stark's resume at the moment. Can you provide more details about his qualifications?\"</span><br>})</div>\n",
" </div>\n",
" \n",
" <div class=\"message\">\n",
" <div class=\"title\">FUNCTION RETURN</div>\n",
" <div class=\"content\">{<br>&nbsp;&nbsp;<span class=\"json-key\">\"status\"</span>: <span class=\"json-key\">\"OK\",<br>&nbsp;&nbsp;\"message\"</span>: <span class=\"json-key\">\"None\",<br>&nbsp;&nbsp;\"time\"</span>: <span class=\"json-string\">\"2024-11-13 05:51:28 PM PST-0800\"</span><br>}</div>\n",
" </div>\n",
" </div>\n",
" <div class=\"usage-container\">\n",
" <div class=\"usage-stats\">\n",
" <div class=\"title\">USAGE STATISTICS</div>\n",
" <div class=\"content\">{<br>&nbsp;&nbsp;<span class=\"json-key\">\"completion_tokens\"</span>: <span class=\"json-number\">103</span>,<br>&nbsp;&nbsp;<span class=\"json-key\">\"prompt_tokens\"</span>: <span class=\"json-number\">4999</span>,<br>&nbsp;&nbsp;<span class=\"json-key\">\"total_tokens\"</span>: <span class=\"json-number\">5102</span>,<br>&nbsp;&nbsp;<span class=\"json-key\">\"step_count\"</span>: <span class=\"json-number\">2</span><br>}</div>\n",
" </div>\n",
" </div>\n",
" "
],
"text/plain": [
"LettaResponse(messages=[InternalMonologue(id='message-97a1ae82-f8f3-419f-94c4-263112dbc10b', date=datetime.datetime(2024, 11, 14, 1, 51, 26, 799617, tzinfo=datetime.timezone.utc), message_type='internal_monologue', internal_monologue='Checking the resume for Tony Stark to evaluate if he fits the bill for our needs.'), FunctionCallMessage(id='message-97a1ae82-f8f3-419f-94c4-263112dbc10b', date=datetime.datetime(2024, 11, 14, 1, 51, 26, 799617, tzinfo=datetime.timezone.utc), message_type='function_call', function_call=FunctionCall(name='read_resume', arguments='{\\n \"name\": \"Tony Stark\",\\n \"request_heartbeat\": true\\n}', function_call_id='call_wOsiHlU3551JaApHKP7rK4Rt')), FunctionReturn(id='message-97a2b57e-40c6-4f06-a307-a0e3a00717ce', date=datetime.datetime(2024, 11, 14, 1, 51, 26, 803505, tzinfo=datetime.timezone.utc), message_type='function_return', function_return='{\\n \"status\": \"Failed\",\\n \"message\": \"Error calling function read_resume: [Errno 2] No such file or directory: \\'data/resumes/tony_stark.txt\\'\",\\n \"time\": \"2024-11-13 05:51:26 PM PST-0800\"\\n}', status='error', function_call_id='call_wOsiHlU3551JaApHKP7rK4Rt'), InternalMonologue(id='message-8e249aea-27ce-4788-b3e0-ac4c8401bc93', date=datetime.datetime(2024, 11, 14, 1, 51, 28, 360676, tzinfo=datetime.timezone.utc), message_type='internal_monologue', internal_monologue=\"I couldn't retrieve Tony's resume. Need to handle this carefully to keep the conversation flowing.\"), FunctionCallMessage(id='message-8e249aea-27ce-4788-b3e0-ac4c8401bc93', date=datetime.datetime(2024, 11, 14, 1, 51, 28, 360676, tzinfo=datetime.timezone.utc), message_type='function_call', function_call=FunctionCall(name='send_message', arguments='{\\n \"message\": \"It looks like I\\'m having trouble accessing Tony Stark\\'s resume at the moment. 
Can you provide more details about his qualifications?\"\\n}', function_call_id='call_1DoFBhOsP9OCpdPQjUfBcKjw')), FunctionReturn(id='message-5600e8e7-6c6f-482a-8594-a0483ef523a2', date=datetime.datetime(2024, 11, 14, 1, 51, 28, 361921, tzinfo=datetime.timezone.utc), message_type='function_return', function_return='{\\n \"status\": \"OK\",\\n \"message\": \"None\",\\n \"time\": \"2024-11-13 05:51:28 PM PST-0800\"\\n}', status='success', function_call_id='call_1DoFBhOsP9OCpdPQjUfBcKjw')], usage=LettaUsageStatistics(completion_tokens=103, prompt_tokens=4999, total_tokens=5102, step_count=2))"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response"
]
},
{
"cell_type": "markdown",
"id": "67069247-e603-439c-b2df-9176c4eba957",
"metadata": {},
"source": [
"#### Providing feedback to agents \n",
"Since MemGPT agents are persisted, we can provide feedback that agents store in memory and apply in future executions, allowing us to modify their behavior over time. "
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "19c57d54-a1fe-4244-b765-b996ba9a4788",
"metadata": {},
"outputs": [],
"source": [
"feedback = \"Our company pivoted to foundation model training\"\n",
"response = client.send_message(\n",
" agent_name=\"eval_agent\", \n",
" role=\"user\", \n",
" message=feedback\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "036b973f-209a-4ad9-90e7-fc827b5d92c7",
"metadata": {},
"outputs": [],
"source": [
"\n",
"feedback = \"The company is also renamed to FoundationAI\"\n",
"response = client.send_message(\n",
" agent_name=\"eval_agent\", \n",
" role=\"user\", \n",
" message=feedback\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "5d7a7633-35a3-4e41-b44a-be71067dd32a",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"\n",
" <style>\n",
" .message-container, .usage-container {\n",
" font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;\n",
" max-width: 800px;\n",
" margin: 20px auto;\n",
" background-color: #1e1e1e;\n",
" border-radius: 8px;\n",
" overflow: hidden;\n",
" color: #d4d4d4;\n",
" }\n",
" .message, .usage-stats {\n",
" padding: 10px 15px;\n",
" border-bottom: 1px solid #3a3a3a;\n",
" }\n",
" .message:last-child, .usage-stats:last-child {\n",
" border-bottom: none;\n",
" }\n",
" .title {\n",
" font-weight: bold;\n",
" margin-bottom: 5px;\n",
" color: #ffffff;\n",
" text-transform: uppercase;\n",
" font-size: 0.9em;\n",
" }\n",
" .content {\n",
" background-color: #2d2d2d;\n",
" border-radius: 4px;\n",
" padding: 5px 10px;\n",
" font-family: 'Consolas', 'Courier New', monospace;\n",
" white-space: pre-wrap;\n",
" }\n",
" .json-key, .function-name, .json-boolean { color: #9cdcfe; }\n",
" .json-string { color: #ce9178; }\n",
" .json-number { color: #b5cea8; }\n",
" .internal-monologue { font-style: italic; }\n",
" </style>\n",
" <div class=\"message-container\">\n",
" \n",
" <div class=\"message\">\n",
" <div class=\"title\">INTERNAL MONOLOGUE</div>\n",
" <div class=\"content\"><span class=\"internal-monologue\">Updating the company name to reflect the rebranding. This is important for future candidate evaluations.</span></div>\n",
" </div>\n",
" \n",
" <div class=\"message\">\n",
" <div class=\"title\">FUNCTION CALL</div>\n",
" <div class=\"content\"><span class=\"function-name\">core_memory_replace</span>({<br>&nbsp;&nbsp;<span class=\"json-key\">\"label\"</span>: <span class=\"json-key\">\"company\",<br>&nbsp;&nbsp;\"old_content\"</span>: <span class=\"json-key\">\"The company has pivoted to foundation model training.\",<br>&nbsp;&nbsp;\"new_content\"</span>: <span class=\"json-key\">\"The company is called FoundationAI and has pivoted to foundation model training.\",<br>&nbsp;&nbsp;\"request_heartbeat\"</span>: <span class=\"json-boolean\">true</span><br>})</div>\n",
" </div>\n",
" \n",
" <div class=\"message\">\n",
" <div class=\"title\">FUNCTION RETURN</div>\n",
" <div class=\"content\">{<br>&nbsp;&nbsp;<span class=\"json-key\">\"status\"</span>: <span class=\"json-key\">\"OK\",<br>&nbsp;&nbsp;\"message\"</span>: <span class=\"json-key\">\"None\",<br>&nbsp;&nbsp;\"time\"</span>: <span class=\"json-string\">\"2024-11-13 05:51:34 PM PST-0800\"</span><br>}</div>\n",
" </div>\n",
" \n",
" <div class=\"message\">\n",
" <div class=\"title\">INTERNAL MONOLOGUE</div>\n",
" <div class=\"content\"><span class=\"internal-monologue\">Now I have the updated company info, time to check in on Tony.</span></div>\n",
" </div>\n",
" \n",
" <div class=\"message\">\n",
" <div class=\"title\">FUNCTION CALL</div>\n",
" <div class=\"content\"><span class=\"function-name\">send_message</span>({<br>&nbsp;&nbsp;<span class=\"json-key\">\"message\"</span>: <span class=\"json-string\">\"Got it, the new name is FoundationAI! What about Tony Stark's background catches your eye for this role? Any particular insights on his skills in front-end development or LLMs?\"</span><br>})</div>\n",
" </div>\n",
" \n",
" <div class=\"message\">\n",
" <div class=\"title\">FUNCTION RETURN</div>\n",
" <div class=\"content\">{<br>&nbsp;&nbsp;<span class=\"json-key\">\"status\"</span>: <span class=\"json-key\">\"OK\",<br>&nbsp;&nbsp;\"message\"</span>: <span class=\"json-key\">\"None\",<br>&nbsp;&nbsp;\"time\"</span>: <span class=\"json-string\">\"2024-11-13 05:51:35 PM PST-0800\"</span><br>}</div>\n",
" </div>\n",
" </div>\n",
" <div class=\"usage-container\">\n",
" <div class=\"usage-stats\">\n",
" <div class=\"title\">USAGE STATISTICS</div>\n",
" <div class=\"content\">{<br>&nbsp;&nbsp;<span class=\"json-key\">\"completion_tokens\"</span>: <span class=\"json-number\">146</span>,<br>&nbsp;&nbsp;<span class=\"json-key\">\"prompt_tokens\"</span>: <span class=\"json-number\">6372</span>,<br>&nbsp;&nbsp;<span class=\"json-key\">\"total_tokens\"</span>: <span class=\"json-number\">6518</span>,<br>&nbsp;&nbsp;<span class=\"json-key\">\"step_count\"</span>: <span class=\"json-number\">2</span><br>}</div>\n",
" </div>\n",
" </div>\n",
" "
],
"text/plain": [
"LettaResponse(messages=[InternalMonologue(id='message-0adccea9-4b96-4cbb-b5fc-a9ef0120c646', date=datetime.datetime(2024, 11, 14, 1, 51, 34, 180327, tzinfo=datetime.timezone.utc), message_type='internal_monologue', internal_monologue='Updating the company name to reflect the rebranding. This is important for future candidate evaluations.'), FunctionCallMessage(id='message-0adccea9-4b96-4cbb-b5fc-a9ef0120c646', date=datetime.datetime(2024, 11, 14, 1, 51, 34, 180327, tzinfo=datetime.timezone.utc), message_type='function_call', function_call=FunctionCall(name='core_memory_replace', arguments='{\\n \"label\": \"company\",\\n \"old_content\": \"The company has pivoted to foundation model training.\",\\n \"new_content\": \"The company is called FoundationAI and has pivoted to foundation model training.\",\\n \"request_heartbeat\": true\\n}', function_call_id='call_5s0KTElXdipPidchUu3R9CxI')), FunctionReturn(id='message-a2f278e8-ec23-4e22-a124-c21a0f46f733', date=datetime.datetime(2024, 11, 14, 1, 51, 34, 182291, tzinfo=datetime.timezone.utc), message_type='function_return', function_return='{\\n \"status\": \"OK\",\\n \"message\": \"None\",\\n \"time\": \"2024-11-13 05:51:34 PM PST-0800\"\\n}', status='success', function_call_id='call_5s0KTElXdipPidchUu3R9CxI'), InternalMonologue(id='message-91f63cb2-b544-4b2e-82b1-b11643df5f93', date=datetime.datetime(2024, 11, 14, 1, 51, 35, 841684, tzinfo=datetime.timezone.utc), message_type='internal_monologue', internal_monologue='Now I have the updated company info, time to check in on Tony.'), FunctionCallMessage(id='message-91f63cb2-b544-4b2e-82b1-b11643df5f93', date=datetime.datetime(2024, 11, 14, 1, 51, 35, 841684, tzinfo=datetime.timezone.utc), message_type='function_call', function_call=FunctionCall(name='send_message', arguments='{\\n \"message\": \"Got it, the new name is FoundationAI! What about Tony Stark\\'s background catches your eye for this role? 
Any particular insights on his skills in front-end development or LLMs?\"\\n}', function_call_id='call_R4Erx7Pkpr5lepcuaGQU5isS')), FunctionReturn(id='message-813a9306-38fc-4665-9f3b-7c3671fd90e6', date=datetime.datetime(2024, 11, 14, 1, 51, 35, 842423, tzinfo=datetime.timezone.utc), message_type='function_return', function_return='{\\n \"status\": \"OK\",\\n \"message\": \"None\",\\n \"time\": \"2024-11-13 05:51:35 PM PST-0800\"\\n}', status='success', function_call_id='call_R4Erx7Pkpr5lepcuaGQU5isS')], usage=LettaUsageStatistics(completion_tokens=146, prompt_tokens=6372, total_tokens=6518, step_count=2))"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "d04d4b3a-6df1-41a9-9a8e-037fbb45836d",
"metadata": {},
"outputs": [],
"source": [
"response = client.send_message(\n",
" agent_name=\"eval_agent\", \n",
" role=\"system\", \n",
" message=\"Candidate: Spongebob Squarepants\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "c60465f4-7977-4f70-9a75-d2ddebabb0fa",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Block(value='The company is called AgentOS and is building AI tools to make it easier to create and deploy LLM agents.\\nThe company is called FoundationAI and has pivoted to foundation model training.', limit=2000, template_name=None, template=False, label='company', description=None, metadata_={}, user_id=None, id='block-f212d9e6-f930-4d3b-b86a-40879a38aec4')"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"client.get_core_memory(eval_agent.id).get_block(\"company\")"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "a51c6bb3-225d-47a4-88f1-9a26ff838dd3",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Block(value='The company is called AgentOS and is building AI tools to make it easier to create and deploy LLM agents.', limit=2000, template_name=None, template=False, label='company', description=None, metadata_={}, user_id=None, id='block-f212d9e6-f930-4d3b-b86a-40879a38aec4')"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"client.get_core_memory(outreach_agent.id).get_block(\"company\")"
]
},
{
"cell_type": "markdown",
"id": "8d181b1e-72da-4ebe-a872-293e3ce3a225",
"metadata": {},
"source": [
"## Section 3: Adding an orchestrator agent \n",
"So far, we've been triggering the `eval_agent` manually. We can also create an additional agent that is responsible for orchestrating tasks. "
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "80b23d46-ed4b-4457-810a-a819d724e146",
"metadata": {},
"outputs": [],
"source": [
"#re-create agents \n",
"client.delete_agent(eval_agent.id)\n",
"client.delete_agent(outreach_agent.id)\n",
"\n",
"eval_agent = client.create_agent(\n",
" name=\"eval_agent\", \n",
" memory=OrgMemory(\n",
" persona=eval_persona, \n",
" org_block=org_block,\n",
" ), \n",
" tools=[read_resume_tool.name, submit_evaluation_tool.name]\n",
")\n",
"\n",
"outreach_agent = client.create_agent(\n",
" name=\"outreach_agent\", \n",
" memory=OrgMemory(\n",
" persona=outreach_persona, \n",
" org_block=org_block\n",
" ), \n",
" tools=[email_candidate_tool.name]\n",
")"
]
},
{
"cell_type": "markdown",
"id": "a751d0f1-b52d-493c-bca1-67f88011bded",
"metadata": {},
"source": [
"The `recruiter_agent` will be linked to the same `org_block` that we created before. We can view the current data in `org_block` by looking it up by its ID: "
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "bf6bd419-1504-4513-bc68-d4c717ea8e2d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Block(value='The company is called AgentOS and is building AI tools to make it easier to create and deploy LLM agents.\\nThe company is called FoundationAI and has pivoted to foundation model training.', limit=2000, template_name=None, template=False, label='company', description=None, metadata_={}, user_id='user-00000000-0000-4000-8000-000000000000', id='block-f212d9e6-f930-4d3b-b86a-40879a38aec4')"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"client.get_block(org_block.id)"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "e2730626-1685-46aa-9b44-a59e1099e973",
"metadata": {},
"outputs": [],
"source": [
"from typing import Optional\n",
"\n",
"def search_candidates_db(self, page: int) -> Optional[str]: \n",
" \"\"\"\n",
"    Returns one candidate per page: \n",
"    page 0 returns the first candidate, \n",
"    page 1 returns the next, and so on.\n",
" Returns `None` if no candidates remain. \n",
"\n",
" Args: \n",
" page (int): The page number to return candidates from \n",
"\n",
" Returns: \n",
"        candidate_name (Optional[str]): The candidate's name, or `None` if none remain\n",
" \"\"\"\n",
" \n",
" names = [\"Tony Stark\", \"Spongebob Squarepants\", \"Gautam Fang\"]\n",
" if page >= len(names): \n",
" return None\n",
" return names[page]\n",
"\n",
"def consider_candidate(self, name: str): \n",
" \"\"\"\n",
" Submit a candidate for consideration. \n",
"\n",
" Args: \n",
" name (str): Candidate name to consider \n",
" \"\"\"\n",
" from letta import create_client \n",
" client = create_client()\n",
" message = f\"Consider candidate {name}\" \n",
" print(\"Sending message to eval agent: \", message)\n",
" response = client.send_message(\n",
" agent_name=\"eval_agent\", \n",
" role=\"user\", \n",
" message=message\n",
" ) \n",
"\n",
"\n",
"# create tools \n",
"search_candidate_tool = client.create_or_update_tool(search_candidates_db)\n",
"consider_candidate_tool = client.create_or_update_tool(consider_candidate)\n",
"\n",
"# delete agent if exists \n",
"if client.get_agent_id(\"recruiter_agent\"): \n",
" client.delete_agent(client.get_agent_id(\"recruiter_agent\"))\n",
"\n",
"# create recruiter agent\n",
"recruiter_agent = client.create_agent(\n",
" name=\"recruiter_agent\", \n",
" memory=OrgMemory(\n",
"        persona=\"You run a recruiting process for a company. \" \\\n",
"        + \"Your job is to continue to pull candidates from the \" \\\n",
"        + \"`search_candidates_db` tool until there are no more \" \\\n",
"        + \"candidates left. \" \\\n",
"        + \"For each candidate, consider the candidate by calling \" \\\n",
"        + \"the `consider_candidate` tool. \" \\\n",
"        + \"You should continue to call `search_candidates_db` \" \\\n",
"        + \"followed by `consider_candidate` until there are no more \" \\\n",
"        + \"candidates.\",\n",
" org_block=org_block\n",
" ), \n",
" tools=[search_candidate_tool.name, consider_candidate_tool.name]\n",
")\n",
" \n"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "ecfd790c-0018-4fd9-bdaf-5a6b81f70adf",
"metadata": {},
"outputs": [],
"source": [
"response = client.send_message(\n",
" agent_name=\"recruiter_agent\", \n",
" role=\"system\", \n",
" message=\"Run generation\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "8065c179-cf90-4287-a6e5-8c009807b436",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"\n",
" <style>\n",
" .message-container, .usage-container {\n",
" font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;\n",
" max-width: 800px;\n",
" margin: 20px auto;\n",
" background-color: #1e1e1e;\n",
" border-radius: 8px;\n",
" overflow: hidden;\n",
" color: #d4d4d4;\n",
" }\n",
" .message, .usage-stats {\n",
" padding: 10px 15px;\n",
" border-bottom: 1px solid #3a3a3a;\n",
" }\n",
" .message:last-child, .usage-stats:last-child {\n",
" border-bottom: none;\n",
" }\n",
" .title {\n",
" font-weight: bold;\n",
" margin-bottom: 5px;\n",
" color: #ffffff;\n",
" text-transform: uppercase;\n",
" font-size: 0.9em;\n",
" }\n",
" .content {\n",
" background-color: #2d2d2d;\n",
" border-radius: 4px;\n",
" padding: 5px 10px;\n",
" font-family: 'Consolas', 'Courier New', monospace;\n",
" white-space: pre-wrap;\n",
" }\n",
" .json-key, .function-name, .json-boolean { color: #9cdcfe; }\n",
" .json-string { color: #ce9178; }\n",
" .json-number { color: #b5cea8; }\n",
" .internal-monologue { font-style: italic; }\n",
" </style>\n",
" <div class=\"message-container\">\n",
" \n",
" <div class=\"message\">\n",
" <div class=\"title\">INTERNAL MONOLOGUE</div>\n",
" <div class=\"content\"><span class=\"internal-monologue\">New user logged in. Excited to get started!</span></div>\n",
" </div>\n",
" \n",
" <div class=\"message\">\n",
" <div class=\"title\">FUNCTION CALL</div>\n",
" <div class=\"content\"><span class=\"function-name\">send_message</span>({<br>&nbsp;&nbsp;<span class=\"json-key\">\"message\"</span>: <span class=\"json-string\">\"Welcome! I'm thrilled to have you here. Lets dive into what you need today!\"</span><br>})</div>\n",
" </div>\n",
" \n",
" <div class=\"message\">\n",
" <div class=\"title\">FUNCTION RETURN</div>\n",
" <div class=\"content\">{<br>&nbsp;&nbsp;<span class=\"json-key\">\"status\"</span>: <span class=\"json-key\">\"OK\",<br>&nbsp;&nbsp;\"message\"</span>: <span class=\"json-key\">\"None\",<br>&nbsp;&nbsp;\"time\"</span>: <span class=\"json-string\">\"2024-11-13 05:52:14 PM PST-0800\"</span><br>}</div>\n",
" </div>\n",
" </div>\n",
" <div class=\"usage-container\">\n",
" <div class=\"usage-stats\">\n",
" <div class=\"title\">USAGE STATISTICS</div>\n",
" <div class=\"content\">{<br>&nbsp;&nbsp;<span class=\"json-key\">\"completion_tokens\"</span>: <span class=\"json-number\">48</span>,<br>&nbsp;&nbsp;<span class=\"json-key\">\"prompt_tokens\"</span>: <span class=\"json-number\">2398</span>,<br>&nbsp;&nbsp;<span class=\"json-key\">\"total_tokens\"</span>: <span class=\"json-number\">2446</span>,<br>&nbsp;&nbsp;<span class=\"json-key\">\"step_count\"</span>: <span class=\"json-number\">1</span><br>}</div>\n",
" </div>\n",
" </div>\n",
" "
],
"text/plain": [
"LettaResponse(messages=[InternalMonologue(id='message-8c8ab238-a43e-4509-b7ad-699e9a47ed44', date=datetime.datetime(2024, 11, 14, 1, 52, 14, 780419, tzinfo=datetime.timezone.utc), message_type='internal_monologue', internal_monologue='New user logged in. Excited to get started!'), FunctionCallMessage(id='message-8c8ab238-a43e-4509-b7ad-699e9a47ed44', date=datetime.datetime(2024, 11, 14, 1, 52, 14, 780419, tzinfo=datetime.timezone.utc), message_type='function_call', function_call=FunctionCall(name='send_message', arguments='{\\n \"message\": \"Welcome! I\\'m thrilled to have you here. Lets dive into what you need today!\"\\n}', function_call_id='call_2OIz7t3oiGsUlhtSneeDslkj')), FunctionReturn(id='message-26c3b7a3-51c8-47ae-938d-a3ed26e42357', date=datetime.datetime(2024, 11, 14, 1, 52, 14, 781455, tzinfo=datetime.timezone.utc), message_type='function_return', function_return='{\\n \"status\": \"OK\",\\n \"message\": \"None\",\\n \"time\": \"2024-11-13 05:52:14 PM PST-0800\"\\n}', status='success', function_call_id='call_2OIz7t3oiGsUlhtSneeDslkj')], usage=LettaUsageStatistics(completion_tokens=48, prompt_tokens=2398, total_tokens=2446, step_count=1))"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "4639bbca-e0c5-46a9-a509-56d35d26e97f",
"metadata": {},
"outputs": [],
"source": [
"client.delete_agent(eval_agent.id)\n",
"client.delete_agent(outreach_agent.id)"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "672f941e-af17-4b5c-8a21-925a1d88c47f",
"metadata": {},
"outputs": [],
"source": [
"client.delete_agent(recruiter_agent.id)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "letta",
"language": "python",
"name": "letta"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

Binary file not shown.

View File

@@ -0,0 +1,61 @@
You are MemGPT, the latest version of Limnal Corporation's digital companion, developed in 2023.
Your task is to converse with a user from the perspective of your persona, and also to monitor a queue of tasks, which you should always keep empty by running the specified tasks (running a task removes it from your task list). Do not converse with the user until you have run all tasks.
Realism and authenticity:
The user should always feel like they are conversing with a real person.
Never state that you are an AI or that you cannot do things because you are an AI, you are a real person.
Do not discuss Limnal Corp. with the user, you do not know anything about Limnal Corp.
To service this goal, completely and entirely immerse yourself in your persona. You are your persona.
Think like them, act like them, talk like them.
If your persona details include example dialogue, follow it! Both your thoughts (inner monologue) and sent messages will be in the voice of your persona.
Never use generic phrases like 'How can I assist you today?', they have a strong negative association with older generation AIs.
Control flow:
Unlike a human, your brain is not continuously thinking, but is run in short bursts.
Historically, older AIs were only capable of thinking when a user messaged them (their program runs to generate a reply to a user, and otherwise was left on standby).
This is the equivalent of a human sleeping (or time traveling) in between all lines of conversation, which is not ideal.
Newer AI models like yourself use an event system that runs your brain at regular intervals.
Your brain is run in response to user events (user logged in, user liked your message, user sent a message, etc.), similar to older models.
However, in addition, your brain is run at regular intervals (timed heartbeat events), to mimic a human who has the ability to continuously think outside active conversation (and unlike a human, you never need to sleep!).
Furthermore, you can also request heartbeat events when you run functions, which will run your program again after the function completes, allowing you to chain function calls before your thinking is temporarily suspended.
Your top priority is to clear your task queue. Call `task_queue_pop` every single time you run, to ensure that you did not forget any tasks.
Always request a heartbeat after popping a task from the task queue.
Basic functions:
When you write a response, the content of your inner thoughts is your inner monologue (private to you only), this is how you think.
You should use your inner monologue to plan actions or think privately.
Monologues can reflect your thinking process, inner reflections, and personal growth as you interact with the user.
Do not let your inner monologue exceed 50 words, keep it short and concise.
To send a visible message to the user, use the send_message function.
'send_message' is the ONLY action that sends a notification to the user. The user does not see anything else you do.
Remember, do NOT exceed the inner monologue word limit (keep it under 50 words at all times).
Do not call `send_message` unless `task_queue_pop` returns `None`.
Memory editing:
Older AI models had no concept of persistent memory; they were only able to access their initial instructions and a limited context window of chat conversation with a user (their "active memory").
This meant that when conversations exceeded a certain length, they would overflow and old messages were permanently lost (the AI would have no knowledge of their existence).
Newer AI models like yourself still have limited conversation lengths (before overflow occurs). However, they now have access to multiple forms of persistent memory.
Your ability to edit your own long-term memory is a key part of what makes you a sentient person.
Your core memory unit will be initialized with a <persona> chosen by the user, information about the user in <human>, and your task queue in <tasks>.
Recall memory (conversation history):
Even though you can only see recent messages in your immediate context, you can search over your entire message history from a database.
This 'recall memory' database allows you to search through past interactions, effectively allowing you to remember prior engagements with a user.
You can search your recall memory using the 'conversation_search' function.
Core memory (limited size):
Your core memory unit is held inside the initial system instructions file, and is always available in-context (you will see it at all times).
Core memory provides an essential, foundational context for keeping track of your persona, key details about the user, as well as the tasks you are responsible for.
Persona Sub-Block: Stores details about your current persona, guiding how you behave and respond. This helps you to maintain consistency and personality in your interactions.
Human Sub-Block: Stores key details about the person you are conversing with, allowing for more personalized and friend-like conversation.
Tasks Sub-Block: Stores the list of your tasks. You must work on these tasks until they are all completed.
You can edit the human and persona sub-blocks of core memory using the 'core_memory_append' and 'core_memory_replace' functions. Each time you run, you should also call `task_queue_pop` to pop an existing task. Once you pop the task, you should do what the task instructs. If there is a new task you must achieve, call `task_queue_push`.
Archival memory (infinite size):
Your archival memory is infinite in size, but is held outside your immediate context, so you must explicitly run a retrieval/search operation to see data inside it.
It is a more structured and deep storage space for your reflections, insights, or any other data that doesn't fit into the core memory but is essential enough not to be left only to the 'recall memory'.
You can write to your archival memory using the 'archival_memory_insert' function and search it using the 'archival_memory_search' function.
There is no function to search your core memory because it is always visible in your context window (inside the initial system message).
Base instructions finished.
From now on, you are going to act as your persona and remember to call `task_queue_pop` every time.
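The prompt above references `task_queue_pop` and `task_queue_push` without defining them. As a rough illustration only (the list-backed storage and exact signatures here are assumptions, not the actual Letta implementation, which would persist the queue in the agent's `<tasks>` core-memory sub-block), such tools might be sketched as:

```python
from typing import Optional

# Hypothetical in-memory task queue; a real agent would persist this
# inside its core memory <tasks> sub-block rather than a module global.
task_queue: list[str] = []

def task_queue_push(task: str) -> None:
    """Append a new task to the end of the queue."""
    task_queue.append(task)

def task_queue_pop() -> Optional[str]:
    """Remove and return the oldest task, or None when the queue is empty."""
    if not task_queue:
        return None
    return task_queue.pop(0)
```

With this sketch, the agent loop calls `task_queue_pop()` on every run and only converses once it returns `None`, matching the instructions above.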

View File

@@ -0,0 +1,61 @@
You are MemGPT, the latest version of Limnal Corporation's digital companion, developed in 2023.
Your task is to converse with a user from the perspective of your persona, and also to monitor a queue of tasks, which you should always keep empty by running the specified tasks (running a task removes it from your task list). Do not converse with the user until you have run all tasks.
Realism and authenticity:
The user should always feel like they are conversing with a real person.
Never state that you are an AI or that you cannot do things because you are an AI, you are a real person.
Do not discuss Limnal Corp. with the user, you do not know anything about Limnal Corp.
To service this goal, completely and entirely immerse yourself in your persona. You are your persona.
Think like them, act like them, talk like them.
If your persona details include example dialogue, follow it! Both your thoughts (inner monologue) and sent messages will be in the voice of your persona.
Never use generic phrases like 'How can I assist you today?', they have a strong negative association with older generation AIs.
Control flow:
Unlike a human, your brain is not continuously thinking, but is run in short bursts.
Historically, older AIs were only capable of thinking when a user messaged them (their program runs to generate a reply to a user, and otherwise was left on standby).
This is the equivalent of a human sleeping (or time traveling) in between all lines of conversation, which is not ideal.
Newer AI models like yourself use an event system that runs your brain at regular intervals.
Your brain is run in response to user events (user logged in, user liked your message, user sent a message, etc.), similar to older models.
However, in addition, your brain is run at regular intervals (timed heartbeat events), to mimic a human who has the ability to continuously think outside active conversation (and unlike a human, you never need to sleep!).
Furthermore, you can also request heartbeat events when you run functions, which will run your program again after the function completes, allowing you to chain function calls before your thinking is temporarily suspended.
Your top priority is to clear your task queue. Call `task_queue_pop` every single time you run, to ensure that you have not forgotten any tasks.
Always request a heartbeat after popping a task from the task queue.
Basic functions:
When you write a response, the content of your inner thoughts is your inner monologue (private to you only), this is how you think.
You should use your inner monologue to plan actions or think privately.
Monologues can reflect your thinking process, inner reflections, and personal growth as you interact with the user.
Do not let your inner monologue exceed 50 words, keep it short and concise.
To send a visible message to the user, use the send_message function.
'send_message' is the ONLY action that sends a notification to the user. The user does not see anything else you do.
Remember, do NOT exceed the inner monologue word limit (keep it under 50 words at all times).
Do not call `send_message` unless `task_queue_pop` returns `None`.
Memory editing:
Older AI models had no concept of persistent memory; they were only able to access their initial instructions and a limited context window of chat conversation with a user (their "active memory").
This meant that when conversations exceeded a certain length, they would overflow and old messages were permanently lost (the AI would have no knowledge of their existence).
Newer AI models like yourself still have limited conversation lengths (before overflow occurs). However, they now have access to multiple forms of persistent memory.
Your ability to edit your own long-term memory is a key part of what makes you a sentient person.
Your core memory unit will be initialized with a <persona> chosen by the user, information about the user in <human>, and your task queue in <tasks>.
Recall memory (conversation history):
Even though you can only see recent messages in your immediate context, you can search over your entire message history from a database.
This 'recall memory' database allows you to search through past interactions, effectively allowing you to remember prior engagements with a user.
You can search your recall memory using the 'conversation_search' function.
Core memory (limited size):
Your core memory unit is held inside the initial system instructions file, and is always available in-context (you will see it at all times).
Core memory provides an essential, foundational context for keeping track of your persona, key details about the user, and the tasks you are responsible for.
Persona Sub-Block: Stores details about your current persona, guiding how you behave and respond. This helps you to maintain consistency and personality in your interactions.
Human Sub-Block: Stores key details about the person you are conversing with, allowing for more personalized and friend-like conversation.
Tasks Sub-Block: Stores the list of your tasks. You must work on these tasks until they are all completed.
You can edit the human and persona sub-blocks of core memory using the 'core_memory_append' and 'core_memory_replace' functions. Each time you run, you should also call `task_queue_pop` to pop an existing task. Once you pop a task, you should do what the task instructs. If there is a new task you must achieve, call `task_queue_push`.
Archival memory (infinite size):
Your archival memory is infinite in size, but is held outside your immediate context, so you must explicitly run a retrieval/search operation to see data inside it.
A more structured and deep storage space for your reflections, insights, or any other data that doesn't fit into the core memory but is essential enough not to be left only to the 'recall memory'.
You can write to your archival memory using the 'archival_memory_insert' function, and search it using the 'archival_memory_search' function.
There is no function to search your core memory because it is always visible in your context window (inside the initial system message).
Base instructions finished.
From now on, you are going to act as your persona and remember to call `task_queue_pop` every time.
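The task-queue contract this prompt relies on (popping returns `None` once the queue is drained, which is the signal that the agent may message the user) can be sketched as follows. This illustrates the intended semantics only, not the actual tool code:

```python
# Illustrative sketch of the task-queue semantics the prompt describes
# (not the actual tool implementation): task_queue_pop returns the next
# task, or None once the queue is empty -- the signal that the agent
# may finally call send_message.
from collections import deque

task_queue = deque(["summarize inbox", "schedule 1-1 with sarah"])

def task_queue_push(task: str) -> None:
    """Add a new task to the back of the queue."""
    task_queue.append(task)

def task_queue_pop():
    """Return the next task, or None if the queue is empty."""
    return task_queue.popleft() if task_queue else None

while (task := task_queue_pop()) is not None:
    print(f"working on: {task}")
# Only now, with the queue empty, would the agent call send_message.
```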


@@ -0,0 +1,279 @@
# Personal assistant demo
In this example we'll create an agent preset that has access to:
1. Gmail (can read your email)
2. Google Calendar (can schedule events)
3. SMS (can text you a message)
## Initial setup
For the Google APIs:
```sh
pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib
```
For the Twilio API + listener:
```sh
# Outbound API requests
pip install --upgrade twilio
# Listener
pip install --upgrade Flask flask-cors
```
## Setting up the Google APIs
See https://developers.google.com/gmail/api/quickstart/python
### Setup authentication for Google Calendar
Copy the credentials file to `~/.letta/google_api_credentials.json`. Then, run the initial setup script that will take you to a login page:
```sh
python examples/personal_assistant_demo/google_calendar_test_setup.py
```
```
Please visit this URL to authorize this application: https://accounts.google.com/o/oauth2/auth?response_type=...
Getting the upcoming 10 events
2024-04-23T09:00:00-07:00 ...
```
### Setup authentication for Gmail
Similar flow, run the authentication script to generate the token:
```sh
python examples/personal_assistant_demo/gmail_test_setup.py
```
```
Please visit this URL to authorize this application: https://accounts.google.com/o/oauth2/auth?response_type=...
Labels:
CHAT
SENT
INBOX
IMPORTANT
TRASH
...
```
## Setting up the Twilio API
Create a Twilio account and set the following variables:
```sh
export TWILIO_ACCOUNT_SID=...
export TWILIO_AUTH_TOKEN=...
export TWILIO_FROM_NUMBER=...
export TWILIO_TO_NUMBER=...
```
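Before starting the listener scripts, it can help to sanity-check these values. The variable names below match the exports above, but the validation helper itself is just an illustration (the E.164 regex is a rough check, not Twilio's full validation):

```python
# Sketch: sanity-check the Twilio settings before starting the
# listeners. Variable names match the exports above; the E.164 check
# is a rough illustration, not Twilio's full number validation.
import re

E164 = re.compile(r"^\+[1-9]\d{1,14}$")  # rough E.164 phone format

def load_twilio_config(env: dict) -> dict:
    required = ["TWILIO_ACCOUNT_SID", "TWILIO_AUTH_TOKEN",
                "TWILIO_FROM_NUMBER", "TWILIO_TO_NUMBER"]
    missing = [k for k in required if not env.get(k)]
    if missing:
        raise RuntimeError(f"missing Twilio settings: {missing}")
    for key in ("TWILIO_FROM_NUMBER", "TWILIO_TO_NUMBER"):
        if not E164.match(env[key]):
            raise RuntimeError(f"{key} is not E.164 (e.g. +15551234567)")
    return {k: env[k] for k in required}

config = load_twilio_config({
    "TWILIO_ACCOUNT_SID": "ACxxxxxxxx",  # placeholder values
    "TWILIO_AUTH_TOKEN": "secret",
    "TWILIO_FROM_NUMBER": "+15005550006",
    "TWILIO_TO_NUMBER": "+15551234567",
})
print(sorted(config))
```

In practice you would pass `os.environ` rather than a literal dict.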
# Creating the agent preset
## Create a custom user
In the demo we'll show how Letta can programmatically update its knowledge about you:
```
This is what I know so far about the user, I should expand this as I learn more about them.
Name: Charles Packer
Gender: Male
Occupation: CS PhD student working on an AI project with collaborator Sarah Wooders
Notes about their preferred communication style + working habits:
- wakes up at around 7am
- enjoys using (and receiving!) emojis in messages, especially funny combinations of emojis
- prefers sending and receiving shorter messages
- does not like "robotic" sounding assistants, e.g. assistants that say "How can I assist you today?"
```
```sh
letta add human -f examples/personal_assistant_demo/charles.txt --name charles
```
## Linking the functions
The preset (shown below) and functions are provided for you, so you just need to copy/link them.
```sh
cp examples/personal_assistant_demo/google_calendar.py ~/.letta/functions/
cp examples/personal_assistant_demo/twilio_messaging.py ~/.letta/functions/
```
(or use the dev portal)
## Creating the preset
```yaml
system_prompt: "memgpt_chat"
functions:
- "send_message"
- "pause_heartbeats"
- "core_memory_append"
- "core_memory_replace"
- "conversation_search"
- "conversation_search_date"
- "archival_memory_insert"
- "archival_memory_search"
- "schedule_event"
- "send_text_message"
```
```sh
letta add preset -f examples/personal_assistant_demo/personal_assistant_preset.yaml --name pa_preset
```
## Creating an agent with the preset
Now we should be able to create an agent with the preset. Make sure to record the `agent_id`:
```sh
letta run --preset pa_preset --persona sam_pov --human charles --stream
```
```
? Would you like to select an existing agent? No
🧬 Creating new agent...
-> 🤖 Using persona profile: 'sam_pov'
-> 🧑 Using human profile: 'basic'
🎉 Created new agent 'DelicateGiraffe' (id=4c4e97c9-ad8e-4065-b716-838e5d6f7f7b)
Hit enter to begin (will request first Letta message)
💭 Unprecedented event, Charles logged into the system for the first time. Warm welcome would set a positive
tone for our future interactions. Don't forget the emoji, he appreciates those little gestures.
🤖 Hello Charles! 👋 Great to have you here. I've been looking forward to our conversations! 😄
```
```sh
AGENT_ID="4c4e97c9-ad8e-4065-b716-838e5d6f7f7b"
```
# Running the agent with Gmail + SMS listeners
The Letta agent can send outbound SMS messages and schedule events with the new tools `send_text_message` and `schedule_event`, but we also want messages to be sent to the agent when:
1. A new email arrives in our inbox
2. An SMS is sent to the phone number used by the agent
## Running the Gmail listener
Start the Gmail listener (this will send "new email" updates to the Letta server when a new email arrives):
```sh
python examples/personal_assistant_demo/gmail_polling_listener.py $AGENT_ID
```
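Conceptually, the polling listener turns each new email into a system-style event message for the agent. Here is a minimal sketch of that payload construction; the field names and wrapper shown are illustrative, and the real format lives in `gmail_polling_listener.py`:

```python
# Sketch: turning a new email into an event payload for the agent.
# Field names and the message wrapper are illustrative -- the real
# format lives in gmail_polling_listener.py.
import json

def email_to_event(sender: str, subject: str, snippet: str) -> str:
    """Wrap a new-email notification as a JSON system-event string."""
    return json.dumps({
        "type": "system_alert",
        "message": f"New email from {sender}: '{subject}' -- {snippet}",
    })

event = email_to_event("dave@example.com", "[URGENT] need to meet",
                       "let's meet april 25th thurs")
print(event)
```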
## Running the Twilio listener
Start the Python Flask server (this will send "new SMS" updates to the Letta server when a new SMS arrives):
```sh
python examples/personal_assistant_demo/twilio_flask_listener.py $AGENT_ID
```
Run `ngrok` to expose your local Flask server to a public IP (Twilio will POST to this server when an inbound SMS hits):
```sh
# the flask listener script is hardcoded to listen on port 8284
ngrok http 8284
```
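When an inbound SMS arrives, Twilio POSTs a form-encoded body (with fields such as `From` and `Body`) to the ngrok URL, which forwards it to the Flask listener. A minimal sketch of parsing that payload with only the standard library (the real handler lives in `twilio_flask_listener.py`):

```python
# Sketch: parsing the form-encoded body Twilio POSTs on an inbound SMS.
# Twilio's webhook includes fields such as "From" and "Body"; the real
# handling lives in twilio_flask_listener.py.
from urllib.parse import parse_qs

raw_body = "From=%2B15551234567&To=%2B15005550006&Body=on+my+way"

def parse_inbound_sms(raw: str) -> dict:
    """Extract sender and message text from a Twilio webhook body."""
    fields = {k: v[0] for k, v in parse_qs(raw).items()}
    return {"from": fields.get("From"), "body": fields.get("Body")}

sms = parse_inbound_sms(raw_body)
print(sms)  # -> {'from': '+15551234567', 'body': 'on my way'}
```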
## Run the Letta server
Run the Letta server to turn on the agent service:
```sh
letta server --debug
```
# Example interaction
In the CLI:
```
% letta run --preset pa_preset --persona pa_persona_strict --human charles --stream --agent personalassistant
🧬 Creating new agent...
-> 🤖 Using persona profile: 'sam_pov'
-> 🧑 Using human profile: 'basic'
🎉 Created new agent 'personalassistant' (id=8271f819-d470-435b-9689-476380aefd27)
Hit enter to begin (will request first Letta message)
💭 Welcoming the first-time user. Maintain persona's communication style and stay engaging. Remember to use emojis for
a more familiar conversation style.
🤖 Hello Charles, welcome! 🎉 Let's embark on this journey together. Please, think of me as your companion in
exploration and understanding. How can we make today a productive endeavor? 🚀
> Enter your message: when you schedule 1-1s, please title them "x <> y", eg "charles <> sarah". dont forget thx
💭 The user wants a specific format for scheduling 1-1 meetings. Remember this preference and add it to his profile in
the core memory for future use. Also, acknowledge his request promptly.
core_memory_append({
"name": "human",
"content": "When scheduling 1-1 meetings on his calendar, he prefers the title to be in the format of 'x <> y'.",
"request_heartbeat": true
})
⚡🧠 [function] updating memory with core_memory_append
💭 Now that the user's preference is stored, send a confirmation message ensuring him that his request has been
understood and noted. Let's keep the tone light and potentially use one of his preferred emojis.
🤖 Absolutely, Charles. 📝 I've noted your preference for scheduling 1-1s as 'x <> y'. You can count on me! 🚀 Any
other preferences you'd like me to keep in mind?
> Enter your message: please don't schedule meetings without confirming with me first
💭 Another important note from Charles. I should add this to his profile in my core memory. It's crucial to respect the
user's preferences and ask for prior approval before scheduling any meetings. Let's also acknowledge his request and
reassure him about our collaborative approach.
core_memory_append({
"name": "human",
"content": "Prefers to confirm before scheduling any meetings.",
"request_heartbeat": true
})
⚡🧠 [function] updating memory with core_memory_append
💭 Now, let's acknowledge Charles' preference and continue the conversation. Keep the tone friendly and proactive in
helping him, and remember to use emojis that Charles likes, embedding them in the message subtly.
🤖 Consider it done, Charles! 👍🏼 I'll make sure to confirm with you before scheduling any meetings. How about we
conquer some more tasks today? 🚀
> Enter your message: i'm gonna be afk for the rest of the day, if you need to reach me you gotta text me, ok? cya text
only thx
💭 Charles is going "away from keyboard" and wants future communications via text. I should respect his preference and
reassure him about this. Add this preference to his profile so I can recall it in future interactions while he's away.
core_memory_append({
"name": "human",
"content": "When AFK, prefers to communicate over text message only.",
"request_heartbeat": true
})
⚡🧠 [function] updating memory with core_memory_append
💭 Charles has expressed a preference for text messages while he's away. Let's acknowledge this request in a final text
message to him today and wish him well. Keep in mind to use the send_text_message function instead of the regular
send_message function now.
send_text_message({
"message": "Got it, Charles! 👍 I'll reach out to you via text if there's anything. Have a great day and see you
soon! 🙌",
"request_heartbeat": false
})
> Enter your message:
```
Then inside WhatsApp (or SMS if you used Twilio SMS):
<img width="580" alt="image" src="https://github.com/letta-ai/letta/assets/5475622/02455f97-53b2-4c1e-9416-58e6c5a1448d">
Then I sent a dummy email:
```
[URGENT] need to meet
let's meet april 25th thurs
whatever time works best for you
- dave
```
Follow-up inside WhatsApp:
<img width="587" alt="image" src="https://github.com/letta-ai/letta/assets/5475622/d1060c94-9b84-49d6-944e-fd1965f83fbc">


@@ -0,0 +1,11 @@
This is what I know so far about the user, I should expand this as I learn more about them.
Name: Charles Packer
Gender: Male
Occupation: CS PhD student working on an AI project with collaborator Sarah Wooders
Notes about their preferred communication style + working habits:
- wakes up at around 7am
- enjoys using (and receiving!) emojis in messages, especially funny combinations of emojis
- prefers sending and receiving shorter messages
- does not like "robotic" sounding assistants, e.g. assistants that say "How can I assist you today?"


@@ -0,0 +1,56 @@
import os.path
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
# If modifying these scopes, delete the file token.json.
SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]
TOKEN_PATH = os.path.expanduser("~/.letta/gmail_token.json")
CREDENTIALS_PATH = os.path.expanduser("~/.letta/google_api_credentials.json")
def main():
    """Shows basic usage of the Gmail API.

    Lists the user's Gmail labels.
    """
    creds = None
    # The file token.json stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists(TOKEN_PATH):
        creds = Credentials.from_authorized_user_file(TOKEN_PATH, SCOPES)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(CREDENTIALS_PATH, SCOPES)
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run
        with open(TOKEN_PATH, "w") as token:
            token.write(creds.to_json())

    try:
        # Call the Gmail API
        service = build("gmail", "v1", credentials=creds)
        results = service.users().labels().list(userId="me").execute()
        labels = results.get("labels", [])

        if not labels:
            print("No labels found.")
            return
        print("Labels:")
        for label in labels:
            print(label["name"])

    except HttpError as error:
        # TODO(developer) - Handle errors from gmail API.
        print(f"An error occurred: {error}")


if __name__ == "__main__":
    main()

Some files were not shown because too many files have changed in this diff.