chore: migrate package name to letta (#1775)

Co-authored-by: Charles Packer <packercharles@gmail.com>
Co-authored-by: Shubham Naik <shubham.naik10@gmail.com>
Co-authored-by: Shubham Naik <shub@memgpt.ai>
This commit is contained in:
Sarah Wooders
2024-09-23 09:15:18 -07:00
committed by GitHub
parent 9ebbaacc1f
commit 8ae1e64987
337 changed files with 5528 additions and 6795 deletions


@@ -0,0 +1,435 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "cac06555-9ce8-4f01-bbef-3f8407f4b54d",
"metadata": {},
"source": [
"# Lab 3: Using MemGPT to build agents with memory \n",
"This lab will go over: \n",
"1. Creating an agent with MemGPT\n",
"2. Understanding MemGPT agent state (messages, memories, tools)\n",
"3. Understanding core and archival memory\n",
"4. Building agentic RAG with MemGPT "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f096bd03-9fb7-468f-af3c-24cd9e03108c",
"metadata": {},
"outputs": [],
"source": [
"from helper import nb_print"
]
},
{
"cell_type": "markdown",
"id": "aad3a8cc-d17a-4da1-b621-ecc93c9e2106",
"metadata": {},
"source": [
"## Setup a Letta client \n",
"Make sure you run `pip install letta` and `letta quickstart`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "067e007c-02f7-4d51-9c8a-651c7d5a6499",
"metadata": {},
"outputs": [],
"source": [
"!pip install letta\n",
"!letta quickstart"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7ccd43f2-164b-4d25-8465-894a3bb54c4b",
"metadata": {},
"outputs": [],
"source": [
"from letta import create_client \n",
"\n",
"client = create_client() "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9a28e38a-7dbe-4530-8260-202322a8458e",
"metadata": {},
"outputs": [],
"source": [
"from letta.schemas.llm_config import LLMConfig\n",
"\n",
"client.set_default_llm_config(LLMConfig.default_config(\"gpt-4o-mini\")) "
]
},
{
"cell_type": "markdown",
"id": "65bf0dc2-d1ac-4d4c-8674-f3156eeb611d",
"metadata": {},
"source": [
"## Creating a simple agent with memory \n",
"MemGPT allows you to create persistent LLM agents that have memory. By default, MemGPT saves all state related to agents in a database, so you can also re-load an existing agent with its prior state. In this section, we'll show you how to create a MemGPT agent and understand what memories it's storing. \n"
]
},
{
"cell_type": "markdown",
"id": "fe092474-6b91-4124-884d-484fc28b58e7",
"metadata": {},
"source": [
"### Creating an agent "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2a9d6228-a0f5-41e6-afd7-6a05260565dc",
"metadata": {},
"outputs": [],
"source": [
"agent_name = \"simple_agent\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "62dcf31d-6f45-40f5-8373-61981f03da62",
"metadata": {},
"outputs": [],
"source": [
"from letta.schemas.memory import ChatMemory\n",
"\n",
"agent_state = client.create_agent(\n",
" name=agent_name, \n",
" memory=ChatMemory(\n",
" human=\"My name is Sarah\", \n",
" persona=\"You are a helpful assistant that loves emojis\"\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "31c2d5f6-626a-4666-8d0b-462db0292a7d",
"metadata": {},
"outputs": [],
"source": [
"response = client.send_message(\n",
" agent_id=agent_state.id, \n",
" message=\"hello!\", \n",
" role=\"user\" \n",
")\n",
"nb_print(response.messages)"
]
},
{
"cell_type": "markdown",
"id": "20a5ccf4-addd-4bdb-be80-161f7925dae0",
"metadata": {},
"source": [
"Note that MemGPT agents will generate an *internal_monologue* that explains their actions. You can use this monologue to understand why agents are behaving as they are. \n",
"\n",
"Second, MemGPT agents also use tools to communicate, so messages are sent back by calling a `send_message` tool. This makes it easy to allow agents to communicate over different mediums (e.g. text), and also allows the agent to distinguish between what is and isn't sent to the end user. "
]
},
{
"cell_type": "markdown",
"id": "8d33eca5-b8e8-4a8f-9440-85b45c37a777",
"metadata": {},
"source": [
"### Understanding agent state \n",
"MemGPT agents are *stateful* and are defined by: \n",
"* The system prompt defining the agent's behavior (read-only)\n",
"* The set of *tools* they have access to \n",
"* Their memory (core, archival, & recall)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c1cf7136-4060-441a-9d12-da851badf339",
"metadata": {},
"outputs": [],
"source": [
"print(agent_state.system)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d9e1c8c0-e98c-4952-b850-136b5b50a5ee",
"metadata": {},
"outputs": [],
"source": [
"agent_state.tools"
]
},
{
"cell_type": "markdown",
"id": "ae910ad9-afee-41f5-badd-a8dee5b2ad94",
"metadata": {},
"source": [
"### Viewing an agent's memory"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "478a0df6-3c87-4803-9133-8a54f9c00320",
"metadata": {},
"outputs": [],
"source": [
"memory = client.get_core_memory(agent_state.id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ff2c3736-5424-4883-8fe9-73a4f598a043",
"metadata": {},
"outputs": [],
"source": [
"memory"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d6da43d6-847e-4a0a-9b92-cea2721e828a",
"metadata": {},
"outputs": [],
"source": [
"client.get_archival_memory_summary(agent_state.id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0399a1d6-a1f8-4796-a4c0-eb322512b0ec",
"metadata": {},
"outputs": [],
"source": [
"client.get_recall_memory_summary(agent_state.id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c7cce583-1f11-4f13-a6ed-52cc7f80e3c4",
"metadata": {},
"outputs": [],
"source": [
"client.get_messages(agent_state.id)"
]
},
{
"cell_type": "markdown",
"id": "dfd0a9ae-417e-4ba0-a562-ec59cb2bbf7d",
"metadata": {},
"source": [
"## Understanding core memory \n",
"Core memory is memory that is stored *in-context*, so it is included in every LLM call. What's unique about MemGPT is that this core memory is editable via tools by the agent itself. Let's see how the agent can adapt its memory to new information."
]
},
{
"cell_type": "markdown",
"id": "d259669c-5903-40b5-8758-93c36faa752f",
"metadata": {},
"source": [
"### Memories about the human \n",
"The `human` section of `ChatMemory` is used to remember information about the human in the conversation. As the agent learns new information about the human, it can update this part of memory to improve personalization. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "beb9b0ba-ed7c-4917-8ee5-21d201516086",
"metadata": {},
"outputs": [],
"source": [
"response = client.send_message(\n",
" agent_id=agent_state.id, \n",
" message = \"My name is actually Bob\", \n",
" role = \"user\"\n",
") \n",
"nb_print(response.messages)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "25f58968-e262-4268-86ef-1bed57e6bf33",
"metadata": {},
"outputs": [],
"source": [
"client.get_core_memory(agent_state.id)"
]
},
{
"cell_type": "markdown",
"id": "32692ca2-b731-43a6-84de-439a08a4c0d2",
"metadata": {},
"source": [
"### Memories about the agent\n",
"The agent also records information about itself and how it behaves in the `persona` section of memory. This is important for ensuring a consistent persona over time (e.g. not making inconsistent claims, such as liking ice cream one day and hating it another). Unlike the `system_prompt`, the `persona` is editable - this means the agent can incorporate feedback to learn and improve its persona over time. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f68851c5-5666-45fd-9d2f-037ea86bfcfa",
"metadata": {},
"outputs": [],
"source": [
"response = client.send_message(\n",
" agent_id=agent_state.id, \n",
" message = \"In the future, never use emojis to communicate\", \n",
" role = \"user\"\n",
") \n",
"nb_print(response.messages)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2fc54336-d61f-446d-82ea-9dd93a011e51",
"metadata": {},
"outputs": [],
"source": [
"client.get_core_memory(agent_state.id).get_block('persona')"
]
},
{
"cell_type": "markdown",
"id": "592f5d1c-cd2f-4314-973e-fcc481e6b460",
"metadata": {},
"source": [
"## Understanding archival memory\n",
"MemGPT agents store long-term memories in *archival memory*, which persists data in an external database. This gives agents additional space to write information outside of the context window (where core memory lives), which is limited in size. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "af63a013-6be3-4931-91b0-309ff2a4dc3a",
"metadata": {},
"outputs": [],
"source": [
"client.get_archival_memory(agent_state.id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bfa52984-fe7c-4d17-900a-70a376a460f9",
"metadata": {},
"outputs": [],
"source": [
"client.get_archival_memory_summary(agent_state.id)"
]
},
{
"cell_type": "markdown",
"id": "a3ab0ae9-fc00-4447-8942-7dbed7a99222",
"metadata": {},
"source": [
"Agents themselves can write to their archival memory when they learn information they think should be placed in long-term storage. You can also directly suggest that the agent store information in archival memory. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c6556f76-8fcb-42ff-a6d0-981685ef071c",
"metadata": {},
"outputs": [],
"source": [
"response = client.send_message(\n",
" agent_id=agent_state.id, \n",
" message = \"Save the information that 'bob loves cats' to archival\", \n",
" role = \"user\"\n",
") \n",
"nb_print(response.messages)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b4429ffa-e27a-4714-a873-84f793c08535",
"metadata": {},
"outputs": [],
"source": [
"client.get_archival_memory(agent_state.id)[0].text"
]
},
{
"cell_type": "markdown",
"id": "ae463e7c-0588-48ab-888c-734c783782bf",
"metadata": {},
"source": [
"You can also directly insert into archival memory from the client. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f9d4194d-9ed5-40a1-b35d-a9aff3048000",
"metadata": {},
"outputs": [],
"source": [
"client.insert_archival_memory(\n",
" agent_state.id, \n",
\"Bob loves Boston terriers\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "338149f1-6671-4a0b-81d9-23d01dbe2e97",
"metadata": {},
"source": [
"Now let's see how the agent uses its archival memory:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5908b10f-94db-4f5a-bb9a-1f08c74a2860",
"metadata": {},
"outputs": [],
"source": [
"response = client.send_message(\n",
" agent_id=agent_state.id, \n",
" role=\"user\", \n",
" message=\"What animals do I like? Search archival.\"\n",
")\n",
"nb_print(response.messages)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "letta",
"language": "python",
"name": "letta"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
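The memory hierarchy the notebook walks through (small, editable, in-context core memory plus an unbounded external archival store) can be caricatured in a few lines of plain Python. This is a toy sketch for intuition only, not the letta implementation; the method names mirror the agent's memory-editing tools, and the substring search stands in for real embedding-based retrieval.

```python
class ToyMemory:
    """Core memory is always in-context; archival memory is searched on demand."""

    def __init__(self):
        self.core = {"human": "", "persona": ""}  # editable in-context blocks
        self.archival = []                        # external long-term store

    def core_memory_replace(self, block: str, old: str, new: str):
        # the agent edits its own core memory via a tool like this
        self.core[block] = self.core[block].replace(old, new)

    def archival_memory_insert(self, text: str):
        # write a long-term memory outside the context window
        self.archival.append(text)

    def archival_memory_search(self, query: str):
        # naive substring match stands in for embedding retrieval
        return [t for t in self.archival if query.lower() in t.lower()]


mem = ToyMemory()
mem.core["human"] = "My name is Sarah"
mem.core_memory_replace("human", "Sarah", "Bob")   # "My name is actually Bob"
mem.archival_memory_insert("Bob loves cats")
```

After these calls, the core block reads "My name is Bob" and a search for "cats" retrieves the archival entry, which is the same update/insert/search loop the lab exercises against the real client.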


@@ -1,19 +1,19 @@
import json
from typing import List, Tuple
from memgpt import create_client
from memgpt.agent import Agent
from memgpt.memory import ChatMemory
from letta import create_client
from letta.agent import Agent
from letta.memory import ChatMemory
"""
This example show how you can add a google search custom function to your MemGPT agent.
This example shows how you can add a google search custom function to your Letta agent.
First, make sure you run:
```
pip install serpapi
pip install llama-index-readers-web
```
then setup memgpt with `memgpt configure`.
then set up letta with `letta configure`.
"""
@@ -46,9 +46,9 @@ def google_search(self: Agent, query: str) -> List[Tuple[str, str]]:
import serpapi
from openai import OpenAI
from memgpt.credentials import MemGPTCredentials
from memgpt.data_sources.connectors import WebConnector
from memgpt.utils import printd
from letta.credentials import LettaCredentials
from letta.data_sources.connectors import WebConnector
from letta.utils import printd
printd("Starting google search:", query)
@@ -59,7 +59,7 @@ def google_search(self: Agent, query: str) -> List[Tuple[str, str]]:
+ f"\n\n{document_text}"
)
credentials = MemGPTCredentials().load()
credentials = LettaCredentials().load()
assert credentials.openai_key is not None, credentials.openai_key
# model = "gpt-4-1106-preview"
model = "gpt-3.5-turbo-1106"
@@ -141,7 +141,7 @@ def google_search(self: Agent, query: str) -> List[Tuple[str, str]]:
def main():
# Create a `LocalClient` (you can also use a `RESTClient`, see the memgpt_rest_client.py example)
# Create a `LocalClient` (you can also use a `RESTClient`, see the letta_rest_client.py example)
client = create_client()
# create tool
@@ -152,14 +152,14 @@ def main():
# google search persona
persona = """
My name is MemGPT.
My name is Letta.
I am a personal assistant who answers a user's questions using google web searches. When a user asks me a question and the answer is not in my context, I will use a tool called google_search which will search the web and return relevant summaries and the links they correspond to. It is my job to construct the best query to input into google_search based on the user's question, and to aggregate the responses of google_search to construct a final answer that also references the original links the information was pulled from. Here is an example:
---
User: Who founded OpenAI?
MemGPT: OpenAI was founded by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk serving as the initial Board of Directors members. [1][2]
Letta: OpenAI was founded by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk serving as the initial Board of Directors members. [1][2]
[1] https://www.britannica.com/topic/OpenAI
[2] https://en.wikipedia.org/wiki/OpenAI
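The persona above instructs the agent to fold `google_search` results into an answer with numbered source links. The bookkeeping behind that citation format can be sketched with stdlib-only code; `cite_answer` and its inputs are hypothetical illustrations, not part of the example's actual tool output.

```python
def cite_answer(answer: str, sources: list[tuple[str, str]]) -> str:
    """Append [n] markers and a link list, mirroring the persona's example format.

    sources: (summary, link) pairs, as a google_search-style tool might return.
    """
    markers = "".join(f"[{i}]" for i in range(1, len(sources) + 1))
    lines = [f"{answer} {markers}"]
    for i, (_summary, link) in enumerate(sources, start=1):
        lines.append(f"[{i}] {link}")
    return "\n".join(lines)


result = cite_answer(
    "OpenAI was founded in 2015.",
    [("summary text", "https://en.wikipedia.org/wiki/OpenAI")],
)
```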

examples/helper.py (Normal file, 128 lines)

@@ -0,0 +1,128 @@
# Add your utilities or helper functions to this file.
import os
from dotenv import load_dotenv, find_dotenv
from IPython.display import display, HTML
import json
import html
import re

# These expect to find a .env file in the directory above the lesson.
# The format for that file is (without the leading #):
# API_KEYNAME=AStringThatIsTheLongAPIKeyFromSomeService


def load_env():
    _ = load_dotenv(find_dotenv())


def get_openai_api_key():
    load_env()
    openai_api_key = os.getenv("OPENAI_API_KEY")
    return openai_api_key


def nb_print(messages):
    html_output = """
<style>
    .message-container {
        font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
        max-width: 800px;
        margin: 20px auto;
        background-color: #1e1e1e;
        border-radius: 8px;
        overflow: hidden;
        color: #d4d4d4;
    }
    .message {
        padding: 10px 15px;
        border-bottom: 1px solid #3a3a3a;
    }
    .message:last-child {
        border-bottom: none;
    }
    .title {
        font-weight: bold;
        margin-bottom: 5px;
        color: #ffffff;
        text-transform: uppercase;
        font-size: 0.9em;
    }
    .content {
        background-color: #2d2d2d;
        border-radius: 4px;
        padding: 5px 10px;
        font-family: 'Consolas', 'Courier New', monospace;
        white-space: pre-wrap;
    }
    .status-line {
        margin-bottom: 5px;
        color: #d4d4d4;
    }
    .function-name { color: #569cd6; }
    .json-key { color: #9cdcfe; }
    .json-string { color: #ce9178; }
    .json-number { color: #b5cea8; }
    .json-boolean { color: #569cd6; }
    .internal-monologue { font-style: italic; }
</style>
<div class="message-container">
"""
    for msg in messages:
        content = get_formatted_content(msg)
        # don't print empty function returns
        if msg.message_type == "function_return":
            return_data = json.loads(msg.function_return)
            if "message" in return_data and return_data["message"] == "None":
                continue
        title = msg.message_type.replace('_', ' ').upper()
        html_output += f"""
<div class="message">
    <div class="title">{title}</div>
    {content}
</div>
"""
    html_output += "</div>"
    display(HTML(html_output))


def get_formatted_content(msg):
    if msg.message_type == "internal_monologue":
        return f'<div class="content"><span class="internal-monologue">{html.escape(msg.internal_monologue)}</span></div>'
    elif msg.message_type == "function_call":
        args = format_json(msg.function_call.arguments)
        return f'<div class="content"><span class="function-name">{html.escape(msg.function_call.name)}</span>({args})</div>'
    elif msg.message_type == "function_return":
        return_value = format_json(msg.function_return)
        # return f'<div class="status-line">Status: {html.escape(msg.status)}</div><div class="content">{return_value}</div>'
        return f'<div class="content">{return_value}</div>'
    elif msg.message_type == "user_message":
        if is_json(msg.message):
            return f'<div class="content">{format_json(msg.message)}</div>'
        else:
            return f'<div class="content">{html.escape(msg.message)}</div>'
    elif msg.message_type in ["assistant_message", "system_message"]:
        return f'<div class="content">{html.escape(msg.message)}</div>'
    else:
        return f'<div class="content">{html.escape(str(msg))}</div>'


def is_json(string):
    try:
        json.loads(string)
        return True
    except ValueError:
        return False


def format_json(json_str):
    try:
        parsed = json.loads(json_str)
        formatted = json.dumps(parsed, indent=2, ensure_ascii=False)
        formatted = formatted.replace('&', '&amp;').replace('<', '&lt;').replace('>', '&gt;')
        formatted = formatted.replace('\n', '<br>').replace(' ', '&nbsp;&nbsp;')
        formatted = re.sub(r'(".*?"):', r'<span class="json-key">\1</span>:', formatted)
        # after the space replacement above, values follow ':&nbsp;&nbsp;' rather
        # than ': ', so match the escaped whitespace when highlighting values
        formatted = re.sub(r':((?:&nbsp;)*)(".*?")', r':\1<span class="json-string">\2</span>', formatted)
        formatted = re.sub(r':((?:&nbsp;)*)(\d+)', r':\1<span class="json-number">\2</span>', formatted)
        formatted = re.sub(r':((?:&nbsp;)*)(true|false)', r':\1<span class="json-boolean">\2</span>', formatted)
        return formatted
    except json.JSONDecodeError:
        return html.escape(json_str)
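The empty-function-return filtering in `nb_print` can be exercised without IPython or letta installed; `Msg` below is a hypothetical stand-in for the real message schema, carrying just the two fields the filter reads.

```python
import json
from collections import namedtuple

# minimal stand-in for a letta function_return message (hypothetical shape)
Msg = namedtuple("Msg", ["message_type", "function_return"])


def visible(msg) -> bool:
    # mirror nb_print: hide function returns whose payload message is "None"
    if msg.message_type == "function_return":
        data = json.loads(msg.function_return)
        if data.get("message") == "None":
            return False
    return True


msgs = [
    Msg("function_return", json.dumps({"message": "None"})),  # filtered out
    Msg("function_return", json.dumps({"message": "ok"})),    # rendered
]
shown = [m for m in msgs if visible(m)]
```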


@@ -1,7 +1,7 @@
import json
from memgpt import create_client
from memgpt.memory import ChatMemory
from letta import create_client
from letta.memory import ChatMemory
def main():


@@ -1,13 +1,13 @@
import json
from memgpt import Admin, create_client
from memgpt.memory import ChatMemory
from letta import Admin, create_client
from letta.memory import ChatMemory
"""
Make sure you run the MemGPT server before running this example.
Make sure you run the Letta server before running this example.
```
export MEMGPT_SERVER_PASS=your_token
memgpt server
letta server
```
"""


@@ -5,8 +5,8 @@
"id": "cac06555-9ce8-4f01-bbef-3f8407f4b54d",
"metadata": {},
"source": [
"# Lab 3: Building custom data connectors for MemGPT\n",
"This example notebook goes over how to create a connector to load external data sources into MemGPT agents. "
"# Lab 3: Building custom data connectors for Letta\n",
"This example notebook goes over how to create a connector to load external data sources into Letta agents. "
]
},
{
@@ -26,7 +26,7 @@
"metadata": {},
"outputs": [],
"source": [
"from memgpt import create_client \n",
"from letta import create_client \n",
"\n",
"client = create_client() "
]
@@ -68,10 +68,10 @@
}
],
"source": [
"import memgpt\n",
"import letta\n",
"import chromadb\n",
"\n",
"print(memgpt.__version__)\n",
"print(letta.__version__)\n",
"print(chromadb.__version__)"
]
},
@@ -81,7 +81,7 @@
"metadata": {},
"source": [
"### Loading external data into archival memory \n",
"In this section, we'll how you how you can use the `llama-index` library add external data sources as memories into MemGPT. "
"In this section, we'll show you how you can use the `llama-index` library to add external data sources as memories into Letta. "
]
},
{
@@ -94,23 +94,23 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: llama-index in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (0.10.27)\n",
"Requirement already satisfied: llama-index in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (0.10.27)\n",
"Collecting llama-index-readers-web\n",
" Downloading llama_index_readers_web-0.2.2-py3-none-any.whl.metadata (1.2 kB)\n",
"Requirement already satisfied: llama-index-agent-openai<0.3.0,>=0.1.4 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index) (0.2.2)\n",
"Requirement already satisfied: llama-index-cli<0.2.0,>=0.1.2 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index) (0.1.11)\n",
"Requirement already satisfied: llama-index-core<0.11.0,>=0.10.27 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index) (0.10.27)\n",
"Requirement already satisfied: llama-index-embeddings-openai<0.2.0,>=0.1.5 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index) (0.1.7)\n",
"Requirement already satisfied: llama-index-indices-managed-llama-cloud<0.2.0,>=0.1.2 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index) (0.1.5)\n",
"Requirement already satisfied: llama-index-legacy<0.10.0,>=0.9.48 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index) (0.9.48)\n",
"Requirement already satisfied: llama-index-llms-openai<0.2.0,>=0.1.13 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index) (0.1.14)\n",
"Requirement already satisfied: llama-index-multi-modal-llms-openai<0.2.0,>=0.1.3 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index) (0.1.4)\n",
"Requirement already satisfied: llama-index-program-openai<0.2.0,>=0.1.3 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index) (0.1.5)\n",
"Requirement already satisfied: llama-index-question-gen-openai<0.2.0,>=0.1.2 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index) (0.1.3)\n",
"Requirement already satisfied: llama-index-readers-file<0.2.0,>=0.1.4 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index) (0.1.13)\n",
"Requirement already satisfied: llama-index-readers-llama-parse<0.2.0,>=0.1.2 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index) (0.1.4)\n",
"Requirement already satisfied: aiohttp<4.0.0,>=3.9.1 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-readers-web) (3.9.3)\n",
"Requirement already satisfied: beautifulsoup4<5.0.0,>=4.12.3 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-readers-web) (4.12.3)\n",
"Requirement already satisfied: llama-index-agent-openai<0.3.0,>=0.1.4 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index) (0.2.2)\n",
"Requirement already satisfied: llama-index-cli<0.2.0,>=0.1.2 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index) (0.1.11)\n",
"Requirement already satisfied: llama-index-core<0.11.0,>=0.10.27 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index) (0.10.27)\n",
"Requirement already satisfied: llama-index-embeddings-openai<0.2.0,>=0.1.5 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index) (0.1.7)\n",
"Requirement already satisfied: llama-index-indices-managed-llama-cloud<0.2.0,>=0.1.2 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index) (0.1.5)\n",
"Requirement already satisfied: llama-index-legacy<0.10.0,>=0.9.48 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index) (0.9.48)\n",
"Requirement already satisfied: llama-index-llms-openai<0.2.0,>=0.1.13 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index) (0.1.14)\n",
"Requirement already satisfied: llama-index-multi-modal-llms-openai<0.2.0,>=0.1.3 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index) (0.1.4)\n",
"Requirement already satisfied: llama-index-program-openai<0.2.0,>=0.1.3 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index) (0.1.5)\n",
"Requirement already satisfied: llama-index-question-gen-openai<0.2.0,>=0.1.2 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index) (0.1.3)\n",
"Requirement already satisfied: llama-index-readers-file<0.2.0,>=0.1.4 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index) (0.1.13)\n",
"Requirement already satisfied: llama-index-readers-llama-parse<0.2.0,>=0.1.2 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index) (0.1.4)\n",
"Requirement already satisfied: aiohttp<4.0.0,>=3.9.1 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-readers-web) (3.9.3)\n",
"Requirement already satisfied: beautifulsoup4<5.0.0,>=4.12.3 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-readers-web) (4.12.3)\n",
"Collecting chromedriver-autoinstaller<0.7.0,>=0.6.3 (from llama-index-readers-web)\n",
" Using cached chromedriver_autoinstaller-0.6.4-py3-none-any.whl.metadata (2.1 kB)\n",
"Collecting html2text<2025.0.0,>=2024.2.26 (from llama-index-readers-web)\n",
@@ -124,44 +124,44 @@
" Using cached newspaper3k-0.2.8-py3-none-any.whl.metadata (11 kB)\n",
"Collecting playwright<2.0,>=1.30 (from llama-index-readers-web)\n",
" Using cached playwright-1.46.0-py3-none-macosx_11_0_universal2.whl.metadata (3.5 kB)\n",
"Requirement already satisfied: requests<3.0.0,>=2.31.0 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-readers-web) (2.31.0)\n",
"Requirement already satisfied: requests<3.0.0,>=2.31.0 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-readers-web) (2.31.0)\n",
"Collecting selenium<5.0.0,>=4.17.2 (from llama-index-readers-web)\n",
" Downloading selenium-4.24.0-py3-none-any.whl.metadata (7.1 kB)\n",
"Collecting spider-client<0.0.28,>=0.0.27 (from llama-index-readers-web)\n",
" Using cached spider_client-0.0.27-py3-none-any.whl\n",
"Requirement already satisfied: urllib3>=1.1.0 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-readers-web) (2.2.1)\n",
"Requirement already satisfied: aiosignal>=1.1.2 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from aiohttp<4.0.0,>=3.9.1->llama-index-readers-web) (1.3.1)\n",
"Requirement already satisfied: attrs>=17.3.0 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from aiohttp<4.0.0,>=3.9.1->llama-index-readers-web) (23.2.0)\n",
"Requirement already satisfied: frozenlist>=1.1.1 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from aiohttp<4.0.0,>=3.9.1->llama-index-readers-web) (1.4.1)\n",
"Requirement already satisfied: multidict<7.0,>=4.5 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from aiohttp<4.0.0,>=3.9.1->llama-index-readers-web) (6.0.5)\n",
"Requirement already satisfied: yarl<2.0,>=1.0 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from aiohttp<4.0.0,>=3.9.1->llama-index-readers-web) (1.9.4)\n",
"Requirement already satisfied: soupsieve>1.2 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from beautifulsoup4<5.0.0,>=4.12.3->llama-index-readers-web) (2.5)\n",
"Requirement already satisfied: packaging>=23.1 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from chromedriver-autoinstaller<0.7.0,>=0.6.3->llama-index-readers-web) (24.0)\n",
"Requirement already satisfied: openai>=1.14.0 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-agent-openai<0.3.0,>=0.1.4->llama-index) (1.16.2)\n",
"Requirement already satisfied: PyYAML>=6.0.1 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (6.0.1)\n",
"Requirement already satisfied: SQLAlchemy>=1.4.49 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from SQLAlchemy[asyncio]>=1.4.49->llama-index-core<0.11.0,>=0.10.27->llama-index) (2.0.29)\n",
"Requirement already satisfied: dataclasses-json in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (0.6.4)\n",
"Requirement already satisfied: deprecated>=1.2.9.3 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (1.2.14)\n",
"Requirement already satisfied: dirtyjson<2.0.0,>=1.0.8 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (1.0.8)\n",
"Requirement already satisfied: fsspec>=2023.5.0 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (2024.2.0)\n",
"Requirement already satisfied: httpx in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (0.25.2)\n",
"Requirement already satisfied: llamaindex-py-client<0.2.0,>=0.1.16 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (0.1.16)\n",
"Requirement already satisfied: nest-asyncio<2.0.0,>=1.5.8 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (1.6.0)\n",
"Requirement already satisfied: networkx>=3.0 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (3.3)\n",
"Requirement already satisfied: nltk<4.0.0,>=3.8.1 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (3.8.1)\n",
"Requirement already satisfied: numpy in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (1.26.4)\n",
"Requirement already satisfied: pandas in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (2.2.1)\n",
"Requirement already satisfied: pillow>=9.0.0 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (10.3.0)\n",
"Requirement already satisfied: tenacity<9.0.0,>=8.2.0 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (8.2.3)\n",
"Requirement already satisfied: tiktoken>=0.3.3 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (0.5.2)\n",
"Requirement already satisfied: tqdm<5.0.0,>=4.66.1 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (4.66.2)\n",
"Requirement already satisfied: typing-extensions>=4.5.0 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (4.11.0)\n",
"Requirement already satisfied: typing-inspect>=0.8.0 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (0.9.0)\n",
"Requirement already satisfied: wrapt in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (1.16.0)\n",
"Requirement already satisfied: pymupdf<2.0.0,>=1.23.21 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-readers-file<0.2.0,>=0.1.4->llama-index) (1.24.1)\n",
"Requirement already satisfied: pypdf<5.0.0,>=4.0.1 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-readers-file<0.2.0,>=0.1.4->llama-index) (4.1.0)\n",
"Requirement already satisfied: striprtf<0.0.27,>=0.0.26 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-readers-file<0.2.0,>=0.1.4->llama-index) (0.0.26)\n",
"Requirement already satisfied: llama-parse<0.5.0,>=0.4.0 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llama-index-readers-llama-parse<0.2.0,>=0.1.2->llama-index) (0.4.0)\n",
"Requirement already satisfied: urllib3>=1.1.0 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-readers-web) (2.2.1)\n",
"Requirement already satisfied: aiosignal>=1.1.2 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from aiohttp<4.0.0,>=3.9.1->llama-index-readers-web) (1.3.1)\n",
"Requirement already satisfied: attrs>=17.3.0 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from aiohttp<4.0.0,>=3.9.1->llama-index-readers-web) (23.2.0)\n",
"Requirement already satisfied: frozenlist>=1.1.1 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from aiohttp<4.0.0,>=3.9.1->llama-index-readers-web) (1.4.1)\n",
"Requirement already satisfied: multidict<7.0,>=4.5 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from aiohttp<4.0.0,>=3.9.1->llama-index-readers-web) (6.0.5)\n",
"Requirement already satisfied: yarl<2.0,>=1.0 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from aiohttp<4.0.0,>=3.9.1->llama-index-readers-web) (1.9.4)\n",
"Requirement already satisfied: soupsieve>1.2 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from beautifulsoup4<5.0.0,>=4.12.3->llama-index-readers-web) (2.5)\n",
"Requirement already satisfied: packaging>=23.1 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from chromedriver-autoinstaller<0.7.0,>=0.6.3->llama-index-readers-web) (24.0)\n",
"Requirement already satisfied: openai>=1.14.0 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-agent-openai<0.3.0,>=0.1.4->llama-index) (1.16.2)\n",
"Requirement already satisfied: PyYAML>=6.0.1 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (6.0.1)\n",
"Requirement already satisfied: SQLAlchemy>=1.4.49 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from SQLAlchemy[asyncio]>=1.4.49->llama-index-core<0.11.0,>=0.10.27->llama-index) (2.0.29)\n",
"Requirement already satisfied: dataclasses-json in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (0.6.4)\n",
"Requirement already satisfied: deprecated>=1.2.9.3 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (1.2.14)\n",
"Requirement already satisfied: dirtyjson<2.0.0,>=1.0.8 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (1.0.8)\n",
"Requirement already satisfied: fsspec>=2023.5.0 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (2024.2.0)\n",
"Requirement already satisfied: httpx in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (0.25.2)\n",
"Requirement already satisfied: llamaindex-py-client<0.2.0,>=0.1.16 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (0.1.16)\n",
"Requirement already satisfied: nest-asyncio<2.0.0,>=1.5.8 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (1.6.0)\n",
"Requirement already satisfied: networkx>=3.0 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (3.3)\n",
"Requirement already satisfied: nltk<4.0.0,>=3.8.1 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (3.8.1)\n",
"Requirement already satisfied: numpy in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (1.26.4)\n",
"Requirement already satisfied: pandas in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (2.2.1)\n",
"Requirement already satisfied: pillow>=9.0.0 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (10.3.0)\n",
"Requirement already satisfied: tenacity<9.0.0,>=8.2.0 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (8.2.3)\n",
"Requirement already satisfied: tiktoken>=0.3.3 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (0.5.2)\n",
"Requirement already satisfied: tqdm<5.0.0,>=4.66.1 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (4.66.2)\n",
"Requirement already satisfied: typing-extensions>=4.5.0 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (4.11.0)\n",
"Requirement already satisfied: typing-inspect>=0.8.0 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (0.9.0)\n",
"Requirement already satisfied: wrapt in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-core<0.11.0,>=0.10.27->llama-index) (1.16.0)\n",
"Requirement already satisfied: pymupdf<2.0.0,>=1.23.21 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-readers-file<0.2.0,>=0.1.4->llama-index) (1.24.1)\n",
"Requirement already satisfied: pypdf<5.0.0,>=4.0.1 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-readers-file<0.2.0,>=0.1.4->llama-index) (4.1.0)\n",
"Requirement already satisfied: striprtf<0.0.27,>=0.0.26 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-readers-file<0.2.0,>=0.1.4->llama-index) (0.0.26)\n",
"Requirement already satisfied: llama-parse<0.5.0,>=0.4.0 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llama-index-readers-llama-parse<0.2.0,>=0.1.2->llama-index) (0.4.0)\n",
"Collecting cssselect>=0.9.2 (from newspaper3k<0.3.0,>=0.2.8->llama-index-readers-web)\n",
" Using cached cssselect-1.2.0-py2.py3-none-any.whl.metadata (2.2 kB)\n",
"Collecting lxml>=3.6.0 (from newspaper3k<0.3.0,>=0.2.8->llama-index-readers-web)\n",
@@ -174,51 +174,51 @@
" Using cached feedfinder2-0.0.4-py3-none-any.whl\n",
"Collecting jieba3k>=0.35.1 (from newspaper3k<0.3.0,>=0.2.8->llama-index-readers-web)\n",
" Using cached jieba3k-0.35.1-py3-none-any.whl\n",
"Requirement already satisfied: python-dateutil>=2.5.3 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from newspaper3k<0.3.0,>=0.2.8->llama-index-readers-web) (2.9.0.post0)\n",
"Requirement already satisfied: python-dateutil>=2.5.3 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from newspaper3k<0.3.0,>=0.2.8->llama-index-readers-web) (2.9.0.post0)\n",
"Collecting tinysegmenter==0.3 (from newspaper3k<0.3.0,>=0.2.8->llama-index-readers-web)\n",
" Using cached tinysegmenter-0.3-py3-none-any.whl\n",
"Requirement already satisfied: greenlet==3.0.3 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from playwright<2.0,>=1.30->llama-index-readers-web) (3.0.3)\n",
"Requirement already satisfied: greenlet==3.0.3 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from playwright<2.0,>=1.30->llama-index-readers-web) (3.0.3)\n",
"Collecting pyee==11.1.0 (from playwright<2.0,>=1.30->llama-index-readers-web)\n",
" Using cached pyee-11.1.0-py3-none-any.whl.metadata (2.8 kB)\n",
"Requirement already satisfied: charset-normalizer<4,>=2 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from requests<3.0.0,>=2.31.0->llama-index-readers-web) (3.3.2)\n",
"Requirement already satisfied: idna<4,>=2.5 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from requests<3.0.0,>=2.31.0->llama-index-readers-web) (3.6)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from requests<3.0.0,>=2.31.0->llama-index-readers-web) (2024.2.2)\n",
"Requirement already satisfied: charset-normalizer<4,>=2 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from requests<3.0.0,>=2.31.0->llama-index-readers-web) (3.3.2)\n",
"Requirement already satisfied: idna<4,>=2.5 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from requests<3.0.0,>=2.31.0->llama-index-readers-web) (3.6)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from requests<3.0.0,>=2.31.0->llama-index-readers-web) (2024.2.2)\n",
"Collecting trio~=0.17 (from selenium<5.0.0,>=4.17.2->llama-index-readers-web)\n",
" Using cached trio-0.26.2-py3-none-any.whl.metadata (8.6 kB)\n",
"Collecting trio-websocket~=0.9 (from selenium<5.0.0,>=4.17.2->llama-index-readers-web)\n",
" Using cached trio_websocket-0.11.1-py3-none-any.whl.metadata (4.7 kB)\n",
"Collecting websocket-client~=1.8 (from selenium<5.0.0,>=4.17.2->llama-index-readers-web)\n",
" Using cached websocket_client-1.8.0-py3-none-any.whl.metadata (8.0 kB)\n",
"Requirement already satisfied: six in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from feedfinder2>=0.0.4->newspaper3k<0.3.0,>=0.2.8->llama-index-readers-web) (1.16.0)\n",
"Requirement already satisfied: six in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from feedfinder2>=0.0.4->newspaper3k<0.3.0,>=0.2.8->llama-index-readers-web) (1.16.0)\n",
"Collecting sgmllib3k (from feedparser>=5.2.1->newspaper3k<0.3.0,>=0.2.8->llama-index-readers-web)\n",
" Using cached sgmllib3k-1.0.0-py3-none-any.whl\n",
"Requirement already satisfied: pydantic>=1.10 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from llamaindex-py-client<0.2.0,>=0.1.16->llama-index-core<0.11.0,>=0.10.27->llama-index) (2.8.2)\n",
"Requirement already satisfied: anyio in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from httpx->llama-index-core<0.11.0,>=0.10.27->llama-index) (4.3.0)\n",
"Requirement already satisfied: httpcore==1.* in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from httpx->llama-index-core<0.11.0,>=0.10.27->llama-index) (1.0.5)\n",
"Requirement already satisfied: sniffio in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from httpx->llama-index-core<0.11.0,>=0.10.27->llama-index) (1.3.1)\n",
"Requirement already satisfied: h11<0.15,>=0.13 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from httpcore==1.*->httpx->llama-index-core<0.11.0,>=0.10.27->llama-index) (0.14.0)\n",
"Requirement already satisfied: click in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from nltk<4.0.0,>=3.8.1->llama-index-core<0.11.0,>=0.10.27->llama-index) (8.1.7)\n",
"Requirement already satisfied: joblib in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from nltk<4.0.0,>=3.8.1->llama-index-core<0.11.0,>=0.10.27->llama-index) (1.3.2)\n",
"Requirement already satisfied: regex>=2021.8.3 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from nltk<4.0.0,>=3.8.1->llama-index-core<0.11.0,>=0.10.27->llama-index) (2023.12.25)\n",
"Requirement already satisfied: distro<2,>=1.7.0 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from openai>=1.14.0->llama-index-agent-openai<0.3.0,>=0.1.4->llama-index) (1.9.0)\n",
"Requirement already satisfied: PyMuPDFb==1.24.1 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from pymupdf<2.0.0,>=1.23.21->llama-index-readers-file<0.2.0,>=0.1.4->llama-index) (1.24.1)\n",
"Requirement already satisfied: pydantic>=1.10 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from llamaindex-py-client<0.2.0,>=0.1.16->llama-index-core<0.11.0,>=0.10.27->llama-index) (2.8.2)\n",
"Requirement already satisfied: anyio in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from httpx->llama-index-core<0.11.0,>=0.10.27->llama-index) (4.3.0)\n",
"Requirement already satisfied: httpcore==1.* in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from httpx->llama-index-core<0.11.0,>=0.10.27->llama-index) (1.0.5)\n",
"Requirement already satisfied: sniffio in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from httpx->llama-index-core<0.11.0,>=0.10.27->llama-index) (1.3.1)\n",
"Requirement already satisfied: h11<0.15,>=0.13 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from httpcore==1.*->httpx->llama-index-core<0.11.0,>=0.10.27->llama-index) (0.14.0)\n",
"Requirement already satisfied: click in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from nltk<4.0.0,>=3.8.1->llama-index-core<0.11.0,>=0.10.27->llama-index) (8.1.7)\n",
"Requirement already satisfied: joblib in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from nltk<4.0.0,>=3.8.1->llama-index-core<0.11.0,>=0.10.27->llama-index) (1.3.2)\n",
"Requirement already satisfied: regex>=2021.8.3 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from nltk<4.0.0,>=3.8.1->llama-index-core<0.11.0,>=0.10.27->llama-index) (2023.12.25)\n",
"Requirement already satisfied: distro<2,>=1.7.0 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from openai>=1.14.0->llama-index-agent-openai<0.3.0,>=0.1.4->llama-index) (1.9.0)\n",
"Requirement already satisfied: PyMuPDFb==1.24.1 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from pymupdf<2.0.0,>=1.23.21->llama-index-readers-file<0.2.0,>=0.1.4->llama-index) (1.24.1)\n",
"Collecting requests-file>=1.4 (from tldextract>=2.0.1->newspaper3k<0.3.0,>=0.2.8->llama-index-readers-web)\n",
" Using cached requests_file-2.1.0-py2.py3-none-any.whl.metadata (1.7 kB)\n",
"Requirement already satisfied: filelock>=3.0.8 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from tldextract>=2.0.1->newspaper3k<0.3.0,>=0.2.8->llama-index-readers-web) (3.13.3)\n",
"Requirement already satisfied: filelock>=3.0.8 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from tldextract>=2.0.1->newspaper3k<0.3.0,>=0.2.8->llama-index-readers-web) (3.13.3)\n",
"Collecting sortedcontainers (from trio~=0.17->selenium<5.0.0,>=4.17.2->llama-index-readers-web)\n",
" Using cached sortedcontainers-2.4.0-py2.py3-none-any.whl.metadata (10 kB)\n",
"Collecting outcome (from trio~=0.17->selenium<5.0.0,>=4.17.2->llama-index-readers-web)\n",
" Using cached outcome-1.3.0.post0-py2.py3-none-any.whl.metadata (2.6 kB)\n",
"Collecting wsproto>=0.14 (from trio-websocket~=0.9->selenium<5.0.0,>=4.17.2->llama-index-readers-web)\n",
" Using cached wsproto-1.2.0-py3-none-any.whl.metadata (5.6 kB)\n",
"Requirement already satisfied: mypy-extensions>=0.3.0 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from typing-inspect>=0.8.0->llama-index-core<0.11.0,>=0.10.27->llama-index) (1.0.0)\n",
"Requirement already satisfied: mypy-extensions>=0.3.0 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from typing-inspect>=0.8.0->llama-index-core<0.11.0,>=0.10.27->llama-index) (1.0.0)\n",
"Collecting pysocks!=1.5.7,<2.0,>=1.5.6 (from urllib3[socks]<3,>=1.26->selenium<5.0.0,>=4.17.2->llama-index-readers-web)\n",
" Using cached PySocks-1.7.1-py3-none-any.whl.metadata (13 kB)\n",
"Requirement already satisfied: marshmallow<4.0.0,>=3.18.0 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from dataclasses-json->llama-index-core<0.11.0,>=0.10.27->llama-index) (3.21.1)\n",
"Requirement already satisfied: pytz>=2020.1 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from pandas->llama-index-core<0.11.0,>=0.10.27->llama-index) (2023.4)\n",
"Requirement already satisfied: tzdata>=2022.7 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from pandas->llama-index-core<0.11.0,>=0.10.27->llama-index) (2024.1)\n",
"Requirement already satisfied: annotated-types>=0.4.0 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from pydantic>=1.10->llamaindex-py-client<0.2.0,>=0.1.16->llama-index-core<0.11.0,>=0.10.27->llama-index) (0.6.0)\n",
"Requirement already satisfied: pydantic-core==2.20.1 in /Users/sarahwooders/repos/memgpt-main/MemGPT/env/lib/python3.12/site-packages (from pydantic>=1.10->llamaindex-py-client<0.2.0,>=0.1.16->llama-index-core<0.11.0,>=0.10.27->llama-index) (2.20.1)\n",
"Requirement already satisfied: marshmallow<4.0.0,>=3.18.0 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from dataclasses-json->llama-index-core<0.11.0,>=0.10.27->llama-index) (3.21.1)\n",
"Requirement already satisfied: pytz>=2020.1 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from pandas->llama-index-core<0.11.0,>=0.10.27->llama-index) (2023.4)\n",
"Requirement already satisfied: tzdata>=2022.7 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from pandas->llama-index-core<0.11.0,>=0.10.27->llama-index) (2024.1)\n",
"Requirement already satisfied: annotated-types>=0.4.0 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from pydantic>=1.10->llamaindex-py-client<0.2.0,>=0.1.16->llama-index-core<0.11.0,>=0.10.27->llama-index) (0.6.0)\n",
"Requirement already satisfied: pydantic-core==2.20.1 in /Users/sarahwooders/repos/letta-main/Letta/env/lib/python3.12/site-packages (from pydantic>=1.10->llamaindex-py-client<0.2.0,>=0.1.16->llama-index-core<0.11.0,>=0.10.27->llama-index) (2.20.1)\n",
"Using cached llama_index_readers_web-0.1.23-py3-none-any.whl (72 kB)\n",
"Using cached chromedriver_autoinstaller-0.6.4-py3-none-any.whl (7.6 kB)\n",
"Using cached newspaper3k-0.2.8-py3-none-any.whl (211 kB)\n",
@@ -248,8 +248,8 @@
" Uninstalling html2text-2020.1.16:\n",
" Successfully uninstalled html2text-2020.1.16\n",
"ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
"pymemgpt 0.3.19 requires docstring-parser<0.16,>=0.15, but you have docstring-parser 0.11 which is incompatible.\n",
"pymemgpt 0.3.19 requires html2text<2021.0.0,>=2020.1.16, but you have html2text 2024.2.26 which is incompatible.\n",
"pyletta 0.3.19 requires docstring-parser<0.16,>=0.15, but you have docstring-parser 0.11 which is incompatible.\n",
"pyletta 0.3.19 requires html2text<2021.0.0,>=2020.1.16, but you have html2text 2024.2.26 which is incompatible.\n",
"Successfully installed chromedriver-autoinstaller-0.6.4 cssselect-1.2.0 feedfinder2-0.0.4 feedparser-6.0.11 html2text-2024.2.26 jieba3k-0.35.1 llama-index-readers-web-0.1.23 lxml-5.3.0 newspaper3k-0.2.8 outcome-1.3.0.post0 playwright-1.46.0 pyee-11.1.0 pysocks-1.7.1 requests-file-2.1.0 selenium-4.24.0 sgmllib3k-1.0.0 sortedcontainers-2.4.0 spider-client-0.0.27 tinysegmenter-0.3 tldextract-5.1.2 trio-0.26.2 trio-websocket-0.11.1 websocket-client-1.8.0 wsproto-1.2.0\n",
"\n",
"[notice] A new release of pip is available: 24.0 -> 24.2\n",
@@ -269,8 +269,8 @@
"metadata": {},
"outputs": [],
"source": [
"from memgpt.data_sources.connectors import DataConnector \n",
"from memgpt.schemas.document import Document\n",
"from letta.data_sources.connectors import DataConnector \n",
"from letta.schemas.document import Document\n",
"from llama_index.core import Document as LlamaIndexDocument\n",
"from llama_index.core import SummaryIndex\n",
"from llama_index.readers.web import SimpleWebPageReader\n",
@@ -350,12 +350,12 @@
"name": "stdout",
"output_type": "stream",
"text": [
"MemGPT.memgpt.server.server - INFO - Created new agent from config: <memgpt.agent.Agent object at 0x14be2e960>\n"
"Letta.letta.server.server - INFO - Created new agent from config: <letta.agent.Agent object at 0x14be2e960>\n"
]
}
],
"source": [
"from memgpt.schemas.memory import ChatMemory\n",
"from letta.schemas.memory import ChatMemory\n",
"\n",
"wiki_persona = \"You a study assistant with a great source of knowlege \" \\\n",
"+ \"stored in archival. You should always search your archival memory \" \\\n",
@@ -380,9 +380,9 @@
"name": "stdout",
"output_type": "stream",
"text": [
"MemGPT.memgpt.server.server - INFO - Grabbing agent user_id=user-552dee3c-baaf-443a-9d23-8bb54f4af964 agent_id=agent-897ef46b-2682-4d79-be8a-3ad0250ee084 from database\n",
"MemGPT.memgpt.server.server - INFO - Creating an agent object\n",
"MemGPT.memgpt.server.server - INFO - Adding agent to the agent cache: user_id=user-552dee3c-baaf-443a-9d23-8bb54f4af964, agent_id=agent-897ef46b-2682-4d79-be8a-3ad0250ee084\n"
"Letta.letta.server.server - INFO - Grabbing agent user_id=user-552dee3c-baaf-443a-9d23-8bb54f4af964 agent_id=agent-897ef46b-2682-4d79-be8a-3ad0250ee084 from database\n",
"Letta.letta.server.server - INFO - Creating an agent object\n",
"Letta.letta.server.server - INFO - Adding agent to the agent cache: user_id=user-552dee3c-baaf-443a-9d23-8bb54f4af964, agent_id=agent-897ef46b-2682-4d79-be8a-3ad0250ee084\n"
]
},
{
@@ -432,7 +432,7 @@
"metadata": {},
"source": [
"## Connecting to external data via tools\n",
"In the last section, we went over how to store data inside of MemGPT's archival memory. However in many cases, it can be easier to simply connect a MemGPT agent to access an external data source directly via a tool. "
"In the last section, we went over how to store data inside of Letta's archival memory. However in many cases, it can be easier to simply connect a Letta agent to access an external data source directly via a tool. "
]
},
{
@@ -478,8 +478,8 @@
"id": "73de8d11-6844-4dee-b2f6-1d5bc775ab19",
"metadata": {},
"source": [
"### Adding a custom tool to MemGPT \n",
"We can access this external data via an agent by adding the function as a tool to MemGPT. "
"### Adding a custom tool to Letta \n",
"We can access this external data via an agent by adding the function as a tool to Letta. "
]
},
{
@@ -522,7 +522,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"MemGPT.memgpt.server.server - INFO - Created new agent from config: <memgpt.agent.Agent object at 0x14c0c79e0>\n"
"Letta.letta.server.server - INFO - Created new agent from config: <letta.agent.Agent object at 0x14c0c79e0>\n"
]
}
],
@@ -548,9 +548,9 @@
"name": "stdout",
"output_type": "stream",
"text": [
"MemGPT.memgpt.server.server - INFO - Grabbing agent user_id=user-552dee3c-baaf-443a-9d23-8bb54f4af964 agent_id=agent-f207e43b-2021-45be-9dde-48822c898e77 from database\n",
"MemGPT.memgpt.server.server - INFO - Creating an agent object\n",
"MemGPT.memgpt.server.server - INFO - Adding agent to the agent cache: user_id=user-552dee3c-baaf-443a-9d23-8bb54f4af964, agent_id=agent-f207e43b-2021-45be-9dde-48822c898e77\n",
"Letta.letta.server.server - INFO - Grabbing agent user_id=user-552dee3c-baaf-443a-9d23-8bb54f4af964 agent_id=agent-f207e43b-2021-45be-9dde-48822c898e77 from database\n",
"Letta.letta.server.server - INFO - Creating an agent object\n",
"Letta.letta.server.server - INFO - Adding agent to the agent cache: user_id=user-552dee3c-baaf-443a-9d23-8bb54f4af964, agent_id=agent-f207e43b-2021-45be-9dde-48822c898e77\n",
"[Message(id='message-d9b432de-2bb6-4c85-8bb9-a31067e271fc', role=<MessageRole.assistant: 'assistant'>, text=\"Let's access the birthday_db and find out Sarah's birthday.\", user_id='user-552dee3c-baaf-443a-9d23-8bb54f4af964', agent_id='agent-f207e43b-2021-45be-9dde-48822c898e77', model='gpt-4', name=None, created_at=datetime.datetime(2024, 9, 3, 22, 11, 24, 961893, tzinfo=datetime.timezone.utc), tool_calls=[ToolCall(id='cad6f053-27d7-4281-a04b-05a57', type='function', function=ToolCallFunction(name='query_birthday_db', arguments='{\\n \"name\": \"Sarah\",\\n \"request_heartbeat\": true\\n}'))], tool_call_id=None),\n",
" Message(id='message-f27fd0a8-be72-457c-8b3c-849818aeec4d', role=<MessageRole.tool: 'tool'>, text='{\\n \"status\": \"OK\",\\n \"message\": \"03-06-1997\",\\n \"time\": \"2024-09-03 03:11:24 PM PDT-0700\"\\n}', user_id='user-552dee3c-baaf-443a-9d23-8bb54f4af964', agent_id='agent-f207e43b-2021-45be-9dde-48822c898e77', model='gpt-4', name='query_birthday_db', created_at=datetime.datetime(2024, 9, 3, 22, 11, 24, 962306, tzinfo=datetime.timezone.utc), tool_calls=None, tool_call_id='cad6f053-27d7-4281-a04b-05a57'),\n",
" Message(id='message-7423c90e-822f-40ac-aff9-8791a360dd31', role=<MessageRole.assistant: 'assistant'>, text=\"I found the information. Now, let's communicate this back to Sarah in a friendly and human-like manner.\", user_id='user-552dee3c-baaf-443a-9d23-8bb54f4af964', agent_id='agent-f207e43b-2021-45be-9dde-48822c898e77', model='gpt-4', name=None, created_at=datetime.datetime(2024, 9, 3, 22, 11, 29, 400783, tzinfo=datetime.timezone.utc), tool_calls=[ToolCall(id='1abfa1e3-a266-48a3-8773-d6087', type='function', function=ToolCallFunction(name='send_message', arguments='{\\n \"message\": \"Hello Sarah, your birthday is on the 6th of March, 1997. Isn\\'t it wonderful to celebrate another year of life?\"\\n}'))], tool_call_id=None),\n",
@@ -570,9 +570,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "memgpt",
"display_name": "letta",
"language": "python",
"name": "memgpt"
"name": "letta"
},
"language_info": {
"codemirror_mode": {


@@ -7,8 +7,8 @@
"metadata": {},
"outputs": [],
"source": [
"from memgpt import create_client, Admin\n",
"from memgpt.client.client import LocalClient, RESTClient "
"from letta import create_client, Admin\n",
"from letta.client.client import LocalClient, RESTClient "
]
},
{
@@ -70,7 +70,7 @@
"metadata": {},
"outputs": [],
"source": [
"admin_client = Admin(base_url=\"http://localhost:8283\", token=\"memgptadmin\")"
"admin_client = Admin(base_url=\"http://localhost:8283\", token=\"lettaadmin\")"
]
},
{
@@ -104,7 +104,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"MemGPT.memgpt.server.server - INFO - Created new agent from config: <memgpt.agent.Agent object at 0x14e542600>\n"
"Letta.letta.server.server - INFO - Created new agent from config: <letta.agent.Agent object at 0x14e542600>\n"
]
}
],
@@ -151,7 +151,7 @@
" resume (str): The text representation of the candidate's resume \n",
" justification (str): Summary reason for why the candidate is good and should be reached out to\n",
" \"\"\"\n",
" from memgpt import create_client \n",
" from letta import create_client \n",
" client = create_client()\n",
" message = \"Reach out to the following candidate. \" \\\n",
" + f\"Name: {candidate_name}\\n\" \\\n",
@@ -178,12 +178,12 @@
"name": "stdout",
"output_type": "stream",
"text": [
"MemGPT.memgpt.server.server - INFO - Created new agent from config: <memgpt.agent.Agent object at 0x14e542600>\n"
"Letta.letta.server.server - INFO - Created new agent from config: <letta.agent.Agent object at 0x14e542600>\n"
]
}
],
"source": [
"from memgpt.memory import ChatMemory\n",
"from letta.memory import ChatMemory\n",
"\n",
"company_description = \"The company is called AgentOS and is building AI tools to make it easier to create and deploy LLM agents.\"\n",
"skills = \"Front-end (React, Typescript), software engineering (ideally Python), and experience with LLMs.\"\n",
@@ -253,9 +253,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "memgpt",
"display_name": "letta",
"language": "python",
"name": "memgpt"
"name": "letta"
},
"language_info": {
"codemirror_mode": {


@@ -1,32 +1,32 @@
from openai import OpenAI
"""
This script provides an example of how you can use OpenAI's python client with a MemGPT server.
This script provides an example of how you can use OpenAI's python client with a Letta server.
Before running this example, make sure you start the OpenAI-compatible REST server with `memgpt server`.
Before running this example, make sure you start the OpenAI-compatible REST server with `letta server`.
"""
def main():
client = OpenAI(base_url="http://localhost:8283/v1")
# create assistant (creates a memgpt preset)
# create assistant (creates a letta preset)
assistant = client.beta.assistants.create(
name="Math Tutor",
instructions="You are a personal math tutor. Write and run code to answer math questions.",
model="gpt-4-turbo-preview",
)
# create thread (creates a memgpt agent)
# create thread (creates a letta agent)
thread = client.beta.threads.create()
# create a message (appends a message to the memgpt agent)
# create a message (appends a message to the letta agent)
message = client.beta.threads.messages.create(
thread_id=thread.id, role="user", content="I need to solve the equation `3x + 11 = 14`. Can you help me?"
)
# create a run (executes the agent on the messages)
# NOTE: MemGPT does not support polling yet, so run status is always "completed"
# NOTE: Letta does not support polling yet, so run status is always "completed"
run = client.beta.threads.runs.create(
thread_id=thread.id, assistant_id=assistant.id, instructions="Please address the user as Jane Doe. The user has a premium account."
)
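
The script stops after creating the run; reading the agent's reply back is not shown. A hedged sketch of that last step (the helper name `read_thread` is illustrative, not part of the original example, and it assumes the `letta server` endpoint above is running):

```python
from typing import List, Tuple

def read_thread(base_url: str, thread_id: str) -> List[Tuple[str, object]]:
    """Fetch the (role, content) pairs for every message in a thread.

    Sketch under assumptions: expects an OpenAI-compatible Letta server
    listening at ``base_url`` (e.g. "http://localhost:8283/v1").
    """
    from openai import OpenAI  # deferred import; requires `pip install openai`

    client = OpenAI(base_url=base_url)
    page = client.beta.threads.messages.list(thread_id=thread_id)
    return [(m.role, m.content) for m in page.data]
```

Calling `read_thread("http://localhost:8283/v1", thread.id)` after the run would return the conversation so far, including the assistant's answer.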


@@ -26,7 +26,7 @@ See https://developers.google.com/gmail/api/quickstart/python
### Setup authentication for Google Calendar
Copy the credentials file to `~/.memgpt/google_api_credentials.json`. Then, run the initial setup script that will take you to a login page:
Copy the credentials file to `~/.letta/google_api_credentials.json`. Then, run the initial setup script that will take you to a login page:
```sh
python examples/personal_assistant_demo/google_calendar_test_setup.py
```
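
For orientation, the setup script boils down to a standard installed-app OAuth flow. A rough, hypothetical sketch (the function name, paths, and scope here are assumptions; see `google_calendar_test_setup.py` for the real version):

```python
def bootstrap_google_credentials(credentials_path: str, token_path: str) -> None:
    """Run the one-time browser login and cache the resulting token.

    Sketch under assumptions: requires `pip install google-auth-oauthlib`.
    """
    from google_auth_oauthlib.flow import InstalledAppFlow  # deferred import

    flow = InstalledAppFlow.from_client_secrets_file(
        credentials_path,
        scopes=["https://www.googleapis.com/auth/calendar"],
    )
    creds = flow.run_local_server(port=0)  # opens the login page in a browser
    with open(token_path, "w") as f:
        f.write(creds.to_json())  # cached so later runs skip the login
```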
@@ -67,7 +67,7 @@ export TWILIO_TO_NUMBER=...
## Create a custom user
In the demo we'll show how MemGPT can programatically update its knowledge about you:
In the demo, we'll show how Letta can programmatically update its knowledge about you:
```
This is what I know so far about the user, I should expand this as I learn more about them.
@@ -83,7 +83,7 @@ Notes about their preferred communication style + working habits:
```
```sh
memgpt add human -f examples/personal_assistant_demo/charles.txt --name charles
letta add human -f examples/personal_assistant_demo/charles.txt --name charles
```
## Linking the functions
@@ -91,8 +91,8 @@ memgpt add human -f examples/personal_assistant_demo/charles.txt --name charles
The preset (shown below) and functions are provided for you, so you just need to copy/link them.
```sh
cp examples/personal_assistant_demo/google_calendar.py ~/.memgpt/functions/
cp examples/personal_assistant_demo/twilio_messaging.py ~/.memgpt/functions/
cp examples/personal_assistant_demo/google_calendar.py ~/.letta/functions/
cp examples/personal_assistant_demo/twilio_messaging.py ~/.letta/functions/
```
(or use the dev portal)
@@ -115,7 +115,7 @@ functions:
```
```sh
memgpt add preset -f examples/personal_assistant_demo/personal_assistant_preset.yaml --name pa_preset
letta add preset -f examples/personal_assistant_demo/personal_assistant_preset.yaml --name pa_preset
```
## Creating an agent with the preset
@@ -123,7 +123,7 @@ memgpt add preset -f examples/personal_assistant_demo/personal_assistant_preset.
Now we should be able to create an agent with the preset. Make sure to record the `agent_id`:
```sh
memgpt run --preset pa_preset --persona sam_pov --human charles --stream
letta run --preset pa_preset --persona sam_pov --human charles --stream
```
```
? Would you like to select an existing agent? No
@@ -133,7 +133,7 @@ memgpt run --preset pa_preset --persona sam_pov --human charles --stream
-> 🧑 Using human profile: 'basic'
🎉 Created new agent 'DelicateGiraffe' (id=4c4e97c9-ad8e-4065-b716-838e5d6f7f7b)
Hit enter to begin (will request first MemGPT message)
Hit enter to begin (will request first Letta message)
💭 Unprecedented event, Charles logged into the system for the first time. Warm welcome would set a positive
@@ -147,20 +147,20 @@ AGENT_ID="4c4e97c9-ad8e-4065-b716-838e5d6f7f7b"
# Running the agent with Gmail + SMS listeners
The MemGPT agent can send outbound SMS messages and schedule events with the new tools `send_text_message` and `schedule_event`, but we also want messages to be sent to the agent when:
The Letta agent can send outbound SMS messages and schedule events with the new tools `send_text_message` and `schedule_event`, but we also want messages to be sent to the agent when:
1. A new email arrives in our inbox
2. An SMS is sent to the phone number used by the agent
## Running the Gmail listener
Start the Gmail listener (this will send "new email" updates to the MemGPT server when a new email arrives):
Start the Gmail listener (this will send "new email" updates to the Letta server when a new email arrives):
```sh
python examples/personal_assistant_demo/gmail_polling_listener.py $AGENT_ID
```
## Running the Twilio listener
Start the Python Flask server (this will send "new SMS" updates to the MemGPT server when a new SMS arrives):
Start the Python Flask server (this will send "new SMS" updates to the Letta server when a new SMS arrives):
```sh
python examples/personal_assistant_demo/twilio_flask_listener.py $AGENT_ID
```
@@ -171,25 +171,25 @@ Run `ngrok` to expose your local Flask server to a public IP (Twilio will POST t
ngrok http 8284
```
## Run the MemGPT server
## Run the Letta server
Run the MemGPT server to turn on the agent service:
Run the Letta server to turn on the agent service:
```sh
memgpt server --debug
letta server --debug
```
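Both the Gmail and Twilio listeners forward updates by POSTing to the server's per-agent messages route (`/api/agents/<agent_id>/messages`, as used in the listener scripts). A minimal sketch of constructing that URL, using the port from the `ngrok` step above; the helper name is illustrative, not part of the API:

```python
def build_messages_url(server_url: str, agent_id: str) -> str:
    # Mirrors the route used by the Gmail/Twilio listeners:
    # {server_url}/api/agents/{agent_id}/messages
    return f"{server_url}/api/agents/{agent_id}/messages"

url = build_messages_url("http://localhost:8284", "my-agent-id")
print(url)  # → http://localhost:8284/api/agents/my-agent-id/messages
```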
# Example interaction
In the CLI:
```
% memgpt run --preset pa_preset --persona pa_persona_strict --human charles --stream --agent personalassistant
% letta run --preset pa_preset --persona pa_persona_strict --human charles --stream --agent personalassistant
🧬 Creating new agent...
-> 🤖 Using persona profile: 'sam_pov'
-> 🧑 Using human profile: 'basic'
🎉 Created new agent 'personalassistant' (id=8271f819-d470-435b-9689-476380aefd27)
Hit enter to begin (will request first MemGPT message)
Hit enter to begin (will request first Letta message)
@@ -261,7 +261,7 @@ soon! 🙌",
Then inside WhatsApp (or SMS if you used Twilio SMS):
<img width="580" alt="image" src="https://github.com/cpacker/MemGPT/assets/5475622/02455f97-53b2-4c1e-9416-58e6c5a1448d">
<img width="580" alt="image" src="https://github.com/cpacker/Letta/assets/5475622/02455f97-53b2-4c1e-9416-58e6c5a1448d">
Then I sent a dummy email:
```
@@ -276,4 +276,4 @@ whatever time works best for you
Follow-up inside WhatsApp:
<img width="587" alt="image" src="https://github.com/cpacker/MemGPT/assets/5475622/d1060c94-9b84-49d6-944e-fd1965f83fbc">
<img width="587" alt="image" src="https://github.com/cpacker/Letta/assets/5475622/d1060c94-9b84-49d6-944e-fd1965f83fbc">


@@ -9,8 +9,8 @@ from googleapiclient.errors import HttpError
# If modifying these scopes, delete the file token.json.
SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]
TOKEN_PATH = os.path.expanduser("~/.memgpt/gmail_token.json")
CREDENTIALS_PATH = os.path.expanduser("~/.memgpt/google_api_credentials.json")
TOKEN_PATH = os.path.expanduser("~/.letta/gmail_token.json")
CREDENTIALS_PATH = os.path.expanduser("~/.letta/google_api_credentials.json")
def main():


@@ -13,8 +13,8 @@ from googleapiclient.errors import HttpError
# If modifying these scopes, delete the file token.json.
SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]
TOKEN_PATH = os.path.expanduser("~/.memgpt/gmail_token.json")
CREDENTIALS_PATH = os.path.expanduser("~/.memgpt/google_api_credentials.json")
TOKEN_PATH = os.path.expanduser("~/.letta/gmail_token.json")
CREDENTIALS_PATH = os.path.expanduser("~/.letta/google_api_credentials.json")
DELAY = 1
@@ -25,8 +25,8 @@ MEMGPT_AGENT_ID = sys.argv[1] if len(sys.argv) > 1 else None
assert MEMGPT_AGENT_ID, f"Missing agent ID (pass as arg)"
def route_reply_to_memgpt_api(message):
# send a POST request to a MemGPT server
def route_reply_to_letta_api(message):
# send a POST request to a Letta server
url = f"{MEMGPT_SERVER_URL}/api/agents/{MEMGPT_AGENT_ID}/messages"
headers = {
@@ -131,7 +131,7 @@ def main():
# print("ignoring")
# else:
print(msg_str)
route_reply_to_memgpt_api(msg_str)
route_reply_to_letta_api(msg_str)
time.sleep(DELAY) # Wait for N seconds before checking again
except HttpError as error:


@@ -17,8 +17,8 @@ from googleapiclient.errors import HttpError
# If modifying these scopes, delete the file token.json.
# SCOPES = ["https://www.googleapis.com/auth/calendar.readonly"]
SCOPES = ["https://www.googleapis.com/auth/calendar"]
TOKEN_PATH = os.path.expanduser("~/.memgpt/gcal_token.json")
CREDENTIALS_PATH = os.path.expanduser("~/.memgpt/google_api_credentials.json")
TOKEN_PATH = os.path.expanduser("~/.letta/gcal_token.json")
CREDENTIALS_PATH = os.path.expanduser("~/.letta/google_api_credentials.json")
def schedule_event(


@@ -11,8 +11,8 @@ from googleapiclient.errors import HttpError
# SCOPES = ["https://www.googleapis.com/auth/calendar.readonly"]
SCOPES = ["https://www.googleapis.com/auth/calendar"]
TOKEN_PATH = os.path.expanduser("~/.memgpt/gcal_token.json")
CREDENTIALS_PATH = os.path.expanduser("~/.memgpt/google_api_credentials.json")
TOKEN_PATH = os.path.expanduser("~/.letta/gcal_token.json")
CREDENTIALS_PATH = os.path.expanduser("~/.letta/google_api_credentials.json")
def main():


@@ -25,8 +25,8 @@ def test():
return "Headers received. Check your console."
def route_reply_to_memgpt_api(message):
# send a POST request to a MemGPT server
def route_reply_to_letta_api(message):
# send a POST request to a Letta server
url = f"{MEMGPT_SERVER_URL}/api/agents/{MEMGPT_AGENT_ID}/messages"
headers = {
@@ -58,7 +58,7 @@ def sms_reply():
msg_str = f"New message from {from_number}: {message_body}"
print(msg_str)
route_reply_to_memgpt_api(msg_str)
route_reply_to_letta_api(msg_str)
return str("status = OK")
# Start our response


@@ -1,4 +1,4 @@
# Sending emails with MemGPT using [Resend](https://resend.com/emails)
# Sending emails with Letta using [Resend](https://resend.com/emails)
Thank you to @ykhli for the suggestion and initial tool call code!
@@ -34,7 +34,7 @@ def send_email(self, description: str):
data = {
"from": "onboarding@resend.dev",
"to": RESEND_TARGET_EMAIL_ADDRESS,
"subject": "MemGPT message:",
"subject": "Letta message:",
"html": f"<strong>{description}</strong>",
}
@@ -51,27 +51,27 @@ def send_email(self, description: str):
To create the tool in the dev portal, simply navigate to the tool creator tab, create a new tool called `send_email`, and copy-paste the above code into the code block area and press "Create Tool".
<img width="500" alt="image" src="https://github.com/cpacker/MemGPT/assets/5475622/a21fce95-02a8-4aa8-89f5-3c520b6ff75e">
<img width="500" alt="image" src="https://github.com/cpacker/Letta/assets/5475622/a21fce95-02a8-4aa8-89f5-3c520b6ff75e">
Once you've created the tool, create a new agent and make sure to select `send_email` as an enabled tool.
<img width="500" alt="image" src="https://github.com/cpacker/MemGPT/assets/5475622/124e2260-d435-465d-8971-e8ca5265d1bd">
<img width="500" alt="image" src="https://github.com/cpacker/Letta/assets/5475622/124e2260-d435-465d-8971-e8ca5265d1bd">
Now your agent should be able to call the `send_email` function when needed:
<img width="500" alt="image" src="https://github.com/cpacker/MemGPT/assets/5475622/fdd2de45-13f7-4b8f-84a3-de92e5d2bd17">
<img width="500" alt="image" src="https://github.com/cpacker/Letta/assets/5475622/fdd2de45-13f7-4b8f-84a3-de92e5d2bd17">
## Option 2 (CLI)
Copy the custom function into the functions directory:
```sh
# If you use the *_env_vars version of the function, you will need to define `RESEND_API_KEY` and `RESEND_TARGET_EMAIL_ADDRESS` in your environment variables
cp examples/resend_example/resend_send_email_env_vars.py ~/.memgpt/functions/
cp examples/resend_example/resend_send_email_env_vars.py ~/.letta/functions/
```
Create a preset that has access to that function:
```sh
memgpt add preset -f examples/resend_example/resend_preset.yaml --name resend_preset
letta add preset -f examples/resend_example/resend_preset.yaml --name resend_preset
```
Make sure we set the env vars:
@@ -82,11 +82,11 @@ export RESEND_TARGET_EMAIL_ADDRESS="YOUR_EMAIL@gmail.com"
Create an agent with that preset (disable `--stream` if you're not using a streaming-compatible backend):
```sh
memgpt run --preset resend_preset --persona sam_pov --human cs_phd --stream
letta run --preset resend_preset --persona sam_pov --human cs_phd --stream
```
<img width="500" alt="image" src="https://github.com/cpacker/MemGPT/assets/5475622/61958527-20e7-461d-a6d2-a53f06493683">
<img width="500" alt="image" src="https://github.com/cpacker/Letta/assets/5475622/61958527-20e7-461d-a6d2-a53f06493683">
Waiting in our inbox:
<img width="500" alt="image" src="https://github.com/cpacker/MemGPT/assets/5475622/95f9b24a-98c3-493a-a787-72a2a956641a">
<img width="500" alt="image" src="https://github.com/cpacker/Letta/assets/5475622/95f9b24a-98c3-493a-a787-72a2a956641a">


@@ -30,7 +30,7 @@ def send_email(self, description: str):
data = {
"from": "onboarding@resend.dev",
"to": RESEND_TARGET_EMAIL_ADDRESS,
"subject": "MemGPT message:",
"subject": "Letta message:",
"html": f"<strong>{description}</strong>",
}


@@ -5,10 +5,10 @@
"id": "c015b59e-1187-4d45-b2af-7b4c5a9512e1",
"metadata": {},
"source": [
"# MemGPT Python Client \n",
"Welcome to the MemGPT tutorial! In this tutorial, we'll go through how to create a basic user-client for MemGPT and create a custom agent with long term memory. \n",
"# Letta Python Client \n",
"Welcome to the Letta tutorial! In this tutorial, we'll go through how to create a basic user-client for Letta and create a custom agent with long term memory. \n",
"\n",
"MemGPT runs *agents-as-a-service*, so agents can run independently on a server. For this tutorial, we will run a local version of the client which does not require a server, but still allows you to see some of MemGPT's capabilities. "
"Letta runs *agents-as-a-service*, so agents can run independently on a server. For this tutorial, we will run a local version of the client which does not require a server, but still allows you to see some of Letta's capabilities. "
]
},
{
@@ -18,7 +18,7 @@
"metadata": {},
"outputs": [],
"source": [
"!pip install git+https://github.com/cpacker/MemGPT.git@tutorials"
"!pip install git+https://github.com/cpacker/Letta.git@tutorials"
]
},
{
@@ -46,7 +46,7 @@
"id": "f20ad6c7-9066-45e0-88ac-40920c83cc39",
"metadata": {},
"source": [
"## Part 1: Connecting to the MemGPT Client \n",
"## Part 1: Connecting to the Letta Client \n",
"\n",
"We create a local client which creates a quickstart configuration for OpenAI using the provided `OPENAI_API_KEY`. "
]
@@ -58,7 +58,7 @@
"metadata": {},
"outputs": [],
"source": [
"from memgpt.client.client import LocalClient\n",
"from letta.client.client import LocalClient\n",
"\n",
"client = LocalClient(quickstart_option=\"openai\") "
]
@@ -69,7 +69,7 @@
"metadata": {},
"source": [
"## Part 2: Create an agent \n",
"We'll first start with creating a basic MemGPT agent. "
"We'll first start with creating a basic Letta agent. "
]
},
{
@@ -100,7 +100,7 @@
"metadata": {},
"outputs": [],
"source": [
"from memgpt.client.utils import pprint \n",
"from letta.client.utils import pprint \n",
"\n",
"response = client.user_message(agent_id=basic_agent.id, message=\"hello\") \n",
"pprint(response.messages)"
@@ -116,7 +116,7 @@
"* The *human* specifies the personalization information about the user interacting with the agent \n",
"* The *persona* specifies the behavior and personality of the agent\n",
"\n",
"What makes MemGPT unique is that the starting *persona* and *human* can change over time as the agent gains new information, enabling it to have evolving memory. We'll see an example of this later in the tutorial."
"What makes Letta unique is that the starting *persona* and *human* can change over time as the agent gains new information, enabling it to have evolving memory. We'll see an example of this later in the tutorial."
]
},
{
@@ -173,7 +173,7 @@
"metadata": {},
"source": [
"### Evolving memory \n",
"MemGPT agents have long term memory, and can evolve what they store in their memory over time. In the example below, we make a correction to the previously provided information. See how the agent processes this new information. "
"Letta agents have long term memory, and can evolve what they store in their memory over time. In the example below, we make a correction to the previously provided information. See how the agent processes this new information. "
]
},
{
@@ -210,16 +210,16 @@
"id": "66da949b-1084-4b87-b77c-6cbd4a822b34",
"metadata": {},
"source": [
"## 🎉 Congrats, you're done with day 1 of MemGPT! \n",
"For day 2, we'll go over how to connect *data sources* to MemGPT to run RAG agents. "
"## 🎉 Congrats, you're done with day 1 of Letta! \n",
"For day 2, we'll go over how to connect *data sources* to Letta to run RAG agents. "
]
}
],
"metadata": {
"kernelspec": {
"display_name": "memgpt",
"display_name": "letta",
"language": "python",
"name": "memgpt"
"name": "letta"
},
"language_info": {
"codemirror_mode": {


@@ -7,12 +7,12 @@
"metadata": {},
"outputs": [],
"source": [
"from memgpt import Admin \n",
"from letta import Admin \n",
"\n",
"base_url=\"memgpt.localhost\"\n",
"token=\"memgptadmin\" \n",
"base_url=\"letta.localhost\"\n",
"token=\"lettaadmin\" \n",
"\n",
"admin_client = Admin(base_url=base_url, token=\"memgptadmin\")"
"admin_client = Admin(base_url=base_url, token=\"lettaadmin\")"
]
},
{
@@ -28,9 +28,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "memgpt",
"display_name": "letta",
"language": "python",
"name": "memgpt"
"name": "letta"
},
"language_info": {
"codemirror_mode": {


@@ -6,7 +6,7 @@
"metadata": {},
"source": [
"## Part 4: Adding external data \n",
"In addition to short term, in-context memories, MemGPT agents also have a long term memory store called *archival memory*. We can enable agents to leverage external data (e.g. PDF files, database records, etc.) by inserting data into archival memory. In this example, we'll show how to load the MemGPT paper as a *source*, which defines a set of data that can be attached to agents. "
"In addition to short term, in-context memories, Letta agents also have a long term memory store called *archival memory*. We can enable agents to leverage external data (e.g. PDF files, database records, etc.) by inserting data into archival memory. In this example, we'll show how to load the Letta paper as a *source*, which defines a set of data that can be attached to agents. "
]
},
{
@@ -14,7 +14,7 @@
"id": "c61ac9c3-cbea-47a5-a6a4-4133ffe5984e",
"metadata": {},
"source": [
"We first download a PDF file, the MemGPT paper: "
"We first download a PDF file, the Letta paper: "
]
},
{
@@ -28,7 +28,7 @@
"\n",
"url = \"https://arxiv.org/pdf/2310.08560\"\n",
"response = requests.get(url)\n",
"filename = \"memgpt_paper.pdf\"\n",
"filename = \"letta_paper.pdf\"\n",
"\n",
"with open(filename, 'wb') as f:\n",
" f.write(response.content)"
@@ -39,7 +39,7 @@
"id": "bcfe3a48-cdb0-4843-9599-623753eb61b9",
"metadata": {},
"source": [
"Next, we create a MemGPT source to load data into: "
"Next, we create a Letta source to load data into: "
]
},
{
@@ -49,8 +49,8 @@
"metadata": {},
"outputs": [],
"source": [
"memgpt_paper = client.create_source(\n",
" name=\"memgpt_paper\", \n",
"letta_paper = client.create_source(\n",
" name=\"letta_paper\", \n",
")"
]
},
@@ -69,7 +69,7 @@
"metadata": {},
"outputs": [],
"source": [
"job = client.load_file_into_source(filename=filename, source_id=memgpt_paper.id)\n",
"job = client.load_file_into_source(filename=filename, source_id=letta_paper.id)\n",
"job"
]
},
@@ -89,7 +89,7 @@
"metadata": {},
"outputs": [],
"source": [
"client.attach_source_to_agent(source_id=memgpt_paper.id, agent_id=basic_agent.id)\n",
"client.attach_source_to_agent(source_id=letta_paper.id, agent_id=basic_agent.id)\n",
"# TODO: add system message saying that file has been attached \n",
"\n",
"from pprint import pprint\n",
@@ -103,9 +103,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "memgpt",
"display_name": "letta",
"language": "python",
"name": "memgpt"
"name": "letta"
},
"language_info": {
"codemirror_mode": {


@@ -5,10 +5,10 @@
"id": "6d3806ac-38f3-4999-bbed-953037bd0fd9",
"metadata": {},
"source": [
"# MemGPT Python Client \n",
"Welcome to the MemGPT tutorial! In this tutorial, we'll go through how to create a basic user-client for MemGPT and create a custom agent with long term memory. \n",
"# Letta Python Client \n",
"Welcome to the Letta tutorial! In this tutorial, we'll go through how to create a basic user-client for Letta and create a custom agent with long term memory. \n",
"\n",
"MemGPT runs *agents-as-a-service*, so agents can run independently on a server. For this tutorial, we will be connecting to an existing MemGPT server via the Python client and the UI console. If you don't have a running server, see the [documentation](https://memgpt.readme.io/docs/running-a-memgpt-server) for instructions on how to create one. "
"Letta runs *agents-as-a-service*, so agents can run independently on a server. For this tutorial, we will be connecting to an existing Letta server via the Python client and the UI console. If you don't have a running server, see the [documentation](https://letta.readme.io/docs/running-a-letta-server) for instructions on how to create one. "
]
},
{
@@ -16,7 +16,7 @@
"id": "7c0b6d6b-dbe6-412b-b129-6d7eb7d626a3",
"metadata": {},
"source": [
"## Part 0: Install MemGPT "
"## Part 0: Install Letta "
]
},
{
@@ -26,7 +26,7 @@
"metadata": {},
"outputs": [],
"source": [
"!pip install git+https://github.com/cpacker/MemGPT.git@tutorials"
"!pip install git+https://github.com/cpacker/Letta.git@tutorials"
]
},
{
@@ -34,9 +34,9 @@
"id": "a0484348-f7b2-48e3-9a2f-7d6495ef76e3",
"metadata": {},
"source": [
"## Part 1: Connecting to the MemGPT Client \n",
"## Part 1: Connecting to the Letta Client \n",
"\n",
"The MemGPT client connects to a running MemGPT service, specified by `base_url`. The client corresponds to a *single-user* (you), so it requires an authentication token to let the service know who you are. \n",
"The Letta client connects to a running Letta service, specified by `base_url`. The client corresponds to a *single-user* (you), so it requires an authentication token to let the service know who you are. \n",
"\n"
]
},
@@ -47,7 +47,7 @@
"metadata": {},
"outputs": [],
"source": [
"from memgpt import create_client\n",
"from letta import create_client\n",
"\n",
"base_url = \"http://35.238.125.250:8083\"\n",
"\n",
@@ -63,7 +63,7 @@
"metadata": {},
"source": [
"### Viewing the developer portal \n",
"MemGPT provides a portal interface for viewing and interacting with agents, data sources, tools, and more. You can enter `http://35.238.125.250:8083` into your browser to load the developer portal, and enter in `my_token` to log in. \n",
"Letta provides a portal interface for viewing and interacting with agents, data sources, tools, and more. You can enter `http://35.238.125.250:8083` into your browser to load the developer portal, and enter in `my_token` to log in. \n",
"\n",
"<img src=\"./developer_portal_login.png\" width=\"800\">"
]
@@ -74,7 +74,7 @@
"metadata": {},
"source": [
"## Part 2: Create an agent \n",
"We'll first start with creating a basic MemGPT agent. "
"We'll first start with creating a basic Letta agent. "
]
},
{
@@ -105,7 +105,7 @@
"metadata": {},
"outputs": [],
"source": [
"from memgpt.client.utils import pprint \n",
"from letta.client.utils import pprint \n",
"\n",
"response = client.user_message(agent_id=basic_agent.id, message=\"hello\") \n",
"pprint(response.messages)"
@@ -132,7 +132,7 @@
"* The *human* specifies the personalization information about the user interacting with the agent \n",
"* The *persona* specifies the behavior and personality of the agent\n",
"\n",
"What makes MemGPT unique is that the starting *persona* and *human* can change over time as the agent gains new information, enabling it to have evolving memory. We'll see an example of this later in the tutorial."
"What makes Letta unique is that the starting *persona* and *human* can change over time as the agent gains new information, enabling it to have evolving memory. We'll see an example of this later in the tutorial."
]
},
{
@@ -181,7 +181,7 @@
"metadata": {},
"source": [
"### Referencing memory \n",
"MemGPT agents can customize their responses based on what memories they have stored. Try asking a question related to the human and persona you provided. "
"Letta agents can customize their responses based on what memories they have stored. Try asking a question related to the human and persona you provided. "
]
},
{
@@ -201,7 +201,7 @@
"metadata": {},
"source": [
"### Evolving memory \n",
"MemGPT agents have long term memory, and can evolve what they store in their memory over time. In the example below, we make a correction to the previously provided information. See how the agent processes this new information. "
"Letta agents have long term memory, and can evolve what they store in their memory over time. In the example below, we make a correction to the previously provided information. See how the agent processes this new information. "
]
},
{
@@ -229,7 +229,7 @@
"metadata": {},
"source": [
"## Part 3: Adding Tools \n",
"MemGPT agents can be connected to custom tools. Currently, tools must be created by service administrators. However, you can add additional tools provided by the service administrator to the agent you create. "
"Letta agents can be connected to custom tools. Currently, tools must be created by service administrators. However, you can add additional tools provided by the service administrator to the agent you create. "
]
},
{
@@ -290,16 +290,16 @@
"id": "510675a8-22bc-4f9f-9c79-91e2ffa9caf9",
"metadata": {},
"source": [
"## 🎉 Congrats, you're done with day 1 of MemGPT! \n",
"For day 2, we'll go over how to connect *data sources* to MemGPT to run RAG agents. "
"## 🎉 Congrats, you're done with day 1 of Letta! \n",
"For day 2, we'll go over how to connect *data sources* to Letta to run RAG agents. "
]
}
],
"metadata": {
"kernelspec": {
"display_name": "memgpt",
"display_name": "letta",
"language": "python",
"name": "memgpt"
"name": "letta"
},
"language_info": {
"codemirror_mode": {