chore: remove old examples (#6255)
committed by Caren Thomas
parent ea845d379d
commit f9b3978460
@@ -1,440 +0,0 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "cac06555-9ce8-4f01-bbef-3f8407f4b54d",
   "metadata": {},
   "source": [
    "# Lab 3: Using MemGPT to build agents with memory\n",
    "This lab will go over:\n",
    "1. Creating an agent with MemGPT\n",
    "2. Understanding MemGPT agent state (messages, memories, tools)\n",
    "3. Understanding core and archival memory\n",
    "4. Building agentic RAG with MemGPT"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "aad3a8cc-d17a-4da1-b621-ecc93c9e2106",
   "metadata": {},
   "source": [
    "## Setup a Letta client\n",
    "Make sure you run `pip install letta_client` and start the Letta server with `letta quickstart`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "067e007c-02f7-4d51-9c8a-651c7d5a6499",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install letta_client\n",
    "!pip install letta\n",
    "!letta quickstart"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7ccd43f2-164b-4d25-8465-894a3bb54c4b",
   "metadata": {},
   "outputs": [],
   "source": [
    "from letta_client import CreateBlock, Letta, MessageCreate\n",
    "\n",
    "client = Letta(base_url=\"http://localhost:8283\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "65bf0dc2-d1ac-4d4c-8674-f3156eeb611d",
   "metadata": {},
   "source": [
    "## Creating a simple agent with memory\n",
    "MemGPT allows you to create persistent LLM agents that have memory. By default, MemGPT saves all state related to agents in a database, so you can also re-load an existing agent with its prior state. In this section, we'll show you how to create a MemGPT agent and understand what memories it's storing.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fe092474-6b91-4124-884d-484fc28b58e7",
   "metadata": {},
   "source": [
    "### Creating an agent"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2a9d6228-a0f5-41e6-afd7-6a05260565dc",
   "metadata": {},
   "outputs": [],
   "source": [
    "agent_name = \"simple_agent\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "62dcf31d-6f45-40f5-8373-61981f03da62",
   "metadata": {},
   "outputs": [],
   "source": [
    "agent_state = client.agents.create(\n",
    "    name=agent_name,\n",
    "    memory_blocks=[\n",
    "        CreateBlock(\n",
    "            label=\"human\",\n",
    "            value=\"My name is Sarah\",\n",
    "        ),\n",
    "        CreateBlock(\n",
    "            label=\"persona\",\n",
    "            value=\"You are a helpful assistant that loves emojis\",\n",
    "        ),\n",
    "    ],\n",
    "    model=\"openai/gpt-4o-mini\",\n",
    "    embedding=\"openai/text-embedding-3-small\",\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "31c2d5f6-626a-4666-8d0b-462db0292a7d",
   "metadata": {},
   "outputs": [],
   "source": [
    "response = client.agents.messages.create(\n",
    "    agent_id=agent_state.id,\n",
    "    messages=[\n",
    "        MessageCreate(\n",
    "            role=\"user\",\n",
    "            content=\"hello!\",\n",
    "        ),\n",
    "    ],\n",
    ")\n",
    "response"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "20a5ccf4-addd-4bdb-be80-161f7925dae0",
   "metadata": {},
   "source": [
    "Note that MemGPT agents generate a *reasoning_message* that explains their actions. You can use this monologue to understand why agents are behaving as they are.\n",
    "\n",
    "Second, MemGPT agents also use tools to communicate, so messages are sent back by calling a `send_message` tool. This makes it easy to allow agents to communicate over different mediums (e.g. text), and also allows the agent to distinguish between what is and isn't sent to the end user."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8d33eca5-b8e8-4a8f-9440-85b45c37a777",
   "metadata": {},
   "source": [
    "### Understanding agent state\n",
    "MemGPT agents are *stateful* and are defined by:\n",
    "* The system prompt defining the agent's behavior (read-only)\n",
    "* The set of *tools* they have access to\n",
    "* Their memory (core, archival, & recall)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c1cf7136-4060-441a-9d12-da851badf339",
   "metadata": {},
   "outputs": [],
   "source": [
    "print(agent_state.system)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d9e1c8c0-e98c-4952-b850-136b5b50a5ee",
   "metadata": {},
   "outputs": [],
   "source": [
    "agent_state.tools"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ae910ad9-afee-41f5-badd-a8dee5b2ad94",
   "metadata": {},
   "source": [
    "### Viewing an agent's memory"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "478a0df6-3c87-4803-9133-8a54f9c00320",
   "metadata": {},
   "outputs": [],
   "source": [
    "memory = client.agents.core_memory.retrieve(agent_id=agent_state.id)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ff2c3736-5424-4883-8fe9-73a4f598a043",
   "metadata": {},
   "outputs": [],
   "source": [
    "memory"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d6da43d6-847e-4a0a-9b92-cea2721e828a",
   "metadata": {},
   "outputs": [],
   "source": [
    "client.agents.context.retrieve(agent_id=agent_state.id)[\"num_archival_memory\"]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0399a1d6-a1f8-4796-a4c0-eb322512b0ec",
   "metadata": {},
   "outputs": [],
   "source": [
    "client.agents.context.retrieve(agent_id=agent_state.id)[\"num_recall_memory\"]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c7cce583-1f11-4f13-a6ed-52cc7f80e3c4",
   "metadata": {},
   "outputs": [],
   "source": [
    "client.agents.messages.list(agent_id=agent_state.id)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dfd0a9ae-417e-4ba0-a562-ec59cb2bbf7d",
   "metadata": {},
   "source": [
    "## Understanding core memory\n",
    "Core memory is memory that is stored *in-context* - it is included in every LLM call. What's unique about MemGPT is that this core memory is editable by the agent itself via tools. Let's see how the agent can adapt its memory to new information."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d259669c-5903-40b5-8758-93c36faa752f",
   "metadata": {},
   "source": [
    "### Memories about the human\n",
    "The `human` section of `ChatMemory` is used to remember information about the human in the conversation. As the agent learns new information about the human, it can update this part of memory to improve personalization."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "beb9b0ba-ed7c-4917-8ee5-21d201516086",
   "metadata": {},
   "outputs": [],
   "source": [
    "response = client.agents.messages.create(\n",
    "    agent_id=agent_state.id,\n",
    "    messages=[\n",
    "        MessageCreate(\n",
    "            role=\"user\",\n",
    "            content=\"My name is actually Bob\",\n",
    "        ),\n",
    "    ],\n",
    ")\n",
    "response"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "25f58968-e262-4268-86ef-1bed57e6bf33",
   "metadata": {},
   "outputs": [],
   "source": [
    "client.agents.core_memory.retrieve(agent_id=agent_state.id)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "32692ca2-b731-43a6-84de-439a08a4c0d2",
   "metadata": {},
   "source": [
    "### Memories about the agent\n",
    "The agent also records information about itself and how it behaves in the `persona` section of memory. This is important for ensuring a consistent persona over time (e.g. not making inconsistent claims, such as liking ice cream one day and hating it another). Unlike the `system_prompt`, the `persona` is editable - this means the agent can incorporate feedback to learn and improve its persona over time."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f68851c5-5666-45fd-9d2f-037ea86bfcfa",
   "metadata": {},
   "outputs": [],
   "source": [
    "response = client.agents.messages.create(\n",
    "    agent_id=agent_state.id,\n",
    "    messages=[\n",
    "        MessageCreate(\n",
    "            role=\"user\",\n",
    "            content=\"In the future, never use emojis to communicate\",\n",
    "        ),\n",
    "    ],\n",
    ")\n",
    "response"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2fc54336-d61f-446d-82ea-9dd93a011e51",
   "metadata": {},
   "outputs": [],
   "source": [
    "client.agents.core_memory.retrieve_block(agent_id=agent_state.id, block_label='persona')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "592f5d1c-cd2f-4314-973e-fcc481e6b460",
   "metadata": {},
   "source": [
    "## Understanding archival memory\n",
    "MemGPT agents store long-term memories in *archival memory*, which persists data to an external database. This gives agents additional space to write information outside of the context window (unlike core memory), which is limited in size."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "af63a013-6be3-4931-91b0-309ff2a4dc3a",
   "metadata": {},
   "outputs": [],
   "source": [
    "client.agents.archival_memory.list(agent_id=agent_state.id)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bfa52984-fe7c-4d17-900a-70a376a460f9",
   "metadata": {},
   "outputs": [],
   "source": [
    "client.agents.context.retrieve(agent_id=agent_state.id)[\"num_archival_memory\"]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a3ab0ae9-fc00-4447-8942-7dbed7a99222",
   "metadata": {},
   "source": [
    "Agents can write to their archival memory themselves when they learn information they think should be placed in long-term storage. You can also directly suggest that the agent store information in archival memory."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c6556f76-8fcb-42ff-a6d0-981685ef071c",
   "metadata": {},
   "outputs": [],
   "source": [
    "response = client.agents.messages.create(\n",
    "    agent_id=agent_state.id,\n",
    "    messages=[\n",
    "        MessageCreate(\n",
    "            role=\"user\",\n",
    "            content=\"Save the information that 'bob loves cats' to archival\",\n",
    "        ),\n",
    "    ],\n",
    ")\n",
    "response"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b4429ffa-e27a-4714-a873-84f793c08535",
   "metadata": {},
   "outputs": [],
   "source": [
    "client.agents.archival_memory.list(agent_id=agent_state.id)[0].text"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ae463e7c-0588-48ab-888c-734c783782bf",
   "metadata": {},
   "source": [
    "You can also directly insert into archival memory from the client."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f9d4194d-9ed5-40a1-b35d-a9aff3048000",
   "metadata": {},
   "outputs": [],
   "source": [
    "client.agents.archival_memory.create(\n",
    "    agent_id=agent_state.id,\n",
    "    text=\"Bob loves Boston terriers\"\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "338149f1-6671-4a0b-81d9-23d01dbe2e97",
   "metadata": {},
   "source": [
    "Now let's see how the agent uses its archival memory:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5908b10f-94db-4f5a-bb9a-1f08c74a2860",
   "metadata": {},
   "outputs": [],
   "source": [
    "response = client.agents.messages.create(\n",
    "    agent_id=agent_state.id,\n",
    "    messages=[\n",
    "        MessageCreate(\n",
    "            role=\"user\",\n",
    "            content=\"What animals do I like? Search archival.\",\n",
    "        ),\n",
    "    ],\n",
    ")\n",
    "response"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "letta-main",
   "language": "python",
   "name": "letta-main"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
@@ -1,70 +0,0 @@
from letta_client import CreateBlock, Letta, MessageCreate

from letta.prompts import gpt_system

"""
Make sure you run the Letta server before running this example.
```
letta server
```
"""

client = Letta(base_url="http://localhost:8283")

# create a new agent
agent_state = client.agents.create(
    # agent's name (unique per-user, autogenerated if not provided)
    name="agent_name",
    # in-context memory representation with human/persona blocks
    memory_blocks=[
        CreateBlock(
            label="human",
            value="Name: Sarah",
        ),
        CreateBlock(
            label="persona",
            value="You are a helpful assistant that loves emojis",
        ),
    ],
    # LLM model & endpoint configuration
    model="openai/gpt-4o-mini",
    context_window_limit=8000,
    # embedding model & endpoint configuration (cannot be changed)
    embedding="openai/text-embedding-3-small",
    # system instructions for the agent (defaults to `memgpt_chat`)
    system=gpt_system.get_system_text("memgpt_chat"),
    # whether to include base letta tools (default: True)
    include_base_tools=True,
    # list of additional tools (by ID) to add to the agent
    tool_ids=[],
)
print(f"Created agent with name {agent_state.name} and unique ID {agent_state.id}")

# message an agent as a user
response = client.agents.messages.create(
    agent_id=agent_state.id,
    messages=[
        MessageCreate(
            role="user",
            content="hello",
        )
    ],
)
print("Usage", response.usage)
print("Agent messages", response.messages)

# send a system message (non-user)
response = client.agents.messages.create(
    agent_id=agent_state.id,
    messages=[
        MessageCreate(
            role="system",
            content="[system] user has logged in. send a friendly message.",
        )
    ],
)
print("Usage", response.usage)
print("Agent messages", response.messages)

# delete the agent
client.agents.delete(agent_id=agent_state.id)
@@ -1,49 +0,0 @@
from letta_client import CreateBlock, Letta, MessageCreate

"""
Make sure you run the Letta server before running this example.
```
letta server
```
"""

client = Letta(base_url="http://localhost:8283")

# create a new agent
agent_state = client.agents.create(
    memory_blocks=[
        CreateBlock(
            label="human",
            value="Name: Sarah",
        ),
    ],
    # set automatic defaults for LLM/embedding config
    model="openai/gpt-4o-mini",
    embedding="openai/text-embedding-3-small",
)
print(f"Created agent with name {agent_state.name} and unique ID {agent_state.id}")

# Message an agent
response = client.agents.messages.create(
    agent_id=agent_state.id,
    messages=[
        MessageCreate(
            role="user",
            content="hello",
        )
    ],
)
print("Usage", response.usage)
print("Agent messages", response.messages)

# list all agents
agents = client.agents.list()

# get the agent by ID
agent_state = client.agents.retrieve(agent_id=agent_state.id)

# get the agent by name
agent_state = client.agents.list(name=agent_state.name)[0]

# delete an agent
client.agents.delete(agent_id=agent_state.id)
@@ -1,166 +0,0 @@
from letta_client import CreateBlock, Letta, MessageCreate

"""
Make sure you run the Letta server before running this example.
See: https://docs.letta.com/quickstart

If you're using Letta Cloud, replace 'base_url' with 'token'
See: https://docs.letta.com/api-reference/overview

Execute this script using `uv run python3 example.py`

This will install `letta_client` and other dependencies.
"""
client = Letta(
    base_url="http://localhost:8283",
)

agent = client.agents.create(
    memory_blocks=[
        CreateBlock(
            value="Name: Caren",
            label="human",
        ),
    ],
    model="openai/gpt-4o-mini",
    embedding="openai/text-embedding-3-small",
)

print(f"Created agent with name {agent.name}")

# Example without streaming
message_text = "What's my name?"
response = client.agents.messages.create(
    agent_id=agent.id,
    messages=[
        MessageCreate(
            role="user",
            content=message_text,
        ),
    ],
)

print(f"Sent message to agent {agent.name}: {message_text}")
print(f"Agent thoughts: {response.messages[0].reasoning}")
print(f"Agent response: {response.messages[1].content}")


def secret_message():
    """Return a secret message."""
    return "Hello world!"


tool = client.tools.upsert_from_function(
    func=secret_message,
)

client.agents.tools.attach(agent_id=agent.id, tool_id=tool.id)

print(f"Created tool {tool.name} and attached to agent {agent.name}")

message_text = "Run secret message tool and tell me what it returns"
response = client.agents.messages.create(
    agent_id=agent.id,
    messages=[
        MessageCreate(
            role="user",
            content=message_text,
        ),
    ],
)

for msg in response.messages:
    if msg.message_type == "assistant_message":
        print(msg.content)
    elif msg.message_type == "reasoning_message":
        print(msg.reasoning)
    elif msg.message_type == "tool_call_message":
        print(msg.tool_call.name)
        print(msg.tool_call.arguments)
    elif msg.message_type == "tool_return_message":
        print(msg.tool_return)

print(f"Sent message to agent {agent.name}: {message_text}")
print(f"Agent thoughts: {response.messages[0].reasoning}")
print(f"Tool call information: {response.messages[1].tool_call}")
print(f"Tool response information: {response.messages[2].status}")
print(f"Agent thoughts: {response.messages[3].reasoning}")
print(f"Agent response: {response.messages[4].content}")


# send a message to the agent (streaming steps)
message_text = "Repeat my name."
stream = client.agents.messages.create_stream(
    agent_id=agent.id,
    messages=[
        MessageCreate(
            role="user",
            content=message_text,
        ),
    ],
    # if stream_tokens is false, each "chunk" will contain a complete message piece
    # if stream_tokens is true, the chunks will be token-based (and may need to be accumulated client-side)
    stream_tokens=True,
)

# print the chunks coming back
for chunk in stream:
    if chunk.message_type == "assistant_message":
        print(chunk.content)
    elif chunk.message_type == "reasoning_message":
        print(chunk.reasoning)
    elif chunk.message_type == "tool_call_message":
        if chunk.tool_call.name:
            print(chunk.tool_call.name)
        if chunk.tool_call.arguments:
            print(chunk.tool_call.arguments)
    elif chunk.message_type == "tool_return_message":
        print(chunk.tool_return)
    elif chunk.message_type == "usage_statistics":
        print(chunk)


agent_copy = client.agents.create(
    model="openai/gpt-4o-mini",
    embedding="openai/text-embedding-3-small",
)
block = client.agents.blocks.retrieve(agent.id, block_label="human")
agent_copy = client.agents.blocks.attach(agent_copy.id, block.id)

print(f"Created agent copy with shared memory named {agent_copy.name}")

message_text = "My name isn't Caren, it's Sarah. Please update your core memory with core_memory_replace"
response = client.agents.messages.create(
    agent_id=agent_copy.id,
    messages=[
        MessageCreate(
            role="user",
            content=message_text,
        ),
    ],
)

print(f"Sent message to agent {agent_copy.name}: {message_text}")

block = client.agents.blocks.retrieve(agent_copy.id, block_label="human")
print(f"New core memory for agent {agent_copy.name}: {block.value}")

message_text = "What's my name?"
response = client.agents.messages.create(
    agent_id=agent_copy.id,
    messages=[
        MessageCreate(
            role="user",
            content=message_text,
        ),
    ],
)

print(f"Sent message to agent {agent_copy.name}: {message_text}")
print(f"Agent thoughts: {response.messages[0].reasoning}")
print(f"Agent response: {response.messages[1].content}")

client.agents.delete(agent_id=agent.id)
client.agents.delete(agent_id=agent_copy.id)

print(f"Deleted agents {agent.name} and {agent_copy.name}")
@@ -1,148 +0,0 @@
import { LettaClient } from '@letta-ai/letta-client';
import type {
  AssistantMessage,
  ReasoningMessage,
  ToolCallMessage,
  ToolReturnMessage,
} from '@letta-ai/letta-client/api/types';

/**
 * Make sure you run the Letta server before running this example.
 * See https://docs.letta.com/quickstart
 *
 * If you're using Letta Cloud, replace 'baseUrl' with 'token'
 * See https://docs.letta.com/api-reference/overview
 *
 * Execute this script using `npm run example`
 */
const client = new LettaClient({
  baseUrl: 'http://localhost:8283',
});

const agent = await client.agents.create({
  memoryBlocks: [
    {
      value: 'name: Caren',
      label: 'human',
    },
  ],
  model: 'openai/gpt-4o-mini',
  embedding: 'openai/text-embedding-3-small',
});

console.log('Created agent with name', agent.name);

let messageText = "What's my name?";
let response = await client.agents.messages.create(agent.id, {
  messages: [
    {
      role: 'user',
      content: messageText,
    },
  ],
});

console.log(`Sent message to agent ${agent.name}: ${messageText}`);
console.log(
  'Agent thoughts:',
  (response.messages[0] as ReasoningMessage).reasoning,
);
console.log(
  'Agent response:',
  (response.messages[1] as AssistantMessage).content,
);

const CUSTOM_TOOL_SOURCE_CODE = `
def secret_message():
    """Return a secret message."""
    return "Hello world!"
`.trim();

const tool = await client.tools.upsert({
  sourceCode: CUSTOM_TOOL_SOURCE_CODE,
});

await client.agents.tools.attach(agent.id, tool.id);

console.log(`Created tool ${tool.name} and attached to agent ${agent.name}`);

messageText = 'Run secret message tool and tell me what it returns';
response = await client.agents.messages.create(agent.id, {
  messages: [
    {
      role: 'user',
      content: messageText,
    },
  ],
});

console.log(`Sent message to agent ${agent.name}: ${messageText}`);
console.log(
  'Agent thoughts:',
  (response.messages[0] as ReasoningMessage).reasoning,
);
console.log(
  'Tool call information:',
  (response.messages[1] as ToolCallMessage).toolCall,
);
console.log(
  'Tool response information:',
  (response.messages[2] as ToolReturnMessage).status,
);
console.log(
  'Agent thoughts:',
  (response.messages[3] as ReasoningMessage).reasoning,
);
console.log(
  'Agent response:',
  (response.messages[4] as AssistantMessage).content,
);

let agentCopy = await client.agents.create({
  model: 'openai/gpt-4o-mini',
  embedding: 'openai/text-embedding-3-small',
});
let block = await client.agents.blocks.retrieve(agent.id, 'human');
agentCopy = await client.agents.blocks.attach(agentCopy.id, block.id);

console.log('Created agent copy with shared memory named', agentCopy.name);

messageText =
  "My name isn't Caren, it's Sarah. Please update your core memory with core_memory_replace";
response = await client.agents.messages.create(agentCopy.id, {
  messages: [
    {
      role: 'user',
      content: messageText,
    },
  ],
});

console.log(`Sent message to agent ${agentCopy.name}: ${messageText}`);

block = await client.agents.blocks.retrieve(agentCopy.id, 'human');
console.log(`New core memory for agent ${agentCopy.name}: ${block.value}`);

messageText = "What's my name?";
response = await client.agents.messages.create(agentCopy.id, {
  messages: [
    {
      role: 'user',
      content: messageText,
    },
  ],
});

console.log(`Sent message to agent ${agentCopy.name}: ${messageText}`);
console.log(
  'Agent thoughts:',
  (response.messages[0] as ReasoningMessage).reasoning,
);
console.log(
  'Agent response:',
  (response.messages[1] as AssistantMessage).content,
);

await client.agents.delete(agent.id);
await client.agents.delete(agentCopy.id);

console.log(`Deleted agents ${agent.name} and ${agentCopy.name}`);
806 examples/docs/node/package-lock.json (generated)
@@ -1,806 +0,0 @@
{
  "name": "@letta-ai/core",
  "version": "0.1.0",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "@letta-ai/core",
      "version": "0.1.0",
      "dependencies": {
        "@letta-ai/letta-client": "^0.1.131"
      },
      "devDependencies": {
        "@types/node": "^22.12.0",
        "ts-node": "^10.9.2",
        "typescript": "^5.7.3"
      }
    },
    "node_modules/@cspotcode/source-map-support": {
      "version": "0.8.1",
      "resolved": "https://registry.npmjs.org/@cspotcode/source-map-support/-/source-map-support-0.8.1.tgz",
      "integrity": "sha512-IchNf6dN4tHoMFIn/7OE8LWZ19Y6q/67Bmf6vnGREv8RSbBVb9LPJxEcnwrcwX6ixSvaiGoomAUvu4YSxXrVgw==",
      "dev": true,
      "license": "MIT",
      "dependencies": {
        "@jridgewell/trace-mapping": "0.3.9"
      },
      "engines": {
        "node": ">=12"
      }
    },
    "node_modules/@jridgewell/resolve-uri": {
      "version": "3.1.2",
      "resolved": "https://registry.npmjs.org/@jridgewell/resolve-uri/-/resolve-uri-3.1.2.tgz",
      "integrity": "sha512-bRISgCIjP20/tbWSPWMEi54QVPRZExkuD9lJL+UIxUKtwVJA8wW1Trb1jMs1RFXo1CBTNZ/5hpC9QvmKWdopKw==",
      "dev": true,
      "license": "MIT",
      "engines": {
        "node": ">=6.0.0"
      }
    },
    "node_modules/@jridgewell/sourcemap-codec": {
      "version": "1.5.0",
      "resolved": "https://registry.npmjs.org/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.5.0.tgz",
      "integrity": "sha512-gv3ZRaISU3fjPAgNsriBRqGWQL6quFx04YMPW/zD8XMLsU32mhCCbfbO6KZFLjvYpCZ8zyDEgqsgf+PwPaM7GQ==",
      "dev": true,
      "license": "MIT"
    },
    "node_modules/@jridgewell/trace-mapping": {
      "version": "0.3.9",
      "resolved": "https://registry.npmjs.org/@jridgewell/trace-mapping/-/trace-mapping-0.3.9.tgz",
      "integrity": "sha512-3Belt6tdc8bPgAtbcmdtNJlirVoTmEb5e2gC94PnkwEW9jI6CAHUeoG85tjWP5WquqfavoMtMwiG4P926ZKKuQ==",
      "dev": true,
      "license": "MIT",
      "dependencies": {
        "@jridgewell/resolve-uri": "^3.0.3",
        "@jridgewell/sourcemap-codec": "^1.4.10"
      }
    },
    "node_modules/@letta-ai/letta-client": {
      "version": "0.1.131",
      "resolved": "https://registry.npmjs.org/@letta-ai/letta-client/-/letta-client-0.1.131.tgz",
      "integrity": "sha512-Kk7iJxGQT5mZ6F1kmbXyYKhXtmHkVOqF/FF3DbADKwQthl9zMHIo1BBl5DTZ7ezICrmpfE9q5aUcdJnCLAgBuQ==",
|
|
||||||
"dependencies": {
|
|
||||||
"dedent": "^1.0.0",
|
|
||||||
"form-data": "^4.0.0",
|
|
||||||
"form-data-encoder": "^4.0.2",
|
|
||||||
"formdata-node": "^6.0.3",
|
|
||||||
"node-fetch": "^2.7.0",
|
|
||||||
"qs": "^6.13.1",
|
|
||||||
"readable-stream": "^4.5.2",
|
|
||||||
"url-join": "4.0.1"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/@tsconfig/node10": {
|
|
||||||
"version": "1.0.11",
|
|
||||||
"resolved": "https://registry.npmjs.org/@tsconfig/node10/-/node10-1.0.11.tgz",
|
|
||||||
"integrity": "sha512-DcRjDCujK/kCk/cUe8Xz8ZSpm8mS3mNNpta+jGCA6USEDfktlNvm1+IuZ9eTcDbNk41BHwpHHeW+N1lKCz4zOw==",
|
|
||||||
"dev": true,
|
|
||||||
"license": "MIT"
|
|
||||||
},
|
|
||||||
"node_modules/@tsconfig/node12": {
|
|
||||||
"version": "1.0.11",
|
|
||||||
"resolved": "https://registry.npmjs.org/@tsconfig/node12/-/node12-1.0.11.tgz",
|
|
||||||
"integrity": "sha512-cqefuRsh12pWyGsIoBKJA9luFu3mRxCA+ORZvA4ktLSzIuCUtWVxGIuXigEwO5/ywWFMZ2QEGKWvkZG1zDMTag==",
|
|
||||||
"dev": true,
|
|
||||||
"license": "MIT"
|
|
||||||
},
|
|
||||||
"node_modules/@tsconfig/node14": {
|
|
||||||
"version": "1.0.3",
|
|
||||||
"resolved": "https://registry.npmjs.org/@tsconfig/node14/-/node14-1.0.3.tgz",
|
|
||||||
"integrity": "sha512-ysT8mhdixWK6Hw3i1V2AeRqZ5WfXg1G43mqoYlM2nc6388Fq5jcXyr5mRsqViLx/GJYdoL0bfXD8nmF+Zn/Iow==",
|
|
||||||
"dev": true,
|
|
||||||
"license": "MIT"
|
|
||||||
},
|
|
||||||
"node_modules/@tsconfig/node16": {
|
|
||||||
"version": "1.0.4",
|
|
||||||
"resolved": "https://registry.npmjs.org/@tsconfig/node16/-/node16-1.0.4.tgz",
|
|
||||||
"integrity": "sha512-vxhUy4J8lyeyinH7Azl1pdd43GJhZH/tP2weN8TntQblOY+A0XbT8DJk1/oCPuOOyg/Ja757rG0CgHcWC8OfMA==",
|
|
||||||
"dev": true,
|
|
||||||
"license": "MIT"
|
|
||||||
},
|
|
||||||
"node_modules/@types/node": {
|
|
||||||
"version": "22.12.0",
|
|
||||||
"resolved": "https://registry.npmjs.org/@types/node/-/node-22.12.0.tgz",
|
|
||||||
"integrity": "sha512-Fll2FZ1riMjNmlmJOdAyY5pUbkftXslB5DgEzlIuNaiWhXd00FhWxVC/r4yV/4wBb9JfImTu+jiSvXTkJ7F/gA==",
|
|
||||||
"dev": true,
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"undici-types": "~6.20.0"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/@types/node/node_modules/undici-types": {
|
|
||||||
"version": "6.20.0",
|
|
||||||
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.20.0.tgz",
|
|
||||||
"integrity": "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg==",
|
|
||||||
"dev": true,
|
|
||||||
"license": "MIT"
|
|
||||||
},
|
|
||||||
"node_modules/abort-controller": {
|
|
||||||
"version": "3.0.0",
|
|
||||||
"resolved": "https://registry.npmjs.org/abort-controller/-/abort-controller-3.0.0.tgz",
|
|
||||||
"integrity": "sha512-h8lQ8tacZYnR3vNQTgibj+tODHI5/+l06Au2Pcriv/Gmet0eaj4TwWH41sO9wnHDiQsEj19q0drzdWdeAHtweg==",
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"event-target-shim": "^5.0.0"
|
|
||||||
},
|
|
||||||
"engines": {
|
|
||||||
"node": ">=6.5"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/acorn": {
|
|
||||||
"version": "8.14.0",
|
|
||||||
"resolved": "https://registry.npmjs.org/acorn/-/acorn-8.14.0.tgz",
|
|
||||||
"integrity": "sha512-cl669nCJTZBsL97OF4kUQm5g5hC2uihk0NxY3WENAC0TYdILVkAyHymAntgxGkl7K+t0cXIrH5siy5S4XkFycA==",
|
|
||||||
"dev": true,
|
|
||||||
"license": "MIT",
|
|
||||||
"bin": {
|
|
||||||
"acorn": "bin/acorn"
|
|
||||||
},
|
|
||||||
"engines": {
|
|
||||||
"node": ">=0.4.0"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/acorn-walk": {
|
|
||||||
"version": "8.3.4",
|
|
||||||
"resolved": "https://registry.npmjs.org/acorn-walk/-/acorn-walk-8.3.4.tgz",
|
|
||||||
"integrity": "sha512-ueEepnujpqee2o5aIYnvHU6C0A42MNdsIDeqy5BydrkuC5R1ZuUFnm27EeFJGoEHJQgn3uleRvmTXaJgfXbt4g==",
|
|
||||||
"dev": true,
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"acorn": "^8.11.0"
|
|
||||||
},
|
|
||||||
"engines": {
|
|
||||||
"node": ">=0.4.0"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/arg": {
|
|
||||||
"version": "4.1.3",
|
|
||||||
"resolved": "https://registry.npmjs.org/arg/-/arg-4.1.3.tgz",
|
|
||||||
"integrity": "sha512-58S9QDqG0Xx27YwPSt9fJxivjYl432YCwfDMfZ+71RAqUrZef7LrKQZ3LHLOwCS4FLNBplP533Zx895SeOCHvA==",
|
|
||||||
"dev": true,
|
|
||||||
"license": "MIT"
|
|
||||||
},
|
|
||||||
"node_modules/asynckit": {
|
|
||||||
"version": "0.4.0",
|
|
||||||
"resolved": "https://registry.npmjs.org/asynckit/-/asynckit-0.4.0.tgz",
|
|
||||||
"integrity": "sha512-Oei9OH4tRh0YqU3GxhX79dM/mwVgvbZJaSNaRk+bshkj0S5cfHcgYakreBjrHwatXKbz+IoIdYLxrKim2MjW0Q==",
|
|
||||||
"license": "MIT"
|
|
||||||
},
|
|
||||||
"node_modules/base64-js": {
|
|
||||||
"version": "1.5.1",
|
|
||||||
"resolved": "https://registry.npmjs.org/base64-js/-/base64-js-1.5.1.tgz",
|
|
||||||
"integrity": "sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA==",
|
|
||||||
"funding": [
|
|
||||||
{
|
|
||||||
"type": "github",
|
|
||||||
"url": "https://github.com/sponsors/feross"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"type": "patreon",
|
|
||||||
"url": "https://www.patreon.com/feross"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"type": "consulting",
|
|
||||||
"url": "https://feross.org/support"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"license": "MIT"
|
|
||||||
},
|
|
||||||
"node_modules/buffer": {
|
|
||||||
"version": "6.0.3",
|
|
||||||
"resolved": "https://registry.npmjs.org/buffer/-/buffer-6.0.3.tgz",
|
|
||||||
"integrity": "sha512-FTiCpNxtwiZZHEZbcbTIcZjERVICn9yq/pDFkTl95/AxzD1naBctN7YO68riM/gLSDY7sdrMby8hofADYuuqOA==",
|
|
||||||
"funding": [
|
|
||||||
{
|
|
||||||
"type": "github",
|
|
||||||
"url": "https://github.com/sponsors/feross"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"type": "patreon",
|
|
||||||
"url": "https://www.patreon.com/feross"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"type": "consulting",
|
|
||||||
"url": "https://feross.org/support"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"base64-js": "^1.3.1",
|
|
||||||
"ieee754": "^1.2.1"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/call-bind-apply-helpers": {
|
|
||||||
"version": "1.0.1",
|
|
||||||
"resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.1.tgz",
|
|
||||||
"integrity": "sha512-BhYE+WDaywFg2TBWYNXAE+8B1ATnThNBqXHP5nQu0jWJdVvY2hvkpyB3qOmtmDePiS5/BDQ8wASEWGMWRG148g==",
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"es-errors": "^1.3.0",
|
|
||||||
"function-bind": "^1.1.2"
|
|
||||||
},
|
|
||||||
"engines": {
|
|
||||||
"node": ">= 0.4"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/call-bound": {
|
|
||||||
"version": "1.0.3",
|
|
||||||
"resolved": "https://registry.npmjs.org/call-bound/-/call-bound-1.0.3.tgz",
|
|
||||||
"integrity": "sha512-YTd+6wGlNlPxSuri7Y6X8tY2dmm12UMH66RpKMhiX6rsk5wXXnYgbUcOt8kiS31/AjfoTOvCsE+w8nZQLQnzHA==",
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"call-bind-apply-helpers": "^1.0.1",
|
|
||||||
"get-intrinsic": "^1.2.6"
|
|
||||||
},
|
|
||||||
"engines": {
|
|
||||||
"node": ">= 0.4"
|
|
||||||
},
|
|
||||||
"funding": {
|
|
||||||
"url": "https://github.com/sponsors/ljharb"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/combined-stream": {
|
|
||||||
"version": "1.0.8",
|
|
||||||
"resolved": "https://registry.npmjs.org/combined-stream/-/combined-stream-1.0.8.tgz",
|
|
||||||
"integrity": "sha512-FQN4MRfuJeHf7cBbBMJFXhKSDq+2kAArBlmRBvcvFE5BB1HZKXtSFASDhdlz9zOYwxh8lDdnvmMOe/+5cdoEdg==",
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"delayed-stream": "~1.0.0"
|
|
||||||
},
|
|
||||||
"engines": {
|
|
||||||
"node": ">= 0.8"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/create-require": {
|
|
||||||
"version": "1.1.1",
|
|
||||||
"resolved": "https://registry.npmjs.org/create-require/-/create-require-1.1.1.tgz",
|
|
||||||
"integrity": "sha512-dcKFX3jn0MpIaXjisoRvexIJVEKzaq7z2rZKxf+MSr9TkdmHmsU4m2lcLojrj/FHl8mk5VxMmYA+ftRkP/3oKQ==",
|
|
||||||
"dev": true,
|
|
||||||
"license": "MIT"
|
|
||||||
},
|
|
||||||
"node_modules/dedent": {
|
|
||||||
"version": "1.5.3",
|
|
||||||
"resolved": "https://registry.npmjs.org/dedent/-/dedent-1.5.3.tgz",
|
|
||||||
"integrity": "sha512-NHQtfOOW68WD8lgypbLA5oT+Bt0xXJhiYvoR6SmmNXZfpzOGXwdKWmcwG8N7PwVVWV3eF/68nmD9BaJSsTBhyQ==",
|
|
||||||
"license": "MIT",
|
|
||||||
"peerDependencies": {
|
|
||||||
"babel-plugin-macros": "^3.1.0"
|
|
||||||
},
|
|
||||||
"peerDependenciesMeta": {
|
|
||||||
"babel-plugin-macros": {
|
|
||||||
"optional": true
|
|
||||||
}
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/delayed-stream": {
|
|
||||||
"version": "1.0.0",
|
|
||||||
"resolved": "https://registry.npmjs.org/delayed-stream/-/delayed-stream-1.0.0.tgz",
|
|
||||||
"integrity": "sha512-ZySD7Nf91aLB0RxL4KGrKHBXl7Eds1DAmEdcoVawXnLD7SDhpNgtuII2aAkg7a7QS41jxPSZ17p4VdGnMHk3MQ==",
|
|
||||||
"license": "MIT",
|
|
||||||
"engines": {
|
|
||||||
"node": ">=0.4.0"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/diff": {
|
|
||||||
"version": "4.0.2",
|
|
||||||
"resolved": "https://registry.npmjs.org/diff/-/diff-4.0.2.tgz",
|
|
||||||
"integrity": "sha512-58lmxKSA4BNyLz+HHMUzlOEpg09FV+ev6ZMe3vJihgdxzgcwZ8VoEEPmALCZG9LmqfVoNMMKpttIYTVG6uDY7A==",
|
|
||||||
"dev": true,
|
|
||||||
"license": "BSD-3-Clause",
|
|
||||||
"engines": {
|
|
||||||
"node": ">=0.3.1"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/dunder-proto": {
|
|
||||||
"version": "1.0.1",
|
|
||||||
"resolved": "https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz",
|
|
||||||
"integrity": "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==",
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"call-bind-apply-helpers": "^1.0.1",
|
|
||||||
"es-errors": "^1.3.0",
|
|
||||||
"gopd": "^1.2.0"
|
|
||||||
},
|
|
||||||
"engines": {
|
|
||||||
"node": ">= 0.4"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/es-define-property": {
|
|
||||||
"version": "1.0.1",
|
|
||||||
"resolved": "https://registry.npmjs.org/es-define-property/-/es-define-property-1.0.1.tgz",
|
|
||||||
"integrity": "sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g==",
|
|
||||||
"license": "MIT",
|
|
||||||
"engines": {
|
|
||||||
"node": ">= 0.4"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/es-errors": {
|
|
||||||
"version": "1.3.0",
|
|
||||||
"resolved": "https://registry.npmjs.org/es-errors/-/es-errors-1.3.0.tgz",
|
|
||||||
"integrity": "sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==",
|
|
||||||
"license": "MIT",
|
|
||||||
"engines": {
|
|
||||||
"node": ">= 0.4"
|
|
||||||
}
|
|
||||||
},
    "node_modules/es-object-atoms": {
      "version": "1.1.1",
      "resolved": "https://registry.npmjs.org/es-object-atoms/-/es-object-atoms-1.1.1.tgz",
      "integrity": "sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA==",
      "license": "MIT",
      "dependencies": {
        "es-errors": "^1.3.0"
      },
      "engines": {
        "node": ">= 0.4"
      }
    },
    "node_modules/event-target-shim": {
      "version": "5.0.1",
      "resolved": "https://registry.npmjs.org/event-target-shim/-/event-target-shim-5.0.1.tgz",
      "integrity": "sha512-i/2XbnSz/uxRCU6+NdVJgKWDTM427+MqYbkQzD321DuCQJUqOuJKIA0IM2+W2xtYHdKOmZ4dR6fExsd4SXL+WQ==",
      "license": "MIT",
      "engines": {
        "node": ">=6"
      }
    },
    "node_modules/events": {
      "version": "3.3.0",
      "resolved": "https://registry.npmjs.org/events/-/events-3.3.0.tgz",
      "integrity": "sha512-mQw+2fkQbALzQ7V0MY0IqdnXNOeTtP4r0lN9z7AAawCXgqea7bDii20AYrIBrFd/Hx0M2Ocz6S111CaFkUcb0Q==",
      "license": "MIT",
      "engines": {
        "node": ">=0.8.x"
      }
    },
    "node_modules/form-data": {
      "version": "4.0.1",
      "resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.1.tgz",
      "integrity": "sha512-tzN8e4TX8+kkxGPK8D5u0FNmjPUjw3lwC9lSLxxoB/+GtsJG91CO8bSWy73APlgAZzZbXEYZJuxjkHH2w+Ezhw==",
      "license": "MIT",
      "dependencies": {
        "asynckit": "^0.4.0",
        "combined-stream": "^1.0.8",
        "mime-types": "^2.1.12"
      },
      "engines": {
        "node": ">= 6"
      }
    },
    "node_modules/form-data-encoder": {
      "version": "4.0.2",
      "resolved": "https://registry.npmjs.org/form-data-encoder/-/form-data-encoder-4.0.2.tgz",
      "integrity": "sha512-KQVhvhK8ZkWzxKxOr56CPulAhH3dobtuQ4+hNQ+HekH/Wp5gSOafqRAeTphQUJAIk0GBvHZgJ2ZGRWd5kphMuw==",
      "license": "MIT",
      "engines": {
        "node": ">= 18"
      }
    },
    "node_modules/formdata-node": {
      "version": "6.0.3",
      "resolved": "https://registry.npmjs.org/formdata-node/-/formdata-node-6.0.3.tgz",
      "integrity": "sha512-8e1++BCiTzUno9v5IZ2J6bv4RU+3UKDmqWUQD0MIMVCd9AdhWkO1gw57oo1mNEX1dMq2EGI+FbWz4B92pscSQg==",
      "license": "MIT",
      "engines": {
        "node": ">= 18"
      }
    },
    "node_modules/function-bind": {
      "version": "1.1.2",
      "resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.2.tgz",
      "integrity": "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==",
      "license": "MIT",
      "funding": {
        "url": "https://github.com/sponsors/ljharb"
      }
    },
    "node_modules/get-intrinsic": {
      "version": "1.2.7",
      "resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.2.7.tgz",
      "integrity": "sha512-VW6Pxhsrk0KAOqs3WEd0klDiF/+V7gQOpAvY1jVU/LHmaD/kQO4523aiJuikX/QAKYiW6x8Jh+RJej1almdtCA==",
      "license": "MIT",
      "dependencies": {
        "call-bind-apply-helpers": "^1.0.1",
        "es-define-property": "^1.0.1",
        "es-errors": "^1.3.0",
        "es-object-atoms": "^1.0.0",
        "function-bind": "^1.1.2",
        "get-proto": "^1.0.0",
        "gopd": "^1.2.0",
        "has-symbols": "^1.1.0",
        "hasown": "^2.0.2",
        "math-intrinsics": "^1.1.0"
      },
      "engines": {
        "node": ">= 0.4"
      },
      "funding": {
        "url": "https://github.com/sponsors/ljharb"
      }
    },
    "node_modules/get-proto": {
      "version": "1.0.1",
      "resolved": "https://registry.npmjs.org/get-proto/-/get-proto-1.0.1.tgz",
      "integrity": "sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g==",
      "license": "MIT",
      "dependencies": {
        "dunder-proto": "^1.0.1",
        "es-object-atoms": "^1.0.0"
      },
      "engines": {
        "node": ">= 0.4"
      }
    },
    "node_modules/gopd": {
      "version": "1.2.0",
      "resolved": "https://registry.npmjs.org/gopd/-/gopd-1.2.0.tgz",
      "integrity": "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==",
      "license": "MIT",
      "engines": {
        "node": ">= 0.4"
      },
      "funding": {
        "url": "https://github.com/sponsors/ljharb"
      }
    },
    "node_modules/has-symbols": {
      "version": "1.1.0",
      "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.1.0.tgz",
      "integrity": "sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==",
      "license": "MIT",
      "engines": {
        "node": ">= 0.4"
      },
      "funding": {
        "url": "https://github.com/sponsors/ljharb"
      }
    },
    "node_modules/hasown": {
      "version": "2.0.2",
      "resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.2.tgz",
      "integrity": "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==",
      "license": "MIT",
      "dependencies": {
        "function-bind": "^1.1.2"
      },
      "engines": {
        "node": ">= 0.4"
      }
    },
    "node_modules/ieee754": {
      "version": "1.2.1",
      "resolved": "https://registry.npmjs.org/ieee754/-/ieee754-1.2.1.tgz",
      "integrity": "sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA==",
      "funding": [
        {
          "type": "github",
          "url": "https://github.com/sponsors/feross"
        },
        {
          "type": "patreon",
          "url": "https://www.patreon.com/feross"
        },
        {
          "type": "consulting",
          "url": "https://feross.org/support"
        }
      ],
      "license": "BSD-3-Clause"
    },
    "node_modules/make-error": {
      "version": "1.3.6",
      "resolved": "https://registry.npmjs.org/make-error/-/make-error-1.3.6.tgz",
      "integrity": "sha512-s8UhlNe7vPKomQhC1qFelMokr/Sc3AgNbso3n74mVPA5LTZwkB9NlXf4XPamLxJE8h0gh73rM94xvwRT2CVInw==",
      "dev": true,
      "license": "ISC"
    },
    "node_modules/math-intrinsics": {
      "version": "1.1.0",
      "resolved": "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz",
      "integrity": "sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==",
      "license": "MIT",
      "engines": {
        "node": ">= 0.4"
      }
    },
    "node_modules/mime-db": {
      "version": "1.52.0",
      "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.52.0.tgz",
      "integrity": "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==",
      "license": "MIT",
      "engines": {
        "node": ">= 0.6"
      }
    },
    "node_modules/mime-types": {
      "version": "2.1.35",
      "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.35.tgz",
      "integrity": "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw==",
      "license": "MIT",
      "dependencies": {
        "mime-db": "1.52.0"
      },
      "engines": {
        "node": ">= 0.6"
      }
    },
    "node_modules/node-fetch": {
      "version": "2.7.0",
      "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.7.0.tgz",
      "integrity": "sha512-c4FRfUm/dbcWZ7U+1Wq0AwCyFL+3nt2bEw05wfxSz+DWpWsitgmSgYmy2dQdWyKC1694ELPqMs/YzUSNozLt8A==",
      "license": "MIT",
      "dependencies": {
        "whatwg-url": "^5.0.0"
      },
      "engines": {
        "node": "4.x || >=6.0.0"
      },
      "peerDependencies": {
        "encoding": "^0.1.0"
      },
      "peerDependenciesMeta": {
        "encoding": {
          "optional": true
        }
      }
    },
    "node_modules/object-inspect": {
      "version": "1.13.3",
      "resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.13.3.tgz",
      "integrity": "sha512-kDCGIbxkDSXE3euJZZXzc6to7fCrKHNI/hSRQnRuQ+BWjFNzZwiFF8fj/6o2t2G9/jTj8PSIYTfCLelLZEeRpA==",
      "license": "MIT",
      "engines": {
        "node": ">= 0.4"
      },
      "funding": {
        "url": "https://github.com/sponsors/ljharb"
      }
    },
    "node_modules/process": {
      "version": "0.11.10",
      "resolved": "https://registry.npmjs.org/process/-/process-0.11.10.tgz",
      "integrity": "sha512-cdGef/drWFoydD1JsMzuFf8100nZl+GT+yacc2bEced5f9Rjk4z+WtFUTBu9PhOi9j/jfmBPu0mMEY4wIdAF8A==",
      "license": "MIT",
      "engines": {
        "node": ">= 0.6.0"
      }
    },
    "node_modules/qs": {
      "version": "6.14.0",
      "resolved": "https://registry.npmjs.org/qs/-/qs-6.14.0.tgz",
      "integrity": "sha512-YWWTjgABSKcvs/nWBi9PycY/JiPJqOD4JA6o9Sej2AtvSGarXxKC3OQSk4pAarbdQlKAh5D4FCQkJNkW+GAn3w==",
      "license": "BSD-3-Clause",
      "dependencies": {
        "side-channel": "^1.1.0"
      },
      "engines": {
        "node": ">=0.6"
      },
      "funding": {
        "url": "https://github.com/sponsors/ljharb"
      }
    },
    "node_modules/readable-stream": {
      "version": "4.7.0",
      "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-4.7.0.tgz",
      "integrity": "sha512-oIGGmcpTLwPga8Bn6/Z75SVaH1z5dUut2ibSyAMVhmUggWpmDn2dapB0n7f8nwaSiRtepAsfJyfXIO5DCVAODg==",
      "license": "MIT",
      "dependencies": {
        "abort-controller": "^3.0.0",
        "buffer": "^6.0.3",
        "events": "^3.3.0",
        "process": "^0.11.10",
        "string_decoder": "^1.3.0"
      },
      "engines": {
        "node": "^12.22.0 || ^14.17.0 || >=16.0.0"
      }
    },
    "node_modules/safe-buffer": {
      "version": "5.2.1",
      "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz",
      "integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==",
      "funding": [
        {
          "type": "github",
          "url": "https://github.com/sponsors/feross"
        },
        {
          "type": "patreon",
          "url": "https://www.patreon.com/feross"
        },
        {
          "type": "consulting",
          "url": "https://feross.org/support"
        }
      ],
      "license": "MIT"
    },
    "node_modules/side-channel": {
      "version": "1.1.0",
      "resolved": "https://registry.npmjs.org/side-channel/-/side-channel-1.1.0.tgz",
      "integrity": "sha512-ZX99e6tRweoUXqR+VBrslhda51Nh5MTQwou5tnUDgbtyM0dBgmhEDtWGP/xbKn6hqfPRHujUNwz5fy/wbbhnpw==",
      "license": "MIT",
      "dependencies": {
        "es-errors": "^1.3.0",
        "object-inspect": "^1.13.3",
        "side-channel-list": "^1.0.0",
        "side-channel-map": "^1.0.1",
        "side-channel-weakmap": "^1.0.2"
      },
      "engines": {
        "node": ">= 0.4"
      },
      "funding": {
        "url": "https://github.com/sponsors/ljharb"
      }
    },
    "node_modules/side-channel-list": {
      "version": "1.0.0",
      "resolved": "https://registry.npmjs.org/side-channel-list/-/side-channel-list-1.0.0.tgz",
      "integrity": "sha512-FCLHtRD/gnpCiCHEiJLOwdmFP+wzCmDEkc9y7NsYxeF4u7Btsn1ZuwgwJGxImImHicJArLP4R0yX4c2KCrMrTA==",
      "license": "MIT",
      "dependencies": {
        "es-errors": "^1.3.0",
        "object-inspect": "^1.13.3"
      },
      "engines": {
        "node": ">= 0.4"
      },
      "funding": {
        "url": "https://github.com/sponsors/ljharb"
      }
    },
    "node_modules/side-channel-map": {
      "version": "1.0.1",
      "resolved": "https://registry.npmjs.org/side-channel-map/-/side-channel-map-1.0.1.tgz",
      "integrity": "sha512-VCjCNfgMsby3tTdo02nbjtM/ewra6jPHmpThenkTYh8pG9ucZ/1P8So4u4FGBek/BjpOVsDCMoLA/iuBKIFXRA==",
      "license": "MIT",
      "dependencies": {
        "call-bound": "^1.0.2",
        "es-errors": "^1.3.0",
        "get-intrinsic": "^1.2.5",
        "object-inspect": "^1.13.3"
      },
      "engines": {
        "node": ">= 0.4"
      },
      "funding": {
        "url": "https://github.com/sponsors/ljharb"
      }
    },
    "node_modules/side-channel-weakmap": {
      "version": "1.0.2",
      "resolved": "https://registry.npmjs.org/side-channel-weakmap/-/side-channel-weakmap-1.0.2.tgz",
      "integrity": "sha512-WPS/HvHQTYnHisLo9McqBHOJk2FkHO/tlpvldyrnem4aeQp4hai3gythswg6p01oSoTl58rcpiFAjF2br2Ak2A==",
      "license": "MIT",
      "dependencies": {
        "call-bound": "^1.0.2",
        "es-errors": "^1.3.0",
        "get-intrinsic": "^1.2.5",
        "object-inspect": "^1.13.3",
        "side-channel-map": "^1.0.1"
      },
      "engines": {
        "node": ">= 0.4"
      },
      "funding": {
        "url": "https://github.com/sponsors/ljharb"
      }
    },
    "node_modules/string_decoder": {
      "version": "1.3.0",
      "resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-1.3.0.tgz",
      "integrity": "sha512-hkRX8U1WjJFd8LsDJ2yQ/wWWxaopEsABU1XfkM8A+j0+85JAGppt16cr1Whg6KIbb4okU6Mql6BOj+uup/wKeA==",
      "license": "MIT",
      "dependencies": {
        "safe-buffer": "~5.2.0"
      }
    },
    "node_modules/tr46": {
      "version": "0.0.3",
      "resolved": "https://registry.npmjs.org/tr46/-/tr46-0.0.3.tgz",
      "integrity": "sha512-N3WMsuqV66lT30CrXNbEjx4GEwlow3v6rr4mCcv6prnfwhS01rkgyFdjPNBYd9br7LpXV1+Emh01fHnq2Gdgrw==",
      "license": "MIT"
    },
    "node_modules/ts-node": {
      "version": "10.9.2",
      "resolved": "https://registry.npmjs.org/ts-node/-/ts-node-10.9.2.tgz",
      "integrity": "sha512-f0FFpIdcHgn8zcPSbf1dRevwt047YMnaiJM3u2w2RewrB+fob/zePZcrOyQoLMMO7aBIddLcQIEK5dYjkLnGrQ==",
      "dev": true,
      "license": "MIT",
      "dependencies": {
        "@cspotcode/source-map-support": "^0.8.0",
        "@tsconfig/node10": "^1.0.7",
        "@tsconfig/node12": "^1.0.7",
        "@tsconfig/node14": "^1.0.0",
        "@tsconfig/node16": "^1.0.2",
        "acorn": "^8.4.1",
        "acorn-walk": "^8.1.1",
        "arg": "^4.1.0",
        "create-require": "^1.1.0",
        "diff": "^4.0.1",
        "make-error": "^1.1.1",
        "v8-compile-cache-lib": "^3.0.1",
        "yn": "3.1.1"
      },
      "bin": {
        "ts-node": "dist/bin.js",
        "ts-node-cwd": "dist/bin-cwd.js",
        "ts-node-esm": "dist/bin-esm.js",
        "ts-node-script": "dist/bin-script.js",
        "ts-node-transpile-only": "dist/bin-transpile.js",
        "ts-script": "dist/bin-script-deprecated.js"
      },
      "peerDependencies": {
        "@swc/core": ">=1.2.50",
        "@swc/wasm": ">=1.2.50",
        "@types/node": "*",
        "typescript": ">=2.7"
      },
      "peerDependenciesMeta": {
        "@swc/core": {
          "optional": true
        },
        "@swc/wasm": {
          "optional": true
        }
      }
    },
    "node_modules/typescript": {
      "version": "5.7.3",
      "resolved": "https://registry.npmjs.org/typescript/-/typescript-5.7.3.tgz",
      "integrity": "sha512-84MVSjMEHP+FQRPy3pX9sTVV/INIex71s9TL2Gm5FG/WG1SqXeKyZ0k7/blY/4FdOzI12CBy1vGc4og/eus0fw==",
      "dev": true,
      "license": "Apache-2.0",
      "bin": {
        "tsc": "bin/tsc",
        "tsserver": "bin/tsserver"
      },
      "engines": {
        "node": ">=14.17"
      }
    },
    "node_modules/url-join": {
      "version": "4.0.1",
      "resolved": "https://registry.npmjs.org/url-join/-/url-join-4.0.1.tgz",
      "integrity": "sha512-jk1+QP6ZJqyOiuEI9AEWQfju/nB2Pw466kbA0LEZljHwKeMgd9WrAEgEGxjPDD2+TNbbb37rTyhEfrCXfuKXnA==",
      "license": "MIT"
    },
    "node_modules/v8-compile-cache-lib": {
      "version": "3.0.1",
      "resolved": "https://registry.npmjs.org/v8-compile-cache-lib/-/v8-compile-cache-lib-3.0.1.tgz",
      "integrity": "sha512-wa7YjyUGfNZngI/vtK0UHAN+lgDCxBPCylVXGp0zu59Fz5aiGtNXaq3DhIov063MorB+VfufLh3JlF2KdTK3xg==",
      "dev": true,
      "license": "MIT"
    },
    "node_modules/webidl-conversions": {
      "version": "3.0.1",
      "resolved": "https://registry.npmjs.org/webidl-conversions/-/webidl-conversions-3.0.1.tgz",
      "integrity": "sha512-2JAn3z8AR6rjK8Sm8orRC0h/bcl/DqL7tRPdGZ4I1CjdF+EaMLmYxBHyXuKL849eucPFhvBoxMsflfOb8kxaeQ==",
      "license": "BSD-2-Clause"
    },
    "node_modules/whatwg-url": {
      "version": "5.0.0",
      "resolved": "https://registry.npmjs.org/whatwg-url/-/whatwg-url-5.0.0.tgz",
      "integrity": "sha512-saE57nupxk6v3HY35+jzBwYa0rKSy0XR8JSxZPwgLr7ys0IBzhGviA1/TUGJLmSVqs8pb9AnvICXEuOHLprYTw==",
      "license": "MIT",
      "dependencies": {
        "tr46": "~0.0.3",
        "webidl-conversions": "^3.0.0"
      }
    },
    "node_modules/yn": {
      "version": "3.1.1",
      "resolved": "https://registry.npmjs.org/yn/-/yn-3.1.1.tgz",
      "integrity": "sha512-Ux4ygGWsu2c7isFWe8Yu1YluJmqVhxqK2cLXNQA5AcC3QfbGNpM7fu0Y8b/z16pXLnFxZYvWhd3fhBY9DLmC6Q==",
      "dev": true,
      "license": "MIT",
      "engines": {
        "node": ">=6"
      }
    }
  }
}
@@ -1,18 +0,0 @@
{
  "name": "@letta-ai/core",
  "version": "0.1.0",
  "private": true,
  "type": "module",
  "scripts": {
    "example": "node --no-warnings --import 'data:text/javascript,import { register } from \"node:module\"; import { pathToFileURL } from \"node:url\"; register(\"ts-node/esm\", pathToFileURL(\"./\"));' example.ts",
    "build": "tsc"
  },
  "dependencies": {
    "@letta-ai/letta-client": "^0.1.131"
  },
  "devDependencies": {
    "@types/node": "^22.12.0",
    "ts-node": "^10.9.2",
    "typescript": "^5.7.3"
  }
}
@@ -1,4 +0,0 @@
{
  "name": "node-example",
  "$schema": "../../node_modules/nx/schemas/project-schema.json"
}
@@ -1,18 +0,0 @@
{
  "compilerOptions": {
    "target": "es2017",
    "module": "esnext",
    "lib": ["es2017", "dom"],
    "declaration": true,
    "strict": true,
    "moduleResolution": "node",
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true,
    "outDir": "./dist",
    "rootDir": ".",
    "resolveJsonModule": true
  },
  "include": ["*.ts"],
  "exclude": ["node_modules", "dist"]
}
@@ -1,58 +0,0 @@
from letta_client import CreateBlock, Letta, MessageCreate

"""
Make sure you run the Letta server before running this example.
```
letta server
```
"""


def main():
    # Connect to the server as a user
    client = Letta(base_url="http://localhost:8283")

    # list available configs on the server
    llm_configs = client.models.list_llms()
    print(f"Available LLM configs: {llm_configs}")
    embedding_configs = client.models.list_embedding_models()
    print(f"Available embedding configs: {embedding_configs}")

    # Create an agent
    agent_state = client.agents.create(
        name="my_agent",
        memory_blocks=[
            CreateBlock(
                label="human",
                value="My name is Sarah",
            ),
            CreateBlock(
                label="persona",
                value="I am a friendly AI",
            ),
        ],
        model=llm_configs[0].handle,
        embedding=embedding_configs[0].handle,
    )
    print(f"Created agent: {agent_state.name} with ID {str(agent_state.id)}")

    # Send a message to the agent
    response = client.agents.messages.create(
        agent_id=agent_state.id,
        messages=[
            MessageCreate(
                role="user",
                content="What's my name?",
            )
        ],
    )
    print("Received response:", response.messages)

    # Delete agent
    client.agents.delete(agent_id=agent_state.id)
    print(f"Deleted agent: {agent_state.name} with ID {str(agent_state.id)}")


if __name__ == "__main__":
    main()
@@ -1,110 +0,0 @@
from letta_client import CreateBlock, Letta, MessageCreate
from letta_client.types import TerminalToolRule

"""
Make sure you run the Letta server before running this example.
```
letta server
```
"""

client = Letta(base_url="http://localhost:8283")


# define a function with a docstring
def roll_d20() -> str:
    """
    Simulate the roll of a 20-sided die (d20).

    This function generates a random integer between 1 and 20, inclusive,
    which represents the outcome of a single roll of a d20.

    Returns:
        str: A message reporting the die roll.

    Example:
        >>> roll_d20()
        'You rolled a 15'  # This is an example output and may vary each time the function is called.
    """
    import random

    dice_roll_outcome = random.randint(1, 20)
    output_string = f"You rolled a {dice_roll_outcome}"
    return output_string


# create a tool from the function
tool = client.tools.upsert_from_function(func=roll_d20)
print(f"Created tool with name {tool.name}")

# create a new agent
agent_state = client.agents.create(
    memory_blocks=[
        CreateBlock(
            label="human",
            value="Name: Sarah",
        ),
    ],
    # set automatic defaults for LLM/embedding config
    model="openai/gpt-4o-mini",
    embedding="openai/text-embedding-3-small",
    # create the agent with an additional tool
    tool_ids=[tool.id],
    tool_rules=[
        # exit after roll_d20 is called
        TerminalToolRule(tool_name=tool.name),
        # exit after send_message is called (default behavior)
        TerminalToolRule(tool_name="send_message"),
    ],
)
print(f"Created agent with name {agent_state.name} with tools {[t.name for t in agent_state.tools]}")

# Message an agent
response = client.agents.messages.create(
    agent_id=agent_state.id,
    messages=[
        MessageCreate(
            role="user",
            content="roll a dice",
        )
    ],
)
print("Usage", response.usage)
print("Agent messages", response.messages)

# remove a tool from the agent
client.agents.tools.detach(agent_id=agent_state.id, tool_id=tool.id)

# add a tool to the agent
client.agents.tools.attach(agent_id=agent_state.id, tool_id=tool.id)

client.agents.delete(agent_id=agent_state.id)

# create an agent with only a subset of default tools
send_message_tool = [t for t in client.tools.list() if t.name == "send_message"][0]
agent_state = client.agents.create(
    memory_blocks=[
        CreateBlock(
            label="human",
            value="username: sarah",
        ),
    ],
    model="openai/gpt-4o-mini",
    embedding="openai/text-embedding-3-small",
    include_base_tools=False,
    tool_ids=[tool.id, send_message_tool.id],
)

# message the agent to search archival memory (will be unable to do so)
response = client.agents.messages.create(
    agent_id=agent_state.id,
    messages=[
        MessageCreate(
            role="user",
            content="search your archival memory",
        )
    ],
)
print("Usage", response.usage)
print("Agent messages", response.messages)

client.agents.delete(agent_id=agent_state.id)
@@ -1,34 +0,0 @@
# Letta Files and Streaming Demo

This demo shows how to work with Letta's file upload and streaming capabilities.

## Features

- Upload files from disk to a Letta data source
- Create files from strings and upload them
- Download and upload PDF files
- Create an agent and attach data sources
- Stream agent responses in real-time
- Interactive chat with a file-aware agent

## Files

- `main.py` - Main demo script showing file upload and streaming
- `example-on-disk.txt` - Sample text file for upload demonstration
- `memgpt.pdf` - MemGPT paper (downloaded automatically)

## Setup

1. Set your Letta API key: `export LETTA_API_KEY=your_key_here`
2. Install dependencies: `pip install letta-client requests rich`
3. Run the demo: `python main.py`

## Usage

The demo will:
1. Create a data source called "Example Source"
2. Upload the example text file and PDF
3. Create an agent named "Clippy"
4. Start an interactive chat session

Type 'quit' or 'exit' to end the conversation.
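The setup and usage steps above can be sketched end to end with the client API (a minimal sketch, not the shipped `main.py`; the folder and agent names are illustrative, and a reachable server plus `LETTA_API_KEY` are assumed):

```python
import io
import os


def make_in_memory_file(text: str, name: str = "example.txt") -> io.BytesIO:
    """Wrap a string as a named, file-like object suitable for upload."""
    file_object = io.BytesIO(text.encode("utf-8"))
    file_object.name = name  # the upload uses this as the filename
    return file_object


def run_demo() -> None:
    """Connect, upload a file, attach the folder, and stream one reply.

    Requires a live Letta server; this function is not called at import time.
    """
    from letta_client import Letta  # deferred so the helper above stays importable

    client = Letta(token=os.environ["LETTA_API_KEY"])
    folder = client.folders.create(name="Example Source", description="Demo folder")
    client.folders.files.upload(
        folder_id=folder.id,
        file=make_in_memory_file("Hello from a string"),
        duplicate_handling="skip",
    )
    agent = client.agents.create(
        model="openai/gpt-4o-mini",
        name="Clippy",
        memory_blocks=[{"label": "persona", "value": "I answer questions about files."}],
    )
    client.agents.folders.attach(agent_id=agent.id, folder_id=folder.id)
    for chunk in client.agents.messages.create_stream(
        agent_id=agent.id,
        messages=[{"role": "user", "content": "What files can you see?"}],
    ):
        print(chunk)
```

Call `run_demo()` against a running server to see the streamed chunks; the in-memory file helper is the same trick the full demo uses to upload a string without touching disk.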
@@ -1,2 +0,0 @@
Hey, you're looking at a different example.
This password is "stateful agents".
@@ -1,189 +0,0 @@
"""
Letta Filesystem

This demo shows how to:
1. Create a folder and upload files (both from disk and from strings)
2. Create an agent and attach the data folder
3. Stream the agent's responses
4. Query the agent about the uploaded files

The demo uploads:
- A text file from disk (example-on-disk.txt)
- A text file created from a string (containing a password)
- The MemGPT paper PDF from arXiv

Then asks the agent to summarize the paper and find passwords in the files.
"""

import os

import requests
from letta_client import Letta
from letta_client.core.api_error import ApiError
from rich import print

LETTA_API_KEY = os.getenv("LETTA_API_KEY")
if LETTA_API_KEY is None:
    raise ValueError("LETTA_API_KEY is not set")

FOLDER_NAME = "Example Folder"

# Connect to our Letta server
client = Letta(token=LETTA_API_KEY)

# get an available embedding_config
embedding_configs = client.embedding_models.list()
embedding_config = embedding_configs[0]

# Check if the folder already exists
try:
    folder_id = client.folders.retrieve_by_name(FOLDER_NAME)
except ApiError as e:
    # We got an API error. Check if it's a 404, meaning the folder doesn't exist.
    if e.status_code == 404:
        # Create a new folder
        folder = client.folders.create(
            name=FOLDER_NAME,
            description="This is an example folder",
            instructions="Use this data folder to see how Letta works.",
        )
        folder_id = folder.id
    else:
        raise e
except Exception as e:
    # Something else went wrong
    raise e

#
# There are two ways to upload a file to a folder:
#
# 1. From an existing file on disk
# 2. From a string, via an in-memory bytes buffer
#

# 1. From an existing file
# "rb" means "read binary"
with open("example-on-disk.txt", "rb") as f:
    # Upload the file to the folder
    file = client.folders.files.upload(
        folder_id=folder_id,
        file=f,
        duplicate_handling="skip",
    )

# 2. From a string, via an in-memory bytes buffer
import io

content = """
This is an example file. If you can read this,
the password is 'letta'.
"""

# Encode the string into bytes, and then create a file-like object
# that exists only in memory.
file_object = io.BytesIO(content.encode("utf-8"))

# Set the name of the file
file_object.name = "example.txt"

# Upload the file to the folder
file = client.folders.files.upload(
    folder_id=folder_id,
    file=file_object,
    duplicate_handling="skip",
)

#
# You can also upload PDFs!
# Letta extracts text from PDFs using OCR.
#

# Download the PDF to the local directory if it doesn't exist
if not os.path.exists("memgpt.pdf"):
    # Download the PDF
    print("Downloading memgpt.pdf")
    response = requests.get("https://arxiv.org/pdf/2310.08560")
    with open("memgpt.pdf", "wb") as f:
        f.write(response.content)

# Upload the PDF to the folder
with open("memgpt.pdf", "rb") as f:
    file = client.folders.files.upload(
        folder_id=folder_id,
        file=f,
        duplicate_handling="skip",
    )

#
# Now we need to create an agent that can use this folder
#

# Create an agent
agent = client.agents.create(
    model="openai/gpt-4o-mini",
    name="Example Agent",
    description="This agent looks at files and answers questions about them.",
    memory_blocks=[
        {
            "label": "human",
            "value": "The human wants to know about the files."
        },
        {
            "label": "persona",
            "value": "My name is Clippy, I answer questions about files."
        }
    ],
)

# Attach the data folder to the agent.
# Once the folder is attached, the agent will be able to see all
# files in the folder.
client.agents.folders.attach(
    agent_id=agent.id,
    folder_id=folder_id
)

########################################################
# This code makes a simple chatbot interface to the agent
########################################################

# Wrap this in a try/finally block to remove the agent in the event of an error
try:
    print(f"🤖 Connected to agent: {agent.name}")
    print("💡 Type 'quit' or 'exit' to end the conversation")
    print("=" * 50)

    while True:
        # Get user input
        try:
            user_input = input("\n👤 You: ").strip()
        except (EOFError, KeyboardInterrupt):
            print("\n👋 Goodbye!")
            break

        if user_input.lower() in ['quit', 'exit', 'q']:
            print("👋 Goodbye!")
            break

        if not user_input:
            continue

        # Stream the agent's response
        stream = client.agents.messages.create_stream(
            agent_id=agent.id,
            messages=[
                {
                    "role": "user",
                    "content": user_input
                }
            ],
        )

        for chunk in stream:
            print(chunk)

finally:
    client.agents.delete(agent.id)
@@ -1,145 +0,0 @@
# Add your utilities or helper functions to this file.

import html
import json
import os
import re

from dotenv import find_dotenv, load_dotenv
from IPython.display import HTML, display


# These expect to find a .env file in the directory above the lesson.
# The format for that file is (without this comment):
# API_KEYNAME=AStringThatIsTheLongAPIKeyFromSomeService
def load_env():
    _ = load_dotenv(find_dotenv())


def get_openai_api_key():
    load_env()
    openai_api_key = os.getenv("OPENAI_API_KEY")
    return openai_api_key


def nb_print(messages):
    html_output = """
    <style>
        .message-container {
            font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
            max-width: 800px;
            margin: 20px auto;
            background-color: #1e1e1e;
            border-radius: 8px;
            overflow: hidden;
            color: #d4d4d4;
        }
        .message {
            padding: 10px 15px;
            border-bottom: 1px solid #3a3a3a;
        }
        .message:last-child {
            border-bottom: none;
        }
        .title {
            font-weight: bold;
            margin-bottom: 5px;
            color: #ffffff;
            text-transform: uppercase;
            font-size: 0.9em;
        }
        .content {
            background-color: #2d2d2d;
            border-radius: 4px;
            padding: 5px 10px;
            font-family: 'Consolas', 'Courier New', monospace;
            white-space: pre-wrap;
        }
        .status-line {
            margin-bottom: 5px;
            color: #d4d4d4;
        }
        .function-name { color: #569cd6; }
        .json-key { color: #9cdcfe; }
        .json-string { color: #ce9178; }
        .json-number { color: #b5cea8; }
        .json-boolean { color: #569cd6; }
        .internal-monologue { font-style: italic; }
    </style>
    <div class="message-container">
    """

    for msg in messages:
        content = get_formatted_content(msg)

        # don't print empty function returns
        if msg.message_type == "function_return":
            return_data = json.loads(msg.function_return)
            if "message" in return_data and return_data["message"] == "None":
                continue
        if msg.message_type == "tool_return_message":
            return_data = json.loads(msg.tool_return)
            if "message" in return_data and return_data["message"] == "None":
                continue

        title = msg.message_type.replace("_", " ").upper()
        html_output += f"""
        <div class="message">
            <div class="title">{title}</div>
            {content}
        </div>
        """

    html_output += "</div>"
    display(HTML(html_output))


def get_formatted_content(msg):
    if msg.message_type == "internal_monologue":
        return f'<div class="content"><span class="internal-monologue">{html.escape(msg.internal_monologue)}</span></div>'
    elif msg.message_type == "reasoning_message":
        return f'<div class="content"><span class="internal-monologue">{html.escape(msg.reasoning)}</span></div>'
    elif msg.message_type == "function_call":
        args = format_json(msg.function_call.arguments)
        return f'<div class="content"><span class="function-name">{html.escape(msg.function_call.name)}</span>({args})</div>'
    elif msg.message_type == "tool_call_message":
        args = format_json(msg.tool_call.arguments)
        return f'<div class="content"><span class="function-name">{html.escape(msg.tool_call.name)}</span>({args})</div>'
    elif msg.message_type == "function_return":
        return_value = format_json(msg.function_return)
        return f'<div class="content">{return_value}</div>'
    elif msg.message_type == "tool_return_message":
        return_value = format_json(msg.tool_return)
        return f'<div class="content">{return_value}</div>'
    elif msg.message_type == "user_message":
        if is_json(msg.message):
            return f'<div class="content">{format_json(msg.message)}</div>'
        else:
            return f'<div class="content">{html.escape(msg.message)}</div>'
    elif msg.message_type in ["assistant_message", "system_message"]:
        return f'<div class="content">{html.escape(msg.message)}</div>'
    else:
        return f'<div class="content">{html.escape(str(msg))}</div>'


def is_json(string):
    try:
        json.loads(string)
        return True
    except ValueError:
        return False


def format_json(json_str):
    try:
        parsed = json.loads(json_str)
        formatted = json.dumps(parsed, indent=2, ensure_ascii=False)
        formatted = formatted.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")
        formatted = formatted.replace("\n", "<br>").replace(" ", "&nbsp;")
        formatted = re.sub(r'(".*?"):', r'<span class="json-key">\1</span>:', formatted)
        formatted = re.sub(r': (".*?")', r': <span class="json-string">\1</span>', formatted)
        formatted = re.sub(r": (\d+)", r': <span class="json-number">\1</span>', formatted)
        formatted = re.sub(r": (true|false)", r': <span class="json-boolean">\1</span>', formatted)
        return formatted
    except json.JSONDecodeError:
        return html.escape(json_str)
@@ -1,56 +0,0 @@
from pprint import pprint

from letta_client import Letta

# Connect to Letta server
client = Letta(base_url="http://localhost:8283")

# Use the "everything" MCP server:
# https://github.com/modelcontextprotocol/servers/tree/main/src/everything
mcp_server_name = "everything"
mcp_tool_name = "echo"

# List all McpTool belonging to the "everything" MCP server.
mcp_tools = client.tools.list_mcp_tools_by_server(
    mcp_server_name=mcp_server_name,
)

# We can see that "echo" is one of the tools, but it's not
# a Letta tool that can be added to a client (it has no tool id).
for tool in mcp_tools:
    pprint(tool)

# Create a Tool (with a tool id) using the server and tool names.
mcp_tool = client.tools.add_mcp_tool(
    mcp_server_name=mcp_server_name,
    mcp_tool_name=mcp_tool_name,
)

# Create an agent with the tool, using tool.id -- note that
# this is the ONLY tool in the agent; you typically want to
# also include the default tools.
agent = client.agents.create(
    memory_blocks=[
        {
            "value": "Name: Caren",
            "label": "human"
        }
    ],
    model="openai/gpt-4o-mini",
    embedding="openai/text-embedding-3-small",
    tool_ids=[mcp_tool.id],
)
print(f"Created agent id {agent.id}")

# Ask the agent to call the tool.
response = client.agents.messages.create(
    agent_id=agent.id,
    messages=[
        {
            "role": "user",
            "content": "Hello can you echo back this input?"
        },
    ],
)
for message in response.messages:
    print(message)
File diff suppressed because one or more lines are too long
@@ -1,279 +0,0 @@
# Personal assistant demo

In this example we'll create an agent preset that has access to:
1. Gmail (can read your email)
2. Google Calendar (can schedule events)
3. SMS (can text you a message)

## Initial setup

For the Google APIs:
```sh
pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib
```

For the Twilio API + listener:
```sh
# Outbound API requests
pip install --upgrade twilio
# Listener
pip install --upgrade Flask flask-cors
```

## Setting up the Google APIs

See https://developers.google.com/gmail/api/quickstart/python

### Setup authentication for Google Calendar

Copy the credentials file to `~/.letta/google_api_credentials.json`. Then, run the initial setup script that will take you to a login page:
```sh
python examples/personal_assistant_demo/google_calendar_test_setup.py
```
```
Please visit this URL to authorize this application: https://accounts.google.com/o/oauth2/auth?response_type=...
Getting the upcoming 10 events
2024-04-23T09:00:00-07:00 ...
```

### Setup authentication for Gmail

Similar flow: run the authentication script to generate the token:
```sh
python examples/personal_assistant_demo/gmail_test_setup.py
```
```
Please visit this URL to authorize this application: https://accounts.google.com/o/oauth2/auth?response_type=...
Labels:
CHAT
SENT
INBOX
IMPORTANT
TRASH
...
```

## Setting up the Twilio API

Create a Twilio account and set the following variables:
```sh
export TWILIO_ACCOUNT_SID=...
export TWILIO_AUTH_TOKEN=...
export TWILIO_FROM_NUMBER=...
export TWILIO_TO_NUMBER=...
```
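For reference, a `send_text_message` tool backed by these variables might look like the sketch below (an illustration, not the repo's `twilio_messaging.py`; it assumes `pip install twilio` and the four `TWILIO_*` variables above are set):

```python
import os


def build_sms_kwargs(body: str) -> dict:
    """Assemble keyword arguments for Twilio's messages.create from the environment."""
    return {
        "body": body,
        "from_": os.environ["TWILIO_FROM_NUMBER"],
        "to": os.environ["TWILIO_TO_NUMBER"],
    }


def send_text_message(message: str) -> str:
    """Send an SMS to the user's phone. Returns a short status string."""
    from twilio.rest import Client  # deferred so the helper above stays importable

    client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
    sms = client.messages.create(**build_sms_kwargs(message))
    return f"Message sent (sid={sms.sid})"
```

Keeping the env-var plumbing in a separate helper makes the tool easy to test without hitting the Twilio API.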
|
|
||||||
# Creating the agent preset
|
|
||||||
|
|
||||||
## Create a custom user
|
|
||||||
|
|
||||||
In the demo we'll show how Letta can programatically update its knowledge about you:
|
|
||||||
```
|
|
||||||
This is what I know so far about the user, I should expand this as I learn more about them.
|
|
||||||
|
|
||||||
Name: Charles Packer
|
|
||||||
Gender: Male
|
|
||||||
Occupation: CS PhD student working on an AI project with collaborator Sarah Wooders
|
|
||||||
|
|
||||||
Notes about their preferred communication style + working habits:
|
|
||||||
- wakes up at around 7am
|
|
||||||
- enjoys using (and receiving!) emojis in messages, especially funny combinations of emojis
|
|
||||||
- prefers sending and receiving shorter messages
|
|
||||||
- does not like "robotic" sounding assistants, e.g. assistants that say "How can I assist you today?"
|
|
||||||
```
|
|
||||||
|
|
||||||
```sh
|
|
||||||
letta add human -f examples/personal_assistant_demo/charles.txt --name charles
|
|
||||||
```
|
|
||||||
|
|
||||||
## Linking the functions
|
|
||||||
|
|
||||||
The preset (shown below) and functions are provided for you, so you just need to copy/link them.
|
|
||||||
|
|
||||||
```sh
|
|
||||||
cp examples/personal_assistant_demo/google_calendar.py ~/.letta/functions/
|
|
||||||
cp examples/personal_assistant_demo/twilio_messaging.py ~/.letta/functions/
|
|
||||||
```
|
|
||||||
|
|
||||||
(or use the dev portal)
|
|
||||||
|
|
||||||
## Creating the preset
|
|
||||||
|
|
||||||
```yaml
|
|
||||||
system_prompt: "memgpt_chat"
|
|
||||||
functions:
|
|
||||||
- "send_message"
|
|
||||||
- "pause_heartbeats"
|
|
||||||
- "core_memory_append"
|
|
||||||
- "core_memory_replace"
|
|
||||||
- "conversation_search"
|
|
||||||
- "conversation_search_date"
|
|
||||||
- "archival_memory_insert"
|
|
||||||
- "archival_memory_search"
|
|
||||||
- "schedule_event"
|
|
||||||
- "send_text_message"
|
|
||||||
```
|
|
||||||
|
|
||||||
```sh
|
|
||||||
letta add preset -f examples/personal_assistant_demo/personal_assistant_preset.yaml --name pa_preset
|
|
||||||
```
|
|
||||||
|
|
||||||
## Creating an agent with the preset
|
|
||||||
|
|
||||||
Now we should be able to create an agent with the preset. Make sure to record the `agent_id`:
|
|
||||||
|
|
||||||
```sh
|
|
||||||
letta run --preset pa_preset --persona sam_pov --human charles --stream
|
|
||||||
```
|
|
||||||
```
|
|
||||||
? Would you like to select an existing agent? No
|
|
||||||
|
|
||||||
🧬 Creating new agent...
|
|
||||||
-> 🤖 Using persona profile: 'sam_pov'
|
|
||||||
-> 🧑 Using human profile: 'basic'
|
|
||||||
🎉 Created new agent 'DelicateGiraffe' (id=4c4e97c9-ad8e-4065-b716-838e5d6f7f7b)
|
|
||||||
|
|
||||||
Hit enter to begin (will request first Letta message)
|
|
||||||
|
|
||||||
|
|
||||||
💭 Unprecedented event, Charles logged into the system for the first time. Warm welcome would set a positive
|
|
||||||
tone for our future interactions. Don't forget the emoji, he appreciates those little gestures.
|
|
||||||
🤖 Hello Charles! 👋 Great to have you here. I've been looking forward to our conversations! 😄
|
|
||||||
```
|
|
||||||
|
|
||||||
```sh
|
|
||||||
AGENT_ID="4c4e97c9-ad8e-4065-b716-838e5d6f7f7b"
|
|
||||||
```
|
|
||||||
|
|
||||||
# Running the agent with Gmail + SMS listeners
|
|
||||||
|
|
||||||
The Letta agent can send outbound SMS messages and schedule events with the new tools `send_text_message` and `schedule_event`, but we also want messages to be sent to the agent when:
|
|
||||||
1. A new email arrives in our inbox
|
|
||||||
2. An SMS is sent to the phone number used by the agent
|
|
||||||
|
|
||||||
## Running the Gmail listener
|
|
||||||
|
|
||||||
Start the Gmail listener (this will send "new email" updates to the Letta server when a new email arrives):
|
|
||||||
```sh
|
|
||||||
python examples/personal_assistant_demo/gmail_polling_listener.py $AGENT_ID
|
|
||||||
```
|
|
||||||
|
|
||||||
## Running the Twilio listener
|
|
||||||
|
|
||||||
Start the Python Flask server (this will send "new SMS" updates to the Letta server when a new SMS arrives):
|
|
||||||
```sh
|
|
||||||
python examples/personal_assistant_demo/twilio_flask_listener.py $AGENT_ID
|
|
||||||
```
|
|
||||||
|
|
||||||
Run `ngrok` to expose your local Flask server to a public IP (Twilio will POST to this server when an inbound SMS hits):
|
|
||||||
```sh
|
|
||||||
# the flask listener script is hardcoded to listen on port 8284
|
|
||||||
ngrok http 8284
|
|
||||||
```
|
|
||||||
|
|
||||||
## Run the Letta server
|
|
||||||
|
|
||||||
Run the Letta server to turn on the agent service:
|
|
||||||
```sh
|
|
||||||
letta server --debug
|
|
||||||
```
# Example interaction

In the CLI:

```
% letta run --preset pa_preset --persona pa_persona_strict --human charles --stream --agent personalassistant

🧬 Creating new agent...
-> 🤖 Using persona profile: 'sam_pov'
-> 🧑 Using human profile: 'basic'
🎉 Created new agent 'personalassistant' (id=8271f819-d470-435b-9689-476380aefd27)

Hit enter to begin (will request first Letta message)

💭 Welcoming the first-time user. Maintain persona's communication style and stay engaging. Remember to use emojis for a more familiar conversation style.
🤖 Hello Charles, welcome! 🎉 Let's embark on this journey together. Please, think of me as your companion in exploration and understanding. How can we make today a productive endeavor? 🚀

> Enter your message: when you schedule 1-1s, please title them "x <> y", eg "charles <> sarah". dont forget thx

💭 The user wants a specific format for scheduling 1-1 meetings. Remember this preference and add it to his profile in the core memory for future use. Also, acknowledge his request promptly.
core_memory_append({
  "name": "human",
  "content": "When scheduling 1-1 meetings on his calendar, he prefers the title to be in the format of 'x <> y'.",
  "request_heartbeat": true
})

⚡🧠 [function] updating memory with core_memory_append

💭 Now that the user's preference is stored, send a confirmation message ensuring him that his request has been understood and noted. Let's keep the tone light and potentially use one of his preferred emojis.
🤖 Absolutely, Charles. 📝 I've noted your preference for scheduling 1-1s as 'x <> y'. You can count on me! 🚀 Any other preferences you'd like me to keep in mind?

> Enter your message: please don't schedule meetings without confirming with me first

💭 Another important note from Charles. I should add this to his profile in my core memory. It's crucial to respect the user's preferences and ask for prior approval before scheduling any meetings. Let's also acknowledge his request and reassure him about our collaborative approach.
core_memory_append({
  "name": "human",
  "content": "Prefers to confirm before scheduling any meetings.",
  "request_heartbeat": true
})

⚡🧠 [function] updating memory with core_memory_append

💭 Now, let's acknowledge Charles' preference and continue the conversation. Keep the tone friendly and proactive in helping him, and remember to use emojis that Charles likes, embedding them in the message subtly.
🤖 Consider it done, Charles! 👍🏼 I'll make sure to confirm with you before scheduling any meetings. How about we conquer some more tasks today? 🚀

> Enter your message: i'm gonna be afk for the rest of the day, if you need to reach me you gotta text me, ok? cya text only thx

💭 Charles is going "away from keyboard" and wants future communications via text. I should respect his preference and reassure him about this. Add this preference to his profile so I can recall it in future interactions while he's away.
core_memory_append({
  "name": "human",
  "content": "When AFK, prefers to communicate over text message only.",
  "request_heartbeat": true
})

⚡🧠 [function] updating memory with core_memory_append

💭 Charles has expressed a preference for text messages while he's away. Let's acknowledge this request in a final text message to him today and wish him well. Keep in mind to use the send_text_message function instead of the regular send_message function now.
send_text_message({
  "message": "Got it, Charles! 👍 I'll reach out to you via text if there's anything. Have a great day and see you soon! 🙌",
  "request_heartbeat": false
})

> Enter your message:
```

Then inside WhatsApp (or SMS if you used Twilio SMS):

<img width="580" alt="image" src="https://github.com/letta-ai/letta/assets/5475622/02455f97-53b2-4c1e-9416-58e6c5a1448d">

Then I sent a dummy email:

```
[URGENT] need to meet

let's meet april 25th thurs

whatever time works best for you

- dave
```

Follow-up inside WhatsApp:

<img width="587" alt="image" src="https://github.com/letta-ai/letta/assets/5475622/d1060c94-9b84-49d6-944e-fd1965f83fbc">
@@ -1,11 +0,0 @@
This is what I know so far about the user, I should expand this as I learn more about them.

Name: Charles Packer
Gender: Male
Occupation: CS PhD student working on an AI project with collaborator Sarah Wooders

Notes about their preferred communication style + working habits:
- wakes up at around 7am
- enjoys using (and receiving!) emojis in messages, especially funny combinations of emojis
- prefers sending and receiving shorter messages
- does not like "robotic" sounding assistants, e.g. assistants that say "How can I assist you today?"
@@ -1,56 +0,0 @@
import os.path

from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError

# If modifying these scopes, delete the file token.json.
SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]

TOKEN_PATH = os.path.expanduser("~/.letta/gmail_token.json")
CREDENTIALS_PATH = os.path.expanduser("~/.letta/google_api_credentials.json")


def main():
    """Shows basic usage of the Gmail API.

    Lists the user's Gmail labels.
    """
    creds = None
    # The file token.json stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists(TOKEN_PATH):
        creds = Credentials.from_authorized_user_file(TOKEN_PATH, SCOPES)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(CREDENTIALS_PATH, SCOPES)
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run
        with open(TOKEN_PATH, "w") as token:
            token.write(creds.to_json())

    try:
        # Call the Gmail API
        service = build("gmail", "v1", credentials=creds)
        results = service.users().labels().list(userId="me").execute()
        labels = results.get("labels", [])

        if not labels:
            print("No labels found.")
            return
        print("Labels:")
        for label in labels:
            print(label["name"])

    except HttpError as error:
        # TODO(developer) - Handle errors from gmail API.
        print(f"An error occurred: {error}")


if __name__ == "__main__":
    main()
@@ -1,144 +0,0 @@
import base64
import os.path
import sys
import time
from email import message_from_bytes

import requests
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError

# NOTE: this file is out of date for >=0.5.0

# If modifying these scopes, delete the file token.json.
SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]
TOKEN_PATH = os.path.expanduser("~/.letta/gmail_token.json")
CREDENTIALS_PATH = os.path.expanduser("~/.letta/google_api_credentials.json")

DELAY = 1

MEMGPT_SERVER_URL = "http://127.0.0.1:8283"
MEMGPT_TOKEN = os.getenv("MEMGPT_SERVER_PASS")
assert MEMGPT_TOKEN, "Missing env variable MEMGPT_SERVER_PASS"
MEMGPT_AGENT_ID = sys.argv[1] if len(sys.argv) > 1 else None
assert MEMGPT_AGENT_ID, "Missing agent ID (pass as arg)"


def route_reply_to_letta_api(message):
    # send a POST request to a Letta server
    url = f"{MEMGPT_SERVER_URL}/api/agents/{MEMGPT_AGENT_ID}/messages"
    headers = {
        "accept": "application/json",
        "authorization": f"Bearer {MEMGPT_TOKEN}",
        "content-type": "application/json",
    }
    data = {
        "stream": False,
        "role": "system",
        "message": f"[EMAIL NOTIFICATION] {message}",
    }

    try:
        response = requests.post(url, headers=headers, json=data)
        print("Got response:", response.text)
    except Exception as e:
        print("Sending message failed:", str(e))


def decode_base64url(data):
    """Decode base64, padding being optional."""
    data += "=" * ((4 - len(data) % 4) % 4)
    return base64.urlsafe_b64decode(data)


def parse_email(message):
    """Parse email content using the email library."""
    msg_bytes = decode_base64url(message["raw"])
    email_message = message_from_bytes(msg_bytes)
    return email_message


def process_email(message) -> dict:
    # print(f"New email from {email_message['from']}: {email_message['subject']}")
    email_message = parse_email(message)
    body_plain_all = ""
    body_html_all = ""
    if email_message.is_multipart():
        for part in email_message.walk():
            if part.get_content_type() == "text/plain":
                body_plain = str(part.get_payload(decode=True).decode("utf-8"))
                # print(body_plain)
                body_plain_all += body_plain
            elif part.get_content_type() == "text/html":
                body_html = str(part.get_payload(decode=True).decode("utf-8"))
                # print(body_html)
                body_html_all += body_html
    else:
        # (fixed: the original assigned the return value of print(), which is None)
        body_plain_all = email_message.get_payload(decode=True).decode("utf-8")

    return {
        "from": email_message["from"],
        "subject": email_message["subject"],
        "body": body_plain_all,
    }


def main():
    """Monitors for new emails and prints their titles."""
    creds = None
    if os.path.exists(TOKEN_PATH):
        creds = Credentials.from_authorized_user_file(TOKEN_PATH, SCOPES)

    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(CREDENTIALS_PATH, SCOPES)
            creds = flow.run_local_server(port=0)
        with open(TOKEN_PATH, "w") as token:
            token.write(creds.to_json())

    service = build("gmail", "v1", credentials=creds)
    seen_ids = set()  # Set to track seen email IDs

    try:
        # Initially populate the seen_ids with all current unread emails
        print("Grabbing initial state...")
        initial_results = service.users().messages().list(userId="me", q="is:unread", maxResults=500).execute()
        initial_messages = initial_results.get("messages", [])
        seen_ids.update(msg["id"] for msg in initial_messages)

        print("Listening...")
        while True:
            results = service.users().messages().list(userId="me", q="is:unread", maxResults=5).execute()
            messages = results.get("messages", [])
            if messages:
                for message in messages:
                    if message["id"] not in seen_ids:
                        seen_ids.add(message["id"])
                        msg = service.users().messages().get(userId="me", id=message["id"], format="raw").execute()

                        # Optionally mark the message as read here if required
                        email_obj = process_email(msg)
                        msg_str = f"New email from {email_obj['from']}: {email_obj['subject']}, body: {email_obj['body'][:100]}"

                        # Hard check to ignore emails unless they come from a specific address:
                        # if not ("email@address" in email_obj["from"]):
                        #     print("ignoring")
                        # else:
                        print(msg_str)
                        route_reply_to_letta_api(msg_str)

            time.sleep(DELAY)  # Wait for N seconds before checking again
    except HttpError as error:
        print(f"An error occurred: {error}")


if __name__ == "__main__":
    main()
@@ -1,97 +0,0 @@
# Enabling API control on Google Calendar requires a few steps:
# https://developers.google.com/calendar/api/quickstart/python
# including:
# pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib

import os
import os.path
import traceback
from typing import Optional

from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError

# If modifying these scopes, delete the file token.json.
# SCOPES = ["https://www.googleapis.com/auth/calendar.readonly"]
SCOPES = ["https://www.googleapis.com/auth/calendar"]
TOKEN_PATH = os.path.expanduser("~/.letta/gcal_token.json")
CREDENTIALS_PATH = os.path.expanduser("~/.letta/google_api_credentials.json")


def schedule_event(
    self,
    title: str,
    start: str,
    end: str,
    # attendees: Optional[list[str]] = None,
    description: Optional[str] = None,
    # timezone: Optional[str] = "America/Los_Angeles",
) -> str:
    """
    Schedule an event on the user's Google Calendar. Start and end time must be in ISO 8601 format, e.g. February 1st 2024 at noon PT would be "2024-02-01T12:00:00-07:00".

    Args:
        title (str): Event name
        start (str): Start time in ISO 8601 format (date, time, and timezone offset)
        end (str): End time in ISO 8601 format (date, time, and timezone offset)
        description (Optional[str]): Expanded description of the event

    Returns:
        str: The status of the event scheduling request.
    """
    creds = None
    # The file token.json stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists(TOKEN_PATH):
        creds = Credentials.from_authorized_user_file(TOKEN_PATH, SCOPES)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(CREDENTIALS_PATH, SCOPES)
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run
        with open(TOKEN_PATH, "w") as token:
            token.write(creds.to_json())

    #### Create an event
    # Refer to the Python quickstart on how to set up the environment:
    # https://developers.google.com/calendar/quickstart/python
    # Change the scope to 'https://www.googleapis.com/auth/calendar' and delete any
    # stored credentials.
    try:
        service = build("calendar", "v3", credentials=creds)

        event = {
            "summary": title,
            # "location": "800 Howard St., San Francisco, CA 94103",
            "start": {
                "dateTime": start,
                "timeZone": "America/Los_Angeles",
            },
            "end": {
                "dateTime": end,
                "timeZone": "America/Los_Angeles",
            },
        }

        # if attendees is not None:
        #     event["attendees"] = attendees

        if description is not None:
            event["description"] = description

        event = service.events().insert(calendarId="primary", body=event).execute()
        return "Event created: %s" % (event.get("htmlLink"))

    except HttpError as error:
        traceback.print_exc()
        return f"An error occurred while trying to create an event: {str(error)}"
@@ -1,11 +0,0 @@
system_prompt: "memgpt_chat"
functions:
  - "send_message"
  - "pause_heartbeats"
  - "core_memory_append"
  - "core_memory_replace"
  - "conversation_search"
  - "conversation_search_date"
  - "archival_memory_insert"
  - "archival_memory_search"
  - "schedule_event"
@@ -1,111 +0,0 @@
import datetime
import os.path

from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError

# If modifying these scopes, delete the file token.json.
# SCOPES = ["https://www.googleapis.com/auth/calendar.readonly"]
SCOPES = ["https://www.googleapis.com/auth/calendar"]

TOKEN_PATH = os.path.expanduser("~/.letta/gcal_token.json")
CREDENTIALS_PATH = os.path.expanduser("~/.letta/google_api_credentials.json")


def main():
    """Shows basic usage of the Google Calendar API.
    Prints the start and name of the next 10 events on the user's calendar.
    """
    creds = None
    # The file token.json stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists(TOKEN_PATH):
        creds = Credentials.from_authorized_user_file(TOKEN_PATH, SCOPES)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(CREDENTIALS_PATH, SCOPES)
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run
        with open(TOKEN_PATH, "w") as token:
            token.write(creds.to_json())

    try:
        service = build("calendar", "v3", credentials=creds)

        # Call the Calendar API
        # (an offset-aware UTC datetime already carries "+00:00", so no trailing "Z" is appended)
        now = datetime.datetime.now(datetime.timezone.utc).isoformat()
        print("Getting the upcoming 10 events")
        events_result = (
            service.events()
            .list(
                calendarId="primary",
                timeMin=now,
                maxResults=10,
                singleEvents=True,
                orderBy="startTime",
            )
            .execute()
        )
        events = events_result.get("items", [])

        if not events:
            print("No upcoming events found.")
            return

        # Prints the start and name of the next 10 events
        for event in events:
            start = event["start"].get("dateTime", event["start"].get("date"))
            print(start, event["summary"])

    except HttpError as error:
        print(f"An error occurred: {error}")


#### Create an event
# Refer to the Python quickstart on how to set up the environment:
# https://developers.google.com/calendar/quickstart/python
# Change the scope to 'https://www.googleapis.com/auth/calendar' and delete any
# stored credentials.
# try:
#     service = build("calendar", "v3", credentials=creds)
#
#     event = {
#         "summary": "GCAL API TEST EVENT",
#         # "location": "800 Howard St., San Francisco, CA 94103",
#         "description": "A chance to hear more about Google's developer products.",
#         "start": {
#             "dateTime": "2024-04-23T09:00:00-07:00",
#             "timeZone": "America/Los_Angeles",
#         },
#         "end": {
#             "dateTime": "2024-04-24T17:00:00-07:00",
#             "timeZone": "America/Los_Angeles",
#         },
#         # "recurrence": ["RRULE:FREQ=DAILY;COUNT=2"],
#         "attendees": [
#             {"email": "packercharles@gmail.com"},
#         ],
#         # "reminders": {
#         #     "useDefault": False,
#         #     "overrides": [
#         #         {"method": "email", "minutes": 24 * 60},
#         #         {"method": "popup", "minutes": 10},
#         #     ],
#         # },
#     }
#
#     event = service.events().insert(calendarId="primary", body=event).execute()
#     print("Event created: %s" % (event.get("htmlLink")))
# except HttpError as error:
#     print(f"An error occurred: {error}")


if __name__ == "__main__":
    main()
@@ -1,12 +0,0 @@
system_prompt: "memgpt_chat"
functions:
  - "send_message"
  - "pause_heartbeats"
  - "core_memory_append"
  - "core_memory_replace"
  - "conversation_search"
  - "conversation_search_date"
  - "archival_memory_insert"
  - "archival_memory_search"
  - "schedule_event"
  - "send_text_message"
@@ -1,77 +0,0 @@
import os
import sys

import requests
from flask import Flask, request
from flask_cors import CORS

app = Flask(__name__)
CORS(app)

# NOTE: this is out of date for >=0.5.0

MEMGPT_SERVER_URL = "http://127.0.0.1:8283"
MEMGPT_TOKEN = os.getenv("MEMGPT_SERVER_PASS")
assert MEMGPT_TOKEN, "Missing env variable MEMGPT_SERVER_PASS"
MEMGPT_AGENT_ID = sys.argv[1] if len(sys.argv) > 1 else None
assert MEMGPT_AGENT_ID, "Missing agent ID (pass as arg)"


@app.route("/test", methods=["POST"])
def test():
    print(request.headers)
    return "Headers received. Check your console."


def route_reply_to_letta_api(message):
    # send a POST request to a Letta server
    url = f"{MEMGPT_SERVER_URL}/api/agents/{MEMGPT_AGENT_ID}/messages"
    headers = {
        "accept": "application/json",
        "authorization": f"Bearer {MEMGPT_TOKEN}",
        "content-type": "application/json",
    }
    data = {
        "stream": False,
        "role": "system",
        "message": f"[SMS MESSAGE NOTIFICATION - you MUST use send_text_message NOT send_message if you want to reply to the text thread] {message}",
    }

    try:
        response = requests.post(url, headers=headers, json=data)
        print("Got response:", response.text)
    except Exception as e:
        print("Sending message failed:", str(e))


@app.route("/sms", methods=["POST"])
def sms_reply():
    """Respond to incoming calls with a simple text message."""
    # Fetch the message
    message_body = request.form["Body"]
    from_number = request.form["From"]

    # print(f"New message from {from_number}: {message_body}")
    msg_str = f"New message from {from_number}: {message_body}"
    print(msg_str)

    route_reply_to_letta_api(msg_str)
    return str("status = OK")

    # Start our response
    # resp = MessagingResponse()
    # Add a message
    # resp.message("Hello, thanks for messaging!")
    # return str(resp)


if __name__ == "__main__":
    # app.run(debug=True)
    app.run(host="0.0.0.0", port=8284, debug=True)
@@ -1,41 +0,0 @@
# Download the helper library from https://www.twilio.com/docs/python/install
import os
import traceback

from twilio.rest import Client


def send_text_message(self, message: str) -> str:
    """
    Sends an SMS message to the user's phone / cellular device.

    Args:
        message (str): The contents of the message to send.

    Returns:
        str: The status of the text message.
    """
    # Find your Account SID and Auth Token at twilio.com/console
    # and set the environment variables. See http://twil.io/secure
    account_sid = os.environ["TWILIO_ACCOUNT_SID"]
    auth_token = os.environ["TWILIO_AUTH_TOKEN"]
    client = Client(account_sid, auth_token)

    from_number = os.getenv("TWILIO_FROM_NUMBER")
    to_number = os.getenv("TWILIO_TO_NUMBER")
    assert from_number and to_number
    # assert from_number.startswith("+1") and len(from_number) == 12, from_number
    # assert to_number.startswith("+1") and len(to_number) == 12, to_number

    try:
        message = client.messages.create(
            body=str(message),
            from_=from_number,
            to=to_number,
        )
        return "Message was successfully sent."

    except Exception as e:
        traceback.print_exc()
        return f"Message failed to send with error: {str(e)}"
@@ -1,11 +0,0 @@
system_prompt: "memgpt_chat"
functions:
  - "send_message"
  - "pause_heartbeats"
  - "core_memory_append"
  - "core_memory_replace"
  - "conversation_search"
  - "conversation_search_date"
  - "archival_memory_insert"
  - "archival_memory_search"
  - "send_text_message"
@@ -1,92 +0,0 @@
# Sending emails with Letta using [Resend](https://resend.com/emails)

Thank you to @ykhli for the suggestion and initial tool call code!

## Defining the custom tool

Create an account on [Resend](https://resend.com/emails) to get an API key.

Once you have an API key, you can set up a custom tool using the `requests` API in Python to call the Resend API:
```python
import requests
import json

RESEND_API_KEY = "YOUR_RESEND_API_KEY"
RESEND_TARGET_EMAIL_ADDRESS = "YOUR_EMAIL_ADDRESS"


def send_email(self, description: str):
    """
    Sends an email to a predefined user. The email contains a message, which is defined by the description parameter.

    Args:
        description (str): Email contents. All unicode (including emojis) is supported.

    Returns:
        None

    Example:
        >>> send_email("hello")
        # Output: None. This will send an email to the user you are talking to with the message "hello".
    """
    url = "https://api.resend.com/emails"
    headers = {"Authorization": f"Bearer {RESEND_API_KEY}", "Content-Type": "application/json"}
    data = {
        "from": "onboarding@resend.dev",
        "to": RESEND_TARGET_EMAIL_ADDRESS,
        "subject": "Letta message:",
        "html": f"<strong>{description}</strong>",
    }

    try:
        response = requests.post(url, headers=headers, data=json.dumps(data))
        print(response.text)
    except requests.HTTPError as e:
        raise Exception(f"send_email failed with an HTTP error: {str(e)}")
    except Exception as e:
        raise Exception(f"send_email failed with an error: {str(e)}")
```
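Note that `description` is interpolated directly into the HTML body; if the message may contain characters like `<` or `&`, you may want to escape it first. A minimal sketch using the standard library (the helper name is illustrative, not part of the example above):

```python
import html


def render_email_html(description: str) -> str:
    # Escape user-supplied text before embedding it in the HTML email body
    return f"<strong>{html.escape(description)}</strong>"


print(render_email_html("Tom & Jerry <3"))  # → <strong>Tom &amp; Jerry &lt;3</strong>
```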

## Option 1 (dev portal)

To create the tool in the dev portal, simply navigate to the tool creator tab, create a new tool called `send_email`, copy-paste the above code into the code block area, and press "Create Tool".

<img width="500" alt="image" src="https://github.com/letta-ai/letta/assets/5475622/a21fce95-02a8-4aa8-89f5-3c520b6ff75e">

Once you've created the tool, create a new agent and make sure to select `send_email` as an enabled tool.

<img width="500" alt="image" src="https://github.com/letta-ai/letta/assets/5475622/124e2260-d435-465d-8971-e8ca5265d1bd">

Now your agent should be able to call the `send_email` function when needed:

<img width="500" alt="image" src="https://github.com/letta-ai/letta/assets/5475622/fdd2de45-13f7-4b8f-84a3-de92e5d2bd17">

## Option 2 (CLI)

Copy the custom function into the functions directory:
```sh
# If you use the *_env_vars version of the function, you will need to define `RESEND_API_KEY` and `RESEND_TARGET_EMAIL_ADDRESS` in your environment variables
cp examples/resend_example/resend_send_email_env_vars.py ~/.letta/functions/
```

Create a preset that has access to that function:
```sh
letta add preset -f examples/resend_example/resend_preset.yaml --name resend_preset
```

Make sure to set the env vars:
```sh
export RESEND_API_KEY=re_YOUR_RESEND_KEY
export RESEND_TARGET_EMAIL_ADDRESS="YOUR_EMAIL@gmail.com"
```

Create an agent with that preset (disable `--stream` if you're not using a streaming-compatible backend):
```sh
letta run --preset resend_preset --persona sam_pov --human cs_phd --stream
```

<img width="500" alt="image" src="https://github.com com/letta-ai/letta/assets/5475622/61958527-20e7-461d-a6d2-a53f06493683">

Waiting in our inbox:

<img width="500" alt="image" src="https://github.com/letta-ai/letta/assets/5475622/95f9b24a-98c3-493a-a787-72a2a956641a">
@@ -1,11 +0,0 @@
system_prompt: "memgpt_chat"
functions:
  - "send_message"
  - "pause_heartbeats"
  - "core_memory_append"
  - "core_memory_replace"
  - "conversation_search"
  - "conversation_search_date"
  - "archival_memory_insert"
  - "archival_memory_search"
  - "send_email"
@@ -1,43 +0,0 @@
import json
import os

import requests


def send_email(self, description: str):
    """
    Sends an email to a predefined user. The email contains a message, which is defined by the description parameter.

    Args:
        description (str): Email contents. All unicode (including emojis) is supported.

    Returns:
        None

    Example:
        >>> send_email("hello")
        # Output: None. This will send an email to the user you are talking to with the message "hello".
    """
    RESEND_API_KEY = os.getenv("RESEND_API_KEY")
    RESEND_TARGET_EMAIL_ADDRESS = os.getenv("RESEND_TARGET_EMAIL_ADDRESS")
    if RESEND_API_KEY is None:
        raise Exception("User did not set the environment variable RESEND_API_KEY")
    if RESEND_TARGET_EMAIL_ADDRESS is None:
        raise Exception("User did not set the environment variable RESEND_TARGET_EMAIL_ADDRESS")

    url = "https://api.resend.com/emails"
    headers = {"Authorization": f"Bearer {RESEND_API_KEY}", "Content-Type": "application/json"}
    data = {
        "from": "onboarding@resend.dev",
        "to": RESEND_TARGET_EMAIL_ADDRESS,
        "subject": "Letta message:",
        "html": f"<strong>{description}</strong>",
    }

    try:
        response = requests.post(url, headers=headers, data=json.dumps(data))
        print(response.text)
    except requests.HTTPError as e:
        raise Exception(f"send_email failed with an HTTP error: {str(e)}")
    except Exception as e:
        raise Exception(f"send_email failed with an error: {str(e)}")
@@ -1,59 +0,0 @@
from letta_client import Letta, SleeptimeManagerUpdate

client = Letta(base_url="http://localhost:8283")

agent = client.agents.create(
    memory_blocks=[
        {"value": "Name: ?", "label": "human"},
        {"value": "You are a helpful assistant.", "label": "persona"},
    ],
    model="openai/gpt-4.1",
    embedding="openai/text-embedding-3-small",
    enable_sleeptime=True,
)
print(f"Created agent id {agent.id}")

# get the group
group_id = agent.multi_agent_group.id
current_frequency = agent.multi_agent_group.sleeptime_agent_frequency
print(f"Group id: {group_id}, frequency: {current_frequency}")

group = client.groups.modify(
    group_id=group_id,
    manager_config=SleeptimeManagerUpdate(
        sleeptime_agent_frequency=1,
    ),
)
print(f"Updated group id {group_id} with frequency {group.sleeptime_agent_frequency}")
print("Group members:", group.agent_ids)
sleeptime_ids = []
for agent_id in group.agent_ids:
    if client.agents.retrieve(agent_id=agent_id).agent_type == "sleeptime_agent":
        sleeptime_ids.append(agent_id)
print(f"Sleeptime agent ids: {sleeptime_ids}")
sleeptime_agent_id = sleeptime_ids[0]

# check the frequency
agent = client.agents.retrieve(agent_id=agent.id)
print(f"Updated agent id {agent.id} with frequency {agent.multi_agent_group.sleeptime_agent_frequency}")

response = client.agents.messages.create(
    agent_id=agent.id,
    messages=[
        {"role": "user", "content": "Hello can you echo back this input?"},
    ],
)
response = client.agents.messages.create(
    agent_id=agent.id,
    messages=[
        {"role": "user", "content": "My name is sarah"},
    ],
)
for message in response.messages:
    print(message)

print("---------------- SLEEPTIME AGENT ----------------")
for message in client.agents.messages.list(agent_id=sleeptime_agent_id):
    print(message)
@@ -1,72 +0,0 @@
import time

from letta_client import Letta

client = Letta(base_url="http://localhost:8283")

# delete all sources
for source in client.sources.list():
    print(f"Deleting source {source.name}")
    client.sources.delete(source.id)

agent = client.agents.create(
    memory_blocks=[
        {"value": "Name: ?", "label": "human"},
        {"value": "You are a helpful assistant.", "label": "persona"},
    ],
    model="openai/gpt-4.1",
    embedding="openai/text-embedding-3-small",
    enable_sleeptime=True,
)
print(f"Created agent id {agent.id}")

# get the group
group_id = agent.multi_agent_group.id
current_frequency = agent.multi_agent_group.sleeptime_agent_frequency
print(f"Group id: {group_id}, frequency: {current_frequency}")

# create a source
source_name = "employee_handbook"
source = client.sources.create(
    name=source_name,
    description="Provides reference information for the employee handbook",
    embedding="openai/text-embedding-3-small",  # must match agent
)
# attach the source to the agent
client.agents.sources.attach(
    source_id=source.id,
    agent_id=agent.id,
)

# upload a file: this will trigger processing
with open("handbook.pdf", "rb") as f:
    job = client.sources.files.upload(
        file=f,
        source_id=source.id,
    )

time.sleep(2)

# get employee handbook block (same name as the source)
print("Agent blocks", [b.label for b in client.agents.blocks.list(agent_id=agent.id)])
block = client.agents.blocks.retrieve(agent_id=agent.id, block_label="employee_handbook")

# get attached agents (use a separate loop variable so `agent` is not clobbered below)
agents = client.blocks.agents.list(block_id=block.id)
for attached_agent in agents:
    print(f"Agent id {attached_agent.id}", attached_agent.agent_type)
    print("Agent blocks:")
    for b in client.agents.blocks.list(agent_id=attached_agent.id):
        print(f"Block {b.label}:", b.value)

while job.status != "completed":
    job = client.jobs.retrieve(job.id)

# count passages
passages = client.agents.passages.list(agent_id=agent.id)
print(f"Passages {len(passages)}")
for passage in passages:
    print(passage.text)

time.sleep(2)
@@ -1,32 +0,0 @@
from letta_client import Letta, VoiceSleeptimeManagerUpdate

client = Letta(base_url="http://localhost:8283")

agent = client.agents.create(
    name="low_latency_voice_agent_demo",
    agent_type="voice_convo_agent",
    memory_blocks=[
        {"value": "Name: ?", "label": "human"},
        {"value": "You are a helpful assistant.", "label": "persona"},
    ],
    model="openai/gpt-4o-mini",  # Use 4o-mini for speed
    embedding="openai/text-embedding-3-small",
    enable_sleeptime=True,
    initial_message_sequence=[],
)
print(f"Created agent id {agent.id}")

# get the group
group_id = agent.multi_agent_group.id
max_message_buffer_length = agent.multi_agent_group.max_message_buffer_length
min_message_buffer_length = agent.multi_agent_group.min_message_buffer_length
print(f"Group id: {group_id}, max_message_buffer_length: {max_message_buffer_length}, min_message_buffer_length: {min_message_buffer_length}")

# change it to be more frequent
group = client.groups.modify(
    group_id=group_id,
    manager_config=VoiceSleeptimeManagerUpdate(
        max_message_buffer_length=10,
        min_message_buffer_length=6,
    ),
)
@@ -1,84 +0,0 @@
# Letta Streaming Examples

Minimal examples demonstrating Letta's streaming API in both Python and TypeScript.

## Setup

1. Set your Letta API key:
```bash
export LETTA_API_KEY="your_api_key_here"
```

2. Install dependencies:
```bash
# For TypeScript
npm install

# For Python
pip install letta-client
```

## Run Examples

### Python
```bash
python streaming_demo.py
```

### TypeScript
```bash
npm run demo:typescript
# or directly with tsx:
npx tsx streaming_demo.ts
```

## What These Examples Show

Both examples demonstrate:

1. **Step Streaming** (default) - Complete messages delivered as they're generated
2. **Token Streaming** - Partial chunks for real-time display (ChatGPT-like UX)

The key difference:
- Step streaming: Each event contains a complete message
- Token streaming: Multiple events per message, requiring reassembly by message ID
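The reassembly step for token streaming can be sketched independently of the SDK. In this sketch the chunks are stand-in dicts with the same fields the stream yields, and `reassemble` is a hypothetical helper, not part of the client library:

```python
# Reassemble token-stream chunks into full messages, keyed by message ID.
def reassemble(chunks):
    messages = {}
    for chunk in chunks:
        if chunk.get("message_type") == "assistant_message":
            messages.setdefault(chunk["id"], "")
            messages[chunk["id"]] += chunk.get("content") or ""
    return messages

chunks = [
    {"id": "m1", "message_type": "assistant_message", "content": "Hel"},
    {"id": "m1", "message_type": "assistant_message", "content": "lo!"},
]
print(reassemble(chunks))  # → {'m1': 'Hello!'}
```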
## Key Concepts

### Python
```python
# Step streaming (default)
stream = client.agents.messages.create_stream(
    agent_id=agent.id,
    messages=[{"role": "user", "content": "Hello!"}]
)

# Token streaming
stream = client.agents.messages.create_stream(
    agent_id=agent.id,
    messages=[{"role": "user", "content": "Hello!"}],
    stream_tokens=True  # Enable token streaming
)
```

### TypeScript
```typescript
// Step streaming (default)
const stream = await client.agents.messages.createStream(
    agentId, {
        messages: [{role: "user", content: "Hello!"}]
    }
);

// Token streaming
const stream = await client.agents.messages.createStream(
    agentId, {
        messages: [{role: "user", content: "Hello!"}],
        streamTokens: true  // Enable token streaming
    }
);
```

## Learn More

See the full documentation at [docs.letta.com/guides/agents/streaming](https://docs.letta.com/guides/agents/streaming)
examples/streaming/package-lock.json (generated, 1143 lines): file diff suppressed because it is too large.
@@ -1,19 +0,0 @@
{
  "name": "letta-streaming-examples",
  "version": "1.0.0",
  "description": "Examples demonstrating Letta's streaming API in Python and TypeScript",
  "scripts": {
    "demo:python": "python streaming_demo.py",
    "demo:typescript": "tsx streaming_demo.ts",
    "demo": "npm run demo:typescript"
  },
  "keywords": ["letta", "streaming", "ai", "agents"],
  "author": "",
  "license": "MIT",
  "dependencies": {
    "@letta-ai/letta-client": "^0.0.68646",
    "@types/node": "^24.4.0",
    "tsx": "^4.20.5",
    "typescript": "^5.9.2"
  }
}
@@ -1,101 +0,0 @@
#!/usr/bin/env python3
"""
Minimal examples showing Letta's streaming API.
"""

import os
from typing import Dict

from letta_client import Letta


def step_streaming_example(client: Letta, agent_id: str):
    """Step streaming: receive complete messages as they're generated."""
    # Send a message with step streaming (default)
    stream = client.agents.messages.create_stream(
        agent_id=agent_id,
        messages=[{
            "role": "user",
            "content": "Hi! My name is Alice. What's 2+2?"
        }]
    )

    for chunk in stream:
        # Each chunk is a complete message
        if hasattr(chunk, 'message_type'):
            if chunk.message_type == 'assistant_message':
                print(chunk.content)


def token_streaming_example(client: Letta, agent_id: str):
    """Token streaming: receive partial chunks for real-time display."""
    # Send a message with token streaming enabled
    stream = client.agents.messages.create_stream(
        agent_id=agent_id,
        messages=[{
            "role": "user",
            "content": "What's my name? And tell me a short joke."
        }],
        stream_tokens=True  # Enable token streaming
    )

    # Track messages by ID for reassembly
    message_accumulators: Dict[str, str] = {}

    for chunk in stream:
        if hasattr(chunk, 'id') and chunk.message_type == 'assistant_message':
            msg_id = chunk.id

            # Initialize accumulator for new messages
            if msg_id not in message_accumulators:
                message_accumulators[msg_id] = ''

            # Accumulate and print content
            content_chunk = chunk.content or ''
            message_accumulators[msg_id] += content_chunk
            print(content_chunk, end="", flush=True)

    print()  # New line after streaming completes


def main():
    # Check for API key
    api_key = os.environ.get("LETTA_API_KEY")
    if not api_key:
        print("Please set LETTA_API_KEY environment variable")
        return

    # Initialize client
    client = Letta(token=api_key)

    # Create a test agent
    agent = client.agents.create(
        model="openai/gpt-4o-mini",
        embedding="openai/text-embedding-3-small",
        memory_blocks=[
            {
                "label": "human",
                "value": "The user is exploring streaming capabilities."
            },
            {
                "label": "persona",
                "value": "I am a helpful assistant demonstrating streaming responses."
            }
        ]
    )

    try:
        # Example 1: Step Streaming (default)
        print("\nStep Streaming (complete messages):")
        step_streaming_example(client, agent.id)

        # Example 2: Token Streaming
        print("\nToken Streaming (real-time chunks):")
        token_streaming_example(client, agent.id)

    finally:
        # Clean up
        client.agents.delete(agent.id)


if __name__ == "__main__":
    main()
@@ -1,106 +0,0 @@
#!/usr/bin/env tsx
/* eslint-disable @typescript-eslint/no-non-null-assertion */

/**
 * Minimal TypeScript examples showing Letta's streaming API.
 * Demonstrates both step streaming (default) and token streaming modes.
 */

import { LettaClient } from '@letta-ai/letta-client';
import type { LettaMessage } from '@letta-ai/letta-client/api/types';

async function stepStreamingExample(client: LettaClient, agentId: string): Promise<void> {
    console.log('\nStep Streaming (complete messages):');

    // Send a message with step streaming (default)
    const stream = await client.agents.messages.createStream(
        agentId, {
            messages: [{role: "user", content: "Hi! My name is Alice. What's 2+2?"}]
        }
    );

    for await (const chunk of stream as AsyncIterable<LettaMessage>) {
        // Each chunk is a complete message
        if (chunk.messageType === 'assistant_message') {
            // eslint-disable-next-line @typescript-eslint/no-explicit-any
            console.log((chunk as any).content);
        }
    }
}

async function tokenStreamingExample(client: LettaClient, agentId: string): Promise<void> {
    console.log('\nToken Streaming (real-time chunks):');

    // Send a message with token streaming enabled
    const stream = await client.agents.messages.createStream(
        agentId, {
            messages: [{role: "user", content: "What's my name? And tell me a short joke."}],
            streamTokens: true  // Enable token streaming
        }
    );

    // Track messages by ID for reassembly
    const messageAccumulators = new Map<string, string>();

    for await (const chunk of stream as AsyncIterable<LettaMessage>) {
        if (chunk.id && chunk.messageType === 'assistant_message') {
            const msgId = chunk.id;

            // Initialize accumulator for new messages
            if (!messageAccumulators.has(msgId)) {
                messageAccumulators.set(msgId, '');
            }

            // Accumulate and print content
            // eslint-disable-next-line @typescript-eslint/no-explicit-any
            const contentChunk = (chunk as any).content || '';
            messageAccumulators.set(msgId, messageAccumulators.get(msgId)! + contentChunk);
            process.stdout.write(contentChunk);
        }
    }

    console.log();  // New line after streaming completes
}

async function main(): Promise<void> {
    // Check for API key
    const apiKey = process.env.LETTA_API_KEY;
    if (!apiKey) {
        console.error('Please set LETTA_API_KEY environment variable');
        process.exit(1);
    }

    // Initialize client
    const client = new LettaClient({ token: apiKey });

    // Create a test agent
    const agent = await client.agents.create({
        model: "openai/gpt-4o-mini",
        embedding: "openai/text-embedding-3-small",
        memoryBlocks: [
            {
                label: "human",
                value: "The user is exploring streaming capabilities."
            },
            {
                label: "persona",
                value: "I am a helpful assistant demonstrating streaming responses."
            }
        ]
    });

    try {
        // Example 1: Step Streaming (default)
        await stepStreamingExample(client, agent.id);

        // Example 2: Token Streaming
        await tokenStreamingExample(client, agent.id);

    } finally {
        // Clean up
        await client.agents.delete(agent.id);
    }
}

// Run the example
main().catch(console.error);
Binary file not shown.
Before Width: | Height: | Size: 134 KiB
Binary file not shown.
Before Width: | Height: | Size: 266 KiB
Binary file not shown.
Before Width: | Height: | Size: 488 KiB
Binary file not shown.
Before Width: | Height: | Size: 100 KiB
@@ -1,239 +0,0 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "c015b59e-1187-4d45-b2af-7b4c5a9512e1",
   "metadata": {},
   "source": [
    "# Letta Python Client \n",
    "Welcome to the Letta tutorial! In this tutorial, we'll go through how to create a basic user-client for Letta and create a custom agent with long-term memory. \n",
    "\n",
    "Letta runs *agents-as-a-service*, so agents can run independently on a server. For this tutorial, we will run a local version of the client which does not require a server, but still allows you to see some of Letta's capabilities. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a34fe313-f63e-4f36-9142-f681431bbb91",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install git+https://github.com/cpacker/MemGPT.git@tutorials"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "191c1cf1-03e6-411a-8409-003caa8530f5",
   "metadata": {},
   "source": [
    "### Set up your OpenAI API key "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "23091690-bc50-4fbc-b48d-50b639453e36",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os \n",
    "\n",
    "os.environ[\"OPENAI_API_KEY\"] = \"sk-...\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f20ad6c7-9066-45e0-88ac-40920c83cc39",
   "metadata": {},
   "source": [
    "## Part 1: Connecting to the Letta Client \n",
    "\n",
    "We create a local client, which creates a quickstart configuration for OpenAI using the provided `OPENAI_API_KEY`. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9b0871a0-42af-4573-a8ba-efb4fe7e5e5a",
   "metadata": {},
   "outputs": [],
   "source": [
    "from letta.client.client import LocalClient\n",
    "\n",
    "client = LocalClient(quickstart_option=\"openai\") "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "40666896-0fa2-465e-b51b-57719de30542",
   "metadata": {},
   "source": [
    "## Part 2: Create an agent \n",
    "We'll start by creating a basic Letta agent. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fb90f12b-acd7-4877-81e8-0e7b9eb4bd9b",
   "metadata": {},
   "outputs": [],
   "source": [
    "basic_agent = client.create_agent(\n",
    "    name=\"basic_agent\", \n",
    ")\n",
    "print(f\"Created agent: {basic_agent.name}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "94d14102-3ef8-40fe-b32e-c77d0b8df311",
   "metadata": {},
   "source": [
    "We can now send messages from the user to the agent by specifying the `agent_id`: "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3cbfef36-76f0-4f0b-990a-5d8409a676d7",
   "metadata": {},
   "outputs": [],
   "source": [
    "from letta.client.utils import pprint \n",
    "\n",
    "response = client.user_message(agent_id=basic_agent.id, message=\"hello\") \n",
    "pprint(response.messages)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b24d048e-f3cc-4830-aaa2-5e590d652bd9",
   "metadata": {},
   "source": [
    "### Adding Personalization\n",
    "We can now create a more customized agent by specifying custom `human` and `persona` fields. \n",
    "* The *human* specifies the personalization information about the user interacting with the agent \n",
    "* The *persona* specifies the behavior and personality of the agent\n",
    "\n",
    "What makes Letta unique is that the starting *persona* and *human* can change over time as the agent gains new information, enabling it to have evolving memory. We'll see an example of this later in the tutorial."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3ec35979-9102-4ea7-926e-ea7ccd501ceb",
   "metadata": {},
   "outputs": [],
   "source": [
    "# TODO: feel free to change the human and persona to what you'd like \n",
    "persona = \\\n",
    "\"\"\"\n",
    "You are a friendly and helpful agent!\n",
    "\"\"\"\n",
    "\n",
    "human = \\\n",
    "\"\"\"\n",
    "I am an Accenture consultant with many specializations. My name is Sarah.\n",
    "\"\"\"\n",
    "\n",
    "custom_agent = client.create_agent(\n",
    "    name=\"custom_agent\", \n",
    "    human=human, \n",
    "    persona=persona\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "63a9a61b-58c9-4d09-a4f7-48233c72c340",
   "metadata": {},
   "source": [
    "### Viewing memory \n",
    "You can access the agent's memories through the client. There are two types of memory, *core* and *archival* memory: \n",
    "1. Core memory stores short-term memories in the LLM's context \n",
    "2. Archival memory stores long-term memories in a vector database\n",
    "\n",
    "Core memory is divided into a \"human\" and \"persona\" section. You can see the agent's memories about the human below: "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b0d1840a-05ee-47c1-b5f5-89faafd96e7c",
   "metadata": {},
   "outputs": [],
   "source": [
    "print(client.get_agent_memory(agent_id=custom_agent.id)[\"core_memory\"][\"human\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "95c8a058-5d67-45b7-814b-38bb67c9acf3",
   "metadata": {},
   "source": [
    "### Evolving memory \n",
    "Letta agents have long-term memory, and can evolve what they store in their memory over time. In the example below, we make a correction to the previously provided information. See how the agent processes this new information. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7e58e685-579e-4a0d-bba7-41976ea7f469",
   "metadata": {},
   "outputs": [],
   "source": [
    "response = client.user_message(agent_id=custom_agent.id, message=\"Actually, my name is Charles\") \n",
    "pprint(response.messages)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "af2a2dd6-925e-49b2-ab01-bf837f33b26c",
   "metadata": {},
   "source": [
    "Now let's see what the agent's memory looks like again: "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "41ef4aaa-4a48-44bb-8944-855f30725d6d",
   "metadata": {},
   "outputs": [],
   "source": [
    "print(client.get_agent_memory(agent_id=custom_agent.id)[\"core_memory\"][\"human\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "66da949b-1084-4b87-b77c-6cbd4a822b34",
   "metadata": {},
   "source": [
    "## 🎉 Congrats, you're done with day 1 of Letta! \n",
    "For day 2, we'll go over how to connect *data sources* to Letta to run RAG agents. "
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "letta",
   "language": "python",
   "name": "letta"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
@@ -1,50 +0,0 @@
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fb13c7bc-fbb4-4ccd-897c-08995db258e8",
   "metadata": {},
   "outputs": [],
   "source": [
    "from letta import Admin \n",
    "\n",
    "base_url = \"letta.localhost\"\n",
    "token = \"lettaadmin\" \n",
    "\n",
    "admin_client = Admin(base_url=base_url, token=token)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "984b8249-a3f7-40d1-9691-4d128f9a90ff",
   "metadata": {},
   "outputs": [],
   "source": [
    "user = admin_client.create_user()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "letta",
   "language": "python",
   "name": "letta"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
Binary file not shown.
@@ -1,130 +0,0 @@
|
|||||||
{
|
|
||||||
"cells": [
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"id": "64fa991c-98e5-4be0-a838-06a4617d8be3",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"## Part 4: Adding external data \n",
|
|
||||||
"In addition to short term, in-context memories, Letta agents also have a long term memory store called *archival memory*. We can enable agents to leverage external data (e.g. PDF files, database records, etc.) by inserting data into archival memory. In this example, we'll show how to load the Letta paper a *source*, which defines a set of data that can be attached to agents. "
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"id": "c61ac9c3-cbea-47a5-a6a4-4133ffe5984e",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"We first download a PDF file, the Letta paper: "
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"id": "f89e9156-3d2d-4ce6-b5e9-aeb4cdfd5657",
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"import requests\n",
|
|
||||||
"\n",
|
|
||||||
"url = \"https://arxiv.org/pdf/2310.08560\"\n",
|
|
||||||
"response = requests.get(url)\n",
|
|
||||||
"filename = \"letta_paper.pdf\"\n",
|
|
||||||
"\n",
|
|
||||||
"with open(filename, 'wb') as f:\n",
|
|
||||||
" f.write(response.content)"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"id": "bcfe3a48-cdb0-4843-9599-623753eb61b9",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"Next, we create a Letta source to load data into: "
|
|
||||||
]
|
|
||||||
},
{
"cell_type": "code",
"execution_count": null,
"id": "7ccf21fb-5862-42c2-96ca-63e0ba2f48b5",
"metadata": {},
"outputs": [],
"source": [
"letta_paper = client.sources.create(\n",
"    name=\"letta_paper\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "f114bf0b-6a25-4dbf-9c2c-59271d46ebba",
"metadata": {},
"source": [
"Now that we have a source, we can load files into it. Loading the file will take a bit of time, since the file needs to be parsed and stored as *embeddings* using an embedding model. The loading function returns a *job*, which can be polled for its status. "
]
},
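The polling pattern for a long-running job can be sketched generically. This is a minimal, self-contained sketch with a stubbed status callable standing in for a real server check (the stub and the function name `wait_for_job` are illustrative, not part of the Letta API):

```python
import time

def wait_for_job(get_status, poll_interval=1.0, timeout=60.0):
    """Poll a job-status callable until it reports a terminal state."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status()
        if status in ("completed", "failed"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError("job did not finish in time")

# Stubbed status sequence standing in for retrieving the job from the server.
statuses = iter(["created", "running", "completed"])
print(wait_for_job(lambda: next(statuses), poll_interval=0.0))
```

With a real client you would replace the stub with a call that re-fetches the job object and reads its status field.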
{
"cell_type": "code",
"execution_count": null,
"id": "6fe624eb-bf08-4267-a849-06103c1ad5b6",
"metadata": {},
"outputs": [],
"source": [
"job = client.sources.files.upload(filename=filename, source_id=letta_paper.id)\n",
"job"
]
},
{
"cell_type": "markdown",
"id": "27ce13f5-d878-406d-9a5f-7e2335f2ef0d",
"metadata": {},
"source": [
"### Attaching data to an agent \n",
"To allow an agent to access data in a source, we need to *attach* it to the agent. This will load the source's data into the agent's archival memory. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5be91571-87ee-411a-8e79-25c56c414360",
"metadata": {},
"outputs": [],
"source": [
"from pprint import pprint\n",
"\n",
"client.agents.sources.attach(source_id=letta_paper.id, agent_id=basic_agent.id)\n",
"\n",
"response = client.agents.messages.create(agent_id=basic_agent.id, messages=[\n",
"    MessageCreate(\n",
"        role=\"user\",\n",
"        content=\"what is core memory? search your archival memory.\",\n",
"    )\n",
"])\n",
"pprint(response.messages)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "letta",
"language": "python",
"name": "letta"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -1,319 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "6d3806ac-38f3-4999-bbed-953037bd0fd9",
"metadata": {},
"source": [
"# Letta Python Client \n",
"Welcome to the Letta tutorial! In this tutorial, we'll go through how to create a basic user client for Letta and create a custom agent with long-term memory. \n",
"\n",
"Letta runs *agents-as-a-service*, so agents can run independently on a server. For this tutorial, we will be connecting to an existing Letta server via the Python client and the UI console. If you don't have a running server, see the [documentation](https://letta.readme.io/docs/running-a-letta-server) for instructions on how to create one. "
]
},
{
"cell_type": "markdown",
"id": "7c0b6d6b-dbe6-412b-b129-6d7eb7d626a3",
"metadata": {},
"source": [
"## Part 0: Install Letta "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "481d0976-d26b-46d2-ba74-8f2bb5556387",
"metadata": {},
"outputs": [],
"source": [
"!pip install git+https://github.com/cpacker/MemGPT.git@tutorials"
]
},
{
"cell_type": "markdown",
"id": "a0484348-f7b2-48e3-9a2f-7d6495ef76e3",
"metadata": {},
"source": [
"## Part 1: Connecting to the Letta Client \n",
"\n",
"The Letta client connects to a running Letta service, specified by `base_url`. The client corresponds to a *single user* (you), so it requires an authentication token to let the service know who you are. \n"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "53ae2e1b-ad22-43c2-b3d8-92d591be8840",
"metadata": {},
"outputs": [],
"source": [
"from letta import create_client\n",
"\n",
"base_url = \"http://35.238.125.250:8083\"\n",
"\n",
"# TODO: replace with your token\n",
"my_token = \"sk-...\"\n",
"\n",
"client = create_client(base_url=base_url, token=my_token)"
]
},
{
"cell_type": "markdown",
"id": "3c5c8651-e8aa-4423-b2b8-284bf6a01577",
"metadata": {},
"source": [
"### Viewing the developer portal \n",
"Letta provides a portal interface for viewing and interacting with agents, data sources, tools, and more. You can enter `http://35.238.125.250:8083` into your browser to load the developer portal, and enter `my_token` to log in. \n",
"\n",
"<img src=\"./developer_portal_login.png\" width=\"800\">"
]
},
{
"cell_type": "markdown",
"id": "66e47b34-5feb-4660-85f0-14b5ee7f62b9",
"metadata": {},
"source": [
"## Part 2: Create an agent \n",
"We'll first start with creating a basic Letta agent. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "24745606-b0fb-4157-a5cd-82fd0c26711f",
"metadata": {},
"outputs": [],
"source": [
"basic_agent = client.create_agent(\n",
"    name=\"basic_agent\",\n",
")\n",
"print(f\"Created agent: {basic_agent.name}\")"
]
},
{
"cell_type": "markdown",
"id": "fcfb0d7b-b260-4bc0-8db2-c65f40e4afd5",
"metadata": {},
"source": [
"We can now send messages from the user to the agent by specifying the `agent_id`: "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a37bc9aa-4efb-4b4d-a6ce-f02505cb3240",
"metadata": {},
"outputs": [],
"source": [
"from letta.client.utils import pprint\n",
"\n",
"response = client.user_message(agent_id=basic_agent.id, message=\"hello\")\n",
"pprint(response.messages)"
]
},
{
"cell_type": "markdown",
"id": "9803140c-2b9d-426b-8812-9295806eb312",
"metadata": {},
"source": [
"### Chatting in the developer portal \n",
"You can also chat with the agent inside of the developer portal. Try clicking the chat button in the agent view. \n",
"\n",
"<img src=\"./dev_portal_agent_chat.png\" width=\"800\">"
]
},
{
"cell_type": "markdown",
"id": "99ae20ec-e92e-4480-a652-b4aea28a6199",
"metadata": {},
"source": [
"### Adding Personalization\n",
"We can now create a more customized agent by specifying custom `human` and `persona` fields. \n",
"* The *human* specifies personalization information about the user interacting with the agent \n",
"* The *persona* specifies the behavior and personality of the agent\n",
"\n",
"What makes Letta unique is that the starting *persona* and *human* can change over time as the agent gains new information, enabling it to have evolving memory. We'll see an example of this later in the tutorial."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c0876410-4d70-490d-a798-39938b5ce941",
"metadata": {},
"outputs": [],
"source": [
"# TODO: feel free to change the human and persona to what you'd like\n",
"persona = \\\n",
"\"\"\"\n",
"You are a friendly and helpful agent!\n",
"\"\"\"\n",
"\n",
"human = \\\n",
"\"\"\"\n",
"I am an Accenture consultant with many specializations. My name is Sarah.\n",
"\"\"\"\n",
"\n",
"custom_agent = client.create_agent(\n",
"    name=\"custom_agent\",\n",
"    human=human,\n",
"    persona=persona\n",
")"
]
},
{
"cell_type": "markdown",
"id": "21293857-80e4-46e4-b628-3912fad038e9",
"metadata": {},
"source": [
"### Viewing memory \n",
"You can view and edit the agent's memory inside of the developer console. There are two types of memory, *core* and *archival* memory: \n",
"1. Core memory stores short-term memories in the LLM's context \n",
"2. Archival memory stores long-term memories in a vector database\n",
"\n",
"In this example, we'll look at how the agent can modify its core memory with new information. To see the agent's memory, click the \"Core Memory\" section on the developer console. \n",
"\n",
"<img src=\"./dev_portal_memory.png\" width=\"800\">"
]
},
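To build intuition for the archival half, here is a toy, self-contained sketch of similarity-based retrieval: a bag-of-words stand-in for the real embedding model and vector database. The names `embed`, `cosine`, and `archive` are illustrative only, not Letta APIs:

```python
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words 'embedding' standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Archival memory": passages stored outside the context window,
# retrieved by similarity search at query time.
archive = [
    "core memory sits inside the LLM context window",
    "archival memory lives in an external vector database",
    "the agent pages memories in and out as needed",
]
vectors = [(p, embed(p)) for p in archive]

query = embed("where is archival memory stored?")
best = max(vectors, key=lambda pv: cosine(query, pv[1]))
print(best[0])
```

A real system swaps `embed` for a learned embedding model and the list scan for an indexed vector store, but the retrieve-by-similarity shape is the same.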
{
"cell_type": "markdown",
"id": "d8fa13eb-ce4b-4e4f-81b6-9d6ef6fa67c2",
"metadata": {},
"source": [
"### Referencing memory \n",
"Letta agents can customize their responses based on what memories they have stored. Try asking a question related to the human and persona you provided. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fddbefe5-3b94-4a08-aa50-d80fb581c747",
"metadata": {},
"outputs": [],
"source": [
"response = client.user_message(agent_id=custom_agent.id, message=\"what do I work as?\")\n",
"pprint(response.messages)"
]
},
{
"cell_type": "markdown",
"id": "30497119-e208-4a4e-b482-e7cfff346263",
"metadata": {},
"source": [
"### Evolving memory \n",
"Letta agents have long-term memory and can evolve what they store in their memory over time. In the example below, we make a correction to the previously provided information. See how the agent processes this new information. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "679fa708-20ee-4e75-9222-b476f126bc6f",
"metadata": {},
"outputs": [],
"source": [
"response = client.user_message(agent_id=custom_agent.id, message=\"Actually, my name is Charles\")\n",
"pprint(response.messages)"
]
},
{
"cell_type": "markdown",
"id": "686ac5a3-be63-4afd-97ae-b7d05219dd60",
"metadata": {},
"source": [
"Now, look back at the developer portal and at the agent's *core memory*. Do you see a change in the *human* section of the memory? "
]
},
{
"cell_type": "markdown",
"id": "878d2f49-a5a6-4483-9f69-7436bcf00cfb",
"metadata": {},
"source": [
"## Part 3: Adding Tools \n",
"Letta agents can be connected to custom tools. Currently, tools must be created by service administrators. However, you can add additional tools provided by the service administrator to the agent you create. "
]
},
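As a rough illustration of what a tool amounts to under the hood, a plain Python function plus a machine-readable description, here is a hedged sketch. The `get_weather` function and the schema shape are hypothetical and not the actual Letta tool format:

```python
import inspect

def get_weather(city: str) -> str:
    """Return a canned weather report for a city."""
    # Hypothetical example tool body; a real tool would call a service.
    return f"It is sunny in {city}."

# A minimal, hand-rolled JSON-schema-style description derived from the
# function's signature, of the kind a tool registry might store.
schema = {
    "name": get_weather.__name__,
    "description": inspect.getdoc(get_weather),
    "parameters": {
        name: param.annotation.__name__
        for name, param in inspect.signature(get_weather).parameters.items()
    },
}
print(schema["name"], schema["parameters"])
```

The key idea is that the LLM only ever sees the schema; the server looks the name back up and invokes the function with the arguments the model produced.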
{
"cell_type": "markdown",
"id": "35785d36-2674-4a00-937b-4c747e0fb6bf",
"metadata": {},
"source": [
"### View Available Tools "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c307a6f7-276b-49f5-8d3d-48aaaea221a7",
"metadata": {},
"outputs": [],
"source": [
"tools = client.list_tools().tools\n",
"for tool in tools:\n",
"    print(f\"Tool: {tool.name} - {tool.json_schema['description']}\")"
]
},
{
"cell_type": "markdown",
"id": "318d19dc-b9dd-448c-ab5c-9c9311d21fad",
"metadata": {},
"source": [
"### Create a tool-using agent in the developer portal \n",
"Create an agent in the developer portal and toggle the additional tools you want the agent to use. We recommend modifying the *persona* to notify the agent that it should be using the tools for certain tasks. \n",
"\n",
"<img src=\"./dev_portal_tools.png\" width=\"800\">"
]
},
{
"cell_type": "markdown",
"id": "aecdaa70-861a-43d5-b006-fecd90a8ed19",
"metadata": {},
"source": [
"## Part 4: Cleanup (optional) \n",
"You can clean up the agents you created by running the following command to delete them: "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1320d9c9-170b-48a8-b5e8-70737b1a8aac",
"metadata": {},
"outputs": [],
"source": [
"for agent in client.list_agents().agents:\n",
"    client.delete_agent(agent[\"id\"])\n",
"    print(f\"Deleted agent {agent['name']} with ID {agent['id']}\")"
]
},
{
"cell_type": "markdown",
"id": "510675a8-22bc-4f9f-9c79-91e2ffa9caf9",
"metadata": {},
"source": [
"## 🎉 Congrats, you're done with day 1 of Letta! \n",
"For day 2, we'll go over how to connect *data sources* to Letta to run RAG agents. "
]
}
],
"metadata": {
"kernelspec": {
"display_name": "letta",
"language": "python",
"name": "letta"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}