feat: Add MemGPT "Python Client" (#713)

* First commit of memgpt client and some messy test code

* Rolled back unnecessary changes to abstract interface; switched client to always use QueuingInterface

* Added missing interface clear() in run_command; added a convenience method for checking whether an agent exists and used it in create_agent

* Formatting fixes

* Fixed incorrect naming of get_agent_memory in rest server

* Removed erroneous clear from client save method; replaced print statements with appropriate logger calls in server

* Updated readme with client usage instructions

* added tests for Client

* Make printing to terminal toggleable on QueuingInterface (should probably refactor this to use a logger)

* turn off printing to stdout via interface by default

* allow importing the python client in a similar fashion to openai-python (see https://github.com/openai/openai-python)

* Allowed quickstart on init of client; updated readme and test_client accordingly

* oops, fixed name of openai_api_key config key

* Fixed small typo

* Fixed broken test by adding memgpt hosted model details to agent config

* silence llamaindex 'LLM is explicitly disabled. Using MockLLM.' on server

* default to openai if user's memgpt directory is empty (first time)

* correct type hint

* updated section on client in readme

* added comment about how MemGPT config != Agent config

* patch unrelated test

* update wording on readme

* patch another unrelated test

* added python client to readme docs

* Changed 'user' to 'human' in example; defaulted AgentConfig.model to 'None'; fixed issue in create_agent (accounting for dict config); matched test code to example

* Fixed advanced example

* patch test

* patch

---------

Co-authored-by: cpacker <packercharles@gmail.com>
Author: BabellDev, 2023-12-30 15:43:46 -05:00 (committed by GitHub)
parent 0b9fdcf46c
commit b2e9a24671
13 changed files with 387 additions and 32 deletions


@@ -125,6 +125,94 @@ poetry install
```
</details>
## Python integration (for developers)
The fastest way to integrate MemGPT with your own Python projects is through the `MemGPT` client class:
```python
from memgpt import MemGPT

# Create a MemGPT client object (sets up the persistent state)
client = MemGPT(
    quickstart="openai",
    config={
        "openai_api_key": "YOUR_API_KEY"
    }
)

# You can set many more parameters, this is just a basic example
agent_id = client.create_agent(
    agent_config={
        "persona": "sam_pov",
        "human": "cs_phd",
    }
)

# Now that we have an agent_id identifier, we can send the agent a message!
# The response will have data from the MemGPT agent
my_message = "Hi MemGPT! How's it going?"
response = client.user_message(agent_id=agent_id, message=my_message)
```
<details>
<summary>
<strong>More in-depth example of using MemGPT Client</strong>
</summary>
```python
from memgpt.config import AgentConfig
from memgpt import MemGPT
from memgpt.cli.cli import QuickstartChoice

client = MemGPT(
    # When auto_save is 'True' the agent(s) will be saved after every
    # user message. This may have performance implications, so you
    # can otherwise choose when to save explicitly using client.save().
    auto_save=True,
    # Quickstart will automatically configure MemGPT (without having to run `memgpt configure`)
    # If you choose 'openai' then you must set the api key (env or in config)
    quickstart=QuickstartChoice.memgpt_hosted,
    # Allows you to override the default config generated by quickstart or `memgpt configure`
    config={},
)

# Create an AgentConfig with default persona and human txt
# In this case, assume we wrote a custom persona file "my_persona.txt", located at ~/.memgpt/personas/my_persona.txt
# Same for a custom user file "my_user.txt", located at ~/.memgpt/humans/my_user.txt
agent_config = AgentConfig(
    name="CustomAgent",
    persona="my_persona",
    human="my_user",
)

# Create the agent according to the AgentConfig we set up. If an agent with
# the same name already exists it will simply return, unless you set
# throw_if_exists to 'True'
agent_id = client.create_agent(agent_config=agent_config)

# Create a helper that sends a message and prints the assistant response only
def send_message(message: str):
    """Sends a message and prints the assistant output only.

    :param message: the message to send
    """
    response = client.user_message(agent_id=agent_id, message=message)
    for r in response:
        # Can also handle other types: "function_call", "function_return", "function_message"
        if "assistant_message" in r:
            print("ASSISTANT:", r["assistant_message"])
        elif "internal_monologue" in r:
            print("THOUGHTS:", r["internal_monologue"])

# Send a message and see the response
send_message("Please introduce yourself and tell me about your abilities!")
```
</details>
## Support
For issues and feature requests, please [open a GitHub issue](https://github.com/cpacker/MemGPT/issues) or message us on our `#support` channel on [Discord](https://discord.gg/9GEQrxmVyE).

docs/python_client.md Normal file

@@ -0,0 +1,88 @@
---
title: Python client
excerpt: Developing using the MemGPT Python client
category: 6580dab16cade8003f996d17
---
The fastest way to integrate MemGPT with your own Python projects is through the `MemGPT` client class:
```python
from memgpt import MemGPT

# Create a MemGPT client object (sets up the persistent state)
client = MemGPT(
    quickstart="openai",
    config={
        "openai_api_key": "YOUR_API_KEY"
    }
)

# You can set many more parameters, this is just a basic example
agent_id = client.create_agent(
    agent_config={
        "persona": "sam_pov",
        "human": "cs_phd",
    }
)

# Now that we have an agent_id identifier, we can send the agent a message!
# The response will have data from the MemGPT agent
my_message = "Hi MemGPT! How's it going?"
response = client.user_message(agent_id=agent_id, message=my_message)
```
## More in-depth example of using the MemGPT Python client
```python
from memgpt.config import AgentConfig
from memgpt import MemGPT
from memgpt.cli.cli import QuickstartChoice

client = MemGPT(
    # When auto_save is 'True' the agent(s) will be saved after every
    # user message. This may have performance implications, so you
    # can otherwise choose when to save explicitly using client.save().
    auto_save=True,
    # Quickstart will automatically configure MemGPT (without having to run `memgpt configure`)
    # If you choose 'openai' then you must set the api key (env or in config)
    quickstart=QuickstartChoice.memgpt_hosted,
    # Allows you to override the default config generated by quickstart or `memgpt configure`
    config={},
)

# Create an AgentConfig with default persona and human txt
# In this case, assume we wrote a custom persona file "my_persona.txt", located at ~/.memgpt/personas/my_persona.txt
# Same for a custom user file "my_user.txt", located at ~/.memgpt/humans/my_user.txt
agent_config = AgentConfig(
    name="CustomAgent",
    persona="my_persona",
    human="my_user",
    preset="memgpt_chat",
    model="gpt-4",
)

# Create the agent according to the AgentConfig we set up. If an agent with
# the same name already exists it will simply return, unless you set
# throw_if_exists to 'True'
agent_id = client.create_agent(agent_config=agent_config)

# Create a helper that sends a message and prints the assistant response only
def send_message(message: str):
    """Sends a message and prints the assistant output only.

    :param message: the message to send
    """
    response = client.user_message(agent_id=agent_id, message=message)
    for r in response:
        # Can also handle other types: "function_call", "function_return", "function_message"
        if "assistant_message" in r:
            print("ASSISTANT:", r["assistant_message"])
        elif "internal_monologue" in r:
            print("THOUGHTS:", r["internal_monologue"])

# Send a message and see the response
send_message("Please introduce yourself and tell me about your abilities!")
```


@@ -1 +1,3 @@
__version__ = "0.2.10"
+
+from memgpt.client.client import Client as MemGPT


@@ -33,6 +33,14 @@ class QuickstartChoice(Enum):
    memgpt_hosted = "memgpt"

def str_to_quickstart_choice(choice_str: str) -> QuickstartChoice:
    try:
        return QuickstartChoice[choice_str]
    except KeyError:
        valid_options = [choice.name for choice in QuickstartChoice]
        raise ValueError(f"{choice_str} is not a valid QuickstartChoice. Valid options are: {valid_options}")

def set_config_with_dict(new_config: dict) -> bool:
    """Set the base config using a dict"""
    from memgpt.utils import printd
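The helper above looks members up by *name* (e.g. `"memgpt_hosted"`), not by value (`"memgpt"`), which is easy to get wrong. A standalone sketch, reproducing just the two choices visible in this diff, makes the distinction checkable in isolation:

```python
from enum import Enum

class QuickstartChoice(Enum):
    openai = "openai"
    memgpt_hosted = "memgpt"  # member name differs from its value

def str_to_quickstart_choice(choice_str: str) -> QuickstartChoice:
    # QuickstartChoice[...] indexes by member name, not by value
    try:
        return QuickstartChoice[choice_str]
    except KeyError:
        valid_options = [choice.name for choice in QuickstartChoice]
        raise ValueError(f"{choice_str} is not a valid QuickstartChoice. Valid options are: {valid_options}")
```

So `"memgpt_hosted"` resolves, while passing the *value* `"memgpt"` raises `ValueError`.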


memgpt/client/client.py Normal file

@@ -0,0 +1,114 @@
import os
from typing import Dict, List, Union

from memgpt.cli.cli import QuickstartChoice
from memgpt.cli.cli import set_config_with_dict, quickstart as quickstart_func, str_to_quickstart_choice
from memgpt.config import MemGPTConfig, AgentConfig
from memgpt.persistence_manager import PersistenceManager
from memgpt.server.rest_api.interface import QueuingInterface
from memgpt.server.server import SyncServer


class Client(object):
    def __init__(
        self,
        auto_save: bool = False,
        quickstart: Union[QuickstartChoice, str, None] = None,
        config: Union[Dict, MemGPTConfig] = None,  # not the same thing as AgentConfig
        debug: bool = False,
    ):
        """
        Initializes a new instance of the Client class.

        :param auto_save: indicates whether to automatically save after every message.
        :param quickstart: allows running quickstart on client init.
        :param config: optional config settings to apply after quickstart
        :param debug: indicates whether to display debug messages.
        """
        self.user_id = "null"
        self.auto_save = auto_save

        # make sure everything is set up properly
        MemGPTConfig.create_config_dir()

        # If this is the first ever start, do basic initialization
        if not MemGPTConfig.exists() and config is None and quickstart is None:
            # Default to openai
            print("Detecting uninitialized MemGPT, defaulting to quickstart == openai")
            quickstart = "openai"

        if quickstart:
            # api key passed in config has priority over env var
            if isinstance(config, dict) and "openai_api_key" in config:
                openai_key = config["openai_api_key"]
            else:
                openai_key = os.environ.get("OPENAI_API_KEY", None)

            # throw an error if we can't resolve the key
            if openai_key:
                os.environ["OPENAI_API_KEY"] = openai_key
            elif quickstart == QuickstartChoice.openai or quickstart == "openai":
                raise ValueError("Please set OPENAI_API_KEY or pass 'openai_api_key' in config dict")

            if isinstance(quickstart, str):
                quickstart = str_to_quickstart_choice(quickstart)
            quickstart_func(backend=quickstart, debug=debug)

        if config is not None:
            set_config_with_dict(config)

        self.interface = QueuingInterface(debug=debug)
        self.server = SyncServer(default_interface=self.interface)

    def list_agents(self):
        self.interface.clear()
        return self.server.list_agents(user_id=self.user_id)

    def agent_exists(self, agent_id: str) -> bool:
        existing = self.list_agents()
        return agent_id in existing["agent_names"]

    def create_agent(
        self,
        agent_config: Union[Dict, AgentConfig],
        persistence_manager: Union[PersistenceManager, None] = None,
        throw_if_exists: bool = False,
    ) -> str:
        if isinstance(agent_config, dict):
            agent_name = agent_config.get("name")
        else:
            agent_name = agent_config.name

        if not self.agent_exists(agent_id=agent_name):
            self.interface.clear()
            return self.server.create_agent(user_id=self.user_id, agent_config=agent_config, persistence_manager=persistence_manager)

        if throw_if_exists:
            raise ValueError(f"Agent {agent_name} already exists")
        # use agent_name here (agent_config may be a dict, which has no .name attribute)
        return agent_name

    def get_agent_config(self, agent_id: str) -> Dict:
        self.interface.clear()
        return self.server.get_agent_config(user_id=self.user_id, agent_id=agent_id)

    def get_agent_memory(self, agent_id: str) -> Dict:
        self.interface.clear()
        return self.server.get_agent_memory(user_id=self.user_id, agent_id=agent_id)

    def update_agent_core_memory(self, agent_id: str, new_memory_contents: Dict) -> Dict:
        self.interface.clear()
        return self.server.update_agent_core_memory(user_id=self.user_id, agent_id=agent_id, new_memory_contents=new_memory_contents)

    def user_message(self, agent_id: str, message: str) -> List[Dict]:
        self.interface.clear()
        self.server.user_message(user_id=self.user_id, agent_id=agent_id, message=message)
        if self.auto_save:
            self.save()
        return self.interface.to_list()

    def run_command(self, agent_id: str, command: str) -> Union[str, None]:
        self.interface.clear()
        return self.server.run_command(user_id=self.user_id, agent_id=agent_id, command=command)

    def save(self):
        self.server.save_agents()
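`user_message` returns the queued interface events as a list of single-key dicts. A hypothetical response (shape inferred from `QueuingInterface` in this diff; the exact strings are made up) can be filtered like this:

```python
# Hypothetical response list, shaped like QueuingInterface.to_list() output:
# each event is a dict keyed by its event type
response = [
    {"internal_monologue": "The user greeted me, I should respond."},
    {"function_call": "send_message(...)"},
    {"assistant_message": "Hi! Great to meet you."},
]

def assistant_messages(response):
    """Pull out only the assistant-facing text from a response list."""
    return [r["assistant_message"] for r in response if "assistant_message" in r]
```

The same membership test works for the other event types (`"function_call"`, `"function_return"`, `"function_message"`).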


@@ -233,7 +233,7 @@ class AgentConfig:
        persona,
        human,
        # model info
-       model,
+       model=None,
        model_endpoint_type=None,
        model_endpoint=None,
        model_wrapper=None,


@@ -7,8 +7,9 @@ from memgpt.interface import AgentInterface
class QueuingInterface(AgentInterface):
    """Messages are queued inside an internal buffer and manually flushed"""

-   def __init__(self):
+   def __init__(self, debug=True):
        self.buffer = queue.Queue()
+       self.debug = debug

    def to_list(self):
        """Convert queue to a list (empties it out at the same time)"""
@@ -48,17 +49,20 @@ class QueuingInterface(AgentInterface):
    def internal_monologue(self, msg: str) -> None:
        """Handle the agent's internal monologue"""
-       print(msg)
+       if self.debug:
+           print(msg)
        self.buffer.put({"internal_monologue": msg})

    def assistant_message(self, msg: str) -> None:
        """Handle the agent sending a message"""
-       print(msg)
+       if self.debug:
+           print(msg)
        self.buffer.put({"assistant_message": msg})

    def function_message(self, msg: str) -> None:
        """Handle the agent calling a function"""
-       print(msg)
+       if self.debug:
+           print(msg)
        if msg.startswith("Running "):
            msg = msg.replace("Running ", "")
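The buffer-then-flush pattern above can be sketched standalone (the real class implements more handlers; `BufferInterface` is a hypothetical name for this sketch):

```python
import queue

class BufferInterface:
    """Standalone sketch: messages buffered in a queue, flushed on demand."""

    def __init__(self, debug=False):
        self.buffer = queue.Queue()
        self.debug = debug

    def assistant_message(self, msg):
        if self.debug:
            print(msg)  # only echo to stdout when debugging
        self.buffer.put({"assistant_message": msg})

    def to_list(self):
        # Draining the queue empties it as a side effect
        items = []
        while not self.buffer.empty():
            items.append(self.buffer.get())
        return items
```

Each flush hands the caller everything queued since the last flush, which is why the client calls `interface.clear()` before every server call.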


@@ -81,7 +81,7 @@ def get_agent_memory(user_id: str, agent_id: str):
@app.put("/agents/memory")
-def get_agent_memory(body: CoreMemory):
+def put_agent_memory(body: CoreMemory):
    interface.clear()
    new_memory_contents = {"persona": body.persona, "human": body.human}
    return server.update_agent_core_memory(user_id=body.user_id, agent_id=body.agent_id, new_memory_contents=new_memory_contents)


@@ -1,6 +1,7 @@
from abc import abstractmethod
from typing import Union, Callable
import json
+import logging
from threading import Lock
from functools import wraps
from fastapi import HTTPException
@@ -21,6 +22,8 @@ from memgpt.persistence_manager import PersistenceManager, LocalStateManager
from memgpt.interface import CLIInterface  # for printing to terminal
from memgpt.interface import AgentInterface  # abstract

+logger = logging.getLogger(__name__)

class Server(object):
    """Abstract server class that supports multi-agent multi-user"""
@@ -85,25 +88,25 @@ class LockingServer(Server):
    def agent_lock_decorator(func: Callable) -> Callable:
        @wraps(func)
        def wrapper(self, user_id: str, agent_id: str, *args, **kwargs):
-           # print("Locking check")
+           # logger.info("Locking check")
            # Initialize the lock for the agent_id if it doesn't exist
            if agent_id not in self._agent_locks:
-               # print(f"Creating lock for agent_id = {agent_id}")
+               # logger.info(f"Creating lock for agent_id = {agent_id}")
                self._agent_locks[agent_id] = Lock()

            # Check if the agent is currently locked
            if not self._agent_locks[agent_id].acquire(blocking=False):
-               # print(f"agent_id = {agent_id} is busy")
+               # logger.info(f"agent_id = {agent_id} is busy")
                raise HTTPException(status_code=423, detail=f"Agent '{agent_id}' is currently busy.")

            try:
                # Execute the function
-               # print(f"running function on agent_id = {agent_id}")
+               # logger.info(f"running function on agent_id = {agent_id}")
                return func(self, user_id, agent_id, *args, **kwargs)
            finally:
                # Release the lock
-               # print(f"releasing lock on agent_id = {agent_id}")
+               # logger.info(f"releasing lock on agent_id = {agent_id}")
                self._agent_locks[agent_id].release()
        return wrapper
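The per-agent locking shown above can be sketched standalone. This is a minimal reproduction of the pattern, with a hypothetical `BusyError` standing in for FastAPI's `HTTPException(status_code=423)` and a toy server class in place of `LockingServer`:

```python
from functools import wraps
from threading import Lock

class BusyError(RuntimeError):
    """Stands in for FastAPI's HTTPException(status_code=423) in this sketch."""

def agent_lock_decorator(func):
    @wraps(func)
    def wrapper(self, user_id, agent_id, *args, **kwargs):
        # Lazily create one lock per agent_id
        if agent_id not in self._agent_locks:
            self._agent_locks[agent_id] = Lock()
        # Non-blocking acquire: fail fast instead of queueing concurrent callers
        if not self._agent_locks[agent_id].acquire(blocking=False):
            raise BusyError(f"Agent '{agent_id}' is currently busy.")
        try:
            return func(self, user_id, agent_id, *args, **kwargs)
        finally:
            # Always release, even if the wrapped call raised
            self._agent_locks[agent_id].release()
    return wrapper

class ToyServer:
    def __init__(self):
        self._agent_locks = {}

    @agent_lock_decorator
    def step(self, user_id, agent_id):
        return f"stepped {agent_id}"
```

The non-blocking `acquire(blocking=False)` is the key design choice: a second caller hitting a busy agent gets an immediate 423-style error rather than silently waiting.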
@@ -152,9 +155,9 @@ class SyncServer(LockingServer):
        for agent_d in self.active_agents:
            try:
                agent_d["agent"].save()
-               print(f"Saved agent {agent_d['agent_id']}")
+               logger.info(f"Saved agent {agent_d['agent_id']}")
            except Exception as e:
-               print(f"Error occured while trying to save agent {agent_d['agent_id']}:\n{e}")
+               logger.exception(f"Error occurred while trying to save agent {agent_d['agent_id']}")

    def _get_agent(self, user_id: str, agent_id: str) -> Union[Agent, None]:
        """Get the agent object from the in-memory object store"""
@@ -179,7 +182,6 @@ class SyncServer(LockingServer):
    def _load_agent(self, user_id: str, agent_id: str, interface: Union[AgentInterface, None] = None) -> Agent:
        """Loads a saved agent into memory (if it doesn't exist, throw an error)"""
-       from memgpt.utils import printd

        # If an interface isn't specified, use the default
        if interface is None:
@@ -187,9 +189,10 @@ class SyncServer(LockingServer):
        # If the agent isn't loaded, load it and put it into memory
        if AgentConfig.exists(agent_id):
-           printd(f"(user={user_id}, agent={agent_id}) exists, loading into memory...")
+           logger.debug(f"(user={user_id}, agent={agent_id}) exists, loading into memory...")
            agent_config = AgentConfig.load(agent_id)
-           memgpt_agent = Agent.load_agent(interface=interface, agent_config=agent_config)
+           with utils.suppress_stdout():
+               memgpt_agent = Agent.load_agent(interface=interface, agent_config=agent_config)
            self._add_agent(user_id=user_id, agent_id=agent_id, agent_obj=memgpt_agent)
            return memgpt_agent
@@ -206,16 +209,15 @@ class SyncServer(LockingServer):
    def _step(self, user_id: str, agent_id: str, input_message: str) -> None:
        """Send the input message through the agent"""
-       from memgpt.utils import printd
-       printd(f"Got input message: {input_message}")
+       logger.debug(f"Got input message: {input_message}")

        # Get the agent object (loaded in memory)
        memgpt_agent = self._get_or_load_agent(user_id=user_id, agent_id=agent_id)
        if memgpt_agent is None:
            raise KeyError(f"Agent (user={user_id}, agent={agent_id}) is not loaded")

-       printd(f"Starting agent step")
+       logger.debug(f"Starting agent step")
        no_verify = True
        next_input_message = input_message
        counter = 0
@@ -227,10 +229,10 @@ class SyncServer(LockingServer):
            # Chain stops
            if not self.chaining:
-               printd("No chaining, stopping after one step")
+               logger.debug("No chaining, stopping after one step")
                break
            elif self.max_chaining_steps is not None and counter > self.max_chaining_steps:
-               printd(f"Hit max chaining steps, stopping after {counter} steps")
+               logger.debug(f"Hit max chaining steps, stopping after {counter} steps")
                break
            # Chain handlers
            elif token_warning:
@@ -247,13 +249,12 @@ class SyncServer(LockingServer):
                break

        memgpt_agent.interface.step_yield()
-       printd(f"Finished agent step")
+       logger.debug(f"Finished agent step")

    def _command(self, user_id: str, agent_id: str, command: str) -> Union[str, None]:
        """Process a CLI command"""
-       from memgpt.utils import printd
-       printd(f"Got command: {command}")
+       logger.debug(f"Got command: {command}")

        # Get the agent object (loaded in memory)
        memgpt_agent = self._get_or_load_agent(user_id=user_id, agent_id=agent_id)
@@ -320,17 +321,17 @@ class SyncServer(LockingServer):
            n_messages = len(memgpt_agent.messages)
            MIN_MESSAGES = 2
            if n_messages <= MIN_MESSAGES:
-               print(f"Agent only has {n_messages} messages in stack, none left to pop")
+               logger.info(f"Agent only has {n_messages} messages in stack, none left to pop")
            elif n_messages - pop_amount < MIN_MESSAGES:
-               print(f"Agent only has {n_messages} messages in stack, cannot pop more than {n_messages - MIN_MESSAGES}")
+               logger.info(f"Agent only has {n_messages} messages in stack, cannot pop more than {n_messages - MIN_MESSAGES}")
            else:
-               print(f"Popping last {pop_amount} messages from stack")
+               logger.info(f"Popping last {pop_amount} messages from stack")
                for _ in range(min(pop_amount, len(memgpt_agent.messages))):
                    memgpt_agent.messages.pop()

        elif command.lower() == "retry":
            # TODO this needs to also modify the persistence manager
-           print(f"Retrying for another answer")
+           logger.info(f"Retrying for another answer")
            while len(memgpt_agent.messages) > 0:
                if memgpt_agent.messages[-1].get("role") == "user":
                    # we want to pop up to the last user message and send it again
@@ -342,7 +343,7 @@ class SyncServer(LockingServer):
        elif command.lower() == "rethink" or command.lower().startswith("rethink "):
            # TODO this needs to also modify the persistence manager
            if len(command) < len("rethink "):
-               print("Missing text after the command")
+               logger.warning("Missing text after the command")
            else:
                for x in range(len(memgpt_agent.messages) - 1, 0, -1):
                    if memgpt_agent.messages[x].get("role") == "assistant":
@@ -353,7 +354,7 @@ class SyncServer(LockingServer):
        elif command.lower() == "rewrite" or command.lower().startswith("rewrite "):
            # TODO this needs to also modify the persistence manager
            if len(command) < len("rewrite "):
-               print("Missing text after the command")
+               logger.warning("Missing text after the command")
            else:
                for x in range(len(memgpt_agent.messages) - 1, 0, -1):
                    if memgpt_agent.messages[x].get("role") == "assistant":
@@ -379,7 +380,6 @@ class SyncServer(LockingServer):
    @LockingServer.agent_lock_decorator
    def user_message(self, user_id: str, agent_id: str, message: str) -> None:
        """Process an incoming user message and feed it through the MemGPT agent"""
-       from memgpt.utils import printd

        # Basic input sanitization
        if not isinstance(message, str) or len(message) == 0:
@@ -436,7 +436,7 @@ class SyncServer(LockingServer):
            persistence_manager,
        )
        agent.save()
-       print(f"Created new agent from config: {agent}")
+       logger.info(f"Created new agent from config: {agent}")
        return agent.config.name

tests/test_client.py Normal file

@@ -0,0 +1,35 @@
from memgpt import MemGPT
from memgpt import constants
from .utils import wipe_config

test_agent_id = "test_client_agent"
client = None

def test_create_agent():
    wipe_config()
    global client
    client = MemGPT(quickstart="memgpt_hosted")

    agent_id = client.create_agent(
        agent_config={
            "name": test_agent_id,
            "persona": constants.DEFAULT_PERSONA,
            "human": constants.DEFAULT_HUMAN,
        }
    )
    assert agent_id is not None
    return client, agent_id

def test_user_message():
    assert client is not None, "Run create_agent test first"
    response = client.user_message(agent_id=test_agent_id, message="Hello my name is Test, Client Test")
    assert response is not None and len(response) > 0

if __name__ == "__main__":
    test_create_agent()
    test_user_message()


@@ -42,6 +42,7 @@ async def test_dummy():
    assert True

@pytest.mark.skipif(not os.getenv("OPENAI_API_KEY"), reason="Missing PG URI and/or OpenAI API key")
+@pytest.mark.asyncio
async def test_websockets():
    # Mock a WebSocket connection


@@ -1,9 +1,24 @@
import os

import pexpect

+from memgpt.config import MemGPTConfig
from .constants import TIMEOUT

+def wipe_config():
+    if MemGPTConfig.exists():
+        # delete
+        if os.getenv("MEMGPT_CONFIG_PATH"):
+            config_path = os.getenv("MEMGPT_CONFIG_PATH")
+        else:
+            config_path = MemGPTConfig.config_path
+        # TODO delete file config_path
+        os.remove(config_path)

def configure_memgpt_localllm():
    wipe_config()
    child = pexpect.spawn("memgpt configure")
    child.expect("Select LLM inference provider", timeout=TIMEOUT)
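The config-path resolution in `wipe_config` (env var first, then the default location) can be sketched in isolation; `resolve_config_path` and its default argument are illustrative names, not part of the MemGPT API:

```python
import os

def resolve_config_path(default_path="~/.memgpt/config"):
    # The MEMGPT_CONFIG_PATH env var takes priority over the default location
    if os.getenv("MEMGPT_CONFIG_PATH"):
        return os.getenv("MEMGPT_CONFIG_PATH")
    return default_path
```

This mirrors how the test helper decides which file to remove before each run.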