MemGPT
Quick setup
Join Discord and message the MemGPT bot (in the #memgpt channel). Then run the following commands (messaged to "MemGPT Bot"):
/profile (to create your profile)
/key (to enter your OpenAI key)
/create (to create a MemGPT chatbot)
Make sure your privacy settings on this server are open so that MemGPT Bot can DM you:
MemGPT → Privacy Settings → Direct Messages set to ON
You can see the full list of available commands when you enter / into the message box.
What is MemGPT?
Memory-GPT (or MemGPT for short) is a system that intelligently manages different memory tiers in LLMs in order to effectively provide extended context within an LLM's limited context window. For example, MemGPT knows when to push critical information to a vector database and when to retrieve it later in the chat, enabling perpetual conversations. Learn more about MemGPT in our paper.
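The core idea can be illustrated with a short sketch (this is not MemGPT's actual implementation, and the class and method names here are invented for illustration): keep a bounded "main context" that fits in the LLM's window, evict older messages into searchable archival storage, and recall them on demand.

```python
# Illustrative sketch of tiered memory (hypothetical, not MemGPT's real code).
from collections import deque

class TieredMemory:
    def __init__(self, main_context_size=4):
        self.main_context = deque()   # messages that fit in the context window
        self.archival = []            # unbounded external storage
        self.main_context_size = main_context_size

    def add_message(self, message):
        self.main_context.append(message)
        # Evict the oldest messages once the window is full.
        while len(self.main_context) > self.main_context_size:
            self.archival.append(self.main_context.popleft())

    def search_archival(self, query):
        # MemGPT uses vector similarity search; a plain substring
        # match stands in for it here.
        return [m for m in self.archival if query.lower() in m.lower()]

mem = TieredMemory(main_context_size=2)
for msg in ["my name is Sam", "I live in Paris",
            "what's the weather?", "tell me a joke"]:
    mem.add_message(msg)

print(list(mem.main_context))        # the two most recent messages
print(mem.search_archival("Paris"))  # recalled from archival storage
```

The real system additionally lets the LLM itself decide, via function calls, when to move information between tiers.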
Running MemGPT locally
Install MemGPT:
pip install pymemgpt
Add your OpenAI API key to your environment:
export OPENAI_API_KEY=YOUR_API_KEY # on Linux/Mac
set OPENAI_API_KEY=YOUR_API_KEY # on Windows
$Env:OPENAI_API_KEY = "YOUR_API_KEY" # on Windows (PowerShell)
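MemGPT reads the key from the process environment, so a quick way to check that the variable is visible is a few lines of Python (the helper function here is just for illustration, not part of MemGPT):

```python
import os

def check_openai_key(env=os.environ):
    """Return a masked form of OPENAI_API_KEY, or None if it is not set."""
    key = env.get("OPENAI_API_KEY")
    if not key:
        return None
    return "..." + key[-4:]

# Example with a stand-in environment dict:
print(check_openai_key({"OPENAI_API_KEY": "sk-test1234"}))  # ...1234
```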
Configure default settings for MemGPT by running:
memgpt configure
Now, you can run MemGPT with:
memgpt run
You can run the following commands in the MemGPT CLI prompt:
/exit: Exit the CLI
/attach: Attach a loaded data source to the agent
/save: Save a checkpoint of the current agent/conversation state
/dump: View the current message log (see the contents of main context)
/dump <count>: View the last <count> messages (all if <count> is omitted)
/memory: Print the current contents of agent memory
/pop: Undo the last message in the conversation
/pop <count>: Undo the last <count> messages in the conversation (defaults to 3, which is usually one turn of the conversation)
/retry: Pop the last answer and try to get another one
/rethink <text>: Replace the inner dialog of the last assistant message with <text> to help shape the conversation
/rewrite <text>: Replace the last assistant answer with the given text to correct or force the answer
/heartbeat: Send a heartbeat system message to the agent
/memorywarning: Send a memory warning system message to the agent
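Slash commands like these follow a simple dispatch pattern: split off the command name, look up a handler, and pass it the rest of the line. A minimal sketch (hypothetical, not MemGPT's actual parser):

```python
def dispatch(line, handlers):
    """Route a '/command args' line to its handler (illustrative only)."""
    if not line.startswith("/"):
        return None  # not a command; treat as a normal chat message
    cmd, _, args = line[1:].partition(" ")
    handler = handlers.get(cmd)
    return handler(args) if handler else f"unknown command: /{cmd}"

# Two toy handlers standing in for the real ones:
handlers = {
    "pop": lambda args: f"popped {args or 3} message(s)",
    "exit": lambda args: "bye",
}
print(dispatch("/pop 2", handlers))  # popped 2 message(s)
print(dispatch("/exit", handlers))   # bye
```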
Once you exit the CLI with /exit, you can resume chatting with the same agent by specifying the agent name in memgpt run --agent <NAME>.
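Resuming by name works because a checkpoint persists the agent's state to disk under its name. A hypothetical sketch of that save/load-by-name pattern (the file layout and function names here are invented, not MemGPT's actual storage format):

```python
import json
import tempfile
from pathlib import Path

def save_agent(state: dict, name: str, root: Path) -> Path:
    """Checkpoint an agent's state under its name (hypothetical layout)."""
    path = root / f"{name}.json"
    path.write_text(json.dumps(state))
    return path

def load_agent(name: str, root: Path) -> dict:
    """Restore a previously saved agent by name."""
    return json.loads((root / f"{name}.json").read_text())

root = Path(tempfile.mkdtemp())
save_agent({"persona": "sam", "messages": ["hi"]}, "agent_1", root)
print(load_agent("agent_1", root))  # {'persona': 'sam', 'messages': ['hi']}
```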
Documentation
See full documentation at: https://memgpt.readthedocs.io/
Installing from source
To install MemGPT from source, start by cloning the repo:
git clone git@github.com:cpacker/MemGPT.git
Then navigate to the main MemGPT directory, and do:
pip install -e .
Now, you should be able to run memgpt from the command-line using the downloaded source code.
If you are having dependency issues using pip install -e ., we recommend you install the package using Poetry (see below). Installing MemGPT from source using Poetry will ensure that you are using exact package versions that have been tested for the production build.
Installing from source (using Poetry)
First, install Poetry by following the official instructions at https://python-poetry.org/docs/#installation.
Then, you can install MemGPT from source with:
git clone git@github.com:cpacker/MemGPT.git
cd MemGPT
poetry shell
poetry install
Support
For issues and feature requests, please open a GitHub issue or message us in the #support channel on our Discord server.
Datasets
Datasets used in our paper can be downloaded from Hugging Face.
🚀 Project Roadmap
- Release MemGPT Discord bot demo (perpetual chatbot)
- Add additional workflows (load SQL/text into MemGPT external context)
- Integration tests
- Integrate with AutoGen (discussion)
- Add official gpt-3.5-turbo support (discussion)
- CLI UI improvements (issue)
- Add support for other LLM backends (issue, discussion)
- Release MemGPT family of open models (e.g. finetuned Mistral) (discussion)