diff --git a/README.md b/README.md
index 66ed4a01..66473361 100644
--- a/README.md
+++ b/README.md
@@ -22,29 +22,10 @@
- 🗃️ Chat with your data - talk to your SQL database or your local files!
- SQL Database
+ 🗃️ Chat with your data - talk to your local files or SQL database!
-

+
- Local files
-
-

-
-
-
-
- 📄 You can also talk to docs - for example ask about LlamaIndex!
-
-

-
-
- ChatGPT (GPT-4) when asked the same question:
-
-

-
- (Question from https://github.com/run-llama/llama_index/issues/7756)
-
## Quick setup
@@ -77,34 +58,117 @@ Install MemGPT:
pip install pymemgpt
```
-To update the package, run
-```sh
-pip install pymemgpt -U
-```
-
Add your OpenAI API key to your environment:
```sh
-# on Linux/Mac
-export OPENAI_API_KEY=YOUR_API_KEY
-```
+export OPENAI_API_KEY=YOUR_API_KEY # on Linux/Mac
+set OPENAI_API_KEY=YOUR_API_KEY # on Windows
+$Env:OPENAI_API_KEY = "YOUR_API_KEY" # on Windows (PowerShell)
+```
+Configure default settings for MemGPT by running:
+```sh
+memgpt configure
+```
+Now, you can run MemGPT with:
```sh
-# on Windows
-set OPENAI_API_KEY=YOUR_API_KEY
+memgpt run
+```
+The `run` command supports the following optional flags (if set, they override the config defaults):
+* `--agent`: (str) Name of agent to create or to resume chatting with.
+* `--human`: (str) Name of the human to run the agent with.
+* `--persona`: (str) Name of agent persona to use.
+* `--model`: (str) LLM model to run [gpt-4, gpt-3.5].
+* `--preset`: (str) MemGPT preset to run agent with.
+* `--data-source`: (str) Name of data source (loaded with `memgpt load`) to connect to the agent.
+* `--first`: (bool) Allow the user to send the first message.
+* `--debug`: (bool) Show debug logs (default=False)
+* `--no_verify`: (bool) Bypass message verification (default=False)
+* `--yes`/`-y`: (bool) Skip confirmation prompt and use defaults (default=False)
+
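+For example, to start (or resume) a named agent on a specific model and send the first message yourself, you could combine the flags above (the agent name here is illustrative):
+```sh
+# create or resume an agent called "research_buddy" on gpt-4,
+# letting the user open the conversation
+memgpt run --agent research_buddy --model gpt-4 --first
+```
+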
+You can run the following commands in the MemGPT CLI prompt:
+* `/exit`: Exit the CLI
+* `/save`: Save a checkpoint of the current agent/conversation state
+* `/dump`: View the current message log (see the contents of main context)
+* `/memory`: Print the current contents of agent memory
+* `/pop`: Undo the last message in the conversation
+* `/heartbeat`: Send a heartbeat system message to the agent
+* `/memorywarning`: Send a memory warning system message to the agent
+
+
+Once you exit the CLI with `/exit`, you can resume chatting with the same agent by specifying the agent name in `memgpt run --agent <agent_name>`.
+
+### Adding Custom Personas/Humans
+You can add new human or persona definitions either by providing a file (using the `-f` flag) or text (using the `--text` flag).
+```sh
+# add a human
+memgpt add human [-f <filename>] [--text <text>]
+
+# add a persona
+memgpt add persona [-f <filename>] [--text <text>]
```
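+As a concrete (illustrative) example, you could write a persona definition to a text file and register it from that file:
+```sh
+# write a persona definition to a file
+echo "You are a terse assistant who answers in one sentence." > terse.txt
+
+# register it with MemGPT
+memgpt add persona -f terse.txt
+```
+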
+You can view available persona and human files with the following command:
+```
+memgpt list [human/persona]
+```
+
+### Data Sources (i.e. chat with your data)
+MemGPT supports pre-loading data into archival memory. To have your agent reference loaded data in your conversations, specify the data source with the flag `memgpt run --data-source <data_source_name>`.
+
+#### Loading Data
+We currently support loading from a directory and from database dumps. We highly encourage contributions for new data sources, which can be added as a new [CLI data load command](https://github.com/cpacker/MemGPT/blob/main/memgpt/cli/cli_load.py).
+
+Loading from a directory:
+```sh
+# loading a directory
+memgpt load directory --name <data_source_name> \
+    [--input-dir <directory>] [--input-files <file1> <file2> ...] [--recursive]
+```
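+For example, to index a folder of notes and then chat over it (the source name and path are illustrative):
+```sh
+# index every file under ./notes into archival memory
+memgpt load directory --name my_notes --input-dir ./notes --recursive
+
+# start an agent connected to that data source
+memgpt run --data-source my_notes
+```
+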
+Loading from a database dump:
```sh
-# on Windows (PowerShell)
-$Env:OPENAI_API_KEY = "YOUR_API_KEY"
+# --query: the query to run on the database to get the data
+# --dump-path: the path to the dump file
+memgpt load database --name <data_source_name> \
+    --query <query> \
+    --dump-path <dump_path> \
+    --scheme <scheme> \
+    --host <host> \
+    --port <port> \
+    --user <user> \
+    --password <password> \
+    --dbname <dbname>
+```
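+A filled-in invocation might look like the following (all connection details are made up for illustration):
+```sh
+memgpt load database --name support_tickets \
+    --query "SELECT subject, body FROM tickets" \
+    --scheme postgresql \
+    --host localhost \
+    --port 5432 \
+    --user readonly \
+    --password hunter2 \
+    --dbname helpdesk
+```
+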
+To encourage your agent to reference its archival memory, we recommend adding phrases like "search your archival memory..." to your messages for the best results.
+
+#### Viewing available data sources
+You can view loaded data sources with:
+```
+memgpt list sources
```
-To run MemGPT for as a conversation agent in CLI mode, simply run `memgpt`:
+### Using other endpoints
+#### Azure
+To use MemGPT with Azure, export the following variables and then re-run `memgpt configure`:
```sh
-memgpt
+# see https://github.com/openai/openai-python#microsoft-azure-endpoints
+export AZURE_OPENAI_KEY=...
+export AZURE_OPENAI_ENDPOINT=...
+export AZURE_OPENAI_VERSION=...
+
+# set the below if you are using deployment ids
+export AZURE_OPENAI_DEPLOYMENT=...
+export AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT=...
```
+#### Custom Endpoints
+To use custom endpoints, run `export OPENAI_API_BASE=<custom_endpoint_url>` and then re-run `memgpt configure` to set the custom endpoint as the default endpoint.
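+For instance, to point MemGPT at a local OpenAI-compatible server (the URL is illustrative):
+```sh
+export OPENAI_API_BASE=http://localhost:8000/v1
+memgpt configure   # select the custom endpoint as the default
+```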
+
+Deprecated API
Debugging command not found
@@ -330,6 +394,7 @@ MemGPT also enables you to chat with docs -- try running this example to talk to
If you downloaded from Hugging Face, it will be `memgpt/personas/docqa/llamaindex-api-docs`.
If you built the index yourself, it will be `memgpt/personas/docqa`.
+
## Support