update docs (#547)
* update admonitions
* Update local_llm.md
* Update webui.md
* Update autogen.md
* Update storage.md
* Update example_chat.md
* Update example_data.md
* Update example_chat.md
* Update example_data.md
@@ -1,4 +1,4 @@
-!!! warning "Need help?"
+!!! question "Need help?"
 
     If you need help visit our [Discord server](https://discord.gg/9GEQrxmVyE) and post in the #support channel.
 
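For context (not part of the commit itself): the `!!! warning` / `!!! question` markers being swapped throughout this diff are Material for MkDocs admonitions. The keyword after `!!!` selects the box's color and icon, the quoted string is the box title, and the indented lines below form the box body. The change above, rendered as plain markdown source, looks like:

```markdown
!!! question "Need help?"

    If you need help visit our [Discord server](https://discord.gg/9GEQrxmVyE) and post in the #support channel.
```

Switching the type from `warning` to `question` changes only the styling; the body text is untouched.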
@@ -20,7 +20,7 @@ For the purposes of this example, we're going to serve (host) the LLMs using [oo
 
 Install web UI and get a model set up on a local web server. You can use [our instructions on setting up web UI](https://memgpt.readthedocs.io/en/latest/webui/).
 
-!!! warning "Choosing an LLM / model to use"
+!!! info "Choosing an LLM / model to use"
 
     You'll need to decide on an LLM / model to use with web UI.
@@ -36,7 +36,7 @@ Try setting up MemGPT with your local web UI backend [using the instructions her
 
 Once you've confirmed that you're able to chat with a MemGPT agent using `memgpt configure` and `memgpt run`, you're ready to move on to the next step.
 
-!!! warning "Using RunPod as an LLM backend"
+!!! info "Using RunPod as an LLM backend"
 
     If you're using RunPod to run web UI, make sure that you set your endpoint to the RunPod IP address, **not the default localhost address**.
@@ -134,7 +134,7 @@ config_list_memgpt = [
 ]
 ```
 
-!!! warning "Making internal monologue visible to AutoGen"
+!!! info "Making internal monologue visible to AutoGen"
 
     By default, MemGPT's inner monologue and function traces are hidden from other AutoGen agents.
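The hunk above sits just after a truncated `config_list_memgpt` snippet in autogen.md. As a rough sketch of the general shape such an AutoGen-style config list takes — every field name and value below is an illustrative assumption, not the schema from the commit:

```python
# Hypothetical sketch of an AutoGen-style config list for a MemGPT agent.
# All field names/values are illustrative assumptions, not MemGPT's
# documented schema.
config_list_memgpt = [
    {
        "model": "default",                         # assumed model identifier
        "model_endpoint_type": "webui",             # assumed backend type
        "model_endpoint": "http://127.0.0.1:5000",  # assumed local web UI address
    },
]

def first_endpoint(configs):
    """Pick the endpoint the agent would talk to (first entry wins here)."""
    return configs[0]["model_endpoint"]
```

The list-of-dicts shape matters because AutoGen-style configs allow multiple backends; code downstream typically selects one entry, as `first_endpoint` does.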
@@ -35,7 +35,7 @@ memgpt attach --agent <AGENT-NAME> --data-source <DATA-SOURCE-NAME>
 ```
 
-!!! note "Hint"
+!!! tip "Hint"
 
     To encourage your agent to reference its archival memory, we recommend adding phrases like "_search your archival memory..._" for the best results.
@@ -1,7 +1,8 @@
-!!! note "Note"
-
-    Before starting this example, make sure that you've [properly installed MemGPT](../quickstart)
-
 ## Using MemGPT to create a perpetual chatbot
 
+_Before starting this example, make sure that you've [properly installed MemGPT](../quickstart)._
+
 In this example, we're going to use MemGPT to create a chatbot with a custom persona. MemGPT chatbots are "perpetual chatbots", meaning that they can be run indefinitely without any context length limitations. MemGPT chatbots are self-aware that they have a "fixed context window", and will manually manage their own memories to get around this problem by moving information in and out of their small memory window and larger external storage.
 
 MemGPT chatbots always keep a reserved space in their "core" memory window to store their `persona` information (describes the bot's personality + basic functionality), and `human` information (which describes the human that the bot is chatting with). The MemGPT chatbot will update the `persona` and `human` core memory blocks over time as it learns more about the user (and itself).
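The memory-management behavior the hunk above describes — a small fixed window, larger external storage, and reserved `persona`/`human` blocks — can be sketched with a toy model. This is purely illustrative; `ToyMemory` is not a MemGPT class:

```python
# Toy model of the "core memory + archival storage" idea described above.
# Illustrative sketch only, not MemGPT's actual implementation.
class ToyMemory:
    def __init__(self, window_size=3):
        self.core = {"persona": "", "human": ""}  # always-reserved blocks
        self.window = []                          # small fixed-size context window
        self.archival = []                        # larger external storage
        self.window_size = window_size

    def remember(self, message):
        """Add a message; evict the oldest to archival when the window overflows."""
        self.window.append(message)
        while len(self.window) > self.window_size:
            self.archival.append(self.window.pop(0))

    def update_core(self, block, text):
        """Edit a reserved core block ('persona' or 'human') in place."""
        self.core[block] = text
```

The key property mirrored here is that the window never grows past its fixed size: older material is moved out to archival storage rather than lost, while the core blocks stay resident regardless of turnover.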
@@ -1,7 +1,8 @@
-!!! note "Note"
-
-    Before starting this example, make sure that you've [properly installed MemGPT](../quickstart)
-
 ## Using MemGPT to chat with your own data
 
+_Before starting this example, make sure that you've [properly installed MemGPT](../quickstart)._
+
 In this example, we're going to use MemGPT to chat with a custom data source. Specifically, we'll try loading in the MemGPT research paper and ask MemGPT questions about it.
 
 ### Creating an external data source
@@ -1,4 +1,4 @@
-!!! warning "Need help?"
+!!! question "Need help?"
 
     If you need help visit our [Discord server](https://discord.gg/9GEQrxmVyE) and post in the #support channel.
 
@@ -104,7 +104,7 @@ If you would like us to support a new backend, feel free to open an issue or pul
 
 ### Which model should I use?
 
-!!! warning "Recommended LLMs / models"
+!!! info "Recommended LLMs / models"
 
     To see a list of recommended LLMs to use with MemGPT, visit our [Discord server](https://discord.gg/9GEQrxmVyE) and check the #model-chat channel.
@@ -1,10 +1,11 @@
 # Configuring Storage Backends
 MemGPT supports both local and database storage for archival memory. You can configure which storage backend to use via `memgpt configure`. For larger datasets, we recommend using a database backend.
 
+!!! warning "Switching storage backends"
+
+    MemGPT can only use one storage backend at a time. If you switch from local to database storage, you will need to re-load data and start agents from scratch. We currently do not support migrating between storage backends.
+
 ## Local
 MemGPT will default to using local storage (saved at `~/.memgpt/archival/` for loaded data sources, and `~/.memgpt/agents/` for agent storage).
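The default local-storage paths quoted in the hunk above (`~/.memgpt/archival/` and `~/.memgpt/agents/`) can be expressed as a tiny helper. This is a sketch for illustration only — `local_storage_paths` is not a MemGPT function:

```python
from pathlib import Path

def local_storage_paths(home: Path) -> dict:
    """Default local-storage locations named in the docs above (illustrative helper)."""
    base = home / ".memgpt"
    return {
        "archival": base / "archival",  # loaded data sources
        "agents": base / "agents",      # agent state
    }
```

Both directories hang off a single `~/.memgpt` base, which is why switching to a database backend means re-loading data: the local files are not migrated.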
@@ -1,4 +1,4 @@
-!!! warning "web UI troubleshooting"
+!!! question "web UI troubleshooting"
 
     If you have problems getting web UI set up, please use the [official web UI repo for support](https://github.com/oobabooga/text-generation-webui)! There will be more answered questions about web UI there vs here on the MemGPT repo.