letta-server/docs/webui.md
Charles Packer caba2f468c Create docs pages (#328)
Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
2023-11-06 12:38:49 -08:00


MemGPT + web UI

⁉️ If you have problems getting web UI set up, please use the official web UI repo for support! Questions about web UI are much more likely to be answered there than on the MemGPT repo.

⁉️ Do NOT enable any extensions in web UI, including the openai extension! Run web UI as-is, unless you are running MemGPT + AutoGen with non-MemGPT agents.

To get MemGPT to work with a local LLM, you need to have the LLM running on a server that takes API requests.

For the purposes of this example, we're going to serve (host) the LLMs using the oobabooga web UI, but you can use something else if you prefer. This guide also assumes you're running web UI locally; if you're running remotely (e.g. on Runpod), follow the Runpod-specific instructions instead (for example, use TheBloke's one-click UI and API).

  1. Install oobabooga web UI using the instructions here
  2. Once installed, launch the web server with `python server.py`
  3. Navigate to the web app (if local, this is probably http://127.0.0.1:7860), select the model you want to use, adjust your GPU and CPU memory settings, and click "load"
  4. If the model loaded successfully, you should be able to access it via the API (if local, this is probably on port 5000)
  5. Assuming steps 1-4 went correctly, the LLM is now hosted on a port you can point MemGPT to!
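To confirm step 4 worked before wiring up MemGPT, you can probe the API from Python. This is an illustrative helper, not part of MemGPT: it assumes oobabooga's legacy API, whose `/api/v1/model` endpoint path may differ in other (or newer) web UI versions.

```python
import urllib.error
import urllib.request


def webui_is_up(base_url: str = "http://127.0.0.1:5000") -> bool:
    """Return True if the web UI API answers, False otherwise.

    /api/v1/model is oobabooga's legacy model-info endpoint; adjust the
    path if your web UI version exposes a different API.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/api/v1/model", timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Server not running, port closed, or timed out
        return False
```

If this returns `False`, go back and check that the model loaded and that the API port matches.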

In your terminal where you're running MemGPT, run:

```sh
# if you are running web UI locally, the default port will be 5000
export OPENAI_API_BASE=http://127.0.0.1:5000
export BACKEND_TYPE=webui
```
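The two exports above are how MemGPT learns where the backend lives and which request format to use. As a rough sketch of how code on the MemGPT side might consume them (the helper itself is hypothetical, and `/api/v1/generate` is the legacy oobabooga generation endpoint, assumed here):

```python
import os


def webui_generate_url() -> str:
    """Build the generation endpoint URL from the environment.

    OPENAI_API_BASE and BACKEND_TYPE match the exports above; the
    /api/v1/generate path is an assumption based on the legacy
    oobabooga API and may differ in other web UI versions.
    """
    base = os.environ.get("OPENAI_API_BASE", "http://127.0.0.1:5000")
    backend = os.environ.get("BACKEND_TYPE", "webui")
    if backend != "webui":
        raise ValueError(f"expected BACKEND_TYPE=webui, got {backend!r}")
    return base.rstrip("/") + "/api/v1/generate"
```

If you host web UI remotely, only `OPENAI_API_BASE` needs to change.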

Web UI exposes many parameters that can dramatically change LLM outputs. To change them, modify the web UI settings file.
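For a sense of which parameters matter, here is an illustrative request-payload builder. The parameter names (`temperature`, `top_p`, `repetition_penalty`, `max_new_tokens`) are common sampling settings accepted by oobabooga's legacy `/api/v1/generate` endpoint, but exact names and defaults vary across web UI versions, so treat this as a sketch rather than a reference:

```python
def build_generate_payload(prompt: str, **overrides) -> dict:
    """Assemble a generation request with common sampling parameters.

    Defaults here are arbitrary examples; tune them (or set them in the
    web UI settings file) to taste.
    """
    payload = {
        "prompt": prompt,
        "max_new_tokens": 512,        # cap on generated length
        "temperature": 0.7,           # higher = more random output
        "top_p": 0.9,                 # nucleus sampling cutoff
        "repetition_penalty": 1.15,   # discourage repeated phrases
    }
    payload.update(overrides)         # per-call tweaks win over defaults
    return payload
```

Small changes to `temperature` or `repetition_penalty` can noticeably change how well a model follows MemGPT's function-calling format, so it is worth experimenting.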