MemGPT + LM Studio
- Download LM Studio and the model you want to test with
- Go to the "local inference server" tab, load the model and configure your settings (make sure to set the context length to something reasonable like 8k!)
- Click "Start server"
- Copy the IP address + port that your server is running on (in the example screenshot, the address is http://localhost:1234); you can sanity-check the address with the curl sketch after this list
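Before wiring MemGPT up, it can help to confirm the server is reachable at that address. A minimal sketch, assuming the default address above (LM Studio's local server exposes OpenAI-compatible routes such as `/v1/models`):

```sh
# list the models the local LM Studio server is currently exposing;
# a JSON response means the address and port are correct
curl http://localhost:1234/v1/models
```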
In your terminal where you're running MemGPT, run:

```sh
# if you used a different port in LM Studio, change 1234 to the actual port
export OPENAI_API_BASE=http://localhost:1234
export BACKEND_TYPE=lmstudio
```
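To confirm the server is actually generating completions before launching MemGPT, you can send a test request to LM Studio's OpenAI-compatible chat endpoint. This is a hedged sketch, not part of the MemGPT setup itself; it assumes the `/v1/chat/completions` route served by LM Studio's local server, which responds with whichever model you have loaded:

```sh
# send a one-line chat to the local server and print the JSON reply;
# a "choices" field with assistant text means generation is working
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Say hello"}],
    "temperature": 0.7
  }'
```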
Back in LM Studio, double-check two settings:
- Make sure "context length" is set to the max context length of the model you're using (e.g. 8000 for Mistral 7B variants)
- If you see "Prompt Formatting" in your settings menu, turn it off