Configuring the agent

You can set agent defaults by running memgpt configure, which will store config information at ~/.memgpt/config by default.
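
After running memgpt configure you can inspect or edit this file directly; it is a plain INI-style text file. The snippet below is an illustrative sketch only (the exact section and key names can differ between versions), but the settings correspond to the options documented on this page: model, model endpoint type and URL, context window, and so on.

# illustrative sketch; check your own ~/.memgpt/config for the exact keys
[model]
model = gpt-4
model_endpoint_type = openai
model_endpoint = https://api.openai.com/v1
context_window = 8192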

The memgpt run command supports the following optional flags; if set, they override the config defaults (an example invocation follows the list):

  • --agent: (str) Name of agent to create or to resume chatting with.
  • --human: (str) Name of the human to run the agent with.
  • --persona: (str) Name of agent persona to use.
  • --model: (str) LLM model to run (e.g. gpt-4, dolphin_xxx)
  • --preset: (str) MemGPT preset to run agent with.
  • --first: (bool) Allow the user to send the first message.
  • --debug: (bool) Show debug logs (default=False)
  • --no-verify: (bool) Bypass message verification (default=False)
  • --yes/-y: (bool) Skip confirmation prompt and use defaults (default=False)
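
For example, the following resumes (or creates) an agent with a few explicit overrides. The agent, persona, and model names are placeholders; substitute the ones you actually use.

memgpt run --agent research_buddy --persona sam_pov --model gpt-4 --debug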

The following additional flags, specific to local LLMs, can be used to override parameters set with memgpt configure (see the example after the list):

  • --model-wrapper: (str) Model wrapper used by backend (e.g. airoboros_xxx)
  • --model-endpoint-type: (str) Model endpoint backend type (e.g. lmstudio, ollama)
  • --model-endpoint: (str) Model endpoint URL (e.g. localhost:5000)
  • --context-window: (int) Size of model context window (specific to model type)
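
For example, to run against a local Ollama backend you might use something like the following. The model tag is only an example (use a model you have pulled), and http://localhost:11434 is Ollama's default address; adjust it if your server listens elsewhere.

memgpt run --model-endpoint-type ollama --model-endpoint http://localhost:11434 --model dolphin2.2-mistral:7b-q6_K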

Updating the config location

You can override the location of the config path by setting the environment variable MEMGPT_CONFIG_PATH:

export MEMGPT_CONFIG_PATH=/my/custom/path/config # make sure this is a file, not a directory

Adding Custom Personas/Humans

You can add new human or persona definitions either by providing a file (using the -f flag) or text (using the --text flag).

# add a human
memgpt add human [--name <NAME>] [-f <FILENAME>] [--text <TEXT>]

# add a persona
memgpt add persona [--name <NAME>] [-f <FILENAME>] [--text <TEXT>]
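
For example (the names and file below are placeholders):

# register a persona from a local text file
memgpt add persona --name friendly_tutor -f ./friendly_tutor.txt

# register a human profile from inline text
memgpt add human --name jane --text "Name: Jane. Occupation: data engineer."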

You can view available persona and human files with the following command:

memgpt list [humans/personas]

Custom Presets