Commit Graph

10 Commits

Author SHA1 Message Date
Charles Packer
e5add4e430 Configurable presets to support easy extension of MemGPT's function set (#420)
* partial

* working schema builder, tested that it matches the hand-written schemas

* correct another schema diff

* refactor

* basic working test

* refactored preset creation to use yaml files

* added docstring-parser

* add code for dynamic function linking in agent loading

* pretty schema diff printer

* support pulling from ~/.memgpt/functions/*.py

* clean

* allow looking for system prompts in ~/.memgpt/system_prompts

* create ~/.memgpt/system_prompts if it doesn't exist

* pull presets from ~/.memgpt/presets in addition to examples folder

* add support for loading agent configs that have additional keys

---------

Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
2023-11-13 10:43:28 -08:00
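The schema builder this commit introduces generates OpenAI-style function schemas from Python signatures (with docstring-parser pulling per-parameter descriptions). A minimal stdlib-only sketch of the idea, with hypothetical names and the docstring-parser integration omitted:

```python
import inspect

# Map Python annotations to JSON-schema types; the real builder also uses
# docstring-parser to attach per-parameter descriptions (omitted here).
_TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}

def build_function_schema(func):
    """Sketch: derive an OpenAI-style function schema from a signature."""
    properties, required = {}, []
    for name, param in inspect.signature(func).parameters.items():
        properties[name] = {"type": _TYPE_MAP.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value => required argument
    return {
        "name": func.__name__,
        "description": (inspect.getdoc(func) or "").split("\n")[0],
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": required,
        },
    }
```

Testing that generated schemas match the hand-written ones (as the commit describes) then reduces to a dictionary comparison per function.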
Charles Packer
dab47001a9 Fix max tokens constant (#374)
* stripped LLM_MAX_TOKENS constant, instead it's a dictionary, and context_window is set via the config (defaults to 8k)

* pass context window in the calls to local llm APIs

* safety check

* remove dead imports

* context_length -> context_window

* add default for agent.load

* in configure, ask for the model context window if not specified via dictionary

* fix default, also make message about OPENAI_API_BASE missing more informative

* make openai default embedding if openai is default llm

* make openai on top of list

* typo

* also make local the default for embeddings if you're using localllm instead of the locallm endpoint

* provide --context_window flag to memgpt run

* fix runtime error

* stray comments

* stray comment
2023-11-09 17:59:03 -08:00
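The change described here replaces a single `LLM_MAX_TOKENS` constant with a per-model dictionary, a config-driven `context_window`, and an 8k fallback. A sketch of that resolution order, assuming hypothetical helper names (the real lookup table covers more models):

```python
# Per-model context windows; unknown models fall back to the 8k default,
# and an explicit --context_window flag overrides both.
LLM_MAX_TOKENS = {
    "gpt-4": 8192,
    "gpt-4-32k": 32768,
    "gpt-3.5-turbo": 4096,
}
DEFAULT_CONTEXT_WINDOW = 8192

def get_context_window(model, override=None):
    """Resolve the context window: flag > per-model table > 8k default."""
    if override is not None:
        return override
    return LLM_MAX_TOKENS.get(model, DEFAULT_CONTEXT_WINDOW)
```

The resolved value is then passed through to the local LLM API calls rather than assumed globally, which is what makes the `--context_window` flag on `memgpt run` possible.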
Charles Packer
fde0087a19 Patch summarize when running with local llms (#213)
* trying to patch summarize when running with local llms

* moved token magic numbers to constants, made special localllm exception class (TODO catch these for retry), fix summarize bug where it exits early if empty list

* missing file

* raise an exception on no-op summary

* changed summarization logic to walk forwards in list until fraction of tokens in buffer is reached

* added same diff to sync agent

* reverted default max tokens to 8k, cleanup + more error wrapping for better error messages that get caught on retry

* patch for web UI context limit error propagation, using best guess for what the web UI error message is

* add webui token length exception

* remove print

* make no wrapper warning only pop up once

* cleanup

* Add errors to other wrappers

---------

Co-authored-by: Vivian Fang <hi@vivi.sh>
2023-11-02 23:44:02 -07:00
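The reworked summarization logic walks forward through the message list until a fraction of the context window's tokens is covered, then summarizes everything before that point. A simplified sketch, assuming a stubbed word-count tokenizer and hypothetical names:

```python
def count_tokens(message):
    """Stub tokenizer: the real code counts model tokens, not words."""
    return len(message.split())

def summarize_cutoff(messages, context_window, fraction=0.75):
    """Walk forward accumulating tokens until `fraction` of the context
    window is exceeded; return the index of the first message to KEEP.
    Messages before the cutoff are the ones handed to the summarizer."""
    budget = context_window * fraction
    running = 0
    for i, msg in enumerate(messages):
        running += count_tokens(msg)
        if running > budget:
            return i + 1  # summarize messages[:i + 1]
    return len(messages)  # everything fits under the budget
```

Per the commit, a cutoff that would produce an empty summary (a no-op) raises an exception in the real implementation rather than silently exiting early.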
Sarah Wooders
5c44790ad0 add black to poetry and reformat 2023-10-26 15:33:50 -07:00
Vivian Fang
bc81cdcef4 Revert "Revert "cleanup""
This reverts commit 6cd2a0049b02643ef800f7c2ddb45a1f4bd5babf.
2023-10-25 12:42:35 -07:00
Vivian Fang
8c3409cf02 Revert "cleanup"
This reverts commit 85d9fba811f237fc0c625e920d4ee5995a9308f6, reversing
changes made to a7e06d0acc1b69b311fb16e386c4867337fe76f8.
2023-10-25 01:02:43 -07:00
Vivian Fang
1cb89c4f47 fix memgpt_dir circular import 2023-10-24 13:28:17 -07:00
Vivian Fang
86d52c4cdf fix summarizer 2023-10-15 21:07:45 -07:00
cpacker
1c26546896 relax inner monologue check based on model 2023-10-14 17:59:24 -07:00
Charles Packer
257c3998f7 init commit 2023-10-12 18:48:58 -07:00