* partial
* working schema builder, tested that it matches the hand-written schemas
* correct another schema diff
* refactor
* basic working test
* refactored preset creation to use yaml files
* added docstring-parser
* add code for dynamic function linking in agent loading
* pretty schema diff printer
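Several of the commits above (schema builder, docstring-parser, schema diff) revolve around deriving function schemas automatically instead of hand-writing them. A stdlib-only sketch of the idea using `inspect` — the commits actually add the docstring-parser package for richer parsing, and the function name and schema shape here are assumptions, not MemGPT's real code:

```python
import inspect

def build_schema(func) -> dict:
    """Derive an OpenAI-style function schema from a Python function's
    signature and docstring (stdlib-only sketch of the idea)."""
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}
    sig = inspect.signature(func)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        if name == "self":
            continue
        # Fall back to "string" for unannotated or unknown parameter types
        properties[name] = {"type": type_map.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "name": func.__name__,
        "description": (inspect.getdoc(func) or "").split("\n")[0],
        "parameters": {"type": "object", "properties": properties, "required": required},
    }
```

Generating schemas this way and diffing them against the hand-written versions is what makes a "pretty schema diff printer" useful during the transition.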
* support pulling from ~/.memgpt/functions/*.py
* clean
* allow looking for system prompts in ~/.memgpt/system_prompts
* create ~/.memgpt/system_prompts if it doesn't exist
* pull presets from ~/.memgpt/presets in addition to examples folder
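The `~/.memgpt` lookups in the bullets above amount to globbing a user directory alongside the bundled examples folder, creating the user directory if it doesn't exist. A minimal sketch (the function name and call shape are assumptions):

```python
from pathlib import Path

def discover_presets(examples_dir: Path, user_dir: Path) -> list:
    """Collect preset YAML files from the bundled examples folder plus a
    user directory such as ~/.memgpt/presets, creating it if missing."""
    user_dir.mkdir(parents=True, exist_ok=True)
    return sorted(examples_dir.glob("*.yaml")) + sorted(user_dir.glob("*.yaml"))
```

In MemGPT terms this would be called with the packaged presets folder and `Path.home() / ".memgpt" / "presets"`; the same pattern covers `~/.memgpt/functions/*.py` and `~/.memgpt/system_prompts`.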
* add support for loading agent configs that have additional keys
---------
Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
* stripped LLM_MAX_TOKENS constant, instead it's a dictionary, and context_window is set via the config (defaults to 8k)
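A sketch of the per-model dictionary plus config override described above — the model names, values, and helper here are illustrative assumptions, not the actual constants:

```python
# Hypothetical shape of the per-model context-window lookup
LLM_MAX_TOKENS = {
    "gpt-4": 8192,
    "gpt-3.5-turbo-16k": 16384,
}

DEFAULT_CONTEXT_WINDOW = 8192  # the 8k fallback mentioned above

def get_context_window(model: str, config_value=None) -> int:
    """An explicit config value wins; otherwise look the model up,
    else fall back to the 8k default."""
    if config_value is not None:
        return config_value
    return LLM_MAX_TOKENS.get(model, DEFAULT_CONTEXT_WINDOW)
```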
* pass context window in the calls to local llm APIs
* safety check
* remove dead imports
* context_length -> context_window
* add default for agent.load
* in configure, ask for the model context window if not specified via dictionary
* fix default, also make message about OPENAI_API_BASE missing more informative
* make openai default embedding if openai is default llm
* make openai on top of list
* typo
* also make local the default for embeddings if you're using localllm instead of the locallm endpoint
* provide --context_window flag to memgpt run
* fix runtime error
* stray comments
* stray comment
* I added some JSON repairs that helped me with malformed messages
There are two of them: the first removes hard line feeds that appear
in the message part because the model emitted those instead of escaped
line feeds. This happens a lot in my experiments, and the repair actually
fixes them.
The second one is less tested and should handle the case where the model
answers with multiple blocks of quoted strings or even uses unescaped
quotes. It grabs everything between the message: " and the closing
curly braces, escapes it, and makes it proper JSON that way.
Disclaimer: both functions were written with the help of ChatGPT-4 (I
can't write much Python). I think the first one is quite solid, but I
doubt the second one is fully working. Maybe somebody with more Python
skill than me (or with more time) has a better idea for this type of
malformed reply.
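The first repair — stripping hard line feeds out of JSON string values — could look roughly like this (a hypothetical reimplementation of the idea, not the actual patch):

```python
import json

def repair_hard_linefeeds(raw: str) -> str:
    """Replace raw line feeds that appear inside a JSON string value
    with escaped \\n so the result parses as valid JSON."""
    out = []
    in_string = False
    escaped = False
    for ch in raw:
        if in_string and ch == "\n":
            out.append("\\n")  # escape the hard line feed
            continue
        if escaped:
            escaped = False
        elif ch == "\\":
            escaped = True
        elif ch == '"':
            in_string = not in_string  # track string boundaries
        out.append(ch)
    return "".join(out)
```

Tracking the in-string state is what lets this leave line feeds *between* JSON tokens alone and only touch the ones inside quoted values.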
* Moved the repair output behind the debug flag and removed the "clean" one
* Added even more fixes (from what I just encountered while testing)
It seems that cut-off JSON can be corrected, and sometimes the model is
too lazy to add not just one curly brace but two. I think it doesn't "cost"
a lot to try them all out. But the exceptions get massive that way :)
* black
* for the final hail mary with extract_first_json, might as well add a double end bracket instead of single
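The "try one brace, then two" approach from the last few commits can be sketched as a chain of cheap parse attempts (a hypothetical helper, not the project's actual code):

```python
import json

def repair_truncated_json(raw: str):
    """Try parsing as-is, then with closing quote/brace combinations
    appended, since truncated model output is often just missing the tail."""
    for suffix in ("", "}", "}}", '"}', '"}}'):
        try:
            return json.loads(raw + suffix)
        except json.JSONDecodeError:
            continue
    return None  # give up; the caller can fall back to other repairs
```

Each failed attempt raises and is swallowed, which is exactly why "the exceptions get massive that way" when every candidate is tried in sequence.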
---------
Co-authored-by: cpacker <packercharles@gmail.com>
* Remove AsyncAgent and async from cli
Refactor agent.py memory.py
Refactor interface.py
Refactor main.py
Refactor openai_tools.py
Refactor cli/cli.py
stray asyncs
save
make legacy embeddings not use async
Refactor presets
Remove deleted function from import
* remove stray prints
* typo
* another stray print
* patch test
---------
Co-authored-by: cpacker <packercharles@gmail.com>
* softpass test when keys are missing
* update to use local model
* both openai and local
* typo
* fix
* Specify model inference and embedding endpoint separately (#286)
* Fix config tests (#343)
Co-authored-by: Vivian Fang <hi@vivi.sh>
* Avoid throwing error for older `~/.memgpt/config` files due to missing section `archival_storage` (#344)
* avoid error if has old config type
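The old-config fix above boils down to guarding section access with a default. A sketch with `configparser` — the section name follows the commit message, but the key names and default value are assumptions:

```python
import configparser

def get_archival_storage(config: configparser.ConfigParser) -> dict:
    """Read the archival_storage section, tolerating older
    ~/.memgpt/config files written before the section existed."""
    if not config.has_section("archival_storage"):
        return {"type": "local"}  # assumed default for old configs
    return dict(config["archival_storage"])
```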
* Dependency management (#337)
* Divides dependencies into `pip install pymemgpt[legacy,local,postgres,dev]`.
* Update docs
* Relax verify_first_message_correctness to accept any function call (#340)
* Relax verify_first_message_correctness to accept any function call
* Also allow missing internal monologue if request_heartbeat
* Cleanup
* get instead of raw dict access
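The relaxed check described in this block might be sketched as follows (a paraphrase of the commit messages, not the real `verify_first_message_correctness`):

```python
def verify_first_message(message: dict) -> bool:
    """Accept any function call, and tolerate a missing internal
    monologue when the call is request_heartbeat (using .get instead
    of raw dict access, per the cleanup commit)."""
    function_call = message.get("function_call")
    if function_call is None:
        return False  # the first message must still call *some* function
    if function_call.get("name") == "request_heartbeat":
        return True  # monologue may be absent here
    return bool(message.get("content"))  # otherwise require a monologue
```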
* Update `poetry.lock` (#346)
* mark deprecated API section
* add readme
* add readme
* add readme
* add readme
* add readme
* add readme
* add readme
* add readme
* add readme
* CLI bug fixes for azure
* check azure before running
* Update README.md
* Update README.md
* bug fix with persona loading
* remove print
* make errors for cli flags more clear
* format
* fix imports
* fix imports
* add prints
* update lock
* Add autogen example that lets you chat with docs (#342)
* Relax verify_first_message_correctness to accept any function call
* Also allow missing internal monologue if request_heartbeat
* Cleanup
* get instead of raw dict access
* Support attach in memgpt autogen agent
* Add docs example
* Add documentation, cleanup
* add gpt-4-turbo (#349)
* add gpt-4-turbo
* add in another place
* change to 3.5 16k
* Revert relaxing verify_first_message_correctness, still add archival_memory_search as an exception (#350)
* Revert "Relax verify_first_message_correctness to accept any function call (#340)"
This reverts commit 30e911057d755f5946d7bc2ba54619b5f2e08dc3.
* add archival_memory_search as an exception for verify
* Bump version to 0.1.18 (#351)
* Remove `requirements.txt` and `requirements_local.txt` (#358)
* update requirements to match poetry
* update with extras
* remove requirements
* disable pretty exceptions (#367)
* Updated documentation for users (#365)
---------
Co-authored-by: Vivian Fang <hi@vivi.sh>
* Create pull_request_template.md (#368)
* Create pull_request_template.md
* Add pymemgpt-nightly workflow (#373)
* Add pymemgpt-nightly workflow
* change token name
* Update lmstudio.md (#382)
* Update lmstudio.md
* Update lmstudio.md
* Update lmstudio.md to show the Prompt Formatting Option (#384)
* Update lmstudio.md to show the Prompt Formatting Option
* Update lmstudio.md Update the screenshot
* Swap asset location from #384 (#385)
* Update poetry with `pg8000` and include `pgvector` in docs (#390)
* Allow overriding config location with `MEMGPT_CONFIG_PATH` (#383)
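The override in #383 is the usual environment-variable-first lookup; roughly (the helper's name is an assumption):

```python
import os
from pathlib import Path

def config_path() -> Path:
    """Resolve the config file location, honoring the MEMGPT_CONFIG_PATH
    override before falling back to ~/.memgpt/config."""
    override = os.environ.get("MEMGPT_CONFIG_PATH")
    if override:
        return Path(override)
    return Path.home() / ".memgpt" / "config"
```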
* Always default to local embeddings if not OpenAI or Azure (#387)
* Add support for larger archival memory stores (#359)
* Replace `memgpt run` flags error with warning + remove custom embedding endpoint option + add agent create time (#364)
* Update webui.md (#397)
turn emoji warning into markdown warning
* Update webui.md (#398)
* dont hard code embeddings
* formatting
* black
* add full deps
* remove changes
* update poetry
---------
Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
Co-authored-by: Vivian Fang <hi@vivi.sh>
Co-authored-by: MSZ-MGS <65172063+MSZ-MGS@users.noreply.github.com>