Removed logging settings from the configuration and migrated them to constants.py
Modified log.py to configure logging using those constants
Conflicts:
	memgpt/config.py (resolved)
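A minimal sketch of the constants-driven logging setup described above (the constant names are illustrative assumptions, not necessarily the ones in memgpt/constants.py):

```python
import logging

# --- constants.py (hypothetical names) ---
LOGGER_NAME = "memgpt"
LOGGER_DEFAULT_LEVEL = logging.INFO
LOGGER_FORMAT = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"

# --- log.py: configure logging from the constants rather than the user config ---
logger = logging.getLogger(LOGGER_NAME)
logger.setLevel(LOGGER_DEFAULT_LEVEL)

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(LOGGER_FORMAT))
logger.addHandler(handler)
```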
* First commit of memgpt client and some messy test code
* rolled back unnecessary changes to abstract interface; switched client to always use QueuingInterface
* Added missing interface clear() in run_command; added convenience method for checking if an agent exists, used that in create_agent
* Formatting fixes
* Fixed incorrect naming of get_agent_memory in rest server
* Removed erroneous clear from client save method; Replaced print statements with appropriate logger calls in server
* Updated readme with client usage instructions
* added tests for Client
* make printing to terminal toggleable on QueuingInterface (should probably refactor this to a logger)
* turn off printing to stdout via interface by default
* allow importing the python client in a similar fashion to openai-python (see https://github.com/openai/openai-python); a usage sketch follows this commit list
* Allowed quickstart on init of client; updated readme and test_client accordingly
* oops, fixed name of openai_api_key config key
* Fixed small typo
* Fixed broken test by adding memgpt hosted model details to agent config
* silence llamaindex 'LLM is explicitly disabled. Using MockLLM.' on server
* default to openai if user's memgpt directory is empty (first time)
* correct type hint
* updated section on client in readme
* added comment about how MemGPT config != Agent config
* patch unrelated test
* update wording on readme
* patch another unrelated test
* added python client to readme docs
* Changed 'user' to 'human' in example; Defaulted AgentConfig.model to 'None'; Fixed issue in create_agent (accounting for dict config); matched test code to example
* Fixed advanced example
* patch test
* patch
---------
Co-authored-by: cpacker <packercharles@gmail.com>
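As referenced above, a minimal usage sketch of importing the Python client in the openai-python style. The names here (Client, agent_exists, create_agent, user_message) are illustrative assumptions; the actual API is the one documented in the README section added in this PR:

```python
import memgpt  # import the package the same way openai-python is imported

# hypothetical constructor; the real client may take different arguments
client = memgpt.Client()

agent_name = "demo_agent"
# convenience existence check (added in a commit above) before creating the agent
if not client.agent_exists(agent_name):
    client.create_agent(agent_config={"name": agent_name, "human": "basic", "persona": "sam_pov"})

# send a user message and print the agent's reply
response = client.user_message(agent_id=agent_name, message="hello!")
print(response)
```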
* add folder generation
* disable default temp until more testing is done
* apply embedding payload patch to search, add input checking for better runtime error messages
* streamlined memory pressure warning now that heartbeats get forced
* sort agents by directory-last-modified time (see the sketch after this commit list)
* only save agent config when agent is saved
---------
Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
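A sketch of the directory-last-modified sort mentioned above, assuming agent folders live under ~/.memgpt/agents (standard-library calls only):

```python
import os

MEMGPT_DIR = os.path.join(os.path.expanduser("~"), ".memgpt")

def list_agents_newest_first(agents_dir=os.path.join(MEMGPT_DIR, "agents")):
    """Return agent folder names sorted by last-modified time, newest first."""
    names = [n for n in os.listdir(agents_dir) if os.path.isdir(os.path.join(agents_dir, n))]
    names.sort(key=lambda n: os.path.getmtime(os.path.join(agents_dir, n)), reverse=True)
    return names
```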
* mark deprecated API section
* CLI bug fixes for azure
* check azure before running
* Update README.md
* Update README.md
* bug fix with persona loading
* remove print
* make errors for cli flags more clear
* format
* fix imports
* fix imports
* add prints
* update lock
* update config fields
* cleanup config loading
* commit
* remove asserts
* refactor configure
* put into different functions
* add embedding default
* pass in config
* fixes
* allow overriding openai embedding endpoint
* black
* trying to patch tests (some circular import errors)
* update flags and docs
* patched support for local llms using endpoint and endpoint type passed via configs, not env vars
* missing files
* fix naming
* fix import
* fix two runtime errors
* patch ollama typo; move the ollama model question before the wrapper question; rephrase the question to include a link to readthedocs; default to an ollama model that includes a tag
* disable debug messages
* made error message for failed load more informative
* don't print dynamic linking function warning unless --debug
* updated tests to work with new cli workflow (disabled openai config test for now)
* added skips for tests when vars are missing
* update bad arg
* revise test to soft pass on empty string too
* don't run configure twice
* extend timeout (to give the nltk download time to finish)
* update defaults
* typo with endpoint type default
* patch runtime errors for when model is None
* catching another case of 'x in model' when model is None (preemptively)
* allow overrides to local llm related config params
* made model wrapper selection a list selection instead of raw text input (see the questionary sketch after this commit list)
* update test for select instead of input
* Fixed bug in endpoint when using local->openai selection, also added validation loop to manual endpoint entry
* updated error messages to be more informative with links to readthedocs
* add back gpt-3.5-turbo
---------
Co-authored-by: cpacker <packercharles@gmail.com>
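The list-based wrapper selection mentioned above could look like the sketch below (the wrapper names are placeholders; MemGPT's CLI already uses questionary for its prompts):

```python
import questionary

# placeholder wrapper names; the real list comes from the local-LLM wrapper registry
WRAPPER_CHOICES = ["airoboros-l2-70b-2.1", "dolphin-2.1-mistral-7b", "zephyr-7B-alpha"]

wrapper = questionary.select(
    "Select a model wrapper (see https://memgpt.readthedocs.io for details):",
    choices=WRAPPER_CHOICES,
    default=WRAPPER_CHOICES[0],
).ask()
print(f"Selected wrapper: {wrapper}")
```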
* partial
* working schema builder, tested that it matches the hand-written schemas
* correct another schema diff
* refactor
* basic working test
* refactored preset creation to use yaml files
* added docstring-parser
* add code for dynamic function linking in agent loading
* pretty schema diff printer
* support pulling from ~/.memgpt/functions/*.py (see the loader sketch after this commit list)
* clean
* allow looking for system prompts in ~/.memgpt/system_prompts
* create ~/.memgpt/system_prompts if it doesn't exist
* pull presets from ~/.memgpt/presets in addition to examples folder
* add support for loading agent configs that have additional keys
---------
Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
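A sketch of how pulling user functions from ~/.memgpt/functions/*.py can work with the standard library (the helper name is illustrative; presumably the real loader also runs each function through the schema builder above):

```python
import glob
import importlib.util
import os

USER_FUNCTIONS_DIR = os.path.join(os.path.expanduser("~"), ".memgpt", "functions")

def load_user_function_modules():
    """Import every .py file under ~/.memgpt/functions and yield the module objects."""
    for path in sorted(glob.glob(os.path.join(USER_FUNCTIONS_DIR, "*.py"))):
        module_name = os.path.splitext(os.path.basename(path))[0]
        spec = importlib.util.spec_from_file_location(module_name, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)  # executes the user's file
        yield module
```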
* stripped the LLM_MAX_TOKENS constant; it is now a dictionary, and context_window is set via the config (defaults to 8k; see the lookup sketch after this commit list)
* pass context window in the calls to local llm APIs
* safety check
* remove dead imports
* context_length -> context_window
* add default for agent.load
* in configure, ask for the model context window if not specified via dictionary
* fix default, also make message about OPENAI_API_BASE missing more informative
* make openai default embedding if openai is default llm
* make openai on top of list
* typo
* also make local the default for embeddings if you're using a local llm endpoint type
* provide --context_window flag to memgpt run
* fix runtime error
* stray comments
* stray comment
* Remove AsyncAgent and async from cli:
  - Refactor agent.py and memory.py
  - Refactor interface.py
  - Refactor main.py
  - Refactor openai_tools.py
  - Refactor cli/cli.py
  - Remove stray asyncs
  - save
  - Make legacy embeddings not use async
  - Refactor presets
  - Remove deleted function from import
* remove stray prints
* typo
* another stray print
* patch test
---------
Co-authored-by: cpacker <packercharles@gmail.com>
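The LLM_MAX_TOKENS lookup described at the top of this commit list might look like the following (exact keys and values in memgpt/constants.py may differ; the None guard mirrors the 'x in model' patches above):

```python
from typing import Optional

# per-model context windows, with an 8k fallback (values here are illustrative)
LLM_MAX_TOKENS = {
    "DEFAULT": 8192,
    "gpt-4": 8192,
    "gpt-3.5-turbo": 4096,
    "gpt-3.5-turbo-16k": 16384,
}

def get_context_window(model: Optional[str], override: Optional[int] = None) -> int:
    """Resolve context_window: explicit config/--context_window wins, then the table, then DEFAULT."""
    if override is not None:
        return override
    if model is not None and model in LLM_MAX_TOKENS:  # guard: model may be None
        return LLM_MAX_TOKENS[model]
    return LLM_MAX_TOKENS["DEFAULT"]
```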
* strip '/' and use osp.join (see the path-handling sketch after this commit list)
* grepped for MEMGPT_DIR, found more places to replace '/'
* typo
* grep pass over file separators
---------
Co-authored-by: Vivian Fang <hi@vivi.sh>
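The separator cleanup in this last set of commits amounts to the pattern below (example values are hypothetical):

```python
import os

# build MEMGPT_DIR without hard-coding '/' separators
MEMGPT_DIR = os.path.join(os.path.expanduser("~"), ".memgpt")

agent_name = "demo_agent"  # hypothetical example value

# before: brittle concatenation with a hard-coded separator
#   agent_dir = MEMGPT_DIR + "/agents/" + agent_name
# after: os.path.join picks the right separator for the host OS
agent_dir = os.path.join(MEMGPT_DIR, "agents", agent_name)
print(agent_dir)
```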