* made the openai or memgpt quickstart the default option when the user doesn't have a config set
* modified formatting + message styles
* revised quickstart guides in docs to talk about quickstart command
* made messages consistent
---------
Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
* allow entering custom model name when using openai/azure
* pull models from endpoint
* added/tested vllm and azure
* no print
* make red
* make the endpoint question give you an opportunity to re-enter your openai api key in case you made a mistake / want to swap it out
* add cascading workflow for openai+azure model listings
* patched bug w/ azure listing
* for openai, check for the key and allow the user to pass it if missing; for azure, throw an error if the key isn't present
* corrected prior azure checking to be stricter; added similar checks at the embedding endpoint config stage
* forgot to override value in config before saving
* cleaned up the ValueErrors from missing keys so that no stack trace gets printed; made success text green to match the others
* Revert "Revert "nonfunctional 404 quickstart command w/ some other typo corrections""
This reverts commit 5dbdf31f1ce939843ff97e649554d8bc0556a834.
* Revert "Revert "added example config file""
This reverts commit 72a58f6de31f3ff71847bbaf083a91182469f9af.
* tested and working
* added and tested openai quickstart, added fallback if internet 404's to pull from local copy
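The fetch-with-local-fallback behavior described above could be sketched as follows; the function name, config shape, and arguments are assumptions for illustration, not MemGPT's actual implementation:

```python
import json
from urllib.request import urlopen
from urllib.error import HTTPError, URLError


def load_quickstart_config(url: str, local_path: str) -> dict:
    """Try to fetch the latest quickstart config; fall back to the bundled local copy."""
    try:
        with urlopen(url, timeout=5) as resp:  # raises HTTPError on a 404
            return json.loads(resp.read().decode("utf-8"))
    except (HTTPError, URLError, TimeoutError):
        # network unavailable or remote file missing: use the local copy
        with open(local_path, "r", encoding="utf-8") as f:
            return json.load(f)
```

Catching `HTTPError` specifically covers the 404 case mentioned in the commit, while `URLError` covers having no internet connection at all.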
* typo
* updated openai key input message to include html link
* renamed --type to --backend; added --latest flag which fetches from online (default is to pull from the local file)
* fixed links
* added memgpt server command
* added the option to specify a port (rest default 8283, ws default 8282)
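The per-backend port defaults mentioned above might reduce to something like this sketch (argparse is used here for a self-contained example; the real CLI may use a different framework, and the flag names are assumptions):

```python
import argparse

parser = argparse.ArgumentParser(prog="memgpt server")
parser.add_argument("--type", choices=["rest", "ws"], default="rest",
                    help="which server interface to run")
parser.add_argument("--port", type=int, default=None,
                    help="port to bind (defaults: rest=8283, ws=8282)")


def resolve_port(args: argparse.Namespace) -> int:
    # fall back to the per-backend default when --port is not given
    if args.port is not None:
        return args.port
    return 8283 if args.type == "rest" else 8282
```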
* fixed import in test
* added agent saving on shutdown
* added basic locking mechanism (assumes only one server.py is running at a time)
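A single-instance guard like the one described could be an exclusive-create lock file, roughly as in this sketch (the lock path and function names are hypothetical):

```python
import os

LOCK_PATH = "/tmp/memgpt_server.lock"  # hypothetical location


def acquire_lock(path: str = LOCK_PATH) -> bool:
    """Return True if we got the lock; False if another server already holds it."""
    try:
        # O_EXCL makes creation fail atomically if the file already exists
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    os.write(fd, str(os.getpid()).encode())  # record the owner's pid
    os.close(fd)
    return True


def release_lock(path: str = LOCK_PATH) -> None:
    """Remove the lock file on clean shutdown."""
    try:
        os.remove(path)
    except FileNotFoundError:
        pass
```

One known weakness of this scheme is a stale lock file after a crash, which would need manual cleanup or a pid liveness check.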
* remove 'STOP' from buffer when converting to list for the non-streaming POST response
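Dropping the 'STOP' sentinel while draining the buffer might look like this sketch (a `queue.Queue` buffer is an assumption about the server's internals):

```python
import queue

STOP = "STOP"


def drain_buffer(buf: "queue.Queue[str]") -> list:
    """Convert the streaming buffer to a list, dropping the STOP sentinel."""
    items = []
    while not buf.empty():
        item = buf.get_nowait()
        if item == STOP:
            continue  # sentinel only signals end-of-stream; don't return it
        items.append(item)
    return items
```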
* removed duplicate on_event (redundant to lifespan)
* added GET agents/memory route
* added GET agent config
* added GET server config
* added PUT route for modifying agent core memory
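The update behind the PUT route might reduce to a merge over the agent's core memory sections, as in this sketch (the `persona`/`human` section names follow MemGPT's core memory model, but the function itself is hypothetical):

```python
def update_core_memory(current: dict, updates: dict) -> dict:
    """Apply a partial update to an agent's core memory sections."""
    allowed = {"persona", "human"}
    unknown = set(updates) - allowed
    if unknown:
        # reject sections the core memory model doesn't have
        raise ValueError(f"unknown core memory sections: {unknown}")
    merged = dict(current)
    merged.update(updates)  # only the provided sections are overwritten
    return merged
```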
* refactored to put server loop in separate function called via main
* fixed bug where the embeddings endpoint was getting set to the deployment; upgraded pinned llama-index to a new version that has the azure endpoint
* updated documentation
* added memgpt example for openai
* change wording to match configure
* patched bugs in autogen agent example; updated autogen agent creation to follow the AgentConfig paradigm
* more fixes
* black
* fix bug in autoreply
* black
* pass the default autoreply through to the memgpt autogen ConversableAgent subclass so that it doesn't leave empty messages, which can trigger errors in local llm backends like lmstudio
* initial vllm support (not tested); uses the POST API, not the openai wrapper
* add to cli config list
* working vllm endpoint
* add model configuration for vllm
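Since the vllm integration uses a plain HTTP POST rather than the openai client, building the request might look like this sketch (the `/generate` path and payload fields follow vLLM's simple API server, but treat them as assumptions for the version pinned here):

```python
import json
from urllib import request


def build_vllm_request(base_url: str, prompt: str,
                       max_tokens: int = 256, temperature: float = 0.8):
    """Build a POST request for a vLLM /generate endpoint."""
    payload = {
        "prompt": prompt,
        "max_tokens": max_tokens,   # sampling parameters go in the JSON body
        "temperature": temperature,
    }
    return request.Request(
        url=base_url.rstrip("/") + "/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```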
---------
Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
* updated local llm documentation
* updated cli flags to be consistent with documentation
* added preset documentation
* update test to use new arg
* update test to use new arg