* added memgpt server command
* added the option to specify a port (rest default 8283, ws default 8282)
* fixed import in test
* added agent saving on shutdown
* added basic locking mechanism (assumes only one server.py is running at the same time)
* remove 'STOP' from buffer when converting to list for the non-streaming POST response
* removed duplicate on_event (redundant to lifespan)
* added GET agents/memory route
* added GET agent config
* added GET server config
* added PUT route for modifying agent core memory
* refactored to put server loop in separate function called via main
* init server refactor
* refactored websockets server/client code to use internal server API
* added intentional fail on test
* update workflow to try to get test to pass remotely
* refactor to put websocket code in a separate subdirectory
* added fastapi rest server
* add error handling
* modified interface return style
* disabled certain tests on remote
* added SSE response option for user_message
* fix ws interface test
* fallback for oai key
* add soft fail for test when localhost is borked
* add step_yield for all server related interfaces
* extra catch
* update toml + lock with server add-ons (add uvicorn+fastapi, move websockets to server extra)
* regen lock file
* added pytest-asyncio as an extra in dev
* add pydantic to deps
* renamed CreateConfig to CreateAgentConfig
* fixed POST request for creating agent + tested it
* don't add anything except for assistant messages to the global autogen message history
* properly format autogen messages when using local llms (allow naming to get passed through to the prompt formatter)
* add extra handling of autogen's name field in step()
* comments
* Fix bug where the embeddings endpoint was getting set to the deployment; upgraded pinned llama-index to a new version that supports Azure endpoints
* updated documentation
* added memgpt example for openai
* change wording to match configure
* sort agents by directory-last-modified time
* only save agent config when agent is saved
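The 'STOP' sentinel handling for the non-streaming POST response (noted above) can be sketched as follows; `drain_buffer`, the queue-based buffer, and the `STOP` string are illustrative assumptions for this sketch, not the project's actual code:

```python
import queue

STOP = "STOP"  # sentinel pushed into the buffer when a step finishes (assumed)

def drain_buffer(buf: "queue.Queue[str]") -> list[str]:
    """Drain the interface's message buffer into a list for a
    non-streaming POST response, dropping the STOP sentinel so it
    never reaches the client."""
    messages = []
    while not buf.empty():
        item = buf.get_nowait()
        if item != STOP:
            messages.append(item)
    return messages

# usage: the sentinel is stripped before the response is assembled
buf: "queue.Queue[str]" = queue.Queue()
for item in ["hello", "world", STOP]:
    buf.put(item)
print(drain_buffer(buf))  # ['hello', 'world']
```

In the SSE streaming path the sentinel would instead signal end-of-stream rather than being filtered out of a list.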
---------
Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
* Update autogen.md
* in groupchat example add an azure elif
* fixed missing azure mappings + corrected the gpt-4-turbo one
* Updated MemGPT AutoGen agent to take credentials and store them in the config (allows users to use memgpt+autogen without running `memgpt configure`); also patched the api_base kwarg for autogen >=v0.2
* add note about 0.2 testing
* added overview to autogen integration page
* default examples to openai, sync config header between the two main examples, change speaker mode to round-robin in 2-way chat to suppress warning
* sync config header on last example (not used in docs)
* refactor to make sure we use existing config when writing out extra credentials
* fixed bug in local LLM where we need to comment out api_type (for pyautogen>=0.2.0)
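The credentials refactor above (re-using an existing config when writing out extra credentials instead of clobbering it) can be sketched with the standard library's `configparser`; the section name `credentials` and the key names are assumptions for this sketch, not MemGPT's real config schema:

```python
import configparser
import os
import tempfile

def write_credentials(config_path: str, creds: dict) -> None:
    """Merge extra credentials into an existing config file.

    Reads the current config first (a no-op if the file doesn't exist),
    updates only the given keys, and writes everything back, so values
    set earlier (e.g. by a configure step) survive the write.
    """
    config = configparser.ConfigParser()
    config.read(config_path)  # silently skips a missing file
    if not config.has_section("credentials"):
        config.add_section("credentials")
    for key, value in creds.items():
        config.set("credentials", key, value)
    with open(config_path, "w") as f:
        config.write(f)

# usage: the second write preserves the key written by the first
path = os.path.join(tempfile.mkdtemp(), "config.ini")
write_credentials(path, {"openai_key": "sk-..."})
write_credentials(path, {"azure_key": "az-..."})
```

Reading before writing is the key design point: it lets credentials passed directly to the AutoGen agent coexist with whatever `memgpt configure` already stored.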