* feat: new iteration of chatui - beware it is still buggy
added some error handling, but I believe this still needs a lot of improvement.
added timestamps for when messages are sent.
when changing to a new agent the agent initiates the conversation.
persisting messages for now, storing them in localStorage so users can see their
history and don't lose it on reload. replacing this with intelligent fetching asap.
* chore: build frontend
* autogenerate openapi file on server startup
* added endpoint for paginated retrieval of in-context agent messages
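In outline, cursor-style pagination over the in-context message list might look like this (function and field names are assumptions for illustration, not the real API):

```python
from typing import Optional


def paginate_messages(
    messages: list, cursor: Optional[int] = None, limit: int = 20
) -> dict:
    # `cursor` is the index of the first message in the page; the response
    # carries the cursor for the next page, or None once the list is exhausted.
    start = cursor or 0
    page = messages[start:start + limit]
    next_cursor = start + limit if start + limit < len(messages) else None
    return {"messages": page, "next_cursor": next_cursor}
```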
* missing diff
* added ability to pass system messages via message endpoint
* patched bad Depends usage into Query parameters to fix the param info not showing up in GET requests, fixed some bad copy-paste
* adding docstrings + pydantic models to rest api for autogenerating the openapi documentation
* add tags to all endpoints
* updated docstrings, added response type, patched runtime error
* feat: add dark mode & make minor UI improvements
added dark mode toggle & picked a color scheme that is closer to the memgpt icons
cleaned up the home page a little bit.
* feat: add thinking indicator & make minor UI improvements
we now show a thinking indicator while the current message is loading.
removed status indicator as we do not work with websockets anymore.
also adjusted some of the chat styles to better fit the new theme.
* feat: add memory viewer and allow memory edit
* chore: build frontend
* feat: add loading indicator when creating new agent
* feat: reorder front page to avoid overflow and always show add button
* feat: display function calls
* feat: set up proxy during development & remove explicit inclusion of host/port in backend calls
* fix: introduce api prefix, split up fastapi server to become more modular, use app directly instead of subprocess
the api prefix allows us to create a proxy for frontend development that relays all /api
requests to our fastapi, while serving the development files for other paths.
splitting up the fastapi server will allow us to branch out and divide up the work better
in the future. using the application directly in our cli instead of a subprocess makes
debugging easier in development, and overall this python-native way just seems cleaner.
we can discuss if we should keep the api prefix or if we should distinguish between a REST only
mode and one that also serves the static files for the GUI.
This is just my initial take on things
* chore: build latest frontend
* updated local APIs to return usage info (#585)
* updated APIs to return usage info
* tested all endpoints
* added autogen as an extra (#616)
* added autogen as an extra
* updated docs
Co-authored-by: hemanthsavasere <hemanth.savasere@gmail.com>
* Update LICENSE
* Add safeguard on tokens returned by functions (#576)
* swapping out hardcoded str for prefix (forgot to include in #569)
* add extra failout when the summarizer tries to run on a single message
* added function response validation code, currently will truncate responses based on character count
* added return type hints (functions/tools should either return strings or None)
* discuss function output length in custom function section
* made the truncation more informative
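The safeguard is roughly this shape (function name and default limit are illustrative; the real defaults may differ):

```python
def validate_function_response(response, limit: int = 3000) -> str:
    # tools should return strings or None; coerce anything else
    if response is None:
        text = "None"
    elif isinstance(response, str):
        text = response
    else:
        text = str(response)
    # truncate on character count, but say so instead of cutting silently
    if len(text) > limit:
        dropped = len(text) - limit
        text = text[:limit] + f"... [output truncated, {dropped} chars dropped]"
    return text
```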
* patch bug where None.copy() throws runtime error (#617)
* allow passing custom host to uvicorn (#618)
* feat: initial poc for socket server
* feat: initial poc for frontend based on react
Set up an nx workspace which makes it easy to manage dependencies and added shadcn components
that allow us to build good-looking ui in a fairly simple way.
UI is a very simple and basic chat that starts with a message from the user and then simply displays the
answer string that is sent back from the fastapi ws endpoint
* feat: map arguments to JSON and return new messages
Except for the previous user message, we return all newly generated messages and let the frontend figure out how to display them.
* feat: display messages based on role and show inner thoughts and connection status
* chore: build newest frontend
* feat(frontend): show loader while waiting for first message and disable send button until connection is open
* feat: make agent send the first message and loop similar to CLI
currently the CLI loops until the correct function call sends a message to the user. this is an initial attempt to achieve similar behavior in the socket server
* chore: build new version of frontend
* fix: rename lib directory so it is not excluded as part of python gitignore
* chore: rebuild frontend app
* fix: save agent at end of each response to allow the conversation to carry on over multiple sessions
* feat: restructure server to support multiple endpoints and add agents and sources endpoint
* feat: setup frontend routing and settings page
* chore: build frontend
* feat: another iteration of web interface
changes include: websocket for chat, switching between different agents, and introduction of zustand state management
* feat: adjust frontend to work with memgpt rest-api
* feat: adjust existing rest_api to serve and interact with frontend
* feat: build latest frontend
* chore: build latest frontend
* fix: cleanup workspace
---------
Co-authored-by: Charles Packer <packercharles@gmail.com>
Co-authored-by: hemanthsavasere <hemanth.savasere@gmail.com>