* Update README.md
* fix: 'ollama run' should be 'ollama pull'
* fix: linting, syntax, spelling corrections for all docs
* fix: markdown linting rules and missed fixes
* fix: re-added space to block
* fix: changed sh blocks to text
* docs: added exception for bare urls in markdown
* docs: added exception for in-line html (MD033/no-inline-html)
* docs: made Python indentation consistent (4-space tabs), even though I prefer 2.
---------
Co-authored-by: Charles Packer <packercharles@gmail.com>
* Include steps for Local LLMs
Added install instructions for running LLMs locally.
* Add Windows warning
* Update installation warning for Local LLMs
Removed exact install instructions to keep the QuickStart page clean and avoid duplicating knowledge.
* Update local_llm_faq.md
Added WSL troubleshooting section.
* Update local_llm.md
Update FAQ Link wording
* Update local_llm_faq.md
Improve punctuation and add link to WSL Issue thread
* First commit of memgpt client and some messy test code
* rolled back unnecessary changes to the abstract interface; switched client to always use QueuingInterface
* Added missing interface clear() in run_command; added convenience method for checking if an agent exists, used that in create_agent
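The convenience check mentioned above might look like the following sketch. The names (`Server`, `agent_exists`, `create_agent`) and the dict-backed store are illustrative assumptions, not MemGPT's actual implementation:

```python
class Server:
    """Minimal stand-in for the server used by the client (illustrative only)."""

    def __init__(self):
        self._agents = {}  # agent name -> agent state

    def agent_exists(self, agent_name: str) -> bool:
        # Convenience method so callers don't poke at internal state directly
        return agent_name in self._agents

    def create_agent(self, agent_name: str) -> dict:
        # Use the convenience check to avoid silently clobbering an agent
        if self.agent_exists(agent_name):
            raise ValueError(f"Agent '{agent_name}' already exists")
        agent = {"name": agent_name, "messages": []}
        self._agents[agent_name] = agent
        return agent
```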
* Formatting fixes
* Fixed incorrect naming of get_agent_memory in rest server
* Removed erroneous clear from client save method; Replaced print statements with appropriate logger calls in server
* Updated readme with client usage instructions
* added tests for Client
* make printing to terminal toggleable on QueuingInterface (should probably refactor this to a logger)
* turn off printing to stdout via interface by default
* allow importing the python client in a similar fashion to openai-python (see https://github.com/openai/openai-python)
* Allowed quickstart on init of client; updated readme and test_client accordingly
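The openai-python-style usage referenced above, a module-level default client created lazily, with an optional quickstart on init, can be sketched roughly like this. The names `Client`, `create_client`, and the config keys are assumptions for illustration, not necessarily MemGPT's real API:

```python
from typing import Optional

_default_client = None  # module-level singleton, mirroring openai-python's style


class Client:
    def __init__(self, quickstart: Optional[str] = None):
        # Optionally run a quickstart (e.g. "openai" or "memgpt") on init
        self.config = {"backend": quickstart or "openai"}

    def run_command(self, command: str) -> str:
        return f"[{self.config['backend']}] ran: {command}"


def create_client(quickstart: Optional[str] = None) -> Client:
    # Reuse a single shared client, so `create_client()` behaves like
    # importing a preconfigured module-level object
    global _default_client
    if _default_client is None:
        _default_client = Client(quickstart=quickstart)
    return _default_client
```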
* oops, fixed name of openai_api_key config key
* Fixed small typo
* Fixed broken test by adding memgpt hosted model details to agent config
* silence llamaindex 'LLM is explicitly disabled. Using MockLLM.' on server
* default to openai if user's memgpt directory is empty (first time)
* correct type hint
* updated section on client in readme
* added comment about how MemGPT config != Agent config
* patch unrelated test
* update wording on readme
* patch another unrelated test
* added python client to readme docs
* Changed 'user' to 'human' in example; defaulted AgentConfig.model to None; fixed issue in create_agent (accounting for dict config); matched test code to example
* Fixed advanced example
* patch test
* patch
---------
Co-authored-by: cpacker <packercharles@gmail.com>
* made quickstart (openai or memgpt) the default option when the user doesn't have a config set
* modified formatting + message styles
* revised quickstart guides in docs to talk about quickstart command
* make message consistent
---------
Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
* swapping out hardcoded str for prefix (forgot to include in #569)
* add extra failout when the summarizer tries to run on a single message
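A failout like the one described above, bailing when summarization would operate on a single message, might look roughly like this (hypothetical function; not MemGPT's actual summarizer):

```python
def summarize_messages(messages: list) -> str:
    # Fail out early: summarizing a single message cannot free any context,
    # so raising is better than looping trying to compress it further.
    if len(messages) <= 1:
        raise ValueError("Summarizer called on a single message; nothing to compress")
    return " | ".join(m["content"] for m in messages)
```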
* added function response validation code; currently truncates responses based on character count
* added return type hints (functions/tools should either return strings or None)
* discuss function output length in custom function section
* made the truncation more informative
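The validation-and-truncation behavior described above (functions return `str` or `None`, over-long outputs are cut at a character limit, and the cut is announced rather than silent) could be sketched as follows. `MAX_CHARS`, the function name, and the notice text are assumptions, not MemGPT's actual values:

```python
from typing import Optional

MAX_CHARS = 3000  # illustrative limit, not MemGPT's actual value


def validate_function_response(response: Optional[str]) -> str:
    # Functions/tools should return either a string or None
    if response is None:
        return "None"
    if not isinstance(response, str):
        response = str(response)
    if len(response) > MAX_CHARS:
        # Informative truncation: tell the model what happened and why,
        # instead of silently dropping the tail of the output
        response = response[:MAX_CHARS] + (
            f"... [NOTE: function output was truncated since it exceeded "
            f"the {MAX_CHARS}-character limit]"
        )
    return response
```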
* Fix bug where the embeddings endpoint was getting set to the deployment; upgraded pinned llama-index to a new version that supports the Azure endpoint
* updated documentation
* added memgpt example for openai
* change wording to match configure