Commit Graph

3426 Commits

Author SHA1 Message Date
Sarah Wooders
58bdb1ebd0 update version (#497) 2023-11-22 08:58:24 -08:00
Charles Packer
7712a06ffd Fixes bugs with AutoGen implementation and examples (#498)
* patched bugs in autogen agent example, updated autogen agent creation to follow agentconfig paradigm

* more fixes

* black

* fix bug in autoreply

* black

* pass default auto-reply through to the MemGPT AutoGen ConversableAgent subclass so that it doesn't leave empty messages, which can trigger errors in local LLM backends like LM Studio
2023-11-21 19:15:28 -08:00
Charles Packer
823a3e1694 Add error handling during linking imports (#495)
* Add error handling during linking imports

* correct typo + make error message even more explicit

* deadcode
2023-11-21 15:16:16 -08:00
Charles Packer
de0ccea181 vLLM support (#492)
* init vllm (not tested), uses POST API not openai wrapper

* add to cli config list

* working vllm endpoint

* add model configuration for vllm

---------

Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
2023-11-21 15:16:03 -08:00
Max Blackmer, CSM
d72edb6a99 ANNA, an acronym for Adaptive Neural Network Assistant, which acts as your personal research assistant and is really good with archival documents and research. (#494) 2023-11-20 11:43:08 -08:00
Charles Packer
8a7a64c7f9 patch web UI (#484)
* patch web UI

* set truncation_length
2023-11-19 14:56:10 -08:00
Charles Packer
9989fd9a52 Fix #487 (summarize call uses OpenAI even with local LLM config) (#488)
* use new chatcompletion function that takes agent config inside of summarize

* patch issue with model now missing
2023-11-19 14:54:12 -08:00
Charles Packer
4ba4c02fa1 Remove .DS_Store from agents list (#485) 2023-11-19 14:35:51 -08:00
sahusiddharth
351f8094b5 Docs: Fix typos (#477) 2023-11-17 15:12:14 -08:00
Prashant Dixit
11e11bfac4 Lancedb storage integration (#455) 2023-11-17 11:36:30 -08:00
Charles Packer
86ac4ff4de updated websocket protocol and server (#473) 2023-11-16 22:50:00 -08:00
Charles Packer
576795ffdb move webui to new openai completions endpoint, but also provide existing functionality via webui-legacy backend (#468) 2023-11-15 23:08:30 -08:00
Charles Packer
398287d1ca Add d20 function example to readthedocs (#464)
* Update functions.md

* Update functions.md
2023-11-15 16:12:16 -08:00
Charles Packer
b592328a71 bugfix for linking functions from ~/.memgpt/functions (#463) 2023-11-15 15:56:42 -08:00
Charles Packer
40ecc8e7e7 Update functions.md (#461) 2023-11-15 15:51:08 -08:00
Sarah Wooders
f781d4426a Set service context for llama index in local.py (#462)
* mark deprecated API section

* add readme

* add readme

* add readme

* add readme

* add readme

* add readme

* add readme

* add readme

* add readme

* CLI bug fixes for azure

* check azure before running

* Update README.md

* Update README.md

* bug fix with persona loading

* remove print

* make errors for cli flags more clear

* format

* fix imports

* fix imports

* add prints

* update lock

* remove asserts

* bump version

* set global context for llama index
2023-11-15 15:39:35 -08:00
Sarah Wooders
2bd7773f25 [version] bump version to 0.2.3 (#457) 2023-11-15 10:21:10 -08:00
Sarah Wooders
d117557b6a Upgrade workflows to Python 3.11 (#441)
* use python 3.11

* change format
2023-11-15 10:11:59 -08:00
Oliver Smith
a9b5a3d806 When default_mode_endpoint has a value, it needs to become model_endpoint. (#452)
Co-authored-by: Oliver Smith <oliver.smith@superevilmegacorp.com>
2023-11-15 01:18:23 -08:00
cpacker
6275a78222 missing .md file 2023-11-15 01:11:10 -08:00
Charles Packer
f63419c78b Update documentation [local LLMs, presets] (#453)
* updated local llm documentation

* updated cli flags to be consistent with documentation

* added preset documentation

* update test to use new arg

* update test to use new arg
2023-11-15 01:02:57 -08:00
Wes
2597ff2eb8 Add load and load_and_attach functions to memgpt autogen agent. (#430)
* Add load and load_and_attach functions to memgpt autogen agent.

* Only recompute files if dataset does not exist.
2023-11-14 22:51:21 -08:00
Sarah Wooders
a23ba80ac8 Update config to include memgpt_version and re-run configuration for old versions on memgpt run (#450)
* mark deprecated API section

* add readme

* add readme

* add readme

* add readme

* add readme

* add readme

* add readme

* add readme

* add readme

* CLI bug fixes for azure

* check azure before running

* Update README.md

* Update README.md

* bug fix with persona loading

* remove print

* make errors for cli flags more clear

* format

* fix imports

* fix imports

* add prints

* update lock

* remove asserts

* store config versions and force update in some cases
2023-11-14 22:50:24 -08:00
cpacker
d705271aff patch websocket server after presets refactor 2023-11-14 16:09:31 -08:00
cpacker
88cc33244a patch bad merge 2023-11-14 16:09:13 -08:00
Sarah Wooders
ec2bda4966 Refactor config + determine LLM via config.model_endpoint_type (#422)
* mark deprecated API section

* CLI bug fixes for azure

* check azure before running

* Update README.md

* Update README.md

* bug fix with persona loading

* remove print

* make errors for cli flags more clear

* format

* fix imports

* fix imports

* add prints

* update lock

* update config fields

* cleanup config loading

* commit

* remove asserts

* refactor configure

* put into different functions

* add embedding default

* pass in config

* fixes

* allow overriding openai embedding endpoint

* black

* trying to patch tests (some circular import errors)

* update flags and docs

* patched support for local llms using endpoint and endpoint type passed via configs, not env vars

* missing files

* fix naming

* fix import

* fix two runtime errors

* patch ollama typo, move ollama model question pre-wrapper, modify question phrasing to include link to readthedocs, also have a default ollama model that has a tag included

* disable debug messages

* made error message for failed load more informative

* don't print dynamic linking function warning unless --debug

* updated tests to work with new cli workflow (disabled openai config test for now)

* added skips for tests when vars are missing

* update bad arg

* revise test to soft pass on empty string too

* don't run configure twice

* extend timeout (try to pass against nltk download)

* update defaults

* typo with endpoint type default

* patch runtime errors for when model is None

* catching another case of 'x in model' when model is None (preemptively)

* allow overrides to local llm related config params

* made model wrapper selection from a list vs raw input

* update test for select instead of input

* Fixed bug in endpoint when using local->openai selection, also added validation loop to manual endpoint entry

* updated error messages to be more informative with links to readthedocs

* add back gpt3.5-turbo

---------

Co-authored-by: cpacker <packercharles@gmail.com>
2023-11-14 15:58:19 -08:00
Charles Packer
442a0ca8bf always cast config.context_window to int before use (#444)
* always cast config.context_window to int before use

* extra code to be super safe if self.config.context_window is somehow None
2023-11-14 15:12:00 -08:00
Charles Packer
b86d3e8f96 patch getargspec error (#440) 2023-11-13 17:49:01 -08:00
Charles Packer
2d8c9f15a2 WebSocket interface and basic server.py process (#399) 2023-11-13 17:30:24 -08:00
Charles Packer
e5add4e430 Configurable presets to support easy extension of MemGPT's function set (#420)
* partial

* working schema builder, tested that it matches the hand-written schemas

* correct another schema diff

* refactor

* basic working test

* refactored preset creation to use yaml files

* added docstring-parser

* add code for dynamic function linking in agent loading

* pretty schema diff printer

* support pulling from ~/.memgpt/functions/*.py

* clean

* allow looking for system prompts in ~/.memgpt/system_prompts

* create ~/.memgpt/system_prompts if it doesn't exist

* pull presets from ~/.memgpt/presets in addition to examples folder

* add support for loading agent configs that have additional keys

---------

Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
2023-11-13 10:43:28 -08:00
Sarah Wooders
dd5819893a fix config (#438) 2023-11-13 08:47:53 -08:00
Charles Packer
46a3fd9290 [version] bump release to 0.2.2 (#436) 2023-11-13 07:13:00 -08:00
Charles Packer
624650c13d patch #428 (#433) 2023-11-12 22:59:53 -08:00
Charles Packer
d6335f81cc patch (#435) 2023-11-12 22:59:41 -08:00
Sarah Wooders
ecdefc661b [fix] remove asserts for OPENAI_API_BASE (#432)
* mark deprecated API section

* add readme

* add readme

* add readme

* add readme

* add readme

* add readme

* add readme

* add readme

* add readme

* CLI bug fixes for azure

* check azure before running

* Update README.md

* Update README.md

* bug fix with persona loading

* remove print

* make errors for cli flags more clear

* format

* fix imports

* fix imports

* add prints

* update lock

* remove asserts
2023-11-12 14:34:00 -08:00
Anjalee Sudasinghe
9a9d1e7937 fix memgptagent attach docs error (#427)
Co-authored-by: Anjalee Sudasinghe <anjalee@codegen.net>
2023-11-12 13:49:11 -08:00
Sarah Wooders
1837665625 [version] bump version to 0.2.1 (#417) 2023-11-10 13:14:58 -08:00
Sarah Wooders
c55835f507 add errors to make sure envs set correctly (#418) 2023-11-10 12:45:32 -08:00
Sarah Wooders
551952083b Fix model configuration for when config.model == "local" previously (#415)
* fix agent load

* fix model config
2023-11-10 12:16:33 -08:00
Charles Packer
7f950b05e8 Patch local LLMs with context_window (#416)
* patch

* patch ollama

* patch lmstudio

* patch kobold
2023-11-10 12:06:41 -08:00
Sarah Wooders
5647f8b63b fix agent load (#412) 2023-11-10 10:34:37 -08:00
Vivian Fang
05aeaf1927 Hotfix openai create all with context_window kwarg (#413) 2023-11-10 10:30:11 -08:00
Vivian Fang
f5ab162852 Fix main.yml to not rely on requirements.txt (#411) 2023-11-10 09:24:40 -08:00
Sarah Wooders
d6115d65d4 [version] bump version to 0.2.0 (#410) 2023-11-10 08:54:50 -08:00
Charles Packer
dab47001a9 Fix max tokens constant (#374)
* stripped LLM_MAX_TOKENS constant, instead it's a dictionary, and context_window is set via the config (defaults to 8k)

* pass context window in the calls to local llm APIs

* safety check

* remove dead imports

* context_length -> context_window

* add default for agent.load

* in configure, ask for the model context window if not specified via dictionary

* fix default, also make message about OPENAI_API_BASE missing more informative

* make openai default embedding if openai is default llm

* make openai on top of list

* typo

* also make local the default for embeddings if you're using localllm instead of the locallm endpoint

* provide --context_window flag to memgpt run

* fix runtime error

* stray comments

* stray comment
2023-11-09 17:59:03 -08:00
Hans Raaf
12f9bf29fd I added some json repairs that helped me with malformed messages (#341)
* I added some json repairs that helped me with malformed messages

There are two of them: the first will remove hard line feeds that appear
in the message part because the model added those instead of escaped
line feeds. This happens a lot in my experiments, and the repair actually fixes
them.

The second one is less tested and should handle the case where the model
answers with multiple blocks of strings in quotes or even uses unescaped
quotes. It should grab everything between the 'message: "' and the ending
curly braces, escape it, and make it proper JSON that way.

Disclaimer: Both functions were written with the help of ChatGPT-4 (I
can't write much Python). I think the first one is quite solid but doubt
that the second one is fully working. Maybe somebody with more Python
skills than me (or with more time) has a better idea for this type of
malformed reply.

* Moved the repair output behind the debug flag and removed the "clean" one

* Added even more fixes (out of what I just encountered while testing)

It seems that cut-off JSON can be corrected, and sometimes the model is too
lazy to add not just one curly brace but two. I think it does not "cost"
a lot to try them all out. But the exceptions get massive that way :)

* black

* for the final hail mary with extract_first_json, might as well add a double end bracket instead of single

---------

Co-authored-by: cpacker <packercharles@gmail.com>
2023-11-09 17:05:42 -08:00
Vivian Fang
11326ec24e Remove AsyncAgent and async from cli (#400)
* Remove AsyncAgent and async from cli

Refactor agent.py memory.py

Refactor interface.py

Refactor main.py

Refactor openai_tools.py

Refactor cli/cli.py

stray asyncs

save

make legacy embeddings not use async

Refactor presets

Remove deleted function from import

* remove stray prints

* typo

* another stray print

* patch test

---------

Co-authored-by: cpacker <packercharles@gmail.com>
2023-11-09 14:51:12 -08:00
Sarah Wooders
01adfaa4be Return empty list if archival memory search over empty local index (#402) 2023-11-09 14:49:23 -08:00
Bob Kerns
b17a8e89fb Dockerfile for running postgres locally (#393) 2023-11-09 14:26:53 -08:00
Sarah Wooders
ecad9a45ad Use ~/.memgpt/config to set questionary defaults in memgpt configure + update tests to use specific config path (#389) 2023-11-09 14:01:11 -08:00