Commit Graph

28 Commits

Author SHA1 Message Date
Charles Packer
624650c13d patch #428 (#433) 2023-11-12 22:59:53 -08:00
Charles Packer
dab47001a9 Fix max tokens constant (#374)
* stripped LLM_MAX_TOKENS constant, instead it's a dictionary, and context_window is set via the config (defaults to 8k)

* pass context window in the calls to local llm APIs

* safety check

* remove dead imports

* context_length -> context_window

* add default for agent.load

* in configure, ask for the model context window if not specified via dictionary

* fix default, also make message about OPENAI_API_BASE missing more informative

* make openai default embedding if openai is default llm

* make openai on top of list

* typo

* also make local the default for embeddings if you're using localllm instead of the locallm endpoint

* provide --context_window flag to memgpt run

* fix runtime error

* stray comments

* stray comment
2023-11-09 17:59:03 -08:00
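The commit above replaces a single max-tokens constant with a per-model lookup plus a configurable `context_window`. A minimal sketch of that pattern, assuming hypothetical model names and the 8k default mentioned in the message (not MemGPT's actual table):

```python
# Hypothetical sketch: a per-model context-window table replacing a single
# LLM_MAX_TOKENS constant. Model names and values are illustrative only.
from typing import Optional

LLM_MAX_TOKENS = {
    "gpt-4": 8192,
    "gpt-3.5-turbo": 4096,
}

DEFAULT_CONTEXT_WINDOW = 8192  # fallback when the model is not in the table


def get_context_window(model: str, override: Optional[int] = None) -> int:
    """Resolve the context window: explicit override > table > default."""
    if override is not None:
        return override
    return LLM_MAX_TOKENS.get(model, DEFAULT_CONTEXT_WINDOW)
```

An explicit override models the `--context_window` flag the commit adds to `memgpt run`.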
Vivian Fang
11326ec24e Remove AsyncAgent and async from cli (#400)
* Remove AsyncAgent and async from cli

Refactor agent.py memory.py

Refactor interface.py

Refactor main.py

Refactor openai_tools.py

Refactor cli/cli.py

stray asyncs

save

make legacy embeddings not use async

Refactor presets

Remove deleted function from import

* remove stray prints

* typo

* another stray print

* patch test

---------

Co-authored-by: cpacker <packercharles@gmail.com>
2023-11-09 14:51:12 -08:00
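The refactor above removes the async code path from the CLI. A sketch of the before/after shape of such a change; class and method names are hypothetical, not MemGPT's actual API:

```python
# Illustrative async-to-sync refactor: a coroutine-based agent step becomes
# a plain method call, so the CLI no longer needs an event loop.
class Agent:
    def __init__(self):
        self.messages = []

    # Before (removed): async def step(self, user_message): ...
    def step(self, user_message: str) -> str:
        """Synchronous step: callable directly from the CLI."""
        self.messages.append(user_message)
        return f"processed: {user_message}"


def run_cli_turn(agent: Agent, text: str) -> str:
    # Previously this required asyncio.run(agent.step(text)); now it is direct.
    return agent.step(text)
```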
Sarah Wooders
cef4d8489d Add support for larger archival memory stores (#359) 2023-11-09 09:09:57 -08:00
Sarah Wooders
fb29290dd4 Dependency management (#337)
* Divides dependencies into `pip install pymemgpt[legacy,local,postgres,dev]`. 
* Update docs
2023-11-06 19:45:44 -08:00
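Splitting dependencies into extras like `pymemgpt[postgres]` means optional backends should fail with an actionable message instead of a raw `ImportError`. A hedged sketch of that guard pattern (the helper name is an assumption, not MemGPT code):

```python
# Sketch of an optional-dependency guard for a package with install extras.
# require_extra is a hypothetical helper, not part of pymemgpt.
import importlib


def require_extra(module_name: str, extra: str):
    """Import an optional dependency, pointing the user at the right extra."""
    try:
        return importlib.import_module(module_name)
    except ImportError as e:
        raise ImportError(
            f"Missing optional dependency '{module_name}'. "
            f"Install it with: pip install pymemgpt[{extra}]"
        ) from e
```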
Charles Packer
caba2f468c Create docs pages (#328)
* Create docs  (#323)

* Create .readthedocs.yaml

* Update mkdocs.yml

* update

* revise

* syntax

* syntax

* syntax

* syntax

* revise

* revise

* spacing

* Docs (#327)

* add stuff

* patch homepage

* more docs

* updated

* updated

* refresh

* refresh

* refresh

* update

* refresh

* refresh

* refresh

* refresh

* missing file

* refresh

* refresh

* refresh

* refresh

* fix black

* refresh

* refresh

* refresh

* refresh

* add readme for just the docs

* Update README.md

* add more data loading docs

* cleanup data sources

* refresh

* revised

* add search

* make prettier

* revised

* updated

* refresh

* favi

* updated

---------

Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
2023-11-06 12:38:49 -08:00
Charles Packer
cc1ce0ce33 Remove embeddings as argument in archival_memory.insert (#284) 2023-11-05 12:48:22 -08:00
Vivian Fang
1871823c99 hotfix DummyArchivalMemoryWithFaiss 2023-11-03 16:41:06 -07:00
Sarah Wooders
b9ce763fda VectorDB support (pgvector) for archival memory (#226) 2023-11-03 16:19:15 -07:00
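A pgvector-backed archival store answers queries by vector distance in Postgres. A pure-Python illustration of the nearest-neighbor lookup such a backend performs (the real backend issues SQL using pgvector's distance operators instead):

```python
# Pure-Python stand-in for the top-k similarity search a pgvector-backed
# archival memory runs in the database.
import math


def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)


def top_k(query, rows, k=2):
    """rows: list of (text, embedding); return the k closest texts."""
    ranked = sorted(rows, key=lambda r: cosine_distance(query, r[1]))
    return [text for text, _ in ranked[:k]]
```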
Charles Packer
25dd225d04 strip '/' and use osp.join (Windows support) (#283)
* strip '/' and use osp.join

* grepped for MEMGPT_DIR, found more places to replace '/'

* typo

* grep pass over filesep

---------

Co-authored-by: Vivian Fang <hi@vivi.sh>
2023-11-03 13:54:29 -07:00
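The Windows fix above is the standard portable-path pattern: replace hard-coded `'/'` concatenation with `os.path.join`. A sketch (the `.memgpt` location is illustrative of the `MEMGPT_DIR` constant the commit greps for):

```python
# Portable path construction: os.path.join picks the right separator on
# Windows and POSIX, where string concatenation with '/' does not.
import os

MEMGPT_DIR = os.path.join(os.path.expanduser("~"), ".memgpt")


def config_path(filename: str) -> str:
    # Before: MEMGPT_DIR + "/" + filename  (breaks on Windows)
    return os.path.join(MEMGPT_DIR, filename)
```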
Charles Packer
fde0087a19 Patch summarize when running with local llms (#213)
* trying to patch summarize when running with local llms

* moved token magic numbers to constants, made special localllm exception class (TODO catch these for retry), fix summarize bug where it exits early if empty list

* missing file

* raise an exception on no-op summary

* changed summarization logic to walk forwards in list until fraction of tokens in buffer is reached

* added same diff to sync agent

* reverted default max tokens to 8k, cleanup + more error wrapping for better error messages that get caught on retry

* patch for web UI context limit error propagation, using best guess for what the web UI error message is

* add webui token length exception

* remove print

* make no wrapper warning only pop up once

* cleanup

* Add errors to other wrappers

---------

Co-authored-by: Vivian Fang <hi@vivi.sh>
2023-11-02 23:44:02 -07:00
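The summarization change above "walks forwards in the list until a fraction of tokens in the buffer is reached," and raises on a no-op summary. A sketch under stated assumptions: the fraction value and the whitespace token count are stand-ins, not MemGPT's actual constants or tokenizer:

```python
# Sketch of forward-walking summarization selection: take the oldest
# messages until they account for roughly `frac` of the total token count.
def count_tokens(text: str) -> int:
    return len(text.split())  # crude stand-in for a real tokenizer


def messages_to_summarize(messages, frac=0.5):
    total = sum(count_tokens(m) for m in messages)
    if total == 0:
        raise ValueError("no-op summary: nothing to summarize")
    target = total * frac
    acc, cut = 0, 0
    for i, m in enumerate(messages):
        acc += count_tokens(m)
        cut = i + 1
        if acc >= target:
            break
    return messages[:cut]
```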
Robin Goetz
30bb866142 fix: LocalArchivalMemory prints ref_doc_info only if not using EmptyIndex (#240)
Currently, running the /memory command breaks the application when LocalArchivalMemory
has no existing archival storage and defaults to EmptyIndex. EmptyIndex has no
ref_doc_info implementation and throws an exception when it is used to print the
memory information to the console. This hotfix simply ensures that we do not call
the function when EmptyIndex is in use, and instead prints a message to the console
indicating that an EmptyIndex is being used.
2023-11-01 18:45:04 -07:00
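The hotfix described above guards the `ref_doc_info` call behind an index-type check. A sketch with stand-in classes for the llama-index types the commit refers to:

```python
# Sketch of the EmptyIndex guard: only read ref_doc_info when the index
# provides it; otherwise report an empty archival memory. DocIndex is a
# stand-in class, not a real llama-index type.
class EmptyIndex:
    pass  # deliberately has no ref_doc_info


class DocIndex:
    def __init__(self, docs):
        self.docs = docs

    @property
    def ref_doc_info(self):
        return {f"doc-{i}": d for i, d in enumerate(self.docs)}


def describe_memory(index) -> str:
    if isinstance(index, EmptyIndex):
        return "Archival memory is empty (EmptyIndex in use)."
    return f"{len(index.ref_doc_info)} documents in archival memory."
```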
Vivian Fang
79b72fd7ae await async_get_embeddings_with_backoff (#239) 2023-11-01 01:43:17 -07:00
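The one-line fix above adds a missing `await`: calling a coroutine function without awaiting it yields a coroutine object, not the result. A sketch of the bug class (the embedding value and call body are illustrative):

```python
# The missing-await bug class: async_get_embeddings_with_backoff must be
# awaited or the caller receives a coroutine object instead of the result.
import asyncio


async def async_get_embeddings_with_backoff(text):
    await asyncio.sleep(0)  # stand-in for a retried embedding API call
    return [0.0, 1.0]


async def embed(text):
    # Bug:  result = async_get_embeddings_with_backoff(text)  -> coroutine
    # Fix:  await the call so the embedding list is returned.
    return await async_get_embeddings_with_backoff(text)
```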
Charles Packer
250252f105 len needs to be implemented in all memory classes (#236)
* len needs to be implemented in all memory classes so that the pretty print of memory shows statistics

* stub
2023-11-01 01:02:25 -07:00
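The requirement above — every memory class implements `__len__` so the memory pretty-printer can show statistics — can be enforced with an abstract base. A sketch; the class names are assumptions, not MemGPT's actual hierarchy:

```python
# Sketch: an abstract base forces each archival memory backend to report
# its size, which the pretty-printer needs for statistics.
from abc import ABC, abstractmethod


class ArchivalMemory(ABC):
    @abstractmethod
    def __len__(self) -> int:
        """Number of stored passages; required for memory statistics."""


class ListMemory(ArchivalMemory):
    def __init__(self):
        self.passages = []

    def insert(self, text):
        self.passages.append(text)

    def __len__(self):
        return len(self.passages)
```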
Vivian Fang
0ac2ac10db Fix conversation_date_search async bug (#215)
* Fix conversation_date_search async bug

* Also catch TypeError
2023-10-31 00:35:09 -07:00
Vivian Fang
2a54511e52 hotfix LocalArchivalMemory (#209) 2023-10-30 20:37:33 -07:00
Sarah Wooders
23f3d42fae Refactoring CLI to use config file, connect to Llama Index data sources, and allow for multiple agents (#154)
* Migrate to `memgpt run` and `memgpt configure` 
* Add Llama index data sources via `memgpt load` 
* Save config files for defaults and agents
2023-10-30 16:47:54 -07:00
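The refactor above moves defaults into saved config files that `memgpt configure` writes and `memgpt run` reads back. A minimal sketch of that round-trip; the file layout and keys are illustrative, not MemGPT's actual schema:

```python
# Sketch of a configure/run config round-trip using an INI-style file.
# Section and key names are hypothetical.
import configparser


def save_config(path, model, context_window):
    cfg = configparser.ConfigParser()
    cfg["defaults"] = {"model": model, "context_window": str(context_window)}
    with open(path, "w") as f:
        cfg.write(f)


def load_config(path):
    cfg = configparser.ConfigParser()
    cfg.read(path)
    return dict(cfg["defaults"])
```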
Vivian Fang
5a6b0ef1e8 Hotfix bug from async refactor (#203) 2023-10-30 15:38:25 -07:00
Kamelowy
11d576f7e6 New wrapper for Zephyr models + little fix in memory.py (#183)
* VectorIndex -> VectorStoreIndex

VectorStoreIndex is imported but non-existent VectorIndex is used.

* New wrapper for Zephyr family of models.

With inner thoughts.

* Update chat_completion_proxy.py for Zephyr Wrapper
2023-10-29 21:17:01 -07:00
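A model-family wrapper like the Zephyr one above flattens chat messages into the model's prompt template. A hedged sketch; the exact special-token template is an assumption based on the Zephyr model card, not the wrapper's actual code:

```python
# Sketch of a Zephyr-style prompt wrapper: serialize system/user/assistant
# turns into the special-token chat format, then cue the assistant.
def zephyr_format(system: str, turns):
    """turns: list of (role, text) with role 'user' or 'assistant'."""
    parts = [f"<|system|>\n{system}</s>"]
    for role, text in turns:
        parts.append(f"<|{role}|>\n{text}</s>")
    parts.append("<|assistant|>\n")  # generation starts after this tag
    return "\n".join(parts)
```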
Charles Packer
06871cc298 black patch on outstanding files that were causing workflow fails on PRs (#193) 2023-10-29 20:53:46 -07:00
Vivian Fang
53cacad075 Add synchronous memgpt agent (#156) 2023-10-27 16:48:14 -07:00
Sarah Wooders
0ab3d098d2 reformat 2023-10-26 16:08:25 -07:00
Sarah Wooders
bbacf0fb33 add database test 2023-10-26 15:30:31 -07:00
Sarah Wooders
b45b5b6a75 add llama index querying 2023-10-26 14:25:35 -07:00
Charles Packer
7aab4588ec fixed bug where persistence manager was not saving in demo CLI 2023-10-17 23:40:31 -07:00
Vivian Fang
86d52c4cdf fix summarizer 2023-10-15 21:07:45 -07:00
Vivian Fang
15540c24ac fix paging bug, implement llamaindex api search on top of memgpt 2023-10-15 16:45:41 -07:00
Charles Packer
257c3998f7 init commit 2023-10-12 18:48:58 -07:00