Khoi-Nguyen Nguyen-Ngoc
a865e2b095
feat: modify base dependencies + remove unnecessary packages in lite docker (#310)
* feat: update base/adv dependencies
* feat: update Dockerfile
* ci: update free disk for docker build
2024-09-21 12:11:58 +07:00
Anush
e2bd78e9c4
feat: Qdrant vectorstore support (#260)
* feat: Qdrant vectorstore support
* chore: review changes
* docs: Updated README.md
2024-09-16 04:17:36 +07:00
kan_cin
d3fd75297f
feat: add multi-stages docker and support platform arm (#274)
* feat: add multi-stages docker and support platform arm
* refactor: pre-commit
* fix: raise ImportError (fastembed) instead of auto install
* feat: add dependencies for local llm
* feat: free disk
* feat: update README
* feat: update README
* chore: fix typo
---------
Co-authored-by: cin-niko <niko@cinnamon.is>
2024-09-12 20:25:03 +07:00
Tuan Anh Nguyen Dang (Tadashi_Cin)
ef7e91fcae
fix: update requirements (#230)
2024-09-06 09:36:21 +07:00
Tuan Anh Nguyen Dang (Tadashi_Cin)
e2ed3564ce
fix: limit fastapi version (#229)
2024-09-06 09:23:26 +07:00
Tadashi
318895b287
fix: disable default install for anthropic
2024-09-05 23:18:53 +07:00
Tadashi
3267e6c654
fix: disable default install for google-genai package
2024-09-05 23:08:28 +07:00
Tuan Anh Nguyen Dang (Tadashi_Cin)
05245f501c
feat: add support for Gemini, Claude through Langchain (#225) (bump:patch)
2024-09-05 21:58:20 +07:00
ChengZi
772186b6e5
feat: support milvus vector db (#188) #none
Signed-off-by: ChengZi <chen.zhang@zilliz.com>
2024-09-04 20:22:50 +07:00
Quang (Albert)
4b2b334d2c
fix: refine kotaemon/pyproject.toml (#153)
2024-08-30 23:02:14 +07:00
Tuan Anh Nguyen Dang (Tadashi_Cin)
2570e11501
feat: merge develop (#123)
* Support hybrid vector retrieval
* Enable figures and table reading in Azure DI
* Retrieve with multi-modal
* Fix mixing up table
* Add txt loader
* Add Anthropic Chat
* Raising error when retrieving help file
* Allow same filename for different people if private is True
* Allow declaring extra LLM vendors
* Show chunks on the File page
* Allow elasticsearch to get more docs
* Fix Cohere response (#86)
* Fix Cohere response
* Remove Adobe pdfservice from dependency
kotaemon no longer relies on pdfservice for its core functionality,
and pdfservice uses a very outdated dependency that causes conflicts.
---------
Co-authored-by: trducng <trungduc1992@gmail.com>
* Add confidence score (#87)
* Save question answering data as a log file
* Save the original information besides the rewritten info
* Export Cohere relevance score as confidence score
* Fix style check
* Upgrade the confidence score appearance (#90)
* Highlight the relevance score
* Round relevance score. Get key from config instead of env
* Cohere return all scores
* Display relevance score for image
* Remove columns and rows in Excel loader which contain all NaN (#91)
* remove columns and rows which contain all NaN
* back to multiple joiner options
* Fix style
---------
Co-authored-by: linhnguyen-cinnamon <cinmc0019@CINMC0019-LinhNguyen.local>
Co-authored-by: trducng <trungduc1992@gmail.com>
* Track retriever state
* Bump llama-index version 0.10
* feat/save-azuredi-mhtml-to-markdown (#93)
* feat/save-azuredi-mhtml-to-markdown
* fix: replace os.path to pathlib change theflow.settings
* refactor: base on pre-commit
* chore: move the func of saving content markdown above removed_spans
---------
Co-authored-by: jacky0218 <jacky0218@github.com>
* fix: losing first chunk (#94)
* fix: losing first chunk.
* fix: update the method of preventing losing chunks
---------
Co-authored-by: jacky0218 <jacky0218@github.com>
* fix: adding the base64 image in markdown (#95)
* feat: more chunk info on UI
* fix: error when reindexing files
* refactor: allow more informative exception trace when using gpt4v
* feat: add excel reader that treats each worksheet as a document
* Persist loader information when indexing file
* feat: allow hiding unneeded setting panels
* feat: allow specific timezone when creating conversation
* feat: add more confidence score (#96)
* Allow a list of rerankers
* Export llm reranking score instead of filter with boolean
* Get logprobs from LLMs
* Rename cohere reranking score
* Call 2 rerankers at once
* Run QA pipeline for each chunk to get qa_score
* Display more relevance scores
* Define another LLMScoring instead of editing the original one
* Export logprobs instead of probs
* Call LLMScoring
* Get qa_score only in the final answer
* feat: replace text length with token in file list
* ui: show index name instead of id in the settings
* feat(ai): restrict the vision temperature
* fix(ui): remove the misleading message about non-retrieved evidences
* feat(ui): show the reasoning name and description in the reasoning setting page
* feat(ui): show version on the main windows
* feat(ui): show default llm name in the setting page
* fix(conf): append the result of doc in llm_scoring (#97)
* fix: constrain maximum number of images
* feat(ui): allow filter file by name in file list page
* Fix exceeding token length error for OpenAI embeddings by chunking then averaging (#99)
* Average embeddings in case the text exceeds max size
* Add docstring
* fix: Allow empty string when calling embedding
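The chunk-then-average fix from #99 above can be sketched as follows; `embed_fn`, `max_chars`, and the length-weighted averaging are illustrative assumptions, not the exact kotaemon implementation:

```python
from math import sqrt

def embed_long_text(text, embed_fn, max_chars=8000):
    """Embed text that may exceed the model's input limit.

    Split the text into chunks, embed each chunk, average the vectors
    (weighted by chunk length), then re-normalize to unit length.
    """
    # Fall back to a single empty chunk so empty strings still embed.
    chunks = [text[i:i + max_chars] for i in range(0, len(text), max_chars)] or [""]
    vectors = [embed_fn(chunk) for chunk in chunks]
    total = sum(len(chunk) for chunk in chunks)
    weights = [len(chunk) / total if total else 1.0 / len(chunks) for chunk in chunks]
    dim = len(vectors[0])
    avg = [sum(w * v[d] for w, v in zip(weights, vectors)) for d in range(dim)]
    norm = sqrt(sum(x * x for x in avg))
    return [x / norm for x in avg] if norm else avg
```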
* fix: update trulens LLM ranking score for retrieval confidence, improve citation (#98)
* Round when displaying not by default
* Add LLMTrulens reranking model
* Use llmtrulensscoring in pipeline
* fix: update UI display for trulens score
---------
Co-authored-by: taprosoft <tadashi@cinnamon.is>
* feat: add question decomposition & few-shot rewrite pipeline (#89)
* Create few-shot query-rewriting. Run and display the result in info_panel
* Fix style check
* Put the functions to separate modules
* Add zero-shot question decomposition
* Fix fewshot rewriting
* Add default few-shot examples
* Fix decompose question
* Fix importing rewriting pipelines
* fix: update decompose logic in fullQA pipeline
---------
Co-authored-by: taprosoft <tadashi@cinnamon.is>
* fix: add encoding utf-8 when save temporal markdown in vectorIndex (#101)
* fix: improve retrieval pipeline and relevant score display (#102)
* fix: improve retrieval pipeline by extending first round top_k with multiplier
* fix: minor fix
* feat: improve UI default settings and add quick switch option for pipeline
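The first-round top_k extension with a multiplier from #102 above can be sketched like this; `search_fn`, `rerank_fn`, and the multiplier value of 5 are assumptions for illustration:

```python
def retrieve_with_rerank(query, search_fn, rerank_fn, top_k=10, first_round_multiplier=5):
    # Over-fetch in the first retrieval round so the reranker sees a larger
    # candidate pool, then keep only the best top_k after reranking.
    candidates = search_fn(query, limit=top_k * first_round_multiplier)
    ranked = sorted(candidates, key=lambda doc: rerank_fn(query, doc), reverse=True)
    return ranked[:top_k]
```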
* fix: improve agent logics (#103)
* fix: improve agent progress display
* fix: update retrieval logic
* fix: UI display
* fix: less verbose debug log
* feat: add warning message for low confidence
* fix: LLM scoring enabled by default
* fix: minor update logics
* fix: hotfix image citation
* feat: update docx loader to handle merged table cells + handle zip file upload (#104)
* feat: update docx loader to handle merged table cells
* feat: handle zip file
* refactor: pre-commit
* fix: escape text in download UI
* feat: optimize vector store query db (#105)
* feat: optimize vector store query db
* feat: add file_id to chroma metadatas
* feat: remove unnecessary logs and update migrate script
* feat: iterate through file index
* fix: remove unused code
---------
Co-authored-by: taprosoft <tadashi@cinnamon.is>
* fix: add openai embedding exponential back-off
* fix: update import download_loader
* refactor: codespell
* fix: update some default settings
* fix: update installation instruction
* fix: default chunk length in simple QA
* feat: add share conversation feature and enable retrieval history (#108)
* feat: add share conversation feature and enable retrieval history
* fix: update share conversation UI
---------
Co-authored-by: taprosoft <tadashi@cinnamon.is>
* fix: allow exponential backoff for failed OCR call (#109)
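The exponential backoff used for the OCR and embedding calls above can be sketched as follows; the retry count, base delay, and jitter range are assumptions, not values from the repo:

```python
import random
import time

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    # Retry a flaky call, doubling the wait on each attempt and adding
    # a little jitter; re-raise the last error once retries are exhausted.
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```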
* fix: update default prompt when no retrieval is used
* fix: create embedding for long image chunks
* fix: add exception handling for additional table retriever
* fix: clean conversation & file selection UI
* fix: elastic search with empty doc_ids
* feat: add thumbnail PDF reader for quick multimodal QA
* feat: add thumbnail handling logic in indexing
* fix: UI text update
* fix: PDF thumb loader page number logic
* feat: add quick indexing pipeline and update UI
* feat: add conv name suggestion
* fix: minor UI change
* feat: citation in thread
* fix: add conv name suggestion in regen
* chore: add assets for usage doc
* chore: update usage doc
* feat: pdf viewer (#110)
* feat: update pdfviewer
* feat: update missing files
* fix: update rendering logic of info panel
* fix: improve thumbnail retrieval logic
* fix: update PDF evidence rendering logic
* fix: remove pdfjs built dist
* fix: reduce thumbnail evidence count
* chore: update gitignore
* fix: add js event on chat msg select
* fix: update css for viewer
* fix: add env var for PDFJS prebuilt
* fix: move language setting to reasoning utils
---------
Co-authored-by: phv2312 <kat87yb@gmail.com>
Co-authored-by: trducng <trungduc1992@gmail.com>
* feat: graph rag (#116)
* fix: reload server when add/delete index
* fix: rework indexing pipeline to be able to disable vectorstore and splitter if needed
* feat: add graphRAG index with plot view
* fix: update requirement for graphRAG and lighten unnecessary packages
* feat: add knowledge network index (#118)
* feat: add Knowledge Network index
* fix: update reader mode setting for knet
* fix: update init knet
* fix: update collection name to index pipeline
* fix: missing req
---------
Co-authored-by: jeff52415 <jeff.yang@cinnamon.is>
* fix: update info panel return for graphrag
* fix: retriever setting graphrag
* feat: local llm settings (#122)
* feat: expose context length as reasoning setting to better fit local models
* fix: update context length setting for agents
* fix: rework threadpool llm call
* fix: improve indexing logic
* fix: improve UI
* feat: add lancedb
* fix: improve lancedb logic
* feat: add lancedb vectorstore
* fix: lighten requirement
* fix: improve lanceDB vs
* fix: improve UI
* fix: openai retry
* fix: update reqs
* fix: update launch command
* feat: update Dockerfile
* feat: add plot history
* fix: update default config
* fix: remove verbose print
* fix: update default setting
* fix: update gradio plot return
* fix: default gradio tmp
* fix: improve lancedb docstore
* fix: fix question decompose pipeline
* feat: add multimodal reader in UI
* fix: update docs
* fix: update default settings & docker build
* fix: update app startup
* chore: update documentation
* chore: update README
* chore: update README
---------
Co-authored-by: trducng <trungduc1992@gmail.com>
* chore: update README
* chore: update README
---------
Co-authored-by: trducng <trungduc1992@gmail.com>
Co-authored-by: cin-ace <ace@cinnamon.is>
Co-authored-by: Linh Nguyen <70562198+linhnguyen-cinnamon@users.noreply.github.com>
Co-authored-by: linhnguyen-cinnamon <cinmc0019@CINMC0019-LinhNguyen.local>
Co-authored-by: cin-jacky <101088014+jacky0218@users.noreply.github.com>
Co-authored-by: jacky0218 <jacky0218@github.com>
Co-authored-by: kan_cin <kan@cinnamon.is>
Co-authored-by: phv2312 <kat87yb@gmail.com>
Co-authored-by: jeff52415 <jeff.yang@cinnamon.is>
2024-08-26 08:50:37 +07:00
ian_Cin
b2296cfcdf
(bump:patch) Feat: Show app version in the Help page (#68)
* typo
* show version in the Help page
* update docs
* bump duckduckgo-search
* allow app version to be set by env var
2024-05-16 14:27:51 +07:00
ian_Cin
a122dc0a94
(bump:patch) Fix: llama-cpp-python security bug and setup local latest branch in github action (#66)
* update llama-cpp-python version in response to https://github.com/Cinnamon/kotaemon/security/dependabot/1
* setup local latest branch in github action
2024-05-15 17:57:37 +07:00
ian_Cin
654501e01c
(bump:minor) Feat: Add mechanism for user-site update and auto creating releases (#56)
* move flowsettings.py and launch.py to root
* update docs
* sync sub package versions
* rename launch.py to app.py and make run scripts work with installation package
* add update scripts
* auto version for root package
* rename authors and update doc dir
* Update auto-bump-and-release.yaml to trigger on push to main branch
* latest as branch instead of tag
* pin deps versions
* cache the changelogs
2024-05-15 16:34:50 +07:00
Duc Nguyen (john)
ec11b54ff2
Add Azure AI Document Intelligence loader (#52)
* Add azureai document intelligence loader
* Add load_data interface to Azure DI
* Bump version
* Access azure credentials from environment variables
2024-04-29 14:49:55 +07:00
Duc Nguyen (john)
456f020caf
Enable MHTML reader (#44)
* Enable mhtml loader
* Use default supported file types
* Add tests and bump version
2024-04-23 14:16:24 +07:00
ian_Cin
4022af7e9b
allow LlamaCppChat to auto download model from hf hub (#29)
2024-04-13 18:57:04 +07:00
Duc Nguyen (john)
e75354d410
Enable fastembed as a local embedding vendor (#12)
* Prepend all Langchain-based embeddings with LC
* Provide vanilla OpenAI embeddings
* Add test for AzureOpenAIEmbeddings and OpenAIEmbeddings
* Incorporate fastembed
---------
Co-authored-by: ian_Cin <ian@cinnamon.is>
2024-04-09 01:44:34 +07:00
ian_Cin
8001c86b16
Feat/new UI (#13)
* new custom theme
* improve css: scrollbar, header, tabs and buttons
* update settings tab
* open file index selector by default
* update chat control panel
* update chat panel
* update file index page
* cap gradio<=4.22.0
* rename admin page
* adjust UI
* update flowsettings
* auto start in browser
* change colour for edit LLM page's button
2024-04-08 22:23:00 +07:00
Duc Nguyen (john)
a203fc0f7c
Allow users to add LLM within the UI (#6)
* Rename AzureChatOpenAI to LCAzureChatOpenAI
* Provide vanilla ChatOpenAI and AzureChatOpenAI
* Remove the highest accuracy, lowest cost criteria
These criteria are unnecessary. The users, not pipeline creators, should choose
which LLM to use. Furthermore, it's cumbersome to input this information,
and it really degrades the user experience.
* Remove the LLM selection in simple reasoning pipeline
* Provide a dedicated stream method to generate the output
* Return placeholder message to chat if the text is empty
2024-04-06 11:53:17 +07:00
ian_Cin
e67a25c0bd
Feat/add multimodal loader (#5)
* Add Adobe reader as the multimodal loader
* Allow FullQAPipeline to reasoning on figures
* fix: move the adobe import to avoid ImportError, notify users whenever they run the AdobeReader
---------
Co-authored-by: cin-albert <albert@cinnamon.is>
2024-04-03 14:52:40 +07:00
ian_Cin
a3bf728400
Update various docs (#4)
* rename cli tool
* remove redundant docs
* update docs
* update macos instructions
* add badges
2024-03-29 19:47:03 +07:00
ian_Cin
d22ae88c7a
make default installation faster (#2)
* remove cohere as default
* refactor dependencies
* use llama-index pdf reader as default (pypdf)
* fix some lazy docstring
* update install scripts
* minor fix
2024-03-21 22:48:20 +07:00
Duc Nguyen (john)
033e7e05cc
Improve kotaemon based on insights from projects (#147)
- Include static files in the package.
- More reliable information panel. Faster & not breaking randomly.
- Add directory upload.
- Enable zip file to upload.
- Allow setting endpoint for the OCR reader using environment variable.
2024-02-28 22:18:29 +07:00
Duc Nguyen (john)
767aaaa1ef
Utilize llama.cpp for both completion and chat models (#141)
2024-02-20 18:17:48 +07:00
Duc Nguyen (john)
d36522129f
refactor: replace llama-index based loader with a llama-index mixin loader (#142)
2024-02-20 02:33:28 +07:00
Duc Nguyen (john)
65852b7d71
Add docx + html reader (#139)
2024-01-31 19:21:30 +07:00
Duc Nguyen (john)
2dd531114f
Make ktem official (#134)
* Move kotaemon and ktem into same folder
* Update docs
* Update CI
* Resolve mypy, isorts
* Re-allow test pdf files
2024-01-23 10:54:18 +07:00