* Allow customizing the base application
* Make the core llms and embeddings customizable
* Make the settings, reasoning and index customizable
* Import from langchain_openai
- Migrate the MVP into kotaemon.
- Preliminarily include the pipeline within the chatbot interface.
- Organize MVP as an application.
Todo:
- Add an info panel to view the agents' planning -> fix the streaming of agents' output.
Resolve: #60
Resolve: #61
Resolve: #62
Refactor the `kotaemon/pipelines` module to `kotaemon/indices`. Create the `VectorIndex`.
Note: currently I place `qa` inside `kotaemon/indices`, since at the moment `qa` is the only RAG use case we have. At the same time, I think `qa` could be an independent module in `kotaemon/qa`. Since this can be changed later, I am going with the 1st option for now to see whether we need to change it later.
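For illustration, imports after this refactor might look as follows (a minimal sketch; the `qa` submodule path and pipeline name are assumptions based on the note above):

```python
# Sketch of the post-refactor layout; the name inside `qa` is an assumption.
from kotaemon.indices import VectorIndex            # was under kotaemon/pipelines
from kotaemon.indices.qa import CitationQAPipeline  # assumed qa submodule path
```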
* enforce Document as IO
* Separate rerankers, splitters and extractors (#85)
* partially refactor imports
* add text to embedding outputs
---------
Co-authored-by: Nguyen Trung Duc (john) <trungduc1992@gmail.com>
* add rerankers in retrieving pipeline
* update example MVP pipeline
* add citation pipeline and function call interface
* change return type of QA and AgentPipeline to Document
* Move splitter into indexing module
* Rename post_processing module to parsers
* Migrate LLM-specific composite pipelines into llms module
This change moves the `splitters` module into the `indexing` module. The `indexing` module will be created soon to house indexing-related components.
This change renames the `post_processing` module to `parsers`. Post-processing is a generic term that conveys very little information. In the future, we will add other extractors to the `parsers` module, like a metadata extractor...
This change migrates the composite elements into the `llms` module. These elements heavily assume that their internal nodes are LLM-specific. As a result, migrating these elements into the `llms` module makes them more discoverable and simplifies the code base structure.
Since the only usage of prompts is within LLMs, it is reasonable to keep them within the `llms` module. This way, the module is easier to discover, and the code base stays less complicated.
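To make the new layout concrete, here is a hedged sketch of the import paths implied by these moves (the specific class names are assumptions, shown only to illustrate where things now live):

```python
# Illustrative only; the exact class names are assumptions.
from kotaemon.indexing.splitters import TokenSplitter  # was kotaemon.splitters
from kotaemon.parsers import RegexExtractor            # was kotaemon.post_processing
from kotaemon.llms import PromptTemplate               # prompts and LLM-specific
                                                       # composites now live in llms
```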
Changes:
* Move prompt components into llms
* Bump version 0.3.1
* Make pip install dependencies in eager mode
---------
Co-authored-by: ian <ian@cinnamon.is>
This change speeds up OCR extraction by allowing OCR to be bypassed for text that is irrelevant (i.e., not in a table).
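A minimal sketch of the bypass idea, assuming hypothetical `region.is_table` / `region.text` fields and an `ocr_engine.recognize` helper (not the actual reader API): OCR runs only on table regions, while non-table text reuses the already-extracted text layer.

```python
def extract_page_text(page, ocr_engine) -> str:
    """Run OCR only on table regions; plain-text regions bypass OCR."""
    chunks = []
    for region in page.regions:
        if region.is_table:
            # Tables need OCR to recover cell contents reliably.
            chunks.append(ocr_engine.recognize(region.image))
        else:
            # Non-table text is irrelevant to the OCR step, so reuse the
            # existing text layer -- much faster than running OCR on it.
            chunks.append(region.text)
    return "\n".join(chunks)
```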
---------
Co-authored-by: Nguyen Trung Duc (john) <trungduc1992@gmail.com>
This change removes the following methods from `BaseComponent`:
- run_raw
- run_batch_raw
- run_document
- run_batch_document
- is_document
- is_batch
Each component is expected to support multiple types of inputs and a single type of output. Since we want components to work out-of-the-box with both standardized and customized use cases, supporting multiple input types is expected. At the same time, to reduce the complexity of understanding how to use a component, we restrict each component to a single output type.
To accommodate these changes, we also refactor some components to remove their run_raw, run_batch_raw... methods and to settle on a common output interface for each of those components.
Tests are updated accordingly.
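As a sketch of this contract (the `Summarizer` component itself is hypothetical, and the import path is an assumption): a component may accept several input types, but it normalizes them internally and always returns the single declared output type.

```python
from typing import Union

from kotaemon.base import BaseComponent, Document  # assumed import path


class Summarizer(BaseComponent):
    """Hypothetical component: many input types in, one output type out."""

    def run(self, text: Union[str, Document, list[str]]) -> Document:
        # Accept raw strings, Documents, or lists of strings...
        if isinstance(text, Document):
            text = text.text
        elif isinstance(text, list):
            text = "\n".join(text)
        # ...but always hand back a single, predictable output type.
        return Document(text=text)
```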
Commit changes:
* Add kwargs to vector store's query (see the sketch after this list)
* Simplify the BaseComponent
* Update tests
* Remove support for Python 3.8 and 3.9
* Bump version 0.3.0
* Fix GitHub PR caching still using the old environment after bumping the version
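The kwargs change lets callers thread backend-specific options through `query` without changing its signature; a hedged sketch (the `where` option and the result shape are assumptions about whichever store backend is in use):

```python
# Hypothetical usage; extra keywords are forwarded to the underlying store.
results = vector_store.query(
    embedding=query_embedding,
    top_k=5,
    where={"source": "handbook.pdf"},  # backend-specific filter via kwargs
)
```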
---------
Co-authored-by: ian <ian@cinnamon.is>
By allowing the UI outputs to be specified in code, any time the user runs `kh export ...`, the outputs declared in the code are included in the UI YAML file. Otherwise, each run of `kh export ...` resets the output section of the UI YAML file to the default output.
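For instance, a pipeline might declare its outputs in code roughly like this (a loose sketch; the attribute name and structure are assumptions, purely to illustrate outputs that `kh export ...` would carry into the UI YAML):

```python
class MyPipeline(BaseComponent):
    # Assumed attribute: outputs declared here would be written into the
    # exported UI YAML instead of being reset to the default output section.
    _ui_outputs = [
        {"component": "text", "step": "answer"},
        {"component": "text", "step": "citations"},
    ]
```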