# Example of an MVP pipeline
## Prerequisites

To run the system out of the box, supply the following environment variables:

```shell
OPENAI_API_KEY=
OPENAI_API_BASE=
OPENAI_API_VERSION=
SERPAPI_API_KEY=
COHERE_API_KEY=
OPENAI_API_KEY_EMBEDDING=

# optional
KH_APP_NAME=
```
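A convenient way to supply these is a local `.env` file. The sketch below (hypothetical, not part of this repo) loads one with python-dotenv and fails fast when a required variable is missing; the helper name `check_env.py` and the use of python-dotenv are assumptions, and any mechanism that populates the environment works equally well:

```python
# check_env.py -- hypothetical helper, not part of this repo.
import os

from dotenv import load_dotenv  # assumption: python-dotenv is installed

REQUIRED = [
    "OPENAI_API_KEY",
    "OPENAI_API_BASE",
    "OPENAI_API_VERSION",
    "SERPAPI_API_KEY",
    "COHERE_API_KEY",
    "OPENAI_API_KEY_EMBEDDING",
]

# Read a local .env file into os.environ (a no-op if the file is absent).
load_dotenv()

missing = [name for name in REQUIRED if not os.environ.get(name)]
if missing:
    raise SystemExit("Missing environment variables: " + ", ".join(missing))
print("All required environment variables are set.")
```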
## Run

```shell
gradio launch.py
```
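For the `gradio` CLI to run the script in auto-reload mode, `launch.py` has to expose a Gradio app object at module level (the CLI conventionally looks for one named `demo`). A minimal stand-in is sketched below purely for illustration; the real `launch.py` builds the full ktem UI:

```python
# A hypothetical stand-in for launch.py, for illustration only.
import gradio as gr

with gr.Blocks() as demo:  # `demo` is the name the gradio CLI looks for by default
    gr.Markdown("MVP pipeline placeholder UI")

if __name__ == "__main__":
    # Running `python launch.py` directly also works and serves the app once,
    # without the CLI's auto-reload.
    demo.launch()
```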