
LocalAI Demonstration with Embeddings and Chainlit

This demonstration shows how to use embeddings over existing data in LocalAI and how to integrate the result with Chainlit for an interactive querying experience. The llama_index library drives the embedding and querying process, chainlit provides the interactive interface, and the Weaviate client connects to the vector store that holds the embeddings.

Prerequisites

Before proceeding, make sure you have the following installed:

  • Weaviate client
  • LocalAI and its dependencies
  • Chainlit and its dependencies

Getting Started

  1. Clone this repository: git clone https://github.com/mudler/LocalAI
  2. Navigate to the project directory: cd LocalAI/examples/chainlit
  3. Install the dependencies: pip install -r requirements.txt
  4. Run the example: chainlit run main.py

A closer look at llama_index and chainlit

llama_index is the key library that handles embedding and querying the data behind LocalAI. It provides a single interface for wiring together the components involved, such as the WeaviateVectorStore, a LocalAI-backed LLM and embedding model (configured through the global Settings object in llama-index 0.11+, which replaced the older ServiceContext), and the resulting query engine.
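Below is a minimal sketch of how these pieces can be wired together against a LocalAI server with llama-index 0.11.x. It is not the exact code from main.py; the model names, the localhost URLs and ports, and the "Documents" index name are placeholder assumptions to adapt to your own setup.

```python
# Sketch only: wiring llama_index to LocalAI (OpenAI-compatible API) and
# Weaviate. Model names, URLs/ports, and the index name are assumptions.
import weaviate
from llama_index.core import Settings, VectorStoreIndex
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai_like import OpenAILike
from llama_index.vector_stores.weaviate import WeaviateVectorStore

# LocalAI speaks the OpenAI API, so the OpenAI-style integrations only
# need their base URL pointed at the LocalAI server.
Settings.llm = OpenAILike(
    model="gpt-3.5-turbo",                # whatever model LocalAI serves
    api_base="http://localhost:8080/v1",  # assumed LocalAI address
    api_key="sk-unused",                  # LocalAI ignores the key by default
)
Settings.embed_model = OpenAIEmbedding(
    model="text-embedding-ada-002",
    api_base="http://localhost:8080/v1",
    api_key="sk-unused",
)

# Attach to a local Weaviate instance holding the pre-embedded data
# (weaviate-client v4 API) and build a query engine over it.
client = weaviate.connect_to_local()
vector_store = WeaviateVectorStore(weaviate_client=client, index_name="Documents")
index = VectorStoreIndex.from_vector_store(vector_store)
query_engine = index.as_query_engine()
```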

chainlit provides the interactive interface through which users query the data and see the results in real time. It hands each incoming message to llama_index for querying and displays the results back to the user.

In this example, llama_index is used to set up the VectorStoreIndex and QueryEngine, and chainlit is used to handle the user interactions with LocalAI and display the results.
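As a sketch of that interaction, the handlers below use chainlit's on_chat_start and on_message hooks together with the query_engine from the previous snippet; the session key name is an arbitrary choice, not something mandated by the example.

```python
# Sketch only: a chainlit app that answers questions through the
# llama_index query_engine built in the previous snippet.
import chainlit as cl

@cl.on_chat_start
async def start():
    # Stash the query engine in the per-user session when a chat begins.
    cl.user_session.set("query_engine", query_engine)

@cl.on_message
async def main(message: cl.Message):
    # Run the user's question through llama_index off the event loop,
    # then send the answer back to the UI.
    engine = cl.user_session.get("query_engine")
    response = await cl.make_async(engine.query)(message.content)
    await cl.Message(content=str(response)).send()
```

Running chainlit run main.py then serves the app locally (on port 8000 by default) so you can ask questions against the indexed data from the browser.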