RAG is back stronger than ever

Saifeddine ALOUI 2024-06-23 22:27:27 +02:00
parent 05292677eb
commit 6eb082191f
18 changed files with 626 additions and 330 deletions


@@ -1,5 +1,5 @@
 # =================== Lord Of Large Language Multimodal Systems Configuration file ===========================
-version: 115
+version: 118
 binding_name: null
 model_name: null
 model_variant: null
@@ -241,8 +241,14 @@ audio_silenceTimer: 5000
 # Data vectorization
 rag_databases: [] # This is the list of paths to database sources. Each database is a folder containing data
 rag_vectorizer: bert # possible values bert, tfidf, word2vec
+rag_vectorizer_model: bert-base-nli-mean-tokens # The model name, if applicable
+rag_vectorizer_parameters: null # Parameters of the model in JSON format
 rag_chunk_size: 512 # number of tokens per chunk
 rag_n_chunks: 4 # Number of chunks to recover from the database
+rag_clean_chunks: true # Remove all unnecessary spaces and line returns
+rag_follow_subfolders: true # If true, the vectorizer will also vectorize the content of subfolders
+rag_check_new_files_at_startup: false # If true, the vectorizer will automatically check for new files in the folder at startup and add them to the database
+rag_preprocess_chunks: false # If true, an LLM will preprocess the content of each chunk before writing it in a simple format
 activate_skills_lib: false # Activate vectorizing previous conversations
 skills_lib_database_name: "default" # Default skills database
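The new RAG keys above describe the retrieval side of the pipeline: chunks are optionally cleaned, embedded with the configured vectorizer model, and the rag_n_chunks closest chunks are recovered at query time. The sketch below illustrates that flow only; it assumes a sentence-transformers style model as a stand-in for the bert vectorizer backend, which is an assumption and not the actual LoLLMs implementation.

# Minimal sketch of the settings above: clean chunks, embed, retrieve top rag_n_chunks.
# Assumption: a sentence-transformers model stands in for the "bert" vectorizer backend.
import re
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

RAG_VECTORIZER_MODEL = "bert-base-nli-mean-tokens"  # value from the config above
RAG_N_CHUNKS = 4
RAG_CLEAN_CHUNKS = True

def clean_chunk(text: str) -> str:
    # rag_clean_chunks: collapse redundant spaces and line returns
    return re.sub(r"\s+", " ", text).strip()

def retrieve(query: str, chunks: list, model: SentenceTransformer) -> list:
    if RAG_CLEAN_CHUNKS:
        chunks = [clean_chunk(c) for c in chunks]
    chunk_vecs = model.encode(chunks)            # one embedding per chunk
    query_vec = model.encode([query])[0]
    scores = chunk_vecs @ query_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(query_vec)
    )                                            # cosine similarity
    top = np.argsort(scores)[::-1][:RAG_N_CHUNKS]  # keep the best rag_n_chunks
    return [chunks[i] for i in top]

if __name__ == "__main__":
    model = SentenceTransformer(RAG_VECTORIZER_MODEL)
    docs = ["Chunk about   RAG\n\nconfiguration.", "Chunk about unrelated topics."]
    print(retrieve("How is RAG configured?", docs, model))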


@@ -0,0 +1,88 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LoLLMs Help Documentation</title>
<style>
body {
font-family: Arial, sans-serif;
line-height: 1.6;
margin: 0;
padding: 0;
background-color: #f4f4f4;
}
.container {
max-width: 800px;
margin: 20px auto;
padding: 20px;
background-color: #fff;
box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
}
h1, h2, h3 {
color: #333;
}
a {
color: #007BFF;
text-decoration: none;
}
a:hover {
text-decoration: underline;
}
ul {
padding-left: 20px;
}
</style>
</head>
<body>
<div class="container">
<h1>LoLLMs Help Documentation</h1>
<p>Welcome to the LoLLMs help documentation. Here you will find information and guides on how to use various features and functionalities of LoLLMs.</p>
<h2>Table of Contents</h2>
<ul>
<li><a href="#introduction">Introduction</a></li>
<li><a href="#getting-started">Getting Started</a></li>
<li><a href="#personalities">Personalities</a>
<ul>
<li><a href="#document-summarization">Document Summarization</a></li>
<!-- Add more personality links here -->
</ul>
</li>
<li><a href="#advanced-features">Advanced Features</a></li>
<li><a href="#troubleshooting">Troubleshooting</a></li>
<li><a href="#contact">Contact</a></li>
</ul>
<h2 id="introduction">Introduction</h2>
<p>LoLLMs (Lord of Large Language Multimodal Systems) is a versatile system designed to handle various tasks, including document summarization, code interpretation, and more. This documentation will guide you through the different features and how to use them effectively.</p>
<h2 id="getting-started">Getting Started</h2>
<p>To get started with LoLLMs, you need to install and configure the system on your machine. Follow the installation guide provided in the documentation to set up LoLLMs.</p>
<h2 id="personalities">Personalities</h2>
<p>LoLLMs supports various personalities that allow it to perform specific tasks. Below are some of the personalities available:</p>
<h3 id="document-summarization">Document Summarization</h3>
<p>Learn how to perform contextual summarization of documents using the <code>docs_zipper</code> personality.</p>
<p><a href="/help/personalities/documents summary/index.html">Go to Document Summarization Help</a></p>
<!-- Add more personality sections here -->
<h2 id="advanced-features">Advanced Features</h2>
<p>Explore the advanced features of LoLLMs, including code interpretation, internet search integration, and more.</p>
<h2 id="troubleshooting">Troubleshooting</h2>
<p>If you encounter any issues while using LoLLMs, refer to the troubleshooting section for solutions to common problems.</p>
<h2 id="contact">Contact</h2>
<p>If you need further assistance, feel free to reach out to us:</p>
<p><strong>Email</strong>: <a href="mailto:parisneoai@gmail.com">parisneoai@gmail.com</a></p>
<p><strong>Twitter</strong>: <a href="https://twitter.com/ParisNeo_AI" target="_blank">@ParisNeo_AI</a></p>
<p><strong>Discord</strong>: <a href="https://discord.gg/BDxacQmv" target="_blank">Join our Discord</a></p>
<p><strong>Sub-Reddit</strong>: <a href="https://www.reddit.com/r/lollms" target="_blank">r/lollms</a></p>
<p><strong>Instagram</strong>: <a href="https://www.instagram.com/spacenerduino/" target="_blank">spacenerduino</a></p>
<p>See ya!</p>
</div>
</body>
</html>


@@ -0,0 +1,131 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Contextual Summarization with LoLLMs</title>
<style>
body {
font-family: Arial, sans-serif;
line-height: 1.6;
margin: 0;
padding: 0;
background-color: #f4f4f4;
}
.container {
max-width: 800px;
margin: 20px auto;
padding: 20px;
background-color: #fff;
box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
}
h1, h2, h3 {
color: #333;
}
a {
color: #007BFF;
text-decoration: none;
}
a:hover {
text-decoration: underline;
}
ul {
padding-left: 20px;
}
</style>
</head>
<body>
<div class="container">
<h1>Contextual Summarization with LoLLMs</h1>
<p>Welcome to the guide on performing contextual summarization using LoLLMs (Lord of Large Language Multimodal Systems). This document will walk you through the steps required to summarize documents contextually using the <code>docs_zipper</code> personality.</p>
<h2>Table of Contents</h2>
<ul>
<li><a href="#introduction">Introduction</a></li>
<li><a href="#prerequisites">Prerequisites</a></li>
<li><a href="#steps-to-perform-contextual-summarization">Steps to Perform Contextual Summarization</a>
<ul>
<li><a href="#1-go-to-the-settings-page">1. Go to the Settings Page</a></li>
<li><a href="#2-select-the-personality">2. Select the Personality</a></li>
<li><a href="#3-configure-summary-parameters">3. Configure Summary Parameters</a></li>
<li><a href="#4-add-the-document">4. Add the Document</a></li>
<li><a href="#5-start-the-summarization">5. Start the Summarization</a></li>
</ul>
</li>
<li><a href="#example">Example</a></li>
<li><a href="#conclusion">Conclusion</a></li>
</ul>
<h2 id="introduction">Introduction</h2>
<p>LoLLMs is a versatile system designed to handle various tasks, including contextual summarization of documents. By leveraging the <code>docs_zipper</code> personality, you can generate concise summaries that respect specific constraints such as keeping the title, author names, method, and numerical results.</p>
<h2 id="prerequisites">Prerequisites</h2>
<ul>
<li>LoLLMs installed and configured on your system.</li>
<li>The <code>docs_zipper</code> personality available in the personalities section.</li>
</ul>
<h2 id="steps-to-perform-contextual-summarization">Steps to Perform Contextual Summarization</h2>
<h3 id="1-go-to-the-settings-page">1. Go to the Settings Page</h3>
<p>Navigate to the settings page of LoLLMs.</p>
<h3 id="2-select-the-personality">2. Select the Personality</h3>
<p>Under the personalities section, select the category <code>data</code> and mount the <code>docs_zipper</code> personality.</p>
<h3 id="3-configure-summary-parameters">3. Configure Summary Parameters</h3>
<ul>
<li>Go to the personality settings.</li>
<li>Set specific summary parameters such as:
<ul>
<li>Keep the method description.</li>
<li>Keep document title and authors in the summary.</li>
<li>Set the summary size in tokens.</li>
</ul>
</li>
<li>Validate the settings.</li>
</ul>
<h3 id="4-add-the-document">4. Add the Document</h3>
<p>Add the document you want to summarize.</p>
<h3 id="5-start-the-summarization">5. Start the Summarization</h3>
<ul>
<li>Go to the personality menu.</li>
<li>Select <code>start</code>.</li>
</ul>
<p>The document will be decomposed into chunks, and each chunk will be contextually summarized. The summaries are then tied together, and the operation is repeated until the compressed text is smaller than the maximum number of tokens set in the configuration.</p>
<h2 id="example">Example</h2>
<p>Here is an example of how to perform contextual summarization:</p>
<ul>
<li><strong>Settings Page</strong>: Navigate to the settings page.</li>
<li><strong>Select Personality</strong>: Choose <code>data</code> category and mount <code>docs_zipper</code>.</li>
<li><strong>Configure Parameters</strong>:
<ul>
<li>Keep method description.</li>
<li>Keep document title and authors.</li>
<li>Set summary size in tokens.</li>
<li>Validate settings.</li>
</ul>
</li>
<li><strong>Add Document</strong>: Upload the document to be summarized.</li>
<li><strong>Start Summarization</strong>: Go to the personality menu and select <code>start</code>.</li>
</ul>
<p>The contextual nature of this algorithm allows for better control over the summary, ensuring that specified constraints are respected.</p>
<h2 id="conclusion">Conclusion</h2>
<p>By following these steps, you can efficiently perform contextual summarization using LoLLMs. This method provides a high degree of control over the summary content, making it a powerful tool for document analysis.</p>
<p>For more detailed information, refer to the document titled "lollms_contextual_summery" located at <code>C:\Users\aloui\Documents\content\lollms_contextual_summery.md</code>.</p>
<hr>
<p><strong>Author</strong>: ParisNeo</p>
<p><strong>Contact</strong>: <a href="mailto:parisneoai@gmail.com">parisneoai@gmail.com</a></p>
<p><strong>Twitter</strong>: <a href="https://twitter.com/ParisNeo_AI" target="_blank">@ParisNeo_AI</a></p>
<p><strong>Discord</strong>: <a href="https://discord.gg/BDxacQmv" target="_blank">Join our Discord</a></p>
<p><strong>Sub-Reddit</strong>: <a href="https://www.reddit.com/r/lollms" target="_blank">r/lollms</a></p>
<p><strong>Instagram</strong>: <a href="https://www.instagram.com/spacenerduino/" target="_blank">spacenerduino</a></p>
<p>See ya!</p>
</div>
</body>
</html>
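The guide above describes the docs_zipper loop: split the document into chunks, summarize each chunk under the configured constraints, tie the partial summaries together, and repeat until the text fits the token budget. The following is a minimal sketch of that loop with hypothetical helpers and a toy whitespace tokenizer; in LoLLMs the summarization step is performed by the mounted model and enforces the title/author/method constraints.

# Minimal sketch of the iterative "summarize, tie together, repeat" loop described
# above. summarize() and count_tokens() are hypothetical stand-ins: in LoLLMs the
# summarization is done by the model mounted with the docs_zipper personality.
def contextual_compress(text, summarize, count_tokens, max_tokens, chunk_tokens=512):
    while count_tokens(text) > max_tokens:
        tokens = text.split()  # toy whitespace tokenizer, for illustration only
        chunks = [" ".join(tokens[i:i + chunk_tokens])
                  for i in range(0, len(tokens), chunk_tokens)]
        summaries = [summarize(chunk) for chunk in chunks]  # summarize each chunk
        text = "\n".join(summaries)                         # tie the summaries together
    return text

if __name__ == "__main__":
    toy_summarize = lambda chunk: " ".join(chunk.split()[:20])  # toy stand-in for the LLM call
    compressed = contextual_compress("word " * 5000, toy_summarize,
                                     lambda t: len(t.split()), max_tokens=100)
    print(len(compressed.split()))  # stops once the text fits the token budget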

@@ -1 +1 @@
-Subproject commit cc7269869af4289072ce625341d93d7c52be00b0
+Subproject commit 824e02fac852e023aa16ba5dab36b8f971fc1123


@@ -12,7 +12,7 @@ import time
 import subprocess
 import json
 from lollms.client_session import Client
-from lollms.utilities import discussion_path_2_url
+from lollms.utilities import discussion_path_to_url
 from pathlib import Path
 lollmsElfServer:LOLLMSWebUI = LOLLMSWebUI.get_instance()
@@ -106,7 +106,7 @@ def execute_graphviz(code, client:Client, message_id, build_file=False):
 tmp_file = root_folder/f"ai_code_{message_id}.html"
 with open(tmp_file,"w",encoding="utf8") as f:
 f.write(build_graphviz_output(code)["output"])
-link = f"{host}:{lollmsElfServer.config.port}/{discussion_path_2_url(tmp_file)}"
+link = f"{host}:{lollmsElfServer.config.port}/{discussion_path_to_url(tmp_file)}"
 execution_time = time.time() - start_time
 output_json = {"output": f'<b>Page built successfully</b><br><a href="{link}" target="_blank">Press here to view the page</a>', "execution_time": execution_time}
 return output_json
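This commit renames discussion_path_2_url to discussion_path_to_url; the same one-line change repeats in the html, javascript, latex, mermaid, and svg execution engines below. The helper's implementation is not part of this diff, so the sketch that follows is purely a hypothetical mental model (not the actual lollms.utilities code): it maps a file inside a served discussions folder to the relative URL path appended to host:port.

# Hypothetical sketch of what a discussion_path_to_url style helper could do.
# This is NOT the actual lollms.utilities implementation; it only illustrates the
# idea of mapping a file inside the discussions folder to a served URL path.
from pathlib import Path
from urllib.parse import quote

DISCUSSIONS_ROOT = Path.home() / "discussion_databases"  # assumed served root (illustrative)

def discussion_path_to_url_sketch(file_path) -> str:
    # Express the file relative to the served discussions root and URL-encode it.
    rel = Path(file_path).resolve().relative_to(DISCUSSIONS_ROOT.resolve())
    return "discussions/" + quote(rel.as_posix())

# Usage mirrors the diff above (host and port come from the server config):
# link = f"{host}:{port}/{discussion_path_to_url_sketch(tmp_file)}"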


@@ -12,7 +12,7 @@ import time
 import subprocess
 import json
 from lollms.client_session import Client
-from lollms.utilities import discussion_path_2_url
+from lollms.utilities import discussion_path_to_url
 lollmsElfServer:LOLLMSWebUI = LOLLMSWebUI.get_instance()
@@ -56,7 +56,7 @@ def execute_html(code, client:Client, message_id, build_file=False):
 tmp_file = root_folder/f"ai_code_{message_id}.html"
 with open(tmp_file,"w",encoding="utf8") as f:
 f.write(code)
-link = f"{host}:{lollmsElfServer.config.port}/{discussion_path_2_url(tmp_file)}"
+link = f"{host}:{lollmsElfServer.config.port}/{discussion_path_to_url(tmp_file)}"
 # Stop the timer.
 execution_time = time.time() - start_time
 output_json = {"output": f'<b>Page built successfully</b><br><a href="{link}" target="_blank">Press here to view the page</a>', "execution_time": execution_time}


@@ -12,7 +12,7 @@ import time
 import subprocess
 import json
 from lollms.client_session import Client
-from lollms.utilities import discussion_path_2_url
+from lollms.utilities import discussion_path_to_url
 lollmsElfServer:LOLLMSWebUI = LOLLMSWebUI.get_instance()
@@ -75,7 +75,7 @@ def execute_javascript(code, client:Client, message_id, build_file=False):
 tmp_file = root_folder/f"ai_code_{message_id}.html"
 with open(tmp_file,"w",encoding="utf8") as f:
 f.write(build_javascript_output(code)["output"])
-link = f"{host}:{lollmsElfServer.config.port}/{discussion_path_2_url(tmp_file)}"
+link = f"{host}:{lollmsElfServer.config.port}/{discussion_path_to_url(tmp_file)}"
 execution_time = time.time() - start_time
 output_json = {"output": f'<b>Page built successfully</b><br><a href="{link}" target="_blank">Press here to view the page</a>', "execution_time": execution_time}
 return output_json


@@ -24,7 +24,7 @@ import time
 import subprocess
 import json
 from lollms.client_session import Client
-from lollms.utilities import discussion_path_2_url
+from lollms.utilities import discussion_path_to_url
 lollmsElfServer:LOLLMSWebUI = LOLLMSWebUI.get_instance()
@@ -86,7 +86,7 @@ def execute_latex(code, client:Client, message_id):
 else:
 host = lollmsElfServer.config.host
-url = f"{host}:{lollmsElfServer.config.port}/{discussion_path_2_url(pdf_file)}"
+url = f"{host}:{lollmsElfServer.config.port}/{discussion_path_to_url(pdf_file)}"
 error_json = {"output": f"<div>Pdf file generated at: {pdf_file}\n<a href='{url}' target='_blank'>Click here to show</a></div><div>Output:{output.decode('utf-8', errors='ignore')}\n</div><div class='text-red-500'>"+error_message+"</div>", "execution_time": execution_time}
 else:
@@ -105,6 +105,6 @@ def execute_latex(code, client:Client, message_id):
 else:
 host = lollmsElfServer.config.host
-url = f"{host}:{lollmsElfServer.config.port}/{discussion_path_2_url(pdf_file)}"
+url = f"{host}:{lollmsElfServer.config.port}/{discussion_path_to_url(pdf_file)}"
 output_json = {"output": f"Pdf file generated at: {pdf_file}\n<a href='{url}' target='_blank'>Click here to show</a>", "execution_time": execution_time}
 return output_json


@@ -12,7 +12,7 @@ import time
 import subprocess
 import json
 from lollms.client_session import Client
-from lollms.utilities import discussion_path_2_url
+from lollms.utilities import discussion_path_to_url
 lollmsElfServer:LOLLMSWebUI = LOLLMSWebUI.get_instance()
@@ -139,7 +139,7 @@ def execute_mermaid(code, client:Client, message_id, build_file=False):
 tmp_file = root_folder/f"ai_code_{message_id}.html"
 with open(tmp_file,"w",encoding="utf8") as f:
 f.write(build_mermaid_output(code)["output"])
-link = f"{host}:{lollmsElfServer.config.port}/{discussion_path_2_url(tmp_file)}"
+link = f"{host}:{lollmsElfServer.config.port}/{discussion_path_to_url(tmp_file)}"
 # Stop the timer.
 execution_time = time.time() - start_time
 output_json = {"output": f'<b>Page built successfully</b><br><a href="{link}" target="_blank">Press here to view the page</a>', "execution_time": execution_time}


@@ -12,7 +12,7 @@ import time
 import subprocess
 import json
 from lollms.client_session import Client
-from lollms.utilities import discussion_path_2_url
+from lollms.utilities import discussion_path_to_url
 lollmsElfServer:LOLLMSWebUI = LOLLMSWebUI.get_instance()
@@ -139,7 +139,7 @@ def execute_svg(code, client:Client, message_id, build_file=False):
 tmp_file = root_folder/f"ai_svg_{message_id}.svg"
 with open(tmp_file,"w",encoding="utf8") as f:
 f.write(code)
-link = f"{host}:{lollmsElfServer.config.port}/{discussion_path_2_url(tmp_file)}"
+link = f"{host}:{lollmsElfServer.config.port}/{discussion_path_to_url(tmp_file)}"
 # Stop the timer.
 execution_time = time.time() - start_time
 output_json = {"output": f'<b>Page built successfully</b><br><a href="{link}" target="_blank">Press here to view the page</a>', "execution_time": execution_time}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

web/dist/index.html (vendored)

@@ -6,8 +6,8 @@
 <meta name="viewport" content="width=device-width, initial-scale=1.0">
 <title>LoLLMS WebUI - Welcome</title>
-<script type="module" crossorigin src="/assets/index-07528ce6.js"></script>
-<link rel="stylesheet" href="/assets/index-23969ee7.css">
+<script type="module" crossorigin src="/assets/index-7439e9f8.js"></script>
+<link rel="stylesheet" href="/assets/index-1118f82b.css">
 </head>
 <body>
 <div id="app"></div>


@@ -1,5 +1,5 @@
 <template>
-<div class="w-full h-full overflow-y-auto scrollbar-thin scrollbar-track-bg-light-tone scrollbar-thumb-bg-light-tone-panel hover:scrollbar-thumb-primary dark:scrollbar-track-bg-dark-tone dark:scrollbar-thumb-bg-dark-tone-panel dark:hover:scrollbar-thumb-primary active:scrollbar-thumb-secondary" v-html="evaluatedCode" :key="componentKey">
+<div :id="`ui_${componentKey}`" class="w-full h-full overflow-y-auto scrollbar-thin scrollbar-track-bg-light-tone scrollbar-thumb-bg-light-tone-panel hover:scrollbar-thumb-primary dark:scrollbar-track-bg-dark-tone dark:scrollbar-thumb-bg-dark-tone-panel dark:hover:scrollbar-thumb-primary active:scrollbar-thumb-secondary" v-html="evaluatedCode" :key="componentKey">
 </div>
 </template>


@@ -966,8 +966,9 @@
 @change="settingsChanged=true"
 class="w-full mt-1 px-2 py-1 border border-gray-300 rounded dark:bg-gray-600"
 >
-<button @click="select_folder(index)" class="ml-2 px-2 py-1 bg-blue-500 text-white rounded">Select Folder</button>
-<button @click="removeDataSource(index)" class="ml-2 px-2 py-1 bg-red-500 text-white rounded">Remove</button>
+<button @click="vectorize_folder(index)" class="w-500 ml-2 px-2 py-1 bg-green-500 text-white hover:bg-green-300 rounded">(Re)Vectorize</button>
+<button @click="select_folder(index)" class="w-500 ml-2 px-2 py-1 bg-blue-500 text-white hover:bg-green-300 rounded">Select Folder</button>
+<button @click="removeDataSource(index)" class="ml-2 px-2 py-1 bg-red-500 text-white hover:bg-green-300 rounded">Remove</button>
 </div>
 <button @click="addDataSource" class="mt-2 px-2 py-1 bg-blue-500 text-white rounded">Add Data Source</button>
 </td>
@@ -990,6 +991,24 @@
 </select>
 </td>
 </tr>
+<tr>
+<td style="min-width: 200px;">
+<label for="rag_vectorizer_model" class="text-sm font-bold" style="margin-right: 1rem;">RAG Vectorizer model:</label>
+</td>
+<td>
+<select
+id="rag_vectorizer_model"
+required
+v-model="configFile.rag_vectorizer_model"
+@change="settingsChanged=true"
+class="w-full mt-1 px-2 py-1 border border-gray-300 rounded dark:bg-gray-600"
+>
+<option value="bert-large-uncased">bert-large-uncased</option>
+<option value="bert-base-uncased">bert-base-uncased</option>
+<option value="word2vec">Word2Vec Vectorizer</option>
+</select>
+</td>
+</tr>
 <tr>
 <td style="min-width: 200px;">
 <label for="rag_chunk_size" class="text-sm font-bold" style="margin-right: 1rem;">RAG chunk size:</label>
@@ -1024,6 +1043,55 @@
 >
 </td>
 </tr>
+<tr>
+<td style="min-width: 200px;">
+<label for="rag_clean_chunks" class="text-sm font-bold" style="margin-right: 1rem;">Clean chunks:</label>
+</td>
+<td>
+<input v-model="configFile.rag_clean_chunks"
+type="checkbox"
+@change="settingsChanged=true"
+class="w-5 mt-1 px-2 py-1 border border-gray-300 rounded dark:bg-gray-600"
+>
+</td>
+</tr>
+<tr>
+<td style="min-width: 200px;">
+<label for="rag_follow_subfolders" class="text-sm font-bold" style="margin-right: 1rem;">Follow subfolders:</label>
+</td>
+<td>
+<input v-model="configFile.rag_follow_subfolders"
+type="checkbox"
+@change="settingsChanged=true"
+class="w-5 mt-1 px-2 py-1 border border-gray-300 rounded dark:bg-gray-600"
+>
+</td>
+</tr>
+<tr>
+<td style="min-width: 200px;">
+<label for="rag_check_new_files_at_startup" class="text-sm font-bold" style="margin-right: 1rem;">Check for new files at startup:</label>
+</td>
+<td>
+<input v-model="configFile.rag_check_new_files_at_startup"
+type="checkbox"
+@change="settingsChanged=true"
+class="w-5 mt-1 px-2 py-1 border border-gray-300 rounded dark:bg-gray-600"
+>
+</td>
+</tr>
+<tr>
+<td style="min-width: 200px;">
+<label for="rag_preprocess_chunks" class="text-sm font-bold" style="margin-right: 1rem;">Preprocess chunks:</label>
+</td>
+<td>
+<input v-model="configFile.rag_preprocess_chunks"
+type="checkbox"
+@change="settingsChanged=true"
+class="w-5 mt-1 px-2 py-1 border border-gray-300 rounded dark:bg-gray-600"
+>
+</td>
+</tr>
 </table>
 </Card>
 <Card title="Data Vectorization" :is_subcard="true" class="pb-2 m-2">
@@ -4000,6 +4068,9 @@ export default {
 this.$store.state.config.rag_databases.splice(index, 1);
 this.settingsChanged = true;
 },
+async vectorize_folder(index){
+await axios.post('/vectorize_folder', {client_id:this.$store.state.client_id, db_path:this.$store.state.config.rag_databases[index]}, this.posts_headers)
+},
 async select_folder(index){
 try{
 socket.on("rag_db_added", (infos)=>{
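The new vectorize_folder handler above posts the client id and the selected database path to a /vectorize_folder endpoint. The same request can be issued outside the UI; in the sketch below the host, port, and client_id value are placeholders (assumptions), while the endpoint path and payload keys come from the diff itself.

# Sketch of the request issued by the new (Re)Vectorize button.
# Assumptions: http://localhost:9600 and the client_id value are placeholders;
# only the endpoint path and the payload keys appear in the diff above.
import requests

payload = {
    "client_id": "my_client_id",            # placeholder client id
    "db_path": "/path/to/my/rag_database",  # one entry of rag_databases
}
response = requests.post("http://localhost:9600/vectorize_folder", json=payload)
print(response.status_code)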

@@ -1 +1 @@
-Subproject commit 05542e74a87daa356e0dd6bd187e5e804b5f56c5
+Subproject commit b4ef274f5f3f6dbe44c1d4d2f76749e2a1892c9c

@@ -1 +1 @@
-Subproject commit 9fb265fc97add24810c75b7e33a6610501bc9fae
+Subproject commit 381c2a90bcf6d02100448a7d91c66222b8cc68a8

@@ -1 +1 @@
-Subproject commit 332f328a0920c01c421c198cc4b9b9cc73560cbc
+Subproject commit 22276d142e3928ed1aeb0f2be290618e3d750c72